\begin{document} \title{On generalized Howell designs with block size three} \footnotetext[1]{School of Mathematics and Statistics, University of New South Wales, Sydney, NSW 2052, Australia. Email: \texttt{[email protected]}} \footnotetext[2]{Division of Science (Mathematics), Grenfell Campus, Memorial University of Newfoundland, Corner Brook, NL A2H 6P9, Canada. Email: \texttt{[email protected]}} \footnotetext[3]{Department of Mathematical Sciences, University of New Brunswick, 100 Tucker Park Rd., Saint John, NB E2L 4L5, Canada. Email: \texttt{[email protected]}} \footnotetext[4]{Department of Mathematics, Ryerson University, 350 Victoria St., Toronto, ON M5B 2K3, Canada. Email: \texttt{[email protected], [email protected]}} \footnotetext[5]{Supported by Vice-President (Grenfell Campus) Research Fund, Memorial University of Newfoundland.} \footnotetext[6]{Supported by an NSERC Discovery Grant.} \renewcommand{\thefootnote}{\arabic{footnote}} \begin{abstract} In this paper, we examine a class of doubly resolvable combinatorial objects. Let $t, k, \lambda, s$ and $v$ be nonnegative integers, and let $X$ be a set of $v$ symbols. A {\em generalized Howell design}, denoted $t$-$\mathrm{GHD}_{k}(s,v;\lambda)$, is an $s\times s$ array, each cell of which is either empty or contains a $k$-set of symbols from $X$, called a {\em block}, such that: (i) each symbol appears exactly once in each row and in each column (i.e.\ each row and column is a resolution of $X$); (ii) no $t$-subset of elements from $X$ appears in more than $\lambda$ cells. Particular instances of the parameters correspond to Howell designs, doubly resolvable balanced incomplete block designs (including Kirkman squares), doubly resolvable nearly Kirkman triple systems, and simple orthogonal multi-arrays (which themselves generalize mutually orthogonal Latin squares). Generalized Howell designs also have connections with permutation arrays and multiply constant-weight codes.
In this paper, we concentrate on the case that $t=2$, $k=3$ and $\lambda=1$, and write $\mathrm{GHD}(s,v)$. In this case, the number of empty cells in each row and column falls between 0 and $(s-1)/3$. Previous work has considered the existence of GHDs on either end of the spectrum, with at most 1 or at least $(s-2)/3$ empty cells in each row or column. In the case of one empty cell, we correct some results of Wang and Du, and show that there exists a $\mathrm{GHD}(n+1,3n)$ if and only if $n \geq 6$, except possibly for $n=6$. In the case of two empty cells, we show that there exists a $\mathrm{GHD}(n+2,3n)$ if and only if $n \geq 6$. Noting that the proportion of cells in a given row or column of a $\mathrm{GHD}(s,v)$ which are empty falls in the interval $[0,1/3)$, we prove that for any $\pi \in [0,5/18]$, there is a $\mathrm{GHD}(s,v)$ whose proportion of empty cells in a row or column is arbitrarily close to $\pi$. \end{abstract} {\bf Keywords:} Generalized Howell designs, triple systems, doubly resolvable designs. {\bf MSC2010:} Primary: 05B07, 05B15; Secondary: 05B40, 94B25 \section{Introduction} Combinatorial designs on square arrays have been the subject of much attention, with mutually orthogonal Latin squares being the most natural example. Block designs with two orthogonal resolutions can also be thought of in this way, with the rows and columns of the array labelled by the resolution classes. In this paper, we consider generalized Howell designs, which are objects that in some sense fill in the gap between these two cases. We refer the reader to \cite{Handbook} for background on these objects and design theory in general. \subsection{Definition and examples} \label{DefnSection} In this paper, we examine a class of doubly resolvable designs, defined below, which generalize a number of well-known objects. \begin{definition} Let $t, k, \lambda, s$ and $v$ be nonnegative integers, and let $X$ be a set of $v$ symbols. 
A {\em generalized Howell design}, denoted $t$-$\mathrm{GHD}_{k}(s,v;\lambda)$, is an $s\times s$ array, each cell of which is either empty or contains a $k$-set of symbols from $X$, called a {\em block}, such that: \begin{enumerate} \item \label{latin} each symbol appears exactly once in each row and in each column (i.e.\ each row and column is a resolution of $X$); \item no $t$-subset of elements from $X$ appears in more than $\lambda$ cells. \end{enumerate} \end{definition} In the case that $t=2$, $k=2$ and $\lambda=1$, a $2$-$\mathrm{GHD}_2(s,v;1)$ is known as a {\em Howell design} $\mathrm{H}(s,v)$. In the literature, two different generalizations of Howell designs have been proposed, both of which can be incorporated into the definition above: these are due to Deza and Vanstone~\cite{DezaVanstone} (which corresponds to the case that $t=2$) and to Rosa~\cite{Rosa} (which corresponds to the case that $t=k$). The objects in question have appeared under several names in the literature. The term {\em generalized Howell design} appears in both~\cite{DezaVanstone} and~\cite{Rosa} in reference to the particular cases studied in these papers, and also in papers such as~\cite{WangDu}. The term {\em generalized Kirkman square} or GKS has more recently been introduced to the literature (see~\cite{DuAbelWang,Etzion}); in particular, in these papers, a $\mathrm{GKS}_k(v;1,\lambda;s)$ is defined to be what we have defined as a $2$-$\mathrm{GHD}_k(s,v;\lambda)$. In this paper we will concentrate on the case when $t=2$, $k=3$ and $\lambda=1$, in which case we omit these parameters in the notation and simply write $\mathrm{GHD}(s,v)$, or $\mathrm{GHD}_k(s,v)$ for more general $k$. Two obvious necessary conditions for the existence of a non-trivial $2$-$\mathrm{GHD}_k(s,v;\lambda)$ are that $v\equiv 0 \pmod k$ and that $\frac{v}{k}\leq s \leq \frac{\lambda(v-1)}{k-1}$. In particular, when $k=3$, $t=2$ and $\lambda=1$, we have that $\frac{v}{3}\leq s \leq \frac{v-1}{2}$. 
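To see where these bounds come from in the case $k=3$, $\lambda=1$: each row consists of $v/3$ pairwise disjoint triples occupying $v/3$ of its $s$ cells, giving the lower bound, while each symbol lies in exactly $s$ blocks (one per row) and is thereby paired with $2s$ distinct other symbols, giving the upper bound. For instance, with $v=18$:

```latex
% Bounds on the side s of a GHD(s,18), i.e. k=3, t=2, lambda=1:
\[
  \frac{v}{3} \;\le\; s \;\le\; \frac{v-1}{2}
  \qquad\Longrightarrow\qquad
  6 \;\le\; s \;\le\; \frac{17}{2},
  \quad\text{i.e.}\quad 6 \le s \le 8.
\]
```

Thus an $18$-point GHD has side between $6$ (a $\mathrm{SOMA}(3,6)$) and $8$ (a $\mathrm{DRNKTS}(18)$); both extremes appear later in this section.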
Since a $t$-$\mathrm{GHD}_k(s,v;\lambda)$ contains exactly $n=\frac{v}{k}$ non-empty cells in each row and column, it can be helpful to write $t$-$\mathrm{GHD}_k(n+e, kn;\lambda)$ (or $\mathrm{GHD}(n+e, 3n)$ in the case $k=3$, $t=2$ and $\lambda=1$), where $e$ is then the number of empty cells in each row and column. A Howell design with $v=s+1$, i.e.\ an $\mathrm{H}(s,s+1)$, is called a {\em Room square}. The study of Room squares goes back to the original work of Kirkman \cite{Kirkman} in 1850, where he presents a Room square with side length~$7$, i.e.\ an $\mathrm{H}(7,8)$. The name of this object, however, is in reference to T.~G.~Room~\cite{Room}, who also constructed an $\mathrm{H}(7,8)$, and in addition showed that there is no $\mathrm{H}(3,4)$ or $\mathrm{H}(5,6)$. The existence of Room squares was settled in 1975 by Mullin and Wallis \cite{Mullin}; for a survey on Room squares, see \cite{blue book}. \begin{theorem}[Mullin and Wallis \cite{Mullin}] \label{Room squares} There exists a Room square of side $s$ if and only if $s$ is odd and either $s=1$ or $s \geq 7$. \end{theorem} More generally, Stinson \cite{Stinson} showed the existence of Howell designs with odd side $s$, and Anderson, Schellenberg and Stinson \cite{ASS} showed the existence of Howell designs with even side $s$. We thus have the following. \begin{theorem}[Stinson \cite{Stinson}; Anderson, Schellenberg and Stinson \cite{ASS}] \label{HD} There exists an $\mathrm{H}(s,2n)$ if and only if $n=0$ or $n \leq s \leq 2n-1$, and $(s,2n)\not\in \{(2,4), (3,4), (5,6), (5,8)\}$. \end{theorem} A $2$-$\mathrm{GHD}_k(s,v;\lambda)$ with $s=\frac{\lambda(v-1)}{k-1}$ is equivalent to a {\em doubly resolvable balanced incomplete block design} $\mathrm{BIBD}(v,k,\lambda)$. Doubly resolvable designs and related objects have been studied, for example, in~\cite{OCD, Curran Vanstone, Fuji-Hara Vanstone, Fuji-Hara Vanstone TD, Vanstone AG, Lamken 09, Vanstone 80, Vanstone 82}.
In particular, Fuji-Hara and Vanstone investigated orthogonal resolutions in affine geometries, showing the existence of a doubly resolvable $\mathrm{BIBD}(q^n,q,1)$ for prime powers $q$ and integers $n \geq 3$. Asymptotic existence of doubly resolvable $\mathrm{BIBD}(v,k,1)$ was shown by Rosa and Vanstone~\cite{Rosa Vanstone} for $k=3$ and by Lamken~\cite{Lamken 09} for general $k$. For $t=2$, $k=3$ and $\lambda=1$, a $\mathrm{GHD}(s,v)$ with $s = \lfloor\frac{v-1}{2}\rfloor$ corresponds to a {\em Kirkman square}, $\mathrm{KS}(v)$, (i.e.\ a doubly resolvable Kirkman triple system of order $v$) when $v \equiv 3 \pmod 6$ and a {\em doubly resolvable nearly Kirkman triple system}, $\mathrm{DRNKTS}(v)$, when $v\equiv 0 \pmod 6$. A Kirkman square of order $3$ is trivial to construct. Mathon and Vanstone~\cite{Mathon Vanstone} showed the non-existence of a $\mathrm{KS}(9)$ or $\mathrm{KS}(15)$, while the non-existence of a $\mathrm{DRNKTS}(6)$ and $\mathrm{DRNKTS}(12)$ follows from Kotzig and Rosa~\cite{KotzigRosa74}. For many years, the smallest known example of a $\mathrm{GHD}(s,v)$ with $s = \lfloor\frac{v-1}{2}\rfloor$ (other than the trivial case of $s=1$, $v=3$) was a $\mathrm{DRNKTS}(24)$, found by Smith in 1977~\cite{Smith77}. However, the smallest possible example of such a GHD with $v\equiv 0 \pmod 6$ is for $v=18$ and $s=8$; these were recently obtained and classified up to isomorphism in~\cite{Finland}, from which the following example is taken. \begin{example} \label{FinlandExample} The following is a $\mathrm{GHD}(8,18)$, or equivalently a $\mathrm{DRNKTS}(18)$. 
\renewcommand{\arraystretch}{1.25} \[ \begin{array}{|c|c|c|c|c|c|c|c|} \hline & & agh & bkm & ejp & fir & clq & dno \\ \hline & & bln & aij & cho & dgq & ekr & fmp \\ \hline acd & boq & & & gmr & hkp & fjn & eil \\ \hline bpr & aef & & & inq & jlo & dhm & cgk \\ \hline fgl & dik & emq & cnr & & & aop & bhj \\ \hline ehn & cjm & fko & dlp & & & bgi & aqr \\ \hline imo & gnp & djr & fhq & akl & bce & & \\ \hline jkq & hlr & cip & ego & bdf & amn & & \\ \hline \end{array} \] \renewcommand{\arraystretch}{1.0} \end{example} The existence of Kirkman squares was settled by Colbourn, Lamken, Ling and Mills in \cite{CLLM}, with 23 possible exceptions, 11 of which were solved in \cite{ALW} and one ($v=351$) in~\cite{ACCLWW}. Abel, Chan, Colbourn, Lamken, Wang and Wang \cite{ACCLWW} have determined the existence of doubly resolvable nearly Kirkman triple systems, with 64 possible exceptions. We thus have the following result. \begin{theorem}[\cite{ACCLWW,ALW,CLLM}] \label{Kirkman squares} Let $v \equiv 0 \pmod{3}$ be a positive integer. Then a $\mathrm{GHD}(\lfloor\frac{v-1}{2}\rfloor,v)$ exists if and only if either $v=3$ or $v\geq 18$, with 75 possible exceptions. The symbol $N$ will be used to denote this set of possible exceptions throughout the paper. \end{theorem} Recently, Du, Abel and Wang~\cite{DuAbelWang} have proved the existence of $\mathrm{GHD}(\frac{v-4}{2},v)$ for $v \equiv 0 \pmod{12}$ with at most 15 possible exceptions and $\mathrm{GHD}(\frac{v-6}{2},v)$ for $v \equiv 0 \pmod{6}$ and $v>18$ with at most 31 possible exceptions. Two of these possible exceptions ($\mathrm{GHD}(9,24)$ and $\mathrm{GHD}(12,30)$) will be addressed later in this paper. At the other end of the spectrum, the case when $s=n$, $v=kn$, $t=2$ and $\lambda=1$ is known as a $\mathrm{SOMA}(k,n)$. SOMAs, or {\em simple orthogonal multi-arrays}, were introduced by Phillips and Wallis~\cite{PhillipsWallis} and have been investigated by Soicher~\cite{Soicher} and Arhin~\cite{ArhinThesis,Arhin}.
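The defining conditions of a GHD are easy to check mechanically. The following Python sketch (our own verification aid, with the array transcribed from Example~\ref{FinlandExample}) confirms that each row and column of that $\mathrm{GHD}(8,18)$ resolves the $18$ symbols $a,\ldots,r$, and that no pair of symbols occurs in two blocks:

```python
from itertools import combinations

# The GHD(8,18) of Example \ref{FinlandExample}; one string per block,
# "" marking an empty cell.  The 18 symbols are the letters a..r.
rows = [
    ["",    "",    "agh", "bkm", "ejp", "fir", "clq", "dno"],
    ["",    "",    "bln", "aij", "cho", "dgq", "ekr", "fmp"],
    ["acd", "boq", "",    "",    "gmr", "hkp", "fjn", "eil"],
    ["bpr", "aef", "",    "",    "inq", "jlo", "dhm", "cgk"],
    ["fgl", "dik", "emq", "cnr", "",    "",    "aop", "bhj"],
    ["ehn", "cjm", "fko", "dlp", "",    "",    "bgi", "aqr"],
    ["imo", "gnp", "djr", "fhq", "akl", "bce", "",    ""],
    ["jkq", "hlr", "cip", "ego", "bdf", "amn", "",    ""],
]

symbols = set("abcdefghijklmnopqr")

# Condition (i): every row and every column is a resolution of X.
for line in rows + [list(col) for col in zip(*rows)]:
    assert sorted("".join(line)) == sorted(symbols)

# Condition (ii): no pair of symbols appears in more than one block.
pairs = [frozenset(p) for row in rows for cell in row if cell
         for p in combinations(cell, 2)]
assert len(pairs) == len(set(pairs))  # 48 blocks, 144 distinct pairs
```

Since each symbol lies in $8$ blocks, $18\cdot 16/2 = 144$ of the $\binom{18}{2}=153$ pairs are covered; each symbol misses exactly one partner.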
We note that the existence of a $2$-$\mathrm{GHD}_{k}(n,kn;1)$ is guaranteed by the existence of $k$ mutually orthogonal Latin squares (MOLS) of side $n$. (For more information on Latin squares and MOLS, see~\cite[Part III]{Handbook}.) It is well known that there exist 3 MOLS of side $n$ for every $n\neq 2,3,6,10$. Interestingly, even though the corresponding set of 3 MOLS is not known to exist (and is known not to for $n=2,3,6$), the existence of a $\mathrm{GHD}(6, 18)$ and a $\mathrm{GHD}(10, 30)$ has been shown by Brickell~\cite{brickell} and Soicher~\cite{Soicher}, respectively. A $\mathrm{GHD}(1,3)$ exists trivially, but it is easily seen that the existence of a $\mathrm{GHD}(2,6)$ or $\mathrm{GHD}(3,9)$ is impossible. \begin{theorem} \label{618} \label{Latin} There exists a $\mathrm{GHD}(n,3n)$ if and only if $n =1$ or $n \geq 4$. \end{theorem} \subsection{GHDs, permutation arrays and codes} \label{PA-code} In~\cite{DezaVanstone}, Deza and Vanstone noted that $2$-$\mathrm{GHD}_k(s,v;\lambda)$s are equivalent to a particular type of permutation array. \begin{definition} A {\em permutation array} $\mathrm{PA}(s,\lambda;v)$ on a set $S$ of $s$ symbols is a $v \times s$ array such that each row is a permutation of $S$ and any two rows agree in at most $\lambda$ columns. A permutation array in which each pair of rows agrees in exactly $\lambda$ columns is called {\em equidistant}. If, in any given column, each symbol appears either $0$ or $k$ times, then a permutation array is said to be {\em $k$-uniform}. \end{definition} The rows of a permutation array form an error-correcting code with minimum distance $s-\lambda$. Codes formed in this manner, called {\em permutation codes}, have more recently attracted attention due to their applications to powerline communications; see~\cite{ChuColbournDukes, ColbournKloveLing, Huczynska}. In~\cite{DezaVanstone}, it is noted that a $2$-$\mathrm{GHD}_k(s,v;\lambda)$ is equivalent to a $k$-uniform $\mathrm{PA}(s,\lambda;v)$.
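The MOLS construction mentioned above can be carried out explicitly for prime side $n$: the squares $L_c(i,j) = i + cj \pmod n$ for $c=1,2,3$ are pairwise orthogonal, and superimposing them on disjoint symbol sets yields a $\mathrm{GHD}(n,3n)$ with no empty cells. A minimal sketch for $n=5$ (the symbol labels $x$, $y$, $z$ are ours):

```python
from itertools import combinations

n = 5  # any prime n >= 5 works: L_c(i,j) = i + c*j (mod n), c = 1,2,3,
       # are pairwise orthogonal Latin squares of side n.
ghd = [[{f"x{(i + 1*j) % n}", f"y{(i + 2*j) % n}", f"z{(i + 3*j) % n}"}
        for j in range(n)] for i in range(n)]

symbols = {f"{s}{a}" for s in "xyz" for a in range(n)}

# Each row and column is a resolution of the 3n symbols.
for line in ghd + [list(col) for col in zip(*ghd)]:
    assert set().union(*line) == symbols

# By orthogonality, no pair of symbols occurs in two cells.
pairs = [frozenset(p) for row in ghd for cell in row
         for p in combinations(sorted(cell), 2)]
assert len(pairs) == len(set(pairs))
```

Each cell holds one symbol from each class, so pairs within a class never occur; pairs across classes occur exactly once because the two coordinate equations have a unique solution mod $n$.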
In the case that $s=\frac{\lambda(v-1)}{k-1}$, so that the $\mathrm{GHD}$ is a doubly resolvable BIBD, the permutation array is equidistant. There are also known connections between GHDs and other classes of codes. A {\em doubly constant weight code} is a binary constant-weight code with length $n=n_1+n_2$ and weight $w=w_1+w_2$, where each codeword has $w_1$ 1s in the first $n_1$ positions and $w_2$ 1s in the final $n_2$ positions. Such a code with minimum distance $d$ is referred to as a $(w_1,n_1,w_2,n_2,d)$-code. In~\cite{Etzion}, Etzion describes connections between certain classes of designs and optimal doubly constant weight codes. In particular, a $2$-$\mathrm{GHD}_k(s,v;1)$ gives a $(2,2s,k,v,2k+2)$-code. \subsection{GHDs as generalized packing designs} The notion of {\em generalized $t$-designs} introduced by Cameron in 2009 \cite{Cameron09}, and the broader notion of {\em generalized packing designs} introduced by Bailey and Burgess \cite{packings}, provide a common framework for studying many classes of combinatorial designs. As described below, generalized Howell designs fit neatly into this framework. Recently, Chee, Kiah, Zhang and Zhang~\cite{CKZZ} noted that generalized packing designs are equivalent to {\em multiply constant-weight codes}, introduced in~\cite{Chee2}, which themselves generalize doubly constant-weight codes as discussed in Section~\ref{PA-code}. Suppose that $v,k,t,\lambda,m$ are integers where $v \geq k \geq t \geq 1$, $\lambda \geq 1$ and $m\geq 1$. Let $\mathbf{v} = (v_1,v_2,\ldots,v_m)$ and $\mathbf{k}=(k_1,k_2,\ldots,k_m)$ be $m$-tuples of positive integers with sum $v$ and $k$ respectively, where $k_i\leq v_i$ for all $i$, and let $\mathbf{X} = (X_1,X_2,\ldots,X_m)$ be an $m$-tuple of pairwise disjoint sets, where $|X_i|=v_i$. We say that an $m$-tuple of {\em non-negative} integers $\mathbf{t}=(t_1,t_2,\ldots,t_m)$ is {\em admissible} if $t_i\leq k_i$ for all $i$ and the entries $t_i$ sum to $t$. 
Now we define a {\em $t$-$(\mathbf{v},\mathbf{k},\lambda)$ generalized packing design} to be a collection $\mathcal{P}$ of $m$-tuples of sets $(B_1,B_2,\ldots,B_m)$, where $B_i\subseteq X_i$ and $|B_i|=k_i$ for all $i$, with the property that for any admissible $\mathbf{t}$, any $m$-tuple of sets $(T_1,T_2,\ldots,T_m)$ (where $T_i\subseteq X_i$ and $|T_i|=t_i$ for all $i$) is contained in at most $\lambda$ members of $\mathcal{P}$. We say a generalized packing is {\em optimal} if it contains the largest possible number of blocks. (See~\cite{packings} for further details, and for numerous examples.) The connection with GHDs is the following: any $2$-$\mathrm{GHD}_k(s,n)$ forms an optimal $2$-$(\mathbf{v},\mathbf{k},1)$ generalized packing, where $\mathbf{v}=(n,s,s)$ and $\mathbf{k}=(k,1,1)$. The ``point set'' of the generalized packing is formed from the points of the GHD, together with the row labels and the column labels. Since $t=2$, the only possible admissible triples $\mathbf{t}$ are $(2,0,0)$, $(1,1,0)$, $(1,0,1)$ and $(0,1,1)$. The first of these tells us that no pair of points may occur in more than one block of the GHD; the second and third tell us that no point may be repeated in any row or any column; the last tells us that any entry of the $s\times s$ array may contain only one block of the GHD. In fact, GHDs may be used to obtain $2$-$(\mathbf{v},\mathbf{k},1)$ generalized packings for $\mathbf{k}=(k,1,1)$ more generally, for other choices of $\mathbf{v}$. If $\mathbf{v}=(n,r,s)$, we may obtain a rectangular array by (for instance) deleting rows or adding empty columns. Depending on the value of $n$ relative to $r,s$, we may need to delete points and/or blocks. This idea is discussed further in~\cite[Section 3.5]{packings}. \section{Terminology} \subsection{Designs and resolutions} In this section, we discuss various useful classes of combinatorial designs. For more information on these and related objects, see~\cite{Handbook}.
We say that $(X, \cB)$ is a {\em pairwise balanced design} PBD$(v, K,\lambda)$ if $X$ is a set of $v$ elements and $\cB$ is a collection of subsets of $X$ with sizes from $K$, called {\em blocks}, which between them contain every pair of elements of $X$ exactly $\lambda$ times. If $K=\{k\}$, then a $\mathrm{PBD}(v,\{k\},\lambda)$ is referred to as a {\em balanced incomplete block design}, $\mathrm{BIBD}(v,k,\lambda)$. A collection of blocks that between them contain each point of $X$ exactly once is called a {\em resolution class} of $X$. If $\cB$ can be partitioned into resolution classes we say that the design is {\em resolvable}, and refer to the partition as a {\em resolution}. It is possible that a design may admit two resolutions, $\cR$ and $\cS$. If $|R_i\cap S_j| \leq 1$, for every resolution class $R_i\in \cR$ and $S_j\in \cS$, we say that these resolutions are {\em orthogonal}. A design admitting a pair of orthogonal resolutions is called {\em doubly resolvable}. In the definition of a PBD or BIBD, if we relax the constraint that every pair appears exactly once to that every pair appears at most once, then we have a {\em packing} design. Thus, a $2$-$\mathrm{GHD}_k(s,v;\lambda)$ may be viewed as a doubly resolvable packing design. A {\em group-divisible design}, $(k,\lambda)$-GDD of type $g^u$, is a triple $(X,\cG, \cB)$, where $X$ is a set of $gu$ points, $\cG$ is a partition of $X$ into $u$ subsets, called {\em groups}, of size $g$, and $\cB$ is a collection of subsets, called {\em blocks}, of size $k$, with the properties that no pair of points in the same group appears together in a block, and each pair of points in different groups occurs in exactly one block. A $(k,1)$-GDD of type $n^k$ is called a {\em transversal design}, and denoted $\mathrm{T}(k,n)$. Note that a transversal design can be interpreted as a packing which misses the pairs contained in the groups.
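As a small illustration (our own, not drawn from the literature), a $\mathrm{T}(3,2)$, i.e.\ a $(3,1)$-GDD of type $2^3$, can be written out explicitly:

```latex
% A (3,1)-GDD of type 2^3, i.e. a T(3,2), on groups
% G_1 = \{1,2\}, G_2 = \{3,4\}, G_3 = \{5,6\}:
\[
  \mathcal{B} \;=\; \bigl\{ \{1,3,5\},\ \{1,4,6\},\ \{2,3,6\},\ \{2,4,5\} \bigr\}.
\]
% Each of the 12 cross-group pairs occurs in exactly one block,
% and no block contains two points from the same group.
```

Adding the three groups as blocks turns this into a $\mathrm{PBD}(6,\{2,3\})$, in the manner described below.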
Alternatively, adding the groups as blocks, we may form from a transversal design a $\mathrm{PBD}(kn,\{k,n\})$. PBDs formed in this way will often be used as ingredients in the construction of GHDs in later sections. A resolvable transversal design with block size $k$ and group size $n$ is denoted $\mathrm{RT}(k,n)$. It is well known that a set of $k$ MOLS of side $n$ is equivalent to an $\mathrm{RT}(k+1,n)$. \subsection{Subdesigns and holes} The word {\em hole} is used with different meanings in different areas of design theory. For example, a hole in a BIBD refers to a set $H$ of points such that no pair of elements of $H$ appears in the blocks, while a hole in a Latin square means an empty subsquare, so that certain rows and columns do not contain all symbols. In the case of GHDs, three types of ``hole'' may occur, and each will be of importance later in the paper. We thus develop appropriate terminology to differentiate the uses of the term ``hole''. First, a {\em pairwise hole} in a GHD is a set of points $H \subseteq X$ with the property that no pair of points of $H$ appear in a block of the GHD. Thus, a pairwise hole corresponds with the notion of a hole in a BIBD. \begin{definition} A {\em $\mathrm{GHD}_k^*(s,v)$}, $\cal{G}$, is a $2$-$\mathrm{GHD}_k(s,v;1)$ with a pairwise hole of size $v-s(k-1)$. Thus the symbol set $X$ of a $\mathrm{GHD}_k^*(s,v)$ can be written as \[ X=Y \cup (S \times \{1,2,\ldots,k-1\}), \] where $|S|=s$, $|Y|=v-s(k-1)$, and $Y$ is a pairwise hole of $\cal{G}$. \end{definition} Note that this definition extends in a natural way to higher $\lambda$; however, for our purposes it will be enough to only consider the case that $\lambda=1$. Also note that in the case that $k=2$, a $\mathrm{GHD}_2^*(s,v)$ is precisely the $\mathrm{H}^*(s,v)$ described by Stinson~\cite{Stinson}. We will refer to any $\mathrm{GHD}_k^*(s,v)$ as having the {\em $*$-property}. 
As our primary focus is on the case that $k=3$, we will omit the subscript $k$ in this case. Note that trivially any $\mathrm{GHD}(s,2s+1)$ (i.e.\ Kirkman square) has the $*$-property. It is also clear that any $\mathrm{GHD}(s,2s+2)$ (i.e.\ DRNKTS) has the \mbox{$*$-property,} as the values of the parameters force there to be an unused pair of points. In the case of a $\mathrm{GHD}(s,3s)$ formed by superimposing three $\mathrm{MOLS}(s)$, the unused pairs of points are those which occur within the three groups of size $s$ in the corresponding transversal design; one of these groups may be taken as $Y$, so any $\mathrm{GHD}(s,3s)$ formed in this manner is a $\mathrm{GHD}^*(s,3s)$. In addition, the existence of a decomposable $\mathrm{SOMA}(3,6)$~\cite{PhillipsWallis} and $\mathrm{SOMA}(3,10)$~\cite{Soicher} yields the existence of $\mathrm{GHD}^*(6,18)$ and $\mathrm{GHD}^*(10,30)$. Thus, we have the following. \begin{theorem} \label{GHD*} \begin{enumerate} \item[(i)] There exists a $\mathrm{GHD}^*\left(\left\lfloor \frac{v-1}{2} \right\rfloor,v \right)$ for any $v\equiv 0 \pmod 3$, whenever $v=3$ or $v \geq 18$ and $v \notin N$. \item[(ii)] There exists a $\mathrm{GHD}^*(n,3n)$ if and only if $n \notin \{2,3\}$. \end{enumerate} \end{theorem} The second type of hole in a GHD is an {\em array hole}. To define it, we need the concepts of trivial GHDs and subdesigns of GHDs. A $t$-$\mathrm{GHD}_k(s,0;\lambda)$ is called {\em trivial}; that is, a trivial GHD is an empty $s \times s$ array. If a $t$-$\mathrm{GHD}_k(s,v;\lambda)$, $\cG$, has a subarray $\cH$ which is itself a $t$-$\mathrm{GHD}_k(s',v';\lambda')$ for some $s'\leq s$, $v' \leq v$ and $\lambda' \leq \lambda$, and whose point set is a pairwise hole, then we say that $\cH$ is a {\em subdesign} of $\cG$. In particular, if the subdesign $\cH$ is trivial, then we call $\cH$ an {\em array hole} in $\cG$, and say that $\cG$ has an array hole of size $s'$.
There is a third type of hole that may occur, a {\em Latin hole}, which is a set of elements $H \subseteq X$ and sets $R$, $C$ of rows and columns, respectively, such that each row in $R$ (resp.\ column in $C$) contains no points in $H$, but all points in $X \setminus H$. The concepts of array holes and Latin holes may coincide when there is an array hole in a Latin square, and each row and column intersecting the array hole misses the same subset $H$ of the point set. This is often referred to in the literature as a Latin square with a hole. The notation $t$~$\mathrm{IMOLS}(s,a)$ is used for $t~\mathrm{MOLS}(s)$, each of which has a hole of side length $a$ on the same set of positions. Also, the notation $t~\mathrm{IMOLS}(s,a,b)$ is used for a set of $t~\mathrm{MOLS}(s)$ with two disjoint holes of orders $a$ and $b$, with two extra properties: (1) the two Latin holes have no symbols in common; (2) no common row or column within the larger square intersects both array holes. Latin holes and array holes also both feature in the concept of frames, which are described in Section~\ref{FrameConstr}. \section{Construction methods} \label{ConstrSection} \subsection{Frames} \label{FrameConstr} In this section, we discuss a useful method, that of frames, which will allow us to construct infinite families of GHDs. The central idea has been used to construct GHDs on both ends of the spectrum. However, the terminology in the literature varies: for MOLS and GHDs with few empty cells, authors often refer to HMOLS~\cite{AbelBennettGe,BennettColbournZhu,WangDu}, while for doubly resolvable designs authors often speak of frames~\cite{CLLM, blue book}. We use the latter terminology, as it is more easily adaptable to more general kinds of GHDs. First we begin with a definition. \begin{definition} \label{Frame Definition} Let $s$ and $v$ be integers, $X$ be a set of $v$ points, $\{G_1, \ldots, G_n\}$ be a partition of $X$, and $g_i=|G_i|$. 
Also let $s_1, \ldots, s_n$ be non-negative integers with $\sum_{i=1}^{n} s_i = s$. A {\em generalized Howell frame}, $\mathrm{GHF}_k$, of type $(s_1,g_1)\ldots (s_n, g_n)$ is a square array of side $s$, $A$, that has the following properties: \begin{enumerate} \item Every cell of $A$ is either empty or contains a $k$-subset of elements of $X$. The filled cells are called {\em blocks}. \item No pair of points from $X$ appear together in more than one block, and no pair of points in any $G_i$ appear together in any block. \item The main diagonal of $A$ consists of empty $s_i\times s_i$ subsquares, $A_i$. \item Each row and column of the array with empty diagonal entry $A_i$ is a resolution of $X\setminus G_i$. \end{enumerate} \end{definition} We will use an exponential notation $(s_1,g_1)^{n_1}\ldots (s_t,g_t)^{n_t}$ to indicate that there are $n_i$ occurrences of $(s_i, g_i)$ in the partition. In the case that $k=3$ and $g_i=3s_i$, we will refer to a GHF of type $s_1s_2\cdots s_n$, and will use exponential notation $s_1^{\alpha_1} \cdots s_r^{\alpha_r}$ to indicate that there are $\alpha_i$ occurrences of $s_i$. Note that this concept of frame is analogous to the HMOLS used in~\cite{AbelBennettGe,BennettColbournZhu,WangDu}. \begin{theorem} \cite{AbelBennett, AbelBennettGe} \label{Uniform Frames} Let $h\geq 1$ and $u\geq 5$ be integers. Then there exists a GHF of type $h^u$ except for $(h,u)=(1,6)$, and possibly for $(h,u) \in \{(1,10), (3,28), (6,18)\}$. \end{theorem} The following two lemmas are similar to Lemmas 2.2 and 2.3 in \cite{BennettColbournZhu} where they were written in the language of transversal designs with holes and holey MOLS. More information on the construction methods used to establish them can be found in \cite{BrouwerVanRees}. For the sake of clarity, we give a brief proof here of Lemma~\ref{n=38 lemma}; the proof of Lemma~\ref{n=68 lemma} is similar. 
\begin{lemma} \label{n=38 lemma} Let $h,m$ be positive integers such that $4~\mathrm{MOLS}(m)$ exist. Let $v_1, v_2, \ldots, v_{m-1}$ be non-negative integers such that for each $i=1,2,\ldots,m-1$, there exist $3~\mathrm{IMOLS}(h+v_i, v_i)$. Then for $v=v_1+v_2+\cdots+v_{m-1}$, there exists a GHF of type $h^mv^1$. \end{lemma} \begin{proof} Take the first three $\mathrm{MOLS}(m)$ on disjoint symbol sets and superimpose them to form a $\mathrm{GHD}(m,3m)$, $A$. These MOLS possess $m$ disjoint transversals, $T_0, T_1, \ldots, T_{m-1}$, since the cells occupied by any symbol in the fourth square form such a transversal. Permuting rows and columns if necessary, we can assume that $T_0$ is on the main diagonal. Form an $(hm+v)\times(hm+v)$ array $A'$ as follows. For $i,j=1,2,\ldots,m$, the subarray in rows $h(i-1)+1, \ldots, hi$ and columns $h(j-1)+1,\ldots,hj$ corresponds to the $(i,j)$-cell in $A$. We now fill the cells of $A'$. The $h \times h$ subsquares along the diagonal, corresponding to $T_0$, will remain empty. Next, consider the positions arising from the transversals $T_1, \ldots, T_{m-1}$. Partition the last $v$ rows into $m-1$ sets $R_1, R_2,\ldots,R_{m-1}$, with $R_{\ell}$ containing $v_{\ell}$ rows. Similarly partition the last $v$ columns into $C_1, \ldots, C_{m-1}$, where $C_{\ell}$ contains $v_{\ell}$ columns. Suppose that in $A$, cell $(i,j)$, containing entry $\{x_{ij}, y_{ij}, z_{ij}\}$, is in $T_{\ell}$. In the entries of $A'$ arising from this cell, together with the entries in columns $h(j-1)+1,\ldots,hj$ of $R_{\ell}$, the entries in rows $h(i-1)+1,\ldots,hi$ of $C_{\ell}$ and the $v_{\ell} \times v_{\ell}$ subsquare formed by the intersection of $R_{\ell}$ and $C_{\ell}$, place the superimposed three $\mathrm{IMOLS}(h+v_{\ell}, v_{\ell})$, so that the missing hole is on the $v_{\ell} \times v_{\ell}$ subsquare formed by the intersection of $R_{\ell}$ and $C_{\ell}$. 
For these superimposed MOLS, the symbol set is $(\{x_{ij}, y_{ij}, z_{ij}\} \times \mathbb{Z}_h) \cup \{ \infty_{\ell, 1},\infty_{\ell, 2}, \ldots, \infty_{\ell, v_{\ell}}\}$, and the missing elements due to the hole are $\infty_{\ell, 1},\infty_{\ell, 2}, \ldots, \infty_{\ell, v_{\ell}}$. It is straightforward to verify that the resulting array $A'$ is a GHF of type $h^mv^1$. \end{proof} \begin{lemma} \label{n=68 lemma} Let $h$, $m$ and $x$ be positive integers. Suppose there exist $3+x~\mathrm{MOLS}(m)$ and $3~\mathrm{MOLS}(h)$. For each $1 \leq i \leq x-1$, let $w_i$ be a non-negative integer such that there exist $3~\mathrm{IMOLS}(h+w_i, w_i)$. Then if $w=w_1+ w_2+ \cdots+w_{x-1}$, there exists a GHF of type $h^{m-1}(m+w)^1$. \end{lemma} To apply Lemmas~\ref{n=38 lemma} and \ref{n=68 lemma}, we need some information on the existence of $3+x~\mathrm{MOLS}(m)$, $4~\mathrm{MOLS}(m)$ and $3~\mathrm{IMOLS}(h+a,a)$. These existence results are given in the next two lemmas. \begin{lemma} \label{n=14 lemma} There exist $4~\mathrm{MOLS}(m)$ for all integers $m \geq 5$ except for $m=6$ and possibly for $m \in \{10,22\}$. Also, if $m$ is a prime power, there exist $m-1~\mathrm{MOLS}(m)$. \end{lemma} \begin{proof} See \cite{Todorov} for $m=14$, \cite{Abel} for $m=18$ and \cite[Section III.3.6]{Handbook} for other values. \end{proof} \begin{lemma} \label{n=10 lemma} \cite[Section III.4.3]{Handbook} Suppose $y,a$ are positive integers with $y \geq 4a$. Then $3~\mathrm{IMOLS}(y,a)$ exist, except for $(y,a) = (6,1)$, and possibly for $(y,a) = (10,1)$. \end{lemma} Most of the frames that we require in Section~\ref{TwoEmptySection} are obtained using Lemma~\ref{n=38 lemma}, but Theorem~\ref{Uniform Frames} and Lemma~\ref{n=68 lemma} will also be used. The majority of frames required come from the following special cases of Lemma~\ref{n=38 lemma}. \begin{lemma} \label{Frame 7^uv} Let $h,m,v$ be integers with $m \geq 5,$ $m \notin \{6,10,22\}$.
Then there exists a GHF of type $h^mv^1$ if either: \begin{enumerate} \item $h=6$ and $m-1 \leq v \leq 2(m-1)$; \item $h\in \{7,8\}$ and $0 \leq v \leq 2(m-1)$; \item $h\in \{9,11\}$, $0 \leq v \leq 3(m-1)$ and $(h,v) \neq (9,1)$. \end{enumerate} \end{lemma} \begin{proof} Apply Lemma~\ref{n=38 lemma}, and let $h$ and $m$ be as given in that lemma. The admissible values of $m$ guarantee the existence of $4~\mathrm{MOLS}(m)$ (see Lemma~\ref{n=14 lemma}). The range for $v= \sum_{i=1}^{m-1} v_i$ is easily determined from the feasible values of $v_i$. We require that $3~\mathrm{IMOLS}(h+v_i, v_i)$ exist, so in light of Lemma~\ref{n=10 lemma}, we may take $v_i \in \{1,2\}$ when $h=6$, $v_i \in \{0, 1,2\}$ when $h \in \{7,8\}$, $v_i \in \{0,2,3\}$ when $h=9$ and $v_i \in \{0,1,2,3\}$ when $h=11$. \end{proof} We next discuss how to construct GHDs from frames. The following result is a generalization of the Basic Frame Construction for doubly resolvable designs; see~\cite{ACCLWW, CLLM, Lamken95}. \begin{theorem} [Basic Frame Construction] \label{Frame Construction} Suppose there exists a $\mathrm{GHF}_k$ of type $\Pi_{i=1}^m (s_i,g_i)$. Moreover, suppose that for $1 \leq i \leq m$, there exists a $\mathrm{GHD}_k(s_i+e,g_i+u)$ and this design contains a $\mathrm{GHD}_k(e,u)$ as a subdesign for $1 \leq i \leq m-1$. Then there exists a \mbox{$\mathrm{GHD}_k(s+e,v+u)$,} where $s=\sum_{i=1}^m s_i$ and $v=\sum_{i=1}^m g_i$. \end{theorem} \begin{proof} Suppose the $\mathrm{GHF}_k$ of type $\Pi (s_i,g_i)$ has point set $X$ and groups $G_i$, $i=1,2,\ldots,m$. Let $U$ be a set of size $u$, disjoint from $X$, and take our new point set to be $X \cup U$. Now, add $e$ new rows and columns. 
For each group $G_i$ with $1 \leq i \leq m-1$, fill the $s_i \times s_i$ subsquare corresponding to $G_i$, together with the $e$ new rows and columns, with a copy of the $\mathrm{GHD}_k(s_i+e,g_i+u)$, with the sub-$\mathrm{GHD}_k(e,u)$ (containing the points in $U$) over the $e \times e$ subsquare which forms the intersection of the new rows and columns. See Figure~\ref{BasicFrameFigure}. We then delete the blocks of the sub-$\mathrm{GHD}_k(e,u)$. Now, fill the $s_m \times s_m$ subsquare corresponding to $G_m$, together with the $e$ new rows and columns, with the $\mathrm{GHD}_k(s_m+e,g_m+u)$. \begin{figure} \caption{The Basic Frame Construction} \label{BasicFrameFigure} \end{figure} We show that each point occurs in some cell of each row. In the $e$ new rows, the points in $U$ appear in the blocks from the $\mathrm{GHD}_k(s_m+e,g_m+u)$, and the points in each $G_i$ occur in the columns corresponding to the $\mathrm{GHD}_k(s_i+e,g_i+u)$ for $1 \leq i \leq m$. In a row which includes part of the $s_i \times s_i$ subsquare corresponding to the group $G_i$, the elements of $G_j$ ($j \neq i$) appear in the frame, while the elements of $G_i \cup U$ appear in the added $\mathrm{GHD}_k(s_i+e,g_i+u)$. In a similar way, each element occurs exactly once in every column. \end{proof} We will generally use the Basic Frame Construction with $u=0$, so that our ingredient GHDs have an $e \times e$ empty subsquare. For this special case, we have the following. \begin{corollary} \label{FrameCorollary} Suppose that: (1) there exists a $\mathrm{GHF}_k$ of type $\Pi_{i=1}^m (s_i,g_i)$; (2) for each $i\in \{1,\ldots,m-1\}$ there exists a $\mathrm{GHD}_k(s_i+e,g_i)$ containing a trivial subdesign $\mathrm{GHD}_k(e,0)$; and (3) there exists a $\mathrm{GHD}_k(s_m+e,g_m)$. Then there exists a $\mathrm{GHD}_k(s+e,v)$, where $s=\sum_{i=1}^m s_i$ and $v=\sum_{i=1}^m g_i$.
\end{corollary} \subsection{Starter-adder method} \label{SA section} In practice, to apply the Basic Frame Construction, Theorem~\ref{Frame Construction}, described in Section~\ref{FrameConstr}, we first need to obtain small GHDs with sub-GHDs to use as ingredients in the construction. One important technique for constructing GHDs of small side length is the {\em starter-adder method}. See~\cite{blue book} for further background and examples. Let $G$ be a group, written additively, and consider the set $G\times\{0,1, \ldots,c\}$, which we think of as $c+1$ copies of $G$ labelled by subscripts. For elements $g,h\in G$, a {\em pure $(i,i)$-difference} is an element of the form $\pm(g_i-h_i)$ (i.e.\ the subscripts are the same), while a {\em mixed $(i,j)$-difference} is of the form $g_i-h_j$ with $i < j$. In both cases, subtraction is done in $G$, so that $g_i-h_j = g'_i - h'_j$ if and only if $g-h=g'-h'$ in $G$, for any choice of subscripts. \pagebreak \begin{definition} \label{Starter-Adder Definition} Let $(G,+)$ be a group of order $n+x$, and let $X=(G \times\{0,1\}) \cup (\{\infty\} \times \{1,2, \ldots, n-2x\})$. A {\em transitive starter-adder} for a $\mathrm{GHD}(n+x,3n)$ is a collection of triples $\mathcal{ST}=\{S_1, S_2, \ldots, S_{n}\}$, called a {\em transitive starter}, and a set $A=\{a_1, a_2, \ldots, a_{n}\}$, called an {\em adder}, of elements of $G$ with the following properties: \begin{enumerate} \item $\mathcal{ST}$ is a partition of $X$. \item For any $i \in \{0,1\}$, each element of $G$ occurs at most once as a pure $(i,i)$-difference within a triple of $\mathcal{ST}$. Likewise, each element of $G$ occurs at most once as a mixed $(0,1)$-difference within a triple of $\mathcal{ST}$. If $|G|$ is even, no element $g$ of order $2$ in $G$ occurs as a pure $(i,i)$-difference for any $i$ in the triples of $\mathcal{ST}$. \item Each $\infty_i$ occurs in one triple of the form $\{\infty_i, g_0, h_1\}$, where $g,h \in G$.
\item The sets $S_1+a_1, S_2+a_2, \ldots, S_{n}+a_{n}$ form a partition of $X$. Here $S_j+a_j$ is the triple formed by adding $a_j$ to the non-subscript part of each non-infinite element of $S_j$, and $\infty_i +a_j = \infty_i$ for any $i \in \{1,2, \ldots, n-2x\}$, $j \in \{1,2, \ldots, n\}$. \end{enumerate} \end{definition} Note that if $G$ is not abelian, the element $a_j$ would be added on the right in each case. However, in this paper we will always take the group $G$ to be $\mathbb{Z}_{n+x}$. Transitive starters and adders can be used to construct $\mathrm{GHD}(n+x,3n)$s in the following manner: label the rows and columns of the $(n+x) \times (n+x)$ array by the elements of $G$, then in row $i$, place the triple $S_j+i$ in the column labelled as $i-a_j$. Thus, the first row of the array contains the blocks of the starter $\mathcal{ST}$, with block $S_j$ in column $-a_j$; by the definition of a starter, these blocks form a resolution class. The remaining rows consist of translates of the first, with their positions shifted, so that the first column contains the blocks $S_j+a_j$ for $1 \leq j \leq n$. By the definition of an adder, these blocks are pairwise disjoint and thus also form a resolution class. The remaining columns are translates of the first. By construction, the two resolutions (into rows and into columns) are orthogonal. Note also that the infinite points used in a transitive starter-adder construction for a $\mathrm{GHD}(n+x,3n)$ form a pairwise hole of size $n-2x = 3n - 2(n+x)$. Therefore any $\mathrm{GHD}(n+x,3n)$ obtained from a transitive starter-adder construction possesses the $*$-property. We thus have the following theorem: \begin{theorem} If there exists a transitive starter $\mathcal{ST}$ and a corresponding adder $A$ for a $\mathrm{GHD}(n+x,3n)$ then there exists a $\mathrm{GHD}^*(n+x,3n)$.
\end{theorem} \begin{example} \label{SAexample8} For $s=10$, $v=24$, the following is a transitive starter and adder using $(\mathbb{Z}_{10}\times\{0,1\}) \cup \{\infty_0,\infty_1,\infty_2, \infty_3 \}$. (The terms in square brackets give the adder.) \[ \begin{array}{@{}*{4}l@{}} 0_{1} 6_{1} 7_{1} [2] & 4_{1} 2_{1} 7_{0} [3] & 6_{0} 8_{0} 8_{1} [8] & 0_{0} 1_{0} 4_{0} [7] \\ \infty_0 2_{0} 3_{1} [1] & \infty_1 5_{0} 9_{1} [4] & \infty_2 3_{0} 1_{1} [9] & \infty_3 9_{0} 5_{1} [6] \end{array} \] Using this starter and adder, we obtain the following $\mathrm{GHD}^*(10,24)$. \renewcommand{\arraystretch}{1.25} \[ \arraycolsep 3.2pt \begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline & \infty_2 3_0 1_1 & 6_0 8_0 8_1 & 0_0 1_0 4_0 & \infty_3 9_0 5_1 & & \infty_1 5_0 9_1 & 4_1 2_1 7_0 & 0_1 6_1 7_1 & \infty_0 2_0 3_1 \\ \hline \infty_0 3_0 4_1 & & \infty_2 4_0 2_1 & 7_0 9_0 9_1 & 1_0 2_0 5_0 & \infty_3 0_0 6_1 & & \infty_1 6_0 0_1 & 5_1 3_1 8_0 & 1_1 7_1 8_1 \\ \hline 2_1 8_1 9_1 & \infty_0 4_0 5_1 & & \infty_2 5_0 3_1 & 8_0 0_0 0_1 & 2_0 3_0 6_0 & \infty_3 1_0 7_1 & & \infty_1 7_0 1_1 & 6_1 3_1 9_0 \\ \hline 7_1 4_1 0_0 & 3_1 9_1 0_1 & \infty_0 5_0 6_1 & & \infty_2 6_0 4_1 & 9_0 1_0 1_1 & 3_0 4_0 7_0 & \infty_3 2_0 8_1 & & \infty_1 8_0 2_1 \\ \hline \infty_1 9_0 3_1 & 8_1 5_1 1_0 & 4_1 0_1 1_1 & \infty_0 6_0 7_1 & & \infty_2 7_0 5_1 & 0_0 2_0 2_1 & 4_0 5_0 8_0 & \infty_3 3_0 9_1 & \\ \hline & \infty_1 0_0 4_1 & 9_1 6_1 2_0 & 5_1 1_1 2_1 & \infty_0 7_0 8_1 & & \infty_2 8_0 6_1 & 1_0 3_0 3_1 & 5_0 6_0 9_0 & \infty_3 4_0 0_1 \\ \hline \infty_3 5_0 1_1 & & \infty_1 1_0 5_1 & 0_1 7_1 3_0 & 6_1 2_1 3_1 & \infty_0 8_0 9_1 & & \infty_2 9_0 7_1 & 2_0 4_0 4_1 & 6_0 7_0 0_0 \\ \hline 7_0 8_0 1_0 & \infty_3 6_0 2_1 & & \infty_1 2_1 6_0 & 1_1 8_1 4_0 & 7_1 3_1 4_1 & \infty_0 9_0 0_1 & & \infty_2 0_0 8_1 & 3_0 5_0 5_1 \\ \hline 4_0 6_0 6_1 & 8_0 9_0 2_0 & \infty_3 7_0 3_1 & & \infty_1 3_1 7_0 & 2_1 9_1 5_0 & 8_1 4_1 5_1 & \infty_0 0_0 1_1 & & \infty_2 1_0 9_1 \\ \hline \infty_2 2_0 0_1 & 5_0 7_0 7_1 &
9_0 0_0 3_0 & \infty_3 8_0 4_1 & & \infty_1 4_1 8_0 & 3_1 0_1 6_0 & 9_1 5_1 6_1 & \infty_0 1_0 2_1 & \\ \hline \end{array} \] \renewcommand{\arraystretch}{1.0} \end{example} Another type of starter-adder is an {\em intransitive starter-adder}. This term is defined below. \begin{definition} \label{IntStarter-Adder Definition} Let $(G,+)$ be a group of order $n$, and let $X=G \times\{0,1,2\}$. An {\em intransitive starter-adder} for a $\mathrm{GHD}(n+x,3n)$ consists of a collection $\mathcal{ST}$ of $n+x$ triples, called an {\em intransitive starter}, and a set $A=\{a_1, a_2, \ldots, a_{n-x}\}$ of elements of $G$, called an {\em adder}, with the following properties: \begin{enumerate} \item $\mathcal{ST}$ can be partitioned into three sets $S$, $R$ and $C$ of sizes $n-x$, $x$ and $x$ respectively. We write $S=\{S_1, S_2, \ldots, S_{n-x}\}$, $R=\{R_1, R_2, \ldots, R_{x}\}$ and $C=\{C_1, C_2, \ldots, C_{x}\}$. \item $S \cup R$ is a partition of $X$. \item For each $i \in \{0,1,2\}$, each element of $G$ occurs at most once as a pure $(i,i)$-difference within a triple of $\mathcal{ST}$. Also, for each pair $(i,j) \in \{ (0,1), (0,2), (1,2)\}$, each element of $G$ occurs at most once as a mixed $(i,j)$-difference within a triple of $\mathcal{ST}$. If $|G|$ is even, no element $g$ of order $2$ in $G$ occurs as a pure $(i,i)$-difference within these triples. \item If $S + A$ is the set of blocks $\{S_1 + a_1, S_2+ a_2, \ldots, S_{n-x} + a_{n-x}\}$, then $(S + A) \cup C$ is a partition of $X$. As before, $S_j + a_j$ denotes the block obtained by adding $a_j$ to the non-subscript part of all elements of $S_j$. \end{enumerate} \end{definition} To obtain a $\mathrm{GHD}(n+x, 3n)$ from such an intransitive starter-adder, we proceed as follows. Let $\mathcal{G}$ be the required GHD, and let its top left $n \times n$ subarray be $\mathcal{H}$. Label the first $n$ rows and columns of $\mathcal{G}$ by the elements of $G$.
For $i \in G$, we then place the block $S_j + i$ in the $(i, i-a_j)$ cell of $\mathcal{H}$. In the top row of $\mathcal{G}$ and the final $x$ columns (which we label as $n+1, n+2, \ldots, n+x$), place the blocks $R_1, R_2, \ldots, R_{x}$ from $R$; then for $i \in G$ and $j=1,2, \ldots, x$, place $R_j + i$ in the $(i, n+j)$ cell of $\mathcal{G}$. Similarly, label the last $x$ rows of $\mathcal{G}$ as $n+1, n+2, \ldots, n+x$. In the initial column and last $x$ rows of $\mathcal{G}$ we place the blocks $C_1, C_2, \ldots, C_{x}$ from $C$, and for $i \in G$ and $j=1,2, \ldots, x$, we place $C_j + i$ in the $(n+j, i)$ cell of $\mathcal{G}$. The bottom right $x \times x$ subarray of $\mathcal{G}$ is always empty. \begin{theorem} If there exists an intransitive starter $\mathcal{ST}$ and a corresponding adder $A$ for a $\mathrm{GHD}(n+x,3n)$ then there exists a $\mathrm{GHD}(n+x,3n)$ with an empty $x \times x$ subarray. \end{theorem} \begin{example} \label{SAexample7} For $n=7$, $x=2$, the following is an intransitive starter-adder over $\mathbb{Z}_{7} \times\{0,1,2\}$ for a $\mathrm{GHD}^*(9,21)$. (In square brackets we either give the adder for the corresponding starter block, or indicate whether it belongs to $R$ or $C$.) \[ \begin{array}{@{}*{5}l@{}} 2_{2} 3_{2} 5_{2} [0] & 0_{0} 1_{0} 3_{0} [6] & 6_{1} 0_{1} 2_{1} [1] & 6_{0} 3_{1} 4_{2} [2] & 5_0 1_{1} 6_{2} [5] \\ 4_0 5_{1} 0_{2} [R] & 5_{0} 4_{1} 0_2 [C] & 2_0 4_1 1_2 [R] & 4_0 2_{1} 1_{2} [C] \end{array} \] Using this starter and adder, we obtain the following $\mathrm{GHD}^*(9,21)$.
\renewcommand{\arraystretch}{1.25} \[ \arraycolsep 4.5pt \begin{array}{|c|c|c|c|c|c|c||c|c|} \hline 2_2 3_2 5_2 & 0_0 1_0 3_0 & 5_0 1_1 6_2 & & & 6_0 3_1 4_2 & 6_1 0_1 2_1 & 4_0 5_1 0_2 & 2_0 4_1 1_2 \\ \hline 0_1 1_1 3_1 & 3_2 4_2 6_2 & 1_0 2_0 4_0 & 6_0 2_1 0_2 & & & 0_0 4_1 5_2 & 5_0 6_1 1_2 & 3_0 5_1 2_2 \\ \hline 1_0 5_1 6_2 & 1_1 2_1 4_1 & 4_2 5_2 0_2 & 2_0 3_0 5_0 & 0_0 3_1 1_2 & & & 6_0 0_1 2_2 & 4_0 6_1 3_2 \\ \hline & 2_0 6_1 0_2 & 2_1 3_1 5_1 & 5_2 6_2 1_2 & 3_0 4_0 6_0 & 1_0 4_1 2_2 & & 0_0 1_1 3_2 & 5_0 0_1 4_2 \\ \hline & & 3_0 0_1 1_2 & 3_1 4_1 6_1 & 6_2 0_2 2_2 & 4_0 5_0 0_0 & 2_0 5_1 3_2 & 1_0 2_1 4_2 & 6_0 1_1 5_2 \\ \hline 3_0 6_1 4_2 & & & 4_0 1_1 2_2 & 4_1 5_1 0_1 & 0_2 1_2 3_2 & 5_0 6_0 1_0 & 2_0 3_1 5_2 & 0_0 2_1 6_2 \\ \hline 6_0 0_0 2_0 & 4_0 0_1 5_2 & & & 5_0 2_1 3_2 & 5_1 6_1 1_1 & 1_2 2_2 4_2 & 3_0 4_1 6_2 & 1_0 3_1 0_2 \\ \hline \hline 5_0 4_1 0_2 & 6_0 5_1 1_2 & 0_0 6_1 2_2 & 1_0 0_1 3_2 & 2_0 1_1 4_2 & 3_0 2_1 5_2 & 4_0 3_1 6_2 & & \\ \hline 4_0 2_1 1_2 & 5_0 3_1 2_2 & 6_0 4_1 3_2 & 0_0 5_1 4_2 & 1_0 6_1 5_2 & 2_0 0_1 6_2 & 3_0 1_1 0_2 & & \\ \hline \end{array} \] \renewcommand{\arraystretch}{1.0} \end{example} We point out that the underlying block design for this GHD is a $(3,1)$-GDD of type $3^7$ with groups $\{t_0, t_1, t_2\}$ for $t \in \mathbb{Z}_7$. Since it has a pairwise hole of size $3$, this GHD also has the $*$-property. However, GHDs obtained by the intransitive starter-adder method usually do not possess the $*$-property. In all examples of $\mathrm{GHD}(n+x,3n)$s obtained by an intransitive starter-adder in this paper, the group $G$ will be taken as $\mathbb{Z}_n$. Also, the underlying block design for any $\mathrm{GHD}(n+x,3n)$ that we construct in this way will have (in addition to the obvious automorphism of order $n$) an automorphism of order $2$. This automorphism maps the point $t_0$ to $t_1$, $t_1$ to $t_0$ and $t_2$ to $t_2$.
It also maps any starter block with adder $a$ to the starter + adder block with adder $-a$, and each block in $R$ to a block in $C$. For instance, in the previous example, the starter block $\{6_0, 3_1, 4_2\}$ is mapped to the starter + adder block $\{3_0, 6_1, 4_2\} = \{5_0, 1_1, 6_2\} + 5$. \subsection{Designs with the $*$-property} The following lemma generalizes constructions of Stinson~\cite{Stinson} and Vanstone~\cite{Vanstone 80} for $\mathrm{GHD}^*$s. In~\cite{Stinson}, the result is given only for block size $k=2$, while in~\cite{Vanstone 80}, it is done for general block size in the case that $g=1$. \begin{lemma} \label{Stinson 1} Let $g$ be a positive integer, and for each $i \in \{1,2,\ldots,g\}$, let $u_i$ be a non-negative integer. Suppose that there exists a $\mathrm{PBD}(v,K,1)$, $(X, \cB)$, containing $g$ (not necessarily disjoint) resolution classes, $P_1, P_2, \ldots, P_g$. Moreover, suppose that for every block $B\in \cB$, there exists a $\mathrm{GHD}_k^*(|B|, (k-1)|B|+1+u_B)$, where $u_B = \sum_{\{i\mid B\in P_i\}} u_i$. Then there exists a $\mathrm{GHD}_k^*(v, (k-1)v+u+1)$, where $u = \sum_{i=1}^g u_i$. \end{lemma} \begin{proof} We construct the resulting $\mathrm{GHD}_k^*$, $\cA$, on point set $(X\times \Z_{k-1})\cup I\cup \{\infty\}$, where $I = \{ \infty_{ij} \mid 1\leq i \leq g, 1\leq j \leq u_i\}$, and index the rows and columns of $\cA$ by $X$. For each block $B\in \cB$ we define $I_B = \{\infty_{ij}\mid B\in P_i, 1\leq j \leq u_i\}$ and construct a \mbox{$\mathrm{GHD}_k^*(|B|, (k-1)|B|+1+u_B)$,} $\cA_B$, indexed by $B$, with point set $(B\times\Z_{k-1})\cup I_B\cup \{\infty\}$ and pairwise hole $I_B \cup \{\infty\}$. In this $\mathrm{GHD}$, the block $\{\infty, (x,0), \ldots, (x,{k-2})\}$ should appear in the $(x,x)$-entry of $\cA_B$ for all $x \in B$. For each cell indexed by $(x,y)\in X^2$, if $x\neq y$ there is a block with $\{x,y\}\in B$ and we place the entry from $\cA_B$ indexed by $(x,y)$ in the cell of $\cA$ indexed by $(x,y)$. 
For each $x\in X$, in the diagonal $(x,x)$-entry of $\cA$ we place the block $\{\infty, (x,0), \ldots, (x,{k-2})\}$. We now show that the resulting $\cA$ is a $\mathrm{GHD}_k^*$, with pairwise hole $I\cup\{\infty\}$. We first show that no pair appears more than once by considering a pair of points in $(X\times\Z_{k-1})\cup I\cup \{\infty\}$. If $x,y\in I\cup\{\infty\}$, it is evident that none of the elements of $I\cup \{\infty\}$ appear together in a block of $\cA$, as the elements of this set always lie in the pairwise holes of the $\cA_B$ from which the blocks of $\cA$ are drawn; nor do they appear together in the diagonal entries. We now consider $(x,a) \in X\times\Z_{k-1}$. If $y = \infty_{ij}$, then there is a unique block $B$ of $P_i$ which contains the point $x$, and $(x,a)$ and $y$ cannot appear together more than once in $\cA_B$. If $y=\infty$, it appears with $(x,a)$ only on the diagonal. Finally, if $(y,b)\in X\times\Z_{k-1}$, there is a unique block $B$ which contains $x$ and $y$, and $(x,a)$ and $(y,b)$ cannot appear together more than once in $\cA_B$. We now show that the rows of $\cA$ are resolutions of $(X\times\Z_{k-1})\cup I\cup\{\infty\}$. Consider a row indexed by $x\in X$ and an element $\alpha\in (X\times \Z_{k-1})\cup I\cup\{\infty\}$. If $\alpha = (x,a)$ or $\alpha = \infty$, then $\alpha$ appears as a diagonal entry. If $\alpha=(y,b)$, where $y\in X\setminus \{x\}$, find the block $B$ containing $\{x,y\}$. Now, the element $\alpha$ must appear in the row indexed by $x$ in $\cA_B$, say in the column indexed by $z$; then the cell indexed by $(x,z)$ of $\cA$ will contain $\alpha$. If $\alpha = \infty_{ij}\in I$, find the block $B$ of the resolution class $P_i$ which contains $x$. As above, the element $\alpha$ must appear in the row indexed by $x$ in $\cA_B$, say in the column indexed by $z$; then the cell indexed by $(x,z)$ of $\cA$ will contain $\alpha$. A similar argument shows that each column is also a resolution of $(X\times\Z_{k-1})\cup I\cup\{\infty\}$.
\end{proof} Note that the statement of Lemma~\ref{Stinson 1} does not require the $g$ resolution classes to be disjoint, or even distinct. In practice, however, when applying Lemma~\ref{Stinson 1}, we will use resolvable pairwise balanced designs in the construction and take $P_1,P_2,\ldots,P_g$ to be the distinct classes of the resolution. Thus, for any block $B$, $u_B$ will be $u_i$, where $P_i$ is the parallel class containing $B$. In particular, we will use PBDs formed from resolvable transversal designs, so we record this case for $k=3$ in the following lemma. \begin{lemma} \label{Stinson 1 TD} Suppose there exists a $\mathrm{RTD}(n,g)$. For $i \in \{1, 2, \ldots, g\}$, let $u_i$ be a non-negative integer such that there exists a $\mathrm{GHD}^*(n,2n+1+u_i)$, and let $u_{g+1}$ be a non-negative integer such that there exists a $\mathrm{GHD}^*(g,2g+1+u_{g+1})$. Then there exists a $\mathrm{GHD}^*(ng,2ng+u+1)$, where $u=\sum_{i=1}^{g+1}u_i$. \end{lemma} \begin{proof} Beginning with the $\mathrm{RTD}(n,g)$, construct a resolvable $\mathrm{PBD}(ng,\{n,g\},1)$ whose blocks are those of the resolvable transversal design together with a single parallel class whose blocks are its groups. Note that the resolution of this design consists of $g$ parallel classes, say $P_1, P_2, \ldots, P_g$, consisting of blocks of size $n$, and one parallel class, $P_{g+1}$, consisting of blocks of size $g$. The result is now a clear application of Lemma~\ref{Stinson 1}. \end{proof} \section{GHDs with one or two empty cells in each row and column} \label{TwoEmptySection} \subsection{Existence of $\mathrm{GHD}(n+1,3n)$} \label{t=1 section} In~\cite{WangDu}, Wang and Du asserted the existence of $\mathrm{GHD}(n+1,3n)$ for all $n \geq 7$, with at most five possible exceptions. However, there are issues with some of the constructions in their paper, in particular the modified starter-adder constructions for GHD$(n+1,3n)$ with $n$ even and $10 \leq n \leq 36$. 
These constructions contained at least one starter block such that one infinite point was added to half of its translates (i.e.\ $(n+1)/2$ of them) and another infinite point was added to the other $(n+1)/2$ translates. However, this procedure is not possible for $n$ even, since $(n+1)/2$ is then not an integer. In addition, many of their later recursive constructions rely on the existence of some of these faulty designs. We thus prove an existence result for $\mathrm{GHD}(n+1,3n)$ here, namely Theorem~\ref{Existence_t=1}, which has no exceptions for $n \geq 7$. Note that not all of the constructions in~\cite{WangDu} are problematic, and we will quote some results from that paper as part of our proof. \begin{theorem} \label{Existence_t=1} Let $n$ be a positive integer. There exists a $\mathrm{GHD}(n+1,3n)$ if and only if $n \geq 6$, except possibly for $n=6$. \end{theorem} If $n \leq 5$, there does not exist a $\mathrm{GHD}(n+1,3n)$. In particular, if $n<3$, the obvious necessary conditions are not satisfied. For $n=3$ or $4$, a $\mathrm{GHD}(n+1,3n)$ would be equivalent to a Kirkman square of order 9 or a doubly resolvable nearly Kirkman triple system of order 12, both of which are known not to exist; see~\cite{ACCLWW,CLLM}. The nonexistence of a $\mathrm{GHD}(6,15)$ was stated in~\cite{WangDu} as a result of a computer search. We now turn our attention to existence. The following GHDs are constructed in~\cite{WangDu}. \begin{lemma}[Wang and Du~\cite{WangDu}] \label{revised WangDu starter adder} There exists a $\mathrm{GHD}^*(n+1,3n)$ if either (1) $n=8$, or (2) $n$ is odd, and either $7 \leq n \leq 33$ or $n=39$. \end{lemma} We find a number of other small GHDs by starter-adder methods. \begin{lemma} \label{new starter adder t=1} \begin{enumerate} \item There exists a $\mathrm{GHD}^*(n+1,3n)$ for $n \in \{14, 20, 26, 32, 38, 41, 44\}$. \item There exists a $\mathrm{GHD}(n+1,3n)$ for $n \in \{10, 12, 16, 18, 22, 28, 30, 36, 46\}$.
\end{enumerate} \end{lemma} \begin{proof} Starters and adders for these GHDs can be found in Appendix~\ref{1EmptyAppendix}. We use a transitive starter and adder for the values in (1), and an intransitive one for the values in (2). \end{proof} \begin{lemma} \label{StarterAdderMod2} There exists a $\mathrm{GHD}^*(n+1,3n)$ for $n=37$. \end{lemma} \begin{proof} We give a transitive starter and adder, modifying how we develop some of the blocks; this method is also used in~\cite{WangDu}. We develop the subscripts of $\infty_0$ and $\infty_1$ modulo 2. The points $\infty_2$ and $\infty_3$ are treated similarly: $\{\infty_2,a_i,b_i\}$ gives $\{\infty_3, (a+1)_i, (b+1)_i\}$ in the next row, while $\{\infty_3,c_j,d_j\}$ yields $\{\infty_2,(c+1)_j,(d+1)_j\}$ in the next row. Note that this ``swapping'' of infinite points in subsequent rows allows us to include a pure difference in blocks containing an infinite point. That the block of the starter containing $\infty_0$ (resp.\ $\infty_2$) also has points of the form $a_0$, $b_0$ where $a$ and $b$ have different parities and the block containing $\infty_1$ (resp.\ $\infty_3$) also has points of the form $c_1$, $d_1$, where $c$ and $d$ have different parities, combined with the fact that $n+1$ is even, ensures that no pair of points is repeated as we develop. The points $\infty_4, \infty_5, \ldots, \infty_{34}$ remain fixed as their blocks are developed. The starter blocks are given below, with the corresponding adders in square brackets. 
\[ \arraycolsep 1.9pt \begin{array}{@{}*{6}l@{}} 0_0 7_0 13_0 [0] & 10_1 14_1 20_1 [12] & \infty_0 37_0 2_0 [18] & \infty_1 32_1 1_1 [10] & \infty_2 33_0 34_0 [32] & \infty_3 2_1 7_1 [22] \\ \infty_4 25_0 25_1 [6] & \infty_5 8_0 9_1 [14] & \infty_6 16_0 18_1 [26] & \infty_7 26_0 29_1 [4] & \infty_8 18_0 22_1 [17] & \infty_9 21_0 26_1 [28] \\ \infty_{10} 24_0 30_1 [15] & \infty_{11} 3_0 11_1 [23] & \infty_{12} 4_0 13_1 [2] & \infty_{13} 5_0 15_1 [31] & \infty_{14} 10_0 21_1 [27] & \infty_{15} 12_0 24_1 [35] \\ \infty_{16} 28_0 3_1 [34] & \infty_{17} 19_0 34_1 [21] & \infty_{18} 30_0 8_1 [33] & \infty_{19} 27_0 6_1 [19] & \infty_{20} 15_0 33_1 [3] & \infty_{21} 23_0 4_1 [9] \\ \infty_{22} 11_0 31_1 [37] & \infty_{23} 22_0 5_1 [30] & \infty_{24} 35_0 19_1 [8] & \infty_{25} 31_0 16_1 [36] & \infty_{26} 14_0 0_1 [20] & \infty_{27} 1_0 27_1 [11] \\ \infty_{28} 9_0 36_1 [7] & \infty_{29} 6_0 37_1 [13] & \infty_{30} 29_0 23_1 [24] & \infty_{31} 17_0 12_1 [16] & \infty_{32} 32_0 28_1 [29] & \infty_{33} 20_0 17_1 [1] \\ \infty_{34} 36_0 35_1 [5] \end{array} \] \end{proof} \begin{lemma}\label{StarterAdderMod5} There exists a $\mathrm{GHD}(n+1,3n)$ for $n\in \{24,34\}$. \end{lemma} \begin{proof} We use a similar modification of the transitive starter-adder method to that in Lemma~\ref{StarterAdderMod2}. In this case, the subscripts of $\infty_0, \infty_1, \infty_2, \infty_3, \infty_4$ are developed modulo $5$ as we develop the remaining points modulo ${n+1}$. 
For $n=24$, the starter and adder are as follows: \[ \arraycolsep 2.2pt \begin{array}{@{}*{6}l@{}} 1_0 4_0 10_0 [19] & 0_1 7_1 15_1 [24] & \infty_0 20_0 21_1 [5] & \infty_1 12_0 19_0 [20] & \infty_2 5_1 19_1 [10] & \infty_3 2_1 3_1 [15] \\ \infty_4 13_0 21_0 [0] & \infty_5 7_0 12_1 [8] & \infty_6 6_0 13_1 [6] & \infty_7 23_0 6_1 [7] & \infty_8 2_0 11_1 [17] & \infty_9 24_0 9_1 [2] \\ \infty_{10} 18_0 4_1 [4] & \infty_{11} 14_0 1_1 [21] & \infty_{12} 5_0 18_1 [23] & \infty_{13} 8_0 22_1 [3] & \infty_{14} 9_0 24_1 [22] & \infty_{15} 0_0 17_1 [18] \\ \infty_{16} 15_0 8_1 [1] & \infty_{17} 3_0 23_1 [14] & \infty_{18} 17_0 14_1 [16] & \infty_{19} 22_0 20_1 [12] & \infty_{20} 11_0 10_1 [13] & \infty_{21} 16_0 16_1 [11] \end{array} \] For $n=34$, the starter and adder are as follows: \[ \arraycolsep 1.9pt \begin{array}{@{}*{6}l@{}} 0_0 11_0 17_0 [4] & 10_1 21_1 22_1 [24] & \infty_0 6_0 13_1 [0] & \infty_1 20_1 27_1 [10] & \infty_2 15_0 34_0 [15] & \infty_3 18_0 27_0 [30] \\ \infty_4 11_1 14_1 [5] & \infty_5 29_0 29_1 [26] & \infty_6 5_0 6_1 [11] & \infty_7 9_0 12_1 [9] & \infty_8 1_0 5_1 [2] & \infty_9 2_0 7_1 [34] \\ \infty_{10} 20_0 28_1 [8] & \infty_{11} 7_0 16_1 [16] & \infty_{12} 8_0 18_1 [17] & \infty_{13} 4_0 15_1 [29] & \infty_{14} 12_0 24_1 [23] & \infty_{15} 21_0 34_1 [6] \\ \infty_{16} 19_0 33_1 [33] & \infty_{17} 23_0 3_1 [20] & \infty_{18} 22_0 4_1 [25] & \infty_{19} 26_0 9_1 [19] & \infty_{20} 24_0 8_1 [18] & \infty_{21} 16_0 1_1 [3] \\ \infty_{22} 10_0 31_1 [22] & \infty_{23} 13_0 0_1 [27] & \infty_{24} 14_0 2_1 [12] & \infty_{25} 33_0 23_1 [1] & \infty_{26} 28_0 19_1 [31] & \infty_{27} 25_0 17_1 [21] \\ \infty_{28} 32_0 25_1 [32] & \infty_{29} 3_0 32_1 [28] & \infty_{30} 30_0 26_1 [7] & \infty_{31} 31_0 30_1 [13] \end{array} \] \end{proof} Most of our GHDs are constructed using the Basic Frame Construction with $u=0$ (Corollary~\ref{FrameCorollary}). 
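Starter-adder constructions such as those above are easy to check by computer. As an illustration only (not one of the paper's constructions), the following Python sketch rebuilds the $\mathrm{GHD}^*(10,24)$ of Example~\ref{SAexample8} from its transitive starter and adder, and verifies that every row and every column of the resulting array is a resolution of the $24$ symbols:

```python
# Illustration only: rebuild the GHD*(10,24) of Example SAexample8 from its
# transitive starter and adder over Z_10, and check that every row and every
# column of the 10 x 10 array is a resolution of the 24 symbols.
# Finite points are triples ("g", g, sub); infinite points are ("inf", i).

N = 10  # order of the group Z_10

def pt(g, sub):
    return ("g", g % N, sub)

INF = [("inf", i) for i in range(4)]

# The eight starter blocks with their adders, copied from Example SAexample8.
starter = [
    ({pt(0, 1), pt(6, 1), pt(7, 1)}, 2),
    ({pt(4, 1), pt(2, 1), pt(7, 0)}, 3),
    ({pt(6, 0), pt(8, 0), pt(8, 1)}, 8),
    ({pt(0, 0), pt(1, 0), pt(4, 0)}, 7),
    ({INF[0], pt(2, 0), pt(3, 1)}, 1),
    ({INF[1], pt(5, 0), pt(9, 1)}, 4),
    ({INF[2], pt(3, 0), pt(1, 1)}, 9),
    ({INF[3], pt(9, 0), pt(5, 1)}, 6),
]

def translate(block, i):
    # Add i to the group part of each finite point; infinite points are fixed.
    return frozenset(("g", (p[1] + i) % N, p[2]) if p[0] == "g" else p
                     for p in block)

X = frozenset(p for block, _ in starter for p in block)  # the 24 symbols

# Row i gets S_j + i in column i - a_j (mod 10); the other cells stay empty.
array = [[None] * N for _ in range(N)]
for i in range(N):
    for block, a in starter:
        array[i][(i - a) % N] = translate(block, i)

def is_resolution(cells):
    pts = [p for b in cells if b is not None for p in b]
    return len(pts) == len(X) and set(pts) == X

rows_ok = all(is_resolution(array[i]) for i in range(N))
cols_ok = all(is_resolution([array[i][c] for i in range(N)]) for c in range(N))
print(rows_ok and cols_ok)  # True
```

The same kind of check applies, with minor changes, to the intransitive and modified starter-adders used above.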
Note that in every $\mathrm{GHD}(n+1,3n)$ there is exactly one empty cell in each row and column; thus every $\mathrm{GHD}(n+1,3n)$ contains a $\mathrm{GHD}(1,0)$ as a subdesign. We therefore do not need to separately verify the existence of this subdesign in the construction. \begin{lemma} \label{uniform t=1} There is a $\mathrm{GHD}(n+1,3n)$ for $n \in \{35,40,45,49,50,51,54,55\}$. \end{lemma} \begin{proof} For $n=35$, $40$, $45$, $49$, $54$ and $55$, there exist (by Theorem~\ref{Uniform Frames}) GHFs of types $7^5$, $8^5$, $9^5$, $7^7$, $9^6$ and $11^5$. For $n=50,51$, there exist (by Lemma~\ref{n=68 lemma}, with $h=m=7$, $x=2$ and either $w_1 = w = 1$ or $w_1 = w = 2$) GHFs of types $7^6 8^1$ and $7^6 9^1$. Since there exist $\mathrm{GHD}(8,21)$, $\mathrm{GHD}(9,24)$, $\mathrm{GHD}(10,27)$ and $\mathrm{GHD}(12,33)$ by Lemma~\ref{revised WangDu starter adder}, the result follows by Corollary~\ref{FrameCorollary}. \end{proof} \begin{lemma} \label{t=1 recursion} If $ n \in \{42,43,47,48,52,53\}$ or $n \geq 56$, then there is a $\mathrm{GHD}(n+1,3n)$. \end{lemma} \begin{proof} First, suppose that $n \geq 84$. Here, we can write $n=7m+v$, where $m \geq 11$ is odd and $7 \leq v \leq 20 \leq 2m-2$. By Lemma~\ref{Frame 7^uv}, there is a GHF of type $7^m v^1$. Since there exists a $\mathrm{GHD}(8,21)$ and a $\mathrm{GHD}(v+1,3v)$ (by Lemma~\ref{revised WangDu starter adder} or Lemma~\ref{new starter adder t=1}), a $\mathrm{GHD}(n+1,3n)$ exists by Corollary~\ref{FrameCorollary}. For the remaining values of $n$, we give in Table~\ref{ghd(n+1,3n) Frame Table} a frame of appropriate type $h^mv^1$, where $n= hm+v$; these frames all exist by Lemma~\ref{Frame 7^uv}. Together with the existence of a $\mathrm{GHD}(h+1,3h)$ and a $\mathrm{GHD}(v+1,3v)$ (by Lemma~\ref{revised WangDu starter adder} or Lemma~\ref{new starter adder t=1}), these frames give the required $\mathrm{GHD}(n+1,3n)$ by Corollary~\ref{FrameCorollary}.
\begin{table}[htbp] \caption{Frames used for $\mathrm{GHD}(n+1,3n)$ with $n \in \{42,43,47,48,52,53\}$ and $56 \leq n \leq 83$.} \centering \begin{tabular}{ccccccccccc} \hline $n=hm+v$ & $h$ & $m$ & $v$ & \hspace{0.5cm} & $n=hm+v$ & $h$ & $m$ & $v$ \\ \hline 42--43 & 7 & 5 & 7--8 & & 47--48 & 8 & 5 & 7--8 \\ 52--53 & 9 & 5 & 7--8 & & 56--61 & 7 & 7 & 7--12 \\ 62 & 11 & 5 & 7 & & 63--69 & 7 & 8 & 7--13 \\ 70--79 & 7 & 9 & 7--16 & & 80--83 & 8 & 9 & 8--11 \\ \hline \end{tabular} \label{ghd(n+1,3n) Frame Table} \end{table} \end{proof} Taken together, Lemmas~\ref{revised WangDu starter adder}--\ref{StarterAdderMod5} and \ref{uniform t=1}--\ref{t=1 recursion} prove Theorem~\ref{Existence_t=1}. \subsection{Existence of $\mathrm{GHD}(n+2,3n)$} \label{t=2 section} In this section, we consider the existence of $\mathrm{GHD}(n+2,3n)$, which have two empty cells in each row and column, and prove the following theorem. \begin{theorem} \label{Existence_t=2} Let $n$ be a positive integer. Then there exists a $\mathrm{GHD}(n+2,3n)$ if and only if $n \geq 6$. \end{theorem} Note that if $1 \leq n \leq 4$, the necessary conditions for the existence of a $\mathrm{GHD}(n+2,3n)$ are not satisfied. Moreover, for $n=5$, there is no $\mathrm{GHD}(7,15)$~\cite{Mathon Vanstone}. Thus it suffices to consider the case that $n \geq 6$. For relatively small values of $n$, we construct $\mathrm{GHD}(n+2,3n)$ mainly by starter-adder methods. These will then be used in the Basic Frame Construction with $u=0$ (Corollary~\ref{FrameCorollary}) to give the remaining GHDs. \begin{lemma} \label{Small Cases} \rule{0ex}{0ex} For all $n \in \{6, \ldots, 29\} \cup \{31, \ldots, 34\} \cup \{39,44\}$, a $\mathrm{GHD}^*(n+2,3n)$ exists. Moreover, if $n$ is even, then there exists such a design containing a $\mathrm{GHD}(2,0)$ as a subdesign. \end{lemma} \begin{proof} As mentioned in Section~\ref{DefnSection}, there exists a $\mathrm{GHD}(8,18)$~\cite{Finland}. 
As such a design is equivalent to a $\mathrm{DRNKTS}(18)$, it has the $*$-property. Note that the $\mathrm{GHD}^*(8,18)$ exhibited in Example~\ref{FinlandExample} has a $2\times2$ empty subsquare, i.e.\ a sub-$\mathrm{GHD}(2,0)$. For the remaining values of $n$, a transitive starter and adder for a $\mathrm{GHD}^*(n+2,3n)$ can be found in Appendix~\ref{starters and adders}. Note that the group used is $\mathbb{Z}_{n+2}$. For $n$ even, the adder does not contain $0$ or $(n+2)/2$, thus ensuring that the $(0,0)$-cell and $(0,(n+2)/2)$-cell are empty, and as we develop, the $((n+2)/2,0)$-cell and $((n+2)/2,(n+2)/2)$-cell will also be empty, yielding a sub-$\mathrm{GHD}(2,0)$. \end{proof} We remark that a $\mathrm{GHD}^*(10,24)$ with a sub-$\mathrm{GHD}(2,0)$ can also be obtained from~\cite[Table 8]{DuAbelWang}. \begin{lemma} \label{Odd2hole} \rule{0ex}{0ex} For $n \in \{7,9,11\}$, there exists a $\mathrm{GHD}(n+2,3n)$ containing a $\mathrm{GHD}(2,0)$ as a subdesign. Further, when $n=7$, this GHD has the $*$-property. \end{lemma} \begin{proof} For these values of $n$, an intransitive starter and adder for a $\mathrm{GHD}(n+2,3n)$ can be found in Appendix~\ref{starters and adders}. When $n=7$, this GHD is also given in Example~\ref{SAexample7}; there we indicated that it has the $*$-property. \end{proof} Our main recursive construction for $n \geq 84$ uses frames of type $7^mv^1$ from Lemma~\ref{Frame 7^uv}. For smaller $n$, we also use frames of types $h^mv^1$ with $h \in \{6,8,9,11\}$, which also come from Lemma~\ref{Frame 7^uv}. Note that in light of Lemma~\ref{Small Cases}, in order to prove Theorem~\ref{Existence_t=2}, we need to obtain a $\mathrm{GHD}(n+2,3n)$ for each $n \in \{30, 35, 36, 37, 38, 40, 41, 42, 43\}$ and for all $n \geq 45$. \begin{lemma} \label{GHDs from uniform frames} There exists a $\mathrm{GHD}(n+2,3n)$ for $n \in \{30,$ $35,$ $36,$ $37,$ $38,$ $40,$ $41,$ $46,$ $49,$ $50,$ $51,$ $54,$ $55\}$.
\end{lemma} \begin{proof} We apply Corollary~\ref{FrameCorollary} to construct the required designs. For $n=30$, $35$, $36$, $40$, $49$, $54$ and $55$, we use GHFs of types $6^5$, $7^5$, $6^6$, $8^5$, $7^7$, $9^6$ and $11^5$ respectively. These frames all exist by Theorem~\ref{Uniform Frames}. For $n=37$, $38$, $41$ and $46$, we use GHFs of types $6^5 7^1$, $6^5 8^1$, $7^5 6^1$ and $8^5 6^1$ respectively (these all exist by Lemma~\ref{Frame 7^uv}). For $n=50$ and $51$, we use GHFs of types $7^6 8^1$ and $7^6 9^1$ respectively (these exist by Lemma~\ref{n=68 lemma} with $h=m=7$, $x=2$ and $w_1 = w = 1$ or $2$). Since there exist $\mathrm{GHD}(8,18)$, $\mathrm{GHD}(9,21)$, $\mathrm{GHD}(10,24)$, $\mathrm{GHD}(11,27)$ and $\mathrm{GHD}(13,33)$, each containing a sub-$\mathrm{GHD}(2,0)$ (by Lemmas~\ref{Small Cases} and \ref{Odd2hole}), the result follows. \end{proof} \begin{lemma} \label{Intermediate range} There exists a $\mathrm{GHD}(n+2,3n)$ if $n \in \{42, 43, 47, 48, 52, 53\}$ or $n \geq 56$. \end{lemma} \begin{proof} We write $n$ in one of the forms $7m+v$, $8m+v$, $9m+v$ or $11m+v$, where $m \geq 5$, either $m$ is odd or $m=8$, and $7 \leq v \leq \min(2(m-1), 20)$, in the same manner as we did in Lemma~\ref{t=1 recursion}. We then construct a frame of type $7^mv^1$, $8^mv^1$, $9^m v^1$ or $11^mv^1$ from Lemma~\ref{Frame 7^uv}, together with a $\mathrm{GHD}(9,21)$, $\mathrm{GHD}(10,24)$, $\mathrm{GHD}(11,27)$ or $\mathrm{GHD}(13,33)$ (each containing a $\mathrm{GHD}(2,0)$ as a subdesign) and a $\mathrm{GHD}(v+2,3v)$ (all of these exist by Lemma~\ref{Small Cases} or Lemma~\ref{Odd2hole}). Applying Corollary~\ref{FrameCorollary} with these frames and GHDs then produces the required $\mathrm{GHD}(n+2,3n)$. \end{proof} Lemmas~\ref{Small Cases}, \ref{Odd2hole}, \ref{GHDs from uniform frames} and \ref{Intermediate range} together prove Theorem~\ref{Existence_t=2}.
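The decompositions $n=hm+v$ underlying Lemma~\ref{Intermediate range} can be checked mechanically. The following Python sketch (an illustration only, not part of the constructions) confirms that every $n \in \{42, 43, 47, 48, 52, 53\}$ and every $n$ with $56 \leq n \leq 149$ admits at least one admissible decomposition; for larger $n$, the intervals $[7m+7,\,7m+20]$ over consecutive odd $m \geq 11$ cover all remaining integers, so the pattern continues:

```python
# Illustration only: search for decompositions n = h*m + v with
# h in {7, 8, 9, 11}, m >= 5 where m is odd or m = 8, and
# 7 <= v <= min(2*(m-1), 20), as in the proof of Lemma "Intermediate range".
# (The upper bound 20 on v reflects the small ingredient designs available.)

def decompositions(n):
    """All admissible triples (h, m, v) with n == h*m + v."""
    found = []
    for h in (7, 8, 9, 11):
        for m in range(5, n // h + 1):
            if m % 2 == 1 or m == 8:
                v = n - h * m
                if 7 <= v <= min(2 * (m - 1), 20):
                    found.append((h, m, v))
    return found

targets = [42, 43, 47, 48, 52, 53] + list(range(56, 150))
missing = [n for n in targets if not decompositions(n)]
print(missing)  # [] -- every such n admits at least one decomposition
```

For instance, `decompositions(42)` returns the single triple $(7,5,7)$, matching the frame of type $7^5 7^1$ used above.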
\section{GHDs across the spectrum} \label{Spectrum} In a non-trivial $\mathrm{GHD}(s,v)$ (i.e.\ where $v \neq 0$), we require that $2s+1 \leq v \leq 3s$. A $\mathrm{GHD}(s,3s)$ has no empty cells, while a $\mathrm{GHD}(s,2s+1)$ has $(s-1)/3$ empty cells in each row and column. Noting that $\lim_{s\rightarrow\infty} \frac{s-1}{3s} = \frac{1}{3}$, we see that the proportion of cells in a given row or column which are empty falls in the interval $[0,1/3)$. In this section, we prove that for any $\pi \in [0,5/18]$, there is a GHD whose proportion of empty cells in a row or column is arbitrarily close to $\pi$. Our main tool in this section is Lemma~\ref{Stinson 1} and its variant, Lemma~\ref{Stinson 1 TD}. As an ingredient for this construction, we require GHDs which have the $*$-property. We note that GHDs constructed by the Basic Frame Construction do not always have the $*$-property, even if the input designs do.\footnote{GHDs constructed by the Basic Frame Construction do have the $*$-property if (1) the frame used is a $\mathrm{GHF}_k$ of type $(s_1,g_1), \ldots, (s_n, g_n)$ with $g_i = (k-1)s_i$ for $i=1, \ldots, n$ and (2) for $i=1, \ldots, n$, the input designs are $\mathrm{GHD}(s_i, g_i + t)$s with a pairwise hole of size $t$ for some $t$. However, condition (1) is not satisfied by the frames in this paper.} Thus, in general, we cannot use the results of Section~\ref{TwoEmptySection} for this purpose. However, as previously noted, those GHDs constructed by transitive starter-adder methods do have the $*$-property. \begin{lemma} \label{1-empty-cell} Let $m \geq 6$. There exists a $\mathrm{GHD}^*(2^m,3\cdot 2^m-3)$. \end{lemma} \begin{proof} Since $2^{m-3}$ is a prime power and $m \geq 6$, there is an $\mathrm{RTD}(8,2^{m-3})$, with disjoint parallel classes $P_1, P_2, \ldots, P_{2^{m-3}}$. Form a $\mathrm{PBD}(2^m,\{2^{m-3},8\},1)$ by adding a single parallel class $P_{2^{m-3}+1}$ consisting of the groups of the RTD.
For the parallel class $P_{2^{m-3}+1}$, set $u_{2^{m-3}+1} =2^{m-3}-1$. For parallel classes $P_i$ with $1 \leq i \leq 2^{m-3}-1$, let $u_i=7$, and for parallel class $P_{2^{m-3}}$, let $u_{2^{m-3}}=4$. Since there exist a $\mathrm{GHD}^*(2^{m-3},3\cdot 2^{m-3})$ and a $\mathrm{GHD}^*(8,24)$ (both by Theorem~\ref{GHD*}(ii)), as well as a $\mathrm{GHD}^*(8,21)$ (by Lemma~\ref{revised WangDu starter adder}), and \[ 2 \cdot 2^m+1 + \left((2^{m-3}-1) + (2^{m-3}-1)(7)+4\right) = 3 \cdot 2^m-3 \] it follows by Lemma~\ref{Stinson 1 TD} that there exists a $\mathrm{GHD}^*(2^m,3 \cdot 2^m-3)$. \end{proof} \begin{lemma} \label{2-empty-cell} Let $m \geq 6$. There exists a $\mathrm{GHD}^*(2^m,3\cdot 2^m-6)$. \end{lemma} \begin{proof} The proof is similar to that of Lemma~\ref{1-empty-cell}, except that we take $u_{2^{m-3}}=1$ rather than 4, which requires a $\mathrm{GHD}^*(8,18)$ (given in Example~\ref{FinlandExample}) rather than a $\mathrm{GHD}^*(8,21)$. \end{proof} With these ingredients in hand, we now construct GHDs with side length a power of 2. \begin{lemma} \label{Power-2-even} Let $m \geq 7$ be odd, and let $A=\frac{5}{36}\cdot 2^{2m}+\frac{5}{18}\cdot 2^m-\frac{19}{9}$. For all $0 \leq \alpha \leq A$, there exists a $\mathrm{GHD}^*(2^{2m},3 \cdot 2^{2m}-6\alpha)$. \end{lemma} \begin{proof} Since $2^m$ is a prime power, there is a $\mathrm{PBD}(2^{2m},\{2^m\},1)$ (an affine plane of order $2^m$); note that the number of parallel classes is $2^m+1$. Let $x$ and $y$ be integers with $0 \leq x, y \leq 2^m+1$ and $x+y \leq 2^{m}+1$. In Lemma~\ref{Stinson 1}, for $x$ parallel classes take $u_i=1$, for $y$ parallel classes take $u_i=2^m-1$, and for the remaining $2^m+1-x-y$ parallel classes take $u_i=2^m-7$. Note that there exist a $\mathrm{GHD}^*(2^m, 2 \cdot 2^m + 2)$ (by Theorem~\ref{GHD*}(i)), a $\mathrm{GHD}^*(2^m, 3 \cdot 2^m)$ (by Theorem~\ref{GHD*}(ii)) and a $\mathrm{GHD}^*(2^m, 3 \cdot 2^m-6)$ (by Lemma~\ref{2-empty-cell}). 
Thus, by Lemma~\ref{Stinson 1}, there is a $\mathrm{GHD}^*(2^{2m}, 2 \cdot 2^{2m}+1+x+y(2^m-1)+(2^m+1-x-y)(2^m-7))$. Note that the number of points is \[ 2 \cdot 2^{2m}+1+x+y(2^m-1)+(2^m+1-x-y)(2^m-7) = 3 \cdot 2^{2m} -6(1+2^{m})-x(2^m-8)+6y. \] Let $f(m,x,y)$ denote this number of points. For a fixed $x$, varying $y$ between 0 and $2^m+1-x$ gives all values of the number of points congruent to $0\pmod{6}$ between $f(m,x,0)$ and $f(m,x,2^m+1-x)$. Noting that for fixed $m$, $f(m,x,y)$ is linear in $x$ and $y$, and solving $f(m,x,0)=f(m,x+1,2^m+1-(x+1))$ for $x$, we obtain the solution $x_0=\frac{5}{6} \cdot 2^m+\frac{4}{3}$ (which is an integer since $m$ is odd). For $x \leq x_0$, we have that $f(m,x,0) \leq f(m,x+1,2^m+1-(x+1))$, which means that we cover all possible values for the number of points congruent to $0\pmod{6}$ from $f(m,x_0+1,0)$ to $3 \cdot 2^{2m}$. Moreover, \begin{eqnarray*} f(m,x_0+1,0) &=& 3\cdot 2^{2m}-6(1+2^m)-\left(\frac{5}{6} \cdot 2^m+\frac{7}{3}\right)(2^m-8) \\ &=& 3 \cdot 2^{2m} - \left(6 + 6 \cdot 2^m + \frac{5}{6} \cdot 2^{2m}-\frac{20}{3} \cdot 2^m +\frac{7}{3} \cdot 2^m - \frac{56}{3}\right) \\ &=& 3 \cdot 2^{2m} - \left( \frac{5}{6} \cdot 2^{2m} + \frac{5}{3} \cdot 2^{m} - \frac{38}{3}\right) \\ &=& 3 \cdot 2^{2m} - 6 \left( \frac{5}{36} \cdot 2^{2m} + \frac{5}{18} \cdot 2^m - \frac{19}{9} \right) \\ &=& 3 \cdot 2^{2m} - 6A, \end{eqnarray*} and so the result is verified. \end{proof} \begin{lemma} \label{Power-2-odd} Let $m \geq 7$ be odd, and let $A'=\frac{5}{36} \cdot 2^{2m} + \frac{1}{9} \cdot 2^m - \frac{23}{18}$. For all $1 \leq \alpha \leq A'$, there exists a $\mathrm{GHD}^*(2^{2m},3\cdot 2^{2m}-6\alpha+3)$. \end{lemma} \begin{proof} The proof is similar to that of Lemma~\ref{Power-2-even}, except that on one parallel class we take $u_i=2^m-4$ instead of $u_i=2^m-7$. This requires a $\mathrm{GHD}^*(2^m,3 \cdot 2^m-3)$, which exists by Lemma~\ref{1-empty-cell}.
\end{proof} In Lemma~\ref{Power-2-even}, the number of empty cells in each row of the $\mathrm{GHD}^*(2^{2m},3 \cdot 2^{2m}-6\alpha)$ is $2\alpha$, while in Lemma~\ref{Power-2-odd}, the number of empty cells in each row of the \mbox{$\mathrm{GHD}^*(2^{2m},3 \cdot 2^{2m}-6\alpha+3)$} is $2\alpha-1$. Note that in Lemmas~\ref{Power-2-even} and~\ref{Power-2-odd}, for $\alpha=A$ and $\alpha=A'$, respectively, the number of points is less than $3 \cdot 2^{2m} - \frac{5}{6} \cdot 2^{2m}$, so that we have GHDs of side length $2^{2m}$ for any number of empty cells per row between 0 and $\frac{5}{18} \cdot 2^{2m}$, giving proportions of empty cells per row or column between 0 and $\frac{5}{18}$. By approximating any real number $\pi \in [0,5/18]$ by a dyadic rational, we can now construct a $\mathrm{GHD}$ such that the proportion of empty cells in a row is arbitrarily close to $\pi$. \begin{theorem} \label{proportion} Let $\pi \in [0,5/18]$. For any $\varepsilon>0$, there exist an odd integer $m$ and an integer $v$ for which there exists a $\mathrm{GHD}^*(2^{2m},v)$, $\mathcal{D}$, such that the proportion $\pi_0$ of empty cells in each row or column of $\mathcal{D}$ satisfies $|\pi-\pi_0|<\varepsilon$. \end{theorem} \begin{proof} Given $\pi$ and $\varepsilon$, there exists $m_0$ such that for all $m_1>m_0$, $|\pi-\lfloor 2^{m_1} \pi \rfloor / 2^{m_1}|<\varepsilon$. Let $m$ be an odd integer with $2m>m_0$, and let $\pi_0=\lfloor 2^{2m} \pi \rfloor / 2^{2m}$. Note that if $\pi=0$, then $\pi_0=0$. Otherwise, since $\pi-\frac{1}{2^{2m}} < \pi_0 \leq \pi$, we may also choose $m$ sufficiently large to ensure that $\pi_0 \in [0,\frac{5}{18}]$. Thus Lemmas~\ref{Power-2-odd} and~\ref{Power-2-even} enable us to construct a $\mathrm{GHD}^*$ with side length $2^{2m}$ whose proportion of empty cells in a row is $\pi_0$.
\end{proof} Theorem~\ref{proportion} shows that we can find a $\mathrm{GHD}$ with an arbitrary proportion of empty cells across five-sixths of the interval $[0,1/3)$ of possible proportions. This improves on previous work which has shown existence only very close to the ends of the spectrum. The impact of this result is discussed further in Section~\ref{concl}. We remark also that GHDs exist throughout the spectrum with more general side lengths than powers of~$2$. First, the methods of Lemmas~\ref{1-empty-cell}--\ref{Power-2-odd} work for primes other than~$2$, provided we can find appropriate ingredient $\mathrm{GHD}$s. Also, by a straightforward generalization of the Moore--MacNeish product construction for MOLS~\cite{MacNeish22, Moore}, the existence of a $\mathrm{GHD}(s,v)$ and $3~\mathrm{MOLS}(n)$ implies that there exists a $\mathrm{GHD}(ns,nv)$. See, for instance, Theorem 2.6 in \cite{YanYin} for a similar result. By applying this construction to the results of Lemmas~\ref{Power-2-even} and~\ref{Power-2-odd}, we can find GHDs with an arbitrary proportion of empty cells (for proportions in $[0,5/18]$) for side lengths of the form $2^{2m}n$. \section{Conclusion} \label{concl} The concept of generalized Howell designs brings together various classes of designs, from doubly resolvable BIBDs on one side of the spectrum to MOLS and SOMAs on the other. In this paper, we have defined generalized Howell designs in a way that encompasses several previously studied generalizations of Howell designs, and have attempted to unify disparate terminology for techniques used on both ends of the spectrum. In Section~\ref{ConstrSection}, we described several construction techniques for $\mathrm{GHD}$s, several of which generalize known constructions for Howell designs, doubly resolvable designs and MOLS. 
These construction techniques were used in Section~\ref{TwoEmptySection} to settle the existence of $\mathrm{GHD}(s,v)$ in the case that the number of empty cells in each row and column is one or two, with one possible exception (a $\mathrm{GHD}(7,18)$) (Theorems~\ref{Existence_t=1} and \ref{Existence_t=2}). The existence of $\mathrm{GHD}(s,v)$ with $e$ empty cells, where $e \in \{3, 4, \ldots, (s-3)/3 \} \setminus \{(s-6)/3\}$, remains open in general (although in \cite{DuAbelWang}, the case $e=(s-4)/3$ was solved for $e$ even with 15 possible exceptions). Designs with $e=0$ (a $\mathrm{GHD}(6,18)$), $e=1$ (a $\mathrm{GHD}(9,24)$) and $e=2$ (a $\mathrm{GHD}(12,30)$) are now known to exist (see Theorems~\ref{618}, \ref{Existence_t=1} and \ref{Existence_t=2}). A simpler interim result would be to show existence for an interval of $e$-values of a given fixed length. We conjecture that there exists a $\mathrm{GHD}(s,v)$ whenever the obvious necessary conditions are satisfied, with at most a small number of exceptions for each $e$. The main result of Section~\ref{Spectrum} is that for any $\pi \in [0,5/18]$, there exists a $\mathrm{GHD}$ whose proportion of empty cells in a given row or column is arbitrarily close to $\pi$. This is a powerful result. While it does not close the existence spectrum, it does provide strong evidence that this should be possible. Previous work has focused on the two ends of the spectrum: Kirkman squares and DRNKTSs at one end, and MOLS, SOMAs, and GHDs with one empty cell per row/column at the other; Theorem~\ref{proportion} shows existence of GHDs across five-sixths of the spectrum. The techniques of Section~\ref{Spectrum} can be used to give some examples of $\mathrm{GHD}$s with proportion greater than $5/18$, but necessarily bounded away from $1/3$. It remains a challenging open problem to show that there exist $\mathrm{GHD}$s whose proportion of empty cells per row or column can be arbitrarily close to any element of $[0,1/3)$.
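The counting that underpins Theorem~\ref{proportion} is elementary but fiddly. As a final illustrative check (ours, not part of the original argument), the following Python sketch verifies the simplified form of the point count $f(m,x,y)$ from Lemma~\ref{Power-2-even}, the integrality of $x_0$, and the value of the bound $A$:

```python
from fractions import Fraction

# Point count from Lemma "Power-2-even" (illustrative check only): an
# affine plane of order q = 2^m has q + 1 parallel classes; x of them
# get u_i = 1, y of them get u_i = q - 1, and the rest get u_i = q - 7.
def f(m, x, y):
    q = 2 ** m
    return 2 * q * q + 1 + x + y * (q - 1) + (q + 1 - x - y) * (q - 7)

# The simplified form 3*4^m - 6*(1 + 2^m) - x*(2^m - 8) + 6*y.
def f_simplified(m, x, y):
    q = 2 ** m
    return 3 * q * q - 6 * (1 + q) - x * (q - 8) + 6 * y

# The bound A = (5/36)*4^m + (5/18)*2^m - 19/9, computed exactly.
def A(m):
    q = 2 ** m
    return Fraction(5, 36) * q * q + Fraction(5, 18) * q - Fraction(19, 9)

# The crossover point x0 = (5/6)*2^m + 4/3, an integer for odd m.
def x0(m):
    q = 2 ** m
    return (5 * q + 8) // 6
```

For instance, with $m=7$ one can confirm numerically that the two forms of the point count agree, that $x_0 = 108$, and that $f(m, x_0+1, 0) = 3 \cdot 2^{2m} - 6A$, exactly as derived in the proof.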
\section{Acknowledgments} The authors would like to thank Esther Lamken for a number of useful comments and in particular for suggesting the intransitive starter-adder method, which was used for several of the smaller GHDs in this paper. \begin{appendices} \section{Starters and adders for small \mbox{\boldmath $\mathrm{GHD}(n+1,3n)$}} \label{1EmptyAppendix} First we give those obtained by transitive starters and adders: \begin{example} For $n=14$: \[ \begin{array}{@{}*{6}l@{}} 0_0 3_0 5_0 [10] & 1_1 3_1 7_1 [ 5] & \infty_0 4_0 4_1 [12] & \infty_1 8_0 9_1 [ 1] & \infty_2 9_0 11_1 [ 2] & \infty_3 11_0 14_1 [ 8] \\ \infty_4 13_0 2_1 [ 9] & \infty_5 10_0 0_1 [ 4] & \infty_6 14_0 5_1 [13] & \infty_7 6_0 13_1 [11] & \infty_8 1_0 10_1 [ 7] & \infty_9 2_0 12_1 [ 3] \\ \infty_{10} 12_0 8_1 [ 6] & \infty_{11} 7_0 6_1 [14] \end{array} \] \end{example} \begin{example} For $n=20$: \[ \begin{array}{@{}*{6}l@{}} 0_0 1_0 3_0 [14] & 2_1 6_1 8_1 [ 7] & \infty_0 8_0 9_1 [ 2] & \infty_1 9_0 11_1 [12] & \infty_2 7_0 10_1 [11] & \infty_3 11_0 15_1 [ 5] \\ \infty_4 13_0 18_1 [17] & \infty_5 18_0 3_1 [ 4] & \infty_6 10_0 17_1 [10] & \infty_7 14_0 1_1 [18] & \infty_8 12_0 0_1 [16] & \infty_9 15_0 4_1 [ 8] \\ \infty_{10} 5_0 16_1 [ 1] & \infty_{11} 16_0 7_1 [ 3] & \infty_{12} 20_0 12_1 [13] & \infty_{13} 4_0 19_1 [20] & \infty_{14} 19_0 14_1 [15] & \infty_{15} 17_0 13_1 [ 9] \\ \infty_{16} 2_0 20_1 [ 6] & \infty_{17} 6_0 5_1 [19] \end{array} \] \end{example} \begin{example} For $n=26$: \[ \begin{array}{@{}*{6}l@{}} 0_0 6_0 10_0 [18] & 3_1 11_1 16_1 [ 9] & \infty_0 8_0 8_1 [19] & \infty_1 14_0 15_1 [17] & \infty_2 22_0 24_1 [25] & \infty_3 11_0 14_1 [ 2] \\ \infty_4 13_0 17_1 [16] & \infty_5 17_0 22_1 [24] & \infty_6 7_0 13_1 [15] & \infty_7 2_0 9_1 [ 4] & \infty_8 24_0 5_1 [12] & \infty_9 1_0 10_1 [11] \\ \infty_{10} 15_0 25_1 [20] & \infty_{11} 12_0 23_1 [ 7] & \infty_{12} 19_0 4_1 [ 6] & \infty_{13} 16_0 2_1 [21] & \infty_{14} 20_0 7_1 [ 1] & \infty_{15} 18_0 6_1 [ 5] \\ \infty_{16} 23_0 
12_1 [ 3] & \infty_{17} 3_0 20_1 [14] & \infty_{18} 9_0 1_1 [23] & \infty_{19} 25_0 18_1 [13] & \infty_{20} 5_0 26_1 [10] & \infty_{21} 26_0 21_1 [ 8] \\ \infty_{22} 4_0 0_1 [26] & \infty_{23} 21_0 19_1 [22] \end{array} \] \end{example} \begin{example} For $n=32$: \[ \begin{array}{@{}*{6}l@{}} 0_0 4_0 7_0 [22] & 5_1 6_1 11_1 [11] & \infty_0 25_0 25_1 [ 8] & \infty_1 8_0 9_1 [17] & \infty_2 16_0 18_1 [25] & \infty_3 11_0 14_1 [ 1] \\ \infty_4 13_0 17_1 [14] & \infty_5 17_0 22_1 [29] & \infty_6 6_0 12_1 [15] & \infty_7 1_0 8_1 [ 3] & \infty_8 2_0 10_1 [32] & \infty_9 10_0 19_1 [13] \\ \infty_{10} 3_0 13_1 [28] & \infty_{11} 18_0 29_1 [18] & \infty_{12} 19_0 31_1 [21] & \infty_{13} 21_0 1_1 [23] & \infty_{14} 22_0 4_1 [26] & \infty_{15} 20_0 3_1 [ 4] \\ \infty_{16} 23_0 7_1 [16] & \infty_{17} 12_0 30_1 [31] & \infty_{18} 14_0 0_1 [ 5] & \infty_{19} 15_0 2_1 [ 2] & \infty_{20} 5_0 26_1 [27] & \infty_{21} 27_0 16_1 [20] \\ \infty_{22} 9_0 32_1 [ 7] & \infty_{23} 24_0 15_1 [ 6] & \infty_{24} 29_0 21_1 [24] & \infty_{25} 30_0 23_1 [12] & \infty_{26} 26_0 20_1 [ 9] & \infty_{27} 32_0 27_1 [19] \\ \infty_{28} 28_0 24_1 [10] & \infty_{29} 31_0 28_1 [30] \end{array} \] \end{example} \begin{example} For $n=38$: \[ \begin{array}{@{}*{6}l@{}} 0_0 6_0 7_0 [26] & 8_1 14_1 24_1 [13] & \infty_0 20_0 20_1 [20] & \infty_1 24_0 25_1 [ 1] & \infty_2 17_0 19_1 [ 4] & \infty_3 26_0 29_1 [ 2] \\ \infty_4 19_0 23_1 [23] & \infty_5 23_0 28_1 [30] & \infty_6 25_0 31_1 [19] & \infty_7 2_0 9_1 [29] & \infty_8 3_0 12_1 [27] & \infty_9 5_0 15_1 [ 7] \\ \infty_{10} 10_0 21_1 [31] & \infty_{11} 18_0 30_1 [25] & \infty_{12} 29_0 3_1 [ 9] & \infty_{13} 35_0 10_1 [24] & \infty_{14} 28_0 4_1 [28] & \infty_{15} 30_0 7_1 [17] \\ \infty_{16} 33_0 11_1 [22] & \infty_{17} 14_0 32_1 [21] & \infty_{18} 37_0 17_1 [12] & \infty_{19} 32_0 13_1 [36] & \infty_{20} 34_0 16_1 [32] & \infty_{21} 12_0 34_1 [33] \\ \infty_{22} 22_0 6_1 [14] & \infty_{23} 16_0 1_1 [34] & \infty_{24} 11_0 36_1 [11] & \infty_{25} 13_0 0_1 
[ 5] & \infty_{26} 38_0 26_1 [38] & \infty_{27} 9_0 37_1 [ 6] \\ \infty_{28} 15_0 5_1 [37] & \infty_{29} 8_0 38_1 [16] & \infty_{30} 1_0 33_1 [ 8] & \infty_{31} 27_0 22_1 [35] & \infty_{32} 31_0 27_1 [ 3] & \infty_{33} 21_0 18_1 [18] \\ \infty_{34} 4_0 2_1 [15] & \infty_{35} 36_0 35_1 [10] \end{array} \] \end{example} \begin{example} For $n=41$: \[ \begin{array}{@{}*{6}l@{}} 0_0 24_0 32_0 [28] & 10_1 11_1 13_1 [14] & \infty_0 26_0 26_1 [ 8] & \infty_1 8_0 9_1 [ 1] & \infty_2 16_0 18_1 [ 5] & \infty_3 29_0 32_1 [10] \\ \infty_4 18_0 22_1 [15] & \infty_5 22_0 27_1 [36] & \infty_6 17_0 23_1 [33] & \infty_7 35_0 0_1 [11] & \infty_8 4_0 12_1 [31] & \infty_9 5_0 14_1 [19] \\ \infty_{10} 6_0 16_1 [32] & \infty_{11} 9_0 20_1 [39] & \infty_{12} 3_0 15_1 [17] & \infty_{13} 11_0 24_1 [20] & \infty_{14} 33_0 5_1 [35] & \infty_{15} 2_0 17_1 [41] \\ \infty_{16} 21_0 37_1 [34] & \infty_{17} 31_0 6_1 [40] & \infty_{18} 15_0 33_1 [29] & \infty_{19} 30_0 7_1 [24] & \infty_{20} 25_0 3_1 [16] & \infty_{21} 19_0 40_1 [30] \\ \infty_{22} 14_0 36_1 [13] & \infty_{23} 23_0 4_1 [22] & \infty_{24} 20_0 2_1 [37] & \infty_{25} 36_0 19_1 [38] & \infty_{26} 1_0 29_1 [18] & \infty_{27} 10_0 39_1 [12] \\ \infty_{28} 12_0 1_1 [ 2] & \infty_{29} 38_0 28_1 [27] & \infty_{30} 40_0 31_1 [ 7] & \infty_{31} 7_0 41_1 [23] & \infty_{32} 41_0 34_1 [26] & \infty_{33} 27_0 21_1 [ 9] \\ \infty_{34} 13_0 8_1 [ 4] & \infty_{35} 34_0 30_1 [ 6] & \infty_{36} 28_0 25_1 [25] & \infty_{37} 37_0 35_1 [ 0] & \infty_{38} 39_0 38_1 [ 3] \end{array} \] \end{example} \begin{example} For $n=44$: \[ \begin{array}{@{}*{6}l@{}} 0_0 2_0 10_0 [30] & 16_1 30_1 36_1 [15] & \infty_0 11_0 11_1 [28] & \infty_1 25_0 26_1 [31] & \infty_2 26_0 28_1 [ 2] & \infty_3 21_0 24_1 [5] \\ \infty_4 16_0 20_1 [ 7] & \infty_5 27_0 32_1 [17] & \infty_6 29_0 35_1 [32] & \infty_7 1_0 8_1 [35] & \infty_8 4_0 12_1 [23] & \infty_9 8_0 17_1 [43] \\ \infty_{10} 19_0 29_1 [33] & \infty_{11} 33_0 44_1 [22] & \infty_{12} 30_0 42_1 [16] & \infty_{13} 36_0 
4_1 [12] & \infty_{14} 38_0 7_1 [29] & \infty_{15} 39_0 9_1 [10] \\ \infty_{16} 34_0 5_1 [ 4] & \infty_{17} 37_0 10_1 [27] & \infty_{18} 32_0 6_1 [42] & \infty_{19} 20_0 40_1 [13] & \infty_{20} 42_0 18_1 [24] & \infty_{21} 15_0 37_1 [26] \\ \infty_{22} 9_0 33_1 [38] & \infty_{23} 14_0 39_1 [39] & \infty_{24} 40_0 21_1 [19] & \infty_{25} 7_0 34_1 [36] & \infty_{26} 3_0 31_1 [21] & \infty_{27} 18_0 2_1 [44] \\ \infty_{28} 41_0 27_1 [41] & \infty_{29} 6_0 38_1 [ 3] & \infty_{30} 13_0 1_1 [37] & \infty_{31} 24_0 13_1 [34] & \infty_{32} 35_0 25_1 [25] & \infty_{33} 12_0 3_1 [ 8] \\ \infty_{34} 31_0 23_1 [11] & \infty_{35} 22_0 15_1 [ 9] & \infty_{36} 28_0 22_1 [ 6] & \infty_{37} 5_0 0_1 [20] & \infty_{38} 23_0 19_1 [40] & \infty_{39} 17_0 14_1 [18] \\ \infty_{40} 43_0 41_1 [14] & \infty_{41} 44_0 43_1 [ 1] \end{array} \] \end{example} Now the intransitive starters and adders: \begin{example} For $n=10$: \[ \begin{array}{@{}*{6}l@{}} 0_2 6_2 7_2 [ 0] & 7_0 8_0 3_2 [ 8] & 5_1 6_1 1_2 [ 2] & 9_0 6_0 8_2 [ 4] & 3_1 0_1 2_2 [ 6] & 3_0 1_0 4_2 [ 1] \\ 4_1 2_1 5_2 [ 9] & 4_0 0_0 8_1 [ 7] & 5_0 1_1 7_1 [ 3] & 2_0 9_1 9_2 [ R] & 9_0 2_1 9_2 [ C] \end{array} \] \end{example} \begin{example} For $n=12$: \[ \begin{array}{@{}*{6}l@{}} 3_2 6_2 10_2 [ 0] & 6_0 10_0 9_0 [ 4] & 10_1 2_1 1_1 [ 8] & 5_0 0_1 9_2 [ 2] & 2_0 7_1 11_2 [10] & 7_0 3_1 1_2 [ 1] \\ 4_0 8_1 2_2 [11] & 11_0 1_0 0_2 [ 5] & 4_1 6_1 5_2 [ 7] & 8_0 9_1 4_2 [ 3] & 0_0 11_1 7_2 [ 9] & 3_0 5_1 8_2 [ R] \\ 5_0 3_1 8_2 [ C] \end{array} \] \end{example} \begin{example} For $n=16$: \[ \begin{array}{@{}*{6}l@{}} 6_2 12_2 15_2 [ 0] & 0_0 2_0 7_0 [10] & 10_1 12_1 1_1 [ 6] & 4_0 10_0 5_2 [ 4] & 8_1 14_1 9_2 [12] & 8_0 9_1 1_2 [13] \\ 6_0 5_1 14_2 [ 3] & 3_0 6_1 8_2 [15] & 5_0 2_1 7_2 [ 1] & 12_0 3_1 0_2 [11] & 14_0 7_1 11_2 [ 5] & 13_0 1_0 4_2 [14] \\ 11_1 15_1 2_2 [ 2] & 11_0 0_1 10_2 [ 9] & 9_0 4_1 3_2 [ 7] & 15_0 13_1 13_2 [ R] & 13_0 15_1 13_2 [ C] \end{array} \] \end{example} \begin{example} For $n=18$: \[ 
\begin{array}{@{}*{6}l@{}} 1_2 2_2 12_2 [ 0] & 0_0 8_0 7_0 [12] & 12_1 2_1 1_1 [ 6] & 4_0 10_0 3_2 [ 4] & 8_1 14_1 7_2 [14] & 12_0 15_1 13_2 [ 1] \\ 16_0 13_1 14_2 [17] & 1_0 0_1 5_2 [ 5] & 5_0 6_1 10_2 [13] & 14_0 3_1 6_2 [ 3] & 6_0 17_1 9_2 [15] & 3_0 5_1 0_2 [ 8] \\ 13_0 11_1 8_2 [10] & 9_0 11_0 17_2 [16] & 7_1 9_1 15_2 [ 2] & 17_0 4_1 11_2 [11] & 15_0 10_1 4_2 [ 7] & 2_0 16_1 16_2 [ R] \\ 16_0 2_1 16_2 [ C] \end{array} \] \end{example} \begin{example} For $n=22$: \[ \begin{array}{@{}*{6}l@{}} 0_2 7_2 13_2 [ 0] & 0_0 4_0 7_0 [16] & 16_1 20_1 1_1 [ 6] & 8_0 14_0 11_2 [10] & 18_1 2_1 21_2 [12] & 10_0 11_1 4_2 [ 5] \\ 16_0 15_1 9_2 [17] & 19_0 0_1 6_2 [ 9] & 9_0 6_1 15_2 [13] & 2_0 9_1 14_2 [ 3] & 12_0 5_1 17_2 [19] & 1_0 19_1 18_2 [ 2] \\ 21_0 3_1 20_2 [20] & 11_0 21_1 12_2 [18] & 17_0 7_1 8_2 [ 4] & 20_0 3_0 5_2 [14] & 12_1 17_1 19_2 [ 8] & 15_0 13_1 1_2 [15] \\ 6_0 8_1 16_2 [ 7] & 5_0 14_1 3_2 [21] & 13_0 4_1 2_2 [ 1] & 18_0 10_1 10_2 [ R] & 10_0 18_1 10_2 [ C] \end{array} \] \end{example} \begin{example} For $n=28$: \[ \begin{array}{@{}*{6}l@{}} 4_2 12_2 17_2 [ 0] & 0_0 8_0 5_0 [16] & 16_1 24_1 21_1 [12] & 20_0 26_1 23_2 [ 8] & 6_0 0_1 3_2 [20] & 22_0 5_1 16_2 [19] \\ 24_0 13_1 7_2 [ 9] & 9_0 12_1 27_2 [23] & 7_0 4_1 22_2 [ 5] & 10_0 17_1 18_2 [ 1] & 18_0 11_1 19_2 [27] & 13_0 17_0 2_2 [18] \\ 3_1 7_1 20_2 [10] & 15_0 25_1 1_2 [ 4] & 1_0 19_1 5_2 [24] & 25_0 2_1 21_2 [21] & 23_0 18_1 14_2 [ 7] & 3_0 15_1 24_2 [17] \\ 4_0 20_1 13_2 [11] & 21_0 2_0 9_2 [ 6] & 27_1 8_1 15_2 [22] & 11_0 12_0 10_2 [26] & 9_1 10_1 8_2 [ 2] & 27_0 1_1 11_2 [15] \\ 16_0 14_1 26_2 [13] & 26_0 22_1 0_2 [25] & 19_0 23_1 25_2 [ 3] & 14_0 6_1 6_2 [ R] & 6_0 14_1 6_2 [ C] \end{array} \] \end{example} \begin{example} For $n=30$: \[ \begin{array}{@{}*{6}l@{}} 3_2 7_2 10_2 [ 0] & 0_0 8_0 5_0 [18] & 18_1 26_1 23_1 [12] & 18_0 24_1 22_2 [28] & 22_0 16_1 20_2 [ 2] & 28_0 9_1 15_2 [27] \\ 6_0 25_1 12_2 [ 3] & 27_0 0_1 2_2 [23] & 23_0 20_1 25_2 [ 7] & 10_0 17_1 11_2 [17] & 4_0 27_1 28_2 [13] & 9_0 
13_0 21_2 [ 6] \\ 15_1 19_1 27_2 [24] & 19_0 29_1 26_2 [22] & 21_0 11_1 18_2 [ 8] & 7_0 12_1 23_2 [21] & 3_0 28_1 14_2 [ 9] & 1_0 13_1 16_2 [ 1] \\ 14_0 2_1 17_2 [29] & 15_0 24_0 4_2 [20] & 5_1 14_1 24_2 [10] & 25_0 26_0 9_2 [26] & 21_1 22_1 5_2 [ 4] & 29_0 1_1 19_2 [11] \\ 12_0 10_1 0_2 [19] & 2_0 6_1 1_2 [ 5] & 11_0 7_1 6_2 [25] & 20_0 3_1 29_2 [14] & 17_0 4_1 13_2 [16] & 16_0 8_1 8_2 [ R] \\ 8_0 16_1 8_2 [ C] \end{array} \] \end{example} \begin{example} For $n=36$: \[ \begin{array}{@{}*{6}l@{}} 6_2 18_2 35_2 [ 0] & 0_0 4_0 7_0 [26] & 26_1 30_1 33_1 [10] & 34_0 4_1 3_2 [14] & 18_0 12_1 17_2 [22] & 32_0 7_1 33_2 [35] \\ 6_0 31_1 32_2 [ 1] & 35_0 2_1 15_2 [25] & 27_0 24_1 4_2 [11] & 28_0 35_1 19_2 [31] & 30_0 23_1 14_2 [ 5] & 21_0 25_1 28_2 [ 8] \\ 33_0 29_1 0_2 [28] & 25_0 1_1 31_2 [30] & 31_0 19_1 25_2 [ 6] & 15_0 20_1 2_2 [27] & 11_0 6_1 29_2 [ 9] & 3_0 13_1 1_2 [ 7] \\ 20_0 10_1 8_2 [29] & 19_0 28_1 21_2 [20] & 12_0 3_1 5_2 [16] & 17_0 18_1 13_2 [34] & 16_0 15_1 11_2 [ 2] & 13_0 27_1 24_2 [19] \\ 10_0 32_1 7_2 [17] & 9_0 22_0 34_2 [12] & 21_1 34_1 10_2 [24] & 29_0 2_0 12_2 [15] & 8_1 17_1 27_2 [21] & 26_0 9_1 30_2 [32] \\ 5_0 22_1 26_2 [ 4] & 1_0 23_0 9_2 [13] & 14_1 0_1 22_2 [23] & 8_0 14_0 23_2 [33] & 5_1 11_1 20_2 [ 3] & 24_0 16_1 16_2 [ R] \\ 16_0 24_1 16_2 [ C] \end{array} \] \end{example} \begin{example} For $n=46$: \[ \begin{array}{@{}*{6}l@{}} 4_2 17_2 35_2 [ 0] & 0_0 4_0 7_0 [36] & 36_1 40_1 43_1 [10] & 44_0 4_1 45_2 [22] & 26_0 20_1 21_2 [24] & 40_0 5_1 44_2 [25] \\ 30_0 19_1 23_2 [21] & 45_0 2_1 1_2 [45] & 1_0 44_1 0_2 [ 1] & 42_0 3_1 11_2 [ 3] & 6_0 45_1 14_2 [43] & 37_0 41_1 27_2 [ 2] \\ 43_0 39_1 29_2 [44] & 21_0 33_1 12_2 [16] & 3_0 37_1 28_2 [30] & 17_0 22_1 5_2 [11] & 33_0 28_1 16_2 [35] & 13_0 23_1 20_2 [39] \\ 16_0 6_1 13_2 [ 7] & 25_0 34_1 7_2 [32] & 20_0 11_1 39_2 [14] & 31_0 32_1 37_2 [28] & 14_0 13_1 19_2 [18] & 41_0 9_1 30_2 [13] \\ 22_0 8_1 43_2 [33] & 15_0 32_0 42_2 [ 6] & 21_1 38_1 2_2 [40] & 19_0 24_0 36_2 [ 5] & 24_1 29_1 41_2 [41] 
& 38_0 7_1 18_2 [ 4] \\ 11_0 42_1 22_2 [42] & 27_0 9_0 40_2 [37] & 18_1 0_1 31_2 [ 9] & 8_0 18_0 38_2 [17] & 25_1 35_1 9_2 [29] & 36_0 12_1 6_2 [27] \\ 39_0 17_1 33_2 [19] & 10_0 31_1 34_2 [20] & 5_0 30_1 8_2 [26] & 28_0 1_1 15_2 [34] & 35_0 16_1 3_2 [12] & 2_0 15_1 24_2 [ 8] \\ 23_0 10_1 32_2 [38] & 12_0 14_1 10_2 [15] & 29_0 27_1 25_2 [31] & 34_0 26_1 26_2 [ R] & 26_0 34_1 26_2 [ C] \end{array} \] \end{example} \section{Starters and adders for small \mbox{\boldmath $\mathrm{GHD}(n+2,3n)$}} \label{starters and adders} First we give those obtained by transitive starters and adders: \begin{example} For $n=8$: \[ \begin{array}{@{}*{8}l@{}} 0_{1} 6_{1} 7_{1} [2] & 4_{1} 2_{1} 7_{0} [3] & 6_{0} 8_{0} 8_{1} [8] & 0_{0} 1_{0} 4_{0} [7] & \infty_0 2_{0} 3_{1} [1] & \infty_1 5_{0} 9_{1} [4] & \infty_2 3_{0} 1_{1} [9] & \infty_3 9_{0} 5_{1} [6] \end{array} \] \end{example} \begin{example} For $n=9$: \[ \begin{array}{@{}*{7}l@{}} 4_{0} 5_{1} 3_{1} [8] & 10_{0} 2_{0} 3_{0} [4] & 6_{0} 8_{0} 6_{1} [3] & 9_{1} 10_{1} 2_{1} [6] & \infty_0 0_{0} 4_{1} [2] & \infty_1 7_{0} 1_{1} [9] & \infty_2 9_{0} 0_{1} [1] \\ \infty_3 5_{0} 8_{1} [10] & \infty_4 1_{0} 7_{1} [7] & \end{array} \] \end{example} \begin{example} For $n=10$: \[ \begin{array}{@{}*{7}l@{}} 3_{1} 10_{1} 11_{1} [3] & 3_{0} 4_{0} 6_{0} [10] & 1_{0} 8_{0} 4_{1} [7] & 7_{0} 5_{1} 8_{1} [4] & \infty_{0} 0_{0} 0_{1} [5] & \infty_{1} 11_{0} 6_{1} [1] & \infty_{2} 10_{0} 9_{1} [11] \\ \infty_{3} 5_{0} 2_{1} [2] & \infty_{4} 2_{0} 7_{1} [8] & \infty_{5} 9_{0} 1_{1} [9] \end{array} \] \end{example} \begin{example} For $n=11$: \[ \begin{array}{lllllll} 3_{1} 11_{0} 0_{0} [7] & 9_{0} 2_{0} 5_{0} [4] & 10_{1} 11_{1} 2_{1} [6] & 4_{1} 10_{0} 6_{1}[1] & \infty_0 3_{0} 0_{1}[0] & \infty_1 12_{0} 5_{1} [9] & \infty_2 8_{0} 9_{1} [2] \\ \infty_3 6_{0} 1_{1} [8] & \infty_4 4_{0} 8_{1} [11] & \infty_5 1_{0} 12_{1} [3] & \infty_6 7_{0} 7_{1} [5] \end{array} \] \end{example} \begin{example} For $n=12$: \[ \begin{array}{@{}*{7}l@{}} 3_{1} 
7_{0} 2_{0} [2] & 8_{1} 0_{1} 10_{1} [6] & 1_{1} 6_{1} 6_{0} [11] & 1_{0} 3_{0} 4_{0} [9] & \infty_{0} 11_{0} 13_{1} [5] & \infty_{1} 0_{0} 5_{1} [8] \\ \infty_{2} 5_{0} 9_{1} [1] & \infty_{3} 8_{0} 2_{1} [13] & \infty_{4} 10_{0} 7_{1} [4] & \infty_{5} 12_{0} 4_{1} [3] & \infty_{6} 13_{0} 11_{1} [12] & \infty_{7} 9_{0} 12_{1} [10] & \end{array} \] \end{example} \begin{example} For $n=13$: \[ \begin{array}{lllllll} 0_{0} 4_{1} 8_{1} [4] & 6_{0} 7_{0} 9_{0} [8] & 11_{0} 1_{0} 2_{1} [7] & 5_{1} 6_{1} 11_{1} [9] & \infty_0 13_{0} 7_{1} [0] & \infty_1 8_{0} 3_{1} [14] \\ \infty_2 4_{0} 1_{1} [5] & \infty_3 14_{0} 13_{1} [12] & \infty_4 12_{0} 0_{1} [13] & \infty_5 10_{0} 12_{1} [6] & \infty_6 5_{0} 10_{1} [1] & \infty_7 3_{0} 14_{1} [2] \\ \infty_8 2_{0} 9_{1} [10] \end{array} \] \end{example} \begin{example} For $n=14$: \[ \begin{array}{@{}*{7}l@{}} 6_{0} 9_{1} 8_{1} [5] & 9_{0} 13_{0} 0_{0} [7] & 10_{0} 5_{1} 11_{0} [3] & 0_{1} 4_{1} 6_{1} [1] & \infty_0 3_{0} 12_{1} [6] & \infty_1 12_{0} 10_{1} [9] \\ \infty_2 8_{0} 14_{1} [14] & \infty_3 15_{0} 3_{1} [13] & \infty_4 4_{0} 11_{1} [4] & \infty_5 7_{0} 15_{1} [11] & \infty_6 1_{0} 2_{1} [2] & \infty_7 2_{0} 7_{1} [15] \\ \infty_8 5_{0} 1_{1} [10] & \infty_9 14_{0} 13_{1} [12] \end{array} \] \end{example} \begin{example} For $n=15$: \[ \begin{array}{@{}*{7}l@{}} 10_{1} 8_{1} 16_{1} [7] & 16_{0} 4_{0} 13_{1} [11] & 7_{0} 6_{1} 5_{1} [4] & 2_{0} 13_{0} 0_{0} [16] & \infty_0 14_{0} 4_{1} [0] & \infty_1 15_{0} 1_{1} [15] \\ \infty_2 10_{0} 11_{1} [9] & \infty_3 6_{0} 0_{1} [14] & \infty_4 12_{0} 7_{1} [12] & \infty_5 9_{0} 14_{1} [8] & \infty_6 1_{0} 3_{1} [5] & \infty_7 3_{0} 9_{1} [2] \\ \infty_8 8_{0} 12_{1} [1] & \infty_{9} 5_{0} 15_{1} [3] & \infty_{10} 11_{0} 2_{1} [10] \end{array} \] \end{example} \begin{example} For $n=16$: \[ \begin{array}{lllllll} 0_{0} 16_{0} 4_{0} [10] & 17_{1} 2_{1} 6_{0} [14] & 12_{1} 2_{0} 17_{0} [5] & 13_{1} 0_{1} 1_{1} [7] & \infty_{0} 14_{0} 14_{1} [1] & \infty_{1} 15_{0} 5_{1} [4] \\
\infty_{2} 13_{0} 4_{1} [17] & \infty_{3} 11_{0} 15_{1} [13] & \infty_{4} 9_{0} 3_{1} [8] & \infty_{5} 10_{0} 16_{1} [6] & \infty_{6} 3_{0} 10_{1} [2] & \infty_{7} 12_{0} 9_{1} [15] \\ \infty_{8} 7_{0} 8_{1} [11] & \infty_{9} 8_{0} 11_{1} [3] & \infty_{10} 1_{0} 6_{1} [12] & \infty_{11} 5_{0} 7_{1} [16] \end{array} \] \end{example} \begin{example} For $n=17$: \[ \begin{array}{lllllll} 11_{1} 3_{1} 7_{0} [2] & 10_{0} 5_{1} 14_{0} [5] & 0_{1} 10_{1} 15_{1} [16] & 11_{0} 1_{0} 3_{0} [15] & \infty_0 2_{0} 8_{1} [0] & \infty_1 17_{0} 9_{1} [8] \\ \infty_2 9_{0} 12_{1} [11] & \infty_3 4_{0} 4_{1} [10] & \infty_4 0_{0} 18_{1} [12] & \infty_5 18_{0} 1_{1} [18] & \infty_6 12_{0} 2_{1} [1] & \infty_7 15_{0} 16_{1} [9] \\ \infty_8 16_{0} 13_{1} [7] & \infty_{9} 6_{0} 14_{1} [4] & \infty_{10} 5_{0} 17_{1} [17] & \infty_{11} 13_{0} 7_{1} [14] & \infty_{12} 8_{0} 6_{1} [3] \end{array} \] \end{example} \begin{example} For $n=18$: \[ \begin{array}{lllllll} 17_{0} 18_{1} 17_{1} [7] & 1_{0} 15_{0} 10_{0} [17] & 19_{0} 16_{0} 1_{1} [1] & 0_{1} 2_{1} 16_{1} [15] & \infty_0 9_{0} 5_{1} [4] & \infty_1 8_{0} 6_{1} [13] \\ \infty_2 18_{0} 11_{1} [12] & \infty_3 6_{0} 3_{1} [5] & \infty_4 5_{0} 4_{1} [3] & \infty_5 14_{0} 9_{1} [11] & \infty_6 4_{0} 13_{1} [19] & \infty_7 2_{0} 12_{1} [14] \\ \infty_8 13_{0} 7_{1} [9] & \infty_9 3_{0} 14_{1} [16] & \infty_{10} 7_{0} 10_{1} [8] & \infty_{11} 11_{0} 15_{1} [18] & \infty_{12} 12_{0} 19_{1} [2] & \infty_{13} 0_{0} 8_{1} [6] \end{array} \] \end{example} \begin{example} For $n=19$: \[ \begin{array}{llllll} 0_{0} 1_{0} 5_{0} [0] & 16_{1} 3_{1} 7_{0} [1] & 18_{0} 16_{0} 8_{1} [16] & 6_{1} 7_{1} 9_{1} [12] & \infty_0 20_{0} 20_{1} [3] & \infty_1 12_{0} 14_{1} [2] \\ \infty_2 17_{0} 4_{1} [10] & \infty_3 9_{0} 2_{1} [6] & \infty_4 10_{0} 11_{1} [9] & \infty_5 19_{0} 17_{1} [5] & \infty_6 14_{0} 0_{1} [11] & \infty_7 4_{0} 1_{1} [14] \\ \infty_8 3_{0} 18_{1} [13] & \infty_{9} 2_{0} 12_{1} [15] & \infty_{10} 13_{0} 19_{1} [7] & \infty_{11} 15_{0} 
10_{1} [18] & \infty_{12} 6_{0} 5_{1} [4] & \infty_{13} 11_{0} 15_{1} [19] \\ \infty_{14} 8_{0} 13_{1} [20] \end{array} \] \end{example} \begin{example} For $n=20$: \[ \begin{array}{@{}*{6}l@{}} 1_{1} 2_{1} 9_{0} [15] & 16_{1} 5_{0} 10_{0} [21] & 0_{0} 20_{0} 4_{0} [3] & 7_{1} 17_{1} 21_{1} [5] & \infty_0 15_{0} 18_{1} [1] & \infty_1 18_{0} 15_{1} [18] \\ \infty_2 11_{0} 5_{1} [4] & \infty_3 2_{0} 9_{1} [9] & \infty_4 13_{0} 4_{1} [19] & \infty_5 1_{0} 10_{1} [20] & \infty_6 16_{0} 20_{1} [12] & \infty_7 14_{0} 13_{1} [16] \\ \infty_8 7_{0} 3_{1} [10] & \infty_9 12_{0} 0_{1} [6] & \infty_{10} 6_{0} 14_{1} [7] & \infty_{11} 3_{0} 8_{1} [17] & \infty_{12} 17_{0} 12_{1} [2] & \infty_{13} 8_{0} 6_{1} [14] \\ \infty_{14} 19_{0} 19_{1} [8] & \infty_{15} 21_{0} 11_{1} [13] & \end{array} \] \end{example} \begin{example} For $n=21$: \[ \begin{array}{llllll} 22_{0} 5_{0} 7_{0} [13] & 21_{0} 14_{0} 3_{1} [5] & 20_{1} 8_{1} 13_{0} [15] & 12_{1} 4_{1} 5_{1} [2] & \infty_0 11_{0} 9_{1} [0] & \infty_1 15_{0} 17_{1} [1] \\ \infty_2 0_{0} 13_{1} [4] & \infty_3 12_{0} 0_{1} [11] & \infty_4 3_{0} 22_{1} [3] & \infty_5 17_{0} 2_{1} [8] & \infty_6 6_{0} 15_{1} [9] & \infty_7 8_{0} 11_{1} [16] \\ \infty_8 19_{0} 18_{1} [21] & \infty_{9} 2_{0} 19_{1} [7] & \infty_{10} 4_{0} 14_{1} [6] & \infty_{11} 10_{0} 10_{1} [12] & \infty_{12} 1_{0} 16_{1} [20] & \infty_{13} 16_{0} 7_{1} [14] \\ \infty_{14} 20_{0} 21_{1} [17] & \infty_{15} 9_{0} 6_{1} [22] & \infty_{16} 18_{0} 1_{1} [18] \end{array} \] \end{example} \begin{example} For $n=22$: \[ \begin{array}{@{}*{6}l@{}} 13_{1} 9_{1} 20_{0} [3] & 10_{0} 23_{0} 7_{1} [15] & 17_{1} 22_{1} 19_{1} [9] & 19_{0} 0_{0} 1_{0} [21] & \infty_0 3_{0} 18_{1} [1] & \infty_1 4_{0} 6_{1} [2] \\ \infty_2 11_{0} 10_{1} [7] & \infty_3 14_{0} 4_{1} [5] & \infty_4 13_{0} 5_{1} [20] & \infty_5 17_{0} 20_{1} [14] & \infty_6 12_{0} 12_{1} [17] & \infty_7 15_{0} 16_{1} [11] \\ \infty_8 22_{0} 2_{1} [22] & \infty_9 9_{0} 21_{1} [23] & \infty_{10} 18_{0} 3_{1} [18] & 
\infty_{11} 8_{0} 14_{1} [16] & \infty_{12} 21_{0} 8_{1} [6] & \infty_{13} 16_{0} 23_{1} [19] \\ \infty_{14} 6_{0} 11_{1} [4] & \infty_{15} 7_{0} 1_{1} [10] & \infty_{16} 2_{0} 0_{1} [13] & \infty_{17} 5_{0} 15_{1} [8] & \end{array} \] \end{example} \begin{example} For $n=23$: \[ \begin{array}{@{}*{6}l@{}} 12_{0} 8_{0} 21_{1} [1] & 19_{1} 17_{0} 20_{0} [6] & 14_{1} 13_{0} 7_{0} [14] & 2_{1} 11_{1} 16_{1} [7] & \infty_0 18_{0} 5_{1} [0] & \infty_1 6_{0} 12_{1} [2] \\ \infty_2 3_{0} 22_{1} [9] & \infty_3 19_{0} 17_{1} [10] & \infty_4 5_{0} 20_{1} [17] & \infty_5 22_{0} 13_{1} [19] & \infty_6 24_{0} 7_{1} [8] & \infty_7 1_{0} 18_{1} [24] \\ \infty_8 15_{0} 1_{1} [15] & \infty_9 0_{0} 10_{1} [3] & \infty_{10} 21_{0} 24_{1} [21] & \infty_{11} 10_{0} 15_{1} [4] & \infty_{12} 9_{0} 4_{1} [22] & \infty_{13} 11_{0} 8_{1} [13] \\ \infty_{14} 14_{0} 3_{1} [5] & \infty_{15} 23_{0} 23_{1} [12] & \infty_{16} 2_{0} 6_{1} [18] & \infty_{17} 16_{0} 9_{1} [20] & \infty_{18} 4_{0} 0_{1} [11] & \end{array} \] \end{example} \begin{example} For $n=24$: \[ \begin{array}{@{}*{6}l@{}} 4_{0} 24_{0} 23_{0} [12] & 14_{0} 0_{0} 5_{0} [14] & 4_{1} 15_{1} 16_{1} [19] & 8_{1} 12_{1} 17_{1} [7] & \infty_0 21_{0} 13_{1} [1] & \infty_1 1_{0} 10_{1} [2] \\ \infty_2 15_{0} 25_{1} [5] & \infty_3 13_{0} 21_{1} [18] & \infty_4 16_{0} 20_{1} [11] & \infty_5 25_{0} 0_{1} [22] & \infty_6 11_{0} 14_{1} [23] & \infty_7 12_{0} 19_{1} [6] \\ \infty_8 20_{0} 6_{1} [10] & \infty_9 6_{0} 11_{1} [17] & \infty_{10} 17_{0} 23_{1} [21] & \infty_{11} 22_{0} 24_{1} [3] & \infty_{12} 3_{0} 3_{1} [4] & \infty_{13} 2_{0} 1_{1} [9] \\ \infty_{14} 8_{0} 22_{1} [24] & \infty_{15} 7_{0} 18_{1} [8] & \infty_{16} 18_{0} 7_{1} [25] & \infty_{17} 10_{0} 5_{1} [16] & \infty_{18} 9_{0} 2_{1} [15] & \infty_{19} 19_{0} 9_{1} [20] \end{array} \] \end{example} \begin{example} For $n=25$: \[ \begin{array}{@{}*{6}l@{}} 15_{1} 3_{1} 5_{1} [19] & 7_{0} 1_{0} 11_{0} [9] & 16_{1} 21_{1} 25_{1} [25] & 14_{0} 9_{0} 16_{0} [8] & \infty_0 13_{0} 
11_{1} [0] & \infty_1 2_{0} 17_{1} [1] \\ \infty_2 12_{0} 7_{1} [2] & \infty_3 5_{0} 23_{1} [3] & \infty_4 21_{0} 22_{1} [5] & \infty_5 0_{0} 4_{1} [6] & \infty_6 25_{0} 9_{1} [23] & \infty_7 18_{0} 26_{1} [18] \\ \infty_8 6_{0} 13_{1} [26] & \infty_9 20_{0} 19_{1} [14] & \infty_{10} 23_{0} 8_{1} [22] & \infty_{11} 24_{0} 18_{1} [7] & \infty_{12} 26_{0} 1_{1} [12] & \infty_{13} 3_{0} 0_{1} [16] \\ \infty_{14} 22_{0} 12_{1} [17] & \infty_{15} 17_{0} 10_{1} [10] & \infty_{16} 8_{0} 14_{1} [21] & \infty_{17} 10_{0} 6_{1} [15] & \infty_{18} 15_{0} 2_{1} [13] & \infty_{19} 4_{0} 20_{1} [11] \\ \infty_{20} 19_{0} 24_{1} [4] & \end{array} \] \end{example} \begin{example} For $n=26$: \[ \begin{array}{@{}*{6}l@{}} 23_{0} 23_{1} 1_{1} [11] & 8_{0} 9_{0} 20_{0} [5] & 11_{0} 3_{0} 7_{1} [15] & 16_{1} 5_{1} 6_{1} [25] & \infty_0 1_{0} 8_{1} [1] & \infty_1 2_{0} 21_{1} [2] \\ \infty_2 13_{0} 2_{1} [3] & \infty_3 0_{0} 10_{1} [9] & \infty_4 27_{0} 11_{1} [4] & \infty_5 25_{0} 15_{1} [20] & \infty_6 24_{0} 17_{1} [12] & \infty_7 12_{0} 26_{1} [27] \\ \infty_8 16_{0} 18_{1} [24] & \infty_9 7_{0} 22_{1} [22] & \infty_{10} 19_{0} 24_{1} [8] & \infty_{11} 5_{0} 14_{1} [10] & \infty_{12} 15_{0} 3_{1} [7] & \infty_{13} 17_{0} 0_{1} [21] \\ \infty_{14} 26_{0} 25_{1} [23] & \infty_{15} 10_{0} 4_{1} [13] & \infty_{16} 14_{0} 27_{1} [19] & \infty_{17} 6_{0} 9_{1} [18] & \infty_{18} 4_{0} 12_{1} [16] & \infty_{19} 18_{0} 19_{1} [17] \\ \infty_{20} 21_{0} 13_{1} [26] & \infty_{21} 22_{0} 20_{1} [6] & \end{array} \] \end{example} \begin{example} For $n=27$: \[ \begin{array}{@{}*{6}l@{}} 4_{0} 8_{0} 22_{0} [17] & 11_{0} 19_{0} 22_{1} [11] & 5_{1} 18_{1} 6_{0} [14] & 2_{1} 10_{1} 4_{1} [16] & \infty_0 24_{0} 16_{1} [0] & \infty_1 14_{0} 27_{1} [1] \\ \infty_2 25_{0} 14_{1} [3] & \infty_3 9_{0} 26_{1} [4] & \infty_4 27_{0} 8_{1} [2] & \infty_5 10_{0} 19_{1} [6] & \infty_6 26_{0} 20_{1} [7] & \infty_7 0_{0} 24_{1} [27] \\ \infty_8 16_{0} 21_{1} [21] & \infty_9 18_{0} 9_{1} [5] & \infty_{10} 20_{0} 
17_{1} [12] & \infty_{11} 23_{0} 25_{1} [13] & \infty_{12} 12_{0} 13_{1} [22] & \infty_{13} 2_{0} 0_{1} [15] \\ \infty_{14} 3_{0} 28_{1} [8] & \infty_{15} 13_{0} 6_{1} [28] & \infty_{16} 1_{0} 15_{1} [25] & \infty_{17} 15_{0} 1_{1} [23] & \infty_{18} 17_{0} 23_{1} [18] & \infty_{19} 7_{0} 7_{1} [24] \\ \infty_{20} 28_{0} 3_{1} [20] & \infty_{21} 21_{0} 11_{1} [26] & \infty_{22} 5_{0} 12_{1} [9] & \end{array} \] \end{example} \begin{example} For $n=28$: \[ \begin{array}{@{}*{6}l@{}} 22_{0} 3_{0} 6_{0} [26] & 27_{0} 2_{0} 28_{0} [24] & 10_{1} 18_{1} 13_{1} [23] & 8_{1} 14_{1} 27_{1} [17] & \infty_0 12_{0} 25_{1} [1] & \infty_1 10_{0} 22_{1} [2] \\ \infty_2 16_{0} 2_{1} [3] & \infty_3 1_{0} 19_{1} [4] & \infty_4 5_{0} 5_{1} [5] & \infty_5 15_{0} 24_{1} [8] & \infty_6 24_{0} 29_{1} [13] & \infty_7 4_{0} 23_{1} [29] \\ \infty_8 25_{0} 12_{1} [9] & \infty_9 7_{0} 17_{1} [20] & \infty_{10} 14_{0} 6_{1} [14] & \infty_{11} 26_{0} 28_{1} [18] & \infty_{12} 13_{0} 16_{1} [12] & \infty_{13} 0_{0} 7_{1} [11] \\ \infty_{14} 18_{0} 11_{1} [28] & \infty_{15} 29_{0} 26_{1} [21] & \infty_{16} 20_{0} 4_{1} [25] & \infty_{17} 21_{0} 20_{1} [10] & \infty_{18} 17_{0} 21_{1} [22] & \infty_{19} 9_{0} 0_{1} [27] \\ \infty_{20} 23_{0} 1_{1} [7] & \infty_{21} 8_{0} 3_{1} [16] & \infty_{22} 19_{0} 15_{1} [19] & \infty_{23} 11_{0} 9_{1} [6] & \end{array} \] \end{example} \begin{example} For $n=29$: \[ \begin{array}{@{}*{6}l@{}} 12_{0} 17_{0} 13_{1} [29] & 9_{0} 10_{0} 0_{0} [13] & 7_{1} 23_{1} 25_{1} [7] & 27_{1} 30_{1} 2_{0} [16] & \infty_{0} 27_{0} 19_{1} [0] & \infty_{1} 20_{0} 2_{1} [1] \\ \infty_{2} 15_{0} 22_{1} [2] & \infty_{3} 26_{0} 5_{1} [3] & \infty_{4} 3_{0} 18_{1} [4] & \infty_{5} 28_{0} 15_{1} [5] & \infty_{6} 22_{0} 26_{1} [9] & \infty_{7} 21_{0} 29_{1} [11] \\ \infty_{8} 14_{0} 28_{1} [10] & \infty_{9} 18_{0} 8_{1} [17] & \infty_{10} 6_{0} 17_{1} [30] & \infty_{11} 16_{0} 14_{1} [14] & \infty_{12} 24_{0} 12_{1} [19] & \infty_{13} 29_{0} 20_{1} [21] \\ \infty_{14} 19_{0} 0_{1} 
[18] & \infty_{15} 8_{0} 24_{1} [12] & \infty_{16} 25_{0} 3_{1} [20] & \infty_{17} 4_{0} 4_{1} [22] & \infty_{18} 7_{0} 10_{1} [27] & \infty_{19} 30_{0} 1_{1} [26] \\ \infty_{20} 23_{0} 9_{1} [24] & \infty_{21} 11_{0} 16_{1} [28] & \infty_{22} 5_{0} 11_{1} [6] & \infty_{23} 1_ {0} 21_{1} [8] & \infty_{24} 13_{0} 6_{1} [15] & \end{array} \] \end{example} \begin{example} For $n=31$: \[ \begin{array}{@{}*{6}l@{}} 11_{0} 19_{0} 13_{0} [28] & 22_{1} 17_{1} 24_{1} [26] & 4_{0} 24_{0} 27_{0} [9] & 28_{1} 19_{1} 31_{1} [11] & \infty_{0} 1_{0} 5_{1} [0] & \infty_{1} 23_{0} 30_{1} [1] \\ \infty_{2} 25_{0} 26_{1} [2] & \infty_{3} 17_{0} 10_{1} [3] & \infty_{4} 31_{0} 8_{1} [4] & \infty_{5} 5_{0} 18_{1} [5] & \infty_{6} 6_{0} 20_{1} [6] & \infty_{7} 12_{0} 3_{1} [13] \\ \infty_{8} 28_{0} 13_{1} [31] & \infty_{9} 8_{0} 25_{1} [7] & \infty_{10} 14_{0} 11_{1} [16] & \infty_{11} 3_{0} 12_{1} [8] & \infty_{12} 20_{0} 23_{1} [17] & \infty_{13} 9_{0} 14_{1} [23] \\ \infty_{14} 26_{0} 21_{1} [12] & \infty_{15} 21_{0} 0_{1} [29] & \infty_{16} 16_{0} 32_{1} [15] & \infty_{17} 22_{0} 4_{1} [30] & \infty_{18} 15_{0} 7_{1} [14] & \infty_{19} 0_{0} 6_{1} [18] \\ \infty_{20} 10_{0} 9_{1} [32] & \infty_{21} 29_{0} 27_{1} [25] & \infty_{22} 32_{0} 1_{1} [24] & \infty_{23} 30_{0} 16_{1} [19] & \infty_{24} 18_{0} 29_{1} [22] & \infty_{25} 7_{0} 15_{1} [21] \\ \infty_{26} 2_{0} 2_{1} [20] & \end{array} \] \end{example} \begin{example} For $n=32$: \[ \begin{array}{@{}*{6}l@{}} 0_0 3_0 11_0 [31] & 12_0 13_0 27_0 [3] & 23_1 24_1 29_1 [7] & 15_1 26_1 5_1 [27] & \infty_0 17_0 17_1 [9] & \infty_1 15_0 16_1 [12] \\ \infty_2 26_0 28_1 [26] & \infty_3 1_0 4_1 [1] & \infty_4 2_0 6_1 [4] & \infty_5 4_0 9_1 [5] & \infty_6 5_0 11_1 [6] & \infty_7 6_0 13_1 [11] \\ \infty_8 10_0 18_1 [22] & \infty_9 16_0 25_1 [13] & \infty_{10} 9_0 19_1 [30] & \infty_{11} 19_0 30_1 [25] & \infty_{12} 20_0 32_1 [2] & \infty_{13} 18_0 31_1 [15] \\ \infty_{14} 21_0 1_1 [32] & \infty_{15} 22_0 3_1 [24] & \infty_{16} 28_0 10_1 [19] 
& \infty_{17} 25_0 8_1 [10] & \infty_{18} 23_0 7_1 [18] & \infty_{19} 29_0 14_1 [29] \\ \infty_{20} 14_0 0_1 [23] & \infty_{21} 33_0 20_1 [21] & \infty_{22} 24_0 12_1 [33] & \infty_{23} 32_0 21_1 [16] & \infty_{24} 31_0 22_1 [28] & \infty_{25} 7_0 33_1 [14] \\ \infty_{26} 8_0 2_1 [20] & \infty_{27} 30_0 27_1 [8] \end{array} \] \end{example} \begin{example} For $n=33$: \[ \begin{array}{@{}*{6}l@{}} 0_0 11_0 32_0 [0] & 6_0 23_0 25_0 [ 8] & 16_1 17_1 21_1 [11] & 7_1 27_1 33_1 [32] & \infty_0 17_0 18_1 [22] & \infty_1 26_0 28_1 [14] \\ \infty_2 16_0 19_1 [26] & \infty_3 1_0 5_1 [ 1] & \infty_4 3_0 8_1 [ 3] & \infty_5 4_0 10_1 [ 4] & \infty_6 2_0 9_1 [13] & \infty_7 5_0 13_1 [21] \\ \infty_8 14_0 23_1 [10] & \infty_9 10_0 20_1 [15] & \infty_{10} 13_0 24_1 [34] & \infty_{11} 18_0 31_1 [ 5] & \infty_{12} 20_0 34_1 [ 9] & \infty_{13} 21_0 1_1 [24] \\ \infty_{14} 31_0 12_1 [17] & \infty_{15} 22_0 4_1 [33] & \infty_{16} 28_0 11_1 [ 2] & \infty_{17} 30_0 14_1 [ 6] & \infty_{18} 15_0 0_1 [19] & \infty_{19} 29_0 15_1 [23] \\ \infty_{20} 19_0 6_1 [25] & \infty_{21} 7_0 30_1 [31] & \infty_{22} 8_0 32_1 [20] & \infty_{23} 12_0 2_1 [ 7] & \infty_{24} 34_0 25_1 [28] & \infty_{25} 9_0 3_1 [12] \\ \infty_{26} 33_0 29_1 [18] & \infty_{27} 24_0 22_1 [29] & \infty_{28} 27_0 26_1 [30] \end{array} \] \end{example} \begin{example} For $n=34$: \[ \begin{array}{@{}*{6}l@{}} 0_0 1_0 11_0 [35] & 15_0 17_0 32_0 [1] & 1_1 23_1 26_1 [11] & 10_1 14_1 27_1 [25] & \infty_0 18_0 18_1 [20] & \infty_1 12_0 13_1 [12] \\ \infty_2 26_0 28_1 [27] & \infty_3 2_0 5_1 [2] & \infty_4 3_0 7_1 [3] & \infty_5 4_0 9_1 [5] & \infty_6 5_0 11_1 [9] & \infty_7 8_0 15_1 [17] \\ \infty_8 9_0 17_1 [14] & \infty_9 7_0 16_1 [24] & \infty_{10} 10_0 20_1 [16] & \infty_{11} 19_0 30_1 [32] & \infty_{12} 20_0 32_1 [28] & \infty_{13} 21_0 34_1 [7] \\ \infty_{14} 22_0 0_1 [15] & \infty_{15} 25_0 4_1 [19] & \infty_{16} 23_0 3_1 [6] & \infty_{17} 27_0 8_1 [22] & \infty_{18} 24_0 6_1 [23] & \infty_{19} 29_0 12_1 [10] \\ \infty_{20} 
13_0 33_1 [21] & \infty_{21} 14_0 35_1 [29] & \infty_{22} 16_0 2_1 [4] & \infty_{23} 35_0 22_1 [31] & \infty_{24} 6_0 31_1 [13] & \infty_{25} 31_0 21_1 [26] \\ \infty_{26} 28_0 19_1 [30] & \infty_{27} 33_0 25_1 [8] & \infty_{28} 30_0 24_1 [33] & \infty_{29} 34_0 29_1 [34] \end{array} \] \end{example} \begin{example} For $n=39$: \[ \begin{array}{@{}*{6}l@{}} 0_0 11_0 18_0 [0] & 10_0 22_0 36_0 [17] & 2_1 27_1 33_1 [ 4] & 13_1 39_1 40_1 [13] & \infty_0 19_0 19_1 [ 2] & \infty_1 12_0 14_1 [30] \\ \infty_2 20_0 23_1 [18] & \infty_3 16_0 21_1 [28] & \infty_4 2_0 8_1 [35] & \infty_5 3_0 10_1 [20] & \infty_6 1_0 9_1 [31] & \infty_7 6_0 15_1 [40] \\ \infty_8 14_0 24_1 [15] & \infty_9 4_0 16_1 [ 6] & \infty_{10} 25_0 38_1 [10] & \infty_{11} 32_0 5_1 [11] & \infty_{12} 29_0 3_1 [25] & \infty_{13} 31_0 6_1 [29] \\ \infty_{14} 35_0 11_1 [22] & \infty_{15} 24_0 1_1 [26] & \infty_{16} 26_0 4_1 [21] & \infty_{17} 7_0 28_1 [23] & \infty_{18} 13_0 35_1 [ 7] & \infty_{19} 40_0 22_1 [37] \\ \infty_{20} 8_0 32_1 [14] & \infty_{21} 9_0 34_1 [36] & \infty_{22} 15_0 0_1 [19] & \infty_{23} 21_0 7_1 [27] & \infty_{24} 33_0 20_1 [16] & \infty_{25} 37_0 26_1 [32] \\ \infty_{26} 5_0 36_1 [ 9] & \infty_{27} 27_0 18_1 [38] & \infty_{28} 38_0 31_1 [34] & \infty_{29} 23_0 17_1 [33] & \infty_{30} 17_0 12_1 [ 8] & \infty_{31} 34_0 30_1 [24] \\ \infty_{32} 28_0 25_1 [39] & \infty_{33} 39_0 37_1 [ 1] & \infty_{34} 30_0 29_1 [ 3] \end{array} \] \end{example} \begin{example} For $n=44$: \[ \begin{array}{@{}*{6}l@{}} 0_0 2_0 7_0 [44] & 21_0 24_0 33_0 [2] & 0_1 11_1 17_1 [6] & 10_1 14_1 19_1 [40] & \infty_0 32_0 32_1 [28] & \infty_1 26_0 27_1 [29] \\ \infty_2 18_0 20_1 [45] & \infty_3 1_0 4_1 [18] & \infty_4 3_0 7_1 [36] & \infty_5 4_0 9_1 [16] & \infty_6 6_0 12_1 [9] & \infty_7 8_0 15_1 [5] \\ \infty_8 10_0 18_1 [30] & \infty_9 34_0 43_1 [37] & \infty_{10} 35_0 45_1 [32] & \infty_{11} 13_0 24_1 [15] & \infty_{12} 29_0 41_1 [41] & \infty_{13} 36_0 3_1 [43] \\ \infty_{14} 38_0 6_1 [38] & \infty_{15} 39_0 
8_1 [3] & \infty_{16} 31_0 1_1 [31] & \infty_{17} 17_0 34_1 [1] & \infty_{18} 19_0 37_1 [33] & \infty_{19} 20_0 39_1 [25] \\ \infty_{20} 22_0 42_1 [19] & \infty_{21} 23_0 44_1 [14] & \infty_{22} 14_0 36_1 [13] & \infty_{23} 5_0 28_1 [17] & \infty_{24} 9_0 33_1 [20] & \infty_{25} 15_0 40_1 [34] \\ \infty_{26} 45_0 25_1 [12] & \infty_{27} 40_0 21_1 [8] & \infty_{28} 41_0 23_1 [39] & \infty_{29} 44_0 31_1 [10] & \infty_{30} 12_0 2_1 [24] & \infty_{31} 25_0 16_1 [22] \\ \infty_{32} 43_0 35_1 [7] & \infty_{33} 37_0 30_1 [21] & \infty_{34} 11_0 5_1 [42] & \infty_{35} 27_0 22_1 [11] & \infty_{36} 42_0 38_1 [35] & \infty_{37} 16_0 13_1 [27] \\ \infty_{38} 28_0 26_1 [4] & \infty_{39} 30_0 29_1 [26] \end{array} \] \end{example} Now the intransitive starters and adders: \begin{example} For $n= 7$: \[ \begin{array}{@{}*{7}l@{}} 2_{2} 3_{2} 5_{2} [0] & 0_{0} 1_{0} 3_{0} [6] & 6_{1} 0_{1} 2_{1} [1] & 6_{0} 3_{1} 4_{2} [ 2] & 5_0 1_{1} 6_{2} [5] & 4_{0} 5_1 0_{2} [R] & 5_{0} 4_{1} 0_{2} [C] \\ 2_{0} 4_{1} 1_{2} [ R] & 4_0 2_{1} 1_{2} [C] \\ \end{array} \] \end{example} \begin{example} For $n= 9$: \[ \begin{array}{@{}*{7}l@{}} 0_{2} 3_{2} 5_{2} [0] & 6_{0} 4_{0} 8_{2} [5] & 2_{1} 0_{1} 4_{2} [4] & 1_{0} 0_{0} 7_{2} [3] & 4_1 3_{1} 1_{2} [6] & 8_{0} 3_0 7_{1} [7] & 5_{0} 6_{1} 1_{1} [2] \\ 2_{0} 8_{1} 2_{2} [R] & 8_{0} 2_{1} 2_{2} [C] & 7_{0} 5_{1} 6_{2} [R] & 5_0 7_{1} 6_{2} [C] \\ \end{array} \] \end{example} \begin{example} For $n=11$: \[ \begin{array}{@{}*{7}l@{}} 0_{2} 1_{2} 9_{2} [0] & 4_{0} 8_{0} 7_{0} [5] & 9_{1} 2_{1} 1_{1} [6] & 5_{0} 4_{1} 7_{2} [ 9] & 2_0 3_{1} 5_{2} [2] & 1_{0} 6_0 6_{2} [4] & 5_{1} 10_{1} 10_{2} [7] \\ 9_{0} 0_{1} 4_{2} [10] & 10_{0} 8_{1} 3_{2} [1] & 0_{0} 7_{1} 8_{2} [R] & 7_0 0_{1} 8_{2} [C] & 3_{0} 6_1 2_{2} [R] & 6_0 3_{1} 2_{2} [C] \\ \end{array} \] \end{example} \end{appendices} \end{document}
\begin{document} \title{Rollercoaster Permutations and Partition Numbers} \author{William Adamczak} \address{Siena College, Loudonville, NY 12211} \email{[email protected]} \author{Jacob Boni} \address{Siena College, Loudonville, NY 12211} \email{[email protected]} \subjclass[2000]{Primary 54C40, 14E20; Secondary 46E25, 20C20} \date{\today} \keywords{Combinatorics, Permutations} \begin{abstract} This paper explores the properties of partitions of roller coaster permutations. A roller coaster permutation is a permutation that alternates between increasing and decreasing a maximum number of times, while its subsequences also alternate between increasing and decreasing a maximum number of times simultaneously. The focus of this paper is on establishing an upper bound for the partition number of a roller coaster permutation of length $n$. \end{abstract} \maketitle \section{Introduction} Roller coaster permutations first appear in work of Ahmed \& Snevily \cite{ahsn}, where they are described as permutations that maximize the total number of switches from ascending to descending (or vice versa) for the permutation and all of its subpermutations simultaneously. More plainly, this counts the greatest number of ups and downs, or increases and decreases, over the permutation and all possible subpermutations. Several properties of roller coaster permutations conjectured by Ahmed \& Snevily are proven in a paper of the first author \cite{adam}, and we rely on them heavily in developing an upper bound for the partition number of a roller coaster permutation. These permutations are connected to pattern-avoiding permutations, as seen in Mansour \cite{mans} in the context of avoiding the subpermutation 132. They are also strongly connected to forbidden subsequences and partitions of permutations, as seen in Stankova \cite{stank}, where certain forbidden subsequences turn out to be roller coaster permutations; in particular, $F(1,1)$ is a subset of $RC(n)$.
Consequently, these permutations are related to stack-sortable permutations, as seen in Egge \& Mansour \cite{egma}, where the connection between forbidden subsequences and stack sortability is made. Kezdy, Snevily \& Wang \cite{kesnwa} explored partitions of permutations into increasing and decreasing subsequences; they took the approach of associating a graph to a permutation, and then translated the existence of such partitions into the non-existence of certain subgraphs. Our approach here relies instead on the underlying structure of these permutations, particularly the alternating structure, together with the relative positions of entries that are forced on roller coaster permutations. \section{Background} \begin{defn} A permutation of length $n$ is an ordered rearrangement of the set $\{1,2,3,\ldots,n\}$ for some $n$. The collection of all such permutations is denoted $S_n$. \end{defn} \begin{defn} A roller coaster permutation is a permutation that maximizes the number of changes from increasing to decreasing over itself and all of its subsequences, simultaneously. Here a subsequence of a permutation is an ordered subset of the original permutation \cite{ahsn}. \end{defn} The collection of all roller coaster permutations in $S_n$ is denoted $RC_n$.
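The maximization in this definition can be made concrete by brute force. The sketch below (ours, not from \cite{ahsn}) scores a permutation by summing the number of switches from increasing to decreasing (or vice versa) over the permutation and all of its subsequences of length at least 3, and returns the permutations attaining the maximum score; for $n=3$ and $n=4$ it reproduces the sets $RC(3)$ and $RC(4)$ listed below.

```python
from itertools import combinations, permutations

def switches(seq):
    # number of positions where the sequence turns from
    # increasing to decreasing or vice versa
    return sum(1 for i in range(1, len(seq) - 1)
               if (seq[i] - seq[i - 1]) * (seq[i + 1] - seq[i]) < 0)

def total_switches(pi):
    # total switches over pi and all of its subsequences
    # (subsequences of length < 3 contribute nothing)
    return sum(switches(sub)
               for r in range(3, len(pi) + 1)
               for sub in combinations(pi, r))

def rc(n):
    # brute-force search: keep the permutations attaining the maximum
    perms = list(permutations(range(1, n + 1)))
    best = max(total_switches(p) for p in perms)
    return {p for p in perms if total_switches(p) == best}
```

This exhaustive search is only feasible for small $n$, since it examines all $n!$ permutations and all of their subsequences.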
These have been explicitly found for small $n$, and are as follows: \begin{flushleft} RC(3) = \{132, 213, 231, 312\}\newline RC(4) = \{2143, 2413, 3142, 3412\} \newline RC(5) = \{24153, 25143, 31524, 32514, 34152, 35142, 41523, 42513\} \newline RC(6) = \{326154, 351624, 426153, 451623\} \newline RC(7) = \{3517264, 3527164, 3617254, 3627154, 4261735, 4271635, \newline 4361725, 4371625, 4517263, 4527163, 4617253, 4627153,\newline 5261734, 5271634, 5361724, 5371624\} \newline RC(8) = \{43718265, 46281735, 53718264, 56281734\} \newline RC(9) = \{471639285, 471936285, 472639185, 472936185, 481639275, 481936275, \newline 482639175, 482936175, 528174936, 528471936, 529174836, 529471836,\newline 538174926, 538471926, 539174826, 539471826, 571639284, 571936284, \newline 572639184, 572936184, 581639274, 581936274, 582639174, 582936174,\newline 628174935, 628471935, 629174835, 629471835, 638174925, 638471925, \newline 639174825, 639471825\} \cite{ahsn}. \end{flushleft} \begin{defn} An alternating permutation is a permutation $\pi$ such that $\pi_1 < \pi_2 > \pi_3 \ldots $ and a reverse alternating permutation is a permutation $\pi$ such that $\pi_1 > \pi_2 < \pi_3 \ldots $. \end{defn} \begin{example} The following is a graphical representation of the permutation \{4,3,7,1,8,2,6,5\}. This permutation is reverse alternating: the first entry is greater than the second entry, and the pattern defined above continues throughout the entire permutation.
\begin{center} \begin{tikzpicture} \node [draw,outer sep=0,inner sep=1,minimum size=10] (v1) at (1,4) {4}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v2) at (2,3) {3}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v3) at (3,7) {7}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v4) at (4,1) {1}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v5) at (5,8) {8}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v6) at (6,2) {2}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v7) at (7,6) {6}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v8) at (8,5) {5}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \draw (v5) edge (v6); \draw (v6) edge (v7); \draw (v7) edge (v8); \end{tikzpicture} \end{center} \end{example} \begin{example} The permutation \{5,6,2,8,1,7,3,4\}, pictured below, is an example of a forward alternating permutation. Sometimes forward alternating permutations are simply referred to as being alternating. \begin{center} \begin{tikzpicture} \node [draw,outer sep=0,inner sep=1,minimum size=10] (v1) at (1,5) {5}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v2) at (2,6) {6}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v3) at (3,2) {2}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v4) at (4,8) {8}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v5) at (5,1) {1}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v6) at (6,7) {7}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v7) at (7,3) {3}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v8) at (8,4) {4}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \draw (v5) edge (v6); \draw (v6) edge (v7); \draw (v7) edge (v8); \end{tikzpicture} \end{center} \end{example} \begin{defn} The reverse of a permutation $\pi$ is the permutation with entries given by $(\pi_n, \pi_{n-1}, \ldots, \pi_1)$. 
\end{defn} \begin{defn} The complement of a permutation $\pi$ is $(n+1-\pi_1, n+1-\pi_2, \ldots, n+1-\pi_n)$. \end{defn} \begin{example} An example of a permutation and its complement is given by \{3,6,2,7,1,5,4\} and \{5,2,6,1,7,3,4\}. These permutations follow the definition above: notice that the first elements of each, 3 and 5, satisfy $5=7+1-3$. Both of these permutations are displayed graphically below. The reverse of \{3,6,2,7,1,5,4\} is \{4,5,1,7,2,6,3\}. Notice that the reverse and complement of a permutation are not necessarily equal. \newline \newline \begin{tikzpicture} \node [draw,outer sep=0,inner sep=1,minimum size=10] (v1) at (1,3) {3}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v2) at (2,6) {6}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v3) at (3,2) {2}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v4) at (4,7) {7}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v5) at (5,1) {1}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v6) at (6,5) {5}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v7) at (7,4) {4}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \draw (v5) edge (v6); \draw (v6) edge (v7); \begin{scope}[xshift=7.5cm] \node [draw,outer sep=0,inner sep=1,minimum size=10] (v1) at (1,5) {5}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v2) at (2,2) {2}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v3) at (3,6) {6}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v4) at (4,1) {1}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v5) at (5,7) {7}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v6) at (6,3) {3}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v7) at (7,4) {4}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \draw (v5) edge (v6); \draw (v6) edge (v7); \end{scope} \end{tikzpicture} \end{example} Below we give a collection of theorems regarding the
structure of roller coaster permutations. We will use these heavily in arriving at an upper bound for the partition number. \begin{thm} Given $\pi \in RC_n$, the reverse and complement of $\pi$ are also members of $RC_n$ \cite{ahsn}. \end{thm} \begin{thm} Given $\pi \in RC_n$, we have that $\pi$ is either alternating or reverse alternating \cite{adam}. \end{thm} \begin{thm} Given $\pi \in RC_n$, $\abs{\pi_1 - \pi_n} = 1$ \cite{adam}. \end{thm} \begin{thm} For $\pi \in RC_n$, if $\pi$ is alternating then $\pi_i > \pi_1,\pi_n$ for even $i$. If $\pi$ is reverse alternating then $\pi_i > \pi_1,\pi_n$ for odd $i$ \cite{adam}. \end{thm} \begin{example} Below is a graphical representation of the permutation \{5,3,7,1,8,2,6,4\}. As you can see, the end points are 5 and 4, which have a difference of 1 as stated in Theorem 2.8. Also notice in the drawing below that some elements have been circled into different sets: 7, 8 and 6 in the ``top'' set and 3, 1 and 2 in the ``bottom'' set. The top set is comprised entirely of numbers greater than the end points, and the bottom set entirely of numbers less than the end points. The top set has its elements at the odd indices while the bottom set has its elements at the even indices, just as Theorem 2.9 states.
\begin{center} \begin{tikzpicture} \node [draw,outer sep=0,inner sep=1,minimum size=10] (v1) at (1,5) {5}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v2) at (2,3) {3}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v3) at (3,7) {7}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v4) at (4,1) {1}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v5) at (5,8) {8}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v6) at (6,2) {2}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v7) at (7,6) {6}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v8) at (8,4) {4}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \draw (v5) edge (v6); \draw (v6) edge (v7); \draw (v7) edge (v8); \draw (0,5) edge (9,5); \draw (0,4) edge (9,4); \draw plot[smooth cycle, tension=.7] coordinates {(2,7) (5,9) (8,6) (7,5) (5,7) (3,6)}; \draw plot[smooth cycle, tension=.7] coordinates {(4,0) (7,2) (6,3) (4,2) (2,4) (1,3)}; \end{tikzpicture} \end{center} \end{example} \begin{defn} A subsequence of a permutation is said to be monotonic if it is strictly increasing or strictly decreasing. \end{defn} Monotonic subsequences are sometimes called runs. In the permutation \{5,8,2,6,3,9,1,7,4\} there are several runs. The run (589) is increasing, while (974) is decreasing. The longest run in this permutation is (8631). \begin{defn} A partition of a permutation is a set of disjoint monotonic subsequences whose union is the entire permutation. \end{defn} \begin{defn} The partition number of a permutation $\pi$, denoted $P(\pi)$, is the least number of monotonic subsequences that $\pi$ can be partitioned into. \end{defn} \begin{example} Here you can see a graphical representation of the permutation \{3,2,6,1,5,4\}. The ovals distinguish the runs in the partition. Notice that there are two ovals, each with three numbers in them. This shows that the runs in this permutation are \{3,2,1\} and \{6,5,4\}.
It also shows that there are two runs, which means that this permutation has a partition number of 2, or in other words, $P(326154)=2$. \begin{center} \begin{tikzpicture} \node [draw,outer sep=0,inner sep=1,minimum size=10] (v1) at (1,3) {3}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v2) at (2,2) {2}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v3) at (3,6) {6}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v4) at (4,1) {1}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v5) at (5,5) {5}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v6) at (6,4) {4}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \draw (v5) edge (v6); \draw plot[smooth cycle, tension=.7] coordinates {(3,7) (2,6) (6,3) (7,4)}; \draw plot[smooth cycle, tension=.7] coordinates {(1,4) (0,3) (4,0) (5,1)}; \end{tikzpicture} \end{center} \end{example} \begin{example} Here is another partitioned permutation. This time the permutation is \{4,7,1,6,3,9,2,8,5\}. 
\begin{center} \begin{tikzpicture} \node [draw,outer sep=0,inner sep=1,minimum size=10] (v1) at (1,4) {4}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v2) at (2,7) {7}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v3) at (3,1) {1}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v4) at (4,6) {6}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v5) at (5,3) {3}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v6) at (6,9) {9}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v7) at (7,2) {2}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v8) at (8,8) {8}; \node [draw,outer sep=0,inner sep=1,minimum size=10] (v9) at (9,5) {5}; \draw (v1) edge (v2); \draw (v2) edge (v3); \draw (v3) edge (v4); \draw (v4) edge (v5); \draw (v5) edge (v6); \draw (v6) edge (v7); \draw (v7) edge (v8); \draw (v8) edge (v9); \draw plot[smooth cycle, tension=.7] coordinates {(6,10) (9,8) (10,5) (9,4) (7,7) (5,9)}; \draw plot[smooth cycle, tension=.7] coordinates {(1,5) (4,1) (3,0) (0,4)}; \draw plot[smooth cycle, tension=.7] coordinates {(4,7) (1,7) (6,1) (8,2)}; \end{tikzpicture} \end{center} \end{example} \begin{defn} For $\pi \in RC_n$, $P(\pi)$ denotes the partition number of $\pi$, and $P_{max}(n)$ denotes the maximum of $P(\pi)$ over all $\pi \in RC_n$. \end{defn} \section{Results} \begin{thm} For $\pi \in RC_n$ the partition number $P(\pi)$ is bounded above by $\floor{\frac{\ceil{\frac{n-2}{2}}}{2}} +2$. \end{thm} \begin{proof} Without loss of generality we may assume that $\pi \in RC_n$ is reverse alternating, i.e.\ $\pi$ starts with a descent; otherwise we could take the complement of $\pi$, which is also in $RC_n$, since complementing exchanges alternating for reverse alternating, apply the argument that follows, and then take the complement again.
\begin{itemize} \item Excluding the endpoints, there will be $\ceil{\frac{n-2}{2}}$ positions below the endpoints and $\floor{\frac{n-2}{2}}$ positions above the endpoints. The positions below the endpoints are at even indices and the positions above are at odd indices. \item Partition the even indices into contiguous increasing runs and do the same with the odd indices. The number of runs made from the even indices will be $\floor{\frac{\ceil{\frac{n-2}{2}}}{2}}+1$. \item Note that when partitioning a forward or reverse alternating permutation into contiguous increasing runs, the $k^{th}$ run has its earliest possible start at index $2k-2$ for $k>1$, and its latest possible finish at index $2k$. The $k^{th}$ run from the bottom comes before the $k^{th}$ run from the top, due to $\pi$ being reverse alternating. So the latest finish of the $(k-1)^{st}$ run is, at worst, equal to the latest start of the $k^{th}$ run; thus the $(k+1)^{st}$ run from the top starts after the $k^{th}$ run from the bottom. \item So the first run on the top pairs with the start point, and then the $k^{th}$ run on the bottom pairs with the $(k+1)^{st}$ run on the top. If the number of runs on the bottom is greater than the number of runs on the top, then the second to last run on the bottom pairs with the end point and we have an extra $+1$ in the partition number. Otherwise the last run on the bottom pairs with the end point. This establishes the claim. \end{itemize} \end{proof} We found exact values of $P_{max}(n)$ for $n < 15$ experimentally, using code developed in the Sage computer algebra system. These values can be found in the table below.
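The exhaustive computation mentioned above can be sketched as follows. This is a hypothetical Python reimplementation (ours, not the Sage code used for the experiments) of the partition number $P(\pi)$: for each candidate number of classes $k$, in increasing order, it tries all assignments of entries to $k$ classes and checks that each class is monotonic.

```python
from itertools import product

def is_monotone(seq):
    # strictly increasing or strictly decreasing (entries are distinct)
    return seq == sorted(seq) or seq == sorted(seq, reverse=True)

def partition_number(pi):
    # least k such that pi splits into k disjoint monotonic subsequences;
    # brute force over all k-colourings, feasible only for small n
    n = len(pi)
    for k in range(1, n + 1):
        for colours in product(range(k), repeat=n):
            classes = [[pi[i] for i in range(n) if colours[i] == c]
                       for c in range(k)]
            if all(is_monotone(cls) for cls in classes):
                return k
```

$P_{max}(n)$ is then obtained by maximizing \texttt{partition\_number} over the set $RC_n$.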
\newline \newline \begin{center} \begin{tabular}{c|c|c} $n$ & $P_{max}(n)$ & $\floor{\frac{\ceil{\frac{n-2}{2}}}{2}} +2$ \\ \hline 3 & 2 & 2 \\ 4 & 2 & 2 \\ 5 & 2 & 3 \\ 6 & 3 & 3 \\ 7 & 3 & 3 \\ 8 & 3 & 3 \\ 9 & 4 & 4 \\ 10 & 4 & 4 \\ 11 & 4 & 4 \\ 12 & 4 & 4 \\ 13 & 5 & 5 \\ 14 & 5 & 5 \\ \end{tabular} \end{center} Note that the bound found in the theorem above is nearly sharp: for $n<15$ it matches the actual values of $P_{max}(n)$, with the only deviation at $n=5$, where our upper bound is 1 greater than the actual value. \end{document}
\begin{document} \author[M.~Bartoletti]{Massimo Bartoletti\lmcsorcid{0000-0003-3796-9774}}[a] \author[M. Murgia]{Maurizio Murgia\lmcsorcid{0000-0001-7613-621X}}[b] \author[R. Zunino]{Roberto Zunino\lmcsorcid{0000-0002-9630-429X}}[c] \address{University of Cagliari, Cagliari, Italy} \email{[email protected]} \address{Gran Sasso Science Institute, L'Aquila, Italy} \email{[email protected]} \address{Universit\`a degli Studi di Trento, Trento, Italy} \email{[email protected]} \title[Probabilistic bisimulations for PCTL]{Sound approximate and asymptotic \texorpdfstring{\\}{} probabilistic bisimulations for PCTL} \maketitle \begin{abstract} We tackle the problem of establishing the soundness of approximate bisimilarity with respect to PCTL and its relaxed semantics. To this purpose, we consider a notion of bisimilarity inspired by the one introduced by Desharnais, Laviolette, and Tracol, and parametric with respect to an approximation error $\delta$, and to the depth $n$ of the observation along traces. Essentially, our soundness theorem establishes that, when a state $q$ satisfies a given formula up to error $\delta$ and steps $n$, and $q$ is bisimilar to $q'$ up to error $\delta'$ and enough steps, then $q'$ also satisfies the formula up to a suitable error $\delta''$ and steps $n$. The new error $\delta''$ is computed from $\delta,\delta'$ and the formula, and only depends linearly on $n$. We provide a detailed overview of our soundness proof. We extend our bisimilarity notion to families of states, thus obtaining an asymptotic equivalence on such families. We then consider an asymptotic satisfaction relation for PCTL formulae, and prove that asymptotically equivalent families of states asymptotically satisfy the same formulae. \end{abstract} \section{Introduction} The behaviour of many real-world systems can be formally modelled as probabilistic processes, e.g.\@\xspace as discrete-time Markov chains.
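As a minimal illustration of the kind of model involved, a discrete-time Markov chain can be represented as a map from states to successor distributions, and bounded reachability probabilities unfold by recursion on the step bound. The sketch below is ours (not code from the paper), instantiated on the fair-coin chain discussed later in this introduction.

```python
def prob_reach(trans, target, state, n):
    # trans: state -> list of (successor, probability) pairs
    # probability of hitting a target state within n steps
    if state in target:
        return 1.0
    if n == 0:
        return 0.0
    return sum(p * prob_reach(trans, target, s, n - 1)
               for s, p in trans[state])

# repeated tosses of a fair coin: from either side,
# each side is reached with probability 1/2
coin = {"h": [("h", 0.5), ("t", 0.5)],
        "t": [("h", 0.5), ("t", 0.5)]}
```

From state \texttt{t}, the probability of seeing heads within $n$ steps is $1 - (\nicefrac{1}{2})^n$.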
Specifying and verifying properties on these systems requires probabilistic versions of temporal logics, such as PCTL~\cite{HanssonJonsson94}. PCTL allows one to express probability bounds using the formula $\logPr{\geq \pi}{\psi}$, which is satisfied by those states starting from which the path formula $\psi$ holds with probability $\geq \pi$. A well-known issue is that real-world systems can have tiny deviations from their mathematical models, while logical properties, such as those written in PCTL, impose sharp constraints on the behaviour. To address this issue, one can use a \emph{relaxed} semantics for PCTL, as in~\cite{DInnocenzo12hscc}. There, the semantics of formulae is parameterised over the error $\delta\geq 0$ one is willing to tolerate. While in the standard semantics of $\logPr{\geq \pi}{\psi}$ the bound $\geq\pi$ is \emph{exact}, in relaxed PCTL this bound is weakened to $\geq\pi-\delta$. So, the relaxed semantics generalises the standard PCTL semantics of~\cite{HanssonJonsson94}, which can be obtained by choosing $\delta=0$. Instead, choosing an error $\delta > 0$ effectively provides a way to measure ``how much'' a state satisfies a given formula: some states might require only a very small error, while others a much larger one. When dealing with temporal logics such as PCTL, one often wants to study some notion of state equivalence which preserves the semantics of formulae: that is, when two states are equivalent, they satisfy the same formulae. For instance, probabilistic bisimilarities like those in~\cite{Desharnais10iandc,Desharnais02iandc,Larsen91iandc} preserve the semantics of formulae for PCTL and other temporal logics. Although \emph{strict} probabilistic bisimilarity preserves the semantics of relaxed PCTL, it is not \emph{robust} against small deviations in the probability of transitions in Markov chains~\cite{Giacalone90ifip2}.
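Concretely, the relaxed threshold check is a one-line weakening of the exact one; the following sketch (names ours) also computes the least error for which a given probability bound holds, i.e.\ the measure of ``how much'' a state satisfies it.

```python
def satisfies_pr(prob, pi, delta=0.0):
    # relaxed semantics of Pr>=pi: the bound is weakened to pi - delta;
    # delta = 0 recovers the standard exact semantics
    return prob >= pi - delta

def least_error(prob, pi):
    # smallest delta under which the relaxed bound Pr>=pi holds
    return max(0.0, pi - prob)
```

With $\delta = 0$ this is exactly the standard semantics of the probability bound in $\logPr{\geq \pi}{\psi}$.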
A possible approach to deal with this issue is to also relax the notion of probabilistic bisimilarity, by making it parametric with respect to an error $\delta$~\cite{DInnocenzo12hscc}. Relaxing bisimilarity in this way poses a choice regarding which properties of the strict probabilistic bisimilarity are to be kept. In particular, transitivity is enjoyed by the strict probabilistic bisimilarity, but it is \emph{not} desirable for the relaxed notion. Indeed, we could have three states $q,qi$ and $qii$ where the behaviour of $q$ and $qi$ is similar enough (within the error $\delta$), the behaviour of $qi$ and $qii$ is also similar enough (within $\delta$), but the distance between $q$ and $qii$ is larger than the allowed error $\delta$. At best, we can have a sort of ``triangular inequality'', where $q$ and $qii$ can still be related but only with a larger error $2\cdot\delta$. \begin{figure} \caption{A Markov chain modelling repeated tosses of a fair coin.} \label{fig:fair-coin} \end{figure} Bisimilarity is usually defined by coinduction, essentially requiring that the relation is preserved along an arbitrarily long sequence of moves. Still, in some settings, observing the behaviour over a very long run is undesirable. For instance, consider the PCTL formula $\phi = \logPr{\geq 0.5}{{\sf true}\ {\sf U}^{\leq n}\ {\sf a}}$, which is satisfied by those states from which, with probability $\geq 0.5$, ${\sf a}$ is satisfied within $n$ steps. In this case, a behavioural equivalence relation that preserves the semantics of $\phi$ can neglect the long-run behaviour after $n$ steps. More generally, if all the until operators are \emph{bounded}, as in $\phi_1 {\sf U}^{\leq k} \phi_2$, then each formula determines an upper bound $n$ on the number of steps, after which a behavioural equivalence relation can ignore what happens next. Observing the behaviour after this upper bound is unnecessarily strict, and indeed in some settings it is customary to neglect what happens in the very long run. 
For instance, a real-world player repeatedly tossing a coin is usually considered equivalent to a Markov chain with two states and four transitions with probability $\nicefrac{1}{2}$ (see~\autoref{fig:fair-coin}), even if in the long run the real-world system will diverge from the ideal one (e.g.\@\xspace, when the player dies). Another setting where observing the long-term behaviour is notoriously undesirable is that of cryptography. When studying the security of systems modelling cryptographic protocols, two states are commonly considered equivalent when their behaviour is similar (up to a small error $\delta$) in the short run, even when in the very long run they diverge. For instance, a state $q$ could represent an ideal system where no attacks can be performed by construction, while another state $qi$ could represent a real system where an adversary can try to disrupt the cryptographic protocol. In such a scenario, if the protocol is secure, we would like to have $q$ and $qi$ equivalent, since the behaviour of the real system is close to the one of the ideal system. Note that in the real system an adversary can repeatedly try to guess the secret cryptographic keys, and break security in the very long run, with very high probability. Accordingly, standard security definitions require that the behaviour of the ideal and real system are within a small error, but only for a \emph{bounded} number of steps, after which their behaviour could diverge. \paragraph{Contributions} To overcome the above mentioned issues, in this work we introduce a bounded, approximate notion of bisimilarity $\crysim{n}{\delta}$, that only observes the first $n$ steps, and allows for an error $\delta$. Unlike standard bisimilarity, our relation is naturally defined by \emph{induction} on $n$. We call this looser variant of bisimilarity an \emph{up-to-$n,\delta$} bisimilarity. 
We showcase up-to-$n,\delta$ bisimilarity on a running example (Examples~\ref{ex:pctl:padlock}, \ref{ex:sim:padlock}, \ref{ex:results:padlock}, and~\ref{ex:asymptotic:padlock}), comparing an ideal combination padlock against a real one which can be opened by an adversary guessing its combination. We show that the two systems are bisimilar up-to-$n,\delta$, while they are not bisimilar according to the standard coinductive notion. We then discuss how the two systems satisfy a basic security property expressed in PCTL, with suitable errors. To make our theory amenable to reasoning about infinite-state systems, such as those usually found when modelling cryptographic protocols, all our results apply to Markov chains with countably many states. In this respect, our work departs from most literature on probabilistic bisimulations~\cite{DInnocenzo12hscc,Song13lmcs} and bisimilarity distances~\cite{Breugel17siglog,TangB17concur,TangB18cav,TangB16concur,Fu12icalp,ChenBW12fossacs,BreugelSW08lmcs}, which usually assume \emph{finite}-state Markov chains, as they focus on computing the distances. In~\autoref{ex:sim:pingpong} we exploit infinite-state Markov chains to compare a biased random bit generator with an ideal one. Our first main contribution is a soundness theorem establishing that, when a state $q$ satisfies a PCTL formula $\phi$ (up to a given error), any bisimilar state $qi \crysim{}{} q$ must also satisfy $\phi$, at the cost of a slight increase of the error. More precisely, if $\phi$ only involves until operators bounded by $n$, state $q$ satisfies $\phi$ up to some error, and bisimilarity holds for enough steps and error $\delta$, then $qi$ satisfies $\phi$ with an \emph{additional} asymptotic error $O(n\cdot\delta)$. This asymptotic behaviour is compatible with the usual assumptions of computational security in cryptography. 
There, models of security protocols include a security parameter $\eta$, which affects the length of the cryptographic keys and the running time of the protocol: more precisely, a protocol is assumed to run for $n(\eta)$ steps, which is polynomially bounded w.r.t.\@\xspace $\eta$. As already mentioned above, cryptographic notions of security do not observe the behaviour of the systems after this bound $n(\eta)$, since in the long run an adversary can surely guess the secret keys by brute force. Coherently, a protocol is considered to be secure if (roughly) its actual behaviour is \emph{approximately} equivalent to the ideal one for $n(\eta)$ steps and up to an error $\delta(\eta)$, which has to be a negligible function, asymptotically approaching zero faster than the reciprocal of any polynomial. Under these bounds on $n$ and $\delta$, the asymptotic error $O(n\cdot\delta)$ in our soundness theorem is negligible in $\eta$. Consequently, if two states $q$ and $qi$ represent the ideal and actual behaviour, respectively, and they are bisimilar up to a negligible error, they will satisfy the same PCTL formulae with a negligible error. We formalise this reasoning by providing a notion of \emph{asymptotic equivalence}. We start by considering families of states $\Xi(\eta)$, intuitively representing the behaviour of a system depending on a security parameter $\eta$. Our asymptotic equivalence $\Xi_1 \equiv \Xi_2$ holds whenever the behaviour of the two families is $n,\delta$-bisimilar within a negligible error when only a polynomial number of steps is performed. We further introduce an \emph{asymptotic satisfaction relation} $\Xi \models \phi$ which holds whenever the state $\Xi(\eta)$ satisfies $\phi$ under similar assumptions on the number of steps and the allowed error. Our second main result is the soundness of the asymptotic equivalence with respect to asymptotic satisfaction. Asymptotically equivalent families asymptotically satisfy the same PCTL formulae. 
We provide a detailed overview of the proof of our soundness theorem for $n,\delta$-bisimilarity in~\autoref{sec:result}, deferring the gory technicalities to~\autoref{sec:proofs}. The proof of asymptotic soundness, which exploits the soundness theorem for $n,\delta$-bisimilarity, is given in~\autoref{sec:asymptotic}. \section{Related work} There is a well-established line of research on establishing soundness and completeness of probabilistic bisimulations against various kinds of probabilistic logics \cite{Desharnais10iandc,FurberMM19lmcs,Hermanns11iandc,Larsen91iandc,Mio17fuin,Mio18lics}. The work closest to ours is that of D'Innocenzo, Abate and Katoen~\cite{DInnocenzo12hscc}, which addresses the model checking problem on a relaxed PCTL differing from ours in a few aspects. First, their syntax allows for an individual bound on the number of steps $k$ for each until operator ${\sf U}^{\leq k}$, while we assume all such bounds are equal and we make the semantics of PCTL parametric w.r.t.\@\xspace the number of steps to be considered in the until. This approach allows us to simplify the statement of the soundness theorem and the definition of the asymptotic satisfaction relation, since the bound is not fixed by the formula, but is a parameter of the semantics. Dealing with the case where each until in a formula has its own bound seems possible, at the cost of increasing the level of technicalities. Second, their main result shows that bisimilar states up-to a given error $\epsilon$ satisfy the same formulae $\psi$, provided that $\psi$ ranges over the so-called $\epsilon$-robust formulae. Instead, our soundness result applies to \emph{all} PCTL formulae, and ensures that when moving from a state satisfying $\phi$ to a bisimilar one, $\phi$ is still satisfied, but at the cost of slightly increasing the error. Third, their relaxed semantics differs from ours. In ours, we relax all the probability bounds by the same amount $\delta$. 
Instead, the relaxation in~\cite{DInnocenzo12hscc} affects the bounds by a different amount which depends on the error~$\epsilon$, the until bound $k$, and the underlying DTMC. Desharnais, Laviolette and Tracol~\cite{Desharnais08qest} use a coinductive approximate probabilistic bisimilarity, up-to an error $\delta$. Using such coinductive bisimilarity, \cite{Desharnais08qest} establishes the soundness and completeness with respect to a Larsen-Skou logic~\cite{Larsen91iandc} (instead of PCTL). In~\cite{Desharnais08qest}, a bounded, up-to $n,\delta$ version of bisimilarity is only briefly used to derive a decision algorithm for coinductive bisimilarity under the assumption that the state space is finite. In our work, instead, the bounded up-to $n,\delta$ bisimilarity is the main focus of study. In particular, our soundness result only assumes $n,\delta$ bisimilarity, which is strictly weaker than coinductive bisimilarity. Another minor difference is that~\cite{Desharnais08qest} considers a labelled Markov process, i.e.\@\xspace the probabilistic variant of a labelled transition system, while we instead focus on DTMCs having labels on states. Bian and Abate~\cite{Bian17fossacs} study bisimulation and trace equivalence up-to an error $\epsilon$, and show that $\epsilon$-bisimilar states are also $\epsilon'$-trace equivalent for a suitable $\epsilon'$ which depends on~$\epsilon$. Furthermore, they show that $\epsilon$-trace equivalent states satisfy the same formulae in a bounded LTL, up-to a certain error. In our work, we focus instead on the branching logic PCTL. A related research line is that on \emph{bisimulation metrics} \cite{Breugel17siglog,BreugelHMW05icalp,BreugelW05tcs}. Some of these metrics, like our up-to bisimilarity, take approximations into account~\cite{Desharnais99concur,Castiglioni16qapl}. 
Similarly to our bisimilarity, bisimulation metrics allow one to establish that two states are equivalent up-to a certain error (but usually do not take into account the bound on the number of steps). Interestingly, Castiglioni, Gebler and Tini~\cite{Castiglioni16qapl} introduce a notion of distance between Larsen-Skou formulae, and prove that the bisimulation distance between two processes corresponds to the distance between their mimicking formulae. De Alfaro, Majumdar, Raman and Stoelinga~\cite{deAlfaroMRS08} elegantly characterise bisimulation metrics with a quantitative $\mu$-calculus. Such a logic allows one to specify interesting properties such as maximal reachability and safety probability, and the maximal probability of satisfying a general $\omega$-regular specification, but not full PCTL. Mio~\cite{Mio14fossacs} characterises a bisimulation metric based on total variability with a more general quantitative $\mu$-calculus, dubbed {\L}ukasiewicz $\mu$-calculus, able to encode PCTL. Neither \cite{deAlfaroMRS08} nor~\cite{Mio14fossacs} takes the number of steps into account; therefore, their applicability to the analysis of security protocols is yet to be investigated. Metrics with discount~\cite{Desharnais04tcs,deAlfaro03icalp,Bacci21lmcs,DengCPP06entcs,BreugelSW08lmcs} are sometimes used to relate the behaviour of probabilistic processes, weighing less those events that happen in the far future compared to those happening in the first steps. Often, in these metrics each step causes the probability of the next events to be multiplied by a constant factor $c < 1$, in order to diminish their importance. Note that, with this discount, after $\eta$ steps the diminishing factor becomes $c^\eta$, which is a negligible function of $\eta$. As discussed before, in cryptographic security one needs to consider as important those events happening within polynomially many steps, while neglecting the ones after such a polynomial threshold. 
Using an exponential discount factor $c^\eta$ after only $\eta$ steps goes against this principle, since it would cause a secure system to be at a negligible distance from an insecure one which can be violated after just $\eta$ steps. For this reason, instead of using a metric with discount, in this paper we resort to a bisimilarity that is parametrized over the number of steps $n$ and error $\delta$, allowing us to obtain a notion which distinguishes between the mentioned secure and insecure systems. Several works develop algorithms to decide probabilistic bisimilarity, and to compute metrics \cite{BreugelW14birthday,ChenBW12fossacs,Fu12icalp,TangB16concur,TangB17concur,TangB18cav}. To this purpose, they restrict to finite-state systems, like e.g.\@\xspace probabilistic automata. Our results, instead, apply also to infinite-state systems. In \cite{ZuninoD05} a calculus with cryptographic primitives is introduced, together with a semantics where attackers have a probability $\pi(\eta)$ of guessing encryption keys. It is shown that, assuming that $\pi(\eta)$ is negligible and that attackers run in polynomial time, some security properties (e.g.\@\xspace secrecy, authentication) are equivalent to the analogous properties with standard Dolev-Yao assumptions (that is, attackers never guess keys but are not restricted to polynomial time). This result can be seen as a special case of our asymptotic soundness theorem. The interesting work \cite{LagoG22} proposes a behavioural notion of indistinguishability between session typed probabilistic $\pi$-calculus processes, with the aim of providing a formal system for proving security of real cryptographic protocols by comparison with ideal ones. The type system, which is based on bounded linear logic \cite{GirardSS92,LagoG16}, guarantees that processes terminate in polynomial time. This differs from our approach, where polynomiality appears directly in the equivalence definition (\autoref{def:crysim}). 
Moreover, the calculus of \cite{LagoG22} is quite restrictive: for instance, it is not possible to specify adversaries that access an oracle a polynomial number of times. By contrast, our abstract model is general enough to represent such adversaries. \paragraph{Comparison with~\cite{BMZ22coordination}} This paper extends the work~\cite{BMZ22coordination} in two directions. First, the current paper includes the proofs of all statements, which were not present in~\cite{BMZ22coordination}. Second, in~\cite{BMZ22coordination} we hinted at the possible application of soundness to the asymptotic behaviour of systems which depend on a parameter $\eta$. Here, we properly develop and formalise that intuition in~\autoref{sec:asymptotic}, providing a new asymptotic soundness result. \section{The probabilistic temporal logic PCTL} Assume a set $\mathcal{L}$ of labels, ranged over by $l$, and let $\delta,\pi$ range over non-negative reals. A \emph{discrete-time Markov chain} (DTMC) is a standard model of probabilistic systems. Throughout this paper, we consider a DTMC having a countable, possibly infinite, set of states, ranged over by $q$, each carrying a subset of labels $\ell(q) \subseteq \mathcal{L}$. \begin{defi}[Discrete-Time Markov Chain] \label{def:pctl:dtmc} A (labelled) DTMC is a triple $(\mathcal{Q}, \Pr, \ell)$ where: \begin{itemize} \item $\mathcal{Q}$ is a countable set of states; \item $\Pr : \mathcal{Q}^2 \to [0,1]$ is the transition probability function; \item $\ell : \mathcal{Q} \to \mathcal{P}(\mathcal{L})$ is the labelling function. \end{itemize} Given $q \in \mathcal{Q}$ and $Q \subseteq \mathcal{Q}$, we write $\tsPr{q}{Q}$ for $\sum_{qi \in Q} \tsPr{q}{qi}$ and we require that $\tsPr{q}{\mathcal{Q}}=1$ for all $q\in\mathcal{Q}$. \end{defi} A \emph{trace} is an infinite sequence of states $t = q_0q_1\cdots$, where we write $t(i)$ for $q_i$, i.e.\@\xspace the $i$-th element of $t$. 
A \emph{trace fragment} is a finite, non-empty sequence of states $\tilde{t} = q_0 \cdots q_{n-1}$, where $\card{\tilde{t}}= n\geq 1$ is its length. Given a trace fragment $\tilde{t}$ and a state $q$, we write $\tilde{t}q^\omega$ for the trace $\tilde{t}qqq\cdots$. It is well-known that, given an initial state $q_0$, the DTMC induces a $\sigma$-algebra of measurable sets of traces $T$ starting from $q_0$, i.e.\@\xspace~the $\sigma$-algebra generated by cylinder sets~\cite{BaierKatoen08}. In more detail, given a trace fragment $\tilde{t} = q_0 \cdots q_{n-1}$, its \emph{cylinder set} \[ \cyl{\tilde{t}} \; = \; \setcomp{t}{\text{$\tilde{t}$ is a prefix of $t$}} \] is given probability: \[ \Pr(\cyl{\tilde{t}}) \; = \; \prod_{i=0}^{n-2} \tsPr{q_i}{q_{i+1}} \] As usual, if $n=1$ the product is empty and evaluates to $1$. Closing the family of cylinder sets under countable unions and complement we obtain the family of measurable sets. The probability measure on cylinder sets then uniquely extends to all the measurable sets. Given a set of trace fragments $\tilde{T}$, all starting from the same state $q_0$ and having the same length, we let \( \Pr(\tilde{T}) = \Pr(\bigcup_{\tilde{t}\in\tilde{T}} \cyl{\tilde{t}}) = \sum_{\tilde{t}\in\tilde{T}} \Pr(\cyl{\tilde{t}}) \). Note that using same-length trace fragments ensures that their cylinder sets are disjoint, hence the second equality holds. Below, we define PCTL formulae. Our syntax is mostly standard, except for the \emph{until} operator. There, for the sake of simplicity, we do not bound the number of steps in the syntax $\phi_1\ {\sf U}\ \phi_2$, but we do so in the semantics. Concretely, this amounts to imposing the same bound on \emph{all} the occurrences of ${\sf U}$ in the formula. This bound is then provided as a parameter to the semantics. 
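Before turning to the syntax of PCTL, we note that the measure on cylinder sets is easy to experiment with. The following Python sketch (ours, not part of the formal development; the two-state chain and its state names are our own choice, patterned after the fair-coin chain of the Introduction) computes cylinder-set probabilities and checks that same-length fragments from a common initial state have disjoint cylinders whose probabilities sum to $1$.

```python
# A minimal sketch (ours, not the paper's) of the cylinder-set probability
# Pr(Cyl(q0 ... q_{n-1})) = prod_i P(q_i, q_{i+1}), on a two-state
# fair-coin chain; the state names "a"/"b" are our own.
from itertools import product

P = {(x, y): 0.5 for x in "ab" for y in "ab"}  # all four transitions: 1/2

def pr_cylinder(fragment):
    """Probability of Cyl(fragment); a length-1 fragment yields the empty
    product, i.e. probability 1."""
    p = 1.0
    for q, qq in zip(fragment, fragment[1:]):
        p *= P[(q, qq)]
    return p

# Same-length fragments from "a" have pairwise disjoint cylinders, so
# their probabilities sum to Pr(Cyl(("a",))) = 1.
frags = [("a",) + rest for rest in product("ab", repeat=2)]
total = sum(pr_cylinder(f) for f in frags)
```

Closing under countable unions and complement, as above, extends this assignment to all measurable sets.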
\begin{defi}[PCTL Syntax] The syntax of PCTL is given by the following grammar, defining \emph{state formulae} $\phi$ and \emph{path formulae} $\psi$: \begin{align*} \phi & ::= l \mid {\sf true} \mid \lnot \phi \mid \phi \land \phi \mid \logPr{\rhd \pi}{\psi} \qquad \mbox{ where } \rhd \in \setenum{>,\geq} \\ \psi & ::= {\sf X}\ \phi \mid \phi\ {\sf U}\ \phi \end{align*} As syntactic sugar, we write $\logPr{< \pi}{\psi}$ for $\lnot\logPr{\geq \pi}{\psi}$, and $\logPr{\leq \pi}{\psi}$ for $\lnot\logPr{> \pi}{\psi}$. \end{defi} Given a PCTL formula $\phi$, we define its maximum ${\sf X}$-nesting $\nestMax{{\sf X}}{\phi}$ and its maximum ${\sf U}$-nesting $\nestMax{{\sf U}}{\phi}$ inductively as follows: \begin{defi}[Maximum Nesting] For $\circ \in \setenum{{\sf X},{\sf U}}$, we define: \[ \begin{array}{c} \nestMax{\circ}{l} = 0 \qquad \nestMax{\circ}{\sf true} = 0 \qquad \nestMax{\circ}{\lnot\phi} = \nestMax{\circ}{\phi} \\[8pt] \nestMax{\circ}{\phi_1 \land \phi_2} = \max(\nestMax{\circ}{\phi_1},\nestMax{\circ}{\phi_2}) \qquad \nestMax{\circ}{\logPr{\rhd \pi}{\psi}} = \nestMax{\circ}{\psi} \\[8pt] \nestMax{\circ}{{\sf X} \phi} = \nestMax{\circ}{\phi} + \begin{cases} 1 & \text{if $\circ = {\sf X}$} \\ 0 & \text{otherwise} \end{cases} \\[16pt] \nestMax{\circ}{\phi_1 {\sf U} \phi_2} = \max(\nestMax{\circ}{\phi_1},\nestMax{\circ}{\phi_2}) + \begin{cases} 1 & \text{if $\circ = {\sf U}$} \\ 0 & \text{otherwise} \end{cases} \end{array} \] \end{defi} We now define a semantics for PCTL where the probability bounds $\rhd \pi$ in $\logPr{\rhd \pi}{\psi}$ can be relaxed or strengthened by an error $\delta$. Our semantics is parameterized over the \emph{until} bound $n$, the error $\delta\in\mathbb{R}^{\geq 0}$, and a direction $r\in\setenum{+1,-1}$. Given the parameters, the semantics associates each PCTL state formula with the set of states satisfying it. 
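The nesting functions are directly computable. As an informal illustration (the tuple encoding of formulae below is our own convention, not the paper's notation), the inductive clauses transcribe to:

```python
# Informal transcription (ours) of the maximum-nesting functions; formulae
# are encoded as tuples: ("lbl", l), ("true",), ("not", f), ("and", f, g),
# ("P", rhd, pi, psi), ("X", f), ("U", f, g). The encoding is our own.

def nest(op, phi):
    """Maximum nesting of op in phi, with op either "X" or "U"."""
    tag = phi[0]
    if tag in ("lbl", "true"):
        return 0
    if tag == "not":
        return nest(op, phi[1])
    if tag == "and":
        return max(nest(op, phi[1]), nest(op, phi[2]))
    if tag == "P":
        return nest(op, phi[3])
    if tag == "X":
        return nest(op, phi[1]) + (1 if op == "X" else 0)
    if tag == "U":
        return max(nest(op, phi[1]), nest(op, phi[2])) + (1 if op == "U" else 0)

# phi = P_{>= 0.5}( true U (X l) ): one until, one next.
phi = ("P", ">=", 0.5, ("U", ("true",), ("X", ("lbl", "l"))))
```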
Intuitively, when $r = +1$ we relax the semantics of the formula, so that increasing $\delta$ causes more states to satisfy it. More precisely, the probability bounds $\rhd\pi$ in positive occurrences of $\logPr{\rhd \pi}{\psi}$ are decreased by $\delta$, while those in negative occurrences are increased by $\delta$. Dually, when $r = -1$ we strengthen the semantics, modifying $\rhd\pi$ in the opposite direction. Our semantics is inspired by the relaxed / strengthened PCTL semantics of~\cite{DInnocenzo12hscc}. \begin{defi}[PCTL Semantics] \label{def:pctl:sem} The semantics of PCTL formulae is given below. Let $n \in \mathbb{N}$, $\delta\in\mathbb{R}^{\geq 0}$ and $r \in \setenum{+1,-1}$. \[ \begin{array}{ll} \sem{n}{\delta}{r}{l} &= \setcomp{q\in\mathcal{Q}}{l\in\ell(q)} \\ \sem{n}{\delta}{r}{\sf true} &= \mathcal{Q} \\ \sem{n}{\delta}{r}{\lnot\phi} &= \mathcal{Q} \setminus \sem{n}{\delta}{-r}{\phi} \\ \sem{n}{\delta}{r}{\phi_1 \land \phi_2} &= \sem{n}{\delta}{r}{\phi_1} \cap \sem{n}{\delta}{r}{\phi_2} \\ \sem{n}{\delta}{r}{\logPr{\rhd \pi}{\psi}} &= \setcomp{q\in\mathcal{Q}}{ \Pr(\trStart{q} \cap \sem{n}{\delta}{r}{\psi}) + r \cdot \delta \rhd \pi } \\ \sem{n}{\delta}{r}{{\sf X} \phi} &= \setcomp{t}{t(1) \in \sem{n}{\delta}{r}{\phi}} \\ \sem{n}{\delta}{r}{\phi_1 {\sf U} \phi_2} &= \setcomp{t}{ \exists i\in 0..n.\ t(i) \in \sem{n}{\delta}{r}{\phi_2} \land \forall j\in 0..i-1.\ t(j) \in \sem{n}{\delta}{r}{\phi_1}} \end{array} \] \end{defi} The semantics is mostly standard, except for $\logPr{\rhd \pi}{\psi}$ and $\phi_1 {\sf U} \phi_2$. The semantics of $\logPr{\rhd \pi}{\psi}$ adds $r\cdot\delta$ to the probability of satisfying $\psi$, which relaxes or strengthens (depending on $r$) the probability bound as needed. The semantics of $\phi_1 {\sf U} \phi_2$ uses the parameter $n$ to bound the number of steps within which $\phi_2$ must hold. Our semantics enjoys monotonicity. 
The semantics of state and path formulae is increasing w.r.t.\@\xspace~$\delta$ if $r = +1$, and decreasing otherwise. The semantics also increases when moving from $r=-1$ to $r=+1$. \begin{lem}[Monotonicity] \label{lem:pctl:monotonicity} Whenever $\delta \leq \deltai$, we have: \begin{align*} & \sem{n}{\delta}{+1}{\phi} \subseteq \sem{n}{\deltai}{+1}{\phi} && \sem{n}{\deltai}{-1}{\phi} \subseteq \sem{n}{\delta}{-1}{\phi} && \sem{n}{\delta}{-1}{\phi} \subseteq \sem{n}{\delta}{+1}{\phi} \\ & \sem{n}{\delta}{+1}{\psi} \subseteq \sem{n}{\deltai}{+1}{\psi} && \sem{n}{\deltai}{-1}{\psi} \subseteq \sem{n}{\delta}{-1}{\psi} && \sem{n}{\delta}{-1}{\psi} \subseteq \sem{n}{\delta}{+1}{\psi} \end{align*} \end{lem} Note that monotonicity does \emph{not} hold for the parameter $n$, i.e.\@\xspace even if $n \leq ni$, we can \emph{not} conclude $\sem{n}{\delta}{+1}{\phi} \subseteq \sem{ni}{\delta}{+1}{\phi}$. As a counterexample, let $\mathcal{Q} = \setenum{q_0, q_1}$, $\ell(q_0)=\emptyset$, $\ell(q_1)=\setenum{\sf a}$, $\tsPr{q_0}{q_1}=\tsPr{q_1}{q_1}=1$, and $\tsPr{q}{q'}=0$ elsewhere. Given $\phi=\logPr{\leq 0}{{\sf true}\ {\sf U}\ {\sf a}}$, we have $q_0 \in \sem{0}{0}{+1}{\phi}$ since in $n=0$ steps it is impossible to reach a state satisfying $\sf a$. However, we do \emph{not} have $q_0 \in \sem{1}{0}{+1}{\phi}$ since in $ni=1$ steps we always reach $q_1$, which satisfies $\sf a$. \begin{figure} \caption{A Markov chain modelling an ideal (left) and a real (right) padlock.} \label{fig:padlock} \end{figure} \begin{exa}\label{ex:pctl:padlock} We compare an ideal combination padlock to a real one from the point of view of an adversary. The ideal padlock has a single state $q_{\sf ok}$, representing a closed padlock that cannot be opened. Instead, the real padlock is under attack from the adversary who tries to open the padlock by repeatedly guessing its 5-digit PIN. 
At each step the adversary generates a (uniformly) random PIN, different from all the ones which have been attempted so far, and tries to open the padlock with it. The states of the real padlock are $q_0,\ldots,q_{N-1}$ (with $N=10^5$), where $q_i$ represents the situation where $i$ unsuccessful attempts have been made, and an additional state $q_{\sf err}$ that represents that the padlock was opened. Since after $i$ attempts the adversary needs to guess the correct PIN among the $N-i$ remaining combinations, the real padlock in state $q_i$ moves to $q_{\sf err}$ with probability $1/(N-i)$, and to $q_{i+1}$ with the complementary probability. Summing up, we simultaneously model both the ideal and real padlock as a single DTMC with the following transition probability function (see~\autoref{fig:padlock}): \[ \begin{array}{l@{\qquad}l} \Pr(q_{\sf ok},q_{\sf ok})=1 \\ \Pr(q_{\sf err},q_{\sf err})=1 \\ \Pr(q_i,q_{\sf err}) = 1/(N-i) & 0\leq i<N \\ \Pr(q_i,q_{i+1}) = 1-1/(N-i) & 0\leq i<N-1 \\ \Pr(q,qi) = 0 & \text{otherwise} \end{array} \] We label the states with labels $\mathcal{L}=\setenum{\sf err}$ by letting $\ell(q_{\sf err})=\setenum{\sf err}$ and $\ell(q)=\emptyset$ for all $q \neq q_{\sf err}$. The PCTL formula $\phi = \logPr{\leq 0}{{\sf true}\ {\sf U}\ {\sf err}}$ models the expected behaviour of an unbreakable padlock, requiring that the set of traces where the padlock is eventually opened has zero probability. 
Formally, $\phi$ is satisfied by state $q$ when \begin{align} \nonumber q \in \sem{n}{\delta}{+1}{\phi} & \iff q \in \sem{n}{\delta}{+1}{\lnot \logPr{> 0}{{\sf true}\ {\sf U}\ {\sf err}}} \\ \nonumber & \iff q \notin \sem{n}{\delta}{-1}{\logPr{> 0}{{\sf true}\ {\sf U}\ {\sf err}}} \\ \nonumber & \iff \lnot ( \Pr(\trStart{q} \cap \sem{n}{\delta}{-1}{{\sf true}\ {\sf U}\ {\sf err}}) - \delta > 0 ) \\ \label{eq:padlock-pr} & \iff \Pr(\trStart{q} \cap \sem{n}{\delta}{-1}{{\sf true}\ {\sf U}\ {\sf err}}) \leq \delta \end{align} When $q=q_{\sf ok}$ we have that $\trStart{q_{\sf ok}} \cap\sem{n}{\delta}{-1}{{\sf true}\ {\sf U}\ {\sf err}} = \emptyset$, hence the above probability is zero, which is surely $\leq \delta$. Consequently, $\phi$ is satisfied by the ideal padlock $q_{\sf ok}$, for all $n\geq 0$ and $\delta\geq 0$. By contrast, $\phi$ is not always satisfied by the real padlock $q=q_0$, since we have $q_0\in \sem{n}{\delta}{+1}{\phi}$ only for some values of $n$ and $\delta$. To show why, we start by considering some trivial cases. Choosing $\delta=1$ makes equation~\eqref{eq:padlock-pr} trivially true for all $n$. Furthermore, if we choose $n=1$, then $\trStart{q_0} \cap\sem{n}{\delta}{-1}{{\sf true}\ {\sf U}\ {\sf err}} = \setenum{q_0q_{\sf err}^\omega}$ is a set of traces with probability $1/N$. Therefore, equation~\eqref{eq:padlock-pr} holds only when $\delta\geq 1/N$. More in general, when $n\geq 1$, we have \[ \trStart{q_0} \cap \sem{n}{\delta}{-1}{{\sf true}\ {\sf U}\ {\sf err}} = \setenum{q_0q_{\sf err}^\omega,\ q_0q_1q_{\sf err}^\omega,\ q_0q_1q_2q_{\sf err}^\omega,\ \ldots,\ q_0\ldots q_{n-1}q_{\sf err}^\omega } \] The probability of the above set is the probability of guessing the PIN within $n$ steps. 
The complementary event, i.e.\@\xspace not guessing the PIN for $n$ times, has probability \[ \dfrac{N-1}{N} \cdot \dfrac{N-2}{N-1} \cdots \dfrac{N-n}{N-(n-1)} = \dfrac{N-n}{N} \] Consequently, \eqref{eq:padlock-pr} simplifies to $n/N \leq \delta$, suggesting the least value of $\delta$ (depending on $n$) for which $q_0$ satisfies $\phi$. For instance, when $n=10^3$, this amounts to claiming that the real padlock is secure, up to an error of $\delta = n/N = 10^{-2}$. \end{exa} \section{Up-to-$n,\delta$ Bisimilarity} We now define a relation on states $q \crysim{n}{\delta} qi$ that intuitively holds whenever $q$ and $qi$ exhibit similar behaviour for a bounded number of steps. The parameter $n$ controls the number of steps, while $\delta$ controls the error allowed in each step. Note that, since we only observe the first $n$ steps, our notion is \emph{inductive} (similarly to~\cite{Castiglioni16qapl}), unlike unbounded bisimilarity, which is coinductive. Our notion is also inspired by~\cite{Desharnais08qest}. \begin{defi}[Up-to-$n,\delta$ Bisimilarity] \label{def:param-bisim} We define the relation $q \crysim{n}{\delta} qi$ as follows by induction on $n$: \begin{enumerate} \item $q \crysim{0}{\delta} qi$ always holds \item $q \crysim{n+1}{\delta} qi$ holds if and only if, for all $Q \subseteq \mathcal{Q}$: \begin{enumerate} \item\label{def:param-bisim:a} $\ell(q) = \ell(qi)$ \item\label{def:param-bisim:b} $\tsPr{q}{Q} \leq \tsPr{qi}{\cryset{n}{\delta}{Q}} + \delta$ \item\label{def:param-bisim:c} $\tsPr{qi}{Q} \leq \tsPr{q}{\cryset{n}{\delta}{Q}} + \delta$ \end{enumerate} \end{enumerate} where $\cryset{n}{\delta}{Q} = \setcomp{qi}{\exists q \in Q.\ q \crysim{n}{\delta} qi}$ is the image of the set $Q$ according to the bisimilarity relation. \end{defi} We now establish two basic properties of the bisimilarity. Our notion is reflexive and symmetric, and enjoys a triangular property. Furthermore, it is monotonic in both $n$ and $\delta$. 
\begin{lem} The relation $\crysim{}{}$ satisfies: \[ q \crysim{n}{\delta} q \qquad\quad q \crysim{n}{\delta} qi \implies qi \crysim{n}{\delta} q \qquad\quad q \crysim{n}{\delta} qi \land qi \crysim{n}{\deltai} qii \implies q \crysim{n}{\delta+\deltai} qii \] \end{lem} \begin{proof} Straightforward induction on $n$. \end{proof} \begin{lem}[Monotonicity] \label{lem:sim:monotonicity} \begin{align*} ni \leq n & \;\;\implies\;\; \crysim{n}{\delta} \;\;\subseteq\;\; \crysim{ni}{\delta} \\ \delta \leq \deltai & \;\;\implies\;\; \crysim{n}{\delta} \;\;\subseteq\;\; \crysim{n}{\deltai} \end{align*} \end{lem} \begin{exa} \label{ex:sim:padlock} We use up-to-$n,\delta$ bisimilarity to compare the behaviour of the ideal padlock $q_{\sf ok}$ and the real one, in any of its states, when observed for $n$ steps. When $n=0$ bisimilarity trivially holds, so below we only consider $n>0$. We start from the simplest case: bisimilarity does not hold between $q_{\sf ok}$ and $q_{\sf err}$. Indeed, $q_{\sf ok}$ and $q_{\sf err}$ have distinct labels ($\ell(q_{\sf ok})=\emptyset\neq\setenum{{\sf err}}=\ell(q_{\sf err})$), hence we do not have $q_{\sf ok} \crysim{n}{\delta} q_{\sf err}$, no matter what $n>0$ and $\delta$ are. We now compare $q_{\sf ok}$ with any $q_i$. When $n=1$, both states have an empty label set, i.e.~$\ell(q_{\sf ok})=\ell(q_i)=\emptyset$, hence they are bisimilar for any error $\delta$. We can therefore write $q_{\sf ok} \crysim{1}{\delta} q_i$ for any $\delta\geq 0$. When $n=2$, we need a larger error $\delta$ to make $q_{\sf ok}$ and $q_i$ bisimilar. Indeed, if we perform a move from $q_i$, the padlock can be broken with probability $1/(N-i)$, in which case we reach $q_{\sf err}$, thus violating bisimilarity. Accounting for such probability, we only obtain $q_{\sf ok} \crysim{2}{\delta} q_i$ for any $\delta\geq 1/(N-i)$. When $n=3$, we need an even larger error $\delta$ to make $q_{\sf ok}$ and $q_i$ bisimilar. 
Indeed, while the first PIN guessing attempt has probability $1/(N-i)$, in the second move the guessing probability increases to $1/(N-i-1)$. Choosing $\delta$ equal to the largest probability is enough to account for both moves, hence we obtain $q_{\sf ok} \crysim{3}{\delta} q_i$ for any $\delta\geq 1/(N-i-1)$. Technically, note that the denominator $N-i-1$ might be zero, since when $i=N-1$ the first move always guesses the PIN, and the second guess never actually happens. In that case, we instead take $\delta=1$. In more detail, we verify item~\eqref{def:param-bisim:b} of \autoref{def:param-bisim} for $q_{\sf ok} \crysim{3}{\delta} q_i$, assuming $\delta\geq 1/(N-i-1)$. We must prove that: \[ \tsPr{q_{\sf ok}}{Q} \leq \tsPr{q_i}{\cryset{2}{\delta}{Q}} + \delta \] When $q_{\sf ok} \not\in Q$ we have $\tsPr{q_{\sf ok}}{Q} = 0$, hence the inequality holds trivially. Otherwise, if $q_{\sf ok} \in Q$ we first observe that $\tsPr{q_{\sf ok}}{Q} = 1$. From the case $n = 2$, we have $q_{\sf ok} \crysim{2}{\delta} q_{i+1}$, since $\delta \geq 1/(N-(i+1))$. Hence, $q_{i+1} \in \;\cryset{2}{\delta}{Q}$ and so: \[ \tsPr{q_i}{\cryset{2}{\delta}{Q}} + \delta \geq \tsPr{q_i}{\setenum{q_{i+1}}} + \delta = 1 - \dfrac{1}{N-i} + \delta \geq 1 - \dfrac{1}{N-i} + \dfrac{1}{N-i-1} \geq 1 \] This proves item~\eqref{def:param-bisim:b}; the proof for item~\eqref{def:param-bisim:c} is similar. More generally, for an arbitrary $n\geq 2$, we obtain through a similar argument that $q_{\sf ok} \crysim{n}{\delta} q_i$ for any $\delta\geq 1/(N-i-n+2)$. Intuitively, $\delta=1/(N-i-n+2)$ is the probability of guessing the PIN in the last attempt (the $n$-th), which is the attempt having the highest success probability. Again, when the denominator $N-i-n+2$ becomes zero (or negative), we instead take $\delta=1$. \end{exa} Note that the DTMC of the ideal and real padlocks (Example~\ref{ex:pctl:padlock}) has finitely many states. 
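Finiteness also makes the numeric claims of the example easy to check mechanically. The following Python sketch (ours; we use a small $N$ in place of $10^5$, as the claims do not depend on the specific value) computes the exact probability of opening the padlock within $n$ steps, and the largest single-step guessing probability $1/(N-i-n+2)$ used as the error above:

```python
# Informal check (ours) of two claims from the padlock example; N = 50
# stands in for 10^5, and exact rationals avoid rounding issues.
from fractions import Fraction

N = 50

def pr_open_within(i, n):
    """Exact probability of reaching q_err from q_i within n steps."""
    if n == 0 or i >= N:
        return Fraction(0)
    guess = Fraction(1, N - i)              # guess the PIN at this step
    return guess + (1 - guess) * pr_open_within(i + 1, n - 1)

def worst_step_guess(i, n):
    """Largest guessing probability among the first n-1 moves from q_i,
    i.e. 1/(N-i-n+2): the error used for q_ok ~_{n,delta} q_i (n >= 2)."""
    return max(Fraction(1, N - j) for j in range(i, i + n - 1))
```

In particular, `pr_open_within(0, n)` equals $n/N$ exactly, matching the computation in the previous example.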
Our bisimilarity notion and results, however, can also deal with DTMCs with a countably infinite set of states, as we show in the next example. \begin{figure} \caption{A Markov chain modelling an unfair random generator of bit streams.} \ellel{fig:pingpong} \end{figure} \begin{exa} \ellel{ex:sim:pingpong} We consider an ideal system which randomly generates bit streams in a fair way. We model such a system as having two states $\setenum{q_a,q_b}$, with transition probabilities $\Pr(x,y)=1/2$ for any $x,y \in \setenum{q_a,q_b}$, as in~\autoref{fig:fair-coin}. We label state $q_a$ with label $\sf a$ denoting bit $0$, and state $q_b$ with label $\sf b$ denoting bit $1$. We compare this ideal system with a real system which generates bit streams in an unfair way. At each step, the real system draws a ball from an urn, initially having $g_0$ $\sf a$-labelled balls and $g_0$ $\sf b$-labelled balls. After each drawing, the ball is placed back in the urn. However, every time an $\sf a$-labelled ball is drawn, an additional $\sf a$-labelled ball is put in the urn, making the next drawings more biased towards $\sf a$. We model the real system using the infinite\footnote{ Modelling this behaviour inherently requires an \emph{infinite} set of states, since each number of $\sf a$-labelled balls in the urn leads to a unique transition probability function.} set of states $\mathbb N \times \setenum{{\sf a},{\sf b}}$, whose first component counts the number of $\sf a$-labelled balls in the urn, and the second component is the label of the last-drawn ball. The transition probabilities are as follows, where $g_0\in\mathbb N^+$ (see~\autoref{fig:pingpong}): \[ \begin{array}{ll@{\qquad}l} \Pr((g,x),(g+1,{\sf a})) &= g / (g+g_0) \\ \Pr((g,x),(g,{\sf b})) &= g_0 / (g+g_0) \\ \Pr((g,x),(g',x')) &= 0 & \text{otherwise} \end{array} \] We label each such state with its second component. We now compare the ideal system to the real one.
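Before comparing the two systems, it may help to see the transition function in executable form. A minimal sketch (Python, exact rationals; the helper names are ours) encodes the probability of drawing $\sf a$ and bounds the bias reachable within $n$ steps:

```python
from fractions import Fraction

def p_draw_a(g, g0):
    # Probability that state (g, x) moves to (g+1, 'a'):
    # g a-labelled balls against g0 b-labelled balls.
    return Fraction(g, g + g0)

def worst_case_bias(g0, n):
    # Upper bound on the probability of drawing 'a' in any of the first
    # n steps: each step adds at most one a-ball, so g <= g0 + n.
    return p_draw_a(g0 + n, g0)

# The urn starts fair ...
assert p_draw_a(1000, 1000) == Fraction(1, 2)
# ... and within 100 steps (g0 = 1000) the bias stays within 0.05 of fair:
assert worst_case_bias(1000, 100) == Fraction(1100, 2100)
assert worst_case_bias(1000, 100) - Fraction(1, 2) < Fraction(1, 20)
```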
Intuitively, the ideal system, when started from state $q_a$, produces a sequence of states whose labels are uniform independent random values in $\setenum{{\sf a},{\sf b}}$. Instead, the real system slowly becomes more and more biased towards label $\sf a$. More precisely, when started from state $(g_0, {\sf a})$, in the first drawing the next label is uniformly distributed between ${\sf a}$ and ${\sf b}$, as in the ideal system. When the sampled state has label $\sf a$, this causes the component $g$ to be incremented, increasing the probability $g/(g+g_0)$ of sampling another $\sf a$ in the next steps. Indeed, the value $g$ is always equal to $g_0$ plus the number of sampled $\sf a$-labelled states so far. Therefore, unlike the ideal system, in the long run the real system will visit $\sf a$-labelled states with very high probability, since the $g$ component slowly but steadily increases. While this fact makes the two systems \emph{not} bisimilar according to the standard probabilistic bisimilarity~\cite{Larsen89popl}, if we restrict the number of steps to $n \ll g_0$ and tolerate a small error $\delta$, we can obtain $q_a \crysim{n}{\delta} (g_0,{\sf a})$. For instance, if we let $g_0=1000$, $n=100$ and $\delta=0.05$ we have $q_a \crysim{n}{\delta} (g_0,{\sf a})$. This is because, in $n$ steps, the first component $g$ of a real system state $(g,x)$ can reach at most $1100$, making the probability that the next step is $(g+1,{\sf a})$ at most $1100/2100\simeq 0.524$. This differs from the ideal probability $0.5$ by less than $\delta$, hence bisimilarity holds. \end{exa} \section{Soundness} \ellel{sec:result} Our soundness theorem shows that, if we consider any state $q$ satisfying $\phi$ (with steps $n$ and error $\deltai$), and any state $qi$ which is bisimilar to $q$ (with enough steps and error $\delta$), then $qi$ must satisfy $\phi$, with the same number $n$ of steps, at the cost of suitably increasing the error.
For a fixed $\phi$, the ``large enough'' number of steps and the increase in the error depend linearly on $n$. \begin{thm}[Soundness] \ellel{th:soundness} Let $k_X = \nestMax{{\sf X}}{\phi}$ be the maximum ${\sf X}$-nesting of a formula $\phi$, and let $k_U = \nestMax{{\sf U}}{\phi}$ be the maximum ${\sf U}$-nesting of $\phi$. Then, for all $n,\delta,\deltai$ we have: \[ \begin{array}{c} \cryset{nb}{\delta}{ \sem{n}{\deltai}{+1}{\phi}} \subseteq \sem{n}{nb\cdot\delta+\deltai}{+1}{\phi} \tag*{ where $nb = n\cdot k_U + k_X + 1$} \end{array} \] \end{thm} \begin{exa} \ellel{ex:results:padlock} We apply~\autoref{th:soundness} to our padlock system in the running example. We take the same formula $\phi = \logPr{\leq 0}{{\sf true}\ {\sf U}\ {\sf err}}$ of~\autoref{ex:pctl:padlock} and choose $n=10^3$ and $\deltai=0$. Since $\phi$ has only one until operator and no next operators, the value $nb$ in the theorem statement is $nb = 10^3\cdot 1+0+1 = 1001$. Therefore, from~\autoref{th:soundness} we obtain, for all $\delta$: \[ \begin{array}{ll} & \cryset{1001}{\delta}{ \sem{1000}{0}{+1}{\phi}} \subseteq \sem{1000}{1001\cdot \delta}{+1}{\phi} \end{array} \] In~\autoref{ex:pctl:padlock} we discussed how the ideal padlock $q_{\sf ok}$ satisfies the formula $\phi$ for any number of steps and any error value. In particular, choosing 1000 steps and zero error, we get $q_{\sf ok}\in \sem{1000}{0}{+1}{\phi}$. Moreover, in~\autoref{ex:sim:padlock} we observed that states $q_{\sf ok}$ and $q_0$ are bisimilar with $nb=1001$ and $\delta=1/(N-0-nb+2) = 1/99001$, i.e.\@\xspace~$q_{\sf ok} \crysim{nb}{\delta} q_0$. In this case, the theorem ensures that $q_0\in\sem{1000}{1001/99001}{+1}{\phi}$, hence the real padlock can be considered unbreakable if we limit our attention to the first $n=1000$ steps, up to an error of $1001/99001 \approx 0.010111$.
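The arithmetic in this example can be replayed with exact rationals; the sketch below (Python; the function name \texttt{nb} is ours) recomputes the number of bisimilarity steps and the final error:

```python
from fractions import Fraction

def nb(n, k_U, k_X):
    # Number of bisimilarity steps required by the soundness theorem.
    return n * k_U + k_X + 1

# phi = P_{<=0}(true U err): one until operator, no next operators.
n, k_U, k_X, N = 1000, 1, 0, 10**5
steps = nb(n, k_U, k_X)
assert steps == 1001

delta = Fraction(1, N - steps + 2)   # bisimilarity error from the example
assert delta == Fraction(1, 99001)

err = steps * delta                  # error in the conclusion of the theorem
assert err == Fraction(1001, 99001)
assert abs(float(err) - 0.010111) < 1e-6
```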
Finally, we note that this error is remarkably close to the least value that would still make $q_0$ satisfy $\phi$, which we computed in~\autoref{ex:pctl:padlock} as $n/N = 10^3/10^5 = 0.01$. \end{exa} In the rest of this section, we describe the general structure of the proof in a top-down fashion, leaving the detailed proof for~\autoref{sec:proofs}. We prove the soundness theorem by induction on the state formula $\phi$, hence we also need to deal with path formulae $\psi$. Note that the statement of the theorem considers the image of the semantics of the state formula $\phi$ w.r.t.~bisimilarity (i.e., $\cryset{nb}{\delta}{\sem{n}{\deltai}{+1}{\phi}}$). To deal with path formulae, we need an analogous notion on sets of traces. To this end, we consider the set of traces in the definition of the semantics: $T = \trStart{p} \cap \sem{n}{\delta}{r}{\psi}$. Then, given a state $q$ bisimilar to $p$, we define the set of \emph{pointwise bisimilar traces} starting from $q$, which we denote with $\TR{n}{\delta,\stateQ}{T}$. Technically, since $\psi$ can only observe a finite portion of a trace, it is enough to define $\TR{n}{\delta,\stateQ}{\tilde{T}}$ on sets of \emph{trace fragments} $\tilde{T}$. \begin{defi} Write $\fram{q_0}{n}$ for the set of all trace fragments of length $n$ starting from $q_0$.
Assuming $\stateP \CSim{n}{\delta} \stateQ$, we define $\TR{n}{\delta,\stateQ}{}: \mathcal{P}(\fram{\stateP}{n}) \rightarrow \mathcal{P}(\fram{\stateQ}{n})$ as follows: \[ \TR{n}{\delta,\stateQ}{\tilde{T}} = \setcomp{\tilde{u} \in \fram{\stateQ}{n}}{ \exists \tilde{t} \in \tilde{T}.\, \forall 0 \leq i < n.\ \tilde{t}(i) \CSim{n-i}{\delta} \tilde{u}(i) } \] \end{defi} \noindent In particular, notice that $\fram{q}{1} = \setenum{q}$ (the trace fragment of length 1), and so: \[ \TR{1}{\delta,\stateQ}{\emptyset} = \emptyset \qquad \TR{1}{\delta,\stateQ}{\setenum{\stateP}} = \setenum{\stateQ} \] The key inequality we exploit in the proof (\autoref{lem:traces}) compares the probability of a set of trace fragments $\tilde{T}$ starting from $\stateP$ to that of the related set of trace fragments $\TR{m}{\delta,\stateQ}{\tilde{T}}$ starting from a state $\stateQ$ bisimilar to $\stateP$. We remark that the component $nb\cdot\delta$ in the error that appears in~\autoref{th:soundness} results from the component $m \delta$ appearing in the following lemma. \begin{lem}\ellel{lem:traces} If $\stateP \CSim{n}{\delta} \stateQ$ and $\tilde{T}$ is a set of trace fragments of length $m$, with $m \leq n$, starting from $\stateP$, then: \[ \prob{\tilde{T}}{} \leq \prob{\TR{m}{\delta,\stateQ}{\tilde{T}}}{} + m \delta \] \end{lem} \autoref{lem:traces} allows $\tilde{T}$ to be an infinite set (because the set of states $\mathcal{Q}$ can be infinite). We reduce this case to that in which $\tilde{T}$ is finite. We first recall a basic calculus property: any inequality $a \leq b$ can be proved by establishing instead $a \leq b + \epsilon$ for all $\epsilon > 0$. Then, since the probability distribution of trace fragments of length $m$ is discrete, for any $\epsilon>0$ we can always take a finite subset of the infinite set $\tilde{T}$ whose probability differs from that of $\tilde{T}$ by less than $\epsilon$. It is then enough to consider the case in which $\tilde{T}$ is finite, as done in the following lemma.
\begin{lem}\ellel{lem:finite-traces} If $\stateP \CSim{n}{\delta} \stateQ$ and $\tilde{T}$ is a finite set of trace fragments of length $n > 0$ starting from $\stateP$, then: \[ \prob{\tilde{T}}{} \leq \prob{\TR{n}{\delta,\stateQ}{\tilde{T}}}{} + n \delta \] \end{lem} We prove~\autoref{lem:finite-traces} by induction on $n$. In the inductive step, we partition the traces according to their first move, i.e., on their next state after $\stateP$ (for the trace fragments in $\tilde{T}$) or $\stateQ$ (for the bisimilar counterparts). A main challenge here is that the probabilities of such moves are only weakly connected. Indeed, when $\stateP$ moves to $\statePi$, we might have several states $\stateQi$, bisimilar to $\statePi$, such that $\stateQ$ moves to $\stateQi$. Worse, when $\stateP$ moves to another state $\statePii$, we might find that some of the states $\stateQi$ we met before are also bisimilar to $\statePii$. Such overlaps make it hard to connect the probability of $\stateP$ moves to that of $\stateQ$ moves. To overcome these issues, we exploit the technical lemma below. Let the set $A$ represent the $\stateP$ moves, and the set $B$ represent the $\stateQ$ moves. Then, associate with each element $a\in A$ and $b\in B$ a value ($f_A(a)$ and $f_B(b)$ in the lemma) representing the move probability. The lemma ensures that each $f_A(a)$ can be expressed as a weighted sum of the $f_B(b)$ for the elements $b$ bisimilar to $a$. Here, the weights $h(a,b)$ make it possible to relate a $\stateP$ move to a ``weighted set'' of $\stateQ$ moves. Furthermore, the lemma ensures that no $b\in B$ has been cumulatively used for more than a unit weight ($\sum_{a \in A} h(a,b) \leq 1$). \begin{lem}\ellel{lem:matching} Let $A$ be a finite set and $B$ be a countable set, equipped with functions $f_A: A \rightarrow \mathbb{R}_0^+$ and $f_B: B \rightarrow \mathbb{R}_0^+$. Let $g:A \rightarrow 2^B$ be such that $\sum_{b \in g(a)} f_B(b)$ converges for all $a \in A$.
If, for all $A' \subseteq A:$ \begin{equation}\ellel{eq:matching-assumption} \sum_{a \in A'} f_A(a) \leq \sum_{b \in \bigcup_{a \in A'} g(a)} f_B(b) \end{equation} then there exists $h:A \times B \rightarrow \intervalCC 0 1$ such that: \begin{align}\ellel{eq:matching-thesis:1} &\forall b \in B: \sum_{a \in A} h(a,b) \leq 1 \\ \ellel{eq:matching-thesis:2} &\forall A' \subseteq A: \sum_{a \in A'} f_A(a) = \sum_{a \in A'} \sum_{b \in g(a)} h(a,b) f_B(b) \end{align} \end{lem} \begin{figure} \caption{Graphical representation of~\autoref{lem:matching}.} \ellel{fig:matching} \end{figure} We visualize~\autoref{lem:matching} in~\autoref{fig:matching} through an example. The leftmost graph shows a finite set $A=\setenum{a_1,a_2,a_3}$ where each $a_i$ is equipped with its associated value $f_A(a_i)$ and, similarly, a finite set $B=\setenum{b_1,\ldots,b_4}$ where each $b_i$ has its own value $f_B(b_i)$. The function $g$ is rendered as the edges of the graph, connecting each $a_i$ with all $b_j \in g(a_i)$. The graph satisfies the hypotheses, as one can easily verify. For instance, when $A' = \setenum{a_1,a_2}$ inequality \eqref{eq:matching-assumption} simplifies to $0.3+0.5 \leq 0.5+0.6$. The thesis ensures the existence of a weight function $h(-,-)$ whose values are shown in the graph on the left over each edge. These values indeed satisfy \eqref{eq:matching-thesis:1}: for instance, if we pick $b=b_2$ the inequality reduces to $0.5+0.1\bar 6 \leq 1$. Furthermore, \eqref{eq:matching-thesis:2} is also satisfied: for instance, taking $A'=\setenum{a_2}$ the equation reduces to $0.5 = 0.4\cdot 0.5+0.5\cdot 0.6$, while taking $A'=\setenum{a_3}$ the equation reduces to $0.2 = 0.1\bar 6\cdot 0.6+1.0\cdot 0.05+1.0\cdot 0.05$. The rightmost graph in~\autoref{fig:matching} instead sketches how our proof devises the desired weight function $h$, by constructing a network flow problem, and exploiting the well-known min-cut/max-flow theorem~\cite{MinCut}, following the approach of~\cite{Baier98thesis}.
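The worked numbers above can be checked mechanically. In the sketch below (Python; the values $f_A$, $f_B$ and most of the edge structure are read off the worked equations, while $g(a_1)=\setenum{b_1}$ and the weight $h(a_1,b_1)=0.6$ are reconstructed from them, hence an assumption about the figure), we verify hypothesis \eqref{eq:matching-assumption} and both theses:

```python
from fractions import Fraction
from itertools import chain, combinations

F = Fraction
# Values taken from the worked example; g('a1') and h('a1','b1') reconstructed.
f_A = {'a1': F(3, 10), 'a2': F(5, 10), 'a3': F(2, 10)}
f_B = {'b1': F(5, 10), 'b2': F(6, 10), 'b3': F(5, 100), 'b4': F(5, 100)}
g = {'a1': {'b1'}, 'a2': {'b1', 'b2'}, 'a3': {'b2', 'b3', 'b4'}}
h = {('a1', 'b1'): F(6, 10), ('a2', 'b1'): F(4, 10), ('a2', 'b2'): F(5, 10),
     ('a3', 'b2'): F(1, 6), ('a3', 'b3'): F(1), ('a3', 'b4'): F(1)}

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Hypothesis: for every subset A' of A, the f_A-mass of A' is covered
# by the f_B-mass of the union of the g(a) for a in A'.
for A1 in subsets(f_A):
    used = set().union(*(g[a] for a in A1)) if A1 else set()
    assert sum(f_A[a] for a in A1) <= sum(f_B[b] for b in used)

# Thesis (1): no b is cumulatively used for more than a unit weight.
for b in f_B:
    assert sum(h.get((a, b), F(0)) for a in f_A) <= 1

# Thesis (2): each f_A(a) is the h-weighted sum of the f_B(b) with b in g(a).
for a in f_A:
    assert f_A[a] == sum(h.get((a, b), F(0)) * f_B[b] for b in g[a])
```

Exact rationals make the check exact; in particular $0.1\bar 6$ is represented as $1/6$.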
We start by adding a source node to the right (white bullet in the figure), connected to the nodes in $B$, and a sink node to the left, connected to the nodes in $A$. We write the capacity over each edge: we use $f_B(b_i)$ for the edges connected to the source, $f_A(a_i)$ for the edges connected to the sink, and $+\infty$ for the other edges in the middle. Then, we argue that the leftmost cut $C$ shown in the figure is a min-cut. Intuitively, if we take another cut $C'$ that does not include some edge of $C$, then $C'$ has to include other edges, making it no better than $C$. Indeed, $C'$ surely cannot include any edge in the middle, since they have $+\infty$ capacity. Therefore, if $C'$ does not include an edge from some $a_i$ to the sink, it has to include all the edges from the source to each $b_j \in g(a_i)$. In this case, hypothesis \eqref{eq:matching-assumption} ensures that doing so does not lead to a better cut. Hence, $C$ is indeed a min-cut. From the max-flow corresponding to the min-cut, we derive the values for $h(-,-)$. Thesis \eqref{eq:matching-thesis:1} follows from the flow conservation law on each $b_i$, and the fact that the incoming flow of each $b_i$ from the source is bounded by the capacity of the related edge. Thesis \eqref{eq:matching-thesis:2} follows from the flow conservation law on each $a_i$, and the fact that the outgoing flow of each $a_i$ to the sink is exactly the capacity of the related edge, since the edge is on a min-cut. \section{Asymptotic equivalence} \ellel{sec:asymptotic} In this section we transport the notion of bisimilarity and the semantics of PCTL to \emph{families} of states, thus reasoning on their asymptotic behaviours. More precisely, given a state-labelled DTMC $\mathcal{Q}$, we define a family of states to be an infinite sequence $\Xi: \mathbb{N}\to\mathcal{Q}$. Intuitively, $\Xi(\eta)$ can describe the behaviour of a probabilistic system depending on a security parameter $\eta \in \mathbb{N}$.
When using bisimilarity (\autoref{def:param-bisim}) to relate two given states $q_1$ and $q_2$, we have to provide a number of steps $n$ and a probability error $\delta$. By contrast, when relating two families $\Xi_1$ and $\Xi_2$ we want to focus on their asymptotic behaviour, and obtain an equivalence that does not depend on specific values of $n$ and $\delta$. To do so, we start by recalling the standard definition of \emph{negligible function}: \begin{defi}[Negligible Function] A function $f : \mathbb{N}\to\mathbb{R}$ is said to be negligible whenever \[ \forall c\in\mathbb{N}.\ \exists \bar \eta.\ \forall \eta\geq\bar \eta.\ |f(\eta)| \leq \eta^{-c} \] \end{defi} We say that $\Xi_1$ and $\Xi_2$ are asymptotically equivalent ($\Xi_1 \equiv \Xi_2$) when the families are asymptotically pointwise bisimilar with a negligible error $\delta(\eta)$ whenever $n(\eta)$ is a polynomial. \begin{defi}[Asymptotic Equivalence]\ellel{def:crysim} Given $\Xi_1,\Xi_2 : \mathbb{N}\to\mathcal{Q}$, we write $\Xi_1 \equiv \Xi_2$ if and only if for each polynomial $n(-)$ there exist a negligible function $\delta(-)$ and $\bar \eta \in \mathbb{N}$ such that for all $\eta\geq \bar \eta$ we have $\Xi_1(\eta) \crysim{n(\eta)}{\delta(\eta)} \Xi_2(\eta)$. \end{defi} \begin{lem} $\equiv$ is an equivalence relation. \end{lem} \begin{proof} Reflexivity and symmetry are trivial. For transitivity, given a polynomial $n(-)$, let $\delta_1(-),\delta_2(-)$ be the negligible functions resulting from the hypotheses $\Xi_1 \equiv \Xi_2$ and $\Xi_2 \equiv \Xi_3$, respectively. For all sufficiently large $\eta$ (above both thresholds), we obtain \[ \Xi_1(\eta) \crysim{n(\eta)}{\delta_1(\eta)} \Xi_2(\eta) \qquad\land\qquad \Xi_2(\eta) \crysim{n(\eta)}{\delta_2(\eta)} \Xi_3(\eta) \] By the transitivity of $\crysim{}{}$, we get \[ \Xi_1(\eta) \crysim{n(\eta)}{\delta_1(\eta)+\delta_2(\eta)} \Xi_3(\eta) \] Hence we obtain the thesis, since the sum of negligible functions $\delta_1(\eta)+\delta_2(\eta)$ is negligible.
\end{proof} We now provide an asymptotic semantics for PCTL, by defining its satisfaction relation $\Xi \models \phi$. As above, this notion does not depend on specific values for $n$ and $\delta$ (unlike the semantics in \autoref{def:pctl:sem}), but instead considers the asymptotic behaviour of the family. \begin{defi}[Asymptotic Satisfaction Relation] We write $\Xi \models \phi$ when there exists a polynomial $\barn(-)$ such that for each polynomial $n(-) \geq \barn(-)$ there exist a negligible function $\delta(-)$ and $\bar\eta \in \mathbb{N}$ such that for all $\eta\geq \bar\eta$ we have $\Xi(\eta) \in \sem{n(\eta)}{\delta(\eta)}{+1}{\phi}$. \end{defi} In the above definition, we only consider polynomials greater than a threshold $\barn(-)$. This is because a family $\Xi$ representing, say, a protocol could require a given (polynomial) number of steps to complete its execution. It is reasonable, for instance, that $\Xi(\eta)$ needs to exchange $\eta^2$ messages over a network to perform its task. In such cases, we still want to make $\Xi$ satisfy a formula $\phi$ stating that the task is eventually completed with high probability. If we quantified over all polynomials $n(-)$, we would also allow choosing small polynomials like $n(\eta)=\eta$ or even $n(\eta)=1$, which would not provide $\Xi$ enough time to complete. Using a (polynomial) threshold $\barn(-)$, instead, we always provide enough time. We now establish the main result of this section, asymptotic soundness, stating that equivalent families of states asymptotically satisfy the same PCTL formulae. The proof relies on our earlier soundness result (\autoref{th:soundness}). \begin{thm}[Asymptotic Soundness] \ellel{th:asymptotic} Let $\Xi_1,\Xi_2$ be families of states such that $\Xi_1 \equiv \Xi_2$. For every PCTL formula $\phi$: \[ \Xi_1 \models \phi \iff \Xi_2 \models \phi \] \end{thm} \begin{proof} Assuming $\Xi_1 \models \phi$ and $\Xi_1 \equiv \Xi_2$, we prove $\Xi_2 \models \phi$; the converse implication then follows by the symmetry of $\equiv$.
Let $k_X = \nestMax{{\sf X}}{\phi}$ be the maximum ${\sf X}$-nesting of $\phi$, and let $k_U = \nestMax{{\sf U}}{\phi}$ be the maximum ${\sf U}$-nesting of $\phi$. Let $\barn_1(-)$ be as in the definition of the hypothesis $\Xi_1 \models \phi$. To prove the thesis $\Xi_2 \models \phi$, we choose $\barn_2(-) = \barn_1(-)$, and we consider an arbitrary $n_2(-)\geq\barn_2(-)=\barn_1(-)$. We can then choose $n_1(-) = n_2(-)$ in the same hypothesis, and obtain a negligible $\delta_1(-)$ and $\bar\eta_1$, where for any $\eta \geq \bar\eta_1$ we have \begin{equation} \ellel{eq:fam1-hp} \Xi_1(\eta) \in \sem{n_2(\eta)}{\delta_1(\eta)}{+1}{\phi} \end{equation} We now exploit the other hypothesis $\Xi_1 \equiv \Xi_2$, choosing the polynomial \begin{equation} \ellel{eq:n-eta} n(\eta) = n_2(\eta) \cdot k_U + k_X + 1 \end{equation} and obtaining a negligible $\delta(-)$ and $\bar\eta$ where for any $\eta\geq\bar\eta$ we have \begin{equation} \ellel{eq:sim-hp} \Xi_1(\eta) \crysim{n(\eta)}{\delta(\eta)} \Xi_2(\eta) \end{equation} To prove the thesis, we finally choose the negligible function $\delta_2(\eta) = n(\eta)\cdot\delta(\eta)+\delta_1(\eta)$ and $\bar\eta_2 = \max(\bar\eta_1, \bar\eta)$. By~\autoref{th:soundness} we have that for any $\eta\geq\bar\eta_2$: \[ \cryset{n(\eta)}{\delta(\eta)}{ \sem{n_2(\eta)}{\delta_1(\eta)}{+1}{\phi}} \subseteq \sem{n_2(\eta)}{n(\eta)\cdot\delta(\eta)+\delta_1(\eta)}{+1}{\phi} \mbox{ where $n(\eta)$ is as in \eqref{eq:n-eta}.} \] Applying this to \eqref{eq:fam1-hp} and \eqref{eq:sim-hp} we then have that, for any $\eta\geq\bar\eta_2$: \[ \Xi_2(\eta) \in \sem{n_2(\eta)}{n(\eta)\cdot\delta(\eta)+\delta_1(\eta)}{+1}{\phi} \] which is our thesis: \[ \Xi_2(\eta) \in \sem{n_2(\eta)}{\delta_2(\eta)}{+1}{\phi} \qedhere \] \end{proof} \begin{exa}\ellel{ex:asymptotic:padlock} We now return to the padlock examples \ref{ex:pctl:padlock} and \ref{ex:sim:padlock}.
We again consider an ideal padlock modelled by a state $q_{\sf ok}$, but also a sequence of padlocks having an increasing number of digits $\eta$, hence an increasing number $N=10^\eta$ of combinations. We assume that state $q_{i,10^\eta}$ models a padlock with $\eta$ digits in which the adversary has already made $i$ brute-force attempts, following the same strategy as in the previous examples. The transition probabilities are also similarly defined. In this scenario, we can define two state families. Family $\Xi_1(\eta) = q_{\sf ok}$ represents a (constant) sequence of ideal padlocks, while family $\Xi_2(\eta) = q_{0,10^\eta}$ represents a sequence of realistic padlocks with no previous brute-force attempt ($i=0$), in increasing order of robustness. Indeed, as $\eta$ increases, the padlock becomes harder to break by brute force since the number of combinations $N=10^\eta$ grows. In~\autoref{ex:sim:padlock}, we have seen that \[ \Xi_1(\eta) \crysim{n(\eta)}{\delta(\eta)} \Xi_2(\eta) \qquad \mbox{ where } \delta(\eta) = \dfrac{1}{N-0-n(\eta)+2} = \dfrac{1}{10^\eta-n(\eta)+2} \] and we can observe that the above $\delta(\eta)$ is indeed negligible when $n(\eta)$ is a polynomial. This means that $\Xi_1 \equiv \Xi_2$ holds, hence we can apply \autoref{th:asymptotic} and conclude that the families $\Xi_1$ and $\Xi_2$ asymptotically satisfy the same PCTL formulae. This is intuitive since, when the adversary can only attempt a polynomial number of brute-force attacks, and the number of combinations increases exponentially, the robustness of the realistic padlocks effectively approaches that of the ideal one. \end{exa} We now discuss how~\autoref{th:asymptotic} could be applied to a broad class of systems. Consider the execution of an ideal cryptographic protocol, modelled as a DTMC starting from the initial state $q_i$.
This model could represent, for instance, the semantics of a formal, symbolic system such as those that can be expressed using process algebras. In this scenario, the underlying cryptographic primitives can be \emph{perfect}, in the sense that ciphertexts reveal no information to anyone who does not know the decryption key, signatures can never be forged, hash preimages can never be found, and so on, regardless of the computational resources available to the adversary. Given such a model, it is then possible to refine it by making the cryptographic primitives more realistic, allowing an adversary to attempt attacks such as decryptions and signature forgeries, which however succeed only with negligible probability w.r.t.\@\xspace a security parameter $\eta$. This more realistic system can be modelled using a distinct DTMC state $q^\eta_r$ whose behaviour is similar to that of $q_i$: the state transitions are essentially the same, except for the cases in which the adversary is successful in attacking the cryptographic primitives. Therefore, the transition probabilities are almost the same, differing only by a negligible quantity. Hence, we can let $\Xi_1(\eta)=q_i$ and $\Xi_2(\eta)=q^\eta_r$, and observe that they are indeed asymptotically equivalent. Note that this holds in general by construction, regardless of the behaviour of the ideal system $q_i$ we started from. Finally, by~\autoref{th:asymptotic} we can claim that both families $\Xi_1,\Xi_2$ asymptotically satisfy the same PCTL formulae. This makes it possible, in general, to prove properties on the simpler $q_i$ system, possibly even using some automatic verification tool, and transfer these results to the more realistic setting $q^\eta_r$. A special case of this fact was originally studied in~\cite{ZuninoD05}, which however only considered reachability properties.
By comparison, \autoref{th:asymptotic} is much more general, allowing one to transfer any property that can be expressed as a PCTL formula. The construction above allows one to refine an ideal system $q_i$ into a more realistic one $q^\eta_r$ by taking certain adversaries into account. However, if our goal were to study the security of the system against \emph{all} reasonable adversaries, then the above approach would not be applicable. Indeed, it is easy to find an ideal system and a corresponding realistic refinement, comprising a reasonable adversary, where the asymptotic equivalence does not hold. For instance, consider an ideal protocol where Alice and Bob exchange ten messages, after which Alice randomly chooses and exchanges a single bit. To assess the security of a realistic implementation, we might want to consider the case where Alice is an adversary. In that case, a malicious Alice could exchange the first two messages, then flip a coin $b \leftarrow \{0,1\}$ in secret, exchange the other eight messages, and finally send the value $b$. The behaviour of such a realistic system differs from the ideal one, since the ideal one has a probabilistic choice point only at the end, while the realistic system moves it earlier, to just after the first two messages. It is easy to check (and well known) that moving choices to an earlier point makes standard bisimilarity fail, and this is also the case for asymptotic equivalence. The failure of asymptotic equivalence prevents us from applying the asymptotic soundness theorem. In particular, assume that we have proved that the ideal system enjoys certain specifications expressed as PCTL formulae. We cannot exploit the theorem to show that the realistic system with the adversary also enjoys the same specifications. \section{Conclusions} In this paper we studied how the (relaxed) semantics of PCTL formulae interacts with (approximate) probabilistic bisimulation.
In the regular, non-relaxed case, it is well-known that when a state $q$ satisfies a PCTL formula $\phi$, then all the states that are probabilistic-bisimilar to $q$ also satisfy $\phi$ (\cite{Desharnais10iandc}). \autoref{th:soundness} extends this to the relaxed semantics, establishing that when a state $q$ satisfies a PCTL formula $\phi$ up to $n$ steps and error $\delta$, then all the states that are approximately probabilistic bisimilar to $q$ with error $\deltai$ (and enough steps) also satisfy $\phi$ up to $n$ steps and a suitably increased error. We provide a way to compute the new error in terms of $n, \delta, \deltai$. \autoref{th:asymptotic} extends this soundness result to the asymptotic setting, where the error becomes negligible when the number of steps is polynomially bounded. Our results are a first step towards a novel approach to the security analysis of cryptographic protocols using probabilistic bisimulations. When one is able to prove that a real-world specification of a cryptographic protocol is asymptotically equivalent to an ideal one, then one can invoke~\autoref{th:asymptotic} and claim that the two models satisfy the same PCTL formulae, essentially reducing the security proof of the cryptographic protocol to verifying the ideal model. A relevant line for future work is to study the applicability of our theory in this setting. As discussed in~\autoref{sec:asymptotic}, our approach is not applicable to all protocols and all adversaries. A relevant line of research could be the study of larger asymptotic equivalences, which make it possible to transfer properties from ideal to realistic systems. This could be achieved, e.g., by considering weaker logics than PCTL, or moving to linear temporal logics.
Another possible line of research would be investigating proof techniques for establishing approximate bisimilarity and refinement~\cite{Jonsson91lics}, as well as devising algorithms for approximate bisimilarity, along the lines of~\cite{BreugelW14birthday,ChenBW12fossacs,Fu12icalp,TangB16concur,TangB17concur,TangB18cav}. This direction, however, would require restricting our theory to finite-state systems, which contrasts with our general motivation coming from cryptographic security. Indeed, in the analysis of cryptographic protocols, security is usually to be proven against an arbitrary adversary, hence also against infinite-state ones. Hence, model-checking of finite-state systems would not directly be applicable in this setting. \paragraph{Acknowledgements} Massimo Bartoletti is partially supported by Conv.\ Fondazione di Sardegna \& Atenei Sardi project F75F21001220007 \emph{ASTRID}. Maurizio Murgia and Roberto Zunino are partially supported by the PON project \textit{Distributed Ledgers for Secure Open Communities}. Maurizio Murgia is partially supported by MUR PON REACT EU DM 1062/21. \appendix \section{Proofs} \ellel{sec:proofs} \begin{proofof}{Lemma}{lem:pctl:monotonicity} We simultaneously prove the whole statement by induction on the structure of the formulae $\phi$ and $\psi$. The cases $\phi=l$ and $\phi={\sf true}$ result in trivial equalities.
For the case $\phi=\lnot\phii$ we need to prove \begin{align*} & \sem{n}{\delta}{+1}{\lnot\phii} \subseteq \sem{n}{\deltai}{+1}{\lnot\phii} \\ & \sem{n}{\deltai}{-1}{\lnot\phii} \subseteq \sem{n}{\delta}{-1}{\lnot\phii} \\ & \sem{n}{\delta}{-1}{\lnot\phii} \subseteq \sem{n}{\delta}{+1}{\lnot\phii} \end{align*} which is equivalent to \begin{align*} & \mathcal{Q}\setminus\sem{n}{\delta}{-1}{\phii} \subseteq \mathcal{Q}\setminus\sem{n}{\deltai}{-1}{\phii} \\ & \mathcal{Q}\setminus\sem{n}{\deltai}{+1}{\phii} \subseteq \mathcal{Q}\setminus\sem{n}{\delta}{+1}{\phii} \\ & \mathcal{Q}\setminus\sem{n}{\delta}{+1}{\phii} \subseteq \mathcal{Q}\setminus\sem{n}{\delta}{-1}{\phii} \end{align*} which, in turn, is equivalent to \begin{align*} & \sem{n}{\deltai}{-1}{\phii} \subseteq \sem{n}{\delta}{-1}{\phii} \\ & \sem{n}{\delta}{+1}{\phii} \subseteq \sem{n}{\deltai}{+1}{\phii} \\ & \sem{n}{\delta}{-1}{\phii} \subseteq \sem{n}{\delta}{+1}{\phii} \end{align*} which is the induction hypothesis. \noindent For the case $\phi=\phi_1 \land \phi_2$ we need to prove \begin{align*} & \sem{n}{\delta}{+1}{\phi_1\land\phi_2} \subseteq \sem{n}{\deltai}{+1}{\phi_1\land\phi_2} \\ & \sem{n}{\deltai}{-1}{\phi_1\land\phi_2} \subseteq \sem{n}{\delta}{-1}{\phi_1\land\phi_2} \\ & \sem{n}{\delta}{-1}{\phi_1\land\phi_2} \subseteq \sem{n}{\delta}{+1}{\phi_1\land\phi_2} \end{align*} which is equivalent to \begin{align*} & \sem{n}{\delta}{+1}{\phi_1} \cap \sem{n}{\delta}{+1}{\phi_2} \subseteq \sem{n}{\deltai}{+1}{\phi_1} \cap \sem{n}{\deltai}{+1}{\phi_2} \\ & \sem{n}{\deltai}{-1}{\phi_1} \cap \sem{n}{\deltai}{-1}{\phi_2} \subseteq \sem{n}{\delta}{-1}{\phi_1} \cap \sem{n}{\delta}{-1}{\phi_2} \\ & \sem{n}{\delta}{-1}{\phi_1} \cap \sem{n}{\delta}{-1}{\phi_2} \subseteq \sem{n}{\delta}{+1}{\phi_1} \cap \sem{n}{\delta}{+1}{\phi_2} \end{align*} which immediately follows from the induction hypothesis on $\phi_1$ and $\phi_2$. 
For the case $\phi=\logPr{\rhd \pi}{\psi}$ we need to prove \begin{align*} & \sem{n}{\delta}{+1}{\logPr{\rhd \pi}{\psi}} \subseteq \sem{n}{\deltai}{+1}{\logPr{\rhd \pi}{\psi}} \\ & \sem{n}{\deltai}{-1}{\logPr{\rhd \pi}{\psi}} \subseteq \sem{n}{\delta}{-1}{\logPr{\rhd \pi}{\psi}} \\ & \sem{n}{\delta}{-1}{\logPr{\rhd \pi}{\psi}} \subseteq \sem{n}{\delta}{+1}{\logPr{\rhd \pi}{\psi}} \end{align*} The first inclusion follows from \begin{align*} \sem{n}{\delta}{+1}{\logPr{\rhd \pi}{\psi}} & = \setcomp{q\in\mathcal{Q}}{ \Pr(\trStart{q} \cap \sem{n}{\delta}{+1}{\psi}) + \delta \rhd \pi } \\ & \subseteq \setcomp{q\in\mathcal{Q}}{ \Pr(\trStart{q} \cap \sem{n}{\deltai}{+1}{\psi}) + \deltai \rhd \pi } \\ & = \sem{n}{\deltai}{+1}{\logPr{\rhd \pi}{\psi}} \end{align*} where we exploited $\delta\leq\deltai$, the induction hypothesis $\sem{n}{\delta}{+1}{\psi} \subseteq \sem{n}{\deltai}{+1}{\psi}$, the monotonicity of $\Pr(-)$, and the fact that $\geq\circ\,\rhd \subseteq \rhd$. The second inclusion follows from an analogous argument: \begin{align*} \sem{n}{\deltai}{-1}{\logPr{\rhd \pi}{\psi}} & = \setcomp{q\in\mathcal{Q}}{ \Pr(\trStart{q} \cap \sem{n}{\deltai}{-1}{\psi}) - \deltai \rhd \pi } \\ & \subseteq \setcomp{q\in\mathcal{Q}}{ \Pr(\trStart{q} \cap \sem{n}{\delta}{-1}{\psi}) - \delta \rhd \pi } \\ & = \sem{n}{\delta}{-1}{\logPr{\rhd \pi}{\psi}} \end{align*} where we exploited $-\deltai\leq-\delta$, the induction hypothesis $\sem{n}{\deltai}{-1}{\psi} \subseteq \sem{n}{\delta}{-1}{\psi}$, the monotonicity of $\Pr(-)$, and the fact that $\geq\circ\,\rhd \subseteq \rhd$. For $\psi = {\sf X} \phi$, we can observe that $\sem{n}{\delta}{r}{{\sf X} \phi} = f(\sem{n}{\delta}{r}{\phi})$ where $f$ is a monotonic function mapping sets of states to sets of traces, which does not depend on $\delta,r,n$. Hence, the thesis follows from the set inclusions about the semantics of $\phi$ in the induction hypothesis. 
Similarly, for $\psi = \phi_1 {\sf U} \phi_2$, we can observe that \( \sem{n}{\delta}{r}{\phi_1 {\sf U} \phi_2} = g_n(\sem{n}{\delta}{r}{\phi_1}, \sem{n}{\delta}{r}{\phi_2}) \) where $g_n$ is a monotonic function mapping pairs of sets of states to sets of traces, which does not depend on $\delta,r$ (but only on $n$). Hence, the thesis follows from the set inclusions about the semantics of $\phi_1$ and $\phi_2$ in the induction hypothesis. \qed \end{proofof} \begin{proofof}{Lemma}{lem:sim:monotonicity} The statement follows by induction on $n-ni$ from the following properties: \begin{align} \ellel{eq:sim:monotonicity:1} & \delta \leq \deltai \;\land\; p \crysim{n}{\delta} q \implies p \crysim{n}{\deltai} q \\ \ellel{eq:sim:monotonicity:2} & p \crysim{n+1}{\delta} q \implies p \crysim{n}{\delta} q \end{align} To prove \eqref{eq:sim:monotonicity:1} we proceed by induction on $n$. In the base case $n = 0$ the thesis trivially follows by the first case of Definition~\ref{def:param-bisim}. For the inductive case, we assume \eqref{eq:sim:monotonicity:1} holds for $n$, and prove it for $n+1$. Therefore, we assume $p \crysim{n+1}{\delta} q$ and prove $p \crysim{n+1}{\deltai} q$. To prove the thesis, we must show that all the items of \autoref{def:param-bisim} hold. Item~\eqref{def:param-bisim:a} directly follows from the hypothesis. For item~\eqref{def:param-bisim:b} we have \[ \tsPr{p}{Q} \leq \tsPr{q}{\cryset{n}{\delta}{Q}} + \delta \leq \tsPr{q}{\cryset{n}{\deltai}{Q}} + \deltai \] where the first inequality follows from the hypothesis $p \crysim{n+1}{\delta} q$, while the second one follows from the induction hypothesis (which implies $\cryset{n}{\delta}{Q} \subseteq \cryset{n}{\deltai}{Q}$) and $\delta\leq\deltai$. Item~\eqref{def:param-bisim:c} is analogous. We now prove \eqref{eq:sim:monotonicity:2}, proceeding by induction on $n$. In the base case $n=0$, the thesis trivially follows by the first case of~\autoref{def:param-bisim}. 
For the inductive case, we assume the statement holds for $n$, and we prove it for $n+1$. Therefore, we assume $p \crysim{n+2}{\delta} q$ and prove $p \crysim{n+1}{\delta} q$. To prove the thesis, we must show that all the items of \autoref{def:param-bisim} hold. Item~\eqref{def:param-bisim:a} directly follows from the hypothesis. For item~\eqref{def:param-bisim:b} of the thesis we have
\[ \tsPr{p}{Q} \leq \tsPr{q}{\cryset{n+1}{\delta}{Q}} + \delta \leq \tsPr{q}{\cryset{n}{\delta}{Q}} + \delta \]
where the first inequality follows from the hypothesis $p \crysim{n+2}{\delta} q$, while the second one follows from the induction hypothesis (which implies $\cryset{n+1}{\delta}{Q} \subseteq \cryset{n}{\delta}{Q}$). Item~\eqref{def:param-bisim:c} is analogous. \qed \end{proofof}
\begin{samepage} \begin{applemma}\ellel{lem:leq-eps-implies-leq} Let $a,b \in \mathbb{R}$. If $\forall \epsilon > 0: a \leq b + \epsilon$ then $a \leq b$. \end{applemma} \begin{proof} If $a>b$, taking $\epsilon=(a-b)/2$ contradicts the hypothesis. \end{proof} \end{samepage}
\begin{proofof}{Lemma}{lem:traces} By Lemma~\ref{lem:sim:monotonicity} we have that $\stateP \CSim{m}{\delta} \stateQ$. If $T$ is finite, the thesis follows from Lemma~\ref{lem:finite-traces}. If $T$ is infinite, it must be countable: this follows from the fact that the states of a Markov chain are countable and the traces in $T$ have finite length. So, let $\tilde{t}_0 \tilde{t}_1 \hdots$ be an enumeration of $T$.
By definition of infinite sum, we have that:
\[ \prob{T}{} = \lim_{k \to \infty} {\sum_{i = 0}^k \prob{\tilde{t}_i}{}} \]
By definition of limit of a sequence, we have that for all $\epsilon > 0$ there exists $v \in \mathbb{N}$ such that for all $k > v$:
\[ \abs{\prob{T}{} - \sum_{i = 0}^k \prob{\tilde{t}_i}{}} < \epsilon \]
Since $\prob{\tilde{t}_i}{} \geq 0$ for all $i$, we can drop the absolute value and we get:
\begin{equation}\ellel{lem:traces:eq1} \prob{T}{} - \sum_{i = 0}^k \prob{\tilde{t}_i}{} < \epsilon \end{equation}
By Lemma~\ref{lem:leq-eps-implies-leq} it suffices to show $\prob{T}{} \leq \prob{\TR{m}{\delta,\stateQ}{T}}{} + \delta m + \epsilon$ for all $\epsilon > 0$, or equivalently:
\[ \prob{T}{} - \epsilon \leq \prob{\TR{m}{\delta,\stateQ}{T}}{} + \delta m \]
So, let $\epsilon > 0$ and let $k$ be such that \eqref{lem:traces:eq1} holds. Then we have:
\[ \prob{T}{} - \epsilon < \sum_{i = 0}^k \prob{\tilde{t}_i}{} \]
Let $T' = \setcomp{\tilde{t}_i}{i \leq k}$. Since $\sum_{i = 0}^k \prob{\tilde{t}_i}{} = \prob{T'}{}$ and $T'$ is finite, by Lemma~\ref{lem:finite-traces} we have:
\[ \sum_{i = 0}^k \prob{\tilde{t}_i}{} \leq \prob{\TR{m}{\delta,\stateQ}{T'}}{} + \delta m \]
Since $\TR{m}{\delta,\stateQ}{T'} \subseteq \TR{m}{\delta,\stateQ}{T}$ we have that:
\[ \prob{\TR{m}{\delta,\stateQ}{T'}}{} + \delta m \leq \prob{\TR{m}{\delta,\stateQ}{T}}{} + \delta m \]
Summing up, we have that $\prob{T}{} - \epsilon \leq \prob{\TR{m}{\delta,\stateQ}{T}}{} + \delta m$ for all $\epsilon > 0$. By Lemma~\ref{lem:leq-eps-implies-leq} it follows that $\prob{T}{} \leq \prob{\TR{m}{\delta,\stateQ}{T}}{} + \delta m$ as required.
\qed \end{proofof}
\begin{proofof}{Lemma}{lem:matching} Without loss of generality, we prove the statement under the following additional assumptions:
\begin{align} \ellel{eq:matching-aux2} & \forall b \in B: f_B(b) > 0 \\ \ellel{eq:matching-aux1} & \forall b \in B: \setcomp{a \in A}{b \in g(a)} \neq \emptyset \qquad \text{and} \\ \nonumber & \qquad \forall b_1,b_2 \in B: \setcomp{a \in A}{b_1 \in g(a)} = \setcomp{a \in A}{b_2 \in g(a)} \implies b_1 = b_2 \end{align}
If $B$ does not satisfy \autoref{eq:matching-aux2}, just remove from $B$ the elements $b$ such that $f_B(b) = 0$, adjust $g$ accordingly, and set $h(a,b) = 0$ for the removed elements. \autoref{eq:matching-assumption} still holds since we removed only elements whose value is zero. If $B$ does not satisfy \autoref{eq:matching-aux1}, it can be transformed into a set that does. To see why, let $\equiv \subseteq B \times B$ be defined as:
\[ b \equiv b' \text{ iff } \setcomp{a \in A}{b \in g(a)} = \setcomp{a \in A}{b' \in g(a)} \]
Let $\hat{B}$ be the set of equivalence classes w.r.t.\@\xspace $\equiv$. For an equivalence class $[b]$, define:
\[ f_{\hat{B}}([b]) = \sum_{b' \in [b]}{f_B(b')} \qquad\qquad g'(a) = \setcomp{[b]}{b \in g(a)} \]
It is easy to verify that~\eqref{eq:matching-aux1} is satisfied. Notice that $\sum_{[b] \in g'(a)} f_{\hat{B}}([b])$ converges, since:
\[ \sum_{[b] \in g'(a)} f_{\hat{B}}([b]) = \sum_{[b] \in g'(a)} \sum_{b' \in [b]}f_{B}(b') = \sum_{b \in g(a)}f_{B}(b) \]
We now show that $A,\hat{B}$ and $g'$ satisfy \autoref{eq:matching-assumption}. We have that, for all $b \in B$, $f_B(b) \leq f_{\hat{B}}([b])$ and $b \in g(a) \implies [b] \in g'(a)$.
Therefore, for all $A' \subseteq A$:
\[ \sum_{a \in A'} f_A(a) \leq \sum_{b \in \bigcup_{a \in A'} g(a)} f_B(b) \leq \sum_{[b] \in \bigcup_{a \in A'} g'(a)} f_{\hat{B}}([b]) \]
From a function $h'$ satisfying \autoref{eq:matching-thesis:1} and~\autoref{eq:matching-thesis:2} for $A, \hat{B}$ and $g'$ we can easily obtain a function $h$ for $A, B$ and $g$: e.g.\@\xspace, set $h(a,b) = h'(a,[b])\frac{f_B(b)}{f_{\hat{B}}([b])}$. Notice that $f_{\hat{B}}([b]) > 0$ by \autoref{eq:matching-aux2}, and that if $B$ satisfies \autoref{eq:matching-aux1} then $\card{B} < 2^{\card{A}}$, and so $B$ is finite. That said, we show that the thesis holds by reducing to the max-flow problem~\cite{MinCut}. Assume w.l.o.g.\ that $A$ and $B$ are disjoint. Let $N = (V,E)$ be a directed graph, where $V = A \cup B \cup \setenum{s,t}$ with $s,t \not\in A \cup B$ and:
\[ E = \setcomp{(s,b)}{b \in B} \cup \setcomp{(b,a)}{a \in A, b \in g(a)} \cup \setcomp{(a,t)}{a \in A} \]
Define edge capacity $w: E \rightarrow \mathbb{R}_0^+ \cup \setenum{\infty}$ as follows:
\[ w(s,b) = f_B(b) \qquad w(b,a) = \infty \qquad w(a,t) = f_A(a) \]
Consider the cut $C = \setcomp{(a,t)}{a \in A}$ associated with the partition $(V \setminus \setenum{t},\setenum{t})$. This cut has capacity $\sum_{a \in A} f_A(a)$, and we argue that it is minimum. Take a cut $C'$ of the network. First notice that if $C'$ contained an edge of the form $(b,a)$, its capacity would be infinite. We can therefore consider only cuts whose elements are of the form $(s,b)$ or $(a,t)$, and thus for all $a \in A$ we have that $a$ and the elements of $g(a)$ are on the same side of the partition. In other words, the $s$-side of the partition is of the form $A' \cup \bigcup_{a \in A'} g(a) \cup \setenum{s}$, and the $t$-side is of the form $(A \setminus A') \cup \bigcup_{a \in A \setminus A'} g(a) \cup \setenum{t}$, where $A' \subseteq A$. So the capacity of $C'$ is $\sum_{a \in A'} f_A(a) + \sum_{b \in g(A \setminus A')} f_B(b)$.
Now, the capacity of $C$ is $\sum_{a \in A'} f_A(a) + \sum_{a \in (A \setminus A')} f_A(a)$. Since $\sum_{a \in (A \setminus A')} f_A(a) \leq \sum_{b \in g(A \setminus A')} f_B(b)$ by assumption \autoref{eq:matching-assumption}, we have that the capacity of $C$ is minimal. By the min-cut max-flow theorem \cite{MinCut}, we have that the max flow of the network has capacity $\sum_{a \in A} f_A(a)$. Let $\mathit{flow}: E \rightarrow \mathbb{R}_0^+$ be a maximum flow of the network. Consequently, we have that $\mathit{flow}(a,t) = f_A(a)$ for all $a \in A$. Define:
\[ h(a,b) = \begin{cases} \frac{\mathit{flow}(b,a)}{f_B(b)} & \text{ if } b \in g(a)\\ 0 & \text{ otherwise} \end{cases} \]
We have to show that $h$ satisfies \autoref{eq:matching-thesis:1} and~\autoref{eq:matching-thesis:2}. Let $A' \subseteq A$. We have that:
\begin{align*} \sum_{a \in A'} \sum_{b \in g(a)} h(a,b) f_B(b) & = \sum_{a \in A'} \sum_{b \in g(a)} \frac{\mathit{flow}(b,a)}{f_B(b)} f_B(b) \\ & = \sum_{a \in A'} \sum_{b \in g(a)} \mathit{flow}(b,a) \end{align*}
By the conservation of flow constraint, we have that:
\begin{align*} \sum_{a \in A'} \sum_{b \in g(a)} \mathit{flow}(b,a) & = \sum_{a \in A'} \mathit{flow}(a,t) \\ & = \sum_{a \in A'} f_A(a) \end{align*}
So summing up we have that:
\[ \sum_{a \in A'} \sum_{b \in g(a)} h(a,b) f_B(b) = \sum_{a \in A'} f_A(a) \]
For the remaining part, let $b \in B$. We have that:
\begin{align*} \sum_{a \in A} h(a,b) & = \sum_{a \in \setcomp{a'}{b \in g(a')}} h(a,b) \\ & = \sum_{a \in \setcomp{a'}{b \in g(a')}} \frac{\mathit{flow}(b,a)}{f_B(b)} \\ & = \frac{1}{f_B(b)} {\sum_{a \in \setcomp{a'}{b \in g(a')}} \mathit{flow}(b,a)} \\ & \leq\; \frac{f_B(b)}{f_B(b)} \\ & = \; 1 \tag*{\qed} \end{align*}
\end{proofof}
\begin{proofof}{Lemma}{lem:finite-traces} By induction on $n$. The base case ($n = 1$) is trivial as $T = \{\stateP\}$ and $\TR{n}{\delta,\stateQ}{T} = \{\stateQ\}$, or $T = \emptyset$ and $\TR{n}{\delta,\stateQ}{T} = \emptyset$.
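The flow-network reduction in the proof of Lemma~\ref{lem:matching} can be replayed on a small concrete instance. The sketch below is illustrative only: the values of $f_A$, $f_B$ and $g$ are hypothetical, and a plain Edmonds--Karp max-flow (standard library only) stands in for an off-the-shelf solver.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: cap[u][v] is the capacity of edge (u, v)."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}   # residual capacities
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0.0)   # reverse edges, initially 0
    value = 0.0
    while True:
        parent = {s: None}                             # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value, res
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)          # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug                           # res[v][u] accumulates the flow on (u, v)
        value += aug

# Hypothetical instance; it satisfies the assumption
# sum_{a in A'} f_A(a) <= sum_{b in g(A')} f_B(b) for every A'.
f_A = {"a1": 0.3, "a2": 0.5}
f_B = {"b1": 0.4, "b2": 0.5}
g = {"a1": ["b1"], "a2": ["b1", "b2"]}

# The network of the proof: s -> b (capacity f_B(b)),
# b -> a (infinite capacity), a -> t (capacity f_A(a)).
cap = {"s": dict(f_B)}
for a, bs in g.items():
    for b in bs:
        cap.setdefault(b, {})[a] = float("inf")
    cap[a] = {"t": f_A[a]}

value, res = max_flow(cap, "s", "t")
assert abs(value - sum(f_A.values())) < 1e-9           # the cut C is minimum

# h(a, b) = flow(b, a) / f_B(b); flow(b, a) is the reverse residual res[a][b]
h = {(a, b): res[a][b] / f_B[b] for a in f_A for b in g[a]}
for a in f_A:                                          # property (1): equality
    assert abs(sum(h[a, b] * f_B[b] for b in g[a]) - f_A[a]) < 1e-9
for b in f_B:                                          # property (2): sum_a h(a,b) <= 1
    assert sum(h.get((a, b), 0.0) for a in f_A) <= 1 + 1e-9
```

On this instance every edge $(a,t)$ is saturated, as the proof predicts, and the derived $h$ satisfies both properties of the lemma.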
Therefore, $\prob{T}{} = \prob{\TR{n}{\delta,\stateQ}{T}}{} = \card{T}$. For the inductive case, first notice that:
\[ \prob{T}{}\;\;= \;\; \sum_{\tilde{t} \in T} \prob{\stateP}{\tilde{t}(1)}\prob{\tilde{t}(1 .. n - 1)}{} \]
Referring to Lemma~\ref{lem:matching}, let $A = \setcomp{\tilde{t}(1)}{\tilde{t} \in T}$, $B = \setcomp{\stateQi}{\statePi \CSim{n - 1}{\delta} \stateQi \text{ for some } \statePi \in A} \cup \setenum{D}$, where $D$ is a fresh element occurring nowhere else. Let $f_A(\statePi) = \prob{\stateP}{\statePi}$, $f_B(\stateQi) = \prob{\stateQ}{\stateQi}$ and $f_B(D) = \delta$. Finally, let $g(\statePi) = \;\R{n-1}{\delta}{\statePi} \cup \setenum{D}$. By~\autoref{def:param-bisim}, we have that $A, B, f_A, f_B$ and $g$ satisfy \autoref{eq:matching-assumption} of \autoref{lem:matching}. Indeed, for all $A' \subseteq A$, we have that:
\[ \sum_{a \in A'} f_A(a) = \prob{\stateP}{A'} \leq \prob{\stateQ}{\cryset{n - 1}{\delta}{A'}} + \delta = \sum_{b \in \bigcup_{a \in A'} g(a)} f_B(b) \]
We can then conclude that there exists $h$ such that, for all $A' \subseteq A$:
\[ \prob{\stateP}{A'} = \sum_{\statePi \in A'} \Big( h(\statePi,D) \delta \;\;+ \sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi}\Big) \]
Let $T_{P} = \setcomp{\tilde{t}(1..n - 1)}{\tilde{t} \in T \,\land\, \tilde{t}(1) \in P}$ where $P \subseteq A$. We simply write $T_{\statePi}$ if $P = \{\statePi\}$. So, we have that:
\begin{align*} \prob{T}{} & \;\;=\;\;\sum_{\tilde{t} \in T} \prob{\stateP}{\tilde{t}(1)} \prob{\tilde{t}(1..
n - 1)}{} \\ & \;\;= \;\; \sum_{\statePi \in A} \prob{\stateP}{\statePi}\prob{T_{\statePi}}{} \\ & \;\;= \;\; \sum_{\statePi \in A} \prob{T_{\statePi}}{}\Big(h(\statePi,D) \delta \;\;+ \sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi}\Big) \\ & \;\;\leq \;\; \delta + \sum_{\statePi \in A} \prob{T_{\statePi}}{}\sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi} \\ & \;\;= \;\; \delta + \sum_{\statePi \in A} \sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi}\prob{T_{\statePi}}{} \\ & \;\;\leq \;\; \delta + \sum_{\statePi \in A} \sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi}\Big(\prob{\TR{n-1}{\delta,\stateQi}{T_{\statePi}}}{} + \delta(n - 1)\Big) \\ & \;\;= \;\; \delta + s_1 + s_2 \end{align*} where: \begin{align*} s_1 & = \sum_{\statePi \in A} \sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi} \delta (n - 1) \\ s_2 & = \sum_{\statePi \in A} \sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi}\prob{\TR{n-1}{\delta,\stateQi}{T_{\statePi}}}{} \end{align*} Now: \begin{align*} s_1 & = \delta (n - 1)\sum_{\statePi \in A} \sum_{\stateQi \in \R{n-1}{\delta}{\statePi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi} \\ & \leq \delta (n - 1)\prob{\stateP}{A} \\ & \leq \delta (n - 1) \end{align*} Therefore $\delta + s_1 \leq \delta n$. It remains to show that $s_2 \leq \prob{\TR{n}{\delta,\stateQ}{T}}{}$. 
First notice that $s_2$ can be rewritten as follows by a simple reordering of terms: \[ s_2 = \sum_{\stateQi \in \R{n - 1}{\delta}{A}} \sum_{\statePi \in A \cap \R{n - 1}{\delta}{\stateQi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi}\prob{\TR{n-1}{\delta,\stateQi}{T_{\statePi}}}{} \] So: \begin{align*} s_2 & = \sum_{\stateQi \in \R{n - 1}{\delta}{A}} \quad \sum_{\statePi \in A \cap \R{n - 1}{\delta}{\stateQi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi}\prob{\TR{n-1}{\delta,\stateQi}{T_{\statePi}}}{} \\ & \leq \sum_{\stateQi \in \R{n - 1}{\delta}{A}} \quad \sum_{\statePi \in A \cap \R{n - 1}{\delta}{\stateQi}} h(\statePi,\stateQi)\prob{\stateQ}{\stateQi} \prob{\TR{n-1}{\delta,\stateQi}{T_{A \cap \R{n - 1}{\delta}{\stateQi}}}}{} \\ & \leq \sum_{\stateQi \in \R{n - 1}{\delta}{A}} \prob{\stateQ}{\stateQi}\prob{\TR{n-1}{\delta,\stateQi}{T_{A \cap \R{n - 1}{\delta}{\stateQi}}}}{} \sum_{\statePi \in A \cap \R{n - 1}{\delta}{\stateQi}} h(\statePi,\stateQi) \\ & \leq \sum_{\stateQi \in \R{n - 1}{\delta}{A}} \prob{\stateQ}{\stateQi}\prob{\TR{n-1}{\delta,\stateQi}{T_{A \cap \R{n - 1}{\delta}{\stateQi}}}}{} \\ & = \prob{\TR{n}{\delta,\stateQ}{T}}{} \end{align*} The last equality follows by partitioning $\TR{n}{\delta,\stateQ}{T}$ according to the second state of each trace $\stateQi$. The set of all such second states is the set of those bisimilar to (some state of) $A$, namely $\R{n - 1}{\delta}{A}$. Given any such $\stateQi$, the probability of its partition is $\prob{\stateQ}{\stateQi}\prob{U_{\stateQi}}{}$ where $U_{\stateQi}$ is the set of the \emph{tails} of $\TR{n}{\delta,\stateQ}{T}$ starting from $\stateQi$. Since this set is defined taking pointwise bisimilar traces, we can equivalently express $U_{\stateQi}$ by first taking the tails of $T$ (i.e., $T_A$), and then considering the bisimilar traces: in other words, we have $U_{\stateQi} = \TR{n-1}{\delta,\stateQi}{T_{A}}$. 
Note that the states in $A$ which are not bisimilar to $\stateQi$ do not contribute to $\TR{n-1}{\delta,\stateQi}{T_{A}}$ in any way, so we can also write the desired $U_{\stateQi} = \TR{n-1}{\delta,\stateQi}{T_{A \cap \R{n - 1}{\delta}{\stateQi}}}$. \qed \end{proofof}
\begin{applemma}\ellel{lem:logic-finite-traces-next} Let $T = \setcomp{t}{t(0) = \stateP \,\land\, t \sat{\delta}{n}{r} \logNext \sFormula}$ for some $\stateP,\sFormula$, and let $m \geq 2$. Then:
\[ \prob{T}{} = \prob{\setcomp{\tilde{t}}{\card{\tilde{t}} = m \,\land\, \tilde{t}(0) = \stateP \,\land\, \tilde{t} \sat{\delta}{n}{r} \logNext \sFormula}}{} \]
\end{applemma}
\begin{proof} Trivial. \end{proof}
\begin{applemma}\ellel{lem:logic-finite-traces} Let $T = \setcomp{t}{t(0) = \stateP \,\land\, t \sat{\delta}{n}{r} \sFormula[1] \logUntil \sFormula[2]}$ for some $\stateP,\sFormula[1], \sFormula[2]$, and let $m \geq n + 1$. Then:
\[ \prob{T}{} = \prob{\setcomp{\tilde{t}}{\card{\tilde{t}} = m \,\land\, \tilde{t}(0) = \stateP \,\land\, \tilde{t} \sat{\delta}{n}{r} \sFormula[1] \logUntil \sFormula[2]}}{} \]
\end{applemma}
\begin{proof} (Sketch) Let $\tilde{T} = \setcomp{\tilde{t}}{\card{\tilde{t}} = m \,\land\, \tilde{t}(0) = \stateP \,\land\, \tilde{t} \sat{\delta}{n}{r} \sFormula[1] \logUntil \sFormula[2]}$. The thesis follows from the fact that $T = \bigcup_{\tilde{t} \in \tilde{T}}{\cyl{\tilde{t}}}$. \end{proof}
\noindent For notational convenience, hereafter we will often write $q \sat{\delta}{n}{r} \phi$ instead of $q \in \sem{n}{\delta}{r}{\phi}$.
\begin{applemma}\ellel{lem:bisimi-implies-prop-preserv} Let $k$ and $n$ be, respectively, the maximum nesting level of $\logUntil$ and of $\logNext$ in $\sFormula$, and let $\stateP \CSim{mk + n + 1}{\delta_1} \stateQ$.
Then:
\begin{enumerate}
\item \ellel{lem:bisimi-implies-prop-preserv:item1} $\stateP \sat{\delta_2}{m}{+1} \sFormula \implies \stateQ \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \sFormula$
\item \ellel{lem:bisimi-implies-prop-preserv:item2} $\stateP \not\sat{\delta_2}{m}{-1} \sFormula \implies \stateQ \not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \sFormula$
\end{enumerate}
\end{applemma}
\begin{proof} By induction on $\sFormula$. The cases $\logTrue$ and $\atomA$ are trivial.
\begin{itemize}
\item $\mathtt{\neg} \sFormulai$. We only show \autoref{lem:bisimi-implies-prop-preserv:item1} as the other item is similar. So, suppose $\stateP \sat{\delta_2}{m}{+1} \mathtt{\neg} \sFormulai$. Then, $\stateP \not\sat{\delta_2}{m}{-1} \sFormulai$. By the induction hypothesis we have that $\stateQ \not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \sFormulai$, and hence $\stateQ \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \mathtt{\neg} \sFormulai$ as required.
\item $\sFormula[1] \,\mathtt{\land}\, \sFormula[2]$. We only show \autoref{lem:bisimi-implies-prop-preserv:item1} as the other item is similar. So, suppose $\stateP \sat{\delta_2}{m}{+1} \sFormula[1] \,\mathtt{\land}\, \sFormula[2]$. Then $\stateP \sat{\delta_2}{m}{+1} \sFormula[1]$ and $\stateP \sat{\delta_2}{m}{+1} \sFormula[2]$. By the induction hypothesis $\stateQ \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \sFormula[1]$ and $\stateQ \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \sFormula[2]$. Therefore $\stateQ \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \sFormula[1] \,\mathtt{\land}\, \sFormula[2]$ as required.
\item $\logPr{\rhd \pi}{\pFormula}$. For \autoref{lem:bisimi-implies-prop-preserv:item1}, suppose that $\stateP \sat{\delta_2}{m}{+1} \logPr{\rhd \pi}{\pFormula}$. We only deal with the case $\rhd = \,\geq$, since the case $\rhd = \, >$ is analogous.
Let: \[ T = \setcomp{\tilde{t}}{\card{\tilde{t}} = mk + n + 1 \,\land\, \tilde{t}(0) = \stateP \,\land\, \tilde{t} \sat{\delta_2}{m}{+1} \pFormula}{} \] We start by proving that: \begin{equation}\ellel{lem:bisimi-implies-prop-preserv:eq1} \forall \tilde{u} \in \TR{mk + n + 1}{\delta_1,\stateQ}{T} \;\; : \;\; \tilde{u} \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \pFormula \end{equation} Let $\tilde{u} \in \TR{m k + n + 1}{\delta_1,\stateQ}{T}$. Then, there is $\tilde{t} \in T$ such that, for all $0 \leq i < mk + n + 1$: \[\tilde{t}(i) \CSim{mk + n + 1-i}{\delta_1} \tilde{u}(i)\] We proceed by cases on $\pFormula$. \begin{itemize} \item $\sFormula[1] \logUntil \sFormula[2]$. First notice that $mk + n + 1 \geq m + 1$, and hence by \autoref{lem:logic-finite-traces} we have that: \[\prob{T}{} = \prob{\setcomp{t}{t(0) = \stateP \,\land\, t \sat{\delta_2}{m}{+1} \sFormula[1] \logUntil \sFormula[2]}}{}\] We then have $\prob{T}{} + \delta_2 \geq \pi$. Since $\tilde{t} \sat{\delta_2}{m}{+1} \sFormula[1] \logUntil \sFormula[2]$, we have that: \[ \exists i \leq m: \tilde{t}(i) \sat{\delta_2}{m}{+1} \sFormula[2] \,\land\, \forall j < i: \tilde{t}(j) \sat{\delta_2}{m}{+1} \sFormula[1] \] Let $n'$ be the maximum nesting level of $\logNext$ in $\sFormula[2]$. 
We know that:
\[ \tilde{t}(i) \CSim{mk + n + 1-i}{\delta_1} \tilde{u}(i) \,\land\, mk + n + 1 - i \geq m(k - 1) + n' + 1 \]
Then, by Lemma~\ref{lem:sim:monotonicity} (monotonicity of $\CSim{}{}$), we have that:
\[ \tilde{t}(i) \CSim{m(k - 1) + n' + 1}{\delta_1} \tilde{u}(i) \]
Then, by the induction hypothesis, we have that:
\[ \tilde{u}(i) \sat{\delta_2 + \delta_1(m(k - 1) + n' + 1)}{m}{+1} \sFormula[2] \]
By Lemma~\ref{lem:pctl:monotonicity} (monotonicity of $\sat{}{}{}$) it follows that:
\[ \tilde{u}(i) \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \sFormula[2] \]
With a similar argument we can conclude that, for all $j < i$:
\[ \tilde{u}(j) \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \sFormula[1] \]
Hence \autoref{lem:bisimi-implies-prop-preserv:eq1} holds.
\item $\logNext \sFormula[1]$. First notice that $mk + n + 1 \geq 2$, and hence by \autoref{lem:logic-finite-traces-next} we have that:
\[ \prob{T}{} = \prob{\setcomp{t}{t(0) = \stateP \,\land\, t \sat{\delta_2}{m}{+1} \logNext \sFormula[1]}}{} \]
Then, $\prob{T}{} + \delta_2 \geq \pi$. Since $\tilde{t} \sat{\delta_2}{m}{+1} \logNext \sFormula[1]$, we have that \( \tilde{t}(1) \sat{\delta_2}{m}{+1} \sFormula[1] \). We know that \( \tilde{t}(1) \CSim{mk + n}{\delta_1} \tilde{u}(1) \). By the induction hypothesis, \( \tilde{u}(1) \sat{\delta_2 + \delta_1(mk + n)}{m}{+1} \sFormula[1] \). By Lemma~\ref{lem:pctl:monotonicity} (monotonicity of $\sat{}{}{}$) it follows that: \( \tilde{u}(1) \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \sFormula[1] \). Hence, \eqref{lem:bisimi-implies-prop-preserv:eq1} holds.
\end{itemize}
Back to the main statement, we have that, by Lemma~\ref{lem:traces}:
\[ \prob{\TR{mk + n + 1}{\delta_1,\stateQ}{T}}{} + \delta_2 + \delta_1 (mk + n + 1) \geq \prob{T}{} + \delta_2 \]
So, summing up:
\begin{align*} & \prob{\setcomp{t}{t(0) = \stateQ \;\land\; t\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \pFormula}}{} + \delta_2 + \delta_1 (mk + n + 1) \\ = \; & \prob{\setcomp{\tilde{t}}{\card{\tilde{t}} = mk + n + 1 \;\land\; \tilde{t}(0) = \stateQ \;\land\; \tilde{t}\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \pFormula}}{} \\ & \;\;\;\; + \delta_2 + \delta_1 (mk + n + 1) \\ \geq \; & \prob{\TR{mk + n + 1}{\delta_1,\stateQ}{T}}{} + \delta_2 + \delta_1 (mk + n + 1) \\ \geq \; & \prob{T}{} + \delta_2 \\ \geq \; & \pi \end{align*}
Therefore, $\stateQ \sat{\delta_2 + \delta_1(mk + n + 1)}{m}{+1} \logPr{\geq \pi}{\pFormula}$. For \autoref{lem:bisimi-implies-prop-preserv:item2}, suppose that $\stateP \not\sat{\delta_2}{m}{-1} \logPr{\geq \pi}{\pFormula}$. Then:
\[ \prob{\setcomp{t}{t(0) = \stateP \,\land\, t \sat{\delta_2}{m}{-1} \pFormula}}{} - \delta_2 < \pi \]
From the above, by a case analysis on $\pFormula$, and exploiting \autoref{lem:logic-finite-traces} and \autoref{lem:logic-finite-traces-next}, we conclude that $\prob{T}{} - \delta_2 < \pi$, where:
\[ T = \setcomp{\tilde{t}}{\card{\tilde{t}} = mk + n + 1 \,\land\, \tilde{t}(0) = \stateP \,\land\, \tilde{t} \sat{\delta_2}{m}{-1} \pFormula} \]
Let:
\[\bar T = \setcomp{\tilde{t}}{\card{\tilde{t}} = mk + n + 1 \,\land\, \tilde{t}(0) = \stateP \,\land\, \tilde{t} \not\sat{\delta_2}{m}{-1} \pFormula}\]
We have that $1 - \prob{\bar T}{} = \prob{T}{}$. We start by proving that:
\[ \forall \tilde{u} \in \TR{mk + n + 1}{\delta_1,\stateQ}{\bar T} \;\; : \;\; \tilde{u} \not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \pFormula \]
Let $\tilde{u} \in \TR{mk + n + 1}{\delta_1,\stateQ}{\bar T}$.
Then, there exists $\tilde{t} \in \bar{T}$ such that, for all $0 \leq i < mk + n + 1$:
\[\tilde{t}(i) \CSim{mk + n + 1 - i}{\delta_1} \tilde{u}(i)\]
We proceed by cases on $\pFormula$.
\begin{itemize}
\item $\sFormula[1] \logUntil \sFormula[2]$. Since $\tilde{t} \not\sat{\delta_2}{m}{-1} \sFormula[1] \logUntil \sFormula[2]$, we have that:
\[ \forall i \leq m: \tilde{t}(i) \not\sat{\delta_2}{m}{-1} \sFormula[2] \lor \exists j < i: \tilde{t}(j) \not\sat{\delta_2}{m}{-1} \sFormula[1] \]
Take $i \leq m$. Let $n'$ be the maximum nesting level of $\logNext$ in $\sFormula[2]$. If $\tilde{t}(i) \not\sat{\delta_2}{m}{-1} \sFormula[2]$, since
\[ \tilde{t}(i) \CSim{mk + n + 1 - i}{\delta_1} \tilde{u}(i) \,\land\, mk + n + 1 - i \geq m(k - 1) + n' + 1 \]
by Lemma~\ref{lem:sim:monotonicity} (monotonicity of $\CSim{}{}$) we have that:
\[ \tilde{t}(i) \CSim{m(k - 1) + n' + 1}{\delta_1} \tilde{u}(i) \]
By the induction hypothesis we have that:
\[ \tilde{u}(i) \not\sat{\delta_2 + \delta_1(m(k - 1) + n' + 1)}{m}{-1} \sFormula[2] \]
By Lemma~\ref{lem:pctl:monotonicity} (monotonicity of $\sat{}{}{}$) it follows:
\[ \tilde{u}(i) \not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \sFormula[2] \]
If $\tilde{t}(j) \not\sat{\delta_2}{m}{-1} \sFormula[1]$ for some $j < i$, with a similar argument we can conclude that:
\[ \tilde{u}(j) \not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \sFormula[1] \]
\item $\logNext \sFormula[1]$. Since $\tilde{t} \not\sat{\delta_2}{m}{-1} \logNext \sFormula[1]$, we have that: \( \tilde{t}(1) \not\sat{\delta_2}{m}{-1} \sFormula[1] \). Since $\tilde{t}(1) \CSim{mk + n}{\delta_1} \tilde{u}(1)$, by the induction hypothesis we have \( \tilde{u}(1) \not\sat{\delta_2 + \delta_1(mk + n)}{m}{-1} \sFormula[1] \).
By Lemma~\ref{lem:pctl:monotonicity} it follows that:
\[ \tilde{u}(1) \not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \sFormula[1] \]
\end{itemize}
Back to the main statement, by Lemma~\ref{lem:traces} we have that:
\[\prob{\bar T}{} \leq \prob{\TR{mk + n + 1}{\delta_1,\stateQ}{\bar T}}{} + \delta_1(mk + n + 1)\]
Summing up, we have that:
\begin{align*} \hspace{-12pt} & \prob{\setcomp{t}{t(0) = \stateQ \,\land\, t\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \pFormula}}{} - \delta_2 - \delta_1 (mk + n + 1) \\ & = \prob{\setcomp{\tilde{t}}{\card{\tilde{t}} = mk + n + 1 \,\land\, \tilde{t}(0) = \stateQ \,\land\, \tilde{t}\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \pFormula}}{} \\ & \hspace{12pt} - \delta_2 - \delta_1 (mk + n + 1) \\ & = 1 - \prob{\setcomp{\tilde{t}}{\card{\tilde{t}} = mk + n + 1 \,\land\, \tilde{t}(0) = \stateQ \,\land\, \tilde{t}\not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \pFormula}}{} \\ & \hspace{12pt} - \delta_2 - \delta_1 (mk + n + 1) \\ & \leq 1 - \prob{\TR{mk + n + 1}{\delta_1,\stateQ}{\bar T}}{} - \delta_2 - \delta_1 (mk + n + 1) \\ & \leq 1 - \prob{\bar T}{} - \delta_2 \\ & = \prob{T}{} - \delta_2 < \pi \end{align*}
Therefore, $\stateQ \not\sat{\delta_2 + \delta_1(mk + n + 1)}{m}{-1} \logPr{\geq \pi}{\pFormula}$. \qedhere
\end{itemize}
\end{proof}
\begin{proofof}{Theorem}{th:soundness} Immediate consequence of~\autoref{lem:bisimi-implies-prop-preserv}. \qed \end{proofof}
\end{document}
\begin{document}
\title[Kauffman-Vogel and Murakami-Ohtsuki-Yamada Polynomials]{On the Kauffman-Vogel and the Murakami-Ohtsuki-Yamada Graph Polynomials}
\author{Hao Wu}
\address{Department of Mathematics, The George Washington University, Monroe Hall, Room 240, 2115 G Street, NW, Washington DC 20052}
\email{[email protected]}
\subjclass[2000]{Primary 57M25}
\keywords{Kauffman polynomial, HOMFLY-PT polynomial}
\begin{abstract} This paper consists of three parts. First, we generalize the Jaeger Formula to express the Kauffman-Vogel graph polynomial as a state sum of the Murakami-Ohtsuki-Yamada graph polynomial. Then, we demonstrate that reversing the orientation and the color of a MOY graph along a simple circuit does not change the $\mathfrak{sl}(N)$ Murakami-Ohtsuki-Yamada polynomial or the $\mathfrak{sl}(N)$ homology of this MOY graph. In fact, reversing the orientation and the color of a component of a colored link only changes the $\mathfrak{sl}(N)$ homology by an overall grading shift. Finally, as an application of the first two parts, we prove that the $\mathfrak{so}(6)$ Kauffman polynomial is equal to the $2$-colored $\mathfrak{sl}(4)$ Reshetikhin-Turaev link polynomial, which implies that the $2$-colored $\mathfrak{sl}(4)$ link homology categorifies the $\mathfrak{so}(6)$ Kauffman polynomial. \end{abstract}
\maketitle
\tableofcontents
\section{The Jaeger Formula of the Kauffman-Vogel Polynomial}\label{sec-Jaeger}
\subsection{The Kauffman and the HOMFLY-PT link polynomials} The Kauffman polynomial $P(K)(q,a)$ defined in \cite{Kauffman} is an invariant of unoriented framed links in $S^3$. Here, we use the following normalization of the Kauffman polynomial.
\begin{equation}\label{Kauffman-skein} \begin{cases} P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\circle{15}} \end{picture}) = \frac{a-a^{-1}}{q-q^{-1}} +1 \\ P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\line(1,1){20}} \put(-2,12){\line(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}) - P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,0){\line(-1,1){20}} \put(2,12){\line(1,1){8}} \put(-2,8){\line(-1,-1){8}} \end{picture}) = (q-q^{-1})(P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) - P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture})) \\ P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\line(1,1){12}} \put(-2,12){\line(-1,1){8}} \qbezier(2,12)(10,20)(10,10) \qbezier(2,8)(10,0)(10,10) \end{picture}) = a P(\setlength{\unitlength}{.75pt} \begin{picture}(15,20)(-10,7) \qbezier(-10,0)(10,10)(-10,20) \end{picture}) \end{cases} \end{equation}
The $\mathfrak{so}(N)$ Kauffman polynomial $P_{N}(K)(q)$ is defined to be the specialization
\begin{equation}\label{Kauffman-N-def} P_{N}(K)(q)= P(K)(q,q^{N-1}). \end{equation}
The HOMFLY-PT polynomial $R(K)(q,a)$ defined in \cite{HOMFLY,PT} is an invariant of oriented framed links in $S^3$. Here, we use the following normalization of the HOMFLY-PT polynomial.
\begin{equation}\label{HOMFLY-skein} \begin{cases} R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\circle{15}} \end{picture}) = \frac{a-a^{-1}}{q-q^{-1}} \\ R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}) - R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,0){\vector(-1,1){20}} \put(2,12){\vector(1,1){8}} \put(-2,8){\line(-1,-1){8}} \end{picture}) = (q-q^{-1})R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) \\ R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\line(1,1){12}} \put(-2,12){\line(-1,1){8}} \qbezier(2,12)(10,20)(10,10) \qbezier(2,8)(10,0)(10,10) \end{picture}) = a R(\setlength{\unitlength}{.75pt} \begin{picture}(15,20)(-10,7) \qbezier(-10,0)(10,10)(-10,20) \end{picture}) \end{cases} \end{equation}
The $\mathfrak{sl}(N)$ HOMFLY-PT polynomial $R_{N}(K)(q)$ is defined to be the specialization
\begin{equation}\label{HOMFLY-N-def} R_{N}(K)(q)= R(K)(q,q^{N}). \end{equation}
It is easy to renormalize $P(K)(q,a)$ and $R(K)(q,a)$ to make them invariant under Reidemeister move (I) too.
\subsection{The Jaeger Formula} The Jaeger Formula can be found in, for example, \cite{Ferrand, Kauffman-book}. Here, we give it a slightly different formulation. Given an unoriented link diagram $D$, we call a segment of the link between two adjacent crossings an edge of this diagram $D$. An edge orientation of $D$ is an orientation of all the edges of $D$. We say that an edge orientation of $D$ is balanced if, at every crossing, two edges point inward and two edges point outward. Up to rotation, there are four possible balanced edge orientations near a crossing.
(See Figure \ref{balanced-orientation-crossing-fig}.) \begin{figure} \caption{Balanced edge orientations near a crossing} \label{balanced-orientation-crossing-fig} \end{figure} Denote by $\widetilde{\mathcal{O}}(D)$ the set of all balanced edge orientations of $D$. Equipping $D$ with $\varrho \in \widetilde{\mathcal{O}}(D)$, we get an edge-oriented diagram $D_\varrho$. We say that $\varrho$ is admissible if $D_\varrho$ does not contain a top inward crossing. We denote by $\mathcal{O}(D)$ the subset of $\widetilde{\mathcal{O}}(D)$ consisting of all admissible balanced edge orientations of $D$. \begin{figure} \caption{Resolutions of a top outward crossing} \label{res-top-out-fig} \end{figure} For $\varrho \in \mathcal{O}(D)$, we allow the two resolutions in Figure \ref{res-top-out-fig} at each top outward crossing of $D_\varrho$. A resolution $\varsigma$ of $D_\varrho$ is a choice of $A$ or $B$ resolution of every top outward crossing of $D_\varrho$. Denote by $\Sigma(D_\varrho)$ the set of all resolutions of $D_\varrho$. For each $\varsigma \in \Sigma(D_\varrho)$ and each top outward crossing $c$ of $D_\varrho$, we define a local weight \begin{equation}\label{local-weight-crossing-Jaeger} [D_\varrho,\varsigma;c]= \begin{cases} q-q^{-1} & \text{if } \varsigma \text{ applies } A \text{ to } c, \\ -q+q^{-1} & \text{if } \varsigma \text{ applies } B \text{ to } c. \end{cases} \end{equation} The total weight $[D_\varrho,\varsigma]$ of the resolution $\varsigma$ is defined to be \begin{equation}\label{weight-link-Jaeger} [D_\varrho,\varsigma]= \prod_c [D_\varrho,\varsigma;c], \end{equation} where $c$ runs through all top outward crossings of $D_\varrho$. For $\varsigma \in \Sigma(D_\varrho)$, denote by $D_{\varrho,\varsigma}$ the oriented link diagram (in the usual sense) obtained by applying $\varsigma$ to $D_\varrho$. 
As an immersed curve in $\mathbb{R}^2$, $D_{\varrho,\varsigma}$ has a rotation number $\mathrm{rot}(D_{\varrho,\varsigma})$ (also known as the Whitney index, or the degree of the Gauss map). The following is our formulation of the Jaeger Formula, which is easily shown to be equivalent to the Jaeger Formula given in \cite{Ferrand, Kauffman-book}. \begin{equation}\label{eq-Jaeger-formula} P(D)(q,a^2q^{-1}) = \sum_{\varrho \in \mathcal{O}(D)} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a). \end{equation} Plugging $a=q^N$ into the above formula, we get \begin{equation}\label{eq-Jaeger-formula-N} P_{2N}(D)(q) = \sum_{\varrho \in \mathcal{O}(D)} \sum_{\varsigma \in \Sigma(D_\varrho)} q^{-(N-1)\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R_N(D_{\varrho,\varsigma})(q). \end{equation} \subsection{The Kauffman-Vogel polynomial, the Murakami-Ohtsuki-Yamada polynomial and the Jaeger Formula}\label{subsec-Jaeger-graph} The first objective of the present paper is to generalize the Jaeger Formula \eqref{eq-Jaeger-formula} to express the Kauffman-Vogel polynomial as a state sum in terms of the Murakami-Ohtsuki-Yamada polynomial of (uncolored) oriented knotted $4$-valent graphs. A knotted $4$-valent graph is an immersion of an abstract $4$-valent graph into $\mathbb{R}^2$ whose only singularities are finitely many crossings away from vertices. Here, a crossing is a transversal double point with one intersecting branch specified as upper and the other as lower. Two knotted $4$-valent graphs are equivalent if they are isotopic to each other via a rigid vertex isotopy. (See \cite[Section 1]{KV} for the definition of rigid vertex isotopies.)
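As a sanity check on the Jaeger Formula \eqref{eq-Jaeger-formula}, take $D$ to be the zero-crossing diagram of the unknot: both balanced orientations of the circle are admissible, each admits only the empty resolution with weight $1$, and their rotation numbers are $+1$ and $-1$, so the right-hand side collapses to $(a^{-1}q + aq^{-1})R(\bigcirc)(q,a)$. This case can be confirmed in exact rational arithmetic; a minimal sketch (the function names are our own bookkeeping, not standard notation):

```python
from fractions import Fraction

def P_unknot(q, a):
    # Kauffman value of the unknot: (a - a^{-1})/(q - q^{-1}) + 1
    return (a - 1 / a) / (q - 1 / q) + 1

def R_unknot(q, a):
    # HOMFLY-PT value of the unknot: (a - a^{-1})/(q - q^{-1})
    return (a - 1 / a) / (q - 1 / q)

def jaeger_rhs(q, a):
    # Two balanced orientations of the circle, with rotation numbers +1
    # and -1, each carrying the empty resolution of weight 1:
    return (q / a + a / q) * R_unknot(q, a)

# Check P(unknot)(q, a^2 q^{-1}) = RHS at exact rational sample points.
for q, a in [(Fraction(3, 2), Fraction(5, 3)), (Fraction(7, 4), Fraction(2, 5))]:
    assert P_unknot(q, a**2 / q) == jaeger_rhs(q, a)
print("Jaeger formula verified on the unknot")
```

Expanding both sides over the common denominator $q-q^{-1}$ gives $a^2q^{-1}-a^{-2}q+q-q^{-1}$ in each case, so the agreement is an identity rather than an accident of the sample points.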
\begin{figure} \caption{Vertex of an oriented knotted $4$-valent graph} \label{oriented-vertex-fig} \end{figure} We say that a knotted $4$-valent graph is oriented if the underlying abstract $4$-valent graph is oriented in such a way that, up to rotation, every vertex in the knotted $4$-valent graph looks like the one in Figure \ref{oriented-vertex-fig}. We say that a knotted $4$-valent graph is unoriented if the underlying abstract $4$-valent graph is unoriented. Note that some orientations of the underlying abstract $4$-valent graph do not give rise to orientations of the knotted $4$-valent graph. The Kauffman-Vogel polynomial $P(D)(q,a)$ defined in \cite{KV} is an invariant of unoriented knotted $4$-valent graphs under regular rigid vertex isotopy. It is defined by the skein relations \eqref{Kauffman-skein} of the Kauffman polynomial plus the following additional relation. \begin{equation}\label{Kauffman-skein-vertex} P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\line(1,1){10}} \put(-10,0){\line(1,1){10}} \put(0,10){\line(-1,1){10}} \put(10,0){\line(-1,1){10}} \end{picture})= - P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\line(1,1){20}} \put(-2,12){\line(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}) + q P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) +q^{-1} P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}) = - P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,0){\line(-1,1){20}} \put(2,12){\line(1,1){8}} \put(-2,8){\line(-1,-1){8}} \end{picture}) + q^{-1} P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) + q P(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(10,0)
\qbezier(-10,20)(0,10)(10,20) \end{picture}). \end{equation} The $\mathfrak{so}(N)$ Kauffman-Vogel polynomial $P_{N}(D)(q)$ is defined to be the specialization \begin{equation}\label{Kauffman-Vogel-2N-def} P_{N}(D)(q)= P(D)(q,q^{N-1}). \end{equation} The Murakami-Ohtsuki-Yamada polynomial\footnote{For oriented knotted $4$-valent graphs, the Murakami-Ohtsuki-Yamada polynomial was first defined by Kauffman and Vogel \cite{KV}. Murakami, Ohtsuki and Yamada \cite{MOY} generalized it to knotted MOY graphs and used it to recover the Reshetikhin-Turaev $\mathfrak{sl}(N)$ polynomial of links colored by wedge powers of the defining representation.} $R(D)(q,a)$ of oriented knotted $4$-valent graphs is an invariant under regular rigid vertex isotopy. It is defined by the skein relations \eqref{HOMFLY-skein} of the HOMFLY-PT polynomial plus the following additional relation. \begin{equation}\label{HOMFLY-skein-vertex} R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \end{picture}) = - R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}) + q R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) = - R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,0){\vector(-1,1){20}} \put(2,12){\vector(1,1){8}} \put(-2,8){\line(-1,-1){8}} \end{picture}) + q^{-1} R(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}).
\end{equation} The $\mathfrak{sl}(N)$ Murakami-Ohtsuki-Yamada polynomial $R_{N}(D)(q)$ is defined to be the specialization \begin{equation}\label{MOY-uncolored-N-def} R_{N}(D)(q)= R(D)(q,q^{N}). \end{equation} From now on, we will refer to the Kauffman-Vogel polynomial as the KV polynomial and the Murakami-Ohtsuki-Yamada polynomial as the MOY polynomial. Given a knotted $4$-valent graph $D$, we call a segment of $D$ between two adjacent vertices or crossings an edge. (An edge can have a crossing and a vertex as its end points. Note that an edge of the underlying abstract $4$-valent graph may be divided into several edges in $D$ by crossings.) An edge orientation of $D$ is an orientation of all the edges of $D$. We say that an edge orientation of $D$ is balanced if, at every crossing and every vertex, two edges point inward and two edges point outward. As before, up to rotation, there are four possible balanced edge orientations near a crossing. (See Figure \ref{balanced-orientation-crossing-fig}.) Up to rotation, there are two possible balanced edge orientations near a vertex. (See Figure \ref{balanced-orientation-vertex-fig}.) \begin{figure} \caption{Balanced edge orientations near a vertex} \label{balanced-orientation-vertex-fig} \end{figure} Denote by $\widetilde{\mathcal{O}}(D)$ the set of all balanced edge orientations of $D$. Equipping $D$ with $\varrho \in \widetilde{\mathcal{O}}(D)$, we get an edge-oriented diagram $D_\varrho$. We say that $\varrho$ is admissible if $D_\varrho$ does not contain a top inward crossing. We denote by $\mathcal{O}(D)$ the subset of $\widetilde{\mathcal{O}}(D)$ consisting of all admissible balanced edge orientations of $D$. For $\varrho \in \mathcal{O}(D)$, we allow the two resolutions in Figure \ref{res-top-out-fig} at each top outward crossing of $D_\varrho$ and the two resolutions in Figure \ref{res-non-crossing-vertex-fig} at each non-crossing-like vertex. 
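The counts above can be checked by brute force: there are $\binom{4}{2}=6$ balanced in/out assignments of the four edge-ends, a crossing is preserved only by the half-turn, and a rigid $4$-valent vertex by all quarter-turns. A minimal sketch (the compass labels for edge-ends are our own bookkeeping):

```python
from itertools import combinations

# Edge-ends around a crossing or vertex, listed counterclockwise.
ends = ("SW", "SE", "NE", "NW")

# A balanced local orientation chooses exactly two inward edge-ends.
balanced = [frozenset(c) for c in combinations(ends, 2)]

def rotate(s, k):
    # Rotate a set of edge-end labels by k counterclockwise quarter-turns.
    return frozenset(ends[(ends.index(e) + k) % 4] for e in s)

def count_orbits(states, turns):
    # Count orbits of the given quarter-turn rotations acting on states.
    seen, orbits = set(), 0
    for s in states:
        if s not in seen:
            orbits += 1
            seen |= {rotate(s, k) for k in turns}
    return orbits

assert len(balanced) == 6
# A crossing is preserved only by the half-turn; a vertex by all quarter-turns.
print(count_orbits(balanced, (0, 2)))        # 4 balanced orientations near a crossing
print(count_orbits(balanced, (0, 1, 2, 3)))  # 2 balanced orientations near a vertex
```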
A resolution $\varsigma$ of $D_\varrho$ is a choice of $A$ or $B$ resolution of every top outward crossing of $D_\varrho$ and $L$ or $R$ resolution of every non-crossing-like vertex. Denote by $\Sigma(D_\varrho)$ the set of all resolutions of $D_\varrho$. \begin{figure} \caption{Resolutions of a non-crossing-like vertex} \label{res-non-crossing-vertex-fig} \end{figure} Fix a $\varsigma \in \Sigma(D_\varrho)$. For each top outward crossing $c$ of $D_\varrho$, the local weight $[D_\varrho,\varsigma;c]$ is defined as in \eqref{local-weight-crossing-Jaeger}. For each non-crossing-like vertex $v$, we define a local weight $[D_\varrho,\varsigma;v]$ by the following equation. \begin{equation}\label{local-weight-vertex-Jaeger} [D_\varrho,\varsigma;v]= \begin{cases} q & \text{if } \varsigma \text{ applies } L \text{ to } v, \\ q^{-1} & \text{if } \varsigma \text{ applies } R \text{ to } v. \end{cases} \end{equation} The total weight $[D_\varrho,\varsigma]$ of the resolution $\varsigma$ is defined to be \begin{equation}\label{weight-graph-Jaeger} [D_\varrho,\varsigma]= \left(\prod_c [D_\varrho,\varsigma;c]\right) \cdot \left(\prod_v [D_\varrho,\varsigma;v]\right), \end{equation} where $c$ runs through all top outward crossings of $D_\varrho$ and $v$ runs through all non-crossing-like vertices of $D_\varrho$. The following theorem is our generalization of the Jaeger Formula to knotted $4$-valent graphs. \begin{theorem}\label{thm-Jaeger-formula-graph} \begin{equation}\label{eq-Jaeger-formula-graph} P(D)(q,a^2q^{-1}) = \sum_{\varrho \in \mathcal{O}(D)} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a). \end{equation} Consequently, for $N\geq 1$, \begin{equation}\label{eq-Jaeger-formula-N-graph} P_{2N}(D)(q) = \sum_{\varrho \in \mathcal{O}(D)} \sum_{\varsigma \in \Sigma(D_\varrho)} q^{-(N-1)\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R_N(D_{\varrho,\varsigma})(q). 
\end{equation} \end{theorem} \begin{remark} Murakami, Ohtsuki and Yamada \cite{MOY} established a state sum formula for the $\mathfrak{sl}(N)$ MOY polynomial $R_N$. Combining that with \eqref{eq-Jaeger-formula-N-graph}, we get a state sum formula for the $\mathfrak{so}(2N)$ KV polynomial. In particular, note that the $\mathfrak{sl}(1)$ MOY polynomial of a $4$-valent graph $D$ \textbf{embedded} in $\mathbb{R}^2$ is simply given by \begin{equation}\label{eq-sl-1} R_1(D) = \begin{cases} 1 & \text{if } D \text{ has no vertex}, \\ 0 & \text{otherwise}. \end{cases} \end{equation} Using \eqref{eq-sl-1} and \eqref{eq-Jaeger-formula-N-graph}, it is straightforward to recover the formula for the $\mathfrak{so}(2)$ KV polynomial of a planar $4$-valent graph given by Carpentier \cite[Theorem 4]{Carpentier1} and by Caprau and Tipton \cite[Theorem 4]{Caprau-Tipton}. We would also like to point out that the concept of balanced edge orientation is implicit in \cite{Carpentier2}, in which Carpentier gave an alternative proof of \cite[Theorem 4]{Carpentier1}. \end{remark} \begin{proof}[Proof of Theorem \ref{thm-Jaeger-formula-graph}] We prove Theorem \ref{thm-Jaeger-formula-graph} by induction on the number of vertices in $D$. The proof comes down to a straightforward but rather lengthy tabulation of all admissible balanced edge orientations of $D$. If $D$ contains no vertex, then \eqref{eq-Jaeger-formula-graph} becomes \eqref{eq-Jaeger-formula}, which is known to be true. Assume that \eqref{eq-Jaeger-formula-graph} is true whenever $D$ has at most $n-1$ vertices. Now let $D$ be a knotted $4$-valent graph with $n$ vertices. \begin{figure} \caption{Local configurations defining $\widehat{D}$, $D^A$ and $D^B$} \label{D-hatD-DA-DB-fig} \end{figure} Choose a vertex $v$ of $D$. Define $\widehat{D}$, $D^A$ and $D^B$ to be the knotted $4$-valent graphs obtained from $D$ by replacing $v$ by the local configurations in Figure \ref{D-hatD-DA-DB-fig}.
By skein relation \eqref{Kauffman-skein-vertex}, we have \begin{equation}\label{eq-D-hatD-DA-DB} P(D)=-P(\widehat{D})+q P(D^A) +q^{-1} P(D^B). \end{equation} Note that each of $\widehat{D}$, $D^A$ and $D^B$ has only $n-1$ vertices. So \eqref{eq-Jaeger-formula-graph} is true for $\widehat{D}$, $D^A$ and $D^B$. Thus, by \eqref{eq-D-hatD-DA-DB}, to prove \eqref{eq-Jaeger-formula-graph} for $D$, we only need to check that \begin{eqnarray} \label{eq-Jaeger-D-hatD-DA-DB} && \sum_{\varrho \in \mathcal{O}(D)} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varrho \in \mathcal{O}(\widehat{D})} \sum_{\varsigma \in \Sigma(\widehat{D}_\varrho)} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{\varrho,\varsigma})} [\widehat{D}_\varrho,\varsigma] R(\widehat{D}_{\varrho,\varsigma})(q,a) \nonumber \\ && + q \sum_{\varrho \in \mathcal{O}(D^A)} \sum_{\varsigma \in \Sigma(D^A_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^A_{\varrho,\varsigma})} [D^A_\varrho,\varsigma] R(D^A_{\varrho,\varsigma})(q,a) \nonumber \\ && +q^{-1} \sum_{\varrho \in \mathcal{O}(D^B)} \sum_{\varsigma \in \Sigma(D^B_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^B_{\varrho,\varsigma})} [D^B_\varrho,\varsigma] R(D^B_{\varrho,\varsigma})(q,a)\nonumber. 
\end{eqnarray} According to the orientations of the four edges of $D$ incident at $v$, we divide $\mathcal{O}(D)$ into six disjoint subsets \begin{equation}\label{O-D-Subsets} \mathcal{O}(D)=\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})\sqcup\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(-10,20){\vector(1,-1){10}} \put(0,10){\vector(1,-1){10}} \put(4,8){\tiny{$v$}} \end{picture}) \sqcup\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}} \put(0,10){\vector(1,-1){10}} \put(4,8){\tiny{$v$}} \end{picture}) \sqcup\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){10}} \put(0,10){\vector(-1,-1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture}) \sqcup\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture}) \sqcup\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(0,10){\vector(1,-1){10}} \put(4,8){\tiny{$v$}} \end{picture}), \end{equation} where $\ast$ in $\mathcal{O}(D;\ast)$ specifies the edge orientation near $v$. Note that, depending on $D$, some of these subsets may be empty.
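In \eqref{O-D-Subsets}, the first four subsets have the edges at $v$ oriented in a crossing-like way (each strand through $v$ has one inward and one outward end), while the last two are non-crossing-like (both ends of one strand point inward). This $4+2$ split of the $\binom{4}{2}=6$ balanced local orientations can be checked by brute force; a minimal sketch (the compass labels and strand numbering are our own bookkeeping):

```python
from itertools import combinations

# The two strands through the vertex v are SW-NE and SE-NW.
strand = {"SW": 0, "NE": 0, "SE": 1, "NW": 1}

def crossing_like(inward):
    # Crossing-like: each strand has one inward and one outward edge-end,
    # i.e. the two inward edge-ends lie on different strands.
    a, b = inward
    return strand[a] != strand[b]

# The six balanced local orientations at v, each named by its inward pair.
types = list(combinations(strand, 2))
assert len(types) == 6

print(sum(crossing_like(t) for t in types))      # 4 crossing-like types
print(sum(not crossing_like(t) for t in types))  # 2 non-crossing-like types
```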
Using similar notations, we have the following partitions of $\mathcal{O}(\widehat{D})$, $\mathcal{O}(D^A)$ and $\mathcal{O}(D^B)$. \begin{eqnarray} \label{O-hatD-Subsets}&& \mathcal{O}(\widehat{D})= \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}) \sqcup\mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-10,20){\line(1,-1){8}} \put(2,8){\vector(1,-1){8}} \end{picture}) \sqcup\mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){20}} \put(-10,20){\line(1,-1){8}} \put(2,8){\vector(1,-1){8}} \end{picture}) \sqcup\mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}) \sqcup\mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(-1,-1){10}} \put(0,10){\vector(1,1){10}} \put(-10,20){\vector(1,-1){8}} \put(10,0){\vector(-1,1){8}} \end{picture}), \\ \label{O-DA-Subsets}&& \mathcal{O}(D^A) = \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) \sqcup \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(-1,-1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) \sqcup \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(-1,-1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20)
\qbezier(10,0)(0,10)(10,20) \end{picture}) \sqcup \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}), \\ \label{O-DB-Subsets}&& \mathcal{O}(D^B) = \mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(1,1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}) \sqcup\mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(-10,0){\vector(-1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}) \sqcup\mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(1,1){0}} \put(-10,0){\vector(-1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}) \sqcup\mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}). \end{eqnarray} First, let us consider the subset $\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})$.
There are obvious bijections \begin{eqnarray*} \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture}) & \xrightarrow{\varphi} & \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}), \\ \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture}) & \xrightarrow{\psi} & \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) \end{eqnarray*} that preserve the orientations of corresponding edges. Moreover, for each $\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})$, there are obvious bijections \begin{eqnarray*} \Sigma(D_\varrho) \xrightarrow{\varphi_\varrho} \Sigma(\widehat{D}_{\varphi(\varrho)}), \\ \Sigma(D_\varrho) \xrightarrow{\psi_\varrho} \Sigma(D^A_{\psi(\varrho)}) \end{eqnarray*} such that, for any $\varsigma \in \Sigma(D_\varrho)$, $\varsigma$, $\varphi_\varrho(\varsigma)$ and $\psi_\varrho(\varsigma)$ are identical outside the parts shown in Figure \ref{D-hatD-DA-DB-fig}. Note that the four edges at $v$ are oriented in a crossing-like way. So $\varsigma$ (resp.
$\varphi_\varrho(\varsigma)$ and $\psi_\varrho(\varsigma)$) does not change the part of $D_\varrho$ (resp. $\widehat{D}_{\varphi(\varrho)}$ and $D^A_{\psi(\varrho)}$) shown in Figure \ref{D-hatD-DA-DB-fig}. This implies that \begin{equation}\label{weight-D-hatD-DA} [D_\varrho,\varsigma] =[\widehat{D}_{\varphi(\varrho)},\varphi_\varrho(\varsigma)] =[D^A_{\psi(\varrho)},\psi_\varrho(\varsigma)]. \end{equation} It is also easy to see that \begin{equation}\label{rot-D-hatD-DA} \mathrm{rot}(D_{\varrho,\varsigma}) =\mathrm{rot}(\widehat{D}_{\varphi(\varrho),\varphi_\varrho(\varsigma)}) = \mathrm{rot}(D^A_{\psi(\varrho),\psi_\varrho(\varsigma)}). \end{equation} By the skein relation \eqref{HOMFLY-skein-vertex}, we know that \begin{equation}\label{HOMFLY-D-hatD-DA} R(D_{\varrho,\varsigma}) = -R(\widehat{D}_{\varphi(\varrho),\varphi_\varrho(\varsigma)}) +q R(D^A_{\psi(\varrho),\psi_\varrho(\varsigma)}). \end{equation} Combining equations \eqref{weight-D-hatD-DA}, \eqref{rot-D-hatD-DA} and \eqref{HOMFLY-D-hatD-DA}, we get \begin{eqnarray} \label{eq-Jaeger-D-hatD-DA-DB-u} && \sum_{\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varrho \in \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture})} \sum_{\varsigma \in \Sigma(\widehat{D}_\varrho)} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{\varrho,\varsigma})} [\widehat{D}_\varrho,\varsigma] R(\widehat{D}_{\varrho,\varsigma})(q,a) \nonumber \\ && + q \sum_{\varrho \in \mathcal{O}(D^A;\setlength{\unitlength}{.75pt}
\begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture})} \sum_{\varsigma \in \Sigma(D^A_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^A_{\varrho,\varsigma})} [D^A_\varrho,\varsigma] R(D^A_{\varrho,\varsigma})(q,a). \nonumber \end{eqnarray} One can similarly deduce that \begin{eqnarray} \label{eq-Jaeger-D-hatD-DA-DB-r} && \sum_{\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(-10,20){\vector(1,-1){10}} \put(0,10){\vector(1,-1){10}} \put(4,8){\tiny{$v$}} \end{picture})} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varrho \in \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-10,20){\line(1,-1){8}} \put(2,8){\vector(1,-1){8}} \end{picture})} \sum_{\varsigma \in \Sigma(\widehat{D}_\varrho)} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{\varrho,\varsigma})} [\widehat{D}_\varrho,\varsigma] R(\widehat{D}_{\varrho,\varsigma})(q,a) \nonumber \\ && + q^{-1} \sum_{\varrho \in \mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(1,1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture})} \sum_{\varsigma \in \Sigma(D^B_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^B_{\varrho,\varsigma})} [D^B_\varrho,\varsigma] R(D^B_{\varrho,\varsigma})(q,a), \nonumber \end{eqnarray} \begin{eqnarray} \label{eq-Jaeger-D-hatD-DA-DB-d} && \sum_{\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}}
\put(0,10){\vector(1,-1){10}} \put(4,8){\tiny{$v$}} \end{picture})} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varrho \in \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){20}} \put(-10,20){\line(1,-1){8}} \put(2,8){\vector(1,-1){8}} \end{picture})} \sum_{\varsigma \in \Sigma(\widehat{D}_\varrho)} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{\varrho,\varsigma})} [\widehat{D}_\varrho,\varsigma] R(\widehat{D}_{\varrho,\varsigma})(q,a) \nonumber \\ && + q \sum_{\varrho \in \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(-1,-1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture})} \sum_{\varsigma \in \Sigma(D^A_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^A_{\varrho,\varsigma})} [D^A_\varrho,\varsigma] R(D^A_{\varrho,\varsigma})(q,a), \nonumber \end{eqnarray} \begin{eqnarray} \label{eq-Jaeger-D-hatD-DA-DB-l} && \sum_{\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){10}} \put(0,10){\vector(-1,-1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varrho \in \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture})} \sum_{\varsigma \in \Sigma(\widehat{D}_\varrho)} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{\varrho,\varsigma})} [\widehat{D}_\varrho,\varsigma] R(\widehat{D}_{\varrho,\varsigma})(q,a) \nonumber \\ && + q^{-1} \sum_{\varrho \in
\mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(-10,0){\vector(-1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture})} \sum_{\varsigma \in \Sigma(D^B_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^B_{\varrho,\varsigma})} [D^B_\varrho,\varsigma] R(D^B_{\varrho,\varsigma})(q,a). \nonumber \end{eqnarray} Now we consider $\mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})$. There are obvious bijections \begin{eqnarray*} \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture}) & \xrightarrow{\varphi} & \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(-1,-1){10}} \put(0,10){\vector(1,1){10}} \put(-10,20){\vector(1,-1){8}} \put(10,0){\vector(-1,1){8}} \end{picture}), \\ \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture}) & \xrightarrow{\psi^A} & \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(-1,-1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}), \\ \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}}
\put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture}) & \xrightarrow{\psi^B} & \mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(1,1){0}} \put(-10,0){\vector(-1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}) \end{eqnarray*} that preserve the orientations of corresponding edges. Given a $\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})$, there are partitions \begin{eqnarray} \label{partition-D-out} \Sigma(D_\varrho) & = & \Sigma^R(D_\varrho) \sqcup \Sigma^L(D_\varrho), \\ \label{partition-hatD-out} \Sigma(\widehat{D}_{\varphi(\varrho)}) & = & \Sigma^A(\widehat{D}_{\varphi(\varrho)}) \sqcup \Sigma^B(\widehat{D}_{\varphi(\varrho)}) \end{eqnarray} according to what local resolution is applied to $v$ and the corresponding crossing in $\widehat{D}$. There are bijections \begin{eqnarray*} \Sigma^R(D_\varrho) & \xrightarrow{\varphi_\varrho^A} & \Sigma^A(\widehat{D}_{\varphi(\varrho)}), \\ \Sigma^L(D_\varrho) & \xrightarrow{\varphi_\varrho^B} & \Sigma^B(\widehat{D}_{\varphi(\varrho)}), \\ \Sigma^R(D_\varrho) & \xrightarrow{\psi_\varrho^A} & \Sigma(D_{\psi^A(\varrho)}^A), \\ \Sigma^L(D_\varrho) & \xrightarrow{\psi_\varrho^B} & \Sigma(D_{\psi^B(\varrho)}^B) \end{eqnarray*} such that the corresponding resolutions are identical outside the parts shown in Figure \ref{D-hatD-DA-DB-fig}. For a $\varsigma \in \Sigma^R(D_\varrho)$, it is easy to see that $D_{\varrho,\varsigma} = \widehat{D}_{\varphi(\varrho), \varphi_\varrho^A(\varsigma)} = D_{\psi^A(\varrho), \psi_\varrho^A(\varsigma)}^A$.
So \begin{eqnarray} \label{eq-rot-D-hatD-DA-out} \mathrm{rot}(D_{\varrho,\varsigma}) & = & \mathrm{rot}(\widehat{D}_{\varphi(\varrho), \varphi_\varrho^A(\varsigma)}) = \mathrm{rot}(D_{\psi^A(\varrho), \psi_\varrho^A(\varsigma)}^A), \\ \label{eq-HOMFLY-D-hatD-DA-out} R(D_{\varrho,\varsigma}) & = & R(\widehat{D}_{\varphi(\varrho), \varphi_\varrho^A(\varsigma)}) = R(D_{\psi^A(\varrho), \psi_\varrho^A(\varsigma)}^A). \end{eqnarray} One can also easily check that the weights satisfy \begin{equation}\label{eq-weight-D-hatD-DA-out} [D_{\varrho},\varsigma] = \frac{q^{-1}}{q-q^{-1}}[\widehat{D}_{\varphi(\varrho)}, \varphi_\varrho^A(\varsigma)] = q^{-1}[D_{\psi^A(\varrho)}^A, \psi_\varrho^A(\varsigma)]. \end{equation} So \begin{equation}\label{eq-weight-D-hatD-DA-out-2} [D_{\varrho},\varsigma] = -[\widehat{D}_{\varphi(\varrho)}, \varphi_\varrho^A(\varsigma)] +q[D_{\psi^A(\varrho)}^A, \psi_\varrho^A(\varsigma)]. \end{equation} Combining equations \eqref{eq-rot-D-hatD-DA-out}, \eqref{eq-HOMFLY-D-hatD-DA-out} and \eqref{eq-weight-D-hatD-DA-out-2}, we get \begin{eqnarray} \label{eq-sum-D-hatD-DA-out} && \sum_{\varsigma \in \Sigma^R(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varsigma \in \Sigma^A(\widehat{D}_{\varphi(\varrho)})} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{{\varphi(\varrho)},\varsigma})} [\widehat{D}_{\varphi(\varrho)},\varsigma] R(\widehat{D}_{\varphi(\varrho),\varsigma})(q,a) \nonumber \\ && + q \sum_{\varsigma \in \Sigma(D^A_{\psi^A(\varrho)})} (a^{-1}q)^{\mathrm{rot}(D^A_{\psi^A(\varrho),\varsigma})} [D^A_{\psi^A(\varrho)},\varsigma] R(D^A_{\psi^A(\varrho),\varsigma})(q,a). 
\nonumber \end{eqnarray} Similarly, one gets \begin{eqnarray} \label{eq-sum-D-hatD-DB-out} && \sum_{\varsigma \in \Sigma^L(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varsigma \in \Sigma^B(\widehat{D}_{\varphi(\varrho)})} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{\varphi(\varrho),\varsigma})} [\widehat{D}_{\varphi(\varrho)},\varsigma] R(\widehat{D}_{\varphi(\varrho),\varsigma})(q,a) \nonumber \\ && + q^{-1} \sum_{\varsigma \in \Sigma(D^B_{\psi^B(\varrho)})} (a^{-1}q)^{\mathrm{rot}(D^B_{\psi^B(\varrho),\varsigma})} [D^B_{\psi^B(\varrho)},\varsigma] R(D^B_{\psi^B(\varrho),\varsigma})(q,a). \nonumber \end{eqnarray} Equations \eqref{eq-sum-D-hatD-DA-out} and \eqref{eq-sum-D-hatD-DB-out} imply that \begin{eqnarray} \label{eq-Jaeger-D-hatD-DA-DB-out} && \sum_{\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(0,10){\vector(-1,-1){10}} \put(-10,20){\vector(1,-1){10}} \put(10,0){\vector(-1,1){10}} \put(4,8){\tiny{$v$}} \end{picture})} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & -\sum_{\varrho \in \mathcal{O}(\widehat{D};\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(-1,-1){10}} \put(0,10){\vector(1,1){10}} \put(-10,20){\vector(1,-1){8}} \put(10,0){\vector(-1,1){8}} \end{picture})} \sum_{\varsigma \in \Sigma(\widehat{D}_\varrho)} (a^{-1}q)^{\mathrm{rot}(\widehat{D}_{\varrho,\varsigma})} [\widehat{D}_\varrho,\varsigma] R(\widehat{D}_{\varrho,\varsigma})(q,a) \nonumber \\ && + q \sum_{\varrho \in \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(-1,-1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture})}
\sum_{\varsigma \in \Sigma(D^A_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^A_{\varrho,\varsigma})} [D^A_\varrho,\varsigma] R(D^A_{\varrho,\varsigma})(q,a) \nonumber \\ && +q^{-1} \sum_{\varrho \in \mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(1,1){0}} \put(-10,0){\vector(-1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture})} \sum_{\varsigma \in \Sigma(D^B_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^B_{\varrho,\varsigma})} [D^B_\varrho,\varsigma] R(D^B_{\varrho,\varsigma})(q,a)\nonumber. \end{eqnarray} A similar argument shows that \begin{eqnarray} \label{eq-Jaeger-D-hatD-DA-DB-in} && \sum_{\varrho \in \mathcal{O}(D;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,20){\vector(-1,-1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(0,10){\vector(1,-1){10}} \put(4,8){\tiny{$v$}} \end{picture})} \sum_{\varsigma \in \Sigma(D_\varrho)} (a^{-1}q)^{\mathrm{rot}(D_{\varrho,\varsigma})} [D_\varrho,\varsigma] R(D_{\varrho,\varsigma})(q,a)\\ & = & q \sum_{\varrho \in \mathcal{O}(D^A;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture})} \sum_{\varsigma \in \Sigma(D^A_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^A_{\varrho,\varsigma})} [D^A_\varrho,\varsigma] R(D^A_{\varrho,\varsigma})(q,a) \nonumber \\ && +q^{-1} \sum_{\varrho \in \mathcal{O}(D^B;\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture})} \sum_{\varsigma \in \Sigma(D^B_\varrho)} (a^{-1}q)^{\mathrm{rot}(D^B_{\varrho,\varsigma})} [D^B_\varrho,\varsigma] R(D^B_{\varrho,\varsigma})(q,a)\nonumber.
\end{eqnarray} From partitions \eqref{O-D-Subsets}, \eqref{O-hatD-Subsets}, \eqref{O-DA-Subsets} and \eqref{O-DB-Subsets}, we know that equations \eqref{eq-Jaeger-D-hatD-DA-DB-u}, \eqref{eq-Jaeger-D-hatD-DA-DB-r}, \eqref{eq-Jaeger-D-hatD-DA-DB-d}, \eqref{eq-Jaeger-D-hatD-DA-DB-l}, \eqref{eq-Jaeger-D-hatD-DA-DB-out} and \eqref{eq-Jaeger-D-hatD-DA-DB-in} imply that \eqref{eq-Jaeger-D-hatD-DA-DB} is true. This proves that \eqref{eq-Jaeger-formula-graph} is true for $D$. So we have completed the induction and proved \eqref{eq-Jaeger-formula-graph}. Plugging $a=q^N$ into equation \eqref{eq-Jaeger-formula-graph}, we get \eqref{eq-Jaeger-formula-N-graph}. \end{proof} \section{Color and Orientation in the $\mathfrak{sl}(N)$ MOY Polynomial} \label{sec-MOY} In Section \ref{sec-Jaeger}, we only discussed a very special case of the MOY graph polynomial. In this section, we review the $\mathfrak{sl}(N)$ MOY polynomial in its full generality and prove that it is invariant under certain changes of color and orientation. In fact, such invariance holds for the colored $\mathfrak{sl}(N)$ homology too. \subsection{The $\mathfrak{sl}(N)$ MOY graph polynomial} In this subsection, we review the $\mathfrak{sl}(N)$ MOY graph polynomial defined in \cite{MOY}. Our notation and normalization are slightly different from those used in \cite{MOY}. \begin{figure}\label{fig-MOY-vertex} \end{figure} \begin{definition}\label{def-MOY} A MOY coloring of an oriented trivalent graph is a function from the set of edges of this graph to the set of non-negative integers such that every vertex of the colored graph is of one of the two types in Figure \ref{fig-MOY-vertex}. A MOY graph is an oriented trivalent graph, equipped with a MOY coloring, that is \textbf{embedded} in the plane.
A knotted MOY graph is an oriented trivalent graph, equipped with a MOY coloring, that is \textbf{immersed} in the plane such that \begin{itemize} \item the set of singularities consists of finitely many transversal double points away from vertices, \item at each of these transversal double points, we specify the upper and lower branches (which makes it a crossing). \end{itemize} \end{definition} Fix a positive integer $N$. Define $\mathcal{N}= \{2k-N+1|k=0,1,\dots, N-1\}$ and denote by $\mathcal{P}(\mathcal{N})$ the power set of $\mathcal{N}$. Let $\Gamma$ be a MOY graph. Denote by $E(\Gamma)$ the set of edges of $\Gamma$, by $V(\Gamma)$ the set of vertices of $\Gamma$ and by $\mathsf{c}:E(\Gamma) \rightarrow \mathbb{Z}_{\geq 0}$ the color function of $\Gamma$. That is, for every edge $e$ of $\Gamma$, $\mathsf{c}(e) \in \mathbb{Z}_{\geq 0}$ is the color of $e$. \begin{definition}\label{MOY-state-def} A state of $\Gamma$ is a function $\varphi: E(\Gamma) \rightarrow \mathcal{P}(\mathcal{N})$ such that \begin{enumerate}[(i)] \item for every edge $e$ of $\Gamma$, $\#\varphi(e) = \mathsf{c}(e)$, \item for every vertex $v$ of $\Gamma$, as depicted in Figure \ref{fig-MOY-vertex}, we have $\varphi(e)=\varphi(e_1) \cup \varphi(e_2)$. \end{enumerate} Note that (i) and (ii) imply that $\varphi(e_1) \cap \varphi(e_2)=\emptyset$. Denote by $\mathcal{S}_N(\Gamma)$ the set of states of $\Gamma$. \end{definition} Define a function $\pi:\mathcal{P}(\mathcal{N}) \times \mathcal{P}(\mathcal{N}) \rightarrow \mathbb{Z}_{\geq 0}$ by \begin{equation}\label{eq-def-pi} \pi (A_1, A_2) = \# \{(a_1,a_2) \in A_1 \times A_2 ~|~ a_1>a_2\} \text{ for } A_1,~A_2 \in \mathcal{P}(\mathcal{N}). \end{equation} Let $\varphi$ be a state of $\Gamma$.
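The counting function $\pi$ of \eqref{eq-def-pi} is straightforward to prototype. The following Python sketch (the names \texttt{pi} and \texttt{labels} are ours, purely illustrative) also checks that, for disjoint $A_1, A_2 \subset \mathcal{N}$, $\pi(A_1,A_2)+\pi(A_2,A_1)=\#A_1\cdot \#A_2$, a fact used later in the local weight computations.

```python
def pi(A1, A2):
    """pi(A1, A2) = #{(a1, a2) in A1 x A2 : a1 > a2}."""
    return sum(1 for a1 in A1 for a2 in A2 if a1 > a2)

def labels(N):
    """The label set {2k - N + 1 : k = 0, ..., N-1} used for sl(N) states."""
    return {2 * k - N + 1 for k in range(N)}

# For disjoint A1, A2, every pair (a1, a2) satisfies exactly one of
# a1 > a2 or a2 > a1, so pi(A1, A2) + pi(A2, A1) = #A1 * #A2.
A1, A2 = {-3, 1}, {-1, 3}
assert A1 | A2 == labels(4) and A1.isdisjoint(A2)
assert pi(A1, A2) + pi(A2, A1) == len(A1) * len(A2)
```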
For a vertex $v$ of $\Gamma$ (as depicted in Figure \ref{fig-MOY-vertex}), the weight of $v$ with respect to $\varphi$ is defined to be \begin{equation}\label{eq-weight-vertex} \mathrm{wt}(v;\varphi) = \frac{\mathsf{c}(e_1)\mathsf{c}(e_2)}{2} - \pi(\varphi(e_1),\varphi(e_2)). \end{equation} Next, replace each edge $e$ of $\Gamma$ by $\mathsf{c}(e)$ parallel edges, assign to each of these new edges a different element of $\varphi(e)$ and, at every vertex, connect each pair of new edges assigned the same element of $\mathcal{N}$. This changes $\Gamma$ into a collection $\mathcal{C}_\varphi$ of embedded oriented circles, each of which is assigned an element of $\mathcal{N}$. By abusing notation, we denote by $\varphi(C)$ the element of $\mathcal{N}$ assigned to $C\in \mathcal{C}_\varphi$. Note that: \begin{itemize} \item There may be intersections between different circles in $\mathcal{C}_\varphi$. But, each circle in $\mathcal{C}_\varphi$ is embedded, that is, it has no self-intersection or self-tangency. \item There may be more than one way to do this. But if we view $\mathcal{C}_\varphi$ as a virtual link and the intersection points between different elements of $\mathcal{C}_\varphi$ are virtual crossings, then the above construction is unique up to purely virtual regular Reidemeister moves. \end{itemize} The rotation number $\mathrm{rot}(\varphi)$ of $\varphi$ is then defined to be \begin{equation}\label{eq-rot-state} \mathrm{rot}(\varphi) = \sum_{C\in \mathcal{C}_\varphi} \varphi(C) \mathrm{rot}(C). \end{equation} Note that the sum $\sum_{C\in \mathcal{C}_\varphi} \mathrm{rot}(C)$ is independent of the choice of $\varphi \in \mathcal{S}_N(\Gamma)$. We call this sum the rotation number of $\Gamma$. That is, \begin{equation}\label{eq-rot-gamma} \mathrm{rot}(\Gamma) := \sum_{C\in \mathcal{C}_\varphi} \mathrm{rot}(C). 
\end{equation} \begin{definition}\label{def-MOY-graph-poly}\cite{MOY} The $\mathfrak{sl}(N)$ MOY graph polynomial of $\Gamma$ is defined to be \begin{equation}\label{MOY-bracket-def} \left\langle \Gamma \right\rangle_N := \begin{cases} \sum_{\varphi \in \mathcal{S}_N(\Gamma)} \left(\prod_{v \in V(\Gamma)} q^{\mathrm{wt}(v;\varphi)}\right) q^{\mathrm{rot}(\varphi)} & \text{if } 0\leq \mathsf{c}(e) \leq N ~\forall ~e \in E(\Gamma), \\ 0 & \text{otherwise}. \end{cases} \end{equation} For a knotted MOY graph $D$, define the $\mathfrak{sl}(N)$ MOY polynomial $\left\langle D \right\rangle_N$ of $D$ by applying the following skein sum at every crossing of $D$. \begin{equation}\label{MOY-skein-general-+} \left\langle \setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(-20,-20){\vector(1,1){40}} \put(20,-20){\line(-1,1){15}} \put(-5,5){\vector(-1,1){15}} \put(-11,15){\tiny{$_m$}} \put(9,15){\tiny{$_n$}} \end{picture} \right\rangle_N = \sum_{k=\max\{0,m-n\}}^{m} (-1)^{m-k} q^{k-m}\left\langle \setlength{\unitlength}{1pt} \begin{picture}(70,60)(-35,30) \put(-15,0){\vector(0,1){20}} \put(-15,20){\vector(0,1){20}} \put(-15,40){\vector(0,1){20}} \put(15,0){\vector(0,1){20}} \put(15,20){\vector(0,1){20}} \put(15,40){\vector(0,1){20}} \put(15,20){\vector(-1,0){30}} \put(-15,40){\vector(1,0){30}} \put(-25,5){\tiny{$_{n}$}} \put(-25,55){\tiny{$_{m}$}} \put(-30,30){\tiny{$_{n+k}$}} \put(-2,15){\tiny{$_{k}$}} \put(-12,43){\tiny{$_{n+k-m}$}} \put(18,5){\tiny{$_{m}$}} \put(18,55){\tiny{$_{n}$}} \put(18,30){\tiny{$_{m-k}$}} \end{picture}\right\rangle_N, \end{equation} \begin{equation}\label{MOY-skein-general--} \left\langle \setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(20,-20){\vector(-1,1){40}} \put(-20,-20){\line(1,1){15}} \put(5,5){\vector(1,1){15}} \put(-11,15){\tiny{$_m$}} \put(9,15){\tiny{$_n$}} \end{picture} \right\rangle_N = \sum_{k=\max\{0,m-n\}}^{m}
(-1)^{k-m} q^{m-k}\left\langle \setlength{\unitlength}{1pt} \begin{picture}(70,60)(-35,30) \put(-15,0){\vector(0,1){20}} \put(-15,20){\vector(0,1){20}} \put(-15,40){\vector(0,1){20}} \put(15,0){\vector(0,1){20}} \put(15,20){\vector(0,1){20}} \put(15,40){\vector(0,1){20}} \put(15,20){\vector(-1,0){30}} \put(-15,40){\vector(1,0){30}} \put(-25,5){\tiny{$_{n}$}} \put(-25,55){\tiny{$_{m}$}} \put(-30,30){\tiny{$_{n+k}$}} \put(-2,15){\tiny{$_{k}$}} \put(-12,43){\tiny{$_{n+k-m}$}} \put(18,5){\tiny{$_{m}$}} \put(18,55){\tiny{$_{n}$}} \put(18,30){\tiny{$_{m-k}$}} \end{picture}\right\rangle_N. \end{equation} \end{definition} \begin{theorem}\cite{MOY} $\left\langle D \right\rangle_N$ is invariant under Reidemeister (II) and (III) moves and changes under Reidemeister (I) moves only by a factor of $\pm q^k$, which depends on the color of the edge involved in the Reidemeister (I) move. \end{theorem} As pointed out in \cite{MOY}, if $D$ is a link diagram colored by positive integers, then $\left\langle D \right\rangle_N$ is the Reshetikhin-Turaev $\mathfrak{sl}(N)$ polynomial of the link colored by corresponding wedge powers of the defining representation of $\mathfrak{sl}(N;\mathbb{C})$. \begin{figure}\label{4-valent-to-MOY-fig} \end{figure} Let $D$ be an oriented knotted $4$-valent graph as defined in Subsection \ref{subsec-Jaeger-graph}. We color all edges of $D$ by $1$ and modify its vertices as in Figure \ref{4-valent-to-MOY-fig}. This gives us a MOY graph, which we identify with $D$. Thus, $\left\langle D\right\rangle_N$ is now defined for any oriented knotted $4$-valent graph $D$.
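As a simple sanity check of the state sum in Definition \ref{def-MOY-graph-poly}: for a single counterclockwise circle colored $1$ there are no vertices, each state assigns one label $2k-N+1 \in \mathcal{N}$ to the circle, and $\mathrm{rot}(\varphi)$ equals that label, so the state sum is $\sum_{k=0}^{N-1} q^{2k-N+1} = \frac{q^N-q^{-N}}{q-q^{-1}}$. A short exact-arithmetic verification of this algebra (our script, purely illustrative):

```python
from fractions import Fraction

def circle_bracket(N, q):
    # State sum for one counterclockwise circle colored 1: no vertices,
    # each state picks one label 2k - N + 1, and rot(phi) equals that label.
    return sum(q ** (2 * k - N + 1) for k in range(N))

def quantum_integer(N, q):
    # [N] = (q^N - q^-N)/(q - q^-1)
    return (q ** N - q ** (-N)) / (q - q ** (-1))

q = Fraction(3, 2)  # any rational q with q != q^-1 gives an exact check
for N in range(1, 8):
    assert circle_bracket(N, q) == quantum_integer(N, q)
```

Since all arithmetic is done with `fractions.Fraction`, the comparison is exact rather than floating-point.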
Moreover, it was established in \cite{MOY} that \begin{equation}\label{MOY-skein-special} \begin{cases} \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\circle{15}} \end{picture}\right\rangle_N = \frac{q^N-q^{-N}}{q-q^{-1}} \\ \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}\right\rangle_N - \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,0){\vector(-1,1){20}} \put(2,12){\vector(1,1){8}} \put(-2,8){\line(-1,-1){8}} \end{picture}\right\rangle_N = (q-q^{-1})\left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}\right\rangle_N \\ \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\line(1,1){12}} \put(-2,12){\line(-1,1){8}} \qbezier(2,12)(10,20)(10,10) \qbezier(2,8)(10,0)(10,10) \end{picture}\right\rangle_N = -q^{-N} \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(15,20)(-10,7) \qbezier(-10,0)(10,10)(-10,20) \end{picture}\right\rangle_N \\ \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(0,10){\vector(1,1){10}} \put(-10,0){\vector(1,1){10}} \put(0,10){\vector(-1,1){10}} \put(10,0){\vector(-1,1){10}} \end{picture}\right\rangle_N = \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(-2,12){\vector(-1,1){8}} \put(2,8){\line(1,-1){8}} \end{picture}\right\rangle_N + q^{-1} \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}\right\rangle_N =
\left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(10,0){\vector(-1,1){20}} \put(2,12){\vector(1,1){8}} \put(-2,8){\line(-1,-1){8}} \end{picture} \right\rangle_N + q \left\langle \setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}\right\rangle_N \end{cases} \end{equation} (Note that our normalization of $\left\langle D\right\rangle_N$ is different from that in \cite{MOY}. Please refer to \cite[Theorem 14.2]{Wu-color} to see how the skein relations in \cite{MOY} translate to our normalization.) Comparing skein relation \eqref{MOY-skein-special} to skein relations \eqref{HOMFLY-skein} and \eqref{HOMFLY-skein-vertex}, one can see that \begin{equation}\label{eq-MOY-HOMFLY} \left\langle D \right\rangle_N = (-1)^m R_N(\overline{D}), \end{equation} where $D$ is an oriented knotted $4$-valent graph, $m$ is the number of crossings in $D$, and $\overline{D}$ is the oriented knotted $4$-valent graph obtained from $D$ by switching the upper- and the lower-branches at every crossing of $D$. \subsection{Reversing the orientation and the color along a simple circuit} In the remainder of this section, we fix a positive integer $N$. Let $\Gamma$ be a MOY graph and $\Delta$ a simple circuit of $\Gamma$. That is, $\Delta$ is a subgraph of $\Gamma$ such that \begin{enumerate}[(i)] \item $\Delta$ is a (piecewise smoothly) embedded circle in $\mathbb{R}^2$; \item the orientations of all edges of $\Delta$ agree with a single orientation of this embedded circle. \end{enumerate} We call the color change $k \leadsto N-k$ a reversal of color. It is easy to see that, if we reverse both the orientation and the color of the edges along $\Delta$, then we get another MOY graph $\Gamma'$. We have the following theorem.
\begin{theorem}\label{thm-oc-reverse} \begin{equation}\label{eq-oc-reverse} \left\langle \Gamma \right\rangle_N = \left\langle \Gamma' \right\rangle_N \end{equation} \end{theorem} \begin{proof} We prove equation \eqref{eq-oc-reverse} using a localized formulation of the state sum \eqref{MOY-bracket-def}. Cut each edge of $\Gamma$ at one point in its interior. This divides $\Gamma$ into a collection of neighborhoods of its vertices, each of which is a vertex with three adjacent half-edges. (See Figure \ref{fig-MOY-vertex-angles}, where $e$, $e_1$ and $e_2$ are the three half-edges.) \begin{figure}\label{fig-MOY-vertex-angles} \end{figure} Let $\varphi \in \mathcal{S}_N(\Gamma)$. For a vertex of $\Gamma$, if it is of the form $v$ in Figure \ref{fig-MOY-vertex-angles}, we denote by $\alpha$ the directed angle from $e_1$ to $e$ and by $\beta$ the directed angle from $e_2$ to $e$. We define \begin{eqnarray} \label{rot-def-local-v} \mathrm{rot}(v;\varphi) & = & \frac{1}{2\pi} \int_{e}\kappa ds \cdot \sum \varphi(e) +\frac{1}{2\pi}\left(\alpha + \int_{e_1}\kappa ds\right) \cdot \sum \varphi(e_1) \\ && + \frac{1}{2\pi}\left(\beta + \int_{e_2}\kappa ds\right) \cdot \sum \varphi(e_2), \nonumber \end{eqnarray} where $\kappa$ is the signed curvature of a plane curve and $\sum A:=\sum_{a\in A} a$ for a subset $A$ of $\mathcal{N}= \{2k-N+1|k=0,1,\dots, N-1\}$. If the vertex is of the form $\hat{v}$ in Figure \ref{fig-MOY-vertex-angles}, we denote by $\hat{\alpha}$ the directed angle from $e$ to $e_1$ and by $\hat{\beta}$ the directed angle from $e$ to $e_2$. We define \begin{eqnarray} \label{rot-def-local-v-prime} \mathrm{rot}(\hat{v};\varphi) & = & \frac{1}{2\pi}\int_{e}\kappa ds \cdot \sum \varphi(e) + \frac{1}{2\pi} \left(\hat{\alpha}+\int_{e_1}\kappa ds\right) \cdot \sum \varphi(e_1) \\ && + \frac{1}{2\pi}\left(\hat{\beta}+\int_{e_2}\kappa ds\right) \cdot \sum \varphi(e_2) .
\nonumber \end{eqnarray} Using the Gauss-Bonnet Theorem, one can easily check that \begin{equation} \label{eq-rot-sum} \mathrm{rot}(\varphi) = \sum_{v \in V(\Gamma)} \mathrm{rot}(v;\varphi). \end{equation} So, by Definition \ref{def-MOY-graph-poly}, we have \begin{equation} \label{eq-MOY-local} \left\langle \Gamma \right\rangle_N = \sum_{\varphi \in \mathcal{S}_N(\Gamma)}\prod_{v \in V(\Gamma)} q^{\mathrm{wt}(v;\varphi)+\mathrm{rot}(v;\varphi)}. \end{equation} Since $\Gamma'$ is obtained from $\Gamma$ by reversing the orientation and the color of the edges along $\Delta$, there are natural bijections between $V(\Gamma)$ and $V(\Gamma')$ and between $E(\Gamma)$ and $E(\Gamma')$. Basically, every vertex corresponds to itself and every edge corresponds to itself (with reversed color and orientation if the edge belongs to $\Delta$). For a vertex $v$ of $\Gamma$, we denote by $v'$ the vertex of $\Gamma'$ corresponding to $v$. For an edge $e$ of $\Gamma$, we denote by $e'$ the edge of $\Gamma'$ corresponding to $e$. Given a $\varphi \in \mathcal{S}_N(\Gamma)$, we define $\varphi': E(\Gamma') \rightarrow \mathcal{P}(\mathcal{N})$ by \begin{equation}\label{eq-varphi-prime-def} \varphi'(e') = \begin{cases} \varphi(e) & \text{if } e \notin E(\Delta); \\ \mathcal{N} \setminus \varphi(e) & \text{if } e \in E(\Delta). \end{cases} \end{equation} It is easy to see that $\varphi' \in \mathcal{S}_N(\Gamma')$ and that $\varphi \mapsto \varphi'$ is a bijection from $\mathcal{S}_N(\Gamma)$ to $\mathcal{S}_N(\Gamma')$. We claim that, for all $v \in V(\Gamma)$ and $\varphi \in \mathcal{S}_N(\Gamma)$, \begin{equation}\label{eq-local-state-match} \mathrm{wt}(v;\varphi)+ \mathrm{rot}(v;\varphi) = \mathrm{wt}(v';\varphi') + \mathrm{rot}(v';\varphi'). \end{equation} From equation \eqref{eq-MOY-local}, one can see that equation \eqref{eq-local-state-match} implies Theorem \ref{thm-oc-reverse}.
To prove equation \eqref{eq-local-state-match}, we need to consider how the change $\Gamma \leadsto \Gamma'$ affects the vertex $v$. If $v$ is not a vertex of $\Delta$, then none of the three edges incident to $v$ is changed. So equation \eqref{eq-local-state-match} is trivially true. If $v$ is a vertex of $\Delta$, then exactly two edges incident to $v$ are changed, and one of these changed edges must be the edge $e$ in Figure \ref{fig-MOY-vertex-angles} (for $v$ in either form). So, counting the choices of the form of $v$ and the choices of the other changed edge, there are four possible ways to change $v$ if $v$ is a vertex of $\Delta$. (See Figure \ref{rotation-numbers-oc-reverse-index-fig} below.) The proofs of \eqref{eq-local-state-match} in these four cases are very similar. So we only give the details for the case in Figure \ref{fig-MOY-vertex-change} and leave the other cases to the reader. \begin{figure}\label{fig-MOY-vertex-change} \end{figure} First, let us consider $\mathrm{wt}(v;\varphi)$ and $\mathrm{wt}(v';\varphi')$. \begin{eqnarray*} \mathrm{wt}(v';\varphi') & = & \frac{n(N-m-n)}{2} -\pi(\varphi'(e_2'),\varphi'(e')) \\ & = & \frac{n(N-m-n)}{2} - (n(N-m-n) - \pi(\varphi'(e'),\varphi'(e_2'))) \\ & = & \pi(\mathcal{N}\setminus\varphi(e),\varphi(e_2)) - \frac{n(N-m-n)}{2}. \end{eqnarray*} Note that $\pi(\varphi(e_1),\varphi(e_2)) + \pi(\mathcal{N}\setminus\varphi(e),\varphi(e_2)) = \pi(\mathcal{N}\setminus\varphi(e_2),\varphi(e_2))$. The above implies \begin{eqnarray*} && \mathrm{wt}(v';\varphi') - \mathrm{wt}(v;\varphi) \\ & = & \pi(\mathcal{N}\setminus\varphi(e),\varphi(e_2)) - \frac{n(N-m-n)}{2} - \frac{mn}{2} + \pi(\varphi(e_1),\varphi(e_2)) \\ & = & \pi(\mathcal{N}\setminus\varphi(e_2),\varphi(e_2)) - \frac{n(N-n)}{2}. \end{eqnarray*} Write $\varphi(e_2) =\{j_1,\dots,j_n\} \subset \mathcal{N}$, where $j_1<j_2<\cdots <j_n$.
Then \[ \pi(\mathcal{N}\setminus\varphi(e_2),\varphi(e_2)) = \sum_{l=1}^n \left[\frac{1}{2}(N-1-j_l)-(n-l)\right] = \frac{n(N-n)}{2} - \frac{1}{2}\sum \varphi(e_2). \] Altogether, we get \begin{equation}\label{eq-weight-change} \mathrm{wt}(v';\varphi') = \mathrm{wt}(v;\varphi) - \frac{1}{2}\sum \varphi(e_2). \end{equation} Now we compare $\mathrm{rot}(v;\varphi)$ to $\mathrm{rot}(v';\varphi')$. As before, denote by $\alpha$ the directed angle from $e_1$ to $e$ and by $\beta$ the directed angle from $e_2$ to $e$. Also denote by $\gamma$ the directed angle from $e_2'$ to $e_1'$. Note that the directed angle from $e'$ to $e_1'$ is $-\alpha$. Since $\sum\mathcal{N}=0$, we have $\sum (\mathcal{N} \setminus A) = - \sum A$ for any $A\subset \mathcal{N}$. Moreover, note that reversing the orientation of a plane curve changes the sign of its curvature $\kappa$. \begin{eqnarray*} \mathrm{rot}(v';\varphi') & = & \frac{1}{2\pi} \int_{e_1'}\kappa ds \cdot \sum \varphi'(e_1') +\frac{1}{2\pi}\left(-\alpha + \int_{e'}\kappa ds\right) \cdot \sum \varphi'(e') \\ && + \frac{1}{2\pi}\left(\gamma + \int_{e_2'}\kappa ds\right) \cdot \sum \varphi'(e_2') \\ & = & \frac{1}{2\pi} \int_{e_1}\kappa ds \cdot \sum \varphi(e_1) +\frac{1}{2\pi}\left(\alpha + \int_{e}\kappa ds\right) \cdot \sum \varphi(e) \\ && + \frac{1}{2\pi}\left(\gamma + \int_{e_2}\kappa ds\right) \cdot \sum \varphi(e_2) \\ & = & \frac{1}{2\pi} \left( \alpha+ \int_{e_1}\kappa ds\right) \cdot \sum \varphi(e_1) +\frac{1}{2\pi}\int_{e}\kappa ds \cdot \sum \varphi(e) \\ && + \frac{1}{2\pi}\left(\alpha+\gamma + \int_{e_2}\kappa ds\right) \cdot \sum \varphi(e_2) \\ & = & \mathrm{rot}(v;\varphi) + \frac{\alpha-\beta+\gamma}{2\pi} \cdot \sum \varphi(e_2). \end{eqnarray*} Note that $\alpha-\beta+\gamma =\pi$. The above shows that \begin{equation}\label{rotation-local-change} \mathrm{rot}(v';\varphi') = \mathrm{rot}(v;\varphi) + \frac{1}{2} \sum \varphi(e_2).
\end{equation} Equation \eqref{eq-local-state-match} follows easily from equations \eqref{eq-weight-change} and \eqref{rotation-local-change}. \end{proof} \subsection{The colored $\mathfrak{sl}(N)$ homology} Theorem \ref{thm-oc-reverse} is also true for the colored $\mathfrak{sl}(N)$ homology for MOY graphs defined in \cite{Wu-color}. More precisely, reversing the orientation and the color along a simple circuit in a MOY graph does not change the homotopy type of the matrix factorization associated to this MOY graph. To prove this, we first recall some basic properties of the matrix factorizations associated to MOY graphs. We denote by $C_N(\Gamma)$ the $\mathbb{Z}_2\oplus\mathbb{Z}$-graded matrix factorization associated to a MOY graph $\Gamma$, defined in \cite[Definition 5.5]{Wu-color}, and by $\hat{C}_N(D)$ the $\mathbb{Z}_2\oplus\mathbb{Z}\oplus\mathbb{Z}$-graded unnormalized chain complex associated to a knotted MOY graph $D$, defined in \cite[Definitions 11.4 and 11.16]{Wu-color}. Recall that: \begin{itemize} \item The $\mathbb{Z}_2$-grading of $C_N(\Gamma)$ and $\hat{C}_N(D)$ comes from the definition of matrix factorizations and is trivial on the homology $H_N(\Gamma)$ and $\hat{H}_N(D)$ of $C_N(\Gamma)$ and $\hat{C}_N(D)$. (See \cite[Theorems 1.3 and 14.7]{Wu-color}.) \item The $\mathbb{Z}$-grading of $C_N(\Gamma)$ comes from the polynomial grading of the base ring and is called the quantum grading. The homology $H_N(\Gamma)$ inherits this quantum grading. \item One $\mathbb{Z}$-grading of $\hat{C}_N(D)$ is the quantum grading induced by the quantum grading of matrix factorizations of MOY graphs. The other $\mathbb{Z}$-grading of $\hat{C}_N(D)$ is the homological grading. $\hat{H}_N(D)$ inherits both of these gradings. \end{itemize} Also note that, for a MOY graph $\Gamma$, $C_N(\Gamma)= \hat{C}_N(\Gamma)$.
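The grading shifts recalled below are by quantum integers $[j]$ and quantum binomial coefficients $\qb{j}{k}$, normalized as in Remark \ref{homology-grading-conventions} below. A minimal exact-arithmetic sketch of this normalization (function names are ours, purely illustrative):

```python
from fractions import Fraction

def qint(j, q):
    """Quantum integer [j] = (q^j - q^-j)/(q - q^-1), evaluated exactly at rational q."""
    return (q ** j - q ** (-j)) / (q - q ** (-1))

def qfact(j, q):
    """Quantum factorial [j]! = [1][2]...[j]."""
    r = Fraction(1)
    for i in range(1, j + 1):
        r *= qint(i, q)
    return r

def qbinom(j, k, q):
    """Quantum binomial [j]! / ([k]! [j-k]!)."""
    return qfact(j, q) / (qfact(k, q) * qfact(j - k, q))

q = Fraction(5, 3)  # exact rational evaluation point
# [4 choose 2] = q^4 + q^2 + 2 + q^-2 + q^-4, and the symmetry [j,k] = [j,j-k]
assert qbinom(4, 2, q) == q**4 + q**2 + 2 + q**(-2) + q**(-4)
assert qbinom(5, 2, q) == qbinom(5, 3, q)
```

In this balanced normalization $[j]$ is a Laurent polynomial symmetric in $q \leftrightarrow q^{-1}$, which is why the graded dimensions below are symmetric Laurent polynomials as well.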
\begin{theorem}\cite{Wu-color}\label{thm-MOY-calculus} \begin{enumerate} \item $\hat{C}_N( \bigcirc_m ) \simeq \mathbb{C}\{\qb{N}{m}\}$, where $\bigcirc_m$ is a circle colored by $m$. \item $\hat{C}_N( \setlength{\unitlength}{1pt} \begin{picture}(50,50)(-80,20) \put(-60,10){\vector(0,1){10}} \put(-60,20){\vector(-1,1){20}} \put(-60,20){\vector(1,1){10}} \put(-50,30){\vector(-1,1){10}} \put(-50,30){\vector(1,1){10}} \put(-75,3){\tiny{$i+j+k$}} \put(-55,21){\tiny{$j+k$}} \put(-80,42){\tiny{$i$}} \put(-60,42){\tiny{$j$}} \put(-40,42){\tiny{$k$}} \end{picture}) \simeq \hat{C}_N( \setlength{\unitlength}{1pt} \begin{picture}(50,50)(40,20) \put(60,10){\vector(0,1){10}} \put(60,20){\vector(1,1){20}} \put(60,20){\vector(-1,1){10}} \put(50,30){\vector(1,1){10}} \put(50,30){\vector(-1,1){10}} \put(45,3){\tiny{$i+j+k$}} \put(38,21){\tiny{$i+j$}} \put(80,42){\tiny{$k$}} \put(60,42){\tiny{$j$}} \put(40,42){\tiny{$i$}} \end{picture})$. \item $\hat{C}_N( \setlength{\unitlength}{0.75pt} \begin{picture}(65,80)(-30,35) \put(0,-5){\vector(0,1){15}} \put(0,60){\vector(0,1){15}} \qbezier(0,10)(-10,10)(-10,15) \qbezier(0,60)(-10,60)(-10,55) \put(-10,15){\vector(0,1){40}} \qbezier(0,10)(15,10)(15,20) \qbezier(0,60)(15,60)(15,50) \put(15,20){\vector(0,1){30}} \put(5,65){\tiny{$_{m+n}$}} \put(5,3){\tiny{$_{m+n}$}} \put(17,35){\tiny{$_{n}$}} \put(-22,35){\tiny{$_{m}$}} \end{picture}) \simeq \hat{C}_N(\setlength{\unitlength}{.75pt} \begin{picture}(55,80)(-20,40) \put(0,0){\vector(0,1){80}} \put(5,75){\tiny{$_{m+n}$}} \end{picture})\{\qb{m+n}{n}\} $.
\item $\hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(60,80)(-30,40) \put(0,0){\vector(0,1){30}} \put(0,30){\vector(0,1){20}} \put(0,50){\vector(0,1){30}} \put(-1,40){\line(1,0){2}} \qbezier(0,30)(25,20)(25,30) \qbezier(0,50)(25,60)(25,50) \put(25,50){\vector(0,-1){20}} \put(5,75){\tiny{$_{m}$}} \put(5,5){\tiny{$_{m}$}} \put(-30,38){\tiny{$_{m+n}$}} \put(14,60){\tiny{$_{n}$}} \end{picture}) \simeq \hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(40,80)(-20,40) \put(0,0){\vector(0,1){80}} \put(5,75){\tiny{$_{m}$}} \end{picture})\{\qb{N-m}{n}\}$. \item $\hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(80,60)(-180,30) \put(-170,0){\vector(1,1){20}} \put(-150,20){\vector(1,0){20}} \put(-130,20){\vector(0,1){20}} \put(-130,20){\vector(1,-1){20}} \put(-130,40){\vector(-1,0){20}} \put(-150,40){\vector(0,-1){20}} \put(-150,40){\vector(-1,1){20}} \put(-110,60){\vector(-1,-1){20}} \put(-175,0){\tiny{$_1$}} \put(-175,55){\tiny{$_1$}} \put(-127,30){\tiny{$_1$}} \put(-108,0){\tiny{$_m$}} \put(-108,55){\tiny{$_m$}} \put(-160,30){\tiny{$_m$}} \put(-150,45){\tiny{$_{m+1}$}} \put(-150,13){\tiny{$_{m+1}$}} \end{picture}) \simeq \hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(60,60)(-30,30) \put(-20,0){\vector(0,1){60}} \put(20,60){\vector(0,-1){60}} \put(-25,30){\tiny{$_1$}} \put(22,30){\tiny{$_m$}} \end{picture}) \oplus \hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(60,60)(100,30) \put(110,0){\vector(1,1){20}} \put(130,20){\vector(1,-1){20}} \put(130,40){\vector(0,-1){20}} \put(130,40){\vector(-1,1){20}} \put(150,60){\vector(-1,-1){20}} \put(105,0){\tiny{$_1$}} \put(105,55){\tiny{$_1$}} \put(152,0){\tiny{$_m$}} \put(152,55){\tiny{$_m$}} \put(132,30){\tiny{$_{m-1}$}} \end{picture})\{[N-m-1]\}.$ \item $\hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(95,90)(-180,45)
\put(-160,0){\vector(0,1){25}} \put(-160,65){\vector(0,1){25}} \put(-160,25){\vector(0,1){40}} \put(-160,65){\vector(1,0){40}} \put(-120,25){\vector(0,1){40}} \put(-120,25){\vector(-1,0){40}} \put(-120,0){\vector(0,1){25}} \put(-120,65){\vector(0,1){25}} \put(-167,13){\tiny{$_1$}} \put(-167,72){\tiny{$_l$}} \put(-180,48){\tiny{$_{l+n}$}} \put(-117,13){\tiny{$_{m+l-1}$}} \put(-117,72){\tiny{$_{m}$}} \put(-117,48){\tiny{$_{m-n}$}} \put(-155,32){\tiny{$_{l+n-1}$}} \put(-142,58){\tiny{$_n$}} \end{picture}) \simeq \hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(85,90)(-30,45) \put(-20,0){\vector(0,1){45}} \put(-20,45){\vector(0,1){45}} \put(20,0){\vector(0,1){45}} \put(20,45){\vector(0,1){45}} \put(20,45){\vector(-1,0){40}} \put(-27,20){\tiny{$_1$}} \put(23,20){\tiny{$_{m+l-1}$}} \put(-27,65){\tiny{$_l$}} \put(23,65){\tiny{$_m$}} \put(-5,38){\tiny{$_{l-1}$}} \end{picture}) \{\qb{m-1}{n}\} \oplus \hat{C}_N( \setlength{\unitlength}{.75pt} \begin{picture}(60,90)(110,45) \put(110,0){\vector(2,3){20}} \put(150,0){\vector(-2,3){20}} \put(130,30){\vector(0,1){30}} \put(130,60){\vector(-2,3){20}} \put(130,60){\vector(2,3){20}} \put(117,20){\tiny{$_1$}} \put(140,20){\tiny{$_{m+l-1}$}} \put(117,65){\tiny{$_l$}} \put(140,65){\tiny{$_m$}} \put(133,42){\tiny{$_{m+l}$}} \end{picture}) \{\qb{m-1}{n-1}\}.$ \item $\hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(80,30)(-35,30) \put(-15,0){\vector(0,1){20}} \put(-15,20){\vector(0,1){20}} \put(-15,40){\vector(0,1){20}} \put(15,0){\vector(0,1){20}} \put(15,20){\vector(0,1){20}} \put(15,40){\vector(0,1){20}} \put(15,20){\vector(-1,0){30}} \put(-15,40){\vector(1,0){30}} \put(-25,5){\tiny{$_{n}$}} \put(-25,55){\tiny{$_{m}$}} \put(-30,30){\tiny{$_{n+k}$}} \put(-2,15){\tiny{$_{k}$}} \put(-13,44){\tiny{$_{n+k-m}$}}
\put(18,5){\tiny{$_{m+l}$}} \put(18,55){\tiny{$_{n+l}$}} \put(18,30){\tiny{$_{m+l-k}$}} \end{picture}) \simeq \bigoplus_{j=\max\{m-n,0\}}^m \hat{C}_N( \setlength{\unitlength}{1pt} \begin{picture}(80,40)(-40,30) \put(-15,0){\vector(0,1){20}} \put(-15,20){\vector(0,1){20}} \put(-15,40){\vector(0,1){20}} \put(15,0){\vector(0,1){20}} \put(15,20){\vector(0,1){20}} \put(15,40){\vector(0,1){20}} \put(15,40){\vector(-1,0){30}} \put(-15,20){\vector(1,0){30}} \put(-25,5){\tiny{$_{n}$}} \put(-25,55){\tiny{$_{m}$}} \put(-35,30){\tiny{$_{m-j}$}} \put(-2,45){\tiny{$_{j}$}} \put(-12,15){\tiny{$_{n+j-m}$}} \put(18,5){\tiny{$_{m+l}$}} \put(18,55){\tiny{$_{n+l}$}} \put(18,30){\tiny{$_{n+l+j}$}} \end{picture})\{\qb{l}{k-j}\}$. \end{enumerate} Here, ``$\simeq$" means that there is a homogeneous homotopy equivalence of chain complexes of graded matrix factorizations between the two sides that preserves the $\mathbb{Z}_2$-grading, the quantum grading and the homological grading. The above relations remain true if we reverse the orientation of the MOY graphs on both sides or reverse the orientation of $\mathbb{R}^2$. For a MOY graph $\Gamma$, denote by $H_N^j(\Gamma)$ the homogeneous part of $H_N(\Gamma)$ of quantum degree $j$. Then \[ \sum_j q^j \dim_\mathbb{C} H_N^j(\Gamma) = \left\langle \Gamma \right\rangle_N. \] In other words, the graded dimension of $H_N(\Gamma)$ is equal to $\left\langle \Gamma \right\rangle_N$.
\end{theorem} \begin{theorem}\cite{Wu-color}\label{thm-MOY-knotted-invariance} \begin{enumerate} \item $\hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,20)(-20,20) \put(0,0){\line(0,1){8}} \put(0,12){\vector(0,1){8}} \put(0,20){\vector(1,1){20}} \put(0,20){\vector(-1,1){20}} \put(-20,10){\vector(1,0){40}} \put(-13,35){\tiny{$_m$}} \put(10,35){\tiny{$_l$}} \put(3,2){\tiny{$_{m+l}$}} \put(10,13){\tiny{$_n$}} \end{picture}) \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,20)(-20,20) \put(0,0){\vector(0,1){20}} \put(0,20){\line(1,1){8}} \put(0,20){\line(-1,1){8}} \put(12,32){\vector(1,1){8}} \put(-12,32){\vector(-1,1){8}} \put(-20,30){\vector(1,0){40}} \put(-13,35){\tiny{$_m$}} \put(10,35){\tiny{$_l$}} \put(3,2){\tiny{$_{m+l}$}} \put(12,26){\tiny{$_n$}} \end{picture}) $. We call the difference between the two knotted MOY graphs here a fork sliding. The homotopy equivalence remains true if we reverse the orientation of one or both strands involved, or if the horizontal strand is under the vertex instead of above it. See the full statement in \cite[Theorem 12.1]{Wu-color}. \item For a knotted MOY graph $D$, the homotopy type of $\hat{C}_N(D)$, with its three gradings, is invariant under regular Reidemeister moves. \item $\hat{C}_N(\setlength{\unitlength}{.75pt} \begin{picture}(30,20)(-10,7) \put(-10,0){\line(1,1){12}} \put(-2,12){\line(-1,1){8}} \qbezier(2,12)(10,20)(10,10) \qbezier(2,8)(10,0)(10,10) \put(12,12){\tiny{$m$}} \end{picture}) = \hat{C}_N(\setlength{\unitlength}{.75pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(10,10)(-10,20) \put(3,12){\tiny{$m$}} \end{picture})\left\langle m\right\rangle \| m \| \{q^{-m(N+1-m)}\}$. \item For a knotted MOY graph $D$, define the homology $\hat{H}_N(D)$ of $\hat{C}_N(D)$ as in \cite[Subsection 1.2]{Wu-color}. Then the graded Euler characteristic of $\hat{H}_N(D)$ is equal to $\left\langle D \right\rangle_N$.
\end{enumerate} \end{theorem} \begin{remark}\label{homology-grading-conventions} In the above theorems, \begin{enumerate}[1.] \item ``$\left\langle \ast \right\rangle$'' means shifting the $\mathbb{Z}_2$-grading by $\ast$. (See for example \cite[Subsection 2.3]{Wu-color}.) \item ``$\|\ast\|$'' means shifting the homological grading up by $\ast$. (See for example \cite[Definition 2.33]{Wu-color}.) \item ``$\{F(q)\}$'' means shifting the quantum grading up by $F(q)$. (See for example \cite[Subsection 2.1]{Wu-color}.) \item Our normalization of the quantum integers is \[ [j] := \frac{q^j-q^{-j}}{q-q^{-1}}, \] \[ [j]! := [1] \cdot [2] \cdots [j], \] \[ \qb{j}{k} := \frac{[j]!}{[k]!\cdot [j-k]!}. \] \end{enumerate} \end{remark} From the definition of $\hat{C}_N(D)$ in \cite[Definitions 11.4 and 11.16]{Wu-color}, we have the following simple lemma. \begin{lemma}\cite[Lemma 7.3]{Wu-color-ras}\label{lemma-l-N-crossings} \begin{equation}\label{eq-l-N-+} \hat{C}_N (\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(-20,-20){\vector(1,1){40}} \put(20,-20){\line(-1,1){15}} \put(-5,5){\vector(-1,1){15}} \put(-11,15){\tiny{$_l$}} \put(8,15){\tiny{$_N$}} \end{picture}) \cong \hat{C}_N (\setlength{\unitlength}{.5pt} \begin{picture}(85,45)(-40,45) \put(-20,0){\vector(0,1){45}} \put(-20,45){\vector(0,1){45}} \put(20,0){\vector(0,1){45}} \put(20,45){\vector(0,1){45}} \put(-20,45){\vector(1,0){40}} \put(-35,20){\tiny{$_N$}} \put(25,20){\tiny{$_{l}$}} \put(-32,65){\tiny{$_l$}} \put(25,65){\tiny{$_N$}} \put(-13,38){\tiny{$_{N-l}$}} \end{picture})\|l\|\{q^{-l}\}, \end{equation} \begin{equation}\label{eq-l-N--} \hat{C}_N (\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(20,-20){\vector(-1,1){40}} \put(-20,-20){\line(1,1){15}} \put(5,5){\vector(1,1){15}} \put(-11,15){\tiny{$_l$}} \put(8,15){\tiny{$_N$}} \end{picture}) \cong \hat{C}_N (\setlength{\unitlength}{.5pt} \begin{picture}(85,45)(-40,45)
\put(-20,0){\vector(0,1){45}} \put(-20,45){\vector(0,1){45}} \put(20,0){\vector(0,1){45}} \put(20,45){\vector(0,1){45}} \put(-20,45){\vector(1,0){40}} \put(-35,20){\tiny{$_N$}} \put(25,20){\tiny{$_{l}$}} \put(-32,65){\tiny{$_l$}} \put(25,65){\tiny{$_N$}} \put(-13,38){\tiny{$_{N-l}$}} \end{picture})\|-l\|\{q^{l}\}, \end{equation} Consequently, \begin{equation}\label{eq-l-N--to+} \hat{C}_N (\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(-20,-20){\vector(1,1){40}} \put(20,-20){\line(-1,1){15}} \put(-5,5){\vector(-1,1){15}} \put(-11,15){\tiny{$_l$}} \put(8,15){\tiny{$_N$}} \end{picture}) \cong \hat{C}_N (\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(20,-20){\vector(-1,1){40}} \put(-20,-20){\line(1,1){15}} \put(5,5){\vector(1,1){15}} \put(-11,15){\tiny{$_l$}} \put(8,15){\tiny{$_N$}} \end{picture})\|2l\|\{q^{-2l}\}, \end{equation} \begin{equation}\label{eq-l-N-+to-} \hat{C}_N (\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(20,-20){\vector(-1,1){40}} \put(-20,-20){\line(1,1){15}} \put(5,5){\vector(1,1){15}} \put(-11,15){\tiny{$_l$}} \put(8,15){\tiny{$_N$}} \end{picture}) \cong \hat{C}_N (\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,0) \put(-20,-20){\vector(1,1){40}} \put(20,-20){\line(-1,1){15}} \put(-5,5){\vector(-1,1){15}} \put(-11,15){\tiny{$_l$}} \put(8,15){\tiny{$_N$}} \end{picture})\|-2l\|\{q^{2l}\}. \end{equation} \end{lemma} We will also need the following lemma.
\begin{lemma}\label{lemma-twisted-forks} $~$ \begin{equation}\label{eq-twisted-forks-+} \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n$}} \qbezier(0,10)(-20,20)(0,30) \put(0,30){\vector(2,1){20}} \qbezier(0,10)(20,20)(4,28) \put(15,32){\tiny{$n$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture}) \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){20}} \put(5,7){\tiny{$m+n$}} \put(0,20){\vector(1,1){20}} \put(12,25){\tiny{$n$}} \put(0,20){\vector(-1,1){20}} \put(-15,25){\tiny{$m$}} \end{picture})\{q^{mn}\}, \end{equation} \begin{equation}\label{eq-twisted-forks--} \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n$}} \qbezier(0,10)(20,20)(0,30) \put(0,30){\vector(-2,1){20}} \qbezier(0,10)(-20,20)(-4,28) \put(15,32){\tiny{$n$}} \put(4,32){\vector(2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture}) \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){20}} \put(5,7){\tiny{$m+n$}} \put(0,20){\vector(1,1){20}} \put(12,25){\tiny{$n$}} \put(0,20){\vector(-1,1){20}} \put(-15,25){\tiny{$m$}} \end{picture})\{q^{-mn}\}. \end{equation} The above relations remain true if we reverse the orientation of the knotted MOY graphs on both sides. \end{lemma} \begin{proof} We prove \eqref{eq-twisted-forks-+} only. The proof of \eqref{eq-twisted-forks--} is similar and left to the reader. To prove \eqref{eq-twisted-forks-+}, we induct on $n$. When $n=1$, \eqref{eq-twisted-forks-+} is proved in \cite[Proposition 6.1]{Yonezawa3}. Assume \eqref{eq-twisted-forks-+} is true for $n$.
By Theorems \ref{thm-MOY-calculus}, \ref{thm-MOY-knotted-invariance} and the induction hypothesis, we have \begin{eqnarray*} && \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \qbezier(0,10)(-20,20)(0,30) \put(0,30){\vector(2,1){20}} \qbezier(0,10)(20,20)(4,28) \put(15,32){\tiny{$n+1$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture})\{[n+1]\} \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \put(0,10){\vector(-2,1){10}} \put(-10,25){\line(2,1){10}} \put(-10,25){\vector(2,1){0}} \put(-10,25){\vector(-2,1){0}} \put(-20,18){\tiny{$n$}} \put(-3,18){\tiny{$1$}} \qbezier(-10,15)(0,20)(-10,25) \qbezier(-10,15)(-20,20)(-10,25) \put(0,30){\vector(2,1){20}} \qbezier(0,10)(20,20)(4,28) \put(15,32){\tiny{$n+1$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture}) \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \put(0,10){\vector(-2,1){10}} \put(-10,25){\line(2,1){10}} \put(-10,25){\vector(2,1){0}} \put(-20,18){\tiny{$n$}} \put(-3,20){\tiny{$1$}} \put(-10,15){\vector(2,1){25}} \qbezier(15,27.5)(20,30)(15,37.5) \qbezier(-10,15)(-20,20)(-10,25) \put(0,30){\vector(2,1){20}} \qbezier(0,10)(20,20)(14,23) \put(8,26){\line(-2,1){6}} \put(21,40){\tiny{$n+1$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture}) \\ && \\ & \simeq & \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \put(0,10){\vector(2,1){10}} \put(0,10){\line(-2,1){10}} \put(-10,25){\line(2,1){10}} \put(-10,25){\vector(2,1){0}} \put(-20,18){\tiny{$n$}} \put(7,20){\tiny{$1$}} \qbezier(-10,15)(-20,20)(-10,25)
\put(0,30){\vector(2,1){20}} \qbezier(10,15)(20,20)(12,24) \put(10,15){\vector(0,1){20}} \put(8,26){\line(-2,1){6}} \put(21,40){\tiny{$n+1$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture}) \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \put(0,10){\vector(2,1){10}} \put(0,10){\line(-2,1){10}} \put(-10,25){\line(2,1){10}} \put(-10,25){\vector(2,1){0}} \put(-20,18){\tiny{$n$}} \put(12,20){\tiny{$1$}} \qbezier(-10,15)(-20,20)(-10,25) \put(0,30){\vector(2,1){20}} \qbezier(10,15)(8,26)(2,29) \put(10,15){\vector(0,1){20}} \put(21,40){\tiny{$n+1$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture})\{q^m\} \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \qbezier(0,10)(10,10)(10,15) \put(0,10){\vector(-2,1){10}} \put(-10,25){\line(2,1){10}} \put(10,35){\vector(2,1){0}} \put(-20,18){\tiny{$n$}} \put(12,20){\tiny{$1$}} \qbezier(-10,15)(-20,20)(-10,25) \put(0,30){\vector(2,1){20}} \qbezier(-10,15)(8,26)(2,29) \put(10,15){\vector(0,1){20}} \put(21,40){\tiny{$n+1$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture})\{q^m\} \\ && \\ & \simeq & \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \qbezier(0,10)(10,10)(10,15) \put(0,10){\vector(-2,1){10}} \put(10,35){\vector(2,1){0}} \put(0,25){\tiny{$n$}} \put(12,20){\tiny{$1$}} \qbezier(-10,15)(-10,25)(0,30) \qbezier(-10,15)(-10,35)(-15,37.5) \put(0,30){\vector(2,1){20}} \put(10,15){\vector(0,1){20}} \put(21,40){\tiny{$n+1$}} \put(-15,37.5){\vector(-2,1){5}} \put(-18,32){\tiny{$m$}} \end{picture})\{q^{m(n+1)}\} \simeq \hat{C}_N(\setlength{\unitlength}{1pt}
\begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \put(0,10){\vector(-2,3){20}} \put(10,35){\vector(2,1){0}} \put(0,25){\tiny{$n$}} \put(12,20){\tiny{$1$}} \qbezier(10,15)(-10,25)(0,30) \put(0,30){\vector(2,1){20}} \put(10,15){\vector(0,1){20}} \put(0,10){\vector(2,1){10}} \put(21,40){\tiny{$n+1$}} \put(-13,32){\tiny{$m$}} \end{picture})\{q^{m(n+1)}\} \\ & \simeq & \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){20}} \put(5,7){\tiny{$m+n+1$}} \put(0,20){\vector(1,1){20}} \put(14,30){\tiny{$n+1$}} \put(0,20){\vector(-1,1){20}} \put(-15,25){\tiny{$m$}} \end{picture})\{q^{m(n+1)}[n+1]\}. \end{eqnarray*} \noindent By \cite[Proposition 3.20]{Wu-color}, the above implies that \[ \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){10}} \put(5,3){\tiny{$m+n+1$}} \qbezier(0,10)(-20,20)(0,30) \put(0,30){\vector(2,1){20}} \qbezier(0,10)(20,20)(4,28) \put(15,32){\tiny{$n+1$}} \put(-4,32){\vector(-2,1){16}} \put(-18,32){\tiny{$m$}} \end{picture}) \simeq \hat{C}_N(\setlength{\unitlength}{1pt} \begin{picture}(40,40)(-20,20) \put(0,0){\vector(0,1){20}} \put(5,7){\tiny{$m+n+1$}} \put(0,20){\vector(1,1){20}} \put(14,30){\tiny{$n+1$}} \put(0,20){\vector(-1,1){20}} \put(-15,25){\tiny{$m$}} \end{picture}) \{q^{m(n+1)}\}. \] This completes the induction and proves \eqref{eq-twisted-forks-+}. \end{proof} From \cite[Theorems 1.3 and 14.7]{Wu-color}, we know that the $\mathbb{Z}_2$-grading of $C_N$ and $\hat{C}_N$ is always pure and does not carry any significant information. So we do not keep track of this $\mathbb{Z}_2$-grading in the remainder of this paper. The following is the main theorem of this subsection. \begin{theorem}\label{thm-oc-reverse-homology} Let $\Gamma$ be a MOY graph and $\Delta$ a simple circuit of $\Gamma$.
Denote by $\Gamma'$ the MOY graph obtained from $\Gamma$ by reversing the orientation and the color of edges along $\Delta$. Then, up to an overall shift of the $\mathbb{Z}_2$-grading, we have \begin{equation}\label{eq-oc-reverse-homology} C_N(\Gamma) \simeq C_N(\Gamma'). \end{equation} \end{theorem} \begin{proof} We construct the homotopy equivalence in \eqref{eq-oc-reverse-homology} in three steps. \emph{Step One: Modifying vertices.} Let $v$ be a vertex in $\Delta$. We demonstrate in Figure \ref{oc-reverse-vertex-fig} how to modify $v$ into its corresponding vertex $v'$ in $\Gamma'$ using homogeneous homotopy equivalences. Here, we assume that the half edges $e$ and $e_1$ belong to $\Delta$.\footnote{As mentioned in the proof of Theorem \ref{thm-oc-reverse}, depending on the type of $v$ (splitting or merging) and the choice of the two edges belonging to $\Delta$, there are four possible local configurations of $\Delta$ near $v$. (See Figure \ref{rotation-numbers-oc-reverse-index-fig} below.) Figure \ref{oc-reverse-vertex-fig} covers only one of these four possible local configurations of $\Delta$ near $v$. The other three possibilities are obtained from this one by a reversal of orientation, a horizontal flip, or both. We leave the construction for the other three cases to the reader.} \begin{figure}\label{oc-reverse-vertex-fig} \end{figure} Note that: \begin{enumerate} \item By Theorems \ref{thm-MOY-calculus}, \ref{thm-MOY-knotted-invariance} and Lemmas \ref{lemma-l-N-crossings}, \ref{lemma-twisted-forks}, each change in Figure \ref{oc-reverse-vertex-fig} induces a homogeneous homotopy equivalence. \item The upper right vertex in the last step in Figure \ref{oc-reverse-vertex-fig} is identical to the vertex $v'$ in $\Gamma'$ corresponding to $v$.
\end{enumerate} \emph{Step Two: Modifying edges.} After applying Step One to every vertex along $\Delta$, each edge $e$ along $\Delta$ becomes one of the two configurations in the second row in Figure \ref{oc-reverse-edge-fig}. We further modify these two configurations as in Figure \ref{oc-reverse-edge-fig}. \begin{figure}\label{oc-reverse-edge-fig} \end{figure} Note that: \begin{enumerate} \item By Theorem \ref{thm-MOY-calculus} and Lemma \ref{lemma-l-N-crossings}, each change made to these two configurations in Figure \ref{oc-reverse-edge-fig} induces a homogeneous homotopy equivalence. \item At every crossing, the branch colored by $N$ is on top. \end{enumerate} \emph{Step Three: Removing the unknot.} After applying Step Two to every edge along $\Delta$, we get a knotted MOY graph $D$ consisting of $\Gamma'$ and an unknot colored by $N$ that is everywhere above $\Gamma'$. We can move this unknot away from $\Gamma'$ using regular Reidemeister moves and fork sliding (given in Part (1) of Theorem \ref{thm-MOY-knotted-invariance}) and obtain a MOY graph $\widetilde{\Gamma}$. By Theorem \ref{thm-MOY-knotted-invariance}, these moves induce a homogeneous homotopy equivalence. By Part (1) of Theorem \ref{thm-MOY-calculus}, we know that removing this unknot from $\widetilde{\Gamma}$ induces a homogeneous homotopy equivalence. Putting \textit{Steps One--Three} together, we get a homogeneous homotopy equivalence\footnote{Strictly speaking, there are two notions of homotopy equivalence involved here. That is, homotopy equivalence of matrix factorizations and homotopy equivalence of complexes of matrix factorizations. But it is easy to see that, for $C_N(\Gamma)$ and $C_N(\Gamma')$, these two notions are equivalent.} from $C_N(\Gamma)$ to $C_N(\Gamma')$. It remains to check that this homotopy equivalence preserves the quantum grading.
But, by Theorem \ref{thm-MOY-calculus}, the graded dimensions of the homology of $C_N(\Gamma)$ and $C_N(\Gamma')$ are equal to $\left\langle \Gamma\right\rangle_N$ and $\left\langle \Gamma'\right\rangle_N$, which are equal by Theorem \ref{thm-oc-reverse}. So any homotopy equivalence from $C_N(\Gamma)$ to $C_N(\Gamma')$ must preserve the quantum grading. This completes the proof. \end{proof} Using similar techniques, we can prove that reversing the orientation and the color of a component of a link colored by elements of $\{0,1,\dots,N\}$ only changes the $\mathfrak{sl}(N)$ link homology by a grading shift. Let $L$ be an oriented framed link in $S^3$ colored by elements of $\{0,1,\dots,N\}$. Denote by $\mathcal{K}$ the set of components of $L$ and by $\mathsf{c}: \mathcal{K} \rightarrow \{0,1,\dots,N\}$ the color function of $L$. That is, for any component $K$ of $L$, the color of $K$ is $\mathsf{c}(K) \in \{0,1,\dots,N\}$. Furthermore, for any component $K$ of $L$, denote by $w(K)$ the writhe of $K$ and, for any two components $K, ~K'$ of $L$, denote by $l(K,K')$ the linking number of $K, ~K'$. \begin{theorem}\label{thm-oc-reverse-homology-link} Suppose $K$ is a component of $L$ and the colored framed oriented link $L'$ is obtained from $L$ by reversing the orientation and the color of $K$. Then, up to an overall shift of the $\mathbb{Z}_2$-grading, \begin{equation}\label{eq-oc-reverse-homology-link} \hat{C}_N(L') \simeq \hat{C}_N(L) ~\| s \|~ \{ q^{-s}\}, \end{equation} where \[ s = (N-2\mathsf{c}(K))w(K) - 2\sum_{K' \in \mathcal{K}\setminus \{K\}} \mathsf{c}(K') l(K,K'). \] In particular, \begin{equation}\label{eq-oc-reverse-poly-link} \left\langle L' \right\rangle_N = (-1)^{N w(K)} \cdot q^{-s} \cdot \left\langle L \right\rangle_N. \end{equation} \end{theorem} \begin{figure}\label{oc-reverse-homology-link-fig1} \end{figure} \begin{proof} Suppose the color of $K$ is $m$. 
In a small segment of $K$, create a ``bubble'' as in the first step in Figure \ref{oc-reverse-homology-link-fig1}. Then, using fork sliding (Part (1) of Theorem \ref{thm-MOY-knotted-invariance}) and Reidemeister moves of type (II), we can push the left vertex of this bubble along $K$ until it is back in the same small segment of $K$. This is shown in step two in Figure \ref{oc-reverse-homology-link-fig1}. The last two steps in Figure \ref{oc-reverse-homology-link-fig1} are local and self-explanatory. The end result of all these changes is a link $L_1$ consisting of $L'$ and an extra component $\widetilde{K}$ colored by $N$ that is obtained by slightly pushing $K$ in the direction of its framing. (So $\widetilde{K}$ is isotopic to $K$.) By Theorems \ref{thm-MOY-calculus}, \ref{thm-MOY-knotted-invariance} and Lemma \ref{lemma-l-N-crossings}, each step in Figure \ref{oc-reverse-homology-link-fig1} induces a homogeneous homotopy equivalence that preserves both the quantum grading and the homological grading. So \[ \hat{C}_N(L_1) \simeq \hat{C}_N(L). \] By switching the upper and lower branches at crossings, we can unlink $\widetilde{K}$ from every component of $L'$. From relations \eqref{eq-l-N--to+} and \eqref{eq-l-N-+to-} in Lemma \ref{lemma-l-N-crossings}, we know that unlinking $\widetilde{K}$ from a component $K'$ of $L'$ shifts the homological grading by $- 2 \mathsf{c}(K') l(K,K')$ and the quantum grading by $2 \mathsf{c}(K') l(K,K')$. Note that: \begin{itemize} \item If $K'$ is the component of $L'$ obtained by reversing the orientation and the color of $K$, then $\mathsf{c}(K') = N- \mathsf{c}(K)$ and $l(K,K') = -w(K)$. \item If $K'$ is any other component of $L'$, then $K'$ is also a component of $L$. More precisely, $K' \in \mathcal{K}\setminus \{K\}$.
\end{itemize} Thus, unlinking $\widetilde{K}$ from $L'$ shifts the homological grading by $$2(N-\mathsf{c}(K))w(K) - 2\sum_{K' \in \mathcal{K}\setminus \{K\}} \mathsf{c}(K') l(K,K') = Nw(K) + s$$ and the quantum grading by $$-2(N-\mathsf{c}(K))w(K) + 2\sum_{K' \in \mathcal{K}\setminus \{K\}} \mathsf{c}(K') l(K,K') = -Nw(K) - s.$$ In other words, we have \[ \hat{C}_N(L' \sqcup \widetilde{K}) \simeq \hat{C}_N(L_1) ~\| Nw(K) + s \|~ \{ q^{-Nw(K) - s} \}, \] where $L' \sqcup \widetilde{K}$ is $L'$ plus a copy of $\widetilde{K}$ that is unlinked from $L'$. Next, using \eqref{eq-l-N-+} and \eqref{eq-l-N--} in Lemma \ref{lemma-l-N-crossings}, we can change $\widetilde{K}$ (which is now not linked to $L'$) into an unknot $U$ with Seifert framing (which is not linked to $L'$) and get \[ \hat{C}_N ( U ) \simeq \hat{C}_N (\widetilde{K}) \| -Nw(K) \| \{ q^{Nw(K)}\}. \] Putting the above together, we get \begin{equation}\label{eq-oc-reverse-homology-link-proof-1} \hat{C}_N(L' \sqcup U) \simeq \hat{C}_N(L) ~\| s \|~ \{ q^{-s}\}. \end{equation} Finally, by Part (1) of Theorem \ref{thm-MOY-calculus}, we have \begin{equation}\label{eq-oc-reverse-homology-link-proof-2} \hat{C}_N(L') \simeq \hat{C}_N(L' \sqcup U). \end{equation} Homotopy equivalence \eqref{eq-oc-reverse-homology-link} follows from \eqref{eq-oc-reverse-homology-link-proof-1} and \eqref{eq-oc-reverse-homology-link-proof-2}. \end{proof} For a knotted MOY graph, we also have a notion of simple circuits. For a knotted MOY graph $D$, a subgraph $\Delta$ of $D$ is called a simple circuit if \begin{enumerate}[(i)] \item $\Delta$ is a diagram of a knot; \item the orientations of all edges of $\Delta$ coincide with the same orientation of this knot. \end{enumerate} Combining the tricks used in the proofs of Theorems \ref{thm-oc-reverse-homology} and \ref{thm-oc-reverse-homology-link}, one can prove the following corollary. We leave the proof to the reader.
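As a quick sanity check of Theorem \ref{thm-oc-reverse-homology-link}, equation \eqref{eq-oc-reverse-poly-link} can be verified directly for an $m$-colored framed unknot $K$ with writhe $w$. By the kink relation in Part (3) of Theorem \ref{thm-MOY-knotted-invariance}, each positive kink contributes a factor $(-1)^m q^{-m(N+1-m)}$ to $\left\langle K \right\rangle_N$, and the quantum binomial factors $\qb{N}{m}$ and $\qb{N}{N-m}$ agree, so only signs and $q$-exponents need comparing. The following sketch (all helper names are ours, not from the cited sources) does exactly that:

```python
# Consistency check of eq-oc-reverse-poly-link for an m-colored framed
# unknot K with writhe w.  Each positive kink contributes a factor
# (-1)^m q^{-m(N+1-m)} to <K>_N, so up to the common quantum binomial
# [N choose m] = [N choose N-m], the invariant is a sign times a power of q.

def unknot_sign_and_exponent(N, m, w):
    """(sign, q-exponent) of the writhe-w, m-colored unknot, modulo the
    common factor [N choose m]."""
    return (-1) ** (m * w % 2), -w * m * (N + 1 - m)

def reversal_check(N, m, w):
    """Check <L'>_N = (-1)^{N w} q^{-s} <L>_N with s = (N - 2m) w;
    there are no other components, so the linking-number sum vanishes."""
    s = (N - 2 * m) * w
    sign_L, exp_L = unknot_sign_and_exponent(N, m, w)
    sign_Lp, exp_Lp = unknot_sign_and_exponent(N, N - m, w)  # color reversed
    return sign_Lp == (-1) ** (N * w % 2) * sign_L and exp_Lp == exp_L - s

# The relation holds for every color, writhe, and N in the tested range.
assert all(reversal_check(N, m, w)
           for N in range(1, 8) for m in range(N + 1) for w in range(-4, 5))
```

The sign comparison reduces to the parity identity $(N-m)w \equiv (N+m)w \pmod 2$, and the exponent comparison to $-w(N-m)(m+1) = -wm(N+1-m) - (N-2m)w$, both of which the loop confirms numerically.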
\begin{corollary}\label{cor-oc-reverse-homology-general} Let $D$ be a knotted MOY graph and $\Delta$ a simple circuit of $D$. Denote by $D'$ the knotted MOY graph obtained from $D$ by reversing the orientation and the color of edges along $\Delta$. Then, up to an overall shift of the $\mathbb{Z}_2$-, quantum and homological gradings, $\hat{C}_N(D)$ is homotopic to $\hat{C}_N(D')$. In particular, $\left\langle D \right\rangle_N$ and $\left\langle D' \right\rangle_N$ differ from each other only by a factor of the form $\pm q^k$. \end{corollary} \section{An Explicit $\mathfrak{so}(6)$ Kauffman Homology} Theorem \ref{thm-oc-reverse-homology-link} implies that the $N$-colored $\mathfrak{sl}(2N)$ link homology is essentially an invariant of unoriented links. If $N=1$, this homology is the well-known Khovanov homology \cite{K1}. In this section, we shed some light on the $2$-colored $\mathfrak{sl}(4)$ link homology. Specifically, we use results from the first two sections to verify that, up to normalization, the $2$-colored $\mathfrak{sl}(4)$ Reshetikhin-Turaev link polynomial is equal to the $\mathfrak{so}(6)$ Kauffman polynomial and, therefore, up to normalization, the $2$-colored $\mathfrak{sl}(4)$ link homology categorifies the $\mathfrak{so}(6)$ Kauffman polynomial. We do so by comparing the Jaeger Formula for the $\mathfrak{so}(6)$ KV polynomial (equation \eqref{eq-Jaeger-formula-N-graph} with $N=3$) to the composition product of the MOY polynomial associated to the embedding $\mathfrak{sl}(1)\times\mathfrak{sl}(3)\hookrightarrow \mathfrak{sl}(4)$. \begin{remark}\label{remark-approaches} There is an alternative approach to the above result. Basically, one can apply Corollary \ref{cor-oc-reverse-homology-general} to the $\mathfrak{sl}(4)$ MOY polynomial of MOY resolutions of $2$-colored link diagrams and keep track of the shifting of the quantum grading while doing so.
This would allow one to show that the $2$-colored $\mathfrak{sl}(4)$ Reshetikhin-Turaev link polynomial satisfies the skein relation \eqref{Kauffman-skein} for the $\mathfrak{so}(6)$ Kauffman polynomial. We leave it to the reader to figure out the details of this approach. Since the coincidence of the $\mathfrak{so}(6)$ Jaeger Formula and the $\mathfrak{sl}(1)\times\mathfrak{sl}(3)\hookrightarrow \mathfrak{sl}(4)$ composition product is itself interesting, we choose to use this coincidence in our proof. \end{remark} \subsection{Renormalizing the $N$-colored $\mathfrak{sl}(2N)$ link homology} We start by renormalizing the $N$-colored $\mathfrak{sl}(2N)$ link homology to make it independent of the orientation. \begin{definition}\label{def-2N-homology-renormalized} Let $L$ be an oriented framed link that is colored entirely by $N$. Assume the writhe of $L$ is $w(L)$. We define \begin{eqnarray} \label{eq-renormal-homology} \widetilde{C}_{2N}(L) & = & \hat{C}_{2N}(L) \|-\frac{N}{2}w(L)\| \{q^{\frac{N}{2}w(L)}\}, \\ \label{eq-renormal-polynomial} \widetilde{R}_{2N}(L) & = & (-q)^{\frac{N}{2}w(L)} \left\langle L \right\rangle_{2N}. \end{eqnarray} Denote by $\widetilde{H}_{2N}(L)$ the homology of $\widetilde{C}_{2N}(L)$. Note that, if $N$ is odd, then $\frac{N}{2}w(L) \in \frac{1}{2}\mathbb{Z}$. In this case, $\widetilde{C}_{2N}(L)$ and $\widetilde{H}_{2N}(L)$ are $(\frac{1}{2}\mathbb{Z})\oplus(\frac{1}{2}\mathbb{Z})$-graded. \end{definition} \begin{lemma}\label{lemma-independence-orientation} The homotopy type of $\widetilde{C}_{2N}(L)$, with its quantum and homological gradings, is independent of the orientation of $L$. Consequently, $\widetilde{R}_{2N}(L)$ does not depend on the orientation of $L$. \end{lemma} \begin{proof} This follows easily from Theorem \ref{thm-oc-reverse-homology-link}. \end{proof} \subsection{The composition product} Now we review the composition product established in \cite{Wu-color-MFW}. 
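For the simplest MOY graph, a single counterclockwise circle colored $k$, the composition product \eqref{eq-composition-product} reviewed below can be checked numerically: there are no vertices, $\left\langle \bigcirc_m \right\rangle_N = \qb{N}{m}$ by MOY calculus, and (on our reading of \eqref{eq-rot-gamma}) the rotation number of an $m$-colored counterclockwise circle is $m$, so the formula reduces to the $q$-Vandermonde-type identity $\qb{M+N}{k} = \sum_{j} q^{M(k-j)-Nj}\, \qb{M}{j}\, \qb{N}{k-j}$. The following Python sketch (all helper names are ours) verifies this; Laurent polynomials in $q$ are stored as dictionaries mapping exponents to coefficients:

```python
# Numerical check of the composition product for a k-colored circle:
#   qb(M+N, k) = sum_j q^{M(k-j) - N j} qb(M, j) qb(N, k-j),
# with qb the balanced quantum binomial from Remark
# homology-grading-conventions.  Laurent polynomials are dicts
# {exponent: coefficient}.

def shift(p, a):                     # multiply by q^a
    return {e + a: c for e, c in p.items()}

def add(p1, p2):                     # sum of Laurent polynomials
    out = dict(p1)
    for e, c in p2.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c}

def mul(p1, p2):                     # product of Laurent polynomials
    out = {}
    for e1, c1 in p1.items():
        for e2, c2 in p2.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def qb(j, k):
    """Balanced quantum binomial via the q-Pascal recursion
    [j; k] = q^{-k} [j-1; k] + q^{j-k} [j-1; k-1]."""
    if k < 0 or k > j:
        return {}
    if k in (0, j):
        return {0: 1}
    return add(shift(qb(j - 1, k), -k), shift(qb(j - 1, k - 1), j - k))

def circle_composition_product(M, N, k):
    """Right-hand side of the composition product for a k-colored circle:
    rot of the relabeled circle equals its color, and there are no vertices."""
    out = {}
    for j in range(k + 1):
        term = shift(mul(qb(M, j), qb(N, k - j)), M * (k - j) - N * j)
        out = add(out, term)
    return out

# e.g. qb(2, 1) == {1: 1, -1: 1}, i.e. the quantum integer [2]
for M in range(1, 5):
    for N in range(1, 5):
        for k in range(M + N + 1):
            assert circle_composition_product(M, N, k) == qb(M + N, k)
```

The same dictionaries can be reused to evaluate $\sigma_{M,N}$ shifts for graphs with vertices, but the circle case already exercises the color-dependent exponent $M(k-j)-Nj$.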
\begin{definition}\label{def-MOY-label} Let $\Gamma$ be a MOY graph. Denote by $\mathsf{c}$ its color function. That is, for every edge $e$ of $\Gamma$, the color of $e$ is $\mathsf{c}(e)$. A labeling $\mathsf{f}$ of $\Gamma$ is a MOY coloring of the underlying oriented trivalent graph of $\Gamma$ such that $\mathsf{f}(e)\leq \mathsf{c}(e)$ for every edge $e$ of $\Gamma$. Denote by $\mathcal{L}(\Gamma)$ the set of all labelings of $\Gamma$. For every $\mathsf{f} \in \mathcal{L}(\Gamma)$, denote by $\Gamma_{\mathsf{f}}$ the MOY graph obtained by re-coloring the underlying oriented trivalent graph of $\Gamma$ using $\mathsf{f}$. For every $\mathsf{f} \in \mathcal{L}(\Gamma)$, define a function $\bar{\mathsf{f}}$ on $E(\Gamma)$ by $\bar{\mathsf{f}}(e)= \mathsf{c}(e)- \mathsf{f}(e)$ for every edge $e$ of $\Gamma$. It is easy to see that $\bar{\mathsf{f}}\in \mathcal{L}(\Gamma)$. Let $v$ be a vertex of $\Gamma$ of either type in Figure \ref{fig-MOY-vertex}. (Note that, in either case, $e_1$ is to the left of $e_2$ when one looks in the direction of $e$.) For every $\mathsf{f} \in \mathcal{L}(\Gamma)$, define \[ [v|\Gamma|\mathsf{f}] = \frac{1}{2} (\mathsf{f}(e_1)\bar{\mathsf{f}}(e_2) - \bar{\mathsf{f}}(e_1)\mathsf{f}(e_2)). \] \end{definition} The following is the composition product established in \cite{Wu-color-MFW}. \begin{theorem}\cite[Theorem 1.7]{Wu-color-MFW}\label{THM-composition-product} Let $\Gamma$ be a MOY graph. For positive integers $M$, $N$ and $\mathsf{f} \in \mathcal{L}(\Gamma)$, define \[ \sigma_{M,N}(\Gamma,\mathsf{f}) = M \cdot \mathrm{rot}(\Gamma_{\bar{\mathsf{f}}}) - N \cdot \mathrm{rot}(\Gamma_{\mathsf{f}}) + \sum_{v\in V(\Gamma)} [v|\Gamma|\mathsf{f}], \] where the rotation number $\mathrm{rot}$ is defined in \eqref{eq-rot-gamma}.
Then \begin{equation}\label{eq-composition-product} \left\langle \Gamma \right\rangle_{M+N} = \sum_{\mathsf{f} \in \mathcal{L}(\Gamma)} q^{\sigma_{M,N}(\Gamma,\mathsf{f})} \cdot \left\langle \Gamma_{\mathsf{f}} \right\rangle_M \cdot \left\langle \Gamma_{\bar{\mathsf{f}}} \right\rangle_N. \end{equation} \end{theorem} \begin{remark} The composition product \eqref{eq-composition-product} can be viewed as induced by the embedding $\mathfrak{su}(M)\times\mathfrak{su}(N)\hookrightarrow \mathfrak{su}(M+N)$. The Jaeger Formula \eqref{eq-Jaeger-formula-N} and \eqref{eq-Jaeger-formula-N-graph} can be viewed as induced by the embedding $\mathfrak{su}(N)\hookrightarrow \mathfrak{so}(2N)$. In \cite{Chen-Reshetikhin}, Chen and Reshetikhin presented an extensive study of formulas of the (uncolored) HOMFLY-PT and Kauffman polynomials induced by these and other embeddings. \end{remark} \subsection{The $\mathfrak{so}(6)$ KV polynomial and the $\mathfrak{sl}(4)$ MOY polynomial} In this section, we prove that the $\mathfrak{so}(6)$ KV polynomial of a planar $4$-valent graph is equal to the $\mathfrak{sl}(4)$ MOY polynomial of ``mostly $2$-colored'' MOY graphs. \begin{figure}\label{mostly-2-colored-MOY-local-figure} \end{figure} \begin{definition}\label{def-mostly-2-colored-MOY} We call a MOY graph mostly $2$-colored if all of its edges are colored by $2$ except in local configurations of the two types given in Figure \ref{mostly-2-colored-MOY-local-figure}. Note that all vertices of a mostly $2$-colored MOY graph are contained in such local configurations. \end{definition} \begin{figure}\label{mostly-2-colored-MOY-to-4-valent-figure} \end{figure} The following is the main theorem of this subsection.
\begin{theorem}\label{thm-mostly-2-colored-MOY-to-4-valent} Given a mostly $2$-colored MOY graph $\Gamma$, we remove the $4$-colored edge in every type one local configuration, shrink the square in every type two configuration to a vertex, remove color and orientation, and smooth out all the vertices of valence $2$. (See Figure \ref{mostly-2-colored-MOY-to-4-valent-figure}.) This gives an unoriented $4$-valent graph $G(\Gamma)$ embedded in $\mathbb{R}^2$. Then \begin{equation}\label{eq-mostly-2-colored-MOY-to-4-valent} \left\langle \Gamma \right\rangle_4 = P_6(G(\Gamma)). \end{equation} \end{theorem} Note that the $\mathfrak{so}(6)$ Jaeger formula expresses $P_6(G(\Gamma))$ as a state sum of $\mathfrak{sl}(3)$ MOY polynomials and the $\mathfrak{sl}(1)\times\mathfrak{sl}(3)\hookrightarrow \mathfrak{sl}(4)$ composition product expresses $\left\langle \Gamma \right\rangle_4$ as a state sum of $\mathfrak{sl}(3)$ MOY polynomials. We prove equation \eqref{eq-mostly-2-colored-MOY-to-4-valent} by showing that these two state sums are essentially the same. Several notions of rotation numbers are involved in these two formulas. We need Lemma \ref{lemma-rotation-numbers-oc-reverse} below to track the rotation numbers of MOY graphs. \begin{figure}\label{rotation-numbers-oc-reverse-index-fig} \end{figure} \begin{lemma}\label{lemma-rotation-numbers-oc-reverse} Let $\Gamma$ be a MOY graph and $\Delta$ a simple circuit of $\Gamma$. Denote by $\Gamma'$ the MOY graph obtained from $\Gamma$ by reversing the orientation and the color of edges (with respect to $N$) along $\Delta$. Recall that the rotation number of a MOY graph is defined in equation \eqref{eq-rot-gamma}. We view $\Delta$ as an uncolored oriented circle embedded in $\mathbb{R}^2$ and define $\mathrm{rot}\Delta$ to be the usual rotation number of this circle.
Then \begin{equation}\label{eq-rotation-numbers-oc-reverse} \mathrm{rot} \Gamma' = \mathrm{rot} \Gamma -N \mathrm{rot} \Delta + \sum_v d(v \leadsto v'), \end{equation} where $v$ runs through all vertices of $\Delta$, and $d(v\leadsto v')$ is defined in Figure \ref{rotation-numbers-oc-reverse-index-fig}. \end{lemma} \begin{proof} We prove Lemma \ref{lemma-rotation-numbers-oc-reverse} using a localization of the rotation number similar to that used in the proof of Theorem \ref{thm-oc-reverse}. Cut each edge of $\Gamma$ at one point in its interior. This divides $\Gamma$ into a collection of neighborhoods of its vertices, each of which is a vertex with three adjacent half-edges. (See Figure \ref{fig-MOY-vertex-angles}, where $e$, $e_1$ and $e_2$ are the three half-edges.) For a vertex of $\Gamma$, if it is of the form $v$ in Figure \ref{fig-MOY-vertex-angles}, we denote by $\alpha$ the directed angle from $e_1$ to $e$ and by $\beta$ the directed angle from $e_2$ to $e$. We define \begin{equation}\label{gamma-rot-def-local-v} \mathrm{rot}(v) = \frac{m+n}{2\pi} \int_{e}\kappa ds +\frac{m}{2\pi}\left(\alpha + \int_{e_1}\kappa ds\right) + \frac{n}{2\pi}\left(\beta + \int_{e_2}\kappa ds\right), \end{equation} where $\kappa$ is the signed curvature of a plane curve. If the vertex is of the form $\hat{v}$ in Figure \ref{fig-MOY-vertex-angles}, we denote by $\hat{\alpha}$ the directed angle from $e$ to $e_1$ and by $\hat{\beta}$ the directed angle from $e$ to $e_2$. We define \begin{equation}\label{gamma-rot-def-local-v-prime} \mathrm{rot}(\hat{v}) = \frac{m+n}{2\pi}\int_{e}\kappa ds + \frac{m}{2\pi} \left(\hat{\alpha}+\int_{e_1}\kappa ds\right) + \frac{n}{2\pi}\left(\hat{\beta}+\int_{e_2}\kappa ds\right). \end{equation} By the Gauss-Bonnet Theorem, one can easily see that \begin{equation} \label{eq-gamma-rot-sum} \mathrm{rot}(\Gamma) = \sum_{v \in V(\Gamma)} \mathrm{rot}(v).
\end{equation} \begin{figure}\label{delta-rot-local-fig} \end{figure} For a vertex $v$ of $\Delta$, denote by $e_1$ and $e_2$ the two half-edges incident at $v$ belonging to $\Delta$. Assume that $e_1$ points into $v$, $e_2$ points out of $v$, and the directed angle from $e_1$ to $e_2$ is $\theta$. (See Figure \ref{delta-rot-local-fig}.) Define \begin{equation}\label{eq-delta-rot-local} \mathrm{rot}_\Delta (v) = \frac{1}{2\pi}\left(\int_{e_1}\kappa ds +\theta + \int_{e_2}\kappa ds\right). \end{equation} By the Gauss-Bonnet Theorem, we know $\mathrm{rot} \Delta = \sum_v \mathrm{rot}_\Delta (v)$, where $v$ runs through all vertices of $\Delta$. For a vertex $v$ of $\Gamma$ contained in $\Delta$, denote by $v'$ the vertex of $\Gamma'$ corresponding to $v$. We claim \begin{equation}\label{eq-local-rot-change} \mathrm{rot}(v') = \mathrm{rot}(v)- N \mathrm{rot}_\Delta (v) + d(v \leadsto v'). \end{equation} Clearly, the lemma follows from \eqref{eq-local-rot-change}. To prove \eqref{eq-local-rot-change}, one needs to check that it is true for all four cases listed in Figure \ref{rotation-numbers-oc-reverse-index-fig}. Since the proofs in all four cases are very similar, we only check the first case here and leave the other three to the reader. In the first case, $v$ and $v'$ are depicted in Figure \ref{fig-MOY-vertex-change}. As before, denote by $\alpha$ the directed angle from $e_1$ to $e$, by $\beta$ the directed angle from $e_2$ to $e$ and by $\gamma$ the directed angle from $e_2'$ to $e_1'$.
Then \begin{eqnarray*} \mathrm{rot}(v') & = & \frac{N-m}{2 \pi} \int_{e_1'}\kappa ds + \frac{N-m-n}{2\pi} \left(-\alpha + \int_{e'}\kappa ds\right) + \frac{n}{2\pi} \left(\gamma + \int_{e_2'}\kappa ds \right) \\ & = & \frac{m}{2\pi} \left(\alpha + \int_{e_1}\kappa ds \right) + \frac{n}{2\pi} \left(\beta + \int_{e_2}\kappa ds \right) + \frac{m+n}{2\pi} \left(\int_{e}\kappa ds\right) \\ && - \frac{N}{2\pi} \left( \int_{e_1}\kappa ds + \alpha + \int_{e}\kappa ds\right) + \frac{\alpha + \gamma -\beta}{2\pi} n \\ & = & \mathrm{rot}(v)- N \mathrm{rot}_\Delta (v) + \frac{n}{2}, \end{eqnarray*} where, in the last step, we used the fact that $\alpha + \gamma -\beta = \pi$. This proves \eqref{eq-local-rot-change} in the first case in Figure \ref{rotation-numbers-oc-reverse-index-fig}. \end{proof} Now we are ready to prove Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}. \begin{proof}[Proof of Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}] By the composition product \eqref{eq-composition-product}, we know that \[ \left\langle \Gamma \right\rangle_4 = \sum_{\mathsf{f} \in \mathcal{L}(\Gamma)} q^{\sigma_{1,3}(\Gamma,\mathsf{f})} \cdot \left\langle \Gamma_{\mathsf{f}} \right\rangle_1 \cdot \left\langle \Gamma_{\bar{\mathsf{f}}} \right\rangle_3, \] where \begin{equation}\label{eq-composition-product-1+3-power-label} \sigma_{1,3}(\Gamma,\mathsf{f}) = \mathrm{rot}(\Gamma_{\bar{\mathsf{f}}}) - 3 \cdot \mathrm{rot}(\Gamma_{\mathsf{f}}) + \sum_{v\in V(\Gamma)} [v|\Gamma|\mathsf{f}]. \end{equation} Note that, in order for the product $\left\langle \Gamma_{\mathsf{f}} \right\rangle_1 \cdot \left\langle \Gamma_{\bar{\mathsf{f}}} \right\rangle_3$ to be non-zero, we must have $\mathsf{f}(e)=0,1$ and $0\leq\bar{\mathsf{f}}(e) \leq 3$ for all edges of $\Gamma$. Define \[ \mathcal{L}_{\neq0}(\Gamma) = \{\mathsf{f} \in \mathcal{L}(\Gamma)~|~ \mathsf{f}(e)=0,1,~ 0\leq\bar{\mathsf{f}}(e) \leq 3 ~\forall e \in E(\Gamma)\}.
\] Then \begin{equation}\label{eq-composition-product-1+3} \left\langle \Gamma \right\rangle_4 = \sum_{\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)} q^{\sigma_{1,3}(\Gamma,\mathsf{f})} \cdot \left\langle \Gamma_{\bar{\mathsf{f}}} \right\rangle_3, \end{equation} where we used the fact that $\left\langle \Gamma_{\mathsf{f}} \right\rangle_1=1$ for all $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$. Denote by $E_2(\Gamma)$ the set of edges of $\Gamma$ colored by $2$ and by $E(G(\Gamma))$ the set of edges of $G(\Gamma)$. Then there is a surjective function $g:E_2(\Gamma) \rightarrow E(G(\Gamma))$ such that $g(e)$ is the edge of $G(\Gamma)$ ``containing'' $e$ for every $e\in E_2(\Gamma)$. Note that, for any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$, $\Gamma_{\mathsf{f}}$ is a collection of pairwise disjoint embedded circles colored by $1$ (after erasing edges colored by $0$). All possible intersections of $\Gamma_{\mathsf{f}}$ with type one and type two local configurations (defined in Figure \ref{mostly-2-colored-MOY-local-figure}) are described in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}, where edges belonging to $\Gamma_{\mathsf{f}}$ are traced out by \textcolor{BrickRed}{red} paths. For any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$, we define an edge orientation $\varrho_{\mathsf{f}}$ of $G(\Gamma)$ such that, for all $e \in E_2(\Gamma)$, \[ \varrho_{\mathsf{f}}(g(e)) = \begin{cases} \text{the orientation of } e & \text{if } \mathsf{f}(e) =1, \\ \text{the opposite of the orientation of } e & \text{if } \mathsf{f}(e) =0. \end{cases} \] It is easy to check that $\varrho_{\mathsf{f}}$ is a well-defined balanced edge orientation of $G(\Gamma)$ and that the mapping $\mathsf{f} \mapsto \varrho_{\mathsf{f}}$ is a surjection $\mathcal{L}_{\neq0}(\Gamma) \rightarrow \mathcal{O}(G(\Gamma))$.
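The set $\mathcal{O}(G(\Gamma))$ can be enumerated directly for small graphs. Here is a minimal Python sketch (not part of the original text; the graph encoding and function name are ours), assuming, as is standard for such state sums, that a balanced orientation of a $4$-valent graph is one in which every vertex has in-degree $2$ and out-degree $2$:

```python
from itertools import product

def balanced_orientations(vertices, edges):
    """Enumerate orientations of a 4-valent multigraph in which every
    vertex has out-degree 2 (hence, by 4-valence, also in-degree 2)."""
    results = []
    for flips in product([False, True], repeat=len(edges)):
        out_deg = {v: 0 for v in vertices}
        for (u, v), flip in zip(edges, flips):
            # flip == True reverses the edge (u, v) to (v, u)
            out_deg[v if flip else u] += 1
        if all(out_deg[v] == 2 for v in vertices):
            results.append([(v, u) if flip else (u, v)
                            for (u, v), flip in zip(edges, flips)])
    return results

# Two vertices joined by four parallel edges form a 4-valent multigraph;
# a balanced orientation chooses which two of the four edges leave vertex 0.
print(len(balanced_orientations([0, 1], [(0, 1)] * 4)))  # 6 = C(4,2)
```

For this example the count is $\binom{4}{2}=6$, since balancing at one vertex of a two-vertex multigraph forces balancing at the other.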
Given a labeling $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$, we define a subgraph $\Delta_{\mathsf{f}}$ of $\Gamma$ (and therefore of $\Gamma_{\bar{\mathsf{f}}}$) such that \begin{itemize} \item if $e \in E_2(\Gamma)$, then $e$ is in $\Delta_{\mathsf{f}}$ if and only if $\mathsf{f}(e)=0$; \item edges of $\Delta_{\mathsf{f}}$ of other colors (which are all contained in local configurations of type one or two) are traced out by \textcolor{SkyBlue}{blue} paths in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}. \end{itemize} It is easy to see that $\Delta_{\mathsf{f}}$ is a union of pairwise disjoint simple circuits of $\Gamma$ (and therefore of $\Gamma_{\bar{\mathsf{f}}}$). Reversing the orientation and the color (with respect to $3$) of edges of $\Gamma_{\bar{\mathsf{f}}}$ along $\Delta_{\mathsf{f}}$, we get a MOY graph $\Gamma_{\bar{\mathsf{f}}}'$. By Theorem \ref{thm-oc-reverse}, we have \begin{equation}\label{eq-Gamma-bar-Gamma-bar-prime} \left\langle \Gamma_{\bar{\mathsf{f}}}\right\rangle_3 = \left\langle \Gamma_{\bar{\mathsf{f}}}' \right\rangle_3. \end{equation} In $\Gamma_{\bar{\mathsf{f}}}'$, delete all edges colored by $0$, contract all edges colored by $2$ and smooth out all vertices of valence $2$. This gives a partial resolution $G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma_{\mathsf{f}}}$ of $G(\Gamma)_{\varrho_{\mathsf{f}}}$, where $\varsigma_{\mathsf{f}}$ resolves all vertices of $G(\Gamma)_{\varrho_{\mathsf{f}}}$ except those corresponding to the last case in Figure \ref{local-weights-type-2-fig}. In this last case, we need to further choose the resolution.
Note that, by \cite[Lemma 2.4]{MOY}, we have \begin{equation}\label{eq-MOY-3-3} \left\langle \setlength{\unitlength}{1.75pt} \begin{picture}(20,10)(-10,7) \put(10,20){\vector(-1,-1){5}} \put(-10,0){\vector(1,1){5}} \put(-5,15){\vector(-1,1){5}} \put(5,5){\vector(1,-1){5}} \put(-5,15){\vector(0,-1){10}} \put(5,5){\vector(0,1){10}} \put(-5,5){\vector(1,0){10}} \put(5,15){\vector(-1,0){10}} \put(-4,9){\tiny{$1$}} \put(3,9){\tiny{$1$}} \put(0,6){\tiny{$2$}} \put(0,12){\tiny{$2$}} \put(8,15){\tiny{$1$}} \put(6,0){\tiny{$1$}} \put(-8,18){\tiny{$1$}} \put(-8,0){\tiny{$1$}} \end{picture} \right\rangle_3 = \left\langle \setlength{\unitlength}{1.75pt} \begin{picture}(20,10)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \put(-5,15){\tiny{$1$}} \put(3,3){\tiny{$1$}} \end{picture} \right\rangle_3 + \left\langle \setlength{\unitlength}{1.75pt} \begin{picture}(20,10)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \put(-5,12){\tiny{$1$}} \put(3,6){\tiny{$1$}} \end{picture} \right\rangle_3. \end{equation} For $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma(G(\Gamma)_{\varrho_{\mathsf{f}}})$, we say that $\varsigma$ is compatible with $\mathsf{f}$ if, on all the vertices that are resolved by $\varsigma_{\mathsf{f}}$, $\varsigma$ agrees with $\varsigma_{\mathsf{f}}$. Denote by $\Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$ the set of resolutions of $G(\Gamma)_{\varrho_{\mathsf{f}}}$ that are compatible with $\mathsf{f}$.
Then the mapping $(\mathsf{f}, \varsigma) \mapsto (\varrho_{\mathsf{f}},\varsigma)$ gives a bijection \begin{equation}\label{bijection-labeling-orientation} \{(\mathsf{f}, \varsigma)~|~ \mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma),~ \varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})\} \rightarrow \{(\varrho,\varsigma) ~|~ \varrho \in \mathcal{O}(G(\Gamma)),~ \varsigma \in \Sigma (G(\Gamma)_\varrho)\}. \end{equation} Let $C$ be a local configuration (of type one or type two in Figure \ref{mostly-2-colored-MOY-local-figure}) in $\Gamma$. For $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$, we define two indices $t_{\mathsf{f},\varsigma}(C)$ and $r_{\mathsf{f},\varsigma}(C)$. The values of these two indices are given in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}. It is straightforward to check that \begin{eqnarray} \label{eq-rot-change-Gamma-prime-G} \mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) & = & \mathrm{rot} (\Gamma_{\bar{\mathsf{f}}}') + \sum_C t_{\mathsf{f},\varsigma}(C), \\ \label{eq-rot-change-rot-f-G} \mathrm{rot}(G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) & = & \mathrm{rot} (\Gamma_{\mathsf{f}}) - \mathrm{rot} (\Delta_{\mathsf{f}}) + \sum_C r_{\mathsf{f},\varsigma}(C), \end{eqnarray} where $C$ runs through all local configurations of type one or two in $\Gamma$.
Combining the above and Lemma \ref{lemma-rotation-numbers-oc-reverse} (applied to each simple circuit of $\Delta_{\mathsf{f}}$), we have that, for any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$, \begin{eqnarray} \label{eq-sigma-13-simplified-1} && \sigma_{1,3}(\Gamma,\mathsf{f}) \\ & = & \mathrm{rot}(\Gamma_{\bar{\mathsf{f}}}) - 3 \mathrm{rot}(\Gamma_{\mathsf{f}}) + \sum_{v\in V(\Gamma)} [v|\Gamma|\mathsf{f}] \nonumber \\ & = & \mathrm{rot}(\Gamma_{\bar{\mathsf{f}}}') - 3 (\mathrm{rot}(\Gamma_{\mathsf{f}})- \mathrm{rot} (\Delta_{\mathsf{f}})) + \sum_{v\in V(\Gamma)} ([v|\Gamma|\mathsf{f}] -d(v\leadsto v')) \nonumber \\ & = & -2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) + \sum_C (3r_{\mathsf{f},\varsigma}(C)- t_{\mathsf{f},\varsigma}(C)) + \sum_{v\in V(\Gamma)} ([v|\Gamma|\mathsf{f}]-d(v\leadsto v')) \nonumber \\ & = & -2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) +\sum_C \left( 3r_{\mathsf{f},\varsigma}(C)- t_{\mathsf{f},\varsigma}(C) + \sum_{v\in V(C)} ([v|\Gamma|\mathsf{f}]-d(v\leadsto v')) \right), \nonumber \end{eqnarray} where $C$ runs through all local configurations of type one or two in $\Gamma$, $V(C)$ is the set of vertices of $C$, and we use the convention that $d(v\leadsto v')=0$ if the vertex $v$ is unchanged in $\Gamma_{\bar{\mathsf{f}}} \leadsto \Gamma_{\bar{\mathsf{f}}}'$. The values of $r_{\mathsf{f},\varsigma}(C)$, $t_{\mathsf{f},\varsigma}(C)$, $\sum_{v\in V(C)} [v|\Gamma|\mathsf{f}]$ and $\sum_{v\in V(C)}d(v\leadsto v')$ are recorded in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}.
One can verify case by case that \begin{eqnarray} \label{eq-sigma-13-simplified-2} && 3r_{\mathsf{f},\varsigma}(C)- t_{\mathsf{f},\varsigma}(C) + \sum_{v\in V(C)} ([v|\Gamma|\mathsf{f}]-d(v\leadsto v')) \\ & = & \begin{cases} 1 & \text{if } C \text{ is of type two and } \varsigma \text{ applies } L \\ & \text{to the corresponding vertex in } G(\Gamma)_{\varrho_{\mathsf{f}}}, \\ & \\ -1 & \text{if } C \text{ is of type two and } \varsigma \text{ applies } R \\ & \text{to the corresponding vertex in } G(\Gamma)_{\varrho_{\mathsf{f}}}, \\ & \\ 0 & \text{otherwise.} \end{cases} \nonumber \end{eqnarray} Equations \eqref{eq-sigma-13-simplified-1} and \eqref{eq-sigma-13-simplified-2} imply that, for any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$, \begin{equation}\label{CP-Jaeger-rot-weight-match} q^{\sigma_{1,3}(\Gamma,\mathsf{f})} = q^{-2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma})} \cdot [G(\Gamma)_{\varrho_{\mathsf{f}}},\varsigma]. \end{equation} In view of bijection \eqref{bijection-labeling-orientation}, it follows from \eqref{eq-MOY-HOMFLY}, \eqref{eq-composition-product-1+3}, \eqref{eq-Gamma-bar-Gamma-bar-prime}, \eqref{eq-MOY-3-3} and \eqref{CP-Jaeger-rot-weight-match} that \begin{eqnarray} \label{Jaeger-CP-MOY-4-2} \left\langle \Gamma \right\rangle_4 & = & \sum_{\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)} \sum_{\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})} q^{-2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma})} \cdot [G(\Gamma)_{\varrho_{\mathsf{f}}},\varsigma] \cdot R_3( G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) \\ &= & \sum_{\varrho \in \mathcal{O}(G(\Gamma))} \sum_{\varsigma \in \Sigma(G(\Gamma)_{\varrho})} q^{-2\mathrm{rot} (G(\Gamma)_{\varrho,\varsigma})} \cdot [G(\Gamma)_{\varrho},\varsigma] \cdot R_3( G(\Gamma)_{\varrho,\varsigma}). 
\nonumber \end{eqnarray} Comparing the right hand side of \eqref{Jaeger-CP-MOY-4-2} to the Jaeger Formula \eqref{eq-Jaeger-formula-N-graph} in Theorem \ref{thm-Jaeger-formula-graph}, we get $\left\langle \Gamma \right\rangle_4 = P_6(G(\Gamma))$. \end{proof} \subsection{An explicit $\mathfrak{so}(6)$ Kauffman homology} Webster \cite{Webster1,Webster2} has categorified, for any simple complex Lie algebra $\mathfrak{g}$, the quantum $\mathfrak{g}$ invariant for links colored by any finite dimensional representations of $\mathfrak{g}$. But his categorification is very abstract. For applications in knot theory, it would help to have categorifications that are concrete and explicit. For quantum $\mathfrak{sl}(N)$ link invariants, examples of such categorifications can be found in \cite{K1,KR1,Wu-color}. We know much less about explicit categorifications of quantum $\mathfrak{so}(N)$ link invariants. Khovanov and Rozansky \cite{KR3} proposed a categorification of the $\mathfrak{so}(2N)$ Kauffman polynomial, but its invariance under Reidemeister move (III) is still open. They did, however, point out that the $\mathfrak{so}(4)$ version of their homology is isomorphic to the tensor square of the Khovanov homology \cite{K1} and is therefore a link invariant. More recently, Cooper, Hogancamp and Krushkal \cite{Cooper-Hogancamp-Krushkal} gave an explicit categorification of the $\mathfrak{so}(3)$ Kauffman polynomial. Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent} allows us to give an explicit categorification of the $\mathfrak{so}(6)$ Kauffman polynomial. More precisely, we have the following theorem. \begin{theorem}\label{thm-2-4-MOY-6-Kauffman} Let $L$ be an unoriented framed link in $S^3$. Fix an orientation $\varrho$ of $L$, color all components of $L$ by $2$ and denote the resulting colored oriented framed link by $L_\varrho^{(2)}$.
Then \begin{equation}\label{eq-2-4-MOY-6-Kauffman} \widetilde{R}_{4}(L_\varrho^{(2)}) = (-1)^m P_6(\overline{L}), \end{equation} where the polynomial $\widetilde{R}_{4}$ is defined in Definition \ref{def-2N-homology-renormalized}, $m$ is the number of crossings in $L$ (only its class in $\mathbb{Z}_2$ matters), and $\overline{L}$ is the mirror image of $L$, that is, $L$ with the upper and lower branches switched at every crossing. Consequently, the renormalized $\mathfrak{sl}(4)$ homology $\widetilde{H}_{4}(L_\varrho^{(2)})$ categorifies $(-1)^m P_6(\overline{L})$. \end{theorem} \begin{figure} \caption{MOY resolutions} \label{MOY-resolutions-figure} \end{figure} \begin{figure} \caption{KV resolutions} \label{KV-resolutions-figure} \end{figure} \begin{proof} Replace every crossing of $L_\varrho^{(2)}$ by one of the three local configurations in Figure \ref{MOY-resolutions-figure}. We call the result a MOY resolution of $L_\varrho^{(2)}$ and denote by $\mathcal{MOY}(L_\varrho^{(2)})$ the set of MOY resolutions of $L_\varrho^{(2)}$. Replace every crossing of $L$ by one of the three local configurations in Figure \ref{KV-resolutions-figure}. We call the result a KV resolution of $L$ and denote by $\mathcal{KV}(L)$ the set of KV resolutions of $L$. Note that the mapping $\Gamma \mapsto G(\Gamma)$ defined in Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent} gives a bijection $\mathcal{MOY}(L_\varrho^{(2)}) \rightarrow \mathcal{KV}(L)$.
Compare the skein relations \begin{eqnarray*} \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(10,0){\line(-1,1){8}} \put(-2,12){\vector(-1,1){8}} \put(7,15){\tiny{$2$}} \put(-9,15){\tiny{$2$}} \end{picture}) & = & -q^{-1} \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \put(4,15){\tiny{$2$}} \put(-6,15){\tiny{$2$}} \end{picture}) + \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(5,15){\vector(1,1){5}} \put(-10,0){\vector(1,1){5}} \put(-5,15){\vector(-1,1){5}} \put(10,0){\vector(-1,1){5}} \put(-5,5){\vector(0,1){10}} \put(5,5){\vector(0,1){10}} \put(5,5){\vector(-1,0){10}} \put(-5,15){\vector(1,0){10}} \put(-4,9){\tiny{$3$}} \put(3,9){\tiny{$1$}} \put(0,6){\tiny{$1$}} \put(0,12){\tiny{$1$}} \put(6,18){\tiny{$2$}} \put(6,0){\tiny{$2$}} \put(-8,18){\tiny{$2$}} \put(-8,0){\tiny{$2$}} \end{picture}) - q \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(0,15){\vector(2,1){10}} \put(-10,0){\vector(2,1){10}} \put(0,15){\vector(-2,1){10}} \put(10,0){\vector(-2,1){10}} \put(0,5){\vector(0,1){10}} \put(1.5,9){\tiny{$4$}} \put(5,15){\tiny{$2$}} \put(5,4){\tiny{$2$}} \put(-6,15){\tiny{$2$}} \put(-6,4){\tiny{$2$}} \end{picture}), \\ \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(2,8){\vector(1,-1){8}} \put(-2,12){\line(-1,1){8}} \put(7,15){\tiny{$2$}} \put(-9,15){\tiny{$2$}} \end{picture}) & = & -q^{-1} \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,2){5}} \put(-10,20){\vector(1,-2){5}}
\put(-5,10){\vector(1,0){10}} \put(5,10){\vector(1,2){5}} \put(5,10){\vector(1,-2){5}} \put(0,11){\tiny{$4$}} \put(5,15){\tiny{$2$}} \put(5,4){\tiny{$2$}} \put(-6,15){\tiny{$2$}} \put(-6,4){\tiny{$2$}} \end{picture}) + \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(5,15){\vector(1,1){5}} \put(-10,0){\vector(1,1){5}} \put(-10,20){\vector(1,-1){5}} \put(5,5){\vector(1,-1){5}} \put(-5,5){\vector(0,1){10}} \put(5,15){\vector(0,-1){10}} \put(-5,5){\vector(1,0){10}} \put(-5,15){\vector(1,0){10}} \put(-4,9){\tiny{$1$}} \put(3,9){\tiny{$1$}} \put(0,6){\tiny{$1$}} \put(0,12){\tiny{$3$}} \put(6,18){\tiny{$2$}} \put(6,0){\tiny{$2$}} \put(-8,18){\tiny{$2$}} \put(-8,0){\tiny{$2$}} \end{picture}) - q \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \put(10,20){\vector(1,1){0}} \put(10,0){\vector(1,-1){0}} \put(4,12){\tiny{$2$}} \put(4,5){\tiny{$2$}} \end{picture}), \\ P_6 (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(10,0){\line(-1,1){20}} \put(-10,0){\line(1,1){8}} \put(2,12){\line(1,1){8}} \end{picture}) & = & q^{-1}P_6(\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) - P_6(\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(0,10){\line(1,1){10}} \put(-10,0){\line(1,1){10}} \put(0,10){\line(-1,1){10}} \put(10,0){\line(-1,1){10}} \end{picture}) + qP_6(\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}). \end{eqnarray*} \noindent It is easy to see that equation \eqref{eq-2-4-MOY-6-Kauffman} follows from equation \eqref{eq-mostly-2-colored-MOY-to-4-valent} in Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}.
\end{proof} \begin{question} Is $\widetilde{H}_{4}(L_\varrho^{(2)})$ isomorphic to the $\mathfrak{so}(6)$ version of the homology defined in \cite{KR3}? \end{question} \appendix \section{Figures Used in the Proof of Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}}\label{app-figures} In Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig}, edges along \textcolor{BrickRed}{red paths} belong to $\Gamma_{\mathsf{f}}$ and edges along \textcolor{SkyBlue}{blue paths} belong to $\Delta_{\mathsf{f}}$. \begin{figure}\label{local-weights-type-1-fig} \end{figure} \begin{figure}\label{local-weights-type-2-fig} \end{figure} \end{document}
\begin{document} \title{Hochschild homology and cohomology of generalized Weyl algebras} \tableofcontents \footnotetext[1] {Dto. de Matem\'atica, Facultad de Cs. Exactas y Naturales. Universidad de Buenos Aires. Ciudad Universitaria Pab I. 1428, Buenos Aires - Argentina. e-mail: \texttt{[email protected]}, \texttt{[email protected]}, \texttt{[email protected]}\\ Research partially supported by UBACYT TW69 and CONICET.} \footnotetext[2] {Research member of CONICET (Argentina).} \section*{Introduction} The relevance of algebras such as the Weyl algebra $A_n({\mathbb{C}})$, the enveloping algebra ${\cal U}(\mathfrak{sl}_2)$ and its primitive quotients $B_{\lambda}$ and other algebras related to algebras of differential operators is already well-known. Recently, several articles in which their Hochschild homology and cohomology play an important role have been written (see for example \cite{afls}, \cite{al1}, \cite{al2}, \cite{gui-eti}, \cite{odile}, \cite{mariano}, \cite{marianococprim}). Both the results obtained in \cite{afls} and in \cite{marianococprim} seem to depend strongly on intrinsic properties of $A_1({\mathbb{C}})$ and ${\cal U}(\mathfrak{sl}_2)$. However, this is not entirely the case. In this article we consider a class of algebras, called generalized Weyl algebras (GWA for short), defined by V.~Bavula in \cite{Bav} and studied by him and his collaborators in a series of papers (see for example \cite{Bav}, \cite{BavJor}, \cite{BavLen}) from the point of view of ring theory. Our aim is to compute the Hochschild homology and cohomology groups of these algebras and to study whether there is a duality between these groups. Examples of GWA are, as we said before, $n$-th Weyl algebras, ${\cal U}(\mathfrak{sl}_2)$, primitive quotients of ${\cal U}(\mathfrak{sl}_2)$, and also the subalgebras of invariants of these algebras under the action of finite cyclic subgroups of automorphisms.
As a consequence we recover in a simple way the results of \cite{afls} and we also complete results of \cite{odile}, \cite{michelinkassel} and of \cite{marianococprim}, giving at the same time a unified method for GWA. The article is organized as follows: In section \ref{sect:GWA} we recall from \cite{Bav} the definition of generalized Weyl algebras and state the main theorems. In section \ref{sect:res} we describe the resolution used afterwards in order to compute the Hochschild homology and cohomology groups. We prove a ``reduction'' result (Proposition \ref{prop:reduction}) and finally we prove the main theorem for homology using a spectral sequence argument. Section \ref{sect:coho} is devoted to the computation of Hochschild cohomology of GWAs. As a consequence of the results we obtain, we notice that the hypotheses of Theorem 1 of \cite{VdB} are not sufficient to ensure duality between Hochschild homology and cohomology. We state here hypotheses under which duality holds. In section \ref{sect:inv} we consider subalgebras of invariants of the previous ones under diagonalizable cyclic actions of finite order. We first show that these subalgebras are also GWA and we state the main theorem concerning subalgebras of invariants. Finally, in section \ref{sect:apps}, we describe some applications of the above results. The first application is to specialize the results to the usual Weyl algebra. Secondly, we consider the primitive quotients of ${\cal U}(\mathfrak{sl}_2)$ and, using the Cartan involution $\Omega$, we answer a question of Bavula (\cite{BavJor}, Remark 3.30) and finish the proof of the main theorem. The formula for the dimension of $H\!H_*(A^G)$ explains, in particular, the computations made by O.~Fleury for $H\!H_0(B_{\lambda}^G)$. We will work over a field $k$ of characteristic zero and all algebras will be $k$-algebras. Given a $k$-algebra $A$, $\Aut_k(A)$ will always denote the group of $k$-algebra automorphisms of $A$.
\section{Generalized Weyl Algebras}\label{sect:GWA} We recall the definition of generalized Weyl algebras given by Bavula in \cite{Bav}. Let $R$ be an algebra, fix a central element $a\in{\cal Z}(R)$ and $\sigma\in \Aut_k(R)$. The generalized Weyl algebra $A=A(R,a,\sigma)$ is the $k$-algebra generated by $R$ and two new free variables $x$ and $y$ subject to the relations: \begin{alignat*}{2} yx&=a &\qquad&xy=\sigma(a) \\ \intertext{and} xr&=\sigma(r)x &&ry=y\sigma(r) \end{alignat*} for all $r\in R $. \noindent\textbf{Examples: }\begin{enumerate} \item If $R=k[h]$, $a=h$ and $\sigma\in \Aut_k(k[h])$ is the unique automorphism determined by $\sigma(h)=h-1$, then $A(k[h],a,\sigma)\cong A_1(k)$, the usual Weyl algebra, generated by $x$ and $y$ subject to the relation $[x,y]=1$. \item Let $R=k[h,c]$, $\sigma(h)=h-1$ and $\sigma(c)=c$, and define $a:=c-h(h+1)$. Then $A(k[h,c],a,\sigma)\cong {\cal U}(\mathfrak{sl}_2)$. Under the obvious isomorphism (choosing $x$, $y$ and $h$ as the standard generators of $\mathfrak{sl}_2$) the image of the element $c$ corresponds to the Casimir element. \item Given $\lambda \in k$, the maximal primitive quotients of ${\cal U}(\mathfrak{sl}_2)$ are the algebras $B_{\lambda}:={\cal U}(\mathfrak{sl}_2)/\langle c-\lambda\rangle$, cf.~\cite{Dixmier}. They can also be obtained as generalized Weyl algebras because $B_{\lambda}\cong A(k[h],a=\lambda - h(h+1),\sigma)$. \end{enumerate} We will focus on the family of examples $A=A(k[h],a,\sigma)$ with $a=\sum_{i=0}^na_ih^i\in k[h]$ a non-constant polynomial, and the automorphism $\sigma$ defined by $\sigma(h)=h-h_0$, with $h_0\in k\setminus\{0\}$. There is a filtration on $A$ which assigns to the generators $x$ and $y$ degree $n$ and to $h$ degree $2$; the associated graded algebra is, with an obvious notation, $k[x,y,h]/(yx-a_nh^n)$. This is the coordinate ring of a Klein surface. We remark that this is a complete intersection, hence a Gorenstein algebra.
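Since $yx=a$ and $xy=\sigma(a)$, the commutator $xy-yx$ equals $\sigma(a)-a$ in any of these algebras. As an illustrative check (not part of the original text; the helper name is ours, and the sign of the commutator depends on the conventions of the isomorphisms above), one can compute $\sigma(a)-a$ for the examples with sympy:

```python
import sympy as sp

h, c, lam = sp.symbols('h c lambda')

def commutator_xy(a, h0=1):
    """In A(k[h], a, sigma) with sigma(h) = h - h0, the defining relations
    yx = a, xy = sigma(a) give xy - yx = sigma(a) - a."""
    return sp.expand(a.subs(h, h - h0) - a)

print(commutator_xy(h))                # Weyl algebra, a = h:       -1
print(commutator_xy(c - h*(h + 1)))    # U(sl2), a = c - h(h+1):    2*h
print(commutator_xy(lam - h*(h + 1)))  # B_lambda:                  2*h
```

For $a=h$ this gives $xy-yx=-1$, i.e.\ $[y,x]=1$, which is the Weyl relation up to exchanging the roles of $x$ and $y$; for ${\cal U}(\mathfrak{sl}_2)$ and $B_\lambda$ it gives $2h$, consistent with the standard $\mathfrak{sl}_2$ relations after rescaling $h$.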
There is also a graduation on $A$, which we will refer to as weight, such that $\deg x=1$, $\deg y=-1$ and $\deg h=0$. For polynomials $a, b\in k[h]$, we will denote by $\deg a$ the degree of $a$, by $a'=\frac{\partial a}{\partial h}$ the formal derivative of $a$, and by $(a;b)$ the greatest common divisor of $a$ and $b$. Our main results are the following theorems, whose proofs will be given in the following sections. \begin{teo}\label{thm:hom} Let $a\in k[h]$ be a non-constant polynomial, $\sigma\in \Aut_k(k[h])$ defined by $\sigma(h)=h-h_0$ with $0\neq h_0\in k$. Consider $A=A(k[h],a,\sigma)$ and $n=\deg a$, $d=\deg(a;a')$. \begin{itemize} \item If $(a;a')=1$ (i.e. $d=0$), then $\dim_kH\!H_0(A)=n-1$, $\dim_kH\!H_2(A)=1$, and $H\!H_i(A)=0$ for $i\neq 0,2$. \item If $d\geq 1$ then $\dim_kH\!H_0(A)=n-1$, $\dim_kH\!H_1(A)=d-1$, and $\dim_kH\!H_i(A)=d$ for $i\geq 2$. \end{itemize} \end{teo} \begin{teo}\label{thm:coho} Let $a\in k[h]$ be a non-constant polynomial, $\sigma\in \Aut_k(k[h])$ defined by $\sigma(h)=h-h_0$ with $0\neq h_0\in k$. Consider $A=A(k[h],a,\sigma)$ and $n=\deg a$, $d=\deg(a;a')$. \begin{itemize} \item If $(a;a')=1$ (i.e. $d=0$), then $\dim_kH\!H^0(A)=1$, $\dim_kH\!H^2(A)=n-1$, and $H\!H^i(A)=0$ for $i\neq 0,2$. \item If $d\geq 1$ then $\dim_kH\!H^0(A)=1$, $\dim_kH\!H^1(A)=0$, $\dim_kH\!H^2(A)=n-1$, and $\dim_kH\!H^i(A)=d$ for $i\geq 3$. \end{itemize} \end{teo} \section{A resolution for $A$ and proof of the first theorem}\label{sect:res} In this section we will construct a complex of free $A^e$-modules and we will prove, using an appropriate filtration, that this complex is actually a resolution of $A$. The construction of the resolution is performed in two steps. First we consider an algebra $B$ above $A$ which has ``one relation less'' than $A$. Then we use the Koszul resolution of $B$ and obtain a resolution of $A$, mimicking the construction of the resolution for the coordinate ring of an affine hypersurface done in \cite{burg-vigue}.
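To see what Theorem \ref{thm:hom} predicts in concrete cases, the invariants $n=\deg a$ and $d=\deg(a;a')$ can be computed mechanically. A small sympy sketch (the function name and the truncation parameter `top` are ours, not part of the original text):

```python
import sympy as sp

h = sp.symbols('h')

def hh_dimensions(a, top=4):
    """Return (n, d, [dim HH_0, ..., dim HH_top]) for A = A(k[h], a, sigma),
    as predicted by the homology theorem above (thm:hom):
    n = deg a, d = deg gcd(a, a')."""
    n = sp.degree(a, h)
    d = sp.degree(sp.gcd(a, sp.diff(a, h)), h)
    if d == 0:
        dims = [n - 1, 0, 1] + [0] * (top - 2)
    else:
        dims = [n - 1, d - 1] + [d] * (top - 1)
    return n, d, dims

print(hh_dimensions(h))            # Weyl algebra, a = h: n = 1, d = 0
print(hh_dimensions(h**2))         # repeated root: d = 1, HH_i nonzero for i >= 2
print(hh_dimensions(h**3 - 3*h))   # separable cubic: n = 3, d = 0
```

For the Weyl algebra ($a=h$) this recovers $H\!H_2\cong k$ and $H\!H_i=0$ otherwise, while any $a$ with a repeated root, such as $h^2$, falls into the $d\geq1$ case with nontrivial homology in every degree $\geq 2$.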
Let us consider $V$ the $k$-vector space with basis $\{e_x, e_y, e_h\}$ and the following complex of free $A^e$-modules: \begin{equation}\label{dag} 0\to A\ot \Lambda^3V\ot A \to A\ot \Lambda^2V\ot A \to A\ot V\ot A \to A\ot A \to 0 \tag{\dag} \end{equation} In order to define the differential, we consider the elements $\lambda_k\in k$ such that \[ \sigma(a)-a=\sum_{k=0}^{n-1}\lambda_kh^k \] and let \[ e_{[x,y]}=\sum_{k,i}\lambda_kh^ie_hh^{k-i-1}, \: e_{[x,h]}=-e_x, \: e_{[y,h]}=e_y \in A\ot V\ot A. \] For simplicity, in these formulas we have written for example $xe_y$ instead of $x\ot e_y\ot 1\in A\ot V\ot A$ and similarly in the other degrees. The differential in \eqref{dag} is formally, with these notations, the Chevalley-Eilenberg differential; for example, \[ d(\alpha e_x\wedge e_y \beta) = \alpha x e_y \beta - \alpha e_y x \beta - \alpha y e_x \beta + \alpha e_x y \beta - \alpha e_{[x,y]} \beta. \] \begin{lem} The homology of the complex \eqref{dag} is isomorphic to $A$ in degree $0$ and $1$, and zero elsewhere. \end{lem} \begin{proof} We consider the $k$-algebra $B$ freely generated by $x,y,h$ modulo relations \[ xh=\sigma(h)x, \qquad hy=y\sigma(h), \qquad xy-yx=\sigma(a)-a. \] Let $f:=yx-b\in B$ where $b\in k[h]$ is such that $\sigma(b)-b=a$; observe that $f$ is central in $B$. This algebra $B$ has been studied by S.~Smith in \cite{smith}. Our interest in it comes from the fact that $A$ is the quotient of $B$ by the two sided ideal generated by $f$. In particular, $B$ has a filtration induced by the filtration on $A$, and it is clear that the associated graded algebra is simply $k[x,y,h]$. We claim that a complex similar to \eqref{dag} but with $A$ replaced throughout by $B$ is a resolution of $B$ by free $B^e$-modules. Indeed, the filtration on $B$ extends to a filtration on this complex, and the associated graded object is acyclic, because it coincides with the usual Koszul resolution of the polynomial algebra $k[x,y,h]$ as a bimodule over itself. 
The original complex can be recovered by tensoring over $B$ with $A$ on the right and on the left, or, equivalently, by tensoring over $B^e$ with $A^e$. As a result, the homology of the resulting complex computes $\Tor_*^{B^e}(B,A^e)\cong \Tor_*^B(A,A)$ (see for example \textsc{IX}.\S4.4 of \cite{CE}). Consider now the free resolution of $A$ as a left $B$-module \[ 0\to B\to B\to A\to 0 \] with the map $B\to A$ being the natural projection and the other one multiplication by $f$. This can be used to compute $\Tor^B_*(A,A)$, and the proof of the lemma is finished. \end{proof} In order to kill the homology of the complex \eqref{dag}, we consider a resolution of the following type: \begin{equation} \xymatrix@C-10pt@-10pt{ &&0\ar[r]& A\ot \Lambda^3V\ot A \ar[r]& A\ot \Lambda^2V\ot A \ar[r] & A\ot V\ot A \ar[r]& A\ot A \ar[r]& 0\\ &0\ar[r]& A\ot \Lambda^3V\ot A \ar[u]\ar[r]& A\ot \Lambda^2V\ot A\ar[u] \ar[r]& A\ot V\ot A \ar[u]\ar[r]& A\ot A \ar[u]\ar[r]& 0\ar[u]&\\ 0\ar[r]& A\ot \Lambda^3V\ot A \ar[u]\ar[r]& A\ot \Lambda^2V\ot A\ar[u] \ar[r]& A\ot V\ot A \ar[u]\ar[r]& A\ot A \ar[u]\ar[r]& 0\ar[u]&&\\ &\ar[u] &\ar[u]&\ar[u]&\ar[u] && } \tag{\ddag}\label{ddag} \end{equation} The horizontal differentials are the same as before, and the vertical ones--denoted ``$.df$"--are defined as follows: \begin{align*} .df:A\ot A &\to A\ot V\ot A\\ 1\ot 1&\mapsto y e_x+e_y x-\sum_{i,k}a_kh^i e_hh^{k-i-1}\\ \end{align*} \begin{align*} .df:A\ot V\ot A &\to A\ot \Lambda^2V\ot A\\ e_x&\mapsto -e_x\wedge e_y x +\sum_{i,k}a_k\sigma(h^i) e_x\wedge e_hh^{k-i-1}\\ e_y&\mapsto -y e_y\wedge e_x +\sum_{i,k}a_kh^i e_y\wedge e_h\sigma(h^{k-i-1})\\ e_h&\mapsto -y e_h\wedge e_x- e_h\wedge e_y x\\ \end{align*} \begin{align*} .df:A\ot \Lambda^2V\ot A &\to A\ot \Lambda^3V\ot A\\ e_y\wedge e_h&\mapsto y e_y\wedge e_h\wedge e_x \\ e_x\wedge e_h&\mapsto e_x\wedge e_h\wedge e_y x\\ e_x\wedge e_y&\mapsto -\sum_{i,k}a_k\sigma(h^i) e_x\wedge e_y\wedge e_h \sigma(h^{k-i-1}) \end{align*} \begin{prop} The total 
complex associated to \eqref{ddag} is a resolution of $A$ as an $A^e$-module. \end{prop} \begin{proof} That it is a double complex follows from a straightforward computation. We consider again the filtration on $A$ and the filtration induced by it on \eqref{ddag}. All maps respect it, so it will suffice to see that the associated graded complex is a resolution of $\gr A$. Filtering this new complex by rows, we know that the homology of the rows computes $\Tor_*^{\gr B}(\gr(B),\gr(B))$. The only thing to be checked now is that the differential on the $E^1$ term can be identified with $.df$, and this is easily done. \end{proof} In order to compute $H\!H_*(A)$ it suffices to compute the homology of the complex $(A\ot_{A^e}(A\ot \Lambda^*V\ot A),.df,d_{CE})\cong (A\ot \Lambda^*V,.df,d_{CE})$. This is a double complex which can be filtered by the rows, as usual, so we obtain a spectral sequence converging to the homology of the total complex. Of course, the first term is just the homology of the rows. The computation of Hochschild homology can be done in a direct way; however, it is worth noticing that this procedure can be considerably simplified. Let $X_*=(A\ot\Lambda^*V,d)$ be the complex obtained by tensoring the rows in \eqref{ddag} with $A$ over $A^e$, and let $X^0_*$ be the zero weight component of $X_*$. \begin{prop}\label{prop:reduction} The inclusion map $X^0_*\to X_*$ induces an isomorphism in homology. \end{prop} \begin{proof} Let us define the map $s:A\ot \Lambda^*V\to A\ot \Lambda^{*+1}V$ by $s(w\ot v_1\wedge \dots\wedge v_k):= w\ot v_1\wedge \dots\wedge v_k\wedge e_h$. A computation shows that \begin{equation}\label{eq:htpy} (d_{CE}s+sd_{CE})(w\ot v_1\wedge \dots\wedge v_k)= \mathrm{weight}(w\ot v_1\wedge \dots\wedge v_k)\,(w\ot v_1\wedge \dots\wedge v_k). \end{equation} Since $\chr(k)=0$, this ``Euler'' map is an isomorphism for non-zero weights, but it is the zero map in homology, because \eqref{eq:htpy} shows that it is homotopic to zero. Hence the components of non-zero weight are acyclic, and the inclusion of the weight zero component induces an isomorphism in homology.
\end{proof} \subsection{The term $E^1$} \paragraph{Computation of $H\!H_0(A)$.} We remark that in the grading by weight $A=\oplus_{n\in{\mathbb{Z}}}A_n$ we have $A_0=k[h]$, and, for $n>0$, $A_n=k[h]x^n$ and $A_{-n}=k[h]y^n$. We have to compute $H\!H_0(A)=A/[A,A]=A/([A,x]+[A,y]+[A,h])$, and this is, according to proposition \ref{prop:reduction}, the same as $A_0/([A_{-1},x]+[A_{1},y]+[A_0,h])$. Since $A_0=k[h]$, $[A_0,h]=0$. Because $A_{-1}=k[h]y$, a system of linear generators of $[A_{-1},x]$ is given by commutators of the form $[h^iy,x]=h^ia-\sigma(h^ia)=(I\!d-\sigma)(h^ia)$. On the other hand, $A_1=k[h]x$, so $[A_1,y]$ is spanned by the $[h^jx,y]=h^j\sigma(a)-\sigma^{-1}(h^j)a=(I\!d-\sigma)(-\sigma^{-1}(h^j)a)$ for $j\geq0$. As a consequence, $[A_1,y]+[A_{-1},x]=[A_{-1},x]$ is the subspace of $A_0=k[h]$ of all polynomials $pa-\sigma(pa)$ with $p\in k[h]$. The $k$-linear map $I\!d-\sigma:k[h]\to k[h]$ is an epimorphism, and its kernel is the one-dimensional subspace consisting of constant polynomials. The subspace of multiples of $a$ has codimension $\deg a=n$, so the image of the restriction of $I\!d-\sigma$ to this subspace has codimension $n-1$. We conclude that $\dim_kH\!H_0(A)=n-1$; a basis is given for example by the set of homology classes $\{[1], [h], [h^2],\dots ,[h^{n-2}]\}$. \paragraph{Homology of the row in degree $1$.} Recall that we only have to consider the subcomplex of elements of weight zero. Let us then suppose that $c=sye_x+txe_y+ue_h$ is a $1$-cycle in the row complex of weight zero. This implies that \[ d(sye_x+txe_y+ue_h)= \sigma\left( (\sigma^{-1}(t)-s)a \right) - (\sigma^{-1}(t)-s)a = 0. \] As a consequence, $(\sigma^{-1}(t)-s)a\in \Ker(I\!d-\sigma)=k$. But $a$ is not a constant polynomial, so $\sigma^{-1}(t)-s=0$. In other words, $s=\sigma^{-1}(t)$ and the cycle can be written in the form \[ \sigma^{-1}(t)ye_x+txe_y+ue_h.
\] The horizontal boundary of a $2$-chain $pe_x\wedge e_y +qye_x\wedge e_h +rxe_y\wedge e_h$ is \begin{multline*} d_{CE}(pe_x\wedge e_y +qye_x\wedge e_h +rxe_y\wedge e_h)= \\ =(p-\sigma(p))xe_y + (\sigma^{-1}(p)-p)ye_x +\left(-p (\sigma(a')-a') +(q-\sigma^{-1}(r))a - \sigma((q-\sigma^{-1}(r))a)\right)e_h. \end{multline*} We can choose $p$ such that $\sigma(p)-p=t$, so that, adding $d(pe_x\wedge e_y)$ to $c$, we obtain a cycle homologous to $c$ in which the only possibly non-zero coefficient is the one corresponding to $e_h$. We can then simply assume that $c$ is of the form $ue_h$ to begin with, and we want to know if it is a boundary or not. The equation $d(pe_x\wedge e_y +qye_x\wedge e_h +rxe_y\wedge e_h)=ue_h$ implies $p=\sigma(p)$, so $p\in k$, and \begin{equation}\label{yyy} u+ (\sigma(a')-a')p =- \sigma((q-\sigma^{-1}(r))a) +(q-\sigma^{-1}(r))a. \end{equation} If $n=\deg a=1$, then we are in the special case of the usual Weyl algebra. In this case $a'\in k$ and $\sigma(a')-a'=0$, so there is one term fewer on the left-hand side of \eqref{yyy} and the homology of the row in this degree is zero. Suppose now $n\geq 2$; if $p=0$ then $u\in\mathrm{Im}((I\!d-\sigma)|_{a.k[h]})$, and we have, as in degree zero, that $\{[1],\dots,[h^{n-2}]\}$ is a basis of the quotient. If $p\neq 0$, we have to mod out an $(n-1)$-dimensional space by the space spanned by a non-zero element, so we obtain an $(n-2)$-dimensional space. We notice that $\deg(\sigma(a')-a')=\deg a-2$, and since $n\geq 2$, the element $\sigma(a')-a'$ is a non-zero element of $k[h]/(I\!d-\sigma)(a.k[h])$. \paragraph{Homology of the row in degree $2$.} The boundary of a weight zero element in degree two\\ $w:=se_x\wedge e_y + tye_x\wedge e_h +uxe_y\wedge e_h$ is: \[ d(w)= (s-\sigma(s))x e_y - (s-\sigma^{-1}(s))y e_x +\left( (a'-\sigma(a'))s +(t-\sigma^{-1}(u))a - \sigma((t-\sigma^{-1}(u))a) \right)e_h.
\] If $d(w)=0$, we must have that $s=\sigma(s)$, so $s\in k$, and that \[ s(\sigma(a')-a')=(t-\sigma^{-1}(u))a -\sigma((t-\sigma^{-1}(u))a). \] If $s\neq 0$, then the expression on the left is a polynomial of degree $n-2$, and the degree of the polynomial on the right is (if it is not the zero polynomial) $n+\deg(t-\sigma^{-1}(u))-1$. This is only possible if both sides are zero: then $(t-\sigma^{-1}(u))a\in\Ker(I\!d-\sigma)=k$ and, since $a$ is not constant, $\sigma(t)=u$; moreover, if $n\geq2$, the vanishing of the left-hand side forces $s=0$. We mention that in the case $\deg a=1$ (i.e. the usual Weyl algebra), the expression on the left is always zero independently of $s$, so the argument is not really different in this case. Now we compute the $2$-boundaries: as $p$ varies in $k[h]$, they are the elements \begin{align*} d(p e_x \wedge e_y \wedge e_h)&= [p,x]e_y\wedge e_h -[p,y]e_x\wedge e_h +[p,h]e_x\wedge e_y\\ &=(p-\sigma(p))x e_y\wedge e_h - (p-\sigma^{-1}(p))y e_x\wedge e_h. \end{align*} Given $u$, there is a $p$ such that $(I\!d-\sigma)(p)=u$, and this $p$ automatically satisfies $\sigma^{-1}(p)-p=t$. We remark that the coefficient corresponding to $e_x\wedge e_y$ in a $0$-weight boundary is always zero. As a consequence, in the case of the usual Weyl algebra, the class of $e_x\wedge e_y$ is a generator of the homology. On the other hand, if $n\geq 2$ the homology is zero. \paragraph{Homology of the row in degree $3$.} The homology in degree three is the kernel of the map $A\ot \Lambda^3V \to A\ot \Lambda^2V$ given by \[ w\mapsto [w,x]e_y\wedge e_h -[w,y]e_x\wedge e_h +[w,h]e_x\wedge e_y. \] It is clearly isomorphic to the center of $A$, which is known to be $k$ (see for example \cite{Bav}). A basis of the homology is given by the class of $e_x\wedge e_y\wedge e_h$. \paragraph{Summary.} We summarize the previous computations in the following table showing the dimensions of the vector spaces in the term $E^1$. In each case, the boxed entry has coordinates $(0,0)$.
\[ \begin{array}{ccc} \begin{array}{ccccccc} & & &1 &0 & n-2&\fbox{$n-1$}\\ & &1 &0 &n-2&n-1&\\ &1& 0 & n-2 &n-1& &\\ 1&0&n-2& n-1 & & &\\ \end{array} & \qquad & \begin{array}{ccccccc} & & &1 &1 & 0&\fbox{$0$}\\ & &1 &1 &0&0&\\ &1& 1 & 0 &0& &\\ 1&1&0& 0 & & &\\ \end{array} \\ \\ n\geq2 && \text{The Weyl algebra ($n=1$)} \end{array} \] \subsection{The term $E^2$} The differential $d^1$ corresponds to the vertical differential in the original complex. Let $n\geq 2$. The only relevant component is the map $.df:A\to A\ot V$; we recall that it is defined by \[ .df(b)= bye_x +\sigma(b)xe_y-ba'e_h. \] Adding $d_{CE}(pe_x\wedge e_y)$, where $p$ is such that $b=p-\sigma^{-1}(p)$, we see that the expression $bye_x +\sigma(b)xe_y-ba'e_h$ is homologous to $\left(-\sigma\left(\sigma^{-1}(p) a'\right)+\left(\sigma^{-1}(p) a'\right)\right)e_h$, and, since the homology of the row in the place corresponding to $A\ot V$ is isomorphic to $k[h]/(I\!d-\sigma)(a.k[h])e_h$, we conclude that the cokernel of the first differential of the spectral sequence (in the same place) is isomorphic to $k[h]/(I\!d-\sigma)(a.k[h]+a'.k[h])e_h$. The subspace $ak[h]+a'k[h]$ has codimension $d=\deg(a;a')$, so $(I\!d-\sigma)(ak[h]+a'k[h])$ has codimension $d-1$ (or zero if $d=0$). By linear algebra arguments, the dimension of the kernel of this differential is $d$. The corresponding table at this step of the spectral sequence is the following: \[ \begin{array}{ccc} \begin{array}{ccccccc} & & &1 &0 & d-1&\fbox{$n-1$}\\ & &1 &0 &d-1&d&\\ &1& 0 &d-1 &d& &\\ 1&0&d-1&d & & & \end{array} & \qquad & \begin{array}{ccccccc} & & &1 &0 & 0&\fbox{$n-1$}\\ & &1 &0 &0&1&\\ &1& 0 &0 &1& &\\ 1&0&0&1 & & & \end{array} \\ &&\\ d\geq1 && d=0 \end{array} \] In case $n=1$, the homology of the Weyl algebra is well-known (see for example \cite{kassel} or \cite{sri}), but for completeness we include it. The only relevant differential is the one corresponding to the map $A\ot \Lambda^2V\to A\ot \Lambda^3V$.
The generator of the homology of the row in the place corresponding to $A\ot\Lambda^2V$ is $e_x\wedge e_y$, and $.df(e_x\wedge e_y)=-\sigma(a')e_x\wedge e_y\wedge e_h$. But $\deg a=1$ so $a'$ is a non-zero constant; as a consequence $d_1$ is an epimorphism and hence an isomorphism. The table of the dimensions is in this case: \[ \begin{array}{c} \begin{array}{ccccccc} & & &0 &1 & 0&\fbox{$0$}\\ & &0 &0 &0&0&\\ &0& 0 &0 &0& &\\ 0&0&0&0 & & &\\ \end{array} \\ \\ n=1 \end{array} \] We thus recover known results. \subsection{The term $E^3$}\label{subsect:e3} Since $d^2$ has bidegree $(-2,1)$, its only possibly non-zero component has as target a vector space of dimension one, and there are two possibilities: either it is zero, or it is an epimorphism. In order to decide whether it is an epimorphism, it is sufficient to determine whether the element $e_x\wedge e_y\wedge e_h$ is in the image or not. In other words, we want to know if there exist $z_1=\alpha xe_y\wedge e_h+\beta ye_x\wedge e_h+\gamma e_x\wedge e_y$ and $z_2=p$ such that $.df(z_1)=e_x\wedge e_y \wedge e_h$ and \begin{equation}\label{eq:cond} d_{CE}(z_1)+.df(z_2)=0. \end{equation} We have that $.df(z_1)=((\alpha-\sigma(\beta))\sigma(a)-\gamma\sigma(a'))e_x\wedge e_y\wedge e_h$; so $.df(z_1)=e_x\wedge e_y\wedge e_h$ if and only if \begin{equation}\label{eq:natural} (\alpha-\sigma(\beta))\sigma(a)-\gamma\sigma(a')=1, \end{equation} if and only if \[ (\sigma^{-1}(\alpha)-\beta)a-\sigma^{-1}(\gamma) a'=1. \] A necessary and sufficient condition for a solution to this equation to exist is that $(a;a')=1$, in other words, that $a$ have only simple roots. If this is the case, let $(\alpha,\beta,\gamma)$ be a solution.
We have \begin{multline*} d_{CE}(z_1)= (\gamma- \sigma(\gamma))xe_y- (\gamma- \sigma^{-1}(\gamma))ye_x+\\ +\left(-\gamma(\sigma(a')-a')+ ( \beta- \sigma^{-1}(\alpha))a- \sigma(( \beta- \sigma^{-1}(\alpha))a)\right)e_h;\qquad\qquad\qquad \end{multline*} and using \eqref{eq:natural} we see that this is equal to \[ (\gamma- \sigma(\gamma))xe_y- (\gamma- \sigma^{-1}(\gamma))ye_x +(\gamma-\sigma^{-1}(\gamma))a'e_h. \] On the other hand, $.df(z_2)=.df(p)= pye_x +\sigma(p)xe_y-pa'e_h$. It is then enough to choose $p=\gamma-\sigma^{-1}(\gamma)$ to have equation \eqref{eq:cond} satisfied. We conclude that $d_2$ is an epimorphism if $(a;a')=1$, and zero if not. We can summarize the results of the above computations in the following table containing the dimensions of $H\!H_p(A)$: \[ \begin{array}{||c|c|c||} \hline p&(a;a')=1&\deg(a;a')=d\geq 1\\ \hline \hline 0 & n-1 & n-1 \\ \hline 1 & 0 & d-1 \\ \hline 2 & 1 & d \\ \hline \geq 3 &0 & d \\ \hline \end{array} \] We note that this proves theorem \ref{thm:hom}. \section{Cohomology}\label{sect:coho} The aim of this section is to compute the Hochschild cohomology of a GWA. Once this is done, we compare the dimensions obtained with duality results. We use the resolution \eqref{ddag} of $A$ as an $A^e$-module to compute cohomology. We apply the functor $\Hom_{A^e}(-,A)$ and make the following identifications: \[ \Hom_{A^e}(A\ot \Lambda^kV\ot A,A)\cong \Hom(\Lambda^kV,A)\cong (\Lambda^kV)^*\ot A\cong \Lambda^{3-k}V\ot A. \] Here we are identifying $(\Lambda^{k}V)^*\cong \Lambda^{3-k}V$ using the pairing $\Lambda^{k}V\ot\Lambda^{3-k}V\to\Lambda^3V\cong k$ given by exterior multiplication.
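To make this identification explicit in one instance: normalizing $\Lambda^3V\cong k$ by means of the basis element $e_x\wedge e_y\wedge e_h$, the element of $\Lambda^{2}V$ corresponding to the functional $e^y\in V^*$ is $-e_x\wedge e_h$, since \[ e_y\wedge(-e_x\wedge e_h)=e_x\wedge e_y\wedge e_h, \qquad e_x\wedge(-e_x\wedge e_h)=e_h\wedge(-e_x\wedge e_h)=0. \]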
Using superscripts for the dual basis, the correspondence between the basis of $(\Lambda^{3-k}V)^*$ and the basis of $\Lambda^k V$ is: \begin{align*} e^h&\mapsto e_x\wedge e_y & e^x\wedge e^h&\mapsto-e_y & 1&\mapsto e_x\wedge e_y\wedge e_h \\ e^y&\mapsto-e_x\wedge e_h &e^x\wedge e^y&\mapsto e_h \\ e^x&\mapsto e_y\wedge e_h &e^y\wedge e^h&\mapsto e_x \end{align*} In this way, we obtain the following double complex, whose total homology computes $H\!H^*(A)$: \[ \xymatrix@C-10pt@R-10pt{ 0\ar[r]& A\ot \Lambda^3V \ar[d]\ar[r]& A\ot \Lambda^2V\ar[d] \ar[r]& A\ot V \ar[d]\ar[r]& A \ar[d]\ar[r]& 0\ar[d]&&\\ &0\ar[r]& A\ot \Lambda^3V \ar[d]\ar[r]& A\ot \Lambda^2V\ar[d] \ar[r]& A\ot V \ar[d]\ar[r]& A \ar[d]\ar[r]& 0\ar[d]&\\ &&0\ar[r]& A\ot \Lambda^3V \ar[d]\ar[r]& A\ot \Lambda^2V\ar[d] \ar[r]& A\ot V \ar[d]\ar[r]& A \ar[d]\ar[r]& 0\\ &&&&&&&} \] The horizontal differentials are, up to sign, {\em exactly} the same as in homology. The vertical differentials are also essentially the same; for example, $.df^*:A\to A\ot V$ is such that \[ .df^*(b)=yb\ot e_x+ bx\ot e_y-\sum_{i,k}h^{k-i-1}ba_kh^i\ot e_h. \] \begin{remark} The differences (and similarities) between the above formulas and the corresponding ones in homology may be explained as follows. Given the map $A^e\to A^e$ defined by $a\ot b\mapsto az\ot wb$, the induced maps on the tensor product and $\Hom$ are related in the following way: when we use the tensor product functor we obtain: \begin{align*} A\cong A\ot_{A^e}A^e&\to A\ot_{A^e}A^e \cong A\\ b&\mapsto wbz. \end{align*} When, on the other hand, we use the $\Hom$ functor we get: \begin{align*} A\cong \Hom_{A^e}(A^e,A)&\to \Hom_{A^e}(A^e,A) \cong A\\ b&\mapsto zbw. \end{align*} \end{remark} This fact implies that we already know the homology of the rows, up to reindexing. However it is worth noticing that there is a change of degree with respect to the previous computation (now degree increases from left to right). 
Schematically, the dimensions of these homologies are: \[ \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& n-1&&&\\ &1 &0 & n-2& n-1&&\\ &&1 &0 & n-2& n-1&\\ &&&1 &0 & n-2& n-1 \end{array} \] From this, it follows that $\dim_kH\!H^0(A)=1$ and $\dim_kH\!H^1(A)=0$, independently of the polynomial $a$. Also $\dim_kH\!H^2(A)=n-1=\deg a-1$, because of the form of the $E_1$ term in the spectral sequence. As before, there are two different cases: either \textsc{(i)} $(a;a')=1$ or \textsc{(ii)} $1\leq \deg(a;a')=d\leq n-1$. The following tables give the dimensions of the spaces in the $E_2$ terms of the spectral sequence, in both situations: \[ \begin{array}{ccc} \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& 1&&&\\ &1 &0 & 0& 1&&\\ &&1 &0 & 0& 1&\\ &&&1 &0 &0 & 1\\ \end{array} &\qquad& \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& d&&&\\ &1 &0 & d-1& d&&\\ &&1 &0 & d-1& d&\\ &&&1 &0 &d-1 &d \\ \end{array} \\ \\ \text{\textsc{(i)}}&&\text{\textsc{(ii)}} \end{array} \] In each case, the differential $d_2$ is the same, up to our identifications, as the one considered in section \ref{subsect:e3}. As a consequence, we have: in case \textsc{(i)}, the $E_3$ term has the form \[ \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& 0&&&\\ &1 &0 & 0& 0&&\\ &&0 &0 & 0& 0&\\ &&&0 &0 &0 & 0\\ \end{array} \] so $E_\infty=E_3$; in case \textsc{(ii)}, $d_2=0$, and we see that $E_\infty=E_2$. We summarize the results in the following table containing the dimensions of $H\!H^p(A)$: \[ \begin{array}{||c|c|c||} \hline p&(a;a')=1&\deg(a;a')=d\geq 1\\ \hline \hline 0 & 1 & 1 \\ \hline 1 & 0 & 0 \\ \hline 2 & n-1 & n-1 \\ \hline \geq 3 &0 & d \\ \hline \end{array} \] This proves theorem \ref{thm:coho} concerning cohomology. It is clear that, when the polynomial $a$ has multiple roots, there is no duality between Hochschild homology and cohomology, contrary to what one might expect after \cite{VdB}. 
This is explained by the fact that in this case the algebra $A^e$ has infinite left global dimension; in this situation, theorem 1 in \cite{VdB} fails: one cannot in general replace $\ot$ by $\ot^L$ in the first line of Van den Bergh's proof. One can retain, however, the conclusion in the theorem if one adds the hypothesis that either the $A^e$-module $A$ or the module of coefficients has finite projective dimension. This is explained in detail in \cite{VdBerratum}. \section{Invariants under finite group actions}\label{sect:inv} The algebraic torus $k^*=k\setminus\{0\}$ acts on generalized Weyl algebras by diagonal automorphisms. More precisely, given $w\in k^*$, there is an automorphism of algebras uniquely determined by \[ x\mapsto wx, \qquad y\mapsto w^{-1}y, \qquad h\mapsto h. \] This defines a morphism of groups $k^*\hookrightarrow\Aut_k(A)$ which we will consider as an inclusion. The automorphism defined by $w\in k^*$ is of finite order if and only if $w$ is a root of unity, and, in this case, the subalgebra of invariants can easily be seen to be generated by $\{h,x^m,y^m\}$, where $m$ is the order of $w$. The following lemma expresses the fact that the process of taking invariants with respect to finite subgroups of $k^*$ for this action does not lead out of the class of GWAs. This enables us to obtain almost immediately the Hochschild homology and cohomology of the invariants. \begin{lem}\label{lemma:inv} Let $A=A(k[h],\sigma, a)$ be a generalized Weyl algebra and let $G:={\mathbb{Z}}/r{\mathbb{Z}}$ act on $A$ by powers of the diagonal automorphism induced by a primitive $r$-th root of unity. The subalgebra of invariants $A^G$ is isomorphic to the generalized Weyl algebra $A^G=A(k[H],\tau,\wt{a})$, where $\tau(H)=H-1$ and $\wt{a}(H)=\sigma^{-r+1}(a)(rH)\cdots\sigma^{-1}(a)(rH)a(rH)$. \end{lem} \begin{proof} We know that $A^G=\langle h, x^r,y^r \rangle$. Let us write $X:=x^r$, $Y:=y^r$ and $H:=h/r$.
Then $XH=x^rh/r=\sigma^r(h/r)x^r=\tau(H)X$ and similarly $HY=Y\tau(H)$. Now \[ YX=y^rx^r=y^{r-1}yxx^{r-1}=y^{r-1}a(h)x^{r-1}= \sigma^{-r+1}(a)(h)y^{r-1}x^{r-1} = \sigma^{-r+1}(a)(rH)y^{r-1}x^{r-1}, \] and the equality $y^rx^r=\wt a(H)$ clearly follows by induction on $r$. \end{proof} The idea to compute the (co)homology of $A^G$ is to replace it by the crossed product $A*G$. This change does not affect the homology provided that $A^G$ and $A*G$ are Morita equivalent; this is discussed in detail in \cite{afls}. In particular, this is the case when the polynomial $a$ has no pair of different roots conjugated to each other by $\sigma$---that is, there do not exist $\mu\in{\mathbb{C}}$ and $j\in{\mathbb{Z}}\setminus\{0\}$ such that $a(\mu)=a(\mu+j)=0$---because in this situation the algebra $A=A(k[h],\sigma,a)$ is simple, as proved by Bavula in~\cite{Bav}. We state this as \begin{prop}\label{prop:desc} Let $a\in k[h]$ be a polynomial such that no pair of its roots are conjugated by $\sigma$ in the sense explained above. Let $G$ be any finite subgroup of $\Aut_k(A)$. Then there are isomorphisms \[ H\!H^*(A^G) \cong H^*(A,A*G)^G \cong \bigoplus_{\cl{g}\in\cl{G}}H^*(A,Ag)^{{\cal Z}_g}, \] where the sum is over the set $\cl G$ of conjugacy classes $\cl g$ of $G$, and, for each $g\in G$, ${\cal Z}_g$ is the centralizer of $g$ in $G$. Also, there are duality isomorphisms $H\!H_*(A^G)\cong H\!H^{2-*}(A^G)$. \end{prop} \begin{proof} Given the hypotheses in the statement, we are in a situation similar to the one considered in \cite{afls}. The proposition follows from the arguments presented there. The last part concerning homology follows from the duality theorem of \cite{VdB}, since the global dimension of $A^G$ is finite; see also section~7 in~\cite{afls}. \end{proof} Under appropriate conditions on the $g\in G$, we are able to compute the ${\cal Z}_g$-module $H_*(A,Ag)$.
This module always has finite dimension as a $k$-vector space (see proposition \ref{prop:coef}) and the action of ${\cal Z}_g$ is determined by an element $\Omega\in \Aut_k(A)$ (see proposition \ref{prop:action}). This automorphism $\Omega$ is a generalization of the Cartan involution of ${\frak sl}_2$, and is explained in more detail in section \ref{section:cartan}. We state the theorem now, but since we need to know some facts about the group $\Aut_k(A)$, its proof will only be completed in section \ref{section:end}. \begin{teo}\label{teoG} Let us consider a GWA $A=A(k[h],\sigma, a)$ which is simple. Let $G\subset \Aut_k(A)$ be a finite subgroup such that every element $g$ of $G$ is conjugate in $\Aut_k(A)$ to an element of the torus $k^*$. Let us define $a_1:=\#\{\cl{g}\in\cl{G}\setminus\{I\!d\}\hbox{ such that } \Omega\notin{\cal Z}_g\}$ and $a_2:=\#\{\cl{g}\in\cl{G}\setminus\{I\!d\}\hbox{ such that }\Omega\in{\cal Z}_g\}$. We have that \[ \dim_kH\!H^p(A^G)= \begin{cases} 1 & \text{if $p=0$}\\ (n-1)+na_1+[(n+1)/2]a_2 & \text{if $p=2$}\\ 0 & \text{if $p=1$ or $p>2$}\\ \end{cases} \] \end{teo} \begin{remark} In particular, if the action of ${\cal Z}_g$ is trivial, the formula for $H\!H^2$ gives $\dim H\!H^2(A^G)=n\cdot\#\cl{G}-1$. \end{remark} \begin{proof} Using the hypotheses and the above proposition, the proof will follow from the computation of the dimensions of $H^*(A,Ag)$ (proposition \ref{prop:coef}) and the characterization of the action (proposition \ref{prop:action}). \end{proof} We state the following proposition for diagonalizable automorphisms $g$ of $A$ not necessarily of finite order, although we do not need such generality. \begin{prop}\label{prop:coef} Let $g\in \Aut_k(A)$ be different from the identity and conjugate to an element of $k^*$. Then $H^0(A,Ag)=H^1(A,Ag)=0$, $\dim_kH^2(A,Ag)=n$, and $\dim_kH^*(A,Ag)=d$ for each $*>2$, where $d=\deg(a;a')$. Also, $\dim_kH_0(A,Ag)=\deg a=n$, and, for all $*>0$, $\dim_kH_*(A,Ag)=d$.
\end{prop} We can assume that $g\in\Aut_k(A)$ is in fact in $k^*$, since, by Morita invariance, $H^*(A,Ag)\cong H^*(A,A\varphi g\varphi^{-1})$ for all $\varphi\in\Aut_k(A)$. The groups $H^*(A,Ag)$ can be computed using the complex obtained by applying to the resolution \eqref{ddag} the functor $\Hom_{A^e}({-},Ag)$; it can be identified, using the same idea as in section \ref{sect:coho}, with the double complex \[ \xymatrix@C-10pt@R-10pt{ 0\ar[r]& Ag\ot \Lambda^3V \ar[d]\ar[r]& Ag\ot \Lambda^2V\ar[d] \ar[r]& Ag\ot V \ar[d]\ar[r]& Ag \ar[d]\ar[r]& 0\ar[d]&&\\ &0\ar[r]& Ag\ot \Lambda^3V \ar[d]\ar[r]& Ag\ot \Lambda^2V\ar[d] \ar[r]& Ag\ot V \ar[d]\ar[r]& Ag \ar[d]\ar[r]& 0\ar[d]&\\ &&0\ar[r]& Ag\ot \Lambda^3V \ar[d]\ar[r]& Ag\ot \Lambda^2V\ar[d] \ar[r]& Ag\ot V \ar[d]\ar[r]& Ag \ar[d]\ar[r]& 0\\ &&&&&&&} \] This complex is graded (setting $\mathrm{weight}(g)=0$) in a way analogous to the complex which computes $H\!H^*(A)$. It is straightforward to verify that the homotopy defined in the proof of proposition \ref{prop:reduction} may also be used in this case. As a consequence, the cohomology of the rows is concentrated in weight zero. \subsection{The term $E_1$} \paragraph{Computation of $H^0(A,Ag)$.} Let $p\in k[h]$ and assume $pge_x\wedge e_y\wedge e_h\in \Ker(d_{CE})$, that is, that \[ (\sigma(p)-wp)xge_y\wedge e_h- (\sigma^{-1}(p)-w^{-1}p)yge_x\wedge e_h+ 0\,e_x\wedge e_y=0. \] Then $(\sigma-w.I\!d)(p)=0$, so $p=0$, and the cohomology in degree zero vanishes. \paragraph{Cohomology of the rows, degree $1$.} Given $u$, $v$, $t$ in $k[h]$, a computation shows that $uge_x\wedge e_y+ vyge_x\wedge e_h+ txge_y\wedge e_h\in \Ker(d_{CE})$ if and only if \begin{multline*} (\sigma^{-1}(u)-w^{-1}u)yge_x +(wu-\sigma(u))xge_y+\\ +(w^{-1}t\sigma(a)-\sigma^{-1}(t)a -u(\sigma(a')-a') +wva-\sigma(va))g e_h=0.\qquad\qquad\qquad \end{multline*} Using again that $\sigma-w.I\!d$ is an isomorphism, we conclude that in order for the coefficient of $e_y$ to vanish, $u$ must be zero.
Looking at the coefficient of $e_h$, we obtain the only other condition: \[ w^{-1}t\sigma(a)-\sigma^{-1}(t)a +wva-\sigma(va)= (\sigma-w.I\!d)((w^{-1}\sigma^{-1}(t)-v)a)=0, \] so $w^{-1}\sigma^{-1}(t)-v=0$. We conclude that any $1$-cocycle is of the form \[ d_{CE}(pge_x\wedge e_y\wedge e_h)= vyge_x\wedge e_h+ w\sigma(v)xge_y\wedge e_h, \] where $p\in k[h]$ is chosen so that $\sigma(p)-wp=w\sigma(v)$. It follows immediately that the cohomology of the rows in degree $1$ is zero. \paragraph{Cohomology of the rows, degree $2$.} The $2$-coboundaries are expressions of the form \begin{multline}\label{eq:2bd} (\sigma^{-1}(u)-w^{-1}u)yge_x +(wu-\sigma(u))xge_y+ \\ +(w^{-1}t\sigma(a)-\sigma^{-1}(t)a -u(\sigma(a')-a') +wva-\sigma(va))g e_h \end{multline} A $2$-cochain $\alpha=pyge_x+qxge_y+rge_h$, with $p$, $q$, $r\in k[h]$, is a cocycle if and only if $q=w\sigma(p)$ holds; in particular, this imposes no conditions on $r$. We can then assume that $p=q=0$ in $\alpha$, because one can add to $\alpha$ a coboundary of the form $d(ug e_x\wedge e_y)$ with $u\in k[h]$. We now want to decide when such a $2$-cocycle is a coboundary. In view of \eqref{eq:2bd} and the fact that $\sigma-w.I\!d$ is an isomorphism, we see at once that $u$ must be zero. We are reduced to solving the equation \[ w^{-1}t\sigma(a)-\sigma^{-1}(t)a +wva-\sigma(va)=(\sigma-w.I\!d)((w^{-1}\sigma^{-1}(t)-v)a)= r. \] This can be solved if and only if $(\sigma-w.I\!d)^{-1}(r)$ is a multiple of $a$, so the subspace of those $r$ for which a solution exists has codimension $\deg a=n$; in other words, the dimension of the cohomology of the rows in degree $2$ is $n$. \paragraph{Cohomology of the rows, degree $3$.} The coboundaries of weight zero are of the form \[ d(pyge_x+qxge_y+rge_h)= (wpa-\sigma(pa)+w^{-1}q\sigma(a)-\sigma^{-1}(q)a)g= (\sigma-w.I\!d)((w^{-1}\sigma^{-1}(q)-p)a)g \]
Every polynomial in $k[h]$ can be written as $w^{-1}\sigma^{-1}(q)-p$ for some $p, q\in k[h]$, so, since the map $\sigma-w.I\!d:k[h]\to k[h]$ is an isomorphism, we see that the dimension of the cohomology of the row complex in degree $3$ is equal to $\dim_kk[h]/ak[h]=\deg a=n$. \subsection{The term $E_2$} In view of the above computations, the dimensions of the components of the $E_1$-term of the spectral sequence are as follows: \[ \begin{array}{cccccc} \fbox{$0$}&0&n&n& \\ &0&0&n&n& \\ &&0&0&n&n \\ \end{array} \] Consequently, the only relevant vertical differential is \begin{align*} .df: Ag&\to Ag\ot V\\ bg&\mapsto bw^{-1}yge_x+\sigma(b)xge_y-ba'ge_h. \end{align*} Adding $d(qge_x\wedge e_y)$ one sees that this element is cohomologous to $(-ba'+ q(\sigma(a')-a') )ge_h$, where $q\in k[h]$ is such that $(\sigma-w.I\!d)(q)=-\sigma(b)$. But then $b=w\sigma^{-1}(q)-q$, and therefore \[ .df(bg)=(-w\sigma^{-1}(q)a'+ q\sigma(a') )ge_h= (\sigma-w.I\!d)(\sigma^{-1}(q)a')ge_h. \] On the other hand, the target of $.df$ has been already shown to be isomorphic to $k[h]/(\sigma-w.I\!d)(a.k[h])$. Under this isomorphism, the cokernel of $.df$ is isomorphic to $k[h]/(\sigma-w.I\!d)(a.k[h]+a'.k[h])$. Since we have assumed that $(a;a')=1$, the cokernel of $.df$ is zero, and by counting dimensions, the kernel of $.df$ also vanishes. This proves the first part of proposition \ref{prop:coef}; the rest of the statements thereof follow from similar computations, which we omit. In particular, theorem \ref{teoG} follows. We would like to observe that in order to prove theorem \ref{teoG}, if one assumes that the action of ${\cal Z}_g$ is trivial, then the full strength of proposition \ref{prop:coef} is not needed. Indeed, let $g\in G$. 
Using proposition \ref{prop:desc} for the cyclic subgroup $C=\langle g\rangle\subset\Aut_k(A)$ generated by $g$, and the hypothesis on the triviality of the action of the centralizers in homology, we see that the following relation holds for all $p\geq0$: \begin{align} \dim_kH\!H_p(A^{C}) & = \dim_kH\!H_p(A)^{C} +\sum_{1\leq i<|g|} \dim_kH_p(A,Ag^i)^{C}\label{eq:rel}\\ & = \dim_kH\!H_p(A) +\sum_{1\leq i<|g|} \dim_kH_p(A,Ag^i).\notag \end{align} Now, in view of lemma \ref{lemma:inv}, the algebra $A^C$ is a GWA, so we already know its homology. For $p=1$ or $p\geq3$ it vanishes, and in particular $H_p(A,Ag^i)=0$ for every $i$; for $p=2$, $\dim_kH\!H_2(A^{C})=1=\dim_kH\!H_2(A)$, so again we have $H_2(A,Ag^i)=0$ for every $i$. Finally, computing $g$-commutators as at the end of section 5.1, it is easy to see that $\dim_kH_0(A,Ag^i)\leq n$ for all $1\leq i<|g|$; since $\dim_kH\!H_0(A^{C})=|g|n-1$ and $\dim_kH\!H_0(A)=n-1$, relation \eqref{eq:rel} forces $\dim_kH_0(A,Ag^i)=n$. \section{Applications}\label{sect:apps} \subsection{The usual Weyl algebra} The results of the previous sections apply to the case when $A=A_1(k)$ ($\chr k=0$) and $G$ is an arbitrary finite subgroup of $\Aut_k(A)$, because in this case the finite-order automorphisms of $A$ are always diagonalizable, and ${\cal Z}_g$ acts trivially on $H_*(A,Ag)$ for all such $g$. We then recover the results of \cite{afls}. \subsection{Primitive quotients of ${\cal U}(\mathfrak{sl}_2)$} If the polynomial $a$ is of degree two, then $A$ is isomorphic to one of the maximal primitive quotients of ${\cal U}(\mathfrak{sl}_2)$. In this case, O.~Fleury \cite{odile} has proved that the group of automorphisms is isomorphic to the amalgamated product of $\mathit{PSL}(2,{\mathbb{C}})$ with a torsion-free group. The action of $\mathit{PSL}(2,{\mathbb{C}})$ is the one coming from the adjoint action of $\mathit{SL}(2,{\mathbb{C}})$ on ${\cal U}(\mathfrak{sl}_2)$.
There is then a simple classification, up to conjugacy, of all finite groups of automorphisms of $A$: they are the cyclic groups $A_n$, the binary dihedral groups $D_n$, and the binary polyhedral groups $E_6$, $E_7$ and $E_8$; cf.~\cite{Springer}. In her thesis, for the regular case, O.~Fleury \cite{odile} has computed, case by case, the action of the centralizers, and in this way she obtains $H\!H_0(A^G)$. For positive degrees, following proposition \ref{prop:desc} one has to compute $H_*(A,Ag)$ and the action of ${\cal Z}_g$ on it. By proposition \ref{prop:coef} one knows that $H_*(A,Ag)=0$ for $*>0$ and $g\neq 1$, so, in positive degrees, $H\!H_*(A^G)=H\!H_*(A)^G$. But the only positive degree in which $H\!H_*(A)$ is non-zero is $*=2$, and $H\!H_2(A)\cong H\!H^0(A)={\cal Z}(A)=k$. Since the duality isomorphism is $G$-equivariant, the action of $G$ on $H\!H_2(A)$ is trivial, and we conclude that $H\!H_*(A^G)=0$ for $*\neq 0,2$ and $H\!H_2(A^G)=k$. Using duality, one obtains the cohomology. For the non-regular case, what we are able to compute is not $H\!H_*(A^G)$ but $H\!H_*(A\#G)$. The computation of the action of the centralizers is discussed in the next sections, because those actions can be described in general, and also ``explain'' the computations made by O.~Fleury. \subsection{The Cartan involution\label{section:cartan}} In the case of ${\cal U}({\frak sl}_2)$ there is a special automorphism (that descends to $B_{\lambda}$) defined by $e\mapsto f$, $f\mapsto e$ and $h\mapsto -h$. For an arbitrary GWA $A$ with defining polynomial $a(h)$ of degree $n$, there are some particular cases in which a similar automorphism is defined. In \cite{BavJor} the authors find generators of the automorphism group of GWAs.
It turns out that $\Aut_k(A)$ is generated by the torus action and exponentials of inner derivations, and, in the case that there is $\rho\in {\mathbb{C}}$ such that $a(\rho -h)=(-1)^{n}a(h)$, a generalization of the Cartan involution, still denoted $\Omega$, is defined as follows: \[x\mapsto y, \qquad y\mapsto (-1)^{n}x, \qquad h\mapsto 1+\rho-h.\] In this section, let ${\cal G}$ denote the subgroup of $\Aut_k(A)$ generated by the torus and the exponentials. When the polynomial is reflective (i.e. $a(\rho-h)=(-1)^na(h)$), the group generated by ${\cal G}$ and $\Omega$ coincides with $\Aut_k(A)$. If the polynomial is not reflective, ${\cal G}=\Aut_k(A)$. In \cite{Dixmier} Dixmier shows, in the case of ${\cal U}({\frak sl}_2)$, that $\Omega$ belongs to ${\cal G}$ (and of course this fact descends to the primitive quotients). This situation corresponds to $\deg(a)=2$. In \cite{BavJor} (remark 3.30), the authors ask whether this automorphism $\Omega$ belongs to the subgroup generated by the torus and exponentials. We will answer this question by looking at the action of the group of automorphisms on $H\!H_0(A)$, so we begin by recalling the expression of the exponential-type automorphisms. Let $\lambda\in {\mathbb{C}}$ and $m\in {\mathbb{N}}_0$; the two exponential automorphisms associated to them are defined as follows: \begin{align*} \phi_{m,\lambda}:&=\exp(\lambda \ad( y^m))\\ x&\mapsto x+\sum_{i=1}^n\frac{(-\lambda)^i}{i!}(\ad y^m)^i(x)\\ y&\mapsto y\\ h&\mapsto h+m\lambda y^{m} \end{align*} \begin{align*} \psi_{m,\lambda}:&=\exp(\lambda \ad( x^m))\\ x&\mapsto x\\ y&\mapsto y+\sum_{i=1}^n\frac{\lambda^i}{i!}(\ad x^m)^i(y)\\ h&\mapsto h-m\lambda x^{m} \end{align*} We know that $H\!H_0(A)$ has $\{[1],[h],[h^2],\dots,[h^{n-2}]\}$ as a basis. If one assumes $n>2$ then the action of $\Omega$ on $H\!H_0(A)$ is not trivial.
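This can be checked already on the class of $h$: since $\Omega(h)=1+\rho-h$, one has \[ \Omega([h])=[1+\rho-h]=(1+\rho)[1]-[h]\neq [h], \] because $[1]$ and $[h]$ are linearly independent in $H\!H_0(A)$ as soon as $n>2$.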
On the other hand, since the homogeneous components of weight different from zero are commutators, the action of $\psi_{m,\lambda}$ and of $\phi_{m,\lambda}$ is trivial on $H\!H_0(A)$. To see this we consider for example \[\phi_{m,\lambda}(h^i)= \phi_{m,\lambda}(h)^i= (h+m\lambda y^{m})^i\] and it is clear that the 0-weight component of $(h+m\lambda y^m)^i$ equals $h^i$. The action of the torus is also trivial on $H\!H_0(A)$ (it is already trivial on $k[h]$). We conclude that for $n>2$, the automorphism $\Omega$ cannot belong to ${\cal G}$. \subsection{End of the proof of Theorem \ref{teoG}\label{section:end}} The hypothesis of theorem \ref{teoG} is that in the finite group $G$ under consideration, every element $g$ is conjugate (in $\Aut_k(A)$) to an element of the torus. We will show that the triviality of the action of ${\cal Z}_g$ on $H_*(A,Ag)$ is generically satisfied, and that whether the action of ${\cal Z}_g$ is trivial depends only on whether $\Omega$ belongs to ${\cal Z}_g$. \begin{prop}\label{prop:action} Let $A$ be a GWA, $G$ a finite subgroup of $\Aut_k(A)$ such that every $g\in G$ is conjugate to an element of the torus. If $\Omega\notin {\cal Z}_g$ then the action of ${\cal Z}_g$ on $H_*(A,Ag)$ is trivial. \end{prop} \begin{proof} Let us suppose that $a$ is not reflective and let $g\in G$. If $g$ is the identity, then the centralizer of $g$ in $\Aut_k(A)$ is $\Aut_k(A)$ itself, and the triviality of the action on $H\!H_0$ was explained in the previous section. For $H\!H_2(A)\cong H\!H^0(A)={\cal Z}(A)=k$ the action is always trivial. When $*\neq 0,2$, $H\!H_*(A)=0$. Let us now consider $g\neq I\!d$. Up to conjugation we may assume that $g(h)=h$, $g(x)=wx$ and $g(y)=w^{-1}y$, where $w$ is a root of unity. After proposition \ref{prop:coef} the only non-zero homology group is $H_0(A,Ag)$ which has basis $\{g, hg, h^2g,\dots,h^{n-1}g\}$. If $g'\in \Aut_k(A)$ commutes with $g$ then it induces an automorphism $g'|_{A^g}:A^g\to A^g$.
This defines a map ${\cal Z}_g\to \Aut_k(A^g)$. But $A^g=\cl{h, x^{|w|}, y^{|w|} }$ is again a GWA so one knows the generators of its group of automorphisms. It is not hard to see that if $a$ is not reflective then the polynomial $\wt{a}$ associated to $A^g$ is not reflective either. In this case $\Aut_k(A^g)$ is generated by the torus and exponentials of $\lambda\ad x^{m|w|}$ and $\lambda\ad y^{m|w|}$. On the other hand, an automorphism $g'$ induces the identity on $A^g$ if and only if it fixes $h$, $x^{|w|}$ and $y^{|w|}$, and by degree considerations it is clear that $g'(x)$ must be a multiple of $x$ and analogously for $y$. We conclude that the group of elements (in $\Aut_k(A)$) commuting with $g$ is generated by the torus and exponentials of $\lambda \ad x^{m|w|}$ and $\lambda \ad y^{m|w|}$. It is clear that the torus acts trivially on the vector space spanned by $\{g, hg, h^2g,\dots,h^{n-1}g\}$. A computation similar to the one done at the end of section \ref{section:cartan} shows that $\phi_{m,\lambda}(h^ig)=h^ig$ modulo $g$-commutators, and analogously for $\psi$. If the polynomial $a$ is reflective then $\Aut_k(A)$ is generated by ${\cal G}$ and $\Omega$. We have just seen the triviality of the action for the generators of ${\cal G}$, and $\Omega$ is excluded by hypothesis. \end{proof} We now finish the proof of Theorem \ref{teoG}: We will explain the formula $\dim H\!H_0(A^G)=(n-1)+n.a_1+[(n+1)/2].a_2$. From the decomposition $H\!H_0(A^G)=\bigoplus_{\cl{g}\in \cl{G}}H_0(A,Ag)^{{\cal Z}_g}$, the first ``$n-1$'' comes from the summand corresponding to the identity element, which contributes $\dim H\!H_0(A)^G=\dim H\!H_0(A)=n-1$. The ``$n.a_1$'' comes from the terms corresponding to conjugacy classes $\cl{g}$ such that ${\cal Z}_g$ does not contain $\Omega$, because in this case $\dim H_0(A,Ag)^{{\cal Z}_g}= \dim H_0(A,Ag)=n$.
Finally, the summand ``$[(n+1)/2].a_2$'' corresponds to the conjugacy classes having $\Omega$ in their centralizers; in these cases the dimension of $(kg\oplus khg\oplus kh^2g\oplus \dots \oplus kh^{n-1}g)^{\Omega}$ is the integer part of $(n+1)/2$. \begin{remark} This result explains, for $n=2$, the case by case computations of $H\!H_0(B_{\lambda}^G)$ made by O.~Fleury in \cite{odile}, see also \cite{odile2}. \end{remark} \end{document}
\begin{document} \begin{abstract} We investigate uniform, strong, weak and almost weak stability of multiplication semigroups on Banach space valued $L^p$-spaces. We show that, under certain conditions, these properties can be characterized by analogous ones of the pointwise semigroups. Using techniques from selector theory, we prove a spectral mapping theorem for the point spectra of the pointwise and global semigroups and apply this as a major tool for determining almost weak stability. \end{abstract} \maketitle One of the significant features of the Fourier transform is that it converts a differential operator into a multiplication operator induced by some scalar-valued function. The properties of the original operator are then determined by the values of this function. The same holds if a system of differential operators is transformed into a matrix valued multiplication operator on a vector valued function space. This motivates the systematic investigation of multiplication operators on Banach space valued function spaces. Such operators (and semigroups generated by them) have been studied by, e.g. Holderrieth \cite{Hol91} as well as Hardt and Wagenf\"uhrer \cite{HaWa96} for matrix multiplication semigroups and by Arendt and Thomaschewski \cite{ThoAr05} and Graser \cite{Gra97} in the infinite dimensional case. See also \cite[Section 4]{MuNi11} and \cite{Neerven93}. Qualitative properties of such semigroups, e.g. various stability concepts, are of great interest in control theory (see e.g. \cite{CIZ09}). In fact, motivated by these applications, Hans Zwart has proved a characterisation of strong stability of a multiplication semigroup in the finite dimensional case (see \cite{Zwart}), while so-called polynomial stability of multiplication semigroups is characterized in Theorem 4.4 of \cite{BEP06}. 
The general question in this context is to what extent the global properties of a multiplication operator are determined by the local properties of the pointwise operators. In this paper we systematically investigate spectral and stability properties of multiplication semigroups. Our aim is thus to understand how these properties are related to those of the corresponding pointwise operators or semigroups, as explained below. As a major tool and also as a result of independent interest, we obtain a complete characterisation of the eigenvalues of multiplication operators (Theorem \ref{thmPtSpM}). Furthermore, we indicate how the global and local stability properties are related for uniform (Theorem \ref{thmUniform}), strong (Theorem \ref{thmSG_STRONG}), weak (Proposition \ref{propWeak}) and almost weak (Theorem \ref{thmAwsSg}) convergence. Finally, we include a section in which we state analogous stability results for the powers of multiplication operators. Throughout this text we assume that the measure space $(\Omega,\Sigma, \mu)$ is $\sigma$-finite. For a separable Banach space $X$, $L^p(\Omega,X)$ denotes the Bochner space $L^p(\Omega,\Sigma,\mu;X)$ for a fixed value of $1\leq p \leq\infty$ (see e.g.\ \cite{abhn01} or \cite{DU77}). \begin{defn}[Multiplication Operator, Pointwise Operator]\label{defMULT} Let $X$ be a separable Banach space and let $M:\Omega\rightarrow\Lop$ be such that, for every $x\in X$, the function $$ \Omega \ni s\mapsto M(s)x \in X $$ is Bochner measurable. The {\em multiplication operator} ${\mathcal{M}}$ on $L^p(\Omega,X)$, with $1\leq p \leq\infty$, is defined by $$ ({\mathcal{M}} f)(s) := M(s)f(s) \text{ for all } s\in\Omega $$ with $$ D({\mathcal{M}})=\Bigl\{f\in L^p(\Omega,X)\ :\ M(\cdot)f(\cdot)\in L^p(\Omega,X) \Bigr\}. $$ In this context, we call the operators $M(s)$ with $s\in\Omega$ the {\em pointwise operators} on $X$.
\end{defn} \begin{rem} The Bochner measurability of $s\mapsto M(s)x$ for all $x \in X$ implies that the function $M(\cdot)f(\cdot)$ is also measurable if $f\in L^p(\Omega,X)$, see \cite[Lemma 2.2.9]{Tho03}. \end{rem} \begin{defn}[Multiplication Semigroup]{\cite[Definition 2.3.8, p.\,45]{Tho03}} If a $C_0$-semigroup $\bigl({\mathcal{M}}(t)\bigr)_{t\geq 0}$ on $L^p(\Omega,X)$ consists of multiplication operators on $L^p(\Omega,X)$, it is called a {\em multiplication semigroup}. (For the general theory of $C_0$-semigroups, we refer to \cite{enna00}.) \end{defn} \begin{rem}\label{remNOT} If $\bigl({\mathcal{A}},D({\mathcal{A}})\bigr)$ is the generator of the multiplication semigroup $\bigl({\mathcal{M}}(t)\bigr)_{t\geq 0}$, then there exists a family of operators $A(s)$ with domain $D(A(s))$ on $X$ such that $A(s)$ is the generator of a $C_0$-semigroup on $X$ for all $s\in\Omega\backslash N$ for some null set $N$ (see \cite[Theorem 2.3.15, p.\,49]{Tho03}). We call these semigroups on $X$ the {\em pointwise semigroups}. We denote the multiplication semigroup $\bigl({\mathcal{M}}(t)\bigr)_{t\geq 0}$ by $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ and the pointwise semigroups by $\bigl(e^{tA(s)}\bigr)_{t\geq 0}$ for all $s\in\Omega\backslash N$. Furthermore, for every $t\geq 0$, the function from $\Omega$ to $\Lop$, $s\mapsto e^{tA(s)}$, is measurable and the operator $e^{t{\mathcal{A}}}$ is the corresponding multiplication operator on $L^p(\Omega,X)$, cf. \cite[Proposition 2.3.12, p.\,48]{Tho03}. \end{rem} Before discussing stability properties, we recall the characterisation of boundedness of a multiplication operator by Thomaschewski (cf. \cite{Tho03}) or Klaus-J.\ Engel (cf. \cite[Chapter IX, Proposition 1.3]{en97}). \begin{lem}\label{lemNORM}\cite[Proposition 2.2.14, p.\ 35]{Tho03} Let ${\mathcal{M}}$ be a multiplication operator on $L^p(\Omega,X)$ induced by the function $M(\cdot)$ as in Definition \ref{defMULT}.
The operator ${\mathcal{M}}$ is bounded if and only if the function $M(\cdot)$ is essentially bounded, i.e. if the function $M(\cdot)$ is bounded up to a set of measure zero. In this case, we have that \begin{align*} \|{\mathcal{M}}\| &= \essup_{s\in\Omega}{\|M(s)\|} \\ &:= \inf\Bigl\{ C \geq 0\ :\ \mu\bigl(\left\{s\in \Omega\ :\ \|M(s)\| > C\right\}\bigr) = 0 \Bigr\}. \end{align*} \end{lem} We are interested in the extent to which stability properties of the pointwise semigroups determine stability properties (see the sections below) of the corresponding multiplication semigroup. This is not always the case, as is illustrated by a simple modification of Zabczyk's classical counterexample to the spectral mapping theorem for $C_0$-semigroups (cf. \cite{enna00} p. 273, Counterexample 3.4). Other examples are discussed in \cite[Section 2]{CIZ09}. \section{Eigenvalues of a multiplication operator} Since many stability properties can be characterised by spectral properties (see, for instance, \cite[Chapter IV and V]{enna00}), we start by investigating the relation between the pointwise spectra $\sigma(M(s))$ with $s\in\Omega$ and the global spectrum $\sigma({\mathcal{M}})$. As a first and important step, we characterise the eigenvalues of ${\mathcal{M}}$ via $M(s)$ with $s\in\Omega$. We take note of the fact that the nontrivial implication in the following theorem is essentially a selector result. See, for instance, \cite{KurRyll_65} for a standard selector theorem. It is possible to present a proof based on methods from this theory, but our proof is more elementary and not less elegant. We would like to mention that the idea of the following proof has been sparked by discussions with Hans Zwart.
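Before the formal statement, a standard scalar example may serve as orientation: let $\Omega=[0,1]$ with Lebesgue measure, $X=\mathbb{C}$ and $M(s)x:=sx$. Every pointwise operator has $\PtSp{M(s)}=\{s\}$, but a fixed $\lambda$ is an eigenvalue of $M(s)$ only on the null set $\{\lambda\}$; correspondingly, an eigenfunction $f$ of ${\mathcal{M}}$ for $\lambda$ would satisfy
\[
(s-\lambda)f(s)=0 \quad\text{for almost all } s\in[0,1],
\]
hence $f=0$ in $L^p([0,1])$, and $\PtSp{{\mathcal{M}}}=\emptyset$.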
\begin{thm}\label{thmPtSpM} For a multiplication operator ${\mathcal{M}}$ on $L^p\left(\Omega,X\right)$ with $1\leq p\leq\infty$ induced by the pointwise operators $\{M(s)\ :\ s\in\Omega\}$ and an arbitrary $\lambda \in \mathbb{C}$, the following equivalence holds: $$ \lambda \in \PtSp{{\mathcal{M}}} \Longleftrightarrow \lambda \in \PtSp{M(s)} \mbox{ for $s \in Z$, } $$ where $Z$ is a measurable subset of $\Omega$ with $\mu(Z)>0$. \end{thm} \begin{proof} Replacing ${\mathcal{M}}$ by ${\mathcal{M}}-\lambda$ and each $M(s)$ by $M(s)-\lambda$, we may assume that $\lambda = 0$. Assume that the pointwise operators $M(s)$ are injective, i.e., the kernels of $M(s)$ are zero for almost all $s\in\Omega$. Choose an arbitrary function $0\not=f\in L^p(\Omega,X)$. It is clear that ${\mathcal{M}} f \not= 0$, hence the kernel of ${\mathcal{M}}$ is zero. Assume now that $M(s)$ is not injective for all $s$ in a set $N$ such that $\mu(N) > 0$. Without loss of generality, take $N = \Omega$. \begin{description} \item[{Idea of proof}] \\ We will construct a sequence of countably-valued measurable functions $(f_m)$ in $L^p(\Omega,X)$, converging pointwise to some non-zero function, such that the sequence of functions $({\mathcal{M}} f_m)$ converges to zero in norm. Choose $$ W := \{w_n : n\in\mathbb{N}\} $$ as a countable dense subset of the unit sphere of $X$. We construct for every $m\in\mathbb{N}$ countably many disjoint, measurable sets $\tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ indexed by $(n_1,n_2,\hdots,n_m) \in \prod_{k=1}^m\mathbb{N}$ such that \begin{equation} \label{eq:normM} \|M(s)w_{n_m}\| \leq \frac{1}{m} \text{ for all } s \in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}, \end{equation} \begin{align}\label{eq:inclusion} \Omega = \bigcup_{n_1\in\mathbb{N}} \bigcup_{n_2\in\mathbb{N}}\cdots \bigcup_{n_m\in\mathbb{N}} \tilde\Omega_{(n_1,n_2,\hdots,n_m)}, \end{align} and \begin{equation} \label{eq:subset} \tilde\Omega_{(n_1,n_2,\hdots,n_m)} \subseteq \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}.
\end{equation} Furthermore, if $s \in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ then, for every $m \leq r \in \mathbb{N}$, there exists a convergent sequence $(a_k)_{k\in\mathbb{N}} \subseteq W$ with $a_k = w_{n_k}$ for $1 \leq k \leq m$, such that $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Then, for every $m\in\mathbb{N}$, we define $f_m$ as $$ f_m := \sum_{(n_1,n_2,\hdots,n_{m}) \in \prod_{k=1}^{m}\mathbb{N}} {\mathbbm{1}_{\tilde\Omega_{(n_1,n_2,\hdots,n_m)}} w_{n_m}}. $$ For every $s\in\Omega$, the sequence $$ (f_m(s))_{m\in\mathbb{N}} = (w_{n_m})_{m\in\mathbb{N}} $$ converges, $\|f_m(s)\| = \| w_{n_m} \| = 1 $ for all $m\in\mathbb{N}$ and $$ \| M(s)f_m(s) \| = \| M(s)w_{n_m} \| \leq \frac{1}{m}. $$ \item[{Definitions}] \\ Consider the following set of (convergent) sequences $$ \mathcal{W} := \left\{ \begin{array}{ll} (a_{k})_{k\in\mathbb{N}} : & a_{k} \in W \text{ for all } k\in\mathbb{N}, \text{ and } \\ & \text{ for each } q\in\mathbb{N} , \|a_q - a_{q+j}\| < \frac{1}{q} \\ & \text{ for all } j\in\mathbb{N} \end{array} \right\}. $$ Note that, to each sequence $(a_{k})_{k\in\mathbb{N}} \in \mathcal{W}$, there is a corresponding function \begin{equation} \label{eq:beta} \beta : \mathbb{N} \rightarrow \mathbb{N} \end{equation} such that $a_k = w_{\beta(k)}$ for all $k\in\mathbb{N}$. Hence we can write $(a_k) = (w_{\beta(k)})$. For each $m\in\mathbb{N}$ define the set $\mathcal{W}_{(n_1,n_2,\ldots,n_m)} \subseteq \mathcal{W}$ for each $m$-tuple $(n_1,n_2,\ldots,n_m) \in \prod_{j=1}^{m}\mathbb{N}$ as $$ \mathcal{W}_{(n_1,n_2,\ldots,n_m)} := \left\{ \begin{array}{ll} (a_{k})_{k\in\mathbb{N}} \in \mathcal{W} : & a_{k} = w_{n_k} \text{ for } 1 \leq k \leq m \end{array} \right\}. $$ Now, for all $n,j \in \mathbb{N}$, define the subsets $$ \Omega_{n,j} := \left\{s \in \Omega : \|M(s)w_n\| \leq \frac{1}{j}\right\} $$ of $\Omega$. \item[{Construction}] \\ Take $m=1$.
We now define the sets $\tilde\Omega_{(n_1)}$ for each $n_1 \in \mathbb{N}$ as \begin{align} \Omega_{(n_1)} &:= \left\{ \begin{array}{ll} s \in \Omega\ : & \text{ for each } r\in\mathbb{N} \text{ there exists a sequence } \\ & (a_{k})_{k\in\mathbb{N}} = (w_{\beta(k)})_{k\in\mathbb{N}} \in \mathcal{W}_{(n_1)} \\ & \text{ such that } s\in\bigcap_{j=1}^r{\Omega_{\beta(j),j}} \end{array} \right\} \notag\\ &\ = \bigcap_{r \in \mathbb{N}} \left( \bigcup_{ (w_{\beta(k)})_{k\in\mathbb{N}} \in \mathcal{W}_{(n_1)} } \left( \bigcap_{j=1}^r{\Omega_{\beta(j),j}} \right) \right) \label{eq:cntble} \intertext{and} \tilde\Omega_{(n_1)} &:= \Omega_{(n_1)} \backslash \bigcup_{q < n_1}\tilde\Omega_{(q)}.\notag \end{align} The set $ \Omega_{(n_1)}$ is the countable intersection of the union of certain measurable sets of the form $\bigcap_{j=1}^r{\Omega_{\beta(j),j}}$. As we see in \eqref{eq:cntble} above, this union consists of those sets, for each of which the corresponding sequence $ (w_{\beta(j)})_{j\in\mathbb{N}}$ is in $\mathcal{W}_{(n_1)}$, which is an uncountable set. However, there are only countably many sets of the form $\bigcap_{j=1}^r{\Omega_{\beta(j),j}}$ for a fixed $r\in\mathbb{N}$ and hence the union can be written as a countable union. Hence $\Omega_{(n_1)}$ and also $\tilde\Omega_{(n_1)}$ are measurable. It now holds that \begin{equation*} \|M(s)w_{n_1}\| \leq 1 \end{equation*} for all $s \in \tilde\Omega_{(n_1)}$ and \begin{align*} \Omega = \bigcup_{n_1\in\mathbb{N}} \tilde\Omega_{(n_1)}. \end{align*} Furthermore, if $s \in \tilde\Omega_{(n_1)}$ then, for every $r\in\mathbb{N}$, there exists a (convergent) sequence $(a_k)_{k\in\mathbb{N}} \in \mathcal{W}$ with $a_1 = w_{n_1}$ and $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Now define the function $$ f_1 := \sum_{n_1 \in \mathbb{N}}\mathbbm{1}_{\tilde\Omega_{(n_1)}} w_{n_1}. $$ Observe that $\|f_1\| = 1$ and $\|{\mathcal{M}} f_1\| \leq 1$ as desired.
\vspace*{5pt} The recursive definition of the sets $\tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ now follows. Let $m \geq 2$ and assume that we have disjoint sets $\tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}$ with $(n_1,n_2,\hdots,n_{m-1}) \in \prod_{k=1}^{m-1}\mathbb{N}$ such that \begin{equation*} \|M(s)w_{n_{m-1}}\| \leq \frac{1}{m-1} \text{ for all } s \in \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}, \end{equation*} \begin{align*} \Omega = \bigcup_{n_1\in\mathbb{N}} \bigcup_{n_2\in\mathbb{N}}\cdots \bigcup_{n_{m-1}\in\mathbb{N}} \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}, \end{align*} and \begin{equation*} \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})} \subseteq \tilde\Omega_{(n_1,n_2,\hdots,n_{m-2})}. \end{equation*} Moreover, for every $s \in \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}$ and every $m-1 \leq r \in \mathbb{N}$, there exists a sequence $(a_k) \in \mathcal{W}$ with $a_k = w_{n_k}$ for $1 \leq k \leq m-1$ and $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Then, for every $(n_1,n_2,\hdots,n_{m-1}) \in \prod_{k=1}^{m-1}\mathbb{N}$ and $n_m \in \mathbb{N}$, define the measurable sets \begin{align*} \Omega_{(n_1,n_2,\hdots,n_m)} &:= \left\{ \begin{array}{ll} s \in \Omega\ : &\text{ for all } r\in\mathbb{N} \text{ there exists a sequence} \\ & (a_k)_{k\in\mathbb{N}} = (w_{\beta(k)})_{k\in\mathbb{N}}\in \mathcal{W}_{(n_1,n_2,\hdots,n_m)} \\ & \text{ such that } s\in\bigcap_{j=1}^r{\Omega_{\beta(j),j}} \end{array} \right\}\\ \intertext{and} \tilde\Omega_{(n_1,n_2,\hdots,n_m)} &:= \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})} \cap \left( \Omega_{(n_1,n_2,\hdots,n_m)} \backslash \bigcup_{q < n_m}\tilde\Omega_{(n_1,n_2,\hdots,n_{m-1},q)}\right).
\end{align*} For every $\tilde\Omega_{(n_1,n_2,\hdots,n_m)}$, the properties \eqref{eq:normM}, \eqref{eq:inclusion} and \eqref{eq:subset} hold and if $s \in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ then, for every $m \leq r \in \mathbb{N}$, there exists a sequence $(a_k) \in \mathcal{W}$ with $a_k = w_{n_k}$ for $1 \leq k \leq m$ and $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Define the function $$ f_m := \sum_{(n_1,n_2,\hdots,n_m) \in \prod_{k=1}^m\mathbb{N}}{\mathbbm{1}_{\tilde\Omega_{(n_1,n_2,\hdots,n_m)}} w_{n_m}}. $$ Thus the sequence $(f_m)_{m\in\mathbb{N}}$ has been constructed with the desired properties. Indeed, for every $m\in\mathbb{N}$, we have $\|f_m\| \geq 1$, because of \eqref{eq:inclusion} and the fact that, if $s\in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}$, then $\|f_m(s)\| = \|w_{n_m}\| = 1$. We also have that $\|{\mathcal{M}} f_m\| \leq \frac{1}{m}$. Furthermore, the sequences $(f_m(s))_{m\in\mathbb{N}}$ are convergent for every $s\in\Omega$. The pointwise limit $f$ is measurable, nonzero and in the kernel of ${\mathcal{M}}$. \end{description} \end{proof} We obtain the following corollary directly from the above theorem by using the spectral mapping theorem for the point spectrum of the resolvent of a closed operator (see, for instance, \cite[Chapter IV, Theorem 1.13]{enna00}). \begin{cor}\label{corPtSpA} If $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is a multiplication semigroup with generator ${\mathcal{A}}$ and $\lambda\in\mathbb{C}$, then the following equivalence holds. $$ \lambda \in \PtSp{{\mathcal{A}}} \Longleftrightarrow \lambda \in \PtSp{A(s)} \mbox{ for $s \in Z$, } $$ where $Z$ is a measurable subset of $\Omega$ with $\mu(Z)>0$. \end{cor} \section{Uniform Stability} This short section is devoted to the strongest notion of stability, namely uniform stability. \begin{defn}\cite[Definition V.1.1, p.
296]{enna00} A $C_0$-semigroup $\SG$ of operators on a Banach space is called {\em uniformly stable} if $\|T(t)\|\rightarrow 0$ as $t\rightarrow\infty$. \end{defn} Lemma \ref{lemNORM} immediately leads to the following characterisation of uniform stability of multiplication semigroups. \begin{thm} \label{thmUniform} Let $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ be a multiplication semigroup on $L^p(\Omega,X)$ with $1\leq p \leq \infty$. Then the following are equivalent. \begin{enumerate} \item[(a)] The semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is uniformly stable. \item[(b)] The pointwise semigroups converge to $0$ in norm, uniformly in $s$, i.e. $$ \essup_{s\in\Omega}{\left\|e^{t A(s)}\right\|} \rightarrow 0\ \text{ as }\ t\rightarrow\infty. $$ \item[(c)] For some $t_0 > 0$, the spectral radii $\sr{e^{t_0 A(s)}}$ of the pointwise semigroups satisfy $\essup_{s\in\Omega}{\sr{e^{t_0 A(s)}}} < 1$. \item[(d)] There exist constants $C \geq 1$ and $\epsilon > 0$ such that $\|e^{tA(s)}\| \leq Ce^{-t\epsilon }$ for all $t\geq 0$ and almost all $s$. \end{enumerate} \end{thm} \begin{rem} Note that, since condition (b) does not involve $p$, the uniform stability of a multiplication semigroup is independent of the value of $p$ as long as $1 \leq p \leq \infty$. \end{rem} \section{Strong Stability} In this section we study strong stability in our context. \begin{defn}\cite[Definition V.1.1, p. 296]{enna00} A $C_0$-semigroup $\SG$ of operators on a Banach space is called {\em strongly stable} if $\|T(t)x\|\rightarrow 0$ as $t\rightarrow\infty$ for all $x \in X$. \end{defn} We now show that a multiplication semigroup is strongly stable if and only if the pointwise semigroups are uniformly bounded in $s$ and strongly stable almost everywhere. The backward implication was proved by Curtain-Iftime-Zwart in \cite{CIZ09} for the special case where $\Omega=[0,1],\ p=2$ and $X= \mathbb{C}^n$.
The other implication was conjectured in the same paper and has since been proved by Hans Zwart (see Theorem on p. 3 in \cite{Zwart}), again for $X=\mathbb{C}^n$. \begin{thm} \label{thmSG_STRONG} Suppose that the multiplication operator ${\mathcal{A}}$ generates a $C_0$-semigroup of multiplication operators $(e^{t{\mathcal{A}}})_{t \geq 0}$ on $L^p(\Omega,X)$ with $1 \leq p < \infty$ such that $\|e^{t{\mathcal{A}}}\| \leq C$ for all $t\geq 0$ and some constant $C >0$. Then the following are equivalent. \begin{enumerate} \item[(a)] The semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is strongly stable. \item[(b)] The pointwise semigroups $\left(e^{tA(s)}\right)_{t\geq 0}$ on $X$ are strongly stable for almost all $s\in \Omega$. \end{enumerate} \end{thm} \begin{proof} Note that, by Lemma \ref{lemNORM}, we have $\|e^{t{\mathcal{A}}}\| \leq C$ for all $t\geq 0$ if and only if $\|e^{tA(s)}\| \leq C$ for almost all $s\in \Omega$ and for all $t\geq 0$. (a)\,$\implies$\,(b): Assume that $\|e^{t{\mathcal{A}}}f\|\rightarrow 0$ as $t\rightarrow\infty$ for all $f\in L^p(\Omega,X)$. Choose an arbitrary $x\in X$ and define the function $f_x:\Omega\rightarrow X$ by $$ f_x(s):= x $$ for all $s\in\Omega$. Since $\Omega$ is $\sigma$-finite, we can write $\Omega=\cup_{n\in\mathbb{N}}{\Omega_n}$ where $\mu(\Omega_n) < \infty$ for every $n\in\mathbb{N}$. Then $\mathbbm{1}_{\Omega_n}{f_x} \in L^p(\Omega,X)$ for every $n\in\mathbb{N}$. By assumption, we have that $$ \left\| e^{t{\mathcal{A}}} \bigl( \mathbbm{1}_{\Omega_n}{f_x} \bigr) \right\|\rightarrow 0 \quad \text{as} \quad t\rightarrow\infty $$ for every $n\in\mathbb{N}$. Therefore, the Riesz subsequence theorem (see e.g.
proof of \cite[Chapter I, Theorem 3.12]{RudAna}) implies that, for every $n\in\mathbb{N}$, there exists a sequence $(t_k)_{k \in \mathbb{N}} \subset \mathbb{R}_+$ tending to infinity as $k\rightarrow\infty$, such that the functions $e^{t_k{\mathcal{A}}}\bigl( \mathbbm{1}_{\Omega_n}{f_x} \bigr)$ converge pointwise to $0$, almost everywhere, i.e. \begin{align}\label{cvg} \left\|e^{t_kA(s)} \bigl( \mathbbm{1}_{\Omega_n}(s){f_x}(s) \bigr) \right\|_{ X} = \left\|e^{t_kA(s)}x\right\|_{ X} \rightarrow 0 \quad \text{as} \quad k\rightarrow\infty \end{align} for $s\in \Omega_n\backslash N_{(x,n)}$, where $N_{(x,n)}$ is a subset of $\Omega$ of measure $0$. We now show that \eqref{cvg} implies that $\left\|e^{tA(s)}x\right\|_{ X} \rightarrow 0$ as $t\rightarrow\infty$ for $s\in \Omega_n\backslash N_{(x,n)}$. Let $\epsilon>0$. Then $\left\|e^{t_kA(s)}x\right\|_{ X}<\frac{1}{C}\epsilon$ for all $k$ greater than or equal to some $k_\epsilon\in\mathbb{N}$. For each $t>t_{k_\epsilon}$, we can write $t=t_{k_\epsilon}+r$ where $r\in\mathbb{R}_+$. Then we have that \begin{align*} \left\|e^{tA(s)}x\right\|_{ X} &= \left\|e^{(r + t_{k_\epsilon})A(s)}x\right\|_{ X} \\ &= \left\| \left(e^{rA(s)}\right) \left(e^{t_{k_\epsilon}A(s)}x\right) \right\|_{ X} \\ &\leq \left\|e^{rA(s)}\right\| \left\|e^{t_{k_\epsilon}A(s)}x\right\|_{ X} \\ &\leq C \left\|e^{t_{k_\epsilon}A(s)}x\right\|_{ X} \\ &\leq C \frac{1}{C}\epsilon \\ &= \epsilon. \end{align*} Hence, \begin{align}\label{cvg2} \left\|e^{tA(s)}x\right\|_{ X}\rightarrow 0 \quad \text{as} \quad t\rightarrow\infty \end{align} for all $s\in \Omega_n\backslash N_{(x,n)}$. It follows that \eqref{cvg2} holds for all $s\in\Omega\backslash{\left(\cup_{n\in\mathbb{N}}N_{(x,n)} \right)}$. Observe that $N_x := \cup_{n\in\mathbb{N}}N_{(x,n)}$, being a countable union of null sets, is also a null set and hence we have the convergence \eqref{cvg2} almost everywhere.
For each $x \in X$, \eqref{cvg2} holds for all $s\in \Omega\backslash N_x$. Now, choose any countable dense subset $D\subset X$. Then \eqref{cvg2} holds for each $x\in D$, for all $s\in \Omega\backslash\left(\cup_{x\in{D}}{N_x}\right)$. Note that $\cup_{x\in{D}}{N_x}$ is also a null set. So we have that \eqref{cvg2} holds for all $x$ in a dense subset of $ X$, almost everywhere. Because the semigroups $\left(e^{tA(s)}\right)_{t\geq 0}$ are bounded, it follows that \eqref{cvg2} holds for all $x\in X$, almost everywhere, which is what we wanted to prove. (b)\,$\implies$\,(a): Assume that $\left\|e^{tA(s)}x\right\|_{ X}\rightarrow 0$ as $t\rightarrow\infty$ for all $x\in X$ and almost all $s\in \Omega$. Choose an arbitrary function $f\in L^p(\Omega, X)$. Then $$ \left\|e^{tA(s)}f(s)\right\|_{ X}^p \leq C^p\left\|f(s)\right\|_{ X}^p $$ for almost all $s\in \Omega$. Hence, the functions $\left\|e^{tA(\cdot)}f(\cdot)\right\|_{ X}^p$ are dominated by the integrable function $C^p\left\|f(\cdot)\right\|_{ X}^p$. Because of our assumption, we know that $\left\|e^{tA(s)}f(s)\right\|_{ X}^p\rightarrow 0$ as $t\rightarrow\infty$ for almost all $s\in \Omega$. It now follows from Lebesgue's dominated convergence theorem that $$\int_{\Omega}{\left\|e^{tA(s)}f(s)\right\|_{ X}^p\,\text{d}\mu(s)}=\left\|e^{t{\mathcal{A}}}f\right\|^p\rightarrow 0\quad\text{ as }\quad t\rightarrow\infty.$$ Thus the proof is complete. \end{proof} \begin{rem} As before, strong stability of a multiplication semigroup is independent of the value of $p$, as long as $1\leq p < \infty$. \end{rem} \section{Weak Stability} The concept of weak stability is the most difficult to investigate in this context. We include a trivial example in the scalar case, where the multiplication semigroup is weakly stable, but where none of the pointwise semigroups are.
\begin{exm}\label{exm1} We use the notation introduced in Remark \ref{remNOT} with $\Omega=[0,1]$, $X=\mathbb{C}$ and $A(s):=is$ for all $s\in [0,1]$. Then the semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable (by the Riemann--Lebesgue lemma), but none of the pointwise semigroups are, since $|e^{ist}x|=|x|$ for all $t\geq 0$. \end{exm} We show in the proposition below that the weak stability of almost every pointwise semigroup $\left(e^{tA(s)}\right)_{t\geq 0}$ does imply that the multiplication semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable. \begin{prop} \label{propWeak} Let $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ be a bounded multiplication semigroup on the space $L^p(\Omega,X)$, with $1\leq p<\infty$, where $X$ is a reflexive Banach space. If the pointwise semigroups $\left(e^{tA(s)}\right)_{t\geq 0}$ are weakly stable for almost all $s\in\Omega$, then the multiplication semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable. \end{prop} \begin{proof} Assume that the pointwise semigroups $\left(e^{t A(s)}\right)_{t\geq 0}$ are weakly stable for almost all $s\in\Omega$. Then there exists a null set $N\subset\Omega$ such that $$ \psi\bigl(e^{tA(s)}x\bigr) \rightarrow 0 \text{ as } t\rightarrow\infty $$ for all $x\in X$, $\psi\in X'$, and for all $s\in\Omega\backslash N$, where $X'$ denotes the continuous dual space of $X$. Since $X$ is a reflexive Banach space, the dual space of $L^p(\Omega,X)$ is $L^q(\Omega,X')$, where $\frac{1}{p} + \frac{1}{q} = 1$ (see \cite[Theorem 8.20.5, p.\,607]{Don95}). Choose arbitrary functions $f\in L^p(\Omega,X)$ and $g\in L^q(\Omega,X')$ (or $g\in L^\infty(\Omega,X')$, if $p=1$). Then \begin{align*} \left| g(s) \biggl( e^{tA(s)}f(s) \biggr) \right| & \leq \|g(s)\|_{X'} \|e^{t A(s)}f(s)\|_X \\ & \leq C \|g(s)\|_{X'} \|f(s)\|_X \end{align*} almost everywhere.
Since the real valued function $\|g(\cdot)\|_{X'}$ is in $L^q$ (or in $L^\infty$) and the function $\|f(\cdot)\|_X$ is in $L^p$, we have that the function $C \|g(\cdot)\|_{X'} \|f(\cdot)\|_X$ is in $L^1$ and hence integrable. The functions $ g(\cdot)\Bigl(e^{tA(\cdot)}f(\cdot)\Bigr) $ are integrable, converge pointwise almost everywhere to $0$ as $t\rightarrow \infty$ and are dominated by the function $C \|g(\cdot)\|_{X'} \|f(\cdot)\|_X$. It follows from Lebesgue's Dominated Convergence Theorem that $$ \int_\Omega g(s)\bigl(e^{tA(s)}f(s)\bigr)\,\text{d}\mu(s)\rightarrow 0\text{\ \ as\ \ }t\rightarrow\infty. $$ Hence, the semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable. \end{proof} \begin{rem} This result is independent of the value of $p$, as long as $1\leq p<\infty$. \end{rem} \section{Almost Weak Stability} We now investigate the almost weak stability of the multiplication semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ and of the pointwise semigroups $\bigl(e^{t A(s)}\bigr)_{t\geq 0}$ with $s\in\Omega$. In order to define this stability concept, we mention that the {\em density} of a Lebesgue measurable subset $Z$ of $\mathbb{R}_+$ is $d := \lim_{t\rightarrow\infty}{\frac{\mu(Z\cap[0,t])}{t}}$ ($\mu$ being the Lebesgue measure), whenever the limit exists. \begin{defn}[Almost Weak Stability] Let $\SG$ be a $C_0$-semigroup on a reflexive Banach space $X$. Then $\SG$ is called {\em almost weakly stable} if there exists a Lebesgue measurable set $Z\subset \mathbb{R}_+$ of density $1$ such that $$ T(t) \rightarrow 0 \quad \text{as} \quad t \to \infty \,,\ t\in Z, $$ in the weak operator topology. \end{defn} The main result of this section is a characterisation of almost weak stability of a multiplication semigroup via Ces\`aro stability (see Definition \ref{defCesaroStable}) of the pointwise semigroups.
This is remarkable, since almost weak stability of a multiplication semigroup is in general not equivalent to almost weak stability of the pointwise semigroups. We introduce Ces\`aro stability, which seems to be the appropriate concept. \begin{defn}[Ces\`aro Mean; Mean Ergodic Semigroup; Mean Ergodic Projection](\cite[p.\,20, Chapter I, Definition 2.18, 2.20]{eis10}) Let $\SG$ be a $C_0$-semigroup of operators on a Banach space $X$ with generator $\bigl(A,D(A)\bigr)$. For every $t > 0$, the {\em Ces\`aro mean} $S(t)\in\Lop[X]$ is defined by $$ S(t)x := \frac{1}{t}\int_{0}^t T(s)x\,\text{d}s $$ for all $x\in X$. The semigroup $\SG$ is called {\em mean ergodic} if the Ces\`aro means converge pointwise as $t$ tends to $\infty$. In this case the operator $P \in \Lop[X]$ defined by $$ Px := \lim_{t\to \infty}{S(t)x} $$ is called the {\em mean ergodic projection} corresponding to $\SG$. \end{defn} \begin{rem}\cite[p.\,21-22; Chapter I; Remark 2.21, Proposition 2.24 and Theorem 2.25]{eis10} The mean ergodic projection $P$ commutes with $\SG$ and is indeed a projection. A bounded $\SG$ is mean ergodic if and only if $X = \ker{A} \oplus \cl[\ran{A}]$, where $\ker{A}$ and $\ran{A}$ denote, respectively, the kernel and range of $A$. One also has that $\ran{P}=\ker{A}=\fix{\SG}$ and $\ker{P}=\cl[\ran{A}]$, where $\fix{\SG}$ is the fixed space of $\SG$. \end{rem} \begin{defn}[Ces\`aro Stability of Semigroups]\label{defCesaroStable} A semigroup $\SG$ is called {\em Ces\`aro stable} if the Ces\`aro means converge to $0$ as $t\rightarrow\infty$, i.e.\ the mean ergodic projection is the $0$ operator. \end{defn} \begin{thm}\label{thmAwsSg} Let $X$ be a reflexive, separable Banach space. Take $({\mathcal{A}},D({\mathcal{A}}))$ to be the generator of a bounded multiplication semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ on $L^p(\Omega,X)$ with $1<p<\infty$, and let $A(s)$ with $s\in\Omega$ be the family of operators corresponding to ${\mathcal{A}}$.
The semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is almost weakly stable if and only if, for each $ir \in i\mathbb{R}$, the rescaled pointwise semigroups $\bigl(e^{irt}e^{tA(s)}\bigr)_{t\geq 0}$ are Ces\`aro stable almost everywhere. \end{thm} \begin{rem} If the pointwise semigroups are almost weakly stable almost everywhere, then the rescaled pointwise semigroups $\bigl(e^{irt}e^{tA(s)}\bigr)_{t\geq 0}$ are Ces\`aro stable for each $ir \in i\mathbb{R}$ almost everywhere. By the above theorem, this implies that the multiplication semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is almost weakly stable. \end{rem} In order to prove Theorem \ref{thmAwsSg} we need the following characterisation of almost weak stability of a semigroup on a reflexive Banach space via the point spectrum of its generator. \begin{lem}{\cite[Chapter II, Theorem 4.1]{eis10}}\label{lemAwsSg} Let $\bigl(T(t)\bigr)_{t\geq 0}$ be a bounded $C_0$-semigroup with generator $(A, D(A))$ on a reflexive, separable Banach space $X$. Then the following are equivalent. \begin{enumerate} \item $\bigl(T(t)\bigr)_{t\geq 0}$ is almost weakly stable. \item $\PtSp{A}\cap i\mathbb{R} = \emptyset$, where $\PtSp{A}$ is the point spectrum of $A$. \end{enumerate} \end{lem} We are now ready to prove Theorem \ref{thmAwsSg} by using the above characterisation as well as the spectral mapping result of Theorem \ref{thmPtSpM}. \begin{proof} By Lemma \ref{lemAwsSg}, almost weak stability of $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is equivalent to $\PtSp{{\mathcal{A}}}\cap i\mathbb{R} = \emptyset$. By Corollary \ref{corPtSpA}, this is equivalent to the following: for each $ir\in i\mathbb{R}$, $\mu\bigl(\{ s\in\Omega \mid ir \in \PtSp{A(s)} \}\bigr) = 0$. This in turn is equivalent to the fact that, for each $ir\in i\mathbb{R}$, the rescaled pointwise semigroups $\bigl(e^{irt}e^{tA(s)}\bigr)_{t\geq 0}$ are Ces\`aro stable almost everywhere.
\end{proof} \begin{exm} Once again, we consider the multiplication semigroup of Example \ref{exm1}. In this example we have for the point spectrum that $\PtSp{A(s)} \cap i\mathbb{R} \neq\emptyset$ for each $s\in\Omega$, since $is\in\PtSp{A(s)}$ for all $s\in[0,1]$. It follows that none of the pointwise semigroups are almost weakly stable, whereas the point spectrum of ${\mathcal{A}}$ is empty, which means that the semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is almost weakly stable. \end{exm} \begin{rem} Since the stability is determined by the pointwise semigroups, the value of $p$ is irrelevant, as long as $1<p<\infty$. \end{rem} \section{Stability of Multiplication Operators} It is possible to develop analogous results for time-discrete semigroups of the form $\{{\mathcal{M}}^n \mid n \in \mathbb{N}\}$ for a bounded multiplication operator ${\mathcal{M}}$ on $L^p(\Omega,X)$. The relevant stability properties are the following. \begin{defn}Let $T$ be an operator on a Banach space $X$. Then $T$ is \begin{itemize} \item[(i)] {\em uniformly stable} if $\left\|T^n\right\|\longrightarrow 0$ as $n\rightarrow\infty$. \item[(ii)] {\em strongly stable} if $\left\|T^nf\right\|\longrightarrow 0$ as $n\rightarrow\infty$ for all $f \in X$. \item[(iii)] {\em weakly stable} if $\psi\left(T^nf\right)\longrightarrow 0$ as $n\rightarrow\infty$ for all $f \in X$ and all $\psi\in X'$. \item[(iv)] {\em almost weakly stable} if there exists a sequence $(n_k)_{k\in\mathbb{N}} \subset \mathbb{N}$ of density 1 such that $\psi(T^{n_k}f)\longrightarrow 0 \mbox{ as } k\rightarrow\infty$ for all $f \in X$ and all $\psi \in X'$. \end{itemize} \end{defn} Recall that the {\em density of a sequence} $(n_k)_{k\in\mathbb{N}} \subset \mathbb{N}$ is $$ d := \lim_{n\rightarrow\infty}{\frac{|\{ k\ :\ n_k < n \}|}{n}}, $$ if the limit exists, and that a bounded linear operator $T$ on a Banach space is called {\em power bounded} if $\sup_{n \in \mathbb{N}}\left\|T^n\right\| < \infty$.
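As a simple illustration of Ces\`aro stability in the time-discrete setting, consider the scalar operator $T=\lambda$ on $\mathbb{C}$ with $|\lambda|=1$. For $\lambda\neq 1$, the Ces\`aro means satisfy $$ \left|\frac{1}{n}\sum_{k=0}^{n-1}\lambda^k\right| = \frac{|\lambda^n-1|}{n\,|\lambda-1|} \leq \frac{2}{n\,|\lambda-1|} \longrightarrow 0 \text{ as } n\rightarrow\infty, $$ so $T$ is Ces\`aro stable, whereas for $\lambda=1$ the Ces\`aro means are constantly $1$. This is the discrete analogue of the Ces\`aro stability of the rescaled pointwise semigroups appearing in Theorem \ref{thmAwsSg}.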
By using methods analogous to those developed in Sections 1 -- 4, we can characterise these stability properties of a power bounded multiplication operator ${\mathcal{M}}$ through the pointwise operators $M(\cdot)$. \begin{thm} Let ${\mathcal{M}}$ be a power bounded multiplication operator on $L^p(\Omega,X)$ with $1<p<\infty$. \begin{itemize} \item[(i)] Then ${\mathcal{M}}$ is uniformly stable if and only if $M(s)$ is uniformly stable for almost all $s \in \Omega$ and $\essup_{s\in\Omega}{\left\|M(s)^n\right\|} \rightarrow 0$ as $n\rightarrow\infty$ or, equivalently, if and only if $\essup_{s\in\Omega}{\sr{M(s)}} < 1$. \item[(ii)] If $\Omega$ is $\sigma$-finite, then ${\mathcal{M}}$ is strongly stable if and only if $M(s)$ is strongly stable for almost all $s \in \Omega$. \item[(iii)] If the Banach space $X$ is separable, then ${\mathcal{M}}$ is weakly stable if the pointwise operators $M(s)$ are weakly stable almost everywhere. \item[(iv)] Let the measure space $(\Omega,\Sigma,\mu)$ be separable and let $X$ be a reflexive, separable Banach space. Then ${\mathcal{M}}$ is almost weakly stable if and only if, for each $\lambda\in\mathbb{C}$ with $|\lambda| = 1$, the rescaled pointwise operators $\lambda M(s)$ are Ces\`aro stable for almost all $s \in \Omega$. \end{itemize} \end{thm} \end{document}
\begin{document} \title{Reducing quadrangulations of the sphere and the projective plane} \begin{abstract} We show that every quadrangulation of the sphere can be transformed into a $4$-cycle by deletions of degree-$2$ vertices and by $t$-contractions at degree-$3$ vertices. A $t$-contraction simultaneously contracts all incident edges at a vertex with stable neighbourhood. The operation is mainly used in the field of $t$-perfect graphs. \\ We further show that a non-bipartite quadrangulation of the projective plane can be transformed into an odd wheel by $t$-contractions and deletions of degree-$2$ vertices. \\ We deduce that a quadrangulation of the projective plane is (strongly) $t$-perfect if and only if the graph is bipartite. \end{abstract} \section{Introduction} For characterising quadrangulations of the sphere, it is very useful to transform a quadrangulation into a slightly smaller one. Such reductions are mainly based on the following idea: Given a class of quadrangulations, a sequence of particular face-contractions transforms every member of the class into a $4$-cycle; see eg Brinkmann et al.~\cite{BGGMTW05}, Nakamoto~\cite{Nakamoto99}, Negami and Nakamoto~\cite{NeNa93}, and Broersma et al.~\cite{BDG93}. A \emph{face-contraction} identifies two non-adjacent vertices $v_1, v_3$ of a $4$-face $v_1, v_2,v_3,v_4$ in which the common neighbours of $v_1$ and $v_3$ are only $v_2$ and $v_4$. A somewhat different approach was taken by Bau et al.~\cite{BMNZ14}. They showed that any quadrangulation of the sphere can be transformed into a $4$-cycle by a sequence of deletions of degree-$2$ vertices and so-called hexagonal contractions. The obtained graph is a minor of the previous graph. Both operations can be obtained from face-contractions. We provide a new way to reduce arbitrary quadrangulations of the sphere to a $4$-cycle. Our operations are minor-operations --- in contrast to face-contractions. We use deletions of degree-$2$ vertices and $t$-contractions.
A \emph{$t$-contraction} simultaneously contracts all incident edges of a vertex with stable neighbourhood and deletes all multiple edges. The operation is mainly used in the field of $t$-perfection. Face-contractions cannot be obtained from $t$-contractions. We restrict ourselves to $t$-contractions at vertices that are only contained in $4$-cycles whose interior does not contain a vertex. \begin{gather} \mbox{These $t$-contractions and deletions of degree-$2$ vertices } \nonumber \sloppy \\ \mbox{ can be obtained from a sequence of face-contractions.} \label{eq:operations} \sloppy \end{gather} Figure~\ref{fig:facecon} illustrates this. The restriction on the applicable $t$-contractions makes sure that all face-contractions can be applied, ie that all identified vertices are non-adjacent and have no common neighbours besides the two other vertices of their $4$-face. \begin{figure} \caption{Face-contractions that give a deletion of a degree-$2$ vertex, a $t$-contraction at a degree-$3$ and a degree-$6$ vertex} \label{fig:facecon} \end{figure} We prove: \begin{theorem}\label{thm:plane_C4_irreducible} Let $G$ be a quadrangulation of the sphere. Then, there is a sequence of $t$-contractions at degree-$3$ vertices and deletions of degree-$2$ vertices that transforms $G$ into a $4$-cycle. During the whole process, the graph remains a quadrangulation. \end{theorem} The proof of Theorem~\ref{thm:plane_C4_irreducible} can be found in Section~\ref{sec:quadrangulations}. It is easy to see that both operations used in Theorem~\ref{thm:plane_C4_irreducible} are necessary. By \eqref{eq:operations}, Theorem~\ref{thm:plane_C4_irreducible} implies: \begin{gather*} \mbox{Any quadrangulation of the sphere can be transformed } \sloppy \\ \mbox{into a $4$-cycle by a sequence of face-contractions.} \sloppy \end{gather*} Via the dual graph, quadrangulations of the sphere are in one-to-one correspondence with planar $4$-regular (not necessarily simple) graphs. 
Theorem~\ref{thm:plane_C4_irreducible} thus implies a method to reduce all $4$-regular planar graphs to the graph on two vertices and four parallel edges. Broersma et al.~\cite{BDG93}, Lehel~\cite{Le81}, and Manca~\cite{Ma79} analysed methods to reduce $4$-regular planar graphs to the octahedron graph. In the second part of this paper, we consider quadrangulations of the projective plane. We use Theorem~\ref{thm:plane_C4_irreducible} to reduce all non-bipartite quadrangulations of the projective plane to an odd wheel. A \emph{$p$-wheel} $W_p$ is a graph consisting of a cycle $(w_1, \ldots, w_p,w_1)$ and a vertex $v$ adjacent to all vertices of the cycle. A wheel $W_p$ is an \emph{odd wheel} if $p$ is odd. Figure~\ref{fig:wheels} shows some odd wheels. \begin{theorem}\label{thm:pp_odd_wheels_irreducible} Let $G$ be a non-bipartite quadrangulation of the projective plane. Then, there is a sequence of $t$-contractions and deletions of degree-$2$ vertices that transforms $G$ into an odd wheel. During the whole process, the graph remains a non-bipartite quadrangulation. \end{theorem} The proof of this theorem can be found in Section~\ref{sec:quadrangulations}. It is easy to see that both operations used in this theorem are necessary. \begin{figure} \caption{The odd wheels $W_3, W_5$ and $W_7$} \label{fig:wheels} \end{figure} \begin{figure} \caption{An even embedding of $W_5$ in the projective plane and face-contractions that produce a smaller odd wheel. Opposite points on the dotted cycle are identified. } \label{fig:pp_wheels} \end{figure} Negami and Nakamoto~\cite{NeNa93} showed that any non-bipartite quadrangulation of the projective plane can be transformed into a $K_4$ by a sequence of face-contractions.
This result can be deduced from Theorem~\ref{thm:pp_odd_wheels_irreducible}: By \eqref{eq:operations}, Theorem~\ref{thm:pp_odd_wheels_irreducible} implies that any non-bipartite quadrangulation of the projective plane can be transformed into an odd wheel by a sequence of face-contractions. The odd wheel $W_{2k+1}$ can now be transformed into $W_{2k-1}$ --- and finally into $W_3=K_4$ --- by face-contractions (see Figure~\ref{fig:pp_wheels}). Nakamoto~\cite{Nakamoto99} gave a reduction method based on face-contractions and so-called $4$-cycle deletions for non-bipartite quadrangulations of the projective plane with minimum degree $3$. Matsumoto et al.~\cite{MNY16} analysed quadrangulations of the projective plane with respect to hexagonal contractions, while Nakamoto considered face-contractions for quadrangulations of the Klein~bottle~\cite{Nakamoto95} and the torus~\cite{Nakamoto96}. Youngs~\cite{You96} and Esperet and Stehl{\'{\i}}k~\cite{Es_Steh15} considered non-bipartite quadrangulations of the projective plane with regard to vertex-colourings and width-parameters. Theorem~\ref{thm:pp_odd_wheels_irreducible} allows an application to the theory of $t$-perfection. A graph $G$ is \emph{$t$-perfect} if its stable set polytope $\textrm{\rm SSP}(G)$ equals the polyhedron $\textrm{\rm TSTAB}(G)$. The \emph{stable set polytope} $\textrm{\rm SSP}(G)$ is the convex hull of the characteristic vectors of stable sets of $G$; the polyhedron \emph{$\textrm{\rm TSTAB}(G)$} is defined via non-negativity-, edge- and odd-cycle inequalities (see Section~\ref{sec:t-perf} for a precise definition). If the system of inequalities defining $\textrm{\rm TSTAB}(G)$ is totally dual integral, the graph $G$ is called \emph{strongly $t$-perfect}. Evidently, strong $t$-perfection implies $t$-perfection. It is not known whether the converse is also true.
\begin{theorem}\label{thm:t-perfect} For every quadrangulation $G$ of the projective plane, the following assertions are equivalent: \begin{enumerate}[\rm(a)] \item $G$ is $t$-perfect \label{item:t-perfect} \item $G$ is strongly $t$-perfect \label{item:strongly_t-perfect} \item $G$ is bipartite \label{item:bipartite} \end{enumerate} \end{theorem} See Section~\ref{sec:t-perf} for precise definitions and for the proof. A general treatment of $t$-perfect graphs may be found in Gr\"otschel, Lov\'asz and Schrijver~\cite[Ch.~9.1]{GLS88} as well as in Schrijver~\cite[Ch.~68]{LexBible}. We showed that triangulations of the projective plane are (strongly) $t$-perfect if and only if they are perfect and do not contain the complete graph $K_4$~\cite{triang17}. Bruhn and Benchetrit analysed $t$-perfection of triangulations of the sphere~\cite{Ben_Bru15}. Boulala and Uhry~\cite{BouUhr79} established the $t$-perfection of series-parallel graphs. Ge\-rards~\cite{Gerards89} extended this to graphs that do not contain an \emph{odd-$K_4$} as a subgraph (an odd-$K_4$ is a subdivision of $K_4$ in which every triangle becomes an odd circuit). Ge\-rards and Shepherd~\cite{GS98} characterised the graphs with all subgraphs $t$-perfect, while Barahona and Mahjoub~\cite{BM94} described the $t$-imperfect subdivisions of $K_4$. Bruhn and Fuchs~\cite{Bru_Fu15} characterised $t$-perfection of $P_5$-free graphs by forbidden $t$-minors. \section{Quadrangulations} \label{sec:quadrangulations} All the graphs mentioned here are finite and simple. We follow the notation of Diestel~\cite{Diestel}. We begin by recalling several useful definitions related to surface-embedded graphs. For further background on topological graph theory, we refer the reader to Gross and Tucker~\cite{Gross_Tucker} or Mohar and Thomassen~\cite{Mohar_Thomassen}. An \emph{embedding} of a simple graph $G$ on a surface is a continuous one-to-one function from a topological representation of $G$ into the surface.
For our purpose, it is convenient to abuse the terminology by referring to the image of $G$ as the \emph{graph $G$}. The \emph{faces} of an embedding are the connected components of the complement of $G$. An embedding of $G$ is \emph{even} if all faces are bounded by an even circuit. A \emph{quadrangulation} is an embedding where each face is bounded by a circuit of length $4$. A cycle $C$ is \emph{contractible} if $C$ separates the surface into two sets $S_C$ and $\overline{S_C}$ where $S_C$ is homeomorphic to an open disk in $\ensuremath{\mathbb{R}}^2$. Note that for the sphere, both $S_C$ and $\overline{S_C}$ are homeomorphic to an open disk. In contrast, for the plane and the projective plane, $\overline{S_C}$ is not homeomorphic to an open disk. For the plane and the projective plane, we call $S_C$ the \emph{interior} of $C$ and $\overline{S_C}$ the \emph{exterior} of $C$. Using the stereographic projection, it is easy to switch between embeddings in the sphere and the plane. In order to have an interior and an exterior of a contractible cycle, we will concentrate on quadrangulations of the plane (and the projective plane). Note that by the Jordan curve theorem, \begin{equation} \label{eq:plane->contractible_cycles} \text{all cycles in the plane are contractible.} \end{equation} A cycle in a non-bipartite quadrangulation of the projective plane is contractible if and only if it has even length (see e.g.~\cite[Lemma 3.1]{Kai_Steh15}). As every non-bipartite even embedding is a subgraph of a non-bipartite quadrangulation, one can easily generalise this result. \begin{observation} \label{obs:cycles_in_pp} A cycle in a non-bipartite even embedding in the projective plane is contractible if and only if it has even length. \end{observation} An embedding is a \emph{$2$-cell embedding} if each face is homeomorphic to an open disk. It is well-known that embeddings of $2$-connected graphs in the plane are $2$-cell embeddings.
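For later use, we note the edge count that Euler's formula yields for quadrangulations: if a quadrangulation is a $2$-cell embedding in a surface of Euler characteristic $\chi$, then double counting edge-face incidences gives $2|E|=4|F|$, and Euler's formula $|V|-|E|+|F|=\chi$ yields $$ |E| = 2|V| - 2\chi, $$ ie $|E|=2|V|-4$ for the plane ($\chi=2$) and $|E|=2|V|-2$ for the projective plane ($\chi=1$).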
A non-bipartite quadrangulation of the projective plane contains a non-contractible cycle; see Observation~\ref{obs:cycles_in_pp}. The complement of this cycle in the projective plane is homeomorphic to an open disk. Thus, we observe: \begin{observation} \label{obs:closed-cell-embedding} Every quadrangulation of the plane and every non-bipartite quadrangulation of the projective plane is a $2$-cell embedding. \end{observation} This observation makes sure that we can apply Euler's formula to all the considered quadrangulations. A simple graph cannot contain a $4$-circuit that is not a $4$-cycle. Thus, note that every face of a quadrangulation is bounded by a cycle. It is easy to see that \begin{equation} \label{eq:quad_plane->bipartite} \text{all quadrangulations of the plane are bipartite.} \end{equation} We first take a closer look at deletions of degree-$2$ vertices in graphs that are not the $4$-cycle $C_4$. \begin{observation} \label{obs:deg2_vertex} Let $G\not= C_4$ be a quadrangulation of the plane or the projective plane that contains a vertex $v$ of degree $2$. Then, $G-v$ is again a quadrangulation. \end{observation} \begin{proof} Let $u$ and $u'$ be the two neighbours of $v$. Then, there are distinct vertices $s,t$ such that the cycles $(u,v,u',s,u)$ and $(u,v,u',t,u)$ each bound a face. Thus, $(u,s,u',t,u)$ is a contractible $4$-cycle whose interior contains only $v$, and $G-v$ is again a quadrangulation. \end{proof} We now take a closer look at $t$-contractions. \begin{lemma} \label{lem:quad_stays_quad} Let $G$ be a quadrangulation of the plane or a non-bipartite quadrangulation of the projective plane. Let $G'$ be obtained from $G$ by a $t$-contraction at $v$. If $v$ is not a vertex of a contractible $4$-cycle with some vertices in its interior, then $G'$ is again a quadrangu\-lation. \end{lemma} \begin{proof} Let $G''$ be obtained from $G$ by the operation that identifies $v$ with all its neighbours but does not delete multiple edges.
This operation leaves every cycle not containing $v$ untouched, transforms every other cycle $C$ into a cycle of length $|C|-2$, and creates no new cycles. Therefore, all cycles bounding faces of $G''$ are of size $4$ or $2$. The graphs $G'$ and $G''$ differ only in the property that $G''$ has some double edges. These double edges form $2$-cycles that arise from $4$-cycles containing $v$. As all these $4$-cycles are contractible (see~\eqref{eq:plane->contractible_cycles} and Observation~\ref{obs:cycles_in_pp}) with no vertex in their interior, the $2$-cycles are also contractible and contain no vertex in their interiors. Deletion of all double edges now gives $G'$ --- an embedded graph where all faces are of size $4$. \end{proof} Lemma~\ref{lem:quad_stays_quad} enables us to prove the following statement that directly implies Theorem~\ref{thm:plane_C4_irreducible}. \begin{lemma}\label{lem:plane_C4_irreducible} Let $G$ be a quadrangulation of the plane. Then, there is a sequence of \begin{itemize} \item $t$-contractions at degree-$3$ vertices that are only contained in $4$-cycles whose interior does not contain a vertex, and \item deletions of degree-$2$ vertices \end{itemize} that transforms $G$ into a $4$-cycle. During the whole process, the graph remains a quadrangulation. \end{lemma} \begin{proof} Let $\mathcal{C}$ be the set of all contractible $4$-cycles whose interior contains some vertices of $G$. Note that $\mathcal{C}$ contains the $4$-cycle bounding the outer face unless $G=C_4$. Let $C \in \mathcal{C}$ be a contractible $4$-cycle whose interior does not contain another element of $\mathcal{C}$. We will first see that the interior of $C$ contains a vertex of degree $2$ or $3$: Deletion of all vertices in the exterior of $C$ gives a quadrangulation $G'$ of the plane. As $G$ is connected, one of the vertices in $C$ must have a neighbour in the interior of $C$ and thus must have degree at least $3$.
Euler's formula now implies that $ \sum_{v \in V(G')} \deg(v)=2|E(G')| \leq 4|V(G')| -8. $ As no vertex in $G'$ has degree $0$ or $1$, there must be a vertex of degree $2$ or $3$ in $V(G')-V(C)$. This vertex has the same degree in $G$ and is contained in the interior of $C$. We use deletions of degree-$2$ vertices and $t$-contractions at degree-$3$ vertices in the interior of the smallest cycle of $\mathcal{C}$ to successively get rid of all vertices in the interior of $4$-cycles. By Observation~\ref{obs:deg2_vertex} and Lemma~\ref{lem:quad_stays_quad}, the obtained graphs are quadrangulations. Now, suppose that no more $t$-contraction at a degree-$3$ vertex and no more deletion of a degree-$2$ vertex is possible. Assume that the obtained graph is not a $4$-cycle. Then, there is a cycle $C'\in \mathcal{C}$ whose interior does not contain another cycle of $\mathcal{C}$. As we have seen above, the interior of $C'$ contains a vertex $v$ of degree $2$ or $3$; since no deletion of a degree-$2$ vertex is possible, $v$ has degree $3$. Since no $t$-contraction can be applied to $v$, the vertex $v$ has two adjacent neighbours. This contradicts~\eqref{eq:quad_plane->bipartite}. \end{proof} In the rest of the paper, we will consider the projective plane. A quadrangulation of the projective plane is~\emph{nice} if no vertex is contained in the interior of a contractible $4$-cycle. \begin{lemma}\label{lem:make_quad_nice} Let $G$ be a non-bipartite quadrangulation of the projective plane. Then, there is a sequence of $t$-contractions and deletions of vertices of degree $2$ that transforms $G$ into a nice quadrangulation. During the whole process, the graph remains a quadrangulation. \end{lemma} \begin{proof} Let $C$ be a contractible $4$-cycle whose interior contains at least one vertex. Delete all vertices that are contained in the exterior of $C$. The obtained graph is a quadrangulation of the plane.
By Lemma~\ref{lem:plane_C4_irreducible}, there is a sequence of $t$-contractions (as described in Lemma~\ref{lem:quad_stays_quad}) and deletions of degree-$2$ vertices that eliminates all vertices in the interior of $C$. With this method, it is possible to transform $G$ into a nice quadrangulation. \end{proof} As in the proof of Theorem~\ref{thm:plane_C4_irreducible}, Euler's formula implies that a non-bipartite quadrangulation of the projective plane contains a vertex of degree $2$ or $3$. As no nice quadrangulation has a degree-$2$ vertex (see Observation~\ref{obs:deg2_vertex}), we deduce: \begin{observation}\label{obs:nice_quad_min_degree} Every nice non-bipartite quadrangulation of the projective plane has minimum degree $3$. \end{observation} In an even embedding of an odd wheel $W$, every odd cycle must be non-contractible (see Observation~\ref{obs:cycles_in_pp}). Thus, it is easy to see that there is only one way (up to topological isomorphism) to embed an odd wheel in the projective plane. (This can easily be deduced from~\cite{MoRoVi96} --- a paper dealing with embeddings of planar graphs in the projective plane.) The embedding is illustrated in Figure~\ref{fig:pp_wheels}. Noting that this embedding is a quadrangulation, we observe: \begin{observation} \label{obs:odd_wheel_nice_quad} Let $G$ be a quadrangulation of the projective plane that contains an odd wheel $W$. If $G$ is nice, then $G$ equals $W$. \end{observation} Note that every graph containing an odd wheel also contains an induced odd wheel. Now, we consider even wheels. \begin{lemma} \label{lem:even_wheel_no_embedding} Even wheels $W_{2k}$ for $k\geq 2$ do not have an even embedding in the projective plane. \end{lemma} The statement follows directly from~\cite{MoRoVi96}. We nevertheless give an elementary proof of the lemma. \begin{proof} First assume that the $4$-wheel $W_4$ has an even embedding.
As all triangles of $W_4-w_3w_4$ must be non-contractible by Observation~\ref{obs:cycles_in_pp}, it is easy to see that the graph must be embedded as in Figure~\ref{fig:pp_evenwheels}. Since the insertion of $w_3w_4$ would create an odd face, $W_4$ is not evenly embeddable. Now assume that $W_{2k}$ for $k \geq 3$ is evenly embedded. Delete the edges $vw_i$ for $i=5, \ldots, 2k$ and note that $w_5, \ldots, w_{2k}$ are now of degree $2$, ie the path $P=(w_4, w_5, \ldots, w_{2k}, w_1)$ bounds two faces or one face from two sides. Deletion of the edges $vw_i$ preserves the even embedding: Deletion of an edge bounding two faces $F_1, F_2$ merges the faces into a new face of size $|F_1|+|F_2|-2$. Deletion of an edge bounding a face $F$ from two sides leads to a new face of size $|F|-2$. In both cases, all other faces are left untouched. Next, replace the odd path $P$ by the edge $w_4w_1$. The two faces $F_3, F_4$ adjacent to $P$ are transformed into two new faces of size $|F_3|-(2k-3)+1$ and $|F_4|-(2k-3)+1$. This yields an even embedding of $W_4$, which is a contradiction. \end{proof} \begin{figure} \caption{The only even embedding of $W_4 - w_3w_4$ in the projective plane. Opposite points on the dotted cycle are identified.} \label{fig:pp_evenwheels} \end{figure} Note that a $t$-contraction at a vertex $v$ is only allowed if its neighbourhood is stable, that is, if $v$ is not contained in a triangle. The next lemma characterises the quadrangulations to which no $t$-contraction can be applied. \begin{lemma} \label{lem:irred_oddwheel} Let $G$ be a non-bipartite nice quadrangulation of the projective plane where each vertex is contained in a triangle. Then $G$ is an odd wheel. \end{lemma} \begin{proof} By Observation~\ref{obs:nice_quad_min_degree}, there is a vertex $v$ of degree $3$ in $G$. Let $\{x_1,x_2,x_3\}$ be its neighbourhood and let $x_1$, $x_2$ and $v$ form a triangle. Recall that all triangles are non-contractible (see~Observation~\ref{obs:cycles_in_pp}).
Consequently, any two triangles intersect. As $x_3$ is contained in a triangle intersecting the triangle $(v,x_1,x_2)$ and as $v$ has no further neighbour, we can suppose without loss of generality that $x_3$ is adjacent to $x_1$. The graph induced by the two triangles $(v,x_1,x_2)$ and $(x_1,v,x_3)$ is not a quadrangulation. If the edge $x_2x_3$ is present, then $v$, $x_1$, $x_2$ and $x_3$ form a $K_4$ and, by Observation~\ref{obs:odd_wheel_nice_quad}, $G$ equals the odd wheel $W_3=K_4$. Otherwise, the graph contains a further vertex and this vertex is contained in a further triangle $T$. Since the vertex $v$ has degree $3$, it is not contained in $T$. If further $x_1 \notin V(T)$, then the vertices $x_2$ and $x_3$ must be contained in $T$. But then $x_2x_3\in E(G)$ and, as above, $v$, $x_1$, $x_2$ and $x_3$ form a $K_4$. Therefore, $x_1$ is contained in $T$ and consequently in every triangle of $G$. Since every vertex is contained in a triangle, $x_1$ must be adjacent to all vertices of $G-x_1$. As $|E(G)|=2|V(G)|-2$ by Euler's formula, the graph $G-x_1$ has $2|V(G)|-2-(|V(G)|-1)=|V(G)|-1=|V(G-x_1)|$ edges. By Observation~\ref{obs:nice_quad_min_degree}, no vertex in $G$ has degree smaller than $3$. Consequently, no vertex in $G-x_1$ has degree smaller than $2$. Thus, $G-x_1$ is a cycle and $G$ is a wheel. By Lemma~\ref{lem:even_wheel_no_embedding}, $G$ is an odd wheel. \end{proof} Finally, we can prove our second main result: \begin{proof}[Proof of Theorem~\ref{thm:pp_odd_wheels_irreducible}] Transform $G$ into a nice quadrangulation (Lemma~\ref{lem:make_quad_nice}). Now, consecutively apply $t$-contractions (as described in Lemma~\ref{lem:quad_stays_quad}) as long as possible. In each step, the obtained graph is a quadrangulation. By Lemma~\ref{lem:make_quad_nice} we can assume that the quadrangulation is nice. If no more $t$-contraction can be applied, then every vertex is contained in a triangle. By Lemma~\ref{lem:irred_oddwheel}, the obtained quadrangulation is an odd wheel.
\end{proof} \section{(Strong) $t$-perfection}\label{sec:t-perf} The \emph{stable set polytope} $\textrm{\rm SSP}(G)\subseteq\mathbb R^{V}$ of a graph $G=(V,E)$ is defined as the convex hull of the characteristic vectors of stable, ie independent, subsets of $V$. The characteristic vector of a subset $S$ of the set $V$ is the vector $\charf{S}\in \{0,1\}^{V}$ with $\charf{S}(v)=1$ if $v\in S$ and $0$ otherwise. We define a second polytope $\textrm{\rm TSTAB}(G)\subseteq\mathbb R^V$ for $G$, given by \begin{eqnarray} \label{inequ} &&x\geq 0,\notag\\ &&x_u+x_v\leq 1\text{ for every edge }uv\in E,\\ &&\sum_{v\in V(C)}x_v\leq \left\lfloor\frac{ |C|}{2}\right\rfloor\text{ for every induced odd cycle }C \text{ in }G.\notag \end{eqnarray} These inequalities are respectively known as non-negativity, edge and odd-cycle inequalities. Clearly, $\textrm{\rm SSP}(G)\subseteq \textrm{\rm TSTAB}(G)$. The graph $G$ is called \emph{$t$-perfect} if $\textrm{\rm SSP}(G)$ and $\textrm{\rm TSTAB}(G)$ coincide. Equivalently, $G$ is $t$-perfect if and only if $\textrm{\rm TSTAB}(G)$ is an integral polytope, ie if all its vertices are integral vectors. The graph $G$ is called \emph{strongly $t$-perfect} if the system \eqref{inequ} of inequalities is totally dual integral. That is, if for each weight vector $w \in \ensuremath{\mathbb{Z}}^{V}$, the linear program of maximizing $w^Tx$ over~\eqref{inequ} has an integer optimum dual solution. This property implies that $\textrm{\rm TSTAB}(G)$ is integral. Therefore, strong $t$-perfection implies $t$-perfection. It is an open question whether every $t$-perfect graph is strongly $t$-perfect. The question is briefly discussed in Schrijver~\cite[Vol. B, Ch. 68]{LexBible}. It is easy to see that all bipartite graphs are (strongly) $t$-perfect (see eg Schrijver~\cite[Ch.~68]{LexBible}) and that vertex deletion preserves (strong) $t$-perfection. 
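To see the role of the odd-cycle inequalities, consider the $5$-cycle $C_5$: the point $x=(\tfrac{1}{2},\ldots,\tfrac{1}{2})$ satisfies all non-negativity and edge inequalities, but $$ \sum_{v\in V(C_5)}x_v = \tfrac{5}{2} > 2 = \left\lfloor\tfrac{|C_5|}{2}\right\rfloor, $$ so it violates the odd-cycle inequality for $C_5$. As the stable sets of $C_5$ have size at most $2$, this point indeed lies outside $\textrm{\rm SSP}(C_5)$.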
Another operation that keeps (strong) $t$-perfection (see eg~\cite[Vol.~B,~Ch.~68.4]{LexBible}) was found by Gerards and Shepherd~\cite{GS98}: the $t$-contraction. Odd wheels $W_{2k+1}$ for $k \geq 1$ are not (strongly) $t$-perfect. Indeed, the vector $(1 \slash 3, \ldots, 1 \slash 3)$ is contained in $\textrm{\rm TSTAB}(W_{2k+1})$ but not in $\textrm{\rm SSP}(W_{2k+1})$. With this knowledge, the proof of Theorem~\ref{thm:t-perfect} follows directly from Theorem~\ref{thm:pp_odd_wheels_irreducible}. \begin{proof} [Proof of Theorem~\ref{thm:t-perfect}] If $G$ is bipartite, then $G$ is (strongly) $t$-perfect. Let $G$ be non-bipartite. Then there is a sequence of $t$-contractions and deletions of vertices that transforms $G$ into an odd wheel (Theorem~\ref{thm:pp_odd_wheels_irreducible}). As odd wheels are not (strongly) $t$-perfect and as vertex deletion and $t$-contraction preserve (strong) $t$-perfection, $G$ is not (strongly) $t$-perfect. \end{proof} \noindent Elke Fuchs {\tt <[email protected]>}\\ Laura Gellert {\tt <[email protected]>}\\ Institut f\"ur Optimierung und Operations Research\\ Universit\"at Ulm, Ulm\\ Germany\\ \end{document}
\begin{document} \begin{abstract} We consider cardinal invariants related to Shelah's model-theoretic tree properties and the relations that obtain between them. From strong colorings, we construct theories $T$ with $\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$. We show that these invariants have distinct structural consequences, by investigating their effect on the decay of saturation in ultrapowers. This answers some questions of Shelah. \end{abstract} \title{Invariants related to the tree property} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} One of the fundamental discoveries in stability theory is that stability is local: a theory is stable if and only if no formula has the order property. Among the stable theories, one can obtain a measure of complexity by associating to each theory $T$ its \emph{stability spectrum}, namely, the class of cardinals $\lambda$ such that $T$ is stable in $\lambda$. A classification of stability spectra was given by Shelah in \cite[Chapter 3]{shelah1990classification}. Part of this analysis amounts to showing that stable theories do not have the tree property and, consequently, that forking satisfies local character. But a crucial component of that work was studying the approximations to the tree property which can exist in stable theories and what structural consequences they have. These approximations were measured by a cardinal invariant of the theory called $\kappa(T)$, and Shelah's stability spectrum theorem gives an explicit description of the cardinals in which a given theory $T$ is stable in terms of the cardinality of the set of types in finitely many variables over the empty set and $\kappa(T)$.
Shelah used the definition of $\kappa(T)$ as a template for quantifying the global approximations to other tree properties in introducing the invariants $\kappa_{\text{cdt}}(T)$, $\kappa_{\text{sct}}(T)$, and $\kappa_{\text{inp}}(T)$ (see Definition \ref{patterns} below) which bound approximations to the tree property (TP), the tree property of the first kind (TP$_{1}$), and the tree property of the second kind (TP$_{2}$), respectively. Eventually, the local condition that a theory does not have the tree property (\emph{simplicity}), and the global condition that $\kappa(T) = \kappa_{\text{cdt}}(T) = \aleph_{0}$ (\emph{supersimplicity}) proved to mark substantial dividing lines. These invariants provide a coarse measure of the complexity of the theory, giving a ``quantitative'' description of the patterns that can arise among forking formulas. They are likely to continue to play a role in the development of a structure theory for tame classes of non-simple theories. Motivated by some questions from \cite{shelah1990classification}, we explore which relationships known to hold between the \emph{local} properties TP, TP$_{1}$, and TP$_{2}$ also hold for the \emph{global} invariants $\kappa_{\text{cdt}}(T)$, $\kappa_{\text{sct}}(T)$, and $\kappa_{\text{inp}}(T)$. In short, we are pursuing the following analogy: \begin{table}[h!] \begin{center} \label{tab:table1} \begin{tabular}{c|c|c|c} local & TP & TP$_{1}$ & TP$_{2}$ \\ \hline global & $\kappa_{\text{cdt}}$ & $\kappa_{\text{sct}}$ & $\kappa_{\text{inp}}$ \\ \end{tabular} \end{center} \end{table} \noindent This continues the work done in \cite{ArtemNick}, where, with Artem Chernikov, we considered a global analogue of the following theorem of Shelah: \begin{theorem*}{\cite[III.7.11]{shelah1990classification}} For a complete theory $T$, $\kappa_{\text{cdt}}(T) = \infty$ if and only if $\kappa_{\text{sct}}(T) = \infty$ or $\kappa_{\text{inp}}(T) = \infty$.
That is, $T$ has the tree property if and only if it has the tree property of the first kind or the tree property of the second kind. \end{theorem*} \noindent Shelah then asked if $\kappa_{\text{cdt}}(T) = \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$ in general \cite[Question III.7.14]{shelah1990classification}\footnote{This formulation is somewhat inaccurate. Shelah defines for $x \in \{\text{cdt},\text{inp},\text{sct}\}$, the cardinal invariant $\kappa r_{x}$, which is the least regular cardinal $\geq \kappa_{x}$. Shelah's precise question was about the possible equality $\kappa r_{\text{cdt}} = \kappa r_{\text{sct}} + \kappa r_{\text{inp}}$. For our purposes, we will only need to consider theories in which $\kappa_{x}$ is a successor cardinal, so we will not need to distinguish between these two variations.}. In \cite{ArtemNick}, we showed that this is true under the assumption that $T$ is countable. For a countable theory $T$, the only possible values of these invariants are $\aleph_{0}$, $\aleph_{1}$, and $\infty$\textemdash our proof handled each cardinal separately, using a different argument in each case. Here we consider this question without any hypothesis on the cardinality of $T$, answering the general question negatively (Theorem \ref{first main theorem} below): \begin{theorem*} There is a stable theory \(T\) so that \(\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). Moreover, it is consistent with ZFC that for every regular uncountable \(\kappa\), there is a stable theory \(T\) with \(|T| = \kappa\) and \(\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). \end{theorem*} To construct a theory $T$ so that $\kappa_{\text{cdt}}(T) \neq \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$, we use results on \emph{strong colorings} constructed by Galvin under GCH and later by Shelah in ZFC. These results show that, at suitable regular cardinals, Ramsey's theorem fails in a particularly dramatic way.
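As a point of orientation, we recall the classical example of such a failure (this example is standard and is not used in the sequel). Fix an injection $f: \omega_{1} \to \mathbb{R}$ and define Sierpi\'nski's coloring $c: [\omega_{1}]^{2} \to 2$ by setting, for $\alpha < \beta$,
\[
c(\{\alpha,\beta\}) = 1 \iff f(\alpha) < f(\beta).
\]
If $H \subseteq \omega_{1}$ were uncountable and homogeneous for $c$, then $f$ would be strictly monotone on $H$, yielding a strictly increasing or strictly decreasing $\omega_{1}$-sequence of reals; this is impossible, since choosing a rational strictly between each pair of consecutive terms would produce uncountably many distinct rationals. Hence $c$ has no uncountable homogeneous set and $\omega_{1} \not\to (\omega_{1})^{2}_{2}$.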
The statement $\kappa_{\text{cdt}}(T) = \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$ amounts to saying that a certain large global configuration gives rise to another large configuration which is moreover very uniform. This has the feel of many statements in the partition calculus and we show that, in fact, a coloring $f: [\kappa]^{2} \to 2$ can be used to construct a theory $T^{*}_{\kappa, f}$ such that the existence of a large inp- or sct-pattern relative to $T^{*}_{\kappa,f}$ implies some homogeneity for the coloring $f$. The theories built from the strong colorings of Galvin and Shelah, then, furnish ZFC counter-examples to Shelah's question, and also show that, consistently, for every regular uncountable cardinal $\kappa$, there is a theory $T$ with $|T| = \kappa$ and $\kappa_{\text{cdt}}(T) \neq \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$. This suggests that the aforementioned result of \cite{ArtemNick} for countable theories is in some sense the optimal result possible in ZFC. Our second theorem is motivated by the following theorem of Shelah: \begin{theorem*}{\cite[VI.4.7]{shelah1990classification}}\label{saturation} If $T$ is not simple, $\mathcal{D}$ is a regular ultrafilter over $I$, and $M$ is an $|I|^{++}$-saturated model of $T$, then $M^{I}/\mathcal{D}$ is not $|I|^{++}$-compact. \end{theorem*} \noindent In an exercise, Shelah claims that the hypothesis that $T$ is not simple in the above theorem may be replaced by the condition $\kappa_{\text{inp}}(T) > |I|^{+}$ and asks if $\kappa_{\text{cdt}}(T) > |I|^{+}$ suffices \cite[Question VI.4.20]{shelah1990classification}.
We prove, in Corollary \ref{second main theorem} and Theorem \ref{second main theorem part 2} respectively, the following: \begin{theorem*} There is a theory $T$ such that $\kappa_{\text{inp}}(T) = \lambda^{++}$ yet for any regular ultrafilter $\mathcal{D}$ on $\lambda$ and $\lambda^{++}$-saturated model $M \models T$, $M^{\lambda}/\mathcal{D}$ is $\lambda^{++}$-saturated. \end{theorem*} \begin{theorem*} If $\lambda = \lambda^{<\lambda}$, $\kappa_{\text{sct}}(T) > \lambda^{+}$, $M$ is a $\lambda^{++}$-saturated model of $T$, and $\mathcal{D}$ is a regular ultrafilter over $\lambda$, then $M^{\lambda}/\mathcal{D}$ is not $\lambda^{++}$-compact. \end{theorem*} \noindent The first of these results contradicts Shelah's Exercise VI.4.19 and \emph{a fortiori} answers Question VI.4.20 negatively. Although $\kappa_{\text{inp}}(T) > |I|^{+}$ and hence $\kappa_{\text{cdt}}(T) > |I|^{+}$ do not suffice to guarantee a loss of saturation in the ultrapower, one can ask if $\kappa_{\text{sct}}(T) > |I|^{+}$ does suffice. Shelah's original argument for Theorem \ref{saturation} does not generalize, but fortunately a recent new proof due to Malliaris and Shelah \cite{Malliaris:2012aa} does, and we point out in the second of these two theorems how the revised question can be answered, modulo a mild set-theoretic hypothesis, by an easy and direct adaptation of their argument. These results suggest that the rough-scale asymptotic structure revealed by studying the $\lambda^{++}$-compactness of ultrapowers on $\lambda$ is global in nature and differs from the picture suggested by the local case considered by Shelah. In order to construct these examples, it is necessary to build a theory capable of coding a complicated strong coloring yet simple enough that the invariants are still computable.
This was accomplished by a method inspired by Medvedev's $\mathbb{Q}$ACFA construction \cite{Medvedev:2015aa}, realizing the theory as a union of theories in a system of finite reducts, each of which is the theory of a Fra\"iss\'e limit. The theories in the finite reducts are $\aleph_{0}$-categorical and eliminate quantifiers, and one may apply the $\Delta$-system lemma to the finite reducts arising in global configurations. Altogether, this makes computing the invariants tractable. \textbf{Acknowledgements:} This is work done as part of our dissertation under the supervision of Thomas Scanlon. We would additionally like to acknowledge very helpful input from Artem Chernikov, Leo Harrington, Alex Kruckman, and Maryanthe Malliaris, as well as Assaf Rinot, from whom we first learned of Galvin's work on strong colorings. Finally, we would like to thank the anonymous referee for more than one especially thorough reading which did a great deal to improve this paper. \section{Preliminaries} \subsection{Notions from Classification Theory} For the most part, we follow standard model-theoretic notation. We may write $x$ or $a$ to denote a tuple of variables or elements, which need not have length 1. If $x$ is a tuple of variables we write $l(x)$ to denote its length and for each $l < l(x)$, we write $(x)_{l}$ to denote the $l$th coordinate of $x$. If $\varphi(x)$ is a formula and $t \in \{0,1\}$, we write $\varphi(x)^{t}$ to denote $\varphi(x)$ if $t = 1$ and $\neg \varphi(x)$ if $t = 0$. In the following definitions, we will refer to collections of tuples indexed by arrays and trees. For cardinals $\kappa$ and $\lambda$, we use the notation $\unlhd$, $<_{lex}$, $\wedge$, and $\perp$ to refer to the tree partial order, the lexicographic order, the binary meet function, and the relation of incomparability on $\kappa^{<\lambda}$, respectively.
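As a quick illustration of this notation (a toy example of our own): in $\omega^{<\omega}$, take $\eta = \langle 0,1 \rangle$ and $\nu = \langle 0,2,0 \rangle$. Then $\langle 0 \rangle \unlhd \eta$, $\eta \wedge \nu = \langle 0 \rangle$, and $\eta \perp \nu$, since neither node extends the other; moreover $\eta <_{lex} \nu$, as the two nodes first differ at their second coordinates and $1 < 2$.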
Given an element $\eta \in \kappa^{<\lambda}$, we write $l(\eta)$ to denote the length of $\eta$\textemdash that is, the unique $\alpha < \lambda$ such that $\eta \in \kappa^{\alpha}$\textemdash and if $l(\eta)\geq \beta$, we write $\eta | \beta$ for the unique $\nu \unlhd \eta$ with $l(\nu) = \beta$. \begin{defn} \label{patterns} \cite[Definitions III.7.2, III.7.3, III.7.5]{shelah1990classification} \begin{enumerate} \item A \emph{cdt-pattern of height} \(\kappa\) is a sequence of formulas \(\varphi_{i}(x;y_{i})\) (\(i < \kappa, i \text{ successor}\)) and numbers \(n_{i} < \omega\), and a tree of tuples \((a_{\eta})_{\eta \in \omega^{<\kappa}}\) for which \begin{enumerate} \item \(p_{\eta} = \{\varphi_{i}(x;a_{\eta | i}) : i \text{ successor }, i < \kappa\}\) is consistent for \(\eta \in \omega^{\kappa}\). \item \(\{\varphi_{i} (x;a_{\eta \frown \langle \alpha \rangle}) : \alpha < \omega , i = l(\eta) + 1\}\) is \(n_{i}\)-inconsistent. \end{enumerate} \item An \emph{inp-pattern of height} \(\kappa\) is a sequence of formulas \(\varphi_{i}(x;y_{i})\) \((i < \kappa)\), sequences \((a_{i,\alpha}: \alpha < \omega)\), and numbers \(n_{i} <\omega\) such that \begin{enumerate} \item For any \(\eta \in \omega^{\kappa}\), \(\{ \varphi_{i}(x;a_{i,\eta(i)}) : i < \kappa\}\) is consistent. \item For any \(i < \kappa\), \(\{\varphi_{i}(x;a_{i,\alpha}) : \alpha < \omega\}\) is \(n_{i}\)-inconsistent. \end{enumerate} \item An \emph{sct-pattern of height} \(\kappa\) is a sequence of formulas \(\varphi_{i}(x;y_{i})\) \((i < \kappa)\) and a tree of tuples \((a_{\eta})_{\eta \in \omega^{<\kappa}}\) such that \begin{enumerate} \item For every \(\eta \in \omega^{\kappa}\), \(\{\varphi_{\alpha}(x;a_{\eta | \alpha}) : 0 < \alpha < \kappa, \alpha \text{ successor}\}\) is consistent. 
\item If \(\eta \in \omega^{\alpha}\), \(\nu \in \omega^{\beta}\), \(\alpha, \beta\) are successors, and \(\nu \perp \eta\), then \(\{\varphi_{\alpha}(x;a_{\eta}), \varphi_{\beta}(x;a_{\nu})\}\) is inconsistent. \end{enumerate} \item For \(X \in \{\text{cdt}, \text{sct}, \text{inp}\}\), we define \(\kappa_{X}^{n}(T)\) to be the first cardinal \(\kappa\) such that there is no \(X\)-pattern of height \(\kappa\) in \(n\) free variables. We define \(\kappa_{X}(T) = \sup_{n} \{\kappa_{X}^{n}(T)\}\). \end{enumerate} \end{defn} When introducing these definitions, Shelah notes that cdt stands for ``contradictory types'' and inp stands for ``independent partitions.'' He does not explain the meaning of sct, but presumably it is intended to abbreviate something like ``strongly contradictory types''. \begin{fact} \label{easy inequalities} \cite[Observation 3.1]{ArtemNick} Suppose $T$ is a complete theory in the language $L$. \begin{enumerate} \item If $T$ is stable, then $\kappa_{\mathrm{cdt}}(T) \leq |L|^{+}$. \item $\kappa_{\mathrm{sct}}(T) \leq \kappa_{\mathrm{cdt}}(T)$ and $\kappa_{\mathrm{inp}}(T) \leq \kappa_{\mathrm{cdt}}(T)$. \end{enumerate} \end{fact} \begin{exmp} Fix a regular uncountable cardinal \(\kappa\) and let \(L = \langle E_{\alpha} : \alpha < \kappa \rangle\) be a language consisting of $\kappa$ many binary relations. Let $T_{\text{sct}}$ be the model companion of the $L$-theory asserting that each $E_{\alpha}$ is an equivalence relation and $\alpha < \beta$ implies $E_{\beta}$ refines $E_{\alpha}$. Let $T_{\text{inp}}$ be the model companion of the $L$-theory which only asserts that each $E_{\alpha}$ is an equivalence relation. In other words, $T_{\text{sct}}$ is the generic theory of $\kappa$ refining equivalence relations and $T_{\text{inp}}$ is the generic theory of $\kappa$ independent equivalence relations.
Now \(\kappa_{\text{cdt}}(T_{\text{sct}}) = \kappa_{\text{cdt}}(T_{\text{inp}}) = \kappa^{+}\), and further \(\kappa_{\text{sct}}(T_{\text{sct}}) = \kappa_{\text{inp}}(T_{\text{inp}}) = \kappa^{+}\). However, we have \(\kappa_{\text{inp}}(T_{\text{sct}}) = \aleph_{0}\) and \(\kappa_{\text{sct}}(T_{\text{inp}}) = \aleph_{1}\). Computing each of the invariants is straightforward using quantifier elimination for $T_{\text{inp}}$ and $T_{\text{sct}}$, with the exception of $\kappa_{\text{sct}}(T_{\text{inp}}) = \aleph_{1}$. The fact that $\kappa_{\text{cdt}}(T_{\text{inp}}) \geq \aleph_{1}$ implies that $\kappa_{\text{sct}}(T_{\text{inp}}) \geq \aleph_{1}$ by \cite[Proposition 3.14]{ArtemNick}. If $\kappa_{\text{sct}}(T_{\text{inp}}) > \aleph_{1}$ then there is an sct-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \omega_{1})$, $(a_{\eta})_{\eta \in \omega^{<\omega_{1}}}$. Let $w_{\alpha}$ be the finite set of indices $\beta$ such that the symbol $E_{\beta}$ appears in $\varphi_{\alpha}(x;y_{\alpha})$. After passing to an sct-pattern of the same height, we may assume that the $w_{\alpha}$ form a $\Delta$-system (see Fact \ref{delta-system lemma} below), using that $\kappa$ is regular and uncountable. Now it is easy to check using quantifier elimination for $T_{\text{inp}}$ that there are incomparable $\eta \in \omega^{\alpha}, \nu \in \omega^{\beta}$ for some $\alpha,\beta < \omega_{1}$ such that $\{\varphi_{\alpha}(x;a_{\eta}), \varphi_{\beta}(x;a_{\nu})\}$ is consistent, a contradiction.\end{exmp} The following simple observation will be useful: \begin{lem} \label{no equalities} Suppose $\kappa$ is an infinite cardinal. \begin{enumerate} \item Suppose $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$ is an inp-pattern with $l(x) = 1$. Then each formula $\varphi_{\alpha}(x;a_{\alpha,i})$ is non-algebraic.
\item Suppose $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ is an sct-pattern such that $l(x)$ is minimal among sct-patterns of height $\kappa$ modulo $T$. Then no formula $\varphi_{\alpha}(x;a_{\eta})$ with $\eta \in \omega^{\alpha}$ implies $(x)_{l} = c$ for some $l < l(x)$ and parameter $c$. \end{enumerate} \end{lem} \begin{proof} (1) Given any $\alpha < \kappa$ and $i < \omega$, we may, for each $j < \omega$, choose a realization $c_{j} \models \{\varphi_{\alpha}(x;a_{\alpha,i}), \varphi_{\alpha+1}(x;a_{\alpha+1,j})\}$, which is consistent by the definition of an inp-pattern. Since $\{\varphi_{\alpha+1}(x;a_{\alpha+1,j}) : j < \omega\}$ is $k_{\alpha+1}$-inconsistent, each $c_{j}$ can realize at most $k_{\alpha+1}-1$ many formulas in this set, so $\{c_{j} : j < \omega\}$ must be an infinite set of realizations of $\varphi_{\alpha}(x;a_{\alpha,i})$, which shows $\varphi_{\alpha}(x;a_{\alpha,i})$ is non-algebraic. (2) Suppose not, so there are $\alpha < \kappa$, $\eta \in \omega^{\alpha}$, and $l < l(x)$ so that $\varphi_{\alpha}(x;a_{\eta}) \vdash (x)_{l} = c$ for some parameter $c$; without loss of generality $l = l(x) - 1$. If $l(x) = 1$, then it follows from the fact that $\{\varphi_{\alpha}(x;a_{\eta}),\varphi_{\alpha+1}(x;a_{\eta \frown \langle i \rangle})\}$ is consistent for each $i < \omega$ that $c \models \{\varphi_{\alpha+1}(x;a_{\eta \frown \langle i \rangle}) : i < \omega\}$, contradicting the fact that this set of formulas is $2$-inconsistent. On the other hand, if $l(x) > 1$, we will let $x' = (x_{0}, \ldots, x_{l(x)-2})$, so that $x = (x',x_{l(x)-1})$ and let $b_{\nu} = (c,a_{\eta \frown \nu})$ for all $\nu \in \omega^{<\kappa}$. Finally, we set $\psi_{\beta}(x';z_{\beta}) = \varphi_{\alpha + \beta}(x';x_{l(x)-1},y_{\alpha+\beta})$.
Since for any $\nu \in \omega^{\kappa}$, $\{\varphi_{\alpha + \beta}(x;a_{\eta \frown (\nu | \beta)}) : \beta < \kappa\}$ is consistent and any realization will be of the form $(c',c)$ for some $c'$, it follows that $\{\psi_{\beta}(x';b_{\nu | \beta}) : \beta < \kappa\}$ is consistent. The inconsistency requirement is immediate so it follows that $(\psi_{\beta}(x';z_{\beta}))_{\beta < \kappa}$, $(b_{\eta})_{\eta \in \omega^{<\kappa}}$ is an sct-pattern of height $\kappa$ in fewer than $l(x)$ variables, contradicting the minimality of $l(x)$. \end{proof} \begin{rem} Note that by \cite[Corollary 2.9]{ChernikovNTP2}, if $T$ has an inp-pattern of height $\kappa$, then there is also an inp-pattern of height $\kappa$ in a single free variable, so the hypothesis in (1) that $l(x) = 1$ is equivalent to the requirement that $l(x)$ be minimal among inp-patterns of height $\kappa$. \end{rem} In order to simplify many of the arguments below, it will be useful to work with indiscernible trees and arrays. Define a language \(L_{s,\lambda} = \{\vartriangleleft, \wedge, <_{lex}, P_{\alpha} : \alpha < \lambda\}\) where \(\lambda\) is a cardinal. We may view the tree \(\kappa^{<\lambda}\) as an \(L_{s,\lambda}\)-structure in a natural way, giving \(\vartriangleleft\), \(\wedge\), and \(<_{lex}\) their eponymous interpretations, and interpreting \(P_{\alpha}\) as a predicate which identifies the \(\alpha\)th level. Note that we may define the relation $\eta \perp \nu$ in this language by $\neg (\eta \unlhd \nu) \wedge \neg (\nu \unlhd \eta)$. See \cite{ArtemNick} and \cite{KimKimScow} for a detailed treatment. \begin{defn} \text{ } \begin{enumerate} \item We say \((a_{\eta})_{\eta \in \kappa^{<\lambda}}\) is an \(s\)\emph{-indiscernible tree over A} if \[ \text{qftp}_{L_{s,\lambda}}(\eta_{0}, \ldots, \eta_{n-1}) = \text{qftp}_{L_{s,\lambda}}(\nu_{0}, \ldots, \nu_{n-1}) \] implies \(\text{tp}(a_{\eta_{0}}, \ldots, a_{\eta_{n-1}}/A) = \text{tp}(a_{\nu_{0}}, \ldots, a_{\nu_{n-1}}/A)\). 
\item We say \((a_{\alpha,i})_{\alpha < \kappa, i < \omega}\) is a \emph{mutually indiscernible array} over $A$ if, for all $\alpha < \kappa$, $(a_{\alpha, i})_{i < \omega}$ is a sequence indiscernible over $A \cup \{a_{\beta,j} : \beta < \kappa, \beta \neq \alpha, j < \omega\}$. \end{enumerate} \end{defn} \begin{fact} \cite[Theorem 4.3]{KimKimScow} \label{s-indiscernible extraction} Given a collection of tuples $(a_{\eta})_{\eta \in \omega^{<\omega}}$, there is $(b_{\eta})_{\eta \in \omega^{<\omega}}$ which is $s$-indiscernible and \emph{locally based} on $(a_{\eta})_{\eta \in \omega^{<\omega}}$, that is, given any $\overline{\eta} = (\eta_{0},\ldots, \eta_{n-1})$ from $\omega^{<\omega}$ and $\varphi(x_{0},\ldots, x_{n-1})$ such that $\models \varphi(b_{\eta_{0}},\ldots, b_{\eta_{n-1}})$, there is $\overline{\nu} = (\nu_{0},\ldots, \nu_{n-1})$ from $\omega^{<\omega}$ with $\mathrm{qftp}_{L_{s,\omega}}(\overline{\eta}) = \mathrm{qftp}_{L_{s,\omega}}(\overline{\nu})$ and $\models \varphi(a_{\nu_{0}},\ldots, a_{\nu_{n-1}})$. \end{fact} \begin{fact} \cite[Lemma 1.2(2)]{ChernikovNTP2} \label{artem's lemma} Let $(a_{\alpha,i})_{\alpha < n, i < \omega}$ be an array of parameters. Given a finite set of formulas $\Delta$ and $N < \omega$, we can find, for each $\alpha < n$, $i_{\alpha,0} < i_{\alpha,1} < \ldots < i_{\alpha,N-1}$ so that $(a_{\alpha,i_{\alpha,j}})_{\alpha < n, j < N}$ is a $\Delta$-mutually indiscernible array\textemdash i.e.\ for all $\alpha < n$, $(a_{\alpha,i_{\alpha,j}})_{j < N}$ is $\Delta$-indiscernible over $\{a_{\beta,i_{\beta,j}} : \beta \neq \alpha, j < N\}$. \end{fact} \begin{fact} \label{s-IndiscTreeProp} \cite[Lemma 2.2]{ArtemNick} Let $(a_\eta : \eta \in \kappa^{<\lambda})$ be a tree that is $s$-indiscernible over a set of parameters $C$. \begin{enumerate} \item All paths have the same type over $C$: for any $\eta, \nu \in \kappa^{\lambda}$, $tp((a_{\eta | \alpha})_{\alpha < \lambda}/C) = tp((a_{\nu|\alpha})_{\alpha < \lambda}/C)$.
\item Suppose $\{\eta_{\alpha} : \alpha < \gamma\} \subseteq \kappa^{<\lambda}$ satisfies $\eta_{\alpha} \perp \eta_{\alpha'}$ whenever $\alpha \neq \alpha'$. Then the array $(b_{\alpha, \beta})_{\alpha < \gamma, \beta < \kappa}$ defined by $$ b_{\alpha, \beta} = a_{\eta_{\alpha} \frown \langle \beta \rangle} $$ is mutually indiscernible over $C$. \end{enumerate} \end{fact} Parts (1) and (2) of the following lemma are essentially \cite[Lemma 2.2]{ChernikovNTP2} and \cite[Lemma 3.1(1)]{ArtemNick}, respectively, but we sketch the argument in order to point out that, from an inp- or sct-pattern of height $\kappa$, we can find one with appropriately indiscernible parameters, leaving the formulas fixed. \begin{lem} \label{witness} \begin{enumerate} \item If there is an inp-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$ of height $\kappa$ modulo $T$, then there is an inp-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a'_{\alpha, i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$ such that $(a'_{\alpha, i})_{\alpha < \kappa, i < \omega}$ is a mutually indiscernible array. \item If there is an sct-pattern (cdt-pattern) of height $\kappa$ modulo $T$, then there is an sct-pattern (cdt-pattern) $\varphi_{\alpha}(x;y_{\alpha})$, $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ such that $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ is an $s$-indiscernible tree. \end{enumerate} \end{lem} \begin{proof} (1) Given an inp-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$, let $\Gamma(z_{\alpha,i} : \alpha < \kappa, i < \omega)$ be a partial type that naturally expresses the following: \begin{itemize} \item $(z_{\alpha,i})_{\alpha < \kappa, i < \omega}$ is a mutually indiscernible array. \item $\{\varphi_{\alpha}(x;z_{\alpha,i}) : i < \omega\}$ is $k_{\alpha}$-inconsistent.
\item For every $f: \kappa \to \omega$, $\{\varphi_{\alpha}(x;z_{\alpha,f(\alpha)}) : \alpha < \kappa\}$ is consistent. \end{itemize} By Lemma \ref{artem's lemma}, any finite subset of this partial type can be satisfied by an array from $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$ and therefore $\Gamma$ is consistent by compactness. A realization $(a'_{\alpha,i})_{\alpha < \kappa, i < \omega}$ yields the desired inp-pattern. (2) is entirely similar: given an sct-pattern $\varphi_{\alpha}(x;y_{\alpha})$, $(a_{\eta})_{\eta \in \omega^{<\kappa}}$, apply Fact \ref{s-indiscernible extraction} and compactness to obtain $(b_{\eta})_{\eta \in \omega^{<\kappa}}$, which is $s$-indiscernible and has the property that for any formula $\varphi(x_{0},\ldots, x_{n-1})$ and $\overline{\eta} = (\eta_{0},\ldots, \eta_{n-1})$ from $\omega^{<\kappa}$, if $\varphi(b_{\eta_{0}},\ldots, b_{\eta_{n-1}})$, there is $\overline{\nu} = (\nu_{0},\ldots, \nu_{n-1})$ with $\mathrm{qftp}_{L_{s,\kappa}}(\overline{\eta}) = \mathrm{qftp}_{L_{s,\kappa}}(\overline{\nu})$ such that $\varphi(a_{\nu_{0}},\ldots, a_{\nu_{n-1}})$. From this property, it easily follows that, for all $\eta \in \omega^{\alpha}$, $\{\varphi_{\alpha+1}(x;b_{\eta \frown \langle i \rangle}) : i < \omega \}$ is $k_{\alpha+1}$-inconsistent and, for all $\eta \in \omega^{\kappa}$, $\{\varphi_{\alpha}(x;b_{\eta | \alpha}) : \alpha < \kappa\}$ is consistent. Therefore $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(b_{\eta})_{\eta \in \omega^{<\kappa}}$ is the desired sct-pattern. \end{proof} \subsection{Fra\"iss\'e Theory} We will recall some basic facts from Fra\"iss\'e theory, from \cite[Section 7.1]{hodges1993model}. Let \(L\) be a finite language and let \(\mathbb{K}\) be a non-empty finite or countable set of finitely generated \(L\)-structures which has the hereditary property (HP), the joint embedding property (JEP), and the amalgamation property (AP). Such a class $\mathbb{K}$ is called a \emph{Fra\"iss\'e class}.
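The canonical example to keep in mind (standard, and recorded here only for concreteness) is the class $\mathbb{K}_{0}$ of all finite linear orders in the language $\{<\}$. This class clearly has HP and JEP, and it has AP since two finite linear orders extending a common suborder can be merged into a single finite linear order, so $\mathbb{K}_{0}$ is a Fra\"iss\'e class; its limit, in the sense of the theorem below, is the dense linear order $(\mathbb{Q},<)$ without endpoints. Note also that $\mathbb{K}_{0}$ is uniformly locally finite with $g(n) = n$, as the language is relational. We now return to the general setting of a Fra\"iss\'e class $\mathbb{K}$ as above.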
Then there is an \(L\)-structure \(D\), unique up to isomorphism, such that \(D\) has cardinality \(\leq \aleph_{0}\), \(\mathbb{K}\) is the age of \(D\), and \(D\) is ultrahomogeneous. We call $D$ the \emph{Fra\"iss\'e limit} of $\mathbb{K}$, which we sometimes denote $\text{Flim}(\mathbb{K})$. Given a subset $A$ of the $L$-structure $C$, we write $\langle A \rangle^{C}_{L}$ for the $L$-substructure of $C$ generated by $A$. We say that $\mathbb{K}$ is \emph{uniformly locally finite} if there is a function $g: \omega \to \omega$ such that a structure in $\mathbb{K}$ generated by $n$ elements has cardinality at most $g(n)$. If \(\mathbb{K}\) is a countable uniformly locally finite set of finitely generated \(L\)-structures and $T = \text{Th}(D)$, then \(T\) is \(\aleph_{0}\)-categorical and has quantifier elimination. The following equivalent formulation of ultrahomogeneity is well-known, see, e.g., \cite[Proposition 2.3]{KPT}: \begin{fact} Let \(A\) be a countable structure. Then \(A\) is ultrahomogeneous if and only if it satisfies the following extension property: if \(B,C\) are finitely generated and can be embedded into \(A\), and \(f: B \to A\), \(g: B \to C\) are embeddings, then there is an embedding \(h: C \to A\) such that \(h \circ g = f\). \end{fact} The following is a straightforward generalization of \cite[Proposition 5.2]{KPT}: \begin{lem}\label{KPTredux} Suppose \(L \subseteq L'\), \(\mathbb{K}\) is a Fra\"iss\'e class of \(L\)-structures, and \(\mathbb{K}'\) is a Fra\"iss\'e class of \(L'\)-structures satisfying the following two conditions: \begin{enumerate} \item \(A \in \mathbb{K}\) if and only if there is a \(D' \in \mathbb{K}'\) such that \(A\) is an $L$-substructure of \(D' \upharpoonright L\).
\item If \(A,B \in \mathbb{K}\), \(\pi: A \to B\) is an \(L\)-embedding, and \(C \in \mathbb{K}'\) with \(C = \langle A \rangle^{C}_{L'}\), then there is a \(D \in \mathbb{K}'\), such that $B$ is an $L$-substructure of $D\upharpoonright L$, and an \(L'\)-embedding \(\tilde{\pi}: C \to D\) extending \(\pi\). \end{enumerate} Then \(\mathrm{Flim}(\mathbb{K}') \upharpoonright L = \mathrm{Flim}(\mathbb{K})\). \end{lem} \begin{proof} Let \(F' = \text{Flim}(\mathbb{K}')\) and set \(F = F' \upharpoonright L\). Fix \(A_{0}, B_{0} \in \mathbb{K}\) and an \(L\)-embedding \(\pi: A_{0} \to B_{0}\). Suppose \(\varphi: A_{0} \to F\) is an \(L\)-embedding. Let \(E = \langle \varphi(A_{0}) \rangle^{F'}_{L'}\). Up to isomorphism over \(A_{0}\), there is a unique \(C \in \mathbb{K}'\) containing \(A_{0}\) such that \(C = \langle A_{0} \rangle^{C}_{L'}\) and \(\tilde{\varphi} : C \to F'\) is an \(L'\)-embedding extending \(\varphi\) with \(E = \tilde{\varphi}(C)\), since given another such $C'$ and $\tilde{\varphi}' :C' \to F'$, the map $\tilde{\varphi}'^{-1} \circ \tilde{\varphi} : C \to C'$ is an $L'$-isomorphism which is the identity on $A_{0}$. By (2), there is some \(D \in \mathbb{K}'\) with \(B_{0} \subseteq D \upharpoonright L\) and an \(L'\)-embedding \(\tilde{\pi}: C \to D\) extending \(\pi\). By the extension property for \(F'\), there is an \(L'\)-embedding \(\psi: D \to F'\) such that \(\psi \circ \tilde{\pi} = \tilde{\varphi}\) and hence \(\psi \circ \pi = \varphi\). As \(\psi \upharpoonright B_{0}\) is an \(L\)-embedding, this shows the extension property for \(F\). So \(F\) is ultrahomogeneous, and \(\text{Age}(F) = \mathbb{K}\) by (1), so \(F \cong \text{Flim}(\mathbb{K})\), which completes the proof.
\end{proof} \subsection{Strong Colorings} \begin{defn} \cite[Definition A.1.2]{Sh:g} Given cardinals $\lambda, \mu, \theta,$ and $\chi$, we write \(\text{Pr}_{1}(\lambda, \mu, \theta, \chi)\) for the assertion: there is a coloring \(c: [\lambda]^{2} \to \theta\) such that for any \(A \subseteq [\lambda]^{<\chi}\) of size \(\mu\) consisting of pairwise disjoint subsets of \(\lambda\) and any color \(\gamma < \theta\), there are \(a,b \in A\) with \(\max(a) < \min(b)\) such that \(c(\{\alpha, \beta\}) = \gamma\) for all \(\alpha \in a\), \(\beta \in b\). \end{defn} Note, for example, that $\text{Pr}_{1}(\lambda, \lambda, 2, 2)$ holds if and only if $\lambda \not\to (\lambda)^{2}_{2}$, i.e.\ $\lambda$ is not weakly compact. \begin{obs} \label{monotonicity} For fixed $\lambda$, if $\mu \leq \mu'$, $\theta' \leq \theta$, $\chi' \leq \chi$, then $$ \text{Pr}_{1}(\lambda, \mu, \theta, \chi) \implies \text{Pr}_{1}(\lambda, \mu', \theta', \chi'). $$ \end{obs} \begin{proof} Fix $c: [\lambda]^{2} \to \theta$ witnessing $\text{Pr}_{1}(\lambda, \mu, \theta, \chi)$. Define a new coloring $c': [\lambda]^{2} \to \theta'$ by $c'(\{\alpha, \beta\}) = c(\{\alpha, \beta\})$ if $c(\{\alpha, \beta\}) < \theta'$ and $c'(\{\alpha, \beta\}) = 0$ otherwise. Now suppose $A \subseteq [\lambda]^{< \chi'}$ is a family of pairwise disjoint sets with $|A| \geq \mu'$. Then, in particular, $A \subseteq [\lambda]^{<\chi}$ and $|A| \geq \mu$, so for any $\gamma < \theta'$, as $\gamma < \theta$, there are $a,b \in A$ with $\text{max}(a) < \text{min}(b)$ and $c'(\{\alpha,\beta\}) = c(\{\alpha,\beta\}) = \gamma$ for all $\alpha \in a$, $\beta \in b$, using $\text{Pr}_{1}(\lambda, \mu, \theta, \chi)$ and the definition of $c'$. This shows that $c'$ witnesses $\text{Pr}_{1}(\lambda, \mu',\theta',\chi')$.
\end{proof} In the arguments that follow, we will only make use of instances of $\mathrm{Pr}_{1}(\lambda^{+},\lambda^{+}, 2, \aleph_{0})$, which we will obtain from stronger results of Galvin and of Shelah, using Observation \ref{monotonicity}. Galvin proved \(\text{Pr}_{1}\) holds in some form for arbitrary successor cardinals from instances of GCH. Considerably later, Shelah proved that $\text{Pr}_{1}$ holds in a strong form for the double-successors of arbitrary regular cardinals in ZFC. \begin{fact}\cite[Conclusion 4.2]{Sh:572} \label{ShelahPr} The principle \(\text{Pr}_{1}(\lambda^{++}, \lambda^{++}, \lambda^{++}, \lambda)\) holds for every regular cardinal \(\lambda\). \end{fact} The above theorem of Shelah suffices to produce a ZFC counterexample to the equality $\kappa_{\mathrm{cdt}}(T) = \kappa_{\mathrm{inp}}(T) + \kappa_{\mathrm{sct}}(T)$, but we will need Galvin's result on arbitrary successor cardinals in order to get the consistency result contained in Theorem \ref{first main theorem}. Unfortunately, Galvin's result is only implicit in \cite[Lemma 4.1]{Galvin80} in a certain construction, and the argument there refers to earlier sections of his paper. So, following a suggestion of the referee, we have opted for providing a self-contained proof. The argument below merely consolidates Galvin's argument in \cite[Lemma 4.1]{Galvin80} and recasts it in Shelah's $\mathrm{Pr}_{1}$ notation, adding no new ideas. It will be useful to introduce the following notation: given sets $X$ and $Y$, let $X \otimes Y = \{\{x,y\} : x \in X,y \in Y\}$. \begin{lem} \cite[Lemma 3.1]{Galvin80} \label{matrix lemma} Let $\lambda$ be an infinite cardinal and $A$ be a set. Suppose that, for each $\rho < \lambda$, we have a set $I_{\rho}$ with $|I_{\rho}| = \lambda$ and finite sets $E^{\xi}_{\rho} \subseteq A$ $(\xi \in I_{\rho})$ so that for any $a \in A$, $|\{\xi \in I_{\rho} : a \in E^{\xi}_{\rho}\}| < \aleph_{0}$. 
Then there are pairwise disjoint sets $(A_{\nu} : \nu < \lambda)$ so that for all $\nu < \lambda$ and $\rho < \lambda$ $$ |\{\xi \in I_{\rho} : E^{\xi}_{\rho} \subseteq A_{\nu}\}| = \lambda. $$ \end{lem} \begin{proof} Identify $I_{\rho}$ with $\lambda$ for all $\rho$ and let $<^{*}$ be a well-ordering of $\lambda \times \lambda \times \lambda$ in order-type $\lambda$. By recursion on $(\lambda \times \lambda \times \lambda, <^{*})$, define $(\xi_{(\alpha, \beta, i)} : (\alpha, \beta, i) \in \lambda \times \lambda \times \lambda)$ as follows: if $(\xi_{(\gamma, \delta, j)} : (\gamma, \delta, j) <^{*} (\alpha, \beta, i))$ has been defined, choose $\xi_{(\alpha, \beta, i)}$ to be the least $\xi \in I_{\alpha}$ which is distinct from $\xi_{(\alpha, \delta, j)}$ for all $(\alpha, \delta, j) <^{*} (\alpha, \beta, i)$ and satisfies $$ E^{\xi}_{\alpha} \cap \left( \bigcup_{\substack{(\gamma, \delta, j) <^{*} (\alpha, \beta, i) \\ \delta \neq \beta}} E^{\xi_{(\gamma, \delta, j)}}_{\gamma} \right) = \emptyset. $$ There is such a $\xi$: fewer than $\lambda$ elements of $I_{\alpha}$ have been used at earlier stages, the displayed union is a set of size $< \lambda$, and each of its elements $a$ rules out only the finitely many $\xi \in I_{\alpha}$ with $a \in E^{\xi}_{\alpha}$, so in total fewer than $\lambda$ elements of $I_{\alpha}$ are excluded. Now define the sequence of sets $(A_{\nu} : \nu < \lambda)$ by $$ A_{\nu} = \bigcup_{\alpha, i < \lambda} E^{\xi_{(\alpha, \nu, i)}}_{\alpha}. $$ The $A_{\nu}$ are pairwise disjoint: given $E^{\xi_{(\alpha, \nu, i)}}_{\alpha}$ and $E^{\xi_{(\alpha', \nu', i')}}_{\alpha'}$ with $\nu \neq \nu'$, whichever set was chosen later in the recursion was chosen disjoint from the other. Moreover, for all $\nu < \lambda$ and $\rho < \lambda$, the ordinals $\xi_{(\rho, \nu, i)}$ for $i < \lambda$ are pairwise distinct elements of $I_{\rho}$ with $E^{\xi_{(\rho, \nu, i)}}_{\rho} \subseteq A_{\nu}$, which gives $|\{\xi \in I_{\rho} : E^{\xi}_{\rho} \subseteq A_{\nu}\}| = \lambda$. \end{proof} \begin{thm} \cite[Lemma 4.1]{Galvin80} \label{GalvinPr} If \(\lambda\) is an infinite cardinal and \(2^{\lambda} = \lambda^{+}\), then \(\text{Pr}_{1}(\lambda^{+}, \lambda^{+}, \lambda, \aleph_{0})\). \end{thm} \begin{proof} Let $\langle \overline{B}_{\gamma} : \gamma < \lambda^{+} \rangle$ enumerate all $\lambda$-sequences $\overline{B} = \langle B_{\xi} : \xi < \lambda\rangle$ of pairwise disjoint finite subsets of $\lambda^{+}$. This is possible as $2^{\lambda} = \lambda^{+}$.
\textbf{Claim 1}: There is a sequence of pairwise disjoint sets $\langle K_{\nu} : \nu < \lambda \rangle$ so that, for all $\nu < \lambda$, $K_{\nu} \subseteq [\lambda^{+}]^{2}$ and, for all $\alpha < \lambda^{+}$, we have $(A)$ implies $(B)$, where: \begin{enumerate} \item[(A)] $\gamma < \alpha$, $\bigcup_{\xi < \lambda} B_{\gamma, \xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, and $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}| = \lambda$. \item[(B)] $|\{ \xi : B_{\gamma,\xi} \otimes (X \cup \{\alpha\}) \subseteq K_{\nu} \}| = \lambda$. \end{enumerate} \emph{Proof of claim}: By induction on $\alpha < \lambda^{+}$, we will construct, for every $\nu < \lambda$, a set $K_{\nu}(\alpha) \subseteq \alpha$ and define $K_{\nu} = \{\{\beta, \alpha\} : \alpha < \lambda^{+}, \beta \in K_{\nu}(\alpha)\}$. We will define the sets $K_{\nu}(\alpha)$ to be pairwise disjoint and so that: \begin{enumerate} \item[(*)] Whenever $\gamma < \alpha$, $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, and $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}| = \lambda$, then $|\{\xi : B_{\gamma,\xi} \otimes (X \cup \{\alpha\}) \subseteq K_{\nu}\}| = \lambda$. \end{enumerate} Note that if $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, then it makes sense to write $\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}$, since $K_{\nu} \cap [\alpha]^{2}$ has already been defined. Suppose we have constructed $K_{\nu}(\beta)$ for every $\nu < \lambda$ and $\beta < \alpha$. Let $\langle (\nu_{\rho},\gamma_{\rho},X_{\rho}) : \rho < \lambda \rangle$ enumerate all triples $(\nu,\gamma,X)$ satisfying the hypothesis of $(*)$ for $\alpha$.
Apply Lemma \ref{matrix lemma} with $A = \alpha$, $I_{\rho} = \{\xi : B_{\gamma_{\rho},\xi} \otimes X_{\rho} \subseteq K_{\nu_{\rho}}\}$, and $E^{\xi}_{\rho} = B_{\gamma_{\rho},\xi}$ to obtain the disjoint sets $A_{\nu} := K_{\nu}(\alpha)$ for all $\nu < \lambda$. Then for all $\nu < \lambda$, we have that if $\gamma < \alpha$, $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, and $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}| = \lambda$, then $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu} \text{ and } B_{\gamma,\xi} \subseteq K_{\nu}(\alpha)\}| = \lambda$. Since the set $\{\xi : B_{\gamma,\xi} \otimes (X \cup \{\alpha\}) \subseteq K_{\nu}\}$ is equal to the set $\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu} \text{ and } B_{\gamma,\xi} \subseteq K_{\nu}(\alpha)\}$, by the definition of $K_{\nu}(\alpha)$, this completes the proof of the claim. \qed \textbf{Claim 2}: If $\nu < \lambda$ and $\langle v_{\xi} : \xi < \lambda^{+} \rangle$ is a sequence of pairwise disjoint finite subsets of $\lambda^{+}$, then there are $\xi < \eta < \lambda^{+}$ so that $v_{\xi} \otimes v_{\eta} \subseteq K_{\nu}$. \emph{Proof of claim}: There is an index $\gamma < \lambda^{+}$ such that $B_{\gamma,\xi} = v_{\xi}$ for all $\xi < \lambda$. By the regularity of $\lambda^{+}$, there is some $\beta < \lambda^{+}$ so that $\bigcup_{\xi < \lambda} v_{\xi} \subseteq \beta$ and we may further choose $\beta$ so that $\gamma < \beta$. Since the sets $v_{\xi}$ are pairwise disjoint, there is some $\eta$ with $\lambda \leq \eta < \lambda^{+}$ so that $v_{\eta} \cap \beta = \emptyset$. It follows that $\gamma < \alpha$ and $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$ for all $\alpha \in v_{\eta}$. List $v_{\eta} = \{\alpha_{0} < \ldots < \alpha_{m-1}\}$.
Applying the implication (A)$\implies$(B) of Claim 1 $m$ times, with $\alpha_{0},\ldots, \alpha_{m-1}$ playing the role of $\alpha$ and $\emptyset$, $\{\alpha_{0}\}$, \ldots, $\{\alpha_{0},\ldots, \alpha_{m-2}\}$ playing the role of $X$ in (A), we get that $$ |\{\xi < \lambda : B_{\gamma, \xi} \otimes v_{\eta} \subseteq K_{\nu}\} | = \lambda. $$ In particular, there is some $\xi < \lambda \leq \eta$ so that $v_{\xi} \otimes v_{\eta} \subseteq K_{\nu}$. \qed Now to complete the proof, we must construct a coloring. By replacing $K_{0}$ with $[\lambda^{+}]^{2} \setminus \left(\bigcup_{\nu > 0} K_{\nu}\right)$, we may assume that $\bigcup K_{\nu} = [\lambda^{+}]^{2}$. We define a coloring $c : [\lambda^{+}]^{2} \to \lambda$ by $c(\{\alpha,\beta\}) = \nu$ if and only if $\{\alpha,\beta\} \in K_{\nu}$, for all $\nu < \lambda$, which is well-defined since the $K_{\nu}$ are pairwise disjoint with union $[\lambda^{+}]^{2}$. Given any sequence $\langle v_{\xi} : \xi < \lambda^{+}\rangle$ of pairwise disjoint finite subsets of $\lambda^{+}$, we know by the regularity of $\lambda^{+}$ that there is a subsequence $\langle v_{\xi_{\rho}} : \rho < \lambda^{+} \rangle$ so that $\rho < \rho'$ implies $\max (v_{\xi_{\rho}}) < \min (v_{\xi_{\rho'}})$, so, replacing the given sequence by a subsequence, we may assume $\xi < \xi'$ implies $\max(v_{\xi}) < \min(v_{\xi'})$. Given $\nu < \lambda$, we know, by Claim 2, there are $\xi < \eta< \lambda^{+}$ so that $v_{\xi} \otimes v_{\eta} \subseteq K_{\nu}$ or, in other words, $c(\{\alpha,\beta\}) = \nu$ for all $\alpha \in v_{\xi}$ and $\beta \in v_{\eta}$, which shows $c$ witnesses $\mathrm{Pr}_{1}(\lambda^{+},\lambda^{+},\lambda,\aleph_{0})$. \end{proof} \section{The main construction} From strong colorings, we construct theories with \(\kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T) < \kappa_{\text{cdt}}(T)\).
For each regular uncountable cardinal \(\kappa\) and coloring \(f: [\kappa]^{2} \to 2\) we build a theory \(T^{*}_{\kappa,f}\) which comes equipped with a canonical cdt-pattern of height \(\kappa\), in which the consistency of two incomparable nodes, one on level \(\alpha\) and another on level \(\beta\), is determined by the value of the coloring \(f(\{\alpha, \beta\})\). In the next section, we then analyze the possible inp- and sct-patterns that arise in models of \(T^{*}_{\kappa,f}\) and show that the combinatorial properties of the function $f$ are reflected in the values of the cardinal invariants $\kappa_{\mathrm{inp}}$ and $\kappa_{\mathrm{sct}}$. \subsection{Building a Theory} Suppose \(\kappa\) is a regular uncountable cardinal. We define a language \(L_{\kappa} = \langle O,P_{\alpha}, f_{\alpha \beta}, p_{\alpha} : \alpha \leq \beta < \kappa\rangle\), where \(O\) and all the \(P_{\alpha}\) are unary predicates and the \(f_{\alpha \beta}\) and \(p_{\alpha}\) are unary functions. Given a subset \(w \subseteq \kappa\), let \(L_{w} = \langle O, P_{\alpha}, f_{\alpha \beta},p_{\alpha} : \alpha \leq \beta, \alpha,\beta \in w\rangle\). Given a function \(f: [\kappa]^{2} \to 2\), we define a universal theory \(T_{\kappa, f}\) with the following axiom schemas: \begin{enumerate} \item The predicates \(O\) and \((P_{\alpha})_{\alpha < \kappa}\) are pairwise disjoint; \item For all $\alpha < \kappa$, \(f_{\alpha\alpha}\) is the identity function, for all \(\alpha < \beta<\kappa\), \[ (\forall x)\left[ (x \not\in P_{\beta} \to f_{\alpha \beta}(x) = x) \wedge (x \in P_{\beta} \to f_{\alpha \beta}(x) \in P_{\alpha})\right], \] and if \(\alpha < \beta < \gamma<\kappa\), then \[ (\forall x \in P_{\gamma})[f_{\alpha \gamma} (x) = (f_{\alpha \beta} \circ f_{\beta \gamma})(x)]. \] \item For all \(\alpha < \kappa\), \[ (\forall x)\left[(x \not\in O \to p_{\alpha}(x) = x) \wedge (p_{\alpha}(x) \neq x \to p_{\alpha}(x) \in P_{\alpha})\right]. 
\] \item For all \(\alpha < \beta < \kappa\) satisfying \(f(\{\alpha , \beta\}) = 0\), we have the axiom $$ (\forall z \in O)[p_{\alpha}(z) \neq z \wedge p_{\beta}(z) \neq z \to p_{\alpha}(z) = (f_{\alpha \beta} \circ p_{\beta})(z)]. $$ \end{enumerate} The predicate \(O\) is for ``objects'' and \(\bigcup P_{\alpha}\) is a tree of ``parameters'', where each \(P_{\alpha}\) names the nodes of level \(\alpha\). The functions \(f_{\alpha \beta}\) map elements of the tree at level \(\beta\) to their unique ancestor at level \(\alpha\). So the tree partial order is coded in a highly non-uniform way, for each pair of levels. The \(p_{\alpha}\)'s should be considered as partial functions on \(O\) which connect objects to elements of the tree: we will write $\text{dom}(p_{\alpha})$ for the set $\{x \in O : p_{\alpha}(x) \neq x\}$. Axiom \((4)\) says, in essence, that if \(f(\{\alpha, \beta\}) = 0\), then the only way for an object in both $\text{dom}(p_{\alpha})$ and $\text{dom}(p_{\beta})$ to connect to a node on level \(\alpha\) and a node on level \(\beta\) is if these two nodes lie along a path in the tree. \begin{lem} Define a class of finite structures \[ \mathbb{K}_{w} = \{ \text{ finite models of }T_{\kappa,f} \upharpoonright L_{w}\}. \] Then for finite \(w\), \(\mathbb{K}_{w}\) is a Fra\"iss\'e class and, moreover, it is uniformly locally finite. \end{lem} \begin{proof} The axioms for \(T_{\kappa,f}\) are universal, so HP is clear. JEP and AP are proved similarly, so we will give the argument for AP only. Suppose \(A\) is a substructure of both \(B\) and \(C\), where \(A,B,C \in \mathbb{K}_{w}\) and \(B \cap C = A\). Because all the symbols of the language are unary, \(B \cup C\) may be viewed as an \(L_{w}\)-structure by interpreting each predicate \(Q\) of \(L_{w}\) so that \(Q^{B \cup C} = Q^{B} \cup Q^{C}\) and similarly interpreting \(g^{B \cup C} = g^{B} \cup g^{C}\) for all the function symbols \(g \in L_{w}\).
It is easy to check that \(B \cup C\) is a model of \(T_{\kappa,f} \upharpoonright L_{w}\). To see uniform local finiteness, just observe that a set of size \(n\) can generate a model of size at most \((|w|+1)n\) in virtue of the way that the functions are defined. \end{proof} Hence, for each finite \(w \subset \kappa\), there is a countable ultrahomogeneous \(L_{w}\)-structure \(M_{w}\) with \(\text{Age}(M_{w}) = \mathbb{K}_{w}\). Let \(T^{*}_{w} = \text{Th}(M_{w})\). In the following lemmas, we will establish the properties needed to apply Lemma \ref{KPTredux} in order to show the $T^{*}_{w}$ cohere. \begin{lem} \label{first condition} Suppose $w \subseteq v$ are finite subsets of $\kappa$ and $A \in \mathbb{K}_{w}$. Then there is an $L_{v}$-structure $D \in \mathbb{K}_{v}$ such that $A \subseteq D \upharpoonright L_{w}$. \end{lem} \begin{proof} We may enumerate $w$ in increasing order as $w = \{\alpha_{0} < \alpha_{1} < \ldots < \alpha_{n-1}\}$. By induction, it suffices to consider the case when $v = w \cup \{\gamma\}$ for some $\gamma \in \kappa \setminus w$. We consider two cases: \textbf{Case 1}: $\alpha_{n-1} < \gamma$ or $w = \emptyset$. In this case, the new symbols in $L_{v}$ not in $L_{w}$ consist of the predicate $P_{\gamma}$, the function $p_{\gamma}$, and the functions $f_{\alpha_{j}\gamma}$ for $j < n$ and $f_{\gamma \gamma}$. We define the underlying set of $D$ to be $A$, and give the symbols of $L_{w}$ their interpretation in $A$. Then we interpret $P_{\gamma}^{D} = \emptyset$, and interpret $p_{\gamma}^{D}$, $f_{\alpha_{j}\gamma}^{D}$ for $j < n$, and $f^{D}_{\gamma \gamma}$ to be the identity function on $D$. Clearly $A = D \upharpoonright L_{w}$ and it is easy to check $D \in \mathbb{K}_{v}$. \textbf{Case 2}: $\gamma < \alpha_{n-1}$. Let $i$ be least such that $\gamma < \alpha_{i}$. We define the underlying set of $D$ to be $A \cup \{*_{d} : d \in P_{\alpha_{i}}^{A}\}$, where the $*_{d}$ denote new formal elements. 
We interpret all the predicates of $L_{w}$ on $D$ to have the same interpretation as in $A$, and we interpret each function of $L_{w}$ to be the identity on $\{*_{d}: d \in P^{A}_{\alpha_{i}}\}$ and, when restricted to $A$, to have the same interpretation as in $A$. The new symbols in $L_{v}$ not in $L_{w}$ are: the predicate $P_{\gamma}$, the function $p_{\gamma}$, the functions $f_{\alpha_{j}\gamma}$ for $j < i$, the function $f_{\gamma \gamma}$, and the functions $f_{\gamma \alpha_{j}}$ for $i \leq j < n$. We remark that it is possible that $i = 0$, in which case there are no such $j < i$, so our conditions on $f_{\alpha_{j}\gamma}$ below say nothing. We interpret $P_{\gamma}^{D} = \{*_{d} : d \in P^{A}_{\alpha_{i}}\}$ and $p_{\gamma}^{D}$ as the identity function on $D$. Informally speaking, we will interpret the remaining functions so that $*_{d}$ becomes the ancestor of $d$ at level $\gamma$. More precisely, for $j < i$, we set $f^{D}_{\alpha_{j} \gamma}(*_{d}) = f^{A}_{\alpha_{j}\alpha_{i}}(d)$ and define $f^{D}_{\alpha_{j}\gamma}$ to be the identity on the complement of $\{*_{d} : d \in P^{A}_{\alpha_{i}}\}$. Likewise, if $i \leq j < n$ and $e \in P^{D}_{\alpha_{j}}$, we set $f^{D}_{\gamma \alpha_{j}}(e) = *_{f_{\alpha_{i}\alpha_{j}}^{A}(e)}$ and we define $f^{D}_{\gamma \alpha_{j}}$ to be the identity on the complement of $P^{D}_{\alpha_{j}}$. Finally, we set $f^{D}_{\gamma \gamma} = \text{id}_{D}$, which completes the definition of the $L_{v}$-structure $D$. Now we check that $D \in \mathbb{K}_{v}$. By construction and the fact that $A \in \mathbb{K}_{w}$, all the axioms are clear except that, in order to establish (2), we must check that if $\beta < \beta' < \beta''$ are from $v$, then for all $x \in P^{D}_{\beta''}$, $(f^{D}_{\beta \beta'} \circ f^{D}_{\beta'\beta''})(x) = f^{D}_{\beta \beta''}(x)$. We may assume $\gamma \in \{\beta,\beta',\beta''\}$.
If $\gamma = \beta''$, then every element of $P_{\gamma}^{D}$ is of the form $*_{d}$ for some $d \in P^{A}_{\alpha_{i}}$ and we have \begin{eqnarray*} (f_{\beta\beta'}^{D} \circ f^{D}_{\beta' \gamma})(*_{d}) &=& (f^{D}_{\beta \beta'} \circ f^{D}_{\beta' \alpha_{i}})(d)\\ &=& f^{D}_{\beta \alpha_{i}}(d) \\ &=& f^{D}_{\beta \gamma}(*_{d}), \end{eqnarray*} by the definition of $f^{D}_{\alpha_{j}\gamma}$ for $j < i$ and the fact that $D$ extends $A$, which satisfies axiom (2). Similarly, if $\gamma = \beta'$ and $x \in P^{D}_{\beta''}$, we have \begin{eqnarray*} (f^{D}_{\beta \gamma} \circ f^{D}_{\gamma \beta''})(x)&=& f^{D}_{\beta \gamma}(*_{f^{D}_{\alpha_{i}\beta''}(x)})\\ &=& f^{D}_{\beta \alpha_{i}}(f^{D}_{\alpha_{i}\beta''}(x)) \\ &=& f^{D}_{\beta \beta''}(x). \end{eqnarray*} Finally, if $\beta = \gamma$ and $x \in P^{D}_{\beta''}$, we have \begin{eqnarray*} f^{D}_{\gamma \beta'}(f^{D}_{\beta' \beta''}(x)) &=& *_{f^{D}_{\alpha_{i}\beta'}(f^{D}_{\beta' \beta''}(x))} \\ &=& *_{f^{D}_{\alpha_{i}\beta''}(x)} \\ &=& f^{D}_{\gamma \beta''}(x), \end{eqnarray*} which verifies that (2) holds of $D$ and therefore $D \in \mathbb{K}_{v}$. \end{proof} \begin{lem} \label{second condition} Suppose $w \subseteq v$ are finite subsets of $\kappa$, $A,B \in \mathbb{K}_{w}$, and $\pi: A \to B$ is an $L_{w}$-embedding. Then given any $C \in \mathbb{K}_{v}$ with $C = \langle A \rangle^{C}_{L_{v}}$, there is $D \in \mathbb{K}_{v}$ and an $L_{v}$-embedding $\tilde{\pi}: C \to D$ extending $\pi$. \end{lem} \begin{proof} As in the proof of Lemma \ref{first condition}, we will list $w$ in increasing order as $w = \{\alpha_{0} < \alpha_{1} < \ldots < \alpha_{n-1}\}$ and assume that $v = w \cup \{\gamma\}$ for some $\gamma \in \kappa \setminus w$. We suppose we are given $A,B,C$, and $\pi$ as in the statement and we will construct $D$ and $\tilde{\pi}$. We may assume $B \cap C = \emptyset$. 
Note that the condition that $C = \langle A \rangle^{C}_{L_{v}}$ entails that the only elements of $C \setminus A$ are contained in $P_{\gamma}^{C}$ and similarly for $B$ and $D$. \textbf{Case 1}: $\alpha_{n-1} < \gamma$ or $w = \emptyset$. We define the underlying set of $D$ to be $B \cup P^{C}_{\gamma}$ and we define $\tilde{\pi}: C \to D$ so that $\tilde{\pi}\upharpoonright A = \pi$ and $\tilde{\pi} \upharpoonright P^{C}_{\gamma} = \text{id}_{P^{C}_{\gamma}}$. Interpret the predicates of $L_{w}$ on $D$ so that they agree with their interpretation on $B$ and interpret the functions of $L_{w}$ on $D$ so that they are the identity on $P^{C}_{\gamma}$ and so that, when restricted to $B$, they agree with their interpretation on $B$. This will ensure that $D \upharpoonright L_{w}$ is an extension of $B$. Finally, interpret $P_{\gamma}$ so that $P_{\gamma}^{D} = P^{C}_{\gamma}$ and define $f^{D}_{\gamma \gamma} = \text{id}_{D}$. Then for each $j < n$, we interpret $f_{\alpha_{j}\gamma}$ on $D$ so that, if $c \in P^{C}_{\gamma}$, then $f^{D}_{\alpha_{j} \gamma}(c) = \pi(f^{C}_{\alpha_{j}\gamma}(c))$, and if $c \in D \setminus P^{C}_{\gamma}$, then $f^{D}_{\alpha_{j}\gamma}(c) = c$. Note that $\tilde{\pi}(f^{C}_{\alpha_{j}\gamma}(c)) = f^{D}_{\alpha_{j}\gamma}(\tilde{\pi}(c))$ for all $c \in C$. Finally, interpret $p_{\gamma}$ so that, if $d = \pi(c) \in \pi(O^{C}) \subseteq O^{D}$ and $p^{C}_{\gamma}(c) \neq c$, then $p^{D}_{\gamma}(d) = p^{C}_{\gamma}(c)$, and otherwise $p^{D}_{\gamma}(d) = d$. It is clear from the definitions that $\tilde{\pi}(p_{\gamma}^{C}(c)) = p_{\gamma}^{D}(\tilde{\pi}(c))$ for all $c \in C$, so $\tilde{\pi}$ is an $L_{v}$-embedding. We are left with showing that $D \in \mathbb{K}_{v}$. Axioms (1) and (3) are clear from the construction and to check (2), we just need to establish that if $\beta < \beta'$ are from $v$ and $c \in P^{C}_{\gamma}$, then $(f^{D}_{\beta \beta'} \circ f^{D}_{\beta' \gamma})(c) = f^{D}_{\beta \gamma}(c)$. 
For this, we unravel the definitions and make use of the fact that (2) is true in $C$: \begin{eqnarray*} f^{D}_{\beta \beta'}(f^{D}_{\beta'\gamma}(c)) &=& f^{D}_{\beta \beta'}(\pi(f^{C}_{\beta'\gamma}(c))) \\ &=& \pi (f^{C}_{\beta \beta'}(f^{C}_{\beta' \gamma}(c)))\\ &=& \pi(f^{C}_{\beta \gamma}(c)) \\ &=& f^{D}_{\beta \gamma}(c), \end{eqnarray*} which verifies (2). Likewise, to show that (4) holds, we note that if $f(\{\beta,\gamma\}) = 0$, $p^{D}_{\gamma}(d) \neq d$, and $p^{D}_{\beta}(d) \neq d$ for some $\beta \in v$, then, by the definition of $p^{D}_{\gamma}$, we have $d = \tilde{\pi}(c)$ for some $c \in O^{C}$ with $p^{C}_{\gamma}(c) \neq c$ and $p^{C}_{\beta}(c) \neq c$. Since (4) holds in $C$, $p^{C}_{\beta}(c) = (f_{\beta \gamma}^{C} \circ p^{C}_{\gamma})(c)$, and hence $p^{D}_{\beta}(d) = (f_{\beta \gamma}^{D} \circ p_{\gamma}^{D})(d)$ as $\tilde{\pi}$ is an embedding, which shows (4) and thus $D \in \mathbb{K}_{v}$. \textbf{Case 2}: $\gamma < \alpha_{n-1}$. Let $i$ be least such that $\gamma < \alpha_{i}$. The underlying set of $D$ will be $B \cup P_{\gamma}^{C} \cup \{*_{d} : d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})\}$, where each $*_{d}$ denotes a new formal element and we will define $\tilde{\pi}: C \to D$ to be $\pi \cup \text{id}_{P_{\gamma}^{C}}$. As in the previous case, we interpret the predicates of $L_{w}$ on $D$ so that they agree with their interpretation on $B$ and interpret the functions of $L_{w}$ on $D$ so that they are the identity on $P^{C}_{\gamma} \cup \{*_{d} : d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})\} $ and so that, when restricted to $B$, they agree with their interpretation on $B$.
We will interpret $P_{\gamma}$ so that $$ P^{D}_{\gamma} = P^{C}_{\gamma} \cup \{*_{d} : d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})\}. $$ The map $\pi$ will dictate how we have to define the ancestors and descendants at level $\gamma$ of the elements in the image of $\pi$, and, for those elements not in the image of $\pi$, we define the interpretations so that $*_{d}$ will be the ancestor at level $\gamma$ of $d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})$, as in the previous lemma. For $j < i$, we define $f^{D}_{\alpha_{j}\gamma}$ so that, if $c \in P^{C}_{\gamma}$, $f^{D}_{\alpha_{j}\gamma}(c) = \pi(f^{C}_{\alpha_{j}\gamma}(c))$, and if $d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})$, then $f^{D}_{\alpha_{j}\gamma}(*_{d}) = f^{B}_{\alpha_{j}\alpha_{i}}(d)$. This defines $f^{D}_{\alpha_{j}\gamma}$ on $P^{D}_{\gamma}$ and we define $f^{D}_{\alpha_{j}\gamma}$ to be the identity on the complement of $P^{D}_{\gamma}$ in $D$. Next, we define $f^{D}_{\gamma \alpha_{i}}$ as follows: if $d = \pi(c) \in \pi(P^{A}_{\alpha_{i}}) \subseteq P^{B}_{\alpha_{i}}$, we put $f_{\gamma \alpha_{i}}^{D}(d) = f^{C}_{\gamma \alpha_{i}}(c)$, and if $e \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})$, then we set $f^{D}_{\gamma \alpha_{i}}(e) = *_{e}$. This defines $f^{D}_{\gamma \alpha_{i}}$ on $P^{D}_{\alpha_{i}}$ and we define $f^{D}_{\gamma \alpha_{i}}$ to be the identity on the complement of $P^{D}_{\alpha_{i}}$ in $D$. For $j > i$, we put $f^{D}_{\gamma \alpha_{j}} = f^{D}_{\gamma \alpha_{i}} \circ f^{D}_{\alpha_{i} \alpha_{j}}$. Then we define $f^{D}_{\gamma \gamma} = \text{id}_{D}$. Lastly, we define $p^{D}_{\gamma}$ to be the identity on all elements in the complement of $\pi(O^{A})$ and, if $d = \pi(c)$, we put $p_{\gamma}^{D}(d) = d$ if $p^{C}_{\gamma}(c) = c$ and we put $p_{\gamma}^{D}(d) = p^{C}_{\gamma}(c)$ if $p^{C}_{\gamma}(c) \neq c$. This completes the construction.
It follows from the definitions that $\tilde{\pi}$ is an $L_{v}$-embedding, so we must check $D \in \mathbb{K}_{v}$. Axioms (1) and (3) are clear from the construction. To show (2), we note that if $\beta < \beta' < \beta''$ and $c \in P^{D}_{\beta''}$, then either $c$ is in the image of $\tilde{\pi}$, in which case it is easy to check that $(f^{D}_{\beta \beta'} \circ f^{D}_{\beta' \beta''})(c) = f^{D}_{\beta \beta''}(c)$ using that (2) is satisfied in $C$ and $\tilde{\pi}$ is an embedding, or $c$ is not in the image of $\tilde{\pi}$, in which case the verification of (2) is identical to the verification of (2) in Case 2 of Lemma \ref{first condition}. The argument for (4) is identical to the argument for (4) in Case 1. We conclude that $D \in \mathbb{K}_{v}$, completing the proof. \end{proof} \begin{cor}\label{dirlim} Suppose \(w \subseteq v \subseteq \kappa\) and \(v,w\) are both finite. Then \(T^{*}_{w} \subseteq T^{*}_{v}\). \end{cor} \begin{proof} We will show $\mathrm{Flim}(\mathbb{K}_{v})\upharpoonright L_{w} = \mathrm{Flim}(\mathbb{K}_{w})$ by applying Lemma \ref{KPTredux}. Condition (1) in the Lemma is proved in Lemma \ref{first condition} and Condition (2) is proved in Lemma \ref{second condition}. \end{proof} Using Corollary \ref{dirlim}, we may define the theory \(T^{*}_{\kappa,f}\) as the union of the $T^{*}_{w}$ for all finite $w \subset \kappa$; the resulting theory is consistent. Because each $T^{*}_{w}$ is complete and eliminates quantifiers, it follows that $T^{*}_{\kappa,f}$ is a complete theory extending $T_{\kappa,f}$ which eliminates quantifiers. The following lemmas will be useful in analyzing the possible formulas that could appear in the various patterns under consideration. Recall that, for all $\alpha < \kappa$, we write $\text{dom}(p_{\alpha})$ for the definable set $\{x \in O : p_{\alpha}(x) \neq x\}$, or equivalently $\{x \in O : p_{\alpha}(x) \in P_{\alpha}\}$.
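To illustrate this notation in the simplest case, note that axiom (4) admits a compact restatement: for $\alpha < \beta < \kappa$ with $f(\{\alpha,\beta\}) = 0$, $$ \text{dom}(p_{\alpha}) \cap \text{dom}(p_{\beta}) \subseteq \{x \in O : p_{\alpha}(x) = (f_{\alpha \beta} \circ p_{\beta})(x)\}, $$ so an object lying in both domains is connected to comparable nodes of the tree, while for $f(\{\alpha,\beta\}) = 1$ no axiom constrains the pair $p_{\alpha}(x)$, $p_{\beta}(x)$.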
\begin{lem} \label{nice functions on P} Suppose $w \subseteq \kappa$ is a finite set containing $\beta$ and $\varphi(x)$ is an $L_{w}$-formula with $\varphi(x) \vdash x \in P_{\beta}$. Then for any $L_{w}$-term $t(x)$, there is $\alpha \leq \beta$ in $w$ such that $\varphi(x) \vdash t(x) = f_{\alpha \beta}(x)$. \end{lem} \begin{proof} The proof is by induction on terms. The conclusion holds for the term $x$ since $(\forall x)[f_{\beta \beta}(x) = x]$ is an axiom of $T_{\kappa,f}$. Now suppose $t(x)$ is a term such that $\varphi(x) \vdash t(x) = f_{\alpha \beta}(x)$ for some $\alpha \leq \beta$ from $w$. Then because $\varphi(x) \vdash x \in P_{\beta}$, $\varphi(x) \vdash t(x) \in P_{\alpha}$. It follows that $\varphi(x) \vdash p_{\gamma}(t(x)) = t(x)$ for every $\gamma \in w$, and that $\varphi(x) \vdash f_{\delta \gamma}(t(x)) = t(x)$ for any $\delta \leq \gamma$ from $w$ with $\gamma \neq \alpha$. Additionally, if $\delta \leq \alpha$ is from $w$, then $\varphi(x) \vdash f_{\delta \alpha}(t(x)) = (f_{\delta \alpha} \circ f_{\alpha \beta})(x) = f_{\delta \beta}(x)$, which is of the desired form, completing the induction. \end{proof} \begin{lem} \label{nice functions on O} Suppose $w \subseteq \kappa$ is finite and $\varphi(x)$ is a complete $L_{w}$-formula with $\varphi(x) \vdash x \in O$. Then for any term $t(x)$ of $L_{w}$, we have one of the following: \begin{enumerate} \item $\varphi(x) \vdash t(x) = x$. \item $\varphi(x) \vdash t(x) = (f_{\alpha \beta} \circ p_{\beta})(x)$ for some $\alpha \leq \beta$ from $w$. \end{enumerate} \end{lem} \begin{proof} The proof is by induction on terms. Clearly the conclusion holds for the term $t(x) = x$. Now suppose we have established the conclusion for the term $t(x)$. We must prove that it also holds for the terms $p_{\gamma}(t(x))$ and $f_{\delta \gamma}(t(x))$ for $\delta \leq \gamma$ from $w$.
If $\varphi(x) \vdash t(x) = x$, then $\varphi(x) \vdash p_{\gamma}(t(x)) = (f_{\gamma \gamma} \circ p_{\gamma})(x)$, which falls under case (2), and $\varphi(x) \vdash f_{\delta \gamma}(t(x)) = x$, since $\varphi(x) \vdash t(x) \in O$, which falls under case (1). Now suppose $\varphi(x) \vdash t(x) = (f_{\alpha \beta} \circ p_{\beta})(x)$. Since we already handled terms falling under case (1), we may, by completeness of $\varphi$, assume $\varphi(x) \vdash x \in \text{dom}(p_{\beta})$ and hence $\varphi(x) \vdash t(x) \in P_{\alpha}$. It follows that $\varphi(x) \vdash p_{\gamma}(t(x)) = t(x)$ and $\varphi(x) \vdash f_{\delta\gamma}(t(x)) = t(x)$ when $\gamma \neq \alpha$, which remain under case (2). Finally, we have $\varphi(x) \vdash f_{\delta \alpha}(t(x)) = (f_{\delta\alpha} \circ f_{\alpha \beta} \circ p_{\beta})(x) = (f_{\delta \beta} \circ p_{\beta})(x)$, which also remains under case (2), completing the induction. \end{proof} \section{Analysis of the invariants} In this section, we analyze the possible values of the cardinal invariants under consideration in $T^{*}_{\kappa,f}$ for a coloring $f:[\kappa]^{2} \to 2$. In the first subsection, we show that any $\mathrm{inp}$- and $\mathrm{sct}$-pattern of height $\kappa$ in $T^{*}_{\kappa, f}$ gives rise to one of a particularly uniform and controlled form, which we call \emph{rectified}. In the second subsection, we show $\kappa_{\mathrm{cdt}}(T^{*}_{\kappa,f}) = \kappa^{+}$, independent of the choice of $f$. Then, making heavy use of rectification, we show in the next two subsections that if $\kappa_{\text{sct}}(T^{*}_{\kappa,f})$ or $\kappa_{\text{inp}}(T^{*}_{\kappa,f})$ is equal to $\kappa^{+}$, then this has combinatorial consequences for the coloring $f$. More precisely, we show in the third subsection that if there is an inp-pattern of height \(\kappa\), we can conclude that \(f\) has a homogeneous set of size \(\kappa\).
In the case that there is an sct-pattern of height \(\kappa\), we cannot quite get a homogeneous set, but we get something nearly as strong: we prove that in this case there is precisely the kind of homogeneity which a strong coloring witnessing $\text{Pr}_{1}(\kappa, \kappa, 2, \aleph_{0})$ explicitly prohibits. The theory associated to such a coloring, then, gives the desired counterexample. For the entirety of this section, we fix a regular uncountable cardinal $\kappa$, a coloring $f:[\kappa]^{2} \to 2$, and a monster model \(\mathbb{M} \models T^{*}_{\kappa,f}\). \subsection{Rectification} Recall that, given a set $X$, a family of subsets $\mathcal{B} \subseteq \mathcal{P}(X)$ is called a \emph{$\Delta$-system} (of subsets of $X$) if there is some $r \subseteq X$ such that for all distinct $x,y \in \mathcal{B}$, $x \cap y = r$. Given a $\Delta$-system, the common intersection of any two distinct sets is called the \emph{root} of the $\Delta$-system. The following fact gives a condition under which $\Delta$-systems may be shown to exist: \begin{fact} \cite[Lemma III.2.6]{kunen2014set} \label{delta-system lemma} Suppose that $\lambda$ is a regular uncountable cardinal and $\mathcal{A}$ is a family of finite subsets of $\lambda$ with $|\mathcal{A}| = \lambda$. Then there is $\mathcal{B} \subseteq \mathcal{A}$ with $|\mathcal{B}| = \lambda$ which forms a $\Delta$-system. \end{fact} We note that the definitions below are specific to $T^{*}_{\kappa,f}$. Recall that, given a subset \(w \subseteq \kappa\), we define \(L_{w} = \langle O, P_{\alpha}, f_{\alpha \beta},p_{\alpha} : \alpha \leq \beta, \alpha,\beta \in w\rangle\).
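Before stating the definition, it may be helpful to record a concrete family of the shape required below (an illustration only, not used in the sequel): taking the root $r = \{0\}$ and, for $\alpha < \kappa$, $$ w_{\alpha} = \{0\} \cup \{\omega \cdot (1+\alpha),\ \omega \cdot (1+\alpha) + 1\}, $$ we obtain a $\Delta$-system of finite subsets of $\kappa$ in which every $w_{\alpha}$ has the same cardinality, $\max r < \min(w_{\alpha} \setminus r)$ for all $\alpha < \kappa$, and $\max(w_{\alpha} \setminus r) < \min(w_{\alpha'} \setminus r)$ whenever $\alpha < \alpha' < \kappa$; here $\omega \cdot (1+\alpha) < \kappa$ since $\kappa$ is an uncountable cardinal.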
\begin{defn} Given $X \in \{\mathrm{inp},\mathrm{sct}\}$, we define a \emph{rectified $X$-pattern} as follows: \begin{enumerate} \item A \emph{rectified $\mathrm{sct}$-pattern of height $\kappa$} is a triple $(\overline{\varphi},(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})$ satisfying the following: \begin{enumerate} \item $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ is an $s$-indiscernible tree of parameters. \item $\overline{\varphi}$ is a sequence of formulas $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ which, together with the parameters $(a_{\eta})_{\eta \in \omega^{<\kappa}}$, forms an $\mathrm{sct}$-pattern of height $\kappa$. \item $\overline{w} = (w_{\alpha})_{\alpha < \kappa}$ is a $\Delta$-system of finite subsets of $\kappa$ with root $r$ such that every $w_{\alpha}$ has the same cardinality, $\max r < \min(w_{\alpha} \setminus r)$ for all $\alpha < \kappa$, and $\max(w_{\alpha} \setminus r) < \min(w_{\alpha'} \setminus r)$ for all $\alpha < \alpha' < \kappa$. \item For all $\alpha < \kappa$, the formula $\varphi_{\alpha}(x;y_{\alpha})$ is in the language $L_{w_{\alpha}}$ and isolates a complete $L_{w_{\alpha}}$-type over $\emptyset$ in the variables $xy_{\alpha}$. Additionally, for all $\alpha < \kappa$ and $\eta \in \omega^{\alpha}$, the tuple $a_{\eta}$ enumerates an $L_{w_{\alpha}}$-substructure of $\mathbb{M}$. \end{enumerate} \item We define a \emph{rectified $\mathrm{inp}$-pattern of height $\kappa$} to be a quadruple $(\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})$ satisfying the following: \begin{enumerate} \item $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$ is a mutually indiscernible array of parameters.
\item $\overline{\varphi}$ is a sequence of formulas $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ and $\overline{k} = (k_{\alpha})_{\alpha < \kappa}$ is a sequence of natural numbers which, together with the parameters $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$ form an $\mathrm{inp}$-pattern of height $\kappa$. \item $\overline{w} = (w_{\alpha})_{\alpha < \kappa}$ is a $\Delta$-system of finite subsets of $\kappa$ with root $r$ such that every $w_{\alpha}$ has the same cardinality, $\max r < \min(w_{\alpha} \setminus r)$ for all $\alpha < \kappa$, and $\max(w_{\alpha} \setminus r) < \min(w_{\alpha'} \setminus r)$ for all $\alpha < \alpha' < \kappa$. \item For all $\alpha < \kappa$, the formula $\varphi_{\alpha}(x;y_{\alpha})$ is in the language $L_{w_{\alpha}}$ and isolates a complete $L_{w_{\alpha}}$-type over $\emptyset$ in the variables $xy_{\alpha}$. Additionally, for all $\alpha < \kappa$ and $i < \omega$, the tuple $a_{\alpha,i}$ enumerates an $L_{w_{\alpha}}$-substructure of $\mathbb{M}$. \end{enumerate} \item We will refer to $\overline{w}$ in the above definitions as the \emph{associated $\Delta$-system} of the rectified $X$-pattern. We will consistently denote the root $r = \{\zeta_{i} : i < n\}$ and the sets $v_{\alpha} = w_{\alpha} \setminus r = \{\beta_{\alpha, i} : i < m \}$, where the enumerations are increasing. \end{enumerate} \end{defn} \begin{lem}\label{tending} Given \(X \in \{\text{inp},\text{sct}\}\), if there is an \(X\)-pattern of height \(\kappa\) in \(T\), there is a rectified one. \end{lem} \begin{proof} Given an \(X\)-pattern with the sequence of formulas \(\overline{\varphi} = (\varphi_{\alpha}(x;y_{\alpha}): \alpha < \kappa)\) one can choose some finite \(w_{\alpha} \subset \kappa\) such that \(\varphi_{\alpha}(x;y_{\alpha})\) is in the language \(L_{w_{\alpha}}\). 
Apply the \(\Delta\)-system lemma, Fact \ref{delta-system lemma}, to the collection \((w_{\alpha} : \alpha < \kappa)\) to find some \(I \subseteq \kappa\) with $|I| = \kappa$ such that \(\overline{w} = (w_{\alpha} : \alpha \in I)\) forms a \(\Delta\)-system with root $r$. By the pigeonhole principle and the regularity of the uncountable cardinal $\kappa$, after passing to a further subset of $I$ of size $\kappa$, we may assume \(|w_{\alpha}| = m\) for all \(\alpha \in I\), \(\max r < \min (w_{\alpha} \setminus r)\) for all $\alpha \in I$, and \(\max (w_{\alpha} \setminus r) < \min (w_{\alpha'} \setminus r)\) whenever \(\alpha < \alpha'\) are in $I$. By renaming, we may assume \(I = \kappa\). If \(X = \text{inp}\), we may take the parameters witnessing that \((\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha <\kappa,i < \omega})\) is an inp-pattern to be a mutually indiscernible array by Lemma \ref{witness}(1). Moreover, mutual indiscernibility is clearly preserved after replacing each \(a_{\alpha,i}\) by a tuple enumerating the \(L_{w_{\alpha}}\)-substructure generated by $a_{\alpha,i}$ and, by \(\aleph_{0}\)-categoricity of \(T^{*}_{w_{\alpha}}\), this structure is finite. Let \(b \models \{\varphi_{\alpha}(x;a_{\alpha,0}) : \alpha < \kappa\}\). Using again the \(\aleph_{0}\)-categoricity of \(T^{*}_{w_{\alpha}}\), replace \(\varphi_{\alpha}(x;y_{\alpha})\) by an \(L_{w_{\alpha}}\)-formula \(\varphi'_{\alpha}(x;y_{\alpha})\) such that \(\varphi_{\alpha}'(x;y_{\alpha})\), viewed as an unpartitioned formula in the variables $xy_{\alpha}$, isolates the type \(\text{tp}_{L_{w_{\alpha}}}(ba_{\alpha,0}/\emptyset)\). By mutual indiscernibility, if \(g: \kappa \to \omega\) is a function, there is \(\sigma \in \text{Aut}(\mathbb{M})\) such that \(\sigma(a_{\alpha,0}) = a_{\alpha, g(\alpha)}\) for all \(\alpha < \kappa\). Then \(\sigma(b) \models \{\varphi'_{\alpha}(x;a_{\alpha,g(\alpha)}) : \alpha < \kappa\}\), so paths are consistent.
The row-wise inconsistency is clear, so, setting \(\overline{\varphi}' =(\varphi'_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)\), we see that $(\overline{\varphi}',\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})$ forms a rectified inp-pattern of height $\kappa$. If \(X = \text{sct}\), we argue similarly. We may take the witnessing parameters $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ to be \(s\)-indiscernible, by Lemma \ref{witness}(2). Likewise, \(s\)-indiscernibility is preserved by replacing each \(a_{\eta}\) by its closure under the functions of \(L_{w_{l(\eta)}}\), and this closure is finite. Let \(b \models \{\varphi_{\alpha}(x;a_{0^{\alpha}}) : \alpha < \kappa\}\) and replace $\overline{\varphi}$ by $\overline{\varphi}'$ where \(\varphi'_{\alpha}(x;y_{\alpha})\) is an \(L_{w_{\alpha}}\)-formula which, viewed as an unpartitioned formula in the variables $xy_{\alpha}$, isolates \(\mathrm{tp}_{L_{w_{\alpha}}}(ba_{0^{\alpha}}/\emptyset)\). For all \(\eta \in \omega^{\kappa}\), there is a \(\sigma \in \text{Aut}(\mathbb{M})\) such that \(\sigma(a_{0^{\alpha}}) = a_{\eta | \alpha}\) for all \(\alpha < \kappa\). Then \(\sigma(b) \models \{\varphi'_{\alpha}(x;a_{\eta | \alpha}) : \alpha < \kappa\}\), so paths are consistent. The formulas at incomparable nodes remain inconsistent, so \((\overline{\varphi}',(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})\) forms a rectified sct-pattern of height $\kappa$. \end{proof} \begin{rem} \label{same number of variables} As the replacement of $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ with a sequence of complete formulas $(\varphi'_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ does not change the free variables $x$, if $T$ has an inp- or sct-pattern in $k$ free variables of height $\kappa$, Lemma \ref{tending} produces a rectified inp- or sct-pattern of height $\kappa$ in the same number of free variables. \end{rem} \subsection{Computing \(\kappa_{\mathrm{cdt}}\)} \begin{lem}\label{stable} The theory \(T_{\kappa,f}^{*}\) is stable.
\end{lem} \begin{proof} Since stability is local, it suffices to show \(T^{*}_{w}\) is stable for all finite \(w \subset \kappa\). Let \(M \models T^{*}_{w}\) be a countable model. We will count 1-types in \(T^{*}_{w}\) over \(M\) explicitly using quantifier elimination. Pick some \(p(x) \in S^{1}_{L_{w}}(M)\). If \(x = m\) is a formula in \(p\) for some \(m \in M\), then this formula obviously isolates \(p\), so there are countably many such possibilities. So assume \(x \neq m\) is in \(p\) for all \(m \in M\). Now we break into cases based upon which predicates $p$ asserts $x$ to lie in. If \(x \not\in O \wedge \bigwedge_{\alpha \in w} x \not\in P_{\alpha}\) is a formula in \(p\), then \(p\) is completely determined, so there is a unique type in this case. If \(x \in O\) is a formula in \(p\), then, by quantifier-elimination and Lemma \ref{nice functions on O}, the type is determined after deciding the truth value of $p_{\alpha}(x)= x$ and \((f_{\beta \alpha} \circ p_{\alpha})(x) = m\) for all \(\beta \leq \alpha \in w\) and \(m \in P_{\beta}(M)\). As \((f_{\beta \alpha} \circ p_{\alpha})(x)\) can be equal to at most one element of \(P_{\beta}(M)\) and $w$ is finite, there are countably many possibilities in this case. Finally, if \(x \in P_{\beta}\) is a formula in \(p\), then, by quantifier-elimination and Lemma \ref{nice functions on P}, the type is determined after deciding the truth value of \(f_{\gamma \beta}(x) = m\) for all \(\gamma \leq \beta\) in \(w\) and \(m \in P_{\gamma}(M)\). Here again there are only countably many possibilities, by the finiteness of $w$. Since this covers all possible types, we've shown that \(S^{1}_{L_{w}}(M)\) is countable, so \(T^{*}_{w}\) is stable (in fact, as $M$ was an arbitrary countable model, $\omega$-stable), which implies that $T^{*}_{\kappa,f}$ is stable. \end{proof} \begin{prop} \label{cdtcomputation} \(\kappa_{\text{cdt}}(T^{*}_{\kappa,f}) = \kappa^{+}\).
\end{prop} \begin{proof} First, we will show $\kappa_{\text{cdt}}(T^{*}_{\kappa,f}) \geq \kappa^{+}$. We will construct a cdt-pattern of height \(\kappa\). By recursion on \(\alpha < \kappa\), we will construct a tree of elements \((a_{\eta})_{\eta \in \omega^{<\kappa}}\) so that \(l(\eta) = \beta\) implies \(a_{\eta} \in P_{\beta}\) and if \(\eta \unlhd \nu\) with \(l(\eta) = \beta\) and \(l(\nu) = \gamma\), then \(f_{\beta \gamma}(a_{\nu}) = a_{\eta}\). For \(\alpha = 0\), choose an arbitrary \(a \in P_{0}\) and let \(a_{\emptyset} = a\). Now suppose we are given \((a_{\eta})_{\eta \in \omega^{\leq \alpha}}\). For each \(\eta \in \omega^{\alpha}\), choose a set \(\{b_{i} : i < \omega\} \subseteq f^{-1}_{\alpha \alpha+1}(a_{\eta})\) with the $b_{i}$ pairwise distinct. Define \(a_{\eta \frown \langle i \rangle} = b_{i}\). This gives us \((a_{\eta})_{\eta \in \omega^{\leq \alpha+1}}\) with the desired properties. Now suppose \(\delta\) is a limit and we've defined \((a_{\eta})_{\eta \in \omega^{\leq \alpha}}\) for all \(\alpha < \delta\). Given any \(\eta \in \omega^{\delta}\), we may, by saturation, find an element \(b \in \bigcap_{\alpha < \delta} f^{-1}_{\alpha \delta}(a_{\eta | \alpha})\). Then we can set \(a_{\eta} = b\). This gives \((a_{\eta})_{\eta \in \omega^{\leq \delta}}\) and completes the construction. Given \(\alpha < \kappa\), let \(\varphi_{\alpha}(x;y)\) be the formula \(p_{\alpha}(x) = y\). For any \(\eta \in \omega^{\kappa}\), \(\{\varphi_{\alpha}(x;a_{\eta | \alpha}) : \alpha < \kappa\}\) is consistent and, for all \(\nu \in \omega^{<\kappa}\), \(\{\varphi_{l(\nu)+1}(x;a_{\nu \frown \langle i \rangle}) : i < \omega \}\) is 2-inconsistent (as $p_{l(\nu)+1}$ is a function and the elements $a_{\nu \frown \langle i \rangle}$, $i < \omega$, are pairwise distinct). We have thus exhibited a cdt-pattern of height \(\kappa\), so \(\kappa_{\text{cdt}}(T^{*}_{\kappa,f}) \geq \kappa^{+}\). By Lemma \ref{stable} and Fact \ref{easy inequalities}, we have $\kappa_{\mathrm{cdt}}(T^{*}_{\kappa,f}) \leq \kappa^{+}$, so we have the desired equality.
\end{proof} \subsection{Case 1: \(\kappa_{\text{inp}} = \kappa^{+}\)} In this subsection, we first show how to produce a homogeneous set of size $\kappa$ for $f$ from an $\mathrm{inp}$-pattern of a very particular form. Then, using rectification, we observe that every $\mathrm{inp}$-pattern of height $\kappa$ gives rise to one of this particular form. Together, these will allow us to calculate an upper bound on $\kappa_{\mathrm{inp}}(T^{*}_{\kappa,f})$ when the coloring $f$ is chosen to have no homogeneous set of size $\kappa$. \begin{lem}\label{case2} Suppose we are given a collection \((\beta_{\alpha, i})_{\alpha < \kappa, i < 2}\) of ordinals smaller than $\kappa$ such that $\beta_{\alpha, 0} \leq \beta_{\alpha,1}$ for all $\alpha < \kappa$, and $\beta_{\alpha,0} \leq \beta_{\alpha',0}$ and $\beta_{\alpha,1} < \beta_{\alpha',1}$ for all $\alpha < \alpha' < \kappa$. Suppose that there is a mutually indiscernible array \((c_{\alpha, k})_{\alpha < \kappa, k < \omega}\) such that, with \(\varphi_{\alpha}(x;y_{\alpha})\) defined to be the formula \((f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = y_{\alpha}\), the sequence \((\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)\) together with \((c_{\alpha, k})_{\alpha < \kappa, k < \omega}\) forms an inp-pattern of height \(\kappa\). Then for all pairs \(\alpha < \alpha'\), \(f(\{\beta_{\alpha,1}, \beta_{\alpha',1}\}) = 1\).
\end{lem} \begin{proof} If \(\alpha < \alpha'\) and \(f(\{\beta_{\alpha,1}, \beta_{\alpha',1}\}) = 0\), then $p_{\beta_{\alpha,1}}(x) = (f_{\beta_{\alpha,1} \beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x)$ for any \(x\) with \(p_{\beta_{\alpha,1}}(x) \neq x\) and \(p_{\beta_{\alpha',1}}(x) \neq x\), and hence \begin{eqnarray*} (f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) &=& (f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ f_{\beta_{\alpha,1}\beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x) \\ &=& (f_{\beta_{\alpha,0}\beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x) \\ &=& (f_{\beta_{\alpha,0}\beta_{\alpha',0}} \circ f_{\beta_{\alpha',0}\beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x). \end{eqnarray*} Consequently, \[ \{(f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = c_{\alpha, k'}, (f_{\beta_{\alpha',0}\beta_{\alpha',1}}\circ p_{\beta_{\alpha',1}})(x) = c_{\alpha',k}\} \] is consistent only if \(c_{\alpha, k'} = f_{\beta_{\alpha,0}\beta_{\alpha',0}}(c_{\alpha',k})\). Because for all $\xi < \kappa$, $(c_{\xi,i})_{i < \omega}$ is indiscernible and, by the definition of an $\mathrm{inp}$-pattern, $\{\varphi_{\xi}(x;c_{\xi,i}) : i < \omega\}$ is inconsistent, we know that $c_{\xi,l} \neq c_{\xi,l'}$ for $l \neq l'$. Fix any $k<\omega$. We have shown there is at most one $k'$ such that \[ \{(f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = c_{\alpha, k'}, (f_{\beta_{\alpha',0}\beta_{\alpha',1}}\circ p_{\beta_{\alpha',1}})(x) = c_{\alpha',k}\} \] is consistent. By the definition of an $\mathrm{inp}$-pattern, given any function $g: \kappa \to \omega$, $$ \{\varphi_{\alpha}(x;c_{\alpha,g(\alpha)}) : \alpha < \kappa\} $$ is consistent and so, in particular, the set \[ \{(f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = c_{\alpha, g(\alpha)}, (f_{\beta_{\alpha',0}\beta_{\alpha',1}}\circ p_{\beta_{\alpha',1}})(x) = c_{\alpha',g(\alpha')}\} \] is consistent.
Choosing $g(\alpha') = k$ and $g(\alpha) \neq k'$, we obtain a contradiction. \end{proof} For the remainder of this subsection, we will assume there is an inp-pattern of height $\kappa$ modulo $T$. By Lemma \ref{tending}, it follows that there is a \emph{rectified} inp-pattern of height $\kappa$ and, by \cite[Corollary 2.9]{ChernikovNTP2} and Remark \ref{same number of variables}, we may assume that this is witnessed by a rectified inp-pattern in a single free variable. Hence, for the rest of this subsection, we will fix a rectified inp-pattern $(\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})$ and we will assume that each $\varphi_{\alpha}(x;y_{\alpha})$ enumerated in $\overline{\varphi}$ has \(l(x) = 1\). Recall the associated \(\Delta\)-system is denoted \(\overline{w} = (w_{\alpha}: \alpha < \kappa)\) with root \(r = \{\zeta_{i} : i < n\}\) and \(w_{\alpha} \setminus r = v_{\alpha} = \{\beta_{\alpha, j} : j <m\}\), where the enumerations are increasing. \begin{lem} \label{ino} For all \(\alpha < \kappa\), \(\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in O\). \end{lem} \begin{proof} First, note that there must be a predicate \(Q \in \{O, P_{\zeta_{i}} : i < n\}\) such that $\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in Q$ for all $\alpha < \kappa$. If not, using that the $w_{\alpha}$'s form a $\Delta$-system, that every formula $\varphi_{\alpha}(x;y_{\alpha})$ is complete, and that $\varphi_{\alpha}(x;a_{\alpha,i})$ is consistent with $\varphi_{\beta}(x;a_{\beta,j})$ whenever $\alpha \neq \beta$, there would be some $\alpha < \kappa$ such that $\varphi_{\alpha}(x;y_{\alpha})$ implies that $x$ is not contained in any predicate of $L_{w_{\alpha}}$. By Lemma \ref{no equalities}(1), we know each $\varphi_{\alpha}(x;a_{\alpha,i})$ is non-algebraic, so, in this case it is easy to check that $\{\varphi_{\alpha}(x;a_{\alpha, i}) : i < \omega\}$ is consistent, contradicting the definition of an inp-pattern.
So we must show that \(\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in P_{\zeta_{i}}\) for some \(i < n\) is impossible. Suppose otherwise, and fix $i_{*} < n$ so that $\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in P_{\zeta_{i_{*}}}$ for some $\alpha < \kappa$. Note that it follows that $\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in P_{\zeta_{i_{*}}}$ for \emph{all} $\alpha < \kappa$, as each $\varphi_{\alpha}$ is a complete $L_{w_{\alpha}}$-formula, the predicate $P_{\zeta_{i_{*}}}$ is in every $L_{w_{\alpha}}$, and columns in the $\mathrm{inp}$-pattern are consistent. Write each tuple in the array \(a_{\alpha,i}\) as \(a_{\alpha,i} = (b_{\alpha, i}, c_{\alpha, i}, d_{\alpha, i}, e_{\alpha, i})\), where the elements of \(b_{\alpha, i}\) are in \(O\), the elements of \(c_{\alpha, i}\) are in the predicates indexed by the root, i.e.\ in \(\bigcup_{i < n} P_{\zeta_{i}}\), the elements of \(d_{\alpha,i}\) are in the predicates indexed outside the root, i.e.\ in \(\bigcup_{j < m} P_{\beta_{\alpha,j}}\), and the elements of \(e_{\alpha,i}\) are not in any predicate of \(L_{w_{\alpha}}\). By completeness, quantifier-elimination, as well as Lemmas \ref{no equalities}(1) and \ref{nice functions on P}, each \(\varphi_{\alpha}(x;a_{\alpha,i})\) is equivalent to the conjunction of the following: \begin{enumerate} \item \(x \in P_{\zeta_{i_{*}}}\) \item \(x \neq (a_{\alpha,i})_{l}\) for all \(l < l(a_{\alpha,i})\) \item \((f_{\gamma \zeta_{i_{*}}}(x) = (c_{\alpha,i})_{l})^{t_{\gamma,l}}\) for all \(l < l(c_{\alpha,i})\) and \(\gamma \in w_{\alpha}\) less than \(\zeta_{i_{*}}\) and some \(t_{\gamma,l} \in \{0,1\}\). \end{enumerate} For each \(k < i_{*}\), let \(\gamma_{k}\) be the least ordinal \(<\kappa\) such that \(\varphi_{\gamma_{k}}(x;a_{\gamma_{k},0}) \vdash f_{\zeta_{k}\zeta_{i_{*}}}(x) = c\) for some \(c \in c_{\gamma_{k},0}\), and let \(\gamma_{k} = 0\) if there is no such ordinal. Let \(\gamma = \max\{\gamma_{k} : k < i_{*}\}\). We claim that \(\{\varphi_{\gamma+1}(x;a_{\gamma+1,j}) : j < \omega\}\) is consistent.
Note that any equality of the form $f_{\zeta_{k}\zeta_{i_{*}}}(x) = c$ implied by $\varphi_{\gamma+1}(x;a_{\gamma+1,j})$ is implied by $\varphi_{\gamma_{k}}(x;a_{\gamma_{k},0})$ by indiscernibility and the fact that, for all \(j < \omega\), \[ \{\varphi_{\gamma_{k}}(x;a_{\gamma_{k},0}) ,\varphi_{\gamma+1}(x;a_{\gamma+1,j})\} \] is consistent. Additionally, any inequality of the form \(f_{\zeta_{k}\zeta_{i_{*}}}(x) \neq c\) implied by \(\varphi_{\gamma+1}(x;a_{\gamma+1,j})\) is compatible with \(\{\varphi_{\alpha}(x;a_{\alpha,0}) : \alpha \leq \gamma\}\). Choosing a realization \(b \models \{\varphi_{\alpha}(x;a_{\alpha,0}) : \alpha \leq \gamma\}\) satisfying every inequality of the form \(f_{\zeta_{k}\zeta_{i_*}}(x) \neq c\) implied by the \(\varphi_{\gamma+1}(x;a_{\gamma+1,j})\) yields a realization of \(\{\varphi_{\gamma+1}(x;a_{\gamma+1,j}) : j < \omega\}\), by the description of $\varphi_{\gamma+1}(x;a_{\gamma+1,j})$ as a conjunction given above. This contradicts the definition of inp-pattern. \end{proof} \begin{prop}\label{inpcomputation} If $\kappa_{\mathrm{inp}}(T^{*}_{\kappa,f}) = \kappa^{+}$, then there is a subset \(H \subseteq \kappa\) with \(|H| = \kappa\) such that \(f\) is constant on \([H]^{2}\). \end{prop} \begin{proof} Recall that the hypothesis $\kappa_{\mathrm{inp}}(T^{*}_{\kappa,f}) = \kappa^{+}$ allowed us to fix a rectified inp-pattern \((\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})$ with the property that each $\varphi_{\alpha}(x;y_{\alpha})$ enumerated in $\overline{\varphi}$ has \(l(x) = 1\). By completeness and Lemma \ref{ino}, we know that, for each \(\alpha < \kappa\), \(\varphi_{\alpha}(x;y)\vdash x \in O\). 
Then by quantifier-elimination, completeness, and Lemmas \ref{no equalities}(2) and \ref{nice functions on O}, for each $\alpha < \kappa$, $\varphi_{\alpha}(x;a_{\alpha,0})$ is equivalent to the conjunction of the following: \begin{enumerate} \item \(x \in O\) \item \(x \neq (a_{\alpha,0})_{l}\) for all \(l < l(a_{\alpha,0})\) \item \((p_{\gamma}(x) = x)^{t^{0}_{\gamma}}\) for $\gamma \in w_{\alpha}$ and some $t^{0}_{\gamma} \in \{0,1\}$. \item The values of the \(p_{\gamma}\) and how they descend in the tree: \begin{enumerate} \item $((f_{\delta \gamma} \circ p_{\gamma})(x) = (a_{\alpha,0})_{l})^{t^{1}_{l,\delta,\gamma}}$ for $l < l(a_{\alpha,0})$, $\delta \leq \gamma$ in $w_{\alpha}$, and some $t^{1}_{l,\delta,\gamma} \in \{0,1\}$. \item \(((f_{\delta \gamma} \circ p_{\gamma})(x) = (f_{\delta \gamma'} \circ p_{\gamma'})(x))^{t^{2}_{\delta,\gamma,\gamma'}}\) for \(\delta, \gamma, \gamma' \in w_{\alpha}\) with \(\delta \leq \gamma < \gamma'\), for some $t^{2}_{\delta,\gamma,\gamma'} \in \{0,1\}$. \end{enumerate} \end{enumerate} \textbf{Claim: }Given $\alpha < \kappa$, there are $\epsilon_{\alpha} \leq \epsilon'_{\alpha} \in w_{\alpha}$ and pairwise distinct $c_{\alpha,k} \in a_{\alpha,k}$ such that, for all $k < \omega$, $\varphi_{\alpha}(x;a_{\alpha,k}) \vdash (f_{\epsilon_{\alpha} \epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = c_{\alpha,k}$. \emph{Proof of claim:} Suppose not. Then, by the description of $\varphi_{\alpha}(x;a_{\alpha,k})$ given above, the following set of formulas \[ \{\varphi_{\alpha}(x;a_{\alpha,k}) : k < \omega\} \] is equivalent to a finite number of equations common to each instance \(\varphi_{\alpha}(x;a_{\alpha,k})\), together with an infinite collection of inequations. It is then easy to see that \(\{\varphi_{\alpha}(x;a_{\alpha,k}) : k < \omega\}\) is consistent, contradicting the definition of an inp-pattern. This proves the claim.
Note that, by the pigeonhole principle, we may assume that either (i) $\epsilon_{\alpha}, \epsilon_{\alpha}' \in r$ for all $\alpha < \kappa$, (ii) $\epsilon_{\alpha} \in r$, $\epsilon'_{\alpha} \in v_{\alpha}$ for all $\alpha < \kappa$, or (iii) $\epsilon_{\alpha},\epsilon'_{\alpha} \in v_{\alpha}$ for all $\alpha < \kappa$. Case (i) is impossible: as the root \(r = \{\zeta_{i} : i < n\}\) is finite and the all 0's path is consistent, we can find an ordinal \(\gamma < \kappa\) such that for all \(\alpha < \kappa\), if there is a \(c \in a_{\alpha,0}\) such that \(\varphi_{\alpha}(x;a_{\alpha,0}) \vdash (f_{\zeta_{i}\zeta_{i'}} \circ p_{\zeta_{i'}})(x) = c\) for some \(i \leq i' < n\), then there is some \(\alpha' < \gamma\) such that \(\varphi_{\alpha'}(x;a_{\alpha',0}) \vdash (f_{\zeta_{i}\zeta_{i'}} \circ p_{\zeta_{i'}})(x) = c\). Hence, by indiscernibility, the equality $(f_{\epsilon_{\gamma} \epsilon'_{\gamma}} \circ p_{\epsilon'_{\gamma}})(x) = c_{\gamma,k}$ implied by $\varphi_{\gamma}(x;a_{\gamma,k})$ must also be implied by $\varphi_{\alpha}(x;a_{\alpha,0})$ for some $\alpha < \gamma$. Since $\{\varphi_{\alpha}(x;a_{\alpha,0}), \varphi_{\gamma}(x;a_{\gamma,k})\}$ is consistent for all $k < \omega$, this is impossible because the tuples in $(c_{\gamma,k})_{k < \omega}$ are pairwise distinct. Now we consider cases (ii) and (iii). Again by the pigeonhole principle, we may assume that if we are in case (ii), then $\epsilon_{\alpha}$ is constant for all $\alpha$. Then by rectification, we know that, in either case (ii) or (iii), when $\alpha < \alpha'$, $\epsilon_{\alpha} \leq \epsilon_{\alpha'}$ and $\epsilon'_{\alpha} < \epsilon'_{\alpha'}$. Because, for each $\alpha < \kappa$, the elements $c_{\alpha,k}$, $k < \omega$, are pairwise distinct, the set of formulas $$ \{(f_{\epsilon_{\alpha}\epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = c_{\alpha,k} : k < \omega\} $$ is $2$-inconsistent.
Moreover, if $g : \kappa \to \omega$ is a function, the partial type $$ \{ (f_{\epsilon_{\alpha}\epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = c_{\alpha,g(\alpha)} : \alpha < \kappa\} $$ is implied by $\{\varphi_{\alpha}(x;a_{\alpha,g(\alpha)}) : \alpha < \kappa\}$ and is therefore consistent. It follows that $((f_{\epsilon_{\alpha}\epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = y_{\alpha})_{\alpha < \kappa}, (c_{\alpha,k})_{\alpha < \kappa, k < \omega}$ is an inp-pattern with $k_{\alpha} = 2$ for all $\alpha < \kappa$. By Lemma \ref{case2}, $f(\{\epsilon'_{\alpha},\epsilon'_{\alpha'}\}) = 1$ for all $\alpha < \alpha'$. Therefore $H = \{\epsilon'_{\alpha} : \alpha < \kappa\}$ is a homogeneous set for $f$ of size $\kappa$, since the $\epsilon'_{\alpha}$ are strictly increasing. \end{proof} \subsection{Case 2: \(\kappa_{\text{sct}} = \kappa^{+}\)} In this subsection, we show that if $\kappa_{\mathrm{sct}}(T^{*}_{\kappa,f}) = \kappa^{+}$ then $f$ satisfies a homogeneity property inconsistent with $f$ being a strong coloring. In particular, we will show that if this homogeneity property fails, then for any putative sct-pattern of height $\kappa$, there are two incomparable elements in $\omega^{<\kappa}$ which index compatible formulas, contradicting the inconsistency condition in the definition of an sct-pattern. This step is accomplished by relating consistency of the relevant formulas to an amalgamation problem in finite structures. The following lemma describes the relevant amalgamation problem: \begin{lem}\label{consistency} Suppose we are given the following: \begin{itemize} \item Finite sets \(w, w' \subset \kappa\) with \(w \cap w' = v\) such that for all \(\alpha \in v\), \(\beta \in w \setminus v\), \(\gamma \in w' \setminus v\), we have \(\alpha < \beta < \gamma\) and \(f(\{\beta, \gamma\}) = 1\).
\item Structures \(A \in \mathbb{K}_{w \cup w'}\), \(B = \langle d,A \rangle^{B}_{L_{w}} \in \mathbb{K}_{w}\), \(C = \langle e, A \rangle^{C}_{L_{w'}} \in \mathbb{K}_{w'}\) satisfying the following: \begin{enumerate} \item The tuples $d,e$ are contained in $O \cup \bigcup_{\alpha\in v} P_{\alpha}$. \item The map sending \(d \mapsto e\) induces an isomorphism of $L_{v}$-structures over \(A\) between $\langle d,A \rangle^{B}_{L_{v}}$ and $\langle e, A \rangle_{L_{v}}^{C}$. \end{enumerate} \end{itemize} Then there is \(D = \langle g,A \rangle^{D}_{L_{w \cup w'}} \in \mathbb{K}_{w \cup w'}\) extending \(A\) such that \(l(g)= l(d) = l(e)\) and \(\langle g, A \rangle^{D}_{L_{w}} \cong B\) over \(A\) and \(\langle g,A \rangle^{D}_{L_{w'}} \cong C\) over \(A\) via the isomorphisms over \(A\) sending \(g \mapsto d\) and \(g \mapsto e\), respectively. \end{lem} \begin{proof} Let \(g\) be a tuple of formal elements with \(l(g) = l(d)\) \((=l(e))\), with \(L_{w}\) and \(L_{w'}\) interpreted so that \(\langle g,A \rangle_{L_{w}}\) extends \(A\) and is $L_{w}$-isomorphic over \(A\) to \(B\), so that \(\langle g,A \rangle_{L_{w'}}\) extends \(A\) and is $L_{w'}$-isomorphic over \(A\) to \(C\), and so that $\langle g,A \rangle_{L_{w}}$ and $\langle g,A \rangle_{L_{w'}}$ are disjoint over $A \cup \{g\}$. Let $\gamma$ be the least element of $w' \setminus v$ and define \(D\) to have underlying set \[ \langle g,A \rangle_{L_{w}} \cup \langle g,A \rangle_{L_{w'}} \cup \{*_{\alpha,c} : \alpha \in w\setminus v, c \in P_{\gamma}^{\langle g,A \rangle_{L_{w'}}} \setminus P_{\gamma}^{A} \}. \] We must give \(D\) an \(L_{w \cup w'}\)-structure. The main task is to give the elements at the levels of the tree indexed by $\alpha \in w' \setminus v$ ancestors at the levels indexed by $w \setminus v$; the new formal elements $*_{\alpha,c}$ will play this role. Interpret the predicates on $D$ by setting $O^{D} = O^{\langle g,A \rangle_{L_{w}} } = O^{\langle g,A \rangle_{L_{w'}}}$ and, additionally, $$ P^{D}_{\alpha} = \begin{cases} P_{\alpha}^{\langle g,A \rangle_{L_{w'}}} & \text{ if } \alpha \in w' \setminus v \\ P_{\alpha}^{\langle g,A \rangle_{L_{w}}} \cup \{*_{\alpha,c}: c \in P_{\gamma}^{\langle g,A \rangle_{L_{w'}}} \setminus P_{\gamma}^{A}\} & \text{ if } \alpha \in w \setminus v \\ P_{\alpha}^{\langle g, A \rangle_{L_{w}}} \cup P^{\langle g,A \rangle_{L_{w'}}}_{\alpha} & \text{ if } \alpha \in v. \end{cases} $$ For each of the function symbols $f^{D}_{\alpha \beta}$, we are forced to interpret $f^{D}_{\alpha \beta}$ to be the identity on the complement of $P_{\beta}^{D}$ in $D$, so it suffices to specify the interpretation on $P^{D}_{\beta}$. Given \(\alpha \in w\setminus v\) and $c \in P_{\gamma}^{\langle g,A \rangle_{L_{w'}}} \setminus P_{\gamma}^{A}$, interpret \(f^{D}_{\alpha \gamma}(c) = {*}_{\alpha,c}\) and for any \(\beta \in w'\setminus v\), define \(f_{\alpha \beta}^{D} = f_{\alpha \gamma}^{D} \circ f_{\gamma \beta}^{D}\) on \(P_{\beta}^{D}\). If \(\alpha \in w\setminus v\) and \(\xi \in v\), interpret \(f^{D}_{\xi \alpha}\) so that $f^{D}_{\xi \alpha}|_{P^{\langle g,A\rangle_{L_{w}}}_{\alpha}} = f^{\langle g,A \rangle_{L_{w}}}_{\xi \alpha}|_{P^{\langle g,A\rangle_{L_{w}}}_{\alpha}}$ and \(f^{D}_{\xi \alpha}(*_{\alpha,c}) = f^{D}_{\xi \gamma}(c)\). If $\alpha < \beta$ are both from $w \setminus v$, we likewise define $f^{D}_{\alpha \beta}$ so that $f^{D}_{\alpha \beta}|_{P^{\langle g,A \rangle_{L_{w}}}_{\beta}} = f_{\alpha \beta}^{\langle g, A \rangle_{L_{w}}}$ and $f^{D}_{\alpha \beta}(*_{\beta,c}) = *_{\alpha,c}$. It remains to define the interpretation of $f_{\alpha \beta}^{D}$ when $\alpha <\beta$ are from $w \cup w'$ and $\alpha,\beta \notin w \setminus v$. If $\beta \in w' \setminus v$, then we can only set $f^{D}_{\alpha \beta}|_{P^{D}_{\beta}} = f^{\langle g,A \rangle_{L_{w'}}}_{\alpha \beta}|_{P^{D}_{\beta}}$, since $P^{D}_{\beta} = P_{\beta}^{\langle g,A \rangle_{L_{w'}}}$. If $\beta \in v$, then we set $f^{D}_{\alpha \beta}|_{P^{D}_{\beta}} = f^{\langle g, A \rangle_{L_{w}}}_{\alpha \beta}|_{P^{\langle g, A \rangle_{L_{w}}}_{\beta}} \cup f^{\langle g, A \rangle_{L_{w'}}}_{\alpha \beta}|_{P^{\langle g, A \rangle_{L_{w'}}}_{\beta}}$. Finally, interpret each function of the form \(p_{\beta}\) for \(\beta \in w\) to restrict to $p_{\beta}^{\langle g,A \rangle_{L_{w}}}$ and to be the identity on the complement of $\langle g,A \rangle_{L_{w}}$, and likewise for $\beta \in w'$ (note that these definitions agree for $\beta \in w \cap w' = v$). This completes the definition of the \(L_{w \cup w'}\)-structure on \(D\). It is clear from the construction that $D$ is an $L_{w \cup w'}$-extension of $A$, an $L_{w}$-extension of $\langle g,A \rangle_{L_{w}}$, and an $L_{w'}$-extension of $\langle g,A \rangle_{L_{w'}}$. Now we must check that \(D \in \mathbb{K}_{w \cup w'}\). It is easy to check that axioms \((1)-(3)\) are satisfied in \(D\). As \(f(\{\alpha, \beta\}) = 1\) for all \(\alpha \in w \setminus v, \beta \in w' \setminus v\), the only possible counterexample to axiom (4) can occur when \(\xi \in v\), \(\beta \in (w \cup w') \setminus v\) and \(f(\{\xi, \beta\})=0\). As the formal elements \(*_{\alpha, c}\) are not in the image of \(O\) under the \(p_{\alpha}\), it follows that a counterexample to axiom (4) must come from a counterexample either in \(B\) or \(C\), which is impossible. So \(D \in \mathbb{K}_{w \cup w'}\), which completes the proof. \end{proof} \begin{lem}\label{rootandobject} Suppose \(((\varphi_{\alpha}(x;y_{\alpha}))_{\alpha < \kappa},(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})\) is a rectified sct-pattern such that \(l(x)\) is minimal among sct-patterns of height \(\kappa\).
Then for all \(\alpha < \kappa\), \(\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in O \cup \bigcup_{i < n} P_{\zeta_{i}}\) for all \(l < l(x)\), that is, every formula in the pattern implies that every variable $(x)_{l}$ is in $O$ or a predicate indexed by the root of the associated $\Delta$-system. \end{lem} \begin{proof} Suppose not. First, consider the case that for some \(l < l(x)\) and all \(\alpha < \kappa\), $\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \not\in O \cup \bigcup_{i < n} P_{\zeta_{i}} \cup \bigcup_{j < m} P_{\beta_{\alpha,j}}$. Then the only relations that \(\varphi_{\alpha}(x;y_{\alpha})\) can assert between \((x)_{l}\) and the elements of \(y_{\alpha}\) and the other elements of \(x\) are equalities and inequalities. By Lemma \ref{no equalities}(2), we know that $\varphi_{\alpha}(x;y_{\alpha})$ proves no equalities between elements of $x$ and the elements of $y_{\alpha}$, so it can only prove inequalities between $(x)_{l}$ and $y_{\alpha}$, but it is easy to see that this allows us to find an sct-pattern in fewer variables, contradicting minimality (or, if $l(x) = 1$, the definition of an sct-pattern). Second, consider the case that there are some \(\alpha < \kappa\) and \(j < m\) such that $\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in P_{\beta_{\alpha, j}}$. Then, for all \(\alpha' \neq \alpha\), \(\varphi_{\alpha'}(x;y_{\alpha'})\) implies that \((x)_{l}\) is not in any of the unary predicates of \(L_{w_{\alpha'}}\), as \(\beta_{\alpha,j}\) is outside the root of the \(\Delta\)-system. So restricting the given pattern to the formulas \((\varphi_{\alpha'}(x;y_{\alpha'}) : \alpha' < \kappa, \alpha' \neq \alpha)\) yields a rectified sct-pattern of height $\kappa$ which falls into the first case considered, a contradiction. As these are the only cases, we conclude.
\end{proof} \begin{prop}\label{sctcomputation} If $\kappa_{\mathrm{sct}}(T^{*}_{\kappa,f}) = \kappa^{+}$, then there is \(\gamma < \kappa\) such that for any \(\alpha,\alpha'\) with \(\gamma <\alpha < \alpha'<\kappa\) there are \(\xi \in v_{\alpha}\) and \(\zeta \in v_{\alpha'}\) such that \(f(\{\xi, \zeta\}) = 0\). \end{prop} \begin{proof} Suppose not. Recall that by Lemma \ref{tending} and Remark \ref{same number of variables}, if there is an sct-pattern of height $\kappa$ in $k$ free variables, there is an sct-pattern in $k$ free variables which is also rectified. It follows that we may fix a rectified sct-pattern \(((\varphi_{\alpha}(x;y_{\alpha}))_{\alpha < \kappa},(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})\) such that \(l(x)\) is minimal among sct-patterns of height \(\kappa\). By Lemma \ref{rootandobject}, we know that, up to a relabeling of the variables, there is a \(k \leq l(x)\) such that, for all $l < k$, \(\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in P_{\zeta_{i(l)}}\) for some $i(l)<n$, and \(\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in O\) for \(l \geq k\). For each $\alpha < \kappa$, let $\varphi'_{\alpha}(x)$ be a complete $L_{w_{\alpha}}$-formula, without parameters, in the variables $x$, implied by $\varphi_{\alpha}(x;y_{\alpha})$ (which is unique up to logical equivalence, since $\varphi_{\alpha}(x;y_{\alpha})$ was assumed to be a complete $L_{w_{\alpha}}$-formula). Clearly we have, for all $l < k$, \(\varphi'_{\alpha}(x) \vdash (x)_{l} \in P_{\zeta_{i(l)}}\) and \(\varphi'_{\alpha}(x) \vdash (x)_{l} \in O\) for \(l \geq k\), since these are formulas without parameters in $L_{r} \subseteq L_{w_{\alpha}}$. Since all the symbols in the language are unary, it is easy to see from quantifier-elimination that for each $\alpha < \kappa$ and $\eta \in \omega^{\alpha}$, $\varphi_{\alpha}(x;a_{\eta})$ is equivalent to a conjunction of the following: \begin{enumerate} \item $\varphi'_{\alpha}(x)$.
\item $(x)_{l} \neq (a_{\eta})_{i}$ for $l < l(x)$ and $i < l(a_{\eta})$ (using the minimality of $l(x)$). \item $(f_{\delta \zeta_{i(l)}}((x)_{l}) = (a_{\eta})_{i})^{t^{0}_{\delta,l,i}}$ for $l < k$, $\delta\in r$ with $\delta < \zeta_{i(l)}$, and $i < l(a_{\eta})$, and for some $t^{0}_{\delta,l,i} \in \{0,1\}$. \item $((f_{\delta \xi} \circ p_{\xi})((x)_{l}) = (a_{\eta})_{i})^{t^{1}_{\delta, \xi,l,i}}$ for $\delta \leq \xi$ from $r$, $k \leq l < l(x)$, and $i < l(a_{\eta})$, and for some $t^{1}_{\delta, \xi,l,i} \in \{0,1\}$. \item $((f_{\delta \xi} \circ p_{\xi})((x)_{l}) = (a_{\eta})_{i})^{t^{2}_{\delta, \xi,l,i}}$ for $\delta \leq \xi$ from $w_{\alpha}$, $\xi \in v_{\alpha}$, $k \leq l < l(x)$, and $i < l(a_{\eta})$, and for some $t^{2}_{\delta, \xi,l,i} \in \{0,1\}$. \end{enumerate} Choose \(\gamma < \kappa\) so that if \(\alpha < \kappa\) and \(\varphi_{\alpha}(x;a_{0^{\alpha}})\) implies a positive instance of one of the equalities in (3) and (4), then this is implied by \(\varphi_{\alpha'}(x;a_{0^{\alpha'}})\) for some \(\alpha' < \gamma\) (possible as the root is finite). By assumption, there are \(\alpha, \alpha'\) with \(\gamma < \alpha < \alpha' < \kappa\) such that \(f(\{\xi, \zeta\}) = 1\) for all \(\xi \in v_{\alpha}, \zeta \in v_{\alpha'}\). Choose \(\eta \in \omega^{\alpha}\), \(\nu \in \omega^{\alpha'}\) both extending $0^{\gamma}$ such that \(\eta \perp \nu\). Let \(A = \langle a_{\eta}, a_{\nu} \rangle_{L_{w_{\alpha} \cup w_{\alpha'}}}\) be the finite \(L_{w_{\alpha} \cup w_{\alpha'}}\)-structure generated by \(a_{\eta}\) and \(a_{\nu}\). Pick $d \models \{\varphi_{\delta}(x;a_{0^{\delta}}): \delta \leq \gamma\} \cup \{\varphi_{\alpha}(x;a_{\eta})\}$ and $e \models \{\varphi_{\delta}(x;a_{0^{\delta}}) : \delta \leq \gamma\} \cup \{\varphi_{\alpha'}(x;a_{\nu})\}$.
By the choice of $\gamma$, the $s$-indiscernibility of $(a_{\eta})_{\eta \in \omega^{<\kappa}}$, quantifier elimination, and the observation above, we have \(\text{tp}_{L_{r}}(d/A) = \text{tp}_{L_{r}}(e/A)\). Let \(B = \langle d,A \rangle_{L_{w_{\alpha}}}\) and \(C = \langle e,A \rangle_{L_{w_{\alpha'}}}\). By Lemma \ref{consistency}, there is a \(D \in \mathbb{K}_{w_{\alpha} \cup w_{\alpha'}}\) with \(D = \langle g, A \rangle^{D}_{L_{w_{\alpha} \cup w_{\alpha'}}}\) such that \(l(g) = l(d) = l(e)\), \(\langle g,A \rangle_{L_{w_{\alpha}}} \cong B\) over \(A\), and \(\langle g, A \rangle_{L_{w_{\alpha'}}} \cong C\) over \(A\). Using the extension property to embed $D$ in $\mathbb{M}$ over $A$, it follows that in \(\mathbb{M}\), \(g \models \{\varphi_{\alpha}(x;a_{\eta}), \varphi_{\alpha'}(x;a_{\nu})\}\), contradicting the definition of an sct-pattern. This completes the proof. \end{proof} \subsection{Conclusion} \begin{thm} \label{first main theorem} There is a stable theory \(T\) such that \(\kappa_{\text{cdt}}(T) \neq \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). Moreover, it is consistent with ZFC that for every regular uncountable \(\kappa\), there is a stable theory \(T\) with \(|T| = \kappa\) and \(\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). \end{thm} \begin{proof} If $\kappa$ is regular and uncountable satisfying $\text{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})$, then choose \(f: [\kappa]^{2} \to 2\) witnessing \(\text{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})\). There can be no homogeneous set of size \(\kappa\) for \(f\), since given any $\{x_{\alpha} : \alpha < \kappa\} \subseteq \kappa$, enumerated in increasing order, we obtain a pairwise disjoint family of finite sets $(v_{\alpha})_{\alpha < \kappa}$ defined by $v_{\alpha} = \{x_{\alpha}\}$, and $\mathrm{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})$ implies that for each color $i \in \{0,1\}$, there are $\alpha < \alpha'$ such that $f(\{x_{\alpha},x_{\alpha'}\}) = i$.
Moreover, $\mathrm{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})$ implies directly that there can be no collection \((v_{\alpha} : \alpha < \kappa)\) of disjoint finite sets such that, given \(\alpha < \alpha' < \kappa\), there are \(\xi \in v_{\alpha}, \zeta \in v_{\alpha'}\) such that \(f(\{\xi, \zeta\}) = 0\). Let \(T = T^{*}_{\kappa, f}\). This theory is stable by Lemma \ref{stable}. Additionally, \(\kappa_{\mathrm{cdt}}(T) = \kappa^{+}\) by Proposition \ref{cdtcomputation}, but \(\kappa_{\text{sct}}(T) < \kappa^{+}\) and \(\kappa_{\text{inp}}(T) < \kappa^{+}\) by Proposition \ref{sctcomputation} and Proposition \ref{inpcomputation}, respectively. By Fact \ref{ShelahPr} and Observation \ref{monotonicity}, \(\text{Pr}_{1}(\lambda^{++}, \lambda^{++}, 2, \aleph_{0})\) holds for any regular uncountable $\lambda$. Then \(T = T^{*}_{\kappa,f}\) gives the desired theory, for \(\kappa = \lambda^{++}\) and any \(f\) witnessing \(\text{Pr}_{1}(\lambda^{++}, \lambda^{++}, 2, \aleph_{0})\). For the ``moreover'' clause, note that ZFC is equiconsistent with ZFC + GCH + ``there are no inaccessible cardinals'' (if \(V \models \text{ZFC}\) contains a strongly inaccessible cardinal, replace \(V\) by \(V_{\kappa}\) for \(\kappa\) the least such, and then consider \(L\) in \(V\)), which entails that every regular uncountable cardinal is a successor. By Theorem \ref{GalvinPr} this implies that \(\text{Pr}_{1}(\kappa, \kappa,2, \aleph_{0})\) holds for all regular uncountable cardinals \(\kappa\), which completes the proof. \end{proof} \begin{rem} In \cite[Theorem 3.1]{ArtemNick}, it was proved that \(\kappa_{\text{cdt}}(T) = \kappa_{\text{inp}}(T) + \kappa_{\text{sct}}(T)\) for any countable theory \(T\). The above theorem shows that, in a certain sense, this result is best possible. \end{rem} \begin{rem} It would be interesting to know if, for $\kappa$ strongly inaccessible, there is a theory $T$ with $\kappa_{\mathrm{cdt}}(T) = \kappa^{+} > \kappa_{\text{inp}}(T) + \kappa_{\text{sct}}(T)$.
\end{rem} \section{Compactness of ultrapowers}\label{compactness} In this section we study the decay of saturation in regular ultrapowers. We say an ultrafilter $\mathcal{D}$ on $I$ is \emph{regular} if there is a collection of sets $\{X_{\alpha} : \alpha < |I|\} \subset \mathcal{D}$ such that for all $t \in I$, the set $\{\alpha : t \in X_{\alpha}\}$ is finite, and that $\mathcal{D}$ is \emph{uniform} if all sets in $\mathcal{D}$ have cardinality $|I|$. Recall that a model $M$ is called \emph{$\lambda$-compact} if every (partial) type over $M$ of cardinality less than $\lambda$ is realized in $M$. In the case that the language has size at most $\lambda$, the notions of $\lambda$-compactness and $\lambda$-saturation are equivalent, but they may differ if the cardinality of the language exceeds $\lambda$, since, in this case, types over sets of parameters of size less than $\lambda$ may, in general, still contain more than $\lambda$ many formulas. Given a theory $T$, we start with a regular uniform ultrafilter $\mathcal{D}$ on $\lambda$ and a $\lambda^{++}$-saturated model $M \models T$. We then consider whether the ultrapower $M^{\lambda}/\mathcal{D}$ is $\lambda^{++}$-compact. Shelah has shown \cite[Theorem VI.4.7]{shelah1990classification} that if $T$ is not simple, then in this situation $M^{\lambda}/\mathcal{D}$ will not be $\lambda^{++}$-compact, and he asked whether an analogous result holds for theories $T$ with $\kappa_{\text{inp}}(T) > \lambda^{+}$. We will show by direct construction that $\kappa_{\text{inp}}(T) > \lambda^{+}$ does not suffice but that, by modifying an argument due to Malliaris and Shelah \cite[Claim 7.5]{Malliaris:2012aa}, $\kappa_{\text{sct}}(T) > \lambda^{+}$ is sufficient to obtain a decay in compactness, by leveraging the finite square principles of Kennedy and Shelah \cite{kennedyshelah}. \subsection{A counterexample} Fix $\kappa$ a regular uncountable cardinal.
Let $L'_{\kappa} = \langle O, P_{\alpha},p_{\alpha} : \alpha < \kappa \rangle$ be a language where $O$ and each $P_{\alpha}$ is a unary predicate and each $p_{\alpha}$ is a unary function. Define a theory $T'_{\kappa}$ to be the universal theory with the following as axioms: \begin{enumerate} \item $O$ and the $(P_{\alpha})_{\alpha < \kappa}$ are pairwise disjoint. \item For all $\alpha < \kappa$, $p_{\alpha}$ is a function such that $(\forall x \in O)[p_{\alpha}(x) \in P_{\alpha}]$ and $(\forall x \not\in O)[p_{\alpha}(x) = x]$. \end{enumerate} Given a finite set $w \subset \kappa$, define $L'_{w} = \langle O,P_{\alpha}, p_{\alpha} : \alpha \in w \rangle$. Let $\mathbb{K}'_{w}$ denote the class of finite models of $T'_{\kappa} \upharpoonright L'_{w}$. \begin{lem} Suppose $w \subset \kappa$ is finite. Then $\mathbb{K}'_{w}$ is a Fra\"iss\'e class. \end{lem} \begin{proof} The axioms of $T'_{\kappa}\upharpoonright L'_{w}$ are universal, so HP is clear. As we allow the empty structure to be a model, JEP follows from AP. For AP, we reduce to the case where $A,B,C \in \mathbb{K}'_{w}$, $A$ is a substructure of both $B$ and $C$, and $B \cap C = A$. Because all the functions in the language are unary, we may define an $L'_{w}$-structure $D$ on $B \cup C$ by taking unions of the relations and functions as interpreted on $B$ and $C$. It is easy to see that $D \in \mathbb{K}'_{w}$, so we are done. \end{proof} By Fra\"iss\'e theory, for each finite $w \subset \kappa$, there is a unique countable ultrahomogeneous $L'_{w}$-structure with age $\mathbb{K}'_{w}$. Let $T^{\dag}_{w}$ denote its theory. We remark that the theory $T^{\dag}_{w}$ is almost a reduct of $T^{*}_{w}$ considered in the previous sections, with the difference that the functions $p_{\alpha}$ are partial in $T^{*}_{w}$ and total in $T^{\dag}_{w}$. One can easily check that $T^{\dag}_{w}$ is interpretable in $T^{*}_{w}$ for $w$ finite, interpreting $O$ by $\bigwedge_{\alpha \in w} \text{dom}(p_{\alpha})$.
Since this interpretation is not uniform in $w$, we still need to briefly repeat the same steps from the analysis above to show that the $T^{\dag}_{w}$ are coherent. \begin{lem} Suppose $v$ and $w$ are finite sets with $w \subset v \subset \kappa$. Then $T^{\dag}_{w} \subset T^{\dag}_{v}$. \end{lem} \begin{proof} By induction, it suffices to consider the case when $v = w \cup \{\gamma\}$ for some $\gamma \in \kappa \setminus w$. By Fact \ref{KPTredux}, we must show (1) that $A \in \mathbb{K}'_{w}$ if and only if there is $D \in \mathbb{K}'_{v}$ such that $A$ is an $L'_{w}$-substructure of $D \upharpoonright L'_{w}$ and (2) that whenever $A,B \in \mathbb{K}'_{w}$, $\pi : A \to B$ is an $L'_{w}$-embedding, and $C \in \mathbb{K}'_{v}$ satisfies $C = \langle A \rangle^{C}_{L'_{v}}$, then there is $D \in \mathbb{K}'_{v}$ such that $B$ is an $L'_{w}$-substructure of $D \upharpoonright L'_{w}$ and $\pi$ extends to an $L'_{v}$-embedding $\tilde{\pi} : C \to D$. For (1), it is clear from the definitions that if $D \in \mathbb{K}'_{v}$ then $D \upharpoonright L'_{w} \in \mathbb{K}'_{w}$. Given $A \in \mathbb{K}'_{w}$, we may construct a suitable $L'_{v}$-structure $D$ as follows. If $O^{A} = \emptyset$, we may simply expand $A$ to $D$ by setting $P_{\gamma}^{D} = \emptyset$ and letting $p_{\gamma}^{D}$ be the identity, and this trivially satisfies the required axioms. So we will assume $O^{A}$ is non-empty and let the underlying set of $D$ be $A \cup \{*\}$. We interpret the predicates of $L'_{w}$ to have the same interpretation as on $A$, and we interpret the functions of $L'_{w}$ so that their restrictions to $A$ are their interpretations on $A$ and so that the functions are the identity on $*$. We additionally set $P^{D}_{\gamma} = \{*\}$ and $p^{D}_{\gamma}$ to be the identity on the complement of $O^{D}$ ($=O^{A}$) and the constant function with value $*$ on $O^{D}$. Clearly $D \in \mathbb{K}'_{v}$, $D = \langle A \rangle_{L'_{v}}$, and $A$ is an $L'_{w}$-substructure of $D \upharpoonright L'_{w}$.
For (2), suppose $A,B \in \mathbb{K}'_{w}$, $\pi : A \to B$ is an embedding, and $C \in \mathbb{K}'_{v}$ satisfies $C = \langle A \rangle^{C}_{L'_{v}}$. The requirement that $C = \langle A \rangle^{C}_{L'_{v}}$ entails that any points of $C \setminus A$ lie in $P_{\gamma}^{C}$. In particular, $O^{A} = O^{C}$ and we may use this notation interchangeably. Let $E = O^{B} \setminus \pi(O^{A})$, so that we may write $O^{B} = \pi(O^{A}) \sqcup E$. Define an $L'_{v}$-structure $D$ whose underlying set is $B \cup P_{\gamma}^{C} \cup \{*_{e} : e \in E\}$. Interpret the predicates of $L'_{w}$ on $D$ to have the same interpretation as on $B$ and interpret the functions of $L'_{w}$ so that they agree with their interpretations on $B$ and are the identity on the complement of $B$. Then define $P_{\gamma}^{D} = P_{\gamma}^{C} \cup \{*_{e} : e \in E\}$ and interpret $p^{D}_{\gamma}$ by $$ p_{\gamma}^{D}(x) = \begin{cases} p_{\gamma}^{C}(a) & \text{ if } x = \pi(a) \text{ for some } a \in O^{A} \\ *_{x} & \text{ if } x \in E \\ x & \text{ if } x \not\in O^{B}. \end{cases} $$ Clearly $D \in \mathbb{K}'_{v}$. Extend $\pi$ to a map $\tilde{\pi}: C \to D$ by defining $\tilde{\pi}$ to be the identity on $P_{\gamma}^{C}$. We claim $\tilde{\pi}$ is an $L'_{v}$-embedding: note that for all $x \in O^{C}$, $p_{\gamma}^{D}(\tilde{\pi}(x)) = p_{\gamma}^{C}(x) = \tilde{\pi}(p_{\gamma}^{C}(x))$, and $\tilde{\pi}$ obviously respects all other structure from $L'_{w}$ as $\pi$ is an $L'_{w}$-embedding. \end{proof} Define the theory $T^{\dag}_{\kappa}$ to be the union of $T^{\dag}_{w}$ for all finite $w \subset \kappa$. This is a complete $L'_{\kappa}$-theory with quantifier elimination, as these properties are inherited from the $T^{\dag}_{w}$. Fix a monster $\mathbb{M} \models T_{\kappa}^{\dag}$ and work there. \begin{prop}\label{bound} The theory $T^{\dag}_{\kappa}$ is stable and $\kappa_{\mathrm{inp}}(T^{\dag}_{\kappa}) = \kappa^{+}$.
\end{prop} \begin{proof} For each $\alpha < \kappa$, choose, for each $\beta < \omega$, an element $a_{\alpha,\beta} \in P_{\alpha}(\mathbb{M})$ such that $\beta \neq \beta'$ implies $a_{\alpha,\beta} \neq a_{\alpha,\beta'}$. It is easy to check that, for all functions $g: \kappa \to \omega$, $\{p_{\alpha}(x) = a_{\alpha,g(\alpha)} : \alpha < \kappa\}$ is consistent and, for all $\alpha < \kappa$, $\{p_{\alpha}(x) = a_{\alpha, \beta} : \beta < \omega\}$ is $2$-inconsistent by the injectivity of the sequence $(a_{\alpha,\beta})_{\beta < \omega}$. Setting $k_{\alpha} = 2$ for all $\alpha$, we see that $(p_{\alpha}(x) = y_{\alpha} : \alpha < \kappa)$, $(a_{\alpha, \beta})_{\alpha < \kappa, \beta < \omega}$, and $(k_{\alpha})_{\alpha < \kappa}$ form an inp-pattern of height $\kappa$, so $\kappa_{\text{inp}}(T_{\kappa}^{\dag}) \geq \kappa^{+}$. The stability of $T^{\dag}_{\kappa}$ follows from an argument identical to that of Lemma \ref{stable}, which, by Fact \ref{easy inequalities}, gives the upper bound $\kappa_{\text{inp}}(T_{\kappa}^{\dag}) \leq \kappa^{+}$. \end{proof} \begin{prop}\label{saturation} Suppose $\mathcal{D}$ is an ultrafilter on $\lambda$, $\kappa = \lambda^{+}$, and $M \models T^{\dag}_{\kappa}$ is $\lambda^{++}$-saturated. Then $M^{\lambda}/\mathcal{D}$ is $\lambda^{++}$-saturated. \end{prop} \begin{proof} Suppose $A \subseteq M^{\lambda}/\mathcal{D}$, $|A| = \kappa = \lambda^{+}$. To show that any $q(x) \in S^{1}(A)$ is realized, we have three cases to consider: \begin{enumerate} \item $q(x) \vdash x \in P_{\alpha}$ for some $\alpha < \kappa$, \item $q(x) \vdash x \not\in O$ and $q(x) \vdash x\not\in P_{\alpha}$ for all $\alpha < \kappa$, \item $q(x) \vdash x \in O$. \end{enumerate} It suffices to consider $q$ non-algebraic and $A = \text{dcl}(A)$. In case (1), $q(x)$ is implied by $\{P_{\alpha}(x)\} \cup \{x \neq a : a \in A\}$, and in case (2), $q(x)$ is implied by $\{\neg O(x) \wedge \neg P_{\alpha}(x) : \alpha < \kappa\} \cup \{x \neq a : a \in A\}$.
To realize $q(x)$ in case (1), for each $t \in \lambda$, choose some $b_{t} \in P_{\alpha}(M)$ such that $b_{t} \neq a[t]$ for all $a \in A$, which is possible by the $\lambda^{++}$-saturation of $M$ and the fact that $|A| = \lambda^{+}$. Let $b = \langle b_{t} \rangle_{t \in \lambda}/\mathcal{D}$. By {\L}o\'{s}'s theorem, $b \models q$. Realizing $q$ in case (2) is entirely similar. So now we show how to handle case (3). Fix some complete type $q(x) \in S_{1}(A)$ such that $q(x) \vdash x \in O$. First, we note that by possibly growing $A$ by $\kappa$ many elements, we may assume that there is a sequence $(c_{\alpha})_{\alpha < \kappa}$ from $A$ so that $q$ is equivalent to the following: $$ \{x \in O\} \cup \{x \neq a : a \in O(A)\} \cup \{p_{\alpha}(x) = c_{\alpha} : \alpha < \kappa\}. $$ This follows from the fact that, for each $\alpha < \kappa$, either $q(x) \vdash p_{\alpha}(x) = c_{\alpha}$ for some $c_{\alpha}$, or it only proves inequations of this form. In the latter case, we can choose some element $c_{\alpha} \in P_{\alpha}(M^{\lambda}/\mathcal{D})$ not in $A$ (possible by case (1) above) and extend $q(x)$ by adding the formula $p_{\alpha}(x) = c_{\alpha}$, which will then imply all inequations of the form $p_{\alpha}(x) \neq a$ for any $a \in A$, and this clearly remains finitely satisfiable. So now, given $q$ in the form described above, let $X_{t} = \{\alpha < \kappa : M \models P_{\alpha}(c_{\alpha}[t])\}$ for each $t \in \lambda$. Let $q_{t}(x)$ denote the following set of formulas over $M$: $$ q_{t}(x) = \{x \in O\} \cup \{x \neq a[t]: a \in O(A)\} \cup \{p_{\alpha}(x) = c_{\alpha}[t] : \alpha \in X_{t}\}. $$ By construction, if $\alpha \neq \alpha' \in X_{t}$ then $M \models P_{\alpha}(c_{\alpha}[t]) \wedge P_{\alpha'}(c_{\alpha'}[t])$, so this set of formulas is consistent and over a parameter set from $M$ of size at most $\kappa$, hence realized by some $b_{t} \in M$.
Let $b = \langle b_{t} \rangle_{t \in \lambda}/\mathcal{D}$ and let $J_{\alpha}$ be defined by $J_{\alpha} = \{t \in \lambda : M \models P_{\alpha}(c_{\alpha}[t])\}$. Note that, for $t < \lambda$ and $\alpha < \kappa$, $t \in J_{\alpha}$ if and only if $\alpha \in X_{t}$. As $q(x)$ is a consistent set of formulas, $J_{\alpha} \in \mathcal{D}$ and, by construction, $J_{\alpha} \subseteq \{t \in \lambda : M \models p_{\alpha}(b_{t}) = c_{\alpha}[t]\}$, so $M^{\lambda}/\mathcal{D} \models p_{\alpha}(b) = c_{\alpha}$. It is obvious that $b$ satisfies all of the other formulas of $q$, so we are done. \end{proof} \begin{cor} \label{second main theorem} Suppose $T$ is a complete theory, $|I| = \lambda$, $\mathcal{D}$ on $I$ is an ultrafilter, and $M \models T$ is a $\lambda^{++}$-saturated model of $T$. The condition that $\kappa_{\text{inp}}(T) > |I|^{+}$ is, in general, not sufficient to guarantee that $M^{I}/\mathcal{D}$ is not $\lambda^{++}$-compact. In particular, by Fact \ref{easy inequalities}(2), the condition that $\kappa_{\text{cdt}}(T) > |I|^{+}$ is not sufficient to guarantee that $M^{I}/\mathcal{D}$ is not $\lambda^{++}$-compact. \end{cor} \begin{proof} Given $\lambda$, $I$ with $|I| = \lambda$, and an ultrafilter $\mathcal{D}$ on $I$, let $M$ be any $\lambda^{++}$-saturated model of $T^{\dag}_{\lambda^{+}}$. By Proposition \ref{bound}, $\kappa_{\text{cdt}}(T^{\dag}_{\lambda^{+}}) \geq \kappa_{\text{inp}}(T^{\dag}_{\lambda^{+}}) = \lambda^{++} > |I|^{+}$, but, by Proposition \ref{saturation}, $M^{I}/\mathcal{D}$ is $\lambda^{++}$-saturated and hence $\lambda^{++}$-compact. \end{proof} \subsection{Loss of saturation from large sct-patterns} If $T$ is not simple, then it has either the tree property of the first kind or the second kind\textemdash Shelah argues in \cite[Theorem VI.4.7]{shelah1990classification} by demonstrating that either property results in a decay of saturation, with an argument tailored to each property.
The preceding section demonstrates that the analogy between TP$_{2}$ and $\kappa_{\text{inp}}(T) > |I|^{+}$ breaks down, but we show that the analogy between TP$_{1}$ and $\kappa_{\text{sct}}(T) > |I|^{+}$ survives, assuming some set theory. The argument below is a straightforward adaptation of the argument of \cite[Claim 8.5]{Malliaris:2012aa}. Recall that if $T$ is a theory with a distinguished predicate $P$ and $\kappa < \lambda$ are infinite cardinals, then the theory $T$ is said to \emph{admit} $(\lambda, \kappa)$ if there is a model $M \models T$ with $|M| = \lambda$ and $|P^{M}| = \kappa$. The notation $\langle \kappa, \lambda \rangle \to \langle \kappa', \lambda' \rangle$ stands for the assertion that any theory in a countable language that admits $(\lambda,\kappa)$ also admits $(\lambda',\kappa')$. Chang's two-cardinal theorem asserts that if $\lambda = \lambda^{<\lambda}$ then $\langle \aleph_{0},\aleph_{1} \rangle \to \langle \lambda, \lambda^{+}\rangle$ (see, e.g., \cite[Theorem 7.2.7]{chang1990model}\textemdash the statement given here follows from the proof). \begin{fact}\label{square} \cite[Lemma 4]{kennedyshelah} Suppose $\mathcal{D}$ is a regular uniform ultrafilter on $\lambda$ and $\langle \aleph_{0},\aleph_{1} \rangle \to \langle \lambda, \lambda^{+}\rangle$. There is an array of sets $\langle u_{t,\alpha} : t < \lambda, \alpha < \lambda^{+} \rangle$ satisfying the following properties: \begin{enumerate} \item $u_{t, \alpha} \subseteq \alpha$ \item $|u_{t, \alpha}| < \lambda$ \item $\alpha \in u_{t, \beta} \implies u_{t, \beta} \cap \alpha = u_{t, \alpha}$ \item if $u \subseteq \lambda^{+}$, $|u| < \aleph_{0}$ then $\{t < \lambda : (\exists \alpha)(u \subseteq u_{t, \alpha})\} \in \mathcal{D}$. \end{enumerate} \end{fact} \begin{thm} \label{second main theorem part 2} Suppose $|I| = \lambda$ and $\langle \aleph_{0},\aleph_{1} \rangle \to \langle \lambda, \lambda^{+}\rangle$. 
Suppose $\kappa_{\text{sct}}(T) > |I|^{+}$, $M$ is an $|I|^{++}$-saturated model of $T$, and $\mathcal{D}$ is a regular ultrafilter over $I$. Then $M^{I}/\mathcal{D}$ is not $|I|^{++}$-compact. \end{thm} \begin{proof} Let $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \lambda^{+})$, $(a_{\eta})_{\eta \in \lambda^{<\lambda^{+}}}$ be an sct-pattern. We may assume $l(y_{\alpha}) = k$ for all $\alpha < \lambda^{+}$. Let $\langle u_{t,\alpha} : t < \lambda, \alpha < \lambda^{+} \rangle$ be as given by Fact \ref{square}. We may consider the tree $(\lambda^{+})^{<\lambda}$ as the set of sequences of elements of $\lambda^{+}$ of length $<\lambda$ ordered by extension, and then, for each $t < \lambda$ and $\alpha < \lambda^{+}$, we can define $\eta_{t,\alpha} \in (\lambda^{+})^{<\lambda}$ to be the sequence that enumerates $u_{t,\alpha} \cup \{\alpha\}$ in increasing order. Note that if $\alpha < \beta$, then, because $\alpha \in u_{t,\beta}$ implies $u_{t,\beta} \cap \alpha = u_{t,\alpha}$, we have $\eta_{t,\alpha} \vartriangleleft \eta_{t,\beta} \iff \alpha \in u_{t,\beta}$. For each $\alpha < \lambda^{+}$ we thus have an element $c_{\alpha} \in M^{\lambda}/\mathcal{D}$ given by $c_{\alpha} = \langle c_{\alpha}[t]: t < \lambda \rangle /\mathcal{D}$ where $c_{\alpha}[t] = a_{\eta_{t, \alpha}} \in M$. \textbf{Claim:} $p(x):= \{\varphi_{\alpha}(x;c_{\alpha}) : \alpha < \lambda^{+} \}$ is consistent. \emph{Proof of claim:} Fix any finite $u \subseteq \lambda^{+}$. If for some $t < \lambda$ and $\alpha < \lambda^{+}$ we have $u \subseteq u_{t,\alpha}$, then $\{\eta_{t,\beta} : \beta \in u\} \subseteq \{\eta_{t,\beta} : \beta \in u_{t,\alpha}\}$, which is contained in a path; hence $\{\varphi_{\beta}(x;c_{\beta}[t]) : \beta \in u\} = \{\varphi_{\beta}(x;a_{\eta_{t,\beta}}) : \beta \in u\}$ is consistent by the definition of an sct-pattern.
We know $\{t < \lambda : (\exists \alpha)(u \subseteq u_{t,\alpha})\} \in \mathcal{D}$, so the claim follows by {\L}o\'{s}'s theorem and compactness.\qed Suppose $b = \langle b[t] \rangle_{t \in \lambda} / \mathcal{D}$ is a realization of $p$ in $M^{\lambda}/\mathcal{D}$. For each $\alpha < \lambda^{+}$ define $J_{\alpha} = \{t < \lambda : M \models \varphi_{\alpha}(b[t],c_{\alpha}[t])\} \in \mathcal{D}$. For each $\alpha$, pick $t_{\alpha} \in J_{\alpha}$. The map $\alpha \mapsto t_{\alpha}$ is regressive on the stationary set of $\alpha$ with $\lambda \leq \alpha < \lambda^{+}$. By Fodor's lemma, there is some $t_{*}$ such that the set $S = \{\alpha < \lambda^{+} : t_{\alpha} = t_{*}\}$ is stationary. Therefore $p_{*}(x) = \{\varphi_{\alpha}(x;a_{\eta_{t_{*},\alpha}}) : \alpha \in S\}$ is a consistent partial type in $M$, so $\{\eta_{t_{*},\alpha} : \alpha \in S\}$ is contained in a path, by the definition of an sct-pattern. Choose an $\alpha \in S$ so that $|S \cap \alpha| = \lambda$. Then, by the choice of the $\eta_{t,\alpha}$, we have that $\beta \in S \cap \alpha$ implies $\eta_{t_{*},\beta} \unlhd \eta_{t_{*},\alpha}$ and therefore $\beta \in u_{t_{*},\alpha}$. This shows $|u_{t_{*},\alpha}| \geq \lambda$, a contradiction. \end{proof} \end{document}
\begin{document} \title{Minimal Ramsey graphs for cyclicity} \author{ Damian Reding \and Anusch Taraz } \address{Technische Universit\"at Hamburg, Institut f\"ur Mathematik, Hamburg, Germany} \email{\{damian.reding|taraz\}@tuhh.de} \maketitle \begin{abstract} We study graphs with the property that every edge-colouring admits a monochromatic cycle (the length of which may depend freely on the colouring) and describe those graphs that are minimal with this property. We show that every member of this class reduces recursively to one of the base graphs $K_5-e$ or $K_4\vee K_4$ (two copies of $K_4$ identified at an edge), which implies that an arbitrary $n$-vertex graph with $e(G)\geq 2n-1$ must contain one of those as a minor. We also describe three explicit constructions governing the reverse process. As an application we are able to establish Ramsey infiniteness for each of the three possible chromatic subclasses $\chi=2, 3, 4$, the unboundedness of maximum degree within the class, as well as Ramsey separability of the family of cycles of length $\leq l$ from any of its proper subfamilies. \end{abstract} \linespread{1.3} \section{Introduction and results} By an $r$-\textit{Ramsey graph for} $H$ we mean a graph $G$ with the property that every $r$-edge-colouring of $G$ admits a monochromatic copy of $H$. We focus on the Ramsey graphs that are \textit{minimal} with respect to the subgraph relation, i.e.\ those for which no proper subgraph is a Ramsey graph for $H$. As a consequence of Ramsey's theorem~\cite{RAM} such graphs always exist.
The constructions of minimal Ramsey graphs, their number on a fixed vertex set, their connectivity, as well as the extent of their chromatic number and maximum degree have been investigated by Burr, Erd\H{o}s and Lov\'asz~\cite{BEL}, Ne\v{s}et\v{r}il and R\"odl~\cite{NRC}, Burr, Faudree and Schelp~\cite{BFS}, as well as Burr, Ne\v{s}et\v{r}il and R\"odl~\cite{BNR} and others. More recently, the question of the minimum degree of minimal Ramsey graphs initiated by Burr, Erd\H{o}s and Lov\'asz~\cite{BEL} was picked up again by Fox and Lin~\cite{FL} and Szab\'o, Zumstein and Z\"urcher~\cite{SZZ}. Subsequently, Fox, Grinshpun, Liebenau, Person and Szab\'o~\cite{FGLPS} have employed the parameter in a proof of Ramsey non-equivalence (or separability)~\cite{FGLPS} and also obtained some generalizations to multiple colours~\cite{FGLPSM}. However, a persistent obstacle is that the structure of (minimal) Ramsey graphs for a specific graph $H$ is difficult to characterize, essentially because it requires a practical description of how graphs edge-decompose into $H$-free subgraphs. Indeed, few exact characterizations are known other than some simple ones for stars and collections of such~\cite{BEL}. The obstacle turns out to be a lesser one if $H$ is relaxed to be a graph property. We say that a graph $G$ is an $r$-\textit{Ramsey graph for a graph property $\mathcal{P}$} (which is closed under taking supergraphs) if every $r$-edge-colouring of $G$ admits a monochromatic copy of a member of $\mathcal{P}$. The choice of the member is thus allowed to depend freely on the choice of colouring. We denote that class by $\mathcal{R}_r(\mathcal{P})$ and the subclass of minimal ones by $\mathcal{M}_r(\mathcal{P})\subset\mathcal{R}_r(\mathcal{P})$. Indeed, this is not a far-fetched definition. Results on the corresponding notion of Ramsey numbers for graph properties appear across the literature both in and outside the context of Ramsey theory, e.g.
connectivity~\cite{M}, minimum degree~\cite{KS}, planarity~\cite{B}, the contraction clique number~\cite{T} or, more recently, embeddability in the plane~\cite{FKS}. For a small number of such properties, the minimal order $R_r(\mathcal{P})$ of a Ramsey graph for $\mathcal{P}$ is known exactly, e.g. $R_r(\chi\geq k)=(k-1)^r+1$~\cite{LZ}. Most notable, however, is the characterization of the chromatic Ramsey number of $H$ as the Ramsey number for the graph property $Hom(H)$ by Burr, Erd\H{o}s and Lov\'asz~\cite{BEL}. The notion also connects naturally to classical graph parameters. Indeed, for every number $r\geq 2$ of colours we have that $G\in\mathcal{R}_r(\mathcal{C}_{\text{odd}})$, where $\mathcal{C}_{\text{odd}}$ denotes the property of containing an odd cycle, if and only if $\chi (G)\geq 2^r+1$ (for the \emph{if}-direction, note that if $G\notin\mathcal{R}_r(\mathcal{C}_{\text{odd}})$, then $G$ edge-decomposes into $\leq r$ bipartite graphs, whence a proper $2^r$-colouring of $V(G)$ is given by the $r$-tuples of $0$'s and $1$'s. The \emph{only if}-direction follows by a simple inductive argument on $r\geq 1$). Consequently we have that $G\in\mathcal{M}_r(\mathcal{C}_{\text{odd}})$ if and only if $G$ is minimal subject to $\chi(G)\geq 2^r+1$, so the study of $\mathcal{M}_r(\mathcal{C}_{\text{odd}})$ is precisely the study of the well-known notion of $(2^r+1)$-\emph{critical} graphs. The property we focus on in this paper is the property $\mathcal{C}$ of containing an arbitrary cycle. Indeed we have the following useful characterization of $\mathcal{R}_r(\mathcal{C})$ (and hence of $\mathcal{M}_r(\mathcal{C})$) in terms of local edge-densities of subgraphs.
\begin{prop}\label{prop:NW} For every integer $r\geq 2$, we have that $G\in\mathcal{R}_r(\mathcal{C})$ if and only if $\frac{e(H)-1}{v(H)-1}\geq r$ for some subgraph $H\subseteq G$, and consequently we have that $G\in\mathcal{M}_r(\mathcal{C})$ if and only if both $\frac{e(G)-1}{v(G)-1}=r$ and $\frac{e(H)-1}{v(H)-1}<r$ for every proper subgraph $H\subset G$. \end{prop} Since the graphs in $\mathcal{R}_r(\mathcal{C})$ are precisely those which do not edge-decompose into $r$ forests, one obtains Proposition \ref{prop:NW} as a direct translation of the following well-known theorem. \begin{thm}\label{thm:NW} (Nash-Williams' Arboricity Theorem~\cite{NW}) Every graph $G$ admits an edge-decomposition into $\left\lceil ar(G)\right\rceil$ many forests, where $ar(G):=\max_{J\subseteq G, v_J>1}\frac{e_J}{v_J-1}$. \end{thm} We remark that this is not the first time that Theorem \ref{thm:NW} finds use in graph Ramsey theory; see e.g.~\cite{PR} for an account of how the theorem can be used to establish the relation $ar(G)\geq r\cdot ar(F)$ for every $r$-Ramsey graph $G$ of an arbitrary graph $F$. For the rest of the paper we focus on the case $r=2$ and also write $\mathcal{R}(\mathcal{C}):=\mathcal{R}_2(\mathcal{C})$ and $\mathcal{M}(\mathcal{C}):=\mathcal{M}_2(\mathcal{C})$. Given the aforementioned relation between $\mathcal{M}(\mathcal{C})$ and $5$-critical graphs, the latter of which are completely described (in the language of \emph{constructibility}) by the well-known H\'ajos construction~\cite{HJS} originating in the single base graph $K_5$, one might suspect that a similar reduction to base graphs is possible for $\mathcal{M}(\mathcal{C})$. Indeed, our first result does just that.
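To make the density condition concrete, here is a worked check for the two-colour case (our own illustration, using only Proposition \ref{prop:NW}): for $r=2$ the condition $\frac{e(H)-1}{v(H)-1}\geq 2$ is simply a rewriting of $e(H)\geq 2v(H)-1$. For the graph $K_5-e$, for instance, we have
$$ v(K_5-e)=5,\qquad e(K_5-e)=\binom{5}{2}-1=9,\qquad \frac{e(K_5-e)-1}{v(K_5-e)-1}=\frac{8}{4}=2, $$
while any proper subgraph $H$ on $4$ vertices has $e(H)\leq 6<7=2\cdot 4-1$ and any proper subgraph on $5$ vertices has $e(H)\leq 8<9$ (subgraphs on fewer vertices are even further below the threshold), so $\frac{e(H)-1}{v(H)-1}<2$ throughout. By Proposition \ref{prop:NW}, this verifies $K_5-e\in\mathcal{M}_2(\mathcal{C})$.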
Our two base graphs will be $K_5-e$ and $K_4\vee K_4$, the graph obtained by identifying two copies of $K_4$ at an edge; a quick computation based on Proposition \ref{prop:NW} shows that both are in $\mathcal{M}(\mathcal{C})$. \begin{thm}\label{thm:first} For every $G\in \mathcal{M}(\mathcal{C})$ there exist $n\in\mathbb{N}_0$ and a sequence $G_k$ of minimal Ramsey graphs for $\mathcal{C}$ such that $$\{K_5-e, K_4\vee K_4\}\ni G_0\prec G_1\prec\ldots\prec G_{n}=G,$$ where $\prec$ denotes the minor relation. In fact, for every $k\in [n]$ one can take $G_{k-1}$ to be an arbitrary minimal Ramsey subgraph (for $\mathcal{C}$) of the Ramsey graph (for $\mathcal{C}$) obtained from $G_k$ by contracting an arbitrary edge that belongs to at most one triangle in $G_{k}$. \end{thm} As we shall show, the contraction of an edge that lies in at most one triangle preserves the Ramsey property of a Ramsey graph for $\mathcal{C}$, whence a minimal Ramsey subgraph can be found. The theorem guarantees that continuing the reduction in this way necessarily results in $K_5-e$ or $K_4\vee K_4$. By combining Proposition \ref{prop:NW} with Theorem \ref{thm:first} we therefore obtain: \begin{cor} Every graph $G$ with $e(G)\geq 2v(G)-1$ contains one of $K_5-e$, $K_4\vee K_4$ as a minor. \end{cor} Reinterpreting Theorem \ref{thm:first}, every $G\in\mathcal{M}(\mathcal{C})$ can be obtained by starting with one of the two base graphs and recursively splitting a vertex of a suitable supergraph. A concrete description of the process would result in an algorithm constructing all minimal Ramsey graphs for $\mathcal{C}$. Traditionally, for graphs $H$ such extensions were done by means of \emph{signal senders}, i.e.
non-Ramsey graphs $G$ with two special edges $e$ and $f$ which attain the same (respectively, distinct) colours in every $H$-free colouring; these were then used to establish the infiniteness of $\mathcal{M}(H)$ and much more, see e.g.~\cite{BEL} and~\cite{BNR}. However, it follows from an extension of Theorem \ref{thm:NW} by Reiher and Sauermann~\cite{RS} that no (positive) signal senders for $\mathcal{C}$ can exist: indeed, given a graph $G$ that edge-decomposes into two forests, for any choice of $e$ and $f$ one finds an edge-decomposition with $e$ and $f$ belonging to different colour classes. Instead, one may prove the infiniteness of $\mathcal{M}(\mathcal{C})$ by noting (by an argument similar to that in~\cite{ADOV}) that a $4$-regular graph of girth $g$ (which is known to exist by~\cite{ESA}) must contain a minimal Ramsey graph for cyclicity, in which the monochromatic cycles are of length $\geq g$. Our second result provides a much simpler way to make progress towards this aim by describing three entirely constructive ways to enlarge a graph in $\mathcal{M}(\mathcal{C})$ that allow one to track its structure; note that the first increases the number of vertices by $1$, while the other two increase it by $2$. \begin{thm}\label{thm:second} If $G\in\mathcal{M}(\mathcal{C})$, then also $G^*\in\mathcal{M}(\mathcal{C})$, where $G^*$ is a larger graph obtained from $G$ by applying one of the following three constructions: \begin{enumerate} \item Given a $2$-path $uvw$ in $G$, do the following: Introduce a new vertex $x$. Join $x$ to each of $u, v$ and $w$. Then delete the edge $vw$. \item Given an edge $vw$ in $G$, do the following: Introduce a new vertex $x$. Join $x$ to both $v$ and $w$. Then apply construction (1) to the $2$-path $xvw$. \item Given a $2$-path $uvw$ in $G$, do the following: apply construction (1) to $uvw$ and $wvu$ at the same time, that is: Introduce new vertices $x, y$. Join both $x$ and $y$ to each of $u, v, w$.
Then delete the edges $uv$ and $vw$. \end{enumerate} \end{thm} Note that one has $\chi(G)\leq 4$ for every graph $G\in\mathcal{M}(\mathcal{C})$ or, more generally, $\chi(G)\leq 2r$ for every graph $G\in\mathcal{M}_r(\mathcal{C})$. Indeed, any graph $G\in\mathcal{M}_r(\mathcal{C})$ contains a subgraph $H$ with $\delta(H)\geq\chi(G)-1$, which at the same time satisfies $\delta(H)\leq d(H)\leq\frac{2[r(v(H)-1)+1]}{v(H)}<2r$, where $d(H)$ denotes the average degree of $H$. Our Theorem \ref{thm:second} now implies: \begin{cor}\label{cor:inf} Each of the three partition classes of $\mathcal{M}(\mathcal{C})$ corresponding to chromatic number $\chi = 2, 3, 4$, respectively, consists of infinitely many pairwise non-isomorphic graphs. \end{cor} In fact, since our first two constructions can be seen to preserve planarity, for each of $\chi = 2, 3$ infinitely many of the above graphs can be chosen planar. On the other hand, the smallest bipartite graph $G\in\mathcal{M}(\mathcal{C})$ is already $K_{3, 5}$ (obtained as $K_5-e\longrightarrow (K_{2, 3})^+\longrightarrow (K_{2, 4})^+\longrightarrow K_{3, 5})$. Since $e(G)>2v(G)-4$, any such graph must be non-planar.\\ Note that the fact that $\chi(G)\leq 4$ for $G\in\mathcal{M}(\mathcal{C})$ is much unlike the situation for graphs $G\in\mathcal{M}(H)$ for $H=K_3$ or $H$ $3$-connected, where $\chi(G)$ becomes arbitrarily large (see~\cite{BNR}) and hence so does $\Delta(G)$. Despite the boundedness of $\chi(G)$ we are still able to show: \begin{cor}\label{cor:delta} For every $\Delta\geq 1$ there exists $G\in\mathcal{M}(\mathcal{C})$ with $\Delta (G)\geq\Delta$.
\end{cor} Indeed, Corollary \ref{cor:delta} is a special case of a much more general theorem, which, via an exhaustive application of Theorem \ref{thm:second}, asserts that the structure of $\mathcal{M}(\mathcal{C})$ is actually quite rich. By a \emph{forest of cycles} we refer to a graph $F$ obtained, disregarding isolated vertices, by starting with a cycle and then recursively adjoining a further cycle by identifying at most one of its vertices with a vertex on already existing cycles. Clearly there are forests of cycles of arbitrarily large maximum degree. Note that, since every edge of $F$ belongs to precisely one cycle, we can $2$-edge-colour a forest of cycles $F$ in such a way that every cycle in $F$ is monochromatic, while choosing each cycle's colour independently of that of any other cycle. Call any such colouring \emph{cycle-monochromatic}. \begin{thm}\label{thm:third} For every forest of cycles $F$ and every integer $n\geq 5$ satisfying $n\geq \left|F\right|$ there exists $G\in\mathcal{M}(\mathcal{C})$ with the following properties: \begin{enumerate} \item $\left|G\right|=n$ \item $F$ is a subgraph of $G$ \item Every cycle-monochromatic $2$-edge-colouring of $F$ extends to a $2$-edge-colouring of $G$ in which there are no monochromatic cycles other than those already in $F$. \end{enumerate} \end{thm} Note that the condition $n\geq \left|F\right|$ could be replaced by $n=\left|F\right|$ if the definition of a forest of cycles were relaxed so as to allow isolated vertices, but this variant would somewhat undermine the strength of the statement.
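To illustrate these definitions on a toy example of our own, the following sketch builds a small forest of cycles — two triangles sharing a vertex, with a $4$-cycle attached — checks that every edge lies in precisely one cycle, and enumerates its cycle-monochromatic $2$-edge-colourings (one independent colour choice per cycle, hence $2^3$ colourings here).

```python
from itertools import product

# A forest of cycles F: triangles (0,1,2) and (2,3,4) sharing vertex 2,
# and a 4-cycle (4,5,6,7) attached at vertex 4.
cycles = [(0, 1, 2), (2, 3, 4), (4, 5, 6, 7)]
cycle_edges = [
    [tuple(sorted((c[i], c[(i + 1) % len(c)]))) for i in range(len(c))]
    for c in cycles
]
edges = [e for ce in cycle_edges for e in ce]
# Every edge of F lies in precisely one of its cycles:
assert len(edges) == len(set(edges))

def cycle_monochromatic_colourings(cycle_edges):
    """Choose one colour per cycle, independently; every cycle of F
    is monochromatic in each resulting 2-edge-colouring."""
    for choice in product("rb", repeat=len(cycle_edges)):
        yield {e: col for ce, col in zip(cycle_edges, choice) for e in ce}

colourings = list(cycle_monochromatic_colourings(cycle_edges))
```
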
Since, as is quickly seen, a forest of cycles $F$ on $n$ (non-isolated) vertices contains between $n$ and $\frac{3}{2}(n-1)$ edges, Theorem \ref{thm:third} also guarantees that any such $F$ ($n\geq 5$) extends to some $G\in\mathcal{M}(\mathcal{C})$ with $F$ as a spanning subgraph by adding only $k$ edges, where $\frac{1}{2}(v(F)+1)\leq k\leq v(F)-1$. Finally, we remark on a second corollary of Theorem \ref{thm:third}. \begin{cor}\label{cor:equiv} For all $l \geq 4$ the family $\{C_3,\ldots, C_l\}$ is not Ramsey-equivalent to any proper subfamily of itself; that is, for every proper $\mathcal{F}\subset\{C_3,\ldots, C_l\}$ there exists a (minimal) Ramsey graph for $\{C_3,\ldots, C_l\}$ which is not a Ramsey graph for $\mathcal{F}$. \end{cor} Corollary \ref{cor:equiv} asserts that for every $l\geq 4$ the cycle family $\mathcal{F}:=\{C_3,\ldots, C_l\}$ and any proper subfamily $\mathcal{F}_0$ of $\mathcal{F}$ are Ramsey-separable (or Ramsey non-equivalent). These concepts were introduced in~\cite{SZZ} and subsequently studied in e.g.~\cite{FGLPS},~\cite{ARU} and~\cite{BL}. A central open problem in the area is whether some two distinct graphs are Ramsey-equivalent. The existence of Ramsey graphs for cycles $C_k$ with girth $k$ (which follows from the Random Ramsey Theorem, see also~\cite{HRRS}) settles this question in the case of single cycles and also for cycle families $\mathcal{F}_0$ containing the longest cycle $C_l$ of $\mathcal{F}$. In contrast, Corollary \ref{cor:equiv} constructively provides a supply of separating Ramsey graphs for all proper subfamilies $\mathcal{F}_0$.\\ The organization of the paper is as follows.
In each of the following three sections we provide the proofs of Theorem \ref{thm:first}, Theorem \ref{thm:second} and Theorem \ref{thm:third}, respectively, and subsequently discuss the possibility of some generalizations in the concluding remarks. \section{Proof of Theorem \ref{thm:first}} Our proof of Theorem \ref{thm:first} relies on three lemmas. We first state the most elementary one, which holds for any number of colours. \begin{lem}\label{lem:elem} Every $G\in\mathcal{M}_r(\mathcal{C})$ satisfies $r+1\leq\delta (G)\leq 2r-1$ and is also $2$-connected. \end{lem} \begin{proof} An immediate consequence of Proposition \ref{prop:NW} is that every $G\in\mathcal{M}_r(\mathcal{C})$ has size $e(G)=rv(G)-(r-1)$ and every subgraph $H\subseteq G$ has average degree $d(H)<2r$, which implies the upper bound on $\delta (G)$ (take $H=G$). For the lower bound on $\delta (G)$ suppose that $G$ contains a vertex $v$ of degree at most $r$. Colour the edges incident to $v$ with pairwise distinct colours; since now no monochromatic cycle can pass through $v$, it follows that $G-v$ itself must be Ramsey for $\mathcal{C}$, thus contradicting the minimality of $G$. For connectivity suppose that $G$ can be disconnected by removing at most one vertex, so that $G$ is the union of two proper subgraphs $G_1, G_2$ which have at most one vertex in common. Since removing an edge from $G_1$ destroys the Ramsey property of the whole graph, we can fix an $r$-edge-colouring of $G_2$ without a monochromatic cycle. It follows that $G_1$ itself must be Ramsey for $\mathcal{C}$, again contradicting the minimality of $G$. \end{proof} In the following we assume that $r=2$.
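For the smallest case, Ramseyness for cyclicity can also be checked directly from the definition by enumerating all $2$-edge-colourings. The sketch below (our own sanity check, not part of the proofs; all names are ours) verifies that $K_5-e$ is Ramsey for $\mathcal{C}$ and that deleting any edge destroys this property.

```python
from itertools import combinations, product

def has_cycle(edges):
    """Detect a cycle in an edge set via union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True  # u and v already connected: this edge closes a cycle
        parent[ru] = rv
    return False

def ramsey_for_cyclicity(edges):
    """True iff every red/blue colouring of the edges yields a
    monochromatic cycle."""
    return all(
        has_cycle([e for e, c in zip(edges, col) if c]) or
        has_cycle([e for e, c in zip(edges, col) if not c])
        for col in product((False, True), repeat=len(edges))
    )

K5_minus_e = [e for e in combinations(range(5), 2) if e != (3, 4)]
```
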
The following lemma asserts that the contraction of certain edges preserves the Ramsey property for cyclicity. \begin{lem}\label{contract} If $G\in\mathcal{R}(\mathcal{C})$, then $G/e\in\mathcal{R}(\mathcal{C})$, where $G/e$ is the graph obtained from $G$ by contracting an arbitrary edge $e\in E(G)$ that lies in at most one triangle. \end{lem} \begin{proof} Let $e$ be as above and fix a $2$-edge-colouring of $G/e$.\\ \emph{Case 1.} If $e$ belongs to no triangle in $G$, then a $2$-edge-colouring of $G/e$ induces a $2$-edge-colouring of $G-e$, and any monochromatic cycle in $G-e$ induces a monochromatic cycle in $G/e$. If there is no monochromatic cycle in $G-e$, then, by Ramseyness of $G$, rejoining $e$ produces a monochromatic cycle irrespective of its colour. So $G-e$ must contain both a blue and a red path joining the vertices of $e$. Note that since these are edge-disjoint, at least one of the paths must have length at least $3$; otherwise $e$ would be a chord of a four-cycle and hence lie in two triangles. Hence there is a monochromatic cycle in $G/e$.\\ \emph{Case 2.} If $e$ belongs to one triangle in $G$, then a $2$-edge-colouring of $G/e$ induces a $2$-edge-colouring of $G-e$ in which the other two triangle edges receive the same colour. If $G-e$ has no monochromatic cycle, proceed as above. Suppose $G-e$ has a monochromatic cycle. If it does not use both of the other edges of the triangle containing $e$, then it induces a monochromatic cycle in $G/e$. If the cycle does use both, so that $e$ is a chord of the cycle, then the cycle must have length at least $5$, since $e$ is not a chord of a four-cycle. But then again there is a path of length at least $3$ joining the vertices of $e$. Hence there is a monochromatic cycle in $G/e$. This completes the proof. \end{proof} Consequently, for graphs with every edge in at most one triangle, e.g.\ graphs of girth $\geq 4$, the property of being Ramsey for cyclicity is stable under arbitrary edge-contractions.
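To see Lemma \ref{contract} in action on a concrete instance (an illustration of ours; the labelling is ours), the following sketch starts from the $6$-vertex graph obtained from $K_5-e$ by construction (1) of Theorem \ref{thm:second}, contracts an edge lying in exactly one triangle, and confirms via the density test of Proposition \ref{prop:NW} that the contracted graph — here again $K_5-e$ — is still Ramsey for $\mathcal{C}$.

```python
from itertools import combinations

def contract(edges, e):
    """Contract the edge e = (a, b): merge b into a, discarding the
    loop and any parallel edges that arise."""
    a, b = e
    out = set()
    for u, v in edges:
        u = a if u == b else u
        v = a if v == b else v
        if u != v:
            out.add((min(u, v), max(u, v)))
    return sorted(out)

def is_ramsey_density(edges, r=2):
    """Density test of Proposition prop:NW over induced subgraphs."""
    verts = sorted({v for e in edges for v in e})
    return any(
        sum(1 for u, v in edges if u in S and v in S) - 1 >= r * (len(S) - 1)
        for k in range(2, len(verts) + 1)
        for S in map(set, combinations(verts, k))
    )

# K_5 - e (non-edge {3,4}), then construction (1) applied to the 2-path
# 0-1-2: a new vertex 5 joined to 0, 1, 2 and the edge (1,2) deleted.
G = sorted((set(combinations(range(5), 2)) - {(3, 4), (1, 2)})
           | {(0, 5), (1, 5), (2, 5)})
Gset = set(G)
# The edge (1,5) lies in exactly one triangle of G, namely {0,1,5}:
assert sum(1 for w in range(6) if w not in (1, 5)
           and tuple(sorted((w, 1))) in Gset
           and tuple(sorted((w, 5))) in Gset) == 1
```
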
Note that we could have dealt with Case 2 computationally by invoking Proposition \ref{prop:NW} (thus even obtaining that for $e$ in one triangle the Ramsey graph $G/e$ is minimal whenever $G$ is), but a constructive proof sheds more light on the subject matter. \begin{lem}\label{attached} Any $2$-connected graph $G$ with every edge contained in at least two triangles satisfies $e(G)\geq 2v(G)$, unless $v(G)\leq 6$. \end{lem} \begin{proof} We start with two simple observations:\\ (1) Since every edge of $G$ is a chord of a $4$-cycle, we must have $\delta (G)\geq 3$. Note that w.l.o.g.\ we can assume that equality holds, because if $\delta (G)\geq 4$, then $e(G)\geq 2v(G)$ follows by the Handshaking Lemma. Suppose therefore that there is $v\in G$ with $d(v)=3$.\\ (2) Observe further that every vertex $v\in G$ with $d(v)=3$ necessarily lies in a $K_4$ in $G$. This is because each of the three edges incident to $v$ must be a chord of a $C_4$, which due to $d(v)=3$ is necessarily spanned by the other two.\\ Now fix both a $v\in G$ with $d(v)=3$ and a $K:=K_4\subset G$ with $v\in K$.\\ \emph{Remark.} At this stage it is clear that the two base graphs $K_5-e$ and $K_4\vee K_4$ are the only graphs $G$ with $v(G)<7$, $\delta (G)=3$ and every edge a chord of a $4$-cycle: this is clear when $v(G)=5$, and also when $v(G)=6$, since then $K_4\subset G$ with precisely $5$ more edges to build a further $K_4$ housing the remaining two vertices. (Hence, the two graphs also show that the conclusion of the lemma fails when $v(G)<7$.)\\ Suppose $K$ is \emph{strongly attached} in $G$, that is, some vertex $z$, say, outside of $K$ is adjacent to at least two vertices $u, w$ of $K$. We choose the reduction of $G$ so that $G'$ also satisfies the hypothesis of the lemma with $v(G')=v(G)-1$ and $e(G')\leq e(G)-2$: Since all three neighbours of $v$ lie in $K$, $v$ is not adjacent to $z$; in particular $v\neq u$ and $v\neq w$.
Let $t$ denote the fourth vertex of $K$; it may or may not be adjacent to $z$. Obtain $G'$ from $G$ by deleting $v$ and its three incident edges, and adding the edge $tz$ if it is not already present, so as to ensure that every edge of $G'$ is in at least two triangles. Note that $G'$ remains $2$-connected since clearly none of its vertices is a cutvertex.\\ Else, if $K$ is \emph{weakly attached} in $G$, that is, if every vertex of $G$ outside $K$ is adjacent to at most one vertex of $K$, consider the following. If $K$ does not contract to a cutvertex, then $G':=G/K$ clearly satisfies the hypothesis of the lemma with $v(G')=v(G)-3$ and $e(G')=e(G)-6$. If $K$ does contract to a cutvertex $v$ in $G/K$, let $V_1, \ldots, V_k$ denote the vertex classes of the $k\geq 2$ connected components of $G/K-v$. Note that since $K$ is weakly attached we have $n_i:=\left|V_i\right|\geq 3$ and each of the subgraphs $G_i:=G[V_i\cup V(K)]$ satisfies the hypothesis of the lemma with $n_i+4=\left|V_i\cup V(K)\right|<v(G)$, so by induction we obtain \begin{eqnarray*} e(G) & = & e(G_1)+\ldots + e(G_k)-(k-1)e(K)\geq 2(n_1+4)+\ldots + 2(n_k+4)-6k+6\\ & = & 2(n_1+\ldots +n_k)+8k-6k+6 = 2(v(G)-4)+2k+6\geq 2v(G). \end{eqnarray*} Note that the result now easily follows by induction on $v(G)$, provided it holds true in the cases $v(G)=7, 8, 9$:\\ For the cases $v(G)=8, 9$, consider as before a $K:=K_4\subset G$. If $K$ can be chosen strongly attached, we successfully reduce to the cases $v(G)=7, 8$. If not, then contracting a weakly attached $K$ necessarily results in either $K_5-e$ or $K_4\vee K_4$, with the contraction having occurred at one of its high-degree vertices (else a strongly attached $K_4$ in the reduced graph must have already been strongly attached in $G$).
Since each of the low-degree vertices in the reduced graph is contained in a $K_4$ as well, the same $K_4$'s must have existed in $G$ prior to the contraction of $K$, or else $K$ could not have been weakly attached. Consequently, $K$ intersects one of those $K_4$'s in a cutvertex, thus contradicting $2$-connectedness.\\ The case $v(G)=7$ is more involved, as we cannot reduce it to a smaller graph as in the previous cases: Suppose there exists a $2$-connected graph $G$ on $7$ vertices, with every edge occurring as a chord of a $4$-cycle, which satisfies $e(G)<2v(G)=14$. We now force a contradiction in several steps: Fix a $K:=K_4$ in $G$ and let $v, u_1, u_2$ denote the $3$ vertices of $G$ which are not vertices of $K$. Since $G$ is $2$-connected, at least $2$ vertices of $K$ are incident to edges not in $K$, hence have degree $\geq 4$ in $G$. If any of these vertices has degree $\geq 5$, then the degree sum of $G$ is $\geq 5\cdot 3 + 4+5=24$. If, however, all of these have degree $=4$, then there must be at least $3$ vertices of degree $=4$ (since we cannot have an odd number of odd-degree vertices), in which case the degree sum of $G$ is $\geq 4\cdot 3+3\cdot 4=24$. In any case, $G$ has at least $12$ edges. Hence, as $e(G)\leq 13$, $G$ is obtained from $K\cup\{v, u_1, u_2\}$ by adding $6$ or $7$ edges. Note that since the degrees of $v, u_1, u_2$ are all $\geq 3$, but at most $7$ edges of $G$ are incident to $v, u_1, u_2$ in total, the induced subgraph $H$ of $G$ on the vertices $v, u_1, u_2$ contains at least $2$ edges. W.l.o.g.\ suppose the edges are $u_1 v$ and $vu_2$, and further let $w$ be a vertex of $K$ adjacent to $v$. Note that at this stage there are at most $4$ more edges to add. We claim that $u_1, u_2, v, w$ must form the vertices of a further $K_4$ in $G$. In that case, $G$ is obtained by adding at most one edge to the graph obtained by identifying $K$ with a further copy of $K_4$ at the vertex $w$.
This is a contradiction: if we do not add the edge, $G$ is not $2$-connected, but if we do add the edge, it is not a chord of a $4$-cycle, because its end vertices have only $w$ as a common neighbour. If $d(v)=3$, we are done, because $v$ is then contained in a $K_4$ with the remaining vertices necessarily given by the neighbours $u_1, u_2, w$ of $v$. If $d(v)\geq 4$, note that we must have $d(u_1)=3$ and $d(u_2)=3$. This follows since $2$ of $u_1, u_2, v$ must have degree $3$, for otherwise the number of edges of $G$ incident to $v, u_1, u_2$ would be at least $(3+4+4)-e(H)\geq(3+4+4)-3>7$, a contradiction. Hence, both $u_1$ and $u_2$ must lie in a $K_4$ (containing $v$) in $G$. Note that they must lie in the same $K_4$, otherwise the $K_4$ of $u_1$ and $v$ would take up $\geq 3$ of our remaining edges, thus leaving $\leq 1$ to be incident to $u_2$, in which case $d(u_2)\leq 2$, a contradiction. Hence $u_1, u_2, v$ lie in a $K_4$ in $G$; in particular $u_1$ and $u_2$ are adjacent. This leaves $\leq 3$ edges to build up $G$. Assume, towards the final contradiction, that $w$ is not the fourth vertex of that $K_4$. Then, as $d(u_1)=3$ and $d(u_2)=3$, $w$ cannot be adjacent to $u_1$ or $u_2$. Since, however, the edge $wv$ is a chord of a $4$-cycle, there must be two further vertices in $K$ that are adjacent to $v$. But then there remains at most one further edge to be incident to one of $u_1$ or $u_2$, in which case either $d(u_1)=2$ or $d(u_2)=2$, a contradiction. \end{proof} We are now ready to prove Theorem \ref{thm:first}. \begin{proof} Given $G\in\mathcal{M}(\mathcal{C})$, apply Lemma \ref{contract} to a suitable edge and take a minimal Ramsey subgraph of the resulting Ramsey graph. Repeat this process until ending up with a graph $G_0$ with the property that every edge of $G_0$ is in at least two triangles.
Since $G_0\in\mathcal{M}(\mathcal{C})$, so that $e(G_0)=2v(G_0)-1$, we must have $v(G_0)\leq 6$ by Lemma \ref{attached}. The only such possibilities allowing no further contractions are $K_5-e$ and $K_4\vee K_4$ (the other such graphs on $6$ vertices all reduce to $K_5-e$ as remarked above). \end{proof} \section{Proof of Theorem \ref{thm:second}} We partition Theorem \ref{thm:second} into three lemmas, each governing the effect of the respective construction on a graph in $\mathcal{M}(\mathcal{C})$, and then show how they jointly imply Corollary \ref{cor:inf}. \begin{lem}\label{lem:path} If $G\in\mathcal{M}(\mathcal{C})$, then $G^*\in\mathcal{M}(\mathcal{C})$, where $G^*$ is the graph obtained from $G$ by applying construction (1) to an arbitrary $2$-path in $G$. \end{lem} \begin{proof} The construction increases the number of vertices by $1$ and the number of edges by $2$, so $G^*$ retains the correct global density in order to be in $\mathcal{M}(\mathcal{C})$. Now, let $H^*\subset G^*$ be a proper subgraph and suppose w.l.o.g.\ that it uses the new vertex $x$. Then there exists a proper subgraph $H\subset G$ (obtained from $H^*$ by deleting $x$ and, in case $H^*$ uses all three new edges, adding the edge $vw$) with $e(H^*)\leq e(H)+2$ and $v(H^*)=v(H)+1$, so $$\frac{e(H^*)-1}{v(H^*)-1}\leq\frac{(e(H)+2)-1}{(v(H)+1)-1}=\frac{(e(H)-1)+2}{v(H)}<\frac{2(v(H)-1)+2}{v(H)}=2.$$ \end{proof} Note that Lemma \ref{lem:path} alone provides a constructive proof of the existence of infinitely many non-isomorphic minimal Ramsey graphs for cyclicity.
Indeed, applying this to $K_5-e$ in one of two possible ways (up to isomorphism) results in two further minimal Ramsey graphs on $6$ vertices, one of which is the edge-maximal planar graph with one edge removed. \begin{lem}\label{lem:diam} If $G\in\mathcal{M}(\mathcal{C})$, then $G^{*}\in\mathcal{M}(\mathcal{C})$, where $G^{*}$ is the graph obtained from $G$ by applying construction (2) to an arbitrary edge in $G$. \end{lem} \begin{proof} While Lemma \ref{lem:diam} could be proved similarly to Lemma \ref{lem:path} via Proposition \ref{prop:NW}, it is possible to provide an exhaustive graph-chasing proof, which may be of independent interest as it works in greater generality. Note that the effect of construction (2) is the replacement of an edge by the diamond graph, with the non-adjacent vertices taking the place of the ends of the original edge. We prove the lemma with the diamond replaced by any graph $D$ which admits two non-adjacent \emph{contact vertices} $c, d$ with the property that in any $2$-edge-colouring of $D$ without a monochromatic cycle there is a monochromatic path joining $c$ and $d$ (note that a graph in $\mathcal{M}(\mathcal{C})$ with an edge $cd$ removed already has this property). In particular, we prove the following claim.\\ \textsl{Claim.} \emph{If $G\in\mathcal{R}(\mathcal{C})$, then $G^{*}\in\mathcal{R}(\mathcal{C})$, where the graph $G^{*}$ is obtained from $G$ via parallel composition of $G-e$ with $D$ (that is, with the contact vertices of $D$ taking the place of the ends of $e$). Moreover, if $G\in\mathcal{M}(\mathcal{C})$ and $D$ is edge-minimal with the above property (given fixed contact vertices), then $G^{*}\in\mathcal{M}(\mathcal{C})$ as well.}\\ \textsl{Proof of Claim.} Fix a blue-red colouring of the edges of $G^{*}$. This restricts to a colouring of $G-e$; if this admits a monochromatic cycle, then so does $G^{*}$.
Otherwise, since $G\in\mathcal{R}(\mathcal{C})$, there is both a red and a blue path in $G-e$ joining the contact vertices. One of these forms a monochromatic cycle in $G^{*}$ together with the monochromatic path in $D$, which must exist by definition whenever there is not already a monochromatic cycle in $D$.\\ Now suppose that both $G$ and $D$ are chosen minimal, in which case both clearly have minimum degree at least $2$. Given any edge $f$ of $G^{*}$ (so $f\neq e$), we show that some colouring of $G^{*}-f$ has no monochromatic cycle. If $f$ is an edge of $D$, such a colouring is obtained by fixing both a cycle-free colouring of $G-e$ and a cycle-free colouring of $D-f$ without a monochromatic path joining the contact vertices, and then inserting the coloured $D-f$ into the coloured $G-e$. If $f$ is an edge of $G$, fix both a cycle-free colouring of $G-f$ and a cycle-free colouring of $D$ with precisely one monochromatic path joining the contact vertices. If this path does not have the colour of $e$ in the colouring of $G-f$, switch the colours in $D$. Now remove $e$ from the coloured $G-f$ and insert the coloured $D$. In the colouring of $G^{*}-f$ thus obtained there cannot be a monochromatic cycle. Suppose otherwise; then any monochromatic cycle would need to contain the whole monochromatic path in $D$ (as $G-f-e$ is coloured cycle-free), and since the contact vertices are non-adjacent, they would need to be joined by a path in $G-f-e$ of the colour of the path in $D$, and of length at least $2$. But together with $e$ any such path would form a monochromatic cycle in $G-f$, a contradiction. \end{proof} \begin{lem} If $G\in\mathcal{M}(\mathcal{C})$, then $G^{*}\in\mathcal{M}(\mathcal{C})$, where $G^{*}$ is the graph obtained from $G$ by applying construction (3) to an arbitrary $2$-path in $G$. \end{lem} \begin{proof} Let $G^*$ be the graph obtained from $G\in\mathcal{M}(\mathcal{C})$ by applying construction (3) to some path $uvw$.
Since $e(G^*)=e(G)+4$ and $v(G^{*})=v(G)+2$, we have $G^*\in\mathcal{R}(\mathcal{C})$ by Proposition \ref{prop:NW}. To prove minimality, suppose that an edge $e$ is removed from $G^{*}$. Suppose first that $e\notin E(G)$. In either case — whether $e$ is incident to $u$ or $w$, or incident to $v$ — proceed analogously to the respective case in the proof of the previous lemma. Otherwise, if $e\in E(G)$, put a $2$-colouring on $E(G-e)$ and consider the colours of $uv$ and $vw$. Give the edges $ux, xv$ the colour of $uv$ and $uy$ the other colour. Also, give the edges $vy, yw$ the colour of $vw$ and $xw$ the other colour. If the $2$-colouring of $E(G-e)$ admits no monochromatic cycles, then neither does the $2$-colouring of $E(G^{*}-e)$ so obtained. \end{proof} Finally, we are able to prove Corollary \ref{cor:inf}. \begin{proof} In order to obtain infinitely many graphs $G\in\mathcal{M}(\mathcal{C})$ with $\chi (G)=4$, fix a copy of $K_4$ in $K_5-e$ and let $e$ be an edge not belonging to that copy; now simply replace $e$ by a diamond, then replace an edge of that diamond by a diamond, and so on. In order to obtain infinitely many $G\in\mathcal{M}(\mathcal{C})$ with $\chi (G)=3$, note that replacing every edge of any graph in $\mathcal{M}(\mathcal{C})$ by a diamond results in precisely those graphs required. Finally, in order to obtain infinitely many graphs $G\in\mathcal{M}(\mathcal{C})$ with $\chi (G)=2$, start with $G_0:=K_{3, 5}\in\mathcal{M}(\mathcal{C})$ and repeatedly apply the following extension: apply construction (3) to some path $uvw$ in $G_i$ and let $x, y$ denote the two new vertices. Now apply construction (3) to the path $xvy$, thus producing two further vertices $x', y'$. Note that the resulting graph $G_{i+1}\in\mathcal{M}(\mathcal{C})$ is bipartite: Given a proper $2$-colouring of $V(G_i)$, give $x, y$ the colour of $v$ and $x', y'$ the other colour.
(Alternatively, note that any odd cycle which may arise in the intermediate graph must use one of the edges $xv, yv$ and is thus destroyed in the construction of $G_{i+1}$.) \end{proof} \section{Proof of Theorem \ref{thm:third}} \begin{proof} The proof is by induction on $n\geq 5$ and makes heavy use of constructions (1) and (2) as in Theorem \ref{thm:second}. For $n=5$ the result needs to be verified manually, and indeed $G=K_5-e$ works for all forests of cycles $F$ with $3\leq v(F)\leq 5$. Let $x, y$ denote the non-adjacent vertices of $K_5-e$ and let $a, b, c$ denote the other three. \begin{enumerate} \item If w.l.o.g.\ $F$ is the red-coloured triangle $abc$, colour the edges $ay$ and $cx$ red and the remaining path $a-x-b-y-c$ blue. \item If w.l.o.g.\ $F$ is the red-coloured $4$-cycle $a-b-c-x$, colour the edge $cy$ red and the remaining path $x-b-y-a-c$ blue. \item If w.l.o.g.\ $F$ is a red-coloured $C_5$, colour the remaining $4$-path blue. \item If $F$ is a bowtie and the two triangles are of the same colour, colour the remaining $3$-path with the opposite colour. \item If $F$ is a bowtie and the two triangles are of distinct colours, colour the remaining edges using each colour at least once.
\end{enumerate} The aim in the induction step is to carefully build graphs in $\mathcal{M}(\mathcal{C})$ containing some prescribed forest of cycles from those containing some suitable smaller forest of cycles as provided by the induction hypothesis, while maintaining the possibility of extending the edge-colouring without creating new monochromatic cycles.\\ \textsl{Step 1 (Creating new space).} To begin with, we reduce the proof from $n\geq \left|F\right|$ to $n=\left|F\right|$. Fix $F$ and suppose $G\in\mathcal{M}(\mathcal{C})$ with $v(G)=v(F)$ is as in the statement of the theorem. We want to increase $G$ by one vertex while maintaining the containment of $F$ and the colouring extension property: Pick a vertex $v\in G$ with $d(v)=3$ (such exists since $\delta(G)=3$ by Lemma \ref{lem:elem}). Since $v(G)=v(F)$, such a vertex lies on precisely one cycle $C$ of $F$. Hence it is incident to an edge $vw$ which is not part of $C$ (even though $w$ may be); if $v$ is not in $F$, pick $vw\notin E(F)$, too. Further pick $u\in C$ such that $uv$ is an edge of $C$. Apply (1) to the path $u-v-w$, thus deleting the edge $vw$ and creating a new vertex $x$ adjacent to all of $u, v, w$. Note that by removing the edge $vw$ we have not destroyed any cycle of $F$, since thanks to $d(v)=3$ the edge $vw$ is not an edge of $F$. Now given any $2$-edge-colouring of $G-F$ (or $G-F-vw$, respectively) as in the statement of the theorem, extend it by giving $xu$ and $xw$ arbitrary opposite colours and giving $xv$ the colour opposite to that of $C$. If we have thus created a new monochromatic cycle, it has to pass through $x$, and hence, by choice of colouring, through $v$. This, however, is impossible since $v$ has maintained $d(v)=3$ throughout the construction. For the rest of the proof we can assume that $F$ is a spanning subgraph of the minimal Ramsey graph that contains it.\\ \textsl{Step 2 (Growing new trees).} We show how to extend the result for $F$ to that for $F$ together with a disjoint triangle.
Let $G\in\mathcal{M}(\mathcal{C})$ with $F\subset G$ be as in the statement of the theorem, now without loss of generality with $v(F)=v(G)$. Create new space in $G$ as in Step 1, thus obtaining $G'$ with $v(G')=v(G)+1$ and the colouring property with respect to $F$, and fix the special edge-colouring of $G'-F$. Consider, as in Step 1, the edge $xv$: Replace it by a diamond graph $D$ as in construction (2). Give the remaining, so far uncoloured, triangle in $D$, which is disjoint from $F$, a monochromatic colouring (this triangle is the new tree). If this is the colour of $xv$, give the two edges in $D$ now incident to $v$ distinct colours. If this is not the colour of $xv$, give the two edges in $D$ now incident to $v$ the colour of $xv$.\\ What we have so far achieved is that it suffices to prove the result for spanning trees of cycles. Note that any such can be obtained recursively by (i) starting with a triangle, (ii) enlarging it to the required size (while it is a `leaf' of the tree of cycles), and (iii) creating the required number of branches (that is, pairwise disjoint triangles), repeating the procedure for each of the new branch triangles in turn. To complete the proof it therefore merely suffices to show how to enlarge cycles in $F$ irrespective of their distribution of attached branches, how to create a new triangle at a given vertex of degree $2$ in $F$ (\emph{extending an existing branch}), and finally, how to create a new triangle at a vertex which is already used by more than one triangle (\emph{creating a new branch}).\\ \textsl{Step 3 (Enlarging existing cycles).} Let $C$ be a cycle in $F$ to be enlarged and let $G\in\mathcal{M}(\mathcal{C})$ be as in the statement of the theorem for $F$. Let $u-v-w$ be any $2$-path in $C$. Apply construction (1) as in Theorem \ref{thm:second}, thus producing a new vertex $x$ adjacent to all of $u, v, w$. The cycle $C$ is now enlarged in the resulting graph $G^+$, since $vw$ has been replaced by the $2$-path $v-x-w$.
Any cycle-monochromatic $2$-edge-colouring $c$ of the enlarged forest $F^+$ now induces a cycle-monochromatic $2$-edge-colouring of $F$; pick a respective $2$-edge-colouring of $G-F$ and extend it to a respective colouring of $G^+-F^+$ by giving edge $xu$ the colour opposite to that of $xv$ in $c$.\\ \textsl{Step 4 (Extending existing branches).} Let $F\subset G$ be as before, and suppose that at $v\in F$ with $d(v)=2$ in $F$ a new triangle branch is to be created. Let $vw$ denote an edge not in $F$. Replace it by a diamond $D$, as before, and give the two edges in $D$ incident to $w$ distinct colours. Verifying the colouring property is now analogous to Step 2.\\ \textsl{Step 5 (Creating new branches).} Suppose that $u$ is a vertex of $F\subset G$ which lies in at least two triangles in $F$, and that a further triangle containing $u$ is to be created. Fix one of the triangles, which without loss of generality is a leaf of the tree of cycles, and label its remaining vertices $v$ and $w$. Apply (1) to $u-v-w$, thus destroying(!) one of the already existing triangles by removing edge $vw$, but instead creating the two new triangles $uvx$ and $uwx$, sharing edge $xu$. Now apply (1) again to the path $u-v-x$, thus destroying triangle $uvx$ by removing edge $vx$, but creating the new triangle $uvx'$, which is edge-disjoint from triangle $uwx$, and the extra edge $xx'$. Any cycle-monochromatic $2$-edge-colouring $c$ of the enlarged forest $F^+$ now induces a cycle-monochromatic $2$-edge-colouring of $F$; pick a corresponding special $2$-edge-colouring of $G-F$ and extend it to a special colouring of $G^+-F^+$ by giving edge $xx'$ the colour opposite to that of the triangles $uwx$ and $uvx'$ if these are monochromatic in $c$, and an arbitrary colour otherwise. This completes the proof.
\end{proof} \section{Concluding Remarks} In Theorem \ref{thm:first} we proved that every $G\in\mathcal{M}(\mathcal{C})$ can be obtained from one of two base graphs by recursively splitting a vertex of a suitable supergraph. Any such description would shed light on how to constructively increase the girth while maintaining Ramseyness. This may be regarded as a first step towards the construction of Ramsey graphs for fixed length cycles $C_k$ with girth precisely $k$ (see e.g.~\cite{HRRS}; to the best of our knowledge no explicit construction is known). We therefore raise the weaker question: \begin{question} For any $g\geq 3$, does there exist $G\in\mathcal{M}(\mathcal{C})$ with girth $g$? \end{question} We also note that Lemma \ref{attached} implies that no minimal Ramsey graph for $K_3$ is a minimal Ramsey graph for $\mathcal{C}$ (since in the former every edge is in at least two triangles). It would therefore be interesting to work out what additional conditions on $G\in\mathcal{R}(\mathcal{C})$ ensure that $G\in\mathcal{R}(K_3)$. This might possibly be achieved by approximating the class $\mathcal{R}(K_3)$ by the classes $\mathcal{R}(\mathcal{C}_{\leq l})$ for fixed $l\geq 3$. Constructing graphs which are minimal with this property is probably hard, as removing an edge and taking a good colouring gives rise to highly chromatic high-girth graphs (for which a non-recursive hypergraph-free construction was given only recently~\cite{A}).
Note that, similarly, our remark in the introduction allows for a simple construction for $G\in\mathcal{R}_r(\mathcal{C}_{\text{odd}\leq l})$: just take $\chi (G)\geq 2^r+1$ and $g(G)\geq l$.\\ Another line of study relates to the fact that a $2$-edge-colouring of a Ramsey graph for $K_3$ admits multiple monochromatic copies of $K_3$. As a step in this direction it therefore seems plausible to consider graphs with the approximative property that every $2$-edge-colouring admits either two disjoint monochromatic copies of $K_3$ in the same colour or a monochromatic cycle of length $\geq 4$. It is easy to see by case distinction that $G^{+}$, the graph obtained from some $G\in\mathcal{R}(\mathcal{C})$ by joining a new vertex to every vertex of $G$, has this property.\\ With regard to the existence of multiple monochromatic cycles, we observe that thanks to a known decomposition result into pseudoforests, see e.g.~\cite{PR}, one could in principle work out a theorem similar to ours for graphs for which every $2$-edge-colouring admits a monochromatic connected graph containing at least two cycles. More generally, for $k\geq 1$ set $\mathcal{C}_k:=\{G:\; G\;\text{is connected and contains at least}\; k\;\text{cycles}\}$ and $m_k(G):=\frac{e(G)-1}{v(G)+k-2}$, excluding the trivial graphs. It is then easy to see that if $G$ contains a subgraph $H$ with $m_{k}(H)\geq r$, then $G$ is $r$-Ramsey for $\mathcal{C}_k$, and that if $G$ is minimal $r$-Ramsey for $\mathcal{C}_k$, then $m_k(H)<r$ for every proper subgraph $H\subset G$. Crucial, however, to the characterization of graphs in $\mathcal{M}(\mathcal{C}_k)$ is the validity of the converse, which we do not know for $k\geq 3$. Indeed, with three available cycles allowing for circular arrangements that create new cycles, more complicated configurations may be needed in order for the Ramsey property to be broken by the removal of any single edge.
Instead, it seems more conceivable that the $+k$ in the density parameter is replaced by a larger quantity $f(k)$. To make this precise, for every $k\in\mathbb{N}$ let $f(k)$ denote the smallest natural number, if one exists, with the property that, for every integer $r\geq 1$, any graph $G$ satisfying $e(G)\leq r(v(G)+f(k)-2)$ edge-decomposes into at most $r$ subgraphs containing strictly less than $k$ (not necessarily edge-disjoint) cycles each. Note that $f$ is required to depend on $k$ only. If $f(k)$ exists, then it is given by (the ceiling of) the maximum of $\frac{e(G)}{r_k(G)}-v(G)+2$ taken over all graphs, where $r_k(G)$ denotes the size of a smallest edge-decomposition of $G$ into subgraphs with at most $k-1$ cycles. By the above, we know that $f(1)=1$ and $f(2)=2$. For $k\geq 3$ note that $f(k)\geq k$ holds by considering the chain of $k-1$ triangles with two consecutive ones each identified at a vertex. We observe that for every $k$ the following are then equivalent: \begin{enumerate} \item $f(k):=\max\left\{\frac{e(G)}{r_k(G)}-v(G)+2:\; v(G)\geq 1\right\}<\infty$ \item $\forall r\in\mathbb{N}\backslash\{1\}$: $\mathcal{R}_r(\mathcal{C}_k)=\{G:\;\exists H\subseteq G:\; m_{f(k)}(H)\geq r\}$ \item $\forall r\in\mathbb{N}\backslash\{1\}$: $\mathcal{M}_r(\mathcal{C}_k)=\{G:\; m_{f(k)}(G)=r,\,\forall H\subset G, H\neq G:\; m_{f(k)}(H)< r\}$ \end{enumerate} \begin{question} For any $k\geq 3$, does $f(k)$ exist, that is, is $f(k)<\infty$? If so, what is $f(k)$? \end{question} Finally, we remark that cyclicity and $2$-connectivity are Ramsey equivalent, and also that odd cyclicity and $3$-chromaticity are Ramsey equivalent. Undoubtedly, our results could therefore be generalized to both higher connectivity and chromaticity as well as to multiple colours. We thank Dennis Clemens and Matthias Schacht for helpful comments. \end{document}
\begin{document} \preprint{V.M.} \title{Multiple--Instance Learning: Radon--Nikodym Approach to Distribution Regression Problem. } \author{Vladislav Gennadievich \surname{Malyshkin}} \email{[email protected]} \affiliation{Ioffe Institute, Politekhnicheskaya 26, St Petersburg, 194021, Russia} \date{November, 27, 2015} \begin{abstract} \begin{verbatim} $Id: DistReg1Step.tex,v 1.41 2015/12/02 11:00:50 mal Exp $ \end{verbatim} For the distribution regression problem, where a bag of $x$--observations is mapped to a single $y$ value, a one--step solution is proposed. The problem of random distribution to random value is transformed to random vector to random value by taking the distribution moments of the $x$ observations in a bag as the random vector. Then Radon--Nikodym or least squares theory can be applied, which gives a $y(x)$ estimator. The probability distribution of $y$ is also obtained; this requires solving a generalized eigenvalue problem, where the matrix spectrum (not depending on $x$) gives the possible $y$ outcomes, and the $x$--dependent probabilities of the outcomes are obtained by projecting the distribution with a fixed $x$ value (a delta--function) onto the corresponding eigenvectors. A library providing a numerically stable polynomial basis for these calculations is available, which makes the proposed approach practical. \end{abstract} \keywords{Distribution Regression, Radon--Nikodym} \maketitle \hbox{\small Dedicated to Ira Kudryashova} \section{\label{intro}Introduction} Multiple instance learning\cite{dietterich1997solving} is an important Machine Learning (ML) concept with numerous applications\cite{yang2005review}. In multiple instance learning a class label is associated not with a single observation, but with a ``bag'' of observations. A very close problem is the distribution regression problem, where a sample distribution of $x$ is mapped to a single $y$ value.
Numerous heuristic methods have been developed from both the ML and the distribution regression sides; see \cite{zhou2004multi,szabo2014learning} for a review. As in any ML problem, the most important part is not so much the learning algorithm, but the way the learned knowledge is represented. Learned knowledge is often represented as a set of propositional rules, a regression function, Neural Network weights, etc. In this paper we consider the case where knowledge is represented as a function of distribution moments. Recent progress in the numerical stability of high order moment calculation\cite{2015arXiv151005510G} allows moments of very high order to be calculated (e.g.\ up to hundreds in Ref. \cite{2015arXiv151101887G}), thus making this approach practical. Most distribution regression algorithms deploy a two--step type of algorithm\cite{szabo2014learning} to solve the problem. In our previous work \cite{2015arXiv151107085G} a two--step solution with knowledge representation in the form of a Christoffel function was developed. However, there exists a one--step solution to the distribution regression problem (random distribution to random value): convert each bag's observations to its moments, then solve the problem of random vector (the moments of the random distribution) to random value. Once this transition is made, an answer of least squares or Radon--Nikodym type from Ref. \cite{2015arXiv151005510G} can be applied and a closed form result obtained. The distribution of outcomes, if required, can be obtained by solving a generalized eigenvalue problem; the matrix spectrum then gives the possible $y$ outcomes, and the square of the projection of the bag distribution localized at a given $x$ onto an eigenvector gives the probability of each outcome. This matrix spectrum ideology is similar to the one we used in \cite{2015arXiv151107085G}, but it is more generic and not reducible to a Gauss quadrature.
The paper is organized as follows: in Section \ref{christoffel1Step} the general theory of distribution regression is discussed and closed form results of least squares and Radon--Nikodym type are presented. Then in Section \ref{christoffel1StepNum} an algorithm is described and a numerical example of the calculations is presented. In Section \ref{christoffeldisc} possible further development is discussed. \section{\label{christoffel1Step}One--Step Solution} Consider the distribution regression problem where a bag of $N$ observations of $x$ is mapped to a single outcome observation $y$, for $l=[1..M]$: \begin{eqnarray} (x_1,x_2,\dots,x_j,\dots,x_N)^{(l)}&\to&y^{(l)} \label{regressionproblem} \end{eqnarray} A distribution regression problem can have the goal to estimate $y$, the average of $y$, the distribution of $y$, etc.\ given a specific value of $x$. For further development we need an $x$ basis $Q_k(x)$ and some $x$ and $y$ measures. For simplicity, without reducing the generality of the approach, we are going to assume that the $x$ measure is a sum over the $j$ index, $\sum_{j=1}^{N}$, the $y$ measure is $\sum_{l=1}^{M}$, and the basis functions $Q_k(x)$ are polynomials, $k=0..d_x-1$, where $d_x$ is the number of elements in the $x$ basis; a typical value of $d_x$ is below 10--15. Let us convert the problem ``random distribution to random variable'' to the problem ``vector of random variables to random variable''. The simplest way to obtain a ``vector of random variables'' from the $x_j^{(l)}$ distributions is to take their moments. The $<Q_k>^{(l)}$ then form this random vector: \begin{eqnarray} &&<Q_k>^{(l)}=\sum_{j=1}^{N} Q_k(x_j^{(l)}) \label{xmu} \\ &&\left(<Q_0>^{(l)},\dots,<Q_{d_x-1}>^{(l)}\right) \to y^{(l)} \label{Qregressionproblem} \label{momy} \end{eqnarray} Then (\ref{Qregressionproblem}) becomes a vector to value problem.
Introduce \begin{eqnarray} Y_q&=&\sum_{l=1}^{M} y^{(l)} <Q_q>^{(l)} \label{Yq} \\ \left(G\right)_{qr}&=&\sum_{l=1}^{M} <Q_q>^{(l)} <Q_r>^{(l)} \label{Gramm} \\ \left(yG\right)_{qr}&=&\sum_{l=1}^{M} y^{(l)} <Q_q>^{(l)} <Q_r>^{(l)} \label{GrammY} \end{eqnarray} The problem now is to estimate $y$ (or the distribution of $y$) given an $x$ distribution, now mapped to a vector of moments $<Q_k>$ calculated on this $x$ distribution. Let us denote these input moments by $M_k$ to avoid confusion with the measures on $x$ and $y$. For the case we study the $x$ value is given, and for a state with exact $x$ the $M_k$ values are: \begin{eqnarray} M_k(x)&=& N Q_k(x) \label{Mdist} \end{eqnarray} which means that all $N$ observations in a bag give exactly the same $x$ value. The problem now becomes a standard one: random vector to random variable. We have solutions of two types for this problem, see \cite{2015arXiv151005510G} Appendix D: least squares $A_{LS}$ and Radon--Nikodym $A_{RN}$. The answers are: \begin{eqnarray} A_{LS}(x)&=& \sum\limits_{q,r=0}^{d_x-1} M_q(x) \left(G\right)^{-1}_{qr} Y_r \label{ALS} \\ A_{RN}(x)&=& \frac{\sum\limits_{q,r,s,t=0}^{d_x-1} M_q(x) \left(G\right)^{-1}_{qr} \left(yG\right)_{rs} \left(G\right)^{-1}_{st} M_t(x)} {\sum\limits_{q,r=0}^{d_x-1}M_q(x) \left(G\right)^{-1}_{qr} M_r(x)} \label{ARN} \end{eqnarray} The (\ref{ALS}) is the least squares answer to the $y$ estimation given $x$. The (\ref{ARN}) is the Radon--Nikodym answer to the $y$ estimation given $x$. These are the two $y$ estimators at a given $x$ for the distribution regression problem (\ref{regressionproblem}). These answers can be considered as an extension of the least squares and Radon--Nikodym types of interpolation from the value to value problem to the random distribution to random variable problem. In the case $N=1$ the $A_{LS}$ and $A_{RN}$ reduce exactly to the value to value problem considered in Ref. \cite{2015arXiv151005510G}.
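As an illustration, the estimators (\ref{ALS}) and (\ref{ARN}) can be sketched in a few lines of Python. This is a hypothetical sketch, not the author's Scala implementation \cite{polynomialcode}: a plain monomial basis $Q_k(x)=x^k$ and all function names are our own assumptions, chosen for brevity, whereas for large $d_x$ a numerically stable polynomial basis should be used.

```python
import numpy as np

# Sketch of eqs. (Yq), (Gramm), (GrammY), (ALS), (ARN); the monomial basis
# Q_k(x) = x^k is an assumption made for brevity.

def bag_moments(bags, d_x):
    # <Q_k>^{(l)} = sum_j Q_k(x_j^{(l)}), eq. (xmu); one row per bag
    return np.stack([[np.sum(np.asarray(x) ** k) for k in range(d_x)] for x in bags])

def fit(bags, y, d_x):
    Q = bag_moments(bags, d_x)        # M x d_x matrix of bag moments
    G = Q.T @ Q                       # (G)_{qr}, eq. (Gramm)
    yG = Q.T @ (y[:, None] * Q)       # (yG)_{qr}, eq. (GrammY)
    Yq = Q.T @ y                      # Y_q, eq. (Yq)
    return np.linalg.inv(G), yG, Yq

def M_of_x(x, N, d_x):
    # M_k(x) = N Q_k(x), eq. (Mdist): all N observations exactly at x
    return N * np.array([x ** k for k in range(d_x)], dtype=float)

def A_LS(x, N, Ginv, Yq):
    Mx = M_of_x(x, N, len(Yq))
    return Mx @ Ginv @ Yq             # eq. (ALS)

def A_RN(x, N, Ginv, yG):
    Mx = M_of_x(x, N, yG.shape[0])
    v = Ginv @ Mx
    return (v @ yG @ v) / (Mx @ v)    # eq. (ARN)
```

For noiseless bags concentrated at their centers with $y=x$, $A_{LS}$ reproduces the linear dependence exactly, while $A_{RN}$ with a small $d_x$ gives a sign-preserving approximation.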
Note that the $A_{LS}(x)$ answer does not necessarily preserve the sign of $y$, but $A_{RN}(x)$ always preserves the $y$ sign, the same as in the value to value problem. If the $y$ distribution at a given $x$ needs to be estimated, this problem can also be solved. With the one--step approach of this paper we do not need the $Q_m(y)$ basis used in the two--step approach of Ref. \cite{2015arXiv151107085G}, and the outcomes of $y$ are estimated from the $x$ moments only. A generalized eigenvalue problem\cite{2015arXiv151005510G} gives the answer: \begin{eqnarray} \sum\limits_{r=0}^{d_x-1}\left(yG\right)_{qr} \psi^{(i)}_r &=& y^{(i)} \sum\limits_{r=0}^{d_x-1}\left(G\right)_{qr} \psi^{(i)}_r \label{gevproblem} \end{eqnarray} The result of (\ref{gevproblem}) is the eigenvalues $y^{(i)}$ (possible outcomes) and the eigenvectors $\psi^{(i)}$ (which can be used to compute the probabilities of the outcomes). The problem now becomes: given an $x$ value, estimate the possible $y$--outcomes and their probabilities. The moments of the state with a given $x$ value are $NQ_q(x)$ from (\ref{Mdist}), so the distribution with the moments (\ref{Mdist}) should be projected onto the distributions corresponding to the $\psi^{(i)}_q$ states; the square of this projection gives the weight, and the normalized weight gives the probability. This is actually very similar to the ideology we used in \cite{2015arXiv151107085G}, but the eigenvalues from (\ref{gevproblem}) no longer have the meaning of Gauss quadrature nodes. The eigenvectors $\psi^{(i)}_r$ correspond to the distributions with moments $<Q_q>=\sum_{r=0}^{d_x-1} \left(G\right)_{qr} \psi^{(i)}_r$, and the distribution with such moments corresponds to the $y^{(i)}$ value. These distributions can be considered as a ``natural distribution basis''. This is an important generalization of the approach of Refs. \cite{2015arXiv151005510G,2015arXiv151101887G} to a random distribution, where the natural basis for a random value, not a random distribution, was considered.
The projection of two $x$ distributions with moments $M^{(1)}_k$ and $M^{(2)}_k$ on each other is \begin{eqnarray} <M^{(1)}|M^{(2)}>_{\pi}&=&\sum_{q,r=0}^{d_x-1} M^{(1)}_q \left(G\right)^{-1}_{qr} M^{(2)}_r \label{proj} \end{eqnarray} then the required probabilities, calculated by projecting the distribution (\ref{Mdist}) onto the natural basis states, are: \begin{eqnarray} w^{(i)}(x)&=&\left(\sum_{r=0}^{d_x-1}M_r(x) \psi^{(i)}_r \right)^2 \label{wi} \\ P^{(i)}(x)&=&w^{(i)}(x)/\sum_{r=0}^{d_x-1} w^{(r)}(x) \label{Pi} \end{eqnarray} Equations (\ref{gevproblem}) and (\ref{Pi}) are the one--step answer to the distribution regression problem: find the outcomes $y^{(i)}$ and their probabilities $P^{(i)}(x)$. Note that in this setup the possible outcomes $y^{(i)}$ do not depend on $x$, and only the probabilities $P^{(i)}(x)$ of the outcomes depend on $x$. This is different from the two--step solution of \cite{2015arXiv151107085G}, where the outcomes and their probabilities both depend on $x$. Also note that $\sum_{r=0}^{d_x-1} w^{(r)}(x)=\sum_{q,r=0}^{d_x-1} M_q(x) \left(G\right)^{-1}_{qr} M_r(x)$. One of the major differences between the probabilities (\ref{Pi}) and the probabilities from the Christoffel function approach \cite{2015arXiv151107085G} is that (\ref{Pi}) has the meaning of a ``true'' probability, while in the two--step solution \cite{2015arXiv151107085G} the Christoffel function value is used as a proxy for probability in the first step. It is important to note how the knowledge is represented in these models. The model (\ref{ALS}) has the learned knowledge represented in the $d_x$ by $d_x$ matrix (\ref{Gramm}) and the $d_x$ size vector (\ref{Yq}). The model (\ref{ARN}), as well as the distribution answer (\ref{Pi}), has the learned knowledge represented in the two $d_x$ by $d_x$ matrices (\ref{Gramm}) and (\ref{GrammY}).
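The outcome and probability computation of (\ref{gevproblem}), (\ref{wi}) and (\ref{Pi}) can be sketched numerically as follows. This is again a hypothetical sketch under our own assumptions (monomial basis $Q_k(x)=x^k$, synthetic noiseless bags with $y=x$); the generalized eigenvalue problem is solved with plain NumPy by diagonalizing $G^{-1}\,(yG)$, whose eigenvalues are real because the pencil is symmetric-definite.

```python
import numpy as np

# Sketch of eqs. (gevproblem), (wi), (Pi): the spectrum gives the outcomes
# y^{(i)}, squared projections give the x-dependent probabilities.

N, d_x = 5, 2
xs = np.linspace(-1.0, 1.0, 11)                       # bag centers
Q = np.stack([[np.sum(np.full(N, x) ** k) for k in range(d_x)] for x in xs])
y = xs
G = Q.T @ Q                                           # eq. (Gramm)
yG = Q.T @ (y[:, None] * Q)                           # eq. (GrammY)

# generalized eigenvalue problem (yG) psi = y (G) psi
evals, V = np.linalg.eig(np.linalg.inv(G) @ yG)
order = np.argsort(evals.real)
outcomes, psi = evals.real[order], V.real[:, order]
psi /= np.sqrt(np.einsum('qi,qr,ri->i', psi, G, psi)) # normalize psi^T G psi = 1

def probabilities(x):
    Mx = N * np.array([x ** k for k in range(d_x)])   # M_k(x), eq. (Mdist)
    w = (Mx @ psi) ** 2                               # weights, eq. (wi)
    return w / w.sum()                                # probabilities, eq. (Pi)
```

As the text states, the outcomes do not depend on $x$; only the probabilities do, and for a bag localized near a given $x$ the outcome closest to $y(x)$ receives the dominant probability.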
\section{\label{christoffel1StepNum}Numerical estimation of One--Step Solution} A numerical instability similar to that of the two--step Christoffel function approach \cite{2015arXiv151107085G} also arises for the approach in study, but the situation is now much less problematic, because we do not have the $y$--basis $Q_m(y)$, and all the dependence on $y$ enters the answer through the matrix (\ref{GrammY}). In this case only a stable $x$ basis $Q_k(x)$ is required. The algorithm for the $y$ estimators (\ref{ALS}) or (\ref{ARN}) is this: Calculate the $<Q_k>^{(l)}$ moments from (\ref{xmu}), then calculate the matrices (\ref{Gramm}) and (\ref{GrammY}); if the least squares approximation is required, also calculate the moments (\ref{Yq}). In contrast with the Christoffel function approach, where the $<Q_qQ_r>; q,r=[0..d_x-1]$ matrix can be obtained from the $Q_k ; k=[0..2d_x-1]$ moments by application of the polynomial multiplication operator, here (\ref{Gramm}) and (\ref{GrammY}) can hardly be obtained this way for $N>1$ and should be calculated directly from the sample. This is not a big issue, because $d_x$ is typically not large. Then invert the matrix $\left(G\right)_{qr}$ from (\ref{Gramm}); this matrix is somewhat similar to a Gram matrix, but it uses distribution moments, not basis functions. Finally, put all of these into (\ref{ALS}) for the least squares $y(x)$ estimation or into (\ref{ARN}) for the Radon--Nikodym $y(x)$ estimation. If the $y$--distribution is required, then solve the generalized eigenvalue problem (\ref{gevproblem}), obtain the $y^{(i)}$ as possible $y$--outcomes (they do not depend on $x$), and calculate the $x$--dependent probabilities (\ref{Pi}); these are the squared projection coefficients of a state with a specific $x$ value, the point distribution (\ref{Mdist}), or some other $x$ distribution of general form, onto the $\psi^{(i)}$ eigenvectors. To show an application of this approach, let us take several simple distributions to apply the theory.
Let $\epsilon$ be a uniformly distributed $[-1;1]$ random variable and take $N=1000$ and $M=10000$. Then consider sample distributions built as follows: 1) For $l=[1..M]$ take a random $x$ from the $[-1;1]$ interval. 2) Calculate $y=f(x)$ and take this $y$ as $y^{(l)}$. 3) Build a bag of $x$ observations as $x_j=x+R\epsilon ; j=[1..N]$, where $R$ is a parameter. The following three $f(x)$ functions are used for building the sample distributions: \begin{eqnarray} f(x)&=&x \label{flin} \\ f(x)&=&\frac{1}{1+25x^2} \label{frunge} \\ f(x)&=&\left\{\begin{array}{ll} 0 & x\le 0 \\ 1 & x>0\end{array}\right. \label{fstep} \end{eqnarray} \begin{figure} \caption{\label{fig:flin}} \end{figure} \begin{figure} \caption{\label{fig:frunge}} \end{figure} \begin{figure} \caption{\label{fig:fstep}} \end{figure} In Figs. \ref{fig:flin}, \ref{fig:frunge}, \ref{fig:fstep}, the (\ref{ALS}) and (\ref{ARN}) answers are presented for $f(x)$ from (\ref{flin}), (\ref{frunge}) and (\ref{fstep}) respectively, for $R=\{0.1,0.3\}$ and $d_x=\{10,20\}$. The $x$ range is specially taken slightly wider than the $[-1; 1]$ interval to see a possible divergence outside of the measure support. In most cases the Radon--Nikodym answer is superior, and in addition it preserves the sign of $y$. The least squares approximation is good for the special case $f(x)=x$ and typically diverges at $x$ outside of the measure support. \begin{figure} \caption{\label{fig:Px}} \end{figure} The probability function (the $y^{(i)}$ and $P^{(i)}(x)$) was also estimated numerically; the eigenvalue index $i$ corresponding to the maximal $P$ typically corresponds to the $y^{(i)}$ to which $f(x)$ is closest. For the simplest case (\ref{flin}) see Fig. \ref{fig:Px}. See Ref. \cite{polynomialcode}, file com/polytechnik/algorithms/ExampleDistribution1Stage.scala, for the algorithm implementation.
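The sample construction of steps 1)--3) can be sketched as follows (a hypothetical sketch with smaller $M$ and $N$ than in the text, using the function (\ref{frunge}); the name \texttt{make\_bags} is our own):

```python
import numpy as np

# Sketch of steps 1)-3): per bag, a center x, an outcome y = f(x), and N
# noisy observations x_j = x + R*eps with eps uniform on [-1, 1].

def make_bags(f, M, N, R, rng):
    xs = rng.uniform(-1.0, 1.0, size=M)        # step 1: random x per bag
    ys = f(xs)                                 # step 2: y^{(l)} = f(x)
    eps = rng.uniform(-1.0, 1.0, size=(M, N))  # uniform [-1,1] noise
    bags = xs[:, None] + R * eps               # step 3: x_j = x + R*eps
    return bags, ys

rng = np.random.default_rng(0)
bags, ys = make_bags(lambda x: 1.0 / (1.0 + 25.0 * x**2),
                     M=100, N=50, R=0.1, rng=rng)
```

The resulting `bags` array (one row per bag) can be fed directly into the moment calculation (\ref{xmu}).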
\section{\label{christoffeldisc}Discussion} In this work a one--step approach is applied to the distribution regression problem. The bag's observations are initially converted to moments, then least squares or Radon--Nikodym theory can be applied and a closed form answer obtained. The answers (\ref{ALS}) and (\ref{ARN}) estimate the $y$ value given $x$. This answer can be generalized to ``what is the $y$ estimate given a distribution of $x$''. For this problem, first obtain the moments $<Q_k>$ corresponding to the given distribution of $x$, then use them in (\ref{ALS}) or (\ref{ARN}) instead of $M_k(x)$, which corresponds to the state localized at $x$. Similarly, if the probabilities of the $y$ outcomes are required for a given distribution of $x$, the $<Q_k>$ should be used in the weights expression (\ref{wi}) instead of $M_k(x)$ (this is a special case of the projection (\ref{proj}) of two distributions on each other). Computer code implementing the algorithms is available\cite{polynomialcode}. And in conclusion we want to discuss possible directions of future development. \begin{itemize} \item In this work a closed form solution for the random distribution to random value problem (\ref{regressionproblem}) is found. The question arises about increasing the problem order: replace ``random distribution'' by ``random distribution of random distribution'' (or even further, ``random distribution of random distribution of random distribution'', etc.). In this case each $x_j$ in (\ref{regressionproblem}) should be treated as a sample distribution itself, and the index $j$ can then be treated as a 2D index, $x_{j_1,j_2}$. Working with 2D indexes is actually very similar to working with images; see Ref. \cite{2015arXiv151101887G}, where a 2D index was used for image reconstruction by applying a Radon--Nikodym or least squares approximation. Similarly, the results of this paper can be generalized to higher order problems by considering all indexes as 2D.
\item Obtaining the possible $y$ outcomes as a matrix spectrum (\ref{gevproblem}) and then calculating their probabilities by the projection (\ref{proj}) of a given distribution (the point distribution (\ref{Mdist}) is the simplest example of such) onto the eigenvectors (\ref{Pi}) is a powerful approach to the estimation of the $y$ distribution under a given condition. We can expect this approach to show good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal. The reason is that (\ref{gevproblem}) is expressed in terms of probability states, which makes the role of outliers much less important compared to methods based on the $L^2$ norm, particularly least squares. For example, this approach can be applied to distributions where only the first moment of $y$ is finite, while the $L^2$ norm approaches require the second moment of $y$ to be finite, which makes them inapplicable to distributions with an infinite standard deviation. We expect the (\ref{gevproblem}) approach can be a good foundation for the construction of Robust Statistics\cite{huber2011robust}. \end{itemize} \end{document}
\begin{document} \begin{frontmatter} \title{Gaining power in multiple testing of interval hypotheses via conditionalization} \runtitle{Interval hypotheses} \author{\fnms{Jules L.} \snm{Ellis$^1$} \corref{} } \and \author{\fnms{Jakub} \snm{Pecanka$^2$} } \and \author{\fnms{Jelle} \snm{Goeman$^2$} } \affiliation{Radboud University Nijmegen \thanksmark{m1} and Leiden University Medical Center \thanksmark{m2}} \runauthor{Ellis et al.} \address{1. Radboud University Nijmegen, 2. Leiden University Medical Center} \begin{abstract} In this paper we introduce a novel procedure for improving multiple testing procedures (MTPs) under scenarios in which the null hypothesis $p$-values tend to be stochastically larger than standard uniform (referred to as \emph{inflated}). An important class of problems for which this occurs are tests of interval hypotheses. The new procedure starts with a set of $p$-values and discards those with values above a certain pre-selected threshold, while the rest are corrected (scaled up) by the value of the threshold. Subsequently, a chosen family-wise error rate (FWER) or false discovery rate (FDR) MTP is applied to the set of corrected $p$-values only. We prove the general validity of this procedure under independence of $p$-values, and for the special case of the Bonferroni method we formulate several sufficient conditions for the control of the FWER. It is demonstrated that this `filtering' of $p$-values can yield considerable gains of power under scenarios with inflated null hypothesis $p$-values.
\end{abstract} \begin{keyword}[class=MSC] \kwd[Primary ]{62J15} \kwd[; secondary ]{62G30, 62P15, 62P10} \end{keyword} \begin{keyword} \kwd{conditionalized test} \kwd{false discovery rate} \kwd{familywise error-rate} \kwd{multiple testing} \kwd{one-sided tests} \kwd{uniform conditional stochastic order} \end{keyword} \end{frontmatter} \section{Introduction} Multiple testing procedures (MTPs) generally assume that $p$-values of true null hypotheses follow the standard uniform distribution or are stochastically larger. The latter situation may occur when interval (rather than point) null hypotheses are tested. Under such scenarios the $p$-values are standard uniform typically only in borderline cases, such as when the true value of a parameter is on the edge of the null hypothesis interval. When the true value of the parameter is in the interior of the interval, the $p$-values tend to be stochastically larger than uniform, sometimes dramatically so, with many $p$-values having distributions concentrated near 1. We call such $p$-values \emph{inflated}. There are many practical examples of multiple testing situations with interval null hypotheses with some or all of the true parameter values located (deep) in the interior of the null hypothesis interval. (1.) In test construction according to the nonparametric Item Response Theory (IRT) model of Mokken (1971), one can test whether all item-covariances are nonnegative \citep{Mokken1971, Rosenbaum1984, HR1986, JE1997}. Ordinarily, most item-covariances are substantially greater than zero, with only a few negative exceptions. (2.) In large-scale survey evaluations of public organizations, such as schools or health care organizations, it can be interesting to test whether organizations score lower than a benchmark \citep{Normand2007, Ellis2013}. If many organizations score well above the benchmark, a large number of the $p$-values of true null hypotheses become inflated. (3.)
When a treatment or a drug is known to have a substantial positive treatment effect in a given population, it can be of interest to look for adverse treatment effects in subpopulations. The $p$-values of most null hypotheses again become inflated. Intuitively, if the null $p$-values tend to be stochastically larger than uniform, true and false null hypotheses should be easier to distinguish, making the multiple testing problem easier. However, most MTPs focus the error control on the `worst case' of standard uniformity and thus miss the opportunity to yield more power for inflated $p$-values. Consequently, in the presence of inflated $p$-values the actual error rate can be (much) smaller than the nominal level. Some MTPs actually lose power when null $p$-values become inflated, e.g.\ adaptive FDR methods \citep[e.g.][]{Storey2002} that incorporate an estimate of $\pi_0$, the proportion of null $p$-values \citep[note 9 on p.\ 258]{Fischer2012}. In this paper we propose a procedure which improves existing MTPs in the presence of inflated $p$-values by adding a simple conditionalization step at the onset of the analysis. For an a priori selected threshold $\lambda\in(0,1]$ (e.g.\ $\lambda=0.5$) we remove (i.e.\ do not reject) all hypotheses with $p$-value above $\lambda$. The remaining $p$-values are scaled up by the threshold $\lambda$: $p_i'=p_i/\lambda$. The selected MTP is subsequently performed on the rescaled $p$-values only. We refer to the altered procedure as the \emph{conditionalized} version of the MTP, leading to procedures such as the \emph{conditionalized Bonferroni procedure} (CBP). In terms of power, there are both benefits and costs associated with conditionalization. The costs come from the scaling of the $p$-values by $1/\lambda$, thus effectively increasing their values. If a fixed significance threshold were used, the number of significant $p$-values would decrease.
However, the conditionalization step also tends to increase the significance threshold for each $p$-value by reducing the multiple testing burden (i.e.\ the number of hypotheses corrected for). Crucially, in scenarios with a large portion of substantially inflated $p$-values, the increased significance threshold means that the overall effect of conditionalization is a more powerful procedure. In the remainder of the paper we formally investigate the effects and benefits of conditionalization. We prove that for scenarios with inflated $p$-values conditionalized procedures retain type I error control whenever the $p$-values are independent. Our result applies to the family-wise error rate (FWER), the false discovery rate (FDR), and other error rates. We also show that if the $p$-values are not independent, such control is not automatically guaranteed. We formulate conditions which are sufficient for the control of the FWER by the CBP. We conjecture that the CBP is generally valid for positively correlated $p$-values. Finally, the power of conditionalized procedures is investigated using simulations. \section{Definition of conditionalized tests} We define a multiple testing procedure (MTP) $\mathcal{P}$ as a mapping that transforms any finite vector of $p$-values into an equally long vector of binary decisions. If $\mathcal{P}({p_1},\ldots,{p_m})=({d_1},\ldots,{d_m})$, then ${d_i}$ indicates whether the null hypothesis corresponding to $p_i$ is rejected ($d_i=1$) or not ($d_i=0$). We define a decision rate as the expected value of a function of $\mathcal{P}({p_1},\ldots,{p_m})$. We denote the FWER and FDR of the procedure $\mathcal{P}$ as $\text{$\mathrm{FWER}$}_{\mathcal{P}}$ and $\mathrm{FDR}_{\mathcal{P}}$, respectively.
For $\lambda\in(0,1]$ and an MTP $\mathcal{P}$ we define the corresponding conditionalized MTP $\mathcal{P}^\lambda$ as the MTP that, on input of a vector of $p$-values $({p_1},\ldots,{p_m})$, applies $\mathcal{P}$ to the sub-vector consisting of only the rescaled $p$-values $p_i/\lambda$ with $p_i\leq\lambda$, and that does not reject the null hypotheses of the $p$-values with $p_i > \lambda$. Throughout the paper we always assume that both the level of significance $\alpha$ and the conditionalization factor $\lambda$ are fixed (independently of the data) prior to the analysis. In this paper we pay special attention to the conditionalized Bonferroni procedure (CBP) and its control of the FWER. For $\lambda\in(0,1]$ define $R_m(\lambda)=\sum_{i=1}^m\boldsymbol{1}\{p_i\le\lambda\}$. Let $\mathcal{T}\subseteq\{1,\ldots,m\}$ be the index set of true null hypotheses. The FWER of the CBP is defined as
\begin{align*}
\mathrm{FWER}_{\mathrm{CB}}^{\lambda,\alpha}=P\Big(\bigcup\limits_{i\in\mathcal{T}}\Big[\,p_i<\frac{\alpha\lambda}{R_m(\lambda)\vee 1}\Big]\Big).
\end{align*}
If $\mathrm{FWER}_{\mathrm{CB}}^{\lambda,\alpha}\le\alpha$ for given $\lambda$ and $\alpha$ we say that \emph{the CBP controls the FWER} for those $\lambda$ and $\alpha$. For the sake of simplicity, in the rest of the paper we sometimes suppress one or both arguments and simply write $R_m$, $R(\lambda)$, or even $R$ in place of $R_m(\lambda)$. The proofs of all theorems and lemmas formulated below can be found in the Appendix. \section{FWER and FDR of independent tests} In this section we state our main result: a conditionalized procedure controls the FWER (or FDR) if the non-conditionalized procedure controls the FWER (or FDR), the test statistics are independent, and the marginal distributions satisfy a condition that we call \emph{supra-uniformity}.
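As an illustration, the conditionalization step combined with the Bonferroni correction can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' software; the function and variable names are our own.

```python
import numpy as np

def conditionalized_bonferroni(pvals, alpha=0.05, lam=0.5):
    """Conditionalized Bonferroni procedure (CBP), a minimal sketch.

    Hypotheses with p-value above the threshold `lam` are not rejected;
    the remaining p-values are rescaled by 1/lam, and Bonferroni is
    applied to the rescaled values only.
    """
    p = np.asarray(pvals, dtype=float)
    keep = p <= lam                 # hypotheses surviving the conditionalization step
    r = int(keep.sum())             # R_m(lambda): number of retained p-values
    reject = np.zeros(p.shape, dtype=bool)
    if r > 0:
        # Bonferroni on the rescaled p-values p_i / lam at level alpha,
        # i.e. reject H_i iff p_i < alpha * lam / R_m(lambda)
        reject[keep] = (p[keep] / lam) < alpha / r
    return reject
```

For example, with five $p$-values of which three are inflated, only the two retained $p$-values enter the Bonferroni correction, so the per-hypothesis threshold is $\alpha\lambda/2$ rather than $\alpha/5$.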
\begin{Definition}[supra-uniformity] \label{def:supre-uniform} The distribution of ${p_i}$ is supra-uniform if for all $\lambda,\gamma\in[0,1]$ with $\gamma\leq\lambda$ it holds that $P({p_i} < \gamma\,|\,{p_i}\leq\lambda)\leq\gamma/\lambda$. We say that ${p_i}$ is supra-uniform if its distribution is supra-uniform. \end{Definition} Supra-uniformity is also known as the uniform conditional stochastic order (UCSO) \citep[defined by][]{Whitt1980, Whitt1982, KS1982,Ruschendorf1991} relative to the standard uniform distribution $U(0,1)$. It is well known that this condition is implied if ${p_i}$ dominates $U(0,1)$ in likelihood ratio order \citep[e.g.,][]{Whitt1980, Denuit2005}. \citet{Whitt1980} shows that when the sample space is a subset of the real line and the probability measures have densities, then UCSO is equivalent to the monotone likelihood ratio (MLR) property (i.e.\ for every $y>x$ it holds that $f(y)/g(y)\geq{}f(x)/g(x)$). In the case of $U(0,1)$ (i.e.\ when $g$ is constant) it is immediately clear that MLR is equivalent to the $p$-values having densities that are increasing on $(0,1)$, which is in turn equivalent to having cumulative distribution functions that are convex on $(0,1)$. \begin{theorem} \label{thm:independent} Let $\mathcal{P}$ be an MTP and $D$ be a decision rate (e.g.\ FWER or FDR) such that $D_{\mathcal{P}}\leq\alpha$ for $\alpha\in(0,1)$ whenever the $p$-values of the true hypotheses are independent and supra-uniformly distributed. If the $p$-values of the true hypotheses are independent and supra-uniform, then for the conditionalized MTP $\mathcal{P}^\lambda$ it holds that $D_{\mathcal{P}^\lambda}\le\alpha$. \end{theorem} The basic idea behind the proof of Theorem \ref{thm:independent} (given in the Appendix) is to divide the space of $p$-values into orthants partitioned by the events ${[{p_i}\leq\lambda]}$ versus ${[{p_i}>\lambda]}$ for all $i$.
Conditionally on each of these orthants, the FWER (or FDR) of $\mathcal{P}^\lambda$ is at most $\alpha$. Therefore, the total FWER (or FDR) of $\mathcal{P}^\lambda$ must also be at most $\alpha$. A similar argument is used by \citet{WD1986} in the context of order restricted inference. Many popular multiple testing procedures satisfy the conditions of Theorem \ref{thm:independent}, since they only require the weaker condition $\mathrm{P}(p_i\leq{}c)\leq{}c$ in order to preserve type I error control. Consequently, for independent $p$-values the validity of the conditionalized versions of the methods by \cite{Holm1979}, \cite{Hommel1988}, \cite{Hochberg1988} for FWER control and by \cite{BH1995} for FDR control follows from Theorem \ref{thm:independent}. \section{FWER control by the CBP under dependence: finitely many hypotheses} Generalizing Theorem \ref{thm:independent} to the setting with dependent $p$-values is not trivial. In the next three sections of this paper we focus on the specific case of the CBP, presenting several sufficient conditions for the control of the FWER by the CBP. The overarching theme of these conditions is a requirement that the $p$-values be positively correlated. In light of the several special results obtained below we conjecture that, at least in the multivariate normal model, the CBP controls the FWER whenever the $p$-values are positively correlated. Further justification for our conjecture is the extreme case in which the null $p$-values are all identical, for which the proof of FWER control by the CBP is trivial. \subsection{Negative correlations: a counterexample} To see that the requirement of independence of the $p$-values in Theorem~\ref{thm:independent} cannot simply be dropped, consider a multiple testing problem with $m=2$ where $p_1$ and $p_2$ both have a $U(0,1)$ distribution and $p_1=1-p_2$. Assume $\lambda > 1/2$, since otherwise the CBP is uniformly less powerful than the classical Bonferroni method.
In this setting
\[
\mathrm{FWER}_{\mathrm{CB}}=\mathrm{P}(p_1\leq\lambda\alpha)+\mathrm{P}(p_2\leq\lambda\alpha)=2\lambda\alpha>\alpha.
\]
In other words, under the considered setting the CBP either fails to control the FWER (with $\lambda>1/2$) or is strictly less powerful than the Bonferroni method (with $\lambda\leq1/2$). \subsection{The bivariate normal case} Proposition \ref{thm:bivnorm} below guarantees FWER control by the CBP for all $\alpha,\lambda\in(0,1)$ in the setting with two $p$-values corresponding to two bivariate zero-mean normally distributed test statistics with positive correlation. Denote by $\Phi$ the standard normal distribution function. \begin{Proposition} \label{thm:bivnorm} Let $m=2$ and let $(X_1,X_2)'\sim{}N(0,\Sigma_\rho)$, where $\Sigma_\rho$ is the $2\times 2$ matrix with unit diagonal and off-diagonal entries $\rho$. Set $p_1=1-\Phi(X_1)$ and $p_2=1-\Phi(X_2)$. If $\rho\geq0$, then $\mathrm{FWER}_{\mathrm{CB}}\leq\alpha$. \end{Proposition} \subsection{Distributions satisfying the expectation criterion} In the multivariate setting without independence of the $p$-values a number of conditions can be formulated which guarantee the control of the FWER by the CBP. One such sufficient condition is given in Lemma \ref{lem:expcrit}. \begin{lemma} \label{lem:expcrit} Let the $p$-values $p_1,\ldots,p_m$ have continuous distributions $F_1,\ldots,F_m$ that satisfy $(F_i(y) - F_i(x))\lambda\alpha \leq F_i(\lambda\alpha)(y-x)$ for any $x, y \in (0,\lambda\alpha)$ such that $x < y$, let $P(R_m \le k\,|\,p_i=x)$ be increasing in $x$ for every $k=0,1,\ldots,m$ and $i=1,\ldots,m$, and let
\begin{align}
\label{eq:expcrit}
\sum_{i=1}^{m}{P(p_i\le\lambda\alpha)\,E(R_m^{-1}\,|\,p_i=\lambda\alpha)\le\alpha}.
\end{align}
Then $\mathrm{FWER}_{\mathrm{CB}}\le\alpha$. \end{lemma} We refer to the condition in (\ref{eq:expcrit}) as the \emph{expectation criterion}.
Note that for positively associated $p$-values, small values of $p_i$ often occur together with small values of $R_m^{-1}$; hence for small $\lambda\alpha$, the summand in the expectation criterion tends to be small. The expectation criterion can be used to prove a general result on $p$-values arising from equicorrelated jointly normal test statistics, formulated as Lemma \ref{lem:equic_norm}. Note that if $p_1,\ldots,p_m$ are exchangeable and standard uniform, (\ref{eq:expcrit}) simplifies to $E(R_m^{-1}\mid p_1=\lambda\alpha)\le (\lambda m)^{-1}$. If further $p_1,\ldots,p_m$ are derived from jointly normally distributed test statistics, Lemma \ref{lem:equic_norm} gives a further simplification of the condition. \begin{lemma} \label{lem:equic_norm} Assume that $(\Phi^{-1}(p_1),\ldots,\Phi^{-1}(p_m))'\sim{}N(0,\Sigma_\rho)$, where $\Sigma_\rho$ is the equicorrelation matrix with unit diagonal and off-diagonal entries $\rho$, $0\leq\rho<1$. Let
\begin{equation}
\label{eq:integralcrit}
\int_{-\infty}^{\infty}{\frac{\varphi(x)}{\Phi(\mu-\sqrt{\rho}\,x)}\,dx}\leq\lambda^{-1},
\end{equation}
where $\varphi$ denotes the standard normal density and $\mu=\Phi^{-1}(\lambda)(1-\rho)^{-1/2}-\Phi^{-1}(\lambda\alpha)\rho(1-\rho)^{-1/2}$. Then $\mathrm{FWER}_{\mathrm{CB}}\le\alpha$. \end{lemma} A practical implication of Lemma \ref{lem:equic_norm} is that for a given setting $(\lambda, \alpha, \rho)$ the FWER control by the CBP can be verified numerically by evaluating the one-dimensional integral in (\ref{eq:integralcrit}). The results of our numerical analysis suggest that (\ref{eq:integralcrit}) holds for any $0\leq\rho<1$ and any $0<\lambda\leq1$ whenever $\alpha\leq0.368$.
Moreover, we observed evidence that condition (\ref{eq:integralcrit}) is stricter than necessary: there exist combinations of $\alpha$, $\lambda$ and $\rho$ (e.g.\ $\alpha=0.7$, $\lambda=0.9$, $\rho=0.2$) for which (\ref{eq:integralcrit}) is violated, but simulations indicate that the CBP controls the FWER for all $\alpha$, $\lambda$ and $\rho\geq0$ in the case of $p$-values arising from equicorrelated normals with nonnegative means and unit variances. \subsection{Mixtures} Next we show that the control of the type I error rate by a conditionalized procedure is preserved when distributions are mixed. The result below applies when the family of distributions being mixed is indexed by a one-dimensional parameter; however, it can easily be generalized to more complex families of distributions. \begin{Proposition} \label{lem:mixtures} Let $\mathcal{F}=\{F_w,w\in\mathbb{R}\}$ be a family of distribution functions. Assume that a decision rate $D$ and an MTP $\mathcal{P}$ satisfy $D_{\mathcal{P}}\leq\alpha$ whenever the joint distribution of the $p$-values is in $\mathcal{F}$. For any mixing density $g$, if the $p$-values $p_1,\ldots,p_m$ are distributed according to the mixture $F=\int_{-\infty}^\infty{}F_w g(w)dw$, then $D_{\mathcal{P}}\leq\alpha$. \end{Proposition} Proposition \ref{lem:mixtures} is a general result that applies to many conditionalized procedures. For instance, in the case of the CBP, which controls the FWER whenever the $p$-values are independent and supra-uniform, the proposition guarantees the control of the FWER by the CBP also for mixtures of such distributions. Note that such mixtures may be correlated. \section{FWER control by the CBP under dependence in large testing problems} Finally, we give a sufficient condition for FWER control by the CBP as the number of hypotheses $m$ approaches infinity. Suppose for a moment that the expectation of $R(\lambda)$ (i.e.\ the number of $p$-values below $\lambda$) is known.
In that case one could use the alternative to the CBP that rejects hypothesis $H_i$ whenever $p_i\leq\lambda\alpha/\mathrm{E}[R(\lambda)]$. If the $p$-values are supra-uniform, then under arbitrary dependence this procedure controls the FWER, since
\begin{align*}
\mathrm{FWER}_{\mathrm{CBP'}}&\leq\sum\nolimits_{i\in T} P(p_i<\alpha\lambda/E[R(\lambda)]) \\
&\leq\sum\nolimits_{i\in T} P(p_i\leq\lambda)\,P(p_i<\alpha\lambda/E[R(\lambda)]\mid p_i\leq\lambda) \\
&\leq\sum\nolimits_{i\in T} P(p_i\leq\lambda)\,\alpha/E[R(\lambda)] \leq\alpha.
\end{align*}
This suggests that the CBP should also control the FWER for $m\to\infty$ whenever $R(\lambda)$ is a consistent estimator of $\mathrm{E}[R(\lambda)]$. This heuristic argument is formalized in Proposition \ref{thm:exp_bonf}, where $\mathop{\mathrm{plim}}$ denotes convergence in probability. \begin{Proposition} \label{thm:exp_bonf} Let the $p$-values $p_1,\ldots,p_m$ have supra-uniform distributions and let $\mathop{\mathrm{plim}}_{m\to\infty}\,R/m=\eta$ and $\lim_{m\to\infty}\,E(R/m)=\eta$ for some $\eta\in\mathbb{R}$. Then $\limsup_{m\to\infty}\,\mathrm{FWER}_{\mathrm{CB}}\leq\alpha$. \end{Proposition} An application of Proposition~\ref{thm:exp_bonf} in a situation where correlations between $p$-values vanish as $m\to\infty$ leads to Corollary \ref{corol:asymptotic}. \begin{Corollary} \label{corol:asymptotic} Denote $\rho_{ij}=\mathrm{cor}(\mbf{1}[p_i\leq\lambda],\mbf{1}[{{p}_{j}}\leq\lambda])$ and put $\rho_{ij}^+=\max\{0,\rho_{ij}\}$. Denote the average off-diagonal positive part of the correlations as
\begin{align*}
\bar\rho(m)=\frac{2}{m(m-1)}\sum\limits_{i=1}^{m-1}{\sum\limits_{j=i+1}^{m}{{{\rho}_{ij}^{+}}}}.
\end{align*}
If the $p$-values are supra-uniform, $\lim_{m\to\infty}\,E(R/m)=\eta$ for some $\eta\in\mathbb{R}$, and $\lim_{m\to\infty}\bar\rho(m)=0$, then $\limsup_{m\to\infty}\mathrm{FWER}_{\mathrm{CB}}\leq\alpha$.
\end{Corollary} An example of the usage of Corollary~\ref{corol:asymptotic} for data analysis can be found in Section \ref{manifest}. \section{FWER investigation - simulations} Our conjecture is that the CBP controls the FWER in the case of positively correlated multivariate normal test statistics. To substantiate this, we conducted the following simulations. We generated the $p$-values as $p_i=\Phi(Z_i)$, with the test statistics $(Z_1,\ldots,Z_m)'\sim{}N(0,\Sigma)$ and $\Sigma=(\sigma_{ij})$ with each $\sigma_{ij}\ge 0$ and each $\sigma_{ii}=1$. The correlation matrices were generated in the following way. First, a covariance matrix was generated as $\Sigma = AA^T$, where the entries $a_{ij}$ of $A$ were drawn randomly and independently from a standard normal distribution. If any covariance was negative, the smallest (most negative) covariance was located and the corresponding negative entries of $A$ were set to 0. This was repeated until all covariances were nonnegative. Finally, the covariance matrix was scaled into a correlation matrix. For each $m\in\{1,2,3,4,5,6,7,8,9,10,15,20,25,50,75,100\}$ we generated 100 correlation matrices $\Sigma$, and for each $\Sigma$ we conducted 10,000 simulations and computed the FWER for the CBP with $\alpha\in\{0.05,0.10,\ldots,0.95\}$ and $\lambda\in\{0.1,0.2,\ldots,0.9\}$. There were 6 combinations of $(\Sigma,\alpha,\lambda)$ with simulated FWER slightly above $\alpha$, but none of these differences were significant according to a binomial test with significance level 0.05. For $m\ge 6$ we found no cases with simulated FWER above $\alpha$. In our simulations we also explored several multivariate settings with negative correlations (results not included). These simulations confirmed what was already suggested by the lack of FWER control for negative correlations in the bivariate case, namely that the FWER is not necessarily controlled under negative correlations (especially for small $m$).
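The matrix-generation step described above can be sketched in Python as follows. The repair rule for negative covariances (zeroing the negative entries in the two rows of $A$ that produce the most negative covariance) is one plausible reading of the procedure, and the redraw of a degenerate all-zero row is our own safeguard; both are assumptions rather than the authors' exact implementation.

```python
import numpy as np

def random_nonneg_correlation(m, seed=None):
    """Random correlation matrix with nonnegative entries (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, m))          # Sigma = A A^T, a_ij iid N(0,1)
    while True:
        sigma = A @ A.T
        off = sigma - np.diag(np.diag(sigma))
        if off.min() >= 0.0:                 # all covariances nonnegative: done
            break
        # locate the most negative covariance and the two rows producing it
        i, j = np.unravel_index(np.argmin(off), off.shape)
        for k in (i, j):
            A[k, A[k] < 0] = 0.0             # zero out negative entries (assumption)
            if not A[k].any():               # safeguard: avoid an all-zero row
                A[k] = np.abs(rng.standard_normal(m))
    # scale the covariance matrix into a correlation matrix
    d = 1.0 / np.sqrt(np.diag(sigma))
    return d[:, None] * sigma * d[None, :]
```

Each repair iteration leaves the two affected rows entrywise nonnegative, so the loop terminates after at most $m$ repairs, and the resulting matrix is a positive semidefinite correlation matrix with nonnegative entries.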
\section{Power investigation - simulations} In this section we investigate the power performance of conditionalized tests relative to their non-conditionalized versions through simulations. We consider the following procedures: Bonferroni; {\v S}id{\'a}k \citep[attributed to Tippett by][p.~2433]{Davidov2011}; the Fisher combination method based on the transformation $F=-2\sum_{i=1}^m\log {p_i}$ \citep[see][p.~2433]{Davidov2011}; the likelihood ratio (LR) procedure based on the theory of order restricted statistical inference of \citet{RobertsonWD}, using the chi-bar distribution with binomial weights; the ${I_+}$ statistic, based on the empirical distribution function \citep[p.~2433]{Davidov2011}; and the Bonferroni plug-in procedure as defined by \citet{FG2009} based on the work of \citet{Storey2002}, referred to as the FGS procedure. It should be noted that neither Fisher's method nor the LR and ${I_+}$ procedures provide a decision for each individual hypothesis ${H_i}$. Instead they only allow a conclusion about the intersection hypothesis that all ${H_i}$ are true (i.e.\ the global null hypothesis), which limits their usage. Furthermore, Fisher's and {\v S}id{\'a}k's methods as well as the LR and ${I_+}$ procedures assume independence of the analyzed $p$-values, an assumption that is often violated in practice. \subsection{Power as the number of true hypotheses increases} For the power investigation the $p$-values were generated based on $m$ parallel \textit{z}-tests of null hypotheses of type $H_0:\mu_i\geq0$, each based on a sample of size $n$. The $p$-values were calculated as $p_i=\Phi(X_i)$, with $\mathrm{var}(X_i)=1$ and noncentrality parameter $E(X_i)=\mu_i\sqrt{n}$. To each set of $p$-values we applied the conditionalized and the ordinary versions of the considered testing procedures at the overall significance level of $\alpha=0.05$. Conditionalizing was applied with $\lambda=0.5$.
A number of combinations of the noncentrality parameter, the hypothesis count and the proportion of false hypotheses were considered. For each combination we performed 10,000 replications. Fig.~\ref{fig:figureM} shows the results of a simulation where the number of false hypotheses is fixed while the number of true hypotheses increases. For the false hypotheses the value of the noncentrality parameter was set at $-2$, while for the true null hypotheses it was set at $2$. The figure illustrates that the power decreases rapidly with the number of true hypotheses for most non-conditionalized procedures, the only exception being the LR procedure. In contrast, for all of the considered conditionalized procedures the power decreases much more slowly. This shows that, with the exception of the LR procedure, conditionalization substantially improves the power performance of the considered procedures in this setting. Among the procedures that permit a per-hypothesis decision (i.e.\ Bonferroni, {\v S}id{\'a}k, and FGS) the conditionalized FGS procedure shows the highest power. Fig.~\ref{fig:figureP2} illustrates the influence of conditionalization on the performance of the Bonferroni and FGS methods in a setting where the percentage of true hypotheses increases while the total number of hypotheses remains fixed. The figure shows that the conditionalized FGS procedure is the overall best performing procedure among the four. \subsection{Power in pairwise comparisons of ordered means} \label{ordered} Consider a series of independent sample means $y_i \sim N(\mu_i,\sigma^2/n)$ with the compound hypothesis $H:\mu_1\leq\mu_2\leq\ldots\leq\mu_k$. An analysis method specifically designed for this setting is isotonic regression \citep{RobertsonWD}, although this method does not allow one to deduce which pairs $(\mu_i,\mu_j)$ violate the ordering specified by the null hypothesis.
Alternatively, the $k(k-1)/2$ individual hypotheses $H_{ij}:\mu_i\leq\mu_j$ with $i < j$ can be analyzed using one-sided \textit{t}-tests, and the conditionalized Bonferroni or the conditionalized FGS procedure can be applied. The average correlation between the $p$-values vanishes as $m \to \infty$; thus the asymptotic control of the FWER by the CBP follows by Corollary~\ref{corol:asymptotic}. The simulations below indicate that the FWER is in fact controlled even for small hypothesis counts. The means in this simulation were modeled as $\mu_{i+1}=\mu_i+\delta$ for $i=1,2,\ldots,k-2$, and $\mu_k = \mu_1$. Thus, most means satisfy the ordering of the hypothesis, but the last mean violates it. We used $\sigma = 1$, $n = 10$ and set $\lambda = 0.5$. Fig.~\ref{fig:figurePairs} shows the results for $k = 20$ and $k = 5$. At $\delta = 0$ all four procedures exhibit an $\mathrm{FWER}$ below $\alpha$. For both $k = 5$ and $k = 20$, the two conditionalized procedures perform essentially as well as or better than their non-conditionalized counterparts across the whole range of $\delta \in [0.1, 3]$. Note that the same results would be obtained with, for example, $n = 90$ and $\delta \in [1/30, 1]$. \section{Examples with real data} \subsection{Example 1. Detecting adverse effects in meta-analysis} Suppose the effect of a medical or psychological treatment is investigated in a meta-analysis of $m$ studies in distinct populations, and that a one-sided \textit{t}-test of $H_i^*: \vartheta_i \le 0$ is conducted in each population, yielding $p$-values $p^*_i$, where $\vartheta_i > 0$ indicates a positive, beneficial effect of the treatment in population $i$. In a meta-analysis, one would usually test a weighted average of the effects, say $\bar{\vartheta}=\sum\nolimits_i w_i\vartheta_i$.
However, even if the average effect $\bar{\vartheta}$ is positive, there can be populations in which the local effect $\vartheta_i$ is negative. It would be wise to test for such adverse effects, as the treatment should not be recommended in such populations. This means that one also has to test the opposite hypothesis, $H_i: \vartheta_i \ge 0$, in each population, yielding $p$-values $p_i = 1 - p^*_i$. This is a problem of the form considered in this article. Under the classical \textit{t}-test applied in the context of interval hypothesis testing, each \textit{t}-statistic has a noncentral \textit{t}-distribution with the noncentrality parameter determined by the true value of the expectation $\vartheta_i$. Under the principle of least favorability the $p$-values are obtained using the central \textit{t}-distribution. It is well known that the ratio of densities of \textit{t}-distributions with different noncentrality parameters (and equal degrees of freedom) is monotone \citep{Kruskall1954,Lehmann1955}, from which it follows that they are ordered in likelihood ratio. This implies that supra-uniformity in the sense of Definition \ref{def:supre-uniform} holds \citep{Whitt1980, Whitt1982, Denuit2005}. Consequently, assuming that the $p$-values are independent, it follows by Theorem \ref{thm:independent} that in this setting the CBP and the conditionalized FGS procedure control the FWER, while the conditionalized Benjamini-Hochberg procedure \citep{BH1995,BY2001} controls the FDR. \subsection{Example 2. Detecting substandard organizations in quality benchmarking} Several countries have developed programs in which the quality of public organizations such as schools or hospitals is assessed.
As stated by \citet{Ellis2013}, ``such research can consist of large-scale studies where dozens [3], hundreds [4], or thousands [5, 6] of organizations are compared on one or more measures of performance or quality of care, on the basis of a sample of clients or patients from each organization''. A goal of such programs is to identify under-performing organizations. For example, in the Consumer Quality Index (CQI) program of the Netherlands, the questionnaire used in 2010 to evaluate the short-term ambulatory mental health and addiction care organizations contained a question whether it was a problem to contact the therapist by phone in the evening or during the weekend in case of emergency. Now suppose that a minimum standard of a 90\% satisfaction rate is imposed. Under such a standard, in each organization 90\% or more of the patients should answer that contacting the therapist outside office hours was not a problem. Whether organizations satisfy this minimum standard can be investigated using a binomial test within each organization with a null hypothesis of the type $H_0:\pi\ge0.90$, where $\pi$ denotes the success rate. The advisory statistics team debated the question of whether a multiplicity correction across all organizations is required in such analyses. The arguments against correcting for all organizations were motivated by the expected loss of power associated with such a correction in non-conditionalized MTPs. The advantage of using a conditionalized MTP in this setting is that the presence of organizations that score high above the minimum standard does not exacerbate the severity of the multiple testing problem, so that much of the power is preserved even when many high-performing organizations are included in the analysis. \subsection{Example 3. Testing for manifest monotonicity in IRT}\label{manifest} In Mokken scale analysis it is recommended to test manifest monotonicity \citep{Ark2007}.
With $k$ items to be tested, suppose that the variables $X_1,\ldots,X_k$ indicate correctness of response for the $k$ items (with $X_i=1/0$ indicating a correct/incorrect answer for the $i$-th item). Denote the \textit{rest score} of the $i$-th item as $X_{-i}=(\sum_{j=1}^{k}X_j)-X_i$. A question of interest is whether $\pi_{ij}:=P(X_i=1\,|\,X_{-i}=j)$ is a nondecreasing function of $j$ within each item $i$. This leads to testing the $k(k-1)/2$ pairwise hypotheses $\pi_{ij'}\leq\pi_{ij}$ for $j'<j$ \citep{Ark2007}. In the subtest E of the Raven Progressive Matrices test in the data set reported by \citet{VE2000} we obtained the following result. For item 11, there were 21 pairs of rest score groups that had to be compared (small adjacent groups were joined together by the program). There were 4 violations with a maximum \textit{z}-statistic of 2.33, yielding an unadjusted $p$-value $p = 0.010$. If no multiplicity correction is performed, the probability of false rejection for each item undesirably increases with the number of rest score groups. The classical Bonferroni correction yields the adjusted $p$-value $p'=0.010\times 21=0.21$, while the CBP yields the adjusted $p$-value $p''=0.010\times 4=0.04$. As the number of items increases, the number of pairwise comparisons increases, but the average correlation between the $z$-statistics vanishes. In this situation, Corollary~\ref{corol:asymptotic} implies that the FWER is asymptotically under control, while the simulations of Section \ref{ordered} indicate that FWER control is already achieved with small hypothesis counts. Thus, both the classical Bonferroni correction and the CBP control the FWER, but the CBP yields the smaller $p$-value. \subsection{Example 4. Testing for nonnegative covariances in IRT} In Mokken scale analysis and, more generally, in monotone latent variable models, it is required that the test items have nonnegative covariances with each other \citep{Mokken1971,HR1986}.
Two approaches are possible in item selection with this requirement: one is to retain only items with significantly positive covariances, the other is to delete items with significantly negative covariances. We consider the latter approach here. The distribution of the standardized sample covariances converges to a normal distribution with increasing sample size, which suggests that the CBP might control the FWER in this setting. We investigated this further, both analytically and with simulations. Both approaches suggest that the FWER is indeed under control, but we intend to report the details in a psychometric journal. Here we only briefly consider an example. We applied this procedure to an exam with 78 multiple-choice questions. There were 3003 covariances between items, of which 280 were negative. The smallest unadjusted $p$-value was 0.000243. The Bonferroni corrected $p$-value is 0.73, while the CBP with $\lambda=0.5$ yields a $p$-value of 0.14. \section{Discussion} We have proposed a very simple and general method, called conditionalization, to deal with the presence of inflated $p$-values in multiple testing problems. Such $p$-values often arise in practice, for instance when interval hypotheses are tested. We suggest to discard all hypotheses with $p$-values above a pre-chosen constant $\lambda$ (typically 0.5 or higher), and to divide the remaining $p$-values by $\lambda$ before applying the multiple testing procedure of choice. For independent $p$-values, we have proven that the conditionalized procedure controls the same error rate as the original procedure, provided the null $p$-values are supra-uniform (which holds, in particular, when they dominate the standard uniform distribution in likelihood ratio order).
As a rule of thumb, conditionalized procedures can be expected to be more powerful than their ordinary, non-conditionalized counterparts if there are more true hypotheses with inflated $p$-values (i.e.\ with true parameter values deep inside the null hypothesis) than there are false null hypotheses. The power gain achieved by conditionalizing can be substantial, especially for adaptive procedures that incorporate an estimate of the proportion of true null hypotheses. For the conditionalized Bonferroni procedure (CBP) we conjecture that the procedure is valid whenever the $p$-values are positively correlated, and we have given several sufficient conditions for FWER control by the CBP in this case. We accompanied these results with an extensive simulation study whose results give supporting evidence for our conjecture. We have shown, however, that validity does not extend to negatively correlated $p$-values. A proof of our conjecture still eludes us and thus remains for future research. Other topics that in our opinion deserve further attention are the question of how to optimally choose the value of the cut-off parameter $\lambda$, and whether the procedure is valid when the $p$-values are based on discretely distributed test statistics, since these typically do not fulfill the supra-uniformity condition of Theorem \ref{thm:independent}. We believe that this paper makes a strong case for the usage of conditionalized multiple testing procedures: they mitigate the loss of power typically associated with multiple testing on inflated $p$-values and thus make it more attractive for researchers to formulate their scientific questions in terms of interval hypotheses. Since shifting the focus towards interval hypotheses has been advocated as one of the solutions to the current ``$p$-value controversy'' \citep{Wellek2017}, this makes conditionalization a particularly useful method of analysis.
\appendix \section{Proofs} The following notation is used. For $\mbf{p}=({p_1},\ldots,{p_m})$ and a subset $K \subseteq \{1,\ldots,m\}$, denote by $\mbf{p}_{\!K}$ the subvector of components ${p_i}$ with $i\in{}K$. The index set of true hypotheses is denoted by $\mathcal{T}$, that is, $i \in \mathcal{T} \Leftrightarrow ({H_i}$ is true). For a scalar $\lambda$ we write $\mbf{p}_{\!K}\leq\lambda$ or $\mbf{p}_{\!K}>\lambda$ if these inequalities hold component-wise. ${\mbf{U}}=({U_1},\ldots,{U_m})$ denotes a random vector with independent components that follow $U(0,1)$. \subsection{Independence case} \begin{proof}[Proof of Theorem \ref{thm:independent}] For any set $K \subseteq \mathcal{T}$, let $\bar K = \mathcal{T}\setminus K$, and define the orthant ${G_K} = [{{\mbf{p}}_K}\leq\lambda,{{\mbf{p}}_{\bar K}} > \lambda]$. Conditionally on each ${G_K}$, the modified $p$-values ${p'_i} = {p_i}/\lambda$, $i\in\{1,\ldots,m\}$, are independent, and the distribution of each ${p'_i}$, $i\in\mathcal{T}$, is again supra-uniform. Because $\mathcal{P}$ controls the decision rate under these circumstances, we may conclude that $E(D_{\mathcal{P}^\lambda}|G_K) \leq \alpha$. Consequently, by the law of total expectation, $E(D_{\mathcal{P}^\lambda})\leq\alpha$. \end{proof} \subsection{Bivariate normal case} In this section we formulate three additional lemmas which together immediately imply Proposition \ref{thm:bivnorm}. \begin{lemma} \label{lem.bivariate_PQD} Let $m=2$ and let $p_1,p_2$ be marginally standard uniformly distributed under the null hypothesis. Put $X_1=\Phi^{-1}(1-p_1)$ and $X_2=\Phi^{-1}(1-p_2)$ and let $X_1,X_2$ be positive quadrant dependent. Let $\alpha,\lambda\in(0,1)$ be such that $1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha\;\leq\;\alpha$. Then $\mathrm{FWER}_{\mathrm{CB}}\leq\alpha$.
\end{lemma} \begin{proof} For fixed $\alpha\in(0,1)$ and $\lambda\in(0,1)$ the CBP rejects $H_i$ whenever $p_i\leq\lambda\alpha/R_\lambda$, where $R_\lambda=I\{p_1\leq\lambda\}+I\{p_2\leq\lambda\}$. The corresponding FWER can be written as $\mathrm{FWER}_{cb}=1-P(A^p)+2[P(B^p)-P(C^p)]$, where $A^p=\{p_i\in(\tfrac{\lambda\alpha}{2},1),i=1,2\}$, $B^p=\{p_1\in(\lambda,1),p_2\in(0,\lambda\alpha)\}$ and $C^p=\{p_1\in(\lambda,1),p_2\in(0,\tfrac{\lambda\alpha}{2})\}$. Since $P(C^p)\geq0$, it holds that $\mathrm{FWER}_{cb}\leq1-P(A^p)+2P(B^p)$. Since $A^p$ is an ``on-diagonal'' quadrant, under positive quadrant dependence the probability $P(A^p)$ is \emph{minimized} under independence, where it equals $P^\bot(A^p)=(1-\tfrac{1}{2}\lambda\alpha)^2$. Analogously, $B^p$ is an ``anti-diagonal'' quadrant, which means that $P(B^p)$ is \emph{maximized} under independence, thus $P(B^p)\leq{}P^\bot(B^p)=(1-\lambda)\lambda\alpha$. Consequently, $\mathrm{FWER}_{cb}\leq1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha$. \end{proof} Solving this inequality with respect to $\alpha$ and $\lambda$ yields a set of combinations of $\alpha$ and $\lambda$ for which the CBP controls the FWER under positive dependence. The permissible ranges are depicted in Figure \ref{fig.lambda_ok_rough_QPD}. Note that if $\lambda\leq\tfrac{1}{2}$, then trivially $\mathrm{FWER}_{cb}\leq\alpha$, since in that case $\mathrm{FWER}_{cb}$ is dominated by the FWER of the classical Bonferroni method. The lemma requires that the test statistics are positive quadrant dependent, which under the bivariate normal model with correlation $\rho$ is equivalent to $\rho\geq0$. \begin{lemma} \label{lem.bivariate_normal} Let $m=2$ and let $p_1,p_2$ be marginally standard uniformly distributed under the null hypothesis.
Put $X_1=\Phi^{-1}(1-p_1)$ and $X_2=\Phi^{-1}(1-p_2)$ and let $(X_1,X_2)'\sim{}N(0,\Sigma_\rho)$ under the null hypothesis, where $\Sigma_\rho=\left(\begin{smallmatrix}1&\rho\\\rho&1\end{smallmatrix}\right)$. Let $\alpha,\lambda\in(0,1)$ be such that $\alpha\lambda\leq\frac{2}{3}$. If $\rho\geq0$, then $\mathrm{FWER}_{cb}\leq\alpha$. \end{lemma} \begin{proof} Analogously to the proof of Lemma \ref{lem.bivariate_PQD}, we can write $\mathrm{FWER}_{cb}$ as \begin{align} \label{eqn.Fs} \mathrm{FWER}_{cb}&\;=\;1-P_{\!\!\rho}(A^p)+2[P_{\!\!\rho}(B^p)-P_{\!\!\rho}(C^p)], \end{align} where $A^p=\{p_i\in(\tfrac{\lambda\alpha}{2},1),i=1,2\}$, $B^p=\{p_1\in(\lambda,1),p_2\in(0,\lambda\alpha)\}$ and $C^p=\{p_1\in(\lambda,1),p_2\in(0,\tfrac{\lambda\alpha}{2})\}$, and where the subscript $\rho$ in $P_{\!\!\rho}$ signifies that the probability is a function of $\rho$. We proceed to show that $\mathrm{FWER}_{cb}$ is a decreasing function of $\rho\in[0,1)$ whenever $\alpha\lambda\leq\frac{2}{3}$, which we do by differentiating $P_{\!\!\rho}(A^p)$, $P_{\!\!\rho}(B^p)$ and $P_{\!\!\rho}(C^p)$ with respect to $\rho$. By \citet{Tong90} (page 191) the derivative of the bivariate normal distribution function $F_\rho(x)$ with respect to $\rho$ equals its density at $x$. Consequently, $\tfrac{\partial}{\partial\rho}\,P_{\!\!\rho}(A^p)=f_\rho(z_2,z_2)$, $\tfrac{\partial}{\partial\rho}\,P_{\!\!\rho}(B^p)=-f_\rho(z_0,z_1)$ and $\tfrac{\partial}{\partial\rho}\,P_{\!\!\rho}(C^p)=-f_\rho(z_0,z_2)$, where $f_\rho$ is the density function of the unit-variance bivariate normal distribution \begin{align*} f_\rho(x,y)&\;=\;\tfrac{1}{2\pi}(1-\rho^2)^{-1/2}\,\exp\left(-\tfrac{(x-\mu_1)^2-2\rho{}(x-\mu_1)(y-\mu_2)+(y-\mu_2)^2}{2(1-\rho^2)}\right), \end{align*} and $z_0=\Phi^{-1}(1-\lambda)$, $z_1=\Phi^{-1}(1-\lambda\alpha)$, $z_2=\Phi^{-1}(1-\tfrac{\lambda\alpha}{2})$.
Therefore, $\tfrac{\partial}{\partial\rho}\,\mathrm{FWER}_{cb}=-f_\rho(z_2,z_2)-2[f_\rho(z_0,z_1)-f_\rho(z_0,z_2)]$, which in turn means that $\tfrac{\partial}{\partial\rho}\,\mathrm{FWER}_{cb}\leq0$ whenever \begin{align} \label{eqn.Fs_via_devatives_condition} \frac{f_\rho(z_2,z_2)}{f_\rho(z_0,z_2)}+2\frac{f_\rho(z_0,z_1)}{f_\rho(z_0,z_2)}\;\geq\;2. \end{align} The conditional distribution of $X_2\,|\,X_1=z_0$ is $N(\mu_2+\rho(z_0-\mu_1), 1-\rho^2)$, which has density $g_\rho(x;z_0)=(2\pi(1-\rho^2))^{-1/2}\exp(-\tfrac{1}{2}\,(x-\rho{}z_0-(\mu_2-\rho\mu_1))^2(1-\rho^2)^{-1})$. Consequently, \begin{align*} \frac{f_\rho(z_0,z_1)}{f_\rho(z_0,z_2)}&\;=\;\frac{g_\rho(z_1;z_0)}{g_\rho(z_2;z_0)}\;=\;\exp(-(z_1-\rho{}z_0-\mu_2+\rho\mu_1)^2+(z_2-\rho{}z_0-\mu_2+\rho\mu_1)^2). \end{align*} Similarly, the conditional distribution of $X_1\,|\,X_2=z_2$ is $N(\mu_1+\rho(z_2-\mu_2), 1-\rho^2)$, and therefore \begin{align*} \frac{f_\rho(z_2,z_2)}{f_\rho(z_0,z_2)}&\;=\;\frac{g_\rho(z_2;z_2)}{g_\rho(z_0;z_2)}\;=\;\exp(-(z_2-\rho{}z_2-\mu_1+\rho\mu_2)^2+(z_0-\rho{}z_2-\mu_1+\rho\mu_2)^2). \end{align*} Under the null hypothesis we have $\mu_1=\mu_2=0$. Define \begin{align} \label{eqn.h_rho} h(\rho)&\;=\;\exp(-(z_2-\rho{}z_2)^2+(z_0-\rho{}z_2)^2)+2\exp(-(z_1-\rho{}z_0)^2+(z_2-\rho{}z_0)^2). \end{align} Then (\ref{eqn.Fs_via_devatives_condition}) is equivalent to $h(\rho)\geq2$. Differentiating $h(\rho)$ with respect to $\rho$ yields \begin{align*} h'(\rho) &\;=\;2z_2(z_2-z_0)\exp(-(z_2-\rho{}z_2)^2+(z_0-\rho{}z_2)^2)-4z_0(z_2-z_1)\exp(-(z_1-\rho{}z_0)^2+(z_2-\rho{}z_0)^2). \end{align*} Since $z_2\geq0$, $z_2\geq{}z_1$ and $z_0\leq0$, we have $h'(\rho)\geq0$. In other words, $h(\rho)$ is increasing in $\rho$ and hence minimized at $\rho=0$, where $h(0)=\exp(z_0^2-z_2^2)+2\exp(z_2^2-z_1^2)$. Clearly, whenever $|z_1|\leq|z_2|$ it holds that $h(0)\geq2$, so that $h(\rho)\geq2$ and the inequality (\ref{eqn.Fs_via_devatives_condition}) is satisfied.
Since $\Phi^{-1}$ is strictly monotone and satisfies $\Phi^{-1}(1-q)=-\Phi^{-1}(q)$, finding the largest $\alpha$ for a given $\lambda$ such that $|z_1|\leq|z_2|$ leads to the inequality $\tfrac{1}{2}-(1-\lambda\alpha)\leq1-\tfrac{\lambda\alpha}{2}-\tfrac{1}{2}$, which is equivalent to $\lambda\alpha\leq\tfrac{2}{3}$. \end{proof} As it turns out, Lemmas \ref{lem.bivariate_PQD} and \ref{lem.bivariate_normal} together cover almost all combinations of $\alpha,\lambda\in(0,1)$. In Figure \ref{fig.lambda_ok_rough_QPD} the right plot shows the area (white) which is not covered by the two lemmas. Next we close the ``gap'' left uncovered by the two lemmas. \begin{lemma} \label{lem.bivariate_normal_gap} Let $m=2$ and let $p_1,p_2$ be marginally standard uniformly distributed under the null hypothesis. Put $X_1=\Phi^{-1}(1-p_1)$ and $X_2=\Phi^{-1}(1-p_2)$ and let $(X_1,X_2)'\sim{}N(0,\Sigma_\rho)$ under the null hypothesis, where $\Sigma_\rho=\left(\begin{smallmatrix}1&\rho\\\rho&1\end{smallmatrix}\right)$. Let $\alpha,\lambda\in(0,1)$ be such that $\alpha\lambda\geq\frac{2}{3}$ and $1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha\;\geq\;\alpha$. If $\rho\geq0$, then $\mathrm{FWER}_{cb}\leq\alpha$. \end{lemma} \begin{proof} It can be easily verified that $(\lambda,\alpha)=(\tfrac{2}{3},1)$ and $(\lambda,\alpha)=(\tfrac{3}{4},\tfrac{8}{9})$ are the two points for which the two inequalities in the lemma simultaneously turn into equalities. Moreover, for any $\lambda,\alpha\in(0,1)$ such that $\lambda\alpha\geq0.69$ it holds that $1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha\;\leq\;\alpha$. Consequently, we only need to show that $\mathrm{FWER}_{cb}\leq\alpha$ for $(\alpha,\lambda)\in\Omega$, where $\Omega=\{(\alpha,\lambda)\in\mathbb{R}^2:\alpha\in[\tfrac{8}{9},1),\lambda\in[\tfrac{2}{3},\tfrac{3}{4}],\tfrac{2}{3}\leq\lambda\alpha\leq0.69\}$.
As discussed in the proof of Lemma \ref{lem.bivariate_normal}, it is sufficient to show that $h(\rho)\geq2$, with $h(\rho)$ defined in (\ref{eqn.h_rho}). It can easily be shown that on $\Omega$ we have $0.398<z_2<0.431$ and $-0.496<z_1<-0.43$, that both $z_1$ and $z_2$ are decreasing in $\lambda\alpha$, and that so is $z_2^2-z_1^2$, with its minimum above $-0.087$. Consequently, $2\exp(z_2^2-z_1^2)\geq1.83$, and since it also holds that $\exp(z_0^2-z_2^2)>0.83$, we get $h(0)\geq2$. Since it was already shown in the proof of Lemma \ref{lem.bivariate_normal} that $h(\rho)$ is increasing in $\rho$, this concludes the proof. \end{proof} \begin{proof}[Proof of Proposition 1] The proposition is an immediate corollary of Lemmas \ref{lem.bivariate_PQD}--\ref{lem.bivariate_normal_gap}. \end{proof} \subsection{Expectation criterion} \begin{proof}[Proof of Lemma 4.1: expectation criterion] It is sufficient to consider the case where all tested hypotheses are true, since adding false hypotheses to the test cannot increase the FWER. Divide the interval $(0, \lambda \alpha]$ into intervals $B_k = (b_{k+1}, b_k]$ with $b_k = \lambda \alpha / k$ for $k=1,\ldots,m$, and $b_{m+1}=0$. For each $H_i$, denote by $E_i$ the probability of rejecting $H_i$. It is given by \begin{align*} E_i&\;=\;P(p_i \le \lambda \alpha / (R \lor 1)) \;=\;P(R \le \tfrac{\lambda \alpha}{p_i}, p_i \le \lambda \alpha) \;=\;\sum_{k=1}^{m}{P(R \le \tfrac{\lambda \alpha}{p_i}\,|\,p_i \in B_k)P(p_i \in B_k)}. \end{align*} Since $P(R\le{}k\,|\,p_i=x)$ is assumed to be increasing in $x$, we get \begin{align*} E_i&\;\leq\;\sum_{k=1}^{m}{P(R \le k\,|\,p_i =\lambda \alpha)P(p_i \in B_k)}\\ &\;\leq\;\sum_{k=1}^{m}{P(R \le k\,|\,p_i =\lambda \alpha)(b_k-b_{k+1})\frac{F_i(\lambda \alpha)}{\lambda \alpha}}\\ &\;=\;\sum_{k=1}^{m}{P(R = k\,|\,p_i =\lambda \alpha)\frac{1}{k} F_i(\lambda \alpha)}\\ &\;=\;E(R^{-1}\,|\,p_i =\lambda \alpha)P(p_i \le \lambda \alpha).
\end{align*} Therefore, $\mathrm{FWER}_{cb}\le\sum_{i=1}^{m}{E(R^{-1}\,|\,p_i=\lambda\alpha)P(p_i\le\lambda\alpha)}\leq\alpha$ by the assumptions of the lemma. \end{proof} \subsection{Equicorrelated normal case} For the proof of Lemma 4.2 we need Lemma \ref{lem:binomial}. \begin{lemma} \label{lem:binomial} If $X$ is a random variable with binomial $(n, p)$ distribution, then \begin{align*} E(X+1)^{-1}\;=\;\frac{1-(1-p)^{n+1}}{(n+1)p} \le \frac{1}{(n+1)p}. \end{align*} \end{lemma} \begin{proof} Using $\frac{1}{k+1}\binom{n}{k} = \frac{1}{n+1}\binom{n+1}{k+1}$ we obtain \begin{align*} E(X+1)^{-1}&\;=\;\frac {1}{n+1} \sum_{k=0}^{n}\tbinom{n+1}{k+1}p^k (1-p)^{n-k} \;=\;\frac{1}{(n+1)p} \sum_{k=1}^{n+1}\tbinom{n+1}{k}p^k (1-p)^{n+1-k}, \end{align*} where the last sum is the probability that a binomial random variable with parameters $n+1$ and $p$ is at least $1$, and thus equals $1-(1-p)^{n+1}\le1$. \end{proof} \begin{proof}[Proof of Lemma 4.2: equicorrelated normal case] Note that the $p$-values are standard uniform, thus their distribution functions $F_i$ satisfy the condition $(F_i(y) - F_i(x))\lambda \alpha \leq F_i(\lambda \alpha)(y-x)$ of Lemma 4.1. Moreover, $P(R_m \leq k\,|\,p_i)$ is increasing in $p_i$ by Theorem 4.1 of \citet{KR1980}, since the $p_i$ are multivariate totally positive of order 2 and the function $\phi(p_1, \ldots, p_m):=\boldsymbol{1}[(\sum_{i=1}^m\boldsymbol{1}\{p_i\le\lambda\}) \le k]$ is increasing. It remains to prove that the expectation criterion is satisfied. We may write ${{Z}_{i}}=\sqrt{\rho }\,\Theta +\sqrt{1-\rho }\,\varepsilon_{i}$, where $\Theta,\varepsilon_{1},\ldots,\varepsilon_{m}$ are independent standard normal variables. Define $z=\Phi^{-1}(\lambda\alpha)$ and $\Theta_z=(\Theta - \sqrt{\rho} z)/\sqrt{1-\rho}$. Then $\Theta_z$ has a standard normal distribution conditionally on the event $[Z_i=z]$, and hence on $[p_i=\lambda \alpha]$. Define $X_i = \boldsymbol{1}_{\{p_i \leq \lambda\}}$.
Conditionally on $\Theta_z$, the $X_i$ are independent Bernoulli variables with success probability $P(X_i=1\,|\,\Theta_z)=\Phi(\mu-\sqrt{\rho}\Theta_z)$. Therefore the conditional distribution of $R\,|\,[\Theta_z,p_i=\lambda \alpha]$ is equal to the conditional distribution of $R\,|\,[\Theta_z,X_i=1]$, and Lemma \ref{lem:binomial} yields $E(R^{-1}\,|\,\Theta_z,p_i=\lambda\alpha)\le(m\Phi(\mu-\sqrt{\rho}\Theta_z))^{-1}$. Taking the expectation over $\Theta_z$ and using (2) yields $E(R^{-1}\,|\,p_i=\lambda\alpha)\le(m\lambda)^{-1}$, which in turn implies that the expectation criterion is satisfied. The conclusion then follows by Lemma 4.1. \end{proof} \subsection{Mixtures} \begin{proof}[Proof of Proposition 2: mixtures] By the law of total expectation it follows that $E(D_{\mathcal{P}}) = E(E(D_{\mathcal{P}}\,|\,w)) \leq \alpha$. \end{proof} \subsection{Asymptotic control} \begin{proof}[Proof of Proposition 3: asymptotic case] For a given $\varepsilon'>0$ we prove that, for $m$ sufficiently large, $\mathrm{FWER}\le\alpha+\varepsilon'$. First, note that there is an $\varepsilon>0$ such that $\frac{\eta +\varepsilon}{\eta -\varepsilon }\le 1+\tfrac{1}{2}\varepsilon'$. Moreover, for $m$ large enough, as a consequence of the convergence in probability, it holds that $P(|R/m-\eta|\ge\varepsilon)\le\tfrac{1}{2}\varepsilon'$ and $E(R/m)\le\eta+\varepsilon$.
Then \begin{align*} \mathrm{FWER}&\;=\;P\Big(\textstyle\bigcup\limits_{i=1}^{m}\big\{p_i<\tfrac{\alpha \lambda }{R}\big\}\Big) \\ &\;=\;P\Big(\textstyle\bigcup\limits_{i=1}^{m}\big\{p_i<\tfrac{\alpha\lambda}{R}\big\},\tfrac{R}{m}\ge\eta-\varepsilon\Big) +P\Big(\textstyle\bigcup\limits_{i=1}^{m}\big\{p_i<\tfrac{\alpha\lambda}{R}\big\},\tfrac{R}{m}<\eta-\varepsilon\Big)\\ &\;\le\;P\Big(\textstyle\bigcup\limits_{i=1}^{m}\big\{p_i<\tfrac{\alpha\lambda}{(\eta-\varepsilon )m}\big\}\Big)+P\big(\tfrac{R}{m}<\eta -\varepsilon\big) \\ &\;\le\; \sum\limits_{i=1}^{m}{P(p_i<\tfrac{\alpha \lambda }{(\eta -\varepsilon )m})}+\frac{\varepsilon '}{2} \\ &\;=\;\sum\limits_{i=1}^{m}{P(p_i<\tfrac{\alpha \lambda }{(\eta -\varepsilon )m}\,|\,p_i\le \lambda )}\,P(p_i\le \lambda )+\frac{\varepsilon '}{2} \\ &\;\le\; \sum\limits_{i=1}^{m}{\frac{\alpha }{(\eta -\varepsilon )m}}\,P(p_i\le \lambda )+\frac{\varepsilon '}{2} \\ &\;=\; \frac{\alpha }{\eta -\varepsilon }E(R/m)+\frac{\varepsilon '}{2} \\ &\;\le\; \alpha\frac{\eta +\varepsilon }{\eta -\varepsilon }+\frac{\varepsilon '}{2} \\ &\;\le\; \alpha+\varepsilon'. \end{align*} \end{proof} \begin{proof}[Proof of Corollary 5.1] From \begin{align*} \mathrm{var}(R)&\;=\;\sum\limits_{i=1}^{m}\mathrm{var}(\mathbf{1}[p_i\le\lambda ]) +\sum\limits_{i=1}^{m}\sum\limits_{j=1,j\ne i}^{m}\mathrm{cov}(\mathbf{1}[p_i\le\lambda],\mathbf{1}[p_j\le \lambda ]) \end{align*} it follows that $\mathrm{var}(R/m)\le \frac{1}{4}(\frac{1}{m}+\bar\rho_m)$, where the right-hand side goes to $0$ as $m\to\infty$. The rest follows by Proposition 3. \end{proof} \begin{figure}\label{fig:figureM} \end{figure} \begin{figure}\label{fig:figureP2} \end{figure} \begin{figure}\label{fig:figurePairs} \end{figure} \begin{figure} \caption{Permissible ranges for $\lambda$ given $\alpha$.
On the left, the grey area shows the permissible combinations of $\lambda$ and $\alpha$ based on Lemma \ref{lem.bivariate_PQD}; on the right, the white area shows the combinations not covered by Lemmas \ref{lem.bivariate_PQD} and \ref{lem.bivariate_normal}.\label{fig.lambda_ok_rough_QPD}} \end{figure} \end{document}
\begin{document} \title{Structure-preserving $H^2$ optimal model reduction\\ based on Riemannian trust-region method} \author{Kazuhiro Sato and Hiroyuki Sato \thanks{K. Sato is with the School of Regional Innovation and Social Design Engineering, Kitami Institute of Technology, Hokkaido 090-8507, Japan, email: [email protected]} \thanks{H. Sato is with the Department of Information and Computer Technology, Tokyo University of Science, Tokyo, 125-8585 Japan, email: [email protected]} } \markboth{This paper was published in IEEE Transactions on Automatic Control (DOI: 10.1109/TAC.2017.2723259)} {Sato \MakeLowercase{\textit{and}} Sato: Structure-preserving $H^2$ Optimal Model Reduction} \maketitle \begin{abstract} This paper studies stability and symmetry preserving $H^2$ optimal model reduction problems for linear systems, which include linear gradient systems as a special case. The problem is formulated as a nonlinear optimization problem on the product manifold of the manifold of symmetric positive definite matrices and two Euclidean spaces. To solve the problem by the trust-region method, the gradient and Hessian of the objective function are derived. Furthermore, it is shown that if we restrict our systems to gradient systems, the gradient and Hessian can be obtained more efficiently. More concretely, by symmetry, we can reduce the number of linear matrix equations to be solved. In addition, by a simple example, we show that the solutions to our problem and to a similar problem in the literature are not unique, and that the solution sets of the two problems do not contain each other in general. It is also revealed that the attained optimal values do not coincide. Numerical experiments show that the proposed method gives a reduced system with the same structure as the original system, whereas the balanced truncation method does not. \end{abstract} \begin{IEEEkeywords} $H^2$ optimal model reduction, Riemannian optimization, structure-preserving model reduction.
\end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart{M}{odel} reduction reduces the dimension of the state of a given system to facilitate controller design. The most famous method is the balanced truncation method, which gives a stable reduced order model with guaranteed $H^{\infty}$ error bounds \cite{antoulas2005approximation, moore1981principal}. Another well-known method is the moment matching method \cite{astolfi2010model, ionescu2014families}, which gives a reduced system matching some coefficients of the transfer function of a given linear system. In particular, \cite{ionescu2013moment, scherpen2011balanced} have discussed structure-preserving model reduction methods, which give a reduced model with the same structure as the original system. However, these methods do not guarantee any optimality. In \cite{sato2015riemannian, yan1999approximate}, $H^2$ optimal model reduction problems have been studied. Reference \cite{yan1999approximate} has formulated the problem as an optimization problem for minimizing the $H^2$ norm performance index subject to orthogonality constraints. This constrained optimization problem can be regarded as an unconstrained problem on the Stiefel manifold. To solve the problem, an iterative gradient flow method has been proposed in \cite{yan1999approximate}. Reference \cite{sato2015riemannian} has reformulated the optimization problem on the Stiefel manifold into one on the Grassmann manifold, because the objective function of the $H^2$ optimal model reduction problem is invariant under actions of the orthogonal group. To solve the problem, Riemannian trust-region methods have been proposed in \cite{sato2015riemannian}. The methods proposed in \cite{sato2015riemannian, yan1999approximate} preserve the stability and symmetry properties of the original system. However, they may not give good reduced models, as shown in Section \ref{sec4} of this paper.
To obtain better reduced models, this paper further investigates the stability and symmetry preserving $H^2$ optimal model reduction problem for linear systems which include gradient systems as a special case. In particular, the present paper reformulates the problems in \cite{sato2015riemannian, yan1999approximate} as an optimization problem on the product manifold of the manifold of symmetric positive definite matrices and two Euclidean spaces. To the best of our knowledge, this approach is novel. A global optimal solution in this formulation gives a smaller value of the objective function than that in \cite{sato2015riemannian, yan1999approximate}. Furthermore, the $H^2$ optimal model reduction problem for gradient systems is formulated as another specific optimization problem. The contributions of this paper are as follows.\\ 1) We derive the gradient and Hessian of the objective function of the new optimization problem. By using them, we can apply the Riemannian trust-region method to solve the new problem. Furthermore, it is shown that if we restrict our systems to gradient systems, the gradient and Hessian can be obtained more efficiently. More concretely, by symmetry, we can reduce the number of linear matrix equations to be solved. Some numerical experiments demonstrate that the proposed Riemannian trust-region method gives a reduced system which is sufficiently close to the original system, even when the balanced truncation method and the method in \cite{sato2015riemannian} do not. \\ 2) By a simple example, we show that the solutions to our problem and to the problem in \cite{sato2015riemannian, yan1999approximate} are not unique, and that the solution sets of the two problems do not in general contain each other. It is also revealed that the attained optimal values do not coincide in general. This paper is organized as follows. In Section \ref{sec2}, we formulate the structure preserving $H^2$ optimal model reduction problem on the manifold.
In Section \ref{sec3}, we first review the geometry of the manifold of symmetric positive definite matrices. Next, we derive the Euclidean gradient of the objective function, and then we give the Riemannian gradient and Riemannian Hessian to develop the trust-region method. In Section \ref{sec_Pro2}, the $H^2$ optimal model reduction problem for gradient systems is discussed. In Section \ref{secnew_5}, we study the difference between our problem and the problem in \cite{sato2015riemannian, yan1999approximate}. Section \ref{sec4} presents some numerical experiments that investigate the performance of the proposed method. We demonstrate that the objective function attains a smaller value with the proposed method than with the balanced truncation method and the method in \cite{sato2015riemannian}. Furthermore, the experiments indicate that although the balanced truncation method does not preserve the original structure, the proposed method does. The conclusion is presented in Section \ref{sec5}. {\it Notation:} The sets of real and complex numbers are denoted by ${\bf R}$ and ${\bf C}$, respectively. The identity matrix of size $n$ is denoted by $I_n$. The symbols ${\rm Sym}(n)$ and ${\rm Skew}(n)$ denote the sets of symmetric and skew-symmetric matrices in ${\bf R}^{n\times n}$, respectively. The set of symmetric positive definite matrices in ${\bf R}^{n\times n}$ is denoted by ${\rm Sym}_+(n)$. The symbols $GL(r)$ and $O(r)$ are the general linear group and the orthogonal group of degree $r$, respectively. Given a matrix $A\in {\bf R}^{n\times n}$, ${\rm tr} (A)$ denotes the sum of the diagonal elements of $A$ and ${\rm sym}(A)$ denotes the symmetric part of $A$; i.e., ${\rm sym} (A) = \frac{A+A^T}{2}$. Here, $A^T$ denotes the transpose of $A$. The tangent space at $x$ on a manifold $\mathcal{M}$ is denoted by $T_x \mathcal{M}$.
Given a smooth function $f$ on a manifold $\mathcal{M} \subset {\bf R}^{n_1\times n_2}$, the symbol $\bar{f}$ denotes the extension of $f$ to the ambient Euclidean space ${\bf R}^{n_1\times n_2}$. The symbols $\nabla$ and ${\rm grad}$ denote the Euclidean and Riemannian gradients, respectively; i.e., given a smooth function $f$ on a manifold $\mathcal{M} \subset {\bf R}^{n_1\times n_2}$, $\nabla$ and ${\rm grad}$ act on ${\bar f}$ and $f$, respectively. The symbol ${\rm Hess}$ denotes the Riemannian Hessian. Given a transfer function $G$, $||G||_{H^2}$ denotes the $H^2$ norm of $G$. \section{Problem setup} \label{sec2} We consider the $H^2$ optimal model reduction problem for the linear time-invariant system \begin{align} \begin{cases} \dot{x} = -Ax +Bu, \\ y = Cx, \end{cases} \label{1} \end{align} where $x\in {\bf R}^n$, $u\in {\bf R}^m$, and $y\in {\bf R}^p$ are the state, input, and output, respectively, and where $A\in {\bf R}^{n\times n}$, $B\in {\bf R}^{n\times m}$, and $C\in {\bf R}^{p\times n}$ are constant matrices. Throughout this paper, we assume $A\in {\rm Sym}_{+}(n)$, and thus all the eigenvalues of $-A$ are negative. Hence, the original system has a stable and symmetric state transition matrix. Note that if $m=p$ (i.e., the number of output variables is the same as the number of input variables) and $B^T=C$, then the system \eqref{1} is a linear gradient system \cite{ionescu2013moment, scherpen2011balanced}. Note also that the following discussion fully exploits the symmetry of $A$ and does not apply to systems with a non-symmetric matrix $A$.
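The standing assumptions on \eqref{1} can be checked numerically. The following sketch, with an arbitrarily chosen system (not one from the paper), constructs $A\in{\rm Sym}_+(n)$, takes $B^T=C$ so that \eqref{1} is a linear gradient system, and verifies that all eigenvalues of $-A$ are negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2

# A Gram matrix plus the identity is always symmetric positive definite,
# so A lies in Sym_+(n) by construction (the particular A, B are arbitrary).
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
B = rng.standard_normal((n, m))
C = B.T                          # B^T = C: system (1) is a linear gradient system

eigs_A = np.linalg.eigvalsh(A)   # real, since A is symmetric
print(np.all(eigs_A > 0))        # A is in Sym_+(n) ...
print(np.all(-eigs_A < 0))       # ... hence all eigenvalues of -A are negative
```

Since $A$ is symmetric, its eigenvalues are real, which is exactly the property the symmetry-preserving reduction aims to retain.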
The structure preserving $H^2$ optimal model reduction problem in this paper is to find $A_r\in {\rm Sym}_+(r)$, $B_r\in {\bf R}^{r\times m}$, and $C_r\in {\bf R}^{p\times r}$ for a fixed integer $r$ $(<n)$ such that the associated reduced system \begin{align} \begin{cases} \dot{x}_r = -A_rx_r +B_ru, \\ y_r = C_rx_r \end{cases} \label{2} \end{align} best approximates the original system \eqref{1}, in the sense that the $H^2$ norm of the transfer function of the error system between the original system \eqref{1} and the reduced system \eqref{2} is minimized. The stability and symmetry of the state transition matrix are preserved because the reduced matrices $A_r$, $B_r$, and $C_r$ have the same structures as the original matrices $A$, $B$, and $C$, respectively. Note that the symmetry preservation is significant because the symmetry implies that no oscillations occur when $u=0$. This is because all the eigenvalues of a symmetric matrix are real. If the state transition matrix of the reduced system \eqref{2} is not symmetric, some oscillations may be observed under $u=0$, in contrast to the case of the original system \eqref{1}. The optimization problem to be solved is stated as follows. \begin{framed} Problem 1: \begin{align*} &{\rm minimize} \quad J(A_r,B_r,C_r), \\ &{\rm subject\, to} \quad (A_r,B_r,C_r)\in M. \end{align*} \end{framed} \noindent Here, \begin{align} \label{eq_J} J(A_r,B_r,C_r):=||G-G_r||_{H^2}^2, \end{align} where \begin{align*} G(s) := C(sI_n +A)^{-1}B, \quad s\in {\bf C} \end{align*} is the transfer function of the original system \eqref{1}, $G_r$ is the transfer function of the reduced system \eqref{2}, and \begin{align*} M:= {\rm Sym}_+(r) \times {\bf R}^{r\times m} \times {\bf R}^{p\times r}.
\end{align*} Since all the eigenvalues of $-A$ and $-A_r$ are negative, the objective function $J(A_r,B_r,C_r)$ can be expressed as \begin{align*} J(A_r,B_r,C_r) &= {\rm tr}( C\Sigma_c C^T +C_rPC_r^T-2C_rX^TC^T) \\ &= {\rm tr} ( B^T\Sigma_o B +B_r^TQB_r + 2B^TYB_r), \end{align*} where the matrices $\Sigma_c$, $\Sigma_o$, $P$, $Q$, $X$, and $Y$ are the solutions to \begin{align} A\Sigma_c +\Sigma_cA -BB^T &= 0, \nonumber\\ A\Sigma_o + \Sigma_o A- C^TC &= 0, \nonumber\\ A_rP+PA_r-B_rB_r^T &=0, \label{3} \\ A_rQ+QA_r-C_r^TC_r &=0, \label{4} \\ AX+XA_r-BB_r^T &=0, \label{5} \\ AY+YA_r+C^TC_r &=0, \label{6} \end{align} respectively. A similar discussion can be found in \cite{sato2015riemannian}, which contains a more detailed explanation of the calculation. As mentioned earlier, if $m=p$ and $B^T=C$, then the system \eqref{1} is a stable gradient system \cite{ionescu2013moment, scherpen2011balanced}. In this case, Problem 1 can be replaced with the following problem. \begin{framed} Problem 2: \begin{align*} &{\rm minimize} \quad \tilde{J}(A_r,B_r), \\ &{\rm subject\, to} \quad (A_r,B_r)\in \tilde{M}. \end{align*} \end{framed} \noindent Here, \begin{align*} \tilde{J}(A_r,B_r) := || B^T(sI_n+A)^{-1}B - B_r^T(sI_r+A_r)^{-1}B_r||^2_{H^2} \end{align*} and \begin{align*} \tilde{M}:= {\rm Sym}_+(r)\times {\bf R}^{r\times m}. \end{align*} We develop optimization algorithms for solving Problems 1 and 2 in Sections \ref{sec3} and \ref{sec_Pro2}, respectively. \begin{remark} We can also consider the reduced system expressed by \begin{align*} \begin{cases} \dot{x}_r = -U^TAUx_r +U^TB u, \\ y_r = CU x_r \end{cases} \end{align*} for $U$ belonging to the Stiefel manifold ${\rm St}(r,n):= \left\{ U\in {\bf R}^{n\times r}\, |\, U^TU =I_r\right\}$. Then, Problem 1 is replaced with the following optimization problem on the Stiefel manifold. \begin{framed} Problem 3: \begin{align*} &{\rm minimize} \quad J(U^TAU,U^TB,CU), \\ &{\rm subject\, to} \quad U \in {\rm St}(r,n).
\end{align*} \end{framed} \noindent Reference \cite{sato2015riemannian} has proposed the trust-region method for solving Problem 3. \end{remark} \begin{remark} It is beneficial to consider Problem 1 instead of Problem 3. In fact, if $U_*$ is a global optimal solution to Problem 3, then $(U^T_*AU_*,U_*^TB,CU_*)\in M$ is a feasible solution to Problem 1; i.e., the minimum value of Problem 3 is not smaller than that of Problem 1. In Section \ref{sec4}, we verify this fact numerically. Furthermore, we give an example in Section \ref{secnew_5} which shows that the critical points of Problems 1 and 3 do not necessarily coincide with each other. \end{remark} \begin{remark} A possible drawback of Problem 1 is that the problem may not have a solution, because the Riemannian manifold $M$ is not compact. However, we always obtained solutions by using the trust-region method in this paper. We leave a general mathematical analysis of the existence of a solution to Problem 1 to future work. \end{remark} \section{Optimization algorithm for Problem 1} \label{sec3} \subsection{General Riemannian trust-region method} We first review Riemannian optimization methods, including the Riemannian trust-region method, following \cite{absil2009optimization}, for the readability of the subsequent subsections. We also refer to \cite{absil2009optimization} for schematic figures of Riemannian optimization. In this subsection, we consider a general Riemannian optimization problem to minimize an objective function $h$ defined on a Riemannian manifold $\mathcal{M}$. In optimization on the Euclidean space $\mathcal{E}$, we can compute a point $x_+ \in \mathcal{E}$ from the current point $x \in \mathcal{E}$ and the search direction $d \in \mathcal{E}$ as $x_+=x+d$. However, this update formula cannot be used on $\mathcal{M}$, since $\mathcal{M}$ is not generally a Euclidean space. For $x \in \mathcal{M}$ and $\xi \in T_x \mathcal{M}$, $x+\xi$ is not defined in general.
Even if $\mathcal{M}$ is a submanifold of the Euclidean space $\mathcal{E}$ and $x+\xi$ is defined as a point in $\mathcal{E}$, it is not generally on $\mathcal{M}$. Therefore, we seek a next point $x_+$ on a curve on $\mathcal{M}$ emanating from $x$ in the direction of $\xi$. Such a curve is defined by using a map called the exponential mapping ${\rm Exp}$, which is in turn defined through a curve called a geodesic. More concretely, for any $x, y \in \mathcal{M}$ on a geodesic which are sufficiently close to each other, the piece of the geodesic between $x$ and $y$ is the shortest among all curves connecting the two points. For any $\xi\in T_x\mathcal{M}$, there exist an interval $I \subset {\bf R}$ around $0$ and a unique geodesic $\Gamma_{(x,\xi)}: I\rightarrow \mathcal{M}$ such that $\Gamma_{(x,\xi)}(0)=x$ and $\dot{\Gamma}_{(x,\xi)}(0)=\xi$. The exponential mapping ${\rm Exp}$ at $x\in \mathcal{M}$ is then defined through this curve as \begin{align} {\rm Exp}_x(\xi):=\Gamma_{(x,\xi)}(1). \label{exp_def} \end{align} This definition is well posed because the geodesic $\Gamma_{(x,\xi)}$ has the homogeneity property $\Gamma_{(x,a\xi)}(t)=\Gamma_{(x,\xi)}(at)$ for any $a\in {\bf R}$ satisfying $at \in I$. We can thus compute a point $x_+$ as \begin{equation} x_+= {\rm Exp}_x(\xi). \label{next_step} \end{equation} In the trust-region method, at the current point $x$, we compute a second-order approximation of the objective function $h$ based on the Taylor expansion. We minimize this second-order approximation in a ball whose radius is called the trust-region radius. See Section \ref{Sec:3.E} for a detailed discussion of the trust-region method for our problem. We thus need the first- and second-order derivatives of $h$, which are characterized by the Riemannian gradient and Hessian of $h$.
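As a toy illustration of the update \eqref{next_step}, the sketch below runs a plain Riemannian gradient descent (a simpler substitute for the trust-region inner iteration, not the method of this paper) on ${\rm Sym}_+(r)$, using the affine-invariant metric of Section \ref{Sec:3.B} together with its well-known closed-form exponential map ${\rm Exp}_S(\xi)=S^{1/2}\exp(S^{-1/2}\xi S^{-1/2})S^{1/2}$; the toy objective and the step size are assumptions of the sketch:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def sym(M):
    return (M + M.T) / 2

def exp_map(S, xi):
    # Exp_S(xi) = S^{1/2} expm(S^{-1/2} xi S^{-1/2}) S^{1/2}: the standard
    # closed-form exponential map on Sym_+(r) for the affine-invariant
    # metric; it keeps every iterate symmetric positive definite.
    S_half = sqrtm(S)
    S_ihalf = np.linalg.inv(S_half)
    return S_half @ expm(S_ihalf @ xi @ S_ihalf) @ S_half

def riemannian_grad(S, egrad):
    # grad f(S) = S sym(egrad) S under <xi1, xi2>_S = tr(S^-1 xi1 S^-1 xi2).
    return S @ sym(egrad) @ S

# Toy objective (an assumption of this sketch): f(S) = ||S - T||_F^2,
# whose Euclidean gradient is 2(S - T).
T = np.diag([2.0, 0.5])
S = np.eye(2)
for _ in range(300):
    g = riemannian_grad(S, 2 * (S - T))
    S = sym(np.real(exp_map(S, -0.1 * g)))   # x_+ = Exp_x(xi), fixed step

print(np.allclose(S, T, atol=1e-4))
```

Every iterate stays on ${\rm Sym}_+(2)$ without any projection step, which is exactly the benefit of updating along geodesics instead of using $x_+=x+d$.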
Since $\mathcal{M}$ is a Riemannian manifold, $\mathcal{M}$ has a Riemannian metric $\langle \cdot, \cdot\rangle$, which endows the tangent space $T_x \mathcal{M}$ at each point $x \in \mathcal{M}$ with an inner product $\langle \cdot, \cdot \rangle_x$. The gradient ${\rm grad}\, h(x)$ of $h$ at $x \in \mathcal{M}$ is defined as the tangent vector at $x$ which satisfies \begin{equation} {\rm D}h(x)[\xi] = \langle {\rm grad}\, h(x), \xi\rangle_x \label{grad_def} \end{equation} for any $\xi \in T_x \mathcal{M}$. Here, the left-hand side of \eqref{grad_def} denotes the directional derivative of $h$ at $x$ in the direction $\xi$. The Hessian ${\rm Hess}\, h(x)$ of $h$ at $x$ is defined via the covariant derivative of the gradient ${\rm grad}\, h(x)$. If $\mathcal{M}$ is a Riemannian submanifold of a Euclidean space, we can compute the Hessian ${\rm Hess}\, h(x)$ by using the gradient ${\rm grad}\, h(x)$ and the orthogonal projection onto the tangent space $T_x \mathcal{M}$. Based on this fact, we derive the gradient and Hessian of our objective function $J$ in Section \ref{Sec:3.D}. \subsection{Difficulties in applying the general Riemannian trust-region method} This subsection points out difficulties that arise when we apply the above general Riemannian trust-region method. The first difficulty is obtaining the geodesic $\Gamma_{(x,\xi)}$. In fact, to get $\Gamma_{(x,\xi)}$, we may need to solve a nonlinear differential equation in a local coordinate system around $x\in \mathcal{M}$. The equation may only be approximately solved by a numerical integration scheme, and the numerical integration consumes a large amount of time in many cases. As a result, it is difficult to obtain the exponential map defined by \eqref{exp_def} in general. The second difficulty is how to choose a Riemannian metric $\langle \cdot, \cdot \rangle$.
Since the gradient ${\rm grad}\,h(x)$ defined by \eqref{grad_def} depends on the Riemannian metric, we should adopt a metric for which the gradient can be obtained in a short time. However, such a choice may imply that the manifold $\mathcal{M}$ is not geodesically complete. Here, a Riemannian manifold is called geodesically complete if the exponential mapping is defined for every tangent vector at any point. If $\mathcal{M}$ is not geodesically complete, we have to carefully choose $\xi$ in \eqref{next_step} in such a manner that $x_+$ is contained in $\mathcal{M}$. This leads to computational inefficiency. For example, consider the manifold ${\rm Sym}_{+}(r)$. Since ${\rm Sym}_{+}(r)$ is a submanifold of the vector space ${\rm Sym}(r)$, we can consider the metric $\langle \cdot,\cdot \rangle$ induced from the natural inner product on the ambient space ${\rm Sym}(r)$, namely \begin{align} \langle \xi_1, \xi_2 \rangle_{S}:= {\rm tr}(\xi_1 \xi_2) \label{normal_metric} \end{align} for $\xi_1, \xi_2\in T_S {\rm Sym}_+(r)$. Here, $T_S {\rm Sym}_+(r)\cong {\rm Sym}(r)$ as explained in \cite{lang1999fundamentals}. Then, the exponential map is simply given by ${\rm Exp}_S(\xi) = S+\xi.$ However, $S+\xi \not \in {\rm Sym}_+(r)$ for some $\xi\in T_S {\rm Sym}_+(r)$, because $S+\xi$ may fail to be positive definite. This means that ${\rm Sym}_+(r)$ is not geodesically complete under this metric. As a result, we have to carefully choose $\xi \in T_S {\rm Sym}_+(r)$. Our following discussion overcomes these difficulties. \subsection{Geometry of the manifold ${\rm Sym}_{+}(r)$} \label{Sec:3.B} This subsection introduces another Riemannian metric on the manifold ${\rm Sym}_{+}(r)$ \cite{lang1999fundamentals, Gallier2016, helgason1979differential, helmke1996optimization, pennec2006riemannian}. This metric is useful for developing an optimization algorithm for solving Problem 1 for the following reasons:\\ 1) The geodesic is given by a closed-form expression.
That is, we do not have to integrate a nonlinear differential equation.\\ 2) The manifold ${\rm Sym}_+(r)$ is then geodesically complete, in contrast to the case of the Riemannian metric \eqref{normal_metric}. That is, ${\rm Exp}_S(\xi)\in {\rm Sym}_+(r)$ is always defined for any $\xi\in T_S{\rm Sym}_+(r)$. For $\xi_1$, $\xi_2\in T_S {\rm Sym}_{+}(r)$, we define the Riemannian metric as \begin{align} \langle \xi_1, \xi_2\rangle_S := {\rm tr} ( S^{-1} \xi_1 S^{-1} \xi_2 ), \label{metric} \end{align} which is invariant under the group action $\phi_g: S \to gSg^T$ for $g \in GL(r)$; i.e., $\langle {\rm D} \phi_g(\xi_1), {\rm D} \phi_g(\xi_2) \rangle_{\phi_g(S)} = \langle \xi_1,\xi_2 \rangle_S$, where ${\rm D}\phi_g: T_S {\rm Sym}_+(r) \rightarrow T_{\phi_g(S)} {\rm Sym}_+(r)$ is the derivative map given by ${\rm D}\phi_g(\xi) =g\xi g^T$. The proof that \eqref{metric} is a Riemannian metric can be found in Chapter XII of \cite{lang1999fundamentals}. Let $f: {\rm Sym}_+(r) \rightarrow {\bf R}$ be a smooth function and let $\bar{f}$ denote the extension of $f$ to the Euclidean space ${\bf R}^{r\times r}$. The Euclidean gradient $\nabla \bar{f}(S)$ and the directional derivative ${\rm D}\bar{f}(S)[\xi]$ of $\bar{f}$ at $S$ in the direction $\xi$ are related by \begin{align} {\rm tr}\, (\xi^T\nabla \bar{f}(S)) ={\rm D} \bar{f}(S)[\xi]. \end{align} \noindent The Riemannian gradient ${\rm grad}\, f(S)$ satisfies \begin{align} \langle {\rm grad} f (S), \xi \rangle_S &= {\rm D}f(S)[\xi] \\ &= {\rm D}\bar{f}(S)[\xi] \\ &= {\rm tr}\, (\xi^T {\rm sym}(\nabla \bar{f}(S)) ). \label{relation} \end{align} Here, we have used $\xi=\xi^T$. From \eqref{metric} and \eqref{relation}, we obtain \begin{align} {\rm grad}\, f(S) = S {\rm sym}(\nabla \bar{f}(S)) S.
\label{gradient} \end{align} According to Section 4.1.4 in \cite{jeuris2012survey}, the Riemannian Hessian ${\rm Hess}\,f(S): T_S {\rm Sym}_+(r) \rightarrow T_S {\rm Sym}_+(r)$ of the function $f$ at $S\in {\rm Sym}_+(r)$ is given by \begin{align} {\rm Hess}\,f(S)[\xi] = {\rm D}{\rm grad}\, f(S)[\xi] - {\rm sym} ({\rm grad}\, f(S) S^{-1} \xi ). \label{Hess1} \end{align} Hence, \eqref{gradient} and \eqref{Hess1} yield \begin{align} {\rm Hess}\,f(S)[\xi] =S {\rm sym}( {\rm D} \nabla \bar{f}(S) [\xi] )S + {\rm sym} (\xi {\rm sym} (\nabla \bar{f}(S)) S ). \label{Hess} \end{align} The geodesic $\Gamma_{(S,\xi)}$ on the manifold ${\rm Sym}_+(r)$ going through a point $S\in {\rm Sym}_+(r)$ with a tangent vector $\xi\in T_S {\rm Sym}_+(r)$ is given by \begin{align} \Gamma_{(S,\xi)}(t) = \phi_{\exp (t \zeta)}(S) \label{geo1} \end{align} with $\xi = \zeta S + S \zeta$ for $\zeta\in T_{I_r} {\rm Sym}_+(r)$; i.e., the geodesic is the orbit of the one-parameter subgroup $\exp (t\zeta)$, where $\exp$ is the matrix exponential function. The relation \eqref{geo1} follows from the fact that ${\rm Sym}_+(r)$ is a reductive homogeneous space. For convenience, we prove it in Appendix \ref{ape1}. A detailed explanation of the expression \eqref{geo1} can be found in \cite{Gallier2016}. To simplify \eqref{geo1}, we consider the geodesic going through the origin $I_r =\phi_{S^{-1/2}}(S) \in {\rm Sym}_+(r)$ because the Riemannian metric given by \eqref{metric} is invariant under the group action. In this case, we get $\zeta=\frac{1}{2}\xi$ and $\Gamma_{(I_r,\xi)}(t) = \exp (t\xi)$. Hence, \begin{align*} \Gamma_{(S,\xi)}(t) &= \phi_{S^{1/2}} (\Gamma_{(I_r, {\rm D}\phi_{S^{-1/2}}(\xi) )} (t)) \\ &= S^{\frac{1}{2}} \exp (t S^{-\frac{1}{2}} \xi S^{-\frac{1}{2}} ) S^{\frac{1}{2}}. \end{align*} Therefore, the exponential map on ${\rm Sym}_+(r)$ is given by \begin{align} {\rm Exp}_S (\xi) := \Gamma_{(S,\xi)}(1) = S^{\frac{1}{2}} \exp (S^{-\frac{1}{2}} \xi S^{-\frac{1}{2}} ) S^{\frac{1}{2}}. 
\label{8} \end{align} Since ${\rm Exp}_S : T_S {\rm Sym}_+(r) \rightarrow {\rm Sym}_+(r)$ is a bijection \cite{Gallier2016}, ${\rm Sym}_+(r)$ endowed with the Riemannian metric \eqref{metric} is geodesically complete, in contrast to the case of \eqref{normal_metric}. \subsection{Euclidean gradient of the objective function $J$} Let $\bar{J}$ denote the extension of the objective function $J$ to the Euclidean space ${\bf R}^{r\times r}\times {\bf R}^{r\times m} \times {\bf R}^{p\times r}$. Then, the Euclidean gradient of $\bar{J}$ is given by \begin{align} & \nabla \bar{J}(A_r,B_r,C_r) \nonumber \\ =& 2( -QP-Y^TX, QB_r+Y^TB, C_rP-CX). \label{16} \end{align} \noindent Although a similar expression can be found in Theorem 3.3 of \cite{van2008h2} and Section 3.2 of \cite{wilson1970optimum}, we provide another proof in Appendix \ref{apeB} because some equations in the proof are needed for deriving the Riemannian Hessian of $J$, as shown in the next subsection. \subsection{Geometry of Problem 1} \label{Sec:3.D} We define the Riemannian metric on the manifold $M$ as \begin{align} & \langle (\xi_1,\eta_1,\zeta_1),(\xi_2,\eta_2,\zeta_2) \rangle_{(A_r,B_r,C_r)} \nonumber \\ :=& {\rm tr}( A_r^{-1} \xi_1 A_r^{-1} \xi_2 ) + {\rm tr} (\eta_1^T\eta_2) + {\rm tr}(\zeta_1^T \zeta_2) \label{Riemannian_metric} \end{align} for $(\xi_1,\eta_1,\zeta_1),(\xi_2,\eta_2,\zeta_2) \in T_{(A_r,B_r,C_r)} M$. Then, it follows from \eqref{gradient} and \eqref{16} that \begin{align} {\rm grad}\, J(A_r,B_r,C_r) =& ( -2 A_r {\rm sym}(QP+Y^TX)A_r, \label{grad_J} \\ & 2(QB_r+Y^TB), 2(C_rP-CX) ).
\nonumber \end{align} Furthermore, from \eqref{Hess} and \eqref{16}, the Riemannian Hessian of $J$ at $(A_r,B_r,C_r)$ is given by \begin{align} & {\rm Hess}\, J(A_r,B_r,C_r) [(A'_r,B'_r,C'_r)] \nonumber \\ =& ( -2A_r {\rm sym}( Q'P+QP'+Y'^TX+Y^TX') A_r \nonumber \\ &\, -2 {\rm sym} (A'_r {\rm sym} (QP+Y^TX) A_r), \label{Hess_J}\\ &\,\, 2(Q'B_r +QB'_r +Y'^TB), 2(C'_rP+C_rP'-CX') ), \nonumber \end{align} where $P'$ and $X'$ are the solutions to \eqref{10} and \eqref{11} in Appendix \ref{apeB}, respectively, and $Q'$ and $Y'$ are the solutions to \begin{align} & A_r Q' +Q' A_r+A'_r Q+Q A'_r- C'^T_r C_r-C^T_r C'_r =0, \label{31} \\ & AY'+Y'A_r+YA'_r+C^TC'_r =0. \label{32} \end{align} The equations \eqref{31} and \eqref{32} are obtained by differentiating \eqref{4} and \eqref{6}, respectively. From \eqref{8}, we can define the exponential map on the manifold $M$ as \begin{align} & {\rm Exp}_{(A_r,B_r,C_r)}(\xi,\eta,\zeta) \nonumber \\ :=& (A_r^{\frac{1}{2}} \exp (A_r^{-\frac{1}{2}} \xi A_r^{-\frac{1}{2}} ) A_r^{\frac{1}{2}}, B_r+\eta,C_r+\zeta) \label{33} \end{align} for any $(\xi,\eta,\zeta)\in T_{(A_r,B_r,C_r)} M$; i.e., the manifold $M$ is geodesically complete. \subsection{Trust-region method for Problem 1} \label{Sec:3.E} This section gives the Riemannian trust-region method for solving Problem 1. In \cite{absil2009optimization, absil2007trust}, the Riemannian trust-region method has been discussed in detail. At each iterate $(A_r,B_r,C_r)$ in the Riemannian trust-region method on the manifold $M$, we evaluate the quadratic model $\hat{m}_{(A_r,B_r,C_r)}$ of the objective function $J$ within a trust-region: \begin{align*} &\quad \hat{m}_{(A_r,B_r,C_r)}(\xi,\eta,\zeta) \\ =& J(A_r,B_r,C_r) + \langle {\rm grad}\,J(A_r,B_r,C_r), (\xi,\eta,\zeta) \rangle_{(A_r,B_r,C_r)} \\ &+\frac{1}{2} \langle {\rm Hess}\, J(A_r,B_r,C_r)[(\xi,\eta,\zeta)], (\xi,\eta,\zeta) \rangle_{(A_r,B_r,C_r)}. 
\end{align*} A trust-region with a radius $\Delta>0$ at $(A_r,B_r,C_r)\in M$ is defined as a ball with center $0$ in $T_{(A_r,B_r,C_r)} M$. Thus, the trust-region subproblem at $(A_r,B_r,C_r)\in M$ with a radius $\Delta$ is defined as a problem of minimizing $\hat{m}_{(A_r,B_r,C_r)}(\xi,\eta,\zeta)$ subject to $(\xi,\eta,\zeta)\in T_{(A_r,B_r,C_r)} M$, $||(\xi,\eta,\zeta)||_{(A_r,B_r,C_r)}:= \sqrt{ \langle (\xi,\eta,\zeta),(\xi,\eta,\zeta) \rangle_{(A_r,B_r,C_r)}} \leq \Delta$. This subproblem can be solved by the truncated conjugate gradient method \cite{absil2009optimization}. Then, we compute the ratio of the decreases in the objective function $J$ and the model $\hat{m}_{(A_r,B_r,C_r)}$ attained by the resulting $(\xi_*,\eta_*,\zeta_*)$ to decide whether $(\xi_*,\eta_*,\zeta_*)$ should be accepted and whether the trust-region with the radius $\Delta$ is appropriate. Algorithm \ref{algorithm} describes the process. The constants $\frac{1}{4}$ and $\frac{3}{4}$ in the condition expressions in Algorithm \ref{algorithm} are commonly used in the trust-region method for a general unconstrained optimization problem. These values ensure the convergence properties of the algorithm \cite{absil2009optimization, absil2007trust}. \begin{algorithm} \caption{Trust-region method for Problem 1.} \label{algorithm} \label{alg1} \begin{algorithmic}[1] \STATE Choose an initial point $((A_{r})_0,(B_{r})_0,(C_{r})_0) \in M$ and parameters $\bar{\Delta}>0$, $\Delta_0\in (0,\bar{\Delta})$, $\rho'\in [0,\frac{1}{4})$. 
\FOR{$k=0,1,2,\ldots$ } \STATE Solve the following trust-region subproblem for $(\xi,\eta,\zeta)$ to obtain $(\xi_k,\eta_k,\zeta_k)\in T_{((A_r)_k,(B_r)_k,(C_r)_k)} M$: \begin{align*} &{\rm minimize}\quad \hat{m}_{((A_r)_k,(B_r)_k,(C_r)_k)}(\xi,\eta,\zeta) \\ &{\rm subject\, to}\quad ||(\xi,\eta,\zeta)||_{((A_r)_k,(B_r)_k,(C_r)_k)} \leq \Delta_k, \\ &{\rm where}\quad \hat{m}_k(\xi,\eta,\zeta):=\hat{m}_{((A_r)_k,(B_r)_k,(C_r)_k)}(\xi,\eta,\zeta), \\ &\quad\quad\quad (\xi,\eta,\zeta)\in T_{((A_r)_k,(B_r)_k,(C_r)_k)}M. \end{align*} \STATE Evaluate \begin{align*} \rho_k := \frac{ J({\rm Exp}_{k}(0,0,0)) -J({\rm Exp}_{k}(\xi_k,\eta_k,\zeta_k))}{ \hat{m}_{k}(0,0,0)- \hat{m}_{k} (\xi_k,\eta_k,\zeta_k)} \end{align*} with ${\rm Exp}_k(\xi,\eta,\zeta):= {\rm Exp}_{((A_r)_k,(B_r)_k,(C_r)_k)}(\xi,\eta,\zeta)$. \IF {$\rho_k<\frac{1}{4}$} \STATE $\Delta_{k+1}=\frac{1}{4}\Delta_k$. \ELSIF {$\rho_k>\frac{3}{4}$ and $||(\xi_k,\eta_k,\zeta_k)||_{((A_r)_k,(B_r)_k,(C_r)_k)} = \Delta_k$} \STATE $\Delta_{k+1} = \min (2\Delta_k,\bar{\Delta})$. \ELSE \STATE $\Delta_{k+1} = \Delta_k$. \ENDIF \IF {$\rho_k>\rho'$} \STATE $((A_r)_{k+1},(B_r)_{k+1},(C_r)_{k+1}) = {\rm Exp}_k(\xi_k,\eta_k,\zeta_k)$. \ELSE \STATE {\small $((A_r)_{k+1},(B_r)_{k+1},(C_r)_{k+1}) = ((A_r)_{k},(B_r)_{k},(C_r)_{k})$.} \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \begin{remark} \label{remark3} The most computationally demanding task in Algorithm 1 is solving Eqs. \eqref{3}--\eqref{6} at each iteration. Although algorithms for solving these equations have been studied in the literature \cite{antoulas2005approximation, benner2003state, damm2008direct}, a more efficient method is needed in order to apply Algorithm 1 to large-scale model reduction problems. \end{remark} \section{Optimization algorithm for solving Problem 2} \label{sec_Pro2} This section develops an optimization algorithm for solving Problem 2.
As with Problem 1, to derive the Riemannian gradient and Hessian of the objective function $\tilde{J}$, we calculate the Euclidean gradient $\nabla \bar{\tilde{J}}$, where $\bar{\tilde{J}}$ is the extension of $\tilde{J}$ to the ambient space ${\bf R}^{r\times r}\times {\bf R}^{r\times m}$. Since $C=B^T$ and $C_r=B_r^T$, it follows from \eqref{3}--\eqref{6} that $P=Q$ and $X=-Y$. Thus, in Appendix \ref{apeB}, by replacing $C$, $C_r$, and $C'_r$ with $B$, $B_r$, and $B'_r$, respectively, we obtain \begin{align*} \nabla \bar{\tilde{J}}(A_r,B_r) = ( -2(P^2-X^TX), 4PB_r-4X^TB). \end{align*} Hence, if we consider the counterpart of the Riemannian metric \eqref{Riemannian_metric} for the manifold $\tilde{M}$ as \begin{align*} \langle (\xi_1,\eta_1),(\xi_2,\eta_2) \rangle_{(A_r,B_r)} = {\rm tr}( A_r^{-1} \xi_1 A_r^{-1} \xi_2 ) + {\rm tr} (\eta_1^T\eta_2), \end{align*} the Riemannian gradient and Hessian of $\tilde{J}$ are given by \begin{align*} & {\rm grad}\, \tilde{J}(A_r,B_r) = (-2A_r{\rm sym}(P^2 -X^T X) A_r, \\ &\quad \quad\quad\quad\quad\quad\quad \quad 4PB_r-4X^TB ), \\ & {\rm Hess}\, \tilde{J}(A_r,B_r) [ (A'_r,B'_r)] \\ =& ( -2A_r {\rm sym}( P'P+PP'-X'^TX-X^TX') A_r \\ &\, -2 {\rm sym} (A'_r {\rm sym} (P^2-X^TX) A_r), \\ &\,\, 4(P'B_r+PB'_r) -4 X'^TB ), \end{align*} respectively. Here, $P$, $X$, $P'$, and $X'$ are the solutions to \eqref{3}, \eqref{5}, \eqref{10}, and \eqref{11}, respectively. The exponential map on the manifold $\tilde{M}$ is, of course, given by \begin{align*} {\rm Exp}_{(A_r,B_r)}(\xi,\eta) =(A_r^{\frac{1}{2}} \exp (A_r^{-\frac{1}{2}} \xi A_r^{-\frac{1}{2}} ) A_r^{\frac{1}{2}}, B_r+\eta). \end{align*} Similarly to Problem 1, we can solve Problem 2 by using a modified version of Algorithm \ref{algorithm}. The reduced system constructed by the solution is also a stable gradient system.
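The ${\rm Sym}_+(r)$ factor of these exponential maps is straightforward to evaluate with dense linear algebra. The following sketch (our illustration; the symmetric square root and matrix exponential are computed via eigendecomposition) evaluates the closed form \eqref{8} and checks geodesic completeness, in contrast with the induced-metric update $S+\xi$ of \eqref{normal_metric}, which can leave the cone:

```python
import numpy as np

def sym_funcm(S, fun):
    # Apply a scalar function to a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(fun(w)) @ V.T

def exp_map(S, xi):
    # Exp_S(xi) = S^{1/2} exp(S^{-1/2} xi S^{-1/2}) S^{1/2}  (Eq. (8))
    S_half = sym_funcm(S, np.sqrt)
    S_mhalf = sym_funcm(S, lambda w: 1.0 / np.sqrt(w))
    inner = S_mhalf @ xi @ S_mhalf
    return S_half @ sym_funcm((inner + inner.T) / 2, np.exp) @ S_half

rng = np.random.default_rng(1)
r = 3
M = rng.standard_normal((r, r))
S = M @ M.T + np.eye(r)                  # a point in Sym_+(r)
xi = rng.standard_normal((r, r))
xi = (xi + xi.T) / 2                     # a symmetric tangent vector

X = exp_map(S, xi)
print(np.linalg.eigvalsh(X).min() > 0)   # stays in Sym_+(r): True
# Under the induced metric (normal_metric), the update S + xi can leave
# the cone: e.g. xi = -2S is symmetric, yet S + xi = -S is negative definite.
print(np.linalg.eigvalsh(S - 2 * S).min() < 0)   # True
```

The congruence $X = S^{1/2}\exp(\cdot)S^{1/2}$ preserves positive definiteness for every tangent vector, which is exactly the geodesic completeness claimed above.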
Note that in contrast to Problem 1, we do not calculate $Q$, $Y$, $Q'$, and $Y'$; i.e., we only need to calculate $P$, $X$, $P'$, and $X'$ for solving Problem 2 by the trust-region method. This improves computational efficiency. \begin{remark} If $m=p$ and $B^T=C$, we can also regard the system \eqref{1} as a port-Hamiltonian system \cite{ionescu2013moment, van2012l2}. Since a port-Hamiltonian system is passive, the reduced system constructed by the solution to Problem 2 is also passive \cite{van2012l2}. \end{remark} \section{Comparison between Problem 1 and Problem 3} \label{secnew_5} In this section, we compare the reduced systems obtained by solving Problems 1 and 3 and give a simple example which shows that they do not necessarily coincide with each other. For $J$ in \eqref{eq_J}, which is the $H^2$ norm of the error system, let $J_1 := J$ and $J_3(U) := J(U^TAU, U^TB, CU)$. Then, we have \begin{equation*} {\rm grad}\, J_1(A_r, B_r, C_r)=(A_r{\rm sym}(\nabla_{A_r} \bar{J})A_r, \nabla_{B_r} \bar{J}, \nabla_{C_r} \bar{J}) \end{equation*} and \begin{equation*} {\rm grad}\, J_3(U) = \nabla \bar{J_3}(U) - U {\rm sym}(U^T \nabla \bar{J_3}(U)), \end{equation*} where \begin{align*} \nabla \bar{J_3}(U) =& 2 A U {\rm sym}(\nabla_{A_r} \bar{J}(U^TAU, U^TB, CU)) \\ &+ B(\nabla_{B_r} \bar{J}(U^TAU, U^TB, CU))^T \\ &+ C^T\nabla_{C_r} \bar{J}(U^TAU, U^TB, CU). \end{align*} Note that we have used $A^T=A$ and that $\nabla_{A_r} \bar{J}$ denotes the $A_r$-component of $\nabla \bar{J}$. The expression of ${\rm grad}\, J_1(A_r,B_r,C_r)$ is from \eqref{16} and \eqref{grad_J}, and ${\rm grad}\, J_3(U)$ can be found in \cite{sato2015riemannian, yan1999approximate}. Even if ${\rm grad}\, J_1(A_r, B_r, C_r) = 0$ for some $(A_r, B_r, C_r)$, there does not in general exist $U$ such that \begin{equation} \label{EqABC} A_r = U^T A U,\ B_r = U^TB,\ C_r = CU, \end{equation} and ${\rm grad}\,J_3(U) =0$. 
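The $A_r$-components of these gradients rest on the identity \eqref{gradient} for the invariant metric \eqref{metric}. A quick numerical sanity check (our illustration, with the hypothetical test function $f(S)=\log\det S$, whose Euclidean gradient is $S^{-1}$, so that \eqref{gradient} gives ${\rm grad}\,f(S)=S\,S^{-1}S=S$):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 4
M = rng.standard_normal((r, r))
S = M @ M.T + r * np.eye(r)        # a point in Sym_+(r)
xi = rng.standard_normal((r, r))
xi = (xi + xi.T) / 2               # symmetric tangent vector

f = lambda X: np.log(np.linalg.det(X))
Sinv = np.linalg.inv(S)
grad = S @ Sinv @ S                # Eq. (gradient): S sym(S^{-1}) S = S

# Pairing under the invariant metric: <grad, xi>_S = tr(S^{-1} grad S^{-1} xi)
pairing = np.trace(Sinv @ grad @ Sinv @ xi)

# Compare with a central finite difference of the directional derivative
t = 1e-6
fd = (f(S + t * xi) - f(S - t * xi)) / (2 * t)
print(abs(pairing - fd) < 1e-6)    # True: (grad_def) is satisfied
```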
Conversely, ${\rm grad}\, J_3(U) =0$ does not yield ${\rm grad}\, J_1(U^TAU, U^TB, CU)=0$ either. In order to see this clearly from a simple example, we consider in the remainder of this section the system \eqref{1} with $n=2$ and $m=p=1$ and assume that the dimension of the reduced model is $r=1$. Furthermore, we suppose $A=\begin{pmatrix}2 & 0 \\ 0 & 1\end{pmatrix}$, $B = \begin{pmatrix}-1 \\ 1\end{pmatrix}$, and $C = \begin{pmatrix} 1 & 1 \end{pmatrix}$. For Problem 1, we can obtain $P = B_r^2/(2A_r)$, $Q = C_r^2/(2A_r)$, $X = \begin{pmatrix}-B_r/(A_r+2) & B_r/(A_r+1)\end{pmatrix}^T$, and $Y = -\begin{pmatrix}C_r/(A_r+2) & C_r/(A_r+1)\end{pmatrix}^T$ by \eqref{3}--\eqref{6}. Then, a simple analysis implies that ${\rm grad}\, J_1(A_r, B_r, C_r) = 0$ is equivalent to \begin{equation*} B_r C_r = 0 \quad \text{or} \quad A_r=-\frac{1}{2}+\frac{\sqrt{33}}{6},\ B_rC_r=6-\sqrt{33}. \end{equation*} At these infinitely many critical points, the objective function evaluates to \begin{align*} J(A_r, B_r, C_r) = 1/12 \approx 0.0833 \end{align*} for any $(A_r, B_r, C_r)$ with $A_r>0$ and $B_rC_r=0$, and \begin{equation*} J(A_r, B_r, C_r) = (569-99\sqrt{33})/24 \approx 0.0120 \end{equation*} for $A_r=-1/2+\sqrt{33}/6$ and for any $(B_r, C_r)$ with $B_rC_r=6-\sqrt{33}$, which implies that the minimum value of $J$ attained by solving Problem 1 is approximately $0.0120$. For Problem 3, let $U = \begin{pmatrix} u_1 & u_2 \end{pmatrix}^T \in {\rm St}\,(1,2)$. This means that $U$ lies on the unit circle in ${\bf R}^2$, that is, $u_1^2+u_2^2=1$. Then, we have $P = (u_1-u_2)^2/\big(2(1+u_1^2)\big)$, $Q = (u_1+u_2)^2/\big(2(1+u_1^2)\big)$,\\ $X =\begin{pmatrix} (u_1-u_2)/(u_1^2+3) & - (u_1-u_2)/(u_1^2+2)\end{pmatrix}^T$, and $Y = -\begin{pmatrix}(u_1+u_2)/(u_1^2+3) & (u_1+u_2)/(u_1^2+2)\end{pmatrix}^T$ in a similar manner to that in Problem 1.
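These critical values can be checked numerically. In the sketch below (our illustration), we use the standard decomposition of the squared $H^2$ error, $J = {\rm tr}(CP_fC^T) - 2\,{\rm tr}(CXC_r^T) + {\rm tr}(C_rPC_r^T)$, where $P_f$ is the full-order Gramian solving $AP_f+P_fA=BB^T$; this expression is our reconstruction, consistent with the gradient \eqref{16} and the Gramian equations \eqref{3}--\eqref{6}, and for the diagonal example above it gives ${\rm tr}(CP_fC^T)=1/4-2/3+1/2=1/12$:

```python
import math

# Toy example: A = diag(2, 1), B = (-1, 1)^T, C = (1, 1), r = 1, with the
# closed-form Gramians from the text: P = B_r^2/(2 A_r) and
# X = (-B_r/(A_r+2), B_r/(A_r+1))^T.
def J(Ar, Br, Cr):
    full = 1.0 / 12.0                                  # tr(C P_f C^T)
    CX = Br * (-1.0 / (Ar + 2.0) + 1.0 / (Ar + 1.0))   # scalar C X
    cross = -2.0 * CX * Cr
    red = (Br * Cr) ** 2 / (2.0 * Ar)                  # tr(C_r P C_r^T)
    return full + cross + red

# Critical family 1: B_r C_r = 0 gives J = 1/12 for any A_r > 0
print(J(1.0, 0.0, 1.0))                                # 0.0833...

# Critical family 2: A_r = -1/2 + sqrt(33)/6, B_r C_r = 6 - sqrt(33)
Ar = -0.5 + math.sqrt(33.0) / 6.0
BrCr = 6.0 - math.sqrt(33.0)
val = J(Ar, BrCr, 1.0)                 # J depends on (B_r, C_r) only via B_r C_r
print(val, (569.0 - 99.0 * math.sqrt(33.0)) / 24.0)    # both ~0.0120
```

Since the cross term is proportional to $B_rC_r$ and the reduced term to $(B_rC_r)^2$, $J$ indeed depends on $(B_r,C_r)$ only through the product $B_rC_r$, explaining the one-parameter families of critical points.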
A straightforward but tedious calculation shows that ${\rm grad}\, J_3(U) = 0$ holds if and only if \begin{equation} \label{Ueq2} (u_1, u_2) = (\pm 1, 0),\ (0, \pm 1) \end{equation} or \begin{equation} \label{Ueq1} u_1 = \pm 0.5642 \quad \text{and} \quad u_2 = \pm \sqrt{1-u_1^2}= \pm 0.8256, \end{equation} where $u_1=\pm 0.5642$ are the real solutions to the equation $4u_1^{12}+48u_1^{10}+215u_1^8+478u_1^6+515u_1^4+132u_1^2-112=0$. Therefore, there are only $8$ isolated critical points of $J_3$, in contrast to the infinitely many critical points in Problem 1. The resultant reduced system matrices are then computed by \eqref{EqABC}. Eq.~\eqref{Ueq2} yields $(A_r, B_r, C_r) = (2, \mp 1, \pm 1), (1, \pm 1, \pm 1)$, where $B_r C_r = \pm 1$. In contrast, for \eqref{Ueq1} we have $A_r = 1.318$ and $(B_r, C_r) = (\pm 0.2614, \pm 1.390)$, $(\pm 1.390, \pm 0.2614)$, where $B_rC_r=0.3633$. Meanwhile, the result for Problem 1 yields $B_rC_r=0$ or $B_rC_r=6-\sqrt{33}\approx 0.2554$. Therefore, we can conclude that the reduced systems obtained by the two problems do not coincide with each other in general. Furthermore, we have $J(A_r,B_r,C_r)\approx 0.0389$ for all $(A_r, B_r, C_r)$ obtained by \eqref{Ueq1}, $J(A_r, B_r, C_r) = 1/2 = 0.5$ for $(u_1, u_2) = (\pm 1, 0)$, and $J(A_r, B_r, C_r) = 1/4 = 0.25$ for $(u_1, u_2) = (0, \pm 1)$, all of which are worse than the results in Problem 1. From these observations, we can conclude that the solutions to Problems 1 and 3 are not necessarily unique, and that neither solution set contains the other. Also, the attained optimal values do not coincide with each other. \section{Numerical experiments} \label{sec4} This section illustrates that the proposed reduction method preserves the structure of the system \eqref{1} although the balanced truncation method does not preserve it.
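Before turning to the experiments, the numerical values reported for \eqref{Ueq1} can be reproduced independently (an illustrative check, ours): bisection recovers $u_1$, and since $B_r=U^TB=-u_1+u_2$ and $C_r=CU=u_1+u_2$, the product at these points is $B_rC_r=u_2^2-u_1^2=1-2u_1^2$:

```python
# Bisection for the positive real root of
# p(u) = 4u^12 + 48u^10 + 215u^8 + 478u^6 + 515u^4 + 132u^2 - 112.
def p(u):
    t = u * u
    return ((((4 * t + 48) * t + 215) * t + 478) * t + 515) * t * t + 132 * t - 112

lo, hi = 0.5, 0.6                  # p(0.5) < 0 < p(0.6)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if p(lo) * p(mid) <= 0:
        hi = mid
    else:
        lo = mid
u1 = 0.5 * (lo + hi)
print(round(u1, 4))                # 0.5642
print(round(1 + u1 * u1, 3))       # A_r = 1 + u1^2 = 1.318
print(round(1 - 2 * u1 * u1, 4))   # B_r C_r = 0.3633
```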
Furthermore, it is shown that the proposed reduction method attains a smaller value of the objective function than the reduction method proposed in \cite{sato2015riemannian}, even if we choose the initial point in Algorithm 1 as a local optimal solution to Problem 3. This means that the stationary points of Problems 1 and 3 do not coincide. To perform these experiments, we used Manopt \cite{boumal2014manopt}, a MATLAB toolbox for optimization on manifolds. We consider a reduction of the system \eqref{1} with $n=5$ and $m=p=2$ to the system \eqref{2} with $r=3$. Here, the system matrices $A$, $B$, and $C$ are given by \begin{align*} A &:= \begin{pmatrix} 3 & -1 & 1 & 1 & -1 \\ -1 & 2 & 0 & 0 & 2 \\ 1 & 0 & 2 & 1 & 1 \\ 1 & 0 & 1 & 3 & 0 \\ -1 & 2 & 1 & 0 & 4 \end{pmatrix}, B := \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ -1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix},\\ C &:= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \end{pmatrix}. \end{align*} That is, $(A,B,C)\in {\rm Sym}_+(5)\times {\bf R}^{5\times 2} \times {\bf R}^{2\times 5}$. The balanced truncation method, which is the most popular model reduction method \cite{antoulas2005approximation}, gave the reduced matrix $A_r^{\rm BT}$ as \begin{align*} A_r^{\rm BT} &= \begin{pmatrix} 2.8944 & -0.0422 & -1.4729 \\ -0.0318 & 1.0470 & -0.2615 \\ -1.1764 & -0.2355 & 4.1898 \end{pmatrix}. \end{align*} Thus, $A_r^{\rm BT} \not\in {\rm Sym}_+(3)$; i.e., the balanced truncation method did not preserve the original model structure. Furthermore, we obtained $||G-G_r||_{H^2}=0.0157$.
The reduction method briefly explained in Remark 1 of \cite{sato2015riemannian} gave the orthogonal matrix \begin{align} U = \begin{pmatrix} 0.8906 & 0.1189 & -0.1025 \\ -0.1117 & 0.7216 & 0.0373 \\ -0.0650 & -0.1558 & 0.8994 \\ -0.2144 & 0.6138 & 0.0302 \\ 0.3798 & 0.2532 & 0.4223 \end{pmatrix}, \label{U} \end{align} and then \begin{align*} U^TAU &= \begin{pmatrix} 1.9613 & 0.0507 & 0.7510 \\ 0.0507 & 2.8566 & 1.6666 \\ 0.7510 & 1.6666 & 3.1486 \end{pmatrix}. \end{align*} Thus, $U^TAU \in {\rm Sym}_+(3)$; i.e., this method preserved the original model structure. Furthermore, we obtained $||G-G_r||_{H^2}=0.0217$. Note that, in this result, the norm of the gradient of the objective function was approximately equal to $7.493\times 10^{-7}$; i.e., we can expect that a local optimal solution to Problem 3 was obtained. The proposed algorithm gave the reduced matrices $A_r$, $B_r$, and $C_r$ as follows: \begin{align*} A_r &= \begin{pmatrix} 1.8965 & 0.0237 & 0.7778 \\ 0.0237 & 3.1554 & 1.8009 \\ 0.7778 & 1.8009 & 3.1784 \end{pmatrix}, \\ B_r & = \begin{pmatrix} -0.2677 & 1.1820 \\ 1.5124 & 0.2049 \\ -0.7759 & 1.2155 \end{pmatrix}, \\ C_r &= \begin{pmatrix} 0.8726 & 0.1503 & -0.0630 \\ 0.3321 & 0.0680 & 1.3121 \end{pmatrix}. \end{align*} Thus, $A_r\in {\rm Sym}_+(3)$; i.e., the reduced system had the same structure as the original system. Here, we chose the initial point $((A_r)_0, (B_r)_0, (C_r)_0)$ in Algorithm \ref{algorithm} as $(U^TAU, U^TB, CU)$, where $U$ is defined by \eqref{U}. Furthermore, we obtained $||G-G_r||_{H^2}=0.0156$. Hence, the value of the objective function attained by the proposed algorithm was smaller than the values attained by the balanced truncation method and the method in \cite{sato2015riemannian}. This means that the stationary points of Problems 1 and 3 do not coincide.
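The reported matrices can be cross-checked directly from the printed four-decimal values (an illustrative verification, ours): $A$ is positive definite, $U$ has orthonormal columns to printing precision, the projection $U^TAU$ stays symmetric, while $A_r^{\rm BT}$ is visibly non-symmetric.

```python
import numpy as np

A = np.array([[ 3., -1.,  1.,  1., -1.],
              [-1.,  2.,  0.,  0.,  2.],
              [ 1.,  0.,  2.,  1.,  1.],
              [ 1.,  0.,  1.,  3.,  0.],
              [-1.,  2.,  1.,  0.,  4.]])
U = np.array([[ 0.8906,  0.1189, -0.1025],
              [-0.1117,  0.7216,  0.0373],
              [-0.0650, -0.1558,  0.8994],
              [-0.2144,  0.6138,  0.0302],
              [ 0.3798,  0.2532,  0.4223]])
A_bt = np.array([[ 2.8944, -0.0422, -1.4729],
                 [-0.0318,  1.0470, -0.2615],
                 [-1.1764, -0.2355,  4.1898]])

print(np.linalg.eigvalsh(A).min() > 0)             # A in Sym_+(5): True
print(np.allclose(U.T @ U, np.eye(3), atol=1e-3))  # orthonormal columns: True
Ar = U.T @ A @ U
print(np.allclose(Ar, Ar.T))                       # projection keeps symmetry: True
print(np.allclose(A_bt, A_bt.T))                   # balanced truncation: False
```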
To verify the effectiveness of the proposed algorithm for medium-scale systems, we also randomly created matrices $A$, $B$, and $C$ of larger size. Table \ref{table} shows the values of the relative $H^2$ error in the case of $A\in {\rm Sym}_+(300)$, $B\in {\bf R}^{300\times 3}$, and $C\in {\bf R}^{2\times 300}$. For all $r$, the relative $H^2$ errors of the proposed method were smaller than those of the balanced truncation method. Furthermore, the reduced models given by the balanced truncation method did not have the original symmetric structure, whereas those given by the proposed method did. Moreover, for all $r$, the proposed method was better than the method in \cite{sato2015riemannian}. Here, we note that for each $r$, the initial point $((A_r)_0, (B_r)_0, (C_r)_0)$ in Algorithm \ref{algorithm} for solving Problem 1 was chosen as $(U^TAU, U^TB, CU)$, where $U$ is a local optimal solution to Problem 3. Thus, Table \ref{table} also shows that the stationary points of Problems 1 and 3 do not coincide.
\begin{table}[h] \caption{Comparison of the relative $H^2$ error $\frac{||G-G_r||_{H^2}}{||G||_{H^2}}$.} \label{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $r$& $6 $ & $8$ & $10$ & $12$ \\ \hline Balanced truncation & 0.0141 & 0.0120 & 0.0103 & 0.0088 \\\hline The method in \cite{sato2015riemannian} & 0.0297 & 0.0299 & 0.0294 & 0.0317 \\\hline The proposed method & 0.0112 & 0.0089& 0.0042 & 0.0020 \\\hline \end{tabular} \end{center} \end{table} \begin{remark} As mentioned in Remark \ref{remark3}, solving large-scale model reduction problems by Algorithm 1 requires a long computational time, whereas the balanced truncation method can be performed much more quickly. Furthermore, the balanced truncation method gives upper bounds on the $H^2$ and $H^{\infty}$ error norms \cite{antoulas2005approximation}. For these reasons, we suggest first using the balanced truncation method, while observing the $H^2$ and $H^{\infty}$ error norms, to determine the largest reduced dimension $r$ that is feasible for Algorithm 1, and then choosing the actual $r$ for Algorithm 1 to be smaller than this largest dimension. \end{remark} \section{Conclusion} \label{sec5} We have studied the stability- and symmetry-preserving $H^2$ optimal model reduction problem on the product manifold of the manifold of symmetric positive definite matrices and two Euclidean spaces. To solve the problem by using the trust-region method, we have derived the Riemannian gradient and Hessian. Furthermore, it has been shown that if we restrict our systems to gradient systems, the gradient and Hessian can be obtained more efficiently. By a simple example, we have shown that the solutions to our problem and the problem in \cite{sato2015riemannian} are not necessarily unique and that, in general, neither solution set contains the other. Also, it has been revealed that the attained optimal values do not coincide.
Numerical experiments have illustrated that although the balanced truncation method does not preserve the original symmetric structure of the system, the proposed method preserves the structure. Furthermore, it has been demonstrated that the proposed method is better than our method in \cite{sato2015riemannian}, and also usually better than the balanced truncation method, in the sense of the $H^2$ error norm between the transfer functions of the original and reduced systems. \appendix \subsection{Proof of the fact that ${\rm Sym}_+(r)$ is a reductive homogeneous space} \label{ape1} To prove that ${\rm Sym}_+(r)$ is a reductive homogeneous space, we first note that there is a natural bijection \begin{align} {\rm Sym}_+(r) \cong GL(r)/O(r). \label{doukei} \end{align} To see this, let $\phi_g$ denote the $GL(r)$ action on the manifold ${\rm Sym}_{+}(r)$; i.e., $\phi_g(S) =gSg^T,\quad g\in GL(r), S\in {\rm Sym}_{+}(r)$. The action $\phi_g$ is transitive; i.e., for any $S_1$, $S_2\in {\rm Sym}_+(r)$, there exists $g\in GL(r)$ such that $\phi_g(S_1)=S_2$. Thus, the manifold ${\rm Sym}_+(r)$ consists of a single orbit; i.e., ${\rm Sym}_+(r)$ is a homogeneous space of $GL(r)$. The isotropy subgroup of the action at $I_r\in {\rm Sym}_+(r)$ is the orthogonal group $O(r)$, because $O(r) = \{ g\in GL(r)\,|\, \phi_g (I_r) = I_r \}$. In general, if an action of a group on a set is transitive, the set is isomorphic to a quotient of the group by its isotropy subgroup \cite{Gallier2016}. Hence, \eqref{doukei} holds. From the identification \eqref{doukei}, we can show that the quotient $GL(r)/O(r)$ is reductive; i.e., $T_{I_r} GL(r) \cong T_{I_r} {\rm Sym}_+(r) \oplus T_{I_r} O(r)$ and $O\xi O^{-1} \in T_{I_r} {\rm Sym}_+(r)$ for $\xi\in T_{I_r} {\rm Sym}_+(r)$ and $O\in O(r)$.
In fact, these follow from \begin{align*} T_{I_r} GL(r) &\cong {\bf R}^{r\times r} \cong {\rm Sym}(r) \oplus {\rm Skew}(r), \\ {\rm Sym}(r) & \cong T_{I_r} {\rm Sym}_+(r), \,\, {\rm Skew} (r) \cong T_{I_r} O(r), \end{align*} and $O^{-1}=O^T$. \subsection{Proof of \eqref{16}} \label{apeB} The directional derivative of $\bar{J}$ at $(A_r,B_r,C_r)$ in the direction $(A'_r,B'_r,C'_r)$ can be calculated as \begin{align} &{\rm D}\bar{J}(A_r,B_r,C_r)[(A'_r,B'_r,C'_r)] \nonumber \\ =& 2{\rm tr} (C'_r (PC_r^T -X^TC^T)) - 2{\rm tr} (C^TC_rX'^T) +{\rm tr} (C_rP'C_r^T), \label{9} \end{align} where $P'$ and $X'$ denote the directional derivatives of $P$ and $X$ at $(A_r,B_r,C_r)$ in the direction $(A'_r,B'_r,C'_r)$, respectively. Differentiating \eqref{3} and \eqref{5}, we obtain \begin{align} & A_r P'+P' A_r +A_r' P+PA'_r-B_r'B_r^T-B_rB'^{T}_r =0, \label{10} \\ & AX'+X'A_r+XA_r'-BB_r'^T =0. \label{11} \end{align} Eqs.\,\eqref{4} and \eqref{10} yield \begin{align} {\rm tr}(C_r^TC_rP') = -2 {\rm tr}(A_r'^TQP)+2{\rm tr}(B_r'^TQB_r), \label{12} \end{align} and \eqref{6} and \eqref{11} imply \begin{align} {\rm tr}(-C^TC_rX'^T) = {\rm tr}( (-X A_r'^T+BB_r'^T)^TY). \label{13} \end{align} By substituting \eqref{12} and \eqref{13} into \eqref{9}, we have \begin{align} &{\rm D}\bar{J}(A_r,B_r,C_r)[(A'_r,B'_r,C'_r)] \nonumber \\ =& 2{\rm tr} ( A_r'^T (-QP-Y^TX) ) + 2{\rm tr} (B_r'^T (QB_r+Y^TB) ) \nonumber \\ &+2{\rm tr} (C_r'^T(C_rP-CX) ). \label{14} \end{align} Since the Euclidean gradient $\nabla \bar{J}(A_r,B_r,C_r)$ satisfies \begin{align*} & {\rm D} \bar{J}(A_r,B_r,C_r)[(A'_r,B'_r,C'_r)] \nonumber \\ = & {\rm tr} (A_r'^T \nabla_{A_r} \bar{J}(A_r,B_r,C_r)) + {\rm tr} (B_r'^T \nabla_{B_r} \bar{J}(A_r,B_r,C_r))\\ &+ {\rm tr} (C_r'^T \nabla_{C_r} \bar{J}(A_r,B_r,C_r)), \end{align*} \eqref{14} implies \eqref{16}. \section*{Acknowledgment} This study was supported in part by JSPS KAKENHI Grant Number JP16K17647.
The authors would like to thank the anonymous reviewers for their valuable comments that helped improve the paper significantly. \ifCLASSOPTIONcaptionsoff \fi \end{document}
\begin{document} \title{Approximation rate in Wasserstein distance of probability measures on the real line by deterministic empirical measures} \author{O. Bencheikh and B. Jourdain\thanks{Cermics, \'Ecole des Ponts, INRIA, Marne-la-Vall\'ee, France. E-mails : [email protected], [email protected]. The authors would like to acknowledge financial support from Université Mohammed VI Polytechnique. }} \maketitle \begin{abstract} We are interested in the approximation in Wasserstein distance with index $\rho\ge 1$ of a probability measure $\mu$ on the real line with finite moment of order $\rho$ by the empirical measure of $N$ deterministic points. The minimal error converges to $0$ as $N\to+\infty$ and we try to characterize the order associated with this convergence. In \cite{xuberger}, Xu and Berger show that, apart from the case when $\mu$ is a Dirac mass, where the error vanishes, the order is not larger than $1$, and give a sufficient condition for the order to be equal to this threshold $1$ in terms of the density of the part of $\mu$ which is absolutely continuous with respect to the Lebesgue measure. They also prove that the order is not smaller than $1/\rho$ when the support of $\mu$ is bounded, and not larger than $1/\rho$ when the support is not an interval. We complement these results by checking that, for the order to lie in the interval $\left(1/\rho,1\right)$, the support has to be bounded, and by stating a necessary and sufficient condition in terms of the tails of $\mu$ for the order to be equal to some given value in the interval $\left(0,1/\rho\right)$, thus making precise the sufficient condition in terms of moments given in \cite{xuberger}. In view of practical applications, we emphasize that, in the proof of each result about the order of convergence of the minimal error, we exhibit a choice of points, explicit in terms of the quantile function of $\mu$, which achieves the same order of convergence. \noindent{\bf Keywords:} deterministic empirical measures, Wasserstein distance, rate of convergence.
\noindent {{\bf AMS Subject Classification (2010):} \it 49Q22, 60-08} \end{abstract} \section*{Introduction} Let $\rho\ge 1$ and $\mu$ be a probability measure on the real line. We are interested in the rate of convergence in terms of $N\in{\mathbb N}^*$ of \begin{equation} e_N(\mu,\rho):=\inf\left\{\mathcal{W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right):-\infty<x_1\le x_2\le \cdots\le x_N<+\infty\right\},\label{pbor} \end{equation} where ${\cal W}_\rho$ denotes the Wasserstein distance with index $\rho$. The motivation is the approximation of the probability measure $\mu$ by finitely supported probability measures. An example of application is provided by the optimal initialization of systems of particles with mean-field interaction \cite{jrdcds,benchjour}, where, to preserve the mean-field feature, it is important to get $N$ points with equal weight $\frac{1}{N}$ (of course, nothing prevents several of these points from being equal). The Hoeffding-Fr\'echet or comonotone coupling between two probability measures $\nu$ and $\eta$ on the real line is optimal for $\mathcal{W}_\rho$ so that: \begin{align}\label{Wasserstein} \displaystyle \mathcal{W}_\rho^\rho\left(\nu,\eta\right) = \int_0^1 \left|F^{-1}_{\nu}(u) - F^{-1}_{\eta}(u) \right|^\rho\,du, \end{align} where for $u\in(0,1)$, $F^{-1}_{\nu}(u)= \inf\left\{ x \in {\mathbb R}: \nu\left(\left(-\infty,x\right]\right)\geq u\right\}$ and $F^{-1}_{\eta}(u)= \inf\left\{ x \in {\mathbb R}: \eta\left(\left(-\infty,x\right]\right)\geq u\right\}$ are the respective quantile functions of $\nu$ and $\eta$. We set $F(x)=\mu\left(\left(-\infty,x\right]\right)$ for $x\in{\mathbb R}$ and denote $F^{-1}(u)=\inf\left\{ x \in {\mathbb R}: F(x)\geq u\right\}$ for $u\in(0,1)$. We have $u\le F(x)\Leftrightarrow F^{-1}(u)\le x$. 
The quantile function $F^{-1}$ is left-continuous and non-decreasing and we denote by $F^{-1}(u+)$ its right-hand limit at $u\in [0,1)$ (in particular $F^{-1}(0+)=\lim\limits_{u\to 0+}F^{-1}(u)\in [-\infty,+\infty)$) and set $F^{-1}(1)=\lim\limits_{u\to 1-}F^{-1}(u)\in(-\infty,+\infty]$. By \eqref{Wasserstein}, when $-\infty<x_1\le x_2\le \cdots\le x_N<+\infty$, \begin{equation} \mathcal{W}_\rho^\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right) =\sum \limits_{i=1}^N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x_{i}-F^{-1}(u)\right|^\rho\,du,\label{wrhomunmu} \end{equation} where, by the inverse transform sampling, the right-hand side is finite if and only if $\int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$. So, when considering $e_N(\mu,\rho)$, we will suppose that $\mu$ has a finite moment of order $\rho$.\\ In the first section of the paper, we recall that, under this moment condition, the infimum in \eqref{pbor} is attained: \begin{equation*} e_N\left(\mu,\rho\right) = {\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i^N},\mu\right)=\sum \limits_{i=1}^N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x^N_{i}-F^{-1}(u)\right|^\rho\,du \end{equation*} for some points $x_i^N\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]\cap{\mathbb R}$ which are unique as soon as $\rho>1$ and explicit in the quadratic case $\rho=2$. Of course the points $\left(x_i^N\right)_{1\le i\le N}$ depend on $\rho$ but we do not make this dependence explicit, to keep notations simple. For $\rho=1$, because of the lack of strict convexity of ${\mathbb R}\ni x\mapsto|x|$, there may be several optimal choices, among which $x_i^N=F^{-1}\left(\frac{2i-1}{2N}\right)$ for $i\in\{1,\hdots,N\}$. 
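As a quick numerical illustration of \eqref{wrhomunmu}, the following sketch (ours, not from the paper) evaluates $\mathcal{W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right)$ for a law given through its quantile function, approximating each cell integral by the midpoint rule. For the uniform law on $[0,1]$ with the midpoint-quantile choice $x_i=F^{-1}\left(\frac{2i-1}{2N}\right)=\frac{2i-1}{2N}$ and $\rho=2$, the exact value $\frac{1}{2\sqrt{3}N}$ is recovered.

```python
import math

def w_rho_empirical(quantile, x, rho, m=2000):
    """W_rho((1/N) sum_i delta_{x[i]}, mu) from the quantile formula
    sum_i int_{(i-1)/N}^{i/N} |x_i - F^{-1}(u)|^rho du, each cell integral
    approximated by the midpoint rule with m nodes."""
    N = len(x)
    total = 0.0
    for i in range(N):
        for k in range(m):
            u = (i + (k + 0.5) / m) / N      # midpoint node in the i-th cell
            total += abs(x[i] - quantile(u)) ** rho / (m * N)
    return total ** (1.0 / rho)

# Uniform law on [0,1]: F^{-1}(u) = u; midpoint-quantile points (2i-1)/(2N).
N = 10
x = [(2 * i - 1) / (2 * N) for i in range(1, N + 1)]
w2 = w_rho_empirical(lambda u: u, x, rho=2.0)
# exact value for the uniform law: 1/(2*sqrt(3)*N)
```

The quadrature grid and `w_rho_empirical` are our own helpers; any quantile function and point configuration can be plugged in.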
Note that when $\rho\ge\tilde \rho\ge 1 $ and $\displaystyle \int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$, with $(x_i^N)_{1\le i\le N}$ denoting the optimal points for $\rho$, \begin{equation} e_N(\mu,\rho)= {\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x^N_i},\mu\right)\ge {\cal W}_{\tilde \rho}\left(\frac{1}{N}\sum_{i=1}^N\delta_{x^N_i},\mu\right)\ge e_N(\mu,\tilde\rho)\label{minoerhoun}. \end{equation} Hence $\rho\mapsto e_N(\mu,\rho)$ is non-decreasing. We give an alternative expression of $e_N(\mu,\rho)$ in terms of the cumulative distribution function rather than the quantile function and recover that $e_N(\mu,\rho)$ tends to $0$ as $N\to+\infty$ when $\int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$. The main purpose of the paper is to study the rate at which this convergence occurs. In particular, we would like to give sufficient conditions on $\mu$, which, when possible, are also necessary, to ensure convergence at a rate $N^{-\alpha}$ with $\alpha >0$ called the order of convergence. This question has already been studied by Xu and Berger in \cite{xuberger}. According to Theorem 5.20 \cite{xuberger}, $\limsup_{N\to\infty}Ne_N(\mu,\rho)\ge\frac 12\int_0^{\frac 12}|F^{-1}(u+\frac 12)-F^{-1}(u)|du$ with the right-hand side positive except when $\mu$ is a Dirac mass, in which case $e_N(\mu,\rho)$ vanishes for all $N\ge 1$ and $\rho\ge 1$. This result may be complemented by the non-asymptotic bound obtained in Lemma 1.4 of the first version \cite{BJ} of the present paper: $Ne_N(\mu,\rho)+(N+1)e_{N+1}(\mu,\rho)\ge \frac 12\int_{{\mathbb R}}F(x)\wedge (1-F(x))\,dx$ where $\int_{{\mathbb R}}F(x)\wedge (1-F(x))\,dx=\int_0^{\frac 12}|F^{-1}(u+\frac 12)-F^{-1}(u)|du$, as is easily seen since these integrals correspond to the area of the region to the right of $(F(x))_{-\infty< x\le F^{-1}(\frac{1}{2})}$, to the left of $(1-F(x))_{F^{-1}(\frac{1}{2})<x<+\infty}$, above $0$ and below $\frac{1}{2}$, computed respectively by integration with respect to the abscissa and to the ordinate. 
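The non-asymptotic lower bound above can be checked on a toy case where everything is in closed form: for $\mu$ uniform on $[0,1]$ one has $e_N(\mu,1)=\frac{1}{4N}$ (median points $x_i^N=\frac{2i-1}{2N}$) and $\int_{\mathbb R}F\wedge(1-F)\,dx=\int_0^1\min(x,1-x)\,dx=\frac14$; these closed forms are our own elementary computation, used only for illustration.

```python
from fractions import Fraction

# Closed forms for mu = U[0,1] (our own elementary computation):
# e_N(mu,1) = 1/(4N) with median points x_i = (2i-1)/(2N),
# and (1/2) * int_R F /\ (1-F) dx = (1/2) * (1/4) = 1/8.
def e_N_uniform_w1(N):
    return Fraction(1, 4 * N)

rhs = Fraction(1, 2) * Fraction(1, 4)        # (1/2) int F /\ (1-F) dx
checks = [N * e_N_uniform_w1(N) + (N + 1) * e_N_uniform_w1(N + 1) >= rhs
          for N in range(1, 50)]
# here N*e_N + (N+1)*e_{N+1} = 1/2 for every N, comfortably above 1/8
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point caveat in the comparison.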
This previous version also contained a section devoted to the case when the support of $\mu$ is bounded i.e.\ $F^{-1}(1)-F^{-1}(0+)<+\infty$. The results in this section were mainly obtained before by Xu and Berger in \cite{xuberger}. In particular, according to Theorem 5.21 (ii) \cite{xuberger}, when the support of $\mu$ is bounded, then $\sup_{N\ge 1}N^{\frac{1}{\rho}}e_N(\mu,\rho)<+\infty$, while when $F^{-1}$ is discontinuous $\limsup_{N\to+\infty}N^{\frac{1}{\rho}}e_N(\mu,\rho)>0$, according to Remark 5.22 (ii) \cite{xuberger}. We recovered these results and went slightly further by noticing that when the support of $\mu$ is bounded and $F^{-1}$ is continuous, then $\lim_{N\to\infty}N^{\frac{1}{\rho}}e_N(\mu,\rho)=0$ (Proposition 2.1 \cite{BJ}), with an order of convergence arbitrarily slow as exemplified by the beta distribution with parameter $(\beta,1)$ with $\beta>0$: $\mu_\beta(dx)=\beta {\mathbf 1}_{[0,1]}(x)x^{\beta-1}\,dx$. Indeed, with the notation $\asymp$ defined at the end of the introduction, according to Example 2.3 \cite{BJ}, when $\rho>1$, $e_N(\mu_\beta,\rho)\asymp N^{-\frac{1}{\rho}-\frac{1}{\beta}}\asymp{\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{F^{-1}\left(\frac{i-1}{N}\right)},\mu_\beta\right)$ for $\beta>\frac{\rho}{\rho-1}$, $e_N(\mu_\beta,\rho)\asymp N^{-1}(\ln N)^{\frac{1}{\rho}}\asymp{\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{F^{-1}\left(\frac{i-1}{N}\right)},\mu_\beta\right)$ for $\beta=\frac{\rho}{\rho-1}$ and $\lim_{N\to+\infty}Ne_N(\mu_\beta,\rho)=\frac{1}{2\beta(\rho+1)^{1/\rho}}\left(\int_0^1u^{\frac{\rho}{\beta}-\rho}du\right)^{1/\rho}$ when $\beta\in (0,\frac{\rho}{\rho-1})$. The latter limiting behaviour, which remains valid for $\rho=1$ whatever $\beta>0$, is a consequence of one of the main results by Xu and Berger \cite{xuberger}, namely Theorem 5.15: when the density $f$ of the part of $\mu$ which is absolutely continuous with respect to the Lebesgue measure is $dx$ a.e. 
positive on $\left\{x\in{\mathbb R}:0<F(x)<1\right\}$ (or equivalently $F^{-1}$ is absolutely continuous), then $$\lim_{N\to+\infty} Ne_N(\mu,\rho)=\lim_{N\to+\infty} N{\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{F^{-1}\left(\frac{2i-1}{2N}\right)},\mu\right) = \frac{1}{2(\rho+1)^{1/\rho}}\left(\int_{\mathbb R}\frac{{\mathbf 1}_{\{0<F(x)<1\}}}{f^{\rho-1}(x)}\,dx\right)^{1/\rho}.$$ In Theorem 2.4 \cite{BJ}, we recovered this result and also stated that, without the positivity assumption on the density, $$\liminf_{N\to+\infty} Ne_N(\mu,\rho) \ge \frac{1}{2(\rho+1)^{1/\rho}} \left(\int_{\mathbb R}\frac{{\mathbf 1}_{\{f(x)>0\}}}{f^{\rho-1}(x)}\,dx\right)^{1/\rho}.$$ In particular, for $(Ne_N(\mu,1))_{N\ge 1}$ to be bounded, the Lebesgue measure of $\{x\in{\mathbb R}:f(x)>0\}$ must be finite. Weakening this necessary condition is discussed in Section 4 of the first version \cite{BJ} of this paper. In \cite{chevallier}, Chevallier addresses the multidimensional setting and proves in Theorem III.3 that for a probability measure $\mu$ on ${\mathbb R}^d$ with support bounded by $r$, there exist points $x_1,\hdots,x_N\in{\mathbb R}^d$ such that $\frac{1}{4r}\mathcal{W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right)\le f_{\rho,d}(N)$ where $f_{\rho,d}(N)$ is respectively equal to $\left(\frac d{d-\rho}\right)^{\frac 1\rho}N^{-\frac 1 d}$, $\left(\frac{1+\ln N}{N}\right)^{\frac 1 d}$, and $\zeta(\rho/d)N^{-\frac 1 \rho}$ with $\zeta$ denoting the Riemann zeta function when $\rho<d$, $\rho=d$ and $\rho>d$. The case when the support of $\mu$ is not bounded is also considered by Xu and Berger \cite{xuberger} in the one-dimensional setting of the present paper and by \cite{chevallier} in the multidimensional setting. 
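Theorem 5.15 quoted above can be sanity-checked on the uniform law on $[0,1]$: there $f\equiv 1$, by symmetry the minimizer in each cell is its midpoint, and one finds by direct computation (ours, used only for illustration) that $Ne_N(\mu,\rho)$ is constant in $N$ and equal to the limit $\frac{1}{2(\rho+1)^{1/\rho}}$.

```python
rho = 3.0
# For mu = U[0,1], the minimizer on each cell [(i-1)/N, i/N] is the midpoint
# by symmetry, and each cell contributes 2*int_0^{1/(2N)} t^rho dt
# = 2/(rho+1) * (2N)^{-(rho+1)} to e_N^rho.
def N_times_eN(N):
    cell = 2.0 / (rho + 1.0) * (2.0 * N) ** (-(rho + 1.0))
    return N * (N * cell) ** (1.0 / rho)

# RHS of Theorem 5.15 with f = 1 on [0,1]
limit = 1.0 / (2.0 * (rho + 1.0) ** (1.0 / rho))
# N*e_N equals the limit exactly for every N in this special case
```

This exactness is special to the uniform law; for a general density the equality only holds in the limit $N\to+\infty$.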
In Corollary III.5 \cite{chevallier}, Chevallier proves that $\lim_{N\to\infty}(f_{\rho,d}(N))^{-\alpha\rho}\mathcal{W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right)=0$ when $\int_{{\mathbb R}^d}|x|^{\frac{\rho}{1-\alpha\rho}}\mu(dx)<+\infty$ for some $\alpha\in (0,\frac 1 \rho)$. This generalizes the one-dimensional statement of Theorem 5.21 (i) \cite{xuberger}: under the same moment condition, $\lim_{N\to\infty}N^\alpha e_N(\mu,\rho)=0$. In Theorem \ref{alphaRater}, using our alternative formula for $e_N(\mu,\rho)$ in terms of the cumulative distribution function of $\mu$, we refine this result by stating the following necessary and sufficient condition $$\forall \alpha\in \left(0,\frac{1}{\rho}\right),\;\lim_{x\to +\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{N\to+\infty}N^{\alpha}e_N(\mu,\rho)=0.$$ We also check that $$\forall \alpha\in \left(0,\frac{1}{\rho}\right),\quad\displaystyle{\sup_{N \ge 1}} N^{\alpha} \, e_N(\mu,\rho)<+\infty \Leftrightarrow \sup_{x\ge 0}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty,$$ a condition under which the order of convergence $\alpha$ of the minimal error $e_N(\mu,\rho)$ is preserved by choosing $x_1=F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})$, $x_N=F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{\rho}-\alpha}$ and any $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$. We also exhibit non-compactly supported probability measures $\mu$ such that, for $\rho>1$, $\lim_{N\to+\infty}N^{\frac{1}{\rho}} \, e_N(\mu,\rho)=0$. Nevertheless, we show that for $\left(N^\alpha e_N(\mu,\rho)\right)_{N\ge 1}$ to be bounded for $\alpha>\frac{1}{\rho}$, the support of $\mu$ has to be bounded. 
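The tail criterion above is easy to probe numerically. As a hypothetical example (ours, not from the paper), take a Pareto-type tail $F(-x)+1-F(x)=\min(1,x^{-\gamma})$ with $\gamma>\rho$: then $\sup_{x\ge 0}x^{\frac{\rho}{1-\alpha\rho}}\big(F(-x)+1-F(x)\big)$ is finite exactly when $\frac{\rho}{1-\alpha\rho}\le\gamma$, i.e.\ when $\alpha\le\frac1\rho-\frac1\gamma$, so the order of convergence is $\frac1\rho-\frac1\gamma$.

```python
import math

rho, gamma = 2.0, 3.0   # hypothetical Pareto-type tail: F(-x)+1-F(x) = min(1, x^-gamma)

def tail_sup(alpha, xmax=1e8, n=400):
    """sup over a log-grid of x^{rho/(1-alpha*rho)} * (F(-x)+1-F(x))."""
    s = 0.0
    for k in range(n + 1):
        x = 10.0 ** (k * math.log10(xmax) / n)
        s = max(s, x ** (rho / (1.0 - alpha * rho)) * min(1.0, x ** -gamma))
    return s

alpha_star = 1.0 / rho - 1.0 / gamma   # critical order predicted by the tail criterion
# tail_sup stays bounded (equal to 1) at alpha_star and blows up for larger alpha
```

The grid bound `xmax` is arbitrary: for $\alpha>\alpha^*$ the supremum grows without bound as `xmax` increases.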
We last give a necessary condition for $\left(N^{\frac{1}{\rho}} e_N(\mu,\rho)\right)_{N\ge 1}$ to be bounded, which unfortunately is not sufficient but ensures the boundedness of $\left(\frac{N^{1/\rho}}{1+\ln N}e_N(\mu,\rho)\right)_{N\ge 1}$. We summarize our results together with the ones obtained by Xu and Berger \cite{xuberger} in Table \ref{res}. \begin{center} \begin{table}[!ht] \begin{tabular}{|c||c|c|}\hline $\alpha$ & Necessary condition & Sufficient condition\\\hline\hline $\alpha=1$ & $\displaystyle \int_{\mathbb R}\frac{{\mathbf 1}_{\left\{f(x)>0 \right\}}}{f^{\rho-1}(x)}\,dx<+\infty$ & $f(x)>0$ $dx$ a.e. on $\left\{x\in{\mathbb R}: 0<F(x)<1 \right\}$ \\ & (Thm. 2.4 \cite{BJ}) & and $\displaystyle \int_{\mathbb R}\frac{{\mathbf 1}_{\{f(x)>0\}}}{f^{\rho-1}(x)}\,dx<+\infty$ (Thm. 5.15 \cite{xuberger}) \\\hline $\alpha\in \left(\frac 1\rho,1\right)$& $F^{-1}$ continuous (Remark 5.22 (ii) \cite{xuberger}) & related to the modulus of continuity of $F^{-1}$\\ when $\rho>1$ & and $\mu$ with bounded support (Prop. \ref{propals1rcomp}) & \\\hline $\alpha=\frac 1\rho$ & $\exists \lambda>0$, $\forall x\ge 0$, $F(-x)+1-F(x)\le\frac{e^{-\lambda x}}{\lambda}$ & $\mu$ with bounded support (Thm. 5.21 (ii) \cite{xuberger}) \\ & (Prop. \ref{propal1rho}) & For $\rho>1$, Exple \ref{exempleexp} with unbounded supp.\\\hline $\alpha\in\left(0,\frac 1\rho\right)$ & ${\sup\limits_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$& ${\sup\limits_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$\\ &(Thm. \ref{alphaRater}) &(Thm. \ref{alphaRater})\\\hline \end{tabular}\caption{Conditions for the convergence of $e_N(\mu,\rho)$ with order $\alpha$ : $\sup\limits_{N\ge 1}N^\alpha e_N(\mu,\rho)<+\infty$.}\label{res} \end{table} \end{center} {\bf Notation :} \begin{itemize} \item We denote by $\lfloor x\rfloor$ (resp. $\lceil x\rceil$) the integer $j$ such that $j\le x<j+1$ (resp. 
$j-1<x\le j$) and by $\{x\}=x-\lfloor x\rfloor$ the fractional part of $x\in{\mathbb R}$. \item For two sequences $(a_N)_{N\ge 1}$ and $(b_N)_{N\ge 1}$ of real numbers with $b_N>0$ for $N\ge 2$ we denote $a_N\asymp b_N$ when $\displaystyle 0<\inf_{N\ge 2}\left(\frac{a_N}{b_N}\right)$ and $\displaystyle \sup_{N\ge 2}\left(\frac{a_N}{b_N}\right)<+\infty$. \end{itemize} \section{Preliminary results} When $\rho=1$ (resp. $\rho=2$), $\displaystyle {\mathbb R}\ni y\mapsto N\int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|y-F^{-1}(u)\right|^\rho\,du$ is minimal for $y$ belonging to the set $\left[F^{-1}\left(\frac{2i-1}{2N}\right),F^{-1}\left(\frac{2i-1}{2N}+\right)\right]$ of medians (resp. equal to the mean $\displaystyle N\int_{\frac{i-1}{N}}^{\frac{i}{N}}F^{-1}(u)\,du$) of the image of the uniform law on $\left[\frac{i-1}{N},\frac{i}{N}\right]$ by $F^{-1}$. For general $\rho>1$, the function $\displaystyle {\mathbb R}\ni y\mapsto \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|y-F^{-1}(u)\right|^\rho\,du$ is strictly convex and continuously differentiable with derivative \begin{equation} \rho\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left({\mathbf 1}_{\left\{y\ge F^{-1}(u)\right\}} \left(y-F^{-1}(u)\right)^{\rho-1}-{\mathbf 1}_{\left\{y<F^{-1}(u)\right\}}\left(F^{-1}(u)-y\right)^{\rho-1}\right)\,du\label{dery} \end{equation} non-positive for $y=F^{-1}\left(\frac{i-1}{N}+\right)$ when either $i=1$ and $F^{-1}(0+)>-\infty$ or $i\ge 2$, and non-negative for $y=F^{-1}\left(\frac{i}{N}\right)$ when either $i\le N-1$ or $i=N$ and $F^{-1}(1)<+\infty$. Since the derivative has a positive limit as $y\to+\infty$ and a negative limit as $y\to-\infty$, we deduce that $\displaystyle {\mathbb R}\ni y\mapsto \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|y-F^{-1}(u)\right|^\rho\,du$ admits a unique minimizer $x_i^N\in \left[F^{-1}\left(\frac{i-1}{N}+\right),F^{-1}\left(\frac{i}{N}\right)\right]\cap{\mathbb R}$ (to keep notations simple, we do not make the dependence of $x_i^N$ on $\rho$ explicit). 
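Since the derivative \eqref{dery} is non-decreasing in $y$, the minimizer $x_i^N$ can be located numerically by bisection on a discretized version of \eqref{dery}; the following sketch (our own helper, not from the paper) does this for a law given through its quantile function.

```python
import math

def optimal_point(quantile, i, N, rho, m=2000, it=100):
    """Minimizer x_i^N over [F^{-1}((i-1)/N+), F^{-1}(i/N)] of
    y -> int_{(i-1)/N}^{i/N} |y - F^{-1}(u)|^rho du (rho > 1), located by
    bisection on the non-decreasing derivative (dery), discretized on m
    midpoint nodes of the i-th cell."""
    q = [quantile((i - 1 + (k + 0.5) / m) / N) for k in range(m)]

    def deriv(y):  # proportional to the derivative (dery) on the discretized cell
        return sum(math.copysign(abs(y - z) ** (rho - 1.0), y - z) for z in q)

    lo, hi = q[0], q[-1]
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if deriv(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check on U[0,1]: by symmetry every cell's minimizer is its midpoint.
x_1 = optimal_point(lambda u: u, i=1, N=2, rho=4.0)   # expect 0.25
```

For $\rho=2$ this reduces to the conditional mean and for $\rho\to1$ it approaches the median, in line with the discussion above.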
Therefore \begin{equation} e^\rho_N(\mu,\rho)=\sum_{i=1}^N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x_{i}^N-F^{-1}(u)\right|^\rho\,du \mbox{ with } \left[F^{-1}\left(\frac{i-1}{N}+\right),F^{-1}\left(\frac{i}{N}\right)\right]\ni x_i^N=\begin{cases} \displaystyle F^{-1}\left(\frac{2i-1}{2N}\right)\mbox{ if }\rho=1,\\ \displaystyle N\int_{\frac{i-1}{N}}^{\frac{i}{N}}F^{-1}(u)\,du\mbox{ if }\rho=2,\\ \mbox{not explicit otherwise.} \end{cases}\label{enrho} \end{equation} When needing to bound $e_N(\mu,\rho)$ from above, we may replace the optimal point $x_i^N$ by $F^{-1}\left(\frac{2i-1}{2N}\right)$: $$\forall i\in\{1,\hdots,N\},\quad \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|F^{-1}(u)-x_i^N \right|^\rho\,du\le \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|F^{-1}(u)-F^{-1}\left(\frac{2i-1}{2N}\right)\right|^\rho\,du,$$ a simple choice particularly appropriate when linearization is possible since $\displaystyle \left[\frac{i-1}{N},\frac{i}{N}\right]\ni v\mapsto\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|u-v\right|^\rho\,du$ is minimal for $v=\frac{2i-1}{2N}$. To bound $e_N(\mu,\rho)$ from below, we can use that, by Jensen's inequality and the minimality of $F^{-1}\left(\frac{2i-1}{2N}\right)$ for $\rho=1$, \begin{align} \displaystyle \int_{\frac{i-1}{N}}^{\frac i N} \left|F^{-1}(u)-x^N_{i}\right|^{\rho}\,du &\ge N^{\rho -1} \left(\int_{\frac{i-1}{N}}^{\frac i N} \left|F^{-1}(u)-x^N_{i}\right|\,du\right)^{\rho} \ge N^{\rho -1} \left(\int_{\frac{i-1}{N}}^{\frac i N} \left|F^{-1}(u)-F^{-1}\left(\frac{2i-1}{2N}\right)\right|\,du\right)^{\rho} \notag\\ &\ge N^{\rho -1} \left(\frac{1}{4N}\left(F^{-1}\left(\frac{2i-1}{2N}\right)-F^{-1}\left(\frac{4i-3}{4N}\right)+F^{-1}\left(\frac{4i-1}{4N}\right)-F^{-1}\left(\frac{2i-1}{2N}\right)\right)\right)^{\rho} \notag\\ &\ge \frac{1}{4^\rho N}\left(F^{-1}\left(\frac{4i-1}{4N}\right)-F^{-1}\left(\frac{4i-3}{4N}\right)\right)^{\rho}.\label{minotermbordlimB} \end{align} We also have an alternative formulation of $e_N(\mu,\rho)$ in terms of the 
cumulative distribution function $F$ in place of the quantile function $F^{-1}$: \begin{prop}\label{propenf} \begin{equation} e_N^\rho(\mu,\rho)=\rho\sum_{i=1}^N\left(\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{x_i^N}\left(x_i^N-y\right)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy+\int^{F^{-1}\left(\frac{i}{N}\right)}_{x_i^N}\left(y-x_i^N\right)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy\right).\label{enf} \end{equation} \end{prop} Under the convention $F^{-1}(0)=-\infty$, when, for some $i\in\{1,\hdots,N\}$, $F^{-1}\left(\frac{i-1}{N}+\right)>F^{-1}\left(\frac{i-1}{N}\right)$, then $F(y)=\frac{i-1}{N}$ for $y\in\left[F^{-1}\left(\frac{i-1}{N}\right),F^{-1}\left(\frac{i-1}{N}+\right)\right)$ and $\displaystyle \int_{F^{-1}\left(\frac{i-1}{N}\right)}^{F^{-1}\left(\frac{i-1}{N}+\right)}(x_i^N-y)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy=0$ so that the lower integration limit in the first integral on the right-hand side of \eqref{enf} may be replaced by $F^{-1}\left(\frac{i-1}{N}\right)$. In a similar way, the upper integration limit in the second integral may be replaced by $F^{-1}\left(\frac{i}{N}+\right)$ under the convention $F^{-1}(1+)=+\infty$. When $\rho=1$, the equality \eqref{enf} follows from the interpretation of ${\cal W}_1(\nu,\eta)$ as the integral of the absolute difference between the cumulative distribution functions of $\nu$ and $\eta$ (equal, as seen with a rotation with angle $\frac{\pi}{2}$, to the integral of the absolute difference between their quantile functions) and the integral simplifies into: \begin{equation}\label{w1altern2b} e_N(\mu,1)=\sum_{i=1}^N\left(\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{2i-1}{2N}\right)}\left(F(y)-\frac{i-1}{N}\right)\,dy+\int^{F^{-1}\left(\frac{i}{N}\right)}_{F^{-1}\left(\frac{2i-1}{2N}\right)}\left(\frac{i}{N}-F(y)\right)\,dy\right)=\frac{1}{N}\int_{\mathbb R}\min_{j\in{\mathbb N}}\left|NF(y)-j\right|\,dy. 
\end{equation} For $\rho>1$, it can be deduced from the general formula for ${\cal W}_\rho^\rho(\nu,\eta)$ in terms of the cumulative distribution functions of $\nu$ and $\eta$ (see for instance Lemma B.3 \cite{jourey2}). It is also a consequence of the following equality for each term of the decomposition over $i\in\{1,\hdots,N\}$, which we will need later. \begin{lem}\label{lemenf} Assume that $\displaystyle \int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$ with $\rho\ge 1$. For $i\in\{1,\hdots,N\}$ and $x\in \left[F^{-1}\left(\frac{i-1}{N}\right),F^{-1}\left(\frac{i}{N}\right)\right]\cap{\mathbb R}$ (with convention $F^{-1}(0)=F^{-1}(0+)$), we have: $$\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|x-F^{-1}(u)\right|^\rho\,du=\rho\int_{F^{-1}\left(\frac{i-1}{N}\right)}^{x}(x-y)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy+\rho\int^{F^{-1}\left(\frac{i}{N}\right)}_{x}(y-x)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy,$$ and the right-hand side is minimal for $x=x_i^N$. \end{lem} \begin{proof} Let $i\in\{1,\hdots,N\}$ and $x\in\left[F^{-1}\left(\frac{i-1}{N}\right),F^{-1}\left(\frac{i}{N}\right)\right]\cap{\mathbb R}$. We have $\frac{i-1}{N}\le F(x)$ and $F(x-)\le\frac{i}{N}$. 
Since $F^{-1}(u)\le x\Leftrightarrow u\le F(x)$ and $F^{-1}(u)=x$ for $u\in \left(F(x-),F(x)\right]$, we have: $$\int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x-F^{-1}(u)\right|^\rho\,du=\int_{\frac{i-1}{N}}^{F(x)} \left(x-F^{-1}(u)\right)^\rho\,du+\int^{\frac{i}{N}}_{F(x)} \left(F^{-1}(u)-x\right)^\rho\,du.$$ Using the well-known fact that the image of ${\mathbf 1}_{[0,1]}(v)\,dv\mu(dz)$ by $(v,z)\mapsto F(z-)+v\mu(\{z\})$ is the Lebesgue measure on $[0,1]$ and that ${\mathbf 1}_{[0,1]}(v)\,dv\mu(dz)$ a.e., $F^{-1}\left(F(z-)+v\mu(\{z\})\right)=z$ , we obtain that: \begin{align} \int_{\frac{i-1}{N}}^{F(x)} \left(x-F^{-1}(u)\right)^\rho\,du &= \int_{v=0}^1\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\le F(x)\right\}}(x-z)^\rho\mu(dz)\,dv\notag\\ &=\int_{v=0}^1\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\le F(x)\right\}}\int \rho(x-y)^{\rho-1}{\mathbf 1}_{\{z\le y\le x\}}\,dy\mu(dz)\,dv\notag\\ &=\rho\int_{y=-\infty}^{x}(x-y)^{\rho-1}\int_{v=0}^1\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\right\}}{\mathbf 1}_{\left\{z\le y \right\}}\mu(dz)\,dv\,dy.\label{triplint} \end{align} For $v>0$, $\{z\in{\mathbb R}:F(z-)+v\mu(\{z\})\le F(y)\}=(-\infty,y]\cup\{z\in{\mathbb R}:z>y\mbox{ and }F(z)=F(y)\}$ with $\mu\left(\{z\in{\mathbb R}:z>y\mbox{ and }F(z)=F(y)\}\right)=0$ and therefore $$\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\right\}}{\mathbf 1}_{\{z\le y\}}\mu(dz)=\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\le F(y)\right\}}\mu(dz).$$ Plugging this equality in \eqref{triplint}, using again the image of ${\mathbf 1}_{[0,1]}(v)\,dv\mu(dz)$ by $(v,z)\mapsto F(z-)+v\mu(\{z\})$ and the equivalence $\frac{i-1}{N}\le F(y)\Leftrightarrow F^{-1}\left(\frac{i-1}{N}\right)\le y$ , we deduce that: \begin{align*} 
\int_{\frac{i-1}{N}}^{F(x)}\left(x-F^{-1}(u)\right)^\rho\,du&=\rho\int_{y=-\infty}^{x}(x-y)^{\rho-1}\int_{u=0}^1{\mathbf 1}_{\left\{\frac{i-1}{N}\le u\le F(y)\right\}}\,du\,dy=\rho\int_{F^{-1}\left(\frac{i-1}{N}\right)}^{x}(x-y)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy. \end{align*} In a similar way, we check that: $$\int^{\frac{i}{N}}_{F(x)}\left(F^{-1}(u)-x\right)^\rho\,du=\rho\int^{F^{-1}\left(\frac{i}{N}\right)}_{x}(y-x)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy,$$ which concludes the proof.\end{proof} \begin{prop} For each $\rho\ge 1$, we have $\displaystyle \int_{\mathbb R} |x|^\rho\mu(dx)<+\infty\Leftrightarrow \lim_{N\to+\infty} e_N(\mu,\rho)=0$. \end{prop} The direct implication can be deduced from the inequality $e_N(\mu,\rho)\le{\cal W}_\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)$ and the almost sure convergence to $0$ of ${\cal W}_\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)$ for $(X_i)_{i\ge 1}$ i.i.d. according to $\mu$ deduced from the strong law of large numbers and stated for instance in Theorem 2.13 \cite{bobkovledoux}. We give an alternative simple argument based on \eqref{enf}. \begin{proof} According to the introduction, the finiteness of $e_N(\mu,\rho)$ for some $N\ge 1$ implies that $\displaystyle \int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$. So it is enough to check the zero limit property under the finite moment condition. 
When respectively $F^{-1}\left(\frac{i}{N}\right)\le 0$, $F^{-1}\left(\frac{i-1}{N}+\right)<0<F^{-1}\left(\frac{i}{N}\right)$ or $F^{-1}\left(\frac{i-1}{N}+\right)\ge 0$ , then, by Lemma \ref{lemenf}, the term with index $i$ in \eqref{enf} is respectively bounded from above by $$\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}\left(F^{-1}\left(\frac{i}{N}\right)-y\right)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy \le \int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}(-y)^{\rho-1}\left(\frac{1}{N}\wedge F(y)\right)\,dy,$$ $$\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{0}(-y)^{\rho-1}\left(\frac{1}{N}\wedge F(y)\right)\,dy+\int^{F^{-1}\left(\frac{i}{N}\right)}_{0}y^{\rho-1}\left(\frac{1}{N}\wedge(1-F(y))\right)\,dy,$$ $$\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}\left(y-F^{-1}\left(\frac{i-1}{N}+\right)\right)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy \le \int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}y^{\rho-1}\left(\frac{1}{N}\wedge(1-F(y))\right)\,dy.$$ After summation, we deduce that: $$e_N^\rho(\mu,\rho)\le \rho\int_{-\infty}^0(-y)^{\rho-1}\left(\frac{1}{N}\wedge F(y)\right)\,dy+\rho\int_0^{+\infty}y^{\rho-1}\left(\frac{1}{N}\wedge(1-F(y))\right)\,dy.$$ Since, by Fubini's theorem, $\displaystyle \rho\int_{-\infty}^0(-y)^{\rho-1}F(y)\,dy+\rho\int_0^{+\infty}y^{\rho-1}(1-F(y))\,dy=\int_{\mathbb R} |x|^\rho\mu(dx)<+\infty$, Lebesgue's theorem ensures that the right-hand side and therefore $e_N(\mu,\rho)$ go to $0$ as $N\to+\infty$. \end{proof} \section{The non compactly supported case}\label{parnoncomp} According to Theorem 5.21 (ii) \cite{xuberger}, when the support of $\mu$ is bounded, $\displaystyle \sup_{N\ge 1}N^{1/\rho}e_N(\mu,\rho)<+\infty$ with $\displaystyle \lim_{N\to+\infty}N^{1/\rho}e_N(\mu,\rho)=0$ if and only if the quantile function $F^{-1}$ is continuous according to Remark 5.22 (ii) \cite{xuberger} and Proposition 2.1 \cite{BJ}. 
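The dichotomy just quoted can be observed numerically. As a hypothetical example (ours), take $\mu$ uniform on $[0,1]\cup[2,3]$, a bounded-support law whose quantile function jumps at $u=\frac12$; for $\rho=2$ the optimal point in each cell is the conditional mean (cf.\ \eqref{enrho}), and $N^{\frac12}e_N(\mu,2)$ stays bounded but bounded away from $0$ along odd $N$, for which the jump falls inside a cell.

```python
import math

def quantile(u):
    # mu uniform on [0,1] U [2,3] (hypothetical example): F^{-1} jumps at u = 1/2
    return 2.0 * u if u <= 0.5 else 2.0 * u + 1.0

def e_N_rho2(N, m=2000):
    """e_N(mu,2): for rho = 2 the optimal x_i^N is the conditional mean, so each
    cell contributes (1/N) * Var(F^{-1}(U_cell)); midpoint discretization."""
    total = 0.0
    for i in range(N):
        q = [quantile((i + (k + 0.5) / m) / N) for k in range(m)]
        mean = sum(q) / m
        total += sum((z - mean) ** 2 for z in q) / (m * N)
    return math.sqrt(total)

vals = [math.sqrt(N) * e_N_rho2(N) for N in (5, 15, 45)]  # odd N: jump inside a cell
# stays of order 1/2: the cell straddling u = 1/2 alone contributes about (1/4)/N to e_N^2
```

Along even $N$ the jump falls on a cell boundary and $N^{\frac12}e_N$ is much smaller, consistent with $\limsup_{N\to+\infty}N^{\frac1\rho}e_N(\mu,\rho)>0$ but no limit.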
The case $\beta>1$ in the next example illustrates the possibility that, when $\rho>1$, $\displaystyle \lim_{N\to+\infty}N^{\frac{1}{\rho}}e_N(\mu,\rho)=0$ for some non-compactly supported probability measures $\mu$. Of course, $F^{-1}$ is then continuous on $(0,1)$, since, by Remark 5.22 (ii) \cite{xuberger}, $\displaystyle \limsup_{N\to+\infty} N^{1/\rho}e_N(\mu,\rho)>0$ otherwise. \begin{exple}\label{exempleexp} For $\mu_\beta(dx)=f(x)\,dx$ with $f(x)=\mathbf{1}_{\{x>0\}}\beta x^{\beta -1}\exp\left(-x^{\beta}\right)$ for $\beta>0$ (the exponential distribution case $\beta=1$ was addressed in Example 5.17 and Remark 5.22 (i) \cite{xuberger}), we have $F(x) = \mathbf{1}_{\{x>0\}}\left(1-\exp(-x^{\beta})\right)$, $F^{-1}(u) = \left(-\ln(1-u)\right)^{\frac{1}{\beta}}$ and $f\left(F^{-1}(u)\right)=\beta(1-u)(-\ln(1-u))^{1-\frac{1}{\beta}}$. The density $f$ is decreasing on $\left[x_\beta,+\infty\right)$ where $x_\beta=\left(\frac{(\beta-1)\vee 0}{\beta}\right)^{\frac{1}{\beta}}$. Using \eqref{minotermbordlimB}, the equality $F^{-1}(w)-F^{-1}(u)=\int_u^w\frac{dv}{f\left(F^{-1}(v)\right)}$ valid for $u,w\in(0,1)$ and the monotonicity of the density, we obtain that for $N$ large enough so that $\lceil NF(x_\beta)\rceil\le N-1$, \begin{align} e_N^\rho(\mu_\beta,\rho)&\ge \frac{1}{4^{\rho}N}\sum_{i=\lceil NF(x_\beta)\rceil +1}^N\left(\int_{\frac{4i-3}{4N}}^{\frac{4i-1}{4N}}\frac{du}{f(F^{-1}(u))}\right)^\rho\ge \frac{1}{8^{\rho}N^{\rho+1}}\sum_{i=\lceil NF(x_\beta)\rceil +1}^N\frac{1}{f^\rho\left(F^{-1}\left(\frac{4i-3}{4N}\right)\right)}\notag\\ &\ge \frac{1}{(8N)^{\rho}}\sum_{i=\lceil NF(x_\beta)\rceil +2}^N\int_{\frac{i-2}{N}}^{\frac{i-1}{N}}\frac{du}{f^\rho\left(F^{-1}(u)\right)}=\frac{1}{(8N)^{\rho}}\int_{\frac{\lceil NF(x_\beta)\rceil}{N}}^{\frac{N-1}{N}}\frac{du}{f^\rho\left(F^{-1}(u)\right)}.\label{minoenrd} \end{align} Using H\"older's inequality for the second inequality, then Fubini's theorem for the third, we obtain that \begin{align*} 
e_N^\rho(\mu_\beta,\rho)&-\int_{\frac{N-1}{N}}^{1}\left|x_N^N-F^{-1}(u)\right|^\rho\,du \le \sum_{i=1}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|\int_{\frac{2i-1}{2N}}^u\frac{dv}{f(F^{-1}(v))}\right|^\rho\,du \\&\le \sum_{i=1}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|u-\frac{2i-1}{2N}\right|^{\rho-1}\left|\int_{\frac{2i-1}{2N}}^u \frac{dv}{f^{\rho}\left(F^{-1}(v)\right)}\right|\,du\le\frac{1}{(2N)^\rho\rho}\int_0^{\frac{N-1}{N}}\frac{dv}{f^\rho\left(F^{-1}(v)\right)}. \end{align*} We have $F(x_\beta)<1$ and, when $\beta>1$, $F(x_\beta)>0$. By integration by parts, for $\rho>1$, \begin{align*} (\rho-1)&\int_{F(x_\beta)}^{\frac{N-1}{N}}\frac{\beta^\rho\,du}{f^\rho\left(F^{-1}(u)\right)} = \int_{F(x_\beta)}^{\frac{N-1}{N}}(\rho-1)(1-u)^{-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho}\,du\\ &=\left[(1-u)^{1-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho}\right]^{\frac{N-1}{N}}_{F(x_\beta)}+{\left(\frac{\rho}{\beta}-\rho\right)}\int_{F(x_\beta)}^{\frac{N-1}{N}}(1-u)^{-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho-1}\,du\\ &=N^{\rho-1}(\ln N)^{\frac{\rho}{\beta}-\rho}+o\left(\int_{F(x_\beta)}^{\frac{N-1}{N}}(1-u)^{-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho}\,du\right)\sim N^{\rho-1}(\ln N)^{\frac{\rho}{\beta}-\rho}, \end{align*} as $N\to+\infty$. We obtain the same equivalent when replacing the lower integration limit $F(x_\beta)$ in the left-hand side by $\frac{\lceil NF(x_\beta)\rceil}{N}$ or $0$ since $\displaystyle \lim_{N\to+\infty}\int^{\frac{\lceil NF(x_\beta)\rceil}{N}}_{F(x_\beta)}\frac{du}{f^\rho(F^{-1}(u))}=0$ and $\displaystyle \int_0^{F(x_\beta)}\frac{du}{f^\rho(F^{-1}(u))}<+\infty$. On the other hand, \begin{align*} \int_{\frac{N-1}{N}}^1\left|x_N^N-F^{-1}(u)\right|^\rho\,du\le \int_{\frac{N-1}{N}}^1\left(\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\right)^\rho\,du. 
\end{align*} When $\beta<1$, for $u\in\left[\frac{N-1}{N},1\right]$, $\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}} \le {\frac{1}{\beta}}\left(-\ln(1-u)\right)^{\frac{1}{\beta}-1}\left(-\ln(1-u)-\ln N\right)$ so that \begin{align*} \int_{\frac{N-1}{N}}^1\left(\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\right)^\rho\,du&\le \frac{1}{\beta^\rho}\int_{\frac{N-1}{N}}^1\left(-\ln(1-u)\right)^{\frac{\rho}{\beta}-\rho}\left(-\ln(N(1-u)) \right)^\rho\,du\\ &=\frac{1}{\beta^\rho N}\int_0^1\left(\ln N-\ln v\right)^{\frac{\rho}{\beta}-\rho}(-\ln(v))^\rho\,dv\\ &\le \frac{2^{(\frac{\rho}{\beta}-\rho-1)\vee 0}}{\beta^\rho N}\left(\left(\ln N\right)^{\frac{\rho}{\beta}-\rho}\int_0^1(-\ln(v))^\rho\,dv+\int_0^1(-\ln(v))^{\frac{\rho}{\beta}}\,dv\right). \end{align*} When $\beta\ge 1$, for $N\ge 2$ and $u\in\left[\frac{N-1}{N},1\right]$, $\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\le {\frac{1}{\beta}}\left(\ln N\right)^{\frac{1}{\beta}-1}\left(-\ln(1-u)-\ln N\right)$ so that \begin{align} \int_{\frac{N-1}{N}}^1\left(\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\right)^\rho\,du\le \frac{\left(\ln N\right)^{\frac{\rho}{\beta}-\rho}}{\beta^\rho N}\int_0^1(-\ln(v))^{\frac{\rho}{\beta}}\,dv.\label{amjotailexp} \end{align} We conclude that for $\rho>1$ and $\beta>0$, $e_N(\mu_\beta,\rho)\asymp N^{-\frac{1}{\rho}}(\ln N)^{\frac{1}{\beta}-1}\asymp {\cal W}_\rho(\frac{1}{N}(\sum_{i=1}^{N-1}\delta_{F^{-1}\left(\frac{2i-1}{2N}\right)}+\delta_{F^{-1}\left(\frac{N-1}{N}\right)}),\mu_\beta)$. In view of Theorem 5.20 \cite{xuberger}, this rate of convergence does not extend continuously to $e_N(\mu_\beta,1)$, at least for $\beta>1$. Indeed, by Remark 2.2 \cite{jrdcds}, $e_N(\mu_\beta,1)\asymp N^{-1}(\ln N)^{\frac{1}{\beta}}$, which in view of \eqref{amjotailexp}, implies that $\sum\limits_{i=1}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|x_i^N-F^{-1}(u)\right|^\rho\,du\asymp N^{-1}(\ln N)^{\frac{1}{\beta}}$. 
In the Gaussian tail case $\beta=2$, $e_N(\mu_2,\rho)\asymp N^{-\frac{1}{\rho}}(\ln N)^{-\frac{1}{2}+{\mathbf 1}_{\{\rho=1\}}}$ for $\rho\ge 1$, as for the true Gaussian distribution, according to Example 5.18 \cite{xuberger}. This matches the rate obtained when $\rho>2$ in Corollary 6.14 \cite{bobkovledoux} for ${\mathbb E}^{1/\rho}\left[{\cal W}_\rho^\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)\right]$ where $(X_i)_{i\ge 1}$ are i.i.d. with respect to some Gaussian distribution $\mu$ with positive variance. When $\rho=2$, still according to this corollary, the rate for the random empirical measure is $N^{-1/2}(\ln\ln N)^{1/2}$ (of course worse than the standard Monte Carlo rate $N^{-1/2}$).\end{exple} According to the next result, the order of convergence of $e_N(\mu,\rho)$ cannot exceed $\frac{1}{\rho}$ when the support of $\mu$ is not bounded. \begin{prop} Let $\rho>1$. Then $\exists \, \alpha>\frac 1\rho,\;\sup_{N\ge 1}N^\alpha e_N(\mu,\rho)<+\infty\implies F^{-1}(1)-F^{-1}(0+)<+\infty$. \label{propals1rcomp} \end{prop} \begin{proof} Let $\rho>1$ and $\alpha>\frac 1\rho$ be such that $\sup_{N\ge 1}N^{\alpha} e_N(\mu,\rho)<+\infty$ so that, by \eqref{enrho}, $$\sup_{N\ge 1}N^{\alpha\rho}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty.$$ By \eqref{minotermbordlimB} for $i=1$ and $N\ge1$, we have: \begin{align*} \displaystyle \int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du \ge \frac{1}{4^\rho N}\left(F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)\right)^{\rho}. \end{align*} Therefore $C:=\sup\limits_{N\ge 1}(2N)^{\alpha-\frac{1}{\rho}}\left(F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)\right)<+\infty$. 
For $k\in {\mathbb N}^{*}$, we deduce that $F^{-1}\left(2^{-(k+1)}\right)-F^{-1}\left(2^{-k}\right) \ge - C2^{\frac{1-\alpha \rho}{\rho}k}$, and after summation that: \begin{equation} \forall k\in {\mathbb N}^{*},F^{-1}\left(2^{-k}\right) \ge F^{-1}\left(1/2\right) - \frac{C}{2^{\alpha -\frac{1}{\rho}} -1 }\left(1- 2^{\frac{1-\alpha \rho}{\rho}(k-1)}\right).\label{minoF-1u0} \end{equation} When $k\to+\infty$, the right-hand side goes to $\left(F^{-1}\left(\frac 1 2\right) - \frac{C}{2^{\alpha -\frac{1}{\rho}}-1}\right)>-\infty$ so that $F^{-1}(0+)>-\infty$. In a symmetric way, we check that $F^{-1}(1)<+\infty$ so that $\mu$ is compactly supported. \end{proof} The next theorem gives a necessary and sufficient condition for $e_N(\mu,\rho)$ to go to $0$ with order $\alpha\in\left(0,\frac 1\rho\right)$. \begin{thm}\label{alphaRater} Let $\rho\ge 1$ and $\alpha\in \left(0,\frac{1}{\rho}\right)$. We have \begin{align*} {\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty \Leftrightarrow \displaystyle{\sup_{N \ge 1}} N^{\alpha} \, e_N(\mu,\rho)<+\infty\Leftrightarrow \displaystyle{\sup_{N \ge 2}}\sup_{x_{2:N-1}}N^{\alpha}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu)<+\infty\end{align*} where $\mu_N(x_{2:N-1})=\frac{1}{N}\left(\delta_{F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})}+\sum_{i=2}^{N-1}\delta_{x_i}+\delta_{F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{\rho}-\alpha}}\right)$ and $\sup_{x_{2:N-1}}$ means the supremum over the choice of $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$. Moreover, $$\lim_{x\to +\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{N\to+\infty}N^{\alpha}e_N(\mu,\rho)=0.$$ \end{thm} Let $\hat\beta=\sup\{\beta\ge 0:\int_{\mathbb R}|x|^\beta\mu(dx)<\infty\}$. 
For $\beta \in (0,\hat\beta)$, $\int_{\mathbb R}|x|^\beta\mu(dx)<\infty$, while when $\hat\beta<\infty$, for $\beta>\hat\beta$, $\int_{\mathbb R}|x|^{\frac{\hat\beta+\beta}{2}}\mu(dx)=+\infty$, so that by Lemma \ref{lemququcdf}, ${\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)=+\infty$. Let $\rho\ge 1$. If $\hat\beta>\rho$, we deduce from Theorem 5.21 (i) \cite{xuberger} that for each $\alpha\in (0,\frac 1\rho-\frac 1 {\hat\beta})$, $\lim_{N\to\infty}N^\alpha e_N(\mu,\rho)=0$ and, when $\hat\beta<+\infty$, Theorem \ref{alphaRater} ensures that for each $\alpha>\frac 1\rho-\frac 1 {\hat\beta}$, $\sup_{N\ge 1} N^\alpha e_N(\mu,\rho)=+\infty$ since $\frac\rho{1-\alpha\rho}>\hat\beta$. In this sense, when $\rho<\hat\beta<+\infty$, the order of convergence of $e_N(\mu,\rho)$ to $0$ is $\frac 1\rho-\frac 1 {\hat\beta}$. Moreover, the boundedness and the vanishing limit at infinity of the sequence $(N^{\frac 1\rho-\frac 1 {\hat\beta}} e_N(\mu,\rho))_{N\ge 1}$ are respectively equivalent to the same properties for the function ${\mathbb R}_+\ni x\mapsto x^{\hat\beta}\Big(F(-x)+1-F(x)\Big)$. Note that $\limsup_{x\to+\infty} x^{\hat\beta}\Big(F(-x)+1-F(x)\Big)$ can be $0$, positive, or $+\infty$. \begin{remark}\begin{itemize} \item In the proof we check that if $C={\sup\limits_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)$, then \begin{align*} \sup_{N\ge 1} N^{\alpha\rho}e^\rho_N(\mu,\rho)\le \sup_{N \ge 2}\sup_{x_{2:N-1}}N^{\alpha\rho}{\cal W}^\rho_\rho(\mu_N(x_{2:N-1}),\mu)\le 2^\rho C^{1-\alpha\rho}+\frac{1-\alpha\rho}{\alpha\rho}C+1+2^{\rho-1}+2^{\rho+\alpha\rho-2}\left|F^{-1}\left(1/2\right)\right|^\rho. \end{align*} \item According to Theorem 7.16 \cite{bobkovledoux}, for $(X_i)_{i\ge 1}$ i.i.d.
according to $\mu$, $$\sup_{N\ge 1}N^{\frac{1}{2\rho}}{\mathbb E}^{1/\rho}\left[{\cal W}_\rho^\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{X_i},\mu\right)\right]\le \left(\rho 2^{\rho-1}\int_{\mathbb R} |x|^{\rho-1}\sqrt{F(x)(1-F(x))}\,dx\right)^{1/\rho}$$ with $\exists \;\varepsilon>0,\;\int_{\mathbb R}|x|^{2\rho+\varepsilon}\mu(dx)<+\infty\Rightarrow \int_{\mathbb R} |x|^{\rho-1}\sqrt{F(x)(1-F(x))}\,dx<+\infty\Rightarrow \int_{\mathbb R}|x|^{2\rho}\mu(dx)<+\infty$ by the discussion just after the theorem. The condition ${\sup_{x\ge 0}}\;x^{2\rho}\Big(F(-x)+1-F(x)\Big)<+\infty$, equivalent to $\sup_{N \ge 1} N^{\frac{1}{2\rho}} \, e_N(\mu,\rho)<+\infty$, is slightly weaker, according to Lemma \ref{lemququcdf} just below. Moreover, we address similarly any order of convergence $\alpha\in \left(0,\frac{1}{\rho}\right)$ for $e_N(\mu,\rho)$, while the order $\frac{1}{2\rho}$ seems to play a special role for ${\mathbb E}^{1/\rho}\left[{\cal W}_\rho^\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)\right]$ in the random case. When $\rho=1$, the order of convergence $\alpha$ for $\alpha\in(0,1/2)$ is addressed in the random case in Theorem 2.2 \cite{barrGinMat} where the finiteness of ${\sup_{x\ge 0}}\,x^{\frac{1}{1-\alpha}}\Big(F(-x)+1-F(x)\Big)$ is stated to be equivalent to the stochastic boundedness of the sequence $\left(N^\alpha{\cal W}_1\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)\right)_{N\ge 1}$. When $\alpha=1/2$, the stochastic boundedness property is, according to Theorem 2.1 (b) \cite{barrGinMat}, equivalent to $\int_{\mathbb R}\sqrt{F(x)(1-F(x))}\,dx<+\infty$. \end{itemize} \end{remark} The proof of Theorem \ref{alphaRater} relies on the next two lemmas.
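Before stating them, the tail/quantile correspondence in Lemma \ref{lemququcdf} below can be checked by hand on the Pareto distribution with index $\beta$ (our own illustration, with $\beta=3$ and ad hoc evaluation grids): there, $x^{\beta}\big(F(-x)+1-F(x)\big)=1$ for $x\ge 1$, while $u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right)=1-\left(\frac{u}{1-u}\right)^{\frac{1}{\beta}}<1$, in agreement with the bound \eqref{majodifquantb}:

```python
# Pareto(beta) distribution on [1, +inf): F(x) = 1 - x^{-beta} for x >= 1, else 0.
beta = 3.0

def F(x):
    return 1.0 - x ** (-beta) if x >= 1.0 else 0.0

def Finv(u):
    return (1.0 - u) ** (-1.0 / beta)

# sup_{x >= 0} x^beta (F(-x) + 1 - F(x)); here F(-x) = 0 for x >= 0 and the
# supremum, equal to 1, is attained on [1, +inf), so a grid there suffices.
tail_sup = max(x ** beta * (F(-x) + 1.0 - F(x))
               for x in (1.0 + 0.1 * k for k in range(500)))

# sup_{u in (0,1/2]} u^{1/beta} (F^{-1}(1-u) - F^{-1}(u)), approached as u -> 0,
# evaluated along the geometric grid u = 2^{-k-1}.
quant_sup = max(u ** (1.0 / beta) * (Finv(1.0 - u) - Finv(u))
                for u in (2.0 ** (-k - 1) for k in range(40)))
```

Both computed suprema are (close to) $1$, and the quantile supremum stays below the right-hand side $\left(\sup_{x\ge 0}x^{\beta}F(-x)\right)^{1/\beta}+\left(\sup_{x\ge 0}x^{\beta}(1-F(x))\right)^{1/\beta}=1$ of \eqref{majodifquantb}.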
\begin{lem}\label{lemququcdf} For $\beta>0$, we have \begin{align*} \int_{{\mathbb R}}|y|^{\beta}\mu(dy)<+\infty &\implies \lim_{x\to+\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)=0 \\&\implies {\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty \implies \forall \varepsilon\in \left(0,\beta\right],\;\int_{{\mathbb R}}|y|^{\beta-\varepsilon}\mu(dy)<+\infty \end{align*} and ${\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty \Leftrightarrow \sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty$ with \begin{equation} \sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right) \le \left(\sup_{x\ge 0}\,x^{\beta}F(-x)\right)^{\frac{1}{\beta}}+\left(\sup_{x\ge 0}\,x^{\beta}(1-F(x))\right)^{\frac{1}{\beta}}.\label{majodifquantb} \end{equation} Last, $\lim_{x\to +\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{u\to 0+}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right)=0$. \end{lem} \begin{proof} Let $\beta>0$. For $x>0$, using the monotonicity of $F$ for the first inequality then that for $y\in[\frac x 2,x]$, $y^{\beta-1}\ge \left(\frac{x}{2}\right)^{\beta-1}\wedge x^{\beta-1}=\frac{x^{\beta-1}}{2^{(\beta-1)\vee 0}}$, we obtain that \begin{align*} F(-x)+1-F(x) \le \frac{2}{x}\int_{x/2}^{x}\Big(F(-y)+1-F(y)\Big)\,dy &\le \frac{2^{\beta\vee 1}}{x^\beta}\int_{x/2}^{+\infty}y^{\beta-1}\Big(F(-y)+1-F(y)\Big)\,dy.\end{align*} Since $\int_0^{+\infty}y^{\beta-1}\Big(F(-y)+1-F(y)\Big)\,dy=\frac 1\beta\int_{{\mathbb R}}|y|^{\beta}\mu(dy)$, the finiteness of $\int_{{\mathbb R}}|y|^{\beta}\mu(dy)$ implies by Lebesgue's theorem that $\lim_{x\to\infty} x^\beta\left(F(-x)+1-F(x)\right)=0$. Since $x\mapsto x^\beta\left(F(-x)+1-F(x)\right)$ is right-continuous with left-hand limits on $[0,+\infty)$, $${\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty \Leftrightarrow \limsup_{x\to\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty,$$ with the latter property clearly implied by $\lim_{x\to\infty} x^\beta\left(F(-x)+1-F(x)\right)=0$. 
For $\varepsilon\in (0,\beta)$, using that for $y\ge 0$, $F(-y)+1-F(y)=\mu((-\infty,-y]\cup (y,+\infty))\le 1$, we obtain \begin{align*} \int_{\mathbb R}|x|^{\beta-\varepsilon}\mu(dx)&=(\beta-\varepsilon)\int_0^{+\infty}y^{\beta-\varepsilon-1}(F(-y)+1-F(y))\,dy\\ &\le (\beta-\varepsilon)\int_0^1y^{\beta-\varepsilon-1}dy+(\beta-\varepsilon)\sup\limits_{x \ge 0}x^{\beta}\Big(F(-x)+1-F(x)\Big)\int_1^{+\infty}y^{-\varepsilon-1}\,dy\\ &=1+\frac{\beta-\varepsilon}{\varepsilon}\sup\limits_{x \ge 0}x^{\beta}\Big(F(-x)+1-F(x)\Big). \end{align*} Therefore $\displaystyle \sup\limits_{x \ge 0}x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty\implies \forall \varepsilon\in (0,\beta),\;\int_{\mathbb R}|x|^{\beta-\varepsilon}\mu(dx)<+\infty$.\\ Let us next check that \begin{equation} \sup_{x\ge 0}\,x^{\beta}\Big(F(-x)+(1-F(x))\Big)<+\infty\Leftrightarrow \sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}(F^{-1}(1-u)-F^{-1}(u))<+\infty\label{equivtailquant}. \end{equation} For the necessary condition, we set $u\in(0,1/2]$. Either $F^{-1}(u)\ge 0$ or, since for all $v\in (0,1)$, $F(F^{-1}(v))\ge v$, we have $\left(-F^{-1}(u)\right)^\beta u\le \sup_{x\ge -F^{-1}(u)}\,x^{\beta}F(-x)$ and therefore $F^{-1}(u)\ge -\left(\sup_{x\ge 0}\,x^{\beta}F(-x)\right)^{\frac{1}{\beta}}u^{-\frac{1}{\beta}}$. Either $F^{-1}(1-u)\le 0$ or, since for all $v\in (0,1)$, $F(F^{-1}(v)-)\le v$, we have $(F^{-1}(1-u))^\beta u\le \sup_{x\ge F^{-1}(1-u)}\,x^{\beta}(1-F(x-))$ and therefore $F^{-1}(1-u)\le \left(\sup_{x\ge 0}\,x^{\beta}(1-F(x))\right)^{\frac{1}{\beta}}u^{-\frac{1}{\beta}}$. Hence \eqref{majodifquantb} holds. For the sufficient condition, we remark that the finiteness of $\sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}(F^{-1}(1-u)-F^{-1}(u))$ and the inequality $F^{-1}(1-u)-F^{-1}(u)\ge \left(F^{-1}(1/2)-F^{-1}(u)\right)\vee\left(F^{-1}(1-u)-F^{-1}(1/2)\right) $ valid for $u\in(0,1/2]$ imply that $\inf_{u \in (0,1/2]}u^{\frac{1}{\beta}}F^{-1}(u) >-\infty$ and $\sup_{u \in (0,1/2]}u^{\frac{1}{\beta}}F^{-1}(1-u)<+\infty$. 
With the inequality $ x\ge F^{-1}(F(x))$ valid for $x\in{\mathbb R}$ such that $0<F(x)<1$, this implies that $\inf_{x\in{\mathbb R}:F(x)\le 1/2}\left(F(x)\right)^{\frac{1}{\beta}}x> -\infty$ and therefore that $\sup_{x\ge 0}x^{{\beta}}F(-x)<+\infty$. With the inequality $ x\le F^{-1}(F(x)+)$ valid for $x\in{\mathbb R}$ such that $0<F(x)<1$, we obtain, in a symmetric way $\sup_{x\ge 0}x^{{\beta}}(1-F(x))<+\infty$. Let us finally check that $\lim_{x\to +\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{u\to 0+}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right)=0$. For the necessary condition, we remark that either $F^{-1}(1)<+\infty$ and $\lim_{u\to 0+}u^{\frac 1\beta}F^{-1}(1-u)=0$ or $F^{-1}(1-u)$ goes to $+\infty$ as $u\to 0+$. For $u$ small enough so that $F^{-1}(1-u)>0$, we have $(F^{-1}(1-u))^\beta u\le \sup_{x\ge F^{-1}(1-u)}\,x^{\beta}(1-F(x-))$, from which we deduce that $\lim_{u\to 0+}u(F^{-1}(1-u))^\beta =0$. The fact that $\lim_{u\to 0+}u^{\frac 1\beta}F^{-1}(u)=0$ is deduced by a symmetric reasoning. For the sufficient condition, we use that $x(1-F(x))^{\frac{1}{\beta}}\le \sup_{u\le 1-F(x)}u^{\frac{1}{\beta}}F^{-1}((1-u)+)$ and $x\left(F(x)\right)^{\frac{1}{\beta}}\ge \inf_{u\le F(x)}u^{\frac{1}{\beta}}F^{-1}(u)$ for $x\in{\mathbb R}$ such that $0<F(x)<1$. \end{proof} \begin{lem}\label{lemcontxn} Let $\rho\ge 1$ and $\alpha\in \left(0,\frac{1}{\rho}\right)$.There is a finite constant $C$ only depending on $\rho$ and $\alpha$ such that the two extremal points in the optimal sequence $(x_i^N)_{1\le i\le N}$ for $e_N(\mu,\rho)$ satisfy $$\forall N\ge 1,\;x_1^N\ge CN^{\frac{1}{\rho}-\alpha}\inf_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(u)\mbox{ and }x_N^N\le CN^{\frac{1}{\rho}-\alpha}\sup_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(1-u).$$ If $\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty$, then $\sup_{N\ge 1}N^{\alpha-\frac{1}{\rho}}\left(x_N^N\vee \left(-x_1^N\right)\right)<+\infty$. 
\end{lem} \begin{proof} Since the finiteness of $\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)$ implies the finiteness of both $\sup_{u\in (0,1)}u^{\frac{1}{\rho}-\alpha}F^{-1}(1-u)$ and $\inf_{u\in (0,1)}u^{\frac{1}{\rho}-\alpha}F^{-1}(u)$, the second statement is a consequence of the first one, which we now prove. When $\rho=1$ (resp. $\rho=2$), the conclusion easily follows from the explicit form $x_1^N=F^{-1}\left(\frac{1}{2N}\right)$ and $x_N^N=F^{-1}\left(\frac{2N-1}{2N}\right)$ (resp. $x_1^N=N\int_0^{\frac{1}{N}}F^{-1}(u)\,du$ and $x_N^N=N\int^1_{\frac{N-1}{N}}F^{-1}(u)\,du$). In the general case $\rho>1$, we are going to take advantage of the expression $$f(y)=\rho\int_0^{\frac{1}{N}}\left({\mathbf 1}_{\{y\ge F^{-1}(1-u)\}}\left(y-F^{-1}(1-u)\right)^{\rho-1}-{\mathbf 1}_{\{y< F^{-1}(1-u)\}}\left(F^{-1}(1-u)-y\right)^{\rho-1}\right)\,du$$ of the derivative of the function $\displaystyle {\mathbb R}\ni y\mapsto \int_{\frac{N-1}{N}}^{1}\left|y-F^{-1}(u)\right|^\rho\,du$ minimized by $x_N^N$. Since this function is strictly convex, $x_N^N=\inf\{y\in{\mathbb R}: f(y)\ge 0\}$. Let us first suppose that $S_N:=\sup_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(1-u)\in (0,+\infty)$.
Since for fixed $y\in{\mathbb R}$, ${\mathbb R}\ni x\mapsto \left({\mathbf 1}_{\{y\ge x\}}(y-x)^{\rho-1}-{\mathbf 1}_{\{y< x\}}(x-y)^{\rho-1}\right)$ is non-increasing, we deduce that $\forall y\in{\mathbb R}$, $f(y)\ge \rho S_N^{\rho-1}g(\frac{y}{S_N})$ where $$g(z)=\int_0^{\frac{1}{N}}\left({\mathbf 1}_{\left\{z\ge u^{\alpha-\frac{1}{\rho}}\right\}}\left(z-u^{\alpha-\frac{1}{\rho}}\right)^{\rho-1}-{\mathbf 1}_{\left\{z< u^{\alpha-\frac{1}{\rho}}\right\}}\left(u^{\alpha-\frac{1}{\rho}}-z\right)^{\rho-1}\right)\,du.$$ For $z\ge (4N)^{\frac{1}{\rho}-\alpha}$, we have $z^{\frac{\rho}{\alpha\rho-1}}\le\frac{1}{4N}$ and $z-(2N)^{\frac{1}{\rho}-\alpha}\ge \left(1-2^{\alpha-\frac 1\rho}\right)z$ so that \begin{align*} g(z)&=\int_{z^{\frac{\rho}{\alpha\rho-1}}}^{\frac{1}{N}}\left(z-u^{\alpha-\frac{1}{\rho}}\right)^{\rho-1}\,du-\int_0^{z^{\frac{\rho}{\alpha\rho-1}}}\left(u^{\alpha-\frac{1}{\rho}}-z\right)^{\rho-1}\,du \ge \int_{\frac{1}{2N}}^{\frac{1}{N}}\left(z-(2N)^{\frac{1}{\rho}-\alpha}\right)^{\rho-1}\,du-\int_0^{z^{\frac{\rho}{\alpha\rho-1}}}u^{(\rho-1)\frac{\alpha\rho-1}{\rho}}\,du\\ &\ge \left(1-2^{\alpha-\frac 1\rho}\right)^{\rho-1}z^{\rho-1}\int_{\frac{1}{2N}}^{\frac{1}{N}}\,du-\frac{\rho z^{\frac{\rho}{\alpha\rho-1}+\rho-1}}{1+(\rho-1)\alpha\rho} = z^{\rho-1}\left(\frac{\left(1-2^{\alpha-\frac 1\rho}\right)^{\rho-1}}{2N}-\frac{\rho z^{\frac{\rho}{\alpha\rho-1}}}{1+(\rho-1)\alpha\rho} \right). \end{align*} The right-hand side is positive for $z>(\kappa N)^{\frac{1}{\rho}-\alpha}$ with $\kappa:=\frac{2\rho}{\left(1-2^{\alpha-\frac 1\rho}\right)^{\rho-1}\left(1+(\rho-1)\alpha\rho\right)}$. Hence for $z>\left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}$, $g(z)>0$ so that for $y>\left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}S_N$, $f(y)>0$ and therefore $$x_N^N \le \left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}S_N .$$ Clearly, this inequality remains valid when $S_N=+\infty$. 
It holds in full generality since $S_N\ge 0$ and $S_N=0\Leftrightarrow F^{-1}(1)\le 0$, a condition under which $x_N^N\le 0$ since $x_N^N\in\left[F^{-1}\left(\frac{N-1}{N}+\right),F^{-1}\left(1\right)\right]\cap{\mathbb R}$. By a symmetric reasoning, we check that $x_1^N\ge \left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}\inf_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(u)$. \end{proof} \begin{proof}[Proof of Theorem \ref{alphaRater}] Since by Lemma \ref{lemququcdf}, $$\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty\implies {\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$$ and, by \eqref{enrho}, $$e_N^\rho(\mu,\rho)\ge \int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du$$ to prove the equivalence, it is enough to check that \begin{align*} &{\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty \implies \displaystyle{\sup_{N \ge 1}} N^{\alpha\rho} \, e_N^\rho(\mu,\rho)<+\infty\mbox{ and that }\\ &\sup_{N\ge 1}N^{\alpha\rho}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty\\&\phantom{\sup_{N\ge 1}N^{\alpha\rho}\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du} \implies\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty. \end{align*} We are now going to do so and thus prove that the four suprema in the last two implications are simultaneously finite or infinite. Let us first suppose that $C:={\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$ and set $N\ge 2$. Let $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$.
We have $$e^\rho_N(\mu,\rho)\le {\cal W}^\rho_\rho\left(\mu_N(x_{2:N-1}),\mu\right)= L_N+M_N+U_N$$ with $L_N=\int_{0}^{\frac{1}{N}}\left|F^{-1}(u)-F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})\right|^\rho\,du$, $U_N=\int_{\frac{N-1}{N}}^{1}\left|F^{-1}(u)-F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{\rho}-\alpha}\right|^\rho\,du$ and \begin{align} M_N&=\sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|F^{-1}(u)-x_i\right|^\rho\,du \le \sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left(F^{-1}\left(\frac{i}{N}\right)-F^{-1}\left(\frac{i-1}{N}\right)\right)^\rho\,du\notag\\&\le \frac{1}{N}\sum_{i=2}^{N-1}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho-1}\left(F^{-1}\left(\frac{i}{N}\right)-F^{-1}\left(\frac{i-1}{N}\right)\right)\notag\\&=\frac{1}{N}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho}\label{majomn}\\&\le 2^\rho C^{1-\alpha\rho}N^{-\alpha\rho},\notag\end{align} where we used \eqref{majodifquantb} applied with $\beta=\frac{\rho}{1-\alpha \rho}$ for the last inequality. Let $x_+=0\vee x$ denote the positive part of any real number $x$. Applying Lemma \ref{lemenf} with $x=F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)$, we obtain that \begin{align*} L_N &= \rho\int_{-\infty}^{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)}\left(F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)-y\right)^{\rho-1}F(y)\,dy\\&+\rho\int^{F^{-1}\left(\frac{1}{N}\right)}_{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)}\left(y-F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)\right)^{\rho-1}\left(\frac{1}{N}-F(y)\right)\,dy \\&\le \rho\int^{+\infty}_{N^{\frac{1}{\rho}-\alpha}}y^{\rho-1}F(-y)\,dy+\frac{1}{N}\left(N^{\frac{1}{\rho}-\alpha}+F^{-1}\left(\frac{1}{N}\right)\right)_+^{\rho}.
\end{align*} In a symmetric way, we check that $\displaystyle U_N\le \rho\int^{+\infty}_{N^{\frac{1}{\rho}-\alpha}}y^{\rho-1}(1-F(y))\,dy+\frac{1}{N}\left(N^{\frac{1}{\rho}-\alpha}-F^{-1}\left(\frac{N-1}{N}\right)\right)_+^{\rho}$ so that \begin{align*} L_N+U_N &\le \rho C\int^{+\infty}_{N^{\frac{1}{\rho}-\alpha}}y^{-1-\frac{\alpha\rho^2}{1-\alpha\rho}}\,dy+\frac{1}{N}\left(\left(N^{\frac{1}{\rho}-\alpha}+F^{-1}\left(1/2\right)\right)_+^{\rho}+\left(N^{\frac{1}{\rho}-\alpha}-F^{-1}\left(1/2\right)\right)_+^{\rho}\right)\\ &\le \frac{1-\alpha\rho}{\alpha\rho} CN^{-\alpha\rho}+\left(1+2^{\rho-1}\right)N^{-\alpha\rho}+2^{\rho-1}\left|F^{-1}\left(1/2\right)\right|^\rho N^{-1}. \end{align*} Since $N^{-1}\le 2^{\alpha\rho-1}N^{-\alpha\rho}$, we conclude that with $\sup_{x_{2:N-1}}$ denoting the supremum over $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$, \begin{align*} \sup_{N\ge 2}N^{\alpha\rho}e^\rho_N(\mu,\rho)\le \sup_{N\ge 2}\sup_{x_{2:N-1}}N^{\alpha\rho}{\cal W}_\rho^\rho(\mu_N(x_{2:N-1}),\mu)\le 2^\rho C^{1-\alpha\rho}+\frac{1-\alpha\rho}{\alpha\rho}C+1+2^{\rho-1}+2^{\rho+\alpha\rho-2}\left|F^{-1}\left(1/2\right)\right|^\rho. \end{align*} We may replace $\sup_{N\ge 2}N^{\alpha\rho}e^\rho_N(\mu,\rho)$ by $\sup_{N\ge 1}N^{\alpha\rho}e^\rho_N(\mu,\rho)$ in the left-hand side, since, applying Lemma \ref{lemenf} with $x=0$, then using that for $y\ge 0$, $F(-y)+1-F(y)=\mu((-\infty,-y]\cup (y,+\infty))\le 1$, we obtain that \begin{align*} e^\rho_1(\mu,\rho) &\le \rho\int_0^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy \le \rho\int_0^1y^{\rho-1}\,dy+\rho C\int^{+\infty}_{1}y^{-1-\frac{\alpha\rho^2}{1-\alpha\rho}}\,dy=1+\frac{1-\alpha\rho}{\alpha\rho}C. \end{align*} Let us next suppose that $\displaystyle \sup_{N\ge 1}N^{\alpha\rho}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty$.
As in the proof of Proposition \ref{propals1rcomp}, we deduce \eqref{minoF-1u0}. With the monotonicity of $F^{-1}$, this inequality implies that $$\exists C<+\infty,\;\forall u \in (0,1/2], \quad F^{-1}(u) \ge F^{-1}(1/2) - \frac{C}{1 - 2^{\alpha-\frac{1}{\rho}}}\left(u^{\alpha-\frac{1}{\rho}}-1\right), $$ and therefore that $\inf_{u \in (0,1/2]}\left(u^{\frac{1}{\rho}-\alpha}F^{-1}(u)\right) >-\infty$. With a symmetric reasoning, we conclude that $$\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\Big(F^{-1}(1-u)-F^{-1}(u)\Big)<+\infty.$$ Let us now assume that $\limsup_{x\to+\infty}x^{\frac{\rho}{1-\alpha\rho}}\Big(F(-x)+1-F(x)\Big)\in(0,+\infty)$, which, in particular, implies that $\sup_{x\ge 0}x^{\frac{\rho}{1-\alpha\rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$, and check that $\limsup_{N\to\infty}N^\alpha e_N(\mu,\rho)>0$. For $x>0$, we have, on the one hand \begin{align*} x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy &\ge x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{2x}x^{\rho-1}\Big(F(-2x)+1-F(2x)\Big)\,dy\\ &=x^{\frac{\rho}{1-\alpha\rho}}\Big(F(-2x)+1-F(2x)\Big).
\end{align*} On the other hand, still for $x>0$, \begin{align} x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy &\le x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\sup_{y\ge x}y^{\frac{\rho}{1-\alpha\rho}}\Big(F(-y)+1-F(y)\Big)\int_x^{+\infty}y^{-\frac{\alpha \rho^2}{1-\alpha\rho}-1}\,dy\notag\\ &=\frac{1-\alpha\rho}{\alpha \rho^2}\sup_{y\ge x}y^{\frac{\rho}{1-\alpha\rho}}\Big(F(-y)+1-F(y)\Big)\label{jamqu} \end{align} Therefore $\displaystyle \limsup_{x\to+\infty}x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy\in (0,+\infty)$ and, by monotonicity of the integral, \begin{equation} \limsup_{N\to+\infty} y_N^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_{y_N}^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy\in (0,+\infty)\label{limsupsousuite} \end{equation} along any sequence $(y_N)_{N\in{\mathbb N}}$ of positive numbers increasing to $+\infty$ and such that $\limsup_{N\to+\infty}\frac{y_{N+1}}{y_N}<+\infty$. By Lemmas \ref{lemququcdf} and \ref{lemcontxn}, we have $\kappa:=\sup_{N\ge 1}N^{\alpha-\frac 1\rho}\left(x_N^N\vee\left(-x_1^N\right)\right)<+\infty$ (notice that since $x_1^N\le x_N^N$, $\kappa\ge 0$). With \eqref{enf}, we deduce that: \begin{align*} \frac{e_N^\rho(\mu,\rho)}{\rho} &\ge \int^{x_1^N}_{-\infty}\left(x_1^N-y\right)^{\rho-1}F(y)\,dy+\int_{x_N^N}^{+\infty}\left(y-x_N^N\right)^{\rho-1}(1-F(y))\,dy \\&\ge \int^{-\kappa N^{\frac 1\rho-\alpha}}_{-\infty}\left(-\kappa N^{\frac 1\rho-\alpha}-y\right)^{\rho-1}F(y)\,dy+\int_{\kappa N^{\frac 1\rho-\alpha}}^{+\infty}\left(y-\kappa N^{\frac 1\rho-\alpha}\right)^{\rho-1}(1-F(y))\,dy\\ &\ge 2^{1-\rho}\int_{2\kappa N^{\frac 1\rho-\alpha}}^{+\infty} y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy. \end{align*} Applying \eqref{limsupsousuite} with $y_N=2\kappa N^{\frac 1\rho-\alpha}$, we conclude that $\limsup\limits_{N\to+\infty}\,N^{\alpha\rho}e_N^\rho(\mu,\rho)>0$. 
If $x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)$ does not go to $0$ as $x\to+\infty$ then either ${\sup_{x\ge 0}}\,x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=+\infty={\sup_{N \ge 1}} N^{\alpha} \,e_N(\mu,\rho)$ or $\limsup_{x\to+\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)\in(0,+\infty)$ and $\limsup_{N\to+\infty}N^{\alpha}e_N(\mu,\rho)\in(0,+\infty)$ so that, synthesizing the two cases, $N^{\alpha}e_N(\mu,\rho)$ does not go to $0$ as $N\to+\infty$. Therefore, to conclude the proof of the second statement, it is enough to suppose $\lim_{x\to+\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=0$ and deduce $\lim_{N\to +\infty} N^\alpha e_N(\mu,\rho)=0$, which we now do. By Lemma \ref{lemququcdf}, $\lim_{N\to\infty} N^{\alpha-\frac{1}{\rho}}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)=0$. Since, reasoning like in the above derivation of \eqref{majomn}, we have $\sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}|F^{-1}(u)-x_i^N|^\rho du\le \frac{1}{N}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho}$ for $N\ge 3$, we deduce that $\lim_{N\to\infty} N^{\alpha\rho}\sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}|F^{-1}(u)-x_i^N|^\rho du=0$. 
Let $$S_N=\sup_{x\ge N^{\frac{1}{2\rho}-\frac \alpha 2}}\left(x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)\right)^{\frac{1-\alpha\rho}{2\alpha\rho^2}}\mbox{ and }y_N=F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{2\rho}-\frac \alpha 2}\vee (S_NN^{\frac{1}{\rho}-\alpha}).$$ Using Lemma \ref{lemenf} for the first inequality, \eqref{jamqu} for the second one, then the definition of $y_N$ for the third, we obtain that \begin{align*} N^{\alpha\rho}&\int_{\frac{N-1}{N}}^1|F^{-1}(u)-x_N^N|^\rho du\\&\le \rho N^{\alpha\rho}\int_{F^{-1}\left(\frac{N-1}{N}\right)}^{y_N}(y_N-y)^{\rho-1}\left(F(y)-\frac{N-1}N\right)dy+\rho N^{\alpha\rho} \int_{y_N}^{+\infty}(y-y_N)^{\rho-1}(1-F(y))dy\\ &\le N^{\alpha\rho-1}\int_{F^{-1}\left(\frac{N-1}{N}\right)}^{y_N}\rho(y_N-y)^{\rho-1}dy+\frac{1-\alpha\rho}{\alpha\rho^2}N^{\alpha\rho}y_N^{-\frac{\alpha\rho^2}{1-\alpha\rho}}\sup_{y\ge y_N}y^{\frac{\rho}{1-\alpha \rho}}\Big(F(-y)+1-F(y)\Big)\\ &\le N^{\alpha\rho-1}\left(N^{\frac{1}{2\rho}-\frac \alpha 2}\vee (S_NN^{\frac{1}{\rho}-\alpha})-F^{-1}\left(\frac{N-1}{N}\right)\right)_+^\rho\\&+\frac{1-\alpha\rho}{\alpha\rho^2}N^{\alpha\rho}\left(S_NN^{\frac{1}{\rho}-\alpha}\right)^{-\frac{\alpha\rho^2}{1-\alpha\rho}}\sup_{y\ge N^{\frac{1}{2\rho}-\frac \alpha 2}} y^{\frac{\rho}{1-\alpha \rho}}\Big(F(-y)+1-F(y)\Big)\\ &\le \left(N^{\frac \alpha 2-\frac{1}{2\rho}}\vee S_N-N^{\alpha-\frac 1\rho}F^{-1}\left(\frac{N-1}{N}\right)\right)^\rho_++\frac{1-\alpha\rho}{\alpha\rho^2}S_N^{\frac{\alpha\rho^2}{1-\alpha\rho}}.\end{align*} Since $\lim_{N\to\infty}N^{\alpha-\frac 1\rho}F^{-1}\left(\frac{N-1}{N}\right)= 0=\lim_{N\to\infty}N^{\frac \alpha 2-\frac{1}{2\rho}}= \lim_{N\to+\infty} S_N$, we deduce that $\lim_{N\to\infty}N^{\alpha\rho}\int_{\frac{N-1}{N}}^1|F^{-1}(u)-x_N^N|^\rho du=0$. Dealing in a symmetric way with $N^{\alpha\rho}\int^{\frac{1}{N}}_0|F^{-1}(u)-x_1^N|^\rho du$, we conclude that $\lim_{N\to\infty}N^{\alpha\rho}e_N^\rho(\mu,\rho)=0$.
\end{proof} \begin{exple} Let $\mu_\beta(dx)=f(x)\,dx$ with $f(x)=\beta\frac{{\mathbf 1}_{\{x\ge 1\}}}{x^{\beta+1}}$ be the Pareto distribution with parameter $\beta>0$. Then $F(x)={\mathbf 1}_{\{x\ge 1\}}\left(1-x^{-\beta}\right)$ and $F^{-1}(u)=(1-u)^{-\frac{1}{\beta}}$. To ensure that $\int_{\mathbb R}|x|^\rho\mu_\beta(dx)<+\infty$, we suppose that $\beta>\rho$. Since $\frac{\rho}{1-\rho\left(\frac{1}{\rho}-\frac{1}{\beta}\right)}=\beta$, we have $\lim_{x\to+\infty}x^\frac{\rho}{1-\rho\left(\frac{1}{\rho}-\frac{1}{\beta}\right)}(F(-x)+1-F(x))=1$. Replacing $\limsup$ by $\liminf$ in the last step of the proof of Theorem \ref{alphaRater}, we check that $\liminf_{N\to+\infty}N^{\frac{1}{\rho}-\frac{1}{\beta}}e_N(\mu_\beta,\rho)>0$ and deduce with the statement of this theorem that $e_N\left(\mu_\beta,\rho\right)\asymp N^{-\frac{1}{\rho}+\frac{1}{\beta}}\asymp\sup_{x_{2:N-1}}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu_\beta)$. \end{exple} In the case $\alpha=\frac{1}{\rho}$, a limit situation not covered by Theorem \ref{alphaRater}, we have the following result.
\begin{prop}\label{propal1rho} For $\rho\ge 1$, \begin{align*} &\sup_{N\ge 1}N^{1/\rho}e_N(\mu,\rho)<+\infty \Rightarrow \sup_{N\ge 1}N\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty\\ &\Leftrightarrow\sup_{u\in(0,1/2]}\left(F^{-1}(1-u/2)-F^{-1}(1-u)+F^{-1}(u)-F^{-1}(u/2)\right)<+\infty\\ &\Rightarrow\sup_{u\in(0,1/2]}\frac{F^{-1}(1-u)-F^{-1}(u)}{\ln(1/u)}<+\infty\Leftrightarrow\exists \lambda\in(0,+\infty),\;\forall x\ge 0,\;\Big(F(-x)+1-F(x)\Big)\le e^{-\lambda x}/\lambda\\&\Rightarrow \sup_{N\ge 2}\sup_{x_{2:N-1}}\frac{N^{1/\rho}}{1+\ln N}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu)<+\infty\Rightarrow\sup_{N\ge 1}\frac{N^{1/\rho}}{1+\ln N}e_N(\mu,\rho)<+\infty, \end{align*} where $\mu_N(x_{2:N-1})=\frac{1}{N}\left(\delta_{F^{-1}\left(\frac{1}{N}\right)\wedge (-\frac{\ln N}{\lambda})}+\sum_{i=2}^{N-1}\delta_{x_i}+\delta_{F^{-1}\left(\frac{N-1}{N}\right)\vee \frac{\ln N}{\lambda}}\right)$ and $\sup_{x_{2:N-1}}$ means the supremum over the choice of $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$. \end{prop} \begin{remark} The first implication is not an equivalence for $\rho=1$. Indeed, in Example \ref{exempleexp}, for $\beta\ge 1$, $\lim\limits_{N\to+\infty} Ne_N(\mu_\beta,1)= +\infty$ while $\sup\limits_{N\ge 1}N\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|\,du\right)<+\infty$. \end{remark} \begin{proof} The first implication is an immediate consequence of \eqref{enrho}.\\To prove the equivalence, we first suppose that \begin{equation} \sup_{N\ge 1}N^{\frac{1}{\rho}}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)^{\frac{1}{\rho}}<+\infty\label{majotermesqueues} \end{equation} and denote by $C$ the finite supremum in this equation.
By \eqref{minotermbordlimB} for $i=1$, $\forall N\ge 1,\;F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)\le 4C$. For $u\in(0,1/2]$, there exists $N\in{\mathbb N}^*$ such that $u\in\left[\frac{1}{2(N+1)},\frac{1}{2N}\right]$ and, by monotonicity of $F^{-1}$ and since $4N\ge 2(N+1)$, we get \begin{align*} F^{-1}(u)-F^{-1}(u/2) &\le F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4(N+1)}\right)\\ &\le F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)+F^{-1}\left(\frac{1}{2(N+1)}\right)-F^{-1}\left(\frac{1}{4(N+1)}\right)\le 8 C. \end{align*} Dealing in a symmetric way with $ F^{-1}(1-u/2)-F^{-1}(1-u)$, we obtain that $$\sup_{u\in(0,1/2]}\Big(F^{-1}(1-u/2)-F^{-1}(1-u)+F^{-1}(u)-F^{-1}(u/2)\Big) \le 16C. $$ On the other hand, for $N\ge 2$, by Lemma \ref{lemenf} applied with $x=F^{-1}\left(\frac{1}{N}\right)$, \begin{align*} \displaystyle \frac{1}{\rho}\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du &\le \sum_{k\in{\mathbb N}}\int_{F^{-1}\left(\frac{1}{2^{k+1}N}\right)}^{F^{-1}\left(\frac{1}{2^kN}\right)}\left(F^{-1}\left(\frac{1}{N}\right)-y\right)^{\rho-1}F(y)\,dy\\ &\le \sum_{k\in{\mathbb N}}\frac{F^{-1}\left(\frac{1}{2^kN}\right)-F^{-1}\left(\frac{1}{2^{k+1}N}\right)}{2^kN} \left(\sum_{j=0}^k\left(F^{-1}\left(\frac{1}{2^jN}\right)-F^{-1}\left(\frac{1}{2^{j+1}N}\right)\right)\right)^{\rho-1}\\ &\le \frac{1}{N}\left(\sup_{u\in(0,1/2]}\left(F^{-1}(u)-F^{-1}(u/2)\right)\right)^\rho\sum_{k\in{\mathbb N}}\frac{(k+1)^{\rho-1}}{2^k}, \end{align*} where the last sum is finite. Dealing in a symmetric way with $\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du$, we conclude that \eqref{majotermesqueues} is equivalent to the finiteness of $\sup_{u\in(0,1/2]}\Big(F^{-1}(1-u/2)-F^{-1}(1-u)+F^{-1}(u)-F^{-1}(u/2)\Big)$. 
Under \eqref{majotermesqueues} with $C$ denoting the finite supremum, for $k\in{\mathbb N}^*$, $F^{-1}\left(2^{-(k+1)}\right)-F^{-1}\left(2^{-k}\right)\ge-4C$ and, after summation, $$F^{-1}(2^{-k})\ge F^{-1}(1/2)-4 C (k-1).$$ With the monotonicity of $F^{-1}$, we deduce that: $$\forall u\in(0,1/2],\;F^{-1}(u)\ge F^{-1}(1/2)+\frac{4C}{\ln 2}\ln u$$ and therefore that $\sup_{u\in(0,1/2]}\frac{-F^{-1}(u)}{\ln(1/u)}<+\infty$. With the inequality $F^{-1}(F(x))\le x$ valid for $x\in{\mathbb R}$, this implies that $\sup_{\left\{x\in{\mathbb R}:0<F(x)\le 1/2 \right\}}\frac{-x}{\ln(1/F(x))}<+\infty$ and therefore that $\exists \, \lambda\in(0,+\infty),\;\forall x\le 0,\;F(x)\le e^{\lambda x}/\lambda$. Under the latter condition, since $u\le F(F^{-1}(u))$ and $F^{-1}(u)\le 0$ for $u\in (0,F(0)]$, we have $\sup_{u\in(0,F(0)]}\frac{-F^{-1}(u)}{\ln(1/u)}<\infty$ and even $\sup_{u\in(0,1/2]}\frac{-F^{-1}(u)}{\ln(1/u)}<\infty$ since when $F(0)<\frac 1 2$, $\sup_{u\in(F(0),1/2]}\frac{-F^{-1}(u)}{\ln(1/u)}\le 0$. By a symmetric reasoning, we obtain the two equivalent tail properties $\sup_{u\in(0,1/2]}\frac{F^{-1}(1-u)-F^{-1}(u)}{\ln(1/u)}<+\infty$ and $\exists \,\lambda\in(0,+\infty),\;\forall x\ge 0,\;\Big(F(-x)+1-F(x)\Big)\le e^{-\lambda x}/\lambda$. Let us finally suppose these two tail properties and deduce that $\sup_{N\ge 2}\sup_{x_{2:N-1}}\frac{N^{1/\rho}}{1+\ln N}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu)<+\infty$. We use the decomposition ${\cal W}^\rho_\rho(\mu_N(x_{2:N-1}),\mu)= L_N+M_N+U_N$ introduced in the proof of Theorem \ref{alphaRater} but with $F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)$ and $F^{-1}\left(\frac{N-1}{N}\right)\vee \left(\frac{\ln N}{\lambda}\right)$ respectively replacing $F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})$ and $F^{-1}\left(\frac{N-1}{N}\right)\vee (N^{\frac{1}{\rho}-\alpha})$ in $L_N$ and $U_N$. 
By \eqref{majomn}, we get: \begin{align*} \forall N\ge 3,\;M_N\le \frac{1}{N}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho}\le \left(\sup_{u\in(0,1/2]}\frac{F^{-1}(1-u)-F^{-1}(u)}{\ln(1/u)}\right)^\rho\frac{(\ln N)^\rho}{N}. \end{align*} Applying Lemma \ref{lemenf} with $x=F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)$ then the estimation of the cumulative distribution function, we obtain that for $N\ge 2$, \begin{align*} L_N &\le \rho\int_{-\infty}^{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)}\left(F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)-y\right)^{\rho-1}F(y)\,dy\\&+\rho\int^{F^{-1}\left(\frac{1}{N}\right)}_{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)}\left(y-F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)\right)^{\rho-1}\left(\frac{1}{N}-F(y)\right)\,dy\\ &\le \frac{\rho}{\lambda}\int_{-\infty}^{-\frac{\ln N}{\lambda}}(-y)^{\rho-1}e^{\lambda y}\,dy+\frac{1}{N}\left(\frac{\ln N}{\lambda}+F^{-1}\left(\frac{1}{N}\right)\right)_+^{\rho}\\ &\le \frac{\rho}{\lambda}\sum_{k\ge 1}\int_{k\frac{\ln N}{\lambda}}^{(k+1)\frac{\ln N}{\lambda}}\left((k+1)\frac{\ln N}{\lambda}\right)^{\rho-1}e^{-\lambda y}\,dy+\frac{1}{N}\left(\frac{\ln N}{\lambda}+F^{-1}\left(\frac{1}{2}\right)\right)_+^{\rho}\\ &\le \frac{\rho(\ln N)^{\rho}}{\lambda^{\rho+1}N}\sum_{k\ge 1}\frac{(k+1)^{\rho-1}}{2^{k-1}}+\frac{1}{N}\left(\frac{1}{\lambda}\ln N+F^{-1}\left(\frac{1}{2}\right)\right)_+^{\rho}, \end{align*} where we used that $N^k\ge N2^{k-1}$ for the last inequality. Dealing in a symmetric way with $U_N$, we conclude that $\sup_{N\ge 2}\sup_{x_{2:N-1}}\frac{N{\cal W}^\rho_\rho(\mu_N(x_{2:N-1}),\mu)}{1+(\ln N)^\rho}<+\infty$. \end{proof} \pagenumbering{gobble} \end{document}
\begin{document} \title{Optimal three spheres inequality at the boundary for the Kirchhoff-Love plate's equation with Dirichlet conditions\thanks{The first and the second authors are supported by FRA 2016 ``Problemi Inversi, dalla stabilit\`a alla ricostruzione'', Universit\`a degli Studi di Trieste. The second and the third authors are supported by Progetto GNAMPA 2017 ``Analisi di problemi inversi: stabilit\`a e ricostruzione'', Istituto Nazionale di Alta Matematica (INdAM).}} \noindent \textbf{Abstract.} We prove a three spheres inequality with optimal exponent at the boundary for solutions to the Kirchhoff-Love plate's equation satisfying homogeneous Dirichlet conditions. This result implies the Strong Unique Continuation Property at the Boundary (SUCPB). Our approach is based on the method of Carleman estimates, and involves the construction of an ad hoc conformal mapping preserving the structure of the operator and the employment of a suitable reflection of the solution with respect to the flattened boundary which ensures the needed regularity of the extended solution. To the authors' knowledge, this is the first (nontrivial) SUCPB result for fourth-order equations with bi-Laplacian principal part. \noindent \textbf{Mathematical Subject Classifications (2010): 35B60, 35J30, 74K20, 35R25, 35R30, 35B45} \noindent \textbf{Key words:} elastic plates, three spheres inequalities, unique continuation, Carleman estimates. \section{Introduction} \label{sec: introduction} The main purpose of this paper is to prove a Strong Unique Continuation Property at the Boundary (SUCPB) for the Kirchhoff-Love plate's equation. In order to introduce the subject of SUCPB we give some basic, although coarse, notions. Let $\mathcal{L}$ be an elliptic operator of order $2m$, $m\in \mathbb{N}$, and let $\Omega$ be an open domain in $\mathbb{R}^N$, $N\geq2$.
We say that $\mathcal{L}$ enjoys a SUCPB with respect to the Dirichlet boundary conditions if the following property holds true: \begin{equation}\label{formulaz-sucpb} \begin{cases} \mathcal{L}u=0, \mbox{ in } \Omega, \\ \frac{\partial^ju}{\partial n^j}=0, \mbox{ on } \Gamma, \quad\mbox{ for } j=0, 1, \ldots, m-1, \\ \int_{\Omega\cap B_r(P)}u^2=\mathcal{O}(r^k), \mbox{ as } r\rightarrow 0, \forall k\in \mathbb{N}, \end{cases}\Longrightarrow \quad u\equiv 0 \mbox{ in } \Omega, \end{equation} where $\Gamma$ is an open portion (in the induced topology) of $\partial\Omega$, $n$ is the outer unit normal, $P\in\Gamma$ and $B_r(P)$ is the ball of center $P$ and radius $r$. Similarly, we say that $\mathcal{L}$ enjoys a SUCPB with respect to the set of normal boundary operators $\mathcal{B}_j$, $j\in J$, $\mathcal{B}_j$ of order $j$, $J\subset \{0, 1, \ldots, 2m-1\}$, $\sharp J = m$, \cite{l:Fo}, if the analogue of \eqref{formulaz-sucpb} holds when the Dirichlet boundary conditions are replaced by \begin{equation} \label{BC-generale} \mathcal{B}_ju=0, \quad \hbox{on } \Gamma, \quad \hbox{for } j\in J. \end{equation} The SUCPB has been studied for second order elliptic operators over the last two decades, both in the case of homogeneous Dirichlet, Neumann and Robin boundary conditions, \cite{l:AdEsKe}, \cite{l:AdEs}, \cite{l:ARRV}, \cite{l:ApEsWaZh}, \cite{l:BaGa}, \cite{l:BoWo}, \cite{l:KeWa}, \cite{l:KuNy}, \cite{l:Si}. Although the conjecture that the SUCPB holds true when $\partial\Omega$ is of Lipschitz class has not yet been proved, the SUCPB and the related quantitative estimates are today well enough understood for second-order elliptic equations. Starting from the paper \cite{l:AlBeRoVe}, the SUCPB turned out to be a crucial property to prove optimal stability estimates for inverse elliptic boundary value problems with unknown boundaries.
Mostly for this reason the investigation about the SUCPB has been successfully extended to second order parabolic equations \cite{l:CaRoVe}, \cite{l:DcRoVe}, \cite{l:EsFe}, \cite{l:EsFeVe}, \cite{l:Ve1} and to the wave equation with time-independent coefficients \cite{l:SiVe}, \cite{l:Ve2}. For completeness we recall (coarsely) the formulation of inverse boundary value problems with unknown boundaries in the elliptic context. Assume that $\Omega$ is a bounded domain, with connected boundary $\partial \Omega$ of $C^{1,\alpha}$ class, and that $\partial\Omega$ is the disjoint union of an accessible portion $\Gamma^{(a)}$ and of an inaccessible portion $\Gamma^{(i)}$. Given a symmetric, elliptic, Lipschitz matrix-valued coefficient $A$ and $\psi \not\equiv 0$ such that $$\psi(x)=0, \mbox{ on } \Gamma^{(i)},$$ let $u$ be the solution to \begin{equation*} \left\{\begin{array}{ll} \mbox{div}\left(A\nabla u\right)=0, \quad \hbox{in } \Omega,\\ u=\psi, \quad \hbox{on } \partial\Omega. \end{array}\right. \end{equation*} Assuming that one knows \begin{equation*} \label{flux} A\nabla u\cdot\nu,\quad \mbox{on } \Sigma, \end{equation*} where $\Sigma$ is an open portion of $\Gamma^{(a)}$, the inverse problem under consideration consists in determining the unknown boundary $\Gamma^{(i)}$. The proof of the uniqueness of $\Gamma^{(i)}$ is quite simple and requires the weak unique continuation property of elliptic operators. On the contrary, the optimal continuous dependence of $\Gamma^{(i)}$ on the Cauchy data $u$, $A\nabla u\cdot\nu$ on $\Sigma$, which is of logarithmic rate (see \cite{l:DcRo}), requires quantitative estimates of strong unique continuation at the interior and at the boundary, like the three spheres inequality, \cite{l:Ku}, \cite{l:La}, and the doubling inequality, \cite{l:AdEs}, \cite{l:GaLi}.
Inverse problems with unknown boundaries have been studied in linear elasticity theory for elliptic systems \cite{l:mr03}, \cite{l:mr04}, \cite{l:mr09}, and for fourth-order elliptic equations \cite{l:mrv07}, \cite{l:mrv09}, \cite{l:mrv13}. It is clear enough that the unavailability of the SUCPB precludes proving optimal stability estimates for these inverse problems with unknown boundaries. In spite of the fact that the strong unique continuation in the interior for fourth-order elliptic equations of the form \begin{equation}\label{bilaplacian} \Delta^2u+\sum_{|\alpha|\leq 3}c_{\alpha} D^{\alpha}u=0 \end{equation} where $c_{\alpha}\in L^{\infty}(\Omega)$, is nowadays well understood, \cite{l:CoGr}, \cite{l:CoKo}, \cite{l:Ge}, \cite{l:LBo}, \cite{l:LiNaWa}, \cite{l:mrv07}, \cite{l:Sh}, to the authors' knowledge, the SUCPB for equations like \eqref{bilaplacian} has not yet been proved, even for Dirichlet boundary conditions. In this regard it is worthwhile to emphasize that serious difficulties occur in performing the Carleman method (the main method to prove the unique continuation property) for the bi-Laplace operator \textit{near the boundaries}; we refer to \cite{l:LeRRob} for a thorough discussion and extensive references on the topic. In the present paper we begin to find results in this direction for the Kirchhoff-Love equation, which describes thin isotropic elastic plates, \begin{equation} \label{eq:equazione_piastra-int} L(v) := {\rm div}\left ({\rm div} \left ( B(1-\nu)\nabla^2 v + B\nu \Delta v I_2 \right ) \right )=0, \qquad\hbox{in } \Omega\subset\mathbb{R}^2, \end{equation} where $v$ represents the transversal displacement, $B$ is the \emph{bending stiffness} and $\nu$ the \emph{Poisson's coefficient} (see \eqref{eq:3.stiffness}--\eqref{eq:3.E_nu} for the precise definitions).
Assuming $B,\nu\in C^4(\overline{\Omega})$ and $\Gamma$ of $C^{6, \alpha}$ class, we prove our main results: a three spheres inequality at the boundary with optimal exponent (see Theorem \ref{theo:40.teo} for the precise statement) and, as a byproduct, the following SUCPB result (see Corollary \ref{cor:SUCP}) \begin{equation}\label{formulaz-sucpb-piastra} \begin{cases} Lv=0, \mbox{ in } \Omega, \\ v =\frac{\partial v}{\partial n}=0, \mbox{ on } \Gamma, \\ \int_{\Omega\cap B_r(P)}v^2=\mathcal{O}(r^k), \mbox{ as } r\rightarrow 0, \forall k\in \mathbb{N}, \end{cases}\Longrightarrow \quad v\equiv 0 \mbox{ in } \Omega. \end{equation} In our proof, firstly we flatten the boundary $\Gamma$ by introducing a suitable conformal mapping (see Proposition \ref{prop:conf_map}), then we combine a reflection argument (briefly illustrated below) and the Carleman estimate \begin{equation} \label{eq:24.4-intr} \sum_{k=0}^3 \tau^{6-2k}\int\rho^{2k+\epsilon-2-2\tau}|D^kU|^2dxdy\leq C \int\rho^{6-\epsilon-2\tau}(\Delta^2 U)^2dxdy, \end{equation} for every $\tau\geq \overline{\tau}$ and for every $U\in C^\infty_0(B_{\widetilde{R}_0}\setminus\{0\})$, where $0<\epsilon<1$ is fixed and $\rho(x,y)\sim \sqrt{x^2+y^2}$ as $(x,y)\rightarrow (0,0)$, see \cite[Theorem 6.8]{l:mrv07} and here Proposition \ref{prop:Carleman} for the precise statement. To enter a little more into detail, let us outline the main steps of our proof.
a) Since equation \eqref{eq:equazione_piastra-int} can be rewritten in the form \begin{equation} \label{eq:equazione_piastra_non_div-intr} \Delta^2 v= -2\frac{\nabla B}{B}\cdot \nabla\Delta v + q_2(v) \qquad\hbox{in } \Omega, \end{equation} where $q_2$ is a second order operator, the equation resulting after flattening $\Gamma$ by a conformal mapping preserves the structure of \eqref{eq:equazione_piastra_non_div-intr} and, denoting by $u$ the solution in the new coordinates, we can write \begin{equation} \label{eq:15.1a-intro} \begin{cases} \Delta^2 u= a\cdot \nabla\Delta u + p_2(u), \qquad\hbox{in } B_1^+, \\ u(x,0)=u_y(x,0) =0, \quad \forall x\in (-1,1) \end{cases} \end{equation} where $p_2$ is a second order operator. b) We use the following reflection of $u$, \cite{l:Fa}, \cite{l:Jo}, \cite{l:Sa}, \begin{equation*} \overline{u}(x,y)=\left\{ \begin{array}{cc} u(x,y), & \hbox{ in } B_1^+\\ w(x,y)=-[u(x,-y)+2yu_y(x,-y)+y^2\Delta u(x,-y)], & \hbox{ in } B_1^- \end{array} \right. \end{equation*} which has the advantage of ensuring that $\overline{u}\in H^4(B_1)$ if $u\in H^4(B_1^+)$ (see Proposition \ref{prop:16.1}), and then we apply the Carleman estimate \eqref{eq:24.4-intr} to $\xi \overline{u}$, where $\xi$ is a cut-off function. Nevertheless, we are still left with a problem. Namely: c) Derivatives of $u$ up to the sixth order occur in the terms on the right-hand side of the Carleman estimate involving negative values of $y$; hence such terms cannot be absorbed in a standard way by the left-hand side. In order to overcome this obstruction, we use Hardy's inequality, \cite{l:HLP34}, \cite{l:T67}, stated in Proposition \ref{prop:Hardy}. The paper is organized as follows. In Section \ref{sec: notation} we introduce some notation and definitions and state our main results, Theorem \ref{theo:40.teo} and Corollary \ref{cor:SUCP}.
In Section \ref{sec: flat_boundary} we state Proposition \ref{prop:conf_map}, which introduces the conformal map realizing a local flattening of the boundary that preserves the structure of the differential operator. Section \ref{sec: Preliminary} contains some auxiliary results which shall be used in the proof of the three spheres inequality in the case of flat boundaries, precisely Propositions \ref{prop:16.1} and \ref{prop:19.2} concerning the reflection w.r.t. flat boundaries and its properties, Hardy's inequality (Proposition \ref{prop:Hardy}), the Carleman estimate for the bi-Laplace operator (Proposition \ref{prop:Carleman}), and some interpolation estimates (Lemmas \ref{lem:Agmon} and \ref{lem:intermezzo}). In Section \ref{sec:3sfere} we establish the three spheres inequality with optimal exponent for the case of flat boundaries, Proposition \ref{theo:40.prop3}, and then we derive the proof of our main result, Theorem \ref{theo:40.teo}. Finally, in the Appendix, we give the proof of Proposition \ref{prop:conf_map} and of the interpolation estimates contained in Lemma \ref{lem:intermezzo}. \section{Notation} \label{sec: notation} We shall generally denote points in $\mathbb{R}^2$ by $x=(x_1,x_2)$ or $y=(y_1,y_2)$, except for Sections \ref{sec: Preliminary} and \ref{sec:3sfere}, where we denote the coordinates in $\mathbb{R}^2$ by $x,y$. In places we will use equivalently the symbols $D$ and $\nabla$ to denote the gradient of a function. Also we use the multi-index notation. We shall denote by $B_r(P)$ the disc in $\mathbb{R}^2$ of radius $r$ and center $P$, by $B_r$ the disc of radius $r$ and center $O$, by $B_r^+$, $B_r^-$ the half-discs in $\mathbb{R}^2$ of radius $r$ and center $O$ contained in the half-planes $\mathbb{R}^2_+= \{x_2>0\}$, $\mathbb{R}^2_-= \{x_2<0\}$ respectively, and by $R_{a,b}$ the rectangle $(-a,a)\times(-b,b)$. Given a matrix $A =(a_{ij})$, we shall denote by $|A|$ its Frobenius norm $|A|=\sqrt{\sum_{i,j}a_{ij}^2}$.
Along our proofs, we shall denote by $C$ a constant which may change {}from line to line. \begin{definition} \label{def:reg_bordo} (${C}^{k,\alpha}$ regularity) Let $\Omega$ be a bounded domain in ${\mathbb{R}}^{2}$. Given $k,\alpha$, with $k\in\mathbb{N}$, $0<\alpha\leq 1$, we say that a portion $S$ of $\partial \Omega$ is of \textit{class ${C}^{k,\alpha}$ with constants $r_{0}$, $M_{0}>0$}, if, for any $P \in S$, there exists a rigid transformation of coordinates under which we have $P=0$ and \begin{equation*} \Omega \cap R_{r_0,2M_0r_0}=\{x \in R_{r_0,2M_0r_0} \quad | \quad x_{2}>g(x_1) \}, \end{equation*} where $g$ is a ${C}^{k,\alpha}$ function on $[-r_0,r_0]$ satisfying \begin{equation*} g(0)=g'(0)=0, \end{equation*} \begin{equation*} \|g\|_{{C}^{k,\alpha}([-r_0,r_0])} \leq M_0r_0, \end{equation*} where \begin{equation*} \|g\|_{{C}^{k,\alpha}([-r_0,r_0])} = \sum_{i=0}^k r_0^i\sup_{[-r_0,r_0]}|g^{(i)}|+r_0^{k+\alpha}|g|_{k,\alpha}, \end{equation*} \begin{equation*} |g|_{k,\alpha}= \sup_ {\overset{\scriptstyle t,s\in [-r_0,r_0]}{\scriptstyle t\neq s}}\left\{\frac{|g^{(k)}(t) - g^{(k)}(s)|}{|t-s|^\alpha}\right\}. \end{equation*} \end{definition} We shall consider an isotropic thin elastic plate $\Omega\times \left[-\frac{h}{2},\frac{h}{2}\right]$, having middle plane $\Omega$ and thickness $h$. Under the Kirchhoff-Love theory, the transversal displacement $v$ satisfies the following fourth-order partial differential equation \begin{equation} \label{eq:equazione_piastra} L(v) := {\rm div}\left ({\rm div} \left ( B(1-\nu)\nabla^2 v + B\nu \Delta v I_2 \right ) \right )=0, \qquad\hbox{in } \Omega.
\end{equation} Here the \emph{bending stiffness} $B$ is given by \begin{equation} \label{eq:3.stiffness} B(x)=\frac{h^3}{12}\left(\frac{E(x)}{1-\nu^2(x)}\right), \end{equation} and the \emph{Young's modulus} $E$ and the \emph{Poisson's coefficient} $\nu$ can be written in terms of the Lam\'{e} moduli as follows \begin{equation} \label{eq:3.E_nu} E(x)=\frac{\mu(x)(2\mu(x)+3\lambda(x))}{\mu(x)+\lambda(x)},\qquad\nu(x)=\frac{\lambda(x)}{2(\mu(x)+\lambda(x))}. \end{equation} We shall make the following strong convexity assumptions on the Lam\'{e} moduli \begin{equation} \label{eq:3.Lame_convex} \mu(x)\geq \alpha_0>0,\qquad 2\mu(x)+3\lambda(x)\geq\gamma_0>0, \qquad \hbox{ in } \Omega, \end{equation} where $\alpha_0$, $\gamma_0$ are positive constants. It is easy to see that equation \eqref{eq:equazione_piastra} can be rewritten in the form \begin{equation} \label{eq:equazione_piastra_non_div} \Delta^2 v= \widetilde{a}\cdot \nabla\Delta v + \widetilde{q}_2(v) \qquad\hbox{in } \Omega, \end{equation} with \begin{equation} \label{eq:vettore_a_tilde} \widetilde{a}=-2\frac{\nabla B}{B}, \end{equation} \begin{equation} \label{eq:q_2} \widetilde{q}_2(v)=-\sum_{i,j=1}^2\frac{1}{B}\partial^2_{ij}(B(1-\nu)+\nu B\delta_{ij})\partial^2_{ij} v. \end{equation} Let \begin{equation} \label{eq:Omega_r_0} \Omega_{r_0} = \left\{ x\in R_{r_0,2M_0r_0}\ |\ x_2>g(x_1) \right\}, \end{equation} \begin{equation} \label{eq:Gamma_r_0} \Gamma_{r_0} = \left\{(x_1,g(x_1))\ |\ x_1\in (-r_0,r_0)\right\}, \end{equation} with \begin{equation*} g(0)=g'(0)=0, \end{equation*} \begin{equation} \label{eq:regol_g} \|g\|_{{C}^{6,\alpha}([-r_0,r_0])} \leq M_0r_0, \end{equation} for some $\alpha\in (0,1]$.
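As a numerical aside, the constitutive relations \eqref{eq:3.stiffness}--\eqref{eq:3.E_nu} are straightforward to evaluate. The short Python sketch below uses illustrative values of the Lam\'{e} moduli and of the thickness $h$ (our choices, not taken from the text); one can check that the strong convexity conditions \eqref{eq:3.Lame_convex} force $-1<\nu<\frac12$ and $B>0$:

```python
def plate_constants(mu, lam, h):
    """Young's modulus E, Poisson coefficient nu (eq. (3.E_nu)) and
    bending stiffness B (eq. (3.stiffness)) from the Lame moduli and thickness h."""
    E = mu * (2 * mu + 3 * lam) / (mu + lam)
    nu = lam / (2 * (mu + lam))
    B = (h ** 3 / 12) * E / (1 - nu ** 2)
    return E, nu, B

# Illustrative values mu = lam = 1, h = 0.1: then E = 5/2 and nu = 1/4,
# and strong convexity (mu > 0, 2*mu + 3*lam > 0) keeps nu in (-1, 1/2).
E, nu, B = plate_constants(1.0, 1.0, 0.1)
```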
Let $v\in H^2(\Omega_{r_0})$ satisfy \begin{equation} \label{eq:equat_u_tilde} L(v)= 0, \quad \hbox{ in } \Omega_{r_0}, \end{equation} \begin{equation} \label{eq:Diric_u_tilde} v = \frac{\partial v}{\partial n}= 0, \quad \hbox{ on } \Gamma_{r_0}, \end{equation} where $L$ is given by \eqref{eq:equazione_piastra} and $n$ denotes the outer unit normal. Let us assume that the Lam\'{e} moduli $\lambda,\mu$ satisfy the strong convexity condition \eqref{eq:3.Lame_convex} and the following regularity assumptions \begin{equation} \label{eq:C4Lame} \|\lambda\|_{C^4(\overline{\Omega}_{r_0})}, \|\mu\|_{C^4(\overline{\Omega}_{r_0})}\leq \Lambda_0. \end{equation} The regularity assumptions \eqref{eq:3.Lame_convex}, \eqref{eq:regol_g} and \eqref{eq:C4Lame} guarantee that $v\in H^6(\Omega_{r_0})$, see for instance \cite{l:a65}. \begin{theo} [{\bf Optimal three spheres inequality at the boundary}] \label{theo:40.teo} Under the above hypotheses, there exist $c<1$ only depending on $M_0$ and $\alpha$, $C>1$ only depending on $\alpha_0$, $\gamma_0$, $\Lambda_0$, $M_0$, $\alpha$, such that, for every $r_1<r_2<c r_0<r_0$, \begin{equation} \label{eq:41.1} \int_{B_{r_2}\cap \Omega_{r_0}}v^2\leq C\left(\frac{r_0}{r_2}\right)^C\left(\int_{B_{r_1}\cap \Omega_{r_0}}v^2\right)^\theta\left(\int_{B_{r_0}\cap \Omega_{r_0}}v^2\right)^{1-\theta}, \end{equation} where \begin{equation} \label{eq:41.2} \theta = \frac{\log\left(\frac{cr_0}{r_2}\right)}{\log\left(\frac{r_0}{r_1}\right)}.
\end{equation} \end{theo} \begin{cor} [{\bf Quantitative strong unique continuation at the boundary}] \label{cor:SUCP} Under the above hypotheses and assuming $\int_{B_{r_0 }\cap\Omega_{r_0}}v^2>0$, \begin{equation} \label{eq:suc1} \int_{B_{r_1 }\cap\Omega_{r_0}}v^2 \geq \left(\frac{r_1}{r_0}\right)^{\frac{\log A}{\log \frac{r_2}{cr_0}}} \int_{B_{r_0 }\cap\Omega_{r_0}}v^2, \end{equation} where \begin{equation} \label{eq:suc2} A= \frac{1}{C}\left(\frac{r_2}{r_0}\right)^C\frac{\int_{B_{r_2 }\cap\Omega_{r_0}}v^2}{\int_{B_{r_0 }\cap\Omega_{r_0}}v^2}<1, \end{equation} $c<1$ and $C>1$ being the constants appearing in Theorem \ref{theo:40.teo}. \end{cor} \begin{proof} Reassembling the terms in \eqref{eq:41.1}, it is straightforward to obtain \eqref{eq:suc1}-\eqref{eq:suc2}. The SUCPB follows immediately. \end{proof} \section{Reduction to a flat boundary} \label{sec: flat_boundary} The following Proposition introduces a conformal map which flattens the boundary $\Gamma_{r_0}$ and preserves the structure of equation \eqref{eq:equazione_piastra_non_div}.
\begin{prop} [{\bf Conformal mapping}] \label{prop:conf_map} Under the hypotheses of Theorem \ref{theo:40.teo}, there exists an injective, sense-preserving differentiable map \begin{equation*} \Phi=(\varphi,\psi):[-1,1]\times[0,1]\rightarrow \overline{\Omega}_{r_0} \end{equation*} which is conformal, and it satisfies \begin{equation} \label{eq:9.assente} \Phi((-1,1)\times(0,1))\supset B_{\frac{r_0}{K}}(0)\cap \Omega_{r_0}, \end{equation} \begin{equation} \label{eq:9.2b} \Phi([-1,1]\times\{0\})= \left\{ (x_1,g(x_1))\ |\ x_1\in [-r_1,r_1]\right\}, \end{equation} \begin{equation} \label{eq:9.2a} \Phi(0,0)= (0,0), \end{equation} \begin{equation} \label{eq:gradPhi} \frac{c_0r_0}{2C_0}\leq |D\Phi(y)|\leq \frac{r_0}{2}, \quad \forall y\in [-1,1]\times[0,1], \end{equation} \begin{equation} \label{eq:gradPhiInv} \frac{4}{r_0}\leq |D\Phi^{-1}(x)|\leq \frac{4C_0}{c_0r_0}, \quad\forall x\in \Phi([-1,1]\times[0,1]), \end{equation} \begin{equation} \label{eq:stimaPhi} |\Phi(y)|\leq \frac{r_0}{2}|y|, \quad \forall y\in [-1,1]\times[0,1], \end{equation} \begin{equation} \label{eq:stimaPhiInv} |\Phi^{-1}(x)| \leq \frac{K}{r_0}|x|, \quad \forall x\in \Phi([-1,1]\times[0,1]), \end{equation} with $K>8$, $0<c_0<C_0$ being constants only depending on $M_0$ and $\alpha$.
Letting \begin{equation} \label{eq:def_sol_composta} u(y) = v(\Phi(y)), \quad y\in [-1,1]\times[0,1], \end{equation} we have that $u\in H^6((-1,1)\times(0,1))$ and it satisfies \begin{equation} \label{eq:equazione_sol_composta} \Delta^2 u= a\cdot \nabla\Delta u + q_2(u), \qquad\hbox{in } (-1,1)\times(0,1), \end{equation} \begin{equation} \label{eq:Dirichlet_sol_composta} u(y_1,0)= u_{y_2}(y_1,0) =0, \quad \forall y_1\in (-1,1), \end{equation} where \begin{equation*} a(y) = |\nabla \varphi(y)|^2\left([D\Phi(y)]^{-1}\widetilde{a}(\Phi(y))-2\nabla(|\nabla \varphi(y)|^{-2})\right), \end{equation*} $a\in C^3([-1,1]\times[0,1], \mathbb{R}^2)$, $q_2=\sum_{|\alpha|\leq 2}c_\alpha D^\alpha$ is a second order elliptic operator with coefficients $c_\alpha\in C^2([-1,1]\times[0,1])$, satisfying \begin{equation} \label{eq:15.2} \|a\|_{ C^3([-1,1]\times[0,1], \mathbb{R}^2)}\leq M_1,\quad \|c_\alpha\|_{ C^2([-1,1]\times[0,1])}\leq M_1, \end{equation} with $M_1>0$ only depending on $M_0, \alpha, \alpha_0, \gamma_0, \Lambda_0$. \end{prop} The explicit construction of the conformal map $\Phi$ and the proof of the above Proposition are postponed to the Appendix. \section{Preliminary results} \label{sec: Preliminary} In this section, for simplicity of notation, we find it convenient to rename the coordinates in $\mathbb{R}^2$ as $x,y$ instead of $y_1,y_2$. Let $u\in H^6(B_1^+)$ be a solution to \begin{equation} \label{eq:15.1a} \Delta^2 u= a\cdot \nabla\Delta u + q_2(u), \qquad\hbox{in } B_1^+, \end{equation} \begin{equation} \label{eq:15.1b} u(x,0)=u_y(x,0) =0, \quad \forall x\in (-1,1), \end{equation} with $q_2=\sum_{|\alpha|\leq 2}c_\alpha D^\alpha$, \begin{equation} \label{eq:15.2_bis} \|a\|_{ C^3(\overline{B}_1^+, \mathbb{R}^2)}\leq M_1,\quad \|c_\alpha\|_{ C^2(\overline{B}_1^+)}\leq M_1, \end{equation} for some positive constant $M_1$.
Let us define the following extension of $u$ to $B_1$ (see \cite{l:Jo}) \begin{equation} \label{eq:16.1} \overline{u}(x,y)=\left\{ \begin{array}{cc} u(x,y), & \hbox{ in } B_1^+\\ w(x,y), & \hbox{ in } B_1^- \end{array} \right. \end{equation} where \begin{equation} \label{eq:16.2} w(x,y)= -[u(x,-y)+2yu_y(x,-y)+y^2\Delta u(x,-y)]. \end{equation} \begin{prop} \label{prop:16.1} Let \begin{equation} \label{eq:16.3} F:=a\cdot \nabla\Delta u + q_2(u). \end{equation} Then $F\in H^2(B_1^+)$, $\overline{u}\in H^4(B_1)$, \begin{equation} \label{eq:16.4} \Delta^2 \overline{u} = \overline{F},\quad \hbox{ in } B_1, \end{equation} where \begin{equation} \label{eq:16.5} \overline{F}(x,y)=\left\{ \begin{array}{cc} F(x,y), & \hbox{ in } B_1^+,\\ F_1(x,y), & \hbox{ in } B_1^-, \end{array} \right. \end{equation} and \begin{equation} \label{eq:16.6} F_1(x,y)= -[5F(x,-y)-6yF_y(x,-y)+y^2\Delta F(x,-y)]. \end{equation} \end{prop} \begin{proof} Throughout this proof, we understand $(x,y)\in B_1^-$. It is easy to verify that \begin{equation} \label{eq:17.1} \Delta^2 w(x,y)= -[5F(x,-y)-6yF_y(x,-y)+y^2\Delta F(x,-y)]=F_1(x,y). \end{equation} Moreover, by \eqref{eq:15.1b} and \eqref{eq:16.2}, \begin{equation} \label{eq:17.2} w(x,0)= -u(x,0) =0, \quad \forall x\in (-1,1). \end{equation} By differentiating \eqref{eq:16.2} w.r.t. $y$, we have \begin{equation} \label{eq:17.3bis} w_y(x,y)= -[u_y(x,-y)-2yu_{yy}(x,-y)+2y\Delta u(x,-y)-y^2(\Delta u_y)(x,-y)], \end{equation} so that, by \eqref{eq:15.1b}, \begin{equation} \label{eq:17.3} w_y(x,0)= -u_y(x,0) =0, \quad \forall x\in (-1,1).
\end{equation} Moreover, \begin{equation} \label{eq:17.6} \Delta w(x,y)= -[3 \Delta u(x,-y)-4u_{yy}(x,-y)-2y(\Delta u_y)(x,-y)+y^2(\Delta^2 u)(x,-y)], \end{equation} so that, recalling \eqref{eq:15.1b}, we have that, for every $x\in (-1,1)$, \begin{multline} \label{eq:17.4} \Delta w(x,0)= -[3 \Delta u(x,0)-4u_{yy}(x,0)]= u_{yy}(x,0) = \Delta u (x,0). \end{multline} By differentiating \eqref{eq:17.6} w.r.t. $y$, we have \begin{multline} \label{eq:18.1} (\Delta w_y)(x,y)= -[-5 (\Delta u_y)(x,-y)+4u_{yyy}(x,-y)+\\ + 2y(\Delta u_{yy})(x,-y) +2y(\Delta^2 u)(x,-y)-y^2(\Delta^2 u_y)(x,-y)], \end{multline} so that, taking into account \eqref{eq:15.1b}, it follows that, for every $x\in (-1,1)$, \begin{multline} \label{eq:17.5} (\Delta w_y)(x,0)= -[-5 (\Delta u_y)(x,0)+4u_{yyy}(x,0)] =\\ =-[-5 u_{yxx}(x,0) - u_{yyy}(x,0)] = u_{yyy}(x,0) = (\Delta u_y)(x,0). \end{multline} By \eqref{eq:17.2} and \eqref{eq:17.3}, we have that $\overline{u}\in H^2(B_1)$. Let $\varphi\in C^\infty_0(B_1)$ be a test function. Then, integrating by parts and using \eqref{eq:17.1}, \eqref{eq:17.4}, \eqref{eq:17.5}, we have \begin{multline} \label{eq:18.2} \int_{B_1}\Delta \overline{u} \Delta\varphi = \int_{B_1^+}\Delta u \Delta\varphi +\int_{B_1^-}\Delta w \Delta\varphi=\\ =-\int_{-1}^1 \Delta u(x,0)\varphi_y(x,0)+\int_{-1}^1 (\Delta u_y)(x,0)\varphi(x,0) +\int_{B_1^+}(\Delta^2 u) \varphi +\\ +\int_{-1}^1 \Delta w(x,0)\varphi_y(x,0)-\int_{-1}^1 (\Delta w_y)(x,0)\varphi(x,0) +\int_{B_1^-}(\Delta^2 w) \varphi=\\ +\int_{B_1^+}F \varphi+\int_{B_1^-}F_1 \varphi =\int_{B_1}\overline{F} \varphi.
\end{multline} Therefore \begin{equation*} \int_{B_1}\Delta \overline{u} \Delta\varphi =\int_{B_1}\overline{F} \varphi, \quad \forall \varphi \in C^\infty_0(B_1), \end{equation*} so that \eqref{eq:16.4} holds and, by interior regularity estimates, $\overline{u}\in H^4(B_1)$. \end{proof} {}From now on, we shall denote by $P_k$, for $k\in \mathbb{N}$, $0\leq k\leq 3$, any differential operator of the form \begin{equation*} \sum_{|\alpha|\leq k}c_\alpha(x)D^\alpha, \end{equation*} with $\|c_\alpha\|_{L^\infty}\leq cM_1$, where $c$ is an absolute constant. \begin{prop} \label{prop:19.2} For every $(x,y)\in B_1^-$, we have \begin{equation} \label{eq:19.1} F_1(x,y)= H(x,y)+(P_2(w))(x,y)+(P_3(u))(x,-y), \end{equation} where \begin{multline} \label{eq:19.2} H(x,y)= 6\frac{a_1}{y}(w_{yx}(x,y)+u_{yx}(x,-y))+\\ +6\frac{a_2}{y}(-w_{yy}(x,y)+u_{yy}(x,-y)) -\frac{12a_2}{y}u_{xx}(x,-y), \end{multline} where $a_1,a_2$ are the components of the vector $a$. Moreover, for every $x\in (-1,1)$, \begin{equation} \label{eq:23.1} w_{yx}(x,0)+u_{yx}(x,0)=0, \end{equation} \begin{equation} \label{eq:23.2} -w_{yy}(x,0)+u_{yy}(x,0)=0, \end{equation} \begin{equation} \label{eq:23.3} u_{xx}(x,0)=0. \end{equation} \end{prop} \begin{proof} As before, we understand $(x,y)\in B_1^-$. Recalling \eqref{eq:16.2} and \eqref{eq:16.3}, it is easy to verify that \begin{equation} \label{eq:19.3} F(x,-y)= (P_3(u))(x,-y), \end{equation} \begin{equation} \label{eq:20.1} -6yF_y(x,-y)= -6y(a\cdot \nabla \Delta u_y)(x,-y)+(P_3(u))(x,-y). \end{equation} Next, let us prove that \begin{equation} \label{eq:20.2} y^2\Delta F(x,-y)= (P_2(w))(x,y)+(P_3(u))(x,-y).
\end{equation} By denoting for simplicity $\partial_1 =\frac{\partial}{\partial x}$, $\partial_2 =\frac{\partial}{\partial y}$, we have that \begin{multline} \label{eq:20.3} y^2\Delta F(x,-y)= y^2(a_j\partial_j\Delta^2 u + 2\nabla a_j\cdot \nabla \partial_j\Delta u + \Delta a_j\partial_j\Delta u)(x,-y)+y^2\Delta(q_2(u))(x,-y)=\\ =y^2(a_j\partial_j(a\cdot \nabla \Delta u+q_2 (u)))(x,-y)+ 2y^2(\nabla a_j\cdot\nabla\partial_j \Delta u)(x,-y)+\\ +y^2(\Delta q_2(u))(x,-y)+y^2(P_3(u))(x,-y)=\\ =y^2(a_j a\cdot \nabla \Delta \partial_j u)(x,-y)+\\ +2y^2(\nabla a_j\cdot\nabla\partial_j \Delta u)(x,-y) +y^2\Delta(q_2(u))(x,-y)+y^2(P_3(u))(x,-y). \end{multline} By \eqref{eq:16.2}, we have \begin{equation*} y^2\Delta u(x,-y)=-w(x,y)-u(x,-y)-2yu_y(x,-y), \end{equation*} obtaining \begin{multline} \label{eq:21.1} y^2(a_j a\cdot \nabla \partial_j\Delta u)(x,-y)= (a_j a\cdot \nabla \partial_j(y^2\Delta u))(x,-y)+ (P_3(u))(x,-y)=\\ =(P_2(w))(x,y)+(P_3(u))(x,-y). \end{multline} Similarly, we can compute \begin{equation} \label{eq:21.2} 2y^2(\nabla a_j \cdot \nabla \partial_j\Delta u)(x,-y)= (P_2(w))(x,y)+(P_3(u))(x,-y), \end{equation} \begin{equation} \label{eq:21.3} y^2(\Delta q_2(u))(x,-y)= (P_2(w))(x,y)+(P_3(u))(x,-y). \end{equation} Therefore, \eqref{eq:20.2} follows {}from \eqref{eq:20.3}--\eqref{eq:21.3}. {}From \eqref{eq:16.6}, \eqref{eq:19.3}--\eqref{eq:20.2}, we have \begin{equation} \label{eq:21.4} F_1(x,y)=6y(a\cdot \nabla\Delta u_y)(x,-y) +(P_2(w))(x,y)+(P_3(u))(x,-y). \end{equation} We have that \begin{equation} \label{eq:21.5} 6y(a\cdot \nabla\Delta u_y)(x,-y)= 6y(a_1\Delta u_{xy})(x,-y)+6y(a_2\Delta u_{yy})(x,-y).
\end{equation} By \eqref{eq:16.2}, we have \begin{equation} \label{eq:22.1} w_{yx}(x,y)=-u_{yx}(x,-y)+2yu_{yyx}(x,-y)-2y(\Delta u_{x})(x,-y) +y^2(\Delta u_{yx})(x,-y), \end{equation} so that \begin{equation} \label{eq:22.2} y(\Delta u_{yx})(x,-y)=\frac{1}{y}(w_{yx}(x,y)+u_{yx}(x,-y))+(P_3(u))(x,-y). \end{equation} Again by \eqref{eq:16.2}, we have \begin{multline} \label{eq:22.3} w_{yy}(x,y)=\\ =3u_{yy}(x,-y)-2(\Delta u)(x,-y)-2y((u_{yyy})(x,-y)+2\Delta u_y(x,-y)) -y^2(\Delta u_{yy})(x,-y)=\\ =u_{yy}(x,-y)-2u_{xx}(x,-y)-y^2(\Delta u_{yy})(x,-y)+y(P_3(u))(x,-y), \end{multline} so that \begin{equation} \label{eq:22.4} y(\Delta u_{yy})(x,-y)=\frac{1}{y}(-w_{yy}(x,y)+u_{yy}(x,-y)-2u_{xx}(x,-y))+(P_3(u))(x,-y). \end{equation} Therefore \eqref{eq:19.1}--\eqref{eq:19.2} follow by \eqref{eq:21.4}, \eqref{eq:21.5}, \eqref{eq:22.2} and \eqref{eq:22.4}. The identity \eqref{eq:23.1} is an immediate consequence of \eqref{eq:22.1} and \eqref{eq:15.1b}. By \eqref{eq:15.1b}, we have \eqref{eq:23.3} and by \eqref{eq:22.3} and \eqref{eq:23.3}, \begin{equation*} -w_{yy}(x,0)+ u_{yy}(x,0) =2 u_{xx}(x,0) =0. \end{equation*} \end{proof} For the proof of the three spheres inequality at the boundary we shall use the following Hardy's inequality (\cite[\S 7.3, p. 175]{l:HLP34}), for a proof see also \cite{l:T67}. \begin{prop} [{\bf Hardy's inequality}] \label{prop:Hardy} Let $f$ be an absolutely continuous function defined in $[0,+\infty)$, such that $f(0)=0$. Then \begin{equation} \label{eq:24.1} \int_1^{+\infty} \frac{f^2(t)}{t^2}dt\leq 4 \int_1^{+\infty} (f'(t))^2dt. \end{equation} \end{prop} Another basic result we need to derive the three spheres inequality at the boundary is the following Carleman estimate, which was obtained in \cite[Theorem 6.8]{l:mrv07}. \begin{prop} [{\bf Carleman estimate}] \label{prop:Carleman} Let $\epsilon\in(0,1)$.
Let us define \begin{equation} \label{eq:24.2} \rho(x,y) = \varphi\left(\sqrt{x^2+y^2}\right), \end{equation} where \begin{equation} \label{eq:24.3} \varphi(s) = s\exp\left(-\int_0^s \frac{dt}{t^{1-\epsilon}(1+t^\epsilon)}\right). \end{equation} Then there exist $\overline{\tau}>1$, $C>1$, $\widetilde{R}_0\leq 1$, only depending on $\epsilon$, such that \begin{equation} \label{eq:24.4} \sum_{k=0}^3 \tau^{6-2k}\int\rho^{2k+\epsilon-2-2\tau}|D^kU|^2dxdy\leq C \int\rho^{6-\epsilon-2\tau}(\Delta^2 U)^2dxdy, \end{equation} for every $\tau\geq \overline{\tau}$ and for every $U\in C^\infty_0(B_{\widetilde{R}_0}\setminus\{0\})$. \end{prop} \begin{rem} \label{rem:stima_rho} Let us notice that, for $0\leq s\leq 1$, \begin{equation*} e^{-\frac{1}{\epsilon}}s\leq \varphi(s)\leq s, \end{equation*} so that \begin{equation} \label{eq:stima_rho} e^{-\frac{1}{\epsilon}}\sqrt{x^2+y^2}\leq \rho(x,y)\leq \sqrt{x^2+y^2}, \quad \hbox{ in } \overline{B}_1. \end{equation} \end{rem} We shall also need the following interpolation estimates. \begin{lem} \label{lem:Agmon} Let $0<\epsilon\leq 1$, let $m\in \mathbb{N}$, $m\geq 2$, and let $j\in\mathbb{N}$, $1\leq j\leq m-1$. There exists a constant $C_{m,j}$, only depending on $m$ and $j$, such that for every $v\in H^m(B_r^+)$, \begin{equation} \label{eq:3a.2} r^j\|D^jv\|_{L^2(B_r^+)}\leq C_{m,j}\left(\epsilon r^m\|D^mv\|_{L^2(B_r^+)} +\epsilon^{-\frac{j}{m-j}}\|v\|_{L^2(B_r^+)}\right). \end{equation} \end{lem} See for instance \cite[Theorem 3.3]{l:a65}. \begin{lem} \label{lem:intermezzo} Let $u\in H^6(B_1^+)$ be a solution to \eqref{eq:15.1a}--\eqref{eq:15.1b}, with $a$ and $q_2$ satisfying \eqref{eq:15.2_bis}. For every $r$, $0<r<1$, we have \begin{equation} \label{eq:12a.2} \|D^hu\|_{L^2(B_{\frac{r}{2}}^+)}\leq \frac{C}{r^h}\|u\|_{L^2(B_r^+)}, \quad \forall h=1, ..., 6, \end{equation} where $C$ is a constant only depending on $\alpha_0$, $\gamma_0$ and $\Lambda_0$. \end{lem} The proof of the above result is postponed to the Appendix.
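The weight defined in \eqref{eq:24.3} can be computed explicitly; for the reader's convenience, we record the computation, which justifies the bounds stated in Remark \ref{rem:stima_rho}. By the change of variable $\sigma=t^\epsilon$,
\begin{equation*}
\int_0^s \frac{dt}{t^{1-\epsilon}(1+t^\epsilon)}=\frac{1}{\epsilon}\int_0^{s^\epsilon} \frac{d\sigma}{1+\sigma}=\frac{1}{\epsilon}\log(1+s^\epsilon),
\end{equation*}
so that
\begin{equation*}
\varphi(s)=s(1+s^\epsilon)^{-\frac{1}{\epsilon}}.
\end{equation*}
In particular, for $0\leq s\leq 1$ we have $1\leq 1+s^\epsilon\leq 2\leq e$, whence $e^{-\frac{1}{\epsilon}}s\leq \varphi(s)\leq s$; this suffices for our purposes, since $\rho$ is only evaluated in $B_{\widetilde{R}_0}$, with $\widetilde{R}_0\leq 1$.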
\section{Three spheres inequality at the boundary and proof of the main theorem} \label{sec:3sfere} \begin{theo} [{\bf Optimal three spheres inequality at the boundary - flat boundary case}] \label{theo:40.prop3} Let $u\in H^6(B_1^+)$ be a solution to \eqref{eq:15.1a}--\eqref{eq:15.1b}, with $a$ and $q_2$ satisfying \eqref{eq:15.2_bis}. Then there exist $\gamma\in (0,1)$, only depending on $M_1$, and an absolute constant $C>0$ such that, for every $r<R<\frac{R_0}{2}<R_0<\gamma$, \begin{equation} \label{eq:40.1} R^{2\epsilon}\int_{B_R^+}u^2\leq C(M_1^2+1)\left(\frac{R_0/2}{R}\right)^C\left(\int_{B_r^+}u^2\right)^{\widetilde{\theta}}\left(\int_{B_{R_0}^+}u^2\right)^{1-\widetilde{\theta}}, \end{equation} where \begin{equation} \label{eq:39.1} \widetilde{\theta} = \frac{\log\left(\frac{R_0/2}{R}\right)}{\log\left(\frac{R_0/2}{r/4}\right)}. \end{equation} \end{theo} \begin{proof} Let $\epsilon \in (0,1)$ be fixed, for instance $\epsilon=\frac{1}{2}$; however, it is convenient to keep the parameter $\epsilon$ explicit in the calculations. Throughout this proof, $C$ shall denote a positive constant which may change {}from line to line. Let $R_0\in (0,\widetilde{R}_0)$ be a radius to be chosen later, where $\widetilde{R}_0$ has been introduced in Proposition \ref{prop:Carleman}, and let \begin{equation} \label{eq:25.1} 0<r<R<\frac{R_0}{2}.
\end{equation} Let $\eta\in C^\infty_0((0,1))$ be such that \begin{equation} \label{eq:25.2} 0\leq \eta\leq 1, \end{equation} \begin{equation} \label{eq:25.3} \eta=0, \quad \hbox{ in }\left(0,\frac{r}{4}\right)\cup \left(\frac{2}{3}R_0,1\right), \end{equation} \begin{equation} \label{eq:25.4} \eta=1, \quad \hbox{ in }\left[\frac{r}{2}, \frac{R_0}{2}\right], \end{equation} \begin{equation} \label{eq:25.6} \left|\frac{d^k\eta}{dt^k}(t)\right|\leq C r^{-k}, \quad \hbox{ in }\left(\frac{r}{4}, \frac{r}{2}\right),\quad\hbox{ for } 0\leq k\leq 4, \end{equation} \begin{equation} \label{eq:25.7} \left|\frac{d^k\eta}{dt^k}(t)\right|\leq C R_0^{-k}, \quad \hbox{ in }\left(\frac{R_0}{2}, \frac{2}{3}R_0\right),\quad\hbox{ for } 0\leq k\leq 4. \end{equation} Let us define \begin{equation} \label{eq:25.5} \xi(x,y)=\eta(\sqrt{x^2+y^2}). \end{equation} By a density argument, we may apply the Carleman estimate \eqref{eq:24.4} to $U=\xi \overline{u}$, where $\overline{u}$ has been defined in \eqref{eq:16.1}, obtaining \begin{multline} \label{eq:26.1} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\ \leq C \int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}|\Delta^2(\xi u)|^2+ C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}|\Delta^2(\xi w)|^2, \end{multline} for $\tau\geq \overline{\tau}$ and $C$ an absolute constant. By \eqref{eq:25.2}--\eqref{eq:25.5} we have \begin{multline} \label{eq:26.2} |\Delta^2(\xi u)|\leq \xi|\Delta^2 u|+C\chi_{B_{r/2}^+\setminus B_{r/4}^+} \sum_{k=0}^3 r^{k-4}|D^k u|+ C\chi_{B_{2R_0/3}^+\setminus B_{R_0/2}^+}\sum_{k=0}^3 R_0^{k-4}|D^k u|, \end{multline} \begin{multline} \label{eq:26.3} |\Delta^2(\xi w)|\leq \xi|\Delta^2 w|+C\chi_{B_{r/2}^-\setminus B_{r/4}^-} \sum_{k=0}^3 r^{k-4}|D^k w|+ C\chi_{B_{2R_0/3}^-\setminus B_{R_0/2}^-}\sum_{k=0}^3 R_0^{k-4}|D^k w|.
\end{multline} Let us set \begin{multline} \label{eq:27.1} J_0 =\int_{B_{r/2}^+\setminus B_{r/4}^+}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (r^{k-4}|D^k u|)^2+ \int_{B_{r/2}^-\setminus B_{r/4}^-}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (r^{k-4}|D^k w|)^2, \end{multline} \begin{multline} \label{eq:27.2} J_1 =\int_{B_{2R_0/3}^+\setminus B_{R_0/2}^+}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (R_0^{k-4}|D^k u|)^2+ \int_{B_{2R_0/3}^-\setminus B_{R_0/2}^-}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (R_0^{k-4}|D^k w|)^2. \end{multline} By inserting \eqref{eq:26.2}, \eqref{eq:26.3} in \eqref{eq:26.1} we have \begin{multline} \label{eq:27.3} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\ \leq C \int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 u|^2+ C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 w|^2+CJ_0+CJ_1, \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. By \eqref{eq:15.1a} and \eqref{eq:15.2_bis} we can estimate the first term in the right hand side of \eqref{eq:27.3} as follows \begin{equation} \label{eq:28.1} \int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 u|^2\leq CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^3|D^k u|^2. \end{equation} By \eqref{eq:17.1}, \eqref{eq:19.1} and by making the change of variables $(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$, we can estimate the second term in the right hand side of \eqref{eq:27.3} as follows \begin{multline} \label{eq:28.2} \int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 w|^2\leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2+\\ +CM_1^2\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^2|D^k w|^2+ CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^3|D^k u|^2.
\end{multline} Now, let us split the integral in the right hand side of \eqref{eq:28.1} and the second and third integrals in the right hand side of \eqref{eq:28.2} over the domains of integration $B_{r/2}^\pm\setminus B_{r/4}^\pm$, $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, $B_{2R_0/3}^\pm\setminus B_{R_0/2}^\pm$ and then let us insert \eqref{eq:28.1}--\eqref{eq:28.2} so rewritten in \eqref{eq:27.3}, obtaining \begin{multline} \label{eq:28.4} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\ \leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2 +CM_1^2\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\rho^{6-\epsilon-2\tau}\sum_{k=0}^2|D^k w|^2+\\+ CM_1^2\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}\rho^{6-\epsilon-2\tau}\sum_{k=0}^3|D^k u|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. Next, by estimating {}from below the integrals in the left hand side of this last inequality, reducing their domain of integration to $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, where $\xi=1$, we have \begin{multline} \label{eq:29.1} \sum_{k=0}^3 \int_{B_{R_0/2}^+ \setminus B_{r/2}^+}\tau^{6-2k} (1-CM_1^2\rho^{8-2\epsilon-2k})\rho^{2k+\epsilon-2-2\tau}|D^k u|^2+\\ +\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\rho^{4+\epsilon-2\tau}|D^3 w|^2 +\sum_{k=0}^2\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\tau^{6-2k} (1-CM_1^2\rho^{8-2\epsilon-2k})\rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
Recalling \eqref{eq:stima_rho}, we have that, for $k=0,1,2,3$ and for $R_0\leq R_1:=\min\{\widetilde{R}_0,2(2CM_1^2)^{-\frac{1}{2(1-\epsilon)}}\}$, \begin{equation} \label{eq:30.1} 1-CM_1^2\rho^{8-2\epsilon-2k}\geq \frac{1}{2}, \quad \hbox{ in }B_{R_0/2}^\pm, \end{equation} so that, inserting \eqref{eq:30.1} in \eqref{eq:29.1}, we have \begin{multline} \label{eq:30.3} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. By \eqref{eq:19.2} and \eqref{eq:15.2_bis}, we have that \begin{equation} \label{eq:30.4} \int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2\leq CM_1^2(I_1+I_2+I_3), \end{equation} with \begin{equation} \label{eq:31.0.1} I_1=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1}(w_{yy}(x,y)- u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\right)dx, \end{equation} \begin{equation} \label{eq:31.0.2} I_2=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1}(w_{yx}(x,y)+ u_{yx}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\right)dx, \end{equation} \begin{equation} \label{eq:31.0.4} I_3=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1} u_{xx}(x,-y)\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\right)dx. \end{equation} Now, let us see that, for $j=1,2,3$, \begin{multline} \label{eq:31.1} I_j\leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|D^3 w|^2 +C\tau^2\int_{B_{R_0}^-}\rho^{4-\epsilon-2\tau}\xi^2|D^2 w|^2+\\ +C\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|D^3 u|^2 +C\tau^2\int_{B_{R_0}^+}\rho^{4-\epsilon-2\tau}\xi^2|D^2 u|^2 +C(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. Let us verify \eqref{eq:31.1} for $j=1$, the other cases following by similar arguments.
By \eqref{eq:23.2}, we can apply Hardy's inequality \eqref{eq:24.1}, obtaining \begin{multline} \label{eq:32.2} \int_{-\infty}^0\left|y^{-1}(w_{yy}(x,y)- u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\leq\\ \leq 4\int_{-\infty}^0\left|\partial_y\left[(w_{yy}(x,y)- u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right]\right|^2dy\leq\\ \leq 16 \int_{-\infty}^0\left(|w_{yyy}(x,y)|^2 +|u_{yyy}(x,-y)|^2\right)\rho^{6-\epsilon-2\tau}\xi^2dy+\\ +16 \int_{-\infty}^0\left(|w_{yy}(x,y)|^2 +|u_{yy}(x,-y)|^2\right)\left|\partial_y\left(\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right)\right|^2dy. \end{multline} Noticing that \begin{equation} \label{eq:32.1} |\rho_y|=\left|\frac{y}{\sqrt{x^2+y^2}}\varphi'(\sqrt{x^2+y^2})\right|\leq 1, \end{equation} we can compute \begin{multline} \label{eq:32.3} \left|\partial_y\left(\rho^{\frac{6-\epsilon-2\tau}{2}}(x,y)\xi(x,y)\right)\right|^2\leq 2|\xi_y|^2\rho^{6-\epsilon-2\tau}+2\left|\left(\frac{6-\epsilon-2\tau}{2}\right)\xi \rho_y\rho^{\frac{4-\epsilon-2\tau}{2}}\right|^2\leq\\ \leq 2\xi_y^2\rho^{6-\epsilon-2\tau}+2\tau^2\rho^{4-\epsilon-2\tau}\xi^2, \end{multline} for $\tau\geq \widetilde{\tau}:= \max\{\overline{\tau},3\}$. By inserting \eqref{eq:32.3} in \eqref{eq:32.2}, by integrating over $(-R_0,R_0)$ and by making the change of variables $(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$, we derive \begin{multline} \label{eq:33.0} I_1\leq C\int_{B_{R_0}^-}\xi^2\rho^{6-\epsilon-2\tau}|w_{yyy}|^2+ C\int_{B_{R_0}^+}\xi^2\rho^{6-\epsilon-2\tau}|u_{yyy}|^2+\\ +C\int_{B_{R_0}^-}\xi_y^2\rho^{6-\epsilon-2\tau}|w_{yy}|^2 +C\int_{B_{R_0}^+}\xi_y^2\rho^{6-\epsilon-2\tau}|u_{yy}|^2+\\ +C\tau^2\int_{B_{R_0}^-}\xi^2\rho^{4-\epsilon-2\tau}|w_{yy}|^2 +C\tau^2\int_{B_{R_0}^+}\xi^2\rho^{4-\epsilon-2\tau}|u_{yy}|^2. \end{multline} Recalling \eqref{eq:25.2}--\eqref{eq:25.5}, we find \eqref{eq:31.1} for $j=1$. The cases $j=2,3$ are analogous, applying Hardy's inequality to $(w_{yx}(x,y)+u_{yx}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi$ and to $u_{xx}(x,-y)\rho^{\frac{6-\epsilon-2\tau}{2}}\xi$, whose vanishing at $y=0$ is guaranteed by \eqref{eq:23.1} and \eqref{eq:23.3}, respectively.
Next, by \eqref{eq:30.3}, \eqref{eq:30.4} and \eqref{eq:31.1}, we have \begin{multline} \label{eq:33.1} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|D^3u|^2+ CM_1^2\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|D^3w|^2+\\ +CM_1^2\tau^2\int_{B_{R_0}^+}\rho^{4-\epsilon-2\tau}\xi^2|D^2u|^2 +CM_1^2\tau^2\int_{B_{R_0}^-}\rho^{4-\epsilon-2\tau}\xi^2|D^2w|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Now, let us split the first four integrals in the right hand side of \eqref{eq:33.1} over the domains of integration $B_{r/2}^\pm\setminus B_{r/4}^\pm$, $B_{2R_0/3}^\pm\setminus B_{R_0/2}^\pm$ and $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, and move to the left hand side the integrals over $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$. Recalling \eqref{eq:stima_rho}, we obtain \begin{multline} \label{eq:34.1} \sum_{k=2}^3 \int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \tau^{6-2k}(1-CM_1^2\rho^{2-2\epsilon})\rho^{2k+\epsilon-2-2\tau}|D^k u|^2+\\ +\sum_{k=2}^3 \int_{B_{R_0/2}^- \setminus B_{r/2}^-} \tau^{6-2k}(1-CM_1^2\rho^{2-2\epsilon})\rho^{2k+\epsilon-2-2\tau}|D^k w|^2+\\ +\sum_{k=0}^1 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2 +\sum_{k=0}^1 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq C(\tau^2M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
Therefore, for $R_0\leq R_2=\min\{R_1,2(2CM_1^2)^{-\frac{1}{2(1-\epsilon)}}\}$, it follows that \begin{multline} \label{eq:35.1} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2+ \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2\leq\\ \leq C(\tau^2M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Let us estimate $J_0$ and $J_1$. {}From \eqref{eq:27.1} and recalling \eqref{eq:stima_rho}, we have \begin{multline} \label{eq:36.1} J_0\leq\left(\frac{r}{4}\right)^{6-\epsilon-2\tau}\left\{ \int_{B^+_{r/2}}\sum_{k=0}^3(r^{k-4}|D^k u|)^2+ \int_{B^-_{r/2}}\sum_{k=0}^3(r^{k-4}|D^k w|)^2 \right\}. \end{multline} By \eqref{eq:16.2}, we have that, for $(x,y)\in B^-_{r/2}$ and $k=0,1,2,3$, \begin{equation} \label{eq:36.1bis} |D^k w|\leq C\sum_{h=k}^{2+k}r^{h-k}|(D^h u)(x,-y)|. \end{equation} By \eqref{eq:36.1}--\eqref{eq:36.1bis}, by making the change of variables $(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$ and by using Lemma \ref{lem:intermezzo}, we get \begin{multline} \label{eq:36.2} J_0\leq C\left(\frac{r}{4}\right)^{6-\epsilon-2\tau} \sum_{k=0}^5 r^{2k-8}\int_{B^+_{r/2}}|D^k u|^2 \leq C\left(\frac{r}{4}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2, \end{multline} where $C$ is an absolute constant. Analogously, we obtain \begin{equation} \label{eq:37.1} J_1 \leq C\left(\frac{R_0}{2}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2. \end{equation} Let $R$ be such that $r<R<\frac{R_0}{2}$.
By \eqref{eq:35.1}, \eqref{eq:36.2}, \eqref{eq:37.1}, it follows that \begin{multline} \label{eq:37.1bis} \tau^{6}R^{\epsilon-2-2\tau}\int_{B_{R}^+ \setminus B_{r/2}^+} |u|^2 \leq\sum_{k=0}^3\tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^ku|^2\leq\\ \leq C\tau^2 (M_1^2+1)\left[\left(\frac{r}{4}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2+ \left(\frac{R_0}{2}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2 \right], \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Since $\tau>1$, we may rewrite the above inequality as follows \begin{multline} \label{eq:37.2} R^{2\epsilon}\int_{B_{R}^+ \setminus B_{r/2}^+} |u|^2\leq C(M_1^2+1)\left[\left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2+ \left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2 \right], \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. By adding $R^{2\epsilon}\int_{B_{r/2}^+}|u|^2$ to both sides of \eqref{eq:37.2}, and setting, for $s>0$, \begin{equation*} \sigma_s=\int_{B_{s}^+}|u|^2, \end{equation*} we obtain \begin{equation} \label{eq:38.1} R^{2\epsilon}\sigma_R\leq C(M_1^2+1) \left[\left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau}\sigma_r+ \left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau}\sigma_{R_0} \right], \end{equation} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Let $\tau^*$ be such that \begin{equation} \label{eq:38.2} \left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_r= \left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_{R_0}, \end{equation} that is \begin{equation} \label{eq:38.3} 2+\epsilon+2\tau^*=\frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}.
\end{equation} Let us distinguish two cases: \begin{enumerate}[i)] \item $\tau^*\geq \widetilde{\tau}$, \item $\tau^*< \widetilde{\tau}$, \end{enumerate} and set \begin{equation} \label{eq:39.1bis} \widetilde{\theta}=\frac{\log \left(\frac{R_0/2}{R}\right)}{\log \left(\frac{R_0/2}{r/4}\right)}. \end{equation} In case i), it is possible to choose $\tau = \tau^*$ in \eqref{eq:38.1}, obtaining, by \eqref{eq:38.2}--\eqref{eq:39.1bis}, \begin{equation} \label{eq:39.2} R^{2\epsilon}\sigma_R\leq C(M_1^2+1)\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}. \end{equation} In case ii), since $\tau^*< \widetilde{\tau}$, {}from \eqref{eq:38.3}, we have \begin{equation*} \frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}<2+\epsilon+2\widetilde{\tau}, \end{equation*} so that, multiplying both sides by $\log \left(\frac{R_0/2}{R}\right)$, it follows that \begin{equation*} \widetilde{\theta}\log\left(\frac{\sigma_{R_0}}{\sigma_r}\right)<(2+\epsilon+2\widetilde{\tau})\log\left(\frac{R_0/2}{R}\right), \end{equation*} and hence \begin{equation} \label{eq:39.3} \sigma_{R_0}^{\widetilde{\theta}}\leq \left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}. \end{equation} Then it follows trivially that \begin{equation} \label{eq:39.4} R^{2\epsilon}\sigma_R\leq R^{2\epsilon}\sigma_{R_0}\leq R^{2\epsilon}\left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}. \end{equation} Finally, by \eqref{eq:39.2} and \eqref{eq:39.4}, we obtain \eqref{eq:40.1}. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:40.teo}] Let $r_1<r_2<\frac{r_0R_0}{2K}<r_0$, where $R_0$ is chosen such that $R_0<\gamma<1$, $\gamma$ being the constant introduced in Theorem \ref{theo:40.prop3}, and $K>1$ is the constant introduced in Proposition \ref{prop:conf_map}. Let us define \begin{equation*} r=\frac{2r_1}{r_0}, \qquad R= \frac{Kr_2}{r_0}.
\end{equation*} Recalling that $K>8$, it follows immediately that $r<R<\frac{R_0}{2}$. Therefore, we can apply \eqref{eq:40.1} with $\epsilon=\frac{1}{2}$ to $u=v\circ\Phi$, obtaining \begin{equation} \label{eq:3sfere_u} \int_{B_R^+}u^2\leq \frac{C}{R^C}\left(\int_{B_r^+}u^2\right)^{\widetilde{\theta}}\left(\int_{B_{R_0}^+}u^2\right)^{1-\widetilde{\theta}}, \end{equation} with \begin{equation*} \widetilde{\theta} = \frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{R_0r_0}{r_1}\right)}, \end{equation*} and $C>1$ only depending on $M_0$, $\alpha$, $\alpha_0$, $\gamma_0$ and $\Lambda_0$. {}From \eqref{eq:gradPhiInv}, \eqref{eq:stimaPhi}, \eqref{eq:stimaPhiInv} and noticing that \begin{equation*} \widetilde{\theta} \geq \theta:=\frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{r_0}{r_1}\right)}, \end{equation*} we obtain \eqref{eq:41.1}--\eqref{eq:41.2}. \end{proof} \section{Appendix} \label{sec: Appendix} \begin{proof}[Proof of Proposition \ref{prop:conf_map}] Let us construct a suitable extension of $g$ to $[-2r_0,2r_0]$. Let $P_6^\pm$ be the Taylor polynomial of order $6$ centered at $\pm r_0$, \begin{equation*} P_6^\pm(x_1)=\sum_{j=0}^6 \frac{g^{(j)}(\pm r_0)}{j!}(x_1-(\pm r_0))^j, \end{equation*} and let $\chi\in C^\infty_0(\mathbb{R})$ be a function satisfying \begin{equation*} 0\leq\chi\leq 1, \end{equation*} \begin{equation*} \chi=1, \hbox{ for } |x_1|\leq r_0, \end{equation*} \begin{equation*} \chi=0, \hbox{ for } \frac{3}{2}r_0\leq |x_1|\leq 2r_0, \end{equation*} \begin{equation*} |\chi^{(j)}(x_1)|\leq \frac{C}{r_0^j}, \hbox{ for } r_0\leq |x_1|\leq \frac{3}{2}r_0, \forall j\in \mathbb{N}. \end{equation*} Let us define \begin{equation*} \widetilde{g}=\left\{ \begin{array}{cc} g, & \hbox{ for } x_1\in [-r_0,r_0],\\ \chi P_6^+, & \hbox{ for } x_1\in [r_0, 2r_0],\\ \chi P_6^-, & \hbox{ for } x_1\in [-2r_0, -r_0]. \end{array} \right.
\end{equation*} It is a straightforward computation to verify that \begin{equation} \label{eq:3.2} \widetilde g(x_1)=0, \hbox{ for } \frac{3}{2}r_0\leq |x_1|\leq 2r_0, \end{equation} \begin{equation} \label{eq:3.2bis} |\widetilde g(x_1)|\leq 2M_0r_0, \hbox{ for } |x_1|\leq 2r_0, \end{equation} so that the graph of $\widetilde g$ is contained in $R_{2r_0,2M_0r_0}$ and \begin{equation} \label{eq:3.3} \|\widetilde g \|_{C^{6,\alpha}([-2r_0,2r_0])}\leq CM_0r_0, \end{equation} where $C$ is an absolute constant. Let \begin{equation} \label{eq:Omega_r_0_tilde} \widetilde{\Omega}_{r_0} = \left\{ x\in R_{2r_0,2M_0r_0}\ |\ x_2>\widetilde{g}(x_1)\right\}, \end{equation} and let $k\in H^1(\widetilde{\Omega}_{r_0} )$ be the solution to \begin{equation} \label{eq:3.4} \left\{ \begin{array}{ll} \Delta k =0, & \hbox{in } \widetilde{\Omega}_{r_0},\\ & \\ k_{x_1}(2r_0,x_2) =k_{x_1}(-2r_0,x_2)=0, & \hbox{for } 0\leq x_2\leq 2M_0r_0,\\ & \\ k(x_1,2M_0r_0) =1, & \hbox{for } -2r_0\leq x_1\leq 2r_0,\\ & \\ k(x_1,\widetilde{g}(x_1)) =0, &\hbox{for } -2r_0\leq x_1\leq 2r_0.\\ \end{array}\right. \end{equation} Let us notice that $k\in C^{6,\alpha}\left(\overline{\widetilde{\Omega}}_{r_0} \right)$. Indeed, this regularity is standard away {}from neighborhoods of the four points $(\pm2r_0,0)$, $(\pm 2r_0, 2M_0r_0)$ and, by making an even reflection of $k$ w.r.t. the lines $x_1 = \pm 2r_0$ in a neighborhood in $\widetilde{\Omega}_{r_0}$ of each of these points, we can apply Schauder estimates and again obtain the stated regularity. By the maximum principle, $\min_{\overline{\widetilde{\Omega}}_{r_0}}k = \min_{\partial \widetilde{\Omega}_{r_0}}k$. In view of the boundary conditions, this minimum value cannot be achieved in the closed segment $\{x_2=2M_0r_0, |x_1|\leq 2r_0\}$. It cannot be achieved in the segments $\{\pm 2r_0\}\times (0,2M_0r_0)$ since the boundary conditions over these segments would contradict the Hopf lemma (see \cite{l:GT}).
Therefore the minimum is attained on the boundary portion $\{(x_1, \widetilde{g}(x_1)) \ | \ x_1\in [-2r_0,2r_0]\}$, so that $\min_{\overline{\widetilde{\Omega}}_{r_0}}k = 0$. Similarly, $\max_{\overline{\widetilde{\Omega}}_{r_0}}k = 1$ and, moreover, by the strong maximum and minimum principles, $0<k(x_1,x_2)<1$, for every $(x_1,x_2)\in \widetilde{\Omega}_{r_0}$. Denoting by $\mathcal R$ the reflection around the line $x_1=2r_0$, let \begin{equation*} \Omega^*_{r_0}=\widetilde{\Omega}_{r_0}\cup \mathcal R(\widetilde{\Omega}_{r_0})\cup(\{2r_0\}\times (0,2M_0r_0)), \end{equation*} and let $k^*$ be the extension of $k$ to $\overline{\Omega}^*_{r_0}$ obtained by making an even reflection of $k$ around the line $x_1=2r_0$. Next, let us extend $k^*$ by periodicity w.r.t. the $x_1$ variable to the unbounded strip \begin{equation*} S_{r_0} = \cup_{l\in \mathbb{Z}} (\Omega^*_{r_0} + 8r_0le_1). \end{equation*} By Schauder estimates and by the periodicity of $k^*$, it follows that \begin{equation} \label{eq:5.1} \|\nabla k^*\|_{L^\infty(S_{r_0})}\leq \frac{C_0}{r_0}, \end{equation} with $C_0$ only depending on $M_0$ and $\alpha$. Therefore there exists $\delta_0= \delta_0(M_0, \alpha)$, $0<\delta_0\leq \frac{1}{4}$, such that \begin{equation} \label{eq:5.2} k^*(x_1,x_2)\geq \frac{1}{2} \quad \forall (x_1,x_2)\in \mathbb{R}\times[(1-\delta_0)2M_0r_0,2M_0r_0]. \end{equation} Since $k^*>0$ in $S_{r_0}$, by applying the Harnack inequality and the Hopf lemma (see \cite{l:GT}), we have \begin{equation*} \frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, \quad \hbox{ on } \partial S_{r_0}, \end{equation*} with $c_0$ only depending on $M_0$ and $\alpha$. Therefore, the function $k^*$ satisfies \begin{equation*} \left\{ \begin{array}{ll} \Delta \left(\frac{\partial k^*}{\partial x_2}\right) =0, & \hbox{in } S_{r_0},\\ & \\ \frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, & \hbox{on } \partial S_{r_0}.\\ \end{array}\right.
\end{equation*} Moreover, $\frac{\partial k^*}{\partial x_2}$, being continuous and periodic w.r.t. the variable $x_1$, attains its minimum in $\overline{S}_{r_0}$. Since this minimum value cannot be attained in $S_{r_0}$, it follows that \begin{equation} \label{eq:6.1} \frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, \quad \hbox{ in } \overline{S}_{r_0}. \end{equation} Now, let $h$ be a harmonic conjugate of $-k$ in $\widetilde{\Omega}_{r_0}$, that is \begin{equation} \label{eq:6.2} \left\{ \begin{array}{ll} h_{x_1} = k_{x_2}, &\\ & \\ h_{x_2} = -k_{x_1}. &\\ \end{array}\right. \end{equation} The map $\Psi := h+ik$ is a conformal map in $\widetilde{\Omega}_{r_0}$, with \begin{equation} \label{eq:DPsi} D\Psi =\left( \begin{array}{ll} k_{x_2} &-k_{x_1}\\ & \\ k_{x_1} &k_{x_2}\\ \end{array}\right), \end{equation} so that $|D\Psi| = \sqrt 2|\nabla k|$ and, by \eqref{eq:5.1} and \eqref{eq:6.1}, \begin{equation} \label{eq:6.3} \sqrt 2\frac{c_0}{r_0}\leq |D\Psi|\leq \sqrt 2\frac{C_0}{r_0}, \quad \hbox{in } \widetilde{\Omega}_{r_0}. \end{equation} Let us analyze the behavior of $\Psi$ on the boundary of $\widetilde{\Omega}_{r_0}$, \begin{equation*} \partial{\widetilde{\Omega}_{r_0}} = \sigma_1\cup \sigma_2\cup \sigma_3\cup \sigma_4, \end{equation*} where \begin{equation*} \sigma_1 = \{(x_1, \widetilde{g}(x_1))\ | \ x_1\in [-2r_0,2r_0]\},\qquad \sigma_2 = \{(2r_0, x_2)\ | \ x_2\in [0,2M_0r_0]\}, \end{equation*} \begin{equation*} \sigma_3 = \{(x_1,2M_0r_0)\ | \ x_1\in [-2r_0,2r_0]\}, \qquad \sigma_4 = \{(-2r_0, x_2)\ | \ x_2\in [0,2M_0r_0]\}. \end{equation*} On $\sigma_1$, we have \begin{equation*} \Psi(x_1, \widetilde{g}(x_1))= h(x_1, \widetilde{g}(x_1)) +i0, \end{equation*} \begin{equation*} \frac{\partial}{\partial x_1}h(x_1, \widetilde{g}(x_1))= h_{x_1}(x_1, \widetilde{g}(x_1))+ h_{x_2}(x_1, \widetilde{g}(x_1))\widetilde{g}'(x_1) =-\sqrt{1+[\widetilde{g}'(x_1)]^2}(\nabla k\cdot n)>0, \end{equation*} where $n$ is the outer unit normal.
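In detail, the outer unit normal along the graph of $\widetilde{g}$ is $n=\frac{(\widetilde{g}'(x_1),-1)}{\sqrt{1+[\widetilde{g}'(x_1)]^2}}$, so that, by \eqref{eq:6.2},
\begin{equation*}
h_{x_1}(x_1, \widetilde{g}(x_1))+ h_{x_2}(x_1, \widetilde{g}(x_1))\widetilde{g}'(x_1)= k_{x_2}-k_{x_1}\widetilde{g}'(x_1)= -\sqrt{1+[\widetilde{g}'(x_1)]^2}\,(\nabla k\cdot n),
\end{equation*}
and the last quantity is positive: since $k>0$ in $\widetilde{\Omega}_{r_0}$ and $k=0$ on $\sigma_1$, the Hopf lemma gives $\nabla k\cdot n<0$ on $\sigma_1$.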
Therefore $\Psi$ is injective on $\sigma_1$ and $\Psi(\sigma_1)$ is an interval $[a,b]$ contained in the line $\{y_2=0\}$, with \begin{equation*} a=h(-2r_0, 0), \quad b=h(2r_0, 0). \end{equation*} On $\sigma_2$, we have \begin{equation*} \Psi(2r_0, x_2)= h(2r_0, x_2)+ik(2r_0, x_2), \end{equation*} \begin{equation*} h_{x_2}(2r_0, x_2)=-k_{x_1}(2r_0, x_2)=0, \end{equation*} and similarly on $\sigma_4$, so that $h(-2r_0, x_2)\equiv a$ and $h(2r_0, x_2)\equiv b$ for $x_2\in[0,2M_0r_0]$ whereas, by \eqref{eq:6.1}, $k$ is increasing w.r.t. $x_2$. Therefore $\Psi$ is injective on $\sigma_2\cup \sigma_4$, and maps $\sigma_2$ into the segment $\{b\}\times[0,1]$ and $\sigma_4$ into the segment $\{a\}\times[0,1]$. On $\sigma_3$, we have \begin{equation*} \Psi(x_1, 2M_0r_0)= h(x_1, 2M_0r_0) +i1, \end{equation*} \begin{equation*} h_{x_1}(x_1, 2M_0r_0) = k_{x_2}(x_1, 2M_0r_0)>0, \end{equation*} so that $h$ is increasing in $[-2r_0,2r_0]$, $\Psi$ is injective on $\sigma_3$ and $\Psi(\sigma_3)$ is the interval $[a,b]\times\{1\}$. Therefore $\Psi$ maps the boundary of $\widetilde{\Omega}_{r_0}$ bijectively onto the boundary of $[a,b]\times [0,1]$. Moreover, we have \begin{equation} \label{eq:b-a} b-a= \int_{-2r_0}^{2r_0}h_{x_1}(x_1,2M_0r_0)dx_1 = \int_{-2r_0}^{2r_0}k_{x_2}(x_1,2M_0r_0)dx_1. \end{equation} By \eqref{eq:5.1}, \eqref{eq:6.1} and \eqref{eq:b-a}, the following estimate holds \begin{equation} \label{eq:b-a_bis} 4c_0\leq b-a\leq 4C_0. \end{equation} By \eqref{eq:6.3}, we can apply the global inversion theorem, ensuring that \begin{equation*} \Psi^{-1}: [a,b]\times [0,1]\rightarrow \overline{\widetilde{\Omega}}_{r_0} \end{equation*} is a conformal diffeomorphism.
Moreover, \begin{equation} \label{eq:DPsi_inversa} D(\Psi^{-1}) =\frac{1}{|\nabla k|^2}\left( \begin{array}{ll} k_{x_2} &k_{x_1}\\ & \\ -k_{x_1} &k_{x_2}\\ \end{array}\right), \end{equation} \begin{equation} \label{eq:8.1} \frac{\sqrt 2}{C_0}r_0\leq |D\Psi^{-1}|= \frac{\sqrt 2}{|\nabla k|}\leq \frac{\sqrt 2}{c_0}r_0, \quad \hbox{in } [a,b]\times [0,1]. \end{equation} Now, let us see that the set $\Psi(\Omega_{r_0})$ contains a closed rectangle having one side contained in the line $\{y_2=0\}$ and whose sides can be estimated in terms of $M_0$ and $\alpha$. To this aim, we need to estimate the distance of $\Psi(0,0)=(\overline{\xi}_1,0)$ {}from the corners $(a,0)$ and $(b,0)$ of the rectangle $[a,b]\times[0,1]$. Recalling that $\widetilde{g}\equiv 0$ for $\frac{3}{2}r_0\leq |x_1|\leq 2r_0$, we have that $\sigma_1$ contains the segments $\left[-2r_0,-\frac{3}{2}r_0\right]\times \{0\}$, $\left[\frac{3}{2}r_0,2r_0\right]\times \{0\}$, so that \begin{equation} \label{eq:segmentino} h(2r_0,0)-h\left(\frac{3}{2}r_0,0\right)= \int_{\frac{3}{2}r_0}^{2r_0}h_{x_1}(x_1,0)dx_1 = \int_{\frac{3}{2}r_0}^{2r_0}k_{x_2}(x_1,0)dx_1. \end{equation} By \eqref{eq:5.1}, \eqref{eq:6.1} and \eqref{eq:segmentino} we derive \begin{equation} \label{eq:segmentino_bis} \frac{c_0}{2}\leq h(2r_0,0)-h\left(\frac{3}{2}r_0,0\right)\leq \frac{C_0}{2}. \end{equation} Similarly, \begin{equation} \label{eq:segmentino_ter} \frac{c_0}{2}\leq h\left(-\frac{3}{2}r_0,0\right)-h(-2r_0,0)\leq \frac{C_0}{2}. \end{equation} Since $h$ is injective and maps $\sigma_1$ into $[a,b]\times\{0\}$, it follows that \begin{equation*} |\Psi(0,0)-(a,0)| = h(0,0)-h(-2r_0,0) \geq\frac{c_0}{2}, \end{equation*} \begin{equation*} |\Psi(0,0)-(b,0)| = h(2r_0,0) - h(0,0) \geq\frac{c_0}{2}. \end{equation*} Possibly replacing $c_0$ with $\min\{c_0,2\}$, we obtain that $\overline{B}^+_{\frac{c_0}{2}}(\Psi(O))\subset [a,b]\times [0,1]$.
By \eqref{eq:8.1}, \begin{equation*} |\Psi^{-1}(\xi)| = |\Psi^{-1}(\xi)-\Psi^{-1}(\Psi(O))|\leq\frac{\sqrt 2}{2}r_0<r_0, \qquad \forall \xi \in B^+_{\frac{c_0}{2}}(\Psi(O)), \end{equation*} so that $\Psi^{-1}\left(B^+_{\frac{c_0}{2}}(\Psi(O))\right)\subset \Omega_{r_0}$, \begin{equation*} \Psi(\Omega_{r_0})\supset B^+_{\frac{c_0}{2}}(\Psi(O))\supset R, \end{equation*} where $R$ is the rectangle \begin{equation*} R= \left(\overline{\xi}_1-\frac{c_0}{2\sqrt 2}, \overline{\xi}_1+\frac{c_0}{2\sqrt 2}\right)\times \left(0,\frac{c_0}{2\sqrt 2}\right). \end{equation*} Let us consider the homothety \begin{equation*} \Theta:[a,b]\times [0,1] \rightarrow\mathbb{R}^2, \end{equation*} \begin{equation*} \Theta(\xi_1,\xi_2) = \frac{2\sqrt 2}{c_0}(\xi_1-\overline{\xi}_1,\xi_2), \end{equation*} which satisfies \begin{equation*} \Theta(\Psi(O)) = O, \qquad D\Theta = \frac{2\sqrt 2}{c_0} I_2, \end{equation*} \begin{equation*} \Theta([a,b]\times [0,1]) =R^*, \qquad R^* =\left[\frac{2\sqrt 2}{c_0}(a-\overline{\xi}_1), \frac{2\sqrt 2}{c_0}(b-\overline{\xi}_1)\right]\times \left[0, \frac{2\sqrt 2}{c_0}\right], \end{equation*} \begin{equation*} \Theta(\overline{R}) = [-1,1]\times [0,1], \end{equation*} \begin{equation*} D(\Theta\circ \Psi)(x) = \frac{2\sqrt 2}{c_0}D\Psi(x). \end{equation*} Its inverse \begin{equation*} \Theta^{-1}:R^*\rightarrow [a,b]\times [0,1], \end{equation*} \begin{equation*} \Theta^{-1}(y_1,y_2) = \left(\frac{c_0}{2\sqrt 2}y_1+\overline{\xi}_1,\frac{c_0}{2\sqrt 2}y_2\right), \end{equation*} satisfies \begin{equation*} D\Theta^{-1}= \frac{c_0} {2\sqrt 2}I_2, \end{equation*} \begin{equation*} D((\Theta\circ \Psi)^{-1})(y) = \frac{c_0}{2\sqrt 2}D\Psi^{-1}(\Theta^{-1}(y)). \end{equation*} Let us define \begin{equation*} \Phi =(\Theta\circ \Psi)^{-1}.
\end{equation*} We have that $\Phi$ is a conformal diffeomorphism from $R^*$ into $\widetilde{\Omega}_{r_0}$ such that \begin{equation*} \Omega_{r_0}\supset \Psi^{-1}(R)=\Phi((-1,1)\times(0,1)), \end{equation*} \begin{equation} \label{eq:gradPhibis} \frac{c_0r_0}{2C_0}\leq |D\Phi(y)|\leq \frac{r_0}{2}, \end{equation} \begin{equation} \label{eq:gradPhiInvbis} \frac{4}{r_0}\leq |D\Phi^{-1}(x)|\leq \frac{4C_0}{c_0r_0}. \end{equation} By \eqref{eq:gradPhibis}, we have that, for every $y\in [-1,1]\times [0,1]$, \begin{equation} \label{eq:stimaPhibis} |\Phi(y)|= |\Phi(y)-\Phi(O)|\leq \frac{r_0}{2}|y|. \end{equation} Given any $x=(x_1,x_2)\in \overline{\Omega}_{r_0}$, let $x^* =(x_1,g(x_1))$. We have \begin{equation*} |x-x^*| = |x_2 - g(x_1)| \leq|x_2|+ |g(x_1)-g(0)|\leq (M_0+1)|x|, \end{equation*} and, since the segment joining $x$ and $x^*$ is contained in $\overline{\Omega}_{r_0}$, by \eqref{eq:gradPhiInvbis} we have \begin{equation} \label{eq:stimaPhiInv1} |\Phi^{-1}(x)-\Phi^{-1}(x^*)|\leq \frac{4C_0}{c_0r_0}(M_0+1)|x|. \end{equation} Let us consider the arc $\tau(t)= \Phi^{-1}(t,g(t))$, for $t\in [0,x_1]$. Again by \eqref{eq:gradPhiInvbis}, we have \begin{multline} \label{eq:stimaPhiInv2} |\Phi^{-1}(x^*)| = |\Phi^{-1}(x^*)-\Phi^{-1}(O)| =|\tau(x_1) -\tau(0)| \leq\\ \leq \left|\int_0^{x_1}\tau'(t)dt \right|\leq \frac{4C_0}{c_0r_0}\sqrt{M_0^2+1}\ |x|. \end{multline} By \eqref{eq:stimaPhiInv1}, \eqref{eq:stimaPhiInv2}, we have \begin{equation} \label{eq:stimaPhiInvbis} |\Phi^{-1}(x)| \leq \frac{K}{r_0}|x|, \end{equation} with $K=\frac{4C_0}{c_0}(M_0+1+\sqrt{M_0^2+1})>8$. From this last inequality, we have that \begin{equation*} \Phi^{-1}\left(\Omega_{r_0}\cap B_{\frac{r_0}{K}}\right)\subset B_1^+\subset (-1,1)\times(0,1), \qquad \Phi((-1,1)\times(0,1))\supset \Omega_{r_0}\cap B_{\frac{r_0}{K}}. \end{equation*} Let $\Phi = (\varphi, \psi)$.
We have that \begin{equation} \label{eq:DPhi} D\Phi =\left( \begin{array}{ll} \varphi_{y_1} &\varphi_{y_2}\\ & \\ -\varphi_{y_2} &\varphi_{y_1}\\ \end{array}\right), \end{equation} \begin{equation} \label{eq:32.1bisluglio} \det(D\Phi(y)) = |\nabla\varphi(y)|^2, \end{equation} \begin{equation} \label{eq:DPhi_inversa} (D\Phi)^{-1} =\frac{1}{|\nabla \varphi|^2}\left( \begin{array}{ll} \varphi_{y_1} &-\varphi_{y_2}\\ & \\ \varphi_{y_2} &\varphi_{y_1}\\ \end{array}\right). \end{equation} Concerning the function $u(y) = v(\Phi(y))$, we can compute \begin{equation} \label{eq:32.3luglio} (\nabla v) (\Phi(y)) = [(D\Phi(y))^{-1}]^T\nabla u(y), \end{equation} \begin{equation} \label{eq:32.2luglio} (\Delta v) (\Phi(y)) = \frac{1}{|\det(D\Phi(y))|}\textrm{div}\,(A(y)\nabla u(y)), \end{equation} where \begin{equation} \label{eq:33.0luglio} A(y) = |\det(D\Phi(y))| (D\Phi(y))^{-1} [(D\Phi(y))^{-1}]^T. \end{equation} By \eqref{eq:DPhi}--\eqref{eq:DPhi_inversa}, we obtain that \begin{equation} \label{eq:33.0bisluglio} A(y) = I_2, \end{equation} so that \begin{equation} \label{eq:33.0terluglio} (\Delta v) (\Phi(y)) = \frac{1}{|\nabla \varphi(y)|^2}\Delta u(y), \end{equation} \begin{equation} \label{eq:33.1luglio} (\Delta^2 v) (\Phi(y)) = \frac{1}{|\nabla \varphi(y)|^2}\Delta \left(\frac{1}{|\nabla \varphi(y)|^2}\Delta u(y)\right). \end{equation} By using the above formulas, straightforward computations yield \eqref{eq:equazione_sol_composta}--\eqref{eq:15.2} from \eqref{eq:equazione_piastra_non_div}. Finally, the boundary conditions \eqref{eq:Dirichlet_sol_composta} follow from \eqref{eq:32.3luglio}, \eqref{eq:9.2b} and \eqref{eq:Diric_u_tilde}. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:intermezzo}] Here, we develop an argument which is contained in \cite[Chapter 9]{l:GT}.
By noticing that $a\cdot\nabla\Delta u = \textrm{div}\,(\Delta u a)-(\textrm{div}\, a)\Delta u$, we can rewrite \eqref{eq:41.1} in the form \begin{equation*} \sum_{|\alpha|,|\beta|\leq 2}D^\alpha(a_{\alpha\beta}D^\beta u)=0. \end{equation*} Let $\sigma\in\left[\frac{1}{2},1\right)$, $\sigma'=\frac{1+\sigma}{2}$ and let us notice that \begin{equation} \label{eq:3a.1} \sigma'-\sigma = \frac{1-\sigma}{2}, \qquad 1-\sigma = 2(1-\sigma'). \end{equation} Let $\xi\in C^\infty_0(\mathbb{R}^2)$ be such that \begin{equation*} 0\leq\xi\leq 1, \end{equation*} \begin{equation*} \xi=1, \hbox{ for } |x|\leq \sigma, \end{equation*} \begin{equation*} \xi=0, \hbox{ for } |x|\geq \sigma', \end{equation*} \begin{equation*} |D^k(\xi)|\leq \frac{C}{(\sigma'-\sigma)^k}, \hbox{ for } k=0,1,2. \end{equation*} By straightforward computations we have that \begin{equation*} \sum_{|\alpha|,|\beta|\leq 2}D^\alpha(a_{\alpha\beta}D^\beta (u\xi))=f, \end{equation*} with \begin{equation*} f=\sum_{|\alpha|,|\beta|\leq 2}\sum_ {\overset{\scriptstyle \delta_2\leq\alpha}{\scriptstyle \delta_2\neq0}}{\alpha \choose \delta_2} D^{\alpha-\delta_2}(a_{\alpha\beta}D^\beta u)D^{\delta_2}\xi+ \sum_{|\alpha|,|\beta|\leq 2}D^\alpha\left[a_{\alpha\beta} \sum_ {\overset{\scriptstyle \delta_1\leq\beta}{\scriptstyle \delta_1\neq0}}{\beta \choose \delta_1} D^{\beta-\delta_1}uD^{\delta_1}\xi\right]. \end{equation*} By standard regularity estimates (see for instance \cite[Theorem 9.8]{l:a65}), \begin{equation} \label{eq:8a.1} \|u\xi\|_{H^{4+k}(B_1^+)}\leq C\left(\|u\xi\|_{L^{2}(B_1^+)}+ \|f\|_{H^{k}(B_1^+)}\right). \end{equation} On the other hand, it follows easily that \begin{equation} \label{eq:8a.2} \|f\|_{H^{k}(B_1^+)}\leq CM_1 \sum_{h=0}^{3+k}\frac{1}{(1-\sigma')^{4+k-h}}\|D^h u\|_{L^{2}(B_{\sigma'}^+)}.
\end{equation} By inserting \eqref{eq:8a.2} in \eqref{eq:8a.1}, multiplying both sides by $(1-\sigma')^{4+k}$ and recalling \eqref{eq:3a.1}, we have \begin{equation} \label{eq:8a.3} (1-\sigma)^{4+k}\|D^{4+k}u\|_{L^{2}(B_{\sigma}^+)}\leq C \left(\|u\|_{L^{2}(B_1^+)}+\sum_{h=1}^{3+k}(1-\sigma')^h \|D^{h}u\|_{L^{2}(B_{\sigma'}^+)} \right). \end{equation} Setting \begin{equation*} \Phi_j=\sup_{\sigma\in\left[\frac{1}{2},1\right)}(1-\sigma)^j \|D^{j}u\|_{L^{2}(B_{\sigma}^+)}, \end{equation*} from \eqref{eq:8a.3} we obtain \begin{equation} \label{eq:9a.2} \Phi_{4+k}\leq C\left(A_{2+k}+ \Phi_{3+k}\right), \end{equation} where \begin{equation*} A_{2+k}=\|u\|_{L^{2}(B_1^+)}+ \sum_{h=1}^{2+k}\Phi_h. \end{equation*} By the interpolation estimate \eqref{eq:3a.2} we have that, for every $\epsilon$, $0<\epsilon<1$, and for every $h\in \mathbb{N}$, $1\leq h\leq 3+k$, \begin{equation} \label{eq:9a.3} \|D^{h}u\|_{L^{2}(B_{\sigma}^+)}\leq C\left( \epsilon\|D^{4+k}u\|_{L^{2}(B_{\sigma}^+)}+ \epsilon^{-\frac{h}{4+k-h}}\|u\|_{L^{2}(B_{\sigma}^+)}\right). \end{equation} Let $\gamma>0$ and let $\sigma_\gamma\in \left[\frac{1}{2},1\right)$ be such that \begin{equation} \label{eq:9a.4} \Phi_{3+k}\leq(1-\sigma_\gamma)^{3+k} \|D^{3+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}+\gamma. \end{equation} By applying \eqref{eq:9a.3} with $h=3+k$, $\epsilon=(1-\sigma_\gamma)\widetilde{\epsilon}$, $\sigma = \sigma_\gamma$, we have \begin{equation*} (1-\sigma_\gamma)^{3+k}\|D^{3+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}\leq C\left( \widetilde{\epsilon}(1-\sigma_\gamma)^{4+k}\|D^{4+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}+ \widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{\sigma_\gamma}^+)}\right), \end{equation*} so that, by \eqref{eq:9a.4} and by the arbitrariness of $\gamma$, we have \begin{equation*} \Phi_{3+k}\leq C \left( \widetilde{\epsilon}\Phi_{4+k}+ \widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{1}^+)}\right).
\end{equation*} By inserting this last inequality in \eqref{eq:9a.2}, we get \begin{equation*} \Phi_{4+k}\leq C \left(A_{2+k}+ \widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{1}^+)}+ \widetilde{\epsilon}\Phi_{4+k}\right), \end{equation*} which gives, for $\widetilde{\epsilon} =\frac{1}{2C+1}$, \begin{equation*} \Phi_{4+k}\leq C \left(\|u\|_{L^{2}(B_{1}^+)}+ \sum_{h=1}^{2+k}\Phi_{h}\right). \end{equation*} By proceeding similarly, we get \begin{equation*} \Phi_{4+k}\leq C \|u\|_{L^{2}(B_{1}^+)}, \end{equation*} so that \begin{equation} \label{eq:12a.1} \|D^{4+k}u\|_{L^{2}(B_{\frac{1}{2}}^+)} \leq2^{4+k}C\|u\|_{L^{2}(B_{1}^+)}, \qquad k=0,1,2. \end{equation} By applying \eqref{eq:9a.3} for a fixed $\epsilon$, $\sigma=\frac{1}{2}$, we can estimate the derivatives of order $h$, $1\leq h\leq 3$, \begin{equation} \label{eq:12a.1bis} \|D^{h}u\|_{L^{2}(B_{\frac{1}{2}}^+)} \leq C\left(\|D^{4+k}u\|_{L^{2}(B_{\frac{1}{2}}^+)}+ \|u\|_{L^{2}(B_{\frac{1}{2}}^+)}\right). \end{equation} By \eqref{eq:12a.1}, \eqref{eq:12a.1bis}, we have \begin{equation*} \|D^{h}u\|_{L^{2}(B_{\frac{1}{2}}^+)} \leq C\|u\|_{L^{2}(B_{1}^+)}, \qquad \hbox{ for } h=1,...,6. \end{equation*} By employing a homothety, we obtain \eqref{eq:12a.2}. \end{proof} \noindent \emph{Acknowledgement:} The authors wish to thank Antonino Morassi for fruitful discussions on the subject of this work. \end{document}
\begin{document} \title[ ]{Extrema of Curvature Functionals on the Space of Metrics on 3-Manifolds, II.} \author[ ]{Michael T. Anderson} \thanks{Partially supported by NSF Grant DMS-9802722} \maketitle \setcounter{section}{-1} \section{Introduction} \setcounter{equation}{0} This paper is a continuation of the study of some rigidity or non-existence issues discussed in [An1, \S 6]. The results obtained here also play a significant role in the approach to geometrization of 3-manifolds discussed in [An4]. Let $N$ be an oriented 3-manifold and consider the functional \begin{equation} \label{e0.1} {\cal R}^{2}(g) = \int_{N}|r_{g}|^{2}dV_{g}, \end{equation} on the space of metrics ${\Bbb M} $ on $N$ where $r$ is the Ricci curvature and $dV$ is the volume form. The Euler-Lagrange equations for a critical point of ${\cal R}^{2}$ read \begin{equation} \label{e0.2} \nabla{\cal R}^{2} = D^{*}Dr + D^{2}s - 2 \stackrel{\circ}{R}\circ r -\tfrac{1}{2}(\Delta s - |r|^{2})\cdot g = 0, \end{equation} \begin{equation} \label{e0.3} \Delta s = -\tfrac{1}{3}|r|^{2}. \end{equation} Here $s$ is the scalar curvature, $D^{2}s$ the Hessian of $s$, $\Delta s = trD^{2}s$ the Laplacian, and $ \stackrel{\circ}{R} $ the action of the curvature tensor $R$ on symmetric bilinear forms, c.f. [B, Ch.4H] for further details. The equation (0.3) is just the trace of (0.2). It is obvious from the trace equation (0.3) that there are no non-flat ${\cal R}^{2}$ critical metrics, i.e. solutions of (0.2)-(0.3), on compact manifolds $N$; this follows immediately by integrating (0.3) over $N$. Equivalently, since the functional ${\cal R}^{2}$ is not scale invariant in dimension 3, there are no critical metrics $g$ with ${\cal R}^{2}(g) \neq $ 0. To obtain non-trivial critical metrics in this case, one needs to modify ${\cal R}^{2}$ so that it is scale-invariant, i.e. consider $v^{1/3}{\cal R}^{2},$ where $v$ is the volume of $(N, g)$. 
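To spell out the integration argument just indicated: on a closed manifold the divergence theorem gives $\int_{N}\Delta s \, dV_{g} =$ 0, so that integrating the trace equation (0.3) over $N$ yields
$$0 = \int_{N}\Delta s \, dV_{g} = -\tfrac{1}{3}\int_{N}|r|^{2}dV_{g}, $$
hence $r \equiv$ 0; since in dimension 3 the Ricci curvature determines the full curvature tensor, $(N, g)$ is then flat.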
Nevertheless, it is of course a priori possible that there are non-trivial solutions of (0.2)-(0.3) on non-compact manifolds $N$. \begin{theorem} \label{t 0.1.} Let (N, g) be a complete ${\cal R}^{2}$ critical metric with non-negative scalar curvature. Then (N,g) is flat. \end{theorem} This result generalizes [An1, Thm.6.2], which required that $(N, g)$ have an isometric free $S^{1}$ action. It is not known if the condition $s \geq $ 0 is necessary in Theorem 0.1; a partial result without this assumption is given after Proposition 2.2. However, following the discussion in \S 1 and [An1, \S 6], the main situation of interest is when $s \geq $ 0. Of course, Theorem 0.1 is false in higher dimensions, since any Ricci-flat metric is a critical point, in fact a minimizer of ${\cal R}^{2}$ in any dimension, while any Einstein metric is critical for ${\cal R}^{2}$ in dimension 4. Next, we consider a class of metrics which are critical points of the functional ${\cal R}^{2}$ subject to a scalar curvature constraint. More precisely, consider scalar-flat metrics on a (non-compact) 3-manifold $N$ satisfying the equations \begin{equation} \label{e0.4} \alpha\nabla{\cal R}^{2} + L^{*}(\omega ) = 0, \end{equation} \begin{equation} \label{e0.5} \Delta\omega = -\frac{\alpha}{4}|r|^{2}, \end{equation} where again (0.5) is the trace of (0.4) since $s =$ 0. Here $L^{*}$ is the adjoint of the linearization of the scalar curvature, given by $$L^{*}f = D^{2}f - \Delta f\cdot g - fr, $$ and $\omega $ is a locally bounded function on $N$, which we consider as a potential. The meaning and derivation of these equations will be discussed in more detail in \S 1. They basically arise from the Euler-Lagrange equations for a critical metric of ${\cal R}^{2}$ subject to the constraint $s =$ 0. The parameter $\alpha $ may assume any value in [0, $\infty ).$ When $\alpha =$ 0, the equations (0.4)-(0.5) are the static vacuum Einstein equations, c.f. [An2] and references there.
In this case, we require that $\omega $ is not identically 0. It is proved in \S 3 that an $L^{2,2}$ Riemannian metric $g$ and $L^{2}$ potential function $\omega $ satisfying the equations (0.4)-(0.5) weakly in a 3-dimensional domain is a $C^{\infty},$ (in fact real-analytic), solution of the equations. A smooth metric $g$ and potential function $\omega $ satisfying (0.4)-(0.5) will be called an ${\cal R}_{s}^{2}$ critical metric or ${\cal R}_{s}^{2}$ solution. \begin{theorem} \label{t 0.2.} Let $(N, g)$ be a complete ${\cal R}_{s}^{2}$ critical metric, i.e. a complete scalar-flat metric satisfying (0.4)-(0.5), with \begin{equation} \label{e0.6} -\lambda \leq \omega \leq 0, \end{equation} for some $\lambda < \infty .$ Suppose further that (N, g) admits an isometric free $S^{1}$ action leaving $\omega $ invariant. Then (N, g) is flat. \end{theorem} In contrast to Theorem 0.1, the assumption that $(N, g)$ admit an isometric free $S^{1}$ action here is essential. There are complete non-flat ${\cal R}_{s}^{2}$ solutions satisfying (0.6) which admit an isometric, but not free, $S^{1}$ action. For example, the complete Schwarzschild metric is an ${\cal R}_{s}^{2}$ solution for a suitable choice of the potential $\omega $ satisfying (0.6), c.f. Proposition 5.1. The condition $\omega \leq $ 0 can be weakened to an assumption that $\omega \leq $ 0 outside some compact set $K \subset N$. However, it is unknown if this result holds when $\omega \geq $ 0 everywhere for instance. Similarly, the assumption that $\omega $ is bounded below can be removed in certain situations, but it is not clear if it can be removed in general, c.f. Remarks 4.4 and 4.5. The proofs of these results rely almost exclusively on the respective trace equations (0.3) and (0.5). 
The full equations (0.2) and (0.4) are used only to obtain regularity estimates of the metric in terms of the potential function $s$, respectively $\omega .$ Thus it is likely that these results can be generalized to variational problems for other curvature-type integrals, whose trace equations have a similar form; c.f. \S 5.2 for an example. As noted above, both Theorem 0.1 and 0.2 play an important role in the approach to geometrization of 3-manifolds studied in [An4]. For instance, Theorem 0.2 is important in understanding the collapse situation. Following discussion of the origin of the ${\cal R}^{2}$ and ${\cal R}_{s}^{2}$ equations in \S 1, Theorem 0.1 is proved in \S 2. In \S 3, we prove the regularity of ${\cal R}_{s}^{2}$ solutions and apriori estimates for families of such solutions. Theorem 0.2 is proved in \S 4, while \S 5 shows that the Schwarzschild metric is an ${\cal R}_{s}^{2}$ solution and concludes by showing that Theorems 0.1 and 0.2 also hold for ${\cal Z}^{2}$ and ${\cal Z}_{s}^{2}$ solutions, where $z$ is the trace-free Ricci curvature. While efforts have been made to make the paper self-contained, in certain instances we refer to [An1] for further details. \section{Scalar Curvature Constrained Equations.} \setcounter{equation}{0} In this section, we discuss the nature and form of the ${\cal R}_{s}^{2}$ equations, as well as some motivation for considering these and the ${\cal R}^2$ equations. The discussion here is by and large only formal and we refer to [An1, \S 8] and [An4] for complete details and proofs of the assertions made. Suppose first that $M$ is a compact, oriented 3-manifold and consider the scale-invariant functional \begin{equation} \label{e1.1} I_{\varepsilon} = \varepsilon v^{1/3}{\cal R}^{2} + v^{1/6}{\cal S}^{2} = \varepsilon v^{1/3}\int|r|^{2} + v^{1/6}(\int s^{2})^{1/2}. \end{equation} on the space of metrics on $M$. 
Here $\varepsilon > $ 0 is a small parameter and we are interested in considering the behavior $\varepsilon \rightarrow $ 0. The existence, regularity and general geometric properties of minimizers $g_{\varepsilon}$ of (essentially) $I_{\varepsilon}$, for a fixed $\varepsilon > 0$, are proved in [An1, \S 8]; more precisely, such is done there for the closely related functional $\varepsilon v^{1/3}\int|R|^{2} + v^{1/3}\int s^{2}$, where $R$ is the full Riemann curvature tensor. All of these results follow from the same results proved in [An1, \S 3 - \S 5] for the $L^2$ norm of $R$, together with the fact that, for a fixed $\varepsilon > 0$, the functional $\varepsilon v^{1/3}\int|R|^{2} + v^{1/3}\int s^{2}$ has the same basic properties as $v^{1/3}\int|R|^{2}$ w.r.t. existence, regularity and completeness issues. Now in dimension 3, the full curvature $R$ is controlled by the Ricci curvature $r$. Thus, for example, one has the relations $|R|^2 = 4|r|^2 - s^2$ and $s^2 \leq 3|r|^2$. A brief inspection of the work in [An1, \S 3 - \S 5, \S 8] then shows that these results, together with the same proofs, also hold for the functional $I_{\varepsilon}$ and its minimizers $g_{\varepsilon}$ in (1.1). The Euler-Lagrange equations for $I_{\varepsilon}$ at $g = g_{\varepsilon}$ are \begin{equation} \label{e1.2} \varepsilon\nabla{\cal R}^{2} + L^{*}(\tau ) + (\tfrac{1}{4}s\tau + c)\cdot g = 0, \end{equation} \begin{equation} \label{e1.3} 2\Delta (\tau + {\tfrac{3}{4}}\varepsilon s) + {\tfrac{1}{4}}s\tau = -{\tfrac{1}{2}}\varepsilon |r|^{2} + 3c, \end{equation} where $\tau = \tau_{\varepsilon} = s/\sigma$, $\sigma = v^{1/6}{\cal S}^{2}(g)$, and the constant term $c$, corresponding to the volume terms in (1.1), is given by \begin{equation} \label{e1.4} c = \frac{\varepsilon}{6v}\int|r|^{2}dV + \frac{1}{12\sigma v}\int s^{2}dV. \end{equation} Again (1.3) is the trace of (1.2).
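The trace computations can be made explicit. With the conventions of \S 0, one has $tr D^{2}f = \Delta f$, $tr(D^{*}Dr) = -\Delta s$, $tr(\stackrel{\circ}{R}\circ r) = |r|^{2}$ and $tr\, g =$ 3; tracing (0.2) with these identities gives $-{\tfrac{3}{2}}\Delta s - {\tfrac{1}{2}}|r|^{2} =$ 0, i.e. (0.3), so that $tr\nabla{\cal R}^{2} = -{\tfrac{3}{2}}\Delta s - {\tfrac{1}{2}}|r|^{2}.$ Since also $tr L^{*}(\tau ) = \Delta\tau - 3\Delta\tau - s\tau = -2\Delta\tau - s\tau ,$ the trace of (1.2) reads
$$-{\tfrac{3}{2}}\varepsilon\Delta s - {\tfrac{1}{2}}\varepsilon |r|^{2} - 2\Delta\tau - s\tau + 3({\tfrac{1}{4}}s\tau + c) = 0, $$
which upon rearranging is exactly (1.3). The same computation with $s \equiv$ 0 and $\omega $ in place of $\tau $ reduces (0.4) to its trace (0.5).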
These equations can be deduced either from [An1, \S 8] or [B, Ch.4H]; again all terms in (1.2)-(1.4) are w.r.t. $g = g_{\varepsilon}$. As $\varepsilon \rightarrow $ 0, the curvature $r_{\varepsilon}$ of the solutions $g_{\varepsilon}$ of (1.2)-(1.3) will usually blow-up, i.e. diverge to infinity in some region, say in a neighborhood of points $x_{\varepsilon}.$ Thus blow up or renormalize the metric $g_{\varepsilon}$ by considering $g_{\varepsilon}' = \rho_{\varepsilon}^{- 2}\cdot g_{\varepsilon},$ where $\rho_{\varepsilon} \rightarrow $ 0 is chosen so that the curvature in the geodesic ball $(B_{x_{\varepsilon}}' (1), g_{\varepsilon}' )$ w.r.t. $g_{\varepsilon}' $ is bounded. More precisely, $\rho $ is chosen to be $L^{2}$ curvature radius of $g_{\varepsilon}$ at $x_{\varepsilon},$ c.f. [An1,Def.3.2]. Then the renormalized Euler-Lagrange equations take the form \begin{equation} \label{e1.5} \frac{\varepsilon}{\rho^{2}}\nabla{\cal R}^{2} + L^{*}(\tau ) +({\tfrac{1}{4}}s\tau + \frac{c}{\rho^{2}})\cdot g = 0, \end{equation} \begin{equation} \label{e1.6} 2\Delta (\tau +{\tfrac{3}{4}}\frac{\varepsilon}{\rho^{2}}s) + {\tfrac{1}{4}}s\tau = - {\tfrac{1}{2}}\frac{\varepsilon}{\rho^{2}}|r|^{2} + \frac{3c}{\rho^{2}}. \end{equation} Here $\rho = \rho_{\varepsilon}$ and otherwise all metric quantities are taken w.r.t. $g = g_{\varepsilon}' $ except for the potential function $\tau ,$ which has been normalized to be scale invariant. 
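Note also the scaling behavior of the constant $c$ in (1.4) under the rescaling $g_{\varepsilon}' = \rho^{- 2}\cdot g_{\varepsilon}$: in dimension 3 one has $v' = \rho^{-3}v$, $\int |r' |^{2}dV' = \rho\int |r|^{2}dV$ and $\int (s' )^{2}dV' = \rho\int s^{2}dV$, while $\sigma = v^{1/6}{\cal S}^{2}$ is scale invariant. Hence
$$c(g_{\varepsilon}' ) = \frac{\varepsilon}{6\rho^{-3}v}\ \rho\int|r|^{2}dV + \frac{1}{12\sigma \rho^{-3}v}\ \rho\int s^{2}dV = \rho^{4}\cdot c(g_{\varepsilon}). $$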
It is necessary to divide by $\rho^{2}$ in (1.5)-(1.6), since otherwise all terms in the equations tend uniformly to 0 in $L^{2}(B_{\varepsilon}), B_{\varepsilon} = B_{x_{\varepsilon}}' (1).$ From the scaling properties of $c$ in (1.4), note that in the scale $g_{\varepsilon}' $, $c' = \rho^{4}\cdot c$, where $c = c(g_{\varepsilon}).$ Thus, the constant term in (1.5)-(1.6) satisfies \begin{equation} \label{e1.7} \frac{c}{\rho^{2}} = \frac{c(g_{\varepsilon}' )}{\rho^{2}} = \frac{\rho^{4}c(g_{\varepsilon})}{\rho^{2}} = \rho^{2}c(g_{\varepsilon}) \rightarrow 0, \ \ {\rm as} \ \ \varepsilon \rightarrow 0. \end{equation} Similarly, since ${\cal S}^{2}(g_{\varepsilon})$ is bounded, and the scalar curvature $s = s_{\varepsilon}' $ of $g_{\varepsilon}' $ is given by $s_{\varepsilon}' = \rho^{2}s_{\varepsilon},$ one sees that $s$ in (1.5)-(1.6) goes to 0 in $L^{2}(B_{\varepsilon})$ as $\varepsilon \rightarrow $ 0. Now assume that the potential function $\tau = \tau_{\varepsilon}$ is uniformly bounded in the $g_{\varepsilon}' $ ball $B_{\varepsilon}$, as $\varepsilon \rightarrow 0$. One then has three possible behaviors for the equations (1.5)-(1.6) in a limit $(N, g' , x, \tau )$ as $\varepsilon \rightarrow $ 0, (in a subsequence). The discussion to follow here is formal in that we are not concerned with the existence of such limits; this issue is discussed in detail in [An4], as is the situation where $\tau = \tau_{\varepsilon}$ is not uniformly bounded in $B_\varepsilon$ as $\varepsilon \rightarrow 0$. (It turns out that the limits below have the same form even if $\{\tau_{\varepsilon}\}$ is unbounded). {\bf Case(i).} $\varepsilon / \rho^{2} \rightarrow $ 0. In this case, the equations (1.5)-(1.6) in the limit $\varepsilon \rightarrow $ 0 take the form \begin{equation} \label{e1.8} L^{*}(\tau ) = 0, \Delta\tau = 0. \end{equation} These are the static vacuum Einstein equations, c.f. [An2,3] or [EK]. {\bf Case(ii).} $\varepsilon / \rho^{2} \rightarrow \alpha > $ 0.
In this case, the limit equations take the form \begin{equation} \label{e1.9} \alpha\nabla{\cal R}^{2} + L^{*}(\tau ) = 0, \end{equation} \begin{equation} \label{e1.10} \Delta\tau = -\frac{\alpha}{4}|r|^{2}. \end{equation} Formally, these are the equations for a critical metric $g' $ of ${\cal R}^{2}$ subject to the constraint that $s =$ 0. However there are no compact scalar-flat perturbations of $g' $ since the limit $(N, g' )$ is non-compact and thus one must impose certain boundary conditions on the comparison metrics. For example, if the limit $(N, g' )$ is complete and asymptotically flat, (1.9)-(1.10) are the equations for a critical point of ${\cal R}^{2}$ among all scalar-flat and asymptotically flat metrics with a given mass $m$. {\bf Case(iii).} $\varepsilon / \rho^{2} \rightarrow \infty .$ In this case, renormalize the equations (1.5)-(1.6) by dividing by $\varepsilon /\rho^{2}.$ Since $\tau $ is bounded, $(\rho^{2}/\varepsilon )\tau \rightarrow $ 0, and one obtains in the limit \begin{equation} \label{e1.11} \nabla{\cal R}^{2} = 0, \end{equation} \begin{equation} \label{e1.12} |r|^{2} = 0, \end{equation} so the limit metric is flat. These three cases may be summarized by the equations \begin{equation} \label{e1.13} \alpha\nabla{\cal R}^{2} + L^{*}(\tau ) = 0, \end{equation} \begin{equation} \label{e1.14} \Delta\tau = -\frac{\alpha}{4}|r|^{2}, \end{equation} where $\alpha =$ 0 corresponds to Case (i), 0 $< \alpha < \infty $ corresponds to Case (ii) and $\alpha = \infty $ corresponds to (the here trivial) Case (iii). Essentially the same discussion is valid for the scale-invariant functional \begin{equation} \label{e1.15} J_{\varepsilon} = (\varepsilon v^{1/3}{\cal R}^{2} - v^{2/3}\cdot s)|_{{\cal C}}, \end{equation} where ${\cal C} $ is the space of Yamabe metrics on $M$. The existence and general properties of minimizers of $J_{\varepsilon}$ again are discussed in [An1,\S 8II]. 
By the same considerations, one obtains as above limit equations of the form (1.13)-(1.14), with $\tau $ replaced by the potential function $- (1+h)$ from [An1,\S 8II]. Next consider briefly the scale-invariant functional \begin{equation} \label{e1.16} I_{\varepsilon}' = \varepsilon v^{1/3}\int|r|^{2} + \bigl( v^{1/3}\int (s^-)^{2}\bigr )^{1/2}, \end{equation} on the space of metrics on $M$ as above, where $s^-$ = min$(s, 0)$. This functional, (essentially), is the main focus of [An4], and we refer there for a complete discussion. c.f. also \S 5.2. The Euler-Lagrange equations of $I_{\varepsilon}^-$ are formally the same as (1.2)-(1.3) with $\tau^-$ = min$(\tau ,0)$ in place of $\tau $ and $\sigma $ replaced by $(v^{1/3}\int (s^-)^{2})^{1/2}.$ Let now $g_{\varepsilon}$ be a minimizer of $I_{\varepsilon}^-$ on $M$. Note that here $\tau^-$ is automatically bounded above, as is $\int (s^-)^{2}$ as $\varepsilon \rightarrow $ 0. However, there is no longer an apriori bound on the $L^{2}$ norm of $s$, i.e. it may well happen that $\int s^{2}(g_{\varepsilon}) \rightarrow \infty $ as $\varepsilon \rightarrow $ 0. Formally taking a blow-up limit $(N, g' , x, \tau^-)$ as $\varepsilon \rightarrow $ 0 as above leads to the analogue of the equations (1.13)-(1.14), i.e. to the limit equations \begin{equation} \label{e1.17} \alpha\nabla{\cal R}^{2} + L^{*}(\tau^-) = 0, \end{equation} \begin{equation} \label{e1.18} \Delta (\tau^- + {\tfrac{3}{4}}\alpha s) = -{\tfrac{1}{4}}\alpha |r|^{2}, \end{equation} where as before we assume that $\tau^-$ = lim $\tau_{\varepsilon}^-$ is bounded below. These equations correspond to (0.4)-(0.5) with $\omega = \tau^-,$ but with $s \geq $ 0 in place of $s =$ 0, corresponding to the fact that in blow-up limits, now only $\int (s^-)^{2} \rightarrow $ 0 while previously $\int s^{2} \rightarrow $ 0. 
While in the region $N^- = \{\tau^- < 0\}$, the equations (1.17)-(1.18) have the same form as the Cases (i)-(iii) above, in the region $N^{+} = \{s > 0\}$, (so that $\tau^- =$ 0), these equations take the form \begin{equation} \label{e1.19} \nabla{\cal R}^{2} = 0, \end{equation} \begin{equation} \label{e1.20} \Delta s = -\tfrac{1}{3}|r|^{2}, \end{equation} i.e. the ${\cal R}^{2}$ equations (0.2)-(0.3); here we have divided by $\alpha .$ The junction $\Sigma = \partial\{s = 0\}$ between the two regions $N^-$ and $N^{+}$ above is studied in [An4]. This junction may be compared with junction conditions common in general relativity, where vacuum regions of space(-time) are joined to regions containing a non-vanishing matter distribution, c.f. [W, Ch.6.2] or [MTW, Ch.21.13, 23]. This concludes the brief discussion on the origin of the equations in \S 0. The remainder of the paper is concerned with properties of their solutions. \section{Non-Existence of ${\cal R}^{2}$ Solutions.} \setcounter{equation}{0} In this section, we prove Theorem 0.1, i.e. there are no non-trivial complete ${\cal R}^{2}$ solutions with non-negative scalar curvature. The proof will proceed in several steps following in broad outline the proofs of [An1, Thms. 6.1,6.2]. We begin with some preliminary material. Let $r_{h}(x)$ and $\rho (x)$ denote the $L^{2,2}$ harmonic radius and $L^{2}$ curvature radius of $(N, g)$ at $x$, c.f. [An1, Def.3.2] for the exact definition. Roughly speaking, $r_{h}(x)$ is the largest radius of the geodesic ball at $x$ on which there exists a harmonic chart for $g$ in which the metric differs from the flat metric by a fixed small amount, say $c_{o},$ in the $L^{2,2}$ norm. Similarly, $\rho (x)$ is the largest radius on which the $L^{2}$ average of the curvature is bounded by $c_{o}\cdot \rho (x)^{-2},$ for some fixed but small constant $c_{o} > $ 0.
From the definition, \begin{equation} \label{e2.1} \rho (y) \geq dist(y, \partial B_{x}(\rho (x))), \end{equation} for all $y\in B_{x}(\rho (x)),$ and similarly for $r_{h}(x).$ The point $x$ is called (strongly) $(\rho ,d)$ buffered if $\rho (y) \geq d\cdot \rho (x),$ for all $y\in\partial B_{x}(\rho (x)),$ c.f. [An3, Def.3.7] and also [An1, \S 5]. This condition insures that there is a definite amount of curvature in $L^{2}$ away from the boundary in $B_{x}(\rho (x)).$ As shown in [An1,\S 4], the equations (0.2)-(0.3) form an elliptic system and hence satisfy elliptic regularity estimates. Thus within the $L^{2,2}$ harmonic radius $r_{h},$ one actually has $C^{\infty}$ bounds of the solution metric, and hence its curvature, away from the boundary; the bounds depend only on the size of $r_{h}^{-1}.$ Observe also that the ${\cal R}^{2}$ equations are scale-invariant. Given this regularity derived from (0.2), as mentioned in \S 0, the full equation (0.2) itself is not otherwise needed for the proof of Theorem 0.1; only the trace equation (0.3) is used from here on. For completeness, we recall some results from [An1, \S 2, \S 3] concerning convergence and collapse of Riemannian manifolds with uniform lower bounds on the $L^2$ curvature radius. Thus, suppose ${(B_{i}(R), g_i, x_i)}$ is a sequence of geodesic $R$-balls in complete non-compact Riemannian manifolds $(N_i, g_i)$, centered at base points $x_i \in N_i$. Suppose that $\rho_{i}(y_{i}) \geq \rho_{o}$, for some $\rho_{o} > 0$, for all $y_i \in B_{x_{i}}(R)$. The sequence is said to be {\it non-collapsing} if there is a constant $\nu_o > 0$ such that vol$B_{x_{i}}(1) \geq \nu_o$, for all $i$, or equivalently, the volume radius of $x_i$ is uniformly bounded below. In this case, it follows that a subsequence of ${(B_{i}(R), g_i, x_i)}$ converges in the weak $L^{2,2}$ topology to a limit ${(B_{\infty}(R), g_{\infty}, x_{\infty})}$. 
The convergence is uniform on compact subsets, and the limit is a manifold with $L^{2,2}$ Riemannian metric $g_{\infty}$, with base point $x_{\infty}$ = lim $x_i$. In case the sequence above is a sequence of ${\cal R}^2$ solutions, the convergence above is in the $C^{\infty}$ topology, by the regularity results above. The sequence as above is {\it collapsing} if vol$B_{x_{i}}(1) \rightarrow 0$, as $i \rightarrow \infty$. In this case, it follows that vol$B_{y_{i}}(1) \rightarrow 0$, for all $y_i \in B_{x_{i}}(R)$. Further, for any $\delta > 0$, and $i$ sufficiently large, there are domains $U_{i} = U_{i}(\delta)$, with $B_{x_{i}}(R - 2\delta) \subset U_i \subset B_{x_{i}}(R - \delta)$ such that, topologically, $U_i$ is a graph manifold. Thus, $U_i$ admits an F-structure, and the $g_i$-diameter of the fibers, (circles or tori), converges to 0 as $i \rightarrow \infty$. Now in case the sequence satisfies regularity estimates as stated above for ${\cal R}^2$ solutions, the curvature is uniformly bounded in $L^{\infty}$ on $U_i$. Hence, it follows from results of Cheeger-Gromov, Fukaya and Rong, (c.f. [An1, Thm.2.10]), that $U_i$ is topologically either a Seifert fibered space or a torus bundle over an interval. Further, the inclusion map of any fiber induces an injection into the fundamental group $\pi_{1}(U_i)$, and the fibers represent (homotopically) the collection of all very short essential loops in $U_i$. Hence, there is an infinite ${\Bbb Z}$ or ${\Bbb Z} \oplus {\Bbb Z}$ cover $(\widetilde U_i, g_i, \widetilde x_i)$ of $(U_i, g_i, x_i)$, ($\widetilde x_i$ a lift of $x_i$), obtained by unwrapping the fibers, which is a non-collapsing sequence. For if the lifted sequence of covers collapsed, by the same arguments again, $\widetilde U_i$, and hence $U_i$, must contain essential short loops; all of these however have already been unwrapped in $\widetilde U_i$. 
Alternatively, one may pass to sufficiently large finite covers $\bar U_i$ of $U_i$ to unwrap the collapse, in place of the infinite covers. In this collapse situation, the limit metrics $(\widetilde U_{\infty}, g_{\infty}, \widetilde x_{\infty})$, (resp. $(\bar U_{\infty}, g_{\infty}, \bar x_{\infty})$) have free isometric ${\Bbb R}$, (resp. $S^1$), actions. These results on the behavior of non-collapsing and collapsing sequences will be used frequently below. We now begin with the proof of Theorem 0.1 itself. The following lemma shows that one may assume without loss of generality that a complete ${\cal R}^{2}$ solution has uniformly bounded curvature. \begin{lemma} \label{l 2.1.} Let $(N, g)$ be a complete non-flat ${\cal R}^{2}$ solution. Then there exists another complete non-flat ${\cal R}^{2}$ solution $(N' , g' ),$ obtained as a geometric limit at infinity of $(N, g)$, which has uniformly bounded curvature, i.e. \begin{equation} \label{e2.2} |r|_{g'} \leq 1. \end{equation} \end{lemma} {\bf Proof:} We may assume that $(N, g)$ itself has unbounded curvature, for otherwise there is nothing to prove. It follows from the $C^{\infty}$ regularity of solutions mentioned above that the curvature $|r|$ is unbounded on a sequence $\{x_{i}\}$ in $(N, g)$ if and only if \begin{equation} \label{e2.3} \rho (x_{i}) \rightarrow 0. \end{equation} For such a sequence, let $B_{i} = B_{x_{i}}(1)$ and let $d_{i}(x) = dist(x, \partial B_{i}).$ Consider the scale-invariant ratio $\rho (x)/d_{i}(x),$ for $x\in B_{i},$ and choose points $y_{i}\in B_{i}$ realizing the minimum value of $\rho /d_{i}$ on $B_{i}.$ Since $\rho /d_{i}$ is infinite on $\partial B_{i}$, $y_{i}$ is in the interior of $B_{i}.$ By (2.3), we have $$\rho (y_{i})/d_{i}(y_{i}) \rightarrow 0, $$ and so in particular $\rho (y_{i}) \rightarrow $ 0.
Now consider the sequence $(B_{i}, g_{i}, y_{i}),$ where $g_{i} = \rho (y_{i})^{-2}\cdot g.$ By construction, $\rho_{i}(y_{i}) =$ 1, where $\rho_{i}$ is the $L^{2}$ curvature radius w.r.t. $g_{i}$ and $\delta_{i}(y_{i}) = dist_{g_{i}}(y_{i}, \partial B_{i}) \rightarrow \infty .$ Further, by the minimality property of $y_{i},$ \begin{equation} \label{e2.4} \rho_{i}(x) \geq \rho_{i}(y_{i})\cdot \frac{\delta_{i}(x)}{\delta_{i}(y_{i})} = \frac{\delta_{i}(x)}{\delta_{i}(y_{i})} . \end{equation} It follows that $\rho_{i}(x) \geq \frac{1}{2},$ at all points $x$ of uniformly bounded $g_{i}$-distance to $y_{i},$ (for $i$ sufficiently large, depending on $dist_{g_{i}}(x, y_{i})).$ Consider then the pointed sequence $(B_{i}, g_{i}, y_{i}).$ If this sequence, (or a subsequence), is not collapsing at $y_{i},$ then the discussion above, applied to ${(B_{y_{i}}(R_j), g_i, y_i)}$, with $R_j \rightarrow \infty$, implies that a diagonal subsequence converges smoothly to a limit $(N' , g' , y)$, $y =$ lim $y_{i}.$ The limit is a complete ${\cal R}^{2}$ solution, (since $\delta_{i}(y_{i}) \rightarrow \infty )$ satisfying $\rho \geq \frac{1}{2}$ everywhere, and $\rho (y) = 1$, since $\rho$ is continuous under smooth convergence to limits, (c.f. [An1, Thm. 3.5]). Hence the limit is not flat. By the regularity estimates above, the curvature $|r|$ is pointwise bounded above. A further bounded rescaling then gives (2.2). On the other hand, suppose this sequence is collapsing at $y_{i}.$ Then from the discussion preceding Lemma 2.1, it is collapsing everywhere within $g_{i}$-bounded distance to $x_{i}$ along a sequence of injective F-structures. 
Hence one may pass to suitable covers $\widetilde U_{i}$ of $U_i$ with $B_{x_{i}}(R_i - 1) \subset U_i \subset B_{x_{i}}(R_{i})$, for some sequence $R_{i} \rightarrow \infty $ as $i \rightarrow \infty .$ This sequence is not collapsing and thus one may apply the reasoning above to again obtain a limit complete non-flat ${\cal R}^{2}$ solution satisfying (2.2), (which in addition has a free isometric ${\Bbb R}$-action). {\qed } Let $v(r)$ = vol$B_{x_{o}}(r),$ where $B_{x_{o}}(r)$ is the geodesic $r$-ball about a fixed point $x_{o}$ in $(N, g)$. Let $J^{2}$ be the Jacobian of the exponential map exp: $T_{x_{o}}N \rightarrow N$, so that $$v(r) = \int_{S_{o}}\int_{0}^{r}J^{2}(s,\theta )dsd\theta , $$ where $S_{o}$ is the unit sphere in $T_{x_{o}}N.$ Thus, $$v' (r) = \int_{S_{o}}J^{2}(r,\theta )d\theta , $$ is the area of the geodesic sphere $S_{x_{o}}(r).$ The next result proves Theorem 0.1 under reasonably weak conditions, and will also be needed for the proof in general. \begin{proposition} \label{p 2.2.} Let (N, g) be a complete ${\cal R}^{2}$ solution on a 3-manifold N, with bounded curvature and $s \geq $ 0. Suppose there are constants $\varepsilon > $ 0 and $c < \infty $ such that \begin{equation} \label{e2.5} v(r) \leq c\cdot r^{4-\varepsilon}, \end{equation} for all $r \geq $ 1. Then (N, g) is flat. \end{proposition} {\bf Proof:} Let $t(x)$= dist$(x, x_{o})$ be the distance function from $x_{o}\in N$ and let $\eta = \eta (t)$ be a non-negative cutoff function, of compact support to be determined below, but initially satisfying $\eta' (t) \leq $ 0. Multiply (0.3) by $\eta^{4}$ and apply the divergence theorem, (this is applicable since $\eta (t)$ is a Lipschitz function on $N$), to obtain $$\int\eta^{4}|r|^{2} = 3\int<\nabla s, \nabla\eta^{4}> . $$ Now one cannot immediately apply the divergence theorem again, since $t$ and hence $\eta $ is singular at the cut locus $C$ of $x_{o}.$ Let $U_{\delta}$ be the $\delta$-tubular neighborhood of $C$ in $N$.
Then applying the divergence theorem on $N \setminus U_{\delta}$ gives $$\int_{N \setminus {U_{\delta}}}<\nabla s, \nabla\eta^{4}> = -\int_{N \setminus {U_{\delta}}}s\Delta\eta^{4} + \int_{\partial (N \setminus {U_{\delta}})}s<\nabla\eta^{4}, \nu> , $$ where $\nu $ is the unit outward normal. Since $<\nu , \nabla t> > $ 0 on $\partial (N \setminus {U_{\delta}})$ and $\eta' \leq $ 0, the hypothesis $s \geq $ 0 implies that the boundary term is non-positive. Hence, $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 3\int_{N \setminus U_{\delta}} s\Delta\eta^{4}.$$ We have $\Delta\eta^{4} = 4\eta^{3}\Delta\eta + 12\eta^{2}|\nabla \eta|^{2} \geq 4\eta^{3}\Delta\eta ,$ so that again since $s \geq $ 0, $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 12\int_{N \setminus U_{\delta}} s\eta^{3}\Delta\eta . $$ Further $\Delta\eta = \eta'\Delta t + \eta'' ,$ so that $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 12\int_{N \setminus U_{\delta}} (s\eta^{3}\eta' \Delta t + s\eta^{3}\eta''). $$ It is standard, c.f. [P,9.1.1], that off $C$, $$\Delta t = H = 2\frac{J'}{J}, $$ where $H$ is the mean curvature of $S_{x_{o}}(r)$ and $J = (J^{2})^{1/2}.$ Further, since the curvature of $(N, g)$ is bounded, standard comparison geometry, (c.f. [Ge] for example), implies that there is a constant $C < \infty$ such that \begin{equation} \label{e2.6} H(x) \leq C, \end{equation} for all $x$ outside $B_{x_{o}}(1) \subset N$. (Of course there is no such lower bound for $H$). Hence, since $s \geq 0$ and $\eta' \leq 0$, it follows that $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 12\int_{N \setminus U_{\delta}} (s\eta^{3}\eta' H^{+} + s\eta^{3}\eta''),$$ where $H^{+}$ = max$(H, 0)$. The integrand $-s\eta'H^{+}$ is non-negative and bounded.
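The Leibniz expansion $\Delta \eta^{4} = 4\eta^{3}\Delta \eta + 12\eta^{2}|\nabla \eta|^{2}$ used above is elementary, but since the cutoff argument rests on it, here is an illustrative symbolic check (not part of the argument); the flat Laplacian on ${\Bbb R}^{3}$ stands in for the Laplacian of $(N, g)$, which suffices since the identity is pointwise and purely algebraic in the first and second derivatives:

```python
# Illustrative check, not part of the paper's argument: verify
# Delta(eta^4) = 4*eta^3*Delta(eta) + 12*eta^2*|grad eta|^2
# for a generic smooth function eta, using the flat Laplacian on R^3.
import sympy as sp

x, y, z = sp.symbols('x y z')
eta = sp.Function('eta')(x, y, z)  # generic smooth cutoff function

def laplacian(f):
    # flat Laplacian on R^3
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

grad_sq = sum(sp.diff(eta, v)**2 for v in (x, y, z))  # |grad eta|^2

lhs = laplacian(eta**4)
rhs = 4*eta**3*laplacian(eta) + 12*eta**2*grad_sq
assert sp.expand(lhs - rhs) == 0
```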
Hence, since the cut locus $C$ is of measure 0, we may let $\delta \rightarrow 0$ and obtain $$\int_{N} \eta^{4}|r|^{2} \leq - 12\int_{N} (s\eta^{3}\eta' H^{+} + s\eta^{3}\eta'').$$ Now fix any $R < \infty $ and choose $\eta = \eta (t)$ so that $\eta \equiv $ 1 on $B_{x_{o}}(R), \eta \equiv $ 0 on $N \setminus B_{x_{o}}(2R), \eta' \leq $ 0, and $|\eta'| \leq c/R, |\eta''| \leq c/R^{2}.$ Using the H\"older and Cauchy inequalities, we obtain \begin{equation} \label{e2.7} \int\eta^{4}|r|^{2} \leq \mu\int\eta^{4}s^{2} + \mu^{-1}\int\eta^{2}(\eta' )^{2}(H^{+})^{2} + \mu^{-1}\int\eta^{2}(\eta'' )^{2}, \end{equation} for any $\mu > $ 0 small. Since $|r|^{2} \geq s^{2}/3,$ by choosing $\mu $ sufficiently small the first term on the right in (2.7) may be absorbed into the left. Thus we have on $B(R)$= $B_{x_{o}}(R),$ for suitable constants $c_{i}$ independent of $R$, \begin{equation} \label{e2.8} \int_{B(R)}|r|^{2} \leq c_{1}\int_{B(2R)}(R^{-2}(H^{+})^{2} + R^{-4}) \leq c_{2}R^{-2}\int_{B(2R)}(H^{+})^{2} + c_{3}R^{-\varepsilon}, \end{equation} where the last inequality uses (2.5). We now claim that there is a constant $K < \infty$, (depending on the geometry of $(N, g)$), such that \begin{equation} \label{e2.9} \Delta t(x) \cdot \rho(x) \leq K, \end{equation} for all $x \in N$ with $t(x) \geq 10$, with $x \notin C$. We will assume (2.9) for the moment and complete the proof of the result; following this, we prove (2.9).
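The absorption step above uses the pointwise inequality $|r|^{2} \geq s^{2}/3$, which is Cauchy-Schwarz applied to the eigenvalues of the Ricci tensor in dimension 3. As an illustrative numerical aside (not part of the argument), the inequality can be checked on random symmetric $3\times 3$ matrices standing in for $r$:

```python
# Illustrative check, not part of the paper's argument: in dimension 3,
# |r|^2 >= s^2/3 with s = tr(r) is Cauchy-Schwarz on the eigenvalues of
# the Ricci tensor; test it on random symmetric 3x3 matrices.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.standard_normal((3, 3))
    r = (a + a.T) / 2               # symmetric 3x3 stand-in for Ricci
    norm_sq = float(np.sum(r * r))  # |r|^2, squared Frobenius norm
    s = float(np.trace(r))          # scalar curvature s = tr(r)
    assert norm_sq >= s**2 / 3 - 1e-12
```

Equality holds exactly when $r$ is a multiple of the identity, i.e.\ for Einstein metrics; for the identity matrix both sides equal 3.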
Thus, substituting (2.9) in (2.8), and using the definition of $\rho$, we obtain $$\int_{B(R)}|r|^{2} \leq c_{4}R^{-2}\int_{B(2R)}|r| + c_{3}R^{-\varepsilon}.$$ Applying the Cauchy inequality to the $|r|$ integral then gives $$\int_{B(R)}|r|^{2} \leq c_{4}R^{-2}\bigl (\int_{B(2R)}|r|^{2} \bigr )^{1/2}vol B(2R)^{1/2} + c_{3}R^{-\varepsilon}.$$ Now from the volume estimate (2.5) and the uniform bound on $|r|$, there exists a sequence $R_i \rightarrow \infty$ and a constant $C < \infty$ such that $$\int_{B(2R_i)}|r|^{2} \leq C\int_{B(R_i)}|r|^{2}.$$ Hence, setting $R = R_i$ and combining these estimates gives $$\int_{B(R_i)}|r|^{2} \leq c_{5}\frac{vol B(2R_i)}{R_{i}^{4}}.$$ Taking the limit as $i \rightarrow \infty$ and using (2.5), it follows that $(N, g)$ is flat, as required. Thus, it remains to establish (2.9). We prove (2.9) by contradiction. Thus, suppose there is a sequence $\{x_i\} \subset N \setminus C$ such that \begin{equation} \label{e2.10} \Delta t(x_i) \cdot \rho(x_i) \rightarrow \infty, \end{equation} as $i \rightarrow \infty$. Note that (2.10) is scale invariant and that necessarily $t(x_i) \rightarrow \infty$. In fact, by (2.6), (2.10) implies that $\rho(x_i) \rightarrow \infty$ also. Note that \begin{equation} \label{e2.11} \rho(y) \leq 2t(y), \end{equation} for any $y$ such that $t(y)$ is sufficiently large, since $(N, g)$ is assumed not flat. We rescale the manifold $(N, g)$ at $x_i$ by setting $g_i = \lambda_{i}^{2} \cdot g$, where $\lambda_{i} = \Delta t(x_i)$. Thus, w.r.t. $g_i$, we have $\Delta_{g_{i}}t_{i}(x_{i}) = 1$, where $t_{i}(y)$ = $dist_{g_{i}}(y, x_o) = \lambda_i t(y)$. By the scale invariance of (2.10), it follows that \begin{equation} \label{e2.12} \rho_i(x_i) \rightarrow \infty , \end{equation} where $\rho_i = \lambda_i \cdot \rho$ is the $L^2$ curvature radius w.r.t. $g_i$. By (2.11), this implies that $t_i(x_i) \rightarrow \infty$, so that the base point $x_o$ diverges to infinity in the $\{x_i\}$ based sequence $(N, g_i, x_i)$.
Hence renormalize $t_i$ by setting $\beta_i(y) = t_i(y) - dist_{g_{i}}(x_i, x_o)$, as in the construction of Busemann functions. Thus, we have a sequence of ${\cal R}^2$ solutions $(N, g_i, x_i)$ based at $\{x_i\}$. From the discussion preceding Lemma 2.1, it follows that a subsequence converges smoothly to an ${\cal R}^2$ limit metric $(N_{\infty}, g_{\infty}, x_{\infty})$, passing to suitable covers as described in the proof of Lemma 2.1 in the case of collapse. By (2.12), it follows that $$N_{\infty} = {\Bbb R}^3, $$ (or a quotient of ${\Bbb R}^3$), and $g_{\infty}$ is the complete flat metric. Now the smooth convergence also gives $$\Delta_{g_{\infty}}\beta(x_{\infty}) = 1, $$ where $\beta$, the limit of $\beta_i$, is a Busemann function on a complete flat manifold. Hence $\beta$ is a linear coordinate function. This of course implies $\Delta_{g_{\infty}}\beta(x_{\infty}) = 0$, giving a contradiction. This contradiction then establishes (2.9). {\qed } We remark that this result mainly requires the hypothesis $s \geq $ 0 because of possible difficulties at the cut locus. There are other hypotheses that allow one to overcome this problem. For instance if $(N, g)$ is complete as above and (2.5) holds, (but without any assumption on $s$), and if there is a smooth approximation $\Roof{t}{\widetilde}$ to the distance function $t$ such that $|\Delta\Roof{t}{\widetilde}| \leq c/\Roof{t}{\widetilde},$ (for example if $|r| \leq c/t^{2}$ for some $c < \infty ),$ then $(N, g)$ is flat. The proof is the same as above, (in fact even simpler in this situation). Next we need the following simple result, which allows one to control the full curvature in terms of the scalar curvature. This result is essentially equivalent to [An1, Lemma 5.1]. \begin{lemma} \label{l 2.3.} Let $g$ be an ${\cal R}^{2}$ solution, defined in a geodesic ball $B = B_{x}(1),$ with $r_{h}(x) =$ 1.
Then for any small $\mu > $ 0, there is a constant $c_{1} = c_{1}(\mu )$ such that \begin{equation} \label{e2.13} |r|^{2}(y) \leq c_{1}\cdot ||s||_{L^{2}(B)} , \end{equation} for all $y\in B(1-\mu ) = B_{x}(1-\mu ).$ In particular, if $||s||_{L^{2}(B)}$ is sufficiently small, then $g$ is almost flat, i.e. has almost 0 curvature, in $B(1-\mu ).$ Further, if $s \geq $ 0 in $B(1)$, then there is a constant $c_{2} = c_{2}(\mu )$ such that \begin{equation} \label{e2.14} ||s||_{L^{2}(B(1-\mu ))} \leq c_{2}s(x). \end{equation} \end{lemma} {\bf Proof:} Let $\eta $ be a non-negative cutoff function satisfying $\eta \equiv $ 1 on $B(1-\frac{\mu}{2}), \eta \equiv $ 0 on $A(1-\frac{\mu}{4},1),$ and $|\nabla \eta| \leq c/\mu .$ Pair the trace equation (0.3) with $\eta^{2}$ to obtain $$\int_{B}\eta^{2}|r|^{2} = - 3\int_{B}s\Delta\eta^{2} \leq c\cdot (\int_{B}s^{2})^{1/2}(\int_{B}(\Delta\eta^{2} )^{2})^{1/2}. $$ Since $r_{h}(x) =$ 1, $\eta $ may be chosen so that the $L^{2}$ norm of $\Delta\eta^{2} $ is bounded in terms of $\mu $ only. It follows that $$\int_{B(1-\frac{\mu}{2})}|r|^{2} \leq c(\mu )||s||_{L^{2}(B)}. $$ One obtains then an $L^{\infty},$ (and in fact $C^{k,\alpha}),$ estimate for $|r|^{2}$ by elliptic regularity, as discussed preceding Lemma 2.1. For the second estimate (2.14), note that by (0.3), $s$ is a superharmonic function, assumed non-negative in $B(1)$. Since the metric $g$ is bounded in $L^{2,2}$ on $B(1)$, and hence bounded in $C^{1/2}$ by Sobolev embedding, the estimate (2.14) is an immediate consequence of the DeGiorgi-Nash-Moser estimates for non-negative supersolutions of divergence form elliptic equations, c.f. [GT,Thm.8.18]. {\qed } The behavior of the scalar curvature, and thus of the full curvature, at infinity is the central focus of the remainder of the proof. For example, Lemma 2.3 leads easily to the following special case of Theorem 0.1.
\begin{lemma} \label{l 2.4.} Suppose (N, g) is a complete ${\cal R}^{2}$ solution satisfying \begin{equation} \label{e2.15} limsup_{t\rightarrow\infty} \ t^{2}\cdot s = 0, \end{equation} where t(x) $=$ dist(x, $x_{o}).$ Then (N, g) is flat. \end{lemma} {\bf Proof:} We claim first that (2.15) implies that \begin{equation} \label{e2.16} liminf_{t\rightarrow\infty} \ \rho /t \geq c_{o}, \end{equation} for some constant $c_{o} > $ 0. For suppose (2.16) were not true. Then there is a sequence $\{x_{i}\}$ in $N$ with $t_{i} = t(x_{i}) \rightarrow \infty ,$ such that $\rho (x_{i})/t(x_{i}) \rightarrow $ 0. We may choose $x_{i}$ so that it realizes approximately the minimal value of the ratio $\rho /t$ for $ \frac{1}{2}t_{i} \leq t \leq 2t_{i},$ as in the proof of Lemma 2.1. For example, choose $x_{i}$ so that it realizes the minimal value of the ratio $$\rho (x)/dist(x, \partial A({\tfrac{1}{2}}t_{i}, 2t_{i})) $$ for $x\in A(\frac{1}{2}t_{i}, 2t_{i}).$ Such a choice of $x_{i}$ implies that $x_{i}$ is strongly $(\rho ,\frac{1}{2})$ buffered, i.e. $\forall y_{i}\in\partial B_{x_{i}}(\rho (x_{i})),$ $$\rho (y_{i}) \geq \tfrac{1}{2}\rho (x_{i}), $$ c.f. the beginning of \S 2 and compare with (2.4). Now rescale the metric $g$ by the $L^{2}$ curvature radius $\rho $ at $x_{i},$ i.e. set $g_{i} = \rho (x_{i})^{-2}\cdot g.$ Thus $\rho_{i}(x_{i}) =$ 1, where $\rho_{i} = \rho (g_{i}).$ As in the proof of Lemma 2.1, if the ball $(B_{i}, g_{i}), B_{i} = (B_{x_{i}}(\frac{11}{8}), g_{i})$ is sufficiently collapsed, pass to sufficiently large covers of this ball to unwrap the collapse, as discussed preceding Lemma 2.1. We assume this is done, and do not change the notation for the collapse case. Since we are assuming that $\rho (x_{i}) \ll t(x_{i}),$ by (2.15) and scaling properties, it follows that $s_{i},$ the scalar curvature of $g_{i},$ satisfies \begin{equation} \label{e2.17} s_{i} \rightarrow 0, \end{equation} uniformly on $(B_{x_{i}}(\frac{5}{4}), g_{i})$.
By Lemma 2.3, we obtain \begin{equation} \label{e2.18} |r_{i}| \rightarrow 0, \end{equation} uniformly on $(B_{x_{i}}(\frac{9}{8}), g_i)$. However, since $\rho_{i}(x_{i}) =$ 1, and vol$B_{x_{i}}(1) > \nu_o > 0$, the ball $(B_{x_{i}}(\frac{9}{8}), g_i)$ has a definite amount of curvature in $L^{2}.$ This contradiction gives (2.16). Now apply the same reasoning to any sequence $\{y_{i}\}$ in $N$, with $t(y_{i}) \rightarrow \infty ,$ but with respect to the blow-down metrics $g_{i} = t(y_{i})^{-2}\cdot g,$ so that by (2.16), $\rho_{i}(y_{i}) \geq c > $ 0. Since (2.17) remains valid, apply Lemma 2.3 again to obtain (2.18) on a neighborhood of fixed $g_{i}$-radius about $y_{i}.$ The estimate (2.18) applied to the original (unscaled) metric $g$ means that \begin{equation} \label{e2.19} limsup_{t\rightarrow\infty} \ t^{2}\cdot |r| = 0, \end{equation} improving the estimate (2.15). Now standard comparison estimates on the Riccati equation $H' + \frac{1}{2}H^2 \leq |r|$, (c.f. [P, Ch.9] and the proof of Prop. 2.2), show that (2.19) implies that the volume growth of $(N, g)$ satisfies $$v(t) \leq c\cdot t^{3+\varepsilon}, $$ for any given $\varepsilon > $ 0 with $c = c(\varepsilon ) < \infty .$ The maximum principle applied to the trace equation (0.3), together with (2.15), implies that $s > 0$ everywhere. Thus, the result follows from Proposition 2.2. {\qed } The proof of Theorem 0.1 now splits into two cases, following the general situation in [An1, Thms. 6.1, 6.2] respectively. The first case below can basically be viewed as a local and quantitative version of Lemma 2.4. The result roughly states that if a complete ${\cal R}^{2}$ solution with $s \geq $ 0 is weakly asymptotically flat in some direction, then it is flat. \begin{theorem} \label{t 2.5.} Let (N, g) be a complete ${\cal R}^{2}$ solution with non-negative scalar curvature.
Suppose there exists a sequence $x_{i}$ in (N, g) such that \begin{equation} \label{e2.20} \rho^{2}(x_{i})\cdot s(x_{i}) \rightarrow 0 \ \ {\rm as} \ i \rightarrow \infty . \end{equation} Then (N, g) is flat. \end{theorem} {\bf Proof:} The proof follows closely the ideas in the proof of [An1, Thms. 5.4 and 6.1]. Throughout the proof below, we let $\rho $ denote the $L^{4}$ curvature radius as opposed to the $L^{2}$ curvature radius. As noted in [An1,(5.6)], the $L^{2}$ and $L^{4}$ curvature radii are uniformly equivalent to each other on $(N, g)$, since as discussed preceding Lemma 2.1, the metric satisfies an elliptic system, and regularity estimates for such equations give $L^{4}$ bounds in terms of $L^{2}$ bounds. In particular, Lemma 2.3 holds with the $L^{4}$ curvature radius in place of the $L^{2}$ radius. Further, as discussed preceding Lemma 2.1, if a ball $B(\rho )\subset (N, g)$ is sufficiently collapsed, we will always assume below that the collapse is unwrapped by passing to the universal cover. Thus, the $L^{4}$ curvature radius and $L^{2,4}$ harmonic radius are uniformly equivalent to each other, c.f. [An1, (3.8)-(3.9)]. Let $\{x_{i}\}$ be a sequence satisfying (2.20). As in the proof of Lemma 2.4, (c.f. (2.17)ff), a subsequence of the rescaled metrics $g_{i} = \rho (x_{i})^{-2}\cdot g$ converges to a flat metric on uniformly compact subsets of $(B_{x_{i}}(1), g_{i}),$ unwrapping to the universal cover in case of collapse. Thus, the $(L^{4})$ curvature radius $\rho_{i}(y_{i})$ w.r.t. $g_{i}$ necessarily satisfies $\rho_{i}(y_{i}) \rightarrow $ 0, for some $y_{i}\in (\partial B_{x_{i}}(1), g_{i}),$ as $i \rightarrow \infty .$ Pick $i_{o}$ sufficiently large, so that $g_{i_{o}}$ is very close to the flat metric. 
We relabel by setting $g^{1} = g_{i_{o}}, q^{1} = x_{i_{o}}$ and $B = B^{1}= (B_{q^{1}}(1), g^{1}).$ For the moment, we work in the metric ball $(B^{1}, g^{1}).$ It follows that for any $\delta_{1}$ and $\delta_{2} > $ 0, we may choose $i_{o}$ such that \begin{equation} \label{e2.21} \rho (q) \leq \delta_{1}\rho (q^{1}), \end{equation} for some $q\in\partial B,$ and (from (2.20)), \begin{equation} \label{e2.22} s(q^{1}) \leq \delta_{2}, \end{equation} where $\rho $ and $s$ are taken w.r.t. $g^{1}.$ Here both $\delta_{1}$ and $\delta_{2}$ are assumed to be sufficiently small, (for reasons to follow), and further $\delta_{2}$ is assumed sufficiently small compared with $\delta_{1}$ but sufficiently large compared with $\delta_1{}^{2}.$ For simplicity, and to be concrete, we set \begin{equation} \label{e2.23} \delta_{2} = \delta_1{}^{3/2}, \end{equation} and assume that $\delta_{1}$ is (sufficiently) small. We use the trace equation (0.3), i.e. \begin{equation} \label{e2.24} \Delta s = -\tfrac{1}{3}|r|^{2}, \end{equation} on $(B, g^{1})$ to analyse the behavior of $s$ in this scale near $\partial B;$ recall that the trace equation is scale invariant. From the Green representation formula, we have for $x\in B$ \begin{equation} \label{e2.25} s(x) = \int_{\partial B}P(x,Q)d\mu_{Q} - \int_{B}\Delta s(y)G(x,y)dV_{y}, \end{equation} where $P$ is the Poisson kernel and $G$ is the positive Green's function for the Laplacian on $(B, g^{1}).$ As shown in [An1, Lemma 5.2], the Green's function is uniformly bounded in $L^{2}(B).$ The same holds for $\Delta s$ by (2.24), since the $L^{4}$ curvature radius of $g^{1}$ at $q^{1}$ is 1. Thus, the second term in (2.25) is uniformly bounded, i.e. there is a fixed constant $C_{o}$ such that \begin{equation} \label{e2.26} s(x) \leq \int_{\partial B}P(x,Q)d\mu_{Q} + C_{o}. \end{equation} The Radon measure $d\mu $ is a positive measure on $\partial B,$ since $s > $ 0 everywhere.
Further, the total mass of $d\mu $ is at most $\delta_{2},$ by (2.22). By [An1, Lemma 5.3], the Poisson kernel $P(x,Q)$ satisfies $$P(x,Q) \leq c_{1}\cdot t_{Q}(x)^{-2}, $$ where $t_{Q}(x)$ = dist$(x,Q)$ and $c_{1}$ is a fixed positive constant. Hence, for all $x\in B,$ \begin{equation} \label{e2.27} s(x) \leq c_{1}\cdot \delta_{2}\cdot t^{-2}(x) + C_{o}, \end{equation} where $t(x)$= dist$(x,\partial B).$ Now suppose the estimate (2.27) can be improved in the sense that there exist points $q^{2}\in B^{1}$ such that \begin{equation} \label{e2.28} \rho^{1}(q^{2}) \leq \delta_{1}, \end{equation} and \begin{equation} \label{e2.29} s(q^{2}) \leq \tfrac{1}{2}\delta_{2}\cdot (\rho^{1}(q^{2}))^{-2} + C_{o}, \end{equation} where $\rho^{1}(x) = \rho (x, g^{1}).$ Note that $\rho^{1}(x) \geq t(x)$ by (2.1), so that the difference between (2.27) and (2.29) is only in the factors $c_{1}$ and $\frac{1}{2}.$ We have \begin{equation} \label{e2.30} \delta_{2}\cdot (\rho^{1}(q^{2}))^{-2} \geq \delta_{2}\delta_{1}^{-2} >> 1, \end{equation} where the last estimate follows from the assumption (2.23) on the relative sizes of $\delta_{1}$ and $\delta_{2}.$ Thus, the term $C_{o}$ in (2.29) is small compared with the first term on the right in (2.29), and so \begin{equation} \label{e2.31} s(q^{2}) \leq \delta_{2}\cdot (\rho^{1}(q^{2}))^{-2}. \end{equation} We may then repeat the analysis above on the new scale $g^{2} = (\rho^{1}(q^{2}))^{-2}\cdot g^{1}$ and the $g^{2}$ geodesic ball $B^{2} = B_{q^{2}}^{2}(1),$ so that $\rho^{2}(q^{2}) =$ 1. Observe that the product $\rho^{2}\cdot s$ is scale-invariant, so that in the $g^{2}$ scale, $s$ is much smaller than $s$ in the $g^{1}$ scale. In the $g^{2}$ scale, (2.31) becomes the statement \begin{equation} \label{e2.32} s(q^{2}) \leq \delta_{2}, \end{equation} as in (2.22). We will show below in Lemma 2.6 that one may continue in this way indefinitely, i.e.
as long as there exist points $q^{k}\in B^{k-1},$ with $\rho (q^{k}) \leq \delta_{1}\rho (q^{k-1})$ as in (2.28), then there exist such points $q^{k}$ satisfying in addition \begin{equation} \label{e2.33} s(q^{k}) \leq \delta_{2}, \end{equation} where $s$ is the scalar curvature of $g^{k} = (\rho^{k-1}(q^{k}))^{-2}\cdot g^{k-1},$ as in (2.22) or (2.32). On the one hand, we claim this sequence $\{q^{k}\}$ must terminate at some value $k_{o}.$ Namely, return to the original metric $(N, g)$. By construction, we have $\rho (q^{k}) \leq \delta_1{}^{k-1}\rho (q^{1}).$ The value $\rho (q^{1})$ is some fixed number, (possibly very large), say $\rho (q^{1}) = C$, so that \begin{equation} \label{e2.34} \rho (q^{k}) \leq C\cdot \delta_1{}^{k-1} \rightarrow 0, \ \ {\rm as} \ \ k \rightarrow \infty . \end{equation} However, observe that $dist_{g}(q^{k}, q^{1})$ is uniformly bounded, independent of $k$. Since $(N, g)$ is complete and smooth, $\rho $ cannot become arbitrarily small in compact sets of $N$. Hence (2.34) prevents $k$ from becoming arbitrarily large. On the other hand, if this sequence terminates at $q^{k}, k = k_{o},$ then necessarily \begin{equation} \label{e2.35} \rho (q) \geq \delta_{1}\cdot \rho (q^{k}), \end{equation} for all $q\in\partial B^{k}.$ However the construction gives $s(q^{k}) \leq \delta_{2},$ where $s$ is the scalar curvature of $g^{k},$ with $\rho^{k}(q^{k}) =$ 1. This situation contradicts Lemma 2.3 if $\delta_{2}$ is chosen sufficiently small compared with $\delta_{1},$ i.e. in view of (2.23), $\delta_{1}$ is sufficiently small. It follows that the proof of Theorem 2.5 is completed by the following: \begin{lemma} \label{l 2.6.} Let (N, g) be an ${\cal R}^{2}$ solution, $x\in N,$ and let $g$ be scaled so that $\rho (x) =$ 1, where $\rho $ is the $L^{4}$ curvature radius.
Suppose that $\delta_{1}$ is sufficiently small, $\delta_{2} = \delta_1{}^{3/2},$ \begin{equation} \label{e2.36} s(x) \leq \delta_{2}, \end{equation} and, for some $y_{o}\in B_{x}(1),$ \begin{equation} \label{e2.37} \rho (y_{o}) \leq \delta_{1}\cdot \rho (x) = \delta_{1}. \end{equation} Then there exists an absolute constant $K < \infty $ and a point $y_{1}\in B_{x}(1),$ with $\rho (y_{1}) \leq K\cdot \rho (y_{o}),$ such that \begin{equation} \label{e2.38} s(y_{1}) \leq \delta_{2}\cdot (\rho (y_{1}))^{-2}. \end{equation} \end{lemma} {\bf Proof:} By (2.27), we have \begin{equation} \label{e2.39} s(y) \leq c_{1}\cdot \delta_{2}\cdot t^{-2}(y) + C_{o}, \end{equation} for all $y\in B.$ If there is a $y$ in $B_{y_{o}}(2\rho (y_{o})) \cap B_x(1)$ such that (2.38) holds at $y$, then we are done, so suppose there is no such $y$. Consider the collection $\beta$ of points $z\in B$ for which an opposite inequality to (2.38) holds, i.e. \begin{equation} \label{e2.40} s(z) \geq \tfrac{1}{10}\delta_{2}\cdot t^{-2}(z), \end{equation} for $t(z)$ very small; (the factor $\frac{1}{10}$ may be replaced by any other small positive constant). From (2.26), this implies that $s$ resembles a multiple of the Poisson kernel near $z$. More precisely, suppose (2.40) holds for all $z$ within a small ball $B_{z_{o}}(\nu ),$ for some $z_{o}\in\partial B.$ Given a Borel set $E \subset \partial B,$ let $m(E)$ denote the mass of the measure $d\mu $ from (2.26), of total mass at most $\delta_{2}.$ Then the estimate (2.40) implies \begin{equation} \label{e2.41} m(B_{z_{o}}(\nu )) \geq \delta_{2}\cdot \varepsilon_{o}, \end{equation} where $\nu $ may be made arbitrarily small if (2.40) holds for $z\in\beta$ and $t(z)$ is sufficiently small. The constant $\varepsilon_{o}$ depends only on the choice of $\frac{1}{10}$ in (2.40). 
Thus, part of $d\mu $ is weakly close to a multiple of the Dirac measure at some point $z_{o}\in\partial B$ near $\beta$; (the Dirac measure at $z_{o}$ generates the Poisson kernel $P(x, z_{o})).$ The idea now is that there can be only a bounded number $n_{o}$ of points satisfying this property, with $n_{o}$ depending only on the ratio $c_{1}/\varepsilon_{o}.$ Thus we claim that there is a constant $K_{1} < \infty ,$ depending only on the choice of $\frac{1}{10}$ in (2.40), and a point $p_{1}\in\partial B$ such that \begin{equation} \label{e2.42} dist(p_{1},y_{o}) \leq K_{1}\cdot \rho (y_{o}), \end{equation} and \begin{equation} \label{e2.43} m(B_{p_{1}}(\rho (p_{1}))) \leq \delta_{2}\cdot \varepsilon_{o}. \end{equation} To see this, choose first $p'\in\partial B(1),$ as close as possible to $y_{o}$ such that \begin{equation} \label{e2.44} B_{p'}(\tfrac{1}{2}\rho (p' ))\cap B_{y_{o}}(\rho (y_{o})) = \emptyset . \end{equation} Note that in general $$\rho (p' ) \leq dist(p' , y_{o}) + \rho (y_{o}), $$ so that (2.44) implies $$dist(p' , y_{o}) \leq 3\rho (y_{o}), $$ and thus \begin{equation} \label{e2.45} \rho (p' ) \leq 4\rho (y_{o}). \end{equation} If $p' $ satisfies (2.43), then set $p_{1} = p' .$ If not, so $m(B_{p'}(\rho (p' ))) > \delta_{2}\cdot \varepsilon_{o},$ then repeat this process with $p' $ in place of $y_{o}.$ Since the total mass is at most $\delta_{2},$ this can be continued only a bounded number $K_{1}$ of times. Clearly, $$C^{-1}\rho (y_{o}) \leq \rho (p_{1}) \leq C\cdot \rho (y_{o}), $$ where $C = C(K_{1}).$ Now choose $y_{1}\in B_{x}(1)\cap B_{p_{1}}(\rho (p_{1})),$ say with $t(y_{1}) = \frac{1}{2}\rho (p_{1}).$ For such a choice, we then have $$s(y_{1}) \leq \tfrac{1}{10}\delta_{2}\rho (y_{1})^{-2}, $$ and the result follows. {\qed } Lemma 2.6 also completes the proof of Theorem 2.5. Finally consider the complementary case to Theorem 2.5. This situation is handled by the following result, which shows that the assumption (2.20) must hold.
This result generalizes [An1, Thm.6.2]. \begin{theorem} \label{t 2.7.} Let $(N, g)$ be a complete ${\cal R}^{2}$ solution with non-negative scalar curvature and uniformly bounded curvature. Then \begin{equation} \label{e2.46} liminf_{t\rightarrow\infty} \ \rho^{2}s = 0. \end{equation} \end{theorem} The proof of Theorem 2.7 will proceed by contradiction in several steps. Thus, we assume throughout the following that there is some constant $d_{o} > $ 0 such that, for all $x\in (N, g)$, \begin{equation} \label{e2.47} s(x) \geq d_{o}\cdot \rho (x)^{-2}. \end{equation} Note first that for any complete non-flat manifold, $\rho (x) \leq 2t(x)$ for $t(x)$ sufficiently large, (as in (2.11)), so that (2.47) implies, for some $d > $ 0, \begin{equation} \label{e2.48} s(x) \geq d\cdot t(x)^{-2}. \end{equation} For reasons to follow later, we assume that $N$ is simply connected, by passing to the universal cover if it is not. Note that (2.47) also holds on any covering space, (with a possibly different constant, c.f. [An1, (3.8)]). Consider the conformally equivalent metric \begin{equation} \label{e2.49} \Roof{g}{\widetilde} = s\cdot g. \end{equation} The condition (2.48) guarantees that $(N, \Roof{g}{\widetilde})$ is complete. (Recall that by the maximum principle, $s > $ 0 everywhere, so that $\Roof{g}{\widetilde}$ is well-defined). A standard computation of the scalar curvature $\Roof{s}{\widetilde}$ of $\Roof{g}{\widetilde},$ c.f. [B, Ch.1J] or [An1, (5.18)], gives \begin{equation} \label{e2.50} \Roof{s}{\widetilde} = 1 +{\tfrac{2}{3}}\frac{|r|^{2}}{s^{2}} + {\tfrac{3}{2}}\frac{|\nabla s|^{2}}{s^{3}} \geq 1. \end{equation} Thus, $\Roof{g}{\widetilde}$ has uniformly positive scalar curvature. We claim that $(N, \Roof{g}{\widetilde})$ has uniformly bounded curvature. To see this, from formulas for the behavior of curvature under conformal changes, c.f. 
again [B, Ch.1J], one has \begin{equation} \label{e2.51} |\Roof{r}{\widetilde}|_{\Roof{g}{\widetilde}} \leq c_{1}\frac{|r|}{s} + c_{2}\frac{|D^{2}s|}{s^{2}} + c_{3}\frac{|\nabla s|^{2}}{s^{3}}, \end{equation} for some absolute constants $c_{i}.$ Here, the right side of (2.51) is w.r.t. the $g$ metric. The terms on the right on $(N, g)$ are all scale-invariant, so we may estimate them at a point $x\in N$ with $g$ scaled so that $\rho (x, g) = 1$. By assumption (2.47), it follows that $s$ is uniformly bounded below in $B(\frac{1}{2})$ = $B_{x}(\frac{1}{2})$. By Lemma 2.3, $|r|,$ and so also $s$, is uniformly bounded above in $B(\frac{1}{4}).$ Similarly, elliptic regularity for ${\cal R}^{2}$ solutions on $B(\frac{1}{2})$ implies that $|D^{2}s|$ and $|\nabla s|^{2}$ are bounded above on $B(\frac{1}{4}).$ Hence the claim follows. Now a result of Gromov-Lawson, [GL, Cor. 10.11] implies, since $N$ is simply connected, that the 1-diameter of $(N, \Roof{g}{\widetilde})$ is at most $12\pi .$ More precisely, let $\Roof{t}{\tilde}(x) = dist_{\Roof{g}{\tilde}}(x, x_{o}).$ Let $\Gamma = N/\sim ,$ where $x \sim x' $ if $x$ and $x' $ are in the same arc-component of a level set of $\Roof{t}{\tilde}.$ Then $\Gamma $ is a locally finite metric tree, for which the projection $\pi : N \rightarrow \Gamma $ is distance non-increasing, c.f. [G, App.1E]. The Gromov-Lawson result states that the diameter of any fiber $F(x) =\pi^{-1}(x)$ in $(N, \Roof{g}{\widetilde})$ is at most $12\pi .$ Since the curvature of $\Roof{g}{\widetilde}$ is uniformly bounded, it follows that the area of these fibers is also uniformly bounded. 
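As an illustrative aside (not part of the argument), the formula (2.50) can be recovered algebraically from the standard 3-dimensional conformal transformation law $\Roof{s}{\widetilde} = u^{-5}(-8\Delta u + su)$ for $\Roof{g}{\widetilde} = u^{4}g$, applied with $u = s^{1/4}$ so that $\Roof{g}{\widetilde} = s\cdot g$ as in (2.49), together with the chain rule for $\Delta (s^{1/4})$ and the trace equation (0.3). The following symbolic check carries out this algebra; the symbols $G$ and $R2$ stand for $|\nabla s|^{2}$ and $|r|^{2}$:

```python
# Consistency check (not part of the paper's argument) of formula (2.50).
# Assumption: the standard 3-dimensional transformation law
# s~ = u^{-5} (-8 Lap(u) + s u) for g~ = u^4 g, applied with u = s^{1/4}.
import sympy as sp

s, G, R2 = sp.symbols('s G R2', positive=True)  # s, |grad s|^2, |r|^2

# Chain rule: Lap(s^{1/4}) = (1/4) s^{-3/4} Lap(s) - (3/16) s^{-7/4} |grad s|^2,
# with Lap(s) = -|r|^2/3 by the trace equation (0.3):
lap_u = sp.Rational(1, 4)*s**sp.Rational(-3, 4)*(-R2/3) \
        - sp.Rational(3, 16)*s**sp.Rational(-7, 4)*G

u = s**sp.Rational(1, 4)
s_tilde = u**(-5)*(-8*lap_u + s*u)       # conformal transformation law

# Target: formula (2.50), s~ = 1 + (2/3)|r|^2/s^2 + (3/2)|grad s|^2/s^3
target = 1 + sp.Rational(2, 3)*R2/s**2 + sp.Rational(3, 2)*G/s**3
assert sp.simplify(s_tilde - target) == 0
```

In particular, setting $G = R2 = 0$ recovers $\Roof{s}{\widetilde} = 1$, consistent with the lower bound in (2.50).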
In particular, for any $x$, \begin{equation} \label{e2.52} vol_{\Roof{g}{\tilde}}B_{x}(r) \leq C_{o}\cdot L_{x}(r), \end{equation} where $L_{x}(r)$ is the length of the $r$-ball about $x$ in $\Gamma .$ Further, the uniform curvature bound on $\Roof{g}{\widetilde}$ implies there is a uniform bound $Q$ on the number of edges $E \subset \Gamma $ emanating from any vertex $v\in\Gamma .$ Note that the distance function $\Roof{t}{\tilde}$ gives $\Gamma $ the structure of a directed tree. Then by construction, at any edge $E \subset\Gamma $ terminating at a vertex $v$, either $\Gamma $ terminates, or there are at least two new edges initiating at $v$. Observe that any point $e$ in an edge $E$, (say not a vertex), divides $\Gamma $ into two components, the inward and outward, with the former containing the base point $x_{o}$ and the latter its complement $\Gamma_{e}.$ The outgoing subtree or branch $\Gamma_{e}$ may have infinite length, in which case it gives an end of $N$, or may have finite length. The remainder of the proof needs to be separated into several parts according to the complexity of the graph $\Gamma .$ \begin{lemma} \label{l 2.8.} Suppose that (2.47) holds. Then the graph $\Gamma $ must have an infinite number of edges. In particular, $N$ has an infinite number of ends. \end{lemma} {\bf Proof:} Suppose that $\Gamma $ has only a finite number of edges. It follows that $\Gamma $ has at most linear growth, i.e. \begin{equation} \label{e2.53} L_{x}(r) \leq C_{1}\cdot r, \end{equation} for some fixed $C_{1} < \infty .$ Returning to the metric $g$ on $N$, we have $g \leq C_{2}\cdot t^{2}\Roof{g}{\widetilde}.$ This, together with (2.52) and (2.53), implies that \begin{equation} \label{e2.54} vol_{g}(B_{x_{o}}(r)) \leq C_{3}\cdot r^{3}, \end{equation} and so Proposition 2.2 implies that $(N, g)$ is flat. This of course contradicts (2.47).
Since the graph $\Gamma$ is a metric tree, and any vertex has at least two outgoing edges, it follows that $\Gamma$ has infinitely many ends. Hence, by construction, $N$ also has infinitely many ends. {\qed } The preceding argument will be generalized below to handle the situation when $\Gamma $ has infinitely many edges. To do this however, we first need the following preliminary result. Let $A_{c} = A_{c}(r)$ be any component of the annulus $\Roof{t}{\tilde}^{-1}(r, r+1)$ in $(N, \Roof{g}{\widetilde}).$ \begin{lemma} \label{l 2.9.} Assume the hypotheses of Theorem 2.7 and (2.47). {\bf (i).} There exists a fixed constant $d < \infty $ such that \begin{equation} \label{e2.55} sup_{A_{c}}s \leq d\cdot inf_{A_{c}}s. \end{equation} {\bf (ii).} Let $e\in\Gamma $ be any point in an edge for which the outgoing branch $\Gamma_{e}$ is infinite, giving an end $N_{e}$ of $N$. Then \begin{equation} \label{e2.56} inf_{N_{e}}s = 0. \end{equation} \end{lemma} {\bf Proof: (i).} We work on the manifold $(N, \Roof{g}{\widetilde})$ as above. From standard formulas for the behavior of $\Delta $ under conformal changes, c.f. [B, Ch.1J], we have $$\Roof{\Delta}{\widetilde}s = \frac{1}{s}\Delta s + \frac{1}{2s^{2}}|\nabla s|^{2} = \frac{1}{s}\Delta s + \frac{1}{2s}|\Roof{\nabla}{\widetilde}s|^{2}, $$ where the Laplacian on the left and the gradient norm in the last term are both taken in the $\Roof{g}{\widetilde}$ metric. Since $$\Roof{\Delta}{\widetilde}(s^{1/2}) = {\tfrac{1}{2}}s^{-1/2}\bigl(\Roof{\Delta}{\widetilde}s - \frac{1}{2s}|\Roof{\nabla}{\widetilde}s|^{2}\bigr) , $$ we obtain $$\Roof{\Delta}{\widetilde}(s^{1/2}) = \tfrac{1}{2}s^{-3/2}\Delta s = - \tfrac{1}{6}s^{-3/2}|r|^{2} < 0. $$ Since $(N, \Roof{g}{\widetilde})$ has uniformly bounded geometry, the DeGiorgi-Nash-Moser estimate for supersolutions, c.f. [GT, Thm.
8.18] implies that $$\bigl(\frac{c}{vol_{\Roof{g}{\tilde}}B}\int_{B}s^{p}d\Roof{V}{\tilde}_{g}\bigr)^{1/p} \leq inf_{B' }s, $$ for any concentric $\Roof{g}{\widetilde}$-geodesic balls $B' \subset B = B(1)$ and $p < 3/2,$ where $c$ depends only on $dist_{\Roof{g}{\tilde}}(\partial B,\partial B' )$ and $p$. In particular for any component $A_{c}=A_{c}(r)$ of the annulus $\Roof{t}{\tilde}^{-1}(r,r+1)$ in $(N,\Roof{g}{\widetilde}),$ one has $$\bigl(\frac{1}{vol_{\Roof{g}{\tilde}}A_{c}}\int_{A_{c}}s^{p}dV_{\Roof{g}{\tilde}}\bigr)^{1/p} \leq c\cdot inf_{A_{c}' }s, $$ where $A_{c}' $ is, say, of half the width of $A_{c}.$ Converting this back to $(N, g)$ gives, after a little calculation, $$\bigl(\frac{1}{vol_{g}A_{c}}\int_{A_{c}}s^{3-\varepsilon}dV_{g}\bigr)^{1/(3-\varepsilon )} \leq c\cdot inf_{A_{c}' }s, $$ for any $\varepsilon > $ 0, with $c = c(\varepsilon ).$ By Lemma 2.3, it follows that the $L^{\infty}$ norm of $|r|$, and hence of $s$, is bounded on $A_{c}$ by the average $L^{2}$ norm of $s$, which thus gives (2.55). {\bf (ii).} We first note that, in $(N, g)$, \begin{equation} \label{e2.57} liminf_{t\rightarrow\infty} \ s = 0. \end{equation} This follows easily from the maximum principle at infinity. To see this, let $\{x_{i}\}$ be any minimizing sequence for $s$, so that $s(x_{i}) \rightarrow inf_{N }s.$ Since $(N, g)$ has bounded curvature, it is clear that $(\Delta s)(x_{i}) \geq -\varepsilon_{i},$ for some sequence $\varepsilon_{i} \rightarrow $ 0. The trace equation (0.3) then implies that $|r|^{2}(x_{i}) \rightarrow 0$, which, since $s^{2} \leq 3|r|^{2}$, implies $s(x_{i}) \rightarrow 0.$ Essentially the same argument proves (2.56). Briefly, if (2.56) were not true, one may take any sequence $x_{i}$ going to infinity in $\Gamma_{e}$ and consider the pointed sequence $(N, g, x_{i}).$ Since the curvature is uniformly bounded, a subsequence converges to a complete limit $(N' , g' , x)$, (passing as usual to sufficiently large covers in case of collapse), so that (2.57) holds on the limit.
This in turn implies that (2.56) must also hold on $(N, g)$ itself. {\qed } We are now in position to understand in more detail the structure of $\Gamma .$ \begin{lemma} \label{l 2.10.} Under the assumptions of Theorem 2.7 and (2.47), there is a uniform upper bound on the distance between nearest vertices in $\Gamma .$ Thus for any vertex $v\in\Gamma ,$ there exists $v'\in\Gamma ,$ with $v' > v$ in terms of the direction on $\Gamma ,$ such that \begin{equation} \label{e2.58} dist_{\Gamma}(v, v' ) \leq D, \end{equation} for some fixed $D < \infty .$ \end{lemma} {\bf Proof:} This is proved by contradiction, so suppose that there is some sequence of edges $E_{i}$ in $\Gamma $ of arbitrarily long length, or an edge $E$ of infinite length. Let $x_{i}$ be the center point of $E_{i}$ in $N$, or a divergent sequence in $E$ in the latter case, and consider the pointed manifolds $(N, \Roof{g}{\widetilde}, x_{i}).$ This sequence has uniformly bounded curvature, and a uniform lower bound on $vol_{\Roof{g}{\tilde}}B_{x_{i}}(1),$ since if the sequence volume collapsed somewhere, $(N, \Roof{g}{\widetilde})$ would have regions of arbitrarily long diameter which have the structure of a Seifert fibered space, (c.f. [An1, Thm. 2.10]), which, in view of (2.50), contradicts [GL, Cor. 10.13]. It follows that a subsequence converges to a complete, non-compact manifold $(N' , \Roof{g}{\widetilde}', x)$ with uniformly bounded 1-diameter, uniformly positive scalar curvature, and within bounded Gromov-Hausdorff distance to a line. Consider the same procedure for the pointed sequence $(N, g_{i}, x_{i}),$ where $g_{i} = \rho (x_{i})^{-2}\cdot g.$ There are now two possibilities for the limiting geometry of $(N, g_i, x_i)$, according to whether $\liminf \rho (x_{i})/t(x_{i}) = 0$ or $\rho (x_{i})/t(x_{i}) \geq \mu_o $, for some $\mu_o > 0$, as $i \rightarrow \infty$. For clarity, we separate the discussion into these two cases.
{\bf (a).} Suppose that $\liminf \rho (x_{i})/t(x_{i}) = 0$ and choose a subsequence, also called $\{x_i\}$, such that $\rho (x_{i})/t(x_{i}) \rightarrow 0$. This is again similar to the situation where (2.16) does not hold. Arguing in exactly the same way as in this part of the proof of Lemma 2.4, it follows that one obtains smooth convergence of a further subsequence to a limit $(\bar N, \bar g, \bar x)$. Here, as before, one must pass to suitable covers of larger and larger domains to unwrap a collapsing sequence. The limit $(\bar N, \bar g)$ is a complete, non-flat ${\cal R}^2$ solution. Note that the assumption (2.47) is scale-invariant, invariant under coverings, (c.f. the statement following (2.48)), and invariant under the passage to geometric limits. Hence the estimate (2.47) holds on $(\bar{N}, \bar{g}).$ Now by construction, the graph $\bar{\Gamma}$ associated to $\bar{N}$ is a single line or edge, with no vertices. The argument in Lemma 2.8 above then proves (2.54) holds on $(\bar{N}, \bar{g})$, so that Proposition 2.2 implies that $(\bar{N},\bar{g})$ is flat, contradicting (2.47). {\bf (b).} Suppose that $\rho(x_i)/t(x_i) \geq \mu_o > 0$, for all $i$. In this case, the center points $x_{i}$ remain within uniformly bounded $g_{i}$-distance to the initial vertex $v_{i}$ of $E_{i}.$ The scalar curvature $s_i$ of $g_i$ goes to infinity in a small $g_i$-tubular neighborhood of $v_{i}.$ However, the curvature of $(N, g_i, x_i)$ is uniformly bounded outside the unit ball $(B_{v_{i}}(1), g_i)$ and hence in this region one obtains smooth convergence to an incomplete limit manifold $(\bar N', \bar g', \bar x')$, again passing to sufficiently large finite covers to unwrap any collapse. The limit $(\bar N', \bar g')$ is complete away from a compact boundary, (formed by a neighborhood of $\{v_{i}\}$). As in Case (a), the limit is a non-flat ${\cal R}^2$ solution satisfying (2.47), and the associated graph $\bar \Gamma'$ consists of a single ray.
Hence (2.56) and (2.57) hold on $\bar N'$. Since $\bar \Gamma'$ is a single ray, the annuli $A$ in Lemma 2.9 are all connected and thus (2.56) and (2.57) imply that $\bar s' \rightarrow 0$ uniformly at infinity in $\bar N'$. Hence there is a compact regular level set $L = \{s = s_{o}\}, s_{o} > $ 0, of $s$ for which the gradient $\nabla s|_{L}$ points {\it out} of $U$, for $U = \{s \leq s_{o}\}.$ Observe that $U$ is cocompact in $\bar N'.$ Now return to the proof of Proposition 2.2, and the trace equation (0.3). Let $\eta $ be a cutoff function as before, but now with $\eta \equiv $ 1 on a neighborhood of $L$. Integrate as before, but over $U$ in place of $N$ to obtain $$\int_{U}\eta|r|^{2} = - 3\int_{U}\eta\Delta s = - 3\int_{U}s\Delta\eta - 3\int_{L}\eta<\nabla s, \nu> + \ 3\int_{L}s<\nabla\eta , \nu> , $$ where $\nu $ is the outward unit normal. Since $\eta \equiv $ 1 near $L$, $<\nabla\eta , \nu> =$ 0. Since $\nabla s|_{L} $ points out of $U$, $<\nabla s, \nu> > $ 0. Hence, $$\int_{U}\eta|r|^{2} \leq - 3\int_{U}s\Delta\eta . $$ From the bound (2.54) obtained as in the proof of Lemma 2.8, the proof of Proposition 2.2 now goes through without any differences and implies that $(\bar{N}' , \bar{g}' )$ is flat in this case also, giving a contradiction. {\qed } Next, we observe that a similar, but much simpler, argument shows that there is a constant $D < \infty $ such that for any branch $\Gamma_{e},$ either \begin{equation} \label{e2.59} L(\Gamma_{e}) \leq D, \ \ {\rm or} \ \ L(\Gamma_{e}) = \infty .
\end{equation} For suppose there were arbitrarily long but finite branches $\Gamma_{i},$ with length $L(\Gamma_{i}) \rightarrow \infty $ as $i \rightarrow \infty ,$ and starting at points $e_{i}.$ Then the same argument proving (2.56) implies that $inf_{N_{i}}s$ cannot be achieved at or near $e_{i},$ if $i$ is sufficiently large; here $N_{i}$ is the part of $N$ corresponding to $\Gamma_{i}.$ Thus $inf_{N_{i}}s$ is achieved in the interior of $N_{i}.$ Since $\Gamma_{i}$ is finite, and thus $\Gamma_{i}$ and $N_{i}$ are compact, this contradicts the minimum principle for the trace equation (0.3). Given Lemmas 2.9 and 2.10, we now return to the situation following Lemma 2.8 and assume that $(N, g)$ satisfies the assumptions of Theorem 2.7 and (2.47). We claim that the preceding arguments imply the graph $\Gamma $ of $N$ must have exponential growth, in the strong sense that every branch $\Gamma_{e} \subset \Gamma $ also has exponential growth. Equivalently, we claim that given any point $e$ in an edge $E\subset\Gamma ,$ there is some vertex $v'\in\Gamma_{e},$ (so $v' > e$), within fixed distance $D$ to $e$, such that at least two outgoing edges $e_{1}, e_{2}$ from $v' $ have infinite subbranches $\Gamma_{e_{1}}, \Gamma_{e_{2}} \subset \Gamma_{e}.$ For if this were not the case, then there would exist points $e_{i}\in\Gamma $ and branches $\Gamma_{e_{i}} \subset \Gamma $ and $D_{i} \rightarrow \infty ,$ such that all subbranches of $\Gamma_{e_{i}}$ starting within distance $D_{i}$ to $e_{i}$ are finite. By (2.59), all subbranches then have a uniform bound $D$ on their length. It follows that, as in the proof of Lemma 2.10, one generates a geometric limit graph with at most linear growth, which, as before, gives a contradiction. Hence all branches $\Gamma_{e}$ have a uniform rate of exponential growth, i.e.
for all $r$ large, \begin{equation} \label{e2.60} L(B_{e}(r)) \geq e^{d\cdot r}, \end{equation} for some $d > $ 0, where $B_{e}(r)$ is the ball of radius $r$ in $\Gamma_{e}$ about some point in $e$. Since $(N, \Roof{g}{\widetilde})$ has a uniform lower bound on its injectivity radius, $(N_{e}, \Roof{g}{\widetilde})$ satisfies \begin{equation} \label{e2.61} vol_{\Roof{g}{\tilde}}(B_{e}(r)) \geq e^{d\cdot r}, \end{equation} for some possibly different constant $d$. Further, since $g \geq c\cdot \Roof{g}{\widetilde}$ ($s$ being bounded above), (2.61) also holds for $g$, i.e. \begin{equation} \label{e2.62} vol_{g}(B_{e}(r)) \geq e^{d\cdot r}, \end{equation} again for some possibly different constant $d > 0$. However, by (2.57) and (2.55), (and the maximum principle for the trace equation), we may choose a branch $\Gamma_{e}$ such that $s \leq \delta $ on $N_{e},$ for any prescribed $\delta > $ 0. Lemma 2.3 then implies that \begin{equation} \label{e2.63} |r| \leq \delta_{1} = \delta_{1}(\delta ), \end{equation} everywhere on $N_{e}.$ From standard volume comparison theory, c.f. [P, Ch.9], (2.63) implies that the volume growth of $N_{e}$ satisfies \begin{equation} \label{e2.64} vol_{g}(B_{e}(r)) \leq e^{\delta_{2}\cdot r}, \end{equation} where $\delta_{2}$ is small if $\delta_{1}$ is small. Choosing $\delta $ sufficiently small, this contradicts (2.62). This final contradiction proves Theorem 2.7. {\qed } Theorems 2.5 and 2.7 together prove Theorem 0.1. \section{Regularity and A Priori Estimates for ${\cal R}_{s}^{2}$ Solutions.} \setcounter{equation}{0} In this section we prove the interior regularity of weak ${\cal R}_{s}^{2}$ solutions as well as a priori estimates for families of ${\cal R}_{s}^{2}$ solutions. These results will be needed in \S 4, and also in [An4].
These are local questions, so we work in a neighborhood of an arbitrary point $x\in N.$ We will assume that $(N, g, x)$ is scaled so that $\rho (x) =$ 1, passing to the universal cover if necessary, i.e. if the metric is sufficiently collapsed in $B_{x}(\rho (x)).$ In particular, the $L^{2,2}$ geometry of $g$ is uniformly controlled in $B = B_{x}(1).$ The proof of regularity is similar to the proof of the smooth regularity of ${\cal R}^{2}$ solutions in [An1,\S 4] or that of critical metrics of $I_{\varepsilon},$ for any given $\varepsilon > $ 0, (c.f. \S 1) in [An1, \S 8]. The proof of a priori estimates proceeds along roughly similar lines, the main difference being that $\alpha $ is not fixed, but can vary over any value in $[0, \infty ).$ Thus one needs uniform estimates, independent of $\alpha .$ To handle this, especially when $\alpha $ is small, one basically uses the interaction of the terms $\alpha\nabla{\cal R}^{2}$ and $L^{*}\omega .$ We assume that $g$ is an $L^{2,2}$ metric satisfying the ${\cal R}_{s}^{2}$ equations \begin{equation} \label{e3.1} \alpha\nabla{\cal R}^{2} + L^{*}(\omega ) = 0, \end{equation} \begin{equation} \label{e3.2} \Delta\omega = -\frac{\alpha}{4}|r|^{2}, \end{equation} weakly in $B$, with scalar curvature $s \equiv $ 0 (in $L^{2})$ and potential $\omega\in L^{2}.$ If $\alpha =$ 0, we have assumed, c.f. \S 0, that $\omega $ is not identically 0 on $B$. \begin{theorem} \label{t 3.1.} Let $(g, \omega )$ be a weak solution of the ${\cal R}_{s}^{2}$ equations. Then $g$ and $\omega $ are $C^{\infty}$ smooth, in fact real-analytic, in $B$. \end{theorem} For clarity, the proof will proceed in a sequence of Lemmas. These Lemmas will hold on successively smaller concentric balls $B \supset B(r_{1}) \supset B(r_{2}),$ etc., whose ratio $r_{i+1}/r_{i}$ is a definite but arbitrary constant $< $ 1, but close to 1. (The estimates will then depend on this ratio). In other words, we are only considering the interior regularity problem.
To simplify notation, we will ignore explicitly stating the size of the ball at each stage and let $\bar{B}$ denote a suitable ball in $B$, with $d = dist(\partial\bar{B}, \partial B).$ Further, $c$ will always denote a constant independent of $\alpha ,$ which is either absolute, or whose dependence is explicitly stated. The value of $c$ may change from line to line or even from one inequality to the next. We recall the Sobolev embedding theorem, (in dimension 3), c.f. [Ad, Thm. 7.57], \begin{equation} \label{e3.3} L^{k,p} \subset L^{m,q}, \ \ {\rm provided} \ \ k - m < 3/p \ \ {\rm and} \ \ 1/p - (k-m)/3 < 1/q, \end{equation} $$L^{1} \subset H^{t-2}, \ \ {\rm any} \ \ t < \frac{1}{2}, $$ where $H^{t},$ (resp. $H_{o}^{t}), t > $ 0, is the Sobolev space of functions (of compact support) with '$t$' derivatives in $L^{2}$ and $H^{-t}$ is the dual of $H_{o}^{t},$ c.f. [Ad, Thm.3.10], [LM, Ch.1.12]. Let $\omega_{o}$ be defined by \begin{equation} \label{e3.4} \omega_{o} = ||\omega||_{L^{2}(B)}. \end{equation} \begin{lemma} \label{l 3.2.} There is a constant $c = c(d)$ such that \begin{equation} \label{e3.5} ||\alpha|r|^{2}||_{L^{1}(\bar{B})} \leq c\cdot \omega_{o}. \end{equation} \end{lemma} {\bf Proof:} This follows immediately from the trace equation (3.2), by pairing it with a suitable smooth cutoff function $\eta $ of compact support in $B$, with $\eta \equiv $ 1 on $\bar{B}$ and using the self-adjointness of $\Delta .$ Since the $L^{2}$ norm of $\Delta\eta $ is bounded, the left side of (3.2) then becomes bounded by the $L^{2}$ norm of $\omega .$ (This argument is essentially the same as the proof of (2.13)). {\qed } \begin{lemma} \label{l 3.3.} For any $t < \frac{1}{2}, \omega\in H^{t}(\bar{B})$, and there is a constant $c = c(t, d)$ such that \begin{equation} \label{e3.6} ||\omega||_{H^{t}(\bar B)} \leq c\cdot \omega_{o}. \end{equation} \end{lemma} {\bf Proof:} This also follows from the trace equation (3.2).
Namely, Lemma 3.2 implies that the right side of (3.2) is bounded in $L^{1}$ by $\omega_{o}.$ By Sobolev embedding, $L^{1}\subset H^{t-2},$ for any $t < \frac{1}{2},$ and we may consider the Laplacian as an operator $\Delta : H^{t} \rightarrow H^{t-2},$ c.f. [LM, Ch. 2.7] and also [An1, \S 4]. Elliptic theory for $\Delta $ implies that $||\omega||_{H^{t}}$ is bounded by the $H^{t-2}$ norm, and thus $L^{1}$ norm, of the right side and the $L^{2}$ norm of $\omega ,$ each of which is bounded by $\omega_{o}.$ {\qed } \begin{lemma} \label{l 3.4.} The tensor $\alpha r$ is in $H^{t}(\bar{B})$ and there is a constant $c = c(t, d)$ such that \begin{equation} \label{e3.7} ||\alpha r||_{H^{t}(\bar B)} \leq c\cdot \omega_{o}. \end{equation} \end{lemma} {\bf Proof:} This follows exactly the arguments of [An1, \S 4], so we will be brief. Write equation (3.1) as $$D^{*}D\alpha r = Q, $$ where $Q$ consists of all the other terms in (3.1). Using (3.5) and (3.6), it follows that $D^{*}D\alpha r$ is bounded in $H^{t-2}$ by $\omega_{o},$ (c.f. [An1, \S 4] for the details), and the ellipticity of $D^{*}D$ as in Lemma 3.3 above gives the corresponding bound on the $H^{t}$ norm of $\alpha r.$ {\qed } From Sobolev embedding (3.3), it follows that \begin{equation} \label{e3.8} ||\alpha r||_{L^{3-\mu}} \leq c\omega_{o}, \end{equation} for any given $\mu = \mu (t) > $ 0. Here and in the following, the norms are on balls $\bar B$, possibly becoming successively smaller. Using the H\"older inequality, we have \begin{equation} \label{e3.9} \int (\alpha|r|^{2})^{p} \leq \bigl(\int (\alpha|r|)^{pr}\bigr)^{1/r}\bigl(\int|r|^{pq}\bigr)^{1/q}, \end{equation} where $1/r + 1/q = 1,$ so that choosing $pr = 3-\mu $ and $pq = 2$ implies that \begin{equation} \label{e3.10} ||\alpha|r|^{2}||_{L^{p}} \leq c\omega_{o}, \end{equation} for any $p = p(t) < 6/5.$ We may now repeat the arguments in Lemma 3.3 above, using the improved estimate (3.10), over the $L^{1}$ estimate.
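For the record, the exponent bookkeeping here is elementary: since $1/r + 1/q = 1$ in (3.9), the choices $pr = 3-\mu ,$ $pq = 2$ force $$\frac{1}{p} = \frac{1}{pr} + \frac{1}{pq} = \frac{1}{3-\mu} + \frac{1}{2} = \frac{5-\mu}{2(3-\mu )}, \ \ {\rm i.e.} \ \ p = \frac{2(3-\mu )}{5-\mu} \rightarrow \frac{6}{5} \ \ {\rm as} \ \ \mu \rightarrow 0. $$ The same computation with $pr = 6-\varepsilon ,$ $pq = 2$ gives $p = 2(6-\varepsilon )/(8-\varepsilon ) \rightarrow 3/2,$ as used below.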
As before, the trace equation now gives, for $p < 6/5,$ $$||\omega||_{L^{1,2-\mu}} \leq ||\omega||_{L^{2,p}} \leq c\omega_{o}, $$ for any $\mu = \mu (p) > $ 0. Now repeat the argument in Lemma 3.4, considering $D^{*}D$ as an operator $L^{1,2-\mu} \rightarrow L^{-1,2-\mu}.$ This gives $$||\alpha r||_{L^{6-\varepsilon}} \leq c||\alpha r||_{L^{1,2-\mu}} \leq c\omega_{o}, $$ where the first inequality follows from the Sobolev inequality. From the H\"older inequality (3.9) again, with $pr = 6-\varepsilon $ and $pq =$ 2, we obtain $$||\alpha|r|^{2}||_{L^{p}} \leq c\omega_{o}, $$ for $p < 3/2.$ Repeating this process again gives \begin{equation} \label{e3.11} ||\alpha r||_{L^{p}} \leq c\omega_{o}, \end{equation} for $p < \infty ,$ $c = c(p).$ Finally, $\alpha r$ bounded in $L^{p}$ implies $\alpha|r|^{2}$ bounded in $L^{2-\varepsilon}, \varepsilon = \varepsilon (p),$ so repeating the process once more as above gives $\alpha r$ bounded in $L^{1,6-\varepsilon}$ and bounded in $C^{\gamma}, \gamma < \frac{1}{2}.$ This in turn implies $\alpha|r|^{2}$ is bounded in $L^{2}.$ These arguments thus prove the following: \begin{corollary} \label{c 3.5.} On a given $\bar{B} \subset B$, the following estimates hold, with $c = c(d)$: \begin{equation} \label{e3.12} ||\alpha r||_{L^{1,6}} \leq c\cdot \omega_{o}, \ \ ||\alpha|r|^{2}||_{L^{2}} \leq c\cdot \omega_{o}, \ \ ||\omega||_{L^{2,2}} \leq c\cdot \omega_{o}. \end{equation} \end{corollary} {\qed } In particular, from Sobolev embedding it follows that $\omega $ is a $C^{1/2}$ function, with modulus of continuity depending on $\omega_{o}.$ Given these initial estimates, it is now straightforward to prove Theorem 3.1. \noindent {\bf Proof of Theorem 3.1.} Suppose first $\alpha > $ 0. Then the iteration above may be continued indefinitely.
Thus since $\alpha r$ is bounded in $L^{1,6} \subset C^{1/2}$ by Sobolev embedding, $\alpha|r|^{2}$ is also bounded in $L^{1,6}.$ Applying the iteration above, it follows that $\omega\in L^{3,6},$ so that $D^{*}D\alpha r\in L^{1,6}$ implying that $\alpha r\in L^{3,6},$ so that $\alpha|r|^{2}\in L^{3,6},$ and so on. Note that at each stage the regularity of the metric $g$ is improved, since if $\alpha r\in L^{k,p},$ then $g\in L^{k+2,p},$ c.f. [An1, \S 4]. Continuing in this way gives the required $C^{\infty}$ regularity. The equations (3.1)-(3.2) form an elliptic system for $(g, \omega)$ with coefficients depending real analytically on $g$. This implies that $g$ and $\omega$ are in fact real-analytic, c.f. [M, Ch.6.6,6.7]. If $\alpha =$ 0, then the equations (3.1)-(3.2) are the static vacuum Einstein equations. By assumption the potential $\omega $ is not identically 0 and is an $L^{2,2}$ function by (3.12). The smooth (or real-analytic) regularity of $(g, \omega )$ then follows from standard regularity results for Einstein metrics, c.f. [B, Ch.5]. {\qed } We now turn to higher order estimates for families of ${\cal R}_{s}^{2}$ solutions on balls on which one has initial $L^{2,2}$ control of the metric. This corresponds to obtaining uniform estimates as above which are independent of $\alpha ,$ or equivalently to the smooth compactness of a family of such solutions. In contrast to the ${\cal R}^{2}$ equations, note that the ${\cal R}_{s}^{2}$ equations, for a fixed $\alpha ,$ are not scale-invariant. The full family of ${\cal R}_{s}^{2}$ equations, depending on $\alpha ,$ is scale-invariant. However, $\alpha $ itself is not scale-invariant. As seen in \S 1, $\alpha $ scales inversely to the curvature, i.e. as the square of the distance. On the other hand, the potential $\omega $ is scale-invariant. 
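This scaling behavior can be read off directly from the trace equation (3.2): under the rescaling $\bar{g} = \lambda^{2}g$, with $\omega $ unchanged, one has $\bar{\Delta}\omega = \lambda^{-2}\Delta\omega $ and $|\bar{r}|^{2} = \lambda^{-4}|r|^{2},$ so that (3.2) is preserved exactly when $$\bar{\alpha} = \lambda^{2}\cdot \alpha . $$ Thus $\alpha $ scales as the square of the distance, while $\omega $ is scale-invariant, in accordance with the remarks above.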
Thus, we now assume we have a sequence or family of smooth ${\cal R}_{s}^{2}$ solutions, with no a priori bound on $\alpha\in [0,\infty )$ or on the $L^{2}$ norm of $\omega $ on $B$. We assume throughout that the metrics are defined on a ball $B = B(1)$, with $r_{h}(B) \geq $ 1. Recall the definition of the constant $c_{o}$ preceding (2.1) in the definition of $r_{h}$ and $\rho ,$ which measures the deviation of $g$ from the flat metric. We define the $L^{k,2}$ curvature radius $\rho^{k}$ exactly in the same way as the $L^{2}$ curvature radius, with $|\nabla^{k} r|$ in place of $|r|$, c.f. [An1, Def.3.2]. \begin{theorem} \label{t 3.6} Let $(N, g, \omega )$ be an ${\cal R}_{s}^{2}$ solution, with $x\in N$ and $r_{h}(x) \geq $ 1. Suppose \begin{equation} \label{e3.13} \omega \leq 0, \end{equation} in $B_{x}(1).$ Then there is a constant $\rho_{o} > $ 0 such that the $L^{k,2}$ curvature radius $\rho^{k}$ satisfies, for all $k\geq 1,$ \begin{equation} \label{e3.14} \rho^{k}(x) \geq \rho_{o}. \end{equation} In particular, by Sobolev embedding, the Ricci curvature $r$ is bounded in $C^{k}.$ Further, the $L^{k+2,2}$ norm of the potential $\omega $ is uniformly bounded in $B_{x}(\rho_{o}),$ in that, for a given $\varepsilon > $ 0, which may be made (arbitrarily) small if $c_{o}$ is chosen sufficiently small, \begin{equation} \label{e3.15} ||\omega||_{L^{k+2,2}(B_{x}(\rho_{o}))} \leq max\bigl( c(\varepsilon^{-1},k)\cdot ||\omega||_{L^{1}(B_{x}(\varepsilon ))}, c_{o}\bigr) . \end{equation} \end{theorem} The proof of this result will again be carried out in a sequence of steps. This result is considerably more difficult to prove than Theorem 3.1, since one needs to obtain estimates independent of $\alpha $ and $\omega_{o}.$ To do this, we will strongly use the assumption (3.13) that $\omega \leq $ 0. It is not clear if Theorem 3.6 holds in general when $\omega \geq $ 0. In fact, this is the main reason why the hypothesis (3.13) is needed in Theorem 0.2.
We note that by Theorem 3.1, the regularity of $(g, \omega )$ is not an issue, so we will assume that $g$ and $\omega $ are smooth, (or real-analytic) on $B$. Theorem 3.6 is easy to prove from the preceding estimates if one has suitable bounds on $\alpha $ and $\omega_{o}.$ It is worthwhile to list these explicitly. {\bf (i).} Suppose $\alpha $ is uniformly bounded away from 0 and $\infty ,$ and $\omega_{o}$ is bounded above, i.e. \begin{equation} \label{e3.16} \kappa \leq \alpha \leq \kappa^{-1}, \ \ \omega_{o} \leq \kappa^{-1}, \end{equation} for some fixed $\kappa > $ 0. One may then just carry out the arguments preceding Corollary 3.5 and in the proof of Theorem 3.1 to prove Theorem 3.6, with bounds then depending on $\kappa .$ This gives (3.14) and also (3.15), but with $\omega_{o}$ in place of the $L^{1}$ norm of $\omega $ in $B_{x}(\varepsilon ).$ In case $$||\omega||_{L^{1}(B_{x}(\varepsilon ))} \ll \omega_{o}, $$ we refer to Case (I) below, where this situation is handled. {\bf (ii).} Suppose that $\alpha $ is uniformly bounded away from 0, and $\omega_{o}$ is bounded above, i.e. \begin{equation} \label{e3.17} \kappa \leq \alpha , \ \ \omega_{o} \leq \kappa^{-1}, \end{equation} for some fixed $\kappa > $ 0. Thus, the difference with Case (i) is that $\alpha $ may be arbitrarily large. Then the equations (3.1)-(3.2) may be renormalized by dividing by $\alpha ,$ so that $\alpha $ becomes 1. This replaces the $L^{2}$ norm $\omega_{o}$ of the potential by $\omega_{o}/\alpha \leq \kappa^{-2},$ which remains bounded, but otherwise leaves the preceding arguments unchanged. Hence, Theorem 3.6 is again proved in this case, with bounds depending on $\kappa .$ Observe that neither of the arguments in (i), (ii) requires the bound (3.13). The main difficulty is when $\alpha $ is small, and especially when $\omega_{o}$ is small as well. In this situation, the equations (3.1)-(3.2) approach merely the statement that $0 \sim 0.$ Here one must understand the relative sizes of $\alpha $ and $\omega_{o}$ to proceed further.
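Explicitly, in case (ii), dividing (3.1)-(3.2) by $\alpha $ and using the linearity of $L^{*}$ gives the renormalized system $$\nabla{\cal R}^{2} + L^{*}(\omega /\alpha ) = 0, \ \ \Delta (\omega /\alpha ) = -\tfrac{1}{4}|r|^{2}, $$ so that the coupling constant becomes 1 and the new potential $\omega /\alpha $ has $L^{2}$ norm $\omega_{o}/\alpha \leq \kappa^{-2}$ on $B$.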
First note that by Theorem 3.1, we may assume that $\omega$ is not identically zero on any ball $\bar{B} \subset B$, since otherwise, (by real-analyticity), $\omega \equiv 0$ on $B$, and hence the solution is flat; (recall that we have assumed $\alpha > 0$ if $\omega \equiv 0$ on $B$). It follows that the set $\{\omega = 0\}$ is a closed real-analytic set in $B$. To begin, we must obtain a priori estimates on, for example, the $L^{2}$ norm of $\omega $ in terms of its value at or near the center point $x$, c.f. Corollary 3.9. The first step in this is the following $L^{1}$ estimate, which may also be of independent interest. (Of course this is meant to apply to $u = -\omega $ and $f = \frac{1}{4}\alpha|r|^{2}$, as in (3.2)). \begin{proposition} \label{p 3.7.} Let $(B, g)$, $B = B_{x}(1),$ be a geodesic ball of radius 1, with $r_{h}(x) \geq $ 1. For a given smooth function $f$ on $B$, let $u$ be a solution of \begin{equation} \label{e3.18} \Delta u = f, \end{equation} on $B$, with \begin{equation} \label{e3.19} u \geq 0. \end{equation} Let $B_{\varepsilon} = B_{x}(\varepsilon ),$ where $\varepsilon > $ 0 is (arbitrarily) small, depending only on the choice of $c_{o}.$ Then there is a constant $c > $ 0, depending only on $\varepsilon^{-1},$ such that \begin{equation} \label{e3.20} \int_{B}u \leq c(\varepsilon^{-1})\int_{B}|f| + {\tfrac{1}{2}} u_{av}(\varepsilon ) + \int_{B_{\varepsilon}}u, \end{equation} where $u_{av}$ is the average of $u$ on $S_{\varepsilon} = \partial B_{\varepsilon}.$ \end{proposition} {\bf Proof:} Let $\eta $ be a non-negative function on $B$ such that $\eta = |\nabla\eta| =$ 0 on $\partial B,$ determined more precisely below.
Multiply (3.18) by $\eta $ and apply the divergence theorem to obtain \begin{equation} \label{e3.21} \int_{B \setminus {B_{\varepsilon}}}\eta f = \int_{B \setminus {B_{\varepsilon}}}\eta\Delta u = \int_{B \setminus {B_{\varepsilon}}}u\Delta\eta + \int_{S_{\varepsilon}}\eta<\nabla u, \nu> - \int_{S_{\varepsilon}}u<\nabla\eta , \nu>, \end{equation} where $\nu $ is the outward unit normal. Let $\{x_{i}\}$ be a harmonic coordinate chart on $B$, so that the metric $g$ is bounded in $L^{2,2}$ on $B$ in the coordinates $\{x_{i}\},$ since $r_{h}(x) \geq $ 1. We may assume w.l.o.g. that in these coordinates, $g_{ij}(x) = \delta_{ij}.$ Let $\sigma = (\sum x_{i}^{2})^{1/2}.$ Then the ratio $\sigma /t,$ for $t(y) = dist_{g}(y,x),$ satisfies \begin{equation} \label{e3.22} 1- c_{1} \leq \frac{\sigma}{t} \leq 1+c_{1}, \end{equation} in $B$, where the constant $c_{1}$ may be made small by choosing the constant $c_{o}$ in the definition of $r_{h}$ sufficiently small. We choose $\eta = \eta (\sigma )$ so that $\eta (1) = \eta' (1) = 0$ and so that $\Delta\eta $ is close to the constant function 2 in $C^{0}(B \setminus {B_{\varepsilon}}),$ i.e. \begin{equation} \label{e3.23} \Delta\eta \sim 2. \end{equation} (Actually, since $\{\sigma = 1\}$ may not be contained in $B$, one should replace the boundary condition at 1 by the same condition at $\sigma = 1-\delta ,$ $\delta = \delta (c_{o})$ small, but we will ignore this minor adjustment below). To determine $\eta$, we have $\Delta\eta = \eta'\Delta\sigma + \eta''|\nabla\sigma|^{2}.$ Since the metric $g = g_{ij}$ is close to the flat metric $\delta_{ij},$ we have $||\nabla\sigma|^{2}- 1| < \delta $ and $|\Delta\sigma-\frac{2}{\sigma}| < \frac{\delta}{\sigma}$, in $C^{0}$, where $\delta $ may be made (arbitrarily) small by choosing $c_{o}$ sufficiently small. Thus, let $\eta $ be a solution to \begin{equation} \label{e3.24} \frac{2}{\sigma}\eta' + \eta'' = 2.
\end{equation} It is easily verified that the function \begin{equation} \label{e3.25} \eta (\sigma ) = \tfrac{2}{3}\sigma^{-1} - 1 + \tfrac{1}{3}\sigma^{2}, \end{equation} satisfies (3.24), with the correct boundary conditions at $\sigma =$ 1. Observe that $\eta (\varepsilon ) \sim \frac{2}{3}\varepsilon^{-1}, \ \eta' (\varepsilon ) \sim -<\nabla\eta , \nu> (\varepsilon ) \sim -\frac{2}{3}\varepsilon^{-2},$ for $\varepsilon $ small. Hence, (3.23) is satisfied on $B \setminus {B_{\varepsilon}}$ for a given choice of $\varepsilon $ small, if $\delta ,$ i.e. $c_{o},$ is chosen sufficiently small. With this choice of $\eta $ and $\varepsilon ,$ (3.21) becomes $$\int_{B \setminus {B_{\varepsilon}}}\eta f \sim 2\int_{B \setminus {B_{\varepsilon}}}u - {\tfrac{2}{3}} \varepsilon^{-2}\int_{S_{\varepsilon}}u + {\tfrac{2}{3}} \varepsilon^{-1}\int_{S_{\varepsilon}}<\nabla u, \nu> . $$ Applying the divergence theorem on $B_{\varepsilon}$ to the last term gives $$ {\tfrac{2}{3}} \varepsilon^{-1}\int_{S_{\varepsilon}}<\nabla u, \nu> = -{\tfrac{2}{3}} \varepsilon^{-1}\int_{B_{\varepsilon}}\Delta u \sim - c\cdot \varepsilon^{-1}\int_{B_{\varepsilon}}f. $$ Thus, combining these estimates gives $$ 2\int_{B \setminus {B_{\varepsilon}}}u \leq c(\varepsilon^{-1})\int_{B}|f| + u_{av}(\varepsilon ), $$ which implies (3.20). {\qed } The next Lemma is a straightforward consequence of this result. \begin{lemma} \label{l 3.8.} Assume the same hypotheses as Proposition 3.7. Then for any $\mu > $ 0, there is a constant $c = c(\mu ,d, \varepsilon^{-1})$ such that \begin{equation} \label{e3.26} ||u||_{L^{3-\mu}(\bar{B})} \leq c(\mu ,d,\varepsilon^{-1})\cdot \int_{B}|f| + {\tfrac{1}{2}} u_{av}(\varepsilon ).
\end{equation} \end{lemma} {\bf Proof:} Let $\phi $ be a smooth positive cutoff function, supported in $B$, with $\phi \equiv $ 1 on $\bar{B}.$ As in Lemma 3.3, the Laplacian $\Delta $ is a bounded surjection $\Delta : H^{t} \rightarrow H^{t-2},$ for $t < \frac{1}{2},$ so that \begin{equation} \label{e3.27} ||\phi u||_{H^{t}} \leq c\cdot ||\Delta (\phi u)||_{H^{t-2}}. \end{equation} By Sobolev embedding, $||\phi u||_{L^{3-\mu}} \leq c\cdot ||\phi u||_{H^{t}},$ $c = c(\mu ),$ $\mu = \mu (t),$ so that it suffices to estimate the right side of (3.27). By definition, \begin{equation} \label{e3.28} ||\Delta (\phi u)||_{H^{t-2}} = sup|\int h\Delta (\phi u)|, \end{equation} where the supremum is over compactly supported functions $h$ with $||h||_{H_{o}^{2-t}} \leq $ 1. Since $\Delta u = f$, we have $$\Delta (\phi u) = \phi f + u\Delta\phi + 2<\nabla\phi ,\nabla u> . $$ Now \begin{equation} \label{e3.29} \int h\phi f \leq c\cdot ||f||_{L^{1}}\cdot ||h||_{L^{\infty}} \leq c\cdot ||f||_{L^{1}}, \end{equation} where the last inequality follows from Sobolev embedding. Next \begin{equation} \label{e3.30} \int hu\Delta\phi \leq c\cdot ||h||_{L^{\infty}}||u||_{L^{1}} \leq c\cdot ||u||_{L^{1}}, \end{equation} since we may choose $\phi $ with $|\Delta\phi| \leq c$ (e.g. $\phi = \phi (\sigma ),$ for $\sigma $ as in the proof of Prop. 3.7). For the last term, applying the divergence theorem gives \begin{equation} \label{e3.31} \int h<\nabla\phi , \nabla u> = -\int uh\Delta\phi -\int u<\nabla h, \nabla\phi> . \end{equation} The first term here is treated as in (3.30). For the second term, for any $\varepsilon_{1} > $ 0 we may choose $\phi $ so that $|\nabla\phi^{\varepsilon_{1}}|$ is bounded, (with bound depending only on $\varepsilon_{1}).$ By Sobolev embedding, $|\nabla h|$ is bounded in $L^{3+\varepsilon_{2}},$ for $\varepsilon_{2} = \varepsilon_{2}(\mu )> $ 0.
Thus, applying the H\"older inequality, $$|\int u<\nabla h, \nabla\phi>| \leq c\int\phi^{1-\varepsilon_{1}}u|\nabla h| \leq c\cdot ||\phi^{1-\varepsilon_{1}}u||_{L^{(3/2)-\varepsilon_{2}}}. $$ Write $(\phi^{1-\varepsilon_{1}}u)^{(3/2)-\varepsilon_{2}} = (\phi u)^{p}u^{1/2},$ where $p = 1-\varepsilon_{2}.$ For $\varepsilon_{2}$ small, this gives $\varepsilon_{1} \sim 1/3.$ It follows that $$\int (\phi^{1-\varepsilon_{1}}u)^{(3/2)-\varepsilon_{2}} = \int (\phi u)^{p}\cdot u^{1/2} \leq \bigl(\int (\phi u)^{2p}\bigr)^{1/2}\bigl(\int u\bigr)^{1/2}. $$ This estimate, together with standard use of the Young and interpolation inequalities, c.f. [GT, (7.5),(7.10)], implies that \begin{equation} \label{e3.32} ||\phi^{1-\varepsilon_{1}}u||_{L^{(3/2)-\varepsilon_{2}}} \leq \kappa||\phi u||_{L^{2}} + c\cdot \kappa^{-a}||u||_{L^{1}}, \end{equation} for any given $\kappa > $ 0, where $c$ and $a$ depend only on $\varepsilon_{2}.$ Since $||\phi u||_{L^{2}} \leq ||\phi u||_{H^{t}},$ by choosing $\kappa $ small in (3.32), we may absorb the first term on the right in (3.32) to the left in (3.27). Combining the estimates above gives $$||u||_{L^{3-\mu}(\bar{B})} \leq c(\mu )\cdot (||f||_{L^{1}(B)} + ||u||_{L^{1}(B)}). $$ The $L^{1}$ norm of $u$ is estimated in Proposition 3.7. In addition, we have $$\int_{B_{\varepsilon}}u \leq ||u||_{L^{3-\mu}(\bar{B})}\cdot (volB_{\varepsilon})^{q}, $$ where $q = \frac{2-\mu}{3-\mu}.$ Choosing $\varepsilon $ small, this term may also be absorbed into the left in (3.27), which gives the bound (3.26). {\qed } We point out that although the assumption (3.19) on $u$, (equivalent to (3.13) on $\omega$), is in fact not used in Proposition 3.7, it is required in Lemma 3.8 to control the $L^1$ norm of $u$. 
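For completeness, the exponent bookkeeping behind the identity $(\phi^{1-\varepsilon_{1}}u)^{(3/2)-\varepsilon_{2}} = (\phi u)^{p}u^{1/2}$ in the proof of Lemma 3.8 may be recorded explicitly. Matching the powers of $u$ and of $\phi $ separately gives $$p + \tfrac{1}{2} = \tfrac{3}{2} - \varepsilon_{2} \ \ {\rm and} \ \ (1-\varepsilon_{1})(\tfrac{3}{2} - \varepsilon_{2}) = p, $$ so that $p = 1-\varepsilon_{2}$ and $1-\varepsilon_{1} = (1-\varepsilon_{2})/(\tfrac{3}{2}-\varepsilon_{2}).$ As $\varepsilon_{2} \rightarrow 0,$ the latter relation gives $1-\varepsilon_{1} \rightarrow \tfrac{2}{3},$ i.e. $\varepsilon_{1} \sim \tfrac{1}{3},$ as claimed in the proof. 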
We now apply these results to the trace equation (3.2), so that $u = -\omega $ and $f = \frac{1}{4}\alpha|r|^{2}.$ Using the fact that $f \geq $ 0, we obtain: \begin{corollary} \label{c 3.9.} On (B, g) as above, \begin{equation} \label{e3.33} sup_{\bar{B}}|\omega| \leq c(d,\varepsilon^{-1})\cdot \int_{B}\alpha|r|^{2} + {\tfrac{1}{2}} |\omega|_{av}(\varepsilon ). \end{equation} \end{corollary} {\bf Proof:} By Lemma 3.8, it suffices to prove there is a bound $$sup_{\bar{B}}|\omega| \leq c||\omega||_{L^{2}(B)}. $$ Since $f$ is non-negative, i.e. $\Delta (-\omega ) \geq $ 0, this estimate is an application of the DeGiorgi-Nash-Moser sup estimate for subsolutions of the Laplacian, c.f. [GT, Thm 8.17]. {\qed } We are now in a position to deal with the relative sizes of $\alpha $ and $\omega .$ First, we renormalize $\omega $ to unit size near the center point $x$. Observe that for any $\varepsilon > $ 0, \begin{equation} \label{e3.34} |\omega|_{av}(\varepsilon ) > 0, \end{equation} since if $|\omega|_{av}(\varepsilon ) =$ 0, then by the minimum principle applied to the trace equation (3.2), $\omega \equiv $ 0 in $B_{\varepsilon},$ which has been ruled out above. For the remainder of the proof, we fix an $\varepsilon > $ 0 small and set \begin{equation} \label{e3.35} \bar{\omega} = \frac{\omega}{|\omega|_{av}(\varepsilon )}. \end{equation} Similarly, divide the equations (3.1)-(3.2) by $|\omega|_{av}(\varepsilon );$ this has the effect of changing $\alpha $ to \begin{equation} \label{e3.36} \bar{\alpha} = \frac{\alpha}{|\omega|_{av}(\varepsilon )}. \end{equation} By Corollary 3.9, (or Lemma 3.8), the $L^{2}$ norm of $\bar{\omega}$ on $\bar{B},$ i.e. $\bar{\omega}_{o},$ is now controlled by the $L^{1}$ norm of $\bar{\alpha}|r|^{2}$ on $B$, \begin{equation} \label{e3.37} \bar{\omega}_{o} \leq c\cdot [||\bar{\alpha}|r|^{2}||_{L^{1}(B)}+1]. \end{equation} We now divide the discussion into three cases, similar to the discussion following Theorem 3.6. 
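Before turning to the case analysis, note that the effect of the renormalization (3.35)-(3.36) on the trace equation may be written out explicitly: dividing (3.2), i.e. $\Delta\omega = -\tfrac{1}{4}\alpha|r|^{2},$ by the positive constant $|\omega|_{av}(\varepsilon )$ gives $$\Delta\bar{\omega} = -\tfrac{1}{4}\bar{\alpha}|r|^{2}, \ \ {\rm with} \ \ |\bar{\omega}|_{av}(\varepsilon ) = 1, $$ so that $(\bar{\omega}, \bar{\alpha})$ satisfies the same equations as $(\omega , \alpha ),$ now with unit average at scale $\varepsilon .$ 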
{\bf (I).} Suppose $\bar{\alpha}$ is uniformly bounded away from 0 and $\infty ,$ i.e. \begin{equation} \label{e3.38} \kappa \leq \bar{\alpha} \leq \kappa^{-1}, \end{equation} for some $\kappa > $ 0. Since $r_{h}(x) \geq $ 1, the $L^{1}$ norm of $\bar{\alpha}|r|^{2}$ on $B$ is then uniformly bounded, and so $\bar{\omega}_{o}$ is uniformly bounded by (3.37). Hence, the proof of (3.14) follows as in Case (i) above, starting with Corollary 3.5. Further, $$\bar{\omega}_{o} \leq c(\varepsilon^{-1})\cdot ||\bar{\omega}||_{L^{1}(B_{\varepsilon})}, $$ since both terms are uniformly bounded away from 0 and $\infty .$ Hence (3.15) also follows as in Case (i) above. {\bf (II).} Suppose $\bar{\alpha}$ is uniformly bounded away from 0, i.e. \begin{equation} \label{e3.39} \kappa \leq \bar{\alpha}, \end{equation} for some $\kappa > $ 0. Then as in Case (ii), divide $\bar{\omega}$ and $\bar{\alpha}$ by $\bar{\alpha},$ so that the coefficient $\alpha' = \frac{\bar{\alpha}}{\bar{\alpha}} =$ 1. Then $\omega' = \frac{\bar{\omega}}{\bar{\alpha}}$ has bounded $L^{2}$ norm on $\bar{B}$ by (3.37) and the proof of (3.14) proceeds as in Case (I). In this case, the $L^{1}$ norm of $\omega' $ on $B_{\varepsilon}$ may be very small, but (3.15) follows as above from the renormalization of (3.37). {\bf (III).} $\bar{\alpha}$ is small, i.e. \begin{equation} \label{e3.40} \bar{\alpha} \leq \kappa . \end{equation} Again from (3.37), it follows that $\bar{\omega}_{o}$ is uniformly bounded on $\bar{B}.$ Thus, by Corollary 3.5, we have \begin{equation} \label{e3.41} ||\bar{\omega}||_{L^{2,2}} \leq c(\kappa ), \end{equation} so that, by Sobolev embedding, $\bar{\omega}$ is uniformly controlled in the $C^{1/2}$ topology inside $\bar{B}.$ This, together with (3.35) implies that $\bar{\omega}(x) \sim - 1,$ and hence there is a smaller ball $B'\subset\bar{B},$ whose radius is uniformly bounded below, such that for all $y\in B' ,$ \begin{equation} \label{e3.42} \bar{\omega}(y) \leq -\tfrac{1}{10}. 
\end{equation} (The constant $-\tfrac{1}{10}$ can be replaced by any other fixed negative constant.) We now proceed with the proof of Theorem 3.6 under these assumptions. For notational simplicity, we remove the bar and prime from now on, so that $B' $ becomes $B$, $\bar{\omega}$ becomes $\omega , \bar{\alpha}$ becomes $\alpha ,$ and $\alpha , \omega $ satisfy (3.40)-(3.42) on $B$. The previous arguments, when $\alpha $ is bounded away from 0, essentially depend on the (dominant) $\alpha\nabla{\cal R}^{2}$ term in (3.1), and much less on the $L^{*}(\omega )$ term, (except in the estimates on $\omega $ in Proposition 3.7 - Corollary 3.9). When $\alpha $ vanishes or approaches 0, one needs to work mainly with the $L^{*}(\omega )$ term to obtain higher order estimates, ensuring that the two terms $\alpha\nabla{\cal R}^{2}$ and $L^{*}(\omega )$ do not interfere with each other. For the following lemmas, the preceding assumptions are assumed to hold. \begin{lemma} \label{l 3.10.} There is a constant $c < \infty $ such that, on $\bar{B} \subset B,$ \begin{equation} \label{e3.43} \alpha (\int|r|^{6})^{1/3} \leq c, \ \ \alpha\int|r|^{3} \leq c, \ \ \alpha\int|Dr|^{2} \leq c. \end{equation} \end{lemma} {\bf Proof:} For a given smooth cutoff function $\eta $ of compact support in $\bar{B},$ pair (3.1) with $\eta^{2}r$ and integrate by parts to obtain \begin{equation} \label{e3.44} \alpha\int|D\eta r|^{2} - \int\omega\eta^{2}|r|^{2} \leq \alpha c\int\eta^{2}|r|^{3} + \int< D^{2}(-\omega ), \eta^{2}r> + \alpha c\int (|D\eta|^{2}+ |\Delta\eta^{2}|)|r|^{2}. \end{equation} By the Sobolev inequality, \begin{equation} \label{e3.45} (\int (\eta|r|)^{6})^{1/3} \leq c_{s}\int|D(\eta|r|)|^{2} \leq cc_{s}\int|D\eta r|^{2}, 
\end{equation} where $c_{s}$ is the Sobolev constant of the embedding $L_{0}^{1,2} \subset L^{6}.$ Also, by the H\"older inequality, \begin{equation} \label{e3.46} \int\eta^{2}|r|^{3} \leq (\int (\eta|r|)^{6})^{1/3}\cdot (\int|r|^{3/2})^{2/3} \leq c_{o}(\int (\eta|r|)^{6})^{1/3}, \end{equation} where the last inequality follows from the definition of $\rho ,$ c.f. the beginning of \S 2. We may assume $c_{o}$ is chosen sufficiently small so that $cc_{o}c_{s} \leq \frac{1}{2}.$ Thus, the cubic term on the right in (3.44) may be absorbed into the left, giving $$\frac{c}{2}\alpha (\int (\eta r)^{6})^{1/3} - \int\omega\eta^{2}|r|^{2} \leq \int< D^{2}(-\omega ), \eta^{2}r> + \alpha c\int (|D\eta|^{2}+|\Delta\eta^{2}|)|r|^{2}. $$ For the first term on the right, we may estimate $$\int< D^{2}(-\omega ), \eta^{2}r> \ \leq (\int|D^{2}\omega|^{2})^{1/2}(\int|r|^{2})^{1/2} \leq c, $$ by (3.41) and the definition of $\rho .$ For the second term, $|D\eta|$ and $|\Delta\eta^{2}|$ are uniformly bounded in $L^{\infty}(B),$ so that $$\alpha\int (|D\eta|^{2}+ |\Delta\eta^{2}|)|r|^{2} \leq c, $$ either by Lemma 3.2 or by (3.40) and the definition of $\rho .$ From (3.42), $-\omega \geq \frac{1}{10}$ in $\bar{B},$ so that one obtains the bound $$\alpha (\int|r|^{6})^{1/3} \leq c, $$ and thus also $$\alpha\int|Dr|^{2} \leq c. $$ The bound on the $L^{1}$ norm of $\alpha|r|^{3}$ then follows from (3.46). {\qed } \noindent \begin{lemma} \label{l 3.11.} There is a constant $c < \infty $ such that, on $\bar{B} \subset B,$ \begin{equation} \label{e3.47} \int|r|^{3} \leq c, \ \ \alpha (\int|r|^{9})^{1/3} \leq c. \end{equation} \end{lemma} {\bf Proof:} By pairing (3.1) with $r$, one obtains the estimate \begin{equation} \label{e3.48} -\alpha\Delta|r|^{2} + \alpha|Dr|^{2} - \omega|r|^{2} \ \leq \ < D^{2}(-\omega ), r> + \ \alpha|r|^{3}. 
\end{equation} Multiply (3.48) by $\eta|r|$ and integrate by parts, using the bound (3.42) and the Sobolev inequality as in (3.45) to obtain \begin{equation} \label{e3.49} c\cdot \alpha (\int|r|^{9})^{1/3}+ \frac{1}{10}\int|r|^{3} \leq \alpha\int|r|^{4} + \int< D^{2}(-\omega ), \eta|r|r> + \alpha\int|r|^{2}|D|r||\cdot |D\eta|. \end{equation} We have $$\int< D^{2}(-\omega ), \eta|r|r> \ \leq \int|r|^{2}|D^{2}\omega| \leq (\int|r|^{3})^{2/3}(\int|D^{2}\omega|^{3})^{1/3} ,$$ and $$(\int|D^{2}\omega|^{3})^{1/3} \leq c(\int\alpha^{3}|r|^{6})^{1/3}+ c(\int|\omega|^{2})^{1/2} \leq c, $$ where the last estimate follows from elliptic regularity on the trace equation (3.2) together with Lemma 3.10 and (3.41). Thus, this term may be absorbed into the left of (3.49), (because of the $2/3$ power), (unless $||r||_{L^{3}}$ is small, in which case one obtains a bound on $||r||_{L^{3}}).$ Similarly $$\alpha\int|r|^{4} = \alpha\int|r|^{3}|r| \leq \alpha (\int|r|^{9})^{1/3}(\int|r|^{3/2})^{2/3} \leq c_{o}\alpha (\int|r|^{9})^{1/3}. $$ Since $c_{o}$ is sufficiently small, this term may be absorbed into the left. For the last term on the right in (3.49), $$\alpha\int|r|^{2}|D|r||\cdot |D\eta| \leq \delta\alpha\int|r|^{4} + \delta^{-1}\alpha\int|D|r||^{2}|D\eta|^{2}; $$ the second term on the right is bounded by Lemma 3.10, while as in the previous estimate, the first term may be absorbed into the left, by choosing $\delta $ sufficiently small. These estimates combine to give the result. {\qed } This same argument may be repeated once more, pairing with $\eta|r|^{2},$ to obtain a bound \begin{equation} \label{e3.50} \int|r|^{4} \leq c. 
\end{equation} \noindent \begin{lemma} \label{l 3.12.} There is a constant $c < \infty $ such that, on $\bar{B} \subset B,$ \begin{equation} \label{e3.51} \alpha^{1/2}||D^{*}Dr||_{L^{2}} \leq c,\ \ ||Dr||_{L^{2}} \leq c, \ \ ||D^{2}\omega||_{L^{1,2}} \leq c. \end{equation} \end{lemma} {\bf Proof:} We may take the covariant derivative of (3.1) to obtain \begin{equation} \label{e3.52} \alpha DD^{*}Dr + \alpha Dr^{2} - D\omega r = DD^{2}(-\omega ), \end{equation} where $Dr^{2}$ denotes derivatives of terms quadratic in curvature. Hence $$\alpha< DD^{*}Dr,\eta Dr> + \alpha< Dr^{2},\eta Dr> - < D\omega r,\eta Dr> \ = -< DD^{2}(\omega ),\eta Dr> . $$ Using (3.42), this gives rise to the bound \begin{equation} \label{e3.53} \alpha\int|D^{*}D\eta r|^{2}+\frac{1}{10}\int\eta|Dr|^{2}\leq \end{equation} $$\leq \alpha c\int\eta|r||Dr|^{2}+\delta\int\eta|Dr|^{2}+c\delta^{-1}\int\eta|DD^{2}\omega|^{2}+\alpha c\int|D\eta|^{2}|Dr|^{2}+c\int\eta|D\omega||Dr||r|. $$ From elliptic regularity together with the Sobolev inequality (3.45), we have, (ignoring some constants), \begin{equation} \label{e3.54} \alpha (\int|D\eta r|^{6})^{1/3}\leq \alpha\int|D^{*}D\eta r|^{2}. \end{equation} Using the same kind of argument as in (3.46), the first term on the right of (3.53) can then be absorbed into the left. Similarly, the second term in (3.53) can be absorbed into the $\frac{1}{10}$ term on the left, for $\delta $ small. For the third term, elliptic regularity gives, (ignoring constants and lower order terms), \begin{equation} \label{e3.55} \int|DD^{2}\omega|^{2} \leq \int|D\Delta\omega|^{2}+c \leq \alpha^{2}\int|r|^{2}|D|r||^{2}+c \leq \alpha^{2}(\int|r|^{4})^{1/2}(\int|D|r||^{4})^{1/2}+c. \end{equation} But by Lemma 3.10, $$\alpha (\int|r|^{4})^{1/2}\leq c\cdot c_{o}, $$ so that this third term may also be absorbed into the left using (3.54). The fourth term on the right in (3.53) is bounded by Lemma 3.10. 
Finally, for the last term, the H\"older inequality gives $$\int\eta|D\omega||Dr||r| \leq ||D\omega||_{L^{6}}||r||_{L^{3}}||Dr||_{L^{2}}. $$ The first two factors on the right are bounded, by (3.41) and Lemma 3.11, so by use of the Young inequality, this term may also be absorbed into the left in (3.53). It follows that the left side of (3.53) is uniformly bounded. This gives the first two bounds in (3.51), while the last bound follows from the argument in (3.55) above. {\qed } The estimates in Lemma 3.12, together with the previous work in Cases (I)-(III) above, prove Theorem 3.6 for $k =$ 1. By taking higher covariant derivatives of (3.1) and continuing in the same way for Case (III), one derives higher order estimates on $g$ and $\omega .$ This completes the proof of Theorem 3.6. {\qed } \section{Non-Existence of ${\cal R}_{s}^{2}$ Solutions with Free $S^{1}$ Action.} \setcounter{equation}{0} In this section, we prove Theorem 0.2. Thus let $N$ be an open oriented 3-manifold and $g$ a complete Riemannian metric on $N$ satisfying the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5). The triple $(N, g, \omega )$ is assumed to admit a free isometric $S^{1}$ action. The assumption (0.6) on $\omega $ will enter only later. Let $V$ denote the quotient space $V = N/S^{1}.$ By passing to a suitable covering space if necessary, we may and will assume that $V$ is simply connected, and so $V = {\Bbb R}^{2}$ topologically. The projection $(N, g) \rightarrow V$ is a Riemannian submersion onto a complete metric $g_{V}$ on $V$. Let $f: V \rightarrow {\Bbb R} $ denote the length of the orbits, i.e. $f(x)$ is the length of the $S^{1}$ fiber through $x$. Standard submersion formulas, c.f. 
[B, 9.37], imply that the scalar curvature $s_{V}$ of $g_{V},$ equal to twice the Gauss curvature, is given by \begin{equation} \label{e4.1} s_{V} = s + |A|^{2} + 2|H|^{2} + 2\Delta logf , \end{equation} where $A$ is the obstruction to integrability of the horizontal distribution and $H$ is the geodesic curvature of the fibers $S^{1}.$ Further $|H|^{2} = |\nabla logf|^{2},$ where log denotes the natural logarithm. Let $v(r)$ = area$D_{x_{o}}(r)$ in $(V, g_{V})$ and $\lambda (r) = v' (r)$ the length of $S_{x_{o}}(r),$ for some fixed base point $x_{o}\in V.$ The following general result uses only the submersion equation (4.1) and the assumption $s \geq $ 0. (The ${\cal R}_{s}^{2}$ equations are used only after this result). \begin{proposition} \label{p 4.1. } Let (V, $g_{V})$ be as above, satisfying (4.1) with $s \geq $ 0. Then there exists a constant $c < \infty $ such that \begin{equation} \label{e4.2} v(r) \leq c\cdot r^{2}, \end{equation} \begin{equation} \label{e4.3} \lambda (r) \leq c\cdot r, \end{equation} and \begin{equation} \label{e4.4} \int_{V}|\nabla logf|^{2}dA_{V} \leq c, \ \ \int_{V}|A|^{2}dA_{V} \leq c. \end{equation} \end{proposition} {\bf Proof:} We first prove (4.2) by combining (4.1) with the Gauss-Bonnet theorem. Thus, pair (4.1) with a smooth cutoff function $\eta^{2}$ where $\eta = \eta (d),$ and $d$ is the distance function on $V$ from $x_{o}\in V.$ This gives \begin{equation} \label{e4.5} \int_{V}(\eta^{2}s + \eta^{2}|A|^{2} + 2\eta^{2}|\nabla logf|^{2} + 2\eta^{2}\Delta logf) = \int_{V}\eta^{2}s_{V}. \end{equation} Now \begin{equation} \label{e4.6} 2\int_{V}\eta^{2}\Delta logf = - 4\int_{V}\eta<\nabla\eta , \nabla logf> \ \geq - 2\int_{V}\eta^{2}|\nabla logf|^{2} - 2\int_{V}|\nabla\eta|^{2}, \end{equation} so that, \begin{equation} \label{e4.7} \int_{V}\eta^{2}s_{V} \geq - 2\int_{V}|\nabla\eta|^{2}, \end{equation} since $s \geq $ 0. 
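The inequality (4.6) is just the Cauchy-Schwarz inequality combined with $2ab \leq a^{2} + b^{2},$ applied with $a = \eta|\nabla logf|$ and $b = |\nabla\eta|$: $$- 4\eta<\nabla\eta , \nabla logf> \ \geq \ - 4\eta|\nabla\eta||\nabla logf| \ \geq \ - 2\eta^{2}|\nabla logf|^{2} - 2|\nabla\eta|^{2}. $$ Substituting this into (4.5), the terms $2\int_{V}\eta^{2}|\nabla logf|^{2}$ cancel, and discarding the non-negative terms $\eta^{2}s$ and $\eta^{2}|A|^{2}$ yields (4.7). 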
The Gauss-Bonnet theorem implies that in the sense of distributions on ${\Bbb R} ,$ \begin{equation} \label{e4.8} s_{V}(r) = 2[2\pi\chi (r) - \kappa (r)]' , \end{equation} where \begin{equation} \label{e4.9} \chi (r) = \chi (B(r)), \ s_{V}(r) = \int_{S_{x_{o}}(r)}s_{V}, \ \kappa (r) = \int_{S_{x_{o}}(r)}\kappa , \end{equation} and $\kappa $ is the geodesic curvature of $S_{x_{o}}(r),$ c.f. [GL, Theorem 8.11]. Since $\eta = \eta (d),$ by expressing (4.7) in terms of integration over the levels of $d$, one obtains, after integrating by parts, \begin{equation} \label{e4.10} \int_{0}^{\infty}(\eta^{2})' [2\pi\chi (r) - \kappa (r)] \leq \int_{0}^{\infty}(\eta' )^{2}\lambda (r). \end{equation} Now choose $\eta = \eta (r) = -\frac{1}{R}r +$ 1, $\eta (r) =$ 0, for $r \geq R$. In particular, $(\eta^{2})' \leq $ 0. It is classical, c.f. again [GL, p. 391], that \begin{equation} \label{e4.11} \lambda' (r) \leq \kappa (r) . \end{equation} Then (4.10) gives $$2\int_{0}^{R}(-\frac{1}{R}r+1)\frac{1}{R}\lambda' (r) \leq \frac{1}{R^{2}}\int_{0}^{R}\lambda (r)dr + 4\pi\int (-\frac{1}{R}r+1)\frac{1}{R}\chi (r)dr. $$ But $$2\int_{0}^{R}(-\frac{1}{R}r+1)\frac{1}{R}\lambda' (r) = \frac{2}{R}\int\lambda' - \frac{2}{R^{2}}\int r\lambda' = \frac{2\lambda}{R} - \frac{2}{R}\lambda + \frac{2}{R^{2}}\int\lambda . $$ Thus $$\frac{1}{R^{2}}\int_{0}^{R}\lambda (r)dr \leq 4\pi\int (-\frac{1}{R}r+1)\frac{1}{R}\chi (r)dr \leq C, $$ where the last inequality follows from the fact that $\chi (B(r)) \leq $ 1, since $B(r)$ is connected. This gives (4.2). Next, we prove (4.4). 
Returning to (4.5) and applying the Gauss-Bonnet theorem with $\eta = \chi_{B(r)}$ the characteristic function of $B(r) = B_{x_{o}}(r)$ in (4.10) gives, $$\int_{D(r)}(2|\nabla logf|^{2}+|A|^{2}) \leq 4\pi\chi (B(r)) - 2\lambda'- 2\int_{S(r)}<\nabla logf, \nu> , $$ where $\nu $ is the unit outward normal; the term $\lambda' $ is understood as a distribution on ${\Bbb R}^{+}.$ Using the H\"older inequality on the last term, we have \begin{equation} \label{e4.12} 2\int_{D(r)}(|\nabla logf|^{2}+\frac{1}{2}|A|^{2}) \leq 4\pi\chi (B(r)) - 2\lambda' + 2(\int_{S(r)}|\nabla logf|^{2}+\frac{1}{2}|A|^{2})^{1/2}(v' )^{1/2}. \end{equation} We now proceed more or less following a well-known argument in [CY,Thm.1]. Set $$F(r) = \int_{D(r)}(|\nabla logf|^{2}+\frac{1}{2}|A|^{2}). $$ Then (4.12) gives \begin{equation} \label{e4.13} F(r) \leq 2\pi - \lambda' + (F' )^{1/2}\lambda^{1/2}. \end{equation} Note that $F(r)$ is monotone increasing in $r$. If $F(r) \leq 2\pi ,$ for all $r$, then (4.4) is proved, so that we may assume that $F(r) > 2\pi ,$ for large $r$. Then (4.13) implies $$\frac{1}{\lambda^{1/2}} + \frac{\lambda'}{(F- 2\pi )\lambda^{1/2}} \leq \frac{(F' )^{1/2}}{F- 2\pi}. $$ We integrate this from $r$ to $s$, with $s >> r$. A straightforward application of the H\"older inequality gives $$(s- r)^{3/2}\cdot \bigl(\int_{r}^{s}\lambda\bigr)^{- 1/2} \leq \int_{r}^{s}(\lambda^{- 1/2}). $$ Thus, using (4.2), one obtains $$(s^{1/2} - r^{1/2}) + \int_{r}^{s} \frac{\lambda'}{(F- 2\pi )\lambda^{1/2}} \leq c\cdot (F^{-1}(r) - F^{-1}(s))^{1/2}(s-r)^{1/2}. $$ The integral on the left is non-negative, (integrate by parts to see this), and thus taking a limit as $s \rightarrow \infty $ implies, for any $r$, $$c \leq F^{-1}(r), $$ which gives (4.4). To prove (4.3), write (4.13) as $\lambda' \leq 2\pi + (F' )^{1/2}\cdot \lambda^{1/2}.$ Then one may integrate this from 0 to $r$ and apply the H\"older inequality and (4.2) to obtain (4.3). 
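The last integration may be carried out explicitly. Since $\lambda (0) =$ 0, integrating $\lambda' \leq 2\pi + (F' )^{1/2}\lambda^{1/2}$ from 0 to $r$ and applying the H\"older inequality gives $$\lambda (r) \leq 2\pi r + \int_{0}^{r}(F' )^{1/2}\lambda^{1/2} \leq 2\pi r + \bigl(\int_{0}^{r}F'\bigr)^{1/2}\bigl(\int_{0}^{r}\lambda\bigr)^{1/2} \leq 2\pi r + c\cdot F(r)^{1/2}\cdot r \ \leq \ c\cdot r, $$ using the area bound $\int_{0}^{r}\lambda = v(r) \leq c\cdot r^{2}$ from (4.2) in the third inequality, and the uniform bound on $F$ from (4.4) in the last. 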
{\qed } \noindent \begin{remark} \label{r 4.2.} {\rm Consider the metric $\Roof{g}{\widetilde}_{V} = f^{2}\cdot g_{V}$ on $V$. If $\Roof{K}{\widetilde}$ and $K$ denote the Gauss curvatures of $\Roof{g}{\widetilde}_{V}$ and $g_{V}$ respectively, then a standard formula, c.f. [B, 1.159], gives $f^{2}\Roof{K}{\widetilde} = K - \Delta logf.$ Thus, from (4.1), we obtain $$f^{2}\Roof{K}{\widetilde} = {\tfrac{1}{2}} s + {\tfrac{1}{2}} |A|^{2} + |\nabla logf|^{2} \geq 0. $$ If one knew that $\Roof{g}{\widetilde}_{V} $ were complete, then the results in Proposition 4.1 could be derived in a simpler way from the well-known geometry of complete surfaces of non-negative curvature.} \end{remark} Next, we have the following analogue of Lemma 2.4, or more precisely (2.16). As in \S 2, let $t(x) = dist_{N}(x, x_{o}),$ for some fixed point $x_{o}\in N.$ \begin{proposition} \label{p 4.3.} Let (N, g, $\omega )$ be a complete ${\cal R}_{s}^{2}$ solution, with a free isometric $S^{1}$ action, and $\omega \leq $ 0. Then there is a constant $c_{o} > $ 0 such that on (N, g) \begin{equation} \label{e4.14} \rho \geq c_{o}\cdot t, \end{equation} and \begin{equation} \label{e4.15} |r| \leq c_{o}^{-1}t^{-2}. \end{equation} Further, for any $x,y\in S(r),$ \begin{equation} \label{e4.16} \frac{\omega (x)}{\omega (y)} \rightarrow 1, \ \ {\rm as} \ \ r \rightarrow \infty , \end{equation} and $\omega $ is a proper exhaustion function on N. If further $\omega $ is bounded below, then \begin{equation} \label{e4.17} lim_{t\rightarrow\infty}t^{2}|r| = 0, \end{equation} and \begin{equation} \label{e4.18} osc_{N \setminus B(r)} \ \omega \rightarrow 0, \ \ {\rm as} \ r \rightarrow \infty , \end{equation} \end{proposition} {\bf Proof:} These estimates will be proved essentially at the same time, but we start with the proof of (4.14). This is proved by contradiction, and the proof is formally identical to the proof of (2.16). 
Thus, assuming (4.14) is false, let $\{x_{i}\}$ be any sequence in $(N, g)$ with $t(x_{i}) \rightarrow \infty ,$ chosen to be $(\rho ,\frac{1}{2})$ buffered, as in the discussion following (2.16). Blow-down the metric $g$ based at $x_{i},$ by setting $g_{i} = \rho (x_{i})^{-2}\cdot g.$ In the $g_{i}$ metric, the equations (0.4)-(0.5) take the form \begin{equation} \label{e4.19} \alpha_{i}\nabla{\cal R}^{2} + L^{*}\omega = 0, \end{equation} \begin{equation} \label{e4.20} \Delta\omega = -\frac{\alpha_{i}}{4}|r|^{2}, \end{equation} where $\alpha_{i} = \alpha\rho_{i}^{-2}, \rho_{i} = \rho (x_{i}, g)$. All other metric quantities in (4.19)-(4.20) are w.r.t. $g_{i}.$ For clarity, it is useful to separate the discussion into non-collapse and collapse cases. {\bf Case (I). (Non-Collapse).} Suppose there is a constant $a_{o} > $ 0 such that \begin{equation} \label{e4.21} areaD(r) \geq a_{o}r^{2}, \end{equation} in $(V, g_{V}).$ Now by Proposition 4.1, \begin{equation} \label{e4.22} \int_{V \setminus D(r)}|\nabla logf|^{2} \rightarrow 0, \ \ {\rm as} \ r \rightarrow \infty . \end{equation} The integral in (4.22) is scale-invariant, and also invariant under multiplicative renormalizations of $f$. Thus, we normalize $f$ at each $x_{i}$ so that $f(x_{i}) \sim $ 1. This is equivalent to passing to suitable covering or quotient spaces of $N$. By the discussion on convergence preceding Lemma 2.1, and by Theorem 3.6, a subsequence of the metrics $(B_{x_{i}}(\frac{3}{2}), g_{i}) \subset (N, g_i)$ converges smoothly to a limit ${\cal R}_{s}^{2}$ solution defined at least on $(B_{x}(\frac{5}{4}), g_{\infty})$, $x = lim x_i$, possibly with $\alpha = 0$, (i.e. a solution of the static vacuum Einstein equations (1.8)). In particular, $f$ restricted to $B_{x_{i}}(\frac{5}{4})$ is uniformly bounded away from 0 and $\infty .$ It follows then from the bound on $f$ and (4.22) that \begin{equation} \label{e4.23} \int_{B_{x_{i}}(\frac{5}{4})}|\nabla logf|^{2} \rightarrow 0, \ \ {\rm as} \ \ i \rightarrow \infty , 
\end{equation} w.r.t. the $g_{i}$-metric on $N$. The smooth convergence as above, and (4.23) imply that on the limit, $f =$ const. The same reasoning on (4.4) implies that $A =0$ on the limit. Since $A = 0$, $\nabla f =$ 0 and $s =$ 0 on the limit, from (4.1) we see that $s_{V} =$ 0 on the limit. Hence the limit is a flat product metric on $C\times S^{1},$ where $C$ is a flat 2-manifold. The smooth convergence implies that $(B_{x_{i}}(\frac{5}{4}), g_{i})$ is almost flat, i.e. its curvature is almost 0. This is a contradiction to the buffered property of $x_{i},$ as in Lemma 2.4. This proves (4.14). The estimate (4.15) follows immediately from (4.14) and the smooth convergence as above. The proof of (4.17) is now again the same as in Lemma 2.4. Namely repeat the argument above on the metrics $g_{i} = t(x_{i})^{-2}\cdot g,$ for any sequence $x_{i}$ with $t(x_{i}) \rightarrow \infty .$ Note that in this (non-collapse) situation, the hypothesis that $\omega $ is bounded below is not necessary. Observe also that the smooth convergence and (4.22) imply that \begin{equation} \label{e4.24} |\nabla logf|(x) << 1/t(x), \end{equation} as $t(x) \rightarrow \infty ,$ so that $f$ grows slower than any fixed positive power of $t$. It follows from this and from (4.3) that the annuli $A_{x_{o}}(r, 2r)$ have diameter satisfying diam$A_{x_{o}}(r,2r) \leq c\cdot r,$ for some fixed constant $c < \infty .$ To prove (4.16), we see from the above that for any sequence $x_{i}$ with $t(x_{i}) \rightarrow \infty ,$ the blow-downs $g_{i}$ as above converge smoothly to a solution of the static vacuum Einstein equations (1.8), which is flat, (in a subsequence). Hence the limit potential $\bar{\omega} =$ lim $\bar{\omega}_{i}, \bar{\omega}_{i} = \omega (x)/|\omega (x_{i})|,$ is either constant or a non-constant affine function. 
The limit is a flat product $A(k^{-1},k)\times S^{1},$ where $A(k^{-1},k)$ is the limit of the blow-downs of $A(k^{-1}t(x_{i}),kt(x_{i}))\subset (V, g_{i})$ and $k > $ 0 is arbitrary. If $\bar{\omega}$ were a non-constant affine function, then $\bar{\omega}$ would assume both positive and negative values on $A(k^{-1}, k)$, for some choice of sufficiently large $k$, which contradicts the assumption that $\omega \leq $ 0 everywhere. Thus, $\omega $ renormalized as above converges to a constant on all blow-downs, which gives (4.16). Since $N = {\Bbb R}^{2}\times S^{1}$ topologically, $N$ has only one end. By the minimum principle applied to the trace equation (3.2), $inf_{S(r)}\omega \rightarrow inf_{N}\omega ,$ as $r \rightarrow \infty $. Together with (4.16), it follows that $\omega $ is a proper exhaustion function on $N$. Further, if $\omega $ is bounded below, then (4.18) follows immediately. {\bf Case (II). (Collapse).} If (4.21) does not hold, so that \begin{equation} \label{e4.25} areaD(r_{i}) << r_{i}^{2}, \end{equation} for some sequence $r_{i} \rightarrow \infty ,$ then one needs to argue differently, since in this case, the estimate (4.22) may arise from collapse of the area, and not from the behavior of $logf$. First, we prove (4.14). By the same reasoning as above in Case (I), if (4.14) does not hold, then there is a $(\rho, \frac{1}{2})$ buffered sequence $\{x_{i}\}$, with $t(x_i) \rightarrow \infty$, which violates (4.14). Further, we may choose the base points exactly as in the proof of Lemmas 2.1 or 2.4, to satisfy (2.4). As in Case (I), normalize $f$ at each $x_i$ so that $f(x_i) \sim 1$. If the metrics $g_{i} = \rho(x_{i})^{-2} \cdot g$ are not collapsing at $x_{i}$, then the same argument as in Case (I) above gives a contradiction. Thus, assume the metrics $\{g_i\}$ are collapsing at $x_i$. Now as in (2.4), the curvature radius $\rho_i$ is uniformly bounded below by $\frac{1}{2}$ within arbitrary but fixed distances to the base point $x_{i}$. 
Hence, as discussed preceding Lemma 2.1, we may unwrap the collapse by passing to sufficiently large finite covers of arbitrarily large balls $B_{x_{i}}(R_i)$, $R_i \rightarrow \infty$. One thus obtains in the limit a complete non-flat ${\cal R}_{s}^{2}$ or static vacuum solution $(N' , g' )$, (corresponding to $\alpha = 0$), with an additional free isometric $S^{1}$ action, i.e. on $(N' , g' )$ one now has a free isometric $S^{1}\times S^{1}$ action. The second $S^{1}$ action arises from the collapse, and the unwrapping of the collapse in very large covers. This means that $N' $ is a torus bundle over ${\Bbb R} .$ Since $(N' , g' )$ is complete and scalar-flat, a result of Gromov-Lawson [GL, Thm. 8.4] states that any such metric is flat. This contradiction then implies (4.14) must hold. As before, smooth convergence then gives (4.15). The argument for (4.16)-(4.18) proceeds as follows. Consider any sequence $\{x_{i}\}$ in $N$ with $t(x_{i}) \rightarrow \infty .$ The blow-down metrics $g_{i} = t(x_{i})^{-2}\cdot g$ have a subsequence converging, after passing to suitably large finite covers as above, to a (now non-complete) maximal limit $(N' , g' )$ with, as above, a free isometric $S^{1}\times S^{1}$ action. The limit $(N' , g' )$ is necessarily a solution of the static vacuum equations (1.8) with potential $\bar \omega$ obtained by renormalizing the potential $\omega $ of $(N, g)$, i.e. $\bar{\omega} = lim_{i\rightarrow\infty} \frac{\omega (x)}{|\omega (x_{i})|}$ as before. Now it is standard, c.f. 
[An2, Ex.2.11], [EK, Thm.2-3.12], that the only, (even locally defined), solutions of these equations with such an $S^{1}\times S^{1}$ action are (submanifolds of) the {\it Kasner metrics}, given explicitly as metrics on ${\Bbb R}^{+}\times S^{1}\times S^{1}$ by \begin{equation} \label{e4.26} dr^{2}+r^{2\alpha}d\theta_{1}^{2}+r^{2\beta}d\theta_{2}^{2}, \end{equation} where $\alpha = (a- 1)/(a- 1+a^{-1}), \beta = (a^{-1}- 1)/(a- 1+a^{-1}),$ with potential $\bar{\omega} = cr^{\gamma}, \gamma = (a- 1+a^{-1})^{-1}$. The parameter $a$ may take any value in $[- 1,1].$ The values $a = 0$ and $a = 1$ give the flat metric, with $\bar{\omega} =$ const and $\bar{\omega} = cr$ respectively. The limit $(N', g')$ is flat if $a = 0,1$; this occurs if and only if $|r|t^{2} \rightarrow $ 0 in a $g_i$-neighborhood of $x_{i}$. Similarly, the limit $(N', g')$ is non-flat when $a \in [-1, 0)\cup (0,1)$, which occurs when $|r|t^{2}$ does not converge to 0 everywhere in a $g_i$-neighborhood of $x_{i}.$ In either case, the limits of the geodesic spheres in $(N, g)$ are the tori $\{r =$ const\} of the limit Kasner metric. Since the oscillation of the limit potential $\bar{\omega}$ on such tori is 0, it is clear from the smooth convergence that (4.16) holds. As before, it is also clear that $\omega $ is a proper exhaustion function. Finally, if $\omega $ is bounded below, then the limit $\bar \omega$ is uniformly bounded. Since $\bar \omega = cr^{\gamma}$, it follows that necessarily $a = 0$, so that all limits $(N' , g' )$ above are flat. The same argument as above in the non-collapse case then implies (4.17)-(4.18). {\qed } \noindent \begin{remark} \label{r 4.4.} {\rm The argument in Proposition 4.3 shows that if $\omega \leq $ 0, then either the curvature decays faster than quadratically, i.e. (4.17) holds, or the ${\cal R}_{s}^{2}$ solution is asymptotic to the Kasner metric (4.26). 
It is unknown whether there are complete $S^{1}$ invariant ${\cal R}_{s}^{2}$ solutions asymptotic to the Kasner metric.} \end{remark} We are now in a position to complete the proof of Theorem 0.2. \noindent {\bf Proof of Theorem 0.2.} The assumption in (0.6) that $\omega $ is bounded below will be used only in one place, c.f. the paragraph following (4.37), so for the moment, we proceed without this assumption. It is convenient for notation to change sign, so we let \begin{equation} \label{e4.27} u = -\omega . \end{equation} It is clear that Proposition 4.3 implies that $u > $ 0 outside a compact set in $N$. It is useful to consider the auxiliary metric \begin{equation} \label{e4.28} \Roof{g}{\widetilde} = u^{2}\cdot g, \end{equation} compare with [An2, \S 3]. In fact the remainder of the proof follows closely the proof of [An2, Thm.0.3], c.f. also [An2, Rmk.3.6], so we refer there for some details. We only consider $\Roof{g}{\widetilde}$ outside a compact set $K \subset N$ on which $u > $ 0, so that $\Roof{g}{\widetilde}$ is a smooth Riemannian metric. By standard formulas for conformal change of the metric, c.f. [B, Ch.1J], the Ricci curvature $\Roof{r}{\widetilde}$ of $\Roof{g}{\widetilde}$ is given by $$\Roof{r}{\widetilde} = r - u^{-1}D^{2}u - u^{-1}\Delta u\cdot g + 2(dlog u)^{2} = $$ \begin{equation} \label{e4.29} = - u^{-1}L^{*}u - 2u^{-1}\Delta u\cdot g + 2(dlog u)^{2} \end{equation} $$\geq - u^{-1}L^{*}u - 2u^{-1}\Delta u\cdot g. $$ From the Euler-Lagrange equations (0.4)-(0.5) and (4.15), together with the regularity estimates from Theorem 3.6, we see that $$|\nabla{\cal R}^{2}| \leq c\cdot t^{-4}, |\Delta u| \leq c\cdot t^{-4}. $$ Thus the Ricci curvature of $\Roof{g}{\widetilde}$ is almost non-negative outside a compact set, in the sense that $$\Roof{r}{\widetilde} \geq - c\cdot t^{-4}\Roof{g}{\widetilde}. 
$$ Further, since the Ricci curvature controls the full curvature in dimension 3, one sees from (4.29) that the sectional curvature $\Roof{K}{\widetilde}$ of $\Roof{g}{\widetilde}$ satisfies \begin{equation} \label{e4.30} |\Roof{K}{\widetilde}| \leq \frac{c}{u^{2}}|\nabla \log u|^{2} + \frac{c}{t^4}, \end{equation} where the norm and gradient on the right are w.r.t. the $g$ metric. Let $\Roof{t}{\widetilde}(x) = dist_{\Roof{g}{\widetilde}}(x, x_{o}),$ (for $\Roof{t}{\widetilde}$ large), and $|\Roof{K}{\widetilde}|(\Roof{t}{\widetilde}) = sup_{S(\Roof{t}{\widetilde})}|\Roof{K}{\widetilde}|,$ taken w.r.t. $(N, \Roof{g}{\widetilde}).$ It follows from the change of variables formula that \begin{equation} \label{e4.31} \int_{1}^{\Roof{s}{\tilde}}\Roof{t}{\tilde}|\Roof{K}{\tilde}|(\Roof{t}{\tilde})d\Roof{t}{\tilde} \leq c \bigl [\int_{1}^{s}t|\nabla \log u|^{2}(t)dt + 1 \bigr ], \end{equation} where as above, $|\nabla \log u|^{2}(t) = sup_{S(t)}|\nabla \log u|^{2}.$ In establishing (4.31), we use the fact that \begin{equation} \label{e4.32} d\Roof{t}{\tilde} = udt \end{equation} together with the fact that \begin{equation} \label{e4.33} \Roof{t}{\tilde} \leq c\cdot u\cdot t, \end{equation} which follows from (4.32) by integration, using (4.16) together with the fact that $|\nabla \log u| \leq c/t,$ which follows from (4.15). Now we claim that \begin{equation} \label{e4.34} \int_{1}^{s}t|\nabla \log u|^{2}(t)dt \leq c\int_{1}^{s}area(S(t))^{-1}dt, \end{equation} for some $c < \infty .$ We refer to [An2, Lemma 3.5] for the details of this (quite standard) argument, and just sketch the ideas involved. First from the Bochner-Lichnerowicz formula and (4.15), one obtains $$\Delta|\nabla \log u| \geq - (c/t^{2})|\nabla \log u|. $$ Hence, from the sub-mean value inequality, [GT, Thm.8.17], one has \begin{equation} \label{e4.35} sup_{S(r)}|\nabla \log u|^{2} \leq \frac{C}{vol A(\frac{1}{2}r, 2r)}\int_{A(\frac{1}{2}r,2r)}|\nabla \log u|^{2} \leq \frac{C}{r\cdot area S(r)}\int_{B(r)}|\nabla \log u|^{2}
\end{equation} where the second inequality uses again the curvature bound (4.15). To estimate the $L^{2}$ norm of $|\nabla \log u|$ on $N$, we observe that \begin{equation} \label{e4.36} \int_{N}|r|^{2}dV = \int_{V}|r|^{2}fdA < \infty . \end{equation} The estimate (4.36) follows from the decay (4.15), from (4.2), and the fact that $sup_{S(r)}f \leq r^{1+\varepsilon},$ for any fixed $\varepsilon > $ 0, which follows from the proof of Proposition 4.3 (c.f. (4.24) and (4.26)). Now multiply the trace equation (0.5) by $u^{-1}$ and apply the divergence theorem on a suitable compact exhaustion $\Omega_{i}$ of $N$, for instance by sub-level sets of the proper exhaustion function $u$. Using (4.36), one thus obtains a uniform bound on the $L^{2}$ norm of $|\nabla \log u|$ on $\Omega_{i},$ independent of $i$. Inserting this bound in (4.35) implies (4.34). Now if \begin{equation} \label{e4.37} \int_{1}^{\infty}area(S(t))^{-1}dt = \infty , \end{equation} then a result of Varopoulos [V] implies that $(N, g)$ is parabolic, in the sense that any positive superharmonic function on $N$ is constant. By (0.5) and (0.6), $\omega + \lambda + 1$ is a positive superharmonic function on $(N, g)$, and hence constant; thus $\omega $ is constant, and $(N, g)$ is flat, by the trace equation (0.5). This proves Theorem 0.2 in this case. As indicated above, this is in fact the only place in the proof of Theorem 0.2 where the lower bound assumption $\omega \geq -\lambda > -\infty $ is used. Thus, suppose instead that \begin{equation} \label{e4.38} \int_{1}^{\infty}area(S(t))^{-1}dt < \infty . \end{equation} It follows then from (4.31),(4.34) and (4.38) that \begin{equation} \label{e4.39} \int_{1}^{\infty}\Roof{t}{\tilde}|\Roof{K}{\tilde}|(\Roof{t}{\tilde})d\Roof{t}{\tilde} < \infty . \end{equation} Now it is a standard fact in comparison theory, c.f.
[Ab], that the bound (4.39) implies that $(N, \Roof{g}{\widetilde})$ is almost flat at infinity, in the strong sense that geodesic rays starting at some base point $x_{o} \in N$ either stay a bounded distance apart, or grow apart linearly. More precisely, outside a sufficiently large compact set, $(N, \Roof{g}{\widetilde})$ is quasi-isometric to the complement of a compact set in a complete flat manifold. Observe that $(N, \Roof{g}{\widetilde})$ cannot be quasi-isometric to ${\Bbb R}^{3}$ outside a compact set: that would imply that $V$ is non-collapsing at infinity, and then, by combining (4.24), (4.34) and (4.38), that $u\cdot f$, the length of the $S^{1}$ fiber in $(N, \Roof{g}{\widetilde})$, has sublinear growth, which is impossible when $(N, \widetilde g)$ is quasi-isometric to ${\Bbb R}^3$. Hence, outside a compact set, $(N, \widetilde g)$ is quasi-isometric to a flat product $C\times S^{1}$, where $C$ is the complement of a compact set in a complete flat 2-manifold. This means that there is a constant $C < \infty $ such that the $\widetilde g$-length of the $S^{1}$ fiber satisfies \begin{equation} \label{e4.40} L_{\tilde g}(S^{1}) = u\cdot f \leq C. \end{equation} Now return to the trace equation (0.5) on $(N, g)$. Integrate this over $B(s) \subset (N, g)$ and apply the divergence theorem to obtain \begin{equation} \label{e4.41} \frac{\alpha}{4}\int_{B(s)}|r|^{2} = \int_{S(s)}<\nabla u, \nu> \ \leq\int_{S(s)}|\nabla u| = \int_{S_{V}(s)}|\nabla u|f, \end{equation} where $S_{V}(s)$ is the geodesic $s$-sphere in $(V, g_{V}).$ Using (4.40), it follows that $$\frac{\alpha}{4}\int_{B(s)}|r|^{2} \leq C\int_{S_{V}(s)}|\nabla \log u|. $$ However, by (4.34) and (4.38), $|\nabla \log u|(t) \ll t^{-1}.$ This and the length estimate (4.3) imply that $$\int_{S_{V}(s)}|\nabla \log u| \rightarrow 0, \ \ {\rm as} \ \ s \rightarrow \infty , $$ which of course implies that $(N, g)$ is flat. This completes the proof of Theorem 0.2.
{\qed } \noindent \begin{remark} \label{r 4.5.(i).} {\rm As noted above, the lower bound assumption on $\omega $ is required only in case $(N, g)$ is parabolic. Alternately, if $(V, g_{V})$ is non-collapsing at infinity, i.e. (4.21) holds, or if $u = - \omega$ has sufficiently small growth at infinity, i.e. $\int^{\infty}t|\nabla \log u|^{2}dt < \infty ,$ then the proof above shows that the lower bound on $\omega $ is again not necessary. On the other hand, as noted in Remark 4.4, there might exist complete ${\cal R}_{s}^{2}$ solutions asymptotic to the Kasner metric at infinity, so that $\omega \sim - r^{\gamma}, \gamma\in (0,1).$ {\bf (ii).} The proofs of Theorems 0.1 and 0.2 above involve only the asymptotic properties of the solution. It is clear that these proofs remain valid if it is assumed that $s \geq $ 0 outside a compact set in $(N, g)$ for Theorem 0.1, while $\omega \leq $ 0 outside a compact set in $(N, g)$ for Theorem 0.2.} \end{remark} \section{Existence of Complete ${\cal R}_{s}^{2}$ Solutions.} \setcounter{equation}{0} {\bf \S 5.1.} In this section, we show that the assumption in Theorem 0.2 that $(N, g, \omega )$ have an isometric free $S^{1}$ action is necessary, by constructing non-trivial ${\cal R}_{s}^{2}$ solutions with a large degree of symmetry. Let $g_{S}$ be the Schwarzschild metric on $[2m,\infty )\times S^{2},$ given by \begin{equation} \label{e5.1} g_{S} = (1-\frac{2m}{r})^{-1}dr^{2} + r^{2}ds^{2}_{S^{2}}. \end{equation} The parameter $m > $ 0 is the mass of $g_{S}.$ Varying $m$ corresponds to changing the metric by a homothety. Clearly the metric is spherically symmetric, and so admits an isometric $SO(3)$ action, although the action of any $S^{1}\subset SO(3)$ on $[2m,\infty )\times S^{2}$ is not free.
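For completeness, the static vacuum property of the metric (5.1), with the potential $u = (1-2m/r)^{1/2}$ introduced in (5.2) below, can be confirmed by a direct symbolic computation. This is only a consistency check, not needed for the argument; the equations (1.8) are assumed here to take the standard form $u\cdot r = D^{2}u$, $\Delta u = 0$:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
m = sp.Symbol('m', positive=True)
x = [r, th, ph]

g = sp.diag(1/(1 - 2*m/r), r**2, r**2*sp.sin(th)**2)   # the metric (5.1)
gi = g.inv()
n = 3

# Christoffel symbols Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sp.simplify(sum(gi[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                           - sp.diff(g[i, j], x[l]))/2 for l in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

def ricci(i, j):   # Ricci tensor R_{ij}
    return sum(sp.diff(Gamma[k][i][j], x[k]) - sp.diff(Gamma[k][k][j], x[i])
               + sum(Gamma[k][k][l]*Gamma[l][i][j] - Gamma[k][j][l]*Gamma[l][i][k]
                     for l in range(n)) for k in range(n))

u = sp.sqrt(1 - 2*m/r)                                  # the potential (5.2)

def hess(i, j):    # Hessian (D^2 u)_{ij}
    return sp.diff(u, x[i], x[j]) - sum(Gamma[k][i][j]*sp.diff(u, x[k]) for k in range(n))

# static vacuum equations (assumed form): u * Ric = D^2 u  and  Delta u = 0
assert all(sp.simplify(u*ricci(i, j) - hess(i, j)) == 0
           for i in range(n) for j in range(n))
assert sp.simplify(sum(gi[i, i]*hess(i, i) for i in range(n))) == 0
```

In particular the harmonicity of $u$ is immediate in divergence form, since $\sqrt{\det g}\, g^{rr} u'(r) = m \sin \theta$ is independent of $r$.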
The boundary $\Sigma = r^{-1}(2m)$ is a totally geodesic round 2-sphere, of radius $2m$, and hence $g_{S}$ may be isometrically doubled across $\Sigma $ to a complete smooth metric on $N = {\Bbb R}\times S^{2}.$ The Schwarzschild metric is the most important solution of the static vacuum Einstein equations (1.8). The potential is given by the function \begin{equation} \label{e5.2} u = (1-\frac{2m}{r})^{1/2}. \end{equation} The potential $u$ extends past $\Sigma $ as an {\it odd} harmonic function under reflection in $\Sigma .$ We show that there is a potential function $\omega $ on $N$ such that $(N, g_{S}, \omega )$ is a complete solution of the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5) with non-zero $\alpha .$ \begin{proposition} \label{p 5.1.} The Schwarzschild metric $(N, g_{S})$ satisfies the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5), where the potential is given by $$\omega = \tau + c\cdot u, $$ for any $c\in{\Bbb R} $ and $u$ is as in (5.2). The function $\tau $ is spherically symmetric and even w.r.t. reflection across $\Sigma .$ Explicitly, \begin{equation} \label{e5.3} \tau = \lim_{a\rightarrow 2m} \ \tau_{a} \end{equation} where $a > 2m$ and \begin{equation} \label{e5.4} \tau_{a}(r) = \frac{\alpha}{8}(1-\frac{2m}{r})^{1/2}\bigl(\int_{a}^{r} \frac{1}{s^{5}(1-\frac{2m}{s})^{3/2}}ds - \frac{1}{ma^{3}}\frac{1}{(1-\frac{2m}{a})^{1/2}}\bigr) , \end{equation} for $r \geq a$. In particular, $\tau (2m) = -\frac{\alpha}{8m}(2m)^{-3}, \tau' (2m) =$ 0, $\tau < $ 0 everywhere, and $\tau $ is asymptotic to a constant $\tau_{o} < $ 0 at each end of $N$. \end{proposition} {\bf Proof:} By scaling, we may assume $m = \frac{1}{2}.$ One may rewrite the expression (0.2) for $\nabla{\cal R}^{2}$ via a standard Weitzenb\"ock formula, c.f. [B, 4.71], to obtain \begin{equation} \label{e5.5} \nabla{\cal R}^{2} = \frac{1}{2}\delta dr + \frac{1}{2}D^{2}s - r\circ r - R\circ r - \frac{1}{2}\Delta s\cdot g + \frac{1}{2}|r|^{2}\cdot g.
\end{equation} The Schwarzschild metric $g_{S},$ or any spherically symmetric metric, is conformally flat, so that $d(r-\frac{s}{4}g) =$ 0. Since $g_{S}$ is scalar-flat, one thus has \begin{equation} \label{e5.6} \nabla{\cal R}^{2} = - r\circ r - R\circ r + \frac{1}{2}|r|^{2}\cdot g. \end{equation} Let $t$ denote the distance to the event horizon $\Sigma $ and let $e_{i}, i =$ 1,2,3 be a local orthonormal frame on $N = S^{2}\times {\Bbb R} ,$ with $e_{3} = \nabla t,$ so that $e_{1}$ and $e_{2}$ are tangent to the spheres $t =$ const. Any such framing diagonalizes $r$ and $\nabla{\cal R}^{2}.$ The Ricci curvature of $g_{S}$ satisfies $$r_{33} = - r^{-3}, \ \ r_{11} = r_{22} = \frac{1}{2}r^{-3}. $$ A straightforward computation of the curvature terms in (5.6) then gives $$(\nabla{\cal R}^{2})_{33} = \frac{1}{4}r^{-6}, \ \ (\nabla{\cal R}^{2})_{11} = (\nabla{\cal R}^{2})_{22} = -\frac{1}{2}r^{-6}. $$ We look for a solution of (0.4) with $\tau $ spherically symmetric, i.e. $\tau = \tau (t).$ Then $$D^{2}\tau = \tau' D^{2}t + \tau'' dt\otimes dt, \ \ \Delta\tau = \tau' H + \tau'' , $$ where $H = \Delta t$ is the mean curvature of the spheres $t =$ const and the derivatives are w.r.t. $t$. One has $(D^{2}t)_{33} =$ 0 while $(D^{2}t)_{ii} = \frac{1}{2}H$ in tangential directions $i =$ 1,2. Thus, in the $\nabla t$ direction, (0.4) requires \begin{equation} \label{e5.7} \frac{\alpha}{4}r^{-6}+ \tau'' - (\tau' H+\tau'' ) - \tau (- r^{-3}) = 0, \end{equation} while in the tangential directions, (0.4) is equivalent to \begin{equation} \label{e5.8} -\frac{\alpha}{2}r^{-6}+ \frac{H}{2}\tau' - (\tau' H+\tau'' ) - \frac{\tau}{2}r^{-3} = 0. \end{equation} The equations (5.7)-(5.8) simplify to the system \begin{equation} \label{e5.9} \tau' H - \tau r^{-3} = \frac{\alpha}{4}r^{-6}, \end{equation} \begin{equation} \label{e5.10} \tau'' + \frac{H}{2}\tau' + \frac{\tau}{2}r^{-3} = -\frac{\alpha}{2}r^{-6}.
\end{equation} It is easily verified that (5.10) follows from (5.9) by differentiation, so only (5.9) needs to be satisfied. It is convenient to change the derivatives w.r.t. $t$ above to derivatives w.r.t. $r$, using the relation $\frac{dr}{dt} = (1-\frac{1}{r})^{1/2}.$ Since also $H = 2\frac{r'}{r} = \frac{2}{r}(1-\frac{1}{r})^{1/2},$ (5.9) becomes \begin{equation} \label{e5.11} \frac{2}{r}(1-\frac{1}{r})\dot {\tau} - \tau r^{-3} = \frac{\alpha}{4}r^{-6}, \end{equation} where $\dot {\tau} = d\tau /dr.$ The linear ODE (5.11) in $\tau $ can easily be explicitly integrated, and one may verify that $\tau $ in (5.3)-(5.4) is the unique solution on $([2m,\infty )\times S^{2}, g_{S})$ of (0.4)-(0.5) satisfying (since $m = \frac{1}{2}$) \begin{equation} \label{e5.12} \tau (1) = -\frac{\alpha}{4} \ \ {\rm and} \ \ \frac{d}{dr}\tau|_{r=1} = 0. \end{equation} It follows that the {\it even} reflection of $\tau $ across $\{r = 1\} = \Sigma $ gives a smooth solution of (0.4) on the (doubled) Schwarzschild metric $(N, g_{S}),$ satisfying the stated properties. Since ${\rm Ker}\, L^{*} = \langle u\rangle $ on the Schwarzschild metric, the potential $\omega $ has in general the form $\omega = \tau +cu,$ for some $c\in{\Bbb R} .$ {\qed } Thus the Schwarzschild metric, with potential function $\omega ,$ gives the simplest non-trivial solution to the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5), just as it is the canonical solution of the static vacuum equations (1.8). \noindent \begin{remark} \label{r 5.2.} {\rm It is useful to consider some global aspects of solutions to the static vacuum equations in this context. It is proved in [An3, Appendix] that there are no non-flat complete solutions to the static vacuum equations (1.8) with potential $\omega < $ 0 or $\omega > $ 0 everywhere. Thus, this result is analogous to Theorem 0.1.
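The verification that (5.4) solves (5.11) can also be left to a computer algebra system; a sketch in {\tt sympy} (with $m = \frac{1}{2}$, matching the normalization of the proof; the constant $C$ below is the bracketed constant in (5.4)):

```python
import sympy as sp

r, s, a, alpha = sp.symbols('r s a alpha', positive=True)

# tau_a from (5.4), with m = 1/2 as in the proof
F = sp.Integral(1/(s**5*(1 - 1/s)**sp.Rational(3, 2)), (s, a, r))
C = 2/(a**3*sp.sqrt(1 - 1/a))            # = 1/(m a^3 (1-2m/a)^{1/2}) at m = 1/2
tau = alpha/8*sp.sqrt(1 - 1/r)*(F - C)

# the ODE (5.11): (2/r)(1 - 1/r) dtau/dr - tau/r^3 = (alpha/4) r^{-6}
expr = sp.expand(2/r*(1 - 1/r)*sp.diff(tau, r) - tau/r**3 - alpha/(4*r**6))
assert sp.simplify(expr.coeff(F)) == 0   # the unevaluated integral drops out
assert sp.simplify(expr.subs(F, 0)) == 0 # the remaining terms cancel
```

The two assertions together say that the residual of (5.11) vanishes identically: the coefficient multiplying the (unevaluated) integral is zero, and the integral-free part cancels against the right-hand side.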
While there do exist non-trivial complete smooth solutions with $\omega $ changing sign, for example the (isometrically doubled) Schwarzschild metric with $\omega = \pm u$ in (5.2), such solutions are very special since they have a smooth event horizon $\Sigma = \{\omega =$ 0\}, c.f. [An2] for further discussion. Hence, among the three classes of equations considered here, namely the ${\cal R}^{2}$ equations (0.2), the static vacuum equations (1.8) and the ${\cal R}_{s}^{2}$ equations (1.9), only the scalar curvature constrained ${\cal R}_{s}^{2}$ equations (1.9) for $0<\alpha<\infty $ admit non-trivial complete solutions with non-vanishing potential. In [An4], we will investigate in greater detail the structure of complete ${\cal R}_{s}^{2}$ solutions, $0 < \alpha < \infty ,$ as well as the structure of junction solutions, i.e. metrics which are solutions of the ${\cal R}^{2}$ equations in one region, and solutions of the ${\cal R}_{s}^{2}$ equations or static vacuum equations in another region, as mentioned in \S 1.} \end{remark} {\bf \S 5.2.} For the work in [An4] and that following it, it turns out to be very useful to have the results of this paper, and those of [An1, \S 3], with the $L^{2}$ norm of the traceless Ricci curvature $z = r - \frac{s}{3}g$ in place of $r$, i.e. with the functional \begin{equation} \label{e5.13} {\cal Z}^{2} = \int|z|^{2}dV \end{equation} in place of ${\cal R}^{2}.$ We show below that this can be done with relatively minor changes in the arguments. First, the Euler-Lagrange equations for ${\cal Z}^{2}$ are \begin{equation} \label{e5.14} \nabla{\cal Z}^{2} = D^{*}Dz + \frac{1}{3}D^{2}s - 2\stackrel{\circ}{R}\circ z + \frac{1}{2}(|z|^{2} - \frac{1}{3}\Delta s)\cdot g = 0, \end{equation} \begin{equation} \label{e5.15} tr\nabla{\cal Z}^{2} = -\frac{1}{6}\Delta s - \frac{1}{2}|z|^{2} = 0. \end{equation} Similarly, the ${\cal Z}_{s}^{2}$ equations for scalar-flat metrics, i.e.
the analogues of (0.4)-(0.5), are \begin{equation} \label{e5.16} \alpha\nabla{\cal Z}^{2} + L^{*}(\omega ) = 0, \end{equation} $$\Delta \omega = -\frac{\alpha}{4}|z|^{2}. $$ We begin by examining the functional $I_{\varepsilon}^{~-}$ given by \begin{equation} \label{e5.17} I_{\varepsilon}^{~-} = \varepsilon v^{1/3}\int|z|^{2} + (v^{1/2}\int (s^{-})^{2})^{1/2}, \end{equation} i.e. the ${\cal Z}^{2}$ analogue of the functional $I_{\varepsilon}' $ in (1.16). Recall from \S 1 that the behavior of $I_{\varepsilon}' $ as $\varepsilon \rightarrow $ 0 gives rise to the ${\cal R}^{2}$ and ${\cal R}_{s}^{2}$ equations. In the same way, the behavior of $I_{\varepsilon}^{~-}$ as $\varepsilon \rightarrow 0$ gives rise to the ${\cal Z}^2$ and ${\cal Z}_{s}^{2}$ equations. Again, as noted in \S 1, the existence and basic properties of critical points of the functional $I_{\varepsilon}$ in (1.1) were (essentially) treated in [An1, \S 8]. For this functional, the presence of $r$ or $z$ in the definition (1.1) makes no essential difference, due to the ${\cal S}^{2}$ term in (1.1). However, for the passage from ${\cal S}^{2}$ in (1.1) to ${\cal S}_{-}^{2}$ in (5.17), this is no longer the case, since we have no a priori control on $s^{+} = \max (s, 0)$. Thus, we first show that the results of [An1, \S 3 and \S 8] do in fact hold for metrics with bounds on $I_{\varepsilon}^{~-}$ and for minimizers of $I_{\varepsilon}^{~-}$. Let $r_{h}$ be the $L^{2,2}$ harmonic radius, as in [An1, Def. 3.2]. The main estimate we need is the following; compare with [An1, Rmk. 3.6].
\begin{proposition} \label{p 5.3.} Let $D$ be a domain in a complete Riemannian manifold $(N, g)$, such that \begin{equation} \label{e5.18} \varepsilon\int_{D}|z|^{2} + \int_{D}(s^{-})^{2} \leq \Lambda , \end{equation} where $0 < \varepsilon < \infty .$ Then there is a constant $r_{o} = r_{o}(\Lambda , \varepsilon ) > $ 0 such that \begin{equation} \label{e5.19} r_{h}(x) \geq r_{o}\cdot \nu (x)\cdot \frac{dist(x, \partial D)}{diam D}. \end{equation} \end{proposition} {\bf Proof:} The proof is a modification of the proof of [An1, Thm. 3.5]. Thus, if (5.19) is false, then there exists a sequence of points $x_{i}$ in domains $(D_{i}, g_{i})$ such that \begin{equation} \label{e5.20} \frac{r_{h}(x_{i})}{dist(x_{i}, \partial D_{i})} \ll \frac{\nu (x_{i})}{diam D_{i}} \leq 1, \end{equation} where the last inequality follows since the volume radius $\nu$ is at most the diameter. Choose the points $x_{i}$ to realize the minimal value of the ratio in (5.20). It follows as in the proof of [An1, Thm. 3.5] that a subsequence of the rescaled pointed manifolds $(D_{i}, g_{i}' , x_{i}), g_{i}' = r_{h}(x_{i})^{-2}\cdot g_{i},$ converges in the weak $L^{2,2}$ topology to a complete, non-compact limit manifold $(N, g' , x)$, with $L^{2,2}$ metric $g' ,$ and of infinite volume radius. From (5.20), one easily deduces that $r_{h}(y_{i}) > \frac{1}{2},$ for all $y_{i}$ such that $dist_{g_{i}'}(x_{i}, y_{i}) \leq R$, for an arbitrary $R < \infty ,$ and for $i$ sufficiently large. Now the bound (5.18) and scaling properties of curvature imply that \begin{equation} \label{e5.21} z \rightarrow 0, \ \ s^{-} \rightarrow 0, \end{equation} strongly in $L^{2},$ uniformly on compact subsets of $(N, g' , x)$. Hence the limit $(N, g' )$ is of constant curvature and non-negative scalar curvature. In particular, the scalar curvature $s' $ of $g' $ is constant. If $s' > $ 0, then $(N, g' )$ must be compact, which is impossible. Hence $s' =$ 0, and so $(N, g' )$ is flat.
Hence, $(N, g' )$ is isometric to flat ${\Bbb R}^{3},$ since $(N, g' )$ has infinite volume radius. We claim that $s \rightarrow $ 0 strongly in $L^{2}$ also. Together with (5.21), this implies $r \rightarrow $ 0 strongly in $L^{2},$ and the proof proceeds as in [An1, Thm. 3.5]. To prove the claim, let $\phi$ be any function of compact support in $B_{y}(\frac{1}{2}),$ for an arbitrary $y\in N.$ With respect to the $g_{i}' $ metric, and by use of the Bianchi identity, we have $$\int s\cdot \Delta \phi = -\int<\nabla s, \nabla \phi> = 6 \int<\delta z, \nabla \phi> = 6 \int< z, D^{2}\phi> , $$ where we have removed the prime and subscript $i$ from the notation. By $L^{2}$ elliptic regularity w.r.t. the metrics $g_{i}' ,$ (which are uniformly bounded locally in $L^{2,2}),$ it follows that \begin{equation} \label{e5.22} \int_{B_{y}(\frac{1}{4})}(s - s_{o})^{2} \leq C\cdot \int_{B_{y}(\frac{1}{2})}|z|^{2}, \end{equation} where $s_{o}$ is the mean value of $s$ on $B_{y}(\frac{1}{2}),$ and where $C$ is a fixed constant, independent of $i$ and $y$. Thus, $s = s_{g_{i}'}$ converges strongly in $L^{2}$ to its mean value in the limit. Since $s_{o} =$ 0 in the limit, it follows that $s \rightarrow $ 0 strongly in $L^{2},$ as required. {\qed } Let $\rho_{z}$ be the $L^{2}$ curvature radius w.r.t. $z$, i.e. again as in [An1, Def. 3.2], $\rho_{z}(x)$ is the largest radius of a geodesic ball about $x$ such that for $y\in B_{x}(\rho_{z}(x))$ and $D_{y}(s) = B_{x}(\rho_{z}(x))\cap B_{y}(s),$ \begin{equation} \label{e5.23} \frac{s^{4}}{vol D_{y}(s)}\int_{D_{y}(s)}|z|^{2} \leq c_{o}, \end{equation} where $c_{o}$ is a fixed small constant. Of course $\rho_{r} \leq \rho_{z}$.
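The dimensional constant in the Bianchi identity used above follows from the contracted second Bianchi identity: with $\delta = -{\rm div}$, one has $\delta r = -\frac{1}{2}ds$ and $\delta (sg) = -ds$, so that $\delta z = \delta r - \frac{1}{3}\delta (sg) = -\frac{1}{6}ds$ in dimension 3. This can be sanity-checked symbolically on any test 3-metric; the diagonal metric below is an arbitrary (hypothetical) non-Einstein example, unrelated to the solutions considered in this paper:

```python
import sympy as sp

r, x1, x2 = sp.symbols('r x1 x2', positive=True)
x = [r, x1, x2]
p, q = sp.Rational(1, 3), sp.Rational(3, 4)   # arbitrary test exponents

g = sp.diag(1, r**(2*p), r**(2*q))            # a non-Einstein test 3-metric
gi = g.inv()
n = 3

Gamma = [[[sp.simplify(sum(gi[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                           - sp.diff(g[i, j], x[l]))/2 for l in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

Ric = sp.Matrix(n, n, lambda i, j: sum(
    sp.diff(Gamma[k][i][j], x[k]) - sp.diff(Gamma[k][k][j], x[i])
    + sum(Gamma[k][k][l]*Gamma[l][i][j] - Gamma[k][j][l]*Gamma[l][i][k]
          for l in range(n)) for k in range(n)))

scal = sum(gi[i, j]*Ric[i, j] for i in range(n) for j in range(n))
z = Ric - scal/3*g                            # traceless Ricci

# (div z)_j = g^{ik} ( d_k z_ij - Gamma^l_{ki} z_lj - Gamma^l_{kj} z_il )
def divz(j):
    return sum(gi[i, k]*(sp.diff(z[i, j], x[k])
               - sum(Gamma[l][k][i]*z[l, j] + Gamma[l][k][j]*z[i, l] for l in range(n)))
               for i in range(n) for k in range(n))

# contracted Bianchi identity in dimension 3: div z = (1/6) ds, i.e. delta z = -(1/6) ds
for j in range(n):
    assert sp.simplify(divz(j) - sp.diff(scal, x[j])/6) == 0
```

The check is non-trivial since $s$ is non-constant for these exponents, so $ds \neq 0$ in the radial direction.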
Note that Proposition 5.3 implies in particular that for the $L^{2}$ curvature radius $\rho = \rho_{r}$ as in \S 2, \begin{equation} \label{e5.24} \rho_{r}(x) \geq \rho_{o}\cdot \rho_{z}(x), \end{equation} where $\rho_{o}$ is a constant depending only on a lower bound on $\nu $ and a bound on ${\cal S}_{-}^{2}$ on $B_{x}(\rho_{z}(x)).$ Note also that this local result is false without a local bound on ${\cal S}_{-}^{2}.$ For in this case, a metric of constant negative curvature, but with arbitrarily large scalar curvature, will make $r_{h}$ arbitrarily small without any change to $\rho_{z}.$ Proposition 5.3 shows that the analogue of [An1, Thm.3.5/Rmk.3.6] holds for the functional $I_{\varepsilon}^{~-}$ in place of ${\cal R}^{2},$ for any given $\varepsilon > $ 0, as does [An1, Thm.3.7]. Given this local $L^{2,2}$ control, an examination of the proofs shows that the results [An1, Thm.3.9-Cor.3.11] also hold w.r.t. $I_{\varepsilon}^{~-},$ without any further changes, as does the main initial structure theorem, [An1, Thm.3.19]. The local results [An1, Lem.3.12-Cor.3.17], dealing with collapse within the $L^{2}$ curvature radius, will not hold for $\rho_{z}$ without a suitable local bound on the scalar curvature. However, as noted at the bottom of [An1, p.223], [An1, Lemmas 3.12, 3.13] only require a bound on the $L^{2}$ norm of the negative part of the Ricci curvature.
Hence these results, as well as [An1, Cor.3.14-Cor.3.17] hold for metrics satisfying \begin{equation} \label{e5.25} s \geq -\lambda , \end{equation} for some fixed constant $\lambda > -\infty .$ In particular, for metrics satisfying (5.25), we have \begin{equation} \label{e5.26} \rho_{r}(x) \geq \rho_{o}\cdot \rho_{z}(x), \end{equation} where $\rho_{o} = \rho_{o}(\lambda , c_{o})$ is independent of the volume radius $\nu .$ We now use the results above to prove the following: \begin{proposition} \label{p 5.4.} Theorems 0.1 and 0.2 remain valid for complete non-compact ${\cal Z}^{2}$ and ${\cal Z}_{s}^{2}$ solutions respectively. \end{proposition} {\bf Proof:} For Theorem 0.2, this is clear, since a scalar-flat ${\cal R}_{s}^{2}$ solution is the same as a scalar-flat ${\cal Z}_{s}^{2}$ solution. For Theorem 0.1, as noted in the beginning of \S 2, the form of the full Euler-Lagrange equations (0.2) or (5.14) makes no difference in the arguments, since elliptic regularity may be obtained from either one. Thus we need only consider the difference in the trace equations (0.3) and (5.15). Besides the insignificant difference in the constant factor, the only difference in these equations is that $r$ is replaced by $z$; the fact that the sign of the constants is the same is of course important. An examination of the proof shows that all arguments for ${\cal R}^{2}$ solutions remain valid for ${\cal Z}^{2}$ solutions, with $r$ replaced by $z$, except in the following two instances: (i). The passage from (2.7) to (2.8) in the proof of Proposition 2.2, which used the obvious estimate $|r|^{2} \geq s^{2}/3.$ This estimate is no longer available for $|z|^{2}$ in place of $|r|^{2}.$ (ii). In the proof of Lemma 2.9(ii), where $\Delta s(x_{i}) \rightarrow $ 0 implies $|r|(x_{i}) \rightarrow $ 0, and hence $s(x_{i}) \rightarrow $ 0, which again no longer follows trivially for $z$ in place of $r$. We first prove (ii) for ${\cal Z}^{2}$ solutions. 
By Lemma 2.1, we may assume that $(N, g)$ is a complete ${\cal Z}^{2}$ solution, with uniformly bounded curvature. Let $\{x_{i}\}$ be a minimizing sequence for $s \geq $ 0 on $(N, g)$. As before $\Delta s(x_{i}) \rightarrow $ 0, and so $|z|(x_{i}) \rightarrow $ 0, as $i \rightarrow \infty .$ Since the curvature is uniformly bounded, it follows from (the proof of) the maximum principle, c.f. [GT, Thm.3.5], that $|z|^{2} \rightarrow $ 0 in balls $B_{x_{i}}(c),$ for any given $c < \infty .$ Hence the metric $g$ in $B_{x_{i}}(c)$ approximates a constant curvature metric. Since $(N, g, x_{i})$ is complete and non-compact, this forces $s \rightarrow $ 0 in $B_{x_{i}}(c),$ which proves (ii). Regarding (i), it turns out, somewhat surprisingly, that the proof of Proposition 2.2 is not so simple to rectify. First, observe that Proposition 2.2 is used only in the following places in the proof of Theorem 0.1. (a). The end of Lemma 2.4. (b). Lemmas 2.8 and 2.10. Regarding (a), the estimate (2.19) still holds. In this case, as noted following the proof of Proposition 2.2, there is a smoothing $\tilde t$ of the distance function $t$ such that $|\Delta \tilde t| \leq c/ \tilde t^{2}$ and in fact \begin{equation} \label{e5.27} |D^{2}\tilde t| \leq c/\tilde t^{2}. \end{equation} The proof may then be completed as follows. As in the beginning of the proof of Proposition 2.2, we obtain from the trace equation (5.15), \begin{equation} \label{e5.28} \int\eta^{4}|z|^{2} \leq \frac{1}{3}\int<\nabla s, \nabla \eta^{4}> = -2\int< z, D^{2}\eta^{4}> , \end{equation} where the last equality follows from the Bianchi identity $\delta z = -\frac{1}{6}ds,$ and $\eta = \eta (\tilde t)$ is of compact support. Expand $D^{2}\eta^{4}$ as before and apply the Cauchy-Schwarz inequality to (5.28). Using (5.27), together with the argument following (2.9), the proof of Proposition 2.2 follows easily in this case. (b). For both of these results, it is assumed that (2.47) holds, i.e.
$s(x) \geq d_{o}\cdot \rho (x)^{-2},$ and so $s(x) \geq d\cdot t(x)^{-2}.$ In this case, the estimate (5.26), which holds since $(N, g)$ has non-negative scalar curvature, together with a standard covering argument implies that \begin{equation} \label{e5.29} \int_{B(R)}s^{2}dV \leq c_{1}\cdot \int_{B(2R)}|z|^{2}, \end{equation} for all $R$ large, where $c_{1}$ is a constant independent of $R$. As before in the proof, there exists a sequence $r_{i} \rightarrow \infty$ as $i \rightarrow \infty$ such that, for all $R\in [r_{i}, 10r_{i}],$ \begin{equation} \label{e5.30} \int_{B(R)}s^{2}dV \leq c_{2}\cdot \int_{B(R/2)}s^{2}, \end{equation} with $c_{2}$ independent of $R$. Given the estimates (5.29)-(5.30), the proof of Proposition 2.2 then proceeds exactly as before. The remainder of the proof of Theorem 0.1 then holds for ${\cal Z}^{2}$ solutions; the only further change is to replace $r$ by $z$. {\qed } \begin{remark} \label{r 5.5.} {\rm We take this opportunity to correct an error in [An1]. Namely in [An1, Thms. 0.1/3.19], and also in [An1, Thms. 0.3/5.9], it is asserted that the maximal open set $\Omega $ is embedded in $M$. This assertion may be incorrect, and in any case its validity remains unknown. The error is in the statement that the diffeomorphisms $f_{i_{k}}$ constructed near the top of [An1, p.229] can be chosen to be nested. The proof of these results does show that $\Omega $ is weakly embedded in $M$, $$\Omega \subset\subset M, $$ in the sense that any compact domain with smooth boundary in $\Omega $ embeds as such a domain in $M$. Similarly, there exist open sets $V \subset M$, which contain a neighborhood of infinity of $\Omega ,$ such that $\{g_{i}\}$ partially collapses $V$ along F-structures. In particular, $V$ itself, as well as a neighborhood of infinity in $\Omega ,$ are graph manifolds.
Thus, the basic structure of these results remains the same, provided one replaces the claim that $\Omega \subset M$ by the statement that $\Omega \subset\subset M$. The remaining parts of these results hold without further changes. The same remarks hold with regard to the results of \S 8. My thanks to Yair Minsky for pointing out this error.} \end{remark} \begin{center} September, 1998/October, 1999 \end{center} \address{Department of Mathematics\\ S.U.N.Y. at Stony Brook\\ Stony Brook, N.Y. 11794-3651}\\ \email{anderson@math.sunysb.edu} \end{document}
\begin{document} \title{Unsupervised classification of quantum data} \author{Gael Sent\'is,$^{1}$ Alex Monr\`as,$^{2}$ Ramon Mu\~noz-Tapia,$^{2}$ John Calsamiglia,$^{2}$ and Emilio Bagan$^{2,3}$} \affiliation{$^{1}$Naturwissenschaftlich-Technische Fakult\"at, Universit\"at Siegen, 57068 Siegen, Germany\\ $^{2}$F\'{i}sica Te\`{o}rica: Informaci\'{o} i Fen\`{o}mens Qu\`antics, Departament de F\'{\i}sica, Universitat Aut\`{o}noma de Barcelona, 08193 Bellaterra (Barcelona), Spain\\ $^{3}$Department of Computer Science, The University of Hong Kong, Pokfulam Road, Hong Kong } \begin{abstract} We introduce the problem of unsupervised classification of quantum data, namely, of systems whose quantum states are unknown. We derive the optimal single-shot protocol for the binary case, where the states in a disordered input array are of two types. Our protocol is universal and able to automatically sort the input under minimal assumptions, yet partially preserving information contained in the states. We quantify analytically its performance for arbitrary size and dimension of the data. We contrast it with the performance of its classical counterpart, which clusters data that has been sampled from two unknown probability distributions. We find that the quantum protocol fully exploits the dimensionality of the quantum data to achieve a much higher performance, provided data is at least three-dimensional. \blue{For the sake of comparison, we discuss the optimal protocol when the classical and quantum states are known.} \end{abstract} \maketitle \section{Introduction} Quantum-based communication and computation technologies promise unprecedented applications and unforeseen speed-ups for certain classes of computational problems.
Originally, the advantages of quantum computing were showcased through instances of problems that are hard to solve on a classical computer, such as integer factorization~\cite{Shor1998}, unstructured search~\cite{Grover1997}, discrete optimization~\cite{Finnila1994,Kadowaki1998}, and simulation of many-body Hamiltonian dynamics~\cite{Lloyd1996}. In recent times, the field has ventured one step further: quantum computers are now also envisioned as nodes in a network of quantum devices, where connections are established via quantum channels, and data are quantum systems that flow through the network~\cite{Kimble2008,Wehner2018}. The design of future quantum networks in turn brings up new theoretical challenges, such as devising universal information processing protocols optimized to work with generic quantum inputs, without the need for human intervention. Quantum learning algorithms are by design well suited for this class of automated tasks~\cite{Dunjko2017}. Generalizing classical machine learning ideas to operate with quantum data, some algorithms have been devised for quantum template matching~\cite{Sasaki2002}, quantum anomaly detection~\cite{Liu2017,Skotiniotis2018}, learning unitary transformations~\cite{Bisio2010} and quantum measurements~\cite{Bisio2011a}, and classifying quantum states~\cite{Guta2010,Sentis2012a,Sentis2014a,Fanizza2018}. These works fall under the broad category of \emph{supervised} learning~\cite{Hastie2001,Devroye2013}, where the aim is to learn an unknown conditional probability distribution ${\rm Pr}(y|x)$ from a number of given samples $x_i$ and associated values or labels $y_i$, called \emph{training} instances. The performance of a trained learning algorithm is then evaluated by applying the learned function to new data $x'_i$, called \emph{test} instances.
In the quantum extension of supervised learning~\cite{Monras2017}, the training instances are quantum---say, copies of the quantum state templates, or a potential anomalous state, or a number of uses of an unknown unitary transformation. The separation between training and testing steps is sometimes not as sharp: in reinforcement learning, training occurs on an instance basis via the interaction of an agent with an environment, and the learning process itself may alter the underlying probability distribution~\cite{Dunjko2016}. In contrast, \emph{unsupervised} learning aims at inferring structure in an unknown distribution ${\rm Pr}(x)$ given random, unlabeled samples $x_i$. Typically, this is done by grouping the samples in \emph{clusters}, according to a preset definition of similarity. Unsupervised learning is a versatile form of learning, attractive in scenarios where appropriately labeled training data is not available or too costly. But it is also---generically---a much more challenging problem \cite{Aloise2009,Ben-David2015}. To our knowledge, a quantum extension of unsupervised learning in the sense described above has not yet been considered in the literature. \begin{figure}[t] \includegraphics[scale=.6]{fig1_mockup.pdf} \caption{Pictorial representation of the clustering device for an input of eight quantum states. States of the same type have the same color. States are clustered according to their type by performing a suitable collective measurement, which also provides a classical description of the clustering.}\label{fig:scheme} \end{figure} In this paper, we take a first step into this branch of quantum learning by introducing the problem of unsupervised binary classification of quantum states.
We consider the following scenario: a source prepares quantum systems in two possible pure states that are completely unknown; after some time, $N$ such systems have been produced and we ask ourselves whether there exists a quantum device that is able to cluster them in two groups according to their states (see Fig.~\ref{fig:scheme}). \blue{This scenario represents a quantum clustering task in its simplest form, where the single feature defining a cluster of quantum systems is that their states are identical. While clustering classical data under this definition of cluster---a set of equal data instances---yields a trivial algorithm, merely observing such a simple feature in a finite quantum data set involves a nontrivial stochastic process and gives rise to a primitive of operational relevance for quantum information. Moreover, in some sense our scenario actually contains a classical binary clustering problem: if we were to measure each quantum system separately, we would obtain a set of $N$ data points (the measurement outcomes). The points would be effectively sampled from the two probability distributions determined by the quantum states and the choice of measurement. The task would then be to identify which points were sampled from the same distribution. Reciprocally, we can interpret our quantum clustering task as a natural extension of a classical clustering problem with completely unstructured data, where the only feature that identifies a cluster is that the data points are sampled from a fixed, but arbitrary, categorical probability distribution (i.e., with neither order nor metric in the underlying space).
The quantum generalization is then to consider (non-commuting) quantum states instead of probability distributions.} We require two important features in our quantum clustering device: (i) it has to be universal, that is, it should be designed to take any possible pair of types of input states, and (ii) it has to provide a classical description of the clustering, that is, of which particles belong to each cluster. Feature (i) ensures general-purpose use and versatility of the clustering device, in a similar spirit to programmable quantum processors~\cite{Buzek2006}. Feature (ii) allows us to assess the performance of the device purely in terms of the accuracy of the clustering, which in turn facilitates the comparison with classical clustering strategies. Also due to (ii), we can justifiably say that the device has not only performed the clustering task but also ``learned'' that the input is (most likely) partitioned as specified by the output description. Note that relaxing feature (ii) in principle opens the door to a more general class of \emph{sorting} quantum devices, where the goal could be, e.g., to minimize the distance (under some norm) between the global output state and the state corresponding to perfect clustering of the input. Such devices, however, fall beyond the scope of unsupervised learning. Requiring the description of the clusters as a classical outcome induces structure in the device. To generate this information, a quantum measurement shall be performed over all $N$ systems, with as many outcomes as there are possible clusterings. Then, the systems will be sorted according to this outcome (see Fig.~\ref{fig:scheme}). Depending on the context, e.g., on whether or not the systems will be further used after the clustering, different figures of merit shall be considered in the optimization of the device.
In this paper we focus on the clustering part: our goal is to find the quantum measurement that maximizes the success probability of a correct clustering. \blue{ Features (i) and (ii) allow us to formally regard quantum clustering as a state discrimination task~\cite{Helstrom1976,Barnett2001,Chiribella2004,Chiribella2006a,Audenaert2007,Krovi2015a}, albeit with important differences with respect to the standard setting. In quantum state discrimination~\cite{Helstrom1976}, we want to determine the state of a quantum system among a set of \emph{known} hypotheses (i.e., classical descriptions of quantum states). We can phrase this problem in machine learning terminology as follows. We have a test state (or several copies of it~\cite{Audenaert2007}) and we decide its label based on \emph{infinite training} data. In other words, we have full knowledge about the meaning of the possible labels. Supervised quantum learning algorithms for quantum state classification~\cite{Guta2010,Sentis2012a,Sentis2014a,Fanizza2018} consider the intermediate scenario with \emph{limited training} data. In this case, no description of the states is available. Instead, we are provided with a finite number of copies of systems in each of the possible quantum states, and thus we have only partial classical knowledge about the labels. Extracting the label information from the quantum training data then becomes a key step in the protocol. Following this line of thought, the problem we consider in this paper is a type of unsupervised learning, that is, one with \emph{no training}. There is no information whatsoever about what state each label represents. } We obtain analytical expressions for the performance of the optimal clustering protocol for arbitrary values of the local dimension $d$ of the systems, both for a finite number of systems $N$ and in the asymptotic limit of many systems.
We show that, in spite of the fact that the number of possible clusterings grows exponentially with $N$, the success probability decays only as $O(1/N^2)$. Furthermore, we contrast these results with an optimal clustering algorithm designed for the classical version of the task. We observe a striking phenomenon when analyzing the performance of the two protocols for $d>2$: whereas increasing the local dimension has a rapid negative impact on the success probability of the classical protocol (clustering becomes, naturally, harder), it turns out to be beneficial for its quantum counterpart. We also see, through numerical analysis, that the quantum measurement that maximizes the success probability is also optimal for a more general class of cost functions that are more natural for clustering problems, including the Hamming distance. In other words, this provides evidence that our entire analysis does not depend strongly on the chosen figure of merit, but rather on the structure of the problem itself. Measuring the systems will in principle degrade the information encoded in their states; hence, intuitively, there should be a trade-off between how good a clustering is and how much information about the original states is left in the clusters. Remarkably, our analysis reveals that the measurement that clusters optimally actually preserves information regarding the type of states that form each cluster. \blue{This feature adds to the usability of our device as a universal quantum data sorting processor. It can be regarded as the quantum analogue of a sorting network (or sorting memory)~\cite{Knuth1998}, used as a fixed network architecture that automatically orders generic inputs coming from an aggregated data pipeline.} The details of this second step are, however, left for a subsequent publication. The paper is organized as follows. In Section~\ref{sec:the_task}, we formalize the problem and derive the optimal clustering protocol and its performance.
In Section~\ref{sec:classical}, we consider a classical clustering protocol and contrast it with the optimal one. \blue{We present the proofs of the main results of our work and the necessary theoretical tools to derive them in Section~\ref{sec:methods}.} We end in Section~\ref{sec:discussion} discussing the features of our quantum clustering device and other cost functions, and giving an outlook on future extensions. \section{Clustering quantum states}\label{sec:the_task} Let us suppose that a source prepares quantum systems randomly in one of two pure $d$-dimensional states~$\ket{\phi_0}$ and~$\ket{\phi_1}$ with equal prior probabilities. Given a sequence of $N$ systems produced by the source, and with no knowledge of the states~$\ket{\phi_{0/1}}$, we are required to assign labels~`0' or~`1' to each of the systems. The labeling can be achieved via a generalized quantum measurement that tries to distinguish among all the possible global states of the $N$ systems. Each outcome of the measurement will then be associated to a possible label assignment, that is, to a \emph{clustering}. Consider the case of four systems. All possible clusterings that we may arrange are depicted in Fig.~\ref{fig:N4} as strings of red and blue balls. Since the individual states of the systems are unknown, what is labeled as ``red'' or ``blue'' is arbitrary, thus interchanging the labels leads to an equivalent clustering. For arbitrary $N$, there will be~$2^{N-1}$ such clusterings. Fig.~\ref{fig:N4} also illustrates a natural way to label each clustering as $(n,\sigma)$. The index~$n$ counts the number of systems in the smallest cluster. The index $\sigma$ is a permutation that brings a \emph{reference} clustering, defined as that in which the systems belonging to the smallest cluster all fall on the right, into the desired form.
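As a quick sanity check on this counting (a sketch of ours, not part of the paper), one can enumerate the clusterings directly: binary strings modulo a global label swap, grouped by the size $n$ of the smallest cluster.

```python
from collections import Counter
from itertools import product

def clusterings(N):
    """Enumerate the clusterings of N systems: binary strings modulo a
    global label swap (a string and its complement define the same
    clustering), so there are 2**(N - 1) of them."""
    reps = set()
    for bits in product((0, 1), repeat=N):
        comp = tuple(1 - b for b in bits)
        reps.add(min(bits, comp))  # canonical representative
    return sorted(reps)

reps = clusterings(4)
print(len(reps))  # 8 = 2**3 clusterings for N = 4, as in Fig. 2
# Sizes n of the smallest cluster: one clustering with n = 0,
# four with n = 1, and three with n = 2.
print(Counter(min(sum(r), 4 - sum(r)) for r in reps))
```

For $N=4$ this recovers the $1+4+3=8$ clusterings shown in Fig.~\ref{fig:N4}.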
To make this labeling unambiguous, $\sigma$ is chosen from a restricted set ${\mathscr S}_n\subset S_N$, where $S_N$ stands for the permutation group of $N$ elements and $e$ denotes its identity element. We will see that the optimal clustering procedure consists in measuring first the value of $n$, and, depending on the outcome, performing a second measurement that identifies $\sigma$ among the relevant permutations with a fixed $n$. Thus, unsupervised clustering has been cast as a multi-hypothesis discrimination problem, which can be solved for an arbitrary number of systems $N$ with local dimension $d$. Below, we outline the derivation of our main result: the expression of the maximum average success probability achievable by a quantum clustering protocol. In the limit of large $N$ \blue{and for arbitrary $d$ (not necessarily constant with $N$)}, we show that this probability behaves as\footnote{\blue{The symbol $\sim$ stands for ``asymptotically equivalent to'', as in~\citep{Abramowitz1965}.}} \begin{equation}\label{ps_asym} P_{\rm s} \sim \frac{8(d-1)}{\left(2d+N\right) N}\,. \end{equation} Naturally, $P_{\rm s}$ goes to zero with $N$, since the total number of clusterings increases exponentially and it becomes much harder to discriminate among them. What may perhaps come as a surprise is that, despite this exponential growth, the scaling of $P_{\rm s}$ is only of order $O(1/N^2)$.\footnote{\blue{It is also interesting to see how far one can improve this result. By letting $d$ scale with $N$, e.g., by substituting $d\sim s N^\gamma$ for some $s>0$, $\gamma>1$ in Eq.~\eqref{ps_asym}, we obtain the absolute maximum $P_{\rm s} \sim 4/N$.}} Furthermore, increasing the local dimension yields a linear improvement in the asymptotic success probability.
As we will later see, whereas the asymptotic behavior in $N$ is not an exclusive feature of the optimal quantum protocol---we observe the same scaling in its classical counterpart, albeit only when $d=2$---the ability to exploit extra dimensions to enhance distinguishability is. \begin{figure}[t] \includegraphics[scale=.35]{fig2_N4examples2.pdf} \caption{All possible clusterings of $N=4$ systems when each can be in one of two possible states, depicted as blue and red. The pair of indices $(n,\sigma)$ identifies each clustering, where $n$ is the size of the smallest cluster, and~$\sigma$ is a permutation of the {\em reference} clusterings (those on top of each box), wherein the smallest cluster falls on the right. The symbol $e$ denotes the identity permutation, and $(ij)$ the transposition of systems in positions $i$ and $j$. Note that the choice of $\sigma$ is not unique.}\label{fig:N4} \end{figure} Let us present an outlined derivation of the optimal quantum clustering protocol. Each input can be described by a string of 0's and 1's, ${\bf x}=(x_1\cdots x_N)$, so that the global state of the systems entering the device is $\ket{\Phi_{\bf x}} = \ket{\phi_{x_1}}\otimes\ket{\phi_{x_2}}\otimes\cdots\otimes \ket{\phi_{x_N}}$. The clustering device can generically be defined by a positive operator valued measure (POVM) with elements $\{E_{\bf x}\}$, fulfilling $E_{\bf x}\ge 0$ and $\sum_{\bf x} E_{\bf x} = \openone$, where each operator~$E_{\bf x}$ is associated to the statement ``the measured global state corresponds to the string ${\bf x}$''.
We want to find a POVM that maximizes the average success probability $P_{\rm s}=2^{1-N}\int d\phi_0\, d\phi_1 \sum_{\bf x} {\rm tr}\,(\ketbrad{\Phi_{\bf x}} E_{\bf x})$, where we assumed that each clustering is equally likely at the input, and we are averaging over all possible pairs of states $\{\ket{\phi_{0}},\ket{\phi_{1}}\}$ and strings ${\bf x}$. Since our goal is to design a universal clustering protocol, the operators $E_{\bf x}$ cannot depend on $\ket{\phi_{0,1}}$, and we can take the integral inside the trace. The clustering problem can then be regarded as the optimization of a POVM that distinguishes between effective density operators of the form \begin{equation}\label{rhox} \rho_{{\bf x}}= \int d\phi_0 \, d\phi_1 \ketbrad{\Phi_{\bf x}} \,. \end{equation} It now becomes apparent that $\rho_{\bf x}=\rho_{\bar {\bf x}}$, where $\bar {\bf x}$ is the complementary string of ${\bf x}$ (i.e., the values 0 and 1 are exchanged). The key that reveals the structure of the problem and allows us to deduce the optimal clustering protocol resides in computing the integral in Eq.~\eqref{rhox}. Averaging over the states leaves out only the information relevant to identify a clustering, that is, $n$ and $\sigma$. Indeed, identifying ${\bf x}\equiv(n,\sigma)$, we can rewrite $\rho_{\bf x}$ as \begin{align}\label{rho_ns} \rho_{n,\sigma} &= c_n \, U_\sigma \, (\openone^{\rm sym}_n\otimes\openone^{\rm sym}_{N-n} )\,U_\sigma^\dagger \nonumber\\ &= c_n \bigoplus_\lambda \openone_{(\lambda)} \otimes \Omega_{\{\lambda\}}^{n,\sigma}\,.
\end{align} By applying Schur's lemma, one readily obtains the first line, where $\openone^{\rm sym}_k$ is a projector onto the completely symmetric subspace of $k$ systems, $c_n$ is a normalization factor, and $U_\sigma$ is a unitary matrix representation of $\sigma$. The second line follows from using the Schur basis (see Section~\ref{app:optimality}), in which the states $\rho_{n,\sigma}$ are block-diagonal. Here $\lambda$ labels the irreducible representations---irreps for short---of the joint action of the groups ${\rm SU}(d)$ and $S_N$ over the vector space $(d,\mathbb C)^{\otimes N}$, and is usually identified with the shape of Young diagrams (or partitions of~$N$). A pair of parentheses, $()$ [brackets, $\{\}$], surrounding the subscript $\lambda$, e.g., in Eq.~\eqref{rho_ns}, is used when $\lambda$ refers exclusively to irreps of ${\rm SU}(d)$ [$S_N$]; we stick to this convention throughout the paper. Note that averaging over all ${\rm SU}(d)$ transformations erases the information contained in the representation subspace~$(\lambda)$. It also follows from Eq.~\eqref{rho_ns} and the rules of the Clebsch--Gordan decomposition that (i)~only two-row Young diagrams (partitions of length two) show up in the direct sum above, and (ii)~the operators $\Omega_{\{\lambda\}}^{n,\sigma}$ are rank-1 projectors (see Appendix~\ref{app:irreps}). They carry all the information relevant for the clustering, and are understood to be zero for irreps $\lambda$ outside the support of~$\rho_{n,\sigma}$. With Eq.~\eqref{rho_ns} at hand, the optimal clustering protocol can be succinctly described as two successive measurements---we state the result here and present an optimality proof in Section~\ref{app:optimality}.
The first measurement is a projection onto the irrep subspaces $\lambda$, described by the set $\{\openone_{(\lambda)} \otimes \openone_{\{\lambda\}}\}$. The outcome of this measurement provides an estimate of~$n$, as $\lambda$ is one-to-one related to the size of the clusters. More precisely, we have from~(i) that $\lambda=(\lambda_1,\lambda_2)$, where $\lambda_1$ and $\lambda_2$ are nonnegative integers such that $\lambda_1+\lambda_2=N$ and~$\lambda_1\ge \lambda_2$. Then, given the outcome $\lambda=(\lambda_1,\lambda_2)$ of this first measurement, the optimal guess turns out to be $n=\lambda_2$. Very roughly speaking, the ``asymmetry'' in the subspace $\lambda=(\lambda_1,\lambda_2)$ increases with $\lambda_2$. We recall that $\lambda=(N,0)$ is the fully symmetric subspace of $(d,\mathbb{C})^{\otimes N}$. Naturally, $\rho_{0,\sigma}$ has support only in this subspace, as all states in the data are of one type. As $\lambda_2$ increases from zero, more states of the alternative type are necessary to achieve the increasing asymmetry of $\lambda=(\lambda_1,\lambda_2)$. Hence, for a given $\lambda_2$, there is a minimum value of $n$ for which~$\rho_{n,\sigma}$ can have support in the subspace $\lambda=(\lambda_1,\lambda_2)$. This minimum $n$ is the optimal guess. Once we have obtained a particular $\lambda\!=\!\lambda^*$ as an outcome (and guessed $n$), a second measurement is performed over the subspace $\{\lambda^*\}$ to produce a guess for~$\sigma$. Since the states $\rho_{n,\sigma}$ are covariant under~$S_N$, the optimal measurement to guess the permutation~$\sigma$ is also covariant, and its seed is the rank-1 operator $\Omega_{\{\lambda^*\}}^{n, e}$, where $\lambda^*= (N-n,n)$.
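The correspondence between measurement outcomes and cluster-size guesses is simple enough to spell out in a few lines of Python (our illustration; the rule $n=\lambda_2$ is the one stated above):

```python
def two_row_partitions(N):
    """Two-row Young diagrams (lam1, lam2) with lam1 + lam2 = N and
    lam1 >= lam2 >= 0 -- the only irreps appearing in rho_{n, sigma}."""
    return [(N - n, n) for n in range(N // 2 + 1)]

def guess_smallest_cluster(lam):
    """Optimal guess after projecting onto irrep lam = (lam1, lam2):
    the smallest cluster has n = lam2 systems."""
    return lam[1]

outcomes = two_row_partitions(6)
print(outcomes)  # [(6, 0), (5, 1), (4, 2), (3, 3)]
print([guess_smallest_cluster(lam) for lam in outcomes])  # [0, 1, 2, 3]
```

Note that the possible guesses exactly cover $n=0,\ldots,\floor{N/2}$, one per outcome of the first measurement.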
Put together, these two successive measurements yield a joint optimal POVM whose elements take the form \begin{equation}\label{povm_elements_main} E_{n,\sigma} = \xi_{\lambda^*}^{n} (\openone_{(\lambda^*)} \otimes \Omega_{\{\lambda^*\}}^{n,\sigma})\,, \end{equation} where $(n,\sigma)$ is the guess for the cluster and $\xi_{\lambda^*}^n$ is some coefficient that guarantees the POVM condition \mbox{$\sum_{n,\sigma} E_{n,\sigma}=\openone$}. The success probability of the optimal protocol can be computed as~$P_{\rm s}=2^{1-N}\sum_{n,\sigma} {\rm tr}\,(\rho_{n,\sigma} E_{n,\sigma})$ (see Section~\ref{app:optimality}). It reads \begin{align} P_{\rm s} &= 2^{1-N} \sum_{i=0}^{\floor{N/2}} \binom{N}{i} \frac{(d-1)(N-1-2i)^2}{(N-1+d-i)(i+1)^2} \,, \label{ps} \end{align} from which the asymptotic limit Eq.~\eqref{ps_asym} follows (see Appendix~\ref{app:asymptotics}). \blue{ Before closing this section we would like to briefly discuss the case when some information about the possible states $\ket{\phi_0}$ and $\ket{\phi_1}$ is available. A clustering device that incorporates this information into its design should succeed with a probability higher than Eq.~\eqref{ps}, at the cost of universality. To explore the extent of this performance enhancement, we study the extreme case where we have full knowledge of the states $\ket{\phi_0}$ and $\ket{\phi_1}$. We find that in the large-$N$ limit the maximum improvement is by a factor of $N$. The optimal success probability scales as \begin{equation}\label{ps_known} P_{\rm s} \sim \frac{4(d-1)}{N} \end{equation} (see Section~\ref{sec:quantumknown} for details).
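Both Eq.~\eqref{ps} and its asymptotic form Eq.~\eqref{ps_asym} are easy to evaluate numerically; the short Python sketch below (ours, not part of the paper) implements them. For $N=3$, $d=2$, only the $i=0$ term contributes, giving $P_{\rm s}=1/4$, which can be checked by hand.

```python
from math import comb

def p_success(N, d):
    """Exact optimal success probability, Eq. (ps)."""
    total = sum(
        comb(N, i) * (d - 1) * (N - 1 - 2 * i) ** 2
        / ((N - 1 + d - i) * (i + 1) ** 2)
        for i in range(N // 2 + 1)
    )
    return 2 ** (1 - N) * total

def p_success_asym(N, d):
    """Large-N behavior, Eq. (ps_asym)."""
    return 8 * (d - 1) / ((2 * d + N) * N)

print(p_success(3, 2))  # 0.25: only the i = 0 term contributes
print(p_success(400, 2), p_success_asym(400, 2))  # close for large N
```

The two expressions agree to within a few percent already at $N$ of a few hundred.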
} \section{Clustering classical states}\label{sec:classical} To grasp the significance of our quantum clustering protocol, a comparison with a classical analogue is called for. First, in the place of a quantum system whose state is either $\ket{\phi_0}$ or $\ket{\phi_1}$, an input would be an instance of a $d$-dimensional random variable sampled from either one of two categorical probability distributions, $P=\{p_s\}_{s=1}^d$ and $Q=\{q_s\}_{s=1}^d$. Then, given a string of samples ${\bf s}=(s_1\cdots s_N)$, $s_i\in\{1,\ldots,d\}$, the clustering task would consist in grouping the data points $s_i$ in two clusters so that all points in a cluster have a common underlying probability distribution. Second, in analogy with the quantum protocol, our goal would be to find the optimal universal (i.e., independent of $P$ and $Q$) protocol that performs this task. Here, optimality means attaining the maximum average success probability, where the average is over all $N$-length sequences ${\bf x}$ of distributions $P$ and $Q$ from which the string ${\bf s}$ is sampled, and over all such distributions. It should be emphasized that this is a very hard classical clustering problem, with absolute minimal assumptions, where there is no metric in the domain of the random variables and, in consequence, no exploitable notion of distance. Therefore, \blue{one should expect} the optimal algorithm to have a rather low performance and to differ significantly from well-known algorithms for classical unsupervised classification problems. As a further remark, we note that a choice of prior is required to perform the average over $P$ and $Q$. We will assume that the two are uniformly distributed over the simplex on which they are both defined. This reflects our lack of knowledge about the distributions underlying the string of samples ${\bf s}$.
\blue{Under all these specifications, the classical clustering problem we just defined naturally connects with the quantum scenario in Section~\ref{sec:the_task} as follows. We can interpret ${\bf s}$ as a string of outcomes obtained upon performing the same projective measurement on each individual quantum state $|\phi_{x_i}\rangle$ of our original problem. Furthermore, such local measurements can also be interpreted as a decoherence process affecting the pure quantum states at the input, whereby they decay into classical probability distributions over a fixed basis.} We might think of this as the semiclassical analogue of our original problem, since quantum resources are not fully exploited. Let us first lay out the problem in the special case of $d=2$, where the underlying distributions are Bernoulli, and we can write $P=\{p,1-p\}$, $Q=\{q,1-q\}$. Given an $N$-length string of samples~${\bf s}$, our intuition tells us that the best we can do is to assign the same underlying probability distribution to equal values in~${\bf s}$. So if, e.g., ${\bf s}=(00101\cdots)$, we will guess that the underlying sequence of distributions is $\hat{\bf x}=(PPQPQ\cdots)$ [or, equivalently, the complementary sequence $\hat{\bf x}=(QQPQP\cdots)$]. Thus, data points will be clustered according to their value 0 or 1. The optimality of this guessing rule is a particular case of the result for $d$-dimensional random variables in Appendix~\ref{app:classic}. The probability that a string of samples ${\bf s}$, with $l$ zeros and $N-l$ ones, arises from the guessed sequence $\hat{\bf x}$ is given by \begin{equation} {\rm Pr}({\bf s}|{\bf x}=\hat{\bf x}) = \int_0^1\! dp \int_0^1\! dq\, p^l q^{N-l} = \frac{1}{(l+1)(N-l+1)} \,.
\end{equation} The average success probability can then be readily computed as $P_{\rm s}^{\rm cl}=2 \sum_{{\bf x},{\bf s}} \delta_{{\bf x},\hat{\bf x}}\,{\rm Pr}({\bf x})\,{\rm Pr}({\bf s}|{\bf x})$ (recall that $\hat {\bf x}$ depends on ${\bf s}$), where ${\rm Pr}({\bf x})=2^{-N}$ is the prior probability of the sequence ${\bf x}$, which we assume to be uniform. The factor $2$ takes into account that guessing the complementary sequence leads to the same clustering. It is now quite straightforward to derive the asymptotic expression of $P_{\rm s}^{\rm cl}$ for large $N$. In this limit~${\bf x}$ will typically have the same number of $P$ and $Q$ distributions, so the guess $\hat{\bf x}$ will be right if $l=N/2$. Then, \begin{equation} P_{\rm s}^{\rm cl} \sim 2 \frac{1}{(N/2+1)^2} \sim \frac{8}{N^2} \,. \end{equation} This expression coincides with the quantum asymptotic result in Eq.~\eqref{ps_asym} for $d=2$. As we will now see, this is, however, a particularity of Bernoulli distributions. The derivation for $d>2$ is more involved, since the optimal guessing rule is not so obvious (see Appendix~\ref{app:classic} for details). Loosely speaking, we should still assign samples with the same value to the same cluster. By doing so, we obtain up to $d$ preliminary clusters. We next merge them into two clusters in such a way that their final sizes are as balanced as possible. This last step, known as the {\em partition problem}~\cite{Korf1998}, is weakly NP-complete. Namely, \blue{its complexity is polynomial in the magnitudes of the data involved (the sizes of the preliminary clusters, which depend on $N$) but non-polynomial in the input size (the number of such clusters, determined by $d$).} This means that the classical and semiclassical protocols cannot be implemented efficiently for arbitrary~$d$.
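The merging step just described can be sketched with the textbook pseudo-polynomial subset-sum dynamic program (our illustration, not the paper's implementation): given the sizes of the up-to-$d$ preliminary clusters, it finds the most balanced two-way split.

```python
def balanced_merge(sizes):
    """Merge preliminary clusters (given by their sizes) into two final
    clusters whose totals are as balanced as possible.  Uses the
    standard subset-sum dynamic program: time polynomial in sum(sizes),
    although the partition problem itself is NP-complete in general."""
    total = sum(sizes)
    reachable = {0}  # achievable subset sums
    for s in sizes:
        reachable |= {r + s for r in reachable}
    best = max(r for r in reachable if r <= total // 2)
    return best, total - best  # the two final cluster sizes

print(balanced_merge([5, 4, 3, 2]))  # (7, 7): a perfect split exists
print(balanced_merge([3, 3, 2]))     # (3, 5): best achievable balance
```

The pseudo-polynomial running time mirrors the weak NP-completeness noted above: fast when the cluster sizes (set by $N$) are moderate, regardless of how the subset sums are arranged.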
In the asymptotic limit of large $N$, and for arbitrary fixed values of $d$, we obtain \begin{equation}\label{ps_asym_cl} P_{\rm s}^{\rm cl} \sim \left(\frac{2}{N}\right)^d \frac{(2d-2)!}{(d-2)!} \,. \end{equation} There is a huge difference between this result and Eq.~\eqref{ps_asym}. Whereas increasing the local dimension provides an asymptotic linear advantage in the optimal quantum clustering protocol---states become more orthogonal---it has the opposite effect in its classical and semiclassical analogues, as it reduces the success probability exponentially. In the opposite regime, i.e., for $d$ asymptotically large and fixed values of~$N$, the optimal classical and semiclassical strategies provide no improvement over random guessing, and the clustering tasks become exceedingly hard and somewhat uninteresting. This follows from observing that the guessing rule relies on grouping repeated data values. In this regime, the typical string of samples~${\bf s}$ has no repeated elements, thus we are left with no alternative but to randomly guess the right clustering of the data, and $P_{\rm s}^{\rm cl} \sim 2^{1-N}$. \blue{ To complete the picture, we close this section by considering known classical probability distributions. Akin to the quantum case, one would expect an increase in the success probability of clustering. An immediate consequence of knowing the distributions $P$ and $Q$ is that the rule for assigning a clustering given a string of samples~${\bf s}$ becomes trivial. Each symbol $s_i\in\{1,\ldots,d\}$ will be assigned to the most likely distribution, that is, to $P$ ($Q$) if $p_{s_i} > q_{s_i}$ ($p_{s_i} < q_{s_i}$). It is clear that knowing $P$ and~$Q$ helps to better classify the data. This becomes apparent by considering the example of two three-dimensional distributions and the data string ${\bf s}=(112)$.
If the distributions are unknown, such a sequence leads to the guess $\hat{\bf x}=(PPQ)$ [or equivalently to $\hat{\bf x}=(QQP)$]. In contrast, if $P$ and $Q$ are known and, e.g., $p_1>q_1$ and $p_2>q_2$, the same sequence leads to the better guess $\hat{\bf x}=(PPP)$. The advantage of knowing the distributions, however, vanishes in the large-$N$ limit, and the asymptotic performance of the optimal clustering algorithm is shown to be given by Eq.~\eqref{ps_asym_cl}. The interested reader can find the details of the proof in Appendix~\ref{app:classic_known}. } \blue{ \section{Methods}\label{sec:methods} Here we give the full proof of optimality of our quantum clustering protocol/device, which leads to our main result in Eq.~\eqref{ps_asym}. The proof relies on representation theory of the special unitary and the symmetric groups. In particular, Schur--Weyl duality is used to efficiently represent the structure of the input quantum data and the action of the device. We then leverage this structure to find the optimal POVM and compute the minimum cost. Basic notions of representation theory that we use in the proof are covered in Appendices~\ref{app:partitions} and \ref{app:irreps}. We close the Methods section by proving Eq.~\eqref{ps_known} for the optimal success probability of clustering known quantum states. \subsection{Clustering quantum states: unknown input states}\label{app:optimality} In this Section we obtain the optimal POVM for quantum clustering and compute the minimum cost. First, we present a formal optimality proof for an arbitrary cost function $f({\bf x},{\bf x}')$, which specifies the penalty for guessing ${\bf x}$ if the input is ${\bf x}'$. Second, we particularize to the case of the success probability, as discussed in the main text, for which explicit expressions are obtained.
\subsubsection{Generic cost functions} We say a POVM is optimal if it minimizes the average cost \begin{equation}\label{app:av_cost} {\bar f} = \int d\phi_0 \,d\phi_1 \sum_{{\bf x},\hat{\bf x}} \eta_{{\bf x}} \, f({\bf x},\hat{\bf x}) \, {\rm Pr}(\hat{\bf x}|{\bf x}) \,, \end{equation} where $\eta_{{\bf x}}$ is the prior probability of the input string ${\bf x}$, and ${\rm Pr}(\hat{\bf x}|{\bf x})={\rm tr}\, (\ketbrad{\Phi_{{\bf x}}} E_{\hat{\bf x}})$ is the probability of obtaining the measurement outcome (and guess) $\hat{\bf x}$ given the input ${\bf x}$; recall that $\ket{\Phi_{\bf x}} = \ket{\phi_{x_1}}\otimes\ket{\phi_{x_2}}\otimes\cdots\otimes \ket{\phi_{x_N}}$, $x_k=0,1$, and an average is taken over all possible pairs of states $\{\ket{\phi_{0}},\ket{\phi_{1}}\}$, hence ${\bf x}$ and its complementary $\bar{\bf x}$ define the same clustering. A convenient way to identify the different clusterings is by counting the number $n$, $0\leq n\leq \floor{N/2}$, of zeros in~${\bf x}$ (so, strings with more 0s than 1s are discarded) and giving a unique representative $\sigma$ of the equivalence class of permutations that turn the reference string $(0^n1^{\bar n})$, ${\bar n}=N-n$, into ${\bf x}$. We will denote the subset of these representatives by ${\mathscr S}_n\subset S_N$, and the number of elements in each equivalence class by~$b_n$. A simple calculation gives $b_n=2(n!)^2$ if $n=\bar n$, and $b_n=n!\,\bar n!$ otherwise.
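As a quick consistency check (our sketch, not part of the proof), the class sizes $b_n$ reproduce the total count of clusterings: the number of representatives is $N!/b_n$, and summed over $n$ this must give $2^{N-1}$.

```python
from math import factorial

def b(n, N):
    """Size of the equivalence class of permutations mapping the
    reference string 0^n 1^(N-n) to a fixed clustering string."""
    nbar = N - n
    if n == nbar:
        return 2 * factorial(n) ** 2
    return factorial(n) * factorial(nbar)

# N!/b_n counts the representatives in the set S_n of the text;
# summed over n it must recover the 2^(N-1) clusterings.
for N in range(2, 9):
    count = sum(factorial(N) // b(n, N) for n in range(N // 2 + 1))
    assert count == 2 ** (N - 1)
print("sum_n N!/b_n = 2^(N-1) holds for N = 2,...,8")
```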
As discussed in the main text, the clustering problem above is equivalent to a multi-hypothesis discrimination problem, where the hypotheses are given by \begin{align}\label{app:rho_ns} \rho_{{\bf x}} &= \int d\phi_0 \, d\phi_1 \ketbrad{\Phi_{\bf x}} \nonumber\\ &= c_n \, U_\sigma \, (\openone^{\rm sym}_n\otimes\openone^{\rm sym}_{\bar n} )\,U_\sigma^\dagger \,, \end{align} and we have used Schur's lemma to compute the integral. Here,~$U_\sigma$ is a unitary matrix representation of the permutation~$\sigma$, $\openone^{\rm sym}_k$ is a projector onto the completely symmetric subspace of $k$ systems, and $c_n= 1/(D^{\rm sym}_n D^{\rm sym}_{\bar n})$, where~$D^{\rm sym}_k=s_{(k,0)}$ [see Eq.~(\ref{mult_s})] is the dimension of the symmetric subspace of~$k$ qudits. The states~\eqref{app:rho_ns} are block-diagonal in the Schur basis, which decouples the commuting actions of the groups ${\rm SU}(d)$ and $S_N$ over product states of the form of~$|\Phi_{{\bf x}}\rangle$. More precisely, Schur--Weyl duality states that the representations of the two groups acting on the common space $(d,\mathbb{C})^{\otimes N}$ are each other's commutant. Moreover, it provides a decomposition of this space into decoupled subspaces associated to irreducible representations (irreps) of both ${\rm SU}(d)$ and $S_N$. We can then express the states~$\rho_{{\bf x}}$, where ${\bf x}$ is specified as $(n,\sigma)$ [${\bf x}=(n,\sigma)$ for short], in the Schur basis as \begin{equation}\label{app:rho_block} \rho_{n,\sigma} = c_n \bigoplus_\lambda \openone_{(\lambda)} \otimes \Omega_{\{\lambda\}}^{n,\sigma} \,.
\end{equation} In this direct sum, $\lambda$ is a label attached to the irreps of the joint action of ${\rm SU}(d)$ and $S_N$ and is usually identified with a partition of~$N$ or, equivalently, a Young diagram. As explained in the main text, a pair of parentheses surrounding this type of label, as in $(\lambda)$, means that it refers specifically to irreps of ${\rm SU}(d)$. Likewise, a pair of brackets, e.g., $\{\lambda\}$, indicates that the label refers to irreps of~$S_N$. In accordance with this convention, Schur-Weyl duality implies that $\Omega_{\{\lambda\}}^{n,\sigma}=U_\sigma^\lambda \,\Omega_{\{\lambda\}}^{n,e}\, (U_\sigma^\lambda)^\dagger$, where~$U_\sigma^\lambda$ is the matrix of the irrep~$\lambda$ that represents $\sigma\in S_N$, and $e$ denotes the identity permutation (for simplicity, we omit the index $e$ when no confusion arises). In other words, the family of states~$\rho_{n,\sigma}$ is covariant with respect to $S_N$. One can easily check that $\Omega_{\{\lambda\}}^{n,\sigma}$ is always a rank-1 projector (see Appendix~\ref{app:irreps}). In Eq.~\eqref{app:rho_block} it is understood that $\Omega_{\{\lambda\}}^{n,\sigma}=0$ outside of the range of~$\rho_{n,\sigma}$. With no loss of generality, the optimal measurement that discriminates the states $\rho_{n,\sigma}$ can be represented by a POVM whose elements have the form shown in Eq.~\eqref{app:rho_block}. Moreover, we can assume it to be covariant under $S_N$~\cite{Holevo1982}.
So, such POVM elements can be written as \begin{equation}\label{app:povmelements} E_{n,\sigma} = \bigoplus_\lambda \openone_{(\lambda)}\otimes U^\lambda_\sigma\, \Xi_{\{\lambda\}}^n (U^\lambda_\sigma)^\dagger \,, \end{equation} where $\Xi_{\{\lambda\}}^n$ is some positive operator. The resolution of the identity imposes constraints on these operators. The condition reads \begin{align} \begin{split}\, \sum_{n,\sigma} E_{n,\sigma} &= \sum_n \frac{1}{b_n} \sum_{\sigma\in S_N} \bigoplus_\lambda \openone_{(\lambda)}\otimes U^\lambda_\sigma \Xi_{\{\lambda\}}^n (U^\lambda_\sigma)^\dagger \\ &= \bigoplus_\lambda \openone_{(\lambda)}\otimes\openone_{\{\lambda\}} \,, \end{split} \end{align} where we have used the factor $b_n$ to extend the sum over~${\mathscr S}_n$ to the entire group $S_N$ and applied Schur's lemma. Taking the trace on both sides of the equation, we find the POVM constraint to be \begin{equation}\label{povmcond} \sum_n \frac{N!}{b_n} {\rm tr}\,{\left(\Xi_{\{\lambda\}}^n\right)} = \nu_\lambda \,,\quad \forall \lambda \,, \end{equation} where $\nu_\lambda$ is the dimension of $\openone_{\{\lambda\}}$ or, equivalently, the multiplicity of the irrep $\lambda$ of ${\rm SU}(d)$ [see Eq.~(\ref{mult_nu})]. So far we have analyzed the structure that the symmetries of the problem impose on the states $\rho_{n,\sigma}$ and the measurements. We have learned that for any choice of operators $\Xi^n_{\{\lambda\}}$ that fulfill Eq.~(\ref{povmcond}), the set of operators~(\ref{app:povmelements}) defines a valid POVM, but it need not be optimal. So, we now proceed to derive optimality conditions for $\Xi^n_{\{\lambda\}}$.
These are provided by the Holevo-Yuen-Kennedy-Lax~\cite{Holevo1973a,Yuen1975} necessary and sufficient conditions for minimizing the average cost. For our clustering problem in Eq.~(\ref{app:av_cost}) they read \begin{align} \label{Holevo1} &(W_{{\bf x}}-\Gamma)E_{\bf x}=E_{\bf x}(W_{\bf x}-\Gamma)=0 \,,\\ \label{Holevo2} &\phantom{(}W_{{\bf x}}-\Gamma \geq 0 \,. \end{align} They must hold for all ${\bf x}$, where $\Gamma=\sum_{{\bf x}} W_{\bf x} E_{\bf x}=\sum_{\bf x} E_{\bf x} W_{\bf x}$, and $W_{\bf x} = \sum_{{\bf x}'} f({\bf x},{\bf x}') \eta_{{\bf x}'} \rho_{{\bf x}'}$. We will assume that the prior distribution $\eta_{{\bf x}}$ is flat and that the cost function is nonnegative and covariant with respect to the permutation group, i.e., $f({\bf x},{\bf x}')=f(\tau{\bf x},\tau{\bf x}')$ for all $\tau\in S_N$. Then, $W_{\tau{\bf x}}=U_\tau W_{{\bf x}} U_\tau^\dagger$ and we only need to ensure that conditions \eqref{Holevo1} and \eqref{Holevo2} are met for reference strings, for which ${\bf x}=(n,e)$. In the Schur basis, their corresponding operators, which we simply call $W_n$, and the matrix $\Gamma$ take the form \begin{align} W_n &= \bigoplus_\lambda \openone_{(\lambda)} \otimes \omega^n_{\{\lambda\}} \,,\label{W_block}\\ \Gamma &= \bigoplus_\lambda k_\lambda \openone_{(\lambda)}\otimes\openone_{\{\lambda\}}\,,\label{G_block} \end{align} where we have used Schur's lemma to obtain Eq.~(\ref{G_block}) and defined $k_\lambda \equiv \sum_n N!\, {\rm tr}\,{\left(\omega^n_{\{\lambda\}} \,\Xi^n_{\{\lambda\}}\right)}/(b_n\nu_\lambda)$. Note that $\Gamma$ is a diagonal matrix, in spite of the fact that the $\omega_{\{\lambda\}}^n$ are, at this point, arbitrary full-rank positive operators.
With Eqs.~\eqref{W_block} and \eqref{G_block}, the optimality conditions~\eqref{Holevo1} and \eqref{Holevo2} can be made explicit. First, we note that the subspace $(\lambda)$ is irrelevant in this calculation, and that there will be an independent condition for each irrep $\lambda$. Taking into account these considerations, Eq.~\eqref{Holevo1} now reads \begin{align} \omega^n_{\{\lambda\}}\Xi^n_{\{\lambda\}} &=\Xi^n_{\{\lambda\}}\omega^n_{\{\lambda\}} =k_\lambda \Xi^n_{\{\lambda\}} \,,\quad \forall n,\lambda \,.\label{Holevo1b} \end{align} This equation tells us two things: (i) since the matrices~$\omega^n_{\{\lambda\}}$ and~$\Xi^n_{\{\lambda\}}$ commute, they have a common eigenbasis, and (ii)~Eq.~\eqref{Holevo1b} is a set of eigenvalue equations for $\omega^n_{\{\lambda\}}$ with a common eigenvalue $k_\lambda$, one equation for each eigenvector of $\Xi^n_{\{\lambda\}}$. Therefore, the support of $\Xi^n_{\{\lambda\}}$ is necessarily restricted to a single eigenspace of $\omega^n_{\{\lambda\}}$. Denoting by $\vartheta_{\lambda,a}^n$, $a=1,2,\dots$, the eigenvalues of $\omega_{\{\lambda\}}^n$ sorted in increasing order, we have $k_\lambda=\vartheta_{\lambda,a}^n$ for some $a$, which may depend on $\lambda$ and $n$, or else $\Xi^n_{\{\lambda\}}=0$. The second Holevo condition~\eqref{Holevo2}, under the same considerations regarding the block-diagonal structure, leads to \begin{equation}\label{Holevo2b} \omega_{\{\lambda\}}^n \geq k_\lambda \openone_{\{\lambda\}} \,,\quad \forall n,\lambda \,. \end{equation} This condition further induces more structure in the POVM. Given $\lambda$, Eq.~\eqref{Holevo2b} has to hold for {\em every} value of~$n$. In particular, we must have $\min_{n'} \vartheta_{\lambda,1}^{n'}\ge k_\lambda$.
Therefore, $\min_{n'} \vartheta_{\lambda,1}^{n'}\ge\vartheta^n_{\lambda,a}$ for some $a$, or else $\Xi^n_{\{\lambda\}}=0$. Since~$\Xi^n_{\{\lambda\}}$ cannot vanish for all $n$ because of Eq.~(\ref{povmcond}), we readily see that \begin{equation}\label{povm} k_\lambda\!=\!\vartheta^{n(\lambda)}_{\lambda,1}, \quad \Xi_{\{\lambda\}}^n \!= \begin{cases} \xi_\lambda^n \Pi_{1}(\omega_{\{\lambda\}}^n) & {\rm if} \, n=n(\lambda), \\ 0 & {\rm otherwise}, \end{cases} \end{equation} where $n(\lambda)={\rm argmin}_n \vartheta_{\lambda,1}^n$, $\Pi_{1}(\omega_{\{\lambda\}}^n)$ is a projector onto the eigenspace of $\omega_{\{\lambda\}}^n$ (not necessarily the whole subspace) corresponding to the minimum eigenvalue $\vartheta_{\lambda,1}^n$, and $\xi^n_\lambda$ is a suitable coefficient that can be read off from~Eq.~\eqref{povmcond}: \begin{equation}\label{povmcoef} \xi_\lambda^{n} = \frac{\nu_\lambda b_n}{D_\lambda^{n} N!} \,, \end{equation} where $D_\lambda^n = \dim{[\Pi_{1}(\omega_{\{\lambda\}}^n)]}$. This completes the construction of the optimal POVM. For a generic cost function, we can now write down a closed, implicit formula for the minimum average cost achievable by any quantum clustering protocol. It reads \begin{equation}\label{opt_av_cost} \bar f = {\rm tr}\, \Gamma = \sum_\lambda s_\lambda \,\nu_\lambda\, \vartheta_{\lambda,1}^{n(\lambda)}\,, \end{equation} where $s_\lambda$ is the dimension of $\openone_{(\lambda)}$ or, equivalently, the multiplicity of the irrep $\lambda$ of $S_N$ [see Eq.~(\ref{mult_s})]. The only object that remains to be specified is the function~$n(\lambda)$, which depends ultimately on the choice of the cost function $f({\bf x},{\bf x}')$.
\subsubsection{Success probability} We now make Eq.~\eqref{opt_av_cost} explicit by considering the success probability $P_{\rm s}$ as a figure of merit, that is, we choose $f({\bf x},{\bf x}')=1-\delta_{{\bf x},{\bf x}'}$, hence $P_{\rm s}=1-\bar f$. We also assume that the source that produces the input sequence is equally likely to prepare either state, thus each string~${\bf x}$ has the same prior probability, $\eta_{{\bf x}} = 2^{1-N} \equiv\eta$. In this case,~$W_n$ takes the simple form \begin{equation}\label{Wme} W_n = \bigoplus_\lambda \openone_{(\lambda)} \otimes \left(\mu_\lambda\openone_{\{\lambda\}} -\eta c_n \Omega_{\{\lambda\}}^n\right) \,, \end{equation} where the $\mu_\lambda$ are positive coefficients and we recall that the expression in parentheses corresponds to $\omega_{\{\lambda\}}^n$ in Eq.~\eqref{W_block}. From this expression one can easily derive the explicit forms of $\vartheta_{\lambda,1}^{n}$ and $n(\lambda)$. We just need to consider the maximum eigenvalue of the rank-one projector~$\Omega^n_{\{\lambda\}}$, which can be either one or zero depending on whether or not the input state $\rho_{n,\sigma}$ has support in the irrep $\lambda$ space. So, among the values of $n$ for which $\rho_{n,\sigma}$ does have support there, $n(\lambda)$ is one that maximizes $c_n$. Since $c_n$ is a decreasing function of $n$ in its allowed range (recall that $n\le\floor{N/2}$), $n(\lambda)$ is the smallest such value. For the problem at hand, the irreps in the direct sum can be labeled by Young diagrams of at most two rows or, equivalently, by partitions of $N$ of length at most two (see Appendix~\ref{app:irreps}), hence $\lambda=(\lambda_1,\lambda_2)$, where $\lambda_1+\lambda_2=N$ and $\lambda_2$ runs from $0$ to $\floor{N/2}$.
Given~$\lambda$, only states~$\rho_n$ with $n=\lambda_2,\ldots,\floor{N/2}$ have support on the irrep $\lambda$ space, as readily follows from the Clebsch-Gordan decomposition rules. Then, \begin{equation}\label{nl} n(\lambda) = \lambda_2\,,\quad \vartheta^{n(\lambda)}_{\lambda,1}=\mu_\lambda-\eta c_{n(\lambda)} \,. \end{equation} Eq.~\eqref{nl} gives the optimal guess for the size, $n$, of the smallest cluster. The rule is in agreement with our intuition. The irrep $(N,0)$, i.e., $\lambda_2=0$, corresponding to the fully symmetric subspace, is naturally associated with the value $n=0$, i.e., with all $N$ systems being in the same state/cluster; the irrep with one antisymmetrized index has $\lambda_2=1$, and hints at one system being in a different state than the others, i.e., at a cluster of size one; and so on. We now have all the ingredients to compute the optimal success probability from~Eq.~(\ref{opt_av_cost}). It reads \begin{align} P_{\rm s} &= \eta \sum_\lambda c_{n(\lambda)} s_\lambda \nu_\lambda \nonumber\\ &= {1\over2^{N-1}}\sum_{i=0}^{\floor{N/2}} \binom{N}{i} \frac{(d-1)(N-2i+1)^2}{(d+i-1)(N-i+1)^2} \,, \label{app:ps} \end{align} where we have used the relation $\sum_\lambda s_\lambda \nu_\lambda \mu_\lambda=1$ that follows from ${\rm tr}\, \sum_{\bf x} \eta_{\bf x} \rho_{\bf x}=1$, and the expressions of $\nu_\lambda$ and~$s_\lambda$ from Eqs.~\eqref{mult_nu} and \eqref{mult_s} in Appendix~\ref{app:irreps}. \subsection{Clustering quantum states: known input states}\label{sec:quantumknown} If the two possible states $\ket{\phi_0}$ and $\ket{\phi_1}$ are known, the optimal clustering protocol must use this information. It is then expected that the average performance will be much higher than for the universal protocol.
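Before turning to known states, the closed form~\eqref{app:ps} admits a quick numerical sanity check: for $N=1$ only the trivial clustering $n=0$ exists, so the formula must give $P_{\rm s}=1$ for any $d$, and for $N=2$, $d=2$ it reduces by hand to $5/8$. A minimal script (the function name is ours):

```python
from math import comb, floor

def p_success(N, d):
    """Optimal universal clustering success probability, Eq. (app:ps)."""
    total = sum(
        comb(N, i) * (d - 1) * (N - 2 * i + 1) ** 2
        / ((d + i - 1) * (N - i + 1) ** 2)
        for i in range(floor(N / 2) + 1)
    )
    return total / 2 ** (N - 1)

print(p_success(1, 4))   # 1.0: a single system admits only one clustering
print(p_success(2, 2))   # 0.625 = 5/8
```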
It is natural in this context not to identify a given string~${\bf x}$ with its complement $\bar{\bf x}$ (we stick to the notation in the main text), since mistaking one state for the other should clearly count as an error if the two preparations are specified. In this case, then, clustering is equivalent to discriminating the $2^N$ known pure states $\ket{\Phi_{\bf x}}\!=\!|\phi_{x_1}\rangle\!\otimes\!|\phi_{x_2}\rangle\!\otimes\!\cdots\! \otimes\!|\phi_{x_N}\rangle$ (hypotheses), where with no loss of generality we can write \begin{equation} \ket{\phi_{0/1}}=\sqrt{\frac{1+c}{2}}\ket{0}\pm \sqrt{\frac{1-c}{2}}\ket{1} \label{no loss} \end{equation} for a convenient choice of basis. Here $c=|\langle\phi_0|\phi_1\rangle|$ is the overlap of the two states. The Gram matrix $G$ encapsulates all the information needed to discriminate the states of the set. It is defined as having elements $G_{{\bf x},{\bf x}'}=\braket{\Phi_{\bf x}}{\Phi_{{\bf x}'}}$. It is known that when the diagonal elements of its square root are all equal, i.e., $\big(\sqrt{G}\,\big)_{{\bf x},{\bf x}}\equiv S$ for all ${\bf x}$, then the square root measurement is optimal~\cite{DallaPozza2015,Sentis2016} and the probability of successful identification reads simply $P_{\rm s}=S^2$. Notice that we have implicitly assumed uniformly distributed hypotheses.
For the case at hand, \begin{align} G_{{\bf x},{\bf x}'}&=(\langle\phi_{x_1}|\otimes\cdots\otimes\langle\phi_{x_N}|)(|\phi_{x'_1}\rangle\otimes\cdots\otimes|\phi_{x'_N}\rangle)\nonumber \\ &=\prod_{i=1}^N\langle\phi_{x_i}|\phi_{x'_i}\rangle =\left({\mathscr G}^{\otimes N}\right)_{{\bf x},{\bf x}'}, \end{align} where \begin{equation} {\mathscr G}=\begin{pmatrix} 1 & c \\ c & 1 \end{pmatrix} \end{equation} is the Gram matrix of $\{|\phi_0\rangle,|\phi_1\rangle\}$. Thus, $\sqrt G\!=\!(\sqrt{\mathscr G}\,)^{\otimes N}$, with \begin{equation} \sqrt{{\mathscr G}}=\begin{pmatrix} \displaystyle \frac{\sqrt{1\!+\!c}+\!\sqrt{1\!-\!c}}{2} & \displaystyle \frac{\sqrt{1\!+\!c}-\!\sqrt{1\!-\!c}}{2} \\[.8em] \displaystyle \frac{\sqrt{1\!+\!c}-\!\sqrt{1\!-\!c}}{2} & \displaystyle\frac{ \sqrt{1\!+\!c}+\!\sqrt{1\!-\!c}}{2} \end{pmatrix}. \end{equation} As expected, the diagonal terms of $\sqrt{G}$ are all equal, and the success probability is given by \begin{equation}\label{psnc} P_{\rm s}(c)=\left(\frac{\sqrt{1+c}+\sqrt{1-c}}{2}\right)^{\!2N}=\left(\frac{1+\sqrt{1-c^2}}{2}\right)^{\!N}. \end{equation} We call the reader's attention to the fact that one could have attained the very same success probability by performing an individual Helstrom measurement~\cite{Helstrom1976}, with basis \begin{equation} \ket{\psi_{0/1}}=\frac{\ket{0}\pm \ket{1}}{\sqrt{2}}, \label{Helstrom basis} \end{equation} on each state of the input sequence and guessing that the label of each state was the corresponding outcome value.
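The closed form of $\sqrt{\mathscr G}$ and the identity between the two expressions in Eq.~\eqref{psnc} can be checked numerically. A sketch with NumPy (`sqrtm_sym` is our helper):

```python
import numpy as np

def sqrtm_sym(M):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

c, N = 0.6, 4
G2 = np.array([[1.0, c], [c, 1.0]])        # Gram matrix of {|phi_0>, |phi_1>}
root = sqrtm_sym(G2)

# Closed form quoted in the text
a = (np.sqrt(1 + c) + np.sqrt(1 - c)) / 2
b = (np.sqrt(1 + c) - np.sqrt(1 - c)) / 2
assert np.allclose(root, [[a, b], [b, a]])

# sqrt(G) = sqrt(G2)^{tensor N}; all its diagonal entries equal S = a^N
sqrtG = root
for _ in range(N - 1):
    sqrtG = np.kron(sqrtG, root)
S = sqrtG[0, 0]
assert np.allclose(np.diag(sqrtG), S)

# P_s = S^2 matches both expressions in Eq. (psnc)
assert np.isclose(S**2, a ** (2 * N))
assert np.isclose(S**2, ((1 + np.sqrt(1 - c**2)) / 2) ** N)
```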
In other words, for the problem at hand, global quantum measurements do not provide any improvement over individual fixed measurements. In order to compare with the results of the main text, we compute the average performance for a uniform distribution of states $\ket{\phi_0}$ and $\ket{\phi_1}$, i.e., the average \begin{align} P_{\rm s}\! &=\!\!\int\! d\phi_0 d\phi_1 P_{\rm s}(c)\nonumber\\ &=\!\!\int_0^1\! dc^2 P_{\rm s}(c)\!\int \!d\phi_0 d\phi_1\,\delta\!\left(|\langle\phi_0|\phi_1\rangle|^2\!-\!c^2\right)\nonumber\\ &=\!\!\int_0^1\! dc^2 P_{\rm s}(c)\!\int \!d\phi_1\,\delta\!\left(|\langle 0|\phi_1\rangle|^2\!-\!c^2\right)\nonumber\\ &=\!\!\int_0^1\! dc^2 \mu(c^2)P_{\rm s}(c), \end{align} where we have inserted the identity $1=\int_0^1dc^2 \delta(a^2-c^2)$, for $0<a\equiv |\langle\phi_0|\phi_1\rangle| < 1$, and used the invariance of the measure~$d\phi$ under ${\rm SU}(d)$ transformations. The marginal distribution is $\mu(c^2)=(d-1) (1-c^2)^{d-2}$ (see Appendix~\ref{app:prior}). Using this result, the asymptotic behavior of the last integral is \begin{equation}\label{eq:meanqudits} P_{\rm s}\sim \frac{4(d-1)}{N}\,. \end{equation} As expected, knowing the two possible states in the input string leads to a better behavior of the success probability: it decreases only linearly in $1/N$, as compared to the best universal quantum clustering protocol, which exhibits a quadratic decrease. For a fairer comparison with universal quantum clustering, guessing the complementary string $\bar{\bf x}$ instead of ${\bf x}$ will now be counted as a success; that is, the clusterings are now defined by the states \begin{equation} \rho_{\bf x}=\frac{\ketbra{\Phi_{\bf x}}{\Phi_{\bf x}}+ \ketbra{\Phi_{\bar{{\bf x}}}}{\Phi_{\bar{{\bf x}}}}}{2}.
\end{equation} For this variation of the problem, the optimal measurement is still local, and given by a POVM with elements \begin{equation} E_{\bf x}=\ketbra{\Psi_{\bf x}}{\Psi_{\bf x}}+\ketbra{\Psi_{\bar{{\bf x}}}}{\Psi_{\bar{{\bf x}}}}, \end{equation} where $\ket{\Psi_{\bf x}}\!=\!|\psi_{x_1}\rangle\!\otimes\!|\psi_{x_2}\rangle\!\otimes\!\cdots\! \otimes\!|\psi_{x_N}\rangle$, and where we recall that $\{\ket{\psi_{0}},\ket{\psi_{1}}\}$ is the (local) Helstrom measurement basis in Eq.~(\ref{Helstrom basis}). Note that the $\{E_{\bf x}\}$ are orthogonal projectors. To prove the statement in the last paragraph, we show that the Holevo-Yuen-Kennedy-Lax conditions, Eq.~(\ref{Holevo1}), hold (recall that the Gram matrix technique does not apply to mixed states). For the success probability and assuming equal priors, these conditions take the simpler form \begin{align} \sum_{\bf x} E_{\bf x} \rho_{\bf x}&=\sum_{\bf x} \rho_{\bf x} E_{\bf x}\equiv\Gamma,\label{holevo-cond0}\\ \Gamma-\rho_{\bf x}&\geq 0 \quad \forall {\bf x}, \label{holevo-cond} \end{align} where we have dropped the irrelevant factor $\eta=2^{1-N}$. Condition~(\ref{holevo-cond0}) is trivially satisfied. To check that condition~(\ref{holevo-cond}) also holds, we recall the Weyl inequalities for the eigenvalues of Hermitian $n\times n$ matrices $A$, $B$~\cite{Horn2013}: \begin{equation} \vartheta_i(A+B)\leq \vartheta_{i+j}(A)+\vartheta_{n-j}(B), \label{Weyl ineq} \end{equation} for $j=0,1,\ldots,n-i$, where the eigenvalues are labeled in increasing order $\vartheta_1\leq\vartheta_2\leq\cdots \leq\vartheta_n$.
We use Eq.~(\ref{Weyl ineq}) to write \begin{equation} \vartheta_1(\Gamma)\leq \vartheta_{3}(\Gamma-\rho_{\bf x})+\vartheta_{2^{N}-2}(\rho_{\bf x}) \label{rmx} \end{equation} (note that effectively all these operators act on the $2^N$-dimensional subspace spanned by $\{|0\rangle,|1\rangle\}^{\otimes N}$). As will be proved below, $\Gamma>0$, which implies that $\vartheta_1(\Gamma)>0$. We note that $\rho_{\bf x}$ has rank two, i.e., it has only two strictly positive eigenvalues, so $\vartheta_{2^N-2}(\rho_{\bf x})=0$. Then Eq.~(\ref{rmx}) implies \begin{equation} \label{lambda3} \vartheta_{3}(\Gamma-\rho_{\bf x}) \geq \vartheta_1(\Gamma)>0. \end{equation} Finally, notice that $\Gamma-\rho_{\bf x}$ has two null eigenvalues, with eigenvectors $\ket{\Psi_{\bf x}}$ and $\ket{\Psi_{\bar{{\bf x}}}}$. Hence, $\vartheta_1(\Gamma-\rho_{\bf x})=\vartheta_2(\Gamma-\rho_{\bf x})=0$, and it follows from Eq.~\eqref{lambda3} that condition~(\ref{holevo-cond}) must hold. To show the positivity of $\Gamma$, which was assumed in the previous paragraph, we use Eqs.~(\ref{no loss}) and~(\ref{Helstrom basis}) to write \begin{align} \Gamma =\frac{1}{2}\left[ \begin{pmatrix} a_1 & 0 \\ 0 &a_2 \end{pmatrix}^{\!\!\otimes N} \!\!\! +\begin{pmatrix} b_1& 0 \\ 0 & b_2 \end{pmatrix}^{\!\!\otimes N} \right], \end{align} where \begin{align} a_{1/2}=&\frac{1\pm c+\sqrt{1-c^2} }{2},\nonumber \\ b_{1/2}=&\frac{ 1\pm c - \sqrt{1-c^2} }{2}. \end{align} Notice that $a_1>b_1$ and $a_2>|b_2|$. Thus, if $0\leq c< 1$, we have $\vartheta_k >0$ for $k=1,2,\dots, 2^{N}$. The special case~$c=1$ is degenerate: Eq.~\eqref{holevo-cond} is trivially saturated, rendering $P_{\rm s}=2^{1-N}$, as it should be.
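For small $N$ the whole argument can be cross-checked numerically. The sketch below (all helper names are ours) builds $\rho_{\bf x}$ and $E_{\bf x}$ from Eqs.~\eqref{no loss} and \eqref{Helstrom basis} for $N=3$, $c=0.6$, and verifies that $\{E_{\bf x}\}$ resolves the identity, that $\Gamma$ is Hermitian, and that $\Gamma-\rho_{\bf x}\geq 0$ for every ${\bf x}$:

```python
import numpy as np
from itertools import product

c, N = 0.6, 3
# States of Eq. (no loss) and the Helstrom basis of Eq. (Helstrom basis)
phi = {0: np.array([np.sqrt((1 + c) / 2),  np.sqrt((1 - c) / 2)]),
       1: np.array([np.sqrt((1 + c) / 2), -np.sqrt((1 - c) / 2)])}
psi = {0: np.array([1.0, 1.0]) / np.sqrt(2),
       1: np.array([1.0, -1.0]) / np.sqrt(2)}

def tensor(states, x):
    v = states[x[0]]
    for bit in x[1:]:
        v = np.kron(v, states[bit])
    return v

proj = lambda v: np.outer(v, v)
strings = list(product([0, 1], repeat=N))
reps = [x for x in strings if x[0] == 0]   # one string per {x, x-bar} pair

rho = {x: (proj(tensor(phi, x))
           + proj(tensor(phi, tuple(1 - b for b in x)))) / 2 for x in reps}
E = {x: proj(tensor(psi, x))
        + proj(tensor(psi, tuple(1 - b for b in x))) for x in reps}

# POVM elements resolve the identity
assert np.allclose(sum(E.values()), np.eye(2**N))
Gamma = sum(E[x] @ rho[x] for x in reps)
# Condition (holevo-cond0): Gamma is Hermitian; (holevo-cond): Gamma - rho >= 0
assert np.allclose(Gamma, Gamma.T)
for x in reps:
    assert np.linalg.eigvalsh(Gamma - rho[x]).min() > -1e-10
```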
The maximum success probability can now be computed recalling that $P_{\rm s}(c)=2^{1-N} {\rm tr}\,\Gamma$. We obtain \begin{equation} P_{\rm s}(c)=\left(\frac{1+\sqrt{1-c^2}}{2}\right)^{\!\!N}\!\!+\left(\frac{1-\sqrt{1-c^2}}{2}\right)^{\!\!N}, \end{equation} where the first term corresponds to guessing correctly all the states in the input string, whereas the second one results from guessing the other possible state all along the string. One can easily check that the average over~$c$ of the second term vanishes exponentially for large $N$, so we end up with a success probability given again by~Eq.~\eqref{eq:meanqudits}. Finally, we would like to mention that one could consider a simple unambiguous protocol~\cite{Ivanovic1987,Dieks1988,Peres1988,Chefles1998a} whereby each state of the input string would be identified with no error with probability $P_{\rm s}(c)=1-c$, i.e., the protocol would give an inconclusive answer with probability $1-P_{\rm s}=c$. Therefore, the average unambiguous probability of sorting the data would be \begin{equation} P_\mathrm{s}=2\!\! \int_0^1\!\! dc\,c\, \mu(c^2)(1-c)^N \sim \frac{2(d-1)}{N^2}\,. \end{equation} } \section{Discussion}\label{sec:discussion} Unsupervised learning, which assumes virtually nothing about the distributions underlying the data, is already a hard problem~\cite{Aloise2009,Ben-David2015}. Lifting the notion of classical data to quantum data (i.e., states) factors in additional obstacles, such as the impossibility of repeatedly operating on the quantum data without degrading it.
\blue{Most prominent classical clustering algorithms rely heavily on the iterative evaluation of a function on the input data (e.g., pairwise distances between points in a feature vector space, as in $k$-means~\cite{Lloyd1982}); hence they are not equipped to deal with degrading data and would be expected to fail in our scenario. The unsupervised quantum classification algorithm we present is thus, by necessity, far removed from its classical analogues. In particular, since we are concerned with the optimal quantum strategy, we need to consider the most general collective measurement, which is inherently single-shot:} it yields a single sample of a stochastic action, namely, a posterior state and an outcome of a quantum measurement, where the latter provides the description of the clustering. The main lesson stemming from our investigation is that, despite these limitations, clustering unknown quantum states is a feasible task. \blue{The optimal protocol that solves it showcases some interesting features.} \blue{ {\em It does not completely erase the information about a given preparation of the input data after clustering.}} This is apparent from Eq.~\eqref{povm_elements_main}, since the action of the POVM on the subspaces~$(\lambda)$ is the identity. \blue{After the input data string in the global state $\ket{\Phi_{\bf x}}$ is measured and outcome $\lambda^*$ is obtained (recall that $\lambda^*$ gives us information about the size of the clusters), information relative to the particular states $\ket{\phi_{0/1}}$ remains in the subspace~$(\lambda^*)$ of the global post-measurement state. Therefore, one could potentially make further use of the posterior (clustered) states down the line as approximations of the two classes of states.
This opens the door for our clustering device to be used as an intermediate processor in a quantum network.} This notwithstanding, the amount of information that can be retrieved after optimal clustering is currently under investigation. {\em It outperforms the classical and semiclassical protocols.} If the local dimension of the quantum data is larger than two, the dimensionality of the symmetric subspaces spanned by the global states of the strings of data can be exploited by means of collective measurements with a twofold effect: enhanced distinguishability of states, resulting in improved clustering performance (exemplified by a linear increase in the asymptotic success probability), and information-preserving data handling (to some extent, as discussed above). This should be contrasted with the semiclassical protocol, which essentially obliterates the information content of the data (as a von Neumann measurement is performed on each system), and whose success probability vanishes exponentially with the local dimension. In addition, the optimal classical and semiclassical protocols require solving an NP-complete problem, and their implementation is thus inefficient. In contrast, we observe that the first part of the quantum protocol, which consists in guessing the size of the clusters $n$, runs efficiently on a quantum computer: this step involves a Schur transform, which runs in time polynomial in $N$ and $\log d$~\cite{Harrow2005,Krovi2018}, followed by a projective measurement with no computational cost. The second part, guessing the permutation $\sigma$, requires implementing a group-covariant POVM. The complexity of this step, and hence the overall computational complexity of our protocol, is still an open question currently under investigation.
{\em It is optimal for a range of different cost functions.} There are various cost functions that could arguably be better suited to quantum clustering, e.g., the Hamming distance between the guessed and the true clusterings or, likewise, the trace distance or the infidelity between the corresponding effective states~$\rho_{n,\sigma}$ and~$\rho_{n',\sigma'}$. They are, however, hard to deal with analytically. The question arises as to whether our POVM is still optimal for such cost functions. To answer this question, we formulate an optimality condition that can be checked numerically for problems of finite size (see Appendix~\ref{app:generalcosts}). Our numerics show that the POVM remains optimal for all these examples. This is an indication that the optimality of our protocol stems from the structure of the problem, independently of the cost function. {\em It stands as a landmark in multi-hypothesis state discrimination.} Analytical solutions to multi-hypothesis state discrimination exist only in a few specific cases~\cite{Barnett2001,Chiribella2004,Chiribella2006a,Krovi2015a,Sentis2016,Sentis2017}. Our set of hypotheses arises arguably from the minimal set of assumptions about a pure state source: it produces two states randomly. Variants of this problem with much more restrictive assumptions have been considered in Refs.~\cite{Korff2004,Hillery2011,Skotiniotis2018}. Our clustering protocol departs from other notions of quantum unsupervised machine learning that can be found in the literature~\cite{Aimeur2013,Lloyd2013,Wiebe2014a,Kerenidis2018}. In these references, data coming from a classical problem is encoded in quantum states that are available on demand via a quantum random access memory~\cite{Giovannetti2008}. The goal is to surpass classical performance in the number of required operations.
In contrast, we deal with unprocessed quantum data as input, and aim at performing a task that is genuinely quantum. This is a notably harder scenario, where known heuristics for classical algorithms simply cannot work. \blue{Other extensions of this work currently under investigation are: clustering systems whose states can be of more than two types, where we expect a similar two-step measurement for the optimal protocol; and clustering of quantum processes, where the aim is to classify instances of unknown processes by letting them run on some input test state of our choice (see Ref.~\cite{Skotiniotis2018} for related work on identifying malfunctioning devices). In this last case, an interesting application arises when considering causal relations as the defining feature of a cluster. A clustering algorithm would then aim to identify, within a set of unknown processes, which ones are causally connected. Identifying causal structures has recently attracted attention in the quantum information community~\cite{Chiribella2019}.} \acknowledgments We acknowledge the financial support of the Spanish MINECO, ref.\ FIS2016-80681-P (AEI/FEDER, UE), and Generalitat de Catalunya CIRIT, ref.\ 2017-SGR-1127. GS thanks the support of the Alexander von Humboldt Foundation. EB also thanks the Computer Science Department of the University of Hong Kong for its hospitality during his stay. \appendix \section{Partitions}\label{app:partitions} Partitions play an important role in the representation theory of groups and are central objects in combinatorics. Here, we collect a few definitions and results that are used in the next appendices, particularly in Appendix~\ref{app:irreps}. A \emph{partition} $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_r,\ldots)$ is a sequence of nonnegative integers in nonincreasing order. The \emph{length}~of~$\lambda$, denoted $l(\lambda)$, is the number of nonzero elements in~$\lambda$.
We denote by \mbox{$\lambda\vdash N$} a partition $\lambda$ of the integer $N$, where $N=\sum_i \lambda_i$. A natural way of ordering partitions is by inverse lexicographic order, i.e., given two partitions $\lambda$ and $\lambda'$, we write $\lambda > \lambda' $ iff the first nonzero difference $\lambda_i-\lambda'_i$ is positive. The total number of partitions of an integer $N$ is denoted by~$P_N$~\cite{Flajolet2009}, and the number of partitions such that $l(\lambda)\le r$ by~$P^{(\le r)}_N$. Similarly, the number of partitions of length~$r$ is denoted by~$P^{(r)}_N$. There exists no closed expression for any of these numbers, but there are widely known results (some of them, by Hardy and Ramanujan, very famous~\cite{Andrews1976}) concerning their asymptotic behavior for large $N$. The one we will later use in Appendix~\ref{app:classic} is \begin{equation} P^{(\le r)}_N\sim {N^{r-1}\over r!(r-1)!}\,, \label{partAsym} \end{equation} which gives the dominant contribution for large $N$. Note that from the obvious relation $P^{(r)}_N=P^{(\le r)}_N-P^{(\le r-1)}_N$, it follows that the same asymptotic expression holds for~$P^{(r)}_N$. Partitions are conveniently represented by \emph{Young diagrams}. The Young diagram associated to the partition $\lambda \vdash N$ is an arrangement of $N$ empty boxes in $l(\lambda)$ rows, with~$\lambda_i$ boxes in the $i$th row. This association is one-to-one, hence $\lambda$ can be used to label Young diagrams as well. A \emph{Young tableau} of $d$ entries is a Young diagram filled with integers from 1 up to~$d$, one in each box. There are two types of tableaux: A \emph{standard Young tableau} (SYT) of shape $\lambda \vdash N$ is one where $d=N$ and such that the integers in each row increase from left to right, and from top to bottom in each column (hence each integer appears exactly once).
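Both the exact relation $P^{(r)}_N=P^{(\le r)}_N-P^{(\le r-1)}_N$ and the quality of the asymptotic estimate~\eqref{partAsym} are easy to probe numerically. A short sketch using the standard recursion for partitions into at most $r$ parts (conjugate to partitions into parts of size at most $r$); the function name is ours:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_le(N, r):
    """Number of partitions of N into at most r parts (equivalently,
    by conjugation, into parts of size at most r)."""
    if N == 0:
        return 1
    if N < 0 or r == 0:
        return 0
    # either no part of size r is used, or remove one part of size r
    return p_le(N, r - 1) + p_le(N - r, r)

print(p_le(5, 5))                 # 7: total number of partitions of 5
print(p_le(7, 3) - p_le(7, 2))    # 4: partitions of 7 with exactly 3 parts
# Ratio to the asymptotic N^(r-1)/(r!(r-1)!) of Eq. (partAsym), r = 3:
print(p_le(300, 3) / (300**2 / 12))   # close to 1
```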
A \emph{semistandard Young tableau} (SSYT) of shape $\lambda \vdash N$ and $d$ entries, $d\geq l(\lambda)$, is one such that the integers in each row are nondecreasing from left to right, and increasing from top to bottom in each column. The number of different SYTs of shape $\lambda \vdash N$ is given by the \emph{hook-length} formula
\begin{equation}\label{mult_nu_general}
\nu_\lambda = \frac{N!}{\prod_{(i,j)\in\lambda} h_{ij}} \,,
\end{equation}
where $(i,j)$ denotes the box located in the $i$th row and the $j$th column of the Young diagram, and $h_{ij}$ is the hook length of the box~$(i,j)$, defined as the number of boxes located beneath or to the right of that box in the Young diagram, counting the box itself. Likewise, the number of SSYTs of shape $\lambda \vdash N$ and $d$ entries is given by the formula
\begin{equation}\label{mult_s_general}
s_\lambda = \frac{\Delta(\lambda_1+d-1,\lambda_2+d-2,\ldots,\lambda_d)}{\Delta(d-1,d-2,\ldots,0)} \,,
\end{equation}
where $\Delta(x_1,x_2,\dots,x_d)=\prod_{i<j}(x_i-x_j)$.

\section{Irreducible representations of ${\rm SU}(d)$ and $S_N$ over $(d,\mathbb{C})^{\otimes N}$}\label{app:irreps}

For the sake of convenience, we recall here some ingredients of representation theory that we use throughout the paper. The results described below can be found in standard textbooks, for instance, in Refs.~\cite{Sagan2001,Goodman2009}.

\subsection{Some results in representation theory}

Young diagrams or, equivalently, partitions $\lambda$, label the irreducible representations (irreps) of the general linear group ${\rm GL}(d)$ and some of its subgroups, e.g., ${\rm SU}(d)$, and also the irreps of the symmetric group $S_N$. The dimensions of these irreps are given by $s_\lambda$ and $\nu_\lambda$, respectively [Eqs.~(\ref{mult_s_general}) and~(\ref{mult_nu_general})].
Schur--Weyl duality \cite{Goodman2009} establishes a connection between the irreps of both groups, as follows. Consider the transformations $R^{\otimes N}$ and $U_\sigma$ on the $N$-fold tensor product space $(d,\mathbb{C})^{\otimes N}\!$, where $R\in {\rm SU}(d)$ and $U_\sigma$ permutes the $N$ spaces $(d,\mathbb{C})$ of the tensor product according to the permutation $\sigma\in S_N$. $R^{\otimes N}$ and $U_\sigma$ define, respectively, reducible unitary representations of the groups ${\rm SU}(d)$ and~$S_N$ on $(d,\mathbb{C})^{\otimes N}\!$. Moreover, they are each other's commutants. It follows that these reducible representations decompose into irreps $\lambda$, so that their joint action can be expressed as
\begin{equation}\label{schurweyl}
R^{\otimes N}U_\sigma= U_\sigma R^{\otimes N} = \bigoplus_{\lambda \vdash N} R^\lambda \otimes U^\lambda_\sigma \,,
\end{equation}
where $R^\lambda$ and $U^\lambda_\sigma$ are the matrices that represent $R$ and $U_\sigma$, respectively, on the irrep~$\lambda$. To resolve any ambiguity that may arise, we write $\lambda$ in parentheses, $(\lambda)$, when it refers to the irreps of ${\rm SU}(d)$, and in brackets,~$\{\lambda\}$, when it refers to those of~$S_N$. Eq.~(\ref{schurweyl}) tells us that the dimension of $(\lambda)$, $s_\lambda$, coincides with the \emph{multiplicity} of $\{\lambda\}$, and conversely, the dimension of $\{\lambda\}$, $\nu_\lambda$, coincides with the multiplicity of $(\lambda)$. This block-diagonal structure provides a decomposition of the Hilbert space $\mathcal{H}^{\otimes N}=(d,{\mathbb C})^{\otimes N}$ into subspaces that are invariant under the action of ${\rm SU}(d)$ and $S_N$, as $\mathcal{H}^{\otimes N}=\bigoplus_\lambda H_\lambda$, with $H_\lambda = H_{(\lambda)} \otimes H_{\{\lambda\}}$.
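A useful consistency check of this decomposition is the dimension count $\sum_{\lambda\vdash N,\, l(\lambda)\le d} s_\lambda \nu_\lambda = d^N$. The snippet below (an illustrative sketch of ours, implementing the hook-length and Vandermonde-ratio formulas of Appendix~A) verifies it for $N=5$, $d=3$:

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """All partitions of n as nonincreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def nu(shape):
    """Dimension of the S_N irrep: hook-length formula."""
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod((row - j) + (cols[j] - i) - 1       # arm + leg + 1
                 for i, row in enumerate(shape) for j in range(row))
    return factorial(sum(shape)) // hooks

def s(shape, d):
    """Dimension of the SU(d) irrep: ratio of Vandermonde products."""
    lam = list(shape) + [0] * (d - len(shape))
    vand = lambda v: prod(v[i] - v[j] for i in range(d) for j in range(i + 1, d))
    return (vand([lam[i] + d - 1 - i for i in range(d)])
            // vand([d - 1 - i for i in range(d)]))

N, d = 5, 3
total = sum(s(p, d) * nu(p) for p in partitions(N) if len(p) <= d)
print(total == d**N)              # True: the blocks fill the whole space
print(nu((2, 1)), s((2, 1), 3))   # 2 and 8 (the adjoint irrep of SU(3))
```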
The basis in which ${\mathcal H}^{\otimes N}$ has this form is known as the \emph{Schur basis}, and the unitary transformation that changes from the computational to the Schur basis is called the \emph{Schur transform}.

To conclude this appendix, let us recall the rules for reducing the tensor product of two ${\rm SU}(d)$ representations as a Clebsch--Gordan series of the form
\begin{equation}\label{reduction1}
R^\lambda\otimes R^{\lambda'}=\bigoplus_{\lambda''} R^{\lambda''}\otimes \openone^{(\lambda'')} \,,\quad \forall R\in{\rm SU}(d)\,,
\end{equation}
where ${\rm dim}(\openone^{(\lambda'')})$ is the multiplicity of the irrep $(\lambda'')$. The same rules also apply to the reduction of the outer product of representations of $S_n$ and~$S_{n'}$ into irreps of~$S_{n''}$, where $n''=n+n'$. In this case one has
\begin{equation}\label{reduction2}
(U^\lambda \otimes U^{\lambda'})_{\sigma}=\bigoplus_{\lambda''} U^{\lambda''}_{\sigma} \otimes \openone^{(\lambda'')}, \quad \forall \sigma\in S_{n''}.
\end{equation}
Note the different meanings of $\otimes$ in the last two equations (this is, however, standard notation). The rules are most easily stated in terms of the Young diagrams that label the irreps. They are as follows:
\begin{enumerate}
\item In one of the diagrams that label the irreps on the left-hand side of Eq.~(\ref{reduction1}) or Eq.~(\ref{reduction2}) (preferably the smaller one), write the symbol $a$ in all boxes of the first row, the symbol $b$ in all boxes of the second row, $c$ in all boxes of the third one, and so~on.
\item Attach the boxes with $a$ to the second Young diagram in all possible ways, subject to the rules that no two $a$'s appear in the same column and that the resulting arrangement of boxes is still a Young diagram. Repeat this process with the $b$'s, the $c$'s, and so~on.
\item For each Young diagram obtained in step two, read the added symbols in the first row from right to left, then those in the second row in the same order, and so on. The resulting sequence of symbols, e.g., $abaabc\dots$, must be a lattice permutation; namely, to the left of any point in the sequence there are no fewer $a$'s than $b$'s, no fewer $b$'s than $c$'s, and so on. Discard all diagrams that do not comply with this rule.
\end{enumerate}
The Young diagrams $\lambda''$ that result from this procedure specify the irreps on the right-hand side of Eqs.~(\ref{reduction1}) and~(\ref{reduction2}). The same diagram can appear a number $M$ of times, in which case $\lambda''$ has multiplicity ${\rm dim}(\openone^{(\lambda'')})=M$.

\subsection{Particularities of quantum clustering}

Since the density operators [cf.\ Eq.~\eqref{app:rho_ns}] and the POVM elements [cf.\ Eq.~\eqref{app:povmelements}] associated to each possible clustering emerge from the joint action of a permutation $\sigma\in S_N$ and a group average over ${\rm SU}(d)$, it is most convenient to work in the Schur basis, where the mathematical structure is much simpler. A further simplification, specific to quantum clustering of two types of states, is that the irreps that appear in the block-diagonal decomposition of the states (and, hence, of the POVM elements) have at most length 2, i.e., they are labeled by bipartitions $\lambda=(\lambda_1,\lambda_2)$ and correspond to Young diagrams of at most two rows. This is because the $\rho_{n,\sigma}$ arise from the tensor product of two \emph{completely symmetric} projectors, $\openone^{\rm sym}_n$ and $\openone^{\rm sym}_{\bar n}$, of~$n$ and~$\bar{n}$ systems [cf.\ Eq.~\eqref{app:rho_ns}]. These project onto the irrep $\lambda=(n,0)$ and $\lambda'=(\bar n,0)$ subspaces, respectively.
According to the reduction rules above, in the Schur basis the tensor product reduces as
\ytableausetup{mathmode,boxsize=1em}
\ytableausetup{centertableaux}
\begin{eqnarray}
&& \overbrace{ \begin{ytableau} \phantom{.} & \phantom{.} \end{ytableau} \cdots \begin{ytableau} \phantom{.} & \phantom{.} & \phantom{.} \end{ytableau} }^{\bar n}
\ \otimes \
\overbrace{ \begin{ytableau} a & a \end{ytableau} \cdots \begin{ytableau} a & a \end{ytableau}}^{n}\nonumber\\
&=& \overbrace{ \begin{ytableau} \phantom{.} & \phantom{.} & \phantom{.} \end{ytableau} \cdots \begin{ytableau} a & a & a \end{ytableau} }^{n+\bar n}
\ \oplus\
\overbrace{ \begin{ytableau} \phantom{.} & \phantom{.} & \phantom{.}\\ a \end{ytableau} \raisebox{.5em}{ $\cdots$\! \begin{ytableau} a & a \end{ytableau} }\!\! }^{n+\bar n-1} \label{reduction3}\\
&\oplus& \overbrace{ \begin{ytableau} \phantom{.} & \phantom{.} & \phantom{.}\\ a&a \end{ytableau} \raisebox{.5em}{ $\cdots$\! \begin{ytableau} a & a \end{ytableau} }\!\! }^{n+\bar n-2}
\ \oplus \cdots \oplus
\overbrace{ \begin{ytableau} \phantom{.} & \phantom{.} \\ a&a \end{ytableau} \raisebox{0em}{ $\cdots$\! \begin{ytableau} \phantom{.}\\ a \end{ytableau} } \raisebox{0.5em}{ \!\!$\cdots$\! \begin{ytableau} \phantom{.} \end{ytableau} } \!\! }^{\bar n}\ .\nonumber \\
&&\nonumber
\end{eqnarray}
This proves our statement. There is yet another simplification that emerges from Eq.~(\ref{reduction3}): each irrep appears only once in the reduction. That is, fixing the indices $n$, $\sigma$, and $\{\lambda\}$ uniquely defines a one-dimensional subspace. Thus, the projectors $\Omega^{n,\sigma}_{\{\lambda\}}$ are rank one.
We conclude by giving explicit expressions for the dimensions of the irreps of $S_N$ and ${\rm SU}(d)$, Eqs.~\eqref{mult_nu_general} and~\eqref{mult_s_general}, for partitions of the form $\lambda=(\lambda_1,\lambda_2)$. These expressions are used to derive Eq.~\eqref{app:ps}, and read
\begin{eqnarray}
\nu_\lambda &=& \frac{N!(\lambda_1-\lambda_2+1)}{(\lambda_1+1)!\,\lambda_2!}\,, \label{mult_nu} \\
\nonumber\\
s_\lambda &=& \frac{\lambda_1-\lambda_2+1}{\lambda_1+1} \binom{\lambda_1+d-1}{d-1} \binom{\lambda_2+d-2}{d-2} \,. \label{mult_s}
\end{eqnarray}
One can check that Eqs.~(\ref{mult_nu}) and~(\ref{mult_s}) are consistent with Eq.~(\ref{reduction3}) by showing that the sum of the dimensions of the irreps on the right-hand side agrees with the product of the two on the left-hand side, namely, by checking that
\begin{align}
s_{(\bar n,0)}s_{(n,0)}&=\sum_{i=0}^n s_{(n+\bar n-i,i)}\,,\label{check1}\\
\nu^{S_{\bar n}}_{(\bar n,0)}\nu^{S_n}_{(n,0)}\binom{n+\bar n}{n}&=\sum_{i=0}^n \nu_{(n+\bar n-i,i)}\,,\label{check2}
\end{align}
where the superscripts remind us that the dimensions on the left-hand side refer to irreps of $S_{\bar n}$ and $S_{n}$. One obviously obtains $\nu^{S_{\bar n}}_{(\bar n,0)}=\nu^{S_n}_{(n,0)}=1$, since these are the trivial representations of either group. The binomial in~Eq.~(\ref{check2}) arises from the definition of the outer product representation in~Eq.~(\ref{reduction2}), whereby the action of $S_{n+\bar n}$ is defined on basis vectors of the form $\bar v_{i_1i_2\dots i_{\bar n}}\otimes v_{i_{\bar n+1}i_{\bar n+2}\dots i_{\bar n+n}}$, with $\bar v_{i_1i_2\dots i_{\bar n}}\in H_{\{\lambda\}}^{S_{\bar n}}$, $v_{i_1i_2\dots i_{n}}\in H_{\{\lambda'\}}^{S_{n}}$. There are, naturally, $\binom{\bar n+n}{n}$ ways of allocating the $\bar n+n$ indices in this expression.
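Both identities are straightforward to verify numerically. The following script (ours, for illustration) checks Eqs.~(\ref{check1}) and~(\ref{check2}) for the sample values $d=4$, $n=3$, $\bar n=5$, using the two-row dimension formulas:

```python
from math import comb, factorial

def nu2(l1, l2):
    """Eq. (mult_nu): dimension of the S_N irrep for a two-row partition."""
    return factorial(l1 + l2) * (l1 - l2 + 1) // (factorial(l1 + 1) * factorial(l2))

def s2(l1, l2, d):
    """Eq. (mult_s): dimension of the SU(d) irrep for a two-row partition."""
    return (l1 - l2 + 1) * comb(l1 + d - 1, d - 1) * comb(l2 + d - 2, d - 2) // (l1 + 1)

d, n, nbar = 4, 3, 5

# Eq. (check1): product of symmetric-irrep dimensions vs. sum over the series
lhs_s = s2(nbar, 0, d) * s2(n, 0, d)
rhs_s = sum(s2(n + nbar - i, i, d) for i in range(n + 1))
print(lhs_s == rhs_s)    # True

# Eq. (check2): the trivial irreps have dimension 1, so the LHS is a binomial
lhs_nu = comb(n + nbar, n)
rhs_nu = sum(nu2(n + nbar - i, i) for i in range(n + 1))
print(lhs_nu == rhs_nu)  # True
```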
\section{Asymptotics of $P_{\rm s}$}\label{app:asymptotics}

We next wish to address the asymptotic behavior of the success probability as the length~$N$ of the data string becomes large. Different behaviors will be derived depending on how the local dimension $d$ scales with $N$. In the large-$N$ limit it suffices to consider even values of~$N$, which slightly simplifies the derivation of the asymptotic expressions. The success probability in Eq.~(\ref{app:ps}) for $N=2m$, $m\in{\mathbb N}$, can be written as (just define a new index as $j=m-i$)
\begin{equation}
\kern-.3em P_{\rm s}\!=\!{d\!-\!1\over 2^{2m-1}}\sum_{j=0}^m {(2j+1)^2\over (m\!+\!1\!+\!j)^2(m\!+\!d\!-\!1\!-\!j)}\!\begin{pmatrix}\!2m\\ m\!+\!j\!\end{pmatrix}\,.
\label{P asymp}
\end{equation}
For large $m$, we write $j=m x$ and use
\begin{equation}
{1\over 2^{2m-1}}\begin{pmatrix}2m\\ m+j\end{pmatrix}\sim {2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}\,.
\label{BinomialGauss}
\end{equation}
We start by assuming that $d$ scales more slowly than $N$, e.g., $d\sim N^{\gamma}$ with $0\le\gamma<1$. In this situation, we can neglect~$d$ in the denominator of Eq.~(\ref{P asymp}). Neglecting also other subleading terms in inverse powers of $m$ and using the Euler--Maclaurin formula, we have
\begin{equation}
P_{\rm s}\sim (d-1) \int_{0}^\infty dx\; {4x^2\over (1+x)^2(1-x)} \;{2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}} \,,
\end{equation}
which we can further approximate by substituting $0$ for~$x$ in the denominator, as the Gaussian factor peaks at $x=0$ as $m$ becomes larger, so
\begin{align}
P_{\rm s}&\sim4(d-1)\! \int_{0}^\infty \!dx\; x \;{2x\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}\nonumber\\
&=-4(d-1) \int_{0}^\infty dx\; x \;{d\over dx} {{\rm e}^{-m x^2}\over m \sqrt{m\pi}}\,.
\end{align}
We integrate by parts to obtain
\begin{equation}
P_{\rm s}\sim {2(d-1)\over m}\int_{0}^\infty dx\; {2\,{\rm e}^{-m x^2}\over \sqrt{m\pi}}={2(d-1)\over m^2} \,.
\end{equation}
Hence, provided that $d$ scales more slowly than $N$, the probability of success vanishes asymptotically as~$N^{-2}$; more precisely, as
\begin{equation}
P_{\rm s}\sim {8(d-1)\over N^2} \,.
\label{gamma < 1}
\end{equation}
Let us next assume that $d$ scales faster than $N$, e.g., as~$d\sim N^{\gamma}$ with $\gamma>1$. In this case, $d$ is the leading contribution to the second factor in the denominator of Eq.~(\ref{P asymp}). Accordingly, we have
\begin{align}
P_{\rm s}&\sim (d-1)m\int_0^\infty dx\, {4x^2\over (1+x)^2 d}\,{2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}\nonumber\\
&\sim 4m \int_0^\infty dx\, x\, {2 x\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}={2\over m} \,,
\end{align}
and the asymptotic expression becomes
\begin{equation}
P_{\rm s}\sim {4\over N} \,,
\label{gamma>1}
\end{equation}
independently of $d$. Finally, let us assume that $d$ scales exactly as $N$ and write $d=s N$, $s>0$. The success probability can be cast as
\begin{equation}
P_{\rm s}\!\sim \!(d\!-\!1)\! \int_{0}^\infty \! dx\; {4x^2\over (1+x)^2(1+2s-x)} \;{2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}} \,.
\end{equation}
Proceeding as above, we obtain
\begin{equation}
P_{\rm s}\sim {2(d-1)\over (2s+1)m^2} \,.
\end{equation}
Thus,
\begin{equation}
P_{\rm s}\sim {8s\over (2s+1)N} \,.
\label{gamma=1}
\end{equation}
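These scaling regimes are easy to probe numerically. The short script below (an illustrative check of ours, not part of the derivation) evaluates the exact sum of Eq.~(\ref{P asymp}) in log space, to avoid overflow of the binomial, and compares it with the fixed-$d$ asymptotics of Eq.~(\ref{gamma < 1}):

```python
from math import lgamma, log, exp

def Ps_exact(N, d):
    """Exact success probability, Eq. (P asymp), for even N = 2m."""
    m = N // 2
    total = 0.0
    for j in range(m + 1):
        # binom(2m, m+j) / 2**(2m-1), computed via log-Gamma to avoid overflow
        log_binom = (lgamma(2*m + 1) - lgamma(m + j + 1) - lgamma(m - j + 1)
                     - (2*m - 1) * log(2.0))
        total += (2*j + 1)**2 / ((m + 1 + j)**2 * (m + d - 1 - j)) * exp(log_binom)
    return (d - 1) * total

N, d = 2000, 5                       # d fixed, i.e., the gamma < 1 regime
approx = 8 * (d - 1) / N**2          # Eq. (gamma < 1)
ratio = Ps_exact(N, d) / approx
print(0.8 < ratio < 1.2)             # True: agreement to leading order
```

The subleading corrections are of relative order $1/\sqrt{m}$, so the generous tolerance above only tests the leading behavior.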
The three expressions, Eqs.~(\ref{gamma < 1}), (\ref{gamma>1}) and~(\ref{gamma=1}), can be combined into a single one as
\begin{equation}
P_{\rm s} \sim {8(d-1)\over\left(2d+N\right) N}\,.
\end{equation}

\section{Optimal POVM for general cost functions}\label{app:generalcosts}

This appendix deals with the optimization of quantum clustering for other cost functions. We introduce a sufficient condition under which the type of POVM we used to maximize the success probability (Section~\ref{app:optimality}) is also optimal for a given generic cost function. We conjecture that this condition holds under reasonable assumptions, and we discuss numerical results for the Hamming distance, the trace distance, and the infidelity.

Recall that Eq.~\eqref{app:povmelements} together with Eq.~\eqref{povm} define the optimal POVM for a generic cost function that preserves covariance under $S_N$. However, this form is implicit and thus not very practical. Particularizing to the success probability, we managed to specify the function~$n(\lambda)=\lambda_2$ [cf.\ Eq.~\eqref{nl}] and the operators $\Xi_{\{\lambda\}}^n = \Omega_{\{\lambda\}}^{n} \delta_{n,\lambda_2}$. In summary, the POVM was specified solely in terms of the effective states $\rho_{n,\sigma}$ (hypotheses). Here we conjecture that the choice \mbox{$\Xi_{\{\lambda\}}^n=\Omega_{\{\lambda\}}^{n} \delta_{n,n(\lambda)}$} is still optimal for a large class of cost functions~$f({\bf x},{\bf x}')$, albeit with varying guessing rules $n(\lambda)$. If this holds, given~$f({\bf x},{\bf x}')$, one only has to compute $n(\lambda)={\rm argmin}_n\, \vartheta^n_{\lambda,1}$ to obtain the optimal POVM. The minimum average cost can then be computed via Eq.~\eqref{opt_av_cost}.
We now formulate this conjecture precisely as a testable mathematical condition. For any cost function (distance) such that $f({\bf x},{\bf x}')\ge0$ and $f({\bf x},{\bf x}') = 0$ iff ${\bf x}={\bf x}'$, we can always find some constant $t>0$ such that
\begin{equation}
t\,f({\bf x},{\bf x}')\geq \bar{\delta}_{{\bf x},{\bf x}'}\equiv 1-\delta_{{\bf x},{\bf x}'},\quad \forall{\bf x},{\bf x}'.
\label{minimal cost}
\end{equation}
We can then rescale the cost function, $f\mapsto t^{-1}f$, and assume with no loss of generality that $f({\bf x},{\bf x}')\ge\bar\delta_{{\bf x},{\bf x}'}$. We have
\begin{equation}
W_{\bf x} = \bar W_{\bf x} + \Delta_{\bf x}\,,
\end{equation}
where we have used the definition of $W_{\bf x}$ after Eq.~(\ref{Holevo1}) and similarly defined $\bar W_{\bf x}$ for the minimal cost~$\bar\delta_{{\bf x},{\bf x}'}$. As in Section~\ref{app:optimality}, it suffices to consider ${\bf x}=(n,e)$. Then,
\begin{equation}
\Delta_{\bf x} = \sum_{{\bf x}'} \eta_{{\bf x}'}[f({\bf x},{\bf x}')-\bar{\delta}_{{\bf x},{\bf x}'}]\, \rho_{{\bf x}'} \geq 0.
\end{equation}
Using the same notation as in Eq.~(\ref{W_block}), this is equivalent~to
\begin{equation}
\omega_{\{\lambda\}}^n - \bar\omega_{\{\lambda\}}^n \geq 0 \,.
\end{equation}
We now recall the meaning of Eqs.~\eqref{Holevo1b} and~\eqref{Holevo2b}: the operators $\Xi_{\{\lambda\}}^n$ must be projectors onto the eigenspace of minimal eigenvalue of $\omega_{\{\lambda\}}^n$.
Then, according to Eq.~\eqref{povm}, the choice $\Xi_{\{\lambda\}}^n=\Omega_{\{\lambda\}}^n \delta_{n,n(\lambda)}$ is also optimal for arbitrary cost functions if it holds that
\begin{equation}\label{conjecture}
{\rm supp}\left(\Omega_{\{\lambda\}}^n\right) = V_1\left(\bar\omega_{\{\lambda\}}^n\right) \overset{?}{\subset} V_1\left(\omega_{\{\lambda\}}^n\right),
\end{equation}
where $V_1(X)$ is the eigenspace of minimal eigenvalue of~$X$, and the equality follows from Eq.~\eqref{Wme}. Our conjecture is that Eq.~\eqref{conjecture} holds true for the class of ``reasonable'' cost functions considered in this paper, namely, those that are nonnegative, covariant, and satisfy the distance property stated before Eq.~(\ref{minimal cost}). We checked its validity for problems of size up to $N=8$, local dimension $d=2$, and uniform prior probabilities for the following cost functions: the Hamming distance $h({\bf x},{\bf x}') = \min\{|{\bf x}-{\bf x}'|,|{\bf x}-{\bar {\bf x}}'|\}$ ($x_i=0,1$), the trace distance $T({\bf x},{\bf x}')=\norm{\rho_{\bf x}-\rho_{{\bf x}'}}_1$, and the infidelity $I({\bf x},{\bf x}')=1-{\rm tr}^2\big[(\sqrt{\rho_{\bf x}}\, \rho_{{\bf x}'} \sqrt{\rho_{\bf x}})^{1/2}\big]$.

The above examples induce a much richer structure in the problem at hand. To illustrate this added complexity, in Fig.~\ref{fig:avocado} we show a heat map of the Hamming distances $h({\bf x},{\bf x}')$ between all pairs of clusterings for $N=8$. The figure shows that the largest values of $h({\bf x},{\bf x}')$ can occur for two clusterings with equal cluster size $n$, and that~$h({\bf x},{\bf x}')$ depends strongly on the pair of permutations $\sigma,\sigma'$.
As a result, the guessing rule $n(\lambda)$ is completely different from the one that maximizes the probability of success~$P_{\rm s}$. In particular, irreps~$(\lambda)$ are no longer in one-to-one correspondence with optimal guesses for~$n$. In Table~\ref{tab:nl} we show the values of~$n(\lambda)$ for our four cost functions and $N=4,\ldots,8$. In contrast to the case of the success probability (the cost function~$\bar\delta_{{\bf x},{\bf x}'}$), we note that in some cases it is actually optimal to map several irreps to the same guess, while never guessing certain cluster sizes.

\begin{figure}[htbp]
\centering
\includegraphics[scale=.47]{Hamming_avocadoN8.pdf}
\caption{Heat map of the Hamming distances $h({\bf x},{\bf x}')$ between clusterings for $N=8$. The clusterings are grouped by the size of the smallest cluster, $n=0,1,2,3,4$. Each group contains all nontrivial permutations $\sigma$ for a given $n$. A brighter color means a smaller Hamming distance.}\label{fig:avocado}
\end{figure}

\begin{table*}[htbp]
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\cline{2-20}
& \multicolumn{3}{c|}{$N=4$} & \multicolumn{3}{c|}{$N=5$} & \multicolumn{4}{c|}{$N=6$} & \multicolumn{4}{c|}{$N=7$} & \multicolumn{5}{c|}{$N=8$} \\ \hline
\multicolumn{1}{|c|}{$\lambda$} & (4,0) & (3,1) & (2,2) & (5,0) & (4,1) & (3,2) & (6,0) & (5,1) & (4,2) & (3,3) & (7,0) & (6,1) & (5,2) & (4,3) & (8,0) & (7,1) & (6,2) & (5,3) & (4,4) \\ \hline
\multicolumn{1}{|c|}{$\bar{\delta}$} & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 3 & 0 & 1 & 2 & 3 & 0 & 1 & 2 & 3 & 4 \\ \hline
\multicolumn{1}{|c|}{$h$} & 0 & 2 & 2 & 0 & 2 & 2 & 0 & 3 & 2,3 & 3 & 0 & 3 & 3 & 3 & 0 & 4 & 4 & 4 & 4 \\ \hline
\multicolumn{1}{|c|}{$T$} & 1 & 1 & 2 & 1 & 1 & 2 & 1 & 1 & 2 & 3 & 1 & 2 & 2 & 3 & 1 & 2 & 3 & 3 & 4 \\ \hline
\multicolumn{1}{|c|}{$I$} & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 3 & 0 & 1 & 3 & 3 & 0 & 1 & 3 & 4 & 4 \\ \hline
\end{tabular}
\caption{Values of $n(\lambda)$, i.e., of the optimal guess for the size of the smallest cluster, where $\lambda=(\lambda_1,\lambda_2)$ are the relevant irreps, for data sizes $N=4,5,6,7,8$ and cost functions $\bar{\delta}({\bf x},{\bf x}')$ (corresponding to the success probability), Hamming distance~$h({\bf x},{\bf x}')$, trace distance $T({\bf x},{\bf x}')$, and infidelity $I({\bf x},{\bf x}')$.}\label{tab:nl}
\end{table*}

Performing the Schur transform is computationally inefficient on a classical computer\footnote{In contrast, as mentioned in the main text, there exist efficient quantum circuits that implement the Schur transform on a quantum computer. A circuit based on the Clebsch--Gordan transform achieves polynomial time in $N$ and $d$~\cite{Harrow2005}. Recently, an alternative method based on the representation theory of the symmetric group was shown to reduce the dimension scaling to ${\rm poly}(\log d)$~\cite{Krovi2018}.}, which sets a limit on the size of the data one can test---in our case it is~$N=8$. However, it is worth mentioning that this difficulty might be overcome. The fundamental objects needed for testing Eq.~\eqref{conjecture} are the operators $\Omega_{\{\lambda\}}^n$. Their computation would, in principle, not require the full Schur transform, as they can be expressed in terms of generalized Racah coefficients, which give a direct relation between Schur bases arising from different coupling schemes of the tensor product space. It is indeed possible to calculate generalized Racah coefficients directly without going through a Clebsch--Gordan transform~\cite{Gliske2005}, and should this method be implemented, clustering problems of larger sizes might be tested. However, an extensive numerical analysis was not the aim of this paper.
\section{Prior distributions}\label{app:prior}

In the interest of making the paper self-contained, in this appendix we include the derivation of some results about the prior distributions used in the paper.

Let ${\mathsf S}_d=\{p_s\ge 0 \,|\, \sum_{s=1}^d p_s=1\}$ denote the standard $(d\!-\!1)$-dimensional (probability) simplex. Every categorical distribution (CD) $P=\{p_s\}_{s=1}^d$ is a point in ${\mathsf S}_d$. The flat distribution of CDs is the volume element divided by the volume of~${\mathsf S}_d$, the latter denoted by $V_d$. Choosing coordinates $p_1,\dots,p_{d-1}$, the flat distribution is $\prod_{s=1}^{d-1} dp_s/V_d\equiv dP$. Let us compute the moments of the flat distribution; as a byproduct, we will obtain $V_d$. We have
\begin{align}
V_d\!\!\int_{{\mathsf S}_d}\!\!\! dP \prod_{s=1}^d \!p_s^{n_s} \!&=\!\!\int_0^1 \!\!dp_1\!\!\int_0^{1-p_1}\!\!\!dp_2\cdots\!\!\int_0^{1-\!\mbox{\tiny $\displaystyle\sum_{s=1}^{d-2}\!\!p_s$}}\!\!\!dp_{d-1} \!\!\prod_{s=1}^d \!p_s^{n_s} \nonumber\\
&= \frac{\prod_{s=1}^d n_s!}{\left(d-1+\sum_{s=1}^d n_s\right)!}
\label{int simplex}
\end{align}
[the calculation becomes straightforward by iterating the change of variables $p_r\mapsto x$, where $p_r=(1-\sum_{s=1}^{r-1}p_s)x$, $r=d-2,d-3,\dots,2,1$]. In particular, setting $n_s=0$ for all~$s$ in Eq.~(\ref{int simplex}), we obtain $V_d=1/(d-1)!$. Then
\begin{equation}
\int_{{\mathsf S}_d}\!\! dP \prod_{s=1}^d p_s^{n_s}=\frac{(d-1)!\prod_{s=1}^d n_s!}{\left(d-1+N\right)!},
\label{moments1}
\end{equation}
where $N=\sum_{s=1}^d n_s$.

Next, we provide a simple proof that any fixed von Neumann measurement on a uniform distribution of pure states in $(d,{\mathbb C})$ gives rise to CDs whose probability distribution is flat.
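As an aside, Eq.~(\ref{moments1}) is easy to check numerically. The following script (ours, purely illustrative) verifies the $d=2$ case, where the simplex is the segment $p\in[0,1]$ and the moment reduces to a Beta integral:

```python
from math import factorial

def moment_formula(ns):
    """Right-hand side of Eq. (moments1): moments of the flat prior on the simplex."""
    d, N = len(ns), sum(ns)
    num = factorial(d - 1)
    for n in ns:
        num *= factorial(n)
    return num / factorial(d - 1 + N)

# d = 2, (n_1, n_2) = (2, 1): integrate p^2 (1 - p) on [0, 1] by the midpoint rule
steps = 100_000
riemann = sum(((k + 0.5) / steps)**2 * (1 - (k + 0.5) / steps)
              for k in range(steps)) / steps

print(abs(riemann - moment_formula([2, 1])) < 1e-9)  # True: both equal 1/12
```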
As a result, the classical and semiclassical strategies discussed in the main text have the same success probability.

Take $|\phi\rangle\in (d,{\mathbb C})$ and let $\{|s\rangle\}_{s=1}^d$ be an orthonormal basis of $(d,{\mathbb C})$. When the corresponding von Neumann measurement is performed, the probability of obtaining outcome~$s$ is $p_s=|\langle s|\phi\rangle|^2$. Thus, any distribution of pure states induces a distribution of CDs $\{p_s=|\langle s|\phi\rangle|^2\}_{s=1}^d$ on ${\mathsf S}_d$. Let us compute the moments of the induced distribution, namely,
\begin{align}
\int\!\! d\phi \prod_{s=1}^d p_s^{n_s}\!&= \!\!\int\!\! d\phi \,{\rm tr}\!\left[\bigotimes_{s=1}^d\left(|s\rangle\langle s|\right)^{\otimes n_s}\!\left(|\phi\rangle\langle\phi|\right)^{\otimes N}\!\right]\nonumber\\
&= \frac{1}{D^{\rm sym}_N} {\rm tr}\!\left[\bigotimes_{s=1}^d\left(|s\rangle\langle s|\right)^{\otimes n_s}\!\openone^{\rm sym}_N\!\right],
\label{CalcMom}
\end{align}
where we recall that $D^{\rm sym}_N$ ($\openone^{\rm sym}_N$) is the dimension of (projector onto) the symmetric subspace of~$(d,{\mathbb C})^{\otimes N}$, and we have used Schur's lemma. A basis of the symmetric subspace is
\begin{equation}
|v_{\bf n}\rangle={1\over\sqrt{N!\prod_{s=1}^d n_s!}}\sum_{\sigma\in S_N}\!\!U_\sigma \bigotimes_{s=1}^d|s\rangle^{\otimes n_s},
\end{equation}
where ${\bf n}=(n_1,n_2,\dots, n_d)$. Note that there are $\binom{N+d-1}{d-1}$ different strings $\bf n$ (weak compositions of $N$ into $d$ parts), which agrees with $D^{\rm sym}_{N}=s_{(N,0)}$ [recall Eq.~(\ref{mult_s})], as it should.
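The statement being proved here can also be confirmed by direct sampling. The sketch below (ours, illustrative) draws Haar-random pure states in $(d,{\mathbb C})$ as normalized vectors of i.i.d.\ complex Gaussians, and compares a sample moment of the induced CDs with the flat-prior prediction of Eq.~(\ref{moments1}):

```python
import random

random.seed(1)
d, samples = 3, 100_000
acc = 0.0
for _ in range(samples):
    # Haar-random pure state: normalize a vector of i.i.d. complex Gaussians
    re = [random.gauss(0.0, 1.0) for _ in range(d)]
    im = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm2 = sum(a*a + b*b for a, b in zip(re, im))
    p = [(a*a + b*b) / norm2 for a, b in zip(re, im)]   # induced CD
    acc += p[0]**2 * p[1]                               # moment n = (2, 1, 0)
mc_moment = acc / samples

flat_prediction = 2 * 2 / 120   # (d-1)! 2! 1! 0! / (d-1+3)! = 1/30
print(abs(mc_moment - flat_prediction) < 0.002)   # True within sampling error
```

The tolerance is roughly twenty standard errors of the Monte Carlo estimate, so the check is robust to the choice of seed.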
Since $\openone^{\rm sym}_N=\sum_{\bf n}|v_{\bf n}\rangle\langle v_{\bf n}|$, we can easily compute the trace in Eq.~(\ref{CalcMom}) to obtain
\begin{equation}
\int\!\! d\phi \prod_{s=1}^d p_s^{n_s}\!=\!\frac{\prod_{s=1}^d n_s!}{N!\, D^{\rm sym}_N}=\frac{(d-1)!\prod_{s=1}^d n_s!}{(N+d-1)!}.
\label{moments2}
\end{equation}
This agrees with Eq.~(\ref{moments1}): all the moments of the distribution induced by the uniform distribution of pure states coincide with those of the flat distribution of CDs on~${\mathsf S}_d$. Since moments uniquely determine a distribution with compact support~\cite{Akhiezer1965} (and ${\mathsf S}_d$ is compact), we conclude that the two distributions are identical.

As a byproduct, we can compute the marginal distribution $\mu(c^2)$, where $c$ is the overlap of $|\phi\rangle$ with a fixed state $|\psi\rangle$. Since we can always find a basis whose first element is $|\psi\rangle$, we have $c=|\langle 1|\phi\rangle|$. Because of the results above, the marginal distribution is given by
\begin{align}
\mu(c^2)&= \!\!\int_0^{1-p_1}\!\!dp_2\cdots\!\!\int_0^{1-\!\mbox{\tiny $\displaystyle\sum_{s=1}^{d-2}\!\!p_s$}}\!dp_{d-1} \Bigg|_{p_1=c^2} \nonumber\\
&=(d\!-\!1)(1\!-c^2)^{d-2},
\label{marginal}
\end{align}
in agreement with Ref.~\cite{Alonso2016}.

\section{Optimal clustering protocol for unknown classical states}\label{app:classic}

In this appendix we provide details on the derivation of the optimal protocol for a classical clustering problem analogous to the quantum problem discussed in the main text. The results here also apply to quantum systems when the measurement performed on each of them is restricted to be local, projective, $d$-dimensional, and fixed. We call protocols of this type semiclassical.
Here, we envision a device that takes input strings of $N$ data points ${\bf s}=(s_{1}s_2\cdots s_{N})$, with the promise that each~$s_{i}$ is a symbol from an alphabet of $d$ symbols, say the set $\{1,2,\dots,d\}$, and has been drawn either from roulette $P$ or from roulette $Q$, with corresponding categorical probability distributions $P=\{p_{s}\}_{s=1}^{d}$ and $Q=\{q_{s}\}_{s=1}^{d}$. To simplify the notation, we use the same symbols for the roulettes and their corresponding probability distributions, and for the stochastic variables and their possible outcomes. Also, the range of values of the index~$s$ will always be understood to be $\{1,2,\dots, d\}$, unless specified otherwise. The device's task is to group the data points into two clusters so that all points in either cluster have a common underlying probability distribution (either $P$ or $Q$). We wish the machine to be universal, meaning that it shall operate without knowledge of the distributions~$P$ and~$Q$. Accordingly, we choose as figure of merit the probability of correctly classifying \emph{all} data points, averaged over every possible sequence of roulettes ${\bf x}=(x_1x_2\cdots x_N)$, $x_i\in\{P,Q\}$, and over every possible distribution $P$ and $Q$. The latter are assumed to be uniformly distributed over the common probability simplex ${\mathsf S}_d$ on which they are defined. Formally, this success probability is
\begin{eqnarray}
P_{\rm s}^{\rm cl}&=&\int_{{\mathsf S}_d}\!\! dP dQ \sum_{{\bf x},{\bf s}} {\rm Pr}\left(\hat{\bf x}\in\{{\bf x},\bar{\bf x}\},{\bf s},{\bf x};P,Q\right)\nonumber\\
&=& 2 \int_{{\mathsf S}_d}\!\!
dP dQ \sum_{{\bf x},{\bf s}} \delta_{\hat{\bf x},{\bf x}}\,{\rm Pr}\left({\bf s},{\bf x};P,Q\right),
\end{eqnarray}
where $\hat{\bf x}$ is the guess of ${\bf x}$ emitted by the machine which, by the universality requirement, can {\em only} depend on the data string ${\bf s}$. The sums are carried out over all $d^{N}$ possible strings ${\bf s}$ and all $2^{N}$ sequences of roulettes ${\bf x}$. The factor of two in the second equality takes into account that~$P$ and~$Q$ are unknown, hence identifying the complementary sequence $\bar{\bf x}$ leads to the same clustering. By emitting~$\hat{\bf x}$, the device suggests a classification of the $N$ data points~$s_i$ into two clusters. In the above equation we have used the notation of Appendix~\ref{app:prior} for the integral over the probability simplex.

An expression for the optimal success probability can be obtained from the trivial upper bound
\begin{eqnarray}
P_{\rm s}^{\rm cl}&=& 2\sum_{{\bf s}} \int dP dQ \;{\rm Pr}\left({\bf s},\hat{\bf x};P,Q\right)\nonumber\\
&\leq& 2 \sum_{{\bf s}} \max_{{\bf x}} \int dP dQ \; {\rm Pr}\left({\bf s},{\bf x};P,Q\right) \nonumber\\
&=& 2\sum_{{\bf s}} \max_{{\bf x}} \; {\rm Pr}\left({\bf s},{\bf x}\right),
\label{eq:Psmax}
\end{eqnarray}
where ${\rm Pr}\left({\bf s},{\bf x}\right)$ is the joint marginal distribution of ${\bf s}$ and~${\bf x}$. This bound is attained by the guessing rule
\begin{equation}
\hat{\bf x}=\underset{{\bf x}}{\operatorname{argmax}} \;{\rm Pr}\left({\bf s},{\bf x}\right).
\end{equation}
For two specific distributions $P$ and $Q$, the probability that a given roulette sequence ${\bf x}$ gives rise to a particular data string ${\bf s}$ is ${\rm Pr}({\bf s}|{\bf x};P,Q)=\prod_{s}p_s^{n_s}q_{s}^{m_{s}}$, where $n_{s}$ ($m_{s}$) is the number of occurrences of symbol~$s$ in~${\bf s}$ [i.e., how many $s_i\in{\bf s}$ satisfy $s_i=s$] arising from roulettes of type $P$ ($Q$). For later convenience, we define \mbox{$M_{s}=n_{s}+m_{s}$}, which gives the total number of such occurrences. Note that $\{M_s\}$ is independent of ${\bf x}$, whereas~$\{n_s\}$ and $\{m_s\}$ are not. Performing the integral over $P$ and~$Q$ we have
\begin{eqnarray}
{\rm Pr}({\bf s},{\bf x}) &=& \frac{{\rm Pr}({\bf s}|{\bf x})}{2^{N}} \nonumber\\
&=&\frac{1}{2^{N}}\int dP dQ\; {\rm Pr}({\bf s}|{\bf x};P,Q)\nonumber\\
&=& \frac{2^{-N} d_{\flat}!^{2} \prod_{s}n_{s}!\,m_{s}!} {(d_{\flat}+\sum_{s}m_{s})!\,(d_{\flat}+\sum_{s}n_{s})!} \,,
\label{eq:pxbr}
\end{eqnarray}
where we have used Eq.~(\ref{moments1}) and in the first equality we have assumed that the two types of roulette, $P$ and~$Q$, are equally probable, hence each possible sequence ${\bf x}$ occurs with the same prior probability $2^{-N}$. We have also introduced the notation $d_{\flat}\equiv d-1$ to shorten the expressions throughout this appendix. Note that all the dependence on ${\bf x}$ is through the occurrence numbers $m_s$ and $n_s$. According to \eqref{eq:Psmax}, for each string ${\bf s}$ we need to maximize the joint probability ${\rm Pr}({\bf s},{\bf x})$ in \eqref{eq:pxbr} over all possible sequences of roulettes ${\bf x}$. We first note that, given a total of $M_s$ occurrences of a symbol $s$ in ${\bf s}$, ${\rm Pr}({\bf s},{\bf x})$ is maximized by a sequence ${\bf x}$ whereby all these occurrences come from the same type of roulette.
In other words, by a sequence ${\bf x}$ such that either $m_s=M_s$ and~$n_s=0$, or else $m_s=0$ and~$n_s=M_s$. In order to prove the above claim, we single out a particular symbol $r$ that occurs a total number of times $\mu=M_r$ in~${\bf s}$. We focus on the dependence of ${\rm Pr}({\bf s},{\bf x})$ on the occurrence number $t=m_r$ (so $n_r=\mu-t$) by writing
\begin{eqnarray}
{\rm Pr}({\bf s},{\bf x})&=& \frac{a \,(\mu-t)!\, t!}{(b+t)!\,(c-t)!} \equiv f(t) ,
\end{eqnarray}
where the coefficients $a$, $b$, and $c$ are defined as
\begin{eqnarray}
a&=&\frac{d_{\flat}!^{2}}{2^{N}}\prod_{s\neq r} n_{s}!\,m_{s}!\,,\\
b&=&d_{\flat}+\sum_{s\neq r} m_s\,,\\
c&=&d_{\flat}+\sum_{s} n_s+m_r=d_{\flat}+N-\sum_{s\neq r} m_s\,,
\label{the C}
\end{eqnarray}
and are independent of $t$. The function $f(t)$ can be extended to $t\in{\mathbb R}$ using the Euler gamma function and the relation $\Gamma(t+1)=t!$. This enables us to compute the second derivative of $f(t)$ and show that it is a convex function of $t$ in the interval $[0,\mu]$. Indeed,
\begin{align}\label{harmonic}
\kern-1em{f''(t)\over f(t)}&=\!\left[H_1(c\!-\!t)\!-\!H_1(\mu\!-\!t) \!-\!H_1(b\!+\!t)\!+\! H_1(t) \right]^2\nonumber\\
&+\! \phantom{\left[\right.}H_2(c\!-\!t)\!-\!H_2(\mu\!-\!t) \!+\! H_2(b\!+\!t)\!-\!H_2(t) \nonumber\\[.5em]
&\geq 0\,,
\end{align}
where $H_n(t)$ are the generalized harmonic numbers. For positive integer values of $t$ they are $H_n(t)=\sum_{j=1}^{t} j^{-n}$. The relation $H_n(t)=\zeta(n)-\sum_{j=1}^\infty (t+j)^{-n}$, where $\zeta(n)=\sum_{j=1}^\infty j^{-n}$ is the Riemann zeta function, allows us to extend the domain of $H_n(t)$ to real (and complex) values of $t$. The positivity of $f''(t)$ follows from the positivity of both~$f(t)$ and the two differences of harmonic numbers in the second line of Eq.~(\ref{harmonic}).
Note that $H_2(x)$ is an increasing function of $x$. Since, obviously, $b+t>t$, and $c-t>\sum_s n_s=\sum_s(M_s-m_s)\ge \mu-t$ [as follows from the definition of $c$ in~Eq.~(\ref{the C})], we see that the two differences are positive. The convexity of $f(t)$ for $t\in[0,\mu]$ implies that the maximum of $f(t)$ is attained either at $t=0$ or at $t=\mu$. This holds for every value of $M_r$ and every symbol $r$ in the data string, so our claim holds. In summary, the optimal guessing rule must assign the same type of roulette to all the $M_s$ occurrences of a symbol $s$, i.e., it must group all data points that show the same symbol in the same cluster. This is in full agreement with our own intuition. The description of the optimal protocol that runs on our device is not yet complete. We need to specify how to reduce the current number of clusters down to two, since at this point we may (and typically will) have up to $d$ clusters; as many as there are different symbols. The reduction, or merging, of the $d$ clusters can only be based on their relative sizes, as nothing is known about the underlying probability distributions. This is quite clear: Let ${\mathsf P}$ be the subset of symbols (i.e., a subset of $\{1,2,\dots,d\}$) for which $n_s=M_s$, and let ${\mathsf Q}$ be its complement, i.e.,~${\mathsf Q}$ contains the symbols for which $m_s=M_s$, and ${\mathsf P}=\bar{\mathsf Q}$. The claim we just proved tells us that in order to find the maximum of ${\rm Pr}({\bf s},{\bf x})$ it is enough to consider sequences of roulettes ${\bf x}$ that comply with the above conditions on the occurrence numbers.\footnote{For example, suppose $d=3$ and $N=12$.
Assuming that ${\bf s}=(112321223112)$ is the string of data, the sequence of roulettes ${\bf x}$ in the table
$$
\begin{tabular}{c | c c c c c c c c c c c c}
$i$ &1&2&3&4&5&6&7&8&9&10&11&12 \\ [0.5ex]
\hline
${\bf s}$ &1&1&2&3&2&1&2&2&3&1&1&2 \\ [0.5ex]
\hline
${\bf x}$ &$P$&$P$&$Q$&$Q$&$Q$&$P$&$Q$&$Q$&$Q$&$P$&$P$&$Q$
\end{tabular}
$$
satisfies the conditions $m_s=M_s$ or $n_s=M_s$, since $n_1=M_1=5$, $m_2=M_2=5$, and $m_3=M_3=2$. In this case, ${\mathsf P}=\{1\}$ and ${\mathsf Q}=\{2,3\}$. The suggested clustering is $\{(1,2,6,10,11),(3,4,5,7,8,9,12)\}$.}
For those, the joint probability~${\rm Pr}({\bf s},{\bf x})$ can be written as
\begin{equation}
{\rm Pr}({\bf s},{\bf x})= \frac{a}{ \big(d_{\flat}+\sum_{s\in{\mathsf Q}} M_s \big)!\,\big(d_{\flat}+\sum_{s\in{\mathsf P}} M_s\big)!}\,,
\label{eq:maxpsx-1}
\end{equation}
where $a$ now simplifies to $2^{-N} d_{\flat}!^{2}\raisebox{.15em}{\small${\prod}_{s}$} M_s!$. Thus, it just remains to find the partition $\{{\mathsf P},{\mathsf Q}\}$ that maximizes this expression. It can also be written as
\begin{equation}
{\rm Pr}({\bf s},{\bf x})= \frac{a}{(d_{\flat}+x)!\,(d_{\flat}+N-x)!}\,,
\label{eq:maxpsx}
\end{equation}
where we have defined $x=\raisebox{.15em}{\small$\sum_{s\in {\mathsf Q}}$} M_s$. The maximum of this function is located at $x=N/2$, and one can easily check that it is monotonic on either side of its peak. Note that, depending on the values of the occurrence numbers $\{M_{s}\}$, the optimal value $x=N/2$ may not be attained. In such cases, the maximum of ${\rm Pr}({\bf s},{\bf x})$ is located at $x^*=N/2\pm\Delta$, where $\Delta$ is the bias
\begin{equation}
\Delta=\frac{1}{2}\min_{\mathsf Q}\left|\sum_{s\in {\mathsf Q}}M_s-\sum_{s\in\bar{\mathsf Q}}M_s\right|\,.
\label{eq:bias}
\end{equation}
The subset $\mathsf Q$ that minimizes this expression determines the optimal clustering. In summary (and not very surprisingly), the optimal guessing rule consists in first partitioning the data ${\bf s}$ into up to $d$ groups according to the symbol of the data points and, second, merging those groups (without splitting them) into two clusters in such a way that their sizes are as similar as possible. We have stumbled upon the so-called \emph{partition problem}~\cite{Korf1998}, which is known to be weakly NP-complete. In particular, a large set of distinct occurrence counts $\{M_s\}$ rapidly hinders the efficiency of known algorithms, a situation likely to occur for large $d$. It follows that the optimal clustering protocol for the classical problem cannot be implemented efficiently in all instances of the problem. To obtain the maximum success probability $P_{\rm s}^{\rm cl}$, Eq.~(\ref{eq:Psmax}), we need to sum the maximum joint probability, given by \eqref{eq:maxpsx} with $x=x^*$, over all possible strings~${\bf s}$. Those with the same set of occurrence counts~$\{M_{s}\}$ give the same contribution. Moreover, all the dependence on~$\{M_s\}$ is through the bias $\Delta$. Therefore, if we define $\xi_\Delta$ to be the number of sets $\{M_s\}$ that give rise to a bias $\Delta$, then the corresponding number of data strings is $\xi_\Delta N!/\raisebox{.15em}{\small${\prod}_{s}$} M_s!$. We can thus write
\begin{equation}
P_{\rm s}^{\rm cl}=\sum_{\Delta} \frac{2^{1-N}\xi_\Delta\, d_{\flat}!^{2}\,N!}{\left(d_{\flat}\!+\!{N\over2}\!+\!\Delta \right)!\left(d_{\flat}\!+\!{N\over2}\!-\!\Delta\right)!}\,.
\label{eq:Pssum}
\end{equation}
This is as far as we can go, as no explicit formula for the combinatorial factor $\xi_\Delta$ is likely to exist in general.
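As a sanity check of ours (not part of the original derivation), the optimal success probability can be computed by brute force for small $N$ and $d$: enumerate all data strings and all roulette sequences, evaluate the joint probability above with exact rational arithmetic, and keep the best sequence for each string. The values reproduce the "unknown" entries of the comparison table later in this appendix, e.g., $7/12$ for $d=3$, $N=2$ and $11/30$ for $d=3$, $N=3$.

```python
from fractions import Fraction
from itertools import product
from math import factorial

def joint_prob(counts_n, counts_m, d, N):
    """Joint probability Pr(s, x): 2^{-N} d_flat!^2 prod_s n_s! m_s!
    divided by (d_flat + sum_s m_s)! (d_flat + sum_s n_s)!."""
    df = d - 1
    num = Fraction(factorial(df) ** 2)
    for n, m in zip(counts_n, counts_m):
        num *= factorial(n) * factorial(m)
    den = factorial(df + sum(counts_m)) * factorial(df + sum(counts_n))
    return num / den / 2 ** N

def optimal_success(d, N):
    """P_s^cl = 2 * sum over strings s of max over sequences x of Pr(s, x),
    by exhaustive enumeration (feasible only for tiny N and d)."""
    total = Fraction(0)
    for s in product(range(d), repeat=N):
        best = Fraction(0)
        for x in product('PQ', repeat=N):
            n = [0] * d  # occurrences of each symbol assigned to roulette P
            m = [0] * d  # occurrences of each symbol assigned to roulette Q
            for sym, rou in zip(s, x):
                (n if rou == 'P' else m)[sym] += 1
            best = max(best, joint_prob(n, m, d, N))
        total += best
    return 2 * total

print(optimal_success(3, 2))  # 7/12
print(optimal_success(3, 3))  # 11/30
```

The enumeration also confirms that the per-string maximum is always attained by a sequence that assigns all occurrences of a symbol to the same roulette.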
However, it is possible to work out the asymptotic expression of the maximum success probability for large data sizes $N$. We first note that a generic term in the sum~(\ref{eq:Pssum}) can be written as the factor $2^{2d_\flat+1}\xi_\Delta d_\flat!^2 N!/(2d_\flat+N)!$ times a binomial distribution that peaks at $\Delta=0$ for large $N$. Hence, the dominant contribution in this limit is
\begin{eqnarray}
P_{\rm s}^{\rm cl}&\sim &\xi_{0}\frac{2^{2d_\flat+1} d_{\flat}!^{2}N!}{(2d_{\flat}+N)!} \sim \xi_{0}\frac{2^{2d-1}(d-1)!^{2}}{N^{2d-2}} .
\label{eq:Psfopt2}
\end{eqnarray}
From the definition of $\xi_\Delta$, given above Eq.~(\ref{eq:Pssum}), and that of $\Delta$ in Eq.~(\ref{eq:bias}), we readily see that $\xi_{0}$ is the number of ordered partitions (i.e., the order matters) of~$N$ into~$d$ addends or parts\footnote{These ordered partitions are known as {\em weak compositions} of $N$ into $d$ parts in combinatorics, where {\em weak} means that some addends (or parts) can be zero; in contradistinction, the term {\em composition} is used when all the parts are strictly positive.} (the occurrence counts~$M_s$) such that a subset of these addends is an ordered partition of~$N/2$ as well. Young diagrams come in handy to compute~$\xi_0$. First, we draw pairs of diagrams, $[\lambda,\lambda']$, each of $N/2$ boxes and such that $\lambda\ge\lambda'$ (in lexicographical order; see Appendix~\ref{app:partitions}), and $l(\lambda)+l(\lambda')\equiv r+r'\le d$, i.e., the total number of rows should not exceed $d$. Next, we fill the boxes with symbols $s_i$ (representing possible data points) so that all the boxes in each row have the same symbol. We readily see that the number of different fillings gives us $\xi_0$. An example is provided in Fig.~\ref{fig:counting} for clarity.
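For small instances, this count can also be cross-checked by direct enumeration of the weak compositions. The following sketch (ours, not from the original text) counts the tuples $(M_1,\dots,M_d)$ of nonnegative integers summing to $N$ that contain a subset summing to $N/2$; for $N=8$ and $d=4$ it returns $73$, which matches the sum of the filling counts in the figure example ($6+24+12+12+6+12+1$).

```python
from itertools import product

def xi0(N, d):
    """Count weak compositions (M_1, ..., M_d) of N such that some
    subset of the parts sums to N/2 (the zero-bias count)."""
    assert N % 2 == 0
    count = 0
    for M in product(range(N + 1), repeat=d):
        if sum(M) != N:
            continue
        # subset-sum over the d parts (d is tiny, so brute force is fine)
        sums = {0}
        for part in M:
            sums |= {t + part for t in sums}
        if N // 2 in sums:
            count += 1
    return count

print(xi0(8, 4))  # 73
```

For $d=2$ the function returns $1$ for every even $N$ (only $M_1=M_2=N/2$ has zero bias), consistent with the $d=2$ discussion below.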
\begin{figure}[htbp]
\ytableausetup{mathmode,boxsize=.8em,aligntableaux=top}
\begin{gather*}
\scriptstyle\frac{4\cdot3}{2}
\begin{array}{l} \ydiagram{4}\\[-.3em] \ydiagram{4} \end{array}
\phantom{+}
\scriptstyle4\cdot3\cdot2
\begin{array}{l} \ydiagram{4}\\[-.3em] \ydiagram{3,1} \end{array}
\phantom{+}
\scriptstyle{4\cdot3\cdot2\over2}
\begin{array}{l} \ydiagram{4}\\[-.3em] \ydiagram{2,2} \end{array}
\phantom{+}\scriptstyle {4!\over2}
\begin{array}{l} \ydiagram{4}\\[-.3em] \ydiagram{2,1,1} \end{array}\nonumber\\
\scriptstyle\frac{4!}{2\cdot2}
\begin{array}{l} \ydiagram{3,1}\\ \ydiagram{3,1} \end{array}
\phantom{+}
\scriptstyle\frac{4!}{2}
\begin{array}{l} \ydiagram{3,1}\\ \ydiagram{2,2} \end{array}
\phantom{+}
\scriptstyle {4!\over4!}
\begin{array}{l} \ydiagram{2,2}\\ \ydiagram{2,2} \end{array}
\end{gather*}
\caption{Use of Young diagrams for computing $\xi_0$. In the example, $N=8$ and $d=4$. The fraction before each pair gives the number of different fillings and hints at how it has been computed.}
\label{fig:counting}
\end{figure}
Although this pictorial method eases the computation of $\xi_0$, it becomes impractical even for relatively small values of $N$. However, it becomes very useful again in the asymptotic limit, since the number of Young diagrams with at least two rows of equal size becomes negligibly small for large $N$.\footnote{Actually, the number of Young diagrams of $N$ boxes and a given length $r$ with unequal numbers of boxes in each row is equal to the number of Young diagrams of $N-r(r-1)/2$ boxes, i.e., it is equal to $P^{(r)}_{N-r(r-1)/2}$.
Using the results in Appendix~\ref{app:partitions}, we immediately see that for large $N$ one has $P^{(r)}_{N-r(r-1)/2}/P^{(r)}_N\sim 1$, which proves the statement.} The same conclusion applies to the whole pairs $[\lambda,\lambda']$, since, e.g., by reshuffling rows one could merge the two members into a single diagram of~$N$ boxes and length $r+r'$. Thus, we may assume that all pairs of diagrams with a given total length have unequal numbers of boxes in each row, which renders the counting of different fillings trivial: there are \mbox{$d!/(d-r-r')!$} ways of filling each pair of diagrams. Recalling that there is a one-to-one mapping between partitions and Young diagrams, we can use Eq.~(\ref{partAsym}) and write
\begin{align}
\xi_{0}&\sim\frac{1}{2}\sum_{r=1}^{d-1}\sum_{r'=1}^{d-r} P^{(r)}_{N\over2}P^{(r')}_{N\over2}\frac{d!}{(d-r-r')!}\nonumber\\
&\sim\frac{1}{2} \left(\frac{N}{2}\right)^{d-2}\sum_{r=1}^{d}\frac{r (d-r) d!}{r!^{2}(d-r)!^{2}}\nonumber\\
&\sim\frac{1}{2} \left(\frac{N}{2}\right)^{d-2} \frac{(2d-2)!}{(d-2)!\,(d-1)!^{2}}\,.
\end{align}
This result, together with \eqref{eq:Psfopt2}, leads us to the desired asymptotic expression for the optimal success probability:
\begin{equation}
P_{\rm s}^{\rm cl} \sim \left(\frac{2}{N}\right)^{d} \frac{(2d-2)!}{(d-2)!}\,.
\label{eq:PsfoptFinal}
\end{equation}
\blue{
\section{Optimal clustering protocol for known classical states}\label{app:classic_known}
In this appendix, we give a short discussion of clustering classical states under the assumption that the underlying probability distributions are known. In particular, we discuss two low-dimensional cases, $d=2,3$, and derive the asymptotic expression of the success probability of clustering for large data string length $N$ and arbitrary data dimension~$d$.
We stick to the notation introduced in Appendix~\ref{app:classic}. If the underlying probability distributions are known, a given data point $s$ is optimally assigned to the probability distribution for which $s$ is most likely. The probability that a data point shows symbol $s$ and is correctly assigned is thus $\max\{p_s,q_s\}/2$ (recall that the data is assumed to be drawn from either $P$ or $Q$ with equal prior probability). The average success probability of clustering over all possible strings of length~$N$ then reads
\begin{equation}
\label{KP-2}
\kern-0.4em P^{\rm cl}_{{\rm s},PQ}\! =\!\frac{1}{2^N}\!\!\left[\!\left(\sum_{s=1}^d\! \max\{p_s,q_s\}\!\!\right)^{\!\!N}\!\!\!\!+\! \left(\sum_{s=1}^d \!\min\{p_s,q_s\}\!\!\right)^{\!\!N}\right]\!\!,\!\!
\end{equation}
where the second term arises because assigning the wrong probability distribution to {\em all} data points in~${\bf s}$ also gives a correct clustering. In order to compare with our results for unknown classical states, we average the success probability over a uniform distribution of categorical probability distributions. This yields
\begin{equation}
\label{KP-3}
P^{\rm cl}_{\rm s}=\int_{{\mathsf S}_d}\!\! dP \int_{{\mathsf S}_d}\!\! dQ \,P^{\rm cl}_{{\rm s},PQ}\,,
\end{equation}
where the integration over the simplex ${\mathsf S}_d$, shared by $P$ and $Q$, is defined in Appendix~\ref{app:prior}. To perform the integral in Eq.~(\ref{KP-3}) we need to partition ${\mathsf S}_d\times{\mathsf S}_d$ into different regions according to whether \mbox{$p_s\le q_s$} or $p_s>q_s$ for the various symbols. By symmetry, the integral can only depend on the number $r$ of symbols for which $p_s\le q_s$ (not on the particular set of such symbols). Hence, $r=1,\dots,d-1$ labels the different types of integrals that we need to compute to evaluate~$P_{\rm s}^{\rm cl}$.
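For a concrete pair of known distributions, the closed-form average above can be checked against direct enumeration. A minimal sketch of ours follows; the distributions and string length are arbitrary example values.

```python
from itertools import product

def success_known(p, q, N):
    """Exact success probability of the per-symbol maximum-likelihood
    clustering rule for known distributions p and q: enumerate all
    roulette sequences x (uniform prior 2^-N) and all data strings s."""
    d = len(p)
    total = 0.0
    for x in product('PQ', repeat=N):
        for s in product(range(d), repeat=N):
            pr = 0.5 ** N
            for xi, si in zip(x, s):
                pr *= p[si] if xi == 'P' else q[si]
            # assign each point to the roulette under which it is most likely
            guess = tuple('P' if p[si] >= q[si] else 'Q' for si in s)
            anti = tuple('Q' if g == 'P' else 'P' for g in guess)
            # success: all points right, or all points wrong (same clustering)
            if guess == x or anti == x:
                total += pr
    return total

p, q = (0.7, 0.3), (0.2, 0.8)  # arbitrary example distributions
N = 3
closed = ((sum(map(max, zip(p, q)))) ** N
          + (sum(map(min, zip(p, q)))) ** N) / 2 ** N
print(success_known(p, q, N), closed)  # both approximately 0.4375
```

The enumeration and the closed form agree, illustrating that the only successful events are "all points correct" and "all points wrong".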
Notice that we have the additional symmetry $r\leftrightarrow d-r$, corresponding to exchanging $p_s$ and $q_s$ for all~$s$. Since the value of these integrals does not depend on the specific value of~$s$, we can choose all $p_s$ with $s=1,2,\dots,r$ to satisfy $p_s>q_s$ and all $p_s$ with $s=r+1,r+2,\dots, d$ to satisfy \mbox{$p_s\le q_s$}. To shorten the expressions below, we define
\begin{equation}
\label{KP-6}
{\mathfrak p}_k:=\sum_{s=1}^k p_s\,,\quad {\mathfrak q}_k:=\sum_{s=1}^k q_s \,.
\end{equation}
With these definitions, ${\mathfrak p}_d={\mathfrak q}_d=1$, $\sum_{s=r+1}^d q_s=1-{\mathfrak q}_{r}$, and likewise \raisebox{0ex}[0ex][0ex]{$\sum_{s=r+1}^d p_s=1-{\mathfrak p}_{r}$}. The integrals that we need to compute are then
\begin{multline}
\label{KP-8}
I^d_r:=\!\! \int_{{\mathsf S}_d} \!\!\!dP\, {1\over V_d}\!\int_{0}^{p_1}\!\!\! dq_1 \cdots\!\! \int_{0}^{p_r}\!\!\! dq_r \\
\times \int_{p_{r+1}}^{{\mathfrak p}_{r+1}\!-{\mathfrak q}_r}\!\!\!\! dq_{r+1} \cdots\!\! \int_{p_{d-1}}^{{\mathfrak p}_{d-1}\!-{\mathfrak q}_{d-2}}\!\!\!\! dq_{d-1} \\
\times\left[ (1\!+\!{\mathfrak p}_r\!-\!{\mathfrak q}_r)^N\!\!+(1\!+\!{\mathfrak q}_r\!-\!{\mathfrak p}_r)^N \right],
\end{multline}
and we note that, as anticipated, $I^d_r=I^d_{d-r}$. The average probability of successful clustering then reads
\begin{equation}
\label{KP-11}
P^{\rm cl}_{\rm s}=\frac{1}{2^N}\sum_{r=1}^{d-1} \binom{d}{r}I^d_r,
\end{equation}
where the binomial coefficient is the number of equivalent integration regions for the given $r$.
\subsubsection*{Low data dimension}
We can now discuss the lowest-dimensional cases, for which explicit closed formulas for $I^d_r$ can be derived. For $d=2$ one has
\begin{equation}
\label{KP-12}
P^{\rm cl}_{\rm s}=\frac{8-2^{2-N}}{(N+2)(N+1)}.
\end{equation}
This result coincides with that for unknown probability distributions, given in Eq.~\eqref{eq:Pssum} with $\xi_\Delta=1$. This is an expected result, as the optimal protocol for known and unknown probability distributions is exactly the same: assign to the same cluster all data points that show the same symbol~$s$. Therefore, knowing the probability distributions does not provide any advantage for $d=2$. For $d>2$, however, knowledge of the distributions $P$ and $Q$ helps in classifying the data points. If $d=3$, the success probability \eqref{KP-11} can be computed to be
\begin{equation}
\label{KP-13}
P^{\rm cl}_{\rm s}=6\,\frac{2^5 (N-2)+2^{2-N}(N^2+7N+18)}{(N+4)(N+3)(N+2)(N+1)}\,.
\end{equation}
In Table~\ref{table:K-U} we compare the values of $P^{\rm cl}_{\rm s}$ in Eq.~\eqref{KP-13} for $N=2,3,\dots,6$ with those for unknown distributions $P$ and $Q$ given by Eq.~\eqref{eq:Pssum}. As expected, the success probability is larger if $P$ and $Q$ are known. The source of the increase is illustrated by the string ${\bf s}=(112)$, which would be labeled as $PPQ$ (or $QQP$) if $P$ and $Q$ were unknown. However, if they are known and, e.g., $p_1>q_1$ and $p_2>q_2$, the string will be more appropriately labeled as $PPP$.
\begin{table}
\blue{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$N$ & 2 & 3 & 4 & 5 & 6\\
\hline\hline
Unknown: & 7/12 & 11/30 & 0.250 & 0.176 & 0.130\\
\hline
Known: & 3/5 & 2/5 & 0.283 & 0.210 & 0.160 \\
\hline
\end{tabular}
\caption{The success probability $P^{\rm cl}_{\rm s}$ for $d=3$ and data string lengths $N=2,\dots,6$ in the cases of known and unknown distributions $P$ and $Q$. For unknown distributions, the values are computed using Eq.~\eqref{eq:Pssum} in Appendix~\ref{app:classic}. For known distributions, the values are given by Eq.~\eqref{KP-13}.
The table shows that knowing $P$ and $Q$ increases the success probability of clustering.}
\label{table:K-U}
}
\end{table}
\subsubsection*{Arbitrary data dimension. Large $N$ limit}
For increasing $N$, however, the advantage of knowing $P$ and $Q$ becomes less significant and vanishes asymptotically. This can be checked explicitly for $d=2,3$ by expanding Eqs.~(\ref{KP-12}) and~(\ref{KP-13}) in inverse powers of $N$. In this regime the average is dominated by distributions for which ${\mathfrak p}_r \approx 1$ and ${\mathfrak q}_r \approx 0$. Since in a typical string approximately half of the data will come from the distribution~$P$ and the other half from~$Q$, the optimal clustering protocol will essentially coincide with that for unknown distributions, i.e., it will collect the data points showing the same symbol in the same subcluster and afterwards merge the subclusters into two clusters of approximately the same size. We next prove that this intuition is right for all $d$. In the proof, we will make repeated use of the trivial observation that, for asymptotically large $N$ and $0<a<b<c$, one has
\begin{equation}\label{KP-ebc2}
\int_a^b (c-x)^N dx \sim (c-a)^{N+1}/N\,.
\end{equation}
We also note that the contribution to the success probability coming from the completely wrong assignment of distributions, i.e., $(1+{\mathfrak q}_r-{\mathfrak p}_r)^N$, is exponentially vanishing, since we assumed $p_s>q_s$ for $s\le r$, and thus ${\mathfrak q}_r-{\mathfrak p}_r<0$ [this is well illustrated by the terms proportional to $2^{2-N}$ in Eqs.~(\ref{KP-12}) and~(\ref{KP-13})]. Because of this last observation, we can drop the second term in the integrand of $I^d_r$, Eq.~(\ref{KP-8}). The integrals over~$q_s$, $s\le r$, of the remaining term, $(1+{\mathfrak p}_r-{\mathfrak q}_r)^N$, are dominated by the lower limit, $q_s=0$, as this value maximizes $1+{\mathfrak p}_r-{\mathfrak q}_r$.
Using Eq.~(\ref{KP-ebc2}) we get
\begin{multline}
\label{KP-ebc1}
I^d_r\sim{(d-1)!\over N^r}\int_{{\mathsf S}_d} \!\!\!dP\\
\times\int_{p_{r+1}}^{{\mathfrak p}_{r+1}\!-{\mathfrak q}_r}\!\!\!\! dq_{r+1} \cdots\!\! \int_{p_{d-1}}^{{\mathfrak p}_{d-1}\!-{\mathfrak q}_{d-2}}\!\!\!\! dq_{d-1}\, (1\!+\!{\mathfrak p}_r)^{N+r}\,,
\end{multline}
where we have recalled that the volume of the simplex ${\mathsf S}_d$ is $V_d=1/(d-1)!$. For the remaining integrals over $q_s$ in Eq.~(\ref{KP-ebc1}) we can take the lower limits to be \mbox{$p_s\approx 0$}, for \mbox{$s\ge r+1$}, since the integrand is maximized by \mbox{${\mathfrak p}_r\approx 1$}. Therefore, the upper limits become $1$, $1-q_{r+1}$, \dots, $1-\sum_{s=r+1}^{d-2}q_s$. We identify these upper and lower limits as those of an integral over a $(d-r-1)$-dimensional probability simplex ${\mathsf S}_{d-r}$. We can thus write
\begin{equation}
\label{KP-as-1}
I_r^d\sim \frac{(d-1)!}{(d-r-1)!\, N^r} \int_{{\mathsf S}_d} \!\!dP\,(1+{\mathfrak p}_r)^{N+r} \,.
\end{equation}
The last equation can be cast as
\begin{equation}
\label{KP-as-1bis}
I_r^d\sim \frac{(d-1)!}{(d-r-1)!\, N^r} \int_{{\mathsf S}_d} \!\!dP\left(\!2\!-\!\sum_{s=r}^{d-1} p_s\!\right)^{\!N+r}\,,
\end{equation}
where we have used again that ${\mathfrak p}_r=1-\sum_{s=r+1}^d p_s$ and noted that under the integral sign we are free to relabel the variables $p_s$. According to the definition of~$\int_{\raisebox{.3ex}[0ex][0ex]{\tiny${\mathsf S}_d$}}\!dP$, we need to perform $d-r$ integrals over the variables $p_{r},p_{r+1},\cdots, p_{d-1}$, for which we can use Eq.~(\ref{KP-ebc2}). This yields a factor $2^{N+d}/N^{d-r}$. The remaining integrals over~$p_1,p_2,\dots,p_{r-1}$ of this constant factor give an additional $1/(r-1)!$, as they effectively correspond to an integral over an $(r-1)$-dimensional simplex.
Putting the different pieces together, the asymptotic expression of $I^d_r$ reads
\begin{equation}
\label{KP-as-2}
I_r^d\simeq \frac{2^{N+d}}{N^d}\frac{[(d-1)!]^2}{(r-1)!\,(d-r-1)!}\,.
\end{equation}
We are now in a position to compute the asymptotic success probability. Inserting Eq.~\eqref{KP-as-2} into Eq.~\eqref{KP-11} we readily obtain
\begin{align}
\label{KP-as-3}
P^{\rm cl}_{\rm s}&\sim \left(\frac{2}{N}\right)^d(d\!-\!1)!\,(d\!-\!1)\sum_{r=1}^{d-1} \binom{d}{r}\binom{d\!-\!2}{d\!-\!r\!-\!1} \nonumber\\
& = \left(\frac{2}{N}\right)^d\frac{(2d-2)!}{(d-2)!}\,,
\end{align}
where we have used the well-known binomial identity $\sum_k\binom{a}{k}\binom{b}{s-k}=\binom{a+b}{s}$ [here, $k$ ranges over all values for which the binomials make sense]. Eq.~\eqref{KP-as-3} coincides with the asymptotic expression for the unknown case, Eq.~\eqref{ps_asym_cl}, as we anticipated.
}
\end{document}
\begin{document}
\title{Bribery Can Get Harder in\\ Structured Multiwinner Approval Election}
\begin{abstract}
We study the complexity of bribery in the context of structured multiwinner approval elections. Given such an election, we ask whether a certain candidate can join the winning committee by adding, deleting, or swapping approvals, where each such action comes at a cost and we are limited by a budget. We assume our elections to have either the candidate interval or the voter interval property, and we require the property to hold also after the bribery. While structured elections usually make manipulative attacks significantly easier, our work also shows examples of the opposite behavior.
\end{abstract}
\section{Introduction}\label{sec:intro}
We study the complexity of bribery under the multiwinner approval rule, in the case where the voters' preferences are structured. Specifically, we use the bribery model of \citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure}, where one can either add, delete, or swap approvals, and we consider \emph{candidate interval} and \emph{voter interval} preferences~\citep{elk-lac:c:approval-sp-sc}. In multiwinner elections, the voters express their preferences over the available candidates and use this information to select a winning committee (i.e., a fixed-size subset of candidates). We focus on one of the simplest and most common scenarios, where each voter specifies the candidates that he or she approves, and those with the highest numbers of approvals form the committee. Such elections are used, e.g., to choose city councils, boards of trustees, or to shortlist job candidates. Naturally, there are many other rules and scenarios, but they do not appear in practice as often as this simplest one. For more details on multiwinner voting, we point the readers to the overviews of \citett{Faliszewski et al.}{fal-sko-sli-tal:b:multiwinner-voting} and \citett{Lackner and Skowron}{lac-sko:t:approval-survey}.
In our scenario, we are given an election, including the contents of all the votes, and, depending on the variant, we can either add, delete, or swap approvals, but each such action comes at a cost. Our goal is to find a cheapest set of actions that ensure that a given candidate joins the winning committee. Such problems, where we modify the votes to ensure a certain outcome, are known under the umbrella name of \emph{bribery}, and were first studied by \citett{Faliszewski, Hemaspaandra and Hemaspaandra}{fal-hem-hem:j:bribery}, whereas our specific variant is due to \citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure}. Historically, bribery problems indeed aimed to model vote buying, but currently more benign interpretations prevail. For example, \citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure} suggest using the cost of bribery as a measure of a candidate's success: A~candidate who did not win, but can be put into the committee at a low cost, certainly did better than one whose bribery is expensive. In particular, since our problem is used for post-election analysis, it is natural to assume that we know all the votes. For other similar interpretations, we point, e.g., to the works of \citett{Xia}{xia:c:margin-of-victory}, \citett{Shiryaev, Yu, and Elkind}{shi-yu-elk:c:robustness}, \citett{Bredereck et al.}{bre-fal-kac-nie-sko-tal:j:robustness}, \citett{Boehmer et al.}{boe-bre-fal-nie:c:counting-swap-bribery}, or \citett{Baumeister and Hogrebe}{bau-hog:c:counting-bribery}. \citett{Faliszewski and Rothe}{fal-rot:b:control-bribery} give a more general overview of bribery problems. We assume that our elections either satisfy the candidate interval (CI) or the voter interval (VI) property~\citep{elk-lac:c:approval-sp-sc}, which correspond to the classic notions of single-peakedness~\citep{bla:b:polsci:committees-elections} and single-crossingness~\citep{mir:j:single-crossing,rob:j:tax} from the world of ordinal elections. 
Briefly put, the CI property means that the candidates are ordered and each voter approves some interval of them, whereas the VI property requires that the voters are ordered and each candidate is approved by some interval of voters. For example, the CI assumption can be used to model political elections, where the candidates appear on the left-to-right spectrum of opinions and the voters approve those whose opinions are close enough to their own. Importantly, we require our elections to have the CI/VI property also after the bribery; this approach is standard in bribery problems with structured elections~\citep{bra-bri-hem-hem:j:sp2,men-lar:c:bribery-sp-hard,elk-fal-gup-roy:c:swap-shift-sp-sc}, as well as in other problems related to manipulating election results~\citep{wal:c:uncertainty-in-preference-elicitation-aggregation,fal-hem-hem-rot:j:single-peaked-preferences,fit-hem:c:ties-sp-manipulation-bribery} (these references are examples only; there are many more papers that use this model).
\begin{example}
Let us consider a hotel in a holiday resort. The hotel has its base staff, but each month it also hires some additional help. For the coming month, the expectation is to hire extra staff for $k$ days. Naturally, they would be hired for the days when the hotel is most busy (the decision to request additional help is made a day ahead, based on the observed load). Since hotel bookings are typically made in advance, one knows which days are expected to be most busy. However, some people will extend their stays, some will leave early, and some will have to shift their stays. Thus the hotel managers would like to know which days are likely to become the busiest ones after such changes: Then they could inform the extra staff as to when they are expected to be needed, and what changes in this preliminary schedule might happen.
Our bribery problem (for the CI setting) captures exactly the problem that the managers want to solve: The days are the candidates, $k$ is the committee size, and the bookings are the approval votes (note that each booking must regard a consecutive set of days). Prices of adding, deleting, and moving approvals correspond to the likelihood that a particular change actually happens (the managers usually know which changes are more or less likely). Since the bookings must be consecutive, the election has to have the CI property also after the bribery. The managers can solve such a bribery problem for each of the days and see which ones can most easily be among the $k$ busiest ones.
\end{example}
\begin{example}
For the VI setting, let us consider a related scenario. There is a team of archaeologists who booked a set of excavation sites, each for some consecutive number of days (they work on several sites in parallel). The team may want to add some extra staff to those sites that require the most working days. However, as in the previous example, the bookings might get extended or shortened. The team's manager may use bribery to evaluate how likely it is that each of the sites becomes one of the most work-demanding ones. In this case, the days are the voters, and the sites are the candidates.
\end{example}
There are two main reasons why structured elections are studied. Foremost, as in the above examples, sometimes they simply capture the exact problem at hand. Second, many problems that are intractable in general become polynomial-time solvable if the elections are structured. Indeed, this is the case for many ${\mathrm{NP}}$-hard winner-determination problems~\citep{bet-sli-uhl:j:mon-cc,elk-lac:c:approval-sp-sc,pet-lac:j:spoc} and for various problems where the goal is to make some candidate a winner~\citep{fal-hem-hem-rot:j:single-peaked-preferences,mag-fal:j:sc-control}, including some bribery problems~\citep{bra-bri-hem-hem:j:sp2,elk-fal-gup-roy:c:swap-shift-sp-sc}.
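To make the CI property concrete: once a left-to-right order of the candidates is fixed, a profile has the CI property exactly when every approval set is contiguous in that order. A minimal checker of ours follows (the election data is a toy example; deciding whether a suitable order exists at all is the consecutive-ones problem, which is also polynomial-time solvable):

```python
def is_ci(votes, order):
    """Check the candidate interval (CI) property for a fixed left-to-right
    order of candidates: every approval set must be a contiguous interval."""
    pos = {c: i for i, c in enumerate(order)}
    for approved in votes:
        idx = sorted(pos[c] for c in approved)
        # a set of positions is an interval iff its span equals its size
        if idx and idx[-1] - idx[0] + 1 != len(idx):
            return False
    return True

# Toy election: candidates a..d on a spectrum, interval votes pass the check.
order = ['a', 'b', 'c', 'd']
print(is_ci([{'a', 'b'}, {'b', 'c', 'd'}, {'c'}], order))  # True
print(is_ci([{'a', 'c'}, {'b', 'c', 'd'}], order))         # False
```

The VI property is checked the same way with the roles of voters and candidates exchanged.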
There are also some problems that stay intractable even for structured elections~\citep{elk-fal-sch:j:fallback-shift,yan:c:borda-control-sp}\footnote{These references are not complete and are meant as examples.}, as well as examples of complexity reversals, where assuming structured preferences turns a polynomial-time solvable problem into an intractable one. However, such reversals are rare and, to the best of our knowledge, have so far only been observed by \citett{Menon and Larson}{men-lar:c:bribery-sp-hard}, for the case of weighted elections with three candidates (but see also the work of \citett{Fitzsimmons and Hemaspaandra}{fit-hem:c:ties-sp-manipulation-bribery}, who find complexity reversals that stem from replacing total ordinal votes with ones that include ties). \myparagraph{Our Contribution.} We provide an almost complete picture of the complexity of bribery by either adding, deleting, or swapping approvals under the multiwinner approval voting rule, for the case of CI and VI elections, assuming either that each bribery action has an identical unit price or that the actions can be priced individually (see Table~\ref{tab:results}). By comparing our results to those for the unrestricted setting, provided by \citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure}, we find that any combination of tractability and intractability in the structured and unrestricted setting is possible. For example: \begin{enumerate} \item Bribery by adding approvals is solvable in polynomial time irrespective of whether the elections are unrestricted or have the CI or VI properties. \item Bribery by deleting approvals (where each deleting action is individually priced) is solvable in polynomial time in the unrestricted setting, but becomes ${\mathrm{NP}}$-hard for CI elections (for VI ones it is still in ${\mathrm{P}}$).
\item Bribery by swapping approvals only to the preferred candidate (with individually priced actions) is ${\mathrm{NP}}$-hard in the unrestricted setting, but becomes polynomial-time solvable both for CI and VI elections. \item Bribery by swapping approvals (where each action is individually priced and we are not required to swap approvals to the preferred candidate only) is ${\mathrm{NP}}$-hard in each of the considered settings. \end{enumerate} \myparagraph{Possibility of Complexity Reversals.} So far, most of the problems studied for structured elections were subproblems of the unrestricted ones. For example, a winner-determination algorithm that works for all elections clearly also works for the structured ones, so a complexity reversal is impossible. The case of bribery is different because, by assuming structured elections, not only do we restrict the set of possible inputs, but also we constrain the possible actions. Yet, scenarios where bribery is tractable are rare, and only a handful of papers have considered bribery in structured domains (we mention those of \citett{Brandt et al.}{bra-bri-hem-hem:j:sp2}, \citett{Fitzsimmons and Hemaspaandra}{fit-hem:c:ties-sp-manipulation-bribery}, \citett{Menon and Larson}{men-lar:c:bribery-sp-hard}, \citett{Elkind et al.}{elk-fal-gup-roy:c:swap-shift-sp-sc}), so opportunities for observing complexity reversals were, so far, very limited. We show several such reversals, obtained for very natural settings.
\begin{table} \centering \setlength{\tabcolsep}{3pt} \scalebox{0.83}{ \begin{tabular}{r|cc|cc|cc} \toprule & \multicolumn{2}{c|}{Unrestricted} & \multicolumn{2}{c|}{Candidate Interval (CI)} & \multicolumn{2}{c}{Voter Interval (VI)} \\ {\small (prices)} & \multicolumn{1}{c}{\small (unit)} & \multicolumn{1}{c|}{\small (any)} & \multicolumn{1}{c}{\small (unit)} & \multicolumn{1}{c|}{\small (any)} & \multicolumn{1}{c}{\small (unit)} & \multicolumn{1}{c}{\small (any)} \\ \midrule \textsc{AddApprovals} & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ \\[-3pt] & \multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}} & \scriptsize{[Thm.~\ref{thm:add-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:add-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:add-approvals-vi}]} & \scriptsize{[Thm.~\ref{thm:add-approvals-vi}]} \\ \midrule \textsc{DelApprovals} & ${\mathrm{P}}$ & ${\mathrm{P}}$ & \multirow{2}{*}{?} & ${\mathrm{NP}}$-com. & ${\mathrm{P}}$ & ${\mathrm{P}}$ \\ & \multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}}& & \scriptsize{[Thm.~\ref{thm:delete-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:delete-approvals-vi}]} & \scriptsize{[Thm.~\ref{thm:delete-approvals-vi}]} \\ \midrule \textsc{SwapApprovals} & ${\mathrm{P}}$ & ${\mathrm{NP}}$-com. & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ \\[-3pt] \textsc{to p} &\multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}}& \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-ci}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-ci}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-vi}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-vi}]} \\[2mm] \textsc{SwapApprovals} & ${\mathrm{P}}$ & ${\mathrm{NP}}$-com. & ${\mathrm{NP}}$-com. & ${\mathrm{NP}}$-com. & \multirow{2}{*}{?} & ${\mathrm{NP}}$-com. 
\\[-3pt] &\multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}}& \scriptsize{[Thm.~\ref{thm:swap-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-ci}]} & & \scriptsize{[Thm.~\ref{thm:vi-swaps}]} \\ \bottomrule \end{tabular}} \caption{Our results for the CI and VI domains, together with those of \protect\citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure} for the unrestricted setting. \textsc{SwapApprovals to p} refers to the problem where each action has to move an approval to the preferred candidate.} \label{tab:results} \end{table} \section{Preliminaries} For a positive integer $t$, we write $[t]$ to mean the set $\{1, \ldots, t\}$. By writing $[t]_0$ we mean the set $[t] \cup \{0\}$. \myparagraph{Approval Elections.} An \emph{approval election} $E = (C,V)$ consists of a set of candidates $C = \{c_1, \ldots, c_m\}$ and a collection of voters $V = \{v_1, \ldots, v_n\}$. Each voter $v_i \in V$ has \emph{an approval ballot} (or, equivalently, \emph{an approval set}) which contains the candidates that $v_i$ approves. We write $v_i$ to refer both to the voter and to his or her approval ballot; the exact meaning will always be clear from the context. A \emph{multiwinner voting rule} is a function $f$ that given an election $E = (C,V)$ and a committee size $k \in [|C|]$ outputs a nonempty family of winning committees (where each committee is a size-$k$ subset of $C$). We disregard the issue of tie-breaking and assume all winning committees to be equally worthy, i.e., we adopt the nonunique winner model. Given an election $E = (C,V)$, we let the approval score of a candidate $c \in C$ be the number of voters that approve~$c$, and we denote it as $\avscore_E(c)$. The approval score of a committee $S \subseteq C$ is $\avscore_E(S) = \sum_{c \in S} \avscore_E(c)$.
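These score definitions translate directly into code; the following is a minimal Python sketch (the representation of ballots as sets of candidate names, and all function names, are ours, not from the paper):

```python
def avscore(ballots, c):
    """Approval score of candidate c: the number of voters approving c."""
    return sum(1 for ballot in ballots if c in ballot)

def committee_score(ballots, committee):
    """Approval score of a committee: the sum of its members' scores."""
    return sum(avscore(ballots, c) for c in committee)

# Three voters over candidates a, b, c.
ballots = [{"a", "b"}, {"b"}, {"b", "c"}]
```

Under the multiwinner approval rule defined next, the winning committees are then exactly the size-$k$ sets maximizing `committee_score`.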
Given an election~$E$ and a committee size~$k$, the \emph{multiwinner approval voting} rule, denoted $\av$, outputs all size-$k$ committees with the highest approval score. Occasionally we also consider the \emph{single-winner approval rule}, which is defined in the same way as its multiwinner variant, except that the committee size is fixed to be one. For simplicity, in this case we assume that the rule returns a set of tied winners (rather than a set of tied size-$1$ winning committees). \myparagraph{Structured Elections.} We focus on elections where the approval ballots satisfy either the \emph{candidate interval (CI)} or the \emph{voter interval (VI)} properties~\citep{elk-lac:c:approval-sp-sc}: \begin{enumerate} \item An election has the CI property (is a CI election) if there is an ordering of the candidates (called the \emph{societal axis}) such that each approval ballot forms an interval with respect to this ordering. \item An election has the VI property (is a VI election) if there is an ordering of the voters so that each candidate is approved by an interval of the voters (for this ordering). \end{enumerate} Given a CI election, we say that the voters have CI ballots or, equivalently, CI preferences; we use an analogous convention for the VI case. As observed by \citett{Elkind and Lackner}{elk-lac:c:approval-sp-sc}, there are polynomial-time algorithms that test if a given election is CI or VI and, if so, provide appropriate orders of the candidates or voters; these algorithms are based on solving the \emph{consecutive ones} problem~\citep{boo-lue:j:consecutive-ones-property}. \myparagraph{Notation for CI Elections.} Let us consider a candidate set $C = \{c_1, \ldots, c_m\}$ and a societal axis $\rhd = c_1 c_2 \cdots c_m$. Given two candidates $c_i, c_j$, where $i \leq j$, we write $[c_i,c_j]$ to denote the approval set $\{c_i, c_{i+1}, \ldots, c_j\}$.
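Given the societal axis (respectively, the voter order), both properties are easy to verify; the following Python sketch (names ours) checks them directly. Note that \emph{finding} such an order is the harder task that requires the consecutive-ones machinery mentioned above.

```python
def is_ci(ballots, axis):
    """Check the CI property for a given societal axis: every ballot
    must occupy a contiguous range of positions on the axis."""
    pos = {c: i for i, c in enumerate(axis)}
    for ballot in ballots:
        idx = sorted(pos[c] for c in ballot)
        if idx and idx[-1] - idx[0] + 1 != len(idx):
            return False
    return True

def is_vi(ballots, candidates):
    """Check the VI property for the given voter order (the order of
    `ballots`): every candidate is approved by an interval of voters."""
    for c in candidates:
        approving = [i for i, ballot in enumerate(ballots) if c in ballot]
        if approving and approving[-1] - approving[0] + 1 != len(approving):
            return False
    return True
```

For example, the ballots $\{a,b\}$ and $\{b,c\}$ are CI for the axis $a\,b\,c$, whereas the ballot $\{a,c\}$ is not.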
\myparagraph{Bribery Problems.} We focus on the variants of bribery in multiwinner approval elections defined by~\citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure}. Let $f$ be a multiwinner voting rule and let \textsc{Op} be one of the \textsc{AddApprovals}, \textsc{DelApprovals}, and \textsc{SwapApprovals} operations (in our case $f$ will either be $\av$ or its single-winner variant). In the $f$-\textsc{Op-Bribery} problem we are given an election $E = (C,V)$, a committee size~$k$, a preferred candidate $p$, and a nonnegative integer $B$ (the budget). We ask if it is possible to perform at most $B$ unit operations of type \textsc{Op}, so that $p$ belongs to at least one winning committee: \begin{enumerate} \item For \textsc{AddApprovals}, a unit operation adds a given candidate to a given voter's ballot. \item For \textsc{DelApprovals}, a unit operation removes a given candidate from a given voter's ballot. \item For \textsc{SwapApprovals}, a unit operation replaces a given candidate with another one in a given voter's ballot. \end{enumerate} Like \citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure}, we also study the variants of the \textsc{AddApprovals} and \textsc{SwapApprovals} problems where each unit operation must involve the preferred candidate. We are also interested in the priced variants of the above problems, where each unit operation comes at a cost that may depend both on the voter and the particular affected candidates; we ask if we can achieve our goal by performing operations of total cost at most~$B$. We distinguish the priced variants by putting a dollar sign in front of the operation type. For example, \textsc{\$AddApprovals} means a variant where adding each candidate to each approval ballot has an individual cost. \myparagraph{Bribery in Structured Elections.} We focus on the bribery problems where the elections have either the CI or the VI property.
For example, in the \textsc{AV-\$AddApprovals-CI-Bribery} problem the input election has the CI property (under a given societal axis) and we ask if it is possible to add approvals with up to a given cost so that (a) the resulting election has the CI property for the same societal axis, and (b) the preferred candidate belongs to at least one winning committee. The VI variants are defined analogously (in particular, the voters' order witnessing the VI property is given and the election must still have the VI property with respect to this order after the bribery). The convention that the election must have the same structural property before and after the bribery, as well as the fact that the order witnessing this property is part of the input, is standard in the literature; see, e.g., the works of \citett{Faliszewski et al.}{fal-hem-hem-rot:j:single-peaked-preferences}, \citett{Brandt et al.}{bra-bri-hem-hem:j:sp2}, \citett{Menon and Larson}{men-lar:c:bribery-sp-hard}, and \citett{Elkind et al.}{elk-fal-gup-roy:c:swap-shift-sp-sc}. \myparagraph{Computational Problems.} For a graph $G$, by $V(G)$ we mean its set of vertices and by $E(G)$ we mean its set of edges. A graph is cubic if each of its vertices is connected to exactly three other ones. Our ${\mathrm{NP}}$-hardness proofs rely on reductions from variants of the \textsc{Independent Set} and \textsc{X3C} problems, both known to be ${\mathrm{NP}}$-complete~\citep{gar-joh:b:int,gon:j:x3c}. \begin{definition} In the \textsc{Cubic Independent Set} problem we are given a cubic graph $G$ and an integer $h$; we ask if $G$ has an independent set of size $h$ (i.e., a set of $h$ vertices such that no two of them are connected). \end{definition} \begin{definition} In the \textsc{Restricted Exact Cover by 3-Sets} problem (\textsc{RX3C}) we are given a universe $X$ of $3n$ elements and a family $\mathcal{S}$ of $3n$ size-$3$ subsets of $X$. Each element from $X$ appears in exactly three sets from $\mathcal{S}$. 
We ask if it is possible to choose $n$ sets from $\mathcal{S}$ whose union is $X$. \end{definition} \section{Adding Approvals} \label{sec:adding} \appendixsection{sec:adding} For the case of adding approvals, all our bribery problems (priced and unpriced, both for the CI and VI domains) remain solvable in polynomial time. Yet, as compared to the unrestricted setting, our algorithms require more care. For example, in the unrestricted case it suffices to simply add approvals for the preferred candidate~\citep{fal-sko-tal:c:bribery-measure} (choosing the voters where they are added in the order of increasing costs for the priced variant); a similar approach works for the VI case, but with a different ordering of the voters. \begin{theorem} \label{thm:add-approvals-vi} \textsc{AV-\$AddApprovals-VI-Bribery} $\in {\mathrm{P}}$. \end{theorem} \appendixproof{thm:add-approvals-vi} { \begin{proof} Consider an input with election $E = (C,V)$, committee size $k$, preferred candidate $p$, and budget $B$. Without loss of generality, we assume that $V = \{v_1, \ldots, v_n\}$ and the order witnessing the VI property is $v_1 \rhd v_2 \rhd \cdots \rhd v_n$. We note that it is neither beneficial nor necessary to ever add approvals for candidates other than $p$. Let $v_i, v_{i+1}, \ldots, v_j$ be the interval of voters that approve $p$, and let $s$ be the lowest number of additional approvals that $p$ needs to obtain to become a member of some winning committee (note that $s$ is easily computable in polynomial time). Our algorithm proceeds as follows: We consider all nonnegative numbers $s_\ell$ and $s_r$ such that (a)~$s = s_\ell + s_r$, (b)~$i - s_\ell \geq 1$, and (c)~$j+s_r \leq n$, and for each of them we compute the cost of adding an approval for $p$ to voters $v_{i-s_\ell}, \ldots, v_{i-1}$ and $v_{j+1}, \ldots, v_{j+s_r}$. We choose the pair that generates the lowest cost and we accept if this cost is at most $B$. Otherwise we reject. The polynomial running time follows directly.
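The core enumeration over the splits $s = s_\ell + s_r$ can be sketched as follows (a minimal Python sketch with our own names; the two cost lists hold the prices of adding an approval for $p$ to the voters immediately before $v_i$ and after $v_j$, ordered outward from the interval):

```python
def cheapest_vi_addition(costs_left, costs_right, s):
    """Cheapest way of adding s approvals for p while keeping p's set
    of approvers a contiguous interval of voters: try every split of s
    into a left part and a right part of the interval's extension."""
    best = float("inf")
    for s_left in range(s + 1):
        s_right = s - s_left
        if s_left <= len(costs_left) and s_right <= len(costs_right):
            best = min(best,
                       sum(costs_left[:s_left]) + sum(costs_right[:s_right]))
    return best  # float("inf") if no split fits within the voter list
```

For instance, with left costs $[2,5]$, right costs $[1,4]$, and $s=2$, the split $(1,1)$ is cheapest, at total cost $3$.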
Correctness is guaranteed by the fact that we need to maintain the VI property and that it suffices to add approvals for $p$ only. \end{proof} } The CI case introduces a different complication. Now, adding an approval for the preferred candidate in a given vote also requires adding approvals for all the candidates between $p$ and the vote's original approval set. Thus, in addition to bounding the bribery's cost, we also need to track the candidates whose scores increase beyond a given level. \begin{theorem}\label{thm:add-approvals-ci} \textsc{AV-\$AddApprovals-CI-Bribery} $\in {\mathrm{P}}$. \end{theorem} \begin{proof} Our input consists of an election $E = (C,V)$, committee size $k$, preferred candidate $p \in C$, budget $B$, and the information about the costs of all the possible operations (i.e., for each voter and each candidate that he or she does not approve, we have the price for adding this candidate to the voter's ballot). Without loss of generality, we assume that $C = \{\ell_{m'}, \ldots, \ell_1, p, r_1, \ldots, r_{m''}\}$, $V = \{v_1, \ldots, v_n\}$, each voter approves at least one candidate,\footnote{Without this assumption we could still make our algorithm work. We would simply choose a certain number of voters who do not approve any candidates to approve $p$ alone (though we would not necessarily ask all such voters, due to possibly high prices).} and the election is CI with respect to the order: \[ \rhd = \ell_{m'} \cdots \ell_2\ \ell_1 \ p \ r_1\ r_2 \cdots r_{m''}. \] We start with a few observations. First, we note that if a voter already approves $p$ then there is no point in adding any approvals to his or her ballot. Second, if some voter does not approve $p$, then we should either not add any approvals to his or her ballot, or add exactly those approvals that are necessary to ensure that $p$ gets one.
For example, if some voter has approval ballot $\{r_3, r_4, r_5\}$ then we may either choose to leave it intact or to extend it to $\{p, r_1, r_2, r_3, r_4, r_5\}$. We let $L = \{\ell_{m'}, \ldots, \ell_1\}$ and $R = \{r_1, \ldots, r_{m''}\}$, and we partition the voters into three groups, $V_\ell$, $V_p$, and $V_r$, as follows: \begin{enumerate} \item $V_p$ contains all the voters who approve $p$, \item $V_\ell$ contains the voters who approve members of $L$ only, \item $V_r$ contains the voters who approve members of $R$ only. \end{enumerate} Our algorithm proceeds as follows (by guessing we mean iteratively trying all possibilities; Steps~\ref{alg1:dp1} and~\ref{alg1:dp2} will be described in detail below): \begin{enumerate} \item Guess the numbers $x_\ell$ and $x_r$ of voters from $V_\ell$ and $V_r$ whose approval ballots will be extended to approve $p$. \item Guess the numbers $t_\ell$ and $t_r$ of candidates from $L$ and $R$ that will end up with higher approval scores than $p$ (we must have $t_\ell + t_r < k$ for $p$ to join a winning committee). \item \label{alg1:dp1} Compute the lowest cost of extending exactly $x_\ell$ votes from $V_\ell$ to approve $p$, so that at most $t_\ell$ candidates from $L$ end up with more than $\score_E(p)+x_\ell+x_r$ points (i.e., with score higher than $p$); denote this cost as $B_\ell$. \item \label{alg1:dp2} Repeat the above step for the $x_r$ voters from $V_r$, with at most $t_r$ candidates obtaining more than $\score_E(p)+x_\ell+x_r$ points; denote the cost of this operation as $B_r$. \item If $B_\ell + B_r \leq B$ then accept (reject if no choice of $x_\ell$, $x_r$, $t_\ell$, and $t_r$ leads to acceptance). \end{enumerate} One can verify that this algorithm is correct (assuming we know how to perform Steps~\ref{alg1:dp1} and~\ref{alg1:dp2}). Next we describe how to perform Step~\ref{alg1:dp1} in polynomial time (Step~\ref{alg1:dp2} is handled analogously). We will need some additional notation. 
For each $i \in [m']$, let $V_{\ell}(i)$ consist exactly of those voters from $V_\ell$ whose approval ballots include candidate $\ell_i$ but do not include $\ell_{i-1}$ (in other words, voters in $V_\ell(i)$ have approval ballots of the form $[\ell_j, \ell_i]$, where $j \geq i$). Further, for each $i \in [m']$ and each $e \in [x_\ell]_0$ let $\cost(i,e)$ be the lowest cost of extending $e$ votes from $V_\ell(i)$ to approve $p$ (and, as a consequence, to also approve candidates $\ell_{i-1}, \ldots, \ell_1$). If $V_\ell(i)$ contains fewer than $e$ voters then $\cost(i,e) = +\infty$. For each $e \in [x_\ell]_0$, we define $S(e) = \score_E(p) + e + x_r$. Finally, for each $i \in [m']$, $e \in [x_\ell]_0$, and $t \in [t_\ell]_0$ we define the function $f(i,e,t)$ so that: \begin{enumerate} \item[] $f(i,e,t)$ = the lowest cost of extending exactly $e$ votes from $V_\ell(1) \cup \cdots \cup V_\ell(i)$ (to approve~$p$) so that at most $t$ candidates among $\ell_1, \ldots, \ell_i$ end up with more than $S(e)$ points (function $f$ takes value $+\infty$ if satisfying all the given constraints is impossible). \end{enumerate} Our goal in Step~\ref{alg1:dp1} of the main algorithm is to compute $f(m',x_\ell,t_\ell)$, which we do via dynamic programming. To this end, we observe that the following recursive equation holds (let $\chi(i,e)$ be $1$ if $\score_E(\ell_i) > S(e)$ and let $\chi(i,e)$ be $0$ otherwise; we explain the idea of the equation below): \begin{align*} f(i,e,t)\! =\! \min_{e' \in [e]_0} \! \big( \cost(i,e') + f(i-1, e-e', t - \chi(i,e))\big). \end{align*} The intuition behind this equation is as follows. We consider each possible number $e' \in [e]_0$ of votes from $V_\ell(i)$ that can be extended to approve $p$. The lowest cost of extending the votes of $e'$ voters from $V_\ell(i)$ is, by definition, $\cost(i,e')$.
Next, we still need to extend $e-e'$ votes from $V_\ell(i-1), \ldots, V_\ell(1)$ and, while doing so, we need to ensure that at most $t$ candidates end up with more than $S(e)$ points. Candidate $\ell_i$ cannot get any additional approvals from voters $V_\ell(i-1), \ldots, V_\ell(1)$, so he or she exceeds this value exactly if $\score_E(\ell_i) > S(e)$ or, equivalently, if $\chi(i,e) = 1$. This means that we have to ensure that at most $t - \chi(i,e)$ candidates among $\ell_{i-1}, \ldots, \ell_1$ end up with more than $S(e)$ points. However, since we extend $e'$ votes from $V_\ell(i)$, we know that candidates $\ell_{i-1}, \ldots, \ell_1$ certainly obtain $e'$ additional points (as compared to the input election). Thus we need to ensure that at most $t - \chi(i,e)$ of them end up with score more than $S(e-e')$ after extending the votes from $V_\ell(1) \cup \ldots \cup V_\ell(i-1)$. This is ensured by the $f(i-1,e-e',t-\chi(i,e))$ component in the equation (which also provides the lowest cost of the respective operations). Using the above formula, the fact that $f(1,e,t)$ can be computed easily for all values of~$e$ and~$t$, and standard dynamic programming techniques, we can compute $f(m',x_\ell,t_\ell)$ in polynomial time. This suffices for completing Step~\ref{alg1:dp1} of the main algorithm and we handle Step~\ref{alg1:dp2} analogously. Since all the steps of the algorithm can be performed in polynomial time, the proof is complete. \end{proof} We note that both of the above theorems also apply to the cases where we can only add approvals for the preferred candidate. Indeed, the algorithm from Theorem~\ref{thm:add-approvals-vi} is designed to do just that, and for the algorithm from Theorem~\ref{thm:add-approvals-ci} we can set the price of adding other approvals to be $+\infty$. \section{Deleting Approvals} \label{sec:deleting} \appendixsection{sec:deleting} The case of deleting approvals is more intriguing.
Roughly speaking, in the unrestricted setting it suffices to delete approvals from sufficiently many candidates that have higher scores than~$p$, for whom doing so is least expensive~\citep{fal-sko-tal:c:bribery-measure}. The same general strategy works for the VI case because we can still delete approvals for different candidates independently. \begin{theorem} \label{thm:delete-approvals-vi} \textsc{AV-\$DelApprovals-VI-Bribery} $\in~{\mathrm{P}}$. \end{theorem} \appendixproof{thm:delete-approvals-vi} { \begin{proof} Let our input consist of an election $E=(C,V)$, preferred candidate $p \in C$, committee size~$k$, and budget~$B$. We assume that $V = \{v_1, \ldots, v_n\}$ and the election is VI with respect to ordering the voters by their indices. Let $s = \score_E(p)$ be the score of $p$ prior to any bribery. We refer to the candidates with score greater than~$s$ as superior. Since it is impossible to increase the score of $p$ by deleting approvals, we need to ensure that the number of superior candidates drops to at most $k-1$. For each superior candidate $c$, we compute the lowest cost for reducing his or her score to exactly~$s$. Specifically, for each such candidate $c$ we act as follows. Let $t = \score_E(c) - s$ be the number of $c$'s approvals that we need to delete and let $v_{a}, v_{a+1}, \ldots, v_{b}$ be the interval of voters that approve $c$. For each $i \in [t]_0$ we compute the cost of deleting $c$'s approvals among the first $i$ and the last $t-i$ voters in the interval (these are the only operations that achieve our goal and maintain the VI property of the election); we store the lowest of these costs as ``the cost of $c$.'' Let $S$ be the number of superior candidates (prior to any bribery). We choose $S-(k-1)$ of them with the lowest costs (if $S \leq k-1$, no deletions are needed and we accept immediately). If the sum of these costs is at most $B$ then we accept and, otherwise, we reject. \end{proof} } For the CI case, our problem turns out to be ${\mathrm{NP}}$-complete.
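In code, the VI deletion procedure from Theorem~\ref{thm:delete-approvals-vi} can be sketched as follows (a minimal Python sketch; the data representation and all names are ours: each superior candidate is given as the list of per-voter deletion costs over its interval of approving voters, so its score equals the length of that list):

```python
def vi_delete_bribery_cost(intervals, s, k):
    """Lowest total cost of leaving at most k-1 candidates with score
    above s, where each candidate's approvals may only be deleted from
    the front and the back of its approving-voter interval (to keep
    the VI property)."""
    candidate_costs = []
    for costs in intervals:
        t = len(costs) - s            # number of approvals to delete
        if t <= 0:
            continue                  # not a superior candidate
        # delete i approvals from the front and t - i from the back
        best = min(sum(costs[:i]) + sum(costs[len(costs) - (t - i):])
                   for i in range(t + 1))
        candidate_costs.append(best)
    candidate_costs.sort()
    excess = len(candidate_costs) - (k - 1)
    return sum(candidate_costs[:max(excess, 0)])
```

For example, with $p$'s score $s=1$, committee size $k=2$, and two superior candidates with interval costs $[3,1,2]$ and $[5,4]$, only one of them must be brought down, and the cheaper option costs $3$ (delete the last two approvals of the first candidate).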
Intuitively, the reason for this is that in the CI domain deleting an approval for a given candidate requires either deleting all the approvals to the left or all the approvals to the right on the societal axis. Indeed, our main trick is to introduce approvals that must be deleted (at zero cost), but doing so requires choosing whether to delete their left or their right neighbors (at nonzero cost). This result is our first example of a complexity reversal. \begin{theorem}\label{thm:delete-approvals-ci} \textsc{AV-\$DelApprovals-CI-Bribery} is ${\mathrm{NP}}$-complete. \end{theorem} \begin{proof} We give a reduction from \textsc{RX3C}. Let $I=(X,\mathcal{S})$ be the input instance, where $X = \{x_1,\ldots, x_{3n}\}$ is the universe and $\mathcal{S} = \{S_1,\ldots,S_{3n}\}$ is a family of size-$3$ subsets of $X$. By definition, each element of $X$ belongs to exactly three sets from $\mathcal{S}$. We form an instance of \textsc{AV-\$DelApprovals-CI-Bribery} as follows. We have the preferred candidate $p$, for each universe element $x_i \in X$ we have a corresponding universe candidate $x_i$, for each set $S_j \in \mathcal{S}$ we have a set candidate $s_j$, and we have a set $D$ of $2n$ dummy candidates (where each individual one is denoted by~$\diamond$). Let $C$ be the set of the $8n+1$ candidates just described and let $S = \{s_1, \ldots, s_{3n}\}$ contain the set candidates. We fix the societal axis to be: \[ \rhd = \underbrace{\overbrace{s_1 \cdots s_{3n}}^{3n} \overbrace{\diamond \cdots \diamond}^{2n} \overbrace{x_1 \cdots x_{3n}}^{3n} p}_{8n+1} \] Next, we form the voter collection $V$: \begin{enumerate} \item For each candidate in $S \cup D \cup \{p\}$, we have two voters that approve exactly this candidate. We refer to them as the \emph{fixed voters} and we set the price for deleting their approvals to be $+\infty$. We refer to their approvals as \emph{fixed}.
\item For each set $S_j = \{x_a,x_b,x_c\}$, we form three \emph{solution voters}, $v(s_j,x_a)$, $v(s_j,x_b)$, and $v(s_j,x_c)$, with approval sets $[s_j,x_a]$, $[s_j, x_b]$, and $[s_j,x_c]$, respectively. For a solution voter $v(s_j,x_d)$, we refer to the approvals that $s_j$ and $x_d$ receive as \emph{exterior}, and to all the other ones as \emph{interior}. The cost for deleting each exterior approval is one, whereas the cost for deleting the interior approvals is zero. Altogether, there are $9n$ solution voters. \end{enumerate} To finish the construction, we set the committee size $k=n+1$ and the budget $B=9n$. Below, we list the approval scores prior to any bribery (later we will see that in successful briberies one always deletes all the interior approvals): \begin{enumerate} \item $p$ has $2$ fixed approvals, \item each universe candidate has $3$~exterior approvals (plus some number of interior ones), \item each set candidate has $3$ exterior approvals and $2$ fixed ones (plus some number of interior ones), and \item each dummy candidate has $2$ fixed approvals (and $9n$ interior ones). \end{enumerate} We claim that there is a bribery of cost at most $B$ that ensures that $p$ belongs to some winning committee if and only if $I$ is a yes-instance of \textsc{RX3C}. For the first direction, let us assume that $I$ is a yes-instance and let $\mathcal{T}$ be a size-$n$ subset of $\mathcal{S}$ such that $\bigcup_{S_i \in \mathcal{T}} S_i = X$ (i.e., $\mathcal{T}$ is the desired exact cover). We perform the following bribery: First, for each solution voter we delete all his or her interior approvals. Next, to maintain the CI property (and to lower the scores of some candidates), for each solution voter we delete one exterior approval.
Specifically, for each set $S_j = \{x_a,x_b,x_c\}$, if $S_j$ belongs to the cover (i.e., if $S_j \in \mathcal{T}$) then we delete the approvals for $x_a$, $x_b$, and $x_c$ in $v(s_j,x_a)$, $v(s_j,x_b)$, and $v(s_j,x_c)$, respectively; otherwise, i.e., if $S_j \notin \mathcal{T}$, we delete the approvals for $s_j$ in these votes. As a consequence, all the universe candidates end up with two exterior approvals each, the $n$ set candidates corresponding to the cover end up with five approvals each (two fixed ones and three exterior), and the $2n$ remaining set candidates and all the dummy candidates end up with two fixed approvals each. Since~$p$ has two approvals, the committee size is $n+1$, and only $n$ candidates have score higher than $p$, $p$ belongs to some winning committee (and the cost of the bribery is~$B$). For the other direction, let us assume that there is a bribery with cost at most $B$ that ensures that $p$ belongs to some winning committee. It must be the case that this bribery deletes exactly one exterior approval from each solution voter. Otherwise, since there are $9n$ solution voters and the budget is also $9n$, some solution voter would keep both his or her exterior approvals, as well as all the interior ones. This means that after the bribery there would be at least $2n$ dummy candidates with at least three points each. Then, $p$ would not belong to any winning committee. Thus, exactly one exterior approval is deleted from each solution voter, and we may assume that all of his or her interior approvals are deleted as well (this comes at zero cost and does not decrease the score of $p$). By the above discussion, we know that all the dummy candidates end up with two fixed approvals, i.e., with the same score as~$p$. Thus, for $p$ to belong to some winning committee, at least $5n$ candidates among the set and universe ones also must end up with at most two approvals (at most $n$ candidates can have score higher than $p$).
Let $x$ be the number of set candidates whose approval score drops to at most two, and let $y$ be the number of such universe candidates. We have that: \begin{align}\label{eq:1} 0 \leq x \leq 3n&,& 0 \leq y \leq 3n&,& \text{and}&& x+y \geq 5n. \end{align} Prior to the bribery, each set candidate has five non-interior approvals (including three exterior approvals) so bringing his or her score to at most two costs three units of budget. Doing the same for a universe candidate costs only one unit of budget, as universe candidates originally have only three non-interior approvals. Since our total budget is $9n$, we have: \begin{equation}\label{eq:2} 3x+y \leq 9n. \end{equation} Together, inequalities~\eqref{eq:1} and~\eqref{eq:2} imply that $x = 2n$ and $y = 3n$. That is, for each universe candidate $x_i$ there is a solution voter $v(s_j,x_i)$ who is bribed to delete the approval for $x_i$ (and, as a consequence of our previous discussion, who is not bribed to delete the approval for $s_j$). We call such solution voters \emph{active} and we define a family of sets: \[ \mathcal{T} = \{ S_j \mid s_j \text{ is approved by some active solution voter} \}. \] We claim that $\mathcal{T}$ is an exact cover for the \textsc{RX3C} instance $I$. Indeed, by definition of active solution voters we have that $\bigcup_{S_i \in \mathcal{T}} S_i = X$. Further, it must be the case that $|\mathcal{T}| = n$. This follows from the observation that if some solution voter is active then his or her corresponding set candidate $s_j$ has at least three approvals after the bribery (each set candidate receives exterior approvals from exactly three solution voters and these approvals must be deleted if the candidate is to end up with score two; this is possible only if none of these three solution voters is active). Since exactly $2n$ set candidates must have their scores reduced to two, it must be that $3n - |\mathcal{T}| = 2n$, so $|\mathcal{T}| = n$. This completes the proof.
\end{proof} The above proof strongly relies on using $0$/$1$/$+\infty$ prices. The case of unit prices remains open and we believe that resolving it might be quite challenging. \section{Swapping Approvals} \label{sec:swapping} \appendixsection{sec:swapping} In some sense, bribery by swapping approvals is our most interesting scenario: there are cases where a given problem has the same complexity both in the unrestricted setting and for some structured domain (and this happens both for tractability and ${\mathrm{NP}}$-completeness), as well as cases where the unrestricted variant is tractable but the structured one is not, or vice versa. \subsection{Approval Swaps to the Preferred Candidate} Let us first consider a variant of \textsc{AV-SwapApprovals-Bribery} where each unit operation moves an approval from some candidate to the preferred one. We call operations of this form \textsc{SwapApprovals to p}. In the unrestricted setting, this problem is in ${\mathrm{P}}$ for unit prices but is ${\mathrm{NP}}$-complete if the prices are arbitrary. For the CI and VI domains, the problem can be solved in polynomial time for both types of prices. While for the CI domain this is not so surprising---indeed, in this case the possible unit operations are very limited---the VI case requires quite some care. \begin{theorem} \label{thm:swap-approvals-to-p-ci} \textsc{AV-\$SwapApprovals to p-CI-Bribery} $\in {\mathrm{P}}$. \end{theorem} \appendixproof{thm:swap-approvals-to-p-ci} { \begin{proof} Consider an input with CI election $E = (C,V)$, committee size $k$, preferred candidate $p$, and budget $B$. W.l.o.g., we assume that $C = \{\ell_{m'}, \ldots, \ell_1, p, r_1, \ldots, r_{m''}\}$, $V = \{v_1, \ldots, v_n\}$, and the societal axis is: \[ \rhd = \ell_{m'}\ \cdots\ \ell_2\ \ell_1 \ p \ r_1\ r_2\ \cdots\ r_{m''}.
\] Since unit operations must move approvals to $p$, for each voter $v_i$ exactly one of the following holds: \begin{enumerate} \item There is $t \in [m']$ such that $v_i$ has approval set $[\ell_t,\ell_1]$ and the only possible operation is to move an approval from~$\ell_t$ to $p$ at some given cost. \item There is $t \in [m'']$ such that $v_i$ has approval set $[r_1,r_t]$ and the only possible operation is to move an approval from~$r_t$ to $p$ at some given cost. \item It is illegal to move any approvals for this voter. \end{enumerate} For each candidate $c \in C \setminus \{p\}$ and each integer $x$, we let $f(c,x)$ be the lowest cost of moving $x$ approvals from $c$ to $p$ (we set $f(c,x) = +\infty$ if doing so is impossible). By the above discussion, we can compute the values of $f$ in polynomial time. Our algorithm proceeds as follows. First, we guess the score $y \in [n]_0$ that we expect $p$ to end up with. Second, we let~$S$ be the set of candidates that in the input election have score higher than $y$. For each candidate $c \in S$ we define his or her cost to be $f(c, \score_E(c)-y)$, i.e., the lowest cost of moving approvals from $c$ to $p$ so that $c$ ends up with score~$y$. Then we let~$S'$ be a set of~$|S|-(k-1)$ members of~$S$ with the lowest costs (if~$|S| \leq k-1$ then~$S'$ is the empty set). For each~$c \in S'$, we perform the approval moves implied by $f(c,\score_E(c)-y)$. Finally, we ensure that~$p$ has~$y$ approvals by performing sufficiently many of the cheapest still-not-performed unit operations (we reject for this value of $y$ if not enough operations remain). If the total cost of all performed unit operations is at most $B$, we accept (indeed, we have just found a bribery that ensures that there are at most $k-1$ candidates with score higher than $p$ and whose cost is not too high). Otherwise, we reject for this value of~$y$. If there is no $y$ for which we accept, we reject.
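To make the procedure concrete, here is a minimal Python sketch of the algorithm just described. The encoding is ours, not the paper's: for each voter, `options` contains either `None` (case~3 above) or the unique pair of the candidate whose approval may be moved to $p$ and the price of that move, while `scores` maps every candidate (including $p$) to his or her initial approval score; the table $f$ is recovered by sorting the per-candidate prices. The sketch also rejects a guess $y$ whenever the forced moves for $S'$ would already push $p$'s score above $y$.

```python
INF = float("inf")

def cheapest_bribery(options, scores, p, k, B):
    """Return True iff a bribery of cost <= B puts p in some winning
    committee of size k, when every unit operation moves one approval
    to p (hypothetical encoding, see the lead-in)."""
    n = len(options)
    # group the per-voter prices by the candidate an approval is taken from
    by_cand = {}
    for opt in options:
        if opt is not None:
            c, cost = opt
            by_cand.setdefault(c, []).append(cost)
    for costs in by_cand.values():
        costs.sort()

    def f(c, x):  # lowest cost of moving x approvals from c to p
        costs = by_cand.get(c, [])
        return sum(costs[:x]) if x <= len(costs) else INF

    for y in range(scores[p], n + 1):      # guess p's final score y
        S = [c for c in scores if c != p and scores[c] > y]
        costed = sorted((f(c, scores[c] - y), c) for c in S)
        need_drop = max(0, len(S) - (k - 1))
        chosen = costed[:need_drop]        # S': cheapest candidates to drop
        total = sum(cost for cost, _ in chosen)
        moved = sum(scores[c] - y for _, c in chosen)
        if moved > y - scores[p]:          # forced moves overshoot y
            continue
        used = {c for _, c in chosen}
        # top p up to score y with the cheapest remaining unit operations
        remaining = []
        for c, costs in by_cand.items():
            start = scores[c] - y if c in used else 0
            remaining.extend(costs[start:])
        remaining.sort()
        need_more = y - scores[p] - moved
        if need_more > len(remaining):
            continue
        total += sum(remaining[:need_more])
        if total <= B:
            return True
    return False
```

On the toy instance used below, the cheapest successful bribery has cost $1$ (guess $y=2$ and move the single cheapest approval away from the highest-scoring candidate).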
\end{proof} } Our algorithm for the VI case is based on dynamic programming (expressed as searching for a shortest path in a certain graph) and relies on the fact that, due to the VI property, the same unit operation never needs to be performed twice. \begin{theorem} \label{thm:swap-approvals-to-p-vi} For the case where we can only swap approvals to the preferred candidate, \textsc{AV-\$SwapApprovals-VI-Bribery} is in ${\mathrm{P}}$. \end{theorem} \begin{proof} Consider an instance of our problem with an election $E=(C,V)$, committee size~$k$, preferred candidate $p$, and budget $B$. Without loss of generality, we assume that $V = \{v_1, \ldots, v_n\}$ and that the election is VI with respect to the order $v_1 \rhd v_2 \rhd \cdots \rhd v_n$. We also assume that $p$ has at least one approval (if this were not the case, we could first try all possible single-approval swaps to $p$). At a high level, our algorithm proceeds as follows: We try all pairs of integers $\alpha$ and $\beta$ such that $1 \leq \alpha \leq \beta \leq n$ and, for each of them, we check if there is a bribery with cost at most $B$ that ensures that the preferred candidate is (a)~approved exactly by voters $v_\alpha, \ldots, v_\beta$, and (b)~belongs to some winning committee. If such a bribery exists then we accept; otherwise, we reject. Below we describe the algorithm that finds the cheapest successful bribery for a given pair $\alpha, \beta$ (if one exists). Let $\alpha$ and $\beta$ be fixed. Further, let $x, y$ be two integers such that in the original election $p$ is approved exactly by voters $v_x, v_{x+1}, \ldots, v_y$. Naturally, we require that $\alpha \leq x \leq y \leq \beta$; if this condition is not met then we drop this pair $\alpha, \beta$. We let $s = \beta - \alpha + 1$ be the score that $p$ is to have after the bribery. We say that a candidate $c \in C \setminus \{p\}$ is \emph{dangerous} if his or her score in the original election is above $s$.
Otherwise, we say that this candidate is \emph{safe}. Let $D$~be the number of dangerous candidates. For $p$ to become a member of some winning committee, we need to ensure that after the bribery at most $k-1$ dangerous candidates still have more than $s$ points (each safe candidate certainly has at most $s$ points). To do so, we analyze a certain digraph. For each pair of integers $a$ and $b$ such that $\alpha \leq a \leq b \leq \beta$ and each integer $d \in [|C|-1]_0$ we form a node $(a,b,d)$, corresponding to the fact that there is a bribery after which $p$ is approved exactly by voters $v_a, v_{a+1}, \ldots, v_b$ and exactly $d$ dangerous candidates have scores above $s$. Given two nodes $u' = (a',b', d')$ and $u'' = (a'',b'',d'')$, such that $a' \geq a''$, $b' \leq b''$, and $d'' \in \{d',d'-1\}$, there is a directed edge from $u'$ to $u''$ with weight $\cost(u',u'')$ exactly if there is a candidate $c$ such that after bribing voters $v_{a''}, v_{a''+1}, \ldots, v_{a'-1}$ and $v_{b'+1}, \ldots, v_{b''-1}, v_{b''}$ to move an approval from $c$ to $p$ it holds that: \begin{enumerate} \item voters approving $c$ still form an interval, \item if $c$ is a dangerous candidate and his or her score drops to at most $s$, then $d'' = d'-1$, and, otherwise, $d'' = d'$, and \item the cost of this bribery is exactly $\cost(u',u'')$. \end{enumerate} One can verify that for each node $u = (a,b,d)$ the weight of the shortest path from $(x,y,D)$ to $u$ is exactly the price of the lowest-cost bribery that ensures that $p$ is approved by voters $v_a, \ldots, v_b$ and exactly $d$ dangerous candidates have scores above $s$ (the VI property ensures that no approval is ever moved twice). Thus it suffices to find a node $(\alpha, \beta, K)$ with $K < k$ for which the weight of the shortest path from $(x,y,D)$ is at most $B$. Doing so is possible in polynomial time using, e.g., Dijkstra's classic algorithm.
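The shortest-path computation in the last step is entirely standard; for completeness, here is a generic Dijkstra routine in Python over an adjacency-list digraph with nonnegative weights. This is not the construction-specific graph from the proof (whose nodes would be the triples $(a,b,d)$, with edge weights $\cost(u',u'')$); it only illustrates the routine invoked there.

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source in a digraph with
    nonnegative edge weights; adj maps a node to a list of
    (neighbor, weight) pairs. Nodes absent from dist are unreachable."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was found already
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In the proof's setting one would build the node set $\{(a,b,d)\}$, run this routine from $(x,y,D)$, and accept if some node $(\alpha,\beta,K)$ with $K<k$ has distance at most $B$.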
\end{proof} \subsection{Arbitrary Swaps} Let us now consider the full variant of bribery by swapping approvals. For the unrestricted domain, the problem is ${\mathrm{NP}}$-complete for general prices, but admits a polynomial-time algorithm for unit ones~\citep{fal-sko-tal:c:bribery-measure}. For the CI domain, ${\mathrm{NP}}$-completeness holds even in the latter setting. \begin{remark} The model of unit prices, applied directly to the case of \textsc{SwapApprovals-CI-Bribery}, is somewhat unintuitive. For example, consider societal axis $c_1 \rhd c_2 \rhd \cdots \rhd c_{10}$ and an approval set $[c_3,c_5]$. The costs of swap operations that transform it into, respectively, $[c_4,c_6]$, $[c_5,c_7]$, and $[c_6,c_8]$ are $1$, $2$, and $3$, as one would naturally expect. Yet, the cost of transforming it into, e.g., $[c_8,c_{10}]$ would also be~$3$ (move an approval from $c_3$ to $c_8$, from $c_4$ to $c_9$, and from $c_5$ to $c_{10}$), which is not intuitive. Instead, it would be natural to define this cost to be $5$ (move the interval by $5$ positions to the right). Our proof of Theorem~\ref{thm:swap-approvals-ci} works without change for both these interpretations of unit prices. \end{remark} \begin{theorem} \label{thm:swap-approvals-ci} \textsc{AV-SwapApprovals-CI-Bribery} is ${\mathrm{NP}}$-complete. \end{theorem} \begin{proof} We give a reduction from \textsc{Cubic Independent Set}. Let $G$ be our input graph, where $V(G) = \{c_1, \ldots, c_n\}$ and $E(G) = \{e_1, \ldots, e_L\}$, and let $h$ be the size of the desired independent set. We construct the corresponding \textsc{AV-SwapApprovals-CI-Bribery} instance as follows. Let $B = 3h$ be our budget and let $t = B+1$ be a certain parameter (which we interpret as ``more than the budget''). We form candidate set $C = V(G) \cup \{p\} \cup F \cup D$, where $p$ is the preferred candidate, $F$ is a set of $t(n+1)$ filler candidates, and $D$ is a set of $t$ dummy candidates. Altogether, there are $t(n+2)+n+1$ candidates. 
We denote individual filler candidates by $\diamond$ and individual dummy candidates by $\bullet$; we fix the societal axis to be: \[ \rhd = \underbrace{ \overbrace{\diamond \cdots \diamond}^t c_1 \overbrace{\diamond \cdots \diamond}^t c_2 \diamond \cdots \diamond c_{n-1} \overbrace{\diamond \cdots \diamond}^t c_n \overbrace{\diamond \cdots \diamond}^t \overbrace{\bullet \cdots \bullet}^t p }_{t(n+2) + n + 1} \] For each positive integer $i$ and each candidate $c$, we write $\mathrm{prec}_i(c)$ to mean the $i$-th candidate preceding $c$ in $\rhd$. Similarly, we write $\mathrm{succ}_i(c)$ to denote the $i$-th candidate after~$c$. We introduce the following voters: \begin{enumerate} \item For each edge $e_i = \{c_a,c_b\}$ we add an edge voter $v_{a,b}$ with approval set $[c_a,c_b]$. For each vertex $c_i \in V(G)$, we write $V(c_i)$ to denote the set of the three edge voters corresponding to the edges incident to $c_i$. \item Recall that $L = |E(G)|$. For each vertex candidate $c_i \in V(G)$, we add sufficiently many voters with approval set $[\mathrm{prec}_{t}(c_i), \mathrm{succ}_{t}(c_i)]$, so that, together with the score from the edge voters, $c_i$~ends up with $L$ approvals. \item We add $L-3$ voters that approve $p$. \item For each group of $t$ consecutive filler candidates, we add $L+4t$ filler voters, each approving all the candidates in the group. \end{enumerate} Altogether, $p$ has score $L-3$, all vertex candidates have score $L$, the filler candidates have at least $L+4t$ approvals each, and the dummy candidates have score $0$. We set the committee size to be $k = t(n+1) + (n-h)+1$. Prior to any bribery, each winning committee consists of $t(n+1)$ filler candidates and $(n-h)+1$ vertex ones (chosen arbitrarily). This completes our construction. Let us assume that $G$ has a size-$h$ independent set and denote it with $S$.
For each $c_i \in S$ and each edge $e = \{c_i,c_j\}$ incident to $c_i$, we bribe edge voter $v_{i,j}$ to move an approval from $c_i$ to a filler candidate right next to $c_j$. This is possible for each of the three edges incident to $c_i$ because $S$ is an independent set. As a consequence, each vertex candidate from $S$ ends up with $L-3$ approvals. Thus only $n-h$ vertex candidates have score higher than $p$ and, so, there is a winning committee that includes $p$. For the other direction, let us assume that it is possible to ensure that $p$ belongs to some winning committee via a bribery of cost at most $B$. Let us consider the election after some such bribery has been executed. First, we note that all the filler candidates still have scores higher than $L+3t$ (this is so because decreasing a candidate's score always has at least unit cost and $B < t$). Similarly, $p$ still has score $L-3$ because increasing his or her score, even by one, costs at least $t$ (indeed, $p$ is separated from the other candidates by $t$ dummy candidates). Since $p$ belongs to some winning committee, this means that at least $h$ vertex candidates must have ended up with score at most $L-3$. In fact, since our budget is $B = 3h$, a simple counting argument shows that exactly $h$ of them have score exactly $L-3$, and all the other ones still have score~$L$. Let~$S$ be the set of vertex candidates with score $L-3$. The only way to decrease the score of a vertex candidate $c_i$ from $L$ to $L-3$ by spending three units of the budget is to bribe each of the three edge voters from $V(c_i)$ to move an approval from $c_i$ to a filler candidate. However, if we bribe some edge voter $v_{i,j}$ to move an approval from $c_i$ to a filler candidate, then we cannot bribe that same voter to also move an approval away from $c_j$ (this would either cost more than $t$ units of budget or would break the CI condition). Thus it must be the case that the candidates in $S$ correspond to a size-$h$ independent set of~$G$.
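The counting behind the backward direction can be replayed mechanically: with budget $3h$, dropping each candidate of a set $S$ from score $L$ to $L-3$ requires all three of its incident edge voters, and an edge voter can serve at most one of its endpoints, so the drop is affordable exactly when $S$ is independent. A small Python sketch of this feasibility check (the representation of edges as two-element sets of vertex indices, and the function name, are ours, not the paper's):

```python
def drop_cost(edges, S):
    """Cheapest budget for dropping every vertex candidate in S from
    score L to L-3, under the rule that each edge voter can move an
    approval away from at most one of its two endpoints; float('inf')
    if impossible. edges: list of 2-element sets of vertex indices."""
    for e in edges:
        if len(set(e) & S) > 1:   # both endpoints would need this voter
            return float("inf")
    return 3 * len(S)             # three unit-cost swaps per candidate
```

For instance, on the (cubic) complete graph $K_4$ no two vertices are independent, so only singletons can be dropped within budget.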
\end{proof} \hyphenation{fa-li-szew-ski} Next let us consider the VI domain. On the one hand, the complexity of our problem for unit prices remains open. On the other hand, for arbitrary prices we show that the problem remains ${\mathrm{NP}}$-complete. Our proof works even for the single-winner setting. In the unrestricted domain, the single-winner variant can be solved in polynomial time~\citep{fal:c:nonuniform-bribery}. \begin{theorem} \label{thm:vi-swaps} \textsc{AV-\$SwapApprovals-VI-Bribery} is ${\mathrm{NP}}$-complete, even for the single-winner case (i.e., for committees of size one). \end{theorem} \appendixproof{thm:vi-swaps} { \begin{proof} We give a reduction from \textsc{RX3C}. Let $I = (X,\mathcal{S})$ be an instance of \textsc{RX3C}, where $X = \{x_1, \ldots, x_{3n}\}$ is a universe and $\mathcal{S} = \{S_1, \ldots, S_{3n}\}$ is a family of size-$3$ subsets of $X$ (recall that each element of $X$ belongs to exactly three sets from $\mathcal{S}$). We form a single-winner approval election with $10n+1$ voters $V = \{v_0, v_1, \ldots, v_{10n}\}$ and the following candidates: \begin{enumerate} \item We have the preferred candidate $p$. \item For each universe element $x_i \in X$ we have a corresponding universe candidate, also referred to as $x_i$. \item For each set $S_t = \{x_i, x_j, x_k\}$ we have two set candidates, $S'_t$ and $S''_t$, and three content candidates $y_{t,i}$, $y_{t,j}$, and $y_{t,k}$.
\end{enumerate} The approvals for these candidates, and the costs of moving them, are as follows (if we do not explicitly list the cost of moving some approval from a given candidate to another, then this move has cost $+\infty$, i.e., this swap is impossible; the construction is illustrated in Figure~\ref{fig:vi-swap}): \begin{enumerate} \item Candidate $p$ is approved by $P = 7n-1$ voters, $v_1, \ldots, v_P$. \item Each candidate $x_i \in X$ is approved by exactly $P+1$ voters, $v_{i}, \ldots, v_{i+P}$. For each of the set candidates $S'_t$, the cost of moving $v_i$'s approval from $x_i$ to $S'_t$ is $0$ if $x_i$ belongs to the set $S_t$, and it is $+\infty$ otherwise. \item None of the $S'_t$ candidates is initially approved by any of the voters. \item Each candidate $S''_t$ is approved by all the voters. For each $t \in [3n]$ we have the following costs of moving the approval of $S''_t$: \begin{enumerate} \item The cost of moving $v_0$'s approval from $S''_t$ to $S'_t$ is~$0$; the same holds for $v_{3n+1}$. \item For each $i \in [3n]$, the cost of moving $v_i$'s approval from $S''_t$ to $S'_t$ is $0$ if $x_i$ belongs to $S_t$, and it is $+\infty$ otherwise. \item For each $x_i \in S_t$, the cost of moving $v_i$'s approval from $S''_t$ to $y_{t,i}$ is $0$. \item The cost of moving $v_{10n}$'s approval from $S''_t$ to $S'_t$ is $1$. \item For each voter in $\{v_{P}, \ldots, v_{10n-1}\}$, the cost of moving the approval from $S''_t$ to $S'_t$ is $0$. \end{enumerate} \end{enumerate} One can verify that this election has the VI property for the natural order of the voters (i.e., for $v_0 \rhd \cdots \rhd v_{10n}$). We claim that it is possible to ensure that $p$ becomes a winner of this election by approval-moves of cost at most $B = 2n$ (such that the election still has the VI property after these moves) if and only if $I$ is a yes-instance of \textsc{RX3C}. 
\begin{figure} \centering \caption{An illustration of the construction used in the proof of Theorem~\ref{thm:vi-swaps}.}\label{fig:vi-swap} \end{figure} For the first direction, let us assume that $I$ is a yes-instance of \textsc{RX3C} and that $R \subseteq [3n]$ is a size-$n$ set such that $\bigcup_{i \in R}S_i = X$ (naturally, for distinct $t, \ell \in R$, the sets $S_t$ and $S_\ell$ are disjoint). It is possible to ensure that $p$ becomes a winner of our election by performing the following swaps (intuitively, for $t \notin R$, the $3n+2$ voters with the highest indices move approvals from $S''_t$ to $S'_t$, and for $t \in R$, the $3n+2$ voters with the lowest indices move their approvals from $S''_t$ either to $S'_t$ or to the content candidates): \begin{enumerate} \item For each $t \in [3n] \setminus R$, voters $v_{P}, \ldots, v_{10n}$ move approvals from $S''_{t}$ to $S'_t$. In total, this costs $2n$ units of budget and ensures that each involved candidate $S''_t$ has score~$P$, whereas each involved candidate $S'_t$ has score $3n+2$. \item For each $t \in R$, we perform the following moves (we assume that $S_t = \{x_i, x_j, x_k\}$). For each voter $v_\ell \in \{v_i, v_j, v_k\}$, we move this voter's approval from $x_\ell$ to $S'_t$ (at zero cost), and we move this voter's approval from $S''_t$ to $y_{t,\ell}$ (at zero cost). For each $\ell \in \{0, \ldots, 3n+1 \} \setminus \{i,j,k\}$, we move voter $v_\ell$'s approval from $S''_t$ to $S'_t$ (at zero cost). All in all, candidates $x_i$, $x_j$, and $x_k$ lose one approval each, and end up with $P$ approvals; $S''_t$ loses $3n+2$ approvals and also ends up with $P$ approvals; $S'_t$ ends up with $3n+2$ approvals (from voters $v_0, \ldots, v_{3n+1}$); and candidates $y_{t,i}$, $y_{t,j}$, and $y_{t,k}$ end up with a single approval each. The cost of these operations is zero. \end{enumerate} One can verify that after these swaps the election still has the VI property, $p$ has score $P$, and all the other candidates have score at most $P$.
For the other direction, let us assume that there is a sequence of approval moves that costs at most $2n$ units of budget and ensures that $p$ is a winner. Since all the moves of approvals from and to $p$ have cost $+\infty$, this means that every candidate must end up with at most $P$ points. Thus each universe candidate $x_i$ loses one approval, which is moved to a set candidate $S'_t$ such that $S_t$ contains $x_i$ (all other moves of approvals away from the universe candidates are too expensive), and each set candidate $S''_t$ loses at least $3n+2$ points. Let us consider some candidate $S''_t$. Since, in the end, $S''_t$ has at most $P$ points, it must be that either voter $v_0$ moved an approval from $S''_t$ to $S'_t$ or voter $v_{10n}$ did so (it is impossible for both of these voters to make the move since then, to ensure the VI property, we would have to move all the approvals from $S''_t$ to $S'_t$, and such a bribery would be too expensive; similarly, it is impossible that either of these voters moves his or her approval to some other candidate). If $v_0$ moves an approval from $S''_t$ to $S'_t$, then we refer to $S''_t$ as a \emph{solution candidate}. Otherwise we refer to him or her as a \emph{non-solution} candidate. Due to the costs of approval moves, there are at most $2n$ non-solution candidates. W.l.o.g., we can assume that for each non-solution candidate $S''_t$ voters $v_{P}, \ldots, v_{10n}$ move approvals from $S''_t$ to $S'_t$ (indeed, one can verify that this is the only way of ensuring that non-solution candidates have score at most $P$, that the election still satisfies the VI property for these candidates, and that we do not exceed the budget). Let us consider some solution candidate $S''_t$ such that $S_t = \{x_i,x_j,x_k\}$.
For $S''_t$ to end up with at most $P$ points, voters $v_0, \ldots, v_{3n+1}$ must move their approvals from $S''_t$ someplace else (for $v_0$ this holds by the definition of a solution candidate; for the other voters it follows from the fact that $v_{10n}$ cannot move his or her approval to $S'_t$, and from the need to maintain the VI property). In fact, the only option for voters $v_0$ and $v_{3n+1}$ is to move their approvals from $S''_t$ to $S'_t$. However, then, for the VI property to hold for $S'_t$ and $S''_t$, voters $v_1, \ldots, v_{3n}$ must approve $S'_t$ and disapprove $S''_t$. This is possible only if: \begin{enumerate} \item For each $\ell \in \{i,j,k\}$, voter $v_\ell$ moves an approval from $x_\ell$ to $S'_t$ and an approval from $S''_t$ to $y_{t,\ell}$. \item For each $\ell \in [3n] \setminus \{i,j,k\}$, voter $v_\ell$ moves an approval from $S''_t$ to $S'_t$. \end{enumerate} The above is possible exactly if the solution candidates correspond to $n$ sets forming an exact cover of the universe. As the reduction clearly runs in polynomial time, this completes the proof. \end{proof} } \section{Summary} We have studied bribery in multiwinner approval elections, for the case of candidate interval (CI) and voter interval (VI) preferences. Depending on the setting, our problem can be easier than, harder than, or equally difficult as in the unrestricted domain. \paragraph{Acknowledgments.} This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101002854). \noindent \includegraphics[width=3cm]{erceu} \end{document}
\begin{document} \title{QuiversToricVarieties: a package to construct quivers of sections on complete toric varieties} \author{Nathan Prabhu-Naik} \date{9th October 2014} \begin{abstract} Given a collection of line bundles on a complete toric variety, the \emph{Macaulay2} package \emph{QuiversToricVarieties} contains functions to construct its quiver of sections and check whether the collection is strong exceptional. It contains a database of full strong exceptional collections of line bundles for smooth Fano toric varieties of dimension less than or equal to 4. \end{abstract} \maketitle \section{Introduction} \noindent For a collection of non-isomorphic line bundles $\mathcal{L} = \lbrace \mathcal{L}_0 := \mathcal{O}_X, \mathcal{L}_1, \ldots, \mathcal{L}_r \rbrace$ on a complete normal toric variety $X$, the endomorphism algebra $\operatorname{End}(\bigoplus_i \mathcal{L}_i)$ can be described as the quotient of the path algebra of its quiver of sections by an ideal of relations determined by labels on the arrows in the quiver \cite{CrSm}. The vertices of the quiver correspond to the line bundles and there is a natural order on the vertices defined by $i < j$ if $\operatorname{Hom}(L_j, L_i) = 0$. For $i < j$, the number of arrows from $i$ to $j$ is equal to the dimension of the cokernel of the map \begin{equation} \bigoplus_{i<k<j} \operatorname{Hom}(L_i , L_k) \otimes \operatorname{Hom}(L_k , L_j) \longrightarrow \operatorname{Hom} (L_i , L_j), \end{equation} and we label each arrow by the toric divisors corresponding to the sections in a basis for the cokernel. Using the given order on $\mathcal{L}$, the collection is \emph{strong exceptional} if \begin{equation} \operatorname{Ext}^i (L_j, L_k) = 0, \forall j, \ k, \text{ and } i \neq 0. \end{equation} \noindent Let $\mathcal{D}^b(X)$ denote the bounded derived category of coherent sheaves on $X$. 
The collection $\mathcal{L}$ is \emph{full}, or \emph{generates} $\mathcal{D}^b(X)$, if the smallest triangulated full subcategory of $\mathcal{D}^b(X)$ containing $\mathcal{L}$ is $\mathcal{D}^b(X)$ itself. A tilting bundle $T$ on $X$ is a vector bundle such that $T$ generates $\mathcal{D}^b(X)$ and $\operatorname{Ext}^i(T,T) = 0$ for $i>0$; given a full strong exceptional collection of line bundles $\mathcal{L}$ on $X$, the direct sum $\bigoplus_{L_i \in \mathcal{L}} L_i$ is a tilting bundle. The following theorem by Baer and Bondal allows us to understand $\mathcal{D}^b(X)$ in terms of the module category of a finite dimensional algebra. \begin{theorem}\cite{Baer,Bond} Let $T$ be a tilting bundle on $X$, $A = \operatorname{End}(T)$ and $\mathcal{D}^b(\mod A)$ be the bounded derived category of finitely generated right $A$-modules. Then \begin{equation} \mathbf{R}\operatorname{Hom} (T, - ) \colon \mathcal{D}^b(X) \rightarrow \mathcal{D}^b(\mod A) \end{equation} is an equivalence of triangulated categories. \end{theorem} \noindent A complete normal toric variety induces a short exact sequence of abelian groups \begin{equation}\label{ses} \begin{CD} 0@>>> M @>>> \ensuremath{\mathbb{Z}}^{\Sigma(1)} @>\deg >> \operatorname{Cl}(X)@>>> 0, \end{CD} \end{equation} where $M$ is the character lattice of the dense torus in $X$, $\Sigma(1)$ is the set of rays in the fan $\Sigma$ of $X$, and the map $\deg$ sends a toric divisor $D \in \ensuremath{\mathbb{Z}}^{\Sigma(1)}$ to the rank $1$ reflexive sheaf $\mathcal{O}_{X} (D)$ in the class group $\operatorname{Cl} (X)$ (see for example \cite{Fult}). Showing that $\mathcal{L}$ is strong exceptional in this situation is equivalent to checking that $H^i(X, L^{-1}_j \otimes L_k) = 0$ for $i > 0,\ 0 \leq j,k \leq r$.
Using a theorem of Eisenbud, Musta{\c{t}}{\u{a}} and Stillman \cite{EiMuSt}, we can determine if the cohomology of $\mathcal{O}_X(D)$ vanishes by considering when $\mathcal{O}_X(D)$ avoids certain affine cones constructed in $\operatorname{Cl}(X)$, which we call \emph{non-vanishing cohomology cones}. The purpose of the package \emph{QuiversToricVarieties} for \emph{Macaulay2} \cite{M2} is to construct the quiver of sections for a collection of line bundles on a complete toric variety and to check if the collection is strong exceptional. We note that computer programs that check if a collection of line bundles on a toric variety is strong exceptional do already exist; see for example Perling's \emph{TiltingSheaves} \cite{Perl}. Restricting our attention to smooth toric Fano varieties, toric divisorial contractions give the collection of $n$-dimensional toric Fano varieties a poset structure, described for $n=3$ by \cite{Oda} and for $n=4$ by \cite{Sato} (see also \cite[Remark 2.4]{Prna}). The contractions induce lattice maps between the short exact sequences (\ref{ses}) determined by the varieties, and these lattice maps are an essential ingredient in the proof that each smooth toric Fano variety of dimension $\leq 4$ has a full strong exceptional collection of line bundles \cite[Theorem 6.4]{Prna}. The package \emph{QuiversToricVarieties} contains a database of these lattice maps and of full strong exceptional collections of line bundles on all smooth toric Fano varieties of dimension $\leq 4$. In the case when $X$ is a smooth toric Fano variety, let $Y = \operatorname{tot}(\omega_X)$ be the total space of the canonical bundle on $X$. The package \emph{QuiversToricVarieties} contains methods to check if the pullback of a full strong exceptional collection of line bundles on $X$ along the morphism $Y \rightarrow X$ is a tilting bundle on $Y$.
\emph{QuiversToricVarieties} depends on the package \emph{NormalToricVarieties} for the construction of toric varieties and for the database of smooth toric Fano varieties. All varieties are defined over $\ensuremath{\Bbbk} = \ensuremath{\mathbb{C}}$. \section{Overview of the Package} \noindent Let $X$ be a complete normal toric variety constructed in \emph{NormalToricVarieties} with a torsion-free class group. The class group lattice of $X$ has a basis determined by \verb+fromWDivToCl+ and the function \verb+fromPicToCl+ can be used to determine which vectors in the lattice correspond to line bundles. The input for the method \verb+quiver+ is a complete normal toric variety with a torsion-free class group, together with a list of vectors $v_i$ in the class group lattice that correspond to the line bundles $L_i$. The vectors are ordered by \verb+quiver+ and the basis of $\operatorname{Hom} (L_i,L_j)$ is calculated by determining the basis of the multidegree $v_j-v_i$ over the Cox ring of the variety. From this basis, the irreducible maps are chosen and listed as arrows, with the corresponding toric divisors as labels. If some of the vectors do not correspond to line bundles then a quiver is still constructed but the resulting path algebra modulo relations may not be isomorphic to $\operatorname{End}(\bigoplus_{i \in Q_0} E_i)$, where $E_i$ are the rank $1$ reflexive sheaves corresponding to $v_i$. Alternatively, we can produce a quiver by explicitly listing the vertices, the arrows with labels and the variety. The methods \verb+source+, \verb+target+, \verb+label+ and \verb+index+ return the specific details of an arrow in the quiver, a list of which can be accessed by inputting \verb+Q_1+. Besides the method \verb+quiver+, the method \verb+doHigherSelfExtsVanish+ forms the core of the package. The primary input is a quiver of sections. 
The method creates the non-vanishing cohomology cones in the class group lattice for $X$ and determines if the vectors $v_i - v_j$ avoid these cones. The cones are determined by certain subsets $I$ of the rays of the fan $\Sigma$ for $X$; if the complement of the supporting cones for $I$ in $\Sigma$ has non-trivial reduced homology, then $I$ is called a \emph{forbidden set} and it determines a cone in $\ensuremath{\mathbb{Z}}^{\Sigma(1)}$. The forbidden sets can be calculated using the function \verb+forbiddenSets+, and the image of a cone determined by a forbidden set under the map \verb+fromWDivToCl X+ is a non-vanishing cohomology cone in $\operatorname{Cl}(X)$. A database in \emph{NormalToricVarieties} contains the smooth toric Fano varieties up to dimension $6$ and can be accessed using \verb+smoothFanoToricVariety+. The divisorial contractions between the smooth toric Fano varieties up to dimension $4$ are listed under the \verb+contractionList+ command, and the induced maps between their respective short exact sequences (\ref{ses}) are recalled from a database in \emph{QuiversToricVarieties} using the \verb+tCharacterMap+, \verb+tDivisorMap+ and the \verb+picardMap+ commands. Note that as each variety considered is smooth, its class group is isomorphic to its Picard group. The database containing full strong exceptional collections of line bundles for smooth Fano toric varieties in dimension $\leq 4$ can be accessed using \verb+fullStrExcColl+. The collections for the surfaces were calculated by King \cite{King}, the threefolds by Costa--Mir\'{o}-Roig \cite{CoMR1}, Bernardi--Tirabassi \cite{BeTi} and Uehara \cite{Ueha} and the fourfolds by Prabhu-Naik \cite{Prna}. \section{An Example} \noindent We illustrate the main methods in \emph{QuiversToricVarieties} using the blowup of $\ensuremath{\mathbb{P}}^2$ at three points, the birationally-maximal smooth toric Fano surface. 
It is contained in the toric Fano database in \emph{NormalToricVarieties}, which is loaded by the \emph{QuiversToricVarieties} package.
\begin{verbatim}
i1 : loadPackage "QuiversToricVarieties";

i2 : X = smoothFanoToricVariety(2,4);
\end{verbatim}
\noindent A full strong exceptional collection $\mathcal{L}$, first considered by King \cite{King}, can be recalled from the database and its quiver of sections can be created.
\begin{verbatim}
i3 : L = fullStrExcColl(2,4);

o3 = {{0,0,0,0},{0,0,1,1},{0,1,0,0},{0,1,1,0},{1,0,0,0},{1,0,0,1}}

i4 : Q = quiver(L,X);
\end{verbatim}
\noindent We can view the details of the quiver, either by displaying the arrows at each vertex, or by listing all of the arrows and considering their source, target and label.
\begin{verbatim}
i5 : Q#0

o5 = HashTable{1 => {x_0x_1 , x_3x_4 } }
               2 => {x_1x_2 , x_4x_5 }
               3 => {x_2x_3 , x_0x_5 }
               degree => {0, 0, 0, 0}

i6 : first Q_1

o6 = arrow_1

i7 : source oo, target oo, label oo

o7 = (0, 1, x_0x_1 )
\end{verbatim}
\noindent The forbidden sets of rays can be computed and the collection of line bundles can be checked to be strong exceptional. The method \verb+doHigherSelfExtsVanish+ creates a copy of the non-vanishing cohomology cones in the cache table for $X$, where the cones are given by a vector and a matrix $\{w,M\}$ encoding the supporting closed half spaces of the cone, in which case the lattice points of the cone are $\{ v \in \operatorname{Cl} (X) \mid M v \leq w \}$. The non-vanishing cone for $H^2$ is displayed below.
\begin{verbatim}
i8 : peek forbiddenSets X

o8 = MutableHashTable{1 => {{0,2},{0,3},{1,3},{0,1,3},{0,2,3},{0,4},{1,4},...}
                      2 => {{0,1,2,3,4,5}}

i9 : doHigherSelfExtsVanish Q

o9 = true

i10 : X.cache.cones#2

o10 = {{| -1 |, | 1 1 1 0 1 |}}
       | -1 |   | 1 0 1 1 1 |
       | -1 |   | 0 1 1 0 0 |
       | -1 |   | 0 0 0 1 1 |
\end{verbatim}
Consider the chain of divisorial contractions $X =: X_4 \rightarrow X_3 \rightarrow X_2 \rightarrow X_0$ from $X$ to the toric Fano surfaces numbered $3$, $2$ and $0$ in the database. The contractions induce lattice maps $\operatorname{Pic}(X_4) \rightarrow \operatorname{Pic}(X_3) \rightarrow \operatorname{Pic}(X_2) \rightarrow \operatorname{Pic}(X_0)$ and the method \verb+doHigherSelfExtsVanish+ can check whether the non-isomorphic line bundles in the image of $\mathcal{L}$ under these lattice maps are strong exceptional for each contraction.
\begin{verbatim}
i11 : doHigherSelfExtsVanish(Q,{4,3,2,0})

o11 = true
\end{verbatim}
Now consider the morphism $\pi \colon \operatorname{tot}(\omega_X) \rightarrow X$. The pullback $\pi^* (\bigoplus_{L_i \in \mathcal{L}} L_i)$ is a tilting bundle on $Y = \operatorname{tot} (\omega_X)$ if \[ H^k(X,L_i \otimes L_j^{-1} \otimes \omega_X^{-m}) = 0\] for all $k>0$, $m \geq 0$ and $L_i,L_j \in \mathcal{L}$ (see for example \cite[Theorem 6.7]{Prna}). As $\omega^{-1}_X$ is ample, there exists a non-negative integer $n$ such that $L_i \otimes L_j^{-1} \otimes \omega_X^{-m}$ is nef for $0 \leq i,j \leq r$ and $m \geq n$, and hence $H^k(X,L_i \otimes L_j^{-1} \otimes \omega_X^{-m}) = 0$ for all $k>0$ by Demazure vanishing. The method \verb+bundlesNefCheck+ checks for a given integer $n$ whether $L_i \otimes L_j^{-1} \otimes \omega_X^{-n}$ is nef for all $L_i,L_j \in \mathcal{L}$.
\begin{verbatim}
i12 : n=2;

i13 : bundlesNefCheck(Q,n)

o13 = true
\end{verbatim}
If an integer $p$ is included as an additional input in \verb+doHigherSelfExtsVanish+, then the method checks, for all $0 \leq m \leq p$, whether the line bundles $L_i \otimes L_j^{-1} \otimes \omega^{-m}$ avoid the non-vanishing cohomology cones. Note that for our example, the computation above implies that it is enough to use the integer $n-1$.
\begin{verbatim}
i14 : doHigherSelfExtsVanish(Q,n-1)

o14 = true
\end{verbatim}
For $t \in \{4,3,2,0\}$, let $\{ L_{i,t} \}_{i \in I_t}$ denote the list of non-isomorphic line bundles in the image of $\mathcal{L}$ under the map $\operatorname{Pic}(X) \rightarrow \operatorname{Pic}(X_t)$ given by \verb+picardMap+, where $I_t$ is an index set. By including the list of divisorial contractions as an input in \verb+doHigherSelfExtsVanish+, we can check that \[ H^k(X_t,L_{i,t} \otimes (L_{j,t})^{-1} \otimes \omega_{X_t}^{-m}) = 0\] for $k >0$, $0 \leq m \leq n-1$, $t \in \{4,3,2,0\}$ and all $i,j \in I_t$.
\begin{verbatim}
i15 : doHigherSelfExtsVanish(Q,{4,3,2,0},n-1)

o15 = true
\end{verbatim}
For all $n$-dimensional smooth toric Fano varieties, $1 \leq n \leq 3$, and 88 of the 124 smooth toric Fano fourfolds, the database contains a chain complex of modules over the Cox ring for the variety. The chain complexes are used in \cite{Prna} to show that the collections of line bundles in the database for these varieties are full.
\begin{verbatim}
i16 : C = resOfDiag(2,4);

i17 : SS = ring C;

i18 : C

        6       12       6
o18 = SS  <-- SS   <-- SS
\end{verbatim}
\nocite{M2} \end{document}
\begin{document} \thispagestyle{empty} \setcounter{page}{1} \begin{center} {\large\bf Stability of a functional equation deriving from quartic and additive functions \vskip.20in {\bf M. Eshaghi Gordji } \\[2mm] {\footnotesize Department of Mathematics, Semnan University,\\ P. O. Box 35195-363, Semnan, Iran\\ [-1mm] e-mail: {\tt maj\[email protected]}} } \end{center} \vskip 5mm \noindent{\footnotesize{\bf Abstract.} In this paper, we obtain the general solution and the generalized Hyers-Ulam-Rassias stability of the functional equation $$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))-\frac{3}{7}(f(2y)-2f(y))+2f(2x)-8f(x).$$ \vskip.10in \footnotetext { 2000 Mathematics Subject Classification: 39B82, 39B52.} \footnotetext { Keywords: Hyers-Ulam-Rassias stability.} \newtheorem{df}{Definition}[section] \newtheorem{rk}[df]{Remark} \newtheorem{lem}[df]{Lemma} \newtheorem{thm}[df]{Theorem} \newtheorem{pro}[df]{Proposition} \newtheorem{cor}[df]{Corollary} \newtheorem{ex}[df]{Example} \setcounter{section}{0} \numberwithin{equation}{section} \vskip .2in \begin{center} \section{Introduction} \end{center} The stability problem of functional equations originated from a question of Ulam [24] in 1940, concerning the stability of group homomorphisms. Let $(G_1,.)$ be a group and let $(G_2,*)$ be a metric group with the metric $d(.,.).$ Given $\varepsilon >0$, does there exist a $\delta >0$ such that if a mapping $h:G_1\longrightarrow G_2$ satisfies the inequality $d(h(x.y),h(x)*h(y)) <\delta$ for all $x,y\in G_1$, then there exists a homomorphism $H:G_1\longrightarrow G_2$ with $d(h(x),H(x))<\varepsilon$ for all $x\in G_1$? In other words, under what conditions does there exist a homomorphism near an approximate homomorphism? The concept of stability for a functional equation arises when we replace the functional equation by an inequality which acts as a perturbation of the equation. In 1941, D. H. Hyers [9] gave the first affirmative answer to the question of Ulam for Banach spaces.
Let $f:{E}\longrightarrow{E'}$ be a mapping between Banach spaces such that $$\|f(x+y)-f(x)-f(y)\|\leq \delta, $$ for all $x,y\in E,$ and for some $\delta>0.$ Then there exists a unique additive mapping $T:{E}\longrightarrow{E'}$ such that $$\|f(x)-T(x)\|\leq \delta,$$ for all $x\in E.$ Moreover, if $f(tx)$ is continuous in $t$ for each fixed $x\in E,$ then $T$ is linear. Finally, in 1978, Th. M. Rassias [21] proved the following theorem. \begin{thm}\label{t1} Let $f:{E}\longrightarrow{E'}$ be a mapping from a normed vector space ${E}$ into a Banach space ${E'}$ subject to the inequality $$\|f(x+y)-f(x)-f(y)\|\leq \varepsilon (\|x\|^p+\|y\|^p), \eqno \hspace {0.5 cm} (1.1)$$ for all $x,y\in E,$ where $\varepsilon$ and $p$ are constants with $\varepsilon>0$ and $p<1.$ Then there exists a unique additive mapping $T:{E}\longrightarrow{E'}$ such that $$\|f(x)-T(x)\|\leq \frac{2\varepsilon}{2-2^p}\|x\|^p, \eqno \hspace {0.5 cm}(1.2)$$ for all $x\in E.$ If $p<0$ then inequality (1.1) holds for all $x,y\neq 0$, and (1.2) for $x\neq 0.$ Also, if the function $t\mapsto f(tx)$ from $\Bbb R$ into $E'$ is continuous for each fixed $x\in E,$ then $T$ is linear. \end{thm} In 1991, Z. Gajda [5] answered the question for the case $p>1$, which was raised by Rassias. This new concept is known as Hyers-Ulam-Rassias stability of functional equations (see [1,2], [5-11], [18-20]). In [15], Won-Gil Park and Jae-Hyeong Bae considered the following functional equation: $$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))+24f(x)-6f(y).\eqno \hspace {0.5cm}(1.3)$$ In fact, they proved that a function $f$ between real vector spaces $X$ and $Y$ is a solution of (1.3) if and only if there exists a unique symmetric multi-additive function $B:X\times X\times X\times X\longrightarrow Y$ such that $f(x)=B(x,x,x,x)$ for all $x$ (see [3,4], [12-17], [22,23]).
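For concreteness, here is the direct expansion (our computation) confirming that the prototype quartic function $f(x)=x^4$ solves (1.3); using $(a+b)^4+(a-b)^4=2(a^4+6a^2b^2+b^4)$ with $a=2x$, $b=y$, both sides equal

```latex
f(2x+y)+f(2x-y) = 32x^{4}+48x^{2}y^{2}+2y^{4}
                = 8\big(x^{4}+6x^{2}y^{2}+y^{4}\big)+24x^{4}-6y^{4}
                = 4\big(f(x+y)+f(x-y)\big)+24f(x)-6f(y).
```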
It is easy to show that the function $f(x)=x^4$ satisfies the functional equation (1.3), which is called a quartic functional equation, and every solution of the quartic functional equation is said to be a quartic function. We deal with the following functional equation deriving from quartic and additive functions: $$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))-\frac{3}{7}(f(2y)-2f(y))+2f(2x)-8f(x).\eqno \hspace {0.5cm}(1.4)$$ It is easy to see that the function $f(x)=ax^4+bx$ is a solution of the functional equation (1.4). In the present paper we investigate the general solution and the generalized Hyers-Ulam-Rassias stability of the functional equation (1.4). \vskip 5mm \begin{center} \section{ General solution} \end{center} In this section we establish the general solution of functional equation (1.4). \begin{thm}\label{t1} Let $X$, $Y$ be vector spaces, and let $f:X\longrightarrow Y$ be a function satisfying (1.4). Then the following assertions hold. a) If $f$ is an even function, then $f$ is quartic. b) If $f$ is an odd function, then $f$ is additive. \end{thm} \begin{proof} a) Putting $x=y=0$ in (1.4), we get $f(0)=0$. Setting $x=0$ in (1.4), by the evenness of $f$, we obtain $$f(2y)=16f(y), \eqno \hspace {0.5cm}(2.1)$$ for all $y\in X.$ Hence (1.4) can be written as $$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))+24f(x)-6f(y) \eqno \hspace {0.5cm}(2.2)$$ for all $x,y \in X.$ This means that $f$ is a quartic function. b) Setting $x=y=0$ in (1.4), we obtain $f(0)=0.$ Putting $x=0$ in (1.4), by the oddness of $f$, we have $$f(2y)=2f(y), \eqno \hspace {0.5cm}(2.3)$$ for all $y\in X.$ We obtain from (1.4) and (2.3) that $$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))-4f(x), \eqno \hspace {0.5cm}(2.4)$$ for all $x,y\in X.$ Replacing $y$ by $-2y$ in (2.4), it follows that $$f(2x-2y)+f(2x+2y)=4(f(x-2y)+f(x+2y))-4f(x). \eqno \hspace {0.5cm}(2.5)$$ Combining (2.3) and (2.5), we obtain $$f(x-y)+f(x+y)=2(f(x-2y)+f(x+2y))-2f(x).
\eqno \hspace {0.5cm}(2.6)$$ Interchanging $x$ and $y$ in (2.6), we get $$f(x+y)+f(x-y)=2(f(y-2x)+f(y+2x))-2f(y). \eqno \hspace {0.5cm}(2.7)$$ Replacing $y$ by $-y$ in (2.7) and using the oddness of $f$, we get $$f(x-y)-f(x+y)=2(f(2x-y)-f(2x+y))+2f(y). \eqno \hspace {0.5cm}(2.8)$$ From (2.4) and (2.8), we obtain $$4f(2x+y)=9f(x+y)+7f(x-y)-8f(x)+2f(y). \eqno \hspace {0.5cm}(2.9)$$ Replacing $x+y$ by $y$ in (2.9), it follows that $$7f(2x-y)=4f(x+y)+2f(x-y)-9f(y)+8f(x). \eqno \hspace {0.5cm}(2.10)$$ By using (2.9) and (2.10), we are led to \begin{align*} f(2x+y)&+f(2x-y)=\frac{79}{28}f(x+y)+\frac{57}{28}f(x-y)\\ &-\frac{6}{7}f(x)-\frac{11}{14}f(y). \hspace {8cm}(2.11) \end{align*} We get from (2.4) and (2.11) that $$3f(x+y)+5f(x-y)=8f(x)-28f(y). \eqno \hspace {0.5cm}(2.12)$$ Replacing $x$ by $2x$ in (2.4), it follows that $$f(4x+y)+f(4x-y)=16(f(x+y)+f(x-y))-24f(x). \eqno \hspace {0.5cm}(2.13)$$ Setting $2x+y$ in place of $y$ in (2.4), we arrive at $$f(4x+y)-f(y)=4(f(3x-y)+f(x-y))-4f(x).\eqno \hspace {0.5cm}(2.14)$$ Replacing $y$ by $-y$ in (2.14) and using the oddness of $f$, we get $$f(4x-y)+f(y)=4(f(3x+y)+f(x+y))-4f(x). \eqno \hspace {0.5cm}(2.15)$$ Adding (2.14) to (2.15), we get \begin{align*} f(4x+y)+f(4x-y)&=4(f(3x+y)+f(3x-y))\\&-4(f(x+y)+f(x-y))-8f(x). \hspace {4.2cm}(2.16) \end{align*} Replacing $y$ by $x+y$ in (2.4), we obtain $$f(3x+y)+f(x-y)=4(f(2x+y)-f(y))-4f(x). \eqno \hspace {0.5cm}(2.17)$$ Replacing $y$ by $-y$ in (2.17) and using the oddness of $f$, we are led to $$f(3x-y)+f(x+y)=4(f(2x-y)+f(y))-4f(x). \eqno \hspace {0.5cm}(2.18)$$ Combining (2.17) and (2.18), we obtain $$f(3x+y)+f(3x-y)=15(f(x+y)+f(x-y))-24f(x). \eqno \hspace {0.5cm}(2.19)$$ Using (2.16) and (2.19), we get $$f(4x+y)+f(4x-y)=56(f(x+y)+f(x-y))-104f(x). \eqno \hspace {0.5cm}(2.20)$$ Combining (2.13) and (2.20), we arrive at $$f(x+y)+f(x-y)=2f(x).\eqno \hspace {0.5cm}(2.21)$$ Hence, by using (2.12) and (2.21), it is easy to see that $f$ is additive. This completes the proof of the theorem.
\end{proof} \begin{thm}\label{t2} Let $X$, $Y$ be vector spaces, and let $f:X\longrightarrow Y$ be a function. Then $f$ satisfies (1.4) if and only if there exist a unique symmetric multi-additive function $B:X\times X\times X\times X\longrightarrow Y$ and a unique additive function $A:X\longrightarrow Y$ such that $f(x)=B(x,x,x,x)+A(x)$ for all $x\in X.$ \end{thm} \begin{proof} Suppose that $f$ satisfies (1.4). We decompose $f$ into its even and odd parts by setting $$f_e(x)=\frac{1}{2}(f(x)+f(-x)),~~\hspace {0.3 cm}f_o(x)=\frac{1}{2}(f(x)-f(-x)),$$ for all $x\in X.$ By (1.4), we have \begin{align*} f_e(2x+y)&+f_e(2x-y)=\frac{1}{2}[f(2x+y)+f(-2x-y)+f(2x-y)+f(-2x+y)]\\ &=\frac{1}{2}[f(2x+y)+f(2x-y)]+\frac{1}{2}[f(-2x+(-y))+f(-2x-(-y))]\\ &=\frac{1}{2}[4(f(x+y)+f(x-y))-\frac{3}{7}(f(2y)-2f(y))+2f(2x)-8f(x)]\\ &+\frac{1}{2}[4(f(-x-y)+f(-x-(-y)))-\frac{3}{7}(f(-2y)-2f(-y))+2f(-2x)-8f(-x)]\\ &=4[\frac{1}{2}(f(x+y)+f(-x-y))+\frac{1}{2}(f(-x+y)+f(x-y))]\\ &-\frac{3}{7}[\frac{1}{2}(f(2y)+f(-2y))-(f(y)+f(-y))]\\ &+2[\frac{1}{2}(f(2x)+f(-2x))]-8[\frac{1}{2}(f(x)+f(-x))]\\ &=4(f_e(x+y)+f_e(x-y))-\frac{3}{7}(f_e(2y)-2f_e(y))+2f_e(2x)-8f_e(x) \end{align*} for all $x,y\in X.$ This means that $f_e$ satisfies (1.4). Similarly, we can show that $f_o$ satisfies (1.4). By the above theorem, $f_e$ and $f_o$ are quartic and additive, respectively. Thus there exists a unique symmetric multi-additive function $B:X\times X\times X\times X\longrightarrow Y$ such that $f_e(x)=B(x,x,x,x)$ for all $x\in X.$ Put $A(x):=f_o(x)$ for all $x\in X.$ It follows that $f(x)=B(x,x,x,x)+A(x)$ for all $x\in X.$ The proof of the converse is trivial. \end{proof} \section{ Stability} Throughout this section, $X$ and $Y$ will be a real normed space and a real Banach space, respectively.
For a function $f:X\rightarrow Y$, we define $D_f:X\times X \rightarrow Y$ by \begin{align*} D_{f}(x,y)&=7[f(2x+y)+f(2x-y)]-28[f(x+y)+f(x-y)]\\ &+3[f(2y)-2f(y)]-14[f(2x)-4f(x)] \end{align*} for all $x,y \in X.$ Note that $D_f$ is $7$ times the difference between the left- and right-hand sides of (1.4), so $f$ satisfies (1.4) if and only if $D_f(x,y)=0$ for all $x,y\in X.$ \begin{thm}\label{t2} Let $\psi:X\times X\rightarrow [0,\infty)$ be a function satisfying $\sum^{\infty}_{i=0} \frac{\psi(0,2^ix)}{16^i}<\infty$ for all $x\in X$, and $\lim_n \frac{\psi(2^n x,2^n y)}{16^n}=0$ for all $x,y\in X$. If $f:X\rightarrow Y$ is an even function such that $f(0)=0$ and $$\|D_f(x,y)\|\leq \psi(x,y), \eqno \hspace {0.5cm}(3.1)$$ for all $x,y\in X$, then there exists a unique quartic function $Q:X \rightarrow Y$ satisfying (1.4) and $$\|f(x)-Q(x)\|\leq \frac{1}{48}\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)}{16^i},\eqno \hspace {0.5cm}(3.2)$$ for all $x\in X$. \end{thm} \begin{proof} Putting $x=0$ in (3.1), we have $$\|3f(2y)-48f(y)\|\leq \psi(0,y). \eqno\hspace {0.5cm}(3.3)$$ Replacing $y$ by $x$ in (3.3) and dividing by 48, we obtain $$\parallel \frac{f(2x)}{16}-f(x)\parallel \leq \frac{1}{48}\psi(0,x), \eqno\hspace {0.5cm}(3.4)$$ for all $x\in X.$ Replacing $x$ by $2x$ in (3.4), we get $$\parallel\frac{f(4x)}{16}-f(2x)\parallel\leq \frac{1}{48}\psi(0,2x). \eqno\hspace {0.5cm}(3.5)$$ Combining (3.4) and (3.5) by use of the triangle inequality, we get $$\parallel\frac{f(4x)}{16^2}-f(x)\parallel \leq\frac{1}{48}(\frac{\psi(0,2x)}{16}+\psi(0,x)).
\eqno\hspace {0.5cm}(3.6)$$ By induction on $n\in \mathbb{N}$, we can show that $$\parallel \frac{f(2^n x)}{16^n}-f(x)\parallel\leq\frac {1}{48} \sum^{n-1}_{i=0} \frac {\psi(0,2^i x)}{16^i}.\eqno \hspace {0.5cm}(3.7)$$ Dividing (3.7) by $16^m$ and replacing $x$ by $2^m x$, we get \begin{align*} \parallel \frac {f(2^{m+n}x)}{16^{m+n}}-\frac{f(2^{m}x)}{16^{m}}\parallel &=\frac{1}{16^m}\parallel \frac{f(2^n 2^m x)}{16^n}-f(2^m x)\parallel\\ &\leq\frac {1}{48\times16^m}\sum^{n-1}_{i=0}\frac{\psi(0,2^i 2^m x)}{16^i}\\ &\leq\frac{1}{48}\sum^{\infty}_{i=0}\frac{\psi(0,2^i 2^m x)}{16^{m+i}}, \end{align*} for all $x \in X$. Letting $m\rightarrow \infty$, this shows that $\{\frac{f(2^n x)}{16^n}\}$ is a Cauchy sequence in $Y$. Since $Y$ is a Banach space, the sequence $\{\frac{f(2^n x)}{16^n}\}$ converges. We define $Q:X\rightarrow Y$ by $Q(x):=\lim_n \frac{f(2^n x)}{16^n}$ for all $x\in X$. Since $f$ is an even function, $Q$ is even. On the other hand, we have \begin{align*} \|D_Q (x,y)\|&=\lim_n \frac{1}{16^n}\|D_f(2^n x, 2^n y)\|\\ &\leq\lim_n \frac{\psi(2^n x,2^n y)}{16^n}=0, \end{align*} for all $x,y\in X$. Hence by Theorem 2.1, $Q$ is a quartic function. To show that $Q$ is unique, suppose that there exists another quartic function $\acute{Q}:X\rightarrow Y$ which satisfies (1.4) and (3.2). We have $Q(2^n x)=16^n Q(x)$ and $\acute{Q}(2^n x)=16^n \acute{Q}(x)$ for all $x\in X$. It follows that \begin{align*} \parallel \acute{Q}(x)-Q(x)\parallel &=\frac{1}{16^n}\parallel \acute{Q}(2^n x)-Q(2^n x)\parallel\\ &\leq \frac{1}{16^n}[\parallel \acute{Q}(2^n x)-f(2^n x)\parallel+\parallel f(2^n x)-Q(2^n x)\parallel]\\ &\leq\frac{1}{24}\sum^{\infty}_{i=0}\frac{\psi(0,2^{n+i} x)}{16^{n+i}}, \end{align*} for all $x\in X$. Letting $n\rightarrow \infty$ in this inequality, we have $\acute{Q}(x)=Q(x)$.
\end{proof} \begin{thm}\label{t'2} Let $\psi:X\times X\rightarrow [0,\infty)$ be a function satisfying $\sum^{\infty}_{i=0} 16^i\psi(0,2^{-i-1}x)<\infty$ for all $x\in X$, and $\lim_n 16^n\psi(2^{-n} x,2^{-n} y)=0$ for all $x,y\in X$. Suppose that an even function $f:X\rightarrow Y$ satisfies $f(0)=0$ and (3.1). Then the limit $Q(x):=\lim_n 16^n{f(2^{-n} x)}$ exists for all $x\in X$ and $Q:X \rightarrow Y$ is the unique quartic function satisfying (1.4) and $$\|f(x)-Q(x)\|\leq \frac{1}{3}\sum^{\infty}_{i=0} 16^i\psi(0,2^{-i-1} x), \eqno \hspace {0.5cm}(3.8)$$ for all $x\in X$. \end{thm} \begin{proof} Putting $x=0$ in (3.1), we have $$\|3f(2y)-48f(y)\|\leq \psi(0,y). \eqno \hspace {0.5cm}(3.9)$$ Replacing $y$ by $\frac{x}{2}$ in (3.9) and dividing the result by 3, we get $$\parallel 16f(2^{-1}x)-f(x)\parallel \leq \frac{1}{3}\psi(0,2^{-1}x),\eqno \hspace {0.5cm}(3.10)$$ for all $x\in X.$ Replacing $x$ by $\frac{x}{2}$ in (3.10), it follows that $$\parallel 16f(4^{-1}x)-f(2^{-1}x)\parallel\leq \frac{1}{3}\psi(0,2^{-2}x).\eqno \hspace {0.5cm}(3.11)$$ Combining (3.10) and (3.11) by use of the triangle inequality, we obtain $$\parallel 16^2f(4^{-1}x)-f(x)\parallel \leq\frac{1}{3}(16\psi(0,2^{-2}x)+\psi(0,2^{-1}x)). \eqno \hspace {0.5cm}(3.12)$$ By induction on $n\in \mathbb{N}$, we have $$\parallel 16^n f(2^{-n} x)-f(x)\parallel\leq\frac {1}{3} \sum^{n-1}_{i=0} 16^i\psi(0,2^{-i-1} x).\eqno \hspace {0.5cm}(3.13)$$ Multiplying (3.13) by $16^m$ and replacing $x$ by $2^{-m} x$, we obtain \begin{align*} \parallel 16^{m+n} {f(2^{-m-n}x)}-16^m{f(2^{-m}x)}\parallel &={16^m}\parallel 16^n f(2^{-n} 2^{-m} x)-f(2^{-m} x)\parallel\\ &\leq\frac {16^m}{3}\sum^{n-1}_{i=0}16^i{\psi(0,2^{-i-1} 2^{-m} x)}\\ &\leq\frac{1}{3}\sum^{\infty}_{i=0}{16^{m+i}}{\psi(0,2^{-i-1} 2^{-m} x)}, \end{align*} for all $x \in X$. Letting $m\rightarrow \infty$, it follows that $\{16^n{f(2^{-n} x)}\}$ is a Cauchy sequence in $Y$. Since $Y$ is a Banach space, the sequence $\{16^n{f(2^{-n} x)}\}$ converges.
Now we define $Q:X\rightarrow Y$ by $Q(x):=\lim_n 16^n{f(2^{-n} x)}$ for all $x\in X$. The rest of the proof is similar to the proof of Theorem 3.1. \end{proof} \begin{thm}\label{t2} Let $\psi:X\times X\rightarrow [0,\infty)$ be a function such that $$\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)}{2^i}< \infty, \eqno \hspace {0.5cm}(3.14)$$ and $$\lim_n \frac {\psi(2^n x,2^n y)}{2^n}=0, \eqno \hspace {0.5cm}(3.15)$$ for all $x,y \in X$. If $f:X\rightarrow Y$ is an odd function such that $$\|D_f (x,y)\|\leq \psi(x,y), \eqno \hspace {0.5cm}(3.16)$$ for all $x,y\in X$, then there exists a unique additive function $A:X\rightarrow Y$ satisfying (1.4) and $$\|f(x)-A(x)\|\leq\frac{1}{2} \sum^{\infty}_{i=0}\frac{\psi(0,2^i x)}{2^i},$$ for all $x\in X$. \end{thm} \begin{proof} Setting $x=0$ in (3.16), we get $$\|f(2y)-2f(y)\|\leq \psi(0,y). \eqno \hspace {0.5cm}(3.17)$$ Replacing $y$ by $x$ in (3.17) and dividing the result by 2, we have $$\|\frac{f(2x)}{2}-f(x)\|\leq\frac{1}{2} \psi(0,x). \eqno\hspace {0.5cm}(3.18)$$ Replacing $x$ by $2x$ in (3.18), we obtain $$\|\frac{f(4x)}{2}-f(2x)\|\leq\frac{1}{2} \psi(0,2x). \eqno\hspace {0.5cm}(3.19)$$ Combining (3.18) and (3.19) by use of the triangle inequality, we get $$\|\frac{f(4x)}{4}-f(x)\|\leq\frac{1}{2} (\psi(0,x)+\frac{1}{2}\psi(0,2x)).\eqno\hspace {0.5cm}(3.20)$$ By induction on $n$, we obtain $$\|\frac{f(2^n x)}{2^n}-f(x)\|\leq\frac{1}{2} \sum^{n-1}_{i=0}\frac{\psi(0,2^i x)}{2^i}.\eqno\hspace {0.5cm}(3.21)$$ Dividing (3.21) by $2^m$ and substituting $2^m x$ for $x$, we get \begin{align*} \parallel \frac {f(2^{m+n}x)}{2^{m+n}}-\frac{f(2^{m}x)}{2^{m}}\parallel &=\frac{1}{2^m}\parallel \frac{f(2^n 2^m x)}{2^n}-f(2^m x)\parallel\\ &\leq\frac {1}{2^{m+1}}\sum^{n-1}_{i=0}\frac{\psi(0,2^i 2^m x)}{2^i}\\ &\leq\frac{1}{2}\sum^{\infty}_{i=0}\frac{\psi(0,2^{i+m}x)}{2^{m+i}}\hspace {5.5cm}(3.22) \end{align*} Taking $m\rightarrow \infty$ in (3.22), the right-hand side of the inequality tends to zero.
Since $Y$ is a Banach space, $A(x)=\lim_n \frac {f(2^n x)}{2^n}$ exists for all $x\in X$. The oddness of $f$ implies that $A$ is odd. On the other hand, by (3.15) we have \begin{align*} \|D_A (x,y)\|=\lim_n \frac{1}{2^n}\|D_f(2^n x,2^n y)\|\leq\lim_n \frac{\psi(2^n x,2^n y)}{2^n}=0. \end{align*} Hence by Theorem 2.1, $A$ is an additive function. The rest of the proof is similar to the proof of Theorem 3.1. \end{proof} \begin{thm}\label{t''2} Let $\psi:X\times X\rightarrow [0,\infty)$ be a function satisfying $$\sum^{\infty}_{i=0} 2^i\psi(0,2^{-i-1}x)<\infty,$$ for all $x\in X$, and $\lim_n 2^n\psi(2^{-n} x,2^{-n} y)=0$ for all $x,y\in X$. Suppose that an odd function $f:X\rightarrow Y$ satisfies (3.1). Then the limit $A(x):=\lim_n 2^n{f(2^{-n} x)}$ exists for all $x\in X$ and $A:X \rightarrow Y$ is the unique additive function satisfying (1.4) and $$\|f(x)-A(x)\|\leq \sum^{\infty}_{i=0} 2^i\psi(0,2^{-i-1} x)$$ for all $x\in X$. \end{thm} \begin{proof} It is similar to the proof of Theorem 3.3. \end{proof} \begin{thm}\label{t5} Let $\psi:X\times X\rightarrow [0,\infty)$ be a function such that $$\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)}{2^i}< \infty \quad\text{and}\quad \lim_n \frac{\psi(2^n x,2^n y)}{2^n}=0,$$ for all $x,y\in X$. Suppose that a function $ f:X\rightarrow Y$ satisfies the inequality $$\|D_f (x,y)\|\leq \psi(x,y),$$ for all $x,y\in X$, and $f(0)=0$. Then there exist a unique quartic function $Q:X\rightarrow Y$ and a unique additive function $A:X\rightarrow Y$ satisfying (1.4) and \begin{align*} \parallel f(x)-Q(x)-A(x)\parallel &\leq\frac {1}{48}[\sum^{\infty}_{i=0}(\frac{\psi(0,2^i x)+\psi(0,-2^i x)}{2\times16^i}\\ &+\frac{12(\psi(0,2^i x)+\psi(0,-2^i x))}{2^i})], \hspace {4.2cm}(3.23) \end{align*} for all $x\in X$. \end{thm} \begin{proof} We have $$\|D_{f_e} (x,y)\|\leq\frac{1}{2}[\psi(x,y)+\psi(-x,-y)]$$ for all $x,y\in X$.
Since $f_e(0)=0$ and $f_e$ is an even function, by Theorem 3.1 there exists a unique quartic function $Q:X\rightarrow Y$ satisfying $$\parallel f_e(x)-Q(x)\parallel \leq\frac {1}{48}\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)+\psi(0,-2^i x)}{2\times16^i}, \eqno\hspace {0.5cm}(3.24)$$ for all $x\in X$. On the other hand, $f_o$ is an odd function and $$\|D_{f_o} (x,y)\|\leq \frac{1}{2}[\psi(x,y)+\psi(-x,-y)],$$ for all $x,y\in X$. Then by Theorem 3.3, there exists a unique additive function $A:X\rightarrow Y$ such that $$\parallel f_o(x)-A(x)\parallel \leq\frac {1}{2}\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)+\psi(0,-2^i x)}{2\times2^i}, \eqno\hspace {0.5cm}(3.25)$$ for all $x\in X$. Combining (3.24) and (3.25), we obtain (3.23). This completes the proof of the theorem. \end{proof} Using Theorem 3.5, we now investigate the Hyers-Ulam-Rassias stability problem for the functional equation (1.4). \begin{cor}\label{t2} Let $\theta\geq0$ and $p<1$. Suppose that $f:X\rightarrow Y$ satisfies the inequality $$\|D_f (x,y)\|\leq\theta(\|x\|^p+\|y\|^p),$$ for all $x,y\in X$, and $f(0)=0$. Then there exist a unique quartic function $Q:X\rightarrow Y$ and a unique additive function $A:X\rightarrow Y$ satisfying (1.4) and $$\parallel f(x)-Q(x)-A(x)\parallel \leq\frac {\theta}{48}\|x\|^p (\frac{16}{16-2^p}+\frac{96}{1-2^{p-1}}),$$ for all $x\in X$. \end{cor} By Corollary 3.6, we obtain the following Hyers-Ulam stability result for the functional equation (1.4). \begin{cor}\label{t2} Let $\varepsilon$ be a positive real number, and let $f:X\rightarrow Y$ be a function satisfying $$\|D_f (x,y)\|\leq\varepsilon,$$ for all $x,y\in X$. Then there exist a unique quartic function $Q:X\rightarrow Y$ and a unique additive function $A:X\rightarrow Y$ satisfying (1.4) and $$\parallel f(x)-Q(x)-A(x)\parallel \leq\frac {362}{45} ~\varepsilon,$$ for all $x\in X$. \end{cor} By applying Theorems 3.2 and 3.4, we have the following theorem.
\begin{thm}\label{t5} Let $\psi:X\times X\rightarrow [0,\infty)$ be a function such that $$\sum^{\infty}_{i=0} 16^i{\psi(0,2^{-i-1} x)}< \infty \quad\text{and}\quad \lim_n 16^n{\psi(2^{-n} x,2^{-n} y)}=0,$$ for all $x,y\in X$. Suppose that a function $ f:X\rightarrow Y$ satisfies the inequality $$\|D_f (x,y)\|\leq \psi(x,y),$$ for all $x,y\in X$, and $f(0)=0$. Then there exist a unique quartic function $Q:X\rightarrow Y$ and a unique additive function $A:X\rightarrow Y$ satisfying (1.4) and \begin{align*} \parallel f(x)-Q(x)-A(x)\parallel &\leq\sum^{\infty}_{i=0}[(\frac{16^i}{3}+2^i)(\frac{\psi(0,2^{-i-1} x)+\psi(0,-2^{-i-1} x)}{2})], \end{align*} for all $x\in X$. \end{thm} \begin{cor}\label{t2} Let $\theta\geq0$ and $p>4$. Suppose that $f:X\rightarrow Y$ satisfies the inequality $$\|D_f (x,y)\|\leq\theta(\|x\|^p+\|y\|^p),$$ for all $x,y\in X$, and $f(0)=0$. Then there exist a unique quartic function $Q:X\rightarrow Y$ and a unique additive function $A:X\rightarrow Y$ satisfying (1.4) and $$\parallel f(x)-Q(x)-A(x)\parallel \leq\frac {\theta}{3\times 2^p}\|x\|^p (\frac{1}{1-2^{4-p}}+\frac{1}{1-2^{1-p}}),$$ for all $x\in X$. \end{cor} \end{document}
\begin{document} \title{The title of my page} \begin{abstract} We consider here the problem of classifying orbits of an action of the diffeomorphism group of 3-space on a tower of fibrations with $\mathbb{P}^2$-fibers that generalize the Monster Tower due to Montgomery and Zhitomirskii. As a corollary we give the first steps towards the problem of classifying Goursat 2-flags of small length. In short, we classify the orbits within the first four levels of the Monster Tower and show that there is a total of $34$ orbits at the fourth level in the tower. \end{abstract} \section{Introduction}\label{sec:intro} A Goursat flag is a nonholonomic distribution $D$ with ``slow growth''. By slow growth we mean that the associated flag of distributions $$D \hspace{.2in} \subset \hspace{.2in} D + [D,D]\hspace{.2in} \subset \hspace{.2in} D + [D,D] + [[D,D],[D,D]] \dots ,$$ grows by one dimension at each bracketing step and after $n$ steps spans the entire tangent bundle. By an abuse of notation, $D$ in this context also means the sheaf of vector fields spanning $D$. Though less popular than their nonholonomic siblings, such as the contact distribution or the rolling distribution in mechanics \cite{gil}, Goursat distributions are more common than one would think. The canonical Cartan distributions in the jet spaces $J^k(\mathbb{R},\mathbb{R})$ and the non-slip constraint for a jackknifed truck \cite{jean} are examples. Generalizations of Goursat flags have been proposed in the literature. One such notion is that of a {\it Goursat multi-flag.} Typical examples of Goursat multi-flags include the Cartan distributions $C$ of the jet spaces $J^k(\mathbb{R},\mathbb{R}^n),n\geq 2.$ Iterated bracketing, this time $$C\hspace{.2in} \subset \hspace{.2in} C + [C,C] \hspace{.2in} \subset \hspace{.2in} C + [C,C] + [[C,C],[C,C]] \dots,$$ leads to a jump in rank by $n$ dimensions at each step.
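To see the rank jump in the smallest multi-flag case (a standard coordinate computation supplied here for illustration): on $J^1(\mathbb{R},\mathbb{R}^2)$ with coordinates $(x,y_1,y_2,p_1,p_2)$, the Cartan distribution and its first brackets are

```latex
C = \operatorname{span}\left\{ \partial_{p_1},\ \partial_{p_2},\
      \partial_x + p_1\,\partial_{y_1} + p_2\,\partial_{y_2} \right\},
\qquad
\big[\partial_{p_i},\ \partial_x + p_1\partial_{y_1} + p_2\partial_{y_2}\big]
  = \partial_{y_i},
```

so $C$ has rank $3$ while $C+[C,C]$ has rank $5$: a jump by $n=2$, after which the whole tangent space is already reached.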
To our knowledge the general theory behind Goursat multi-flags made its first appearance in the works of A. Kumpera and J. L. Rubin \cite{kumpera1}. P. Mormul has also been very active in breaking new ground \cite{mormul1}, and developed new combinatorial tools to investigate the normal forms of these distributions. In addition to Mormul's work there is Yamaguchi's work in \cite{yamaguchi1} and \cite{yamaguchi2}, where he investigated the local properties of Goursat multi-flags. It is also important to mention that results from his work in \cite{yamaguchi2} were crucial to our classification procedure. In this note we concentrate on the problem of classifying local germs of Goursat multi-flags of small length. We will consider Goursat 2-flags of length up to 4. Goursat 2-flags exhibit many new geometric features that the classical Goursat flags (Goursat $1$-flags) did not possess. The geometric properties of Goursat multi-flags were the main subject of the paper \cite{castro}. We approach the classification problem from a geometric standpoint, and will partially follow the programme started by Montgomery and Zhitomirskii in \cite{mont1}. Our starting point is the universality theorem proved by Yamaguchi and Shibuya \cite{yamaguchi1}, stating that any germ $D$ of a Goursat distribution or, using Mormul's terminology, a Goursat $n$-flag of length $k$, is analytically equivalent to the germ of the canonical distribution on the $k$-step Cartan prolongation \cite{bryant} of the flat $\mathbb{R}^{n+1}$ with its trivial bundle, the germs being taken at some $D$-dependent point.
In what follows we apply Yamaguchi's theorem to the $n=2$ case, and translate the classification problem of Goursat 2-flags into a problem of classifying points in a tower of real $\mathbb{P}^2$ (projective plane) fibrations \begin{equation}\label{eqn:tower} \cdots \rightarrow \mathcal{P}^{4}(2)\rightarrow \mathcal{P}^{3}(2)\rightarrow \mathcal{P}^{2}(2) \rightarrow \mathcal{P}^{1}(2) \rightarrow \mathcal{P}^{0}(2) = \mathbb{R}^3, \end{equation} where $\text{dim}(\mathcal{P}^{k}(2)) = 3 + 2k$ and $\mathcal{P}^{k}(2)$ is the Cartan prolongation of $\mathcal{P}^{k-1}(2)$. The global topology of these manifolds is much more interesting, though, and has yet to be explored \cite{castro}. Each $\mathcal{P}^{k}(2)$ comes equipped with a rank-3 distribution $\Delta_k$. On a dense open subset of $\mathcal{P}^{k}(2)$ this $\Delta_{k}$ is locally diffeomorphic to the Cartan distribution $C$ in $J^k(\mathbb{R},\mathbb{R}^2).$ $\Delta_k$ has an associated flag of length $k$. A description of the symmetries of the $\Delta_k$ is the content of a theorem due to Yamaguchi \cite{yamaguchi2}, attributed to Backl\"und, and dating from the 1980s. He showed that any symmetry of $(\mathcal{P}^{k}(2),\Delta_k)$ results from the Cartan prolongation of a symmetry of the base manifold, i.e. of a diffeomorphism of the 3-fold $\mathbb{R}^3$. By applying the techniques developed in \cite{castro,mont2}, we will attack the classification problem utilizing the {\it curves-to-points} approach and a new technique we named the {\it isotropy representation}, used to some extent in \cite{mont1} and partly inspired by \'E. Cartan's moving frame method \cite{favard}. Our main result states that there are $34$ Goursat $2$-flags of length 4, and we provide the exact number of orbits for each smaller length. Our approach is constructive. Normal forms for each equivalence class can be made explicit. Due to space limitations we will write down only a couple of instructive examples. We would like to mention that P.
Mormul and Pelletier \cite{mormul2} have attempted an alternative solution to the classification problem. In their classification work, they employed results and tools from previous works of Mormul. In \cite{mormul} Mormul discusses two coding systems for special $2$-flags and proves that the two coding systems are the same. One system is the Extended Kumpera--Ruiz system, a coding system used to describe $2$-flags. The other is called Singularity Class coding, an intrinsic coding system that describes the sandwich diagram \cite{mont1} associated to $2$-flags. A brief outline of how these coding systems relate to Montgomery's $RVT$ coding is given in \cite{castro}. Then, building upon Mormul's work in \cite{mormul3}, Mormul and Pelletier use the idea of strong nilpotency of special multi-flags, along with properties of and relationships between his two coding systems, to classify these distributions up to length $4$ (equivalently, up to level $4$ of the Monster Tower). Our $34$ agrees with theirs. Here is a short description of the paper. In section two we acquaint ourselves with the main definitions necessary for the statements of our main results. Section three consists of the statements of our main results, together with a few explanatory remarks to help the reader progress through the theory with us. In section four we discuss the basic tools and ideas that will be needed to prove our various results. Section five is devoted to technicalities, and the actual proofs. We conclude the paper, in section six, with a quick summary of our findings. For the record, we have also included an appendix where our lengthy computations are scrutinized. \noindent {\bf Acknowledgements.} We would like to thank Corey Shanbrom, and Richard Montgomery (both at UCSC) for many useful conversations and remarks.
\section{Main definitions} \subsection{Constructions} Let $(Z,\Delta)$ be a manifold of dimension $n$ equipped with a plane field of rank $r$ and let $\Pj (\Delta)$ be the {\it projectivization} of $\Delta$. As a manifold, $$Z^1 = \Pj (\Delta) .$$ Various objects in $Z$ can be canonically prolonged (lifted) to the new manifold $Z^1$. \begin{table}\label{tab:prol} \caption{Some geometric objects and their Cartan prolongations.} \begin{tabular}{|c|c|} \hline curve $c:(\mathbb{R},0)\rightarrow (Z,p)$ & curve $c^{1}:(\mathbb{R},0)\rightarrow (Z^1,p),$ \\ & $c^{1}(t) =\text{(point,moving line)} = (c(t),span\{ \frac{dc(t)}{dt} \}) $\\ \hline diffeomorphism $\Phi: Z \circlearrowleft$ & diffeomorphism $\Phi^{1}: Z^1 \circlearrowleft$, \\ & $\Phi^{1}(p,\ell) = (\Phi(p),d\Phi_p(\ell))$ \\ \hline rank $r$ linear subbundle & rank $r$ linear subbundle $\Delta_1=d\pi_{(p,\ell)}^{-1}(\ell)\subset TZ^1$,\\ $\Delta \subset TZ$ & $\pi: Z^1 \rightarrow Z$ is the canonical projection. \\ \hline \end{tabular} \end{table} Given an analytic curve $c:(I,0)\rightarrow (Z,p)$, where $I$ is some open interval about zero in $\mathbb{R}$, we can naturally define a new curve $$c^{1}:(I,0)\rightarrow (Z^1,(p,\ell))$$ with image in $Z^1$, where $\ell = span\{ \frac{dc(0)}{dt} \}$. This new curve, $c^{1}(t)$, is called the \textit{prolongation} of $c(t)$. If $t = t_{0}$ is not a regular point, then we define $c^{1}(t_{0})$ by taking the limit $\lim_{t \rightarrow t_{0}} c^{1}(t)$ as $t \rightarrow t_{0}$ through regular points. An important fact to note, proved in \cite{mont1}, is that the analyticity of $Z$ and $c$ implies that the limit is well defined and that the prolonged curve $c^{1}(t)$ is analytic as well. Since this process can be iterated, we will write $c^{k}(t)$ to denote the $k$-fold prolongation of the curve $c(t)$.
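The prolongation of an analytic space curve lends itself to symbolic computation. As an illustration only (this sketch and its `prolong` helper are ours, not part of the paper), one prolongation step can be carried out in the fiber affine chart $u = dy/dx$, $v = dz/dx$, which is valid for curves whose first coordinate dominates at $t=0$:

```python
import sympy as sp

t = sp.symbols('t')

def prolong(curve):
    """One Cartan prolongation of an analytic curve in R^3, written in the
    fiber affine chart u = dy/dx, v = dz/dx. sp.cancel removes the
    removable singularity at t = 0, matching the limit definition."""
    X, Y, Z = curve
    u = sp.cancel(sp.diff(Y, t) / sp.diff(X, t))
    v = sp.cancel(sp.diff(Z, t) / sp.diff(X, t))
    return [X, Y, Z, u, v]

# The A_2 cusp (t^2, t^3, 0): u = 3t^2/(2t) = (3/2)t after cancellation.
c1 = prolong([t**2, t**3, sp.Integer(0)])
print(c1)  # [t**2, t**3, 0, 3*t/2, 0]
```

Note how the cancellation implements, for rational data, the limiting procedure used above to extend $c^1$ across non-regular parameter values.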
The manifold $Z^1$ also comes equipped with a distribution $\Delta_1$, called the {\it Cartan prolongation of $\Delta$} \cite{bryant}, defined as follows. Let $\pi : Z^1 \rightarrow Z$ be the projection map $(p, \ell)\mapsto p$. Then $$\Delta_1(p,\ell) = d\pi_{(p,\ell)}^{-1}(\ell),$$ i.e. {\it it is the subspace of $T_{(p,\ell)}Z^1$ consisting of all tangents to curves which are prolongations of curves one level down through $p$ in the direction $\ell$.} It is easy to check using linear algebra that $\Delta_1$ is also an $r$-plane field. The pair $(Z^1,\Delta_{1})$ is called the {\it Cartan prolongation} of $(Z,\Delta)$. \begin{example} Take $Z = \mathbb{R}^{3}$ with its tangent bundle, which we denote by $\Delta_0$. Then the tower shown in equation (\ref{eqn:tower}) is obtained by prolonging the pair $(\mathbb{R}^{3},\Delta_0)$ four times. \end{example} By a {\it symmetry} of the pair $(Z,\Delta)$ we mean a (local or global) diffeomorphism $\Phi$ of $Z$ that preserves the subbundle $\Delta$. The symmetries of $(Z,\Delta)$ can also be prolonged to symmetries $\Phi^{1}$ of $(Z^1,\Delta_{1})$ as follows. Define $$\Phi^{1}(p,\ell)=(\Phi(p), d\Phi(\ell)).$$ Since $d\Phi_p$ is invertible and linear, the second component is well defined as a projective map. This newborn symmetry, denoted $\Phi^{1}$, is the prolongation (to $Z^1$) of $\Phi$. The objects of interest in this paper and their Cartan prolongations are summarized in Table 1. Unless otherwise mentioned, prolongation will always refer to Cartan prolongation. \begin{eg}[Prolongation of a cusp.] Let $c(t) = (t^{2}, t^{3}, 0)$ be the $A_{2}$ cusp in $\R^{3}$. Then $c^{1}(t) = (x(t), y(t), z(t), [dx: dy: dz]) = (t^{2}, t^{3}, 0, [2t: 3t^{2}: 0])$.
After we introduce fiber affine coordinates $u = \frac{dy}{dx}$ and $v = \frac{dz}{dx}$ around the point $[1: 0: 0]$ we obtain the immersed curve $$c^{1}(t) = (t^{2}, t^{3}, 0, \frac{3}{2}t, 0).$$ \end{eg} \subsection{Constructing the Monster tower.} We start with $\R^{n+1}$ as our base and take $\Delta_{0} = T \R^{n+1}$. Prolonging $\Delta_{0}$ we get that $\mathcal{P}^{1}(n) = \Pj \Delta_{0}$ with the distribution $\Delta_{1}$. By iterating this process we end up with the manifold $\mathcal{P}^{k}(n)$, which is endowed with the rank $n+1$ distribution $\Delta_{k} = (\Delta_{k-1})^{1}$ and fibered over $\mathcal{P}^{k-1}(n)$. In this paper we will be looking at the case $n=2$. \begin{defn} The Monster tower is a sequence of manifolds with distributions, $(\mathcal{P}^{k}, \Delta_{k})$, together with fibrations $$\cdots \rightarrow \mathcal{P}^{k}(n) \rightarrow \mathcal{P}^{k-1}(n) \rightarrow \cdots \rightarrow \mathcal{P}^{1}(n) \rightarrow \mathcal{P}^{0}(n) = \mathbb{R}^{n+1}$$ and we write $\pi_{k,i}: \mathcal{P}^{k}(n) \rightarrow \mathcal{P}^{i}(n)$, with $i < k$, for the projections. \end{defn} \begin{thm} For $n > 1$ and $k>0$ any local diffeomorphism of $\mathcal{P}^{k}(n)$ preserving the distribution $\Delta_{k}$ is the restriction of the $k$-th prolongation of a local diffeomorphism $\Phi \in Diff(n+1)$. \end{thm} Proof: This was shown by Yamaguchi and Shibuya in \cite{yamaguchi2}. \begin{rem} The importance of the above result cannot be stressed enough. This theorem by Yamaguchi and Shibuya is what allows the isotropy representation method of classifying orbits within the Monster Tower, discussed in section five of the paper, to work. \end{rem} \begin{rem} Since we will be working exclusively with the $n=2$ Monster tower in this paper, we will just write $\mathcal{P}^{k}$ for $\mathcal{P}^{k}(2)$.
\end{rem} \begin{defn} Two points $p,q$ in $\mathcal{P}^k$ are said to be equivalent if and only if there is a $\Phi\in \text{Diff}(3)$ such that $\Phi^{k}(p)=q$; in other words, $q\in \mathcal{O}(p)$, where $\mathcal{O}(p)$ is the orbit of the point $p$. \end{defn} \subsection{Orbits.} Yamaguchi's theorem states that any symmetry of $\mathcal{P}^k$ comes from prolonging a diffeomorphism of $\mathbb{R}^3$ $k$ times. This result is essential to our computations. Let us denote by $\mathcal{O}(p)$ the orbit of the point $p$ under the action of $Diff(\mathbb{R}^3)$. In trying to calculate the various orbits within the Monster tower we see that it is easier to fix the base points $p_{0} = \pi_{k,0}(p_{k})$ and $q_{0} = \pi_{k,0}(q_{k})$ to be $0 \in \R^{3}$. This means that we can replace the pseudogroup $Diff(3)$, diffeomorphism germs of $\R^{3}$, by the group $Diff_{0}(3)$ of diffeomorphism germs that map the origin back to the origin in $\R^{3}$. \begin{defn} We say that a curve or curve germ $\gamma: (\R, 0) \rightarrow (\R^{3}, 0)$ realizes the point $p_{k} \in \mathcal{P}^{k}$ if $\gamma^{k}(0) = p_{k}$, where $p_{0} = \pi_{k,0}(p_{k})$. \end{defn} \begin{defn} A direction $\ell \subset \Delta_{k}(p_{k})$, $k \geq 1$, is called a critical direction if there exists an immersed curve, at level $k$, that is tangent to the direction $\ell$ and whose projection to the zero-th level is the constant curve. If no such curve exists, then we call $\ell$ a regular direction. \end{defn} \begin{defn} Let $p \in \mathcal{P}^{k}$. Then \begin{eqnarray*} \text{Germ}(p) &=& \{ c :(\mathbb{R},0)\rightarrow (\mathbb{R}^3,0)\ |\ c^{k}(0) = p \text{ and } \text{$\frac{dc^{k}}{dt}\vert_{t=0}\neq 0$ is a regular direction} \}.
\end{eqnarray*} \end{defn} \begin{defn} Two curves $\gamma$, $\sigma$ in $\R^{3}$ are $(RL)$ equivalent, written $\gamma \sim \sigma$, if and only if there exist a diffeomorphism germ $\Phi \in Diff(3)$ and a reparametrization $\tau \in Diff_{0}(1)$ of $(\R,0)$ such that $\sigma = \Phi \circ \gamma \circ \tau$. \end{defn} \section{Main results} \begin{thm}[Orbit counting per level]\label{thm:main} In the $n=2$ Monster tower the number of orbits within each of the first four levels of the tower is as follows: level $1$ has $1$ orbit, level $2$ has $2$ orbits, level $3$ has $7$ orbits, and level $4$ has $34$ orbits. \end{thm} The main idea behind this classification is a coding system developed by Castro and Montgomery \cite{castro}. This coding system is known as $RVT$ coding, in which each point in the Monster tower is labeled by a sequence of $R$'s, $V$'s, $T$'s, and $L$'s along with various decorations. We will give an explanation of this coding system in the next section. Using this coding system we went class by class and determined the number of orbits within every possible $RVT$ class that could arise at each of the first four levels. \begin{thm}[Listing of orbits within each $RVT$ code.]\label{thm:count} Table 2 breaks down the number of orbits that appear within each $RVT$ class within the first three levels. \begin{table}\label{tab:codes} \begin{tabular}{|c|c|c|c|} \hline Level of tower & $RVT$ code & Number of orbits & Normal forms \\ \hline $1$ & $R$ & $1$ & $(t,0,0)$ \\ $2$ & $RR$ & $1$ & $(t,0,0)$ \\ & $RV$ & $1$ & $(t^{2}, t^{3}, 0)$\\ $3$ & $RRR$ & $1$ & $(t,0,0)$ \\ & $RRV$ & $1$ & $(t^{2}, t^{5}, 0)$ \\ & $RVR$ & $1$ & $(t^{2}, t^{3}, 0)$ \\ & $RVV$ & $1$ & $(t^{3}, t^{5}, t^{7})$, $(t^{3}, t^{5}, 0)$ \\ & $RVT$ & $2$ & $(t^{3}, t^{4},t^{5})$, $(t^{3}, t^{4}, 0)$ \\ & $RVL$ & $1$ & $(t^{4}, t^{6}, t^{7})$ \\ \hline \end{tabular} \end{table} \no For level $4$ there is a total of $23$ possible $RVT$ classes.
Of the $23$ possibilities, $14$ consist of a single orbit. The classes $RRVT$, $RVRV$, $RVVR$, $RVVV$, $RVVT$, $RVTR$, $RVTV$, $RVTL$ consist of $2$ orbits, and the class $RVTT$ consists of $4$ orbits. \end{thm} \begin{rem} There are a few words that should be said to explain the normal forms column in Table 2. Let $p_{k} \in \mathcal{P}^{k}$, for $k = 1,2,3$, have $RVT$ code $\omega$, meaning $\omega$ is a word from the second column of the table. If $\gamma \in Germ(p_{k})$, then $\gamma$ is $(RL)$ equivalent to one of the curves listed in the normal forms column for the $RVT$ class $\omega$. Now, notice that for the class $RVV$ there are two inequivalent curves sitting in the normal forms column, but that there is only one orbit within that class. This is because the three-fold prolongations of the two normal forms pass through the same point at $t=0$. However, after four prolongations they represent different points at the fourth level. This corresponds to the fact that at the fourth level the class $RVVR$ breaks up into two orbits. \end{rem} The following theorems are results that were proved in \cite{castro} and which helped to reduce the number of calculations in our orbit classification process. \begin{defn} A point $p_{k} \in \mathcal{P}^{k}$ is called a Cartan point if its $RVT$ code is $R^{k}$. \end{defn} \begin{thm}\label{thm:cartan} The $RVT$ class $R^{k}$ forms a single orbit at any level within the Monster tower $\mathcal{P}^{k}(n)$ for $k \geq 1$ and $n \geq 1$. Every point at level $1$ is a Cartan point. For $k > 1$ the set $R^{k}$ is an open dense subset of $\mathcal{P}^{k}(n)$.
\end{thm} \begin{defn} A parametrized curve is an $A_{2k}$ curve, $k \geq 1$, if it is equivalent to the curve $$(t^{2}, t^{2k + 1}, 0). $$ \end{defn} \begin{thm}\label{thm:ak} Let $p_{j} \in \mathcal{P}^{j}$, with $j = k + m + 1$ for integers $m,k \geq 0$, and let $p_{j} \in R^{k}CR^{m}$. Then $Germ(p_{j})$ contains a curve germ equivalent to the $A_{2k}$ singularity, which means that the $RVT$ class $R^{k}CR^{m}$ consists of a single orbit. \end{thm} \begin{rem} One could ask ``Why curves?'' The space of $k$-jets of functions $f:\mathbb{R}\rightarrow \mathbb{R}^{2}$, usually denoted by $J^k(\mathbb{R},\mathbb{R}^{2})$, is an open dense subset of $\mathcal{P}^{k}$. It is in this sense that a point $p\in \mathcal{P}^{k}$ is, roughly speaking, the $k$-jet of a curve in $\mathbb{R}^3$. Sections of the bundle $$J^k(\mathbb{R},\mathbb{R}^{2}) \rightarrow \mathbb{R}\times \mathbb{R}^{2}$$ are $k$-jet extensions of functions. Explicitly, given a curve $t\mapsto (t,x(t),y(t))$, its $k$-jet extension is defined as $$t\mapsto (t,x(t),y(t),x'(t),y'(t),\dots,x^{(k)}(t),y^{(k)}(t)).$$ (Superscript here denotes the order of the derivative.) It is an instructive example to show that for certain choices of fiber affine coordinates in $\mathcal{P}^{k}$, not involving critical directions, our local charts will look like a copy of $J^k(\mathbb{R},\mathbb{R}^{2})$. \ Another reason for looking at curves is that it gives us a better picture of the overall behavior of an $RVT$ class. If one knows all the possible curve normal forms for a particular $RVT$ class, say $\omega$, then not only does one know how many orbits are within the class $\omega$, but also how many orbits are within the regular prolongations of $\omega$. By a regular prolongation of an $RVT$ class $\omega$ we mean the addition of $R$'s to the end of the word $\omega$, i.e. a regular prolongation of $\omega$ is $\omega R \cdots R$.
This method of using curves to classify $RVT$ classes was used in \cite{mont1} and proved to be very successful in classifying points within the $n=1$ Monster Tower. \end{rem} \section{Tools and ideas involved in the proofs}\label{sec:tools} Before we begin the proofs we need to define the $RVT$ code. \subsection{$RC$ coding of points.} \begin{defn} A point $p_{k} \in \mathcal{P}^{k}$, where $p_{k} = (p_{k-1}, \ell)$, is called a regular point or a critical point according as the line $\ell$ is a regular direction or a critical direction. \end{defn} \begin{defn} For $p_{k} \in \mathcal{P}^{k}$, $k \geq 1$ and $p_{i} = \pi_{k,i}(p_{k})$, we write $\omega_{i}(p_{k}) = R$ if $p_{i}$ is a regular point and $\omega_{i}(p_{k}) = C$ if $p_{i}$ is a critical point. Then the word $\omega(p_{k}) = \omega_{1}(p_{k}) \cdots \omega_{k}(p_{k})$ is called the $RC$ code for the point $p_{k}$. Note that $\omega_{1}(p_{k})$ is always equal to $R$ by Theorem $3.4$. \end{defn} \no So far we have not discussed how critical directions arise inside of $\Delta_{k}$. The following section will show that there is more than one kind of critical direction that can appear within the distribution $\Delta_{k}$. \subsection{Baby Monsters.} One can apply prolongation to any analytic $n$-dimensional manifold $F$ in place of $\R^{n}$. Start out with $\mathcal{P}^{0}(F) = F$ and take $\Delta^{F}_{0} = TF$. Then the prolongation of the pair $(F, \Delta^{F}_{0})$ is $\mathcal{P}^{1}(F) = \Pj TF$, with canonical rank $n$ distribution $\Delta^{F}_{1} = (\Delta^{F}_{0})^{1}$. By iterating this process $k$ times we end up with the pair $(\mathcal{P}^{k}(F), \Delta^{F}_{k})$, which is analytically diffeomorphic to $(\mathcal{P}^{k}(n-1), \Delta_{k})$. \ Now, apply this process to the fiber $F_{i}(p_{i}) = \pi^{-1}_{i, i-1}(p_{i-1}) \subset \mathcal{P}^{i}$ through the point $p_{i}$ at level $i$. The fiber is an integral submanifold for $\Delta_{i}$.
Prolonging, we see that $\mathcal{P}^{1}(F_{i}(p_{i})) \subset \mathcal{P}^{i + 1}$ has the associated distribution $\delta^{1}_{i} = \Delta^{F_{i}(p_{i})}_{1}$; that is, $$\delta^{1}_{i}(q) = \Delta_{i + 1}(q) \cap T_{q}(\mathcal{P}^{1}(F_{i}(p_{i}))) $$ which is a hyperplane within $\Delta_{i + 1}(q)$, for $q \in \mathcal{P}^{1}(F_{i}(p_{i}))$. When this prolongation process is iterated, we end up with the submanifolds $$\mathcal{P}^{j}(F_{i}(p_{i})) \subset \mathcal{P}^{i + j}$$ with the hyperplane subdistribution $\delta^{j}_{i} \subset \Delta_{i + j}(q)$ for $q \in \mathcal{P}^{j}(F_{i}(p_{i}))$. \begin{defn} A baby Monster born at level $i$ is a sub-tower $(\mathcal{P}^{j}(F_{i}(p_{i})), \delta^{j}_{i})$, for $j \geq 0$, within the Monster tower. If $q \in \mathcal{P}^{j}(F_{i}(p_{i}))$ then we will say that a baby Monster born at level $i$ passes through $q$ and that $\delta^{j}_{i}(q)$ is a critical hyperplane passing through $q$, born at level $i$. \end{defn} \begin{rem} The vertical plane $V_k (q)$, which is of the form $\delta^{0}_{k} (q)$, is always one of the critical hyperplanes passing through $q$. \end{rem} \begin{thm} A direction $\ell \subset \Delta_{k}$ is critical $\Leftrightarrow$ $\ell$ is contained in a critical hyperplane. \end{thm} \begin{figure} \caption{Arrangement of critical hyperplanes.} \label{fig:one-plane} \label{fig:two-planes} \label{fig:three-planes} \label{fig:arrangement} \end{figure} \subsection{Arrangements of critical hyperplanes for $n = 2$.} Over any point $p_{i}$, at the $i$-th level of the Monster tower, there is a total of three different hyperplane configurations for $\Delta_{i}$. These three configurations are shown in diagrams $(a)$, $(b)$, and $(c)$. Figure $(a)$ is the picture for $\Delta_{i}(p_{i})$ when the $i$-th letter in the $RVT$ code for $p_{i}$ is the letter $R$.
From our earlier discussion, this means that the vertical hyperplane, labeled with a $V$, is the only critical hyperplane sitting inside of $\Delta_{i}(p_{i})$. Figure $(b)$ is the picture for $\Delta_{i}(p_{i})$ when the $i$-th letter in the $RVT$ code is either the letter $V$ or the letter $T$. In this case there is a total of two critical hyperplanes sitting inside of $\Delta_{i}(p_{i})$: one is the vertical hyperplane and the other is the tangency hyperplane, labeled by the letter $T$. Figure $(c)$ depicts the picture for $\Delta_{i}(p_{i})$ when the $i$-th letter in the $RVT$ code of $p_{i}$ is the letter $L$: there is now a total of three critical hyperplanes, namely the vertical hyperplane and two tangency hyperplanes, labeled as $T_{1}$ and $T_{2}$. Now, because of the presence of these three critical hyperplanes we need to refine our notion of an $L$ direction and add two more distinct $L$ directions. These three directions are labeled as $L_{1}$, $L_{2}$, and $L_{3}$. \ With the above in mind, we can now refine our $RC$ coding and define the $RVT$ code for points within the Monster tower. Take $p_{k} \in \mathcal{P}^{k}$; if $\omega_{i}(p_{k}) = C$, then we look at the point $p_{i} = \pi_{k,i}(p_{k})$, where $p_{i} = (p_{i-1}, \ell_{i-1})$. Then, depending on which hyperplane $\ell_{i-1}$ is contained in, we relabel the letter $C$ by the letter $V$, $T$, $L$, $T_{i}$ for $i = 1,2$, or $L_{j}$ for $j = 1,2,3$. As a result, we see that each of the first four levels of the Monster tower is made up of the following $RVT$ classes: \ \begin{itemize} \item{}Level 1: $R$. \item{}Level 2: $RR, RV$.
\item{}Level 3: $$RRR, RRV, RVR, RVV, RVT, RVL$$ \item{}Level 4: $$RRRR, RRRV$$ $$RRVR, RRVV, RRVT, RRVL$$ $$RVRR, RVRV, RVVR, RVVV, RVVT, RVVL $$ $$RVTR, RVTV, RVTT , RVTL$$ $$RVLR, RVLV, RVLT_1, RVLT_2, RVLL_1, RVLL_2, RVLL_3$$ \end{itemize} \ \begin{rem} As was pointed out in \cite{castro}, the symmetries, at any level in the Monster tower, preserve the critical hyperplanes. In other words, if $\Phi^{k}$ is a symmetry at level $k$ in the Monster tower and $\delta^{j}_{i}$ is a critical hyperplane within $\Delta_{k}$, then $\Phi^{k}_{\ast}(\delta^{j}_{i}) = \delta^{j}_{i}$. As a result, the $RVT$ classes create a partition of the points within any level of the Monster tower. \end{rem} Now, from the above configurations of critical hyperplanes one might ask the following question: how does one ``see'' the two tangency hyperplanes that appear over an $L$ point, and where do they come from? This question was an important one to ask when trying to classify the number of orbits within the fourth level of the Monster Tower and to better understand the geometry of the tower. We will provide an example to answer this question, but before we do so we must discuss some details about a particular coordinate system, called Kumpera--Rubin coordinates, which helps us do various computations on the Monster tower. \ \subsection{Kumpera-Rubin coordinates} When doing local computations in the tower (\ref{eqn:tower}) one needs to work with suitable coordinates. A good choice of coordinates was suggested by Kumpera and Ruiz \cite{kumpera1} in the Goursat case, and later generalized by Kumpera and Rubin \cite{kumpera2} for multi-flags. A detailed description of the inductive construction of Kumpera--Rubin coordinates was given in \cite{castro} and is discussed in the example following this section, as well as in the proof of our level $3$ classification. For the sake of clarity, we will highlight the coordinates' attributes through an example.
\begin{eg}[Constructing fiber affine coordinates in $\mathcal{P}^{2}$] \end{eg} \subsection*{Level One:} Consider the pair $(\mathbb{R}^{3}, T \R^{3})$ and let $(x,y,z)$ be local coordinates. The set of 1-forms $\{dx,dy,dz\}$ forms a coframe of $T^*\mathbb{R}^3$. Any line $\ell$ through $p \in \mathbb{R}^3$ has projective coordinates $[dx(\ell): dy(\ell): dz(\ell)]$. Since the affine group, which is contained in $Diff(3)$, acts transitively on $\mathcal{P}(T\mathbb{R}^3)$, we can fix $p=(0,0,0)$ and $\ell = \text{span} \left\{ (1,0,0) \right\}$. Thus $dx(\ell)\neq 0$ and we introduce fiber affine coordinates $[1: dy/dx: dz/dx]$ or, $$u = \frac{dy}{dx}, v = \frac{dz}{dx}.$$ The Pfaffian system describing the prolonged distribution $\Delta_1$ on $$\mathcal{P}^{1} \approx \mathbb{R}^3\times \mathbb{P}^2$$ is $$ \{dy - u dx = 0, dz - v dx = 0 \} = \Delta_1 \subset T\mathcal{P}^{1}.$$ At the point $p_{1} = (p_{0}, \ell) = (x,y,z,u,v) = (0,0,0,0,0)$ the distribution is the linear subspace $$\Delta_1(0,0,0)=\{dy = 0,dz = 0\}.$$ The triad of one-forms $dx,du,dv$ forms a local coframe for $\Delta_1$ near $p_{1} = (p_{0}, \ell)$. The fiber, $F_{1}(p_{1}) = \pi^{-1}_{1,0}(p_{0})$, is given by $x = y = z = 0$. The 2-plane of critical directions (``bad directions'') is thus spanned by $\frac{\partial}{\partial u},\frac{\partial}{\partial v}$. \ The reader may have noticed that we could have chosen any regular direction at level $1$ instead, e.g. $\frac{\partial}{\partial x} + a \frac{\partial}{\partial u} + b \frac{\partial}{\partial v}$, and centered our chart on it. All regular directions at level one are equivalent.
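As a sanity check on the level-one chart (a sketch of our own, not part of the paper), one can verify symbolically that prolonged curves are integral curves of the Pfaffian system $\{dy - u\,dx,\ dz - v\,dx\}$: along the prolongation of any curve with $x'(t) \neq 0$, both one-forms vanish identically. We use the sample curve $(t, t^2, t^3)$:

```python
import sympy as sp

t = sp.symbols('t')

# A curve through the origin tangent to a regular direction.
x, y, z = t, t**2, t**3

# Fiber affine coordinates of the prolonged curve c^1(t).
u = sp.cancel(sp.diff(y, t) / sp.diff(x, t))   # dy/dx = 2t
v = sp.cancel(sp.diff(z, t) / sp.diff(x, t))   # dz/dx = 3t^2

# Pull the one-forms dy - u dx and dz - v dx back along c^1:
# both vanish identically, so c^1 is an integral curve of Delta_1.
omega1 = sp.simplify(sp.diff(y, t) - u * sp.diff(x, t))
omega2 = sp.simplify(sp.diff(z, t) - v * sp.diff(x, t))
print(omega1, omega2)  # 0 0
```

The same check goes through for any analytic curve whose $x$-component dominates, which is exactly the regularity condition the chart encodes.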
\subsection*{Level Two: $RV$ points.} Any line $\ell\subset \Delta_1(p_{1}')$, for $p_{1}'$ near $p_{1}$, will have projective coordinates $$[dx(\ell): du(\ell): dv(\ell)] .$$ If we choose a critical direction, say $\ell =\frac{\partial}{\partial u}$, then $du(\frac{\partial}{\partial u})=1$ and we can take the projective chart $[\frac{dx}{du} : 1 : \frac{dv}{du}]$. We will show below that any two critical directions are equivalent and therefore such a choice does not result in any loss of generality. We introduce new fiber affine coordinates $$u_2 = \frac{dx}{du}, \quad v_2 = \frac{dv}{du},$$ and the distribution $\Delta_2$ will be described in this chart as \begin{eqnarray*} \Delta_2 = \{dy - u dx = 0, dz - v dx = 0,\\ dx - u_2 du = 0, dv - v_2 du = 0\} \subset T\mathcal{P}^{2}. \end{eqnarray*} \subsection*{Level Three: The Tangency Hyperplanes over an $L$ point.} We take $p_{3} = (p_{2}, \ell) \in RVL$ with $p_{2}$ as in the level two discussion. We will show what the local affine coordinates near this point are, and that the tangency hyperplane $T_{1}$, in $\Delta_{3}(p_{3})$, is the critical hyperplane $\delta^{1}_{2}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa v_{3}} \}$, while the tangency hyperplane $T_{2}$ is the critical hyperplane $\delta^{2}_{1}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}}\}$. \ We begin with the local coordinates near $p_{3}$. First, recall that the distribution $\Delta_{2}$, in this case, is coframed by $[du: du_{2}: dv_{2}]$. Within $\Delta_{2}$ the vertical hyperplane is given by $du = 0$ and the tangency hyperplane by $du_{2} = 0$. The point $p_{3} = (p_{2}, \ell)$ with $\ell$ being an $L$ direction means that both $du(\ell) = 0$ and $du_{2}(\ell) = 0$.
This means that the only choice of projective chart near $p_{3}$ is $[\frac{du}{dv_{2}}: \frac{du_{2}}{dv_{2}}: 1]$, so that the fiber coordinates at level $3$ are $$u_{3} = \frac{du}{dv_{2}}, \quad v_{3} = \frac{du_{2}}{dv_{2}}, $$ and the distribution $\Delta_{3}$ will be described in this chart as \begin{eqnarray*} \Delta_{3} = \{dy - u dx = 0, dz - v dx = 0,\\ dx - u_2 du = 0, dv - v_2 du = 0, \\ du - u_{3}dv_{2} = 0, du_{2} - v_{3}dv_{2} = 0 \} \subset T\mathcal{P}^{3}. \end{eqnarray*} With this in mind, we are ready to determine how the two tangency hyperplanes are situated within $\Delta_{3}$. \ {\it $T_{1} = \delta^{1}_{2}(p_{3})$:} First we note that $p_{3} = (x,y,z,u,v,u_{2},v_{2},u_{3},v_{3}) = (0,0,0,0,0,0,0,0,0)$ with $u = \frac{dy}{dx}$, $v = \frac{dz}{dx}$, $u_{2} = \frac{dx}{du}$, $v_{2} = \frac{dv}{du}$, $u_{3} = \frac{du}{dv_{2}}$, $v_{3} = \frac{du_{2}}{dv_{2}}$. With this in mind, we start by looking at the vertical hyperplane $V_{2}(p_{2}) \subset \Delta_{2}(p_{2})$ and prolong the fiber $F_{2}(p_{2})$ associated to $V_{2}(p_{2})$, and see that $$ \mathcal{P}^{1}(F_{2}(p_{2})) = \Pj V_{2} = (p_{1}, u_{2}, v_{2}, [ du: du_{2}: dv_{2} ]) = (p_{1}, u_{2}, v_{2}, [ 0: a: b ]) $$ $$= (p_{1}, u_{2}, v_{2}, [ 0: \frac{a}{b}: 1 ]) = (p_{1}, u_{2}, v_{2}, 0, v_{3})$$ where $a,b \in \R$ with $b \neq 0$.
Then, since $\Delta_{3}$, in a neighborhood of $p_{3}$, is given by $$ \Delta_{3} = span \{ u_{3}Z^{(2)}_{1} + v_{3}\frac{\pa}{\pa u_{2}} + \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}}, \frac{\pa}{\pa v_{3}} \}$$ with $Z^{(2)}_{1} = u_{2}Z^{(1)}_{1} + \frac{\pa}{\pa u} + v_{2} \frac{\pa}{\pa v}$ and $Z^{(1)}_{1} = u \frac{\pa}{\pa y} + v \frac{\pa}{\pa z} + \frac{\pa}{\pa x}$, and since $T_{p_{3}}(\mathcal{P}^{1}(F_{2}(p_{2}))) = span \{ \frac{\pa}{\pa u_{2}}, \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa v_{3}} \}$, the identity $$\delta^{1}_{2}(p_{3}) = \Delta_{3}(p_{3}) \cap T_{p_{3}}(\mathcal{P}^{1}(F_{2}(p_{2})))$$ gives us that $$\delta^{1}_{2}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa v_{3}} \}.$$ Now, since $V_{3}(p_{3}) \subset \Delta_{3}(p_{3})$ is given by $V_{3}(p_{3}) = span \{ \frac{\pa}{\pa u_{3}}, \frac{\pa}{\pa v_{3}} \}$, we see, based upon figure $(c)$, that $T_{1} = \delta^{1}_{2}(p_{3})$. {\it $T_{2} = \delta^{2}_{1}(p_{3})$:} We begin by looking at $V_{1}(p_{1}) \subset \Delta_{1}(p_{1})$ and at the fiber $F_{1}(p_{1})$ associated to $V_{1}(p_{1})$. When we prolong the fiber space we see that $$\mathcal{P}^{1}(F_{1}(p_{1})) = \Pj V_{1} = (0,0,0, u, v, [dx: du: dv]) = (0,0,0, u, v, [0: a: b])$$ $$ = (0,0,0, u, v, [0: 1: \frac{b}{a}]) = (0,0,0, u, v, 0, v_{2})$$ where $a,b \in \R$ with $a \neq 0$.
Then, since $\Delta_{2}$, in a neighborhood of $p_{2}$, is given by $$ \Delta_{2} = span \{ u_{2}Z^{(1)}_{1} + \frac{\pa}{\pa u} + v_{2}\frac{\pa}{\pa v}, \frac{\pa}{\pa u_{2}}, \frac{\pa}{\pa v_{2}} \}$$ and $T_{p_{2}}(\mathcal{P}^{1}(F_{1}(p_{1}))) = span \{ \frac{\pa}{\pa u}, \frac{\pa}{\pa v}, \frac{\pa}{\pa v_{2}} \}$, the identity $$\delta^{1}_{1}(p_{2}) = \Delta_{2}(p_{2}) \cap T_{p_{2}}(\mathcal{P}^{1}(F_{1}(p_{1})))$$ shows that in a neighborhood of $p_{2}$ $$\delta^{1}_{1} = span \{ u_{2}Z^{(1)}_{1} + \frac{\pa}{\pa u} + v_{2}\frac{\pa}{\pa v}, \frac{\pa}{\pa v_{2}} \}.$$ Now, in order to figure out what $\delta^{2}_{1}(p_{3})$ is we need to prolong the fiber $F_{1}(p_{1})$ twice and then look at the tangent space at the point $p_{3}$. We see that $$\mathcal{P}^{2}(F_{1}(p_{1})) = \Pj \delta^{1}_{1} = (0,0,0, u,v, 0, v_{2}, [du: du_{2}: dv_{2}])$$ $$ = (0,0,0, u,v, 0, v_{2}, [a: 0: b]) = (0,0,0,u, v, 0, v_{2}, [\frac{a}{b}: 0: 1]) = (0,0,0, u, v, 0, v_{2}, u_{3}, 0)$$ and then, since $$\delta^{2}_{1}(p_{3}) = \Delta_{3}(p_{3}) \cap T_{p_{3}}(\mathcal{P}^{2}(F_{1}(p_{1})))$$ with $\Delta_{3}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}}, \frac{\pa}{\pa v_{3}} \}$ and $T_{p_{3}}(\mathcal{P}^{2}(F_{1}(p_{1}))) = span \{ \frac{\pa}{\pa u}, \frac{\pa}{\pa v}, \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}} \}$, it follows that $$\delta^{2}_{1}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}} \},$$ and from looking at figure $(c)$ one can see that $T_{2} = \delta^{2}_{1}(p_{3})$. \begin{figure} \caption{Critical hyperplane configuration over $p_{3}$.} \end{figure} \begin{rem} The above example, along with figure $2$, gives concrete reasoning for why a critical hyperplane which is not the vertical one is called a ``tangency'' hyperplane.
Also, in figure $2$ we have drawn the submanifolds $\mathcal{P}^{1}(F_{2}(p_{2}))$ and $\mathcal{P}^{1}(F_{1}(p_{1}))$ to reflect the fact that they are tangent to the manifolds $\mathcal{P}^{3}$ and $\mathcal{P}^{2}$ respectively, with one of their dimensions tangent to the vertical space. At the same time, the submanifold $\mathcal{P}^{2}(F_{1}(p_{1}))$ is drawn to reflect that it is tangent to the manifold $\mathcal{P}^{3}$ with one of its dimensions tangent to the appropriate direction in the vertical hyperplane. In particular, it is drawn to show the fact that $\mathcal{P}^{2}(F_{1}(p_{1}))$ is tangent to the $\frac{\partial}{\partial u_{3}}$ direction while $\mathcal{P}^{1}(F_{2}(p_{2}))$ is tangent to the $\frac{\partial}{\partial v_{3}}$ direction. \end{rem} \subsection{Semigroup of a curve} \begin{defn} The \textit{order} of an analytic curve germ $f(t) = \Sigma a_{i}t^{i}$ is the smallest integer $i$ such that $a_{i} \neq 0$. We write $ord(f)$ for this (nonnegative) integer. The \textit{multiplicity} of a curve germ $\gamma:(\mathbb{R},0) \to (\mathbb{R}^n,0)$, denoted $mult(\gamma)$, is the minimum of the orders of its coordinate functions $\gamma_{i}(t)$ relative to any coordinate system centered at the origin. \end{defn} \begin{defn} If $\gamma: (\mathbb{R},0) \to (\mathbb{R}^n,0)$ is a well-parameterized curve germ, then its \emph{semigroup} is the collection of positive integers $\text{ord}(P(\gamma(t)))$ as $P$ varies over analytic functions of $n$ variables vanishing at $0$. \end{defn} Because $\text{ord}((PQ)(\gamma(t))) = \text{ord}(P(\gamma(t))) + \text{ord}(Q(\gamma(t)))$, the curve semigroup is indeed an algebraic semigroup, i.e.\ a subset of $\mathbb{N}$ closed under addition. The semigroup of a well-parameterized curve is a basic diffeomorphism invariant of the curve.
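For a monomial curve germ such as $(t^{a}, t^{b}, t^{c})$, every composition $P(\gamma(t))$ is a power series whose exponents are sums of $a$, $b$, $c$, so the curve semigroup is just the numerical semigroup generated by the exponents. The gaps of such a semigroup (the bracketed entries in the notation used later in this section) can therefore be enumerated mechanically. The sketch below is our own helper code, written only as a sanity check on the semigroups quoted in this paper:

```python
def semigroup(generators, bound):
    """Elements of the additive semigroup generated by `generators`
    that are <= bound (0 excluded), computed by dynamic programming."""
    reachable = {0}
    for n in range(1, bound + 1):
        # n is reachable iff n - g is reachable for some generator g
        if any(n - g in reachable for g in generators):
            reachable.add(n)
    reachable.discard(0)
    return sorted(reachable)

def gaps(generators, bound):
    """Positive integers <= bound missing from the semigroup."""
    S = set(semigroup(generators, bound))
    return [n for n in range(1, bound + 1) if n not in S]
```

For instance, `gaps([3, 5, 7], 10)` returns `[1, 2, 4]`, matching the semigroup $\{3, [4], 5, 6, 7, \cdots \}$ of $(t^{3}, t^{5}, t^{7})$, while `gaps([3, 5], 10)` returns `[1, 2, 4, 7]`, matching $\{3, [4], 5, 6, [7], 8, \cdots \}$ for the planar curve $(t^{3}, t^{5}, 0)$.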
\begin{defn}[Following Arnol'd (cf.\ \cite{arnsing}, end of his introduction)] A curve germ $\gamma$ in $\mathbb{R}^d$, $d \ge 2$, has symbol $[m, n]$, $[m, n, p]$, or $[m, (n, p)]$ if it is equivalent to a curve germ of the form $(t^m, t^n, 0, \ldots,0 ) + O(t^{n+1})$, $(t^m, t^n, t^p, 0, \ldots,0) + O(t^{p+1})$, or $(t^m, t^n + t^p, 0, \ldots,0) + O(t^{p+1})$, respectively. Here $m< n < p$ are positive integers. \end{defn} \subsection{The points-to-curves and back philosophy} The idea is to translate the problem of classifying orbits in the tower (\ref{eqn:tower}) into an equivalent classification problem for finite jets of space curves. Here we mention some highlights of this approach and refer the diligent reader to \cite{castro} for the technical details. \subsubsection*{Methodology. How does the curve-to-point philosophy work?} To any $p\in \mathcal{P}^{k}(n)$ we associate the set \begin{eqnarray*} \text{Germ}(p) &=& \{ c :(\mathbb{R},0)\rightarrow (\mathbb{R}^3,0) \mid c^{k}(0) = p \text{ and } \frac{dc^{k}}{dt}\vert_{t=0}\neq 0 \text{ is a regular direction} \}. \end{eqnarray*} The operation of $k$-fold prolongation applied to $\text{Germ}(p)$ yields immersed curves at level $k$ in the Monster tower, tangent to some line $\ell$ having nontrivial projection onto the base manifold $\mathbb{R}^3.$ Such ``good directions'' were named {\it regular} in \cite{castro}, and within each subspace $\Delta_k$ they form an open dense set. A ``bad direction'' $\ell_{\text{critical}}$, or {\it critical direction} in the jargon of \cite{castro}, is a direction which projects down to a point. The set of critical directions has codimension 1, and consists of a finite union of 2-planes within each $\Delta_k$. Symmetries of $\mathcal{P}^{k}$ preserve the different types of directions. In \cite{castro} it was proved that $\text{Germ}(p)$ is always non-empty. Consider now the set valued map $p\mapsto \text{Germ}(p)$. One can prove that $p\sim q$ iff $\text{Germ}(p)\sim \text{Germ}(q)$.
(The latter equivalence means ``to any curve in $\text{Germ}(p)$ there is a curve in $\text{Germ}(q)$ and vice-versa.'') \begin{lemma}[Fundamental lemma of points-to-curves approach] Let $\Omega$ be a subset of $\mathcal{P}^{k}(n)$ and suppose that $$\bigcup_{p\in \Omega}\text{Germ}(p)= \{\text{finite no. of equivalence classes of curves}\}.$$ Then $$\Omega = \{\text{finite no. of orbits}\}.$$ \end{lemma} \section{Proofs.} Now we are ready to prove Theorem \ref{thm:main} and Theorem \ref{thm:count}. We start at level $1$ of the tower and work our way up to level $4$. At each level we classify the orbits within each of the $RVT$ classes that can arise at that particular level. We begin with the first level. \noindent {\bf Proof of Theorem \ref{thm:main} and Theorem \ref{thm:count}, the classification of points at level $1$ and level $2$.} Theorem $3.3$ tells us that all points at the first level of the tower are equivalent, giving that there is a single orbit. For level $2$ there are only two possible $RVT$ codes: $RR$ and $RV$. Again, any point in the class $RR$ is a Cartan point and by Theorem $3.4$ consists of only one orbit. The class $RV$ consists of a single orbit by Theorem $3.5$. \ {\bf The classification of points at level $3$.} There is a total of six distinct $RVT$ classes at level three in the Monster tower. We begin with the class $RRR$. {\it The class $RRR$:} Any point within the class $RRR$ is a Cartan point, so by Theorem \ref{thm:cartan} there is only one orbit within this class. {\it The classes $RVR$ and $RRV$:} From Theorem \ref{thm:ak} we know that any point within the class $RVR$ has a single orbit, which is represented by the point $\gamma^{3}(0)$ where $\gamma$ is the curve $\gamma(t) = (t^{2}, t^{3}, 0)$. Similarly, the class $RRV$ has a single orbit, which is represented by the point $\tilde{\gamma}^{3}(0)$ where $\tilde{\gamma}(t) = (t^{2}, t^{5}, 0)$.
\no Before we continue, we need to pause and provide some framework to help us with the classification of the remaining $RVT$ codes. \ {\it Setup for classes of the form $RVC$:} We set up coordinates $x,y,z,u,v,u_{2},v_{2}$ for a point in the class $RV$ as in subsection $4.4$. Then for $p_{2} \in RV$ we have $\Delta_{2}(p_{2}) = span \{ \frac{\pa}{\pa u}, \frac{\pa}{\pa u_{2}}, \frac{\pa}{\pa v_{2}} \}$ where $p_{2} = (x,y,z,u,v,u_{2}, v_{2}) = (0,0,0,0,0,0,0)$, and any point $p_{3} \in RVC \subset \mathcal{P}^{3}$ is of the form $p_{3} = (p_{2}, \ell_{2}) = (p_{2}, [du(\ell_{2}): du_{2}(\ell_{2}): dv_{2}(\ell_{2})])$. Since the point $p_{2}$ is in the class $RV$, we see that if $du = 0$ and $du_{2} \neq 0$ along $\ell_{2}$ then $p_{3} \in RVV$; if $du_{2} = 0$ and $du \neq 0$ along $\ell_{2}$ then $p_{3} \in RVT$; and if $du = 0$ and $du_{2} = 0$ along $\ell_{2}$ then $p_{3} \in RVL$. With this in mind, we are ready to continue with the classification. \ {\it The class $RVV$:} Let $p_{3} \in RVV$ and let $\gamma \in Germ(p_{3})$. We prolong $\gamma$ three times to get $\gamma^{3}(t) = (x(t),y(t),z(t), u(t), v(t), u_{2}(t), v_{2}(t))$, and we look at the component functions $u(t)$, $u_{2}(t)$, and $v_{2}(t)$, where we set $u(t) = \Sigma_{i} a_{i}t^{i}$, $u_{2}(t) = \Sigma_{j}b_{j}t^{j}$, and $v_{2}(t) = \Sigma_{k}c_{k}t^{k}$. Now, since $\gamma^{2}(t)$ needs to be tangent to the vertical hyperplane in $\Delta_{3}$, the direction $\gamma^{2}(0)'$ must be a proper vertical direction in $\Delta_{3}$, meaning $\gamma^{2}(0)'$ is not an $L$ direction. Since $\Delta_{3}$ is coframed by $du$, $du_{2}$, and $dv_{2}$, we must have that $du = 0$ and $du_{2} \neq 0$ along $\gamma^{2}(0)'$. This imposes on the functions $u(t)$ and $u_{2}(t)$ the conditions $a_{1} = 0$ and $b_{1} \neq 0$, but for $v_{2}(t)$ it may or may not be true that $c_{1}$ is nonzero. Also, it must be true that $a_{2} \neq 0$, or else the curve $\gamma$ will not be in the set $Germ(p_{3})$. We first look at the case when $c_{1} \neq 0$.
{\it Case $1$, $c_{1} \neq 0$:} From looking at the one-forms that determine $\Delta_{2}$, we see that in order for the curve $\gamma^{3}$ to be integral to this distribution, the other component functions of $\gamma^{3}$ must satisfy the following relations: $$ \dot{y}(t) = u(t) \dot{x}(t), \dot{z}(t) = v(t) \dot{x}(t)$$ $$ \dot{x}(t) = u_{2}(t) \dot{u}(t), \dot{v}(t) = v_{2}(t) \dot{u}(t)$$ We start with the expressions for $\dot{x}(t)$ and $\dot{v}(t)$ and see, based upon what we know about $u(t)$, $u_{2}(t)$, and $v_{2}(t)$, that $x(t) = \frac{2a_{2}b_{1}}{3}t^{3} + \ldots$ and $v(t) = \frac{2a_{2}c_{1}}{3}t^{3} + \ldots$. We can then use this information to help us find $y(t)$ and $z(t)$. We see that $y(t) = \frac{2a^{2}_{2}b_{1}}{5}t^{5} + \ldots$ and, integrating $\dot{z}(t) = \frac{4a^{2}_{2}b_{1}c_{1}}{3}t^{5} + \ldots$, that $z(t) = \frac{2a^{2}_{2}b_{1}c_{1}}{9}t^{6} + \ldots$; since $6 = 3 + 3$, the degree-$6$ term of $z$ can be removed by a change of variables of the form $z \mapsto z - c\,x^{2}$, so the first (generically nonzero) essential term of $z(t)$ occurs in degree $7$. Now we know what the first nonvanishing coefficients are for the curve $\gamma(t) = (x(t), y(t), z(t))$, and we want to determine the simplest curve that $\gamma$ must be equivalent to. In order to do this we will first look at the semigroup for the curve $\gamma$. In this case the semigroup is given by $S = \{3, [4], 5, 6, 7, \cdots \}$. \begin{rem} We again pause to explain the notation used for the semigroup $S$. The set $S = \{3, [4], 5, 6, 7, \cdots \}$ is a semigroup where the binary operation is addition. The numbers $3$, $5$, $6$, and so on are elements of this semigroup, while the bracket around the number $4$ means that it is not an element of $S$. When we write ``$\cdots$'' after the number $7$ it means that every positive integer after $7$ is an element in our semigroup. \end{rem} This means that every term $t^{i}$, for $i \geq 8$, can be eliminated from the above power series expansions of the component functions $x(t)$, $y(t)$, and $z(t)$ by a change of variables given by $(x,y,z) \mapsto (x + f(x,y,z), y + g(x,y,z), z + h(x,y,z))$.
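The formal integration just carried out, and the reparametrization arguments that follow, are mechanical enough to verify with truncated power-series arithmetic. The sketch below (our own helper routines, with the normalization $a_{2} = b_{1} = c_{1} = 1$) integrates the four relations and confirms the leading orders; it also checks that a reparametrization of the form $t = T(1 - \frac{\alpha}{3}T)$ removes the $t^{4}$ term of $t^{3} + \alpha t^{4}$:

```python
from fractions import Fraction

N = 10  # truncate all series after t^N

def mono(k, c=1):
    """the monomial c * t^k as a coefficient list"""
    f = [Fraction(0)] * (N + 1)
    f[k] = Fraction(c)
    return f

def add(f, g):
    return [a + b for a, b in zip(f, g)]

def mul(f, g):
    h = [Fraction(0)] * (N + 1)
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                if gj and i + j <= N:
                    h[i + j] += fi * gj
    return h

def ddt(f):
    """termwise derivative"""
    return [(k + 1) * f[k + 1] for k in range(N)] + [Fraction(0)]

def integ(f):
    """antiderivative with zero constant term"""
    return [Fraction(0)] + [f[k] / (k + 1) for k in range(N)]

def order(f):
    return next(k for k, c in enumerate(f) if c != 0)

def compose(f, g):
    """f(g(t)) for a reparametrization g with g(0) = 0"""
    out, p = [Fraction(0)] * (N + 1), mono(0)
    for k in range(N + 1):
        if f[k]:
            out = add(out, [f[k] * c for c in p])
        p = mul(p, g)
    return out

# normalized data: u = t^2, u2 = t, v2 = t  (a2 = b1 = c1 = 1)
u, u2, v2 = mono(2), mono(1), mono(1)
x = integ(mul(u2, ddt(u)))   # x' = u2 u'  ->  (2/3) t^3 + ...
v = integ(mul(v2, ddt(u)))   # v' = v2 u'  ->  (2/3) t^3 + ...
y = integ(mul(u, ddt(x)))    # y' = u  x'  ->  (2/5) t^5 + ...
z = integ(mul(v, ddt(x)))    # z' = v  x'  ->  leading term in degree 6

# t = T(1 - alpha/3 T) removes the t^4 term of t^3 + alpha t^4
for alpha in (Fraction(1), Fraction(2), Fraction(-3, 2)):
    f = add(mono(3), mono(4, alpha))
    g = add(mono(1), mono(2, -alpha / 3))
    h = compose(f, g)
    assert h[3] == 1 and h[4] == 0
```

The degree-$6$ coefficient of $z$ found here is nonzero, but since $6 = 3 + 3$ it can be absorbed by a change of variables $z \mapsto z - c\,x^{2}$, consistent with the normal form $(t^{3}, t^{5}, t^{7})$.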
With this in mind, and after we rescale the leading coefficients of each of the component functions of $\gamma$, we see that $$\gamma(t) = (x(t), y(t), z(t)) \sim (\tilde{x}(t), \tilde{y}(t), \tilde{z}(t)) = (t^{3} + \alpha t^{4}, t^{5}, t^{7}) $$ We now want to see if we can eliminate the $\alpha$ term, if it is nonzero. To do this we will use a combination of reparametrization techniques along with semigroup arguments. Using the reparametrization $t = T(1 - \frac{\alpha}{3}T)$, we get that $\tilde{x}(T) = T^{3}(1 - \frac{\alpha}{3}T)^{3} + \alpha T^{4}(1 - \frac{\alpha}{3}T)^{4} = T^{3} + O(T^{5})$. This gives us that $(\tilde{x}(T), \tilde{y}(T), \tilde{z}(T)) = (T^{3} + O(T^{5}), T^{5} + O(T^{6}), T^{7} + O(T^{8}))$, and since we can eliminate all of the higher-order terms in each component (their degrees all lie in $S$), we see that $(\tilde{x}(T), \tilde{y}(T), \tilde{z}(T)) \sim (T^{3}, T^{5}, T^{7})$. This means that our original $\gamma$ is equivalent to the curve $(t^{3}, t^{5}, t^{7})$. {\it Case $2$, $c_{1} = 0$:} By repeating an argument similar to the above one, we will end up with $\gamma(t) = (x(t), y(t), z(t)) = (\frac{2a_{2}b_{1}}{3}t^{3} + \ldots, \frac{2a^{2}_{2}b_{1}}{5}t^{5} + \ldots, \frac{a^{2}_{2}b_{1}c_{2}}{7}t^{7} + \ldots)$. If $c_{2} \neq 0$, the degree-$7$ term of $z$ survives (note that $7$ is not a sum of $3$'s and $5$'s) and we are led back to the normal form of Case $1$; so suppose $c_{2} = 0$. Then every term of $z(t)$ has degree at least $8$, and since every integer $\geq 8$ lies in the semigroup generated by $3$ and $5$, the $z$-component can be eliminated entirely. This gives that the semigroup for the curve $\gamma$ is $S = \{ 3, [4], 5, 6, [7], 8 \cdots \}$ and that our curve $\gamma$ is such that $$\gamma(t) = (x(t), y(t), z(t)) \sim (\tilde{x}(t), \tilde{y}(t), \tilde{z}(t)) = (t^{3} + \alpha_{1}t^{4} + \alpha_{2}t^{7} ,t^{5} + \beta t^{7}, 0)$$ Again, we want to know if we can eliminate the $\alpha_{i}$ and $\beta$ terms. First we focus on the $\alpha_{i}$ terms in $\tilde{x}(t)$. We use the reparametrization given by $t = T(1 - \frac{\alpha_{1}}{3}T)$ to get that $\tilde{x}(T) = T^{3} + \alpha_{2}'T^{7} + O(T^{8})$.
Then to eliminate the $\alpha_{2}'$ term we use the reparametrization given by $T = S(1 - \frac{\alpha_{2}'}{3}S^{4})$ to get that $\tilde{x}(S) = S^{3} + O(S^{8})$. We are now ready to deal with the $\tilde{y}$ function. Because of our two reparametrizations, $\tilde{y}$ is of the form $\tilde{y}(t) = t^{5} + \beta't^{7}$. To get rid of the $\beta'$ term we simply use the rescaling given by $t \mapsto \frac{1}{ \sqrt{ \left|\beta' \right|} }t$ and then use the scaling diffeomorphism given by $(x,y,z) \mapsto ( \left| \beta' \right|^{\frac{3}{2}}x, \left| \beta' \right|^{\frac{5}{2}}y, z)$ to conclude that $\gamma$ is equivalent to either $(t^{3}, t^{5} + t^{7}, 0)$ or $(t^{3},t^{5} - t^{7}, 0)$. Note that the above calculations were done under the assumption that $\beta' \neq 0$. If $\beta' = 0$ then we see, using calculations similar to the above, that we get the normal form $(t^{3}, t^{5}, 0)$. This means that there is a total of $4$ possible normal forms that represent the points within the class $RVV$. It is tempting, at first glance, to believe that these curves are all inequivalent. However, it can be shown that the $3$ curves $(t^{3}, t^{5} + t^{7}, 0)$, $(t^{3},t^{5} - t^{7}, 0)$, $(t^{3}, t^{5}, 0)$ are actually equivalent. It is not very difficult to show this equivalence, but it does amount to a rather messy calculation. As a result, the techniques used to show this equivalence are outlined in section $7.1$ of the appendix. \ This means that the total number of possible normal forms is reduced to $2$ possibilities: $\gamma_{1}(t) = (t^{3}, t^{5}, t^{7})$ and $\gamma_{2}(t) = (t^{3}, t^{5}, 0)$. We will show that these two curves are inequivalent to one another. One possibility is to look at the semigroups that each of these curves generate. The curve $\gamma_{1}$ has the semigroup $S_{1} = \{3,[4], 5, 6, 7, \cdots \}$, while the curve $\gamma_{2}$ has the semigroup $S_{2} = \{3, [4], 5, 6, [7], 8, \cdots \}$.
Since the semigroup of a curve is an invariant of the curve and the two curves generate different semigroups, the two curves must be inequivalent. In \cite{castro} another method was outlined to check whether or not these two curves are equivalent, which we will now present. One can see that the curve $(t^{3}, t^{5}, 0)$ is a planar curve, and in order for the curve $\gamma_{1}$ to be equivalent to the curve $\gamma_{2}$ we must be able to find a way to turn $\gamma_{1}$ into a planar curve, meaning we need to find a change of variables or a reparametrization which will make the $z$-component function of $\gamma_{1}$ zero. If it were true that $\gamma_{1}$ is actually a planar curve, then $\gamma_{1}$ would have to lie in an embedded surface in $\R^{3}$ (or embedded surface germ), say $M$. Since $M$ is an embedded surface, there exists a local defining function at each point on the manifold. Let the local defining function near the origin be the real analytic function $f: \R^{3} \rightarrow \R$. Since $\gamma_{1}$ lies on $M$, we have $f(\gamma_{1}(t)) = 0$ for all $t$ near zero. However, when one looks at the individual terms in the Taylor series expansion of $f$ composed with $\gamma_{1}$, one finds nonzero terms which survive and give $f(\gamma_{1}(t)) \neq 0$ for $t$ near zero, which creates a contradiction. This tells us that $\gamma_{1}$ cannot be equivalent to any planar curve near $t=0$. As a result, there is a total of two inequivalent normal forms for the class $RVV$: $(t^{3}, t^{5}, t^{7})$ and $(t^{3}, t^{5}, 0)$. \ The remaining classes $RVT$ and $RVL$ are handled in an almost identical manner using the above ideas and techniques. As a result, we will omit the proofs and leave them to the reader. \ With this in mind, we are now ready to move on to the fourth level of the tower. We initially tried to tackle the problem of classifying the orbits at the fourth level by using the curve approach from the third level.
Unfortunately, the curve approach becomes too unwieldy to use to determine the normal forms for the various $RVT$ classes. The problem was simply this: for a number of the $RVT$ classes at the fourth level, the semigroup of a representative curve has too many ``gaps''. The first such class to occur, ordered by codimension, was the class $RVVV$. \begin{eg} The semigroups for the class $RVVV$. Let $p_{4} \in RVVV$, and for $\gamma \in Germ(p_{4})$ write $\gamma^{3}(t) = (x(t), y(t), z(t), u(t), v(t), u_{2}(t), v_{2}(t), u_{3}(t), v_{3}(t))$ with $u = \frac{dy}{dx}$, $v = \frac{dz}{dx}$, $u_{2} = \frac{dx}{du}$, $v_{2} = \frac{dv}{du}$, $u_{3} = \frac{du}{du_{2}}$, $v_{3} = \frac{dv_{2}}{du_{2}}$. Since $\gamma^{4}(0) = p_{4}$ we must have that $\gamma^{3}(t)$ is tangent to the vertical hyperplane within $\Delta_{3}$, which is coframed by $\left\{ du_{2}, du_{3}, dv_{3} \right\}$. One can see that $du_{2} = 0$ along $\gamma^{3}(0)'$. Then, just as with the analysis at the third level, we look at $u_{2}(t) = \Sigma_{i} a_{i}t^{i}$, $u_{3}(t) = \Sigma_{j}b_{j}t^{j}$, $v_{3}(t) = \Sigma_{k}c_{k}t^{k}$, where we must have $a_{1} = 0$, $a_{2} \neq 0$, $b_{1} \neq 0$, and $c_{1}$ may or may not be equal to zero. When we go from the fourth level back down to the zeroth level we see that $\gamma(t) = (t^{5} + O(t^{11}), t^{8} + O(t^{11}), O(t^{11}))$. If $c_{1} \neq 0$, then we get that $\gamma(t) = (t^{5} + O(t^{12}), t^{8} + O(t^{12}), t^{11} + O(t^{12}))$ and the semigroup for this curve is $S = \{5, [6], [7], 8, [9], 10, 11, [12], 13, [14], 15, 16, [17], 18 \cdots \}$. If $c_{1} = 0$, then we get that $\gamma(t) = (t^{5} + O(t^{12}), t^{8} + O(t^{12}), O(t^{12}))$ and the semigroup for this curve is $S = \{5, [6], [7], 8, [9], 10, [11], [12], 13, [14], 15, 16, [17], 18, [19], 20, 21, [22], 23 \cdots \}$.
\end{eg} As a result, it became impractical to work strictly with the curve approach, and we had to look at a different approach to the classification problem. This led us to a tool called the isotropy representation. \subsection{The isotropy method.} Here is the general idea of the method. Suppose we want to look at a particular $RVT$ class at the $k$-th level, given by $\omega_k$ (a word of length $k$), and we want to see how many orbits there are. Suppose as well that we understand its projection $\omega_{k-1}$ one level down, which decomposes into $N$ orbits. Choose representative points $p_{i}$, $i = 1, \cdots , N$, for the $N$ orbits in $\omega_{k-1}$, and consider the group $G_{k-1}(p_{i})$ of level $k-1$ symmetries that fix $p_{i}$. This group is called the \textit{isotropy group of} $p_{i}$. Since elements $\Phi^{k-1}$ of the isotropy group fix $p_{i}$, their prolongations $\Phi^{k} = (\Phi^{k-1}, \Phi^{k-1}_{\ast})$ act on the fiber over $p_{i}$. Under the action of the isotropy group the fiber decomposes into some number $n_{i} \geq 1$ (possibly infinite) of orbits. Summing, we find that $\omega_{k}$ decomposes into $\sum_{i = 1}^{N} n_{i} \geq N$ orbits. For the record, $\Phi_*^{\bullet}$ denotes the tangent map. This tells us how many orbits there are for the class $\omega_k$. This is the theory; now we need to explain how one actually prolongs diffeomorphisms in practice. Since the manifold $\mathcal{P}^{k}$ is a type of fiber compactification of $J^k(\mathbb{R},\mathbb{R}^2)$, it is reasonable to expect that the prolongation of diffeomorphisms from the base $\mathbb{R}^3$ should be similar to what one does when prolonging point symmetries in the theory of jet spaces. See especially (\cite{duzhin}, last chapter) and (\cite{olver}, p.~100). Given a point $p_k \in \mathcal{P}^{k}$ and a map $\Phi\in \text{Diff}(3)$ we would like to write explicit formulas for $$\Phi^{k}(p_k).$$ Coordinates of $p_k$ can be made explicit.
Now take any curve $\gamma(t) \in \text{Germ}(p_k)$, and consider the prolongation of $\Phi\circ \gamma(t)$. The coordinates of $\Phi^{k}(p_k)$ are exactly the coordinates of $(\Phi\circ \gamma)^{(k)}(0) = \Phi^{k}(\gamma^{k}(0))$. Moreover, the resulting point is independent of the choice of a regular $\gamma \in \text{Germ}(p)$. \subsection{Proof of theorem \ref{thm:main}} \ {\bf The classification of points at level $4$.} We are now ready to begin with the classification of points at level $4$. We will present the proof for the classification of the class $RVVV$ as an example of how the isotropy representation method works. {\it The class $RVVV$.} Before we get started, we will summarize the main idea of the following calculation to classify the number of orbits within the class $RVVV$. Let $p_{4} \in RVVV \subset \mathcal{P}^{4}$ and start with the projection of $p_{4}$ to level zero, $\pi_{4,0}(p_{4}) = p_{0}$. Since all of the points at level zero are equivalent, one is free to choose any representative for $p_{0}$. For simplicity, it is easiest to choose the point $p_{0} = (0,0,0)$. Next, we look at all of the points at the first level which project to $p_{0}$. Since all of these points are equivalent, there is a single orbit in the first level, and we are again able to choose any point in $\mathcal{P}^{1}$ as our representative so long as it projects to the point $p_{0}$. We will pick $p_{1} = (0,0,0,[1: 0: 0]) = (0,0,0,0,0)$ with $u = \frac{dy}{dx}$ and $v = \frac{dz}{dx}$, and we will look at all of the diffeomorphisms that fix the point $p_{0}$ and satisfy $\Phi_{\ast}([1: 0: 0]) = [1: 0: 0]$. This condition will place some restrictions on the component functions of the local diffeomorphisms $\Phi$ in $Diff_{0}(3)$ when we evaluate at the point $p_{0}$, and will tell us what $\Phi^{1} = (\Phi, \Phi_{\ast})$ looks like at the point $p_{1}$. We call this group of diffeomorphisms $G_{1}$.
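As a concrete illustration of the curve-based prolongation recipe just described, one can check numerically (with a toy diffeomorphism and curves of our own choosing, not taken from the paper) that the prolonged fiber coordinate obtained by pushing a curve forward does not depend on which regular curve through the point is used, and that it agrees with the chain-rule formula for $\tilde{u}$ written out below:

```python
# Toy check of the curve-based prolongation recipe: the coordinates of
# Phi^1(p_1) computed through two different regular curves in Germ(p_1)
# coincide, and match the chain-rule formula for u~.

def d(f, t, h=1e-6):
    """central finite-difference derivative of a scalar function"""
    return (f(t + h) - f(t - h)) / (2 * h)

def Phi(p):  # a sample diffeomorphism germ of R^3 fixing the origin
    x, y, z = p
    return (x + y * y, y + x * x, z + x * y)

def slope_u(curve, t):
    """the fiber coordinate u = dy/dx along a space curve at parameter t"""
    return d(lambda s: curve(s)[1], t) / d(lambda s: curve(s)[0], t)

t0 = 0.1
gamma1 = lambda t: (t, t * t, t ** 3)
# a second curve with the same position and the same (u, v) data at t0
gamma2 = lambda t: (t, t * t + (t - t0) ** 2, t ** 3 + (t - t0) ** 2)

u_img_1 = slope_u(lambda t: Phi(gamma1(t)), t0)
u_img_2 = slope_u(lambda t: Phi(gamma2(t)), t0)

# chain-rule formula u~ = (phi2_x + u phi2_y + v phi2_z)/(phi1_x + u phi1_y + v phi1_z),
# with the partial derivatives of our sample Phi written in by hand
x, y, z = gamma1(t0)
u = slope_u(gamma1, t0)
u_formula = (2 * x + u * 1) / (1 + u * 2 * y)
```

Both curves represent the same first-level point, and both push forward to the same value of $\tilde{u}$, as the recipe predicts.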
We can then move on to the second level and look at the class $RV$. Any $p_{2} \in RV$ is of the form $p_{2} = (p_{1}, \ell_{1})$ with $\ell_{1}$ contained in the vertical hyperplane inside of $\Delta_{1}(p_{1})$. Now, apply the pushforwards of the $\Phi^{1}$'s in $G_{1}$ to the vertical hyperplane and see if these symmetries act transitively on the critical hyperplane. If they do act transitively, then there is a single orbit within the class $RV$. If not, then there exists more than one orbit within the class $RV$. Note that because of Theorem \ref{thm:ak} we should expect to see only one orbit within this class. Once this is done, we can iterate the above process to classify the number of orbits within the class $RVV$ at the third level and then within the class $RVVV$ at the fourth level. {\it Level 0:} Let $G_{0}$ be the group that contains all diffeomorphism germs that fix the origin. {\it Level 1:} We know that all the points in $\mathcal{P}^{1}$ are equivalent, giving that there is only a single orbit. So we pick a representative element from the single orbit of $\mathcal{P}^{1}$. We will take our representative to be $p_{1} = (0,0,0,0,0) = (0,0,0, [1: 0: 0]) = (x,y,z, [dx: dy: dz])$ and take $G_{1}$ to be the set of all $\Phi \in G_{0}$ such that $\Phi^{1}$ takes the tangent direction to the $x$-axis back to the $x$-axis, meaning $\Phi_{\ast}([1: 0: 0]) = [1: 0: 0]$. \ \no Then for $\Phi \in G_{1}$ and $\Phi(x,y,z) = (\phi^{1}, \phi^{2}, \phi^{3})$ we have that \ \[ \Phi_{\ast} = \begin{pmatrix} \phi_{x}^{1} & \phi_{y}^{1} & \phi_{z}^{1} \\ \phi_{x}^{2} & \phi_{y}^{2} & \phi_{z}^{2} \\ \phi_{x}^{3} & \phi_{y}^{3} & \phi_{z}^{3} \\ \end{pmatrix} = \begin{pmatrix} \phi_{x}^{1} & \phi_{y}^{1} & \phi_{z}^{1} \\ 0 & \phi_{y}^{2} & \phi_{z}^{2} \\ 0 & \phi_{y}^{3} & \phi_{z}^{3} \\ \end{pmatrix} \] \ where the second equality holds when we evaluate at $(x,y,z) = (0,0,0)$.
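The zero pattern in the first column of this Jacobian can be checked directly on examples (again, toy maps of our own choosing): the condition $\Phi_{\ast}([1: 0: 0]) = [1: 0: 0]$ holds precisely when the $x$-partials of $\phi^{2}$ and $\phi^{3}$ vanish at the origin.

```python
# Finite-difference check of the Jacobian condition defining G_1.  The two
# sample maps are our own: the first fixes the x-axis direction, the second
# does not.

def jacobian_first_column(Phi, h=1e-6):
    plus, minus = Phi((h, 0.0, 0.0)), Phi((-h, 0.0, 0.0))
    return tuple((a - b) / (2 * h) for a, b in zip(plus, minus))

in_G1 = lambda p: (p[0] + p[1] ** 2, p[1] + p[0] ** 2, p[2] + p[0] * p[1])
not_in_G1 = lambda p: (p[0], p[1] + p[0], p[2])  # phi^2_x(0) = 1 != 0

col_in = jacobian_first_column(in_G1)        # ~ (1, 0, 0): direction preserved
col_out = jacobian_first_column(not_in_G1)   # ~ (1, 1, 0): direction tilted
```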
\ Here is the \textit{Taylor triangle} representing the different coefficients in the Taylor series expansion of a diffeomorphism in $G_{i}$. The three digits represent the number of partial derivatives with respect to $x$, $y$, and $z$ respectively. For example, $(1,2,0) = \frac{\pa^{3}}{\pa x \pa y^{2}}$. The left-hand column denotes the order of the coefficient. We start with the Taylor triangle for $\phi^{2}$: \ \begin{center} \begin{tabular}{rcccccccccc} $n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{ } $n=1$:& & & & $\xcancel{(1,0,0)}$ & (0,1,0) & (0,0,1)\\\noalign{ } $n=2$:& & & (2,0,0) & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{ } \end{tabular} \end{center} \ We have crossed out $(1,0,0)$ since $\frac{\pa \phi^{2}}{\pa x}(\z) = 0$, and $(0,0,0)$ since $\Phi$ fixes the origin. Next is the Taylor triangle for $\phi^{3}$, where $(1,0,0)$ is crossed out because $\frac{\pa \phi^{3}}{\pa x}(\z) = 0$: \ \begin{center} \begin{tabular}{rcccccccccc} $n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{ } $n=1$:& & & & $\xcancel{(1,0,0)}$ & (0,1,0) & (0,0,1)\\\noalign{ } $n=2$:& & & (2,0,0) & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{ } \end{tabular} \end{center} \ This describes the properties of the elements $\Phi \in G_{1}$. We now work out what $\Phi^{1}$, for $\Phi \in G_{1}$, looks like in $KR$-coordinates. First, take $\ell \subset \Delta_{0}$ where we write $\ell = a \frac{\partial}{\partial x} + b \frac{\partial}{\partial y} + c \frac{\partial}{\partial z}$ with $a,b,c \in \R$ and $a \neq 0$.
We see that \begin{eqnarray*} \Phi_{\ast}(\ell)&=&span\left\{ (a \phi_{x}^{1} + b \phi_{y}^{1} + c \phi_{z}^{1})\frac{\pa}{\pa x} + (a \phi_{x}^{2} + b \phi_{y}^{2} + c \phi_{z}^{2})\frac{\pa}{\pa y} + (a \phi_{x}^{3} + b \phi_{y}^{3} + c \phi_{z}^{3})\frac{\pa}{\pa z} \right\} \\ &=&span \left\{ (\phi_{x}^{1} + u \phi_{y}^{1} + v \phi_{z}^{1})\frac{\pa}{\pa x} + (\phi_{x}^{2} + u \phi_{y}^{2} + v \phi_{z}^{2})\frac{\pa}{\pa y} + (\phi_{x}^{3} + u \phi_{y}^{3} + v \phi_{z}^{3})\frac{\pa}{\pa z} \right\} \\ &=&span\left\{ a_{1}\frac{\pa}{\pa x} + a_{2}\frac{\pa}{\pa y} + a_{3}\frac{\pa}{\pa z} \right\} \end{eqnarray*} where in the second step we divided by $a$ and set $u = \frac{b}{a}$ and $v = \frac{c}{a}$. Now, since $\Delta_{1}$ is given by \ $dy - udx = 0$ \ $dz - vdx = 0$ and since $[dx: dy: dz] = [1: \frac{dy}{dx}: \frac{dz}{dx}]$, we have that $\Phi^{1}$, for $\Phi \in G_{1}$, is given locally as $\Phi^{1}(x,y,z,u,v) = (\phi^{1}, \phi^{2}, \phi^{3}, \tilde{u}, \tilde{v})$ where \ $\tilde{u} = \frac{a_{2}}{a_{1}} = \frac{\phi_{x}^{2} + u \phi_{y}^{2} + v \phi_{z}^{2}}{\phi_{x}^{1} + u \phi_{y}^{1} + v \phi_{z}^{1}}$ \ $\tilde{v} = \frac{a_{3}}{a_{1}} = \frac{\phi_{x}^{3} + u \phi_{y}^{3} + v \phi_{z}^{3}}{\phi_{x}^{1} + u \phi_{y}^{1} + v \phi_{z}^{1}}$ {\it Level $2$:} At level $2$ we are looking at the class $RV$, which consists of a single orbit. This means that we can pick any point in the class $RV$ as our representative. We will pick our point to be $p_{2} = (p_{1}, \ell_{1})$ with $\ell_{1} \subset \Delta_{1}(p_{1})$ the vertical line $\ell_{1} =[dx: du: dv] = [0: 1: 0]$. Now, we will let $G_{2}$ be the set of symmetries from $G_{1}$ that fix the vertical line $\ell_{1} = [0: 1: 0]$ in $\Delta_{1}(p_{1})$, meaning we want $\Phi^{1}_{\ast}([0: 1: 0]) = [0: 1: 0]$ for all $\Phi \in G_{2}$.
Then this says that $\Phi^{1}_{\ast}([dx_{| \ell_{1}}: du_{| \ell_{1}}: dv_{| \ell_{1}}]) = \Phi^{1}_{\ast}([0: 1: 0]) = [0: 1: 0] = [d \phi^{1}_{| \ell_{1}}: d \tilde{u}_{| \ell_{1}}: d \tilde{v}_{| \ell_{1}}]$. Fixing this direction might yield some new information about the component functions of the $\Phi$ in $G_{2}$. \ $\bullet$ $d \phi^{1}_{| \ell_{1}} = 0$. \ Since $d \phi^{1} = \phi_{x}^{1} dx + \phi_{y}^{1} dy + \phi_{z}^{1} dz$ and $dx$, $dy$, $dz$ all vanish on $\ell_{1}$, the condition $d \phi^{1}\rest{\ell_{1}} = 0$ is automatic, and we will not gain any new information about the component functions for $\Phi \in G_{2}$. \ $\bullet$ $d \tilde{v}_{| \ell_{1}} = 0$ \ $d \tilde{v} = d(\frac{a_{3}}{a_{1}}) = \frac{da_{3}}{a_{1}} - \frac{(da_{1})a_{3}}{a_{1}^{2}}$ \ First notice that when we evaluate at $(x,y,z,u,v) = (0,0,0,0,0)$ we have $a_{3} = 0$, so setting $d \tilde{v}\rest{\ell_{1}} = 0$ forces $d a_{3} \rest{\ell_{1}}$ to be equal to zero. We calculate that $$da_{3} = \phi_{xx}^{3}dx + \phi_{xy}^{3}dy + \phi_{xz}^{3}dz + \phi_{y}^{3}du + u (d \phi_{y}^{3}) + \phi_{z}^{3}dv + v(d \phi_{z}^{3})$$ and evaluating along $\ell_{1}$ we get $$d a_{3} \rest{\ell_{1}} = \phi^{3}_{y}(\z)\,du \rest{\ell_{1}} = 0 $$ Since $du \rest{\ell_{1}} \neq 0$, this forces $\phi^{3}_{y}(\z) = 0$. \ This gives us the updated Taylor triangle for $\phi^{3}$: \ \begin{center} \begin{tabular}{rcccccccccc} $n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{ } $n=1$:& & & & $\xcancel{(1,0,0)}$ & \xcancel{(0,1,0)} & (0,0,1)\\\noalign{ } $n=2$:& & & (2,0,0) & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{ } \end{tabular} \end{center} \ We have determined some of the properties of the elements in $G_{2}$, and now we will see what these elements look like locally.
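Before writing out the local form, the constraint just obtained can be sanity-checked numerically using the formula for $\tilde{v}$ from Level $1$. The two sample maps below are our own; both lie in $G_{1}$, but only the second has $\phi^{3}_{y}(\z) = 0$, and only it leaves the vertical line $[0: 1: 0]$ fixed:

```python
# Check that fixing the vertical line [0:1:0] forces phi^3_y(0) = 0: we
# differentiate the Level-1 formula
#   v~ = (phi3_x + u phi3_y + v phi3_z) / (phi1_x + u phi1_y + v phi1_z)
# in the u-direction at p_1 = (0,0,0,0,0), for two sample maps of our own.

def partials(f, p, h=1e-6):
    """numerical gradient of a scalar function of (x, y, z) at p"""
    grad = []
    for i in range(3):
        q_plus, q_minus = list(p), list(p)
        q_plus[i] += h
        q_minus[i] -= h
        grad.append((f(tuple(q_plus)) - f(tuple(q_minus))) / (2 * h))
    return grad

def vtilde(Phi, x, y, z, u, v):
    g1 = partials(lambda p: Phi(p)[0], (x, y, z))
    g3 = partials(lambda p: Phi(p)[2], (x, y, z))
    return (g3[0] + u * g3[1] + v * g3[2]) / (g1[0] + u * g1[1] + v * g1[2])

Phi_bad = lambda p: (p[0], p[1], p[2] + p[1])        # phi^3_y(0) = 1
Phi_good = lambda p: (p[0], p[1], p[2] + p[1] ** 2)  # phi^3_y(0) = 0

h = 1e-6
dv_du_bad = (vtilde(Phi_bad, 0, 0, 0, h, 0) - vtilde(Phi_bad, 0, 0, 0, -h, 0)) / (2 * h)
dv_du_good = (vtilde(Phi_good, 0, 0, 0, h, 0) - vtilde(Phi_good, 0, 0, 0, -h, 0)) / (2 * h)
```

For `Phi_bad` the pushforward of the vertical direction picks up a nonzero $dv$-component, so only `Phi_good` prolongs to a symmetry in $G_{2}$, matching the conclusion $\phi^{3}_{y}(\z) = 0$ above.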
We look at $\Phi^{1}_{\ast}(\ell)$ for $\ell \subset \Delta_{1}$ near the vertical hyperplane in $\Delta_{1}$, where $\ell$ is of the form $\ell = a Z^{1} + b \frac{\partial}{\partial u} + c \frac{\partial}{\partial v}$ with $a,b,c \in \R$ and $b \neq 0$, and $Z^{1} = u \frac{\partial}{\partial y} + v \frac{\partial}{\partial z} + \frac{\partial}{\partial x}$. This gives that \ \[ \Phi^{1}_{\ast}(\ell) = \begin{pmatrix} \phi_{x}^{1} & \phi_{y}^{1} & \phi_{z}^{1} & 0 & 0 \\ \phi_{x}^{2} & \phi_{y}^{2} & \phi_{z}^{2} & 0 & 0 \\ \phi_{x}^{3} & \phi_{y}^{3} & \phi_{z}^{3} & 0 & 0 \\ \frac{\partial \tilde{u}}{\partial x} & \frac{\partial \tilde{u}}{\partial y} & \frac{\partial \tilde{u}}{\partial z} & \frac{\partial \tilde{u}}{\partial u} & \frac{\partial \tilde{u}}{\partial v} \\ \frac{\partial \tilde{v}}{\partial x} & \frac{\partial \tilde{v}}{\partial y} & \frac{\partial \tilde{v}}{\partial z} & \frac{\partial \tilde{v}}{\partial u} & \frac{\partial \tilde{v}}{\partial v} \end{pmatrix} \begin{pmatrix} a \\ au \\ av \\ b \\ c \end{pmatrix} \] \ \begin{eqnarray*} = & & span \{ (a \phi_{x}^{1} + au \phi_{y}^{1} + av \phi_{z}^{1})\frac{\pa}{\pa x} \\ & + & (a \frac{\partial \tilde{u}}{\partial x} + a u \frac{\partial \tilde{u}}{\partial y} + a v \frac{\partial \tilde{u}}{\partial z} + b \frac{\partial \tilde{u}}{\partial u} + c \frac{\partial \tilde{u}}{\partial v})\frac{\pa}{\pa u} \\ & + & (a \frac{\partial \tilde{v}}{\partial x} + a u \frac{\partial \tilde{v}}{\partial y} + a v \frac{\partial \tilde{v}}{\partial z} + b \frac{\partial \tilde{v}}{\partial u} + c \frac{\partial \tilde{v}}{\pa v})\frac{\pa}{\pa v} \} \end{eqnarray*} \begin{eqnarray*} = & & span \{ (u_{2} \phi_{x}^{1} + uu_{2} \phi_{y}^{1} + vu_{2} \phi_{z}^{1})\frac{\pa}{\pa x} \\ & + & ( u_{2} \frac{\partial \tilde{u}}{\partial x} + u u_{2} \frac{\partial \tilde{u}}{\partial y} + v u_{2} \frac{\partial \tilde{u}}{\partial z} + \frac{\partial \tilde{u}}{\partial u} + v_{2} \frac{\partial \tilde{u}}{\partial 
v})\frac{\pa}{\pa u} \\ & + & ( u_{2} \frac{\partial \tilde{v}}{\partial x} + u u_{2} \frac{\partial \tilde{v}}{\partial y} + vu_{2} \frac{\partial \tilde{v}}{\partial z} + \frac{\partial \tilde{v}}{\partial u} + v_{2} \frac{\partial \tilde{v}}{\partial v})\frac{\pa}{\pa v} \} \end{eqnarray*} $= span \{ b_{1}\frac{\pa}{\pa x} + b_{2}\frac{\pa}{\pa u} + b_{3}\frac{\pa}{\pa v} \}$. Notice that we have only paid attention to the $x$, $u$, and $v$ coordinates since $\Delta_{1}^{\ast}$ is framed by $dx$, $du$, and $dv$. Since $u_{2} = \frac{dx}{du}$ and $v_{2} = \frac{dv}{du}$ we get that \ $\tilde{u}_{2} = \frac{b_{1}}{b_{2}} = \frac{u_{2} \phi_{x}^{1} + uu_{2} \phi_{y}^{1} + vu_{2} \phi_{z}^{1}}{u_{2} \frac{\partial \tilde{u}}{\partial x} + u u_{2} \frac{\partial \tilde{u}} {\partial y} + v u_{2} \frac{\partial \tilde{u}}{\partial z} + \frac{\partial \tilde{u}}{\partial u} + v_{2} \frac{\partial \tilde{u}}{\partial v}}$ \ $\tilde{v}_{2} = \frac{b_{3}}{b_{2}} = \frac{u_{2} \frac{\partial \tilde{v}}{\partial x} + u u_{2} \frac{\partial \tilde{v}}{\partial y} + vu_{2} \frac{\partial \tilde{v}}{\partial z} + \frac{\partial \tilde{v}}{\partial u} + v_{2} \frac{\partial \tilde{v}}{\partial v}} {u_{2} \frac{\partial \tilde{u}}{\partial x} + u u_{2} \frac{\partial \tilde{u}} {\partial y} + v u_{2} \frac{\partial \tilde{u}}{\partial z} + \frac{\partial \tilde{u}}{\partial u} + v_{2} \frac{\partial \tilde{u}}{\partial v}}$ \ This tells us what the new component functions $\tilde{u}_{2}$ and $\tilde{v}_{2}$ are for $\Phi^{2}$. {\it Level $3$:} At level $3$ we are looking at the class $RVV$. We know from our work on the third level that there will be only one orbit within this class. This means that we can pick any point in the class $RVV$ as our representative. We will pick the point to be $p_{3} = (p_{2}, \ell_{2})$ with $\ell_{2} \subset \Delta_{2}$ the vertical line $\ell_{2} =[du: du_{2}: dv_{2}] = [0: 1: 0]$.
Now, we will let $G_{3}$ be the set of symmetries from $G_{2}$ that fix the vertical line $\ell_{2} = [0: 1: 0]$ in $\Delta_{2}$, meaning we want $\Phi^{2}_{\ast}([0: 1: 0]) = [0: 1: 0] = [ d \tilde{u}\rest{\ell_{3}}: d \tilde{u}_{2}\rest{\ell_{3}}: d \tilde{v}_{2}\rest{\ell_{3}} ]$ for all $\Phi \in G_{3}$. Since we are taking $du\rest{\ell_{3}} = 0$ and $dv_{2}\rest{ \ell_{3}} = 0$, with $du_{2}\rest{\ell_{3}} \neq 0$, we need to look at $d \tilde{u}\rest{\ell_{3}} = 0$ and $d \tilde{v}_{2}\rest{\ell_{3}} = 0$ to see if these relations will give us more information about the component functions of $\Phi$. \ $\bullet$ $d \tilde{u}\rest{\ell_{3}} = 0$. \ $d \tilde{u} = d(\frac{a_{2}}{a_{1}}) = \frac{da_{2}}{a_{1}} - \frac{a_{2} da_{1}}{a_{1}^{2}}$ and we can see that $a_{2}(p_{2}) = 0$ and that \ $da_{2}\rest{\ell _{3}} = \phi_{xx}^{2}dx\rest{\ell_{3}} + \phi_{xy}^{2} dy \rest{ \ell_{3}} + \phi_{xz}^{2} dz\rest{ \ell_{3}} + \phi_{y}^{2} du \rest{\ell_{3}} + \phi_{z}^{2} dv\rest{\ell_{3}} = 0$. Since all of these differentials are equal to zero when restricted to the line $\ell_{3}$, we will not gain any new information about the $\phi^{i}$'s. \ $\bullet$ $d \tilde{v}_{2}\rest{\ell_{3}} = 0$ \ $d \tilde{v}_{2} = d(\frac{b_{3}}{b_{2}}) = \frac{db_{3}}{b_{2}} - \frac{b_{3} db_{2}}{b_{2}^{2}}$. When we evaluate we see that $b_{3}(p_{2}) = 0$ since $\frac{\pa \tilde{v}}{\pa u}(p_{2}) = \phi^{3}_{y}(\z) = 0$, which means that we only need to look at $\frac{db_{3}}{b_{2}}$.
\begin{eqnarray*} db_{3} & = & d(u_{2} \frac{\pa \tilde{v}}{\pa x} + u u_{2} \frac{\pa \tilde{v}}{\pa y} + v u_{2} \frac{\pa \tilde{v}}{\pa z} + \frac{\pa \tilde{v}}{\pa u} + v_{2} \frac{\pa \tilde{v}}{\pa v}) \\ & = & \frac{\pa \tilde{v}}{\pa x} du_{2} + u_{2}(d \frac{\pa \tilde{v}}{\pa x}) + u \frac{\pa \tilde{v}}{\pa y} du_{2} \\ & + & u_{2}\frac{\pa \tilde{v}}{\pa y} du + u_{2} u (d \frac{\pa \tilde{v}}{\pa y}) + v \frac{\pa \tilde{v}}{\pa z} du_{2} \\ & + & u_{2} \frac{\pa \tilde{v}}{\pa z} dv + u_{2}v(d\frac{\pa \tilde{v}}{\pa z}) + \frac{\pa \tilde{v}}{\pa u \pa x} dx \\ & + & \frac{\pa \tilde{v}}{\pa u \pa y} dy + \frac{\pa \tilde{v}}{\pa u \pa z} dz + \frac{\pa \tilde{v}}{\pa v}dv_{2} + v_{2}(d \frac{\pa \tilde{v}}{\pa v}). \end{eqnarray*} \ Then evaluating we get \ $db_{3} \rest{\ell_{3}} = \frac{\pa \tilde{v}}{\pa x}(p_{3})du_{2} \rest{\ell_{3}} = 0$, and since $du_{2} \rest{\ell_{3}} \neq 0$ this forces $\frac{\pa \tilde{v}}{\pa x}(p_{3}) = 0$. We have $\frac{\pa \tilde{v}}{\pa x}(p_{3}) = \frac{\phi^{3}_{xx}(\z)}{\phi^{1}_{x}(\z)} - \frac{\phi^{1}_{xx}(\z) \phi^{3}_{x}(\z)}{(\phi^{1}_{x}(\z))^{2}}$, and since $\phi^{3}_{x}(\z) = 0$ we get that $\frac{\pa \tilde{v}}{\pa x}(p_{3}) = \frac{\phi^{3}_{xx}(\z)}{\phi^{1}_{x}(\z)} = 0$, which forces $\phi^{3}_{xx}(\z) = 0$ and gives us information about $\Phi^{3}$. This gives us the updated Taylor triangle for $\phi^{3}$: \begin{center} \begin{tabular}{rcccccccccc} $n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{ } $n=1$:& & & & $\xcancel{(1,0,0)}$ & \xcancel{(0,1,0)} & (0,0,1)\\\noalign{ } $n=2$:& & & \xcancel{(2,0,0)} & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{ } \end{tabular} \end{center} \ Now, our goal is to look at how $\Phi^{3}_{\ast}$ acts on the distribution $\Delta_{3}(p_{3})$ to determine the number of orbits within the class $RVVV$. In order to do so we will need to figure out what the local component functions, call them $\tilde{u}_{3}$ and $\tilde{v}_{3}$, for $\Phi^{3}$, where $\Phi \in G_{3}$, will look like.
To do this we will again look at $\Phi^{2}_{\ast}$ applied to a line $\ell$ that is near the vertical hyperplane in $\Delta_{2}$. \ Set $\ell = aZ^{(2)}_{1} + b \frac{\pa}{\pa u_{2}} + c \frac{\pa}{\pa v_{2}}$ for $a,b,c \in \R$ and $b \neq 0$ where $Z^{(2)}_{1} = u_{2}(u \frac{\pa}{\pa y} + v \frac{\pa}{\pa z} + \frac{\pa}{\pa x}) + \frac{\pa}{\pa u} + v_{2} \frac{\pa}{\pa v}$. This gives that \ \[ \Phi^{2}_{\ast}(\ell) = \begin{pmatrix} \phi^{1}_{x} & \phi^{1}_{y} & \phi^{1}_{z} & 0 & 0 & 0 & 0 \\ \phi^{2}_{x} & \phi^{2}_{y} & \phi^{2}_{z} & 0 & 0 & 0 & 0 \\ \phi^{3}_{x} & \phi^{3}_{y} & \phi^{3}_{z} & 0 & 0 & 0 & 0 \\ \frac{\pa \tilde{u}}{\pa x} & \frac{\pa \tilde{u}}{\pa y} & \frac{\pa \tilde{u}}{\pa z} & \frac{\pa \tilde{u}}{\pa u} & \frac{\pa \tilde{u}}{\pa v} & 0 & 0 \\ \frac{\pa \tilde{v}}{\pa x} & \frac{\pa \tilde{v}}{\pa y} & \frac{\pa \tilde{v}}{\pa z} & \frac{\pa \tilde{v}}{\pa u} & \frac{\pa \tilde{v}}{\pa v} & 0 & 0 \\ \frac{\pa \tilde{u}_{2}}{\pa x} & \frac{\pa \tilde{u}_{2}}{\pa y} & \frac{\pa \tilde{u}_{2}}{\pa z} & \frac{\pa \tilde{u}_{2}}{\pa u} & \frac{\pa \tilde{u}_{2}}{\pa v} & \frac{\pa \tilde{u}_{2}}{\pa u_{2}} & \frac{\pa \tilde{u}_{2}}{\pa v_{2}} \\ \frac{\pa \tilde{v}_{2}}{\pa x} & \frac{\pa \tilde{v}_{2}}{\pa y} & \frac{\pa \tilde{v}_{2}}{\pa z} & \frac{\pa \tilde{v}_{2}}{\pa u} & \frac{\pa \tilde{v}_{2}}{\pa v} & \frac{\pa \tilde{v}_{2}}{\pa u_{2}} & \frac{\pa \tilde{v}_{2}}{\pa v_{2}} \end{pmatrix} \begin{pmatrix} au_{2} \\ a u u_{2} \\ a v u_{2} \\ a \\ a v_{2} \\ b \\ c \end{pmatrix} \] \ \begin{eqnarray*} & = & span\{ (a u_{2} \frac{\pa \tilde{u}}{\pa x} + a u u_{2} \frac{\pa \tilde{u}}{\pa y} + avu_{2} \frac{\pa \tilde{u}}{\pa z} + a \frac{\pa \tilde{u}}{\pa u} + av_{2} \frac{\pa \tilde{u}}{\pa v})\frac{\pa}{\pa u} \\ & + & (au_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + auu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + avu_{2} \frac{\pa \tilde{u}_{2}}{\pa z} + a \frac{\pa \tilde{u}_{2}}{\pa u} + av_{2} \frac{\pa \tilde{u}_{2}}{\pa v} + b 
\frac{\pa \tilde{u}_{2}}{\pa u_{2}} + c \frac{\pa \tilde{u}_{2}}{\pa v_{2}})\frac{\pa}{\pa u_{2}} \\ & + & (au_{2} \frac{\pa \tilde{v}_{2}}{\pa x} + auu_{2} \frac{\pa \tilde{v}_{2}}{\pa y} + avu_{2} \frac{\pa \tilde{v}_{2}}{\pa z} + a \frac{\pa \tilde{v}_{2}}{\pa u} + av_{2} \frac{\pa \tilde{v}_{2}}{\pa v} + b \frac{\pa \tilde{v}_{2}}{\pa u_{2}} + c \frac{\pa \tilde{v}_{2}}{\pa v_{2}})\frac{\pa}{\pa v_{2}} \} \end{eqnarray*} \begin{eqnarray*} & = & span\{ (u_{3} u_{2} \frac{\pa \tilde{u}}{\pa x} + u_{3} u u_{2} \frac{\pa \tilde{u}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}}{\pa z} + u_{3} \frac{\pa \tilde{u}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}}{\pa v})\frac{\pa}{\pa u} \\ & + & (u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}_{2}}{\pa z} + u_{3} \frac{\pa \tilde{u}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}_{2}}{\pa v} + \frac{\pa \tilde{u}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{u}_{2}}{\pa v_{2}})\frac{\pa}{\pa u_{2}} \\ & + & (u_{3}u_{2} \frac{\pa \tilde{v}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{v}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{v}_{2}}{\pa z} + u_{3} \frac{\pa \tilde{v}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{v}_{2}}{\pa v} + \frac{\pa \tilde{v}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{v}_{2}}{\pa v_{2}})\frac{\pa}{\pa v_{2}} \} \end{eqnarray*} $= span\{ c_{1}\frac{\pa}{\pa u} + c_{2}\frac{\pa}{\pa u_{2}} + c_{3}\frac{\pa}{\pa v_{2}} \}$. Since our local coordinates are given by $[du: du_{2}: dv_{2}] = [\frac{du}{du_{2}}: 1: \frac{dv_{2}}{du_{2}}] = [u_{3}: 1: v_{3}]$, we find that \ $\tilde{u}_{3} = \frac{c_{1}}{c_{2}} = \frac{u_{3} u_{2} \frac{\pa \tilde{u}}{\pa x} + u_{3} u u_{2} \frac{\pa \tilde{u}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}}{\pa z} + u_{3} \frac{\pa \tilde{u}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}}{\pa v}} {u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}_{2}}{\pa z} + 
u_{3} \frac{\pa \tilde{u}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}_{2}}{\pa v} + \frac{\pa \tilde{u}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{u}_{2}}{\pa v_{2}}} $ \ $\tilde{v}_{3} = \frac{c_{3}}{c_{2}} = \frac{u_{3}u_{2} \frac{\pa \tilde{v}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{v}_{2}}{\pa y} + u_{3}vu_{2} \ \frac{\pa \tilde{v}_{2}}{\pa z} + u_{3} \frac{\pa \tilde{v}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{v}_{2}}{\pa v} + \frac{\pa \tilde{v}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{v}_{2}}{\pa v_{2}}} {u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}_{2}}{\pa z} + u_{3} \frac{\pa \tilde{u}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}_{2}}{\pa v} + \frac{\pa \tilde{u}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{u}_{2}}{\pa v_{2}}}$. {\it Level $4$:} Now that we know what the component functions are for $\Phi^{3}$, with $\Phi \in G_{3}$, we are ready to apply its pushforward to the distribution $\Delta_{3}$ at $p_{3}$ and figure out how many orbits there are for the class $RVVV$. We let $\ell = b \frac{\pa}{\pa u_{3}} + c \frac{\pa}{\pa v_{3}}$, with $b,c \in \R$, be a vector in the vertical hyperplane of $\Delta_{3}(p_{3})$ and we see that $$\Phi^{3}_{\ast}(\ell) = span \{ (b \frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{3}) + c \frac{\pa \tilde{u}_{3}}{\pa v_{3}}(p_{3}))\frac{\pa}{\pa u_{3}} + (b \frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{3}) + c \frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{3}))\frac{\pa}{\pa v_{3}} \} .$$ This means that we need to see what $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{u}_{3}}{\pa v_{3}}$, $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}$ are when we evaluate at $p_{3} = (x,y,z,u,v,u_{2},v_{2},u_{3},v_{3}) = (0,0,0,0,0,0,0,0,0)$. This will amount to a somewhat long process, so we will just first state what the above terms evaluate to and leave the computations for the appendix. 
After evaluating we will see that $\Phi^{3}_{\ast}(\ell) = span \{ (b \frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}})\frac{\pa}{\pa u_{3}} + (c \frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}})\frac{\pa}{\pa v_{3}} \}$. This means that for $\ell = \frac{\pa}{\pa u_{3}}$ we get $\Phi^{3}_{\ast}(\ell) = span \{\frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}}\frac{\pa}{\pa u_{3}} \}$, to give one orbit. Then, for $\ell = \frac{\pa}{\pa u_{3}} + \frac{\pa}{\pa v_{3}}$ we see that $\Phi^{3}_{\ast}(\ell) = span \{(\frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}})\frac{\pa}{\pa u_{3}} + (\frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}})\frac{\pa}{\pa v_{3}} \}$; notice that $\phi^{1}_{x}(\z) \neq 0$, $\phi^{2}_{y}(\z) \neq 0$, and $\phi^{3}_{z}(\z) \neq 0$, but these values can otherwise be chosen freely, so we can obtain any vector of the form $b' \frac{\pa}{\pa u_{3}} + c' \frac{\pa}{\pa v_{3}}$ with $b', c' \neq 0$, to give another, separate, orbit. (Recall that in order for $\ell$ to be a vertical direction, in this case, it must be of the form $\ell = b \frac{\pa}{\pa u_{3}} + c \frac{\pa}{\pa v_{3}}$ with $b \neq 0$.) This means that there are a total of $2$ orbits for the class $RVVV$, as seen in Figure \ref{fig:orbits}. \begin{figure} \caption{Orbits within the class $RVVV$.} \label{fig:orbits} \end{figure} The classification of the other $RVT$ classes at level $4$ is done in a very similar manner. The details of these other calculations will be given in a subsequent work by one of the authors. \ \section{Conclusion} We have exhibited above a canonical procedure for lifting the action of $Diff(3)$ to the fibered manifold $\mathcal{P}^k(2)$, and by a mix of the singularity theory of space curves and the representation theory of diffeomorphism groups we were able to completely classify the orbits of this extended action for small values of $k$.
A cursory glance at our computational methods will convince the reader that these results can nominally be extended to higher values of $k$, say $k\geq 5$, but given the exponential increase in computational effort the direct approach becomes somewhat unwieldy. Progress has nevertheless been made toward extending the classification results of the present paper, and we hope to release these findings in the near future. In \cite{castro} we already called attention to the lack of discrete invariants to assist with the classification problem; we hope to return to this problem in future publications. \ It came to our attention recently that Respondek and Li \cite{respondek} constructed a mechanical system consisting of $k$ rigid bars moving in $\mathbb{R}^{n+1}$ subject to a nonholonomic constraint, which is equivalent to the Cartan distribution of $J^k(\mathbb{R},\mathbb{R}^{n})$ at regular configuration points. We conjecture that the singular configurations of the $k$-bar system will be related to singular Goursat multiflags similar to those presented here, though in Respondek and Li's case the configuration manifold is a tower of $S^n$ fibrations instead of our $\mathbb{P}^n$ tower. Another research avenue, one that to our knowledge has been little explored, is that of understanding how these results could be applied to the geometric theory of differential equations. Let us remind the reader that the spaces $\mathcal{P}^k(2)$, or more generally $\mathcal{P}^k(n)$, are vertical compactifications of the jet spaces $J^{k}(\mathbb{R},\mathbb{R}^2)$ and $J^k(\mathbb{R},\mathbb{R}^n)$ respectively. Kumpera and collaborators \cite{kumpera2} have used the geometric theory to study the problem of underdetermined systems of ordinary differential equations, but it remains to be explored how our singular orbits can be used to make qualitative statements about the behavior of solutions to singular differential equations.
\section{Appendix} \subsection{A technique to eliminate terms in the short parameterization of a curve germ.} \ The following technique is outlined in \cite{zariski} on p.~$23$. Let $C$ be a parameterization of a planar curve germ. A \textit{short parameterization} of $C$ is of the form $$C = \begin{cases} x = t^{n} & \\ y = t^{m} + bt^{\nu_{\rho}} + \sum^{q}_{i = \rho + 1} a_{\nu_{i}}t^{\nu_{i}} &\mbox{$b \neq 0$ if $\rho \neq q + 1$} \end{cases}$$ \ \noindent where the $\nu_{i}$, for $i = \rho, \dots , q$, are positive integers such that $\nu_{\rho} < \cdots < \nu_{q}$ and they do not belong to the semigroup of the curve $C$. Suppose that $\nu_{\rho} + n \in n \Z_{+} + m \Z_{+}$. Now, notice that $\nu_{\rho} + n \in m \Z_{+}$ because $\nu_{\rho}$ is not in the semigroup of $C$. Let $j \in \Z_{+}$ be such that $\nu_{\rho} + n = (j + 1)m$; notice that $j \geq 1$ since $\nu_{\rho} > m$. Then set $a = \frac{bn}{m}$ and \\ $x' = t^{n} + at^{jm} + $ (terms of degree $> jm$). Let $\tau^{n} = t^{n} + at^{jm} + $ (terms of degree $> jm$). From this expression one can show that $t = \tau - \frac{a}{n} \tau^{jm - n + 1} + $ (terms of degree $> jm - n + 1$), and when we substitute this into the original expression above for $C$ we obtain $$C = \begin{cases} x' = \tau^{n} \\ y = \tau^{m} + \mbox{terms of degree $> \nu_{\rho}$} \end{cases}$$ \ We can now apply semigroup arguments to the above expression for $C$ and see that $C$ has the parametrization $$C = \begin{cases} x' = \tau^{n} & \\ y' = \tau^{m} + \sum^{q}_{i = \rho + 1} a'_{\nu_{i}} \tau^{\nu_{i}} \end{cases}$$ \ We can apply the above technique to the two curves $(t^{3}, t^{5} + t^{7}, 0)$ and $(t^{3}, t^{5} - t^{7}, 0)$ to see that both are equivalent to the curve $(t^{3}, t^{5}, 0)$.
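For the curve $(t^{3}, t^{5} + t^{7}, 0)$ we have $n=3$, $m=5$, $\nu_{\rho}=7$, so $j=1$ and $a = \frac{bn}{m} = \frac{3}{5}$. The cancellation of the $t^{7}$ term can be verified symbolically; the sketch below (the truncation helper is ours, not from \cite{zariski}) solves for the reversion $t(\tau)$ and checks that the $\tau^{7}$ coefficient of $y$ vanishes:

```python
import sympy as sp

tau, c3, c5 = sp.symbols('tau c3 c5')

def trunc(expr, deg):
    # keep only the powers of tau up to degree deg
    e = sp.expand(expr)
    return sum(e.coeff(tau, k) * tau ** k for k in range(deg + 1))

# ansatz for the reversion t = t(tau), through the order we need
t = tau + c3 * tau ** 3 + c5 * tau ** 5

# impose x' = t^3 + (3/5) t^5 = tau^3 through degree 7
constraint = trunc(t ** 3 + sp.Rational(3, 5) * t ** 5 - tau ** 3, 7)
sol = sp.solve([constraint.coeff(tau, 5), constraint.coeff(tau, 7)],
               [c3, c5], dict=True)[0]
t = t.subs(sol)  # t = tau - tau^3/5 + ..., matching t = tau - (a/n) tau^{jm-n+1} + ...

# the y-component of the curve in the new parameter
y = trunc(t ** 5 + t ** 7, 7)
print(y)  # tau**5: the t^7 term has been eliminated
```

The same computation with $y = t^{5} - t^{7}$ (so $b = -1$ and $a = -\frac{3}{5}$) removes the $-t^{7}$ term in the same way.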
\subsection{Computations for the class $RVVV$.} \ We will now provide the details that show what the functions $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{u}_{3}}{\pa v_{3}}$, $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}$ are when we evaluate at $p_{4} = (x,y,z,u,v,u_{2},v_{2},u_{3},v_{3}) = (0,0,0,0,0,0,0,0,0)$. $\bullet$ $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}$ \ Recall that $\tilde{u}_{3} = \frac{c_{1}}{c_{2}}$. Then $\frac{\pa \tilde{u}_{3}}{\pa u_{3}} = \frac{u_{2} \frac{\pa \tilde{u}}{\pa x} + uu_{2} \frac{\pa \tilde{u}}{\pa y} + vu_{2} \frac{\pa \tilde{u}}{\pa z} + \frac{\pa \tilde{u}}{\pa u} + v_{2} \frac{\pa \tilde{u}}{\pa v}}{c_{2}} - \frac{\frac{\pa c_{2}}{\pa u_{3}} c_{1}}{c^{2}_{2}}$ and \ $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{4}) = \frac{ \frac{\pa \tilde{u}}{\pa u}(p_{4}) }{ \frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4})}$, since $c_{1}(p_{4}) = 0$. We recall that $\frac{\pa \tilde{u}}{\pa u}(p_{4}) = \frac{\phi^{2}_{y}(\z)}{\phi^{1}_{x}(\z)}$ and $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) = \frac{\phi^{1}_{x}(\z)}{\frac{\pa \tilde{u}}{\pa u}(p_{4})}$, to give that \ $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{4}) = \frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}}$. \ $\bullet$ $\frac{\pa \tilde{u}_{3}}{\pa v_{3}}$. \ We have that $\tilde{u}_{3} = \frac{c_{1}}{c_{2}}$; then \ $\frac{\pa \tilde{u}_{3}}{\pa v_{3}}(p_{4}) = \frac{\frac{\pa c_{1}}{\pa v_{3}}(p_{4})}{c_{2}(p_{4})} - \frac{\frac{\pa c_{2}}{\pa v_{3}}(p_{4}) c_{1}(p_{4})}{c^{2}_{2}(p_{4})} = 0$, since $c_{1}$ is not a function of $v_{3}$ and $c_{1}(p_{4}) = 0$. \ $\bullet$ $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}$ \ We have that $\tilde{v}_{3} = \frac{c_{3}}{c_{2}}$; then \ $\frac{\pa \tilde{v}_{3}}{\pa u_{3}} = \frac{u_{2}\frac{\pa \tilde{v}_{2}}{\pa x} + ... + \frac{\pa \tilde{v}_{2}}{\pa u} + v_{2} \frac{\pa \tilde{v}_{2}}{\pa v}}{c_{2}} - \frac{(u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + ... + \frac{\pa \tilde{u}_{2}}{\pa u} + v_{2} \frac{\pa \tilde{u}_{2}}{\pa v})c_{3}}{c^{2}_{2}}$ \ $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) = \frac{ \frac{\pa \tilde{v}_{2}}{\pa u} (p_{4})}{ \frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) } - \frac{ \frac{\pa \tilde{u}_{2}}{\pa u}(p_{4}) \frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4}) } { (\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}))^{2} }$ \ This means that we will need to figure out what $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$, $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}$, and $\frac{\pa \tilde{v}_{2}}{\pa u}$ are when we evaluate at $p_{4}$. \ $\circ$ $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$ \ We recall from our work at the previous level that $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) = \frac{\phi^{1}_{x}(\z)}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} = \frac{(\phi^{1}_{x}(\z))^{2}}{\phi^{2}_{y}(\z)}$ since $\frac{\pa \tilde{u}}{\pa u}(p_{4}) = \frac{\phi^{2}_{y}(\z) }{\phi^{1}_{x}(\z)}$. \ $\circ$ $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}$ \ We recall from our work at level $3$ that $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4}) = \frac{\frac{\pa \tilde{v}}{\pa x}(p_{4})}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} = 0$ since $\frac{\pa \tilde{v}}{\pa x}(p_{4}) = \frac{\phi^{3}_{xx}(\z)}{\phi^{1}_{x}(\z)}$ and we have that $\phi^{3}_{xx}(\z) = 0$, to give $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4}) = 0$. This gives the reduced expression $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) = \frac{\frac{\pa \tilde{v}_{2}}{\pa u}(p_{4})}{\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4})}$. \ $\circ$ $\frac{\pa \tilde{v}_{2}}{\pa u}$ \ Recall that $\tilde{v}_{2} = \frac{b_{3}}{b_{2}}$; then we get that \ $\frac{\pa \tilde{v}_{2}}{\pa u} = \frac{ u_{2} \frac{\pa \tilde{v}}{\pa x \pa u} + u_{2} \frac{\pa \tilde{v}}{\pa y} + ... + \frac{\pa \tilde{v}}{\pa^{2} u} + v_{2} \frac{\pa \tilde{v}}{\pa v \pa u}}{b_{2}} - \frac{(u_{2} \frac{\pa \tilde{u}}{\pa x \pa u} + ... 
+ \frac{\pa \tilde{u}}{\pa^{2} u} + v_{2} \frac{\pa \tilde{u}}{\pa v \pa u})b_{3}}{b_{2}^{2}}$ \ $\frac{\pa \tilde{v}_{2}}{\pa u}(p_{4}) = \frac{\frac{\pa \tilde{v}}{\pa^{2} u}(p_{4})}{ \frac{\pa \tilde{u}}{\pa u}(p_{4})} - \frac{\frac{\pa \tilde{u}}{\pa^{2} u}(p_{4}) \frac{\pa \tilde{v})}{\pa u}(p_{4})}{ (\frac{\pa \tilde{u}}{\pa u}(p_{4}))^{2}}$ since $b_{2}(p_{4}) = \frac{\pa \tilde{u}}{\pa u}(p_{4})$ and $b_{3}(p_{4}) = \frac{\pa \tilde{v}}{\pa u}(p_{4})$. In order to figure out $\frac{\pa \tilde{v}_{2}}{\pa u}(p_{4})$ will be we need to look at $\frac{\pa \tilde{v}}{\pa u}(p_{4})$, $\frac{\pa \tilde{v}}{\pa^{2} u}(p_{4})$, and $\frac{\pa \tilde{u}}{\pa^{2} u}(p_{4})$. \ $\circ$ $\frac{\pa \tilde{v}}{\pa u}$ \ Recall that $\tilde{v} = \frac{a_{3}}{a_{1}}$ and that $\frac{\pa \tilde{v}}{\pa u} = \frac{\phi^{3}_{y}}{a_{1}} - \frac{\phi^{1}_{y} a_{3}}{a^{2}_{1}}$, then \ $\frac{\pa \tilde{v}}{\pa u}(p_{4}) = \frac{\phi^{3}_{y}(\z)}{\phi^{1}_{x}(\z)} - \frac{ \phi^{1}_{y}(\z) \phi^{3}_{x}(\z)}{(\phi^{1}_{x}(\z))^{2}} = 0$ since $\phi^{3}_{y}(\z) = 0$ and $\phi^{3}_{x}(\z) = 0$. \ $\circ$ $\frac{\pa \tilde{v}}{\pa^{2} u}$ \ From the above we have $\frac{\pa \tilde{v}}{\pa u} = \frac{\phi^{3}_{y}}{a_{1}} - \frac{\phi^{1}_{y} a_{3}}{a^{2}_{1}}$, then \ $\frac{\pa \tilde{v}}{\pa^{2} u}(p_{4}) = \frac{0}{a_{1}(p_{4})} - \frac{\phi^{3}_{y}(\z) \phi^{1}_{y}(\z)}{a^{2}_{1}(p_{4})} - \frac{\phi^{1}_{y}(\z) \phi^{3}_{y}(\z)}{a^{2}_{1}(p_{4})} + \frac{(\phi^{1}_{y}(\z))^{2} \phi^{3}_{x}(\z)}{a^{3}_{1}(p_{4})} = 0$ since $\phi^{3}_{y}(\z) = 0$ and $\phi^{3}_{x}(\z) = 0$. \ We see that we do not need to determine what $\frac{\pa \tilde{u}}{\pa^{2} u}(p_{4})$ is, since $\frac{\pa \tilde{v}}{\pa u}$ and $\frac{\pa \tilde{v}}{\pa^{2} u}$ will be zero at $p_{4}$ and give us that $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) = 0$. \ $\bullet$ $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}$. 
\ Recall that $\tilde{v}_{3} = \frac{c_{3}}{c_{2}}$, then \ $\frac{\pa \tilde{v}_{3}}{\pa v_{3}} = \frac{ u_{3}u_{2} \frac{\pa \tilde{v}_{2}}{\pa x \pa v_{3}} + ... + \frac{\pa \tilde{v}_{2}}{\pa v_{2}} }{c_{2}} - \frac{ (u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x \pa v_{3}} + ... + \frac{\pa \tilde{u}_{2}}{\pa v_{2}})c_{3}}{c^{2}_{2}}$ \ $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{4}) = \frac{ \frac{\pa \tilde{v}_{2}}{\pa v_{2}}(p_{4})}{ \frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4})} - \frac{ \frac{\pa \tilde{u}_{2}}{\pa v_{2}}(p_{4}) \frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4})}{(\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}))^{2}}$. This means that we need to look at $\frac{\pa \tilde{v}_{2}}{\pa v_{2}}$, $\frac{\pa \tilde{u}_{2}}{\pa v_{2}}$, $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}$, and $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$ evaluated at $p_{4}$. \ $\circ$ $\frac{\pa \tilde{v}_{2}}{\pa v_{2}}$. \ We recall from an earlier calculation that $\frac{\pa \tilde{v}_{2}}{\pa v_{2}}(p_{4}) = \frac{ \frac{\pa \tilde{v}}{\pa v}(p_{4})}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} = \frac{\phi^{3}_{z}(\z)}{\phi^{2}_{y}(\z)}$. \ $\circ$ $\frac{\pa \tilde{u}_{2}}{\pa v_{2}}$. \ Recall from an earlier calculation that $\frac{\pa \tilde{u}_{2}}{\pa v_{2}}(p_{4}) = 0$. \ $\circ$ $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$. \ Recall that $\tilde{u}_{2} = \frac{b_{1}}{b_{2}}$ and that $\frac{\pa \tilde{u}_{2}}{\pa u_{2}} = \frac{ \phi^{1}_{x} + u \phi^{1}_{y} + v \phi^{1}_{z}}{b_{2}} - \frac{(\frac{\pa \tilde{u}}{\pa x} + u \frac{\pa \tilde{u}}{\pa y} + v \frac{\pa \tilde{u}}{\pa z})b_{1}}{b^{2}_{2}}$, then \ $\frac{ \pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) = \frac{\phi^{1}_{x}(\z)}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} = \frac{(\phi^{1}_{x}(\z))^{2}}{\phi^{2}_{y}(\z)}$. With the above in mind we see that $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{4}) = \frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}}$.
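Before the final expression is assembled, the quotient arithmetic above can be double-checked symbolically. In the sketch below (an illustration, not part of the argument), px, py, pz stand for the nonzero values $\phi^{1}_{x}(\z)$, $\phi^{2}_{y}(\z)$, $\phi^{3}_{z}(\z)$, and the chain of evaluations reproduces the two closed forms used in the orbit count:

```python
import sympy as sp

# px, py, pz stand for phi^1_x(0), phi^2_y(0), phi^3_z(0); all nonzero
px, py, pz = sp.symbols('px py pz', nonzero=True)

du_du   = py / px        # d(u~)/du   at p4
du2_du2 = px / du_du     # d(u~2)/du2 at p4, i.e. px**2/py
dv_dv   = pz / px        # d(v~)/dv   at p4
dv2_dv2 = dv_dv / du_du  # d(v~2)/dv2 at p4, i.e. pz/py

u3_u3 = sp.simplify(du_du / du2_du2)    # d(u~3)/du3 at p4
v3_v3 = sp.simplify(dv2_dv2 / du2_du2)  # d(v~3)/dv3 at p4
print(u3_u3, v3_v3)  # py**2/px**3 and pz/px**2
```

This matches the coefficients $(\phi^{2}_{y}(\z))^{2}/(\phi^{1}_{x}(\z))^{3}$ and $\phi^{3}_{z}(\z)/(\phi^{1}_{x}(\z))^{2}$ appearing in $\Phi^{3}_{\ast}(\ell)$.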
\ This gives that $\Phi^{3}_{\ast}(\ell) = span \{ (b \frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{4}) + c \frac{\pa \tilde{u}_{3}}{\pa v_{3}}(p_{4}))\frac{\pa}{\pa u_{3}} + (b \frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) + c \frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{4}))\frac{\pa}{\pa v_{3}} \} = span \{ (b \frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}})\frac{\pa}{\pa u_{3}} + c \frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}}\frac{\pa}{\pa v_{3}} \}$. \end{document}
\begin{document} \title{On the volume of unit balls\ of finite-dimensional Lorentz spaces} \begin{abstract} We study the volume of unit balls $B^n_{p,q}$ of finite-dimensional Lorentz sequence spaces $\ell_{p,q}^n.$ We give an iterative formula for ${\rm vol}(B^n_{p,q})$ for the weak Lebesgue spaces with $q=\infty$ and explicit formulas for $q=1$ and $q=\infty.$ We derive asymptotic results for the $n$-th root of ${\rm vol}(B^n_{p,q})$ and show that $[{\rm vol}(B^n_{p,q})]^{1/n}\approx n^{-1/p}$ for all $0<p<\infty$ and $0<q\le\infty.$ We study further the ratio between the volume of unit balls of weak Lebesgue spaces and the volume of unit balls of classical Lebesgue spaces. We conclude with an application of the volume estimates and characterize the decay of the entropy numbers of the embedding of the weak Lebesgue space $\ell_{1,\infty}^n$ into $\ell_1^n.$ \end{abstract} {\bf Keywords:} Lorentz sequence spaces, weak Lebesgue spaces, entropy numbers, vo\-lu\-me estimates, (non-)convex bodies \section{Introduction and main results} It was observed already in the first half of the last century (cf. the interpolation theorem of Marcinkiewicz \cite{Mar}) that the scale of Lebesgue spaces $L_p(\Omega)$, defined on a subset $\Omega\subset \mathbb{R}^n$, is not sufficient to describe the fine properties of functions and operators. After the pioneering work of Lorentz \cite{Lor1,Lor2}, Hunt defined in \cite{Hunt1, Hunt2} a more general scale of function spaces $L_{p,q}(\Omega)$, the so-called Lorentz spaces. This scale includes Lebesgue spaces as a special case (for $p=q$) and Lorentz spaces have found applications in many areas of mathematics, including harmonic analysis (cf. \cite{Graf1,Graf2}) and the analysis of PDE's (cf. \cite{LR,Meyer}). If $\Omega$ is an atomic measure space (with all atoms of the same measure), one arrives naturally at the definition of Lorentz spaces on (finite or infinite) sequences. 
If $n$ is a positive integer and $0<p\le\infty$, then the Lebesgue $n$-dimensional space $\ell_p^n$ is $\mathbb{R}^n$ equipped with the (quasi-)norm \begin{equation*} \|x\|_p=\begin{cases}\displaystyle \Bigl(\sum_{j=1}^n |x_j|^p\Bigr)^{1/p},&\quad\text{for}\ 0<p<\infty,\\ \displaystyle\max_{j=1,\dots,n}|x_j|,&\quad\text{for}\ p=\infty \end{cases} \end{equation*} for every $x=(x_1,\dots,x_n)\in\mathbb{R}^n$. We denote by $B_p^n$ its unit ball \begin{equation}\label{eq:defBp} B_p^n=\{x\in\mathbb{R}^n:\|x\|_p\le 1\}. \end{equation} If $0<p,q\le\infty$, then the Lorentz space $\ell_{p,q}^n$ stands for $\mathbb{R}^n$ equipped with the \mbox{(quasi-)norm} \begin{equation}\label{eq:defpq} \|x\|_{p,q}=\|k^{\frac{1}{p}-\frac{1}{q}}x_k^*\|_q, \end{equation} where $x^*=(x_1^*,\dots,x_n^*)$ is the non-increasing rearrangement of $(|x_1|,\dots,|x_n|)$. If $p=q$, then $\ell_{p,p}^n=\ell_p^n$ are again the Lebesgue sequence spaces. If $q=\infty$, then the space $\ell_{p,\infty}^n$ is usually referred to as a weak Lebesgue space. Similarly to \eqref{eq:defBp}, we denote by $B_{p,q}^n$ the unit ball of $\ell_{p,q}^n$, i.e. the set \begin{equation} B_{p,q}^n=\{x\in\mathbb{R}^n:\|x\|_{p,q}\le 1\}. \end{equation} Furthermore, $B^{n,+}_{p}$ (or $B_{p,q}^{n,+}$) will be the set of vectors from $B_p^n$ (or $B_{p,q}^{n}$) with all coordinates non-negative. Lorentz spaces of (finite or infinite) sequences have been used extensively in different areas of mathematics. They form a basis for many operator ideals of Pietsch, cf. \cite{Pietsch1, Pietsch2, Triebel2}, they play an important role in the interpolation theory, cf. \cite{BS,BL,Haroske,LiPe}, and their weighted counterparts are the main building blocks of approximation function spaces, cf. \cite{ST,Triebel1}. Weak Lebesgue sequence spaces (i.e. Lorentz spaces with $q=\infty$) were used by Cohen, Dahmen, Daubechies, and DeVore \cite{CDDD} to characterize functions of bounded variation. 
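The rearrangement-based definition \eqref{eq:defpq} is straightforward to turn into a small computational sketch; the helper below (an illustration, not part of the paper) computes $\|x\|_{p,q}$ directly from the definition, treating $q=\infty$ separately:

```python
import math

def lorentz_norm(x, p, q):
    """Illustrative computation of the l_{p,q}^n (quasi-)norm via the
    non-increasing rearrangement x^* of (|x_1|, ..., |x_n|)."""
    xs = sorted((abs(t) for t in x), reverse=True)  # x^*
    if math.isinf(q):
        # q = infinity: ||x||_{p,inf} = max_k k^{1/p} x_k^*
        return max(k ** (1.0 / p) * v for k, v in enumerate(xs, start=1))
    return sum((k ** (1.0 / p - 1.0 / q) * v) ** q
               for k, v in enumerate(xs, start=1)) ** (1.0 / q)

print(lorentz_norm([3, 4], 2, 2))  # p = q: the usual l_2 norm, 5.0
```

For $p=q$ this reduces to the Lebesgue norm $\|x\|_p$, and for $q=\infty$ it returns the weak Lebesgue quasi-norm $\max_k k^{1/p}x_k^*$.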
Lorentz spaces further appear in approximation theory \cite{DeVore,DeLo,DePeTe} and signal processing \cite{IEEE1,IEEE2,FR}. The volume of the unit balls of classical Lebesgue sequence spaces $B_p^n$ has been known since the time of Dirichlet \cite{Dirichlet}, who showed for $0<p\le\infty$ that \begin{equation}\label{eq:volBp} \operatornamewithlimits{vol}(B_p^n)=2^n\cdot\frac{\Gamma\bigl(1+\frac{1}{p}\bigr)^n}{\Gamma\bigl(1+\frac{n}{p}\bigr)}. \end{equation} Here, $\operatornamewithlimits{vol}(A)$ stands for the Lebesgue measure of a (measurable) set $A\subset\mathbb{R}^n$ and $\Gamma(x)=\int_0^\infty t^{x-1}e^{-t}dt$ is the Gamma function for $x>0$. Since then, \eqref{eq:volBp} and its consequences have played an important role in many results about finite-dimensional Lebesgue spaces, cf. \cite{Pisier}. Although many properties of Lorentz sequence spaces were studied in detail earlier (cf. \cite{LorTheo2, LorTheo1, BS,BL, LorTheo5, LorTheo3, LorTheo4}), very little seems to be known about the volume of their unit balls. The aim of this work is to fill this gap to some extent. We present two approaches, which lead to recursive formulas for $\operatornamewithlimits{vol}(B_{p,\infty}^n)$ if $0<p<\infty$. The first one (cf. Theorem \ref{thm:ind:1}) \begin{equation}\label{eq:intro:1} \operatornamewithlimits{vol}(B^{n,+}_{p,\infty})=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty}) \end{equation} is quite well suited for calculating $\operatornamewithlimits{vol}(B_{p,\infty}^n)$ for moderate values of $n$, and we also present some numerical results on the behavior of this quantity for different values of $p$. Although an explicit formula for $\operatornamewithlimits{vol}(B_{p,\infty}^n)$ can be derived from \eqref{eq:intro:1}, cf. Theorem \ref{thm:ind:2}, due to its combinatorial nature it seems to be of only limited practical use.
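Dirichlet's formula \eqref{eq:volBp} can be evaluated directly with the Gamma function; a quick sketch recovering the familiar values $\operatornamewithlimits{vol}(B_2^2)=\pi$ and $\operatornamewithlimits{vol}(B_1^3)=2^3/3!$:

```python
import math

def vol_Bp(n, p):
    # Dirichlet's formula: vol(B_p^n) = 2^n * Gamma(1 + 1/p)^n / Gamma(1 + n/p)
    return 2.0 ** n * math.gamma(1 + 1 / p) ** n / math.gamma(1 + n / p)

print(vol_Bp(2, 2))  # area of the unit disc: pi
print(vol_Bp(3, 1))  # volume of the cross-polytope: 2^3/3! = 4/3
```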
In Section \ref{sec:integral} we derive the same formula with the help of iterated multivariate integrals, very much in the spirit of the original proof of Dirichlet. Surprisingly, a simple explicit formula can be given for $\operatornamewithlimits{vol}(B_{p,1}^n)$ for the full range of $0<p\le\infty$. Indeed, we show in Theorem \ref{thm:q1} that $$ \operatornamewithlimits{vol}(B_{p,1}^n)=2^n\prod_{k=1}^n\frac{1}{\varkappa_p(k)},\quad \text{where}\quad \varkappa_p(k)=\sum_{j=1}^kj^{1/p-1}. $$ If $p=1$, then $\varkappa_1(k)=k$ and this formula reduces immediately to the well-known relation $\operatornamewithlimits{vol}(B_{1}^n)=2^n/n!.$ Using Stirling's formula, \eqref{eq:volBp} implies that $[\operatornamewithlimits{vol}(B_p^n)]^{1/n}\approx n^{-1/p}$ for all $0<p<\infty$ with the constants of equivalence independent of $n$. Using the technique of entropy numbers, we show in Theorem \ref{thm:asym:1} that essentially the same is true for the whole scale of Lorentz spaces $\ell_{p,q}^n$ (with a remarkable exception for $p=\infty$, cf. Theorem \ref{thm:asym:inf1}). It is a very well known fact (cf. Theorem \ref{thm:emb:1}) that $B_{p}^n\subset B_{p,\infty}^n$ for all $0<p\le\infty$ and it is common folklore to consider the unit balls of weak Lebesgue spaces (i.e. Lorentz spaces with $q=\infty$) as the ``slightly larger'' counterparts of the unit balls of Lebesgue spaces with the same summability parameter $p$. This intuition seems to be further confirmed by Theorem \ref{thm:asym:1}, which shows that the quantities $[\operatornamewithlimits{vol}(B^n_p)]^{1/n}$ and $[\operatornamewithlimits{vol}(B^n_{p,\infty})]^{1/n}$ are equivalent to each other with constants independent of $n$.
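The product formula for $\operatornamewithlimits{vol}(B_{p,1}^n)$ quoted above is likewise easy to evaluate; in the sketch below the collapse to $2^n/n!$ at $p=1$ serves as a consistency check:

```python
import math

def kappa(p, k):
    # kappa_p(k) = sum_{j=1}^k j^(1/p - 1)
    return sum(j ** (1.0 / p - 1.0) for j in range(1, k + 1))

def vol_Bp1(n, p):
    # vol(B_{p,1}^n) = 2^n * prod_{k=1}^n 1/kappa_p(k)
    v = 2.0 ** n
    for k in range(1, n + 1):
        v /= kappa(p, k)
    return v

# for p = 1 we have kappa_1(k) = k, so the product collapses to 2^n/n!
print(vol_Bp1(5, 1), 2.0 ** 5 / math.factorial(5))
```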
On the other hand, we show in Theorem \ref{thm:ratio} that $\operatornamewithlimits{vol}(B^n_{p,\infty})/\operatornamewithlimits{vol}(B^n_{p})$ grows exponentially in $n$ at least for $p\le 2$. We conjecture (but it remains an open problem) that the same is true for all $p<\infty.$ We conclude our work by considering the entropy numbers of the embeddings between finite-dimensional Lorentz spaces, which complements the seminal work of Edmunds and Netrusov \cite{EN}. We characterize in Theorem \ref{thm:entropy} the decay of the entropy numbers $e_k(id:\ell_{1,\infty}^n\to \ell_{1}^n)$, which turns out to exhibit a rather unusual behavior, namely $$ e_k(id:\ell_{1,\infty}^n\to \ell_1^n)\approx\begin{cases} \log(1+n/k),\quad 1\le k\le n,\\ 2^{-\frac{k-1}{n}},\quad k\ge n \end{cases} $$ with constants of equivalence independent of $k$ and $n$. We see that after a logarithmic decay for $1\le k \le n$, the exponential decay in $k$ takes over for $k\ge n.$ \section{Recursive and explicit formulas} In this section, we present different formulas for the volume of unit balls of Lorentz spaces in two special cases, namely for the weak Lebesgue spaces with $q=\infty$ and for Lorentz spaces with $q=1.$ Surprisingly, different techniques have to be used in these two cases. \subsection{Weak Lebesgue spaces} We start with the study of weak Lebesgue spaces, i.e. the Lorentz spaces $\ell_{p,\infty}^n$. If $p=\infty$, then $\ell_{p,\infty}^n=\ell_\infty^n$. Therefore, we restrict ourselves to $0<p<\infty$ in this section. \subsubsection{Using the inclusion-exclusion principle} In this section, we assume the convention $$ \operatornamewithlimits{vol}(B^{1,+}_{p,\infty})=\operatornamewithlimits{vol}(B^{0,+}_{p,\infty})=1 $$ for every $0<p<\infty.$ \begin{thm}\label{thm:ind:1} Let $n\in\mathbb{N}$ and $0<p<\infty$.
Then \begin{equation}\label{eq:ind:1} \operatornamewithlimits{vol}(B^{n,+}_{p,\infty})=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty}). \end{equation} \end{thm} \begin{proof} For $1\le k \le n$, we denote $A_k=\{x\in B^{n,+}_{p,\infty}:x_k\le n^{-1/p}\}$. If $x\in B^{n,+}_{p,\infty}$, then at least one of the coordinates of $x$ must be smaller than or equal to $n^{-1/p}$. Therefore $$ B^{n,+}_{p,\infty}=\bigcup_{j=1}^n A_j. $$ For a non-empty index set $K\subset \{1,\dots,n\}$, we denote $$ A_K=\bigcap_{k\in K}A_k=\{x\in B_{p,\infty}^{n,+}:x_k\le n^{-1/p}\ \text{for all}\ k\in K\}. $$ If we denote by $x_{K^c}$ the restriction of $x$ to $K^c=\{1,\dots,n\}\setminus K$, then the $j^{\rm th}$ largest coordinate of $x_{K^c}$ can be at most $j^{-1/p}$, i.e. $x_{K^c}\in B^{n-|K|,+}_{p,\infty}$. Here, $|K|$ stands for the number of elements in $K$. We therefore obtain \begin{equation}\label{eq:volK} \operatornamewithlimits{vol}(A_K)=\Bigl(\prod_{k\in K}n^{-1/p}\Bigr)\cdot \operatornamewithlimits{vol}(B^{n-|K|,+}_{p,\infty})=n^{-|K|/p}\cdot \operatornamewithlimits{vol}(B^{n-|K|,+}_{p,\infty}). \end{equation} Finally, we insert \eqref{eq:volK} into the inclusion-exclusion principle and obtain \begin{align*} \operatornamewithlimits{vol}(B^{n,+}_{p,\infty})&=\sum_{\emptyset\not=K\subset\{1,\dots,n\}}(-1)^{|K|-1}\operatornamewithlimits{vol}(A_K)\\ &=\sum_{\emptyset\not=K\subset\{1,\dots,n\}}(-1)^{|K|-1}n^{-|K|/p}\operatornamewithlimits{vol}(B^{n-|K|,+}_{p,\infty})\\ &=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty}). \end{align*} \end{proof} The relation \eqref{eq:ind:1} is already suitable for the calculation of $\operatornamewithlimits{vol}(B^{n}_{p,\infty})$ for moderate values of $n$, cf. Table \ref{table_infty}.
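For $p=1$, the weights $n^{-j/p}=n^{-j}$ in \eqref{eq:ind:1} are rational, so the recursion can be evaluated in exact arithmetic. The following short script is a sketch of such a computation (an illustration only; the function names are ours), computing $\operatornamewithlimits{vol}(B^{n}_{1,\infty})=2^n\operatornamewithlimits{vol}(B^{n,+}_{1,\infty})$:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def vol_pos_weak(n: int) -> Fraction:
    """vol(B^{n,+}_{1,infty}) via the inclusion-exclusion recursion (eq:ind:1) for p = 1,
    where the coefficients n^{-j/p} = n^{-j} are exact rationals."""
    if n <= 1:  # the convention vol(B^{0,+}) = vol(B^{1,+}) = 1
        return Fraction(1)
    return sum((-1) ** (j - 1) * comb(n, j) * Fraction(1, n ** j) * vol_pos_weak(n - j)
               for j in range(1, n + 1))

def vol_weak(n: int) -> Fraction:
    """vol(B^n_{1,infty}) = 2^n * vol(B^{n,+}_{1,infty})."""
    return 2 ** n * vol_pos_weak(n)
```

In exact terms one finds, for instance, $\operatornamewithlimits{vol}(B^2_{1,\infty})=3$ and $\operatornamewithlimits{vol}(B^3_{1,\infty})=98/27$, consistent with the values reported in Table \ref{table_infty}.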
Let us remark that $\operatornamewithlimits{vol}(B^n_{1,\infty})$ is maximal for $n=4$ and $\operatornamewithlimits{vol}(B^n_{2,\infty})$ attains its maximum at $n=18.$ \begin{table}[htp]\centering \pgfplotstableread[col sep = semicolon]{q1prehled15.txt}\loadedtable \setlength{\tabcolsep}{8pt} \pgfplotstabletypeset[ precision=3, every head row/.style={ before row=\toprule,after row=\midrule}, every last row/.style={ after row=\bottomrule}, columns/n/.style={int detect, column name={$n$}}, columns/pul/.style={sci, sci zerofill, sci sep align, column name={$p=1/2$}}, columns/jedna/.style={sci, sci zerofill, sci sep align, column name={$p=1$}}, columns/dva/.style={sci, sci zerofill, sci sep align, column name={$p=2$}}, columns/sto/.style={sci, sci zerofill, sci sep align, column name={$p=100$}}, ]\loadedtable \caption{$\text{vol}(B^n_{p,\infty})$ for dimensions up to 15} \label{table_infty} \end{table} Next we exploit Theorem \ref{thm:ind:1} to give an explicit formula for the volume of unit balls of weak Lebesgue spaces. For this, we denote by ${\bf K}_n$ the set of integer vectors of finite length $k=(k_1,\dots,k_j)$ with positive coordinates $k_1,\dots,k_j$, which sum up to $n$. We denote by $\ell(k)=j$ the length of $k\in{\bf K}_n.$ Similarly, we denote by ${\bf M}_n$ the set of all increasing sequences $m=(m_0,\dots,m_j)$ which grow from zero to $n$, i.e. with $0=m_0<m_1<\dots<m_j=n.$ The quantity $\ell(m)=j$ is again the length of $m\in{\bf M}_n$. Hence, \begin{align*} {\bf K}_n&:=\{k=(k_1,\dots,k_j):k_i\in\mathbb{N}, \sum_{i=1}^j k_i=n\},\\ {\bf M}_n&:=\{m=(m_0,\dots,m_j):m_i\in\mathbb{N}_0,\ 0=m_0<m_1<\dots<m_j=n\}. \end{align*} For $k\in{\bf K}_n$, we also write $$ {n\choose k}={n\choose k_1,\dots,k_{\ell(k)}}=\frac{n!}{k_1!\dots k_{\ell(k)}!}. $$ The explicit formula for $\operatornamewithlimits{vol}(B_{p,\infty}^n)=2^n\operatornamewithlimits{vol}(B_{p,\infty}^{n,+})$ is then presented in the following theorem.
\begin{thm}\label{thm:ind:2} Let $0<p<\infty$ and $n\in\mathbb{N}.$ Then \begin{align} \notag \operatornamewithlimits{vol}(B^{n,+}_{p,\infty})&=\sum_{k\in {\bf K}_n} (-1)^{n+\ell(k)}{n\choose k} \prod_{l=1}^{\ell(k)}\Bigl(n-\sum_{i=1}^{l-1}k_i\Bigr)^{-k_l/p}\\ \label{eq:ind:2}&=n!\sum_{m\in {\bf M}_n} (-1)^{n+\ell(m)} \prod_{l=0}^{\ell(m)-1} \frac{(n-m_l)^{-(m_{l+1}-m_{l})/p}}{(m_{l+1}-m_{l})!}. \end{align} \end{thm} \begin{proof} First, we prove the second identity in \eqref{eq:ind:2}. Indeed, the mapping $k=(k_1,\dots,k_j)\to (0,k_1,k_1+k_2,\dots,\sum_{i=1}^jk_i)$ maps ${\bf K}_n$ one-to-one onto ${\bf M}_n$, preserving also the length of the vectors. Next, we proceed by induction to show the first identity of \eqref{eq:ind:2}. To this end, we denote ${\bf K}_0=\{0\}$ with $\ell(0)=0$. With this convention, \eqref{eq:ind:2} is true for both $n=0$ and $n=1$, where both sides of \eqref{eq:ind:2} are equal to one. The rest follows from \eqref{eq:ind:1}. Indeed, we obtain \begin{align*} \operatornamewithlimits{vol}(B^{n,+}_{p,\infty})&=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty})\\ &=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\sum_{k\in{\bf K}_{n-j}}(-1)^{n-j+\ell(k)}{n-j\choose k}\prod_{l=1}^{\ell(k)}\Bigl(n-j-\sum_{i=1}^{l-1}k_i\Bigr)^{-k_l/p}\\ &=\sum_{j=1}^n \sum_{k\in{\bf K}_{n-j}} (-1)^{n+\ell(k)-1}{n\choose j}{n-j\choose k}n^{-j/p}\prod_{l=1}^{\ell(k)}\Bigl(n-j-\sum_{i=1}^{l-1}k_i\Bigr)^{-k_l/p}\\ &=\sum_{\nu\in{\bf K}_{n}} (-1)^{n+\ell(\nu)}{n\choose \nu}n^{-\nu_1/p}\prod_{l=1}^{\ell(\nu)-1}\Bigl(n-\sum_{i=1}^{l}\nu_i\Bigr)^{-\nu_{l+1}/p}\\ &=\sum_{\nu\in{\bf K}_{n}} (-1)^{n+\ell(\nu)}{n\choose \nu}\prod_{l=1}^{\ell(\nu)} \Bigl(n-\sum_{i=1}^{l-1}\nu_i\Bigr)^{-\nu_{l}/p}, \end{align*} where we identified the pair $(j,k)$ with $1\le j\le n$ and $k=(k_1,\dots,k_{\ell(k)})\in {\bf K}_{n-j}$ with $\nu=(j,k_1,\dots,k_{\ell(k)})\in{\bf K}_n$. If $j=n$, then the pair $(n,0)$ is identified with $\nu=(n)$.
In any case, $\ell(\nu)=\ell(k)+1.$ \end{proof} \subsubsection{Integral approach}\label{sec:integral} The result of Theorem \ref{thm:ind:2} can also be obtained by an iterative evaluation of integrals, resembling the approach of the original work of Dirichlet \cite{Dirichlet}. To begin with, we define a scale of expressions which for some specific choice of parameters lead to a formula for $\operatornamewithlimits{vol}(B^n_{p,\infty})$. \begin{dfn}\label{def:V} Let $m\in\mathbb{N}_0$ and $n\in\mathbb{N}$. Let $a\in\mathbb{R}^n$ be a decreasing positive vector, i.e. $a=(a_1,\dots,a_n)$ with $a_1>a_2>\dots>a_n>0.$ We denote \begin{equation}\label{eq:def:V} V^{(m)}(n,a)=\int_0^{a_n}\int_{x_n}^{a_{n-1}}\dots\int_{x_2}^{a_1}x_1^mdx_1\dots dx_{n-1}dx_n. \end{equation} \end{dfn} The domain of integration in \eqref{eq:def:V} is defined by the following set of conditions \begin{align*} 0\le x_n\le a_n,\quad x_n\le x_{n-1}\le a_{n-1},\quad \dots,\quad x_2\le x_1\le a_1, \end{align*} which can also be reformulated as \begin{align*} 0\le x_n\le x_{n-1}\le \dots\le x_2\le x_1\quad\text{and}\quad x_j\le a_j\ \text{for all}\ j=1,\dots,n. \end{align*} Hence, the integration in \eqref{eq:def:V} goes over the cone of non-negative non-increasing vectors in $\mathbb{R}^n$ intersected with the set $\{x\in\mathbb{R}^n:x^*_j\le a_j\ \text{for}\ j=1,\dots,n\}.$ If we set $a^{(p)}=(a_1^{(p)},\dots,a_n^{(p)})$ with $a_j^{(p)}=j^{-1/p}$ for $0<p<\infty$ and $1\le j\le n$, this set coincides with $B^{n,+}_{p,\infty}$. Finally, considering all the possible reorderings of $x$, we get \begin{equation}\label{eq:BV1} \operatornamewithlimits{vol}(B^{n,+}_{p,\infty})=n!\cdot V^{(0)}(n,a^{(p)}). \end{equation} In what follows, we simplify the notation by assuming $V^{(0)}(0,\emptyset)=1.$ The integration in \eqref{eq:def:V} leads to the following recursive formula for $V^{(m)}(n,a)$.
\begin{lem}\label{lem:ind:V1} Let $m\in\mathbb{N}_0$, $n\in\mathbb{N}$ and $a\in\mathbb{R}^n$ with $a_1>a_2>\dots>a_n>0.$ Then \begin{equation}\label{eq:ind:V1} V^{(m)}(n,a)=\sum_{i=1}^{n}(-1)^{i+1}\frac{a_i^{m+i}m!}{(m+i)!}V^{(0)}(n-i,(a_{i+1},\dots,a_n)). \end{equation} \end{lem} \begin{proof} First, we obtain \begin{align*} V^{(m)}(n,a)&=\int_0^{a_n}\int_{x_n}^{a_{n-1}}\dots\int_{x_2}^{a_1}x_1^mdx_1\dots dx_{n-1}dx_n\\ &=\frac{1}{m+1}\int_0^{a_n}\int_{x_n}^{a_{n-1}}\dots\int_{x_3}^{a_2}(a_1^{m+1}-x_2^{m+1})dx_2\dots dx_{n-1}dx_n\\ &=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,(a_2,\dots,a_n))-\frac{1}{m+1}V^{(m+1)}(n-1,(a_2,\dots,a_n)). \end{align*} The proof of \eqref{eq:ind:V1} now follows by induction on $n$. For $n=1$, $V^{(m)}(1,(a_1))=\frac{a_1^{m+1}}{m+1}$, which is in agreement with \eqref{eq:ind:V1}. To simplify the notation later on, we write $a_{[k,l]}=(a_k,\dots,a_l)$ for every $1\le k\le l\le n$. We assume that \eqref{eq:ind:V1} holds for $n-1$ and calculate \begin{align*} V^{(m)}(n,a)&=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,a_{[2,n]})-\frac{1}{m+1}V^{(m+1)}(n-1,(a_{[2,n]}))\\ &=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,a_{[2,n]})- \frac{1}{m+1}\sum_{i=1}^{n-1}(-1)^{i+1}\frac{a_{i+1}^{m+1+i}(m+1)!}{(m+1+i)!}V^{(0)}(n-1-i,(a_{[i+2,n]}))\\ &=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,a_{[2,n]})+ \sum_{j=2}^{n}(-1)^{j+1}\frac{a_{j}^{m+j}m!}{(m+j)!}V^{(0)}(n-j,(a_{[j+1,n]}))\\ &=\sum_{j=1}^{n}(-1)^{j+1}\frac{a_{j}^{m+j}m!}{(m+j)!}V^{(0)}(n-j,(a_{[j+1,n]})), \end{align*} which finishes the proof of \eqref{eq:ind:V1}. \end{proof} Lemma \ref{lem:ind:V1} allows for a different proof of Theorem \ref{thm:ind:2}.
\begin{proof}[Alternative proof of Theorem \ref{thm:ind:2}] By \eqref{eq:BV1}, we need to calculate $V^{(0)}(n,a)$ and then substitute $a=a^{(p)}.$ We show by induction that \begin{equation}\label{eq:BV2} V^{(0)}(n,a)=\sum_{m\in{\bf M}_n}(-1)^{n+\ell(m)}\prod_{l=0}^{\ell(m)-1}\frac{a_{n-m_l}^{m_{l+1}-m_l}}{(m_{l+1}-m_l)!}. \end{equation} If $n=1$, then both sides of \eqref{eq:BV2} are equal to $a_1.$ For general $n$, we obtain by \eqref{eq:ind:V1} \begin{align*} V^{(0)}(n,a)&=\sum_{i=1}^{n}(-1)^{i+1}\frac{a_i^{i}}{i!}V^{(0)}(n-i,(a_{i+1},\dots,a_n))\\ &=\sum_{i=1}^{n}(-1)^{i+1}\frac{a_i^{i}}{i!} \sum_{m\in{\bf M}_{n-i}}(-1)^{n-i+\ell(m)}\prod_{l=0}^{\ell(m)-1}\frac{a_{n-i-m_l+i}^{m_{l+1}-m_l}}{(m_{l+1}-m_l)!}\\ &=\sum_{i=1}^{n}\sum_{m\in{\bf M}_{n-i}}(-1)^{n+1+\ell(m)}\frac{a_i^{i}}{i!} \prod_{l=0}^{\ell(m)-1}\frac{a_{n-m_l}^{m_{l+1}-m_l}}{(m_{l+1}-m_l)!}\\ &=\sum_{\mu\in{\bf M}_n} (-1)^{n+\ell(\mu)} \frac{a_{n-\mu_{\ell(\mu)-1}}^{\mu_{\ell(\mu)}-\mu_{\ell(\mu)-1}}}{(\mu_{\ell(\mu)}-\mu_{\ell(\mu)-1})!} \prod_{l=0}^{\ell(\mu)-2}\frac{a_{n-\mu_l}^{\mu_{l+1}-\mu_l}}{(\mu_{l+1}-\mu_l)!}\\ &=\sum_{\mu\in{\bf M}_n} (-1)^{n+\ell(\mu)} \prod_{l=0}^{\ell(\mu)-1}\frac{a_{n-\mu_l}^{\mu_{l+1}-\mu_l}}{(\mu_{l+1}-\mu_l)!}, \end{align*} where we identified the pair $(i,m)$ with $1\le i \le n$ and $m=(m_0,\dots,m_{\ell(m)})\in{\bf M}_{n-i}$ with $\mu=(\mu_0,\dots,\mu_{\ell(\mu)})=(m_0,\dots,m_{\ell(m)},n)\in{\bf M}_n$. Hence, $\ell(\mu)=\ell(m)+1.$ \end{proof} \subsection{Lorentz spaces with $q=1$} We give an explicit formula for $\operatornamewithlimits{vol}(B_{p,1}^n)$, which takes a surprisingly simple form. The approach is based on polarization. Recall that for $0<p\le \infty$, the \mbox{(quasi-)norm} $\|x\|_{p,1}$ is defined as $$ \|x\|_{p,1}=\sum_{k=1}^n k^{1/p-1}x_k^*. $$ \begin{thm}\label{thm:q1} Let $0<p\le \infty$.
Then $$ \operatornamewithlimits{vol}(B_{p,1}^n)=2^n\prod_{k=1}^n\frac{1}{\varkappa_p(k)},\quad \text{where}\quad \varkappa_p(k)=\sum_{j=1}^kj^{1/p-1}. $$ \end{thm} \begin{proof} Let $f:[0,\infty)\to \mathbb{R}$ be a smooth non-negative function with a sufficient decay at infinity (later on, we will just choose $f(t)=e^{-t}$). Then \begin{align} \notag\int_{\mathbb{R}^n}f(\|x\|_{p,1})dx&=-\int_{\mathbb{R}^n}\int_{\|x\|_{p,1}}^\infty f'(t)dtdx=-\int_0^\infty f'(t)\int_{x:\|x\|_{p,1}\le t}1dx dt\\ \label{eq:q1:1}&=-\int_0^\infty f'(t)\operatornamewithlimits{vol}(\{x:\|x\|_{p,1}\le t\})dt\\ \notag&=-\int_0^\infty t^n f'(t)\operatornamewithlimits{vol}(\{x:\|x\|_{p,1}\le 1\})dt=-\operatornamewithlimits{vol}(B^n_{p,1})\int_0^\infty t^n f'(t)dt. \end{align} For the choice $f(t)=e^{-t}$, we get \begin{equation}\label{eq:q1:2} -\int_0^\infty t^n f'(t)dt=\int_0^\infty t^ne^{-t}dt=\Gamma(n+1)=n!. \end{equation} It remains to evaluate \begin{align*} I^n_{p}=\int_{\mathbb{R}^n}\exp(-\|x\|_{p,1})dx=\int_{\mathbb{R}^n}\exp\Bigl(-\sum_{k=1}^n k^{1/p-1}x_k^*\Bigr)dx =2^n\cdot n!\cdot\int_{{\mathcal C}_+^n}\exp\Bigl(-\sum_{k=1}^n k^{1/p-1}x_k\Bigr)dx, \end{align*} where $$ {\mathcal C}_+^n=\Bigl\{x\in\mathbb{R}^n:x_1\ge x_2\ge\dots\ge x_n\ge 0\Bigr\}. $$ We denote for $t\ge 0$, $0<p\le\infty$, and $n\in \mathbb{N}$ $$ A(n,p,t)=\int_{{\mathcal C}_{t,+}^n}\exp\Bigl(-\sum_{k=1}^n k^{1/p-1}x_k\Bigr)dx, $$ where ${\mathcal C}_{t,+}^n=\Bigl\{x\in\mathbb{R}^n:x_1\ge x_2\ge\dots\ge x_n\ge t\Bigr\}$, i.e. $I_p^n=2^n\cdot n!\cdot A(n,p,0).$ We observe that \begin{equation}\label{eq:int:1} A(1,p,t)=\int_t^\infty e^{-u}du=e^{-t} \end{equation} and \begin{align} \notag A(n,p,t)&=\int_{t}^\infty \exp\Bigl({-n^{1/p-1}x_n}\Bigr)\int_{x_n}^\infty \exp\Bigl({-(n-1)^{1/p-1}x_{n-1}}\Bigr)\dots \int_{x_2}^\infty \exp({-x_1})dx_1\dots dx_{n-1}dx_n\\ \label{eq:int:2} &=\int_{t}^\infty \exp\Bigl({-n^{1/p-1}x_n}\Bigr)A(n-1,p,x_n)dx_n. 
\end{align} Combining \eqref{eq:int:1} and \eqref{eq:int:2}, we prove by induction \begin{align*} A(n,p,t)=\prod_{k=1}^n\frac{1}{\varkappa_p(k)}\exp(-\varkappa_p(n) t),\quad \text{where}\quad \varkappa_p(k)=\sum_{j=1}^kj^{1/p-1} \end{align*} and \begin{equation}\label{eq:q1:3} I_p^n=2^n\cdot n!\cdot \prod_{k=1}^n\frac{1}{\varkappa_p(k)}. \end{equation} Finally, we combine \eqref{eq:q1:1} with \eqref{eq:q1:2} and \eqref{eq:q1:3} and obtain $$ \operatornamewithlimits{vol}(B_{p,1}^n)=2^n\prod_{k=1}^n\frac{1}{\varkappa_p(k)}. $$ \end{proof} \begin{rem} Let us point out that for $p=1$, we get $\varkappa_1(k)=k$ and we recover the well-known formula $\operatornamewithlimits{vol}(B^n_{1,1})=2^n/n!$. The application of the polarization identity to other values of $q\not=1$ is also possible, but one arrives at an $n$-dimensional integral, which (in contrast to $I_p^n$) seems to be hard to compute explicitly. \end{rem} \section{Asymptotic behavior} Volumes of convex and non-convex bodies play an important role in many areas of mathematics, cf. \cite{Pisier}. Nevertheless, for most applications we do not need exact information about the volume; it is often enough to have good lower and/or upper bounds on this quantity. For example, for use in the local theory of Banach spaces, it is sometimes sufficient to have some asymptotic bounds on $\operatornamewithlimits{vol}(B_p^n)$ for $n$ large. In this section, we provide two such estimates. \subsection{Asymptotic behavior of $\operatornamewithlimits{vol}(B_{p,q}^n)^{1/n}$} The first quantity we would like to study is the $n$-th root of $\operatornamewithlimits{vol}(B^n_{p,q})$. In the Lebesgue case $q=p$, \eqref{eq:volBp} can be combined with Stirling's formula (cf.
\cite{WiW}) \begin{equation}\label{eq:Stirling} \Gamma(t)=(2\pi)^{1/2}t^{t-1/2}e^{-t}e^{\theta(t)/t},\quad 0<t<\infty, \end{equation} where $0<\theta(t)<1/12$ for all $t>0$, to show that \begin{equation}\label{eq:asymp:p} \operatornamewithlimits{vol}(B_p^n)^{1/n}\approx n^{-1/p}, \end{equation} where the constants of equivalence do not depend on $n$. Combining \eqref{eq:asymp:p} with the embedding (cf. Theorem \ref{thm:emb:1}) $$ B_p^n\subset B_{p,\infty}^n\subset (1+\log(n))^{1/p}B_{p}^n, $$ we observe that \begin{equation}\label{eq:asymp:pq1} n^{-1/p}\lesssim \operatornamewithlimits{vol}(B^n_{p,\infty})^{1/n}\lesssim \Bigl(\frac{1+\log(n)}{n}\Bigr)^{1/p} \end{equation} for all $0<p\le \infty.$ The aim of this section is to show that the lower bound in \eqref{eq:asymp:pq1} is sharp and that \eqref{eq:asymp:p} generalizes to all $0<p<\infty$ and $0<q\le\infty$ without additional logarithmic factors. If $0<p<\infty$ and $q=1$, this can be obtained as a consequence of Theorem \ref{thm:q1}. Indeed, elementary estimates give $$ \varkappa_p(k)\approx k^{1/p} $$ with constants independent of $k$ and Theorem \ref{thm:q1} then implies $$ \operatornamewithlimits{vol}(B^n_{p,1})^{1/n}\approx \Bigl(\prod_{k=1}^n \varkappa_p(k)^{-1}\Bigr)^{1/n}\approx \Bigl(\prod_{k=1}^n k^{-1/p}\Bigr)^{1/n}\approx (n!)^{-\frac{1}{p}\cdot\frac{1}{n}}. $$ The proof is then finished by another application of Stirling's formula. To extend the result also to $q\not=1$, we apply the technique of entropy numbers together with interpolation. \begin{thm}\label{thm:asym:1} Let $n\in\mathbb{N}$, $0<p<\infty$ and $0<q\le\infty$. Then \begin{equation}\label{eq:thm:asymp} \operatornamewithlimits{vol}(B_{p,q}^n)^{1/n}\approx n^{-1/p} \end{equation} with the constants of equivalence independent of $n$. \end{thm} \begin{proof} \emph{Step 1.:} First, we show the upper bound of $\operatornamewithlimits{vol}(B^n_{p,\infty})^{1/n}$.
For that reason, we define the entropy numbers of a bounded linear operator between two quasi-normed spaces $X$ and $Y$ as follows $$ e_k(T:X\to Y)=\inf\Bigl\{\varepsilon>0:\exists \{y_l\}_{l=1}^{2^{k-1}}\subset Y\ \text{such that}\ T(B_X)\subset\bigcup_{l=1}^{2^{k-1}}(y_l+\varepsilon B_Y)\Bigr\}. $$ Here, $B_X$ and $B_Y$ stand for the unit balls of $X$ and $Y$, respectively. We use the interpolation inequality for entropy numbers (cf. \cite[Theorem 1.3.2 (i)]{ET}) together with the interpolation property of Lorentz spaces (cf. \cite[Theorems 5.2.1 and 5.3.1]{BL}) and obtain that $$ e_{k}(id:\ell_{p,\infty}^n\to\ell_\infty^n)\le c_p e_k(id:\ell_{p/2}^n\to\ell_{\infty}^n)^{1/2}, $$ where $c_p>0$ depends only on $p$. Together with the known estimates of entropy numbers of embeddings of Lebesgue-type sequence spaces \cite{GL,KV,K2001,S}, we obtain $$ e_{n}(id:\ell^n_{p,\infty}\to\ell_\infty^n)\le c_p n^{-1/p}. $$ By the definition of entropy numbers, this means that $B_{p,\infty}^n$ can be covered with $2^{n-1}$ balls in $\ell_\infty^n$ with radius $(1+\varepsilon)c_pn^{-1/p}$ for every $\varepsilon>0$. Comparing the volumes, we obtain $$ \operatornamewithlimits{vol}(B_{p,\infty}^n)\le 2^{n-1}[(1+\varepsilon)c_pn^{-1/p}]^n\operatornamewithlimits{vol}(B_\infty^n), $$ i.e. $\operatornamewithlimits{vol}(B_{p,\infty}^n)^{1/n}\le c_p' n^{-1/p}.$ \emph{Step 2.:} The estimate from above for general $0<q\le \infty$ is covered by the embedding of Theorem \ref{thm:emb:1} and by the previous step. \emph{Step 3.:} For the lower bound, we use again the interpolation of entropy numbers leading to $$ e_{n}(id:\ell_{p/2}^n\to \ell_{p,q}^n)\le c_{p,q} e_{n}(id:\ell_{p/2}^n\to \ell_{\infty}^n)^{1/2}. $$ Therefore, the unit ball $B_{p/2}^n$ can be covered by $2^{n-1}$ copies of $B_{p,q}^n$ multiplied by $(1+\varepsilon)c n^{-1/p}$.
Comparing the volumes, we obtain $$ c_1n^{-2/p}\le \operatornamewithlimits{vol}(B_{p/2}^n)^{1/n}\le c_2n^{-1/p}\operatornamewithlimits{vol}(B_{p,q}^n)^{1/n}, $$ which finishes the proof. \end{proof} The result of Theorem \ref{thm:asym:1} seems a bit surprising at first glance, especially in view of \eqref{eq:asymp:pq1}, which suggests that some additional logarithmic factor could appear. That the outcome of Theorem \ref{thm:asym:1} was by no means obvious is confirmed by inspecting the case $p=\infty$, where the behavior of the $n$-th root of the volume of the unit ball actually differs from \eqref{eq:thm:asymp}. \begin{thm}\label{thm:asym:inf1} Let $n\in \mathbb{N}$ be a positive integer. Then \begin{equation}\label{eq:asym:inf1} [\operatornamewithlimits{vol}(B^n_{\infty,1})]^{1/n}\approx (\log (n+1))^{-1} \end{equation} with the constants of equivalence independent of $n$. \end{thm} \begin{proof} By Theorem \ref{thm:q1}, we know that $$ \operatornamewithlimits{vol}(B^n_{\infty,1})^{1/n}\approx \Bigl(\prod_{k=1}^n \varkappa_\infty(k)^{-1}\Bigr)^{1/n}, $$ where $$ \varkappa_\infty(k)=\sum_{j=1}^k \frac{1}{j}\approx \log(k+1)\quad \text{for any}\quad k\ge 1. $$ Therefore, $$ \operatornamewithlimits{vol}(B^n_{\infty,1})^{1/n}\approx \Bigl(\prod_{k=1}^n \frac{1}{\log(k+1)}\Bigr)^{1/n}. $$ The lower bound of this quantity is straightforward: $$ \Bigl(\prod_{k=1}^n \frac{1}{\log(k+1)}\Bigr)^{1/n}\ge \Bigl(\prod_{k=1}^n \frac{1}{\log(n+1)}\Bigr)^{1/n}=\frac{1}{\log(n+1)}. $$ For the upper bound, we use the inequality between the geometric and the arithmetic mean and obtain $$ \Bigl(\prod_{k=1}^n \frac{1}{\log(k+1)}\Bigr)^{1/n}\le \frac{1}{n} \sum_{k=1}^n \frac{1}{\log(k+1)}\le \frac{1}{n}\biggl\{\frac{1}{\log(2)}+\int_2^{n+1}\frac{1}{\log(t)}dt\biggr\}. $$ The last integral is known as the (offset) logarithmic integral and is known to be of order $x/\log(x)$ as $x$ tends to infinity, cf.
\cite[Chapter 5]{AS}. Alternatively, the same fact can be shown easily by L'Hospital's rule. This finishes the proof. \end{proof} \subsection{Ratio of volumes} The unit balls $B^n_{p,\infty}$ of weak Lebesgue spaces are commonly considered to be ``slightly larger'' than the unit balls of Lebesgue spaces with the same summability parameter. The aim of this section is to study their relation in more detail. To this end, we define for $0<p<\infty$ \begin{equation}\label{eq:ratio:1} R_{p,n}:=\frac{\operatornamewithlimits{vol}(B_{p,\infty}^n)}{\operatornamewithlimits{vol}(B_p^n)}. \end{equation} By the embedding in Theorem \ref{thm:emb:1} (which we give below with full proof for the reader's convenience, cf. \cite[Chapter 4, Proposition 4.2]{BS}) we know that this quantity is bounded from below by one. Later on, we would like to study its behavior (i.e. its growth) as $n$ tends to $\infty.$ \begin{thm}\label{thm:emb:1} If $0<p<\infty$ and $0<q\le r\le \infty$, then \begin{equation}\label{eq:emb:p2} B_{p,q}^n\subset c_{p,q,r} B_{p,r}^n, \end{equation} where the quantity $c_{p,q,r}$ does not depend on $n$. In particular, $B_{p,q}^n\subset B_{p,r}^n$ if also $q\le p$. \end{thm} \begin{proof} First, we prove the assertion with $r=\infty$. If $1\le l\le n$ is a positive integer, the result follows from \begin{align*} \|x\|_{p,q}^q=\sum_{k=1}^n k^{q/p-1}(x_k^*)^q\ge\sum_{k=1}^l k^{q/p-1}(x_k^*)^q \ge (x_l^*)^q\sum_{k=1}^lk^{q/p-1} \end{align*} and $$ l^{q/p}(x_l^*)^q\le \|x\|_{p,q}^q\cdot l^{q/p}\cdot\Bigl(\sum_{k=1}^l k^{q/p-1}\Bigr)^{-1}. $$ We obtain that \begin{equation}\label{eq:emb:p1} \|x\|_{p,\infty}=\max_{l=1,\dots,n}l^{1/p}x_l^*\le \|x\|_{p,q}\sup_{l\in\mathbb{N}}l^{1/p}\cdot\Bigl(\sum_{k=1}^l k^{q/p-1}\Bigr)^{-1/q}. \end{equation} For $q\le p$, it can be shown by elementary calculus that $$ \sum_{k=1}^lk^{q/p-1}\ge l^{q/p}.
$$ Together with \eqref{eq:emb:p1}, this implies that $\|x\|_{p,\infty}\le \|x\|_{p,q}$ and we obtain $B_{p,q}^n\subset B_{p,\infty}^n,$ i.e. \eqref{eq:emb:p2} with $c_{p,q,\infty}=1.$ If, on the other hand, $q>p$, we estimate $$ \sum_{k=1}^lk^{q/p-1}\ge\int_0^l t^{q/p-1}dt=\frac{p}{q}\cdot l^{q/p} $$ and \eqref{eq:emb:p2} follows with $c_{p,q,\infty}=(q/p)^{1/q}$. If $0<q< r<\infty$, we write \begin{align*} \|x\|_{p,r}&=\Bigl\{\sum_{k=1}^n k^{r/p-1}(x_k^*)^r\Bigr\}^{1/r} =\Bigl\{\sum_{k=1}^n k^{q/p-1}(x_k^*)^q k^{(r-q)/p}(x_k^*)^{r-q}\Bigr\}^{1/r}\\ &\le \|x\|_{p,\infty}^{\frac{r-q}{r}}\cdot\|x\|_{p,q}^{q/r}\le [c_{p,q,\infty}\|x\|_{p,q}]^{\frac{r-q}{r}}\cdot\|x\|_{p,q}^{q/r}, \end{align*} i.e. $$ \|x\|_{p,r}\le c_{p,q,r}\|x\|_{p,q}\quad\text{with}\quad c_{p,q,r}=(c_{p,q,\infty})^{\frac{r-q}{r}}. $$ \end{proof} We show that the ratio $R_{p,n}$ defined in \eqref{eq:ratio:1} grows exponentially for $0<p\le 2$. Naturally, we also conjecture that the same is true for all $0<p<\infty$, but we leave this as an open problem. \begin{thm}\label{thm:ratio} For every $0<p\le 2$, there is a constant $C_p>1$, such that $$ R_{p,n}\gtrsim C_p^n $$ with the multiplicative constant independent of $n$. \end{thm} \begin{proof} We give the proof for even $n$; the proof for odd $n$ is similar, only slightly more technical. Let ${\mathcal B^n_p}\subset \mathbb{R}^n$ be the set of vectors $x\in\mathbb{R}^n$, which satisfy $$ x_1^*\in\Bigl[\frac{1}{2^{1/p}},1\Bigr],\quad x_2^*\in \Bigl[\frac{1}{3^{1/p}},\frac{1}{2^{1/p}}\Bigr],\dots,x^*_{n/2}\in\Bigl[\frac{1}{(n/2+1)^{1/p}},\frac{1}{(n/2)^{1/p}}\Bigr] $$ and $\displaystyle x^*_{n/2+1},\dots,x_n^*\in\Bigl[0,\frac{1}{n^{1/p}}\Bigr]$. Then ${\mathcal B}_p^n\subset B_{p,\infty}^n$ and the volume of ${\mathcal B}_p^n$ can be calculated by combinatorial methods. Indeed, there are ${n\choose n/2}$ ways to choose the $n/2$ indices of the smallest coordinates.
Furthermore, there are $(n/2)!$ ways to distribute the $n/2$ largest coordinates. We obtain that \begin{align}\label{eq:ratio:proof1} R_{p,n}&=\frac{\operatornamewithlimits{vol}(B_{p,\infty}^{n})}{\operatornamewithlimits{vol}(B_p^n)}\ge \frac{\operatornamewithlimits{vol}({\mathcal B}_p^n)}{\operatornamewithlimits{vol}(B_p^n)}\\ \notag&\ge\frac{\Gamma(1+n/p)}{\Gamma(1+1/p)^n}\cdot \binom{n}{n/2}\cdot (n/2)! \cdot\prod_{i=1}^{n/2}\frac{(i+1)^{1/p} - i^{1/p}}{i^{1/p}(i+1)^{1/p}} \cdot\left(\frac{1}{n^{1/p}}\right)^{n/2}. \end{align} First, we observe that, by Stirling's formula \eqref{eq:Stirling}, \begin{align} \notag\Gamma(1+n/p)&\cdot\binom{n}{n/2}\cdot (n/2)! \cdot\prod_{i=1}^{n/2}\frac{1}{i^{1/p}(i+1)^{1/p}} \cdot\left(\frac{1}{n^{1/p}}\right)^{n/2}\\ \notag& = \frac{\Gamma(1+n/p)n!}{\left[(n/2)!\right]^{1+1/p}\left[(n/2+1)!\right]^{1/p}n^{n/(2p)}}\\ \label{eq:ratio:proof2}&\approx \frac{\sqrt{2\pi n/p}\left(\frac{n}{pe}\right)^{n/p}\sqrt{2\pi n}\left(\frac{n}{e}\right)^n}{\left(\sqrt{\pi n}\right)^{1+2/p}(n/2+1)^{1/p}\left(\frac{n}{2e}\right)^{n/2+n/p}n^{n/(2p)}}\\ \notag&\approx \left(\frac{2^{1/2+1/p}}{p^{1/p}e^{1/2}}\right)^n\cdot n^{n/2-n/(2p)+1/2-2/p}. \end{align} By the mean value theorem, we obtain $$ (i+1)^{1/p} - i^{1/p}\geq \begin{cases} \frac{i^{1/p-1}}{p},\quad 0<p\le 1,\\ \frac{(i+1)^{1/p-1}}{p},\quad 1<p\le 2.
\end{cases} $$ We use \eqref{eq:Stirling} to estimate $\Gamma(1+1/p)$ together with \eqref{eq:ratio:proof1} and \eqref{eq:ratio:proof2} and obtain \begin{align*} R_{p,n} & \gtrsim \left(\frac{2^{1/2+1/p}}{\Gamma(1+1/p)p^{1/p}e^{1/2}}\right)^n\cdot \frac{n^{n/2-n/(2p)+1/2-2/p}}{p^{n/2}} \cdot\left[(n/2)!\right]^{1/p-1}(n/2+1)^{\alpha(1/p-1)}\\ & \approx \left(\frac{2^{1+1/(2p)}}{\Gamma(1+1/p)p^{1/p+1/2}e^{1/(2p)}}\right)^n\cdot n^{-3/(2p)+\alpha(1/p-1)} \gtrsim \left(\frac{2^{1/2+1/(2p)}}{\pi^{1/2}e^{p/12-1/(2p)}}\right)^n n^{-3/(2p)+\alpha(1/p-1)}, \end{align*} where $\alpha=0$ for $0<p\le 1$ and $\alpha=1$ for $1<p\le 2.$ The proof is then finished by monotonicity and $$ \frac{2^{1/2+1/(2p)}}{\pi^{1/2}e^{p/12-1/(2p)}}=\sqrt{\frac{2}{\pi}} \cdot\frac{ (2e)^{1/(2p)}}{e^{p/12}}\ge \sqrt{\frac{2}{\pi}}\cdot \frac{(2e)^{1/4}}{e^{1/6}}>1. $$ \end{proof} \section{Entropy numbers} We have already seen the close connection between estimates of volumes of unit balls of finite-dimensional (quasi-)Banach spaces and the decay of entropy numbers of embeddings of such spaces in the proof of Theorem \ref{thm:asym:1}. With the same arguments as there, it is rather straightforward to prove that \begin{equation}\label{eq:interpol:1} e_k(id:\ell^n_{p_0,q_0}\to \ell_{p_1,q_1}^n)\approx e_k(id:\ell^n_{p_0}\to \ell_{p_1}^n) \end{equation} for $0<p_0,p_1<\infty$ with $p_0\not=p_1.$ On the other hand, it was shown in \cite{EN} that the entropy numbers of diagonal operators between Lorentz sequence spaces can also exhibit a very complicated behavior. Actually, they served in \cite{EN} as the first counterexample to a commonly conjectured interpolation inequality for entropy numbers.
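The ratio $\operatornamewithlimits{vol}(B_{1,\infty}^n)/\operatornamewithlimits{vol}(B_{1}^n)$, which was studied in Theorem \ref{thm:ratio} and reappears in the volume arguments of the proof below, can be computed exactly for small $n$ from the recursion \eqref{eq:ind:1} together with $\operatornamewithlimits{vol}(B_1^n)=2^n/n!$. The following is a minimal sketch of such a computation (an illustration only; the function names are ours, and the factors $2^n$ cancel in the ratio):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def vol_pos_weak(n: int) -> Fraction:
    """vol(B^{n,+}_{1,infty}) via the inclusion-exclusion recursion (eq:ind:1), p = 1."""
    if n <= 1:
        return Fraction(1)
    return sum((-1) ** (j - 1) * comb(n, j) * Fraction(1, n ** j) * vol_pos_weak(n - j)
               for j in range(1, n + 1))

def ratio(n: int) -> Fraction:
    """R_{1,n} = vol(B^n_{1,infty}) / vol(B^n_1), using vol(B^n_1) = 2^n / n!."""
    return vol_pos_weak(n) * factorial(n)

ratios = [ratio(n) for n in range(1, 9)]
```

The computed values $R_{1,1},R_{1,2},\dots$ (e.g. $R_{1,2}=3/2$, $R_{1,3}=49/18$, $R_{1,4}=1597/288$) are strictly increasing, in line with the exponential growth asserted by Theorem \ref{thm:ratio}.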
We complement \eqref{eq:interpol:1} by considering the limiting case $p_0=p_1.$ As an application of our volume estimates, accompanied by further arguments, we will investigate in this section the decay of the entropy numbers $e_k(id:\ell^n_{1,\infty}\to \ell^n_{1}).$ Before we come to our main result, we state a result from coding theory \cite{coding:1,coding:2}, which turned out to be useful also in connection with entropy numbers \cite{EN,KV} and even in optimality of sparse recovery in compressed sensing \cite{BCKV,FPRU,FR}. \begin{lem}\label{Lemma:coding} Let $k\le n$ be positive integers. Then there are $M$ subsets $T_1,\dots,T_M$ of $\{1,\dots,n\}$, such that \begin{enumerate} \item[(i)] $\displaystyle M\ge \Bigl(\frac{n}{4k}\Bigr)^{k/2}$, \item[(ii)] $|T_i|=k$ for all $i=1,\dots,M$, \item[(iii)] $|T_i\cap T_j|<k/2$ for all $i\not=j.$ \end{enumerate} \end{lem} To keep the argument simple, we restrict ourselves to $p=1$. \begin{thm}\label{thm:entropy} Let $k$ and $n$ be positive integers. Then $$ e_k(id:\ell_{1,\infty}^n\to \ell_1^n)\approx\begin{cases} \log(1+n/k),\quad 1\le k\le n,\\ 2^{-\frac{k-1}{n}},\quad k\ge n, \end{cases} $$ where the constants of equivalence do not depend on $k$ and $n$. \end{thm} \begin{proof} \emph{Step 1. (lower bound for $k\ge n$):} If $B_{1,\infty}^n$ is covered by $2^{k-1}$ balls in $\ell_1^n$ with radius $\varepsilon>0$, then necessarily $$ \operatornamewithlimits{vol}(B_{1,\infty}^n)^{1/n}\le 2^{\frac{k-1}{n}}\varepsilon\operatornamewithlimits{vol}(B_1^n)^{1/n}, $$ which (in combination with Theorem \ref{thm:asym:1}) gives the lower bound for $k\ge n$.\\ \emph{Step 2. (upper bound for $k\ge n$):} We use again volume arguments. Let $\varepsilon>0$ be a parameter to be chosen later on.
Let $\{x_1,\dots,x_N\}\subset B_{1,\infty}^n$ be a maximal $\varepsilon$-distant set in the metric of $\ell_1^n.$ This means that $$ B_{1,\infty}^n\subset \bigcup_{j=1}^N (x_j+\varepsilon B_1^n) $$ and $\|x_i-x_j\|_1>\varepsilon$ for $i\not =j.$ Hence, whenever $N\le 2^{k-1}$ for some positive integer $k$, we have $e_k(id:\ell_{1,\infty}^n\to \ell_1^n)\le \varepsilon.$ To estimate $N$ from above, let us note that $(x_j+\varepsilon B_{1}^n)\subset 2(1+\varepsilon)B_{1,\infty}^n$, which follows by the quasi-triangle inequality for $\ell_{1,\infty}^n$. On the other hand, the triangle inequality of $\ell_1^n$ implies that $(x_j+\frac{\varepsilon}{2}B_1^n)$ are mutually disjoint. Hence, $$ N\Bigl(\frac{\varepsilon}{2}\Bigr)^n\operatornamewithlimits{vol}(B_1^n)\le 2^n(1+\varepsilon)^n\operatornamewithlimits{vol}(B_{1,\infty}^n), $$ i.e. \begin{equation}\label{eq:entropy:1} N\le 4^n\Bigl(1+\frac{1}{\varepsilon}\Bigr)^n\frac{\operatornamewithlimits{vol}(B_{1,\infty}^n)}{\operatornamewithlimits{vol}(B_{1}^n)}\le \frac{8^n}{\varepsilon^n}\frac{\operatornamewithlimits{vol}(B_{1,\infty}^n)}{\operatornamewithlimits{vol}(B_{1}^n)} \end{equation} if $0<\varepsilon<1.$ We now define the parameter $\varepsilon$ by setting the right-hand side of \eqref{eq:entropy:1} equal to $2^{k-1}$. By Theorem \ref{thm:asym:1}, there exists an integer $\gamma\ge 1$, such that $\varepsilon<1$ for $k\ge \gamma n$. In this way, we get $N\le 2^{k-1}$ and $\varepsilon=8 [\operatornamewithlimits{vol}(B_{1,\infty}^n)/\operatornamewithlimits{vol}(B_{1}^n)]^{1/n}\cdot 2^{-\frac{k-1}{n}}\le c\,2^{-\frac{k-1}{n}}$. This gives the result for $k\ge \gamma n.$ \emph{Step 3. (upper bound for $k\le n$):} Let $1\le l\le n/2$ be a positive integer, which we will choose later on. To every $x\in B_{1,\infty}^n$, we associate $S\subset\{1,\dots,n\}$, the set of indices of its $l$ largest entries (in absolute value). Furthermore, $x_S\in\mathbb{R}^n$ denotes the restriction of $x$ to $S$.
We know that $$ \|x-x_S\|_1\le \sum_{k=l+1}^n \frac{1}{k}\le \int_{l}^{n}\frac{dx}{x}=\log(n)-\log(l)=\log(n/l). $$ By Step 2, there is an absolute constant $c>0$ (independent of $l$), such that $$ e_{\gamma l}(id:\ell_{1,\infty}^l\to \ell_1^l)< c, $$ where $\gamma\ge 1$ is the integer constant from Step 2. Hence, there is a point set ${\mathcal N}\subset \mathbb{R}^l$, with $|{\mathcal N}|=2^{\gamma l}$, which is a $c$-net of $B_{1,\infty}^l$ in the $\ell_1^l$-norm. For any set $S$ as above, we embed ${\mathcal N}$ into $\mathbb{R}^n$ by extending the points from ${\mathcal N}$ by zero outside of $S$ and obtain a point set ${\mathcal N}_S$, which is a $c$-net of $\{x\in B_{1,\infty}^n:{\rm supp\ }(x)\subset S\}$. Taking the union of all these nets over all sets $S\subset \{1,\dots,n\}$ with $|S|=l$, we get $2^{\gamma l}{n\choose l}$ points, which can approximate any $x\in B_{1,\infty}^n$ within $c+\log(n/l)$ in the $\ell_1^n$-norm. We use the elementary estimate ${n\choose l}\le (en/l)^l$ and assume (without loss of generality) that $\gamma\ge 2$. Then we may conclude that, whenever $2^{k-1}\ge 2^{\gamma l}{n\choose l}$, we have $e_k(id:\ell_{1,\infty}^n\to\ell_{1}^n)\le c+\log(n/l)$, i.e. $$ k-1\ge \gamma l(1+\log(en/l))\quad\implies\quad e_k(id:\ell_{1,\infty}^n\to\ell_{1}^n)\lesssim (1+\log(n/l)). $$ By a standard technical argument, $l$ can be chosen up to the order of $k/\log(n/k)$, which gives the result for $n$ large enough and $k$ between $(\gamma+1)\log(en)$ and $n$. Combining the monotonicity of entropy numbers with the upper bound from Step 2 and the elementary bound $e_k(id:\ell_{1,\infty}^n\to\ell_{1}^n)\le \|id:\ell_{1,\infty}^n\to\ell_{1}^n\|\le 1+\log(n)$ concludes the proof of the upper bounds. \emph{Step 4. (lower bound for $k\le n$):} Let $n$ be a sufficiently large positive integer and let $\nu\ge 1$ be the largest integer with $12\cdot 4^\nu\le n$. Let $1\le\mu\le \nu$ be a positive integer.
We apply Lemma \ref{Lemma:coding} with $k$ replaced by $4^l$ for every integer $l$ with $\mu\le l \le \nu$. In this way, we obtain a system of subsets $T^l_1,\dots,T^l_{M_l}$ of $\{1,\dots,n\}$, such that $|T^l_i|=4^l$ for every $1\le i \le M_l$, $|T^l_i\cap T^l_j|<4^l/2$ for $i\not=j$ and $$ \displaystyle M_l\ge \Bigl(\frac{n}{4^{l+1}}\Bigr)^{4^l/2}\ge M:=\Bigl(\frac{n}{4^{\mu+1}}\Bigr)^{4^\mu/2}. $$ For $j\in\{1,\dots,M\}$, we put \begin{align*} \widetilde T^\mu_j&=T^\mu_j,\\ \widetilde T^{\mu+1}_j&=T^{\mu+1}_j\setminus T^{\mu}_j,\\ &\vdots\\ \widetilde T^{\nu}_j&=T^{\nu}_j\setminus (T^{\nu-1}_j\cup\dots\cup T^{\mu}_j). \end{align*} Observe that, by this construction, the sets $\{\widetilde T^{l}_j:\mu\le l\le \nu\}$ are mutually disjoint and $|\widetilde T^l_j|\le |T^l_j|=4^l$. Furthermore, $|\widetilde T^\mu_j|=4^\mu$ and \begin{align} \notag |\widetilde T^l_j|&\ge |T^{l}_j|- [|T^{l-1}_j|+\dots+|T^{\mu}_j|]=4^l-[4^{l-1}+\dots+4^\mu]\\ \label{eq:entro:11}&\ge 4^l\Bigl(1-\sum_{s=1}^\infty\frac{1}{4^s}\Bigr)=\frac{2}{3}\cdot 4^l \end{align} for $\mu<l\le\nu.$ We associate to the sets $\{\widetilde T_j^l:\mu\le l\le \nu, 1\le j\le M\}$ a system of vectors $x^1,\dots,x^M\in{\mathbb R}^n$. First, we observe that if $u\in\{1,\dots,n\}$ belongs to $\widetilde T^l_j$ for some $l\in\{\mu,\mu+1,\dots,\nu\}$, then this $l$ is unique and we put $(x^j)_u=\frac{1}{4^l}.$ Otherwise, we set $(x^j)_u=0.$ We may also express this construction by $$ x^j=\sum_{l=\mu}^{\nu}\frac{1}{4^l}\chi_{\widetilde T_j^l}, $$ where $\chi_A$ is the indicator function of a set $A$. Now we observe that \begin{align*} \|x^j\|_{1,\infty}&\le \max\Bigl\{4^\mu\cdot\frac{1}{4^\mu}, \frac{4^{\mu}+4^{\mu+1}}{4^{\mu+1}}, \dots,\frac{4^{\mu}+4^{\mu+1}+\dots+4^\nu}{4^{\nu}}\Bigr\}\\ &\le 1+\frac{1}{4}+\frac{1}{4^2}+\dots=\frac{4}{3}. \end{align*} Furthermore, let $i\not=j$ and let $u\in \widetilde T^l_i$ with $u\not\in\widetilde T^l_j$.
Then \begin{equation}\label{eq:entro:12} |(x^i)_u-(x^j)_u|\ge \frac{1}{4^l}-\frac{1}{4^{l+1}}=\frac{3}{4}\cdot\frac{1}{4^l}. \end{equation} To estimate the $\ell_1$-distances among the points $\{x^1,\dots,x^M\}$, we combine \eqref{eq:entro:12} and \eqref{eq:entro:11} and obtain for $i\not= j$ \begin{align*} \|x^i-x^j\|_1&\ge\sum_{l=\mu}^\nu \sum_{u\in\widetilde T_i^l\setminus \widetilde T^l_j}|(x^i)_u-(x^j)_u| \ge \sum_{l=\mu}^\nu |\widetilde T^l_i\setminus \widetilde T^l_j|\cdot \frac{3}{4}\cdot\frac{1}{4^l}\\ &=\frac{3}{4}\Bigl\{\sum_{l=\mu}^\nu |\widetilde T^l_i|\cdot \frac{1}{4^l}-\sum_{l=\mu}^\nu |\widetilde T^l_i \cap \widetilde T^l_j|\cdot \frac{1}{4^l}\Bigr\}\\ &\ge \frac{3}{4}\Bigl\{1+\sum_{l=\mu+1}^\nu \frac{2}{3}\cdot 4^l\cdot\frac{1}{4^l} -\sum_{l=\mu}^\nu |T^l_i \cap T^l_j|\cdot \frac{1}{4^l}\Bigr\}\\ &\ge \frac{3}{4}\Bigl\{1+\frac{2}{3}(\nu-\mu) -\sum_{l=\mu}^\nu \frac{4^l}{2}\cdot \frac{1}{4^l}\Bigr\}\\ &= \frac{3}{4}\Bigl\{1+\frac{2}{3}(\nu-\mu)-\frac{1}{2}(\nu-\mu+1)\Bigr\}\ge \frac{1}{8}(\nu-\mu+1). \end{align*} We conclude that the points $\{x^j:j=1,\dots,M\}$ satisfy $$ \|x^j\|_{1,\infty}\le \frac{4}{3}\quad \text{and}\quad \|x^i-x^j\|_1\ge \frac{1}{8}(\nu-\mu+1)\ \text{for}\ i\not=j. $$ It follows that if a positive integer $k$ satisfies \begin{equation}\label{eq:entropy:2} M=\Bigl(\frac{n}{4^{\mu+1}}\Bigr)^{4^\mu/2}\ge 2^{k-1}, \end{equation} then $$ e_k(id:\ell_{1,\infty}^n\to\ell_1^n)\ge c(\nu-\mu+1), $$ where the absolute constant $c$ can be taken to be $c=\frac{3}{64}.$ Let now $n\ge 200$ and $1\le k\le n/200$ be positive integers. Then we choose $\nu\ge 1$ to be the largest integer with $12\cdot 4^\nu\le n$ and let $\mu\ge 1$ be the smallest integer with $k\le 4^\mu/2.$ Due to $\frac{n}{4^{\mu+1}}\ge 2$, this choice ensures \eqref{eq:entropy:2} and $$ \nu-\mu+1\ge \log_4\Bigl(\frac{n}{48}\Bigr)-\log_4(2k)\gtrsim \log(1+n/k).
$$ The remaining pairs of $k$ and $n$ are covered by the monotonicity of entropy numbers, at the cost of the constants of equivalence. \end{proof} {\bf Acknowledgement:} We would like to thank Franck Barthe for proposing the problem to us, and Leonardo Colzani and Henning Kempka for valuable discussions. \end{document}
\begin{document} \title[Coarse Types Of Tropical Matroid Polytopes]{Coarse Types Of Tropical Matroid Polytopes} \author[Kulas]{Katja Kulas} \address{Fachbereich Mathematik, TU Darmstadt, 64293 Darmstadt, Germany} \email{[email protected]} \begin{abstract} Describing the combinatorial structure of the tropical complex $\mathcal{C}$ of a tropical matroid polytope, we obtain a formula for the coarse types of the maximal cells of $\mathcal{C}$. Due to the connection between tropical complexes and resolutions of monomial ideals, this yields the generators for the corresponding coarse type ideal introduced in \cite{DJS09}. Furthermore, a complete description of the minimal tropical halfspaces of the uniform tropical matroid polytopes, i.e. the tropical hypersimplices, is given. \end{abstract} \maketitle \section{Introduction} Tropical matroid polytopes have been introduced in~\cite{DSS2005} as the tropical convex hull of the cocircuits, or dually, of the bases of a matroid. The arrangement of finitely many points $V$ in the tropical torus ${\mathbb{T}}^d$ has a natural decomposition $\mathcal{C}_V$ of ${\mathbb{T}}^d$ into (ordinary) polytopes, the tropical complex, equipped with a (fine) type $T$, which encodes the relative position to the generating points. The \emph{coarse types} only count the cardinalities of $T$. In~\cite{DS04}, Develin and Sturmfels showed that the bounded cells of $\mathcal{C}_V$ yield the tropical convex hull of $V$, which is dual to the regular subdivision $\Sigma$ of a product of two simplices (or equivalently---due to the Cayley Trick---to the regular mixed subdivisions of a dilated simplex). The authors of ~\cite{BlockYu} and~\cite{DJS09} use the connection of the cellular structure of $\mathcal{C}_V$ or rather of $\Sigma$ to minimal cellular resolutions of certain monomial ideals to provide an algorithm for determining the facial structure of the bounded subcomplex of $\mathcal{C}_V$. 
A main result of~\cite{DJS09} says that the labeled complex $\mathcal{C}_V$ supports a minimal cellular resolution of the ideal $I$ generated by monomials corresponding to the set of all (coarse) types. The main theme of this paper is the study of the tropical complex of tropical convex polytopes associated with matroids arising from graphs---the \emph{tropical matroid polytopes}. Recall that a {\it matroid} $M$ is a finite collection $\mathcal{F}$ of subsets of $[n]=\{1,2,\ldots,n\}$, called {\it independent sets}, such that three properties are satisfied: (i) $\emptyset\in\mathcal{F}$, (ii) if $X\in\mathcal{F}$ and $Y\subseteq X$ then $Y\in \mathcal{F}$, (iii) if $U,V\in\mathcal{F}$ and $\lvert U \rvert=\lvert V \rvert+1$ then there exists $x\in U\setminus V$ such that $V\cup x\in\mathcal{F}$. The last one is also called the \emph{exchange property}. The maximal independent sets are the \emph{bases} of $M$. A matroid can also be defined by specifying its \emph{non-bases}, i.e. the subsets of the ground set whose cardinality equals that of a basis but which are not bases themselves. For more details on matroids see the survey of Oxley~\cite{Oxley2003} and the books of White~(\cite{White1986},~\cite{White1987},~\cite{White1992}). An important class of matroids are the graphic or cycle matroids, which are regular, that is, representable over every field. A \emph{graphic matroid} is associated with a simple undirected graph $G$ by letting $E$ be the set of edges of $G$ and taking as bases the edge sets of the spanning forests. Matroid polytopes were first studied, in connection with optimization and linear programming, by Jack Edmonds~\cite{Edmonds03}. A nice polytopal characterization of matroid polytopes was given by Gelfand et~al.~\cite{GGMS1987}, stating that each edge of a matroid polytope is a parallel translate of $e_i-e_j$ for some $i$ and $j$.
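The three independent-set axioms can be checked mechanically for a small set system; an illustrative sketch (not part of the paper; the function name is ours):

```python
from itertools import combinations

def is_matroid(ground, F):
    """Check the independent-set axioms (i)-(iii) for a family F of sets."""
    F = {frozenset(X) for X in F}
    if frozenset() not in F:                       # (i) the empty set is independent
        return False
    for X in F:                                    # (ii) downward closure
        for r in range(len(X)):
            if any(frozenset(Y) not in F for Y in combinations(X, r)):
                return False
    for U in F:                                    # (iii) exchange property
        for V in F:
            if len(U) == len(V) + 1:
                if not any(V | {x} in F for x in U - V):
                    return False
    return True

# Independent sets of the graphic matroid of a triangle:
# every edge set except the full cycle.
triangle = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]
assert is_matroid({1, 2, 3}, triangle)
# Dropping downward closure breaks axiom (ii).
assert not is_matroid({1, 2, 3}, [(), (1, 2)])
```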
For a tropical matroid polytope, the coarse types are described by the numbers $b_{I,J}$ of bases $B$ of the associated matroid that contain all elements of $I$ but none of $J$, for subsets $I,J$ of the ground set. \filbreak \begin{theorem} Let $\mathcal{C}$ be the tropical complex of a tropical matroid polytope with $d+1$ elements and rank $k$. The set of all coarse types of the maximal cells arising in $\mathcal{C}$ is given by the tuples $(t_1,\ldots,t_{d+1})$ with \begin{equation*} t_j \ = \ \begin{cases} b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}} & \text{if $j=i_1$} \, ,\\ b_{\{i_l\},\{i_1,\ldots,i_{l-1}\}} & \text{if $j=i_l\in \{i_2,\ldots,i_{d'+1}\}$} \, ,\\ 0 & \text{otherwise} \, , \end{cases} \end{equation*} where $d'\in [d-k+1]$ and $\{i_1,i_2,\ldots,i_{d'+1}\}$ is a sequence of elements such that $[d+1]\setminus\{i_1,i_2,\ldots,i_{d'}\}$ contains a basis of the associated matroid. \end{theorem} Subsequently, we relate our combinatorial result to commutative algebra. For the coarse type $\mathbf{t}(p)$ of a point $p\in{\mathbb{T}}^d$ and $x^{\mathbf{t}(p)}={x_1}^{{\mathbf{t}(p)}_1}{x_2}^{{\mathbf{t}(p)}_2}\cdots{x_{d+1}}^{{\mathbf{t}(p)}_{d+1}}$ the monomial ideal \[I=\langle x^{\mathbf{t}(p)}\colon p\in{\mathbb{T}}^d\rangle\subset{\mathbb{K}}[x_1,\ldots,x_{d+1}]\] is called the \emph{coarse type ideal}. In~\cite{DJS09}, Corollary 3.5, it was shown that $I$ is generated by the monomials assigned to the coarse types of the inclusion-maximal cells of the tropical complex. As a direct consequence of Theorem 3.6 in~\cite{DJS09}, we obtain the generators of $I$.
\begin{corollary}The coarse type ideal $I$ for the tropical complex of a tropical matroid polytope with $d+1$ elements and rank $k$ is equal to \[\langle x_{i_1}^{t_{i_1}}x_{i_2}^{t_{i_2}}\cdots x_{i_{d'+1}}^{t_{i_{d'+1}}}\colon [d+1]\setminus\{i_1,\ldots,i_{d'}\} \text{ contains a basis }\rangle\] where $(t_{i_1},t_{i_2},\ldots,t_{i_{d'+1}})=\big(b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}},b_{\{i_2\},\{i_1\}},\ldots,b_{\{i_{d'+1}\},\{i_1,\ldots,i_{d'}\}}\big)$. \end{corollary} Furthermore, we apply these results to the special case of uniform matroids, introduced and studied in~\cite{Joswig05}. We close this work by stating the minimal tropical halfspaces containing a uniform tropical matroid polytope by using the characterization of Proposition 1 in~\cite{GaubertKatz09}. \section{Basics of tropical convexity} We start by collecting basic facts about tropical convexity and fixing the notation. Defining \emph{tropical addition} by $x\oplus y:=\min(x,y)$ and \emph{tropical multiplication} by $x\odot y:=x+y$ yields the \emph{tropical semi-ring} $({\mathbb{R}},\oplus,\odot)$. Component-wise tropical addition and \emph{tropical scalar multiplication} \begin{equation*} \lambda \odot (\xi_0,\dots,\xi_d) := (\lambda \odot \xi_0,\dots,\lambda \odot \xi_d) = (\lambda+\xi_0,\dots,\lambda+\xi_d) \end{equation*} equip ${\mathbb{R}}^{d+1}$ with a semi-module structure. For $x,y\in{\mathbb{R}}^{d+1}$ the set \begin{equation*} [x,y]_\mathrm{trop} := \SetOf{(\lambda \odot x) \oplus (\mu \odot y)}{\lambda,\mu \in {\mathbb{R}}} \end{equation*} defines the \emph{tropical line segment} between $x$ and $y$. A subset of ${\mathbb{R}}^{d+1}$ is \emph{tropically convex} if it contains the tropical line segment between any two of its points. A direct computation shows that if $S\subset{\mathbb{R}}^{d+1}$ is tropically convex then $S$ is closed under tropical scalar multiplication.
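A minimal computational sketch of these min-plus operations (not part of the paper; helper names ours). The membership test for a tropical segment uses the extremal choices $\lambda=\max_i(z_i-x_i)$ and $\mu=\max_i(z_i-y_i)$, which succeed whenever $z$ lies on the segment at all:

```python
def trop_combo(lam, x, mu, y):
    """(lam ⊙ x) ⊕ (mu ⊙ y) componentwise, with ⊕ = min and ⊙ = +."""
    return tuple(min(lam + xi, mu + yi) for xi, yi in zip(x, y))

def in_segment(z, x, y):
    """z lies on [x,y]_trop iff it equals the combination for the extremal
    scalars lam = max_i(z_i - x_i), mu = max_i(z_i - y_i)."""
    lam = max(zi - xi for zi, xi in zip(z, x))
    mu = max(zi - yi for zi, yi in zip(z, y))
    return trop_combo(lam, x, mu, y) == tuple(z)

x, y = (0, 0, 0), (0, 2, 4)
# Every tropical combination of x and y lies on the segment ...
for lam, mu in [(0, 0), (1, 0), (0, 3), (2, 1)]:
    assert in_segment(trop_combo(lam, x, mu, y), x, y)
# ... and tropical segments are tropically convex: combinations of
# segment points stay on the segment.
p = trop_combo(0, x, 1, y)
q = trop_combo(3, x, 0, y)
assert in_segment(trop_combo(1, p, 2, q), x, y)
```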
This leads to the definition of the \emph{tropical torus} as the quotient semi-module \begin{equation*} {\mathbb{T}}^d := {\mathbb{R}}^{d+1} / ({\mathbb{R}}\odot(1,\dots,1)) . \end{equation*} Note that ${\mathbb{T}}^d$ was called ``tropical projective space'' in \cite{DS04}, \cite{Joswig05}, \cite{DevelinYu06}, and \cite{JoswigSturmfelsYu07}. Tropical convexity gives rise to the hull operator $\operatorname{tconv}$. A \emph{tropical polytope} is the tropical convex hull of finitely many points in ${\mathbb{T}}^d$. Like an ordinary polytope, each tropical polytope $P$ has a unique set of generators which is minimal with respect to inclusion; these are the \emph{tropical vertices} of $P$. There are several natural ways to choose a representative coordinate vector for a point in ${\mathbb{T}}^d$. For instance, in the coset $x+({\mathbb{R}}\odot(1,\dots,1))$ there is a unique vector $c(x)\in{\mathbb{R}}^{d+1}$ with non-negative coordinates such that at least one of them is zero; we refer to $c(x)$ as the \emph{canonical coordinates} of $x\in{\mathbb{T}}^d$. Moreover, in the same coset there is also a unique vector $(\xi_1,\dots,\xi_{d+1})$ such that $\xi_1=0$. Hence, the map \begin{equation*}\label{eq:c_0} c_0 : {\mathbb{T}}^d \to {\mathbb{R}}^d , (\xi_1,\dots,\xi_{d+1}) \mapsto (\xi_2-\xi_1,\dots,\xi_{d+1}-\xi_1) \end{equation*} is a bijection. Often we will identify ${\mathbb{T}}^d$ with ${\mathbb{R}}^d$ via this map. The \emph{tropical hyperplane} ${\mathcal{H}}_a$ defined by the \emph{tropical linear form} $a=(\alpha_1,\dots,\alpha_{d+1})\in{\mathbb{R}}^{d+1}$ is the set of points $(\xi_1,\dots,\xi_{d+1})\in{\mathbb{T}}^d$ such that the minimum \begin{equation*} (\alpha_1 \odot \xi_1) \oplus \dots \oplus (\alpha_{d+1} \odot \xi_{d+1}) \end{equation*} is attained at least twice. A tropical hyperplane for $d=3$ is shown in Figure~\ref{fig:3hypersimplicesb}.
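Both coordinate representatives and the ``minimum attained twice'' condition are easy to compute; a sketch (not part of the paper; names ours):

```python
def canonical(x):
    """Canonical coordinates: nonnegative entries, at least one of them zero."""
    m = min(x)
    return tuple(xi - m for xi in x)

def c0(x):
    """Representative with first coordinate zero, first coordinate dropped
    (the bijection T^d -> R^d)."""
    return tuple(xi - x[0] for xi in x[1:])

def on_hyperplane(a, x):
    """x lies on H_a iff the minimum of a_i + x_i is attained at least twice."""
    vals = [ai + xi for ai, xi in zip(a, x)]
    return vals.count(min(vals)) >= 2

x = (3, 1, 4)
# Both maps are constant on the coset x + R*(1,...,1).
assert canonical(x) == canonical(tuple(xi + 7 for xi in x)) == (2, 0, 3)
assert c0(x) == c0(tuple(xi - 5 for xi in x)) == (-2, 1)
# The apex -a lies on H_a: the minimum 0 is attained in every coordinate.
a = (1, 0, 2)
assert on_hyperplane(a, tuple(-ai for ai in a))
assert not on_hyperplane(a, (0, 0, 0))  # minimum of (1,0,2) attained only once
```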
The complement of a tropical hyperplane in ${\mathbb{T}}^d$ has exactly $d+1$ connected components, each of which is an \emph{open sector}. A \emph{closed sector} is the topological closure of an open sector. The set \begin{equation*} S_k := \SetOfbig{(\xi_1,\dots,\xi_{d+1})\in{\mathbb{T}}^d}{\xi_k=0 \text{ and } \xi_i>0 \text{ for } i\ne k} , \end{equation*} for $1\le k \le d+1$, is the \emph{$k$-th open sector} of the tropical hyperplane ${\mathcal{Z}}$ in ${\mathbb{T}}^d$ defined by the zero tropical linear form. Its closure is \begin{equation*} \bar S_k := \SetOfbig{(\xi_1,\dots,\xi_{d+1})\in{\mathbb{T}}^d}{\xi_k=0 \text{ and } \xi_i\ge 0 \text{ for } i\ne k} . \end{equation*} We also use the notation $\bar S_I:=\bigcup\SetOf{\bar S_i}{i\in I}$ for any set $I\subset[d+1]:=\{1,\dots,d+1\}$. If $a=(\alpha_1,\dots,\alpha_{d+1})$ is an arbitrary tropical linear form then the translates $-a+S_k$ for $1\le k\le d+1$ are the open sectors of the tropical hyperplane ${\mathcal{H}}_a$. The point $-a$ is the unique point contained in all closed sectors of ${\mathcal{H}}_a$, and it is called the \emph{apex} of ${\mathcal{H}}_a$. For each $I\subset[d+1]$ with $1\le \# I \le d$ the set $-a+\bar S_I$ is the \emph{closed tropical halfspace} of ${\mathcal{H}}_a$ of type $I$. A tropical halfspace $H(-a,I)$ can also be written in the form \begin{eqnarray*} H(-a,I)&=&\{x\in{\mathbb{T}}^d\mid\text{ the minimum of }\displaystyle\bigoplus_{i=1}^{d+1} \alpha_i\odot \xi_i\text{ is attained}\\ &&\,\,\text{ at a coordinate }i\in I\}\\&=&\{x\in{\mathbb{T}}^d\mid \displaystyle\bigoplus_{i\in I} (\alpha_i\odot \xi_i)\leq\bigoplus_{j\in J} (\alpha_j\odot \xi_j)\}\end{eqnarray*}where $I$ and $J$ are disjoint subsets of $[d+1]$ and $I\cup J= [d+1]$. The tropical polytopes in ${\mathbb{T}}^d$ are exactly the bounded intersections of finitely many closed tropical halfspaces; see \cite{GaubertKatz09} and \cite{Joswig05}. We concentrate on the combinatorial structure of tropical polytopes. 
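The two descriptions of the closed tropical halfspace $H(-a,I)$ above are equivalent (the overall minimum is attained at a coordinate of $I$ exactly when the minimum over $I$ is not larger than the minimum over $J$), which can be checked numerically; a sketch (not part of the paper; names ours):

```python
import random

def min_attained_in_I(a, x, I):
    """First form: the minimum of a_i + x_i is attained at some coordinate in I."""
    vals = [ai + xi for ai, xi in zip(a, x)]
    m = min(vals)
    return any(vals[i] == m for i in I)

def I_side_not_larger(a, x, I):
    """Second form: min over I of a_i + x_i  <=  min over the complement J."""
    vals = [ai + xi for ai, xi in zip(a, x)]
    J = [j for j in range(len(a)) if j not in I]
    return min(vals[i] for i in I) <= min(vals[j] for j in J)

random.seed(0)
d1 = 4                        # points of T^3 have d + 1 = 4 coordinates
a = tuple(random.uniform(-1, 1) for _ in range(d1))
I = [0, 2]
for _ in range(200):
    x = tuple(random.uniform(-1, 1) for _ in range(d1))
    assert min_attained_in_I(a, x, I) == I_side_not_larger(a, x, I)
```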
Let $V:=(v_1,\dots,v_n)$ be a sequence of points in ${\mathbb{T}}^d$. The \emph{(fine) type} of $x\in{\mathbb{T}}^d$ with respect to $V$ is the ordered $(d+1)$-tuple $\operatorname{type}_V(x):=(T_1,\dots,T_{d+1})$ where \begin{equation*} T_k := \SetOf{i\in\{1,\dots,n\}}{v_i\in x+\bar S_k} . \end{equation*} For a given type ${\mathcal{T}}$ with respect to $V$ the set \begin{equation*} X^{\circ}_V({\mathcal{T}}) := \SetOfbig{x\in{\mathbb{T}}^d}{\operatorname{type}_V(x)={\mathcal{T}}} \end{equation*} is a relatively open subset of ${\mathbb{T}}^d$ and is called the \emph{cell} of type ${\mathcal{T}}$ with respect to $V$. The set $X^{\circ}_V({\mathcal{T}})$ as well as its topological closure are both tropically and ordinarily convex; in \cite{JK08}, these were called \emph{polytropes}. With respect to inclusion the types with respect to $V$ form a partially ordered set. The intersection of two cells $X_V({\mathcal{S}})$ and $X_V({\mathcal{T}})$ with types ${\mathcal{S}}$ and ${\mathcal{T}}$ is equal to the polyhedron $X_V({\mathcal{S}}\cup{\mathcal{T}})$. The collection of all (closed) cells induces a polyhedral subdivision $\mathcal{C}_V$ of ${\mathbb{T}}^d$. A $\min$-tropical polytope $P=\operatorname{tconv}(V)$ is the union of cells in the bounded subcomplex $\mathcal{B}_V$ of $\mathcal{C}_V$ induced by the arrangement $\mathcal{A}_V$ of $\max$-tropical hyperplanes with apices $v\in V$. A cell of $\mathcal{C}_V$ is unbounded if and only if one of its type components is the empty set. The type of $x$ equals the union of the types of the (maximal) cells that contain $x$ in their closure. The dimension of a cell $X_T$ can be calculated as the number of connected components of the undirected graph $G=\big(\{1,2,\ldots,d+1\},\,\{(j,k)\mid T_j\cap T_k\neq\emptyset\}\big)$ minus one. The zero-dimensional cells are called pseudovertices of $P$.
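The fine type and the dimension formula admit a direct implementation; a sketch (not part of the paper; names ours), illustrated on the tropical standard simplex. A generator $v_i$ lies in $x+\bar S_k$ exactly when the $k$-th coordinate of $v_i-x$ is minimal:

```python
def fine_type(V, x):
    """T_k = indices i (starting at 1) with v_i in the k-th closed sector of x."""
    d1 = len(x)
    T = [set() for _ in range(d1)]
    for i, v in enumerate(V, start=1):
        diffs = [vj - xj for vj, xj in zip(v, x)]
        m = min(diffs)
        for k in range(d1):
            if diffs[k] == m:
                T[k].add(i)
    return T

def cell_dimension(T):
    """Number of connected components of the graph on the coordinates
    (edge jk iff T_j and T_k intersect), minus one; via union-find."""
    d1 = len(T)
    parent = list(range(d1))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for j in range(d1):
        for k in range(j + 1, d1):
            if T[j] & T[k]:
                parent[find(j)] = find(k)
    return len({find(u) for u in range(d1)}) - 1

# Tropical standard simplex: generators -e_1, -e_2, -e_3 in T^2.
V = [(-1, 0, 0), (0, -1, 0), (0, 0, -1)]
T = fine_type(V, (0, 0, 0))
assert T == [{1}, {2}, {3}]       # the origin is an interior point ...
assert cell_dimension(T) == 2     # ... of a full-dimensional cell
```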
Replacing the (fine) type entries $T_k\subseteq [n]$ for $k\in [d+1]$ of a point $p\in{\mathbb{T}}^d$ by their cardinalities $t_k:=\left|T_k\right|$, we get the \emph{coarse type} $t_V(p)=(t_1,\ldots,t_{d+1})\in{\mathbb{N}}^{d+1}$ of $p$. A coarse type entry $t_k$ displays how many generating points lie in the $k$-th closed sector $p+\overline{S_k}$. In~\cite{DJS09}, the authors associate the tropical complex of a tropical polytope with a monomial ideal, the coarse type ideal \[I:=\langle {x_1}^{t_1}{x_2}^{t_2}\cdots{x_{d+1}}^{t_{d+1}}\colon (t_1,\ldots,t_{d+1})=t_V(p),\ p\in{\mathbb{T}}^d\rangle\subset{\mathbb{K}}[x_1,\ldots,x_{d+1}].\] By Corollary 3.5 of~\cite{DJS09}, $I$ is generated by the monomials assigned to the coarse types of the inclusion-maximal cells of the tropical complex. The tropical complex $\mathcal{C}_V$ gives rise to minimal cellular resolutions of $I$. \begin{theorem}[ \cite{DJS09}, Theorem 3.6 ]\label{thm:DJS2009} The labeled complex $\mathcal{C}_V$ supports a minimal cellular resolution of the ideal $I$ generated by monomials corresponding to the set of all (coarse) types. \end{theorem} Cellular resolutions of monomial ideals, introduced in~\cite{BPS1998} and~\cite{BS1998}, are a natural technique for constructing resolutions of monomial ideals from labeled cell complexes and provide an important interface between topological constructions, combinatorics and algebraic ideas. The authors of~\cite{BlockYu} and \cite{DJS09} use this to give an algorithm for determining the facial structure of a tropical complex. More precisely, they associate a squarefree monomial ideal $I$ with a tropical polytope and calculate a minimal cellular resolution of $I$, where the $i$-th syzygies of $I$ are encoded by the $i$-dimensional faces of a polyhedral complex. A tropical halfspace is called \emph{minimal} for a tropical polytope $P$ if it is minimal with respect to inclusion among all tropical halfspaces containing $P$.
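Passing from a fine type to its coarse type, and to the monomial it contributes to the coarse type ideal, is immediate; a tiny sketch (not part of the paper; names ours, with the fine type represented as a tuple of index sets):

```python
def coarse_type(T):
    """Replace every fine type entry by its cardinality."""
    return tuple(len(Tk) for Tk in T)

def monomial(T):
    """Monomial x_1^{t_1}...x_{d+1}^{t_{d+1}} attached to a type, as a string;
    factors with exponent zero are omitted."""
    return "*".join(f"x{k + 1}^{len(Tk)}" for k, Tk in enumerate(T) if Tk)

# A fine type with an empty entry belongs to an unbounded cell;
# its coarse type then has a zero coordinate.
T = ({1, 2, 3}, {1, 4}, {2}, set())
assert coarse_type(T) == (3, 2, 1, 0)
assert monomial(T) == "x1^3*x2^2*x3^1"
```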
Consider a tropical halfspace $H(a,I)\subset {\mathbb{T}}^d$ with $I\subset[d+1]$ and apex $a\in {\mathbb{T}}^d$, and a tropical polytope $P=\operatorname{tconv}\{v_1,\ldots,v_n\}\subseteq {\mathbb{T}}^d$. To show that $H(a,I)$ is minimal for $P$, it suffices to prove, by Proposition 1 of~\cite{GaubertKatz09}, that the following three criteria hold for the type $(T_1,T_2,\ldots,T_{d+1})=\operatorname{type}_V(a)$ of the apex $a$: \begin{itemize}\item[(i)] $\displaystyle\bigcup_{i\in I}T_i=[n]$, \item[(ii)] for each $j\in I^C$ there exists an $i\in I$ such that $T_i\cap T_j\neq\emptyset$, \item[(iii)] for each $i\in I$ there exists $j\in I^C$ such that $\displaystyle T_i\cap T_j\not\subset\bigcup_{k\in I\setminus\{i\}}T_k$.\end{itemize} Here, we denote the complement of a set $I\subseteq [d+1]$ as $I^C=[d+1]\setminus I$. \noindent Obvious minimal tropical halfspaces of a tropical polytope $P=\operatorname{tconv}(V)\subseteq {\mathbb{T}}^{d}$ are its cornered halfspaces, see~\cite{Joswig08}. The {\it $k$-th corner} of $P$ is defined as \[c_k(V):= (-v_{1,k}) \odot v_1\oplus(-v_{2,k})\odot v_2 \oplus\cdots\oplus(-v_{n,k})\odot v_n.\] The tropical halfspace $H_k:=c_k(V)+\overline{S_k}$ is called the {\it $k$-th cornered tropical halfspace} of $P$ and the intersection of all $d+1$ cornered halfspaces is the {\it cornered hull} of $P$. \section{Tropical Matroid Polytopes} The tropical matroid polytope of a matroid $\mathcal{M}$ is defined in~\cite{DSS2005} as the tropical convex hull of the negative incidence vectors of the bases of $\mathcal{M}$. In this paper, we restrict ourselves to matroids arising from graphs. The \emph{graphic matroid} of a simple undirected graph $G=(V,E)$ is $\mathcal{M}(G)=(E,\mathcal{I}=\{F\subseteq E\colon F \text{ is acyclic}\})$. While the forests of $G$ form the system of independent sets of $\mathcal{M}(G)$, its bases are the spanning forests.
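For small graphs the bases of a graphic matroid can be enumerated by brute force over edge subsets, testing acyclicity with a union-find; an illustrative sketch (not part of the paper; names ours), using $K_4$ minus one edge, which has eight spanning trees:

```python
from itertools import combinations

def is_forest(n_vertices, edges):
    """True iff the edge list is acyclic, via union-find."""
    parent = list(range(n_vertices))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # adding (u,v) would close a cycle
        parent[ru] = rv
    return True

def graphic_bases(n_vertices, edges):
    """Spanning trees of a connected graph = acyclic edge sets of size n-1."""
    k = n_vertices - 1
    return [B for B in combinations(range(len(edges)), k)
            if is_forest(n_vertices, [edges[i] for i in B])]

# K_4 minus one edge: 5 edges on 4 vertices, 8 spanning trees.
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
assert len(graphic_bases(4, E)) == 8
```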
We will assume that $G$ is connected, so the bases of $\mathcal{M}(G)$ are the spanning trees of $G$. Furthermore, we exclude bridges, i.e. edges whose deletion increases the number of connected components of $G$, since these lead to elements that are contained in every basis. Let $d+1$ be the number of elements and $n$ the number of bases of $\mathcal{M}:=\mathcal{M}(G)$, and let $\mathcal{B}:=\{B_1,\ldots,B_n\}$ denote its set of bases. It follows from the exchange property of matroids that all bases of $\mathcal{M}$ have the same number of elements, which is called the \emph{rank} of $\mathcal{M}$. Consider the $0/1$-matrix $M\in{\mathbb{R}}^{(d+1)\times n}$ with rows indexed by the elements of the ground set $E$ and columns indexed by the bases of $\mathcal{M}$, which has a $0$ in entry $(i,j)$ if the $i$-th element is contained in the $j$-th basis and a $1$ otherwise. The \emph{tropical matroid polytope} $P$ of $\mathcal{M}$ is the tropical convex hull of the columns of $M$. Let \begin{equation}\label{eq:generators}V=\left\{-e_B:=\sum_{i\in B}-e_i\big|B\in \mathcal{B}\right\}\end{equation} be the set of generators of $P$. It turns out that these are just the tropical vertices of $P$, see Lemma~\ref{lem:pseudovertices}. If the underlying matroid has rank $k$, then the canonical coordinate vectors of $V$ have exactly $k$ zeros and $d+1-k$ ones and will be denoted as $v_{B_i}$ or for short $v_i$ if the corresponding basis is $B_i\in\mathcal{B}$. Note that with $\oplus$ as $\max$ instead of $\min$ the generators of a tropical matroid polytope are the positive incidence vectors of the bases of the corresponding matroid. Throughout this paper we write $\mathcal{P}_{k,d}$ for the set of all tropical matroid polytopes arising from a graphic matroid with $d+1$ elements and rank $k$. \begin{example} The tropical hypersimplex $\Delta_k^d$ in ${\mathbb{T}}^d$ studied in \cite{Joswig05} is a tropical matroid polytope of a uniform matroid of rank $k$ with $d+1$ elements and $\binom{d+1}{k}$ bases.
It is defined as the tropical convex hull of all points $-e_I:=\displaystyle\sum_{i\in I}-e_i$ where $e_i$ is the $i$-th unit vector of ${\mathbb{R}}^{d+1}$ and $I$ is a $k$-element subset of $[d+1]$. The tropical vertices of $\Delta_k^d$ are \[\V(\Delta_k^d)=\left\{-e_I\big|I\in\binom{[d+1]}{k}\right\}\, \text{ for all }k>0.\] In~\cite{Joswig05}, it is shown that $\Delta_{k+1}^d\subsetneq\Delta_k^d$, implying that the first tropical hypersimplex contains all other tropical hypersimplices in ${\mathbb{T}}^d$. The first tropical hypersimplex $\Delta^d=\Delta_1^d$ in ${\mathbb{T}}^d$ is the $d$-dimensional tropical standard simplex which is also a polytrope. Clearly, we have for a tropical matroid polytope $P\in\mathcal{P}_{k,d}$ the chain $P\subseteq\Delta_k^d\subsetneq\cdots\subsetneq\Delta_1^d=\Delta^d$. For $d=3$ the three tropical hypersimplices are shown in Figure~\ref{fig:3hypersimplices}. \begin{figure} \caption{The three tropical hypersimplices for $d=3$.}\label{fig:3hypersimplices}\label{fig:3hypersimplicesa}\label{fig:3hypersimplicesb}\label{fig:3hypersimplicesc} \end{figure} \end{example} The origin $\mathbf{0}\in{\mathbb{T}}^{d}$ and its fine type are crucial for the calculation of the fine and the coarse types of the maximal cells in the cell complex of $P$. \begin{lemma} A tropical matroid polytope $P\in\mathcal{P}_{k,d}$ with generators $V$ contains the origin $\mathbf{0}\in{\mathbb{T}}^{d}$. Its type is $\operatorname{type}_{V}(\mathbf{0})=(T^{(0)}_1,T^{(0)}_2,\ldots,T^{(0)}_{d+1})$ with $T^{(0)}_i=\{j\mid i\in B_j\}$. \end{lemma} \begin{proof} By Proposition 3 of \cite{DS04} about the shape of a tropical line segment, the only pseudovertex of the tropical line segment between two distinct $0$-$1$-vectors $u$ and $v$ in ${\mathbb{T}}^{d}$ is the point $w$ with $w_{l}=0$ if $u_l=0$ or $v_l=0$ and $w_{l}=1$ otherwise.
Since every element of $E$ is contained in some basis of $\mathcal{M}(G)$ (apply any greedy spanning-tree algorithm for the connected components of $G$ starting from this element) and by using the previous argument, the origin must be contained in $P$. An index $j$ is contained in the $i$-th type coordinate $T^{(0)}_i$ if $v_{j,i}=\min\{v_{j,1},v_{j,2},\ldots,v_{j,{d+1}}\}$, which is satisfied exactly for the coordinates $i\in B_j$. \end{proof} The $i$-th type entry $T_i^{(0)}$ of $\mathbf{0}$ contains all bases of $\mathcal{M}$ with element $i$, and $|T_i^{(0)}|$ is the number of bases of $\mathcal{M}$ containing $i$. Now it is time to introduce our running example. \initfloatingfigs \begin{example}\label{ex:RunEx1} The graphic matroid given by the following graph $G$ has $d+1=5$ elements (edges with bold indices), rank $k=3$, $n=8$ bases $B_1=\{\mathbf{1},\mathbf{2},\mathbf{4}\},\,B_2=\{\mathbf{1},\mathbf{2},\mathbf{5}\},\,B_3=\{\mathbf{1},\mathbf{3},\mathbf{4}\},\,B_4=\{\mathbf{1},\mathbf{3},\mathbf{5}\},\,B_5=\{\mathbf{1},\mathbf{4},\mathbf{5}\},\,B_6=\{\mathbf{2},\mathbf{3},\mathbf{4}\},\,B_7=\{\mathbf{2},\mathbf{3},\mathbf{5}\},\,B_8=\{\mathbf{3},\mathbf{4},\mathbf{5}\}$ and the non-bases $\{\mathbf{1},\mathbf{2},\mathbf{3}\},\,\{\mathbf{2},\mathbf{4},\mathbf{5}\}$.
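These data can be verified mechanically; the following sketch (not part of the paper; names ours) recomputes the non-bases and the type entries of the origin directly from the list of bases:

```python
from itertools import combinations

bases = [{1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5}, {1, 4, 5},
         {2, 3, 4}, {2, 3, 5}, {3, 4, 5}]
non_bases = [set(c) for c in combinations(range(1, 6), 3)
             if set(c) not in bases]
assert non_bases == [{1, 2, 3}, {2, 4, 5}]

# Type of the origin: T_i collects the indices of the bases containing i.
T0 = {i: {j for j, B in enumerate(bases, start=1) if i in B} for i in range(1, 6)}
assert T0[1] == {1, 2, 3, 4, 5}
assert T0[2] == {1, 2, 6, 7}
assert T0[3] == {3, 4, 6, 7, 8}
assert T0[4] == {1, 3, 5, 6, 8}
assert T0[5] == {2, 4, 5, 7, 8}
```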
\vspace*{0.1cm}\\ \begin{minipage}{0.2\textwidth} \centering \psfrag{0}{$\mathbf{1}$} \psfrag{G}{$G:$} \psfrag{1}{$\mathbf{2}$} \psfrag{2}{$\mathbf{3}$} \psfrag{3}{$\mathbf{4}$} \psfrag{4}{$\mathbf{5}$} \psfrag{T0}{\!$\mathit{(12345)}$} \psfrag{T1}{$\mathit{(1267)}$} \psfrag{T2}{$\mathit{(34678)}$} \psfrag{T3}{$\mathit{(13568)}$} \psfrag{T4}{$\mathit{(24578)}$} \includegraphics[scale=0.9]{RunningExample_G} \end{minipage} \begin{minipage}{0.7\textwidth} Let $P$ be the corresponding tropical matroid polytope with its generators {\begin{eqnarray*}V&=&\{v_{B_1},\ldots,v_{B_8}\}\\ &=&\left\{\scriptsize \begin{pmatrix}0\\0\\1\\0\\1\end{pmatrix}, \begin{pmatrix}0\\0\\1\\1\\0\end{pmatrix}, \begin{pmatrix}0\\1\\0\\0\\1\end{pmatrix}, \begin{pmatrix}0\\1\\0\\1\\0\end{pmatrix}, \begin{pmatrix}0\\1\\1\\0\\0\end{pmatrix}, \begin{pmatrix}1\\0\\0\\0\\1\end{pmatrix}, \begin{pmatrix}1\\0\\0\\1\\0\end{pmatrix}, \begin{pmatrix}1\\1\\0\\0\\0\end{pmatrix} \right\}.\end{eqnarray*}} The type of the origin $\mathbf{0}$ of $P$ is $(12345,1267,34678,13568,24578)$ where the $i$-th type entry contains all bases using the edge $i$ (italic edge attributes). \end{minipage} \end{example} In the next lemma we will show that the tropical standard simplex $\Delta^d$ is the cornered hull of all tropical matroid polytopes in $\mathcal{P}_{k,d}$. \begin{lemma}\label{lem:chull of tmp} The cornered hull of a tropical matroid polytope $P\in\mathcal{P}_{k,d}$ with generators $V$ is the $d$-dimensional tropical standard simplex $\Delta^d$. The $i$-th corner of $P$ is the vector $e_i$. The type of $e_i$ with respect to $V$ is $\operatorname{type}_{V}(e_i)=(T_1,\ldots,T_{d+1})$ with \begin{equation*} T_j \ = \ \begin{cases} [n] & \text{if $j=i$} \, ,\\ \{l\mid j\in B_l \text{ and }i\notin B_l \}& \text{otherwise} \, .
\end{cases} \end{equation*} \end{lemma} \begin{proof} For $B\in\mathcal{B}$ the $i$-th (canonical) coordinate of $v_B$ is \begin{equation*} v_{B,i} \ = \ \begin{cases} 0 & \text{if $i\in B$} \, ,\\ 1& \text{otherwise} \, . \end{cases} \end{equation*} The $j$-th coordinate of the $i$-th corner $c_i(V)$ of $P$ is \begin{equation*}c_i(V)_j=\displaystyle\min_{J\in \mathcal{B}}(v_{J,j}-v_{J,i})\ = \ \begin{cases} 0 & \text{if $i=j$} \, ,\\ -1& \text{otherwise} \, . \end{cases} \end{equation*} In canonical coordinates we get $c_i(V)=e_i$, which is at the same time the $i$-th corner of the tropical standard simplex $\Delta^d$. The type of $e_i$ is $\operatorname{type}_{V}(e_i)=(T_1,T_2,\ldots,T_{d+1})$, where some index $l$ is contained in the $j$-th coordinate $T_j$ for $j\neq i$ if $v_{l,j}=\min\{v_{l,1},v_{l,2},\ldots,v_{l,i}-1,\ldots,v_{l,{d+1}}\}$. This is satisfied by all bases $B_l\in\mathcal{B}$ with $j\in B_l$ and $i\notin B_l$. For $j=i$ all indices $l\in[n]$ are contained in $T_i$: since $v_{l,i}-1\le 0\le v_{l,m}$ for every $m$, the minimum in $v_{l,i}-1=\min\{v_{l,1},v_{l,2},\ldots,v_{l,i}-1,\ldots,v_{l,{d+1}}\}$ is always attained at the entry $v_{l,i}-1$. \end{proof} Besides the point $\mathbf{0}$, the other pseudovertices of a tropical matroid polytope correspond to unions of its bases. \begin{lemma}\label{lem:pseudovertices} The pseudovertices of $P\in\mathcal{P}_{k,d}$ are \[\PV(P)=\left\{-e_J\big|J = \bigcup_{i\in I}B_i \text{ for some }I\subseteq[n]\right\}.\] The pseudovertices of the first tropical hypersimplex are \[\PV(\Delta^d)=\left\{-e_J\big|J\in\bigcup_{j=1}^{d}\binom{[d+1]}{j}\right\}.\] Let $(T^{(0)}_1,\ldots,T^{(0)}_{d+1})$ be the type of the pseudovertex $\mathbf{0}$ with respect to $V$ and consider a point $-e_J\in\PV(P)$.
If the complement $J^C$ of $J$ is equal to $\{i_1,\ldots,i_r\}$, then the type $(T_1,\ldots,T_{d+1})$ of $-e_J$ with respect to $V$ is given by \begin{equation*} \label{eq:typestmp}T_j \ = \ \begin{cases} T^{(0)}_j\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_r}) & \text{if $j\in J$} \, ,\\ T^{(0)}_j\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_r}}^C)& \text{otherwise} \, . \end{cases} \end{equation*} \end{lemma} \begin{proof} Consider the point $v_J:=c(-e_J)=e_{J^C}$ with canonical coordinates \[v_{J,i}= \begin{cases} 0 & \text{if $i\in J$} \, ,\\ 1& \text{otherwise} \, , \end{cases}\] and $\operatorname{type}_{V}(v_J)=(T_1,\ldots,T_{d+1})$. Since the union of the elements of one or more bases of $\mathcal{M}$ consists of at least $k$ elements, the index set $J$ has at least $k$ elements and thus we have $r\leq d-k+1$ for the cardinality $r$ of $J^C$. We can assume that $J^C=\{1,2,\ldots,r\}$. Then some index $l$ occurs in the $j$-th coordinate $T_j$ if and only if \begin{eqnarray}\label{eq:condHypersimplex:typePseudovert} v_{l,j}-v_{J,j}&=&\min\{v_{l,1}-1,\ldots,v_{l,r}-1,v_{l,r+1},\ldots,v_{l,d+1}\}\\ &=&\min\{v_{l,1}-1,\ldots,v_{l,r}-1\}\in \{-1,0\}\nonumber. \end{eqnarray} For $j\in J$ the left hand side of equation~(\ref{eq:condHypersimplex:typePseudovert}) is $v_{l,j}-0\in\{0,1\}$. If $j\in B_l$, we get $v_{l,j}-v_{J,j}=0-0$ and this is minimal in~(\ref{eq:condHypersimplex:typePseudovert}) if the coordinates $v_{l,i}$ are equal to one for all $i\in J^C$, i.e. $i\notin B_l$. If $j\notin B_l$, we get $v_{l,j}-v_{J,j}=1\notin\{-1,0\}$. Therefore, $T_j$ is equal to $\{l\mid j\in B_l \text{ and } i\notin B_l \text{ for all } i\in J^C\}=T^{(0)}_j\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_r})$. For $j\in J^C$ the left hand side is $v_{l,j}-1\in\{0,-1\}$. If $j\in B_l$, we get $v_{l,j}-v_{J,j}=-1=\min\{v_{l,1}-1,\ldots,v_{l,j}-1,\ldots,v_{l,r}-1\}$.
If $j\notin B_l$, we get $v_{l,j}-v_{J,j}=1-1=0$ and this is minimal in~\eqref{eq:condHypersimplex:typePseudovert} if the coordinates $v_{l,i}$ are equal to one for all $i\in J^C$, i.e. $i\notin B_l$. Therefore, $T_j$ is equal to $\{l\mid j\in B_l\,\text{\bf or } (i\notin B_l \text{ for all } i\in J^C)\}=T^{(0)}_j\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_r}}^C)$. If $r=d-k+1$, then $J$ is itself a basis $B\in\mathcal{B}$ and the pseudovertex $v:=c(-e_J)$ is a generator of $P$. Since $B$ is the only basis with $i_1,\ldots,i_{d-k+1}\notin B$, its index is the only element of $T_j=T^{(0)}_j\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})$ for $j\in B$. For this reason, the generators as defined in~(\ref{eq:generators}) are exactly the tropical vertices of $P$. Now we consider the other points of $\PV(P)$, i.e. $r<d-k+1$. The intersection of two type entries $T_{j_1}\cap T_{j_2}$ is equal to \begin{equation} \label{eq:intersection}T_{j_1}\cap T_{j_2} \ = \ \begin{cases} (T^{(0)}_{j_1}\cap T^{(0)}_{j_2})\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_r}) & \text{if $j_1,j_2\in J$} \, ,\\ (T^{(0)}_{j_1}\cap T^{(0)}_{j_2})\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_r}}^C)& \text{otherwise} \, . \end{cases} \end{equation} In the first case of~\eqref{eq:intersection}, $T_{j_1}\cap T_{j_2}$ contains the index of at least one tropical vertex $v_l$ with $v_{l,j_1}=v_{l,j_2}=0$ and $v_{l,i}=1$ for all $i\in J^C$. In the second case the condition is even weaker, so that again $T_{j_1}\cap T_{j_2}\neq \emptyset$. Hence, Proposition 17 of~\cite{DS04} tells us that the cell $X_T$ has dimension $0$, i.e. the given points really are pseudovertices of $P$. For $J=\bigcup_{i\in I}B_i$ and $J'=\bigcup_{i\in I'}B_i$ with $I\neq I'\subseteq[n]$ the tropical line segment between $v_J$ and $v_{J'}$ is the concatenation of the two ordinary line segments $[v_J, v_{J\cup J'}]$ and $[v_{J\cup J'},v_{J'}]$.
The point $v_{J\cup \tilde{J}}$ is again a point of $\PV(P)$. Therefore, $\PV(P)$ contains no pseudovertices other than the given points. Now we consider the tropical standard simplex $\Delta^d$. If the tropical vertex $v_l:=v_{B_l}$, $B_l\in\binom{[d+1]}{1}$, of $\Delta^d$ is given by the vector $v_{B_l}=-e_l$ ($l=1,\ldots,d+1$), then the type of the origin $\mathbf{0}$ with respect to $\Delta^d$ is $T^{\mathbf{0}}=(1,2,\ldots,d+1)$. Therefore, this is an interior point of $\Delta^d$. Let $v_J$ with $J\in\bigcup_{j=1}^{d}\binom{[d+1]}{j}$ be any pseudovertex of $\Delta^d$. Since for $i\in J$ and $i\notin J$, we have \mbox{$v_{i,i}-v_{J,i}= 0=\displaystyle\min\{v_{l,1}-1,\ldots,v_{l,r}-1,v_{l,r+1},\ldots,v_{l,i},\ldots,v_{l,d+1}\}$} and\\\mbox{$v_{i,i}-v_{J,i}=-1=\displaystyle\min\{v_{l,1}-1,\ldots,v_{l,i}-1,\ldots,v_{l,r}-1\}$}, respectively, it follows that the index $i$ is contained in the $i$-th entry of $T$ for all $i=1,\ldots,d+1$, i.e. $T^{\mathbf{0}}\subset T$. Hence, $\Delta^d$ is a polytrope. \end{proof} Let $v_J= \sum_{i\in J}-e_i=-e_J$ be a pseudovertex of $P$ with $J=\bigcup_{i\in I}B_i$ for $I\subseteq[n]$. If the complement $J^C$ of $J$ is equal to $\{i_1,i_2,\ldots,i_r\}$ with $r\le d-k+1$, we will denote $v_J$ as $e_{i_1,i_2,\ldots,i_r}$ and its type with respect to $P$ as \[\operatorname{type}_{V}(v_J)=T({v}_J)=\big(T_1({v}_J),\ldots,T_{d+1}({v}_J)\big).\] Because of the previous lemma, the $i$-th entry of $T({v}_J)$ contains all bases using edge $i\in J$ that are possible after deleting the edges of $J^C$ in the corresponding graph $G$ or, equivalently, all bases that are possible after (re-)inserting edge $i\in J^C$ into $(V(G),E(G)\setminus\{J^C\})$. We call a sequence of pseudovertices $e_{\emptyset},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}}$, or rather the set \\\mbox{$\{i_1,\ldots,i_{d-k+1}\}\subset[d+1]$}, {\it valid} if the edge set $E\setminus\{i_1,\ldots,i_{d-k+1}\}$ contains a spanning tree of the underlying graph $G$.
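For graphic matroids, validity of a sequence therefore reduces to a connectivity test: the remaining edge set must still contain a spanning tree. A minimal computational sketch (the four-cycle graph below is an illustrative assumption, not the graph of the running example):

```python
def connected(num_vertices, edge_list):
    """Union-find test: do the given edges connect all vertices?"""
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edge_list:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(num_vertices)}) == 1

def is_valid(num_vertices, edges, sequence):
    """A sequence of edge indices i_1, ..., i_r is valid iff the edge set
    E minus {i_1, ..., i_r} still contains a spanning tree, i.e. the graph
    remains connected after all deletions (every prefix is then automatic)."""
    deleted = set(sequence)
    remaining = [e for i, e in enumerate(edges) if i not in deleted]
    return connected(num_vertices, remaining)

# Four-cycle on vertices 0..3: deleting any one edge keeps it connected,
# deleting two opposite edges disconnects it.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_valid(4, cycle, [0]))     # True
print(is_valid(4, cycle, [0, 2]))  # False
```

Since a superset of a connected edge set is connected, checking the final edge set also certifies every intermediate deletion step.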
The first point $e_{\emptyset}=\mathbf{0}$ is assigned to the total edge set $E$ of $G$. Then we delete edge after edge such that the graph is still connected until the edge set forms a connected graph without cycles. So the last point of a valid sequence is the tropical vertex $v_B$ of $P$ with $B=[d+1]\setminus\{i_1,i_2,\ldots,i_{d-k+1}\}$. It turns out that the pseudovertices of the valid sequences and their subsequences play a major role in the calculation of the maximal bounded and unbounded cells of $P$. \begin{lemma} The maximal bounded cells of $P\in\mathcal{P}_{k,d}$ are of dimension $d-k+1$. They form the tropical convex hull of the pseudovertices of a valid sequence $\mathbf{0},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}}$, where the last pseudovertex is a tropical vertex $v_B$ according to the basis $B=[d+1]\setminus\{i_1,i_2,\ldots,i_{d-k+1}\}\in\mathcal{B}$ of $\mathcal{M}$. Let $T^{(0)}=(T^{(0)}_1,\ldots,T^{(0)}_{d+1})$ be the type of the pseudovertex $\mathbf{0}$ with respect to $P$. Then the type $T=(T_1,\ldots,T_{d+1})$ of the interior of the bounded cell $X_{T}=\operatorname{tconv}(\mathbf{0},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}})$ is given by $T_{i_1}=T^{(0)}_{i_1},T_{i_2}=T^{(0)}_{i_2}\setminus T^{(0)}_{i_1},\ldots,T_{i_{d-k+1}}=T^{(0)}_{i_{d-k+1}}\setminus (T^{(0)}_{i_1}\cup T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k}})$ and $T_j=T^{(0)}_j\setminus (T^{(0)}_{i_1}\cup T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})$ for all $j\in B$. \end{lemma} \begin{proof} First, we will show that this sequence really defines a bounded cell of $P$, i.e. $T_{j}\neq\emptyset$ for all $j\in [d+1]$.
So consider the type entry at some coordinate $i_j\in B^C$: \begin{eqnarray*} T_{i_j}&=&T_{i_j}(\mathbf{0})\cap T_{i_j}(e_{i_1})\cap \ldots \cap\\ && T_{i_j}(e_{i_1,\ldots,i_{j-1}})\cap\\ && T_{i_j}(e_{i_1,\ldots,i_{j}})\cap\ldots\cap\\ && T_{i_j}(e_{i_1,\ldots,i_{d-k+1}})\\ &=&\{l\mid i_j\in B_l\}\cap\{l\mid i_j\in B_l \text{ and } i_1\notin B_l\}\cap\ldots\cap\\ &&\{l\mid i_j\in B_l \text{ and } (i_1,\ldots,i_{j-1}\notin B_l)\}\cap\\ && \{l\mid i_j\in B_l \text{ or } (i_1,\ldots,i_{j}\notin B_l)\}\cap\ldots\cap\\ && \{l\mid i_j\in B_l \text{ or } (i_1,\ldots,i_{d-k+1}\notin B_l)\}\\ &=& \{l\mid i_j\in B_l \text{ and }(i_1,\ldots,i_{j-1}\notin B_l)\}\\ &=& T^{(0)}_{i_j}\setminus(T^{(0)}_{i_1}\cup\ldots\cup T^{(0)}_{i_{j-1}}). \end{eqnarray*} The cardinality of $T_{i_j}=T^{(0)}_{i_j}\cap {T^{(0)}_{i_1}}^C\cap\ldots\cap {T^{(0)}_{i_{j-1}}}^C$ is equal to the number of tropical vertices $v$ of $P$ with $v_{i_j}=0$ and $v_{i_1}=\ldots=v_{i_{j-1}}=1$ (in canonical coordinates) or, equivalently, to the number of bases $B$ with $i_j\in B$ and $i_1,\ldots,i_{j-1}\notin B$, which is greater than $0$ since we consider only valid sequences. So every type coordinate $T_{i_j}$ contains at least one entry. In the case of uniform matroids we have the choice of $d+1-j$ {\it free} coordinates from which $k-1$ must be equal to $0$, i.e. the cardinality of $T_{i_j}$ is equal to $\binom{d+1-j}{k-1}$. Analogously, the other type entries $T_j=T^{(0)}_j\setminus (T^{(0)}_{i_1}\cup T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})=\{v_B\}$ for $j\in B$ and their cardinality $|T_j|=1$ can be verified. Furthermore, we have $T_1\cup \cdots \cup T_{d+1}=[n]$, because $T^{(0)}_1\cup \cdots \cup T^{(0)}_{d+1}=[n]$. Since no type entry of $T$ is empty, the cell $X_{T}$ is bounded. More precisely, $T_{i_1},\ldots,T_{i_{d-k+1}}$ is a partition of the indices of $\V(P)\setminus\{v_B\}$, and the other type coordinates each contain the index of the tropical vertex $v_B$; we call this a {\it pre-partition}.
By Proposition 17 in \cite{DS04}, the dimension of $X_{T}$ is $d-k+1$. Removing one pseudovertex $e_{i_1,\ldots,i_r}$ with $r\in [d-k+1]$ from a valid sequence, we obtain $T_{i_{r+1}}=T^{(0)}_{i_{r+1}}\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_{r-1}})$ and $T_{i_r}\cap T_{i_{r+1}}\neq\emptyset$. This yields a bounded cell of dimension lower than $d-k+1$. Adding a pseudovertex $e_J$ to $X_{T}$, $J\neq B$ with $J^C=\{j_1,\ldots,j_r\}$ ($1\leq r\leq d-k+1$) and $(j_1,\ldots,j_l)\neq (i_1,\ldots,i_l)$ for all $l=1,\ldots,r$, we consider $T'=T\cap\operatorname{type}_{P}(e_J)$. To keep the status of a maximal bounded cell, the type of the cell still has to be a pre-partition of $[n]$ without empty type entries. There are three different cases (1)--(3). (1) For $J^C\not\subseteq B^C$ and $J\cap B\neq\emptyset$, there is an index $j\in J\cap B$. We consider the $j$-th type entry of $T'$ that is equal to $T_{j}\cap T^{(0)}_j(e_J)=T^{(0)}_j\cap {T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_{d-k+1}}}^C\cap {T^{(0)}_{j_1}}^C\cap\cdots\cap {T^{(0)}_{j_r}}^C$. This is an empty set since there are no tropical vertices of $P$ with $d-k+1+r$ entries equal to one. The cells with empty type entries are not bounded. (2) For $J^C\not\subseteq B^C$ and $J\cap B=\emptyset$, we consider an index $j\in J\cap B^C$ that corresponds to a valid sequence with $i_t=j$, $t\in\{1,\ldots,d-k+1\}$. The $j$-th type entry of $T'$ is equal to $T^{(0)}_j(e_J)\cap T_j=T^{(0)}_j\cap {T^{(0)}_{j_1}}^C\cap\cdots\cap {T^{(0)}_{j_r}}^C\cap {T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_{t-1}}}^C$. Since $J^C\not\subseteq\{i_1,\ldots,i_{t-1}\}$, the cardinality of $T'_j$ is less than $|T^{(0)}_j|$, and we do not obtain a valid pre-partition of $[n]$. (3) For $J^C\subset B^C$ we have $r<d+1-k$ (otherwise $J=B$). We choose the smallest index $j$ such that $i_j\in J\cap B^C$. That means $i_1,\ldots,i_{j-1}\in J^C\subset B^C$.
Since we have $(i_1,\ldots,i_l)\neq(j_1,\ldots,j_l)$ for all $l=1,\ldots,r$, we know that $(i_1,\ldots,i_{j-1})\neq(j_1,\ldots,j_r)$ leading to $|T_{i_j}|=|T^{(0)}_{i_j}\cap {T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_{j-1}}}^C|>|T'_{i_j}|=|T^{(0)}_{i_j}\cap {T^{(0)}_{j_1}}^C\cap\cdots\cap {T^{(0)}_{j_r}}^C|$. As in the other two cases, this is not a valid pre-partition of $[n]$. In every case, adding a pseudovertex from another sequence leads to infeasible types of bounded cells. Similarly, it is not difficult to see that removing a pseudovertex and adding a new one from another sequence leads to infeasible types or lower-dimensional bounded cells, i.e. mixing of valid sequences is not possible. Altogether, we get the desired maximal bounded cells of $P$. \end{proof} There are $n\cdot(d+1-k)!$ maximal bounded cells of $P$ since we have $(d+1-k)!$ possibilities to add edges to a spanning tree until we get the whole graph. \begin{example} The tropical matroid polytope $P$ from Example \ref{ex:RunEx1} is contained in the $4$-dimensional tropical hyperplane with apex $\mathbf{0}$. It is shown in Figure~\ref{fig:PVG} as the abstract graph of the vertices and edges of its bounded subcomplex. Its maximal bounded cells are ordinary simplices of dimension $d-k+1=2$, whose pseudovertices are the tropical vertices $V=\{v_{B_1},\ldots,v_{B_8}\}$ (dark), the origin $\mathbf{0}$ (the centered point) and the five corners $c_i=e_i$ (light). The four tropical vertices with indices $3$,~$4$,~$5$ and $8$ correspond to the bases that are possible after deleting edge $1$ in the underlying graph and are therefore adjacent to the point $e_1$. One valid sequence $i_1,i_2$ leading to a maximal bounded cell is for example the (tropical/ordinary) convex hull of $e_{\emptyset}=(0,0,0,0,0),\,e_4=(0,0,0,0,1)$ and $e_{4,2}=v_{B_1}=(0,0,1,0,1)$, i.e. $i_1=4$ and $i_2=2$, with interior cell type $(1,1,36,1,24578)$, representing the basis $B_1=\{1,2,4\}$.
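The count $n\cdot(d+1-k)!$ of maximal bounded cells can be confirmed by brute force on a small graphic matroid. The sketch below uses $K_4$ minus one edge, an assumed example chosen only because it has $d+1=5$ edges and bases (spanning trees) of size $k=3$; it is not claimed to be the graph behind the running example:

```python
from itertools import combinations, permutations
from math import factorial

def connected(num_vertices, edge_list):
    """Union-find test: do the given edges connect all vertices?"""
    parent = list(range(num_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edge_list:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(num_vertices)}) == 1

# K4 minus the edge (2, 3): d + 1 = 5 edges, spanning trees have k = 3 edges.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
n_vertices, k = 4, 3
d = len(edges) - 1

# Bases of the graphic matroid = spanning trees.
bases = [B for B in combinations(range(len(edges)), k)
         if connected(n_vertices, [edges[i] for i in B])]

# Maximal bounded cells <-> valid sequences of length d - k + 1:
# ordered deletions whose complement still contains a spanning tree.
def valid(seq):
    rest = [edges[i] for i in range(len(edges)) if i not in seq]
    return connected(n_vertices, rest)

cells = [s for s in permutations(range(len(edges)), d - k + 1) if valid(s)]

print(len(bases), len(cells))  # 8 16
assert len(cells) == len(bases) * factorial(d + 1 - k)
```

Here every ordering of the complement of a spanning tree is valid, since the tree itself survives all deletions, which is exactly the $(d+1-k)!$ factor.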
\begin{figure} \caption{The abstract graph of the vertices and edges of the bounded subcomplex of $P$.}\label{fig:PVG} \end{figure} \end{example} All cells in the tropical complex $\mathcal{C}_{V}$, bounded or not, are pointed, i.e. they do not contain an affine line. So each cell of $\mathcal{C}_{V}$ must contain a bounded cell as an ordinary face. We now state the main theorem about the coarse types of maximal cells in the cell complex of a tropical matroid polytope. Let $b_{I,J}$ denote the number of bases $B\in\mathcal{B}$ with $I\subseteq B$ and $J\subseteq B^C$. \begin{theorem}\label{thm:coarse types of tmp} Let $\mathcal{C}$ be the tropical complex induced by the tropical vertices of a tropical matroid polytope $P\in\mathcal{P}_{k,d}$. The set of all coarse types of the maximal cells arising in $\mathcal{C}$ is given by those tuples $(t_1,\ldots,t_{d+1})$ with \begin{equation}\label{eq:coarse types of tmp} t_j \ = \ \begin{cases} b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}} & \text{if $j=i_1$} \, ,\\ b_{\{i_l\},\{i_1,\ldots,i_{l-1}\}} & \text{if $j=i_l\in \{i_2,\ldots, i_{d'+1}\}$} \, ,\\ 0 & \text{otherwise} \, , \end{cases} \end{equation} where $e_{i_1},\ldots,e_{i_1,i_2,\ldots,i_{d'}}$ form a subsequence of a valid sequence of $P$. \end{theorem} \begin{proof} Depending on the maximal bounded (ordinary) face in the boundary, there are three types of maximal unbounded cells in $\mathcal{C}_{V}$. The first one, $X_{T}$, contains a maximal bounded cell of dimension $d-k+1$, which is the tropical convex hull of the pseudovertices of a complete valid sequence $\mathbf{0},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}}$ where $B^C=\{i_1,\ldots,i_{d-k+1}\}$ is the complement of a basis of $\mathcal{M}$. To become full-dimensional, we choose $k-1$ of the $k$ free directions $-e_{i}$, $i\in B$. So let $-e^{\infty}_{j_1},\ldots,-e^{\infty}_{j_{k-1}}$ be the {\it extreme rays} of $X_{T}$, and $(T^{(0)}_1,\ldots,T^{(0)}_{d+1})$ be the type of the pseudovertex $\mathbf{0}$ with respect to $P$.
Then the type $T=(T_1,\ldots,T_{d+1})$ of the interior of this unbounded cell $X_{T}$ is given by the intersection of the types of its vertices and therefore $T_{i_1}=T^{(0)}_{i_1},T_{i_2}=T^{(0)}_{i_2}\setminus T^{(0)}_{i_1},\ldots,T_{i_{d-k+1}}=T^{(0)}_{i_{d-k+1}}\setminus (T^{(0)}_{i_1}\cup T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k}})$, $T_i=T^{(0)}_i\setminus (T^{(0)}_{i_1}\cup T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})$ for $i\notin B^C\cup\{j_1,\ldots,j_{k-1}\}$ and $T_{j_1}=\ldots=T_{j_{k-1}}=\emptyset$. Choosing $d'=d-k+1$ and $i_{d'+1}=i$, we get the coarse type entries of equation~(\ref{eq:coarse types of tmp}). The second type, $X_{T}$, of maximal unbounded cells contains a bounded cell of lower dimension $d'\in\{0,\ldots,d-k\}$, which is the tropical convex hull of the pseudovertices of some subsequence $e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d'+1}}$. To become full-dimensional, we still need the extreme rays $e_{i_1,i_2,\ldots,i_{d'+1}}-e^{\infty}_l$ for all directions $l\notin\{i_1,\ldots,i_{d'+1}\}$. Then the type $T=(T_1,\ldots,T_{d+1})$ of the interior of this unbounded cell $X_{T}$ is given by $T_{i_1}=T^{(0)}_{i_1}\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_{d'+1}}}^C),T_{i_2}=T^{(0)}_{i_2}\setminus T^{(0)}_{i_1},\ldots,T_{i_{d'+1}}=T^{(0)}_{i_{d'+1}}\setminus (T^{(0)}_{i_1}\cup T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d'}})$, $T_j=\emptyset$ for $j\notin \{i_1,\ldots,i_{d'+1}\}$ with the coarse type as given in equation~(\ref{eq:coarse types of tmp}). The third and last type of maximal unbounded cells contains a bounded cell of dimension $d-k$ and is assigned to the non-bases of $\mathcal{M}$, i.e. to the subsets of $E$ with cardinality $k$ that are not bases. Let $i_1,\ldots,i_{d-k+1}$ be the complement of a non-basis $N$ and $i_1,\ldots,i_{d-k}$ a valid subsequence.
Then there is an unbounded cell $X_{T}$ that is the tropical convex hull of the pseudovertices $\mathbf{0}, e_{i_1},\ldots,e_{i_{d-k}}$ and the extreme rays $\mathbf{0}-e^{\infty}_l$ for all directions $l\notin\{i_1,\ldots,i_{d-k+1}\}$ and with type entries $T_{i_1}=T^{(0)}_{i_1},T_{i_2}=T^{(0)}_{i_2}\setminus T^{(0)}_{i_1},\ldots,T_{i_{d-k+1}}=T^{(0)}_{i_{d-k+1}}\setminus (T^{(0)}_{i_1}\cup T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k}})$, $T_j=\emptyset$ for $j\notin \{i_1,\ldots,i_{d-k+1}\}$. Choosing $d'=d-k$ and observing that $b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}}=0$ for the non-basis $\{i_1,i_2,\ldots,i_{d'+1}\}^C$ we get the desired result. \end{proof} Restricting ourselves to the uniform case, we get the following result. \begin{corollary} The coarse types of the maximal cells in the tropical complex induced by the tropical vertices of the tropical hypersimplex $\Delta_k^d$ in ${\mathbb{T}}^d$ with $2\le k< d+1$ are, up to the symmetry group $\operatorname{Sym}(d+1)$, given by \[\left(\binom{d+1-\alpha}{k}+\binom{d}{k-1},\,\binom{d-1}{k-1},\ldots,\,\binom{d-(\alpha-1)}{k-1},\underbrace{0,\ldots,0}_{d+1-\alpha}\right)\] where $0\le\alpha\le d+2-k$ corresponds to the maximal dimension of a bounded cell of its boundary. \end{corollary} Now we relate the combinatorial properties of the tropical complex $\mathcal{C}$ of a tropical matroid polytope to algebraic properties of a monomial ideal which is assigned to $\mathcal{C}$. As a direct consequence of Theorem~\ref{thm:DJS2009} and Corollary 3.5 in~\cite{DJS09}, we can state the generators of the coarse type ideal \[I=\langle x^{\mathbf{t}(p)}\colon p\in{\mathbb{T}}^d\rangle\subset{\mathbb{K}}[x_1,\ldots,x_{d+1}],\] where $\mathbf{t}(p)$ is the coarse type of $p$ and $x^{\mathbf{t}(p)}={x_1}^{{\mathbf{t}(p)}_1}{x_2}^{{\mathbf{t}(p)}_2}\cdots{x_{d+1}}^{{\mathbf{t}(p)}_{d+1}}$.
\begin{corollary}The coarse type ideal $I$ is equal to \[\langle x_{i_1}^{t_{i_1}}x_{i_2}^{t_{i_2}}\cdots x_{i_{d'+1}}^{t_{i_{d'+1}}}\colon [d+1]\setminus\{i_1,\ldots,i_{d'}\} \text{ contains a basis }\rangle\] with $(t_{i_1},t_{i_2},\ldots,t_{i_{d'+1}})=\big(b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}},b_{\{i_2\},\{i_1\}},\ldots,b_{\{i_{d'+1}\},\{i_1,\ldots,i_{d'}\}}\big)$. \end{corollary} \begin{example}The tropical complex $\mathcal{C}$ of the tropical matroid polytope of Example~\ref{ex:RunEx1} has 73 maximal cells. There are five maximal cells for the case $d'=0$ with $t_{i_{d'+1}}=8$ and $t_j=0$ for $j\neq i_{d'+1}$, and 48 for the case $d'=2$ according to the 8 bases. Finally, there are 20 maximal cells for the case $d'=1$, where $[d+1]\setminus\{i_1\}$ contains a basis, but $[d+1]\setminus\{i_1,i_2\}$ does not necessarily contain a basis. The coarse type ideal of $\mathcal{C}$ is given by \begin{eqnarray*}I&=\langle& {x_1}^1{x_2}^2{x_3}^5,{x_1}^1{x_2}^5{x_3}^2,{x_1}^2{x_2}^1{x_3}^5,{x_1}^4{x_2}^1{x_3}^3,{x_1}^4{x_2}^3{x_3}^1,{x_1}^2{x_2}^5{x_3}^1,{x_2}^2{x_3}^6,{x_2}^6{x_3}^2,\\ &&{x_2}^2{x_3}^5{x_4}^1,{x_2}^5{x_3}^2{x_4}^1,{x_1}^2{x_3}^6,{x_1}^5{x_3}^3,{x_1}^2{x_3}^5{x_4}^1,{x_1}^4{x_3}^3{x_4}^1,{x_3}^8,{x_3}^5{x_4}^3,{x_1}^8,{x_1}^5{x_2}^3,\\ &&{x_1}^5{x_4}^3,{x_1}^4{x_3}^1{x_4}^3,{x_1}^4{x_2}^3{x_4}^1,{x_1}^4{x_2}^1{x_4}^3,{x_0}^2{x_4}^6,{x_4}^8,{x_0}^2{x_1}^1{x_4}^5,{x_0}^1{x_1}^2{x_4}^5,{x_1}^2{x_4}^6,\\ &&{x_0}^1{x_2}^2{x_4}^5,{x_2}^2{x_4}^6,{x_0}^2{x_2}^1{x_4}^5,{x_1}^1{x_2}^2{x_4}^5,{x_1}^2{x_2}^1{x_4}^5,{x_0}^2{x_3}^1{x_4}^5,{x_3}^3{x_4}^5,{x_2}^2{x_3}^1{x_4}^5,\\ &&{x_1}^2{x_3}^1{x_4}^5,{x_0}^1{x_2}^5{x_4}^2,{x_2}^6{x_4}^2,{x_2}^5{x_3}^1{x_4}^2,{x_1}^1{x_2}^5{x_4}^2,{x_0}^2{x_3}^6,{x_0}^1{x_2}^2{x_3}^5,{x_0}^2{x_2}^1{x_3}^5,\\ &&{x_0}^2{x_1}^1{x_3}^5,{x_0}^1{x_1}^2{x_3}^5,{x_0}^2{x_3}^5{x_4}^1,{x_0}^1{x_2}^5{x_3}^2,{x_0}^3{x_2}^5,{x_2}^8,{x_0}^1{x_1}^2{x_2}^5,{x_1}^2{x_2}^6,{x_1}^2{x_2}^5{x_4}^1,\\ 
&&{x_0}^1{x_1}^4{x_2}^3,{x_0}^6{x_4}^2,{x_0}^5{x_1}^1{x_4}^2,{x_0}^5{x_1}^2{x_4}^1,{x_0}^3{x_1}^4{x_4}^1,{x_0}^1{x_1}^4{x_4}^3,{x_0}^5{x_2}^1{x_4}^2,{x_0}^5{x_2}^3,\\ &&{x_0}^5{x_1}^2{x_2}^1,{x_0}^3{x_1}^4{x_2}^1,{x_0}^5{x_3}^1{x_4}^2,{x_0}^5{x_2}^1{x_3}^2,{x_0}^5{x_3}^2{x_4}^1,{x_0}^6{x_3}^2,{x_0}^5{x_1}^1{x_3}^2,{x_0}^6{x_1}^2,\\ &&{x_0}^3{x_1}^5,{x_0}^5{x_1}^2{x_3}^1,{x_0}^3{x_1}^4{x_3}^1,{x_0}^8,{x_0}^1{x_1}^4{x_3}^3\,\rangle\subseteq R:={\mathbb{R}}[{x_0},{x_1},{x_2},{x_3},{x_4}].\end{eqnarray*} We obtain its minimal free resolution, which is induced by $\mathcal{C}$: \[\mathcal{F}_{\bullet}^{\mathcal{C}}\colon\,0\rightarrow R^{14}\rightarrow R^{78}\rightarrow R^{172}\rightarrow R^{180}\rightarrow R^{73}\rightarrow I\rightarrow 0,\] where the exponents $i$ of the free graded $R$-modules $R^i$ correspond to the entries of the $f$-vector $f(\mathcal{C})=(1,14,78,172,180,73)$ of $\mathcal{C}$. \end{example} In (ordinary) convexity, switching between the interior and the exterior description of a polytope is a famous problem, known as the \emph{convex hull problem}. For a uniform matroid it is possible to specify the minimal tropical halfspaces of its tropical matroid polytope. \begin{theorem} The tropical hypersimplex $\Delta_k^d$ in ${\mathbb{T}}^d$ is the intersection of its cornered halfspaces and the tropical halfspaces $H(\mathbf{0},I)$, where $I$ is a $(d-k+2)$-element subset of $[d+1]$. \end{theorem} \begin{proof} For $k=1$ the tropical standard simplex is a polytrope and coincides with its cornered hull. For $k\geq 2$ we want to verify the three conditions of Gaubert and Katz in Proposition 1 of~\cite{GaubertKatz09}. Let $T=(T_1,\ldots,T_{d+1})$ be the type of the apex $\mathbf{0}$ of $H(\mathbf{0},I)$. If a vertex $v\in\V(\Delta_k^d)$ appears in some type entry $T_i$, then the $i$-th (canonical) coordinate of $v$ is equal to zero. Hence, exactly $k$ entries of $T$ contain the index of $v$.
Since the cardinality of $I^C=[d+1]\setminus I$ is only $k-1$, every tropical vertex of $\Delta_k^d$ is contained in some sector $\overline{S_i}$ with $i\in I$, i.e. $\Delta_k^d\subseteq H(\mathbf{0},I)$. Consider the complement $I^C$ of $I$. For all $i\in I^C$ there is a tropical vertex $v$ with $v_i=0$, i.e. $v\in T_i$. Since the cardinality of $I^C$ is equal to $k-1$ and $v$ has $k$ entries equal to zero, there must be an index $j\in I$ such that $v_j=0$. We can conclude that $T_i\cap T_j\neq\emptyset$. The intersection $T_i\cap T_j$ is not empty for arbitrary $i,j\in[d+1]$, because its cardinality is equal to the number of tropical vertices $v$ with $v_i=v_j=0$, which is $\binom{d-1}{k-2}$ with $k>1$. For $i\in I$ and $j\in I^C$, the set $T_i\cap T_j$ consists of all tropical vertices $v$ with $v_i=0$ and $v_j=0$ (in canonical coordinates). On the other hand, the set $\bigcup_{k\in I\setminus\{i\}}T_k$ contains all tropical vertices $v$ with $v_i=1$. So we get $T_i\cap T_j \not\subset \bigcup_{k\in I\setminus\{i\}}T_k$. Hence, we obtain that $H(\mathbf{0},I)$ is a minimal tropical halfspace, and $\Delta_k^d$ is contained in the intersection of its cornered hull $\displaystyle \bigcap_{i\in[d+1]}H(e_i,\{i\})$ with $\displaystyle \bigcap_{I\in\binom{[d+1]}{d-k+2}}H(\mathbf{0},I)$. We still have to prove that the intersection of the given minimal tropical halfspaces is contained in $\Delta_k^d$. Let us assume that there is a point $x\in{\mathbb{T}}^d\setminus\Delta_k^d$ with $\operatorname{type}_{\Delta_k^d}(x)_i=\emptyset$. Then for any tropical halfspace $H(\mathbf{0},I)$, $I\in\binom{[d+1]}{d-k+2}$, with $i\in I^C$ we obtain $x\notin H(\mathbf{0},I)$.
Consequently, the tropical hypersimplex $\Delta_k^d$ is the set of all points $x\in{\mathbb{T}}^d$ satisfying \begin{eqnarray*} \displaystyle\bigoplus_{i\in I} x_i&\leq&\bigoplus_{j\in I^C} x_j \text{ for all }I\subseteq [d+1]\text{ with }\lvert I \rvert=d-k+2\\\text{and } (-1)\odot x_i&\leq&\bigoplus_{j\neq i} x_j \text{ for all }i\in[d+1]. \end{eqnarray*} \end{proof} \begin{example}The second tropical hypersimplex $\Delta_2^3$ in ${\mathbb{T}}^3$ is the intersection of the $4$ cornered halfspaces $(c_i,\{i\})$ for $i=1,\ldots,4$ and the tropical halfspaces $(\mathbf{0},\{1,2,3\})$, $(\mathbf{0},\{1,2,4\})$, $(\mathbf{0},\{1,3,4\})$ and $(\mathbf{0},\{2,3,4\})$ with apex $\mathbf{0}\in {\mathbb{T}}^3$. The second tropical hypersimplex $\Delta_2^2$ in ${\mathbb{T}}^2$ is the intersection of the three cornered halfspaces $(c_i,\{i\})$ for $i=1,\ldots,3$ and the tropical halfspaces $(\mathbf{0},\{1,2\})$, $(\mathbf{0},\{1,3\})$ and $(\mathbf{0},\{2,3\})$ with apex $\mathbf{0}\in {\mathbb{T}}^2$, see Figure \ref{fig:Delta22}. \begin{figure} \caption{The tropical hypersimplex $\Delta_2^2$ and its minimal tropical halfspaces.}\label{fig:Delta22} \end{figure} \end{example} \noindent\emph{Acknowledgements.} I would like to thank my advisor Michael Joswig for suggesting the problem, and for supporting me in writing this article. \def$'${$'$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title{Algebraic duality for partially ordered sets} \begin{center} Department of Mathematics, SPb UEF, Griboyedova 30/32, \\ 191023, St-Petersburg, Russia \\ {\em and} \\ Division of Mathematics, Istituto per la Ricerca di Base, \\ I-86075, Monteroduni, Molise, Italy \end{center} \begin{abstract} For an arbitrary partially ordered set $P$ its {\em dual} $P^*$ is built as the collection of all monotone mappings $P\to{\bf 2}$ where ${\bf 2}=\{0,1\}$ with $0<1$. The set of mappings $P^*$ is proved to be a complete lattice with respect to the pointwise partial order. The {\em second dual} $P^{**}$ is built as the collection of all morphisms of complete lattices $P^*\to{\bf 2}$ preserving universal bounds. Then it is proved that the partially ordered sets $P$ and $P^{**}$ are isomorphic. \end{abstract} \paragraph{AMS classification:} 06A06, 06A15 \section*{Introduction} The results presented in this paper can be considered as the algebraic counterpart of the duality in the theory of linear spaces. The outline of the construction looks as follows. Several categories occur in the theory of partially ordered sets. The most general is the category ${\cal POSET}$ whose objects are partially ordered sets and the morphisms are the monotone mappings. Another category which will be used is ${\cal BCL}$ whose objects are (bounded) complete lattices and the morphisms are the lattice homomorphisms preserving universal bounds. Evidently ${\cal BCL}$ is a subcategory of ${\cal POSET}$. To introduce the algebraic duality (I use the term `algebraic' to avoid confusion with the traditional duality based on order reversal) the two-element partially ordered set ${\bf 2}$ is used: \[ {\bf 2}=\{0,1\}\quad,\quad 0<1 \] Let $P$ be an object of ${\cal POSET}$. Consider its dual $P^*$: \begin{equation}\label{f129} P^* = {\sf Mor}_{\scriptscriptstyle {\cal POSET}}(P,{\bf 2}) \end{equation} The set $P^*$ has the pointwise partial order.
Moreover, it is always a complete lattice with respect to this partial order (section \ref{s130}). Furthermore, starting from $P^* \in {\cal BCL}$ (bounded complete lattices) consider the set $P^{**}$ of all morphisms in the appropriate category: \begin{equation}\label{f129d} P^{**} = {\sf Mor}_{\scriptscriptstyle {\cal BCL}}(P^*,{\bf 2}) \end{equation} And again, the set of mappings $P^{**}$ is pointwise partially ordered. Finally, it is proved in section \ref{s139} that $P^{**}$ is isomorphic to the initial partially ordered set $P$ (the isomorphism lemma \ref{l141}): \[ P^{**} \simeq P \] The account of the results is organized as follows. First it is proved that $P^*$ (\ref{f129}) is a complete lattice. Then the embeddings $p\mapsto\lambda_p$ and $p\mapsto\upsilon_p$ of the poset $P$ into $P^*$ are built (\ref{f131d}). Then it is shown that the principal ideals $[0,\lambda_p]$ in $P^*$ are prime for all $p\in P$ (lemma \ref{l139}). Moreover, it is shown that there are no other principal prime ideals in $P^*$. Finally, it is observed that the principal prime ideals of $P^*$ are in 1-1 correspondence with the elements of $P^{**}$. \section{The structure of the dual space} \label{s130} First define the pointwise partial order on the elements of $P^*$ (\ref{f129}). For any $x,y\in P^*$ \begin{equation}\label{f130p} x\le y \kern0.3em\Leftrightarrow\kern0.3em \forall p \in P \quad x(p)\le y(p) \end{equation} Evidently the following three statements are equivalent for $x,y\in P^*$: \begin{equation}\label{f130} \left. \begin{array}{lcl} &x\le y & \cr \forall p\in P \quad x(p)=1 &\Rightarrow & y(p)=1 \cr \forall p\in P \quad y(p)=0 &\Rightarrow & x(p)=0 \end{array} \right. \end{equation} To prove that $P^*$ is a complete lattice, consider its arbitrary subset $K\subseteq P^*$ and define the following mappings $u,v:P \to {\bf 2}$: \begin{equation}\label{f131i} u(p) = \left\lbrace \begin{array}{rcl} 1, & \exists k\in K & k(p)=1 \cr 0, & \forall k\in K & k(p)=0 \end{array}\right.
\qquad v(p) = \left\lbrace \begin{array}{rcl} 0, & \exists k\in K & k(p)=0 \cr 1, & \forall k\in K & k(p)=1 \end{array}\right. \end{equation} The direct calculations show that both $u$ and $v$ are monotone mappings: $u,v\in P^*$ and \[ u=\sup_{P^*}K \quad,\quad v=\inf_{P^*}K \] which proves that $P^*$ is a complete lattice. Denote by {\bf 0},{\bf 1} the universal bounds of the lattice $P^*$: \[ \forall p\in P \quad {\bf 0}(p)=0 \quad, \quad {\bf 1}(p)=1 \] Let $p$ be an element of $P$. Define the elements $\lambda_p,\upsilon_p\in P^*$ associated with $p$: for all $q\in P$ \begin{equation}\label{f131d} \lambda_p(q) = \left\lbrace \begin{array}{rcl} 0 &, & q\le p\cr 1 &, & \hbox{otherwise} \end{array}\right. \qquad \upsilon_p(q) = \left\lbrace \begin{array}{rcl} 1 &, & q\ge p \cr 0 &, & \hbox{otherwise} \end{array}\right. \end{equation} \begin{lemma}\label{l132} For any $x\in P^*,\quad p\in P$ \begin{equation}\label{f132l} \begin{array}{rcl} x(p)=0 &\Leftrightarrow & x\le \lambda_p \hbox{ \rm in } P^* \cr x(p)=1 &\Leftrightarrow & x\ge \upsilon_p \hbox{ \rm in } P^* \end{array} \end{equation} \end{lemma} \paragraph{Proof.} Rewrite the left side of the first equivalency as $\forall q \quad q\le p \kern0.3em\Rightarrow\kern0.3em x(q)=0$, hence $\forall q \quad\lambda_p(q)=0 \kern0.3em\Rightarrow\kern0.3em x(q)=0$, therefore $x\le \lambda_p$ by virtue of (\ref{f130}). The second equivalency is proved likewise. \hspace*{\fill}$\Box$ We shall focus on the 'inner' characterization of the elements $\lambda_p,\upsilon_p$ in mere terms of the lattice $P^*$ itself. To do it, recall the necessary definitions. Let $L$ be a complete lattice. An element $a\in L$ is called {\sc join-irreducible} ({\sc meet-irreducible}) if it cannot be represented as the join (resp., meet) of a collection of elements of $L$ different from $a$.
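Before turning to irreducibility, note that for a small poset the dual $P^*$ and the elements $\lambda_p,\upsilon_p$ can be enumerated outright, which makes the equivalences of lemma \ref{l132} directly checkable. A minimal sketch for an assumed three-element poset $\{a,b,c\}$ with $a<c$ and $b<c$ (the poset is illustrative only):

```python
from itertools import product

# An assumed toy poset P = {a, b, c} with a < c and b < c.
P = ['a', 'b', 'c']
le = {(p, p) for p in P} | {('a', 'c'), ('b', 'c')}  # the order relation <=

def monotone(x):
    return all(x[p] <= x[q] for (p, q) in le)

# P* = all monotone maps P -> {0, 1}.
dual = []
for bits in product((0, 1), repeat=len(P)):
    x = dict(zip(P, bits))
    if monotone(x):
        dual.append(x)

def lam(p):  # lambda_p(q) = 0 iff q <= p
    return {q: 0 if (q, p) in le else 1 for q in P}

def ups(p):  # upsilon_p(q) = 1 iff q >= p
    return {q: 1 if (p, q) in le else 0 for q in P}

def leq(x, y):  # pointwise order on P*
    return all(x[q] <= y[q] for q in P)

# The lemma: x(p) = 0 iff x <= lambda_p, and x(p) = 1 iff x >= upsilon_p.
for x in dual:
    for p in P:
        assert (x[p] == 0) == leq(x, lam(p))
        assert (x[p] == 1) == leq(ups(p), x)

print(len(dual))  # 5 monotone maps for this poset
```

The same loop, run over all pairs of elements of `dual`, could equally be used to confirm that `dual` is a complete lattice under the pointwise order.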
To make this definition more verifiable, introduce for every $a\in L$ the following elements of $L$: \begin{equation}\label{f131} \begin{array}{rcl} \check{a} &=& \inf_L\{x\in L\mid\quad x>a\} \cr \hat{a} &=& \sup_L\{y\in L\mid\quad y<a\} \end{array} \end{equation} \noindent which do exist since $L$ is complete. Clearly, $\check{a}\ge a \ge \hat{a}$ and the equivalencies \begin{equation}\label{f132} \begin{array}{rcl} a\neq \check{a} &\Leftrightarrow & a\hbox{ is meet-irreducible} \cr a\neq \hat{a} &\Leftrightarrow & a\hbox{ is join-irreducible} \end{array} \end{equation} follow directly from the above definitions. \begin{lemma} \label{l138} An element $w\in P^*$ is meet irreducible if and only if it is equal to $\lambda_p$ for some $p\in P$. Dually, $v\in P^*$ is join irreducible iff $v=\upsilon_p$ for some $p\in P$. \end{lemma} \paragraph{Proof.} First prove that every $\lambda_p$ is meet irreducible. To do it we shall use the criterion (\ref{f132}). Let $p\in P$. Define $u\in P^*$ as: \[ u(q) = \left\lbrace \begin{array}{rcl} 0 &,& q<p \cr 1 &,& \hbox{otherwise} \end{array} \right. \] then the following equivalency holds: \begin{equation}\label{f135} u\le x \kern0.3em\Leftrightarrow\kern0.3em ( \forall q \quad x(q)=0 \kern0.3em\Rightarrow\kern0.3em q<p ) \end{equation} Now let $x>\lambda_p$, then $x(p)=1$ (otherwise (\ref{f132l}) would enable $x\le\lambda_p$). Then $x>\lambda_p$ implies $x\ge\lambda_p$, hence $\forall q \quad x(q)=0 \kern0.3em\Rightarrow\kern0.3em q\le p$, although $q=p$ is excluded, hence we get exactly the right side of (\ref{f135}). That means that \[ u=\inf_{P^*}\{x\mid x>\lambda_p\} = {\check{\lambda}}_p \] differs from $\lambda_p$, hence $\lambda_p$ is meet irreducible by virtue of (\ref{f132}). The second dual statement is proved quite analogously. Conversely, suppose we have a meet irreducible $w\in P^*$, hence, according to (\ref{f132}), there exists $p\in P$ such that $\check{w}(p)\neq 0$ while $w(p)=0$.
The latter means $w\le \lambda_p$ for this $p$. To disprove $w<\lambda_p$ rewrite $\check{w}(p)\neq 0$ as \( \lnot(\inf\{x\mid w<x\}\quad \le\quad \lambda_p) \) which is equivalent to \[ \exists y (\forall x \quad w<x \Rightarrow y\le x) \quad \& \quad \lnot(y\le \lambda_p) \] In particular, it must hold for $x=\lambda_p$, thus the assumption $w<\lambda_p$ implies \( \exists y \quad y\le \lambda_p \kern0.4em \& \kern0.4em \lnot(y\le \lambda_p) \), and the only remaining possibility for $w$ is to be equal to $\lambda_p$. \hspace*{\fill}$\Box$ \paragraph{Dual statement.} The join irreducibles of $P^*$ are the elements $\upsilon_p, p\in P$ and only they. \section{Second dual and the isomorphism lemma} \label{s139} Introduce the necessary definitions. Let $L$ be a lattice. An {\sc ideal} in $L$ is a subset $K\subseteq L$ such that \begin{itemize} \item $k\in K, x\le k \kern0.3em\Rightarrow\kern0.3em x\in K$ \item $a,b\in K \kern0.3em\Rightarrow\kern0.3em a\lor b \in K$ \end{itemize} Replacing $\le$ by $\ge$ and $\lor$ by $\land$ the notion of {\sc filter} is introduced. An ideal (filter) $K\subseteq L$ is called {\sc prime} if its set complement $L\setminus K$ is a filter (resp., ideal) in $L$. Now return to the lattice $P^*$. \begin{lemma}\label{l139} For any $p\in P$ both the principal ideal $[0,\lambda_p]$ and the principal filter $[\upsilon_p,1]$ are prime in $P^*$. Moreover, \[ [\upsilon_p,1] = P^*\setminus [0,\lambda_p] \] \end{lemma} \paragraph{Proof.} Fix $p\in P$, then for any $x\in P^*$ the value $x(p)$ is either 0 (hence $x\le \lambda_p$) or 1 (and then $x\ge \upsilon_p$) according to (\ref{f132l}). Since $\lambda_p$ never equals $\upsilon_p$ (because their values at $p$ are different), the sets $[\upsilon_p,1]$ and $[0,\lambda_p]$ are disjoint, which completes the proof. \hspace*{\fill}$\Box$ The converse statement is formulated in the following lemma.
\begin{lemma} \label{l141} For any pair $u,v\in P^*$ such that \begin{equation}\label{f141} [0,u] = P^*\setminus [v,1] \end{equation} there exists an element $p\in P$ such that $u=\lambda_p$ and $v=\upsilon_p$. \end{lemma} \paragraph{Proof.} It follows from (\ref{f141}) that $u$ and $v$ are not comparable, therefore $u\land v < v$. Thus there exists $p\in P$ such that $(u\land v)(p) =0$ while $v(p) = 1$. Then (\ref{f132l}) implies $u\land v \le \lambda_p$ and $v\ge \upsilon_p$. Suppose $v\neq \upsilon_p$, then (\ref{f141}) implies $\upsilon_p\le u$, which together with $\upsilon_p\le v$ implies $\upsilon_p \le u\land v \le \lambda_p$ which never holds since $\upsilon_p$ and $\lambda_p$ are not comparable. So, we have to conclude that $v=\upsilon_p$, thus $u=\lambda_p$. \hspace*{\fill}$\Box$ Now introduce the {\sc second dual} $P^{**}$ as the set of all homomorphisms of complete lattices $P^*\to {\bf 2}$ preserving universal bounds, that is, for any ${\bf p} \in P^{**}, K\subseteq P^*$ \[ \begin{array}{l} {\bf p}(\sup K) = \sup_{k\in K}{\bf p}(k) \cr {\bf p}(\inf K) = \inf_{k\in K}{\bf p}(k) \cr {\bf p}(0) = 0\hbox{ ; }{\bf p}(1) = 1 \end{array} \] with the pointwise partial order as in (\ref{f130p}). Now we are ready to prove the following {\em isomorphism lemma}. \begin{lemma} The partially ordered sets $P$ and $P^{**}$ are isomorphic. \end{lemma} \paragraph{Proof.} Define the mapping $F:P\to P^{**}$ by putting \[ F(p) = {\bf p}:\quad {\bf p}(x) = x(p) \qquad \forall x\in P^* \] Evidently $F$ is an order-preserving injection. To build the inverse mapping $G:P^{**}\to P$, for any ${\bf p}\in P^{**}$ consider the ideal ${\bf p}^{-1}(0)$ and the filter ${\bf p}^{-1}(1)$ in $P^*$ both being prime (see \cite{lt}, II.4). Let $u=\sup{\bf p}^{-1}(0)$ and $v=\inf{\bf p}^{-1}(1)$. Since ${\bf p}$ is a homomorphism of complete lattices, $u\in {\bf p}^{-1}(0)$ and $v\in {\bf p}^{-1}(1)$, hence ${\bf p}^{-1}(0) = [0,u]$ and ${\bf p}^{-1}(1) = [v,1]$.
Applying Lemma~\ref{l141}, we see that there exists $p\in P$ such that $u=\lambda_p$ and $v=\upsilon_p$. Put $G({\bf p}) = p$. The mapping $G$ is order preserving and injective (since distinct principal ideals have distinct suprema). It remains to prove that $F$ and $G$ are mutually inverse. Let $p\in P$ and consider $G(F(p))$. Denote ${\bf p} = F(p)$; then ${\bf p}^{-1}(0) = \{x\in P^*\mid x(p)=0\} = \{x\mid x\le \lambda_p\}$. Thus $\sup{\bf p}^{-1}(0) = \lambda_p$, hence $G(F(p))=p$ and $G\circ F = {\rm id}_P$. Since $G$ is injective, this also yields $F\circ G = {\rm id}_{P^{**}}$, which completes the proof. \hspace*{\fill}$\Box$ \section*{Concluding remarks} The results presented in this paper show that besides the well-known duality in partially ordered sets based on order reversal, we can establish quite a different kind of duality {\em \`a la} linear algebra. As in the theory of linear topological spaces, we see that the `reflexivity' expressed as $P=P^{**}$ can be achieved by an appropriate {\em definition} of the dual space. We see that a general partially ordered set has a dual space which is a complete lattice. We also see that not every complete lattice can play the r\^ole of the dual of a poset. These complete lattices can be characterized in terms of spaces with two closure operations \cite{cr}. For the category of {\em ortho}posets this construction was introduced in \cite{mayet}. Another approach, in which dual spaces are treated as sets of two-valued measures (in terms of this paper, as sub-posets of $P^{**}$), can be found in \cite{tkadlec}. The main feature of the techniques suggested in the present paper is that all the constructions are formulated purely in terms of partially ordered sets and lattices. The work was supported by the RFFI research grant (97-14.3-62). The author acknowledges the financial support from the Soros foundation (grant A97-996) and the research grant ``Universities of Russia''. \end{document}
\begin{document} \title{Non-deterministic weighted automata \protect\\ evaluated over Markov chains\footnote{This paper has been published in Journal of Computer and System Sciences: \url{https://doi.org/10.1016/j.jcss.2019.10.001}}} \begin{abstract} We present the first study of non-deterministic weighted automata under probabilistic semantics. In this semantics, words are random events, generated by a Markov chain, and functions computed by weighted automata are random variables. We consider the probabilistic questions of computing the expected value and the cumulative distribution for such random variables. The exact answers to the probabilistic questions for non-deterministic automata can be irrational and are uncomputable in general. To overcome this limitation, we propose approximation algorithms for the probabilistic questions, which work in exponential time in the size of the automaton and polynomial time in the size of the Markov chain and the given precision. We apply this result to show that non-deterministic automata can be effectively determinised with respect to the standard deviation metric. \end{abstract} \newcommand{\introPara}[1]{\noindent\emph{#1}.} \section{Introduction} Weighted automata are (non-deterministic) finite automata in which transitions carry weights~\cite{Droste:2009:HWA:1667106}. We study here weighted automata (on finite and infinite words) whose semantics is given by \emph{value functions} (such as the sum or the average)~\cite{quantitativelanguages}. In such weighted automata, transitions are labeled with rational numbers and hence every run yields a sequence of rationals, which the value function aggregates into a single (real) number. This number is the value of the run, and the value of a word is the infimum over the values of all accepting runs on that word. The value function approach has been introduced to express quantitative system properties (performance, energy consumption, etc.)
and it serves as a foundation for \emph{quantitative verification}~\cite{quantitativelanguages,henzingerotop17}. Basic decision questions for weighted automata are quantitative counterparts of the emptiness and universality questions obtained by imposing a threshold on the values of words. \introPara{Probabilistic semantics} The emptiness and the universality problems correspond to the best-case and the worst-case analysis. For the average-case analysis, weighted automata are considered under probabilistic semantics, in which words are random events generated by a Markov chain~\cite{DBLP:conf/icalp/ChatterjeeDH09,lics16}. In such a setting, functions from words to reals computed by deterministic weighted automata are measurable and hence can be considered as random variables. The fundamental probabilistic questions are to compute \emph{the expected value} and \emph{the cumulative distribution} for a given automaton and a Markov chain. \introPara{The deterministic case} Weighted automata under probabilistic semantics have been studied only in the deterministic case. A close relationship has been established between weighted automata under probabilistic semantics and weighted Markov chains~\cite{DBLP:conf/icalp/ChatterjeeDH09}. For a weighted automaton $\mathcal{A}$ and a Markov chain $\mathcal{M}$ representing the distribution over words, the probabilistic problems for $\mathcal{A}$ and $\mathcal{M}$ coincide with the probabilistic problem of the weighted Markov chain $\mathcal{A} \times \mathcal{M}$. Weighted Markov chains have been intensively studied with single and multiple quantitative objectives~\cite{BaierBook,DBLP:conf/concur/ChatterjeeRR12,filar,DBLP:conf/cav/RandourRS15}. The above reduction does not extend to non-deterministic weighted automata \cite[Example~30]{lics16}. \introPara{Significance of nondeterminism} Non-deterministic weighted automata are provably more expressive than their deterministic counterpart~\cite{quantitativelanguages}. 
Many important system properties can be expressed with weighted automata only in the nondeterministic setting. This includes minimal response time, minimal number of errors and the edit distance problem~\cite{henzingerotop17}, which serves as the foundation for the \emph{specification repair} framework from~\cite{DBLP:conf/lics/BenediktPR11}. Non-determinism can also arise as a result of abstraction. The exact systems are often too large and complex to operate on and hence they are approximated with smaller non-deterministic models~\cite{clarke2016handbook}. The abstraction is especially important for multi-threaded programs, where the explicit model grows exponentially with the number of threads~\cite{DBLP:conf/popl/GuptaPR11}. \paragraph*{Our contributions} We study non-deterministic weighted automata under probabilistic semantics. We work with weighted automata as defined in~\cite{quantitativelanguages}, where a value function $f$ is used to aggregate weights along a run, and the value of the word is the infimum over values of all runs. (The infimum can be changed to supremum as both definitions are dual). We primarily focus on the two most interesting value functions: the sum of weights over finite runs, and the limit average over infinite runs. The main results presented in this paper are as follows. \begin{itemize} \item We show that the answers to the probabilistic questions for weighted automata with the sum and limit-average value functions can be irrational and even transcendental (Theorem~\ref{th:irrational}) and cannot be computed by any effective representation (Theorem~\ref{th:limavg-undecidable}). \item We establish approximation algorithms for the probabilistic questions for weighted automata with the sum and limit-average value functions. 
The approximation is \textsc{\#P}-complete for (total) weighted automata with the sum value function (Theorem~\ref{th:approximation-sum}), and it is $\PSPACE$-hard and solvable in exponential time for weighted automata with the limit-average value function (Theorem~\ref{th:approximation-limavg}). \item We show that weighted automata with the limit-average value function can be approximately determinised (Theorem~\ref{th:approximateDeterminisation}). Given an automaton $\mathcal{A}$ and $\epsilon >0$, we show how to compute a deterministic automaton $\mathcal{A}_D$ such that the expected difference between the values returned by both automata is at most $\epsilon$. \end{itemize} \paragraph*{Applications} We briefly discuss applications of our contributions in quantitative verification. \begin{itemize} \item The expected-value question corresponds to the average-case analysis in quantitative verification~\cite{DBLP:conf/icalp/ChatterjeeDH09,lics16}. Using results from this paper, we can perform the average-case analysis with respect to quantitative specifications given by non-deterministic weighted automata. \item Some quantitative-model-checking frameworks~\cite{quantitativelanguages} are based on the universality problem for non-deterministic automata, which asks whether all words have the value below a given threshold. Unfortunately, the universality problem is undecidable for weighted automata with the sum or the limit average values functions. The distribution question can be considered as a computationally-attractive variant of universality, i.e., we ask whether almost all words have value below some given threshold. We show that if the threshold can be approximated, the distribution question can be computed effectively. \item Weighted automata have been used to formally study online algorithms~\cite{aminof2010reasoning}. 
Online algorithms have been modeled by deterministic weighted automata, which make choices based solely on the past, while offline algorithms have been modeled by non-deterministic weighted automata. Relating deterministic and non-deterministic models allowed for formal verification of the worst-case competitiveness ratio of online algorithms. Using the result from our paper, we can extend the analysis from~\cite{aminof2010reasoning} to the average-case competitiveness. \end{itemize} \paragraph*{Related work} The problem considered in this paper is related to the following areas from the literature. \noindent\emph{Probabilistic verification of qualitative properties}. Probabilistic verification asks for the probability of the set of traces satisfying a given property. For non-weighted automata, it has been extensively studied~\cite{DBLP:conf/focs/Vardi85,DBLP:journals/jacm/CourcoubetisY95, BaierBook} and implemented~\cite{DBLP:journals/entcs/KwiatkowskaNP06,DBLP:conf/tacas/HintonKNP06}. The prevalent approach in this area is to work with deterministic automata, and apply determinisation as needed. To obtain better complexity bounds, the probabilistic verification problem has been directly studied for unambiguous B\"uchi automata in~\cite{DBLP:conf/cav/BaierK0K0W16}; the authors explain there the potential pitfalls in the probabilistic analysis of non-deterministic automata. \noindent\emph{Weighted automata under probabilistic semantics}. Probabilistic verification of weighted automata and their extensions has been studied in~\cite{lics16}. All automata considered there are deterministic. \noindent\emph{Markov Decision Processes (MDPs)}. MDPs are a classical extension of Markov chains, which models control in a stochastic environment~\cite{BaierBook,filar}. In MDPs, probabilistic and non-deterministic transitions are interleaved; this can be explained as a game between two players: Controller and Environment. Given a game objective (e.g. 
state reachability), the goal of Controller is to maximize the probability of the objective by selecting non-deterministic transitions. Environment selects probabilistic transitions at random, according to the probability distribution specified at the current state of the MDP. Intuitively, the non-determinism in MDPs is resolved based on the past, i.e., each time Controller selects a non-deterministic transition, its choice is based on previously picked transitions. Our setting can also be explained in such a game-theoretic framework: first, Environment generates a complete word, and only then non-deterministic choices are resolved by Controller, who generates a run of a given weighted automaton. Thus, the non-deterministic choice in a run at some position $i$ may depend on letters of the input word at positions past $i$ (i.e., on future events). Partially Observable Markov Decision Processes (POMDPs)~\cite{ASTROM1965174} are an extension of MDPs, which model weaker non-determinism. In this setting, the state space is partitioned into \emph{observations} and the non-deterministic choices have to be the same for sequences consisting of the same observations (but possibly different states). Intuitively, Controller can make choices based only on the sequence of observations it has seen so far. While in POMDPs Controller is restricted, in our setting Controller is stronger than in the MDPs case. \noindent\emph{Non-deterministic probabilistic automata}. The combination of nondeterminism with stochasticity has been recently studied in the framework of probabilistic automata~\cite{nondet-prob}. That work defines non-deterministic probabilistic automata (NPA) and proposes two possible semantics for them. It has been shown that the equivalence problem for NPA is undecidable (under either of the two considered semantics). Related problems, such as the threshold problem, are undecidable already for (deterministic) probabilistic automata~\cite{bertoni1977some}.
While NPA work only over finite words, the interaction between probabilistic and non-deterministic transitions is more general than in our framework. In particular, non-determinism in NPA can influence the probability distribution, which is not possible in our framework. \noindent\emph{Approximate determinisation}. As weighted automata are not determinisable, Boker and Henzinger~\cite{BokerH12} studied \emph{approximate} determinisation defined as follows. The distance $d_{\sup}$ between weighted automata $\mathcal{A}_1, \mathcal{A}_2$ is defined as $d_{\sup}(\mathcal{A}_1, \mathcal{A}_2) = \sup_{w} | \mathcal{A}_1(w) - \mathcal{A}_2(w)|$. A nondeterministic weighted automaton $\mathcal{A}$ can be \emph{approximately} determinised if for every $\epsilon >0$ there exists a deterministic automaton $\mathcal{A}_D$ such that $d_{\sup}(\mathcal{A}, \mathcal{A}_D) \leq \epsilon$. Unfortunately, weighted automata with the limit average value function cannot be approximately determinised~\cite{BokerH12}. In this work we show that approximate determinisation is possible for the standard deviation metric $d_{\mathrm{std}}$ defined as $d_{\mathrm{std}}(\mathcal{A}_1, \mathcal{A}_2) = \mathbb{E}(|\mathcal{A}_1(w) - \mathcal{A}_2(w)|)$. This paper is an extended and corrected version of~\cite{concur2018}. It contains full proofs, an extended discussion and a stronger version of Theorem~\ref{th:irrational}. We showed in~\cite{concur2018} that the expected values and the distribution values may be irrational. In this paper we show that these values can even be transcendental (Theorem~\ref{th:irrational}). We have corrected two claims from~\cite{concur2018}. First, we have corrected the statements of Theorems~\ref{th:irrational} and~\ref{th:limavg-undecidable}.
For $\textsc{LimAvg}$-automata and the distribution question $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda)$, the values that can be irrational and uncomputable are not the values of the probability $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda) = \mathbb{P}_{\mathcal{M}}(\set{w \mid \valueL{\mathcal{A}}(w) \leq \lambda})$, but the values of the threshold $\lambda$ that correspond to mass points, i.e., values $\lambda$ such that $\mathbb{P}_{\mathcal{M}}(\set{w \mid \valueL{\mathcal{A}}(w) = \lambda}) > 0$. We have also removed from Theorem~\ref{th:all-pspace-hard} the PSPACE-hardness claim for the distribution question for (non-total) $\textsc{Sum}$-automata. We show that the (exact) distribution question for all $\textsc{Sum}$-automata is \textsc{\#P}-complete. \section{Preliminaries} Given a finite alphabet $\Sigma$ of letters, a \emph{word} $w$ is a finite or infinite sequence of letters. We denote the set of all finite words over $\Sigma$ by $\Sigma^*$, and the set of all infinite words over $\Sigma$ by $\Sigma^\omega$. For a word $w$, we define $w[i]$ as the $i$-th letter of $w$, and we define $w[i,j]$ as the subword $w[i] w[i+1] \ldots w[j]$ of $w$. We use the same notation for other sequences defined later on. By $|w|$ we denote the length of $w$. A \emph{(non-deterministic) finite automaton} (NFA) is a tuple $(\Sigma, Q, Q_0, F, \delta)$ consisting of an input alphabet $\Sigma$, a finite set of states $Q$, a set of initial states $Q_0 \subseteq Q$, a set of final states $F \subseteq Q$, and a finite transition relation $\delta \subseteq Q \times \Sigma \times Q$. We define $\delta(q,a) = \set{q' \in Q \mid (q,a,q') \in \delta}$ and $\delta(S,a) = \bigcup_{q \in S} \delta(q,a)$.
We extend this to words $\widehat{\delta} \colon 2^Q \times \Sigma^* \to 2^Q$ in the following way: $\widehat{\delta}(S,\epsilon) = S$ (where $\epsilon$ is the empty word) and $\widehat{\delta}(S,aw) = \widehat{\delta}(\delta(S,a),w)$, i.e., $\widehat{\delta}(S,w)$ is the set of states reachable from $S$ via $\delta$ over the word $w$. \Paragraph{Weighted automata} A \emph{weighted automaton} is a finite automaton whose transitions are labeled by rational numbers called \emph{weights}. Formally, a weighted automaton is a tuple $(\Sigma, Q, Q_0, F, \delta, {C})$, where the first five elements are as in finite automata, and ${C} \colon \delta \to \mathbb{Q}$ is a function that defines \emph{weights} of transitions. An example of a weighted automaton is depicted in Figure~\ref{fig:aut}. The size of a weighted automaton $\mathcal{A}$, denoted by $|\mathcal{A}|$, is $|Q| + |\delta| + \sum_{q, q', a} \mathrm{len}(C(q, a, q'))$, where $\mathrm{len}$ is the sum of the lengths of the binary representations of the numerator and the denominator of a given rational number. A \emph{run} $\pi$ of an automaton $\mathcal{A}$ on a word $w$ is a sequence of states $\pi[0] \pi[1] \dots$ such that $\pi[0]$ is an initial state and for each $i$ we have $(\pi[i-1],w[i],\pi[i]) \in \delta$. A finite run $\pi$ of length $k$ is \emph{accepting} if and only if the last state $\pi[k]$ belongs to the set of accepting states $F$. As in~\cite{quantitativelanguages}, we do not consider $\omega$-accepting conditions and assume that all infinite runs are accepting. Every run $\pi$ of an automaton $\mathcal{A}$ on a (finite or infinite) word $w$ defines a sequence of weights of successive transitions of $\mathcal{A}$ as follows. Let $({C}(\pi))[i]$ be the weight of the $i$-th transition, i.e., ${C}(\pi[i-1], w[i], \pi[i])$. Then, ${C}(\pi)=({C}(\pi)[i])_{1\leq i \leq |w|}$. A \emph{value function} $f$ is a function that assigns real numbers to sequences of rational numbers.
The value $f(\pi)$ of the run $\pi$ is defined as $f({C}(\pi))$. The value of a (non-empty) word $w$ assigned by the automaton $\mathcal{A}$, denoted by $\valueL{\mathcal{A}}(w)$, is the infimum of the set of values of all accepting runs on $w$. The value of a word that has no (accepting) runs is infinite. To indicate a particular value function $f$ that defines the semantics, we will call a weighted automaton $\mathcal{A}$ an $f$-automaton. \Paragraph{Value functions} We consider the following value functions. For finite runs, functions $\textsc{Min}$ and $\textsc{Max}$ are defined in the usual manner, and the function $\textsc{Sum}$ is defined as \[\textsc{Sum}(\pi) = \sum\nolimits_{i=1}^{|C(\pi)|} ({C}(\pi))[i]\] For infinite runs we consider the supremum $\textsc{Sup}$ and infimum $\textsc{Inf}$ functions (defined like $\textsc{Max}$ and $\textsc{Min}$ but on infinite runs) and the limit average function $\textsc{LimAvg}$ defined as \[\textsc{LimAvg}(\pi) = \limsup\limits_{k \rightarrow \infty} \favg{\pi[0, k]} \] where for finite runs $\pi$ we have \(\favg{\pi}=\frac{\textsc{Sum}(\pi)}{|C(\pi)|}\). \subsection{Probabilistic semantics} A (finite-state discrete-time) \emph{Markov chain} is a tuple $\tuple{\Sigma,S,s_0,E}$, where $\Sigma$ is the alphabet of letters, $S$ is a finite set of states, $s_0$ is an initial state, $E \colon S \times \Sigma \times S \mapsto [0,1]$ is an edge probability function, which for every $s \in S$ satisfies that $\sum_{a \in \Sigma, s' \in S} E(s,a,s') = 1$. An example of a single-state Markov chain is depicted in Figure~\ref{fig:aut}. In this paper, Markov chains serve as a mathematical model as well as the input to algorithms. Whenever a Markov chain is the input to a problem or an algorithm, we assume that all edge probabilities are rational and the size of a Markov chain $\mathcal{M}$ is defined as $|\mathcal{M}|=|S|+|E|+\sum_{q, q', a}\mathrm{len}(E(q, a, q'))$. 
The probability of a finite word $u$ w.r.t.\ a Markov chain $\mathcal{M}$, denoted by $\mathbb{P}_{\mathcal{M}}(u)$, is the sum of probabilities of paths from $s_0$ labeled by $u$, where the probability of a path is the product of probabilities of its edges. For sets $u\cdot \Sigma^\omega = \{ uw \mid w \in \Sigma^{\omega} \}$, called \emph{cylinders}, we have $\mathbb{P}_{\mathcal{M}}(u\cdot \Sigma^\omega)=\mathbb{P}_{\mathcal{M}}(u)$, and then the probability measure over infinite words defined by $\mathcal{M}$ is the unique extension of the above measure to the $\sigma$-algebra generated by cylinders (by Carath\'{e}odory's extension theorem~\cite{feller}). We will denote the unique probability measure defined by $\mathcal{M}$ as $\mathbb{P}_{\mathcal{M}}$. For example, for the Markov chain $\mathcal{M}$ presented in Figure~\ref{fig:aut}, we have that $\mathbb{P}_{\mathcal{M}}(ab) = \frac{1}{4}$, and so $\mathbb{P}_{\mathcal{M}}(\set{w \in \set{a,b}^\omega \mid w[0,1]=ab})=\frac{1}{4}$, whereas $\mathbb{P}_{\mathcal{M}}(X)=0$ for any countable set of infinite words $X$. A function $f \colon \Sigma^{\omega} \to \mathbb{R}$ that is measurable w.r.t.\ $\mathbb{P}_{\mathcal{M}}$ is called a \emph{random variable} (w.r.t. $\mathbb{P}_{\mathcal{M}}$). A random variable $g$ is \emph{discrete}, if there exists a countable set $Y \subset \mathbb{R}$ such that $g$ returns a value in $Y$ with probability $1$ ($\mathbb{P}_{\mathcal{M}}(\set{w \mid g(w) \in Y}) = 1$). For a discrete random variable $g$, we define the \emph{expected value} $\mathbb{E}_{\mathcal{M}}(g)$ (w.r.t. the measure $\mathbb{P}_{\mathcal{M}}$) as \[ \mathbb{E}_{\mathcal{M}}(g) = \sum_{y \in Y} y \cdot \mathbb{P}_{\mathcal{M}}(\set{w \mid g(w) = y}).
\] Every non-negative random variable $h \colon \Sigma^{\omega} \to \mathbb{R}^+$ is a point-wise limit of some sequence of monotonically increasing discrete random variables $g_1, g_2, \ldots$ and the expected value $\mathbb{E}_{\mathcal{M}}(h)$ is the limit of expected values $\mathbb{E}_{\mathcal{M}}(g_i)$~\cite{feller}. Finally, every random variable $f$ can be presented as the difference $h_1 - h_2$ of non-negative random variables $h_1, h_2$ and we have $\mathbb{E}_{\mathcal{M}}(f) = \mathbb{E}_{\mathcal{M}}(h_1) - \mathbb{E}_{\mathcal{M}}(h_2)$~\cite{feller}. A \emph{terminating} Markov chain $\mathcal{M}^T$ is a tuple $\tuple{\Sigma,S,s_0,E, T}$, where $\Sigma$, $S$ and $s_0$ are as usual, $E \colon S \times (\Sigma\cup \set{\epsilon}) \times S \mapsto [0,1]$ is the edge probability function, such that if $E(s, a, t) > 0$, then $a=\epsilon$ if and only if $t\in T$, and for every $s \in S$ we have $\sum_{a \in \Sigma\cup \set{\epsilon}, s' \in S} E(s,a,s') = 1$, and $T$ is a set of terminating states such that the probability of reaching a terminating state from any state $s$ is positive. Notice that the only $\epsilon$-transitions in a terminating Markov chain are those that lead to a terminating state. The probability of a finite word $u$ w.r.t. $\mathcal{M}^T$, denoted $\mathbb{P}_{\mathcal{M}^T}(u)$, is the sum of probabilities of paths from $s_0$ labeled by $u$ such that the only terminating state on this path is the last one. Notice that $\mathbb{P}_{\mathcal{M}^T}$ is a probability distribution on finite words whereas $\mathbb{P}_{\mathcal{M}}$ is not (because the sum of probabilities may exceed 1). A function $f \colon \Sigma^{*} \to \mathbb{R}$ is called a \emph{random variable} (w.r.t. $\mathbb{P}_{\mathcal{M}^T}$). Since $\mathcal{M}^T$ generates finite words and $\Sigma^*$ is countable, $f$ attains at most countably many values with positive probability and hence $f$ is discrete. The expected value of $f$ w.r.t. $\mathcal{M}^T$ is defined in the same way as for non-terminating Markov chains.
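As a concrete illustration of the probabilistic semantics above, the probability $\mathbb{P}_{\mathcal{M}}(u)$ of a finite word can be computed by propagating probability mass through the chain, one letter at a time. The following is a minimal sketch; the encoding of a Markov chain as a dictionary of transition probabilities is our own illustrative convention, not the paper's notation.

```python
def word_prob(states, s0, E, word):
    """P_M(u): total probability of paths from s0 labeled by the word u.

    E maps (state, letter, state) -> probability (illustrative encoding).
    """
    # probability mass sitting in each state after reading a prefix of `word`
    dist = {s: (1.0 if s == s0 else 0.0) for s in states}
    for letter in word:
        new_dist = {s: 0.0 for s in states}
        for (s, a, t), p in E.items():
            if a == letter:
                new_dist[t] += dist[s] * p
        dist = new_dist
    # sum over all states reachable by paths labeled with the whole word
    return sum(dist.values())

# A single-state chain emitting a and b uniformly, as in the example
# discussed in the text around Figure~\ref{fig:aut}.
uniform = {('s', 'a', 's'): 0.5, ('s', 'b', 's'): 0.5}
```

For this chain, `word_prob({'s'}, 's', uniform, 'ab')` yields $\frac{1}{4}$, matching the value $\mathbb{P}_{\mathcal{M}}(ab)=\frac{1}{4}$ computed in the text.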
\Paragraph{Automata as random variables} An infinite-word weighted automaton $\mathcal{A}$ defines the function $\valueL{\mathcal{A}}$ that assigns each word from $\Sigma^{\omega}$ its value $\valueL{\mathcal{A}}(w)$. This function is measurable for all the automata types we consider in this paper (see Remark~\ref{rem:measurability} below). Thus, this function can be interpreted as a random variable with respect to the probabilistic space we consider. Hence, for a given automaton $\mathcal{A}$ (over infinite words) and a Markov chain $\mathcal{M}$, we consider the following quantities: \noindent\fbox{\parbox{0.96\textwidth}{ $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ --- the expected value of the random variable $\valueL{\mathcal{A}}$ w.r.t. the measure $\mathbb{P}_{\mathcal{M}}$. \\ $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda) = \mathbb{P}_{\mathcal{M}}(\{w \mid \valueL{\mathcal{A}}(w) \leq \lambda \})$ --- the (cumulative) distribution function of $\valueL{\mathcal{A}}$ w.r.t. the measure $\mathbb{P}_{\mathcal{M}}$. }} In the finite words case, the expected value $\mathbb{E}_{\mathcal{M}^T}$ and the distribution $\mathbb{D}_{\mathcal{M}^T, \mathcal{A}}$ are defined in the same manner. \begin{nremark}[Bounds on the expected value and the distribution] Both quantities can be easily bounded: the value of the distribution function $\mathbb{D}_{\mathcal{M}, \mathcal{A}}$ is always between $0$ and $1$. For a $\textsc{LimAvg}$-automaton $\mathcal{A}$, we have $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) \in [\min_\mathcal{A}, \max_\mathcal{A}] \cup \set{\infty}$, where $\min_\mathcal{A}$ and $\max_\mathcal{A}$ denote the minimal and the maximal weight of $\mathcal{A}$ and $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \infty$ if and only if the probability of the set of words with no accepting runs in $\mathcal{A}$ is positive. 
{Note that we consider no $\omega$-accepting conditions, and hence all infinite runs of $\textsc{LimAvg}$-automata are accepting, but there can be infinite words, on which a given $\textsc{LimAvg}$-automaton has no infinite runs. } For a $\textsc{Sum}$-automaton $\mathcal{A}$, we have $\mathbb{E}_{\mathcal{M}^T}(\mathcal{A}) \in [L_{\mathcal{M}^T} \cdot \min_\mathcal{A}, L_{\mathcal{M}^T} \cdot \max_\mathcal{A}] \cup \set{\infty}$, where $L_{\mathcal{M}^T}$ is the expected length of a word generated by $\mathcal{M}^T$ (it can be computed in a standard way~\cite[Section 11.2]{Grinstead12}) and, as above, $\mathbb{E}_{\mathcal{M}^T}(\mathcal{A}) = \infty$ if and only if there is a finite word $w$ generated by $\mathcal{M}^T$ with non-zero probability such that $\mathcal{A}$ has no accepting runs on $w$. We show in Section~\ref{sec:irrational} that the distribution and expected value may be irrational, even for integer weights and uniform distributions. \end{nremark} \begin{nremark}[Measurability of functions represented by automata] \label{rem:measurability} For automata on finite words, $\textsc{Inf}$-automata and $\textsc{Sup}$-automata, measurability of $\valueL{\mathcal{A}}$ is straightforward. To show that $\valueL{\mathcal{A}}(w) \colon \Sigma^\omega \mapsto \mathbb{R}$ is measurable for any non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$, it suffices to show that for every $x \in \mathbb{R}$, the preimage $\valueL{\mathcal{A}}^{-1}(-\infty,x]$ is measurable. Let $Q$ be the set of states of $\mathcal{A}$. We can define a subset $A_x \subseteq \Sigma^{\omega} \times Q^\omega$ of the pairs, the word and the run on it, where the value of the run is less than or equal to $x$. We show that $A_x$ is Borel. For $p \in \mathbb{N}$, let $B_x^p$ be the subset of $\Sigma^{\omega} \times Q^\omega$ of pairs $(w, \pi)$ such that up to position $p$ the sequence $\pi$ is a run on $w$ and the average of weights up to $p$ is at most $x$. 
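The expected word length $L_{\mathcal{M}^T}$ used in the remark above satisfies the standard system of linear equations $L(s) = \sum_{a \in \Sigma,\, t \notin T} E(s,a,t)\,(1 + L(t))$ (terminating $\epsilon$-transitions emit no letter and contribute nothing), which can be solved, for instance, by fixed-point iteration. A small sketch under the same hypothetical dictionary encoding of chains; it is an illustration of the standard computation, not code from the paper.

```python
def expected_length(states, E, terminating, iters=2000):
    """Iterate L(s) = sum over (s,a,t) with t not terminating of E(s,a,t)*(1+L(t)).

    The iteration converges because a terminating state is reached with
    positive probability from every state, making the map a contraction.
    """
    L = {s: 0.0 for s in states}
    for _ in range(iters):
        L = {s: sum(p * (1.0 + L[t])
                    for (u, a, t), p in E.items()
                    if u == s and t not in terminating)
             for s in states}
    return L
```

For a chain that emits a letter with probability $\tfrac12$ and terminates otherwise, the fixed point is $L = \tfrac{1/2}{1-1/2} = 1$, in line with the closed-form solution of the single equation $L = \tfrac12(1+L)$.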
Observe that $B_x^p$ is an open set and $A_x$ is equal to $\bigcap_{\epsilon \in \mathbb{Q}^+} \bigcup_{p_0 \in \mathbb{N}} \bigcap_{p \geq p_0} B_{x+\epsilon}^p$, i.e., $A_x$ consists of pairs $(w, \pi)$ satisfying that for every $\epsilon\in\mathbb{Q}^+$ there exists $p_0$ such that for all $p\geq p_0$ the average weight of $\pi$ at $p$ does not exceed $x+\epsilon$ and $\pi$ is a run on $w$ (each finite prefix is a run). Finally, $\valueL{\mathcal{A}}^{-1}(-\infty,x]$ is the projection of $A_x$ on the first component $\Sigma^{\omega}$. The projection of a Borel set is an \emph{analytic set}, which is measurable~\cite{kechris}. Thus, $\valueL{\mathcal{A}}$ defined by a non-deterministic $\textsc{LimAvg}$-automaton is measurable. The above proof of measurability requires some knowledge of descriptive set theory. We will give a direct proof of measurability of $\valueL{\mathcal{A}}$ later in the paper (Theorem~\ref{th:approximation-limavg}). \end{nremark} \subsection{Computational questions} We consider the following basic computational questions: \noindent\fbox{\parbox{0.96\textwidth}{ \emph{The expected value question}: Given an $f$-automaton $\mathcal{A}$ and a (terminating) Markov chain $\mathcal{M}$, compute $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$. \emph{The distribution question}: Given an $f$-automaton $\mathcal{A}$, a (terminating) Markov chain $\mathcal{M}$ and a threshold $\lambda \in \mathbb{Q}$, compute $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$. }} Each of the above questions has a decision variant (useful for lower bounds), where instead of computing the value we ask whether the value is less than a given threshold $t$.
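For intuition about the quantities these questions range over: the value $\valueL{\mathcal{A}}(w)$ of a single finite word under a $\textsc{Sum}$-automaton is the minimum total weight over accepting runs, computable by a shortest-path-style dynamic program over pairs (prefix, state). The sketch below uses a hypothetical transition encoding of our own; it illustrates the definition from the preliminaries, not an algorithm from the paper.

```python
def sum_value(word, initial, final, delta):
    """Value of `word` for a Sum-automaton: minimum over accepting runs of
    the total weight; float('inf') if the word has no accepting run.

    delta maps (state, letter) -> list of (successor, weight) pairs
    (an illustrative encoding of the transition relation and C).
    """
    INF = float('inf')
    # best[q] = least total weight of a run on the prefix read so far ending in q
    best = {q: 0.0 for q in initial}
    for letter in word:
        new_best = {}
        for q, v in best.items():
            for q2, w in delta.get((q, letter), []):
                new_best[q2] = min(new_best.get(q2, INF), v + w)
        best = new_best
    return min((v for q, v in best.items() if q in final), default=INF)
```

For example, with transitions $(p,a)\mapsto\{(p,2),(q,0)\}$ and $(q,a)\mapsto\{(q,1)\}$, initial set $\{p\}$ and final set $\{p,q\}$, the word $aa$ has value $1$, realised by the run $p\,q\,q$.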
The above questions have their approximate variants: \noindent\fbox{\parbox{0.96\textwidth}{ \emph{The approximate expected value question}: Given an $f$-automaton $\mathcal{A}$, a (terminating) Markov chain $\mathcal{M}$, $\epsilon \in \mathbb{Q}^+$, compute a number $y \in \mathbb{Q}$ such that $|y - \mathbb{E}_{\mathcal{M}}(\mathcal{A})| \leq \epsilon$. \emph{The approximate distribution question}: Given an $f$-automaton $\mathcal{A}$, a (terminating) Markov chain $\mathcal{M}$, a threshold $\lambda \in \mathbb{Q}$ and $\epsilon \in \mathbb{Q}^+$ compute a number $y \in \mathbb{Q}$ which belongs to $[\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda-\epsilon)-\epsilon, \mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda+\epsilon)+\epsilon]$. }} \begin{nremark} The notion of approximation for the distribution question is based on the Skorokhod metric~\cite{billingsley2013convergence}. Let us compare here this notion with two possible alternatives: the \emph{inside approximation}, where $y$ belongs to $[\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda-\epsilon), \mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda+\epsilon)]$, and the \emph{outside approximation}, where $y$ belongs to $[\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)-\epsilon, \mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)+\epsilon]$. The outside approximation is reasonable for $\textsc{Sum}$-automata, where the exact value of the probability is hard to compute, but for the $\textsc{LimAvg}$-automata its complexity is the same as computing the exact value (because the latter is difficult already for automata which return the same value for almost all words, as shown in Remark~\ref{r:ultimatelyperiodic}). For the inside approximation, it is the other way round: for $\textsc{Sum}$-automata it makes little sense as the problem is undecidable even for automata returning integer values, but for $\textsc{LimAvg}$-automata it is a reasonable definition as the returned values can be irrational. 
We chose a definition that works for both types of automata. However, the results we present can be easily adjusted to work in the case of the outside approximation for $\textsc{Sum}$-automata and in the case of the inside approximation for $\textsc{LimAvg}$-automata. \end{nremark} \begin{figure} \caption{The automaton $\mathcal{A}$ and the Markov chain $\mathcal{M}$.} \label{fig:aut} \end{figure} \section{Basic properties} Consider an $f$-automaton $\mathcal{A}$, a Markov chain $\mathcal{M}$ and a set of words $X$. We denote by $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid X)$ the expected value of $\mathcal{A}$ w.r.t. $\mathcal{M}$ restricted only to words in the set $X$ (see \cite{feller}). The following fact says that we can disregard a set of words with probability $0$ (e.g.\ the set of words containing only some of the letters, under the uniform distribution) while computing the expected value. \begin{fact}\label{f:equal-expected} If $\mathbb{P}_{\mathcal{M}}(X)=1$ then $\mathbb{E}_\mathcal{M}(\mathcal{A}) = \mathbb{E}_\mathcal{M}(\mathcal{A} \mid X)$. \end{fact} The proof is rather straightforward; the only interesting case is when there are some words not in $X$ with infinite values. But for all the functions we consider, one can show that in this case there is a set of words with infinite value that has a non-zero probability, and therefore $\mathbb{E}_\mathcal{M}(\mathcal{A}) = \mathbb{E}_\mathcal{M}(\mathcal{A} \mid X)=\infty$. One corollary of Fact \ref{f:equal-expected} is that if $\mathcal{M}$ is, for example, uniform, then because the set $Y$ of ultimately-periodic words (i.e., words of the form $vw^\omega$) is countable and hence has probability $0$, we have $\mathbb{E}_\mathcal{M}(\mathcal{A}) = \mathbb{E}_\mathcal{M}(\mathcal{A} \mid \Sigma^\omega \setminus Y)$. This suggests that the values of ultimately-periodic words might not be representative for an automaton.
We exemplify this in Remark~\ref{r:ultimatelyperiodic}, where we show an automaton whose value is irrational for almost all words, yet rational for all ultimately-periodic words. \subsection{Example of computing the expected value by hand}\label{s:example} Consider the $\textsc{LimAvg}$-automaton $\mathcal{A}$ and the Markov chain $\mathcal{M}$ depicted in Figure~\ref{fig:aut}. We encourage the reader to take a moment to study this automaton and try to figure out its expected value. The idea behind $\mathcal{A}$ is as follows. Assume that $\mathcal{A}$ is in a state $q_l$ for some $l \in \{a,b\}$. Then, it reads a word up to the first occurrence of a subword $ba$, where it can go to $q_x$ and then non-deterministically choose $q_a$ or $q_b$ as the next state. Since going to $q_x$ and back to $q_l$ costs the same as staying in $q_l$, we will assume that the automaton always goes to $q_x$ in such a case. When the automaton is in the state $q_x$ and has to read a word $w=a^jb^k$, then the average cost of a run on $w$ is $\frac{j}{j+k}$ if the run goes to $q_b$ and $\frac{k}{j+k}$ otherwise. So the run with the lowest value is the one that goes to $q_a$ if $j>k$ and to $q_b$ otherwise. To compute the expected value of the automaton, we focus on the set $X$ of words $w$ such that for each positive $n \in \mathbb{N}$ there are only finitely many prefixes of $w$ of the form $w'a^jb^k$ such that $\frac{j+k}{|w'|+j+k} \geq \frac{1}{n}$. Notice that this means that $w$ contains infinitely many $a$'s and infinitely many $b$'s. It can be proved in a standard manner that $\mathbb{P}_\mathcal{M}(X)=1$. Let $w \in X$ be a word generated by $\mathcal{M}$. Since $w$ contains infinitely many letters $a$ and $b$, it can be partitioned in the following way. Let $w=w_1w_2w_3 \dots$ be a partition of $w$ such that each $w_i$ for $i>0$ is of the form $a^jb^k$ for $j\geq 0, k>0$, and for $i>1$ we also have $j>0$.
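For intuition, this block partition is easy to compute; the following Python sketch (our illustration only, not part of the construction) splits a finite prefix into such blocks, starting a new block whenever an $a$ follows a $b$:

```python
def partition_blocks(w: str) -> list[str]:
    """Split a finite word over {a, b} into blocks of the form a^j b^k
    (j >= 0 is allowed only for the first block); a trailing block that
    has not yet been closed by a 'b' is returned as-is."""
    blocks = []
    cur = ""
    for c in w:
        # a new block starts when an 'a' follows a 'b'
        if c == "a" and cur.endswith("b"):
            blocks.append(cur)
            cur = ""
        cur += c
    if cur:
        blocks.append(cur)
    return blocks

print(partition_blocks("baaabbbaabbbab"))
```

On the prefix $baaabbbaabbbab$ this yields the blocks $b$, $aaabbb$, $aabbb$, $ab$, matching the example partition $w_1, w_2, w_3, w_4$ discussed next.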
For example, the partition of $w=baaabbbaabbbaba\dots$ is such that $w_1=b$, $w_2=aaabbb$, $w_3=aabbb$, $w_4=ab$, \dots. Let $s_i=|w_1w_2 \dots w_i|$. We now define a run $\pi_w$ on $w$ as follows: \[ q^w_1 \dots q^w_1 q_x q^w_2\dots q^w_2q_xq^w_3 \dots q^w_3 q_x q^w_4 \dots \] where the length of each block of $q^w_i$ is $|w_i|-1$, $q^w_0=q_a$, and $q^w_i=q_a$ if $w_i=a^jb^k$ for some $j>k$ and $q^w_i=q_b$ otherwise. It can be shown, by a careful consideration of all possible runs, that the value of this run is the infimum of the values of all runs on this word. \begin{lemma}\label{l:best-run} For every $w \in X$ we have $\valueL{\mathcal{A}}(w) = \textsc{LimAvg}(\pi_w)$. \end{lemma} \begin{proof} We show that for every accepting run $\pi$ on $w \in X$ we have $\textsc{LimAvg}(\pi_w) \leq \textsc{LimAvg}(\pi)$. It follows that $\valueL{\mathcal{A}}(w) = \textsc{LimAvg}(\pi_w)$. Consider a run $\pi$ of $\mathcal{A}$ on $w$. The cost of a run over $w_i=a^jb^k$ is at least $\min(j, k)-1$, which is attained by $\pi_w$, and therefore for every $i\in \mathbb{N}$ we have \begin{equation}\label{e:property} \favg{\pi_w[0,s_i]} \leq \favg{\pi[0,s_i]}. \end{equation} It may happen, however, that for some $p$, the value of $\favg{\pi[0,p]}$ is less than $\favg{\pi_w[0,p]}$; for example, for a word starting with $baaabbbb$, we have $\pi_w[0,4]=q_aq_xq_bq_bq_b$ and $\favg{\pi_w[0,4]}$ is $\frac{1}{2}$, but for a run $\pi'=q_aq_xq_aq_aq_a\dots$ we have $\favg{\pi'[0,4]}=0$. For arbitrary words, a run that never visits $q_b$ may have a better value. We show, however, that for words from $X$ this is not the case.
We show that for any position $p$ such that $s_i<p<s_{i+1}$, \begin{equation}\label{e:propertyTwo} \favg{\pi_w[0,p]} \leq \favg{\pi[0,s_i]} + \frac{p-s_i}{p} \end{equation} Observe that \[ \begin{split} \favg{\pi_w[0,p]} = \frac{\textsc{Sum}(\pi_w[0,p])}{p} &\leq \frac{\textsc{Sum}(\pi_w[0,s_i])}{s_i} + \frac{\textsc{Sum}(\pi_w[s_i,p])}{p} \\ &= \favg{\pi_w[0, s_i]} + \frac{\textsc{Sum}(\pi_w[s_i,p])}{p}. \end{split} \] By \eqref{e:property} and the fact that the weights of the automaton do not exceed 1, we obtain \[ \favg{\pi_w[0, s_i]} + \frac{\textsc{Sum}(\pi_w[s_i,p])}{p} \leq \favg{\pi[0, s_i]} + \frac{p-s_i}{p}, \] thus \eqref{e:propertyTwo}. Assume $n \in \mathbb{N}$. By the definition of $X$, there can be only finitely many prefixes of $w$ of the form $w'a^jb^k$ where $\frac{j+k}{|w'|+j+k} \geq \frac{1}{n}$, so \( \favg{\pi_w[0,p]} \geq \favg{\pi[0,s_i]} + \frac{1}{n}\) may hold only for finitely many $p$. Therefore, $\textsc{LimAvg}(\pi_w) \leq \textsc{LimAvg}(\pi) + \frac{1}{n}$ for every $n$, so $\textsc{LimAvg}(\pi_w) \leq \textsc{LimAvg}(\pi)$. \end{proof} By Fact~\ref{f:equal-expected} and Lemma~\ref{l:best-run}, it remains to compute the expected value of $\textsc{LimAvg}(\set{\pi_w \mid w \in X})$. As the expected value of the sum is the sum of expected values, we can state that \[\mathbb{E}_\mathcal{M}(\textsc{LimAvg}(\set{\pi_w \mid w \in X})) = \limsup\limits_{s \rightarrow \infty} \frac{1}{s} \cdot \sum_{i=1}^{s} \mathbb{E}_\mathcal{M}\left(\set{({C}(\pi_w))[i] \mid w \in X}\right) \] It remains to compute $\mathbb{E}_\mathcal{M}(({C}(\pi_w))[i])$. If $i$ is large enough (and since the expected value does not depend on a finite number of values, we assume that it is), the letter $\pi_w[i]$ is in some block $w_s=a^jb^k$. 
There are $j+k$ positions in this block, and the probability that position $i$ is a fixed position of such a maximal block is $2^{-(j+k+2)}$ (``$+2$'', because the block has to be maximal, so we also need to fix the letters immediately before and after the block). So the probability that a letter is in a block $a^jb^k$ is $\frac{j+k}{2^{j+k+2}}$. The average cost of such a letter is $\frac{\min(j, k)}{j+k}$, as there are $j+k$ letters in this block and the block contributes $\min(j, k)$ to the sum. It can be analytically checked that \[ \sum_{j=1}^\infty \sum_{k=1}^\infty \frac{j+k}{2^{j+k+2}} \cdot \frac{\min(j, k)}{j+k} = \sum_{j=1}^\infty \sum_{k=1}^\infty \frac{\min(j, k)}{2^{j+k+2}} = \frac{1}{3} \] We can conclude that \(\mathbb{E}_\mathcal{M}(\textsc{LimAvg}(\pi_w))=\frac{1}{3}\) and, by Lemma \ref{l:best-run}, $\mathbb{E}_\mathcal{M}(\mathcal{A})=\frac{1}{3}$. The bottom line is that even for such a simple automaton, with only one strongly connected component consisting of three states (two of them symmetric), the analysis is complicated. On the other hand, we conducted a simple Monte Carlo experiment in which we computed the value of this automaton on 10000 random words of length $2^{22}$ generated by $\mathcal{M}$, and observed that the obtained values lie in the interval $[0.3283, 0.3382]$, with average $0.33336$, which is a good approximation of the expected value $\frac{1}{3}$. This foreshadows our results for $\textsc{LimAvg}$-automata: we show that computing the expected value is, in general, impossible, but it is possible to approximate it with arbitrary precision. Furthermore, the small variation of the results is not accidental: we show that for strongly-connected $\textsc{LimAvg}$-automata, almost all words have the same value (which is equal to the expected value).
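Both the closed-form sum and the Monte Carlo experiment above are easy to reproduce. The following Python sketch (our illustration, with a much smaller sample than the 10000 words of length $2^{22}$ used in the text) truncates the double sum and estimates the expected value by paying $\min(j,k)$ on each maximal block $a^jb^k$ of a random word:

```python
import random
from itertools import groupby

# Truncated double sum: sum_{j,k>=1} min(j,k) / 2^(j+k+2), which
# converges to 1/3; the tail beyond 60 is negligible.
analytic = sum(min(j, k) / 2 ** (j + k + 2)
               for j in range(1, 60) for k in range(1, 60))

# Monte Carlo: on a long uniform random word, the cheapest run pays
# min(j, k) on each maximal block a^j b^k, so the average cost per
# letter estimates the expected value of the automaton.
rng = random.Random(2022)
w = "".join(rng.choice("ab") for _ in range(200_000))
runs = [(c, len(list(g))) for c, g in groupby(w)]  # maximal letter runs
cost = sum(min(runs[i][1], runs[i + 1][1])
           for i in range(len(runs) - 1)
           if runs[i][0] == "a" and runs[i + 1][0] == "b")
estimate = cost / len(w)
print(analytic, estimate)  # both close to 1/3
```

The first quantity agrees with $\frac{1}{3}$ up to floating-point precision, and the simulated estimate exhibits the same small variation around $\frac{1}{3}$ as the experiment reported above.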
\subsection{Irrationality of the distribution and the expected value}\label{sec:irrational} We show that the exact values in the probabilistic questions for $\textsc{Sum}$-automata and $\textsc{LimAvg}$-automata may be (strongly) irrational. More precisely, we show that for the $\textsc{Sum}$-automaton depicted in Figure~\ref{fig:irrational}, the distribution $\mathbb{D}_{\mathcal{A}}(-1)$ is transcendental, i.e., it is irrational and, unlike for instance $\sqrt{2}$, it is not a root of any polynomial with integer coefficients. For the expected value, we construct an automaton $\mathcal{A}'$ such that $\mathbb{E}(\mathcal{A})- \mathbb{E}(\mathcal{A}') = 1 - \mathbb{D}_{\mathcal{A}}(-1)$ is transcendental. Therefore, one of $\mathbb{E}(\mathcal{A})$, $\mathbb{E}(\mathcal{A}')$ is transcendental. Furthermore, we modify $\mathcal{A}$ and $\mathcal{A}'$ to show that there exists a $\textsc{LimAvg}$-automaton $\mathcal{A}Inf$ whose expected value is transcendental, as is the value $\lambda$ such that $\mathbb{P}(\{w \mid \valueL{\mathcal{A}Inf}(w) = \lambda\}) = 1$. It follows that the minimal $\lambda$ such that $\mathbb{D}_{\mathcal{A}Inf}(\lambda) = 1$ is transcendental. \begin{theorem}[Irrational values] \label{th:irrational} The following conditions hold: \begin{enumerate} \item There exists a $\textsc{Sum}$-automaton whose distribution and expected value w.r.t.\ the uniform distribution are transcendental. \item There exists a $\textsc{LimAvg}$-automaton such that the expected value and the value of almost all words w.r.t.\ the uniform distribution are transcendental. \end{enumerate} \end{theorem} \begin{proof} We assume that the distribution of words is uniform. In the infinite case, this means that the Markov chain contains a single state, where it loops over any letter with probability $\frac{1}{|\Sigma|}$, where $\Sigma$ is the alphabet.
In the finite case, this amounts to a terminating Markov chain with one regular state and one terminating state; it loops over any letter in the non-terminating state with probability $\frac{1}{|\Sigma|+1}$ or goes to the terminating state over $\epsilon$ with probability $\frac{1}{|\Sigma|+1}$. Below we omit the Markov chain, as it is fixed (for a given alphabet). We define a $\textsc{Sum}$-automaton $\mathcal{A}$ (Figure~\ref{fig:irrational}) over the alphabet $\Sigma = \set{a, \# }$ such that $\mathcal{A}(w) = 0$ if $w = a \# a^4 \# \ldots \# a^{4^n}$ and $\mathcal{A}(w) \leq -1$ otherwise. Such an automaton basically picks a block with an inconsistency and verifies it. For example, if $w$ contains a block $\# a^i \# a^j \#$, the automaton $\mathcal{A}$ first assigns $-4$ to each letter $a$ and upon $\#$ it switches to the mode in which it assigns $1$ to each letter $a$. Then, $\mathcal{A}$ returns the value $j - 4\cdot i$. Similarly, we can encode the run that returns the value $4\cdot i - j$. Therefore, all the runs return $0$ if and only if each block of $a$'s is four times as long as the previous block. Finally, $\mathcal{A}$ checks whether the first block of $a$'s has length $1$ and returns $-1$ otherwise. Let $\gamma$ be the probability that a word is of the form $a \# a^4 \# \ldots \# a^{4^n}$. Such a word has length $l_n = \frac{4^{n+1} -1}{3}+n$ and its probability is ${ 3^{-(l_n+1)} }$ (as the probability of any given word with $m$ letters over a two-letter alphabet is $3^{-(m+1)}$). Therefore $\gamma$ is equal to $\sum_{n=0}^{\infty} { 3^{-(l_n+1)} }$. Observe that $\gamma$ written in base $3$ has arbitrarily long sequences of $0$'s, and hence its representation is not eventually periodic. Thus, $\gamma$ is irrational. Due to Roth's Theorem~\cite{roth}, if $\alpha \in \mathbb{R}$ is algebraic but irrational, then there are only finitely many pairs $(p,q)$ such that $|\alpha - \frac{p}{q}| \leq \frac{1}{q^3}$.
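The quantities $l_n$ and $\gamma$ are easy to check numerically; the Python sketch below (a sanity check of ours, not part of the proof) computes partial sums of $\gamma$ exactly and verifies the growth inequality $l_{i+1} > 3(l_i+1)$ that is used together with Roth's theorem:

```python
from fractions import Fraction

def length_l(n: int) -> int:
    # length of a # a^4 # ... # a^(4^n):
    # (4^(n+1) - 1) / 3 letters 'a' plus n letters '#'
    return (4 ** (n + 1) - 1) // 3 + n

# partial sums of gamma = sum_n 3^(-(l_n + 1)), computed exactly
gamma_partial = sum(Fraction(1, 3 ** (length_l(n) + 1)) for n in range(6))

# the growth inequality driving the transcendence argument
assert all(length_l(i + 1) > 3 * (length_l(i) + 1) for i in range(2, 12))

print(length_l(0), length_l(1), float(gamma_partial))
```

The first terms give $l_0 = 1$, $l_1 = 6$, and the partial sums of $\gamma$ converge extremely fast, reflecting the long runs of $0$'s in the ternary expansion.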
We show that there are infinitely many such pairs for $\gamma$, and hence it is transcendental. Consider $i \in \mathbb{N}$ and let $p_i, q_i \in \mathbb{N}$ be such that $q_i = 3^{l_i+1}$ and $\frac{p_i}{q_i} = \sum_{n=0}^{i} { 3^{-(l_n+1)}}$. Then, \[ 0 < \gamma - \frac{p_i}{q_i} < 2\cdot 3^{-(l_{i+1}+1)} \] Observe that for $i>1$ we have $l_{i+1} > 3 (l_{i}+1)$ and hence \[ \gamma - \frac{p_i}{q_i} < 2\cdot 3^{-(l_{i+1}+1)} < \frac{2}{3} 3^{-3(l_i+1)} < \frac{1}{q_i^3}. \] Therefore, $\gamma$ is transcendental. Observe that $\gamma = 1 - \mathbb{D}_{\mathcal{A}}(-1)$. Therefore, $\mathbb{D}_{\mathcal{A}}(-1)$ is transcendental. For the expected value, we construct $\mathcal{A}'$ such that for every word $w$ we have $\valueL{\mathcal{A}'}(w) = \min(\valueL{\mathcal{A}}(w),-1)$. This can be done by adding to $\mathcal{A}$ an additional initial state $q_0$, which starts an automaton that assigns value $-1$ to all words. Observe that $\mathcal{A}$ and $\mathcal{A}'$ differ only on words $w$ of the form $a \# a^4 \# \ldots \# a^{4^n}$, where $\mathcal{A}(w) = 0$ and $\mathcal{A}'(w) = -1$. On all other words, both automata return the same values. Therefore, $\mathbb{E}(\mathcal{A}) - \mathbb{E}(\mathcal{A}') = \gamma$. It follows that at least one of the values $\mathbb{E}(\mathcal{A})$, $\mathbb{E}(\mathcal{A}')$ is transcendental. The same construction works for $\textsc{LimAvg}$-automata. We take $\mathcal{A}$ defined as above and convert it to a $\textsc{LimAvg}$-automaton $\mathcal{A}Inf$ over $\Sigma' = \Sigma \cup \{\$\}$, where the fresh letter $\$$ resets the automaton, i.e., $\mathcal{A}Inf$ has transitions labeled by $\$$ from any final state of $\mathcal{A}$ to any of its initial states. We apply the same construction to $\mathcal{A}'$ defined as above and denote the resulting automaton by $\mathcal{A}Inf'$. Observe that $\mathbb{E}(\mathcal{A}Inf) = \mathbb{E}(\mathcal{A})$ (resp., $\mathbb{E}(\mathcal{A}Inf') = \mathbb{E}(\mathcal{A}')$).
To see that, consider random variables $X_1, X_2, \ldots$ defined on $\Sigma^{\omega}$, where $X_i(w)$ is the average value $\frac{1}{|u|}\valueL{\mathcal{A}}(u)$ of the $i$-th block $\$u\$$ in $w$, i.e., $w =u_1 \$ u_2 \$ \ldots \$u_i \$ \ldots$, all $u_j$ are from $\Sigma^*$ and $u = u_i$. Observe that $X_1, X_2, \ldots$ are independent and identically distributed random variables, and hence with probability $1$ we have \[ \liminf\limits_{s \rightarrow \infty} \frac{1}{s} (X_1 + \ldots + X_s) = \limsup\limits_{s \rightarrow \infty} \frac{1}{s} (X_1 + \ldots + X_s) = \mathbb{E}(X_i) = \mathbb{E}(\mathcal{A}) \] Therefore, with probability $1$ over words $w$ we have $\valueL{\mathcal{A}Inf}(w) = \mathbb{E}(\mathcal{A})$. It follows that $\mathbb{E}(\mathcal{A}Inf) = \mathbb{E}(\mathcal{A})$ and the minimal $\lambda$ such that $\mathbb{D}_{\mathcal{A}Inf}(\lambda) = 1$ equals $\mathbb{E}(\mathcal{A})$. Similarly, $\mathbb{E}(\mathcal{A}Inf') = \mathbb{E}(\mathcal{A}')$ and the minimal $\lambda$ such that $\mathbb{D}_{\mathcal{A}Inf'}(\lambda) = 1$ equals $\mathbb{E}(\mathcal{A}')$. Therefore, for one of the automata $\mathcal{A}Inf, \mathcal{A}Inf'$, the value of almost all words and the expected value are transcendental. \begin{figure} \caption{The automaton $\mathcal{A}$} \label{fig:irrational} \end{figure} \end{proof} \section{The exact value problems} \label{s:exact} In this section we consider the probabilistic questions for non-deterministic $\textsc{Sum}$-automata and $\textsc{LimAvg}$-automata, i.e., the problems of computing the exact values of the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ and the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ w.r.t.\ a Markov chain $\mathcal{M}$ and an $f$-automaton $\mathcal{A}$.
The answers to these problems may be irrational (Theorem~\ref{th:irrational}), but one could argue that there might be some representation of irrational numbers that can be employed to avoid this problem. We prove that this is not the case by showing that computing the exact value in any representation with decidable equality of two numbers is impossible. \begin{theorem}\label{th:limavg-undecidable} The following conditions hold: \begin{enumerate} \item The expected value and the distribution of (non-deterministic) $\textsc{Sum}$-automata are uncomputable, even for the uniform probability measure. \item The expected value and the value of almost all words (if it exists) of (non-deterministic) $\textsc{LimAvg}$-automata are uncomputable, even for the uniform probability measure. \end{enumerate} \end{theorem} \begin{proof} The proof is by a (Turing) reduction from the quantitative universality problem for $\textsc{Sum}$-automata, which is undecidable~\cite{Krob94,AlmagorBK11}: \noindent\fbox{\parbox{0.96\textwidth}{\emph{The quantitative universality problem for $\textsc{Sum}$-automata}: Given a $\textsc{Sum}$-automaton with weights $-1$, $0$ and $1$, decide whether for all words $w$ we have $\valueL{\mathcal{A}}(w) \leq 0$. }} We first discuss reductions to the probabilistic problems for $\textsc{Sum}$-automata. Consider an instance of the quantitative universality problem, which is a $\textsc{Sum}$-automaton $\mathcal{A}$. If there is a word $w$ with value greater than $0$, then due to the uniformity of the probability measure we have $\mathbb{P}(w)>0$, and thus $\mathbb{D}_{\mathcal{A}}(0) < 1$. Otherwise, clearly $\mathbb{D}_{\mathcal{A}}(0) = 1$. Therefore, solving the universality problem amounts to deciding whether $\mathbb{D}_{\mathcal{A}}(0) = 1$, and thus the latter problem is undecidable.
For the expected value, we construct a $\textsc{Sum}$-automaton $\mathcal{A}'$ such that for every word $w$ we have $\valueL{\mathcal{A}'}(w) = \min(\valueL{\mathcal{A}}(w),0)$. Observe that $\mathbb{E}(\mathcal{A}) = \mathbb{E}(\mathcal{A}')$ if and only if for every word $w$ we have $\valueL{\mathcal{A}}(w) \leq 0$, i.e., the answer to the universality problem is YES. Therefore, there is no Turing machine that, given a $\textsc{Sum}$-automaton $\mathcal{A}$, computes $\mathbb{E}(\mathcal{A})$ (in any representation allowing for effective equality testing). For the $\textsc{LimAvg}$ case, we construct a $\textsc{LimAvg}$-automaton $\mathcal{A}Inf$ from the $\textsc{Sum}$-automaton $\mathcal{A}$, by connecting all accepting states (of $\mathcal{A}$) with all initial states by transitions of weight $0$ labeled by an auxiliary letter $\#$. We construct $\mathcal{A}Inf'$ from $\mathcal{A}'$ in the same way. The automata $\mathcal{A}Inf, \mathcal{A}Inf'$ have been constructed from $\mathcal{A}$ and, respectively, $\mathcal{A}'$ as in the proof of Theorem~\ref{th:irrational}, and virtually the same argument shows that for almost all words $w$ (i.e., with probability $1$) we have $\valueL{\mathcal{A}Inf}(w) = \mathbb{E}(\mathcal{A})$ (resp., $\valueL{\mathcal{A}Inf'}(w) = \mathbb{E}(\mathcal{A}')$). Therefore, $\mathbb{E}(\mathcal{A}Inf) = \mathbb{E}(\mathcal{A}Inf')$ if and only if for every finite word $u$ we have $\valueL{\mathcal{A}}(u) \leq 0$. In consequence, there is no Turing machine computing the expected value of a given $\textsc{LimAvg}$-automaton. Furthermore, since $\mathcal{A}Inf$ (resp., $\mathcal{A}Inf'$) returns $\mathbb{E}(\mathcal{A}Inf)$ (resp., $\mathbb{E}(\mathcal{A}Inf')$) on almost all words, there is no Turing machine computing the value of almost all words of a given (non-deterministic) $\textsc{LimAvg}$-automaton.
\end{proof} \subsection{Extrema automata} We discuss the distribution problem for $\textsc{Min}$-, $\textsc{Max}$-, $\textsc{Inf}$- and $\textsc{Sup}$-automata, where $\textsc{Min}$ and $\textsc{Max}$ return the minimal and the maximal element of a finite sequence, respectively, and $\textsc{Inf}$ and $\textsc{Sup}$ return the minimal and the maximal element of an infinite sequence, respectively. The expected value of an automaton can be easily computed from the distribution, as there are only finitely many possible values of a run (each possible value is a label of some transition). \begin{theorem} \label{th:extrema} For $\textsc{Min}$-, $\textsc{Max}$-, $\textsc{Inf}$- and $\textsc{Sup}$-automata $\mathcal{A}$ and a Markov chain $\mathcal{M}$, the expected value and the distribution problems can be solved in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$. \end{theorem} \begin{proof} We discuss the case of $f = \textsc{Inf}$, as the other cases are similar. Consider an $\textsc{Inf}$-automaton $\mathcal{A}$. Observe that every value returned by $\mathcal{A}$ is one of its weights. For each weight $x$ of $\mathcal{A}$, we construct a (non-deterministic) $\omega$-automaton $\mathcal{A}_x$ that accepts only words of value greater than $x$, i.e., $\valueL{\mathcal{A}_x} = \{ w \mid \valueL{\mathcal{A}}(w) > x \}$. To construct $\mathcal{A}_x$, we take $\mathcal{A}$, remove the transitions of weight less than or equal to $x$, and drop all the weights. Therefore, the set of words with value greater than $x$ is regular, and hence it is measurable and we can compute its probability $p_x$ by computing the probability of $\valueL{\mathcal{A}_x}$. The probability of an $\omega$-regular language given by a non-deterministic $\omega$-automaton (without acceptance conditions) can be computed in exponential time in the size of the automaton and polynomial time in the Markov chain defining the probability distribution~\cite[Chapter 10.3]{BaierBook}.
It follows that $p_x$ can be computed in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$. Observe that $p_x = 1 - \mathbb{D}_{\mathcal{M},\mathcal{A}}(x)$, and hence we can answer the distribution question $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda)$ by computing $1 - p_x$ for the maximal weight $x$ that does not exceed $\lambda$. For the expected value, let $x_1, \ldots, x_k$ be all weights of $\mathcal{A}$ listed in ascending order. Let $p_{x_0} = 1$. Observe that for all $i \in \set{1, \ldots, k}$ we have $p_{x_{i-1}} - p_{x_i} = \mathbb{P}_{\mathcal{M}}(\set{w \mid \valueL{\mathcal{A}}(w) = x_i})$, the probability of the set of words of value $x_i$. Therefore, $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \sum_{i=1}^{k} (p_{x_{i-1}} - p_{x_i}) \cdot x_i$, and hence the expected value can be computed in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$. \end{proof} \section{The approximation problems} \label{s:approx} We start the discussion of the approximation problems by showing a hardness result that holds for a wide range of value functions. We say that a function is \emph{$0$-preserving} if its value is $0$ whenever the input consists only of $0$s. The functions $\textsc{Sum}$, $\textsc{LimAvg}$, $\textsc{Min}$, $\textsc{Max}$, $\textsc{Inf}$, $\textsc{Sup}$ and virtually all the functions from the literature~\cite{quantitativelanguages} are $0$-preserving. The hardness result follows from the fact that accepted words have finite values, which we can force to be $0$, while words without accepting runs have infinite values.
The answers in the approximation problems are numbers; to study lower bounds, we consider decision variants of these problems, called the \emph{separation problems}. The \emph{expected separation problem} is a variant of the expected value problem in which the input is enriched with numbers $a, b$ such that $b-a>2\epsilon$, the instance is guaranteed to satisfy $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) \not \in [a, b]$, and the question is whether $\mathbb{E}_{\mathcal{M}}(\mathcal{A})<a$. In the \emph{distribution separation problem}, the input is enriched with numbers $a,b,c,d$ such that $b-a >2\epsilon$ and $d-c >2\epsilon$, the instance is guaranteed to satisfy $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda) \not \in [a, b]$ for all $\lambda \in [c,d]$, and we ask whether $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\frac{c+d}{2})<a$. Note that an algorithm computing one of the approximation problems (for the distribution or the expected value) can be used to decide the corresponding separation question. Conversely, using the separation problem as an oracle, we can perform binary search on the domain to solve the corresponding approximation problem in polynomial time. \begin{theorem} \label{th:all-pspace-hard} The following conditions hold: \begin{enumerate} \item For any $0$-preserving function $f$, the expected separation problem for non-deterministic $f$-automata is $\PSPACE$-hard. \item For any $0$-preserving function $f$ over infinite words, the distribution separation problem for non-deterministic $f$-automata over infinite words is $\PSPACE$-hard. \end{enumerate} \end{theorem} \begin{proof} The proof is via a reduction from the universality question for non-deterministic (unweighted) finite-word automata, which is $\PSPACE$-complete~\cite{HU79}. \noindent\emph{The finite-word case}. We consider the uniform distribution over finite words.
Given a non-deterministic finite-word automaton $\mathcal{A}$, we construct a finite-word $f$-automaton $\mathcal{A}_{\mathrm{fin}}$ by labeling all transitions of $\mathcal{A}$ with $0$. Observe that if there exists a word which is not accepted by $\mathcal{A}$, then the expected value of $\mathcal{A}_{\mathrm{fin}}$ is $\infty$. Otherwise, all words have value $0$ and hence the expected value of $\mathcal{A}_{\mathrm{fin}}$ is $0$. The universality problem for $\mathcal{A}$ reduces to the expected separation problem for $\mathcal{A}_{\mathrm{fin}}$. \noindent\emph{The infinite-word case}. We consider the uniform distribution over infinite words. Given a non-deterministic finite-word automaton $\mathcal{A}$, we construct an infinite-word $f$-automaton $\mathcal{A}Inf$ in the following way. We start with the automaton $\mathcal{A}$. First, we extend the input alphabet with a fresh letter $\#$, which resets the automaton. More precisely, we add transitions labeled by $\#$ between any final state of $\mathcal{A}$ and any initial state of $\mathcal{A}$. Finally, we label all transitions with $0$. The resulting automaton is $\mathcal{A}Inf$. If there exists a finite word $u$ rejected by $\mathcal{A}$, then for every infinite word $w$ containing the infix $\# u \#$ the automaton $\mathcal{A}Inf$ has no infinite run, and hence it assigns value $\infty$ to $w$. Observe that the set of words containing $\# u \#$ has probability $1$ (for any finite word $u$). Therefore, if $\mathcal{A}$ rejects some word, the expected value of $\mathcal{A}Inf$ is $\infty$ and the distribution of $\mathcal{A}Inf$ for any $\lambda \in \mathbb{R}$ is $0$. Otherwise, if $\mathcal{A}$ accepts all words, the expected value of $\mathcal{A}Inf$ is $0$ and the distribution of $\mathcal{A}Inf$ for any $\lambda \geq 0$ is $1$. The universality problem for $\mathcal{A}$ reduces to the separation problems for $\mathcal{A}Inf$.
\end{proof} \Paragraph{Total automata} Theorem~\ref{th:all-pspace-hard} gives us a general hardness result, which is due to acceptance conditions rather than the values returned by weighted automata. In the following, we focus on weights, and we assume that weighted automata are \emph{total}, i.e., they accept all words (resp., almost all words in the infinite-word case). For $\textsc{Sum}$-automata under the totality assumption, the approximate probabilistic questions become \textsc{\#P}-complete. We additionally show that the approximate distribution question for $\textsc{Sum}$-automata is in \textsc{\#P}{} regardless of the totality assumption. \begin{theorem} \label{th:approximation} The following conditions hold: \begin{enumerate} \item The approximate expected value and the approximate distribution questions for total non-deterministic $\textsc{Sum}$-automata are \textsc{\#P}-complete. \item The approximate distribution question for non-deterministic $\textsc{Sum}$-automata is \textsc{\#P}-complete. \end{enumerate} \end{theorem} \begin{proof} \noindent\emph{\textsc{\#P}-hardness}. Consider the problem of counting the number of satisfying assignments of a given propositional formula $\varphi$ in Conjunctive Normal Form (CNF)~\cite{valiant1979complexity,papadimitriou2003computational}. This problem is \textsc{\#P}-complete. We reduce it to the problem of approximating the expected value for total $\textsc{Sum}$-automata. Consider a formula $\varphi$ in CNF over $n$ variables. Let $\mathcal{M}^T$ be a terminating Markov chain over $\{0,1\}$ which at each step produces $0$ and $1$ with equal probability $\frac{1}{3}$, and terminates with probability $\frac{1}{3}$. We define a total $\textsc{Sum}$-automaton $\mathcal{A}_{\varphi}$ such that it assigns $0$ to all words of length different than $n$.
For words $u \in \set{0,1}^n$, the automaton $\mathcal{A}_{\varphi}$ regards $u$ as an assignment for variables of $\varphi$; $\mathcal{A}_{\varphi}$ non-deterministically picks one clause of $\varphi$ and returns $1$ if that clause is satisfied and $0$ otherwise. We can construct such $\mathcal{A}_{\varphi}$ to have polynomial size in $|\varphi|$. Observe that $\mathcal{A}_{\varphi}(u) = 0$ if some clause of $\varphi$ is not satisfied by $u$, i.e., $\varphi$ is false under the assignment given by $u$. Otherwise, if the assignment given by $u$ satisfies $\varphi$, then $\mathcal{A}_{\varphi}(u) = 1$. It follows that the expected value of $\mathcal{A}_{\varphi}$ equals ${3}^{-(n+1)} \cdot C$, where ${3}^{-(n+1)}$ is the probability of generating a word of length $n$ and $C$ is the number of variable assignments satisfying $\varphi$. Therefore, we can compute $C$ by computing the expected value of $\mathcal{A}_{\varphi}$ with any $\epsilon$ less than $0.5 \cdot {3}^{-(n+1)}$. Observe that the automaton $\mathcal{A}_{\varphi}$ returns values $0$ and $1$ and hence the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A}_{\varphi}) = 1 - \mathbb{D}_{\mathcal{M}, \mathcal{A}_{\varphi}}(0)$, where $1 - \mathbb{D}_{\mathcal{M}, \mathcal{A}_{\varphi}}(0)$ is the probability that $\mathcal{A}_{\varphi}$ returns $1$. \noindent\emph{Containment of the approximate distribution question in \textsc{\#P}}. Consider a terminating Markov chain $\mathcal{M}^T$, a $\textsc{Sum}$-automaton $\mathcal{A}$, and $\epsilon \in \mathbb{Q}^+$. Let $C$ be the smallest number such that every non-zero probability in $\mathcal{M}^T$ is at least $2^{-C}$. Such $C$ is polynomial in the input size. Consider $N=C \cdot \mathrm{len}(\epsilon)+1$ and let $\mathbb{D}_{\mathcal{M}^T, \mathcal{A}}(\lambda, N)$ be the distribution of $\mathcal{A}$ over words up to length $N$, i.e., $\mathbb{P}_{\mathcal{M}^T}(\{w \mid |w| \leq N\ \wedge \valueL{\mathcal{A}}(w) \leq \lambda\})$. 
We show that the distribution of $\mathcal{A}$ and the distribution of $\mathcal{A}$ over words up to length $N$ differ by at most $\frac{\epsilon}{2}$, i.e., that \[ |\mathbb{D}_{\mathcal{M}^T, \mathcal{A}}(\lambda) - \mathbb{D}_{\mathcal{M}^T, \mathcal{A}}(\lambda,N)| \leq \frac{\epsilon}{2}. \] To do so, let $p_n$, for $n \in \mathbb{N}$, be the probability that $\mathcal{M}^T$ emits a word of length greater than $n$. From any state of $\mathcal{M}^T$, the probability of moving to a terminating state is at least $2^{-C}$. We can (very roughly) bound the probability $p_{i}$ of generating a word of length greater than $i$ by $(1-2^{-C})^i$. This means that $p_n$ decreases exponentially with $n$. Since $(1-\frac{1}{n})^n \leq \frac{1}{2}$ for all $n>1$, we obtain the desired inequality. Let $K = (N+1)\cdot \log(|\Sigma|) \cdot \epsilon^{-1}+1$. We build a non-deterministic Turing machine $H_1$ such that, on input $\mathcal{M}^T$, $\mathcal{A}$, $\epsilon$ and $\lambda$, the number $c_A$ of accepting computations of $H_1$ satisfies the following: \[ \Bigl\lvert \mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda, N) - \frac{c_A}{2^K}\Bigr\rvert \leq \frac{\epsilon}{2}. \] To $\epsilon$-approximate $\mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda)$, we need to compute $c_A$ and divide it by $2^K$, which can be done in polynomial time. The machine $H_1$ works as follows. Given the input $\mathcal{M}^T$, $\mathcal{A}$, $\epsilon$, it non-deterministically generates a string $u\alpha$, where $u \in (\Sigma \cup \{\#\})^N$ is a word and $\alpha \in \set{0, 1}^K$ is a number written in binary. The machine rejects unless $u$ is of the form $wv$, where $w\in \Sigma^*$ and $v\in\set{\#}^*$. Then, the machine accepts if $\valueL{\mathcal{A}}(w) \leq \lambda$ and $\alpha \leq 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w)$.
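The counting scheme behind $H_1$ can be illustrated on a toy finite distribution; in the Python sketch below, the words, probabilities and values are made up for illustration, and each accepted word contributes $\lfloor 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w)\rfloor$ accepting computations (one per guessed $\alpha$):

```python
from math import floor

# Toy word space (hypothetical probabilities p(w) and values val(w)).
words = {"aa": (0.20, -1), "ab": (0.15, 2), "ba": (0.40, 0), "bb": (0.25, 3)}
lam, K = 0.5, 20

# Exact distribution D(lam) on this toy space.
exact = sum(p for p, v in words.values() if v <= lam)

# Accepting computations: floor(2^K * p(w)) for each accepted word w.
c_A = sum(floor(2 ** K * p) for p, v in words.values() if v <= lam)
approx = c_A / 2 ** K

# Each word contributes an error below 2^-K, so the total error is small.
assert abs(exact - approx) <= len(words) * 2 ** -K
print(exact, approx)
```

Dividing the number of accepting computations by $2^K$ recovers the distribution up to an error that shrinks exponentially in $K$, exactly as in the estimate below.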
Therefore, provided that $H_1$ generates $w$ with $\valueL{\mathcal{A}}(w) \leq \lambda$, the number of accepting computations $c_A^w$ equals $\lfloor 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w) \rfloor$. It follows that $c_A^w$ divided by $2^K$ is a $2^{-K}$-approximation of $\mathbb{P}_{\mathcal{M}^T}(w)$, i.e., \[ \Bigl\lvert \mathbb{P}_{\mathcal{M}^T}(w) - \frac{c_A^w}{2^K} \Bigr\rvert < 2^{-K}. \] The total number of accepting paths of $H_1$ is given by \[ c_A = \sum_{w \colon |w| \leq N\ \wedge \valueL{\mathcal{A}}(w) \leq \lambda} c_A^w. \] We estimate the difference between $\mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda, N)$ and the value $\frac{c_A}{2^K}$: \[ \Bigl\lvert \mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda, N) - \frac{c_A}{2^K} \Bigr\rvert \leq \sum_{w \colon |w| \leq N\ \wedge \valueL{\mathcal{A}}(w) \leq \lambda} \Bigl\lvert \mathbb{P}_{\mathcal{M}^T}(w) - \frac{c_A^w}{2^K}\Bigr\rvert \leq |\Sigma|^{N+1} \cdot 2^{-K} < \frac{\epsilon}{2}. \] \noindent\emph{Containment of the approximate expected value question in \textsc{\#P}}. Assume that $\mathcal{A}$ is total. For readability, we assume that $\mathcal{A}$ has only integer weights. If it does not, we can multiply all weights by the least common multiple of the denominators of the weights in $\mathcal{A}$; this operation multiplies the expected value by the same factor. Recall that $C$ is the smallest number such that every non-zero probability in $\mathcal{M}^T$ is at least $2^{-C}$. Let $W$ be the maximal absolute value of the weights in $\mathcal{A}$, let $M = C\, \mathrm{len}(\epsilon)\cdot \log(C W) +1$, and let $\mathbb{E}_{\mathcal{M}^T}(\mathcal{A}, M)$ be the expected value of $\mathcal{A}$ w.r.t.\ $\mathcal{M}^T$ restricted to words of length at most $M$, i.e., computing only the finite sum from the definition of the expected value.
We show that \[\bigl\lvert \mathbb{E}_{\mathcal{M}^T}(\mathcal{A})-\mathbb{E}_{\mathcal{M}^T}(\mathcal{A},M)\bigr\rvert \leq \frac{\epsilon}{2}.\] Recall that $p_n$ is the probability that $\mathcal{M}^T$ emits a word of length greater than $n$, and $p_{n} \leq (1-2^{-C})^n$. Since $\mathcal{A}$ is total, the value of every word $w$ is finite and belongs to the interval $[-|w| \cdot W, |w| \cdot W]$; in particular, the value of a word of length bounded by $i$ is at most $i \cdot W$. Therefore, the expected value of $\mathcal{A}$ over words of length greater than $k$ is bounded from above by \( \sum_{i \geq k} p_{i} \cdot i \cdot W \leq W \cdot (1- 2^{-C})^k \cdot (k+1) \). W.l.o.g.\ we assume that there are no transitions to the initial state in $\mathcal{A}$. Next, we transform $\mathcal{A}$ to an automaton $\mathcal{A}'$ that returns natural numbers on all words of length at most $M$ by adding $W \cdot M$ to every transition from the initial state. Observe that $\mathbb{E}_{\mathcal{A}'} = \mathbb{E}_{\mathcal{A}} + W\cdot M$ and $D = 2\cdot W \cdot M$ is an upper bound on the values returned by the automaton $\mathcal{A}'$ on words of length at most $M$. Finally, we construct a Turing machine $H_2$, similar to $H_1$. Let $K = (D+1) \cdot (M+1)\cdot (|\Sigma|+1) \cdot \epsilon^{-1}+1$. $H_2$ non-deterministically generates a string $u\alpha$, where $u \in (\Sigma \cup \{\#\})^M$ is a word and $\alpha \in \set{0, 1}^K$ is a number written in binary, and also non-deterministically picks a natural number $\beta \in [0, D]$. The machine rejects unless $u$ is of the form $wv$, where $w\in \Sigma^*$ and $v\in\set{\#}^*$. Then $H_2$ accepts if and only if $\beta < \valueL{\mathcal{A}'}(w)$ and $\alpha \leq 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w)$. Then, provided that $H_2$ generates $w$, the number of accepting computations $c_A^w$ equals $\lfloor 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w) \rfloor \cdot \valueL{\mathcal{A}'}(w)$.
Therefore, using estimates similar to the distribution case, we obtain the desired inequality \[\bigl\lvert \mathbb{E}_{\mathcal{M}^T}(\mathcal{A}',M) - \frac{c_A}{2^K}\bigr\rvert \leq \frac{\epsilon}{2}.\] Finally, we obtain that $\frac{c_A}{2^K} -WM$ is an $\epsilon$-approximation of $ \mathbb{E}_{\mathcal{M}^T}(\mathcal{A})$, i.e., \[\bigl\lvert \mathbb{E}_{\mathcal{M}^T}(\mathcal{A}) - \big(\frac{c_A}{2^K} -WM\big)\bigr\rvert \leq {\epsilon}.\] \end{proof} We show that the approximation problem for $\textsc{LimAvg}$-automata is $\PSPACE$-hard over the class of total automata. \begin{theorem} \label{th:approximation-sum} The separation problems for non-deterministic total $\textsc{LimAvg}$-automata are $\PSPACE$-hard. \end{theorem} \begin{proof} We consider the uniform distribution over infinite words. Given a non-deterministic finite-word automaton $\mathcal{A}$, we construct an infinite-word $\textsc{LimAvg}$-automaton $\mathcal{A}^{\infty}$ from $\mathcal{A}$ in the following way. We introduce an auxiliary symbol $\#$ and we add transitions labeled by $\#$ from any final state of $\mathcal{A}$ to any initial state of $\mathcal{A}$. Then, we label all transitions of $\mathcal{A}^{\infty}$ with $0$. Finally, we connect all non-accepting states of $\mathcal{A}$ to an auxiliary state $q_{\mathrm{sink}}$, which is a sink state with all transitions of weight $1$. The automaton $\mathcal{A}^{\infty}$ is total. Observe that if $\mathcal{A}$ is universal, then $\mathcal{A}^{\infty}$ has a run of value $0$ on every word. Otherwise, if $\mathcal{A}$ rejects a word $w$, then upon reading a subword $\# w \#$, the automaton $\mathcal{A}^{\infty}$ reaches $q_{\mathrm{sink}}$, i.e., the value of the whole word is $1$. Almost all words contain an infix $\# w \#$ and hence almost all words have value $1$.
Therefore, the universality problem for $\mathcal{A}$ reduces to the problem of deciding whether for almost all words $w$ we have $\valueL{\mathcal{A}^{\infty}}(w) = 0$ or for almost all words $w$ we have $\valueL{\mathcal{A}^{\infty}}(w) = 1$. The latter problem reduces to the expected separation problem as well as the distribution separation problem for $\mathcal{A}^{\infty}$. \end{proof} \section{Approximating $\textsc{LimAvg}$-automata in exponential time} In this section we develop algorithms for the approximate expected value and approximate distribution questions for (non-deterministic) $\textsc{LimAvg}$-automata. The presented algorithms work in exponential time in the size of the automaton, and in polynomial time in the size of the Markov chain and in the precision. The case of $\textsc{LimAvg}$-automata is significantly more complex than the other cases and hence we present the algorithms in stages. First, we restrict our attention to \emph{recurrent} $\textsc{LimAvg}$-automata and the uniform distribution over infinite words. Recurrent automata are strongly connected automata with an appropriate set of initial states. We show that deterministic $\textsc{LimAvg}$-automata with bounded look-ahead approximate recurrent automata. Next, in Section~\ref{s:non-uniform} we extend this result to non-uniform measures given by Markov chains. Finally, in Section~\ref{s:non-recurrent} we show the approximation algorithms for all (non-deterministic) $\textsc{LimAvg}$-automata and measures given by Markov chains. \Paragraph{Recurrent automata} Let $\mathcal{A} = (\Sigma, Q, Q_0,\delta)$ be a non-deterministic $\textsc{LimAvg}$-automaton and let $\widehat{\delta}$ be the extension of $\delta$ to all words in $\Sigma^*$.
The automaton $\mathcal{A}$ is \emph{recurrent} if and only if the following conditions hold: \begin{enumerate}[(1)] \item for every state $q \in Q$ there is a finite word $u$ such that $\widehat{\delta}(q, u) = Q_0$, and \item for every set $S \subseteq Q$, if $\widehat{\delta}(Q_0, w) = S$ for some word $w$, then there is a finite word $u$ such that $\widehat{\delta}(S, u) = Q_0$. \end{enumerate} Intuitively, in a recurrent automaton $\mathcal{A}$, if two runs deviate at some point, then with high probability it is possible to synchronize them. More precisely, for almost all words $w$, if $\pi$ is a run on $w$ and $\rho$ is a finite run up to position $i$, then $\rho$ can be extended to an infinite run that eventually coincides with $\pi$. Moreover, we show that with high probability they synchronize within a number of steps doubly-exponential in $|\mathcal{A}|$ (Lemma~\ref{l:resetWrods}). \begin{example} Consider the automaton depicted in Figure~\ref{fig:aut}. This automaton is recurrent with the initial set of states $Q_0 = \set{q_x, q_a, q_b}$. For condition (1) from the definition of recurrent automata, observe that for every state $q$ we have $\widehat{\delta}(q, abab) = Q_0$. For condition (2), observe that $\widehat{\delta}(Q_0, b) = Q_0$, $\widehat{\delta}(Q_0, a) = \set{q_a, q_b}$ and $\widehat{\delta}(\set{q_a, q_b}, a) = \widehat{\delta}(\set{q_a, q_b}, b) = Q_0$. The automaton would also be recurrent with $Q_0=\set{q_a, q_b}$, but with no other choice of the initial set.
Consider an automaton $\mathcal{A}$ depicted below: \begin{center} { \centering \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.0cm,semithick] \tikzstyle{every state}=[fill=white,draw=black,text=black,minimum size=0.4cm] \tikzset{ datanode/.style = { draw, circle, text width=0.30cm, inner ysep = +0.4em}, labelnode/.style = { draw, rounded corners, align=center, fill=white} } \node[datanode] (A) at (0,0) { $q_L$ }; \node[datanode] (B) at (2,0) { $q_R$ }; \draw (A) edge[bend left] node[above] {$a:0$} (B); \draw (B) edge[bend left] node {$a:0$} (A); \end{tikzpicture} } \end{center} The automaton $\mathcal{A}$ is recurrent if the set of initial states is either $\{q_L\}$ or $\{q_R\}$, but not in the case of $\{q_L, q_R\}$. Indeed, if we pick $q_L$ (resp., $q_R$) as the only initial state, we can never reach the whole set $\{q_L, q_R\}$. This matches our intuition that runs starting in $q_L$ and $q_R$ will never synchronize. \end{example} We discuss properties of recurrent automata. For every $\mathcal{A}$ that is strongly connected as a graph, there exists a set of initial states with which it becomes recurrent. Indeed, consider $\mathcal{A}$ as an unweighted $\omega$-automaton and construct a deterministic $\omega$-automaton $\mathcal{A}^D$ through the power-set construction applied to $\mathcal{A}$. Observe that $\mathcal{A}^D$ has a single bottom strongly-connected component (BSCC), i.e., a strongly connected component such that there are no transitions leaving that component, and any set $Q_0$ from that BSCC can serve as the set of initial states. Conversely, for any strongly connected automaton $\mathcal{A}$, if $Q_0$ belongs to the BSCC of $\mathcal{A}^D$, then $\mathcal{A}$ is recurrent. Observe that for a recurrent automaton $\mathcal{A}$ the probability of the set of words accepted by $\mathcal{A}$ is either $0$ or $1$.
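The two recurrence conditions and the power-set construction above can be checked naively with a short Python sketch. This is purely illustrative and not part of any construction in this paper; the dictionary encoding of $\delta$ and all function names are ours.

```python
def step(delta, S, a):
    # One step of the power-set construction: successors of the set S on letter a.
    return frozenset(q2 for q in S for q2 in delta.get((q, a), ()))

def reachable_sets(delta, S0, alphabet):
    # All sets hat-delta(S0, u), for u ranging over finite words (including u = epsilon).
    seen, frontier = {S0}, [S0]
    while frontier:
        S = frontier.pop()
        for a in alphabet:
            T = step(delta, S, a)
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return seen

def is_recurrent(delta, states, Q0, alphabet):
    Q0 = frozenset(Q0)
    # Condition (1): from every single state q some word leads exactly to Q0.
    cond1 = all(Q0 in reachable_sets(delta, frozenset({q}), alphabet) for q in states)
    # Condition (2): every set reachable from Q0 can reach Q0 back.
    cond2 = all(Q0 in reachable_sets(delta, S, alphabet)
                for S in reachable_sets(delta, Q0, alphabet))
    return cond1 and cond2

# The two-state automaton q_L <--a--> q_R from the example above:
delta = {("L", "a"): {"R"}, ("R", "a"): {"L"}}
assert is_recurrent(delta, {"L", "R"}, {"L"}, ["a"])
assert not is_recurrent(delta, {"L", "R"}, {"L", "R"}, ["a"])
```

On the example automaton, condition (1) fails for $Q_0 = \{q_L, q_R\}$, since singleton sets only ever reach singleton sets, matching the discussion above.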
Now, for a word $w$, consider the sequence of reachable sets of states $\Pi_w$ defined as $Q_0, \widehat{\delta}(Q_0, w[1,1]), \widehat{\delta}(Q_0, w[1,2]), \ldots$ Since $\mathcal{A}^D$ has a single BSCC containing $Q_0$, all sets of $\Pi_w$ belong to that BSCC, and hence either for almost all words $w$ the sequence $\Pi_w$ eventually contains only empty sets, or for all words $w$ the sequence $\Pi_w$ consists of non-empty sets only. Observe that $\mathcal{A}$ has an infinite run on $w$ if and only if $\Pi_w$ consists of non-empty sets. It follows that the probability of the set of words having any infinite run in $\mathcal{A}$ is either $0$ or $1$. While Markov chains generate words letter by letter, to define a minimal-value run on a word we need the completely generated word, i.e., the optimal transition at some position $i$ may depend on some positions $j>i$ in the word. This precludes the application of standard techniques for probabilistic verification, which rely on the fact that the word and the run on it are generated simultaneously~\cite{DBLP:conf/focs/Vardi85,DBLP:journals/jacm/CourcoubetisY95,BaierBook}. \Paragraph{Key ideas} Our main idea is to replace non-determinism with \emph{bounded look-ahead}. Such an approximation is necessarily inexact, as the expected value of a deterministic automaton with bounded look-ahead is always rational, whereas Theorem~\ref{th:irrational} shows that the values of non-deterministic automata may be irrational. Nevertheless, we show that bounded look-ahead is sufficient to \emph{approximate} the probabilistic questions for recurrent automata (Lemma~\ref{l:convergence}). Furthermore, the approximation can be done effectively (Lemma~\ref{l:jumpingRuns}), which in turn gives us an exponential-time approximation algorithm for recurrent automata (Lemma~\ref{l:singleSCC}). Then, we comment on the extension to all distributions given by Markov chains (Section~\ref{s:non-uniform}).
Finally, we show the proof for all $\textsc{LimAvg}$-automata over probability measures given by Markov chains (Theorem~\ref{th:approximation-limavg}). \subsection{Nearly-deterministic approximations} \Paragraph{Jumping runs} Let $k>0$ and let $N_k$ be the set of natural numbers not divisible by $k$. A \emph{$k$-jumping run} $\xi$ of $\mathcal{A}$ on a word $w$ is an infinite sequence of states such that for every position $i \in N_k$ we have $(\xi[i-1],w[i],\xi[i]) \in \delta$. The $i$-th \emph{block} of a $k$-jumping run is the sequence $\xi[ki, k(i+1)-1]$; within a block the sequence $\xi$ is consistent with the transitions of $\mathcal{A}$. The positions $k,2k, \ldots \notin N_k$ are \emph{jump} positions, where the sequence $\xi$ need not obey the transition relation of $\mathcal{A}$. The cost of a transition of a $k$-jumping run $\xi$ within a block is defined as usual, while the cost of a jump is defined as the minimal weight of $\mathcal{A}$. The value of a $k$-jumping run $\xi$ is defined as the limit average computed for such costs. \Paragraph{Optimal and block-deterministic jumping runs} We say that a $k$-jumping run $\xi$ on a word $w$ is \emph{optimal} if its value is the infimum over the values of all $k$-jumping runs on $w$. We show that optimal $k$-jumping runs can be constructed nearly deterministically, i.e., only looking ahead to see the whole current block. For every $S \subseteq Q$ and $u \in \Sigma^k$ we fix a run $\xi_{S,u}$ on $u$ starting in one of the states of $S$, which has the minimal average weight. Then, given a word $w \in \Sigma^{\omega}$, we define a $k$-jumping run $\xi$ as follows. We divide $w$ into $k$-letter blocks $u_1, u_2, \ldots$ and we put $\xi = \xi_{S_0, u_1} \xi_{S_1, u_2} \ldots$, where $S_0 = \set{q_0}$ and, for $i>0$, $S_i$ is the set of states reachable from $q_0$ on the word $u_1 \ldots u_i$. The run $\xi$ is a $k$-jumping run and it is indeed optimal.
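This construction can be sketched in Python over a finite prefix. The sketch uses our own encoding (\texttt{delta} maps a pair $(q,a)$ to a dictionary of weighted successors) and, for simplicity, ignores the jump costs, which contribute at most one minimal-weight transition per $k$-letter block to the limit average.

```python
def best_block_avg(delta, S, u):
    # Minimal average weight of a run on block u starting in some state of S,
    # computed by dynamic programming over positions of u.
    cost = {q: 0.0 for q in S}
    for a in u:
        nxt = {}
        for q, c in cost.items():
            for q2, w in delta.get((q, a), {}).items():
                if q2 not in nxt or c + w < nxt[q2]:
                    nxt[q2] = c + w
        cost = nxt
    return min(cost.values()) / len(u)

def jumping_value(delta, Q0, blocks):
    # Average cost of the block-deterministic jumping run: in each block we take
    # an optimal run xi_{S_i, u}, where S_i is the full set of reachable states.
    total, length, S = 0.0, 0, set(Q0)
    for u in blocks:
        total += best_block_avg(delta, S, u) * len(u)
        length += len(u)
        for a in u:  # update S to the states reachable after this block
            S = {q2 for q in S for q2 in delta.get((q, a), {})}
    return total / length

# Hypothetical toy automaton: from p, on a, stay (weight 1) or move to q (weight 0).
delta = {("p", "a"): {"p": 1.0, "q": 0.0}, ("q", "a"): {"q": 0.0}}
assert jumping_value(delta, {"p"}, ["aa", "aa"]) == 0.0
```

Each block is handled deterministically given the reachable set and the block's letters, which is exactly the bounded look-ahead discussed above.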
We call such runs \emph{block-deterministic}: they can be constructed based on finite memory, namely the set of reachable states $S_i$ and the current block of the input word. Since all runs of $\mathcal{A}$ are in particular $k$-jumping runs, the value of (any) optimal $k$-jumping run on $w$ is less than or equal to $\mathcal{A}(w)$. We show that for recurrent $\textsc{LimAvg}$-automata, the values of optimal $k$-jumping runs on $w$ converge to $\mathcal{A}(w)$ as $k$ tends to infinity. To achieve this, we construct a run of $\mathcal{A}$ which tries to ``follow'' a given jumping run, i.e., after almost all jump positions it is able to synchronize with the jumping run quickly. \Paragraph{Proof plan} Let $k>0$. Consider a word $w$ and some optimal $k$-jumping run $\xi_o$ on $w$. We construct a run $\pi_f$ of $\mathcal{A}$ in the following way. Initially, both runs start in some initial state $q_0$ and coincide. However, at the first jump position $\xi_o$ may take a move that is not a transition of $\mathcal{A}$. The run $\pi_f$ attempts to synchronize with $\xi_o$, i.e., to be at the same position in the same state, and then repeats the transitions of $\xi_o$ until the end of the block. Then, in the next block, regardless of whether $\pi_f$ managed to synchronize with $\xi_o$ or not, we repeat the process. We say that a run $\pi_f$ constructed in such a way is a \emph{run following} $\xi_o$. In Lemma~\ref{l:resetWrods} below, we show that for $m \in \mathbb{N}$ large enough, with high probability, the run $\pi_f$ synchronizes with $\xi_o$ within $m$ steps. We then show that if $m$ is large enough and $k$ is much larger than $m$, then the values of the runs $\pi_f$ and $\xi_o$ differ by less than $\epsilon$ (Lemma~\ref{l:convergence}). Let $q$ be a state of $\mathcal{A}$ and $u$ be a finite word.
We say that a word $v$ \emph{saturates} the pair $(q, u)$ if the set of states reachable from $q$ over $v$ equals the set of all states reachable over $uv$ from the initial states, i.e., $\widehat{\delta}(Q_0, uv) = \widehat{\delta}(q, v)$. \begin{example} Consider the automaton from Figure~\ref{fig:aut} with $Q_0=Q$. For any $(q, u)$, any word that contains the infix $abab$ saturates $(q, u)$, as $\widehat{\delta}(Q_0, uv'abab) = \widehat{\delta}(q, v'abab)= Q_0$ for any $v'$. \end{example} Observe that in the above, the probability that a random word of length $4\ell$ does not saturate $(q,u)$ is bounded by $(1-\frac{1}{16})^\ell$. So the probability that a random word $v$ saturates $(q, u)$ quickly tends to $1$ with $|v|$. The next lemma shows that this is not a coincidence. \begin{lemma} \label{l:resetWrods} Let $\mathcal{A}$ be a recurrent NFA, $u$ be a finite word, and $q \in \widehat{\delta}(Q_0, u)$. For every $\Delta >0$ there exists a natural number $\ell=2^{2^{O(|\mathcal{A}|)}} \log (\rev{\Delta})$ such that over the uniform distribution on $\Sigma^{\ell}$ we have $\mathbb{P}(\{v \in \Sigma^{\ell} \mid v\text{ saturates } (q,u) \}) \geq 1 - \Delta$. \end{lemma} \begin{proof} First, observe that there exists a word $v$ saturating $(q,u)$. Let $S = \widehat{\delta}(Q_0, u)$. Then, $q \in S$. Since $\mathcal{A}$ is recurrent, there exists a word $\alpha$ such that $Q_0 = \widehat{\delta}(q, \alpha)$. It follows that $S = \widehat{\delta}(q, \alpha u)$. Since $q \in S$, we have $\widehat{\delta}(q, \alpha u) \subseteq \widehat{\delta}(S, \alpha u)$. It follows that for $i\geq 0$ we have $\widehat{\delta}(S, (\alpha u)^i ) = \widehat{\delta}(q, (\alpha u)^{i+1}) \subseteq \widehat{\delta}(S, (\alpha u)^{i+1})$. Therefore, for some $i >0$ we have $\widehat{\delta}(q, (\alpha u)^{i}) = \widehat{\delta}(S, (\alpha u)^{i})$, i.e., the word $(\alpha u)^{i}$ saturates $(q,u)$. Now, we observe that there exists a saturating word whose length is exponentially bounded in $|\mathcal{A}|$.
We start with the word $v_0$ equal to $(\alpha u)^{i}$ and we pick any two positions $k < l$ such that $\widehat{\delta}(q, v_0[1,k]) = \widehat{\delta}(q, v_0[1,l])$ and $\widehat{\delta}(S, v_0[1,k]) = \widehat{\delta}(S, v_0[1,l])$. Observe that for $v_1$ obtained from $v_0$ by the removal of $v_0[k+1, l]$, the reachable sets do not change, i.e., $\widehat{\delta}(q, v_0) = \widehat{\delta}(q, v_1)$ and $\widehat{\delta}(S, v_0) = \widehat{\delta}(S, v_1)$. We iterate this process until there are no such positions. The resulting word $v'$ satisfies $\widehat{\delta}(S, v') = \widehat{\delta}(q, v')$. Finally, each position $k$ of $v'$ defines a distinct pair $(\widehat{\delta}(q, v'[1,k]), \widehat{\delta}(S, v'[1,k]))$ of subsets of $Q$. Therefore, the length of $v'$ is bounded by $2^{2\cdot|Q|}$. We have shown above that for every pair $(q,u)$ there exists a saturating word $v_{q,u}$ of length bounded by $N = 2^{2\cdot|Q|}$. The probability of the word $v_{q,u}$ is $p_0 = 2^{-O(N)}$. Let $\ell = \frac{1}{p_0} \cdot \log (\rev{\Delta})$; we show that the probability that $(q, u)$ is not saturated by a word from $\Sigma^{N \cdot \ell}$ is at most $\Delta$. Consider a word $x \in \Sigma^{N \cdot \ell}$. We can write it as $x = x_1 \ldots x_{\ell}$, where all words $x_k$ have length $N$. If $x_k$ saturates $(q,u x_1 \ldots x_{k-1})$, then $x_1 \ldots x_k$ (as well as $x$) saturates $(q,u)$. Therefore, the word $x$ does not saturate $(q,u)$ only if for all $1 \leq k \leq \ell$, $x_k$ does not saturate $(q, u x_1 \ldots x_{k-1})$. The probability that $x \in \Sigma^{N \cdot \ell}$ does not saturate $(q,u)$ is thus at most $(1 - p_0)^{\ell} \leq (\frac{1}{2})^{\log (\rev{\Delta})} \leq \Delta$. \end{proof} Finally, we show that for almost all words the value of an optimal $k$-jumping run approximates the value of the word. \begin{lemma} \label{l:convergence} Let $\mathcal{A}$ be a recurrent $\textsc{LimAvg}$-automaton.
For every $\epsilon \in \mathbb{Q}^+$, there exists $k$ such that for almost all words $w$, the value $\mathcal{A}(w)$ and the value of an optimal $k$-jumping run on $w$ differ by at most $\epsilon$. The value $k$ is doubly-exponential in $|\mathcal{A}|$ and polynomial in $\rev{\epsilon}$. \end{lemma} \begin{proof} By Lemma~\ref{l:resetWrods}, for all $\Delta >0$, $\ell = 2^{2^{O(|\mathcal{A}|)}} \log (\rev{\Delta})$, and all $k > \ell$, the probability that the run $\pi_f$ synchronizes with an optimal $k$-jumping run $\xi_o$ within $\ell$ steps in a block is at least $1 - \Delta$. Consider some $k>\ell$ and an optimal $k$-jumping run $\xi_o$ that is block-deterministic. Observe that the run $\pi_f$ of $\mathcal{A}$ following $\xi_o$ is also block-deterministic. Consider a single block $\xi_o[i, i+k-1]$. By Lemma~\ref{l:resetWrods}, the probability that $\pi_f[i+\ell-1]=\xi_o[i+\ell-1]$ is at least $1-\Delta$. In such a case, the sum of the costs of $\pi_f$ on that block exceeds that of $\xi_o$ by at most $D \cdot \ell$, where $D$ is the difference between the maximal and the minimal weight in $\mathcal{A}$. Otherwise, if $\pi_f$ does not synchronize, we bound the difference of the sums of costs on that block by the maximal possible difference $D \cdot k$. Since the runs are block-deterministic, the synchronization of $\pi_f$ and $\xi_o$ satisfies the Markov property; it depends only on the current block and the set of states $S$ reachable on the input word up to the beginning of the current block. We observe that, as $\mathcal{A}$ is recurrent, the corresponding Markov chain, whose states are the reachable sets of states $S$ of $\mathcal{A}$, has only a single BSCC. Therefore, for almost all words, the average ratio of $k$-element blocks in which $\pi_f$ synchronizes with $\xi_o$ within $\ell$ steps is at least $1 - \Delta$.
We then conclude that for almost all words the difference between the values of $\pi_f$ and $\xi_o$ is bounded by $\gamma = \frac{(1 - \Delta)\cdot (D \cdot \ell) + \Delta \cdot (D \cdot k)}{k}$. Observe that with $\Delta < \frac{\epsilon}{2\cdot D}$ and $k > \frac{2\cdot D \cdot \ell}{\epsilon}$, the value $\gamma$ is less than $\epsilon$. \end{proof} \subsection{Random variables} Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$ and $k>0$, we define a function $g[{k}] : \Sigma^{\omega} \to \mathbb{R}$ such that $g[{k}](w)$ is the value of some optimal $k$-jumping run $\xi_o$ on $w$. We can pick $\xi_o$ to be block-deterministic and hence $g[{k}]$ corresponds to a Markov chain $M[k]$. More precisely, we define $M[k]$, labeled by $\Sigma^{k}$, such that for every word $w$, the limit average of the path in $M[k]$ labeled by the blocks of $w$ (i.e., the blocks $w[1,k] w[k+1, 2k] \ldots$) equals $g[{k}](w)$. Moreover, the distribution of blocks in $\Sigma^k$ is uniform and hence $M[k]$ corresponds to $g[{k}]$ over the uniform distribution over $\Sigma^{\omega}$. The Markov chain $M[k]$ is a labeled weighted Markov chain~\cite{filar} whose states are all subsets of $Q$, the set of states of $\mathcal{A}$. For each state $S \subseteq Q$ and $u \in \Sigma^k$, the Markov chain $M[k]$ has an edge $(S,\widehat{\delta}(S,u))$ of probability $\frac{1}{|\Sigma|^k}$. The weight of an edge $(S,S')$ labeled by $u$ is the minimal average of the weights of any run from some state of $S$ to some state of $S'$ over the word $u$. We have the following: \begin{lemma} \label{l:recurrentMeasurable} Let $\mathcal{A}$ be a recurrent $\textsc{LimAvg}$-automaton and $k>0$. (1)~The functions $g[{k}]$ and $\valueL{\mathcal{A}}$ are random variables. (2)~For almost all words $w$ we have $g[{k}](w) = \mathbb{E}(g[{k}])$ and $\valueL{\mathcal{A}}(w) = \mathbb{E}(\valueL{\mathcal{A}})$.
\end{lemma} \begin{proof} Since $\mathcal{A}$ is recurrent, $M[k]$ has a single BSCC and hence $M[k]$ and $g[{k}]$ return the same value for almost all words~\cite{filar}. This implies that the preimage under $g[{k}]$ of each set has measure $0$ or $1$, and hence $g[{k}]$ is measurable~\cite{feller}. Lemma~\ref{l:convergence} implies that the (measurable) functions $g[{k}]$ converge to $\valueL{\mathcal{A}}$ with probability $1$, and hence $\valueL{\mathcal{A}}$ is measurable~\cite{feller}. As the limit of the functions $g[{k}]$, the function $\valueL{\mathcal{A}}$ also has the same value for almost all words. \end{proof} \begin{nremark}\label{r:ultimatelyperiodic} The automaton $\mathcal{A}$ from the proof of Theorem \ref{th:irrational} is recurrent (it resets after each $\$$), so the value of $\mathcal{A}$ on almost all words is irrational. Yet, for every ultimately periodic word $vw^\omega$, the value of $\mathcal{A}$ is rational. This means that while the expected value is realized by almost all words, it is not realized by any ultimately periodic word. \end{nremark} \subsection{Approximation algorithms} We show that the expected value of $g[{k}]$ can be efficiently approximated. The approximation is exponential in the size of $\mathcal{A}$, but only logarithmic in $k$ (which is doubly-exponential due to Lemma~\ref{l:convergence}). \newcommand{\newh}[1]{\tilde{h}^{#1}} \newcommand{\mfloor}[2]{\lfloor#1\rfloor_{#2}} To approximate the expected value of $g[{k}]$ we need to compute the expected value of $\mathcal{A}$ over $k$-letter blocks. Such blocks are finite and hence we consider $\mathcal{A}$ as a finite-word automaton with the average value function $\textsc{Avg}$. More precisely, for $S$ a subset of the states of $\mathcal{A}$, we define $\mathcal{A}_{S}^{\mathrm{fin}}$ as the $\textsc{Avg}$-automaton over finite words obtained from $\mathcal{A}$ by setting the initial states to $S$ and making all states accepting.
We can approximate the expected value of $\mathcal{A}_{S}^{\mathrm{fin}}$ over words from $\Sigma^{k}$ in time logarithmic in $k$. \begin{lemma} \label{l:approxFinExpected} Let $\mathcal{A}$ be a recurrent $\textsc{LimAvg}$-automaton, let $S, S'$ be subsets of states of $\mathcal{A}$, and let $k > 0$. We can approximate the expected value $\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid |w| = 2^k $ and $ \widehat{\delta}(S,w) = S'\})$ within a given $\epsilon \in \mathbb{Q}^+$ in exponential time in $|\mathcal{A}|$ and polynomial time in $k$ and $\rev{\epsilon}$. \end{lemma} \begin{proof} \newcommand{\roundup}[1]{\left[#1\right]_{\epsilon_0}} Let $h(q, w, q')$ be the infimum average weight over runs from $q$ to $q'$ over $w$. Consider $\epsilon_0 =\frac{\epsilon}{k+1}$. Let $H=\set{j\cdot \epsilon_0 \mid j\in \mathbb{Z}} \cap (-|\mathcal{A}|, |\mathcal{A}|)$ be a finite set and let $\roundup{x}$ stand for the greatest number from $H$ not exceeding $x$. Consider $i \in \set{0, \dots, k}$ and let $N = 2^i$. We define a function $\newh{i}: Q\times \Sigma^{N} \times Q \to H$ as follows. First, we define $\newh{0}(q, w, q')=\roundup{h(q, w, q')}$. Then, inductively, we define \[\newh{i+1}(q,w_1w_2, q') = \roundup{\min_{q'' \in Q} \frac{\newh{i}(q, w_1, q'') + \newh{i}(q'', w_2, q')}{2}}\] \noindent We show by induction on $i$ that for all $i$, $q$, $q'$, $N = 2^i$ and $w\in \Sigma^{N}$ we have $|h(q, w, q') -\newh{i}(q, w, q') |\leq (i+1) \epsilon_0$. First, we comment on the deteriorating precision. Notice that $|h(q, w, q') -\newh{i}(q, w, q') |\leq \epsilon_0$ may not hold in general. Let us illustrate this with a simple toy example. Consider $\epsilon_0=1$, $x \in (0, 1)$ and $y \in (1, 2)$. Then $\frac{x+y}{2}\in (\frac{1}{2}, \frac{3}{2})$, thus $\roundup{\frac{x+y}{2}}\in \set{0, 1}$. However, knowing only $\roundup{x}$ and $\roundup{y}$, we cannot assess whether the answer should be $0$ or $1$.
Therefore, when iterating the above-described procedure, we may lose some precision (up to one $\epsilon_0$ at each step); this is why we start with $\epsilon_0$ rather than $\epsilon$. Now, we show by induction that $|h(q, w, q') -\newh{i}(q, w, q') |\leq (i+1) \epsilon_0$. More precisely, we show that \begin{enumerate}[(1)] \item $\newh{i}(q, w, q') \leq h(q, w, q') $ and \item $h(q, w, q') -\newh{i}(q, w, q') \leq (i+1) \epsilon_0$. \end{enumerate} The case $i=0$ follows from the definition of $\newh{0}(q, w, q')$. Consider $i >0$ and assume that for all words $w$ of length $2^i$ the induction hypothesis holds. Consider $w = w_1 w_2$ and states $q, q'$. There exists $q''$ such that $h(q, w, q') = \frac{h(q, w_1, q'') + h(q'', w_2, q')}{2}$. Then, by the induction hypothesis for (1) we have $\newh{i-1}(q, w_1, q'') \leq h(q, w_1, q'') $ and $\newh{i-1}(q'',w_2, q') \leq h(q'', w_2, q') $. In consequence, we get (1). Now, to show (2), consider a state $s$ that realizes the minimum in the definition of $\newh{i}(q, w, q')$. There are numbers $a,b \in \mathbb{Z}$ such that $\newh{i-1}(q, w_1, s) = a\epsilon_0$ and $\newh{i-1}(s, w_2, q') = b\epsilon_0$. Then, $h(q,w,q') \leq \frac{h(q,w_1,s) + h(s,w_2,q')}{2}$ and we have \[ h(q,w,q') - \newh{i}(q, w, q') \leq \frac{h(q,w_1,s) + h(s,w_2,q')}{2} - \roundup{\frac{(a+b)\epsilon_0}{2}} \] Observe that $\roundup{\frac{(a+b)\epsilon_0}{2}} = \frac{(a+b)\epsilon_0}{2}$ if $a+b$ is even and $ \roundup{\frac{(a+b)\epsilon_0}{2}} = \frac{(a+b)\epsilon_0}{2} - \frac{\epsilon_0}{2}$ otherwise. This gives us the following inequality: \[ h(q,w,q') - \newh{i}(q, w, q') \leq \frac{(h(q,w_1,s) -a\epsilon_0)+ (h(s,w_2,q')-b\epsilon_0)}{2} + \frac{\epsilon_0}{2} \] By the induction hypothesis for (2) we have $h(q, w_1, s) - a\epsilon_0 \leq i \epsilon_0$ and $h(s, w_2, q') - b\epsilon_0 \leq i \epsilon_0$, which gives us (2). We cannot compute the functions $\newh{i}$ directly (in reasonable time), because there are too many words to be considered.
However, we can compute them symbolically. Define the \emph{clusterization function} $c^i$ as follows. Let $N = 2^i$. For each function $f \colon Q \times Q \to H$ we define $c^i(f) = |\set{w \in \Sigma^{N} \mid \forall q, q'.\ \newh{i}(q, w, q')=f(q, q')}|$. Basically, for each function $f$, the clusterization counts the number of words realizing $f$ through the functions $\newh{i}(\cdot, w, \cdot)$. The function $c^0$ can be computed directly. Then, $c^{i+1}(f)$ can be computed as the sum of $c^{i}(f_1) \cdot c^{i}(f_2)$ over all the functions $f_1, f_2$ such that $f=f_1 * f_2$, where $(f_1 * f_2)(q, q'')=\roundup{\min_{q' \in Q} \frac{f_1(q,q') + f_2(q',q'')}{2}}$. It follows that we can compute the $k$-clusterization in time exponential in $|\mathcal{A}|$ and polynomial in $\rev{\epsilon}$ and $k$. The desired expected value can be derived from the $k$-clusterization in a straightforward way. \end{proof} In consequence, we can approximate the expected value of $g[{k}]$ in exponential time in $|\mathcal{A}|$ but logarithmic time in $k$, which is important as $k$ may be doubly-exponential in $|\mathcal{A}|$ (Lemma~\ref{l:convergence}). \begin{lemma} \label{l:jumpingRuns} Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$, $k=2^l$ and $\epsilon \in \mathbb{Q}^+$, the expected value $\mathbb{E}(g[{k}])$ can be approximated up to $\epsilon$ in exponential time in $|\mathcal{A}|$, logarithmic time in $k$ and polynomial time in $\rev{\epsilon}$. \end{lemma} \begin{proof} Recall that the expected values of $M[k]$ and $g[{k}]$ coincide. Observe that $M[k]$ can be turned into a weighted Markov chain $N[k]$, over the same set of states and with one edge between any pair of states, as follows.
For an edge $(S,S')$, we define its probability as $\frac{1}{|\Sigma|^k}$ multiplied by the number of edges from $S$ to $S'$ with positive probability in $M[k]$, and the weight of $(S,S')$ in $N[k]$ is the average of the weights of all such edges in $M[k]$, i.e., the weight of $(S,S')$ is $\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid w \in \Sigma^k $ and $ \widehat{\delta}(S,w) = S'\})$ (see Lemma~\ref{l:approxFinExpected}). Observe that the expected values of $M[k]$ and $N[k]$ coincide. Having the Markov chain $N[k]$, we can compute its expected value in polynomial time~\cite{filar}. Since $N[k]$ is of exponential size in $|\mathcal{A}|$, we can compute its expected value in exponential time in $|\mathcal{A}|$. However, we need to show how to construct $N[k]$. In particular, computing $\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid w \in \Sigma^k $ and $ \widehat{\delta}(S,w) = S'\})$ exactly can be computationally expensive, as $k$ can be doubly-exponential in $|\mathcal{A}|$ (Lemma~\ref{l:convergence}). Still, due to Lemma~\ref{l:approxFinExpected}, we can approximate $\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid w \in \Sigma^k $ and $ \widehat{\delta}(S,w) = S'\})$ in exponential time in $|\mathcal{A}|$, logarithmic time in $k$ and polynomial time in $\rev{\epsilon}$. Therefore, we can compute a Markov chain $\markov^{\approx}$ with the same structure as $N[k]$ and such that for every edge $(S,S')$ the weight of $(S,S')$ in $\markov^{\approx}$ differs from the weight in $N[k]$ by at most $\epsilon$. Therefore, the expected values of $\markov^{\approx}$ and $N[k]$ differ by at most ${\epsilon}$.
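The last step, computing the expected value of a weighted Markov chain with a single BSCC, can be sketched as follows. This Python sketch uses power iteration as a stand-in for the exact polynomial-time computation of~\cite{filar}; the two-state chain below is a hypothetical example, not an instance of $N[k]$.

```python
def markov_expected_value(P, W, iters=1000):
    # Approximate the stationary distribution pi by power iteration, then
    # compute the long-run average weight: sum over s, t of pi(s)*P(s,t)*W(s,t).
    states = list(P)
    pi = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        pi = {t: sum(pi[s] * P[s].get(t, 0.0) for s in states) for t in states}
    return sum(pi[s] * p * W[s][t] for s in states for t, p in P[s].items())

# Hypothetical chain: from either state, stay or switch with probability 1/2;
# switching costs 1, staying costs 0, so the long-run average weight is 1/2.
P = {"x": {"x": 0.5, "y": 0.5}, "y": {"x": 0.5, "y": 0.5}}
W = {"x": {"x": 0.0, "y": 1.0}, "y": {"x": 1.0, "y": 0.0}}
assert abs(markov_expected_value(P, W) - 0.5) < 1e-9
```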
\end{proof} Lemma~\ref{l:convergence} and Lemma~\ref{l:jumpingRuns} give us approximation algorithms for the expected value and the distribution of recurrent automata over the uniform distribution: \begin{lemma} \label{l:singleSCC-uniform} Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$, $\epsilon \in \mathbb{Q}^+$ and $\lambda \in \mathbb{Q}$, we can compute $\epsilon$-approximations of the distribution $\mathbb{D}_{\mathcal{A}}(\lambda)$ and the expected value $\mathbb{E}(\mathcal{A})$ with respect to the uniform measure in exponential time in $|\mathcal{A}|$ and polynomial time in $\rev{\epsilon}$. \end{lemma} \begin{proof} For the uniform distribution, by Lemma~\ref{l:convergence}, for every $\epsilon > 0$, there exists $k$ such that $|\mathbb{E}(\mathcal{A}) - \mathbb{E}(g[{k}]) | \leq \frac{\epsilon}{2}$. The value $k$ is doubly-exponential in $|\mathcal{A}|$ and polynomial in $\rev{\epsilon}$. Then, by Lemma~\ref{l:jumpingRuns}, we can compute $\gamma$ such that $|\gamma - \mathbb{E}(g[{k}])| \leq \frac{\epsilon}{2}$ in exponential time in $|\mathcal{A}|$ and polynomial time in $\rev{\epsilon}$. Thus, $\gamma$ differs from $\mathbb{E}(\mathcal{A})$ by at most $\epsilon$. Since almost all words have the same value, we can approximate $\mathbb{D}_{\mathcal{A}}(\lambda)$ by comparing $\lambda$ with $\gamma$, i.e., $1$ is an $\epsilon$-approximation of $\mathbb{D}_{\mathcal{A}}(\lambda)$ if $\lambda \leq \gamma$, and otherwise $0$ is an $\epsilon$-approximation of $\mathbb{D}_{\mathcal{A}}(\lambda)$. \end{proof} \subsection{Non-uniform measures} \label{s:non-uniform} We briefly discuss how to adapt Lemma~\ref{l:singleSCC-uniform} to all measures given by Markov chains. We sketch the main ideas. \noindent \emph{Key ideas}. Assuming that (a variant of) Lemma~\ref{l:resetWrods} holds for any probability measure given by a Markov chain, the proofs of Lemmas~\ref{l:convergence}, \ref{l:jumpingRuns} and \ref{l:singleSCC-uniform} can be easily adapted.
Therefore, we focus on adjusting Lemma~\ref{l:resetWrods}. Observe that if a Markov chain $\mathcal{M}$ produces all finite prefixes $u \in \Sigma^*$ with non-zero probability, then the proof of Lemma~\ref{l:resetWrods} can be straightforwardly adapted. Otherwise, if some finite words cannot be produced by the Markov chain $\mathcal{M}$, then Lemma~\ref{l:resetWrods} may be false. However, if there are words $w$ such that $\mathcal{A}$ has an infinite run on $w$ but $\mathcal{M}$ does not emit $w$, we can restrict $\mathcal{A}$ to reject such words. Therefore, we assume that for every word $w$, if $\mathcal{A}$ has an infinite run on $w$, then $\mathcal{M}$ has an infinite path with non-zero probability on $w$ ($\mathcal{M}$ emits $w$). Then, the current proof of Lemma~\ref{l:resetWrods} can be straightforwardly adapted to the probability measure given by $\mathcal{M}$. In consequence, we can compute $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ and $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$. More precisely, we first observe that we may assume that $\mathcal{M}$ is ``deterministic'', i.e., for all states $s$ and letters $a$, at most one outgoing transition labeled with $a$ has positive probability. We can determinise $\mathcal{M}$ by extending the alphabet $\Sigma$ to $\Sigma \times S$, where $S$ is the set of states of $\mathcal{M}$. The second component in $\Sigma \times S$ encodes the target state of the transition. Observe that $\mathcal{A}$ can be extended to a corresponding automaton $\mathcal{A}'$ over $\Sigma \times S$ by cloning transitions, i.e., for every transition $(q,a,q')$, the automaton $\mathcal{A}'$ has transitions $(q,(a,s),q')$ for every $s \in S$ (i.e., $\mathcal{A}'$ ignores the state of $\mathcal{M}$). 
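The determinisation of $\mathcal{M}$ by alphabet extension can be sketched in a few lines of Python; the dictionary encoding of labelled transitions is our own, and the fragment only illustrates the relabelling step, not the accompanying extension of $\mathcal{A}$.

```python
def determinise_chain(transitions):
    """Make a labelled Markov chain 'deterministic' by extending the alphabet.

    transitions: dict mapping (state, letter) -> list of (target, prob).
    Returns a dict mapping (state, (letter, target)) -> (target, prob):
    over the extended alphabet Sigma x S each label has a unique
    positive-probability successor, and the outgoing probability mass of
    every state is preserved.
    """
    det = {}
    for (s, a), succs in transitions.items():
        for (t, p) in succs:
            if p > 0:
                # The second alphabet component encodes the target state.
                det[(s, (a, t))] = (t, p)
    return det
```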
For such a deterministic Markov chain $\mathcal{M}'$, we define a deterministic $\omega$-automaton $\mathcal{A}_{\mathcal{M}}$ that accepts words emitted by $\mathcal{M}'$. Finally, we consider the automaton $\mathcal{A}^R = \mathcal{A}_{\mathcal{M}} \times \mathcal{A}'$, which has infinite runs only on words that are emitted by $\mathcal{M}'$. Therefore, as we discussed, we can adapt the proof of Lemma~\ref{l:singleSCC-uniform} in such a case and compute $\mathbb{D}_{\mathcal{M}', \mathcal{A}^R}(\lambda)$ and $\mathbb{E}_{\mathcal{M}'}(\mathcal{A}^R)$ (in exponential time in $|\mathcal{A}^R|$, polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$; notice that $|\mathcal{A}^R|$ is polynomial in $|\mathcal{A}|$). Finally, observe that $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda) = \mathbb{D}_{\mathcal{M}', \mathcal{A}^R}(\lambda)$ and $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \mathbb{E}_{\mathcal{M}'}(\mathcal{A}^R)$. In consequence, we have the following: \begin{lemma} \label{l:singleSCC} Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$, Markov chain $\mathcal{M}$, $\epsilon \in \mathbb{Q}^+$ and $\lambda \in \mathbb{Q}$, we can compute $\epsilon$-approximations of the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ and the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$. \end{lemma} \section{Non-recurrent automata} \label{s:non-recurrent} We present the approximation algorithms for all non-deterministic $\textsc{LimAvg}$-automata over measures given by Markov chains. \begin{theorem} \label{th:approximation-limavg} (1)~For a non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$ the function $\valueL{\mathcal{A}} : \Sigma^{\omega} \to \mathbb{R}$ is measurable. 
(2)~Given a non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$, a Markov chain $\mathcal{M}$, $\epsilon \in \mathbb{Q}^+$, and $\lambda \in \mathbb{Q}$, we can $\epsilon$-approximate the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ and the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$. \end{theorem} \begin{proof} Consider $\mathcal{A}$ as an $\omega$-automaton. It has no acceptance conditions and hence we can determinise it with the standard power-set construction, obtaining a deterministic automaton $\mathcal{A}^D$. Then, we construct a Markov chain $\mathcal{M} \times \mathcal{A}^D$ and compute all its BSCCs $R_1, \ldots, R_k$ along with the probabilities $p_1, \ldots, p_k$ of reaching each of these sets. This can be done in time polynomial in the size of $\mathcal{M} \times \mathcal{A}^D$~\cite{filar,BaierBook}, and hence polynomial in $|\mathcal{M}|$ and exponential in $|\mathcal{A}|$. Let $H_1, \ldots, H_k$ be the sets of paths in $\mathcal{M} \times \mathcal{A}^D$ such that for each $i$, all $\rho \in H_i$ eventually reach $R_i$ and stay there forever. Observe that each $H_i$ is a Borel set; the set $H_i^p$ of paths that stay in $R_i$ past position $p$ is closed and $H_i = \bigcup_{p\geq 0} H_i^p$. It follows that each $H_i$ is measurable. We show how to compute an $\epsilon$-approximation of the conditional expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$. Consider a BSCC $R_i$. The projection $R_i^1$ of $R_i$ on the first component is a BSCC of $\mathcal{M}$ and the projection $R_i^2$ on the second component is an SCC of $\mathcal{A}^D$. Let $(s,A) \in R_i$. If we fix $s$ as the initial state of $R_i^1$ and $A$ as the initial state of $R_i^2$, then $R_i$ is the set of all reachable states of $R_i^1 \times R_i^2$. 
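The decomposition of the product chain into BSCCs reduces to a strongly-connected-component computation on the underlying graph. A self-contained Python sketch, using Kosaraju's two-pass algorithm: the dictionary encoding of graphs is our own, and transition probabilities are abstracted away, since only the positive-probability edges matter for identifying BSCCs.

```python
def bottom_sccs(graph):
    """Bottom strongly connected components (BSCCs) of a directed graph.

    graph: dict mapping a node to an iterable of successor nodes.
    A BSCC is an SCC with no edge leaving it; in a finite Markov chain,
    almost every path eventually enters some BSCC and stays there forever.
    """
    nodes = set(graph)
    for succs in graph.values():
        nodes.update(succs)
    # First pass: record nodes in order of DFS completion (iteratively).
    seen, order = set(), []
    for root in nodes:
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(graph.get(root, ())))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(graph.get(nxt, ()))))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()
    # Second pass: DFS on the reversed graph in reverse completion order.
    rev = {n: [] for n in nodes}
    for n, succs in graph.items():
        for m in succs:
            rev[m].append(n)
    comp, sccs = {}, []
    for root in reversed(order):
        if root in comp:
            continue
        comp[root] = len(sccs)
        members, stack = [], [root]
        while stack:
            n = stack.pop()
            members.append(n)
            for m in rev[n]:
                if m not in comp:
                    comp[m] = len(sccs)
                    stack.append(m)
        sccs.append(members)
    # Keep only the SCCs with no edge into a different SCC.
    return [s for s in sccs
            if all(comp[m] == comp[n] for n in s for m in graph.get(n, ()))]
```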
The set $R_i^2$ consists of states of $\mathcal{A}^D$, which are subsets of the states of $\mathcal{A}$. Therefore, the union $\bigcup R_i^2$ is a subset of the states of $\mathcal{A}$ and it consists of some SCCs $S_1, \ldots, S_m$ of $\mathcal{A}$. All these SCCs are reachable, but this does not imply that there is a run of $\mathcal{A}$ that stays in $S_j$ forever. We illustrate this with the following example. Consider the automaton $\mathcal{A}$ presented in Figure~\ref{fig:autTwo}, where $q_I$ is the initial state, and a single-state Markov chain $\mathcal{M}$ generating the uniform distribution. \begin{figure} \caption{An automaton with a reachable SCC $\{q_F\}$ such that almost no runs stay in $\{q_F\}$ forever} \label{fig:autTwo} \end{figure} Then, all paths in $\mathcal{M} \times \mathcal{A}^D$ are eventually contained in $\mathcal{M} \times \{Q\}$, i.e., the second component consists of all states of $\mathcal{A}$. Still, if a word $w$ has infinitely many letters $a$, then $\mathcal{A}$ has no (infinite) run on $w$ that visits the state $q_F$. The set of infinite words that contain finitely many letters $a$ is countable and hence has probability $0$. Therefore, almost all words (i.e., all except for some set of probability $0$) have no run that visits the state $q_F$. To avoid such pathologies, we divide SCCs into two types: \emph{permanent} and \emph{transitory}. More precisely, for a path $\rho$ in $\mathcal{M} \times \mathcal{A}^D$ let $w_{\rho}$ be the word labeling $\rho$. We show that for each SCC $S_j$, one of the following holds: \begin{itemize} \item $S_j$ is \emph{permanent}, i.e., for almost all paths $\rho \in H_i$ (i.e., a set of paths of probability $1$), the automaton $\mathcal{A}$ has a run on the word $w_{\rho}$ that eventually stays in $S_j$ forever, or \item $S_j$ is \emph{transitory}, i.e., for almost all paths $\rho \in H_i$, the automaton $\mathcal{A}$ has no run on $w_{\rho}$ that eventually stays in $S_j$. \end{itemize} Consider an SCC $S_j$. 
If $S_j$ is permanent, then it is not transitory. We show that if $S_j$ is not permanent, then it is transitory. Suppose that $S_j$ is not permanent and consider any $(s,A) \in R_i$. Almost all paths in $H_i$ visit $(s,A)$ and, since $S_j$ is not permanent, there exists an infinite path $\rho$ that visits $(s,A)$ such that $\mathcal{A}$ has no run on $w_{\rho}$ that stays in $S_j$ forever. Let $u$ be the suffix of $w_{\rho}$ that labels $\rho$ past some occurrence of $(s,A)$. We observe that $\widehat{\delta}(A \cap S_j, u) = \emptyset$ and hence for some finite prefix $u'$ of $u$ we have $\widehat{\delta}(A \cap S_j, u') = \emptyset$. Let $p$ be the probability that $\mathcal{M} \times \mathcal{A}^D$ in the state $(s,A)$ generates a path labeled by $u'$. The probability that a path that visits $(s,A)$ at least $\ell$ times does not contain $(s,A)$ followed by labels $u'$ is at most $(1-p)^{\ell}$. Observe that for almost all paths in $H_i$, the state $(s,A)$ is visited infinitely often, and hence almost all paths contain $(s,A)$ followed by labels $u'$, upon which every run through $A \cap S_j$ leaves $S_j$. Therefore, $S_j$ is transitory. To check whether $S_j$ is permanent or transitory, observe that for any $(s,A) \in R_i$, in the Markov chain $\mathcal{M} \times \mathcal{A}^D$, we can reach the set $\mathcal{M} \times \{\emptyset \}$ from $(s, A \cap S_j)$ if and only if $S_j$ is transitory. This condition can be checked in polynomial space. We mark each SCC $S_1, \ldots, S_m$ as permanent or transitory and for every permanent SCC $S_j$, we compute an $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A}[S_j] \mid H_i)$, which is the expected value of $\mathcal{A}$ under the condition $H_i$ with the restriction to runs that eventually stay in $S_j$. Observe that an $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A}[S_j] \mid H_i)$ can be computed using Lemma~\ref{l:singleSCC}. 
Indeed, we pick $(s, A) \in R_i$ and observe that $\mathcal{A}$ restricted to the states $S_j$ is recurrent (with an appropriate initial state). Finally, we pick the minimum $\gamma$ over the computed expected values $\mathbb{E}_{\mathcal{M}}(\mathcal{A}[S_j] \mid H_i)$ and observe that for almost all paths in $H_i$, the corresponding word has value $\gamma$. It follows that $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i) = \gamma$. In each BSCC $R_i$, almost all words have value $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$. As we discussed earlier, each $H_i$ is measurable, and hence the function $\valueL{\mathcal{A}} : \Sigma^{\omega} \to \mathbb{R}$ is measurable. Moreover, to approximate the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$, we sum the probabilities $p_i$ of reaching the BSCCs $R_i$ over those $R_i$ for which the $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$ is less than or equal to $\lambda$. Finally, we compute an $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ from the $\epsilon$-approximations of the conditional expected values $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$ using the identity $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \sum_{i=1}^k p_i \cdot \mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$. \end{proof} \section{Determinising and approximating $\textsc{LimAvg}$-automata} For technical simplicity, we assume that the distribution of words is uniform. However, the results presented here extend to all distributions given by Markov chains. Recall that for $\textsc{LimAvg}$-automata, almost all words (i.e., all except for some set of words of probability $0$) whose optimal runs end up in the same SCC have the same value. This means that there is a finite set of values (of size not greater than the number of SCCs of the automaton) such that almost all words have their values in this set. $\textsc{LimAvg}$-automata are not determinisable~\cite{quantitativelanguages}. 
We say that a non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$ is \emph{weakly determinisable} if there is a deterministic $\textsc{LimAvg}$-automaton $\mathcal{B}$ such that $\mathcal{A}$ and $\mathcal{B}$ have the same value over almost all words. From \cite{lics16} we know that deterministic automata return rational values for almost all words, so not all $\textsc{LimAvg}$-automata are weakly determinisable. However, we can show the following. \begin{theorem}\label{t:determinisation} A $\textsc{LimAvg}$-automaton $\mathcal{A}$ is weakly determinisable if and only if it returns rational values for almost all words. \end{theorem} \begin{proof}[Proof sketch] Assume an automaton $\mathcal{A}$ with SCCs $C_1, \dots, C_m$. For each $i$ let $v_i$ be the expected value of $\mathcal{A}$ when its set of initial states is $C_i$ and the run is required to stay in $C_i$. If $\mathcal{A}$ has no such runs for some $C_i$, then $v_i = \infty$. We now construct a deterministic automaton $\mathcal{B}$ with rational weights using the standard power-set construction. We define the cost function so that the cost of any transition from a state $Y$ is the minimal value $v_i$ such that $v_i$ is rational and $Y$ contains a state from $C_i$. If there is no such $v_i$, then we set the cost to the maximal cost of $\mathcal{A}$. Roughly speaking, $\mathcal{B}$ tracks which SCCs $\mathcal{A}$ can be in, and the weight corresponds to the SCC with the lowest value. To see that $\mathcal{B}$ weakly determinises $\mathcal{A}$, observe that for almost all words $w$, a run with the lowest value over $w$ ends in some SCC, and its value then equals the expected value of this component, which is rational as the value of this word is rational. \end{proof} A straightforward corollary is that every non-deterministic $\textsc{LimAvg}$-automaton can be weakly determinised by a $\textsc{LimAvg}$-automaton with real weights. 
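The deterministic automaton in the proof sketch is built on the standard power-set construction; the following Python fragment sketches that underlying construction only (the encodings are our own). In the proof above one would additionally attach, to every transition from a subset-state $Y$, the minimal rational value $v_i$ among the SCCs $C_i$ meeting $Y$.

```python
def powerset_construction(alphabet, delta, initial):
    """Standard power-set (subset) construction for a non-deterministic
    automaton without acceptance conditions.

    delta: dict mapping (state, letter) -> set of successor states.
    Returns the reachable subset-states and the deterministic transition map.
    """
    start = frozenset(initial)
    det_delta, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            # The successor subset collects all states reachable in one step.
            T = frozenset(t for s in S for t in delta.get((s, a), ()))
            det_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return seen, det_delta
```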
Theorem \ref{t:determinisation} does not provide an implementable algorithm for weak determinisation, because of the hardness of computing the values $v_i$. It is possible, however, to approximate this automaton. We say that a deterministic $\textsc{LimAvg}$-automaton $\mathcal{B}$ \emph{$\epsilon$-approximates} $\mathcal{A}$ if for almost every word $w$ we have $\valueL{\mathcal{B}}(w)\in[\valueL{\mathcal{A}}(w) - \epsilon, \valueL{\mathcal{A}}(w) + \epsilon]$. \begin{theorem} \label{th:approximateDeterminisation} For every $\epsilon>0$ and every non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$, one can compute in exponential time a deterministic $\textsc{LimAvg}$-automaton that $\epsilon$-approximates $\mathcal{A}$. \end{theorem} The proof of this theorem is similar to the proof of Theorem~\ref{t:determinisation}, except that now it is enough to approximate the values $v_i$, which can be done in exponential time. \Paragraph{Acknowledgements} Our special thanks go to G\"unter Rote who pointed out an error in an earlier version of our running example. \doclicenseThis \end{document}
\begin{document} \title[Ideals, supports and harmonic operators] {Ideals of the Fourier algebra, supports and harmonic operators} \date{} \author{M. Anoussis, A. Katavolos and I. G. Todorov} \address{Department of Mathematics, University of the Aegean, Samos 83 200, Greece} \email{[email protected]} \address{Department of Mathematics, University of Athens, Athens 157 84, Greece} \email{[email protected]} \address{Pure Mathematics Research Centre, Queen's University Belfast, Belfast BT7 1NN, United Kingdom} \email{[email protected]} \keywords{Fourier algebra, masa-bimodule, invariant subspaces, harmonic operators} \begin{abstract} We examine the common null spaces of families of Herz-Schur multipliers and apply our results to study jointly harmonic operators and their relation with jointly harmonic functionals. We show how an annihilation formula obtained in \cite{akt} can be used to give a short proof {as well as a generalisation} of a result of Neufang and Runde concerning harmonic operators with respect to a normalised positive definite function. We compare the two notions of support of an operator that have been studied in the literature and show how one can be expressed in terms of the other. \end{abstract} \maketitle \section{Introduction and Preliminaries} In this paper we investigate, for a locally compact group $G$, the common null spaces of families of Herz-Schur multipliers (or completely bounded multipliers of the Fourier algebra $A(G)$) and their relation to ideals of $A(G)$. This provides a new perspective for our previous results in \cite{akt} concerning (weak* closed) spaces of operators on $L^2(G)$ which are simultaneously invariant under all Schur multipliers and under {conjugation by the right regular representation} of $G$ on $L^2(G)$ ({\em jointly invariant} subspaces -- see below for precise definitions). 
At the same time, it provides a new approach to, as well as an extension of, a result of Neufang and Runde \cite{neurun} concerning the space $\widetilde{\cl H}_\sigma$ of operators which are `harmonic' with respect to a positive definite normalised function $\sigma:G\to\bb C$. The notion of $\sigma$-harmonic operators was introduced in \cite{neurun} as an extension of the notion of $\sigma$-harmonic functionals on $A(G)$ as defined and studied by Chu and Lau in \cite{chulau}. One of the main results of Neufang and Runde is that $\widetilde{\cl H}_\sigma$ is the von Neumann algebra on $L^2(G)$ generated by the algebra $\cl D$ of multiplication operators together with the space ${\cl H}_\sigma$ of harmonic functionals, considered as a subspace of the von Neumann algebra $\vn(G)$ of the group. It will be seen that this result can be obtained as a consequence of the fact (see Corollary \ref{c_jho}) that, for any family $\Sigma$ of completely bounded multipliers of $A(G)$, the space $\widetilde{\cl H}_\Sigma$ of {\em jointly $\Sigma$-harmonic operators} can be obtained as the weak* closed $\cl D$-bimodule generated by the {\em jointly $\Sigma$-harmonic functionals} ${\cl H}_\Sigma$. In fact, the spaces $\widetilde{\cl H}_\Sigma$ belong to the class of jointly invariant subspaces of $\cl B(L^2(G))$ studied in \cite[Section 4]{akt}. The space ${\cl H}_\Sigma$ is the annihilator in $\vn(G)$ of a certain ideal of $A(G)$. Now from any given closed ideal $J$ of the Fourier algebra $A(G)$, there are two `canonical' ways to arrive at a weak* closed $\cl D$-bimodule of $\cl B(L^2(G))$. One way is to consider its annihilator $J^\perp$ in $\vn(G)$ and then take the weak* closed $\cl D$-bimodule generated by $J^{\perp}$. We call this bimodule $\Bim(J^\perp)$. The other way is to take a suitable saturation $\Sat(J)$ of $J$ within the trace class operators on $L^2(G)$ (see Theorem \ref{th_satlcg}), and then form its annihilator. 
This gives a masa bimodule $(\Sat J)^{\perp}$ in $\cl B(L^2(G))$. In \cite{akt}, we proved that these two procedures yield the same bimodule, that is, $\Bim(J^\perp) = (\Sat J)^{\perp}$. Our proof that $\widetilde{\cl H}_\Sigma=\Bim({\cl H}_\Sigma)$ rests on this equality. The notion of {\em support}, $\mathop{\mathrm{supp}}G(T)$, of an element $T\in\vn(G)$ was introduced by Eymard in \cite{eymard} by considering $T$ as a linear functional on the function algebra $A(G)$; thus $\mathop{\mathrm{supp}}G(T)$ is a closed subset of $G$. This notion was extended by Neufang and Runde in \cite{neurun} to an arbitrary $T\in\cl B(L^2(G))$ and used to describe harmonic operators. By considering joint supports, we show that this extended notion of $G$-support for an operator $T\in\cl B(L^2(G))$ coincides with the joint $G$-support of a family of elements of $\vn (G)$ naturally associated to $T$ (Proposition \ref{propsame2}). On the other hand, the notion of support of an operator $T$ acting on $L^2(G)$ was first introduced by Arveson in \cite{arv} as a certain closed subset of $G \times G$. This notion was used in his study of what was later called operator synthesis. A different but related approach appears in \cite{eks}, where the notion of $\omega$-support, $\mathop{\mathrm{supp}}o(T)$, of $T$ was introduced and used to establish a bijective correspondence between reflexive masa-bimodules and $\omega$-closed subsets of $G\times G$. We show that the joint $G$-support $\mathop{\mathrm{supp}}G(\cl A)$ of an arbitrary family $\cl A\subseteq \cl B(L^2(G))$ can be fully described in terms of its joint $\omega$-support $\mathop{\mathrm{supp}}o(\cl A)$ (Theorem \ref{th_compsa}). 
The converse does not hold in general, as the $\omega$-support, being a subset of $G\times G$, contains in general more information about an arbitrary operator than its $G$-support (see Remark \ref{last}); however, in case $\cl A$ is a (weak* closed) jointly invariant subspace, we show that its $\omega$-support can be recovered from its $G$-support (Theorem \ref{312}). We also show that, if a set $\Omega\subseteq G\times G$ is invariant under all maps $(s,t)\to (sr,tr), \, r\in G$, then $\Omega$ is marginally equivalent to an $\omega$-closed set if and only if it is marginally equivalent to a (topologically) closed set. This can fail for non-invariant sets (see for example \cite[p. 561]{eks}). {For a related result, see \cite[Proposition 7.3]{stt_clos}.} \noindent\textbf{Preliminaries and Notation } Throughout, $G$ will denote a second countable locally compact group, equipped with left Haar measure. Denote by $\cl D\subseteq\cl{B}(L^2(G))$ the maximal abelian selfadjoint algebra (masa, for short) consisting of all multiplication operators $M_f:g\to fg$, where $f\in L^\infty(G)$. We write $\vn (G)$ for the von Neumann algebra $\{\lambda_s : s\in G\}''$ generated by the left regular representation $s\to \lambda_s$ of $G$ on $L^2(G)$ (here $(\lambda_sg)(t)=g(s\an t)$). Every element of the predual of $\vn (G)$ is a vector functional, $\omega_{\xi,\eta}: T\to (T\xi,\eta)$, where $\xi,\eta\in L^2(G)$, and $\nor{\omega_{\xi,\eta}}$ is the infimum of the products $\|\xi\|_2\|\eta\|_2$ over all such representations. This predual can be identified \cite{eymard} with the set $A(G)$ of all complex functions $u$ on $G$ of the form $s\to u(s)=\omega_{\xi,\eta}(\lambda_s)$. With the above norm and pointwise operations, $A(G)$ is a (commutative, regular, semi-simple) Banach algebra of continuous functions on $G$ vanishing at infinity, called the \emph{Fourier algebra} of $G$; its Gelfand spectrum can be identified with $G$ {\it via} point evaluations. 
The set $A_c(G)$ of compactly supported elements of $A(G)$ is dense in $A(G)$. A function $\sigma:G\to\bb C$ is a {\em multiplier} of $A(G)$ if for all $u\in A(G)$ the pointwise product $\sigma u$ is again in $A(G)$. By duality, a multiplier $\sigma$ induces a bounded operator $T\to \sigma\cdot T$ on $\vn(G)$. We say $\sigma$ is {\em a completely bounded (or Herz-Schur) multiplier}, and write $\sigma\in M^{\cb}A(G)$, if the latter operator is completely bounded, that is, if there exists a constant $K$ such that $\nor{[\sigma\cdot T_{ij}]}\le K \nor{[T_{ij}]}$ for all $n\in\bb N$ and all $[T_{ij}]\in M_n(\vn (G))$ (the latter being the space of all $n$ by $n$ matrices with entries in $\vn (G)$). The least such constant is the \emph{cb norm} of $\sigma$. The space $M^{\cb}A(G)$ with pointwise operations and the cb norm is a Banach algebra into which $A(G)$ embeds contractively. For a subset $\Sigma\subseteq M^{\cb}A(G)$, we let $Z(\Sigma)=\{s\in G: \sigma(s)=0 \text{ for all } \sigma\in\Sigma\}$ be its \emph{zero set}. A subset $\Omega\subseteq G\times G$ is called {\em marginally null} if there exists a null set (with respect to Haar measure) $X\subseteq G$ such that $\Omega\subseteq (X\times G)\cup(G\times X)$. Two sets $\Omega,\Omega'\subseteq G\times G$ are {\em marginally equivalent} if their symmetric difference is a marginally null set; we write $\Omega\cong \Omega'$. A set $\Omega\subseteq G\times G$ is said to be {\em $\omega$-open} if it is marginally equivalent to a {\em countable} union of Borel rectangles $A\times B$; it is called {\em $\omega$-closed} when its complement is $\omega$-open. Given any set $\Omega\subseteq G\times G$, we denote by $\frak M_{\max}(\Omega)$ the set of all $T\in\cl{B}(L^2(G))$ which are {\em supported} by $\Omega$ in the sense that $M_{\chi_ B}TM_{\chi_A}=0$ whenever $A\times B\subseteq G\times G$ is a Borel rectangle disjoint from $\Omega$ (we write $\chi_A$ for the characteristic function of a set $A$). 
Given any set $\cl U\subseteq \cl{B}(L^2(G))$ there exists a smallest, up to marginal equivalence, $\omega$-closed set $\Omega\subseteq G\times G$ supporting every element of $\cl U$, {\it i.e.} such that $\cl U\subseteq\frak M_{\max}(\Omega)$. This set is called {\em the $\omega$-support} of $\cl U$ and is denoted $\mathop{\mathrm{supp}}o(\cl U)$ \cite{eks}. Two functions $h_1,h_2 : G\times G\to \bb{C}$ are said to be {\em marginally equivalent}, or equal {\em marginally almost everywhere (m.a.e.)}, if they differ on a marginally null set. The predual of $\cl{B}(L^2(G))$ consists of all linear forms $\omega$ given by $\omega(T):= \sum\limits_{i=1}^{\infty} \sca{Tf_i, g_i}$ where $f_i, g_i\in L^2(G)$ and $\sum\limits_{i=1}^{\infty}\nor{f_i}_2\nor{g_i}_2<\infty$. Each such $\omega$ defines a trace class operator whose kernel is a function $h = h_\omega:G\times G\to\bb C$, unique up to marginal equivalence, given by $h(x,y)=\sum\limits_{i=1}^{\infty} f_i(x )\bar g_i(y)$. This series converges marginally almost everywhere on $G\times G$. We use the notation $\du{T}{h} :=\omega(T)$. We write $T(G)$ for the Banach space of (marginal equivalence classes of) such functions, equipped with the norm of the predual of $\cl{B}(L^2(G))$. Let $\frak{S}(G)$ be the multiplier algebra of $T(G)$; by definition, a measurable function $w : G\times G\rightarrow \bb{C}$ belongs to $\frak{S}(G)$ if the map $m_w: h\to wh$ leaves $T(G)$ invariant, that is, if $wh$ is marginally equivalent to a function from $T(G)$, for every $h\in T(G)$. Note that the operator $m_w$ is automatically bounded. The elements of $\frak{S}(G)$ are called \emph{(measurable) Schur multipliers}. 
By duality, every Schur multiplier induces a bounded operator $S_w$ on $\cl B(L^2(G))$, given by \[\du{S_w(T)}{h} = \du{T}{wh}, \ \ \ h\in T(G), \; T\in \cl B(L^2(G))\, .\] The operators of the form $S_w$, $w\in \frak{S}(G)$, are precisely the bounded weak* continuous $\cl D$-bimodule maps on $\cl B(L^2(G))$ (see \cite{haa}, \cite{sm}, \cite{pe} and \cite{kp}). A weak* closed subspace $\cl U$ of $\cl B(L^2(G))$ is invariant under the maps $S_w$, $w\in \frak{S}(G)$, if and only if it is invariant under all left and right multiplications by elements of $\cl D$, {\it i.e.} if $M_fTM_g\in \cl U$ for all $f,g\in L^\infty(G)$ and all $T\in\cl U$, in other words, if it is a {\em $\cl D$-bimodule}. For any set $\cl T\subseteq \cl B(L^2(G))$ we denote by $\Bim\cl T$ the smallest weak* closed $\cl D$-bimodule containing $\cl T$; thus, $\mathrm{Bim}(\cl T)=\overline{[\mathfrak{S}(G)\cl T]}^{w^*}$. We call a subspace $\cl U\subseteq \cl B(L^2(G))$ {\em invariant} if $\rho_rT\rho_r^*\in\cl U$ for all $T\in\cl U$ and all $r\in G$; here, $r\to \rho_r$ is the right regular representation of $G$ on $L^2(G)$. An invariant space which is also a $\cl D$-bimodule will be called a {\em jointly invariant space}. It is not hard to see that, if $\cl A\subseteq \cl B(L^2(G))$, the smallest weak* closed jointly invariant space containing $\cl A$ is the weak* closed linear span of $\{S_w(\rho_rT\rho_r^*): T\in\cl A, w\in \frak S(G), r\in G\}$. For a complex function $u$ on $G$ we let $N(u):G\times G\to\bb C$ be the function given by $N(u)(s,t) = u(ts^{-1})$. For any subset $E$ of $G$, we write $E^*=\{(s,t)\in G\times G: ts^{-1}\in E\}$. It is shown in \cite{bf} (see also \cite{j} and \cite{spronk}) that the map $u\rightarrow N(u)$ is an isometry from $M^{\cb}A(G)$ into $ \frak{S}(G)$ and that its range consists precisely of all {\em invariant} Schur multipliers, {\it i.e.} those $w\in \frak{S}(G)$ for which $w(sr,tr) = w(s,t)$ for every $r\in G$ and marginally almost all $s,t$. 
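The invariance of the Schur multipliers in the range of $N$ is a direct computation from the definition: for all $s,t,r\in G$,
\[
N(u)(sr,tr)=u\big((tr)(sr)^{-1}\big)=u\big(t\,r\,r^{-1}s^{-1}\big)=u(ts^{-1})=N(u)(s,t).
\]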
Note that the corresponding operators $S_{N(u)}$ are denoted $\hat\Theta(u)$ in \cite{neuruaspro}. The following result from \cite{akt} is crucial for what follows. \begin{theorem}\label{th_satlcg} Let $J\subseteq A(G)$ be a closed ideal and $\Sat(J)$ be the closed $L^\infty(G)$-bimodule of $T(G)$ generated by the set \[ \{N(u)\chi_{L\times L}: u \in J, L\ \text{compact, } \ L\subseteq G \}. \] Then $\Sat(J)^{\perp} = \Bim (J^{\perp})$. \end{theorem} \section{Null spaces and harmonic operators}\label{s1} Given a subset $\Sigma\subseteq M^{\cb}A(G)$, let \[ \frak{N}(\Sigma) = \{T\in \vn(G) : \sigma\cdot T = 0, \ \mbox{ for all } \sigma\in \Sigma\} \] be the {\em common null set} of the operators on $\vn(G)$ of the form $T\to \sigma\cdot T$, with $\sigma\in \Sigma$. Letting \[\Sigma A \stackrel{def}{=} \overline{\sspp(\Sigma A(G))} = \overline{\sspp\{ \sigma u : \sigma\in \Sigma, u\in A(G)\}},\] it is easy to verify that $\Sigma A$ is a closed ideal of $A(G)$ and that \begin{equation}\label{eq_prean} \frak{N}(\Sigma) = (\Sigma A)^\bot . \end{equation} {\remark \label{remideal} The sets of the form $\Sigma A$ are precisely the closed ideals of $A(G)$ generated by their compactly supported elements.} \begin{proof} It is clear that, if $\Sigma\subseteq M^{\cb}A(G)$, the set $\{\sigma u: \sigma\in\Sigma, u\in A_c(G)\}$ consists of compactly supported elements and is dense in $\Sigma A$. Conversely, suppose that $J\subseteq A(G)$ is a closed ideal such that $J\cap A_c(G)$ is dense in $J$. For every $u\in J$ with compact support $K$, there exists $v\in A(G)$ which equals 1 on $K$ \cite[(3.2) Lemme]{eymard}, and so $u=uv\in JA$. Thus $J= \overline{J\cap A_c(G)}\subseteq JA\subseteq J$ and hence $J=JA$. \end{proof} The following Proposition shows that it is sufficient to study sets of the form $\frak N(J)$ where $J$ is a closed ideal of $A(G)$. 
\begin{proposition}\label{p_njan} For any subset $\Sigma$ of $M^{\cb}A(G)$, \[ \frak{N}(\Sigma)=\frak{N}(\Sigma A).\] \end{proposition} \proof If $\sigma\cdot T = 0$ for all $\sigma\in\Sigma$ then {\em a fortiori} $v\sigma\cdot T=0$, for all $v\in A(G)$ and all $\sigma\in \Sigma$. It follows that $w\cdot T=0$ for all $w\in \Sigma A$; thus $\frak{N}(\Sigma)\subseteq \frak{N}(\Sigma A)$. Suppose conversely that $w\cdot T=0$ for all $w\in \Sigma A$ and fix $\sigma\in\Sigma$. Now $u\sigma\cdot T=0$ for all $u \in A(G)$, and so $ \du{\sigma\cdot T}{uv}=0$ when $u,v\in A(G)$. Since the products $uv$ form a dense subset of $A(G)$, we have $\sigma\cdot T=0$. Thus $\frak{N}(\Sigma)\supseteq \frak{N}(\Sigma A)$ since $\sigma\in\Sigma$ is arbitrary, and the proof is complete. \qed It is not hard to see that $\lambda_s$ is in $\frak N(\Sigma)$ if and only if $s$ is in the zero set $Z(\Sigma)$ of $\Sigma$, and so $Z(\Sigma)$ coincides with the zero set of the ideal $J=\Sigma A$. Whether or not, for an ideal $J$, these unitaries suffice to generate $\frak N(J)$ depends on properties of the zero set. For our purposes, a closed subset $E\subseteq G$ is a {\em set of synthesis} if there is a unique closed ideal $J$ of $A(G)$ with $Z(J)=E$. Note that this ideal is generated by its compactly supported elements \cite[Theorem 5.1.6]{kaniuth}. \begin{lemma} \label{proto} Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is a set of synthesis. Then \[ \frak N(J)=J^\bot=\overline{\sspp\{\lambda_x:x\in E\}}^{w*} \] \end{lemma} \proof Since $E$ is a set of synthesis, $J=JA$ by Remark \ref{remideal}; thus $J^\bot=(JA)^\bot=\frak N(J)$ by relation (\ref{eq_prean}). The other equality is essentially a reformulation of the fact that $E$ is a set of synthesis: a function $u\in A(G)$ is in $J$ if and only if it vanishes at every point of $E$, that is, if and only if it annihilates every $\lambda_s$ with $s\in E$ (since $\du{\lambda_s}{u}= u(s)$). 
\qed A linear space $\cl U$ of bounded operators on a Hilbert space is called {\em a ternary ring of operators (TRO)} if it satisfies $ST^*R\in\cl U$ whenever $S,T$ and $R$ are in $\cl U$. Note that a TRO containing the identity operator is automatically a selfadjoint algebra. \begin{proposition}\label{deutero} Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is the coset of a closed subgroup of $G$. Then $\frak N(J)$ is a (weak-* closed) TRO. In particular, if $E$ is a closed subgroup then $\frak N(J)$ is a von Neumann subalgebra of $\vn (G)$. \end{proposition} \proof We may write $E=Hg$ where $H$ is a closed subgroup and $g\in G$ (the proof for the case $E=gH$ is identical). Now $E$ is a translate of $H$ which is a set of synthesis by \cite{tatsuuma2} and hence $E$ is a set of synthesis. Thus Lemma \ref{proto} applies. If $sg,tg,rg$ are in $E$ and $S=\lambda_{sg}, \, T=\lambda_{tg}$ and $R=\lambda_{rg}$, then $ST^*R=\lambda_{st\an rg}$ is also in $\frak N(J)$ because $st\an rg\in E$. Since $\frak N(J)$ is generated by $\{\lambda_x:x\in E\}$, it follows that $ST^*R\in\frak N(J)$ for any three elements $S,T,R$ of $\frak N(J)$. \qed {\remark Special cases of the above result are proved by Chu and Lau in \cite{chulau} (see Propositions 3.2.10 and 3.3.9.)} We now pass from $\vn(G)$ to $\cl B (L^2(G))$: The algebra $M^{\cb}A(G)$ acts on $\cl B(L^2(G))$ {\it via} the maps $S_{N(\sigma)},\, \sigma\in M^{\cb}A(G)$ (see \cite{bf} and \cite{j}), and this action is an extension of the action of $M^{\cb}A(G)$ on $\vn (G)$: when $T\in\vn (G)$ and $\sigma\in M^{\cb}A(G)$, we have $S_{N(\sigma)}(T)=\sigma\cdot T$. Hence, letting \[ \tilde{\frak N}(\Sigma) = \{T\in\cl B(L^2(G)): S_{N(\sigma)}(T)=0 , \ \mbox{ for all } \sigma\in \Sigma\}, \] we have $\frak{N}(\Sigma)=\tilde{\frak N}(\Sigma)\cap \vn(G).$ The following is analogous to Proposition \ref{p_njan}; note, however, that the dualities are different. 
\begin{proposition}\label{new} If $\Sigma\subseteq M^{\cb}A(G)$, \[ \tilde{\frak{N}}(\Sigma)=\tilde{\frak{N}}(\Sigma A).\] \end{proposition} \proof The inclusion $\tilde{\frak{N}}(\Sigma)\subseteq \tilde{\frak{N}}(\Sigma A)$ follows as in the proof of Proposition \ref{p_njan}. To prove that $\tilde{\frak N}(\Sigma A)\subseteq \tilde{\frak{N}}(\Sigma)$, let $T\in \tilde{\frak{N}}(\Sigma A)$; then $S_{N(v\sigma)}(T)=0$ for all $\sigma \in\Sigma$ and $v \in A(G)$. Thus, if $h\in T(G)$, \[\du{S_{N(\sigma)}(T)}{N(v) h} =\du{T}{N(\sigma v) h} = \du{S_{N(v\sigma )}(T)}{ h} = 0\, .\] Since the linear span of the set $\{N(v) h: v \in A(G), h \in T(G) \}$ is dense in $T(G)$, it follows that $S_{N(\sigma)}(T)=0$ and so $T \in \tilde{\frak{N}}(\Sigma)$. \qed \begin{proposition}\label{prop2} For every closed ideal $J$ of $A(G),\quad \tilde{\frak N}(J)= \Bim(J^\bot) $. \end{proposition} \proof If $T\in\cl B(L^2(G)), h\in T(G)$ and $u\in A(G)$ then \[\langle S_{N(u)}(T),h\rangle = \langle T, N(u)h\rangle .\] By \cite[Proposition 3.1]{akt}, $\Sat(J)$ is the closed linear span of $\{N(u)h: u\in J , h\in T(G)\}$. We conclude that $T\in (\Sat(J))^\bot$ if and only if $S_{N(u)}(T) = 0$ for all $u\in J$, {\it i.e.} if and only if $T\in \tilde{\frak{N}}(J)$. By Theorem \ref{th_satlcg}, $(\Sat(J))^\bot=\Bim(J^\bot)$, and the proof is complete. $\qquad\Box$ \begin{theorem}\label{thbimn} For any subset $\Sigma$ of $M^{\cb}A(G)$, \[ \tilde{\frak N}(\Sigma)= \Bim(\frak{N}(\Sigma)).\] \end{theorem} \proof It follows from relation (\ref{eq_prean}) that $\Bim((\Sigma A)^\bot) = \Bim(\frak{N}(\Sigma))$. But $\Bim((\Sigma A)^\bot)=\tilde{\frak N}(\Sigma A)$ from Proposition \ref{prop2} and $\tilde{\frak N}(\Sigma A)=\tilde{\frak N}(\Sigma)$ from Proposition \ref{new}. \qed More can be said when the zero set $Z(\Sigma)$ is a subgroup (or a coset) of $G$. \begin{lemma}\label{trito} Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is a set of synthesis. 
Then \begin{equation}\label{eq} \tilde{\frak N}(J) =\overline{\sspp\{M_g\lambda_x:x\in E,g\in L^\infty(G)\}}^{w*} \end{equation} \end{lemma} \proof By Theorem \ref{thbimn}, $\tilde{\frak N}(J) = \Bim(\frak N(J))$ and thus, by Lemma \ref{proto}, $\tilde{\frak N}(J)$ is the weak* closed linear span of the monomials of the form $M_f\lambda_sM_g$ where $f,g\in L^\infty(G)$ and $s\in E$. But, because of the commutation relation $\lambda_sM_g=M_{g_s}\lambda_s \ \ (\mbox{where } g_s(t)=g(s\an t))$, we may write $M_f\lambda_sM_g=M_\phi\lambda_s$ where $\phi=fg_s\in L^\infty(G)$.\qed \begin{theorem}\label{tetarto} Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is the coset of a closed subgroup of $G$. Then $\tilde{\frak N}(J)$ is a (weak* closed) TRO. In particular, if $E$ is a closed subgroup then $\tilde{\frak N}(J)$ is a von Neumann subalgebra of $\cl B(L^2(G))$ and \[ \tilde{\frak N}(J)=(\cl D\cup\frak N(J))''=(\cl D\cup\{\lambda_x:x\in E\})''. \] \end{theorem} \proof As in the proof of Proposition \ref{deutero}, we may take $E=Hg$. By Lemma \ref{trito}, it suffices to check the TRO relation for monomials of the form $M_f\lambda_{sg}$; but, by the commutation relation, triple products $(M_{f_1}\lambda_{sg})(M_{f_2}\lambda_{tg})^*(M_{f_3}\lambda_{rg})$ of such monomials may be written in the form $M_\phi\lambda_{st\an rg}$ and so belong to $\tilde{\frak N}(J)$ when $sg,tg$ and $rg$ are in the coset $E$. Finally, when $E$ is a closed subgroup, the last equalities follow from relation (\ref{eq}) and the bicommutant theorem. \qed We next extend the notions of $\sigma$-harmonic functionals \cite{chulau} and operators \cite{neurun} to jointly harmonic functionals and operators: \begin{definition}\label{d_jh} Let $\Sigma\subseteq M^{\cb}A(G)$. An element $T\in \vn(G)$ will be called a \emph{$\Sigma$-harmonic functional} if $\sigma\cdot T=T$ for all $\sigma\in\Sigma$. We write $\cl H_{\Sigma}$ for the set of all {$\Sigma$-harmonic} functionals.
An operator $T\in \cl B(L^2(G))$ will be called \emph{$\Sigma$-harmonic} if $S_{N(\sigma)}(T)=T$ for all $\sigma\in\Sigma$. We write $\widetilde{\cl H}_{\Sigma}$ for the set of all {$\Sigma$-harmonic} operators. \end{definition} Explicitly, if $\Sigma'=\{\sigma -\mathbf 1:\sigma\in\Sigma\}$, \begin{align*} \cl H_{\Sigma} &= \{T\in \vn(G) : \sigma\cdot T=T \;\text{for all }\; \sigma\in\Sigma\} = \frak N(\Sigma') \\ \text{and }\quad \widetilde{\cl H}_{\Sigma} \ &= \{T\in \cl B(L^2(G)) : S_{N(\sigma)}(T)=T \;\text{for all }\; \sigma\in\Sigma\} = \tilde{\frak N}(\Sigma'). \end{align*} The following is an immediate consequence of Theorem \ref{thbimn}. \begin{corollary}\label{c_jho} Let $\Sigma\subseteq M^{\cb}A(G)$. Then the weak* closed $\cl D$-bimodule $\Bim(\cl H_{\Sigma})$ generated by $\cl H_{\Sigma}$ coincides with $\widetilde{\cl H}_{\Sigma}$. \end{corollary} Let $\sigma$ be a positive definite normalised function and $\Sigma = \{\sigma\}$. In \cite[Theorem 4.8]{neurun}, the authors prove, under some restrictions on $G$ or $\sigma$ (removed in \cite{kalantar}), that $\widetilde{\cl H}_{\Sigma}$ coincides with the von Neumann algebra $(\cl D\cup\cl H_{\Sigma})''$. We give a short proof of a more general result. Denote by $P^1(G)$ the set of all positive definite normalised functions on $G$. Note that $P^1(G)\subseteq M^{\cb}A(G)$. \begin{theorem} Let $\Sigma\subseteq P^1(G)$. The space $\widetilde{\cl H}_{\Sigma}$ is a von Neumann subalgebra of $\cl B(L^2(G))$, and $\widetilde{\cl H}_{\Sigma}=(\cl D \cup\cl H_{\Sigma})''$. \end{theorem} \proof Note that $\cl H_{\Sigma}=\frak N(\Sigma')=\frak N(\Sigma' A)$ and $\widetilde{\cl H}_{\Sigma} = \tilde{\frak N}(\Sigma')= \tilde{\frak N}(\Sigma' A)$. Since $Z(\Sigma')$ is a closed subgroup \cite[Proposition 32.6]{hr2}, it is a set of spectral synthesis \cite{tatsuuma2}. Thus the result follows from Theorem \ref{tetarto}. 
\qed {\remark It is worth pointing out that $\widetilde{\cl H}_{\Sigma}$ has an abelian commutant, since it contains a masa. In particular, it is a type I, and hence an injective, von Neumann algebra.} In \cite[Theorem 4.3]{akt} it was shown that a weak* closed subspace $\cl U\subseteq \cl B(L^2(G))$ is jointly invariant if and only if it is of the form $\cl U = \Bim(J^{\perp})$ for a closed ideal $J\subseteq A(G)$. By Proposition \ref{prop2}, $\Bim(J^{\perp})=\tilde{\frak{N}}(J)$, giving another equivalent description. In fact, the ideal $J$ may be replaced by a subset of $M^{\cb}A(G)$: \begin{proposition}\label{th_eqc} Let $\cl U\subseteq \cl B(L^2(G))$ be a weak* closed subspace. The following are equivalent: (i) \ \ $\cl U$ is jointly invariant; (ii) \ there exists a closed ideal $J\subseteq A(G)$ such that $\cl U = \tilde{\frak{N}}(J)$; (iii) \ there exists a subset $\Sigma\subseteq M^{\cb}A(G)$ such that $\cl U = \tilde{\frak{N}}(\Sigma)$. \end{proposition} \begin{proof} We observed the implication (i)$\Rightarrow$(ii) above, and (ii)$\Rightarrow$(iii) is trivial. Finally, (iii)$\Rightarrow$(i) follows from Theorem \ref{thbimn} and \cite[Theorem 4.3]{akt}. \end{proof} {\remark It might also be observed that every weak* closed jointly invariant subspace $\cl U$ is of the form $\cl U = \widetilde{\cl H}_{\Sigma}$ for some $\Sigma\subseteq M^{\cb}A(G)$.} We end this section with a discussion on the ideals of the form $\Sigma A$: If $J$ is a closed ideal of $A(G)$, then $J A\subseteq J$; thus, by (\ref{eq_prean}) and Proposition \ref{p_njan}, $J^\bot\subseteq \frak N(J)$ and therefore $\Bim (J^\bot)\subseteq\tilde{\frak N}(J)$, since $\tilde{\frak N}(J)$ is a $\cl D$-bimodule and contains ${\frak N}(J)$. The equality $J^\bot= \frak N(J)$ holds if and only if $J$ is generated by its compactly supported elements, equivalently if $J=JA$ (see Remark \ref{remideal}). 
Indeed, by Proposition \ref{p_njan} we have $\frak N(J)=\frak N(JA)= (JA)^\bot$ and so the equality $J^\bot= \frak N(J)$ is equivalent to $J^\bot= (JA)^\bot$. Interestingly, the inclusion $\Bim (J^\bot)\subseteq\tilde{\frak N}(J)$ is in fact always an equality (Proposition \ref{prop2}). We do not know whether all closed ideals of $A(G)$ are of the form $\Sigma A$. They certainly are when $A(G)$ satisfies {\em Ditkin's condition at infinity} \cite[Remark 5.1.8 (2)]{kaniuth}, namely if every $u\in A(G)$ is the limit of a sequence $(uv_n)$, with $v_n\in A_c(G)$. Since $A_c(G)$ is dense in $A(G)$, this is equivalent to the condition that every $u\in A(G)$ belongs to the closed ideal $\overline{uA(G)}$. This condition has been used before (see for example \cite{kl}). It certainly holds whenever $A(G)$ has a weak form of approximate identity; for instance, when $G$ has the approximation property (AP) of Haagerup and Kraus \cite{hk} and {\em a fortiori} when $G$ is amenable. It also holds for all discrete groups. See also the discussion in Remark 4.2 of \cite{lt} and the one following Corollary 4.7 of \cite{akt}. \section{Annihilators and Supports}\label{s} In this section, given a set $\cl A$ of operators on $L^2(G)$, we study the ideal of all $u\in A(G)$ which act trivially on $\cl A$; its zero set is the $G$-support of $\cl A$, which we relate to the $\omega$-support of $\cl A$ defined in \cite{eks}. In \cite{eymard}, Eymard introduced, for $T\in\vn (G)$, the ideal $I_T$ of all $u\in A(G)$ satisfying $u\cdot T=0$. We generalise this by defining, for a subset $\cl A$ of $\cl B(L^2(G))$, \[ I_\cl{A} =\{u \in A(G): S_{N(u)}(\cl A)=\{0\}\}. \] It is easy to verify that $I_\cl{A}$ is a closed ideal of $A(G)$. Let $\cl U(\cl A)$ be the smallest weak* closed jointly invariant subspace containing $\cl A$. We next prove that $\cl U(\cl A)$ coincides with the set $\tilde{\frak N}(I_\cl{A})$ of all $T\in\cl B(L^2(G))$ satisfying $S_{N(u)}(T)=0$ for all $u \in I_\cl{A}$.
\begin{proposition} \label{13} Let $\cl A\subseteq\cl B(L^2(G))$. If $\sigma\in M^{\cb}A(G)$ then $S_{N(\sigma)}(\cl A)=\{0\}$ if and only if $S_{N(\sigma)}(\cl U(\cl A))=\{0\}$. Thus, $I_\cl{A}=I_\cl{U(A)}$. \end{proposition} \proof Recall that $$\cl U(\cl A) = \overline{\sspp\{S_w(\rho_r T \rho_r^*) : T\in \cl A, w\in \frak{S}(G), r\in G\}}^{w^*}.$$ The statement now follows immediately from the facts that $S_{N(\sigma)}\circ S_w= S_w\circ S_{N(\sigma)}$ for all $w\in \frak S(G)$ and $S_{N(\sigma)}\circ {\rm Ad}_{\rho_r}= {\rm Ad}_{\rho_r}\circ S_{N(\sigma)}$ for all $r\in G$. The first commutation relation is obvious, and the second one can be seen as follows: Denoting by $\theta_r$ the predual of the map ${\rm Ad}_{\rho_r}$, for all $h\in T(G)$ we have $\theta_r(N(\sigma)h) = N(\sigma)\theta_r(h)$ since $N(\sigma)$ is right invariant and so \begin{align*} \du{S_{N(\sigma)}(\rho_rT\rho_r^*)}{h} &= \du{\rho_rT\rho_r^*}{N(\sigma)h} = \du{T}{\theta_r(N(\sigma)h)} \\ & = \du{T}{N(\sigma)\theta_r(h)} = \du{S_{N(\sigma)}(T)}{\theta_r(h)} \\ &= \du{\rho_r(S_{N(\sigma)}(T))\rho_r^*}{h}. \end{align*} Thus $S_{N(\sigma)}(\rho_rT\rho_r^*)=\rho_r(S_{N(\sigma)}(T))\rho_r^*$. \qed \begin{theorem} \label{prop16} Let $\cl A\subseteq\cl B(L^2(G))$. The bimodule $\tilde{\frak N}(I_\cl{A})$ coincides with the smallest weak* closed jointly invariant subspace $\cl U(\cl A)$ of $\cl B(L^2(G))$ containing $\cl A$. \end{theorem} \proof Since $\cl{U(A)}$ is weak* closed and jointly invariant, by \cite[Theorem 4.3]{akt} it equals $\Bim(J^\bot)$, where $J$ is the closed ideal of $A(G)$ given by \[J=\{u\in A(G): N(u)\chi_{L\times L} \in (\cl{U(A)})_\bot \;\text{for all compact $L\subseteq G$}\}.\] We show that $J\subseteq I_\cl{A}$. Suppose $u\in J$; then, for all $w\in\frak S(G)$ and all $T\in\cl A$, since $S_w(T)$ is in $\cl{U(A)}$, by Theorem \ref{th_satlcg} it annihilates $ N(u)\chi_{L\times L}$ for every compact $L\subseteq G$. 
It follows that $$\du{S_{N(u)}(T)}{w\chi_{L\times L}} = \du{T}{N(u)w\chi_{L\times L}} =\du{S_w(T)}{N(u)\chi_{L\times L}} = 0$$ for all $w\in\frak S(G)$ and all compact $L\subseteq G$. Taking $w=f\otimes\bar g$ with $f,g\in L^\infty(G)$ supported in $L$, this yields \[ \sca{S_{N(u)}(T)f,g} = \du{S_{N(u)}(T)}{w\chi_{L\times L}} = 0 \] for all compactly supported $f,g\in L^\infty(G)$ and therefore $S_{N(u)}(T)=0$. Since this holds for all $T\in\cl A$, we have shown that $u\in I_\cl{A}$. It follows that $\cl{U(A)}=\Bim(J^\perp)\supseteq \Bim(I_\cl{A}^\perp)$. But $\Bim(I_\cl{A}^\perp)=\tilde{\frak N}(I_\cl{A})$ by Proposition \ref{prop2}, and this space is clearly jointly invariant and weak* closed. Since it contains $\cl A$, it also contains $\cl{U(A)}$ and so \[\cl{U(A)}=\Bim(J^\perp)= \Bim(I_\cl{A}^\perp)=\tilde{\frak N}(I_\cl{A}). \qquad\Box\] \noindent\textbf{Supports of functionals and operators} In \cite{neurun}, the authors generalise the notion of support of an element of $\vn(G)$ introduced by Eymard \cite{eymard} by defining, for an arbitrary $T\in\cl B(L^2(G))$, \[ \mathop{\mathrm{supp}}G T := \{x\in G : u(x) = 0 \;\text{for all $u\in A(G)$ with }\; S_{N(u)}(T) = 0\}.\] Notice that $\mathop{\mathrm{supp}}G T$ coincides with the zero set of the ideal $I_T$ (see also \cite[Proposition 3.3]{neurun}). More generally, let us define the {\em $G$-support} of a subset $\cl A$ of $\cl B(L^2(G))$ by \[\mathop{\mathrm{supp}}G(\cl A) = Z(I_\cl{A}).\] When $\cl A\subseteq \vn(G)$, then $\mathop{\mathrm{supp}}G(\cl A)$ is just the support of $\cl A$ considered as a set of functionals on $A(G)$ as in \cite{eymard}. The following is proved in \cite{neurun} under the assumption that $G$ has the approximation property of Haagerup and Kraus \cite{hk}: \begin{proposition} Let $T\in\cl B(L^2(G))$. Then $\mathop{\mathrm{supp}}G(T)=\emptyset$ if and only if $T=0$. \end{proposition} \proof It is clear that the empty set is the $G$-support of the zero operator.
Conversely, suppose $\mathop{\mathrm{supp}}G(T)=\emptyset$, that is, $Z(I_T)=\emptyset$. This implies that $I_T=A(G)$ (see \cite[Corollary 3.38]{eymard}). Hence $S_{N(u)}(T)=0$ for all $u\in A(G)$, and so for all $h\in T(G)$ we have \[ \du{T}{N(u)h}= \du{S_{N(u)}(T)}{h}=0. \] Since the linear span of $\{N(u)h:u\in A(G), h\in T(G)\}$ is dense in $T(G)$, it follows that $T=0.$ \qed \begin{proposition}\label{propsame} The $G$-support of a subset $\cl A\subseteq\cl B(L^2(G))$ is the same as the $G$-support of the smallest weak* closed jointly invariant subspace $\cl{U(A})$ containing $\cl A$. \end{proposition} \proof Since $I_\cl{A}=I_{\cl{U(A})}$ (Proposition \ref{13}), this is immediate. \qed The following proposition shows that the $G$-support of a subset $\cl A\subseteq\cl B(L^2(G))$ is in fact the support of a space of linear functionals on $A(G)$ (as used by Eymard): it can be obtained either by first forming the ideal $I_\cl{A}$ of all $u\in A(G)$ `annihilating' $\cl A$ (in the sense that $S_{N(u)}(\cl A)=\{0\}$) and then taking the support of the annihilator of $I_\cl{A}$ in $\vn(G)$; alternatively, it can be obtained by forming the smallest weak* closed jointly invariant subspace $\cl{U(A})$ containing $\cl A$ and then considering the support of the set of all the functionals on $A(G)$ which are contained in $\cl{U(A})$. \begin{proposition}\label{propsame2} The $G$-support of a subset $\cl A\subseteq\cl B(L^2(G))$ coincides with the supports of the following spaces of functionals on $A(G)$: (i) \ the space $I_\cl{A}^\bot\subseteq\vn(G)$ (ii) the space $\cl{U(A})\cap\vn(G)=\frak N(I_\cl{A})$. \end{proposition} \proof By Proposition \ref{prop2} and Theorem \ref{prop16}, \[\cl{U(A})= \tilde{\frak N}(I_\cl{A})=\Bim( I_\cl{A}^\bot).\] Since the $\cl D$-bimodule $\Bim( I_\cl{A}^\bot)$ is jointly invariant, it coincides with $\cl U(I_\cl{A}^\bot)$. 
Thus $\cl{U(A})=\cl U(I_\cl{A}^\bot)$ and so Proposition \ref{propsame} gives $\mathop{\mathrm{supp}}G(\cl A)=\mathop{\mathrm{supp}}G( I_\cl{A}^\bot)$, proving part (i). Note that $\cl U(\frak N(I_\cl{A}))=\Bim(\frak N(I_\cl{A}))=\tilde{\frak N}(I_\cl{A})$ and so $\cl U(\frak N(I_\cl{A}))=\cl{U(A})$. Thus by Proposition \ref{propsame}, $\frak N(I_\cl{A})$ and $\cl A$ have the same support. Since $\cl{U(A})\cap\vn(G)=\tilde{\frak N}(I_\cl{A})\cap\vn(G)=\frak N(I_\cl{A})$, part (ii) follows. \qed We are now in a position to relate the $G$-support of a set of operators to their $\omega$-support as introduced in \cite{eks}. \begin{theorem}\label{312} Let $\cl U\subseteq\cl B(L^2(G))$ be a weak* closed jointly invariant subspace. Then \begin{align*} \mathop{\mathrm{supp}}o(\cl U) &\cong (\mathop{\mathrm{supp}}G(\cl U))^*. \end{align*} In particular, the $\omega$-support of a jointly invariant subspace is marginally equivalent to a topologically closed set. \end{theorem} \proof Let $J = I_\cl{U}$. By definition, $\mathop{\mathrm{supp}}G(\cl U) = Z(J)$. By the proof of Theorem \ref{prop16}, $\cl U = \Bim(J^\bot)$, and hence, by Theorem \ref{th_satlcg}, $\cl U = (\Sat J)^\bot$. By \cite[Section 5]{akt}, $\mathop{\mathrm{supp}}o(\cl U)=\nul(\Sat J)=(Z(J))^*$, where $\nul (\Sat J)$ is the largest, up to marginal equivalence, $\omega$-closed subset $F$ of $G\times G$ such that $h|_F = 0$ for all $h\in\Sat J$ (see \cite{st1}). The proof is complete. \qed \begin{corollary}\label{c_nss} Let $\Sigma\subseteq M^{\cb}A(G)$. Then \[ \mathop{\mathrm{supp}}o \tilde{\frak{N}}(\Sigma) \cong Z(\Sigma)^*. \] If $Z(\Sigma)$ satisfies spectral synthesis, then $\tilde{\frak{N}}(\Sigma) = \frak{M}_{\max}(Z(\Sigma)^*)$. \end{corollary} \begin{proof} From Theorem \ref{thbimn}, we know that $\tilde{\frak{N}}(\Sigma)=\Bim((\Sigma A)^\bot)=\tilde{\frak{N}}(\Sigma A)$ and so $\mathop{\mathrm{supp}}o \tilde{\frak{N}}(\Sigma) \cong Z(\Sigma A)^*$ by \cite[Section 5]{akt}. 
But $Z(\Sigma A)=Z(\Sigma)$ as can easily be verified (if $\sigma(t)\ne 0$ there exists $u\in A(G)$ so that $(\sigma u)(t)\ne 0$; the converse is trivial). The last claim follows from the fact that, when $Z(\Sigma)$ satisfies spectral synthesis, there is a unique weak* closed $\cl D$-bimodule whose $\omega$-support is $Z(\Sigma)^*$ (see \cite[Theorem 4.11]{lt} or the proof of \cite[Theorem 5.5]{akt}). \end{proof} Note that when $\Sigma\subseteq P^1(G)$, the set $Z(\Sigma)$ satisfies spectral synthesis. The following corollary is a direct consequence of Corollary \ref{c_nss}. \begin{corollary}\label{corsyn} Let $\Sigma\subseteq M^{\cb}A(G)$ and $\Sigma'=\{\mathbf 1-\sigma:\sigma\in\Sigma\}$. If $Z(\Sigma')$ is a set of spectral synthesis, then $\widetilde{\cl H}_{\Sigma} = \frak{M}_{\max}(Z(\Sigma')^*)$. \end{corollary} \begin{corollary} Let $\Omega$ be a subset of $G\times G$ which is invariant under all maps $(s,t)\to (sr,tr), \, r\in G$. Then $\Omega$ is marginally equivalent to an $\omega$-closed set if and only if it is marginally equivalent to a topologically closed set. \end{corollary} \proof A topologically closed set is of course $\omega$-closed. For the converse, let $\cl U=\frak{M}_{\max}(\Omega)$, so that $\Omega\cong\mathop{\mathrm{supp}}o(\cl U)$. Note that $\cl U$ is a weak* closed jointly invariant space. Indeed, since $\Omega$ is invariant, for every $T\in\cl U$ the operator $T_r:=\rho_rT\rho_r^*$ is supported in $\Omega$ and hence is in $\cl U$. Of course $\cl U$ is invariant under all Schur multipliers. By Theorem \ref{312}, $\mathop{\mathrm{supp}}o(\cl U)$ is marginally equivalent to a closed set. \qed \begin{theorem}\label{th_compsa} Let $\cl A\subseteq \cl B(L^2(G))$. Then $\mathop{\mathrm{supp}}G(\cl A)$ is the smallest closed subset $E\subseteq G$ such that $E^*$ marginally contains $\mathop{\mathrm{supp}}o(\cl A)$. \end{theorem} \proof Let $\cl U=\cl{U(A)}$ be the smallest jointly invariant weak* closed subspace containing $\cl A$.
Let $Z=Z(I_\cl{A})$; by definition, $Z=\mathop{\mathrm{supp}}G\cl A$. But $\mathop{\mathrm{supp}}G\cl A=\mathop{\mathrm{supp}}G\cl U=Z$ (Proposition \ref{propsame}) and so $\mathop{\mathrm{supp}}o\cl{U}\cong Z^*$ by Theorem \ref{312}. Thus $Z^*$ does marginally contain $\mathop{\mathrm{supp}}o(\cl A)$. On the other hand, let $E\subseteq G$ be a closed set such that $E^*$ marginally contains $\mathop{\mathrm{supp}}o(\cl A)$. Thus any operator $T\in\cl A$ is supported in $E^*$. But since $E^*$ is invariant, $\rho_rT\rho_r^*$ is also supported in $E^*$, for every $r\in G$. Thus $\cl U$ is supported in $E^*$. This means that $Z^*$ is marginally contained in $E^*$; that is, there is a null set $N\subseteq G$ such that $Z^*\setminus E^*\subseteq (N\times G)\cup (G\times N)$. We claim that $Z\subseteq E$. To see this, assume, by way of contradiction, that there exists $s\in Z\setminus E$. Then the `diagonal' $\{(r,sr):r\in G\}$ is a subset of $Z^*\setminus E^*\subseteq (N\times G)\cup (G\times N)$. It follows that for every $r\in G$, either $r\in N$ or $sr\in N$, which means that $r\in s\an N$. Hence $G\subseteq N\cup s\an N$, which is a null set. This contradiction shows that $Z\subseteq E$. \qed We note that for subsets $\cl S$ of $\vn(G)$ the relation $\mathop{\mathrm{supp}}o (\cl{S})\subseteq (\mathop{\mathrm{supp}}G(\cl S))^*$ is in \cite[Lemma 4.1]{lt}. In \cite{neurun} the authors define, for a closed subset $Z$ of $G$, the set \[ \cl B_Z(L^2(G)) = \{T\in\cl B(L^2(G)): \mathop{\mathrm{supp}}G(T)\subseteq Z\}. \] \begin{corollary}\label{rem38} If $Z\subseteq G$ is closed, the set $\cl B_Z(L^2(G))$ consists of all $T\in\cl B(L^2(G))$ which are $\omega$-supported in $Z^*$; that is, $\cl B_Z(L^2(G))=\frak{M}_{\max}(Z^*)$. In particular, this space is a reflexive jointly invariant subspace. \end{corollary} \proof If $T$ is $\omega$-supported in $Z^*$, then by Theorem \ref{th_compsa}, $\mathop{\mathrm{supp}}G(T)\subseteq Z$.
Conversely, if $\mathop{\mathrm{supp}}G(T)\subseteq Z$ then $\mathop{\mathrm{supp}}G(\cl U(T))\subseteq Z$ by Proposition \ref{propsame}. But, by Theorem \ref{312}, $\mathop{\mathrm{supp}}o(\cl U(T)) \cong (\mathop{\mathrm{supp}}G(\cl U(T)))^*\subseteq Z^*$ and so $T$ is $\omega$-supported in $Z^*$. \qed \begin{remark} \label{last} The $\omega$-support $\mathop{\mathrm{supp}}o(\cl A)$ of a set $\cl A$ of operators is more \lq sensitive' than $\mathop{\mathrm{supp}}G(\cl A)$ in that it encodes more information about $\cl A$. Indeed, $\mathop{\mathrm{supp}}G(\cl A)$ only depends on the (weak* closed) jointly invariant subspace generated by $\cl A$, while $\mathop{\mathrm{supp}}o(\cl A)$ depends on the (weak* closed) masa-bimodule generated by $\cl A$. \end{remark} {\example Let $G=\bb Z$ and $\cl A=\frak{M}_{\max}\{ (i,j):i+j \in\{0,1\}\}$. The $\omega$-support of $\cl A$ is of course the two-line set $\{ (i,j):i+j \in\{0,1\}\}$, while its $G$-support is $\bb Z$, which gives no information about $\cl A$.} Indeed, if $E\subseteq\bb Z$ contains $\mathop{\mathrm{supp}}G(\cl A)$, then by Theorem \ref{th_compsa} $E^*=\{(n,m)\in\bb Z\times\bb Z:m-n\in E\}$ must contain $\{ (i,j):i+j \in\{0,1\}\}$. Thus for all $n\in\bb Z$, since $(-n,n)$ and $(-n,n+1)$ are in $\mathop{\mathrm{supp}}o(\cl A)$ we have $n-(-n)\in E$ and $n+1-(-n)\in E$; hence $\bb Z\subseteq E$. \end{document}
\begin{document} \title[Canonical Generating Classes]{Canonical Equivariant Cohomology Classes Generating Zeta Values of Totally Real Fields} \author[Bannai]{Kenichi Bannai$^{*\diamond}$} \email{[email protected]} \address{${}^*$Department of Mathematics, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kouhoku-ku, Yokohama 223-8522, Japan} \address{${}^\diamond$Mathematical Science Team, RIKEN Center for Advanced Intelligence Project (AIP), 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan} \author[Hagihara]{Kei Hagihara$^{\diamond*}$} \author[Yamada]{Kazuki Yamada$^*$} \author[Yamamoto]{Shuji Yamamoto$^{*\diamond}$} \date{\today \quad (Version 1.08)} \begin{abstract} It is known that the special values at nonpositive integers of a Dirichlet $L$-function may be expressed using the generalized Bernoulli numbers, which are defined by a canonical generating function. The purpose of this article is to consider the generalization of this classical result to the case of Hecke $L$-functions of totally real fields. Hecke $L$-functions may be expressed canonically as a finite sum of zeta functions of Lerch type. By combining the non-canonical multivariable generating functions constructed by Shintani, we newly construct a canonical class, which we call the \textit{Shintani generating class}, in the equivariant cohomology of an algebraic torus associated to the totally real field. Our main result states that the specializations at torsion points of the derivatives of the Shintani generating class give values at nonpositive integers of the zeta functions of Lerch type. This result gives the insight that the correct framework in the higher dimensional case is to consider higher equivariant cohomology classes instead of functions. \end{abstract} \thanks{This research is supported by KAKENHI 18H05233.
This research topic originated in the KiPAS program FY2014--2018 of the Faculty of Science and Technology at Keio University.} \subjclass[2010]{11M35 (Primary), 11R42, 14L15, 55N91 (Secondary)} \maketitle \setcounter{tocdepth}{1} \section{Introduction}\label{section: introduction} It is classically known that the special values at nonpositive integers of a Dirichlet $L$-function may be expressed using the generalized Bernoulli numbers, which are defined by a canonical rational generating function. This simple but significant result is the basis of the deep connection between the special values of Dirichlet $L$-functions and important arithmetic invariants pertaining to the abelian extensions of $\bbQ$. In his ground-breaking article \cite{Shi76}, Shintani generalized this result to the case of Hecke $L$-functions of totally real fields. His approach consists of two steps: the decomposition of a Hecke $L$-function into a finite sum of zeta functions -- the \textit{Shintani zeta functions} -- associated to certain cones, and the construction of a multivariable generating function for the special values of each Shintani zeta function. Although this method met with considerable success, including the construction by Barsky \cite{Bar78} and Cassou-Nogu\`es \cite{CN79} of the $p$-adic $L$-functions for totally real fields, the decomposition step above requires a choice of cones, and the resulting generating function is non-canonical. A canonical object behind these generating functions remained to be found. The purpose of this article is to construct geometrically such a canonical object, which we call the \textit{Shintani generating class}, through the combination of the following three ideas. We let $g$ be the degree of the totally real field.
First, the Hecke $L$-functions are expressed canonically in terms of the \textit{zeta functions of Lerch type} (cf.\ Definition \ref{def: Lerch}), or simply \textit{Lerch zeta functions}, which are defined for finite additive characters parameterized by torsion points of a certain algebraic torus of dimension $g$, originally considered by Katz \cite{Katz81}, associated to the totally real field. Second, via a \v Cech resolution, the multivariable generating functions constructed by Shintani for various cones may beautifully be combined to form the Shintani generating class, a canonical cohomology class in the $(g-1)$-st cohomology group of the algebraic torus minus the identity. Third, the class descends to the equivariant cohomology with respect to the action of the totally positive units, which successfully allows for nontrivial specializations of the class and its derivatives at torsion points. Our main result, Theorem \ref{theorem: main}, states that the specializations at nontrivial torsion points of the derivatives of the Shintani generating class give values at nonpositive integers of the Lerch zeta functions associated to the totally real field. The classical result for $\bbQ$ that we generalize, viewed through our emphasis on Lerch zeta functions, is as follows. The Dirichlet $L$-function may canonically be expressed as a finite linear combination of the classical \textit{Lerch zeta functions}, defined by the series \begin{equation}\label{eq: Lerch} \cL(\xi, s)\coloneqq\sum_{n=1}^\infty \xi(n)n^{-s} \end{equation} for finite characters $\xi\in\mathrm{Hom}_\bbZ(\bbZ,\bbC^\times)$. The series \eqref{eq: Lerch} converges for any $s\in\bbC$ such that $\Re(s)>1$ and has an analytic continuation to the whole complex plane, which is holomorphic if $\xi\neq 1$. When $\xi=1$, the function $\cL(1, s)$ coincides with the Riemann zeta function $\zeta(s)$, hence has a simple pole at $s=1$.
A crucial property of the Lerch zeta functions is that they admit a single canonical generating function $\cG(t)$, which captures, for \textit{all} nontrivial finite characters $\xi$, the values of the Lerch zeta functions at nonpositive integers. Let $\bbG_m\coloneqq\Spec\bbZ[t,t^{-1}]$ be the multiplicative group, and let $\cG(t)$ be the rational function \[ \cG(t)\coloneqq \frac{t}{1-t} \in \Gamma\bigl(U,\sO_{\bbG_m}\bigr), \] where $U\coloneqq\bbG_m\setminus\{1\}$. We denote by $\partial$ the algebraic differential operator $\partial\coloneqq t\frac{d}{dt}$, referred to as the ``magic stick'' in \cite{Kato93}*{1.1.7}. Note that any $\xi\in\bbG_m(\bbC)$ corresponds to an additive character $\xi\colon\bbZ\rightarrow\bbC^\times$ given by $\xi(n)\coloneqq\xi^n$ for any $n\in\bbZ$. Then we have the following. \begin{theorem}\label{theorem: classical generating} For any nontrivial torsion point $\xi$ of $\bbG_m$ and $k\in\bbN$, we have \[ \cL(\xi,-k)=\partial^k\cG(t)\big|_{t=\xi}\in \bbQ(\xi). \] In particular, the values $\cL(\xi,-k)$ for any $k\in\bbN$ are all algebraic. \end{theorem} The purpose of this article is to generalize the above result to the case of totally real fields. Let $F$ be a totally real field of degree $g$, and let $\cO_F$ be its ring of integers. We denote by $\cO_{F+}$ the set of totally positive integers of $F$ and by $\Delta\coloneqq\cO_{F+}^\times$ the group of totally positive units of $F$. Let $\bbT\coloneqq\mathrm{Hom}_\bbZ(\cO_F,\bbG_m)$ be an algebraic torus defined over $\bbZ$ which represents the functor associating to any $\bbZ$-algebra $R$ the group $\bbT(R)=\mathrm{Hom}_\bbZ(\cO_F,R^\times)$. Such a torus was used by Katz \cite{Katz81} to reinterpret the construction by Barsky \cite{Bar78} and Cassou-Nogu\`es \cite{CN79} of the $p$-adic $L$-function of totally real fields. For the case $F=\bbQ$, we have $\bbT = \mathrm{Hom}_\bbZ(\bbZ,\bbG_m)=\bbG_m$, hence $\bbT$ is a natural generalization of the multiplicative group.
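As an illustrative sanity check (our addition, not part of the original text), Theorem \ref{theorem: classical generating} can be verified numerically at the torsion point $\xi=-1$, where $\cL(-1,s)=\sum_{n\geq 1}(-1)^n n^{-s}=-\eta(s)$ with $\eta$ the Dirichlet eta function. The following Python sketch (using \texttt{sympy} and \texttt{mpmath}) compares $\partial^k\cG(t)\big|_{t=-1}$ with $-\eta(-k)$ for small $k$:

```python
import sympy as sp
from mpmath import altzeta  # altzeta(s) is the Dirichlet eta function

t = sp.symbols('t')
G = t / (1 - t)  # the generating function G(t) = t/(1-t)

def partial_k(k):
    """Apply the 'magic stick' operator t*d/dt to G(t), k times."""
    f = G
    for _ in range(k):
        f = sp.cancel(t * sp.diff(f, t))
    return f

# At xi = -1 the Lerch zeta value is L(-1, -k) = -eta(-k), so the
# theorem predicts -eta(-k) = (t d/dt)^k G(t) evaluated at t = -1.
for k in range(4):
    lhs = float(-altzeta(-k))              # L(-1, -k) by analytic continuation
    rhs = float(partial_k(k).subs(t, -1))  # derivative of G, specialized
    assert abs(lhs - rhs) < 1e-12, (k, lhs, rhs)
    print(k, rhs)
```

The printed values $-1/2,\,-1/4,\,0,\,1/8$ are indeed rational, consistent with the algebraicity statement of the theorem.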
For an additive character $\xi\colon\cO_F\rightarrow R^\times$ and $\varepsilon\in\Delta$, we let $\xi^\varepsilon$ be the character defined by $\xi^\varepsilon(\alpha)\coloneqq\xi(\varepsilon\alpha)$ for any $\alpha\in\cO_F$. This gives an action of $\Delta$ on the set of additive characters $\bbT(R)$. We consider the following zeta function, which we regard as the generalization of the classical Lerch zeta function to the case of totally real fields. \begin{definition}\label{def: Lerch} For any torsion point $\xi\in\bbT(\bbC)=\mathrm{Hom}_\bbZ(\cO_F,\bbC^\times)$, we define the \textit{zeta function of Lerch type}, or simply the \textit{Lerch zeta function}, by \begin{equation}\label{eq: Shintani-Lerch} \cL(\xi\Delta, s)\coloneqq\sum_{\alpha\in\Delta_{\xi}\backslash\cO_{F+}}\xi(\alpha)N(\alpha)^{-s}, \end{equation} where $N(\alpha)$ is the norm of $\alpha$, and $\Delta_{\xi}\subset\Delta$ is the isotropy subgroup of $\xi$, i.e.\ the subgroup consisting of $\varepsilon\in\Delta$ such that $\xi^\varepsilon=\xi$. \end{definition} The notation $\cL(\xi\Delta, s)$ is used since \eqref{eq: Shintani-Lerch} depends only on the $\Delta$-orbit of $\xi$. This series is known to converge for $\Re(s)>1$, and may be continued analytically to the whole complex plane. When the narrow class number of $F$ is \textit{one}, the Hecke $L$-function of a finite Hecke character of $F$ may canonically be expressed as a finite linear combination of $\cL(\xi\Delta, s)$ for suitable finite characters $\xi$ (see Proposition \ref{prop: Hecke}). The action of $\Delta$ on additive characters gives a right action of $\Delta$ on $\bbT$. The structure sheaf $\sO_\bbT$ on $\bbT$ has a natural $\Delta$-equivariant structure in the sense of Definition \ref{def: equivariant structure}. Let $U\coloneqq\bbT\setminus\{1\}$. Our main results are as follows.
\begin{theorem}\label{theorem: introduction} \begin{enumerate} \item \mbox{(Proposition \ref{prop: Shintani generating class}) } There exists a canonical class \[ \cG\in H^{g-1}(U/\Delta,\sO_\bbT), \] where $H^{g-1}(U/\Delta,\sO_\bbT)$ is the equivariant cohomology of $U$ with coefficients in $\sO_\bbT$ (see \S\ref{section: equivariant} for the precise definition). \item (Theorem \ref{theorem: main}) For any nontrivial torsion point $\xi$ of $\bbT$, we have a canonical isomorphism \[ H^{g-1}(\xi/\Delta_\xi,\sO_\xi)\cong\bbQ(\xi). \] Through this isomorphism, for any integer $k\geq 0$, we have \[ \cL(\xi\Delta,-k)=\partial^k\cG(\xi)\in\bbQ(\xi), \] where $\partial\colon H^{g-1}(U/\Delta,\sO_\bbT)\rightarrow H^{g-1}(U/\Delta,\sO_\bbT)$ is a certain differential operator given in \eqref{eq: differential}, and $\partial^k\cG(\xi)$ is the image of $\partial^k\cG$ with respect to the specialization map $ H^{g-1}(U/\Delta,\sO_\bbT)\rightarrow H^{g-1}(\xi/\Delta_\xi,\sO_\xi) $ induced by the equivariant morphism $\xi\rightarrow U$. \end{enumerate} \end{theorem} We refer to the class $\cG$ as the \textit{Shintani generating class}. If $F=\bbQ$, then we have $\Delta=\{1\}$, and the class $\cG$ is simply the rational function $\cG(t)=t/(1-t)\in H^0(U,\sO_{\bbG_m})=\Gamma(U,\sO_{\bbG_m})$. Thus Theorem \ref{theorem: introduction} (2) coincides with Theorem \ref{theorem: classical generating} in this case. For the case $F=\bbQ$ and also for the case of imaginary quadratic fields (see for example \cite{CW77}, \cite{CW78}), canonical algebraic generating functions of special values of Hecke $L$-functions play a crucial role in relating the special values of Hecke $L$-functions to arithmetic invariants. However, up until now, the discovery of such a \textit{canonical} generating function has been elusive in the higher dimensional cases. Our result suggests that the correct framework in the higher dimensional case is to consider equivariant cohomology classes instead of functions.
The relation of our work to the results of Charollois, Dasgupta, and Greenberg \cite{CDG14} was kindly pointed out to us by Peter Xu. As a related result, the relation of special values of Hecke $L$-functions of totally real fields to the topological polylogarithm on a torus was studied by Be\u\i linson, Kings, and Levin in \cite{BKL18}. The polylogarithm for general commutative group schemes was constructed by Huber and Kings \cite{HK18}. Our discovery of the Shintani generating class arose from our attempt to explicitly describe various realizations of the polylogarithm for the algebraic torus $\bbT$. In subsequent research, we will explore the arithmetic implications of our insight (see for example \cite{BHY00}). \tableofcontents The content of this article is as follows. In \S \ref{section: Lerch Zeta}, we will introduce the Lerch zeta function $\cL(\xi\Delta, s)$ and show that this function may be expressed non-canonically as a linear combination of Shintani zeta functions. We will then review Shintani's multivariable generating function for the special values of Shintani zeta functions. In \S \ref{section: equivariant}, we will define the equivariant cohomology of a scheme with an action of a group, and will construct the equivariant \v Cech complex $C^\bullet(\frU/\Delta,\sF)$ which calculates the equivariant cohomology of $U\coloneqq\bbT\setminus\{1\}$ with coefficients in an equivariant coherent sheaf $\sF$ on $U$. In \S \ref{section: Shintani Class}, we will define in Proposition \ref{prop: Shintani generating class} the Shintani generating class $\cG$, and in Lemma \ref{lem: differential} give the definition of the derivatives. Finally in \S \ref{section: specialization}, we will give the proof of our main theorem, Theorem \ref{theorem: main}, which coincides with Theorem \ref{theorem: introduction} (2). \section{Lerch Zeta Function}\label{section: Lerch Zeta} In this section, we first introduce the Lerch zeta function for totally real fields.
We will then define the Shintani zeta function associated to a cone $\sigma$ and a function $\phi\colon\cO_F\rightarrow\bbC$ which factors through $\cO_F/\frf$ for some nonzero ideal $\frf\subset\cO_F$, and describe the generating function of its values at nonpositive integers when $\phi$ is a finite additive character. Let $\xi\in{\Hecke}om_\bbZ(\cO_F,\bbC^\times)$ be a $\bbC$-valued character on $\cO_F$ of finite order. As in Definition \ref{def: Lerch} of \S\ref{section: introduction}, we define the Lerch zeta function for totally real fields by the series \[ \cL(\xi\Delta, s)\coloneqq\sum_{\alpha\in\Delta_\xi\backslash\cO_{F+}}\xi(\alpha)N(\alpha)^{-s}, \] where $\Delta_\xi\coloneqq\{\varepsilon\in\Delta\mid \xi^\varepsilon=\xi\}$, which may be continued analytically to the whole complex plane. \begin{remark} Note that we have \[ \cL(\xi\Delta, s)=\sum_{\alpha\in\Delta\backslash\cO_{F+}} \sum_{\varepsilon\in\Delta_\xi\backslash\Delta} \xi(\varepsilon\alpha)N(\alpha)^{-s}. \] Even though $\xi(\alpha)$ is not well-defined for $\alpha\in\Delta\backslash\cO_{F+}$, the sum $\sum_{\varepsilon\in\Delta_\xi\backslash\Delta}\xi(\varepsilon\alpha)$ is well-defined for $\alpha\in\Delta\backslash\cO_{F+}$. \end{remark} The importance of $\cL(\xi\Delta, s)$ is in its relation to the Hecke $L$-functions of $F$. Let $\frf$ be a nonzero integral ideal of $F$. We denote by $\Cl^+_F(\frf)\coloneqq I_\frf/P^+_\frf$ the strict ray class group modulo $\frf$ of $F$, where $I_\frf$ is the group of fractional ideals of $F$ prime to $\frf$ and $P^+_\frf\coloneqq \{ (\alpha) \mid \alpha\in F_+, \alpha\equiv 1 \operatorname{mod}^\times \frf\}$. A finite Hecke character of $F$ of conductor $\frf$ is a character \[ \chi\colon\Cl^+_F(\frf)\rightarrow\bbC^\times.
\] By \cite{Neu99}*{Chapter VII (6.9) Proposition}, there exists a unique character $ \chi_\fin\colon(\cO_F/\frf)^\times\rightarrow\bbC^\times $ associated to $\chi$ such that $\chi((\alpha))=\chi_\fin(\alpha)$ for any $\alpha\in\cO_{F+}$ prime to $\frf$. In particular, we have $\chi_\fin(\varepsilon)=1$ for any $\varepsilon\in\Delta$. Extending by \textit{zero}, we regard $\chi_\fin$ as a function on $\cO_F/\frf$ and on $\cO_F$ with values in $\bbC$. In what follows, we let $\bbT[\frf]\coloneqq{\Hecke}om(\cO_F/\frf,{\ol\bbQ}^\times)\subset\bbT(\ol\bbQ)$ be the set of $\frf$-torsion points of $\bbT$. We say that a character $\chi$, $\chi_\fin$ or $\xi\in\bbT[\frf]$ is \textit{primitive} if it does not factor respectively through $\Cl_F^+(\frf')$, $(\cO_F/\frf')^\times$ or $\cO_F/\frf'$ for any integral ideal $\frf'\neq\frf$ such that $\frf'|\frf$. Then we have the following. \begin{lemma}\label{lemma: fourier} For any $\xi\in\bbT[\frf]$, let \[ c_\chi(\xi)\coloneqq\frac{1}{\bsN(\frf)}\sum_{\beta\in\cO_F/\frf} \chi_\fin(\beta)\xi(-\beta). \] Then we have \[ \chi_\fin(\alpha)=\sum_{\xi\in\bbT[\frf]}c_\chi(\xi)\xi(\alpha). \] Moreover, if $\chi_\fin$ is primitive, then we have $c_\chi(\xi)=0$ for any non-primitive $\xi$. \end{lemma} \begin{proof} The first statement follows from \[ \sum_{\xi\in\bbT[\frf]}c_\chi(\xi)\xi(\alpha) = \frac{1}{\bsN(\frf)} \sum_{\beta\in\cO_F/\frf} \chi_\fin(\beta)\biggl(\sum_{\xi\in\bbT[\frf]} \xi(\alpha-\beta)\biggr) = \chi_\fin(\alpha), \] where the last equality follows from the fact that $\sum_{\xi\in\bbT[\frf]} \xi(\alpha)=\bsN(\frf)$ if $\alpha\equiv 0\pmod{\frf}$ and $\sum_{\xi\in\bbT[\frf]} \xi(\alpha)=0$ if $\alpha\not\equiv 0\pmod{\frf}$. Next, suppose $\chi_\fin$ is primitive, and suppose $\xi\in\bbT[\frf']$ for some integral ideal $\frf'\neq\frf$ of $F$ such that $\frf'|\frf$.
Since $\chi_\fin$ is primitive, it does not factor through $\cO_F/\frf'$, hence there exists an element $\gamma\in\cO_F$ prime to $\frf$ such that $\gamma\equiv1\pmod{\frf'}$ and $\chi_\fin(\gamma)\neq 1$. Then since $\xi\in\bbT[\frf']$, we have $\xi(\gamma\alpha)=\xi(\alpha)$ for any $\alpha\in\cO_F$. This gives \begin{align*} c_\chi(\xi)=\frac{1}{\bsN(\frf)}\sum_{\beta\in\cO_F/\frf} \chi_\fin(\beta)\xi(-\beta) &=\frac{1}{\bsN(\frf)}\sum_{\beta\in\cO_F/\frf} \chi_\fin(\beta)\xi(-\gamma\beta)\\ &=\frac{\ol\chi_\fin(\gamma)}{\bsN(\frf)}\sum_{\beta\in\cO_F/\frf} \chi_\fin(\gamma\beta) \xi(-\gamma\beta)=\ol\chi_\fin(\gamma)c_\chi(\xi). \end{align*} Since $\chi_\fin(\gamma)\neq 1$, we have $c_\chi(\xi)=0$ as desired. \end{proof} Note that since multiplication by $\varepsilon\in\Delta$ is bijective on $\cO_F/\frf$ and since $\chi_\fin(\varepsilon)=1$, we have $c_\chi(\xi^\varepsilon)=c_\chi(\xi)$. Then we have the following. \begin{proposition}\label{prop: Hecke} Assume that the narrow class number of $F$ is \textit{one}, and let $\chi\colon\Cl^+_F(\frf)\rightarrow\bbC^\times$ be a finite primitive Hecke character of $F$ of conductor $\frf\neq(1)$. Then for $U[\frf]\coloneqq\bbT[\frf]\setminus\{1\}$, we have \[ L(\chi, s)=\sum_{\xi\in U[\frf]/\Delta} c_\chi(\xi)\cL(\xi\Delta, s). 
\] \end{proposition} \begin{proof} By definition and Lemma \ref{lemma: fourier}, we have \begin{align*} \sum_{\xi\in\bbT[\frf]/\Delta} c_\chi(\xi)\cL(\xi\Delta, s) &=\sum_{\xi\in\bbT[\frf]/\Delta}\sum_{\alpha\in\Delta\backslash\cO_{F+}} \sum_{\varepsilon\in\Delta_\xi\backslash\Delta}c_\chi(\xi)\xi(\varepsilon\alpha)\bsN(\alpha)^{-s}\\ &=\sum_{\alpha\in\Delta\backslash\cO_{F+}}\sum_{\xi\in\bbT[\frf]/\Delta}\sum_{\varepsilon\in\Delta_\xi\backslash\Delta} c_\chi(\xi^\varepsilon)\xi^\varepsilon(\alpha)\bsN(\alpha)^{-s}\\ &=\sum_{\alpha\in\Delta\backslash\cO_{F+}}\sum_{\xi\in\bbT[\frf]}c_\chi(\xi)\xi(\alpha)\bsN(\alpha)^{-s}\\ &=\sum_{\alpha\in\Delta\backslash\cO_{F+}} \chi_\fin(\alpha)\bsN(\alpha)^{-s} = \sum_{\fra\subset\cO_F} \chi(\fra)\bsN\!\fra^{-s}. \end{align*} Our assertion follows from the definition of the Hecke $L$-function and the fact that $c_\chi(\xi)=0$ for $\xi=1$. \end{proof} \begin{remark}\label{rem: class number one} We assumed the condition on the narrow class number for simplicity. By considering the Lerch zeta functions corresponding to additive characters in ${\Hecke}om_\bbZ(\fra,\bbC^\times)$ for general fractional ideals $\fra$ of $F$, we may express the Hecke $L$-functions when the narrow class number of $F$ is greater than \textit{one}. \end{remark} We will next define the Shintani zeta function associated to a cone. Note that we have a canonical isomorphism \[ F\otimes\bbR\cong\bbR^I\coloneqq \prod_{\tau\in I}\bbR, \qquad \alpha\otimes 1 \mapsto (\alpha^\tau), \] where $I$ is the set of embeddings $\tau\colon F\hookrightarrow\bbR$ and we let $\alpha^\tau\coloneqq\tau(\alpha)$ for any embedding $\tau\in I$. We denote by $\bbR^I_+\coloneqq\prod_{\tau\in I}\bbR_+$ the set of totally positive elements of $\bbR^I$, where $\bbR_+$ is the set of positive real numbers. 
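To fix ideas, consider the simplest nontrivial case, that of a real quadratic field (used here purely as an illustration). Then $I=\{\tau_1,\tau_2\}$ consists of the two real embeddings of $F$, and the isomorphism above sends $\alpha\mapsto(\alpha^{\tau_1},\alpha^{\tau_2})\in\bbR^2$. Any $\varepsilon\in\Delta$ satisfies $\varepsilon^{\tau_1}\varepsilon^{\tau_2}=N(\varepsilon)=1$, so the $\Delta$-orbit of $1$ lies on the hyperbola
\[
\{(u_{\tau_1},u_{\tau_2})\in\bbR^2_+\mid u_{\tau_1}u_{\tau_2}=1\},
\]
which is preserved by the componentwise multiplication action of $\Delta$ on $\bbR^I_+$.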
\begin{definition} A rational closed polyhedral cone in $\bbR^I_+\cup\{0\}$, which we simply call a cone, is any set of the form \[ \sigma_{\boldsymbol{\alpha}}\coloneqq\{ x_1 \alpha_1+\cdots+x_m\alpha_m \mid x_1,\ldots,x_m \in\bbR_{\geq0}\} \] for some ${\boldsymbol{\alpha}}=(\alpha_1,\ldots,\alpha_m)\in\cO_{F+}^m$. In this case, we say that ${\boldsymbol{\alpha}}$ is a generator of $\sigma_{\boldsymbol{\alpha}}$. By considering the case $m=0$, we see that $\sigma=\{0\}$ is a cone. \end{definition} We define the dimension $\dim\sigma$ of a cone $\sigma$ to be the dimension of the $\bbR$-vector space generated by $\sigma$. In what follows, we fix a numbering $I=\{\tau_1,\ldots,\tau_g\}$ of the elements of $I$. For any subset $R\subset\bbR_+^I$, we let \[ \wh R\coloneqq\bigl\{(u_{\tau_1},\ldots,u_{\tau_g})\in\bbR_+^I \bigm| \exists\,\delta>0 \text{ such that } (u_{\tau_1},\ldots,u_{\tau_{g-1}}, u_{\tau_g}-\delta')\in R \text{ for all } 0<\delta'<\delta \bigr\}. \] \begin{definition} Let $\sigma$ be a cone, and let $\phi\colon\cO_F\rightarrow\bbC$ be a $\bbC$-valued function on $\cO_F$ which factors through $\cO_F/\frf$ for some nonzero ideal $\frf\subset\cO_F$. We define the \textit{Shintani zeta function} $\zeta_\sigma(\phi,\bss)$ associated to a cone $\sigma$ and a function $\phi$ by the series \begin{equation}\label{eq: Shintani zeta} \zeta_\sigma(\phi,\bss)\coloneqq\sum_{\alpha\in\wh\sigma\cap\cO_F} \phi(\alpha) \alpha^{-\bss}, \end{equation} where $\bss=(s_{\tau})\in\bbC^I$ and $\alpha^{-\bss}\coloneqq\prod_{\tau\in I}(\alpha^{\tau})^{-s_{\tau}}$. The series \eqref{eq: Shintani zeta} converges if $\Re(s_{\tau})>1$ for any $\tau\in I$. \end{definition} By \cite{Shi76}*{Proposition 1}, the function $\zeta_\sigma(\phi,\bss)$ has a meromorphic continuation to all $\bss\in\bbC^I$. If we let $\bss=(s,\ldots, s)$ for $s\in\bbC$, then we have \begin{equation}\label{eq: diagonal} \zeta_\sigma(\phi,(s,\ldots, s))=\sum_{\alpha\in\wh\sigma\cap\cO_F} \phi(\alpha) N(\alpha)^{-s}.
\end{equation} Shintani constructed the generating function of the values of $\zeta_\sigma(\xi, \bss)$ at nonpositive integers for additive characters $\xi\colon\cO_F\rightarrow\bbC^\times$ of finite order, given as follows. In what follows, we view $z\in F\otimes\bbC$ as an element $z = (z_\tau)\in\bbC^I$ through the canonical isomorphism $F\otimes\bbC\cong\bbC^I$. \begin{definition}\label{def: generating} Let $\sigma=\sigma_{\boldsymbol{\alpha}}$ be a $g$-dimensional cone generated by ${\boldsymbol{\alpha}}=(\alpha_1,\ldots,\alpha_g)\in\cO_{F+}^g$, and let $P_{\boldsymbol{\alpha}}\coloneqq\{ x_1\alpha_1+\cdots+x_g\alpha_g\mid \forall i\,\,0\leq x_i < 1\}$ be the parallelepiped spanned by $\alpha_1,\ldots,\alpha_g$. We define $\sG_{\sigma}(z)$ to be the meromorphic function on $F\otimes\bbC\cong\bbC^I$ given by \[ \sG_\sigma(z)\coloneqq \frac{\sum_{\alpha\in \wh P_{\boldsymbol{\alpha}}\cap\cO_F}e^{2\pi i\Tr(\alpha z)}}{\bigl(1-e^{2\pi i\Tr(\alpha_1z)}\bigr)\cdots\bigl(1-e^{2\pi i\Tr(\alpha_gz)}\bigr)}, \] where $\Tr(\alpha z)\coloneqq\sum_{\tau\in I}\alpha^\tau z_\tau$ for any $\alpha\in\cO_F$. The definition of $\sG_\sigma(z)$ depends only on the cone and is independent of the choice of the generator ${\boldsymbol{\alpha}}$. \end{definition} \begin{remark} If $F=\bbQ$ and $\sigma=\bbR_{\geq0}$, then we have $ \sG_\sigma(z) = \frac{e^{2\pi iz}}{1-e^{2\pi i z}}. $ \end{remark} For $\bsk=(k_\tau)\in\bbN^I$, we denote $\partial^\bsk\coloneqq\prod_{\tau\in I}\partial_\tau ^{k_\tau}$, where $\partial_\tau\coloneqq\frac{1}{2\pi i}\frac{\partial}{\partial z_\tau}$. For $u\in F$, we let $\xi_u$ be the finite additive character on $\cO_F$ defined by $\xi_u(\alpha)\coloneqq e^{2\pi i\Tr(\alpha u)}$. We note that any additive character on $\cO_F$ with values in $\bbC^\times$ of finite order is of this form for some $u\in F$. The following theorem, based on the work of Shintani, is standard (see for example \cite{CN79}*{Th\'eor\`eme 5}, \cite{Col88}*{Lemme 3.2}).
\begin{theorem}\label{theorem: Shintani} Let ${\boldsymbol{\alpha}}$ and $\sigma$ be as in Definition \ref{def: generating}. For any $u\in F$ satisfying $\xi_u(\alpha_j)\neq1$ for $j=1,\ldots,g$ and any $\bsk\in\bbN^I$, we have \[ \partial^\bsk \sG_\sigma(z)\big|_{z=u\otimes1}=\zeta_\sigma(\xi_u,-\bsk). \] \end{theorem} Note that the condition $\xi_u(\alpha_j)\neq1$ for $j=1,\ldots,g$ ensures that $z=u\otimes 1$ does not lie on the poles of the function $\sG_\sigma(z)$. The Lerch zeta function $\cL(\xi\Delta, s)$ may be expressed as a finite sum of the functions $\zeta_\sigma(\xi,(s,\ldots, s))$ using a Shintani decomposition. We first review the definition of a Shintani decomposition. We say that a cone $\sigma$ is \textit{simplicial} if there exists a generator of $\sigma$ whose components are linearly independent over $\bbR$. Any cone generated by a subset of such a generator is called a \textit{face} of $\sigma$. A simplicial fan $\Phi$ is a set of simplicial cones such that for any $\sigma\in\Phi$, any face of $\sigma$ is also in $\Phi$, and for any cones $\sigma,\sigma'\in\Phi$, the intersection $\sigma\cap\sigma'$ is a common face of $\sigma$ and $\sigma'$. A version of the Shintani decomposition that we will use in this article is as follows. \begin{definition}\label{def: Shintani} A \textit{Shintani decomposition} is a simplicial fan $\Phi$ satisfying the following properties. \begin{enumerate} \item $\bbR_+^I\cup\{0\}=\coprod_{\sigma\in\Phi}\sigma^\circ$, where $\sigma^\circ$ is the relative interior of $\sigma$, i.e., the interior of $\sigma$ in the $\bbR$-linear span of $\sigma$. \item For any $\sigma\in\Phi$ and $\varepsilon\in\Delta$, we have $\varepsilon\sigma\in\Phi$. \item The quotient $\Delta\backslash\Phi$ is a finite set. \end{enumerate} \end{definition} We may obtain such a decomposition by slightly modifying the construction of Shintani \cite{Shi76}*{Theorem 1} (see also \cite{Hid93}*{\S2.7 Theorem 1}, \cite{Yam10}*{Theorem 4.1}).
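For instance, when $F$ is real quadratic, a Shintani decomposition may be written down explicitly (this is essentially Shintani's original example). The group $\Delta$ is then infinite cyclic; let $\varepsilon_0$ be a generator. The simplicial fan $\Phi$ consisting of $\{0\}$, the rays $\bbR_{\geq0}\varepsilon_0^n$ for $n\in\bbZ$, and the two-dimensional cones
\[
\sigma_n\coloneqq\{x\varepsilon_0^{n}+y\varepsilon_0^{n+1}\mid x,y\in\bbR_{\geq0}\},\qquad n\in\bbZ,
\]
is a Shintani decomposition: the relative interiors of these cones partition $\bbR_+^I\cup\{0\}$, multiplication by $\varepsilon_0$ maps $\sigma_n$ to $\sigma_{n+1}$, and $\Delta\backslash\Phi_2$ is represented by the single cone $\sigma_0$.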
Another construction was given by Ishida \cite{Ish92}*{p.84}. For any integer $q\geq0$, we denote by $\Phi_{q+1}$ the subset of $\Phi$ consisting of cones of dimension $q+1$. Note that by \cite{Yam10}*{Proposition 5.6}, $\Phi_g$ satisfies \begin{equation}\label{eq: upper closure} \bbR_+^I=\coprod_{\sigma\in\Phi_g}\wh\sigma. \end{equation} This gives the following result. \begin{proposition} Let $\xi\colon\cO_F\rightarrow\bbC^\times$ be a character of finite order, and $\Delta_\xi\subset\Delta$ its isotropy subgroup. If $\Phi$ is a Shintani decomposition, then we have \begin{equation}\label{eq: Shintani} \cL(\xi\Delta, s) = \sum_{\sigma\in\Delta_\xi\backslash\Phi_g}\zeta_\sigma(\xi,(s,\ldots,s)). \end{equation} \end{proposition} \begin{proof} By \eqref{eq: upper closure}, if $C$ is a set of representatives of $\Delta_\xi\backslash\Phi_g$, then $\coprod_{\sigma\in C}\wh\sigma$ is a fundamental domain for the action of $\Delta_\xi$ on $\bbR_+^I$. Our result follows from the definition of the Lerch zeta function and \eqref{eq: diagonal}. \end{proof} The expression \eqref{eq: Shintani} is non-canonical, since it depends on the choice of the Shintani decomposition. \section{Equivariant Coherent Cohomology}\label{section: equivariant} In this section, we will first give the definition of equivariant sheaves and equivariant cohomology of a scheme with an action of a group. As in \S\ref{section: introduction}, we let \begin{equation}\label{eq: algebraic torus} \bbT\coloneqq {\Hecke}om_\bbZ(\cO_F,\bbG_m) \end{equation} be the algebraic torus over $\bbZ$ defined by Katz \cite{Katz81}*{\S 1}, satisfying $\bbT(R)={\Hecke}om_\bbZ(\cO_F,R^\times)$ for any $\bbZ$-algebra $R$. We will then construct the \textit{equivariant \v Cech complex}, an explicit complex that may be used to describe the equivariant cohomology of $U\coloneqq\bbT\setminus\{1\}$ with respect to the action of $\Delta$.
\begin{remark} In order to consider the values of Hecke $L$-functions when the narrow class number of $F$ is greater than \textit{one} (cf. Remark \ref{rem: class number one}), it would be necessary to consider the algebraic tori \[ \bbT_\fra\coloneqq {\Hecke}om_\bbZ(\fra,\bbG_m) \] for general fractional ideals $\fra$ of $F$. \end{remark} We first review the basic facts concerning sheaves on schemes that are equivariant with respect to an action of a group. Let $G$ be a group with identity $e$. A $G$-scheme is a scheme $X$ equipped with a right action of $G$. We denote by $[u]\colon X \rightarrow X$ the action of $u\in G$, so that $[uv] = [v]\circ[u]$ holds for any $u,v\in G$. In what follows, we let $X$ be a $G$-scheme. \begin{definition}\label{def: equivariant structure} A \textit{$G$-equivariant structure} on an $\sO_X$-module $\sF$ is a family of isomorphisms \[ \iota_u\colon [u]^* \sF \xrightarrow\cong \sF \] for $u\in G$, such that $\iota_e = \id_\sF$ and the diagram \[\xymatrix{ [uv]^* \sF \ar[r]^-{\iota_{uv}}\ar@{=}[d] & \sF \\ [u]^*[v]^* \sF \ar[r]^-{[u]^*\iota_v} & [u]^*\sF \ar[u]_{\iota_u} }\] is commutative. We call $\sF$ equipped with a $G$-equivariant structure a \textit{$G$-equivariant sheaf}. \end{definition} Note that the structure sheaf $\sO_X$ itself is naturally a $G$-equivariant sheaf. For any $G$-equivariant sheaf $\sF$ on $X$, we define the equivariant global section by $\Gamma(X/G,\sF)\coloneqq {\Hecke}om_{\bbZ[G]}(\bbZ,\Gamma(X, \sF)) = \Gamma(X,\sF)^G$. Then the equivariant cohomology $H^m(X/G, -)$ is defined to be the $m$-th right derived functor of $\Gamma(X/G,-)$. Suppose we have a group homomorphism $\pi\colon G\rightarrow H$. For a $G$-scheme $X$ and an $H$-scheme $Y$, we say that a morphism $f\colon X\rightarrow Y$ of schemes is \textit{equivariant} with respect to $\pi$, if we have $f\circ[u]=[\pi(u)]\circ f$ for any $u\in G$.
If $\sF$ is an $H$-equivariant sheaf on $Y$ and $f$ is equivariant, then $f^*\sF$ is naturally a $G$-equivariant sheaf on $X$ with the equivariant structure given by $f^*\iota_{\pi(u)}\colon [u]^* (f^*\sF) = f^* ([\pi(u)]^* \sF) \rightarrow f^*\sF$ for any $u\in G$, and $f$ induces the pull-back homomorphism \begin{equation}\label{eq: pullback} f^*\colon H^m(Y/H,\sF) \rightarrow H^m(X/G, f^* \sF) \end{equation} on equivariant cohomology. We now consider our case of the algebraic torus $\bbT$. For any $\alpha \in \cO_F$, the morphism $\bbT(R) \rightarrow R^\times$ defined by mapping $\xi \in \bbT(R)$ to $\xi(\alpha) \in R^\times$ induces a morphism of group schemes $t^\alpha\colon\bbT \rightarrow \bbG_m$, which gives a rational function on $\bbT$. Then we have \[ \bbT=\Spec\bbZ[t^\alpha\mid\alpha\in\cO_F], \] where $t^\alpha$ and $t^{\alpha'}$ satisfy the relation $t^\alpha t^{\alpha'}=t^{\alpha+\alpha'}$ for any $\alpha,\alpha'\in\cO_F$. If we take a basis $\alpha_1,\ldots,\alpha_g$ of $\cO_F$ as a $\bbZ$-module, then we have \[ \Spec\bbZ[t^\alpha\mid\alpha\in\cO_F]=\Spec\bbZ[ t^{\pm\alpha_1},\ldots,t^{\pm \alpha_g}]\cong\bbG_m^g. \] The action of $\Delta$ on $\cO_F$ by multiplication induces an action of $\Delta$ on $\bbT$. Explicitly, the isomorphism $[\varepsilon]\colon\bbT\rightarrow\bbT$ for $\varepsilon\in\Delta$ is given by $t^\alpha\mapsto t^{\varepsilon\alpha}$ for any $\alpha\in \cO_F$. \begin{definition}\label{def: twist} For any $\bsk=(k_{\tau})\in\bbZ^I$, we define a $\Delta$-equivariant sheaf $\sO_\bbT(\bsk)$ on $\bbT$ as follows. As an $\sO_\bbT$-module we let $\sO_\bbT(\bsk)\coloneqq\sO_\bbT$. The $\Delta$-equivariant structure \[ \iota_\varepsilon\colon[\varepsilon]^*\sO_\bbT\cong\sO_\bbT \] is given by multiplication by $\varepsilon^{-\bsk}\coloneqq\prod_{\tau\in I}(\varepsilon^{\tau})^{-k_{\tau}}$ for any $\varepsilon\in\Delta$. Note that for $\bsk, \bsk'\in\bbZ^I$, we have $\sO_\bbT(\bsk)\otimes\sO_\bbT(\bsk')=\sO_\bbT(\bsk+\bsk')$.
For the case $\bsk=(k,\ldots,k)$, we have $\varepsilon^{-\bsk}=N(\varepsilon)^{-k}=1$ for any $\varepsilon\in\Delta$, hence $\sO_\bbT(\bsk)=\sO_\bbT$. \end{definition} The open subscheme $U\coloneqq\bbT\setminus\{1\}$ also carries a natural $\Delta$-scheme structure. We will now construct the \textit{equivariant \v Cech complex}, which may be used to express the equivariant cohomology of $U$ with coefficients in a $\Delta$-equivariant quasi-coherent $\sO_U$-module $\sF$. For any $\alpha\in\cO_{F}$, we let $U_\alpha\coloneqq\bbT\setminus \{t^{\alpha}=1\}$. Then any $\varepsilon\in\Delta$ induces an isomorphism $[\varepsilon]\colon U_{\varepsilon\alpha}\rightarrow U_{\alpha}$. We say that $\alpha\in\cO_{F+}$ is \textit{primitive} if $\alpha/N\not\in\cO_{F+}$ for every integer $N>1$. In what follows, we let $A\subset\cO_{F+}$ be the set of primitive elements of $\cO_{F+}$. Then \begin{enumerate} \item $\varepsilon A = A$ for any $\varepsilon\in\Delta$. \item The set $\frU\coloneqq\{U_\alpha\}_{\alpha\in A}$ gives an affine open covering of $U$. \end{enumerate} We note that for any simplicial cone $\sigma$ of dimension $m$, there exists a generator ${\boldsymbol{\alpha}}\in A^m$, unique up to permutation of the components. Let $q$ be an integer $\geq 0$. For any ${\boldsymbol{\alpha}}=(\alpha_0,\ldots,\alpha_{q})\in A^{q+1}$, we let $U_{\boldsymbol{\alpha}}\coloneqq U_{\alpha_0}\cap\cdots\cap U_{\alpha_{q}}$, and we denote by $j_{{\boldsymbol{\alpha}}}\colon U_{\boldsymbol{\alpha}}\hookrightarrow U$ the inclusion.
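As a quick consistency check in the simplest case: for $F=\bbQ$ we have $\cO_{F+}=\bbZ_{>0}$ and $\Delta=\{1\}$, and $\alpha\in\bbZ_{>0}$ is primitive precisely when $\alpha=1$, so that
\[
A=\{1\},\qquad \frU=\{U_1\}=\{U\}.
\]
Hence the complex constructed below is concentrated in degree $0$ (any tuple in $A^{q+1}$ with $q\geq1$ has repeated entries), and the equivariant cohomology reduces to $H^0(U/\Delta,\sF)=\Gamma(U,\sF)$, in accordance with the fact that for $F=\bbQ$ the class $\cG$ is simply the rational function $t/(1-t)\in\Gamma(U,\sO_{\bbG_m})$.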
We let \[ \sC^q(\frU,\sF)\coloneqq \prod_{{\boldsymbol{\alpha}}\in A^{q+1}}^\alt j_{{\boldsymbol{\alpha}}*}j_{\boldsymbol{\alpha}}^*\sF \] be the subsheaf of $ \prod_{{\boldsymbol{\alpha}}\in A^{q+1}} j_{{\boldsymbol{\alpha}}*}j_{\boldsymbol{\alpha}}^*\sF$ consisting of sections $\bss=(s_{\boldsymbol{\alpha}})$ such that $s_{\rho({\boldsymbol{\alpha}})} = \sgn(\rho)s_{\boldsymbol{\alpha}}$ for any $\rho\in\frS_{q+1}$ and $s_{\boldsymbol{\alpha}}=0$ if $\alpha_i=\alpha_j$ for some $i\neq j$. We define the differential $ d^q\colon\sC^q(\frU,\sF)\rightarrow\sC^{q+1}(\frU,\sF) $ to be the usual alternating sum \begin{equation}\label{eq: Cech differential} (d^qf)_{\alpha_0\cdots\alpha_{q+1}}\coloneqq\sum_{j=0}^{q+1}(-1)^j f_{\alpha_0\cdots\check\alpha_{j}\cdots\alpha_{q+1}}\big|_{U_{(\alpha_0,\ldots,\alpha_{q+1})}\cap V} \end{equation} for any section $(f_{\boldsymbol{\alpha}})$ of $\sC^q(\frU,\sF)$ over each open set $V\subset U$. If we let $\sF\hookrightarrow \sC^0(\frU,\sF)$ be the natural inclusion, then we have the exact sequence \[\xymatrix{ 0\ar[r]& \sF\ar[r]& \sC^0(\frU,\sF)\ar[r]^{d^0}&\sC^1(\frU,\sF)\ar[r]^{\quad d^1}&\cdots \ar[r]^{d^{q-1}\quad}&\sC^q(\frU,\sF)\ar[r]^{\quad d^q}&\cdots. }\] We next consider the action of $\Delta$. For any ${\boldsymbol{\alpha}}\in A^{q+1}$ and $\varepsilon\in\Delta$, we have a commutative diagram \[ \xymatrix{ U_{\varepsilon{\boldsymbol{\alpha}}}\ar@{^{(}->}[r]^{j_{\varepsilon{\boldsymbol{\alpha}}}}\ar[d]_{[\varepsilon]}^\cong&U\ar[d]^\cong_{[\varepsilon]}\\ U_{\boldsymbol{\alpha}}\ar@{^{(}->}[r]^{j_{\boldsymbol{\alpha}}} & U, } \] where $\varepsilon{\boldsymbol{\alpha}}\coloneqq(\varepsilon\alpha_0,\ldots,\varepsilon\alpha_{q})$.
Then we have an isomorphism \[ [\varepsilon]^* j_{{\boldsymbol{\alpha}}*}j_{\boldsymbol{\alpha}}^*\sF \cong j_{\varepsilon{\boldsymbol{\alpha}}*}j_{\varepsilon{\boldsymbol{\alpha}}}^*[\varepsilon]^*\sF \xrightarrow\cong j_{\varepsilon{\boldsymbol{\alpha}}*}j_{\varepsilon{\boldsymbol{\alpha}}}^*\sF, \] where the last isomorphism is induced by the $\Delta$-equivariant structure $\iota_\varepsilon\colon[\varepsilon]^*\sF\cong\sF$. This induces an isomorphism $ \iota_\varepsilon\colon[\varepsilon]^*\sC^q(\frU,\sF)\xrightarrow\cong\sC^q(\frU,\sF), $ which is compatible with the differential \eqref{eq: Cech differential}. Hence $\sC^\bullet(\frU,\sF)$ is a complex of $\Delta$-equivariant sheaves on $U$. \begin{proposition}\label{proposition: acyclic} The sheaf $\sC^q(\frU,\sF)$ is acyclic with respect to the functor $\Gamma(U/\Delta, -)$. \end{proposition} \begin{proof} By definition, the functor $\Gamma(U/\Delta, -)$ is the composite of the functors $\Gamma(U,-)$ and ${\Hecke}om_{\bbZ[\Delta]}(\bbZ,-)$. Standard facts concerning the composition of functors show that we have a spectral sequence \[ E_2^{a,b}=H^a\bigl(\Delta, H^b(U, \sC^q(\frU,\sF))\bigr)\Rightarrow H^{a+b}(U/\Delta, \sC^q(\frU,\sF)). \] We first prove that $H^b(U, \sC^q(\frU,\sF))=0$ if $b\neq0$. If we fix some total order on the set $A$, then we have \[ \sC^q(\frU,\sF)\cong\prod_{\alpha_0<\cdots<\alpha_q} j_{{\boldsymbol{\alpha}}*}j^*_{\boldsymbol{\alpha}}\sF, \] and each component $j_{{\boldsymbol{\alpha}}*}j^*_{\boldsymbol{\alpha}}\sF$ is acyclic for the functor $\Gamma(U,-)$ since $U_{\boldsymbol{\alpha}}$ is affine. Therefore $\sC^q(\frU,\sF)$ is acyclic by Lemma \ref{lem: 3.5} below.
It is now sufficient to prove that $H^a\bigl(\Delta, H^0(U, \sC^q(\frU,\sF))\bigr)=0$ for any integer $a\neq0$, where \[ H^0(U, \sC^q(\frU,\sF)) = \prod_{{\boldsymbol{\alpha}}\in A^{q+1}}^\alt \Gamma(U,j_{{\boldsymbol{\alpha}}*}j_{\boldsymbol{\alpha}}^*\sF) \cong\prod_{\alpha_0<\cdots<\alpha_q} \Gamma(U_{\boldsymbol{\alpha}},\sF). \] Assume that the total order on $A$ is preserved by the action of $\Delta$ (for example, we may take the order induced from $\bbR$ through an embedding $\tau\colon F\hookrightarrow\bbR$ for some $\tau\in I$). Let $B$ be the subset of $A^{q+1}$ consisting of elements ${\boldsymbol{\alpha}} = (\alpha_0,\ldots,\alpha_q)$ such that $\alpha_0<\cdots<\alpha_q$. Then the action of $\Delta$ on $B$ is free. We denote by $B_0$ a subset of $B$ representing the set $\Delta\backslash B$, so that any ${\boldsymbol{\alpha}}\in B$ may be written uniquely as ${\boldsymbol{\alpha}}=\varepsilon{\boldsymbol{\alpha}}_0$ for some $\varepsilon\in\Delta$ and ${\boldsymbol{\alpha}}_0\in B_0$. We let \[ M\coloneqq\prod_{{\boldsymbol{\alpha}}\in B_0} \Gamma(U_{{\boldsymbol{\alpha}}}, \sF), \] and we let $ {\Hecke}om_\bbZ(\bbZ[\Delta], M) $ be the coinduced module of $M$, with the action of $\Delta$ given for any $\varphi\in{\Hecke}om_\bbZ(\bbZ[\Delta], M)$ by $\varepsilon\varphi(u)=\varphi(u\varepsilon)$ for any $u\in\bbZ[\Delta]$ and $\varepsilon\in\Delta$. Then we have a $\bbZ[\Delta]$-linear isomorphism \begin{equation}\label{eq: isomorphism vulcan} H^0(U, \sC^q(\frU,\sF)) \xrightarrow\cong {\Hecke}om_\bbZ(\bbZ[\Delta], M) \end{equation} given by mapping any $(s_{\boldsymbol{\alpha}})\in H^0(U, \sC^q(\frU,\sF))$ to the $\bbZ$-linear homomorphism \[ \varphi_{(s_{\boldsymbol{\alpha}})}(\delta)\coloneqq\Bigl(\iota_\delta\bigl([\delta]^*s_{\delta^{-1}{\boldsymbol{\alpha}}_0}\bigr)\Bigr)\in M \] for any $\delta\in\Delta$. The compatibility of \eqref{eq: isomorphism vulcan} with the action of $\Delta$ is seen as follows.
By definition, the action of $\varepsilon\in\Delta$ on $(s_{\boldsymbol{\alpha}})\in H^0(U,\sC^q(\frU,\sF))$ is given by $\varepsilon\bigl((s_{\boldsymbol{\alpha}})\bigr) =\bigl(\iota_\varepsilon([\varepsilon]^* s_{\varepsilon^{-1}{\boldsymbol{\alpha}}})\bigr)$. Hence noting that \[ \iota_\delta\circ[\delta]^*\iota_\varepsilon= \iota_{\delta\varepsilon}\colon\Gamma(U_{\boldsymbol{\alpha}},[\delta\varepsilon]^*\sF) \rightarrow\Gamma(U_{\boldsymbol{\alpha}},\sF) \] and $ [\delta]^* \circ \iota_\varepsilon= [\delta]^*\iota_\varepsilon \circ [\delta]^*\colon\Gamma(U_{\boldsymbol{\alpha}},[\varepsilon]^*\sF) \rightarrow\Gamma(U_{\boldsymbol{\alpha}},[\delta]^*\sF) $ for any $\delta\in\Delta$, we have \begin{align*} \varphi_{\varepsilon(s_{\boldsymbol{\alpha}})}(\delta)&= \Bigl(\iota_\delta\bigl([\delta]^*(\iota_\varepsilon([\varepsilon]^*s_{\varepsilon^{-1}\delta^{-1}{\boldsymbol{\alpha}}_0}))\bigr)\Bigr) =\Bigl((\iota_\delta\circ[\delta]^*\iota_\varepsilon)\bigl([\delta\varepsilon]^*s_{\varepsilon^{-1}\delta^{-1}{\boldsymbol{\alpha}}_0}\bigr)\Bigr)\\ &=\bigl(\iota_{\delta\varepsilon}([\delta\varepsilon]^*s_{\varepsilon^{-1}\delta^{-1}{\boldsymbol{\alpha}}_0})\bigr) =\varphi_{(s_{\boldsymbol{\alpha}})}(\delta\varepsilon) \end{align*} as desired. The fact that \eqref{eq: isomorphism vulcan} is an isomorphism follows from the fact that $B_0$ is a set of representatives of $\Delta\backslash B$. By \eqref{eq: isomorphism vulcan} and Shapiro's lemma, we have $H^a(\Delta, H^0(U,\sC^q(\frU,\sF))) \cong H^a(\{1\},M)=0$ for $a\neq0$ as desired. \end{proof} The following Lemma \ref{lem: 3.5} was used in the proof of Proposition \ref{proposition: acyclic}. \begin{lemma}\label{lem: 3.5} Let $X$ be a scheme and let $(\sF_\lambda)_{\lambda\in\Lambda}$ be a family of quasi-coherent sheaves on $X$. Then for any integer $m\geq0$, we have \[ H^m\biggl(X, \prod_{\lambda\in\Lambda}\sF_\lambda\biggr) \cong\prod_{\lambda\in\Lambda}H^m(X, \sF_\lambda). \] \end{lemma} \begin{proof} Take an injective resolution $0\rightarrow\sF_\lambda\rightarrow I^\bullet_\lambda$ for each $\lambda\in\Lambda$. We will prove that $0\rightarrow\prod_{\lambda\in\Lambda}\sF_\lambda\rightarrow \prod_{\lambda\in\Lambda}I^\bullet_\lambda$ is an injective resolution. Since the product of injective objects is injective, it is sufficient to prove that $0 \rightarrow \prod_{\lambda\in\Lambda}\sF_\lambda\rightarrow \prod_{\lambda\in\Lambda}I^\bullet_\lambda$ is exact. For any affine open set $V$ of $X$, by affine vanishing, the sequence of global sections $0 \rightarrow \sF_\lambda(V) \rightarrow I^\bullet_\lambda(V)$ is exact, hence the product \begin{equation}\label{eq: orange} 0 \rightarrow\prod_{\lambda\in\Lambda}\sF_\lambda(V) \rightarrow \prod_{\lambda\in\Lambda} I^\bullet_\lambda(V) \end{equation} is also exact. For any $x\in X$, if we take the direct limit of \eqref{eq: orange} with respect to open affine neighborhoods of $x$, then we obtain the exact sequence \[ 0 \rightarrow \Biggl(\prod_{\lambda\in\Lambda}\sF_\lambda\Biggr)_x\rightarrow \Biggl( \prod_{\lambda\in\Lambda}I^\bullet_\lambda\Biggr)_x. \] This shows that $0 \rightarrow \prod_{\lambda\in\Lambda}\sF_\lambda\rightarrow \prod_{\lambda\in\Lambda}I^\bullet_\lambda$ is exact as desired. \end{proof} Proposition \ref{proposition: acyclic} gives the following corollary. \begin{corollary}\label{cor: description} We let $C^\bullet(\frU/\Delta,\sF)\coloneqq\Gamma(U/\Delta,\sC^\bullet(\frU,\sF))$. Then for any integer $m\geq0$, the equivariant cohomology $H^m(U/\Delta,\sF)$ is given as \[ H^m(U/\Delta,\sF) = H^m(C^\bullet(\frU/\Delta,\sF)). \] \end{corollary} By definition, for any integer $q\in\bbZ$, we have \[ C^q(\frU/\Delta,\sF)= \biggl(\prod^\alt_{{\boldsymbol{\alpha}}\in A^{q+1}}\Gamma(U_{\boldsymbol{\alpha}},\sF)\biggr)^\Delta.
\] \section{Shintani Generating Class}\label{section: Shintani Class} We let $\bbT$ be the algebraic torus of \eqref{eq: algebraic torus}, and let $U=\bbT\setminus\{1\}$. In this section, we will use the description of equivariant cohomology of Corollary \ref{cor: description} to define the Shintani generating class as a class in $H^{g-1}(U/\Delta,\sO_\bbT)$. We will then consider the action of the differential operators $\partial_\tau$ on this class. We first interpret the generating functions $\sG_\sigma(z)$ of Definition \ref{def: generating} as rational functions on $\bbT$. Let $\frD^{-1}\coloneqq\{u\in F\mid\Tr_{F/\bbQ}(u\cO_F)\subset\bbZ\}$ be the inverse different of $F$. Then there exists an isomorphism \begin{equation}\label{eq: uniformization} (F\otimes\bbC)/\frD^{-1} \xrightarrow\cong\bbT(\bbC) = {\Hecke}om_\bbZ(\cO_F,\bbC^\times), \qquad z \mapsto \xi_z \end{equation} given by mapping any $z\in F\otimes\bbC$ to the character $\xi_z(\alpha)\coloneqq e^{2\pi i \Tr(\alpha z)}$ in ${\Hecke}om_\bbZ(\cO_F,\bbC^\times)$. The function $t^\alpha$ on $\bbT(\bbC)$ corresponds through the isomorphism \eqref{eq: uniformization} to the function $ e^{2\pi i \Tr(\alpha z)} $ for any $\alpha\in\cO_F$. Thus the following holds. \begin{lemma}\label{lem: correspondence} For ${\boldsymbol{\alpha}}=(\alpha_1,\ldots,\alpha_g)\in A^g$ and $\sigma\coloneqq\sigma_{\boldsymbol{\alpha}}$, consider the rational function \[ \cG_\sigma(t)\coloneqq\frac{\sum_{\alpha\in \wh P_{\boldsymbol{\alpha}}\cap\cO_F}t^\alpha}{{\bigl(1-t^{\alpha_1}\bigr)\cdots \bigl(1-t^{\alpha_g}\bigr)}} \] on $\bbT$, where $P_{\boldsymbol{\alpha}}$ is again the parallelepiped spanned by $\alpha_1,\ldots,\alpha_g$. Then $\cG_\sigma(t)$ corresponds to the function $\sG_{\sigma}(z)$ of Definition \ref{def: generating} through the uniformization \eqref{eq: uniformization}. 
Note that by definition, if we let $B\coloneqq\bbZ[t^{\alpha}\mid\alpha\in\cO_{F+}]$, then we have \begin{equation}\label{eq: B} \cG_\sigma(t)\in B_{\boldsymbol{\alpha}}\coloneqq B\Bigl[\frac{1}{1-t^{\alpha_1}},\ldots,\frac{1}{1-t^{\alpha_g}}\Bigr]. \end{equation} \end{lemma} Again, we fix an ordering $I=\{\tau_1,\ldots,\tau_g\}$. For any ${\boldsymbol{\alpha}}=(\alpha_1,\ldots,\alpha_g)\in\cO_{F+}^g$, let $\bigl(\alpha_j^{\tau_i}\bigr)$ be the matrix in $M_g(\bbR)$ whose $(i,j)$-component is $\alpha_j^{\tau_i}$. We let $\sgn({\boldsymbol{\alpha}})\in\{0,\pm1\}$ be the signature of $\det\bigl(\alpha_j^{\tau_i}\bigr)$. We define the \textit{Shintani generating class} $\cG$ as follows. \begin{proposition}\label{prop: Shintani generating class} For any ${\boldsymbol{\alpha}}=(\alpha_1,\ldots,\alpha_g)\in A^{g}$, we let \[ \cG_{\boldsymbol{\alpha}}\coloneqq\sgn({\boldsymbol{\alpha}})\cG_{\sigma_{\boldsymbol{\alpha}}}(t) \in\Gamma(U_{\boldsymbol{\alpha}},\sO_\bbT). \] Then we have $ (\cG_{\boldsymbol{\alpha}}) \in C^{g-1}(A,\sO_\bbT). $ Moreover, $(\cG_{\boldsymbol{\alpha}})$ satisfies the cocycle condition $d^{g-1}(\cG_{\boldsymbol{\alpha}})=0$, hence defines a class \[ \cG\coloneqq [\cG_{\boldsymbol{\alpha}}]\in H^{g-1}(U/\Delta,\sO_\bbT). \] We call this class the Shintani generating class. \end{proposition} \begin{proof} By construction, $(\cG_{\boldsymbol{\alpha}})$ defines an element in $\Gamma(U, \sC^{g-1}(\frU,\sO_\bbT))=\prod^\alt_{{\boldsymbol{\alpha}}\in A^g} \Gamma(U_{\boldsymbol{\alpha}},\sO_\bbT)$. Since $[\varepsilon]^*\cG_{\boldsymbol{\alpha}}=\cG_{\varepsilon{\boldsymbol{\alpha}}}$ for any $\varepsilon\in\Delta$, we have \[ (\cG_{\boldsymbol{\alpha}})\in \Gamma\bigl(U, \sC^{g-1}(\frU,\sO_\bbT)\bigr)^{\Delta}= C^{g-1}(\frU/\Delta,\sO_\bbT). 
\] To prove the cocycle condition $d^{g-1}(\cG_{\boldsymbol{\alpha}})=0$, it is sufficient to check that \begin{equation}\label{eq: cocycle condition} \sum_{j=0}^g(-1)^j\cG_{(\alpha_0,\ldots,\check\alpha_j,\ldots,\alpha_g)}=0 \end{equation} for any $\alpha_0,\ldots,\alpha_g\in A$. By definition, the rational function $\cG_{\sigma_{\boldsymbol{\alpha}}}(t)$ maps to the formal power series \[ \cG_{\sigma_{\boldsymbol{\alpha}}}(t) =\sum_{\alpha\in\wh\sigma_{\boldsymbol{\alpha}}\cap\cO_F}t^\alpha \] by taking the formal completion $B_{\boldsymbol{\alpha}}\hookrightarrow\bbZ\llbracket t^{\alpha_1},\ldots,t^{\alpha_g}\rrbracket$, where $B_{\boldsymbol{\alpha}}$ is the ring defined in \eqref{eq: B}. Since the map taking the formal completion is injective, it is sufficient to check \eqref{eq: cocycle condition} for the associated formal power series. By \cite{Yam10}*{Proposition 6.2}, we have \[ \sum_{j=0}^g(-1)^j\sgn(\alpha_0,\ldots,\check\alpha_j,\ldots,\alpha_g) \boldsymbol{1}_{\wh\sigma_{(\alpha_0,\ldots,\check\alpha_j,\ldots,\alpha_g)}}\equiv0 \] as a function on $\bbR_+^I$, where $\boldsymbol{1}_R$ is the characteristic function of $R\subset\bbR^I_+$ satisfying $\boldsymbol{1}_R(x)=1$ if $x\in R$ and $\boldsymbol{1}_R(x)=0$ if $x\not\in R$. Our assertion now follows by examining the formal power series expansion of $\cG_{(\alpha_0,\ldots,\check\alpha_j,\ldots,\alpha_g)}$. \end{proof} We will next define differential operators $\partial_\tau$ for $\tau\in I$ on equivariant cohomology. Since $t^\alpha = e^{2\pi i \Tr(\alpha z)}$ through \eqref{eq: uniformization} for any $\alpha\in\cO_F$, we have \begin{equation}\label{eq: relation} \frac{dt^\alpha}{t^\alpha} = \sum_{\tau\in I} 2\pi i \alpha^\tau dz_\tau. \end{equation} Let $\alpha_1,\ldots,\alpha_g$ be a basis of $\cO_F$. For any $\tau\in I$, we let $\partial_\tau $ be the differential operator \[ \partial_\tau \coloneqq\sum_{j=1}^g \alpha_j^\tau t^{\alpha_j}\frac{\partial}{\partial t^{\alpha_j}}.
\] By \eqref{eq: relation}, we see that $\partial_\tau $ corresponds to the differential operator $\frac{1}{2\pi i}\frac{\partial}{\partial z_\tau}$ through the uniformization \eqref{eq: uniformization}, and hence is independent of the choice of the basis $\alpha_1,\ldots,\alpha_g$. By Theorem \ref{theorem: Shintani} and Lemma \ref{lem: correspondence}, we have the following result. \begin{proposition}\label{prop: Shintani} Let ${\boldsymbol{\alpha}}=(\alpha_1,\ldots,\alpha_g)\in A^g$ and $\sigma=\sigma_{\boldsymbol{\alpha}}$. For any $\bsk=(k_\tau)\in\bbN^I$ and $\partial^\bsk\coloneqq\prod_{\tau\in I}\partial_\tau^{k_\tau}$, we have \[ \partial^\bsk\cG_\sigma(\xi) = \zeta_\sigma(\xi,-\bsk) \] for any torsion point $\xi\in U_{{\boldsymbol{\alpha}}}$. \end{proposition} The differential operator $\partial_\tau$ gives a morphism of abelian sheaves \[ \partial_\tau\colon\sO_{\bbT_{F^\tau}}(\bsk)\rightarrow\sO_{\bbT_{F^\tau}}(\bsk-1_\tau) \] compatible with the action of $\Delta$ for any $\bsk\in\bbZ^I$, where $\bbT_{F^\tau}\coloneqq\bbT\otimes F^\tau$ is the base change of $\bbT$ to $F^\tau$. This induces a homomorphism \[ \partial_\tau\colon H^m(U_{F^\tau}/\Delta,\sO_{\bbT_{F^\tau}}(\bsk))\rightarrow H^m(U_{F^\tau}/\Delta,\sO_{\bbT_{F^\tau}}(\bsk-1_\tau)) \] on equivariant cohomology. \begin{lemma}\label{lem: differential} Let $\wt F$ be the composite of $F^\tau$ for all $\tau\in I$. The operators $\partial_\tau$ for $\tau\in I$, considered over $\wt F$, commute with one another. Moreover, the composite \[ \partial\coloneqq\prod_{\tau\in I}\partial_\tau\colon \sO_{\bbT_{\wt F}} \rightarrow \sO_{\bbT_{\wt F}}(1,\ldots,1)=\sO_{\bbT_{\wt F}} \] is defined over $\bbQ$, that is, it is a base change to $\wt F$ of a morphism of abelian sheaves $ \partial\colon \sO_\bbT\rightarrow\sO_\bbT. $ In particular, $\partial$ induces a homomorphism \begin{equation}\label{eq: differential} \partial\colon H^m(U/\Delta,\sO_\bbT) \rightarrow H^m(U/\Delta,\sO_\bbT).
\end{equation} \end{lemma} \begin{proof} The commutativity is clear from the definition. Since the group $\Gal(\wt F/\bbQ)$ permutes the operators $\partial_\tau$, the operator $\partial$ is invariant under this action. This gives our assertion. \end{proof} Our main result, which we prove in \S \ref{section: specialization}, concerns the specialization of the classes \begin{equation}\label{eq: main} \partial^k\cG \in H^{g-1}\bigl(U/\Delta,\sO_\bbT\bigr) \end{equation} for $k\in\bbN$ at nontrivial torsion points of $\bbT$. \section{Specialization to Torsion Points}\label{section: specialization} For any nontrivial torsion point $\xi$ of $\bbT$, let $\Delta_\xi\subset\Delta$ be the isotropy subgroup of $\xi$. Then we may view $\xi\coloneqq\Spec\bbQ(\xi)$ as a $\Delta_\xi$-scheme with a trivial action of $\Delta_\xi$. The natural inclusion $\xi\rightarrow U$ for $U\coloneqq\bbT\setminus\{1\}$ is compatible with the inclusion $\Delta_\xi\subset\Delta$, hence the pullback \eqref{eq: pullback} induces the specialization map \[ \xi^*\colon H^m(U/\Delta,\sO_\bbT)\rightarrow H^m(\xi/\Delta_\xi,\sO_\xi). \] The purpose of this section is to prove our main theorem, given as follows. \begin{theorem}\label{theorem: main} Let $\xi$ be a nontrivial torsion point of $\bbT$, and let $k$ be an integer $\geq0$. If we let $\cG$ be the Shintani generating class defined in Proposition \ref{prop: Shintani generating class}, and if we let $\partial^k\cG(\xi)\in H^{g-1}(\xi/\Delta_\xi,\sO_\xi)$ be the image under the specialization map $\xi^*$ of the class $\partial^k\cG$ defined in \eqref{eq: main}, then we have \[ \partial^k\cG(\xi) = \cL(\xi\Delta,-k) \] through the isomorphism $H^{g-1}(\xi/\Delta_\xi,\sO_\xi)\cong\bbQ(\xi)$ given in Proposition \ref{prop: ecc} below. \end{theorem} We will prove Theorem \ref{theorem: main} at the end of this section. The specialization map can be expressed explicitly in terms of cocycles as follows.
We let $V_\alpha\coloneqq U_\alpha\cap \xi$ for any $\alpha\in A$. Then $\frV\coloneqq\{V_\alpha\}_{\alpha\in A}$ is an affine open covering of $\xi$. For any integer $q\geq 0$ and ${\boldsymbol{\alpha}}=(\alpha_0,\ldots,\alpha_{q})\in A^{q+1}$, we let $V_{\boldsymbol{\alpha}}\coloneqq V_{\alpha_0}\cap\cdots\cap V_{\alpha_{q}}$ and \begin{equation}\label{eq: alternating} C^q\bigl(\frV/\Delta_\xi,\sO_{\xi}\bigr)\coloneqq \biggl(\prod_{{\boldsymbol{\alpha}}\in A^{q+1}}^\alt\Gamma(V_{\boldsymbol{\alpha}},\sO_{\xi})\biggr)^{\Delta_\xi}. \end{equation} Here, note that $\Gamma(V_{\boldsymbol{\alpha}},\sO_\xi)=\bbQ(\xi)$ if $V_{\boldsymbol{\alpha}}\neq\emptyset$ and $\Gamma(V_{\boldsymbol{\alpha}},\sO_\xi)=\{0\}$ otherwise. The same argument as that of Corollary \ref{cor: description} shows that we have \begin{equation}\label{eq: similarly} H^m(\xi/\Delta_\xi,\sO_\xi)\cong H^m\bigl(C^\bullet\bigl(\frV/\Delta_\xi,\sO_{\xi}\bigr)\bigr). \end{equation} We let $A_\xi$ be the subset of elements $\alpha\in A$ satisfying $\xi\in U_\alpha$. This is equivalent to the condition that $\xi(\alpha)\neq 1$. We will next prove in Lemma \ref{lemma: 5.2} that the cochain complex $C^\bullet\bigl(\frV/\Delta_\xi,\sO_{\xi}\bigr)$ of \eqref{eq: alternating} is isomorphic to the dual of the chain complex $C_\bullet(A_\xi)$ defined as follows. For any integer $q\geq0$, we let \[ C_q(A_\xi)\coloneqq\bigoplus^\alt_{{\boldsymbol{\alpha}}\in A_\xi^{q+1}}\bbZ{\boldsymbol{\alpha}} \] be the quotient of $\bigoplus_{{\boldsymbol{\alpha}}\in A_\xi^{q+1}}\bbZ{\boldsymbol{\alpha}}$ by the submodule generated by \[ \{\rho({\boldsymbol{\alpha}})-\sgn(\rho){\boldsymbol{\alpha}}\mid {\boldsymbol{\alpha}}\in A_\xi^{q+1}, \rho\in\frS_{q+1}\} \cup \{ {\boldsymbol{\alpha}}=(\alpha_0,\ldots,\alpha_q) \mid \text{$\alpha_i=\alpha_j$ for some $i\neq j$}\}. \] We denote by $\langle{\boldsymbol{\alpha}}\rangle$ the class represented by ${\boldsymbol{\alpha}}$ in $C_q(A_\xi)$. 
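As an illustrative aside (not part of the original argument), the alternating chain modules $C_q(A_\xi)$ just defined, together with the standard simplicial boundary map $d\langle\alpha_0,\ldots,\alpha_q\rangle=\sum_j(-1)^j\langle\alpha_0,\ldots,\check\alpha_j,\ldots,\alpha_q\rangle$, can be modelled concretely in a few lines of Python, with integers standing in for elements of $A_\xi$; the computation below simply confirms the familiar identity $d_{q-1}\circ d_q=0$ on a sample chain.

```python
# Illustrative sketch only (not from the paper): model alternating chains as
# dicts mapping strictly increasing tuples of "symbols" (integers standing in
# for elements of A_xi) to integer coefficients.  A tuple with a repeated
# entry represents the zero class; reordering contributes the sign of the
# permutation, exactly as in the alternating quotient defining C_q(A_xi).

def normalize(t):
    """Return (sorted tuple, sign of sorting permutation), or (None, 0) if repeated."""
    if len(set(t)) != len(t):
        return None, 0
    inversions = sum(1 for i in range(len(t)) for j in range(i + 1, len(t)) if t[i] > t[j])
    return tuple(sorted(t)), (-1) ** inversions

def boundary(chain):
    """Standard simplicial boundary d<a_0,...,a_q> = sum_j (-1)^j <..., a_j omitted, ...>."""
    out = {}
    for t, c in chain.items():
        for j in range(len(t)):
            face, sgn = normalize(t[:j] + t[j + 1:])
            if sgn:
                out[face] = out.get(face, 0) + (-1) ** j * sgn * c
    return {k: v for k, v in out.items() if v}

x = {(0, 1, 2, 3): 1, (1, 2, 4, 5): -2}  # a sample 3-chain
print(boundary(boundary(x)))             # {} : d o d = 0
```

The pairwise cancellation visible in this toy check is the same mechanism that makes $C_\bullet(A_\xi)$ a complex.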
We see that $C_q(A_\xi)$ has a natural action of $\Delta_\xi$ and is a free $\bbZ[\Delta_\xi]$-module. In fact, a basis of $C_q(A_\xi)$ may be constructed in a similar way to the construction of $B_0$ in the proof of Proposition \ref{proposition: acyclic}. Then $C_\bullet(A_\xi)$ is a complex of $\bbZ[\Delta_\xi]$-modules with respect to the standard differential operator $d_q\colon C_q(A_\xi)\rightarrow C_{q-1}(A_\xi)$ given by \[ d_q(\langle\alpha_0,\ldots,\alpha_{q}\rangle)\coloneqq\sum_{j=0}^{q} (-1)^{j}\langle\alpha_0,\ldots,\check\alpha_{j},\ldots,\alpha_{q}\rangle \] for any ${\boldsymbol{\alpha}}=(\alpha_0,\ldots,\alpha_{q})\in A_\xi^{q+1}$. If we let $d_0\colon C_0(A_\xi) \rightarrow \bbZ$ be the homomorphism defined by $d_0(\langle \alpha\rangle)\coloneqq 1$ for any $\alpha\in A_\xi$, then $C_\bullet(A_\xi)$ is a free resolution of $\bbZ$ with trivial $\Delta_\xi$-action. We have the following. \begin{lemma}\label{lemma: 5.2} There exists a natural isomorphism of complexes \[ C^\bullet(\frV/\Delta_\xi,\sO_{\xi})\xrightarrow\cong {\Hecke}om_{\Delta_\xi}(C_\bullet(A_\xi),\bbQ(\xi)). \] \end{lemma} \begin{proof} The natural isomorphism \[ \prod_{{\boldsymbol{\alpha}}\in A^{q+1}}\Gamma(V_{\boldsymbol{\alpha}},\sO_\xi)=\prod_{{\boldsymbol{\alpha}}\in A^{q+1}_\xi}\bbQ(\xi) \cong{\Hecke}om_\bbZ\left(\bigoplus_{{\boldsymbol{\alpha}}\in A_\xi^{q+1}}\bbZ{\boldsymbol{\alpha}},\bbQ(\xi)\right) \] induces an isomorphism between the submodules \[ C^q(\frV/\Delta_\xi,\sO_\xi)=\left(\prod^\alt_{{\boldsymbol{\alpha}}\in A^{q+1}}\Gamma(V_{\boldsymbol{\alpha}},\sO_\xi) \right)^{\Delta_\xi} \subset \prod_{{\boldsymbol{\alpha}}\in A^{q+1}}\Gamma(V_{\boldsymbol{\alpha}},\sO_\xi) \] and \[ {\Hecke}om_{\Delta_\xi}(C_q(A_\xi),\bbQ(\xi))\subset{\Hecke}om_\bbZ \left(\bigoplus_{{\boldsymbol{\alpha}}\in A_\xi^{q+1}}\bbZ{\boldsymbol{\alpha}},\bbQ(\xi)\right). \] Moreover, this isomorphism is compatible with the differential. 
\end{proof} We will next use a Shintani decomposition (see Definition \ref{def: Shintani}) to construct a complex which is quasi-isomorphic to the complex $C_\bullet(A_\xi)$. \begin{lemma}\label{lem: inclusion} Let $\xi$ be as above. There exists a Shintani decomposition $\Phi$ such that any $\sigma\in\Phi$ is of the form $\sigma_{\boldsymbol{\alpha}}=\sigma$ for some ${\boldsymbol{\alpha}}\in A_\xi^{q+1}$. \end{lemma} \begin{proof} Let $\Phi$ be a Shintani decomposition. We will deform $\Phi$ to construct a Shintani decomposition satisfying our assertion. Let $\Lambda$ be a finite subset of $A$ such that $\{\sigma_\alpha \mid \alpha\in \Lambda\}$ represents the quotient set $\Delta_\xi\backslash\Phi_1$. If $\xi(\alpha)\neq 1$ for any $\alpha\in\Lambda$, then $\Phi$ satisfies our assertion since $\alpha\in A_\xi$ if and only if $\xi(\alpha)\neq 1$. Suppose that there exists $\alpha\in \Lambda$ such that $\xi(\alpha)=1$. Since $\xi$ is a nontrivial character on $\cO_F$, there exists $\beta\in\cO_{F+}$ such that $\xi(\beta)\neq 1$. Then for any integer $N$, we have $\xi(N\alpha+\beta)\neq 1$. Let $\Phi'$ be the set of cones obtained by deforming $\sigma=\sigma_\alpha$ to $\sigma'\coloneqq\sigma_{N\alpha+\beta}$ and $\varepsilon\sigma$ to $\varepsilon\sigma'$ for any $\varepsilon\in\Delta_\xi$. By taking $N$ sufficiently large, the amount of deformation can be made arbitrarily small so that $\Phi'$ remains a fan. By repeating this process, we obtain a Shintani decomposition satisfying the desired condition. \end{proof} In what follows, we fix a Shintani decomposition $\Phi$ satisfying the condition of Lemma \ref{lem: inclusion}. Let $N\colon\bbR_+^I\rightarrow\bbR_+$ be the norm map defined by $N((a_\tau))\coloneqq\prod_{\tau\in I}a_\tau$, and we let \[ \bbR_1^I\coloneqq\{(a_\tau)\in\bbR_+^I\mid N((a_\tau))=1\} \] be the subset of $\bbR^I_+$ of norm one.
For any $\sigma\in\Phi_{q+1}$, the intersection $\sigma\cap\bbR_1^I$ is a subset of $\bbR_1^I$ which is homeomorphic to a simplex of dimension $q$, and the set $\{ \sigma\cap \bbR_1^I \mid \sigma\in\Phi_+\}$ for $\Phi_+\coloneqq\bigcup_{q\geq0}\Phi_{q+1}$ gives a simplicial decomposition of the topological space $\bbR_1^I$. In what follows, for any $\sigma\in\Phi_{q+1}$, we denote by $\langle\sigma\rangle$ the class $\sgn({\boldsymbol{\alpha}})\langle{\boldsymbol{\alpha}}\rangle$ in $C_q(A_\xi)$, where ${\boldsymbol{\alpha}}\in A_\xi^{q+1}$ is a generator of $\sigma$. Recall that such a generator ${\boldsymbol{\alpha}}$ is uniquely determined up to permutation from $\sigma$. We then have the following. \begin{lemma}\label{lemma: 5.4} For any integer $q\geq0$, we let $C_q(\Phi)$ be the $\bbZ[\Delta_\xi]$-submodule of $C_q(A_\xi)$ generated by $\langle\sigma\rangle$ for all $\sigma\in\Phi_{q+1}$. Then $C_\bullet(\Phi)$ is a subcomplex of $C_\bullet(A_\xi)$ which also gives a free resolution of $\bbZ$ as a $\bbZ[\Delta_\xi]$-module. In particular, the natural inclusion induces a quasi-isomorphism of complexes \[ C_\bullet(\Phi)\xrightarrow{\qis} C_\bullet(A_\xi) \] compatible with the action of $\Delta_\xi$. \end{lemma} \begin{proof} Note that $C_q(\Phi)$ for any integer $q\geq0$ is a free $\bbZ[\Delta_\xi]$-module generated by representatives of the quotient $\Delta_\xi\backslash \Phi_{q+1}$. By construction, $C_\bullet(\Phi)$ can be identified with the chain complex associated to the simplicial decomposition $\{\sigma\cap\bbR^I_1\mid \sigma\in\Phi_+\}$ of the topological space $\bbR^I_1$, hence we see that the complex $C_\bullet(\Phi)$ is exact and gives a free resolution of $\bbZ$ as a $\bbZ[\Delta_\xi]$-module. Our assertion follows from the fact that $C_\bullet(A_\xi)$ also gives a free resolution of $\bbZ$ as a $\bbZ[\Delta_\xi]$-module. \end{proof} We again fix a numbering of elements in $I$ so that $I=\{\tau_1,\ldots,\tau_g\}$. 
We let \[ L\colon\bbR_+^I\rightarrow\bbR^g \] be the homeomorphism defined by $(x_\tau)\mapsto(\log x_{\tau_i})$. If we let $W\coloneqq\{ (y_{\tau_i})\in\bbR^g \mid\sum_{i=1}^gy_{\tau_i}=0\}$, then $W$ is an $\bbR$-linear subspace of $\bbR^g$ of dimension $g-1$, and $L$ gives a homeomorphism $\bbR_1^I\cong W\cong \bbR^{g-1}$. For $\Delta_\xi\subset F$, the Dirichlet unit theorem (see for example \cite{Sam70}*{Theorem 1 p.61}) shows that the discrete subset $L(\Delta_\xi)\subset W$ is a free $\bbZ$-module of rank $g-1$, hence we have \[ \cT_\xi\coloneqq\Delta_\xi\backslash\bbR_1^I\cong \bbR^{g-1}/\bbZ^{g-1}. \] We consider the coinvariant \[ C_q(\Delta_\xi\backslash\Phi)\coloneqq C_q(\Phi)_{\Delta_\xi} \] of $C_q(\Phi)$ with respect to the action of $\Delta_\xi$, that is, the quotient of $C_q(\Phi)$ by the subgroup generated by $\langle\sigma\rangle-\langle\varepsilon\sigma\rangle$ for $\sigma\in\Phi_{q+1}$ and $\varepsilon\in\Delta_\xi$. For any $\sigma\in\Phi_{q+1}$, we denote by $\ol\sigma$ the image of $\sigma$ in the quotient $\Delta_\xi\backslash\Phi_{q+1}$, and we denote by $\langle\ol\sigma\rangle$ the image of $\langle\sigma\rangle$ in $C_q(\Delta_\xi\backslash\Phi)$, which depends only on the class $\ol\sigma\in \Delta_\xi\backslash\Phi_{q+1}$. Then the set $\{ \Delta_\xi \backslash(\Delta_\xi\sigma\cap\bbR^I_1) \mid \ol\sigma\in \Delta_\xi\backslash\Phi_+\}$ of subsets of $\cT_\xi$ gives a simplicial decomposition of $\cT_\xi$ and $C_\bullet(\Delta_\xi\backslash \Phi)$ may be identified with the associated chain complex. Hence we have \begin{align*} H_m(C_\bullet(\Delta_\xi\backslash\Phi))&= H_m(\cT_\xi,\bbZ),& H^m\Bigl({\Hecke}om_\bbZ\bigl(C_\bullet(\Delta_{\xi}\backslash\Phi), \bbZ\bigr)\Bigr)&= H^m(\cT_\xi,\bbZ). 
\end{align*} Since $\cT_\xi\cong\bbR^{g-1}/\bbZ^{g-1}$, the homology groups $H_m(\cT_\xi,\bbZ)$ for integers $m$ are free abelian groups, and the pairing \begin{equation}\label{eq: pairing} H_m(\cT_\xi,\bbZ) \times H^m(\cT_\xi,\bbZ) \rightarrow \bbZ, \end{equation} obtained by associating to a cycle $u\in C_m(\Delta_\xi\backslash\Phi)$ and a cocycle $\varphi \in {\Hecke}om_\bbZ\bigl(C_m(\Delta_\xi\backslash\Phi), \bbZ\bigr)$ the element $\varphi(u)\in\bbZ$, is perfect (see for example \cite{Mun84}*{Theorem 45.8}). The generator of the homology group \[ H_{g-1}(\cT_\xi,\bbZ) = H_{g-1}\bigl(C_\bullet(\Delta_\xi\backslash\Phi)\bigr) \cong \bbZ \] is given by the fundamental class \begin{equation}\label{eq: fundamental class} \sum_{\ol\sigma\in\Delta_\xi\backslash\Phi_{g}} \langle\ol\sigma\rangle \in C_{g-1}(\Delta_\xi\backslash\Phi), \end{equation} and the canonical isomorphism \begin{equation}\label{eq: isom} H^{g-1}(\cT_\xi,\bbQ(\xi)) = H^{g-1}\Bigl({\Hecke}om_\bbZ\bigl(C_\bullet(\Delta_\xi\backslash\Phi), \bbQ(\xi)\bigr)\Bigr) \cong \bbQ(\xi) \end{equation} induced by the fundamental class \eqref{eq: fundamental class} via the pairing \eqref{eq: pairing} is given explicitly in terms of cocycles by mapping any $\varphi\in {\Hecke}om_\bbZ(C_{g-1}(\Delta_\xi\backslash\Phi), \bbQ(\xi))$ to the element $\sum_{\ol\sigma\in\Delta_\xi\backslash\Phi_g}\varphi(\langle\ol\sigma\rangle)\in \bbQ(\xi)$. \begin{proposition}\label{prop: ecc} Let $\eta\in H^{g-1}(\xi/\Delta_\xi,\sO_\xi)$ be represented by a cocycle \[ (\eta_{\boldsymbol{\alpha}})\in C^{g-1}(\frV/\Delta_\xi,\sO_\xi) =\biggl(\prod_{{\boldsymbol{\alpha}}\in A_\xi^{g}}^\alt \bbQ(\xi)\biggr)^{\Delta_\xi}. \] For any cone $\sigma\in\Phi_g$, let $\eta_\sigma\coloneqq \sgn({\boldsymbol{\alpha}})\eta_{{\boldsymbol{\alpha}}}$ for any ${\boldsymbol{\alpha}}\in A^g_\xi$ such that $\sigma_{\boldsymbol{\alpha}}=\sigma$.
Then the homomorphism mapping the cocycle $(\eta_{\boldsymbol{\alpha}})$ to $ \sum_{\ol\sigma\in\Delta_\xi\backslash\Phi_{g}} \eta_{\sigma} $ induces a canonical isomorphism \begin{equation}\label{eq: isom main} H^{g-1}(\xi/\Delta_\xi,\sO_\xi)\cong\bbQ(\xi). \end{equation} \end{proposition} \begin{proof} Since $C_q(\Phi)$ and $C_q(A_\xi)$ are free $\bbZ[\Delta_\xi]$-modules, the quasi-isomorphism $C_\bullet(\Phi)\xrightarrow\qis C_\bullet(A_\xi)$ of Lemma \ref{lemma: 5.4} induces the quasi-isomorphism \[ {\Hecke}om_{\Delta_\xi}\bigl(C_\bullet(A_\xi),\bbQ(\xi)\bigr)\xrightarrow\qis {\Hecke}om_{\Delta_\xi}\bigl(C_\bullet(\Phi),\bbQ(\xi)\bigr). \] Combining this fact with Lemma \ref{lemma: 5.2} and \eqref{eq: similarly}, we see that \[ H^{g-1}(\xi/\Delta_\xi,\sO_\xi)\cong H^{g-1}\bigl({\Hecke}om_{\Delta_\xi}\bigl(C_\bullet(\Phi),\bbQ(\xi)\bigr)\bigr). \] Since we have $ {\Hecke}om_{\Delta_\xi}\bigl(C_\bullet(\Phi),\bbQ(\xi)\bigr) = {\Hecke}om_{\bbZ}\bigl(C_\bullet(\Delta_\xi\backslash\Phi),\bbQ(\xi)\bigr), $ our assertion follows from \eqref{eq: isom}. \end{proof} We will now prove Theorem \ref{theorem: main}. \begin{proof}[Proof of Theorem \ref{theorem: main}] By construction and Lemma \ref{lem: differential}, the class $\partial^k\cG(\xi)$ is defined over $\bbQ(\xi)$ and represented by the cocycle $ (\partial^k\cG_{\boldsymbol{\alpha}}(\xi)) \in C^{g-1}(\frV/\Delta_\xi,\sO_{\xi}). $ By Proposition \ref{prop: ecc} and Proposition \ref{prop: Shintani}, the class $\partial^k\cG(\xi)$ maps through \eqref{eq: isom main} to \[ \sum_{\sigma\in\Delta_\xi\backslash\Phi_g} \partial^k\cG_\sigma(\xi) =\sum_{\sigma\in\Delta_\xi\backslash\Phi_g} \zeta_{\sigma}(\xi,(-k,\ldots,-k)). \] Our assertion now follows from \eqref{eq: Shintani}. \end{proof} \begin{corollary} Assume that the narrow class number of $F$ is \textit{one}, and let $\chi\colon\Cl^+_F(\frf)\rightarrow\bbC^\times$ be a finite primitive Hecke character of $F$ of conductor $\frf\neq(1)$.
If we let $U[\frf]\coloneqq\bbT[\frf]\setminus\{1\}$, then we have \[ L(\chi, -k)=\sum_{\xi\in U[\frf]/\Delta}c_\chi(\xi)\partial^k\cG(\xi) \] for any integer $k\geq0$. \end{corollary} \begin{proof} The result follows from Theorem \ref{theorem: main} and Proposition \ref{prop: Hecke}. \end{proof} The significance of this result is that the special values at negative integers of the $L$-function of \textit{any} finite Hecke character of $F$ may be expressed as a linear combination of special values of the derivatives of a single canonical cohomology class, the Shintani class $\cG$ in $H^{g-1}(U/\Delta,\sO_\bbT)$. \begin{bibdiv} \begin{biblist} \bibselect{PolylogarithmBibliography} \end{biblist} \end{bibdiv} \end{document}
\begin{document} \title{The Digital Twin Landscape at the Crossroads of Predictive Maintenance, Machine Learning and Physics Based Modeling} \abstract{The concept of a digital twin has exploded in popularity over the past decade, yet confusion around its plurality of definitions, its novelty as a new technology, and its practical applicability still exists, all despite numerous reviews, surveys, and press releases. The history of the term digital twin is explored, as well as its initial context in the fields of product life cycle management, asset maintenance, and equipment fleet management, operations, and planning. A definition for a minimally viable framework to utilize a digital twin is also provided based on seven essential elements. A brief tour through DT applications and industries where DT methods are employed is also outlined. The application of a digital twin framework is highlighted in the field of predictive maintenance, and its extensions utilizing machine learning and physics based modeling. Employing the combination of machine learning and physics based modeling to form hybrid digital twin frameworks may synergistically alleviate the shortcomings of each method when used in isolation. Key challenges of implementing digital twin models in practice are additionally discussed. As digital twin technology experiences rapid growth and matures, its great promise to substantially enhance tools and solutions for the intelligent upkeep of complex equipment is expected to materialize. } \section{Introduction} The concept of the {\em digital twin} (DT) has been increasingly mentioned over the last decade in both academic and industrial circles. The frequency of a web search topic of {\em digital twin} (which includes similar search terms such as {\em digital twins}, {\em digital twin definition}, {\em what is a digital twin}, etc.) has seen an approximately exponential rise over roughly the past decade (see Figure \ref{fig:DT_Trend_Analysis}).
Publication of scholarly articles shows similar trends across several databases including Web of Science\texttrademark, Scopus\texttrademark, and Google Scholar\texttrademark. Yet, the definition of what a digital twin consists of has evolved since its initial introduction, for better or for worse, with some attaching various adjectives to broaden it, and others insisting on the inclusion of tangential topics to stake novelty claims to the idea in specific technical fields. This manuscript seeks to provide clarity on defining {\em digital twin}, through exploring the history of the term, its initial context in the fields of product life cycle management (PLM), asset maintenance, and fleet of equipment management, operations, and planning. A definition for a minimally viable digital twin framework is also provided to alleviate the ambiguity of the DT concept. Furthermore, a brief tour through its applications across industries will be provided to clarify the context in which digital twins are used today. Thereafter, the application of digital twins in the fields of predictive maintenance, machine learning, and physics based modeling, with a keen eye towards their intersections, will be investigated. Finally, the challenges of implementing digital twins will be examined. The reader should gain a clear understanding of what a digital twin is, how it is applied, and what obstacles to its use lie ahead. \begin{figure} \caption{Trends of the query term {\em digital twin}} \label{fig:DT_Trend_Analysis} \end{figure} \subsection{Defining Digital Twin and Related Terms} \subsubsection{Digital Twin Definitions} At first glance, the term {\em digital twin} conjures the idea of a simple model, a scaled replica or mathematical expression that is a representation of the physically real object or system, and that, certainly, is not a new concept.
Over a century ago, Buckingham~\cite{buckingham1914physically} made use of dimensional analysis to lay the groundwork for engineers and physicists to use scaled physical models in place of the larger physical models they represented. These smaller {\em physical twins} of the larger physical object, such as a ship, an aircraft, a bridge or a building, allowed for testing of designs without having to construct a full-scale model, simply by matching dimensionless parameters invariant to scale, for example, the Reynolds number when analyzing fluid flow past the object. Decades later, with the advent of modern computers, these physical models were supplemented with computational (digital) models that were digitally drawn with computer aided design (CAD) while using governing equations that could be solved by discretizing the model design space into small volumes, elements, and/or nodes~\cite{Marinkovic2019FEM}. It is natural to ask, then, whether these design models also fall under the definition of a digital twin. There are multiple definitions of the DT concept, which vary across different industries and can broadly encompass sophisticated computational physics-based models of parts, machine learning algorithms applied to recorded sensor data, CAD models, a repository for part and asset manufacturing and maintenance history, and/or scaled virtual reality environments (see Negri et al.~\cite{negri2017review} for a comprehensive list of different DT definitions); however, the common theme connecting these definitions is that a virtual representation of a real physical system, machine, or product, spanning its entire life cycle, is created in order to track changes, provide traceability of parts and/or software, and typically connect embedded sensors and Internet of Things (IoT) devices with databases to document the life cycle of the subject item.
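The dimensional-analysis idea behind scaled physical twins can be made concrete with a toy calculation; all numbers below are invented for illustration. Matching the Reynolds number $Re = \rho V L / \mu$ between a full-scale body and a 1:10 scale model tested in the same fluid forces the model flow speed to scale inversely with length:

```python
# Toy similarity sketch (all numbers invented for illustration).
# Matching the Reynolds number Re = rho * V * L / mu between a full-scale
# hull and a 1:10 scale physical twin tested in the same fluid.

def reynolds(rho, v, length, mu):
    """Reynolds number for flow of speed v past a body of characteristic length."""
    return rho * v * length / mu

rho, mu = 1000.0, 1.0e-3      # water: density (kg/m^3), dynamic viscosity (Pa s)
L_full, V_full = 100.0, 5.0   # full-scale length (m) and speed (m/s)
L_model = L_full / 10.0       # 1:10 scale model

# Re is invariant under V -> V * s, L -> L / s, so the model must run faster:
V_model = V_full * (L_full / L_model)
assert reynolds(rho, V_model, L_model, mu) == reynolds(rho, V_full, L_full, mu)
print(V_model)  # 50.0
```

The tenfold speed requirement in this sketch illustrates why scale testing in practice often changes the working fluid or matches a different dimensionless group, such as the Froude number for ship hulls.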
The term digital twin is sometimes used synonymously with {\em digital database}; however, although a digital twin may exist within a database, it more properly refers to the represented model or simulations (or their aggregate) of the physical object or system. The first comprehensive definition of {\em digital twin} is often credited to NASA's 2012 {\em Modeling, Simulation, Information Technology \& Processing Roadmap}~\cite{shafto2012modeling}, defined therein as: \begin{quote} A digital twin is an integrated multiphysics, multiscale simulation of a vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin. The digital twin is ultra-realistic and may consider one or more important and interdependent vehicle systems, including propulsion/energy storage, avionics, life support, vehicle structure, thermal management/TPS [thermal protection system], etc. Manufacturing anomalies that may affect the vehicle may also be explicitly considered. In addition to the backbone of high-fidelity physical models, the digital twin integrates sensor data from the vehicle’s on-board integrated vehicle health management (IVHM) system, maintenance history and all available historical/fleet data obtained using data mining and text mining. By combining all of this information, the digital twin continuously forecasts the health of the vehicle/ system, the remaining useful life and the probability of mission success.
The systems on board the digital twin are also capable of mitigating damage or degradation by recommending changes in mission profile to increase both the life span and the probability of mission success.~\cite{shafto2012modeling} \end{quote} Thus, in its initial definition, the digital twin concept was applied to the service life (planning, maintenance, and operation) of a complex asset, such as an aerospace or astronautical vehicle with thousands, if not millions, of individual parts assembled into a dense physical web of interacting functional systems. A similar DT concept was also proposed by Tuegel et al. for an aircraft, specifically~\cite{tuegel2011reengineering}. Such complex assets are subject to multi-year environmental degradation, requiring sustained maintenance, part replacement, mission dependent equipment swaps, and operational planning including fleet management, service downtime scheduling, and part and personnel logistics. Also accompanying the complex asset is a wealth of data, generated from onboard sensors. This data is often generated in the context of feedback for various control logic applications, but also for condition monitoring, warning indicators, and system alarms. These maintenance requirements and the amount of diagnostic data available align directly with the goals of predictive maintenance (PMx, sometimes also abbreviated PdM): the promise that analyzing and interpreting asset data will allow anticipation of the need for corrective maintenance, its convenient scheduling, and preventing equipment failures~\cite{errandonea2020digital,carvalho2019systematic,miller_system-level_2020}. Another early conceptual framework for digital twins is heavily based on the concept of product lifecycle management (PLM): a systematic approach to managing the series of changes a product goes through, from its design and development, to its ultimate retirement, generational redesign, or disposal~\cite{terzi2010product}. 
PLM can be visualized as two interwoven cycles: product development, on the one hand, and product service and maintenance, on the other; the former takes place before an asset is deployed for its real-world design intent, and the latter refers to the asset's service support during applicable use (Figure \ref{fig:DT-PLM}). The digital twin can then be conceptualized as a seamless link connecting the interrelated cycles by providing a common database to store designs, models, simulations, algorithms, data, and information tracked over time and throughout each cycle. Grieves~\cite{grieves2016origins} claims that the following definition, closely tied to the idea of PLM, preceded the NASA definition by about a decade under the term {\em Mirrored Spaces Model}: \begin{quote} The Digital Twin is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. The Digital Twin would be used for predicting future behavior and performance of the physical product. At the Prototype stage, the prediction would be of the behavior of the designed product with components that vary between its high and low tolerances in order to ascertain that the as-designed product met the proposed requirements. In the Instance stage, the prediction would be a specific instance of a specific physical product that incorporated actual components and component history. The predictive performance would be based from current point in the product's lifecycle at its current state and move forward. Multiple instances of the product could be aggregated to provide a range of possible future states. Digital Twin Instances could be interrogated for the current and past histories.
Irrespective of where their physical counterpart resided in the world, individual instances could be interrogated for their current system state: fuel amount, throttle settings, geographical location, structure stress, or any other characteristic that was instrumented. Multiple instances of products would provide data that would be correlated for predicting future states. For example, correlating component sensor readings with subsequent failures of that component would result in an alert of possible component failure being generated when that sensor pattern was reported. The aggregate of actual failures could provide Bayesian probabilities for predictive uses. \cite{grieves2016origins} \end{quote} In this context, the DT concept reflects the common realization that most modern machines, systems, and products generate data from network-connected sensors, and that their design and development typically span multiple engineering domains: mechanical and structural hardware, electronics, embedded software, network communication, and often more. A significant design change, or even a part replacement with nonidentical tolerances, made in any of the engineering domains will necessitate and propagate significant design or performance changes in the other domains. A digital twin provides the connection between prior and future design and operations, alleviating many communication and logistical problems that arise from the intersection of disparate engineering and technical fields during a product's or system's life cycle. Furthermore, the digital twin's ability to be updated in real time or near real time from embedded and peripheral sensors allows for deeper analysis of the performance and maintenance of the physical twin asset.
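The Bayesian prediction mentioned in the quoted passage (correlating a fleet's sensor patterns with subsequent component failures) can be made concrete with a one-line application of Bayes' rule. The sketch below is purely illustrative; the function name and all probabilities are hypothetical and are not taken from any cited source.

```python
# Illustrative sketch (not from the cited works): Bayes' rule applied to
# aggregated fleet history, as in Grieves's example of correlating a
# sensor pattern with subsequent component failures.

def posterior_failure_prob(p_fail, p_pattern_given_fail, p_pattern_given_ok):
    """P(failure | sensor pattern observed), by Bayes' rule."""
    p_ok = 1.0 - p_fail
    evidence = p_pattern_given_fail * p_fail + p_pattern_given_ok * p_ok
    return p_pattern_given_fail * p_fail / evidence

# Hypothetical fleet history: 2% base failure rate; the pattern precedes
# 90% of failures but also appears in 5% of healthy units.
p = posterior_failure_prob(p_fail=0.02, p_pattern_given_fail=0.90,
                           p_pattern_given_ok=0.05)
print(f"P(failure | pattern) = {p:.2f}")  # the pattern raises the 2% prior to ~27%
```

Even this toy calculation shows why fleet-wide aggregation matters: the base failure rate and the pattern's false-positive rate both come from correlated data across many instances.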
Rosen et al.~\cite{rosen_about_2015} describe the digital twin concept as the future of a trend in modeling and simulation that has evolved from individual solutions of specific problems, to model-based systems engineering (MBSE) with multi-physics and multi-scale modeling, and finally to seamless digital twin modeling linked to operational data throughout the entire life cycle of an asset. Kritzinger et al.~\cite{kritzinger2018digital} describe levels of automation that distinguish traditional models from digital twin models: a digital model has manual data flow between the physical and digital objects in both directions; a digital shadow has an automated digital thread carrying physical-object data to the digital object, but still has manual information flow from the digital object back to the physical object; only the digital twin has automated data and information flow in both directions via the digital thread. A comprehensive review~\cite{Qamsane2020digital} reveals at least ten different definitions for digital twins. A survey of 150 engineers asking what they thought a DT was revealed that the most popular definition was ``a virtual replica of the physical asset which can be used to monitor and evaluate its performance'' and that its most useful application was thought to be in the field of predictive maintenance~\cite{Hamer2018feasibility}.
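The distinction drawn by Kritzinger et al. reduces to whether each direction of data flow is automated, which can be sketched as a tiny classifier. This is an illustration of the taxonomy, not code from the cited review.

```python
# Sketch of the Kritzinger et al. taxonomy: the category depends only on
# whether each direction of data flow (physical->digital, digital->physical)
# is automated.  Combinations outside the taxonomy fall back to "digital model".

def kritzinger_category(auto_p2d: bool, auto_d2p: bool) -> str:
    if auto_p2d and auto_d2p:
        return "digital twin"      # automated flow in both directions
    if auto_p2d:
        return "digital shadow"    # automated P->D, manual D->P
    return "digital model"         # manual flow in both directions

print(kritzinger_category(False, False))  # digital model
print(kritzinger_category(True, False))   # digital shadow
print(kritzinger_category(True, True))    # digital twin
```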
The variety and expanding inclusion of concepts may make DT appear equivalent to a traditional {\em simulation} or even {\em data analysis} of sensor data; however, the novelty, in comparison to 20th-century definitions of similar terms, lies in two areas: a) the development of large, connected streams or {\em threads} of sensor data that may be analyzed to improve the understanding of the current system or machine (also a defining feature of {\em Industry 4.0}~\cite{lasi2014industry}); and b) the integration of multiple models describing the form, function, and behavior of the physical system and its components at various scales and using diverse modeling paradigms. \begin{figure} \caption{The digital twin product lifecycle management (DT-PLM) concept. The two cycles of product development and product service and maintenance are interwoven. The digital twin model provides a seamless link connecting these interrelated cycles by providing a common database to store designs, models, simulations, test data, as-manufactured data, algorithms, usage data, and other available information tracked over time and throughout each cycle as the physical twin asset moves along the cycles.} \label{fig:DT-PLM} \end{figure} \subsubsection{Digital Thread, Industry 4.0, Digital Engineering} Although sometimes used interchangeably with {\em digital twin}, {\em digital thread} has a more specific meaning: \begin{quote} The digital thread refers to the communication framework that allows a connected data flow and integrated view of the asset’s data throughout its lifecycle across traditionally siloed functional perspectives.
The digital thread concept raises the bar for delivering ``the right information to the right place at the right time'' \cite{leiva2016demystifying} \end{quote} Digital thread is often considered synonymous with the {\em connected supply chain} that ``collects data on all parts, kits, processes, machines and tools in real time and then stores that data, enabling digital twins and complete traceability.'' \cite{leiva2016demystifying} Digital thread is also closely related to the concept of {\em Industry 4.0}: \begin{quote} The increasing digitalization of all manufacturing and manufacturing-supporting tools is resulting in the registration of an increasing amount of actor- and sensor-data which can support functions of control and analysis. Digital processes evolve as a result of the likewise increased networking of technical components and, in conjunction with the increase of the digitalization of produced goods and services, they lead to completely digitalized environments. Those are in turn driving forces for new technologies such as simulation, digital protection, and virtual or augmented reality. \cite{lasi2014industry} \end{quote} Yet another closely related term is {\em digital engineering}, which can be defined as: \begin{quote} Digital engineering is defined as an integrated digital approach that uses authoritative sources of systems data and models as a continuum across disciplines to support lifecycle activities from concept through disposal. Digital engineering applies similar emphasis on the continuous evolution of artifacts and the organizational integration of people and process throughout a development organization and integration team [\dots] Digital engineering is sometimes referred to as ``digital thread,'' which is understood as a more encompassing term, though the novelty of both terms has created some dispute over their exact overlap. 
Both digital thread and digital engineering are an extension of product lifecycle management, a common practice in private industry that involves the creation and storage of a system's lifecycle artifacts, in digital form, and which can be modified as a system evolves throughout its lifecycle. Digital thread and digital engineering both involve a single source of truth, referred to as the authoritative source of truth (ASoT), which contain artifacts maintained in a single repository, and stakeholders work from the same models rather than copies of models. \cite{Shepard2020digitalengineering} \end{quote} The relation between the concepts of digital twin and digital thread is further illustrated in Figure \ref{fig:DT_Fleet}. A digital twin may be created for each vehicle/asset in a fleet of vehicles/assets, and fleet data analyzed by the digital twins may be leveraged into information for monitoring fleet health, decreasing downtime, managing inventory, optimizing operations, and performing simulation scenarios that study performance of the fleet. \begin{figure} \caption{Illustration of a fleet of digital twins. Digital threads connect each individual physical twin with its corresponding digital twin, allowing for bidirectional data and information flow.} \label{fig:DT_Fleet} \end{figure} \subsubsection{Minimally Viable Digital Twin Framework} Now armed with a list of defined concepts, the question of {\em what constitutes a minimally viable digital twin framework} can be addressed. At bare minimum, a DT modeling framework (Figure \ref{fig:DT_Min_Framework}) must comprise the seven elements described below. It is worth noting that, since the modeling framework may be implemented in stages through the life cycle of the asset, not all elements of the framework will be present in every stage (e.g., the physical asset may first be digitally designed); such a configuration can be considered an interim prototype until all elements are incorporated.
\begin{itemize} \item Physical Twin Asset -- an object, system, or process occurring in the real world; without a corresponding physical twin, the digital twin model is simply a traditional exploratory simulation or design model. \item Digital Twin -- a digital representation of the physical twin asset, such as information models describing the asset's form, function, and behavior, the current state/configuration of the physical asset, as well as pointers to a digital database of historical data from instrumentation and previous states/configurations. Without the digital twin representation, the physical twin merely exists, with no data and models for analysis. \item Instrumentation -- sensors, detectors, and measurement tools that collectively generate digital data and knowledge about the physical asset or the environment in which it operates; instrumentation may be independent from or embedded on the physical asset, and may even include manual inspection notes converted to digital form for the digital twin to interpret and process. Without some sort of instrumentation, nothing is measured and no data is generated within the modeling framework. \item Analysis -- data analytics, computational models, simulations, algorithms, and decisions (human or machine). Analysis transforms digital twin data from/of the physical asset into information that is manually, automatically, or autonomously actionable. Without analysis, the digital twin simply mirrors the current state of the physical counterpart, with no actionable information generated for the physical twin. \item Digital Thread -- a digital connection (through wired/wireless communication networks) that provides a data link between the physical asset and the digital twin. Without a digital thread, there is no digital communication between the twins, and thus the configuration comprises, at best, an analog model.
\item Live Data -- data from instrumentation, streamed through the digital thread, that changes over time, indicating the changing state or status of the physical asset throughout its life cycle; {\em live} is defined by the maximum capture latency that still allows the feedback to be actionable. Without live data, the model is static, temporary, or non-dynamic; such a model can only provide instantaneous, temporary solutions or prototypical off-line feedback. \item Actionable Information -- information generated from the analysis and relayed as actionable feedback to the physical asset (i.e., feedback on the operation, control, maintenance, and/or scheduling of use of the physical twin asset). With no actionable information being utilized, the physical twin {\em flies blind}, with no external insights into its own state or condition. \end{itemize} With this framework, one can quickly distinguish what makes a digital twin model distinct from traditional design models and simulations: design models and simulations are used as testing grounds to create at least a physical asset prototype using simulated physics and computer-aided design, whereas a digital twin model utilizes data directly from the field, in the precise operating conditions and environment in which the asset is used. In addition, the authors believe the confusion among the multiple definitions of digital twin is alleviated by a clear distinction between a digital twin and a digital twin modeling framework; the former is a digital representation of the states, conditions, environmental exposures, and configurations of the physical twin asset, while the latter (the system of seven components described above) provides the contextual framework in which the former is utilized.
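The seven elements above can be summarized in a minimal object-model sketch. All class, attribute, and threshold names below are illustrative assumptions, not a standard digital twin API; the "analysis" is a deliberately trivial rule standing in for real models.

```python
# Minimal object-model sketch of the framework elements described above.
# Names and the 90-degree threshold are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PhysicalAsset:              # physical twin asset
    asset_id: str

@dataclass
class Instrument:                 # instrumentation producing live data
    name: str
    read: Callable[[], float]

@dataclass
class DigitalTwin:                # digital twin, digital thread, and analysis
    asset: PhysicalAsset
    instruments: List[Instrument]
    history: List[dict] = field(default_factory=list)

    def ingest(self) -> dict:
        """Digital thread: pull live data from the instrumentation."""
        sample = {i.name: i.read() for i in self.instruments}
        self.history.append(sample)
        return sample

    def analyze(self) -> str:
        """Analysis: turn data into actionable information (toy rule)."""
        latest = self.history[-1]
        hot = any(v > 90.0 for v in latest.values())  # hypothetical threshold
        return "schedule inspection" if hot else "continue operation"

twin = DigitalTwin(PhysicalAsset("pump-01"),
                   [Instrument("temp_C", lambda: 95.0)])
twin.ingest()
print(twin.analyze())  # -> schedule inspection
```

Removing any element breaks the loop in exactly the ways the list describes: without `instruments` nothing is measured, without `ingest` (the thread) no data reaches the twin, and without `analyze` the twin merely mirrors state.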
In this context, the digital twin modeling framework, which utilizes digital threads, is consistent with the idea of a {\em Cyber-Physical System} (CPS)~\cite{uhlemann_digital_2017-1}. The modeling philosophies are slightly different, though. In the CPS literature, emphasis is placed on the analysis/control tasks, and the digital representations used to assimilate instrumentation data and perform analysis (i.e., the digital twin) are typically control-oriented models of the physical asset, purposefully designed to capture only the relevant behavior and properties needed to support those specific analysis tasks. The digital twin modeling framework, on the other hand, places less emphasis on the analysis tasks and more emphasis on ensuring that the digital twin is a faithful replica of the physical asset, supporting a variety of analysis tasks even beyond those originally envisioned during design. \begin{figure} \caption{Minimally viable framework for a digital twin model. ``Live data'' is defined as a quantity changing over time, with as little data capture latency as needed to allow the information to be actionable in operations, which, in turn, provides feedback on the use, control, maintenance, and/or use scheduling of the physical twin asset.} \label{fig:DT_Min_Framework} \end{figure} A potential critique could be that this framework also describes the relatively simple control logic algorithms (e.g., a vehicle cruise control or autopilot) of a physical asset, or even a simple viewable dashboard of gauges that a human can view and interpret for informative feedback to control the asset. However, these simple examples miss the point that digital twins are meant to model complex interactions (multiphysics) to control complex assets (multiscale, multisystem, and multiprocess).
Ultimately, that is only a minimal framework; the complexity implemented in digital twins can be envisaged to rise in sophistication along three main scales: \begin{itemize} \item {\bf Extent of autonomy and decision speed}: A basic model would be a manual supervisory one, providing information to personnel who interpret the digital twin information and make control decisions, while data or operation controls are uploaded/downloaded manually, with decisions made on a coarse time scale of hours and days. An intermediate model would automate not just digital data processing but also simple stable controls, with regular human interventions to mitigate unstable decision making, and with decisions made on time scales of minutes to hours. An advanced autonomous model would automate the majority of decision making in real time (perhaps with only a few seconds of processing lag), instilling enough confidence to make human intervention and decision making rare. \item {\bf Extent of component granularity}: A basic model would treat the asset as a single item, or perhaps concurrently consider a few components in isolation, and only allow for simple alarming or warning that a maintenance check by a human is needed for further investigation and troubleshooting. An intermediate model would separately model all major subsystems (e.g., avionics, engine, power equipment, hydraulics, structural frame, etc.) or known frequent problem areas, in an attempt to isolate expected faults and more easily identify anomalous behavior of major components and subsystems. An ultimate digital twin would individually model all components that make up the physical asset, as well as their potentially complex interactions, in a holistic manner, living up to its ``twin'' title while allowing for extensive root cause analysis and adjudication of failures and faults down to individual components.
\item {\bf Extent of incorporated physics}: A basic model would focus on a single-physics failure mode, such as shearing of a bolt due to excessive stress. An intermediate model would incorporate at least two physics domains to identify failures dependent on multi-domain solutions (e.g., thermal cycling leading to fatigue-driven structural failure). A high-fidelity model would incorporate all relevant physics domains that can cause failures (e.g., a battery explosion involving electrical, chemical, thermal, and structural considerations). \end{itemize} The extent of component granularity and incorporated physics is proportional to the amount and types of instrumentation (sensor) data used to monitor the health and upkeep of the physical twin asset. If there are relatively few, sparsely distributed sensors and detectors in, on, or around the physical asset, then there is a higher likelihood that the digital twin models are attempting to solve an ill-posed problem: one where unique solutions do not exist, or where solutions do not converge at all or converge only to local optima. One cannot hope to achieve accurate physical simulation of all, or a large percentage, of the components of a complex asset without an instrumentation density that captures the behavior of those components at the required granularity and level of detail. Similarly, instruments should be able to capture the relevant physical quantities that the physics model attempts to represent. As an example, a digital twin thermal cycling fatigue analysis model will have no input (and no solution) if there are no embedded temperature measuring devices capturing the real environmental exposure of the components being modeled. Therefore, accurate and useful digital twin models directly rely on sensitive, reliable, and physically relevant instrumentation.
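The thermal-cycling fatigue example can be made concrete with a simplified Coffin--Manson-style life model fed by measured temperature swings. The constants below are hypothetical, not taken from any cited source; the point is only that without embedded temperature sensors the model literally has no input.

```python
# Illustrative sketch: a simplified Coffin-Manson-style relation
# N_f = C * (dT)^(-m) for cycles-to-failure under thermal cycling,
# with Miner's-rule damage accumulation over measured swings.
# C and m are hypothetical material constants, assumed for illustration.

C, m = 1.0e9, 2.5

def cycles_to_failure(delta_T: float) -> float:
    """Cycles to failure for a thermal swing of delta_T (K)."""
    return C * delta_T ** (-m)

def damage_fraction(measured_swings) -> float:
    """Miner's-rule damage accumulated over measured temperature swings."""
    return sum(1.0 / cycles_to_failure(dT) for dT in measured_swings)

# Hypothetical measured history from embedded temperature sensors:
swings = [40.0] * 1000 + [80.0] * 100
print(f"accumulated damage fraction: {damage_fraction(swings):.4f}")
# Without temperature instrumentation, `swings` is empty and damage is 0:
# the model is blind, not healthy.
```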
Even with entirely data-driven statistical and machine learning techniques that do not explicitly model the physics of the asset, one can expect more reliable models if training data from a set of relevant physical sensor types are included, to avoid learning spurious correlations. \subsection{Digital Twin Literature Review of Reviews} Many digital twin reviews exist. A recent Web of Science search for the title or abstract containing (("digital twin" OR "digital twins" OR "digital thread" OR "digital threads") AND ("review" OR "reviews" OR "survey" OR "surveys" OR "landscape" OR "landscapes")) returned 215 results. A similarly filtered Google Scholar\texttrademark{} search limited to review articles returned 305 results. A comprehensive review of digital twin review articles is in order, but it is beyond the scope of this manuscript; we provide only a brief overview with highlights below. Lu et al.~\cite{lu2020digital} is the most cited review of the digital twin concept. The authors provide a systematic examination of the definition of digital twin and related terms, the enabling technologies, the communication standards and network architecture of the digital thread, and current research issues and challenges. Nevertheless, the review is singularly focused on industrial manufacturing applications. Also, instead of demonstrating specific examples of DT, general advantages such as ``decision-making optimization'', ``data sharing'', and ``mass personalization'' are outlined without supporting quantitative metrics. Kritzinger et al.~\cite{kritzinger2018digital} is another frequently cited review of digital twin use in industrial manufacturing.
The authors provide a classification schema for DT-relevant articles based on the level of automation and model type, defining terms such as ``digital model'' (manual data flow between physical and digital objects in both directions), ``digital shadow'' (automated data flow from physical to digital objects, but manual data flow from digital to physical objects), and ``digital twin'' (automated data flow between physical and digital objects in both directions)~\cite{kritzinger2018digital}. Only about one quarter of the cited articles involved case studies. Errandonea et al.~\cite{errandonea2020digital} provide a comprehensive review of digital twins used in the context of predictive maintenance. The paper identified 68 articles and conference papers that used DT for maintenance applications. Khan et al.~\cite{khan2020requirements} examine the path towards {\em autonomous maintenance}, exploring the requirements from sensors, machine learning, security, and material science needed to achieve highly automated, low-human-intervention digital twin models for equipment maintenance. Khan et al.~\cite{khan2018review} is another comprehensive and visionary review of deep learning applied in the context of system health management (SHM). SHM includes PMx, yet is a more general term, encompassing diagnostics and anomaly detection in addition to the prognostics often performed in PMx. The authors identified 38 articles that use deep learning methods to analyze and interpret equipment data to perform anomaly detection, fault and failure diagnosis and classification, remaining useful life estimation, and component degradation assessment. Rasheed et al.~\cite{rasheed2020digital} outline several hybrid methods for digital twins incorporating ML and physics-based modeling (PBM); however, there are no current reviews on the intersection of digital twin predictive maintenance models incorporating both machine learning methods and physics-based modeling.
\section{Digital Twin Applications and Industries} We now review possible digital twin applications and examples from different industries, in order to better illustrate how the concepts described so far map into real-world usage scenarios. \subsection{DT Applications} We begin with a review of prototypical applications that are relevant to multiple industries. \paragraph{Risk assessment and decreased time to production:} For manufacturing and production, a plant layout is a time-intensive, high-cost design endeavor, which inherently involves risk from inefficient allocations of time and resources. The digital twin creates an opportunity to decrease the time to production by effectively creating a high-fidelity representation of the production line in a virtual space, while also exploring what-if design scenarios that can result in layout optimizations that maximize production output and/or minimize costs~\cite{negri2017review}. This virtual simulation of the manufacturing line may also be used to determine which variables of the system are important to monitor in the operation phase; further, once in the operation phase, the virtual digital twin used initially for design may be updated with sensor data for the algorithms to analyze and produce prognostics for operations and maintenance. \paragraph{Predictive maintenance:} Predictive maintenance has been stated as both the original purpose of the digital twin concept \cite{shafto2012modeling} and the most popular application of the digital twin model \cite{errandonea2020digital}. The predictive maintenance application is discussed in further detail in Section 3. \paragraph{Real-time remote monitoring and fleet management:} After the predictive maintenance models arrive at their updated predictions, fleet management may then be optimized and executed.
Fleet management involves maintenance planning, use scheduling, and logistics, as well as performance evaluation metrics. Verdouw et al.~\cite{verdouw2017digital} summarize several digital twin applications in agriculture, including a fleet management application that tracks individual equipment location and energy use, accurate row tracking with various towed agricultural equipment, and evaluation of crop yield for individual machines. Major et al. provide a demonstrated example of real-time remote monitoring and control of a ship's crane, as well as the ship itself, with the long-term goal of improved fleet logistics and safety, and supervision from onshore monitoring centers~\cite{major2021real}. \paragraph{Increased team collaboration and efficiency:} Efficient collaboration between members of a project team is vital if a project is to stay within time and budget constraints. Data and information from assets, as they are updated, altered, and generated, must be shared with project managers, multidisciplinary engineering groups, builders and/or manufacturers, and customers/consumers. The need for an integrated platform is substantial, especially when team collaborators have various technical and skilled backgrounds. The digital twin framework provides such a platform, potentially allowing near-real-time monitoring and information interrogation from a consolidated, reliable source. Perhaps this is best illustrated in the field of construction; Lee et al.~\cite{lee2021integrated} disclose a blockchain framework for providing traceability of updates to the database that warehouses both the {\em as-planned} construction and the {\em as-built} project generated from GPS as-measured geometry and material property testing (e.g., soil and building material). The blockchain traces and authenticates user updates, which become a single real-time source of truth for all users to interrogate.
One of the biggest sources of delays in construction projects is the lack of adequate planning for supply chain logistics: getting the right building materials to the right place at the right time. Having a reliable and authenticated source of project status could allow optimization of supply orders and deliveries. \subsection{DT Industries} We now review specific applications within different industries. \paragraph{Manufacturing:} Digital twins have been extensively applied in industrial manufacturing, mainly in the form of predictive maintenance models of large, complex manufacturing machinery and large-scale simulations of production-run machinery, the latter having the goal of decreasing time to production and assessing risk, as mentioned above. Uhlemann et al.~\cite{uhlemann_digital_2017-1} give an example of effective and efficient production layout planning for small to medium-sized enterprises, where different generated layouts are compared using a digital twin simulated environment. PMx applications for manufacturing are also a target area for DT model application, focusing on the impact of upkeep on production output as well as correlated maintenance on upstream and downstream equipment~\cite{susto2014machine}. Rosen et al.~\cite{rosen_about_2015} explore production planning and control by developing a DT simulation framework to optimize the effects of production parameters on output and on manufacturing equipment maintenance. \paragraph{Aerospace:} The digital twin framework is heavily applied in aerospace and aviation for predictive maintenance and fleet management applications~\cite{glaessgen_digital_2012,karve_digital_2020,musso2020interacting,sisson2022digital,wang2020life,xiong2021digital,zaccaria2018fleet,zhou2021real,kapteyn2020toward,guivarch_creation_2019}.
The large amount of sensor-recorded data and up-to-date maintenance records create an environment in which predictive maintenance models can thrive, but also necessitate large-scale data management to provide authoritative and easy-to-interrogate databases. Simulation studies evaluating performance, faults, or failure of assets utilizing digital twin models may be performed for various operating conditions of the assets, and also under various ambient environmental conditions. Logistical plans may be developed and explored based on simulated site plans or various fleet sizes~\cite{west2018demonstrated}. Machine learning algorithms are employed to flexibly optimize logistical problems associated with predictive maintenance, while also predicting the faults and failures of assets based on trends and patterns in the recorded sensor data. These faults may be identified by anomaly detectors that use recorded {\em healthy} or {\em nominal} operational data to contrast with anomalous data, which can be flagged in real time. \paragraph{Architecture, Civil Engineering, Structures and Building Management:} Building information models (BIM) are closely related to digital twins and are well known in the architecture, engineering, and construction management world. BIMs are semantically rich information models of buildings that can be used to easily visualize a 3D representation of the building along with key properties of the different components and systems. They have also been used to assimilate changes made to the physical building and, in turn, act as a guide during building design, construction, and maintenance operations~\cite{coupry2021bim}. Structural health modeling is also a salient example of digital twin use for maintenance, repair and operations (MRO) of structures~\cite{bigoni2022predictive,droz2021multi,taddei2018simulation,rosafalco2020fully}.
Here, digital twins incorporating both data-driven machine learning of states attributed to sensor data and physics-based models, which provide simulations for training the models, work together to provide warnings and predictions of potentially adverse states of the structure. \paragraph{Healthcare, Medicine, and the Human Digital Twin:} In clinical settings, one is often surrounded by various complex and expensive pieces of equipment, which are also the subject of maintenance, repair, and fleet management. Healthcare costs continue to rise, yet the high costs of preventive maintenance still dominate healthcare environments, where new technologies can be slow to be adopted~\cite{shamayleh2020iot}. There is potential for large cost savings by adopting digital-twin-based predictive maintenance models. The digital twin concept has also been applied directly to the human patient, with high-fidelity cardiac models helping to diagnose heart conditions, which may lead to patient-specific and personalized treatments~\cite{gillette2021framework,corral2020digital}. Yet another healthcare application is a digital twin model of humans in the context of wearable health monitors and trackers that collect data and provide personalized advice, diagnoses, and treatments based on data-driven prediction models~\cite{liu2019novel}. Using a similar methodology, Barbiero et al.~\cite{barbiero2021graph} use a patient digital twin approach utilizing a graph neural network to forecast patient conditions (e.g., high blood pressure) based on clinical data from multiple levels of anatomy and physiology, such as cells, tissues, organs, and organ systems. \section{Interrelation between Predictive Maintenance and Digital Twin Framework} \subsection{Distinguishing Types of Maintenance} Predictive maintenance and the digital twin modeling framework share many common elements and goals.
Predictive maintenance utilizes sensor data or maintenance history data from a physical asset to predict the probability of failure over a future interval. The prediction may be used to plan and execute future maintenance or operation of the physical asset. This cycle can be visualized as a control loop (Figure~\ref{fig:pmx-control-loop}), wherein data is generated by the instruments associated with the physical twin asset, passed through the digital thread via the data management block, analyzed by the digital twin models/algorithms, and turned into actionable information to operate the physical twin asset, which then leads to new data generation, beginning the cycle anew. Thus, predictive maintenance models, when deployed, meet all the criteria of the definition of a minimally viable digital twin framework (Figure~\ref{fig:DT_Min_Framework}). Therefore, it is often mentioned that predictive maintenance models are one of the most popular applications of digital twins~\cite{errandonea2020digital}. However, in the broader field of maintenance, repair, and overhaul (MRO), several distinct maintenance types exist depending on the desired level of safety, cost savings and available instrument data, and not all maintenance operations are fit to utilize a digital twin approach. \begin{figure} \caption{The layers of the predictive maintenance stack fit together in a control loop where data is generated, transmitted, analyzed, and acted on. This figure is reproduced with permission from Edman et al.~\cite{edman2021predictive}.} \label{fig:pmx-control-loop} \end{figure} \paragraph{Reactive Maintenance:} Reactive maintenance may be summarized by the adage, {\em if it ain't broke, don't fix it}. Equipment is allowed to run to failure. Failed parts are replaced or repaired. Repairs may be temporary in order to quickly regain operational status, while more time-consuming permanent repairs are deferred.
Such an approach may save money and personnel time in the short term, but runs the risk of unpredictable interruptions and more costly catastrophic failures in the future~\cite{swanson2001linking}. No data from the asset, even if any is recorded, is analyzed; therefore, this approach does not utilize the digital twin framework. \paragraph{Preventive Maintenance:} Some assets cannot risk catastrophic failure, from the perspective of both total loss/destruction of the asset and loss of human life (e.g., the catastrophic failure of an aircraft). Instead, the goal of preventive maintenance, also known as scheduled maintenance, is the exact opposite of the reactive approach, in that safety is of utmost importance. In preventive maintenance, there is little tolerance for risk of failure in any critical components. Maintenance plans are set by manufacturers, with scheduled inspections and overhauls occurring well before the estimated time of failure, utilizing large safety margins~\cite{shafiee2015maintenance}. However, such precautions come with the penalty of prematurely replacing still-usable components. Again, since no asset data is analyzed, this approach does not utilize the digital twin framework. \paragraph{Condition-Based Maintenance:} Condition-based maintenance (CBM) falls between the {\em never replace before failure} philosophy of reactive maintenance and the {\em always replace before there is a remote chance of failure} philosophy of preventive maintenance. CBM can be synonymous with the term {\em diagnosis based maintenance} and is sometimes used interchangeably with PMx or with the term {\em condition monitoring}, although there is no consensus on the equivalency of these terms, which is a source of confusion~\cite{si2011remaining}. However, CBM may have a subtle distinction from PMx, in that CBM requires a keen human observer and tends to rely on human senses (e.g., odd noises, smells, vibrations, visible erratic behaviors, etc.)
to sense that the condition of the asset is in an anomalous state~\cite{nikolaev2019hybrid}. Other CBM methods can be simple heuristics applied to sensor data; an example heuristic would require that accelerometer measurements not exceed a threshold of $k$ standard deviations computed from some previously defined interval of signal, with an exceedance taken as a sign of impending failure. CBM is used to assess the state of the asset in the present moment based on a recent past interval of time, and thus is typically not considered a predictive method. CBM rules are often static and sensitive to noise and artifacts that violate assumptions but do not increase the risk of failure, which may lead to excessive maintenance~\cite{errandonea2020digital}. CBM methods can fall under the category of the digital twin framework if they make use of digital data, but often fail to meet the criteria, due to a lack of a digital thread, if only analog signals are analyzed and actionable feedback is manually implemented. \paragraph{Predictive Maintenance:} Predictive maintenance is a proactive technique that uses real-time asset data (collected through sensors), historical performance data, and advanced analytics to forecast when asset failure will occur. Using data collected by sensors during normal operation, predictive maintenance software uses advanced algorithms to compare real-time data against known measurements and predict asset failure. Advanced PMx techniques incorporate machine learning, which is summarized in several extensive reviews~\cite{carvalho2019systematic,miller_system-level_2020}. The result of PMx is that maintenance work can be scheduled and performed before an asset is expected to fail, with minimal downtime or impact on operations~\cite{nikolaev2019hybrid}. PMx is synonymous with the terms prognostic health management (PHM) and systems health management (SHM)~\cite{khan2018review}.
As outlined above, predictive maintenance algorithms and models are an application of the digital twin modeling framework. \paragraph{Prescriptive Maintenance:} Prescriptive maintenance can be thought of as the implementation step after predictive maintenance~\cite{errandonea2020digital}. After an asset, or a fleet of assets, is predicted (often with a probability or confidence window) to fail within some future time interval, the next task is to optimize the maintenance schedule so as to minimize costs, minimize equipment downtime, and maximize logistical efficiency (getting the right replacement component to the right place at the right time). Prescriptive maintenance is essentially the optimization and execution of the maintenance plan after predictive maintenance has been performed. Note that the concept of {\em prescriptive maintenance} is often included under the term {\em predictive maintenance}. \paragraph{Diagnostics, Anomaly Detection, and Prognostics: What Went Wrong?, What Is Going Wrong?, What Will Go Wrong?} Maintenance, repair, and operations is often divided into the distinct tasks of diagnostics, anomaly detection, and prognostics~\cite{khan2018review}. Diagnostics is the task primarily concerned with attributing anomalous behavior to known fault conditions (i.e., answering the question: what went wrong?). Faults, or the erroneous operation of equipment, are identified using a diagnostic framework. Anomaly detection is chiefly concerned with detecting unintended or unexpected functioning of the monitored equipment. Anomalies may or may not cause or lead to faults or failures; they are simply significant deviations from nominal operation as recorded in the past.
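As a minimal, self-contained illustration of such a detector, the $k$-standard-deviation heuristic mentioned earlier under condition-based maintenance can be used to flag readings that deviate from a recorded nominal history; the window of nominal data and the threshold $k$ below are illustrative assumptions, not values from any cited system:

```python
import statistics

def is_anomalous(nominal_history, reading, k=3.0):
    """Flag a reading that deviates from the mean of a past
    nominal interval by more than k standard deviations."""
    mu = statistics.fmean(nominal_history)
    sigma = statistics.stdev(nominal_history)
    return abs(reading - mu) > k * sigma

# Hypothetical accelerometer readings recorded during nominal operation.
nominal = [0.01, -0.02, 0.00, 0.03, -0.01, 0.02, -0.03, 0.01]
print(is_anomalous(nominal, 0.02))  # within the nominal band -> False
print(is_anomalous(nominal, 0.50))  # far outside the band    -> True
```

In practice the nominal window would be refreshed over time and the rule applied per sensor channel; as the text notes, such static rules are sensitive to noise and are not themselves predictive.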
Ideally, all potential anomalies and diagnoses would be accounted for during the design and testing phase of the equipment's development, yet in practice, complex or aging assets can fail in unanticipated modes and scenarios; therefore, in-the-field analysis of detected anomalies and their linking to discovered faults is at the heart of digital-twin data-driven techniques for diagnostics. Root cause analysis (RCA) is the systematic approach for diagnosing the fault (cause) that leads to a failure. Traditional RCA approaches have included manual analysis by subject matter experts (SMEs) using diagnostic fault trees, fishbone diagrams, and the 5 whys procedure~\cite{sivaraman2021multi}. However, these manual analysis methods become unwieldy with large systems that have many levels of component interaction. As equipment becomes more complex and sophisticated, the number of combinations and permutations of potential causal factors for certain fault events rapidly increases. Therefore, statistical tests and analytic methods, such as regression and the Pearson correlation coefficient, have been applied to capture relationships between recorded sensor variables~\cite{madhavan2021evidence,zhao2017advanced}. Nevertheless, even these methods have difficulty in dealing with non-linear patterns, as well as multi-variable dependence effects and multiple timing and lag effects~\cite{bonissone2009systematic}. Finally, the task of prognostics deals with predicting a future state or condition (e.g., failure) of the equipment or a component thereof. The prediction comes with uncertainty (i.e., it is probabilistic) that typically grows with the span of the forecast horizon.
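As a toy illustration of the correlation-based screening of sensor variables just described, one can rank recorded channels by the absolute Pearson correlation of each with a failure-severity indicator; the channel names and data below are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: one channel tracks a degradation indicator, one does not.
severity = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9]
channels = {
    "bearing_temp": [30, 32, 35, 37, 41, 45],  # rises with severity
    "ambient_temp": [20, 21, 20, 22, 21, 20],  # roughly unrelated
}
ranked = sorted(channels, key=lambda c: -abs(pearson(channels[c], severity)))
print(ranked)  # "bearing_temp" ranks first as the stronger candidate cause
```

This linear screening is exactly what the cited limitations warn about: it misses non-linear, multi-variable, and lagged dependencies.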
Remaining useful life (RUL) estimations of equipment components, effectively a time-to-failure prediction, are the most common type of prognostic indicator and traditionally have been calculated using historical operational component data (e.g., similar components utilized across a fleet of assets) analyzed with statistical techniques such as regression, stochastic filtering (e.g., the Kalman filter), covariate hazard models (e.g., the Cox proportional hazard model), and hidden Markov models~\cite{si2011remaining}. \section{Incorporating Machine Learning with the Digital Twin Framework for Predictive Maintenance} Machine learning (ML) is a broad field of analytical methods that learn by leveraging historical data to make decisions about data encountered in the future. Accordingly, since predictive maintenance, an application of digital twin models, utilizes instrumentation data to make diagnostic or prognostic decisions, ML has often been applied to PMx analyses~\cite{errandonea2020digital,khan2018review,nikolaev2019hybrid}. ML algorithms can be coarsely divided into three different types of learning: supervised, unsupervised, and semi-supervised. These types are separated based on the kind of data one can employ: supervised learning works with data that has labels (i.e., the true predictive outcome is linked to each observation); unsupervised learning has no access to ground-truth labels and utilizes techniques to group, cluster, or extract patterns that can be distinct indicators of an underlying phenomenon; semi-supervised learning is a hybrid approach in which only a small portion of the data has labels and it may be possible to infer the missing ground-truth annotations. Likewise, in PMx applications, the approach chosen will depend on the instrumentation data one can use and whether the data is labelled, that is, whether faults and failures, as well as nominal operation patterns, have been attributed to the recorded data.
Ideally, data is collected from multiple sensors over a fleet of assets, over the entire life of each component: from initial normal operation, through a period of degradation, and finally until failure. Despite the increasing use of connected sensors embedded on complex assets, access to such ideal databases can be rare due to the storage requirements needed to warehouse prolonged data acquisition cycles. Also, many complex physical assets are rarely run until failure, due to the risk of catastrophic consequences, including total loss of the asset. \paragraph{Predictive Maintenance Workflow and Training from Nominal {\em Healthy} Data} Booyse et al.~\cite{booyse2020deep} argue that the first phase in health monitoring is to be able to distinguish anomalous or faulty behavior from nominal healthy behavior. Typically this involves a workflow of data acquisition (DAQ) from the instrumentation; preprocessing of data to filter noise and remove artifacts; identifying a set of distinct condition indicators that differentiate normal data from anomalous data or previously known fault modes; training and testing an ML model to enable predictions on test data; and deploying the model to make predictions on future data. Often, the acquired data is time series data (i.e., a recorded numeric variable, such as temperature, pressure, acceleration, concentration, strain, etc., as a function of time), and condition indicators, or features, can be based in the time domain (e.g., mean, standard deviation, root-mean-square, skewness, kurtosis, or morphological parameters of the shape of the time series signal), frequency domain (e.g., power bandwidth, mean frequency, peak frequencies and amplitudes) or time-frequency domain (e.g., spectral entropy, spectral kurtosis). Condition indicators could also be derived from static or dynamic physical models.
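A few of the time-domain condition indicators listed above can be computed directly from a raw signal; the following sketch uses the population definitions of skewness and kurtosis, and the example signal is invented:

```python
import math

def condition_indicators(signal):
    """Compute basic time-domain condition indicators of a signal."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    skewness = sum((x - mean) ** 3 for x in signal) / (n * std ** 3)
    kurtosis = sum((x - mean) ** 4 for x in signal) / (n * var ** 2)
    return {"mean": mean, "std": std, "rms": rms,
            "skewness": skewness, "kurtosis": kurtosis}

# Invented vibration-like signal: a repeating 0, 1, 0, -1 pattern.
feats = condition_indicators([0.0, 1.0, 0.0, -1.0] * 16)
print(feats["rms"])  # sqrt(0.5), about 0.707, for this pattern
```

Vectors of such indicators, computed over sliding windows, are the typical inputs to the clustering and classification steps discussed next.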
Determination of anomalous behavior may involve one or more distinctive features, which may be extracted by unsupervised learning methods such as clustering; if many features are involved, dimension reduction may be performed through, e.g., principal component analysis or another basis-reduction method. Yang et al.~\cite{yang2022causal} describe a causal correlation algorithm applied to Bayesian networks and its potential for diagnosing the causality of behaviors (e.g., faults) in large, complicated networks that are multivariate and have non-linearities. As faults are identified and accumulated over the operational history of the asset, these events may then be compared to the identified groups or clusters, thus performing the diagnostic function of linking anomalous behavior with identified faults. \paragraph{Similarity Models: When Data Encompasses Complete Run-to-Failure Histories} If the acquired data comprises complete histories of a group (fleet) of similar equipment, spanning from initial operation until failure, it lends itself to prognostication of failures using similarity models~\cite{Mikus2007,Dubrawski2011}. If one can map the current state of an asset onto specific time marks of usage trajectories observed from other assets whose actual outcomes are known, one can use the distribution of these outcomes as a reference for, e.g., remaining useful life, time to a specific type of failure, or other statistics of interest.
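A minimal nearest-neighbor sketch of this idea: compare the current asset's partial degradation trajectory against historical run-to-failure trajectories at the same age, and borrow the remaining life of the closest match. The fleet trajectories below are invented toy data:

```python
def similarity_rul(current, histories):
    """Estimate remaining useful life (RUL) by matching the current
    partial trajectory to historical run-to-failure trajectories.
    Each history is a full trajectory whose last index is the failure time.
    """
    age = len(current)
    best_rul, best_dist = None, float("inf")
    for hist in histories:
        if len(hist) <= age:
            continue  # this unit had already failed at the current age
        dist = sum((a - b) ** 2 for a, b in zip(current, hist[:age]))
        if dist < best_dist:
            best_dist, best_rul = dist, len(hist) - age
    return best_rul

# Invented wear-indicator trajectories recorded over a small fleet.
fleet = [
    [1, 2, 3, 4, 5, 6, 7, 8],  # failed after 8 cycles
    [1, 1, 2, 2, 3, 3],        # failed after 6 cycles
]
print(similarity_rul([1, 2, 3], fleet))  # matches the first unit -> RUL 5
```

Real similarity models would return the full distribution of matched outcomes rather than a single nearest neighbor, as the surrounding text describes.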
Many similarity-based modeling techniques draw inspiration from medicine, where an individual patient's health status assessment, diagnosis, and prognosis are routinely mapped against the background of many similar cases observed before, sometimes including detection of distinct phenotypes of trajectories of evolution towards failure~\cite{Chen2015}, or from public health, where the task of detecting outbreaks of infectious diseases appears similar to the task of detecting the onset of new types of failures spreading across a fleet of equipment~\cite{Dubrawski2007}. Applications of this concept vary in how the similarity is defined and how the predictions are formed, and include statistical machine learning as well as neural network based methods~\cite{adhikari2018machine,saha2019different,bektas2019neural}. \paragraph{Survival Models: When Data Encompasses Only the End Time of Failure From Similar Equipment} Sometimes only the time point of failure is known, and survival models such as Kaplan-Meier, Cox proportional hazard (CPH)~\cite{hrnjica2021survival}, or more advanced approaches using deep learning~\cite{chen2020predictive} can be utilized. Survival curves (probability of survival over time) for time-to-event analysis (in the PMx context, events are faults or failures) are generated from the failure history data. Distinct survival curves may be generated for groups of equipment that share similar covariates (similar condition indicators or properties, e.g., manufacturer or operating conditions). Based on the survival curves, RUL may be estimated~\cite{hrnjica2021survival}. \paragraph{Degradation Models: When Data Encompasses Runs to a Known Threshold that Exceeds Safety Criteria} Frequently, equipment is not run until failure, but instead until just before exceeding a safety threshold. The class of methods that reflects this concept is known as degradation models.
They can be implemented as a linear stochastic process or, if the equipment experiences cumulative degradation effects, an exponential stochastic process~\cite{thiel2021cumulative}. Degradation models typically work with a single condition indicator, although data fusion techniques can be used to combine multiple indicators of degradation. However, the role maintenance and repair may play in not only censoring failure events but also resetting or offsetting latent degradation states is a largely unexplored area of research~\cite{miller_system-level_2020}. \section{Incorporating Physics-Based Modeling and Simulations with Digital Twin Frameworks for Predictive Maintenance} So far, when discussing PMx tasks and their specific applications, the digital twins at the core of the digital twin framework have been largely statistical models of the behavior of the physical asset, tailored specifically to address the PMx task in question. However, a careful reader may already have observed that there is ample opportunity to extend the reach of the process by leveraging more complex (e.g., multi-physics, multi-scale) models of the assets being twinned. Physics-based models (PBMs) are extensively utilized in the design phase and are excellent candidates for such extensions. However, a drop-in replacement may be difficult, as digital twins and PBMs differ in several fundamental ways: \begin{enumerate} \item PBMs are often used as a design tool in the early phases of product or system development and are rarely returned to after the design, testing, and manufacturing phases are completed, except perhaps after product failure (failure analysis). Digital twins are updated and utilized through all phases, from concept, design, and testing to implementation, customer support, and end-of-product-life phases. Ensuring that PBMs can be easily updated throughout the life cycle requires some adjustments.
\item Traditional PBMs are often built only for specifically assumed operating conditions of the product, or the conditions are learned from limited bench-top lab testing or environmental data. The digital twin modeling framework relies on data generated from real-world applications of products; simulations and analyses performed with digital twins are enabled by data from the fully assembled, working product or system operating in its real-world environment. Thus, continuously calibrating/updating PBMs to enable this integration can pose a challenge. \item Traditional PBMs are typically highly detailed representations of small parts or subsystems of an overall assemblage of parts or a system-of-systems. Digital twin analyses take advantage of data being collected from the working assembled system to predict faults and failures that occur due to the complex interactions of systems of parts and their environment. PBMs typically require orders of magnitude more computation to produce their predictions than the data-driven models typically used in PMx. \end{enumerate} It would be unfair, however, to characterize the relationship between traditional PBMs and digital twin models as adversarial. Due to the benefits they provide, many of the design-phase PBMs already form the initial record of the digital twin, or of the individual parts that comprise a digital twin of a system or asset. We now expand on the specific limitations of using PBMs in the digital twin modeling framework. \paragraph{Limitations of Multi-Scale, Multi-Physics Models for the Digital Twin Framework} A common refrain from digital twin skeptics may be that a realistic digital twin can never fully simulate a physical system or asset; however, all models, including physics-based models (e.g., of stress/strain analysis, computational fluid dynamics, electromagnetic scattering, and other complex processes), are based on assumptions and simplifications in an attempt to explain complex physical interactions.
Yet, despite the simplifications, computational models have driven much of engineering progress over the past half century. PBMs often achieve accurate solutions by discretization: splitting volumes or areas into small regular elements and nodes that populate a model space governed by differential equations. Typically, these are second-order (or higher), non-linear, partial differential equations, which have few, if any, general solutions or solution techniques. If the model space has an irregular geometry as well, a discretized model based on finite or boundary elements may be the only option for a reasonably accurate solution. Unfortunately, the accuracy of such models comes at the cost of extensive computation time. Cerrone et al.~\cite{cerrone2014effects} estimated that a single simulation run of crack propagation in a structural plate with several notches and holes had: \begin{quote} approximately 5.5 million degrees of freedom. Simulations were conducted on a 3.40 GHz, 4th generation Intel Core i7 processor. Abaqus/Explicit’s shared memory parallelization on four threads with a targeted time increment of $1\cdot10^{-6}$ seconds resulted in approximately a 4-day wall-clock run time~\cite{cerrone2014effects}. \end{quote} Of course, such a runtime is far from the promise of digital twins providing near real-time decision making from a constant stream of sensor data. To exacerbate matters, modern automobiles have tens of thousands of parts and large commercial aircraft may have millions of parts~\cite{airbus_a380_facts}. In addition, engineers frequently desire more than one type of {\em what if} simulation and would prefer to run a great number of varied simulations to explore variables and effects. These complicating factors would seem to put the concept of digital twins based entirely on physics models out of practical and economic reach~\cite{west2017digital,tuegel2012airframe}, or at least push its utility decades into the future.
However, there are several novel forms of analysis that seek to hybridize physics based models with purely data-driven techniques, such as machine learning, which may result in more manageable computational costs of PBMs and automated learning of their parameters. \paragraph{Combining Data-Driven Machine Learning Methods and Physics-Based Models to Address Each Other's Shortcomings} While a digital twin definition often includes the concept of physics based models, there are well known limitations of such models as outlined above. The term {\em model} may be generalized to include trends and patterns directly learned from data. For example, there is the related terminology of implicit digital twins (IDT) from Xiong et al.~\cite{xiong2021digital}: \begin{quote} However, the traditional DT method requires a definite physical model. The structure of the aero-engine system is complex, and the use of a physical-based model to implement a DT requires the establishment of its own model for each component unit, which complicates predictive maintenance and increases costs, let alone achieve accurate maintenance. To circumvent this limitation, this paper uses data-driven and deep learning technology to develop DT from sensor data and historical operation data of equipment and realizes reliable simulation data mapping through intelligent sensing and data mining (called implicit digital twin; IDT). By properly mapping the simulation data of aero-engine cluster to a certain parameter, combined with the deep learning method, various scenarios’ remaining useful life (RUL) can be predicted by adjusting the parameters.~\cite{xiong2021digital} \end{quote} \noindent In other words, a physical model may not be needed; an IDT model can simply be created through data-driven analysis (i.e., machine learning). 
Nevertheless, ML approaches have a few relevant limitations as well: \begin{enumerate} \item Scientific and engineering problems are often underconstrained (i.e., a large number of variables, a small number of samples), making learning reliable ML models from the corresponding data difficult. \item Catastrophic failures are naturally infrequent, and so they may be seldom, if ever, encountered in the recorded data. This issue can sometimes be alleviated using Bayesian forms of ML that allow incorporation of prior probability distributions to account for events that are not represented in the data. \item It can be easy for cross-validation methods to misevaluate spurious relationships learned by data-driven frameworks, as these relationships can look deceptively good on training as well as test sets. \item Some ML methods, such as deep neural networks, are ``black boxes'' that provide few interpretable insights into the resulting models, and as such they may fail to convince their users that the obtained solutions are sufficiently systematic to be applicable to future, similar problems. \item Data used for training ML models is most often just a limited projection of the reality it is expected to capture. It is then easy for even advanced ML models to fail to follow the common sense that comes naturally to human domain experts, if important nuances of the underlying knowledge are not reflected in the training data. \item The large amounts of labeled training data often required to produce reliable data-driven models can be expensive to acquire and/or time-consuming to create. \end{enumerate} Some recently developed frameworks aim to combine the interpretability of PBMs with the data-driven analytical power of digital twins and machine learning.
Combining PBMs and ML seeks to overcome the long runtimes of highly detailed and complex physical models on one hand, and the lack of interpretability and the large volumes of labeled training data required on the other~\cite{kapteyn2020toward}. Hybrid ML and physics-based solutions are likely to excel at solving analytic problems, such as planning maintenance of complex assets (e.g., vehicles, aircraft, buildings, manufacturing plants), where some data (e.g., small samples of rare occurrences; noisy and spatially or temporally sparse sensor recordings) and some knowledge of physics (e.g., missing boundary conditions, occurrence of physical interactions) exist, but neither alone is likely to contain enough information to solve complex diagnostics or prognostics problems with accuracy or precision sufficient for practical application. \paragraph{Hybrid Digital Twin Frameworks Combine Machine Learning and Physics-Based Modeling} There are several approaches to combining PBMs and ML, which go by several different names, such as {\em physics informed machine learning} (PIML), {\em theory guided data science} (TGDS), and {\em scientific machine learning} (SciML); all of them incorporate physical constraints and known parameter relationships (e.g., an ODE or PDE, physical and material properties and relationships). Three main strategies have been identified so far: physics informed neural networks, reduced order modeling, and the generation of simulated data to supplement small data sets. The work of adjusting PBMs to work well with ML (and in particular for PMx tasks) has so far been limited to very simple components and systems. The R\&D community has yet to seriously consider scaling hybrid approaches to handle complex systems with multiple components requiring true multi-physics and multi-scale models.
Work is needed both on figuring out how to automatically integrate multiple PBM models together, and on how to automatically select the right hybridization strategy to make the resulting solutions run sufficiently fast without compromising the fidelity of the resulting models. \paragraph{Physics Informed Machine Learning (PIML) and Physics Informed Neural Networks (PINNs)} In 2019, Raissi et al.~\cite{raissi2019physics} proposed a deep learning framework that combines mathematical models and data by taking advantage of prior techniques for using neural networks as differentiation engines and differential equation solvers (e.g., Neural ODE~\cite{chen2018neural}). The main idea is that the physical relations, and the equations modeling them, are leveraged to formulate a loss function that the ML algorithm minimizes, thereby penalizing violations of the principles of physics. Specifically, the loss function can combine the usual data-driven component based on observed residuals with physics-driven terms reflecting errors in the solutions of the governing ODE or PDE, and terms reflecting violations of any boundary or initial conditions. Raissi et al.~\cite{raissi2019physics} provide several examples of solved dynamic as well as boundary value problems, including Schr\"odinger's, the Navier-Stokes, and Burgers' equations. A similar approach by Jia et al.~\cite{jia2019physics}, under the name of {\em physics guided neural networks} (PGNN), was used to determine the temperature distribution along the depth of a lake using both physical relations and sensor data. \paragraph{Reduced Order Modeling} Another method is to reduce the order, size, number of degrees of freedom (DoF), or dimensionality of the PBMs. This approach is often called {\em reduced order modeling}, {\em projection based modeling}, or {\em lift and learn}~\cite{swischuk2019projection,kapteyn2020toward}.
Here, training data is generated by the PBMs, but only a few snapshots, e.g., a few individually solved time instances, are used to reduce the computational power needed, instead of solving over the complete time domain. Then, a lower-dimensional basis is computed and the higher-order PDE model is projected onto the lower-dimensional space. Hartmann et al.~\cite{hartmann202012,hartmann2018model} give an excellent review of reduced order modeling and its role as a digital twin enabling technology, drastically reducing model complexity and computation time while maintaining high fidelity of solutions. The reduced order model solutions can be arrived at rapidly for various boundary and/or initial conditions and are frequently used as a simulation database on which ML algorithms may be trained to classify damage states or learn a regression to a continuous degradation model; the trained models can then be fed distributed sensor data from real-world assets in the field to perform equipment health monitoring tasks~\cite{bigoni2022predictive,kapteyn2020toward,hartmann2018model,droz2021multi,leser2020digital,taddei2018simulation,rosafalco2020fully}. \section{Challenges of Digital Twin Implementation} \paragraph{Sensor Robustness, Missing Data, Poor Quality Data, and Offline Sensors} DT frameworks require live data, which is often generated by a dense array of sensors. Inevitably, one or more sensors will disconnect, produce periodic noise or artifacts, or, ironically, require maintenance themselves. First, the DT models need to be able to detect and handle sensor signal dropout; if this is not accounted for in the model algorithms, faults may go unnoticed or be misdiagnosed. One way of dealing with data interruptions is to use {\em circuit breakers} attached to the sensors that trip when the signals go out of range~\cite{preuveneers2018robust}. Another approach is to ensure adequate signal processing to remove and filter out unwanted noise and artifacts.
\paragraph{Workplace Adoption of the DT Framework} Successful DT frameworks require a team-wide commitment to data quality. Errors introduced through improper data entry or inadvertent part swaps will propagate throughout the DT framework. Improving the user interfaces of data entry menus, as well as seeking out and requesting feedback from team members, demonstrates concern for the daily user. Finally, sharing the goals, the rewards, and the output of the models with team members also helps reinforce positive feedback in the workplace. \paragraph{Security Protocols} It is estimated that the majority of network-connected digital twins will utilize at least five different kinds of integration endpoints, and each endpoint represents a potential area of security vulnerability~\cite{mullet2021review}. Therefore, it is highly recommended that best practices be implemented wherever possible, including, but not limited to: end-to-end data encryption; restricted use of portable media, such as portable hard drives, on the network; regular data backups (to offline locations if possible); automated installation of system software patches and upgrades; password-protected programmable logic controllers; and managed user authentication and controlled access to digital twin assets~\cite{mullet2021review}. \section{Summary} This manuscript attempts to provide clarity on defining the {\em digital twin} by exploring the history of the term and its initial context in the fields of product life cycle management, asset maintenance, and equipment fleet management, operations, and planning. A definition of a minimally viable digital twin framework is also provided, based on seven essential elements. A brief tour through DT applications and industries where DT methods are employed is also provided.
Thereafter, the paper highlights the application of a digital twin framework in the field of predictive maintenance, and its extensions utilizing machine learning and physics-based modeling. The authors submit that combining machine learning and physics-based modeling into hybrid digital twin frameworks may synergistically alleviate the shortcomings of each method when used in isolation. Finally, the paper summarizes the key challenges of implementing digital twin models in practice. A few evident limitations notwithstanding, digital twin technology is growing rapidly, and as it matures we expect its great promise to materialize and substantially enhance tools and solutions for the intelligent upkeep of complex equipment. \paragraph{Acknowledgement:} This work was partially supported by the U.S.\ Army Contracting Command under Contracts W911NF20D0002 and W911NF22F0014 delivery order No.\ 4, and by a Space Technology Research Institutes grant from NASA’s Space Technology Research Grants Program. \end{document}
\begin{document} \begin{abstract} Let $\A \in \Reals^{n \times n}$ be a nonnegative irreducible square matrix and let $r(\A)$ be its spectral radius and Perron-Frobenius eigenvalue. Levinger asserted and several have proven that $r(t):=\spr((1{-}t) \A + t \A\tr)$ increases over $t \in [0,1/2]$ and decreases over $t \in [1/2,1]$. It has further been stated that $r(t)$ is concave over $t \in (0,1)$. Here we show that the latter claim is false in general through a number of counterexamples, but prove it is true for $\A \in \Reals^{2\times 2}$, weighted shift matrices (but not cyclic weighted shift matrices), tridiagonal Toeplitz matrices, and the 3-parameter Toeplitz matrices from Fiedler, but not Toeplitz matrices in general. A general characterization of the range of $t$, or the class of matrices, for which the spectral radius is concave in Levinger's homotopy remains an open problem. \end{abstract} \title{ Nonconcavity of the Spectral Radius in Levinger's Theorem} \pagestyle{myheadings} \markboth{L. Altenberg \& J. E. Cohen}{Nonconcavity of the Spectral Radius in Levinger's Theorem} \centerline{\emph{Dedicated to the memory of Bernard Werner Levinger (1928--2020)}} \ \\ \noindent Keywords: circuit matrix, convexity, direct sum, homotopy, nonuniform convergence, skew symmetric \\ MSC2010: 15A18, 15A42, 15B05, 15B48, 15B57 \section{Introduction} The variation of the spectrum of a linear operator as a function of variation in the operator has been extensively studied, but even in basic situations like a linear homotopy $(1{-}t) \X + t \Y$ between two matrices $\X, \Y$, the variational properties of the spectrum have not been fully characterized. We focus here on Levinger's theorem about the spectral radius over the convex combinations of a nonnegative matrix and its transpose, $(1{-}t) \A + t \A\tr$.
We refer to $\B(t) = (1{-}t) \A + t \A\tr$, $t \in [0,1]$, as \emph{Levinger's homotopy},\footnote{Also called Levinger's transformation \cite{Psarrakos:and:Tsatsomeros:2003:Perron}.} and the spectral radius of Levinger's homotopy as \emph{Levinger's function} $r(t) \eqdef \spr(\B(t)) = \spr((1{-}t) \A + t \A\tr)$. On November 6, 1969, the \emph{Notices of the American Mathematical Society} received a three-line abstract from Bernard W. Levinger for his talk at the upcoming AMS meeting, entitled ``An inequality for nonnegative matrices.''\cite{Levinger:1970:Inequality} We reproduce it in full: ``\underline{Theorem.} Let $A \ge 0$ be a matrix with nonnegative components. Then $f(t) = p(tA + (1{-}t)A^T)$ is a monotone nondecreasing function of $t$, for $0 \le t \le 1/2$, where $p(C)$ denotes the spectral radius of the matrix $C$. This extends a theorem of Ostrowski. The case of constant $f(t)$ is discussed.'' Levinger presented his talk at the Annual Meeting of the American Mathematical Society at San Antonio in January 1970. Miroslav Fiedler and Ivo Marek were also at the meeting \cite{Marek:1974:Inequality}. Fiedler developed an alternative proof of Levinger's theorem and communicated it to Marek \cite{Marek:1978:Perron}. Fiedler did not publish his proof until 1995 \cite{Fiedler:1995:Numerical}. Levinger appears never to have published his proof. Marek \cite{Marek:1978:Perron,Marek:1984:Perron} published the first proofs of Levinger's theorem, building on Fiedler's ideas to generalize it to operators on Banach spaces. Bapat \cite{Bapat:1987:Two} proved a generalization of Levinger's theorem for finite matrices. He showed that a necessary and sufficient condition for non-constant Levinger's function is that $\A$ have different left and right normalized (unit) eigenvectors (\emph{Perron vectors}) corresponding to the Perron-Frobenius eigenvalue (\emph{Perron root}). 
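These statements are easy to explore numerically. The following minimal Python/NumPy sketch (illustrative only, and not part of the formal development; the matrix chosen is an arbitrary example with different left and right Perron vectors) evaluates Levinger's function on a grid and confirms the monotone rise on $[0,1/2]$ asserted by Levinger's theorem, together with the symmetry $r(t) = r(1{-}t)$ that follows from $\B(1{-}t) = \B(t)\tr$:

```python
import numpy as np

def levinger(A, t):
    """Spectral radius of Levinger's homotopy B(t) = (1-t) A + t A^T."""
    B = (1 - t) * A + t * A.T
    return max(abs(np.linalg.eigvals(B)))

# A nonnegative irreducible example with b != c, so by Bapat's criterion
# the left and right Perron vectors differ and r(t) is non-constant.
A = np.array([[1.0, 2.0],
              [0.5, 1.0]])

ts = np.linspace(0.0, 0.5, 21)
rs = [levinger(A, t) for t in ts]

# Levinger's theorem: r(t) is nondecreasing on [0, 1/2] ...
assert all(r2 >= r1 - 1e-12 for r1, r2 in zip(rs, rs[1:]))
# ... and symmetric about t = 1/2, since spr(B(1-t)) = spr(B(t)^T) = spr(B(t)).
assert abs(levinger(A, 0.2) - levinger(A, 0.8)) < 1e-12
```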
Fiedler \cite{Fiedler:1995:Numerical} proved also that Levinger's function $r(t)$ is concave in some open neighborhood of $t=1/2$, and strictly concave when $\A$ has different left and right normalized Perron vectors. The extent of this open neighborhood was not elucidated. Bapat and Raghavan \cite[p.~121]{Bapat:and:Raghavan:1997} addressed the concavity of Levinger's function in discussing ``an inequality due to Levinger, which essentially says that for any $\A \ge 0$, the Perron root, considered as a function along the line segment joining $\A$ and $\A\tr$, is concave.'' The inference about concavity would appear to derive from the theorem of \cite[Theorem 3]{Bapat:1987:Two} that $\spr(t \, \A + (1{-}t)\B\tr) \geq t\, r(\A) + (1{-}t)\, r(\B)$ for all $t \in [0, 1]$, when $\A$ and $\B$ have a common left Perron vector and a common right Perron vector. The same concavity conclusion with the same argument appears in \cite[Corollary 1.17]{Stanczak:Wiczanowski:and:Boche:2009:Fundamentals}. However, concavity over the interval $t \in [0, 1]$ would require that for all $t, h_1, h_2 \in [0, 1]$, $r(t \, \F(h_1) + (1{-}t) \F(h_2) ) \geq t\, r(\F(h_1)) + (1{-}t) r(\F(h_2))$, where $\F(h) \eqdef h \, \A + (1{-} h)\B\tr$. While Theorem 3.3.1 of \cite{Bapat:and:Raghavan:1997} proves this for $h_1 = 1$ and $h_2 = 0$, it cannot be extended generally to $h_1, h_2 \in (0,1)$ because $\F(h_1)$ and $\F(h_2)\tr$ will not necessarily have common left eigenvectors and common right eigenvectors. Here, we show that the concavity claim is true for $2 \times 2$ and other special families of matrices. We also show that for each of these matrix families, counterexamples to concavity arise among matrix classes that are ``close'' to them, in having extra or altered parameters. Table \ref{Table:Comparison} summarizes our results. 
\begin{table}[ht] \caption{Classes of nonnegative matrices with concave Levinger's function (left), and matrix classes ``close'' to them with nonconcave Levinger's function (right).} \label{Table:Comparison} {\small \begin{tabular}{|lr|lr|} \hline {\bf Concave} & &{\bf Nonconcave} &\\ \hline $2 \times 2$ &\!\!\!\!Theorem \ref{Theorem:2x2}& $3 \times 3$, $4 \times 4$ &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!Eqs.\ \eqref{eq:Ex1}, \eqref{eq:4x4} \\ Tridiagonal Toeplitz &\!\!\!\!Theorem \ref{Theorem:Tridiag} & 4-parameter Toeplitz &Eq.\ \eqref{eq:ToeplitzConvex}\\ Fiedler's 3-parameter Toeplitz &\!\!\!\!Theorem \ref{Theorem:FiedlerLevinger} & 4-parameter Toeplitz &Eq.\ \eqref{eq:ToeplitzConvex}\\ $n \times n$ weighted shift matrix &\!\!\!\!Theorem \ref{Theorem:Shift} & $n \times n$ cyclic weighted shift matrix &Eq.\ \eqref{eq:CyclicShift16} \\ \hline \end{tabular} } \end{table} \section{Matrices that Violate Concavity} \subsection{A Simple Example} Let \an{ \A &= \Pmatr{ 0&1&0\\ 0&0&0\\ 0&0&2/5 }\notag \stext{to give} \B(t) &= (1{-}t)\A+t\A\tr = \Pmatr{ 0&1{-}t&0\\ t&0&0\\ 0&0&2/5}. \label{eq:Ex1} } The eigenvalues of $\B(t)$ are $\{2/5, +\sqrt{t(1{-}t)}, -\sqrt{t(1{-}t)}\}$, plotted in Figure \ref{fig:Ex1}. On the interval $t \in [1/5, 4/5]$, $\spr(\B(t)) = \sqrt{t(1{-}t)}$ is strictly concave. On the intervals $t \in [0, 1/5]$ and $t \in [4/5,1]$, $\spr(\B(t))$ is constant. It is clear from the figure that $\spr(\B(t))$ is not concave in the neighborhood of $t=1/5$ (and $t=4/5$), since for all small $\epsilon > 0$, \an{\label{eq:Ineq1} \frac{1}{2}\left[ \spr(\B(1/5 - \epsilon)) + \spr(\B(1/5 + \epsilon)) \right] > \spr(\B(1/5)) = 2/5. } By the continuity of the eigenvalues in the matrix elements \cite[2.4.9]{Horn:and:Johnson:2013}, we can make $\B(t)$ irreducible and yet preserve inequality \eqref{eq:Ineq1} in a neighborhood of $t=1/5$ by adding a small enough positive perturbation to each element of $\A$.
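The crossover at $t=1/5$ can be verified numerically. A short Python/NumPy sketch (illustrative only) of the example \eqref{eq:Ex1}:

```python
import numpy as np

def spr(M):
    return max(abs(np.linalg.eigvals(M)))

# The block-diagonal example of Eq. (eq:Ex1): eigenvalues 2/5 and +/- sqrt(t(1-t)).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.4]])

def r(t):
    return spr((1 - t) * A + t * A.T)

# On [0, 1/5] the constant block dominates; past 1/5 the sqrt(t(1-t)) branch takes over.
assert abs(r(0.10) - 0.4) < 1e-12
assert abs(r(0.25) - np.sqrt(0.25 * 0.75)) < 1e-12

# Midpoint concavity fails at the crossover t = 1/5:
eps = 0.05
assert 0.5 * (r(0.2 - eps) + r(0.2 + eps)) > r(0.2) + 1e-6
```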
\begin{figure} \caption{Eigenvalues of the matrix $\B(t)$ from \eqref{eq:Ex1}.} \label{fig:Ex1} \end{figure} The basic principle behind this counterexample is that the maximum of two concave functions need not be concave. Here $\B(t)$ is the direct sum of two block matrices. The eigenvalues of the direct sum are the union of the eigenvalues of the blocks, which are different functions of $t$. One block has a constant spectral radius and the other block has a strictly concave spectral radius. The spectral radius of $\B(t)$ is their maximum. Another example of this principle is constructed by taking the direct sum of two $2 \times 2$ blocks, each of which is a Levinger homotopy of the matrix $\Pmatr{0&1\\0&0}$, but for values of $t$ at opposite ends of the {unit} interval, one $2\times 2$ block, $\A_1$, with $t_1 = 511/512$ and the other $2\times 2$ block, $\A_2$, with $t_2 =1/8$. We take a weighted combination of the two blocks with weight $h$, $\A(h) = (1-h) \A_1 \oplus h \A_2$, to get: \an{ \A(h) &= \Pmatr{ 0 & (1{-}h) \frac{511}{512} & 0 & 0 \\ (1{-}h) \frac{1}{512} & 0 & 0 & 0 \\ 0 & 0 & 0 & h \frac{1}{8} \\ 0 & 0 & h\frac{7 }{8} & 0 } . \label{eq:4x4} } The eigenvalues of $\B(t,h) = (1{-}t) \A(h) + t \A(h)\tr$ are plotted in Figure \ref{fig:4x4}. We see that there is a narrow region of $h$ below $h=0.5$ where the maximum eigenvalue switches from block 2 to block 1 and back to block 2 with increasing $t \in [0,1]$, making $\spr(\B(t,h)) = \spr((1{-}t) \A(h) + t \A(h)\tr)$ at $h=0.4$ nonconcave with respect to the interval $t \in [0,1]$. \begin{figure} \caption{Eigenvalues of $\B(t,h)$ for a two-parameter homotopy: Levinger's homotopy $\B(t,h) = (1{-}t) \A(h) + t \A(h)\tr$ applied to the weighted direct sum $\A(h)$ from \eqref{eq:4x4}.} \label{fig:4x4} \end{figure} As in example \eqref{eq:Ex1}, $\A(h)$ may be made irreducible by positive perturbation of the $0$ values without eliminating the nonconcavity. The principle here may be codified as follows.
\begin{Proposition} Let $\A = \A_1 \oplus \A_2 \in \Reals^{n \times n}$, where $\A_1$ and $\A_2$ are irreducible nonnegative square matrices. {Then} $\spr(t) \eqdef \spr((1-t)\A+ t \A\tr)$ is not concave in $t \in (0,1)$ if there exists $t^* \in (0,1)$ such that \begin{enumerate} \item $\spr((1-t^*)\A_1+ t^* \A_1\tr) = \spr((1-t^*)\A_2+ t^* \A_2\tr)$,\\ \ \\ {and} \ \\ \item $ \left. \df{}{t} \spr((1-t)\A_1+ t \A_1\tr)\right|_{t=t^*} \neq \left. \df{}{t} \spr((1-t)\A_2+ t \A_2\tr)\right|_{t=t^*} $. \end{enumerate} \end{Proposition} \begin{proof} Let $r^* \eqdef r(t^*) = \spr((1-t^*)\A_1+ t^* \A_1\tr) = \spr((1-t^*)\A_2+ t^* \A_2\tr)$. Since the spectral radius of a nonnegative irreducible matrix is a simple eigenvalue by Perron-Frobenius theory, it is analytic in the matrix elements \cite[Fact 1.2]{Tsing:etal:1994:Analyticity}. Thus for each of $\A_1$ and $\A_2$, Levinger's function is analytic in $t$, and therefore has equal left and right derivatives around $t^*$. So we can set $s_1 = \dfinline{\spr((1-t)\A_1 + t \A_1\tr)}{t}|_{t=t^*}$ and $s_2 = \dfinline {\spr((1{-}t)\A_2+ t \A_2\tr)}{t}|_{t=t^*}$. Then \ab{ \spr((1{-}t^* {-} \ep)\A_1+ (t^* {+} \ep) \A_1\tr) &= r^* + \ep s_1 + \Order(\ep^2), \\ \spr((1{-}t^* {-} \ep)\A_2+ (t^* {+} \ep) \A_2\tr) &= r^* + \ep s_2 + \Order(\ep^2). } For a small neighborhood around $t^*$, \ab{ \spr(t^*{+}\ep) &= \spr( (1{-}t^* {-} \ep )\A+ (t^* {+} \ep) \A\tr) \\& = \max\set{\spr((1{-}t^* {-} \ep)\A_1+ (t^* {+} \ep) \A_1\tr), \spr((1{-}t^* {-} \ep)\A_2+ (t^* {+} \ep) \A_2\tr)} \\& = r^* + \Cases{ \ep \min(s_1, s_2) + \Order(\ep^2), &\qquad \ep < 0, \\ \ep \max(s_1, s_2) + \Order(\ep^2), &\qquad \ep > 0 . } } A necessary condition for concavity is $ \frac{1}{2}(\spr(t^*{+} \ep) + \spr(t^*{-}\ep)) \leq \spr(t^*). 
$ However, for small enough $\ep > 0$, letting $\delta = \max(s_1, s_2) - \min(s_1, s_2) > 0$, \ab{ \frac{\spr(t^*{+} \ep) + \spr(t^*{-}\ep)}{2} &= r^* + \ep \frac{\max(s_1, s_2) - \min(s_1, s_2) }{2} + \Order(\ep^2) \\ &= r^* + \ep \delta / {2} + \Order(\ep^2) > r^* . } The condition for concavity is thus violated. \end{proof} \subsection{Toeplitz Matrices} The following nonnegative irreducible Toeplitz matrix has a nonconcave Levinger's function: \an{\label{eq:ToeplitzConvex} \A&= \Pmatr{ 5 & 0 & 6 & 0 \\ 1 & 5 & 0 & 6 \\ 0 & 1 & 5 & 0 \\ 8 & 0 & 1 & 5 } . } A plot of Levinger's function for \eqref{eq:ToeplitzConvex} does not make the nonconcavity unmistakable, so instead we plot the second derivative of $\spr(\B(t))$ in Figure \ref{fig:ToeplitzConvex}, which is positive at the boundaries $t=0$ and $t=1$, and becomes negative in the interior. \begin{figure} \caption{The second derivative of Levinger's function for the Toeplitz matrix \eqref{eq:ToeplitzConvex}.} \label{fig:ToeplitzConvex} \end{figure} \subsection{Weighted Circuit Matrices} Another class of matrices where Levinger's function can be nonconcave is the weighted circuit matrix. A weighted circuit matrix is an $n \times n$ matrix in which there are $k$ distinct integers ($1 \leq k \leq n$) $i_1, i_2, \ldots, i_k \in \{1, 2, \ldots, n\}$ such that all elements are zero except weights $c_j$, $j = 1, \ldots, k$, at matrix positions $(i_1, i_2), (i_2, i_3), \ldots, (i_{k-1}, i_k), (i_k, i_1)$, which form a circuit. We refer to a \emph{positive weighted circuit matrix} when the weights are all positive numbers.
When focusing on the spectral radius of a positive weighted circuit matrix, we may without loss of generality consider its non-zero principal submatrix, whose canonical permutation of the indices gives a \emph{positive cyclic weighted shift matrix}, $\A$, with elements \an{ A_{ij} &= \Cases{ c_i > 0, &\qquad j = i \text{ mod } n + 1, \quad i \in \set{1, \ldots, n}, \\ 0, & \qquad\text{otherwise.} }\label{eq:CyclicShift} } Equation \eqref{eq:CyclicShift} defines a cyclic \emph{downshift} matrix, while an \emph{upshift} matrix results from replacing $j = i \text{ mod } n + 1$ with $i = j \text{ mod } n + 1$, which is equivalent for our purposes. Cyclic weighted shift matrices have the form \ab{ \Pmatr{ 0 & c_1 & 0 & 0 \\ 0 & 0 & c_2 & 0 \\ 0 & 0 & 0 & c_3 \\ c_4 & 0 & 0 & 0 \\ } . } If one of the weights $c_i$ is set to $0$, the matrix becomes a positive non-cyclic weighted shift matrix. In Section \ref{sec:WSM}, we show that Levinger's function of a positive non-cyclic weighted shift matrix is strictly concave. Cyclicity from a single additional positive element $c_i>0$ allows nonconcavity. Here we provide an example of nonconcavity using a cyclic shift matrix with \emph{reversible weights}, which have been the subject of recent attention \cite{Chien:and:Nakazato:2020:Symmetry}. Figure \ref{fig:CyclicShift16} shows Levinger's function for a $16 \times 16$ cyclic weighted shift matrix with two-pivot reversible weights \an{ c_j &= 16 + \sin\left(2 \pi \frac{j}{16}\right), &\quad j = 1, \ldots, 16 . \label{eq:CyclicShift16} } Levinger's function is convex for most of the interval $t \in [0,1]$, and is concave only in the small interval around $t=1/2$. 
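A Python/NumPy sketch (illustrative only) that constructs this $16 \times 16$ cyclic weighted shift matrix and evaluates Levinger's function follows; only properties guaranteed by Levinger's theorem are asserted, since the convexity near the boundaries is a numerical observation from the figure:

```python
import numpy as np

n = 16
# Weights c_j = 16 + sin(2 pi j / 16), j = 1, ..., 16, as in Eq. (eq:CyclicShift16).
c = 16 + np.sin(2 * np.pi * np.arange(1, n + 1) / n)

# Cyclic downshift matrix: A_{i, i mod n + 1} = c_i (0-based: A[i, (i+1) % n] = c[i]).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = c[i]

def r(t):
    B = (1 - t) * A + t * A.T
    return max(abs(np.linalg.eigvals(B)))

# Guaranteed properties: symmetry about t = 1/2, and the Levinger rise r(1/2) >= r(0).
assert abs(r(0.3) - r(0.7)) < 1e-8
assert r(0.5) >= r(0.0) - 1e-8
```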
\begin{figure} \caption{Nonconcave Levinger's function for a $16 \times 16$ two-pivot reversible cyclic weighted shift matrix with weights $c_j = 16 + \sin(2 \pi j /16)$, Eq.\ \eqref{eq:CyclicShift16}.} \label{fig:CyclicShift16} \end{figure} \section{Matrices with Concave Levinger's Function} Here we show that several special classes of nonnegative matrices have concave Levinger's functions: $2\times 2$ matrices, non-cyclic weighted shift matrices, tridiagonal Toeplitz matrices, and Fiedler's 3-parameter Toeplitz matrices. \subsection{\texorpdfstring{$2 \times 2$}{2 x 2}\ Matrices} \begin{Theorem}\label{Theorem:2x2} Let $\A \in \Reals^{2 \times 2}$ be nonnegative and irreducible. Then the spectral radius and Perron-Frobenius eigenvalue $r(t):=r((1{-}t) \A + t \A\tr)$ is concave over $t \in (0,1)$, strictly when $\A$ has different left and right Perron-Frobenius eigenvectors. \end{Theorem} \begin{proof} Let $a, d \in [0,\infty)$, $b, c \in (0,\infty)$, $t\in (0,1)$, where $b, c > 0$ follows from irreducibility, and assume $b\ne c$ to assure that $\A\ne\A\tr$ and the left and right Perron-Frobenius eigenvectors are not colinear. Let $$ \A:=\Pmatr{ a & b\\ c & d }, \quad \B(t):=(1{-}t) \A+t \A\tr.$$ The Perron-Frobenius eigenvalue of $\B(t)$ is obtained by using the quadratic formula to solve the characteristic equation. After some simplification, $$ r(t):= r(\B(t)) = \frac{a+d+\sqrt{(a-d)^2 + 4t(1{-}t)(b-c)^2 + 4bc }}{2}. $$ The first derivative with respect to $t$ is $$r'(t)=\frac{{\left(1{-}2\,t\right)}\,{{\left(b-c\right)}}^2 }{\sqrt{(a-d)^2 + 4t(1{-}t)(b-c)^2 + 4bc}}. $$ The denominator above is positive for all $t\in(0,1)$ because of the assumption that $b\ne c$. The second derivative is, again after some simplification, \an{\label{eq:f''(t)} r''(t)=-\frac{2\,(b-c)^2 \,\left((a-d)^2+(b+c)^2\right)}{{{\left((a-d)^2 + 4t(1{-}t)(b-c)^2 + 4bc\right)}}^{3/2} }<0. } The numerator in the fraction above is positive because $b\ne c$, and the minus sign in front of the fraction guarantees strict concavity for all $t\in(0,1)$.
\end{proof} \subsection{Tridiagonal Toeplitz Matrices} \begin{Theorem}[Tridiagonal Toeplitz Matrices]\label{Theorem:Tridiag} Let $\A \in \Reals^{n \times n}$, $n \geq 2$, be a tridiagonal Toeplitz matrix with diagonal elements $b\geq 0$, subdiagonal elements $a \geq 0$, and superdiagonal elements $c \geq 0$, with $\max(a, \ c) > 0$. Then for $t \in (0, 1)$, $\spr( (1{-}t) \A + t \A\tr )$ is concave in $t$, increasing on $t \in (0,1/2)$, and decreasing on $t \in (1/2,1)$, all strictly when $a \neq c$. \end{Theorem} \begin{proof} The eigenvalues of a tridiagonal Toeplitz matrix $\A$ with $a, c \neq 0$ are \cite[22-5.18]{Hogben:2014:Handbook} \cite[Theorem 2.4]{Bottcher:and:Grudsky:2005:Spectral} \an{ \lambda_k(\A) &= b + 2 \sqrt{a c}\ \cos\left(\frac{k \pi}{n{+}1} \right). \label{eq:BG2005} } The matrix $(1{-}t) \A + t \A\tr$ has subdiagonal values $(1{-}t) a + t c$ and superdiagonal values $t a + (1{-}t) c$. Since at least one of $a,c$ is strictly positive, $(1{-}t) a + t c > 0$ and $t a + (1{-}t) c > 0$ for $t \in (0,1)$. Therefore \eqref{eq:BG2005} is applicable. Writing $\lambda_k(t) \eqdef \lambda_k( (1{-}t) \A + t \A\tr)$, we obtain \ab{ \lambda_k(t) &= b + 2 \sqrt{((1{-}t) a + t c) (t a + (1{-}t) c)}\ \cos\left(\frac{k \pi}{n{+}1} \right). } It is readily verified that the first derivatives are \ab{ \df{}{t}\lambda_k(t) &= \cos\left(\frac{k \pi}{n{+}1} \right) \frac{(a-c)^2 (1{-}2t)}{\sqrt{((1{-}t) a + t c) (t a + (1{-}t) c)}}, \stext{and the second derivatives are} \ddf{}{t}\lambda_k(t) &= - \cos\left(\frac{k \pi}{n{+}1} \right) \frac{(a^2-c^2)^2}{2 \big[ ( (1{-}t) a + t c) (t a + (1{-}t) c )\big]^{3/2} } . } Since $(1{-}t) a + t c > 0$ and $t a + (1{-}t) c > 0$ for $t \in (0,1)$, the denominators are positive. When $a=c$ both derivatives are identically zero. When $a \neq c$, the factors not dependent on $k$ are strictly positive for all $t \in (0,1)$ except for $t=1/2$ where the first derivative of all the eigenvalues vanishes. 
Because the second derivatives have no sign changes on $t \in (0,1)$, and since $[ (1{-}t) a + t c][t a + (1{-}t) c ] > 0$, there are no inflection points. Therefore each eigenvalue is either convex in $t$ or concave in $t$, depending on the sign of $\cos( k \pi/(n+1) )$. The maximal eigenvalue is \ab{ r(t) = \lambda_1(t) = b + 2 \sqrt{((1{-}t) a + t c) (t a + (1{-}t) c)}\ \cos(\pi/(n{+}1)) . } From its first derivative, since $ \cos(\pi/(n{+}1)) > 0$, $r(t)$ is increasing on $t \in (0, 1/2)$ and decreasing on $t \in (1/2,1)$, strictly when $a \neq c$. Since its second derivative is negative, $r(t)$ is concave in $t$ on $t \in (0,1)$, strictly when $a \neq c$. \end{proof} \subsection{Fiedler's Toeplitz Matrices} Fiedler \cite[p. 180]{Fiedler:1995:Numerical} established this closed formula for the spectral radius of a special Toeplitz matrix. \begin{Theorem}[Fiedler's 3-Parameter Toeplitz Matrices]\label{Theorem:Fiedler} Consider a Toeplitz matrix $\A \in \Complex^{n \times n}$, $n \geq 3$, with diagonal values $(v, 0, \ldots, 0, v, w, u, 0, \ldots, 0, u)$, with $v, w, u \in \Complex$: \an{\label{eq:FiedlerFlipped} \A&=\Pmatr{w & u & 0 & \cdots & 0 & u\\ v & w & u & 0 & \cdots & 0\\ 0 & v & w & u & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & v & w & u \\ v & 0 & \cdots & 0 & v & w }. } Let $\omega = e^{2 \pi i / n}$. The eigenvalues of $\A$ are \ab{ \lambda_{j+1}(\A) &= w + \omega^j u^{(1{-}1/n)} v^{1/n} + \omega^{n-j} u^{1/n} v^{(1{-}1/n)}, \qquad j = 0, 1, \ldots, n{-}1. } \end{Theorem} We apply Theorem \ref{Theorem:Fiedler} to the Levinger function. \begin{Theorem}\label{Theorem:FiedlerLevinger} Let $\A$ be defined as in \eqref{eq:FiedlerFlipped} with $u,v,w > 0$. Then $\spr(t) \eqdef \spr( (1{-}t) \A + t \A \tr)$ is concave in $t$ for $t \in (0,1)$, strictly if $u \neq v$. 
\end{Theorem} \begin{proof} For $u, v, w > 0$, $\spr(\A) = \lambda_1(\A) = w + u^{(1{-}1/n)} v^{1/n} + u^{1/n} v^{(1{-}1/n)}$ from Theorem \ref{Theorem:Fiedler}. Let $\B(t) = (1{-}t) \A + t \A\tr$. Then $\B(t)$ is again a Toeplitz matrix of the form \eqref {eq:FiedlerFlipped}, with diagonal values $(1{-}t)v{+}t u, 0, \ldots, 0, (1{-}t)v{+}t u, w, (1{-}t)u {+} t v,$ $0, \ldots, 0$, $(1{-}t)u{+}t v$ for matrix elements $A_{i, i{+}m}$, with $m \in \set{1{-}n, \ldots, n{-}1}$, and $i \in \{\max(1, 1{-}m), \ldots, \min(n, n{-}m)\}$. So again by Theorem \ref{Theorem:Fiedler}, \ab{ \spr(\B(t)) = w &+ [(1{-}t)u + t v]^{(1{-}1/n)} \ [(1{-}t)v+t u]^{1/n} \\ & + [(1{-}t)u + t v]^{1/n}\ [(1{-}t)v+t u]^{(1{-}1/n)} . } It is readily verified that \ab{ &\ddf{}{t} \spr(\B(t)) \\ &= - \frac{n-1}{n^2} \, \frac{(u-v)^2 (u+v)^2}{[(1{-}t)u + t v]^2 \, [(1{-}t)v + t u]^2} \\& \quad \times \left([(1{-}t)v + t u]^{1/n} [(1{-}t)u + t v]^{(1{-}1/n)} + [(1{-}t)v + t u]^{(1{-}1/n)} [(1{-}t)u + t v]^{1/n} \right) \\ & \leq 0, } with equality if and only if $u = v$. \end{proof} With the simple exchange of $A_{1n}$ and $A_{n1}$ in \eqref {eq:FiedlerFlipped}, $\A$ would become a circulant matrix, which has left and right Perron vectors colinear with the vector of all ones, $\ev$, and would therefore have a constant Levinger's function. \subsection{Weighted Shift Matrices} \label{sec:WSM} An $n \times n$ weighted shift matrix, $\A$, has the form \ab{ A_{ij} = \Cases{ c_i, &\qquad j=i+1, \quad i \in \set{1, \ldots, n-1},\\ 0, & \qquad \text{otherwise}, } } where $c_i$ are the weights. It is obtained from a cyclic shift matrix by setting any one of the weights to $0$ and appropriately permuting the indices. Unless we explicitly use ``cyclic'', we mean \emph{non-cyclic shift matrix} when we write ``shift matrix''. We will show that Levinger's function for positive weighted shift matrices is strictly concave. First we develop some lemmas.
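Before the formal development, the scaling behavior to be proven can be previewed numerically: for a weighted shift matrix, $\spr(\B(t))/\sqrt{t(1{-}t)}$ is constant in $t$, and $\sqrt{t(1{-}t)}$ is strictly concave. A minimal Python/NumPy sketch (illustrative only; the random positive weights are an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
c = rng.uniform(0.5, 2.0, size=n - 1)   # positive weights c_1, ..., c_{n-1}

# Weighted (non-cyclic) shift matrix: weights on the superdiagonal, zeros elsewhere.
A = np.diag(c, k=1)

def r(t):
    B = (1 - t) * A + t * A.T
    return max(abs(np.linalg.eigvals(B)))

# Preview of the theorem: spr(B(t)) = sqrt(t(1-t)) * f(c), so the ratio is constant in t.
ratios = [r(t) / np.sqrt(t * (1 - t)) for t in (0.1, 0.3, 0.45)]
assert max(ratios) - min(ratios) < 1e-9
assert min(ratios) > 0
```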
\begin{Lemma}\label{Lemma:poly} Let $\cv \in \Complex^{n+1}$ be a vector of complex numbers with $c_n \neq 0$, and let $\alpha \in \Complex$, $\alpha \neq 0$. Then the roots of the polynomial $p(x) = \sum_{k=0}^n x^k \alpha^{n-k} c_k$ are $r_j = \alpha f_j(\cv)$, where $f_j: \Complex^{n+1} \goesto \Complex$, $j = 1, \ldots, n$. \end{Lemma} \begin{proof} We factor and apply the Fundamental Theorem of Algebra to obtain \ab{ p(x) & = \sum_{k=0}^n x^k \alpha^{n-k} c_k = \alpha^n \sum_{k=0}^n \Pfrac{x}{\alpha}^k c_k = \alpha^n c_n \prod_{j=1}^n \left( \frac{x}{\alpha} - f_j(\cv) \right). } Hence the roots of $p(x)$ are $\set{\alpha f_j(\cv)\ |\ j = 1, \ldots, n}$. \end{proof} \begin{Lemma}\label{Lemma:ab} Let $\alpha, \beta \in \Complex \backslash 0$, and let $\A(\alpha, \beta) = [A_{ij}]$ be a hollow tridiagonal matrix, where $A_{ij} \neq 0$ for $j=i+1$ and $j=i-1$, $A_{ij}=0$ otherwise, and \ab{ A_{ij} = \Cases{ \alpha \, c_{ij}, & \qquad j=i+1, \quad i \in \set{1, \ldots, n-1},\\ \beta \, c_{ij}, & \qquad j=i-1, \quad i \in \set{2, \ldots, n}, } } so $\A(\alpha, \beta) $ has the form \ab { \A(\alpha,\beta) &=\Pmatr{ 0 & \alpha \, c_{12} & 0 & \cdots & 0 & 0 & 0\\ \beta \, c_{21} & 0 & \alpha \, c_{23}& \cdots & 0 & 0 & 0\\ 0 & \beta \, c_{32} & 0 & \ddots & 0 &0 & 0\\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ddots & 0 & \alpha \, c_ {n{-}2, n{-}1} & 0 \\ 0 & 0 & 0 & \cdots & \beta \, c_{n{-}1, n{-}2} & 0 & \alpha \, c_ {n{-}1, n} \\ 0 & 0 & 0 & \cdots & 0 & \beta \, c_ {n, n{-}1} & 0 }. } Let $\cv \in \Complex^{2(n-1)}$ represent the vector of $c_{ij}$ constants. Then the eigenvalues of $\A$ are of the form $\sqrt{\alpha \beta}\: f_h(\cv) $, $h = 1, \ldots, n$, where $f_h\suchthat \Complex^{2(n-1)} \goesto \Complex$ are functions of the $c_{ij}$ constants that do not depend on $\alpha$ or $\beta$.
\end{Lemma} \begin{proof} The characteristic polynomial of $\A$ is \ab{ p_\A(\lambda) &= \det(\lambda\I - \A)\\ &= \begin{vmatrix} \lambda & -\alpha \, c_{12} & 0 & \cdots & 0 & 0 & 0\\ -\beta \, c_{21} & \lambda & -\alpha \, c_{23}& \cdots & 0 & 0 & 0\\ 0 & -\beta \, c_{32} & \lambda & \ddots & 0 &0 & 0\\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ddots & \lambda & -\alpha \, c_ {n{-}2, n{-}1} & 0 \\ 0 & 0 & 0 & \cdots & -\beta \, c_{n{-}1, n{-}2} & \lambda & -\alpha \, c_ {n{-}1, n} \\ 0 & 0 & 0 & \cdots & 0 & -\beta \, c_ {n, n{-}1} & \lambda \end{vmatrix}. } The characteristic polynomial has the recurrence relation \an{ p_{\A_k}(\lambda) & = \lambda \ p_{\A_{k-1}}(\lambda) - \alpha \, \beta \, c_{k,k{-}1} \ c_{k{-}1,k} \ p_{\A_{k-2}}(\lambda), &\quad k \in \set{3, \ldots, n}, \label{eq:rec}\\ \stext{with initial conditions} p_{\A_2}(\lambda) &= \lambda^2 - \alpha \, \beta \ c_{12}\ c_{21}, \quad\text{and} \label{eq:A2} \\ p_{\A_1}(\lambda) &= \lambda, \label{eq:A1} } where $\A_k$ is the principal submatrix of $\A$ over indices $1, \ldots, k$. We show by induction that for all $k \in \set{2, \ldots, n}$, \an{ p_{\A_k}(\lambda) & = \sum_{j=0}^k \lambda^j (\alpha \beta)^{ (k-j)/2} \, g_{jk}(\cv) = \sum_{j=0}^k \lambda^j \sqrt{\alpha \beta}^{\, (k-j)} g_{jk}(\cv), \label{eq:IH} } where each $g_{jk}\suchthat \Complex^{2(n-1)} \goesto \Complex$, $k \in \set{2, \ldots, n}$, $j \in \set{0, \ldots, k}$, is a function of constants $\cv$. From \eqref{eq:A2}, we see that \eqref{eq:IH} holds for $k=2$: $ p_{\A_2}(\lambda) = \lambda^2 - \alpha \, \beta \, c_{12}\, c_{21} .
$ For $k=3$, from the recurrence relation \eqref{eq:rec} and initial conditions \eqref{eq:A1}, \eqref{eq:A2}, we have \ab{ p_{\A_3}(\lambda) & = \lambda\, p_{\A_{2}}(\lambda) - \alpha \beta \, c_{32} \, c_{23} \, p_{\A_{1}}(\lambda) = \lambda (\lambda^2 - \alpha \beta \, c_{12}\, c_{21}) - \alpha \beta \, c_{32} \, c_{23} \, \lambda \\ & = \lambda^3 - \lambda \sqrt{\alpha \beta}^{\, 2}( c_{12}\, c_{21} + c_{32} \, c_{23}), } which satisfies \eqref{eq:IH}. These are the basis steps for the induction. For the inductive step, we need to show that if \eqref{eq:IH} holds for $k-1, k-2$ then it holds for $k$. Suppose that \eqref{eq:IH} holds for $2 \leq k-1, k-2 \leq n-1$. Then \ab{ &p_{\A_k}(\lambda) = \lambda\, p_{\A_{k-1}}(\lambda) - \alpha\beta \, c_{k,k{-}1} \, c_{k{-}1,k} \ p_{\A_{k-2}}(\lambda)\\ & = \lambda \sum_{j=0}^{k{-}1} \lambda^j \sqrt{\alpha \beta}^{\, (k{-}1{-}j)} g_{j,k{-}1}(\cv) - \alpha\beta \, c_{k,k{-}1} \, c_{k{-}1,k} \sum_{j=0}^{k-2} \lambda^j \sqrt{\alpha \beta}^{\, (k-2-j)} g_{j,k-2}(\cv) \\ & = \sum_{j=1}^{k} \lambda^{j} \sqrt{\alpha \beta}^{\,( k{-}j)} g_{j-1,k{-}1}(\cv) {-} \sum_{j=0}^{k{-}2} \lambda^j \sqrt{\alpha \beta}^{\, (k-j)} c_{k,k{-}1} \, c_{k{-}1,k} \ g_{j,k-2}(\cv) , } which satisfies \eqref{eq:IH}. Thus by induction $p_{\A_n}(\lambda)$ satisfies \eqref{eq:IH}. Then Lemma \ref{Lemma:poly} implies that the parameters $\set{\alpha, \beta}$ appear as the linear factor $\sqrt{\alpha \beta}$ in each root of the characteristic polynomial of $\A(\alpha,\beta)$ --- its eigenvalues. \end{proof} \begin{Theorem}[Weighted Shift Matrices]\label{Theorem:Shift} Levinger's function is strictly concave for nonnegative weighted shift matrices with at least one positive weight.
\end{Theorem} \begin{proof} Let the weighted shift matrix $\A$ be defined as \ab{ A_{ij} = \Cases{ c_i \geq 0, & \qquad j=i+1, \quad i \in \set{1, \ldots, n-1},\\ 0, & \qquad \text{otherwise}, } } where $c_i$ are the weights and $c_i >0$ for at least one $i = 1, \ldots, n-1$. By Lemma \ref{Lemma:ab}, all the eigenvalues of Levinger's homotopy $\B(t) = (1{-}t) \A + t \A\tr$ are of the form $\lambda_i(\B(t)) = \sqrt{t(1{-}t)}\, f_i(\cv)$, where $\cv$ is the vector of weights, and $f_i\suchthat \Reals^{n-1} \goesto \Reals$, since $\B(t)$ is a direct sum of one or more (if some $c_i=0$) Jacobi matrices and these have real eigenvalues \cite[22.7.2]{Hogben:2014:Handbook}. If at least one weight $c_i$ is positive, then $\B(t)$ has a principal submatrix $\Pmatr{0& (1{-}t) c_i\\t\, c_i&0}$ with a positive spectral radius for $t \in (0,1)$. Thus by \cite[Corollary 8.1.20(a)]{Horn:and:Johnson:2013}, $\spr(\B(t)) > 0$ for $t \in (0,1)$. Therefore for $t \in (0,1)$, $\spr(\B(t)) = \lambda_1(\B(t)) = \sqrt{t(1{-}t)}\: f_1(\cv) > 0$. Since $\sqrt{t(1{-}t)}$ is strictly concave in $t$ for $t \in (0,1)$, Levinger's function is strictly concave in $t$ for $t \in (0,1)$. \end{proof} \begin{Corollary} Levinger's function is strictly concave for a nonnegative hollow tridiagonal matrix, $\A \in \Reals^{n \times n}$, in which $A_{ii}=0$ for $i \in \set{1, \ldots, n}$, and where for each $i \in \set{1, \ldots, n-1}$, $A_{i,i+1}\, A_{i+1,i} = 0$, and for at least one $i$, $A_{i,i+1} > 0$. \end{Corollary} \begin{proof} $\A$ is derived from a weighted shift matrix by swapping some elements of the superdiagonal $A_{i,i+1}$ to the transposed position in the subdiagonal, $A_{i+1,i}$.
The determinant of Levinger's homotopy $\det(\lambda \I - \B(t)) = \det(\lambda \I - (1-t) \A - t \A\tr)$ remains unchanged under such swapping because the term $\alpha \beta \, c_{k,k{-}1} \, c_{k{-}1,k}$ in \eqref{eq:rec}, which is $(1-t)t \, c_{k{-}1,k}^2$ in the weighted shift matrix, remains invariant under swapping as $t (1-t) \, c_{k,k{-}1}^2$. \end{proof} We complete the connection to positive weighted circuit matrices with this corollary. \begin{Corollary} By setting one or more, but not all, of the weights in a positive weighted circuit matrix to $0$, Levinger's function becomes strictly concave. \end{Corollary} \begin{proof} A positive weighted circuit matrix where some but not all of the positive weights are changed to $0$ is, under appropriate permutation of the indices, a nonnegative weighted shift matrix to which Theorem \ref{Theorem:Shift} applies. \end{proof} What kind of transition does Levinger's function undergo as one of the weights of a cyclic weighted shift matrix, with its possibly nonconcave Levinger's function, is lowered to $0$, producing a weighted shift matrix with its necessarily concave Levinger's function? Does the convexity observed in Figure \ref{fig:CyclicShift16} at the boundaries $t=0$ and $t=1$ flatten and become strictly concave for some positive value of that weight? We examine this transition for the cyclic shift matrix in example \eqref{eq:CyclicShift16} (Figure \ref{fig:CyclicShift16}). The minimal weight is $c_{12} = 16 + \sin\left(2 \pi \frac{12}{16}\right) = 15$. Figure \ref{fig:C12} plots Levinger's function as $c_{12}$ is divided by factors of $2^{8}$. Figure \ref{fig:Shift-Matrix_Limit} plots the second derivatives of Levinger's function. We observe non-uniform convergence to the $c_{12}=0$ curve.
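The $c_{12}=0$ endpoint of this transition can be checked numerically: with that weight removed, the matrix is a permuted weighted shift matrix, so by Theorem \ref{Theorem:Shift} Levinger's function is exactly proportional to $\sqrt{t(1-t)}$. A Python/NumPy sketch (illustrative only):

```python
import numpy as np

n = 16
c = 16 + np.sin(2 * np.pi * np.arange(1, n + 1) / n)
c[11] = 0.0   # c_12 = 15 set to 0 (0-based index 11), breaking the cycle into a path

# Cyclic structure with one weight removed: a permuted weighted shift matrix.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = c[i]

def r(t):
    B = (1 - t) * A + t * A.T
    return max(abs(np.linalg.eigvals(B)))

# With c_12 = 0, Levinger's function is exactly proportional to sqrt(t(1-t)).
ratios = [r(t) / np.sqrt(t * (1 - t)) for t in (0.1, 0.25, 0.4)]
assert max(ratios) - min(ratios) < 1e-7
assert min(ratios) > 0
```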
As $c_{12}$ decreases, the second derivative converges to the $c_{12}=0$ curve over wider and wider intervals of $t$, but outside of these intervals the second derivative \emph{diverges} from the $c_{12}=0$ curve, attaining larger values near and at the boundaries $t=0$ and $t=1$ with smaller $c_{12}$. Meanwhile for $c_{12}=0$, Levinger's function is proportional to $\sqrt{t(1-t)}$, the second derivative of which goes to $-\infty$ as $t$ goes to $0$ or $1$. When $c_{12}> 0$, $\B(0)$ and $\B(1)$ are irreducible, and when $c_{12}=0$, $\B(t)$ is irreducible for $t \in (0,1)$. But for $c_{12}=0$, $\B(0)$ and $\B(1)$ are reducible matrices. While the eigenvalues are always continuous functions of the elements of the matrix, the derivatives of the spectral radius need not be, and in this case, we see an unusual example of nonuniform convergence in the second derivative of the spectral radius. \begin{figure} \caption{Levinger's function for the cyclic weighted shift matrix from \eqref{eq:CyclicShift16} as the weight $c_{12}$ is decreased toward $0$.} \label{fig:C12} \end{figure} \begin{figure} \caption{Second derivatives of Levinger's function as the weight $c_{12}$ is decreased toward $0$.} \label{fig:Shift-Matrix_Limit} \end{figure} \section{Matrices with Constant Levinger's Function} Bapat \cite{Bapat:1987:Two} and Fiedler \cite{Fiedler:1995:Numerical} identified matrices with colinear left and right Perron vectors as having constant Levinger's function. Here we make explicit a property implied by this constraint that appears not to have been described. We use the centered representation of Levinger's homotopy. The \emph{symmetric} part of a square matrix $\A$ is \an{ \bm{S}(\A) \eqdef (\A + \A\tr) / 2 . \label{eq:SymPart} } The \emph{skew symmetric} part of $\A$ is \an{ \K(\A) \eqdef (\A - \A\tr)/2 . \label{eq:SkewPart} } Then $\A = \bm{S}(\A) + \K(\A)$. Levinger's homotopy in this centered representation is now, suppressing the $\A$ argument, \ab{ \C(p) &\eqdef \bm{S} + p \, \K, \qquad p \in [-1, 1], \stext{and Levinger's function is} c(p) &\eqdef r((p+1)/2) = \spr(\bm{S} + p \, \K).
} The range of $p$ in this centered representation may be extended beyond $[-1,1]$, while maintaining $\C(p) \geq \0$, to the interval $p \in [-\alpha, \alpha]$ where \ab{ \alpha = \min_{i,j} \frac{A_{ij} + A_{ji}}{| A_{ji} - A_{ij} | } \geq 1. } \begin{Theorem}\label{Theorem:SK} Let $\A \in \Reals^{n \times n}$ be irreducible and nonnegative. Then $\spr((1{-}t) \A + t \A\tr)$ is constant in $t \in [0,1]$ if and only if the Perron vector of $\A + \A\tr$ is in the null space of $\A - \A\tr$. \end{Theorem} \begin{proof} \cite{Bapat:1987:Two} and \cite{Fiedler:1995:Numerical} proved that $\spr((1{-}t) \A + t \A\tr)$ is constant in $t \in [0,1]$ if and only if the left and right Perron vectors of $\A$ are collinear. Suppose the left and right Perron vectors of $\A$ are collinear. Without loss of generality, they can be normalized to sum to $1$, in which case they are identical. Let the left and right Perron vectors of $\A$ be $\x$. Then \ab{ \frac{1}{2} (\A + \A\tr) \x &= \spr(\A) \ \x, \stext{and} (\A - \A\tr) \x &= \spr(\A)\ (\x - \x) = \0. } Hence $\x$ is the Perron vector of $\A + \A\tr$ and $\x > \0$ is in the null space of $\A - \A\tr$. For the converse, let the Perron vector of $\A + \A\tr$ be $\x > \0$, and let $\x$ be in the null space of $\A - \A\tr$. Then \ab{ (\A + \A\tr ) \x &= \spr(\A{+}\A\tr)\ \x \text{ and } (\A - \A\tr ) \x = \A \x - \A\tr \x = \0, } {which gives} \ab{ \A \x & = \frac{1}{2}[(\A + \A\tr ) +( \A - \A\tr)] \x = \frac{1}{2}\spr(\A{+}\A\tr)\ \x + \0 = \frac{\spr(\A{+}\A\tr)}{2} \ \x \stext{and} \A\tr \x & = \frac{1}{2}[(\A + \A\tr ) - ( \A - \A\tr)] \x = \frac{1}{2}\spr(\A{+}\A\tr)\ \x - \0 = \frac{\spr(\A{+}\A\tr)}{2}\ \x } hence $\x$ is a Perron vector of $\A$ and of $\A\tr$. \end{proof} \begin{Corollary} Let $\bm{S} = \bm{S}\tr \in \Reals^{n\times n}$ be a nonnegative irreducible symmetric matrix, and $\K = - \K\tr \in \Reals^{n\times n}$ be a nonsingular skew symmetric matrix such that $\A= \bm{S} + \K \geq \0$.
Then $n$ is even and $\A$ has a non-constant Levinger's function. \end{Corollary} \begin{proof} If $\K$ is a nonsingular skew symmetric matrix, $n$ must be even, since odd-order skew symmetric matrices are always singular \cite[2-9.27]{Hogben:2014:Handbook}. If $\C(p) := \bm{S} + p \, \K$ with $\K$ nonsingular, then because the null space of $\K$ is $\{\0\}$, $\C(p)$ must have a non-constant Levinger's function $c(p)$ by Theorem \ref{Theorem:SK}. \end{proof} The following corollary pursues the observation made by an anonymous reviewer that a matrix $\A$ with collinear left and right Perron vectors is orthogonally similar to a direct sum $\Pmatr{\spr(\A)} \oplus \F$ for some square matrix $\F$. This entails that the skew symmetric part of $\A$ is orthogonally similar to $\Pmatr{\spr(\A) - \spr(\A)} \oplus (\F-\F\tr) / 2 = \Pmatr{0} \oplus (\F-\F\tr) / 2$, and is thus singular. \begin{Corollary} Let $\bm{S} = \bm{S}\tr \in \Reals^{n\times n}$ be a nonnegative irreducible symmetric matrix, and $\K = - \K\tr \in \Reals^{n\times n}$ be a skew symmetric matrix, such that $\A = \bm{S} + \K \geq \0$. Let $\Q = (\Q\tr)^{-1}$ be an orthogonal matrix that diagonalizes $\bm{S}$ to \ab{ \Lam &\eqdef \Q\tr \bm{S} \Q = \Pmatr {\spr(\bm{S}) & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n } . } Then $\A$ has a constant Levinger's function if and only if \an{\label{eq:KQP} \K_1 &\eqdef \Q\tr\K \Q = \Pmatr {0 & \0\tr \\ \0 &\K_2 } = \Pmatr{0} \oplus \K_2, } where $\K_2 = - \K_2\tr \in \Reals^ {n{-}1 \times n{-}1} $ and $\0\tr = (0 \ldots 0) \in \Reals^{n{-}1}$. \end{Corollary} \begin{proof} Since $\bm{S}$ is real and symmetric, $\bm{S} = \Q \Lam \Q\tr$ is in Jordan canonical form. Let $\x > \0$ be the normalized Perron vector of $\bm{S}$. Then $\x = [\Q]_1$ is the first column of $\Q$, and the other columns of $\Q$ are orthogonal to $\x$, so $\x\tr \Q = (1\:0 \cdots 0)$.
The necessary and sufficient condition from Theorem \ref{Theorem:SK} for $\A$ to have constant Levinger's function is that $\x\tr\K = \0\tr$, equivalent to \ab{ \x\tr \K & = \x\tr \Q\K_1\Q\tr = (1\: 0 \cdots 0) \K_1\Q\tr = \0\tr. } Since $\Q$ is orthogonal, it has null space $\{\0\}$, so $ (1\: 0 \cdots 0) \K_1\Q\tr = \0\tr$ if and only if $ (1\: 0 \cdots 0) \K_1 = \0\tr$, which is the top row of $\K_1$. $\K_1$ and $\K_2$ must be skew symmetric since $\K$ is skew symmetric, as can be seen immediately from transposition. The skew symmetry of $\K_1$ implies its first column must also be all zeros as its first row is, establishing the form given in \eqref{eq:KQP}. \end{proof} \section{Conclusions} We have shown that it is not in general true that the spectral radius along a line from a nonnegative square matrix $\A$ to its transpose --- Levinger's function --- is concave. Our counterexamples to concavity have a simple principle in the case of direct sums of block matrices, namely, that the maximum of two concave functions need not be concave. However, for the other examples we present --- Toeplitz matrices, and positive circuit or cyclic weighted shift matrices --- whatever principles underlie the nonconcavity remain to be discerned. Also remaining to be discerned are the properties of matrix families --- a few of which we have presented here --- that guarantee concave Levinger functions. A general characterization of the range of $t$ for which the spectral radius is concave in Levinger's homotopy remains an open problem. \section*{Biographical Note} Bernard W. Levinger (Berlin, Germany, September 3, 1928 -- Fort Collins, Colorado, USA, January 17, 2020) and his family fled Nazi Germany to England in 1936, to Mexico in 1940, and to the United States in 1941, which at first held them in an immigration prison and deported them to Mexico, but which ultimately allowed their immigration, whereupon they settled in New York City.
Levinger graduated from Bronx High School of Science and earned a doctorate in mathematics from New York University. He was Professor of Mathematics and Professor Emeritus at Colorado State University, Fort Collins. He leaves a large family, including Lory, his wife of more than 65 years \cite{Levinger:1928-2020}. \end{document}
\begin{document} \chapter*{The spread of finite and infinite groups \\ {\normalfont\large Scott Harper\footnotemark}} \footnotetext[1]{School of Mathematics and Statistics, University of St Andrews, St Andrews, KY16 9SS, UK \newline \url{[email protected]}} \section*{Abstract}\trivlist\item[] It is well known that every finite simple group has a generating pair. Moreover, Guralnick and Kantor proved that every finite simple group has the stronger property, known as $\frac{3}{2}$-generation, that every nontrivial element is contained in a generating pair. Much more recently, this result has been generalised in three different directions, which form the basis of this survey article. First, we look at some stronger forms of $\frac{3}{2}$-generation that the finite simple groups satisfy, which are described in terms of spread and uniform domination. Next, we discuss the recent classification of the finite $\frac{3}{2}$-generated groups. Finally, we turn our attention to infinite groups, focusing on the recent discovery that the finitely presented simple groups of Thompson are also $\frac{3}{2}$-generated, as are many of their generalisations. Throughout the article we pose open questions in this area, and we highlight connections with other areas of group theory. \endtrivlist\addvspace{26pt} \section{Introduction} \label{s:intro} Every finite simple group can be generated by two elements. This well-known result was proved for most finite simple groups by Steinberg in 1962 \cite{ref:Steinberg62} and completed via the Classification of Finite Simple Groups (see \cite{ref:AschbacherGuralnick84}). Much more is now known about generating pairs for finite simple groups.
For instance, for any nonabelian finite simple group $G$, almost all pairs of elements generate $G$ \cite{ref:KantorLubotzky90,ref:LiebeckShalev95}, $G$ has an invariable generating pair \cite{ref:GuralnickMalle12JLMS,ref:KantorLubotzkyShalev11}, and, with only finitely many exceptions, $G$ can be generated by a pair of elements where one has order $2$ and the other has order either $3$ or $5$ \cite{ref:LiebeckShalev96Ann,ref:LubeckMalle99}. The particular generation property of finite simple groups that this survey focuses on was established by Guralnick and Kantor \cite{ref:GuralnickKantor00} and independently by Stein \cite{ref:Stein98}. They proved that if $G$ is a finite simple group, then every nontrivial element of $G$ is contained in a generating pair. Groups with this property are said to be \emph{$\frac{3}{2}$-generated}. We will survey the recent work (mostly from the past five years) that addresses natural questions arising from this theorem. Section~\ref{s:finite} focuses on finite groups and considers recent progress towards answering two natural questions. Do finite simple groups satisfy stronger versions of $\frac{3}{2}$-generation? Which other finite groups are $\frac{3}{2}$-generated? Regarding the first, in Sections~\ref{ss:finite_spread} and~\ref{ss:finite_udn}, we will meet two strong versions of $\frac{3}{2}$-generation, namely (uniform) spread and total/uniform domination. Regarding the second, Section~\ref{ss:finite_bgh} presents the recent classification of the finite $\frac{3}{2}$-generated groups established by Burness, Guralnick and Harper in 2021 \cite{ref:BurnessGuralnickHarper21}. All these ideas are brought together as we discuss the generating graph in Section~\ref{ss:finite_graph}. Section~\ref{ss:finite_app} rounds off the first half by highlighting applications of spread to word maps, the product replacement graph and the soluble radical of a group.
Section~\ref{s:infinite} focuses on infinite groups and, in particular, whether any results on the $\frac{3}{2}$-generation of finite groups extend to the realm of infinite groups. After discussing this in general terms in Sections~\ref{ss:infinite_intro} and~\ref{ss:infinite_soluble}, our focus shifts to the finitely presented infinite simple groups of Richard Thompson in Sections~\ref{ss:infinite_thompson_introduction} to~\ref{ss:infinite_thompson_t}. Here we survey the ongoing work of Bleak, Donoven, Golan, Harper, Hyde and Skipper, which reveals strong parallels between the $\frac{3}{2}$-generation of these infinite simple groups and the finite simple groups. Section~\ref{ss:infinite_thompson_introduction} serves as an introduction to Thompson's groups for any reader unfamiliar with them. This survey is based on my one-hour lecture at \emph{Groups St Andrews 2022} at the University of Newcastle, and I thank the organisers for the opportunity to present at such an enjoyable and interesting conference. I have restricted this survey to the subject of spread and have barely discussed other aspects of generation. Even regarding the spread of finite simple groups, much more could be said, especially regarding the methods involved in proving the results. Both of these omissions from this survey are discussed amply in Burness' survey article from \emph{Groups St Andrews 2017} \cite{ref:Burness19}, which is one reason for deciding to focus in this article on the progress made in the past five years. \textbf{Acknowledgements. } The author wrote this survey when he was first a Heilbronn Research Fellow and then a Leverhulme Early Career Fellow, and he thanks the Heilbronn Institute for Mathematical Research and the Leverhulme Trust.
He thanks Tim Burness, Charles Cox, Bob Guralnick, Jeremy Rickard and a referee for their helpful comments, and he also thanks Guralnick for his input on Application~3, especially his suggested proof of Theorem~\ref{thm:x_radical}. \section{Finite Groups} \label{s:finite} \subsection{Generating pairs} \label{ss:finite_intro} It is easy to write down a pair of generators for each alternating group $A_n$: for instance, if $n$ is odd, then $A_n = \langle (1 \, 2 \, 3), (1 \, 2 \, \dots \, n) \rangle$. In 1962, Steinberg \cite{ref:Steinberg62} proved that every finite simple group of Lie type is $2$-generated, by exhibiting an explicit pair of generators. In light of the Classification of Finite Simple Groups, once the sporadic groups were all shown to be $2$-generated, it became known that every finite simple group is $2$-generated \cite{ref:AschbacherGuralnick84}. Since then, numerous stronger versions of this theorem have been proved (see Burness' survey \cite{ref:Burness19}). Even as early as 1962, Steinberg raised the possibility of stronger versions of his $2$-generation result \cite{ref:Steinberg62}: \trivlist\item[]\small ``It is possible that one of the generators can be chosen of order 2, as is the case for the projective unimodular group, or even that one of the generators can be chosen as an arbitrary element other than the identity, as is the case for the alternating groups. Either of these results, if true, would quite likely require methods much more detailed than those used here.'' \endtrivlist\normalsize That is, Steinberg is suggesting the possibility that for a finite simple group $G$ one might be able to replace just the existence of $x,y \in G$ such that $\langle x, y \rangle = G$, with the stronger statement that for all nontrivial elements $x \in G$ there exists $y \in G$ such that $\langle x,y \rangle = G$.
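The explicit generating pair for alternating groups of odd degree quoted above is easy to verify by machine for small degrees. A quick pure-Python check (my own sketch, not part of the survey; permutations act on $\{0,\dots,n-1\}$, so the pair becomes $(0\,1\,2)$ and $(0\,1\,\cdots\,n{-}1)$):

```python
import math

def compose(p, q):
    # permutation composition: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def generated_subgroup(gens):
    # breadth-first closure of the generating set (inverses are not needed
    # since every element of a finite group has finite order)
    identity = tuple(range(len(gens[0])))
    seen, frontier = {identity}, [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(s, g)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
    return seen

def cycle(n, *pts):
    # the cycle (pts[0] pts[1] ...) as a permutation of {0, ..., n-1}
    img = list(range(n))
    for i, p in enumerate(pts):
        img[p] = pts[(i + 1) % len(pts)]
    return tuple(img)

n = 7  # any odd n works; 7 keeps the closure small
G = generated_subgroup([cycle(n, 0, 1, 2), cycle(n, *range(n))])
assert len(G) == math.factorial(n) // 2  # |A_7| = 2520
```

The same closure routine is all that is needed to test generation in any small permutation group.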
He alludes to the fact that this much stronger condition is known to hold for the alternating groups, which was shown by Piccard in 1939 \cite{ref:Piccard39}. In the following example, we will prove this result on alternating groups, but with different methods than Piccard used. \begin{example} \label{ex:alternating} Let $G = A_n$ for $n \geq 5$. We will focus on the case $n \equiv 0 \mod{4}$ and then address the remaining cases at the end. Write $n=4m$ and let $s$ have cycle shape $[2m-1,2m+1]$, that is, let $s$ be a product of disjoint cycles of lengths $2m-1$ and $2m+1$. Visibly, $s$ is contained in a maximal subgroup $H \leqslant G$ of type $(S_{2m-1} \times S_{2m+1}) \cap G$. We claim that no further maximal subgroups of $G$ contain $s$. Imprimitive maximal subgroups are ruled out since $2m-1$ and $2m+1$ are coprime. In addition, a theorem of Marggraf \cite[Theorem~13.5]{ref:Wielandt64} ensures that no proper primitive subgroup of $A_n$ contains a $k$-cycle for $k < \frac{n}{2}$, so $s$ is contained in no primitive maximal subgroups as a power of $s$ is a $(2m-1)$-cycle. Now let $x$ be an arbitrary nontrivial element of $G$. Choosing $g$ such that $x$ moves some point from the $(2m-1)$-cycle of $s^g$ to a point in the $(2m+1)$-cycle of $s^g$ gives $x \not\in H^g$. This means that no maximal subgroup of $G$ contains both $x$ and $s^g$, so $\langle x, s^g \rangle = G$. In particular, every nontrivial element of $G$ is contained in a generating pair. We now address the other cases, but we assume that $n \geq 25$ for ease of exposition. If $n \equiv 2 \mod{4}$, then we choose $s$ with cycle shape $[2m-1,2m+3]$ (where $n=4m+2$) and proceed as above, but now the unique maximal overgroup has type $(S_{2m-1} \times S_{2m+3}) \cap G$. A similar argument works for odd $n$.
Here $s$ has cycle shape $[m-2,m,m+2]$ if $n=3m$, $[m+1,m+1,m-1]$ if $n=3m+1$ and $[m+2,m,m]$ if $n=3m+2$, and the only maximal overgroups of $s$ are the three obvious intransitive ones. For each $1 \neq x \in G$, it is easy to find $g \in G$ such that $x$ misses all three maximal overgroups of $s^g$ and hence deduce that $\langle x, s^g \rangle = G$. \end{example} In 2000, Guralnick and Kantor \cite{ref:GuralnickKantor00} gave a positive answer to the longstanding question of Steinberg by proving the following. \begin{theorem} \label{thm:guralnick_kantor} Let $G$ be a finite simple group. Then every nontrivial element of $G$ is contained in a generating pair. \end{theorem} We say that a group $G$ is \emph{$\frac{3}{2}$-generated} if every nontrivial element of $G$ is contained in a generating pair. The author does not know the origin of this term, but it indicates that the class of $\frac{3}{2}$-generated groups includes the class of $1$-generated groups and is included in the class of $2$-generated groups. This is somewhat analogous to the class of $\frac{3}{2}$-transitive permutation groups introduced by Wielandt \cite[Section~10]{ref:Wielandt64}, which is included in the class of $1$-transitive groups and includes the class of $2$-transitive groups. Let us finish this section by briefly turning from simple groups to simple Lie algebras. Here we have a theorem of Ionescu \cite{ref:Ionescu76}, analogous to Theorem~\ref{thm:guralnick_kantor}. \begin{theorem}\label{thm:ionescu} Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra over $\mathbb{C}$. Then for all $x \in \mathfrak{g} \setminus 0$ there exists $y \in \mathfrak{g}$ such that $x$ and $y$ generate $\mathfrak{g}$ as a Lie algebra.
\end{theorem} In fact, Bois \cite{ref:Bois09} proved that every classical finite dimensional simple Lie algebra in characteristic other than 2 or 3 has this $\frac{3}{2}$-generation property, but Goldstein and Guralnick \cite{ref:GoldsteinGuralnick} have proved that $\mathfrak{sl}_n$ in characteristic 2 does not. \subsection{Spread} \label{ss:finite_spread} Let us now introduce the concept that gives this article its name. \begin{definition} \label{def:spread} Let $G$ be a group. \begin{enumerate} \item The \emph{spread} of $G$, written $s(G)$, is the supremum over integers $k$ such that for any $k$ nontrivial elements $x_1, \dots, x_k \in G$ there exists $y \in G$ such that $\langle x_1, y \rangle = \cdots = \langle x_k, y \rangle = G$. \item The \emph{uniform spread} of $G$, written $u(G)$, is the supremum over integers $k$ for which there exists $s \in G$ such that for any $k$ nontrivial elements $x_1, \dots, x_k \in G$ there exists $y \in s^G$ such that $\langle x_1, y \rangle = \cdots = \langle x_k, y \rangle = G$. \end{enumerate} \end{definition} The term spread was introduced by Brenner and Wiegold in 1975 \cite{ref:BrennerWiegold75}, but the term uniform spread was not formally introduced until 2008 \cite{ref:BreuerGuralnickKantor08}. Note that $s(G) > 0$ if and only if every nontrivial element of $G$ is contained in a generating pair. Therefore, spread gives a way of quantifying how strongly a group is $\frac{3}{2}$-generated. Uniform spread captures the idea that the complementary element $y$, while depending on the elements $x_1, \dots, x_k$, can be chosen somewhat uniformly for all choices of $x_1, \dots, x_k$: it can always be chosen from the same prescribed conjugacy class. In Section~\ref{ss:finite_udn}, we will see a way of measuring how much more uniformity in the choice of $y$ we can insist on.
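For a group as small as $A_5$ these definitions can be explored by exhaustive search. The following pure-Python sketch (mine; permutations act on $\{0,\dots,4\}$) confirms that every pair of nontrivial elements of $A_5$ admits a common complement $y$, while some triple does not, so $s(A_5) = 2$:

```python
from itertools import combinations, permutations

def compose(p, q):
    # (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens):
    # subgroup generated by gens, as a set of permutation tuples
    identity = tuple(range(len(gens[0])))
    seen, frontier = {identity}, [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for t in gens:
                h = compose(t, g)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
    return seen

def is_even(p):
    # parity via the number of transpositions in a cycle decomposition
    seen, swaps = set(), 0
    for i in range(len(p)):
        if i not in seen:
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
                swaps += 1
            swaps -= 1
    return swaps % 2 == 0

A5 = [p for p in permutations(range(5)) if is_even(p)]
identity = tuple(range(5))
nontrivial = [p for p in A5 if p != identity]

# gen_mask[x]: bitmask over indices i of nontrivial y_i with <x, y_i> = A_5
gen_mask = {}
for x in nontrivial:
    m = 0
    for i, y in enumerate(nontrivial):
        if len(closure([x, y])) == 60:
            m |= 1 << i
    gen_mask[x] = m

# every pair of nontrivial elements has a common complement, so s(A_5) >= 2 ...
pairs_ok = all(gen_mask[a] & gen_mask[b] for a, b in combinations(nontrivial, 2))
# ... but some triple has no common complement, so s(A_5) = 2
triples_ok = all(gen_mask[a] & gen_mask[b] & gen_mask[c]
                 for a, b, c in combinations(nontrivial, 3))
assert pairs_ok and not triples_ok
```

Restricting the candidates $y$ to a single conjugacy class in the same way gives lower bounds on $u(A_5)$.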
Observe that Example~\ref{ex:alternating} actually shows that $u(A_n) \geq 1$ for all $n \geq 5$. By Theorem~\ref{thm:guralnick_kantor}, every finite simple group $G$ satisfies $s(G) > 0$. What more can be said about the (uniform) spread of finite simple groups? The main result is the following proved by Breuer, Guralnick and Kantor \cite{ref:BreuerGuralnickKantor08}. \begin{theorem}\label{thm:breuer_guralnick_kantor} \hspace{-2mm} Let $G$ be a nonabelian finite simple group. Then $s(G) \geq u(G) \geq 2$. Moreover, $s(G) = 2$ if and only if $u(G)=2$ if and only if \[ G \in \{ A_5, A_6, \Omega^+_8(2) \} \cup \{ {\rm Sp}_{2m}(2) \mid m \geq 3 \}. \] \end{theorem} The asymptotic behaviour of (uniform) spread is given by the following theorem of Guralnick and Shalev \cite[Theorem~1.1]{ref:GuralnickShalev03}. The version of this theorem stated in \cite{ref:GuralnickShalev03} is given just in terms of spread, but the result given here follows immediately from their proof (see \cite[Lemma~2.1--Corollary~2.3]{ref:GuralnickShalev03}). \begin{theorem}\label{thm:guralnick_shalev} Let $(G_i)$ be a sequence of nonabelian finite simple groups such that $|G_i| \to \infty$. Then $s(G_i) \to \infty$ if and only if $u(G_i) \to \infty$ if and only if $(G_i)$ has no infinite subsequence consisting of either \begin{enumerate} \item alternating groups of degrees all divisible by a fixed prime, or \item symplectic groups over a field of fixed even size or odd-dimensional orthogonal groups over a field of fixed odd size. \end{enumerate} \end{theorem} Given that $s(G_i) \to \infty$ if and only if $u(G_i) \to \infty$, we ask the following. (Note that $s(G) - u(G)$ can be arbitrarily large; see Theorem~\ref{thm:spread_psl2}(iv) for example.) \begin{question} \label{que:spread_uniform} Does there exist a constant $c$ such that for all nonabelian finite simple groups $G$ we have $s(G) \leqslant c \cdot u(G)$?
\end{question} There are explicit upper bounds that justify the exceptions in parts~(i) and~(ii) of Theorem~\ref{thm:guralnick_shalev}. Indeed, $s(\mathrm{Sp}_{2m}(q)) \leqslant q$ for even $q$ and $s(\Omega_{2m+1}(q)) \leqslant \frac{1}{2}(q^2+q)$ for odd $q$ (see \cite[Proposition~2.5]{ref:GuralnickShalev03} for a geometric proof). For alternating groups of composite degree $n > 4$, if $p$ is the least prime divisor of $n$, then $s(A_n) \leqslant \binom{2p+1}{3}$ (see \cite[Proposition~2.4]{ref:GuralnickShalev03} for a combinatorial proof). For even-degree alternating groups, the situation is clear: $s(A_n)=4$, but much less is known in odd degrees (see \cite[Section~3.1]{ref:GuralnickShalev03} for partial results). \begin{question} \label{que:spread_alt} What is the (uniform) spread of $A_n$ when $n$ is odd? \end{question} The spread of even-degree alternating groups was determined by Brenner and Wiegold in the paper where they first introduced the notion of spread. They also studied the spread of two-dimensional linear groups, but their claimed value for $s(\mathrm{PSL}_2(q))$ was only proved to be a lower bound. Further work by Burness and Harper demonstrates that this is not an upper bound when $q \equiv 3 \mod{4}$; they prove the following (see \cite[Theorem~5 \& Remark~5]{ref:BurnessHarper20}). \begin{theorem} \label{thm:spread_psl2} Let $G = \mathrm{PSL}_2(q)$ with $q \geq 11$. \begin{enumerate} \item If $q$ is even, then $s(G) = u(G) = q-2$. \item If $q \equiv 1 \mod{4}$, then $s(G) = u(G) = q-1$. \item If $q \equiv 3 \mod{4}$, then $s(G) \geq q-3$ and $u(G) \geq q-4$. \item If $q \equiv 3 \mod{4}$ is prime, then $s(G) \geq \frac{1}{2}(3q-7)$ and $s(G)-u(G) = \frac{1}{2}(q+1)$. \end{enumerate} \end{theorem} \begin{question} \label{que:spread_psl2} What is the (uniform) spread of $\mathrm{PSL}_2(q)$ when $q \equiv 3 \mod{4}$?
\end{question} In short, determining the spread of simple groups is difficult. We conclude by commenting that the precise value of the spread of only two sporadic groups is known, namely $s(\mathrm{M}_{11}) = 3$ \cite{ref:Woldar07} (see also \cite{ref:BradleyHolmes07}) and $s(\mathrm{M}_{23}) = 8064$ \cite{ref:BradleyHolmes07, ref:Fairbairn12JGT}. In contrast, the exact spread and uniform spread of symmetric groups are known. In a series of papers in the late 1960s and early 1970s \cite{ref:Binder68,ref:Binder70,ref:Binder70MZ,ref:Binder73}, Binder determined the spread of $S_n$ and also showed that $u(S_n) \geq 1$ unless $n \in \{4,6\}$ (Binder used different terminology). However, the uniform spread of symmetric groups was only completely determined in a 2021 paper of Burness and Harper \cite{ref:BurnessHarper20}; indeed, showing that $u(S_n) \geq 2$ for even $n > 6$ involves both a long combinatorial argument and a CFSG-dependent group theoretic argument (see \cite[Theorem~3 \& Remark~3]{ref:BurnessHarper20}). We say more on $S_6$ in Example~\ref{ex:spread_s6}. \begin{theorem} \label{thm:spread_sym} Let $G = S_n$ with $n \geq 5$. Then \[ s(G) = \left\{\begin{array}{ll} 2 & \text{if $n$ is even} \\ 3 & \text{if $n$ is odd} \\ \end{array} \right.\quad \text{and} \quad u(G) = \left\{\begin{array}{ll} 0 & \text{if $n=6$} \\ 2 & \text{otherwise.} \\ \end{array} \right. \] \end{theorem} \textbf{Methods. A probabilistic approach. } As we turn to discuss the key method behind these results, we return to Example~\ref{ex:alternating} where we proved that $u(G) \geq 1$ when $G = A_n$ for even $n > 6$. We found an element $s \in G$ contained in a unique maximal subgroup $H$ of $G$. Since $G$ is simple, $H$ is corefree, so $\bigcap_{g \in G} H^g = 1$, which means that for each nontrivial $x \in G$ there exists $g \in G$ such that $x \not\in H^g$.
This implies that $\langle x, s^g \rangle = G$, so $s^G$ witnesses $u(G) \geq 1$. This argument can be generalised in two ways: one yields Lemma~\ref{lem:spread}, giving a better lower bound on the uniform spread of $G$, and the other yields Lemma~\ref{lem:udn_bases}, pertaining to the uniform domination number of $G$, which we will meet in the next section. Lemma~\ref{lem:spread} takes a probabilistic approach, so we need some notation. For a finite group $G$ and elements $x,s \in G$, we write \begin{equation} \label{eq:q} Q(x,s) = \frac{|\{ y \in s^G \mid \langle x,y \rangle \neq G \}|}{|s^G|}, \end{equation} which is the probability that a uniformly random conjugate of $s$ does not generate with $x$, and write $\mathcal{M}(G,s)$ for the set of maximal subgroups of $G$ that contain $s$. \begin{lemma} \label{lem:spread} Let $G$ be a finite group and let $s \in G$. \begin{enumerate} \item For $x \in G$, \[ Q(x,s) \leqslant \sum_{H \in \mathcal{M}(G,s)} \frac{|x^G \cap H|}{|x^G|}. \] \item For a positive integer $k$, if $Q(x,s) < \frac{1}{k}$ for all prime order elements $x \in G$, then $u(G) \geq k$ is witnessed by $s^G$. \end{enumerate} \end{lemma} \begin{proof} For (i), let $x \in G$. Then $\langle x,s^g\rangle \neq G$ if and only if $x \in H^g$, or equivalently $x^{g^{-1}} \in H$, for some $H \in \mathcal{M}(G,s)$. Therefore, \[ Q(x,s) = \frac{|\{y \in s^G \mid \langle x, y \rangle \neq G \}|}{|s^G|} \leqslant \sum_{H \in \mathcal{M}(G,s)} \frac{|x^G \cap H|}{|x^G|}. \] For (ii), fix $k$. To prove that $u(G) \geq k$ is witnessed by $s^G$, it suffices to prove that for all elements $x_1,\dots,x_k \in G$ of prime order there exists $y \in s^G$ such that $\langle x_i,y\rangle=G$ for all $1 \leqslant i \leqslant k$ (for arbitrary nontrivial $x_i$, replace each $x_i$ by a power $x_i^{m_i}$ of prime order and note that $\langle x_i^{m_i}, y \rangle = G$ forces $\langle x_i, y \rangle = G$). Therefore, let $x_1,\dots,x_k \in G$ have prime order.
If $Q(x_i,s) < \frac{1}{k}$ for all $1 \leqslant i \leqslant k$, then \[ \frac{|\{y \in s^G \mid \text{$\langle x_i, y \rangle = G$ for all $1 \leqslant i \leqslant k$} \}|}{|s^G|} \geq 1 - \sum_{i=1}^{k}Q(x_i,s) > 0, \] so there exists $y \in s^G$ such that $\langle x_i,y\rangle=G$ for all $1 \leqslant i \leqslant k$. \end{proof} Therefore, to obtain lower bounds on the uniform spread (and hence spread) of a finite group, it is enough to (a) identify an element whose maximal overgroups $H$ are tightly constrained, and then (b) for each such $H$ and for all prime order $x \in G$, bound the quantity $\frac{|x^G \cap H|}{|x^G|}$. The ratio $\frac{|x^G \cap H|}{|x^G|}$ is the well-studied \emph{fixed point ratio}. More precisely, $\frac{|x^G \cap H|}{|x^G|}$ is nothing other than the proportion of points in $G/H$ fixed by $x$ in the natural action of $G$ on $G/H$. These fixed point ratios, in the context of primitive actions of almost simple groups, have seen many applications via probabilistic methods, not just to spread, but also to base sizes (e.g. the Cameron--Kantor conjecture) and monodromy groups (e.g. the Guralnick--Thompson conjecture), see Burness' survey article \cite{ref:Burness18}. To address task (a), one applies the well-known and extensive literature on the subgroup structure of almost simple groups. For (b), one appeals to the bounds on fixed point ratios of primitive actions of almost simple groups, the most general of which is \cite[Theorem~1]{ref:LiebeckSaxl91} of Liebeck and Saxl. This states that \begin{equation}\label{eq:fpr} \frac{|x^G \cap H|}{|x^G|} \leqslant \frac{4}{3q} \end{equation} for any almost simple group of Lie type $G$ over $\mathbb{F}_q$, maximal subgroup $H \leqslant G$ and nontrivial element $x \in G$, with known exceptions.
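When $G$ is tiny, the quantity $Q(x,s)$ from \eqref{eq:q} can be computed exactly rather than bounded. The pure-Python sketch below (mine) takes $G = A_5$ and $s$ a $5$-cycle; every element of $A_5$ has prime order, and the maximum of $Q(x,s)$ turns out to be $\frac{1}{3} < \frac{1}{2}$, so Lemma~\ref{lem:spread}(ii) witnesses $u(A_5) \geq 2$:

```python
from fractions import Fraction
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens):
    # subgroup generated by gens, as a set of permutation tuples
    identity = tuple(range(len(gens[0])))
    seen, frontier = {identity}, [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for t in gens:
                h = compose(t, g)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
    return seen

def is_even(p):
    seen, swaps = set(), 0
    for i in range(len(p)):
        if i not in seen:
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
                swaps += 1
            swaps -= 1
    return swaps % 2 == 0

A5 = [p for p in permutations(range(5)) if is_even(p)]
identity = tuple(range(5))
s = (1, 2, 3, 4, 0)  # the 5-cycle (0 1 2 3 4)
s_class = {compose(compose(g, s), inverse(g)) for g in A5}
assert len(s_class) == 12

# Q(x, s): proportion of conjugates of s that fail to generate with x
Q = {x: Fraction(sum(len(closure([x, y])) < 60 for y in s_class), len(s_class))
     for x in A5 if x != identity}
max_Q = max(Q.values())
assert max_Q == Fraction(1, 3)  # < 1/2, so u(A_5) >= 2 by Lemma (ii)
```

In general, of course, one relies on fixed point ratio bounds such as \eqref{eq:fpr} rather than exhaustive computation.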
This is essentially best possible, since $\frac{|x^G \cap H|}{|x^G|} \approx q^{-1}$ when $q$ is odd, $G = \mathrm{PGL}_n(q)$, $H$ is the stabiliser of a $1$-space of $\mathbb{F}_q^n$ and $x$ lifts to the diagonal matrix $[-1,1,1, \dots, 1] \in \mathrm{GL}_n(q)$. However, there are much stronger bounds that take into account the particular group $G$, subgroup $H$ or element $x$ (see \cite[Section~2]{ref:Burness18} for a survey). Bounding uniform spread via Lemma~\ref{lem:spread} was the approach introduced by Guralnick and Kantor in their 2000 paper \cite{ref:GuralnickKantor00} where they prove that $u(G) \geq 1$ for all nonabelian finite simple groups $G$. Clearly this approach also easily yields further probabilistic information and we refer the reader to Burness' survey article \cite{ref:Burness19} for much more on this approach. We will give just one example, which we will return to later in the article (see \cite[Example~3.9]{ref:Burness19}). \begin{example} \label{ex:spread_e8} Let $G = E_8(q)$ and let $s$ generate a cyclic maximal torus of order $\Phi_{30}(q) = q^8+q^7-q^5-q^4-q^3+q+1$. Weigel proved that $\mathcal{M}(G,s) = \{ H \}$ where $H = N_G(\langle s \rangle) = \langle s \rangle{:}30$ (see \cite[Section~4(j)]{ref:Weigel92}). Applying Lemma~\ref{lem:spread} with the bound in \eqref{eq:fpr}, for all nontrivial $x \in G$ we have $u(G) \geq 1$ since \[ \sum_{H \in \mathcal{M}(G,s)} \frac{|x^G \cap H|}{|x^G|} \leqslant \frac{4}{3q} \leqslant \frac{2}{3} < 1. \] However, we can do better: $|x^G \cap H| \leqslant |H| \leqslant q^{14}$ and $|x^G| > q^{58}$ for all nontrivial elements $x \in G$, so $u(G) \geq q^{44}$ since \[ \sum_{H \in \mathcal{M}(G,s)} \frac{|x^G \cap H|}{|x^G|} < \frac{1}{q^{44}}.
\] \end{example} While the overwhelming majority of results on (uniform) spread are established via the probabilistic method encapsulated in Lemma~\ref{lem:spread}, there are cases where this approach fails, as the following example highlights. \begin{example} \label{ex:spread_sp} Let $m \geq 3$ and let $G = \mathrm{Sp}_{2m}(2)$. By Theorem~\ref{thm:breuer_guralnick_kantor}, we know that $u(G)=2$. However, if $x$ is a transvection, then $Q(x,s) > \frac{1}{2}$ for all $s \in G$. This is proved in \cite[Proposition~5.4]{ref:BreuerGuralnickKantor08}, and we give an indication of the proof. Every element of $G = \mathrm{Sp}_{2m}(2)$ is contained in a subgroup of type ${\rm O}^+_{2m}(2)$ or ${\rm O}^-_{2m}(2)$ (see \cite{ref:Dye79}, for example). Assume that $s$ is contained in a subgroup $H \cong {\rm O}^-_{2m}(2)$. The groups $\mathrm{Sp}_{2m}(2)$ and ${\rm O}^{\pm}_{2m}(2)$ contain $2^{2m}-1$ and $2^{2m-1} \mp 2^{m-1}$ transvections, respectively, so \[ Q(x,s) \geq \frac{2^{2m-1}+2^{m-1}}{2^{2m}-1} = \frac{2^{m-1}}{2^m-1} > \frac{1}{2}. \] A more involved argument gives $Q(x,s) > \frac{1}{2}$ if $s$ is contained in a subgroup of type ${\rm O}^+_{2m}(2)$ but none of type ${\rm O}^-_{2m}(2)$, relying on $s$ being reducible here. \end{example} \subsection{Uniform domination} \label{ss:finite_udn} We began by observing that any finite simple group $G$ is $\frac{3}{2}$-generated, that is, \begin{equation} \label{eq:generation} \text{for all $x \in G \setminus 1$ there exists $y \in G$ such that $\langle x,y\rangle = G$.} \end{equation} We then looked to strengthen \eqref{eq:generation} by increasing the scope of the first quantifier.
Recall that the \emph{spread} of $G$, denoted $s(G)$, is the greatest $k$ such that \[ \text{for all $x_1, \dots, x_k \in G$ there exists $y \in G$ such that $\langle x_1,y \rangle = \cdots = \langle x_k,y \rangle = G$.} \] We also have a related notion: the \emph{uniform spread} of $G$, denoted $u(G)$, is the greatest $k$ for which there exists an element $s \in G$ such that \[ \text{for all $x_1, \dots, x_k \in G$ there exists $y \in s^G$ such that $\langle x_1,y \rangle = \cdots = \langle x_k,y \rangle = G$.} \] The notion of uniform spread inspires us to strengthen \eqref{eq:generation} by narrowing the range of the second quantifier. That is, we say that the \emph{total domination number} of $G$, denoted $\gamma_t(G)$, is the least size of a subset $S \subseteq G$ such that \[ \text{for all $x \in G \setminus 1$ there exists $y \in S$ such that $\langle x,y \rangle = G$.} \] Again we have a related notion: the \emph{uniform domination number} of $G$, denoted $\gamma_u(G)$, is the least size of a subset $S \subseteq G$ of conjugate elements such that \[ \text{for all $x \in G \setminus 1$ there exists $y \in S$ such that $\langle x,y \rangle = G$.} \] These latter two concepts were introduced by Burness and Harper in \cite{ref:BurnessHarper19} and studied further in \cite{ref:BurnessHarper20}. The terminology is motivated by the generating graph (see Section~\ref{ss:finite_graph}). Let $G$ be a nonabelian finite simple group. Clearly $2 \leqslant \gamma_t(G) \leqslant \gamma_u(G)$, and since $u(G) \geq 1$, there exists a conjugacy class $s^G$ such that $\gamma_u(G) \leqslant |s^G|$. However, the class exhibited in Guralnick and Kantor's proof of $u(G) \geq 1$ is typically very large (for groups of Lie type, $s$ is usually a regular semisimple element), so it is natural to seek tighter upper bounds on $\gamma_u(G)$.
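To make the two domination numbers concrete, both can be computed by exhaustive search in a very small group. The following sketch (an illustration added here, not taken from the literature) computes $\gamma_t$ and $\gamma_u$ for $A_4$ by brute force, working with permutations of $\{0,1,2,3\}$ in one-line notation; both invariants equal $2$, with $\gamma_u(A_4) = 2$ witnessed by a pair of conjugate $3$-cycles generating distinct subgroups of order $3$.

```python
from itertools import combinations, permutations

def compose(p, q):
    # (p*q)(i) = p(q(i)), permutations in one-line notation on {0,1,2,3}
    return tuple(p[q[i]] for i in range(len(q)))

def generated(gens):
    # elements of the subgroup generated by gens
    e = tuple(range(4))
    elems, stack = {e}, [e]
    while stack:
        g = stack.pop()
        for s in gens:
            h = compose(s, g)
            if h not in elems:
                elems.add(h)
                stack.append(h)
    return elems

def sign(p):
    return (-1) ** sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
nontrivial = [g for g in A4 if g != tuple(range(4))]

def dominates(S):
    # every nontrivial x generates A4 (order 12) with some y in S
    return all(any(len(generated((x, y))) == 12 for y in S) for x in nontrivial)

# total domination number: least size of a dominating subset of G \ 1
gamma_t = next(c for c in range(1, 5)
               if any(dominates(S) for S in combinations(nontrivial, c)))

# uniform domination number: the subset must lie in a single conjugacy class
def inverse(g):
    return tuple(sorted(range(4), key=lambda i: g[i]))

classes, seen = [], set()
for x in nontrivial:
    if x not in seen:
        cl = {compose(g, compose(x, inverse(g))) for g in A4}
        seen |= cl
        classes.append(sorted(cl))

gamma_u = next(c for c in range(1, 5)
               if any(dominates(S) for cl in classes for S in combinations(cl, c)))

print(gamma_t, gamma_u)  # both equal 2
```

No singleton works ($\langle s, s \rangle = \langle s \rangle \neq A_4$), and the class of double transpositions fails at every size since two double transpositions only generate $V_4$; this is why the search settles on a pair of $3$-cycles.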
The following result of Burness and Harper does this \cite[Theorems~2, 3 \& 4]{ref:BurnessHarper19} (see \cite[Theorem~4(i)]{ref:BurnessHarper20} for the refined upper bound in (iii)). \begin{theorem} \label{thm:udn} Let $G$ be a nonabelian finite simple group. \begin{enumerate} \item If $G = A_n$, then $\gamma_u(G) \leqslant 77 \log_2{n}$. \item If $G$ is classical of rank $r$, then $\gamma_u(G) \leqslant 7r+70$. \item If $G$ is exceptional, then $\gamma_u(G) \leqslant 5$. \item If $G$ is sporadic, then $\gamma_u(G) \leqslant 4$. \end{enumerate} \end{theorem} In this generality, these bounds are optimal up to constants. For example, if $n \geq 6$ is even, then $\log_2{n} \leqslant \gamma_t(A_n) \leqslant \gamma_u(A_n) \leqslant 2 \log_2{n}$, and if $G$ is $\mathrm{Sp}_{2r}(q)$ with $q$ even or $\Omega_{2r+1}(q)$ with $q$ odd, then $r \leqslant \gamma_t(G) \leqslant \gamma_u(G) \leqslant 7r$ \cite[Theorems~3(i) \& 6.3(iii)]{ref:BurnessHarper20}. Regarding the bounds in (iii) and (iv), for sporadic groups, $\gamma_u(G) = 4$ is witnessed by $G = {\rm M}_{11}$ \cite[Theorem~3]{ref:BurnessHarper19}, but the best lower bound for exceptional groups is $\gamma_u(G) \geq 3$, given by $G = F_4(q)$ \cite[Lemma~6.17]{ref:BurnessHarper20}. \begin{question} \label{que:udn_alt} Does there exist a constant $c$ such that for all $n \geq 5$ we have $\log_p{n} \leqslant \gamma_t(A_n) \leqslant \gamma_u(A_n) \leqslant c \log_p{n}$, where $p$ is the least prime divisor of $n$? \end{question} By \cite[Theorem~4(ii)]{ref:BurnessHarper20}, we know that $\gamma_t(A_n) \geq \log_p{n}$, so to provide an affirmative answer to Question~\ref{que:udn_alt}, it suffices to prove that $\gamma_u(A_n) \leqslant c \log_p{n}$.
\begin{question} \label{que:udn_lie} Does there exist a constant $c$ such that for all finite simple groups of Lie type $G$ other than $\mathrm{Sp}_{2r}(q)$ with $q$ even and $\Omega_{2r+1}(q)$ with $q$ odd, we have $\gamma_u(G) \leqslant c$? \end{question} By Theorem~\ref{thm:udn}, to answer Question~\ref{que:udn_lie}, it suffices to consider classical groups of large rank, and it was shown in \cite[Theorem~6.3(ii)]{ref:BurnessHarper19} that $c=15$ suffices for some families of these groups. Affirmative answers to Questions~\ref{que:udn_alt} and~\ref{que:udn_lie} would answer Question~\ref{que:udn_tdn} too. \begin{question} \label{que:udn_tdn} Does there exist a constant $c$ such that for all nonabelian finite simple groups $G$ we have $\gamma_u(G) \leqslant c \cdot \gamma_t(G)$? \end{question} The smallest possible value of $\gamma_u(G)$ is $2$ (since $G$ is not cyclic), and an almost complete classification of when this is achieved was given in \cite[Corollary~7]{ref:BurnessHarper20}. \begin{theorem} \label{thm:udn_two} Let $G$ be a nonabelian finite simple group. Then $\gamma_u(G) = 2$ only if $G$ is one of the following: \begin{enumerate} \item $A_n$ for prime $n \geq 13$ \item $\mathrm{PSL}_2(q)$ for odd $q \geq 11$ \item[] $\mathrm{PSL}^\varepsilon_n(q)$ for odd $n$, but not $n=3$ with $(q,\varepsilon) \in \{ (2,+), (4,+), (3,-), (5,-) \}$ \item[] ${\rm PSp}_{4m+2}(q)^\ast$ for odd $q$ and $m \geq 2$, and ${\rm P}\Omega^\pm_{4m}(q)^\ast$ for $m \geq 2$ \item ${}^2B_2(q)$, ${}^2G_2(q)$, ${}^2F_4(q)$, ${}^3D_4(q)$, ${}^2E_6(q)$, $E_6(q)$, $E_7(q)$, $E_8(q)$ \item ${\rm M}_{23}$, ${\rm J}_1$, ${\rm J}_4$, ${\rm Ru}$, ${\rm Ly}$, ${\rm O'N}$, ${\rm Fi}_{23}$, ${\rm Th}$, $\mathbb{B}$, $\mathbb{M}$ or ${\rm J}_3^\ast$, ${\rm He}^\ast$, ${\rm Co}_1^\ast$, ${\rm HN}^\ast$.
\end{enumerate} Moreover, $\gamma_u(G) = 2$ in all the cases without an asterisk. \end{theorem} We will say that a subset $S \subseteq G$ of conjugate elements of $G$ is a \emph{uniform dominating set} of $G$ if for all nontrivial $x \in G$ there exists $y \in S$ such that $\langle x, y \rangle = G$, so $\gamma_u(G)$ is the smallest size of a uniform dominating set of $G$. For groups $G$ such that $\gamma_u(G) = 2$, we know that there exists a uniform dominating set of size two. How abundant are such subsets? To this end, let $P(G,s,2)$ be the probability that two random conjugates of $s$ form a uniform dominating set for $G$, and let $P(G) = \max\{ P(G,s,2) \mid s \in G \}$. Then we have the following probabilistic result \cite[Corollary~8 \& Theorem~9]{ref:BurnessHarper20}. \begin{theorem} \label{thm:udn_prob} Let $(G_i)$ be a sequence of nonabelian finite simple groups such that $|G_i| \to \infty$. Assume that $\gamma_u(G_i) = 2$ and $G_i \not\in \{ {\rm PSp}_{4m+2}(q) \mid \text{odd $q$, $m \geq 2$}\} \cup \{ {\rm P}\Omega^\pm_{4m}(q) \mid \text{$m \geq 2$} \} \cup \{ {\rm J}_3, {\rm He}, {\rm Co}_1, {\rm HN} \}$. Then \[ P(G_i) \to \left\{ \begin{array}{ll} \frac{1}{2} & \text{if $G_i = \mathrm{PSL}_2(q)$} \\ 1 & \text{otherwise.} \end{array} \right. \] Moreover, $P(G_i) \leqslant \frac{1}{2}$ if and only if $G_i =\mathrm{PSL}_2(q)$ for $q \equiv 3 \mod{4}$ or $G_i \in \{ A_{13}, {\rm PSU}_5(2), {\rm Fi}_{23} \}$. \end{theorem} \textbf{Methods. Bases of permutation groups. } Let us now discuss the methods used in \cite{ref:BurnessHarper19,ref:BurnessHarper20} to bound $\gamma_u(G)$. Here there is a very pleasing connection with an entirely different topic in permutation group theory: bases.
For a group $G$ acting faithfully on a set $\Omega$, a subset $B \subseteq \Omega$ is a \emph{base} if the pointwise stabiliser $G_{(B)}$ is trivial. Since $G$ acts faithfully, the entire domain $\Omega$ is a base, so we naturally ask for the smallest size of a base, which we call the \emph{base size} $b(G,\Omega)$. To turn this combinatorial notion into an algebraic one, we observe that when $G$ acts on $G/H$, a subset $\{ Hg_1, \dots, Hg_c \}$ is a base if and only if $\bigcap_{i=1}^c H^{g_i} = 1$, so $b(G,G/H)$ is the smallest number of conjugates of $H$ whose intersection is trivial. Bases have been studied for over a century, and the base size has been at the centre of several recently proved conjectures, such as Pyber's conjecture that there is a constant $c$ such that $\frac{\log|G|}{\log|\Omega|} \leqslant b(G, \Omega) \leqslant c \frac{\log|G|}{\log|\Omega|}$ for all primitive groups $G \leqslant \mathrm{Sym}(\Omega)$ (see \cite{ref:DuyanHalasiMaroti18}), and Cameron's conjecture that $b(G, \Omega) \leqslant 7$ for nonstandard primitive almost simple groups $G \leqslant \mathrm{Sym}(\Omega)$ (see \cite{ref:BurnessLiebeckShalev09}). There is an ambitious ongoing programme of work, initiated by Saxl, to provide a complete classification of the primitive groups $G \leqslant \mathrm{Sym}(\Omega)$ with $b(G,\Omega) = 2$. There are numerous partial results in this direction, and we give just one, as we will use it below. Burness and Thomas \cite{ref:BurnessThomas} proved that if $G$ is a simple group of Lie type and $T$ is a maximal torus, then $b(G,G/N_G(T)) = 2$, apart from a few known low rank exceptions. The following result is the bridge that connects bases with uniform domination (see \cite[Corollaries~2.2 \&~2.3]{ref:BurnessHarper19}). \begin{lemma} \label{lem:udn_bases} Let $G$ be a finite group and let $s \in G$. \begin{enumerate} \item Assume that $\mathcal{M}(G,s) = \{ H \}$ and $H$ is corefree.
Then the smallest uniform dominating set $S \subseteq s^G$ satisfies $|S|=b(G,G/H)$. \item Assume that $H \in \mathcal{M}(G,s)$ is corefree. Then every uniform dominating set $S \subseteq s^G$ satisfies $|S| \geq b(G,G/H)$. \end{enumerate} \end{lemma} \begin{proof} For (i), note that $x \in H$ if and only if $\langle x, s \rangle \neq G$. Hence, $\{ s^{g_1}, \dots, s^{g_c} \}$ is a uniform dominating set if and only if $\bigcap_{i=1}^{c} H^{g_i} = 1$; in other words, if and only if $\{ g_1, \dots, g_c \}$ is a base for $G$ acting on $G/H$. The result follows. For (ii), if $x \in H$, then $\langle x, s \rangle \neq G$. Therefore, if $\{ s^{g_1}, \dots, s^{g_c} \}$ is a uniform dominating set, then $\bigcap_{i=1}^{c} H^{g_i} = 1$, so $\{ g_1, \dots, g_c \}$ is a base for $G$ acting on $G/H$ and, consequently, $c \geq b(G, G/H)$. \end{proof} Let us explain how Lemma~\ref{lem:udn_bases} applies. Part~(i) gives an upper bound: if we can find $s \in G$ such that $\mathcal{M}(G,s) = \{H\}$ and $b(G,G/H) \leqslant c$, then $\gamma_u(G) \leqslant c$. Part~(ii) gives a lower bound: if we can show that for all $s \in G$ there exists $H \in \mathcal{M}(G,s)$ with $b(G,G/H) \geq c$, then $\gamma_u(G) \geq c$. We give two examples to show how we do this in practice. \begin{example} \label{ex:udn_e8} Let $G = E_8(q)$ and let $s$ generate a cyclic maximal torus of order $\Phi_{30}(q) = q^8+q^7-q^5-q^4-q^3+q+1$. As noted in Example~\ref{ex:spread_e8}, $\mathcal{M}(G,s) = \{ H \}$ where $H$ is the normaliser of the torus $\langle s \rangle$. Now, applying Burness and Thomas' result \cite[Theorem~1]{ref:BurnessThomas} mentioned above, we see that $b(G,G/H)=2$, so Lemma~\ref{lem:udn_bases} implies that $\gamma_u(G) = 2$. \end{example} \begin{example} \label{ex:udn_an} Let $n > 6$ be even and let $G = A_n$.
We will give upper and lower bounds on $\gamma_u(G)$ via Lemma~\ref{lem:udn_bases}. Seeking an upper bound on $\gamma_u(G)$, let $s = (1 \, 2 \, \dots \, l)(l+1 \, l+2 \, \dots \, n)$ where $l \in \{\frac{n}{2}-1, \frac{n}{2}-2\}$ is odd. As we showed in Example~\ref{ex:alternating}, $\mathcal{M}(G,s) = \{H\}$ where $H \cong (\mathrm{S}_l \times \mathrm{S}_{n-l}) \cap A_n$. The action of $A_n$ on $A_n/H$ is just the action of $A_n$ on the set of $l$-subsets of $\{1, 2, \dots, n\}$. The base size of this action was studied by Halasi, and by \cite[Theorem~4.2]{ref:Halasi12}, we have $b(G,G/H) \leqslant \left\lceil \log_{\lceil n/l \rceil} n \right\rceil \cdot (\lceil n/l \rceil - 1) \leqslant 2 \log_2{n}$. Applying Lemma~\ref{lem:udn_bases}(i) gives $\gamma_u(G) \leqslant 2\log_2{n}$. Turning to a lower bound, note that every element of $G$ is contained in a subgroup $K$ of type $(\mathrm{S}_k \times \mathrm{S}_{n-k}) \cap A_n$ for some $0 < k < n$. By \cite[Theorem~3.1]{ref:Halasi12}, we have $b(G, G/K) \geq \log_2{n}$. Applying Lemma~\ref{lem:udn_bases}(ii) gives $\gamma_u(G) \geq \log_2{n}$. \end{example} We now address the general case, where $s$ is not necessarily contained in a unique maximal subgroup of $G$. In the spirit of how uniform spread was studied, a probabilistic approach is adopted. Write $Q(G,s,c)$ for the probability that a random $c$-tuple of elements of $s^G$ does not give a uniform dominating set of $G$, and write $\mathcal{P}(G)$ for the set of prime order elements of $G$. The main lemma is \cite[Lemma~2.5]{ref:BurnessHarper19}. \begin{lemma} \label{lem:udn_prob} Let $G$ be a finite group and let $s \in G$. \begin{enumerate} \item For all positive integers $c$, we have \[ Q(G,s,c) \leqslant \sum_{x \in \mathcal{P}(G)} \left(\sum_{H \in \mathcal{M}(G,s)}\frac{|x^G \cap H|}{|x^G|}\right)^c.
\] \item If $Q(G,s,c) < 1$ for some positive integer $c$, then $\gamma_u(G) \leqslant c$. \end{enumerate} \end{lemma} \begin{proof} Part~(ii) is immediate. For part~(i), $\{s^{g_1}, \dots, s^{g_c}\}$ is not a uniform dominating set of $G$ if and only if there exists a prime order element $x \in G$ such that $\langle x, s^{g_i} \rangle \neq G$ for all $1 \leqslant i \leqslant c$. Since $Q(x,s)$ is the probability that $x$ does not generate $G$ with a random conjugate of $s$ (see \eqref{eq:q}), this implies that $Q(G,s,c) \leqslant \sum_{x \in \mathcal{P}(G)} Q(x,s)^c$. The result follows from Lemma~\ref{lem:spread}(i). \end{proof} Could Lemma~\ref{lem:udn_prob} yield a better bound than Lemma~\ref{lem:udn_bases} when $s$ satisfies $\mathcal{M}(G,s) = \{H\}$? In this case, $Q(G,s,c)$ is nothing other than the probability that a random $c$-tuple of elements of $G/H$ fails to be a base, and $\sum_{x \in \mathcal{P}(G)} \left(\frac{|x^G \cap H|}{|x^G|}\right)^c$ is the upper bound for $Q(G,s,c)$ used, first by Liebeck and Shalev \cite{ref:LiebeckShalev99} and then by numerous others since, to obtain upper bounds on the base size $b(G,G/H)$. Therefore, Lemma~\ref{lem:udn_prob} has nothing new to offer in this special case. We conclude with an example of Lemma~\ref{lem:udn_prob} in action. This establishes a (typical) special case of Theorem~\ref{thm:udn}(ii). \begin{example} \label{ex:udn_prob} Let $n \geq 10$ be even and let $G = \mathrm{PSL}_n(q)$. We proceed similarly to Example~\ref{ex:udn_an}, by fixing odd $l \in \{\frac{n}{2}-1, \frac{n}{2}-2\}$ and then letting $s$ lift to a block diagonal matrix \scalebox{0.75}{$\left( \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right)$} where $A \in \mathrm{SL}_l(q)$ and $B \in \mathrm{SL}_{n-l}(q)$ are irreducible.
The order of $s$ is divisible by a \emph{primitive prime divisor} of $q^{n-l}-1$ (a prime divisor coprime to $q^k-1$ for each $1 \leqslant k < n-l$). Using the framework of Aschbacher's theorem on the subgroup structure of classical groups \cite{ref:Aschbacher84}, Guralnick, Penttila, Praeger and Saxl classified the subgroups of $\mathrm{GL}_n(q)$ that contain an element whose order is a primitive prime divisor of $q^m-1$ with $m > \frac{n}{2}$ \cite{ref:GuralnickPenttilaPraegerSaxl97}. With this we deduce that $\mathcal{M}(G,s) = \{ G_U, G_V \}$, where $U$ and $V$ are the evident invariant subspaces of dimension $l$ and $n-l$, respectively. The fixed point ratio for classical groups acting on the set of $k$-subspaces of their natural module was studied by Guralnick and Kantor, and by \cite[Proposition~3.1]{ref:GuralnickKantor00}, if $G = \mathrm{PSL}_n(q)$ and $H$ is the stabiliser of a $k$-subspace of $\mathbb{F}_q^n$, then $\frac{|x^G \cap H|}{|x^G|} < \frac{2}{q^k}$. Applying Lemma~\ref{lem:udn_prob} gives $\gamma_u(G) \leqslant 2n+15$ since \[ Q(G,s,2n+15) \leqslant \sum_{x \in \mathcal{P}(G)} \left(\sum_{H \in \mathcal{M}(G,s)}\frac{|x^G \cap H|}{|x^G|}\right)^{2n+15} \!\!\leqslant q^{n^2-1} \left( 2 \cdot \frac{2}{q^{\frac{n}{2}-2}} \right)^{2n+15} \!\!< 1. \] \end{example} \subsection{The spread of a finite group} \label{ss:finite_bgh} We now look beyond finite simple groups and ask the general question: for which finite groups $G$ is every nontrivial element contained in a generating pair? Brenner and Wiegold's original 1975 paper gives a comprehensive answer for finite soluble groups (see \cite[Theorem~2.01]{ref:BrennerWiegold75} for even more detail). \begin{theorem} \label{thm:brenner_wiegold} Let $G$ be a finite soluble group. The following are equivalent: \begin{enumerate} \item $s(G) \geq 1$ \item $s(G) \geq 2$ \item every proper quotient of $G$ is cyclic.
\end{enumerate} \end{theorem} The equivalence of (i) and (ii) shows that no finite soluble group $G$ satisfies $s(G)=1$, so Brenner and Wiegold asked the following question \cite[Problem~1.04]{ref:BrennerWiegold75}. \begin{question} \label{que:brenner_wiegold} Which finite groups $G$ satisfy $s(G) = 1$? In particular, are there perhaps only finitely many such groups? \end{question} The condition in (iii) is necessary for every nontrivial element of $G$ to be contained in a generating pair, and this is true for an arbitrary group $G$. To see this, assume that every nontrivial element of $G$ is contained in a generating pair. Let $1 \neq N \trianglelefteqslant G$ and let $1 \neq n \in N$. Then there exists $g \in G$ such that $G = \langle n, g \rangle$, so $G/N = \langle Nn, Ng \rangle = \langle Ng \rangle$, which is cyclic. In 2008, Breuer, Guralnick and Kantor conjectured that this condition is also sufficient for all finite groups \cite[Conjecture~1.8]{ref:BreuerGuralnickKantor08}. \begin{conjecture} \label{con:breuer_guralnick_kantor} Let $G$ be a finite group. Then $s(G) \geq 1$ if and only if every proper quotient of $G$ is cyclic. \end{conjecture} Completing a long line of research in this direction, both Question~\ref{que:brenner_wiegold} and Conjecture~\ref{con:breuer_guralnick_kantor} were settled by Burness, Guralnick and Harper in 2021 \cite{ref:BurnessGuralnickHarper21}. \begin{theorem} \label{thm:burness_guralnick_harper} Let $G$ be a finite group. Then $s(G) \geq 2$ if and only if every proper quotient of $G$ is cyclic. \end{theorem} \begin{corollary} \label{cor:burness_guralnick_harper} No finite group $G$ satisfies $s(G)=1$. \end{corollary} The next example (which is \cite[Remark~2.16]{ref:BurnessGuralnickHarper21}) shows that Theorem~\ref{thm:burness_guralnick_harper} does not hold if spread is replaced with uniform spread (recall Theorem~\ref{thm:spread_sym}).
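The necessity argument above is easy to see in a concrete case: $\mathrm{S}_4$ has the noncyclic proper quotient $\mathrm{S}_4/V_4 \cong \mathrm{S}_3$, so no element of the normal subgroup $V_4$ lies in a generating pair. The following brute-force check (an illustrative sketch added here, not from the source; permutations are written in one-line notation on $\{0,1,2,3\}$) verifies this for the double transposition $(1\,2)(3\,4)$, and contrasts it with a $4$-cycle, which does lie in a generating pair.

```python
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def generated(gens):
    # elements of the subgroup generated by gens
    e = tuple(range(4))
    elems, stack = {e}, [e]
    while stack:
        g = stack.pop()
        for s in gens:
            h = compose(s, g)
            if h not in elems:
                elems.add(h)
                stack.append(h)
    return elems

S4 = list(permutations(range(4)))
x = (1, 0, 3, 2)  # the double transposition (1 2)(3 4), an element of V4
y = (1, 2, 3, 0)  # the 4-cycle (1 2 3 4)

# x lies in the normal subgroup V4 and S4/V4 is noncyclic, so x is in no generating pair
assert all(len(generated((x, g))) < 24 for g in S4)

# by contrast, y generates S4 together with the transposition (1 2)
assert len(generated((y, (1, 0, 2, 3)))) == 24
print("(1 2)(3 4) lies in no generating pair of S4")
```

In the language of the next subsection, $(1\,2)(3\,4)$ is an isolated vertex of the generating graph of $\mathrm{S}_4$.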
\begin{example} \label{ex:spread_s6} Let $G = \mathrm{S}_n$ where $n \geq 6$ is even. Suppose that $u(G) > 0$ is witnessed by the class $s^G$. Since a conjugate of $s$ generates with $(1 \, 2 \, 3)$, $s$ must be an odd permutation. Since a conjugate of $s$ generates with $(1 \, 2)$, $s$ must have at most two cycles. Since $n$ is even, it follows that $s$ is an $n$-cycle. However, if $n=6$ and $a \in \mathrm{Aut}(G) \setminus G$, then $s^a \in ((1 \, 2 \, 3)(4 \, 5))^G$ also witnesses $u(G) > 0$, which is a contradiction. Therefore, $u(\mathrm{S}_6)=0$. \end{example} However, Example~\ref{ex:spread_s6} is essentially the only obstacle to a result for uniform spread analogous to Theorem~\ref{thm:burness_guralnick_harper} on spread. Indeed, Burness, Guralnick and Harper gave the following complete description of the finite groups $G$ with $u(G) < 2$ \cite[Theorem~3]{ref:BurnessGuralnickHarper21}. (Note that the uniform spread of abelian groups $G$ is not interesting: $u(G) = \infty$ if $G$ is cyclic and $u(G) = 0$ otherwise.) \begin{theorem} \label{thm:burness_guralnick_harper_uniform} Let $G$ be a nonabelian finite group such that every proper quotient is cyclic. Then \begin{enumerate} \item $u(G) = 0$ if and only if $G = \mathrm{S}_6$ \item $u(G) = 1$ if and only if $G$ has a unique minimal normal subgroup $N = T_1 \times \cdots \times T_k$ where $k \geq 2$, $T_i = A_6$ and $N_G(T_i)/C_G(T_i) = \mathrm{S}_6$ for all $1 \leqslant i \leqslant k$. \end{enumerate} \end{theorem} Theorem~\ref{thm:burness_guralnick_harper_uniform} emphasises the anomalous behaviour of $\mathrm{S}_6$: it is the only almost simple group $G$ where every proper quotient of $G$ is cyclic but $u(G) < 2$. \textbf{Methods. A reduction theorem and Shintani descent. } We now outline the proof of Theorems~\ref{thm:burness_guralnick_harper} and~\ref{thm:burness_guralnick_harper_uniform} in \cite{ref:BurnessGuralnickHarper21}.
We need to consider the finite groups $G$ all of whose proper quotients are cyclic. In light of Theorem~\ref{thm:brenner_wiegold}, we will assume that $G$ is insoluble, so $G$ has a unique minimal normal subgroup $T^k$ for a nonabelian simple group $T$, and we can assume that $G = \langle T^k, s \rangle$ where $s = (a, 1, \dots, 1)\sigma \in \mathrm{Aut}(T^k)$ for $a \in \mathrm{Aut}(T)$ and $\sigma = (1 \, 2 \, \dots \, k) \in \mathrm{S}_k$. Let us ignore $T = A_6$, owing to the complications we have already seen in this case. The first major step in the proof of Theorems~\ref{thm:burness_guralnick_harper} and~\ref{thm:burness_guralnick_harper_uniform} is the following reduction theorem \cite[Theorem~2.13]{ref:BurnessGuralnickHarper21}. \begin{theorem} \label{thm:bgh_reduction} Fix a nonabelian finite simple group $T$ and assume that $T \neq A_6$. Fix $s = (a, 1, \dots, 1)\sigma \in \mathrm{Aut}(T^k)$ with $a \in \mathrm{Aut}(T)$ and $\sigma = (1 \, 2 \, \dots \, k) \in \mathrm{S}_k$. Then $s$ witnesses $u(\langle T^k,s \rangle) \geq 2$ if the following hold: \begin{enumerate} \item $a$ witnesses $u(\langle T, a \rangle) \geq 2$ \item $\langle a \rangle \cap T \neq 1$, and if $a$ is a square in $\mathrm{Aut}(T)$, then $|\langle a \rangle \cap T|$ does not divide $4$. \end{enumerate} \end{theorem} The second major step is to generalise Breuer, Guralnick and Kantor's result, Theorem~\ref{thm:breuer_guralnick_kantor}, that $u(T) \geq 2$ for all nonabelian finite simple groups $T$ to all almost simple groups $A = \langle T, a \rangle$. This long line of research was initiated by Burness and Guest for $T = \mathrm{PSL}_n(q)$ \cite{ref:BurnessGuest13}, continued by Harper for the remaining classical groups $T$ \cite{ref:Harper17,ref:HarperLNM} and completed by Burness, Guralnick and Harper for exceptional groups $T$ \cite{ref:BurnessGuralnickHarper21}.
(We have already noted that the proof of $u(\mathrm{S}_n) \geq 2$ for $n \neq 6$ was completed by Burness and Harper in \cite{ref:BurnessHarper20}, and the result for sporadic groups follows from computational work in \cite{ref:BreuerGuralnickKantor08}.) We conclude by highlighting the major obstacle that this body of work faced and then outlining the technique that overcame this obstacle. Let $G = \langle T, g \rangle$ where $T$ is a finite simple group of Lie type and $g \in \mathrm{Aut}(T)$. Suppose $s^G$ witnesses $u(G) \geq 2$. A conjugate of $s$ generates with any element of $T$, so $G = \langle T, s \rangle$. By replacing $s$ with a power if necessary, we may assume that $s \in Tg$. How do we describe elements of $Tg$ and their overgroups? Suppose that $T = \mathrm{PSL}_n(q)$ with $q=p^f$. If $g \in {\rm PGL}_n(q)$, then there are geometric techniques available, but what if $g$ is, say, the field automorphism $(a_{ij}) \mapsto (a_{ij}^p)$? The technique that was used to answer these questions is known as \emph{Shintani descent}. This was introduced by Shintani in 1976 \cite{ref:Shintani76} (and generalised by Kawanaka in \cite{ref:Kawanaka77}) to study irreducible characters of almost simple groups. However, as first exploited by Fulman and Guralnick in their work on the Boston--Shalev conjecture \cite{ref:FulmanGuralnick12}, Shintani descent also provides a fruitful way of studying the conjugacy classes of almost simple groups. The main theorem is the following, and we follow Deshpande's proof \cite{ref:Deshpande16}. (Here $\sigma_i$ is considered as an element of $\langle X, \sigma_i \rangle$ for $i \in \{1,2\}$.) \begin{theorem} \label{thm:shintani} Let $X$ be a connected algebraic group, and let $\sigma_1, \sigma_2\colon X \to X$ be commuting Steinberg endomorphisms. Then there is a bijection \[ F\colon \{ \text{$X_{\sigma_1}$-classes in $X_{\sigma_1}\sigma_2$} \} \to \{ \text{$X_{\sigma_2}$-classes in $X_{\sigma_2}\sigma_1$} \}.
\] \end{theorem} \begin{proof} Let $S$ be the set of orbits of $\{ (g,h) \in X\sigma_2 \times X\sigma_1 \mid [g,h]=1 \}$ under the conjugation action of $X$. By the Lang--Steinberg theorem, $S$ is in bijection with the orbits of $\{ (x\sigma_2, \sigma_1) \mid x \in X_{\sigma_1} \}$ under conjugation by $X_{\sigma_1}$, and also with the orbits of $\{ (\sigma_2, y\sigma_1) \mid y \in X_{\sigma_2} \}$ under conjugation by $X_{\sigma_2}$. This provides a bijection between the $X_{\sigma_1}$-classes in $X_{\sigma_1}\sigma_2$ and the $X_{\sigma_2}$-classes in $X_{\sigma_2}\sigma_1$. \end{proof} The bijection in the proof of Theorem~\ref{thm:shintani}, known as the \emph{Shintani map} of $(X,\sigma_1,\sigma_2)$, has desirable properties. For instance, if $\sigma_1 = \sigma_2^e$ for $e \geq 1$, then \begin{equation} \label{eq:shintani} F((x\sigma_2)^{X_{\sigma_1}}) = (a^{-1}(x\sigma_2)^{-e}a)^{X_{\sigma_2}} \end{equation} for some $a \in X$. Moreover, if $F(g^{X_{\sigma_1}}) = h^{X_{\sigma_2}}$, then it is easy to show that $C_{X_{\sigma_1}}(g) \cong C_{X_{\sigma_2}}(h)$, and, by now, extensive information is also available about how the maximal overgroups of $g$ in $\langle X_{\sigma_1}, \sigma_2 \rangle$ relate to the maximal overgroups of $h$ in $\langle X_{\sigma_2}, \sigma_1 \rangle$. These latter results are crucial to proving Theorem~\ref{thm:burness_guralnick_harper} for almost simple groups of Lie type. The first result in this direction is due to Burness and Guest \cite[Corollary~2.15]{ref:BurnessGuest13}, and the subsequent developments are unified by Harper in \cite{ref:Harper21}, to which we refer the reader for further detail. \begin{example} \label{ex:bgh} We sketch how $u(G) \geq 2$ was proved for $G = \langle T, g \rangle$ when $T = \Omega^+_{2m}(q)$ with $q=2^f$ and $g$ is the field automorphism $\varphi\colon(a_{ij}) \mapsto (a_{ij}^2)$.
We will not make further assumptions on $q$, but we will assume that $m$ is large. Let $X$ be the simple algebraic group ${\rm SO}_{2m}(\overline{\mathbb{F}}_2)$ and let $F$ be the Shintani map of $(X,\varphi^f,\varphi)$. Then $G = \langle X_{\varphi^f}, \varphi \rangle$, and writing $G_0 = X_{\varphi} = \Omega^+_{2m}(2)$, we observe that $F$ gives a bijection between the conjugacy classes in $Tg$ and those in $G_0$. We define $s \in Tg$ such that $F(s^G) = s_0^{G_0}$ for a well chosen element $s_0 \in G_0$. In particular, \eqref{eq:shintani} implies that $s_0$ is $X$-conjugate to a power of $s$. To define $s_0$, fix $k$ such that $m-k$ is even and $\frac{\sqrt{2m}}{4} < 2k < \frac{\sqrt{2m}}{2}$, fix $A \in \Omega^-_{2k}(2)$ and $B \in \Omega^-_{2m-2k}(2)$ of orders $2^k+1$ and $2^{m-k}+1$, respectively, and let $s_0 = \text{\scalebox{0.75}{$\left( \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right)$}} \in \Omega^+_{2m}(2)$. We now study $\mathcal{M}(G,s)$. First, a power of $s_0$ (and hence of $s$) has a $1$-eigenspace of codimension $2k < \frac{\sqrt{2m}}{2}$, so \cite[Theorem~7.1]{ref:GuralnickSaxl03} implies that $s$ is not contained in any local or almost simple maximal subgroup of $G$. Next, a power of $s_0$ has order $2^{m-k}+1$, which is divisible by the \emph{primitive part} of $2^{2m-2k}-1$ (the largest divisor of $2^{2m-2k}-1$ that is prime to $2^l-1$ for all $0 < l < 2m-2k$). For sufficiently large $m$, we can apply the main theorem of \cite{ref:GuralnickPenttilaPraegerSaxl97} to deduce that all of the maximal overgroups of $s_0$ in $G_0 = \Omega^+_{2m}(2)$ are reducible. In particular, the only maximal overgroup of $s_0$ in $G_0$ arising as the set of fixed points of a closed positive-dimensional $\varphi$-stable subgroup of $X$ is the obvious reducible subgroup of type $({\rm O}^-_{2k}(2) \times {\rm O}^-_{2m-2k}(2)) \cap G_0$.
Now the theory of Shintani descent \cite[Theorem~4]{ref:Harper21} implies that the only such maximal overgroup of $s$ in $G$ is one subgroup $H$ of type $({\rm O}^\pm_{2k}(q) \times {\rm O}^\pm_{2m-2k}(q)) \cap G$. Drawing these observations together and using Aschbacher's subgroup structure theorem \cite{ref:Aschbacher84}, we deduce that $\mathcal{M}(G,s) = \{H\} \cup \mathcal{M}'$, where $\mathcal{M}'$ consists of subfield subgroups. There are at most $\log_2{f}+1 = \log_2\log_2 q + 1$ classes of maximal subfield subgroups of $G$, and by \cite[Lemma~2.19]{ref:BurnessGuest13}, $s$ is contained in at most $|C_{G_0}(s_0)| = (2^k+1)(2^{m-k}+1)$ conjugates of a fixed maximal subgroup. Using the fixed point ratio bounds proved for reducible subgroups in \cite{ref:GuralnickKantor00} and irreducible subgroups in \cite{ref:Burness071,ref:Burness072,ref:Burness073,ref:Burness074}, for all nontrivial $x \in G$ we have \[ Q(x,s) < \sum_{H \in \mathcal{M}(G,s)} \frac{|x^G \cap H|}{|x^G|} < \frac{5}{q^{\sqrt{n}/4}} + (\log_2\log_2{q}+1)(2^k+1)(2^{m-k}+1) \frac{2}{q^{n-3}}. \] In particular, for sufficiently large $m$, $Q(x,s) < \frac{1}{2}$ and $u(G) \geq 2$. Moreover, $Q(x,s) \to 0$ and $u(G) \to \infty$ as $m \to \infty$ or, for sufficiently large $m$, as $q \to \infty$. \end{example} \subsection{The generating graph} \label{ss:finite_graph} In this section, we introduce a combinatorial object that gives a way to visualise the concepts we have introduced so far. The \emph{generating graph} of a group $G$, denoted $\Gamma(G)$, is the graph whose vertex set is $G \setminus 1$ and in which two vertices $g, h \in G$ are adjacent if $\langle g, h \rangle = G$. See Figure~\ref{fig:graph} for examples.
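The generating graph of $A_4$ shown in Figure~\ref{fig:graph} can be reconstructed directly by brute force. The following sketch (an illustration added here, not from the source; permutations in one-line notation on $\{0,1,2,3\}$) builds $\Gamma(A_4)$ and checks that it has no isolated vertices and that any two vertices have a common neighbour, consistent with $s(A_4) \geq 2$ for this soluble group with all proper quotients cyclic.

```python
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def generated(gens):
    # elements of the subgroup generated by gens
    e = tuple(range(4))
    elems, stack = {e}, [e]
    while stack:
        g = stack.pop()
        for s in gens:
            h = compose(s, g)
            if h not in elems:
                elems.add(h)
                stack.append(h)
    return elems

def sign(p):
    return (-1) ** sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
V = [g for g in A4 if g != tuple(range(4))]  # vertices of Gamma(A4)
adj = {(g, h) for g in V for h in V if g != h and len(generated((g, h))) == 12}

# no isolated vertices: every nontrivial element lies in a generating pair
assert all(any((g, h) in adj for h in V) for g in V)

# any two vertices have a common neighbour, so the diameter is at most two
assert all(any((g, z) in adj and (h, z) in adj for z in V)
           for g in V for h in V if g != h)
print("Gamma(A4): no isolated vertices, diameter at most 2")
```

The only non-edges among nontrivial elements are the pairs of double transpositions (which generate only $V_4$) and the pairs $\{x, x^{-1}\}$ of $3$-cycles, exactly as drawn in the figure.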
\begin{figure}[t] \begin{center} \begin{tikzpicture}[scale=0.42] \node[white] (0) at (270:8cm) {}; \node[draw,circle,minimum size=0.8cm] (1) at (45 :5.5cm) {$ab$}; \node[draw,circle,minimum size=0.8cm] (2) at (135 :5.5cm) {$b$}; \node[draw,circle,minimum size=0.8cm] (3) at (225 :5.5cm) {$a^3b$}; \node[draw,circle,minimum size=0.8cm] (4) at (315 :5.5cm) {$a^2b$}; \node[draw,circle,minimum size=0.8cm] (5) at (0:1.5cm) {$a^3$}; \node[draw,circle,minimum size=0.8cm] (6) at (180:1.5cm) {$a$}; \node[draw,circle,minimum size=0.8cm] (8) at (270:6.5cm) {$a^2$}; \path (1) edge[-] (2); \path (2) edge[-] (3); \path (3) edge[-] (4); \path (4) edge[-] (1); \path (1) edge[-] (5); \path (2) edge[-] (5); \path (3) edge[-] (5); \path (4) edge[-] (5); \path (1) edge[-] (6); \path (2) edge[-] (6); \path (3) edge[-] (6); \path (4) edge[-] (6); \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=0.5,circle,inner sep=0.01cm,minimum size=0.66cm] \node[draw] (1) at (90-360/11 :5cm) {\tiny{(1\,2)(3\,4)}}; \node[draw] (2) at (90 :5cm) {\tiny{(1\,3)(2\,4)}}; \node[draw] (3) at (90+360/11 :5cm) {\tiny{(1\,4)(2\,3)}}; \node[draw] (4) at (90+360/11*2: 5cm) {\tiny{(1\,2\,3)}}; \node[draw] (5) at (90+360/11*3: 5cm) {\tiny{(1\,3\,2)}}; \node[draw] (6) at (90+360/11*4: 5cm) {\tiny{(1\,2\,4)}}; \node[draw] (7) at (90+360/11*5: 5cm) {\tiny{(1\,4\,2)}}; \node[draw] (8) at (90+360/11*6: 5cm) {\tiny{(1\,3\,4)}}; \node[draw] (9) at (90+360/11*7: 5cm) {\tiny{(1\,4\,3)}}; \node[draw] (10) at (90+360/11*8: 5cm) {\tiny{(2\,3\,4)}}; \node[draw] (11) at (90+360/11*9: 5cm) {\tiny{(2\,4\,3)}}; \foreach \x in {1,...,3} \foreach \y in {4,...,11} {\path (\x) edge[-] (\y);} \path (5) edge[-] (6); \path (7) edge[-] (8); \path (9) edge[-] (10); \path (11) edge[-] (4); \foreach \x in {4,...,11} \foreach \y in {4,...,11} { \pgfmathtruncatemacro{\a}{\x-\y}; \ifnum\a=0{} \else { \ifnum\a=1{} \else {
\ifnum\a=-1{} \else { \path (\x) edge[-] (\y); } \fi } \fi } \fi } \end{tikzpicture} \end{center} \caption{The generating graphs of $D_8 = \langle a, b \mid a^4 = 1, b^2 = 1, a^b = a^{-1} \rangle$ and the alternating group $A_4$.} \label{fig:graph} \end{figure} A question that immediately comes to mind is: when is $\Gamma(G)$ connected? This question has a remarkably straightforward answer. \begin{theorem} \label{thm:generating_graph} Let $G$ be a finite group. Then the following are equivalent: \begin{enumerate} \item $\Gamma(G)$ has no isolated vertices \item $\Gamma(G)$ is connected \item $\Gamma(G)$ has diameter at most two \item every proper quotient of $G$ is cyclic. \end{enumerate} \end{theorem} Of course, Theorem~\ref{thm:generating_graph} is simply a reformulation of Theorem~\ref{thm:burness_guralnick_harper} due to Burness, Guralnick and Harper. Indeed, the generating graph gives an enlightening perspective on the concepts introduced so far: \begin{enumerate} \item $G$ is $\frac{3}{2}$-generated if and only if $\Gamma(G)$ has no isolated vertices \item $s(G)$ is the greatest $k$ such that any $k$ vertices of $\Gamma(G)$ have a common neighbour, which means that $s(G) \geq 2$ if and only if $\mathrm{diam}(\Gamma(G)) \leqslant 2$ \item $\gamma_t(G)$ is the total domination number of $\Gamma(G)$: the least size of a set of vertices whose neighbours cover $\Gamma(G)$. \end{enumerate} Regarding (iii), the total domination number is a well-studied graph invariant and was the inspiration for the group theoretic term (and the symbol $\gamma_t$ is the graph theoretic notation).
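These equivalences are easy to check directly on the two groups of Figure~\ref{fig:graph}. The following sketch is our own illustrative code (not taken from the sources cited here): it builds the generating graph for $A_4$ and $D_8$ realised as permutation groups. Every proper quotient of $A_4$ is cyclic, so $\Gamma(A_4)$ should be connected of diameter two with any two vertices having a common neighbour, whereas $D_8$ has the noncyclic quotient $D_8/\langle a^2 \rangle \cong C_2 \times C_2$, so $\Gamma(D_8)$ should have an isolated vertex, namely $a^2$.

```python
def compose(p, q):
    """Compose permutations written as tuples: (p then q), i -> q[p[i]]."""
    return tuple(q[i] for i in p)

def closure(gens, e):
    """The subgroup generated by gens, computed by breadth-first closure."""
    elems, frontier = {e}, [e]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in elems:
                    elems.add(h)
                    new.append(h)
        frontier = new
    return elems

def generating_graph(G, e):
    """Adjacency sets of the generating graph on the nonidentity elements."""
    verts = [g for g in G if g != e]
    return {g: {h for h in verts if h != g and closure([g, h], e) == G}
            for g in verts}

def diameter(adj):
    """Largest eccentricity, or None if the graph is disconnected."""
    worst = 0
    for s in adj:
        dist, frontier = {s: 0}, [s]
        while frontier:
            new = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        new.append(v)
            frontier = new
        if len(dist) < len(adj):
            return None
        worst = max(worst, max(dist.values()))
    return worst

e = (0, 1, 2, 3)
A4 = closure([(1, 2, 0, 3), (1, 0, 3, 2)], e)  # <(0 1 2), (0 1)(2 3)>
D8 = closure([(1, 2, 3, 0), (0, 3, 2, 1)], e)  # <a, b>: a a 4-cycle, b a reflection
gamma_A4 = generating_graph(A4, e)
gamma_D8 = generating_graph(D8, e)
isolated_D8 = {g for g, nbrs in gamma_D8.items() if not nbrs}
# any two vertices of Gamma(A4) have a common neighbour, witnessing s(A4) >= 2
spread2_A4 = all(gamma_A4[g] & gamma_A4[h] for g in gamma_A4 for h in gamma_A4)
```

Running this confirms that the only isolated vertex of $\Gamma(D_8)$ is $a^2$ (the permutation $i \mapsto i+2 \bmod 4$), exactly as drawn in Figure~\ref{fig:graph}.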
Regarding (ii), as far as the author is aware, the graph invariant corresponding to the spread of a group, while natural, does not have a canonical name, but, inspired by the recent work on the spread of finite groups, some authors have started to use the term \emph{spread} for this graph invariant (that is, the \emph{spread} of a graph is the greatest $k$ such that any $k$ vertices have a common neighbour), see for example \cite[Section~2.5]{ref:Cameron22}. What are the connected components of $\Gamma(G)$ in general? The following conjecture (first posed as a question in \cite{ref:CrestaniLucchini13-Israel}) proposes a straightforward answer. \begin{conjecture} \label{con:generating_graph} Let $G$ be a finite group. Then the graph obtained from $\Gamma(G)$ by removing the isolated vertices is connected. \end{conjecture} Conjecture~\ref{con:generating_graph} is known to be true if $G$ is soluble or characteristically simple by work of Crestani and Lucchini \cite{ref:CrestaniLucchini13-Israel,ref:CrestaniLucchini13-JAlgCombin}, or if every proper quotient of $G$ is cyclic as a consequence of Theorem~\ref{thm:burness_guralnick_harper}. Otherwise, Conjecture~\ref{con:generating_graph} remains an intriguing open question about the generating sets of finite groups. There is now a vast literature on the generating graph and surveying it is beyond the scope of this survey article, so we will make only a few remarks. The first paper to study $\Gamma(G)$ in its own right, and call it the \emph{generating graph}, was \cite{ref:LucchiniMaroti09Ischia} by Lucchini and Mar\'oti, who then studied various aspects of this graph in subsequent papers (for example, \cite{ref:BreuerGuralnickLucchiniMarotiNagy10,ref:LucchiniMaroti09}). However, $\Gamma(G)$ first appeared in the literature, indirectly, as a construction in a proof of Liebeck and Shalev in \cite{ref:LiebeckShalev96JAlg}.
A major result of that paper is that there exist constants $c_1, c_2 > 0$ such that for all finite simple groups $G$, the probability that two randomly chosen elements generate $G$, denoted $P(G)$, satisfies \begin{equation} \label{eq:prob} 1 - \frac{c_1}{m(G)} \leqslant P(G) \leqslant 1 - \frac{c_2}{m(G)} \end{equation} where $m(G)$ is the smallest index of a subgroup of $G$ (for example, $m(A_n)=n$). From this, Liebeck and Shalev deduce that there is a constant $c > 0$ such that every finite simple group $G$ contains at least $c \cdot m(G)$ elements that pairwise generate $G$. The proof of this corollary simply involves applying Tur\'an's theorem to $\Gamma(G)$, exploiting \eqref{eq:prob}. A couple of subsequent papers \cite{ref:Blackburn06,ref:BritnellEvseevGuralnickHolmesMaroti08} continued the study of cliques in $\Gamma(G)$, partly motivated by the observation that the largest size of a clique in $\Gamma(G)$, denoted $\mu(G)$, is a lower bound for the smallest number of proper subgroups whose union is $G$, denoted $\sigma(G)$. Returning to spread, it is easy to see that $s(G) < \mu(G) \leqslant \sigma(G)$, so these results give upper bounds on spread, which are otherwise difficult to find. Indeed, the best upper bounds for the 14 smaller sporadic groups (including the two where the spread is known exactly) were found in \cite{ref:BradleyHolmes07} by a clever refinement of this bound (for the 12 larger sporadic groups, different methods were used \cite{ref:Fairbarin12CA}). \subsection{Applications} \label{ss:finite_app} We conclude Section~\ref{s:finite} with three applications of spread. The first shows how $\frac{3}{2}$-generation naturally arises in a completely different context. The second highlights the benefit of studying spread, not just $\frac{3}{2}$-generation. The third moves beyond simple groups and applies the classification of finite $\frac{3}{2}$-generated groups. \textbf{Application 1. Word maps.
} From a word $w = w(x,y)$ in the free group $F_2$, we obtain a \emph{word map} $w\colon G \times G \to G$ and we write $w(G) = \{ w(g,h) \mid g,h \in G\}$. Let $G$ be a nonabelian finite simple group. For natural choices of $w$, there has been substantial recent progress showing that $w(G) = G$. For example, solving the Ore Conjecture, Liebeck, O'Brien, Shalev and Tiep \cite{ref:LiebeckOBrienShalevTiep10} proved that $w = x^{-1}y^{-1}xy$ is surjective (that is, every element is a commutator). Now consider the converse question: which subsets $S \subseteq G$ arise as images of some word $w \in F_2$? Such a subset $S$ must satisfy $1 \in S$ (as $w(1,1)=1$) and $S^a = S$ for all $a \in \mathrm{Aut}(G)$ (as $w(x^a,y^a) = w(x,y)^a$), and Lubotzky \cite{ref:Lubotzky14} proved that these conditions are sufficient. \begin{theorem} \label{thm:lubotzky} Let $G$ be a finite simple group and let $S \subseteq G$. Then $S$ is the image of a word $w \in F_2$ if and only if $1 \in S$ and $S^a = S$ for all $a \in \mathrm{Aut}(G)$. \end{theorem} The proof of Theorem~\ref{thm:lubotzky} is short and fairly elementary, except that it uses Theorem~\ref{thm:guralnick_kantor}. Indeed, Theorem~\ref{thm:guralnick_kantor} is the only place where the proof depends on the CFSG. \begin{proof}[Proof outline of Theorem~\ref{thm:lubotzky}] Let $G^2 = \{ (a_i,b_i) \mid 1 \leqslant i \leqslant |G|^2 \}$ be ordered such that $\langle a_i, b_i \rangle = G$ if and only if $i \leqslant \ell$. Fix the free group $F_2 = \langle x, y \rangle$ and let $\varphi\colon F_2 \to G^{|G|^2}$ be defined as $\varphi(x) = (a_1, \dots, a_{|G|^2})$ and $\varphi(y) = (b_1, \dots, b_{|G|^2})$. Let $z = (z_1, \dots, z_{|G|^2})$ where $z_i = a_i$ if both $i \leqslant \ell$ and $a_i \in S$, and $z_i = 1$ otherwise. By Theorem~\ref{thm:guralnick_kantor}, every nontrivial element of $G$ is contained in a generating pair, so, in particular, $\{ z_i \mid 1 \leqslant i \leqslant |G|^2 \} = S \cup \{1\} = S$.
By an elementary argument, Lubotzky shows that $\varphi(F_2) = H \times K$ where $H$ and $K$ are the projections of $\varphi(F_2)$ onto the first $\ell$ factors of $G^{|G|^2}$ and the remaining $|G|^2-\ell$ factors, respectively. Moreover, using a theorem of Hall \cite{ref:Hall36}, $H$ is the subgroup of $G^\ell$ isomorphic to $G^{\ell/|\mathrm{Aut}(G)|}$ with the defining property that $(g_1, \dots, g_\ell) \in H$ if and only if for all $a \in \mathrm{Aut}(G)$ and $1 \leqslant i,j \leqslant \ell$ we have $g_i = g_j^a$ whenever $\varphi_i = \varphi_j a$, where $\varphi_i$ denotes the $i$th coordinate of $\varphi$. In particular, since $S^a = S$ for all $a \in \mathrm{Aut}(G)$, we deduce that $z \in \varphi(F_2)$. Therefore, there exists $w \in F_2$ such that for all $1 \leqslant i \leqslant |G|^2$ we have $w(a_i,b_i) = \varphi(w)_i = z_i$. Combining the conclusions of the previous two paragraphs, we deduce that $w(G) = \{ w(a_i,b_i) \mid 1 \leqslant i \leqslant |G|^2 \} = \{ z_i \mid 1 \leqslant i \leqslant |G|^2 \} = S$. \end{proof} \textbf{Application 2. The product replacement graph. } For a positive integer $k$, the vertices of the product replacement graph $\Gamma_k(G)$ are the generating $k$-tuples of $G$, and the neighbours of $(x_1, \dots, x_i, \dots, x_k)$ in $\Gamma_k(G)$ are $(x_1, \dots, x_ix_j^{\pm1}, \dots, x_k)$ and $(x_1, \dots, x_j^{\pm1}x_i, \dots, x_k)$ for each $1 \leqslant i \neq j \leqslant k$. The product replacement graph arises in a number of contexts, most notably, the product replacement algorithm for computing random elements of $G$, which involves a random walk on $\Gamma_k(G)$, see \cite{ref:CellerLeedhamGreenMurrayNiemeyerOBrien95}. Thus, the connectedness of $\Gamma_k(G)$ is of particular interest. Specifically, Pak \cite[Question~2.1.33]{ref:Pak01} asked whether $\Gamma_k(G)$ is connected whenever $k$ is strictly greater than $d(G)$, the smallest size of a generating set for $G$.
This question is open, even for finite simple groups, where Wiegold conjectured that the answer is yes. Nevertheless, the following lemma of Evans \cite[Lemma~2.8]{ref:Evans93} shows the usefulness of spread. (Here a generating tuple of $G$ is said to be \emph{redundant} if some proper subtuple also generates $G$.) \begin{lemma}\label{lem:evans} Let $k \geq 3$ and let $G$ be a group such that $s(G) \geq 2$. Then all of the redundant generating $k$-tuples are connected in $\Gamma_k(G)$. \end{lemma} \begin{proof} Let $x = (x_1,\dots,x_k)$ and $y = (y_1,\dots,y_k)$ be two redundant generating $k$-tuples. By an elementary observation of Pak, it is sufficient to show that $x$ and $y$ are connected after permuting the entries of $x$ and $y$. In particular, since $x$ and $y$ are redundant, we may assume that $\langle x_1,\dots,x_{k-1}\rangle = \langle y_1,\dots,y_{k-1}\rangle = G$ and also that $x_1 \neq 1 \neq y_2$. Since $s(G) \geq 2$, there exists $z \in G$ such that $\langle x_1, z \rangle = \langle y_2, z \rangle = G$. We now make a series of connections. First, $x=x^{(1)}$ is connected to $x^{(2)} = (x_1,\dots,x_{k-1},z)$ as $\langle x_1,\dots,x_{k-1}\rangle = G$. Next, $x^{(2)}$ is connected to $x^{(3)} = (x_1,y_2,\dots,y_{k-1},z)$ as $\langle x_1,z\rangle = G$. Now, $x^{(3)}$ is connected to $x^{(4)} = (y_1,\dots,y_{k-1},z)$ as $\langle y_2,z\rangle = G$. Finally, $x^{(4)}$ is connected to $(y_1,\dots,y_k) = y$ as $\langle y_1,\dots,y_{k-1}\rangle = G$. This shows that $x$ is connected to $y$. \end{proof} Combining Lemma~\ref{lem:evans} with Theorem~\ref{thm:breuer_guralnick_kantor} shows that to prove Wiegold's conjecture, it suffices to show that for each finite simple group $G$ every irredundant generating $k$-tuple is connected in $\Gamma_k(G)$ to a redundant one.
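Lemma~\ref{lem:evans} can be seen in action on a small example. The following sketch is our own illustrative code (it is not taken from \cite{ref:Evans93}): it builds the product replacement graph $\Gamma_3(S_3)$. Since $S_3$ is finite with all proper quotients cyclic, $s(S_3) \geq 2$ by Theorem~\ref{thm:burness_guralnick_harper}, and a direct check shows that every generating $3$-tuple of $S_3$ is redundant, so Lemma~\ref{lem:evans} predicts that $\Gamma_3(S_3)$ is connected.

```python
from itertools import product

def compose(p, q):
    """Compose permutations written as tuples: (p then q), i -> q[p[i]]."""
    return tuple(q[i] for i in p)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def closure(gens, e):
    """The subgroup generated by gens, computed by breadth-first closure."""
    elems, frontier = {e}, [e]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = compose(g, s)
                if h not in elems:
                    elems.add(h)
                    new.append(h)
        frontier = new
    return elems

e = (0, 1, 2)
S3 = closure([(1, 0, 2), (1, 2, 0)], e)  # <(0 1), (0 1 2)>
triples = [t for t in product(S3, repeat=3) if closure(list(t), e) == S3]

# every generating 3-tuple of S3 contains a generating pair, i.e. is redundant
redundant = all(any(closure([t[i], t[j]], e) == S3
                    for i in range(3) for j in range(3) if i < j)
                for t in triples)

def neighbours(t):
    """Nielsen moves: replace t[i] by t[i]*t[j]^(+-1) or t[j]^(+-1)*t[i]."""
    out = set()
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            for a in (t[j], inverse(t[j])):
                for r in (compose(t[i], a), compose(a, t[i])):
                    s = list(t)
                    s[i] = r
                    out.add(tuple(s))
    return out

# breadth-first search over the product replacement graph
seen, frontier = {triples[0]}, [triples[0]]
while frontier:
    new = []
    for t in frontier:
        for u in neighbours(t):
            if u not in seen:
                seen.add(u)
                new.append(u)
    frontier = new
connected = (seen == set(triples))
```

An inclusion--exclusion count over the maximal subgroups of $S_3$ (one copy of $A_3$ and three copies of $C_2$) shows that $S_3$ has exactly $6^3 - 48 = 168$ generating $3$-tuples, which the search above traverses in a single component.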
As further evidence for the relevance of spread in this area, we note that Wiegold's original conjecture was (the a priori weaker claim) that a related graph $\Sigma_k(G)$ is connected for all finite simple groups $G$ and $k > d(G)$ (see \cite[Conjecture~2.5.4]{ref:Pak01}), but a short argument of Pak \cite[Proposition~2.5.13]{ref:Pak01} shows that $\Sigma_k(G)$ is connected if and only if $\Gamma_k(G)$ is connected, for all groups $G$ such that $s(G) \geq 2$, which by Theorem~\ref{thm:breuer_guralnick_kantor} includes all finite simple groups $G$. \textbf{Application 3. The $\boldsymbol{\mathcal{X}}$-radical of a group. } In 1968, Thompson proved that a finite group is soluble if and only if all of its 2-generated subgroups are soluble \cite[Corollary~2]{ref:Thompson68}. The result follows from Thompson's classification of the finite insoluble groups all of whose proper subgroups are soluble, but, in 1995, Flavell gave a direct proof of this result \cite{ref:Flavell95}. Confirming a conjecture of Flavell \cite[Conjecture~B]{ref:Flavell01}, Guralnick, Kunyavsk\u{\i}i, Plotkin and Shalev proved the following result about the \emph{soluble radical} of $G$, written $R(G)$, which is the largest normal soluble subgroup of $G$ \cite[Theorem~1.1]{ref:GuralnickKunyavskiiPlotkinShalev06}. \begin{theorem} \label{thm:soluble_radical} Let $G$ be a finite group. Then \[ R(G) = \{ x \in G \mid \text{$\langle x,y\rangle$ is soluble for all $y \in G$} \}. \] \end{theorem} The key (and only CFSG-dependent) element of the proof of Theorem~\ref{thm:soluble_radical} is a strong version of Theorem~\ref{thm:guralnick_kantor} on the $\frac{3}{2}$-generation of finite simple groups. To paint a picture of how the $\frac{3}{2}$-generation of simple groups plays the starring role, we first present the analogue for simple Lie algebras \cite[Theorem~2.1]{ref:GuralnickKunyavskiiPlotkinShalev06}.
Recall that the \emph{radical} of a Lie algebra $\mathfrak{g}$, denoted $R(\mathfrak{g})$, is the largest soluble ideal of $\mathfrak{g}$. \begin{theorem} \label{thm:soluble_radical_lie_algebras} Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over $\mathbb{C}$. Then \[ R(\mathfrak{g}) = \{ x \in \mathfrak{g} \mid \text{$\langle x,y\rangle$ is soluble for all $y \in \mathfrak{g}$} \}. \] \end{theorem} \begin{proof} Let $x \in \mathfrak{g}$. First assume that $x \in R(\mathfrak{g})$. Let $y \in \mathfrak{g}$ and let $\mathfrak{h}$ be the smallest ideal of $\langle x,y\rangle$ containing $x$. Now $\mathfrak{h}$ is soluble as it is a Lie subalgebra of $R(\mathfrak{g})$, and $\langle x,y\rangle/\mathfrak{h}$ is soluble as it is at most $1$-dimensional, so $\langle x,y\rangle$ is soluble. Now assume that $\langle x, y \rangle$ is soluble for all $y \in \mathfrak{g}$. We will prove that $x \in R(\mathfrak{g})$. For a contradiction, assume that $\mathfrak{g}$ is a minimal counterexample (by dimension). Consider $\overline{\mathfrak{g}} = \mathfrak{g}/R(\mathfrak{g})$. Then $\langle \overline{x}, \overline{y} \rangle$ is soluble for all $y \in \mathfrak{g}$, and $R(\overline{\mathfrak{g}})$ is trivial, so $\overline{x} \not\in R(\overline{\mathfrak{g}})$. Therefore, $R(\mathfrak{g})=0$, by the minimality of $\mathfrak{g}$. This means that $\mathfrak{g}$ is semisimple, so we may write $\mathfrak{g} = \mathfrak{g}_1 \oplus \cdots \oplus \mathfrak{g}_k$ where $\mathfrak{g}_1, \dots, \mathfrak{g}_k$ are simple. Writing $x = (x_1, \dots, x_k)$, fix $1 \leqslant i \leqslant k$ such that $x_i \neq 0$. Then by Theorem~\ref{thm:ionescu}, there exists $y \in \mathfrak{g}_i$ such that $\langle x_i,y\rangle = \mathfrak{g}_i$. In particular, $\mathfrak{g}_i$ is a quotient of $\langle x,y\rangle$, so $\langle x,y\rangle$ is not soluble, which is a contradiction.
\end{proof} Returning to groups, again using variants of Theorem~\ref{thm:guralnick_kantor}, Guralnick, Plotkin and Shalev set Theorem~\ref{thm:soluble_radical} in a more general context \cite[Theorem~6.1]{ref:GuralnickPlotkinShalev07}. Here we give a short proof of their result by applying Theorem~\ref{thm:burness_guralnick_harper}. Let $\mathcal{X}$ be a class of finite groups that is closed under subgroups, quotients and extensions. The $\mathcal{X}$-radical of a group $G$, denoted $\mathcal{X}(G)$, is the largest normal $\mathcal{X}$-subgroup of $G$. For instance, $\mathcal{X}(G) = R(G)$ if $\mathcal{X}$ is the class of soluble groups. \begin{theorem} \label{thm:x_radical} Let $\mathcal{X}$ be a class of finite groups that is closed under subgroups, quotients and extensions. Then \[ \mathcal{X}(G) = \{ x \in G \mid \text{$\langle x^{\langle y\rangle}\rangle$ is an $\mathcal{X}$-group for all $y \in G$} \}. \] \end{theorem} \begin{corollary} \label{cor:x_radical} Let $\mathcal{X}$ be a class of finite groups that is closed under subgroups, quotients and extensions. Then $G$ is an $\mathcal{X}$-group if and only if every $2$-generated subgroup of $G$ is an $\mathcal{X}$-group. \end{corollary} \begin{proof} If $G$ is an $\mathcal{X}$-group, then every $2$-generated subgroup is. Conversely, if for all $x,y \in G$ the subgroup $\langle x,y\rangle$ is an $\mathcal{X}$-group, then so is $\langle x^{\langle y\rangle}\rangle$, so, by Theorem~\ref{thm:x_radical}, $x \in \mathcal{X}(G)$, which shows that $G = \mathcal{X}(G)$ is an $\mathcal{X}$-group. \end{proof} \begin{corollary} \label{cor:x_radical_soluble} Let $\mathcal{X}$ be a class of finite groups that is closed under subgroups, quotients and extensions. Assume that $\mathcal{X}$ contains all soluble groups. Then \[ \mathcal{X}(G) = \{ x \in G \mid \text{$\langle x, y\rangle$ is an $\mathcal{X}$-group for all $y \in G$} \}. \] \end{corollary} \begin{proof} Let $x \in G$.
If for all $y \in G$, $\langle x,y\rangle$ is an $\mathcal{X}$-group, then so is $\langle x^{\langle y\rangle}\rangle$, so, by Theorem~\ref{thm:x_radical}, $x \in \mathcal{X}(G)$. Conversely, if $x \in \mathcal{X}(G)$, then $\langle x^{\langle y\rangle}\rangle \leqslant \langle x^G\rangle \leqslant \mathcal{X}(G)$ is an $\mathcal{X}$-group, so $\langle x,y\rangle$, an extension of $\langle x^{\langle y\rangle}\rangle$ by a cyclic group, is an $\mathcal{X}$-group. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:x_radical}] Let $x \in G$. First assume that $x \in \mathcal{X}(G)$. For all $y \in G$, we have $\langle x^{\langle y\rangle}\rangle \leqslant \langle x^G\rangle \leqslant \mathcal{X}(G)$, so $\langle x^{\langle y\rangle}\rangle$ is an $\mathcal{X}$-group. Now assume that $\langle x^{\langle y\rangle} \rangle$ is an $\mathcal{X}$-group for all $y \in G$. We will prove that $x \in \mathcal{X}(G)$. For a contradiction, assume that $G$ is a minimal counterexample. Consider $\overline{G} = G/\mathcal{X}(G)$. Then $\langle \overline{x}^{\langle\overline{y}\rangle} \rangle$ is an $\mathcal{X}$-group for all $y \in G$ (as $\mathcal{X}$ is closed under quotients), and $\mathcal{X}(\overline{G})$ is trivial (as $\mathcal{X}$ is closed under extensions), so $\overline{x} \not\in \mathcal{X}(\overline{G})$. Therefore, $\mathcal{X}(G)=1$, by the minimality of $G$. Now consider $H = \langle x^G\rangle$. Then $\langle x^{\langle h\rangle}\rangle$ is an $\mathcal{X}$-group for all $h \in H$, and $\mathcal{X}(H) \leqslant \mathcal{X}(G)$ (as $\mathcal{X}(H)$ is characteristic in $H$ so normal in $G$), so $x \not\in \mathcal{X}(H)$. Therefore, $\langle x^G\rangle = G$, by the minimality of $G$. Consider a power $x'$ of $x$ of prime order. Then $\langle x'^{\langle y\rangle} \rangle$ is an $\mathcal{X}$-group for all $y \in G$ (as $\mathcal{X}$ is closed under subgroups), and $x' \not\in 1 = \mathcal{X}(G)$. Therefore, it suffices to consider the case where $x$ has prime order.
Let $N$ be a minimal normal subgroup of $G$ and write $N = T^k$ where $T$ is simple. Observe that $N$, or equivalently $T$, is not an $\mathcal{X}$-group, since $\mathcal{X}(G)=1$. Suppose that $x \in N$. Then $G = N$ since $\langle x^G\rangle = G$. In particular, $k=1$, so, by Theorem~\ref{thm:guralnick_kantor}, there exists $y \in G$ such that $\langle x,y\rangle=G$, which is not an $\mathcal{X}$-group, so $\langle x^{\langle y\rangle}\rangle$ is not an $\mathcal{X}$-group either: a contradiction. Therefore, $x \not\in N$. Suppose that $x$ centralises $N$. Then $N$ is central since $\langle x^G\rangle = G$, so $N \cong C_p$ for a prime $p$. In $\widetilde{G} = G/N$, $\widetilde{x}$ is nontrivial as $x \not\in N$ and $\langle\widetilde{x}^{\langle\widetilde{y}\rangle}\rangle$ is an $\mathcal{X}$-group for all $y \in G$ (as $\langle x^{\langle y\rangle}\rangle$ is). The minimality of $G$ means $\widetilde{x} \in \mathcal{X}(\widetilde{G})$, but $\langle\widetilde{x}^{\widetilde{G}}\rangle = \widetilde{G}$ (as $\langle x^G\rangle = G$), so $\widetilde{G}$ is an $\mathcal{X}$-group. Since $N \cong C_p$ is not an $\mathcal{X}$-group, $p$ does not divide $|\widetilde{G}|$. By the Schur--Zassenhaus Theorem, $G = N \times H$ for some $H \cong \widetilde{G}$. Since $N \cong C_p$ is not an $\mathcal{X}$-group and $\langle x\rangle$ is an $\mathcal{X}$-group, $p$ does not divide $|x|$, so $x \in H$, contradicting $\langle x^G \rangle = G$. Therefore, $x$ acts nontrivially on $N$. Suppose that $N$ is abelian. As $x$ acts nontrivially on $N$, there is $n \in N$ with $[x,n] \neq 1$. Now $\langle [x,n] \rangle$ is isomorphic to $T$, as $[x,n] \in N$, so it is not an $\mathcal{X}$-group, implying that $\langle x^{\langle n\rangle} \rangle$ is not an $\mathcal{X}$-group either. Therefore, $N$ is nonabelian. Now $x$ permutes the $k$ factors of $N \cong T^k$; let $M \cong T^l$ be a nontrivial subgroup of $N$ whose factors are permuted transitively by $x$.
If $l=1$, then $\langle M, x \rangle$ is almost simple, and if $l > 1$, then, recalling that $x$ has prime order, $\langle x\rangle \cong C_l$ acts regularly on the factors of $M$. In either case, $M$ is the unique minimal normal subgroup of $\langle M,x\rangle$, so every proper quotient of $\langle M,x\rangle$ is cyclic. Therefore, by Theorem~\ref{thm:burness_guralnick_harper}, there exists $m \in \langle M, x \rangle$ such that $\langle m, x \rangle = \langle M, x\rangle$. Moreover, $\langle x^{\langle m\rangle}\rangle$, being a normal subgroup of $\langle M, x \rangle$ containing $x$, is also $\langle M, x\rangle$. However, $\langle x^{\langle m\rangle}\rangle = \langle M, x \rangle$ is not an $\mathcal{X}$-group since the subgroup $T$ is not an $\mathcal{X}$-group. This contradiction completes the proof. \end{proof} \section{Infinite Groups} \label{s:infinite} \subsection{Generating infinite groups} \label{ss:infinite_intro} We now turn to infinite groups and their generating pairs. Do the results from Section~\ref{s:finite} on the spread of finite groups extend to infinite groups? Let us recall that the motivating theorem for finite groups is the landmark result that every finite simple group is $2$-generated. This result is easily seen to be false when the assumption of finiteness is removed. For example, the alternating group $\mathrm{Alt}(\mathbb{Z})$ is simple but is not finitely generated, since every finite subset of $\mathrm{Alt}(\mathbb{Z})$ is supported on finitely many points and therefore generates a finite subgroup. The problem persists even if we restrict to finitely generated simple groups. Answering a question of Wiegold in the Kourovka Notebook \cite[Problem~6.44]{ref:Kourovka22}, in 1982, Guba constructed a finitely generated infinite simple group that is not 2-generated (in fact, the group constructed has the property that every 2-generated subgroup is free) \cite{ref:Guba82}.
More recently, Osin and Thom, by studying the $\ell^2$-Betti numbers of groups, proved that for every $k \geq 2$ there exists an infinite simple group that is $k$-generated but not $(k-1)$-generated \cite[Corollary~1.2]{ref:OsinThom13}. With these results in mind, it makes sense to focus on $2$-generated groups and ask whether the results about the spread of finite $2$-generated groups extend to general $2$-generated groups. Recall that Theorem~\ref{thm:burness_guralnick_harper} gives a characterisation of the finite $\frac{3}{2}$-generated groups: a finite group $G$ is $\frac{3}{2}$-generated if and only if every proper quotient of $G$ is cyclic. In particular, every finite simple group is $\frac{3}{2}$-generated. The following example, due to Cox in 2022 \cite{ref:Cox22}, highlights that this characterisation does not extend to general $2$-generated groups (that is, there exists an infinite $2$-generated group $G$ that is not $\frac{3}{2}$-generated but for which every proper quotient of $G$ is cyclic). \begin{example} \label{ex:cox} For each positive integer $n$, let $G_n$ be the subgroup of $\mathrm{Sym}(\mathbb{Z})$ defined as $\langle \mathrm{Alt}(\mathbb{Z}), t^n \rangle$ where $t\colon \mathbb{Z} \to \mathbb{Z}$ is the translation $x \mapsto x+1$. It is straightforward to show that $\mathrm{Alt}(\mathbb{Z})$ is the unique minimal normal subgroup of $G_n$, so every proper quotient of $G_n$ is cyclic. In addition, $G_n$ is $2$-generated. Indeed, by \cite[Lemma~3.7]{ref:Cox22}, $G_n = \langle a_n, t^n \rangle$ for $a_n = \prod_{i=0}^{m+1} x_i^{t^{3ni}}$ where $x_0, \dots, x_{m+1} \in A_{3n}$ satisfy $x_0 = (1 \, 3)$, $x_{m+1} = (2 \, 3)$ and $\{ (1 \, 2 \, 3)^{x_1}, \dots, (1 \, 2 \, 3)^{x_m} \} = (1 \, 2 \, 3)^{A_{3n}}$.
To see this, we note that $[a_n^{t^{-3n(m+1)}},a_n] = [x_{m+1},x_0] = (1 \, 2 \, 3)$ and $(1 \, 2 \, 3)^{a_n^{t^{-3ni}}} = (1 \, 2 \, 3)^{x_i}$, so $\langle a_n, t^n \rangle \geq \langle (1 \, 2 \, 3)^{A_{3n}}, t^n \rangle = \langle A_{3n}, t^n \rangle$, which is simply $\langle \mathrm{Alt}(\mathbb{Z}), t^n \rangle$ (compare with Lemma~\ref{lem:covering_symmetric} below). However, in \cite[Theorem~4.1]{ref:Cox22}, Cox proves that if $n \geq 3$, then $(1 \, 2 \, 3)$ is not contained in a generating pair for $G_n$, so $G_n$ gives an example of a $2$-generated group all of whose proper quotients are cyclic but which is not $\frac{3}{2}$-generated. To simplify the proof, we will assume that $n \geq 4$. Let $g \in G_n$. If $g \in \mathrm{Alt}(\mathbb{Z})$, then $\langle (1 \, 2 \, 3), g \rangle \leqslant \mathrm{Alt}(\mathbb{Z}) < G_n$. Now assume that $g \not\in \mathrm{Alt}(\mathbb{Z})$, so $\langle g\rangle = \langle ht^k\rangle$ where $h \in \mathrm{Alt}(\mathbb{Z})$ and $k \geq 4$. It is easy to show that $g$ has exactly $k$ infinite orbits $O_1, \dots, O_k$ (indeed, if $\mathrm{supp}(h) \subseteq [a,b]$, then we quickly see that for a suitable permutation $\pi \in \mathrm{Sym}(k)$, we can find $k$ orbits $O_1, \dots, O_k$ of $g$ satisfying $O_i \setminus [a,b] = \{ x > b \mid x \equiv i \mod{k} \} \cup \{ x < a \mid x \equiv i\pi \mod{k} \}$). Since $k \geq 4$, we can fix $i$ such that $O_i \cap \{1, 2, 3\} = \emptyset$, so $O_i$ is an orbit of $\langle (1 \, 2 \, 3), g \rangle$, which implies that $\langle (1 \, 2 \, 3), g \rangle \neq G_n$ in this case too. In contrast, in \cite[Theorem~6.1]{ref:Cox22}, Cox shows that $G_1$ and $G_2$ are $\frac{3}{2}$-generated, and, in fact, $2 \leqslant u(G_i) \leqslant s(G_i) \leqslant 9$ for $i \in \{1,2\}$. \end{example} The groups in Example~\ref{ex:cox} are not simple, so the following question remains.
\begin{question} \label{que:infinte_0} Does there exist a $2$-generated simple group $G$ with $s(G)=0$? \end{question} Recall that for finite groups $G$, Theorem~\ref{thm:burness_guralnick_harper} also establishes that $s(G) \geq 1$ if and only if $s(G) \geq 2$, so there are no finite groups $G$ satisfying $s(G)=1$. This raises the following question. \begin{question} \label{que:infinite_1} Does there exist a $2$-generated simple group $G$ with $s(G)=1$? \end{question} There is a clear difference between generating finite and infinite groups, and straightforward analogues of the theorems for finite groups do not hold for infinite groups. Nevertheless, do the results on the spread of finite simple groups extend to important classes of infinite simple groups? The investigation of the infinite simple groups of Richard Thompson (and their many generalisations) in Sections~\ref{ss:infinite_thompson_introduction}--\ref{ss:infinite_thompson_t} demonstrates that the answer is a resounding yes! However, before turning to these infinite simple groups, in Section~\ref{ss:infinite_soluble}, we look at the other important special case we considered for finite groups: soluble groups. \subsection{Soluble groups} \label{ss:infinite_soluble} In the opening to Section~\ref{ss:finite_bgh}, we noted that when Brenner and Wiegold introduced the notion of spread, they proved that for a finite soluble group $G$, we have $s(G) \geq 1$ if and only if $s(G) \geq 2$ if and only if every proper quotient of $G$ is cyclic (see Theorem~\ref{thm:brenner_wiegold}). By Theorem~\ref{thm:breuer_guralnick_kantor}, ``soluble'' can be removed from the hypothesis (while keeping ``finite''). The following theorem establishes that ``finite'' can be removed from the hypothesis (while keeping ``soluble''), in a very strong sense. (Theorem~\ref{thm:infinite_soluble} is due to the author, and this is the first appearance of it in the literature.)
\begin{theorem} \label{thm:infinite_soluble} Let $G$ be an infinite soluble group such that every proper quotient is cyclic. Then $G$ is cyclic. \end{theorem} \begin{proof} It suffices to show that $G$ is abelian, because an infinite abelian group where every proper quotient is cyclic is itself cyclic. For a contradiction, suppose that $G$ is nonabelian. If $1 \neq N \trianglelefteqslant G$, then $G/N$ is cyclic, so $G' \leqslant N$. Therefore, $G'$ is the unique minimal normal subgroup of $G$. In particular, $G''$ is $G'$ or $1$, but $G$ is soluble, so $G'' = 1$, which implies that $G'$ is abelian. Therefore, $G'$ is an abelian characteristically simple group, so it is isomorphic to the additive group of a vector space $V$ over a field $F$, and we may assume that $F = \mathbb{F}_p$ or $F=\mathbb{Q}$. Let $g \in G$ such that $G/V = \langle Vg\rangle$ and write $H = \langle g\rangle$, so $G = VH$. Observe that $Z(G)=1$, for otherwise $G/Z(G)$ is cyclic, so $G$ is abelian, a contradiction. Now, if $g^i \in V$, then $g^i \in Z(G) = 1$, so $V \cap H = 1$. Hence, $G$ is a semidirect product $V{:}H$. For all nontrivial $v \in V$, we have $V = \langle v^G \rangle$ since $V$ is a minimal normal subgroup and $\langle v^G \rangle = \langle v^H \rangle$ since $V$ is abelian. Therefore, $V$ is an irreducible $FH$-module, so $V$ is finite-dimensional since $H$ is cyclic. (To see this, suppose that $V$ is infinite-dimensional, so $H$ is infinite. We exhibit a proper nonzero submodule $U$, contradicting the irreducibility of $V$. For $0 \neq v \in V$, either $\{ vg^i \mid i \in \mathbb{Z} \}$ is linearly independent, and $U$ is the kernel of $\sum_{i \in \mathbb{Z}} a_ivg^i \mapsto \sum_{i \in \mathbb{Z}} a_i$, or for some $u = vg^i$ we have $a_0 u + a_1 ug + \cdots + a_k ug^k = 0$ and $U = \langle u, ug, \dots, ug^{k-1} \rangle$.) If $g^i \in C_G(V)$, then $g^i \in Z(G) = 1$, so $V$ is a faithful $FH$-module.
In particular, if $F$ is finite, then so is $G = F^n{:}H \leqslant F^n{:}\mathrm{GL}_n(F)$, so we must have $F = \mathbb{Q}$. Let $\chi = X^n + a_{n-1}X^{n-1} + \cdots + a_1X + a_0 \in \mathbb{Q}[X]$ be the characteristic polynomial of $g$, and let $(e_1,\dots,e_n)$ be a basis for $V$ with respect to which the matrix $A$ of $g$ is the companion matrix of $\chi$. Let $P$ be the set of prime divisors appearing in the reduced forms of $a_0, \dots, a_{n-1}$ and note that $P$ is finite. For all $i \in \mathbb{Z}$, write $e_1A^i$ as a linear combination $\lambda_{i1}e_1 + \cdots + \lambda_{in}e_n$. Any prime that divides the denominator of the reduced form of one of the $\lambda_{ij}$ is contained in $P$. Hence, only finitely many primes appear in the denominators of the reduced forms of any element in the subgroup $N$ generated by $\{ e_1A^i \mid i \in \mathbb{Z} \}$. Since $N$ is $\langle e_1^G\rangle$, it is a proper nontrivial subgroup of $V$ that is normal in $G$, which contradicts $V$ being a minimal normal subgroup of $G$. Therefore, $G$ is abelian and so cyclic. \end{proof} With a much shorter proof, one can obtain an analogous result for the class of residually finite groups (this was observed by Cox in \cite[Lemma~1.1]{ref:Cox22}). \begin{theorem} \label{thm:infinite_residually_finite} Let $G$ be an infinite residually finite group such that every proper quotient is cyclic. Then $G$ is cyclic. \end{theorem} \begin{proof} Suppose that $G$ is nonabelian. Fix $x,y \in G$ with $[x,y] \neq 1$. Since $G$ is residually finite, $G$ has a finite index normal subgroup $N$ such that $[Nx,Ny]$ is nontrivial in $G/N$ (so, $Nx$ and $Ny$ are nontrivial in $G/N$). Since $G$ is infinite and $N$ has finite index, we know that $N$ is nontrivial, so $G/N$ is cyclic, which contradicts $G/N$ being nonabelian. Therefore, $G$ is abelian and hence cyclic.
\end{proof} \subsection{Thompson's groups: an introduction} \label{ss:infinite_thompson_introduction} In 1965, Richard Thompson introduced three finitely generated infinite groups $F < T < V$ \cite{ref:Thompson65}. Among other interesting properties of these groups, $V$ and $T$ were the first known examples of finitely presented infinite simple groups and for 35 years (until the work of Burger and Mozes \cite{ref:BurgerMozes00}) all known examples of such groups were closely related to $T$ and $V$. For an indication of other interesting properties of these groups, we record that $F$ is finitely presented yet it contains a copy of $F \times F$ and is an HNN extension of itself. Moreover, $F$ has exponential growth but contains no nonabelian free groups, and one of the most famous open questions in geometric group theory is whether $F$ is amenable \cite{ref:Cleary17}. However, these three groups not only raise interesting group theoretic questions, but they have also played a role in a whole range of mathematical areas such as the word problem for groups, homotopy theory and dynamical systems (see \cite{ref:Dydak77_2,ref:GhysSergiescu87,ref:Thompson80} for example). We refer the reader to Cannon, Floyd and Parry's introduction to these groups \cite{ref:CannonFloydParry96}. An appealing feature of Thompson's groups is that they admit concrete representations as transformation groups, which we outline now. Let $X = \{0,1\}^*$ be the set of all finite words over $\{0,1\}$, and let $\mathfrak{C} = \{0,1\}^\mathbb{N}$ be \emph{Cantor space}, the set of all infinite sequences over $\{0,1\}$ with the usual topology. For $u \in X$, we write $u\mathfrak{C} = \{uw \mid w \in \mathfrak{C} \}$, and we say a finite set $A \subseteq X$ is a \emph{basis} of $\mathfrak{C}$ if $\{ u\mathfrak{C} \mid u \in A \}$ is a partition of $\mathfrak{C}$.
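Concretely, a finite set $A \subseteq X$ is a basis precisely when it is an antichain in the prefix order and satisfies Kraft's equality $\sum_{u \in A} 2^{-|u|} = 1$: the cones $u\mathfrak{C}$ are then pairwise disjoint with total measure $1$, so they partition $\mathfrak{C}$. A minimal Python check (our own illustrative helper, not from the literature):

```python
from itertools import combinations

def is_basis(words):
    """Check whether a finite set of binary words is a basis of Cantor space.

    A finite set is a basis iff it is an antichain in the prefix order and
    satisfies Kraft's equality sum(2^-len(u)) = 1: the cones are then
    pairwise disjoint and of total measure 1.
    """
    antichain = all(
        not u.startswith(v) and not v.startswith(u)
        for u, v in combinations(words, 2)
    )
    # Kraft's equality, checked exactly over the common denominator 2^n.
    n = max(len(u) for u in words)
    kraft = sum(2 ** (n - len(u)) for u in words) == 2 ** n
    return antichain and kraft
```

For instance, $\{00, 01, 1\}$ is a basis, while $\{0, 10\}$ fails Kraft's equality and $\{0, 01, 11\}$ fails the antichain condition.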
Thompson's group $V$ is the group of homeomorphisms $g \in \mathrm{Homeo}(\mathfrak{C})$ for which there exists a \emph{basis pair}, namely a bijection $\sigma\colon A \to B$ between two bases $A$ and $B$ of $\mathfrak{C}$ such that $(uw)g = (u\sigma)w$ for all $u \in A$ and all $w \in \mathfrak{C}$. In other words, $V$ is the group of homeomorphisms of $\mathfrak{C}$ that act by prefix substitutions. For instance, $c\colon(00,01,1) \mapsto (11,0,10)$ is an element of $V$ that, for example, maps $01010101\dots$ to $0010101\dots$. The selfsimilarity of $\mathfrak{C}$ means that there is not a unique choice of basis pair; indeed, by subdividing $\mathfrak{C}$ further, $c$ is also represented by $(000, 001, 01, 1) \mapsto (110, 111, 0, 10)$. By identifying elements of $X$ with vertices of the infinite binary rooted tree, we can represent bases as binary rooted trees and elements of $V$ by the familiar \emph{tree pairs}, as shown in Figure~\ref{fig:elements}. \begin{figure} \begin{minipage}{0.42\textwidth} \begin{gather*} \text{\footnotesize $a = (000,001,01,10,110,111)$} \\ \text{\footnotesize \qquad $\mapsto (001,000,01,110,111,10)$} \\[1pt] \begin{tikzpicture}[ scale=0.5, font=\footnotesize, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt}, level 3/.style={sibling distance=15pt} ] \node (root) [circle,fill] {} child {node (0) [circle,fill] {} child {node (00) [circle,fill] {} child {node (000) {1}} child {node (001) {2}}} child {node (01) {3}}} child {node (1) [circle,fill] {} child {node (10) {4}} child {node (11) [circle,fill] {} child {node (110) {5}} child {node (111) {6}}}}; \end{tikzpicture} \raisebox{3mm}{ $\longrightarrow$ } \begin{tikzpicture}[ scale=0.5, font=\footnotesize, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt}, level 3/.style={sibling distance=15pt} ]
\node (root) [circle,fill] {} child {node (0) [circle,fill] {} child {node (00) [circle,fill] {} child {node (000) {2}} child {node (001) {1}}} child {node (01) {3}}} child {node (1) [circle,fill] {} child {node (10) {6}} child {node (11) [circle,fill] {} child {node (110) {4}} child {node (111) {5}}}}; \end{tikzpicture} \end{gather*} \end{minipage} \begin{minipage}{0.48\textwidth} \begin{gather*} \text{\footnotesize $b = (000,001,010,011,100,101,110,111)$} \\ \text{\footnotesize \quad $\mapsto (000,010,011,100,101,110,111,001)$} \\[1pt] \begin{tikzpicture}[ scale=0.5, font=\footnotesize, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt}, level 3/.style={sibling distance=15pt} ] \node (root) [circle,fill] {} child {node (0) [circle,fill] {} child {node (00) [circle,fill] {} child {node (000) {1}} child {node (001) {2}}} child {node (01) [circle,fill] {} child {node (010) {3}} child {node (011) {4}}}} child {node (1) [circle,fill] {} child {node (10) [circle,fill] {} child {node (100) {5}} child {node (101) {6}}} child {node (11) [circle,fill] {} child {node (110) {7}} child {node (111) {8}}}}; \end{tikzpicture} \raisebox{3mm}{ $\longrightarrow$ } \begin{tikzpicture}[ scale=0.5, font=\footnotesize, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt}, level 3/.style={sibling distance=15pt} ] \node (root) [circle,fill] {} child {node (0) [circle,fill] {} child {node (00) [circle,fill] {} child {node (000) {1}} child {node (001) {8}}} child {node (01) [circle,fill] {} child {node (010) {2}} child {node (011) {3}}}} child {node (1) [circle,fill] {} child {node (10) [circle,fill] {} child {node (100) {4}} child {node (101) {5}}} child {node (11) [circle,fill] {} child {node (110) {6}} child {node (111) {7}}}}; \end{tikzpicture} \end{gather*}
\end{minipage} \begin{minipage}{0.42\textwidth} \begin{gather*} \text{\footnotesize $c = (00,01,1) \mapsto (11,0,10)$} \\[1pt] \begin{tikzpicture}[ scale=0.5, font=\footnotesize, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt} ] \node (root) [circle,fill] {} child {node (0) [circle,fill] {} child {node (00) {1}} child {node (01) {2}}} child {node (1) {3}}; \end{tikzpicture} \raisebox{5mm}{ \ $\longrightarrow$ \ } \begin{tikzpicture}[ scale=0.5, font=\footnotesize, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt} ] \node (root) [circle,fill] {} child {node (0) {2}} child {node (1) [circle,fill] {} child {node (10) {3}} child {node (11) {1}}}; \end{tikzpicture} \\[-2mm] \begin{tikzpicture}[scale=2,thick] \draw [fill=white] ( 0:0.3) -- ( 0:0.2) arc ( 0: 90:0.2) -- ( 90:0.3) arc ( 90: 0:0.3); \draw [fill=black!40] ( 90:0.3) -- ( 90:0.2) arc ( 90:180:0.2) -- (180:0.3) arc (180: 90:0.3); \draw [fill=black] (180:0.3) -- (180:0.2) arc (180:360:0.2) -- (360:0.3) arc (360:180:0.3); \end{tikzpicture} \raisebox{5mm}{ $\longrightarrow$ } \begin{tikzpicture}[scale=2,thick,draw=blue!40!black] \draw [fill=black!40] ( 0:0.3) -- ( 0:0.2) arc ( 0:180:0.2) -- (180:0.3) arc (180: 0:0.3); \draw [fill=black] (180:0.3) -- (180:0.2) arc (180:270:0.2) -- (270:0.3) arc (270:180:0.3); \draw [fill=white] (270:0.3) -- (270:0.2) arc (270:360:0.2) -- (360:0.3) arc (360:270:0.3); \end{tikzpicture} \end{gather*} \end{minipage} \begin{minipage}{0.48\textwidth} \begin{gather*} \text{\footnotesize $s = (00,01,10,11) \mapsto (0,100,101,11)$} \\[1pt] \begin{tikzpicture}[ scale=0.5, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt}, level 3/.style={sibling distance=15pt} ] \node (root) [circle,fill]
{} child {node (0) [circle,fill] {} child {node (00) {1}} child {node (01) {2}}} child {node (1) [circle,fill] {} child {node (10) {3}} child {node (11) {4}}}; \end{tikzpicture} \raisebox{5mm}{ \ $\longrightarrow$ \ } \begin{tikzpicture}[ scale=0.5, font=\footnotesize, inner sep=0pt, baseline=-30pt, level distance=20pt, level 1/.style={sibling distance=60pt}, level 2/.style={sibling distance=30pt}, level 3/.style={sibling distance=15pt} ] \node (root) [circle,fill] {} child {node (0) {1}} child {node (1) [circle,fill] {} child {node (10) [circle,fill] {} child {node (100) {2}} child {node (101) {3}}} child {node (11) {4}}}; \end{tikzpicture} \\[3.1mm] \begin{tikzpicture}[scale=2,thick] \draw (0, 0) rectangle (0.25,0.1); \draw (0.25,0) rectangle (0.5, 0.1); \draw (0.5, 0) rectangle (0.75,0.1); \draw (0.75,0) rectangle (1, 0.1); \end{tikzpicture} \longrightarrow \begin{tikzpicture}[scale=2,thick] \draw (0, 0) rectangle (0.5, 0.1); \draw (0.5, 0) rectangle (0.625,0.1); \draw (0.625,0) rectangle (0.75, 0.1); \draw (0.75, 0) rectangle (1, 0.1); \end{tikzpicture} \end{gather*} \end{minipage} \caption{Four elements of Thompson's group $V$.} \label{fig:elements} \end{figure} A motivating perspective in the study of generating sets of Thompson's groups is that $V$ combines the selfsimilarity of the Cantor space with permutations from the symmetric group. Indeed, to $g \in V$ we may associate (not uniquely) a permutation as follows. Let $\sigma\colon A \to B$ be a basis pair for $g$ and write $A = \{ a_1, \dots, a_n \}$ and $B = \{ b_1, \dots, b_n \}$ where $a_1 < \dots < a_n$ and $b_1 < \dots < b_n$ in the lexicographic order. Then the permutation associated to $g$ is the element $\phi_g \in S_n$ satisfying $a_ig = b_{i\phi_g}$.
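To make the prefix-substitution action and the associated permutation concrete, here is a minimal Python sketch (our own illustration; the function names are not from the literature). It represents a basis pair as a dictionary $u \mapsto u\sigma$ and acts on finite binary words that are long enough to begin with a basis prefix:

```python
def apply(pair, seq):
    """Apply a basis pair (a dict u -> u*sigma) by prefix substitution.

    Since the domain basis is prefix-free, at most one key matches.
    """
    for u, v in pair.items():
        if seq.startswith(u):
            return v + seq[len(u):]
    raise ValueError("word has no prefix in the basis")

def associated_permutation(pair):
    """The permutation i -> i*phi_g (1-indexed), reading both bases
    in lexicographic order, so that a_i maps onto b_{i*phi_g}."""
    a = sorted(pair)              # domain basis a_1 < ... < a_n
    b = sorted(pair.values())     # image basis  b_1 < ... < b_n
    return {i + 1: b.index(pair[u]) + 1 for i, u in enumerate(a)}

# The element c: (00, 01, 1) -> (11, 0, 10) considered above.
c = {"00": "11", "01": "0", "1": "10"}
```

For instance, `apply(c, "01010101")` returns `"0010101"`, matching the computation of $c$ on $01010101\dots$ given above.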
For the elements in Figure~\ref{fig:elements}, for instance, \[ \phi_a = (1 \, 2)(4 \, 5 \, 6), \quad \phi_b = (2 \, 3 \, 4 \, 5 \, 6 \, 7 \, 8), \quad \phi_c = (1 \, 3 \, 2), \quad \phi_s = 1. \] This perspective gives an easy way to define $F$ and $T$. Thompson's groups $F$ and $T$ are the subgroups of $V$ of elements whose associated permutation is trivial and cyclic (i.e.\ a power of the cycle $(1 \, 2 \, \cdots \, n)$), respectively (this is well defined). Clearly $F < T < V$ and, referring to Figure~\ref{fig:elements}, we see that $s \in F$, $c \in T \setminus F$ and $a,b \in V \setminus T$. Given a binary word $u$, we can associate a subset $I_u$ of the unit interval $[0,1]$ (or unit circle $\mathbb{S}^1$) inductively as follows: the empty word corresponds to $(0,1)$ and for any binary word $u$, the words $u0$ and $u1$ correspond to the open left and right halves of $I_u$. In this way, a basis for $\mathfrak{C}$ can be interpreted as a sequence of disjoint open intervals whose closures cover $[0,1]$ (or $\mathbb{S}^1$), and an element of $V$, given as a basis pair $\sigma\colon\{a_1, \dots, a_n\}\to\{b_1, \dots, b_n\}$, defines a bijection $g\colon[0,1] \to [0,1]$ (or $g\colon\mathbb{S}^1 \to \mathbb{S}^1$) by specifying that $I_{a_i}g = I_{b_i}$ and $g|_{I_{a_i}}$ is affine for all $1 \leqslant i \leqslant n$. Under this correspondence, $F$ is a group of piecewise linear homeomorphisms of $[0,1]$ and $T$ is a group of piecewise linear homeomorphisms of $\mathbb{S}^1$. See Figure~\ref{fig:elements} for some examples. \subsection{\boldmath Thompson's group $V$} \label{ss:infinite_thompson_v} The final three sections of this survey address generating sets for Thompson's groups, which have seen a great deal of very recent progress. We begin with $V$. The group $V$ is $2$-generated. Indeed, referring to Figure~\ref{fig:elements}, $V = \langle a,b \rangle$. Given this infinite $2$-generated simple group $V$, we are naturally led to ask: is $V$ $\frac{3}{2}$-generated?
An answer was given by Donoven and Harper in 2020 \cite{ref:DonovenHarper20}. \begin{theorem} \label{thm:donoven_harper} Thompson's group $V$ is $\frac{3}{2}$-generated. \end{theorem} Theorem~\ref{thm:donoven_harper} gave the first example of a noncyclic infinite $\frac{3}{2}$-generated group, other than the pathological \emph{Tarski monsters}: the infinite groups whose only proper nontrivial subgroups have order $p$ for a fixed prime $p$, which are clearly simple and $\frac{3}{2}$-generated and were proved to exist for all $p > 10^{75}$ by Olshanskii in \cite{ref:Olshanskii80}. (Note that the groups $G_1$ and $G_2$ in Example~\ref{ex:cox} were found later by Cox in \cite{ref:Cox22}, motivated by a question posed in \cite{ref:DonovenHarper20}.) In particular, $V$ was the first finitely presented example of a noncyclic infinite $\frac{3}{2}$-generated group. \textbf{Methods. A parallel with symmetric groups. } Let us write $G = S_n$ and $\Omega = \{ 1, \dots, n \}$. It is well known that $G$ is generated by the set of transpositions $\{ (i \, j) \mid \text{distinct $i, j \in \Omega$} \}$, which yields a natural presentation for $G$, namely \begin{equation} \label{eq:presentation_symmetric} \langle t_{i,j} \mid t_{i,j}^2, \ t_{i,j}^{t_{k,l}} = t_{i (k \, l),j(k \, l)} \rangle. \end{equation} Moreover, using the fact that $G$ is generated by transpositions, we obtain the following \emph{covering lemma} (here $G_{[A]}$ is the subgroup of $G$ supported on $A \subseteq \Omega$). \begin{lemma} \label{lem:covering_symmetric} Let $A_1, \dots, A_k \subseteq \Omega$ satisfy $A_i \cap A_{i+1} \neq \emptyset$ and $\bigcup_{i=1}^k{A_i} = \Omega$. Then $G_{[A_i]} \cong S_{|A_i|}$ for all $i$, and $G = \langle G_{[A_1]}, \dots, G_{[A_k]} \rangle$. \end{lemma} These results for $G = S_n$ have analogues for $V$.
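Lemma~\ref{lem:covering_symmetric} is easy to verify computationally in small cases. The sketch below (our own illustrative code, not from \cite{ref:DonovenHarper20}) generates all of $S_5$, acting on $\{0,\dots,4\}$, from transpositions supported on the overlapping blocks $A_1 = \{0,1,2\}$ and $A_2 = \{2,3,4\}$:

```python
def generated(gens, n):
    """Closure of a set of permutations (tuples of images of 0..n-1)
    under composition; since the ambient group is finite, the set of
    all products of the generators is exactly the generated subgroup."""
    identity = tuple(range(n))
    group, frontier = {identity}, {identity}
    while frontier:
        new = {tuple(g[h[i]] for i in range(n)) for g in frontier for h in gens}
        frontier = new - group
        group |= frontier
    return group

def transposition(i, j, n):
    """The transposition (i j) as a tuple of images."""
    images = list(range(n))
    images[i], images[j] = images[j], images[i]
    return tuple(images)

# Transpositions within A_1 = {0,1,2} and A_2 = {2,3,4}; the blocks
# overlap in the point 2, and together generate Sym({0,...,4}).
gens = [transposition(0, 1, 5), transposition(1, 2, 5),
        transposition(2, 3, 5), transposition(3, 4, 5)]
```

Here `len(generated(gens, 5))` is $120 = |S_5|$, whereas dropping the overlap (say, keeping only $(0\,1)$ and $(2\,3)$) yields a subgroup of order $4$.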
Here, \emph{transpositions} are elements of the following form: for $u, v \in X$ such that the corresponding subsets of $\mathfrak{C}$ are disjoint, we write $(u \, v)$ for the element given as $(u,v,w_1, \dots, w_k) \mapsto (v,u,w_1, \dots, w_k)$ where $\{ u, v, w_1, \dots, w_k\}$ is a basis for $\mathfrak{C}$. Brin \cite{ref:Brin04} proved that $V$ is generated by $\{ (u \, v) \mid \text{disjoint $u,v \in X$} \}$. As an aside, let us point out why the subgroup of elements that are a product of an even number of transpositions is not a proper nontrivial normal subgroup of $V$ (as with the symmetric and alternating groups): this subgroup is not proper. Indeed, every element of $V$ is a product of an even number of transpositions as the selfsimilarity of $\mathfrak{C}$ shows that $(u \, v)$ can be rewritten as $(u0 \, v0)(u1 \, v1)$. Bleak and Quick \cite[Theorem~1.1]{ref:BleakQuick17} demonstrated how this generating set gives a presentation for $V$ combining the corresponding presentation for the symmetric group in \eqref{eq:presentation_symmetric} with the selfsimilarity of $\mathfrak{C}$, namely \begin{equation} \label{eq:presentation_v} \langle t_{u,v} \mid t_{u,v}^2, \ t_{u,v}^{t_{x,y}} = t_{u(x \, y),v(x \, y)}, \ t_{u,v} = t_{u0,v0}t_{u1,v1} \rangle \end{equation} (see \cite[(1.1)]{ref:BleakQuick17} for a full explanation of the notation used in the relations). We will say no more about presentations, save that Bleak and Quick found a presentation for $V$ with 2 generators and 7 relations \cite[Theorem~1.3]{ref:BleakQuick17}, which they derived from another, more intuitive, presentation based on the analogy with $S_n$ which has 3 generators and 8 relations \cite[Theorem~1.2]{ref:BleakQuick17}. As with the symmetric group, the fact that $V$ is generated by transpositions yields an easy proof of the following.
\begin{lemma} \label{lem:covering_v} Let $U_1, \dots, U_k \subseteq \mathfrak{C}$ be clopen subsets satisfying $U_i \cap U_{i+1} \neq \emptyset$ and $\bigcup_{i=1}^k U_i = \mathfrak{C}$. Then $V_{[U_i]} \cong V$ for all $i$, and $V = \langle V_{[U_1]}, \dots, V_{[U_k]} \rangle$. \end{lemma} With Lemma~\ref{lem:covering_v} in place, we now highlight the main ideas in the proof of Theorem~\ref{thm:donoven_harper} by way of an example (this is \cite[Example~4.1]{ref:DonovenHarper20}). We will see an alternative approach in Theorem~\ref{thm:donoven_harper_hyde}. \begin{example} \label{ex:donoven_harper} Let $x = (00 \ \ 01) \in V$. We will construct $y \in V$ such that $\langle x,y \rangle = V$. Let $y_1 = a_{[00]}$ and $y_2 = b_{[01]}$, where for $g \in V$ and clopen $A \subseteq \mathfrak{C}$ we write $g_{[A]}$ for the image of $g$ under the canonical isomorphism $V \to V_{[A]}$. Let $y_3 = (00 \ \ 01 \ \ 10 \ \ 11)_{[0^310]} \cdot (0^310^3 \ \ 010^3)$ and $y_4 = (0000 \ \ 0001 \ \ \cdots \ \ 1010)_{[0^31^2]} \cdot (0^31^20^4 \ \ 1)$, and define $y = y_1y_2y_3y_4$. Note that $y_1$, $y_2$, $y_3$ and $y_4$ have coprime orders (6, 7, 5 and 11, respectively). Moreover, these elements have disjoint support, so they commute. Consequently, all four elements are suitable powers of $y$ and are, thus, contained in $\langle x,y \rangle$. We claim that $\langle x,y \rangle = V$. Recall that $V = \langle a,b\rangle$, so $V_{[00]} = \langle a_{[00]},b_{[00]}\rangle = \langle y_1, y_2^x\rangle \leqslant \langle x,y \rangle$. In addition, $V_{[01]} = (V_{[00]})^x \leqslant \langle x,y \rangle$. Using appropriate elements from $V_{[00]}$ and $V_{[01]}$ we can show that $(000 \ \ 01) \in \langle V_{[00]}, V_{[01]}, y_3 \rangle \leqslant \langle x,y \rangle$ and $(000 \ \ 1) \in \langle V_{[00]}, y_4 \rangle \leqslant \langle x,y \rangle$.
Therefore, $\langle x,y \rangle \geq \langle V_{[00]}, V_{[00]}^{(000 \ \ 01)}, V_{[00]}^{(000 \ \ 1)} \rangle = \langle V_{[000 \cup 001]}, V_{[01 \cup 001]}, V_{[1 \cup 001]} \rangle$. Now applying Lemma~\ref{lem:covering_v} twice gives $\langle x,y \rangle = V$. \end{example} \subsection{\boldmath Generalisations of $V$} \label{ss:infinite_thompson_general} There are numerous variations on $V$, and these are the focus of this section. The \emph{Higman--Thompson group} $V_n$, for $n \geq 2$, is an infinite finitely presented group, introduced by Higman in \cite{ref:Higman74}. There is a natural action of $V_n$ on $n$-ary Cantor space $\mathfrak{C}_n = \{0,1,\dots,n-1\}^{\mathbb{N}}$, and $V_2$ is nothing other than $V$. The derived subgroup of $V_n$ equals $V_n$ for even $n$ and has index two for odd $n$. In both cases, $V_n'$ is simple and both $V_n$ and $V_n'$ are $2$-generated \cite{ref:Mason77}. The \emph{Brin--Thompson group} $nV$, for $n \geq 1$, acts on $\mathfrak{C}^n$ and was defined by Brin in \cite{ref:Brin04}. The groups $V=1V, 2V, 3V, \dots$ are pairwise nonisomorphic \cite{ref:BleakLanoue10}, simple \cite{ref:Brin10} and $2$-generated \cite[Corollary~1.3]{ref:Quick19}. The results about generating $V$ by transpositions have analogues for $V_n$ and $nV$ (see \cite[Section~3]{ref:DonovenHarper20}), and, in \cite[Theorem~1.1]{ref:Quick19}, Quick gives a presentation for $nV$ analogous to the one for $V$ in \eqref{eq:presentation_v}. Theorem~\ref{thm:donoven_harper} extends to all of these groups too \cite[Theorems~1 \& 2]{ref:DonovenHarper20}. \begin{theorem} \label{thm:donoven_harper_generalisation} For all $n \geq 2$, the Higman--Thompson groups $V_n$ and $V_n'$ are $\frac{3}{2}$-generated, and for all $n \geq 1$, the Brin--Thompson group $nV$ is $\frac{3}{2}$-generated.
\end{theorem} In particular, the groups $V_n$ when $n$ is odd give infinitely many examples of infinite $\frac{3}{2}$-generated groups that are not simple. As we introduced them, the Higman--Thompson group $V_n'$ is a simple subgroup of $\mathrm{Homeo}(\mathfrak{C}_n)$ and the Brin--Thompson group $nV$ is a simple subgroup of $\mathrm{Homeo}(\mathfrak{C}^n)$. Since $\mathfrak{C}^n$ ($n$th power of $\mathfrak{C}$) and $\mathfrak{C}_n$ ($n$-ary Cantor space) are both homeomorphic to $\mathfrak{C}$, all of these groups can be viewed as subgroups of $\mathrm{Homeo}(\mathfrak{C})$. Recent work of Bleak, Elliott and Hyde \cite{ref:BleakElliottHyde} highlights that these groups, and numerous others (such as Nekrashevych's simple groups of dynamical origin), can be viewed within one unified dynamical framework. A group $G \leqslant \mathrm{Homeo}(\mathfrak{C})$ is said to be \emph{vigorous} if for any clopen subsets $\emptyset \subsetneq B,C \subsetneq A \subseteq \mathfrak{C}$ there exists $g \in G$ supported on $A$ such that $Bg \subseteq C$. In \cite{ref:BleakElliottHyde}, Bleak, Elliott and Hyde study vigorous groups and, among much else, prove that a perfect vigorous group $G \leqslant \mathrm{Homeo}(\mathfrak{C})$ is simple if and only if it is generated by its elements of \emph{small support} (namely, elements supported on a proper clopen subset of $\mathfrak{C}$). To give a flavour of how these dynamical properties suitably capture the ideas we have seen in this section, compare the following, which is \cite[Lemma~2.18 \& Proposition~2.19]{ref:BleakElliottHyde}, with Lemma~\ref{lem:covering_v}. \begin{lemma} \label{lem:covering_vigorous} Let $G$ be a vigorous group that is generated by its elements of small support. Let $U_1, \dots, U_k \subseteq \mathfrak{C}$ be clopen subsets satisfying $U_i \cap U_{i+1} \neq \emptyset$ and $\bigcup_{i=1}^k U_i = \mathfrak{C}$. Then $G = \langle G_{[U_1]}, \dots, G_{[U_k]} \rangle$.
Moreover, if $G$ is simple, then for each $i$ the group $G_{[U_i]}$ is a simple vigorous group. \end{lemma} Bleak, Elliott and Hyde go on to prove that every finitely generated simple vigorous group is $2$-generated \cite[Theorem~1.12]{ref:BleakElliottHyde}. Are all such groups $\frac{3}{2}$-generated? Bleak, Donoven, Harper and Hyde \cite{ref:BleakDonovenHarperHyde} recently proved that the answer is yes, and in fact $u(G) \geq 1$. \begin{theorem} \label{thm:donoven_harper_hyde} Let $G \leqslant \mathrm{Homeo}(\mathfrak{C})$ be a finitely generated simple vigorous group. Then there exists an element $s \in G$ of small support and order 30 such that for every nontrivial $x \in G$ there exists $y \in s^G$ such that $\langle x, y \rangle = G$. \end{theorem} Theorem~\ref{thm:donoven_harper_hyde} gives $u(G) \geq 1$ for all the simple groups $G$ in Theorem~\ref{thm:donoven_harper_generalisation}. In particular, we obtain a strong version of Theorem~\ref{thm:donoven_harper} on Thompson's group $V$, improving $s(V) \geq 1$ to $u(V) \geq 1$. It is possible to obtain stronger results on the (uniform) spread of $V$ and its generalisations (and $T$, discussed below), and this is the subject of current work of the author and others (e.g. \cite{ref:BleakDonovenHarperHyde}). \subsection{\boldmath Thompson's groups $T$ and $F$} \label{ss:infinite_thompson_t} In this final section, we discuss generating sets of Thompson's groups $T$ and $F$. We begin with $T$, which is a simple $2$-generated group, so it is natural to study its (uniform) spread. In 2022, Bleak, Harper and Skipper \cite{ref:BleakHarperSkipper} proved $u(T) \geq 1$. \begin{theorem} \label{thm:bleak_harper_skipper} There exists an element $s \in T$ such that for every nontrivial $x \in T$ there exists $y \in s^T$ such that $\langle x, y \rangle = T$. \end{theorem} \begin{corollary} \label{cor:bleak_harper_skipper} Thompson's group $T$ is $\frac{3}{2}$-generated.
\end{corollary} The element $s$ in Theorem~\ref{thm:bleak_harper_skipper} can be chosen as the one in Figure~\ref{fig:elements}. Moreover, in \cite[Proposition~3.1]{ref:BleakHarperSkipper}, it is shown that if we restrict to elements $x$ of infinite order, then we can choose $s$ to be any infinite order element, that is to say, for any two infinite order elements $x,s \in T$ there exists $g \in T$ such that $\langle x, s^g \rangle = T$. This naturally raises the question of whether an arbitrary infinite order element can be chosen for $s$ in Theorem~\ref{thm:bleak_harper_skipper} (see \cite[Question~1]{ref:BleakHarperSkipper}). We now turn to Thompson's group $F$. This is $2$-generated since if we write $x_0 = (00, 01, 1) \mapsto (0, 10, 11)$ and $x_1 = (0, 100, 101, 11) \mapsto (0, 10, 110, 111)$, then, by \cite[Theorem~3.4]{ref:CannonFloydParry96} for example, $F = \langle x_0, x_1 \rangle$. Moreover, if we inductively define $x_{i+1} = x_i^{x_0}$ for all $i \geq 1$, then the elements $x_0, x_1, x_2, \dots$ witness the following well-known presentation \[ F = \langle x_0, x_1, x_2, \dots \mid \text{$x_j^{x_i} = x_{j+1}$ for $i < j$} \rangle. \] However, $F$ is not a simple group. Considering $F$ in its natural action on $[0,1]$, the homomorphism $\phi\colon F \to \mathbb{Z}^2$ defined as $f \mapsto (\log_2{f'(0^+)}, \log_2{f'(1^-)})$ is surjective and the kernel of $\phi$ is the derived subgroup $F'$, which is simple. Moreover, $F'$ is the unique minimal normal subgroup of $F$, so the nontrivial normal subgroups of $F$ are in bijection with normal subgroups of $F/F' = \mathbb{Z}^2$ (see \cite[Section~4]{ref:CannonFloydParry96} for proofs of these claims). In particular, $F$ is not $\frac{3}{2}$-generated since it has a proper noncyclic quotient. Now $F'$ is not $\frac{3}{2}$-generated for a different reason: it is not finitely generated.
Indeed, for any nontrivial normal subgroup $N = \phi^{-1}(\langle (a_0,a_1), (b_0,b_1) \rangle)$, if $\{ a_0, b_0 \} = \{ 0 \}$ or $\{ a_1, b_1 \} = \{ 0 \}$, then $N$ is not finitely generated. To see this in the former case, for finitely many elements each of which acts as the identity on an interval containing $0$, there exists an interval containing $0$ on which they all act as the identity, so they generate a proper subgroup of $N$ (for the latter case, replace $0$ with $1$). However, the following recent theorem of Golan \cite[Theorem~2]{ref:GolanGen} shows that these are the only obstructions to $\frac{3}{2}$-generation. \begin{theorem} \label{thm:golan} Let $(a_0,a_1), (b_0,b_1) \in \mathbb{Z}^2$ with $\{a_0,b_0\} \neq \{0\}$ and $\{a_1,b_1\} \neq \{0\}$. Let $x \in F$ be a nontrivial element such that $\phi(x) = (a_0,a_1)$. Then there exists $y \in F$ such that $\phi(y) = (b_0,b_1)$ and $\langle x,y \rangle = \phi^{-1}(\langle(a_0,a_1),(b_0,b_1)\rangle)$. \end{theorem} Theorem~\ref{thm:golan} has the following consequence, which asserts that $F$ is almost $\frac{3}{2}$-generated \cite[Theorem~1]{ref:GolanGen}. \begin{corollary} \label{cor:golan} Let $f \in F$ and assume that $\phi(f)$ is contained in a generating pair of $\phi(F)$. Then $f$ is contained in a generating pair of $F$. \end{corollary} Theorem~\ref{thm:golan} also implies that every finitely generated normal subgroup of $F$ is $2$-generated. In particular, every finite index subgroup of $F$ is $2$-generated. \textbf{Methods. Covering lemmas and a generation criterion. } We conclude the survey by discussing how Theorems~\ref{thm:bleak_harper_skipper} and~\ref{thm:golan} are proved in \cite{ref:BleakHarperSkipper} and \cite{ref:GolanGen}, respectively. Covering lemmas (analogues of Lemma~\ref{lem:covering_symmetric}), again, play a role. For $F$ and $T$, these results are well known, see \cite[Corollary~2.6 \& Lemma~2.7]{ref:BleakHarperSkipper} for example.
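The homomorphism $\phi$ is easy to evaluate on a basis pair: the cone $u\mathfrak{C}$ corresponds to a dyadic interval of length $2^{-|u|}$, so on the leftmost cone $u \mapsto v$ the slope is $2^{|u|-|v|}$, and similarly at $1^-$ with the rightmost (lexicographically greatest) cone. A sketch of this computation (our own code, not Golan's; $x_0$ and $x_1$ are the generators above, written as dictionaries):

```python
def abelianisation(pair):
    """phi(f) = (log2 f'(0+), log2 f'(1-)) for f in F given as a basis pair.

    The cone of a word u is a dyadic interval of length 2^-len(u), so the
    slope on the cone u -> v is 2^(len(u) - len(v)); the leftmost and
    rightmost cones are the lexicographically least and greatest words.
    """
    left, right = min(pair), max(pair)
    return (len(left) - len(pair[left]), len(right) - len(pair[right]))

# The generators of F as basis pairs (domain word -> image word).
x0 = {"00": "0", "01": "10", "1": "11"}
x1 = {"0": "0", "100": "10", "101": "110", "11": "111"}
```

One finds $\phi(x_0) = (1,-1)$ and $\phi(x_1) = (0,-1)$; these two vectors generate $\mathbb{Z}^2$, consistent with the surjectivity of $\phi$.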
(We call an interval $[a,b]$ \emph{dyadic} if $a,b \in \mathbb{Z}[\frac{1}{2}]$.) \begin{lemma} \label{lem:covering_f} Let $[a_1,b_1], \dots, [a_k,b_k] \subseteq [0,1]$ be dyadic intervals satisfying $\bigcup_{i=1}^k (a_i,b_i) = (0,1)$. Then $F_{[a_i,b_i]} \cong F$ for all $i$, and $F = \langle F_{[a_1,b_1]}, \dots, F_{[a_k,b_k]} \rangle$. \end{lemma} \begin{lemma} \label{lem:covering_t} Let $[a_1,b_1], \dots, [a_k,b_k] \subseteq \mathbb{S}^1$ be dyadic intervals satisfying $\bigcup_{i=1}^k (a_i,b_i) = \mathbb{S}^1$. Then $T_{[a_i,b_i]} \cong F$ for all $i$, and $T = \langle T_{[a_1,b_1]}, \dots, T_{[a_k,b_k]} \rangle$. \end{lemma} Another key ingredient is a criterion due to Golan, for which we need some further notation. Fix a subgroup $H \leqslant F$. An element $f \in F \leqslant \mathrm{Homeo}([0,1])$ is \emph{piecewise-$H$} if there is a finite subdivision of $[0,1]$ such that on each interval in the subdivision, $f$ coincides with an element of $H$. The \emph{closure} of $H$, written ${\rm Cl}(H)$, is the subgroup of $F$ containing all elements that are piecewise-$H$. The following result combines \cite[Theorem~1.3]{ref:GolanMAMS} with \cite[Theorem~1.3]{ref:GolanMax}. \begin{theorem} \label{thm:golan_criterion} Let $H \leqslant F$. Then the following hold: \begin{enumerate} \item $H \geq F'$ if and only if ${\rm Cl}(H) \geq F'$ and there exist $f \in H$ and a dyadic $\omega \in (0,1)$ such that $f'(\omega^+)=2$ and $f'(\omega^-)=1$; \item $H = F$ if and only if ${\rm Cl}(H) \geq F'$ and there exist $f,g \in H$ such that $f'(0^+)=g'(1^-)=2$ and $f'(1^-)=g'(0^+)=1$. \end{enumerate} \end{theorem} We now discuss the proof of Theorem~\ref{thm:bleak_harper_skipper} on $T$ given by Bleak, Harper and Skipper \cite{ref:BleakHarperSkipper}.
By Lemma~\ref{lem:covering_t}, for each nontrivial $x \in T$ it suffices to find a dyadic interval $[a,b] \subseteq \mathbb{S}^1$ and $y \in s^T$ such that $\bigcup_{g \in \langle x,y \rangle} (a,b)g = \mathbb{S}^1$ and $T_{[a,b]} \leqslant \langle x, y \rangle$. If $|x|$ is infinite, a dynamical argument is used (for any infinite order element $s$), see \cite[Proposition~3.1]{ref:BleakHarperSkipper}. The key ingredients for finite $|x|$ are highlighted in the following example. \begin{example} \label{ex:bleak_harper_skipper} Let $x \in T$ be a nontrivial torsion element. We will prove that there exists $y \in s^T$ (for $s$ as in Figure~\ref{fig:elements}) such that $\langle x, y \rangle = T$. By replacing $x$ by a power if necessary, we may assume that $x$ has rotation number $\frac{1}{p}$ for a prime $p$. For exposition, we only discuss the case $p \geq 5$. Since any two torsion elements of $T$ with the same rotation number are conjugate, by replacing $x$ by a conjugate if necessary, we may assume that $x = (00, 01, 10, 110, \dots, 1^{p-3}0, 1^{p-2}) \mapsto (01, 10, 110, \dots, 1^{p-3}0, 1^{p-2}, 00)$. We claim that $T = \langle x, s \rangle$. By Lemma~\ref{lem:covering_t}, since $\mathbb{S}^1 = \bigcup_{i \in \mathbb{Z}} (0,\frac{7}{8})x^i$, it suffices to prove that $T_{[0,\frac{7}{8}]} \leqslant \langle x, s \rangle$. Indeed, we claim that $T_{[0,\frac{7}{8}]} = \langle y_0,y_1 \rangle$ for $y_0 = s$ and $y_1 = s^x$. Defining $t\colon(0,\frac{7}{8}) \to (0,1)$ as $\omega t = \omega$ if $\omega \leqslant \frac{3}{4}$ and $\omega t = 2\omega-\frac{3}{4}$ if $\omega > \frac{3}{4}$, it suffices to prove that $\langle y_0^t, y_1^t \rangle = (T_{[0,\frac{7}{8}]})^t = F$. To do this, we apply Theorem~\ref{thm:golan_criterion}(ii). To verify the second condition, choose $f = y_0^t$ and $g = (y_1^t)^{-1}$, so $f'(0^+)=g'(1^-)=2$ and $f'(1^-)=g'(0^+)=1$. It remains to prove that ${\rm Cl}(\langle y_0^t, y_1^t \rangle) \geq F'$.
Here we apply another criterion: for $g_1, \dots, g_k \in F$ we have $\langle g_1, \dots, g_k \rangle \geq F'$ if and only if the Stallings $2$-core of $\langle g_1, \dots, g_k \rangle$ equals the Stallings $2$-core of $F$ \cite[Lemma~7.1 \& Remark~7.2]{ref:GolanMAMS}. The \emph{Stallings $2$-core} is a directed graph associated to a diagram group introduced by Guba and Sapir \cite{ref:GubaSapir97}. Given elements $g_1, \dots, g_k \in F$ represented as tree pairs, there is a short combinatorial algorithm to find the Stallings 2-core of $\langle g_1, \dots, g_k \rangle$, and it is straightforward to compute the Stallings 2-core of $\langle y_0^t, y_1^t \rangle$ and note that it is the Stallings 2-core of $F$ (see the proof of \cite[Proposition~3.2]{ref:BleakHarperSkipper}). Therefore, $F = \langle y_0^t, y_1^t \rangle$, completing the proof that $T = \langle x, s \rangle$. \end{example} We conclude by briefly outlining the proof of Theorem~\ref{thm:golan} on $F$ given by Golan \cite{ref:GolanGen}, which uses similar methods to those in \cite{ref:BleakHarperSkipper} on $T$ and \cite{ref:DonovenHarper20} on $V$. Let $(a_0,a_1), (b_0,b_1) \in \mathbb{Z}^2$ with $\{a_0,b_0\} \neq \{0\}$ and $\{a_1,b_1\} \neq \{0\}$, and let $x \in F \setminus \{1\}$ with $\phi(x) = (a_0,a_1)$. Observe that it suffices to find an element $y$ such that $\phi(y) = (b_0,b_1)$ and $\langle x,y \rangle \geq F'$. In \cite{ref:GolanGen}, an explicit choice of $y$, based on $x$, is given and the condition $\langle x,y \rangle \geq F'$ is verified via Theorem~\ref{thm:golan_criterion}(i). \begin{thebibliography}{999} \bibitem{ref:Aschbacher84} M.~Aschbacher, \emph{On the maximal subgroups of the finite classical groups}, Invent. Math. \textbf{76} (1984), 469--514. \bibitem{ref:AschbacherGuralnick84} M.~Aschbacher and R.~Guralnick, \emph{Some applications of the first cohomology group}, J. Algebra \textbf{90} (1984), 446--460.
\bibitem{ref:Binder68} G.~J. Binder, \emph{The bases of the symmetric group}, Izv. Vyssh. Uchebn. Zaved. Mat. \textbf{78} (1968), 19--25. \bibitem{ref:Binder70MZ} G.~J. Binder, \emph{Certain complete sets of complementary elements of the symmetric and the alternating group of the $n$th degree}, Mat. Zametki \textbf{7} (1970), 173--180. \bibitem{ref:Binder70} G.~J. Binder, \emph{The two-element bases of the symmetric group}, Izv. Vyssh. Uchebn. Zaved. Mat. \textbf{90} (1970), 9--11. \bibitem{ref:Binder73} G.~J. Binder, \emph{The inclusion of the elements of an alternating group of even degree in a two-element basis}, Izv. Vyssh. Uchebn. Zaved. Mat. \textbf{135} (1973), 15--18. \bibitem{ref:Blackburn06} S.~Blackburn, \emph{Sets of permutations that generate the symmetric group pairwise}, J. Combin. Theory Ser. A \textbf{113} (2006), 1572--1581. \bibitem{ref:BleakDonovenHarperHyde} C.~Bleak, C.~Donoven, S.~Harper and J.~Hyde, \emph{Generating simple vigorous groups}, {in preparation}. \bibitem{ref:BleakElliottHyde} C.~Bleak, L.~Elliott and J.~Hyde, \emph{Sufficient conditions for a group of homeomorphisms of the {C}antor set to be two-generated}, {preprint}, arxiv:\url{2008.04791}. \bibitem{ref:BleakHarperSkipper} C.~Bleak, S.~Harper and R.~Skipper, \emph{Thompson's group {$T$} is $\frac{3}{2}$-generated}, {preprint}, arxiv:\url{2206.05316}. \bibitem{ref:BleakLanoue10} C.~Bleak and D.~Lanoue, \emph{A family of non-isomorphism results}, Geom. Dedicata \textbf{146} (2010), 21--26. \bibitem{ref:BleakQuick17} C.~Bleak and M.~Quick, \emph{The infinite simple group {$V$} of {R}ichard {J.} {T}hompson: presentations by permutations}, Groups Geom. Dyn. \textbf{11} (2017), 1401--1436. \bibitem{ref:Bois09} J.-M.~Bois, \emph{Generators of simple {L}ie algebras in arbitrary characteristics}, Math. Z. \textbf{262} (2009), 715--741. \bibitem{ref:BradleyHolmes07} J.~D. Bradley and P.~E.
Holmes, \emph{Improved bounds for the spread of sporadic groups}, LMS J. Comput. Math. \textbf{10} (2007), 132--140. \bibitem{ref:BrennerWiegold75} J.~L. Brenner and J.~Wiegold, \emph{Two generator groups, {I}}, Michigan Math. J. \textbf{22} (1975), 53--64. \bibitem{ref:BreuerGuralnickKantor08} T.~Breuer, R.~M. Guralnick and W.~M. Kantor, \emph{Probabilistic generation of finite simple groups, {II}}, J. Algebra \textbf{320} (2008), 443--494. \bibitem{ref:BreuerGuralnickLucchiniMarotiNagy10} T.~Breuer, R.~M. Guralnick, A.~Lucchini, A.~Mar{\'o}ti and G.~P. Nagy, \emph{Hamiltonian cycles in the generating graphs of finite groups}, Bull. Lond. Math. Soc. \textbf{42} (2010), 621--633. \bibitem{ref:Brin04} M.~G. Brin, \emph{Higher dimensional {T}hompson groups}, Geom. Dedicata \textbf{108} (2004), 163--192. \bibitem{ref:Brin10} M.~G. Brin, \emph{On the baker's maps and the simplicity of the higher dimensional {T}hompson's groups {$nV$}}, Publ. Mat. \textbf{54} (2010), 433--439. \bibitem{ref:BritnellEvseevGuralnickHolmesMaroti08} J.~R. Britnell, A.~Evseev, R.~M. Guralnick, P.~E. Holmes and A.~Mar\'oti, \emph{Sets of elements that pairwise generate a linear group}, J. Combin. Theory Ser. A \textbf{115} (2008), 442--465. \bibitem{ref:BurgerMozes00} M.~Burger and S.~Mozes, \emph{Lattices in product of trees}, Inst. Hautes \'Etudes Sci. Publ. Math. \textbf{92} (2000), 151--194. \bibitem{ref:Burness071} T.~C. Burness, \emph{Fixed point ratios in actions of finite classical groups, {I}}, J. Algebra \textbf{309} (2007), 69--79. \bibitem{ref:Burness072} T.~C. Burness, \emph{Fixed point ratios in actions of finite classical groups, {II}}, J. Algebra \textbf{309} (2007), 80--138. \bibitem{ref:Burness073} T.~C. Burness, \emph{Fixed point ratios in actions of finite classical groups, {III}}, J. Algebra \textbf{314} (2007), 693--748. \bibitem{ref:Burness074} T.~C.
Burness, \emph{Fixed point ratios in actions of finite classical groups, {IV}}, J. Algebra \textbf{314} (2007), 749--788. \bibitem{ref:Burness18} T.~C. Burness, \emph{Simple groups, fixed point ratios and applications}, in \emph{Local Representation Theory and Simple Groups}, {EMS} Series of Lectures in Mathematics, European Mathematical Society, 2018, 267--322. \bibitem{ref:Burness19} T.~C. Burness, \emph{Simple groups, generation and probabilistic methods}, in \emph{Proceedings of Groups St Andrews 2017}, London Math. Soc. Lecture Note Series, vol. 455, Cambridge University Press, 2019, 200--229. \bibitem{ref:BurnessGuest13} T.~C. Burness and S.~Guest, \emph{On the uniform spread of almost simple linear groups}, Nagoya Math. J. \textbf{209} (2013), 35--109. \bibitem{ref:BurnessGuralnickHarper21} T.~C. Burness, R.~M. Guralnick and S.~Harper, \emph{The spread of a finite group}, Ann. of Math. \textbf{193} (2021), 619--687. \bibitem{ref:BurnessHarper19} T.~C. Burness and S.~Harper, \emph{On the uniform domination number of a finite simple group}, Trans. Amer. Math. Soc. \textbf{372} (2019), 545--583. \bibitem{ref:BurnessHarper20} T.~C. Burness and S.~Harper, \emph{Finite groups, $2$-generation and the uniform domination number}, Israel J. Math. \textbf{239} (2020), 271--367. \bibitem{ref:BurnessLiebeckShalev09} T.~C. Burness, M.~W. Liebeck and A.~Shalev, \emph{Base sizes for simple groups and a conjecture of {C}ameron}, Proc. Lond. Math. Soc. \textbf{98} (2009), 116--162. \bibitem{ref:BurnessThomas} T.~C. Burness and A.~R. Thomas, \emph{Normalisers of maximal tori and a conjecture of {V}dovin}, J. Algebra \textbf{619} (2023), 459--504. \bibitem{ref:Cameron22} P.~J. Cameron, \emph{Graphs defined on groups}, Int. J. Group Theory \textbf{11} (2022), 53--107. \bibitem{ref:CannonFloydParry96} J.~W. Cannon, W.~J. Floyd and W.~R.
Parry, \emph{Introductory notes on {R}ichard {T}hompson's groups}, Enseign. Math. \textbf{42} (1996), 1--44. \bibitem{ref:CellerLeedhamGreenMurrayNiemeyerOBrien95} F.~Celler, C.~R. Leedham-Green, S.~H. Murray, A.~C. Niemeyer and E.~A. O'Brien, \emph{Generating random elements of a finite group}, Comm. Algebra \textbf{23} (1995), 4931--4948. \bibitem{ref:Cleary17} S.~Cleary, \emph{Thompson's group}, in \emph{Office Hours with a Geometric Group Theorist}, Princeton University Press, 2017, 331--357. \bibitem{ref:Cox22} C.~G. Cox, \emph{On the spread of infinite groups}, Proc. Edinb. Math. Soc. \textbf{65} (2022), 214--228. \bibitem{ref:CrestaniLucchini13-Israel} E.~Crestani and A.~Lucchini, \emph{The generating graph of finite soluble groups}, Israel J. Math. \textbf{198} (2013), 63--74. \bibitem{ref:CrestaniLucchini13-JAlgCombin} E.~Crestani and A.~Lucchini, \emph{The non-isolated vertices in the generating graph of direct powers of simple groups}, J. Algebraic Combin. \textbf{37} (2013), 249--263. \bibitem{ref:Deshpande16} T.~Deshpande, \emph{Shintani descent for algebraic groups and almost simple characters of unipotent groups}, Compos. Math. \textbf{152} (2016), 1697--1724. \bibitem{ref:DonovenHarper20} C.~Donoven and S.~Harper, \emph{Infinite $\frac{3}{2}$-generated groups}, Bull. Lond. Math. Soc. \textbf{52} (2020), 657--673. \bibitem{ref:DuyanHalasiMaroti18} H.~Duyan, Z.~Halasi and A.~Mar\'oti, \emph{A proof of {P}yber's base size conjecture}, Adv. Math. \textbf{331} (2018), 720--747. \bibitem{ref:Dydak77_2} J.~Dydak, \emph{1-movable continua need not be pointed 1-movable}, Bull. Acad. Polon. Sci. S\'{e}r. Sci. Math. Astronom. Phys. \textbf{25} (1977), 559--562. \bibitem{ref:Dye79} R.~H. Dye, \emph{Interrelations of symplectic and orthogonal groups in characteristic two}, J. Algebra \textbf{59} (1979), 202--221. \bibitem{ref:Evans93} M.~J.
Evans, \emph{{$T$}-systems of certain finite simple groups}, Math. Proc. Camb. Phil. Soc. \textbf{113} (1993), 9--22. \bibitem{ref:Fairbairn12JGT} B.~Fairbairn, \emph{The exact spread of {${\rm {M}}_{23}$} is $8064$}, Int. J. Group Theory \textbf{1} (2012), 1--2. \bibitem{ref:Fairbarin12CA} B.~Fairbairn, \emph{New upper bounds on the spreads of sporadic simple groups}, Comm. Algebra \textbf{40} (2012), 1872--1877. \bibitem{ref:Flavell95} P.~Flavell, \emph{Finite groups in which every two elements generate a soluble subgroup}, Invent. Math. \textbf{121} (1995), 279--285. \bibitem{ref:Flavell01} P.~Flavell, \emph{Generation theorems for finite groups}, in \emph{Groups and combinatorics}, Adv. Stud. Pure. Math., vol.~32, Math. Soc. Japan, 2001, 291--300. \bibitem{ref:FulmanGuralnick12} J.~Fulman and R.~M. Guralnick, \emph{Bounds on the number and sizes of conjugacy classes in finite {C}hevalley groups with applications to derangements}, Trans. Amer. Math. Soc. \textbf{364} (2012), 3023--3070. \bibitem{ref:GhysSergiescu87} E.~Ghys and V.~Sergiescu, \emph{Sur un groupe remarquable de diff\'eomorphismes du cercle}, Comment. Math. Helv. \textbf{62} (1987), 185--239. \bibitem{ref:GolanMAMS} G.~{Golan Polak}, \emph{The generation problem in {T}hompson group {$F$}}, Mem. Amer. Math. Soc. {to appear}. \bibitem{ref:GolanMax} G.~{Golan Polak}, \emph{On maximal subgroups of {T}hompson's group {$F$}}, {preprint}, arxiv:\url{2209.03244}. \bibitem{ref:GolanGen} G.~{Golan Polak}, \emph{{T}hompson's group {$F$} is almost $\frac{3}{2}$-generated}, {preprint}, arxiv:\url{2210.03564}. \bibitem{ref:GoldsteinGuralnick} D.~Goldstein and R.~M. Guralnick, \emph{Generation of {J}ordan algebras and symmetric matrices}, in preparation. \bibitem{ref:Guba82} V.~S. Guba, \emph{A finitely generated simple group with free 2-generated subgroups}, Sibirsk. Mat. Zh.
\textbf{27} (1986), 50--67. \bibitem{ref:GubaSapir97} V.~Guba and M.~Sapir, \emph{Diagram groups}, Mem. Amer. Math. Soc. \textbf{130} (1997), viii+117. \bibitem{ref:GuralnickKantor00} R.~M. Guralnick and W.~M. Kantor, \emph{Probabilistic generation of finite simple groups}, J. Algebra \textbf{234} (2000), 743--792. \bibitem{ref:GuralnickKunyavskiiPlotkinShalev06} R.~Guralnick, B.~Kunyavski\u{\i}, E.~Plotkin and A.~Shalev, \emph{Thompson-like characterizations of the solvable radical}, J. Algebra \textbf{300} (2006), 363--375. \bibitem{ref:GuralnickMalle12JLMS} R.~M. Guralnick and G.~Malle, \emph{Simple groups admit {B}eauville structures}, J. Lond. Math. Soc. \textbf{85} (2012), 694--721. \bibitem{ref:GuralnickPenttilaPraegerSaxl97} R.~M. Guralnick, T.~Penttila, C.~E. Praeger and J.~Saxl, \emph{Linear groups with orders having certain large prime divisors}, Proc. Lond. Math. Soc. \textbf{78} (1997), 167--214. \bibitem{ref:GuralnickPlotkinShalev07} R.~Guralnick, E.~Plotkin and A.~Shalev, \emph{Burnside-type problems related to solvability}, Internat. J. Algebra Comput. \textbf{17} (2007), 1033--1048. \bibitem{ref:GuralnickSaxl03} R.~M. Guralnick and J.~Saxl, \emph{Generation of finite almost simple groups by conjugates}, J. Algebra \textbf{268} (2003), 519--571. \bibitem{ref:GuralnickShalev03} R.~M. Guralnick and A.~Shalev, \emph{On the spread of finite simple groups}, Combinatorica \textbf{23} (2003), 73--87. \bibitem{ref:Halasi12} Z.~Halasi, \emph{On the base size of the symmetric group acting on subsets}, Stud. Sci. Math. Hung. \textbf{49} (2012), 492--500. \bibitem{ref:Hall36} P.~Hall, \emph{The {E}ulerian functions of a group}, Quart. J. Math. \textbf{7} (1936), 134--151. \bibitem{ref:Harper17} S.~Harper, \emph{On the uniform spread of almost simple symplectic and orthogonal groups}, J. Algebra \textbf{490} (2017), 330--371.
\bibitem{ref:Harper21} S.~Harper, \emph{Shintani descent, simple groups and spread}, J. Algebra \textbf{578} (2021), 319--355. \bibitem{ref:HarperLNM} S.~Harper, \emph{The spread of almost simple classical groups}, Lecture Notes in Mathematics, vol. 2286, Springer, 2021. \bibitem{ref:Higman74} G.~Higman, \emph{Finitely presented infinite simple groups}, Notes on Pure Mathematics, vol.~8, Australian National University, Canberra, 1974. \bibitem{ref:Ionescu76} T.~Ionescu, \emph{On the generators of semisimple {L}ie algebras}, Linear Algebra Appl. \textbf{15} (1976), 271--292. \bibitem{ref:KantorLubotzky90} W.~M. Kantor and A.~Lubotzky, \emph{The probability of generating a finite classical group}, Geom. Dedicata \textbf{36} (1990), 67--87. \bibitem{ref:KantorLubotzkyShalev11} W.~M. Kantor, A.~Lubotzky and A.~Shalev, \emph{Invariable generation and the {C}hebotarev invariant of a finite group}, J. Algebra \textbf{348} (2011), 302--314. \bibitem{ref:Kawanaka77} N.~Kawanaka, \emph{On the irreducible characters of the finite unitary groups}, J. Math. Soc. Japan \textbf{29} (1977), 425--450. \bibitem{ref:Kourovka22} E.~I. Khukhro and V.~D. Mazurov (editors), \emph{The Kourovka Notebook: Unsolved Problems in Group Theory}, {20th Edition, Novosibirsk}, 2022, arxiv:\url{1401.0300}. \bibitem{ref:LiebeckOBrienShalevTiep10} M.~W. Liebeck, E.~A. O'Brien, A.~Shalev and P.~H. Tiep, \emph{The {O}re conjecture}, J. Eur. Math. Soc. \textbf{12} (2010), 939--1008. \bibitem{ref:LiebeckSaxl91} M.~W. Liebeck and J.~Saxl, \emph{Minimal degrees of primitive permutation groups, with an application to monodromy groups of covers of {R}iemann surfaces}, Proc. Lond. Math. Soc. \textbf{63} (1991), 266--314. \bibitem{ref:LiebeckShalev95} M.~W. Liebeck and A.~Shalev, \emph{The probability of generating a finite simple group}, Geom. Dedicata \textbf{56} (1995), 103--113.
\bibitem{ref:LiebeckShalev96Ann} M.~W. Liebeck and A.~Shalev, \emph{Probabilistic methods, and the $(2,3)$-generation problem}, Ann. of Math. \textbf{144} (1996), 77--125. \bibitem{ref:LiebeckShalev96JAlg} M.~W. Liebeck and A.~Shalev, \emph{Simple groups, probabilistic methods, and a conjecture of {K}antor and {L}ubotzky}, J. Algebra \textbf{184} (1996), 31--57. \bibitem{ref:LiebeckShalev99} M.~W. Liebeck and A.~Shalev, \emph{Simple groups, permutation groups, and probability}, J. Amer. Math. Soc. \textbf{12} (1999), 497--520. \bibitem{ref:LubeckMalle99} F.~L{\"u}beck and G.~Malle, \emph{$(2,3)$-generation of exceptional groups}, J. Lond. Math. Soc. \textbf{59} (1999), 109--122. \bibitem{ref:Lubotzky14} A.~Lubotzky, \emph{Images of word maps in finite simple groups}, Glasg. Math. J. \textbf{56} (2014), 465--469. \bibitem{ref:LucchiniMaroti09} A.~Lucchini and A.~Mar\'oti, \emph{On the clique number of the generating graph of a finite group}, Proc. Amer. Math. Soc. \textbf{137} (2009), 3207--3217. \bibitem{ref:LucchiniMaroti09Ischia} A.~Lucchini and A.~Mar\'oti, \emph{Some results and questions related to the generating graph of a finite group}, in \emph{Ischia Group Theory 2008}, World Scientific Publishing, 2009, 183--208. \bibitem{ref:Mason77} D.~R. Mason, \emph{On the 2-generation of certain finitely presented infinite simple groups}, J. Lond. Math. Soc. \textbf{16} (1977), 229--231. \bibitem{ref:Olshanskii80} A.~Y. Ol'shanskii, \emph{An infinite group with subgroups of prime orders}, Izv. Akad. Nauk SSSR Ser. Mat. \textbf{44} (1980), 309--321. \bibitem{ref:OsinThom13} D.~Osin and A.~Thom, \emph{Normal generation and $\ell^2$-{B}etti numbers of groups}, Math. Ann. \textbf{355} (2013), 1331--1347.
\bibitem{ref:Pak01} I.~Pak, \emph{What do we know about the product replacement algorithm?}, in \emph{Groups and computation, III (Columbus, OH, 1999)}, Ohio State Univ. Math. Res. Inst. Publ., vol.~8, de Gruyter, 2001, 301--347. \bibitem{ref:Piccard39} S.~Piccard, \emph{Sur les bases du groupe sym\'etrique et du groupe alternant}, Math. Ann. \textbf{116} (1939), 752--767. \bibitem{ref:Quick19} M.~Quick, \emph{Permutation-based presentations for {B}rin's higher-dimensional {T}hompson groups {$nV$}}, J. Aust. Math. Soc. {to appear}. \bibitem{ref:Shintani76} T.~Shintani, \emph{Two remarks on irreducible characters of finite general linear groups}, J. Math. Soc. Japan \textbf{28} (1976), 396--414. \bibitem{ref:Stein98} A.~Stein, \emph{$1\frac{1}{2}$-generation of finite simple groups}, Contrib. Algebra and Geometry \textbf{39} (1998), 349--358. \bibitem{ref:Steinberg62} R.~Steinberg, \emph{Generators for simple groups}, Canadian J. Math. \textbf{14} (1962), 277--283. \bibitem{ref:Thompson68} J.~G. Thompson, \emph{Nonsolvable groups all of whose local subgroups are solvable}, Bull. Amer. Math. Soc. \textbf{74} (1968), 383--437. \bibitem{ref:Thompson65} R.~J. Thompson, widely circulated handwritten notes (1965), 1--11. \bibitem{ref:Thompson80} R.~J. Thompson, \emph{Embeddings into finitely generated simple groups which preserve the word problem}, in \emph{Word problems, II (Conf. on Decision Problems in Algebra, Oxford, 1976)}, North-Holland, 1980, 401--441. \bibitem{ref:Weigel92} T.~S. Weigel, \emph{Generation of exceptional groups of {L}ie-type}, Geom. Dedicata \textbf{41} (1992), 63--87. \bibitem{ref:Wielandt64} H.~Wielandt, \emph{Finite Permutation Groups}, Academic Press, 1964. \bibitem{ref:Woldar07} A.~Woldar, \emph{The exact spread of the {M}athieu group {$M_{11}$}}, J. Group Theory \textbf{10} (2007), 167--171.
\end{thebibliography} \end{document}
\begin{document} \title[Nonhyperbolic ergodic measures: positive entropy and full support]{Robust existence of nonhyperbolic ergodic measures with positive entropy and full support} \date{\today} \author[Ch.~Bonatti]{Christian Bonatti} \address{Institut de Math\'ematiques de Bourgogne} \email{[email protected]} \author[L.~J.~D\'\i az]{Lorenzo J.~D\'\i az} \address{Departamento de Matem\'atica, Pontif\'{\i}cia Universidade Cat\'olica do Rio de Janeiro} \email{[email protected]} \author[D.~Kwietniak]{Dominik Kwietniak} \address{Faculty of Mathematics and Computer Science, Jagiellonian University in Krak\'ow} \urladdr{http://www.im.uj.edu.pl/DominikKwietniak/} \email{[email protected]} \begin{abstract} We prove that for some manifolds $M$ the set of robustly transitive partially hyperbolic diffeomorphisms of $M$ with one-dimensional nonhyperbolic centre direction contains a $C^1$-open and dense subset of diffeomorphisms with nonhyperbolic measures which are ergodic, fully supported and have positive entropy. To do so, we formulate abstract conditions sufficient for the construction of an ergodic, fully supported measure $\mu$ which has positive entropy and is such that for a continuous function $\varphi\colon X\to\mathbb{R}$ the integral $\int\varphi\,d\mu$ vanishes. The criterion is an extended version of the \emph{control at any scale with a long and sparse tail} technique developed in previous works. \end{abstract} \begin{thanks}{This research has been supported [in part] by CAPES - Ci\^encia sem fronteiras, CNE-Faperj, and CNPq-grants (Brazil); the research of DK was supported by the National Science Centre (NCN) grant 2013/08/A/ST1/00275, and his stay in Rio de Janeiro, where he joined this project, was possible thanks to the CAPES/Brazil grant no. 88881.064927/2014-01. The authors acknowledge the hospitality of PUC-Rio and IM-UFRJ.
LJD thanks the hospitality and support of ICERM - Brown University during the thematic semester ``Fractal Geometry, Hyperbolic Dynamics, and Thermodynamical Formalism''.} \end{thanks} \keywords{Birkhoff average, entropy, ergodic measure, Lyapunov exponent, nonhyperbolic measure, partial hyperbolicity, transitivity} \subjclass[2000]{ 37D25, 37D35, 37D30, 28D99 } \maketitle \section{Introduction} Our motivation is the following problem: \emph{To what extent does ergodic theory detect the nonhyperbolicity of a dynamical system? Do nonhyperbolic dynamical systems always have a nonhyperbolic ergodic measure?} These questions originated with a construction presented in \cite{GIKN} and inspired many papers exploring the properties of nonhyperbolic ergodic measures. We emphasise that the answer to the second question can be ``no'' in general: there are examples of nonhyperbolic diffeomorphisms all of whose ergodic measures are hyperbolic (even with all Lyapunov exponents uniformly bounded away from $0$), see \cite{BBS}. However, these examples are ``fragile'': ergodic nonhyperbolic measures reappear after an arbitrarily small perturbation of these diffeomorphisms. Thanks to the works of Abraham and Smale \cite{AbSm} and Newhouse \cite{New}, it has been known since the late sixties that there exist open sets of nonhyperbolic systems. On the other hand, the first examples of open sets of diffeomorphisms with nonhyperbolic ergodic measures appeared only recently, in 2005 (see \cite{KN}). The construction in \cite{KN} uses the \emph{method of periodic approximations} introduced in \cite{GIKN} (we will outline this method later). Note that this technique works in the specific setting of partially hyperbolic diffeomorphisms of the three-dimensional torus $\mathbb{T}^3$ with compact centre leaves.
The existence of nonhyperbolic ergodic measures for some nonhyperbolic systems immediately raises further questions, which we address here: \emph{Which nonhyperbolic dynamical systems have ergodic nonhyperbolic measures? What is the support of these measures? What is their entropy? How many zero Lyapunov exponents do they have? How about other ergodic-theoretic properties of these measures?} The following theorem is a simplified version of our main result. For the full statement see Theorems \ref{t.openanddense} and \ref{t.average} below. \begin{theorem*}\label{thm:one} For every $n\ge 3$ there are a closed manifold $M$ of dimension $n$, a nonempty open set $\mathcal{U}$ in the space $\operatorname{Diff}^1(M)$ of $C^1$-diffeomorphisms defined on $M$, and a constant $C>0$ such that every $f\in \mathcal{U}$ has a nonhyperbolic invariant measure $\mu$ (i.e., with some zero Lyapunov exponent) satisfying: \begin{enumerate} \item \label{tp1} $\mu$ is ergodic, \item \label{tp2} the support of $\mu$ is the whole manifold $M$, \item \label{tp3} the entropy of $\mu$ is larger than $C$. \end{enumerate} \end{theorem*} The Theorem applies to any manifold admitting a robustly transitive diffeomorphism with a partially hyperbolic splitting with one-dimensional centre. In particular, by \cite{BD-robtran} it applies to \begin{itemize} \item the $n$-dimensional torus ($n\ge 3$), \item (more generally) any manifold carrying a transitive Anosov flow. \end{itemize} The nonhyperbolic measure $\mu$ in the Theorem above, apart from being ergodic, fully supported and having positive entropy, is also \emph{robust}. Namely, our proof shows that these properties appear \emph{robustly} in the space $\operatorname{Diff}^1(M)$ (i.e. we provide an open set of diffeomorphisms each of which has an ergodic nonhyperbolic fully supported measure of positive entropy).
Several previous works established the existence of nonhyperbolic measures in settings similar to that of the Theorem above, and these measures have some (but not all) of the properties listed above. These references present two approaches to the construction of ergodic nonhyperbolic measures: the first is the already mentioned \emph{method of periodic approximations} \cite{GIKN}; the second is the method of generating a nonhyperbolic measure by a \emph{controlled point} \cite{BBD:16,BDB:}. Note that \cite{BZ} combines these two methods. Both schemes are detailed in Section \ref{ss.compar}. For a discussion of previous results on nonhyperbolic ergodic measures see Section \ref{ss.history}, or \cite{D-ICM} for a more comprehensive survey of the topic. Here we further extend the method of construction of a controlled point presented in \cite{BDB:} and complement it with an abstract result in ergodic theory (see the theorem about entropy control and the discussion in Section \ref{ss.control}). This allows us to address simultaneously all four properties listed above. \subsection{Precise results for robustly transitive diffeomorphisms}\label{ss.precise} In what follows $M$ denotes a closed compact manifold, $\operatorname{Diff}^1(M)$ is the space of $C^1$-diffeo\-morphisms of $M$ endowed with the usual $C^1$-topology, and $f\in \operatorname{Diff}^1(M)$. For $\Lambda\subset M$ we write $\cM_f(\Lambda)$ for the set of all $f$-invariant measures with support contained in $\Lambda$. A $Df$-invariant splitting $T_\Lambda M=E\oplus F$ is \emph{dominated} if there are constants $C>0$ and $\lambda<1$ such that $\| Df^{n}|_{E_{x}}\|\cdot \| Df^{-n}|_{F_{f^{n}(x)}}\| < C \lambda^n$ for every $x\in\Lambda$ and $n\in \mathbb{N}$.
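As a toy illustration of domination (our own example, not one taken from this paper), consider a linear skew product; the choices below are hypothetical and made only for this sketch.

```latex
% Toy example (ours): on \mathbb{T}^3 = \mathbb{T}^2 \times S^1 let
%   f(x,\theta) = (Ax, \theta + \alpha),
% with A a hyperbolic automorphism of \mathbb{T}^2 with eigenvalues
% 0 < \lambda_s < 1 < \lambda_u. Take E = E^{s}_A and F = E^{u}_A \oplus TS^1.
% Since Df is the constant linear map (A, \mathrm{id}),
\| Df^{n}|_{E_x} \| \cdot \| Df^{-n}|_{F_{f^{n}(x)}} \|
   = \lambda_s^{\,n} \cdot \max\{\lambda_u^{-n}, 1\}
   = \lambda_s^{\,n} < C\lambda^{n}
\quad\text{with } C = 2,\ \lambda = \lambda_s < 1,
% so T\mathbb{T}^3 = E \oplus F is dominated. This f is moreover partially
% hyperbolic with one-dimensional centre TS^1, on which Df is an isometry;
% f itself is not robustly transitive, but the robustly transitive examples
% discussed below arise as perturbations of such maps.
```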
We say that a compact $f$-invariant set $\Lambda$ is \emph{partially hyperbolic with one-di\-men\-sio\-nal center} if there is a $Df$-invariant splitting with three nontrivial bundles \begin{equation}\label{e.ph} T_\Lambda M = E^{\mathrm{ss}} \oplus E^{\mathrm{c}} \oplus E^{\mathrm{uu}} \end{equation} such that $E^{\mathrm{ss}}$ is uniformly contracting and $E^{\mathrm{uu}}$ is uniformly expanding, $\operatorname{dim} E^{\mathrm{c}}=1$, and the $Df$-invariant splittings $E^{\mathrm{cs}} \oplus E^{\mathrm{uu}}$ and $E^{\mathrm{ss}} \oplus E^{\mathrm{cu}}$ are both dominated, where $E^{\mathrm{cs}}= E^{\mathrm{ss}} \oplus E^{\mathrm{c}}$ and $E^{\mathrm{cu}}= E^{\mathrm{c}} \oplus E^{\mathrm{uu}}$. The bundles $E^{\mathrm{uu}}$ and $E^{\mathrm{ss}}$ are called \emph{strong unstable} and \emph{strong stable}, respectively, and $E^{\mathrm{c}}$ is the \emph{center bundle}. We slightly abuse terminology and say that the splitting given by \eqref{e.ph} is also \emph{dominated}. If $\Lambda$ is a partially hyperbolic set with one-dimensional center, then the bundles $E^{\mathrm{ss}}, E^{\mathrm{c}}, E^{\mathrm{uu}}$ depend continuously on the point $x\in \Lambda$. Hence the \emph{logarithm of the center derivative} \begin{equation}\label{e.logmap} \mathrm{J}_f^{\mathrm{c}}(x) \eqdef \log \| Df_x |_{E^{\mathrm{c}}(x)}\| \end{equation} is a continuous map. If, in addition, $\mu\in \cM_f(\Lambda)$ is ergodic, then the Oseledets Theorem implies that there is a number $\chi^{\mathrm{c}}(\mu)$, called the \emph{central Lyapunov exponent of $\mu$}, such that for $\mu$-almost every point $x\in \Lambda$ it holds that $$ \lim_{n \to \pm \infty} \frac{\log |Df^n_x (v)|}{n} = \int \mathrm{J}_f^{\mathrm{c}} \, d\mu=\chi^{\mathrm{c}}(\mu), \qquad \mbox{for every $v\in E^{\mathrm{c}}(x) \setminus \{0\}$}.
$$ In particular, the function $\mu\mapsto\chi^{\mathrm{c}}(\mu)$ is continuous with respect to the weak$^*$ topology on $\cM_f(\Lambda)$. A diffeomorphism $f\in \operatorname{Diff}^1(M)$ is \emph{transitive} if it has a dense orbit. The diffeomorphism $f$ is \emph{$C^1$-robustly transitive} if it belongs to the $C^1$-interior of the set of transitive diffeomorphisms (i.e., all $C^1$-nearby diffeomorphisms are also transitive). We denote by $\cR\cT(M)$ the $C^1$-open set of diffeomorphisms $f\in\operatorname{Diff}^1(M)$ such that: \begin{itemize} \item $f$ is robustly transitive, \item $f$ has a pair of hyperbolic periodic points with different indices, \item $M$ is a partially hyperbolic set for $f$ with one-dimensional center. \end{itemize} These assumptions imply that $\operatorname{dim}(M)\ge 3$, because in dimension two robustly transitive diffeomorphisms are always hyperbolic, see \cite{PuSa}. The set $\cR\cT(M)$ contains well-studied and interesting examples of diffeomorphisms. Among them there are: various types of skew product diffeomorphisms, see \cite{BD-robtran,Sh}; derived-from-Anosov diffeomorphisms, see \cite{Mda}; and perturbations of time-one maps of transitive Anosov flows, see \cite{BD-robtran}. Our main theorem provides a measure $\mu$ which is ergodic \emph{and} has full support \emph{and} positive entropy, \emph{and} is such that the integral $\int \mathrm{J}_f^{\mathrm{c}}\,d\mu$ has a prescribed value ($=0$). \begin{thm} \label{t.openanddense} There is a $C^1$-open and dense subset $\cZ(M)$ of $\cR\cT(M)$ such that every $f\in \cZ(M)$ has an ergodic nonhyperbolic measure $\mu_f$ with positive entropy and full support. Furthermore, for each $f\in \cZ(M)$ there is a neighbourhood $\cV_f$ of $f$ and a constant $c_f>0$ such that for every $g\in\cV_f\cap\cZ(M)$ the measure $\mu_g$ can be taken with $h(\mu_g)\ge c_f$.
\end{thm} In what follows, given a periodic point $p$ of a diffeomorphism $f$ we denote by $\mu_{\cO(p)}$ the unique $f$-invariant measure supported on the orbit ${\cO(p)}$ of $p$. The next result is a reformulation and an extension of Corollary 6 in \cite{BDB:} to our context: \begin{thm}\label{t.average} Consider an open subset $\cU$ of $\cR\cT(M)$ such that there is a continuous map defined on $\cU$ \[ f\mapsto (p_f,q_f) \] that associates to each $f\in \cU$ a pair of hyperbolic periodic points with $\cO(p_f)\cap\cO(q_f)=\emptyset$. Let $\varphi\colon M\to\mathbb{R}$ be a continuous function satisfying $$ \int \varphi\, d\mu_{\cO(p_f)}<0<\int \varphi \,d\mu_{\cO(q_f)}, \quad \mbox{for every $f\in \cU$}. $$ Then there is a $C^1$-open and dense subset $\cV$ of $\cU$ such that every $f\in\cV$ has an ergodic measure $\mu_f$ whose support is $M$, which satisfies $\int \varphi\, d\mu_f=0$ and has positive entropy. Furthermore, for each $f\in \cV$ there is a neighbourhood $\cV_f$ of $f$ and a constant $c_f>0$ such that for every $g\in\cV_f\cap\cV$ we have $h(\mu_g)\ge c_f$. \end{thm} \begin{rem} Note that in Theorem~\ref{t.average} there is no condition on the indices of the periodic points. Observe also that Theorem~\ref{t.openanddense} is not a particular case of Theorem~\ref{t.average}: in Theorem~\ref{t.openanddense} there is no fixed map $\varphi$; the map considered is the logarithm of the center derivative $\mathrm{J}_f^{\mathrm{c}}$ and therefore it changes with $f$ (i.e., there is a family of maps $\varphi_f$ depending on $f$). \end{rem} \begin{rem} We could state a version of Theorem \ref{t.openanddense} replacing the logarithm of the center derivative $\mathrm{J}_f^{\mathrm{c}}$ by any continuous map $\varphi\colon M\to \mathbb{R}$ and picking any value $t$ satisfying \[ \inf\left\{ \int \varphi\, d\mu\,: \, \mu \in \cM_f(M)\right\}<\, t\, < \sup\left\{ \int \varphi\, d\mu\,:\, \mu \in \cM_f(M)\right\}.
\] \end{rem} \begin{rem} Our methods imply that the sentence ``{\emph{Moreover, the measure $\mu$ can be taken with positive entropy.}}'' can be added to Theorems 5, 7, and 8 and Proposition 4b in \cite{BDB:}. Furthermore, the entropy will have a locally uniform lower bound. The details are left to the reader. \end{rem} \subsection{Previous results on nonhyperbolic ergodic measures}\label{ss.history} By \cite{CCGWY} nonhyperbolic homoclinic classes of $C^1$-generic diffeomorphisms always support nonhyperbolic ergodic measures. The proof uses the periodic approximation method and extends \cite{DG}. In some settings the results of \cite{BDG} imply that these measures have full support in the homoclinic class. Specific examples of open sets of diffeomorphisms of the three-torus with nonhyperbolic ergodic measures were first obtained in \cite{KN} using the periodic approximation method. In \cite{BBD:16} there are general results guaranteeing, for an open and dense subset of $C^1$-robustly transitive diffeomorphisms, the existence of nonhyperbolic ergodic measures with positive entropy. The latter paper uses the controlled point method. All the results above provide measures with only one zero Lyapunov exponent. In \cite{WZ} yet another adaptation of the method of periodic approximation yields ergodic measures with multiple zero Lyapunov exponents for some $C^1$-generic diffeomorphisms (see also \cite{BBD:14} for results about skew-products). Some limitations of these previous constructions are already known. By \cite{KL} any measure obtained by the method of periodic approximations has zero entropy. Hence all nonhyperbolic measures constructed in \cite{BDG,BZ,CCGWY,DG,GIKN,KN,WZ} necessarily have zero entropy. On the other hand, the measures produced in \cite{BBD:16} cannot have full support, as their definition immediately implies that they are supported on a Cantor-like subset of the ambient manifold.
\subsection{Comparison of constructions of nonhyperbolic ergodic measures} \label{ss.compar} So far there are two ways to construct nonhyperbolic ergodic measures: these measures are either ``approximated by periodic measures'' as in \cite{GIKN} or ``generated by a controlled point'' as in \cite{BBD:16,BDB:}. Both methods apply to any partially hyperbolic diffeomorphism as in Subsection~\ref{ss.precise} whose domain contains two special subsets: a centre contracting region (where the central direction is contracted) and a centre expanding region (where the central direction is expanded). Furthermore, the orbits can travel from one of these regions to the other in a controlled way. For example, both methods work for a diffeomorphism with a transitive set (which is persistent under perturbations if one wants to obtain a robust result) containing two ``heteroclinically related blenders'' of different indices and a region where the dynamics is partially hyperbolic with one dimensional centre. We discuss blenders in Section \ref{s.robustly}. Let us briefly describe these two approaches. For simplicity, we will restrict ourselves to the partially hyperbolic setting as above. As mentioned above, the ``approximation by periodic orbits'' construction from \cite{GIKN} builds a sequence of periodic orbits $\Gamma_i$ and periodic measures $\mu_i$ supported on these orbits such that the sequence of central Lyapunov exponents $\lambda_i$ of these measures tends to zero as $i$ approaches infinity. Since in our partially hyperbolic setting the central Lyapunov exponent varies continuously with the measure, it vanishes for every weak$^*$ accumulation point $\mu$ of the sequence of measures $\mu_i$, so we get a nonhyperbolic measure. The difficulty is to prove that every limit measure is ergodic, as ergodicity is not a closed property in the weak$^*$ topology.
The arguments in \cite{GIKN} contain a general criterion for the weak$^*$ convergence and ergodicity of the limit of a sequence of periodic orbits $\Gamma_i$. It requires that for some summable sequence $(\gamma_i)$ of positive reals most points on the orbit $\Gamma_{i+1}$ shadow, $\gamma_i$-closely, a point on the preceding orbit $\Gamma_i$ for $\card{\Gamma_i}$ iterates. If the proportion of these shadowing points among all points of $\Gamma_{i+1}$ tends to $1$ as $i\to +\infty$ sufficiently fast, then the measures $\mu_i$ weak$^*$ converge to an ergodic measure. The ``non-shadowing'' points are used to decrease the absolute value of the central Lyapunov exponent over $\Gamma_i$ and to spread the support of $\Gamma_i$ in the ambient space. This forces the limit measure to have zero central Lyapunov exponent and full support, as in this case the central Lyapunov exponent depends continuously on the measure (see below). It turns out that the repetitive nature of the method of periodic approximation forces the resulting measure to be close (Kakutani equivalent\footnote{Two measure preserving systems are \emph{Kakutani equivalent} if they have a common derivative. A \emph{derivative} of a measure preserving system is another measure preserving system isomorphic to a system induced by the first one. See Nadkarni's book \cite{Nadkarni}, Chapter 7.}) to a group rotation. Actually, it is proved in \cite{KL} that the periodic measures $\mu_i$ described above converge to $\mu$ in a much stronger sense than weak$^*$ convergence. This new notion of convergence is coined \emph{Feldman--Katok convergence} and it implies that all measures obtained following \cite{GIKN} (thus the measures from \cite{BDG,BZ,CCGWY,DG,KN,WZ} obtained by this method) are loosely Kronecker measures with zero entropy. For more details we refer to \cite{KL}.
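As a toy numerical illustration of this balancing (our own sketch, not taken from \cite{GIKN}; the recursion, the expansion rate \texttt{c}, and the halving rule are illustrative assumptions), one can model the central Lyapunov exponent of $\Gamma_{i+1}$ as a weighted average of the exponent inherited from the shadowed part of $\Gamma_i$ and a fixed positive rate \texttt{c} gained on the non-shadowing part:

```python
# Toy model of the GIKN scheme: lam_{i+1} = p_i*lam_i + (1-p_i)*c, where
# p_i is the proportion of points of Gamma_{i+1} shadowing Gamma_i and
# c > 0 is the centre exponent gained on the non-shadowing part.
def gikn_exponents(lam0=-1.0, c=1.0, steps=30):
    lams, props = [lam0], []
    for _ in range(steps):
        lam = lams[-1]
        target = lam / 2.0                # halve the exponent at each step
        p = (target - c) / (lam - c)      # solve target = p*lam + (1-p)*c for p
        props.append(p)
        lams.append(p * lam + (1 - p) * c)
    return lams, props

lams, props = gikn_exponents()
print(lams[-1])    # central exponent of the limit: negative, tending to 0
print(min(props))  # shadowing proportions stay close to 1 and increase
```

The shadowing proportions $p_i$ increase to $1$ while the exponents are driven monotonically to $0$ from below, mimicking how the non-shadowing points decrease the absolute value of the central exponent without destroying ergodicity of the limit.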
The authors of \cite{BBD:16} devised a new method for constructing nonhyperbolic ergodic measures using blenders and flip-flop configurations (we review the former in Section \ref{s.robustly} and the latter in Section~\ref{s.flipflop}). Applying these tools one defines a point $x$ such that the Birkhoff averages of the central derivative along segments of its forward orbit go to zero uniformly. In slightly more precise terms, there are sequences of positive reals and positive integers, denoted $\varepsilon_n$ and $T_n$, with $\varepsilon_n\to 0^+$ and $T_n\to \infty$ as $n\to \infty$, such that the average of the central derivative along a segment of the $x$-orbit $\{f^t(x), \dots,f^{t+T_n}(x)\}$ is less than $\varepsilon_n$ for any $t\geq 0$. We say that such an $x$ is \emph{controlled at any scale}. Then the $\omega$-limit set $\omega(x)$ of $x$ is an invariant compact set such that for all measures supported on $\omega(x)$ the centre Lyapunov exponent vanishes, see \cite[Lemma 2.2]{BBD:16}. Under some mild assumptions one can find a point $x$ such that the compact invariant set $\omega(x)$ is also partially hyperbolic and $f|_{\omega(x)}$ has the full shift over a finite alphabet as a factor, thus it has positive topological entropy. To achieve this one finds a pair of disjoint compact subsets $K_0$ and $K_1$ of $M$ such that for some $k>0$ and some $\omega\in\{0,1\}^{\mathbb{Z}}$ which is generic for the Bernoulli measure $\xi_{1/2}$ we have $f^{jk}(x)\in K_{\omega(j)}$ for every $j\in\mathbb{Z}$. By the variational principle for topological entropy \cite{Wal:82} the set $\omega(x)$ supports an ergodic nonhyperbolic measure with positive entropy. Unfortunately, the existence of a semi-conjugacy from $\omega(x)$ to a Cantor set carrying the full shift forces $\omega(x)$ to be a proper subset of $M$, thus these measures cannot be supported on the whole manifold. In \cite{BDB:} the procedure from \cite{BBD:16} was modified.
More precisely, in \cite{BDB:} the control over the orbit of a point $x$ is relaxed: one splits the orbit of $x$ into a ``regular part'' and a ``tail'', and one needs to control the averages over the orbit segment $\{f^t(x), \dots,f^{t+T_n}(x)\}$ of length $T_n$ only for $t$ belonging to the regular part, which is a set of positive density in $\mathbb{N}$, while at the same time the iterates corresponding to the tail part are dense in $X$. Under quite restrictive conditions on the tail (coined \emph{longness} and \emph{sparseness}), Theorem~1 from \cite{BDB:} claims that if $x$ is controlled at any scale with a long sparse tail then any measure $\nu$ generated\footnote{A measure $\mu$ is generated by a point $x$ if $\mu$ is a weak$^*$ limit point of the sequence of measures $\frac1n\sum_{i=0}^{n-1}\delta_{f^i(x)}$, where $\delta_{f^i(x)}$ is the Dirac measure at the point $f^i(x)\in X$.} by $x$ has vanishing central Lyapunov exponent and full support. The underlying topological mechanism providing points whose orbits are controlled at any scale with a long sparse tail is the \emph{flip-flop family with sojourns in $X$}. Since there is no longer a semi-conjugacy to a full shift, a different method has to be applied to establish positivity of the entropy. \subsection{Control of entropy}\label{ss.control} In this paper we combine the two methods above. We pick a pair of disjoint compact subsets $K_0$ and $K_1$ of $M$ and we divide the orbit into the regular and tail parts as in \cite{BDB:}. We assume that the controlled point visits $K_0$ and $K_1$ following the same pattern as some point generic for the Bernoulli measure $\xi_{1/2}$ (as in \cite{BBD:16}), but we require that this happens \emph{only} for iterates in the regular part of the orbit. We also assume that the tail is even more structured: apart from being long and sparse, the tail intersected with the nonnegative integers is a \emph{rational subset of $\mathbb{N}$}.
A set $A\subset\mathbb{N}$ is \emph{rational} if it can be approximated with arbitrary precision by sets which are finite unions of arithmetic progressions (sets of the form $a+b\mathbb{N}$, where $a,b\in\mathbb{N}$, with $b\neq 0$). Here, the ``precision'' is measured in terms of the upper asymptotic density $\bar{d}$ of the symmetric difference of $A$ and a finite union of arithmetic progressions. To get positive entropy, we show that the measure $\mu$ is an extension of a loosely Bernoulli system with positive entropy (a measure preserving system with a subset of positive measure such that the induced system is a Bernoulli process). More precisely, we have the following general criterion for the positivity of the entropy of any measure generated by a point (actually, we prove the even more general Theorem \ref{thm:mainbis}, but for the full statement we need more notation, see Section \ref{s.measurespositive}). \begin{theorem*}[Control of entropy]\label{thm:main} Let $(X, \rho) $ be a compact metric space and $f\colon X\to X$ be a continuous map. Assume that $\mathbf{K}=(K_0,K_1)$ is a pair of disjoint compact subsets of $X$ and $J\subset \mathbb{N}$ is a rational set. If $\bar x\in X$ is such that $f^j(\bar x)\in K_{z(j)}$ for every $j\in J$, where $z$ is a generic point for the Bernoulli measure $\xi_{1/2}$, then any measure $\mu$ generated by $\bar x$ has entropy at least $d(J)\cdot \log 2$, where $d(J)$ stands for the asymptotic density of $J$. \end{theorem*} We apply the above criterion to the controlled point and the rational set $J$ which is the complement (in $\mathbb{N}$) of the intersection of the rational tail with $\mathbb{N}$. This implies that $J$ is also a rational set (Remark~\ref{r.complement}) and the asymptotic density of $J$ exists and satisfies $0<d(J)<1$.
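For a concrete instance of the entropy bound in the theorem (a hypothetical example of ours: the set $J$ and the modulus $3$ are illustrative choices, not taken from the construction), take the rational set $J=\mathbb{N}\setminus 3\mathbb{N}$. Its asymptotic density is $d(J)=2/3$, so any measure generated by a point as in the theorem has entropy at least $\tfrac{2}{3}\log 2$. A minimal numerical check of the density computation:

```python
from math import log

# J is the complement in N of the arithmetic progression 3N,
# a rational set with asymptotic density d(J) = 2/3.
def empirical_density(indicator, N):
    # density of the set over the prefix {0, 1, ..., N-1}
    return sum(indicator(n) for n in range(N)) / N

J = lambda n: n % 3 != 0
d_J = empirical_density(J, 300_000)
entropy_bound = d_J * log(2)   # the lower bound d(J) * log 2 of the theorem
print(d_J, entropy_bound)      # ~0.6667 and ~0.4621
```

Since $300\,000$ is a multiple of $3$, the empirical density over this prefix equals $2/3$ exactly.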
The underlying topological mechanism we use to find the controlled point with the required behavior is provided by a new object we call the \emph{double flip-flop family for $f^N$ with $f$-sojourns in $X$}, where $N>0$ is some integer. This mechanism is a variation of the notion of the flip-flop family with sojourns in $X$; indeed, we will see that a flip-flop family with sojourns in $X$ yields a double flip-flop family for some $f^N$ with $f$-sojourns in $X$, see Proposition~\ref{p.yieldsdouble}. \subsection*{Organisation of the paper} This paper is organised as follows. In Section~\ref{s.measurespositive}, we prove Theorem~\ref{thm:main}. Section~\ref{s.scalesandtails} is devoted to the construction of rational sparsely long tails. In Section~\ref{s.flipflop}, we study different types of flip-flop families and prove that they generate ergodic measures with full support, positive entropy, and appropriate averages. Finally, in Section~\ref{s.robustly}, devoted to robustly transitive diffeomorphisms, we complete the proofs of Theorems~\ref{t.openanddense} and \ref{t.average}. \section{Measures with positive entropy. Proof of Theorem~\ref{thm:main}} \label{s.measurespositive} The goal of this section is to prove Theorem~\ref{thm:main}. Throughout what follows, $X$ is a compact metric space, $\rho$ is a metric for $X$, and $f\colon X\to X$ is a continuous map (not necessarily a homeomorphism). First we introduce some notation. Given a continuous function $\varphi\colon X\to \mathbb{R}$, $n>0$, and $x\in X$ we denote by $\varphi_n(x)$ the Birkhoff average of $\varphi$ over the orbit segment $x,f(x),\ldots,f^{n-1}(x)$, that is, \begin{equation}\label{e.averages} \varphi_n(x)\eqdef \frac 1n\sum_{i=0}^{n-1}\varphi\circ f^i(x). \end{equation} \subsection*{Generated measures, generic/ergodic points} Let $\M_f(X)$ denote the set of $f$-invariant Borel probability measures on $X$.
Given $x\in X$ and $n\ge 1$ we set \begin{equation}\label{e.empiric} \mu_n(x,f)\eqdef \frac1n\sum_{i=0}^{n-1}\delta_{f^i(x)}, \end{equation} where $\delta_{f^i(x)}$ is the Dirac measure at the point $f^i(x)\in X$. We say that a point $x\in X$ \emph{generates $\mu\in\M_f(X)$ along $(n_k)_{k\in\mathbb{N}}$} if $(n_k)_{k\in\mathbb{N}}$ is a strictly increasing sequence of integers such that $\lim_{k\to\infty}\mu_{n_k}(x,f)=\mu$ in the weak$^*$ topology on the space of probability measures on $X$. If $\mu$ is a measure generated by a point $x\in X$, then $\mu$ is invariant, that is, $\mu\in\cM_f(X)$. If there is no need to specify $(n_k)$ we just say that \emph{$x$ generates $\mu$}. We write $V(x)$ for the set of all $f$-invariant measures generated by $x$. We say that $x$ is a \emph{generic point} for $\mu$ if $\mu$ is the unique measure generated by $x$, i.e., $V(x)=\{\mu\}$. A point is an \emph{ergodic point} if it is a generic point for an ergodic measure. We write $h_\mu(f)$ for the \emph{entropy} of $f$ with respect to $\mu$, see \cite{Wal:82} for its definition and basic properties. \subsection*{Sets and their densities} The \emph{upper asymptotic density} of a set $J\subset \mathbb{Z}$ ($J\subset\mathbb{N}$) is the number \[ \bar d(J)=\limsup_{N\to\infty}\frac{1}{N}\card{J\cap [0,N-1]}. \] Similarly, we define the \emph{lower asymptotic density} $\underline{d}(J)$ of $J$. If $\bar d(J)=\underline{d}(J)$, then we say that $J$ has \emph{asymptotic density} $d(J)=\underline{d}(J)=\bar d (J)$. Note that $\bar d$, $\underline{d}$, and $d$ (if defined) are determined only by $J\cap\mathbb{N}$. \subsection*{Symbolic dynamics} Let $\Omega_M=\{0,1,\ldots,M-1\}^\mathbb{Z}$ be the full shift over $\cA=\{0,1,\ldots,M-1\}$. For $\alpha\in\cA$ we write $[\alpha]_0$ for the \emph{cylinder set} defined as $\{\omega\in\Omega_M:\omega_0=\alpha\}$.
Similarly, given a word $u=\alpha_1\dots \alpha_k\in\cA^k$ the set $[u]_0$ is the cylinder defined as $\{\omega\in\Omega_M:\omega_0=\alpha_1, \dots, \omega_{k-1}=\alpha_k\}$. We will often identify a set $A\subset\mathbb{Z}$ ($A\subset\mathbb{N}$) with its characteristic function $\chi_A\in\Omega_2$, that is, $(\chi_A)_i=1$ if and only if $i\in A$. By $\sigma$ we denote the shift homeomorphism on $\Omega_M$ given by $(\sigma(\omega))_i=\omega_{i+1}$ for each $i\in \mathbb{Z}$. For more details on symbolic dynamics we refer the reader to \cite{LM}. \subsection*{Completely deterministic sequences} A point $x\in\Omega_2$ is \emph{completely deterministic} (or \emph{deterministic} for short) if every measure generated by $x$ has zero entropy, that is, $h_\mu(\sigma)=0$ for every $\mu\in V(x)$. A set $J\subset \mathbb{Z}$ is \emph{completely deterministic} if its characteristic function is a completely deterministic point in $\Omega_2$. This notion is due to B.~Weiss, see \cite{Weiss}. \subsection*{Bernoulli measure} The {\emph{Bernoulli measure $\xi_{1/2}$}} is the shift invariant measure on $\Omega_2$ such that for each $N\in\mathbb{N}$ and $u=u_1\ldots u_N\in\{0,1\}^N$ we have $\xi_{1/2}([u]_0)=1/2^N$. \subsection*{Itineraries} Let $\mathbf{K}=(K_0,K_1)$ be a pair of disjoint compact subsets of $X$ and let $J\subset\mathbb{Z}$. We say that $\omega\in\Omega_2$ is the $\mathbf{K}$-itinerary of $x\in X$ over $J$ if $f^{j}(x)\in K_{\omega(j)}$ for each $j\in J$. The next result is the main step in the proof of Theorem~\ref{thm:main}. \begin{thm}[Control of the entropy]\label{thm:mainbis} Let $(X, \rho) $ be a compact metric space and $f\colon X\to X$ be a continuous map. Assume that $\mathbf{K}=(K_0,K_1)$ is a pair of disjoint compact subsets of $X$ and $J\subset \mathbb{N}$ is completely deterministic with $\underline{d}(J)>0$.
If the $\mathbf{K}$-itinerary of $\bar x\in X$ over $J$ is a generic point for $\xi_{1/2}$, then the entropy of any measure $\mu\in V(\bar x)$ satisfies $$ h _{\mu}(f)\ge \underline{d}(J)\cdot \log 2>0. $$ \end{thm} Our proof is based on the following property of completely deterministic sets: \emph{a sequence formed by symbols chosen from a generic point of the Bernoulli measure $\xi_{1/2}$ along a completely deterministic set with positive density is again a generic point for $\xi_{1/2}$}. This is a result of Kamae and Weiss (see \cite{Weiss}), originally formulated in the language of normal numbers and admissible selection rules. \begin{rem} When we were finishing writing this paper, {\L}{\c{a}}cka announced in her PhD thesis \cite{Martha} a version of Theorem \ref{thm:mainbis} with relaxed assumptions on the point $\bar x$ and the set $J$. Her proof is based on properties of the $\bar{f}$-pseudometric discussed in \cite{KL}. \end{rem} \subsection{Proof of Theorem~\ref{thm:mainbis}} Let $K_2=X\setminus (K_0\cup K_1)$. Let $\iota_\mathcal{P}\colon X\to\Omega_3$ be the coding map with respect to the partition $\mathcal{P}=\{K_0,K_1,K_2\}$. In other words, $\iota_\mathcal{P}(x)=y\in\Omega_3$, where \[ y_j=\begin{cases} i, & \mbox{if $j\in\mathbb{N}$ and $f^j(x)\in K_i$}, \\ 0, & \mbox{if $j<0$}. \end{cases} \] Fix $\nu\in V(\bar x)$. We modify $K_0$ and $K_1$, without changing the $\mathbf{K}$-itinerary of $\bar x$ over $J$, so that the topological boundary of $K_i$ for $i=0,1,2$, denoted $\partial K_i$, is $\nu$-null. To this end, we set \[ \tilde{c}=\frac{1}{2}\min\{\rho(x_0,x_1):x_0\in K_0,\,x_1\in K_1\} \] and for $0<c<\tilde{c}$ we define the sets $\partial_c K_i =\{x\in X: \dist (x,K_i)=c\}$ for $i=0,1$.
Note that for each $c$ the set $\partial_c K_0\cup\partial_c K_1$ contains (but need not be equal to) the topological boundaries of the sets $K^c_0$, $K^c_1$, and $K'_2=X\setminus (K^c_0\cup K^c_1)$, where $K^c_i=\{x\in X: \dist (x,K_i)\le c\}$ for $i=0,1$. Consider the family of closed sets $\mathcal{C}=\{\partial_c K_0\cup\partial_c K_1: 0<c<\tilde c\}$. Since the elements of $\mathcal{C}$ are pairwise disjoint, only countably many of them can have positive $\nu$-measure. Fix any $0<c<\tilde{c}$ such that $\nu(\partial_c K_0\cup\partial_c K_1)=0$ and replace $K_i$ by $K^c_i$ for $i=0,1$, and $K_2$ by $K'_2$. Note that this does not change the $\mathbf{K}$-itinerary of $\bar x$ over $J$. Furthermore, the elements of our redefined partition, which we still denote by $\mathcal{P}=\{K_0,K_1,K_2\}$, have $\nu$-null boundaries. Let $(n_k)$ be the sequence of integers along which $\bar x$ generates $\nu$. Then $y\eqdef\iota_\mathcal{P}(\bar x)\in\Omega_3$ generates a shift-invariant measure $\mu$ on $\Omega_3$ which is the push-forward of $\nu$ through the coding map $\iota_\mathcal{P}$. Furthermore, the dynamical entropy of $\nu$ with respect to the partition $\mathcal{P}$ equals $h_\mu(\sigma)$. See \cite[Lemma 2]{KL} for more details. Note that the proof in \cite{KL} is stated for generic points, but it is easily adapted to the measures generated along a sequence as considered here. Now Theorem~\ref{thm:mainbis} follows from the following fact. \begin{cla} \label{t.p.entropy} If $\mu \in \mathcal{M}_\sigma(\Omega_3)$ is a measure generated by $y$, then $h_{\mu}(\sigma) \ge \underline{d}(J)\cdot \log 2$. \end{cla} \begin{proof}[Proof of Claim \ref{t.p.entropy}] Let $\mu\in V(y)$ and let $(n_k)_{k\in \mathbb{N}}$ be a strictly increasing sequence such that $\mu_{n_k}(y,\sigma)\to\mu$ as $k\to\infty$. Let $\chi_J\in\Omega_2$ be the characteristic sequence of $J$.
Let $\mu_J$ be any measure generated by $\chi_J$ along $(n_k)_{k\in \mathbb{N}}$, that is, any limit point of $(\mu_{n_k}(\chi_J,\sigma))_{k\in\mathbb{N}}$. Passing to a subsequence (if necessary) we may assume that $\mu_{n_k}(\chi_J,\sigma)$ converges as $k\to\infty$ to a shift-invariant measure $\mu_J$ on $\Omega_2$. Consider the product dynamical system on $\Omega_3\times \Omega_2$ given by $$ S \eqdef \sigma\times \sigma\colon \Omega_3\times \Omega_2\to \Omega_3\times \Omega_2. $$ Again passing to a subsequence if necessary, we may assume that there exists $\mu'\in\mathcal{M}_{S}(\Omega_3\times\Omega_2)$ such that \[ \mu_{n_k}((y,\chi_J),S) \to \mu'\text{ as }k\to\infty. \] Recall that a \emph{joining} of $\mu$ and $\mu_J$ is an $S$-invariant measure on $\Omega_3\times \Omega_2$ which projects to $\mu$ in the first coordinate and to $\mu_J$ in the second. Observe that $\mu'$ is a joining of $\mu$ and $\mu_J$ (because the marginal distributions of $\mu_{n_k}((y,\chi_J),S)$ converge as $k\to\infty$ to, respectively, $\mu$ and $\mu_J$). As the entropy of a joining is bounded below by the entropy of any of its marginals and is bounded above by the sum of the entropies of its marginals (see \cite[Fact 4.4.3]{Downarowicz}), we have that \[ h_\mu(\sigma)\le h_{\mu'}(S)\le h_\mu(\sigma)+h_{\mu_J}(\sigma)=h_\mu(\sigma), \] where the equality uses that $J$ is completely deterministic. Now to complete the proof of Claim \ref{t.p.entropy} it suffices to show the following fact. \begin{cla} \label{p.l.entropy} Every $S$-invariant measure $\mu'\in V(y,\chi_J)$ satisfies $h_{\mu'}(S)\ge \underline{d}(J)\cdot \log 2>0$. \end{cla} \begin{proof}[Proof of Claim \ref{p.l.entropy}] Let $\Psi\colon \{0,1,2\}\times\{0,1\}\to\{0,1,2\}\times\{0,1\}$ be the $1$-block map given by $\Psi(\alpha,1)=(\alpha,1)$ and $\Psi(\alpha,0)=(2,0)$ for $\alpha\in\{0,1,2\}$.
Consider the factor map $\psi\colon\Omega_3\times\Omega_2\to\Omega_3\times\Omega_2$ determined by $\Psi$, that is, \[ \psi(\omega)=\big((\psi(\omega))_i\big)_{i\in\mathbb{Z}},\qquad\text{where }(\psi(\omega))_i = \Psi(\omega_i) \,\text{for }i\in\mathbb{Z}. \] Observe that we have defined $\psi$ so that if $\omega,\omega'\in \Omega_3$ satisfy $\omega|_J=\omega'|_J$, then $(\omega'',\chi_J)\eqdef\psi(\omega,\chi_J)=\psi(\omega',\chi_J)$. Furthermore, $\omega''$ agrees with both $\omega$ and $\omega'$ over $J$, and $\omega''_j=2$ for all $j\notin J$. In particular, if $\bar z\in\Omega_2$ is a generic point for the Bernoulli measure $\xi_{1/2}$ such that $\bar z|_J= y|_J$, then $(z,\chi_J)\eqdef\psi(y,\chi_J)=\psi(\bar z,\chi_J)$. Recall that the only joining of the Bernoulli measure $\xi_{1/2}$ and the zero entropy measure $\mu_J$ is the product measure $\xi_{1/2}\times\mu_J$, see \cite[Theorem 18.16]{Glasner}. As any limit point of $(\mu_{n_k}((\bar z,\chi_J),S))_{k\in\mathbb{N}}$ is a joining of $\xi_{1/2}$ and $\mu_J$, we get that $(\bar z,\chi_J)$ generates along $(n_k)_{k\in\mathbb{N}}$ the $S$-invariant measure $\xi_{1/2}\times \mu_J$. It follows that $(z,\chi_J)$ generates along $(n_k)_{k\in\mathbb{N}}$ the $S$-invariant measure $\mu''=\psi_*(\xi_{1/2}\times\mu_J)$. Note that this shows that all measures in $\mathcal{M}_S(\Omega_3\times\Omega_2)$ generated by $(y,\chi_J)$ along $(n_k)_{k\in\mathbb{N}}$ are pushed forward by $\psi$ onto $\mu''$. Therefore to finish the proof of Claim \ref{p.l.entropy} it is enough to see that \begin{equation} \label{e.toseethat} h_{\mu''}(S) \ge \underline{d}(J)\cdot \log 2. \end{equation} Let $\mathbb{I}_{[1]_0}$ be the characteristic function of the cylinder $[1]_0\subset\Omega_2$.
Note that from the definitions of $\underline{d}(J)$ and $\bar d(J)$ it follows immediately that \begin{equation} \label{e.dj} \underline{d}(J) \le \lim_{k\to\infty} \frac{1}{n_k}\sum_{j=0}^{n_k-1} \mathbb{I}_{[1]_0}(\sigma^{j}(\chi_J)) \le \bar d (J). \end{equation} Observe also that the measure $\mu''$, by its definition, is concentrated on the set \[ \big( [0]_0\times[1]_0 \big) \cup \big( [1]_0 \times [1]_0 \big) \cup \big( [2]_0\times [0]_0 \big)\subset \Omega_3\times\Omega_2. \] Consider the set $E\eqdef \big([0]_0\times[1]_0\big)\cup\big([1]_0\times [1]_0\big)\subset \Omega_3\times\Omega_2$ and let $\mathbb{I}_E$ be its characteristic function. \begin{cla} \label{cl.theclaim} We have $0<\underline{d}(J)\le\mu''(E)=\mu_J([1]_0)\le\bar d(J)$. \end{cla} \begin{proof}[Proof of Claim \ref{cl.theclaim}] Note that for $n\in\mathbb{N}$ we have $S^n(z,\chi_J)\in E$ if and only if $\sigma^n(\chi_J)\in [1]_0$, equivalently, if and only if $n\in J$. Recall that along $(n_k)$, the point $(z,\chi_J)$ generates $\mu''$ and $\chi_J$ generates $\mu_J$. Furthermore, as the topological boundaries of $E$ and $[1]_0$ are empty, it follows from the portmanteau theorem \cite[Thm. 18.3.4]{Garling} that \[ \mu''(E)=\lim_{k\to\infty}\frac{1}{n_k}\sum_{j=0}^{n_k-1} \mathbb{I}_E(S^{j}(z,\chi_J)) =\lim_{k\to\infty} \frac{1}{n_k}\sum_{j=0}^{n_k-1} \mathbb{I}_{[1]_0}(\sigma^{j}(\chi_J)). \] Equation \eqref{e.dj} now implies that $0<\underline{d}(J)\le\mu''(E)\le\bar d(J)$, proving Claim \ref{cl.theclaim}. \end{proof} By Claim \ref{cl.theclaim}, $S$ induces a measure preserving system $(E,\mu''_E,S_E)$ on $E$, where $\mu''_E(A)=\mu''(A\cap E)/\mu''(E)$ for every Borel set $A\subset \Omega_3\times\Omega_2$ and $S_E(x) =S^{r(x)} (x)$, where $r(x)=\inf\{ q>0 \colon S^q(x) \in E\}$ is defined for $\mu''$-a.e. point $x\in E$. \begin{cla}\label{l.bernoulli} The measure preserving system $(\Omega_2,\xi_{1/2},\sigma)$ is a factor of $(E,\mu''_E,S_E)$.
\end{cla} Let us assume that Claim \ref{l.bernoulli} holds and conclude the proof of Claim~\ref{p.l.entropy}. Note that by Claim \ref{l.bernoulli} we have $h_{\mu''_E}(S_E)\ge \log 2$. Now by Abramov's formula\footnote{The proof that this well-known formula works for transformations which are neither ergodic nor invertible is due to H.~Scheller, see \cite{Krengel} or \cite[p. 257]{Petersen}.} it follows that \[ h_{\mu''_E}(S_E)=h_{\mu''}(S)/\mu''(E). \] By Claim~\ref{cl.theclaim}, this yields $\underline{d}(J)\cdot \log 2\le h_{\mu''}(S)$, proving \eqref{e.toseethat} and finishing the proof of Claim~\ref{p.l.entropy}. \end{proof} Since Claim~\ref{p.l.entropy} implies Claim~\ref{t.p.entropy}, and the latter implies Theorem~\ref{thm:mainbis}, it remains to prove Claim \ref{l.bernoulli}. \begin{proof}[Proof of Claim~\ref{l.bernoulli}] Consider the partition $\cP_E\eqdef \{P_0,P_1\}$ of $E$, where $P_0\eqdef [0]_0\times[1]_0$ and $P_1\eqdef [1]_0\times [1]_0$. Fix $N\in\mathbb{N}$ and $v=v_1\ldots v_N\in\{0,1\}^N$. Let \[ \mathcal P_v\eqdef P_{v_1}\cap S_E^{-1}(P_{v_2})\cap\ldots\cap S_E^{-N+1}(P_{v_N}). \] Our goal is to prove that $\mu''_E(\mathcal P_v) = 1/2^N$, which implies that $(\Omega_2,\xi_{1/2},\sigma)$ is a factor of $(E,\mu''_E,S_E)$ through the factor map generated by $\cP_E$. To this end we need some auxiliary notation. Let $\mathcal G_J^N$ be the set of blocks over $\{0,1\}$ which contain exactly $N$ occurrences of $1$, start with $1$, and end with $1$. For $u\in\mathcal G_J^N$ and $1\le j\le N$ we denote by $o(j)$ the position of the $j$-th occurrence of $1$ in $u$ and define \[ V_{v,u}\eqdef \{(\omega,\bar\omega)\in \operatorname{supp}\mu'': \bar \omega\in [u]_0 \text{ and }\omega_{o(j)}=v_{j}\text { for }j=1,\ldots,N\}. \] From the definition of $\mu''$ it follows that $\mu''(V_{v,u})=(1/2^N)\,\mu_J([u]_0)$.
Furthermore, we set \[ U_v\eqdef \bigcup_{u\in \mathcal G_J^N}V_{v,u}, \qquad\text{hence}\qquad \mu''(U_v)=\frac{1}{2^N}\, \sum_{u\in \mathcal G_J^N}\mu_J([u]_0). \] Noting that $\mu_J$-almost every point $\bar\omega\in[1]_0$ belongs to some $[u]_0$ with $u\in\mathcal G_J^N$, we get that $$ \sum_{u\in \mathcal G_J^N}\mu_J([u]_0)=\mu_J([1]_0). $$ Therefore, using Claim~\ref{cl.theclaim}, we get $$ \mu''(U_v)=\frac{\mu_J([1]_0)}{2^N}= \frac{\mu''(E)}{2^N}. $$ Note also that $\mathcal P_v=U_v\cap E=U_v$, thus \[ \mu''_E(\mathcal P_v)=\frac{\mu''(U_v)}{\mu''(E)}= \frac{1}{2^N}, \] proving Claim \ref{l.bernoulli}. \end{proof} The proof of Claim~\ref{t.p.entropy} is now complete. This ends the proof of Theorem \ref{thm:mainbis}. \end{proof} \subsection{Rational sets and proof of Theorem~\ref{thm:main}}\label{ss.rational} The notion of a rational set was introduced by Bergelson and Ruzsa \cite{BR1}. Below, by an {\emph{arithmetic progression}} we mean a set of the form $a\mathbb{Z}+b$ for some $a,b\in\mathbb{N}$, $a\neq 0$. \begin{defn}\label{d.rational} We say that a set $A\subset\mathbb{Z}$ is \emph{rational} if for every $\varepsilon>0$ there is a set $B\subset\mathbb{Z}$ which is a union of finitely many arithmetic progressions and satisfies $\bar d(A\div B)<\varepsilon$, where $A\div B$ stands for the symmetric difference of $A$ and $B$. A subset $B$ of $\mathbb{N}$ is \emph{rational} if $B=C\cap \mathbb{N}$ for some rational set $C\subset \mathbb{Z}$. \end{defn} \begin{rem}\label{r.complement} If $A \subset \mathbb{Z}$ is rational, then its complement $\mathbb{Z}\setminus A$ is also rational. The same holds for rational subsets of $\mathbb{N}$. Note that, by definition, a rational set has a well defined density. \end{rem} Recall here that the formula \[ \bar d(x,y)\eqdef \limsup_{N\to\infty}\frac{1}{N} \card{\{0\le n<N:x_n\neq y_n\}},\quad \text{for }x,y\in\Omega_M, \] defines a pseudometric on $\Omega_M$.
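A classical nontrivial example of a rational set (our illustration; the choice of this set and of the cutoffs $P$ below is ours) is the set of squarefree integers: the periodic sets $B_P=\{n\in\mathbb{N}: p^2\nmid n \text{ for every prime } p\le P\}$ are finite unions of arithmetic progressions, and they approximate the squarefree set in $\bar d$ since the upper density of the symmetric difference is at most $\sum_{p>P}p^{-2}\to 0$ as $P\to\infty$. A quick numerical sanity check of this approximation:

```python
# The squarefree integers form a rational set: the periodic sets
# B_P = { n : p^2 does not divide n for all primes p <= P } are finite
# unions of arithmetic progressions, and the density of the symmetric
# difference with the squarefree set shrinks as P grows.
def squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def sym_diff_density(P, N=20_000):
    # empirical density over {1, ..., N} of the symmetric difference
    primes = [p for p in range(2, P + 1) if all(p % q for q in range(2, p))]
    in_B = lambda n: all(n % (p * p) for p in primes)
    mismatches = sum(squarefree(n) != in_B(n) for n in range(1, N + 1))
    return mismatches / N

densities = [sym_diff_density(P) for P in (2, 3, 5, 7)]
print(densities)  # decreasing towards 0 as P grows
```

Here the empirical densities approach $d(B_P)-6/\pi^2$, which tends to $0$; the prefix length $N$ is only a sampling choice.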
In what follows, we need the following properties of $\bar d$: \begin{enumerate} \item\label{p1} If $(x_n)_{n\in\mathbb{N}}\subset\Omega_M$ is a sequence of ergodic points and $x\in\Omega_M$ is such that $\bar d(x_n,x)\to 0$ as $n\to \infty$, then $x$ is also an ergodic point. \item\label{p2} Furthermore, if $(x_n)_{n\in\mathbb{N}}\subset\Omega_M$ and $x\in\Omega_M$ are as above and $V(x_n)=\{\mu_n\}$ and $V(x)=\{\mu\}$, then $h_{\mu_n}(\sigma)\to h_\mu(\sigma)$ as $n\to\infty$. \end{enumerate} A proof of \eqref{p1} is sketched in \cite{Weiss}; alternatively, it follows from \cite[Theorem 15 and Corollary 5]{KLO}. To see \eqref{p2} one combines \eqref{p1} with \cite[Theorem I.9.16]{Shields} and the proof of \cite[Theorem I.9.10]{Shields}. Corollary~\ref{c:main} (and therefore Theorem~\ref{thm:main}) follows from the following result. \begin{lem}\label{lem:genericity} If $A\subset \mathbb{Z}$ or $A\subset \mathbb{N}$ is a rational set, then its characteristic function $\chi_A\in\Omega_2$ is a completely deterministic ergodic point. \end{lem} \begin{proof} Note that a set $B\subset \mathbb{Z}$ is a union of finitely many arithmetic progressions if and only if its characteristic function $\chi_B\in\Omega_2$ is a periodic point for the shift map $\sigma\colon\Omega_2\to\Omega_2$. Furthermore, $\bar d(A\div B)<\varepsilon$ is equivalent to $\bar d(\chi_A,\chi_B)<\varepsilon$. Thus, by definition, for every rational set $A\subset \mathbb{Z}$ there is a sequence $(z_n)_{n=0}^\infty$ of $\sigma$-periodic points in $\Omega_2$ such that $\bar d(\chi_A,z_n)\to 0$ as $n\to\infty$. Since each $z_n$ is a generic point for a zero entropy ergodic measure, the properties \eqref{p1}--\eqref{p2} of $\bar d$ mentioned above allow us to finish the proof in the case $A\subset \mathbb{Z}$.
For $A\subset\mathbb{N}$ it is enough to note that if $C\subset \mathbb{Z}$ is such that $\chi_C\in\Omega_2$ is a completely deterministic ergodic point and $A=C\cap\mathbb{N}$, then $\chi_A\in\Omega_2$ is also a completely deterministic ergodic point, as these notions depend only on the forward orbit of a point. \end{proof} Using Lemma~\ref{lem:genericity} we see that the characteristic function $\chi_A\in\Omega_2$ of a rational set $A\subset \mathbb{Z}$ is a completely deterministic ergodic point and has a well defined density. Therefore we get the following corollary of Theorem \ref{thm:mainbis}, which explains why the theorem about the control of entropy stated in the introduction follows from Theorem \ref{thm:mainbis}. \begin{coro}\label{c:main} The conclusion of Theorem~\ref{thm:mainbis} holds if $J$ is a rational subset of $\mathbb{N}$ with $0<d(J)<1$. \end{coro} \section{Rational long sparse tails} \label{s.scalesandtails} The aim of this section is to find rational subsets of $\mathbb{N}$ which fulfill the requirements needed in the construction of a controlled point from \cite{BDB:}. These sets are coined in \cite{BDB:} \emph{$\cT$-long $\bar\varepsilon$-sparse tails} (associated to a scale $\cT$ and a controlling sequence $\bar\varepsilon$) and are discussed below under the name of \emph{$\cT$-sparsely long tails}. Their elements are times where the control of the averages of a function is partially lost. The times in a tail allow us to control and spread the support of the measures generated by a controlled point (here the longness of a tail is crucial for guaranteeing the full support), while we retain some control on the averages due to sparseness. To control the entropy we define our tail so that it is also a rational set. Hence its complement, that is, the set of times defining the regular part of the orbit, is also rational. It follows that both sets, the tail and the regular part, have nontrivial and well-defined densities.
This allows us to apply the criterion for positivity of the entropy (Theorem \ref{thm:mainbis}) to the measures generated by a controlled point.

We first recall from \cite{BDB:} the definitions of scales and sparsely long tails.

\begin{defn}[Scale]\label{d.scale}
We say that a sequence of positive integers $\cT=(T_n)_{n\in\mathbb{N}}$ is \emph{a scale} if there is an integer sequence $\bar \kappa=(\kappa_n)_{n\in\mathbb{N}}$ of \emph{factors of $\cT$} such that
\begin{itemize}
\item $\kappa_0=3$, and $\kappa_{n}$ is a multiple of $3\kappa_{n-1}$ for every $n\ge 1$,
\item $T_0$ is a multiple of $3$ and $T_{n}=\kappa_{n} \, T_{n-1}$ for every $n\ge 1$,
\item $\kappa_{n+1}/\kappa_n \to \infty$ as $n\to\infty$.
\end{itemize}
\end{defn}

\begin{rem}
Since $\kappa_n\ge 3^{n+1}$ for every $n\in\mathbb{N}$, we have
\[
\sum_{n=0}^\infty\frac{1}{\kappa_n}\le\sum_{n=0}^\infty\frac{1}{3^{n+1}}<1.
\]
\end{rem}

\begin{defn}[$\mathbb{N}$-interval]
An \emph{interval of integers} (or $\mathbb{N}$-interval for short) is a set $[a,b]_{\mathbb{N}}\eqdef[a,b]\cap \mathbb{N}$, where $a,b\in \mathbb{N}$.
\end{defn}

\begin{defn}[Component of a set ${\mathbb M}\subset \mathbb{N}$]
Given a subset ${\mathbb M}$ of $\mathbb{N}$, a \emph{component} of ${\mathbb M}$ is any maximal $\mathbb{N}$-interval contained in ${\mathbb M}$, that is, $[a,b]_{\mathbb{N}}\subset {\mathbb M}$ is a component of ${\mathbb M}$ if and only if $b+1\notin {\mathbb M}$ and $a-1\notin {\mathbb M}$.
\end{defn}

\begin{defn}[$T$-regular interval]
Let $T$ be a positive integer. We say that an $\mathbb{N}$-interval $I$ is a \emph{$T$-regular} interval if $I=[kT,(k+1)T-1]_\mathbb{N}$ for some $k\ge 0$.
\end{defn}

\begin{defn}[$\cT$-adapted set, $n$-skeleton]
Let $\cT=(T_n)_{n\in \mathbb{N}}$ be a scale. We say that a set $R_\infty\subset\mathbb{N}$ is \emph{$\cT$-adapted} if every component of $R_\infty$ is a $T_n$-regular interval for some $n\in\mathbb{N}$. Given $n\in \mathbb{N}$, the \emph{$n$-skeleton} $R_n$ of $R_\infty$ is the union of all components of $R_\infty$ which are $T_k$-regular intervals for some $k\ge n$.
\end{defn}

By definition, for any $\cT$-adapted set $R_\infty$ we have $R_\infty=R_0\supset R_1\supset R_2\supset\ldots$.

\begin{defn}[Sparsely long tail]\label{d.tail}
Consider a scale $\cT=(T_n)_{n\in \mathbb{N}}$ with a sequence of factors $\bar\kappa=(\kappa_n)_{n\in \mathbb{N}}$. A set $R_\infty\subset \mathbb{N}$ is a \emph{$\cT$-sparsely long tail} if the following holds:
\begin{enumerate}
\item\label{i.adapted} $R_\infty$ is $\cT$-adapted,
\item\label{i.0} $0\notin R_\infty$; in particular, $[0,T_n-1]_{\mathbb{N}} \not\subset R_n$, where $R_n$ is the $n$-skeleton of $R_\infty$,
\item\label{i.center} if a $T_n$-regular interval $I=[a,b]_\mathbb{N}$ is not contained in $R_\infty$, equivalently if $I\not\subset R_n$, then the $(n-1)$-skeleton $R_{n-1}$ of $R_\infty$ can intersect $I$ nontrivially only in the middle third of $I$ and is \emph{$1/\kappa_n$-sparse} in $I$, that is,
\begin{gather*}
I\cap R_{n-1}\subset\left[ a +\nicefrac{T_n}{3},\, b-\nicefrac{T_n}{3}\right]_{\mathbb{N}},
\\
0< \frac{ \card{R_{n-1}\cap I}}{T_n}\le 1/\kappa_n.
\end{gather*}
\end{enumerate}
\end{defn}

\begin{defn}[Rational sparsely long tail]
We say that $R_\infty$ is a \emph{rational $\mathcal{T}$-sparsely long tail} if it satisfies Definitions \ref{d.rational} and \ref{d.tail}.
\end{defn}

Next, we extend \cite[Lemma 2.7]{BDB:} by adding the rationality of the tail to its conclusion. In fact, the tail constructed in \cite{BDB:} is also rational, but this fact is not noted there.

\begin{propo}[Existence of rational sparsely long tails]\label{p.l.tailexistence}
Let $\cT=(T_n)_{n\in \mathbb{N}}$ be a scale and $\bar \kappa=(\kappa_n)_{n\in\mathbb{N}}$ be its sequence of factors. Then there is a rational $\cT$-sparsely long tail $R_\infty$ with $0<d(R_\infty)<1$.
\end{propo}

\begin{proof}
For each $n\in\mathbb{N}$ define $A_{n+1} \eqdef\left[\nicefrac{T_{n+1}}{3}, \nicefrac{T_{n+1}}{3}+T_n-1 \right]_{\mathbb{N}}$. Then we set
\[
R^*_\infty = \bigcup_{n\ge 1} (A_n + T_n \mathbb{Z})\qquad\text{and}\qquad R_\infty=R^*_\infty\cap \mathbb{N}.
\]
It is easy to see that the requirements imposed on the growth of the scale $\cT$ imply that the characteristic function $\chi_\infty$ of $R^*_\infty$ is a \emph{regular Toeplitz sequence} (see \cite{Downarowicz-Toeplitz}), which immediately yields that $R_\infty$ is rational. Clearly, $R_\infty\neq \mathbb{N}$, which gives $0<d(R_\infty)<1$, again because $\chi_\infty$ is Toeplitz. But for the convenience of the reader we provide a direct elementary proof of these facts.

Let $\Pi_0=\emptyset$ and for each $n\ge1$ set $\Pi_n=R_\infty\cap[0,T_n-1]_\mathbb{N}$. It follows that for each $n\in\mathbb{N}$ we have $\Pi_n\subset\Pi_{n+1}$ and
\begin{equation}\label{e.pin}
\Pi_{n+1}= \left( \bigcup_{k=0}^{\kappa_{n+1}-1} \left(\Pi_n + k T_n\right) \right) \cup A_{n+1}.
\end{equation}

\begin{cla} \label{cl.longsparse}
The set $R_\infty$ is a $\cT$-sparsely long tail.
\end{cla}

\begin{proof}
We need to check conditions \eqref{i.adapted}, \eqref{i.0} and \eqref{i.center} from Definition \ref{d.tail}. First we prove \eqref{i.adapted}, which says that $R_\infty$ is $\mathcal{T}$-adapted. Fix $n\ge 0$ and note that neither $0$ nor $T_{n+1}-1$ belongs to $\Pi_{n+1}$.
We claim that the components of $\Pi_{n+1}$ are $T_i$-regular for some $i\le n$. Note that the $T_n$-regular interval $A_{n+1}$ is a component of $\Pi_{n+1}$. The other components of $\Pi_{n+1}$ are components of $\Pi_n$ translated by a number $\ell T_n$, for some $\ell\in \{0,\dots,\kappa_{n+1}-1\}$. Arguing inductively, we get that the components of $R_\infty$ are $T_i$-regular for some $i\in\mathbb{N}$.

Obviously $0\not \in R_\infty$, yielding \eqref{i.0}.

It remains to prove \eqref{i.center}. Fix $n\ge 1$. Note that $R_{n-1} \cap \Pi_n= A_n$, and $A_n$ is contained in the middle third of $[0,T_n-1]_\mathbb{N}$. It follows that
$$
\frac{\card{R_{n-1} \cap \Pi_n}}{T_n}= \frac{\card{A_n}}{T_n}=\frac{T_{n-1}}{T_n}=\frac{1}{\kappa_n}.
$$
This proves that the $T_n$-regular interval $[0,T_n-1]_\mathbb{N}$, which contains $\Pi_n$ and is not contained in $R_\infty$, satisfies \eqref{i.center}. The same holds for every $T_n$-regular interval $I$ not contained in $R_\infty$, because for such an interval we have $I\cap R_\infty=\Pi_n+j T_n$ for some $j\ge 1$.
\end{proof}

\begin{cla} \label{cl.rational}
The set $R_\infty$ is rational.
\end{cla}

\begin{proof}
Define $Q_n\eqdef \bigcup_{i=1}^{n} (A_i + T_i \mathbb{N})$. Then $Q_n$ is a finite union of arithmetic progressions. Furthermore, $R_\infty\div Q_n \subset R_n$ and the $n$-skeleton $R_n$ of $R_\infty$ satisfies
\begin{equation}\label{e.rn}
R_n =\bigcup_{i=n+1}^\infty (A_i + T_i \mathbb{N}).
\end{equation}
Thus it is enough to see that $\bar d (R_n) \to 0$ as $n\to\infty$. But by \eqref{e.rn} and subadditivity of $\bar d$ we have
\[
\bar d(R_n) =\bar d\bigg(\bigcup_{i=n+1}^\infty (A_i + T_i \mathbb{N})\bigg)\le \sum_{i=n+1}^\infty \bar d(A_i + T_i \mathbb{N}).
\]
It is easy to see that for each $i\ge 1$ we have $\bar d(A_i + T_i \mathbb{N}) = T_{i-1}/T_i=1/\kappa_i$. As a conclusion, we get $\bar d (R_n) \le \sum_{i=n+1}^\infty \frac{1}{\kappa_i}$, proving that the tail is rational.
\end{proof}

\begin{cla}\label{cl.density}
We have $0<d(R_\infty)<1$.
\end{cla}

\begin{proof}
The set $R_\infty$ is rational and hence it has a well-defined density, see Remark~\ref{r.complement}. We also have
\[
\frac{1}{\kappa_1}= \bar d(A_1+T_1\mathbb{N})\le \bar d(R_\infty)=\bar d\bigg(\bigcup_{i=1}^\infty (A_i + T_i \mathbb{N})\bigg) \le\sum_{i=1}^\infty\frac{1}{\kappa_i}<1.\qedhere
\]
\end{proof}

The proposition now follows from Claims~\ref{cl.longsparse}, \ref{cl.rational}, and \ref{cl.density}.
\end{proof}

\section{Double flip-flop families}
\label{s.flipflop}

In this section, we review the definitions of flip-flop families following \cite{BBD:16,BDB:}. We also introduce the notion of a double flip-flop family and prove that flip-flop families yield double flip-flop families, see Proposition~\ref{p.yieldsdouble}. Using these families and Theorem~\ref{thm:main}, we obtain ergodic measures with full support, positive entropy, and zero average for a continuous potential $\varphi\colon X \to \mathbb{R}$, see Theorem~\ref{t.flipfloptailqual}.

In what follows, $(X, \rho)$ is a compact metric space, $f\colon X \to X$ is a homeomorphism, and $\varphi\colon X \to \mathbb{R}$ is a continuous function.

\subsection{Flip-flop families}
We begin by recalling the definition of flip-flop families.

\begin{defn}[Flip-flop family]\label{d.flipflop}
A \emph{flip-flop family} associated to $\varphi$ and $f$ is a family $\mathfrak{F}=\mathfrak{F}^+\sqcup\mathfrak{F}^-$ of compact subsets of $X$, called \emph{plaques}\footnote{We pay special attention to the case when the sets of the flip-flop family are discs tangent to a strong unstable cone field. This justifies the name.}, such that there are $\alpha>0$ and a sequence of numbers $(\zeta_n)_n$ with $\zeta_n\to 0^+$ as $n\to \infty$ satisfying:
\begin{enumerate}
\item\label{i.flipflop1} let $F_\mathfrak{F}^+\eqdef \bigcup_{D\in\mathfrak{F}^+} D$ (resp.\ $F_\mathfrak{F}^-\eqdef \bigcup_{D\in\mathfrak{F}^-} D$); then $\varphi(x)>\alpha$ for every $x\in F_\mathfrak{F}^+$ (resp.
$\varphi(x)< -\alpha$ for every $x\in F_\mathfrak{F}^-$);
\item\label{i.flipflop2} for every $D\in \mathfrak{F}$, there are sets $D^+\in \mathfrak{F}^+$ and $D^-\in\mathfrak{F}^-$ contained in $f(D)$;
\item\label{i.flipflop33} for every $n>0$ and every family of sets $D_i\in \mathfrak{F}$, $i\in\{0,\dots ,n\}$, with $D_{i+1}\subset f(D_i)$ it holds
$$
\mathrm{diam} (f^{-i} (D_n))\le \zeta_i, \quad \mbox{for every $i\in\{0,\dots,n\}$.}
$$
\end{enumerate}
\end{defn}

We now recall the notion of $f$-sojourns. Note that in this definition the flip-flop family may be relative to a power $f^k$ of $f$, but the sojourns are relative to $f$. Furthermore, $\varphi_k$ stands here for the Birkhoff $k$-average of the map $\varphi$ with respect to $f$ introduced in \eqref{e.averages}.

\begin{defn}[Flip-flop family with $f$-sojourns]\label{d.flipfloptail}
Consider a flip-flop family $\mathfrak{F}=\mathfrak{F}^+\sqcup\mathfrak{F}^-$ associated to $\varphi_k$ and $f^k$ for some $k\ge 1$ and a compact subset $Y$ of $X$. The \emph{flip-flop family $\mathfrak{F}$ has $f$-sojourns along $Y$} (or \emph{$\mathfrak{F}$ $f$-sojourns along $Y$}) if there is a sequence $(\eta_n)_n$ with $\eta_n\to 0^+$ such that for every $\delta>0$ there is an integer $N=N_\delta$ so that every plaque $D\in\mathfrak{F}$ contains subsets $\widehat D^+, \widehat D^-$ satisfying:
\begin{enumerate}
\item\label{i.defff0} for every $x\in \widehat D^+\cup \widehat D^-$ the orbit segment $\{x,\dots, f^N(x)\}$ is $\delta$-dense in $Y$ (i.e., the $\delta$-neighbourhood of the orbit segment contains $Y$);
\item\label{i.defff1} $f^N(\widehat D^+)=\widehat D^+_N\in \mathfrak{F}^+$ and $f^N(\widehat D^-)=\widehat D^-_N\in\mathfrak{F}^-$;
\item\label{i.defff2} for every $i\in\{0,\dots, N\}$ it holds
$$
\mathrm{diam} (f^{-i} (\widehat D_N^\pm))\le \eta_i.
$$
\end{enumerate}
\end{defn}

We are now in a position to define double flip-flop families (with sojourns).
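Before doing so, it may help to keep in mind a toy instance of Definition~\ref{d.flipflop}: for the full shift $\sigma$ on $\{0,1\}^{\mathbb{Z}}$ with $\varphi(x)=1-2x_0$, the sets of points with a prescribed past $(x_k)_{k\le 0}$ form a flip-flop family, split into $\mathfrak{F}^{\pm}$ according to the sign of $\varphi$. The following Python sketch (our illustration, not a construction from the paper; all names are ours) records the bookkeeping, encoding a plaque by the word of its most recently prescribed coordinates, with the last letter standing for $x_0$.

```python
# Toy flip-flop family for the full 2-shift (illustration only; names ours).
# A plaque is the set of x in {0,1}^Z with a prescribed past (x_k)_{k<=0};
# we encode it by the word w listing its most recent prescribed coordinates,
# with w[-1] standing for x_0.  Take phi(x) = 1 - 2*x_0.

def phi_value(w):
    # phi is constant on each plaque, so item (1) holds with alpha = 1/2
    return 1 if w[-1] == '0' else -1

def children(w):
    # sigma(D_w) is the union of the two one-letter extensions, hence it
    # contains a plaque of F^+ and a plaque of F^-: item (2)
    return w + '0', w + '1'

def pullback_diam_bound(i):
    # every plaque fixes all coordinates k <= 0, so sigma^{-i}(D) fixes all
    # k <= i; with rho(x, y) = 2^{-min{|k| : x_k != y_k}} this gives
    # diam(sigma^{-i}(D)) <= 2^{-(i+1)}, so item (3) holds with zeta_i = 2^{-i}
    return 2.0 ** -(i + 1)

plus, minus = children('01')
assert phi_value(plus) == 1 and phi_value(minus) == -1              # item (2)
assert all(pullback_diam_bound(i) <= 2.0 ** -i for i in range(10))  # item (3)
```

In this toy model a chain $D_{i+1}\subset\sigma(D_i)$ amounts to extending the word by one letter per step, and the sequence of chosen letters is precisely the kind of itinerary that the double families introduced next keep track of.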
Observe that the remark before Definition \ref{d.flipfloptail} applies also to Definition \ref{d.dflipfloptail}.

\begin{defn}[Double flip-flop family]\label{d.dflipflop}
A \emph{double flip-flop family} associated to $\varphi$ and $f$ is a family $\mathfrak{D}\eqdef \mathfrak{D}^+_0\sqcup\mathfrak{D}^+_1 \sqcup \mathfrak{D}^-_0\sqcup\mathfrak{D}^-_1$ of compact subsets of $X$ such that there are $\alpha>0$ and a sequence of numbers $(\zeta_n)_n$ with $\zeta_n\to 0^+$ as $n\to \infty$ with the following properties: let $E^+_i\eqdef \bigcup_{D\in\mathfrak{D}^+_i} D$ and $E^-_i\eqdef \bigcup_{D\in\mathfrak{D}^-_i} D$, where $i=0,1$, $E^+\eqdef E^+_0 \cup E^+_1$, and $E^-\eqdef E^-_0 \cup E^-_1$.
\begin{enumerate}
\item\label{i.dflipflop1} $\varphi(x)\geq\alpha$ for every $x\in E^+$ and $\varphi(x)\leq -\alpha$ for every $x\in E^-$;
\item\label{i.dflipflop2} for every $D\in \mathfrak{D}$, there are sets $D^+_0\in \mathfrak{D}^+_0$, $D^+_1\in \mathfrak{D}^+_1$, $D^-_0\in \mathfrak{D}^-_0$, and $D^-_1\in \mathfrak{D}^-_1$ contained in $f(D)$;
\item\label{i.dflipflop3} for every $n>0$ and every family of sets $D_i\in \mathfrak{D}$, $i\in\{0,\dots ,n\}$, with $D_{i+1}\subset f(D_i)$ it holds
$$
\mathrm{diam} (f^{-i} (D_n))\le \zeta_i, \quad \mbox{for every $i\in\{0,\dots,n\}$;}
$$
\item\label{i.dflipflop4} the closures of the sets $E^+_0, E^+_1, E^-_0, E^-_1$ are pairwise disjoint\footnote{This condition is straightforward in the flip-flop case, since $\varphi$ is strictly bigger than $\alpha>0$ on $F_\mathfrak{F}^+$ and strictly less than $-\alpha<0$ on $F_\mathfrak{F}^-$.}.
\end{enumerate}
\end{defn}

\begin{defn}[Double flip-flop family with $f$-sojourns along $Y$]\label{d.dflipfloptail}
Let $\mathfrak{D}\eqdef \mathfrak{D}^+_0\sqcup\mathfrak{D}^+_1 \sqcup \mathfrak{D}^-_0\sqcup\mathfrak{D}^-_1$ be a double flip-flop family associated to $\varphi_k$ and $f^k$, $k\ge 1$.
Given a compact subset $Y$ of $X$, we say that \emph{$\mathfrak{D}$ has $f$-sojourns along $Y$} (or that \emph{$\mathfrak{D}$ $f$-sojourns along $Y$}) if there is a sequence $(\eta_n)_n$ with $\eta_n\to 0^+$ such that for every $\delta>0$ there is an integer $N=N_\delta$ such that every plaque $D\in\mathfrak{D}$ contains subsets $\widehat D^+_0, \widehat D^+_1, \widehat D^-_0, \widehat D^-_1$ satisfying:
\begin{enumerate}
\item\label{i.def-dff0} for every $x\in \widehat D^+_0\cup \widehat D^+_1 \cup \widehat D^-_0 \cup \widehat D^-_1$ the orbit segment $\{x,f(x),\dots, f^N(x)\}$ is $\delta$-dense in $Y$;
\item\label{i.def-dff1} $f^N(\widehat D^i_j)= D^i_{N,j} \in \mathfrak{D}^i_j$ for $i\in \{-,+\}$ and $j\in \{0,1\}$;
\item\label{i.def-dff2} for every $i\in\{0,\dots, N\}$ and $j\in\{0,1\}$ it holds
$$
\mathrm{diam} (f^{-i} (D_{N,j}^\pm))\le \eta_i.
$$
\end{enumerate}
\end{defn}

\begin{rem} \label{r.multiple}
In the previous definitions the constant $N$ can be chosen to be a multiple of $k$.
\end{rem}

\subsection{Existence of double flip-flop families with sojourns}
\label{ss.double}

We now prove that the existence of flip-flop families with sojourns implies the existence of double flip-flop families with sojourns.

\begin{propo} \label{p.yieldsdouble}
Consider a flip-flop family $\mathfrak{F}=\mathfrak{F}^+\sqcup\mathfrak{F}^-$ associated to $\varphi_k$ and $f^k$ for some $k\ge 1$ with $f$-sojourns along a compact subset $Y$ of $X$. Then there are $r\ge 1$ and a double flip-flop family $\mathfrak{D}= \mathfrak{D}^+_0\sqcup \mathfrak{D}^+_1 \sqcup \mathfrak{D}^-_0\sqcup \mathfrak{D}^-_1$ associated to $\varphi_r$ and $f^r$ with $f$-sojourns along $Y$.
\end{propo}

\begin{proof}
Given a plaque $D\in \mathfrak{F}$ and $\ell \ge 1$, consider subsets $D_{+^\ell,+}$, $D_{+^\ell,-}$, $D_{-^\ell,+}$, and $D_{-^\ell,-}$ of $D$ satisfying
\begin{itemize}
\item $f^{ki} (D_{+^\ell,+})$ is contained in some plaque of $\mathfrak{F}^+$ for every $i\in \{1,\dots,\ell\}$ and $f^{k(\ell+1)} (D_{+^\ell,+})\in \mathfrak{F}^+$,
\item $f^{ki} (D_{+^\ell,-})$ is contained in some plaque of $\mathfrak{F}^+$ for every $i\in \{1,\dots,\ell\}$ and $f^{k(\ell+1)} (D_{+^\ell,-})\in \mathfrak{F}^-$,
\item $f^{ki} (D_{-^\ell,-})$ is contained in some plaque of $\mathfrak{F}^-$ for every $i\in \{1,\dots,\ell\}$ and $f^{k(\ell+1)} (D_{-^\ell,-})\in \mathfrak{F}^-$,
\item $f^{ki} (D_{-^\ell,+})$ is contained in some plaque of $\mathfrak{F}^-$ for every $i\in \{1,\dots,\ell\}$ and $f^{k(\ell+1)} (D_{-^\ell,+})\in \mathfrak{F}^+$.
\end{itemize}
The existence of these subsets is assured by item \eqref{i.flipflop2} in the definition of a flip-flop family. Using the continuity of $\varphi$, we have that for every $\ell$ large enough there is $\alpha'>0$ such that
\[
\begin{split}
&\varphi_{k(\ell+1)} (x) > \alpha'>0 \quad \mbox{if $x\in D_{+^\ell,\pm}$},\\
&\varphi_{k(\ell+1)} (x) < -\alpha'<0 \quad \mbox{if $x\in D_{-^\ell,\pm}$}.
\end{split}
\]
We use here that the $(\ell+1)$-st Birkhoff average of $\varphi_k$ with respect to $f^k$ coincides with $\varphi_{k(\ell+1)}$, the $k(\ell+1)$-st Birkhoff average of $\varphi$ with respect to $f$. We fix such a large $\ell$ and define
\[
\begin{split}
\mathfrak{D}^+_0&\eqdef \{D_{+^\ell,+}, \, D\in \mathfrak{F}\}, \quad \mathfrak{D}^+_1\eqdef \{D_{+^\ell,-}, \, D\in \mathfrak{F}\},\\
\mathfrak{D}^-_0&\eqdef \{D_{-^\ell,+}, \, D\in \mathfrak{F}\}, \quad \mathfrak{D}^-_1\eqdef \{D_{-^\ell,-}, \, D\in \mathfrak{F}\}.
\end{split}
\]
By construction, $\mathfrak{D}=\mathfrak{D}^+_0\sqcup \mathfrak{D}^+_1\sqcup \mathfrak{D}^-_0\sqcup \mathfrak{D}^-_1$ satisfies conditions \eqref{i.dflipflop1}, \eqref{i.dflipflop2}, and \eqref{i.dflipflop3} in the definition of a double flip-flop family for $f^{k(\ell+1)}$ and $\varphi_{k(\ell+1)}$. To check condition \eqref{i.dflipflop4}, i.e., that the closures of the sets $E^+_0, E^+_1, E^-_0, E^-_1$ are pairwise disjoint, just observe that the values of $\varphi$ on the $k\ell$-th and $k(\ell+1)$-th iterates of these sets are uniformly separated.

It remains to get the sojourns property. Fix a small $\delta>0$ and consider the number $N=N_\delta$ in the definition of sojourns for $\mathfrak{F}$. Take a set $D\in \mathfrak{D}$ and consider $f^{k(\ell+1)}(D)=\widehat D\in \mathfrak{F}$. The sojourns property for $\mathfrak{F}$ provides a subset $\widehat D'$ such that $f^N (\widehat D')\in \mathfrak{F}$ and the first $N$ iterates of any point $x\in \widehat D'$ are $\delta$-dense in $Y$. Consider now $f^{-k(\ell+1)} (\widehat D')\subset D$. It is enough to observe that the first $k(\ell+1)+N$ iterates of any point in that set are $\delta$-dense in $Y$. We omit the choice of the sequences $\zeta_i$ and $\eta_i$ in the previous construction. We finish the proof by taking $r=k(\ell+1)$.
\end{proof}

\subsection{Support, average, and entropy}
\label{ss.support}

We now obtain ergodic measures with full support and positive entropy satisfying $\int \varphi \,d\mu=0$.

\begin{theo}\label{t.flipfloptailqual}
Let $(X,\rho)$ be a compact metric space, $Y$ a compact subset of $X$, $f\colon X \to X$ a homeomorphism, and $\varphi\colon X \to \mathbb{R}$ a continuous function. Assume that there is a flip-flop family $\mathfrak{F}$ associated to $\varphi_k$ and $f^k$ for some $k\ge 1$ having $f$-sojourns along $Y$. Then there is an ergodic measure $\mu$ with positive entropy whose support contains $Y$ and such that $\int \varphi \,d\mu =0$.
\end{theo}

First note that by Proposition~\ref{p.yieldsdouble} we can assume that there is a double flip-flop family $\mathfrak{D}=\mathfrak{D}^+_0\sqcup \mathfrak{D}^+_1\sqcup \mathfrak{D}^-_0\sqcup \mathfrak{D}^-_1$ relative to $\varphi_r$, $f^r$, and some $r\ge 1$ with $f$-sojourns along $Y$. We define the sets
$$
K_0\eqdef \mathrm{closure} \left( \bigcup_{D\in \mathfrak{D}^+_0\cup \mathfrak{D}^-_0} D \right) \quad \mbox{and} \quad K_1\eqdef \mathrm{closure} \left( \bigcup_{D\in \mathfrak{D}^+_1\cup \mathfrak{D}^-_1} D \right).
$$
Recall that the sets $K_0$ and $K_1$ are disjoint. The pair $\mathbf{K}\eqdef (K_0, K_1)$ is the \emph{division associated to $\mathfrak{D}$}.

We need to recall some definitions from \cite{BDB:}. Consider sequences $\bar \delta =(\delta_n)_{n\in \mathbb{N}}$ and $\bar \alpha =(\alpha_n)_{n\in \mathbb{N}}$ of positive numbers converging to $0$ as $n\to \infty$. Consider a scale $\mathcal{T}=(T_n)_{n\in \mathbb{N}}$ and a $\mathcal{T}$-sparsely long tail $R_\infty$.

\begin{defn}[$\bar \alpha$-control and $\bar\delta$-denseness]
A point $x\in X$ is \emph{$\bar \alpha$-controlled for $\varphi$ with a tail $R_\infty$} if for every $n\in\mathbb{N}$ and every $T_n$-regular interval $I$ that is not strictly contained in a component of $R_\infty$ it holds
$$
\frac{1}{T_n} \sum_{j\in I} \varphi (f^j(x))\in [-\alpha_n, \alpha_n].
$$
The orbit of a point $x\in X$ is \emph{$\bar\delta$-dense in $Y$ along the tail $R_\infty$} if for every component $I$ of $R_\infty$ of size $T_n$ the orbit segment $\{f^j(x) : j\in I\}$ is $\delta_n$-dense in $Y$.
\end{defn}

We are now ready to state the main technical step of the proof of Theorem~\ref{t.flipfloptailqual}. This is a reformulation of \cite[Theorem 2]{BDB:} with an additional control of the itineraries. This control leads to positive entropy.
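The combinatorics behind these definitions can be made concrete. The Python sketch below (our illustration; the small scale is a hypothetical choice of ours) builds a finite truncation of the tail $R_\infty=\bigcup_{n\ge 1}(A_n+T_n\mathbb{N})$ from the proof of Proposition~\ref{p.l.tailexistence} and checks the conditions of Definition~\ref{d.tail} visible at this scale: $0\notin R_\infty$, every component is $T_i$-regular, and the density respects the bound $\sum_i 1/\kappa_i<1$.

```python
# Finite truncation of the tail R from the proof of Proposition
# p.l.tailexistence, for a hypothetical small scale chosen for illustration.
kappa = [3, 9, 54]          # kappa_0 = 3; each kappa_n a multiple of 3*kappa_{n-1}
T = [3]                     # T_0 is a multiple of 3
for k in kappa[1:]:
    T.append(k * T[-1])     # T_n = kappa_n * T_{n-1}, so T == [3, 27, 1458]

horizon = T[-1]             # we only look at [0, T_2)
R = set()
for n in range(1, len(T)):
    a = T[n] // 3           # A_n = [T_n/3, T_n/3 + T_{n-1} - 1]
    for s in range(0, horizon, T[n]):            # the copies A_n + T_n * N
        R.update(a + s + j for j in range(T[n - 1]))

assert 0 not in R           # condition (2) of Definition d.tail

def components(S):
    """Maximal N-intervals contained in the finite set S."""
    out, run = [], []
    for j in sorted(S):
        if run and j != run[-1] + 1:
            out.append((run[0], run[-1]))
            run = []
        run.append(j)
    if run:
        out.append((run[0], run[-1]))
    return out

# condition (1): every component is a T_i-regular interval for some i
for a, b in components(R):
    assert any(a % T[i] == 0 and b == a + T[i] - 1 for i in range(len(T)))

# the truncated density respects the bound sum_i 1/kappa_i < 1
assert len(R) / horizon <= sum(1.0 / k for k in kappa[1:])
```

With these choices the computed set equals $\Pi_2=R_\infty\cap[0,T_2-1]$, cf.\ \eqref{e.pin}; the $T_n$-regular intervals not strictly contained in components of $R_\infty$ are exactly those on which the $\bar\alpha$-control above is required.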
For the notion of a $\mathbf{K}$-itinerary of a point over a set see Section~\ref{s.measurespositive}.

\begin{propo}\label{p.maintech}
Let $(X, \rho)$ be a compact metric space, $Y$ a compact subset of $X$, $f\colon X \to X$ a homeomorphism, and $\varphi\colon X \to \mathbb{R}$ a continuous map. Assume that there is a double flip-flop family $\mathfrak{D}$ associated to $\varphi_r$ and $f^r$ for some $r\ge 1$ with $f$-sojourns along $Y$. Let $\mathbf{K}=(K_0,K_1)$ be the division associated to $\mathfrak{D}$. Consider sequences $\bar \alpha =(\alpha_n)_{n\in \mathbb{N}}$ and $\bar \delta=(\delta_n)_{n\in \mathbb{N}}$ of positive numbers converging to $0$, and let $\omega \in \Omega_2$. Then there are a scale $\mathcal{T}$ and a rational $\mathcal{T}$-sparsely long tail $R_\infty$ such that for every plaque $D\in \mathfrak{D}$ there is a point $x\in D$ satisfying
\begin{enumerate}
\item\label{i.p.control} the Birkhoff averages of $\varphi_r$ along the orbit of $x$ with respect to $f^r$ are $\bar\alpha$-controlled with the tail $R_\infty$,
\item \label{i.p.density} the $f$-orbit of $x$ is $\bar\delta$-dense in $Y$ along $R_\infty$,
\item \label{i.p.itinerary} $\omega$ is the $\mathbf{K}$-itinerary of $x$ with respect to $f^r$ over $J\eqdef \mathbb{N} \setminus R_\infty$.
\end{enumerate}
\end{propo}

\subsubsection{Proposition~\ref{p.maintech} implies Theorem~\ref{t.flipfloptailqual}}

We now deduce Theorem~\ref{t.flipfloptailqual}. Let $x$ be the point given by Proposition~\ref{p.maintech} associated to a point $\omega\in \Omega_2$ which is generic for the Bernoulli measure $\xi_{1/2}$. By \cite[Proposition 2.17]{BDB:}, if $\widetilde \mu$ is an accumulation point of the sequence of empirical measures $(\mu_n(x,f^r))_{n\in \mathbb{N}}$ (recall \eqref{e.empiric}), then for $\widetilde \mu$-almost every point $y$ it holds $\lim_{n\to\infty}\frac{1}{n} \sum_{i=0}^{n-1} \varphi_r (f^{ri}(y))=0$.
This implies that for every measure $\mu$ generated by $x$ for $f$ it also holds that
\begin{equation}\label{e.zero-av}
\lim_{n\to\infty}\frac{1}{n} \sum_{i=0}^{n-1} \varphi (f^{i}(y))=0, \quad \mbox{for $\mu$-almost every point $y$.}
\end{equation}
Moreover, by \cite[Proposition 2.2]{BDB:}, for every measure $\mu$ generated by $x$ the $f$-orbit of $\mu$-almost every point $y$ is dense in $Y$.

Since $R_\infty$ is rational, its complement $J=\mathbb{N}\setminus R_\infty$ is also rational and has a density $d(J)>0$, see Remark~\ref{r.complement}. By Theorem~\ref{thm:main}, every $f^r$-invariant measure $\widetilde \mu$ generated by $f^r$ along the orbit of $x$ satisfies $h(\widetilde\mu) > d(J)\log 2=\lambda>0$. Therefore, every $f$-invariant measure $\mu$ generated by $f$ along the orbit of $x$ has entropy at least $\lambda/r$. This implies that the ergodic decomposition of $\mu$ contains some measure $\nu$ with support containing $Y$, positive entropy, and, by \eqref{e.zero-av}, $\int \varphi \,d\nu=0$. The proof of Theorem~\ref{t.flipfloptailqual} is now complete. $\square$

\subsection{Proof of Proposition~\ref{p.maintech}}

Fix sequences $\bar \alpha =(\alpha_n)_{n\in \mathbb{N}}$ and $\bar \delta=(\delta_n)_{n\in \mathbb{N}}$ of positive reals converging to $0$. Take any $\omega\in\Omega_2$. Consider a double flip-flop family $\mathfrak{D}=\mathfrak{D}^+_0 \sqcup \mathfrak{D}^+_1 \sqcup \mathfrak{D}^-_0 \sqcup \mathfrak{D}^-_1$ associated to $\varphi_r$ and $f^r$ for some $r\ge 1$ with $f$-sojourns along $Y$. Let $\mathbf{K}=(K_0,K_1)$ denote the associated division of $\mathfrak{D}$. We let $\mathfrak{D}^+\eqdef \mathfrak{D}^+_0 \sqcup \mathfrak{D}^+_1$ and $\mathfrak{D}^-\eqdef\mathfrak{D}^-_0 \sqcup \mathfrak{D}^-_1$ and note that $\mathfrak{D}^+\sqcup\mathfrak{D}^-$ is a flip-flop family for $f^r$. We also consider $\mathfrak{D}_0\eqdef \mathfrak{D}^+_0 \sqcup \mathfrak{D}^-_0$ and $\mathfrak{D}_1\eqdef \mathfrak{D}^+_1 \sqcup \mathfrak{D}^-_1$.
Following \cite{BDB:} we will use induction on $n$ to construct a scale $\cT=(T_n)_{n\in\mathbb{N}}$ and a $\cT$-sparsely long tail $R_\infty$ (see Section \ref{s.scalesandtails}) such that there exists a point $x\in X$ satisfying conditions \eqref{i.p.control}, \eqref{i.p.density}, and \eqref{i.p.itinerary} of our proposition. After $n$ steps of our induction we will have $T_0,\ldots, T_n$ and $\Pi_{n}=R_\infty\cap [0,T_n-1]$.

Assume that all these objects are defined up to the index $n-1$. Note that no parameters beyond $n$ are required to check that some set $R\subset [0,T_n-1]$ satisfies the conditions from the definition of a $\cT$-sparsely long tail. Furthermore, knowing that $\Pi_{n-1}$ satisfies these conditions, we can use translates of this set by multiples of $T_{n-1}$ to get a set which we declare to be $\Pi_{n}=R_\infty\cap[0,T_{n}-1]$. The double flip-flop family is used as follows: the partition $\mathfrak{D}^+\sqcup \mathfrak{D}^-$ is used for controlling averages and the partition $\mathfrak{D}_0 \sqcup \mathfrak{D}_1$ is used to follow a prescribed itinerary.
In the above situation, following the reasoning in \cite{BDB:}, we obtain that there is an infinite set $\cS$ of multiples of $T_{n-1}$ such that for every $S\in\cS$ and every $R\subset [0,S-1]$ following the rules of a tail (up to time $S$) and such that $R\cap[0,T_{n-1}-1]=\Pi_{n-1}$, given any $D\in \mathfrak{D}$ there is a family of plaques $D_i\in \mathfrak{D}$, $i\not\in R$, such that
\begin{itemize}
\item for every $i,j\in [0,S-1]\setminus R$ with $j>i$ it holds $D_j \subset f^{r(j-i)} (D_i)$,
\item for every $x\in f^{-rS} (D_{S})$ the orbit segment $\{x,f^r(x),\dots,f^{rS}(x)\}$ is controlled for $\varphi_r$ with parameters $(\alpha_1,\dots,\alpha_n)$ and the tail $R$, that is, for every $i\le n$ and every $T_i$-regular interval $I$ contained in $[0,S-1]_\mathbb{N}$ that is not contained in $R$ the average of $\varphi_r$ over $I$ is in $[-\alpha_i, -\alpha_i/2] \cup [\alpha_i/2, \alpha_i]$,
\item for every $x\in f^{-rS} (D_{S})$ and every component $I=[a,b]_\mathbb{N}$ of $R$ of size $T_i$ the orbit segment $\{f^{j}(x):j\in [ar,br]_\mathbb{N}\}$ is $\delta_i$-dense in $Y$.
\end{itemize}
Actually, exploiting the fact that we deal with a double flip-flop family, we can combine the reasoning of \cite[Section 2.5.2]{BDB:} with the one in \cite{BBD:16} to add one more property: we can choose $D_i\in \mathfrak{D}_{\omega_i}$.

Now, given $\cS$, we can choose $T_n\in\cS$ large enough so that $T_{n-1}/T_n$ is sufficiently small (since $\cS$ is infinite we can do it). Furthermore, we can extend $\Pi_{n-1}$ to $\Pi_{n}$ exactly as in the proof of Proposition \ref{p.l.tailexistence}, see formula \eqref{e.pin}. This completes the induction step of our construction of $R_\infty$.
Now observe that the set $\bigcap_{i\in \mathbb{N}\setminus R_\infty}f^{-ri}(D_i)\subset D_{0}$ is a nested intersection of nonempty compact sets with diameters converging to $0$; thus it contains only one point $x$, which by our construction satisfies conditions \eqref{i.p.control}, \eqref{i.p.density}, and \eqref{i.p.itinerary}.

\section{Robustly transitive diffeomorphisms}
\label{s.robustly}

In this section, we prove Theorems~\ref{t.openanddense} and \ref{t.average}. Recall that $\cR\cT(M)$ is the (open) subset of $\operatorname{Diff}^1(M)$ of diffeomorphisms that are robustly transitive, have a pair of hyperbolic periodic points of different indices, and have a partially hyperbolic splitting $TM = E^{\mathrm{uu}} \oplus E^{\mathrm{c}} \oplus E^{\mathrm{ss}}$ with one-dimensional center $E^{\mathrm{c}}$, where $E^{\mathrm{uu}}$ is uniformly expanding and $E^{\mathrm{ss}}$ is uniformly contracting. Let $d^{\mathrm{uu}}$ be the dimension of $E^{\mathrm{uu}}$.

Note that the map $\mathrm{J}_f^{\mathrm{c}} \colon M \to \mathbb{R}$, $\mathrm{J}_f^{\mathrm{c}}(x) \eqdef \log | Df_x |_{E^{\mathrm{c}} (x)}|$, is continuous for every $f\in\cR\cT(M)$. Recall that $(\mathrm{J}_{f}^{\mathrm{c}})_n$ stands for the Birkhoff $n$-average of $\mathrm{J}_f^{\mathrm{c}}$, cf.\ \eqref{e.averages}.

\begin{theo}\label{t.l.h(q)}
There is a $C^1$-open and dense subset $\cI(M)$ of $\cR\cT(M)$ such that for every $f\in \cI(M)$ there are $N\in \mathbb{N}$ and a neighbourhood $\cU_f\subset \cI(M)$ of $f$ such that every $g\in \cU_f$ has a flip-flop family with respect to the map $(\mathrm{J}_g^{\mathrm{c}})_N$ and $g^N$ with sojourns along $M$.
\end{theo}

A crucial point here is that we get a single $N$ such that for every $g$ near $f$ we get a flip-flop family associated with $g^N$ (we will pay special attention to this fact). A priori, the number $N$ for the flip-flop families constructed in \cite{BBD:16,BDB:} could depend on $f$.
Since the flip-flop family for $f^N$ leads (through the criterion in Theorem \ref{thm:mainbis}) to an invariant measure with entropy bounded below by a constant times $\log 2/N$, we need the number $N$ to be locally constant in order to get uniform local lower bounds for the entropy of the measures we find. This is precisely what we obtain from Theorem~\ref{t.l.h(q)}.

\begin{proof}[Sketch of the proof of Theorem~\ref{t.l.h(q)}]
Our hypotheses imply that every $f\in \cI(M)$ has a pair of saddles $p_f$ and $q_f$ of indices, respectively, $d^{\mathrm{uu}}$ and $d^{\mathrm{uu}}+1$. The saddles depend continuously on $f$ and the indices are locally constant. Furthermore, the homoclinic classes of the saddles satisfy $H(p_f,f)=H(q_f,f)=M$ (see \cite[Proposition 7.1]{BDB:}, which just summarises results from \cite{BDPR}).

The discussion below involves the notions of a \emph{dynamical blender} and a \emph{flip-flop configuration}. As we do not need their precise definitions and will only use some specific properties of them, we will just give rough definitions of these concepts and refer to \cite{BDB:} and \cite{BBD:16} for details. In what follows, the discussion is restricted to our partially hyperbolic setting and to a small open subset of $\cI(M)$ where the index $d^{\mathrm{uu}}$ is constant.

Recall that a family of discs $\mathfrak{D}$ is \emph{strictly $f$-invariant} if there is an $\varepsilon$-neighbourhood of $\mathfrak{D}$ such that for every disc $D_0$ in such a neighbourhood there is a disc $D_1\in \mathfrak{D}$ with $D_1\subset f(D_0)$, see \cite[Definition 3.7]{BBD:16}.
A \emph{dynamical blender} (in what follows we simply say a \emph{blender}) of a diffeomorphism $f$ is a locally maximal (in an open set $U$) and transitive hyperbolic set $\Gamma$ of index $d^{\mathrm{uu}}+1$ endowed with a strictly $f$-invariant family of discs $\mathfrak{D}_f$ of dimension $d^{\mathrm{uu}}$ tangent to an invariant expanding cone field ${\mathcal{C}}^{\mathrm{uu}}$ around $E^{\mathrm{uu}}$. Hence, a blender is a $4$-tuple $(\Gamma_f, U,{\mathcal{C}}^{\mathrm{uu}}, \mathfrak{D}_f)$. In what follows, we simply denote the blender by $(\Gamma_f,\mathfrak{D}_f)$.

As usual for hyperbolic sets, blenders are $C^1$-robust and have continuations. By \cite[Lemma 3.8]{BBD:16} strictly invariant families are robust: for every $g$ sufficiently close to $f$ the family $\mathfrak{D}_f$ is also strictly invariant for $g$. As a consequence, if $(\Gamma_f,\mathfrak{D}_f)$ is a blender of $f$ then $(\Gamma_g,\mathfrak{D}_f)$ is a blender of $g$ for every $g$ close to $f$, where $\Gamma_g$ is the hyperbolic continuation of $\Gamma_f$. In what follows we will omit the subscripts for simplicity.

We can speak of the index of a blender $(\Gamma,\mathfrak{D})$ (the dimension of the unstable bundle of $\Gamma$). Given a saddle of the same index as the blender, we say that the blender and the saddle are \emph{homoclinically related} if their invariant manifolds intersect cyclically and transversely (this is a natural extension of the homoclinic relation between a pair of saddles). In what follows, we consider blenders which are expanding in the center direction, that is, with index $d^{\mathrm{uu}}+1$. Consider now a saddle $p$ of index $d^{\mathrm{uu}}$.
The saddle $p$ and the blender $(\Gamma,\mathfrak{D})$ are in a {\emph{flip-flop configuration}} if $W^\mathrm{u} (p,f)$ contains some disc of the family $\mathfrak{D}$ of the blender and the unstable manifold of the blender transversely intersects the stable manifold of the saddle (note that the sum of the dimensions of these manifolds exceeds the dimension of the ambient space by one). By transversality and the openness of the invariant family, flip-flop configurations are also $C^1$-robust. The results of \cite[Section 6.5.1]{BDB:} are summarised in the following proposition. \begin{propo} \label{p.dandovoltas} There is an open and dense subset $\cF(M)$ of $\cR\cT(M)$ such that every diffeomorphism $f\in \cF(M)$ has a pair of saddles $p_f$ and $q_f$ of different indices and a blender $(\Gamma_f,\mathfrak{D}_f)$ such that: \begin{itemize} \item $H(p_f,f)=H(q_f,f)=M$, \item $\Gamma_f$ is homoclinically related to $p_f$, \item $\Gamma_f$ and $q_f$ are in a flip-flop configuration, \item there is a metric on $M$ such that $\mathrm{J}_f^{{\mathrm{c}}}$ is positive in a neighbourhood of $\Gamma_f$ and negative in a neighbourhood of the orbit of $q_f$. \end{itemize} \end{propo} From now on, we will always consider $M$ with a metric given by Proposition \ref{p.dandovoltas}. Let us recall another result from \cite{BDB:}. \begin{theo}[Theorem 6.8 in \cite{BDB:}] \label{t.p.flipfloptail} Consider $f\in \operatorname{Diff}^1(M)$ with a dynamical blender $(\Gamma,\mathfrak{D})$ in a flip-flop configuration with a hyperbolic periodic point $q$. Let $\varphi\colon M\to\mathbf{R}$ be a continuous function such that $\varphi|_\Gamma>0$ and $\varphi|_{\cO(q)}<0$. Then there are $N\geq1$ and a flip-flop family $\mathfrak{F}$ with respect to $\varphi_N$ and $f^N$ which $f$-sojourns along the homoclinic class $H(q,f)$.
\end{theo} We can now apply Theorem~\ref{t.p.flipfloptail} to the flip-flop configuration associated to the blender $(\Gamma_f,\mathfrak{D}_f)$ and the saddle $q_f$ provided by Proposition~\ref{p.dandovoltas} and the map $\mathrm{J}_f^{{\mathrm{c}}}$. This provides the flip-flop family associated to the map $\mathrm{J}_f^{{\mathrm{c}}}$. The fact that the sojourns take place in the whole manifold follows from $H(q_f,f)=M$. To complete the sketch of the proof of Theorem~\ref{t.l.h(q)} it remains to get the uniformity of $N$. To get such a control we need to recall some steps of the construction in \cite{BBD:16}. Let us explain how to derive the flip-flop family $\mathfrak{F}=\mathfrak{F}^+\sqcup\mathfrak{F}^-$ associated to $f^{N_f}$ and the number $N_f$ from the flip-flop configuration of the saddle $q_f$ and the blender $(\Gamma_f, \mathfrak{D}_f)$. The sub-family $\mathfrak{F}^+$ is formed by the discs of $\mathfrak{D}_f$. To define $\mathfrak{F}^-$ let us assume, for simplicity, that $f(q_f)=q_f$. We consider an auxiliary family $\mathfrak{D}_q$ of $C^1$-embedded discs containing $W^\mathrm{u}_{\delta}(q_f,f)$ (for sufficiently small $\delta$) in its interior and consisting of small discs $D$ such that \begin{enumerate} \item[a)] every $D$ intersects transversely $W^\mathrm{s}_\delta(q_f,f)$ and is tangent to a small cone field around $E^{\mathrm{uu}}$, \item[b)] there is $\lambda>1$ such that $\|Df(v)\| \ge \lambda \|v\|$ for every vector $v$ tangent to $D$, \item[c)] $f(D)$ contains a disc in $\mathfrak{D}_q$. \end{enumerate} For the existence of the family $\mathfrak{D}_q$ and its precise definition see \cite[Lemma 4.11]{BBD:16}. It turns out that for every $g$ near $f$ the family $\mathfrak{D}_q$ also satisfies these properties for $g$. We let $\mathfrak{F}^-=\mathfrak{D}_q$. Observe now that due to the flip-flop configuration $W^u(q_f,f)$ contains a disc $D'\in \mathfrak{D}_f=\mathfrak{F}^+$.
Hence there is a large $k_0$ such that for every disc $D\in \mathfrak{D}_f=\mathfrak{F}^+$ and every $N\ge k_0$ the disc $f^N(D)$ contains a sub-disc close enough to $D'$ and hence contains a disc in $\mathfrak{D}_f=\mathfrak{F}^+$ (note that this family is necessarily open). Note also that $f^N(D)$ contains a disc of $\mathfrak{D}_q$ by (c). Observe that the choice of $k_0$ holds for every $g$ near $f$. A similar construction holds for the images of the discs in $\mathfrak{D}_q=\mathfrak{F}^-$; now we use that $W^s(q_f,f)$ transversely intersects every disc in $\mathfrak{D}_f=\mathfrak{F}^+$. In this way, we get a uniform $N$ in such a way that the family satisfies condition \eqref{i.flipflop2} in the definition of a flip-flop family (Definition~\ref{i.flipflop33}). In our partially hyperbolic case, condition \eqref{i.flipflop33} follows because all the discs we consider are tangent to a strong unstable cone field. It remains to get condition \eqref{i.flipflop1} on the averages of $(\mathrm{J}_f^{{\mathrm{c}}})_N$. For this, some additional shrinking of the discs of the blender is needed. We will follow \cite[Section 4.4]{BBD:16}. Note that the map $\mathrm{J}_f^{{\mathrm{c}}}$ is positive for the points in the set $\Gamma_f$ (here we recall that $E^\mathrm{c}$ is expanding in a neighbourhood $V$ of $\Gamma_f$ since we consider the metric given by Proposition \ref{p.dandovoltas}). Consider for each disc $D$ of the family $\mathfrak{D}_f$ a sub-disc $D'$ contained in $V$ such that the family $\mathfrak{D}'_f$ formed by the sets $D'$ is invariant for $f^m$ for some $m$. Again, the same $m$ works for every $g$ sufficiently close to $f$. The precise definition of this new family $\mathfrak{D}'_f$ is in \cite[Definition 4.15]{BBD:16} and the invariance properties are in \cite[Lemmas 4.17 and 4.18]{BBD:16}.
Finally, observe that once we have obtained the flip-flop family $\mathfrak{F}=\mathfrak{F}^+\sqcup\mathfrak{F}^-$ the proof that this family has sojourns is exactly as in \cite[Proposition 5.2]{BDB:}. This completes our sketch of the proof of Theorem~\ref{t.l.h(q)}. \end{proof} \subsection{Proof of Theorem~\ref{t.openanddense}} The theorem follows immediately from Theorem~\ref{t.flipfloptailqual} and Theorem~\ref{t.l.h(q)}. \subsection{Proof of Theorem \ref{t.average}} Recall that for a periodic point $p_f$ of $f$ we denote by $\mu_{\cO(p_f)}$ the unique $f$-invariant probability measure supported on the orbit of $p_f$. Consider now periodic points $p_f$ and $q_f$ of $f$ satisfying $\int \varphi\, d\mu_{\cO(p_f)} <0< \int \varphi \, d\mu_{\cO(q_f)}$. To prove Theorem~\ref{t.average} it is enough to consider the case where the saddles $p_f$ and $q_f$ have the same index and are homoclinically related (which is an open and dense condition in $\mathcal{RT}(M)$). In \cite[Section 5.3]{BDB:} it is explained how the case where the saddles have different indices is reduced to this ``homoclinically related'' case: after an arbitrarily small perturbation of $f$ one gets $g$ with a saddle-node $r_g$ with $\int \varphi\, d\mu_{\cO(r_g)}\ne 0$. Assume that $\int \varphi\, d\mu_{\cO(r_g)}>0$. In this case we perturb $g$ to get $h$ such that $r_h$ has the same index as $p_h$ and these points are homoclinically related. Then we are in the ``homoclinically related'' case. We now prove Theorem~\ref{t.average} when the saddles $p_f$ and $q_f$ are homoclinically related. Let us assume, for simplicity, that these saddles are fixed points of $f$. The result follows from the construction in \cite{BDB:}; we sketch its main steps below. Recall that the set $\cD^i(M)$ of $i$-dimensional (closed) discs $C^1$-embedded in $M$ has a natural topology which is induced by a metric $\mathfrak{d}$; for details see \cite[Proposition 3.1]{BBD:16}.
For small $\varrho>0$ consider the $\varrho$-neighbourhoods $\cV^{\mathfrak{d}}_\varrho(p_f)\eqdef \cV^{\mathfrak{d}}_\varrho(W^{\mathrm{u}}_{loc}(p_f,f))$ and $\cV^{\mathfrak{d}}_\varrho(q_f)\eqdef \cV^{\mathfrak{d}}_\varrho(W^{\mathrm{u}}_{loc}(q_f,f))$ of the local unstable manifolds of $p_f$ and $q_f$ for the distance $\mathfrak{d}$ in $\cD^i(M)$, where $i$ is the dimension of the unstable bundle of $p_f$ and $q_f$. We consider the following family $\mathfrak{F}_f=\mathfrak{F}_f^+\sqcup \mathfrak{F}_f^-$ of discs: \begin{itemize} \item $\mathfrak{F}_f^-$ is the family of discs in $\cV^\mathfrak{d}_\varrho(p_f)$ contained in $W^{\mathrm{u}}(p_f,f)\cup W^{\mathrm{u}}(q_f,f)$; \item $\mathfrak{F}_f^+$ is the family of discs in $\cV^\mathfrak{d}_\varrho(q_f)$ contained in $W^{\mathrm{u}}(p_f,f)\cup W^{\mathrm{u}}(q_f,f)$. \end{itemize} Note that, as $q_f$ and $p_f$ are homoclinically related, these two families are both infinite. Note also that for $\varrho>0$ small enough one has that $\varphi$ is negative in the discs of $\mathfrak{F}_f^-$ and positive in the discs of $\mathfrak{F}_f^+$. Note that we can define the families $\mathfrak{F}_g^\pm$ analogously for every $g$ close to $f$, with $\varphi$ again negative in the discs of $\mathfrak{F}_g^-$ and positive in the discs of $\mathfrak{F}_g^+$. We have the following result, which is an improvement of \cite[Proposition 5.2]{BDB:}. The original result is stated for a single diffeomorphism $f$; here we have a version valid for a neighbourhood of $f$, with a uniform control of $n$ in the whole neighbourhood. As in the previous section, this allows us to locally bound the entropy of the measures associated to the flip-flop from below. \begin{propo} Consider $f$ and $\varphi$ as above. Then there are $n\geq 1$ and a $C^1$-neighbourhood of $f$ such that, for every $g$ in this neighbourhood, the family $\mathfrak{F}_g$ is a flip-flop family associated to $\varphi$ and $g^n$ and has $g$-sojourns along the homoclinic class $H(p_g,g)$.
\end{propo} \begin{proof} Let us recall the proof of the proposition for $f$ (\cite[Proposition 5.2]{BDB:}). Since the saddles $p_f$ and $q_f$ are homoclinically related, there is $n$ such that for every disc $D\in \mathfrak{F}_f^\pm$ the disc $f^n(D)$ contains discs $D_p\in \cV^\mathfrak{d}_\varrho(p_f)$ and $D_q\in \cV^\mathfrak{d}_\varrho(q_f)$. By construction $D\subset W^{\mathrm{u}}(p_f,f)\cup W^{\mathrm{u}}(q_f,f)$. Observe that for $g$ close enough to $f$ and every disc $D\in \mathfrak{F}_g^\pm$ the disc $g^n(D)$ also contains discs $D_p\in \cV^\mathfrak{d}_\varrho(p_g)$ and $D_q\in \cV^\mathfrak{d}_\varrho(q_g)$. The fact that $\mathfrak{F}_f$ is a flip-flop family is quite straightforward. The same proof applies to $\mathfrak{F}_g$. For details see \cite[Section 5.2]{BDB:}, where it is also proved that the family has sojourns in the whole class. \end{proof} \end{document}
\begin{document} \begin{abstract} In this paper we consider the class of K3 surfaces defined as hypersurfaces in weighted projective space, that admit a non-symplectic automorphism of non-prime order, excluding the orders 4, 8, and 12. We show that on these surfaces the Berglund-H\"ubsch-Krawitz mirror construction and mirror symmetry for lattice polarized K3 surfaces constructed by Dolgachev agree; that is, both versions of mirror symmetry define the same mirror K3 surface. \end{abstract} \keywords{K3 surfaces, mirror symmetry, mirror lattices, Berglund-H\"ubsch-Krawitz construction} \subjclass[2010]{Primary 14J28, 14J33; Secondary 14J17, 11E12, 14J32} \title{BHK mirror symmetry for K3 surfaces with non-symplectic automorphism} \section*{Introduction} Since its discovery by physicists nearly 30 years ago, mirror symmetry has been the focus of much interest for both physicists and mathematicians. Although mirror symmetry has been ``proven'' physically, we have much to learn about the phenomenon mathematically. When we speak of mirror symmetry mathematically, there are many different constructions or rules for determining when a Calabi--Yau manifold is ``mirror'' to another. The constructions are often formulated in terms of families of Calabi--Yau manifolds. A natural question is whether, in a situation where more than one version can apply, they produce the same mirror (or mirror family). In this article, we consider two versions of mirror symmetry for K3 surfaces, and show that in this case the answer is affirmative, as we might expect. The first version of mirror symmetry of interest to us is known as BHK mirror symmetry. This was formulated by Berglund--H\"ubsch \cite{berghub}, Berglund--Henningson \cite{berghenn} and Krawitz \cite{krawitz} for Landau--Ginzburg models. 
Using the ideas of the Landau--Ginzburg/Calabi--Yau correspondence, BHK mirror symmetry also produces a version of mirror symmetry for certain Calabi--Yau manifolds (see Section~\ref{sec-mirror}). In the BHK construction, one starts with a quasihomogeneous and invertible polynomial $W$ and a group $G$ of symmetries of $W$ satisfying certain conditions (see Section~\ref{quasi_sec} for more details). From this data, we obtain the Calabi--Yau (orbifold) defined as the hypersurface $Y_{W,G}=\set{W=0}/G$. Given an LG pair $(W,G)$, BHK mirror symmetry allows one to obtain another LG pair $(W^T,G^T)$ satisfying the same conditions, and therefore another Calabi--Yau (orbifold) $Y_{W^T,G^T}$. We say that $Y_{W,G}$ and $Y_{W^T,G^T}$ form a BHK mirror pair. In our case, we resolve singularities to obtain K3 surfaces $X_{W,G}$ and $X_{W^T,G^T}$, which we call a BHK mirror pair. When no confusion arises, we will denote these mirror K3 surfaces simply by $X$ and $X^T$, respectively. Another form of mirror symmetry for K3 surfaces, which we will call LPK3 mirror symmetry, is described by Dolgachev in \cite{dolgachev}. LPK3 mirror symmetry says that the mirror family of a given K3 surface admitting a polarization by a lattice $M$ is the family of K3 surfaces polarized by the \emph{mirror lattice} $M^\vee$. We say that two K3 surfaces are LPK3 mirrors when they are lattice polarized and they belong to LPK3 mirror families (see details in Section \ref{sec-LPK3}). Returning to the question posed earlier, one can ask whether BHK mirror symmetry and LPK3 mirror symmetry produce the same mirror. A similar question was considered by Belcastro in \cite{belcastro}. She considers a family of K3 surfaces that arise as (the resolution of) hypersurfaces in weighted projective space, uses the Picard lattice of a general member of the family as polarization, and finds that this particular polarization does not yield very many mirror families.
This polarization fails to yield mirror symmetry for at least two reasons. First, it does not take the group of symmetries into account. Secondly---and perhaps more compellingly---a result proved by Lyons--Olcken (see \cite{LO}) following Kelly (see \cite{kelly}) shows that the rank of the Picard lattice of $X_{W,G}$ does not depend on $G$ at all. This fact suggests that we need a finer invariant than the full Picard lattice to exhibit LPK3 mirror symmetry. We need to find a polarizing lattice that recognizes the role of the group $G$. The correct polarizing lattice seems to be the invariant lattice \[ S_X(\sigma)=\{x\in H^2(X,\mathbb Z):\sigma^*x=x \} \] of a certain non-symplectic automorphism $\sigma\in\Aut{X}$. This was proven in \cite{ABS} and \cite{CLPS} in the case of K3 surfaces admitting a non--symplectic automorphism of prime order. In what follows, we generalize the results of \cite{ABS} and \cite{CLPS} to K3 surfaces admitting a non--symplectic automorphism $\sigma$ of any finite order, excepting orders 4, 8 and 12. By polarizing each of the K3 surfaces in question by the invariant lattice $S_X(\sigma)$ of a non-symplectic automorphism $\sigma$ of finite order, we prove that BHK mirror symmetry and LPK3 mirror symmetry agree. This is done as in the previous works, by showing that $S_{X^T}(\sigma^T)$ is the mirror lattice of $S_{X}(\sigma)$. This situation differs significantly from the case of a prime order automorphism in that the invariant lattice is no longer $p$-elementary and there is no longer a (known) relationship between the invariants of $S_X(\sigma)$ and the fixed locus of $\sigma$. Hence, instead of studying the fixed locus in order to recover $S_X(\sigma)$, we determine $S_X(\sigma)$ by other methods. As for orders 4, 8 and 12, more details are required and the methods are slightly different, so this will be the object of further work.
The question of whether two versions of mirror symmetry produce the same mirror has been investigated by others as well, but for different constructions of mirror symmetry than we consider here. Partial answers to the question are given by Artebani--Comparin--Guilbot in \cite{good_pairs}, where the Batyrev and BHK mirror constructions are both seen as specializations of a more general construction based on the definition of good pairs of polytopes. Rohsiepe also considered Batyrev mirror symmetry in connection with LPK3 mirror symmetry in \cite{Roh}, where he shows a duality for the K3 surfaces obtained as hypersurfaces in one of the Fano toric varieties constructed from one of the 4319 3-dimensional reflexive polytopes. As in Belcastro's paper \cite{belcastro}, Rohsiepe used the Picard lattice of a general member of the family of such hypersurfaces to polarize the K3 surfaces. As it turns out, only 14 of the 95 weight systems yield a K3 surface in a Fano ambient space. We do not consider such a restriction in the current paper. Clarke has also described a framework, which he calls an auxiliary Landau--Ginzburg model, that encapsulates several versions of mirror symmetry, including Batyrev--Borisov, BHK, Givental's mirror theorem and Hori--Vafa mirror symmetry (see \cite{clarke}). Kelly also has some results in this direction in \cite{kelly}, where he shows, by means of Shioda maps, that certain BHK mirrors are birational. The current article is similar in scope to these articles. There are also several papers treating non--symplectic automorphisms of K3 surfaces, which are closely related to this paper. These include \cite{order_four} for automorphisms of order four, \cite{order_six} for order six, \cite{Schutt2010} for order $2^p$, \cite{order_eight} for order eight, and \cite{order_sixteen} for order sixteen. In general, it seems difficult to find the invariant lattice of a non--symplectic automorphism on a K3 surface.
The current article gives some new methods for computing the invariant lattice, which we hope will yield more general results. As complementary results, in carrying out this classification we discovered the existence of one of the cases that was not found in the order 16 classification in \cite{order_sixteen}, namely a K3 surface admitting a purely non--symplectic automorphism of order sixteen which has as fixed locus a curve of genus zero and 10 isolated fixed points. This is number 58 in Table~\ref{tab-16}. Dillies has also found such an example in \cite{dillies16}. Additionally, our computations unearthed a discrepancy with a result of Dillies in \cite{order_six}. If we look at Table~\ref{tab-6}, we find the invariant lattice for number 29, and one of the rows of 5d has an invariant lattice of order 12. These K3 surfaces admit an automorphism of order three, namely $\sigma_6^2$, with invariants $(g,n,k)=(0,8,5)$, but the automorphism $\sigma_6$ fixes one rational curve and 8 isolated points. This is missing from Table~1 in \cite{order_six}. Furthermore, the same can be said for the K3 surfaces in the same table which have $v\oplus 4w_{2,1}^{1}$ as the invariant lattice, namely one of 8b, 8d, 33a, and 33b. These K3 surfaces admit a non--symplectic automorphism of order three with invariants $(g,n,k)=(0,7,4)$, but $\sigma_6$ fixes one rational curve and seven isolated points. This is also missing from the Table in \cite{order_six}. The paper is organized as follows. In Section \ref{sec-background} we recall some definitions and results on K3 surfaces and lattices, while Section \ref{sec-mirror} is dedicated to the introduction of mirror symmetry, both LPK3 and BHK. The main result of the paper is Theorem \ref{t:main_thm}. Section \ref{sec-method} is dedicated to the explanation of the methods used in the proof. In Section \ref{sec-ex} we report some meaningful examples, and Section \ref{sec:tables} contains the tables proving the main theorem.
We would like to thank Michela Artebani, Alice Garbagnati, Alessandra Sarti and Matthias Sch\"utt for many useful discussions and helpful insights. We would also like to thank Antonio Laface for help with the magma code \cite{magma}. The first author has been partially supported by Proyecto Fondecyt Postdoctorado N. 3150015 and Proyecto Anillo ACT 1415 PIA Conicyt. \section{Background} \label{sec-background} In this section we recall some facts about K3 surfaces and lattices. For notations and theorems, we follow \cite{surfaces, nikulin}. \subsection{K3 Surfaces} A \emph{K3 surface} is a compact complex surface $X$ with trivial canonical bundle and $\dim H^1(X,\mathcal O_X)=0$. All K3 surfaces considered here will be projective and minimal. It is well-known that all K3 surfaces are diffeomorphic and K\"ahler. Given a K3 surface $X$, $H^2(X,\mathbb{Z})$ is free of rank 22, the Hodge numbers of $X$ are $h^{2,0}(X)=h^{0,2}(X)=1$, $h^{1,1}(X)=20$ and $h^{1,0}(X)=h^{0,1}(X)=0$, and the Euler characteristic is $24$. The Picard group of $X$ coincides with the N\'eron--Severi group, and both are torsion free. From the facts above, we see that $H^{2,0}(X)$ is one--dimensional. In fact, it is generated by a nowhere--vanishing two--form $\omega_X$, which satisfies $\langle \omega_X,\omega_X \rangle=0$ and $\langle \omega_X,\overline{\omega}_X \rangle>0$. Given an automorphism $\sigma$ of the K3 surface $X$, we get an induced Hodge isometry $\sigma^*$, which preserves $H^{2,0}(X)$, i.e. $\sigma^*\omega_X=\lambda_\sigma \omega_X$ for some $\lambda_\sigma\in \mathbb{C}^*$. We call $\sigma$ \emph{symplectic} if $\lambda_\sigma=1$ and \emph{non-symplectic} otherwise. If $\sigma$ is an automorphism of non-prime order $m$, we say $\sigma$ is \emph{purely non-symplectic} if $\lambda_\sigma=\xi_m$ with $\xi_m$ a primitive $m$-th root of unity.
\subsection{Lattice theory}\label{s:lattice} A \emph{lattice} is a free abelian group $L$ of finite rank together with a non-degenerate symmetric bilinear form $B\colon L \times L \to \mathbb{Z}$. A lattice $L$ is \emph{even} if $B(x,x)\in 2\mathbb{Z}$ for each $x\in L$. The \emph{signature} of $L$ is the signature $(t_+,t_-)$ of $B$. A lattice $L$ is \emph{hyperbolic} if its signature is $(1,\rk(L)-1)$. A sublattice $L\subset L'$ is called \emph{primitive} if $L'/L$ is free. On the other hand, a lattice $L'$ is an \emph{overlattice} of finite index of $L$ if $L\subset L'$ and $L'/L$ is a finite abelian group. We will refer to it simply as an \emph{overlattice}. Given a finite abelian group $A$, a \emph{finite quadratic form} is a map $q:A\to \mathbb{Q}/2\mathbb{Z}$ such that, for all $n\in \mathbb{Z}$ and $a,a'\in A$, $q(na)=n^2q(a)$ and $q(a+a')-q(a)-q(a')\equiv 2b(a,a') \imod{2\mathbb{Z}}$, where $b:A\times A\to \mathbb{Q}/\mathbb{Z}$ is a finite symmetric bilinear form. We define orthogonality on subgroups of $A$ via $b$. Given a lattice $L$, the corresponding bilinear form $B$ induces an embedding $L \hookrightarrow L^\ast$, where $L^\ast:= \Hom(L,\mathbb{Z})$. The \emph{discriminant group} $A_{L}:= L^\ast/L$ is a finite abelian group. In fact, if we write $B$ as a symmetric matrix in terms of a minimal set of generators of $L$, then the order of $A_{L}$ is equal to $|\det(B)|$. The bilinear form $B$ can be extended to $L^*\times L^*$ taking values in $\mathbb{Q}$. If $L$ is even, this induces a finite quadratic form $q_L: A_L\to \mathbb{Q}/2\mathbb{Z}$. The minimal number of generators of $A_L$ is called the \emph{length} of $L$. If $A_L$ is trivial, $L$ is called \emph{unimodular}. For a prime number $p$, $L$ is called \emph{$p$-elementary} if $A_L \simeq (\mathbb{Z}/p\mathbb{Z})^a$ for some $a\in\mathbb N_0$; in this case, $a$ is the length of $L$.
Two lattices $L$ and $K$ are said to be \emph{orthogonal} if there exists an even unimodular lattice $S$ such that $L\subset S$ and $L^\perp_S\cong K$. Orthogonality will be a key ingredient in the definition of mirror symmetry for K3 surfaces. The following fact will also be useful. \begin{prop}[cf. {\cite[Corollary 1.6.2]{nikulin}}]\label{p:orth} Two lattices $L$ and $K$ are orthogonal if and only if $q_L\cong -q_K$. \end{prop} We recall the definition of several lattices that we will encounter later. The lattice $U$ is the hyperbolic lattice of rank 2 whose bilinear form is given by the matrix $\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right)$. The lattices $A_n, D_m, E_6,E_7, E_8$, $n\geq 1$, $m\geq 4$, are the even negative definite lattices associated to the respective Dynkin diagrams. For $n\geq 1$, the lattice $A_n$ has rank $n$ and its discriminant group is $\mathbb{Z}/({n+1})\mathbb{Z}$. If $p$ is prime, $A_{p-1}$ is $p$-elementary (with $a=1$). For $m\geq 4$, the lattice $D_m$ has rank $m$ and its discriminant group is $\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$ for $m$ even, and $\mathbb{Z}/4\mathbb{Z}$ for $m$ odd. Finally, $E_6,E_7,E_8$ have ranks 6, 7, and 8 and discriminant groups of order 3, 2, and 1, respectively. For $p\equiv 1 \pmod 4$ the lattice $H_p$ is the hyperbolic even lattice of rank 2 whose bilinear form is given by the matrix \[ H_p=\left( \begin{array}{cc} \frac{p-1}2& 1\\ 1&-2\end{array} \right). \] The discriminant group of $H_p$ is $\mathbb{Z}/p\mathbb{Z}$.
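The relation $|A_L|=|\det B|$ from the preceding paragraphs can be checked mechanically. The following script is our illustration only (it is not part of the original text); it uses the Gram matrix of $U$ above and, anticipating their definitions just below, the lattices $L_9$, $M_9$ and $T_{p,q,r}$, verifying the orders of the discriminant groups with exact rational arithmetic:

```python
from fractions import Fraction

def det(m):
    """Exact determinant of an integer matrix (Gaussian elimination
    over the rationals)."""
    a = [[Fraction(x) for x in row] for row in m]
    n, d = len(a), Fraction(1)
    for i in range(n):
        piv = next((k for k in range(i, n) if a[k][i] != 0), None)
        if piv is None:
            return 0
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            d = -d
        d *= a[i][i]
        for k in range(i + 1, n):
            f = a[k][i] / a[i][i]
            for j in range(i, n):
                a[k][j] -= f * a[i][j]
    return int(d)

def t_gram(p, q, r):
    """Gram matrix of T_{p,q,r}: a T-shaped graph with a central vertex
    and legs of p-1, q-1, r-1 further vertices (rank p+q+r-2); each
    vertex has self-intersection -2, adjacent vertices pair to 1."""
    n = p + q + r - 2
    g = [[-2 if i == j else 0 for j in range(n)] for i in range(n)]
    idx = 1
    for leg in (p - 1, q - 1, r - 1):
        prev = 0
        for _ in range(leg):
            g[prev][idx] = g[idx][prev] = 1
            prev, idx = idx, idx + 1
    return g

# |A_L| = |det B| for rank-2 lattices appearing in this subsection.
assert abs(det([[0, 1], [1, 0]])) == 1    # U is unimodular
assert abs(det([[-2, 1], [1, 4]])) == 9   # L_9
assert abs(det([[-4, 5], [5, -4]])) == 9  # M_9

# For T_{p,q,r} the discriminant group has order |pqr - pq - qr - pr|;
# e.g. T_{2,2,2} = D_4 and T_{2,3,5} = E_8.
for p, q, r in [(2, 2, 2), (2, 3, 5), (2, 3, 7), (3, 4, 4), (4, 4, 4), (2, 5, 6)]:
    assert abs(det(t_gram(p, q, r))) == abs(p*q*r - p*q - q*r - p*r)
```

In particular, the script recovers the orders of the forms listed in Table \ref{tab-forms} for $T_{4,4,4}$, $T_{3,4,4}$ and $T_{2,5,6}$ (16, 8 and 8, respectively).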
There are two non--isomorphic hyperbolic lattices of rank 2 with discriminant group $\mathbb{Z}/9\mathbb{Z}$ defined by the matrices \[ L_9=\left( \begin{array}{cc} -2& 1\\ 1&4\end{array} \right),\qquad M_9=\left( \begin{array}{cc} -4&5\\ 5&-4\end{array} \right).\] Following \cite{belcastro}, we recall that $T_{p,q,r}$ with $p,q,r\in \mathbb{Z}$ is the lattice determined by a graph which has the form of a T, where $p,q,r$ are the respective lengths of the three legs. The rank of $T_{p,q,r}$ is $p+q+r-2$ and the discriminant group has order $|pqr-pq-qr-pr|$. Given a lattice $L$, we denote by $L(n)$ the lattice with the same rank as $L$, but whose values under the bilinear form $B$ are multiplied by $n$. Many even lattices are uniquely determined by their rank and the discriminant quadratic form. To make this statement precise, we introduce the following finite quadratic forms. The notation follows \cite{belcastro} and the results are proven in \cite{nikulin}. We define three classes of finite quadratic forms, $w_{p,k}^\varepsilon$, $u_k$, $v_k$, as follows: \begin{enumerate} \item For $p\neq 2$ prime, $k\geq 1$ an integer, and $\varepsilon\in \set{\pm 1}$, let $a$ be the smallest positive even integer whose Legendre symbol modulo $p$ is $\varepsilon$. Then we define $w_{p,k}^\varepsilon:\mathbb{Z}/{p^k}\mathbb{Z}\to \mathbb{Q}/2\mathbb{Z}$ via $w_{p,k}^\varepsilon(1)= ap^{-k}$. \item For $p=2$, $k\geq 1$ and $\varepsilon\in \set{\pm 1, \pm 5}$, we define $w_{2,k}^\varepsilon: \mathbb{Z}/2^k\mathbb{Z}\to \mathbb{Q}/2\mathbb{Z}$ on the generator via $w_{2,k}^\varepsilon (1)=\varepsilon\cdot 2^{-k}$.
\item For $k\geq 1$ an integer, we define the forms $u_k$ and $v_k$ on $\mathbb{Z}/{2^k}\mathbb{Z}\times \mathbb{Z}/{2^k}\mathbb{Z}$ via the matrices: \[ u_k=\left(\begin{matrix} 0 & 2^{-k} \\ 2^{-k} & 0 \end{matrix}\right) \quad v_k=2^{-k}\left(\begin{matrix} 2 & 1 \\ 1 & 2 \end{matrix}\right) \] \end{enumerate} For example, if we consider the lattice $L=A_2$, then $A_L\cong \mathbb{Z}/3\mathbb{Z}$ and $q_L$ has value $\tfrac{4}{3}$ on the generator. Thus $q_L\cong w_{3,1}^1$. \begin{thm}[cf. {\cite[Thm. 1.8.1]{nikulin}}]\label{t:relations} The forms $w_{p,k}^\varepsilon$, $u_k$, $v_k$ generate the semi--group of finite quadratic forms. \end{thm} In other words, every finite quadratic form can be written (not uniquely) as a direct sum of the generators $w_{p,k}^\varepsilon$, $u_k$, $v_k$. Relations can be found in \cite[Thm. 1.8.2]{nikulin}. For a finite quadratic form $q$ and a prime number $p$, we denote by $q_p$ the restriction of $q$ to the $p$--component $(A_q)_p$ of $A_q$. The following results describe the close link between discriminant quadratic forms and even lattices. \begin{thm}[cf. {\cite[Thm. 1.13.2]{nikulin}}]\label{t:lattice_unique} An even lattice $S$ with invariants $(t_+,t_-,q)$ is unique if, simultaneously, \begin{enumerate} \item $t_+\geq 1, t_-\geq 1, t_++t_-\geq 3$; \item for each $p\neq 2$, either $\rk S \geq 2+l((A_q)_p)$ or $ q_p\cong w_{p,k}^\varepsilon\oplus w_{p,k}^{\varepsilon'}\oplus q_p'$; \item for $p= 2$, either $\rk S \geq 2+l((A_q)_2)$ or one of the following holds \[ q_2\cong u_k\oplus q_2',\quad q_2\cong v_k\oplus q_2',\quad q_2\cong w_{2,k}^\varepsilon\oplus w_{2,k}^{\varepsilon'}\oplus q_2'. \] \end{enumerate} \end{thm} \begin{cor}[cf. {\cite[Corollary 1.13.3]{nikulin}}]\label{c:latunique} An even lattice $S$ with invariants $(t_+,t_-,q)$ exists and is unique if $t_+ -t_-\equiv \sign q \imod 8$, $t_+ +t_-\geq 2+ l(A_q)$, and $t_+, t_- \geq 1$. \end{cor} \begin{cor}[cf.
{\cite[Corollary 1.13.4]{nikulin}}]\label{c:U+T} Let $S$ be an even lattice of signature $(t_+,t_-)$. If $t_+\geq 1$, $t_-\geq 1$ and $t_+ +t_-\geq 3+l(A_S)$, then $S\cong U \oplus T$ for some lattice $T$. \end{cor} In Table \ref{tab-forms}, we list the discriminant form associated to each of the lattices appearing in our calculations (see Sections \ref{sec-ex}, \ref{sec:tables}). A complete description can be found in \cite[Appendix A]{belcastro}. \begin{table}[h!]\centering \begin{tabular}{ l c c || l c c|| l c c} $L$&$\sign L$ &$q_L$& $L$&$\sign L$ &$q_L$& $L$&$\sign L$ &$q_L$\\ \hline $U$ &(1,1) & trivial &$D_6$ &(0,6) & $(w_{2,1}^1)^2$ & $T_{4,4,4}$ &(1,9) & $v_2$\\ $U(2)$ &(1,1) & $u$ &$D_9$ &(0,9) & $w_{2,2}^{-1}$ & $T_{3,4,4}$ &(1,8) & $w_{2,3}^5$\\ $A_1$ &(0,1) & $w^{-1}_{2,1}$ &$E_6$ &(0,6) & $w_{3,1}^{-1}$ & $T_{2,5,6}$ &(1,10) & $w_{2,3}^{-5}$\\ $A_2$ &(0,2) & $w_{3,1}^{1}$ &$E_7$ &(0,7) & $w_{2,1}^{1}$ & $\langle 2\rangle$ &(1,0) & $w_{2,1}^1$\\ $A_3$ &(0,3) & $w_{2,2}^5$ &$E_8$ &(0,8) & trivial & $\langle 4\rangle$ &(1,0) & $w_{2,2}^1$\\ $A_1(2)$ &(0,1) & $w_{2,2}^{-1}$ &$H_5$ &(1,1) & $w_{5,1}^{-1}$ & $\langle 8\rangle$ &(1,0) & $w_{2,3}^1$\\ $D_4$ &(0,4) & $v$ &$L_9$ &(1,1) & $w_{3,2}^1$ & $\langle -8\rangle$ &(0,1) & $w_{2,3}^{-1}$\\ $D_5$ &(0,5) & $w_{2,2}^{-5}$ &$M_9$ &(1,1) & $w_{3,2}^{-1}$ & & &\\ \end{tabular} \caption{Lattices and forms} \label{tab-forms} \end{table} Let $L'$ be an overlattice of the lattice $L$, and set $H_{L'}:=L'/L$. By the chain of embeddings $L\subset L'\subset (L')^*\subset L^*$ one has $H_{L'}\subset A_L$ and $A_{L'}=((L')^*/L)/H_{L'}$. \begin{prop}[cf. {\cite[Prop. 1.4.1]{nikulin}}]\label{p:overl} The correspondence $L'\leftrightarrow H_{L'}$ is a 1:1 correspondence between overlattices of finite index of $L$ and $q_L$-isotropic subgroups of $A_L$, i.e. subgroups on which the form $q_L$ is 0. Moreover, $H_{L'}^\perp=(L')^*/L$ and $q_{L'}=({q_L}_{|H_{L'}^\perp})/H_{L'}$. \end{prop} \subsection{K3 lattices} \label{sec-K3lattice} Let $X$ be a K3 surface.
It is well--known that $H^2(X,\mathbb{Z})$ is an even unimodular lattice of signature $(3,19)$. As such, it is isometric to the \emph{K3-lattice} $L_{\text{K3}} = U^3\oplus (E_8)^2$. We let \[ S_X = H^2(X,\mathbb{Z}) \cap H^{1,1}(X,\mathbb{C}) \] denote the {\em Picard} lattice of $X$ in $H^2(X,\mathbb{Z})$ and $T_X = S_X^\perp$ denote the {\em transcendental lattice}. Let $\sigma$ be a non-symplectic automorphism of $X$. We let $S_X(\sigma)\subseteq H^2(X,\mathbb{Z})$ denote the $\sigma^\ast$-invariant sublattice of $H^2(X,\mathbb{Z})$: \[S_X(\sigma)=\{x\in H^2(X,\mathbb{Z}): \sigma^*x=x\}.\] One can check that it is a primitive sublattice of $H^2(X,\mathbb{Z})$. In fact, $S_X(\sigma)$ is a primitive sublattice of $S_X$ and in general $S_X(\sigma)\subsetneq S_X$. We let $T_X(\sigma) = S_X(\sigma)^\perp$ denote its orthogonal complement. The signature of $S_X(\sigma)$ is $(1,t)$ for some $t\leq 19$, i.e. $S_X(\sigma)$ is hyperbolic. \section{Mirror symmetry} \label{sec-mirror} \subsection{Mirror symmetry for K3 surfaces} \label{sec-LPK3} Mirror symmetry for a Calabi--Yau manifold $X$ and its mirror $X^\vee$ can be thought of as an exchange of the K\"ahler structure on $X$ for the complex structure of $X^\vee$. Thus a first prediction of mirror symmetry is the rotation of the Hodge diamond: \begin{equation*} H^{p,q}(X,\mathbb{C})\cong H^{N-p,q}(X^\vee,\mathbb{C}) \end{equation*} where $N$ is the dimension of $X$. For K3 surfaces, however, the Hodge diamond is symmetric under the rotation mentioned above. So we need to consider a refinement of this idea. This is accomplished by the notion of lattice polarization. Roughly, we choose a primitive lattice $M\hookrightarrow S_X$, which plays the role of the K\"ahler deformations, and the mirror lattice $M^\vee$, which we now define, plays the role of the complex deformations. We will refer to this formulation of mirror symmetry simply as LPK3 mirror symmetry.
Following \cite{dolgachev}, let $X$ be a K3 surface and suppose that $M$ is a lattice of signature $(1,t)$. If $j\colon M\hookrightarrow S_X$ is a primitive embedding into the Picard lattice of $X$, the pair $(X,j)$ is called an \emph{$M$-polarized K3 surface}. There is a moduli space of $M$--polarized K3 surfaces of dimension $19-t$. We will not be concerned with the embedding. As in \cite{CLPS}, we will call the pair $(X,M)$ an \emph{$M$-polarizable} K3 surface if such an embedding $j$ exists. Note that for an $M$-polarizable K3 surface $(X,M)$, the lattice $M$ naturally embeds primitively into $L_{\text{K3}}$. \begin{defn}\label{mirror_defn} Let $M$ be a primitive sublattice of $L_{\text{K3}}$ of signature $(1,t)$ with $t\leq 18$ such that $M^\perp_{L_{K3}}\cong U\oplus \mirror{M}$. We define $\mirror{M}$ to be (up to isometry) the \emph{mirror lattice} of $M$.\footnote{As in \cite{CLPS}, our definition in this restricted setting is slightly coarser than the one used by Dolgachev in \cite{dolgachev}, since we do not keep track of the embedding $U\hookrightarrow M^\perp$ and instead only consider $M^\vee$ up to isometry.} \end{defn} By Theorem \ref{t:lattice_unique} this definition is independent of the embedding of $M$ into $L_{\text{K3}}$. Furthermore, under some conditions (see e.g.\ Corollary~\ref{c:U+T} and Theorem~\ref{t:lattice_unique}), this definition is also independent of the embedding of $U$ into $M^\perp$. One can check that these conditions are satisfied for the lattices we consider here. Note that $\mirror{M}$ also embeds primitively into $L_{\text{K3}}$ and has signature $(1,18-t)$. Furthermore, $q_M\cong -q_{\mirror{M}}$. One easily checks that $(\mirror{M})^\perp_{L_{\text{K3}}}\cong U\oplus M$.
Given $(X,M)$ an $M$-polarizable K3 surface and $(X',M')$ an $M'$-polarizable K3 surface, with $M$ and $M'$ primitive sublattices of $S_X$ and $S_{X'}$, respectively, we say that $(X,M)$ and $(X',M')$ are \emph{LPK3 mirrors} if $M' = \mirror{M}$ (or equivalently $M=\mirror{(M')}$). Notice that if $M$ has rank $t+1$, then the dimension of the moduli space of $M^\vee$-polarized K3 surfaces is $19-(18-t)=t+1$, which agrees with the rank of $M$. Returning to the question of K\"ahler deformations and complex deformations, we see that this definition of mirror symmetry matches the idea behind rotation of the Hodge diamond, as mentioned earlier. \subsection{Quasihomogeneous polynomials and diagonal symmetries}\label{quasi_sec} We recall a few facts and definitions (cf. \cite{CLPS} for details). A \emph{quasihomogeneous} map of degree $d$ with integer weights $w_1, w_2, \dots, w_n$ is a map $W:\mathbb{C}^n\to \mathbb{C}$ such that for every $\lambda \in \mathbb{C}$, \[ W(\lambda^{w_1}x_1, \lambda^{w_2}x_2, \dots, \lambda^{w_n}x_n) = \lambda^dW(x_1,x_2, \dots, x_n). \] One can assume $\gcd(w_1, w_2, \dots, w_n)=1$ and say $W$ has the \emph{weight system} $(w_1, w_2, \ldots, w_n; d)$. Given a quasihomogeneous polynomial $W:\mathbb{C}^n \rightarrow \mathbb{C}$ with a critical point at the origin, we say it is \emph{non-degenerate} if the origin is the only critical point of $W$ and the fractional weights $\frac{w_1}{d}, \ldots, \frac{w_n}{d}$ of $W$ are uniquely determined by $W$. A non-degenerate quasihomogeneous polynomial $W$ (also called a \emph{potential} in the literature) is \emph{invertible} if it has the same number of monomials as variables. If $W$ is invertible we can rescale variables so that $W = \sum_{i=1}^n \prod_{j=1}^n x_j^{a_{ij}}$. This polynomial can be represented by the square matrix $\A{W} = (a_{ij})$, which we will call the \emph{exponent matrix} of the polynomial. Since $W$ is invertible, the matrix $\A{W}$ is invertible.
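To fix ideas, consider the invertible polynomial $W=x^2+y^4+yz^4+w^{16}$, which has weight system $(8,4,3,1;16)$ and will reappear in Example \ref{ex-meth3}. Ordering the monomials as written, its exponent matrix is
\[
\A{W}=\left(\begin{matrix} 2 &0 &0 &0\\ 0 &4 &0 &0\\ 0 &1 &4 &0\\ 0 &0 &0 &16 \end{matrix}\right),
\]
which is indeed invertible, with $\det(\A{W})=2\cdot 4\cdot 4\cdot 16=512$.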
The \emph{group $G_W$ of diagonal symmetries} of an invertible polynomial $W$ is \begin{equation*} \Gmax{W} = \{(c_1, c_2, \ldots, c_n) \in (\mathbb{C}^*)^n: W(c_1x_1, c_2x_2, \ldots, c_nx_n) = W(x_1, x_2, \ldots, x_n)\}. \end{equation*} Observe that, given $\gamma=(c_1, c_2, \ldots, c_n)\in \Gmax{W}$, the $c_i$'s are roots of unity. Thus one can consider $\Gmax{W}$ as a subgroup of $(\mathbb{Q} / \mathbb{Z})^n$, using additive notation and identifying $(c_1, c_2, \ldots, c_n) = (e^{2 \pi i g_1}, e^{2 \pi i g_2}, \ldots, e^{2 \pi i g_n})$ with $(g_1, g_2, \ldots, g_n)\in(\mathbb{Q} / \mathbb{Z})^n$. Observe that the order of $\Gmax{W}$ is $|\Gmax{W}| = \det(\A{W})$. Since $W$ is quasihomogeneous, the \emph{exponential grading operator} $\J{W} = \left(\frac{w_1}{d}, \frac{w_2}{d}, \ldots, \frac{w_n}{d}\right)$ is contained in $\Gmax{W}$. We denote by $J_W$ the cyclic group of order $d$ generated by $\J{W}$: $J_W=\inn{\J{W}}$. Moreover, each $\gamma=(g_1,\dots,g_n)$ defines a diagonal matrix, and thus $G_W$ is embedded in $\GL_n(\mathbb{C})$. We define $$\SLgp{W}:=\Gmax{W}\cap \SLn{n},$$ i.e.\ $\gamma=(g_1,\dots,g_n)\in \SLgp{W}$ if and only if $\sum_i g_i\in \mathbb{Z}$. The group $\SLgp{W}$ is called the {\em symplectic group} since, by \cite[Proposition 1]{ABS}, an automorphism $\sigma\in \Gmax{W}$ is symplectic if and only if $\det \sigma =1$, that is, if and only if $\sigma\in \SLgp{W}$. \subsection{K3 surfaces from $(W,G)$}\label{K3_sec} Reid (in an unpublished work) and Yonemura \cite{yonemura} have independently compiled a list of the 95 normalized weight systems $(w_1,w_2,w_3,w_4;d)$ (``the 95 families'') such that $\mathbb{P}(w_1,w_2,w_3,w_4)$ admits a quasismooth hypersurface of degree $d$ whose minimal resolution is a K3 surface. We consider one of these weight systems $(w_1,w_2,w_3,w_4;d)$ and an invertible quasihomogeneous polynomial of the form \begin{equation}\label{eq-W} W=x_1^m+f(x_2,x_3,x_4).
\end{equation} Moreover, let $G$ be a group of symmetries such that $J_W\subseteq G \subseteq \SLgp{W}$ and let $\widetilde{G}=G/J_W$. The polynomial $W$ defines a hypersurface $Y_{W,G}\subset \mathbb{P}(w_1,w_2,w_3,w_4)/\widetilde{G}$ and one shows that the minimal resolution $X_{W,G}$ of $Y_{W,G}$ is a K3 surface (see \cite{ABS,CLPS}). The group $\Gmax{W}$ acts on $Y_{W,G}$ via automorphisms, which extend to automorphisms of the K3 surface $X_{W,G}$. The given form of $W$ ensures that the K3 surface $X_{W,G}$ admits a purely non-symplectic automorphism of order $m$: \[ \sigma_m:(x_1:x_2:x_3:x_4)\mapsto (\zeta_mx_1:x_2:x_3:x_4) \] where $\zeta_m$ is a primitive $m$-th root of unity. In additive notation, $\sigma_m=\left(\tfrac 1m,0,0,0\right)$. \subsection{BHK mirror symmetry}\label{BHK_sec} Now we can describe the second relevant formulation of mirror symmetry, which comes from mirror symmetry for Landau--Ginzburg models and which we call BHK (from Berglund--H\"ubsch--Krawitz) mirror symmetry. This particular formulation of mirror symmetry was developed initially by Berglund--H\"ubsch in \cite{berghub}, and later refined by Berglund--Henningson in \cite{berghenn} and Krawitz in \cite{krawitz}. Because of the LG/CY correspondence and a theorem of Chiodo--Ruan \cite{BHCR}, this mirror symmetry of LG models can be translated into mirror symmetry for Calabi--Yau varieties (or orbifolds). We consider a pair $(W,G)$ with $W$ invertible, $W=\sum_{i=1}^n \prod_{j=1}^n x_j^{a_{ij}}$, and define another pair $(W^T,G^T)$, called the BHK mirror. We first define the polynomial $W^T$ as $$W^T = \sum_{i=1}^n \prod_{j=1}^n x_j^{a_{ji}},$$ i.e.\ the matrix of exponents of $W^T$ is $A_W^T$. By the classification of invertible polynomials (cf. \cite[Theorem 1]{KrSk}), $W^T$ is invertible.
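For example, if $W=x^2+y^3+z^9+yw^{12}$ (the polynomial of Example \ref{ex-method1} below), then transposing the exponent matrix gives
\[
\A{W}=\left(\begin{matrix} 2 &0 &0 &0\\ 0 &3 &0 &0\\ 0 &0 &9 &0\\ 0 &1 &0 &12 \end{matrix}\right),\qquad
\A{W}^T=\left(\begin{matrix} 2 &0 &0 &0\\ 0 &3 &0 &1\\ 0 &0 &9 &0\\ 0 &0 &0 &12 \end{matrix}\right),
\]
so that $W^T=x^2+y^3w+z^9+w^{12}$, the polynomial of Example \ref{ex-method2}.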
Next, using additive notation, one defines the dual group $G^T$ of $G$ as \begin{equation}\label{dualG_def} G^T = \set{\; g \in \Gmax{W^T} \; | \;\;g\A{W} h^T \in \mathbb{Z} \text{ for all } h \in G \; }. \end{equation} The following useful properties of the dual group can be found in \cite[Proposition 3]{ABS}: \begin{prop}[cf. {\cite[Proposition 3]{ABS}}] Given $G$ and $G^T$ as before, one has: \begin{enumerate} \item $(G^T)^T = G$. \item If $G_1\subset G_2$, then $G_2^T\subset G_1^T$ and $G_2/G_1\cong G_1^T/G_2^T$. \item $(\Gmax{W})^T = \{0\}$, $(\{0\})^T = \Gmax{W^T}$. \item $(J_W)^T=\SLgp{W^T}$. In particular, if $J_W\subset G$, then $G^T\subset \SLgp{W^T}$. \end{enumerate} \end{prop} Given the pair $(W,G)$ with $W$ invertible with respect to one of the 95 weight systems, we have associated to it the K3 surface $X_{W,G}$. One can check that in this case the weight system of $W^T$ also belongs to the 95 families. By the previous result, $J_{W^T}\subseteq G^T\subseteq \SL_{W^T}$, so that $X_{W^T,G^T}$ is again a K3 surface. We call $X_{W^T,G^T}$ the \emph{BHK mirror of $X_{W,G}$}. \subsection{Main theorem}\label{main_sec} We have described two kinds of mirror symmetry for K3 surfaces: LPK3 mirror symmetry and BHK mirror symmetry. Since mirror symmetry describes a single physical phenomenon, we expect the two constructions to be compatible in situations where both apply. We will now state our main theorem, which shows that BHK and LPK3 mirror symmetry agree for the K3 surfaces $X_{W,G}$ when $W$ is of the form \eqref{eq-W}. When no confusion arises, we will denote the mirror K3 surfaces $X_{W,G}$ and $X_{W^T,G^T}$ simply by $X$ and $X^T$.
Consider the data $(W,G,\sigma_m)$, where \begin{itemize} \item $W$ is an invertible polynomial of the form \eqref{eq-W} whose weight system belongs to the 95 families of Reid and Yonemura, \item $\sigma_m=(\frac 1m,0,0,0)$ is the non-symplectic automorphism of order $m$, \item $G$ is a group of diagonal symmetries of $W$ such that $J_W\subseteq G\subseteq \SL_W$.\end{itemize} By Section \ref{sec-K3lattice}, the invariant lattice $S_X(\sigma_m)$ is a primitive sublattice of $S_X$ and $(X_{W,G}, S_X(\sigma_m))$ is an $S_X(\sigma_m)$--polarizable K3 surface. Let $r$ be the rank of $S_X(\sigma_m)$. The BHK mirror is given by $(W^T,G^T,\sigma_m^T)$, where $\sigma_m^T$ is the non-symplectic automorphism of order $m$ on $X_{W^T,G^T}$. Notice that $\sigma_m$ and $\sigma_m^T$ have the same form, namely $(\tfrac{1}{m},0,0,0)$, but they act on different surfaces. \begin{thm}\label{t:main_thm} Suppose $m\neq 4,8,12$. If $W$ is a polynomial of the form \eqref{eq-W}, quasihomogeneous with respect to one of the 95 weight systems for K3 surfaces as in Section~\ref{K3_sec}, and $G$ is a group of diagonal symmetries satisfying $J_W\subseteq G\subseteq\SL_W$, then $\left(X_{W^T,G^T}, S_{X^T}(\sigma_m^T)\right)$ is an LPK3 mirror of $\left(X_{W,G}, S_X(\sigma_m)\right)$. \end{thm} The theorem is proved by showing that $$S_X(\sigma_m)^\vee\cong S_{X^T}(\sigma_m^T).$$ As we have seen in Section \ref{s:lattice}, this amounts to checking that the invariants $(r,q_{S_X(\sigma_m)})$ for $X_{W,G}$ and $(r^T,q_{S_{X^T}(\sigma_m^T)})$ for $X_{W^T,G^T}$ satisfy $r=20-r^T$ and $q_{S_X(\sigma_m)}\cong -q_{S_{X^T}(\sigma_m^T)}$. Thus the heart of the proof is determining $q_{S_X(\sigma_m)}$ (or, equivalently in our case, $S_X(\sigma_m)$). In the following section, we will describe how this is done. It involves computing the invariant lattice and its overlattices. We list the results in tables in Section \ref{sec:tables}.
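For instance, for the pair of BHK mirrors treated in Examples \ref{ex-method1} and \ref{ex-method2} below, one finds
\[
(r,q_{S_X(\sigma_9)})=(4,w_{3,1}^{1}),\qquad (r^T,q_{S_{X^T}(\sigma_9^T)})=(16,w_{3,1}^{-1}),
\]
and indeed $4=20-16$ and $w_{3,1}^{1}\cong -w_{3,1}^{-1}$.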
Unfortunately, our method does not work for $m=4,8,12$: due to the presence of many overlattices, we cannot pinpoint the invariant lattice exactly. \section{Methods} \label{sec-method} In the setting of Theorem \ref{t:main_thm}, one has to show that $S_X(\sigma_m)^\vee\cong S_{X^T}(\sigma_m^T)$. When $m=p$ is a prime number, Theorem \ref{t:main_thm} was proved using a similar method in \cite{ABS} for $m=2$ and in \cite{CLPS} for the other primes. There is no general method of proof in either article; instead, the theorem is checked in every case. In \cite{ABS} and \cite{CLPS}, several tools are introduced in order to facilitate the computation of the invariant lattice. The proof we give here follows roughly the same idea; however, the methods used in the previous articles for computing $S_X(\sigma_m)$ are no longer valid when $m$ is not prime. In order to illustrate the differences, we briefly highlight the method used in the case of $p$ prime. Then we will describe the proof of the theorem in the case where $m$ is not prime. \subsection{Method for $m=p$ prime}\label{s:methodsprime} As mentioned, the argument given in \cite{ABS} and \cite{CLPS} essentially boils down to determining the invariant lattice $S_X(\sigma_p)$ for $X_{W,G}$. The method for determining this lattice relies on the following powerful theorems. \begin{thm}[\cite{other_primes}] Given a K3 surface with a non-symplectic automorphism $\sigma$ of prime order $p$, the invariant lattice $S_X(\sigma)$ is $p$--elementary, i.e.\ $A_{S_X(\sigma)}\cong (\mathbb{Z}/p\mathbb{Z})^a$. \end{thm} \begin{thm}[\cite{RS,nikulin}] For a prime $p\neq 2$, a hyperbolic $p$--elementary lattice $L$ of rank $r\geq 2$ is completely determined by the invariants $(r,a)$, where $a$ is the length of $A_L$. An indefinite 2--elementary lattice is determined by the invariants $(r,a,\delta)$, where $\delta\in\{0,1\}$ and $\delta=0$ if the discriminant quadratic form takes values 0 or 1 only, and $\delta=1$ otherwise.
\end{thm} By Proposition \ref{p:orth}, the orthogonal complement in $L_{K3}$ of a $p$--elementary lattice with invariants $(r,a)$ is a $p$--elementary lattice with invariants $(22-r,a)$. Moreover, a 2--elementary lattice and its orthogonal complement have the same third invariant $\delta$. In the setting of Theorem \ref{t:main_thm}, Corollary \ref{c:U+T} shows that we also have $S_X(\sigma_p)^\perp\cong U\oplus M$, where $M$ is a hyperbolic $p$--elementary lattice with invariants $(20-r,a)$. Thus for $p\neq 2$ it is enough to verify that $(r,a)$ for $X_{W,G}$ and $(r^T,a_T)$ for $X_{W^T,G^T}$ satisfy $r=20-r^T$ and $a=a_T$. For $p=2$ the third invariant $\delta$ must also be compared, which the authors checked in \cite{ABS}. In order to compare $(r,a)$, we first look at the topology of the fixed point locus. \begin{thm}[cf. \cite{nikulin2, other_primes}]\label{summary_invariants} Let $X$ be a K3 surface with a non-symplectic automorphism $\sigma$ of prime order $p$. If $p\neq 2$, then the fixed locus $X^\sigma$ is nonempty, and consists either of isolated points or of a disjoint union of smooth curves and isolated points of the following form: \begin{equation} X^\sigma=C\cup R_1\cup\ldots\cup R_k\cup \{p_1,\ldots,p_n\}\label{e:fixedlocus}. \end{equation} Here $C$ is a curve of genus $g\geq 0$, the $R_i$ are rational curves and the $p_i$ are isolated points. If $p=2$, then the fixed locus is either empty, the disjoint union of two elliptic curves, or of the form \eqref{e:fixedlocus} with $n=0$. \end{thm} In \cite{ABS} and \cite{CLPS}, $\sigma_p$ always fixes a curve. Furthermore, the case of two elliptic curves does not appear in the setting described there. Therefore, the fixed locus is determined by the invariants $(g,k,n)$. In \cite{other_primes}, the authors give formulas to calculate $(r,a)$ given $(g,k,n)$ for each prime $p$ (see \cite[Theorem 0.1]{other_primes}).
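For instance, when $p=2$ and the fixed locus is of the form \eqref{e:fixedlocus} with $n=0$, these formulas amount to the classical relations (cf. \cite{nikulin2})
\[
g=\frac{22-r-a}{2},\qquad k=\frac{r-a}{2},
\]
which can be inverted to give $r=11+k-g$ and $a=11-k-g$; thus $(g,k)$ immediately determines $(r,a)$ in this case.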
Thus, in order to prove Theorem \ref{t:main_thm} for $p$ prime, one first computes the invariants $(g,k,n)$, and from them computes the invariants $(r,a)$ (if $p=2$, additional computations are required to obtain $\delta$). Then one compares the invariants for BHK mirrors as described above. \input{methodnotprime} \subsection{Proof of main Theorem} We now provide the details of the proof of Theorem \ref{t:main_thm}. For each K3 surface $X_{W,G}$ we have the invariants $(r,q_{S_X(\sigma_m)})$ of the invariant lattice $S_X(\sigma_m)$, as discussed in the previous section. We know the invariant lattice has signature $(1,r-1)$, and so the orthogonal complement has signature $(2,20-r)$. We check that the conditions of Corollary \ref{c:U+T} are fulfilled, so that $S_X(\sigma_m)^\perp_{L_{K3}}\cong U\oplus M$ for some lattice $M$.\footnote{In one case, namely $m=6$, $r=1$, the conditions are not fulfilled. However, in this case, we know the lattice $U\oplus\langle4\rangle$ has the given invariants, and by Corollary~\ref{c:latunique} it is the only such lattice.} This lattice $M$ is hyperbolic with signature $(1,19-r)$ and has discriminant quadratic form $-q_{S_X(\sigma_m)}$. One can see from the tables that the conditions of Theorem~\ref{t:lattice_unique} are satisfied. Hence, there is exactly one lattice with these invariants. To prove the theorem, therefore, we simply need to check that $(20-r, -q_{S_X(\sigma_m)})$ are the invariants of the invariant lattice $S_{X^T}(\sigma_m^T)$ for $X_{W^T,G^T}$. This can be checked by consulting the tables in Section \ref{sec:tables}, which concludes the proof. The tables contain all possible invertible polynomials of the form \eqref{eq-W} with a non--symplectic automorphism of order $m$, and for each polynomial we list the orders of the possible groups $G/J_W$ satisfying $J_W\subset G\subset \SL_W$. In most cases $\SL_W/J_W$ is cyclic, so the properties of $G^T$ make it clear what the dual group is.
However, in two examples one can see from the multiplicity of subgroups of the same order that the group $\SL_W/J_W$ is not cyclic. These two examples are $x^3+y^3+z^6+w^6$ (number 3d) and $x^2+y^4+z^6+w^{12}$ (number 8d) in Table \ref{tab-6}. In these cases, we will clear up any ambiguities in the following sections. From now on we make a change of notation from $(x_1,x_2,x_3,x_4)$ to $(x,y,z,w)$ for the variables of $W$, so that the variables are arranged with the weights in nonincreasing order. In other words, it is possible that $x_1$ corresponds to any of $x,y,z$, or $w$. This convention is also used in the tables of Section \ref{sec:tables}. \begin{rem} It is possible that a given K3 surface admits purely non--symplectic automorphisms of several different orders. It turns out that it does not matter which automorphism one uses to exhibit LPK3 mirror symmetry: the notion still agrees with BHK mirror symmetry, in the sense of Theorem~\ref{t:main_thm}, as long as the defining polynomial is of the proper form \eqref{eq-W}. We expect that the theorem still holds for K3 surfaces that do not take the form \eqref{eq-W}, but that is a topic for further investigation. \end{rem} \section{Examples} \label{sec-ex} In this section we will first give examples to illustrate each of the methods used to determine $S_X(\sigma_m)$. Then we will describe the subgroups in the two cases where $\SL_W/J_W$ is not cyclic. \begin{ex}\label{ex-method1}Method I: This first example will illustrate Method I for determining $S_X(\sigma_m)$. Let us consider the K3 surface with equation \[ W=x^2+y^3+z^9+yw^{12}=0 \] in the weighted projective space $\mathbb{P}(9,6,2,1)$, with $W$ of degree 18. This is number 12b in Table~\ref{tab-9}. There are two non--symplectic automorphisms of interest, $\sigma_2:(x,y,z,w)\mapsto(-x,y,z,w)$ and $\sigma_9:(x,y,z,w)\mapsto(x,y,\mu_9z,w)$, where $\mu_9$ is a primitive 9th root of unity. The invariant lattice $S_X(\sigma_2)$ was dealt with in \cite{ABS}, so we focus on $\sigma_9$.
Here $|G_W|=36\cdot 18$, $|J_W|=18$, and the weight system for the BHK mirror is $(18,11,4,3;36)$, so that $[G_{W}:\SL_W]=36$. Thus $|\SL_W/J_W|=1$. Looking at the action of $\CC^{\ast}$ on the weighted projective space $\mathbb{P}(9,6,2,1)$, we find the following isotropy: \begin{align*} \mu_3 &: \text{fixes }z=w=0, x^2+y^3=0\\ \mu_2 &: \text{fixes }x=w=0, y^3+z^9=0. \end{align*} The first row provides a single point with $\Zfin{3}$ isotropy (an $A_2$ singularity), and the second provides three points, each with $\Zfin{2}$ isotropy (three $A_1$'s). Their resolution gives the configuration of curves on $X_{W,G}$ depicted in Figure \ref{fig:res}. In this depiction, we have not indicated the three intersection points between $C_x$ and $C_z$. \begin{figure} \caption{Resolution of singularities on $X_W$} \label{fig:res} \end{figure} The set $\mathcal{E}$ consists of these five exceptional curves. Denote by $E_1$, $E_2$ and $E_3$ the three $A_1$ fibers, and by $E_4$ and $E_5$ the two curves in the $A_2$ fiber. Looking at the form of $W$, we see that the curves $C_x=\{x=0\}$, $C_z=\{z=0\}$ and $C_w=\{w=0\}$ are smooth, while the curve $C_y=\{y=0\}$ is not smooth. Thus the set $\mathcal{K}$ consists of the three smooth curves. The curve $C_x$ has genus 7, $C_z$ has genus 1, and $C_w$ has genus 0. There are two important representatives of the coset $\sigma_9J_W$ in $G_W$ which will help us compute the fixed locus of $\sigma_9$, namely $(0,0,\tfrac 19,0)$ and $(0,\tfrac 23,0,\tfrac 49 )$. These representatives show us that the curve $C_z=\{z=0\}$ is fixed and that the point defined by $\{w=y=0, x^2+z^9=0\}$ is also fixed. Because $C_z$ is fixed, $E_4$ and $E_5$ are invariant (though not pointwise fixed). The intersection point of $C_w$ with the $A_2$ exceptional fiber and this latter point are the only fixed points on $C_w$. Thus the three $A_1$ exceptional curves are permuted by the action. By Lemma \ref{l:rank}, since there are three orbits, $S_X(\sigma_9)$ has rank 4.
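The group orders quoted at the beginning of this example can be verified directly from the exponent matrix:
\[
|G_W|=\det(\A{W})=2\cdot 3\cdot 9\cdot 12=648=36\cdot 18,\qquad
|\SL_W|=\frac{|G_W|}{[G_{W}:\SL_W]}=\frac{648}{36}=18=|J_W|,
\]
so that $\SL_W=J_W$, i.e.\ $|\SL_W/J_W|=1$.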
We now compute the lattice $L_\mathcal{B}$ generated by $\mathcal{B}=\set{E_1+E_2+E_3, E_4,E_5, C_x,C_z,C_w}$. Since there are six generators and the rank is 4, two of them are redundant, for example $C_x$ and $C_z$. Consider the lattice $L$ generated by $E_1+E_2+E_3$, $E_4$, $E_5$, and $C_w$. This lattice has bilinear form \[ \left(\begin{matrix} -6 &0 &0 &3\\ 0 &-2 &1 &0\\ 0 &1 &-2 &1\\ 3 &0 &1 &-2\\ \end{matrix}\right) \] of determinant $-3$, which has discriminant form $w_{3,1}^1$. By Proposition \ref{p:overl}, there are no non--trivial even overlattices of this lattice; hence $L=L_\mathcal{B}=S_X(\sigma_9)$. Thus we have the invariants $(r,q_{S_X(\sigma_9)})=(4,w_{3,1}^{1})$. In fact, $S_X(\sigma_9)\cong U\oplus A_2$. \end{ex} \begin{rem} This method also yields some other interesting facts regarding the Picard lattice of these surfaces. In \cite{belcastro}, Belcastro computes the Picard lattice of a generic hypersurface with these weights and degree to be $U\oplus D_4$. However, if we look at the non--symplectic automorphism $\sigma_9^3$, we can compute the invariants $g=1,n=4,k=1$, and therefore $r=10, a=4$ for the invariant lattice, giving us the invariant lattice $S_X(\sigma_9^3)=U\oplus A_2 \oplus E_6$. This shows in particular that the Picard lattice of this surface is bigger than the Picard lattice of a generic quasihomogeneous polynomial with these weights. \end{rem} \begin{ex}\label{ex-method2}Method II: In order to illustrate Method II, we repeat the computations for the BHK mirror of the previous example: \[ W^T=x^2+y^3w+z^9+w^{12} \] with weight system $(18,11,4,3;36)$. This is number 43a in Table~\ref{tab-9}. Here again $|\SL_{W^T}/J_{W^T}|=1$. As before, we also have an involution, but we consider only $\sigma_9^T$.
Looking at the action of $\CC^{\ast}$ on $\mathbb{C}^4$ and resolving the singularities, we have an $A_{10}$ given by resolving the point $(0,1,0,0)$, two $A_2$'s coming from the two points with $y=z=0$ fixed by $\mu_3$, and an $A_1$ coming from the point with $y=w=0$ fixed by $\mu_2$. This time $\mathcal{E}$ has 15 curves and $\mathcal{K}=\set{C_x,C_y,C_z}$, as in Figure~\ref{fig:res9}. Again we do not depict the three points of intersection between $C_x$ and $C_y$. \begin{figure} \caption{Resolution of singularities on $X_{W^T}$} \label{fig:res9} \end{figure} Two relevant representatives of the coset $\sigma_9^TJ_{W^T}$ in $G_{W^T}$ are $(0,0,\tfrac 19,0)$ and $(0,\tfrac 49,0, \tfrac 23)$. From these we see that the curve $C_z$ is fixed. It has genus 0. Furthermore, the exceptional curve from the $A_1$ singularity at $y=w=0$ is pointwise fixed, as is one of the curves in the exceptional $A_{10}$. Since $C_z$ is fixed, each of the exceptional curves is left invariant under $\sigma_9^T$. From Lemma \ref{l:rank}, the rank of the invariant lattice $S_{X^T}(\sigma_9^T)$ is $r=16$. In this case, we compute the invariant lattice of $(\sigma_9^T)^3$, which is a non--symplectic automorphism of order 3. The curves $C_z$ and $C_y$ are fixed; both have genus zero. Three of the curves in the $A_{10}$ chain are also fixed, as in Lemma~\ref{l:rationaltree}. Furthermore, the remaining intersection points of the chains of exceptional curves are fixed, as well as an additional point on the $A_1$. Thus the invariants are $(g,n,k)=(1,4,7)$. Using the results cited in Section \ref{s:methodsprime}, the invariants of the 3--elementary lattice $S_{X^T}((\sigma_9^T)^3)$ are $(16,1)$. Since $S_{X^T}(\sigma_9^T)$ is a primitive sublattice of this 3--elementary lattice, and both have the same rank, they are equal. Therefore we have invariants $(r,q)=(16, w_{3,1}^{-1})$ and the lattice is $S_{X^T}(\sigma_9^T)=U\oplus E_6\oplus E_8$.
\end{ex} Comparing the ranks, and noticing that $w_{3,1}^1=-w_{3,1}^{-1}$, we see that BHK and LPK3 mirror symmetry agree in this example. \begin{ex} Method III:\label{ex-meth3} Let $W:=x^2+y^4+yz^4+w^{16}$ with $m=16$ in the weight system $(8,4,3,1;16)$. This is number 37b in Table \ref{tab-16}. The order of $\SL_W/J_W$ is 2. This appears to be the same K3 surface investigated in \cite[Example~3.2]{order_sixteen}. Computing singularities, we obtain an $A_2$ at the point $(0:0:1:0)$ and two $A_3$'s at the two points with $z=w=0$. Resolving these, we obtain the configuration of curves shown in Figure \ref{fig:resIII}. The curves $C_x$ and $C_z$ intersect in four points, which are not depicted. \begin{figure} \caption{Resolution of singularities for $X_W$} \label{fig:resIII} \end{figure} The genus of the curve $C_w$ is 0, the genus of $C_z$ is 1, and the genus of $C_x$ is 6. However, $C_y$ consists of two components, each a copy of $\mathbb{P}^1$. The automorphism $\sigma_{16}=(0,0,0,\tfrac{1}{16})$ fixes $C_w$, and therefore leaves all of the exceptional curves invariant. Thus we have $|\mathcal{E}/\sigma_{16}|=8$, and $r=9$. Furthermore, $\mathcal{K}$ consists of the curves $C_w$, $C_z$ and the two curves that make up $C_y$. Using an explicit form of the intersection matrix, one can check that the lattice $L_\mathcal{B}$ is actually generated by the exceptional curves and $C_w$; one sees that it is a lattice of type $T_{3,4,4}$. The discriminant group of $T_{3,4,4}$ is $\mathbb Z/8\mathbb Z$ and the corresponding form $q$ is $w^{5}_{2,3}$. This form admits one non-trivial isotropic subgroup, so $L_\mathcal{B}$ has one overlattice. However, Belcastro has computed the Picard lattice of a general member of the family of K3 surfaces with this weight system to be $T_{3,4,4}$. Thus $L_\mathcal{B}\cong T_{3,4,4}$ embeds primitively into $S_X(\sigma_{16})$, and so they are equal, i.e.\ $S_X(\sigma_{16})=L_\mathcal{B}$ with invariants $(r,q)=(9,w^5_{2,3})$.
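The discriminant group here can also be read off from the well-known determinant formula for the lattices $T_{p,q,r}$, namely $|\det T_{p,q,r}|=pqr\left|\tfrac 1p+\tfrac 1q+\tfrac 1r-1\right|$: in our case
\[
|\det T_{3,4,4}|=3\cdot 4\cdot 4\cdot\tfrac{1}{6}=8,
\]
consistent with $A_{T_{3,4,4}}\cong\mathbb{Z}/8\mathbb{Z}$. The same formula gives $|\det T_{2,5,6}|=2\cdot 5\cdot 6\cdot\tfrac{2}{15}=8$ for the lattice appearing in Example \ref{ex-meth41}.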
\end{ex} \begin{rem} There is another case with the same invariant lattice in the same weight system, namely number 37a. The reasoning is similar to what we have just outlined. \end{rem} Finally, we will describe both of the cases requiring what we have called Method IV. These two cases are similar in that we use the Picard lattice to help determine $S_X(\sigma_{m})$. We will need the following proposition. \begin{prop}[{\cite[Prop. 1.15.1]{nikulin}}]\label{p:primembedding} The primitive embeddings of a lattice $L$ of signature $(t_+,t_-)$ into an even lattice with invariants $(m_+,m_-,q)$ are determined by the sets $(H_L, H_q,\gamma;K,\gamma_K)$, where $H_L\subset A_L$ and $H_q\subset A_q$ are subgroups, $\gamma:q_L|_{H_L}\to q|_{H_q}$ is an isomorphism of these subgroups preserving the restrictions of the quadratic forms, $K$ is an even lattice with invariants $(m_+-t_+, m_--t_-, -\delta)$, where $\delta\cong (q_L\oplus (-q))|_{\Gamma_\gamma^\perp/\Gamma_\gamma}$, $\Gamma_\gamma$ being the pushout of $\gamma$ in $A_L\oplus A_q$, and, finally, $\gamma_K:q_K\to (-\delta)$ is an isomorphism of quadratic forms. \end{prop} From this proposition, we can determine all primitive embeddings of one even lattice into another. We will use this in the next example. \begin{ex}Method IV:\label{ex-meth41} We now consider the BHK dual of the previous example. This is the first entry for 37b in Table~\ref{tab-16}. As mentioned in the introduction, this provides an example for the case in \cite{order_sixteen} where no example could be found. In this case, we have \[ W^T=W=x^2+y^4+yz^4+w^{16}, \] and $G^T=\SL_W$, and we know $|\SL_W/J_W|=2$. In fact, the group $\SL_W/J_W$ is generated by $(\tfrac{1}{2},0, \tfrac{1}{2},0)$, so we see that the points with $x=z=0$ are fixed. Another representative in the same coset is $(\tfrac{1}{2},0, 0, \tfrac{1}{2})$. Thus we see that the intersection points on the $A_2$ chain from the previous example are fixed, as well as another point with $x=w=0$.
The two $A_3$ chains are permuted by the action. Thus on $X_{W^T,G^T}$ we get the configuration of curves of Figure \ref{fig:res16}. \begin{figure} \caption{Resolution of singularities on $X_{W^T,G^T}$} \label{fig:res16} \end{figure} Using the Riemann--Hurwitz theorem, we can compute the genus of the coordinate curves. The curve $C_x$ is covered by a curve of genus 6 with 6 fixed points, so it has genus 2. Similarly, we see that the genus of $C_w$ is 0 and the genus of $C_z$ is 0. The two components of the curve $\set{y=0}$ from the previous example are permuted, giving us a curve $C_y$ of genus 0. The non--symplectic automorphism $\sigma_{16}=(0,0,0,\tfrac{1}{16})$ fixes $C_w$, and therefore the chains of exceptional curves intersecting $C_w$ are invariant. It is not difficult to see also that the four exceptional curves intersecting $C_x$ and $C_z$ are permuted. Thus $r=11$. One can check that $C_z$, $C_y$ and $C_x$ are superfluous, giving us a lattice $L_\mathcal{B}$ with discriminant form $w^{-5}_{2,3}$. There is one non-trivial isotropic subgroup $H$ and hence one overlattice of $L_\mathcal{B}$. By Proposition~\ref{p:overl} this overlattice has discriminant form $w^{-1}_{2,1}$. Since $S_X(\sigma_{16})$ is an overlattice of $L_\mathcal{B}$, the two possibilities for $S_X(\sigma_{16})$ are $U\oplus E_8\oplus A_1$ or $T_{2,5,6}$. Using Proposition~\ref{p:primembedding}, we will show that $U\oplus E_8\oplus A_1$ does not embed primitively into $S_{X_{W^T,G^T}}$, so that $S_X(\sigma_{16})=T_{2,5,6}$. In \cite{order_sixteen}, Al Tabbaa--Sarti--Taki have computed the Picard lattice of K3 surfaces with a non--symplectic automorphism of order 16, and found that in our case the Picard lattice is $U(2)\oplus D_4\oplus E_8$. This lattice is 2--elementary with discriminant quadratic form $u\oplus v$. In particular, this quadratic form takes only the values 0 and 1 (i.e.\ $\delta=0$). On the other hand, $w_{2,1}^{-1}$ takes the value $\tfrac{3}{2}$ on the generator of $\mathbb{Z}/2\mathbb{Z}$.
By Proposition~\ref{p:primembedding}, a primitive embedding of $U\oplus E_8\oplus A_1$ into the Picard lattice $U(2)\oplus D_4\oplus E_8$ must therefore correspond to the trivial subgroup. The existence of such a primitive embedding then depends on the existence of an even lattice with invariants $(0,3,u\oplus v\oplus w_{2,1}^1)$. The length of this discriminant quadratic form is 5, whereas the rank of the desired lattice is 3, and so no such lattice exists (see \cite[Theorem 1.10.1]{nikulin}). We conclude that the invariant lattice is $S_X(\sigma_{16})\cong T_{2,5,6}$, which has invariants $(11, w_{2,3}^{-5})$. \end{ex} \begin{rem} The other case with $m=16$, $r=11$ is number 58 in Table~\ref{tab-16}. The method for computing the invariant lattice in that case is very similar to what we have just done. Alternatively, that case can also be handled with Method III. \end{rem} \begin{ex}Method IV:\label{ex-meth42} The other case that requires Method IV is $m=9$, $r=12$. This occurs for two of the K3 surfaces, namely 18a and 18b, both instances using the group $\SL_W$. Both of these cases are similar, so we describe only the first. Using methods similar to those described in the previous examples, we get the configuration of curves depicted in Figure~\ref{fig:resoIV}. \begin{figure} \caption{Resolution of curves on $X_{W,G}$} \label{fig:resoIV} \end{figure} For the discussion that follows, we denote by $E_1$ the exceptional curve in the $A_5$ chain which intersects $C_x$. The automorphism $\sigma_9$ permutes the three $A_2$ chains (yielding one orbit for each curve in the chain, for a total of 2 orbits), but leaves the other nine exceptional curves, as well as the coordinate curves, invariant. This gives us $r=12$, and we can compute the lattice $L_\mathcal{B}\cong M_9\oplus A_2\oplus E_8$ with discriminant quadratic form $w_{3,1}^1\oplus w_{3,2}^1$.
There is one isotropic subgroup of this lattice, corresponding to the overlattice $U\oplus A_2\oplus E_8$. So we must find some way to show that $L_\mathcal{B}$ is primitively embedded in the Picard lattice, for then it is the invariant lattice. We can determine the Picard lattice $S_{X_{W,G}}$. We first notice that $\sigma_9^3$ has order 3. Furthermore, its fixed locus has invariants $(g,n,k)=(0,3,7)$. Therefore its invariant lattice $S_X(\sigma_9^3)$ is a 3-elementary lattice with invariants $(16,3)$, i.e.\ it is the lattice $U\oplus E_8\oplus 3A_2$, with discriminant quadratic form $3\omega_{3,1}^{1}$. Since the transcendental lattice $S_{X_{W,G}}^\perp$ has rank divisible by $\phi(9)=6$, $S_X(\sigma_9^3)$ is the Picard lattice. In fact, we will determine a basis for $S_{X_{W,G}}$. Consider the set $\mathcal{E}_1$ consisting of all of the exceptional curves (15 of them), and $\mathcal{K}$ consisting of all irreducible components of (the strict transforms of) the coordinate curves. Let $\mathcal{B}_1=\mathcal{E}_1\cup\mathcal{K}$. This set generates $S_{X_{W,G}}$. One can check by direct computation (\cite{magma}) that $C_x$ and $E_1$ are redundant. Now we consider the set $\mathcal{B}$, generating the lattice $L_\mathcal{B}$. Again, we compute that $C_x$ and $E_1$ are redundant, so we get $L_\mathcal{B}$ generated by the two orbits from the $A_2$ chains, the remaining exceptional curves, and $C_w$ and $C_z$. Two of the generators for $L_\mathcal{B}$ are just sums of generators of $S_{X_{W,G}}$. Thus a change of basis shows that $S_X/L_\mathcal{B}$ is a free group of rank 4, and so $L_\mathcal{B}$ is primitively embedded. \end{ex} The other example is similar. Instead of three $A_2$ chains, there are three $A_1$'s. One can check that the set $\set{y=0}$ is composed of three curves, each of genus zero. Each of these curves intersects one of the $A_1$ curves. These are permuted by the action of $\sigma_9$.
Up to a relabelling, we obtain the same configuration of curves, and the same Picard lattice. \label{ex-noncyclic} The only cases where $\SL_W/J_W$ is not cyclic are $x^3+y^3+z^6+w^6$ (number 3d) and $x^2+y^4+z^6+w^{12}$ (number 8d) in Table~\ref{tab-6}. We analyze them separately. \begin{ex} The first polynomial we consider is $W=x^3+y^3+z^6+w^6$ in $\mathbb{P}(2,2,1,1)$. This is number 3d in Table~\ref{tab-6}. In this case the order of $\SL_W/J_W$ is 9, and it follows that $\SL_W/J_W=\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}$ since there are no elements of order 9. The group $J_W$ is generated by $j_W=(\frac13,\frac13,\frac16,\frac16)$, and two generators for $\SL_W/J_W$ are $g_1=(\frac13,\frac23,0,0)$ and $g_2=(\frac13,\frac13,\frac13,0)$. We also name $g_3=(\frac13,0,\frac23,0)$ and $g_4=(0,\frac13,0,\frac23)$. There are four subgroups of $\SL_W/J_W$ of order 3, namely $G_i=<g_i,\J{W}>, i=1,2,3,4$. We can also observe that $G_1^T=G_2$, $G_3^T=G_3$ and $G_4^T=G_4$. Now we consider the non-symplectic automorphism $\sigma_{6}=(0,0,0,\tfrac{1}{6})$. One may notice that there is another automorphism of order 6, namely $(0,0,\tfrac{1}{6},0)$, but due to symmetry (i.e.\ exchanging $z$ and $w$) we need only consider one of them. Using the same methods as before for each group, we can compute the invariant lattice for the corresponding K3 surface. In each case, the lattice $L_\mathcal{B}$ has no overlattices, so we can use Method I. When $G=G_3$, the invariant lattice has $(r,q)=(10,4w_{3,1}^1)$, which is self-dual. The same is true for $G=G_4$. When $G=G_1$ we get an invariant lattice with rank 16 whose discriminant form is $v\oplus w_{3,1}^1$. This is the dual of the invariant lattice we get with the choice $G=G_2$, and so the theorem is proved in this case. \end{ex} \begin{ex} Finally, we examine the polynomial $W=x^2+y^4+z^6+w^{12}$ in $\mathbb{P}(6,3,2,1)$. This is number 8d in Table~\ref{tab-6}.
There are non--symplectic automorphisms of order 2, 4 and 12, but we again focus on the non-symplectic automorphism of order 6: $\sigma_6=(0,0,\tfrac{1}{6},0)$. The order of $\SL_W/J_W$ is 4 and, since there are no elements of order 4, we conclude that $\SL_W/J_W=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$. The elements \[g_1=\left(\frac12,0,\frac12,0\right),\ g_2=\left(0,\frac12,\frac12,0\right),\ g_3=\left(\frac12,\frac12,0,0\right)\] each have order 2 and represent different cosets in $\SL_W/J_W$; let $G_i:=<g_i,\J{W}>,i=1,2,3$. Observe that $G_1^T=G_1$ while $G_2^T=G_3$. When $G=G_1$, with Method I we compute the invariant lattice and obtain $(r,q)=(10,v\oplus 4w_{2,1}^{-1})$. As for $G_2$, we again use Method I and obtain $(r,q)=(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$, while for $G_3$ we get $(r,q)=(6,2w_{2,1}^{1}\oplus w_{3,1}^{-1})$. Observing that these are mirror to each other, we conclude that the theorem is proved in this case. \end{ex} \section{Tables} \label{sec:tables} \footnotesize{ In each table, we have arranged the surfaces by weight system. Each weight system is listed by the number assigned to it by Yonemura in \cite{yonemura}. For each weight system, we have listed all possible invertible polynomials of the form \eqref{eq-W} with a non--symplectic automorphism of order $m$, and for each polynomial we list the orders of the possible groups $G/J_W$ satisfying $J_W\subseteq G\subseteq \SL_W$. The invariants $(r,q_{S_X(\sigma_m)})$ are then given, as well as the number of the BHK mirror dual. Finally, we have also indicated which method was used to determine $q_{S_X(\sigma_m)}$. When consulting the tables, it will be helpful to know that $\omega_{5,1}^{\varepsilon}=-\omega_{5,1}^{\varepsilon}$, and that $4\omega_{3,1}^{-1}=4\omega_{3,1}^1$, $4\omega_{2,1}^{-1}=4\omega_{2,1}^1$. The first fact follows simply by definition. The latter two follow from \cite[Theorem 1.8.2]{nikulin}. \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No.
&Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 14& (21,14,6,1;42)& $x^2+y^3+z^7+w^{42}$& 1&1&$(10,<0>)$&14&I\\ \caption{Table for $m=42$} \label{tab-42} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 38& (15,8,6,1;30) &$x^2+y^3z+z^5+w^{30}$& 1&1&$(11,w_{2,1}^{-1})$ &50&I \\ 50& (15,10,4,1;30) &$x^2+y^3+yz^5+w^{30}$& 1&1&$(9,w_{2,1}^{1})$ &38&I \\ \caption{Table for $m=30$} \label{tab-30} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 13a &(12,8,3,1;24) &$x^2+y^3+xz^4+w^{24}$ &1 &1 &$(8,w_{3,1}^{-1})$ &20&I\\ 13b &(12,8,3,1;24) &$x^2+y^3+z^8+w^{24}$ &2 &2 &$(12, w_{3,1}^{1})$ &13b&I\\ & & & &1 &$(8,w_{3,1}^{-1})$ &13b&I\\ 20 &(9,8,6,1;24) &$x^2z+y^3+z^4+w^{24}$ &1 &1 &$(12, w_{3,1}^{1})$ &13a&I\\ \caption{Table for $m=24$} \label{tab-24} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 78 &(11,6,4,1;22) &$x^2+y^3z+yz^4+w^{22}$ &1 &1 &$(10,w_{2,1}^{-1}\oplus w_{2,1}^{1})$ &78&II\\ \caption{Table for $m=22$} \label{tab-22} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. 
&Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 9a &(10,5,4,1;20) &$x^2+xy^2+z^5+w^{20}$ &1 &1 &$(10,w_{5,1}^{-1})$ &9a&I\\ 9b &(10,5,4,1;20) &$x^2+y^4+z^5+w^{20}$ &2 &2 &$(10,w_{5,1}^{-1})$ &9b&I\\ & & & &1 &$(10,w_{5,1}^{-1})$ &9b&I\\ \caption{Table for $m=20$} \label{tab-20} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 12a &(9,6,2,1;18) &$x^2+y^3+yz^6+w^{18}$ &2 &2 &$(11,w_{2,1}^1\oplus w_{3,1}^1)$ &39a&I\\ & & & &1 &$(6,v)$ &39a&I\\ 12b &(9,6,2,1;18) &$x^2+y^3+z^9+w^{18}$ &3 &3 &$(14,v)$ &12b&I\\ & & & &1 &$(6,v)$ &12b&I\\ 39a &(9,5,3,1;18) &$x^2+y^3z+z^6+w^{18}$ &2 &2 &$(14,v)$ &12a&I\\ & & & &1 &$(9,w_{2,1}^{-1}\oplus w_{3,1}^{-1})$ &12a&I\\ 39b &(9,5,3,1;18) &$x^2+y^3z+xz^3+w^{18}$ &1 &1 &$(9,w_{2,1}^{-1}\oplus w_{3,1}^{-1})$ &60&I\\ 60 &(7,6,4,1;18) &$x^2z+y^3+yz^3+w^{18}$ &1 &1 &$(11,w_{2,1}^1\oplus w_{3,1}^1)$ &39b&I\\ \caption{Table for $m=18$} \label{tab-18} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 37a &(8,4,3,1;16) &$x^2+xy^2+yz^4+w^{16}$ &1 &1 &$(9,w_{2,3}^{5})$ &58&III\\ 37b &(8,4,3,1;16) &$x^2+y^4+yz^4+w^{16}$ &2 &2 &$(11,w_{2,3}^{-5})$ &37b&IV\\ & & & &1 &$(9,w_{2,3}^{5})$ &37b&III\\ 58 &(6,5,4,1;16) &$x^2z+xy^2+z^4+w^{16}$ &1 &1 &$(11,w_{2,3}^{-5})$ &37a&IV\\ \caption{Table for $m=16$} \label{tab-16} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. 
&Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 11a &(15,10,3,2;30) &$x^2+y^3+xz^5+w^{15}$ &1 &1 &$(10,w_{3,1}^{-1}\oplus w_{3,1}^{1})$ &22a&II\\ 11b &(15,10,3,2;30) &$x^2+y^3+z^{10}+w^{15}$ &1 &1 &$(10,w_{3,1}^{-1}\oplus w_{3,1}^{1})$ &11b&II\\ 22a &(6,5,3,1;15) &$x^2z+y^3+z^5+w^{15}$ &1 &1 &$(10,w_{3,1}^{-1}\oplus w_{3,1}^{1})$ &11a&II\\ 22b &(6,5,3,1;15) &$x^2z+y^3+xz^3+w^{15}$ &1 &1 &$(10,w_{3,1}^{-1}\oplus w_{3,1}^{1})$ &22b&II\\ \caption{Table for $m=15$} \label{tab-15} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 40a &(7,4,2,1;14) &$x^2+y^3z+z^{7}+w^{14}$ &1 &1 &$(7,v\oplus w_{2,1}^{-1})$ &47 &III\\ 40b &(7,4,2,1;14) &$x^2+y^3z+yz^{5}+w^{14}$ &2 &2 &$(13,v\oplus w_{2,1}^{1})$ &40b&II \\ & & & &1 &$(7,v\oplus w_{2,1}^{-1})$ &40b &III\\ 47 &(21,14,4,3;42) &$x^2+y^3+yz^{7}+w^{14}$ &1 &1 &$(13,v\oplus w_{2,1}^{1})$ &40a&II \\ \caption{Table for $m=14$} \label{tab-14} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. 
&Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 6a &(5,2,2,1;10) &$x^2+y^4z+z^5+w^{10}$ &2 &2 &$(8,w_{5,1}^{-1}\oplus 2w_{2,1}^1)$ &36a&I \\ & & & &1 &$(6,u\oplus v)$ &36a&II \\ 6b &(5,2,2,1;10) &$x^2+y^5+z^5+w^{10}$ &5 &5 &$(14,u\oplus v)$ &6b &II\\ & & & &1 &$(6,u\oplus v)$ &6b &II\\ 6c &(5,2,2,1;10) &$x^2+y^4z+yz^4+w^{10}$ &3 &3 &$(14,u\oplus v)$ &6c &II\\ & & & &1 &$(6,u\oplus v)$ &6c &II\\ 11a &(15,10,3,2;30) &$x^2+y^3+z^{10}+yw^{10}$ &2 &2 &$(17,w_{2,1}^{1})$ &42a&I \\ & & & &1 &$(10,v\oplus v)$ &42a&II \\ 11b &(15,10,3,2;30)&$x^2+y^3+z^{10}+w^{15}$ &1 &1 &$(10,v\oplus v)$ &11b&II \\ 36a &(10,5,3,2;20) &$x^2+y^4+yz^5+w^{10}$ &2 &2 &$(14,u\oplus v)$ &6a &II\\ & & & &1 &$(12,w_{5,1}^{-1}\oplus 2w_{2,1}^{-1})$ &6a&I \\ 36b &(10,5,3,2;20) &$x^2+xy^2+yz^5+w^{10}$ &1 &1 &$(12,w_{5,1}^{-1}\oplus 2w_{2,1}^{-1})$ &63&I \\ 42a &(5,3,1,1;10) &$x^2+y^3w+z^{10}+w^{10}$ &2 &2 &$(10,v\oplus v)$ &11a&II \\ & & & &1 &$(3,w_{2,1}^{-1})$ &11a&I \\ 42b &(5,3,1,1;10) &$x^2+y^3z+xz^5+w^{10}$ &1 &1 &$(3,w_{2,1}^{-1})$ &68&I \\ 42c &(5,3,1,1;10) &$x^2+y^3z+yz^7+w^{10}$ &4 &4 &$(17,w_{2,1}^{1})$ &42c&I \\ & & & &2 &$(10,v\oplus v)$ &42c&II\\ & & & &1 &$(3,w_{2,1}^{-1})$ &42c&I \\ 63 &(4,3,2,1;10) &$x^2z+y^2x+z^5+w^{10}$ &1 &1 &$(8,w_{5,1}^{-1}\oplus 2w_{2,1}^1)$ &36b&I \\ 68 &(13,10,4,3;30) &$x^2z+y^3+yz^5+w^{10}$ &1 &1 &$(17,w_{2,1}^{1})$ &42b&I \\ \caption{Table for $m=10$} \label{tab-10} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. 
&Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 12a &(9,6,2,1;18) &$x^2+y^3+z^9+xw^9$ &3 &3 &$(16,w_{3,1}^{-1})$ &25a&I \\ & & & &1 &$(4,w_{3,1}^1)$ &25a&I \\ 12b &(9,6,2,1;18) &$x^2+y^3+z^9+yw^{12}$ &1 &1 &$(4,w_{3,1}^1)$ &43a&I \\ 12c &(9,6,2,1;18) &$x^2+y^3+z^9+w^{18}$ &3 &3 &$(16,w_{3,1}^{-1})$ &12c&I \\ & & & &1 &$(4,w_{3,1}^1)$ &12c&I \\ 18a &(3,3,2,1;9) &$x^3+y^3+xz^3+w^9$ &3 &3 &$(12,w_{3,1}^1\oplus w_{3,2}^1)$ &18a&IV \\ & & & &1 &$(8,w_{3,1}^{-1}\oplus w_{3,2}^{-1})$ &18a&III \\ 18b &(3,3,2,1;9) &$x^3+xy^2+yz^3+w^9$ &2 &2 &$(12,w_{3,1}^1\oplus w_{3,2}^1)$ &18b&IV \\ & & & &1 &$(8,w_{3,1}^{-1}\oplus w_{3,2}^{-1})$ &18b&III \\ 25a &(4,3,1,1;9) &$x^2w+y^3+z^9+w^9$ &3 &3 &$(16,w_{3,1}^{-1})$ &12a&I \\ & & & &1 &$(4,w_{3,1}^1)$ &12a&I \\ 25b &(4,3,1,1;9) &$x^2w+y^3+z^9+yw^6$ &1 &1 &$(4,w_{3,1}^1)$ &43b&I \\ 25c &(4,3,1,1;9) &$x^2w+y^3+z^9+xw^5$ &3 &3 &$(16,w_{3,1}^{-1})$ &25a&I \\ & & & &1 &$(4,w_{3,1}^1)$ &25a&I \\ 43a &(18,11,4,3;36) &$x^2+y^3w+z^9+w^{12}$ &1 &1 &$(16,w_{3,1}^{-1})$ &12b&I \\ 43b &(18,11,4,3;36) &$x^2+y^3w+z^9+xw^6$ &1 &1 &$(16,w_{3,1}^{-1})$ &25b&I \\ \caption{Table for $m=9$} \label{tab-9} \end{longtable} \begin{longtable}{p{12pt}|c|c|c|c|c|c|c} \multicolumn{7}{c}{}\\ No. &Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\ \hline \midrule \endfirsthead No. 
&Weights&Polynomial&SL/J&G/J&$(r,q)$&BHK dual&Method\\\hline \midrule \endhead 2a &(4,3,3,2;12) &$x^3+y^3z+z^4+w^6$ &3 &3 &$(16,v\oplus w_{3,1}^1)$ &3a&I \\ & & & &1 &$(10,4w_{3,1}^1)$ &3a&II \\ 2b &(4,3,3,2;12) &$x^3+y^3z+yz^3+w^6$ &1 &1 &$(10,4w_{3,1}^1)$ &2b &II\\ 2c &(4,3,3,2;12) &$x^3+y^4+z^4+w^6$ &2 &2 &$(10,4w_{3,1}^1)$ &2c&II \\ & & & &1 &$(10,4w_{3,1}^1)$ &2c&II \\ 3a &(2,2,1,1;6) &$x^3+y^3+yz^4+w^6$ &3 &3 &$(10,4w_{3,1}^1)$ &2a &II\\ & & & &1 &$(4,v\oplus w_{3,1}^{-1})$ &2a &I\\ 3b &(2,2,1,1;6) &$x^2y+y^3+z^6+w^6$ &6 &6 &$(19,w_{2,1}^{-1})$ &5a &I\\ & & & &3 &$(16,v\oplus w_{3,1}^1)$ &5a&I \\ & & & &2 &$(9,3w_{2,1}^{-1}\oplus 2w_{3,1}^{-1})$ &5a&I \\ & & & &1 &$(4,v\oplus w_{3,1}^{-1})$ &5a &I\\ 3c &(2,2,1,1;6) &$x^3+xy^2+yz^4+w^6$ &1 &1 &$(4,v\oplus w_{3,1}^{-1})$ &57 &I\\ 3d &(2,2,1,1;6) &$x^3+y^3+z^6+w^6$ &9 &9 &$(16,v\oplus w_{3,1}^1)$ &3d&I \\ & & & &3 &$(16,v\oplus w_{3,1}^1)$ &3d &I\\ & & & &3 &$(10,4w_{3,1}^1)$ &3d&II \\ & & & &3 &$(10,4w_{3,1}^1)$ &3d&II \\ & & & &3 &$(4,v\oplus w_{3,1}^{-1})$ &3d &I\\ & & & &1 &$(4,v\oplus w_{3,1}^{-1})$ &3d &I\\ 3e &(2,2,1,1;6) &$x^2y+xy^2+z^6+w^6$ &3 &3 &$(16,v\oplus w_{3,1}^1)$ &3e &I\\ & & & &1 &$(4,v\oplus w_{3,1}^{-1})$ &3e &I\\ 5a &(3,1,1,1;6) &$x^2+xy^3+z^6+w^6$ &6 &6 &$(16,v\oplus w_{3,1}^1)$ &3b&I \\ & & & &3 &$(11,3w_{2,1}^1\oplus 2w_{3,1}^{1})$ &3b&I \\ & & & &2 &$(4,v\oplus w_{3,1}^{-1})$ &3b &I\\ & & & &1 &$(1,w_{2,1}^1)$ &3b&I \\ 5b &(3,1,1,1;6) &$x^2+y^5w+z^6+w^6$ &2 &2 &$(8,6w_{2,1}^{-1})$ &29 &II\\ & & & &1 &$(1,w_{2,1}^1)$ &29&I \\ 5c &(3,1,1,1;6) &$x^2+xy^3+yz^5+w^6$ &1 &1 &$(1,w_{2,1}^1)$ &56 &I\\ 5d &(3,1,1,1;6) &$x^2+y^6+z^5w+zw^5$ &8 &8 &$(19,w_{2,1}^{-1})$ &5d &I\\ & & & &4 &$(12,6w_{2,1}^1)$ &5d&II \\ & & & &2 &$(8,6w_{2,1}^{-1})$ &5d &II\\ & & & &1 &$(1,w_{2,1}^1)$ &5d&I \\ 5e &(3,1,1,1;6) &$x^2+y^6+z^6+w^6$ &12 &12 &$(19,w_{2,1}^{-1})$ &5e &I\\ & & & &6 &$(12,6w_{2,1}^1)$ &5e &II\\ & & & &4 &$(9,3w_{2,1}^{-1}\oplus 2w_{3,1}^{-1})$ &5e&I \\ & & & &3 &$(11,3w_{2,1}^1\oplus 
2w_{3,1}^{1})$ &5e &I\\ & & & &2 &$(8,6w_{2,1}^{-1})$ &5e&II \\ & & & &1 &$(1,w_{2,1}^1)$ &5e &I\\ 8a &(6,3,2,1;12) &$x^2+y^4+z^6+xw^6$ &2 &2 &$(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$ &23 &I\\ & & & &1 &$(6,2w_{2,1}^1\oplus w_{3,1}^{-1})$ &23 &I\\ 8b &(6,3,2,1;12) &$x^2+y^4+z^6+yw^{9}$ &2 &2 &$(10,v\oplus 4w_{2,1}^{-1})$ &33a&I \\ & & & &1 &$(6,2w_{2,1}^1\oplus w_{3,1}^{-1})$ &33a &I\\ 8c &(6,3,2,1;12) &$x^2+xy^2+z^6+yw^{9}$ &1 &1 &$(6,2w_{2,1}^1\oplus w_{3,1}^{-1})$ &70 &I\\ 8d &(6,3,2,1;12) &$x^2+y^4+z^6+w^{12}$ &4 &4 &$(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$ &8d&I \\ & & & &2 &$(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$ &8d &I\\ & & & &2 &$(6,2w_{2,1}^1\oplus w_{3,1}^{-1})$ &8d&I \\ & & & &2 &$(10,v\oplus 4w_{2,1}^{-1})$ &8d&II \\ & & & &1 &$(6,2w_{2,1}^1\oplus w_{3,1}^{-1})$ &8d &I\\ 8e &(6,3,2,1;12) &$x^2+xy^2+z^6+w^{12}$ &2 &2 &$(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$ &8e&I \\ & & & &1 &$(6,2w_{2,1}^1\oplus w_{3,1}^{-1})$ &8e &I\\ 23 &(5,3,2,2;12) &$x^2w+y^4+z^6+w^6$ &2 &2 &$(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$ &8a &I\\ & & & &1 &$(6,2w_{2,1}^1\oplus w_{3,1}^{-1})$ &8a &I\\ 29 &(15,6,5,4;30) &$x^2+y^5+z^6+yw^6$ &2 &2 &$(19,w_{2,1}^{-1})$ &5b&I \\ & & & &1 &$(12,6w_{2,1}^1)$ &5b&II \\ 33a &(9,4,3,2;18) &$x^2+y^4w+z^6+w^{9}$ &2 &2 &$(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$ &8b &I\\ & & & &1 &$(10,v\oplus 4w_{2,1}^{-1})$ &8b &II\\ 33b &(9,4,3,2;18) &$x^2+y^4w+z^6+yw^{7}$&1 &1 &$(10,v\oplus 4w_{2,1}^{-1})$ &33b&II \\ 56 &(11,8,6,5;30) &$x^2y+y^3z+z^5+w^6$ &1 &1 &$(19,w_{2,1}^{-1})$ &5c &I\\ 57 &(9,6,5,4;24) &$x^2y+y^4+xz^3+w^6$ &1 &1 &$(16,v\oplus w_{3,1}^1)$ &3c &I\\ 70 &(8,5,3,2;18) &$x^2w+xy^2+z^6+w^{9}$&1 &1 &$(14,2w_{2,1}^{-1}\oplus w_{3,1}^1)$ &8c &I\\ \caption{Table for $m=6$} \label{tab-6} \end{longtable} } \normalsize \appendix \section{Computer code for computing lattices} In order to compute the lattices using the configuration of curves, we used the following Magma code, developed by Antonio Laface and added here with his permission. 
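The first step of the Magma code below, reading off the discriminant group $A_L=L^{\vee}/L$ of an even lattice $L$ from a Gram matrix via the Smith normal form, can be sketched in any computer algebra system. The following Python fragment (using sympy; an illustration only, not the code used in the paper) computes the invariant factors of the discriminant groups of the root lattices $A_2$ and $A_1\oplus A_2$.

```python
# Illustrative sketch: the discriminant group L^vee / L of an even lattice L
# is read off from the Smith normal form of a Gram matrix; the invariant
# factors different from 1 give the cyclic decomposition of the group.
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

def discriminant_group(gram):
    """Invariant factors (> 1) of the cokernel of the Gram matrix."""
    snf = smith_normal_form(Matrix(gram))
    n = snf.shape[0]
    return [abs(snf[i, i]) for i in range(n) if abs(snf[i, i]) > 1]

# The A_2 root lattice has discriminant group Z/3.
print(discriminant_group([[2, -1], [-1, 2]]))                    # [3]

# A_1 + A_2 has discriminant group Z/2 + Z/3, i.e. Z/6.
print(discriminant_group([[2, 0, 0], [0, 2, -1], [0, -1, 2]]))   # [6]
```

The invariant factors different from $1$ give the cyclic decomposition of $A_L$, matching the first return value of the Magma function \verb|disc| below; computing the values of the quadratic form on generators additionally requires the transformation matrices of the Smith form, as in the Magma code.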
This first function takes an even bilinear form $B$, and outputs generators of the discriminant group together with the values of $q_B$ on these generators. \begin{alltt} disc:=function(M) S,A,B:=SmithForm(M); l:=[[S[i,i],i]: i in [1..NumberOfColumns(S)]| S[i,i] notin {0,1}]; sA:=Matrix(Rationals(),ColumnSubmatrixRange(B,l[1][2],l[#l][2])); for i in [1..#l] do MultiplyColumn(\(\sim\)sA,1/l[i][1],i); end for; Q:=Transpose(sA)*Matrix(Rationals(),M)*sA; for i,j in [1..NumberOfColumns(Q)] do if i ne j then Q[i,j]:=Q[i,j]-Floor(Q[i,j]); else Q[i,j]:=Q[i,j]-Floor(Q[i,j])+ (Floor(Q[i,j]) mod 2); end if; end for; return [l[i][1]: i in [1..#l]], Q; end function; \end{alltt} The next function is used to determine whether a given even lattice has overlattices. The input is a matrix $M$ and a number $n$. The output is the list of elements of the discriminant group $A_L$ on which the quadratic form takes the value $n$ modulo $2\mathbb{Z}$. To find the isotropic elements of the discriminant group, use $n=0$. \begin{verbatim} isot:=function(M,n) v,U:=disc(M); Q:=Rationals(); A:=AbelianGroup(v); return [Eltseq(a) : a in A | mod2(Matrix(Q,1,#v,Eltseq(a))*U*Matrix(Q,#v,1,Eltseq(a)))[1,1] eq n]; end function; \end{verbatim} The function {\verb mod2 } is as follows: \begin{verbatim} mod2:=function(Q); for i,j in [1..Nrows(Q)] do if i ne j then Q[i,j]:=Q[i,j]-Floor(Q[i,j]); else Q[i,j]:=Q[i,j]-2*Floor(Q[i,j]/2); end if; end for; return Q; end function; \end{verbatim} Finally, the following function compares two discriminant quadratic forms, and determines whether or not they are isomorphic as finite quadratic forms. This is not always easy to check by hand due to the relations in Proposition~\ref{t:relations}.
\begin{verbatim} dicompare:=function(M,Q) v,U:=disc(M); w,D:=disc(Q); if v ne w then return false; end if; A:=AbelianGroup(v); Aut:=AutomorphismGroup(A); f,G:=PermutationRepresentation(Aut); h:=Inverse(f); ll:=[Matrix(Rationals(),[Eltseq(Image(h(g),A.i)) : i in [1..Ngens(A)]]) : g in G]; dd:=[mod2(a*U*Transpose(a)) : a in ll]; return D in dd; end function; \end{verbatim} \end{document}
\begin{document} \title{The cohomology of the free loop spaces of $SU(n+1)/T^n$} \author{Matthew I. Burfitt} \address{\scriptsize{Institute of Mathematics, University of Aberdeen, Aberdeen AB24 3UE, United Kingdom}} \email{[email protected]} \author{Jelena Grbi\'c} \address{\scriptsize{School of Mathematics, University of Southampton, Southampton SO17 1BJ, United Kingdom}} \email{[email protected]} \subjclass[2010]{} \keywords{} \thanks{Research supported in part by The Leverhulme Trust Research Project Grant RPG-2012-560.} \maketitle \begin{abstract} We study the cohomology of the free loop space of $SU(n+1)/T^n$, the simplest example of a complete flag manifold and an important homogeneous space. Through this enhanced analysis we reveal rich new combinatorial structures arising in the cohomology algebra of the free loop space. We build new theory to allow for the computation of $H^*(\Lambda(SU(n+1)/T^{n});\mathbb{Z})$, a significantly more complicated structure than in other known examples. In addition to our theoretical results, we explicitly implement a novel integral Gr\"obner basis procedure for computation. This procedure is applicable to any Leray-Serre spectral sequence for which the cohomology of the base space is the quotient of a finitely generated polynomial algebra. The power of this procedure is illustrated by the explicit calculation of $H^*( \Lambda(SU(4)/T^3);\mathbb{Z})$. We also provide a python library with examples of all procedures used in the paper. \end{abstract} \section{Introduction} The free loop space $\Lambda X$ of a topological space $X$ is defined to be the mapping space $Map(S^1,X)$, the space of all unpointed maps from the circle to $X$. This differs from the based loop space $\Omega X=Map_*(S^1,X)$, the space of all pointed maps from the circle to $X$. The two loop spaces are connected by the evaluation fibration. The based loop space functor is an important classical object in algebraic topology and has been well studied.
However, the topology of free loop spaces, while required for many applications, behaves in a much more complex way and is still only well understood in a handful of examples. A primary motivation for studying the topology of the free loop space is the important role loops on a manifold play in both mathematics and physics. Given a Riemannian manifold $(M,g)$, the closed geodesics parametrised by $S^1$ are the critical points of the energy functional \begin{equation*}\label{eq:Energy} E\colon \Lambda M\to \mathbb R, \quad E(\gamma):= \frac{1}{2}\int_{S^1} ||\dot{\gamma}(t)||^2 dt. \end{equation*} Morse theory applied to the energy functional $E$ gives a description of the loop space $\Lambda M$ by successive attachments of bundles over the critical submanifolds. Knowledge of the topology of $\Lambda M$ therefore implies existence results for critical points of $E$. Computations of the cohomology of the free loop space have therefore received much attention over the last several decades. In the simplest case, when $X$ is an $H$-space, there is a homotopy equivalence \[ \Lambda X \simeq \Omega X \times X. \] Progress beyond this is restricted to specific examples, obtained by applying specialised methods relevant to each particular case. Broadly speaking, there are three main related approaches to studying the cohomology of the free loop space: the Hochschild cohomology of the normalized singular chains on the based loop space \cite{Goodwillie85, Burghelea86, Dupont03, Ndombo2002, Idrissi00, Menichi00}, the Eilenberg-Moore spectral sequence of the fibre square realising $\Lambda X$ as the pullback of a pair of diagonal maps \cite{Smith81, Kuribayashi91, Kuribayashi99, Kuribayashi04} and the cohomology Leray-Serre spectral sequence of the path-loop (or Wegraum) fibration \cite{McCleary1987, cohololgy_Lprojective, Burfitt2018}.
Computational examples in the literature include complex projective spaces with integral coefficients \cite{Crabb88}, Grassmann and Stiefel manifolds with coefficients in a finite field in many cases \cite{Kuribayashi91}, wedge products of same dimensional spheres with integral coefficients \cite{Parhizgar97}, the classifying spaces of compact simply connected Lie groups with coefficients in a finite field \cite{Kuribayashi99} and simply connected $4$-manifolds with rational coefficients \cite{Onishchenko12}. The importance of the topology of the free loop space of closed oriented manifolds is further highlighted by the seminal work of Chas and Sullivan \cite{StringTopology}, in which two new algebraic operations, the loop product and the Batalin--Vilkovisky operator, were introduced, collectively referred to as string topology operations. In studying string topology operations, the homology of the free loop space has also been considered in several additional cases: complex Stiefel manifolds with integral coefficients \cite{Tamanoi07}, spaces whose cohomology is an exterior algebra with field coefficients \cite{Bohmann2021}, $(n-1)$-connected manifolds up to dimension $3n-2$ with homology coefficients in a field \cite{Berglund15} and many cases of $(n-1)$-connected $2n$-manifolds with integral coefficients \cite{Beben17}. Cohen-Jones-Yan \cite{Cohen2004} have also shown that there is a spectral sequence of algebras converging to the homology of the free loop space with its loop product, demonstrating its use on spheres and complex projective spaces. A straightforward Leray-Serre spectral sequence approach to computing the cohomology of the free loop space was presented by Seeliger \cite{cohololgy_Lprojective} and demonstrated on the free loop space of complex projective spaces.
Greatly extending these ideas, the authors \cite{Burfitt2018} previously obtained the integral cohomology of the free loop spaces of the complete flag manifolds of rank $2$ simple Lie groups. The other work closely related to this paper is that of McCleary and Ziller~\cite{McCleary1991, McCleary1987}, where it is shown, using the Leray-Serre spectral sequence of the path-loop fibration and a classical theorem of Gromoll and Meyer~\cite{Gromoll69}, that all homogeneous spaces with the exception of those of rank $1$ have infinitely many geometrically distinct closed geodesics. This extends earlier work of Ziller~\cite{Ziller1977}, in which Morse theory is applied to obtain the $\mathbb{Z}_2$ Betti numbers of the free loop spaces of globally symmetric spaces. In this paper we explore the cohomology of the free loop space of homogeneous spaces by studying $H^*(\Lambda(SU(n+1)/T^n);\mathbb{Z})$ for $n \geq 2$. In doing so we uncover surprising combinatorial structure, make use of computational commutative algebra and develop computer aided algorithms. The power of the theory is illustrated by obtaining the integral cohomology of $\Lambda(SU(4)/T^3)$, a significantly more complex example than $H^*(\Lambda S^2;\mathbb{Z})$ in the case $n=1$ or $H^*(\Lambda(SU(3)/T^2);\mathbb{Z})$ in the case $n=2$, the latter previously considered in \cite{Burfitt2018}. We apply classical homotopy theoretic arguments to the path-loop fibration and its pullback along the diagonal map on $SU(n+1)/T^n$ to derive the differentials in the Serre spectral sequence of the free loop space evaluation fibration converging to $H^*(\Lambda(SU(n+1)/T^n);\mathbb{Z})$. Classically, the elementary symmetric polynomials are used as the basis of symmetric functions in order to write down generators of the quotient ideal in $H^*(SU(n+1)/T^n;\mathbb{Z})$. However, that choice of basis elements does not lead to a description of the differentials that can be easily applied to developing further theory.
In this work we choose the basis consisting of complete homogeneous symmetric polynomials. By doing so we uncover a new, unexpectedly sophisticated combinatorial structure on the differentials. The consequences of this choice of basis are first highlighted by Theorem~\ref{thm:monomial sum}, where the ideal generated by the complete homogeneous symmetric generators is shown to rearrange straightforwardly into a reduced Gr\"obner basis. This demonstrates our new approach of using Gr\"obner bases to understand cohomology algebras expressed as polynomial quotients, analysing them from the perspective of computational commutative algebra and computer aided algorithms. We explicitly apply Gr\"obner bases to spectral sequences in Proposition~\ref{thm:SpectralGrobner} and lay out an integral Gr\"obner basis procedure for performing computations, applicable to any Leray-Serre spectral sequence for which the cohomology of the base space is the quotient of a finitely generated polynomial algebra. The enhancements to the classical Buchberger algorithm in Section~\ref{sec:SpectralGrobner} combine to provide a powerful procedure that makes the later application in Section~\ref{sec:LSU4/T3} computationally possible. It is the characterisation in Proposition~\ref{thm:SpectralGrobner} that motivates Theorem~\ref{thm:Ideals}, which provides part of the computation of $H^*(\Lambda(SU(n+1)/T^n);\mathbb{Z})$ for arbitrary $n$. Theorem~\ref{thm:Ideals} is used alongside the direct application of Proposition~\ref{thm:SpectralGrobner} in Section~\ref{sec:LSU4/T3} to obtain an expression for the module structure of $H^*(\Lambda(SU(4)/T^3);\mathbb{Z})$ up to a small uncertainty in torsion type. \section{Background}\label{sec:Background} \subsection{Symmetric polynomials}\label{sec:SymPoly} A polynomial in $\mathbb{Z}[\gamma_1,\dots,\gamma_n]$ is called symmetric if it is invariant under permutations of the indices of the variables $\gamma_1,\dots,\gamma_n$.
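The defining invariance can be checked mechanically. The following Python sketch (using sympy; an illustration only, not part of the python library accompanying the paper) tests a polynomial in $\mathbb{Z}[\gamma_1,\gamma_2,\gamma_3]$ against every permutation of the variable indices.

```python
# Check symmetry of a polynomial by substituting all permutations of the
# variables simultaneously; an illustrative helper, not from the paper.
from itertools import permutations
import sympy as sp

g = sp.symbols('g1 g2 g3')

def is_symmetric(poly, variables):
    """True if poly is invariant under every permutation of variables."""
    p = sp.expand(poly)
    return all(
        sp.expand(p.subs(list(zip(variables, perm)), simultaneous=True)) == p
        for perm in permutations(variables)
    )

print(is_symmetric(g[0]*g[1] + g[1]*g[2] + g[0]*g[2], g))  # True: this is sigma_2
print(is_symmetric(g[0]**2*g[1] + g[1]**2*g[2], g))        # False
```

Note that the substitution must be simultaneous: substituting $\gamma_1\mapsto\gamma_2$ and then $\gamma_2\mapsto\gamma_1$ sequentially would collapse both variables to $\gamma_1$.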
The study of symmetric polynomials goes back more than three hundred years, originally arising in the study of roots of single variable polynomials. Today symmetric polynomials have applications in a diverse range of areas of mathematics. In this paper, the relevance of symmetric polynomials comes from their presence in the cohomology rings of complete flag manifolds, described in Section~\ref{sec:CohomCompFlag}. In this section we summarise some basic concepts from the theory of symmetric polynomials that will be essential for our later work. A complete introduction to the topic can be found in \cite[\S $7$]{ECstanly} or \cite[\S $I$]{Macdonald}. \subsubsection{Elementary symmetric polynomials}\label{sec:elementary} Much of the language used to describe symmetric polynomials is the language of partitions. An $n$ \emph{partition} $\lambda$ is a sequence of non-negative integers $(\lambda_1,\dots,\lambda_k)$, for some integer $k\geq 1$, such that \begin{equation*} \lambda_1\geq\cdots\geq\lambda_k \;\; \text{and} \;\; \lambda_1+\cdots+\lambda_k=n. \end{equation*} By convention we consider the partitions $(\lambda_1,\dots,\lambda_k)$ and $(\lambda_1,\dots,\lambda_k,0,\dots,0)$ to be equal, and we abbreviate an $n$ partition $\lambda$ by $\lambda \vdash n$. The elementary symmetric polynomials are a special collection of symmetric polynomials that form a basis of the symmetric polynomials, which we make explicit in Theorem~\ref{thm:FunThmSym}. For each integer $n\geq 1$ and $1\leq l \leq n$, the \emph{elementary symmetric polynomial} $\sigma_l\in\mathbb{Z}[\gamma_1,\dots,\gamma_n]$ in $n$ variables is given by \begin{equation*} \sigma_l=\sum_{1\leq i_1<\cdots<i_l\leq n}{\gamma_{i_1}\cdots\gamma_{i_l}}. \end{equation*} For a partition $\lambda=(\lambda_1,\dots,\lambda_k)$, denote by $\sigma_\lambda$ the symmetric polynomial $\sigma_{\lambda_1}\cdots\sigma_{\lambda_k}$. The following theorem is sometimes known as the fundamental theorem of symmetric polynomials.
\begin{theorem}[{\cite[\S $7.4$]{ECstanly}}]\label{thm:FunThmSym} For each $n\geq 1$, the set of $\sigma_\lambda$, where $\lambda$ ranges over all $n$ partitions, forms an additive basis of all symmetric polynomials. That is, the $\sigma_i$ for $1\leq i \leq n$ are algebraically independent and generate the symmetric polynomials as an algebra. \end{theorem} \hspace*{\fill} $\square$ \subsubsection{Complete homogeneous symmetric polynomials}\label{sec:homogeneous} The complete homogeneous symmetric polynomials are another collection of $n$ symmetric polynomials in $n$ variables for each $n\geq 1$. In a sense, which is made explicit in \cite[\S $7.6$]{ECstanly}, the complete homogeneous symmetric polynomials can be thought of as dual to the elementary symmetric polynomials. For each integer $n\geq 1$ and $1\leq l \leq n$, define the \emph{complete homogeneous symmetric polynomial} $h_l\in\mathbb{Z}[\gamma_1,\dots,\gamma_n]$ in $n$ variables by \begin{equation}\label{defn:CompleteHomogeneous} h_l=\sum_{1\leq i_1\leq\cdots\leq i_l\leq n}{\gamma_{i_1}\cdots\gamma_{i_l}}. \end{equation} For a partition $\lambda=(\lambda_1,\dots,\lambda_k)$, denote by $h_\lambda$ the symmetric polynomial $h_{\lambda_1}\cdots h_{\lambda_k}$. Setting $\sigma_0=h_0=1$, the following identity, derived for infinitely many variables in \cite[\S $I$.2]{Macdonald}, gives the relationship between the elementary symmetric and complete homogeneous symmetric polynomials. Evaluating all but $n$ variables at $0$, for any $1\leq m \leq n$ this gives \begin{equation}\label{eq:ElementaryHomogeneousRelation} \sum_{t=0}^{m}(-1)^t \sigma_t h_{m-t} = 0. \end{equation} As $\sigma_0=h_0=1$, equation~(\ref{eq:ElementaryHomogeneousRelation}) can be used to inductively derive expressions for either the elementary symmetric polynomials in terms of the complete homogeneous polynomials or the other way round. Moreover, the complete homogeneous symmetric polynomials also form a basis of the symmetric polynomials.
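The definitions above are easy to implement directly. The following Python sketch (using sympy; an illustration only, not part of the python library accompanying the paper) builds $\sigma_l$ and $h_l$ for $n=3$ from their defining index sums and verifies the alternating relation $\sum_{t=0}^{m}(-1)^t \sigma_t h_{m-t} = 0$ for $1\leq m\leq n$.

```python
# Sketch (not the paper's library): elementary and complete homogeneous
# symmetric polynomials in n = 3 variables, built from their index sums.
from itertools import combinations, combinations_with_replacement
import sympy as sp

n = 3
g = sp.symbols(f'g1:{n + 1}')  # (g1, g2, g3)

def sigma(l):
    """Elementary symmetric polynomial: strictly increasing index tuples."""
    return sp.Integer(1) if l == 0 else sum(sp.prod(c) for c in combinations(g, l))

def h(l):
    """Complete homogeneous symmetric polynomial: weakly increasing index tuples."""
    return sp.Integer(1) if l == 0 else sum(sp.prod(c) for c in combinations_with_replacement(g, l))

# The alternating relation between the two families vanishes for each m <= n.
for m in range(1, n + 1):
    print(m, sp.expand(sum((-1)**t * sigma(t) * h(m - t) for t in range(m + 1))))
```

For $m=1$ the relation reads $h_1-\sigma_1=0$, and for $m=2$ it reads $h_2-\sigma_1 h_1+\sigma_2=0$, which is the first step of the inductive rewriting described above.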
\subsection{Gr\"{o}bner bases}\label{sec:Grobner} Gr\"{o}bner bases provide a powerful tool to perform computations on ideals in commutative algebra. Their use, however, extends far beyond such calculations, with applications within mathematics, computer science, physics and engineering. We now briefly describe the Gr\"{o}bner basis theory used later in the paper; for further details see \cite{Grobner2} or \cite{Grobner1}. The following results are stated over a Euclidean or principal ideal domain $R$. However, throughout this paper we consider only the case $R=\mathbb{Z}$. Gr\"{o}bner basis theory can be generalised to other rings, and stronger results can be recovered over a field. Given a finite subset $A$ of $R[x_1,\dots,x_n]$, we denote by $\langle A\rangle$ the ideal generated by the elements of $A$. From now on we assume a total monomial ordering on the polynomial ring $R[x_1,\dots,x_n]$ which respects multiplication. In the course of this paper we take this order to be the lexicographic ordering. The \emph{leading term} of a polynomial with respect to an order is the term largest with respect to the order; the \emph{leading monomial} is the leading term multiplied by its coefficient, called the \emph{leading coefficient}. For $f,g,p\in R[x_1,\dots,x_n]$, the polynomial $g$ is said to be {\it reduced} from $f$ by $p$, written \begin{equation*} f \xrightarrow{p} g, \end{equation*} if there exists a monomial $m$ in $f$ such that the leading monomial $l_p$ of $p$ divides $m$, say $m= m'l_p$ for some monomial $m'\in R[x_1,\dots,x_n]$, and $g=f-m'p$. Let $R$ be a principal ideal domain and let $G$ be a finite subset of $R[x_1,\dots,x_n]$. Then $G$ is a {\it Gr\"{o}bner basis} if any of the following equivalent conditions hold. \begin{enumerate} \item The ideal of leading terms of $\langle G\rangle$ is equal to the ideal generated by the leading terms of $G$. \item All elements of $\langle G\rangle$ can be reduced to zero by elements of $G$.
\item The leading term of every element of $\langle G\rangle$ is divisible by the leading term of some element of $G$. \end{enumerate} A set is called {\it decidable} if, given any two elements as input, there is an algorithm that can determine whether they are equal. A ring is called {\it computable} if it is decidable as a set and there is an effectively computable algorithm for addition, multiplication and subtraction in the ring for an input of a pair of elements. A principal ideal domain is called a {\it computable principal ideal domain} if it is a computable ring, there is an algorithm that can effectively decide whether one given element divides another, and an extended Euclidean algorithm can be effectively computed. A Euclidean domain is a {\it computable Euclidean domain} if it is a computable ring and there is an algorithm that effectively computes division with remainder. The integers are a computable Euclidean domain. Moreover, as division with remainder can be applied to construct an extended Euclidean algorithm, every computable Euclidean domain is also a computable principal ideal domain. If $R$ is a computable principal ideal domain, then for any ideal in $R[x_1,\dots,x_n]$ there exists a Gr\"{o}bner basis. In particular, for finite $A\subseteq R[x_1,\dots,x_n]$ there is an algorithm to obtain a Gr\"{o}bner basis $G$ such that $\langle G\rangle=\langle A\rangle$. Over a field the standard algorithm is Buchberger's algorithm, which can easily be implemented on a computer, and a similar algorithm can be used for principal ideal domains. Over a Euclidean domain computation speed might be improved by implementing more advanced algorithms \cite{Lichtblau13, Eder19}. A basic Gr\"{o}bner basis algorithm can be deduced from the following theorem. Let $g_1,g_2\in R[x_1,\dots,x_n]$ be non-zero with leading terms $t_1,t_2$ and leading coefficients $c_1,c_2$.
Let $b_1,b_2\in R$ be such that $b_1c_1=b_2c_2=\lcm(c_1,c_2)$ and $s_1,s_2\in R[x_1,\dots,x_n]$ be such that $s_1t_1=s_2t_2=\lcm(t_1,t_2)$. Then the $S$-polynomial of $g_1$ and $g_2$ is given by \begin{equation*} Spol(g_1,g_2) = b_1s_1g_1 - b_2s_2g_2. \end{equation*} Let $d_1,d_2\in R$ be such that $d_1c_1+d_2c_2=\gcd(c_1,c_2)$. Then the $G$-polynomial of $g_1$ and $g_2$ is given by \begin{equation*} Gpol(g_1,g_2) = d_1s_1g_1 + d_2s_2g_2. \end{equation*} \begin{theorem}[\cite{Grobner1}]\label{thm:GrobnerAlg} A finite subset $G$ of $R[x_1,\dots,x_n]$ is a Gr\"{o}bner basis if for any $g_1,g_2\in G$ \begin{enumerate} \item $Spol(g_1,g_2)$ reduces to $0$ by $G$ and \item $Gpol(g_1,g_2)$ is reducible by an element of $G$ in its leading term. \end{enumerate} \end{theorem} \hspace*{\fill} $\square$ In the computational part of our work it is important that a Gr\"{o}bner basis can be used to compute the intersection of ideals. This procedure is made explicit in the next remark. \begin{remark}\label{rmk:intersectionGrobner} Let $A=\{a_1,\dots,a_s\}$ and $B=\{b_1,\dots,b_l\}$ be subsets of $R[x_1,\dots,x_n]$. Take a Gr\"{o}bner basis $G$ of \begin{equation*} \{ya_1,\dots,ya_s,(1-y)b_1,\dots,(1-y)b_l\} \end{equation*} in $R[x_1,\dots,x_n,y]$ using a monomial ordering in which monomials containing $y$ are larger than $y$-free monomials. Then a Gr\"{o}bner basis of $\langle A\rangle\cap\langle B\rangle$ is given by the elements of $G$ that do not contain $y$. \end{remark} A Gr\"{o}bner basis over a principal ideal domain, however, is not unique. Importantly, different choices of ordering can produce very different Gr\"{o}bner bases, even when the bases are given in reduced form. From now on let $E$ be a Euclidean domain. In this case, there may be a choice of remainder upon division, resulting in alternative division algorithms, and this might change the output of a Gr\"{o}bner basis algorithm. We therefore assume the division algorithm is fixed.
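The intersection procedure of Remark~\ref{rmk:intersectionGrobner} can be illustrated in a small case. The sketch below is our own illustration using SymPy's \texttt{groebner} routine, which computes over the fraction field $\mathbb{Q}$ rather than over $\mathbb{Z}$; the example ideals are chosen for illustration only.

```python
from sympy import symbols, groebner, rem

x, t = symbols('x t')

# To compute <A> cap <B>, take a Groebner basis of {t*a_i, (1-t)*b_j} with the
# auxiliary variable t larger than all other variables (lex order with t first),
# then keep the t-free elements.  Here A = {x**2} and B = {x**3 - x} in Q[x].
G = groebner([t * x**2, (1 - t) * (x**3 - x)], t, x, order='lex')
intersection = [g for g in G.exprs if not g.has(t)]

# In Q[x] the intersection of principal ideals is generated by the lcm of the
# generators: lcm(x**2, x**3 - x) = x**4 - x**2.
assert len(intersection) == 1
assert rem(intersection[0], x**4 - x**2, x) == 0
assert rem(x**4 - x**2, intersection[0], x) == 0
```

The mutual divisibility checks at the end confirm that the computed generator agrees with $x^4-x^2$ up to a unit, as expected for the reduced basis of a principal ideal.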
To find a Gr\"{o}bner basis which is unique in $E[x_1,\dots,x_n]$ we are required to be more precise about reduction. For $f,g,p\in E[x_1,\dots,x_n]$, the polynomial $g$ is said to be \emph{E-reduced} from $f$ by $p$ if there exists a monomial $m=at$ in $f$ with the leading term $l_p$ of $p$ dividing $t$, such that $t=sl_p$ and \begin{equation*} g=f-qsp \end{equation*} for some non-zero $q\in E$, the quotient of $a$ upon division with unique remainder by the leading coefficient of $p$. A Gr\"{o}bner basis $G$ in $E[x_1,\dots,x_n]$ is said to be \emph{reduced} if no polynomial in $G$ can be $E$-reduced by any other polynomial in $G$. \begin{theorem}[\cite{Kandri-Rody88}] A reduced Gr\"{o}bner basis $G$ over $\mathbb{Z}[x_1,\dots,x_n]$ for which all leading monomials have positive coefficients is unique. \end{theorem} \hspace*{\fill} $\square$ In general, with coefficients in a Euclidean domain the uniqueness of a reduced Gr\"{o}bner basis holds up to multiplication by units. However, in this paper we only consider Gr\"{o}bner bases of integer polynomials. The following theorem expands upon part (2) of the equivalent Gr\"{o}bner basis definitions above. \begin{theorem}[\cite{Grobner1}]\label{thm:GrobnerOver} Let $G$ be a Gr\"{o}bner basis in $E[x_1,\dots,x_n]$. Then every element of $E[x_1,\dots,x_n]$ $E$-reduces by elements of $G$ to a unique representative in \begin{equation*} \frac{E[x_1,\dots,x_n]}{\langle G\rangle}. \end{equation*} \end{theorem} \hspace*{\fill} $\square$ \subsection{Cohomology of complete flag manifolds}\label{sec:CohomCompFlag} A manifold $M$ is called homogeneous if it can be equipped with a transitive $G$ action for some Lie group $G$. In this case, $M \cong G/H$ for some Lie subgroup $H$ of $G$, the stabiliser of a point in $M$. A Lie subgroup $T$ of a Lie group $G$ isomorphic to a torus is called maximal if any Lie subgroup also isomorphic to a torus and containing $T$ coincides with $T$.
The next proposition is straightforward to show, see for example \cite[\S $5.3$]{MT2}, Theorem $3.15$. \begin{proposition}\label{prop:tori} The conjugate of a torus in $G$ is a torus and all maximal tori are conjugate. In addition, given a maximal torus $T$, for all $x\in G$ there exists an element $g\in G$ such that $g^{-1}xg\in T$. Hence the union of all maximal tori is $G$. \end{proposition} \hspace*{\fill} $\square$ It is therefore unambiguous to refer to the maximal torus $T$ of $G$ and consider the quotient $G/T$, which up to isomorphism is independent of the choice of $T$. The homogeneous space $G/T$ is called the {\it complete flag manifold} of $G$. The rank of a Lie group $G$ is the dimension of a maximal torus $T$. The ranks of the classical simple Lie groups can be deduced by considering the standard maximal tori, see for example~\cite{MT}. Borel \cite{Borel} studied in detail the cohomology of homogeneous spaces, in particular deducing the rational cohomology of $G/T$. Following later work of Bott, Samelson, Toda, Watanabe and others, the integral cohomology of the complete flag manifolds of all simple Lie groups was deduced. The integral cohomology of the complete flag manifolds of the special unitary groups is as follows. \begin{theorem}[\cite{Borel}, \cite{AplicationsOfMorse}]\label{thm:H*SU/T} For each integer $n\geq 0$, the cohomology of the complete flag manifold of the simple Lie group $SU(n+1)$ is given by \begin{equation*} H^*(SU(n+1)/T^n;\mathbb{Z})=\frac{\mathbb{Z}[\gamma_1,\dots,\gamma_{n+1}]}{\langle\sigma_1,\dots,\sigma_{n+1}\rangle} \end{equation*} where $|\gamma_i|=2$. \end{theorem} \hspace*{\fill} $\square$ \subsection{Based loop space cohomology of $SU(n)$}\label{sec:LoopLie} The Hopf algebras of the based loop spaces of Lie groups were studied by Bott in \cite{bott1958}.
More recently, Grbi{\'c} and Terzi{\'c} \cite{homology_Lflags} showed that the integral homology of the based loop space of a complete flag manifold is torsion free and found the integral Pontrjagin homology algebras of the based loop spaces of the complete flag manifolds of the compact connected simple Lie groups $SU(n)$, $Sp(n)$, $SO(n)$, $G_2$, $F_4$ and $E_6$. Recall that the integral divided polynomial algebra on variables $x_1,\dots,x_n$ is given by \begin{equation*} \Gamma_{\mathbb{Z}}[x_1,\dots,x_n]=\frac{\mathbb{Z}[(x_i)_1,(x_i)_2,\dots]}{\langle x_i^k-k!(x_i)_k\rangle} \end{equation*} where $1\leq i \leq n$, $k\geq 1$ and $x_i=(x_i)_1$. The next theorem is obtained using a Leray-Serre spectral sequence argument applied to the path space fibration \begin{equation*} \Omega SU(n) \to PSU(n) \to SU(n). \end{equation*} \begin{theorem}\label{thm:LoopSU(n)} For each $n\geq 1$, the cohomology of the based loop space of the classical simple Lie group $SU(n)$ is given by \begin{equation*} H^*(\Omega(SU(n));\mathbb{Z})=\Gamma_{\mathbb{Z}}[x_2,x_4,\dots,x_{2n-2}] \end{equation*} where $|x_i|=i$ for $i=2,4,\dots,2n-2$. \end{theorem} \hspace*{\fill} $\square$ \section{Combinatorial coefficients}\label{sec:Comintorial} Before studying the cohomology of the free loop space of $SU(n+1)/T^n$ we first analyse some of the combinatorial structures that appear in the cohomology algebras. \subsection{Binomial coefficients}\label{subsec:Binomial} The binomial coefficients $\binom{n}{k}$ are defined to be the number of size $k$ subsets of a size $n$ set and they satisfy the recurrence relation $\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$. It is easily shown by induction on $n$ that $\binom{n}{k}=\frac{n!}{k!(n-k)!}$ for $0\leq k \leq n$, and $\binom{n}{k}$ is zero otherwise. The binomial coefficients also satisfy the well known formulas \begin{equation}\label{eq:binom} \sum_{k=0}^n{\binom{n}{k}}=2^n \;\; \text{and} \;\; \sum^{n}_{k=0}{(-1)^k \binom{n}{k}}=0.
\end{equation} \subsection{Multinomial coefficients} Throughout this paper for integers $k\geq 2$ and $n,a_1,\dots,a_k \geq 0$, we set \begin{equation}\label{eq:Multinomial} \binom{n}{a_1,\dots,a_k}=\frac{n!}{a_1!\cdots a_k!}. \end{equation} The following expansion in terms of binomial coefficients is easily verified, \begin{equation*}\label{eq:MultinomilExpansionOriginal} \binom{n}{a_1,\dots,a_k}=\binom{n}{a_1}\binom{n-a_1}{a_2}\cdots\binom{n-a_1-\cdots- a_{k-1}}{a_k}. \end{equation*} In particular \begin{equation}\label{eq:MultinomilExpansion} \binom{n}{a_1,\dots,a_k}=\binom{n}{a_1,\dots,a_{k-1}}\binom{n-a_1-\cdots-a_{k-1}}{a_k}. \end{equation} When $a_1+\cdots+a_k=n$, the expressions in equation~(\ref{eq:Multinomial}) are called the \emph{multinomial coefficients}. In this case the combinatorial interpretation of the coefficients is as the number of ways to partition a size $n$ set into subsets of sizes $a_1,\dots,a_k$. \subsection{Multiset coefficients} The number of size $k$ multisets that can be formed from elements of a size $n$ set is denoted by $\multiset{n}{k}$ and these numbers are called the {\it multiset coefficients}. It is well known that $\multiset{n}{k}=\binom{n+k-1}{k}$, hence $\multiset{n}{k}=\multiset{n-1}{k}+\multiset{n}{k-1}$. To the best of our knowledge the identity in the next lemma has not been shown before. \begin{lemma}\label{lem:combino} For integers $n,m\geq 1$, \begin{equation*} \sum^n_{k=0}{(-1)^k\binom{n}{k}\multiset{n}{m-k}}=0. \end{equation*} \end{lemma} \begin{proof} We prove the statement by induction on $n$. When $n=1$, \begin{equation*} \sum^n_{k=0}{(-1)^k\binom{n}{k}\multiset{n}{m-k}} =\binom{1}{0}\multiset{1}{m}-\binom{1}{1}\multiset{1}{m-1} =\binom{m}{m}-\binom{m-1}{m-1}=0.
\end{equation*} Suppose the lemma holds for $n=t-1\geq1$, then \begin{align*} &\sum^t_{k=0}{(-1)^k\binom{t}{k}\multiset{t}{m-k}} =\sum^t_{k=0}{(-1)^k\bigg(\binom{t-1}{k}+\binom{t-1}{k-1}\bigg)\multiset{t}{m-k}} \\ &=\sum^t_{k=0}{(-1)^k\bigg(\binom{t-1}{k-1}\multiset{t}{m-k}+\binom{t-1}{k}\multiset{t-1}{m-k}+\binom{t-1}{k}\multiset{t}{m-k-1}\bigg)}=0 \end{align*} as the middle term sum $\sum^{t-1}_{k=0}{(-1)^k\binom{t-1}{k}\multiset{t-1}{m-k}}=0$ by assumption and all other terms cancel except for $\binom{t-1}{-1}\multiset{t}{m}$, $\binom{t-1}{t}\multiset{t-1}{m-t}$ and $\binom{t-1}{t}\multiset{t}{m-t-1}$, all of which are zero. \end{proof} \subsection{Stirling numbers of the second kind}\label{subsec:Stirling} Along with binomial, multinomial and multiset coefficients, Stirling numbers of the second kind appear as part of the so called \emph{12-fold way}, a class of enumerative problems concerned with counting placements of balls in boxes. The \emph{Stirling numbers of the second kind} $\stirling{n}{m}$ denote the number of ways to partition an $n$ element set into $m$ non-empty subsets. The Stirling numbers of the second kind satisfy the recurrence relation \begin{equation}\label{eq:StirlingRecurence} \stirling{n}{m} = \stirling{n-1}{m-1} + m \stirling{n-1}{m} \end{equation} for integers $n\geq m \geq 1$. From the combinatorial definitions, we obtain the relationship between Stirling numbers of the second kind and multinomial coefficients, given by \begin{equation}\label{eq:StirlingExpansion} m!\stirling{n}{m}= \sum_{\substack{a_1,\dots,a_m \geq 1 \\ a_1+\cdots+a_m=n }} \binom{n}{a_1,\dots,a_m}. \end{equation} To the best of our knowledge the identity in the next lemma has not been shown before. \begin{lemma}\label{lem:StirlingIdentity} For each $n \geq 1$, \begin{equation*} \sum^n_{m=1}{(-1)^m m! \stirling{n}{m}} = (-1)^n. \end{equation*} \end{lemma} \begin{proof} The formula is easily seen to hold for the case $n=1$.
Using the recurrence relation in equation~(\ref{eq:StirlingRecurence}) and induction on $n$, \begin{align*} \sum^n_{m=1}{(-1)^m m! \stirling{n}{m}} =& \sum^n_{m=1}{(-1)^m m! \left( \stirling{n-1}{m-1} + m \stirling{n-1}{m} \right) } \\ =&\sum^n_{m=1}{(-1)^m m! \stirling{n-1}{m-1}} + \sum^n_{m=1}{(-1)^m m!m \stirling{n-1}{m}} \\ =&\sum^{n-1}_{m=1}{(-1)^{m+1} (m+1)! \stirling{n-1}{m}} + \sum^{n-1}_{m=1}{(-1)^m m!m \stirling{n-1}{m}} \\ =&\sum^{n-1}_{m=1}{(-1)^{m+1} m!(m+1) \stirling{n-1}{m}} + \sum^{n-1}_{m=1}{(-1)^m m!m \stirling{n-1}{m}} \\ =&\sum^{n-1}_{m=1}{(-1)^{m+1}\left( m!(m+1)-m!m \right) \stirling{n-1}{m}} \\ =&\sum^{n-1}_{m=1}{(-1)^{m+1} m! \stirling{n-1}{m}} =(-1)^{n}. \end{align*} \end{proof} By an expansion using the recurrence relation in equation~(\ref{eq:StirlingRecurence}) and Lemma~\ref{lem:StirlingIdentity}, the well known relation $\sum^n_{m=1}{(-1)^m (m-1)! \stirling{n}{m}} = 0$ can be easily derived. \section{Alternative forms of the symmetric ideal}\label{sec:IdeaForms} Replacing $\sigma_i$ with the complete homogeneous symmetric polynomials $h_i$ as generators of the symmetric ideal leads to a simplification of the generator expressions, practical for working with $H^*(SU(n+1)/T^n)$ as demonstrated in the next section. For each integer $n\geq 1$ and all integers $1 \leq k' \leq k \leq n$, define $\Phi(k,k')$ to be the sum of all monomials in $\mathbb{Z}[x_1,\dots,x_n]$ of degree $k$ in the variables $x_1,\dots,x_{n-k'+1}$. \begin{theorem}\label{thm:monomial sum} In the ring $\frac{\mathbb{Z}[x_1,\dots,x_n]}{\langle h_1,\dots,h_n\rangle}$ for each $1 \leq k' \leq k \leq n$, $\Phi(k,k')=0$. In addition \begin{equation}\label{eq:OrigIdeal} \langle h_1,\dots,h_n\rangle=\langle\Phi(1,1),\dots,\Phi(n,n)\rangle. \end{equation} Moreover these new ideal generators form a reduced Gr\"obner basis for the ideal of $n$ variable symmetric polynomials with respect to the lexicographic term order on the variables $x_1<\cdots<x_n$.
\end{theorem} \begin{proof} Note that by definition, $h_k=\Phi(k,1)$. We prove by induction on $k$ that for each $1 \leq k' \leq k \leq n$, $\Phi(k,k')\in \langle h_1,\dots,h_n\rangle$. When $k=1$, by definition $h_1=\Phi(1,1)$. Assume the statement is true for all $k<m\leq n$. By induction, $\Phi(m-1,m')\in \langle h_1,\dots,h_n\rangle$ for all $1 \leq m'\leq m-1$. Note that $\Phi(m-1,m')x_{n-m'+1}$ is the sum of all monomials of degree $m$ in the variables $x_1,\dots,x_{n-m'+1}$ divisible by $x_{n-m'+1}$. Hence, for each $1\leq m'\leq m-1$ \begin{equation*} h_m-\Phi(m-1,1)x_n-\cdots-\Phi(m-1,m'-1)x_{n-m'+2}=\Phi(m,m'). \end{equation*} At each stage of the proof the next $\Phi(k,k)$ is obtained as a sum of $h_k$ and polynomials obtained from $h_1,\dots,h_{k-1}$. Hence $\langle\Phi(1,1),\dots,\Phi(n,n)\rangle$ and $\langle h_1,\dots,h_n \rangle$ are equal. \end{proof} For integers $0\leq a\leq b$, denote by $h_a^b$ the complete homogeneous polynomial of degree $a$ in the variables $x_1,\dots,x_b$. Then equation~(\ref{eq:OrigIdeal}) can be written as \begin{equation}\label{eq:SipleRedusingHomogenious} \langle h_1^n,\dots,h_n^n\rangle=\langle h_1^n,h_2^{n-1},\dots,h_n^1\rangle. \end{equation} A useful intermediate form of Theorem~\ref{thm:monomial sum} is separately set out in the next proposition. \begin{proposition}\label{prop:Homogeneous-1} For each $n\geq 1$, \begin{equation*} \langle h_1^n,\dots,h_n^n\rangle=\langle h_1^n,h_2^{n-1},\dots,h_n^{n-1}\rangle. \end{equation*} \end{proposition} \begin{proof} For each $1\leq i\leq n-1$, \begin{equation*} h^n_{i+1}-x_n h^n_i=h^{n-1}_{i+1}. \end{equation*} The desired rearrangement of the ideal is achieved by performing the above elimination in sequence for $i=n-1$ down to $i=1$.
\end{proof} \begin{remark}\label{remk:SymQotForms} By Theorem~\ref{thm:monomial sum} and Proposition~\ref{prop:Homogeneous-1}, eliminating the last variable in $\mathbb{Z}[x_1,\dots,x_n]$ by rewriting $h_1$ as $x_n=-x_1-\cdots-x_{n-1}$ gives \begin{equation*} \frac{\mathbb{Z}[x_1,\dots,x_n]}{\langle h_1^n,\dots,h_n^n\rangle} \cong\frac{\mathbb{Z}[x_1,\dots,x_{n-1}]}{\langle h_2^{n-1},\dots,h_n^{n-1}\rangle} \cong\frac{\mathbb{Z}[x_1,\dots,x_{n-1}]}{\langle h_2^{n-1},\dots,h_n^{1}\rangle}. \end{equation*} \end{remark} \section{Determining spectral sequence differentials}\label{sec:FreeLoopSU(n+1)/Tn} The aim of this section is to determine the differentials in a spectral sequence converging to the free loop cohomology $H^{*}(\Lambda(SU(n+1)/T^n);\mathbb{Z})$ for $n \geq 1$. The case $n=0$ is trivial as $SU(1)$ is a point. The approach of the argument is similar to that of \cite{cohololgy_Lprojective}, in which the cohomology of the free loop spaces of spheres and complex projective space is calculated using spectral sequence techniques. However, the details in the case of the complete flag manifold of the special unitary group are considerably more complex, requiring a sophisticated combinatorial argument, arising from the structure of the complete homogeneous symmetric polynomials, not present for simpler spaces. \subsection{Differentials in the spectral sequence of the diagonal fibration}\label{sec:evalSS} For any space $X$, the map $eval \colon Map(I,X) \to X\times X$ is given by $\alpha \mapsto (\alpha(0),\alpha(1))$. It can be shown directly that $eval$ is a fibration with fiber $\Omega X$. In this section we compute the differentials in the cohomology Serre spectral sequence of this fibration for the case $X=SU(n+1)/T^n$. The aim is to compute $H^{*}(\Lambda(SU(n+1)/T^n);\mathbb{Z})$. The map $eval \colon \Lambda X \to X$ given by evaluation at the base point of a free loop is also a fibration with fiber $\Omega X$.
This is studied in Section~\ref{sec:diff} by considering a map of fibrations from the evaluation fibration for $SU(n+1)/T^n$ to the diagonal fibration, and hence the induced map on spectral sequences. For the rest of this section we consider the fibration \begin{equation}\label{eq:evalfib} \Omega (SU(n+1)/T^n) \to Map(I,SU(n+1)/T^n) \xrightarrow{eval} SU(n+1)/T^n\times SU(n+1)/T^n. \end{equation} By extending the fibration $T^n \to SU(n+1) \to SU(n+1)/T^n$, we obtain the homotopy fibration sequence \begin{equation}\label{eq:SU/Tfib} \Omega(SU(n+1)) \to \Omega(SU(n+1)/T^n) \to T^n \to SU(n+1). \end{equation} It is well known, see \cite{CohomologyOmega(G/U)}, that the right-most map above, that is, the inclusion of the maximal torus into $SU(n+1)$, is null-homotopic. Hence there is a homotopy section $T^n \to \Omega(SU(n+1)/T^n)$. Therefore, as \eqref{eq:SU/Tfib} is a principal fibration, there is a space decomposition $\Omega(SU(n+1)/T^n) \simeq \Omega(SU(n+1)) \times T^n$. Using the K\"{u}nneth formula, we obtain the algebra isomorphism \begin{equation}\label{eq:BaseLoopFlag} H^*(\Omega(SU(n+1)/T^n);\mathbb{Z}) \cong H^*(\Omega(SU(n+1));\mathbb{Z}) \otimes H^*(T^n;\mathbb{Z}) \cong \Gamma_{\mathbb{Z}}[x'_2,x'_4,\dots,x'_{2n}] \otimes \Lambda_{\mathbb{Z}}(y'_1,\dots,y'_n) \end{equation} where $\Gamma_{\mathbb{Z}}[x'_2,x'_4,\dots,x'_{2n}]$ is the integral divided polynomial algebra on $x'_2,\dots,x'_{2n}$ with $|x'_i|=i$ for each $i=2,\dots,2n$ and $\Lambda_{\mathbb{Z}}(y'_1,\dots,y'_n)$ is an exterior algebra generated by $y'_1,\dots,y'_n$ with $|y'_j|=1$ for each $j=1,\dots,n$. Since \begin{equation*} Map(I,SU(n+1)/T^n)\simeq SU(n+1)/T^n, \end{equation*} by Theorem~\ref{thm:H*SU/T} all the cohomology algebras of the spaces in fibration (\ref{eq:evalfib}) are known. By studying the long exact sequence of homotopy groups associated to the fibration $T^n \to SU(n+1) \to SU(n+1)/T^n$, we obtain that $SU(n+1)/T^n$, and hence $SU(n+1)/T^n \times SU(n+1)/T^n$, are simply connected.
Therefore the cohomology Serre spectral sequence of fibration (\ref{eq:evalfib}), denoted by $\{\bar{E}_r,\bar{d}^r\}$, converges to $H^*(SU(n+1)/T^n;\mathbb{Z})$ with $\bar{E}_2$-page \begin{equation*} \bar{E}^{p,q}_2=H^p(SU(n+1)/T^n\times SU(n+1)/T^n;H^q(\Omega(SU(n+1)/T^n);\mathbb{Z})). \end{equation*} In the following arguments we use the notation \begin{center} $H^*(Map(I,SU(n+1)/T^n);\mathbb{Z}) \cong \frac{\mathbb{Z}[\lambda_1,\dots,\lambda_{n+1}]}{\langle\sigma^{\lambda}_1,\dots,\sigma^{\lambda}_{n+1}\rangle}$ \end{center} and \begin{center} $H^*(SU(n+1)/T^n \times SU(n+1)/T^n;\mathbb{Z}) \cong \frac{\mathbb{Z}[\alpha_1,\dots,\alpha_{n+1}]}{\langle\sigma^{\alpha}_1,\dots,\sigma^{\alpha}_{n+1}\rangle} \otimes \frac{\mathbb{Z}[\beta_1,\dots,\beta_{n+1}]}{\langle\sigma^{\beta}_1,\dots,\sigma^{\beta}_{n+1}\rangle} $ \end{center} where $|\alpha_i|=|\beta_i|=|\lambda_i|=2$ for each $i=1,\dots,n+1$ and $\sigma^{\lambda}_i,\sigma^{\alpha}_i$ and $\sigma^{\beta}_i$ are the elementary symmetric polynomials in $\lambda_i,\alpha_i$ and $\beta_i$, respectively. 
\begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes, nodes in empty cells,nodes={minimum width=5ex, minimum height=5ex,outer sep=-5pt}, column sep=1ex,row sep=1ex]{ & \vdots & \vdots & & & & & & \\ & 2n\; & \langle x'_{2n} \rangle& & & & & & \\ & \vdots & \vdots & & & & & & \\ & 6 & \langle x'_6 \rangle & & & &\dots & & \\ H^{*}(\Omega(SU(n+1)/T^n;\mathbb{Z})) & 4 & \langle x'_4\rangle & & & & & & \\ & 2 & \langle x'_2\rangle & &\dots& & & & \\ & & & & & & & & \\ & 1 & \langle y'_i\rangle & \;\;\; \lcdot \;\;\; &\lcdot&\;\;\lcdot\;\;& \cdots& \lcdot & \cdots \\ & 0 & & \langle\alpha_i,\beta_i\rangle &\lcdot&\lcdot & \cdots& \lcdot & \cdots \\ &\quad\strut & 0 & 2 & 4 & 6 & \cdots& 2n & \cdots \strut \\}; \draw[-stealth] (m-2-3.south east) -- (m-8-8.north west); \draw[-stealth] (m-5-3.south east) -- (m-8-5.north); \draw[-stealth] (m-4-3.south east) -- (m-8-6.north); \draw[-stealth] (m-6-3.south) -- (m-8-4.north); \draw[-stealth] (m-8-3.south east) -- (m-9-4.north west); \draw[-stealth] (m-8-4.south east) -- (m-9-5.north west); \draw[-stealth] (m-8-5.south east) -- (m-9-6.north west); \draw[-stealth] (m-8-7.south east) -- (m-9-8.north west); \draw[thick] (m-1-2.east) -- (m-10-2.east) ; \draw[thick] (m-10-2.north) -- (m-10-9.north) ; \end{tikzpicture} \end{center} \begin{center} $\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; H^{*}(SU(n+1)/T^n\times SU(n+1)/T^n;\mathbb{Z})$ \end{center} \begin{center} \captionof{figure}{Generators in the integral cohomology Leray-Serre spectral sequence $\{\bar{E}_r,\bar{d}^r\}$ converging to $H^{*}(Map(I,SU(n+1)/T^n);\mathbb{Z})$.\label{fig:evalSS}} \end{center} In the remainder of this section we describe explicitly the images of the differentials shown in Figure~\ref{fig:evalSS} and show that all other differentials, not generated by these differentials using the Leibniz rule, are zero.
It will often be useful to use the alternative basis \begin{equation}\label{eq:ChangeBasis} v_i=\alpha_i-\beta_i \text{ and } u_i=\beta_i \end{equation} for $H^{*}(SU(n+1)/T^n\times SU(n+1)/T^n;\mathbb{Z})$, where $i=1,\dots,n+1$. The following lemma determines completely the $\bar{d}^2$ differential on $\bar{E}_2^{*,1}$. \begin{lemma}\label{lem:E^2_{*,1}d^2} With the notation above, in the cohomology Leray-Serre spectral sequence of fibration (\ref{eq:evalfib}), there is a choice of basis $y'_1,\dots,y'_n$ such that \begin{center} $\bar{d}^2(y'_i)=v_i$ \end{center} for each $i=1,\dots,n$. \end{lemma} \begin{proof} There is a homotopy commutative diagram \begin{equation*}\label{fig:evalcd} \xymatrix{ {SU(n+1)/T^n} \ar[r]^(.37){\Delta} & {SU(n+1)/T^n\times SU(n+1)/T^n} \ar@{=}[d] \\ {Map(I,SU(n+1)/T^n)} \ar[r]_(.42){eval} \ar[u]_{p_0} & {SU(n+1)/T^n\times SU(n+1)/T^n} } \end{equation*} where $p_0$, given by $\psi \mapsto \psi(0)$, is a homotopy equivalence and $\Delta$ is the diagonal map. As the cup product is induced by the diagonal map, $eval^*$ has the same image as the cup product. For dimensional reasons, $\bar{d}^2$ is the only possible non-zero differential ending at any $\bar{E}_*^{2,0}$ and no non-zero differential has domain in any $\bar{E}_*^{2,0}$. Therefore, in order for the spectral sequence to converge to $H^{*}(Map(I,SU(n+1)/T^n))$, the image of $\bar{d}^2 \colon \bar{E}_2^{0,1} \to \bar{E}_2^{2,0}$ must be the kernel of the cup product on $H^*(SU(n+1)/T^n\times SU(n+1)/T^n;\mathbb{Z})$, which is generated by $v_1,\dots,v_n$. \end{proof} \begin{remark}\label{rmk:unique} The only remaining differentials on generators left to determine are those with domain in $\langle x'_2,x'_4,\dots,x'_{2n} \rangle$ on some page $\bar{E}_r$ for $r \geq 2$. For dimensional reasons, the elements $x'_2,x'_4,\dots,x'_{2n}$ cannot be in the image of any differential.
By Lemma~\ref{lem:E^2_{*,1}d^2}, the generators $u_1,\dots,u_n$ must survive to the $\bar{E}_{\infty}$-page, so the generators $x'_2,x'_4,\dots,x'_{2n}$ cannot. This is due to dimensional reasons combined with the fact that the spectral sequence must converge to $H^*(SU(n+1)/T^n)$. Now assume inductively that for each $i=1,\dots,n$ and $1\leq j<i$, $\bar{d}^{2j}$ is constructed. For dimensional reasons, and due to all lower rows except $\bar{E}_r^{*,2}$ and $\bar{E}_r^{*,1}$ being annihilated by differentials already determined at lower values of $1\leq j<i$, the only possible non-zero differential beginning at $x'_{2i}$ is $\bar{d}^{2i}:\bar{E}_{2i}^{0,2i}\to \bar{E}_{2i}^{2i,1}$. The image of each of the differentials $\bar{d}^{2i}$ will therefore be a unique class in $\bar{E}_{2i}^{2i,1}$ in the kernel of $\bar{d}^2$ not already contained in the image of any $\bar{d}^r$ for $r<2i$. We note also that the elements $(x_{2i})_m$ for each $m \geq 2$ are also generators on the $\bar{E}_2$-page of the spectral sequence. The differentials in the spectral sequence are completely determined on all $(x_{2i})_m$ by their image on $x_{2i}$ in the following way. Using the relations $x_{2i}^m=m!(x_{2i})_m$ and the Leibniz rule, it follows that $\bar{d}^2(x_{2i}^m) = m\bar{d}^2(x_{2i})x_{2i}^{m-1}$ and hence, again using the relations, \begin{equation*} \bar{d}^2((x_{2i})_m) = \bar{d}^2(x_{2i})(x_{2i})_{m-1}. \end{equation*} Since $\bar{d}^{2i}(x_{2i})$ must be non-trivial, there are no torsion elements on the spectral sequence pages $\bar{E}_r^{*,*}$ and $(x_{2i})_m$ cannot be in the image of any differential, so we obtain that $\bar{d}^r((x_{2i})_m) = \bar{d}^r(x_{2i})(x_{2i})_{m-1} = 0$ for $2\leq r<2i$. Therefore, applying on the $\bar{E}_{2i}$-page the same arguments used to derive the $\bar{E}_2$-page equation above, we have \begin{equation}\label{eq:DifOnDivPoly} \bar{d}^{2i}((x_{2i})_m) = \bar{d}^{2i}(x_{2i})(x_{2i})_{m-1}.
\end{equation} \end{remark} We have $\bar{d}^2(u_i)=\bar{d}^2(v_i)=0$ and by Lemma~\ref{lem:E^2_{*,1}d^2} we may assume that $\bar{d}^2(y'_i)=v_i$ for each $i=1,\dots,n$. All non-zero generators $\gamma \in \bar{E}_2^{*,1}$ can be expressed in the form \begin{center} $\gamma = y'_k u_{i_1} \cdots u_{i_s} v_{j_1} \cdots v_{j_t}$ \end{center} for some $1\leq k \leq n$, $1\leq i_1<\cdots<i_s\leq n$ and $1\leq j_1<\cdots<j_t\leq n$. Therefore, $\bar{d}^2(\gamma)$ is zero only if it is contained in $\langle \sigma^{\alpha}_1,\dots,\sigma^{\alpha}_{n+1}, \sigma^{\beta}_1,\dots,\sigma^{\beta}_{n+1}\rangle$. Hence it is important to understand the structure of the symmetric polynomials $\sigma^{\alpha}_1,\dots,\sigma^{\alpha}_{n+1}, \sigma^{\beta}_1,\dots,\sigma^{\beta}_{n+1}$. From Remark~\ref{remk:SymQotForms}, we see that the polynomials $\sigma^{\alpha}_1$ and $\sigma^{\beta}_1$ can be thought of as expressions for $\alpha_{n+1}$ and $\beta_{n+1}$ in terms of the other generators of the ideal, and the generators $\sigma^{\alpha}_2,\dots,\sigma^{\alpha}_{n+1}, \sigma^{\beta}_2,\dots,\sigma^{\beta}_{n+1}$ can be replaced with complete homogeneous symmetric polynomials in $\alpha_1,\dots,\alpha_{n}$ and $\beta_1,\dots,\beta_{n}$. It turns out that this rearranged basis is more convenient for deducing the differentials in the spectral sequence. Using the next two lemmas, we determine the correspondence between each $\sigma_l^\alpha$ and $\sigma_l^\beta$ and generators of the kernel of $\bar{d}^2$, which provides us with the image of the remaining differentials. For each $n \geq 1$, let $A=(a_1,\dots,a_n),B=(b_1,\dots,b_n)\in \mathbb{Z}^n_{\geq 0}$, such that not all of $a_1,\dots,a_n$ are zero. Denote by $L(a_1,\dots,a_n)$ the number of non-zero entries in $(a_1,\dots,a_n)$.
Define the quotient of the permutation group on $n$ elements $\zeta_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)} \coloneqq S_n/\sim$ by \begin{equation*} \pi \sim \rho \iff (a_{\pi(1)},\dots,a_{\pi(n)})=(a_{\rho(1)},\dots,a_{\rho(n)}) \text{ and } (b_{\pi(1)},\dots,b_{\pi(n)})=(b_{\rho(1)},\dots,b_{\rho(n)}). \end{equation*} Let $1\leq x(a_1,\dots,a_n) \leq n$ be the minimal integer such that \begin{equation}\label{eq:x} a_{x(a_1,\dots,a_n)} \geq 1. \end{equation} Notice that $x(a_{\pi(1)},\dots,a_{\pi(n)})=x(a_{\rho(1)},\dots,a_{\rho(n)})$ if $\pi \sim \rho$. Define the element $s_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)}$ of $E_2^{2l-1,1}$ by \begin{equation*} s_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)} \coloneqq \sum_{\substack{\pi \in \zeta_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)}}} {y'_{x(a_{\pi(1)},\dots,a_{\pi(n)})}v_{1}^{a_{\pi(1)}}\cdots v_{x(a_{\pi(1)},\dots,a_{\pi(n)})}^{a_{x(a_{\pi(1)},\dots,a_{\pi(n)})}-1}\cdots v_{n}^{a_{\pi(n)}} u_{1}^{b_{\pi(1)}}\cdots u_{n}^{b_{\pi(n)}}}. \end{equation*} \begin{lemma}\label{lem:SmallS} With $s_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)}$ as given above, \begin{align*} \bar{d}^{2}&(s_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)})= \\ &\sum_{\substack {\pi \in \zeta_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)} \\ 0\leq t_j \leq a_j, \; 1\leq j \leq n}} \Bigg{(}\prod_{1\leq k \leq n}{(-1)^{a_k-t_k}\binom{a_k}{t_k}}\Bigg{)} \alpha_{1}^{t_{\pi(1)}}\cdots \alpha_{n}^{t_{\pi(n)}} \beta_{1}^{b_{\pi(1)}+a_{\pi(1)}-t_{\pi(1)}}\cdots\beta_{n}^{b_{\pi(n)}+a_{\pi(n)}-t_{\pi(n)}} . \end{align*} \end{lemma} \begin{proof} Applying Lemma~\ref{lem:E^2_{*,1}d^2}, the Leibniz rule and the change of basis (\ref{eq:ChangeBasis}), \begin{align*} \bar{d}^{2}&(s_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)})= \sum_{\substack{\pi \in \zeta_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)}}} (\alpha_{1}-\beta_{1})^{a_{\pi(1)}} \cdots (\alpha_{n}-\beta_{n})^{a_{\pi(n)}} \beta_{1}^{b_{\pi(1)}}\cdots \beta_{n}^{b_{\pi(n)}}.
\end{align*} Using the binomial expansion on the terms $(\alpha_{j}-\beta_{j})^{a_{\pi(j)}}$ for each $1\leq j \leq n$, we obtain \begin{align*} \bar{d}^{2}(s_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)})= & \\ \sum_{\pi \in \zeta_{(b_1,\dots,b_n)}^{(a_1,\dots,a_n)}} & \Bigg{(} \bigg{(}\sum_{0\leq t\leq a_{\pi(1)}}{(-1)^{a_{\pi(1)}-t}\binom{a_{\pi(1)}}{t}\alpha_{1}^t\beta_{1}^{a_{\pi(1)}-t}} \bigg{)} \cdots \\ & \cdots \bigg{(}\sum_{0\leq t\leq a_{\pi(n)}}{(-1)^{a_{\pi(n)}-t}\binom{a_{\pi(n)}}{t}\alpha_{n}^t\beta_{n}^{a_{\pi(n)}-t}} \bigg{)} \beta_{1}^{b_{\pi(1)}}\cdots\beta_{n}^{b_{\pi(n)}} \Bigg{)}. \end{align*} The expression in the statement of the lemma now follows by collecting the terms with the same product of $\alpha$'s and $\beta$'s. \end{proof} Using the notation above, define \begin{align}\label{eq:BigS} & S^{(c_1,\dots,c_n)} \coloneqq \nonumber \\ & \sum_{\substack{ 0\leq t_j \leq c_j, \\ 1\leq j \leq n, \\ \text{some } t_j > 0 }} s_{(c_1-t_1,\dots,c_n-t_n)}^{(t_1,\dots,t_n)} \prod_{1 \leq i \leq n} \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i-t_i, \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{t_i+L(a_1,\dots,a_{c_i})} \binom{c_i}{a_1,\dots,a_{c_i}}. \end{align} Applied to $S^{(c_1,\dots,c_n)}$, the second differential gives an expression involving only products of $\alpha$'s or only products of $\beta$'s of type $(c_1,\dots,c_n)$. \begin{lemma}\label{lem:d2BigS} With $S^{(c_1,\dots,c_n)}$ given as above, \begin{equation*} \bar{d}^2(S^{(c_1,\dots,c_n)}) = \sum_{\pi \in \zeta_{(c_1,\dots,c_n)}^{(c_1,\dots,c_n)}} {\alpha_{1}^{c_{\pi(1)}}\cdots \alpha_{n}^{c_{\pi(n)}}} +(-1)^{c_1+\cdots+c_n+1}\sum_{\pi \in \zeta_{(c_1,\dots,c_n)}^{(c_1,\dots,c_n)}} {\beta_{1}^{c_{\pi(1)}}\cdots \beta_{n}^{c_{\pi(n)}}}.
\end{equation*} \end{lemma} \begin{proof} It follows from Lemma~\ref{lem:SmallS} that the only monomials in $\bar{d}^2(S^{(c_1,\dots,c_n)})$ are integer multiples of \begin{equation}\label{eq:SumTerms} \alpha_{1}^{t_{\pi(1)}}\cdots \alpha_{n}^{t_{\pi(n)}} \beta_{1}^{c_{\pi(1)}-t_{\pi(1)}}\cdots \beta_{n}^{c_{\pi(n)}-t_{\pi(n)}} \end{equation} for some $(t_1,\dots,t_n)\leq (c_1,\dots,c_n)$ and $\pi \in \zeta_{(c_1-t_1,\dots,c_n-t_n)}^{(t_1,\dots,t_n)}$. When $(t_1,\dots,t_n)=(c_1,\dots,c_n)$, the monomials in (\ref{eq:SumTerms}) are \begin{equation*} \alpha_{1}^{c_{\pi(1)}}\cdots \alpha_{n}^{c_{\pi(n)}} \end{equation*} for each $\pi \in \zeta_{(0,\dots,0)}^{(c_1,\dots,c_n)}$. By Lemma~\ref{lem:SmallS}, these monomial terms only appear in the image under the differential $\bar{d}^2$ of $s^{(c_1,\dots,c_n)}_{(0,\dots,0)}$ and appear with multiplicity one. Hence the statement of the lemma holds when restricted to such monomials. By induction on $l=|(c_1,\dots,c_n)-(t_1,\dots,t_n)|$ we now prove that the statement holds for all other terms containing monomials in~(\ref{eq:SumTerms}). Take $0\leq (t'_1,\dots,t'_n) < (c_1,\dots,c_n)$ such that $|(c_1,\dots,c_n)-(t'_1,\dots,t'_n)|=l\geq 1$. Since the terms of monomials in (\ref{eq:SumTerms}) for $(t'_1,\dots,t'_n)$ only occur in the image under the $\bar{d}^2$ differential of $s^{(t_1,\dots,t_n)}_{(c_1-t_1,\dots,c_n-t_n)}$ for $ 0 < (t_1,\dots,t_n) \leq (c_1,\dots,c_n)$ such that $|(c_1,\dots,c_n)-(t_1,\dots,t_n)|\leq l$, the multiplicity of $s^{(t_1,\dots,t_n)}_{(c_1-t_1,\dots,c_n-t_n)}$ in (\ref{eq:BigS}) for $|(c_1,\dots,c_n)-(t_1,\dots,t_n)| > l$ does not affect the multiplicity of the terms corresponding to monomials with $(t'_1,\dots,t'_n)$ in (\ref{eq:SumTerms}).
Hence, we need only show that the sum of the terms of monomials in (\ref{eq:SumTerms}) for $(t'_1,\dots,t'_n)\neq(0,\dots,0)$ arising from the $s^{(t_1,\dots,t_n)}_{(c_1-t_1,\dots,c_n-t_n)}$ with $(t'_1,\dots,t'_n) < (t_1,\dots,t_n) \leq (c_1,\dots,c_n)$ is the negative of the constant \begin{equation}\label{eq:CancelCoef} K_{(t'_1,\dots,t'_n)} = \prod_{1 \leq i \leq n} \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i-t'_i, \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{t'_i+L(a_1,\dots,a_{c_i})}\binom{c_i}{a_1,\dots,a_{c_i}}. \end{equation} The only remaining case is the terms of monomials from (\ref{eq:SumTerms}) when $(t'_1,\dots,t'_n)=(0,\dots,0)$, which requires the sum to be $(-1)^{c_1+\cdots+c_n+1}$. Returning first to the main cases, the sum we are interested in is the same for any $\pi \in \zeta_{(c_1-t'_1,\dots,c_n-t'_n)}^{(t'_1,\dots,t'_n)}$. So without loss of generality, we assume that $\pi$ is the identity. In this case, using the formula from Lemma~\ref{lem:SmallS}, we consider \begin{equation*} \sum_{\substack{(t'_1,\dots,t'_n) < (t_1,\dots,t_n) \leq (c_1,\dots,c_n)}} {K_{(t_1,\dots,t_n)}}\prod_{1\leq j \leq n}{(-1)^{t_j-t'_j}\binom{t_j}{t_j-t'_j}} \end{equation*} where we have additionally used the fact that $\binom{t_j}{t'_j}=\binom{t_j}{t_j-t'_j}$, which using (\ref{eq:CancelCoef}) can be inductively rewritten as \begin{align}\label{eq:InductionExpression} \sum_{(t'_1,\dots,t'_n) < (t_1,\dots,t_n) \leq (c_1,\dots,c_n)} \prod_{1\leq j \leq n}{(-1)^{t_j-t'_j}\binom{t_j}{t_j-t'_j}} \\ \cdot \prod_{1 \leq i \leq n} \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i-t_i, \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{t_i+L(a_1,\dots,a_{c_i})} \binom{c_i}{a_1,\dots,a_{c_i}}.
\nonumber \end{align} Since $c_i-a_1-\cdots -a_{c_i}=t_i$, using equation~(\ref{eq:MultinomilExpansion}) we see that multiplying ${\binom{t_j}{t_j-t'_j}}$ by $\binom{c_i}{a_1,\dots,a_{c_i}}$ in (\ref{eq:InductionExpression}) when $i=j$ is the same as extending each $a_1,\dots,a_{c_i}$ to $a_1,\dots,a_{c_i},t_j-t'_j$, which means that we can rewrite (\ref{eq:InductionExpression}) as \begin{align}\label{eq:InductionExpression2} \sum_{(t'_1,\dots,t'_n) < (t_1,\dots,t_n) \leq (c_1,\dots,c_n)} \prod_{1 \leq i \leq n} \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i-t_i, \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{t'_i+L(a_1,\dots,a_{c_i})} \binom{c_i}{a_1,\dots,a_{c_i}, t_i-t'_i} \end{align} where we also take the signs into account noting that \begin{equation*} (-1)^{t_i-t'_i}(-1)^{t_i+L(a_1,\dots,a_{c_i})} =(-1)^{t'_i+L(a_1,\dots,a_{c_i})}. \end{equation*} Notice also that as $|(a_1,\dots,a_{l_i},t_i-t'_i,0,\dots,0)|=c_i-t'_i$, once $t_i-t'_i$ is moved before the zero entries, the multinomial type coefficient terms appearing in (\ref{eq:InductionExpression2}) are the same as those appearing in (\ref{eq:CancelCoef}). It remains to check that the multiplicity of the sum of these coefficients in (\ref{eq:InductionExpression2}) agrees with (\ref{eq:CancelCoef}). Each multinomial type coefficient in (\ref{eq:CancelCoef}) is determined by a choice of sequence $(a_1,\dots,a_{c_i})$ for $1\leq i \leq n$. Once the product is expanded, the multinomial type coefficient terms in (\ref{eq:InductionExpression}) are the product of $n$ coefficients corresponding to $n$ sequences \begin{equation*} (a_1,\dots,a_{c_i})=(a_1,\dots,a_{l_i},0,\dots,0), \end{equation*} for each $1\leq i \leq n$ and some $1\leq l_i\leq c_i$.
The coefficient product terms in (\ref{eq:InductionExpression2}) of the same form correspond to $n$ sequences of the form \begin{equation}\label{eq:SubSequences} (a_1,\dots,a_{l_i},0,\dots,0) \text{ or } (a_1,\dots,a_{l_i-1},0,\dots,0) \end{equation} for $1 \leq i \leq n$, the second case corresponding to extending by $t_i-t'_i>0$ and the first to extending by $t_i-t'_i=0$. Note that for at least one $i$ the second case must be chosen, as $(t'_1,\dots,t'_n)<(t_1,\dots,t_n)$. However, as we fix the order of the $n$ sequences, all other possible combinations of choices occur exactly once. The sign $(-1)^{t'_i+L(a_1,\dots,a_{c_i})}=(-1)^{t'_i+l_i}$ of each of the terms from (\ref{eq:CancelCoef}) inside the product differs from the corresponding sign in (\ref{eq:InductionExpression2}) each time the second choice in (\ref{eq:SubSequences}) is taken, since for non-zero $t_j-t'_j$ we have $L(a_1,\dots,a_{l_i-1},t_j-t'_j,0,\dots,0) = L(a_1,\dots,a_{l_i-1},0,\dots,0)+1$. The number of times a given product of coefficients from (\ref{eq:CancelCoef}) is obtained in (\ref{eq:InductionExpression2}) is exactly the number of choices possible in (\ref{eq:SubSequences}). Such a choice corresponds to picking a subset of an $n$-set that is not the entire set. As the sign depends on the size of the chosen subset, the total sum of these terms is the alternating sum of the $n^{\text{th}}$ binomial coefficients (\ref{eq:binom}), without the first term in the sum. The result is therefore a single term of multiplicity $1$ up to sign. This sign is the opposite of that of the first missing term from the alternating sum of binomial coefficients, which corresponds to the term in (\ref{eq:CancelCoef}) as required. Finally, it remains to consider the case when $(t'_1,\dots,t'_n)=(0,\dots,0)$.
Using the result of the previous part of the proof and substituting $t'_i=0$ in (\ref{eq:CancelCoef}), this will be the negative of \begin{equation*} \prod_{1 \leq i \leq n} \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i, \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{L(a_1,\dots,a_{c_i})} \binom{c_i}{a_1,\dots,a_{c_i}} =(*). \end{equation*} Since now $a_1+\dots+a_{c_i}=c_i$, the coefficients $\binom{c_i}{a_1,\dots,a_{c_i}}$ are genuine multinomial coefficients. Hence, by collecting all sequences with the constant $L(a_1,\dots,a_{c_i})=j$ and using equation~(\ref{eq:StirlingExpansion}) followed by Lemma~\ref{lem:StirlingIdentity}, we rewrite the equation above as \begin{equation*} (*) = \prod_{1\leq i \leq n}\sum_{1\leq j \leq c_i}(-1)^j j!\stirling{c_i}{j} = \prod_{1\leq i \leq n}(-1)^{c_i} = (-1)^{c_1+\cdots+c_n} \end{equation*} which completes the proof. \end{proof} We can now describe explicitly the image of $\bar{d}^2$ in the spectral sequence. \begin{theorem}\label{thm:d^2Image} The only non-zero differentials on the generators $x'_{2(l-1)}$ for $2\leq l\leq n+1$ are given by \begin{equation*} \bar{d}^{2(l-1)}(x'_{2(l-1)}) =\sum_{\substack{|(c_1,\dots,c_{n})|=l}}{S^{(c_1,\dots,c_n)}}. \end{equation*} \end{theorem} \begin{proof} Applying Lemma~\ref{lem:d2BigS}, the image under $\bar{d}^2$ of the right hand side is, up to sign, the difference of the complete homogeneous symmetric polynomials of degree $l$ in the $\alpha$'s and in the $\beta$'s respectively. By Remark~\ref{rmk:unique}, since no expression in only $\alpha$'s or $\beta$'s lies in the image of $\bar{d}^2$, this difference of homogeneous symmetric polynomials in the $\alpha$'s and $\beta$'s must generate the image of $x'_{2(l-1)}$ under $\bar{d}^{2(l-1)}$.
\end{proof} \subsection{Differentials for the free loop spectral sequence}\label{sec:diff} Next we consider the map $\phi$ of the evaluation fibration of $SU(n+1)/T^n$ for $n \geq 1$ and the diagonal fibration studied in Section~\ref{sec:evalSS}, given by the following commutative diagram \begin{equation*}\label{fig:fibcdSU} \xymatrix{ {\Omega(SU(n+1)/T^n)} \ar[r]^(.5){} \ar[d]^(.45){id} & {\Lambda(SU(n+1)/T^n)} \ar[r]^{eval} \ar[d]^(.45){exp} & {SU(n+1)/T^n} \ar[d]^(.45){\Delta} \\ {\Omega(SU(n+1)/T^n)} \ar[r]^(.45){} & {Map(I,SU(n+1)/T^n)} \ar[r]^(.425){eval} & {SU(n+1)/T^n\times SU(n+1)/T^n}} \end{equation*} where $\exp$ is given by $\exp(\alpha)(t)=\alpha(e^{2\pi i t})$. As $SU(n+1)/T^{n}$ is simply connected, the evaluation fibration induces a cohomology Leray-Serre spectral sequence $\{ E_r,d^r \}$. Hence $\phi$ induces a map of spectral sequences $\phi^*:\{ \bar{E}_r,\bar{d}^r \} \to \{ E_r,d^r \}$. More precisely, for each $r\geq 2$ and $a,b \in \mathbb{Z}$, there is a commutative diagram \begin{equation}\label{fig:phicd} \xymatrix{ {\bar{E}_r^{a,b}} \ar[r]^(.4){\bar{d}^r} \ar[d]^(.46){\phi^*} & {\bar{E}_r^{a+r,b-r+1}} \ar[d]^(.46){\phi^*} \\ {E_r^{a,b}} \ar[r]^(.4){d^r} & {E_r^{a+r,b-r+1}}} \end{equation} where $\phi^*$ for each successive $r$ is the induced map on the homology of the previous page, beginning on the $\bar{E}_2$-page as the map induced on the tensor product by the maps $id \colon \Omega(SU(n+1)/T^n)\to\Omega(SU(n+1)/T^n)$ and $\Delta \colon SU(n+1)/T^n \to SU(n+1)/T^n \times SU(n+1)/T^n$.
For the remainder of the section we use the notation \begin{center} $H^*(\Omega(SU(n+1)/T^n);\mathbb{Z})\cong \Gamma_{\mathbb{Z}}(x_2,x_4,\dots,x_{2n})\otimes\Lambda_{\mathbb{Z}}(y_1,\dots,y_{n})$ \end{center} \begin{center} and $\;\;\; H^*(SU(n+1)/T^n;\mathbb{Z})\cong \frac{\mathbb{Z}[\gamma_1,\dots,\gamma_{n+1}]}{\langle\sigma^\gamma_1,\dots,\sigma_{n+1}^\gamma\rangle}$, \end{center} where $|y_i|=1, |\gamma_j|=2, |x_{2i}|=2i$ for each $1\leq i\leq n$, $1\leq j\leq n+1$ and $\sigma^\gamma_1,\dots,\sigma_{n+1}^\gamma$ are the elementary symmetric polynomials in the $\gamma_j$. We now determine all the differentials in $\{ E_r,d^r \}$. \begin{theorem}\label{thm:allDiff} For each integer $n\geq 1$ and for $2 \leq l \leq n+1$, the only non-zero differentials on generators of the $E_2$-page of $\{ E_r,d^r \}$ are, up to class representative and sign, given by \begin{equation*} d^{2(l-1)}(x_{2(l-1)}) = \sum_{\substack{|(c_1,\dots,c_{n})|=l, \\ 1\leq j \leq n, \; c_j \geq 1}} {c_jy_{j}\gamma_1^{c_1}\cdots\gamma_j^{c_j-1}\cdots\gamma_n^{c_{n}}}. \end{equation*} \end{theorem} \begin{proof} The identity $id \colon \Omega(SU(n+1)/T^n)\to\Omega(SU(n+1)/T^n)$ induces the identity map on cohomology. The diagonal map $\Delta \colon SU(n+1)/T^n \to SU(n+1)/T^n \times SU(n+1)/T^n$ induces the cup product on cohomology. Hence by choosing appropriate generators of $\{ E_r,d^r \}$, we may assume that \begin{center} $\phi^*(y'_i)=y_i,\;\;\phi^*(x'_i)=x_i\;\;$and$\;\;\phi^*(\alpha_i)=\phi^*(\beta_i)=\phi^*(u_i)=\gamma_i,\;\;$so$\;\;\phi^*(v_i)=0$. \end{center} For dimensional reasons, the only possibly non-zero differential on the generators ${y}_i$ in $\{ E_r,d^r \}$ is $d^{2}$. However, for each $1\leq i\leq n$, using commutative diagram (\ref{fig:phicd}) and Lemma~\ref{lem:E^2_{*,1}d^2}, we have \begin{center} $d^2(y_i)=d^2(\phi^*(y'_i))=\phi^*(\bar{d}^2(y'_i))=\phi^*(v_i)=0$.
\end{center} Additionally, all differentials on the generators $\gamma_i$, for each $1\leq i \leq n+1$, are zero for dimensional reasons. Hence all elements of $E_2^{*,1}$ and $E_2^{*,0}$ survive to the $E_{\infty}$-page unless they are in the image of some differential $d^r$ for $r\geq 2$. Using commutative diagram (\ref{fig:phicd}), we have up to class representative and sign \begin{equation*} d^{2(l-1)}(x_{2(l-1)})=\phi^*(\bar{d}^{2(l-1)}(x'_{2(l-1)})). \end{equation*} Since $\phi^*(v_i)=0$, an expression for $\phi^*(\bar{d}^{2(l-1)}(x'_{2(l-1)}))$ is obtained from the terms in the expression of Theorem~\ref{thm:d^2Image} that do not contain $v_i$. Therefore \begin{align}\label{eq:ImediateDiff} \nonumber &d^{2(l-1)}(x_{2(l-1)}) = \\ &\sum_{\substack{|(c_1,\dots,c_{n})|=l, \\ 1\leq j \leq n, \; c_j \geq 1}} {y_{j}\gamma_1^{c_1}\cdots\gamma_j^{c_j-1}\cdots\gamma_n^{c_{n}}} \prod_{1 \leq i \leq n} \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i-\mathbb{I}_{i=j}, \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{\mathbb{I}_{i=j}+L(a_1,\dots,a_{c_i})} \binom{c_i}{a_1,\dots,a_{c_i}} \end{align} where $\mathbb{I}_{i=j}$ is the indicator function equal to $1$ if $i=j$ and $0$ otherwise. Notice that by applying (\ref{eq:StirlingExpansion}) to the $(a_1,\dots,a_{c_i})$ with the same value of $L(a_1,\dots,a_{c_i})$ and then using Lemma~\ref{lem:StirlingIdentity}, when $i \neq j$, we have \begin{equation*} \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{L(a_1,\dots,a_{c_i})} \binom{c_i}{a_1,\dots,a_{c_i}} = \sum_{1\leq L \leq c_i}(-1)^L L!\stirling{c_i}{L} = (-1)^{c_i} .
\end{equation*} Additionally, when $i=j$, using (\ref{eq:MultinomilExpansion}) we have \begin{align*} &\sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i-1 \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{1+L(a_1,\dots,a_{c_i})} \binom{c_i}{a_1,\dots,a_{c_i}} \\ =& \sum_{\substack{(a_1,\dots,a_{c_i}) \in \mathbb{Z}_{\geq 0}^{c_i} \\ |(a_1,\dots,a_{c_i})|=c_i-1 \\ a_{k-1} = 0 \implies a_{k} = 0, \\ 2 \leq k \leq c_i}} (-1)^{1+L(a_1,\dots,a_{c_i})} \binom{c_i}{1} \binom{c_i-1}{a_1,\dots,a_{c_i}} \\ =& c_i\sum_{1\leq L \leq c_i-1}(-1)^{L+1} L!\stirling{c_i-1}{L} = c_i(-1)^{c_i} \end{align*} which together reduce (\ref{eq:ImediateDiff}) to \begin{equation*} d^{2(l-1)}(x_{2(l-1)}) = \sum_{\substack{|(c_1,\dots,c_{n})|=l, \\ 1\leq j \leq n, \; c_j \geq 1}} {(-1)^{l}c_jy_{j}\gamma_1^{c_1}\cdots\gamma_j^{c_j-1}\cdots\gamma_n^{c_{n}}}. \end{equation*} Since all terms have the same sign we choose the positive generator, obtaining the formula in the statement of the theorem. \end{proof} \begin{remark}\label{rmk:DifOnDivPoly} By applying $\phi^*$ to equation \eqref{eq:DifOnDivPoly} we obtain additionally for each $m\geq 2$ that \begin{equation*} d^{2(l-1)}((x_{2(l-1)})_m) = d^{2(l-1)}(x_{2(l-1)})(x_{2(l-1)})_{m-1}. \end{equation*} \end{remark} \section{Spectral sequence computations with Gr{\"o}bner bases}\label{sec:SpectralGrobner} The following proposition describes the application of Gr\"{o}bner bases to spectral sequences making use of the theory in Section~\ref{sec:Grobner}; it motivates Theorem~\ref{thm:Ideals} and will be heavily used in Section~\ref{sec:LSU4/T3} of the paper. A library of code for performing the computations outlined in the proposition can be found at \cite{Burfitt2021}.
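The ideal intersections appearing below can be computed by the standard elimination trick: $J_1 \cap J_2$ is generated by the elements of a Gr\"{o}bner basis of $tJ_1 + (1-t)J_2$ that do not involve the auxiliary variable $t$, taken with respect to an order eliminating $t$. The following toy sketch illustrates the technique in sympy over $\mathbb{Q}$; it is only an illustration, not the integral routines of the library cited above.

```python
# Toy illustration: intersect two polynomial ideals via Groebner bases
# and an elimination variable t.  Here J1 = <x> and J2 = <y> in Q[x, y],
# whose intersection is <x*y>.
from sympy import groebner, symbols

t, x, y = symbols('t x y')

# Groebner basis of t*J1 + (1 - t)*J2 with lex order eliminating t first.
G = groebner([t * x, (1 - t) * y], t, x, y, order='lex')

# The basis elements free of t generate J1 ∩ J2.
intersection = [g for g in G.exprs if t not in g.free_symbols]
print(intersection)  # [x*y]
```

Over $\mathbb{Z}$ the analogous computation requires strong Gr\"{o}bner bases, which is one reason a dedicated implementation is used for the calculations in this paper.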
The proposition applies straightforwardly to the Leray-Serre spectral sequence and is stated here for cohomology spectral sequences of algebras, though it could be reformulated through duality for a homology spectral sequence of algebras, swapping the base space with the fibre and rows for columns. \begin{proposition}\label{thm:SpectralGrobner} Let $\{ E_r, d^r \}$ be a first quadrant spectral sequence of algebras with coefficients in $R$ for which the $E_2$-page is of the form $E_2^{p,q} = E_2^{p,0}\otimes E_2^{0,q}$. Suppose also that $E_2^{*,0}$ can be expressed as the quotient of a polynomial algebra \begin{equation*} A = \frac{R[x_1,x_2,\dots,x_n]}{I} \end{equation*} for some ideal $I$ in $R[x_1,x_2,\dots,x_n]$. For some $a\geq 0$, let $S_a$ be the additive generators of $E_r^{0,a}\setminus\{x_1,x_2,\dots,x_n\}$. The computation within each row may be broken down as follows. The kernel of $d^r$ in $A$ on row $E_{r}^{*,a}$ is generated by \begin{enumerate} \item the kernel of the map \begin{equation*} \phi_a^r \colon R[S_{a},x_1,x_2,\dots,x_n] \to R[S_{a-r+1},x_1,x_2,\dots,x_n] \end{equation*} induced by $d^r$ and \item the pre-image under $d^r$ of the generators of the intersection \begin{equation*} \im(\phi_a^r) \cap S_{a-r+1}I \end{equation*} where $S_{a-r+1}I= \{ si \; | \; s\in S_{a-r+1},\, i \in I \}$. Treating $S_{a-r+1}I$ as an ideal generated by elements $si$ such that $s\in S_{a-r+1}$ and $i$ is a generator of $I$, the intersection can be computed by Gr\"{o}bner bases as an intersection of ideals, noting that any element that increases the degree of the $S_{a-r+1}$ part is no longer on row $a$ and hence is ignored, as justified below. \end{enumerate} The algebra structure on the $E_{r+1}$-page is generated by the union of the row generators given above and $S_{a}I \cap E_r^{a,*}$ for each $a\geq 0$.
The relations for the algebra structure on the $E_{r+1}$-page are given by the union of $I,\: \im(d^b)$ for $2\leq b \leq r$ and the relations of the first column $E_{2}^{*,0}$ intersected with the generators of the $E_{r+1}$-page algebra. In addition, when $R=\mathbb{Z}$, the types of torsion present in each row $E_r^{a,*}$ and their generating relations can be obtained from the reduced Gr\"{o}bner basis of the ideal \begin{equation}\label{eq:GrobnerTorsionPart} \langle I\cap E_r^{a,*},\: \im(d^2)\cap E_r^{a,*},\: \im(d^3)\cap E_r^{a,*}, \dots,\: \im(d^r)\cap E_r^{a,*} \rangle \end{equation} as the non-unit greatest common divisors of the absolute values of the coefficients of each element of the reduced Gr\"{o}bner basis. In this case the coefficients' greatest common divisors describe all possible torsion types present and their multiplicities. \end{proposition} For part (1) of the proposition, the kernel of a morphism of polynomial rings can also be obtained using Gr\"{o}bner bases; see for example \cite{Biase05}. However, such computations will be unnecessary in the present work. During a Gr\"{o}bner basis computation of the intersection of ideals in part (2), expressions containing elements in $E_r^{*,b}$ for $b>a$ may be considered; these can be discarded immediately, since reduction, $S$-polynomials and $G$-polynomials of such elements can never decrease the row $b$ during the procedure without reducing the entire polynomial to zero. Doing this greatly speeds up the execution of the algorithm. \begin{proof} For each $a\geq 0$ and $r\geq 2$, the following diagram commutes \begin{equation*} \xymatrix{ {\ker(\phi_a^r)} \ar[r]^(.5){} \ar[d]^(.45){q} & {R[S_{a},x_1,x_2,\dots,x_n]} \ar[r]^(0.46){\phi_a^r} \ar[d]^(.45){q} & {R[S_{a-r+1},x_1,x_2,\dots,x_n]} \ar[d]^(.45){q} \\ {\ker(d^r)} \ar[r]^(.45){} & {E_r^{a,*}} \ar[r]^-(.5){d^r} & {E_r^{a-r+1,*}}} \end{equation*} where $q$ is the quotient map by $I$. Elements of $\ker(\phi_a^r)$ remain in $\ker(d^r)$ after applying the quotient map $q$.
Other elements of $R[S_{a},x_1,x_2,\dots,x_n]$ not in $\ker(\phi_a^r)$ that are in the kernel of $d^r$ under $q$ have non-trivial image under $\phi^r_a$, and this image is contained in $S_{a-r+1}I$. Hence parts (1) and (2) together describe the whole kernel of $d^r$ on row $E^{a,*}_{r}$. The relations from $I$ and $\im(d^b)$ span all relations in the spectral sequence by construction, and the additional generators $S_{a-r+1}I \cap E_r^{a,*}$ are formally added to ensure that all relations are contained in the span of the generators. Finally, the torsion in the case $R=\mathbb{Z}$ can be seen to be correct by the following inductive argument to $E$-reduce any minimal generating set $T_1,\dots,T_m$ of the torsion relations to the torsion polynomials in the minimal Gr\"{o}bner basis $G$. The leading term of $T_1$ must be divisible by the leading term of a unique element $g$ in $G$. As $T_1$ can be reduced to $0$ by $G$, reducing its leading term to be equal to that of $g$ and then fully $E$-reducing it by $G\setminus g$ must leave exactly $g$. We follow the same steps to reduce the rest of $T_2,\dots,T_m$ to elements of $G$, except that we first $E$-reduce each $T_i$ fully by all the elements $g$ already obtained from $T_1,\dots,T_{i-1}$ before beginning. No $T_i$ can be reduced to zero by the previously obtained elements $g$, as we assume that $T_1,\dots,T_m$ is minimal, and there can be no additional torsion element of $G$ not obtained from the reduction of $T_1, \dots, T_m$, as this set is assumed to generate the torsion relations. \end{proof} Though not necessary in the course of this work, in some cases the procedure outlined in Proposition~\ref{thm:SpectralGrobner} can require an additional computational setup, which can again be carried out algorithmically as described in the following remark.
\begin{remark}\label{rmk:Pre-ImageReduction} Note that the pre-image of the Gr\"{o}bner basis of $\im(\phi_a^r) \cap S_{a-r+1}I$ in part (2) of Proposition~\ref{thm:SpectralGrobner} can be uniquely obtained computationally by fully reducing it by a Gr\"{o}bner basis of $\im(\phi_a^r)$ to $0$ while keeping track of the reductions. The generators of $\im(\phi_a^{r})$ need not be a Gr\"{o}bner basis, and since the reduction of $\im(\phi_a^r) \cap S_{a-r+1}I$ is in terms of a Gr\"{o}bner basis of $\im(\phi_a^r)$, the resulting expression from the reduction is in terms of their Gr\"{o}bner basis, not the original generators. For the computations made in the paper we will only need to consider the case where the generators of $\im(\phi_a^{r})$ form a Gr\"{o}bner basis. However, an expression in terms of the original generators can be deduced as follows. Track the computation of the reduced Gr\"{o}bner basis of $\im(\phi_a^{r})$ to obtain an expression for the Gr\"{o}bner basis in terms of the original generators. This expression is unique up to syzygies of the Gr\"{o}bner basis, which can be computed through a reduction procedure, see for example \cite{Erocal16}, the algorithm being equally valid over $\mathbb{Z}$. \end{remark} Though Proposition~\ref{thm:SpectralGrobner} provides a procedure for computations with spectral sequences of the appropriate form, the later calculations in this work are obtained under stricter assumptions that greatly enhance the computational efficiency of integral Gr\"{o}bner bases, as detailed in the next remark. \begin{remark}\label{rmk:HomGrobSimplification} When the generators $x_1,x_2,\dots,x_n$ of the algebra $A$ in Proposition~\ref{thm:SpectralGrobner} have the same degree, the representatives of generators of $\im(\phi_a^r)$ are homogeneous in their $x_1,x_2,\dots,x_n$ components.
Reduction preserves homogeneity and, in addition, $S$-polynomials and $G$-polynomials of homogeneous polynomials used during the Gr\"{o}bner basis computation are again homogeneous polynomials and only ever increase the degree. Assuming also that the generators of $I$ are homogeneous and that elements of $A$ have bounded degree, when computing the Gr\"{o}bner basis with such polynomials to obtain $\im(\phi_a^r) \cap S_{a-r+1}I$ in part (2) of the proposition, we may discard any $S$-polynomial or $G$-polynomial of homogeneous degree greater than the maximal degree in $A$, greatly speeding up the computation. \end{remark} \section{Basis}\label{sec:basis} We now develop theory to reveal the structure of the cohomology of the free loop space of $SU(n+1)/T^n$, which culminates in Theorem~\ref{thm:Ideals}. To begin, we consider a basis of $\mathbb{Z}[\gamma_1,\dots,\gamma_n]$ that resembles the image of the $d^2$ differential in Theorem~\ref{thm:allDiff}, as it becomes easier to study the $E_3$-page and subsequent pages of the spectral sequence; in addition to making the theoretical results possible, this also simplifies all the computations in Section~\ref{sec:LSU4/T3}. \begin{remark}\label{rmk:TildeBasis} In $\mathbb{Z}[\gamma_1,\dots,\gamma_{n}]$, let $\bar{\gamma}=\gamma_1+\cdots+\gamma_n$ and $\tilde{\gamma}_i=\bar{\gamma}+\gamma_i$ for each $1\leq i\leq n$. We rearrange the standard basis $\gamma_1,\dots,\gamma_n$ of $\mathbb{Z}[\gamma_1,\dots,\gamma_{n}]$ to $\gamma_1,\dots,\gamma_{n-1},\bar{\gamma}$ and then rearrange to $\tilde{\gamma}_1,\dots,\tilde{\gamma}_{n-1},\bar{\gamma}$, by adding $\bar{\gamma}$ to all other basis elements. Notice that the replacement $\gamma_i \mapsto \tilde{\gamma}_i$ for $1\leq i \leq n-1$, $\gamma_n \mapsto \bar{\gamma}$ could have instead been chosen as $\gamma_j \mapsto \bar{\gamma}$ for any $1\leq j \leq n$ and $\gamma_i \mapsto \tilde{\gamma}_i$ for any $i\neq j$.
Furthermore, $(n+1)\bar{\gamma}-\tilde{\gamma}_1-\cdots-\tilde{\gamma}_{n-1} = \tilde{\gamma}_n$, so replacing $\bar{\gamma}$ by $\tilde{\gamma}_n$ shows that $\tilde{\gamma}_1,\dots,\tilde{\gamma}_{n}$ forms a rational basis. \end{remark} \begin{proposition}\label{prop:basis} Using the notation of equation~(\ref{eq:SipleRedusingHomogenious}), we can rewrite $h^{n-l+2}_l$ for each $2\leq l \leq n+1$ in the basis of Remark~\ref{rmk:TildeBasis}, as \begin{equation*} h^{n-l+2}_l=\sum_{\substack{0\leq k \leq l \\ 1\leq i_1 \leq \cdots \leq i_k \leq n-l+1}} {(-1)^{l-k} \binom{n+1}{l-k} \tilde{\gamma}_{i_1}\cdots\tilde{\gamma}_{i_{k}}\bar{\gamma}^{l-k}}. \end{equation*} \end{proposition} \begin{proof} First note that in the basis of Remark~\ref{rmk:TildeBasis}, we can rewrite the original basis in terms of the new one by \begin{equation}\label{eq:BaseExpression} \gamma_i=\tilde{\gamma}_i-\bar{\gamma} \text{ for } 1\leq i\leq n-1, \;\;\; \gamma_n=n\bar{\gamma}-\sum_{i=1}^{n-1}{\tilde{\gamma}_i}. \end{equation} When $l=2$, using rearrangement (\ref{eq:BaseExpression}), \begin{flalign} h_2^{n}&=\sum_{a=0}^{2}{\big{(}(n\bar{\gamma}-\sum_{j=1}^{n-1}{\tilde{\gamma}_j})^{2-a} \sum_{1\leq i_1\leq \cdots \leq i_a \leq n-1}{\prod^a_{k=1}{(\tilde{\gamma}_{i_k}-\bar{\gamma})}}\big{)}} \nonumber \\ &= (n\bar{\gamma}-\sum_{j=1}^{n-1}{\tilde{\gamma}_j})^{2} +\sum^{n-1}_{a=1}{(n\bar{\gamma}-\sum_{j=1}^{n-1}{\tilde{\gamma}_j})(\tilde{\gamma}_a-\bar{\gamma})} +\sum^{n-1}_{a=1}{(\tilde{\gamma}_{a}-\bar{\gamma})^2} +\sum_{1\leq i_1 < i_2 \leq n-1}{(\tilde{\gamma}_{i_1}-\bar{\gamma})(\tilde{\gamma}_{i_2}-\bar{\gamma})}. \label{eq:l=2} \end{flalign} For $1\leq k,k_1, k_2\leq n-1$, $k_1 \neq k_2$, we consider the terms of the form \begin{equation*} \bar{\gamma}^2, \; \tilde{\gamma}_k\bar{\gamma}, \; \tilde{\gamma}_k^2, \; \tilde{\gamma}_{k_1}\tilde{\gamma}_{k_2} \end{equation*} and count their occurrences in the summands of (\ref{eq:l=2}).
In total, $n^2$ elements of the form $\bar{\gamma}^2$ are produced by the first summand of (\ref{eq:l=2}), $-n(n-1)$ by the second, $n-1$ by the third and $\binom{n-1}{2}$ by the last. Hence in total \begin{equation*} n^2-n(n-1)+(n-1)+\binom{n-1}{2}=n+\binom{n-1}{1}+\binom{n-1}{2}=\binom{n}{1}+\binom{n}{2}=\binom{n+1}{2}. \end{equation*} In total, $-2n$ elements of the form $\tilde{\gamma}_k\bar{\gamma}$ are produced by the first summand of (\ref{eq:l=2}), $2n-1$ by the second, $-2$ by the third and $2-n$ by the last. Hence in total \begin{equation*} -2n+(2n-1)-2+(2-n)=-(n+1)=-\binom{n+1}{1}, \end{equation*} matching the sign $(-1)^{l-k}$ in the statement. The terms $\tilde{\gamma}_k^2$ appear once in the first summand of (\ref{eq:l=2}), once in the third and $-1$ times in the second, hence once in total. The terms $\tilde{\gamma}_{k_1}\tilde{\gamma}_{k_2}$ appear twice in the first summand, $-2$ times in the second and once in the last, hence once in total. Therefore the case $l=2$ of the proposition is satisfied. For $l\geq 3$, we first show that \begin{equation*} h^{n-l+2}_l=\sum_{\substack{0\leq k \leq l \\ 1\leq i_1 \leq \cdots \leq i_k \leq n-l+2}} {(-1)^{l-k} \binom{n+1}{l-k} \tilde{\gamma}_{i_1}\cdots\tilde{\gamma}_{i_{k}}\bar{\gamma}^{l-k}}. \end{equation*} where the index in the sum here is $1\leq i_1 \leq \cdots \leq i_k \leq n-l+2$ rather than $1\leq i_1 \leq \cdots \leq i_k \leq n-l+1$ as in the statement of the proposition. The proposition then follows from the statement above by induction. This is because we have already shown the case $l=2$, and we can obtain the expression for $h^{n-l+2}_l$ in the proposition for $l\geq 3$ from the one above by subtracting off $\tilde{\gamma}_{n-l+2}$ times the expression for $h^{n-l+1}_{l-1}$ in the statement of the proposition, as this cancels all the summands containing a multiple of $\tilde{\gamma}_{n-l+2}$.
Using rearrangement (\ref{eq:BaseExpression}), \begin{equation}\label{eq:l>3} h^{n-l+2}_l=\sum_{1\leq i_1\leq \cdots\leq i_l \leq n-l+2}{\prod_{k=1}^{l}{(\tilde{\gamma}_{i_k}-\bar{\gamma})}}. \end{equation} For any choice of $1\leq i_1< \cdots < i_k \leq n-l+2$ and non-negative integers $b,a_1,\dots,a_k$ such that $b+a_1+\cdots+a_k=l$, the terms of the form \begin{equation}\label{eq:ProdChoice} \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b} \end{equation} describe up to multiplicity all possible summands in the expansion of equation~(\ref{eq:l>3}). Define the notation $h^{n-l+2}_l\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b} \}$ to be the multiplicity of the summand containing $\tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b}$ in the expansion of equation~(\ref{eq:l>3}). Using this notation and equation~(\ref{eq:l>3}), if we show that for each $3 \leq l \leq n+1$, \begin{equation}\label{eq:BorckekenExpress} h^{n-l+2}_l\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b} \} = (-1)^{b} \binom{n+1}{b} \end{equation} then the proof of the proposition is complete. Considering each summand of equation~(\ref{eq:l>3}) in turn and counting the number of $\tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b}$ produced in each product, we obtain \begin{equation}\label{eq:ChoicExpanssion} h^{n-l+2}_l\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b} \} = (-1)^b \sum_{\theta=0}^{b}{\multiset{n-l+2-k}{b-\theta}\sum_{\substack{\alpha_1+\cdots+\alpha_k=\theta \\ \alpha_j\geq 0}} {\prod^{k}_{\beta=1}{\binom{a_\beta+\alpha_\beta}{\alpha_\beta}}}}. \end{equation} We proceed by induction on $n$ and prove (\ref{eq:BorckekenExpress}) for all $n\geq 1$ and $2\leq l\leq n+1$.
When $n=1$, the only valid value of $l$ is $2$ and $h^{n-l+2}_{l}=(\tilde{\gamma}_1-\bar{\gamma})^2$, whose expansion satisfies (\ref{eq:BorckekenExpress}). Assume that (\ref{eq:BorckekenExpress}) holds for all values up to $n$. It is clear that if $b=0$ or $n+1$, then $h^{n-l+2}_{l}\{ \bar{\gamma}^{n+1} \}=(-1)^{n+1}$ and $h^{n-l+2}_{l}\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k} \}=1$ for any choice of $a_1,\dots,a_{k}$, since in the expansion of equation~(\ref{eq:ChoicExpanssion}) there is only one way to obtain the element. For $1\leq b \leq n$, by induction \begin{equation}\label{eq:Case(n,b)} \binom{n}{b}=(-1)^{b}h^{n-l+1}_{l}\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b} \} =\sum_{\theta=0}^{b}{\multiset{n-l+1-k}{b-\theta}\sum_{\substack{\alpha_1+\cdots+\alpha_k=\theta \\ \alpha_j\geq 0}} {\prod^{k}_{\beta=1}{\binom{a_\beta+\alpha_\beta}{\alpha_\beta}}}} \end{equation} and \begin{equation}\label{eq:Case(n,b-1)} \binom{n}{b-1}=(-1)^{b-1}h^{n-l+2}_{l-1}\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b-1} \} =\sum_{\theta=0}^{b-1}{\multiset{n-l+2-k}{b-1-\theta}\sum_{\substack{\alpha_1+\cdots+\alpha_k=\theta \\ \alpha_j\geq 0}} {\prod^{k}_{\beta=1}{\binom{a_\beta+\alpha_\beta}{\alpha_\beta}}}}. \end{equation} For each $0\leq\theta\leq b-1$, the sum of the corresponding values from (\ref{eq:Case(n,b)}) and (\ref{eq:Case(n,b-1)}) gives the $\theta$-th summand in the expression for $h^{n-l+2}_l\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b} \}$, since the binomial expressions agree and the multiset expressions sum to the correct result. The only remaining summand in $h^{n-l+2}_l\{ \tilde{\gamma}_{i_1}^{a_1}\cdots\tilde{\gamma}_{i_k}^{a_k}\bar{\gamma}^{b} \}$ is the one corresponding to $\theta=b$. However, this agrees in (\ref{eq:ChoicExpanssion}) and (\ref{eq:Case(n,b)}) because $\multiset{n-l+2-k}{0}=\multiset{n-l+1-k}{0}=1$, with the binomial parts being identical.
\end{proof} \begin{remark}\label{rmk:NewBaisSymGrobner} Note that for a fixed $n$ the set of expressions for $h^{n-l+2}_l$ with $2\leq l \leq n+1$ given in Proposition~\ref{prop:basis} remains a reduced Gr\"obner basis for the ideal it generates with respect to the lexicographic term order induced by $\bar{\gamma}<\tilde{\gamma}_1<\cdots<\tilde{\gamma}_n$. \end{remark} The next theorem shows that the change of basis interacts well with the differentials. Recall that the image of $d^m$ for $m\geq 2$ was determined in Theorem~\ref{thm:allDiff} and shown to be non-trivial only on terms containing $x_m$. \begin{theorem}\label{thm:InductiveDiff} For $2\leq l \leq n+1$, \begin{equation*} d^{2(l-1)}(x_{2(l-1)})= \sum_{\substack{1 \leq i \leq n}} {y_i(\tilde{\gamma}_i-\bar{\gamma})^{l-2}\tilde{\gamma}_i}. \end{equation*} After reduction by lower degree differentials this may be written as \begin{equation*} \sum_{\substack{1 \leq i \leq n}} {y_i\tilde{\gamma}_i^{l-1}}. \end{equation*} \end{theorem} \begin{proof} Generalise the notation of Remark~\ref{rmk:TildeBasis} to reflect the form of all differentials on $x_{2(l-1)}$ as follows. For each $1\leq i \leq n$, write \begin{equation*} \tilde{\gamma}_i^{(j)}= \sum_{\substack{|(c_1,\dots,c_{n})|=j}} {(c_i+1)\gamma_1^{c_1}\cdots\gamma_i^{c_i}\cdots\gamma_n^{c_{n}}}. \end{equation*} In particular, we have $\tilde{\gamma}_i^{(1)}=\tilde{\gamma}_i$ and by Theorem~\ref{thm:allDiff} \begin{equation*} d^{2j}(x_{2j})= \sum_{\substack{1 \leq i \leq n}} {y_i\tilde{\gamma}_i^{(j)}}. \end{equation*} We next show that, after quotienting out by symmetric polynomials, \begin{equation}\label{eq:TildaRelation} \tilde{\gamma}_i^{(j+1)}=\gamma_i\tilde{\gamma}_i^{(j)} \end{equation} and since by definition \begin{equation*} \gamma_i = \tilde{\gamma}_i-\bar{\gamma} \end{equation*} induction on $j$ then completes the proof of the first part of the theorem.
Notice that, using Definition~\ref{defn:CompleteHomogeneous} of the complete homogeneous symmetric polynomials, \begin{align*} \tilde{\gamma}_i^{(j+1)} & = \sum_{\substack{|(c_1,\dots,c_{n})|=j+1}} {(c_i+1)\gamma_1^{c_1}\cdots\gamma_i^{c_i}\cdots\gamma_n^{c_{n}}} \\ & = h_{j+1} + \sum_{\substack{|(c_1,\dots,c_{n})|=j+1, \\ c_i \geq 1}} {c_i\gamma_1^{c_1}\cdots\gamma_i^{c_i}\cdots\gamma_n^{c_{n}}} \\ & = h_{j+1} + \gamma_i\sum_{\substack{|(c_1,\dots,c_{n})|=j+1}} {c_i\gamma_1^{c_1}\cdots\gamma_i^{c_i-1}\cdots\gamma_n^{c_{n}}} \\ & = h_{j+1} + \gamma_i\tilde{\gamma}_i^{(j)}. \end{align*} This proves (\ref{eq:TildaRelation}). The final statement of the theorem is proved by induction, subtracting multiples of $\bar{\gamma}^m$ coming from the lower degree differentials. \end{proof} \section{The integral cohomology of $\Lambda(SU(n+1)/T^n)$}\label{sec:Cohomology} In this section we use the results of the previous section to deduce structure in the final page of $\{ E^r,d^r \}$, the cohomology Leray-Serre spectral sequence of the evaluation fibration (\ref{eq:evalfib}). This result alone performs part of the computation of the integral cohomology of the free loop space of the complete flag manifold of the special unitary group $SU(n+1)/T^n$, and is applied when $n=3$ in Section~\ref{sec:LSU4/T3}. We first fix the following notation and make a remark on the choice of basis up to sign. For integers $1\leq t \leq j \leq n$ and $1\leq i_1 < \cdots < i_j \leq n$, write \begin{equation}\label{eq:yHat} \hat{y}_{i_1,\dots,i_j} \end{equation} to denote the product $y_{1}y_{2}\cdots y_{n}$ without the elements $y_{i_1},\dots,y_{i_j}$, while \begin{equation}\label{eq:yHatHat} \hat{y}_{i_1,\dots,\hat{i}_t,\dots,i_j} \end{equation} denotes the product $y_{1}y_{2}\cdots y_{n}$ without the elements $y_{i_1},\dots,y_{i_j}$ except $y_{i_t}$, which is still included.
\begin{remark}\label{rmk:ySign} By choosing the appropriate signs on the generators $y_i$ for each $1\leq i \leq n$, we express the image of the differentials given in Theorem~\ref{thm:InductiveDiff} by \begin{equation*} d^{2l}(x_{2l})= \sum_{\substack{1 \leq i \leq n}} {(-1)^{i+1}y_i(\tilde{\gamma}_i-\bar{\gamma})^{l-1}\tilde{\gamma}_i} \end{equation*} where $1\leq l \leq n$. This choice is made to simplify multiplication of the differential by terms $y_1,\dots,y_n$ in the following way. Recall that the generators $y_i$ generate an exterior algebra, in particular $y_i^2=0$. So using Remark \ref{rmk:DifOnDivPoly}, for any $m\geq 1$, $1\leq j,l \leq n$ and $1\leq i_1 < \cdots < i_j \leq n$ we have \begin{align*} d^{2l}((x_{2l})_m\hat{y}_{i_1,\dots,i_j}) &=d^{2l}((x_{2l})_m)\hat{y}_{i_1,\dots,i_j} \\ &=(x_{2l})_{m-1}\sum_{t=1}^j{(-1)^{i_t+1}(-1)^{i_t+t-2}\hat{y}_{i_1,\dots,\hat{i}_t,\dots,i_j}(\tilde{\gamma}_{i_t}-\bar{\gamma})^{l-1}\tilde{\gamma}_{i_t}} \\ &=(x_{2l})_{m-1}\sum_{t=1}^j{(-1)^{t-1}\hat{y}_{i_1,\dots,\hat{i}_t,\dots,i_j} (\tilde{\gamma}_{i_t}-\bar{\gamma})^{l-1}\tilde{\gamma}_{i_t}} \end{align*} where the additional $(-1)^{i_t+t-2}$ sign change comes from reordering the $y_i$: the generator $y_{i_t}$ swaps places with each of the $y_i$ with $i<i_t$, changing the sign each time, which would give $i_t-1$ sign changes, except that $t-1$ of these $y_i$ are missing from $\hat{y}_{i_1,\dots,i_j}$. Therefore for the remainder of the section we assume the expression for the differential given above. \end{remark} The next theorem establishes, for all $n\geq 2$, important structural information on the interaction between the images of differentials and the symmetric polynomials in a special case relevant to the spectral sequence $\{ E^r,\; d^r \}$. In particular, the second part of the theorem solves a particular case of part (2) of Proposition~\ref{thm:SpectralGrobner}, and the first part describes the torsion generators in this case, as detailed at the end of Proposition~\ref{thm:SpectralGrobner}.
Hence this result alone performs part of the calculations in Section~\ref{sec:LSU4/T3}. \begin{theorem}\label{thm:Ideals} Given integers \begin{equation*} 1\leq i\leq n-1,\; 1\leq j,l \leq n,\; 2\leq j'\leq n+1 ,\; 1\leq k \leq n+1,\; 1\leq t < l,\; 1\leq i_1 \leq i_2 \leq \cdots \leq i_j \leq n \end{equation*} consider all ideals in $\mathbb{Z}[\tilde{\gamma}_i,\bar{\gamma},y_j]$ with $d^{2l}$ image as described in Remark~\ref{rmk:ySign}. Then the following hold: \begin{enumerate} \item There is an equality of ideals \begin{equation*}\label{eq:BottemGrober} \langle d^{2l}(x_{2l}\hat{y}_j),\; y_1\cdots y_n h_{j'}^{n-j'+2} \rangle = \langle y_1\cdots y_n\tilde{\gamma}_i,\; \binom{n+1}{k}y_1\cdots y_n\bar{\gamma}^k \rangle. \end{equation*} \item For a fixed choice of $1 \leq l \leq n$, the intersection of ideals \begin{equation*}\label{eq:IntersectBottem} \langle d^{2l}(x_{2l}\hat{y}_j) \rangle \cap \langle d^{2t}(x_{2t}\hat{y}_j),\; y_1\cdots y_n h_{j'}^{n-j'+2} \rangle \end{equation*} is given by \begin{align*}\label{eq:Bottem} y_1\cdots y_n \langle & \left(\lcm \left(n+1,\binom{n+1}{j'}\right) /\binom{n+1}{j'} \right) h_{j'}^{n-j'+2}, \tilde{\gamma}_i h_{j'}^{n-j'+2}, (n+1)\bar{\gamma}h_{j'}^{n-j'+2} \rangle \end{align*} if $l=1$ and \begin{equation*} \langle d^{2l}(x_{2l}\hat{y}_j) \rangle \end{equation*} otherwise. \end{enumerate} \end{theorem} \begin{proof} We begin by proving part $(2)$ in the case $l=1$, then extend to the general case. Components of the proof of part $(2)$ are then used to prove part $(1)$ in the case $l=1$. First rearranging the generator representatives, then using the second part of Theorem~\ref{thm:InductiveDiff} and Remark~\ref{rmk:TildeBasis}, we see that \begin{align}\label{eq:d^2Ideal} \langle d^{2}(x_2\hat{y}_j) \rangle =& \langle (-1)^{i+1}d^2(x_2)\hat{y}_i,\; (-1)^{n+1}d^2(x_2)(\hat{y}_n+\sum_{1\leq k\leq n-1}(-1)^{k+1}\hat{y}_k)\rangle \nonumber \\ =& \langle y_1\cdots y_n \tilde{\gamma}_i,\; (n+1)y_1\cdots y_n \bar{\gamma} \rangle.
\end{align} So when $l=1$ the generators $y_1\cdots y_n\tilde{\gamma}_i h_{j'}^{n-j'+2}$ and $y_1\cdots y_n(n+1)\bar{\gamma}h_{j'}^{n-j'+2}$ lie on both sides of the intersection of ideals in part $(2)$, hence are contained in the intersection. By Proposition~\ref{prop:basis}, \begin{equation}\label{eq:SymBasis} y_1\cdots y_nh^{n-j'+2}_{j'}=y_1\cdots y_n\sum_{\substack{0\leq t \leq j' \\ 1\leq i_1 \leq \cdots \leq i_t \leq n-j'+1}} {(-1)^{j'-t} \binom{n+1}{j'-t} \tilde{\gamma}_{i_1}\cdots\tilde{\gamma}_{i_{t}}\bar{\gamma}^{j'-t}}. \end{equation} Notice that the terms of the sum with $t>0$ are contained in $\langle y_1\cdots y_n \tilde{\gamma}_i \rangle$, hence in $\langle d^{2}(x_2\hat{y}_j) \rangle$ by (\ref{eq:d^2Ideal}). Using (\ref{eq:d^2Ideal}) again, the left hand ideal of the intersection contains the generator $(n+1)\bar{\gamma}$. The summands of (\ref{eq:SymBasis}) with $t=0$ are divisible by $\bar{\gamma}$ but not by $\tilde{\gamma}_i$, and so are contained in the intersection only when divisible by $n+1$. Hence a scalar multiple of $h_{j'}^{n-{j'}+2}$ is in the intersection exactly when multiplied by the least common multiple of $n+1$ and $\binom{n+1}{j'}$ divided by $\binom{n+1}{j'}$, since all other terms in the summands of \eqref{eq:SymBasis}, those with $t>0$, are divisible by some $\tilde{\gamma}_i$. Since all necessary degrees have been considered, this completes the proof of part $(2)$ when $l=1$. Considering now $2 \leq l \leq n$, by Theorem~\ref{thm:InductiveDiff} combined with Remark~\ref{rmk:ySign}, we have \begin{equation*} d^{2l}(x_{2l})= \sum_{\substack{1 \leq i \leq n}} {(-1)^{i+1}y_i(\tilde{\gamma}_i-\bar{\gamma})^{l-1}\tilde{\gamma}_i}. \end{equation*} Hence applying the discussion in Remark~\ref{rmk:ySign}, we obtain \begin{equation*} d^{2l}(x_{2l}\hat{y}_j )= {(-1)^{j+1}y_1\cdots y_n(\tilde{\gamma}_j-\bar{\gamma})^{l-1}\tilde{\gamma}_j}.
\end{equation*} Therefore \begin{equation*} \langle d^{2l}(x_{2l}\hat{y}_j),\; y_1\cdots y_n h_{j'}^{n-j'+2} \rangle = \langle d^{2}(x_2\hat{y}_j),\; y_1\cdots y_n h_{j'}^{n-j'+2} \rangle, \end{equation*} so the generators of $\langle \hat{y}_jd^{2l}(x_{2l}),\; y_1\cdots y_n h_{j'}^{n-j'+2} \rangle$ when $l>1$ may be omitted when considering the intersection of the ideals in part $(2)$ of the theorem. In addition, the equality of ideals in part $(1)$ also follows from (\ref{eq:d^2Ideal}) and (\ref{eq:SymBasis}). When the generators of (\ref{eq:d^2Ideal}) are extended with those from (\ref{eq:SymBasis}), using the discussion below (\ref{eq:SymBasis}) we see that the sum in (\ref{eq:SymBasis}) can be reduced to just $\binom{n+1}{k}y_1\cdots y_n\bar{\gamma}^{k}$ for each $k=j'>1$. This leaves the required set of ideal generators. \end{proof} \begin{remark}\label{rmk:CasePre-image} For the purposes of obtaining the pre-images required in part (2) of Proposition~\ref{thm:SpectralGrobner}, we now express the intersection of ideals in part (2) of Theorem~\ref{thm:Ideals} when $2 \leq l \leq n$ in terms of $d^2(x_2)$ multiples free of terms divisible by symmetric polynomials. As discussed in the proof of Theorem~\ref{thm:Ideals}, $y_1\cdots y_n\tilde{\gamma}_i$ and $(n+1)y_1\cdots y_n\bar{\gamma}$ are in the image of the $d^2$ differential, so $y_1\cdots y_n\tilde{\gamma}_ih^{n-j'+2}_{j'}$ and $(n+1)y_1\cdots y_n\bar{\gamma}h^{n-j'+2}_{j'}$ need not be further considered. The remaining generators are \begin{equation}\label{eq:HitGenerators} \left(\lcm\left(n+1,\binom{n+1}{j'}\right)/\binom{n+1}{j'}\right) y_1\cdots y_nh_{j'}^{n-j'+2} \end{equation} for each of which we want to obtain a generator of the preimage under $d^{2}$.
Recall that using Remarks~\ref{rmk:TildeBasis},~\ref{rmk:ySign} and Theorem~\ref{thm:InductiveDiff} we have that \[ d^2(x_2\hat{y}_i)= y_1\cdots y_n\tilde{\gamma}_i \;\;\;\ \text{and} \;\;\; d^2(x_2\hat{y}_n)=y_1\cdots y_n((n+1)\bar{\gamma}-\tilde{\gamma}_1-\cdots-\tilde{\gamma}_{n-1}). \] Hence we see that \begin{align*} & d^2 \Bigg{(}\lcm\left(n+1,\binom{n+1}{j'}\right) x_2 \Bigg{(} (-1)^{j'}\frac{1}{n+1} \bar{\gamma}^{j'-1} (\hat{y}_1+\cdots+\hat{y}_{n-1}+\hat{y}_n) \\ \nonumber & \;\;\;\; + \frac{1}{\binom{n+1}{j'}} \Bigg( \sum_{\substack{1\leq t \leq j' \\ 1\leq i_1 \leq \cdots \leq i_t \leq n-j'+1}} {(-1)^{j'-t} \binom{n+1}{j'-t}\hat{y}_{i_1} \tilde{\gamma}_{i_2}\cdots\tilde{\gamma}_{i_{t}}\bar{\gamma}^{j'-t}} \Bigg) \Bigg{)} \Bigg{)} \\ & = \left(\lcm\left(n+1,\binom{n+1}{j'}\right)/\binom{n+1}{j'}\right) y_1\cdots y_n \sum_{\substack{0\leq t \leq j' \\ 1\leq i_1 \leq \cdots \leq i_t \leq n-j'+1}} {(-1)^{j'-t} \binom{n+1}{j'-t} \tilde{\gamma}_{i_1}\cdots\tilde{\gamma}_{i_{t}}\bar{\gamma}^{j'-t}} \end{align*} which using Proposition~\ref{prop:basis} we can see is the same as the expressions in \eqref{eq:HitGenerators}. Therefore \begin{align}\label{eq:PreImageHitGenerators} & \lcm\left(n+1,\binom{n+1}{j'}\right) \Bigg(\frac{1}{n+1} \bar{\gamma}^{j'-1} (\hat{y}_1+\cdots+\hat{y}_{n-1}+\hat{y}_n) \\ \nonumber & + \frac{1}{\binom{n+1}{j'}} \Bigg( \sum_{\substack{1\leq t \leq j' \\ 1\leq i_1 \leq \cdots \leq i_t \leq n-j'+1}} {(-1)^{t} \binom{n+1}{j'-t}\hat{y}_{i_1} \tilde{\gamma}_{i_2}\cdots\tilde{\gamma}_{i_{t}}\bar{\gamma}^{j'-t}} \Bigg)\Bigg) \end{align} are representatives of the preimage under $d^2$ of generators in \eqref{eq:HitGenerators} as required. We remove a multiple of $(-1)^{j'}$ as this only changes the sign of the generators. 
\end{remark} \section{The integral cohomology of $\Lambda(SU(4)/T^3)$}\label{sec:LSU4/T3} Theorem~\ref{thm:Ideals} gives enough information to deduce the algebra structure for the final page of $\{ E^r,d^r \}$ in a special case within the cohomology Leray-Serre spectral sequence of the evaluation fibration (\ref{eq:evalfib}) converging to $H^*(\Lambda(SU(n+1)/T^n);\mathbb{Z})$. Using Theorem~\ref{thm:Ideals}, Theorem~\ref{thm:InductiveDiff} and Proposition~\ref{thm:SpectralGrobner} we can deduce the final page completely in the case when $n=3$, which is considerably more complex than the rank $2$ case considered in \cite{Burfitt2018}. \begin{theorem}\label{thm:CohomologySU4} The $E^\infty$-page of the spectral sequence $\{ E^r,d^r \}$ of the evaluation fibration (\ref{eq:evalfib}) converging to $H^*(\Lambda(SU(4)/T^3);\mathbb{Z})$ is given by the algebra $A/I$, where $A$ is the free graded commutative algebra \begin{align*} A = \Lambda_{\mathbb{Z}} ( & \tilde{\gamma}_{1} ,\; \tilde{\gamma}_{2} ,\; \bar{\gamma} ,\; y_1 ,\; y_2 ,\; y_3 ,\; (x_2)_{m_2}(x_4)_{m_4}(x_6)_{m_6}\tilde{\gamma}_2\tilde{\gamma}_1^2\bar{\gamma}^3, (x_2)_{m_2}(x_4)_{m_4}(x_6)_{m_6}y_1y_2y_3 ,\; \\ & (x_2)_{m_2}(x_4)_{m_4}(x_6)_{m_6}(\tilde{\gamma}_1^2+\tilde{\gamma}_1\tilde{\gamma}_2+\tilde{\gamma}_2^2+4(\tilde{\gamma}_1+\tilde{\gamma}_2)\bar{\gamma}+6\bar{\gamma}^2) ,\; \\ & (x_2)_{m_2}(x_4)_{m_4}(x_6)_{m_6} (\tilde{\gamma}_1^3+4\tilde{\gamma}_1^2\bar{\gamma}+6\tilde{\gamma}_1\bar{\gamma}^2+4\bar{\gamma}^3) ,\; (x_2)_{m_2}(x_4)_{m_4}(x_6)_{m_6}\bar{\gamma}^4 ,\; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}(\hat{y}_1(2\tilde{\gamma}_1+2\tilde{\gamma}_2-5\bar{\gamma})+\hat{y}_2(2\tilde{\gamma}_2-5\bar{\gamma})+3\hat{y}_3\bar{\gamma}) ,\; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}(\hat{y}_1(\tilde{\gamma}_1^2-4\tilde{\gamma}_1\bar{\gamma}+5\bar{\gamma}^2)-\hat{y}_1\bar{\gamma}^2-\hat{y}_3\bar{\gamma}^2) ,\; (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}(\hat{y}_1+\hat{y}_2+\hat{y}_3)\bar{\gamma}^2 ,\; \\ &
(x_4)_{m_4}(x_6)_{b_6}(\tilde{\gamma}_2\tilde{\gamma}_1^2-6\tilde{\gamma}_2\bar{\gamma}^2-6\tilde{\gamma}_1\bar{\gamma}^2-14\bar{\gamma}^3) ,\; (x_4)_{m_4}(x_6)_{b_6}(\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}+3\tilde{\gamma}_2\bar{\gamma}^2+3\tilde{\gamma}_1\bar{\gamma}^2+6\bar{\gamma}^3) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}\tilde{\gamma}_2\bar{\gamma}^3 ,\; (x_4)_{m_4}(x_6)_{b_6}(\tilde{\gamma}_1^2\bar{\gamma}+\tilde{\gamma}_1\bar{\gamma}^2) ,\; (x_4)_{m_4}(x_6)_{b_6}\tilde{\gamma}_1\bar{\gamma}^3 ,\; \\ & (x_6)_{m_6}\tilde{\gamma}_1 ,\; (x_6)_{m_6}\tilde{\gamma}_2 ,\; (x_6)_{m_6}\bar{\gamma} ,\; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}(y_1\tilde{\gamma}_1-y_2\tilde{\gamma}_2-y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})) , \; \\ & (x_4)_{m_4}(x_6)_{a_6}(y_1\tilde{\gamma}_1^2-y_2\tilde{\gamma}_2^2+y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})^2) , \; \\ & (x_6)_{m_6}(y_1\tilde{\gamma}_1^3-y_2\tilde{\gamma}_2^3-y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})^3) , \; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}( y_1(\tilde{\gamma}_2^2+4\tilde{\gamma}_2\bar{\gamma}+6\bar{\gamma}^2) -y_2(\tilde{\gamma}_1^2+4\tilde{\gamma}_1\bar{\gamma}+6\bar{\gamma}^2) -y_3(\tilde{\gamma}_2^2+4\tilde{\gamma}_2\bar{\gamma}-4\bar{\gamma}^2)) ,\; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}( y_1(\tilde{\gamma}_2\bar{\gamma}^3+\tilde{\gamma}_1\bar{\gamma}^3) +y_2\tilde{\gamma}_1\bar{\gamma}^3 -y_3\tilde{\gamma}_2\bar{\gamma}^3) ,\; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}( y_1(2\tilde{\gamma}_2\bar{\gamma}^2+2\tilde{\gamma}_1\bar{\gamma}^2+8\bar{\gamma}^3) -y_2(3\tilde{\gamma}_1^2\bar{\gamma}+10\tilde{\gamma}_1\bar{\gamma}^2+10\bar{\gamma}^3) \\ & \;\;\;\;\;\;\;\;\;\;\;\; +y_3(2\tilde{\gamma}_2\tilde{\gamma}_1^2+8\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}+10\tilde{\gamma}_2\bar{\gamma}^2+5\tilde{\gamma}_1^2\bar{\gamma}+20\tilde{\gamma}_1\bar{\gamma}^2+22\bar{\gamma}^3)) ,\; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}( y_2\tilde{\gamma}_1^2\bar{\gamma}^3 +y_3\tilde{\gamma}_1^2\bar{\gamma}^3) ,\;
\\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}( y_2(3\tilde{\gamma}_1^2\bar{\gamma}^2+12\tilde{\gamma}_1\bar{\gamma}^3) \\ & \;\;\;\;\;\;\;\;\;\;\;\; -y_3(2\tilde{\gamma}_2\tilde{\gamma}_1^2\bar{\gamma}+8\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}^2+12\tilde{\gamma}_2\bar{\gamma}^3+5\tilde{\gamma}_1^2\bar{\gamma}^2+20\tilde{\gamma}_1\bar{\gamma}^3)) ,\; \\ & (x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6} y_3(\tilde{\gamma}_2\tilde{\gamma}_1^2\bar{\gamma}^2+4\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}^3+4\tilde{\gamma}_1^2\bar{\gamma}^3) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(y_1(\tilde{\gamma}_2+3\bar{\gamma})-y_2(\tilde{\gamma}_1+3\bar{\gamma})+y_3(2\tilde{\gamma}_2+2\tilde{\gamma}_1+6\bar{\gamma})) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(y_1\tilde{\gamma}_1+y_2(\tilde{\gamma}_1+3\bar{\gamma})-y_3(2\tilde{\gamma}_2+\tilde{\gamma}_1+5\bar{\gamma})) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(y_1\bar{\gamma}^2+y_3(\tilde{\gamma}_1^2+3\tilde{\gamma}_1\bar{\gamma}+3\bar{\gamma}^2)) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(2y_1\bar{\gamma}-y_2(2\tilde{\gamma}_1+4\bar{\gamma})+y_3(2\tilde{\gamma}_2+2\tilde{\gamma}_1+6\bar{\gamma})) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(y_2(\tilde{\gamma}_2+\tilde{\gamma}_1+3\bar{\gamma})-y_3(\tilde{\gamma}_2+\bar{\gamma})) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(y_2\tilde{\gamma}_1^2-y_3(5\tilde{\gamma}_1^2+12\tilde{\gamma}_1\bar{\gamma}+12\bar{\gamma}^2)) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(y_2\tilde{\gamma}_1\bar{\gamma}+y_3(3\tilde{\gamma}_1^2+7\tilde{\gamma}_1\bar{\gamma}+6\bar{\gamma}^2)) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}(y_2\bar{\gamma}^2-y_3(\tilde{\gamma}_1^2+2\tilde{\gamma}_1\bar{\gamma}+\bar{\gamma}^2)) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}y_3(\tilde{\gamma}_2\bar{\gamma}+\tilde{\gamma}_1\bar{\gamma}+4\bar{\gamma}^2) ,\; \\ & (x_4)_{m_4}(x_6)_{b_6}y_3(\tilde{\gamma}_1^2\bar{\gamma}-2\bar{\gamma}^3) ,\; (x_4)_{m_4}(x_6)_{b_6}y_3(\tilde{\gamma}_1\bar{\gamma}^2+2\bar{\gamma}^3) ,\; \\ & (x_6)_{m_6}(y_1-y_2) ,\; (x_6)_{m_6}y_3 ) \end{align*} with $|\tilde{\gamma}_{j}|=|\bar{\gamma}|=2$, $|(x_k)_{m_k}|=km_k$ 
and $|(x_k)_{a_k}|=ka_k$, $I$ is the ideal \begin{align*} I = [ & (x_2)_{a_2} ((x_2)_1^b-b!(x_2)_b)s_2 ,\; \\ & (x_4)_{a_4} ((x_4)_1^b-b!(x_4)_b)s_4 ,\; \\ & (x_6)_{a_6} ((x_6)_1^b-b!(x_6)_b)s_6 ,\; \\ & (x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6}(\tilde{\gamma}_1^2+\tilde{\gamma}_1\tilde{\gamma}_2+\tilde{\gamma}_2^2+4(\tilde{\gamma}_1+\tilde{\gamma}_2)\bar{\gamma}+6\bar{\gamma}^2) ,\; \\ & (x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6} (\tilde{\gamma}_1^3+4\tilde{\gamma}_1^2\bar{\gamma}+6\tilde{\gamma}_1\bar{\gamma}^2+4\bar{\gamma}^3) ,\; \\ & (x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6}\bar{\gamma}^4 ,\; \\ & (x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6} (y_1\tilde{\gamma}_1+y_2\tilde{\gamma}_2+y_3(4\bar{\gamma}-\tilde{\gamma}_1-\tilde{\gamma}_2)) ,\; \\ & (x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6} (y_1\tilde{\gamma}_1^2+y_2\tilde{\gamma}_2^2+y_3(4\bar{\gamma}-\tilde{\gamma}_1-\tilde{\gamma}_2)^2) ,\; \\ & (x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6} (y_1\tilde{\gamma}_1^3+y_2\tilde{\gamma}_2^3+y_3(4\bar{\gamma}-\tilde{\gamma}_1-\tilde{\gamma}_2)^3) ] \end{align*} for elements $s_k\in S_k$ given by \begin{equation*} S_k = \{ a/{(x_k)_{m_k}} \; | \; a \text{ is a generator of } A \text{ divisible by some } (x_k)_{m_k} \} \end{equation*} where $b\geq 1$, $k=2,\:4,\:6$ and $m_k,a_k\geq 0$, with the additional condition that some $m_k\geq 1$ when appearing in a generator of $A$. Furthermore, the cohomology algebra $H^*(\Lambda(SU(4)/T^3);\mathbb{Z})$ is isomorphic as a module to the algebra $A/I$ up to the order of $2$-torsion and $4$-torsion. In addition, there are no multiplicative extension problems on the sub-algebra generated by $\gamma_1, \: \gamma_2, \: \gamma_3$ or on the sub-algebra generated by $y_1, \: y_2, \: y_3$, and no multiplicative extension problems on the elements $y_i\gamma_j$ for $1\leq i,j\leq 3$. \end{theorem} \begin{proof} Throughout the proof all indices lie in the same ranges given in the statement of the theorem. All examples of code used for computation can be found in \cite{Burfitt2021}.
Consider the cohomology Leray-Serre spectral sequence $\{ E^r,d^r \}$ associated to the evaluation fibration of $SU(4)/T^3$ studied in Section~\ref{sec:FreeLoopSU(n+1)/Tn}, \begin{equation*} \Omega(SU(4)/T^{3})\to\Lambda(SU(4)/T^3)\to SU(4)/T^3. \end{equation*} We first consider the algebra structure of the $E_{\infty}$-page of the spectral sequence, then consider the implications for the cohomology algebra. By Theorem~\ref{thm:H*SU/T} and using the basis of Remark~\ref{rmk:TildeBasis}, the integral cohomology of the base space $SU(4)/T^3$ is given by \begin{equation*} \frac{\mathbb{Z}[\tilde{\gamma}_1,\:\tilde{\gamma}_2,\:\bar{\gamma}]}{\langle h_{2}^{3},\: h_{3}^{2},\: h_{4}^{1} \rangle} \end{equation*} where $|\tilde{\gamma}_j|=|\bar{\gamma}|=2$. Using Proposition~\ref{prop:basis} after changing the sign of the $\bar{\gamma}$ generator, we have \begin{align}\label{eq:SU4SymetricQuotient} & h_{2}^{3} = \tilde{\gamma}_2^2+\tilde{\gamma}_1\tilde{\gamma}_2+\tilde{\gamma}_1^2+4(\tilde{\gamma}_2+\tilde{\gamma}_1)\bar{\gamma}+6\bar{\gamma}^2 ,\; \nonumber \\ & h_{3}^{2} = \tilde{\gamma}_1^3+4\tilde{\gamma}_1^2\bar{\gamma}+6\tilde{\gamma}_1\bar{\gamma}^2+4\bar{\gamma}^3 \; \\ \text{and } & h_{4}^{1} = \bar{\gamma}^4. \nonumber \end{align} Therefore, the maximal polynomial degree of elements of the algebra is $6$. From~(\ref{eq:BaseLoopFlag}), the integral cohomology of the fibre $\Omega(SU(4)/T^3)$ is given by \begin{equation}\label{eq:SU4LoopFibre} \Lambda_\mathbb{Z}(y_1,\:y_2,\:y_3)\otimes\Gamma_\mathbb{Z}[x_2,\:x_4,\:x_6] \end{equation} where $|y_i|=1$ and $|x_k|=k$.
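As an aside, the fact used below that the three relations in (\ref{eq:SU4SymetricQuotient}) form a reduced Gr\"obner basis can be confirmed directly: for the lex order $\tilde{\gamma}_2>\tilde{\gamma}_1>\bar{\gamma}$ their leading monomials $\tilde{\gamma}_2^2$, $\tilde{\gamma}_1^3$ and $\bar{\gamma}^4$ are pairwise coprime. The following \texttt{sympy} sketch (illustrative only, and not the code of \cite{Burfitt2021}) performs this check:

```python
import sympy as sp

g1, g2, b = sp.symbols("g1 g2 b")  # gamma-tilde_1, gamma-tilde_2, gamma-bar

h23 = g2**2 + g1*g2 + g1**2 + 4*(g2 + g1)*b + 6*b**2
h32 = g1**3 + 4*g1**2*b + 6*g1*b**2 + 4*b**3
h41 = b**4

# With the lex order g2 > g1 > b the leading monomials g2^2, g1^3 and b^4
# are pairwise coprime, so the generators are already a reduced Groebner
# basis: the computed basis should coincide with the input set.
G = sp.groebner([h23, h32, h41], g2, g1, b, order="lex")
print(set(map(sp.expand, G.exprs)) == {sp.expand(h23), h32, h41})
```

By Buchberger's first criterion, pairwise coprime leading monomials already force all S-polynomials to reduce to zero, so the computation only confirms what the leading terms show.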
Applying Theorem~\ref{thm:GrobnerOver} with the Gr\"obner basis from Theorem~\ref{thm:monomial sum}, the additive generators on the $E_2$-page of the spectral sequence are given by representative elements of the form \begin{equation}\label{eq:SU4Representatives} (x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6}y_{\alpha_1}\cdots y_{\alpha_l}P \end{equation} where $0\leq l \leq 3$, $1\leq \alpha_1< \cdots < \alpha_{l}\leq 3$ and $P\in \mathbb{Z}[\tilde{\gamma}_1,\:\tilde{\gamma}_2,\:\bar{\gamma}]$ is a monomial of degree at most $6$. By Theorem~\ref{thm:allDiff}, the only non-zero differentials are $d^2,\;d^4$ and $d^6$, which are non-zero only on the generators $x_2,\:x_4$ and $x_6$, respectively. Therefore the spectral sequence converges by the seventh page. Using Theorem~\ref{thm:InductiveDiff}, substituting $-(4\bar{\gamma}+\tilde{\gamma}_1+\tilde{\gamma}_2)$ for $\tilde{\gamma}_3$ in the basis of Remark~\ref{rmk:TildeBasis}, together with the sign change on $\bar{\gamma}$ made in (\ref{eq:SU4SymetricQuotient}), the images of the differentials up to sign are generated by \begin{align}\label{eq:SU4Difs} d^2(x_2) &= y_1\tilde{\gamma}_1-y_2\tilde{\gamma}_2-y_3(4\bar{\gamma}+\tilde{\gamma}_1+\tilde{\gamma}_2) , \nonumber \\ d^4(x_4) &= y_1\tilde{\gamma}_1^2-y_2\tilde{\gamma}_2^2+y_3(4\bar{\gamma}+\tilde{\gamma}_1+\tilde{\gamma}_2)^2 \\ \text{and } d^6(x_6) &= y_1\tilde{\gamma}_1^3-y_2\tilde{\gamma}_2^3-y_3(4\bar{\gamma}+\tilde{\gamma}_1+\tilde{\gamma}_2)^3. \nonumber \end{align} At this point we immediately obtain a number of the generators and relations occurring in $A$ and $I$ of the statement of the theorem. The monomial generators \begin{equation*} \tilde{\gamma}_1,\:\tilde{\gamma}_2,\:\bar{\gamma},\: y_1,\:y_2,\:y_3, (x_2)_{m_2}(x_4)_{m_4}(x_6)_{m_6}\tilde{\gamma}_2\tilde{\gamma}_1^2\bar{\gamma}^3 \text{ and } (x_2)_{m_2}(x_4)_{m_4}(x_6)_{m_6}y_1y_2y_3 \end{equation*} occur in $E_2^{*,0}$ or $E_2^{0,*}$ and are always in the kernel of the differentials, so are algebra generators of the $E_\infty$-page.
All relations on the $E_7$-page coming from the divided polynomial relations in $H^*(\Omega(SU(4)/T^3);\mathbb{Z})$ given in (\ref{eq:SU4LoopFibre}) of the form $(x_k)_1^m-m!(x_k)_m$, the symmetric relations in (\ref{eq:SU4SymetricQuotient}) and the images of the differentials (\ref{eq:SU4Difs}) hold on the $E_\infty$-page and therefore are in $I$. It is also necessary to include the multiples of these relations by $(x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6}$ to ensure that they occur as generators of the algebra $A$. In addition, we add to the relations $((x_k)_1^m-m!(x_k)_m)s_k$, where the multiple by an element $s_k$ from the set $S_k$ ensures that all generators that occur as a multiple of some $(x_k)_{a_k}$ appear in $I$. We have considered all possible relations occurring on the $E_{\infty}$-page of the spectral sequence, so it remains to determine all generators of $A$. Any additional generators arise as elements whose image under the differentials obtained using (\ref{eq:SU4Difs}) lies inside the ideal generated by the symmetric relations~(\ref{eq:SU4SymetricQuotient}). To this end we need only consider elements whose image is generated by elements from (\ref{eq:SU4Representatives}) with $l=1,2,3$. The case $l=3$ coincides with Theorem~\ref{thm:Ideals}. Since, up to $(x_2)_{a_2}(x_4)_{a_4}(x_6)_{a_6}$ multiples, the image of $d^2$ in this case can be rearranged as \begin{equation*} y_1y_2y_3\tilde{\gamma}_1, \; y_1y_2y_3\tilde{\gamma}_2, \; 4y_1y_2y_3\bar{\gamma} \end{equation*} there are no generators corresponding to part (1) of Proposition~\ref{thm:SpectralGrobner} in this case. In addition, in the course of the proof of Theorem~\ref{thm:Ideals} it is shown that the images of $d^4$ and $d^6$ in this case are contained in the image of $d^2$, so their domain on these rows lies entirely in the kernel.
Part (2) of Proposition~\ref{thm:SpectralGrobner} coincides with the results of part (2) of Theorem~\ref{thm:Ideals}, so using the expression \eqref{eq:PreImageHitGenerators} from Remark~\ref{rmk:CasePre-image} we add to the generators of $A$ the following expressions \begin{equation*} \hat{y}_1(2\tilde{\gamma}_1+2\tilde{\gamma}_2-5\bar{\gamma})+\hat{y}_2(2\tilde{\gamma}_2-5\bar{\gamma})+3\hat{y}_3\bar{\gamma} ,\; \hat{y}_1(\tilde{\gamma}_1^2-4\tilde{\gamma}_1\bar{\gamma}+5\bar{\gamma}^2)-\hat{y}_1\bar{\gamma}^2-\hat{y}_3\bar{\gamma}^2 ,\; (\hat{y}_1+\hat{y}_2+\hat{y}_3)\bar{\gamma}^2 . \end{equation*} As noted in Remark~\ref{rmk:NewBaisSymGrobner}, the generators (\ref{eq:SU4SymetricQuotient}) are a Gr{\"o}bner basis with respect to the lexicographic ordering given by \begin{equation*} \tilde{\gamma}_2 > \tilde{\gamma}_1 > \bar{\gamma}. \end{equation*} Hence using Theorem~\ref{thm:GrobnerOver}, we see that any element on the $E_2$-page may be reduced by the elements of (\ref{eq:SU4SymetricQuotient}) to a unique normal form, which is zero if and only if the element is a multiple of an element of the symmetric ideal. In addition, from the leading terms of (\ref{eq:SU4SymetricQuotient}) it is clear that we need only consider up to \begin{equation*} \tilde{\gamma}_2\tilde{\gamma}_1^2\bar{\gamma}^3 \end{equation*} multiples of the image of the differentials given in (\ref{eq:SU4Difs}). We now consider elements in the rows of the spectral sequence corresponding to elements in (\ref{eq:SU4Representatives}) when $l=1$. In this case the image of each of $d^2,\:d^4$ and $d^6$ on $x_2,\:x_4$ and $x_6$, respectively, is generated by a single element, so there are no generators of the kernel corresponding to part (1) of Proposition~\ref{thm:SpectralGrobner} that we have not already included.
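The ideal intersections computed with Gr\"obner bases in what follows use the standard elimination trick $\langle f\rangle\cap\langle g\rangle=\left(t\langle f\rangle+(1-t)\langle g\rangle\right)\cap\mathbb{Q}[\tilde{\gamma}_1,\bar{\gamma}]$. The following \texttt{sympy} sketch (illustrative only, and not the code of \cite{Burfitt2021}) demonstrates the method on the toy pair $h_3^2$, $h_4^1$ from (\ref{eq:SU4SymetricQuotient}), whose intersection is $\langle h_3^2h_4^1\rangle$ since the two generators are coprime:

```python
import sympy as sp

g1, b, t = sp.symbols("g1 b t")  # gamma-tilde_1, gamma-bar, auxiliary variable

h32 = g1**3 + 4*g1**2*b + 6*g1*b**2 + 4*b**3
h41 = b**4

# Elimination trick: <f> cap <h> = ( t*<f> + (1-t)*<h> ) cap Q[g1, b],
# read off from a lex Groebner basis in which t is eliminated first.
G = sp.groebner([t * h32, (1 - t) * h41], t, g1, b, order="lex")
inter = [p for p in G.exprs if not p.has(t)]

# Since h32 and h41 are coprime the intersection should be <h32 * h41>;
# check equality of ideals via membership in both directions.
I1 = sp.groebner(inter, g1, b, order="lex")
I2 = sp.groebner([h32 * h41], g1, b, order="lex")
print(I1.contains(h32 * h41) and all(I2.contains(p) for p in inter))
```

The actual computations in the proof run the same elimination with the full generator lists of the two ideals in place of the single polynomials \texttt{h32} and \texttt{h41}.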
Applying part (2) of Proposition~\ref{thm:SpectralGrobner} to rows in this case, we make all Gr\"obner basis computations up to degree $6$ in the variables $\tilde{\gamma}_1,\: \tilde{\gamma}_2,\: \bar{\gamma}$ and degree $1$ in the variables $y_1,\:y_2,\:y_3$. For the general case it is sufficient to consider the reduced forms with respect to (\ref{eq:SU4SymetricQuotient}) of the $\tilde{\gamma}_1,\:\tilde{\gamma}_2,\:\bar{\gamma}$ multiples of the images of the differentials given in (\ref{eq:SU4Difs}), as other rows will be multiples of these by elements of $\Gamma_\mathbb{Z}[x_2,\:x_4,\:x_6]$. In computer computations, we use the extended lexicographic monomial ordering \begin{equation*} y_1 > y_2 > y_3 > \tilde{\gamma}_2 > \tilde{\gamma}_1 > \bar{\gamma}. \end{equation*} In the case of $\phi^2_2$ on multiples of $x_2$, using Gr\"obner bases to compute the intersection of the ideals generated by $\phi^2_2(x_2)$ and (\ref{eq:SU4SymetricQuotient}) gives an ideal generated by the three elements $\phi^2_2(x_2)h^3_2$, $\phi^2_2(x_2)h^2_3$ and $\phi^2_2(x_2)h^4_1$, all of which are multiples of elements of (\ref{eq:SU4SymetricQuotient}), hence no additional generators need be added to $A$ in this case. Computing the Gr\"obner bases to obtain the intersection of the ideals generated by $\phi^4_4$ on multiples of $x_4$ and (\ref{eq:SU4SymetricQuotient}) gives an ideal generated by the elements $\phi^4_4(x_4)h^2_3$, $\phi^4_4(x_4)h^4_1$ and \begin{align*} & \phi^4_4(x_4)(\tilde{\gamma}_2\tilde{\gamma}_1^2-6\tilde{\gamma}_2\bar{\gamma}^2-6\tilde{\gamma}_1\bar{\gamma}^2-14\bar{\gamma}^3), \\ & \phi^4_4(x_4)(\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}+3\tilde{\gamma}_2\bar{\gamma}^2+3\tilde{\gamma}_1\bar{\gamma}^2+6\bar{\gamma}^3), \\ & \phi^4_4(x_4)\tilde{\gamma}_2\bar{\gamma}^3, \\ & \phi^4_4(x_4)(\tilde{\gamma}_1^3+2\tilde{\gamma}_1\bar{\gamma}^2+4\bar{\gamma}^3), \\ & \phi^4_4(x_4)(\tilde{\gamma}_1^2\bar{\gamma}+\tilde{\gamma}_1\bar{\gamma}^2), \\ & \phi^4_4(x_4)\tilde{\gamma}_1\bar{\gamma}^3.
\end{align*} Since \begin{equation*} h_2^3 = (\tilde{\gamma}_1^3+2\tilde{\gamma}_1\bar{\gamma}^2+4\bar{\gamma}^3)+4(\tilde{\gamma}_1^2\bar{\gamma}+\tilde{\gamma}_1\bar{\gamma}^2) \end{equation*} we need not add any generators to $A$ as multiples of $x_4(\tilde{\gamma}_1^3+2\tilde{\gamma}_1\bar{\gamma}^2+4\bar{\gamma}^3)$. It can again be checked, by applying part (2) of Proposition~\ref{thm:SpectralGrobner} via Gr\"obner bases, that $(x_4)_{m_4}(x_6)_{b_6}$ multiples of \begin{align*} \tilde{\gamma}_2\tilde{\gamma}_1^2-6\tilde{\gamma}_2\bar{\gamma}^2-6\tilde{\gamma}_1\bar{\gamma}^2-14\bar{\gamma}^3,\: \tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}+3\tilde{\gamma}_2\bar{\gamma}^2+3\tilde{\gamma}_1\bar{\gamma}^2+6\bar{\gamma}^3,\: \tilde{\gamma}_2\bar{\gamma}^3,\: \tilde{\gamma}_1^2\bar{\gamma}+\tilde{\gamma}_1\bar{\gamma}^2 \text{ and } \tilde{\gamma}_1\bar{\gamma}^3 \end{align*} are in the kernel of $d^6$; they are therefore added to $A$ as generators, since the spectral sequence converges on the $E_7$-page. Here part (1) of Proposition~\ref{thm:SpectralGrobner} need not be considered, as we have already confirmed that everything lies in the kernel. Finally, the Gr\"obner basis of the intersection of the ideals generated by $\phi^6_6(x_6)$ and (\ref{eq:SU4SymetricQuotient}) gives an ideal generated by elements \begin{equation*} d^6(x_6)\tilde{\gamma}_2,\: d^6(x_6)\tilde{\gamma}_1,\: d^6(x_6)\bar{\gamma} \end{equation*} all of whose $(x_6)_{m_6}$ multiples are added to $A$ as generators. Lastly we consider elements in the rows of the spectral sequence corresponding to the elements in (\ref{eq:SU4Representatives}) when $l=2$.
In the case of $\phi^2_2$ on multiples of $y_ix_2$ for $i=1,2,3$, the image is generated by \begin{align*} y_1d^2(x_2) & = -y_1y_2\tilde{\gamma}_2 -y_1y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma}), \\ y_2d^2(x_2) & = -y_1y_2\tilde{\gamma}_1 -y_2y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma}) \\ \text{and } y_3d^2(x_2) & = -y_1y_3\tilde{\gamma}_1 +y_2y_3\tilde{\gamma}_2. \end{align*} We first note that \begin{equation*} y_1d^2(x_2)\tilde{\gamma}_1-y_2d^2(x_2)\tilde{\gamma}_2-y_3d^2(x_2)(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma}) = 0. \end{equation*} Similarly we have the following relations for the $d^4$ and $d^6$ differentials, \begin{align*} & y_1d^2(x_2)\tilde{\gamma}_1^2-y_2d^2(x_2)\tilde{\gamma}_2^2+y_3d^2(x_2)(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})^2 = 0 \\ \text{and} \;\;\; & y_1d^2(x_2)\tilde{\gamma}_1^3-y_2d^2(x_2)\tilde{\gamma}_2^3-y_3d^2(x_2)(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})^3 = 0. \end{align*} Therefore $(x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}$ multiples of $y_1\tilde{\gamma}_1-y_2\tilde{\gamma}_2-y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})$, $(x_4)_{m_4}(x_6)_{b_6}$ multiples of $y_1\tilde{\gamma}_1^2-y_2\tilde{\gamma}_2^2+y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})^2$ and $(x_6)_{m_6}$ multiples of $y_1\tilde{\gamma}_1^3-y_2\tilde{\gamma}_2^3-y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})^3$ are added to $A$. It can be checked by computing the syzygies of the ideals spanned by the images of the three sets of differentials that these are the only additional generators not already included in $A$ corresponding to part (1) of Proposition~\ref{thm:SpectralGrobner}.
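As a check, the first of these relations can be verified by direct substitution of the generators above:
\begin{align*}
y_1d^2(x_2)\tilde{\gamma}_1-y_2d^2(x_2)\tilde{\gamma}_2-y_3d^2(x_2)(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma}) = {} & -y_1y_2\tilde{\gamma}_2\tilde{\gamma}_1 -y_1y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})\tilde{\gamma}_1 \\
& +y_1y_2\tilde{\gamma}_1\tilde{\gamma}_2 +y_2y_3(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma})\tilde{\gamma}_2 \\
& +y_1y_3\tilde{\gamma}_1(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma}) -y_2y_3\tilde{\gamma}_2(\tilde{\gamma}_2+\tilde{\gamma}_1+4\bar{\gamma}) \\
= {} & 0,
\end{align*}
since the six terms cancel in pairs.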
Considering part (2) of Proposition~\ref{thm:SpectralGrobner}, using Gr\"obner bases to compute the intersection of the ideals generated by $y_i\phi^2_2(x_2)$ and (\ref{eq:SU4SymetricQuotient}) gives an ideal generated by elements corresponding to $y_i\phi^2_2(x_2)h_3^2,\: y_i\phi^2_2(x_2)h_2^3,\: y_i\phi^2_2(x_2)h_1^4$ and \begin{align*} & -y_1\phi^2_2(x_2)(\tilde{\gamma}_2^2+4\tilde{\gamma}_2\bar{\gamma}+6\bar{\gamma}^2) +y_2\phi^2_2(x_2)(\tilde{\gamma}_1^2+4\tilde{\gamma}_1\bar{\gamma}+6\bar{\gamma}^2) +y_3\phi^2_2(x_2)(\tilde{\gamma}_2^2+4\tilde{\gamma}_1^2 -4\bar{\gamma}^2) ,\\ & -y_1\phi^2_2(x_2)(\tilde{\gamma}_2\bar{\gamma}^3+\tilde{\gamma}_1\bar{\gamma}^3) -y_2\phi^2_2(x_2)\tilde{\gamma}_1\bar{\gamma}^3 +y_3\phi^2_2(x_2)\tilde{\gamma}_2\bar{\gamma}^3 ,\\ & -y_1\phi^2_2(x_2)(2\tilde{\gamma}_2\bar{\gamma}^2+2\tilde{\gamma}_1\bar{\gamma}^2+8\bar{\gamma}^3) +y_2\phi^2_2(x_2)(3\tilde{\gamma}_1^2\bar{\gamma}+10\tilde{\gamma}_1\bar{\gamma}^2+10\bar{\gamma}^3) \\ & \;\;\;\;\;\;\;\;\;\;\;\; -y_3\phi^2_2(x_2)(2\tilde{\gamma}_2\tilde{\gamma}_1^2+8\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}+10\tilde{\gamma}_2\bar{\gamma}^2+5\tilde{\gamma}_1^2\bar{\gamma}+20\tilde{\gamma}_1\bar{\gamma}^2+22\bar{\gamma}^3) ,\\ & -y_2\phi^2_2(x_2)\tilde{\gamma}_1^2\bar{\gamma}^3-y_3\phi^2_2(x_2)\tilde{\gamma}_1^2\bar{\gamma}^3 ,\\ & -y_2\phi^2_2(x_2)(3\tilde{\gamma}_1^2\bar{\gamma}^2+12\tilde{\gamma}_1\bar{\gamma}^3) +y_3\phi^2_2(x_2)(2\tilde{\gamma}_2\tilde{\gamma}_1^2\bar{\gamma}+8\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}^2+12\tilde{\gamma}_2\bar{\gamma}^3+5\tilde{\gamma}_1^2\bar{\gamma}^2+20\tilde{\gamma}_1\bar{\gamma}^3) ,\\ & -y_3\phi^2_2(x_2)(\tilde{\gamma}_2\tilde{\gamma}_1^2\bar{\gamma}^2+4\tilde{\gamma}_2\tilde{\gamma}_1\bar{\gamma}^3+4\tilde{\gamma}_1^2\bar{\gamma}^3) \end{align*} all of whose corresponding $d^2$ preimage $(x_2)_{m_2}(x_4)_{b_4}(x_6)_{b_6}$ multiples are added to $A$ as generators up to sign, as it can again be checked with Gr\"obner bases that all generators
remain in the kernels of the $d^4$ and $d^6$ differentials. Using Gr\"obner bases to compute the intersection of the ideals generated by $y_i\phi^4_4(x_4)$ and (\ref{eq:SU4SymetricQuotient}) gives an ideal generated by elements corresponding to $y_3\phi^4_4(x_4)h_1^4$ and \begin{align} & \label{eq:kerd4row21} -y_1\phi^4_4(x_4)(\tilde{\gamma}_2+3\bar{\gamma})+y_2\phi^4_4(x_4)(\tilde{\gamma}_1+3\bar{\gamma})-y_3\phi^4_4(x_4)(2\tilde{\gamma}_2+2\tilde{\gamma}_1+6\bar{\gamma}) ,\\ & \label{eq:kerd4row22} -y_1\phi^4_4(x_4)\tilde{\gamma}_1-y_2\phi^4_4(x_4)(\tilde{\gamma}_1+3\bar{\gamma})+y_3\phi^4_4(x_4)(2\tilde{\gamma}_2+\tilde{\gamma}_1+5\bar{\gamma}) ,\\ & \label{eq:kerd4row23} -y_1\phi^4_4(x_4)\bar{\gamma}^2-y_3\phi^4_4(x_4)(\tilde{\gamma}_1^2+3\tilde{\gamma}_1\bar{\gamma}+3\bar{\gamma}^2) ,\\ & \label{eq:kerd4row24} -2y_1\phi^4_4(x_4)\bar{\gamma}+y_2\phi^4_4(x_4)(2\tilde{\gamma}_1+4\bar{\gamma})-y_3\phi^4_4(x_4)(2\tilde{\gamma}_2+2\tilde{\gamma}_1+6\bar{\gamma}) ,\\ & \label{eq:kerd4row25} -y_2\phi^4_4(x_4)(\tilde{\gamma}_2+\tilde{\gamma}_1+3\bar{\gamma})+y_3\phi^4_4(x_4)(\tilde{\gamma}_2+\bar{\gamma}) ,\\ & \label{eq:kerd4row26} -y_2\phi^4_4(x_4)\tilde{\gamma}_1^2+y_3\phi^4_4(x_4)(5\tilde{\gamma}_1^2+12\tilde{\gamma}_1\bar{\gamma}+12\bar{\gamma}^2) ,\\ & \label{eq:kerd4row27} -y_2\phi^4_4(x_4)\tilde{\gamma}_1\bar{\gamma}-y_3\phi^4_4(x_4)(3\tilde{\gamma}_1^2+7\tilde{\gamma}_1\bar{\gamma}+6\bar{\gamma}^2) ,\\ & \label{eq:kerd4row28} -y_2\phi^4_4(x_4)\bar{\gamma}^2+y_3\phi^4_4(x_4)(\tilde{\gamma}_1^2+2\tilde{\gamma}_1\bar{\gamma}+\bar{\gamma}^2) ,\\ & \label{eq:kerd4row29} -y_3\phi^4_4(x_4)(\tilde{\gamma}_2^2-\tilde{\gamma}_1^2-5\tilde{\gamma}_1\bar{\gamma}-10\bar{\gamma}^2) ,\\ & \label{eq:kerd4row210} -y_3\phi^4_4(x_4)(\tilde{\gamma}_2\tilde{\gamma}_1+2\tilde{\gamma}_1^2+5\tilde{\gamma}_1\bar{\gamma}) ,\\ & \label{eq:kerd4row211} -y_3\phi^4_4(x_4)(\tilde{\gamma}_2\bar{\gamma}+\tilde{\gamma}_1\bar{\gamma}+4\bar{\gamma}^2) ,\\ & \label{eq:kerd4row212} 
-y_3\phi^4_4(x_4)\tilde{\gamma}_1^3 ,\\ & \label{eq:kerd4row213} -y_3\phi^4_4(x_4)(\tilde{\gamma}_1^2\bar{\gamma}-2\bar{\gamma}^3) ,\\ & \label{eq:kerd4row214} -y_3\phi^4_4(x_4)(\tilde{\gamma}_1\bar{\gamma}^2+2\bar{\gamma}^3). \end{align} Notice that \begin{align*} -y_2\phi^4_4(x_4)h^2_3 = & \tilde{\gamma}_2(\ref{eq:kerd4row25})+(\ref{eq:kerd4row26})+\bar{\gamma}(\ref{eq:kerd4row25})+3(\ref{eq:kerd4row27})+3(\ref{eq:kerd4row28})+(\ref{eq:kerd4row29})+2(\ref{eq:kerd4row211}) , \\ -y_3\phi^4_4(x_4)h^2_3 = & (\ref{eq:kerd4row29})+(\ref{eq:kerd4row210})+4(\ref{eq:kerd4row211}) , \\ -y_3\phi^4_4(x_4)h^3_2 = & (\ref{eq:kerd4row212})+4(\ref{eq:kerd4row213})+6(\ref{eq:kerd4row214}). \end{align*} Hence, as it can again be checked that all of these generators are in the kernel of $d^6$, the $(x_4)_{m_4}(x_6)_{b_6}$ multiples of $d^4$ preimages of all except (\ref{eq:kerd4row29}), (\ref{eq:kerd4row210}) and (\ref{eq:kerd4row212}) are added to $A$ as generators up to sign. Using Gr\"obner bases to compute the intersection of the ideals generated by $y_i\phi^6_6(x_6)$ and (\ref{eq:SU4SymetricQuotient}) gives an ideal generated by the elements corresponding to \begin{align*} & -y_1\phi^6_6(x_6)+y_2\phi^6_6(x_6) ,\; -y_2\phi^6_6(x_6)\tilde{\gamma}_2 ,\; -y_2\phi^6_6(x_6)\tilde{\gamma}_1 ,\; -y_2\phi^6_6(x_6)\bar{\gamma} \text{ and } -y_3\phi^6_6(x_6). \end{align*} Only the $(x_6)_{m_6}$ multiples of the first and last of these are added to $A$ as generators up to sign, as the others are already products of existing generators. By computing the Gr\"obner bases corresponding to (\ref{eq:GrobnerTorsionPart}) of Proposition~\ref{thm:SpectralGrobner}, we determine that only $2$-torsion and $4$-torsion occurs on the $E_\infty$-page of the spectral sequence.
Hence the module structure of the integral cohomology algebra of $SU(4)/T^3$ up to torsion type is determined by examining the spectral sequence with modulo $2$ coefficients in the same way as in \cite[Theorem~5.2]{Burfitt2018}, where the modulo $3$ spectral sequence is considered. The only remaining additive extension problem is whether the $4$-torsion on the $E_\infty$-page is $2$-torsion or $4$-torsion in $H^*(\Lambda (SU(4)/T^3);\mathbb{Z})$. The multiplicative extension problems on certain subalgebras are also determined in the same way as in the proof of \cite[Theorem~5.2]{Burfitt2018}. \end{proof} \end{document}
\begin{document} \title{Information geometry of warped product spaces} \begin{abstract} Information geometry is an important tool to study statistical models. There are some important examples of statistical models which are regarded as warped products. In this paper, we study information geometry of warped products. We consider the case where the warped product and its fiber space are equipped with dually flat connections and, in the particular case of a cone, characterize the connections on the base space $\mathbb{R}_{>0}$. The resulting connections turn out to be the $\alpha$-connections with $\alpha = \pm{1}$. \end{abstract} \tableofcontents \section{Introduction} Recently, the study of spaces consisting of probability measures has been attracting increasing attention. As tools to investigate such spaces, there are two famous theories in geometry: information geometry and Wasserstein geometry. Information geometry is mainly concerned with finite dimensional statistical models, and Wasserstein geometry is concerned with infinite dimensional spaces of probability measures. We can compare these two geometries, for example, on Gaussian distributions. This paper concerns information geometry of a warped product and, in particular, of a cone, which is a kind of warped product of the line $\mathbb{R}_{>0}$ and a manifold. Under some natural assumptions, we characterize the connections on the line with which warped products are constructed. The assumption we set is different from that of \cite{leo} and matches examples of statistical models. Examples of warped product metrics include the denormalizations of the Fisher metric, the Bogoliubov-Kubo-Mori metric and the Fisher metric on the Takano Gaussian space, which is a set of multivariate Gaussian distributions with restricted parameters. Besides these examples, there are some more statistical models represented as warped products.
In \cite{takatsu2}, it was shown that the Wasserstein Gaussian space, which is the set of multivariate Gaussian distributions on $\mathbb{R}^n$ with mean zero equipped with the $L^2$-Wasserstein metric, has a cone structure, and in \cite{location} the relations between Fisher metrics of location-scale models and warped product metrics are studied. Warped products thus seem to be attracting more attention in the field of statistical models than before. Although information geometry is usually studied on real manifolds, the theory of statistical manifolds is also studied in the field of affine geometry, and statistical structures on complex manifolds are attracting more attention, as in \cite{furuhata1}. Also in this field, warped products are important since they play an important role in the theory of submanifolds of complex manifolds, for example, CR submanifold theory as in \cite{chen1}. There is much research extending the theories of CR submanifolds in K\"{a}hler manifolds to submanifolds in holomorphic statistical manifolds, as in \cite{chen2}. Statistical structures in \cite{furuhata2} and the structures cultivated in this paper are slightly different because we do not require the compatibility of statistical structures and complex structures. This compatibility is expressed in the definition of holomorphic statistical structures in \cite{furuhata1}. This paper is organized as follows. In Section 2, we briefly review information geometry. Section 3 is devoted to some formulas in warped products. Then in Section 4, we study cones and consider necessary conditions for both the cone and the fiber space to be dually flat. This necessary condition states that there are only two possible connections on the line. The following theorem is one of our main results.
\begin{theorem*}\rm{(Theorem 4.1)} Under Assumption \ref{assume2}, we have \begin{equation*} D_{\partial_t}\partial_t = \frac{1}{t}\frac{\partial}{\partial t} \mbox{ or } -\frac{1}{t}\frac{\partial}{\partial t}, \end{equation*} where $t$ is the natural coordinate on the line $\mathbb{R}_{>0}$, which is the base space of the warped product. \end{theorem*} Observing examples, we find that these two connections are the $\alpha$-connections with $\alpha = \pm{1}$. An analogous characterization for the Takano Gaussian space is also considered in Section 5. In Section 6, we discuss dually flat connections on the Wasserstein Gaussian space. We remark that, although it is known in \cite{tayebi} that there is no dually flat proper doubly warped Finsler manifold, what they actually proved is that some coordinates cannot be dual affine coordinates. Thus our claims do not contradict theirs. We also discuss this point in Section 6. In Section 7, we study two-dimensional warped products as an appendix. \section{Preliminaries} \subsection{Information geometry}\label{infogeo} We briefly review the basics of information geometry; we refer to \cite{amari} for further reading. Let $(M,g)$ be a Riemannian manifold and $\nabla$ be an affine connection on $M$. $\mathfrak{X}(M)$ denotes the set of $C^\infty$ vector fields on $M$. We define another affine connection $\nabla^*$ by \begin{equation*} Xg(Y,Z) = g(\nabla_XY,Z) + g(Y,\nabla^*_XZ) \end{equation*} for $X,Y,Z\in\mathfrak{X}(M)$. We call $\nabla^*$ the \emph{dual connection} of $\nabla$. We define the torsion and the curvature of $\nabla$ by \begin{equation*} T(X,Y) := \nabla_XY - \nabla_YX - [X,Y],\quad R(X,Y)Z := [\nabla_X,\nabla_Y]Z - \nabla_{[X,Y]}Z, \end{equation*} respectively. If $R$ satisfies \begin{equation*} R(X,Y)Z = k\{g(Y,Z)X - g(X,Z)Y\} \end{equation*} for some $k\in\mathbb{R}$ and all $X,Y,Z\in\mathfrak{X}(M)$, then $(M,g,\nabla)$ is called a space of constant curvature $k$.
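For example, the Levi-Civita connection of $g$ is its own dual; this standard observation follows directly from the definitions. Comparing the defining equation of $\nabla^*$ with
\begin{equation*}
(\nabla_Xg)(Y,Z) = Xg(Y,Z) - g(\nabla_XY,Z) - g(Y,\nabla_XZ),
\end{equation*}
we obtain $g(Y,\nabla^*_XZ - \nabla_XZ) = (\nabla_Xg)(Y,Z)$, so $\nabla^* = \nabla$ holds if and only if $\nabla g = 0$.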
We summarize some important facts on $\nabla$ and $\nabla^*$ in the following. \begin{proposition}\label{levi} Let $\nabla$ and $\nabla^*$ be dual affine connections of $M$. If two of the following conditions hold, then the other two also hold: \begin{itemize} \item $\nabla$ is torsion free, \item $\nabla^*$ is torsion free, \item $\nabla g$ is a symmetric tensor, \item $\frac{\nabla + \nabla^*}{2}$ is the Levi-Civita connection of $g$. \end{itemize} \end{proposition} \begin{proposition}\label{curvature_prop} Let $(M,g,\nabla,\nabla^*)$ be a Riemannian manifold with dual affine connections. The curvature with respect to $\nabla$ vanishes if and only if the curvature with respect to $\nabla^*$ vanishes. \end{proposition} Let $(M,g,\nabla,\nabla^*)$ be a Riemannian manifold with dual affine connections. If the torsion and the curvature with respect to $\nabla$ and those of $\nabla^*$ all vanish, then we say that $(M,g,\nabla,\nabla^*)$ is \emph{dually flat}. For a local coordinate system $(U; x_1,\ldots, x_n)$, if the Christoffel symbols $\{\Gamma^k_{ij}\}$ of $\nabla$ vanish, we call it a system of \emph{$\nabla$-affine coordinates}. \begin{proposition} Let $(M,g,\nabla,\nabla^*)$ be a Riemannian manifold with dual affine connections. If it is dually flat, then there exist $\nabla$-affine coordinates $(x_i)$ and $\nabla^*$-affine coordinates $(y_j)$ such that \begin{equation*} g\left(\frac{\partial}{\partial x_i},\frac{\partial}{\partial y_j}\right) = \delta_{ij}. \end{equation*} \end{proposition} The coordinates $\{(x_i),(y_j)\}$ above are called \emph{dual affine coordinates}. Using dual affine coordinates, we can construct the canonical divergence [2, \S 3.4]. Next, we introduce the Fisher metric and $\alpha$-connections. Consider a family $\mathcal{S}$ of probability distributions on a finite set $\mathcal{X}$.
Suppose that $\mathcal{S}$ is parameterized by $n$ real-valued variables $[\xi^1,\ldots,\xi^n]$ so that \begin{equation*} \mathcal{S} := \{p_\xi = p(x;\xi)\mid \xi = [\xi^1,\ldots,\xi^n]\in\Xi\}, \end{equation*} where $\Xi$ is an open subset of $\mathbb{R}^n$. For $\alpha \in\mathbb{R}, u>0, x\in\mathcal{X}$ and $\xi\in\Xi$, we put \begin{equation*} L^{(\alpha)}(u) := \begin{cases} \frac{2}{1-\alpha}u^{\frac{1-\alpha}{2}} & (\alpha\neq 1),\\ \log u & (\alpha = 1), \end{cases} \quad l^{(\alpha)}(x;\xi) := L^{(\alpha)}(p(x;\xi)). \end{equation*} Then, we define the \emph{Fisher metric} $g$ as \begin{equation*} g_{ij}(\xi) := \int\partial_il^{(\alpha)}(x;\xi)\partial_jl^{(-\alpha)}(x;\xi)\, dx, \end{equation*} and \emph{$\alpha$-connections} $\nabla^{(\alpha)}$ as \begin{equation*} \Gamma^{(\alpha)}_{ij,k}(\xi) := \int\partial_i\partial_j l^{(\alpha)}(x;\xi)\partial_kl^{(-\alpha)}(x;\xi)\, dx, \end{equation*} where $g(\nabla^{(\alpha)}_{\partial_i}\partial_j,\partial_k) = \Gamma^{(\alpha)}_{ij,k}$. Note that the Fisher metric does not depend on $\alpha$. We set \begin{equation*} \tilde{\mathcal{S}} := \{\tau p_\xi \mid \xi \in\Xi , \tau > 0\}, \end{equation*} and call it the \emph{denormalization} of $\mathcal{S}$. In \cite{amari}, the Fisher metric and connections on $\tilde{\mathcal{S}}$ are defined as follows. An extension $\tilde{l}$ of $l$ is defined as \begin{equation*} \widetilde{l}^{(\alpha)} = \widetilde{l}^{(\alpha)}(x;\xi,\tau) := L^{(\alpha)}(\tau p(x;\xi)). \end{equation*} Using this $\tilde{l}$, we define the metric and connections on $\widetilde{\mathcal{S}}$ by \begin{equation}\label{netric_denormalization} \tilde{g}_{ij}(\xi) := \int\partial_i\tilde{l}^{(\alpha)}\partial_j\tilde{l}^{(-\alpha)}\, dx,\quad\tilde{\Gamma}^{(\alpha)}_{ij,k} = \int\partial_i\partial_j \tilde{l}^{(\alpha)}\partial_k\tilde{l}^{(-\alpha)}\, dx. 
\end{equation} \subsection{Quantum information geometry} Information geometry of density matrices is called quantum information geometry. The set of density matrices $\mathcal{D}$ is defined as \begin{equation*} \mathcal{D} := \{\rho\in\mathbb{P}(n) | \mathrm{Tr}(\rho) = 1\}, \end{equation*} where $\mathbb{P}(n)$ is the set of $n\times n$ positive definite Hermitian matrices. Parameterizing elements of $\mathcal{D}$ as $\rho_\xi$ by $\xi\in\Xi$, the $m$-representation of the natural basis is written as \begin{equation*} (\partial_i)^{(m)} = \partial_i\rho. \end{equation*} The \emph{mixture connection} $\nabla^{(m)}$ is a connection such that \begin{equation*} (\nabla^{(m)}_{\partial_i}\partial_j)^{(m)} = \partial_i\partial_j\rho. \end{equation*} We set \begin{equation*} \mathcal{MON} := \left\{f:\mathbb{R}_{>0}\rightarrow \mathbb{R}_{>0} | f \mbox{ is operator monotone}, \,f(1) = 1, f(t) = tf\left(\frac{1}{t}\right)\right\}. \end{equation*} The \emph{monotone metric} for $f\in\mathcal{MON}$ is expressed as \begin{equation*} g^f_\rho(X,Y) = \mbox{Tr}\left\{X^*\frac{1}{(2\pi i)^2}\oint\oint c(\xi,\eta) \frac{1}{\xi-\rho} Y \frac{1}{\eta-\rho} \, d\xi d\eta \right\}, \end{equation*} where $c(x,y) = 1/(yf(x/y))$ and $\xi(t), \eta(t)$ are paths surrounding the positive spectrum of $\rho$. The monotone metric for $f(x) = (x-1)/\log x$ is called the Bogoliubov-Kubo-Mori (BKM) metric. We refer to \cite{amari} and \cite{dit} for further reading. It is known that the BKM metric enjoys the following remarkable property. \begin{proposition} $(\mathcal{D},\rm{BKM})$ equipped with the mixture connection is dually flat. \end{proposition} We can find a proof of this proposition in [2, Theorem 7.1], and the proof does not use the condition that the matrices considered have trace 1. Thus we can prove the proposition below in completely the same way. \begin{proposition} $(\mathbb{P}(n),\rm{BKM})$ equipped with the mixture connection is dually flat. 
\end{proposition} \subsection{Takano Gaussian space}\label{takano_gauss} In this subsection, we explain some results from \cite{takano}. We consider multivariate Gaussian distributions \begin{equation*} p(x;\xi) = \frac{1}{(\sqrt{2\pi}\sigma)^n}\prod_{i=1}^n \exp\left\{-\frac{(x_i-m_i)^2}{2\sigma^2}\right\}, \end{equation*} where $\xi = (\sigma,m_1,\ldots , m_n)\in L^{(n+1)}, \,L^{(n+1)} := \mathbb{R}_{>0}\times \mathbb{R}^n$. By a straightforward calculation, we obtain the Fisher metric $G$ as \begin{equation*} G_{\sigma\sigma} = \frac{2n}{\sigma^2},\quad G_{\sigma i} = G_{i \sigma} = 0,\quad G_{ij} = \frac{1}{\sigma^2}\delta_{ij}, \end{equation*} where $\partial_\sigma = \partial/\partial\sigma$ and $\partial_i = \partial/\partial m_i$, i.e., \begin{equation*} ds^2 = \frac{1}{\sigma^2}(2nd\sigma^2 + dm_1^2 + \cdots + dm_n^2). \end{equation*} Its $\alpha$-connections are \begin{equation*} \Gamma^{(\alpha)}_{ij,k} = 0, \quad \Gamma_{ij,\sigma}^{(\alpha)} = \frac{1-\alpha}{\sigma^3}\delta_{ij},\quad \Gamma^{(\alpha)}_{i\sigma,k} = -\frac{1 + \alpha}{\sigma^3}\delta_{ik}, \end{equation*} \begin{equation*} \Gamma^{(\alpha)}_{i\sigma,\sigma} = 0,\quad \Gamma^{(\alpha)}_{\sigma\sigma,i} = 0,\quad \Gamma^{(\alpha)}_{\sigma\sigma,\sigma} = -(1 + 2\alpha)\frac{2n}{\sigma^3}, \end{equation*} and \begin{equation*} \nabla^{(\alpha)}_{\partial_i}\partial_j = \frac{1-\alpha}{2n\sigma}\delta_{ij}\partial_\sigma,\quad \nabla^{(\alpha)}_{\partial_i}\partial_\sigma = \nabla^{(\alpha)}_{\partial_\sigma}\partial_i = -\frac{1 + \alpha}{\sigma}\partial_i,\quad \nabla^{(\alpha)}_{\partial_\sigma}\partial_\sigma = -\frac{1 + 2\alpha}{\sigma}\partial_\sigma. \end{equation*} In \cite{takano}, a space is called \emph{$\alpha$-flat} if the curvature tensor with respect to the $\alpha$-connection vanishes identically, and the following fact is proved. \begin{proposition} $(L^{(n + 1)},ds^2,\nabla^{(\alpha)})$ is a space of constant curvature $-\frac{(1-\alpha)(1 + \alpha)}{2n}$.
In particular, $(L^{(n + 1)},ds^2)$ is $(\pm{1})$-flat. \end{proposition} For simplicity, we call $(L^{(n + 1)},ds^2)$ the \emph{Takano Gaussian space} in this paper. \section{Warped products}\label{cone} In this section, we calculate dual affine connections on warped products. \subsection{Koszul formula} Let $\nabla$, $\nabla^*$ be torsion free dual affine connections on $(M,g)$. For $X,Y,Z,W \in \mathfrak{X}(M)$, let us first see a kind of Koszul formula for $\nabla$. Summing up \begin{eqnarray*} Xg(Y,Z) &=& g(\nabla_XY,Z) + g(Y,\nabla^*_XZ)\nonumber,\\ Yg(X,Z) &=& g(\nabla_YX,Z) + g(X,\nabla^*_YZ)\nonumber,\\ -Zg(X,Y) &=& -g(\nabla_ZX,Y) - g(X,\nabla^*_ZY), \end{eqnarray*} we get \begin{eqnarray*} Xg(Y,Z) + Yg(X,Z) - Zg(X,Y) &=& g(\nabla_XY,Z) + g(\nabla_YX,Z) + g(Y,\nabla^*_XZ - \nabla_ZX) + g(X,\nabla^*_YZ - \nabla^*_ZY). \end{eqnarray*} Recalling that we consider torsion free affine connections, we have \begin{equation}\label{star} 2g(\nabla_XY,Z) = Xg(Y,Z) + Yg(X,Z) - Zg(X,Y) + g([X,Y],Z) - g(Y,\nabla^*_XZ-\nabla_ZX) - g(X,[Y,Z]). \end{equation} In order to have a further look on $(\nabla^*_XZ-\nabla_ZX)$, we put \begin{equation*} a(X,W) := \nabla^*_XW - \nabla_WX, \end{equation*} and calculate \begin{eqnarray*} a(X,W) - a(W,X) &=& (\nabla^*_XW-\nabla_WX) - (\nabla^*_WX - \nabla_XW) = 2[X,W],\nonumber\\ a(X,W) + a(W,X) &=& (\nabla^*_XW - \nabla_XW) + (\nabla^*_WX - \nabla_WX) = -2\left(P_XW + P_WX\right), \end{eqnarray*} where we put \begin{equation*} P := \frac{\nabla-\nabla^*}{2}. \end{equation*} Let us collect some properties of $P$. \begin{lemma} Let $f$ be an arbitrary $C^\infty$ function on $M$. For any $X,Y,Z\in\mathfrak{X}(M)$, we have the following equations: \begin{equation} \label{aa} P_XY = P_YX, \end{equation} \begin{equation} \label{bb} g(P_XY,Z) = g(Y,P_XZ), \end{equation} \begin{equation} \label{cc} P_{fX} Y = fP_XY,\quad P_XfY = fP_XY. 
\end{equation} \begin{proof} For (\ref{aa}), \begin{equation*} P_XY - P_YX = \frac{\nabla_XY-\nabla_YX}{2} - \frac{\nabla^*_XY - \nabla^*_YX}{2} = \frac{1}{2}([X,Y]-[X,Y]) = 0. \end{equation*} For (\ref{bb}), \begin{eqnarray*} g\left(\frac{\nabla_X-\nabla^*_X}{2}Y,Z\right) &=& \frac{1}{2}\left(g(\nabla_XY,Z)-g(\nabla^*_XY,Z)\right)\nonumber\\ &=& \frac{1}{2}\left(Xg(Y,Z)-g(Y,\nabla^*_XZ)\right)-\frac{1}{2}(Xg(Y,Z)-g(Y,\nabla_XZ))\nonumber\\ &=& \frac{1}{2}\left(g(Y,\nabla_XZ)-g(Y,\nabla^*_XZ)\right)\nonumber\\ &=& g(Y,P_XZ). \end{eqnarray*} For (\ref{cc}), the first equation is clear and we also observe \begin{equation*} P_X(fY) = \frac{\nabla_X(fY) - \nabla^*_X(fY)}{2} = Xf\frac{Y - Y}{2} + f\frac{\nabla_XY - \nabla^*_XY}{2} = fP_XY. \end{equation*} \end{proof} \end{lemma} By the above lemma, we can express $a$ using $P$ as \begin{equation*} a(X,W) + a(W,X) = -2(P_XW + P_WX) = -4P_XW,\quad a(X,W) = [X,W] - 2P_XW. \end{equation*} Substituting this into (\ref{star}), we obtain the following Koszul formula: \begin{equation} 2g(\nabla_XY,Z) = Xg(Y,Z) + Yg(X,Z) - Zg(X,Y) + g([X,Y],Z) - g(Y,[X,Z] - 2P_XZ) - g(X,[Y,Z]).\label{starstar} \end{equation} \subsection{O'Neill formulas for affine connections on warped products} Let $(B,g_B),(F,g_F)$ be Riemannian manifolds, $f$ be a positive $C^{\infty}$-function on $B$, $M := B\times_f F$ be the warped product of them equipped with the metric $G := g_B + f^2 g_F$, and $D,D^*$ be torsion free dual affine connections on $M$. Denote by $\mathcal{L}(F)$, $\mathcal{L}(B)$ the sets of lifts of vector fields on $F$ to $M$, $B$ to $M$, respectively. Let $X,Y,Z\in\mathcal{L}(B)$, $U,V,W\in\mathcal{L}(F)$ in the sequel. We will assume the following. \begin{assume}\label{assume1} $D_XY\in\mathcal{L}(B)$ for all $X,Y \in\mathcal{L}(B)$. \end{assume} \begin{lemma}\label{3_2} Under Assumption \ref{assume1}, we have \begin{equation*} G(D^*_XY,V) = 0,\quad G(P_XV,Y) = 0. 
\end{equation*} \end{lemma} \begin{proof} The first equation follows from Assumption \ref{assume1} and the fact that $(D + D^*)/2$ is the Levi-Civita connection (recall Proposition \ref{levi}). To see the second equation, since $G(Y,V) = G(X,V) = 0$ and $[X,V] = [Y,V] = 0$, we have \begin{eqnarray*} 2G(D_XY,V) &=& XG(Y,V) + YG(X,V) - VG(X,Y) + G([X,Y],V) - G(Y,[X,V]-2P_XV) - G(X,[Y,V])\nonumber\\ &=& -VG(X,Y) + G([X,Y],V) + G(Y,2P_XV)\nonumber\\ &=& G(Y,2P_XV). \end{eqnarray*} By combining this with $2G(D_XY,V) = 0$ by Assumption \ref{assume1}, the second equation holds. \end{proof} Let us modify some formulas on warped products in \cite{oneil} for the Levi-Civita connections to those for affine connections. First we express $D_VX$. On the one hand, $G(D_XV,Y)= 0$ by the Koszul formula (\ref{starstar}) and Lemma \ref{3_2}. On the other hand, it follows from (\ref{bb}) and (\ref{starstar}) that \begin{eqnarray*} 2G(D_XV,W) &=& XG(V,W) + VG(X,W) - WG(X,V) + G([X,V],W) - G(V,[X,W]-2P_XW) - G(X,[V,W])\nonumber\\ &=& XG(V,W) + 2G(V,P_XW)\nonumber\\ &=& XG(V,W) + 2G(P_XV,W). \end{eqnarray*} Since \begin{equation*} XG(V,W) = 2\frac{Xf}{f}G(V,W) \end{equation*} by the definition of $G$ and $D$ is torsion free, we obtain \begin{equation}\label{alpha} D_VX = D_XV =\frac{Xf}{f}V + P_XV. \end{equation} Next we consider $D_VW$. Observe that \begin{equation*}\label{gamma} G(D_VW,X) = -G(W,D^*_VX) = -G\left(W,\frac{Xf}{f}V-P_XV\right). \end{equation*} Using $Xf = G(\mbox{grad }f,X)$, (\ref{aa}) and (\ref{bb}), we have \begin{equation*} G(D_VW,X) = G\left(-\frac{G(V,W)}{f}\mbox{grad }f + P_VW,X\right). \end{equation*} Thus we obtain \begin{equation}\label{beta} \mbox{Hor }D_VW = -\frac{G(V,W)}{f}\mbox{grad }f + \mbox{Hor }P_VW, \end{equation} where $\mbox{Hor}$ denotes the projection to $TB$. \begin{remark} As another way to reach these formulas, we can use the fact that $(D + D^*)/2$ is the Levi-Civita connection and formulas in \cite{oneil}. 
For example, \begin{equation*} \left(\frac{D + D^*}{2}\right)_XV = \frac{Xf}{f}V \end{equation*} implies \begin{equation*} D_XV = \frac{Xf}{f}V + \frac{D_XV - D^*_XV}{2} = \frac{Xf}{f}V + P_XV. \end{equation*} \end{remark} \section{Cones} In this section, we specialize our study of warped products to cones. We fix our framework and assumptions (including Assumption \ref{assume1}). \begin{assume}\label{assume2} Let $B = \mathbb{R}_{>0}$ with the Euclidean metric $g_B$ such that $g_B(\frac{\partial}{\partial t},\frac{\partial}{\partial t}) = 1$, $f(t) = t$ and $(\widetilde{\nabla},\widetilde{\nabla}^*)$ be dually flat affine connections on $(F,g_F)$. Let $D$, $D^*$ be dually flat affine connections on $B\times_f F$ and $G$ be its warped product metric. We assume that $D$ satisfies \begin{itemize} \item $D_XY$ is horizontal, i.e., $D_XY\in\mathcal{L}(B)$ for all $X,Y\in\mathcal{L}(B)$, \item $\mbox{Ver }(D_VW) = \mbox{Lift }(\widetilde{\nabla}_VW)$ for all $V,W\in\mathcal{L}(F)$, \end{itemize} where Ver is the projection to $TF$. \end{assume} \begin{remark} Denote the curvature with respect to $D$ by $R$ and the curvature with respect to $\widetilde{\nabla}$ by ${}^FR$. Denote their duals by $R^*$ and $^FR^{*}$. Note that by Proposition \ref{curvature_prop}, we have $R^* = {}^FR^* = 0$ when $R = {}^FR = 0$. \end{remark} Throughout the rest of this section, we impose this assumption without further mention. We shall study what $R = {}^FR =0$ means and characterize admissible connections on $B$ (Theorem 4.1). \subsection{Calculations of $G(R(U,V)V,U)$}\label{second} We first consider vertical directions. We are going to calculate the Gauss equation (the relation between $R$ and $^FR$) for affine connections on $F$ and $M$ in a similar way to \cite{oneil}.
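Before carrying this out, it is convenient to record how the formulas of Section 3 specialize on the cone: since $f(t) = t$, we have $\mbox{grad }f = \frac{\partial}{\partial t}$, $\|\mbox{grad }f\| = 1$ and $\frac{\partial f}{\partial t} = 1$, so formula (\ref{alpha}) reads
\begin{equation*}
D_V\frac{\partial}{\partial t} = D_{\frac{\partial}{\partial t}}V = \frac{1}{t}V + P_{\frac{\partial}{\partial t}}V,
\end{equation*}
and (\ref{beta}) becomes $\mbox{Hor }D_VW = -\frac{G(V,W)}{t}\frac{\partial}{\partial t} + \mbox{Hor }P_VW$.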
For $U,V,W,Q\in\mathcal{L}(F)$, it follows from Assumption \ref{assume2} that \begin{eqnarray*} G(D_UD_VW,Q) &=& G(D_U(\mbox{Ver }D_VW),Q) + G(D_U(\mbox{Hor }D_VW),Q)\nonumber\\ &=& G(\widetilde{\nabla}_U\widetilde{\nabla}_VW,Q) + \left\{UG(I\hspace{-.1em}I(V,W),Q) - G(I\hspace{-.1em}I(V,W),D^*_UQ)\right\}\nonumber\\ &=& G(\widetilde{\nabla}_U\widetilde{\nabla}_VW,Q) - G(I\hspace{-.1em}I(V,W),\mbox{Hor }D^*_UQ)\nonumber\\ &=& G(\widetilde{\nabla}_U\widetilde{\nabla}_VW,Q) -G(I\hspace{-.1em}I(V,W),I\hspace{-.1em}I^*(U,Q)), \end{eqnarray*} where $I\hspace{-.1em}I(W,X) := \mbox{Hor }{D}_WX$, which is an affine version of the second fundamental form on $F$. Thus we have \begin{eqnarray}\label{gauss1} G(R(U,V)W,Q) &=& G({}^FR(U,V)W,Q) - G(I\hspace{-.1em}I(V,W),I\hspace{-.1em}I^*(U,Q)) + G(I\hspace{-.1em}I(U,W),I\hspace{-.1em}I^*(V,Q)). \end{eqnarray} Recall that ${}^FR = 0$ by Assumption \ref{assume2}. Observe from (\ref{beta}) that \begin{eqnarray*} G(I\hspace{-.1em}I(V,W),I\hspace{-.1em}I^*(U,Q)) &=& G\left(-\frac{G(V,W)}{f}\mbox{grad}f + \mbox{Hor}(P_VW), -\frac{G(U,Q)}{f}\mbox{grad}f - \mbox{Hor}(P_UQ)\right)\nonumber \\ &=& \frac{\|\mbox{grad}f\|^2}{f^2}G(V,W)G(U,Q) - G(\mbox{Hor}(P_VW),\mbox{Hor}(P_UQ))\nonumber \\ &&\qquad+ \frac{G(V,W)G(\mbox{grad}f,P_UQ)}{f} - \frac{G(U,Q)G(\mbox{grad}f,P_VW)}{f}, \end{eqnarray*} and similarly \begin{eqnarray*} G(I\hspace{-.1em}I(U,W),I\hspace{-.1em}I^*(V,Q)) &=& \frac{\|\mbox{grad}f\|^2}{f^2}G(U,W)G(V,Q) - G(\mbox{Hor}(P_UW),\mbox{Hor}(P_VQ)) \nonumber \\ &&\qquad+ \frac{G(U,W)G(\mbox{grad}f,P_VQ)}{f} - \frac{G(V,Q)G(\mbox{grad}f,P_UW)}{f}.
\end{eqnarray*} Substituting these and letting $W=V$ and $Q = U$, \begin{eqnarray*} G(R(U,V)V,U) &=& - \frac{\|\mbox{grad }f\|^2}{f^2}\left\{G(V,V)G(U,U) - G(U,V)^2\right\}\nonumber\\ &&\qquad + G(\mbox{Hor}(P_VV),\mbox{Hor}(P_UU)) -G(\mbox{Hor}(P_UV),\mbox{Hor}(P_VU)) - \frac{G(V,V)G(\mbox{grad }f,P_UU)}{f} \nonumber\\ &&\qquad + \frac{G(U,U)G(\mbox{grad }f,P_VV)}{f}+ \frac{G(U,V)G(\mbox{grad }f,P_VU)}{f} - \frac{G(V,U)G(\mbox{grad }f,P_UV)}{f}\nonumber\\ &=& - \frac{\|\mbox{grad }f\|^2}{f^2}\left\{G(V,V)G(U,U) - G(U,V)^2\right\}\nonumber\\ &&\qquad + G(\mbox{Hor}(P_VV),\mbox{Hor}(P_UU)) -G(\mbox{Hor}(P_VU),\mbox{Hor}(P_VU)) - \frac{G(V,V)G(\mbox{grad }f,P_UU)}{f} \nonumber\\ &&\qquad + \frac{G(U,U)G(\mbox{grad }f,P_VV)}{f}, \end{eqnarray*} where we used (\ref{aa}). Recalling $R = 0$ by Assumption \ref{assume2}, for any $U,V\in\mathcal{L}(F)$, we find \begin{eqnarray*} & &\frac{\|\mbox{grad }f\|^2}{f^2}\left\{G(V,V)G(U,U) - G(U,V)^2\right\}- G(\mbox{Hor}(P_VV),\mbox{Hor}(P_UU)) +G(\mbox{Hor}(P_VU),\mbox{Hor}(P_VU)) \nonumber\\ &&\qquad+ \frac{G(V,V)G(\mbox{grad }f,P_UU)}{f} - \frac{G(U,U)G(\mbox{grad }f,P_VV)}{f} = 0. \end{eqnarray*} Focusing on the symmetric and anti-symmetric parts in $U$ and $V$, and recalling $f(t) = t$, we obtain the following two equations: \begin{equation} \label{a} G(V,V)G(\mbox{grad }f,P_UU) = G(U,U)G(\mbox{grad }f,P_VV), \end{equation} \begin{equation} \label{b} \frac{1}{t^2}\left\{G(V,V)G(U,U) - G(U,V)^2\right\}- G(\mbox{Hor}(P_VV),\mbox{Hor}(P_UU)) +G(\mbox{Hor}(P_VU),\mbox{Hor}(P_VU)) = 0. \end{equation} To characterize admissible connections on $B$, we prepare some lemmas. \begin{lemma}\label{lem4_1} We have \begin{equation*} {\rm{Hor}}\left(P_{\frac{U}{\|U\|}}\frac{U}{\|U\|}\right) = {\rm{Hor}}\left(P_{\frac{V}{\|V\|}}\frac{V}{\|V\|}\right) \end{equation*} for any $U,V\in\mathcal{L}(F)$. 
\end{lemma} \begin{proof} It follows from (\ref{a}) that \begin{equation*} G\left(\mbox{grad }f,\mbox{Hor}P_{\frac{U}{\|U\|}}\frac{U}{\|U\|}\right) = G\left(\mbox{grad }f,\mbox{Hor}P_{\frac{V}{\|V\|}}\frac{V}{\|V\|}\right) \end{equation*} and, since the horizontal distribution is one-dimensional and spanned by $\mbox{grad }f$, this yields \begin{equation*} \mbox{Hor}\left(P_{\frac{U}{\|U\|}}\frac{U}{\|U\|}\right) = \mbox{Hor}\left(P_{\frac{V}{\|V\|}}\frac{V}{\|V\|}\right). \end{equation*} \end{proof} Hereafter, fix an arbitrary $x\in F$. Let $(\xi_i)$ be normal coordinates around $x$ on $F$ and denote $\partial_i = \frac{\partial}{\partial\xi_i}$. \begin{lemma}\label{lem4_2} We have \begin{equation*} ({\rm{Hor}}P_{\partial_i}\partial_i)_{(x,t)} = \left(t\frac{\partial}{\partial t}\right) \mbox{ or } \left(- t\frac{\partial}{\partial t}\right). \end{equation*} \end{lemma} \begin{proof} Put $U = \partial_i$ and $V = \partial_j$ $(i\neq j)$. Then Lemma \ref{lem4_1} implies \begin{equation*} \frac{\mbox{Hor } P_{(U + V)}(U + V)}{\|U + V\|^2} = \frac{\mbox{Hor }P_UU}{\|U\|^2} = \frac{\mbox{Hor }P_VV}{\|V\|^2}. \end{equation*} Note that, for all $t > 0$, \begin{equation*} \|U + V\|^2_{(x,t)} = \|U\|^2_{(x,t)} + \|V\|^2_{(x,t)} = 2t^2. \end{equation*} This yields \begin{equation*} \left\{\mbox{Hor}P_{(U + V)}(U + V)\right\}_{(x,t)} = 2\left(\mbox{Hor}P_UU\right)_{(x,t)} = 2(\mbox{Hor }P_VV)_{(x,t)}, \end{equation*} and hence, by the bilinearity of $P$ and the symmetry (\ref{aa}), \begin{equation}\label{delta} (\mbox{Hor}P_UV)_{(x,t)} = 0. \end{equation} Combining this with (\ref{b}), we have for all $t$, \begin{equation*} t^2= G_{(x,t)}(\mbox{Hor}P_VV,\mbox{Hor}P_UU) = G_{(x,t)}(\mbox{Hor}P_UU,\mbox{Hor}P_UU). \end{equation*} This proves the claim. \end{proof} \subsection{Calculations of $G(R(V,X)X,V)$}\label{coneconnection} Now, we put $X = \frac{\partial}{\partial t}$ and define $k$ by $D_XX = k(t)\frac{\partial}{\partial t}$. \begin{lemma}\label{lem4_3} Let $\partial_i = \frac{\partial}{\partial\xi_i}$ as in Lemma \ref{lem4_2}. 
We have \begin{equation*} (P_X\partial_i)_{(x,t)} = \left(\frac{1}{t}\partial_i\right)_{(x,t)} \mbox{ or } \left(- \frac{1}{t}\partial_i\right)_{(x,t)} \end{equation*} for all $t > 0$. \end{lemma} \begin{proof} Put $V = \partial_i$. Recall that $(\mbox{Hor }P_VV)_{(x,t)} = t\frac{\partial}{\partial t} \mbox{ or } \left(-t\frac{\partial}{\partial t}\right)$ by Lemma \ref{lem4_2}. First, we consider the case $(\mbox{Hor}P_VV)_{(x,t)} = t\frac{\partial}{\partial t}$. We deduce from (\ref{alpha}) that \begin{eqnarray*} G_{(x,t)}(D^*_VD_X^*X,V) &=& G\left(-k(t)\left(\frac{1}{t}V-P_XV\right),V\right)\nonumber\\ &=& -k(t)t + k(t)t\nonumber\\ &=& 0 \end{eqnarray*} for all $t > 0$, where the second equality follows since $G(P_XV,V) = G(P_VX,V) = G(X,P_VV) = t$ by (\ref{aa}) and (\ref{bb}). We similarly find from (\ref{alpha}) that \begin{eqnarray*} G_{(x,t)}(D^*_XD^*_VX,V) &=& XG\left(\frac{1}{t}V - P_XV,V\right) - G\left(\frac{V}{t}- P_XV,D_XV\right)\nonumber\\ &=& \frac{\partial}{\partial t}\left(\frac{1}{t}t^2 - t\right) - G\left(\frac{V}{t} - P_XV,\frac{V}{t} + P_XV\right)\nonumber\\ &=& -G\left(\frac{V}{t},\frac{V}{t}\right) + G(P_XV,P_XV)\nonumber\\ &=& -1 + G(P_XV,P_XV). \end{eqnarray*} Therefore we obtain for all $t > 0$, since $R^* = 0$, \begin{equation*} G(P_XV,P_XV) = 1. \end{equation*} Next we consider the case $(\mbox{Hor}P_VV)_{(x,t)} = -t\frac{\partial}{\partial t}$. We have \begin{eqnarray*} G_{(x,t)}(D_V(D_XX),V) &=& G\left(k(t)\left(\frac{1}{t}V+P_XV\right),V\right)\nonumber\\ &=& tk(t) - tk(t)\nonumber\\ &=& 0 \end{eqnarray*} and \begin{eqnarray*} G_{(x,t)}(D_X(D_VX),V) &=& G\left(D_X\left(\frac{1}{t}V + P_XV\right),V\right)\nonumber\\ &=& XG\left(\frac{1}{t}V + P_XV,V\right) - G\left(\frac{V}{t} + P_XV,\frac{V}{t} - P_XV\right)\nonumber\\ &=&- 1 + G(P_XV,P_XV). \end{eqnarray*} Since $R = 0$, we have $G_{(x,t)}(P_XV,P_XV) = 1$. 
For $U = \partial_j$ $(i\neq j)$, using $(\mbox{Hor}P_VU)_{(x,t)} = 0$ in (\ref{delta}), (\ref{aa}) and (\ref{bb}), we have \begin{equation*} G_{(x,t)}(P_XV,U) = G_{(x,t)}(P_VX,U) = G_{(x,t)}(X,P_VU) = 0. \end{equation*} Moreover $G(P_XV,X) = G(V,P_XX) = 0$. Therefore $(P_XV)_{(x,t)}$ and $V_{(x,t)}$ are linearly dependent for all $t > 0$, which proves the claim. \end{proof} The next result is the main aim of this section: a characterization of the admissible connections on the line $B$. \begin{theorem}\label{mainthm} Under Assumption \ref{assume2}, we have \begin{equation*} k(t) = \frac{1}{t} \mbox{ or } \left(-\frac{1}{t}\right). \end{equation*} \end{theorem} \begin{proof} Put $V = \partial_i$. When $(\mbox{Hor}P_VV)_{(x,t)} = -t\frac{\partial}{\partial t}$, we have \begin{equation*} G_{(x,t)}(P_XV,V) = G_{(x,t)}(P_VX,V) = G_{(x,t)}(X,P_VV) = -t. \end{equation*} Combining this with Lemma \ref{lem4_3}, we find \begin{equation*} (P_VX)_{(x,t)} = -\frac{1}{t}V. \end{equation*} We similarly find that $P_VX = \frac{1}{t}V$ if $\mbox{Hor }P_VV = t\frac{\partial}{\partial t}$. Hence, we need to consider only the following two cases. First, we consider the case $(\mbox{Hor}P_VV)_{(x,t)} = t\frac{\partial}{\partial t}$ and $(P_VX)_{(x,t)} = \frac{1}{t}V$. We have \begin{eqnarray*} G(D_V(D_XX),V) &=& G\left(k(t)\left(\frac{1}{t}V + P_V\frac{\partial}{\partial t}\right),V\right) = 2tk(t), \end{eqnarray*} and \begin{eqnarray*} G(D_X(D_VX),V) &=& XG(D_VX,V) - G(D_VX,D^*_XV)\nonumber\\ &=& \frac{\partial}{\partial t}\left\{G\left(\frac{1}{t}V + P_XV,V\right)\right\}-G\left(\frac{V}{t} + P_XV,\frac{V}{t}-P_XV\right)\nonumber\\ &=& \frac{\partial}{\partial t}(t + t) - 1 + 1 = 2. \end{eqnarray*} Hence by $R = 0$, we obtain $k(t) = \frac{1}{t}$. Next, we consider the case $(\mbox{Hor}P_VV)_{(x,t)} = -t\frac{\partial}{\partial t}$ and $(P_VX)_{(x,t)} = -\frac{1}{t}V$. 
We similarly have \begin{eqnarray*} G(D_V^*D_X^*X,V) &=& -k(t)G\left(\frac{1}{t}V - P_V\frac{\partial}{\partial t},V\right) = -2tk(t), \end{eqnarray*} and \begin{eqnarray*} G(D_X^*D_V^*X,V) &=& \frac{\partial}{\partial t}(t + t) - G\left(\frac{V}{t} - P_XV,\frac{V}{t} + P_XV\right) = 2-1 + 1 = 2. \end{eqnarray*} Therefore $k(t) = -\frac{1}{t}$. \end{proof} From the above proof, we obtain that $\mbox{Hor }(P_{\partial_i}\partial_i) = t\frac{\partial}{\partial t}$ and $P_{\partial_i}X = \frac{1}{t}\partial_i$ if $k(t) = \frac{1}{t}$, and that $\mbox{Hor }(P_{\partial_i}\partial_i) = -t\frac{\partial}{\partial t}$ and $P_{\partial_i}X = -\frac{1}{t}\partial_i$ if $k(t) = -\frac{1}{t}$. \subsection{Example 1: Denormalization} Here we consider the denormalization (recall Subsection \ref{infogeo}) as an example of a warped product with affine connections. Since we can prove the isometry to a warped product in the same way as in Subsection \ref{bkmcone}, we omit the detailed proof and give an explicit expression of an isometry between the warped product and the denormalization. Let $\widetilde{\mathcal{S}}$ be the set of positive finite measures on a finite set $\mathcal{X}$. We define a map $h$ as \begin{eqnarray*} h:\mathbb{R}_{>0}\times \mathcal{S} &\rightarrow& \widetilde{\mathcal{S}}\nonumber\\ (t,p) &\longmapsto& t^2p/4. \end{eqnarray*} We pull back $\tilde{g}$ on $\widetilde{\mathcal{S}}$ in (\ref{netric_denormalization}) by $h$ and define the induced metric $G$ on $\mathbb{R}_{>0}\times \mathcal{S}$. This $(\mathbb{R}_{>0}\times \mathcal{S},G)$ is a warped product, and $c(t) := t^2p/4$ is a curve of constant speed 1. Let $\{\xi_1,\ldots,\xi_n\}$ be a coordinate system of $\mathcal{S}$. We adopt $\{\tau,\xi_1,\ldots,\xi_n\}$ as a coordinate system of $\tilde{\mathcal{S}}$ and denote its natural basis by $\tilde{\partial}_i = \frac{\partial}{\partial\xi_i}$ and $\tilde{\partial}_\tau = \frac{\partial}{\partial \tau}$. 
For a vector field $X = X^i\partial_i\in \mathfrak{X}(\mathcal{S})$, we define $\tilde{X} := X^i\tilde{\partial}_i\in \mathfrak{X}(\widetilde{\mathcal{S}})$. We can set affine connections on the denormalization as in Subsection \ref{infogeo}. In \cite{amari}, they are expressed as follows. For $X,Y\in \mathfrak{X}(\mathcal{S})$, \begin{eqnarray*} \widetilde{\nabla}^{(\alpha)}_{\widetilde{X}}\widetilde{Y} &=& \widetilde{(\nabla^{(\alpha)}_XY)} - \frac{1 + \alpha}{2}\langle\widetilde{X},\widetilde{Y}\rangle\tilde{\partial}_\tau,\nonumber\\ \widetilde{\nabla}^{(\alpha)}_{\tilde{\partial}_\tau}\widetilde{X} &=& \widetilde{\nabla}^{(\alpha)}_{\widetilde{X}}\tilde{\partial}_\tau = \frac{1-\alpha}{2}\frac{1}{\tau}\widetilde{X},\nonumber\\ \widetilde{\nabla}^{(\alpha)}_{\tilde{\partial}_\tau}\tilde{\partial}_\tau &=& -\frac{1 + \alpha}{2}\frac{1}{\tau}\tilde{\partial}_\tau. \end{eqnarray*} Also, the metric $\widetilde{g}$ is expressed as \begin{equation*} \widetilde{g}_{ij} = \tau g_{ij},\quad \widetilde{g}_{i\tau} = 0,\quad\widetilde{g}_{\tau\tau} = \frac{1}{\tau}. \end{equation*} We can check by direct calculations that this connection satisfies Assumption \ref{assume2}; that is to say, the $\alpha$-connection on the denormalization is compatible with the warped product structure, and the curvatures vanish at $\alpha = \pm 1$. Let us see that the results we obtained in Subsections \ref{second} and \ref{coneconnection} can also be recovered in this situation. We set $\tau = t^2/4$. Note that $\|\partial_\tau\| := \sqrt{\tilde{g}(\partial_\tau,\partial_\tau)}= 1/\sqrt{\tau}$. 
Omitting the tildes for simplicity, we have \begin{eqnarray*} D_{\frac{\partial_\tau}{\|\partial_\tau\|}}X &=& \frac{1}{\|\partial_\tau\|}\frac{1-\alpha}{2}\frac{1}{\tau} X = \frac{1-\alpha}{2}\frac{1}{\sqrt{\tau}}X = \frac{1-\alpha}{t}X,\nonumber\\ P_X\frac{\partial_\tau}{\|\partial_\tau\|} &=& P_{\frac{\partial_\tau}{\|\partial_\tau\|}}X = \frac{1}{2}\left(\frac{1-\alpha}{t}-\frac{1 + \alpha}{t}\right)X = -\frac{\alpha}{t}X,\nonumber\\ D_{\frac{\partial_\tau}{\|\partial_\tau\|}}\frac{\partial_\tau}{\|\partial_\tau\|} &=& \sqrt{\tau}\left\{(\partial_\tau\sqrt{\tau})\partial_\tau + \frac{1}{\|\partial_\tau\|}\left(-\frac{1+\alpha}{2}\frac{1}{\tau}\partial_\tau\right)\right\} = -\frac{\alpha}{2}\partial_\tau = -\frac{\alpha}{t}\frac{\partial_\tau}{\|\partial_\tau\|}, \end{eqnarray*} where $D = \widetilde{\nabla}^{(\alpha)}$. When $\alpha = \pm 1$, these equations are compatible with the connections in Subsection \ref{coneconnection}. \subsection{Example 2: BKM cone}\label{bkmcone} Next, we consider $\mathbb{P}(n)$ equipped with the extended BKM metric (recall Subsection 2.2), which we call the BKM cone. We first show that $\mathbb{P}(n)$ with the extended monotone metric (not only the BKM metric) has a warped product structure. \begin{proposition} $\mathbb{P}(n)$ equipped with the extended monotone metric is a warped product. Precisely, there exists an isometry as follows: \begin{eqnarray*} \mathbb{R}_{>0}\times_{l(t) = t}\mathcal{D} &\rightarrow& (\mathbb{P}(n),g^f)\nonumber\\ (t,\rho) &\mapsto& \frac{t^2\rho}{4}, \end{eqnarray*} where $g^f$ is an arbitrary monotone metric. \end{proposition} \begin{proof} For simplicity, we calculate with $2\times 2$ matrices as in \cite{dit}. The following argument can be easily extended to the $n\times n$ case. For an arbitrary $\rho \in\mathcal{D}$, there exists a unitary matrix $U$ such that $U\rho U^* = \rho_0$, where $\rho_0 = \mbox{diag}[x,y]$ for some $x,y\in\mathbb{R}_{>0}$. 
We set \begin{equation*} X_1 = \begin{pmatrix} 2 & 0\\ 0 & 0 \end{pmatrix},\quad X_2 = \begin{pmatrix} 0 & 0\\ 0 & 2 \end{pmatrix},\quad X_3 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\quad X_4 = \begin{pmatrix} 0 & i\\ -i & 0 \end{pmatrix}. \end{equation*} These $X_1,X_2,X_3,X_4$ form an orthogonal basis of every tangent space of $(\mathbb{P}(2),g^f)$. Let us calculate the length of these vectors at $\rho_0$: \begin{eqnarray*} g_{\rho_0}^f(X_1,X_1) &=& \frac{1}{(2\pi i)^2}\mathrm{Tr}\oint\oint c(\xi,\eta)\begin{pmatrix} 2 & 0\\ 0 & 0 \end{pmatrix}\begin{pmatrix} \frac{1}{\xi-x} & 0\\ 0 & \frac{1}{\xi-y} \end{pmatrix}\begin{pmatrix} 2 & 0\\ 0 & 0 \end{pmatrix}\begin{pmatrix} \frac{1}{\eta-x} & 0\\ 0 & \frac{1}{\eta-y} \end{pmatrix}\, d\xi d\eta\nonumber\\ &=& \frac{1}{(2\pi i)^2}\mathrm{Tr}\oint\oint c(\xi,\eta)\begin{pmatrix} \frac{4}{(\xi - x)(\eta - x)} & 0\\ 0 & 0 \end{pmatrix}\, d\xi d\eta\nonumber\\ &=& 4c(x,x),\nonumber\\ g^f_{\rho_0}(X_2,X_2) &=& 4c(y,y),\nonumber\\ g_{\rho_0}^f(X_3,X_3) &=& \frac{1}{(2\pi i)^2}\mathrm{Tr}\oint\oint c(\xi,\eta)\begin{pmatrix} \frac{1}{(\xi-y)(\eta-x)} & 0\\ 0 & \frac{1}{(\xi-x)(\eta-y)} \end{pmatrix}\, d\xi d\eta\nonumber\\ &=& 2c(x,y),\nonumber\\ g_{\rho_0}^f(X_4,X_4) &=& \frac{1}{(2\pi i)^2}\mathrm{Tr}\oint\oint c(\xi,\eta)\begin{pmatrix} \frac{1}{(\xi-y)(\eta-x)} & 0\\ 0 & \frac{1}{(\xi-x)(\eta-y)} \end{pmatrix}\, d\xi d\eta\nonumber\\ &=& 2c(x,y). \end{eqnarray*} These calculations, together with the homogeneity $c(kx,ky) = \frac{1}{k}c(x,y)$ of the kernel $c$, show that for any $k> 0$ and any tangent vectors $X$ and $Y$, we have \begin{equation*} g^f_{k\rho}(X,Y) = g^f_{k\rho_0}(UXU^*,UYU^*) = \frac{1}{k}g^f_{\rho_0}(UXU^*,UYU^*) = \frac{1}{k}g^f_\rho(X,Y), \end{equation*} where we used the fact that $g^f_{U\rho U^*}(UXU^*,UYU^*) = g^f_\rho(X,Y)$. Thus, we obtain \begin{equation}\label{mono1} g^f_{k\rho}(kX,kX) = kg^f_\rho(X,X). \end{equation} We define $h$ as \begin{eqnarray*} h:\mathbb{R}_{>0}\times\mathcal{D} &\rightarrow& \mathbb{P}(2)\nonumber\\ (t,\rho) &\mapsto& \frac{t^2\rho}{4}. 
\end{eqnarray*} We pull back $g^f$ on $\mathbb{P}(n)$ by $h$ and define $G$ on $\mathbb{R}_{>0}\times \mathcal{D}$. We show that $G$ is a warped product metric on $\mathbb{R}_{>0}\times\mathcal{D}$. We consider the curves \begin{equation*} \gamma(t) := \frac{t^2\rho}{4},\quad \gamma_0(t) := \frac{t^2\rho_0}{4}. \end{equation*} Then we find \begin{eqnarray*} G_{(t,\rho)}\left(\frac{\partial}{\partial t},\frac{\partial}{\partial t}\right) &=& g_{\gamma(t)}^f(\gamma'(t),\gamma'(t))\nonumber\\ &=& g^f_{U\gamma(t)U^*}(U\gamma'(t)U^*,U\gamma'(t)U^*)\nonumber\\ &=&g^f_{\gamma_0(t)}(\gamma_0'(t),\gamma_0'(t))\nonumber\\ &=& g_{\gamma_0(t)}^f\left(\begin{pmatrix} tx/2 & 0\\ 0 & 0 \end{pmatrix},\begin{pmatrix} tx/2 & 0\\ 0 & 0 \end{pmatrix}\right)+g_{\gamma_0(t)}^f\left(\begin{pmatrix} 0 & 0\\ 0 & ty/2 \end{pmatrix},\begin{pmatrix} 0 & 0\\ 0 & ty/2 \end{pmatrix}\right)\nonumber\\ &=& \left(\frac{tx}{2}\right)^2 \frac{4}{t^2x} + \left(\frac{ty}{2}\right)^2 \frac{4}{t^2y}\nonumber\\ &=& \mathrm{Tr}\rho. \end{eqnarray*} Since $\rho\in\mathcal{D}$ is a density matrix, we have $x + y = 1$. We now get \begin{equation}\label{warp1} G_{(t,\rho)}\left(\frac{\partial}{\partial t},\frac{\partial}{\partial t}\right) = 1. \end{equation} Let us calculate $dh$. Since $h$ is expressed by the natural coordinates of $\mathbb{R}_{>0}\times \mathcal{D}$ and $\mathbb{P}(2)$ as \begin{equation*} \left(t,\begin{pmatrix} x & z + iw\\ z - iw & 1-x \end{pmatrix}\right)\longmapsto\frac{t^2}{4}\begin{pmatrix} x & z + iw\\ z - iw & 1-x \end{pmatrix} = \begin{pmatrix} a & b + ic\\ b - ic & d \end{pmatrix}, \end{equation*} we have \begin{equation*} (Jh)_\rho = \begin{pmatrix} tx/2 & t^2/4 & 0 & 0 \\ tz/2 & 0 & t^2/4 & 0 \\ tw/2 & 0 & 0 & t^2/4 \\ t(1-x)/2 & -t^2/4 & 0 & 0 \\ \end{pmatrix}. 
\end{equation*} The pull-back metric $G$ satisfies \begin{eqnarray*} G\left(\left(\frac{\partial}{\partial z}\right)_{(t,\rho)},\left(\frac{\partial}{\partial z}\right)_{(t,\rho)}\right) = g^f\left(\frac{t^2}{4}\left(\frac{\partial}{\partial b}\right)_{\frac{t^2}{4}\rho},\frac{t^2}{4}\left(\frac{\partial}{\partial b}\right)_{\frac{t^2}{4}\rho}\right), \end{eqnarray*} where \begin{equation*} \left(\frac{\partial}{\partial b}\right)_{\frac{t^2}{4}\rho} := \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}. \end{equation*} Using (\ref{mono1}), we obtain \begin{eqnarray*} g^f\left(\frac{t^2}{4}\left(\frac{\partial}{\partial b}\right)_{\frac{t^2}{4}\rho},\frac{t^2}{4}\left(\frac{\partial}{\partial b}\right)_{\frac{t^2}{4}\rho}\right) &=& t^2g^f_{\rho/4}\left(\frac{1}{4}\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} ,\frac{1}{4}\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\right)\nonumber\\ &=& t^2G\left(\left(\frac{\partial}{\partial z}\right)_{(1,\rho)},\left(\frac{\partial}{\partial z}\right)_{(1,\rho)}\right). \end{eqnarray*} Hence, \begin{equation}\label{warp2} G_{(t,\rho)}\left(\frac{\partial}{\partial z},\frac{\partial}{\partial z}\right) = t^2G_{(1,\rho)}\left(\frac{\partial}{\partial z},\frac{\partial}{\partial z}\right). \end{equation} The same equation holds for $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial w}$. Combining this with \eqref{warp1}, we see that $G$ is a warped product metric with the warping function $l(t) = t$. \end{proof} For $n=2$, if we take trivial coordinates of $\mathcal{D}$ such as \begin{equation*} \rho(x,y,z) = \begin{pmatrix} x & y + iz\\ y - iz & 1-x \end{pmatrix}, \end{equation*} the mixture connection is a flat affine connection, for which $\{x,y,z\}$ is an affine coordinate system. For example, we have the following calculation for the mixture connection ${\nabla}^{(m)}$. Set $X = \frac{\partial}{\partial x}$, $Y = \phi\frac{\partial}{\partial y}$, for an arbitrary function $\phi$ on $\mathcal{D}$. 
Then we have \begin{equation*} (\nabla^{(m)}_XY)_\rho = \left\{\nabla^{(m)}_{\frac{\partial}{\partial x}}\left(\phi\frac{\partial}{\partial y}\right) \right\}_{\rho}= \left(\frac{\partial \phi}{\partial x}\frac{\partial}{\partial y}\right)_{\rho} = \frac{\partial \phi}{\partial x}\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} = \frac{\partial}{\partial x}\left(\phi\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\right) = X(Y\rho). \end{equation*} Similarly, if we take coordinates of $\mathbb{P}(2)$ such as \begin{equation*} \widetilde{\rho} = \begin{pmatrix} \alpha & \beta + i\gamma\\ \beta - i \gamma & \zeta \end{pmatrix}, \end{equation*} then the connection $D$ whose affine coordinate system is $\{\alpha,\beta,\gamma,\zeta\}$ satisfies $(D_XY)_{\widetilde{\rho}} = X(Y\widetilde{\rho})$. For other coordinates such as \begin{equation*} \widetilde{\rho} = \tau\begin{pmatrix} x & y + iz\\ y-iz & 1-x \end{pmatrix}, \end{equation*} we have $D_{\partial_\tau}\partial_\tau = 0$ and $D_{\partial_x}\partial_x = D_{\partial_y}\partial_y = D_{\partial_z}\partial_z = 0$. Combining this with $\widetilde{\nabla}_{\partial_x}\partial_x = \widetilde{\nabla}_{\partial_y}\partial_y = \widetilde{\nabla}_{\partial_z}\partial_z = 0$, we see that the connection $D$ defined above satisfies Assumption \ref{assume2}. \begin{remark} In \cite{grasseli}, quantum $\alpha$-connections on the set of positive definite matrices and their dual flatness are studied. Also for the quantum $\alpha$-connections, we can check the same compatibility between connections and the warped product structure as that of the classical denormalization, which we studied in Subsection 4.3. \end{remark} \section{Connections on the Takano Gaussian space} In this section, we consider the Takano Gaussian space (recall Subsection 2.3) and show an analogue of Theorem \ref{mainthm}. Let $(B,g_B),(F,g_F)$ be Riemannian manifolds and let $\nabla^F$ denote the Levi-Civita connection of $F$. 
We furnish $M := B \times F$ with a metric $G$ such that \begin{equation*} G := f^2 g_B + b^2 g_F, \end{equation*} where $f$, $b$ are positive functions on $B$. This is the same situation as in the Takano Gaussian space. Denote by $\nabla$ the Levi-Civita connection on $(M,G)$, and by $\mathcal{L}(F)$, $\mathcal{L}(B)$ the sets of lifts of tangent vector fields of $F$ to $M$ and of $B$ to $M$, respectively. Simple calculations show that $\nabla_XY$ is horizontal for any $X,Y\in\mathcal{L}(B)$ and $\mbox{Ver}\nabla_VW = \mbox{Lift}(\nabla^F_VW)$ for any $V,W\in\mathcal{L}(F)$. From now on, we set $M:=L^{n+1}$, $F:= \{(m_1,\ldots,m_n) | m_i\in\mathbb{R}\}$ and $B := \{\sigma\in\mathbb{R}_{>0}\}$. Let $G$ be the Fisher metric on $M$. Let $D$ be an arbitrary affine connection on $M$. We define the affine connection $\widetilde{\nabla}^F$ on the fiber space $F$ by the natural projection of $D$. We fix our framework. \begin{assume}\label{assume_5} We assume that $D$ satisfies \begin{itemize} \item $D_XY$ is horizontal, i.e.\ $D_XY\in\mathcal{L}(B)$, for any $X,Y\in\mathcal{L}(B)$, \item $\mbox{Ver }(D_VW) = \mbox{Lift }(\widetilde{\nabla}^F_VW)$ for any $V,W\in\mathcal{L}(F)$. \end{itemize} Let $R$ be the curvature with respect to $D$, $^FR$ be the curvature with respect to $\widetilde{\nabla}^F$, and $R^*$ and $^FR^{*}$ be their duals. We also assume $R = {}^FR =0$. \end{assume} \begin{remark} If we take an $\alpha$-connection of the Takano Gaussian space as $D$, we see that $(F,\widetilde{\nabla}^F)$ is dually flat from the expression of the Christoffel symbols of the Takano Gaussian space in Subsection \ref{takano_gauss}. \end{remark} Throughout the rest of this section, we assume Assumption \ref{assume_5} without further mention. 
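The Fisher-metric coefficients used in the computations below, $G(\partial_i,\partial_i) = 1/\sigma^2$ and $G(\partial_\sigma,\partial_\sigma) = 2n/\sigma^2$, can be checked symbolically. The following sketch is only a sanity check, under the assumption that the Takano Gaussian space consists of products of $n$ independent Gaussians $N(m_i,\sigma^2)$ with a common scale $\sigma$ (here $n = 2$, so the expected coefficients are $1/\sigma^2$ and $4/\sigma^2$):

```python
import sympy as sp
from sympy.stats import Normal, E

m1, m2 = sp.symbols('m1 m2', real=True)
sigma = sp.symbols('sigma', positive=True)
x1, x2 = sp.symbols('x1 x2', real=True)

# Log-density of n = 2 independent Gaussians N(m_i, sigma^2) with common scale sigma.
logp = sum(-sp.log(sigma * sp.sqrt(2 * sp.pi)) - (x - m)**2 / (2 * sigma**2)
           for x, m in [(x1, m1), (x2, m2)])

X1, X2 = Normal('X1', m1, sigma), Normal('X2', m2, sigma)

def fisher(a, b):
    # Fisher information E[(d_a log p)(d_b log p)], computed as an expectation
    # over the two independent Gaussian random variables.
    score = (sp.diff(logp, a) * sp.diff(logp, b)).expand()
    return sp.simplify(E(score.subs({x1: X1, x2: X2})))

assert sp.simplify(fisher(m1, m1) - 1 / sigma**2) == 0       # G(d_i, d_i) = 1/sigma^2
assert sp.simplify(fisher(sigma, sigma) - 4 / sigma**2) == 0 # 2n/sigma^2 with n = 2
assert fisher(m1, sigma) == 0                                # mixed terms vanish
```

The check also confirms that the mean and scale directions are $G$-orthogonal, which is implicitly used when the horizontal and vertical distributions are treated separately.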
As we saw in Subsection \ref{second}, the following equations hold: \begin{equation} \label{c} G(V,V)G(\mbox{grad }b,P_UU) = G(U,U)G(\mbox{grad }b,P_VV), \end{equation} \begin{equation} \label{d} \frac{\|\mbox{grad }b\|^2}{b^2}\left\{G(V,V)G(U,U) - G(U,V)^2\right\}- G(\mbox{Hor}(P_VV),\mbox{Hor}(P_UU)) +G(\mbox{Hor}(P_VU),\mbox{Hor}(P_VU)) = 0, \end{equation} where $b(\sigma) = \frac{\sqrt{2n}}{\sigma}$. In the following arguments, we set $X = \partial_\sigma$, $U = \partial_i$, $V = \partial_j$ $(i\neq j)$, where $\partial_\sigma = \frac{\partial}{\partial\sigma}$ and $\partial_i = \frac{\partial}{\partial m_i}$. To characterize the connections on the line, we prepare some lemmas; as in Section 4, we fix an arbitrary $x\in F$. \begin{lemma}\label{prop5_4} We have \begin{equation*} ({\rm{Hor}}P_UU)_{(x,\sigma)} = {\frac{1}{2n\sigma}}\partial_\sigma \mbox{ or } \left(- {\frac{1}{2n\sigma}}\partial_\sigma\right). \end{equation*} \end{lemma} \begin{proof} In the same way as in Lemma \ref{lem4_1}, we have \begin{equation}\label{lem5_1} {\rm{Hor}}P_UU = {\rm{Hor}}P_VV. \end{equation} Applying (\ref{c}) to $U + V$ and $V$, we obtain \begin{equation*} G(U + V,U + V)G(\mbox{grad }b,P_VV) = G(V,V)G(\mbox{grad }b,P_{(U + V)}( U + V)). \end{equation*} Substituting $G(U + V ,U + V) = 2/\sigma^2$ and $G(V,V) = 1/\sigma^2$ into the equation above, and using that the horizontal space is spanned by $\mbox{grad }b$, we get \begin{equation*} 2\mbox{Hor }P_VV = \mbox{Hor }P_{(U + V)}(U + V). \end{equation*} Hence, \begin{equation*} \mbox{Hor }(P_UU + P_VV + 2 P_UV) = 2\mbox{Hor }P_VV. \end{equation*} Together with (\ref{lem5_1}), we get \begin{equation*} (\mbox{Hor}P_UV)_{(x,\sigma)} = 0. 
\end{equation*} Combining this with (\ref{d}) and \begin{equation*} \frac{\|\mbox{grad }b\|^2}{b^2} = \frac{G(\mbox{grad }b,\mbox{grad }b)}{b^2} = \frac{G\left(G^{\sigma\sigma}(\partial_\sigma b)\partial_\sigma,G^{\sigma\sigma}(\partial_\sigma b)\partial_\sigma\right)}{b^2} = \frac{1}{2n}\frac{G(\partial_\sigma,\partial_\sigma)}{b^2} = \frac{1}{2n}, \end{equation*} (here we used $G^{\sigma\sigma}(\partial_\sigma b) = -\frac{1}{\sqrt{2n}}$ and $G(\partial_\sigma,\partial_\sigma) = b^2 = \frac{2n}{\sigma^2}$), we have, for all $\sigma > 0$, \begin{equation*} \frac{1}{2n}\frac{1}{\sigma^2}\frac{1}{\sigma^2} =G_{(x,\sigma)}(\mbox{Hor}P_VV,\mbox{Hor}P_UU)= G_{(x,\sigma)}(\mbox{Hor}P_UU,\mbox{Hor}P_UU), \end{equation*} which proves the claim. \end{proof} We define $k,l$ by $D_XX = k(\sigma)\frac{\partial}{\partial\sigma}$ and $D^*_XX = l(\sigma)\frac{\partial}{\partial\sigma}$. Combining $G(\partial_\sigma,\partial_\sigma) = \frac{2n}{\sigma^2}$ with $ \partial_\sigma G(\partial_\sigma,\partial_\sigma) = G(D_{\partial_\sigma}\partial_\sigma,\partial_\sigma) + G(\partial_\sigma,D^*_{\partial_\sigma}\partial_\sigma)$, we obtain \begin{equation*} -\frac{4n}{\sigma^3} = \frac{2n}{\sigma^2}(k(\sigma) + l(\sigma)). \end{equation*} Hence, \begin{equation}\label{star5} -\frac{2}{\sigma} = k(\sigma) + l(\sigma) \end{equation} holds. \begin{theorem}\label{takano_2} Under Assumption \ref{assume_5}, the connection on the line is \begin{equation*} D_{\partial_\sigma}\partial_\sigma = \frac{1}{\sigma}\partial_\sigma \mbox{ or } D_{\partial_\sigma}\partial_\sigma = -\frac{3}{\sigma}\partial_\sigma. \end{equation*} \end{theorem} \begin{proof} We only check the case of $\mbox{Hor}P_VV = \frac{1}{2n\sigma}\partial_\sigma$, because the other case of $\mbox{Hor}P_VV = -\frac{1}{2n\sigma}\partial_\sigma$ follows from exactly the same argument. According to the O'Neill formula (\ref{alpha}) for affine connections, we have \begin{equation*} D_VX = D_XV = \frac{\partial_\sigma b}{b}V + P_VX = -\frac{1}{\sigma}V + P_XV. 
\end{equation*} Using this, we calculate $G(R^*(V,X)X,V)$. We have \begin{eqnarray*} G(D^*_VD^*_XX,V) &=& G\left(l(\sigma)\left\{-\frac{1}{\sigma}V-P_XV\right\},V\right)\nonumber\\ &=& l(\sigma)\left\{-\frac{1}{\sigma}G(V,V) - G(X,P_VV)\right\}\nonumber\\ &=& l(\sigma)\left(-\frac{1}{\sigma}\frac{1}{\sigma^2} - G\left(\partial_\sigma,\frac{1}{2n\sigma}\partial_\sigma\right)\right)\nonumber\\ &=& l(\sigma)\left(-\frac{1}{\sigma^3} - \frac{1}{2n\sigma}\frac{2n}{\sigma^2}\right)\nonumber\\ &=& l(\sigma)\left(-\frac{2}{\sigma^3}\right), \end{eqnarray*} and \begin{eqnarray*} G(D^*_XD^*_VX,V) &=& XG(D^*_VX,V) - G(D^*_VX,D_XV)\nonumber\\ &=& \partial_\sigma G\left(-\frac{1}{\sigma}V-P_XV,V\right) - G\left(\frac{V}{\sigma},\frac{V}{\sigma}\right) + G(P_XV,P_XV)\nonumber\\ &=& \partial_\sigma\left(-\frac{1}{\sigma}G(V,V)-G(X,P_VV)\right) - \frac{1}{\sigma^2}G(V,V) + G(P_XV,P_XV)\nonumber\\ &=& \frac{5}{\sigma^4} + G(P_XV,P_XV). \end{eqnarray*} Since $R^* = 0$, we obtain \begin{equation}\label{5thm_eq_1} l(\sigma)\left(-\frac{2}{\sigma^3}\right) = \frac{5}{\sigma^4} + G(P_XV,P_XV). \end{equation} Next, let us calculate $G(R(V,X)X,V)$. We have \begin{eqnarray*} G(D_VD_XX,V) &=& G\left(k(\sigma)(D_VX),V\right)\nonumber\\ &=& k(\sigma)G\left(-\frac{1}{\sigma}V + P_XV,V\right)\nonumber\\ &=& k(\sigma)\left(-\frac{1}{\sigma^3} + G(P_XV,V)\right)\nonumber\\ &=& k(\sigma)\left(-\frac{1}{\sigma^3} + \frac{1}{2n\sigma}\frac{2n}{\sigma^2}\right) = 0, \end{eqnarray*} and \begin{eqnarray*} G(D_XD_VX,V) &=& XG(D_VX,V) - G(D_VX,D^*_XV)\nonumber\\ &=& \partial_\sigma G\left(-\frac{1}{\sigma}V + P_XV,V\right) - G\left(\frac{V}{\sigma},\frac{V}{\sigma}\right) + G(P_XV,P_XV)\nonumber\\ &=& \partial_\sigma\left(-\frac{1}{\sigma}\frac{1}{\sigma^2} + G(X,P_VV)\right) - \frac{1}{\sigma^2}\frac{1}{\sigma^2} + G(P_XV,P_XV)\nonumber\\ &=& \partial_\sigma\left(-\frac{1}{\sigma^3} + \frac{1}{\sigma^3}\right) - \frac{1}{\sigma^4} + G(P_XV,P_XV) = -\frac{1}{\sigma^4} + G(P_XV,P_XV). 
\end{eqnarray*} Since $R = 0$, we have \begin{equation*} 0 = -\frac{1}{\sigma^4} + G(P_XV,P_XV). \end{equation*} Combining this with (\ref{5thm_eq_1}) and (\ref{star5}), we obtain \begin{equation*} l(\sigma)= -\frac{3}{\sigma},\quad k(\sigma) = \frac{1}{\sigma}. \end{equation*} The other case is shown in the same way. \end{proof} Note that these connections coincide with the $\alpha $-connections at $\alpha = \pm 1$ in the Takano Gaussian space (recall Subsection 2.3). \section{Discussion: Wasserstein Gaussian space} By \emph{Wasserstein Gaussian space}, we mean the set of multivariate Gaussian distributions on $\mathbb{R}^n$ with mean zero equipped with the $L^2$-Wasserstein metric. When we started investigating warped products in information geometry, we thought that we would be able to find dually flat connections on the Wasserstein Gaussian space and calculate its canonical divergence. The scenario we had in mind was the following. In \cite{takatsu}, it is proved that the Wasserstein Gaussian space has a cone structure. Recently, in \cite{fujiwara2}, it was proved that we can find dually flat connections on the space of density matrices equipped with the monotone metric. We can apply this result because the SLD metric and the Wasserstein metric on Gaussian distributions are essentially the same on $\mathcal{D}$ \cite{bures}. We thought that once we studied dually flat affine connections on warped products, we would be able to extend the dually flat connections on the fiber space to the warped product in a natural way. However, it turned out that it is difficult to find dual affine coordinates for the dually flat connections on warped products that we constructed. Thus, we do not know how to calculate the canonical divergence. Let us explain the difficulty in this section. In the previous sections, we discussed necessary conditions for warped products and fiber spaces to be dually flat. First, we show that these conditions are also sufficient in the case of the Wasserstein Gaussian space. 
The question is whether, when we extend connections on the fiber space to the warped product, the warped product with those connections becomes dually flat. According to the arguments in Section 4, we now define the connection $D$ on a cone $M = \mathbb{R}_{>0}\times_{f} F$, where $f(t) = t$ and $(F,g_F)$ is a Riemannian manifold. We consider the situation that the fiber space is equipped with a dually flat affine connection $\widetilde{\nabla}$. Let $G := g_B + f^2g_F$ be the warped product metric on $M$. Denote the base space by $(\mathbb{R}_{>0},g_B)$ with a coordinate $\{t\in\mathbb{R}_{>0}\}$ such that $g_B(\frac{\partial}{\partial t},\frac{\partial}{\partial t}) = 1$. Let $X:= \frac{\partial}{\partial t}$ and $\{U_i\}_{i=1}^n\subset\mathcal{L}(F)$ be a basis of $\mathcal{L}(F)$ such that $[U_i,U_j] = 0$ for any $i,j\in\{1,\ldots,n\}$. Following Theorem \ref{mainthm} and Lemma \ref{lem4_3}, define the connection $D$ by \begin{equation*} D_XX = \frac{1}{t}\frac{\partial}{\partial t},\quad D_XV = D_VX := \frac{2}{t}V,\quad \begin{cases} \mbox{Hor } D_VW := 0,\\ \mbox{Ver } D_VW := \mbox{Lift} (\widetilde{\nabla}_VW), \end{cases} \end{equation*} where $V,W$ are arbitrary vectors in $\{U_i\}_{i=1}^n$. \begin{proposition} $(M,D,D^*)$ is a dually flat space. \end{proposition} \begin{proof} We only have to check that the curvature with respect to $D$ vanishes; the dual curvature then vanishes by Proposition \ref{curvature_prop}. Let $U,V,W,Q$ be arbitrary vectors in $\{U_i\}_{i=1}^n$. Note that we have \begin{eqnarray}\label{last1} \mbox{Hor}D_UV = 0, \end{eqnarray} and \begin{equation}\label{last2} D^*_XU = D^*_UX = \frac{1}{t}U - \frac{1}{t}U = 0. \end{equation} We first check that $R(U,V)W$ vanishes. From (\ref{last1}) and (\ref{last2}), we have \begin{eqnarray*} G(R(U,V)W,X) &=& G(D_UD_VW,X) - G(D_VD_UW,X)\nonumber\\ &=& \left\{UG(D_VW,X) - G(D_VW,D^*_UX)\right\} - \left\{VG(D_UW,X) - G(D_UW,D^*_VX)\right\}\nonumber\\ &=& UG(\mbox{Hor}D_VW,X) - VG(\mbox{Hor}D_UW,X) = 0. 
\end{eqnarray*} Recall from Subsection \ref{second} and (\ref{beta}) that \begin{eqnarray*} G(I\hspace{-.1em}I(V,W),I\hspace{-.1em}I^*(U,Q)) &=& \frac{\|\mbox{grad}f\|^2}{f^2}G(V,W)G(U,Q) - G(\mbox{Hor}(P_VW),\mbox{Hor}(P_UQ)) \nonumber\\ &&\qquad+ \frac{G(V,W)G(\mbox{grad}f,P_UQ)}{f} - \frac{G(U,Q)G(\mbox{grad}f,P_VW)}{f}\nonumber\\ &=& \frac{1}{t^2}G(V,W)G(U,Q) - G\left(\frac{G(V,W)}{t}\frac{\partial}{\partial t},\frac{G(U,Q)}{t}\frac{\partial}{\partial t}\right) \nonumber\\ &&\qquad+ \frac{1}{t}G(V,W)G\left(\frac{\partial}{\partial t},\frac{G(U,Q)}{t}\frac{\partial}{\partial t}\right) - \frac{1}{t}G(U,Q)G\left(\frac{\partial}{\partial t},\frac{G(V,W)}{t}\frac{\partial}{\partial t}\right)\nonumber\\ &=& 0, \end{eqnarray*} thus we have $G(R(U,V)W,Q) = G(^FR(U,V)W,Q) = 0$ by (\ref{gauss1}). Hence, we have $R(U,V)W = 0$. Next, we check that $R(X,U)V$ vanishes. From (\ref{last1}) and (\ref{last2}), we have \begin{eqnarray*} G(R(X,U)V,X) &=& G(D_XD_UV- D_UD_XV,X)\nonumber\\ &=& \{XG(D_UV,X) - G(D_UV,D^*_XX)\} - UG(D_XV,X) \nonumber\\ &=& -UG\left(\frac{2}{t}V,X\right) = 0. \end{eqnarray*} We also have \begin{eqnarray*} G(R(X,U)V,Q) &=& \{XG(D_UV,Q) - G(D_UV,D^*_XQ)\} - \{UG(D_XV,Q) - G(D_XV,D^*_UQ)\}\nonumber\\ &=& 2t\{g_F(\widetilde{\nabla}_UV,Q) - Ug_F(V,Q) + g_F(V,\widetilde{\nabla}^*_UQ)\}\nonumber\\ &=& 0. \end{eqnarray*} Hence, $R(X,U)V = 0$. In a similar way, we can check $R(U,X)X = 0$. \end{proof} \begin{remark} We refer to the Wasserstein Gaussian space over $\mathbb{R}^n$ as the \emph{$n\times n$ Wasserstein Gaussian space} since its elements are represented by $n\times n$ covariance matrices. For the $2\times 2$ Wasserstein Gaussian space, we remark that the existence of a dually flat affine connection $\widetilde{\nabla}$ is guaranteed by \cite{fujiwara2}. \end{remark} Thus the above proposition implies that we can furnish the $2\times 2$ Wasserstein Gaussian space with dually flat affine connections. 
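The flatness established in the proposition can also be verified coordinate-wise in a minimal model. The following sketch is an illustration only, under the simplifying assumption (ours, not part of the proposition) that the fiber is the flat plane $\mathbb{R}^2$ with the Euclidean connection taken as $\widetilde{\nabla}$ and $U_i = \partial_i$. The definition of $D$ then gives the Christoffel symbols $\Gamma^t_{tt} = \frac{1}{t}$, $\Gamma^i_{tj} = \Gamma^i_{jt} = \frac{2}{t}\delta_{ij}$, and all others zero, and a symbolic computation confirms that the curvature tensor of $D$ vanishes identically:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x1, x2 = sp.symbols('x1 x2', real=True)
coords = [t, x1, x2]
n = len(coords)

# Christoffel symbols Gamma[l][i][j] = Gamma^l_{ij} of the cone connection D on
# R_{>0} x_f R^2 with f(t) = t and the flat Euclidean connection on the fiber:
#   D_X X = (1/t) d/dt,  D_X U_i = D_{U_i} X = (2/t) U_i,  D_{U_i} U_j = 0.
Gamma = [[[sp.S(0)] * n for _ in range(n)] for _ in range(n)]
Gamma[0][0][0] = 1 / t        # Gamma^t_{tt}
for i in (1, 2):
    Gamma[i][0][i] = 2 / t    # Gamma^i_{t i}
    Gamma[i][i][0] = 2 / t    # Gamma^i_{i t}

def R(l, k, i, j):
    # Curvature R^l_{kij} = d_i Gamma^l_{jk} - d_j Gamma^l_{ik}
    #                       + Gamma^l_{im} Gamma^m_{jk} - Gamma^l_{jm} Gamma^m_{ik}
    expr = sp.diff(Gamma[l][j][k], coords[i]) - sp.diff(Gamma[l][i][k], coords[j])
    expr += sum(Gamma[l][i][m] * Gamma[m][j][k] - Gamma[l][j][m] * Gamma[m][i][k]
                for m in range(n))
    return sp.simplify(expr)

assert all(R(l, k, i, j) == 0
           for l in range(n) for k in range(n)
           for i in range(n) for j in range(n))
print("curvature of D vanishes")
```

The cancellation is not termwise: for instance $R^1_{0\,01}$ combines $\partial_t\Gamma^1_{10} = -2/t^2$ with the quadratic terms $\Gamma^1_{01}\Gamma^1_{10} = 4/t^2$ and $-\Gamma^1_{10}\Gamma^0_{00} = -2/t^2$, mirroring how $D_XX = \frac{1}{t}\partial_t$ compensates the $t$-dependence of $D_XV = \frac{2}{t}V$.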
Although it is necessary to find dual affine coordinates in order to calculate the canonical divergence, this turned out to be difficult. This is because, for example, $D_XU$ does not vanish, which means that the trivial extension of affine coordinates on the fiber space does not give affine coordinates of the warped product. Here, the trivial extension means $\{t,\xi_1,\ldots,\xi_n\}$ for affine coordinates $\{\xi_1,\ldots,\xi_n\}$ of the fiber space and the coordinate $\{t\}$ of the line. \begin{remark}\label{finsler1} In \cite{tayebi}, it is claimed that there are no dually flat proper doubly warped Finsler manifolds. Let us restrict their argument to Riemannian manifolds. For two manifolds $M_1$ and $M_2$ and their doubly warped product $(M_1\times M_2,G)$, let $(x_i)$ and $(u_\alpha)$ be coordinates of $M_1$ and $M_2$, respectively. Then their claim asserts that the coordinates $((x_i),(u_\alpha))$ on $(M_1\times M_2,G)$ cannot be affine coordinates for any dually flat connections on $M_1\times M_2$ unless $G$ is the product metric. \end{remark} To further understand the relation between affine coordinates and their connections, let us examine the $2\times 2$ BKM cone (recall Subsection 4.4). Let $\bar{\nabla}$ be an affine connection whose affine coordinates are $\{a,b,c,d\}$, with which $2\times 2$ matrices are expressed as \begin{equation*} \begin{pmatrix} a & c + ib\\ c - ib & d \end{pmatrix}. \end{equation*} This $\bar{\nabla}$ is a dually flat affine connection on the BKM cone. Let $\bar{D}$ be an affine connection whose affine coordinates are $\{t,\alpha,\beta,\gamma\}$, with which $2\times 2$ matrices are expressed as \begin{equation*} t\begin{pmatrix} \alpha & \beta + i\gamma\\ \beta - i\gamma & 1-\alpha \end{pmatrix}.
\end{equation*} The relations between these coordinates are \begin{equation*} t = a + d,\quad\alpha = \frac{a}{a + d}, \quad 1-\alpha = \frac{d}{a + d}, \end{equation*} \begin{equation*} \frac{\partial}{\partial\alpha} = \begin{pmatrix} t & 0\\ 0 & -t \end{pmatrix} = (a+d)\left(\frac{\partial}{\partial a} - \frac{\partial}{\partial d}\right), \end{equation*} \begin{equation*} \frac{\partial}{\partial t} = \begin{pmatrix} \alpha & 0\\ 0 & 1-\alpha \end{pmatrix} = \frac{a}{a + d} \frac{\partial}{\partial a} + \frac{d}{a + d}\frac{\partial}{\partial d}. \end{equation*} Using these relations, we calculate \begin{eqnarray*} \bar{\nabla}_{\frac{\partial}{\partial\alpha}}\frac{\partial}{\partial t} &=& \bar{\nabla}_{(a + d)\left(\frac{\partial}{\partial a}-\frac{\partial}{\partial d}\right)}\left(\frac{a}{a + d} \frac{\partial}{\partial a} + \frac{d}{a + d}\frac{\partial}{\partial d}\right)\nonumber\\ &=& (a+d)\left(\frac{\partial}{\partial a}-\frac{\partial}{\partial d}\right)\left(\frac{a}{a + d}\right)\frac{\partial}{\partial a} + (a+d)\left(\frac{\partial}{\partial a}-\frac{\partial}{\partial d}\right)\left(\frac{d}{a + d}\right)\frac{\partial}{\partial d}\nonumber\\ &=& (a+d)\left(\frac{d}{(a + d)^2}+\frac{a}{(a+d)^2}\right)\frac{\partial}{\partial a} + (a+d)\left(-\frac{d}{(a+d)^2}-\frac{a}{(a+d)^2}\right)\frac{\partial}{\partial d}\nonumber\\ &=& \frac{\partial}{\partial a} -\frac{\partial}{\partial d}\neq 0. \end{eqnarray*} On the other hand, \begin{equation*} \bar{D}_{\frac{\partial}{\partial\alpha}}\frac{\partial}{\partial t} = 0. \end{equation*} Hence, $\bar{\nabla}$ and $\bar{D}$ are different. \section{Appendix} The main contributions of this appendix are the following two points. \begin{itemize} \item We study an example of a warped product whose dually flat connections are not realized as the $(\pm 1)$-connections among the $\alpha$-connections. \item We study dually flat connections compatible with the structure of a two-dimensional warped product.
\end{itemize} \subsection{Preliminaries for elliptic distributions} As described in \cite{elliptic}, a $p$-dimensional random variable $X$ is said to have an elliptic distribution with parameters $\mu^{\mathrm{T}} = (\mu_1,\cdots, \mu_p)$ and $\Psi$, a $p\times p$ positive definite matrix, if its density is \begin{equation*} p_h(x|\mu,\Psi) = \frac{h\{(x-\mu)^{\mathrm{T}}\Psi^{-1}(x-\mu)\}}{\sqrt{\det\Psi}} \end{equation*} for some function $h$. We say that $X$ has an $EL_p^h(\mu,\Psi)$ distribution. We consider the class of one-dimensional elliptic distributions $EL_1^h(\mu,\sigma^2)$, where $\theta = (\mu,\sigma)$. We let $Z$ be an $EL_1^h(0,1)$ random variable and set $W = \{d \log h(Z^2)\}/d(Z^2)$. We also set \begin{equation*} a = E(Z^2W^2),\quad b = E(Z^4W^2), \quad d = E(Z^6W^3). \end{equation*} The Fisher metric of elliptic distributions is \begin{eqnarray}\label{fisher} ds^2 = \frac{4a d\mu^2 + (4b-1)d\sigma^2}{\sigma^2}. \end{eqnarray} We denote the Fisher metric $ds^2$ by $G_F$. \begin{example} The Gaussian, Cauchy, and Student's $t$ distributions are examples of elliptic distributions. Their constants are given in Table \ref{table:data_type}, as calculated in \cite{elliptic}. \begin{table}[hbtp] \caption{Important constants} \label{table:data_type} \centering \begin{tabular}{lccc} \hline & Gauss & Cauchy & Student's t \\ \hline \hline $a$ & $\frac{1}{4}$ & $\frac{1}{8}$ & $\frac{k + 1}{4(k + 3)}$ \\ $b$ & $\frac{3}{4}$ & $\frac{3}{8}$ & $\frac{3(k + 1)}{4(k + 3)}$\\ $d$ & $\frac{-15}{8}$ & $-\frac{5}{16}$ & $-\frac{15(k + 1)^2}{8(k + 3)(k + 5)}$\\ \hline \end{tabular} \end{table} \end{example} \subsection{Elliptic distributions as warped products} We set \begin{equation*} t := \sqrt{4b-1}\log\sigma.
\end{equation*} Since \begin{equation*} G_F\left(\frac{\partial}{\partial t},\frac{\partial}{\partial t}\right) = G_F\left(\frac{\partial\sigma}{\partial t}\frac{\partial}{\partial \sigma},\frac{\partial\sigma}{\partial t}\frac{\partial}{\partial \sigma}\right) = \frac{\sigma^2}{4b-1}G_F\left(\frac{\partial}{\partial\sigma},\frac{\partial}{\partial\sigma}\right) = 1, \end{equation*} we have \begin{equation*} G_F = dt^2 + f(t)^2d\mu^2, \end{equation*} where \begin{equation*} f(t) := \sqrt{4a}\exp\left(-\frac{t}{\sqrt{4b-1}}\right). \end{equation*} \subsection{Calculations of $R(V,X,X,V)$} For the parameter space $M$ of elliptic distributions, we introduce the following assumptions. \begin{assume}\label{assume_appendix} Let $B = \mathbb{R}_{>0}$ with the Euclidean metric $g_B$ such that $g_B\left(\frac{\partial}{\partial t},\frac{\partial}{\partial t}\right) = 1$, let $f(t) = \sqrt{4a}\exp\left(-\frac{t}{\sqrt{4b-1}}\right)$, let $F = \mathbb{R}$ with the Euclidean metric $g_\mu$ such that $g_\mu\left(\frac{\partial}{\partial\mu},\frac{\partial}{\partial\mu}\right) = 1$, and let $\widetilde{\nabla},\widetilde{\nabla}^*$ be dually flat affine connections on $F$. For an arbitrary connection $D$ on $B\times_f F$, we assume that $D$ satisfies \begin{itemize} \item $D_XY$ is horizontal, i.e.\ $D_XY\in\mathcal{L}(B)$, for any $X,Y\in\mathcal{L}(B)$, \item $\mbox{Ver }(D_VW) = \mbox{Lift }(\widetilde{\nabla}_VW)$ for any $V,W\in\mathcal{L}(F)$. \end{itemize} We also assume that $R = 0$, where $R$ is the curvature of $M$ with respect to $D$. \end{assume} We next calculate the curvature $R$ under Assumption \ref{assume_appendix}.
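Before computing the curvature, the reparametrization above can be verified with a short symbolic computation (our own check, not part of the manuscript):

```python
import sympy as sp

# With t = sqrt(4b-1) log(sigma), the Fisher metric
# (4a dmu^2 + (4b-1) dsigma^2)/sigma^2 should become dt^2 + f(t)^2 dmu^2,
# where f(t) = sqrt(4a) exp(-t/sqrt(4b-1)).
a, b, t = sp.symbols('a b t', positive=True)
c = sp.sqrt(4*b - 1)
sigma = sp.exp(t / c)                    # inverse of t = sqrt(4b-1) log(sigma)
dsigma_dt = sp.diff(sigma, t)
# coefficient of dt^2 after the substitution:
coeff_dt2 = sp.simplify((4*b - 1) / sigma**2 * dsigma_dt**2)
# coefficient of dmu^2, compared with f(t)^2:
f = sp.sqrt(4*a) * sp.exp(-t / c)
coeff_dmu2 = sp.simplify(4*a / sigma**2 - f**2)
print(coeff_dt2, coeff_dmu2)   # 1 0
```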
We set the notations $X = \frac{\partial}{\partial t}, V = \frac{\partial}{\partial \mu}$ and $k,l$ as \begin{equation*} D_{\frac{\partial}{\partial t}}\frac{\partial}{\partial t} = k(t)\frac{\partial}{\partial t} \end{equation*} and \begin{equation*} P_XV = l(t,\mu)\frac{\partial}{\partial \mu}, \end{equation*} where $P_XV := \frac{1}{2}(D_XV - D^*_XV)$. Since \begin{equation*} \frac{Xf}{f} = \frac{\sqrt{4a}\left(-\frac{1}{\sqrt{4b-1}}\right)\exp\left(-\frac{t}{\sqrt{4b-1}}\right)}{\sqrt{4a}\exp\left(-\frac{t}{\sqrt{4b-1}}\right)} = -\frac{1}{\sqrt{4b-1}}, \end{equation*} we have \begin{eqnarray*} G_F( D_VD_XX,V) &=& k(t)\left\{-\frac{1}{\sqrt{4b-1}}f(t)^2 + l(t,\mu)f(t)^2\right\}, \end{eqnarray*} \begin{eqnarray*} G_F( D^*_VD^*_XX,V) &=& -k(t)\left\{ -\frac{1}{\sqrt{4b-1}}f(t)^2 - l(t,\mu)f(t)^2\right\}, \end{eqnarray*} \begin{eqnarray*} G_F( D_XD_VX,V) &=& XG_F( \frac{Xf}{f}V + P_XV,V) - G_F( D_XV,D^*_XV)\\ &=& \frac{\partial}{\partial t}\left\{\left(-\frac{1}{\sqrt{4b-1}}f(t)^2 + l(t,\mu)f(t)^2\right)\right\} - \left\{\left(\frac{Xf}{f}\right)^2G_F( V,V)- G_F( P_XV,P_XV)\right\}\\ &=& f(t)^2\left\{\frac{1}{4b-1} + \partial_t l - \frac{2l}{\sqrt{4b-1}} + l^2\right\} \end{eqnarray*} and \begin{eqnarray*} G_F (D^*_XD^*_VX, V ) &=& f^2\left\{\frac{1}{4b-1} - \partial_tl + \frac{2l}{\sqrt{4b-1}} + l^2\right\}. \end{eqnarray*} Hence, we have \begin{eqnarray*} R(V,X,X,V) &=& G_F( D_VD_XX,V) - G_F( D_XD_VX , V)\\ &=& f(t)^2\left\{-\frac{k(t)}{\sqrt{4b-1}} + kl - \frac{1}{4b-1} - \partial_t l + \frac{2l}{\sqrt{4b-1}} - l^2\right\} \end{eqnarray*} and \begin{eqnarray*} R^*(V,X,X,V) &=& G_F( D^*_VD^*_XX,V)- G_F( D^*_XD^*_VX, V )\\ &=& f(t)^2\left\{ \frac{k(t)}{\sqrt{4b-1}} + kl - \frac{1}{4b-1} + \partial_t l - \frac{2l}{\sqrt{4b-1}} - l^2\right\}. 
\end{eqnarray*} Since we now consider the case $R = R^* = 0$, we have \begin{eqnarray*} &&f(t)^2\left\{kl - \frac{1}{4b-1} - l^2\right\} = 0,\\ &&f(t)^2\left\{-\frac{k(t)}{\sqrt{4b-1}} - \partial_t l + \frac{2l}{\sqrt{4b-1}}\right\} = 0. \end{eqnarray*} Summarizing the above arguments, we have the following theorem. \begin{theorem}\label{constant_connection} For any $\gamma\in\mathbb{R}$, we define affine connections $D, D^*$ on $M$ by \begin{eqnarray*} &&D_{\frac{\partial}{\partial t}}\frac{\partial}{\partial t} = k\frac{\partial}{\partial t},\\ &&D_{\frac{\partial}{\partial t}}\frac{\partial}{\partial\mu} = \nabla_{\frac{\partial}{\partial t}}\frac{\partial}{\partial\mu} + l\frac{\partial}{\partial\mu},\\ &&D_{\frac{\partial}{\partial\mu}}\frac{\partial}{\partial\mu} = \nabla_{\frac{\partial}{\partial\mu}}\frac{\partial}{\partial\mu} + \gamma\frac{\partial}{\partial\mu} + \frac{l}{2\sigma^2}\frac{\partial}{\partial t}, \end{eqnarray*} where $\nabla$ is the Levi-Civita connection and $k,l$ are arbitrary functions satisfying \begin{eqnarray}\label{difeq} \begin{cases} kl - \frac{1}{4b-1} - l^2 = 0,\\ -\frac{k(t)}{\sqrt{4b-1}} - \partial_t l + \frac{2l}{\sqrt{4b-1}} = 0. \end{cases} \end{eqnarray} Then $(M,D,D^*)$ satisfies Assumption \ref{assume_appendix}. \end{theorem} \begin{remark} In Section 5, we consider the Takano Gaussian space $(L^{n + 1},G_T,\nabla^{(\alpha)})$.
It was shown in Lemma \ref{prop5_4} and Theorem \ref{takano_2} that the dually flat connections $D,D^*$ on $(L^{n + 1},G_T)$ compatible with the warped product structure satisfy the following two equations: \begin{equation}\label{takano_connection_1} \mbox{Hor} P_{\frac{\partial}{\partial m_i}}\frac{\partial}{\partial m_i} = \frac{1}{2n\sigma}\frac{\partial}{\partial\sigma} \quad \mbox{or} \quad -\frac{1}{2n\sigma}\frac{\partial}{\partial\sigma}, \end{equation} \begin{equation}\label{takano_connection_2} D_{\frac{\partial}{\partial\sigma}}\frac{\partial}{\partial\sigma} = \frac{1}{\sigma}\frac{\partial}{\partial \sigma} \quad \mbox{or}\quad -\frac{3}{\sigma}\frac{\partial}{\partial\sigma}. \end{equation} Since the Fisher metric $G_T$ of the Takano Gaussian space is expressed as \begin{equation*} G_T = \frac{dm_1^2 + \cdots + dm_n^2 + 2nd\sigma^2}{\sigma^2}, \end{equation*} if we set the parameter $t = \sqrt{2n}\log \sigma$, we have \begin{equation*} G_T\left(\frac{\partial}{\partial t},\frac{\partial}{\partial t}\right) = G_T\left(\frac{\partial\sigma}{\partial t}\frac{\partial}{\partial\sigma},\frac{\partial\sigma}{\partial t}\frac{\partial}{\partial \sigma}\right) = \frac{\sigma^2}{2n}\frac{2n}{\sigma^2} = 1. \end{equation*} Hence, the metric is \begin{equation*} G_T = dt^2 + f_T(t)^2 \{dm_1^2 + \cdots + dm_n^2\}, \end{equation*} where \begin{equation*} f_T(t) = \exp\left(-\frac{t}{\sqrt{2n}}\right). \end{equation*} From (\ref{takano_connection_1}) and (\ref{takano_connection_2}), we have \begin{equation*} k = \sqrt{\frac{2}{n}},\quad l = \frac{1}{\sqrt{2n}}. \end{equation*} When $n = 1$, $a = \frac{1}{4}$ and $b = \frac{3}{4}$, this is the only solution of (\ref{difeq}) with $l$ constant.
\end{remark} \subsection{$\alpha$-connections and dually flat connections} Using calculations for $\alpha$-connections on elliptic distributions $\nabla^{(\alpha)}$ in \cite{elliptic}, we have \begin{eqnarray*} \nabla^{(\alpha)}_{\frac{\partial}{\partial\sigma}}\frac{\partial}{\partial\sigma} &=& \frac{1-4b + \alpha(6b + 4d - 1)}{(4b-1)\sigma}\frac{\partial}{\partial\sigma}\\ &=& \frac{1-4b + \alpha(6b + 4d - 1)}{(4b-1)\sigma}\frac{\partial t}{\partial\sigma}\frac{\partial}{\partial t}. \end{eqnarray*} On the other hand, since $\frac{\partial t}{\partial\sigma} = \frac{\sqrt{4b-1}}{\sigma}$, we have \begin{eqnarray*} \nabla^{(\alpha)}_{\frac{\partial}{\partial\sigma}}\frac{\partial}{\partial\sigma} &=& \nabla^{(\alpha)}_{\frac{\partial t}{\partial\sigma}\frac{\partial}{\partial t}}\frac{\partial t}{\partial \sigma}\frac{\partial}{\partial t}\\ &=& \frac{\partial t}{\partial \sigma}\sqrt{4b-1}\frac{(-1)}{\sigma^2}\frac{\partial\sigma}{\partial t}\frac{\partial }{\partial t} + \left(\frac{\partial t}{\partial \sigma}\right)^2 \nabla^{(\alpha)}_{\frac{\partial}{\partial t}}\frac{\partial}{\partial t} . \end{eqnarray*} Comparing the two equations above, we have \begin{equation*} \nabla^{(\alpha)}_{\frac{\partial}{\partial t}}\frac{\partial}{\partial t} = \frac{\alpha(6b + 4d - 1)}{(4b-1)^{\frac{3}{2}}}\frac{\partial}{\partial t}. \end{equation*} Up to the simultaneous sign change $(k,l)\mapsto(-k,-l)$, which exchanges $D$ and $D^*$, the only solution of (\ref{difeq}) with $l$ constant is \begin{equation}\label{only_constant_sol} k = \frac{2}{\sqrt{4b-1}},\quad l= \frac{1}{\sqrt{4b-1}}. \end{equation} Setting $k,l$ as in (\ref{only_constant_sol}), we obtain dually flat connections compatible with the warped product structure. For the dually flat connections $D,D^*$ constructed by Theorem \ref{constant_connection} using (\ref{only_constant_sol}), we have \begin{equation*} \frac{1}{2}(D_{\partial_t}\partial_t - D^*_{\partial_t}\partial_t) = \frac{2}{\sqrt{4b-1}}\frac{\partial}{\partial t}.
\end{equation*} On the other hand, for the $\alpha$-connections of elliptic distributions $\nabla^{(\alpha)}$, we have \begin{equation*} \frac{1}{2}(\nabla^{(-\alpha)}_{\partial_t}\partial_t - \nabla^{(\alpha)}_{\partial_t}\partial_t) = \frac{-\alpha(6b + 4d - 1)}{(4b-1)^{\frac{3}{2}}}\frac{\partial}{\partial t}. \end{equation*} \begin{example} We compare the dually flat connections on the line $B = \mathbb{R}_{>0}$ constructed by Theorem \ref{constant_connection} with the dually flat connections among the $\alpha$-connections in Table \ref{table:compare}. Note that the $\alpha$-connections of the Cauchy distribution are not dually flat \cite{elliptic}, while those of Student's $t$ distributions are dually flat when $\alpha = \pm{\frac{k + 5}{k-1}}$. \begin{table}[hbtp] \caption{Comparing dually flat connections} \label{table:compare} \centering \begin{tabular}{llll} \hline Connections on $B = \mathbb{R}_{>0}$ & Gauss & Cauchy & Student's t \\ \hline \hline $\frac{2}{\sqrt{4b-1}}\partial_t$ & $\sqrt{2}\partial_t$ & $2\sqrt{2}\partial_t$ & $\sqrt{\frac{2(k+3)}{k}}\partial_t$\\ $\frac{-\alpha(6b + 4d - 1)}{(4b-1)^{\frac{3}{2}}}\partial_t$ &$\sqrt{2}\partial_t$ $(\alpha = 1)$ & none & $\sqrt{\frac{2(k+3)}{k}}\partial_t$ $(\alpha = \frac{k + 5}{k-1})$\\ \hline \end{tabular} \end{table} \end{example} Although every dually flat connection compatible with the warped product structure that appeared before this appendix was realized as one of the $\alpha$-connections, this appendix shows that this is not always the case. \begin{remark} In \cite{furuhata}, (4.2), $\alpha$-connections are defined with respect to a metric $g = \frac{dx^2 + \lambda^2 dy^2}{y^2}$ ($\lambda > 0$) on the upper half plane $\{(x,y) | x\in\mathbb{R}, y > 0\}$. By direct calculations, we see that these $\alpha$-connections are also compatible with the warped product structure and are dually flat when $\alpha = \pm{1}$. \end{remark} \end{document}
\begin{document} \title[A lower bound for $\chi (\mathcal O_S)$] {A lower bound for $\chi (\mathcal O_S)$} \author{Vincenzo Di Gennaro } \address{Universit\`a di Roma \lq\lq Tor Vergata\rq\rq, Dipartimento di Matematica, Via della Ricerca Scientifica, 00133 Roma, Italy.} \email{[email protected]} \abstract Let $(S,\mathcal L)$ be a smooth, irreducible, projective, complex surface, polarized by a very ample line bundle $\mathcal L$ of degree $d > 25$. In this paper we prove that $\chi (\mathcal O_S)\geq -\frac{1}{8}d(d-6)$. The bound is sharp, and $\chi (\mathcal O_S)=-\frac{1}{8}d(d-6)$ if and only if $d$ is even, the linear system $|H^0(S,\mathcal L)|$ embeds $S$ in a smooth rational normal scroll $T\subset \mathbb P^5$ of dimension $3$, and here, as a divisor, $S$ is linearly equivalent to $\frac{d}{2}Q$, where $Q$ is a quadric on $T$. Moreover, this is equivalent to the fact that the general hyperplane section $H\in |H^0(S,\mathcal L)|$ of $S$ is the projection of a curve $C$ contained in the Veronese surface $V\subseteq \mathbb P^5$, from a point $x\in V\backslash C$. \noindent {\it{Keywords}}: Projective surface, Castelnuovo-Halphen's Theory, Rational normal scroll, Veronese surface. \noindent {\it{MSC2010}}\,: Primary 14J99; Secondary 14M20, 14N15, 51N35. \endabstract \maketitle \section{Introduction} In \cite{DGF}, one proves a sharp lower bound for the self-intersection $K^2_S$ of the canonical bundle of a smooth, projective, complex surface $S$, polarized by a very ample line bundle $\mathcal L$, in terms of its degree $d={\text{deg}}\,\mathcal L$, assuming $d>35$. Refining the line of the proof in \cite{DGF}, in the present paper we deduce a similar result for the Euler characteristic $\chi(\mathcal O_S)$ of $S$ \cite[p. 2]{BV}, in the range $d>25$. 
More precisely, we prove the following: \begin{theorem}\label{lbound} Let $(S,\mathcal L)$ be a smooth, irreducible, projective, complex surface, polarized by a very ample line bundle $\mathcal L$ of degree $d > 25$. Then: $$ \chi (\mathcal O_S)\geq -\frac{1}{8}d(d-6). $$ The bound is sharp, and the following properties are equivalent. (i) $\chi (\mathcal O_S)= -\frac{1}{8}d(d-6)$; (ii) $h^0(S,\mathcal L)=6$, and the linear system $|H^0(S,\mathcal L)|$ embeds $S$ in $\mathbb P^5$ as a scroll with sectional genus $g=\frac{1}{8}d(d-6)+1$; (iii) $h^0(S,\mathcal L)=6$, $d$ is even, and the linear system $|H^0(S,\mathcal L)|$ embeds $S$ in a smooth rational normal scroll $T\subset \mathbb P^5$ of dimension $3$, and here $S$ is linearly equivalent to $\frac{d}{2}(H_T-W_T)$, where $H_T$ is the hyperplane class of $T$, and $W_T$ the ruling (i.e. $S$ is linearly equivalent to an integer multiple of a smooth quadric $Q\subset T$). \end{theorem} By Enriques' classification, one knows that if $S$ is unruled or rational, then $\chi (\mathcal O_S)\geq 0$. Hence, Theorem \ref{lbound} essentially concerns irrational ruled surfaces. In the range $d>35$, the family of extremal surfaces for $\chi (\mathcal O_S)$ is exactly the same as for $K^2_S$. We point out that there is a relationship between this family and the Veronese surface. In fact, one has the following: \begin{corollary}\label{Veronese} Let $S\subseteq \mathbb P^r$ be a nondegenerate, smooth, irreducible, projective, complex surface, of degree $d > 25$. Let $L\subseteq \mathbb P^r$ be a general hyperplane. Then $\chi(\mathcal O_S)=-\frac{1}{8}d(d-6)$ if and only if $r=5$, and there is a curve $C$ in the Veronese surface $V\subseteq \mathbb P^5$ and a point $x\in V\backslash C$ such that the general hyperplane section $S\cap L$ of $S$ is the projection $p_x(C)\subseteq L$ of $C$ in $L\cong\mathbb P^4$, from the point $x$. \end{corollary} In particular, $S\cap L$ is not linearly normal, whereas $S$ is.
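As a quick numerical cross-check of the statements above (our own check, not part of the paper; the intersection numbers $H_T^3 = 3$, $H_T^2\cdot W_T = 1$ on the degree-$3$ threefold $T$ are an assumption, modelled on the analogous surface computation later in the paper):

```python
import sympy as sp

# A scroll satisfies chi(O_S) = 1 - g, so sectional genus g = d(d-6)/8 + 1
# attains exactly the extremal value chi = -d(d-6)/8. Assuming H_T^3 = 3 and
# H_T^2.W_T = 1 on T, the class (d/2)(H_T - W_T) has degree d, and its genus
# agrees with that of a plane curve of degree d/2 (the Veronese picture).
d = sp.symbols('d', positive=True)
g = d*(d - 6)/8 + 1
chi = 1 - g
assert sp.simplify(chi + d*(d - 6)/8) == 0               # chi attains the bound
H3, H2W = 3, 1                                           # assumed intersection numbers
assert sp.expand(sp.Rational(1, 2)*d*(H3 - H2W)) == d    # degree of (d/2)(H_T - W_T)
delta = d/2
plane_genus = (delta - 1)*(delta - 2)/2                  # genus of a plane curve of degree d/2
assert sp.simplify(plane_genus - g) == 0
print("extremal data consistent")
```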
\section{Proof of Theorem \ref{lbound}} \begin{remark}\label{k8} $(i)$ We say that $S\subset \mathbb P^r$ is a {\it scroll} if $S$ is a $\mathbb P^1$-bundle over a smooth curve, and the restriction of $\mathcal O_S(1)$ to a fibre is $\mathcal O_{\mathbb P^1}(1)$. In particular, $S$ is a geometrically ruled surface, and therefore $\chi (\mathcal O_S)= \frac{1}{8}K^2_S$ \cite[Proposition III.21]{BV}. $(ii)$ By Enriques' classification \cite[Theorem X.4 and Proposition III.21]{BV}, one knows that if $S$ is unruled or rational, then $\chi (\mathcal O_S)\geq 0$, and if $S$ is ruled with irregularity $>0$, then $\chi (\mathcal O_S)\geq \frac{1}{8}K^2_S$. Therefore, taking into account the previous remark, when $d>35$, Theorem \ref{lbound} follows from \cite[Theorem 1.1]{DGF}. In order to examine the range $25<d\leq 35$, we are going to refine the line of the argument in the proof of \cite[Theorem 1.1]{DGF}. $(iii)$ When $d=2\delta$ is even, $\frac{1}{8}d(d-6)+1$ is the genus of a plane curve of degree $\delta$, and the genus of a curve of degree $d$ lying on the Veronese surface. \end{remark} Put $r+1:=h^0(S,\mathcal L)$. Therefore, $|H^0(S,\mathcal L)|$ embeds $S$ in $\mathbb P^r$. Let $H\subseteq \mathbb P^{r-1}$ be the general hyperplane section of $S$, so that $\mathcal L\cong \mathcal O_S(H)$. We denote by $g$ the genus of $H$. If $2\leq r\leq 3$, then $\chi (\mathcal O_S)\geq 1$. Therefore, we may assume $r\geq 4$. {\bf The case $r=4$}. We first examine the case $r=4$. In this case we only have to prove that, for $d>25$, one has $\chi (\mathcal O_S)> -\frac{1}{8}d(d-6)$. We may assume that $S$ is an irrational ruled surface, so $K^2_S\leq 8\chi (\mathcal O_S)$ (compare with Remark \ref{k8}, $(ii)$ above). We argue by contradiction, and assume also that \begin{equation}\label{sest} \chi (\mathcal O_S)\leq -\frac{1}{8}d(d-6). \end{equation} We are going to prove that this assumption implies $d\leq 25$, in contrast with our hypothesis $d>25$.
By the double point formula: $$ d(d-5)-10(g-1)+12\chi(\mathcal O_S)=2K^2_S, $$ and $K^2_S\leq 8\chi (\mathcal O_S)$, we get: $$ d(d-5)-10(g-1)\leq 4\chi(\mathcal O_S). $$ And from $\chi (\mathcal O_S)\leq -\frac{1}{8}d(d-6)$ we obtain \begin{equation}\label{fest} 10g\geq \frac{3}{2}d^2-8d+10. \end{equation} We now distinguish two cases, according to whether or not $S$ is contained in a hypersurface of degree $<5$. First suppose that $S$ is not contained in a hypersurface of $\mathbb P^4$ of degree $<5$. Since $d > 16$, by Roth's Theorem (\cite[p. 152]{R}, \cite[p. 2, (C)]{EP}), $H$ is not contained in a surface of $\mathbb P^3$ of degree $<5$. Using Halphen's bound \cite{GP}, we deduce that $$ g\leq \frac{d^2}{10} + \frac{d}{2}+1-\frac{2}{5}(\epsilon+1)(4-\epsilon), $$ where $d-1=5m+\epsilon$, $0\leq \epsilon<5$. It follows that $$ \frac{3}{2}d^2-8d+10\leq \,10 g\,\leq d^2+5d+10\left(1-\frac{2}{5}(\epsilon+1)(4-\epsilon)\right). $$ This implies that $d\leq 25$, in contrast with our hypothesis $d>25$. In the second case, assume that $S$ is contained in an irreducible and reduced hypersurface of degree $s\leq 4$. When $s\in\{2,3\}$, one knows that, for $d>12$, $S$ is of general type \cite[p. 213]{BF}. Therefore, we only have to examine the case $s=4$. In this case $H$ is contained in a surface of $\mathbb P^3$ of degree $4$. Since $d>12$, by Bezout's Theorem, $H$ is not contained in a surface of $\mathbb P^3$ of degree $<4$. Using Halphen's bound \cite{GP}, and \cite[Lemme 1]{EP}, we get: $$ \frac{d^2}{8}-\frac{9d}{8}+1\leq \, g\,\leq \frac{d^2}{8}+1. $$ Hence, there exists a rational number $0\leq x\leq 9$ such that $$ g=\frac{d^2}{8}+d\left(\frac{x-9}{8}\right)+1. $$ If $0\leq x\leq \frac{15}{2}$, then $g\leq \frac{d^2}{8}-\frac{3}{16}d+1$, and from (\ref{fest}) we get $$ \frac{3}{20}d^2-\frac{4}{5}d+1\,\leq g\,\leq \frac{d^2}{8}-\frac{3}{16}d+1. $$ It follows that $d\leq 24$, in contrast with our hypothesis $d>25$. Assume $\frac{15}{2}< x\leq 9$.
Hence, $$ \left(\frac{d^2}{8}+1\right)-g= -d\left(\frac{x-9}{8}\right)<\frac{3}{16}d. $$ By \cite[proof of Proposition 2, and formula (2.2)]{D}, we have $$ \chi(\mathcal O_S)\geq 1+ \frac{d^3}{96}-\frac{d^2}{16}-\frac{5d}{3}-\frac{349}{16}-(d-3)\left[\left(\frac{d^2}{8}+1\right)-g\right] $$ $$ >1+ \frac{d^3}{96}-\frac{d^2}{16}-\frac{5d}{3}-\frac{349}{16}-(d-3)\frac{3}{16}d = \frac{d^3}{96}-\frac{d^2}{4}-\frac{53}{48}d-\frac{333}{16}. $$ Combining with (\ref{sest}), we get $$ \frac{d^3}{96}-\frac{d^2}{4}-\frac{53}{48}d-\frac{333}{16}+\frac{1}{8}d(d-6)<0, $$ i.e. $$ d^3-12d^2-178d-1998<0. $$ It follows that $d\leq 23$, in contrast with our hypothesis $d>25$. This concludes the analysis of the case $r=4$. {\bf The case $r\geq 5$}. When $r\geq 5$, by \cite[Remark 2.1]{DGF}, we know that, for $d>5$, one has $K^2_S>-d(d-6)$, except when $r=5$, and the surface $S$ is a scroll, $K^2_S=8\chi (\mathcal O_S)=8(1-g)$, and \begin{equation}\label{bound} g=\frac{1}{8}d^2-\frac{3}{4}d+\frac{(5-\epsilon)(\epsilon+1)}{8}, \end{equation} with $d-1=4m+\epsilon$, $0<\epsilon\leq 3$. In this case, by \cite[pp. 73-76]{DGF}, we know that, for $d>30$, $S$ is contained in a smooth rational normal scroll of $\mathbb P^5$ of dimension $3$. Taking into account that we may assume $K^2_S\leq 8\chi (\mathcal O_S)$ (compare with Remark \ref{k8}, $(i)$ and $(ii)$), at this point Theorem \ref{lbound} follows from \cite[Proposition 2.2]{DGF}, when $d>30$. In order to examine the remaining cases $26\leq d \leq 30$, we refine the analysis appearing in \cite{DGF}. In fact, we are going to prove that, assuming that $r=5$, that $S$ is a scroll, and that (\ref{bound}) holds, $S$ is contained in a smooth rational normal scroll of $\mathbb P^5$ of dimension $3$ also when $26\leq d \leq 30$. Then we may conclude as before, because \cite[Proposition 2.2]{DGF} holds true for $d\geq 18$.
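The two elementary estimates that close the case $r=4$ above (the quadratic comparison forcing $d\leq 24$ and the cubic inequality forcing $d\leq 23$) can be confirmed numerically; this is our own check, not part of the proof:

```python
import sympy as sp

d = sp.symbols('d')
# Quadratic chain: (3/20)d^2 - (4/5)d + 1 <= (1/8)d^2 - (3/16)d + 1
# (the constants +1 cancel) forces d <= 24.
quad = sp.Rational(3, 20)*d**2 - sp.Rational(4, 5)*d - (d**2/8 - sp.Rational(3, 16)*d)
assert max(n for n in range(1, 200) if quad.subs(d, n) <= 0) == 24
# Cubic inequality: d^3 - 12 d^2 - 178 d - 1998 < 0 forces d <= 23.
cubic = d**3 - 12*d**2 - 178*d - 1998
assert max(n for n in range(1, 200) if cubic.subs(d, n) < 0) == 23
print("both bounds contradict d > 25")
```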
First, observe that if $S$ is contained in a threefold $T\subset \mathbb P^5$ of dimension $3$ and minimal degree $3$, then $T$ is necessarily a {\it smooth} rational normal scroll \cite[p. 76]{DGF}. Moreover, observe that we may apply the same argument as in \cite[p. 75-76]{DGF} in order to exclude the case that $S$ is contained in a threefold of degree $4$. In fact, the argument works for $d>24$ \cite[p. 76, first line after formula (13)]{DGF}. In conclusion, assuming that $r=5$, that $S$ is a scroll, and that (\ref{bound}) holds, it remains to exclude the possibility that $S$ is not contained in a threefold of degree $<5$ when $26\leq d \leq 30$. Assume $S$ is not contained in a threefold of degree $<5$. Denote by $\Gamma\subset \mathbb P^3$ the general hyperplane section of $H$. Recall that $26\leq d \leq 30$. $\bullet$ Case I: $h^0(\mathbb P^3,\mathcal I_{\Gamma}(2))\geq 2$. This is impossible. In fact, if $d>4$, by monodromy \cite[Proposition 2.1]{CCD}, $\Gamma$ would be contained in a reduced and irreducible space curve of degree $\leq 4$, and so, for $d>20$, $S$ would be contained in a threefold of degree $\leq 4$ \cite[Theorem (0.2)]{CC}. $\bullet$ Case II: $h^0(\mathbb P^3,\mathcal I_{\Gamma}(2))=1$ and $h^0(\mathbb P^3,\mathcal I_{\Gamma}(3))>4$. As before, if $d>6$, by monodromy, $\Gamma$ is contained in a reduced and irreducible space curve $X$ of degree $\deg(X)\leq 6$. Again as before, if $\deg(X)\leq 4$, then $S$ is contained in a threefold of degree $\leq 4$. So we may assume $5\leq \deg(X)\leq 6$. Since $d\geq 26$, by Bezout's Theorem we have $h_{\Gamma}(i)=h_X(i)$ for all $i\leq 4$. Let $X'$ be the general plane section of $X$. Since $h_X(i)\geq \sum_{j=0}^{i}h_{X'}(j)$, we have $h_X(3)\geq 14$ and $h_X(4)\geq 19$ \cite[pp. 81-87]{EH}. Therefore, when $d\geq 26$, taking into account \cite[Corollary (3.5)]{EH}, we get: $$ h_{\Gamma}(1)=4,\, h_{\Gamma}(2)=9,\, h_{\Gamma}(3)\geq 14,\, h_{\Gamma}(4)\geq 19, $$ $$ h_{\Gamma}(5)\geq 22, \, h_{\Gamma}(6)\geq \min\{d,\, 27\},\, h_{\Gamma}(7)=d.
$$ It follows that: $$ p_a(H)\leq \sum_{i=1}^{+\infty}(d-h_{\Gamma}(i))\leq (d-4)+(d-9)+(d-14)+(d-19)+(d-22)+3=5d-65, $$ which is $<\frac{1}{8}d(d-6)+1$ for $d \geq 26$. This is in contrast with (\ref{bound}). $\bullet$ Case III: $h^0(\mathbb P^3,\mathcal I_{\Gamma}(2))=1$ and $h^0(\mathbb P^3,\mathcal I_{\Gamma}(3))=4$. We have: $$ h_{\Gamma}(1)=4,\, h_{\Gamma}(2)=9,\, h_{\Gamma}(3)=16, \, h_{\Gamma}(4)\geq 19, h_{\Gamma}(5)\geq 24, \, h_{\Gamma}(6)=d. $$ It follows that: $$ p_a(H)\leq \sum_{i=1}^{+\infty}(d-h_{\Gamma}(i))\leq (d-4)+(d-9)+(d-16)+(d-19)+(d-24)=5d-72, $$ which is $< \frac{1}{8}d(d-6)+1$ for $d \geq 26$. This is in contrast with (\ref{bound}). $\bullet$ Case IV: $h^0(\mathbb P^3,\mathcal I_{\Gamma}(2))=0$. We have: $$ h_{\Gamma}(1)=4,\, h_{\Gamma}(2)=10,\, h_{\Gamma}(3)\geq 13, \, h_{\Gamma}(4)\geq 19, $$ $$ h_{\Gamma}(5)\geq 22,\, h_{\Gamma}(6)\geq \min\{d,\, 28\}, \, h_{\Gamma}(7)=d. $$ It follows that: $$ p_a(H)\leq \sum_{i=1}^{+\infty}(d-h_{\Gamma}(i))\leq (d-4)+(d-10)+(d-13)+(d-19)+(d-22)+2=5d-66, $$ which is $< \frac{1}{8}d(d-6)+1$ for $d \geq 26$. This is in contrast with (\ref{bound}). This concludes the proof of Theorem \ref{lbound}. \begin{remark}\label{altro} $(i)$ Let $Q\subseteq \mathbb P^3$ be a smooth quadric, and $H\in|\mathcal O_Q(1,d-1)|$ be a smooth rational curve of degree $d$ \cite[p. 231, Exercise 5.6]{Hartshorne}. Let $S\subseteq\mathbb P^4$ be the projective cone over $H$. A computation, which we omit, proves that $$ \chi (\mathcal O_S)=1-\binom{d-1}{3}. $$ Therefore, if $S$ is singular, it may happen that $\chi (\mathcal O_S)<-\frac{1}{8}d(d-6)$. One may ask whether $1-\binom{d-1}{3}$ is a lower bound for $\chi(\mathcal O_S)$ for every {\it integral} surface. $(ii)$ Let $(S,\mathcal L)$ be a smooth surface, polarized by a very ample line bundle $\mathcal L$ of degree $d$. By Harris' bound for the geometric genus $p_g(S)$ of $S$ \cite{H}, we see that $p_g(S)\leq \binom{d-1}{3}$.
Taking into account that for a smooth surface one has $\chi(\mathcal O_S)=h^0(S,\mathcal O_S)-h^1(S,\mathcal O_S)+h^2(S,\mathcal O_S) \leq 1+h^2(S,\mathcal O_S)=1+p_g(S)$, from Theorem \ref{lbound} we deduce (the first inequality only when $d>25$): $$ -\binom{\frac{d}{2}-1}{2}\leq \chi (\mathcal O_S)\leq 1+\binom{d-1}{3}. $$ \end{remark} \section{Proof of Corollary \ref{Veronese}} $\bullet$ First, assume that $\chi(\mathcal O_S)=-\frac{1}{8}d(d-6)$. By Theorem \ref{lbound}, we know that $r=5$. Moreover, $S$ is contained in a nonsingular threefold $T\subseteq \mathbb P^5$ of minimal degree $3$. Therefore, the general hyperplane section $H=S\cap L$ of $S$ ($L\cong \mathbb P^4$ denotes the general hyperplane of $\mathbb P^5$) is contained in a smooth surface $\Sigma=T\cap L$ of $L\cong \mathbb P^4$, of minimal degree $3$. This surface $\Sigma$ is isomorphic to the blowing-up of $\mathbb P^2$ at a point, and, for a suitable point $x\in V\backslash L$, the projection of $\mathbb P^5\backslash\{x\}$ on $L\cong \mathbb P^4$ from $x$ restricts to an isomorphism $$p_x:V\backslash\{x\}\to \Sigma\backslash E,$$ where $E$ denotes the exceptional line of $\Sigma$ \cite[p. 58]{BV}. Since $S$ is linearly equivalent on $T$ to $\frac{d}{2}(H_T- W_T)$ ($H_T$ denotes the hyperplane section of $T$, and $W_T$ the ruling), it follows that $H$ is linearly equivalent on $\Sigma$ to $\frac{d}{2}(H_{\Sigma}- W_{\Sigma})$ (now $H_{\Sigma}$ denotes the hyperplane section of $\Sigma$, and $W_{\Sigma}$ the ruling of $\Sigma$). Therefore, $H$ does not meet the exceptional line $E=H_{\Sigma}- 2W_{\Sigma}$. In fact, since $H_{\Sigma}^2=3$, $H_{\Sigma}\cdot W_{\Sigma}=1$, and $W_{\Sigma}^2=0$, one has: $$(H_{\Sigma}- W_{\Sigma})\cdot (H_{\Sigma}- 2W_{\Sigma})= H_{\Sigma}^2-3H_{\Sigma}\cdot W_{\Sigma}+2W_{\Sigma}^2=0.$$ This implies that $H$ is contained in $\Sigma\backslash E$, and the assertion of Corollary \ref{Veronese} follows. 
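The intersection-theoretic computation above can be double-checked mechanically (our own check, using only the stated numbers $H_{\Sigma}^2=3$, $H_{\Sigma}\cdot W_{\Sigma}=1$, $W_{\Sigma}^2=0$):

```python
import sympy as sp

d = sp.symbols('d')

def dot(c1, c2):
    # Intersection pairing on Sigma in the basis (H_Sigma, W_Sigma),
    # with H^2 = 3, H.W = 1, W^2 = 0.
    (a1, b1), (a2, b2) = c1, c2
    return sp.expand(3*a1*a2 + (a1*b2 + a2*b1))

H = (1, 0)
HmW = (1, -1)                     # H_Sigma - W_Sigma
E = (1, -2)                       # exceptional line E = H_Sigma - 2 W_Sigma
assert dot(HmW, E) == 0           # so H ~ (d/2)(H_Sigma - W_Sigma) misses E
half_d = sp.Rational(1, 2) * d
cls = (half_d, -half_d)           # class of the hyperplane section H of S
assert dot(cls, H) == d           # its degree is d, as required
print("intersection computation confirmed")
```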
$\bullet$ Conversely, assume there exists a curve $C$ on the Veronese surface $V\subseteq \mathbb P^5$, and a point $x\in V\backslash C$, such that $H$ is the projection $p_x(C)$ of $C$ from the point $x$. In particular, $d$ is an even number, and $H$ is contained in a smooth surface $\Sigma\subseteq L\cong \mathbb P^4$ of minimal degree, and is disjoint from the exceptional line $E\subseteq \Sigma$. By \cite[Theorem (0.2)]{CC}, $S$ is contained in a threefold $T\subseteq \mathbb P^5$ of minimal degree. $T$ is nonsingular. In fact, otherwise, $H$ would be a Castelnuovo curve in $\mathbb P^4$ \cite[p. 76]{DGF}. On the other hand, by our assumption, $H$ is isomorphic to a plane curve of degree $\frac{d}{2}$. Hence, we would have: $$ g=\frac{d^2}{6}-\frac{2}{3}d+1=\frac{d^2}{8}-\frac{3}{4}d+1 $$ (the first equality because $H$ is a Castelnuovo curve, the second because $H$ is isomorphic to a plane curve of degree $\frac{d}{2}$). This is impossible when $d>0$. Therefore, $S$ is contained in a smooth threefold $T$ of minimal degree in $\mathbb P^5$. Now observe that in $\Sigma$ there are only two families of curves of even degree $d$ and genus $g=\frac{d^2}{8}-\frac{3}{4}d+1$. These are the curves linearly equivalent on $\Sigma$ to $\frac{d}{2}(H_{\Sigma}- W_{\Sigma})$, and the curves linearly equivalent to $\frac{d+2}{6}H_{\Sigma}+ \frac{d-2}{2}W_{\Sigma}$. But only in the first family do the curves not meet $E$. Hence, $H$ is linearly equivalent on $\Sigma$ to $\frac{d}{2}(H_{\Sigma}- W_{\Sigma})$. Since the restriction ${\text{Pic}}(T)\to {\text{Pic}}(\Sigma)$ is bijective, it follows that $S$ is linearly equivalent on $T$ to $\frac{d}{2}(H_{T}- W_{T})$. By Theorem \ref{lbound}, $S$ is a fortiori linearly normal, and of minimal Euler characteristic $\chi(\mathcal O_S)=-\frac{1}{8}d(d-6)$. \end{document}
\begin{document} \title{\large E\MakeLowercase{xistence of flips and minimal models for 3-folds in char $p$}} \begin{abstract} We will prove the following results for $3$-fold pairs $(X,B)$ over an algebraically closed field $k$ of characteristic $p>5$: log flips exist for $\mathbb Q$-factorial dlt pairs $(X,B)$; log minimal models exist for projective klt pairs $(X,B)$ with pseudo-effective $K_X+B$; the log canonical ring $R(K_X+B)$ is finitely generated for projective klt pairs $(X,B)$ when $K_X+B$ is a big $\mathbb Q$-divisor; semi-ampleness holds for a nef and big $\mathbb Q$-divisor $D$ if $D-(K_X+B)$ is nef and big and $(X,B)$ is projective klt; $\mathbb Q$-factorial dlt models exist for lc pairs $(X,B)$; terminal models exist for klt pairs $(X,B)$; ACC holds for lc thresholds; etc. \end{abstract} \tableofcontents \section{Introduction} We work over an algebraically closed field $k$ of characteristic (char) $p>0$. The pairs $(X,B)$ we consider in this paper always have $\mathbb R$-boundaries $B$ unless otherwise stated. Higher dimensional birational geometry in char $p$ is still largely conjectural. Even the most basic problems, such as base point freeness, are not solved in general. Ironically, though, Mori's work on the existence of rational curves, which plays an important role in characteristic $0$, uses reduction mod $p$ techniques. There are two reasons, among others, which have held back progress in char $p$: resolution of singularities is not known and Kawamata-Viehweg vanishing fails. However, it was expected that one could work out most components of the minimal model program in dimension $3$. This is because resolution of singularities is known in dimension $3$ and many problems can be reduced to dimension $2$, hence one can use special features of surface geometry. On the positive side, there has been some good progress toward understanding birational geometry in char $p$. People have tried to replace the characteristic $0$ tools that fail in char $p$.
For example, Keel [\ref{Keel}] developed techniques for dealing with the base point free problem and semi-ampleness questions in general without relying on Kawamata-Viehweg vanishing type theorems. On the other hand, motivated by questions in commutative algebra, people have introduced Frobenius singularities whose definitions do not require resolution of singularities and which are very similar to singularities in characteristic $0$ (cf. [\ref{Schwede}]). More recently Hacon-Xu [\ref{HX}] proved the existence of flips in dimension $3$ for pairs $(X,B)$ with $B$ having standard coefficients, that is, coefficients in $\mathfrak{S}=\{1-\frac{1}{n}\mid n\in \mathbb N\cup \{\infty\}\}=\{0,\frac{1}{2},\frac{2}{3},\frac{3}{4},\dots,1\}$, and char $p>5$. From this they could derive the existence of minimal models for $3$-folds with canonical singularities. In this paper, we rely on their results and ideas. The requirement $p>5$ has to do with the behavior of singularities on surfaces, e.g.\ a klt surface singularity over $k$ of char $p>5$ is strongly $F$-regular.\\ {\textbf{\sffamily{Log flips.}}} Our first result is on the existence of flips. \begin{thm}\label{t-flip-1} Let $(X,B)$ be a $\mathbb Q$-factorial dlt pair of dimension $3$ over $k$ of char $p>5$. Let $X\to Z$ be a $K_X+B$-negative extremal flipping projective contraction. Then its flip exists. \end{thm} The conclusion also holds if $(X,B)$ is klt but not necessarily $\mathbb Q$-factorial. This follows from the finite generation below (\ref{t-fg}). The theorem is proved in Section \ref{s-flips} when $X$ is projective. The quasi-projective case is proved in Section \ref{s-mmodels}. We reduce the theorem to the case when $X$ is projective, $B$ has standard coefficients, and some component of $\rddown{B}$ is negative on the extremal ray: this case is [\ref{HX}, Theorem 4.12], which is one of the main results of that paper.
A different approach is taken in [\ref{CGS}] to prove \ref{t-flip-1} when $B$ has hyperstandard coefficients and $p\gg 0$ (these coefficients are of the form $\frac{n-1}{n}+\sum \frac{l_ib_i}{n}$ where $n \in\mathbb N\cup \{\infty\}$, $l_i\in \mathbb Z^{\ge 0}$ and $b_i$ are in some fixed DCC set). To prove Theorem \ref{t-flip-1} we actually first prove the existence of \emph{generalized flips} [\ref{HX}, after Theorem 5.6]. See Section \ref{s-flips} for more details.\\ {\textbf{\sffamily{Log minimal models.}}} In [\ref{HX}, after Theorem 5.6], using generalized flips, a \emph{generalized LMMP} is defined which is used to show the existence of minimal models for varieties with canonical singularities (or for pairs with canonical singularities and ``good'' boundaries). Using weak Zariski decompositions as in [\ref{B-WZD}], we construct log minimal models for klt pairs in general. \begin{thm}\label{t-mmodel} Let $(X,B)$ be a klt pair of dimension $3$ over $k$ of char $p>5$ and let $X\to Z$ be a projective contraction. If $K_X+B$ is pseudo-effective$/Z$, then $(X,B)$ has a log minimal model over $Z$. \end{thm} The theorem is proved in Section \ref{s-mmodels}. Alternatively, one can apply the methods of [\ref{B-mmodel}] to construct log minimal models for lc pairs $(X,B)$ such that $K_X+B\equiv M/Z$ for some $M\ge 0$. Note that when $X\to Z$ is a semi-stable fibration over a curve and $B=0$, the theorem was proved much earlier by Kawamata [\ref{Kawamata}].\\ {\textbf{\sffamily{Remark on Mori fibre spaces.}}} Let $(X,B)$ be a projective klt pair of dimension $3$ over $k$ of char $p>5$ such that $K_X+B$ is not pseudo-effective. An important question is whether $(X,B)$ has a Mori fibre space. There is an ample $\mathbb R$-divisor $A\ge 0$ such that $K_X+B+A$ is pseudo-effective but $K_X+B+(1-\epsilon)A$ is not pseudo-effective for any $\epsilon>0$. Moreover, we may assume that $(X,B+A)$ is klt as well (\ref{l-ample-dlt}).
By Theorem \ref{t-mmodel}, $(X,B+A)$ has a log minimal model $(Y,B_Y+A_Y)$. Since $K_Y+B_Y+A_Y$ is not big, $K_Y+B_Y+A_Y$ is numerically trivial on some covering family of curves by [\ref{CTX}] (see also \ref{t-aug-b-non-big} below). Again by [\ref{CTX}], there is a nef reduction map $Y\dashrightarrow T$ for $K_Y+B_Y+A_Y$ which is projective over the generic point of $T$. Although $Y\dashrightarrow T$ is not necessarily a Mori fibre space, it is similar in some sense.\\ {\textbf{\sffamily{Finite generation, base point freeness, and contractions.}}} We will prove finite generation in the big case from which we can derive base point freeness and contractions of extremal rays in many cases. These are proved in Section \ref{s-fg}. \begin{thm}\label{t-fg} Let $(X,B)$ be a klt pair of dimension $3$ over $k$ of char $p>5$ and $X\to Z$ a projective contraction. Assume that $K_X+B$ is a $\mathbb Q$-divisor which is big$/Z$. Then the relative log canonical algebra $\mathcal{R}(K_X+B/Z)$ is finitely generated over $\mathcal{O}_Z$. \end{thm} Assume that $Z$ is a point. If $K_X+B$ is not big, then $R(K_X+B)$ is still finitely generated if $\kappa(K_X+B)\le 1$. It remains to show the finite generation when $\kappa(K_X+B)=2$: this can probably be reduced to dimension $2$ using an appropriate canonical bundle formula, for example as in [\ref{CTX}]. A more or less immediate consequence of the above finite generation is the following base point freeness. \begin{thm}\label{t-bpf} Let $(X,B)$ be a projective klt pair of dimension $3$ over $k$ of char $p>5$ and $X\to Z$ a projective contraction where $B$ is a $\mathbb Q$-divisor. Assume that $D$ is a $\mathbb Q$-divisor such that $D$ and $D-(K_X+B)$ are both nef and big$/Z$. Then $D$ is semi-ample$/Z$. \end{thm} Assume that $Z$ is a point. When $D-(K_X+B)$ is nef and big but $D$ is nef with numerical dimension $\nu(D)$ one or two, semi-ampleness of $D$ is proved in [\ref{CTX}] under some restrictions on the coefficients.
\begin{thm}\label{t-contraction} Let $(X,B)$ be a projective $\mathbb Q$-factorial dlt pair of dimension $3$ over $k$ of char $p>5$, and $X\to Z$ a projective contraction. Let $R$ be a $K_X+B$-negative extremal ray$/Z$. Assume that there is a nef and big$/Z$ $\mathbb Q$-divisor $N$ such that $N\cdot R=0$. Then $R$ can be contracted by a projective morphism. \end{thm} Note that if $K_X+B$ is pseudo-effective$/Z$, then for every $K_X+B$-negative extremal ray $R/Z$ there exists $N$ as in the theorem (see \ref{ss-ext-rays-II}). Therefore such extremal rays can be contracted by projective morphisms. Theorems \ref{t-bpf} and \ref{t-contraction} have been proved by Xu [\ref{Xu}] independently and more or less at the same time but using a different approach. His proof also relies on our results on flips and minimal models.\\ {\textbf{\sffamily{Dlt and terminal models.}}} The next two results are standard consequences of the LMMP (more precisely, of special termination). They are proved in Section \ref{s-crepant-models}. \begin{thm}\label{cor-dlt-model} Let $(X,B)$ be an lc pair of dimension $3$ over $k$ of char $p>5$. Then $(X,B)$ has a (crepant) $\mathbb Q$-factorial dlt model. In particular, if $(X,B)$ is klt, then $X$ has a $\mathbb Q$-factorialization by a small morphism. \end{thm} The theorem was proved in [\ref{HX}, Theorem 6.1] for pairs with standard coefficients. \begin{thm}\label{cor-terminal-model} Let $(X,B)$ be a klt pair of dimension $3$ over $k$ of char $p>5$. Then $(X,B)$ has a (crepant) $\mathbb Q$-factorial terminal model. \end{thm} The theorem was proved in [\ref{HX}, Theorem 6.1] for pairs with standard coefficients and canonical singularities.\\ {\textbf{\sffamily{The connectedness principle with applications to semi-ampleness.}}} The next result concerns the Koll\'ar-Shokurov connectedness principle. 
In characteristic $0$, the surface case was proved by Shokurov by taking a resolution and then calculating intersection numbers [\ref{Shokurov}, Lemma 5.7] but the higher dimensional case was proved by Koll\'ar by deriving it from the Kawamata-Viehweg vanishing theorem [\ref{Kollar+}, Theorem 17.4]. \begin{thm}\label{t-connectedness-d-3} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ over $k$ of char $p>5$. Let $f\colon X\to Z$ be a birational contraction such that $-(K_X+B)$ is ample$/Z$. Then for any closed point $z\in Z$, the non-klt locus of $(X,B)$ is connected in any neighborhood of the fibre $X_z$. \end{thm} The theorem is proved in Section \ref{s-connectedness}. To prove it we use the LMMP rather than vanishing theorems. When $\dim X=2$, the theorem holds in a stronger form (see \ref{t-connectedness-d-2}). We will use the connectedness principle on surfaces to prove some semi-ampleness results on surfaces and $3$-folds. Here is one of them: \begin{thm}\label{t-sa-reduced-boundary} Let $(X,B+A)$ be a projective $\mathbb Q$-factorial dlt pair of dimension $3$ over $k$ of char $p>5$. Assume that $A,B\ge 0$ are $\mathbb Q$-divisors such that $A$ is ample and $(K_X+B+A)|_{\rddown{B}}$ is nef. Then $(K_X+B+A)|_{\rddown{B}}$ is semi-ample. \end{thm} Note that if one could show that $\rddown{B}$ is semi-lc, then the result would follow from Tanaka [\ref{Tanaka-2}]. In order to show that $\rddown{B}$ is semi-lc one needs to check that it satisfies the Serre condition $S_2$. In characteristic $0$ this is a consequence of Kawamata-Viehweg vanishing (see Koll\'ar [\ref{Kollar+}, Corollary 17.5]). The $S_2$ condition can be used to glue sections on the various irreducible components of $\rddown{B}$. 
To prove the above semi-ampleness we instead use a result of Keel [\ref{Keel}, Corollary 2.9] to glue sections.\\ {\textbf{\sffamily{Log canonical thresholds.}}} As in characteristic $0$, we will derive the following result from existence of $\mathbb Q$-factorial dlt models and boundedness results on Fano surfaces. \begin{thm}\label{t-ACC} Suppose that $\Lambda\subseteq [0,1]$ and $\Gamma\subseteq \mathbb{R}$ are DCC sets. Then the set $$\{\lct(M,X,B)|\mbox{ $(X,B)$ is lc of dimension $\le 3$}\}$$ {\flushleft satisfies} the ACC where $X$ is over $k$ with char $p>5$, the coefficients of $B$ belong to $\Lambda$, $M\ge 0$ is an $\mathbb R$-Cartier divisor with coefficients in $\Gamma$, and $\lct(M,X,B)=\sup\{t\ge 0\mid \mbox{$(X,B+tM)$ is lc}\}$ is the lc threshold of $M$ with respect to $(X,B)$. \end{thm} With some work it seems that using the above ACC one can actually prove termination for those lc pairs $(X,B)$ of dimension $3$ such that $K_X+B\equiv M$ for some $M\ge 0$, following the ideas in [\ref{B-acc}]. But we will not pursue this here.\\ {\textbf{\sffamily{Numerically trivial family of curves in the non-big case.}}} We will also give a somewhat different proof of the following result which was proved by Cascini-Tanaka-Xu [\ref{CTX}] in char $p$. This was also proved by M$^{\rm c}$Kernan, independently and much earlier, but his proof was unpublished. He informed us that his proof was inspired by [\ref{KMM}]. \begin{thm}\label{t-aug-b-non-big} Assume that $X$ is a normal projective variety of dimension $d$ over an algebraically closed field (of any characteristic), and that $B,A\ge 0$ are $\mathbb R$-divisors. Moreover, suppose $A$ is nef and big and $D=K_X+B+A$ is nef. If $D^d=0$, then for each general closed point $x\in X$ there is a rational curve $L_x$ passing through $x$ with $D\cdot L_x=0$. \end{thm} The theorem is independent of the rest of this paper.
Its proof is an application of the bend and break theorem.\\ {\textbf{\sffamily{Some remarks about this paper.}}} In writing this paper we have tried to give as much detail as possible even if the arguments are very similar to the characteristic $0$ case. This is for convenience, for future reference, and to avoid any unpleasant surprises having to do with positive characteristic. The main results are proved in the following order: \ref{t-flip-1} in the projective case, \ref{cor-dlt-model}, \ref{cor-terminal-model}, \ref{t-mmodel}, \ref{t-flip-1} in general, \ref{t-connectedness-d-3}, \ref{t-sa-reduced-boundary}, \ref{t-fg}, \ref{t-bpf}, \ref{t-contraction}, \ref{t-ACC}, and \ref{t-aug-b-non-big}.\\ {\textbf{\sffamily{Acknowledgements.}}} Part of this work was done when I visited National Taiwan University in September 2013 with the support of the Mathematics Division (Taipei Office) of the National Center for Theoretical Sciences. The visit was arranged by Jungkai A. Chen. I would like to thank them for their hospitality. This work was partially supported by a Leverhulme grant. I would also like to thank Paolo Cascini, Christopher Hacon, Janos Koll\'ar, James M$^{\rm{c}}$Kernan, Burt Totaro and Chenyang Xu for their comments and suggestions. Finally I would like to thank the referee for valuable corrections and suggestions. \section{Preliminaries} We work over an algebraically closed field $k$ of characteristic $p>0$ fixed throughout the paper unless stated otherwise. \subsection{Contractions}\label{ss-contractions} A \emph{contraction} $f\colon X\to Z$ of algebraic spaces over $k$ is a proper morphism such that $f_*\mathcal{O}_X=\mathcal{O}_Z$. When $X,Z$ are quasi-projective varieties over $k$ and $f$ is projective, we refer to $f$ as a \emph{projective contraction} to avoid confusion. Let $f\colon X\to Z$ be a projective contraction of normal varieties.
We say $f$ is \emph{extremal} if the relative Kleiman-Mori cone of curves $\overline{NE}(X/Z)$ is one-dimensional. Such a contraction is a \emph{divisorial contraction} if it is birational and it contracts some divisor. It is called a \emph{small contraction} if it is birational and it contracts some subvariety of codimension $\ge 2$ but no divisors. Let $f\colon X\to Z$ be a small contraction and $D$ an $\mathbb R$-Cartier divisor such that $-D$ is ample$/Z$. We refer to $f$ as a \emph{$D$-flipping contraction} or just a flipping contraction for short. We say the \emph{$D$-flip} of $f$ exists if there is a small contraction $X^+\to Z$ such that the birational transform $D^+$ is ample$/Z$. \subsection{Some notions related to divisors}\label{ss-divisors} Let $X$ be a normal projective variety over $k$ and $L$ a nef $\mathbb R$-Cartier divisor. We define $L^\perp:=\{\alpha\in \overline{NE}(X) \mid L\cdot \alpha=0\}$. This is an extremal face of $\overline{NE}(X)$ cut out by $L$. Let $f\colon X\to Z$ be a projective morphism of normal varieties over $k$, and let $D$ be an $\mathbb R$-divisor on $X$. We define the algebra of $D$ over $Z$ as $\mathcal{R}(D/Z)=\bigoplus_{m\in\mathbb Z^{\ge 0}} f_*\mathcal{O}_X(\rddown{mD})$. When $Z$ is a point we denote the algebra by $R(D)$. When $D=K_X+B$ for a pair $(X,B)$ we call the algebra the \emph{log canonical algebra} of $(X,B)$ over $Z$. Now let $\phi\colon X\dashrightarrow Y$ be a birational map of normal projective varieties over $k$ whose inverse does not contract divisors. Let $D$ be an $\mathbb R$-Cartier divisor on $X$ such that $D_Y:=\phi_*D$ is $\mathbb R$-Cartier too. We say that $\phi$ is \emph{$D$-negative} if there is a common resolution $f\colon W\to X$ and $g\colon W\to Y$ such that $f^*D-g^*D_Y$ is effective and exceptional$/Y$, and its support contains the birational transform of all the prime divisors on $X$ which are contracted$/Y$. 
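As an illustration of the last definition, we note the following standard fact (our addition; it is not stated in the source): the birational map underlying a $D$-flip is $D$-negative.

```latex
% Sketch: the D-flip X --> X^+ of a D-flipping contraction is D-negative.
Let $X\to Z$ be a $D$-flipping contraction with $D$-flip $X^+\to Z$, and let
$f\colon W\to X$ and $g\colon W\to X^+$ be a common resolution over $Z$. Put
$$ E:=f^*D-g^*D^+. $$
Since $X\dashrightarrow X^+$ is an isomorphism in codimension one, $f_*E=D-D=0$.
If $C$ is a curve on $W$ contracted by $f$, then $C$ lies over a point of $Z$, so
$$ -E\cdot C=g^*D^+\cdot C=D^+\cdot g_*C\ge 0 $$
because $D^+$ is ample$/Z$; hence $-E$ is nef$/X$ and the negativity lemma gives
$E\ge 0$. As no divisor on $X$ is contracted by $X\dashrightarrow X^+$, the
support condition in the definition holds vacuously, so the map is $D$-negative.
```

This is the basic reason flips are allowed as steps of a $D$-negative LMMP.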
\subsection{The negativity lemma}\label{ss-negativity} The negativity lemma states that if $f\colon Y\to X$ is a projective birational contraction of normal quasi-projective varieties over $k$ and $D$ is an $\mathbb R$-Cartier divisor on $Y$ such that $-D$ is nef$/X$ and $f_*D\ge 0$, then $D\ge 0$ (since this is a local statement over $X$, it also holds if we assume $X$ is an algebraic space and $f$ is proper). See [\ref{Shokurov}, Lemma 1.1] for the characteristic $0$ case. The proof there also works in char $p>0$ and we reproduce it for convenience. Assume that the lemma does not hold. We reduce the problem to the surface case. Let $P$ be the image of the negative components of $D$. If $\dim P>0$, we take a general hypersurface section $H$ on $X$, let $G$ be the normalization of the birational transform of $H$ on $Y$ and reduce the problem to the contraction $G\to H$ and the divisor $D|_G$. But if $\dim X>2$ and $\dim P=0$, we take a general hypersurface section $G$ on $Y$, let $H$ be the normalization of $f(G)$, and reduce the problem to the induced contraction $G\to H$ and divisor $D|_G$. So we can reduce the problem to the case when $X,Y$ are surfaces, $P$ is just one point, and $f$ is an isomorphism over $X\setminus \{P\}$. Taking a resolution enables us to assume $Y$ is smooth. Now let $E\ge 0$ be a divisor whose support is equal to the exceptional locus of $f$ and such that $-E$ is nef$/X$: pick a Cartier divisor $L\ge 0$ passing through $P$ and write $f^*L=L^\sim+E$ where $L^\sim$ is the birational transform of $L$; then $E$ satisfies the requirements. Let $e$ be the smallest number such that $D+eE\ge 0$. Now there is a component $C$ of $E$ whose coefficient in $D+eE$ is zero and which intersects $\Supp (D+eE)$. But then $(D+eE)\cdot C>0$, contradicting the fact that $-(D+eE)$ is nef$/X$. \subsection{Resolution of singularities}\label{ss-resolution} Let $X$ be a quasi-projective variety of dimension $\le 3$ over $k$ and $P\subset X$ a closed subset.
Assume that there is an open set $U\subset X$ such that $P\cap U$ is a divisor with simple normal crossing (snc) singularities. Then there is a \emph{log resolution} of $X,P$ which is an isomorphism over $U$, that is, there is a projective birational morphism $f\colon Y\to X$ such that the union of the exceptional locus of $f$ and the birational transform of $P$ is an snc divisor, and $f$ is an isomorphism over $U$. This follows from Cutkosky [\ref{Cut-sing}, Theorems 1.1, 1.2, 1.3] when $k$ has char $p>5$, and from Cossart-Piltant [\ref{CP-sing}, Theorems 4.1, 4.2][\ref{CP-sing-2}, Theorem] in general (see also [\ref{HX}, Theorem 2.1]). \subsection{Pairs}\label{ss-pairs} A \emph{pair} $(X,B)$ consists of a normal quasi-projective variety $X$ over $k$ and an \emph{$\mathbb R$-boundary} $B$, that is an $\mathbb R$-divisor $B$ on $X$ with coefficients in $[0,1]$, such that $K_X+B$ is $\mathbb{R}$-Cartier. When $B$ has rational coefficients we say $B$ is a \emph{$\mathbb Q$-boundary} or say $B$ is \emph{rational}. We say that $(X,B)$ is \emph{log smooth} if $X$ is smooth and $\Supp B$ has simple normal crossing singularities. Let $(X,B)$ be a pair. For a prime divisor $D$ on some birational model of $X$ with a nonempty centre on $X$, $a(D,X,B)$ denotes the \emph{log discrepancy} which is defined by taking a projective birational morphism $f\colon Y\to X$ from a normal variety containing $D$ as a prime divisor and putting $a(D,X,B)=1-b$ where $b$ is the coefficient of $D$ in $B_Y$ and $K_Y+B_Y=f^*(K_X+B)$. As in characteristic $0$, we can define various types of singularities using log discrepancies. Let $(X,B)$ be a pair. We say that the pair is \emph{log canonical} or lc for short (resp. \emph{Kawamata log terminal} or {klt} for short) if $a(D,X,B)\ge 0$ (resp. $a(D,X,B)>0$) for any prime divisor $D$ on birational models of $X$. An \emph{lc centre} of $(X,B)$ is the image in $X$ of a $D$ with $a(D,X,B)=0$. 
The pair $(X,B)$ is \emph{terminal} if $a(D,X,B)>1$ for any prime divisor $D$ on birational models of $X$ which is exceptional$/X$ (such pairs are sometimes called terminal in codimension $\ge 2$). On the other hand, we say that $(X,B)$ is \emph{dlt} if there is a closed subset $P\subset X$ such that $(X,B)$ is log smooth outside $P$ and no lc centre of $(X,B)$ is inside $P$. In particular, the lc centres of $(X,B)$ are exactly the components of $S_1\cap \cdots \cap S_r$ where $S_i$ are among the components of $\rddown{B}$. Moreover, there is a log resolution $f\colon Y\to X$ of $(X,B)$ such that $a(D,X,B)>0$ for any prime divisor $D$ on $Y$ which is exceptional$/X$, e.g.\ take a log resolution $f$ which is an isomorphism over $X\setminus P$. Finally, we say that $(X,B)$ is \emph{plt} if it is dlt and each connected component of $\rddown{B}$ is irreducible. In particular, the only lc centres of $(X,B)$ are the components of $\rddown{B}$. \subsection{Ample divisors on log smooth pairs}\label{ss-log-smooth} Let $(X,B)$ be a projective log smooth pair over $k$ and let $A$ be an ample $\mathbb Q$-divisor. We will argue that there is $A'\sim_\mathbb Q A$ such that $A'\ge 0$ and that $(X,B+A')$ is log smooth. The argument was suggested to us by several people independently. We may assume that $B$ is reduced. Let $S_1,\dots, S_r$ be the components of $B$ and let $\mathcal{S}$ be the set of the components of $S_{i_1}\cap \cdots \cap S_{i_n}$ for all the choices $\{i_1,\dots,i_n\}\subseteq \{1,\cdots,r\}$. By Bertini's theorem, there is a sufficiently divisible integer $l>0$ such that for any $T\in\mathcal{S}$, a general element of $|lA|_T|$ is smooth. Since $lA$ is sufficiently ample, such general elements are restrictions of general elements of $|lA|$. Therefore, we can choose a general $G\sim lA$ such that $G$ is smooth and $G|_T$ is smooth for any $T\in \mathcal{S}$. This means that $(X,B+G)$ is log smooth. Now let $A'=\frac{1}{l}G$.
\subsection{Models of pairs}\label{ss-mmodels} Let $(X,B)$ be a pair and $X\to Z$ a projective contraction over $k$. A pair $(Y,B_Y)$ with a projective contraction $Y\to Z$ and a birational map $\phi\colon X\dashrightarrow Y/Z$ is a \emph{log birational model} of $(X,B)$ if $B_Y$ is the sum of the birational transform of $B$ and the reduced exceptional divisor of $\phi^{-1}$. We say that $(Y,B_Y)$ is a \emph{weak lc model} of $(X,B)$ over $Z$ if in addition\\\\ (1) $K_Y+B_Y$ is nef/$Z$,\\ (2) for any prime divisor $D$ on $X$ which is exceptional/$Y$, we have $$ a(D,X,B)\le a(D,Y,B_Y). $$ We call $(Y,B_Y)$ a \emph{log minimal model} of $(X,B)$ over $Z$ if in addition\\\\ (3) $(Y,B_Y)$ is $\mathbb Q$-factorial dlt,\\ (4) the inequality in (2) is strict.\\ When $K_X+B$ is big$/Z$, the \emph{lc model} of $(X,B)$ over $Z$ is a weak lc model $(Y,B_Y)$ over $Z$ with $K_Y+B_Y$ ample$/Z$. On the other hand, a log birational model $(Y,B_Y)$ of $(X,B)$ is called a \emph{Mori fibre space} of $(X,B)$ over $Z$ if there is a $K_Y+B_Y$-negative extremal projective contraction $Y\to T/Z$, and if for any prime divisor $D$ on birational models of $X$ we have $$ a(D,X,B)\le a(D,Y,B_Y) $$ with strict inequality if $D\subset X$ and if it is exceptional/$Y$. Note that the above definitions are slightly different from the traditional definitions. However, if $(X,B)$ is plt (hence also klt) the definitions coincide. Let $(X,B)$ be an lc pair over $k$. A $\mathbb Q$-factorial dlt pair $(Y,B_Y)$ is a \emph{$\mathbb Q$-factorial dlt model} of $(X,B)$ if there is a projective birational morphism $f\colon Y\to X$ such that $K_Y+B_Y=f^*(K_X+B)$ and such that every exceptional prime divisor of $f$ has coefficient $1$ in $B_Y$. On the other hand, when $(X,B)$ is klt, a pair $(Y,B_Y)$ with terminal singularities is a \emph{terminal model} of $(X,B)$ if there is a projective birational morphism $f\colon Y\to X$ such that $K_Y+B_Y=f^*(K_X+B)$.
\subsection{Keel's results}\label{ss-Keel} We recall some of the results of Keel which will be used in this paper. For a nef $\mathbb Q$-Cartier divisor $L$ on a projective scheme $X$ over $k$, the \emph{exceptional locus} $\mathbb{E}(L)$ is the union of those positive-dimensional integral subschemes $Y\subseteq X$ such that $L|_Y$ is not big, i.e. $(L|_Y)^{\dim Y}=0$. By [\ref{CMM}], $\mathbb{E}(L)$ coincides with the augmented base locus ${\bf{B_+}}(L)$. We say $L$ is \emph{endowed with a map} $f\colon X\to V$, where $V$ is an algebraic space over $k$, if: an integral subscheme $Y$ is contracted by $f$ (i.e. $\dim Y>\dim f(Y)$) if and only if $L|_Y$ is not big. \begin{thm}[{[\ref{Keel}, 1.9]}]\label{t-Keel-1} Let $X$ be a projective scheme over $k$ and $L$ a nef $\mathbb Q$-Cartier divisor on $X$. Then $\bullet$ $L$ is semi-ample if and only if $L|_{\mathbb{E}(L)}$ is semi-ample; $\bullet$ $L$ is endowed with a map if and only if $L|_{\mathbb{E}(L)}$ is endowed with a map. \end{thm} The theorem does not hold if $k$ is of characteristic $0$. When $L|_{\mathbb{E}(L)}\equiv 0$, then $L|_{\mathbb{E}(L)}$ is automatically endowed with the constant map $\mathbb{E}(L)\to \rm{pt}$ hence $L$ is endowed with a map. This is particularly useful for studying $3$-folds because it is often not difficult to show that $L|_{\mathbb{E}(L)}$ is endowed with a map, e.g.\ when $\dim \mathbb{E}(L)=1$. \begin{thm}[{[\ref{Keel}, 0.5]}]\label{t-Keel-2} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ over $k$ with $B$ a $\mathbb Q$-divisor. Assume that $A$ is an ample $\mathbb Q$-divisor such that $L=K_X+B+A$ is nef and big. Then $L$ is endowed with a map. \end{thm} In particular, when $L^\perp$ is an extremal ray $R$, we can contract $R$ to an algebraic space by the map associated to $L$. Thus such an extremal ray is generated by the class of some curve. We also recall the following cone theorems which we will use repeatedly in Section \ref{s-ext-rays}.
Note that these theorems (as well as \ref{t-Keel-2}) do not assume singularities to be lc. \begin{thm}[{[\ref{Keel}, 0.6]}]\label{t-Keel-3} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ over $k$ with $B$ a $\mathbb Q$-divisor. Assume that $K_X+B\sim_\mathbb Q M$ for some $M\ge 0$. Then there is a countable number of curves $\Gamma_i$ such that $\bullet$ $\overline{NE}(X)=\overline{NE}(X)_{K_X+B\ge 0}+\sum_i \mathbb R [\Gamma_i]$, $\bullet$ all but finitely many of the $\Gamma_i$ are rational curves satisfying $-3\le (K_X+B)\cdot \Gamma_i<0$, and $\bullet$ the rays $\mathbb R [\Gamma_i]$ do not accumulate inside $\overline{NE}(X)_{K_X+B<0}$.\\ \end{thm} \begin{thm}[{[\ref{Keel}, 5.5.2]}]\label{t-Keel-4} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ over $k$. Assume that $$ L=K_X+B+H\sim_\mathbb R A+M $$ is nef where $H,A$ are ample $\mathbb R$-divisors, and $M\ge 0$. Then any extremal ray of $L^\perp$ is generated by some curve $\Gamma$ such that either $\bullet$ $\Gamma$ is a component of the singular locus of $B+M$ union with the singular locus of $X$, or $\bullet$ $\Gamma$ is a rational curve satisfying $-3\le (K_X+B)\cdot \Gamma<0$.\\ \end{thm} \begin{rem}\label{ss-good-exc-locus} Let $(X,B)$ be a projective lc pair of dimension $3$ over $k$ with $B$ a $\mathbb Q$-boundary, and $H$ an ample $\mathbb Q$-divisor. Assume that $L=K_X+B+H$ is nef and big. Moreover, suppose that each connected component of $\mathbb{E}(L)$ is inside some normal irreducible component $S$ of $\rddown{B}$. Then $L|_S$ is semi-ample for such components (cf. [\ref{Tanaka}]), hence $L|_{\mathbb{E}(L)}$ is semi-ample, and this in turn implies that $L$ is semi-ample by Theorem \ref{t-Keel-1}. \end{rem} \section{Extremal rays and special kinds of LMMP}\label{s-ext-rays} As usual the varieties and algebraic spaces in this section are defined over $k$ of char $p>0$.
\subsection{Extremal curve of a ray} \label{ss-ext-curve} Let $X$ be a projective variety and $H$ a fixed ample Cartier divisor. Let $R$ be a ray of $\overline{NE}(X)$ which is generated by some curve $\Gamma$. Assume that $$ H\cdot \Gamma=\min\{H\cdot C\mid \mbox{$C$ generates $R$} \} $$ In this case, we say $\Gamma$ is an \emph{extremal curve} of $R$ (in practice we do not mention $H$ and assume that it is already fixed). Let $C$ be any other curve generating $R$. Assume that $D\cdot R<0$ for some $\mathbb R$-Cartier divisor $D$. Since $\Gamma$ and $C$ both generate $R$, $$ \frac{D\cdot C}{H\cdot C}=\frac{D\cdot \Gamma}{H\cdot \Gamma} $$ hence $$ D\cdot \Gamma={D\cdot C}(\frac{H\cdot \Gamma}{H\cdot C})\ge D\cdot C $$ (the inequality holds since $D\cdot C<0$ and $0<H\cdot \Gamma\le H\cdot C$), which implies that $$ D\cdot \Gamma=\max\{D\cdot C\mid \mbox{$C$ generates $R$} \} $$ \subsection{Negative extremal rays}\label{ss-ext-rays} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$. Let $R$ be a $K_X+B$-negative extremal ray. Assume that there is a boundary $\Delta$ such that $K_X+\Delta$ is pseudo-effective and $(K_X+\Delta)\cdot R<0$. By adding a small ample divisor and perturbing the coefficients we can assume that $\Delta$ is rational and that $K_X+\Delta$ is big. Then by Theorem \ref{t-Keel-3}, $R$ is generated by some extremal curve and $R$ is an isolated extremal ray of $\overline{NE}(X)$. Now assume that $K_X+B$ is pseudo-effective and let $A$ be an ample $\mathbb R$-divisor.
Then for any $\epsilon>0$, there are only finitely many $K_X+B+\epsilon A$-negative extremal rays: assume that this is not the case; then we can find a $\mathbb Q$-boundary $\Delta$ such that $K_X+\Delta$ is big and $$ K_X+B+\epsilon A\sim_\mathbb R K_X+\Delta+G $$ where $G$ is ample; so there are also infinitely many $K_X+\Delta$-negative extremal rays; but $K_X+\Delta$ is big hence by Theorem \ref{t-Keel-3} all but finitely many of the $K_X+\Delta$-negative extremal rays are generated by extremal curves $\Gamma$ with $-3\le (K_X+\Delta)\cdot \Gamma<0$; if $(K_X+B+\epsilon A)\cdot \Gamma<0$, then $G\cdot \Gamma\le 3$; since $G$ is ample, there can be only finitely many such $\Gamma$ up to numerical equivalence. Let $R$ be a $K_X+B$-negative extremal ray where $K_X+B$ is not necessarily pseudo-effective. But assume that there is a pseudo-effective $K_X+\Delta$ with $(K_X+\Delta)\cdot R<0$. By the remarks above we may assume $\Delta$ is rational, $K_X+\Delta$ big, and that there are only finitely many $K_X+\Delta$-negative extremal rays. Therefore, we can find an ample $\mathbb Q$-divisor $H$ such that $L=K_X+\Delta+H$ is nef and big and $L^\perp=R$. That is, $L$ is a \emph{supporting divisor} of $R$. Moreover, $R$ can be contracted to an algebraic space, by Theorem \ref{t-Keel-2}. More precisely, there is a contraction $X\to V$ to an algebraic space such that it contracts a curve $C$ if and only if $L\cdot C=0$ if and only if the class $[C]\in R$. \subsection{More on negative extremal rays}\label{ss-ext-rays-II} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$. 
Let $\mathcal{C}\subset \overline{NE}(X)$ be one of the following: $(1)$ $\mathcal{C}=\overline{NE}(X/Z)$ for a given projective contraction $X\to Z$ such that $K_X+B\equiv P+M/Z$ where $P$ is nef$/Z$ and $M\ge 0$ (this is a \emph{weak Zariski decomposition}; see \ref{ss-WZD}); or $(2)$ $\mathcal{C}=N^\perp$ for some nef and big $\mathbb Q$-divisor $N$; We will show that in both cases, each $K_X+B$-negative extremal ray $R$ of $\mathcal{C}$ is generated by an extremal curve $\Gamma$, and for all but finitely many of those rays we have $-3\le (K_X+B)\cdot \Gamma<0$. We first deal with case (1). Fix a $K_X+B$-negative extremal ray $R$ of $\mathcal{C}$. By replacing $P$ we can assume that $K_X+B=P+M$. Let $A$ be an ample $\mathbb R$-divisor and $T$ be the pullback of a sufficiently ample divisor on $Z$ so that $K_X+B+A+T$ is big and $(K_X+B+A+T)\cdot R<0$. By \ref{ss-ext-rays}, there is a nef and big $\mathbb Q$-divisor $L$ with $L^\perp=R$. Moreover, we may assume that if $l\gg 0$, then $$ Q_1:=K_X+B+T+lL+A $$ is nef and big and $Q_1^\perp=R$. By construction, $T+lL+A$ is ample, $P+T+lL+A$ is also ample, and $$ K_X+B+T+lL+A=P+T+lL+A+M $$ Therefore, by Theorem \ref{t-Keel-4}, $R$ is generated by some curve $\Gamma$ satisfying $-3\le (K_X+B)\cdot \Gamma<0$ or $R$ is generated by some curve in the singular locus of $B+M$ or $X$. There are only finitely many possibilities in the latter case. The claim then follows. Now we deal with case (2). Fix a $K_X+B$-negative extremal ray $R$ of $\mathcal{C}$. Since $N$ is nef and big, for some $n>0$, $$ K_X+B+nN\sim_\mathbb R G+S $$ where $G$ is ample and $S\ge 0$. By \ref{ss-ext-rays}, there is a nef and big $\mathbb Q$-divisor $L$ with $L^\perp=R$. Moreover, for some $l\gg 0$ and some ample $\mathbb R$-divisor $A$, $$ Q_2:=K_X+B+nN+lL+A $$ is nef and big with $Q_2^\perp=R$. 
Now, $nN+lL+A$ is ample, $G+lL+A$ is ample, and $$ K_X+B+nN+lL+A\sim_\mathbb R G+lL+A+S $$ Therefore, by Theorem \ref{t-Keel-4}, $R$ is generated by some curve $\Gamma$ satisfying $-3\le (K_X+B)\cdot \Gamma<0$ or $R$ is generated by some curve in the singular locus of $B+S$ or $X$. There are only finitely many possibilities in the latter case. The claim then follows. In either case, assume that $R$ is a $K_X+B$-negative extremal ray of $\mathcal{C}$. Then the above arguments show that there is a $\mathbb Q$-boundary $\Delta$ and an ample $\mathbb Q$-divisor $H$ such that $K_X+\Delta$ is big, $(K_X+\Delta)\cdot R<0$, and $L=K_X+\Delta+H$ is nef and big with $L^\perp=R$. Therefore, as in \ref{ss-ext-rays}, $R$ can be contracted via a contraction $X\to V$ to an algebraic space. Moreover, if $B$ is rational, then we can find an ample $\mathbb Q$-divisor $H'$ such that $L'=K_X+B+H'$ is nef and big and again $(L')^\perp=R$. \subsection{Extremal rays given by scaling}\label{ss-ext-rays-scaling} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$. Assume that either $\mathcal{C}=\overline{NE}(X/Z)$ for some projective contraction $X\to Z$ such that $K_X+B\equiv M/Z$ for some $M\ge 0$, or $\mathcal{C}=N^\perp$ for some nef and big $\mathbb Q$-divisor $N$. In addition assume that $(X,B+C)$ is a pair for some $C\ge 0$ and that $K_X+B+C$ is nef on $\mathcal{C}$, that is, $(K_X+B+C)\cdot R\ge 0$ for every extremal ray $R$ of $\mathcal{C}$. Let $$ \lambda=\inf\{t\ge 0\mid K_X+B+tC \mbox{~is nef on $\mathcal{C}$}\} $$ Then we will see that either $\lambda=0$ or there is an extremal ray $R$ of $\mathcal{C}$ such that $(K_X+B+\lambda C)\cdot R=0$ and $(K_X+B)\cdot R<0$. Assume $\lambda>0$. If the claim is not true, then there exist a sequence of numbers $t_1<t_2<\cdots$ approaching $\lambda$ and extremal rays $R_i$ of $\mathcal{C}$ such that $(K_X+B+t_i C)\cdot R_i=0$ and $(K_X+B)\cdot R_i<0$.
First assume that $\mathcal{C}=N^\perp$ for some nef and big $\mathbb Q$-divisor $N$. We can write a finite sum $K_X+B=\sum_j r_j(K_X+B_j)$ where $r_j\in (0,1]$, $\sum r_j=1$, and $(X,B_j)$ are pairs with $B_j$ being rational. By \ref{ss-ext-rays-II}, we may assume that each $R_i$ is generated by some extremal curve $\Gamma_i$ with $-3\le (K_X+B_j)\cdot \Gamma_i$ for each $j$. Since each $K_X+B_j$ is $\mathbb Q$-Cartier and the numbers $(K_X+B_j)\cdot \Gamma_i$ are bounded, there are only finitely many possibilities for the numbers $(K_X+B)\cdot \Gamma_i$. Similar reasoning shows that there are only finitely many possibilities for the numbers $(K_X+B+\frac{\lambda}{2}C)\cdot \Gamma_i$, hence there are also only finitely many possibilities for the numbers $C\cdot \Gamma_i$. But then, as each $t_i$ is determined by $(K_X+B)\cdot \Gamma_i$ and $C\cdot \Gamma_i$, there are only finitely many possible values for the $t_i$, contradicting the fact that the $t_i$ form a strictly increasing sequence. Now assume that $\mathcal{C}=\overline{NE}(X/Z)$ for some projective contraction $X\to Z$ such that $K_X+B\equiv M/Z$ for some $M\ge 0$. Then we can write $K_X+B=\sum_j r_j(K_X+B_j)$ and $M=\sum_j r_jM_j$ where $r_j\in (0,1]$, $\sum r_j=1$, $(X,B_j)$ are pairs with $B_j$ being rational, $K_X+B_j\equiv M_j/Z$, and $M_j\ge 0$. To find such a decomposition we argue as in [\ref{BP}, pages 96--97]. Let $V$ and $W$ be the $\mathbb R$-vector spaces generated by the components of $B$ and $M$ respectively. For a vector $v\in V$ (resp. $w\in W$) we denote the corresponding $\mathbb R$-divisor by $B_v$ (resp. $M_w$). Let $F$ be the set of those $(v,w)\in V\times W$ such that $(X,B_v)$ is a pair, $M_w\ge 0$, and $K_X+B_v\equiv M_w/Z$. Then $F$ is defined by a finite number of linear equalities and inequalities with rational coefficients. If $B=B_{v_0}$ and $M=M_{w_0}$ are the given divisors, then $(v_0,w_0)\in F$ hence it belongs to some polytope in $F$ with rational vertices. The vertices of the polytope give the $B_j,M_j$. The rest of the proof is as in the last paragraph. \subsection{LMMP with scaling}\label{ss-g-LMMP-scaling} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$.
Assume that either $\mathcal{C}=\overline{NE}(X/Z)$ for some projective contraction $X\to Z$ such that $K_X+B\equiv M/Z$ for some $M\ge 0$, or $\mathcal{C}=N^\perp$ for some nef and big $\mathbb Q$-divisor $N$. In addition assume that $(X,B+C)$ is a pair for some $C\ge 0$ and that $K_X+B+C$ is nef on $\mathcal{C}$. If $K_X+B$ is not nef on $\mathcal{C}$, by \ref{ss-ext-rays-scaling}, there is an extremal ray $R$ of $\mathcal{C}$ such that $(K_X+B+\lambda C)\cdot R=0$ and $(K_X+B)\cdot R<0$ where $\lambda$ is the smallest number such that $K_X+B+\lambda C$ is nef on $\mathcal{C}$. Assume that $R$ can be contracted by a projective morphism. The contraction is birational because $L\cdot R=0$ for some nef and big $\mathbb Q$-Cartier divisor $L$ (see \ref{ss-ext-rays-II}). Assume that $X\dashrightarrow X'$ is the corresponding divisorial contraction or flip, and assume that $X'$ is $\mathbb Q$-factorial. Let $\mathcal{C}'$ be the cone given by $\mathcal{C}'=\overline{NE}(X'/Z)$ or $\mathcal{C}'=(N')^\perp$ corresponding to the above cases. Let $\lambda'$ be the smallest nonnegative number such that $K_{X'}+B'+\lambda' C'$ is nef on $\mathcal{C}'$. If $\lambda'>0$, then there is an extremal ray $R'$ of $\mathcal{C}'$ such that $(K_{X'}+B'+\lambda' C')\cdot R'=0$ and $(K_{X'}+B')\cdot R'<0$. Assume that $R'$ can be contracted, and so on. Assuming that all the necessary ingredients exist, the process gives a special kind of LMMP which we may refer to as \emph{LMMP$/\mathcal{C}$ on $K_X+B$ with scaling of $C$}. Note that $\lambda\ge \lambda'\ge \cdots$. If $\mathcal{C}=\overline{NE}(X/Z)$, we also refer to the above LMMP as the LMMP$/Z$ on $K_X+B$ with scaling of $C$. If $\mathcal{C}=N^\perp$, and if $N$ is endowed with a map $X\to V$ to an algebraic space, we refer to the above LMMP as the LMMP$/V$ on $K_X+B$ with scaling of $C$.
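The nef threshold of \ref{ss-ext-rays-scaling} can be checked in a toy example (not needed for the arguments above; we take the simplest absolute setting with $B=0$ and $\mathcal{C}=\overline{NE}(X)$): let $X=\mathbb P^3$ and let $C=H_1+H_2+H_3+H_4$ be a sum of four general hyperplanes, so that $(X,C)$ is a pair and $K_X+C\sim 0$ is nef. Writing $H$ for a hyperplane, $K_X+tC\equiv (4t-4)H$, hence $$ \lambda=\inf\{t\ge 0\mid K_X+tC \mbox{~is nef}\}=1 $$ and the unique extremal ray $R$ of $\overline{NE}(X)$, generated by a line $\ell$, satisfies $(K_X+\lambda C)\cdot R=0$ and $K_X\cdot \ell=-4<0$, as predicted by \ref{ss-ext-rays-scaling}.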
In practice, when we run an LMMP with scaling, $(X,B)$ is $\mathbb Q$-factorial dlt and each extremal ray in the process intersects some component of $\rddown{B}$ negatively. In particular, such rays can be contracted by projective morphisms and the $\mathbb Q$-factorial property is preserved by the LMMP (see \ref{ss-pl-ext-rays}). If the required flips exist, then the LMMP terminates by special termination (see \ref{ss-special-termination}). \subsection{Extremal rays given by a weak Zariski decomposition}\label{ss-ext-ray-WZD} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ and $X\to Z$ a projective contraction such that \begin{enumerate} \item $K_X+B\equiv P+M/Z$, $P$ is nef$/Z$, $M\ge 0$, and \item $\Supp M\subseteq \rddown{B}$. \end{enumerate} Let $$ \mu=\sup \{t\in [0,1] \mid P+tM \mbox{~is nef$/Z$}\}. $$ Assume that $\mu<1$. We will show that there is an extremal ray $R/Z$ such that $(K_X+B)\cdot R<0$ and $(P+\mu M)\cdot R=0$. Replacing $P$ with $P+\mu M$ we may assume that $\mu=0$. Then by definition of $\mu$, $P+\epsilon'M$ is not nef$/Z$ for any $\epsilon'>0$. In particular, for any $\epsilon'>0$ there is a $K_X+B$-negative extremal ray $R/Z$ such that $(P+\epsilon'M)\cdot R<0$ but $(P+\epsilon M)\cdot R=0$ for some $\epsilon\in [0,\epsilon')$. If there is no $K_X+B$-negative extremal ray $R/Z$ such that $P\cdot R=0$, then there is an infinite strictly decreasing sequence of sufficiently small positive real numbers $\epsilon_i$ and $K_X+B$-negative extremal rays $R_i/Z$ such that $\lim_{i\to \infty} \epsilon_i=0$ and $(P+\epsilon_i M)\cdot R_i=0$. We may assume that for each $i$, there is an extremal curve $\Gamma_i$ generating $R_i$ such that $-3\le (K_X+B)\cdot \Gamma_i<0$ (see \ref{ss-ext-rays-II}). Since $\Supp M\subseteq \rddown{B}$, there is a small $\delta>0$ such that $(K_X+B-\delta M)\cdot \Gamma_i<0$ for each $i$, $B-\delta M\ge 0$, and $\Supp (B-\delta M)=\Supp B$.
We have $$ K_X+B-\delta M\equiv P+(1-\delta)M/Z $$ By replacing the sequence of extremal rays with a subsequence, we can assume that each component $S$ of $M$ satisfies: either $S\cdot R_i\ge 0$ for every $i$, or $S\cdot R_i<0$ for every $i$. Pick a component $S$. If $S\cdot R_i\ge 0$ for each $i$, then by \ref{ss-ext-rays-II}, we may assume that $$ -3\le (K_X+B-\delta M)\cdot \Gamma_i<0 $$ and $$ -3\le (K_X+B-\delta M-\tau S)\cdot \Gamma_i<0 $$ for every $i$ where $\tau>0$ is a small number. In particular, this means that $S\cdot \Gamma_i$ is bounded from below and above. On the other hand, if $S\cdot R_i< 0$ for each $i$, then by considering $K_X+B-\delta M+\tau S$ and arguing similarly we can show that again $S\cdot \Gamma_i$ is bounded from below and above. In particular, there are only finitely many possibilities for the numbers $M\cdot \Gamma_i$. Therefore, $$ \lim_{i\to \infty} P\cdot \Gamma_i=\lim_{i\to \infty} -\epsilon_iM\cdot \Gamma_i=0 $$ Write $K_X+B=\sum_j r_j(K_X+B_j)$ where $r_j\in (0,1]$, $\sum r_j=1$, and $(X,B_j)$ are pairs with $B_j$ being rational. We can assume that each component of $B-B_j$ has irrational coefficient in $B$ hence $B-B_j$ and $M$ have no common components because $\Supp M\subseteq \rddown{B}$. Assume $(K_X+B_j)\cdot \Gamma_i<0$ for some $i,j$. Let $S$ be a component of $M$ such that $S\cdot \Gamma_i<0$, and let $S^\nu$ be its normalization. Let $K_{S^\nu}+B_{j,S^\nu}=(K_X+B_j)|_{S^\nu}$ (see Section \ref{s-adjunction} for adjunction formulas of this type). On the other hand, by \ref{ss-ext-rays-II}, there is an ample $\mathbb Q$-divisor $H$ such that $Q=K_X+B_j+H$ is nef and big and $R_i=Q^\perp$. 
Now the face $(Q|_{S^\nu})^\perp$ of $\overline{NE}(S^\nu/Z)$ is generated by finitely many curves $\Lambda_1^\nu, \dots, \Lambda_r^\nu$ such that $\alpha_j\le (K_{S^\nu}+B_{j,S^\nu})\cdot \Lambda_l^\nu<0$ where $\alpha_j$ depends on $({S^\nu},B_{j,S^\nu})$ but does not depend on $i$, by Tanaka [\ref{Tanaka}, Theorem 4.4, Remark 4.5]. Let $\Lambda_l$ be the image of $\Lambda_l^\nu$ under the map $S^\nu\to X$. Since $R_i=Q^\perp$ and $Q\cdot \Lambda_l=0$, each $\Lambda_l$ also generates $R_i$. But as $\Gamma_i$ is extremal, perhaps after replacing the $\alpha_j$, we get $$ \alpha_j\le (K_X+B_j)\cdot \Lambda_l\le (K_X+B_j)\cdot \Gamma_i<0 $$ by \ref{ss-ext-curve}. On the other hand, since $$ -3\le (K_X+B)\cdot \Gamma_i=\sum_j r_j(K_X+B_j)\cdot \Gamma_i<0 $$ for each $i$, we deduce that $(K_X+B_j)\cdot \Gamma_i$ is bounded from below and above for each $i,j$, which in turn implies that there are only finitely many possibilities for $(K_X+B)\cdot \Gamma_i$. Recalling that there are also finitely many possibilities for $M\cdot \Gamma_i$, we get a contradiction: $$ 0<P\cdot \Gamma_i=(K_X+B)\cdot \Gamma_i-M\cdot \Gamma_i $$ so the numbers $P\cdot \Gamma_i$ are positive and take only finitely many values, yet $\lim_{i\to \infty} P\cdot \Gamma_i=0$, which is impossible. \subsection{LMMP using a weak Zariski decomposition}\label{ss-LMMP-WZD} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ and $X\to Z$ a projective contraction such that $K_X+B\equiv P+M/Z$ where $P$ is nef$/Z$, $M\ge 0$, and $\Supp M\subseteq \rddown{B}$. Let $\mu$ be the largest number such that $P+\mu M$ is nef$/Z$. Assume $\mu<1$. Then, by \ref{ss-ext-ray-WZD}, there is an extremal ray $R/Z$ such that $(K_X+B)\cdot R<0$ and $(P+\mu M)\cdot R=0$. By replacing $P$ with $P+\mu M$ we may assume that $P\cdot R=0$. Assume that $R$ can be contracted by a projective morphism and that it gives a divisorial contraction or a log flip $X\dashrightarrow X'/Z$ with $X'$ being $\mathbb Q$-factorial.
Obviously, $K_{X'}+B'\equiv P'+M'/Z$ where $P'$ is nef$/Z$, $M'\ge 0$, and $\Supp M'\subseteq \rddown{B'}$. Continuing this process we obtain a particular kind of LMMP which we will refer to as the \emph{LMMP using a weak Zariski decomposition} or more specifically the \emph{LMMP$/Z$ on $K_X+B$ using $P+M$}. When we need this LMMP below we will make sure that all the necessary ingredients exist.\\ \section{Adjunction}\label{s-adjunction} The varieties in this section are over $k$ of arbitrary characteristic. We will use some of the results of Koll\'ar [\ref{Kollar-sing}] to prove an adjunction formula. Let $\Lambda$ be a DCC set of numbers in $[0,1]$. Then the hyperstandard set $$ \mathfrak{S}_\Lambda=\{\frac{m-1}{m}+\sum \frac{l_ib_i}{m}\le 1 \mid m\in \mathbb N\cup\{\infty\}, l_i\in \mathbb Z^{\ge 0}, b_i\in \Lambda\} $$ also satisfies DCC. Now let $(X,B)$ be a pair and $S$ a component of $\rddown{B}$. Let $S^\nu \to S$ be the normalization. Following a suggestion of Koll\'ar, we will show that the pullback of $K_X+B$ to $S^{\nu}$ can be canonically written as $K_{S^\nu}+B_{S^\nu}$ for some $B_{S^\nu}\ge 0$ which is called the \emph{different}. Moreover, if $(X,B)$ is lc outside a codimension $3$ closed subset and if the coefficients of $B$ belong to $\Lambda$, then we show $B_{S^\nu}$ is a boundary with coefficients in $\mathfrak{S}_\Lambda$. When there is a log resolution $f\colon W\to X$, it is easy to define $B_{S^\nu}$: let $K_W+B_W=f^*(K_X+B)$ and let $K_T+B_T=(K_W+B_W)|_T$ where $T$ is the birational transform of $S$. Next, let $B_{S^\nu}$ be the pushdown of $B_T$ via $T\to S^\nu$. However, since existence of log resolutions is not known in general, we follow a different path, that is, that of [\ref{Kollar-sing}, Section 4.1]. Actually, in this paper we will need this construction only when $\dim X\le 3$ in which case log resolutions exist. The characteristic $0$ case of the results mentioned is due to Shokurov [\ref{Shokurov}, Corollary 3.10]. 
His idea is to cut by appropriate hyperplane sections and reduce the problem to the case when $X$ is a surface. If the index of $K_X+S$ is $1$, one proves the claim by direct calculations on a resolution. If the index is more than $1$, one then uses the index $1$ cover. Unfortunately, this does not work in positive characteristic. \begin{prop}\label{p-adjunction-existence} Let $(X,B)$ be a pair, $S$ be a component of $\rddown{B}$, and $S^\nu\to S$ be the normalization. Then there is a canonically determined $\mathbb R$-divisor $B_{S^\nu}\ge 0$ such that $$ K_{S^\nu}+B_{S^\nu}\sim_\mathbb R (K_X+B)|_{S^\nu} $$ where $|_{S^\nu}$ means pullback to ${S^\nu}$ by the induced morphism $S^\nu\to X$. \end{prop} \begin{proof} If $K_X+B$ is $\mathbb Q$-Cartier, then the statement is proved in [\ref{Kollar-sing}, 4.2 and 4.5]. In fact, [\ref{Kollar-sing}, 4.2] defines $\Delta_{S^\nu}$ in general when $\Delta$ is a $\mathbb Q$-divisor with arbitrary rational coefficients, $S$ is a component of $\Delta$ with coefficient $1$, and $K_X+\Delta$ is $\mathbb Q$-Cartier (but $\Delta_{S^\nu}$ is not effective in general). Let $U$ be the $\mathbb R$-vector space generated by the components of $B$. There is a rational affine subspace $V$ of $U$ containing $B$ and with minimal dimension. Since $V$ has minimal dimension, $\Delta-B$ is supported in the irrational part of $B$ for every $\Delta\in V$. Thus the coefficient of $S$ in $\Delta$ is $1$ for every $\Delta\in V$. Let $V_\mathbb Q$ be the underlying $\mathbb Q$-affine space of $V$. Let $$ W_\mathbb Q=\{\Delta_{S^\nu} \mid \Delta\in V_\mathbb Q\} $$ If $\Delta=\sum r_j\Delta^j$ where $r_j> 0$ is rational, $\sum r_j=1$, and $\Delta^j\in V_\mathbb Q$, then the construction of [\ref{Kollar-sing}, 4.2] shows that $\Delta_{S^\nu}=\sum r_j\Delta^j_{S^\nu}$. Therefore, $W_\mathbb Q$ is a $\mathbb Q$-affine space and the map $\alpha \colon V_\mathbb Q\to W_\mathbb Q$ sending $\Delta$ to $\Delta_{S^\nu}$ is an affine map.
Letting $W$ be the $\mathbb R$-affine space generated by $W_\mathbb Q$, we get an induced affine map $V\to W$ which sends $B$ to some element $B_{S^\nu}$. Writing $B=\sum r_j\Delta^j$ where $r_j> 0$, $\sum r_j=1$, and $0\le \Delta^j\in V_\mathbb Q$, we see that $B_{S^\nu}=\sum r_j\Delta^j_{S^\nu}\ge 0$. Moreover, by construction $$ K_{S^\nu}+B_{S^\nu}=\sum r_j(K_{S^\nu}+\Delta^j_{S^\nu})\sim_\mathbb R \sum r_j(K_X+\Delta^j)|_{S^\nu} =(K_X+B)|_{S^\nu} $$\\ \end{proof} Note that in general $B_{S^\nu}$ is not a boundary, i.e. its coefficients may not be in $[0,1]$. \begin{prop}\label{p-adjunction-DCC} Let $\Lambda\subseteq [0,1]$ be a DCC set of real numbers. Let $(X,B)$ be a pair, $S$ be a component of $\rddown{B}$, $S^\nu\to S$ be the normalization, and $B_{S^\nu}$ be the divisor given by Proposition \ref{p-adjunction-existence}. Assume that \begin{itemize} \item $(X,B)$ is lc outside a codimension $3$ closed subset, and \item the coefficients of $B$ are in $\Lambda$. \end{itemize} Then $B_{S^\nu}$ is a boundary with coefficients in $\mathfrak{S}_\Lambda$. More precisely: write $B=S+\sum_{i\ge 2} b_iB_i$, let $V^\nu$ be a prime divisor on $S^\nu$ and let $V$ be its image on $S$; then there exists $m\in\mathbb N\cup \{\infty\}$ depending only on $X,S$ and $V$, and there exist nonnegative integers $l_i$ depending only on $X,S,B_i$ and $V$, such that the coefficient of $V^\nu$ in $B_{S^\nu}$ is equal to $$ \frac{m-1}{m}+\sum_{i\ge 2} \frac{l_ib_i}{m} $$ \end{prop} \begin{lem}\label{l-lc-codim2} Let $(X,B)$ be a pair which is lc outside a codimension $3$ closed subset. Then we can write $B=\sum r_jB^j$ where $r_j> 0$, $\sum r_j=1$, $B^j$ are $\mathbb Q$-boundaries, and $(X,B^j)$ are lc outside a codimension $3$ closed subset. \end{lem} \begin{proof} As in the proof of Proposition \ref{p-adjunction-existence}, there is a rational affine space $V$ of divisors, containing $B$, such that $K_X+\Delta$ is $\mathbb R$-Cartier for every $\Delta\in V$.
The set of those $\Delta\in V$ with coefficients in $[0,1]$ is a rational polytope $\mathcal{P}$ containing $B$. We want to show that there is a rational polytope $\mathcal{L}\subseteq \mathcal{P}$, containing $B$, such that $(X,\Delta)$ is lc outside a fixed codimension $3$ closed subset, for every $\Delta\in \mathcal{L}$. If $(X,B)$ has a log resolution, then existence of $\mathcal{L}$ can be proved using the same arguments as in [\ref{Shokurov}, 1.3.2]. The pair $(X,B)$ is log smooth outside some codimension $2$ closed subset $Y$. In particular, $(X,\Delta)$ is lc outside $Y$, for every $\Delta\in \mathcal{P}$. Shrinking $X$ we can assume $Y$ is of pure codimension $2$ and that $(X,B)$ is lc everywhere. Assume that for each component $R$ of $Y$, there is a rational polytope $\mathcal{L}_R\subseteq \mathcal{P}$, containing $B$, such that $(X,\Delta)$ is lc near the generic point of $R$, for every $\Delta\in \mathcal{L}_R$. Then we can take $\mathcal{L}$ to be any rational polytope, containing $B$, inside the intersection of the $\mathcal{L}_R$. Existence of $\mathcal{L}_R$ is a local problem near the generic point of $R$. By replacing $X$ with $\Spec \mathcal{O}_{X,R}$ we are reduced to the situation in which $X$ is a normal excellent scheme of dimension $2$ (see [\ref{Kollar-sing}, 3.3] for the notion of lc pairs in this setting). Now $(X,B)$ has a log resolution (see [\ref{Shafarevich}, page 28 and following remarks, and page 72]). So existence of $\mathcal{L}_R$ can be proved again as in [\ref{Shokurov}, 1.3.2].\\ \end{proof} \begin{proof}(of Proposition \ref{p-adjunction-DCC}) Assume that the proposition holds whenever $K_X+B$ is $\mathbb Q$-Cartier. In the general case, that is, when $K_X+B$ is only $\mathbb R$-Cartier, we can use Lemma \ref{l-lc-codim2} to write $B=\sum r_jB^j$ where $r_j> 0$, $\sum r_j=1$, $B^j$ are $\mathbb Q$-boundaries, and $(X,B^j)$ are lc outside a codimension $3$ closed subset.
Moreover, we can assume $S$ is a component of $\rddown{B^j}$ for each $j$ since we can choose the $B^j$ so that $B-B^j$ are supported on the irrational part of $B$. Then $B_{S^\nu}=\sum r_jB^j_{S^\nu}$ (see the proof of Proposition \ref{p-adjunction-existence}). Write $B^j=S+\sum_{i\ge 2} b^j_iB_i$. By assumption, there exists $m\in\mathbb N\cup \{\infty\}$ depending only on $X,S$ and $V$, and there exist nonnegative integers $l_i$ depending only on $X,S,B_i$ and $V$, such that the coefficient of $V^\nu$ in $B^j_{S^\nu}$ is equal to $$ \frac{m-1}{m}+\sum \frac{l_ib^j_i}{m} $$ Therefore, the coefficient of $V^\nu$ in $B_{S^\nu}$ is equal to $$ \frac{m-1}{m}+\sum_j r_j (\sum_i \frac{l_ib^j_i}{m})=\frac{m-1}{m}+\sum_i l_i (\sum_j \frac{r_jb^j_i}{m})=\frac{m-1}{m}+\sum_i \frac{l_ib_i}{m} $$ So from now on we can assume that $K_X+B$ is $\mathbb Q$-Cartier. Determining the coefficient of $V^\nu$ in $B_{S^\nu}$ is a local problem near the generic point of $V$. As in the proof of Lemma \ref{l-lc-codim2}, we can replace $X$ with $\Spec \mathcal{O}_{X,V}$ hence assume that $X$ is a normal excellent scheme of dimension $2$, $S$ is one-dimensional, and $V$ is a closed point. Now $(X,B)$ is lc and the fact that $B_{S^\nu}$ is a boundary is proved in [\ref{Kollar-sing}, 4.5]. Assume that $X$ is regular at $V$. If $S$ is not regular at $V$, then $B=S$ and the coefficient of $V^\nu$ in $B_{S^\nu}$ is equal to $1$ (by [\ref{Kollar-sing}, 3.45] or by blowing up $V$ and working on the blow up). But if $S$ is regular at $V$, then $S^\nu \to S$ is an isomorphism, $(K_X+S)|_{S^\nu}=K_{S^\nu}$, $m=1$, and $B_{S^\nu}=B|_{S^\nu}$. From these we can get the formula for the coefficient of $V^\nu$ as claimed. Thus we can assume $X$ is not regular at $V$. Since $(X,B)$ is lc, $(X,S)$ is numerically lc (see [\ref{Kollar-sing}, 3.3] for definition of numerical lc which is the same as lc except that $K_X+S$ may not be $\mathbb Q$-Cartier). If $(X,S)$ is not numerically plt, i.e. 
if there is an exceptional divisor over $V$ whose log discrepancy with respect to $(X,S)$ is $0$, then in fact $B=S$, and the coefficient of $V^\nu$ in $B_{S^\nu}$ is equal to $1$ by [\ref{Kollar-sing}, 3.45]. Thus we can assume $(X,S)$ is numerically plt which in particular implies that $S$ is regular and that $S^\nu \to S$ is an isomorphism, by [\ref{Kollar-sing}, 3.35]. Let $f\colon Y\to X$ be a log minimal resolution of $(X,S)$ as in [\ref{Kollar-sing}, 2.25] and let $S^\sim$ be the birational transform of $S$. Then $S^\sim\to S$ is an isomorphism and the extended dual graph of the resolution is of the form $$ \xymatrix{ \bullet \ar@{-}[r] & c_1 \ar@{-}[r] & c_2 \ar@{-}[r] &\cdots \ar@{-}[r] & c_r } $$ where $\bullet$ corresponds to $S^\sim$, $c_i=-E_i^2$, and $E_1,\dots, E_r$ are the exceptional curves of $f$. Let $M=[-E_i\cdot E_j]$ be the negative of the intersection matrix of the resolution, and let $m=\det M$. Then by [\ref{Kollar-sing}, 3.35.1] we have $$ K_Y+S^\sim+\sum e_jE_j\equiv 0/X $$ for certain $e_j>0$ and $e_1=\frac{m-1}{m}$. Let $D\neq 0$ be an effective Weil divisor on $X$ with coefficients in $\mathbb N$. Let $d_i$ be the numbers so that $D^\sim+\sum d_iE_i\equiv 0/X$ where $D^\sim$ is the birational transform of $D$. The $d_i$ satisfy the equations $$ (\sum d_jE_j)\cdot E_t =-D^\sim\cdot E_t $$ Since $M$ has integer entries and the numbers $-D^\sim\cdot E_t$ are integers, by Cramer's rule, we can write $d_j=\frac{n_j}{m}$ for certain $n_j\in \mathbb N$. Applying this to $D=B_i$, we have $B_i^\sim+\sum d_{i,j}E_j\equiv 0/X$ for certain $d_{i,j}=\frac{n_{i,j}}{m}$ with $n_{i,j}\in \mathbb N$. But then $$ K_Y+B^\sim+\sum e_j'E_j\equiv 0/X $$ where $B^\sim$ is the birational transform of $B$ and $e_j'=e_j+\sum_{i\ge 2} \frac{n_{i,j}b_i}{m}$. In particular, $e_1'=\frac{m-1}{m}+\sum_{i\ge 2} \frac{l_ib_i}{m}$ where we put $l_i:=n_{i,1}$.
Now the coefficient of $V^\nu$ in $B_{S^\nu}$ is simply the coefficient of the divisor $e_1'E_1|_{S^\sim}$ which is nothing but $e_1'$.\\ \end{proof} \section{Special termination}\label{s-st} All the varieties and algebraic spaces in this section are over $k$ of char $p>0$ unless stated otherwise. \subsection{Reduced components of boundaries of dlt pairs} \begin{lem}\label{l-plt-normal} Let $(X,B)$ be an lc pair of dimension $3$ and $S$ a component of $\rddown{B}$. Then we have: $(1)$ if the coefficients of $B$ are standard, then the coefficients of $B_{S^\nu}$ are also standard; $(2)$ if $k$ has char $p>5$ and $(X,B)$ is $\mathbb Q$-factorial dlt, then $S$ is normal. \end{lem} \begin{proof} (1) This follows from Koll\'ar [\ref{Kollar-sing}, Corollary 3.45] (see also [\ref{Kollar-sing}, 4.4]). (2) We may assume $B=S$ by discarding all the other components, in particular, $(X,B)$ is plt hence $({S^\nu},B_{S^\nu})$ is klt. By (1), $B_{S^\nu}$ has standard coefficients. By [\ref{HX}, Theorem 3.1], $({S^\nu},B_{S^\nu})$ is actually strongly $F$-regular. Therefore, $S$ is normal by [\ref{HX}, Theorem 4.1].\\ \end{proof} \subsection{Pl-extremal rays}\label{ss-pl-ext-rays} Let $(X,B)$ be a projective $\mathbb Q$-factorial dlt pair of dimension $3$. A $K_X+B$-negative extremal ray $R$ is called a \emph{pl-extremal ray} if $S\cdot R<0$ for some component $S$ of $\rddown{B}$. This is named after Shokurov's pl-flips. Assume that $k$ has char $p>5$. Now as in \ref{ss-ext-rays-II}, assume that $\mathcal{C}=\overline{NE}(X/Z)$ for some projective contraction $X\to Z$ such that $K_X+B\equiv P+M/Z$ where $P$ is nef$/Z$ and $M\ge 0$, or $\mathcal{C}=N^\perp$ for some nef and big $\mathbb Q$-divisor $N$. Let $R$ be a $K_X+B$-negative pl-extremal ray of $\mathcal{C}$. By \ref{ss-ext-rays-II}, we can find a $\mathbb Q$-boundary $\Delta$ and an ample $\mathbb Q$-divisor $H$ such that $\rddown{\Delta}=S$, $(K_X+\Delta)\cdot R<0$ and $L=K_X+\Delta+H$ is nef and big with $L^\perp=R$. 
Let $X\to V$ be the contraction to an algebraic space associated to $L$, which contracts $R$. Every curve contracted by $X\to V$ is inside $S$. So the exceptional locus $\mathbb{E}(L)$ of $L$ is inside $S$. Thus $L$ is semi-ample by \ref{ss-good-exc-locus}. Therefore $X\to V$ is a projective contraction; in particular, $L$ is the pullback of an ample divisor $L'$ on $V$. In other words, pl-extremal rays can be contracted by projective morphisms. This was proved in [\ref{HX}, Theorem 5.4] when $K_X+B$ is pseudo-effective. The extremal rays that appear below are often pl-extremal rays. If $X\to V$ is a divisorial contraction put $X'=V$ but if it is a flipping contraction assume $X\dashrightarrow X'/V$ is its flip. Then it is not hard to see that in any case $X'$ is $\mathbb Q$-factorial, by the following argument [\ref{Xu}]: we treat the divisorial case; the flipping case can be proved similarly. We can assume that $B$ is a $\mathbb Q$-boundary and $\Delta=B$. Let $D'$ be a prime divisor on $X'$ and $D$ its birational transform on $X$. There are rational numbers $\epsilon>0$ and $\delta$ such that $M:=K_X+B+H+\epsilon D+\delta S$ is nef and big, $M\equiv 0/V$, $H+\epsilon D+\delta S$ is ample, and $\mathbb{E}(M)=\mathbb{E}(L)=S$. Since $M|_{S}$ is semi-ample, $M$ is semi-ample by Theorem \ref{t-Keel-1}. That is, $M$ is the pullback of some ample divisor $M'$ on $X'$. But then $\epsilon D'=M'-L'$ is $\mathbb Q$-Cartier, hence $D'$ is $\mathbb Q$-Cartier. \subsection{Special termination}\label{ss-special-termination} The following important result is proved just like in characteristic $0$. We include the proof for convenience. \begin{prop}\label{p-st} Let $(X,B)$ be a projective $\mathbb Q$-factorial dlt pair of dimension $3$ over $k$ of char $p>5$. Assume that we are given an LMMP on $K_X+B$, say $X_i\dashrightarrow X_{i+1}/Z_i$ where $X_1=X$ and each $X_i\dashrightarrow X_{i+1}/Z_i$ is a flip, or a divisorial contraction with $X_{i+1}=Z_i$.
Then after finitely many steps, each remaining step of the LMMP is an isomorphism near the lc centres of $(X,B)$. \end{prop} \begin{proof} There are only finitely many lc centres and no new one can be created in the process, so we may assume that the LMMP does not contract any lc centre. In particular, we can assume that the LMMP is an isomorphism near each lc centre of dimension zero. Now let $C$ be an lc centre of dimension one. Since $(X,B)$ is dlt, $C$ is a component of the intersection of two components $S,S'$ of $\rddown{B}$. Let $C_i,S_i\subset X_i$ be the birational transforms of $C,S$. Applying Lemma \ref{l-plt-normal}, we can see that $C_i,S_i$ are normal. By adjunction, we can write $(K_{X_i}+B_i)|_{S_i}=K_{S_i}+B_{S_i}$ where the coefficient of $C_i$ in $B_{S_i}$ is one. Applying adjunction once more, we can write the pullback of $K_{S_i}+B_{S_i}$ to $C_i$ as $K_{C_i}+B_{C_i}$ for some boundary $B_{C_i}$. Since $C_i\simeq C_{i+1}$, we will use the notation $(C,B_{i,C})$ instead of $({C_i},B_{C_i})$. Since each step of the LMMP makes the divisor $K_X+B$ ``smaller'', $$ K_{C}+B_{i,C}\ge K_{C}+B_{i+1,C} $$ hence $B_{i,C}\ge B_{i+1,C}$ for every $i$. By Proposition \ref{p-adjunction-DCC}, the coefficients of $B_{S_i}$ and $B_{i,C}$ belong to some fixed DCC set. Therefore $B_{i,C}=B_{i+1,C}$ for every $i\gg 0$, which implies that after finitely many steps, each remaining step of the LMMP is an isomorphism near $C_i$. From now on we may assume that all the steps of the LMMP are flips. Let $S$ be any lc centre of dimension $2$, i.e. a component of $B$ with coefficient one.
If $S_i$ intersects the exceptional locus $E_i$ of $X_i\to Z_i$, then no other component of $\rddown{B_i}$ can intersect the exceptional locus: assume that another component $T_i$ intersects the exceptional locus; if either $S_i$ or $T_i$ contains $E_i$, then $S_i\cap T_i$ intersects $E_i$; but $S_i\cap T_i$ is a union of lc centres of dimension one and this contradicts the last paragraph; so neither $S_i$ nor $T_i$ contains $E_i$. But then both contain the exceptional locus of $X_{i+1}\to Z_i$ and similar arguments give a contradiction. Assume $D_i\subset S_i$ is a component of the exceptional locus of $X_i\to Z_{i-1}$ where $i>1$. Then the log discrepancy of $D_i$ with respect to $(S_1,B_{S_1})$ is less than one. Moreover, we can assume that the generic point of the centre of $D_i$ on $S_1$ is inside the klt locus of $(S_1,B_{S_1})$ by the last paragraph. But there can be at most finitely many such $D_i$ (as prime divisors on birational models of $S_1$). Since the coefficients of $D_i$ in $B_{S_i}$ belong to a DCC set, the coefficient of $D_i$ stabilizes. Therefore after finitely many steps, $S_i$ cannot contain any component of the exceptional locus of $X_i\to Z_{i-1}$. So we get a sequence $S_i\dashrightarrow S_{i+1}$ of birational morphisms which are isomorphisms if $i\gg 0$. In particular, $S_i$ is disjoint from $E_i$ for $i\gg 0$.\\ \end{proof} \section{Existence of log flips}\label{s-flips} In this section, we first prove that generalized flips exist (\ref{t-flip-2}). Next we prove Theorem \ref{t-flip-1} in the projective case, that is, when $X$ is projective. The general case of Theorem \ref{t-flip-1}, where $X$ is quasi-projective, is proved in Section \ref{s-mmodels}. \subsection{Divisorial and flipping extremal rays} Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ over $k$ of char $p>0$, and let $R$ be a $K_X+B$-negative extremal ray. Assume that there is a nef and big $\mathbb Q$-divisor $L$ such that $R=L^\perp$.
We say $R$ is a \emph{divisorial extremal ray} if $\dim \mathbb{E}(L)=2$. We say $R$ is a \emph{flipping extremal ray} if $\dim \mathbb{E}(L)=1$. By \ref{ss-ext-rays-II}, such rays can be contracted to algebraic spaces. By \ref{ss-ext-rays}, when $K_X+B$ is pseudo-effective, each $K_X+B$-negative extremal ray is either a divisorial extremal ray or a flipping extremal ray. We will show below (\ref{t-contraction}) that any divisorial or flipping extremal ray can actually be contracted by a projective morphism if $(X,B)$ is dlt and $p>5$. However, we still need contractions to algebraic spaces as an auxiliary tool. \subsection{Existence of generalized flips} We recall the definition of \emph{generalized flips} which was introduced in [\ref{HX}]. Let $(X,B)$ be a projective $\mathbb Q$-factorial pair of dimension $3$ over $k$ of char $p>0$, and let $R$ be a $K_X+B$-negative flipping extremal ray. We say that the \emph{generalized flip} of $R$ exists (see [\ref{HX}, after Theorem 5.6]) if there is a birational map $X\dashrightarrow X^+/V$ which is an isomorphism in codimension one, $X^+$ is $\mathbb Q$-factorial projective, and $K_{X^+}+B^+$ is numerically positive on any curve contracted by $X^+\to V$. \begin{thm}\label{t-flip-2} Let $(X,B)$ be a projective $\mathbb Q$-factorial dlt pair of dimension $3$ over $k$ of char $p>5$. Let $R$ be a $K_X+B$-negative flipping extremal ray. Then the generalized flip of $R$ exists. \end{thm} The theorem was proved in [\ref{HX}, Theorem 5.6] when $B$ has standard coefficients and $K_X+B$ is pseudo-effective. \begin{proof} This proof (as well as the proof of [\ref{HX}, Theorem 5.6]) is modeled on the proof of Shokurov's reduction theorem [\ref{Shokurov-pl}, Theorem 1.2]. Since $R$ is a flipping extremal ray, by definition, there is a nef and big $\mathbb Q$-divisor $L$ such that $R=L^\perp$. Moreover, $L$ is endowed with a map $X\to V$ to an algebraic space which contracts the curves generating $R$.
Note that if $B'$ is another boundary such that $(K_X+B')\cdot R<0$, then the generalized flip exists for $(X,B)$ if and only if it exists for $(X,B')$. This follows from the fact that $K_X+B\equiv t(K_X+B')/V$ for some number $t>0$, where the numerical equivalence means that $K_X+B-t(K_X+B')$ is numerically trivial on any curve contracted by $X\to V$. Let $\mathfrak{S}$ be the set of standard coefficients as defined in the introduction. Define $$ \zeta(X,B)=\#\{S \mid \mbox{$S$ is a component of $B$ and its coefficient is not in~} \mathfrak{S}\} $$ Assume that the generalized flip of $R$ does not exist. We will derive a contradiction. We can assume that $\zeta(X,B)$ is minimal, that is, we may assume that generalized flips always exist for pairs with smaller $\zeta$. We can decrease the coefficients of $\rddown{B}$ slightly so that $(X,B)$ becomes klt and $\zeta(X,B)$ is unchanged. In addition, each component $S$ of $B$ whose coefficient is not in $\mathfrak{S}$ satisfies $S\cdot R<0$; otherwise we could discard $S$ and decrease $\zeta(X,B)$, which is not possible by the minimality assumption. First assume that $\zeta(X,B)>0$. Choose a component $S$ of $B$ whose coefficient $b$ is not in $\mathfrak{S}$. There is a positive number $a$ such that $K_X+B\equiv aS/V$. Let $g\colon W\to X$ be a log resolution, and let $B_W=B^\sim+E$ and $\Delta_W=B_W+(1-b)S^\sim$ where $E$ is the reduced exceptional divisor of $g$ and $B^\sim,S^\sim$ are the birational transforms of $B,S$. Note that $\rddown{B_W}=E$ and $\rddown{\Delta_W}=S^\sim+E$. Since $(X,B)$ is klt, $$ K_W+\Delta_W=K_W+B_W+(1-b)S^\sim=g^*(K_X+B)+G+(1-b)S^\sim $$ where $G$ is effective and its support is equal to the support of $E$. Thus $$ K_W+\Delta_W\equiv g^*(aS)+G+(1-b)S^\sim=(a+1-b)S^\sim+F/V $$ where $F$ is effective and $\Supp F=\Supp E$. 
By construction, we have $$ \mbox{$\Supp (S^\sim+F)=\rddown{\Delta_W}~~$ and $~~\zeta(W,\Delta_W)<\zeta(X,B)$} $$ Run an LMMP$/V$ on $K_W+\Delta_W$ with scaling of some ample divisor, as in \ref{ss-g-LMMP-scaling}. Recall that this is an LMMP$/\mathcal{C}$ on $K_W+\Delta_W$ where $\mathcal{C}=N^\perp$ and $N$ is the pullback of the nef and big $\mathbb Q$-divisor $L$. In each step, some component of $\rddown{\Delta_W}$ intersects the corresponding extremal ray negatively. So such extremal rays are pl-extremal rays, they can be contracted by projective morphisms, and the $\mathbb Q$-factorial property is preserved (see \ref{ss-pl-ext-rays}). Moreover, if we encounter a flipping contraction, then its generalized flip exists because $\zeta(W,\Delta_W)<\zeta(X,B)$ and because we chose $\zeta(X,B)$ to be minimal; the flip is a usual one since its extremal ray is contracted projectively. By special termination (\ref{p-st}), the LMMP terminates on some model $Y/V$. Now run an LMMP$/V$ on $K_Y+B_Y$ with scaling of $(1-b)S_Y$ where $B_Y$ is the pushdown of $B_W$ and $S_Y$ is the pushdown of $S^\sim$. Since we have the numerical equivalence $K_Y+B_Y\equiv a S_Y+F_Y/V$ and $\Supp F_Y=\rddown{B_Y}$, in each step of the LMMP the corresponding extremal ray intersects some component of $\rddown{B_Y}$ negatively, hence it is a pl-extremal ray and can be contracted by a projective morphism (\ref{ss-pl-ext-rays}). Moreover, if one of these rays gives a flipping contraction, then its generalized flip exists because $K_Y+B_Y-bS_Y$ is negative on that ray and $\zeta(Y,B_Y-bS_Y)<\zeta(X,B)$. Note that again such flips are usual flips. The LMMP terminates on a model $X^+$ by special termination. Let $h\colon W'\to X$ and $e\colon W'\to X^+$ be a common resolution. 
Now the negativity lemma (\ref{ss-negativity}) applied to the divisor $h^*(K_X+B)-{e^*}(K_{X^+}+B^+)$ over $X$ implies that $$ h^*(K_X+B)-{e^*}(K_{X^+}+B^+)\ge 0 $$ Thus every component $D$ of $E$ is contracted over $X^+$ because $$ 0<a(D,X,B)\le a(D,X^+,B^+) $$ Therefore $X\dashrightarrow X^+$ is an isomorphism in codimension one. It is enough to show that $K_{X^+}+B^+$ is numerically positive$/V$. Let $H^+$ be an ample divisor on $X^+$ and $H$ its birational transform on $X$. There is a positive number $c$ such that $K_X+B\equiv cH/V$ hence $K_{X^+}+B^+\equiv cH^+/V$ which implies that $K_{X^+}+B^+$ is numerically positive$/V$. So we have constructed the generalized flip and this contradicts our assumptions above. Now assume that $\zeta(X,B)=0$. If $K_X+B$ is pseudo-effective, then we can simply apply [\ref{HX}, Theorem 5.6] to get a contradiction. Unfortunately, $K_X+B$ may not be pseudo-effective (note that even if we originally start with a pseudo-effective log divisor, we may end up with a non-pseudo-effective $K_X+B$ since we decreased some coefficients). However, this is not a problem because the proof of [\ref{HX}, Theorem 5.6] still works. Since there is a nef and big $\mathbb Q$-divisor $L$ with $L\cdot R=0$, there is a prime divisor $S$ with $S\cdot R<0$: indeed, writing $L\equiv A+D$ with $A$ ample and $D\ge 0$, we get $D\cdot R=-A\cdot R<0$, so some component $S$ of $D$ satisfies $S\cdot R<0$. There is a number $a>0$ such that $K_X+B\equiv aS/V$. Now take a log resolution $g\colon W\to X$ and define $B_W$ and $\Delta_W$ as above (if $S$ is not a component of $B$, simply let $b=0$). Run an LMMP$/V$ on $K_W+\Delta_W$. The extremal rays in the process are all pl-extremal rays hence they can be contracted by projective morphisms. Moreover, if we encounter a flipping contraction, then its flip exists by [\ref{HX}, Theorem 4.12] because all the coefficients of $\Delta_W$ are standard. The LMMP terminates on some model $Y$ by special termination. Next, run the LMMP$/V$ on $K_Y+B_Y$ with scaling of $(1-b)S_Y$. 
Again, the extremal rays in the process are all pl-extremal rays hence they can be contracted by projective morphisms. Moreover, if we encounter a flipping contraction, then its flip exists by [\ref{HX}, Theorem 4.12] because all the coefficients of $B_Y$ are standard. The LMMP terminates on some model $X^+$ by the special termination. The rest of the argument goes as before.\\ \end{proof} \subsection{Proof of \ref{t-flip-1} in the projective case} \begin{proof}(of Theorem \ref{t-flip-1} in the projective case) Assume that $X$ is projective. Then by Theorem \ref{t-flip-2}, the generalized flip of the extremal ray of $X\to Z$ exists. But since $X\to Z$ is a projective contraction, the generalized flip is a usual flip. If $X$ is only quasi-projective, we postpone the proof to Section \ref{s-mmodels}. Until then we need flips only in the projective case.\\ \end{proof} \section{Crepant models}\label{s-crepant-models} \subsection{Divisorial extremal rays} The next lemma is essentially {[\ref{HX}, Theorem 5.6(2)]}. \begin{lem}\label{l-div-ray} Let $(X,B)$ be a projective $\mathbb Q$-factorial dlt pair of dimension $3$ over $k$ of char $p>5$. Let $R$ be a $K_X+B$-negative divisorial extremal ray. Then $R$ can be contracted by a projective morphism $X\to Z$ where $Z$ is $\mathbb Q$-factorial. \end{lem} \begin{proof} We may assume that $(X,B)$ is klt. Since $R$ is a divisorial extremal ray, by definition, there is a nef and big $\mathbb Q$-divisor $L$ such that $R=L^\perp$ and $\dim \mathbb{E}(L)=2$. Moreover, $R$ can be contracted by a map $X\to V$ to an algebraic space. There is a prime divisor $S$ with $S\cdot R<0$. In particular, $\mathbb{E}(L)\subseteq S$ and $S$ is the only prime divisor contracted by $X\to V$. There is a number $a>0$ such that $K_X+B\equiv aS/V$. Let $g\colon W\to X$ be a log resolution and define $\Delta_W$ as in the proof of Theorem \ref{t-flip-2}. Run an LMMP$/V$ on $K_W+\Delta_W$. 
As in \ref{t-flip-2}, the extremal rays in the process are pl-extremal rays hence they are contracted projectively and the LMMP terminates with a model $Z$. We are done if we show that $Z\to V$ is an isomorphism (the $\mathbb Q$-factoriality claim follows from \ref{ss-pl-ext-rays}). Assume this is not the case. Recall that $$ K_W+\Delta_W\equiv (a+1-b)S^\sim+F/V $$ and now $(a+1-b)S^\sim+F$ is exceptional$/V$. In particular, $(a+1-b)S_Z+F_Z$ is effective, exceptional and nef$/V$. Let $H_Z$ be a general ample divisor on $Z$ and $H$ its birational transform on $X$. There is a number $t\ge 0$ such that $H+tS\equiv 0/V$. Therefore there is an effective and exceptional$/V$ divisor $P_Z$ such that $H_Z+P_Z\equiv 0/V$. Note that $\Supp P_Z$ contains all the exceptional divisors of $Z\to V$ hence $\Supp P_Z=\Supp F_Z$. Moreover, $P_Z\neq 0$ otherwise $H_Z\equiv 0/V$ hence $Z\to V$ is an isomorphism which is not the case by assumption. This also shows that $F_Z\neq 0$. Let $s$ be the smallest number such that $$ Q_Z:=(a+1-b)S_Z+F_Z-sP_Z\le 0 $$ Then $Q_Z$ is numerically positive over $V$ and there is some prime exceptional$/V$ divisor $D$ which is not a component of $Q_Z$. This is not possible since $Q_Z$ cannot be numerically positive on the general curves of $D$ contracted$/V$.\\ \end{proof} \subsection{Projectivization and dlt models} \begin{lem}\label{l-reduced-Cartier} Let $X$ be a normal projective variety over $k$ and $D\neq X$ a closed subset. Then there is a reduced effective Cartier divisor $H$ whose support contains $D$. \end{lem} \begin{proof} We may assume that each irreducible component of $D$ is a prime divisor hence we can think of $D$ as a reduced Weil divisor. Let $A$ be a sufficiently ample divisor. Let $U$ be the smooth locus of $X$. Since $(A-D)|_U$ is sufficiently ample, we can choose a reduced effective divisor $H'$ with no common components with $D$ such that $H'|_U\sim (A-D)|_U$. This extends to $X$ and gives $H'\sim A-D$. 
Now $H:=H'+D\sim A$ is Cartier and satisfies the requirements.\\ \end{proof} The next few results are standard consequences of special termination (cf. [\ref{B-mmodel}, Lemma 3.3][\ref{HX}, Theorem 6.1]). \begin{lem}\label{l-dlt-model-proj} Let $(X,B)$ be an lc pair of dimension $3$ over $k$ of char $p>5$, and let $\overline{X}$ be a projectivization of $X$. Then there is a projective $\mathbb Q$-factorial dlt pair $(\overline{Y},B_{\overline{Y}})$ with a birational morphism $\overline{Y}\to \overline{X}$ satisfying the following: $\bullet$ $K_{\overline{Y}}+B_{\overline{Y}}$ is nef$/\overline{X}$, $\bullet$ let $Y$ be the inverse image of $X$ and $B_Y=B_{\overline{Y}}|_Y$; then $(Y,B_Y)$ is a $\mathbb Q$-factorial dlt model of $(X,B)$. \end{lem} \begin{proof} We may assume that $\overline{X}$ is normal. By Lemma \ref{l-reduced-Cartier}, there is a reduced effective Cartier divisor $H$ containing the complement of $X$ in $\overline{X}$. We may assume that $H$ has no common components with $B$. Let $f\colon \overline{W}\to \overline{X}$ be a log resolution. Now let $B_{\overline{W}}$ be the sum of the reduced exceptional divisor of $f$ and the birational transform of $B$, and let $\Delta_{\overline{W}}$ be the sum of $B_{\overline{W}}$ and the birational transform of $H$. Run the LMMP$/\overline{X}$ on $K_{\overline{W}}+\Delta_{\overline{W}}$ inductively as follows. Assume that we have arrived at a model $\overline{Y}$. Let $R$ be a $K_{\overline{Y}}+\Delta_{\overline{Y}}$-negative extremal ray$/\overline{X}$. Let $\overline{Y}\to \overline{Z}$ be the contraction of $R$ to an algebraic space, and let $L$ be a nef and big $\mathbb Q$-divisor with $L^\perp=R$. Any curve contracted by $\overline{Y}\to \overline{Z}$ is also contracted over $\overline{X}$. If $\dim \mathbb{E}(L)=2$, then $R$ is a divisorial extremal ray hence $\overline{Y}\to \overline{Z}$ is a projective contraction by Lemma \ref{l-div-ray}. In this case, we continue the program with $\overline{Z}$. 
Now assume that $\dim \mathbb{E}(L)=1$. Let $C$ be a connected component of $\mathbb{E}(L)$ and $P$ its image in $\overline{X}$ which is just a point. If $P\in \Supp H$, then $C$ is contained in some component of the pullback of $H$ hence it is contained in some component of $\rddown{\Delta_{\overline{Y}}}$. In this case, $\overline{Y}\to \overline{Z}$ is again a projective contraction by \ref{ss-good-exc-locus}. Now assume that $P$ does not belong to the support of $H$. Since $(X,B)$ is lc, over $X\setminus H$ the divisor $$ K_{\overline{W}}+\Delta_{\overline{W}}-f^*(K_X+B) $$ is effective and exceptional$/\overline{X}$ hence some component of $\Delta_{\overline{Y}}$ intersects $R$ negatively which implies again that the contraction $\overline{Y}\to \overline{Z}$ is projective. Therefore in any case $R$ can be contracted by a projective morphism and we can continue the LMMP as usual. The required flips exist by the results of Section \ref{s-flips}. By special termination (\ref{p-st}), the LMMP terminates say on ${{\overline{Y}}}$. Next, we run the LMMP$/\overline{X}$ on $K_{\overline{Y}}+B_{\overline{Y}}$ with scaling of $\Delta_{\overline{Y}}-B_{\overline{Y}}$ as in \ref{ss-g-LMMP-scaling}. Note that $\Delta_{\overline{Y}}-B_{\overline{Y}}$ is nothing but the birational transform of $H$. Since the pullback of $H$ is numerically trivial over $\overline{X}$, each extremal ray in the process intersects some exceptional divisor negatively hence such extremal rays can be contracted by projective morphisms. Moreover, the required flips exist and by special termination the LMMP terminates on a model which we may again denote by ${\overline{Y}}$. Now let $Y$ be the inverse image of $X$ under $g\colon \overline{Y}\to \overline{X}$ and let $B_Y$ be the restriction of $B_{\overline{Y}}$ to $Y$. 
Then $(Y,B_Y)$ is a $\mathbb Q$-factorial dlt model of $(X,B)$ because $K_Y+B_Y-g^*(K_X+B)$ is effective and exceptional hence zero as it is nef$/X$.\\ \end{proof} \begin{proof}(of Theorem \ref{cor-dlt-model}) This is already proved in Lemma \ref{l-dlt-model-proj}.\\ \end{proof} \subsection{Extraction of divisors and terminal models} \begin{lem}\label{l-extraction} Let $(X,B)$ be an lc pair of dimension $3$ over $k$ of char $p>5$ and let $\{D_i\}_{i\in I}$ be a finite set of exceptional$/X$ prime divisors (on birational models of $X$) such that $a(D_i,X,B)\le 1$. Then there is a $\mathbb Q$-factorial dlt pair $(Y,B_Y)$ with a projective birational morphism $Y\to X$ such that\\ $\bullet$ $K_Y+B_Y$ is the crepant pullback of $K_X+B$,\\ $\bullet$ every exceptional/$X$ prime divisor $E$ of $Y$ is one of the $D_i$ or $a(E,X,B)=0$,\\ $\bullet$ the set of exceptional/$X$ prime divisors of $Y$ includes $\{D_i\}_{i\in I}$. \end{lem} \begin{proof} By Lemma \ref{l-dlt-model-proj}, we can assume that $(X,B)$ is projective $\mathbb Q$-factorial dlt. Let $f\colon W\to X$ be a log resolution and let $\{E_j\}_{j\in J}$ be the set of prime exceptional divisors of $f$. We can assume that for some $J'\subseteq J$, $\{E_j\}_{j\in J'}=\{D_i\}_{i\in I}$. Now define $$ K_W+B_W:=f^*(K_{X}+B)+\sum_{j\notin J'} a(E_j,X,B)E_j $$ which ensures that if $j\notin J'$, then $E_j$ is a component of $\rddown{B_W}$. Run an LMMP$/X$ on $K_W+B_W$, which is also an LMMP on $\sum_{j\notin J'} a(E_j,X,B)E_j$. So each extremal ray in the process intersects some component of $\rddown{B_W}$ negatively hence such rays can be contracted by projective morphisms (\ref{ss-pl-ext-rays}), the required flips exist (Section \ref{s-flips}), and the LMMP terminates by special termination (\ref{p-st}), say on a model $Y$. 
Now $(Y,B_Y)$ satisfies all the requirements.\\ \end{proof} \begin{proof}(of Corollary \ref{cor-terminal-model}) Apply Lemma \ref{l-extraction} by taking $\{D_i\}_{i\in I}$ to be the set of all prime divisors with log discrepancy $a(D_i,X,B)\le 1$.\\ \end{proof} \section{Existence of log minimal models}\label{s-mmodels} \subsection{Weak Zariski decompositions}\label{ss-WZD} Let $D$ be an $\mathbb R$-Cartier divisor on a normal variety $X$ and $X\to Z$ a projective contraction over $k$. A \emph{weak Zariski decomposition$/Z$} for $D$ consists of a projective birational morphism $f\colon W\to X$ from a normal variety, and a numerical equivalence $f^*D\equiv P+M/Z$ such that \begin{enumerate} \item $P$ and $M$ are $\mathbb R$-Cartier divisors, \item $P$ is nef$/Z$, and $M\ge 0$. \end{enumerate} We then define ${\theta}(X,B,M)$ to be the number of those components of $f_*M$ which are not components of $\rddown{B}$. \subsection{From weak Zariski decompositions to minimal models} We use the methods of [\ref{B-WZD}], which are somewhat similar to those of [\ref{BCHM}, \S 5], to prove the following result. \begin{prop}\label{p-WZD} Let $(X,B)$ be a projective lc pair of dimension $3$ over $k$ of char $p>5$, and $X\to Z$ a projective contraction. Assume that $K_X+B$ has a weak Zariski decomposition$/Z$. Then $(X,B)$ has a log minimal model over $Z$. \end{prop} \begin{proof} Let $\mathfrak{W}$ be the set of pairs $(X,B)$ together with projective contractions $X\to Z$ such that\\ \begin{description} \item[L] $(X,B)$ is projective, lc of dimension $3$ over $k$, \item[Z] $K_X+B$ has a weak Zariski decomposition$/Z$, and \item[N] $(X,B)$ has no log minimal model over $Z$.\\ \end{description} Clearly, it is enough to show that $\mathfrak{W}$ is empty. Assume otherwise and let $(X,B)$ and $X\to Z$ be in $\mathfrak{W}$. Let $f\colon W\to X$, $P$ and $M$ be the data given by a weak Zariski decomposition$/Z$ for $K_X+B$ as in \ref{ss-WZD}. Assume in addition that ${\theta}(X,B,M)$ is minimal. 
Perhaps after replacing $f$ we can assume that $f$ gives a log resolution of $(X, \Supp (B+f_*M))$. Let $B_W=B^\sim+E$ where $B^\sim$ is the birational transform of $B$ and $E$ is the reduced exceptional divisor of $f$. Then $$ K_W+B_W=f^*(K_X+B)+F\equiv P+M+F/Z $$ is a weak Zariski decomposition where $F\ge 0$ is exceptional$/X$. Moreover, $$ {\theta}(W,B_W,M+F)={\theta}(X,B,M) $$ and any log minimal model of $(W,B_W)$ is also a log minimal model of $(X,B)$ [\ref{B-WZD}, Remark 2.4]. So by replacing $(X,B)$ with $(W,B_W)$ and $M$ with $M+F$ we may assume that $W=X$, $(X,\Supp (B+M))$ is log smooth, and that $K_X+B\equiv P+M/Z$. First assume that ${\theta}(X,B,M)=0$, that is, $\Supp M\subseteq \rddown{B}$. Run the LMMP$/Z$ on $K_X+B$ using $P+M$ as in \ref{ss-LMMP-WZD}. Obviously, $M$ negatively intersects each extremal ray in the process, and since $\Supp M\subseteq \rddown{B}$, the rays are pl-extremal rays. Therefore those rays can be contracted by projective morphisms (\ref{ss-pl-ext-rays}), the required flips exist (Section \ref{s-flips}), and the LMMP terminates by special termination (\ref{ss-special-termination}). Thus we get a log minimal model of $(X,B)$ over $Z$ which contradicts the assumption that $(X,B)$ and $X\to Z$ belong to $\mathfrak{W}$. For the rest of the proof we do not use LMMP. From now on we assume that ${\theta}(X,B,M)>0$. Define $$ \alpha:=\min\{t>0~|~~\rddown{(B+tM)^{\le 1}}\neq \rddown{B}~\} $$ where for a divisor $D=\sum d_iD_i$ we define $D^{\le 1}=\sum d_i'D_i$ with $d_i'=\min\{d_i,1\}$. In particular, $(B+\alpha M)^{\le 1}=B+C$ for some $C\ge 0$ supported in $\Supp M$, and $\alpha M=C+A$ where $A\ge 0$ is supported in $\rddown{B}$ and $C$ has no common components with $\rddown{B}$. Note that ${\theta}(X,B,M)$ is equal to the number of components of $C$. The pair $(X,B+C)$ is lc and the expression $$ K_X+B+C\equiv P+M+C/Z $$ is a weak Zariski decomposition$/Z$. 
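To illustrate the definitions of $\alpha$, $C$, and $A$ with a toy example (hypothetical, not part of the argument): if $B=S_1+\frac{1}{2}S_2$ and $M=S_1+S_2$ for distinct prime divisors $S_1,S_2$, then $(B+tM)^{\le 1}=S_1+(\frac{1}{2}+t)S_2$ for $t\le \frac{1}{2}$, so $\alpha=\frac{1}{2}$ and $C=\frac{1}{2}S_2$; moreover, $\alpha M=C+A$ with $A=\frac{1}{2}S_1$ supported in $\rddown{B}=S_1$, and ${\theta}(X,B,M)=1$ is indeed the number of components of $C$.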
By construction $$ {\theta}(X,B+C,M+C)<{\theta}(X,B,M) $$ so $(X,B+C)$ has a log minimal model over $Z$ by minimality of ${\theta}(X,B,M)$ and the definition of $\mathfrak W$. Let $(Y,(B+C)_Y)$ be such a log minimal model. Let $g\colon V\to X$ and $h\colon V\to Y$ be a common resolution. By definition, $K_Y+(B+C)_Y$ is nef/$Z$. In particular, the expression $$ g^*(K_X+B+C)= P'+M' $$ is a weak Zariski decomposition$/Z$ of $K_X+B+C$ where $P'=h^*(K_Y+(B+C)_Y)$ and $M'\ge 0$ is exceptional$/Y$ (cf. [\ref{B-WZD}, Remark 2.4 (2)]). Moreover, $$ g^*(K_X+B+C)=P'+M'\equiv g^*P+g^*(M+C)/Z $$ Since $M'$ is exceptional$/Y$, $$ h_*(g^*(M+C)-M')\ge 0 $$ On the other hand, $$ g^*(M+C)-M'\equiv P'-g^*P/Z $$ is anti-nef$/Y$ hence by the negativity lemma, $g^*(M+C)-M'\ge 0$. Therefore $\Supp M'\subseteq \Supp g^*(M+C)=\Supp g^*M$. Now, \begin{equation*} \begin{split} (1+\alpha)g^*(K_X+B) & \equiv g^*(K_X+B)+\alpha g^*P+ \alpha g^*M\\ & \equiv g^*(K_X+B)+\alpha g^*P+g^* C+g^*A\\ & \equiv P'+\alpha g^*P+M'+g^*A/Z \end{split} \end{equation*} hence we get a weak Zariski decomposition$/Z$ as $$ g^*(K_X+B)\equiv P''+M''/Z $$ where $$ P''=\frac{1}{1+\alpha}(P'+ \alpha g^*P) \mbox{\hspace{0.5cm} and \hspace{0.5cm}} M''=\frac{1}{1+\alpha}(M'+g^*A) $$ and $\Supp M''\subseteq \Supp g^* M$ hence $\Supp g_*M''\subseteq \Supp M$. Since ${\theta}(X,B,M)$ is minimal, $$ {\theta}(X,B,M)={\theta}(X,B,M'') $$ So every component of $C$ is also a component of $g_*M''$ which in turn implies that every component of $C$ is also a component of $g_*M'$. But $M'$ is exceptional$/Y$ hence so is $C$ which means that $(B+C)_Y=B^\sim+C^\sim+E=B^\sim+E=B_Y$ where $\sim$ stands for birational transform and $E$ is the reduced exceptional divisor of $Y\dashrightarrow X$. Thus we have $P'=h^*(K_Y+B_Y)$. Although $K_Y+B_Y$ is nef$/Z$, $(Y,B_Y)$ is not necessarily a log minimal model of $(X,B)$ over $Z$ because condition (4) of the definition of log minimal models may not be satisfied (see \ref{ss-mmodels}). 
Let $G$ be the largest $\mathbb R$-divisor such that $G\le g^*C$ and $G\le {M}'$. By letting $\tilde{C}=g^*C-G$ and $\tilde{M}'= M'-G$ we get the expression $$ g^*(K_X+B)+\tilde{C}= P'+\tilde{M}' $$ where $\tilde{C}$ and $\tilde{M}'$ are effective with no common components. Assume that $\tilde{C}$ is exceptional$/X$. Then $g^*(K_X+B)-P'=\tilde{M}'-\tilde{C}$ is anti-nef$/X$ so by the negativity lemma $\tilde{M}'-\tilde{C}\ge 0$ which implies that $\tilde{C}=0$ since $\tilde{C}$ and $\tilde{M}'$ have no common components. Thus $$ g^*(K_X+B)-h^*(K_Y+B_Y)=\sum_D \left(a(D,Y,B_Y)-a(D,X,B)\right)D=\tilde{M}' $$ where $D$ runs over the prime divisors on $V$. If $\Supp g_*\tilde{M}'=\Supp g_*M'$, then $\Supp \tilde{M}'$ contains the birational transforms of all the prime exceptional$/Y$ divisors on $X$ hence $(Y,B_Y)$ is a log minimal model of $(X,B)$ over $Z$, a contradiction. Thus $$ \Supp (g_*M'-g_*G)=\Supp g_*\tilde{M}'\subsetneq \Supp g_* M'\subseteq \Supp M $$ so some component of $C$ is not a component of $g_*\tilde{M}'$ because $\Supp g_*G\subseteq \Supp C$. Therefore $$ {\theta}(X,B,M)>{\theta}(X,B,\tilde{M}') $$ which gives a contradiction again by minimality of ${\theta}(X,B,M)$ and the assumption that $(X,B)$ has no log minimal model over $Z$. So we may assume that $\tilde{C}$ is not exceptional$/X$. Let $\beta>0$ be the smallest number such that $\tilde{A}:=\beta g^* M-\tilde{C}$ satisfies $g_*\tilde{A}\ge 0$. Then there is a component of $g_*\tilde{C}$ which is not a component of $g_*\tilde{A}$. Now \begin{equation*} \begin{split} (1+\beta)g^*(K_X+B) & \equiv g^*(K_X+B)+\beta g^*M+\beta g^*P\\ & \equiv g^*(K_X+B)+\tilde{C}+\tilde{A}+\beta g^*P\\ & \equiv P'+\beta g^*P+ \tilde{M}'+\tilde{A}/Z \end{split} \end{equation*} where $\tilde{M}'+\tilde{A}\ge 0$ by the negativity lemma. 
Thus we get a weak Zariski decomposition$/Z$ as $g^*(K_X+B)\equiv P'''+M'''/Z$ where $$ P'''=\frac{1}{1+\beta}(P'+\beta g^*P) \mbox{\hspace{0.5cm} and \hspace{0.5cm}} M'''=\frac{1}{1+\beta}(\tilde{M}'+\tilde{A}) $$ and $\Supp g_*M'''\subseteq \Supp M$. Moreover, by construction, there is a component $D$ of $g_*\tilde{C}$ which is not a component of $g_*\tilde{A}$. Since $g_*\tilde{C}\le C$, $D$ is a component of $C$ hence of $M$, and since $\tilde{C}$ and $\tilde{M}'$ have no common components, $D$ is not a component of $g_*\tilde{M}'$. Therefore $D$ is not a component of $g_*M'''=\frac{1}{1+\beta}(g_*\tilde{M}'+g_*\tilde{A})$ which implies that $$ {\theta}(X,B,M)>{\theta}(X,B,M''') $$ giving a contradiction again.\\ \end{proof} \subsection{Proofs of \ref{t-mmodel} and \ref{t-flip-1}} \begin{proof}(of Theorem \ref{t-mmodel}) By applying Lemma \ref{l-dlt-model-proj}, we can reduce the problem to the case when $X,Z$ are projective. We can find a log resolution $f\colon W\to X$ and a $\mathbb Q$-boundary $B_W$ such that $$ K_W+B_W=f^*(K_X+B)+E $$ where $E\ge 0$ and its support is equal to the union of the exceptional divisors of $f$, and $(W,B_W)$ has terminal singularities. It is enough to construct a log minimal model for $(W,B_W)$ over $Z$. So by replacing $(X,B)$ with $(W,B_W)$ we can assume $(X,B)$ has terminal singularities and that $X$ is $\mathbb Q$-factorial. Let $$ \mathcal{E}=\{B' \mid K_X+B' ~~\mbox{is pseudo-effective$/Z$ and $0\le B'\le B$}\} $$ which is a compact subset of the $\mathbb R$-vector space $V$ generated by the components of $B$. Let $B'$ be an element in $\mathcal{E}$ which has minimal distance from $0$ with respect to the standard metric on $V$. So either $B'=0$, or $K_X+B''$ is not pseudo-effective$/Z$ for any $0\le B''\lneq B'$. Run the generalized LMMP$/Z$ on $K_X+B'$ as follows [\ref{HX}, proof of Theorem 5.6]: let $R$ be a $K_{X}+B'$-negative extremal ray$/Z$. 
By \ref{ss-ext-rays-II}, $R$ is either a divisorial extremal ray or a flipping extremal ray (see the beginning of Section \ref{s-flips} for definitions), and $R$ can be contracted to an algebraic space. If $R$ is a divisorial extremal ray, then it can actually be contracted by a projective morphism, by Lemma \ref{l-div-ray}, and we continue the process. But if $R$ is a flipping extremal ray, then we use the generalized flip, which exists by Theorem \ref{t-flip-2}, and then continue the process. No component of $B'$ is contracted by the LMMP: otherwise let $X_i\dashrightarrow X_{i+1}$ be the sequence of log flips and divisorial contractions of this LMMP where $X=X_1$. Pick $j$ so that $\phi_j\colon X_j\to X_{j+1}$ is a divisorial contraction which contracts a component $D_j$ of $B_j'$, the birational transform of $B'$. Now there is $a>0$ such that $$ K_{X_j}+B_j'=\phi_j^*(K_{X_{j+1}}+B_{j+1}')+aD_j $$ Since $K_{X_{j+1}}+B_{j+1}'$ is pseudo-effective$/Z$, $K_{X_{j}}+B_{j}'-aD_j$ is pseudo-effective$/Z$ which implies that $K_X+B'-bD$ is pseudo-effective$/Z$ for some $b>0$ where $D$ is the birational transform of $D_j$, a contradiction. Therefore every $(X_j,B'_j)$ has terminal singularities. The LMMP terminates for reasons similar to the characteristic $0$ case [\ref{Shokurov-nv}, Corollary 2.17] and [\ref{Kollar-Mori}, Theorem 6.17] (see also [\ref{HX}, proof of Theorem 1.2]). So we get a log minimal model of $(X,B')$ over $Z$, say $(Y,B'_{Y})$. Let $g\colon V\to X$ and $h\colon V\to Y$ be a common resolution. By letting $P=h^*(K_{Y}+B'_{Y})$ and $$ M=g^*(K_X+B)-h^*(K_{Y}+B'_{Y}) $$ we get a weak Zariski decomposition$/Z$ as $ g^*(K_X+B)=P+M/Z. $ Note that $M\ge 0$ because $g^*(K_X+B')-h^*(K_{Y}+B'_{Y})\ge 0$. Therefore $(X,B)$ has a log minimal model over $Z$ by Proposition \ref{p-WZD}.\\ \end{proof} \begin{proof}(of Theorem \ref{t-flip-1} in the general case) Recall that we proved the theorem when $X$ is projective, in Section \ref{s-flips}. 
By perturbing the coefficients, we can assume that $(X,B)$ is klt. By Theorem \ref{t-mmodel}, $(X,B)$ has a log minimal model over $Z$, say $(X^+,B^+)$. Since $(X,B)$ is klt, $X\dashrightarrow X^+$ is an isomorphism in codimension one. Let $H^+$ be an ample$/Z$ divisor on $X^+$ and let $H$ be its birational transform on $X$. Since $X\to Z$ is a $K_X+B$-negative extremal contraction, $K_X+B\equiv hH/Z$ for some $h>0$. Thus $K_{X^+}+B^+\equiv hH^+/Z$ which means that $K_{X^+}+B^+$ is ample$/Z$ so we are done.\\ \end{proof} \section{The connectedness principle with applications to semi-ampleness}\label{s-connectedness} \subsection{Connectedness} In this subsection, we prove the connectedness principle in dimension $\le 3$. The proof is based on LMMP rather than vanishing theorems. The following lemma is essentially [\ref{Xu}, Proposition 2.3]. We recall its proof for convenience. \begin{lem}\label{l-ample-dlt} Let $(X,B)$ be a projective pair of dimension $\le 3$ over $k$. Assume that $(X,B)$ is klt (resp. dlt) and that $A$ is a nef and big (resp. ample) $\mathbb R$-divisor. Then there is $0\le A'\sim_\mathbb R A$ such that $(X,B+A')$ is klt (resp. dlt). \end{lem} \begin{proof} First we deal with the dlt case. Let $f\colon W\to X$ be a log resolution of $(X,B)$ which extracts only prime divisors with positive log discrepancy with respect to $(X,B)$. This exists by the definition of dlt pairs. The resolution is obtained by a sequence of blow ups with smooth centers, hence there is an $\mathbb R$-divisor $E'$ exceptional$/X$ with sufficiently small coefficients such that $-E'$ is ample$/X$ and $\Supp E'$ is the union of all the prime exceptional$/X$ divisors on $W$. Note that by the negativity lemma (\ref{ss-negativity}), $E'\ge 0$. Moreover, $f^*A-E'$ is ample$/X$. Let $B_W$ be given by $$ K_W+B_W= f^*(K_X+B) $$ By assumption, $B_W$ has coefficients at most $1$ and the coefficient of any prime exceptional$/X$ divisor is less than $1$. 
Let $A_W'\sim_\mathbb R f^*A-E'$ be general and let $A':=f_*A_W'$. Then $A'\sim_\mathbb R A$ and we can write $$ K_W+B_W+A_W'+E'= f^*(K_X+B+A') $$ where we can make sure that the coefficients of $B_W+A_W'+E'$ are at most $1$ and that the coefficient of any prime exceptional$/X$ divisor is less than $1$ because the coefficients of $E'$ are sufficiently small. This implies that $(X,B+A')$ is dlt. Now we deal with the klt case. Since $A$ is nef and big, by definition, $A\sim_\mathbb R G+D$ with $G$ ample and $D\ge 0$. So by replacing $A$ with $(1-\epsilon)A+\epsilon G$ and replacing $B$ with $B+\epsilon D$ we can assume that $A$ is ample. Now apply the dlt case.\\ \end{proof} \begin{proof}(of Theorem \ref{t-connectedness-d-3}) Assume that the statement does not hold for some $z$. By Lemma \ref{l-dlt-model-proj}, there is a $\mathbb Q$-factorial dlt pair $(Y,B_Y)$ and a birational morphism $g\colon Y\to X$ with $K_Y+B_Y$ nef$/X$, every exceptional divisor of $g$ is a component of $\rddown{B_Y}$, and $g_*B_Y=B$. Moreover, $K_Y+B_Y+E_Y=g^*(K_X+B)$ for some $E_Y\ge 0$ with $\Supp E_Y\subseteq \rddown{B_Y}$. Also the non-klt locus of $(Y,B_Y)$, that is $\rddown{B_Y}$, maps surjectively onto the non-klt locus of $(X,B)$, hence $\rddown{B_Y}$ is not connected in some neighborhood of $Y_z$. Now by assumptions, $K_Y+B_Y+E_Y+L_Y\sim_\mathbb R 0/Z$ for some globally nef and big $\mathbb R$-divisor $L_Y$. Since $X$ is $\mathbb Q$-factorial, we can write $L_Y\sim_\mathbb R A_Y+D_Y$ where $A_Y$ is ample and $D_Y\ge 0$ is exceptional$/X$. In particular, $\Supp D_Y\subset \rddown{B_Y}$. By picking a general $$ G_Y\sim_\mathbb R \epsilon A_Y+(1-\epsilon)L_Y-\delta \rddown{B_Y} $$ for some small $\epsilon,\delta>0$ and applying Lemma \ref{l-ample-dlt} we can assume that $(Y,B_Y+G_Y)$ is dlt. By construction, $$ K_Y+B_Y+G_Y\sim_\mathbb R P_Y:=-\epsilon D_Y-E_Y-\delta\rddown{B_Y}/Z $$ and $\Supp P_Y=\rddown{B_Y}$. 
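For the reader's convenience, the computation behind the last display only uses $L_Y\sim_\mathbb R A_Y+D_Y$ and $K_Y+B_Y+E_Y+L_Y\sim_\mathbb R 0/Z$: \begin{equation*} \begin{split} K_Y+B_Y+G_Y & \sim_\mathbb R K_Y+B_Y+\epsilon A_Y+(1-\epsilon)L_Y-\delta \rddown{B_Y}\\ & \sim_\mathbb R K_Y+B_Y+L_Y-\epsilon D_Y-\delta \rddown{B_Y}\\ & \sim_\mathbb R -\epsilon D_Y-E_Y-\delta \rddown{B_Y}/Z \end{split} \end{equation*}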
Run a generalized LMMP$/Z$ on $K_Y+B_Y+G_Y$ as in the proof of Theorem \ref{t-mmodel}. We show that this is actually a usual LMMP hence it terminates by special termination (\ref{p-st}). Assume that we have arrived at a model $Y'$ and let $R$ be a $K_{Y'}+B_{Y'}+G_{Y'}$-negative extremal ray$/Z$. Since $Y'\to Z$ is birational, $R$ is either a divisorial extremal ray or a flipping extremal ray. In the former case $R$ can be contracted by a projective morphism by Lemma \ref{l-div-ray}. So assume $R$ is a flipping extremal ray. Then the generalized flip $Y'\dashrightarrow Y''/V$ exists by Theorem \ref{t-flip-2} where $Y'\to V$ is the contraction of $R$ to the algebraic space $V$. Since $P_{Y'}\cdot R<0$, some component $S_{Y'}$ of $\rddown{B_{Y'}}$ intersects $R$ positively. Now there is a boundary $\Delta_{Y'}$ such that $(Y',\Delta_{Y'})$ is plt, $S_{Y'}=\rddown{\Delta_{Y'}}$, and $(K_{Y'}+\Delta_{Y'})\cdot R=0$. But then we can find $N_{Y''}\ge 0$ such that $(Y'',\Delta_{Y''}+N_{Y''})$ is plt and $(K_{Y''}+\Delta_{Y''}+N_{Y''})\cdot R<0$. Therefore by \ref{ss-pl-ext-rays} and \ref{ss-good-exc-locus}, $Y''\to V$ is a projective morphism which implies that $Y'\to V$ is also a projective morphism and that the flip is a usual flip. We claim that the connected components of $\rddown{B_Y}$ over $z$ remain disjoint over $z$ in the course of the LMMP: assume not and let $Y'$ be the first model in the process such that there are irreducible components $S_Y,T_Y$ of $\rddown{B_Y}$ belonging to disjoint connected components over $z$ such that $S_{Y'},T_{Y'}$ intersect over $z$. Let $\Delta_Y=B_Y-\tau (\rddown{B_Y}-S_Y-T_Y)$ for some small $\tau>0$. Then $(Y,\Delta_Y+G_Y)$ is plt in some neighborhood of $Y_z$ because $\rddown{\Delta_Y+G_Y}=S_Y+T_Y$ and $S_Y,T_Y$ are disjoint over $z$. Moreover, $Y\dashrightarrow Y'$ is a partial LMMP on $K_Y+\Delta_Y+G_Y$ hence $(Y',\Delta_{Y'}+G_{Y'})$ is also plt over $z$. 
But since $S_{Y'},T_{Y'}$ intersect over $z$, $(Y',\Delta_{Y'}+G_{Y'})$ cannot be plt over $z$, a contradiction. Next we claim that no connected component of $\rddown{B_Y}$ over $z$ can be contracted by the LMMP (although some of their irreducible components might be contracted). By construction, $-P_Y\ge 0$ and $\Supp -P_Y=\rddown{B_Y}$, and $-P_Y$ is positive on each extremal ray in the LMMP. Write $-P_Y=\sum -P_Y^i$ where $-P_Y^i$ are the connected components of $-P_Y$ over $z$. By the previous paragraph, $-P_Y^i$ and $-P_Y^j$ remain disjoint during the LMMP if $i\neq j$. Moreover, if we arrive at a model $Y'$ in the LMMP on which we contract an extremal ray $R$, then $-P_{Y'}^j\cdot R>0$ for some $j$ and $-P_{Y'}^i\cdot R=0$ for $i\neq j$. Therefore the contraction of $R$ cannot contract any of the $-P_{Y'}^i$. The LMMP ends up with a log minimal model $(Y',B_{Y'}+G_{Y'})$ over $Z$. Then $P_{Y'}$ is nef$/Z$. Assume that $Y_z'\nsubseteq \Supp P_{Y'}$ set-theoretically. Since $Y_z'$ intersects $\Supp P_{Y'}$, there is some curve $C\subset Y_z'$ which is not contained in $\Supp P_{Y'}$ but intersects it. Then as $-P_{Y'}\ge 0$ we have $-P_{Y'}\cdot C>0$ hence $P_{Y'}\cdot C<0$, a contradiction. Now since $Y_z'$ is connected, it is contained in exactly one connected component of $\rddown{B_{Y'}}$ over $z$. This is a contradiction because by assumptions at least two connected components of $\rddown{B_{Y'}}$ over $z$ intersect the fibre $Y_z'$.
\end{thm} \begin{proof} It is enough to prove the last claim. Assume that $N\cap X_z$ is not connected for some $z$. We use the notation and the arguments of the proof of Theorem \ref{t-connectedness-d-3}. Let $(Y,B_Y)$ be the pair constructed over $X$ and $Y\dashrightarrow Y'$ the LMMP$/Z$ on $K_Y+B_Y+G_Y\sim_\mathbb R P_Y$ and $h\colon Y'\to Z$ the corresponding map. The same arguments as in the proof of Theorem \ref{t-connectedness-d-3} show that the connected components of $P_Y$ over $z$ remain disjoint in the course of the LMMP and none of them will be contracted. By assumptions, $\rddown{B_Y}\cap Y_z$ is not connected. We claim that the same holds in the course of the LMMP. If not, then at some step of the LMMP we arrive at a model $W$ with a $K_W+B_W+G_W$-negative extremal birational contraction $\phi\colon W\to V$ such that $\rddown{B_W}\cap W_z$ is not connected but $\rddown{B_V}\cap V_z$ is connected. Let $C$ be the exceptional curve of $W\to V$. Now $\phi(\rddown{B_W})=\rddown{B_V}$: the inclusion $\supseteq$ is clear; the inclusion $\subseteq$ follows from the fact that if $C$ is a component of $\rddown{B_W}$, then at least one other irreducible component of $\rddown{B_W}$ intersects $C$ because $P_W\cdot C<0$. Therefore $\phi(\rddown{B_W}\cap W_z)=\rddown{B_V}\cap V_z$. Since $\rddown{B_V}\cap V_z$ is connected but $\rddown{B_W}\cap W_z$ is not connected, there exist two connected components of $\rddown{B_W}\cap W_z$ whose images under $\phi$ intersect. So there are closed points $w,w'$ belonging to different connected components of $\rddown{B_W}\cap W_z$ such that $\phi(w)=\phi(w')$. In particular, $w,w'\in C$. Note that $C$ is not a component of $\rddown{B_W}$, otherwise $C\subset \rddown{B_W}\cap W_z$ connects $w,w'$, contradicting the assumptions. Therefore $\rddown{B_W}\cap C$ is a finite set of closed points with more than one element.
Now perturbing the coefficients of $B_W$ we can find $\Gamma_W\le \rddown{B_W}$ such that $(W,\Gamma_W)$ is plt in a neighborhood of $C$, $(K_W+\Gamma_W)\cdot C<0$ and such that $\rddown{\Gamma_W}\cap C$ is a finite set of closed points with more than one element. Then in a formal neighborhood of $\phi(w)$, $\rddown{\Gamma_V}$ has at least two branches, which implies that $\rddown{\Gamma_V}$ is not normal, which in turn contradicts the plt property of $(V,\Gamma_V)$. Since $\rddown{B_{Y'}}\cap Y_z'$ is not connected, there is a component $D$ of $Y_z'$ which is not contained in $\Supp P_{Y'}=\rddown{B_{Y'}}$ but intersects it. Thus $P_{Y'}$ cannot be nef$/Z$ as $-P_{Y'}\ge 0$. Therefore the LMMP terminates with a Mori fibre space $Y'\to Z'/Z$. If $Z'$ is a point, then $\rddown{B_{Y'}}$ has at least two disjoint irreducible components which contradicts the fact that the Picard number $\rho(Y')=1$ in this case. So we can assume that $Z'$ is a curve. Assume that $Z$ is also a curve in which case $Z'=Z$. Let $F$ be the reduced variety associated to a general fibre of $Y'\to Z'$. Then by the adjunction formula we get $F\simeq \mathbb P^1$, $K_{Y'}\cdot F=-2$, and $(B_{Y'}+G_{Y'})\cdot F<2$. On the other hand, since $\rddown{B_{Y'}}\cap Y'_z$ has at least two points, $\rddown{B_{Y'}}\cap F$ also has at least two points hence $$ (B_{Y'}+G_{Y'})\cdot F\ge (\rddown{B_{Y'}}+G_{Y'})\cdot F>2 $$ which is a contradiction. Now assume that $Z$ is a point. Since $\rddown{B_{Y'}}\cap Y_z'$ is not connected, $\rddown{B_{Y'}}$ has at least two disjoint connected components, say $M_{Y'},N_{Y'}$. On the other hand, since $P_{Y'}\cdot F<0$, we may assume that $M_{Y'}$ intersects $F$ (hence $M_{Y'}$ intersects every fibre of $Y'\to Z'$). If some component of $N_{Y'}$ is vertical$/Z'$, then $M_{Y'}$ and $N_{Y'}$ intersect, a contradiction. Thus each component of $N_{Y'}$ is horizontal$/Z'$ hence they intersect each fibre of $Y'\to Z'$.
But then we can get a contradiction as in the $Z'=Z$ case.\\ \end{proof} \subsection{Semi-ampleness} We use the connectedness principle on surfaces to prove some semi-ampleness results in dimension $2$ and $3$. These are not only interesting on their own but also useful for the proof of the finite generation (\ref{t-fg}). \begin{proof}(of Theorem \ref{t-sa-reduced-boundary}) Let $S\le \rddown{B}$ be a reduced divisor. Assume that $(K_X+B+A)|_S$ is not semi-ample. We will derive a contradiction. We can assume that if $S'\lneq S$ is any other reduced divisor, then $(K_X+B+A)|_{S'}$ is semi-ample. Note that $S$ cannot be irreducible by abundance for surfaces (cf. [\ref{Tanaka}]). Using the ample divisor $A$ and applying Lemma \ref{l-ample-dlt}, we can perturb the coefficients of $B$ so that we can assume $S=\rddown{B}$. Let $T$ be an irreducible component of $S$ and let $S'=S-T$. By assumptions, $(K_X+B+A)|_{T}$ and $(K_X+B+A)|_{S'}$ are both semi-ample. Let $g\colon T\to Z$ be the projective contraction associated to $(K_X+B+A)|_{T}$. By adjunction define $K_T+B_T:=(K_X+B)|_T$ and $A_T=A|_T$. Since $K_T+B_T+A_T\sim_\mathbb Q 0/Z$ and since $A_T$ is ample, $-(K_T+B_T)$ is ample$/Z$. Moreover, $S'\cap T=\rddown{B_T}$ as topological spaces. By the connectedness principle for surfaces (\ref{t-connectedness-d-2}), $\rddown{B_T}\to Z$ has connected fibres hence $S'\cap T\to Z$ also has connected fibres. Now apply Keel [\ref{Keel}, Corollary 2.9].\\ \end{proof} \begin{thm}\label{t-sa-reduced-boundary-2} Let $(X,B+A)$ be a projective $\mathbb Q$-factorial dlt pair of dimension $3$ over $k$ of char $p>5$. 
Assume that $\bullet$ $A,B\ge 0$ are $\mathbb Q$-divisors with $A$ ample, $\bullet$ $(Y,B_Y+A_Y)$ is a $\mathbb Q$-factorial weak lc model of $(X,B+A)$, $\bullet$ $Y\dashrightarrow X$ does not contract any divisor, $\bullet$ $\Supp A_{Y}$ does not contain any lc centre of $(Y,B_{Y}+A_{Y})$, $\bullet$ if $\Sigma$ is a connected component of $\mathbb{E}(K_{Y}+B_{Y}+A_{Y})$ and $\Sigma \nsubseteq \rddown{B_{Y}}$, then $(K_{Y}+B_{Y}+A_{Y})|_{\Sigma}$ is semi-ample.\\ Then $K_{Y}+B_{Y}+A_{Y}$ is semi-ample. \end{thm} \begin{proof} Note that if $K_X+B+A$ is not big, then $\mathbb{E}(K_{Y}+B_{Y}+A_{Y})=Y$ hence the statement is trivial. So we can assume that $K_X+B+A$ is big. Let $\phi$ denote the map $X\dashrightarrow Y$ and let $U$ be the largest open set over which $\phi$ is an isomorphism. Then since $A$ is ample and $X$ is $\mathbb Q$-factorial, $\Supp A_Y$ contains $Y\setminus \phi(U)$: indeed let $y\in Y\setminus \phi(U)$ be a closed point and let $W$ be the normalization of the graph of $\phi$, and $\alpha\colon W\to X$ and $\beta\colon W\to Y$ be the corresponding morphisms; first assume that $\dim \beta^{-1}\{y\}>0$; then $\alpha^*A$ intersects $\beta^{-1}\{y\}$ because $A$ is ample hence $\Supp A_Y$ contains $y$; now assume that $\dim \beta^{-1}\{y\}=0$; then $\beta$ is an isomorphism over $y$; on the other hand, $\alpha$ cannot be an isomorphism near $\beta^{-1}\{y\}$ otherwise $\phi$ would be an isomorphism near $\alpha(\beta^{-1}\{y\})$ hence $y\in \phi(U)$, a contradiction; thus as $X$ is $\mathbb Q$-factorial, $\alpha$ contracts some prime divisor $E$ containing $\beta^{-1}\{y\}$; but then $Y\dashrightarrow X$ contracts a divisor, a contradiction. Let $C\ge 0$ be any $\mathbb Q$-divisor such that $(X,B+A+C)$ is dlt. Then $(Y,B_{Y}+A_{Y}+\epsilon C_Y)$ is dlt for any sufficiently small $\epsilon>0$ because $(Y,B_{Y}+A_{Y})$ has no lc centre inside $Y\setminus \phi(U)\subset \Supp A_Y$. 
Now let $G_{Y}\ge 0$ be a general small ample $\mathbb Q$-divisor on $Y$ and $G$ its birational transform on $X$. Since $G$ is small, $A-G$ is ample. Let $C\sim_\mathbb Q A-G$ be a general $\mathbb Q$-divisor. Let $$ \Gamma_{Y}:=B_{Y}+(1-\epsilon)A_{Y}+\epsilon C_{Y}+\epsilon G_{Y} $$ Then $$ K_{Y}+\Gamma_{Y}\sim_\mathbb Q K_{Y}+B_{Y}+A_{Y} $$ since $C\sim_\mathbb Q A-G$, and $\rddown{B_{Y}}=\rddown{\Gamma_{Y}}$. Moreover, by the above remarks and by Lemma \ref{l-ample-dlt} we can assume that $(Y,\Gamma_{Y})$ is dlt. Now by Theorem \ref{t-sa-reduced-boundary}, $(K_{Y}+\Gamma_{Y})|_{\rddown{\Gamma_Y}}$ is semi-ample hence $(K_{Y}+B_{Y}+A_{Y})|_{\rddown{B_Y}}$ is semi-ample. Therefore $(K_{Y}+B_{Y}+A_{Y})|_{\Sigma}$ is semi-ample for any connected component $\Sigma$ of $\mathbb{E}(K_{Y}+B_{Y}+A_{Y})$ (using the last assumption for those $\Sigma\nsubseteq \rddown{B_{Y}}$) hence we can apply Theorem \ref{t-Keel-1}.\\ \end{proof} \section{Finite generation and base point freeness}\label{s-fg} \subsection{Finite generation} In this subsection we prove Theorem \ref{t-fg}. \begin{lem}\label{l-fg-decrease} Let $(X,B)$ be a pair and $M$ a $\mathbb Q$-divisor satisfying the following properties: $(1)$ $(X,\Supp(B+M))$ is projective log smooth of dimension $3$ over $k$ of char $p>5$, $(2)$ $K_X+B$ is a big $\mathbb Q$-divisor, $(3)$ $K_X+B\sim_\mathbb Q M\ge 0$ and $\rddown{B}\subset \Supp M\subseteq \Supp B$, $(4)$ $M=A+D$ where $A$ is an ample $\mathbb Q$-divisor and $D\ge 0$, $(5)$ $\alpha M=N+C$ for some rational number $\alpha>0$ such that $N,C\ge 0$ are $\mathbb Q$-divisors, $\Supp N=\rddown{B}$, and $(X,B+C)$ is dlt, $(6)$ there is an ample $\mathbb Q$-divisor $ A'\ge 0$ such that $A'\le A$ and $A'\le C$.\\ If $(X,B+tC)$ has an lc model for some real number $t\in (0,1]$, then $(X,B+(t-\epsilon)C)$ also has an lc model for any sufficiently small $\epsilon>0$. \end{lem} \begin{proof} We can assume that $C\neq 0$.
If we let $\Delta=B-\delta(N+C)$ for some small rational number $\delta>0$, then $(X,\Delta)$ is klt and $K_X+B$ is a positive multiple of $K_X+\Delta$ up to $\mathbb Q$-linear equivalence. Similarly, for any $s\in (0,1]$, there is $s'\in (0,s)$ such that $(X,\Delta+s'C)$ is klt and $K_X+B+sC$ is a positive multiple of $K_X+\Delta+s'C$ up to $\mathbb Q$-linear equivalence. So if $(Y,\Delta_Y+s'C_Y)$ is a log minimal model of $(X,\Delta+s'C)$, which exists by Theorem \ref{t-mmodel}, then $(Y,B_Y+sC_Y)$ is a $\mathbb Q$-factorial weak lc model of $(X,B+sC)$ such that $Y\dashrightarrow X$ does not contract divisors and $X\dashrightarrow Y$ is $K_X+B+sC$-negative (see \ref{ss-divisors} for this notion). We will make use of this observation below. Let $T$ be the lc model of $(X,B+tC)$ and let $(Y,B_Y+tC_Y)$ be a $\mathbb Q$-factorial weak lc model of $(X,B+tC)$ such that $X\dashrightarrow Y$ is $K_X+B+tC$-negative and its inverse does not contract divisors. Then the induced map $Y\dashrightarrow T$ is a morphism and $K_T+B_T+tC_T$ pulls back to $K_Y+B_Y+tC_Y$. First assume that $t$ is irrational. Then $C_Y\equiv 0/T$: if $\Gamma$ is a curve contracted over $T$, then $(K_Y+B_Y)\cdot \Gamma+tC_Y\cdot \Gamma=0$ where both intersection numbers are rational, so the irrationality of $t$ forces $C_Y\cdot \Gamma=0$. Moreover, $C_T$ is $\mathbb Q$-Cartier because the set of those $s\in\mathbb R$ such that $K_T+B_T+sC_T$ is $\mathbb R$-Cartier forms a rational affine subspace of $\mathbb R$ (this can be proved using simple linear algebra, similar to \ref{ss-ext-rays-scaling}). Since $t$ belongs to this affine subspace and $t$ is not rational, the affine subspace is equal to $\mathbb R$ hence $K_T+B_T+sC_T$ is $\mathbb R$-Cartier for every $s$, which implies that $C_T$ is $\mathbb Q$-Cartier. Thus $C_Y\sim_\mathbb Q 0/T$ hence $K_T+B_T+(t-\epsilon) C_T$ pulls back to $K_Y+B_Y+(t-\epsilon)C_Y$ and the former is ample for every sufficiently small $\epsilon>0$. This means that $T$ is also the lc model of $(X,B+(t-\epsilon)C)$. From now on we assume that $t$ is rational.
Replace $Y$ with a $\mathbb Q$-factorial weak lc model of $(Y,B_Y+(t-\epsilon)C_Y)$ over $T$ so that $X\dashrightarrow Y$ is still $K_X+B+(t-\epsilon)C$-negative. Since $K_T+B_T+tC_T$ is ample, by choosing $\epsilon$ to be small enough, we can assume that $K_Y+B_Y+(t-\epsilon)C_Y$ is nef globally, by \ref{ss-ext-rays-II}. Then $(Y,B_Y+(t-\epsilon)C_Y)$ is a weak lc model of $(X,B+(t-\epsilon)C)$ hence it is enough to show that $K_Y+B_Y+(t-\epsilon)C_Y$ is semi-ample. Perhaps after replacing $\epsilon$ with a smaller number we can assume that $K_Y+B_Y+(t-\epsilon')C_Y$ is also nef globally for some $\epsilon'>\epsilon$ and that $t-\epsilon$ is rational. Let $Y\to V$ be the contraction to an algebraic space associated to $K_Y+B_Y+(t-\epsilon)C_Y$. Any curve contracted by $Y\to V$ is also contracted by $Y\to T$ because $K_Y+B_Y+tC_Y$ and $K_Y+B_Y+(t-\epsilon')C_Y$ are both nef and $\epsilon'>\epsilon$. Thus we get an induced map $V\to T$. Moreover, there is a small contraction $Y'\to V$ from a $\mathbb Q$-factorial normal projective variety $Y'$: recall that $(Y,\Lambda_Y:=\Delta_Y+t'C_Y)$ is klt where $\Delta$ and $t'$ are as in the first paragraph; now $Y'$ can be obtained by taking a log resolution $W\to Y$, defining $\Lambda_W$ to be the birational transform of $\Lambda_V$ plus the reduced exceptional divisor of $W\to V$, running an LMMP$/V$ on $K_W+\Lambda_W$, using special termination and the fact that $K_W+\Lambda_W\equiv E/V$ for some $E\ge 0$ whose support is equal to the reduced exceptional divisor of $W\to V$, and applying the negativity lemma (\ref{ss-negativity}). Since $K_Y+B_Y+(t-\epsilon)C_Y\equiv 0/V$, $K_{Y'}+B_{Y'}+(t-\epsilon)C_{Y'}$ is also nef and the former is semi-ample if and only if the latter is. So by replacing $Y$ with $Y'$, we can in addition assume that $Y\to V$ is a small contraction. Let $\Sigma$ be a connected component of the exceptional set of $Y\to V$. Since $Y\to V$ is a small morphism, $\Sigma$ is one-dimensional.
On the other hand, since $$ K_{Y}+B_Y+(t-\epsilon)C_Y\equiv 0/V $$ and $$ K_{Y}+B_Y+tC_Y\equiv 0/V $$ we get $C_Y\equiv 0/V$ hence $N_Y\equiv 0/V$. Therefore either $\Sigma\subset \Supp N_Y$ or $\Sigma\cap \Supp N_Y=\emptyset$. Moreover, if $\Sigma\cap \Supp N_Y=\emptyset$, then $(K_Y+B_Y+(t-\epsilon)C_Y)|_\Sigma$ is semi-ample because near $\Sigma$ the divisor $K_Y+B_Y+(t-\epsilon)C_Y$ is a multiple of $K_Y+B_Y+tC_Y$ and the latter is semi-ample. We can assume that $ A'$ in (6) has small coefficients. Let $B'=B+(t-\epsilon)C-A'$. Since $(Y,B_Y'+A_Y'+\epsilon C_Y)$ is lc, $\Supp C_Y$ (hence also $\Supp A'_Y$) does not contain any lc centre of $(Y,B_Y'+A_Y')$. Now applying Theorem \ref{t-sa-reduced-boundary-2} to $(X,B'+A')$ shows that $K_Y+B_Y+(t-\epsilon)C_Y$ is semi-ample (note that the exceptional locus of $Y\to V$ is equal to $\mathbb{E}(K_Y+B_Y'+A_Y')$). Therefore, $K_Y+B_Y+sC_Y$ is semi-ample for every $s\in[t-\epsilon,t]$.\\ \end{proof} \begin{prop}\label{p-fg-1-4} Let $(X,B)$ be a pair and $M$ a $\mathbb Q$-divisor satisfying properties $(1)$ to $(4)$ of Lemma \ref{l-fg-decrease}. Then the lc ring $R(K_X+B)$ is finitely generated. \end{prop} \begin{proof} \emph{Step 1.} We follow the proof of [\ref{B-mmodel}, Proposition 3.4], which is similar to [\ref{BCHM}, \S 5], but with some twists. Assume that $R(K_X+B)$ is not finitely generated. We will derive a contradiction. By replacing $A$ with $\frac{1}{m}S$ where $m$ is sufficiently divisible and $S$ is a general member of $|mA|$, and changing $M,B$ accordingly, we can assume that $(7)$ $S:=\Supp A$ is irreducible and $K_X+S+\Delta$ is ample for any boundary $\Delta$ supported on $\Supp(B)-S$.\\ Let ${\theta}(X,B,M)$ be the number of those components of $M$ which are not components of $\rddown{B}$ (such $\theta$ functions were defined in \ref{ss-WZD} in a more general setting). 
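As a toy illustration of this count: if, say, $B=S+\frac{1}{2}B_1$ and $M=S+B_1+B_2$ for distinct prime divisors $S,B_1,B_2$, then $\rddown{B}=S$, the components of $M$ not appearing in $\rddown{B}$ are $B_1$ and $B_2$, and hence ${\theta}(X,B,M)=2$.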
By (7), $S$ is not a component of $\rddown{B}$, hence ${\theta}(X,B,M)>0$: otherwise $K_X+B$ is ample and $R(K_X+B)$ is finitely generated, a contradiction. Define $$ \alpha:=\min\{t>0~|~\rddown{(B+tM)^{\le 1}}\neq \rddown{B}~\} $$ where for a divisor $R=\sum r_iR_i$ we define $R^{\le 1}=\sum r_i'R_i$ with $r_i'=\min\{r_i,1\}$. In particular, $(B+\alpha M)^{\le 1}=B+C$ for some $C\ge 0$ supported in $\Supp M$, and $\alpha M=C+N$ where $N\ge 0$ is supported in $\rddown{B}$ and $C$ has no common components with $\rddown{B}$. Property (3) ensures that $\Supp N=\rddown{B}$, and by property (7) we have $\alpha A\le C$. So $(X,B)$ and $M$ also satisfy properties (5) and (6) of \ref{l-fg-decrease} with $A'=\alpha' A$ for some $\alpha'>0$. \emph{Step 2.} Let $B':=B+C$ and let $M':=M+C$. Then the pair $(X,B')$ is log smooth and dlt, and $$ {\theta}(X,B',M')<{\theta}(X,B,M) $$ Assume that $R(K_X+B')$ is not finitely generated. By (7), $S$ is not a component of $\rddown{B'}$ and ${\theta}(X,B',M')>0$. Now replace $(X,B)$ with $(X,B')$, replace $D$ with $D':=D+C$, and replace $M$ with $M'$. By construction, all the properties (1) to (4) of \ref{l-fg-decrease} and property (7) above are still satisfied. Repeating the above process we get to the situation in which either $R(K_X+B')$ is finitely generated, or ${\theta}(X,B',M')=0$ and $K_X+B'$ is ample. Thus in any case we can assume $R(K_X+B')$ is finitely generated. \emph{Step 3.} Let $$ \mathcal T=\{t\in [0,1]~|~ (X,B+tC)~~\mbox{has an lc model}\} $$ Since $R(K_X+B')=R(K_X+B+C)$ is finitely generated, $1\in\mathcal T$ hence $\mathcal T\neq \emptyset$. Moreover, if $t\in\mathcal T\cap (0,1]$, then by Lemma \ref{l-fg-decrease}, $[t-\epsilon,t]\subset \mathcal{T}$ for some $\epsilon>0$. Now let $\tau=\inf \mathcal{T}$. If $\tau\in \mathcal{T}$, then $\tau=0$ which implies that $R(K_X+B)$ is finitely generated, a contradiction. So we may assume $\tau\notin \mathcal{T}$.
There is a sequence $t_1>t_2>\cdots$ of rational numbers in $\mathcal{T}$ approaching $\tau$. For each $i$, there is a $\mathbb Q$-factorial weak lc model $(Y_i,B_{Y_i}+t_iC_{Y_i})$ of $(X,B+t_iC)$ such that $Y_i\dashrightarrow X$ does not contract divisors (see the beginning of the proof of Lemma \ref{l-fg-decrease}). By taking a subsequence, we can assume that all the $Y_i$ are isomorphic in codimension one. In particular, $N_\sigma(K_{Y_1}+B_{Y_1}+\tau C_{Y_1})=0$. Arguing as in the proof of Theorem \ref{t-sa-reduced-boundary-2}, we can show that $({Y_1},B_{Y_1}+\tau C_{Y_1})$ is dlt because $\alpha A\le C$ with $\alpha A$ ample, and $\Supp A_{Y_1}$ does not contain any lc centre of $({Y_1},B_{Y_1}+\tau C_{Y_1})$. Run the LMMP on $K_{Y_1}+B_{Y_1}+\tau C_{Y_1}$ with scaling of $(t_1-\tau)C_{Y_1}$ as in \ref{ss-g-LMMP-scaling}. Since $\alpha M_{Y_1}=N_{Y_1}+C_{Y_1}$, the LMMP is also an LMMP on $N_{Y_1}$. Thus each extremal ray in the process is a pl-extremal ray, hence it can be contracted by a projective morphism (\ref{ss-pl-ext-rays}). Moreover, the required flips exist by Theorem \ref{t-flip-1}, and the LMMP terminates with a model $Y$ on which $K_{Y}+B_{Y}+\tau C_{Y}$ is nef, by special termination (\ref{p-st}). Note that the LMMP does not contract any divisor by the $N_\sigma=0$ property. Moreover, $K_{Y}+B_{Y}+(\tau+\delta) C_{Y}$ is nef for some $\delta>0$. Now, by replacing the sequence we can assume that $K_{Y}+B_{Y}+t_i C_{Y}$ is nef for every $i$ and by replacing each $Y_i$ with $Y$ we can assume that $Y_i=Y$ for every $i$. A simple comparison of discrepancies (cf. [\ref{B-mmodel}, Claim 3.5]) shows that $(Y,B_{Y}+\tau C_{Y})$ is a $\mathbb Q$-factorial weak lc model of $(X,B+\tau C)$. \emph{Step 4.} Let $T_i$ be the lc model of $(X,B+t_iC)$. Then the map $Y\dashrightarrow T_i$ is a morphism and $K_{Y}+B_{Y}+t_i C_{Y}$ is the pullback of an ample divisor on $T_i$.
Moreover, for each $i$, the map $T_{i+1}\dashrightarrow T_i$ is a morphism because any curve contracted by $Y\to T_{i+1}$ is also contracted by $Y\to T_i$. So perhaps after replacing the sequence, we can assume that $T_i$ is independent of $i$, so we can drop the subscript and simply use $T$. Since $C\sim_\mathbb Q 0/T$, we can replace $Y$ with a $\mathbb Q$-factorialization of $T$ so that we can assume that $Y\to T$ is a small morphism (such a $\mathbb Q$-factorialization exists by the observations in the first paragraph of the proof of Lemma \ref{l-fg-decrease}). Assume that $\tau$ is irrational. If $K_Y+B_Y+(\tau-\epsilon)C_Y$ is nef for some $\epsilon>0$, then $K_Y+B_Y+\tau C_Y$ is semi-ample because in this case $K_T+B_T+(\tau-\epsilon)C_T$ is nef and $K_T+B_T+t_i C_T$ is ample hence $K_T+B_T+\tau C_T$ is ample; but then $(X,B+\tau C)$ has an lc model, that is $\tau\in\mathcal T$, a contradiction. If there is no $\epsilon$ as above, then by \ref{ss-ext-rays-scaling} and \ref{ss-ext-rays-II}, there is a curve $\Gamma$ generating some extremal ray such that $(K_Y+B_Y+\tau C_Y)\cdot \Gamma=0$ and $C_Y\cdot \Gamma>0$. This is not possible: intersecting gives $\tau=-\frac{(K_Y+B_Y)\cdot \Gamma}{C_Y\cdot \Gamma}$, which is rational as $K_Y+B_Y$ and $C_Y$ are $\mathbb Q$-divisors, contradicting the assumption that $\tau$ is irrational. So from now on we assume that $\tau$ is rational.
Therefore if $\Sigma$ is a connected component of $\mathbb{E}(K_Y+B_Y+\tau C_Y)$, then either $\Sigma\subset \Supp N_{Y}$ or $\Sigma\cap \Supp N_{Y}=\emptyset$. In the latter case, $(K_Y+B_Y+\tau C_Y)|_\Sigma$ is semi-ample because near $\Sigma$ the divisor $K_Y+B_Y+\tau C_Y$ is a multiple of $K_Y+B_Y+t_i C_Y$ and the latter is semi-ample. Finally, as at the end of the proof of Lemma \ref{l-fg-decrease}, we can apply Theorem \ref{t-sa-reduced-boundary-2} to show that $K_Y+B_Y+\tau C_Y$ is semi-ample. This is a contradiction because we assumed $\tau\notin\mathcal{T}$.\\ \end{proof} \begin{proof}(of Theorem \ref{t-fg}) First assume that $Z$ is a point. Pick $M\ge 0$ such that $K_X+B\sim_\mathbb Q M$. We can choose $M$ so that $M=A+D$ where $A\ge 0$ is ample and $D\ge 0$. Let $f\colon W\to X$ be a log resolution of $(X,\Supp (B+M))$. Since $(X,B)$ is klt, we can write $$ K_W+B_W=f^*(K_X+B)+E $$ where $(W,B_W)$ is klt, $K_W+B_W$ is a $\mathbb Q$-divisor, and $E\ge 0$ is exceptional$/X$. Moreover, there is $E'\ge 0$ exceptional$/X$ such that $-E'$ is ample$/X$ (cf. proof of Lemma \ref{l-ample-dlt}). Let $A_W\sim_\mathbb Q f^*A-E'$ be general and let $D_W=f^*D+E+E'$. Then $$ K_W+B_W\sim_\mathbb Q M_W:=A_W+D_W $$ Now replace $(X,B)$ with $(W,B_W)$, replace $M$ with $M_W$, and replace $A$ and $D$ with $A_W$ and $D_W$. Moreover, by adding a small multiple of $M$ to $B$ we can also assume that $\Supp M\subseteq \Supp B$. Then $(X,B)$ and $M$ satisfy the properties (1) to (4) of Lemma \ref{l-fg-decrease}. Therefore, by Proposition \ref{p-fg-1-4}, $R(K_X+B)$ is finitely generated. Now we treat the general case, that is, when $Z$ is not necessarily a point. By taking projectivizations of $X,Z$ and taking a log resolution, we may assume that $X,Z$ are projective and that $(X,B)$ is log smooth. We can also assume that $K_X+B\sim_\mathbb Q M=A+D/Z$ where $A$ is an ample $\mathbb Q$-divisor and $D\ge 0$. By adding some multiple of $M$ to $B$ we may assume $\Supp M\subseteq \Supp B$.
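The last replacement is harmless: replacing $B$ by $B+\delta M$ for a small rational $\delta>0$ gives $$ K_X+B+\delta M\sim_\mathbb Q (1+\delta)(K_X+B)/Z $$ and, passing to a truncation, $\mathcal{R}(K_X+B/Z)$ is a finitely generated $\mathcal{O}_Z$-algebra if and only if $\mathcal{R}((1+\delta)(K_X+B)/Z)$ is.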
Let $(Y,B_Y)$ be a log minimal model of $(X,B)$ over $Z$. Let $H$ be the pullback of an ample divisor on $Z$. Since $A\le B$, for each integer $m\ge 0$, there is $\Delta$ such that $K_X+B+mH\sim_\mathbb Q K_X+\Delta$ is big globally and such that $(X,\Delta)$ is klt. Moreover, $(Y,\Delta_Y)$ is a log minimal model of $(X,\Delta)$ over $Z$. Now by \ref{ss-ext-rays-II}, if $m\gg 0$, then $K_Y+\Delta_Y$ is big and globally nef. On the other hand, $R(K_Y+\Delta_Y)$ is finitely generated over $k$, which means that $K_Y+\Delta_Y$ is semi-ample. Therefore $K_Y+B_Y$ is semi-ample$/Z$, hence $\mathcal{R}(K_X+B/Z)$ is a finitely generated $\mathcal{O}_Z$-algebra. \end{proof} \subsection{Base point freeness} \begin{proof}(of Theorem \ref{t-bpf}) It is enough to show that $\mathcal{R}(D/Z)$ is a finitely generated $\mathcal{O}_Z$-algebra. By taking a $\mathbb Q$-factorialization using Theorem \ref{cor-dlt-model}, we may assume that $X$ is $\mathbb Q$-factorial. Let $A=D-(K_X+B)$, which is nef and big$/Z$ by assumptions. By replacing $A$, and replacing $B$ accordingly, we may assume that $A$ is ample globally. By Lemma \ref{l-ample-dlt}, we can change $A$ up to $\mathbb Q$-linear equivalence so that $(X,B+A)$ is klt. But then $\mathcal{R}(K_X+B+A/Z)$ is finitely generated by Theorem \ref{t-fg}, hence $\mathcal{R}(D/Z)$ is also finitely generated. \end{proof} \subsection{Contractions} \begin{proof}(of Theorem \ref{t-contraction}) We may assume that $B$ is a $\mathbb Q$-divisor and that $(X,B)$ is klt. We can assume $N=H+D$ where $H$ is ample$/Z$ and $D\ge 0$. Let $G$ be the pullback of an ample divisor on $Z$, and let $N'=mG+nN+\epsilon H+\epsilon D$ where $\epsilon>0$ is sufficiently small and $m\gg n\gg 0$. Then we can find $A\sim_\mathbb Q N'$ such that $(X,B+A)$ is klt, $K_X+B+A$ is globally big, and $(K_X+B+A)\cdot R<0$. By \ref{ss-ext-rays-II}, we can find an ample divisor $E$ such that $L:=K_X+B+A+E$ is nef and big globally and $L^\perp=R$.
We can also assume that $(X,B+A+E)$ is klt, hence by Theorem \ref{t-bpf} $L$ is semi-ample, which implies that $R$ can be contracted by a projective morphism.\\ \end{proof} \section{ACC for lc thresholds}\label{s-ACC} In this section, we prove Theorem \ref{t-ACC} by a method similar to the characteristic $0$ case (see [\ref{Kollar+}, Chapter 18] and [\ref{MP}]). Let us recall the definition of \emph{lc threshold}. Let $(X,B)$ be an lc pair over $k$ and $M\ge 0$ an $\mathbb R$-Cartier divisor. The lc threshold of $M$ with respect to $(X,B)$ is defined as $$ \lct(M,X,B)=\sup \{t\mid (X,B+tM)~~\mbox{is lc}\} $$ We first prove some results, including ACC for lc thresholds, for surfaces before we move on to 3-folds. \subsection{ACC for lc thresholds on surfaces} \begin{prop}\label{p-acc-surfaces} ACC for lc thresholds holds in dimension $2$ (formulated similarly to \ref{t-ACC}). \end{prop} \begin{proof} If this is not the case, then there is a sequence $(X_i,B_i)$ of lc pairs of dimension $2$ over $k$ and $\mathbb R$-Cartier divisors $M_i\ge 0$ such that the coefficients of $B_i$ are in $\Lambda$, the coefficients of $M_i$ are in $\Gamma$, but such that the $t_i:=\lct(M_i,X_i,B_i)$ form a strictly increasing sequence of numbers. If for infinitely many $i$, $(X_i,\Delta_i:=B_i+t_iM_i)$ has an lc centre of dimension one contained in $\Supp M_i$, then it is quite easy to get a contradiction. We may then assume that each $(X_i,\Delta_i)$ has an lc centre $P_i$ of dimension zero contained in $\Supp M_i$. We may also assume that $(X_i,\Delta_i)$ is plt outside $P_i$. Let $(Y_i,\Delta_{Y_i})$ be a $\mathbb Q$-factorial dlt model of $(X_i,\Delta_i)$ such that there are some exceptional divisors on $Y_i$ mapping to $P_i$. Such $Y_i$ exist by a version of Lemma \ref{l-extraction} in dimension $2$. There is a prime exceptional divisor $E_i$ of $Y_i\to X_i$ which intersects the birational transform of $M_i$.
Note that $E_i$ is normal and actually isomorphic to $\mathbb P^1_k$ since $E_i$ is a component of $\rddown{\Delta_{Y_i}}$ and $(K_{Y_i}+\Delta_{Y_i})\cdot E_i=0$. Now by adjunction define $K_{E_i}+\Delta_{E_i}=(K_{Y_i}+\Delta_{Y_i})|_{E_i}$. Then by Proposition \ref{p-adjunction-DCC} and its proof, the set of all the coefficients of the $\Delta_{E_i}$ is a subset of a fixed DCC set but they do not satisfy ACC. This is a contradiction since $\deg \Delta_{E_i}=2$. \end{proof} We apply the ACC of \ref{p-acc-surfaces} to negativity of contractions. \begin{lem}\label{l-lim-nefness} Let $\Lambda\subset [0,1]$ be a DCC set of real numbers. Then there is $\epsilon>0$ satisfying the following: assume we have $\bullet$ a klt pair $(X,B)$ of dimension $2$, $\bullet$ the coefficients of $B$ belong to $\Lambda\cup [1-\epsilon,1]$, $\bullet$ $f\colon X\to Y$ is an extremal birational projective contraction with exceptional divisor $E$, $\bullet$ the coefficient of $E$ in $B$ belongs to $[1-\epsilon,1]$, and $\bullet$ $-(K_X+B)$ is nef$/Y$.\\ If $\Delta$ is obtained from $B$ by replacing each coefficient in $[1-\epsilon, 1]$ with $1$, then $-(K_X+\Delta)$ is also nef$/Y$. \end{lem} \begin{proof} Note that klt pairs of dimension $2$ are $\mathbb Q$-factorial so $K_X+\Delta$ is $\mathbb R$-Cartier. By \ref{p-acc-surfaces}, we can pick $\epsilon>0$ so that: if $(T,C)$ is lc of dimension $2$ and $M\ge 0$ such that the coefficients of $C$ belong to $\Lambda$ and the coefficients of $M$ belong to $\{1\}$, then the lc threshold $\lct(M,T,C)$ does not belong to $[1-\epsilon,1)$. Now since $(X,B)$ is klt and $-(K_X+B)$ is nef$/Y$, $(Y,B_Y)$ is also klt. Thus $(Y,\Delta_Y-\epsilon\rddown{\Delta_Y})$ is klt because $B_Y\ge \Delta_Y-\epsilon\rddown{\Delta_Y}$. In particular, the lc threshold of $\rddown{\Delta_Y}$ with respect to $(Y,\Delta_Y-\rddown{\Delta_Y})$ is at least $1-\epsilon$. 
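Indeed, this follows from the decomposition $$ \Delta_Y-\epsilon\rddown{\Delta_Y}=(\Delta_Y-\rddown{\Delta_Y})+(1-\epsilon)\rddown{\Delta_Y} $$ so the klt property of $(Y,\Delta_Y-\epsilon\rddown{\Delta_Y})$ says precisely that $(Y,\Delta_Y-\rddown{\Delta_Y})$ remains klt after adding $(1-\epsilon)\rddown{\Delta_Y}$, hence $\lct(\rddown{\Delta_Y},Y,\Delta_Y-\rddown{\Delta_Y})\ge 1-\epsilon$.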
Note that the coefficients of $\Delta$ belong to $\Lambda\cup \{1\}$ and the coefficients of $\Delta_Y-\rddown{\Delta_Y}$ belong to $\Lambda$. Thus by our choice of $\epsilon$, the pair $(Y,\Delta_Y)$ is lc. Therefore we can write $$ K_X+\Delta=f^*(K_Y+\Delta_Y)+eE $$ for some $e\ge 0$ because the coefficient of $E$ in $\Delta$ is $1$. This implies that $-(K_X+\Delta)$ is indeed nef$/Y$.\\ \end{proof} \subsection{Global ACC for surfaces} In this subsection we prove a global type of ACC for surfaces (\ref{p-ACC-global}) which will be used in the proof of Theorem \ref{t-ACC}. \begin{constr}\label{rem-Fano-twist} Let $\epsilon\in (0,1)$ and let $X'$ be a klt Fano surface with $\rho(X')=1$. Assume that $X'$ is not $\epsilon$-lc. Pick a prime divisor $E$ (on birational models of $X'$) with log discrepancy $a(E,X',0)<\epsilon$. By a version of Lemma \ref{l-extraction} in dimension two, there is a birational contraction $Y'\to X'$ which is extremal and has $E$ as the only exceptional divisor. Under our assumptions it is easy to find a boundary $D_{Y'}$ such that $(Y',D_{Y'})$ is klt and $K_{Y'}+D_{Y'}\sim_\mathbb R -eE$ for some $e>0$. In particular, we can run an LMMP on $-E$ which ends with a Mori fibre space $X''\to T''$ so that $E''$, the birational transform of $E$, positively intersects the extremal ray defining $X''\to T''$. As $\rho(X')=1$, we get $\rho(Y')=2$. One of the extremal rays of $Y'$ gives the contraction $Y'\to X'$. The other one either gives $X''\to T''$ with $Y'=X''$ or it gives a birational contraction $Y'\to X''$. If $\dim T''=0$, then $X''$ is also a klt Fano with $\rho(X'')=1$. \end{constr} \begin{lem}\label{l-bnd-comps-surfaces} Let $b\in (0,1)$ be a real number. Then there is a natural number $m$ depending only on $b$ such that: let $(X,B)$ be a klt pair of dimension $2$ and $x\in X$ a closed point; then the number of those components of $B$ containing $x$ and with coefficient $\ge b$ is at most $m$.
\end{lem} \begin{proof} Since $(X,B)$ is klt and $\dim X=2$, $X$ is $\mathbb Q$-factorial. We can assume that each coefficient of $B$ is equal to $b$ by discarding any component with coefficient less than $b$ and by decreasing each coefficient which is more than $b$. Moreover, we can assume every component of $B$ contains $x$. Pick a nonzero $\mathbb R$-Cartier divisor $G\ge 0$ such that $(X,C:=B+G)$ is lc near $x$ and such that $x$ is an lc centre of $(X,B+G)$: for example we can take a log resolution $W\to X$ and let $G$ be the pushdown of an appropriate ample $\mathbb R$-divisor on $W$. Shrinking $X$ we can assume $(X,C)$ is lc. Since $(X,B)$ is klt, there is an extremal contraction $f\colon Y\to X$ which extracts a prime divisor $S$ with log discrepancy $a(S,X,C)=0$. Let $B_Y$ be the sum of $S$ and the birational transform of $B$. Then $-(K_Y+B_Y)$ is ample$/X$. Apply adjunction (\ref{p-adjunction-DCC}) and write $K_{S^\nu}+B_{S^\nu}$ for the pullback of $K_Y+B_Y$ to the normalization of $S$. As $-(K_{S^\nu}+B_{S^\nu})$ is ample, ${S^\nu}\simeq \mathbb P^1$ and $\deg B_{S^\nu}<2$. By \ref{p-adjunction-DCC}, the coefficient of each $s\in \Supp B_{S^\nu}$ is of the form $\frac{n-1}{n}+\frac{rb}{n}$ for some integer $r\ge 0$ and some $n\in \mathbb N\cup \{\infty\}$. In particular, the number of components of $B_{S^\nu}$ is bounded and the number $r$ in the formula is also bounded. This bounds the number of components of $B$ because $r$ is at least the number of those components of $B_Y-S$ which pass through the image of $s$.\\ \end{proof} \begin{prop}\label{p-ACC-global} Let $\Lambda\subset [0,1]$ be a DCC set of real numbers. Then there is a finite subset $\Gamma\subset \Lambda$ with the following property: let $(X,B)$ be a pair and $X\to Z$ a projective morphism such that $\bullet$ $(X,B)$ is lc of dimension $2$ over $k$, $\bullet$ the coefficients of $B$ are in $\Lambda$, $\bullet$ $K_X+B\equiv 0/Z$, $\bullet$ $\dim X>\dim Z$. 
Then the coefficient of each horizontal$/Z$ component of $B$ is in $\Gamma$. \end{prop} \begin{proof} \emph{Step 1.} We can assume that $1\in \Lambda$. If the proposition is not true, then there is a sequence $(X_i,B_i), X_i\to Z_i$ of pairs and morphisms as in the proposition such that the set of the coefficients of the horizontal$/Z_i$ components of all the $B_i$ put together does not satisfy the ACC. By taking $\mathbb Q$-factorial dlt models we can assume that $(X_i,B_i)$ are $\mathbb Q$-factorial dlt. Write $B_i=\sum b_{i,j}B_{i,j}$. We may assume that $B_{i,1}$ is horizontal$/Z_i$ and that $b_{1,1}<b_{2,1}<\cdots$. \emph{Step 2.} First assume that $\dim Z_i=1$ for every $i$. Run the LMMP$/Z_i$ on $K_{X_i}+B_i-b_{i,1}B_{i,1}$ with scaling of $b_{i,1}B_{i,1}$. This terminates with a model $X_i'$ having an extremal contraction $X_i'\to Z_i'/Z_i$ such that $K_{X_i'}+B_i'-b_{i,1}B_{i,1}'$ is numerically negative over $Z_i'$. Let $F_i'$ be the reduced variety associated to a general fibre of $X_i'\to Z_i'$. Since $K_{X_i'}+B_i'\equiv 0/Z_i'$ and ${F'_i}^2=0$, we get $(K_{X_i'}+B_i'+F_i')\cdot F_i'=0$; since $B_{i,1}'\cdot F_i'>0$, this gives $(K_{X_i'}+F_i')\cdot F_i'<0$, hence the arithmetic genus satisfies $p_a(F_i')<1$, which implies that $F_i'\simeq \mathbb P^1_k$. We can write $$ \deg (K_{X_i'}+B_i'+F_i')|_{F_i'}=-2+\sum n_{i,j}b_{i,j}= 0 $$ for certain integers $n_{i,j}\ge 0$ such that $n_{i,1}>0$. Since the $b_{i,j}$ belong to the DCC set $\Lambda$, $n_{i,1}$ is bounded from above and below. Moreover, we can assume that the sums $\sum_{j\ge 2} n_{i,j}b_{i,j}$ satisfy the DCC, hence $n_{i,1}b_{i,1}=2-\sum_{j\ge 2} n_{i,j}b_{i,j}$ satisfies the ACC, a contradiction. \emph{Step 3.} From now on we may assume that $\dim Z_i=0$ for every $i$. Run the LMMP$/Z_i$ on $K_{X_i}+B_i-b_{i,1}B_{i,1}$ with scaling of $b_{i,1}B_{i,1}$. This terminates with a model $X_i'$ having an extremal contraction $X_i'\to Z_i'$ such that $K_{X_i'}+B_i'-b_{i,1}B_{i,1}'$ is numerically negative over $Z_i'$. 
If $\dim Z_i'=1$ for infinitely many $i$, then we get a contradiction by Step 2. So we assume that the $Z_i'$ are all points, hence each $X_i'$ is a Fano with Picard number one. Assume that $({X_i}',B_i')$ is lc but not klt for every $i$. First assume that each $({X_i}',B_i')$ has an lc centre $S_i'$ of dimension one. Let $K_{S_i'}+B_{S_i'}=(K_{X_i'}+B_i')|_{S_i'}$ by adjunction. Note that $S_i'$ is normal since $({X_i'},B_i'-b_{i,1}B_{i,1}')$ is $\mathbb Q$-factorial dlt. Since $K_{S_i'}+B_{S_i'}\equiv 0$, $S_i'\simeq \mathbb P^1_k$. If $\Supp B_{i,1}'$ contains an lc centre for infinitely many $i$, then we get a contradiction by the ACC for lc thresholds in dimension $2$. So we can assume that $\Supp B_{i,1}'$ does not contain any lc centre; in particular, none of the points of $S_i'\cap B_{i,1}'$ is an lc centre. Now, since $\{b_{i,j}\}$ does not satisfy the ACC, by Proposition \ref{p-adjunction-DCC}, the set of the coefficients of all the $B_{S_i'}$ satisfies the DCC but not the ACC, which gives a contradiction as above (by considering the coefficients of the points in $S_i'\cap B_{i,1}'$). So we can assume that each $({X_i}',B_i')$ has an lc centre of dimension zero. By a version of Lemma \ref{l-extraction} in dimension $2$, there is a projective birational contraction $Y_i'\to X_i'$ which extracts only one prime divisor $E_i'$ and satisfies $a(E_i',X_i',B_i')=0$. Let $K_{Y_i'}+B_{Y_i'}$ be the pullback of $K_{X_i'}+B_i'$. By running the LMMP on $K_{Y_i'}+B_{Y_i'}-E_i'$, we arrive at a model on which either the birational transform of $E_i'$ intersects the birational transform of $B_{i,1}'$ for infinitely many $i$, or we get a Mori fibre space over a curve whose general fibre intersects the birational transform of $B_{i,1}'$ for infinitely many $i$. In either case, we can apply the arguments above to get a contradiction. So from now on we may assume that the $({X_i}',B_i')$ are all klt. 
\emph{Step 4.} If there is $\epsilon>0$ such that $X_i'$ is $\epsilon$-lc for every $i$, then we are done since such $X_i'$ are bounded by Alexeev [\ref{Alexeev}]. So we can assume that the minimal log discrepancies of the $X_i'$ form a strictly decreasing sequence of positive numbers. Since the $({X_i}',B_i')$ are klt, we can assume that the minimal log discrepancies of the $(X_i',B_i')$ also form a strictly decreasing sequence of positive numbers. As in Construction \ref{rem-Fano-twist}, we find a contraction ${Y}_i'\to X_i'$ extracting a prime divisor $E_i$ with log discrepancy $a(E_i,X_i',B_i')<\epsilon$ and run a $-E_i$-LMMP to get a Mori fibre structure $X_i''\to Z_i''$. If $\dim Z_i''=1$ for each $i$, we use Step 2 to get a contradiction. So we may assume that $\dim Z_i''=0$ for each $i$. Note that the exceptional divisor of $X_i''\dashrightarrow X_i'$ is a component of $B_i''$ with coefficient $\ge 1-\epsilon$, where $K_{X_i''}+B_i''$ is the pullback of $K_{X_i'}+B_i'$. Write $K_{Y_i'}+{B}_{Y_i'}$ for the pullback of $K_{X_i'}+B_i'$. By construction, the coefficients of ${B}_{Y_i'}$ belong to some DCC subset of $\Lambda\cup [1-\epsilon,1]$. We show that if $\epsilon$ is sufficiently small, then ${Y}_i'\to X_i''$ cannot contract a component of ${B}_{Y_i'}$ with coefficient $\ge 1-\epsilon$. Indeed, let $\Delta_{Y_i'}$ be obtained from ${B}_{Y_i'}$ by replacing each coefficient $\ge 1-\epsilon$ with $1$. Then by Lemma \ref{l-lim-nefness}, $-(K_{Y_i'}+{\Delta}_{Y_i'})$ is nef over both $X_i'$ and $X_i''$. As $\rho(Y_i')=2$, $-(K_{Y_i'}+{\Delta}_{Y_i'})$ is nef globally. This is a contradiction because the pushdown of $K_{Y_i'}+{\Delta}_{Y_i'}$ to $X_i''$ is ample. \emph{Step 5.} Now replace $({X_i'},B_i')$ with $({X_i''},B_i'')$ and repeat the process of Step 4, $m$ times. By the last paragraph, the new components of $B_i'$ that appear in the process are not contracted again. 
So we may assume that we have at least $m$ components of $B_i'$ with coefficients $\ge 1-\epsilon$. Let $x_i'$ be the image of the exceptional divisor of $Y_i'\to X_i'$ and let $x_i''$ be the image of the exceptional divisor of $Y_i'\to X_i''$. Also let $m_i'$ be the number of those components of $B_i'$ with coefficient $\ge 1-\epsilon$ and passing through $x_i'$. Define $m_i''$ similarly. Since $\rho(Y_i')=2$, each component of $B_{Y_i'}$ intersects the exceptional divisor of $Y_i'\to X_i'$ or the exceptional divisor of $Y_i'\to X_i''$. Therefore, $m_i'+m_i''\ge m$. Finally, by Lemma \ref{l-bnd-comps-surfaces} both $m_i'$ and $m_i''$ are bounded, hence $m$ is also bounded. This means that after applying the process of Step 4 finitely many times, we can assume there is $\epsilon>0$ such that $X_i'$ is $\epsilon$-lc for every $i$, and then apply boundedness of such $X_i'$ [\ref{Alexeev}]. \end{proof} \subsection{$3$-folds} \begin{proof}(of Theorem \ref{t-ACC}) If the theorem does not hold, then there is a sequence $(X_i,B_i)$ of lc pairs of dimension $3$ over $k$ and $\mathbb R$-Cartier divisors $M_i\ge 0$ such that the coefficients of $B_i$ are in $\Lambda$ and the coefficients of $M_i$ are in $\Gamma$, but such that the $t_i:=\lct(M_i,X_i,B_i)$ form a strictly increasing sequence of numbers. We may assume that each $(X_i,\Delta_i:=B_i+t_iM_i)$ has an lc centre of dimension $\le 1$ contained in $\Supp M_i$. Let $(Y_i,\Delta_{Y_i})$ be a $\mathbb Q$-factorial dlt model of $(X_i,\Delta_i)$ such that there is an exceptional divisor on $Y_i$ mapping onto an lc centre inside $\Supp M_i$. Such $Y_i$ exist by Lemma \ref{l-extraction}. There is a prime exceptional divisor $E_i$ of $Y_i\to X_i$ which intersects the birational transform of $M_i$ and maps into $\Supp M_i$. Note that $E_i$ is normal by Lemma \ref{l-plt-normal}. Let $E_i\to Z_i$ be the contraction induced by $E_i\to X_i$. Now by adjunction define $K_{E_i}+\Delta_{E_i}=(K_{Y_i}+\Delta_{Y_i})|_{E_i}$. 
Then the set of all the coefficients of the horizontal$/Z_i$ components of the $\Delta_{E_i}$ satisfies the DCC but not the ACC, by Proposition \ref{p-adjunction-DCC}. This contradicts Proposition \ref{p-ACC-global}. \end{proof} \section{Non-big log divisors: proof of \ref{t-aug-b-non-big}}\label{s-numerical} \begin{lem}\label{l-movable-curve} Let $X$ be a normal projective variety of dimension $d$ over an algebraically closed field (of any characteristic). Let $A$ be an ample $\mathbb R$-divisor and $P$ a nef $\mathbb R$-divisor with $P^d=0$. Then for any $\epsilon>0$, there exist $\delta\in [0,\epsilon]$ and a very ample divisor $H$ such that $(P-\delta A)\cdot H^{d-1}=0$. \end{lem} \begin{proof} First we show that there is an ample divisor $H$ such that $(P-\epsilon A)\cdot H^{d-1}<0$. Put $r(\tau):=(P-\epsilon A)(P+\tau A)^{d-1}$. Then $$ r(\tau)=(P-\epsilon A)(P^{d-1}+a_{d-2}\tau P^{d-2}A+\dots+a_1\tau^{d-2}PA^{d-2}+\tau^{d-1}A^{d-1}) $$ where the $a_i>0$ depend only on $d$. Put $a_{d-1}=a_0=1$, $a_{-1}=0$, and let $n$ be the smallest integer such that $P^{d-n}A^{n}\neq 0$. Then we can write $$ r(\tau)=\sum_{i=0}^{d-1} (a_{i-1}\tau^{d-i}-\epsilon a_i\tau^{d-i-1}) P^iA^{d-i} $$ from which we get $$ r(\tau)=\sum_{i=0}^{d-n} (a_{i-1}\tau^{d-i}-\epsilon a_i\tau^{d-i-1}) P^iA^{d-i} $$ hence $$ \frac{r(\tau)}{\tau^{n-1}}=(a_{d-n-1}\tau-\epsilon a_{d-n}) P^{d-n}A^{n}+\tau s(\tau) $$ for some polynomial function $s(\tau)$. Now if $\tau>0$ is sufficiently small, it is clear that the right hand side is negative, hence $r(\tau)<0$. Choose $\tau>0$ so that $r(\tau)<0$. Since $P+\tau A$ is ample and ampleness is an open condition, there is an ample $\mathbb Q$-divisor $H$ close to $P+\tau A$ such that $(P-\epsilon A)\cdot H^{d-1}<0$. By replacing $H$ with a multiple we can assume that $H$ is very ample. 
Since $P\cdot H^{d-1}\ge 0$ by the nefness of $P$, it is then obvious that there is some $\delta\in [0,\epsilon]$ such that $(P-\delta A)\cdot H^{d-1}=0$.\\ \end{proof} \begin{proof}(of Theorem \ref{t-aug-b-non-big}) Assume that $D^d=0$. By replacing $A$ we may assume that it is ample. Fix $\alpha>0$. By Lemma \ref{l-movable-curve}, there exist a number $t$ sufficiently close to $1$ (possibly equal to $1$) and a very ample divisor $H$ such that $$ (K_X+B+t(A+\alpha D))\cdot H^{d-1}=0 $$ Now we can view $H^{d-1}$ as a $1$-cycle on $X$. For each point $x\in X$, there is an effective $1$-cycle $C_x$ whose class is the same as $H^{d-1}$ and such that $x\in C_x$. Since $H$ is very ample, we may assume that $C_x$ is irreducible and that it is inside the smooth locus of $X$ for general $x$. In particular, we have $$ (K_X+B+t(A+\alpha D))\cdot C_x= 0 $$ Pick a general $x\in X$ and let $C_x$ be the curve mentioned above. Since $B$ is effective and $A+\alpha D$ is ample, we get $K_X\cdot C_x<0$. Thus by Koll\'ar [\ref{kollar}, Chapter II, Theorem 5.8], there is a rational curve $L_x$ passing through $x$ such that $$ 0<A\cdot L_x\le (A+\alpha D)\cdot L_x\le (2d) \frac{(A+\alpha D)\cdot C_x}{-K_X\cdot C_x} $$ $$ =\frac{2d}{t} (1+\frac{B\cdot C_x}{K_X\cdot C_x})\le \frac{2d}{t}<3d $$ because $K_X\cdot C_x<0$, $B\cdot C_x\ge 0$, and $t$ is sufficiently close to $1$. Note that although $K_X$ and $B$ need not be $\mathbb R$-Cartier, the intersection numbers still make sense since $C_x$ is inside the smooth locus of $X$. As $A$ is ample and $A\cdot L_x\le 3d$, we can assume that such $L_x$ (for general $x$) belong to a bounded family $\mathcal{L}$ of curves on $X$ (independent of the choice of $t,\alpha$). Therefore there are only finitely many possibilities for the intersection numbers $D\cdot L_x$. 
If we choose $\alpha$ sufficiently large, then the inequality $(A+\alpha D)\cdot L_x\le 3d$ implies $D\cdot L_x=0$, and so we get the desired family.\\ \end{proof} \begin{flushleft} DPMMS, Centre for Mathematical Sciences,\\ Cambridge University,\\ Wilberforce Road,\\ Cambridge, CB3 0WB,\\ UK\\ email: [email protected]\\ \end{flushleft} \end{document}
\begin{document} \title{{\huge Open-loop and Closed-loop Local and Remote Stochastic Nonzero-sum Game with Inconsistent Information Structure }} \author{Xin Li, Qingyuan Qi$^{*}$ and Xinbei Lv \thanks{This work was supported by National Natural Science Foundation of China under grants 61903210, Natural Science Foundation of Shandong Province under grant ZR2019BF002, China Postdoctoral Science Foundation under grant 2019M652324, 2021T140354, Qingdao Postdoctoral Application Research Project, Major Basic Research of Natural Science Foundation of Shandong Province (ZR2021ZD14). (Corresponding author: Qingyuan Qi.) X. Li ([email protected]) is with Institute of Complexity Science, College of Automation, Qingdao University, Qingdao, China 266071. Q. Qi ([email protected]) and X. Lv ([email protected]) are with Qingdao Innovation and Development Center of Harbin Engineering University, Qingdao, China, 266000. }} \maketitle \IEEEpeerreviewmaketitle \begin{abstract} In this paper, the open-loop and closed-loop local and remote stochastic nonzero-sum game (LRSNG) problem is investigated. Different from previous works, the stochastic nonzero-sum game problem under consideration is a special class of two-person nonzero-sum game in which the information sets accessed by the two players are inconsistent. More specifically, both the local player and the remote player are involved in the system dynamics, the information sets obtained by the two players are different, and each player aims to minimize its own cost function. For the considered LRSNG problem, both the open-loop and closed-loop Nash equilibria are derived. The contributions of this paper are as follows. Firstly, the open-loop optimal Nash equilibrium is derived, which is determined in terms of the solution to forward and backward stochastic difference equations (FBSDEs). 
Furthermore, by using the orthogonal decomposition method and the completing-the-square method, the feedback representation of the optimal Nash equilibrium is derived for the first time. Finally, the effectiveness of our results is verified by a numerical example. \end{abstract} \begin{IEEEkeywords} Discrete-time stochastic nonzero-sum game, inconsistent information structure, open-loop and closed-loop Nash equilibrium, maximum principle. \end{IEEEkeywords} \section{Introduction} As is well known, due to their wide applications in industry, economics and management \cite{djl2000,cl2002}, differential games have received much attention since the 1950s, and abundant research results have been obtained; see \cite{i1965,ek1972,i1999,zp2022}. As a special case of differential games, the linear quadratic (LQ) non-cooperative stochastic game is a hot research topic, in view of its rigorous solution and potential applications. For the LQ non-cooperative stochastic game problem, the system dynamics is described by a linear stochastic differential/difference equation, and quadratic utility functions are to be minimized. It is worth mentioning that the open-loop and closed-loop Nash equilibria were proposed in \cite{sy2019} for the continuous-time stochastic LQ two-person nonzero-sum differential game, where it was shown that the existence of the open-loop Nash equilibrium is characterized by forward-backward stochastic differential equations, and the closed-loop Nash equilibrium is characterized by Riccati equations. Besides, references \cite{sjz2012a,sjz2012b} studied the discrete-time LQ non-cooperative stochastic game problem, and the Nash equilibrium was derived. 
Please refer to \cite{r2007,h1999,sy2019,slz2011,sjz2012a,sjz2012b,z2018,szl2021,cyl2022} and the references cited therein for recent studies on the LQ non-cooperative stochastic game. It is noted that the previous works \cite{h1999,sy2019,slz2011,sjz2012a,sjz2012b,z2018,szl2021,cyl2022} on LQ stochastic games mainly focused on the case where the information structure is consistent, i.e., the two players share the same information sets. The information-inconsistent case, in which the information sets obtained by the two players are different, remains less investigated. As pointed out in \cite{nglb2014}, the asymmetry of information among the controllers makes it difficult to compute or characterize Nash equilibria. In fact, due to the inconsistency of the information structure, the two players are coupled with each other, and finding the Nash equilibrium strategy becomes hard. Therefore, the study of the LQ nonzero-sum stochastic game is challenging from the theoretical aspect. In this paper, a special case of LQ stochastic two-person nonzero-sum differential games with inconsistent information shall be studied, which is called the local and remote stochastic nonzero-sum game (LRSNG) problem. A detailed description of the networked system under consideration is shown in Figure \ref{Figure1}. It can be seen that two players (the local player and the remote player) are involved, and the uplink channel from the local player to the remote player is unreliable, while the downlink channel from the remote player to the local player is perfect. In other words, the precise state information can only be observed by the local player, and the remote player can only receive the disturbed state information delivered from the local player through the unreliable uplink channel. Hence, the information sets accessed by the local player and the remote player are not the same, which is referred to as an inconsistent information structure. 
The objective of the two players is that each player minimizes its own quadratic cost function. Since the two players are non-cooperative, the above problem is actually an LQ two-person nonzero-sum differential game with inconsistent information (the LRSNG problem), and the goal of this paper is to find the Nash equilibrium associated with the LRSNG problem. Actually, the corresponding local and remote control problem was originally studied in \cite{ott2016}, in which the two controllers (the local controller and the remote controller) were cooperative, and the goal was to derive a joint control law optimizing one common cost function. The optimal local and remote control problem is a decentralized control problem with asymmetric information structure, since the local controller and the remote controller can access different information sets; recent research results on local and remote control can be found in \cite{qxz2020,lx2018,lqz2021,aon2019,tyw2022}. For instance, \cite{lx2018} investigated the optimal local and remote control problem, and a necessary and sufficient condition for the finite-horizon optimal control problem was given by use of the maximum principle. In \cite{aon2019}, the local and remote control problem with multiple subsystems was studied; by using the common information approach, the optimal control strategy was obtained. \begin{figure}[htbp] \centering \includegraphics[width=0.38\textwidth]{Figure1.pdf} \caption{ Description of the LRSNG problem with inconsistent information structure.} \label{Figure1} \end{figure} However, the LRSNG problem studied in this paper is essentially different from the previous works \cite{qxz2020,lx2018,lqz2021,aon2019,tyw2022} on optimal local and remote control in the following aspects. Firstly, the two players of the LRSNG problem are designed respectively to optimize their own cost functions, i.e., they are non-cooperative. 
In contrast, the two controllers in the local and remote control problem are cooperative and are designed to minimize a common cost function. Secondly, although the local and remote control problem has been well studied, the LRSNG problem has not been solved before. It is stressed that the study of the LRSNG problem is significant from both the theoretical and application perspectives. From the theoretical point of view, as pointed out earlier, the LRSNG problem has not been solved before in view of the challenges caused by the inconsistent information structure. Furthermore, the LRSNG can be potentially applied in unmanned systems, manufacturing systems and autonomous vehicles, smart grid, remote surgery, etc.; see \cite{lx2018,aon2019,hv2000,gc2010,hnx2007,lczsm2014} and the references cited therein. As analyzed above, the inconsistent information structure couples the two players with each other; hence, finding the Nash equilibrium for the LRSNG problem becomes difficult. To overcome this challenge, the maximum principle and the orthogonal decomposition approach are adopted in this paper to solve the LRSNG problem, and the open-loop and closed-loop Nash equilibria are derived. Firstly, by use of the convex variational method, the Pontryagin maximum principle is derived. Moreover, the necessary and sufficient conditions for the existence of the open-loop optimal Nash equilibrium for the LRSNG problem are derived, which are based on the solution of the FBSDEs from the maximum principle. Consequently, in order to find an explicit feedback Nash equilibrium strategy, the orthogonal decomposition method and the completing-the-square approach are applied; then the closed-loop optimal Nash equilibrium strategy for the LRSNG problem is derived for the first time, which is based on the solution to modified coupled Riccati equations. Finally, a numerical example is given to illustrate the main results of this paper. 
In this paper, a special class of LQ non-cooperative stochastic games with inconsistent information (i.e., the LRSNG problem) is solved for the first time. The contributions of this paper are twofold. On the one hand, the open-loop Nash equilibrium for the LRSNG problem is solved, and the necessary and sufficient solvability conditions are derived. On the other hand, the closed-loop Nash equilibrium for the LRSNG problem is developed, and the optimal feedback Nash equilibrium strategy is shown to rely on the solution to coupled Riccati equations. The remainder of the paper is organized as follows. The LRSNG problem is formulated in Section II. Section III solves the open-loop Nash equilibrium, and the solvability conditions are investigated. Section IV is devoted to solving the closed-loop Nash equilibrium for the LRSNG problem. In Section V, the effectiveness of the main results is illustrated by numerical examples. The paper is concluded in Section VI. For convenience, we will use the following notations throughout the paper. $\mathbb{R}^n$ signifies the $n$-dimensional Euclidean space. $I_n$ denotes the $n\times n$ identity matrix. $A^\mathrm{T}$ denotes the transpose of the matrix $A$. $\mathcal F(X)$ denotes the $\sigma$-algebra generated by the random variable $X$. $A\geq 0$ $(>0)$ means that $A$ is a positive semi-definite (positive definite) matrix. $\mathrm{Tr}(A)$ represents the trace of the matrix $A$. $\mathbb{E}[X]$ is the mathematical expectation, and $\mathbb{E}[X|\mathcal F_k]$ means the conditional expectation with respect to $\mathcal F_k$. The superscripts $L$ and $R$ denote the local player and the remote player, respectively. $\inf$ means the infimum, or the greatest lower bound, of a set. 
\section{Problem Formulation} Throughout this paper, the following system dynamics shall be considered: \begin{align}\label{ss1} x_{k + 1} = Ax_k + {B^L}u_k^L + B^Ru_k^R + w_k, \end{align} where $x_k \in \mathbb{R}^n$ is the state, and $u_k^L\in \mathbb{R}^{m_1}$ and $u_k^R\in \mathbb{R}^{m_2}$ are the inputs of the local player and the remote player, respectively. $A$, $B^L$, $B^R$ are constant system matrices with appropriate dimensions. $w_k$ is the system noise with zero mean and covariance $\Sigma_{w}$, taking values in $\mathbb{R}^n$. The initial state $x_0$ has mean $\mu$ and covariance $\Sigma_{x_0}$, taking values in $\mathbb{R}^n$. $x_0$ and $w_k$, independent of each other, are Gaussian random variables. As illustrated in Figure \ref{Figure1}, the uplink channel from the local player to the remote player is unreliable, while the downlink channel from the remote player to the local player is perfect. Thus, the information sets accessed by the local player and the remote player are given as follows, respectively: \begin{align}\label{is1} \mathcal{F}_k^R&=\sigma \left\{\gamma_0x_0, \cdots, \gamma_kx_k \right\},\notag\\ \mathcal{F}_k^L&=\sigma\left\{x_0, w_0, \cdots, w_{k-1},\gamma_0x_0, \cdots, \gamma_kx_k \right\}, \end{align} in which $\gamma _k$ is an independent identically distributed Bernoulli random variable describing the state information transmitted through the unreliable communication channel, i.e., \begin{equation}\label{uk3} \gamma_k=\left\{ \begin{array}{ll} 0, ~~\text{with probability}~~1-p,\\ 1, ~~\text{with probability}~~p. \end{array} \right. \end{equation} In the above, $\gamma _k=1$ denotes that the state can be successfully accessed by the remote player, while $\gamma _k=0$ means the dropout of the state information from the local player to the remote player. 
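As an illustration, the dynamics \eqref{ss1} together with the Bernoulli dropout channel \eqref{uk3} can be simulated directly. The following sketch uses hypothetical scalar parameters and zero control inputs (the equilibrium strategies are only derived later); it records exactly the sequence $\gamma_k x_k$ received by the remote player:

```python
import random

def simulate(A, BL, BR, p, N, x0, seed=0):
    """Scalar instance of x_{k+1} = A x_k + B^L u_k^L + B^R u_k^R + w_k,
    with gamma_k ~ Bernoulli(p); the remote player receives gamma_k * x_k."""
    rng = random.Random(seed)
    x = x0
    remote_obs = []
    for k in range(N + 1):
        gamma = 1 if rng.random() < p else 0   # gamma_k = 1: successful transmission
        remote_obs.append(gamma * x)
        uL, uR = 0.0, 0.0                      # placeholder inputs, not the Nash strategies
        w = rng.gauss(0.0, 1.0)                # Gaussian system noise w_k
        x = A * x + BL * uL + BR * uR + w
    return remote_obs

obs = simulate(A=0.9, BL=1.0, BR=0.5, p=0.7, N=20, x0=1.0)
```

While the local player sees the full trajectory, `obs` is the raw information entering $\mathcal{F}_k^R$: entries equal to $0$ correspond (almost surely) to dropouts $\gamma_k=0$.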
For the purpose of simplicity, the following notations are given: \begin{align}\label{cs1} \mathcal{U}^L_N =\{u_0^L,\cdots,&u_N^L|u_k^L \in \mathbb{R}^{m_1}, u_k^L~\text{is}~\mathcal{F}_{k}^L\text{-adapted},\notag\\ &\text{and}~\sum\limits_{k = 0}^N \mathbb{E}[(u_k^L)^Tu_k^L]<+\infty \},\notag\\ \mathcal{U}^R_N =\{u_0^R,\cdots,&u_N^R|u_k^R \in \mathbb{R}^{m_2}, u_k^R~\text{is}~\mathcal{F}_{k}^R\text{-adapted},\notag\\ &\text{and}~\sum\limits_{k = 0}^N \mathbb{E}[(u_k^R)^Tu_k^R]<+\infty\}. \end{align} The quadratic cost functions associated with system \eqref{ss1} are given by \begin{small} \begin{align} J_N^L({u_k^L},{u_k^R})&= \sum\limits_{k = 0}^N \mathbb{E}[x_k^T{Q^L}{x_k}+(u_k^L)^T{S^L}u_k^L+(u_k^R)^T{M^L}{u_k^R}]\notag\\ &+\mathbb{E}[ x_{N + 1}^T{P^L_{N + 1}}{x_{N + 1}}]\label{cf1},\\ J_N^R({u_k^L},{u_k^R})&= \sum\limits_{k = 0}^N \mathbb{E}[x_k^T{Q^R}{x_k}+(u_k^L)^T{S^R}{u_k^L}+(u_k^R)^T{M^R}{u_k^R}]\notag\\ &+\mathbb{E}[ x_{N + 1}^T{P^R_{N + 1}}{x_{N + 1}}]\label{cf2}, \end{align} \end{small} where $Q^L$, $Q^R$, $S^L$, $M^L$, $S^R$, $M^R$, $P_{N+1}^L$, $P_{N+1}^R$ are given symmetric weighting matrices with compatible dimensions. \begin{remark}\label{rm1} It is stressed that the information sets $\mathcal{F}_{k}^L$ and $\mathcal{F}_k^R$ available to $u_k^L$ and $u_k^R$ are different, in contrast to the consistent information structure studied in previous works on LQ stochastic games \cite{slz2011,sjz2012a, sjz2012b,sy2019}. In fact, for $k=0,\cdots, N$, we have $\mathcal{F}_k^R \subseteq \mathcal{F}_{k}^L$, and the inconsistent information structure brings essential difficulties to solving the LQ stochastic two-person nonzero-sum game. 
\end{remark} Then, the open-loop and closed-loop LRSNG problems are stated as follows: \textbf{Problem LRSNG.} For system \eqref{ss1} and cost functions \eqref{cf1}-\eqref{cf2}, find $u_k^L\in\mathcal{U}_N^L$ and $u_k^R\in\mathcal{U}_N^R$ to minimize $J_N^L$ and $J_N^R$, respectively. \section{Open-loop Nash equilibrium} In this section, we shall characterize the open-loop Nash equilibrium for Problem LRSNG in terms of FBSDEs; the methods used are the convex variational principle and the maximum principle. In the first place, the definition of the open-loop Nash equilibrium is introduced. \begin{definition}\label{def1} A pair $(u_k^{L,*},u_k^{R,*})$ $\in$ $\mathcal{U}_N^L \times \mathcal{U}_N^R$ is called an open-loop Nash equilibrium of Problem LRSNG if \begin{align} J_N^L\left(u_k^{L,*},u_k^{R,*} \right) \le J_N^L\left(u_k^{L},u_k^{R,*} \right),\forall u_k^L\in \mathcal{U}_N^L,\notag\\ J_N^R\left( u_k^{L,*},u_k^{R,*} \right) \le J_N^R\left(u_k^{L,*},u_k^{R} \right), \forall u_k^R\in \mathcal{U}_N^R.\label{pi1} \end{align} \end{definition} Before stating the main results of this section, the following two lemmas will be given, which serve as preliminaries. 
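To make the inequalities in Definition \ref{def1} concrete, the following sketch works out a hypothetical one-step, deterministic, scalar version of the game with full state information (all parameter values are illustrative, not taken from the paper): the equilibrium is obtained from the two players' stationarity conditions, and unilateral deviations are checked not to decrease either cost.

```python
# One-step deterministic scalar analogue: x1 = a*x0 + bL*uL + bR*uR,
# J^L = qL*x0^2 + sL*uL^2 + mL*uR^2 + pL*x1^2 (and symmetrically for J^R).
a, bL, bR, x0 = 1.0, 1.0, 0.5, 2.0
qL, sL, mL, pL = 1.0, 1.0, 0.5, 1.0
qR, sR, mR, pR = 1.0, 2.0, 0.5, 1.0

def JL(uL, uR):
    x1 = a * x0 + bL * uL + bR * uR
    return qL * x0**2 + sL * uL**2 + mL * uR**2 + pL * x1**2

def JR(uL, uR):
    x1 = a * x0 + bL * uL + bR * uR
    return qR * x0**2 + sR * uL**2 + mR * uR**2 + pR * x1**2

# Stationarity of each player's cost in its own control gives a linear
# system for the equilibrium (uL*, uR*):
#   (sL + pL*bL^2) uL + pL*bL*bR uR = -pL*bL*a*x0
#   pR*bL*bR uL + (mR + pR*bR^2) uR = -pR*bR*a*x0
a11, a12, c1 = sL + pL * bL**2, pL * bL * bR, -pL * bL * a * x0
a21, a22, c2 = pR * bL * bR, mR + pR * bR**2, -pR * bR * a * x0
det = a11 * a22 - a12 * a21
uLs = (c1 * a22 - a12 * c2) / det
uRs = (a11 * c2 - a21 * c1) / det

# Nash property: no unilateral deviation improves either player's cost.
for d in (-1.0, -0.1, 0.1, 1.0):
    assert JL(uLs + d, uRs) >= JL(uLs, uRs)
    assert JR(uLs, uRs + d) >= JR(uLs, uRs)
```

With noise and the inconsistent information structure $\mathcal{F}_k^R\subseteq\mathcal{F}_k^L$, the equilibrium can no longer be read off from such a pointwise linear system, which is precisely the difficulty addressed by the FBSDE approach below.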
\begin{lemma}\label{lem1} Denote by $(u_{k}^{L,*},u_{k}^{R,*})$ the open-loop Nash equilibrium, set $u_{k}^{L,\varepsilon}=u_{k}^{L,*}+\varepsilon \delta u_k^{L}$, $\delta u_k^{L}\in \mathcal{U}_N^L$, $\varepsilon\in\mathbb{R}$, ${z_{k}} = \frac{{x_{k}^\varepsilon - {x_{k}}}}{\varepsilon }$, and denote by ${x_{k}^\varepsilon}$ and ${J_N^L}(u_k^{L,\varepsilon},u_k^{R,*})$ the state and cost function associated with $u_{k}^{L,\varepsilon}$, $k =0, \cdots, N$. Then there holds \begin{align}\label{lbf1} &J_N^{L}(u_k^{L,\varepsilon},u_{k}^{R,*}) - {J_N^{L}(u_k^{L,*},u_{k}^{R,*})}\\ =&{\varepsilon ^2}\delta J_N^{L}(\delta u_k^{L})+ 2\varepsilon \sum\limits_{k = 0}^N \mathbb{E}[[(B^L)^T\theta _k^L + {S^L}u_k^{L,*}]^T\delta u_k^{L}],\notag \end{align} where $\delta J_N^{L}(\delta u_k^{L})$ is given by \begin{align}\label{lbf2} &\delta J_N^{L}(\delta u_k^{L})=\sum\limits_{k = 0}^N \mathbb{E}[z_k^T{Q^L}{z_k} + (\delta u_k^{L})^T{S^L}\delta {u_k^{L}}]\notag\\ &+ \mathbb{E}[z_{N + 1}^TP_{N + 1}^L{z_{N + 1}}]. \end{align} In the above, the costate ${\theta}_k^L (k = 0, \cdots, N)$ satisfies the following backward stochastic difference equation \begin{equation}\label{lbf4} \theta _{k-1}^{L}=Q^Lx_k+ \mathbb{E}\left[ A^T\theta _{k}^{L}|\mathcal{F}_k^L \right],\quad \theta_N^L=P_{N+1}^{L}x_{N+1}. \end{equation} \end{lemma} \begin{proof} Using the notations introduced above, it can be derived that $z_k$ satisfies \begin{align}\label{lbf3} z_{k+1}=Az_k+B^L\delta u_k^{L}, \end{align} with initial condition $z_0=0$. Consequently, the variation of the cost function can be calculated as follows. 
\begin{align*} &{J_N^L}(u_k^{L,\varepsilon},u_k^{R,*}) - {J_N^L}(u_k^{L,*},u_k^{R,*})\\ &= \sum\limits_{k = 0}^N \mathbb{E}[{({x_k} + \varepsilon {z_k})}^T{Q^L}({x_k} + \varepsilon {z_k})\\ &+ {({u_k^{L,*}} + \varepsilon \delta {u_k^{L}})^T}{S^L}(u_k^{L,*} + \varepsilon \delta u_k^{L})\\ &+ (u_k^{R,*})^{T}{M^L}{u_k^{R,*}}]+ \mathbb{E}[({x_{N + 1}} + \varepsilon{z_{N + 1}})^T{P^L_{N + 1}}\\ &\times({x_{N + 1}}+ \varepsilon{z_{N + 1}})]- \mathbb{E}[x_{N + 1}^T{P^L_{N + 1}}{x_{N + 1}}]\\ &- \sum\limits_{k = 0}^N \mathbb{E}[x_k^T{Q^L}{x_k} + (u_k^{L,*})^TS^Lu_k^{L,*}+(u_k^{R,*})^{T}{M^L}{u_k^{R,*}}]\\ &= 2\varepsilon \mathbb{E}[\sum\limits_{k = 0}^N[x_k^TQ^Lz_k + (u_k^{L,*})^TS^L\delta u_k^{L}]\\ &+x_{N + 1}^TP^L_{N +1}z_{N + 1}] \\ &+ {\varepsilon ^2}\mathbb{E}[\sum\limits_{k = 0}^N[z_k^T{Q^L}{z_k} + (\delta u_k^{L})^T{S^L}\delta u_k^{L}]\\ &+z_{N + 1}^TP_{N + 1}^L{z_{N + 1}}]. \end{align*} Furthermore, from \eqref{lbf4} and \eqref{lbf3}, we have \begin{align*} &\mathbb{E}[\sum\limits_{k = 0}^N[x_k^TQ^Lz_k + (u_k^{L,*})^TS^L\delta u_k^{L}]\\ &+x_{N + 1}^TP^L_{N +1}z_{N + 1}] \\ &=\mathbb{E}[\sum\limits_{k = 0}^N[\theta _{k - 1}^L- \mathbb{E}[A^T\theta _k^L|\mathcal{F}_k^L]]^Tz_k \\ &+\sum\limits_{k = 0}^N(u_k^{L,*})^TS^L\delta u_k^{L}+(\theta _N^L)^Tz_{N + 1}] \\ &=\mathbb{E}[\sum\limits_{k = 0}^N[(B^L)^T\theta _k^L+S^Lu_k^{L,*}]^T\delta u_k^{L}]. \end{align*} The proof is complete. \end{proof} Similar to Lemma \ref{lem1} and its proof, the following lemma is given without proof. \begin{lemma}\label{lem2} For the open-loop Nash equilibrium $(u_{k}^{L,*},u_{k}^{R,*})$, choose $\eta \in \mathbb{R}$, and for $k = 0, \cdots, N$, let $u_k^{R,\eta}= u_k^{R,*}+\eta\Delta u_k^{R}$, where $\Delta u_k^{R}\in \mathcal{U}_N^R$, $y_{k} = \frac{{x_{k}^\eta -{x_{k}}}}{\eta}$. 
Let ${x_{k}^\eta}$ and $J_N^R(u_k^{L,*},u_k^{R,\eta})$ be the state and cost function associated with $u_k^{R,\eta}$, $k =0, \cdots, N$, respectively. Then, we have
\begin{align}\label{rbf1}
&J_N^{R}(u_k^{L,*},u_k^{R,\eta}) - J_N^{R}(u_k^{L,*},u_k^{R,*})\notag\\
&={\eta ^2}\Delta J_N^{R}(\Delta u_k^{R})+2\eta \sum\limits_{k =0}^N \mathbb{E}[[{(B^R)^T}\theta _k^R + {M^R}u_k^{R,*}]^T\Delta u_k^{R}],
\end{align}
where $\Delta J_N^{R}(\Delta u_k^{R})$ is given by
\begin{align}\label{rbf2}
\Delta J_N^{R}(\Delta u_k^{R})=&\sum\limits_{k = 0}^N \mathbb{E}[y_k^T{Q^R}{y_k} + {{(\Delta u_k^{R})}^T}M^R\Delta u_k^{R}]\notag\\
&+ \mathbb{E}[y_{N + 1}^TP_{N + 1}^R{y_{N + 1}}].
\end{align}
Moreover, the costate $\theta_k^R$ $(k =0, \cdots, N)$ satisfies
\begin{align}\label{rbf4}
\theta _{k - 1}^R = {Q^R}{x_k} + \mathbb{E}[{A^T}{\theta _k^R}|{\mathcal F_k^L}],
\end{align}
with terminal condition $\theta_{N}^R=P_{N+1}^Rx_{N+1}$.
\end{lemma}
Using the results derived in Lemmas \ref{lem1}-\ref{lem2}, we now present the main results of this section: necessary and sufficient conditions for the open-loop Nash equilibrium of Problem LRSNG.
\begin{theorem}\label{th-01} For system \eqref{ss1} and cost functions \eqref{cf1} and \eqref{cf2}, the open-loop Nash equilibrium $(u_k^{L,*}, u_k^{R,*})$ for Problem LRSNG is unique if and only if the following two conditions are satisfied:

1) The convexity condition holds:
\begin{align}\label{cc}
\inf \delta J_N^L(\delta u_k^{L})\geq 0,~~\text{and}~~ \inf \Delta J_N^R(\Delta u_k^{R})\geq 0,
\end{align}
in which $\delta J_N^L(\delta u_k^{L})$ and $\Delta J_N^R(\Delta u_k^{R})$ are given by \eqref{lbf2} and \eqref{rbf2}, respectively.
2) The stationary conditions can be uniquely solved:
\begin{align}
0&=S^Lu_k^{L,*}+\mathbb{E}[(B^L)^T\theta _k^L|\mathcal{F}_k^L],\label{ec1}\\
0&=M^Ru_k^{R,*}+ \mathbb{E}[(B^R)^T\theta _k^R|\mathcal{F}_k^R],\label{ec2}
\end{align}
where $\theta_k^L$, $\theta _k^R$ satisfy \eqref{lbf4} and \eqref{rbf4}, respectively.
\end{theorem}
\begin{proof}
`Necessity': Suppose the open-loop Nash equilibrium $(u_k^{L,*},u_k^{R,*})$ for Problem LRSNG is unique; we show that conditions 1)-2) are satisfied. Indeed, for the open-loop Nash equilibrium $(u_k^{L,*},u_k^{R,*})$, from Lemmas \ref{lem1}-\ref{lem2}, we know that for arbitrary $\delta u_k^{L}\in \mathcal{U}_N^L$, $\varepsilon\in \mathbb{R}$, and arbitrary $\Delta u_k^{R}\in \mathcal{U}_N^R$, $\eta \in \mathbb{R}$, there holds
\begin{align}\label{diff1}
&{J_N^L}(u_k^{L,\varepsilon},u_k^{R,*}) - {J_N^L}(u_k^{L,*},u_k^{R,*})\notag\\
= &2\varepsilon \sum\limits_{k = 0}^N \mathbb{E}[[(B^L)^T\theta _k^L + S^Lu_k^{L,*}]^T\delta u_k^{L}]+{\varepsilon ^2}\delta J_N^L(\delta u_k^{L}) \notag\\
\geq& 0,
\end{align}
and
\begin{align}\label{diff2}
&J_N^R(u_k^{L,*},u_k^{R,\eta}) - J_N^R(u_k^{L,*},u_k^{R,*})\notag\\
=&2\eta \sum\limits_{k = 0}^N \mathbb{E}[[(B^R)^T\theta _k^R + {M^R}u_k^{R,*}]^T\Delta u_k^{R}]+{\eta ^2}\Delta J_N^R(\Delta u_k^{R}) \notag\\
\geq& 0.
\end{align}
On the one hand, suppose the convexity condition \eqref{cc} is not true, say $\delta J_N^L(\delta u_k^{L})<0$ for some $\delta u_k^{L}$, $k=0, \cdots, N$. Then, since the quadratic term dominates, ${J_N^L}(u_k^{L,\varepsilon} ,u_k^{R,*}) - {J_N^L}(u_k^{L,*},u_k^{R,*})\rightarrow-\infty$ as $\varepsilon\rightarrow\infty$, which contradicts \eqref{diff1}. For the same reason, if $\Delta J_N^R(\Delta u_k^{R})<0$ for some $\Delta u_k^{R}$, $k=0, \cdots, N$, then $J_N^R(u_k^{L,*},u_k^{R,\eta} )- J_N^R(u_k^{L,*},u_k^{R,*})\rightarrow-\infty$ as $\eta\rightarrow\infty$, which contradicts \eqref{diff2}. Hence 1) must hold.
On the other hand, if 2) is not satisfied, then we may assume that
\begin{align}
S^Lu_k^{L,*}+\mathbb{E}[(B^L)^T\theta _k^L|\mathcal{F}_k^L]&=\Theta_k^L \neq 0,\label{u11}\\
M^Ru_k^{R,*}+\mathbb{E}[(B^R)^T\theta _k^R|\mathcal{F}_k^R]&=\Theta_k^R \neq 0.\label{u12}
\end{align}
In this case, if we choose $\delta u_k^{L}=\Theta_k^L$ and $\Delta u_k^{R}=\Theta_k^R$, then from \eqref{diff1} and \eqref{diff2} we have
\begin{align*}
&{J_N^L}(u_k^{L,\varepsilon},u_k^{R,*}) - {J_N^L}(u_k^{L,*},u_k^{R,*})\\
&= 2\varepsilon\sum_{k=0}^{N} \mathbb{E}[(\Theta_k^L)^T\Theta_k^L] +\varepsilon^2\delta J_N^L(\delta u_k^{L}),\\
&J_N^R(u_k^{L,*},u_k^{R,\eta} ) - J_N^R(u_k^{L,*},u_k^{R,*})\\
&=2\eta\sum_{k=0}^{N} \mathbb{E}[(\Theta_k^R)^T\Theta_k^R] +\eta^2\Delta J_N^R(\Delta u_k^{R}).
\end{align*}
Then, we can always find some $\varepsilon<0$ and $\eta<0$ of sufficiently small magnitude such that ${J_N^L}(u_k^{L,\varepsilon},u_k^{R,*}) - {J_N^L}(u_k^{L,*},u_k^{R,*})<0$ and $J_N^R(u_k^{L,*},u_k^{R,\eta} ) - J_N^R(u_k^{L,*},u_k^{R,*})<0$, which contradicts \eqref{diff1} and \eqref{diff2}. Thus, $\Theta_k^L=0$ and $\Theta_k^R=0$, i.e., \eqref{ec1} and \eqref{ec2} hold. This completes the necessity proof.

`Sufficiency': Suppose the two conditions 1)-2) hold; we prove that the open-loop Nash equilibrium $(u_k^{L,*}, u_k^{R,*})$ is unique. Indeed, combining \eqref{lbf1} and \eqref{rbf1} with the stationary conditions \eqref{ec1}-\eqref{ec2}, for any $\varepsilon\in\mathbb{R}$, $\eta\in\mathbb{R}$ and $\delta u_k^{L}\in \mathcal{U}_N^L$, $\Delta u_k^{R}\in \mathcal{U}_N^R$, we have
\begin{align*}
{J_N^L}(u_k^{L,\varepsilon},u_k^{R,*}) - {J_N^L}(u_k^{L,*},u_k^{R,*})&=\varepsilon^2\delta J_N^L(\delta u_k^{L})\geq 0,\\
J_N^R(u_k^{L,*},u_k^{R,\eta} ) - J_N^R(u_k^{L,*},u_k^{R,*})&={\eta ^2}\Delta J_N^R(\Delta u_k^{R})\geq 0,
\end{align*}
which means that the open-loop Nash equilibrium of Problem LRSNG is unique. The proof is complete.
\end{proof}
\begin{remark}\label{rm2} By using the variational method, the maximum principle for Problem LRSNG is derived. Furthermore, Theorem \ref{th-01} provides, for the first time, necessary and sufficient solvability conditions for the open-loop Nash equilibrium of Problem LRSNG with inconsistent information structure.
\end{remark}
\begin{remark}\label{rm3} From Theorem \ref{th-01}, the forward-backward stochastic difference equations (FBSDEs) can be given as follows
\begin{align}\label{fbsde}
\left\{
\begin{array}{ll}
&x_{k + 1}= Ax_k + {B^L}u_k^L + B^Ru_k^R + w_k,\\
&\theta _{k-1}^{L}=Q^Lx_k+ \mathbb{E}\left[ A^T\theta_{k}^{L}|\mathcal{F}_k^L \right],\\
&\theta _{k - 1}^R= {Q^R}{x_k} + \mathbb{E}[{A^T}{\theta _k^R}|{\mathcal F_k^L}],\\
&0=S^Lu_k^{L,*}+\mathbb{E}[(B^L)^T\theta _k^L|\mathcal{F}_k^L],\\
&0=M^Ru_k^{R,*}+ \mathbb{E}[(B^R)^T\theta _k^R|\mathcal{F}_k^R],\\
&\theta_N^L=P_{N+1}^{L}x_{N+1},\quad \theta_N^R=P_{N+1}^{R}x_{N+1}.
\end{array}
\right.
\end{align}
Due to the different information structure caused by the unreliable uplink channel, the FBSDEs \eqref{fbsde} cannot be decoupled as in the traditional consistent-information case; see \cite{nglb2014,nb2012}. Hence the explicit feedback Nash equilibrium cannot be derived by solving the FBSDEs \eqref{fbsde}, which is challenging.
\end{remark}
\section{Closed-loop Nash equilibrium}
In this section, the closed-loop Nash equilibrium for Problem LRSNG will be studied. As illustrated in Remark \ref{rm3}, owing to the inconsistent information structure, the FBSDEs \eqref{fbsde} of Theorem \ref{th-01} cannot be decoupled, so the explicit Nash equilibrium cannot be derived that way. Alternatively, our aim is to obtain an explicit feedback Nash equilibrium (the closed-loop Nash equilibrium) by means of orthogonal decomposition and completion-of-squares arguments.
\subsection{Preliminaries}
To begin with, some preliminary results on the orthogonal decomposition method shall be introduced, which will be used in deriving an explicit feedback Nash equilibrium (the closed-loop Nash equilibrium). Without loss of generality, the following standard assumption is made; see \cite{eln2013,y2013,ls1995}.
\begin{assumption}\label{ass1} The weighting matrices in \eqref{cf1}, \eqref{cf2} satisfy $Q^L\ge 0$, $Q^R\ge 0$, $S^L>0$, $S^R>0$, $M^L>0$, $M^R>0$, and $P_{N+1}^L\ge 0$, $P_{N+1}^R\ge 0$.
\end{assumption}
For ease of discussion, the following notations are introduced:
\begin{align}\label{ncl}
&{\Lambda^L}=
\begin{bmatrix}
S^L & \\
& M^L
\end{bmatrix},
{\Lambda^R}=
\begin{bmatrix}
S^R & \\
& M^R
\end{bmatrix},
U_k=\begin{bmatrix}
\hat{u}_k^L\\
u_k^R\\
\end{bmatrix},\notag\\
&\mathcal {B}=\begin{bmatrix}
B^L&B^R
\end{bmatrix},
\hat{u}_k^L=\mathbb{E}[u_k^L|\mathcal F_k^R],
\tilde u_k^L=u_k^L-\hat {u}_k^L.
\end{align}
Owing to the unreliable uplink channel from the local player to the remote player, an estimator based on the disturbed state information must be derived; it is presented in the following lemma.
\begin{lemma}\label{lem3} For system \eqref{ss1} and the disturbed state \eqref{is1}, in the sense of minimizing the error covariance, using the notations introduced in \eqref{ncl}, the optimal estimator $\hat x_{k|k}=\mathbb{E}[x_k|\gamma_0x_0,\cdots, \gamma_kx_k]$ can be calculated as
\begin{align}\label{kw6}
\hat{x}_{k|k}={\gamma _k}x_k+(1 - {\gamma _k})(A\hat x_{k-1|k-1}+\mathcal BU_{k-1}),
\end{align}
with initial condition $\hat x_{0|0}={\gamma _0}x_0+(1 - {\gamma _0})\mu$, where $\mu$ is the mean value of the initial state $x_0$.
Moreover, the estimation error ${{\tilde x}_k} = {x_k} - {{\hat x}_{k|k}}$ satisfies
\begin{align}\label{kw7}
\tilde{x}_k= (1 - {\gamma _k})(A{{\tilde x}_{k - 1}} + {B^L}{{\tilde u}_{k - 1}^L} + {w_{k - 1}}),
\end{align}
with initial condition $\tilde{x}_0=(1-\gamma _0)(x_0-\mu)$.
\end{lemma}
\begin{proof}
The detailed proof can be found in \cite{qz2017a,qz2017b} and is omitted here.
\end{proof}
To be consistent with the information structure introduced in \eqref{cs1}, the explicit feedback Nash equilibrium of Problem LRSNG is assumed to take the following form; see \cite{sjz2012a,sjz2012b,ott2016,aon2019}. Specifically, based on \eqref{ncl} and Lemma \ref{lem3}, we assume:
\begin{assumption}\label{ass2} On the one hand, $U_k$ is a feedback of the optimal estimator $\hat{x}_{k|k}$, i.e., $U_k=\tilde{K}_k^{L}\hat{x}_{k|k}$. On the other hand, $\tilde u_k^{L}$ is a feedback of the estimation error $\tilde x_k$, i.e., $\tilde u_k^{L}=\tilde{K}_k^R\tilde x_k$.
\end{assumption}
From Assumption \ref{ass2}, the explicit feedback Nash equilibrium (the closed-loop Nash equilibrium) is of the following form:
\begin{align}\label{form}
u_k^L & =[I_{m_1} ~ 0]\tilde{K}_k^{L}\hat{x}_{k|k}+\tilde{K}_k^R\tilde x_{k},~\text{and}~
u_k^{R} =[0 ~ I_{m_2}]\tilde{K}_k^{L}\hat{x}_{k|k}.
\end{align}
In the following, we introduce the definition of the closed-loop Nash equilibrium.
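Before turning to that definition, the estimator recursions \eqref{kw6}-\eqref{kw7} of Lemma \ref{lem3} can be illustrated with a short numerical sketch. The snippet below is a minimal scalar ($n=1$) simulation with illustrative placeholder values (the matrices $A$, $B^L$, $B^R$, the inputs, and the Bernoulli parameter `p` for $\gamma_k$ are assumptions, not the paper's data); it checks that the orthogonal decomposition $x_k=\hat x_{k|k}+\tilde x_k$ is preserved along the trajectory:

```python
import random

random.seed(0)

# Scalar sketch (n = 1) of the estimator recursion in Lemma 3.
# All numerical values below are illustrative placeholders.
A, BL, BR = 1.1, 0.5, 0.3
p = 0.5                    # assumed P(gamma_k = 1) for the unreliable uplink
mu = 0.0                   # mean of the initial state x_0
N = 50

x = random.gauss(mu, 1.0)
g = 1 if random.random() < p else 0
xhat = g * x + (1 - g) * mu            # \hat x_{0|0} = gamma_0 x_0 + (1 - gamma_0) mu
xtil = (1 - g) * (x - mu)              # \tilde x_0 = (1 - gamma_0)(x_0 - mu)

for k in range(N):
    # Placeholder inputs: \hat u_k^L, u_k^R and the residual \tilde u_k^L.
    uL_hat, uR, uL_til = 0.1, -0.2, 0.05
    w = random.gauss(0.0, 0.1)
    # Dynamics x_{k+1} = A x_k + B^L u_k^L + B^R u_k^R + w_k, with u^L = \hat u^L + \tilde u^L.
    x = A * x + BL * (uL_hat + uL_til) + BR * uR + w
    g = 1 if random.random() < p else 0
    # Estimator (kw6): fall back to the prediction A \hat x + B^L \hat u^L + B^R u^R on packet loss.
    xhat = g * x + (1 - g) * (A * xhat + BL * uL_hat + BR * uR)
    # Error recursion (kw7): \tilde x_{k+1} = (1 - gamma_{k+1})(A \tilde x_k + B^L \tilde u_k^L + w_k).
    xtil = (1 - g) * (A * xtil + BL * uL_til + w)
    # The decomposition x = \hat x + \tilde x holds at every step.
    assert abs(x - (xhat + xtil)) < 1e-9
```

Note that whenever $\gamma_k=1$ the error resets to zero, matching \eqref{kw7}.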
\begin{definition}\label{def2} Under Assumption \ref{ass2}, a pair $(K_k^{L},K_k^{R})$ is called the closed-loop Nash equilibrium of Problem LRSNG if for any $(\tilde{K}_k^{L},\tilde{K}_k^{R})\in \mathbb{R}^{(m_1+m_2) \times n}\times \mathbb{R}^{m_1 \times n}$, the following relationships hold:
\begin{align}
J_N^L&([I_{m_1} ~~ 0]K_k^{L}\hat{x}_{k|k}+{K}_k^{R}\tilde x_{k},[0 ~~ I_{m_2}]K_k^{L}\hat{x}_{k|k})\label{clne1}\\
&\leq J_N^L([I_{m_1} ~~ 0]\tilde{K}_k^{L}\hat{x}_{k|k}+{K}_k^{R}\tilde x_{k},[0 ~~ I_{m_2}]\tilde{K}_k^L\hat{x}_{k|k}),\notag\\
J_N^R&([I_{m_1} ~~ 0]K_k^{L}\hat{x}_{k|k}+{K}_k^{R}\tilde x_{k},[0 ~~ I_{m_2}]K_k^{L}\hat{x}_{k|k})\label{clne2}\\
&\leq J_N^R([I_{m_1} ~~ 0]K_k^{L}\hat{x}_{k|k}+\tilde{K}_k^R\tilde x_{k},[0 ~~ I_{m_2}]K_k^{L}\hat{x}_{k|k}).\notag
\end{align}
\end{definition}
From Assumption \ref{ass2} and Definition \ref{def2}, the following lemma on the orthogonality property can be directly derived.
\begin{lemma}\label{lem4} Under Assumption \ref{ass2}, for arbitrary $(\tilde{K}_k^{L},\tilde{K}_k^{R})\in \mathbb{R}^{(m_1+m_2) \times n}\times \mathbb{R}^{m_1 \times n}$, the terms $U_k$ and $\tilde u_k^{L}$ given in Assumption \ref{ass2} are orthogonal, i.e., $\mathbb{E}[U_k^TH\tilde u_k^{L}]=0$ for any constant matrix $H$ of compatible dimensions.
\end{lemma}
\begin{proof}
In fact, from Assumption \ref{ass2}, we know that $U_k=\tilde{K}_k^L\hat{x}_{k|k}$ and $\tilde u_k^{L}=\tilde{K}_k^R\tilde x_k$, hence
\begin{align}
\mathbb{E}[U_k^TH\tilde{u}_k^L] =&\mathbb{E}[(\tilde{K}_k^L\hat{x}_{k|k})^TH\tilde{K}^R_k\tilde{x}_k]\notag\\
=&\mathbb{E}[\hat{x}_{k|k}^T(\tilde{K}_k^L)^TH\tilde{K}^R_k\tilde{x}_k]\notag\\
=&0,
\end{align}
where the orthogonality of $\hat{x}_{k|k}$ and $\tilde{x}_k$ has been used.
\end{proof}
\begin{remark}
Note that $u_k^L$ is $\mathcal{F}_{k}^L$-adapted, $u_k^R$ is $\mathcal{F}_{k}^R$-adapted, and $\mathcal{F}_k^R \subseteq \mathcal{F}_{k}^L$. Based on this property, Assumption \ref{ass2} is imposed, and the orthogonality of $U_k$ and $\tilde u_k^{L}$ is established in Lemma \ref{lem4}. This approach is called the orthogonal decomposition method, and it plays a key role in deriving the closed-loop Nash equilibrium.
\end{remark}
Using the results of Lemma \ref{lem4}, the cost functions \eqref{cf1}-\eqref{cf2} can be equivalently rewritten as follows:
\begin{align}
J_N^L(u_k^L, u_k^R)\notag\\
&= \sum\limits_{k = 0}^N \mathbb{E}[x_k^T{Q^L}{x_k} + U_k^T{\Lambda^L}{U_k} + (\tilde u_k^L)^T{S^L}{\tilde u_k^L}]\notag\\
&+\mathbb{E}[ x_{N + 1}^T{P_{N + 1}^L}{x_{N + 1}}],\label{kw2}\\
J_N^R({u_k^L},u_k^R)\notag\\
&= \sum\limits_{k = 0}^N \mathbb{E}[x_k^T{Q^R}{x_k} + U_k^T{\Lambda^R}{U_k} + (\tilde u_k^L)^T{S^R}{\tilde u_k^L}]\notag\\
&+\mathbb{E}[ x_{N + 1}^T{P_{N + 1}^R}{x_{N + 1}}]\label{kw3}.
\end{align}
Besides, for simplicity, the system dynamics \eqref{ss1} can be rewritten as
\begin{align}\label{kw1}
x_{k+1}=Ax_k+\mathcal BU_k+B^L\tilde{u}_k^L+w_k.
\end{align}
\subsection{The closed-loop Nash equilibrium}
Before presenting the main results on the closed-loop Nash equilibrium, we first introduce the following coupled Riccati equations.
{\small\begin{align}\label{rere}
\left\{
\begin{array}{ll}
P_k^L&= {A^T} P_{k+1}^LA+ Q^L-(K_k^L)^T(\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal {B})K_k^L,\\
P_k^R&={A^T} P_{k+1}^RA + Q^R-(K_k^R)^T[{S^R}+(B^L)^TP_{k+1}^RB^L]K_k^R\\
&+p[(A + B^LK_k^R)^T \Omega_{k+1}^R(A + B^LK_k^R)\\
&- {(A + B^LK_k^R)^T} P_{k+1}^R(A + B^LK_k^R)],\\
\Omega_k^L&=p[(A + B^LK_k^R)^TP_{k+1}^L(A + B^LK_k^R)]\\
&+(1-p)[(A + B^LK_k^R)^T\Omega_{k+1}^L(A + B^LK_k^R)]\\
&+{Q^L}+ (K_k^R)^T{S^L}K_k^R,\\
\Omega_k^R&={(A + \mathcal {B}K_k^L)^T} \Omega_{k+1}^R(A + \mathcal {B}K_k^L)\\
&+ {Q^R}+ (K_k^L)^T\Lambda^{R}K_k^L,\\
K_k^{L}&=-(\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal B)^{-1}\mathcal {B}^TP_{k+1}^LA,\\
K_k^{R}&=-[S^R+(B^L)^TP_{k+1}^RB^L]^{-1}(B^L)^TP_{k+1}^RA,\\
&S^R+(B^L)^TP_{k+1}^RB^L>0,
\end{array}
\right.
\end{align}}
with terminal conditions $\Omega_{N+1}^L=P_{N+1}^L$ and $\Omega_{N+1}^R=P_{N+1}^R$, where $P_{N+1}^L, P_{N+1}^R$ are given in \eqref{cf1}-\eqref{cf2}, respectively.
\begin{theorem}\label{th-02} Under Assumptions \ref{ass1} and \ref{ass2}, if the coupled Riccati equations \eqref{rere} are solvable, then the closed-loop Nash equilibrium of Problem LRSNG is unique, and it is given by
\begin{align}
u_k^{L,*} & =[I_{m_1} ~~ 0]K_k^{L}\hat{x}_{k|k}+{K}_k^{R}\tilde x_{k}\label{occ1},\\
u_k^{R,*} & =[0 ~~ I_{m_2}]K_k^{L}\hat{x}_{k|k}\label{occ2},
\end{align}
where $\hat{x}_{k|k}$ and $\tilde x_{k}$ are the optimal estimator and estimation error given in Lemma \ref{lem3}, and the gain matrices $K_k^L,K_k^R$ can be calculated via \eqref{rere}. With the closed-loop Nash equilibrium \eqref{occ1}-\eqref{occ2}, the corresponding optimal cost functions can be respectively calculated as follows.
\begin{align}
&J_N^L(u_k^{L,*}, u_k^{R,*}) = \mathbb{E}[\hat x_{0|0}^TP_0^L\hat x_{0|0}+\tilde x_0^T \Omega_0^L\tilde {x}_0]\notag\\
&~~~~~+p\sum\limits_{k = 0}^N Tr(\Sigma _wP_{k+1}^L) +(1 - p)\sum\limits_{k = 0}^N Tr(\Sigma _w\Omega_{k+1}^L)\label{ocf1},\\
&J_N^R(u_k^{L,*}, u_k^{R,*}) =\mathbb{E}[{{\hat x}_{0|0}}^T\Omega_0^R{\hat x}_{0|0} + \tilde {x}_0^TP_0^R\tilde {x}_0]\notag\\
&~~~~~+p\sum\limits_{k = 0}^NTr(\Sigma _w\Omega_{k+1}^R) +(1 - p)\sum\limits_{k = 0}^NTr(\Sigma _wP_{k+1}^R)\label{ocf2}.
\end{align}
\end{theorem}
Before giving the proof of Theorem \ref{th-02}, we present the following propositions, which will be useful in deriving the main results.
\begin{proposition}\label{prop1} Under Assumption \ref{ass1}, $\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal B$ is positive definite for $k=0, \cdots, N$.
\end{proposition}
\begin{proof}
The backward induction method is adopted to show that $\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal B>0$ for $k=0, \cdots, N$. From Assumption \ref{ass1}, we know that $P_{N+1}^L \geq 0$ and
$\Lambda^L =
\begin{bmatrix}
S^L & \\
& M^L
\end{bmatrix} > 0$. Then, $\Lambda^L+\mathcal {B}^TP_{N+1}^L\mathcal B >0$ can be derived, and \eqref{rere} is solvable for $k=N$, that is,
\begin{align}\label{pr1}
P_N^L&=Q^L+A^TP_{N+1}^LA-(K_N^L)^T(\Lambda^L+\mathcal {B}^TP_{N+1}^L\mathcal {B})K_N^L\notag\\
&=Q^L+A^TP_{N+1}^LA+(K_N^L)^T\mathcal {B}^TP_{N+1}^LA\notag\\
&+A^TP_{N+1}^L\mathcal {B}K_N^L+(K_N^L)^T(\Lambda^L+\mathcal {B}^TP_{N+1}^L\mathcal {B})K_N^L\notag\\
&=Q^L+(K_N^L)^T\Lambda^LK_N^L\notag\\
&+(A+\mathcal {B}K_N^L)^TP_{N+1}^L(A+\mathcal {B}K_N^L),
\end{align}
where $K_N^L$ is given as in \eqref{rere} for $k=N$. Noting that $\Lambda^L>0$, $Q^L\geq0$, and $P_{N+1}^L \geq 0$, $P_N^L\geq 0$ can be obtained from \eqref{pr1}.
Repeating the above procedure backward step by step, we can conclude that $P_k^L\geq0$, which indicates that $\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal B>0$ for any $0\leq k\leq N$.
\end{proof}
Based on the definition of the closed-loop Nash equilibrium (Definition \ref{def2}), the following two propositions are shown.
\begin{proposition}\label{prop2} Under Assumptions \ref{ass1}-\ref{ass2}, for the closed-loop Nash equilibrium $(K_k^{L}, K_k^{R})$, which minimizes $J_N^L(u_k^L,u_k^R)$ (see Definition \ref{def2}), if $\tilde u_k^{L,*}=K_k^{R}\tilde{x}_{k}$ is given in advance, then $U_k^*$ can be calculated as
\begin{align}\label{mathu}
U_k^*=K_k^L\hat{x}_{k|k},
\end{align}
in which $K_k^L$ is given by
\begin{align}\label{kkl}
K_k^{L}&=-(\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal {B})^{-1}\mathcal {B}^T{P}_{k+1}^LA,
\end{align}
with $P_k^L$, $\Omega_k^L$ satisfying
\begin{align}
P_k^L&= {A^T} P_{k+1}^LA-(K_k^L)^T(\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal {B})K_k^L\notag\\
&+ {Q^L},~\text{with terminal condition}~P_{N+1}^L,\label{pkl}\\
\Omega_k^L&=p[(A + B^LK_k^R)^TP_{k+1}^L(A + {B}^LK_k^R)]\notag\\
&+(1-p)[(A + B^LK_k^R)^T\Omega_{k+1}^L(A + {B}^LK_k^R)]\notag\\
&+{Q^L}+ (K_k^R)^TS^LK_k^R,~~\Omega_{N+1}^L=P_{N+1}^L.\label{okl}
\end{align}
In this case, the optimal cost $J_N^L(u_k^{L,*}, u_k^{R,*})$ is given by \eqref{ocf1}.
\end{proposition}
\begin{proof}
For the sake of discussion, we define $V_N^L(\hat{x}_{k|k},\tilde{x}_k)$ as follows:
\begin{align}\label{vnlk}
V_N^L(\hat{x}_{k|k},\tilde{x}_k)&=\mathbb{E}[\hat x_{k|k}^T P_k^L\hat {x}_{k|k}+\tilde x_k^T \Omega_k^L\tilde {x}_k],
\end{align}
where $P_k^L$, $\Omega_k^L$ satisfy equations \eqref{pkl}-\eqref{okl}.
Since $K_k^{R}$ is given, i.e., $\tilde{u}_k^{L,*}=K_k^{R}\tilde{x}_{k}$, we have
\begin{align}\label{lzhs1}
&V_N^L(\hat{x}_{k|k},\tilde{x}_k)- V_N^L(\hat{x}_{k+1|k+1},\tilde{x}_{k+1})\notag\\
&= \mathbb{E}[\hat x_{k|k}^TP_k^L{{\hat x}_{k|k}}+\tilde x_k^T \Omega_k^L\tilde {x}_k]\notag\\
&- \mathbb{E}[\hat x_{k + 1|k + 1}^T P_{k+1}^L{{\hat x}_{k + 1|k + 1}}]-\mathbb{E}[\tilde x_{k + 1}^T \Omega_{k+1}^L\tilde {x}_{k + 1}]\notag\\
&= \mathbb{E}[\hat x_{k|k}^TP_k^L\hat {x}_{k|k}+\tilde {x}_k^T \Omega_k^L\tilde {x}_k]\notag\\
&-\mathbb{E}[[\gamma _{k + 1}(A+ B^LK_k^{R})\tilde {x}_k + \gamma _{k + 1}w_k + A\hat {x}_{k|k} +\mathcal {B}U_k]^TP_{k+1}^L\notag\\
&\times [\gamma _{k + 1}(A+ B^LK_k^{R})\tilde {x}_k + \gamma _{k + 1}w_k + A\hat {x}_{k|k} +\mathcal {B}U_k]]\notag\\
&- \mathbb{E}[[(1 - \gamma _{k + 1})(A + B^LK_k^{R})\tilde {x}_k +(1 - \gamma _{k + 1}) w_k]^T\Omega_{k+1}^L\notag\\
&\times[(1 - \gamma _{k + 1})(A + B^LK_k^{R})\tilde {x}_k +(1 - \gamma _{k + 1}) w_k]]\notag\\
&= \mathbb{E}[-(U_k - K_k^L\hat{x}_{k|k})^T(\Lambda^L + \mathcal {B}^TP_{k+1}^L\mathcal {B})\notag\\
&\times(U_k - K_k^L\hat{x}_{k|k})+ U_k^T\Lambda^LU_k]\notag\\
&+\mathbb{E}[\tilde x_k^T[\Omega_k^L - (1-p)(A + B^LK_k^{R})^T \Omega_{k+1}^L(A + B^LK_k^{R})\notag\\
&-p(A + B^LK_k^{R})^T P_{k+1}^L(A + B^LK_k^{R})]\tilde {x}_k]\notag\\
&+\mathbb{E}[\hat{x}_{k|k}^T [P_k^L-A^TP_{k+1}^LA\notag\\
&+(K_k^L)^T(\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal {B})K_k^L]\hat{x}_{k|k}]\notag\\
&-(1-p)Tr(\Sigma _w\Omega_{k+1}^L)-pTr(\Sigma _wP_{k+1}^L).
\end{align}
Furthermore, from \eqref{kkl}-\eqref{okl}, it can be derived from \eqref{lzhs1} that
\begin{align}\label{lzhs2}
&V_N^L(\hat{x}_{k|k},\tilde{x}_k)- V_N^L(\hat{x}_{k+1|k+1},\tilde{x}_{k+1})\notag\\
&= \mathbb{E}[-(U_k - K_k^L\hat{x}_{k|k})^T(\Lambda^L + \mathcal {B}^TP_{k+1}^L\mathcal {B})\notag\\
&\times(U_k - K_k^L\hat{x}_{k|k})+ U_k^T\Lambda^LU_k]\notag\\
&+\mathbb{E}[\hat x_{k|k}^TQ^L\hat {x}_{k|k}]+\mathbb{E}[\tilde{x}_k^T[Q^L+(K_k^{R})^TS^LK_k^{R}]\tilde{x}_k]\notag\\
&-(1 - p)Tr(\Sigma _w\Omega_{k+1}^L)-pTr(\Sigma _wP_{k+1}^L).
\end{align}
Summing both sides of \eqref{lzhs2} from $k=0$ to $k=N$ yields
\begin{align}\label{lzhs3}
&V_N^L(\hat{x}_{0|0},\tilde{x}_0)- V_N^L(\hat{x}_{N+1|N+1},\tilde{x}_{N+1})\notag\\
&= \mathbb{E}[{\hat x_{0|0}}^TP_0^L\hat x_{0|0}+\tilde x_0^T\Omega_0^L\tilde {x}_0]\notag\\
&- \mathbb{E}[\hat x_{N+1|N+1}^TP_{N+1}^L\hat {x}_{N+1|N+1}+\tilde x_{N+1}^T\Omega_{N+1}^L\tilde {x}_{N + 1}]\notag\\
&= \sum\limits_{k = 0}^N \mathbb{E}[-(U_k - K_k^L\hat{x}_{k|k})^T(\Lambda^L + \mathcal {B}^TP_{k+1}^L\mathcal {B})\notag\\
&\times(U_k - K_k^L\hat{x}_{k|k})+ U_k^T\Lambda^LU_k]\notag\\
&+ \mathbb{E}[\hat x_{k|k}^TQ^L\hat {x}_{k|k}] + \mathbb{E}[\tilde x_k^T[Q^L+(K_k^{R})^TS^LK_k^{R}]\tilde {x}_k]\notag\\
&-(1 - p)\sum\limits_{k = 0}^N Tr(\Sigma _w \Omega_{k+1}^L)- p\sum\limits_{k = 0}^N Tr(\Sigma _wP_{k+1}^L).
\end{align} Then, from \eqref{kw2} we have that \color{blue}egin{align}\label{ocf3} &J_N^L(u_k^L, u_k^R)=\smallum\limits_{k = 0}^N \mathbb{E}[\hat x_{k|k}^T{Q^L}{{\hat x}_{k|k}}+U_k^T\Lambda^LU_k\normalsizeotag\\ & + \tilde x_k^T[Q^L+(K_k^{R})^TS^LK_k^{R}]\tilde {x}_k]+\mathbb{E}[x_{N + 1}^TP_{N+1}^Lx_{N + 1}]\normalsizeotag\\ &= \smallum\limits_{k = 0}^N \mathbb{E}[(U_k - K_k^L\hat{x}_{k|k})^T(\Lambda^L + \mathcal {B}^TP_{k+1}^L\mathcal {B})\normalsizeotag\\ &\times(U_k - K_k^L\hat{x}_{k|k})]+ \mathbb{E}[\hat x_{0|0}^TP_0^L\hat x_{0|0}+\tilde x_0^T\Omega_0^L\tilde {x}_0]\normalsizeotag\\ &+ (1 - p)\smallum\limits_{k = 0}^N Tr(\Sigma _w \Omega_{k+1}^L)+ p\smallum\limits_{k = 0}^NTr(\Sigma _wP_{k+1}^L). \end{align} As shown in Proposition \color{red}ef{prop1}, $\Lambda^L+\mathcal{B}^TP_{k+1}^L\mathcal{B}>0$, therefore, $J_N^L(u_k^L, u_k^R)$ can be minimized by \eqref{mathu} with $\tilde{u}_k^{L,*}=K_k^{R}\tilde{x}_{k}$ given in advance. This completes the proof. \end{proof} Similarly, the following proposition can be shown. \color{blue}egin{proposition}\label{prop3} Suppose Assumptions \color{red}ef{ass1}-\color{red}ef{ass2} hold, and denote $(K_k^{L}, K_k^{R})$ as the closed-loop Nash equilibrium of optimizing $J_N^R(u_k^L,u_k^R)$, if we set $ U_k^*=K_k^L\hat{x}_{k|k}$ in advance, then $ \tilde{u}_k^{L,*}$ is given by \color{blue}egin{align}\label{tildeu} \tilde{u}_k^{L,*}&=K_k^{R}\tilde{x}_{k}. \end{align} In the above, $K_k^R$ satisfies {\smallmall\color{blue}egin{align}\label{kkr} \left\{ \color{blue}egin{array}{ll} K_k^{R}&\hspace{-2mm}=-[S^R+(B^L)^T{P}_{k+1}^RB^L]^{-1}(B^L)^T{P}_{k+1}^RA,\\ P_k^R&\hspace{-2mm}={A^T} P_{k+1}^RA + Q^R-(K_k^R)^T[S^R+(B^L)^TP_{k+1}^RB^L]K_k^R\\ &+p[(A + B^LK_k^R)^T \Omega_{k+1}^R(A + B^LK_k^R)\\ &- {(A + B^LK_k^R)^T} P_{k+1}^R(A + B^LK_k^R)],\\ \Omega_k^R&\hspace{-2mm}=(A + \mathcal {B}K_k^L)^T \Omega_{k+1}^R(A + \mathcal {B}K_k^L)\\ &+ Q^R+ (K_k^L)^T\Lambda^RK_k^L \end{array} \color{red}ight. 
\end{align}}
Furthermore, the minimal value of $J_N^R(u_k^{L}, u_k^{R})$ is given by \eqref{ocf2}.
\end{proposition}
\begin{proof}
Following Proposition \ref{prop2} and its proof, we define
\begin{align*}
V_N^R(\hat{x}_{k|k},\tilde{x}_k) = \mathbb{E}[\hat x_{k|k}^T \Omega_k^R\hat {x}_{k|k} + \tilde x_k^TP_k^R\tilde {x}_k],
\end{align*}
in which $P_k^R$, $\Omega_k^R$ satisfy \eqref{kkr}. Hence, it can be derived that
\begin{align}\label{rzhs1}
&V_N^R(\hat{x}_{k|k},\tilde{x}_k)-V_N^R(\hat{x}_{k+1|k+1},\tilde{x}_{k+1})\notag\\
&= \mathbb{E}[\hat x_{k|k}^T \Omega_k^R\hat {x}_{k|k} + \tilde x_k^TP_k^R\tilde {x}_k]\notag\\
&- \mathbb{E}[\hat x_{k + 1|k + 1}^T\Omega_{k+1}^R\hat x_{k + 1|k + 1}]- \mathbb{E}[\tilde x_{k +1}^TP_{k+1}^R\tilde {x}_{k + 1}]\notag\\
&= \mathbb{E}[\hat x_{k|k}^T \Omega_k^R{{\hat x}_{k|k}} + \tilde x_k^TP_k^R\tilde {x}_k]\notag\\
&- \mathbb{E}[[\gamma _{k + 1}(A\tilde {x}_k + B^L\tilde {u}_k^L + w_k) + (A+\mathcal BK_k^{L})\hat {x}_{k|k}]^T\Omega_{k+1}^R\notag\\
&\times [{\gamma _{k + 1}}(A\tilde {x}_k + B^L\tilde {u}_k^L + w_k) + (A+\mathcal BK_k^{L})\hat {x}_{k|k}]]\notag\\
&- \mathbb{E}[[(1 - \gamma _{k + 1})(A\tilde {x}_k + B^L\tilde {u}_k^L+ {w_k})]^TP_{k+1}^R\notag\\
&\times[(1 - \gamma _{k + 1})(A\tilde {x}_k + {B^L}\tilde {u}_k^L + w_k)]]\notag\\
&= \mathbb{E}[\hat x_{k|k}^T \Omega_k^R\hat {x}_{k|k} + \tilde x_k^TP_k^R\tilde {x}_k]-\mathbb{E}[\tilde{x}_k^TA^TP_{k+1}^RA\tilde{x}_k]\notag\\
&-\mathbb{E}[\hat{x}_{k|k}^T(A+\mathcal {B}K_k^{L})^T\Omega_{k+1}^R(A+\mathcal {B}K_k^{L})\hat{x}_{k|k}]\notag\\
&-\mathbb{E}[(\tilde{u}_k^L-K_k^R\tilde{x}_k)^T[S^R+(B^L)^TP_{k+1}^RB^L](\tilde{u}_k^L-K_k^R\tilde{x}_k)]\notag\\
&+\mathbb{E}[(\tilde{u}_k^L)^TS^R\tilde{u}_k^L]+\mathbb{E}[p(A\tilde{x}_k+B^L\tilde{u}_k^L)^TP_{k+1}^R(A\tilde{x}_k+B^L\tilde{u}_k^L)\notag\\
&-p(A\tilde{x}_k+B^L\tilde{u}_k^L)^T\Omega_{k+1}^R(A\tilde{x}_k+B^L\tilde{u}_k^L)]\notag\\
&+\mathbb{E}[\tilde{x}_k^T(K_k^R)^T[S^R+(B^L)^TP_{k+1}^RB^L]K_k^R\tilde{x}_k]\notag\\
&-pTr(\Sigma _w\Omega_{k+1}^R) - (1 - p)Tr(\Sigma _wP_{k+1}^R).
\end{align}
Next, by using \eqref{rere}, we obtain
\begin{align}\label{rzhs2}
&V_N^R(\hat{x}_{k|k},\tilde{x}_k)-V_N^R(\hat{x}_{k+1|k+1},\tilde{x}_{k+1})\notag\\
&=-\mathbb{E}[(\tilde{u}_k^L-K_k^R\tilde{x}_k)^T[S^R+(B^L)^TP_{k+1}^RB^L](\tilde{u}_k^L-K_k^R\tilde{x}_k)]\notag\\
&+\mathbb{E}[(\tilde{u}_k^L)^TS^R\tilde{u}_k^L]+ \mathbb{E}[\hat x_{k|k}^T[{Q^R} + (K_k^{L})^T\Lambda^RK_k^{L}]{\hat x}_{k|k}]\notag\\
&+\mathbb{E}[\tilde {x}_k^TQ^R\tilde {x}_k]-pTr(\Sigma _w\Omega_{k+1}^R) - (1 - p)Tr(\Sigma _wP_{k+1}^R).
\end{align}
Summing both sides of \eqref{rzhs2} from $k=0$ to $k=N$ yields
\begin{align}\label{rzhs3}
&V_N^R(\hat{x}_{0|0},\tilde{x}_0)-V_N^R(\hat{x}_{N+1|N+1},\tilde{x}_{N+1})\notag\\
&= \mathbb{E}[\hat x_{0|0}^T\Omega_0^R\hat {x}_{0|0} + \tilde x_0^TP_0^R\tilde {x}_0]\notag\\
&- \mathbb{E}[\hat x_{N + 1|N + 1}^T\Omega_{N+1}^R{\hat x_{N + 1|N + 1}}+ \tilde x_{N + 1}^TP_{N+1}^R\tilde {x}_{N + 1}]\notag\\
&= \sum\limits_{k = 0}^N-\mathbb{E}[(\tilde{u}_k^L-K_k^R\tilde{x}_k)^T[S^R+(B^L)^TP_{k+1}^RB^L]\notag\\
&\times(\tilde{u}_k^L-K_k^R\tilde{x}_k)]+\mathbb{E}[(\tilde{u}_k^L)^TS^R\tilde{u}_k^L+\tilde x_k^TQ^R\tilde {x}_k]\notag\\
&+ \mathbb{E}[\hat x_{k|k}^T[Q^R + (K_k^{L})^T\Lambda^RK_k^{L}]{{\hat x}_{k|k}}]\notag\\
&-pTr(\Sigma _w\Omega_{k+1}^R) - (1 - p)Tr(\Sigma _wP_{k+1}^R).
\end{align}
Finally, from \eqref{kw3}, we have
\begin{align}\label{ocf4}
&J_N^R(u_k^{L}, u_k^{R})= \sum\limits_{k = 0}^N \mathbb{E}[\tilde x_k^TQ^R\tilde {x}_k+(\tilde{u}_k^L)^TS^R\tilde{u}_k^L] \notag\\
&+\mathbb{E}[ \hat x_{k|k}^T[Q^R + (K_k^{L})^T{\Lambda^R}K_k^{L}]\hat {x}_{k|k}]+ \mathbb{E}[x_{N + 1}^TP_{N+1}^Rx_{N + 1}] \notag\\
&= \sum\limits_{k = 0}^N \mathbb{E}[(\tilde{u}_k^L-K_k^R\tilde{x}_k)^T[S^R+(B^L)^TP_{k+1}^RB^L]\notag\\
&\times(\tilde{u}_k^L-K_k^R\tilde{x}_k)]+ \mathbb{E}[\hat x_{0|0}^T\Omega_0^R\hat {x}_{0|0} + \tilde x_0^TP_0^R\tilde {x}_0]\notag\\
&+p\sum\limits_{k = 0}^N Tr(\Sigma _w\Omega_{k+1}^R) + (1 - p)\sum\limits_{k = 0}^N Tr(\Sigma _wP_{k+1}^R).
\end{align}
Since $S^R+(B^L)^T{P}_{k+1}^RB^L>0$ by \eqref{rere}, $J_N^R(u_k^{L}, u_k^{R})$ is minimized by \eqref{tildeu}. The proof is complete.
\end{proof}
In the following, the proof of Theorem \ref{th-02} is given.\\
\begin{proof}
\textbf{Proof of Theorem \ref{th-02}.} Combining Propositions \ref{prop2}-\ref{prop3}, we conclude that if the coupled Riccati equations \eqref{rere} are solvable, then $(K_k^L, K_k^R)$ given in \eqref{mathu} and \eqref{tildeu} is the unique closed-loop Nash equilibrium in the sense of \eqref{clne1}-\eqref{clne2} of Definition \ref{def2}. Therefore, from \eqref{ncl}, the optimal actions $(u_k^{L,*},u_k^{R,*})$ are given by \eqref{occ1}-\eqref{occ2}. Moreover, the optimal costs $J_N^L(u_k^{L,*}, u_k^{R,*})$ and $J_N^R(u_k^{L,*}, u_k^{R,*})$ are given by \eqref{ocf1}-\eqref{ocf2}. This ends the proof.
\end{proof}
\begin{remark} In Theorem \ref{th-02}, the closed-loop Nash equilibrium of Problem LRSNG is derived in feedback form; to the best of our knowledge, the obtained results are new.
For the closed-loop Nash equilibrium of Problem LRSNG, on the one hand, the local player should adopt the closed-loop optimal Nash equilibrium $u_k^{L,*}$ if and only if the remote player adopts the closed-loop optimal feedback Nash equilibrium $u_k^{R,*}$, and the corresponding optimal cost is given by \eqref{ocf1}. On the other hand, the remote player should adopt the closed-loop optimal Nash equilibrium $u_k^{R,*}$ if and only if the local player adopts the closed-loop optimal feedback Nash equilibrium $u_k^{L,*}$, and the optimal cost \eqref{ocf2} is likewise obtained. Besides, the calculation of the closed-loop Nash equilibrium is based on the coupled Riccati equations \eqref{rere}.
\end{remark}
\section{Numerical Example}
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{Figure2.pdf}\\
\caption{Closed-loop Nash equilibrium: $K_k^L(1,i)$, $i=1,\cdots,4$, the first column of $K_k^L$.}\label{Figure2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{Figure3.pdf}\\
\caption{Closed-loop Nash equilibrium: $K_k^L(2,i)$, $i=1,\cdots,4$, the second column of $K_k^L$.}\label{Figure3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{Figure4.pdf}\\
\caption{Closed-loop Nash equilibrium: $K_k^R(1,i)$, $i=1,2$, the first column of $K_k^R$.}\label{Figure4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{Figure5.pdf}\\
\caption{Closed-loop Nash equilibrium: $K_k^R(2,i)$, $i=1,2$, the second column of $K_k^R$.}\label{Figure5}
\end{figure}
To illustrate the main results obtained in Theorem \ref{th-02}, a numerical example is provided in this section.
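The backward iteration of the coupled Riccati equations \eqref{rere} can also be sketched in a few lines of code. The snippet below is a minimal scalar illustration ($n=m_1=m_2=1$, so the $2\times 2$ inverse for $K_k^L$ is written out explicitly); all numerical values are illustrative placeholders and are not the computation behind the figures:

```python
# Scalar sketch (n = m1 = m2 = 1) of iterating the coupled Riccati
# equations (rere) backward in time. All values are illustrative.
A, BL, BR = 1.1, 0.3, 0.1
QL = QR = SL = SR = ML = MR = 1.0      # weights, cf. Assumption 1
p, N = 0.5, 50

# Terminal conditions P^L_{N+1}, P^R_{N+1}, Omega^L_{N+1}, Omega^R_{N+1}.
PL = PR = OL = OR = 1.0
KL = [0.0, 0.0]
KR = 0.0

for k in range(N, -1, -1):
    # K^L_k = -(Lambda^L + B^T P^L B)^{-1} B^T P^L A, with B = [B^L  B^R];
    # here the 2x2 solve is written out explicitly.
    a11 = SL + BL * BL * PL
    a12 = BL * BR * PL
    a22 = ML + BR * BR * PL
    det = a11 * a22 - a12 * a12
    b1, b2 = BL * PL * A, BR * PL * A
    KL = [-(a22 * b1 - a12 * b2) / det, -(a11 * b2 - a12 * b1) / det]
    # K^R_k = -[S^R + (B^L)^T P^R B^L]^{-1} (B^L)^T P^R A; the denominator
    # must stay positive, as required in (rere).
    denom = SR + BL * BL * PR
    assert denom > 0
    KR = -BL * PR * A / denom
    AclL = A + BL * KL[0] + BR * KL[1]     # A + B K^L
    AclR = A + BL * KR                     # A + B^L K^R
    # Backward updates of P^L, P^R, Omega^L, Omega^R (applied simultaneously).
    quadL = a11 * KL[0] ** 2 + 2 * a12 * KL[0] * KL[1] + a22 * KL[1] ** 2
    PL_n = A * A * PL + QL - quadL
    PR_n = A * A * PR + QR - denom * KR * KR + p * (AclR * AclR * OR - AclR * AclR * PR)
    OL_n = p * AclR * AclR * PL + (1 - p) * AclR * AclR * OL + QL + SL * KR * KR
    OR_n = AclL * AclL * OR + QR + SR * KL[0] ** 2 + MR * KL[1] ** 2
    PL, PR, OL, OR = PL_n, PR_n, OL_n, OR_n

# Positivity in the spirit of Proposition 1 is preserved along the iteration.
assert PL > 0 and PR > 0 and OL > 0 and OR > 0
```

In the matrix case the explicit $2\times 2$ solve is replaced by a linear solve of dimension $m_1+m_2$, but the backward structure of the iteration is the same.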
Without loss of generality, we shall consider the system dynamics \eqref{ss1} and the cost functions \eqref{cf1}-\eqref{cf2} with the following coefficients:
\begin{align}\label{coes}
&N=50,\ p=0.5,\ \mu=0,\ A= \begin{bmatrix} {1.2}&{0}\\ {0}&{1.1} \end{bmatrix};\notag\\
&B^L= \begin{bmatrix} {0.3}&{0.2}\\ {0.4}&{-0.1} \end{bmatrix},\quad B^R= \begin{bmatrix} {0.1}&{0.2}\\ {0}&{0.1} \end{bmatrix};\notag\\
&Q^L=Q^R=S^L=S^R=M^L=M^R=I_2;\notag\\
&\Lambda^L=\begin{bmatrix} S^L & \\ & M^L \end{bmatrix}=I_4,\quad P^L_{N+1}=P^R_{N+1}=I_2.
\end{align}
Obviously, the coefficients in \eqref{coes} satisfy Assumption \ref{ass1}. Consequently, $P_k$ can be computed backwardly from \eqref{rere} using \eqref{coes}, and it can be verified that $\Lambda^L+\mathcal {B}^TP_{k+1}^L\mathcal {B}$ and $S^R+(B^L)^TP_{k+1}^RB^L$ are positive definite for $k=1, \cdots, 50$. By Theorem \ref{th-02}, the closed-loop Nash equilibrium of Problem LRSNG is unique, and $(K_k^L, K_k^R)$, $k=0, \cdots, 50$, can be computed backwardly from \eqref{rere}; the results are shown in Figures \ref{Figure2}-\ref{Figure5}. As can be seen from Figures \ref{Figure2}-\ref{Figure5}, the closed-loop Nash equilibrium $(K_k^L,K_k^R)$ converges as $N$ becomes large.

\section{Conclusion}
In this paper, we have investigated the open-loop and closed-loop local and remote Nash equilibria of Problem LRSNG for discrete-time stochastic systems with inconsistent information structure. This paper extends existing work on LQ stochastic games with consistent information structure to the inconsistent information structure case. Both the open-loop and closed-loop Nash equilibria have been derived, and a numerical example is given to illustrate the main results.
We believe the proposed methods and results will shed light on other kinds of stochastic game problems with inconsistent information structure.

\ifCLASSOPTIONcaptionsoff \newpage \fi

\begin{thebibliography}{99}
\bibitem{djl2000} E. J. Dockner, S. Jorgensen, N. Van Long, et al., \emph{Differential Games in Economics and Management Science}. Cambridge University Press, 2000.
\bibitem{cl2002} R. Cellini and L. Lambertini, ``A differential game approach to investment in product differentiation,'' \emph{Journal of Economic Dynamics and Control}, vol. 27, no. 1, pp. 51-62, 2002.
\bibitem{i1965} R. Isaacs, \emph{Differential Games}. New York: Wiley, 1965.
\bibitem{ek1972} R. J. Elliott and N. J. Kalton, \emph{The Existence of Value in Differential Games}. American Mathematical Soc., 1972.
\bibitem{i1999} R. Isaacs, \emph{Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization}. Courier Corporation, 1999.
\bibitem{zp2022} W. Zhang and C. Peng, ``Indefinite mean-field stochastic cooperative linear-quadratic dynamic difference game with its application to the network security model,'' \emph{IEEE Transactions on Cybernetics}, vol. 52, no. 11, pp. 11805-11818, 2022.
\bibitem{r2007} C. Rainer, ``Two different approaches to nonzero-sum stochastic differential games,'' \emph{Applied Mathematics and Optimization}, vol. 56, no. 1, pp. 131-144, 2007.
\bibitem{h1999} S. Hamadene, ``Nonzero sum linear-quadratic stochastic differential games and backward-forward equations,'' \emph{Stochastic Analysis and Applications}, vol. 17, no. 1, pp. 117-130, 1999.
\bibitem{sy2019} J. Sun and J. Yong, ``Linear-quadratic stochastic two-person nonzero-sum differential games: Open-loop and closed-loop Nash equilibria,'' \emph{Stochastic Processes and their Applications}, vol. 129, no. 2, pp. 381-418, 2019.
\bibitem{slz2011} H.
Sun, M. Li, and W. Zhang, ``Linear-quadratic stochastic differential game: infinite-time case,'' \emph{ICIC Express Letters}, vol. 5, no. 4B, pp. 1449-1454, 2011.
\bibitem{sjz2012a} H. Sun, L. Jiang, and W. Zhang, ``Infinite horizon linear quadratic differential games for discrete-time stochastic systems,'' \emph{Journal of Control Theory and Applications}, vol. 10, no. 3, pp. 391-396, 2012.
\bibitem{sjz2012b} H. Sun, L. Jiang, and W. Zhang, ``Feedback control on Nash equilibrium for discrete-time stochastic systems with Markovian jumps: finite-horizon case,'' \emph{International Journal of Control, Automation and Systems}, vol. 10, no. 5, pp. 940-946, 2012.
\bibitem{szl2021} R. Song, Q. Wei, H. Zhang, and F. L. Lewis, ``Discrete-time non-zero-sum games with completely unknown dynamics,'' \emph{IEEE Transactions on Cybernetics}, vol. 51, no. 6, pp. 2929-2943, 2021.
\bibitem{z2018} H. Zhou, ``The nonzero-sum games of discrete-time stochastic singular systems,'' \emph{2018 Chinese Control And Decision Conference (CCDC)}, pp. 1510-1514, 2018.
\bibitem{cyl2022} B.-S. Chen, C.-T. Yang, and M.-Y. Lee, ``Multiplayer noncooperative and cooperative minimax H$_\infty$ tracking game strategies for linear mean-field stochastic systems with applications to cyber-social systems,'' \emph{IEEE Transactions on Cybernetics}, vol. 52, no. 5, pp. 2968-2980, 2022.
\bibitem{nglb2014} A. Nayyar, A. Gupta, C. Langbort, and T. Başar, ``Common information based Markov perfect equilibria for stochastic games with asymmetric information: finite games,'' \emph{IEEE Transactions on Automatic Control}, vol. 59, no. 3, pp. 555-570, 2014.
\bibitem{nb2012} A. Nayyar and T. Başar, ``Dynamic stochastic games with asymmetric information,'' \emph{2012 IEEE 51st IEEE Conference on Decision and Control (CDC)}, pp. 7145-7150, 2012.
\bibitem{ott2016} Y. Ouyang, H. Tavafoghi, and D.
Teneketzis, ``Dynamic games with asymmetric information: Common information based perfect Bayesian equilibria and sequential decomposition,'' \emph{IEEE Transactions on Automatic Control}, vol. 62, no. 1, pp. 222-237, 2016.
\bibitem{qxz2020} Q. Qi, L. Xie, and H. Zhang, ``Optimal control for stochastic systems with multiple controllers of different information structures,'' \emph{IEEE Transactions on Automatic Control}, vol. 66, no. 9, pp. 4160-4175, 2021.
\bibitem{lx2018} X. Liang and J. Xu, ``Control for networked control systems with remote and local controllers over unreliable communication channel,'' \emph{Automatica}, vol. 98, pp. 86-94, 2018.
\bibitem{tyw2022} C. Tan, L. Yang, and W. S. Wong, ``Learning-based control policy and regret analysis for online quadratic optimization with asymmetric information structure,'' \emph{IEEE Transactions on Cybernetics}, vol. 52, no. 6, pp. 4797-4810, 2022.
\bibitem{lqz2021} X. Liang, Q. Qi, H. Zhang, et al., ``Decentralized control for networked control systems with asymmetric information,'' \emph{IEEE Transactions on Automatic Control}, vol. 67, no. 4, pp. 2076-2083, 2021.
\bibitem{aon2019} S. M. Asghari, Y. Ouyang, and A. Nayyar, ``Optimal local and remote controllers with unreliable uplink channels,'' \emph{IEEE Transactions on Automatic Control}, vol. 64, no. 5, pp. 1816-1831, 2019.
\bibitem{hv2000} R. Horowitz and P. Varaiya, ``Control design of an automated highway system,'' \emph{Proceedings of the IEEE}, vol. 88, no. 7, pp. 913-925, 2000.
\bibitem{gc2010} R. A. Gupta and M.-Y. Chow, ``Networked control system: overview and research trends,'' \emph{IEEE Transactions on Industrial Electronics}, vol. 57, no. 7, pp. 2527-2535, 2010.
\bibitem{hnx2007} J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, ``A survey of recent results in networked control systems,'' \emph{Proceedings of the IEEE}, vol. 95, pp. 138-162, 2007.
\bibitem{lczsm2014} N.
Lu, N. Cheng, N. Zhang, X. Shen, and J. W. Mark, ``Connected vehicles: Solutions and challenges,'' \emph{IEEE Internet of Things Journal}, vol. 1, no. 4, pp. 289-299, 2014.
\bibitem{eln2013} R. Elliott, X. Li, and Y. H. Ni, ``Discrete time mean-field stochastic linear-quadratic optimal control problems,'' \emph{Automatica}, vol. 49, no. 11, pp. 3222-3233, 2013.
\bibitem{y2013} J. Yong, ``Linear-quadratic optimal control problems for mean-field stochastic differential equations,'' \emph{SIAM J. Control Optim.}, vol. 51, no. 4, pp. 2809-2838, 2013.
\bibitem{ls1995} F. L. Lewis and V. L. Syrmos, \emph{Optimal Control}. John Wiley and Sons, 1995.
\bibitem{qz2017a} Q. Qi and H. Zhang, ``Output feedback control and stabilization for networked control systems with packet losses,'' \emph{IEEE Transactions on Cybernetics}, vol. 47, no. 8, pp. 2223-2234, 2017.
\bibitem{qz2017b} Q. Qi and H. Zhang, ``Output feedback control and stabilization for multiplicative noise systems with intermittent observations,'' \emph{IEEE Transactions on Cybernetics}, vol. 48, no. 7, pp. 2128-2138, 2018.
\end{thebibliography}
\end{document}
\begin{document}

\begin{center}{\Large \bf Hafnian point processes and quasi-free states on the CCR algebra} \end{center}

{\large Maryam Gharamah Ali Alshehri}\\ Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk, KSA; \\ e-mail: \texttt{[email protected]}

{\large Eugene Lytvynov}\\ Department of Mathematics, Swansea University, Swansea, UK;\\ e-mail: \texttt{[email protected]}

{\small \begin{center} {\bf Abstract} \end{center} \noindent Let $X$ be a locally compact Polish space and $\sigma$ a nonatomic reference measure on $X$ (typically $X=\mathbb R^d$ and $\sigma$ is the Lebesgue measure). Let $X^2\ni(x,y)\mapsto\mathbb K(x,y)\in\mathbb C^{2\times 2}$ be a $2\times 2$-matrix-valued kernel that satisfies $\mathbb K^T(x,y)=\mathbb K(y,x)$. We say that a point process $\mu$ in $X$ is hafnian with correlation kernel $\mathbb K(x,y)$ if, for each $n\in\mathbb N$, the $n$th correlation function of $\mu$ (with respect to $\sigma^{\otimes n}$) exists and is given by $k^{(n)}(x_1,\dots,x_n)=\operatorname{haf}\big[\mathbb K(x_i,x_j)\big]_{i,j=1,\dots,n}\,$. Here $\operatorname{haf}(C)$ denotes the hafnian of a symmetric matrix $C$. Hafnian point processes include permanental and 2-permanental point processes as special cases. A Cox process $\Pi_R$ is a Poisson point process in $X$ with random intensity $R(x)$. Let $G(x)$ be a complex Gaussian field on $X$ satisfying $\int_{\Delta}\mathbb E(|G(x)|^2)\sigma(dx)<\infty$ for each compact $\Delta\subset X$. Then the Cox process $\Pi_R$ with $R(x)=|G(x)|^2$ is a hafnian point process. The main result of the paper is that each such process $\Pi_R$ is the joint spectral measure of a rigorously defined particle density of a representation of the canonical commutation relations (CCR), in a symmetric Fock space, for which the corresponding vacuum state on the CCR algebra is quasi-free.
}

{\bf Keywords:} Hafnian point process, Cox process, permanental point process; quasi-free state on CCR algebra

\noindent {\it Mathematics Subject Classification (2020):} Primary 60G55, 46L30; Secondary 60G15

\section{Introduction}

\subsection{Hafnian point processes}

Let $X$ be a locally compact Polish space, let $\mathcal B(X)$ denote the Borel $\sigma$-algebra on $X$, and let $\mathcal B_0(X)$ denote the algebra of all pre-compact sets from $\mathcal B(X)$. Let $\sigma$ be a reference measure on $(X,\mathcal B(X))$ which is non-atomic (i.e., $\sigma(\{x\})=0$ for all $x\in X$) and Radon (i.e., $\sigma(\Delta)<\infty$ for all $\Delta\in\mathcal B_0(X)$). For applications, the most important example is $X=\mathbb R^d$ with $\sigma(dx)=dx$ the Lebesgue measure. A {\it (simple) configuration} $\gamma$ in $X$ is a Radon measure on $X$ of the form $\gamma=\sum_i\delta_{x_i}$, where $\delta_{x_i}$ denotes the Dirac measure with mass at $x_i$ and $x_i\ne x_j$ if $i\ne j$. Note that, since $\gamma$ is a Radon measure, it has a finite number of atoms in each compact set in $X$. Let $\Gamma(X)$ denote the set of all configurations $\gamma$ in $X$. Let $\mathcal C(\Gamma(X))$ denote the minimal $\sigma$-algebra on $\Gamma(X)$ such that, for each $\Delta\in\mathcal B_0(X)$, the mapping $\Gamma(X)\ni\gamma\mapsto\gamma(\Delta)$ is measurable. A {\it (simple) point process in $X$} is a probability measure on $(\Gamma(X),\mathcal C(\Gamma(X)))$. Denote $X^{(n)}:=\{(x_1,\dots,x_n)\in X^n\mid x_i\ne x_j\text{ if }i\ne j\}$. A measure on $X^{(n)}$ is called symmetric if it remains invariant under the natural action of the symmetric group $\mathfrak S_n$ on $X^{(n)}$.
For each $\gamma=\sum_i\delta_{x_i}\in\Gamma(X)$, the {\it spatial falling factorial} $(\gamma)_n$ is the symmetric measure on $X^{(n)}$ of the form
\begin{equation}\label{ctrsw5ywus}
(\gamma)_n:=\sum_{i_1}\sum_{i_2\ne i_1}\dotsm\sum_{i_n\ne i_1,\dots, i_n\ne i_{n-1}}\delta_{(x_{i_1}, x_{i_2}, \dots,x_{i_n})}.\end{equation}
Let $\mu$ be a point process in $X$. The {\it $n$-th correlation measure of $\mu$} is the symmetric measure $\theta^{(n)}$ on $X^{(n)}$ defined by
\begin{equation}\label{rqay45qy4q}
\theta^{(n)}(dx_1\dotsm dx_n):=\frac1{n!}\int_{\Gamma(X)}(\gamma)_n(dx_1\dotsm dx_n)\,\mu(d\gamma).
\end{equation}
If each measure $\theta^{(n)}$ is absolutely continuous with respect to $\sigma^{\otimes n}$, then the symmetric functions $k^{(n)}:X^{(n)}\to[0,\infty)$ satisfying
\begin{equation}\label{5w738}
d\theta^{(n)}=\frac1{n!}\,k^{(n)}d\sigma^{\otimes n}\end{equation}
are called the {\it correlation functions of the point process $\mu$}. Under a very weak assumption, the correlation functions (or correlation measures) uniquely identify a point process, see \cite{Lenard}.

Let $C=[c_{ij}]_{i,j=1,\dots,2n}$ be a symmetric $2n\times2n$-matrix. The {\it hafnian of $C$} is defined by
$$\operatorname{haf}(C):=\frac1{n!\,2^n}\sum_{\pi\in\mathfrak S_{2n}}\prod_{i=1}^n c_{\pi(2i-1)\pi(2i)}, $$
see e.g.\ \cite[Section~4.1]{Barvinok}. (Note that the value of the hafnian of $C$ does not depend on the diagonal elements of the matrix $C$.) The hafnian can also be written as
\begin{equation}\label{cxtseewu645}
\operatorname{haf}(C)=\sum c_{i_1j_1}\dotsm c_{i_nj_n},\end{equation}
where the summation is over all (unordered) partitions $\{i_1,j_1\},\dots,\{i_n,j_n\}$ of $\{1,\dots,2n\}$.
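The expansion \eqref{cxtseewu645} is easy to check numerically on small matrices. The following is a minimal sketch (the helper name \texttt{haf} and the brute-force recursion over perfect matchings are ours, not from the paper): a $4\times4$ matrix admits exactly three perfect matchings of its index set, and the diagonal entries indeed never enter.

```python
def haf(C):
    """Brute-force hafnian of a symmetric 2n x 2n matrix C:
    the sum over all perfect matchings of {0, ..., 2n-1},
    multiplying the entries C[i][j] over the matched pairs {i, j}."""
    def go(idx):
        if not idx:
            return 1
        i, rest = idx[0], idx[1:]
        # pair the smallest remaining index i with every other remaining index
        return sum(C[i][rest[k]] * go(rest[:k] + rest[k + 1:])
                   for k in range(len(rest)))
    assert len(C) % 2 == 0
    return go(tuple(range(len(C))))

print(haf([[1] * 4 for _ in range(4)]))  # 3 perfect matchings of {0,1,2,3}
print(haf([[9, 5], [5, 7]]))             # diagonal irrelevant: haf = c_{01} = 5
```

For a $2n\times 2n$ all-ones matrix this returns $(2n-1)!!$, the number of perfect matchings, consistently with \eqref{cxtseewu645}.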
Hafnians were introduced by physicist Edoardo Caianiello in the 1950's, while visiting Niels Bohr's group in Copenhagen (whose Latin name is Hafnia), as a boson analogue of the formula expressing the correlations of a quasi-free Fermi state.\footnote{We are grateful to the referee for sharing with us this historical fact.} By analogy with the definition of a pfaffian point process (see e.g.\ \cite[Section~10]{Borodin} and the references therein), we now define a hafnian point process. Let $X^2\ni(x,y)\mapsto\mathbb K(x,y)\in\mathbb C^{2\times 2}$ be a $2\times 2$-matrix-valued kernel that satisfies $\mathbb K^T(x,y)=\mathbb K(y,x)$. We will say that a point process $\mu$ is {\it hafnian with correlation kernel $\mathbb K(x,y)$} if, for each $n\in\mathbb N$, the $n$th correlation function of $\mu$ exists and is given by
\begin{equation}\label{vytds6}
k^{(n)}(x_1,\dots,x_n)=\operatorname{haf}\big[\mathbb K(x_i,x_j)\big]_{i,j=1,\dots,n}\,.\end{equation}
Note that the matrix
$$ \big[\mathbb K(x_i,x_j)\big]_{i,j=1,\dots,n}=\left[\begin{matrix} \mathbb K(x_1,x_1)&\mathbb K(x_1,x_2)&\dotsm&\mathbb K(x_1,x_n)\\ \mathbb K(x_2,x_1)&\mathbb K(x_2,x_2)&\dotsm&\mathbb K(x_2,x_n)\\ \vdots&\vdots&\vdots&\vdots\\ \mathbb K(x_n,x_1)&\mathbb K(x_n,x_2)&\dotsm&\mathbb K(x_n,x_n) \end{matrix}\right]$$
is built upon $2\times2$-blocks $\mathbb K(x_i,x_j)$, hence it has dimension $2n\times 2n$. Furthermore, the condition $\mathbb K^T(x,y)=\mathbb K(y,x)$ ensures that the matrix $\big[\mathbb K(x_i,x_j)\big]_{i,j=1,\dots,n}$ is symmetric, and so its hafnian is a well-defined number. Since
$$X^2=\{(x,x)\mid x\in X\}\sqcup X^{(2)},$$
for the definition of a hafnian point process, it is sufficient to assume that $\mathbb K(x,x)$ is defined for $\sigma$-a.a.\ $x\in X$, and the restriction of $\mathbb K(x,y)$ to $X^{(2)}$ is defined for $\sigma^{\otimes 2}$-a.a.\ $(x,y)\in X^{(2)}$.
Note that, for the hafnian point process $\mu$, the correlation kernel $\mathbb K(x,y)$ is not uniquely determined by $\mu$. Indeed, since the hafnian of a matrix does not depend on its diagonal elements, formula \eqref{vytds6} implies that the correlation functions $k^{(n)}(x_1,\dots,x_n)$ do not depend on the diagonal elements of the $2\times 2$-matrices $\mathbb K(x,x)$ for $x\in X$. Hence, these elements can be chosen arbitrarily.

Let $\alpha\in\mathbb R$ and let $B=[b_{ij}]_{i,j=1,\dots,n}$ be an $n\times n$ matrix. The {\it $\alpha$-determinant of $B$} is defined by
\begin{equation}\label{tera5yw3}
\operatorname{det}_\alpha (B):=\sum_{\pi\in \mathfrak S_n}\alpha^{n-\nu(\pi)}\prod_{i=1}^n b_{i\,\pi(i)},\end{equation}
see \cite{VJ,ST}. In formula \eqref{tera5yw3}, for $\pi\in \mathfrak S_n$, $\nu(\pi)$ denotes the number of cycles in the permutation $\pi$. In particular, for $\alpha=1$, $\operatorname{det}_1(B)$ is the usual permanent of $B$. A point process $\mu$ is called {\it $\alpha$-permanental (or $\alpha$-determinantal) with correlation kernel $K:X^2\to\mathbb C$} if, for each $n\in\mathbb N$, the $n$th correlation function of $\mu$ exists and is given by
$$k^{(n)}(x_1,\dots,x_n)=\operatorname{det}_\alpha [K(x_i,x_j)]_{i,j=1,\dots,n},$$
\cite{ST}, see also \cite{E2}. For $\alpha=1$, one calls $\mu$ a {\it permanental point process}.
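Formula \eqref{tera5yw3} can likewise be checked by brute force. In the sketch below (helper names \texttt{det\_alpha} and \texttt{num\_cycles} are ours), note that $\alpha^{n-\nu(\pi)}=\operatorname{sgn}(\pi)$ when $\alpha=-1$, so $\operatorname{det}_{-1}$ is the ordinary determinant, while $\operatorname{det}_1$ is the permanent.

```python
from itertools import permutations
from math import prod

def num_cycles(pi):
    """Number of cycles nu(pi) of a permutation pi of {0, ..., n-1},
    given as a tuple with pi[i] = image of i."""
    seen, count = set(), 0
    for i in range(len(pi)):
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = pi[i]
    return count

def det_alpha(alpha, B):
    """alpha-determinant: sum over permutations pi of
    alpha^(n - nu(pi)) * prod_i B[i][pi(i)]."""
    n = len(B)
    return sum(alpha ** (n - num_cycles(pi)) * prod(B[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

B = [[1, 2], [3, 4]]
print(det_alpha(-1, B))  # ordinary determinant: -2
print(det_alpha(1, B))   # permanent: 10
```

For the $2\times2$ example, the identity contributes $\alpha^0\,b_{11}b_{22}$ and the transposition (one cycle) contributes $\alpha^1\,b_{12}b_{21}$, matching the two printed values.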
As easily follows from \cite[Section~4.1]{Barvinok}, a permanental point process with correlation kernel $K(x,y)$ is hafnian with correlation kernel
$$\mathbb K(x,y)=\left[\begin{matrix}0&K(x,y)\\K(y,x)&0\end{matrix}\right].$$
Furthermore, similarly to \cite[Proposition~1.1]{Frenkel}, we see that a $2$-permanental point process with a symmetric correlation kernel $K(x,y)=K(y,x)$ is hafnian with the correlation kernel
$$\mathbb K(x,y)=\left[\begin{matrix}K(x,y)&K(x,y)\\K(x,y)&K(x,y)\end{matrix}\right].$$
For studies of permanental, and more generally $\alpha$-permanental point processes, we refer to \cite{BM,KMR,Macchi1,Macchi2,ST}.

Recall that a {\it Cox process $\Pi_R$} is a Poisson point process with a random intensity $R(x)$. Here $R(x)$ is a random field defined for $\sigma$-a.a.\ $x\in X$ and taking a.s.\ non-negative values. The correlation functions of the Cox process $\Pi_R$ are given by
\begin{equation}\label{xreew5u}
k^{(n)}(x_1,\dots,x_n)=\mathbb E\big(R(x_1)\dotsm R(x_n)\big).\end{equation}
Let $G(x)$ be a mean-zero, complex Gaussian field defined for $\sigma$-a.a.\ $x\in X$. Assume additionally that $\int_\Delta\mathbb E(|G(x)|^2)\sigma(dx)<\infty$ for each $\Delta\in\mathcal B_0(X)$. Let $R(x):=|G(x)|^2=G(x)\overline{G(x)}$.
Comparing the classical moment formula for Gaussian random variables with formula \eqref{cxtseewu645}, we immediately see that
\begin{equation}\label{dr6e6u43wq}
\mathbb E\big(R(x_1)\dotsm R(x_n)\big)=\operatorname{haf}\big[\mathbb K(x_i,x_j)\big]_{i,j=1,\dots,n}\, ,
\end{equation}
where
\begin{equation}\label{d6e6ie4}
\mathbb K(x,y)=\left[\begin{matrix} \mathbb E(G(x)G(y))&\mathbb E(G(x)\overline{G(y)})\\\mathbb E(\overline{G(x)}G(y))&\mathbb E(\overline{G(x)G(y)})\end{matrix}\right]=\left[\begin{matrix} \mathcal K_2(x,y)&\mathcal K_1(x,y)\\\overline{\mathcal K_1(x,y)}&\overline{\mathcal K_2(x,y)}\end{matrix}\right].\end{equation}
Here $\mathcal K_1(x,y):=\mathbb E(G(x)\overline{G(y)})$ is the {\it covariance} of the Gaussian field and $\mathcal K_2(x,y):=\mathbb E(G(x)G(y))$ is the {\it pseudo-covariance} of the Gaussian field. By \eqref{xreew5u}--\eqref{d6e6ie4}, the corresponding Cox process $\Pi_R$ is hafnian with the correlation kernel \eqref{d6e6ie4}.

In the case where the Gaussian field $G(x)$ is real-valued, the moments of $R(x)$ are given by the $2$-determinants built upon the kernel $K(x,y):=\mathcal K_1(x,y)=\mathcal K_2(x,y)$; hence $R(x)$ is a 2-permanental process, and $\Pi_R$ is a $2$-permanental point process with the correlation kernel $K(x,y)$, compare with \cite[Subsection~6.4]{ST}. For studies of $\alpha$-permanental processes, we refer e.g.\ to \cite{E1,E2,E3,E4,KMR,MR1,MR2,MR3} and the references therein.

A Gaussian random field is called {\it proper} if $\mathcal K_2(x,y)=0$ for all $x$ and $y$. In this case, the moments of the random field $R(x)$ are given by permanents built upon the kernel $K(x,y):=\mathcal K_1(x,y)$, so $R(x)$ is a permanental process, compare with \cite{BM,Macchi1,Macchi2}. We note, however, that the available studies of $\alpha$-permanental processes usually discuss only the case where the kernel is real-valued. In the case of $R(x)$, the correlation kernel is, of course, complex-valued.
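For $n=2$, identity \eqref{dr6e6u43wq} reduces to the familiar fourth-moment formula $\mathbb E(|G(x_1)|^2|G(x_2)|^2)=\mathcal K_1(x_1,x_1)\mathcal K_1(x_2,x_2)+|\mathcal K_1(x_1,x_2)|^2+|\mathcal K_2(x_1,x_2)|^2$. The sketch below (the brute-force helper \texttt{haf} and the sample kernel values are ours, chosen only for illustration) assembles the $4\times4$ matrix $[\mathbb K(x_i,x_j)]_{i,j=1,2}$ from \eqref{d6e6ie4} and checks this algebraic identity; as remarked above, the diagonal entries, which would contain $\mathcal K_2(x_i,x_i)$, do not enter.

```python
def haf(C):
    # brute-force hafnian: sum over perfect matchings (small matrices only)
    def go(idx):
        if not idx:
            return 1
        i, rest = idx[0], idx[1:]
        return sum(C[i][rest[k]] * go(rest[:k] + rest[k + 1:])
                   for k in range(len(rest)))
    return go(tuple(range(len(C))))

# hypothetical kernel values at two points x1, x2:
K1_11, K1_22 = 2.0, 3.0  # covariances K_1(x_i, x_i) = E|G(x_i)|^2 (real)
K1_12 = 1.0 + 0.5j       # covariance K_1(x_1, x_2)
K2_12 = 0.4 - 0.2j       # pseudo-covariance K_2(x_1, x_2)

# rows/columns 0,1 <-> (G(x_1), conj G(x_1)); rows/columns 2,3 <-> x_2
C = [[0,     K1_11, K2_12,              K1_12],
     [K1_11, 0,     K1_12.conjugate(),  K2_12.conjugate()],
     [K2_12, K1_12.conjugate(), 0,      K1_22],
     [K1_12, K2_12.conjugate(), K1_22,  0]]

wick = K1_11 * K1_22 + abs(K1_12) ** 2 + abs(K2_12) ** 2
assert abs(haf(C) - wick) < 1e-12
```

The three perfect matchings of the four Gaussian factors reproduce exactly the three terms of the Wick sum.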
\subsection{Aim of the paper}

Quasi-free states play a central role in studies of operator algebras related to quantum statistical mechanics, see e.g.\ \cite{A1,A2,A3,A4,BR,DG,MV}. Let $\mathcal H=L^2(X,\sigma)$ be the $L^2$-space of $\sigma$-square-integrable functions $h:X\to\mathbb C$. Let $\mathfrak F$ be a separable complex Hilbert space. Let $A^+(h)$, $A^-(h)$ ($h\in\mathcal H$) be linear operators in $\mathfrak F$ that satisfy the following assumptions:
\begin{itemize}
\item[(i)] $ A^+(h)$ and $A^-(h)$ depend linearly on $h\in\mathcal H$;
\item[(ii)] for each $h\in\mathcal H$, $A^-(\overline h)$ is (the restriction of) the adjoint operator of $A^+(h)$, where $\bar h$ is the complex conjugate of $h$;
\item[(iii)] the operators $A^+(h)$, $A^-(h)$ satisfy the canonical commutation relations (CCR).
\end{itemize}
See Section~\ref{xrwaq4q2} for details. Let $\mathbb A$ be the unital $*$-algebra generated by the operators $A^+(h)$, $A^-(h)$. If we additionally assume that $\mathfrak F$ is a certain symmetric Fock space, then we can define the vacuum state $\tau$ on $\mathbb A$. If $\tau$ appears to be a quasi-free state, one says that the operators $A^+(h)$ and $A^-(h)$ form a {\it quasi-free representation of the CCR}.

We define operator-valued distributions $A^+(x)$ and $A^-(x)$ ($x\in X$) through the equalities
\begin{equation}\label{ray45}
A^+(h)=\int_X h(x)A^+(x)\sigma(dx),\quad A^-(h)=\int_X h(x)A^-(x)\sigma(dx),\end{equation}
holding for all $h\in\mathcal H$. Then the {\it particle density $\rho(x)$} is formally defined as
$$\rho(x):=A^+(x)A^-(x),\quad x\in X.$$
We call this definition {\it formal} since it requires taking the product of two operator-valued distributions, and {\it a priori\/} it is not clear whether this product indeed makes sense. Nevertheless, in all the examples below, we will be able to rigorously define $\rho(x)$ as an operator-valued distribution.
The CCR imply the commutation relation $[\rho(x),\rho(y)]=0$ ($x,y\in X$), where $[\cdot,\cdot]$ denotes the commutator. For each $\Delta\in\mathcal B_0(X)$, we denote
\begin{equation}\label{raq5wu}
\rho(\Delta):=\int_\Delta\rho(x)\sigma(dx)=\int_\Delta A^+(x)A^-(x)\sigma(dx),\end{equation}
which gives a family of Hermitian commuting operators in the Fock space $\mathfrak F$. In view of the spectral theorem, one can expect that the operators $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$ can be realized as operators of multiplication in $L^2(\Gamma(X),\mu)$, where $\mu$ is the joint spectral measure of this family of operators at the vacuum.

Let $G(x)$ be a complex-valued Gaussian field and $R(x)=|G(x)|^2$. The main aim of the paper is to show that the Cox process $\Pi_R$ is the joint spectral measure of a (rigorously defined) particle density $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$ for a certain quasi-free representation of the CCR. As a by-product, we obtain a unitary isomorphism between a subspace of a Fock space and $L^2(\Gamma(X),\Pi_R)$.

In the special case where $\Pi_R$ is a permanental point process (with a real-valued correlation kernel), such a statement was proved in \cite{LM} (see also \cite{Ly}). In that case, the corresponding quasi-free state has the additional property of being gauge-invariant, so one could use the gauge-invariant quasi-free representation of the CCR by Araki and Woods \cite{ArWoods}. We stress that, even in the case of a gauge-invariant quasi-free state, the representation of the CCR that we use in this paper has a different form from the one by Araki and Woods \cite{ArWoods}. Nevertheless, since both gauge-invariant quasi-free representations have the same $n$-point functions, one can show that these representations are unitarily equivalent.
We note that, in \cite{Ly,LM}, it was also shown that each determinantal point process ($\alpha=-1$) arises as the joint spectral measure of the particle density of a quasi-free representation of the Canonical Anticommutation Relations (CAR). In that case, the state is also gauge-invariant, so one can use the Araki--Wyss representation of the CAR from \cite{ArWyss}.

It is worth comparing our result with the main result of Koshida \cite{Koshida}. In the latter paper, it is proven that, when the underlying space $X$ is {\it discrete}, every pfaffian point process on $X$ arises as the particle density of a quasi-free representation of the CAR. As noted in \cite{Koshida}, a similar statement in the case of a continuous space $X$ is still an open problem.

\subsection{Organization of the paper}

The starting point of our considerations is the observation that the Poisson point process with (deterministic) intensity $|\lambda(x)|^2$ arises from the trivial (quasi-free) representation of the CCR with
\begin{equation}\label{teraw5uw}
A^+(x)=a^+(x)+\overline{\lambda(x)},\quad A^-(x)=a^-(x)+\lambda(x),
\end{equation}
where $a^+(x)$, $a^-(x)$ are the creation and annihilation operators at point $x$, acting in the symmetric Fock space $\mathcal F(\mathcal H)$ over $\mathcal H$, compare with \cite{GGPS}. We then proceed as follows:
\begin{itemize}
\item We realize a Gaussian field $G(x)$ as a family of operators $\Phi(x)$ acting in a Fock space $\mathcal F(\mathcal G)$ over a Hilbert space $\mathcal G$ (typically $\mathcal G=\mathcal H$ or $\mathcal G=\mathcal H\oplus\mathcal H$).
\item We consider a quasi-free representation of the CCR with
\begin{equation}\label{fyr6sw6u}
A^+(x)=a^+(x)+\Phi^*(x),\quad A^-(x)=a^-(x)+\Phi(x)
\end{equation}
acting in the Fock space $\mathcal F(\mathcal H\oplus\mathcal G)=\mathcal F(\mathcal H)\otimes\mathcal F(\mathcal G)$.
\item We prove that the corresponding particle density $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$ is well-defined and has the joint spectral measure $\Pi_R$.
\end{itemize}

The paper is organized as follows. In Section \ref{ew56u3wu}, we discuss complex-valued Gaussian fields on $X$ realized in a symmetric Fock space $\mathcal F(\mathcal G)$ over a separable Hilbert space $\mathcal G$. We start with a $\mathcal G^2$-valued function $(L_1(x),L_2(x))$ that is defined for $\sigma$-a.a.\ $x\in X$ and satisfies the assumptions \eqref{ydxdrdxdrg}, \eqref{xrsa5yw4} below. We then define operators $\Phi(x)$ in the Fock space $\mathcal F(\mathcal G)$ by formula \eqref{waaq4yq5y}. Theorem~\ref{reaq5y43wu} states that the operators $\Phi(x)$ form a Fock-space realization of a Gaussian field $G(x)$ that is defined for $\sigma$-a.a.\ $x\in X$. (Note, however, that the set of those $x\in X$ for which $G(x)$ is defined can be smaller than the set of those $x\in X$ for which the function $(L_1(x),L_2(x))$ was defined.) The covariance and pseudo-covariance of the Gaussian field $G(x)$ are given by formulas \eqref{w909i8u7y689} and \eqref{ftsqw43qd}, respectively. As a consequence of our considerations, in Example~\ref{vcrtw5y3}, we derive a Fock-space realization of a proper Gaussian field. The operators $\Phi(x)$ in this case resemble the classical Fock-space realization of a real-valued Gaussian field. The main difference is that, in the case of a real-valued Gaussian field, the creation and annihilation operators use the same real vectors, whereas in the case of a proper Gaussian field, the creation and annihilation operators use orthogonal copies of the same complex vectors.

In Section~\ref{xrwaq4q2}, we briefly recall the definition of a quasi-free state on the CCR algebra and a quasi-free representation of the CCR.
Next, in Section~\ref{tew532w}, we recall in Theorem~\ref{tes56uwe4u6} a result from \cite{LM} which gives sufficient conditions for a family of commuting Hermitian operators, $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$, in a separable complex Hilbert space, to be essentially self-adjoint and have a point process $\mu$ in $X$ as their joint spectral measure. The key condition of Theorem~\ref{tes56uwe4u6} is that the family of operators, $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$, should possess certain correlation measures $\theta^{(n)}$, whose definition is given in Section~\ref{tew532w}. These measures $\theta^{(n)}$ are then also the correlation measures of the point process $\mu$. We also present formal considerations about the form of the correlation measures $\theta^{(n)}$ when $\rho(\Delta)$ is a particle density given by \eqref{raq5wu}.

In Section~\ref{xeeraq54q}, we apply Theorem~\ref{tes56uwe4u6} to show that a Poisson point process is the joint spectral measure of the operators $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$, where $\rho(\Delta)$ is the particle density of the trivial quasi-free representation of the CCR in which the creation and annihilation operators are given by \eqref{teraw5uw}.

The main results of the paper are in Section~\ref{yd6w6wdd}. Using the $\mathcal G^2$-valued function $(L_1(x),L_2(x))$ from Section~\ref{ew56u3wu}, we construct a quasi-free representation of the CCR in the symmetric Fock space $\mathcal F(\mathcal H\oplus\mathcal G)$. We prove that the corresponding particle density is well defined as a family of commuting Hermitian operators, $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$ (Corollary~\ref{3rtlgpr}). Theorem~\ref{due6uew4} states that these operators satisfy the assumptions of Theorem~\ref{tes56uwe4u6} and their joint spectral measure $\mu$ is the Cox process $\Pi_R$, where $R(x)=|G(x)|^2$ and $G(x)$ is the Gaussian field as in Theorem~\ref{reaq5y43wu}. In particular, $\mu$ is a hafnian point process.
\section{Fock-space realization of complex Gaussian fields}\label{ew56u3wu} Let $\mathcal G$ be a separable Hilbert space with an antilinear involution $\mathcal J$ satisfying $(\mathcal Jf,\mathcal Jg)_{\mathcal G}=(g,f)_{\mathcal G}$ for all $f,g\in\mathcal G$. Let $\mathcal G^{\odot n}$ denote the $n$th symmetric tensor power of $\mathcal G$. For $n\in\mathbb N$, let $\mathcal F_n(\mathcal G):=\mathcal G^{\odot n}n!$, i.e., $\mathcal F_n(\mathcal G)$ coincides with $\mathcal G^{\odot n}$ as a set and the inner product in $\mathcal F_n(\mathcal G)$ is equal to $n!$ times the inner product in $\mathcal G^{\odot n}$. Let also $\mathcal F_0(\mathcal G):=\mathbb C$. Then $\mathcal F(\mathcal G)=\bigoplus_{n=0}^\infty\mathcal F_n(\mathcal G)$ is called the {\it symmetric Fock space over $\mathcal G$}. The vector $\Omega=(1,0,0,\dots)\in\mathcal F(\mathcal G)$ is called the {\it vacuum}. Let $\mathcal F_{\mathrm{fin}}(\mathcal G)$ denote the (dense) subspace of $\mathcal F(\mathcal G)$ consisting of all finite vectors $f=(f^{(0)},f^{(1)},\dots,f^{(n)},0,0,\dots)$ ($n\in\mathbb N$). We equip $\mathcal F_{\mathrm{fin}}(\mathcal G)$ with the topology of the topological direct sum of the $\mathcal F_n(\mathcal G)$ spaces. For topological vector spaces $V$ and $W$, we denote by $\mathcal L(V,W)$ the space of all linear continuous operators $A:V\to W$. We also denote $\mathcal L(V):=\mathcal L(V,V)$. Let $g\in\mathcal G$. We define a {\it creation operator} $a^+(g)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal G))$ by $a^+(g)\Omega:=g$, and for $f^{(n)}\in\mathcal F_n(\mathcal G)$ ($n\in\mathbb N$), $a^+(g)f^{(n)}:=g\odot f^{(n)}\in\mathcal F_{n+1}(\mathcal G)$ . 
Next, we define an {\it annihilation operator} $a^-(g)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal G))$ that satisfies $a^-(g)\Omega:=0$ and for any $f_1,\dots,f_n\in\mathcal G$, $$a^-(g)f_1\odot\dotsm\odot f_n=\sum_{i=1}^n(f_i,\mathcal Jg)_{\mathcal G}\,f_1\odot\dots\odot f_{i-1}\odot f_{i+1}\odot\dots\odot f_n.$$ We have $a^+(g)^*\restriction_{\mathcal F_{\mathrm{fin}}(\mathcal G)}=a^-(\mathcal Jg)$ and the operators $a^+(g)$, $a^-(g)$ satisfy the CCR: \begin{equation}\label{tsw654w} [a^+(f),a^+(g)]=[a^-(f),a^-(g)]=0,\quad [a^-(f),a^+(g)]=(g,\mathcal Jf)_{\mathcal G}\end{equation} for all $f,g\in\mathcal G$. Let $D\in\mathcal B(X)$ be such that $\sigma(X\setminus D)=0$. Let $D\ni x\mapsto (L_1(x),L_2(x))\in\mathcal G^2$ be a measurable mapping. We assume that \begin{align} &(L_1(x),\mathcal JL_2(y))_{\mathcal G}=(L_1(y),\mathcal JL_2(x))_{\mathcal G},\label{ydxdrdxdrg}\\ &(L_1(x),L_1(y))_{\mathcal G}=(L_2(x),L_2(y))_{\mathcal G}\quad \text{for all }x,y\in D.\label{xrsa5yw4} \end{align} Define \begin{equation}\label{waaq4yq5y} \Phi(x):=a^+(L_1(x))+a^-(L_2(x)).\end{equation} Let $\Psi(x):=\Phi(x)^*\restriction_{\mathcal F_{\mathrm{fin}}(\mathcal G)}$. Then \begin{equation}\label{ytd7r57} \Psi(x)=a^+(\mathcal JL_2(x))+a^-(\mathcal JL_1(x)).\end{equation} It follows from \eqref{tsw654w} that conditions \eqref{ydxdrdxdrg}, \eqref{xrsa5yw4} are necessary and sufficient in order that $[\Phi(x),\Phi(y)]=[\Psi(x),\Psi(y)]=[\Phi(x),\Psi(y)]=0$ for all $x,y\in D$. Below, for each $\Lambda\subset D$, we denote by $\mathbb F_\Lambda$ the subspace of the Fock space $\mathcal F(\mathcal G)$ that is the closed linear span of the set \begin{multline*} \big\{\Psi(x_1)^{k_1}\dotsm\Psi(x_m)^{k_m}\Phi(y_1)^{l_1}\dotsm \Phi(y_n)^{l_n}\Omega\mid x_1,\dots,x_m,y_1,\dots,y_n\in \Lambda,\\ k_1,\dots,k_m,l_1,\dots,l_n\in\mathbb N_0,\ m,n\in\mathbb N\big\}.\end{multline*} Here $\mathbb N_0:=\{0,1,2,3,\dots\}$. 
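The relations \eqref{tsw654w} can be illustrated numerically in the simplest case $\mathcal G=\mathbb C$ with $\mathcal J$ the complex conjugation: here $\mathcal F_n(\mathbb C)\cong\mathbb C$ and, in the unnormalized basis $e_n=1^{\odot n}$, one has $a^+(1)e_n=e_{n+1}$ and $a^-(1)e_n=n\,e_{n-1}$. The sketch below (a finite truncation chosen by us purely for illustration) verifies $[a^-(1),a^+(1)]=\mathbb 1$ away from the truncation level, where the cutoff necessarily spoils the identity.

```python
import numpy as np

N = 8  # truncation level: keep the Fock levels 0, 1, ..., N

# a^+(1) e_n = e_{n+1} in the unnormalized basis e_0, ..., e_N
ap = np.zeros((N + 1, N + 1))
for n in range(N):
    ap[n + 1, n] = 1.0

# a^-(1) e_n = n e_{n-1}
am = np.zeros((N + 1, N + 1))
for n in range(1, N + 1):
    am[n - 1, n] = n

comm = am @ ap - ap @ am
# [a^-, a^+] acts as the identity on levels 0, ..., N-1;
# at level N the truncation produces the spurious eigenvalue -N
assert np.allclose(comm[:N, :N], np.eye(N))
assert comm[N, N] == -N
```

This matches $[a^-(f),a^+(g)]=(g,\mathcal Jf)_{\mathcal G}=1$ for $f=g=1$, and the boundary defect at level $N$ is the standard artifact of truncating an unbounded representation.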
\begin{theorem}\label{reaq5y43wu} There exists a measurable subset $\Lambda\subset D$ with $\sigma(X\setminus\Lambda)=0$ and a mean-zero complex-valued Gaussian field $\{G(x)\}_{x\in \Lambda}$ on a probability space $(\Xi,\mathfrak A,P)$ such that: \begin{itemize} \item[(i)] The Gaussian field $\{G(x)\}_{x\in \Lambda}$ has the covariance \begin{equation}\label{w909i8u7y689} \mathcal K_1(x,y)=(L_1(x),L_1(y))_{\mathcal G},\quad x,y\in \Lambda,\end{equation} and the pseudo-covariance \begin{equation}\label{ftsqw43qd} \mathcal K_2(x,y)=(L_1(x),\mathcal JL_2(y))_{\mathcal G},\quad x,y\in \Lambda.\end{equation} \item[(ii)] There exists a unique unitary operator $\mathcal I:\mathbb F_\Lambda\to L^2(\Xi,P)$ that satisfies \begin{align} &\mathcal I\Psi(x_1)^{k_1}\dotsm\Psi(x_m)^{k_m}\Phi(y_1)^{l_1}\dotsm \Phi(y_n)^{l_n}\Omega\notag\\ &\quad= \overline{G(x_1)}^{\,k_1}\dotsm\overline{G(x_m)}^{\,k_m}G(y_1)^{l_1}\dotsm G(y_n)^{l_n}\label{cy6e64u43}\end{align} for all $x_1,\dots,x_m,y_1,\dots,y_n\in \Lambda$, $k_1,\dots,k_m,l_1,\dots,l_n\in\mathbb N_0$, $m,n\in\mathbb N$. \end{itemize} \end{theorem} \begin{proof} We define $L_1(x)=L_2(x)=0$ for all $x\in X\setminus D$. Then \begin{equation}\label{tes5w53w} X\ni x\mapsto(L_1(x),L_2(x))\in\mathcal G^2 \end{equation} is measurable and satisfies \eqref{ydxdrdxdrg}, \eqref{xrsa5yw4} for all $x,y\in X$. By Lusin's theorem (see e.g.\ \cite[26.7~Theorem]{Bauer}), there exists a sequence of mutually disjoint compact sets $(\Lambda_n)_{n=1}^\infty$ such that $\sigma\big(X\setminus\bigcup_{n=1}^\infty\Lambda_n\big)=0$, and the restriction of the mapping \eqref{tes5w53w} to each $\Lambda_n$ is continuous. Denote $\Lambda:=\bigcup_{n=1}^\infty\Lambda_n$ and choose a countable subset $\Lambda'\subset\Lambda$ such that, for each $n\in\mathbb N$, the set $\Lambda'\cap\Lambda_n$ is dense in $\Lambda_n$. As easily seen by approximation, $\mathbb F_{\Lambda'}=\mathbb F_\Lambda$. 
Let us consider the real and imaginary parts of the operators $\Phi(x)$: \begin{align} \Re(\Phi(x))&:=\frac12(\Phi(x)+\Psi(x))=\frac12\left(a^+\big(L_1(x)+\mathcal JL_2(x)\big)+a^-\big(\mathcal JL_1(x)+L_2(x)\big)\right),\notag\\ \Im(\Phi(x))&:=\frac1{2i}(\Phi(x)-\Psi(x))=\frac1{2i}\left(a^+\big(L_1(x)-\mathcal JL_2(x)\big)-a^-\big(\mathcal JL_1(x)-L_2(x)\big)\right).\label{se5w5u33w} \end{align} These operators are Hermitian and commuting. It is a standard fact that, for each $g\in\mathcal G$, \begin{equation}\label{cr6sw6u4e} \|a^+(g)\|_{\mathcal L(\mathcal F_k(\mathcal G),\mathcal F_{k+1}(\mathcal G))}=\|a^-(g)\|_{\mathcal L(\mathcal F_{k+1}(\mathcal G),\mathcal F_{k}(\mathcal G))}=\sqrt{k+1}\,\|g\|_{\mathcal G}. \end{equation} From here it easily follows that each $f\in\mathcal F_{\mathrm{fin}}(\mathcal G)$ is an analytic vector for each $\Re(\Phi(x))$ and $\Im(\Phi(x))$ ($x\in X$), and the projection-valued measures of the closures of all these operators commute, see \cite[Chapter~5, Theorem~1.15]{BK}. We now apply the projection spectral theorem \cite[Chapter~3, Theorems~2.6 and 3.9 and Section~3.1]{BK} to the closures of the operators $\Re(\Phi(x))$ and $\Im(\Phi(x))$ with $x\in\Lambda'$. This implies the existence of a probability space $(\Xi,\mathfrak A,P)$, real-valued random variables $G_1(x)$ and $G_2(x)$ ($x\in\Lambda'$) and a unique unitary operator $\mathcal I:\mathbb F_\Lambda\to L^2(\Xi,P)$ that satisfies \begin{align} &\mathcal I\,\Re(\Phi(x_1))^{k_1}\dotsm\Re(\Phi(x_m))^{k_m}\Im(\Phi(y_1))^{l_1}\dotsm \Im(\Phi(y_n))^{l_n}\Omega\notag\\ &\quad= G_1(x_1)^{\,k_1}\dotsm G_1(x_m)^{\,k_m}G_2(y_1)^{l_1}\dotsm G_2(y_n)^{l_n} \label{xrq5y3q2u} \end{align} for all $x_1,\dots,x_m,y_1,\dots,y_n\in \Lambda'$, $k_1,\dots,k_m,l_1,\dots,l_n\in\mathbb N_0$, $m,n\in\mathbb N$. 
\begin{remark} In fact, $\Xi=\{\omega:\Lambda'\to\R^2\}$, $\mathfrak A$ is the cylinder $\sigma$-algebra on $\Xi$ (equivalently the countable product of the Borel $\sigma$-algebras $\mathcal B(\R)$), and $P(\cdot)=(E(\cdot)\Omega,\Omega)_{\mathcal F(\mathcal G)}$. Here, $E$ is the projection-valued measure on $(\Xi,\mathfrak A)$ that is constructed as the countable product of the projection-valued measures of the closures of the operators $\Re(\Phi(x))$ and $\Im(\Phi(x))$ with $x\in\Lambda'$. Furthermore, for each $x\in\Lambda'$, $(G_i(x))(\omega)=\omega_i(x)$ for $\omega=(\omega_1,\omega_2)\in\Xi$. \end{remark} Next, let $n\in\mathbb N$ and $x\in\Lambda_n\setminus\Lambda'$. Then we can find a sequence $(x_k)_{k=1}^\infty$ in $\Lambda'\cap \Lambda_n$ such that $x_k\to x$, hence, by continuity, $(L_1(x_k),L_2(x_k))\to (L_1(x),L_2(x))$ in $\mathcal G^2$. It follows from \eqref{xrq5y3q2u} that $(G_i(x_k))_{k=1}^\infty$ is a Cauchy sequence in $L^2(\Xi,P)$ ($i=1,2$), so we define $G_i(x):=\lim_{k\to\infty}G_i(x_k)$. Then we easily see by approximation that \eqref{xrq5y3q2u} remains true for all $x_1,\dots,x_m,y_1,\dots,y_n\in \Lambda$. Let $Z$ be an arbitrary finite linear combination (with real coefficients) of random variables from $\{G_1(x),\,G_2(x)\mid x\in\Lambda\}$. Then it follows from \eqref{se5w5u33w} that the moments of $Z$ are given by $$\mathbb E(Z^k)=\big((a^+(g)+a^-(\mathcal Jg))^k\,\Omega,\Omega\big)_{\mathcal F(\mathcal G)}$$ for some $g\in\mathcal G$. But this implies that the random variable $Z$ has a Gaussian distribution, see e.g.\ \cite[Chapter~3, Subsection~3.8]{BK}. Hence, $\{G_1(x),\,G_2(x)\mid x\in\Lambda\}$ is a Gaussian field. Finally, for each $x\in\Lambda$, we define $G(x):=G_1(x)+iG_2(x)$. Then $\{G(x)\}_{x\in \Lambda}$ is a complex-valued Gaussian field. Formula \eqref{xrq5y3q2u} implies \eqref{cy6e64u43}. This, in turn, gives us the covariance and the pseudo-covariance of the Gaussian field $\{G(x)\}_{x\in \Lambda}$.
\end{proof} Let $\mathcal H=L^2(X,\sigma)$ and define an antilinear involution $J:\mathcal H\to \mathcal H$ by $(Jh)(x):=\overline{h(x)}$. Let us consider a measurable mapping $x\mapsto L(x)\in\mathcal H$ defined $\sigma$-a.e.\ on $X$, and let $K(x,y):=(L(x),L(y))_{\mathcal H}$. We will now consider two examples of complex-valued Gaussian fields with covariance $K(x,y)$. \begin{example}\label{fyte6i47} Let $\mathcal G=\mathcal H$, $\mathcal J=J$, and let $L_1(x)=L_2(x)=L(x)$. Obviously, conditions \eqref{ydxdrdxdrg}, \eqref{xrsa5yw4} are satisfied. Then, $$\Phi(x)=a^+(L(x))+a^-(L(x)),\quad \Psi(x)=a^+(JL(x))+a^-(JL(x)).$$ By Theorem~\ref{reaq5y43wu}, the corresponding Gaussian field $G(x)$ has the covariance $\mathcal K_1(x,y)=K(x,y)$ and the pseudo-covariance $\mathcal K_2(x,y)=(L(x),JL(y))_{\mathcal H}$. If $L(x)$ is a real-valued function for $\sigma$-a.a.\ $x\in X$, then $\mathcal K_1(x,y)=\mathcal K_2(x,y)=K(x,y)$, and the function $K(x,y)$ is symmetric. Hence, as discussed in the Introduction, $R(x):=|G(x)|^2$ is a 2-permanental process, defined for $\sigma$-a.a.\ $x\in X$. If $L(x)$ is not real-valued on a set of positive $\sigma$ measure, then the moments of $R(x)$ are given by \eqref{dr6e6u43wq}, \eqref{d6e6ie4} with $\mathcal K_1(x,y)$, $\mathcal K_2(x,y)$ as above. \end{example} \begin{example}\label{vcrtw5y3} Let $\mathcal G=\mathcal H\oplus\mathcal H$, $\mathcal J=J\oplus J$, and let $L_1(x)=(L(x),0)$, $L_2(x)=(0,L(x))$. As easily seen, conditions \eqref{ydxdrdxdrg}, \eqref{xrsa5yw4} are satisfied. We define, for each $h\in\mathcal H$, $a_1^+(h):=a^+(h,0)$, $a_2^+(h):=a^+(0,h)$ and similarly $a_1^-(h)$, $a_2^-(h)$. Then $$\Phi(x)=a_1^+(L(x))+a_2^-(L(x)),\quad \Psi(x)=a_2^+(JL(x))+a_1^-(JL(x)).$$ For the corresponding Gaussian field $G(x)$, $\mathcal K_1(x,y)=K(x,y)$, while $\mathcal K_2(x,y)=0$, i.e., $G(x)$ is a proper Gaussian field. Hence, as discussed in the Introduction, $R(x):=|G(x)|^2$ is a permanental process, defined for $\sigma$-a.a.\ $x\in X$.
\end{example} \begin{remark} Let $G_1(x)$ and $G_2(x)$ be two independent copies of the Gaussian field from Example~\ref{fyte6i47}. Then, the Gaussian field $G(x)$ from Example~\ref{vcrtw5y3} can be constructed as $G(x)=\frac1{\sqrt2}\big(G_1(x)+iG_2(x)\big)$. \end{remark} The following example generalizes the constructions in Examples~\ref{fyte6i47} and \ref{vcrtw5y3}. \begin{example}\label{xerwu56} Let $\mathcal G=\mathcal H\oplus\mathcal H$ be as in Example~\ref{vcrtw5y3} and consider a measurable mapping $x\mapsto(\alpha(x),\beta(x))\in\mathcal H^2$ defined $\sigma$-a.e.\ on $X$. Let $$L_1(x):=\left(\frac{\alpha(x)+\beta(x)}2\,,\frac{\alpha(x)-\beta(x)}2\right),\quad L_2(x):=\left(\frac{\alpha(x)-\beta(x)}2\,,\frac{\alpha(x)+\beta(x)}2\right).$$ As easily seen, conditions \eqref{ydxdrdxdrg} and \eqref{xrsa5yw4} are satisfied. For the corresponding Gaussian field $G(x)$, \begin{align} \mathcal K_1(x,y)&=\frac12\big((\alpha(x),\alpha(y))_{\mathcal H}+(\beta(x),\beta(y))_{\mathcal H}\big)\notag,\\ \mathcal K_2(x,y)&=\frac12\big((\alpha(x), J\alpha(y))_{\mathcal H}-(\beta(x),J\beta(y))_{\mathcal H}\big).\notag \end{align} In the special case where $L(x)=\alpha(x)=\beta(x)$, this is just the construction from Example~\ref{vcrtw5y3}. Choosing $\alpha(x)=\sqrt 2\, L(x)$ and $\beta(x)=0$, the corresponding Gaussian field $G(x)$ has the same finite-dimensional distributions as the Gaussian field from Example~\ref{fyte6i47}. \end{example} Let the conditions of Theorem~\ref{reaq5y43wu} be satisfied and $R(x)=|G(x)|^2$. To construct the Cox process $\Pi_R$ with correlation functions given by \eqref{xreew5u}, we further assume that, for each $\Delta\in\mathcal B_0(X)$, $\int_\Delta\mathbb E(R(x))\sigma(dx)<\infty$.
By \eqref{xrsa5yw4} and \eqref{w909i8u7y689}, this is equivalent to the condition \begin{equation}\label{vd5yw573} \int_\Delta\|L_1(x)\|_{\mathcal G}^2\,\sigma(dx)=\int_\Delta\|L_2(x)\|_{\mathcal G}^2\,\sigma(dx)<\infty.\end{equation} \setcounter{theorem}{2} \begin{example}[continued] Since $\mathcal G=\mathcal H$, we define $L(x,y):=(L(x))(y)$. By \eqref{vd5yw573}, $$\int_{\Delta\times X} |L(x,y)|^2\sigma^{\otimes 2}(dx\,dy)<\infty.$$ Hence, for each $\Delta\in\mathcal B_0(X)$, we can define a Hilbert--Schmidt operator $L^\Delta$ in $\mathcal H$ with integral kernel $\chi_\Delta(x)L(x,y)$. Here $\chi_\Delta$ denotes the indicator function of the set $\Delta$. Define $K^\Delta:=L^\Delta(L^\Delta)^*$. This operator is nonnegative ($K^\Delta\ge0$), trace-class, and has integral kernel $K^\Delta(x,y)=(L(x),L(y))_{\mathcal H}$ for $x,y\in\Delta$. (Note that $K^\Delta(x,y)$ vanishes outside $\Delta^2$.) Thus, for $x,y\in\Delta$, the covariance $\mathcal K_1(x,y)$ of the Gaussian field $G(x)$ is equal to $K^\Delta(x,y)$. Next, for a bounded linear operator $A$ in $\mathcal H$, we define the {\it transpose} of $A$ by $A^T:=JA^*J$. If $A$ is an integral operator with integral kernel $A(x,y)$, then $A^T$ is the integral operator with integral kernel $A^T(x,y)=A(y,x)$. Hence, for all $x,y\in\Delta$, the pseudo-covariance $\mathcal K_2(x,y)$ of the Gaussian field $G(x)$ is equal to the integral kernel $Q^\Delta(x,y)$ of the operator $Q^\Delta:=L^\Delta(L^\Delta)^T$. In the case where $L(x,y)$ is an integral kernel of a bounded linear operator $L$ in $\mathcal H$, we can define $K:=LL^*$ and $Q:=LL^T$, and $\mathcal K_1(x,y)=K(x,y)$, $\mathcal K_2(x,y)=Q(x,y)$, where $K(x,y)$ and $Q(x,y)$ are the integral kernels of the operators $K$ and $Q$, respectively. \end{example} \begin{example}[continued] We proceed similarly to Example~\ref{fyte6i47}. However, in this case, the moments of the Gaussian field $G(x)$ are determined by the covariance $\mathcal K_1(x,y)$ only.
Hence, if $L(x,y)$ is an integral kernel of a bounded linear operator $L$ in $\mathcal H$, the moments are determined by (the integral kernel of) the operator $K:=LL^*$. Therefore, without loss of generality, we may assume that $L=\sqrt K$, i.e., that the operator $L$ is nonnegative and self-adjoint. \end{example} \setcounter{theorem}{5} \begin{example}[continued] Since $\alpha(x),\beta(x)\in\mathcal H$, we define $\alpha(x,y):=(\alpha(x))(y)$ and similarly $\beta(x,y)$. In this case, condition \eqref{vd5yw573} means that, for each $\Delta\in\mathcal B_0(X)$, $$\int_{\Delta\times X}(|\alpha(x,y)|^2+|\beta(x,y)|^2)\sigma^{\otimes 2}(dx\,dy)<\infty,$$ and we can proceed similarly to Example~\ref{fyte6i47}. Assume additionally that $\alpha(x,y)$ and $\beta(x,y)$ are integral kernels of operators $A,B\in\mathcal L(\mathcal H)$, respectively. Then the covariance $\mathcal K_1(x,y)$ of the Gaussian field $G(x)$ is the integral kernel of the operator $\frac12(AA^*+BB^*)$, while the pseudo-covariance $\mathcal K_2(x,y)$ is the integral kernel of the operator $\frac12(AA^T-BB^T)$. \end{example} \section{Quasi-free states on the CCR algebra}\label{xrwaq4q2} In this section, we assume that $\mathcal H$ is a separable complex Hilbert space with an antilinear involution $J$ in $\mathcal H$ satisfying $(Jf,Jh)_{\mathcal H}=(h,f)_{\mathcal H}$ for all $f,h\in\mathcal H$. Let $\mathcal V$ be a dense subspace of $\mathcal H$ that is invariant under $J$. Let $\mathfrak F$ be a separable Hilbert space and $\mathfrak D$ a dense subspace of $\mathfrak F$.
For each $h\in\mathcal V$, let $A^+(h),\, A^-(h):\mathfrak D\to\mathfrak D$ be linear operators satisfying the following assumptions: \begin{itemize} \item[(i)] $A^+(h)$ and $A^-(h)$ depend linearly on $h\in\mathcal V$; \item[(ii)] the domain of the adjoint operator of $A^+(h)$ in $\mathfrak F$ contains $\mathfrak D$ and $A^+(h)^*\restriction\mathfrak D=A^-(Jh)$; \item[(iii)] the operators $A^+(h)$, $A^-(h)$ satisfy the CCR: \begin{equation}\label{ydey7e76} [A^+(f),A^+(h)]=[A^-(f),A^-(h)]=0,\quad [A^-(f),A^+(h)]=(h,Jf)_{\mathcal H}\end{equation} for all $f,h\in\mathcal V$. \end{itemize} Let $\mathbb A$ denote the complex unital $*$-algebra generated by the operators $A^+(h)$, $A^-(h)$ ($h\in\mathcal V$). Let $\tau:\mathbb A\to\mathbb C$ be a state on $\mathbb A$, i.e., $\tau$ is linear, $\tau(\mathbf 1)=1$ and $\tau(a^*a)\ge0$ for all $a\in\mathbb A$. For each $h\in\mathcal V$, we define a Hermitian operator \begin{equation}\label{e5w7w37w2} B(h):=A^+(h)+A^-(Jh). \end{equation} These operators satisfy the commutation relation \begin{equation}\label{vctsdtu6} [B(f),B(h)]=2i\,\Im(h,f)_{\mathcal H},\quad f,h\in\mathcal V. \end{equation} Note that $$A^+(h)=\frac12(B(h)-iB(ih)),\quad A^-(h)=\frac12(B(Jh)+iB(iJh)).$$ Therefore, we can think of the algebra $\mathbb A$ as generated by the operators $B(h)$ ($h\in\mathcal V$), subject to the commutation relation \eqref{vctsdtu6}. Hence, the state $\tau$ is completely determined by the functionals $T^{(n)}:\mathcal V^n\to\mathbb C$ ($n\ge1$), where $T^{(1)}(h):=\tau(B(h))$ and $$ T^{(n)}(h_1,\dots,h_n):=\tau\big((B(h_1)-T^{(1)}(h_1))\dotsm (B(h_n)-T^{(1)}(h_n))\big),\quad n\ge2.$$ The state $\tau$ is called {\it quasi-free} if, for each $n\in\mathbb N$, $T^{(2n+1)}=0$ and $$T^{(2n)}(h_1,\dots,h_{2n})=\sum T^{(2)}(h_{i_1},h_{j_1})\dotsm T^{(2)}(h_{i_n},h_{j_n}),$$ where the summation is over all partitions $\{i_1,j_1\},\dots,\{i_n,j_n\}$ of $\{1,\dots,2n\}$ with $i_k<j_k$ ($k=1,\dots,n$), see e.g.\ \cite[Section~5.2]{BR}.
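For instance, for $n=2$ the quasi-freeness condition reads
$$T^{(4)}(h_1,h_2,h_3,h_4)=T^{(2)}(h_1,h_2)T^{(2)}(h_3,h_4)+T^{(2)}(h_1,h_3)T^{(2)}(h_2,h_4)+T^{(2)}(h_1,h_4)T^{(2)}(h_2,h_3),$$
the three terms corresponding to the three pair partitions of $\{1,2,3,4\}$. This is the counterpart of the Wick--Isserlis formula for the fourth moments of a centered Gaussian random vector.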
\begin{remark} \label{vtw53u} Let $\phi:\mathcal V\to\mathbb C$ be a linear functional. For each $h\in\mathcal V$, we define operators $\mathbf A^+(h):=A^+(h)+\phi'(h)$ and $\mathbf A^-(h):=A^-(h)+\phi(h)$, where $\phi'(h):=\overline{\phi(Jh)}$. The operators $\mathbf A^+(h)$, $\mathbf A^-(h)$ also satisfy the conditions (i)--(iii) discussed above. Obviously, the algebra generated by the operators $\mathbf A^+(h)$, $\mathbf A^-(h)$ coincides with $\mathbb A$. If $\tau:\mathbb A\to\mathbb C$ is a quasi-free state with respect to the operators $A^+(h)$, $A^-(h)$, then $\tau$ is also quasi-free with respect to the operators $\mathbf A^+(h)$, $\mathbf A^-(h)$. \end{remark} Let us now present an explicit construction of a representation of the CCR algebra $\mathbb A$ and a quasi-free state $\tau$ on it. This construction resembles the {\it Bogoliubov transformation}, see e.g.\ \cite[Subsection 5.2.2.2]{BR} or \cite[Section~4]{Berezin}\footnote{In \cite{Berezin}, a Bogoliubov transformation is called a {\it linear canonical transformation}.}. Let $\mathcal E$ be a separable Hilbert space with an antilinear involution $\mathcal J$ satisfying $(\mathcal Jf,\mathcal Jg)_{\mathcal E}=(g,f)_{\mathcal E}$ for all $f,g\in\mathcal E$. Let $K_i\in\mathcal L(\mathcal H,\mathcal E)$ ($i=1,2$). Denote \begin{equation}\label{rdtsw5u4w3wq}K_i':=\mathcal JK_iJ\in\mathcal L(\mathcal H,\mathcal E) \end{equation} and assume that \begin{align} (K'_2)^*K_1-(K_1')^*K_2&=0,\notag\\ K_2^*K_2-K_1^*K_1&=1.\label{cdtwe64u53e} \end{align} For each $h\in\mathcal H$, we define, in the symmetric Fock space $\mathcal F(\mathcal E)$, the operators \begin{equation}\label{tsw53w3} A^+(h):=a^+(K_2h)+a^-(K_1h),\quad A^-(h):=a^-(K_2'h)+a^+(K_1'h) \end{equation} with domain $\mathcal F_{\mathrm{fin}}(\mathcal E)$. Here $a^+(\cdot)$ and $a^-(\cdot)$ are the creation and annihilation operators in $\mathcal F(\mathcal E)$, respectively.
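For the reader's convenience, let us compute the commutator $[A^-(f),A^+(h)]$ for the operators \eqref{tsw53w3}. By \eqref{tsw654w} and \eqref{rdtsw5u4w3wq}, for all $f,h\in\mathcal H$,
\begin{align*}
[A^-(f),A^+(h)]&=[a^-(K_2'f),a^+(K_2h)]+[a^+(K_1'f),a^-(K_1h)]\\
&=(K_2h,\mathcal JK_2'f)_{\mathcal E}-(K_1'f,\mathcal JK_1h)_{\mathcal E}
=(K_2h,K_2Jf)_{\mathcal E}-(K_1h,K_1Jf)_{\mathcal E}\\
&=\big((K_2^*K_2-K_1^*K_1)h,Jf\big)_{\mathcal H}=(h,Jf)_{\mathcal H}.
\end{align*}
Here we used $\mathcal JK_2'=K_2J$, the equality $(K_1'f,\mathcal JK_1h)_{\mathcal E}=(K_1h,K_1Jf)_{\mathcal E}$, and \eqref{cdtwe64u53e}; the first (unnumbered) identity in the same display similarly yields $[A^+(f),A^+(h)]=[A^-(f),A^-(h)]=0$.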
It follows from \eqref{tsw654w}, \eqref{cdtwe64u53e}, and \eqref{tsw53w3} that $A^+(h)$ and $A^-(h)$ satisfy the conditions (i)--(iii) with $\mathcal V=\mathcal H$, $\mathfrak F=\mathcal F(\mathcal E)$, and $\mathfrak D=\mathcal F_{\mathrm{fin}}(\mathcal E)$. Let $\mathbb A$ denote the corresponding CCR algebra and let $\tau: \mathbb A\to\mathbb C$ be the {\it vacuum state on $\mathbb A$}, i.e., $\tau(a):=(a\Omega,\Omega)_{\mathcal F(\mathcal E)}$. For each $h\in\mathcal H$, \begin{equation}\label{vcreaw53y} B(h)=A^+(h)+A^-(Jh)=a^+((K_2+\mathcal JK_1)h)+a^-((K_1+\mathcal JK_2)h). \end{equation} In particular, $\tau(B(h))=0$. Hence, it easily follows from \eqref{vcreaw53y} that $\tau$ is a quasi-free state with \begin{equation}\label{rtwe64ue3} T^{(2)}(f,h)=\big((K_1+\mathcal JK_2)f,(K_1+\mathcal JK_2)h\big)_{\mathcal E}\,. \end{equation} \begin{remark}Note that, in the classical Bogoliubov transformation, one chooses $\mathcal E=\mathcal H$. \end{remark} \begin{remark}\label{vrtw52} Choosing $\mathcal E=\mathcal H$, $\mathcal J=J$, $K_1=0$ and $K_2=1$, we get $A^+(h)=a^+(h)$, $A^-(h)=a^-(h)$. In this case, the vacuum state is quasi-free, with $T^{(1)}=0$ and $T^{(2)}(f,h)=(h,f)_{\mathcal H}$. \end{remark} \section{Particle density and correlation functions}\label{tew532w} Let $\mathfrak F$ be a separable Hilbert space and let $\mathfrak D$ be a dense subspace of $\mathfrak F$. For each $\Delta\in\mathcal B_0(X)$, let $\rho(\Delta):\mathfrak D\to\mathfrak D$ be a linear Hermitian operator in $\mathfrak F$. We further assume that the operators $\rho(\Delta)$ commute, i.e., $[\rho(\Delta_1),\rho(\Delta_2)]=0$ for any $\Delta_1,\Delta_2\in\mathcal B_0(X)$. Let $\mathcal A$ denote the complex unital (commutative) $*$-algebra generated by $\rho(\Delta)$ ($\Delta\in\mathcal B_0(X)$).
Let $\Omega$ be a fixed vector in $\mathfrak F$ with $\|\Omega\|_{\mathfrak F}=1$, and let a state $\tau:\mathcal A\to\mathbb C$ be defined by $\tau(a):=(a\Omega,\Omega)_{\mathfrak F}$ for $a\in\mathcal A$. We define {\it Wick polynomials in $\mathcal A$} by the following recurrence formula: \begin{align} {:}\rho(\Delta){:}&=\rho(\Delta),\notag\\ {:}\rho(\Delta_1)\dotsm \rho(\Delta_{n+1}){:}&=\rho(\Delta_{n+1})\, {:}\rho(\Delta_1)\dotsm \rho(\Delta_{n}){:}\notag\\ &\quad-\sum_{i=1}^n {:}\rho(\Delta_1)\dotsm \rho(\Delta_{i-1})\rho(\Delta_i\cap\Delta_{n+1})\rho(\Delta_{i+1})\dotsm \rho(\Delta_n){:}\,,\label{dsaea78} \end{align} where $\Delta,\Delta_1,\dots,\Delta_{n+1}\in\mathcal B_0(X)$ and $n\in\mathbb N$. It is easy to see that, for each permutation $\pi\in \mathfrak S_n$, $${:}\rho(\Delta_1)\cdots \rho(\Delta_n){:} = {:}\rho(\Delta_{\pi(1)})\cdots \rho(\Delta_{\pi(n)}){:}\, .$$ We assume that, for each $n\in\mathbb N$, there exists a symmetric measure $\theta^{(n)}$ on $X^n$ that is concentrated on $X^{(n)}$ (i.e., $\theta^{(n)}(X^n\setminus X^{(n)})=0$) and satisfies\footnote{In view of formulas \eqref{dsaea78}, \eqref{6esuw6u61d}, it is natural to call $\theta^{(n)}$ the {\it $n$-th correlation measure of the family of operators} $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$.} \begin{equation}\label{6esuw6u61d} \theta^{(n)}\big(\Delta_1\times\dots\times\Delta_n\big)=\frac1{n!}\, \tau\big({:}\rho(\Delta_1)\dotsm \rho(\Delta_{n}){:}\big),\quad \Delta_1,\dots,\Delta_{n}\in\mathcal B_0(X).\end{equation} Furthermore, we assume that, for each $\Delta\in\mathcal B_0(X)$, there exists a constant $C_\Delta>0$ such that \begin{equation}\label{fte76i4} \theta^{(n)}(\Delta^n)\le C_\Delta^n,\quad n\in\mathbb N, \end{equation} and for any sequence $\{\Delta_{l}\}_{l\in\mathbb{N}}\subset\mathcal{B}_{0}(X)$ such that $\Delta_{l}\downarrow\varnothing$ (i.e., $\Delta_1\supset\Delta_2\supset\Delta_3\supset\cdots$ and $\bigcap_{l=1}^\infty\Delta_l=\varnothing$), we have 
$C_{\Delta_{l}}\rightarrow 0$ as $l\rightarrow\infty$. \begin{theorem}[\!\!\cite{LM}] {\rm (i)} Under the above assumptions, there exists a unique point process $\mu$ in $X$ whose correlation measures are $(\theta^{(n)})_{n=1}^\infty$. {\rm (ii)} Let $\mathfrak D':=\{a\Omega\mid a\in\mathcal A\}$ and let $\mathfrak F'$ denote the closure of $\mathfrak D'$ in $\mathfrak F$. Then each operator $(\rho(\Delta),\mathfrak D')$ is essentially self-adjoint in $\mathfrak F'$, and the projection-valued measures of the closures of the operators $(\rho(\Delta),\mathfrak D')$ commute. Furthermore, there exists a unique unitary operator $U:\mathfrak F'\to L^2(\Gamma(X),\mu)$ satisfying $U\Omega=1$ and \begin{equation}\label{te5w6u3e} U(\rho(\Delta_1)\dotsm \rho(\Delta_{n})\Omega)=\gamma(\Delta_1)\dotsm\gamma(\Delta_n)\end{equation} for any $\Delta_1,\dots,\Delta_{n}\in\mathcal B_0(X)$ ($n\in\mathbb N$). In particular, \begin{equation}\label{cts6wu4w5} \tau\big(\rho(\Delta_1)\dotsm \rho(\Delta_{n})\big)= \int_{\Gamma(X)}\gamma(\Delta_1)\dotsm\gamma(\Delta_n)\,\mu(d\gamma).\end{equation} \label{tes56uwe4u6} \end{theorem} We finish this section with a formal observation. Let again $\mathcal H=L^2(X,\sigma)$ and the antilinear involution $J$ in $\mathcal H$ be given by $(Jf)(x):=\overline{f(x)}$. Let $A^+(h)$ and $A^-(h)$ ($h\in\mathcal V$) be operators satisfying the CCR, and let the corresponding operators $A^+(x)$, $A^-(x)$ ($x\in X$) be defined by \eqref{ray45}. For each $\Delta\in\mathcal B_0(X)$, let $\rho(\Delta)$ be given by \eqref{raq5wu}.
The CCR \eqref{ydey7e76} and formulas \eqref{raq5wu}, \eqref{dsaea78} imply that, for any $\Delta_1,\dots,\Delta_{n}\in\mathcal B_0(X)$, \begin{equation}\label{terw5yw35} {:}\rho(\Delta_1)\dotsm \rho(\Delta_{n}){:}=\int_{\Delta_1\times\dots\times \Delta_n}A^+(x_n)\dotsm A^+(x_1)A^-(x_1)\dotsm A^-(x_n)\sigma^{\otimes n}(dx_1\dotsm dx_n).\end{equation} Thus, the Wick polynomials correspond to the Wick normal ordering, in which all the creation operators are to the left of all the annihilation operators. Hence, by \eqref{6esuw6u61d} and~\eqref{terw5yw35}, we formally obtain \begin{align*} &\theta^{(n)}\big(\Delta_1\times\dots\times\Delta_n\big)\notag\\ &\quad =\frac1{n!}\int_{\Delta_1\times\dots\times \Delta_n}\tau\big(A^+(x_n)\dotsm A^+(x_1)A^-(x_1)\dotsm A^-(x_n)\big)\sigma^{\otimes n}(dx_1\dotsm dx_n).\end{align*} Therefore, by \eqref{5w738}, the point process $\mu$ from Theorem~\ref{tes56uwe4u6} has the correlation functions $$k^{(n)}(x_1,\dots,x_n)=\tau\big(A^+(x_n)\dotsm A^+(x_1)A^-(x_1)\dotsm A^-(x_n)\big).$$ Below we will see that, in the case of a Cox process $\Pi_R$, where $R(x)=|G(x)|^2$ and $G(x)$ is a complex-valued Gaussian field from Section~\ref{ew56u3wu}, the above formal calculations can be given a rigorous meaning. We will start, however, with the simpler case of a Poisson point process. \section{Application of Theorem~\ref{tes56uwe4u6} to Poisson point processes}\label{xeeraq54q} Recall Remark~\ref{vtw53u}. Let $\mathcal V$ denote the (dense) subspace of $\mathcal H=L^2(X,\sigma)$ consisting of all measurable bounded (versions of) functions $h:X\to\mathbb C$ with compact support. Let us fix a function $\lambda\in L^2_{\mathrm{loc}}(X,\sigma)$ and define a functional $\phi:\mathcal V\to\mathbb C$ by \begin{equation}\label{s5w5as} \phi(h):=\int_X h(x)\lambda(x)\sigma(dx). \end{equation} Note that \begin{equation}\label{xse5w64ue} \phi'(h)=\int_X h(x)\overline{\lambda(x)}\,\sigma(dx).
\end{equation} For each $h\in\mathcal V$, we define operators $A^+(h),A^-(h)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H))$ by \begin{equation}\label{er4q53} A^+(h):=a^+(h)+\phi'(h),\quad A^-(h):=a^-(h)+\phi(h).\end{equation} Here $a^+(h)$ and $a^-(h)$ are the creation and annihilation operators in $\mathcal F(\mathcal H)$. By Remarks~\ref{vtw53u} and~\ref{vrtw52}, the vacuum state $\tau$ on the CCR algebra generated by $A^+(h)$, $A^-(h)$ ($h\in\mathcal V$) is quasi-free. Let $a^+(x)$ and $a^-(x)$ be the operator-valued distributions corresponding to $a^+(h)$ and $a^-(h)$, respectively. Then $A^+(x)=a^+(x)+\overline{\lambda(x)}$ and $A^-(x)=a^-(x)+\lambda(x)$. Hence, the corresponding particle density takes the form $$\rho(x)=\lambda(x)a^+(x)+\overline{\lambda(x)}\,a^-(x)+a^+(x)a^-(x)+|\lambda(x)|^2.$$ Our next aim is to rigorously define, for each $\Delta\in\mathcal B_0(X)$, an operator $\rho(\Delta)=\int_\Delta A^+(x)A^-(x)\sigma(dx)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H))$. We clearly have, for each $\Delta\in\mathcal B_0(X)$, $$\int_\Delta \lambda(x)a^+(x)\,\sigma(dx)=\int_X\chi_\Delta(x)\lambda(x)a^+(x)\,\sigma(dx)=a^+(\chi_\Delta \lambda) $$ and similarly $$\int_\Delta \overline{\lambda(x)}\,a^-(x)\,\sigma(dx)=a^-(\chi_\Delta \overline{\lambda}).$$ (Note that $\chi_\Delta \lambda\in\mathcal H$.) Next, for $h\in\mathcal V$ and $f^{(n)}\in\mathcal F_n(\mathcal H)$, $$\big(a^-(h)f^{(n)}\big)(x_1,\dots,x_{n-1})=n\int_X h(x)f^{(n)}(x,x_1,\dots,x_{n-1})\sigma(dx).$$ Hence \begin{equation}\label{cyd64uew6wqa} (a^-(x)f^{(n)})(x_1,\dots,x_{n-1})=nf^{(n)}(x,x_1,\dots,x_{n-1}),\end{equation} which implies \begin{equation}\label{w4q53aestyp} \bigg(\int_\Delta a^+(x)a^-(x)\sigma(dx)f^{(n)}\bigg)(x_1,\dots,x_n)=\big(\chi_\Delta(x_1)+\dots+\chi_\Delta(x_n)\big)f^{(n)}(x_1,\dots,x_n). \end{equation} Hence, $a^0(\chi_\Delta):=\int_\Delta a^+(x)a^-(x)\sigma(dx)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H))$. 
The operator $a^0(\chi_\Delta)$ is called a {\it neutral operator}. Thus, for each $\Delta\in\mathcal B_0(X)$, we have rigorously defined \begin{equation}\label{rtw52qy} \rho(\Delta)=a^+(\chi_\Delta\lambda)+a^-(\chi_\Delta\overline{\lambda})+a^0(\chi_\Delta)+\int_\Delta|\lambda(x)|^2\sigma(dx)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H)).\end{equation} Obviously, $\rho(\Delta)$ is a Hermitian operator in $\mathcal F(\mathcal H)$. To construct a state on the corresponding $*$-algebra, we use the vacuum $\Omega$ in the Fock space $\mathcal F(\mathcal H)$. \begin{proposition}\label{t7re6} The operators $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$ defined by \eqref{rtw52qy} and the vacuum state $\tau$ satisfy the assumptions of Theorem~\ref{tes56uwe4u6}. In this case, $\theta^{(n)}=\frac1{n!}(|\lambda|^2\sigma)^{\otimes n}$, so that $\mu$ is the Poisson point process with intensity $|\lambda(x)|^2$. Furthermore, we have $\mathfrak F'=\mathcal F(\mathcal H)$. \end{proposition} \begin{remark} For the Poisson point process $\mu$ with intensity $|\lambda|^2$, the existence of the unitary isomorphism $U:\mathcal F(\mathcal H)\to L^2(\Gamma(X),\mu)$ that satisfies \eqref{te5w6u3e}, \eqref{cts6wu4w5} is a well-known fact, see e.g.\ \cite{Surgailis}. Our approach to the construction of the isomorphism $U$ may be compared with the paper \cite{GGPS}. \end{remark} \begin{proof}[Proof of Proposition \ref{t7re6}] For any $\Delta_1,\Delta_2\in\mathcal B_0(X)$, the commutation relation $[\rho(\Delta_1),\rho(\Delta_2)]=0$ follows from the CCR and the commutation relations $$\big[a^0(\chi_{\Delta_1}),a^+(\chi_{\Delta_2}\lambda)\big]=a^+(\chi_{\Delta_1\cap\Delta_2}\lambda),\quad \big[a^0(\chi_{\Delta_1}),a^-(\chi_{\Delta_2}\overline{\lambda})\big]=-a^-(\chi_{\Delta_1\cap\Delta_2}\overline{\lambda}).$$ Next, let $C\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H))$.
Similarly to \eqref{cyd64uew6wqa}, \eqref{w4q53aestyp}, we see that, for each $\Delta\in\mathcal B_0(X)$, $\int_\Delta a^+(x)Ca^-(x)\sigma(dx) $ determines an operator from $\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H))$. In particular, for $f\in\mathcal H$ and $n\in\mathbb N$, \begin{equation}\label{dr6wu3} \int_\Delta a^+(x)Ca^-(x)\sigma(dx)f^{\otimes n}=n(\chi_\Delta f)\odot(Cf^{\otimes(n-1)}).\end{equation} Therefore, \begin{align*} &\int_\Delta A^+(x)CA^-(x)\sigma(dx)\\ &\quad=\int_\Delta a^+(x)Ca^-(x)\sigma(dx)+a^+(\chi_\Delta\lambda)C+Ca^-(\chi_\Delta\overline{\lambda})+\int_\Delta|\lambda(x)|^2\sigma(dx)\,C\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H)). \end{align*} Hence, we may define, for any $\Delta_1,\dots,\Delta_n\in\mathcal B_0(X)$, \begin{align*} &\int_{\Delta_1\times\dots\times \Delta_n}A^+(x_n)\dotsm A^+(x_1)A^-(x_1)\dotsm A^-(x_n)\,\sigma^{\otimes n}(dx_1\dotsm dx_n)\\ &\quad:=\int_{\Delta_n}A^+(x_n)\bigg(\int_{\Delta_{n-1}}A^+(x_{n-1})\bigg(\dotsm\int_{\Delta_1}A^+(x_1)A^-(x_1)\sigma(dx_1)\bigg)\\ &\qquad\dotsm A^-(x_{n-1})\sigma(dx_{n-1})\bigg)A^-(x_n)\sigma(dx_n)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H)).\end{align*} We next show that formula \eqref{terw5yw35} now holds rigorously. Indeed, a direct calculation shows that, for any $\Delta_1,\Delta_2 \in\mathcal B_0(X)$ and $C\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal H))$, \begin{align} &\rho(\Delta_1)\int_{\Delta_2}A^+(x)CA^-(x)\,\sigma(dx)\notag\\ &\quad= \int_{\Delta_2}A^+(x)\rho(\Delta_1)CA^-(x)\,\sigma(dx)+\int_{\Delta_1\cap\Delta_2}A^+(x)CA^-(x)\,\sigma(dx). \label{cx4t3y7}\end{align} Now formula \eqref{terw5yw35} follows by induction from \eqref{dsaea78} and \eqref{cx4t3y7}. Applying the vacuum state $\tau$ to \eqref{terw5yw35}, we get \begin{equation}\label{ctea5ywq} \tau\big({:}\rho(\Delta_1)\dotsm \rho(\Delta_{n}){:}\big)=\int_{\Delta_1\times\dots\times\Delta_n}|\lambda(x_1)|^2\dotsm|\lambda(x_n)|^2\sigma^{\otimes n}(dx_1\dotsm dx_n).
\end{equation} Since the measure $\sigma$ is non-atomic, $\sigma^{\otimes n}$ is concentrated on $X^{(n)}$. By \eqref{6esuw6u61d} and \eqref{ctea5ywq}, estimate \eqref{fte76i4} holds with $C_\Delta=\int_\Delta |\lambda(x)|^2\sigma(dx)$. Hence, the assumptions of Theorem~\ref{tes56uwe4u6} are satisfied. The form of the correlation measures implies that $\mu$ is the Poisson point process with intensity $|\lambda(x)|^2$. Finally, the proof of the equality $\mathfrak F'=\mathcal F(\mathcal H)$ is standard and we leave it to the interested reader. \end{proof} \section{Application of Theorem~\ref{tes56uwe4u6} to hafnian point processes}\label{yd6w6wdd} Below we will use the notation of Section~\ref{ew56u3wu}. We assume that conditions \eqref{ydxdrdxdrg}, \eqref{xrsa5yw4}, and \eqref{vd5yw573} are satisfied. Let also the subspace $\mathcal V$ of $\mathcal H$ be as in Section~\ref{xeeraq54q}. Let $h\in\mathcal V$. By the Cauchy--Schwarz inequality, $$\int_X |h(x)|\,\|L_i(x)\|_{\mathcal G}\,\sigma(dx)<\infty,\quad i=1,2.$$ Hence, by using e.g.\ \cite[Chapter 10, Theorem~3.1]{BUS}, we can define \begin{equation*} \int_X hL_i\,d\sigma, \int_X h\,\mathcal JL_i\,d\sigma\in\mathcal G\end{equation*} as Bochner integrals. \begin{example} Recall Examples \ref{fyte6i47} and \ref{vcrtw5y3}. As easily seen, for each $h\in\mathcal V$, \begin{equation}\label{eaw4ayaqwy} \int_X hL\,d\sigma =(L^\Delta)^Th,\ \int_X h\, JL\,d\sigma =(L^\Delta)^*h\in\mathcal H, \end{equation} where $\Delta\in\mathcal B_0(X)$ is chosen so that $h$ vanishes outside $\Delta$. In particular, if $L(x,y)$ is the integral kernel of an operator $L\in\mathcal L(\mathcal H)$, then we can replace $L^\Delta$ in formula \eqref{eaw4ayaqwy} with $L$. Furthermore, in the latter case, we could set $\mathcal V=\mathcal H$. \end{example} Denote $\mathcal E:=\mathcal H\oplus \mathcal G$. We recall the well-known unitary isomorphism between $\mathcal F(\mathcal H)\otimes\mathcal F(\mathcal G)$ and $\mathcal F(\mathcal E)$.
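Recall that this isomorphism may be fixed, for instance, by its action on the exponential (coherent) vectors $e(h):=\big(1,h,\frac{h^{\otimes 2}}{2!},\frac{h^{\otimes 3}}{3!},\dots\big)$: it maps $e(f)\otimes e(g)$ with $f\in\mathcal H$, $g\in\mathcal G$ to $e(f\oplus g)\in\mathcal F(\mathcal E)$, which is consistent with
$$\big(e(f_1)\otimes e(g_1),e(f_2)\otimes e(g_2)\big)=e^{(f_1,f_2)_{\mathcal H}+(g_1,g_2)_{\mathcal G}}=\big(e(f_1\oplus g_1),e(f_2\oplus g_2)\big)_{\mathcal F(\mathcal E)}.$$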
In view of our considerations in Sections~\ref{ew56u3wu} and \ref{xeeraq54q}, see, in particular, formulas \eqref{waaq4yq5y}, \eqref{ytd7r57}, and \eqref{s5w5as}--\eqref{er4q53}, we consider in $\mathcal F(\mathcal E)$ the following linear operators with domain $\mathcal F_{\mathrm{fin}}(\mathcal E)$, \begin{align} A^+(h):=&a^+\bigg(h,\int_X h\,\mathcal JL_2\,d\sigma\bigg)+a^-\bigg(0,\int_X h\,\mathcal JL_1\,d\sigma\bigg),\notag\\ A^-(h):=&a^+\bigg(0,\int_X h L_1\,d\sigma\bigg)+a^-\bigg(h,\int_X hL_2\,d\sigma\bigg),\quad h\in\mathcal V.\label{tdr65eS} \end{align} \begin{proposition} The operators $A^+(h)$, $A^-(h)$ defined by \eqref{tdr65eS} satisfy the conditions (i)--(iii) from Section~\ref{xrwaq4q2} with $\mathfrak F=\mathcal F(\mathcal E)=\mathcal F(\mathcal H\oplus \mathcal G)$ and $\mathfrak D=\mathcal F_{\mathrm{fin}}(\mathcal E)$. The vacuum state on the corresponding CCR algebra is quasi-free with $T^{(1)}=0$ and \begin{align} T^{(2)}(f,h)&=\int_X \overline{f(x)}\, h(x)\sigma(dx)\notag\\ &\quad+2\int_{X^2}\Re\left(f(x)h(y)\overline{\mathcal K_2(x,y)}+\overline{f(x)}\,h(y)\mathcal K_1(x,y)\right)\sigma^{\otimes 2}(dx\,dy). \end{align} Here $\mathcal K_1$ and $\mathcal K_2$ are defined by \eqref{w909i8u7y689} and \eqref{ftsqw43qd}, respectively. \end{proposition} \begin{proof} The first statement of the proposition is obvious in view of the commutation of the operators $\Phi(x)$, $\Psi(x)$. By \eqref{e5w7w37w2} and \eqref{tdr65eS}, \begin{align*} B(h)&=a^+\bigg(h,\int_X (h\,\mathcal JL_2+(Jh)\, L_1)d\sigma\bigg)+a^-\bigg(Jh, \int_X(h\,\mathcal JL_1+(Jh)\, L_2)d\sigma\bigg). \end{align*} Hence, by \eqref{ydxdrdxdrg}, \eqref{xrsa5yw4}, \eqref{w909i8u7y689}, and \eqref{ftsqw43qd}, the second statement of the proposition also follows.
\end{proof} \begin{remark}\label{yre7i645i7r45e} Assume that, for $i=1,2$, the map $\mathcal V\ni h\mapsto \int_X hL_i\,d\sigma\in\mathcal G$ extends by continuity to a bounded linear operator $\mathbb L_i\in\mathcal L(\mathcal H,\mathcal G)$. Then, by \eqref{rdtsw5u4w3wq} and \eqref{tdr65eS}, $$A^+(h)=a^+(h,\mathbb L_2'h)+a^-(0,\mathbb L_1'h),\quad A^-(h)=a^+(0,\mathbb L_1h)+a^-(h,\mathbb L_2h).$$ This quasi-free representation of the CCR is a special case of \eqref{cdtwe64u53e}, \eqref{tsw53w3}. By \eqref{rtwe64ue3}, $$ T^{(2)}(f,h)=(h,f)_{\mathcal H}+\big((\mathcal J\mathbb L_1+\mathbb L_2)Jf,(\mathcal J\mathbb L_1+\mathbb L_2)Jh\big)_{\mathcal G}.$$ \end{remark} By \eqref{tdr65eS}, the corresponding operator-valued distributions $A^+(x)$ and $A^-(x)$ are given by \begin{align} A^+(x)&=a_1^+(x)+a_2^+(\mathcal JL_2(x))+a_2^-(\mathcal JL_1(x)),\notag\\ A^-(x)&=a_1^-(x)+a_2^+(L_1(x))+a_2^-(L_2(x)), \label{ytyuryr}\end{align} compare with \eqref{waaq4yq5y}, \eqref{ytd7r57}. The operators $a_i^\pm(\cdot)$ ($i=1,2$) are defined similarly to Example~\ref{vcrtw5y3}. \begin{proposition}\label{cyse6ui5i} Let $A^+(x)$, $A^-(x)$ be given by \eqref{ytyuryr}, and let $C\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal E))$. For each $\Delta\in\mathcal B_0(X)$, $\int_\Delta A^+(x)CA^-(x)\,\sigma(dx) $ determines an operator from $\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal E))$, in the sense explained in the proof. \end{proposition} \begin{proof} It is sufficient to prove the statement when $C\in\mathcal L(\mathcal F_n(\mathcal E),\mathcal F_m(\mathcal E))$. We also fix $\Delta\in\mathcal B_0(X)$. 
By \eqref{cr6sw6u4e} and \eqref{vd5yw573}, \begin{align} &\int_\Delta \|a_2^-(\mathcal JL_1(x))Ca_2^+(L_1(x))\|_{\mathcal L(\mathcal F_{n-1}(\mathcal E),\mathcal F_{m-1}(\mathcal E))}\,\sigma(dx)\notag\\ &\quad\le\|C\|_{\mathcal L(\mathcal F_{n}(\mathcal E),\mathcal F_{m}(\mathcal E))}\sqrt{nm}\,\int_{\Delta}\|L_1(x)\|_{\mathcal G}^2 \,\sigma(dx)<\infty.\notag\end{align} Hence, by \cite[Chapter 10, Theorem~3.1]{BUS}, the following Bochner integral is well-defined: \begin{equation*} \int_\Delta a_2^-(\mathcal JL_1(x))C a_2^+(L_1(x))\sigma(dx)\in \mathcal L(\mathcal F_{n-1}(\mathcal E),\mathcal F_{m-1}(\mathcal E)).\end{equation*} Note that, by e.g.\ \cite[Chapter 10, Theorem~3.2]{BUS}, for each $f^{(n-1)}\in\mathcal F_{n-1}(\mathcal E)$, $$\bigg(\int_\Delta a_2^-(\mathcal JL_1(x))C a_2^+(L_1(x))\sigma(dx)\bigg)f^{(n-1)}=\int_\Delta a_2^-(\mathcal JL_1(x))C a_2^+(L_1(x))f^{(n-1)}\sigma(dx),$$ where the right hand side is a Bochner integral with values in $\mathcal F_{m-1}(\mathcal E)$. The proof of existence of the other Bochner integrals of the type $\int_\Delta a_2^\pm(\mathcal JL_i(x))Ca_2^\pm(L_j(x))\sigma(dx)$ ($i,j\in\{1,2\}$) is similar. Next, we define $$\int_\Delta a_1^+(x)Ca_1^-(x)\sigma(dx)\in\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))$$ by analogy with \eqref{dr6wu3}. Let $(e_i)_{i\ge1}$ be an orthonormal basis in $\mathcal H$ such that $Je_i=e_i$ for all $i\ge1$. As is easily seen, \begin{equation}\label{r5w5w32} \int_\Delta a_1^+(x)Ca_1^-(x)\sigma(dx)=\sum_{i,j\ge1}(\chi_\Delta e_i,e_j)_{\mathcal H}\,a_1^+(e_j)Ca_1^-(e_i),\end{equation} where the series converges strongly in $\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))$.
By \eqref{vd5yw573}, we can define a linear operator $L_2^\Delta\in\mathcal L(\mathcal G,\mathcal H)$ by $$(L_2^\Delta g)(x):=\chi_\Delta(x)(L_2(x),\mathcal Jg)_{\mathcal G}\,.$$ By analogy with \eqref{dr6wu3}, we define \begin{equation}\label{cw5uu5} \int_\Delta a_1^+(x)Ca_2^-(L_2(x))\sigma(dx)\in\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))\end{equation} that satisfies, for each $f=(h,g)\in\mathcal E$, \begin{equation}\label{vctsw5w3} \int_\Delta a_1^+(x)Ca_2^-(L_2(x))\sigma(dx)f^{\otimes (n+1)}=(n+1)\big(L_2^\Delta g\big)\odot(Cf^{\otimes n}).\end{equation} Let $(u_j)_{j\ge1}$ be an orthonormal basis in $\mathcal G$ such that $\mathcal Ju_j=u_j$ for all $j\ge1$. Then, similarly to \eqref{r5w5w32}, we obtain \begin{equation}\label{fys6w6u4} \int_\Delta a_1^+(x)Ca_2^-(L_2(x))\sigma(dx)=\sum_{i,j\ge1}(L_2^\Delta u_j,e_i)_{\mathcal H}\,a_1^+(e_i)Ca_2^-(u_j), \end{equation} where the series converges strongly in $\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))$. Next, we note that $\mathcal H\otimes\mathcal G=L^2(X\to\mathcal G,\sigma)$. Hence, by \eqref{vd5yw573}, $\chi_\Delta(\cdot)L_1(\cdot)\in \mathcal H\otimes\mathcal G$. Note also that $\mathcal H\otimes\mathcal G$ is a subspace of $\mathcal E^{\otimes 2}=(\mathcal H\oplus\mathcal G)^{\otimes 2}$. For each $m\in\mathbb N$, we denote by $P_m:\mathcal E^{\otimes m}\to\mathcal E^{\odot m}$ the symmetrization operator. We naturally set, for each $f^{(k)}\in\mathcal F_{k}(\mathcal E)$, $$\int_\Delta a_1^+(x)a_2^+(L_1(x))\sigma(dx)f^{(k)}=P_{k+2}\big((\chi_\Delta(\cdot)L_1(\cdot))\otimes f^{(k)}\big).$$ Hence, we define $$\int_\Delta a_1^+(x)Ca_2^+(L_1(x))\sigma(dx)\in\mathcal L(\mathcal F_{n-1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))$$ by $$\int_\Delta a_1^+(x)Ca_2^+(L_1(x))\sigma(dx)f^{(n-1)}:=P_{m+1}(1_{\mathcal E}\otimes (CP_n))\big( (\chi_\Delta(\cdot)L_1(\cdot)) \otimes f^{(n-1)}\big) $$ for $f^{(n-1)}\in\mathcal F_{n-1}(\mathcal E)$. 
Here $1_\mathcal E$ is the identity operator in $\mathcal E$. Therefore, \begin{equation}\label{s4qy} \int_\Delta a_1^+(x)Ca_2^+(L_1(x))\sigma(dx)=\sum_{i,j\ge1}(\chi_\Delta(\cdot)L_1(\cdot) ,e_i\otimes u_j)_{\mathcal H\otimes\mathcal G}\,a_1^+(e_i)Ca_2^+(u_j), \end{equation} where the series converges strongly in $\mathcal L(\mathcal F_{n-1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))$. Similarly to Remark \ref{yre7i645i7r45e}, for $i=1,2$, we define $\mathbb L_i^\Delta\in\mathcal L(\mathcal H,\mathcal G)$ by $$\mathbb L_i^\Delta h:=\int_\Delta h(x)L_i(x)\sigma(dx),\quad h\in\mathcal H$$ (in the sense of Bochner integration). Similarly to \eqref{cw5uu5}, \eqref{vctsw5w3}, we define $$\int_\Delta a_2^+ (\mathcal JL_2(x))Ca_1^-(x)\sigma(dx)\in\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))$$ by $$\int_\Delta a_2^+ (\mathcal JL_2(x))Ca_1^-(x)\sigma(dx) f^{\otimes(n+1)}:=(n+1)\big((\mathbb L_2^\Delta)' h\big) \odot (Cf^{\otimes n}),\quad f=(h,g)\in\mathcal E. $$ Similarly to \eqref{fys6w6u4}, \begin{equation}\label{vctesw5y} \int_\Delta a_2^+ (\mathcal JL_2(x))Ca_1^-(x)\sigma(dx)=\sum_{i,j\ge1}\big((\mathbb L_2^\Delta)' e_i,u_j\big)_{\mathcal G}\, a_2^+(u_j)Ca_1^-(e_i), \end{equation} where the series converges strongly in $\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m+1}(\mathcal E))$. Finally, we define $$\int_\Delta a_2^- (\mathcal JL_1(x))Ca_1^-(x)\sigma(dx)\in\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m-1}(\mathcal E))$$ by $$\int_\Delta a_2^- (\mathcal JL_1(x))Ca_1^-(x)\sigma(dx) f^{\otimes(n+1)} :=(n+1)a^-_2\big((\mathbb L_1^\Delta)'h\big)(Cf^{\otimes n}),\quad f=(h,g)\in\mathcal E.$$ Hence, \begin{equation}\label{cxrw53ew} \int_\Delta a_2^- (\mathcal JL_1(x))Ca_1^-(x)\sigma(dx)=\sum_{i,j\ge1} \big((\mathbb L_1^\Delta)'e_i,u_j\big)_{\mathcal G} \,a_2^-(u_j)Ca_1^-(e_i), \end{equation} where the series converges strongly in $\mathcal L(\mathcal F_{n+1}(\mathcal E),\mathcal F_{m-1}(\mathcal E))$. 
\end{proof} \begin{proposition}\label{3rtlgpr} For each $\Delta\in\mathcal B_0(X)$, the particle density $\rho(\Delta)=\int_\Delta A^+(x)A^-(x)\,\sigma(dx)$ is a well-defined Hermitian operator in $\mathcal F(\mathcal E)$ with domain $\mathcal F_{\mathrm{fin}}(\mathcal E)$. Furthermore, for any $\Delta_1,\Delta_2\in\mathcal B_0(X)$, $[\rho(\Delta_1),\rho(\Delta_2)]=0$. \end{proposition} \begin{proof} By Proposition \ref{cyse6ui5i}, we have $\rho(\Delta)\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal E))$. The fact that $\rho(\Delta)$ is a Hermitian operator in $\mathcal F(\mathcal E)$ easily follows from the proof of Proposition~\ref{cyse6ui5i}. To prove the commutation, one uses the corresponding Bochner integrals, formulas \eqref{r5w5w32}, \eqref{fys6w6u4}--\eqref{cxrw53ew}. In doing so, one uses the fact that every strongly convergent sequence of bounded linear operators is norm-bounded. Hence, for any two strongly convergent sums of bounded linear operators, $A=\sum_{i\ge1} A_i$ and $B=\sum_{j\ge1} B_j$, one has $AB=\sum_{i,j\ge 1}A_iB_j$, where the latter double series converges strongly. \end{proof} \begin{theorem}\label{due6uew4} The operators $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$ defined by Propositions~\ref{cyse6ui5i}, \ref{3rtlgpr} and the state $\tau$ defined by the vacuum vector $\Omega$ satisfy the assumptions of Theorem~\ref{tes56uwe4u6}. The corresponding point process $\mu$ is the Cox process $\Pi_R$, where $R(x)=|G(x)|^2$, and $G$ is the Gaussian field from Theorem~\ref{reaq5y43wu}. The point process $\Pi_R$ is hafnian with the correlation kernel $\mathbb K(x,y)$ given by \eqref{d6e6ie4}, where $\mathcal K_1(x,y)$ and $\mathcal K_2(x,y)$ are given by \eqref{w909i8u7y689} and \eqref{ftsqw43qd}, respectively.
\end{theorem} \begin{proof} Direct calculations show that, for any $\Delta_1,\Delta_2 \in\mathcal B_0(X)$ and $C\in\mathcal L(\mathcal F_{\mathrm{fin}}(\mathcal E))$, formula \eqref{cx4t3y7} holds, which implies formula \eqref{terw5yw35}. We apply the vacuum state $\tau$ to \eqref{terw5yw35}. Using formulas \eqref{dr6e6u43wq}, \eqref{d6e6ie4}, \eqref{waaq4yq5y}, \eqref{ytd7r57}, \eqref{ytyuryr}, Theorem~\ref{reaq5y43wu}, and Proposition~\ref{cyse6ui5i}, we conclude that the measure $\theta^{(n)}$ is given by \begin{align} \theta^{(n)}(dx_1\dotsm dx_n)&=\frac1{n!}\,\tau\big(\Psi(x_n)\dotsm\Psi(x_1)\Phi(x_1)\dotsm\Phi(x_n)\big) \sigma^{\otimes n}(dx_1\dotsm dx_n)\notag\\ &=\frac1{n!}\,\mathbb E\big(\overline{G(x_n)}\dotsm \overline{G(x_1)}G(x_1)\dotsm G(x_n)\big)\sigma^{\otimes n}(dx_1\dotsm dx_n)\label{buyftd7s}\\ &=\frac1{n!}\,\mathbb E\big(|G(x_1)|^2\dotsm|G(x_n)|^2\big)\sigma^{\otimes n}(dx_1\dotsm dx_n)\label{vud6ws5}\\ &=\frac1{n!}\,\operatorname{haf}[\mathbb K(x_i,x_j)]_{i,j=1,\dots,n}\,\sigma^{\otimes n}(dx_1\dotsm dx_n).\label{uyfy7de6} \end{align} In particular, $\theta^{(n)}$ is a positive measure that is concentrated on $X^{(n)}$. If $\mathcal Y=G(x)$ or $\overline{G(x)}$ and $\mathcal Z=G(y)$ or $\overline{G(y)}$, then $$|\mathbb E(\mathcal Y\mathcal Z)|\le\big(\mathbb E(|\mathcal Y|^2)\mathbb E(|\mathcal Z|^2)\big)^{1/2}=\big(\mathbb E(|G(x)|^2)\,\mathbb E(|G(y)|^2)\big)^{1/2}=\|L_1(x)\|_{\mathcal G}\,\|L_1(y)\|_{\mathcal G}. $$ The number of all partitions $\{i_1,j_1\},\dots,\{i_n,j_n\}$ of $\{1,\dots,2n\}$ is $\frac{(2n)!}{(n!)2^n}\le 2^nn!$\,. Hence, by \eqref{buyftd7s} and the formula for the moments of Gaussian random variables, $$\theta^{(n)}(\Delta^n)\le\bigg(2\int_\Delta\|L_1(x)\|_{\mathcal G}^2\,\sigma(dx)\bigg)^n,\quad\Delta\in\mathcal B_0(X).$$ Thus, the operators $(\rho(\Delta))_{\Delta\in\mathcal B_0(X)}$ satisfy the assumptions of Theorem~\ref{tes56uwe4u6}. 
The statement of the theorem about the arising point process $\mu$ follows immediately from \eqref{vud6ws5} and \eqref{uyfy7de6}. \end{proof} \subsection*{Acknowledgments} The authors are grateful to the referee for valuable comments that improved the manuscript. \end{document}
\begin{document} \title{Bootstrapping partition regularity of linear systems} \author{\tsname} \address{\tsaddress} \email{\tsemail} \begin{abstract} Suppose that $A$ is a $k \times d$ matrix of integers and write $\mathfrak{R}_A:\N \rightarrow \N\cup \{ \infty\}$ for the function taking $r$ to the largest $N$ such that there is an $r$-colouring $\mathcal{C}$ of $[N]$ with $\bigcup_{C \in \mathcal{C}}{C^d}\cap \ker A =\emptyset$. We show that if $\mathfrak{R}_A(r)<\infty$ for all $r \in \N$ then $\mathfrak{R}_A(r) \leq \exp (\exp(r^{O_{A}(1)}))$ for all $r \geq 2$. When the kernel of $A$ consists only of Brauer configurations -- that is vectors of the form $(y,x,x+y,\dots,x+(d-2)y)$ -- the above has been proved by Chapman and Prendiville with good bounds on the $O_A(1)$ term. \end{abstract} \maketitle \section{Introduction} Our work concerns colourings. For a set $X$ and natural $r$ we say that $\mathcal{C}$ is an \textbf{$r$-colouring of $X$} if $\mathcal{C}$ is a cover of $X$ \emph{i.e.} $X \subset \bigcup_{C \in \mathcal{C}}{C}$, and $\mathcal{C}$ has size $r$. In particular we shall not need our colours to be disjoint, though such colourings are included. Suppose that $A$ is a $k\times d$ matrix of integers. We write $\mathfrak{R}_A:\N \rightarrow \N \cup \{\infty\}$ for the function taking $r$ to the largest $N$ such that there is an $r$-colouring $\mathcal{C}$ of $[N]:=\{1,\dots,N\}$ with $\bigcup_{C \in \mathcal{C}}{C^d}\cap \ker A =\emptyset$ -- in words, such that there are no monochromatic solutions to $Ax=0$. Note that the function $\mathfrak{R}_A$ is monotonically increasing. Not all matrices $A$ have $\mathfrak{R}_A(r)<\infty$ for all $r \in \N$ (\emph{e.g.} if all the non-zero terms in $A$ are positive), but those that do we call \textbf{partition regular}. 
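The definition of $\mathfrak{R}_A$ is easy to explore by brute force for tiny parameters. The sketch below is illustrative only (the names `has_mono_schur` and `rado_number` are ours, not the paper's): it computes $\mathfrak{R}_A(2)$ for the Schur matrix $A=(1,1,-1)$, i.e.\ the equation $x+y=z$, by enumerating all $2$-colourings.

```python
from itertools import product

def has_mono_schur(colouring):
    """colouring[i-1] is the colour of i in [N].  Return True if some
    colour class contains x, y, x+y (x = y allowed), i.e. a monochromatic
    solution of Ax = 0 for A = (1, 1, -1)."""
    N = len(colouring)
    for x in range(1, N + 1):
        for y in range(x, N + 1):
            if x + y <= N and colouring[x - 1] == colouring[y - 1] == colouring[x + y - 1]:
                return True
    return False

def rado_number(r, N_max=20):
    """Largest N <= N_max admitting an r-colouring of [N] with no
    monochromatic solution, i.e. R_A(r) for A = (1, 1, -1)."""
    best = 0
    for N in range(1, N_max + 1):
        if any(not has_mono_schur(c) for c in product(range(r), repeat=N)):
            best = N
        else:
            break
    return best

print(rado_number(2))  # 4: the colouring {1,4} / {2,3} works on [4], but nothing works on [5]
```

Schur's theorem guarantees $\mathfrak{R}_A(r)<\infty$ here for every $r$; the enumeration merely confirms the tiny case $r=2$.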
There are matrices $A$ such that van der Waerden's theorem \cite[Exercise 6.3.7]{taovu::} (first proved in \cite{van::0}) is implied by the partition regularity of $A$ (see \cite[Satz I]{rad::1}), and similarly for Schur's Theorem \cite[6.12]{taovu::} (first proved in \cite{sch::4}). Schur's theorem actually gives the stronger fact that\footnote{It is a result of Abbott and Moser \cite{abbmos::} that we cannot do much better.} $\mathfrak{R}_A(r) \leq \lfloor e r!\rfloor$, and since the celebrated work of Gowers \cite{gow::4,gow::0} we know that van der Waerden's theorem also has reasonable bounds in terms of the number of colours. It is the purpose of this paper to use Gowers' work to show the following. \begin{theorem}\label{thm.mn} Suppose that $A$ is a $k\times d$ integer-valued partition regular matrix and $r\geq 2$ is natural. Then there is some $N \leq \exp(\exp(r^{O_{A}(1)}))$ such that any $r$-colouring of $[N]$ contains a colour class $C$ and some $x \in C^d$ such that $Ax=0$. \end{theorem} The basic method is expounded in the model setting\footnote{The model setting has proved very fruitful for distilling the important aspects of arguments in additive combinatorics. See the paper \cite{gre::9} and the sequel \cite{wol::3}.} of $\F_2^n$ by Shkredov in \cite[Theorem 24]{shk::7} for the purpose of illustrating how analytic techniques can be applied to colouring results. Chapman and Prendiville in \cite{chapre::} independently discovered the argument given in \cite[Theorem 24]{shk::7} (though with some technical differences around expansion vs large Fourier coefficients) and importantly showed how it could be applied to provide good bounds in colouring problems in the integers where none were previously known. 
Specifically in \cite[Theorem 1.1]{chapre::} they prove Theorem \ref{thm.mn} for Brauer configurations, meaning for a matrix $A$ whose kernel is the set of vectors of the form $(y,x,x+y,\dots,x+(d-2)y)$ for some fixed $d \geq 3$, with a bound doubly exponential in $d$ in place of the $O_A(1)$ term. (They also show in \cite[Theorem 1.2]{chapre::} that one may replace the $O_A(1)$ term by $1+o(1)$ for Brauer configurations with $d=4$.) It is the purpose of this note to extend the arguments of Chapman and Prendiville to partition regular linear systems. This entails a large notational burden and as a result, while they are able to give rather good estimates for the $O_A(1)$-term when $A$ is a matrix corresponding to a Brauer configuration, we give no meaningful estimates.\footnote{Though see the remark after the proof of Theorem \ref{thm.main}.} The above work comes on the back of a wave of investigations using analytic techniques for colouring problems. This really took off with the paper \cite{cwasch::0} of Cwalina and Schoen, and was followed by the work of Green and collaborators \cite{grelin::,gresan::1}, then Chow, Lindqvist and Prendiville \cite{cholinpre::}, and most recently Chapman \cite{cha::6} which inspired this particular paper. One would often like to insist that the $x$ found in Theorem \ref{thm.mn} is in a certain sense non-degenerate. The extent to which this is possible varies, but the question has been dealt with comprehensively by Hindman and Leader in \cite{hinlea::}. See also \cite{fragrarod::0} for a related supersaturated formulation. \subsection*{Existing bounds on the Rado numbers $\mathfrak{R}_A(r)$} Other than the aforementioned \cite[Theorems 1.1 \& 1.2]{chapre::} most work has focused on the case where $A$ has one row \emph{i.e.} systems with one equation which for clarity we write in the comma-delimited form $A=(a_1,\dots,a_k)$.
In this case Rado's theorem \cite[Theorem 9.5]{lanrob::} tells us that if (and only if) $A$ is partition regular then there is $\emptyset \neq I \subset [k]$ such that $\sum_{i \in I}{a_i}=0$. Schur's theorem itself gives rather good bounds on $\mathfrak{R}_A(r)$ when $A=(1,1,-1)$, and more generally \cite[Theorem 1.3]{cwasch::0} gives singly exponential bounds when $A$ is a partition regular row. Stronger results when the equation satisfies additional properties are given in \cite[Theorems 1.4 \& 1.5]{cwasch::0} and \cite[Theorem 4.7]{gasmortum::}. When $A=(1,\dots,1,-1)$ the numbers $\mathfrak{R}_A(r)$ are sometimes called the generalised diagonal Schur numbers (although they are just called Schur numbers in \cite{beubre::}). These have been computed for many values of $r$, being completely known for $r=2$ \cite[Theorem 1.3]{beubre::}, and for $r \geq 3$ the reader is directed to \cite[Table 1]{ahmsch::} for recent calculations. Note that the bounds in Theorem \ref{thm.mn} are ineffective, so, for example, when $r=2$ our result says nothing more than $\mathfrak{R}_A(2)<\infty$. When $A$ has just one row there is a large body of work computing the exact value of $\mathfrak{R}_A(2)$ using arguments which are much more combinatorial than those in the present paper. This work has many extensions covering things such as Rado numbers for inhomogeneous equations \cite[p259]{lanrob::}; off-diagonal Rado numbers \cite[p280]{lanrob::}; and Rado numbers for disjunctive equations \cite[p293]{lanrob::}. We restrict ourselves to recording those results which ask for bounds on $\mathfrak{R}_A(2)$ under the same hypotheses as Theorem \ref{thm.mn}.
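Rado's criterion for a single row, recalled at the start of this subsection, is mechanical to check: a row is partition regular exactly when some nonempty subset of its entries sums to zero. A short illustrative sketch (the function name is ours, not the paper's):

```python
from itertools import combinations

def is_partition_regular_row(a):
    """Rado's criterion for the single equation a_1 x_1 + ... + a_k x_k = 0:
    partition regular iff some nonempty set of coefficients sums to 0."""
    return any(sum(s) == 0
               for r in range(1, len(a) + 1)
               for s in combinations(a, r))

assert is_partition_regular_row((1, 1, -1))    # Schur's equation x + y = z
assert is_partition_regular_row((3, 4, -7))    # 3x + 4y = 7z
assert not is_partition_regular_row((1, -2))   # x = 2y is not partition regular
```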
When $A=(a_1,a_2, -a_2)$ for $a_1,a_2 \in \N$ the value of $\mathfrak{R}_A(2)$ is computed in \cite[Theorem 9.17]{lanrob::}; when $A=(a_1, a_2, -(a_1+a_2))$ for $a_1,a_2 \in \N$ the value of $\mathfrak{R}_A(2)$ is computed in \cite[Theorem 1.1]{gupthutri::}; when $A=(1, 1, a_3, -a_4)$ for $a_3,a_4 \in \N$ (where partition regularity of $A$ ensures that $a_4 \in \{1,2,a_3,a_3+1,a_3+2\}$) the value of $\mathfrak{R}_A(2)$ is computed in \cite[Theorems 3, 4 \& 8]{robmye::} for $a_4=a_3$, $a_4=a_3+1$ and $a_4=2$ respectively, with the case $a_4=a_3+2$ being trivial; and when \begin{equation}\label{eqn.study} A=(\overbrace{1,\dots , 1}^{n \text{ times}}, -a_{n+1},\dots , -a_k) \end{equation} with $a_{n+1},\dots,a_k \in \N$ and $n \geq a_{n+1}+\cdots +a_k$, the value of $\mathfrak{R}_A(2)$ is computed in \cite[Theorem 3]{sar::2}. Note that the work of \cite{lanrob::}, \cite{robmye::} and \cite{sar::2} goes further and computes $\mathfrak{R}_A(2)$ for some $A$ which are not partition regular. (This makes sense since we may have $\mathfrak{R}_A(2)<\infty$ without $\mathfrak{R}_A(r)<\infty$ for all $r \in \N$. See \cite[Theorem 9.2]{lanrob::} for conditions on a single row $A$ such that $\mathfrak{R}_A(2)<\infty$.) Finally, \cite[Theorem 1.8]{cwasch::0} shows that $\mathfrak{R}_A(r)=o_{r \rightarrow \infty}(r!)$ when $A$ is as in (\ref{eqn.study}) with $n=3$, $k=5$ and $a_4=a_5=1$, beating the bound following from Schur's argument.
\subsection*{Big-$O$ notation} We use big-$O$ notation in the usual way, see \emph{e.g.} \cite[p11]{taovu::}. The constants behind the big-$O$ and $\Omega$ expressions may depend in peculiar ways on other parameters, and we shall sometimes need some control. We capture this in the same way as \cite[p17]{gresan::1}: Some big-$O$ expressions will be replaced by `universal functions' of the form $f:D_1 \times \cdots \times D_k \rightarrow D_0$ where each $D_i$ is one of the sets $(0,1]$, $\N_0$, or $\N$. If $D_i=(0,1]$ then we write $x \preceq_i y$ if and only if $y \leq x$; otherwise we write $x \preceq_i y$ if and only if $x \leq y$. We say that $f$ is \textbf{monotone} if $f(x) \preceq_0 f(y)$ whenever $x_i \preceq_i y_i$ for all $1 \leq i \leq k$. Note that the above is the usual order on $\N$ and $\N_0$ and the opposite of the usual order on $(0,1]$. This reflects the fact that we shall want bounds on, say, the size of an interval which do not get too much worse as, say, the number of colours grows -- that would be a natural number parameter -- and also as the density of some related set gets small -- that would be a $(0,1]$ parameter. Our notion of monotone aligns these different notions of large and small to point in the same direction. It is useful to note that if $f(x)=O_{a}(g(x))$ where $a \in \N_0^d$ then there is a monotone function $F:\N_0^d \rightarrow \N$ such that \begin{equation*} |f(x)| \leq F(a)g(x) \text{ for all }x. \end{equation*} This can be shown by letting $F(a)$ be the maximum of the constants behind the $O_{a'}$ terms as $a'$ ranges over the finite set of parameters with $a' \preceq a$. The universal functions mapping into $\N$ or $\N_0$ will usually be denoted by $F$s with various decorations \emph{e.g.} subscripts and superscripts, while those mapping into $(0,1]$ will usually be denoted by $\eta$s with various decorations.
To avoid too many different functions, we shall often use the same functions in situations where the optimal functions are almost certainly different but where there is little cost to doing so. \section{Setup and tools}\label{sec.deuber} In this section we record the tools we need. First, in \S\ref{ssec.mpc}, we explain Deuber's framework \cite{deu::} for understanding colouring problems. This will reduce the problem to proving Theorem \ref{thm.main}. The key tools to prove this are recorded in \S\ref{ssec.gn}. Finally we gather a few more technical facts in \S\ref{ssec.tool}. \subsection{Deuber's Theorem}\label{ssec.mpc} In \cite[Satz 3.1]{deu::} Deuber proved a conjecture of Rado, and we shall use Deuber's ideas here too. We follow the exposition and definitions of \cite{gun::}: as in \cite[Definition 2.5]{gun::}\footnote{Which Gunderson notes is slightly different to Deuber's original.}, given $m,p,c \in \N$, a set $S \subset \N$ is an \textbf{$(m,p,c)$-set} if there is some $s=(s_0,\dots,s_{m}) \in \N^{m+1}$ such that \begin{equation*} S=\bigcup_{j=0}^m{\{cs_{m-j}+i_{m-j+1}s_{m-j+1}+ \cdots + i_{m}s_{m}: -p \leq i_{m-j+1},\dots,i_{m} \leq p\}}. \end{equation*} For example, if $m=2$ then \begin{equation*} S=\{cs_2\} \cup \{cs_1+i_2s_2: -p \leq i_2 \leq p\} \cup \{cs_0+i_1s_1+i_2s_2: -p \leq i_1,i_2\leq p\}, \end{equation*} and even more concretely, the set $\{s_1\}\cup \{s_0-s_1,s_0,s_0+s_1\}$ -- which is a three-term arithmetic progression union its common difference -- is a $(1,1,1)$-set. We shall give a little more motivation for these sets in a moment but first we state Deuber's Theorem. \begin{theorem}[Deuber's Theorem, {\cite[Theorem 2.8]{gun::}}] Suppose that $m,p,c,r \in \N$. Then there are $M,P,C \in \N$ such that any $r$-colouring of an $(M,P,C)$-set contains a monochromatic $(m,p,c)$-set. \end{theorem} The main result of this paper is the following. \begin{theorem}\label{thm.main} Suppose that $m,p,c,r \in \N$.
Then there is $N \leq \exp(\exp(r^{O_{m,p,c}(1)}))$ such that any $r$-colouring of $[N]$ contains a monochromatic $(m,p,c)$-set. \end{theorem} Qualitatively this is a special case of Deuber's Theorem since $[N]$ is a $(1,N,N+1)$-set. One of the reasons $(m,p,c)$-sets are important is their relationship with solutions of equations, which we now explain. In \cite[Satz IV]{rad::1} Rado famously proved that partition regularity of a system is equivalent to something called the columns condition: we say that a $k \times d$ matrix $A$ satisfies the \textbf{columns condition} if there is a $d \times t$ matrix of rationals $\alpha$ and a partition $[d]=I_1 \sqcup \cdots \sqcup I_t$, such that writing $a_1,\dots,a_d$ for the columns of $A$ in their given order we have \begin{equation*} \sum_{i \in I_{j+1}}{a_i} =\sum_{i \in I_1\cup \cdots \cup I_j}{\alpha_{ij}a_i} \text{ for all }0 \leq j < t, \end{equation*} with the usual convention that the empty sum is $0$. When we need to refer to a specific $\alpha$ we shall call it a \textbf{witness} for the columns condition. It is natural to assume that $\alpha_{ij}=0$ for all $i \in I_{j+1}\cup \cdots \cup I_t$ and we shall always do so without remark. In view of this we see that $t=1+\rk \alpha$. \begin{theorem}[Rado's theorem, {\cite[Theorem 2.3]{gun::}}]\label{thm.rad} Suppose that $A$ is a $k \times d$ integer-valued matrix. Then $A$ is partition regular if and only if $A$ satisfies the columns condition. \end{theorem} Deuber connected the columns condition to $(m,p,c)$-sets through the following. \begin{theorem}[{\cite[Theorem 2.6(i)]{gun::}}]\label{thm.gun} Suppose that $A$ is a $k \times d$ integer-valued matrix satisfying the columns condition as witnessed by $\alpha$. Then, writing $c$ for the least common multiple of the denominators of the rationals in $\alpha$, every $(1+\rk \alpha, \max_{i,j}{|c\alpha_{ij}|},c)$-set $S$ has some $x \in S^d$ such that $Ax=0$. 
\end{theorem} This is not \cite[Theorem 2.6(i)]{gun::} as stated, but a quick look at the proof shows that this is what is proved. \begin{proof}[Proof of Theorem \ref{thm.mn} given Theorem \ref{thm.main}] By Theorem \ref{thm.rad} and Theorem \ref{thm.gun}, we see that if $A$ is partition regular then there are naturals $m,p,c=O_A(1)$ such that any $(m,p,c)$-set $S$ contains some $x \in S^d$ with $Ax=0$. By Theorem \ref{thm.main} we see that for $N \leq \exp(\exp(r^{O_{m,p,c}(1)}))=\exp(\exp(r^{O_A(1)}))$ any $r$-colouring of $[N]$ has a colour class $C$ containing a set $S$ that is an $(m,p,c)$-set, and hence there is some $x \in S^d \subset C^d$ with $Ax=0$ as required. \end{proof} \subsection{Gowers norms}\label{ssec.gn} The Gowers norms are defined in \cite[Lemma 3.9]{gow::0}, though they are not given that name, and while they can be defined more generally for finite Abelian groups we shall restrict attention to cyclic groups of prime order (in line with \cite{gow::0}). We use \cite{tao::10} as our basic reference though admittedly many of the results there are left as exercises. The material is developed in considerable generality in \cite{gretao::7}; the generality we need is closer to that discussed in \cite[\S2]{gowwol::0}. (Other introductions may be found in many places including \cite[\S4]{gretao:::}, \cite[\S\S2\&3]{hatlov::}, \cite[Appendix A]{wal::2}, and \cite[\S1]{man::3}. All these, including \cite{gowwol::0}, ultimately refer to \cite{gretao::7} for details, though the paper \cite{wal::2} does expand on the details somewhat in \S4.) For $N \in \N$ (which will be prime though need not be right now), $k \in \N$ and $f:\Z/N\Z \rightarrow \C$ we put \begin{equation*} \|f\|_{U^k(\Z/N\Z)}:=\left(\E_{x,h_1,\dots,h_k \in \Z/N\Z}{\prod_{\omega \in \{0,1\}^k}{\mathfrak{C}^{|\omega|}f(x+\omega\cdot h)}}\right)^{2^{-k}}, \end{equation*} where $\mathfrak{C}$ denotes the operation of complex conjugation.
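For $k=2$ there is a standard Fourier description of this norm, $\|f\|_{U^2(\Z/N\Z)}^4=\sum_{\xi}|\widehat{f}(\xi)|^4$ where $\widehat{f}(\xi)=\E_{x}{f(x)e^{-2\pi i x\xi/N}}$; this identity is not needed in the paper, but it gives a quick numerical sanity check of the definition above. A minimal illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 7  # a small prime
f = rng.normal(size=N) + 1j * rng.normal(size=N)

# ||f||_{U^2}^4 = E_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2),
# computed directly from the definition (indices mod N).
total = 0j
for x in range(N):
    for h1 in range(N):
        for h2 in range(N):
            total += (f[x] * f[(x + h1) % N].conjugate()
                      * f[(x + h2) % N].conjugate() * f[(x + h1 + h2) % N])
u2_fourth = total / N**3

# Fourier side: hat f(xi) = E_x f(x) exp(-2 pi i x xi / N).
fhat = np.fft.fft(f) / N
fourier_fourth = np.sum(np.abs(fhat) ** 4)

assert np.isclose(u2_fourth.real, fourier_fourth) and abs(u2_fourth.imag) < 1e-9
```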
The map $\|\cdot \|_{U^k(\Z/N\Z)}$ defines a norm \cite[Exercise 1.3.19]{tao::10} for $k \geq 2$, and enjoys the nesting property $\|\cdot \|_{U^k(\Z/N\Z)} \leq \|\cdot \|_{U^{k+1}(\Z/N\Z)}$ for $k \in \N$ \cite[Exercise 1.3.19]{tao::10}. (Proofs of these two facts are given explicitly on \cite[p466]{taovu::} and in \cite[(11.7)]{taovu::}.) One of the reasons these norms are important is that they control counts of various linear configurations. Specifically, suppose that $\Psi:\Z^d \rightarrow \Z^l$ is a homomorphism and $f$ is a vector of $l$ functions $\Z/N\Z \rightarrow \C$. We define \begin{equation}\label{eqn.dlp} \Lambda_\Psi(f):=\E_{x \in [N]^d}{\prod_{i=1}^l{f_i(\Psi_i(x)+N\Z)}}. \end{equation} The following is the `generalised von Neumann Theorem' we need. It is a special case of \cite[Theorem 4.1]{gretao:::} once the notation has been unpacked, and also of \cite[Exercise 1.3.23]{tao::10} combined with \cite[Exercise 1.3.14]{tao::10}. \begin{theorem}\label{thm.gvn} Suppose that $\Psi:\Z^d \rightarrow \Z^l$ is a homomorphism and for every $i \neq j$, $(\Psi_i,\Psi_j)$ is a pair of independent vectors (\emph{i.e.} if $z\Psi_i + w\Psi_j\equiv 0$ for some $z,w \in \Z$ then $z=w=0$). Then there are naturals $N_0(\Psi)$ and $k(\Psi)$ such that if $N \geq N_0(\Psi)$ is a prime and $f$ is a vector of $l$ functions $\Z/N\Z \rightarrow \C$ bounded by $1$ we have \begin{equation*} |\Lambda_\Psi(f)| \leq \inf_{1 \leq i \leq l}{\|f_i\|_{U^{k(\Psi)}(\Z/N\Z)}}. \end{equation*} \end{theorem} We shall use the above to count $(m,p,c)$-sets. This was already done by L{\^e} in \cite{Le::} for the purpose of transferring the partition regularity of Brauer configurations to the sets $\{p-1:p \text{ is prime}\}$ and $\{p+1:p \text{ is prime}\}$, itself answering a question of Li and Pan \cite{lipan::}. We also need Gowers' inverse theorem. The following result is what is proved in \cite[Theorem 18.1]{gow::0}, though it is not stated in precisely this way.
\begin{theorem}\label{thm.gi} There is a monotone function $F_1:\N \rightarrow \N$ such that the following holds. Suppose that $N$ is prime, $\epsilon \leq \frac{1}{2}$ and $f:\Z/N\Z\rightarrow \C$ is bounded in magnitude by $1$ with $\|f\|_{U^k(\Z/N\Z)} \geq \epsilon$. Then there is a partition of $[N]$ into arithmetic progressions $P_1,\dots,P_M$ of average size at least $N^{\epsilon^{F_1(k)}}$ such that \begin{equation*} \sum_{j=1}^M{\left|\sum_{s \in P_j}{f(s+N\Z)}\right|} \geq \epsilon^{F_1(k)}N. \end{equation*} \end{theorem} \subsection{Convolution, dilation and progressions}\label{ssec.tool} First we record notation for dilation and translation: given $x,y \in \Z$ we write $\lambda_x(y):=xy$; further, given $f:\Z \rightarrow \C$ we write $\tau_x(f)(y):=f(y+x)$. Suppose that $P\subset \Z$ is an arithmetic progression of odd length. Then \begin{equation*} P=x_P+d_P \cdot \{-N_P,\dots,N_P\} \end{equation*} for some $x_P \in \Z$ called the \textbf{centre}; $d_P \in \N_0$ called the \textbf{common difference}; and $N_P \in \N_0$ called the \textbf{radius}. Technically the common difference and radius need not be uniquely defined but this only becomes a problem for arithmetic progressions of size $1$ where the necessary adaptations of any argument are trivial and omitted for clarity. We work with arithmetic progressions of odd length for convenience not because of any important difference. We say that $P$ is a \textbf{centred arithmetic progression} if $x_P=0$, so in particular a centred arithmetic progression is of odd length. For $\delta \geq 0$ we shall define \begin{equation*} I_\delta(P):=d_P \cdot \{-\lfloor \delta N_P\rfloor,\dots,\lfloor \delta N_P\rfloor\}, \end{equation*} and below record some basic properties of these `fractional dilates' of progressions. In many cases these properties are special cases of properties of Bohr sets (see \cite[\S4.4]{taovu::}). 
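The `fractional dilates' $I_\delta(P)$ just defined are concrete enough to verify by machine. A tiny illustrative sketch (the name `frac_dilate` is ours) confirming sub-additivity and composition for one choice of parameters:

```python
def frac_dilate(d, Np, delta):
    """I_delta(P) for P = x_P + d*{-Np,...,Np}: the centred progression
    d*{-floor(delta*Np),...,floor(delta*Np)} (independent of the centre x_P)."""
    r = int(delta * Np)  # floor, since delta and Np are nonnegative
    return {d * i for i in range(-r, r + 1)}

d, Np = 3, 100
A = frac_dilate(d, Np, 0.2)
B = frac_dilate(d, Np, 0.3)

# Sub-additivity: I_{0.2}(P) + I_{0.3}(P) is contained in I_{0.5}(P).
assert {a + b for a in A for b in B} <= frac_dilate(d, Np, 0.5)

# Composition: I_{0.2}(I_{0.3}(P)) is contained in I_{0.06}(P).
assert frac_dilate(d, int(0.3 * Np), 0.2) <= frac_dilate(d, Np, 0.06)
```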
We do not require the generality of Bohr sets here because the result of Gowers' inverse theorem (Theorem \ref{thm.gi}) is a decomposition in terms of progressions. This has the additional benefit of meaning we do not need to deal with the problem of finding regular Bohr sets (see \cite[Lemma 4.24]{taovu::}), since all progressions are regular in a suitable sense. This is captured in the last three properties below. \begin{lemma}[Basic properties]\label{lem.triv} Suppose that $P$ and $P'$ are arithmetic progressions of odd length; $c \in \N$; $x \in \Z$; and $\delta, \delta' \in (0,1]$. \begin{enumerate} \item \label{pt.trivsym} \emph{(Symmetry)} $I_\delta(P)$ is a centred progression of size at least $\frac{1}{3}\delta |P|$; \item \label{pt.monpar} \emph{(Monotonicity in radius)} $I_{\delta'}(P) \subset I_{\delta}(P)$ whenever $\delta' \leq \delta$; \item \label{pt.monprog} \emph{(Monotonicity in progression)} $I_{\delta}(P') \subset I_{\delta}(P)$ whenever $P' \subset P$; \item \label{pt.cen} \emph{(Translation)} $x+P$ is a progression of odd length and $I_\delta(P)=I_\delta(x+P)$; \item \label{pt.trivdil} \emph{(Dilations)} $c\cdot P$ is an arithmetic progression of odd length and $I_{\delta}(c\cdot P) = c\cdot I_\delta(P)$; \item \label{pt.trivsa} \emph{(Sub-additivity)} $I_{\delta}(P)+I_{\delta'}(P) \subset I_{\delta+\delta'}(P)$; \item \label{pt.comp} \emph{(Composition)} $I_{\delta}(I_{\delta'}(P))\subset I_{\delta\delta'}(P)$; \item \label{pt.trivint}\emph{(Interiors)} there is an arithmetic progression of odd length, $\Int_\delta(P)$, such that \begin{equation*} \Int_\delta(P)+I_\delta(P) \subset P \text{ and } |\Int_\delta(P)| \geq (1-\delta)|P|; \end{equation*} \item \label{pt.trivclos}\emph{(Closures)} $P+I_\delta(P)$ is an arithmetic progression of odd length and \begin{equation*} |P + I_\delta(P)|\leq (1+\delta)|P|; \end{equation*} \item \label{pt.trivinv} \emph{(Invariance)} for all $f:\Z \rightarrow \C$ and $y \in I_\delta(P)$ we have 
\begin{equation*} |\E_{x \in P}{\tau_y(f)(x)} - \E_{x \in P}{f(x)}| \leq 2\delta \|f\|_{L_\infty}. \end{equation*} \end{enumerate} \end{lemma} \begin{proof} (\ref{pt.trivsym}) is trivial on noting that $|I_\delta(P)| = 2\lfloor \delta N_P\rfloor +1 \geq \frac{1}{3}\delta (2N_P+1)$. (\ref{pt.monpar}), (\ref{pt.monprog}), (\ref{pt.cen}), and (\ref{pt.trivdil}) are immediate. (\ref{pt.trivsa}) follows since $\lfloor \delta N_P\rfloor + \lfloor \delta' N_P\rfloor \leq \lfloor (\delta+\delta') N_P\rfloor$; and (\ref{pt.comp}) since $\lfloor \delta \lfloor \delta' N\rfloor \rfloor \leq \lfloor \delta\delta'N\rfloor$. For (\ref{pt.trivint}) set \begin{equation*} \Int_\delta(P):=x_P+d_P\cdot \{-(N_P-\lfloor \delta N_P\rfloor),\dots,N_P-\lfloor \delta N_P\rfloor\}, \end{equation*} so that $\Int_\delta(P)$ is an arithmetic progression of odd length, $\Int_\delta(P) + I_\delta(P) \subset P$, and \begin{equation*} |\Int_\delta(P)| \geq 2(N_P-\lfloor \delta N_P\rfloor) +1 = 2N_P +1 - 2\lfloor \delta N_P\rfloor \geq (1-\delta)|P|. \end{equation*} For (\ref{pt.trivclos}) we note that \begin{equation*} P+I_\delta(P) = x_P+d_P\cdot \{-N_P,\dots,N_P\} + d_P\cdot \{ -\lfloor \delta N_P\rfloor,\dots,\lfloor \delta N_P\rfloor\}, \end{equation*} and so \begin{equation*} |P+I_\delta(P)| = 2(N_P+\lfloor \delta N_P\rfloor) +1 \leq (1+\delta)(2N_P+1). \end{equation*} For (\ref{pt.trivinv}) note if $s' \in I_\delta(P)$ then we have \begin{align*} & \left|\E_{s \in P}{\tau_{s'}(f)(s)} - \E_{s \in P}{f(s)}\right|\\ & \qquad = \left|\E_{s \in P}{(1-1_{\Int_\delta(P)})(s+s') f(s+s')} + \E_{s \in P}{1_{\Int_\delta(P)}(s+s')f(s+s')}\right.\\ &\qquad \qquad \left.- \E_{s \in P}{1_{\Int_\delta(P)}(s)f(s)} - \E_{s \in P}{(1-1_{\Int_\delta(P)})(s)f(s)}\right|\\ &\qquad \leq \left|\E_{s \in P}{(1-1_{\Int_\delta(P)})(s+s') f(s+s')}\right|+\left|\E_{s \in P}{(1-1_{\Int_\delta(P)})(s)f(s)}\right| \leq 2\delta \|f\|_{L_\infty}. \end{align*} The result is proved. 
\end{proof} \section{An example} The notation in the final proof is quite heavy, so before turning to this we present an example case kindly suggested by one of the referees. The arguments of \cite{chapre::} work to deal with $(1,p,c)$-sets (see \cite[Theorem 5.1]{chapre::}) and are similar to ours in terms of how these sorts of sets are dealt with. The additional complexity we encounter is in dealing with $(m,p,c)$-sets for $m>1$. Our approach is inductive on $m$, and our intention is that by motivating the $m=2$ case the general argument will become clear. We consider the problem of finding monochromatic septuples \begin{equation}\label{eqn.config} (x;y,x+y; z,z+x,z+y,z+x+y) \end{equation} in $r$-colourings of $\{1,\dots,N\}$. Such septuples do not correspond to an $(m,p,c)$-set, but for our purposes they behave rather like a $(2,1,1)$-set. In fact looking for configurations of this type is a special case of Folkman's theorem \cite[Theorem 11, \S3.4]{grarotspe::0} (an explanation of the name may also be found in that reference), which was discovered independently by Folkman\footnote{Folkman's proof was unpublished, but a record of the fact that he proved it may be found in \cite[Corollary 4]{grarot::}.}, Rado \cite{rad::4}, and Sanders \cite[Theorem 2]{san::31}. We treat the three sets of terms in (\ref{eqn.config}) separated by semi-colons at three different scales. In particular, we shall find arithmetic progressions $P_1$, $P_2$, and $P_3$ iteratively by using the Gowers inverse theorem (Theorem \ref{thm.gi}) to give a density increment for a colour class on a certain progression. This increment translates to an increment to the sum over all colour classes of their maximum density on the translate of a progression. Importantly, the translate may be different for different colour classes; and the process terminates since the sum of the maximal densities is bounded above by $r$.
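To make the configuration (\ref{eqn.config}) concrete, here is a brute-force Python sketch (our own toy illustration, not part of the argument) that counts the monochromatic septuples in a colouring of $\{1,\dots,N\}$.

```python
# Brute-force count (our illustration only) of monochromatic instances of
# the septuple (x; y, x+y; z, z+x, z+y, z+x+y) inside {1, ..., N}.

def septuple(x, y, z):
    return (x, y, x + y, z, z + x, z + y, z + x + y)

def monochromatic_septuples(colour, N):
    """Count triples (x, y, z) >= 1 whose whole septuple lies in
    {1, ..., N} and receives a single colour."""
    count = 0
    for x in range(1, N + 1):
        for y in range(1, N + 1):
            for z in range(1, N + 1):
                values = septuple(x, y, z)
                if max(values) <= N and len({colour(v) for v in values}) == 1:
                    count += 1
    return count

# With one colour every in-range triple counts; for N = 4 only
# (1,1,1), (1,1,2), (1,2,1), (2,1,1) fit, since the largest value is x+y+z.
assert monochromatic_septuples(lambda v: 0, 4) == 4
# Colouring by parity forces x, y, z all even, so for N = 6 only (2,2,2) works.
assert monochromatic_septuples(lambda v: v % 2, 6) == 1
```

This exhaustive search is of course useless at the scales the theorem addresses; it only fixes the shape of the configuration in the reader's mind.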
On termination we have control of some localised Gowers norms like \begin{equation*} \|f\|_{U^3(P_i;P_3)}:=\E_{z \in P_3}{\|f1_{z+P_i}\|_{U^3}} \text{ for }i\in \{1,2\}. \end{equation*} Localised Gowers norms of this type are defined in \cite[(2.12)]{pre::0} amongst other places, and the additional discussion around that definition may be of interest. The Generalised von Neumann Theorem (Theorem \ref{thm.gvn}) ensures that control of these localised Gowers norms of a colour class $C$ on the translate $a_C+P_3$ on which $C$ has maximal density $\delta$ leads to \begin{align*} &\E_{x \in P_1,y \in P_2, z \in a_C+P_3}{1_C(x)1_C(y)1_C(x+y)1_C(z)1_C(x+z)1_C(y+z)1_C(x+y+z)}\\ & \qquad \qquad \approx \delta^4\E_{x \in P_1,y \in P_2}{1_C(x)1_C(y)1_C(x+y)}. \end{align*} Inductively we can ensure that this second term is large for some colour class $C$, and the largeness of that colour class in turn ensures that $\delta$ is large. This gives a large count of septuples in one colour class as required. \section{The proof} We begin by recording some notation for linear forms associated with $(m,p,c)$-sets. For $p,c \in \N$ and $t \in \N_0$ put \begin{equation*} \mathcal{D}_{p,c;t}:=\left\{ i \in \Z^{\N_0}: i_j \in \{-p,\dots,p\} \text{ for all }j<t; i_{t} =c;\text{ and } i_{j}=0 \text{ for all }j >t\right\}. \end{equation*} The sets $\mathcal{D}_{p,c;t}$ as $t$ ranges over $\N_0$ are disjoint. For $m \in \N_0$ put \begin{equation*} \mathcal{U}_{m,p,c}:=\bigcup_{t=0}^m{\mathcal{D}_{p,c;t}}. \end{equation*} Then for $i \in \mathcal{U}_{m,p,c}$ write \begin{equation}\label{eqn.lis} L_i:\Z^{\N_0}\rightarrow \Z; s \mapsto \sum_{j:i_j \neq 0}{s_ji_j}, \end{equation} which is well-defined since the set of $j$ such that $i_j \neq 0$ has size at most $m+1$ (and in particular is finite) for $i \in \mathcal{U}_{m,p,c}$.
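The index sets $\mathcal{D}_{p,c;t}$ and the forms $L_i$ are easy to enumerate for small parameters. The Python sketch below (our own illustration, with hypothetical names) does this, truncating each $i$ to its first $t+1$ coordinates since the remaining entries vanish; it checks, for instance, that $|\mathcal{D}_{p,c;t}|=(2p+1)^t$.

```python
# Enumerate (our illustration) the coefficient sets D_{p,c;t}, their union
# U_{m,p,c}, and evaluate the associated linear forms L_i.
from itertools import product

def D(p, c, t):
    """Vectors i with i_j in {-p, ..., p} for j < t and i_t = c,
    truncated to the first t+1 coordinates (later entries are zero)."""
    return [head + (c,) for head in product(range(-p, p + 1), repeat=t)]

def U(m, p, c):
    """Union of D(p, c, t) over t = 0, ..., m."""
    return [i for t in range(m + 1) for i in D(p, c, t)]

def L(i, s):
    """The linear form L_i(s) = sum_j s_j * i_j (i truncated as above)."""
    return sum(sj * ij for sj, ij in zip(s, i))

# |D(p, c, t)| = (2p+1)^t, so |U(m, p, c)| = sum_{t=0}^m (2p+1)^t.
assert len(D(1, 1, 2)) == 9
assert len(U(2, 1, 1)) == 1 + 3 + 9
# For m = p = c = 1 and s = (s_0, s_1) = (5, 100) the forms are
# s_0, together with -s_0 + s_1, s_1, s_0 + s_1.
assert sorted(L(i, (5, 100)) for i in U(1, 1, 1)) == [5, 95, 100, 105]
```

Note that for $p=c=1$ the forms with nonnegative coefficients are exactly the entries of the septuple of the previous section, which is the sense in which that configuration resembles a $(2,1,1)$-set.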
The unique element of $\mathcal{D}_{p,c;t}$ with support of size $1$ is particularly important: we put \begin{equation*} i^*(c,t):=(\overbrace{0,\dots,0}^{t \text{ times}},c,0,\dots) \text{ and } \mathcal{D}_{p,c;t}^*:=\mathcal{D}_{p,c;t}\setminus \{i^*(c,t)\}. \end{equation*} Suppose that $P_0,\dots,P_m$ are arithmetic progressions. We are interested in the count \begin{equation}\label{eqn.qdef} Q_{m,p,c}(A;P_0,\dots,P_m):=\E_{s_0 \in P_0,\dots,s_m \in P_m}{\prod_{i \in \mathcal{U}_{m,p,c}}{1_A(L_i(s))}}, \end{equation} since if $P_0,\dots,P_m \subset \N$ then this quantity is non-zero only if $A$ contains an $(m,p,c)$-set. The following is our key counting/density-increment dichotomy. \begin{lemma}\label{lem.count} There are monotone functions $F_2:\N^3 \rightarrow \N$ and $\eta_0:\N^3 \rightarrow (0,1]$ such that the following holds. Suppose that $m,p,c \in \N$, $\delta\in (0,1]$ and $P_0,\dots , P_m\subset \Z$ are arithmetic progressions of odd length with \begin{equation}\label{eqn.hyp1} P_i \subset I_\delta(c\cdot P_m) \text{ for all }0 \leq i \leq m-1; \end{equation} $P'' \subset \Z$ is a centred arithmetic progression with \begin{equation}\label{eqn.hyp2} P'' \subset I_\delta(P_i) \text{ for all }0 \leq i \leq m; \end{equation} $P_m \subset \N$ and $A \subset \Z$ has $\alpha:=\E_{x \in c\cdot P_m}{1_A(x)}>0$. Then at least one of the following holds: \begin{enumerate} \item \label{pt.smallN} $|P''| \leq \exp(\delta^{-F_2(m,p,c)})$; \item \label{pt.ldelta} $\delta \geq \eta_0(m,p,c)$; \item \label{pt.harder} there is an arithmetic progression of odd length, $P'''\subset \N$, with $I_1(c\cdot P''')\subset I_{F_2(m,p,c)\delta^2}(c\cdot P_m)$, such that \begin{equation*} |P'''| \geq |P''|^{\delta^{F_2(m,p,c)}} \text{ and } \E_{x \in c \cdot P'''}{1_A(x)} \geq \alpha+\delta. \end{equation*} \item \label{pt.count} or \begin{align*} & \left|Q_{m,p,c}(A;P_0,\dots,P_m)\right.\\ & \qquad \qquad \left. 
- \alpha^{|\mathcal{D}_{p,c;m}|}Q_{m-1,p,c}(A;P_0,\dots,P_{m-1})\right|\leq F_2(m,p,c)\delta^{\eta_0(m,p,c)}; \end{align*} \end{enumerate} \end{lemma} The proof below is long but not at all conceptually difficult. The length is a result of taking care with technicalities and somewhat licentious notation. The basic idea is to use Theorem \ref{thm.gvn} to control the $Q$s by suitable uniformity norms and then Theorem \ref{thm.gi} to show that if that error is not small then there is a density increment. There are two types of density increment: one is the expected increment resulting from large $U^k$ norm in Theorem \ref{thm.gi}. The other results from ensuring that the density of $A$ is the same on two progressions, one of which is a small dilate of the other. This second increment is common to arguments where groups are replaced by approximate groups -- in this case progressions -- and they perhaps originate in the work of Bourgain \cite{bou::5}. (See \cite[(10.16)]{taovu::} and the definition of the function $G$ there.) \begin{proof} With $L_i$s defined as in (\ref{eqn.lis}) let $\Psi:=(L_i)_{i \in \mathcal{U}_{m,p,c}}$ so that $\Psi:\Z^{m+1} \rightarrow \Z^{\mathcal{U}_{m,p,c}}$ is a homomorphism. Every $(L_i,L_j)$ with $i \neq j$ is a pair of independent vectors, so by Theorem \ref{thm.gvn} applied to $\Psi$ there is $k=k(\Psi)=O_{m,p,c}(1)$ such that if $N \geq N_0(\Psi)$ is prime and $h$ is a vector of functions $\Z/N\Z \rightarrow \C$ (indexed by $\mathcal{U}_{m,p,c}$) then \begin{equation}\label{eqn.cnt} \left|\E_{x \in [N]^{m+1}}{\prod_{i \in \mathcal{U}_{m,p,c}}{h_i(L_i(x)+N\Z)}}\right|\leq \inf_{i \in \mathcal{U}_{m,p,c}}{\|h_i\|_{U^{k}(\Z/N\Z)}}. \end{equation} Note that this will be applied with a vector $h$ to be determined, but which will be a combination of translates of the set $A$ suitably restricted, and also translates of the balanced function of $A$.
The particular choice is made in (\ref{eqn.hdef}) (which itself depends on (\ref{eqn.gdef}) and (\ref{eqn.fdef})). As remarked in the subsection on big-$O$ notation, since $m$, $p$ and $c$ are in $\N$ we see that there is a monotone function $F:\N^3 \rightarrow \N$ such that $F(m,p,c) \geq N_0(\Psi)$. (We shall take $F_2 \geq F$, but there are other functions later which will determine exactly what $F_2$ needs to be.) Since $P''$ is a centred arithmetic progression there are natural numbers $d''$ and $N''$ such that $P''=d''\cdot \{-N'',\dots,N''\}$. By Bertrand's postulate there is a prime $N$ such that \begin{equation*} \max\{(mp+c)N'',F(m,p,c)\} <N =O_{m,p,c}(N''). \end{equation*} The reason for this choice will become clear just before (\ref{eqn.newrhs}) below. Before that we record the following claim. \begin{claim*} There is $\delta'=O_{m,p,c}(\delta)$ such that if $s_0 \in P_0, \dots ,s_{m-1} \in P_{m-1}$, $x \in d''\cdot \{-N,\dots,N\}$ and $i \in \mathcal{D}_{p,c;m}$ then \begin{equation*} L_i(s)+x-cs_m \in c\cdot I_{\delta'}(P_m) \end{equation*} and if additionally $s_m \in \Int_{\delta'}(P_m)$ then \begin{equation*} L_i(s)+x \in c \cdot P_m. \end{equation*} \end{claim*} \begin{proof} Write $l=\left\lceil \frac{N}{N''}\right\rceil$ so by (\ref{eqn.hyp1}), (\ref{eqn.hyp2}) and Lemma \ref{lem.triv} we have \begin{align*} d''\cdot \{-N,\dots,N\} &\subset {\overbrace{(d''\cdot \{-N'',\dots,N''\})+\cdots + (d''\cdot \{-N'',\dots,N''\})}^{l\text{ times}}}\\ & = lP'' \subset I_{l\delta}(P_0) \subset I_{l\delta}(I_\delta(c\cdot P_m)) \subset I_{l\delta^2}(c\cdot P_m). \end{align*} It follows that if $i \in \mathcal{D}_{p,c;m}$, $s_0\in P_0,\dots,s_{m-1} \in P_{m-1}$, and $x \in d''\cdot \{-N,\dots,N\}$, then by (\ref{eqn.hyp1}) and Lemma \ref{lem.triv} we have \begin{equation}\label{eqn.kul} i_0s_0 + \cdots + i_{m-1}s_{m-1} + x \in mpI_\delta(c\cdot P_m)+I_{l\delta^2}(c\cdot P_m)\subset I_{(mp+l\delta)\delta}(c\cdot P_m).
\end{equation} Let $\delta':=(mp+l\delta)\delta=O_{m,p,c}(\delta)$. If $i \in \mathcal{D}_{p,c;m}$ we have $i_m=c$ and then by (\ref{eqn.kul}) and Lemma \ref{lem.triv} we have \begin{equation*} L_i(s)+x -cs_m= i_0s_0 + \cdots + i_{m-1}s_{m-1} +x \in I_{\delta'}(c\cdot P_m) = c\cdot I_{\delta'}(P_m) \end{equation*} giving the first conclusion. Finally, if $s_m \in \Int_{\delta'}(P_m)$ then by Lemma \ref{lem.triv} again \begin{equation*} L_i(s)+x \in cs_m + c\cdot I_{\delta'}(P_m) \subset c\cdot (\Int_{\delta'}(P_m) +I_{\delta'}(P_m) ) \subset c\cdot P_m. \end{equation*} The claim is proved. \end{proof} Let $\epsilon>0$ be a further constant to be optimised later and suppose (using $\tau$ for translation as defined in \S\ref{ssec.tool}) that for some $j \in \mathcal{D}_{p,c;m}$ we have \begin{equation}\label{eqn.small} \E_{s_0 \in P_0,\dots,s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\left|\E_{x \in d''\cdot [N]}{\tau_{L_j(s)}(1_A - \alpha 1_{c \cdot P_m})(x)}\right|} > \epsilon. \end{equation} By interchanging order of summation we have \begin{align} \label{eqn.split} & \left|\E_{s_0 \in P_0,\dots,s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\E_{x \in d''\cdot [N]}{\tau_{L_j(s)}(1_A - \alpha 1_{c \cdot P_m})(x)}}\right|\\ \nonumber& \qquad = \left|\E_{s_0 \in P_0,\dots,s_{m-1} \in P_{m-1},x \in d''\cdot [N]}{}\right.\\ \nonumber & \qquad \qquad \qquad \quad\left(\E_{s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\tau_{L_j(s)+x-cs_m}1_A (cs_m)}\right.\\ \nonumber & \qquad \qquad \qquad \qquad \qquad \qquad \left.\left.-\alpha\E_{s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\tau_{L_j(s)+x-cs_m}(1_{c\cdot P_m})(cs_m )}\right)\right|. \end{align} Suppose that $s_0 \in P_0,\dots,s_{m-1} \in P_{m-1}$ and $x \in d''\cdot [N]$. 
The claim and Lemma \ref{lem.triv} tell us that \begin{align*} \E_{s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\tau_{L_j(s)+x-cs_m}1_A (cs_m)} & \leq\E_{s_m \in P_m}{\tau_{L_j(s)+x-cs_m}1_A (cs_m)}\\ & \leq \E_{s_m \in P_m}{1_A(cs_m)}+ 2\delta' = \alpha+2\delta'; \end{align*} and \begin{align*} \E_{s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\tau_{L_j(s)+x-cs_m}1_A (cs_m)} & \geq\E_{s_m \in P_m}{\tau_{L_j(s)+x-cs_m}1_A (cs_m)} -\delta'\\ & \geq \E_{s_m \in P_m}{1_A(c\cdot s_m)}-3\delta' = \alpha-3\delta'. \end{align*} We conclude that \begin{equation*} |\E_{s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\tau_{L_j(s)+x-cs_m}1_{A} (cs_m)}- \alpha| = O(\delta')=O_{m,p,c}(\delta), \end{equation*} and similarly \begin{equation*} |\E_{s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\tau_{L_j(s)+x-cs_m}1_{c\cdot P_m} (cs_m)}- 1| = O(\delta')=O_{m,p,c}(\delta) \end{equation*} for all $s_0 \in P_0,\dots,s_{m-1} \in P_{m-1}$ and $x \in d''\cdot [N]$. It follows from these and (\ref{eqn.split}) that \begin{equation*} \left|\E_{s_0 \in P_0,\dots,s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)\E_{x \in d''\cdot [N]}{\tau_{L_j(s)}(1_A - \alpha 1_{c \cdot P_m})(x)}}\right|=O_{m,p,c}(\delta). \end{equation*} Combining this with (\ref{eqn.small}) and averaging we see that there are elements $s_0 \in P_0,\dots,s_{m-1} \in P_{m-1},s_{m} \in \Int_{\delta'}(P_m)$ such that \begin{equation}\label{eqn.cd} \E_{x \in d''\cdot [N]}{\tau_{L_j(s)}(1_A - \alpha 1_{c \cdot P_m})(x)}> \frac{1}{2}\epsilon - O_{m,p,c}(\delta). \end{equation} Since $s_m \in \Int_{\delta'}(P_m)$ the claim tells us \begin{equation*} \tau_{L_j(s)}(1_{c\cdot P_m})(x) = 1_{c\cdot P_m}(L_j(s)+x) =1 \text{ for all }x \in d''\cdot \{-N,\dots,N\}, \end{equation*} and so $L_j(s)-d''\cdot \{-N,\dots,N\} \subset c\cdot P_m \subset \N$. It follows from this that $c$ divides $L_j(s)$ and of course $c$ divides $d''$ (since $P'' \subset I_1(P_0) \subset I_1(c\cdot P_m)=c\cdot I_1(P_m)$ by Lemma \ref{lem.triv}). 
From (\ref{eqn.cd}) we then have \begin{equation*} \E_{x \in c\cdot (c^{-1}L_j(s)-(c^{-1}d'')\cdot [N])}{1_A(x)} >\alpha + \frac{1}{2}\epsilon - O_{m,p,c}(\delta). \end{equation*} We can take $\epsilon \in [\delta,O_{m,p,c}(\delta)]$ such that the right hand side is at least $\alpha+\delta$, and we are in case (\ref{pt.harder}) of the lemma (with $P''':=c^{-1}L_j(s)-(c^{-1}d'') \cdot [N]$ so $|P'''|=N \geq 2N''+1=|P''|$ since $N>(mp+c)N''$ and $mp+c\geq 2$, where $|P'''|$ is odd since it is prime, and \begin{equation*} I_1(c\cdot P''') =I_1(d''\cdot [N])\subset I_{O_{m,p,c}(1)}(P'') \subset I_{O_{m,p,c}(\delta^2)}(c\cdot P_m) \end{equation*} by Lemma \ref{lem.triv}. Again by Lemma \ref{lem.triv} and the discussion in the section on big-$O$ notation there is a monotone function $F':\N^3 \rightarrow \N$ such that $I_1(c\cdot P''') \subset I_{F'(m,p,c)\delta^2}(c\cdot P_m)$. And, again, we shall take $F_2 \geq F'$.)\footnote{This is much stronger than the conclusion offered in case (\ref{pt.harder}) but this is because this is the easy density increment mentioned at the end of the discussion before the proof of this lemma.} In view of the above we may suppose that (\ref{eqn.small}) does not happen for any $j \in \mathcal{D}_{p,c;m}$; we are in the main case. We split the integrand in (\ref{eqn.qdef}) into two factors \begin{equation*} \prod_{i \in \mathcal{U}_{m,p,c}}{1_A(L_i(s))}=\left(\prod_{t=0}^{m-1}{\prod_{i \in \mathcal{D}_{p,c;t}}{1_A(L_i(s))}}\right)\cdot \left(\prod_{i \in \mathcal{D}_{p,c;m}}{1_A(L_i(s))}\right). \end{equation*} The first term on the right is independent of $s_m$; we decompose the second through an arbitrary fixed total order on $\mathcal{D}_{p,c;m}^*$.
For $i,j \in \mathcal{D}_{p,c;m}^*$ put \begin{equation}\label{eqn.fdef} f_{j,i}:=\begin{cases} 1_A & \text{ if }i<j\\ 1_A - \alpha 1_{c\cdot P_m} & \text{ if } i=j\\ \alpha 1_{c\cdot P_m} & \text{ if }i>j\end{cases}; \end{equation} so for all $x \in \Z^{\mathcal{D}_{p,c;m}}$ we have \begin{align*} & \sum_{j \in \mathcal{D}_{p,c;m}^*}{1_A(x_{i^*(c,m)})\prod_{i \in \mathcal{D}_{p,c;m}^*}{f_{j,i}(x_i)}}\\ & \qquad \qquad = \prod_{i \in \mathcal{D}_{p,c;m}}{1_A(x_i)} -\alpha^{|\mathcal{D}_{p,c;m}^*|}1_A(x_{i^*(c,m)})\prod_{i \in \mathcal{D}_{p,c;m}^*}{1_{c\cdot P_m}(x_i)}. \end{align*} It follows that \begin{align} \label{eqn.i}&Q_{m,p,c}(A;P_0,\dots,P_m)\\ \nonumber & \qquad - \alpha^{|\mathcal{D}_{p,c;m}^*|}\E_{s_0 \in P_0,\dots,s_{m-1} \in P_{m-1}}{\left(\prod_{t=0}^{m-1}{\prod_{i \in \mathcal{D}_{p,c;t}}{1_A(L_i(s))}}\right)}\\ \nonumber & \qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\left(\E_{s_m \in P_m}{1_A(L_{i^*(c,m)}(s))\prod_{i \in \mathcal{D}_{p,c;m}^*}{1_{c\cdot P_m}(L_i(s))}}\right)\\ \nonumber &\qquad \qquad = \sum_{j \in \mathcal{D}_{p,c;m}^*}{\E_{s_0 \in P_0,\dots,s_{m} \in P_{m}}{\left(\prod_{t=0}^{m-1}{\prod_{i \in \mathcal{D}_{p,c;t}}{1_A(L_i(s))}}\right)}}\\ \nonumber &\qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad \times \left(1_A(L_{i^*(c,m)}(s))\prod_{i \in \mathcal{D}_{p,c;m}^*}{f_{j,i}(L_i(s))}\right). \end{align} On the other hand for all $i \in \mathcal{D}_{p,c;m}$ and $s \in P_0 \times \cdots \times P_{m-1} \times \Int_{\delta'}(P_m)$ we have from the claim (with $x=0$) that $L_i(s) \in c\cdot P_m$. 
Hence for all $s_0 \in P_0,\dots,s_{m-1} \in P_{m-1}$ we have \begin{align*} & \E_{s_m \in P_m}{1_A(L_{i^*(c,m)}(s))\prod_{i \in \mathcal{D}_{p,c;m}^*}{1_{c\cdot P_m}(L_i(s))}}\\ &\qquad \qquad \geq \E_{s_m \in P_m}{1_A(cs_m)1_{c\cdot \Int_{\delta'}(P_m)}(cs_m)\prod_{i \in \mathcal{D}_{p,c;m}^*}{1_{c\cdot P_m}(L_i(s))}}\\ & \qquad \qquad = \E_{s_m \in P_m}{1_A(cs_m)1_{c\cdot \Int_{\delta'}(P_m)}(cs_m)}\geq \alpha-O_{m,p,c}(\delta). \end{align*} On the other hand \begin{equation*} \E_{s_m \in P_m}{1_A(L_{i^*(c,m)}(s))\prod_{i \in \mathcal{D}_{p,c;m}^*}{1_{c\cdot P_m}(L_i(s))}}\leq \E_{s_m \in P_m}{1_A(L_{i^*(c,m)}(s))} = \alpha, \end{equation*} and so \begin{equation*} \E_{s_m \in P_m}{1_A(L_{i^*(c,m)}(s))\prod_{i \in \mathcal{D}_{p,c;m}^*}{1_{c\cdot P_m}(L_i(s))}}= \alpha+O_{m,p,c}(\delta). \end{equation*} Moreover, \begin{equation*} Q_{m-1,p,c}(A;P_0,\dots,P_{m-1})=\E_{s_0 \in P_0,\dots,s_{m-1} \in P_{m-1}}{\left(\prod_{t=0}^{m-1}{\prod_{i \in \mathcal{D}_{p,c;t}}{1_A(L_i(s))}}\right)} \end{equation*} since the sets $\mathcal{D}_{p,c;t}$ are disjoint over $0 \leq t <m$. We conclude that the left hand side of (\ref{eqn.i}) is equal to \begin{equation*} Q_{m,p,c}(A;P_0,\dots,P_m) - \alpha^{|\mathcal{D}_{p,c;m}|}Q_{m-1,p,c}(A;P_0,\dots,P_{m-1})+O_{m,p,c}(\delta). \end{equation*} To estimate the right hand side of (\ref{eqn.i}) first note (by (\ref{eqn.hyp2}) and Lemma \ref{lem.triv}) that for $j \in \mathcal{D}_{p,c;m}^*$ the summand equals \begin{align} \label{eqn.innerexpect}&\E_{s_0 \in P_0,\dots,s_{m} \in P_{m}}{\E_{s_0',\dots,s_m' \in d''\cdot [N'']}{\left(\prod_{t=0}^{m-1}{\prod_{i \in \mathcal{D}_{p,c;t}}{1_A(L_i(s+s'))}}\right)}}\\ \nonumber &\qquad \qquad\qquad \qquad\qquad \qquad\qquad \times \left(1_A(L_{i^*(c,m)}(s+s'))\prod_{i \in \mathcal{D}_{p,c;m}^*}{f_{j,i}(L_i(s+s'))}\right)\\ \nonumber & \qquad \qquad\qquad \qquad\qquad \qquad\qquad\qquad \qquad\qquad+ O_{m}(\delta).
\end{align} We shall look at the inner expectation of the first term here for which it will be useful to introduce some more notation. We shall use $\lambda$ for dilation in the way defined in \S\ref{ssec.tool}, and then for $s \in P_0 \times \cdots \times P_m$ and $y \in \Z^{m+1}$ put \begin{equation}\label{eqn.gdef} g_i^{(s)}(y)=\begin{cases} \tau_{L_i(s)}(1_A) \circ \lambda_{d''}(y) & \text{ if }i \in \mathcal{U}_{m-1,p,c} \text{ or }i=i^*(c,m)\\ \tau_{L_i(s)}(f_{j,i})\circ \lambda_{d''}(y) & \text{ if } i \in \mathcal{D}_{p,c;m}^*. \end{cases} \end{equation} With this notation, the inner expectation in (\ref{eqn.innerexpect}) equals \begin{align*} & \E_{y_0,\dots,y_m \in [N'']}{\prod_{i \in \mathcal{U}_{m,p,c}}{g_i^{(s)}(L_i(y))}}\\ & \qquad = \frac{1}{(N'')^{m+1}} \cdot \sum_{y \in \Z^{m+1}}{\left(\prod_{t=0}^m{1_{[N'']}(y_t)}\right)\left(\prod_{t=0}^m{\prod_{i \in \mathcal{D}_{p,c;t}^*}{g_i^{(s)}(L_i(y))}}\right)\left(\prod_{t=0}^m{g_{i^*(c,t)}^{(s)}(L_{i^*(c,t)}(y))}\right)}\\ & \qquad = \frac{1}{(N'')^{m+1}} \cdot \sum_{y \in \Z^{m+1}}{\left(\prod_{t=0}^m{\prod_{i \in \mathcal{D}_{p,c;t}^*}{g_i^{(s)}(L_i(y))}}\right)\left(\prod_{t=0}^m{g_{i^*(c,t)}^{(s)}|_{c\cdot [N'']}(L_{i^*(c,t)}(y))}\right)}, \end{align*} since $L_{i^*(c,t)}(y)=cy_t$ for all $0 \leq t \leq m$. The notation is potentially a little confusing here: $g_{i^*(c,t)}^{(s)}|_{c\cdot [N'']}$ denotes the function $g_{i^*(c,t)}^{(s)}$ restricted to the set $c\cdot [N'']$. For $x \in [N]$ write \begin{equation}\label{eqn.hdef} h_i^{(s)}(x+N\Z)=\begin{cases} g_i^{(s)}(x) & \text{ if }i \in \bigcup_{t=0}^m{\mathcal{D}_{p,c;t}^*} \\ g_{i}^{(s)}|_{c \cdot [N'']}(x) & \text{ if }i \in \{i^*(c,t):0 \leq t \leq m\} \end{cases}. \end{equation} In view of this definition, for $x \in [N]^{m+1}$, the product \begin{equation*} \prod_{i \in \mathcal{U}_{m,p,c}}{h_i^{(s)}(L_i(x)+N\Z)} \end{equation*} is non-zero only if $x \in ((c\cdot [N'']+N\Z)^{m+1})\cap ([N]^{m+1})$. 
This set equals $(c\cdot [N''])^{m+1}$ since $N >cN''$. Now, if $x \in (c\cdot [N''])^{m+1}$ then $L_i(x) \in [N]$ since $N > (mp+c)N''$ and so $h_i^{(s)}(x+N\Z)=g_i^{(s)}(x)$. It follows that \begin{equation}\label{eqn.newrhs} \sum_{x \in [N]^{m+1}}{\prod_{i \in \mathcal{U}_{m,p,c}}{h_i^{(s)}(L_i(x)+N\Z)}} = (N'')^{m+1}\cdot \E_{x_0,\dots,x_m \in [N'']}{\prod_{i \in \mathcal{U}_{m,p,c}}{g_i^{(s)}(L_i(x))}}. \end{equation} Apply (\ref{eqn.cnt}) to the above and conclude that the right hand side of (\ref{eqn.i}) is at most \begin{align*} &\sum_{j \in \mathcal{D}_{p,c;m}^*}{\E_{s_0 \in P_0,\dots,s_{m} \in P_{m}}{\left(\frac{N}{N''}\right)^{m+1}\left\|h_j^{(s)}\right\|_{U^k(\Z/N\Z)}}}+ O_{m,p,c}(\delta). \end{align*} Let $\eta:=(4\sqrt{\epsilon})^{\frac{1}{F_1(k)}} + \sqrt{\epsilon} + \delta'$ (where $F_1$ is as in Theorem \ref{thm.gi}) and suppose \begin{equation}\label{eqn.earear} \sum_{j \in \mathcal{D}_{p,c;m}^*}{\E_{s_0 \in P_0,\dots,s_m \in P_m}{\left\|h_j^{(s)}\right\|_{U^k(\Z/N\Z)}}} < \eta |\mathcal{D}_{p,c;m}^*|. \end{equation} Then it follows that \begin{equation*} \left|Q_{m,p,c}(A;P_0,\dots,P_m) - \alpha^{|\mathcal{D}_{p,c;m}|}Q_{m-1,p,c}(A;P_0,\dots,P_{m-1})\right| = O_{m,p,c}(\delta) + O_{m,p,c}(\eta), \end{equation*} and we will find ourselves in case (\ref{pt.count}) in view of the definition of $\eta$ and choice of $\epsilon$ earlier. On the other hand suppose that (\ref{eqn.earear}) does not hold, so that by averaging there is some $j \in \mathcal{D}_{p,c;m}^*$ such that \begin{equation}\label{eqn.upit} \E_{s_0 \in P_0,\dots,s_m \in P_m}{\left\|h_j^{(s)}\right\|_{U^k(\Z/N\Z)}} \geq \eta.
\end{equation} Since (\ref{eqn.small}) does not happen, writing \begin{equation*} \mathcal{S}:=\{s \in P_0 \times \cdots \times P_m: s_m \in \Int_{\delta'}(P_m) \text{ and } \left|\E_{x \in d''\cdot [N]}{\tau_{L_j(s)}(1_A - \alpha 1_{c \cdot P_m})(x)}\right|>\sqrt{\epsilon} \} \end{equation*} we have \begin{equation*} \E_{s_0 \in P_0,\dots,s_m \in P_m}{1_{\mathcal{S}}(s)\sqrt{\epsilon}} \leq \epsilon. \end{equation*} By (\ref{eqn.upit}), Lemma \ref{lem.triv}, the value of $\eta$ and the triangle inequality we see that \begin{equation*} \E_{s_0 \in P_0,\dots,s_m \in P_m}{1_{\Int_{\delta'}(P_m)}(s_m)1_{\mathcal{S}^c}(s)\left\|h_j^{(s)}\right\|_{U^k(\Z/N\Z)}} \geq \eta - \sqrt{\epsilon} - \delta' \geq (4\sqrt{\epsilon})^{\frac{1}{F_1(k)}}. \end{equation*} By averaging there is some $s \in (P_0\times \cdots \times P_{m-1}\times \Int_{\delta'}(P_m))\setminus \mathcal{S}$ such that \begin{equation*} \left\|h_j^{(s)}\right\|_{U^k(\Z/ N\Z)}\geq (4\sqrt{\epsilon})^{\frac{1}{F_1(k)}}. \end{equation*} By Theorem \ref{thm.gi} (applicable since $4\sqrt{\epsilon} \leq 2^{-F_1(k)}$ as otherwise we are in case (\ref{pt.ldelta}) of the Lemma since $\epsilon \geq \delta$) there is a partition of $[N]$ into arithmetic progressions $Q_1,\dots,Q_M$ of average size at least $N^{\epsilon^{F_1(k)}}$ such that \begin{equation}\label{eqn.yy} \sum_{l=1}^M{\left|\sum_{x \in Q_l}{\tau_{L_j(s)}(f_{j,j}) \circ \lambda_{d''}(x)}\right|} =\sum_{l=1}^M{\left|\sum_{x \in Q_l}{g_j^{(s)}(x)}\right|} =\sum_{l=1}^M{\left|\sum_{x \in Q_l}{h_j^{(s)}(x+N\Z)}\right|} \geq 4\sqrt{\epsilon}N. 
\end{equation} Of course \begin{align*} \sum_{l=1}^M{\sum_{x \in Q_l}{\tau_{L_j(s)}(f_{j,j})\circ \lambda_{d''}(x)}} & = \sum_{x \in d''\cdot [N]}{\tau_{L_j(s)}(f_{j,j})(x)}\\ & = \sum_{x \in d''\cdot [N]}{\tau_{L_j(s)}(1_A - \alpha 1_{c \cdot P_m})(x)}, \end{align*} and so (since $s \not \in \mathcal{S}$) \begin{equation*} \left|\sum_{l=1}^M{\sum_{x \in Q_l}{\tau_{L_j(s)}(f_{j,j})\circ \lambda_{d''}(x)}}\right| \leq \sqrt{\epsilon }N. \end{equation*} Moreover, since the average size of $Q_l$ is at least $N^{\epsilon^{F_1(k)}}$, we have \begin{equation*} \sum_{\substack{1 \leq l \leq M\\|Q_l| \leq N^{\epsilon^{F_1(k)/2}}}}{\left|\sum_{x \in Q_l}{\tau_{L_j(s)}(f_{j,j}) \circ \lambda_{d''}(x)}\right|} \leq N^{\epsilon^{F_1(k)/2}}M \leq N^{-\epsilon^{F_1(k)/2}}N \leq \sqrt{\epsilon }N, \end{equation*} since we may assume $N^{-\epsilon^{F_1(k)/2}}\leq \sqrt{\epsilon}$ (or else we are in case (\ref{pt.smallN}) of the Lemma since $\epsilon \geq \delta$ and $N \geq N''=|P''|$). We may assume all the $Q_l$s are of odd size by removing at most one point from each at a cost of at most $M$ in (\ref{eqn.yy}). (Again $M\leq \sqrt{\epsilon}N$ or else we are in case (\ref{pt.smallN}) of the Lemma.) It follows by the triangle inequality and averaging that there is some $1 \leq l \leq M$ with \begin{equation*} \sum_{x \in Q_l}{\tau_{L_j(s)}(f_{j,j}) \circ \lambda_{d''}(x)} \geq \frac{1}{2}\sqrt{\epsilon }|Q_l| \text{ and } |Q_l| > N^{\epsilon^{F_1(k)}/2}. \end{equation*} Rewriting the first expression we get that \begin{align*} \E_{x \in d''\cdot Q_l}{1_A(L_j(s) + x)} & \geq \alpha \E_{x \in d''\cdot Q_l}{1_{c\cdot P_m}(L_j(s)+x)} + \frac{1}{2}\sqrt{\epsilon} = \alpha + \frac{1}{2}\sqrt{\epsilon} \end{align*} by the claim. First, $\frac{1}{2}\sqrt{\epsilon }\geq 2\delta$ (or else we are in case (\ref{pt.ldelta}) of the Lemma in view of the fact that $\epsilon \geq \delta$). 
Secondly, as noted after (\ref{eqn.cd}), $L_j(s)-d''\cdot \{-N,\dots,N\} \subset c\cdot P_m \subset \N$, $c$ divides $L_j(s)$ and $c$ divides $d''$, and so \begin{equation*} \E_{x \in c^{-1}L_j(s)-(c^{-1}d'')\cdot Q_l}{1_A( x)} \geq \alpha + \delta, \end{equation*} and putting $P''':=c^{-1}L_j(s)-(c^{-1}d'')\cdot Q_l \subset \N$ we are in case (\ref{pt.harder}) and hence are done. (Indeed, $|P'''|=|Q_l| \geq N^{\epsilon^{F_1(k)}/2} \geq N^{\delta^{F_1(k)}/2}$, and $|Q_l|$ is odd. As before \begin{equation*} I_1(c\cdot P''') \subset I_1(d''\cdot [N])\subset I_{O_{m,p,c}(1)}(P'') \subset I_{O_{m,p,c}(\delta^2)}(c\cdot P_m) \end{equation*} by Lemma \ref{lem.triv}. Again by Lemma \ref{lem.triv} and the discussion in the section on big-$O$ notation there is a monotone function $F':\N^3\rightarrow \N$ such that $I_1(c\cdot P''') \subset I_{F'(m,p,c)\delta^2}(c\cdot P_m)$. And, again, we shall take $F_2 \geq F'$.) The lemma is proved. \end{proof} Our main result is the following theorem. \begin{theorem}\label{thm.key} Suppose that $m,p,c,r \in \N$, $\delta \in (0,1]$; $P \subset \N$ is an arithmetic progression of odd length $N$; and $\mathcal{C}$ is an $r$-colouring of $I_1(P)\cap \N$. Then at least one of the following holds. \begin{enumerate} \item \label{cs.n3b} $N \leq \exp(\exp(\delta^{-O_{m,p,c}(1)}))$; \item \label{cs.d2b} $\delta \geq (2r)^{-O_{m,p,c}(1)}$; \item \label{cs.conc} there are progressions $P_0,\dots, P_m \subset I_\delta(c\cdot P)\cap \N$ of odd length with \begin{equation*} P_i \subset I_1(P_{i+1}) \text{ for all }0 \leq i \leq m-1 \text{ and } |P_0| \geq N^{\exp(-\delta^{-O_{m,p,c}(1)})}, \end{equation*} and some $C \in \mathcal{C}$ such that \begin{equation*} Q_{m,p,c}(C;P_0,\dots,P_m)\geq (2r)^{-O_{m,p,c}(1)}. \end{equation*} \end{enumerate} \end{theorem} We shall proceed by a double induction. The outer induction will be on $m$ and the inner is a density increment argument.
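The reason the inner argument terminates is the observation already made in the example section: each successful increment step raises, by at least a fixed fraction of $\delta$, a potential (the sum over colour classes of their maximal densities) that is bounded above by $r$. A toy Python model of this bookkeeping (ours alone; it plays no role in the proof) makes the step count explicit.

```python
# Toy bookkeeping (our illustration) for a density increment iteration:
# a potential bounded above by r rises by at least delta/2 per step,
# so there can be at most ceil(2*r/delta) steps.
from math import ceil

def increment_steps(r, delta, start=0.0):
    """Count steps until the potential can no longer rise by delta/2."""
    potential, steps = start, 0
    while potential + delta / 2 <= r:
        potential += delta / 2
        steps += 1
    return steps

assert increment_steps(3, 0.5) == 12          # 3 / 0.25 = 12 steps
assert increment_steps(2, 0.1) <= ceil(2 * 2 / 0.1)
```

In the actual argument each step also shrinks the ambient progression, which is why the bounds in the theorems degrade with $r$ and $\delta$ rather than just counting steps.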
\begin{lemma}[Iteration Lemma]\label{lem.iil} Suppose that Theorem \ref{thm.key} holds for some $m \in \N_0$, \emph{i.e.} \begin{center}\fbox{\begin{minipage}{35em}There are monotone functions $F^{(m)}:\N^2\times (0,1] \rightarrow \N$, $\eta^{(m)}_0:\N^3\rightarrow (0,1]$ and $\eta^{(m)}_1:\N^2 \times (0,1] \rightarrow (0,1]$ such that the following holds. For any $p,c,r \in \N$, $\delta \in (0,1]$, $P\subset \N$ an arithmetic progression of odd length $N$, and $r$-colouring $\mathcal{C}$ of $I_1(P)\cap \N$ at least one of the following holds. \begin{enumerate} \item \label{cs.n3} $N \leq F^{(m)}(p,c,\delta)$; \item \label{cs.d2} $\delta \geq \eta_0^{(m)}(p,c,r)$; \item there are progressions $P_0,\dots, P_m \subset I_\delta(c\cdot P)\cap \N$ of odd length with \begin{equation*} P_i \subset I_1(P_{i+1}) \text{ for all }0 \leq i \leq m-1 \text{ and } |P_0| \geq N^{\eta_1^{(m)}(p,c,\delta)}, \end{equation*} and some $C \in \mathcal{C}$ such that \begin{equation*} Q_{m,p,c}(C;P_0,\dots,P_m)\geq \eta_0^{(m)}(p,c,r). \end{equation*} \end{enumerate}\end{minipage}}\end{center} Then for any $p,c,r \in \N$, $\delta\in (0,1]$, $P\subset \N$ an arithmetic progression of odd length $N$, and $r$-colouring $\mathcal{C}$ of $I_1(P)\cap \N$ at least one of the following holds.
\begin{enumerate} \item \label{cs.n4} \begin{equation*} N \leq \min\left\{F^{(m)}(p,c,\delta),\exp(\delta^{-O_{m,p,c}(1)}\eta_1^{(m)}(p,c,\delta)^{-1})\right\}; \end{equation*} \item \label{cs.d5} \begin{equation*} \delta \geq\left(\frac{\eta_0^{(m)}(p,c,r)}{2r}\right)^{O_{m,p,c}(1)}; \end{equation*} \item \label{cs.inc} or there is a progression $P'''$ of odd length with \begin{equation*} I_1(P''') \subset I_1(P) \text{ and } |P'''| \geq N^{\delta^{O_{m,p,c}(1)} \eta_1^{(m)}(p,c,\delta)} \end{equation*} such that \begin{equation*} \sum_{C \in \mathcal{C}}{\max_{y: y+ P''' \subset \N}{\E_{x \in c\cdot (y+P''')}{1_C(x)}}} \geq \sum_{C \in \mathcal{C}}{\max_{y: y+ P \subset \N}{\E_{x \in c\cdot (y+P)}{1_C(x)}}}+ \frac{1}{2}\delta; \end{equation*} \item \label{cs.ct} there are progressions $P_0,\dots, P_{m+1} \subset I_\delta(c\cdot P)\cap \N$ of odd length with \begin{equation*} P_i \subset I_1(P_{i+1}) \text{ for all }0 \leq i \leq m \text{ and } |P_0| \geq N^{\eta_1^{(m)}(p,c,\delta)}, \end{equation*} and some $C \in \mathcal{C}$ such that \begin{equation*} Q_{m+1,p,c}(C;P_0,\dots,P_{m+1})\geq \left(\frac{\eta_0^{(m)}(p,c,r)}{2}\right)^{O_{m,p,c}(1)}. \end{equation*} \end{enumerate} \end{lemma} \begin{proof} Apply the content of the box to $P$ to get that either we are in case (\ref{cs.n3}) or (\ref{cs.d2}) of the hypothesis and so in case (\ref{cs.n4}) or (\ref{cs.d5}) respectively of the present lemma, or else there are progressions $P_0,\dots,P_m \subset I_\delta(c\cdot P)\cap \N$ with $P_i \subset I_1(P_{i+1})$ for all $0 \leq i \leq m-1$, and some $C \in \mathcal{C}$ with \begin{equation*} Q_{m,p,c}(C;P_0,\dots,P_m) \geq \eta_0^{(m)}(p,c,r) \text{ and }|P_0| \geq N^{\eta_1^{(m)}(p,c,\delta)}. \end{equation*} Let $y_0 \in \Z$ be such that $y_0+P \subset \N$ and \begin{equation}\label{eqn.max} \E_{x \in c\cdot (y_0+P)}{1_C(x)}=\max_{y: y+ P \subset \N}{\E_{x \in c\cdot (y+P)}{1_C(x)}}, \end{equation} and let $P_{m+1}=y_0+P$.
Then $P_0,\dots,P_{m+1}$ are arithmetic progressions of odd length. Furthermore, by Lemma \ref{lem.triv} (and assuming we are not in case (\ref{cs.d5})) we have \begin{equation*} P_i \subset I_\delta(c\cdot P_m) \subset I_\delta(c\cdot I_\delta(c\cdot P)) \subset I_\delta(c\cdot P_{m+1}) \text{ for all } 0 \leq i \leq m-1, \end{equation*} and \begin{equation*} P_m \subset I_\delta(c\cdot P) = I_\delta(c\cdot P_{m+1}). \end{equation*} It follows that we can apply Lemma \ref{lem.count} with parameters $m+1, p, c \in \N$ and $\delta\in (0,1]$, set $C$, and odd-length arithmetic progressions $P_0,\dots,P_{m+1} \subset \Z$, and \begin{equation*} P'':=I_\delta(P_0) \subset I_\delta(P_i) \text{ for all }0 \leq i \leq m+1. \end{equation*} We have four cases. \begin{enumerate} \item \emph{(Case ({\ref{pt.smallN}}))} Then \begin{equation*} \delta N^{\eta_1^{(m)}(p,c,\delta)} \leq |I_\delta(P_0)| = |P''| \leq \exp(\delta^{-F_2(m+1,p,c)}), \end{equation*} and we are in case (\ref{cs.n4}) of this lemma. \item \emph{(Case ({\ref{pt.ldelta}}))} Then \begin{equation*} \delta\geq \eta_0(m+1,p,c), \end{equation*} and we are in case (\ref{cs.d5}) of the lemma. \item \emph{(Case ({\ref{pt.harder}}))} Then there is an arithmetic progression $P''' \subset \N$ of odd length with $I_1(c\cdot P''') \subset I_{F_2(m+1,p,c)\delta^2}(c\cdot P)$ such that \begin{equation*} |P'''| \geq N^{\eta_1^{(m)}(p,c,\delta)\delta^{O_{m,p,c}(1)}} \text{ and } \E_{x \in c\cdot P'''}{1_C(x)} \geq \E_{x \in c\cdot P_{m+1}}{1_C(x)} + \delta. \end{equation*} (Note that either we are in case (\ref{cs.d5}) of the lemma or else $I_1(P''') \subset I_1(P)$.) In view of the choice of $y_0$ (\ref{eqn.max}) and the definition of $P_{m+1}$, the second inequality tells us that \begin{equation*} \max_{w:w+P''' \subset \N}{\E_{z \in c\cdot (w+P''')}{1_{C}(z)}}\geq \E_{x \in c\cdot P'''}{1_C(x)} \geq \max_{y:y+P\subset \N}{\E_{x \in c\cdot (y+P)}{1_C(x)}} + \delta.
\end{equation*} For (the other) $C' \in \mathcal{C}$ and $y \in \Z$ such that $y+P \subset \N$ we have, by Lemma \ref{lem.triv}, that \begin{align*} \max_{w:w+P''' \subset \N}{\E_{z \in c\cdot (w+P''')}{1_{C'}(z)}}& \geq \E_{x \in c\cdot P}{\E_{z \in c\cdot (x+y+P''')}{1_{C'}(z)}}\\ & =\E_{u \in c\cdot (y+P)}{\E_{z \in c\cdot P'''}{\tau_z(1_{C'})(u)}}\\ &\geq \E_{x \in c\cdot (y+P)}{1_{C'}(x)} - F_2(m+1,p,c)\delta^2. \end{align*} Taking the maximum over $y$ such that $y+P \subset \N$ and summing, it follows that \begin{align*} & \sum_{C' \in \mathcal{C}}{ \max_{w:w+P''' \subset \N}{\E_{z \in c\cdot (w+P''')}{1_{C'}(z)}}}\\ &\qquad \qquad \geq\sum_{C' \in \mathcal{C}}{ \max_{w:w+P \subset \N}{\E_{z \in c\cdot (w+P)}{1_{C'}(z)}}} +\delta - (r-1)F_2(m+1,p,c)\delta^2. \end{align*} So we are either in case (\ref{cs.d5}) of the lemma or (\ref{cs.inc}) of the lemma. \item \emph{(Case ({\ref{pt.count}}))} Then \begin{align} \label{eqn.ss}& \left|Q_{m+1,p,c}(C;P_0,\dots,P_{m+1})\right.\\ \nonumber & \qquad \qquad \left. - \alpha^{|\mathcal{D}_{m+1,p,c}|}Q_{m,p,c}(C;P_0,\dots,P_{m})\right|\leq F_2(m+1,p,c)\delta^{\eta_0(m+1,p,c)}, \end{align} where $\alpha:=\E_{x \in c\cdot P_{m+1}}{1_C(x)}$. First, suppose that \begin{equation}\label{eqn.falssup} \max_{y:y+P_m \subset \N}{\E_{x \in c\cdot (y+P_m)}{1_C(x)}}>\alpha + ((r-1)F_2(m+1,p,c)+1)\delta, \end{equation} and so by (\ref{eqn.max}) and the definition of $P_{m+1}$ we have \begin{align*} \max_{y:y+P_m \subset \N}{\E_{x \in c\cdot (y+P_m)}{1_C(x)}}&>\max_{y:y+P \subset \N}{\E_{x \in y+P}{1_C(x)}}\\ &\qquad \qquad+((r-1)F_2(m+1,p,c)+1)\delta.
\end{align*} For the other $C' \in \mathcal{C}$, we use that $P_m \subset I_\delta(c\cdot P_{m+1})=I_\delta(c\cdot P)$, and Lemma \ref{lem.triv} to give that for any $y+P \subset \N$ we have \begin{align*} \max_{w:w+P_m \subset \N}{\E_{z \in c\cdot (w+P_m)}{1_{C'}(z)}}& \geq \E_{x \in c\cdot (y+P)}{\E_{z \in c\cdot (x+y+P_m)}{1_{C'}(z)}}\\ & =\E_{u \in c\cdot (y+P)}{\E_{z \in c\cdot P_m}{\tau_z(1_{C'})(u)}}\\ &\geq \E_{x \in c\cdot (y+P)}{1_{C'}(x)} - F_2(m+1,p,c)\delta. \end{align*} Taking the maximum over $y$ such that $y+P \subset \N$ and summing, it follows that \begin{align*} & \sum_{C' \in \mathcal{C}}{ \max_{w:w+P_m \subset \N}{\E_{z \in c\cdot (w+P_m)}{1_{C'}(z)}}}\\ &\qquad \qquad \geq\sum_{C' \in \mathcal{C}}{ \max_{w:w+P \subset \N}{\E_{z \in c\cdot (w+P)}{1_{C'}(z)}}} +\delta. \end{align*} We are in case (\ref{cs.inc}). (Note also that either we are in case (\ref{cs.d5}) of the lemma or else $I_1(P_m) \subset I_1(P)$.) We conclude that (\ref{eqn.falssup}) does not hold and so \begin{align*} & \alpha+ ((r-1)F_2(m+1,p,c)+1)\delta\\ &\qquad \qquad \geq \max_{y:y+P_m \subset \N}{\E_{x \in c\cdot (y+P_m)}{1_C(x)}}\\ & \qquad \qquad \geq \E_{x \in c\cdot P_m}{1_C(x)} \geq Q_{m,p,c}(C;P_0,\dots,P_m) \geq \eta_0^{(m)}(p,c,r). \end{align*} Either \begin{equation*} \delta \geq \frac{\eta_0^{(m)}(p,c,r)}{2((r-1)F_2(m+1,p,c)+1)}, \end{equation*} and we are in case (\ref{cs.d5}) of the lemma; or from (\ref{eqn.ss}) we have \begin{equation*} F_2(m+1,p,c)\delta^{\eta_0(m+1,p,c)} \geq \frac{1}{2}\left(\frac{1}{2}\eta_0^{(m)}(p,c,r)\right)^{|\mathcal{D}_{m+1,p,c}|+1} \end{equation*} and we are in case (\ref{cs.d5}) of the lemma; or we are in case (\ref{cs.ct}) of the lemma. \end{enumerate} The lemma is proved. \end{proof} \begin{proof}[Proof of Theorem \ref{thm.key}] We proceed by induction on $m$ to show that the content of the box in Lemma \ref{lem.iil} holds. This gives the theorem. The result holds for $m=0$ and it is convenient to use that as the base case.
To see this, take $P_0:=I_\delta(c\cdot P)\cap \N$. If $\delta N <2$, then we are in case (\ref{cs.n3}) of the box. If not, then $|P_0| =\Omega(\delta N)$, and again we are either in case (\ref{cs.n3}), or $|P_0|$ satisfies the required lower bound. Finally, we are in case (\ref{cs.d2}), or else $I_\delta(c^2\cdot P) \subset I_1(P)$ and \begin{equation*} \sum_{C \in \mathcal{C}}{Q_{0,p,c}(C;P_0)} = \sum_{C \in \mathcal{C}}{\E_{s_0 \in P_0}{1_C(cs_0)}} \geq \E_{s_0 \in P_0}{1_{I_1(P)\cap \N}(cs_0)} =1. \end{equation*} The result follows by averaging. Now, suppose we have proved that the content of the box holds for some $m$. We proceed iteratively, defining progressions $P^{(0)},P^{(1)},\dots$ with $I_{1}( P^{(j+1)}) \subset I_1(P^{(j)})$ for all $j \geq 0$. Begin with $P^{(0)}:=I_1(P)\cap \N$ and define \begin{equation*} \mu_j:=\sum_{C \in \mathcal{C}}\max_{y:y+P^{(j)} \subset \N}{\E_{z \in y+P^{(j)}}{1_C(z)}}. \end{equation*} By hypothesis we have $\mu_0 \geq 1$, and we also have $\mu_j \leq r$ for all $j$. At stage $j \in \N_0$ we apply Lemma \ref{lem.iil} to $P^{(j)}$ and, unless we are in case (\ref{cs.inc}), we terminate. If we are in case (\ref{cs.inc}) then we let $P^{(j+1)}$ be the progression given, which has \begin{equation*} I_1(P^{(j+1)}) \subset I_1(P^{(j)}), |P^{(j+1)}| \geq |P^{(j)}|^{\delta^{O_{m,p,c}(1)}\eta_1^{(m)}(p,c,\delta)} \text{ and }\mu_{j+1} \geq \mu_j + \frac{1}{2}\delta. \end{equation*} In view of the last fact, this iteration can proceed for at most $2\delta^{-1}$ steps before terminating.
When it terminates we have either \begin{equation*} N^{\delta^{O_{m,p,c}(\delta^{-1})}(\eta_1^{(m)}(p,c,\delta))^{2\delta^{-1}}} \leq \min \left\{F^{(m)}(p,c,\delta),\exp(\delta^{-O_{m,p,c}(1)}\eta_1^{(m)}(p,c,\delta)^{-1})\right\}; \end{equation*} or \begin{equation*} \delta \geq \left(\frac{\eta_0^{(m)}(p,c,r)}{2r}\right)^{O_{m,p,c}(1)}; \end{equation*} or there is some $C \in \mathcal{C}$ such that \begin{equation*} Q_{m+1,p,c}(C;P_0,\dots,P_{m+1}) \geq \left(\frac{\eta_0^{(m)}(p,c,r)}{2}\right)^{O_{m,p,c}(1)} \text{ and } |P_0| \geq N^{\delta^{O_{m,p,c}(\delta^{-1})}(\eta_1^{(m)}(p,c,\delta))^{2\delta^{-1}}}. \end{equation*} It follows that we can take \begin{equation*} F^{(m+1)}(p,c,\delta) \leq (2F^{(m)}(p,c,\delta))^{\eta_1^{(m)}(p,c,\delta)^{-O_{m,p,c}(\delta^{-1})}}; \end{equation*} \begin{equation*} \eta_1^{(m+1)}(p,c,\delta) \geq \left(\frac{\eta_1^{(m)}(p,c,\delta)}{2}\right)^{O_{m,p,c}(\delta^{-1})}; \end{equation*} and \begin{equation*} \eta_0^{(m+1)}(p,c,r) \geq \left(\frac{\eta_0^{(m)}(p,c,r)}{2r}\right)^{O_{m,p,c}(1)}. \end{equation*} These recursions give the claimed bounds. \end{proof} \begin{proof}[Proof of Theorem \ref{thm.main}] We apply Theorem \ref{thm.key} with $P=[N]$ (or $P=[N-1]$ if $N$ is even) and with $\frac{1}{2c} \geq \delta = r^{-O_{m,p,c}(1)}$ chosen so that case (\ref{cs.d2b}) never holds. If the colouring contains no $(m,p,c)$-set then $Q_{m,p,c}(C;P_0,\dots,P_m)=0$ for any $P_0,\dots,P_m$ of the form described in case (\ref{cs.conc}), and so that case does not happen. We conclude that $N$ is bounded in a way that yields the result. \end{proof} As a final remark, although we have made no effort to track the $m$, $p$, and $c$ dependencies, they should not be too bad given the known bounds in Theorem \ref{thm.gvn} and Theorem \ref{thm.gi}. \end{document}
\begin{document} \centerline{\Large \bf Performance of Equal Phase-Shift} \centerline{\Large \bf Search for One Iteration} \footnote{ This work was supported by NSFC (Grants No. 60433050 and 60673034) and the basic research fund of Tsinghua University, No. JC2003043.} \centerline{Dafa Li$^{a}$\footnote{email address: [email protected]}, Jianping Chen$^a$, Xiangrong Li$^{b}$, Hongtao Huang$^{c}$, Xinxin Li$^{d}$ } \centerline{$^a$ Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China} \centerline{$^b$ Department of Mathematics, University of California, Irvine, CA 92697-3875, USA} \centerline{$^c$ Electrical Engineering and Computer Science Department,} \centerline{ University of Michigan, Ann Arbor, MI 48109, USA} \centerline{$^d$ Department of Computer Science, Wayne State University, Detroit, MI 48202, USA}

Abstract

Grover presented the phase-shift search, in which the selective inversions are replaced by selective phase shifts of $\pi /3$. In this paper, we investigate the phase-shift search with general equal phase shifts. We show that for small uncertainties the failure probability of the Phase-$\pi /3$ search is smaller than that of the general phase-shift search, while for large uncertainties the success probability of a large phase-shift search is larger than that of the Phase-$\pi /3$ search. Therefore, the large phase-shift search is suitable for databases of large size.

PACS number: 03.67.Lx

Keywords: Amplitude amplification, phase-shift search, quantum computing.

\section{Introduction}

Grover's quantum search algorithm is used to find a target state in an unsorted database of size $N$ \cite{Grover98}\cite{Grover05}. It can be considered as a rotation of the state vectors in the two-dimensional Hilbert space generated by the initial ($s$) and target ($t$) vectors \cite{Grover98}. The amplitude of the desired state increases monotonically towards its maximum and decreases monotonically after reaching the maximum \cite{LDF05}.
As mentioned in \cite{Grover05} \cite{Brassard97}, unless we stop when the state vector is right at the target state, it will drift away. A new search algorithm was presented in \cite{Grover05} to avoid drifting away from the target state. Grover obtained the new algorithm by replacing the selective inversions by selective phase shifts of $\pi /3$; the algorithm converges to the target state irrespective of the number of iterations. In his paper, Grover demonstrated the power of his algorithm by calculating its success probability when only a single query into the database was allowed. It turned out that if the success probability for a random item in the database was $1-\epsilon $, where $\epsilon $ is known to lie randomly somewhere in the range $\left( 0,\epsilon _{0}\right) $, then after a single quantum query into the database Grover's new phase-shift algorithm was able to increase the success probability to $1-\epsilon _{0}^{3}$. This was shown to be superior to existing algorithms and later shown to be optimal \cite{Grover06}\cite{Tulsi}. In \cite{Farhi}\cite{Roland}, adiabatic quantum computation provides an alternative scheme for amplitude amplification that also does not drift away from the solution. In \cite{Tulsi}, an algorithm for obtaining fixed points in iterative quantum transformations was presented, and the average number of oracle queries for the fixed-point search algorithm was discussed. In \cite{Boyer}, Boyer et al.\ described an algorithm that succeeds with probability approaching 1. In \cite{LDF06}, we discussed the phase-shift search algorithm with different phase shifts. As discussed below, the implementation of the general phase-shift search relies on selective phase shifts. In this paper, we investigate the phase-shift search with general but equal phase shifts. We are able to considerably improve the algorithm by varying the phase shift away from $\pi /3$ when $\epsilon $ is large.
As is well known, a smaller deviation makes the algorithm converge to the target state more rapidly. The deviation for the Phase-$\pi /3$ search is $\epsilon ^{3}$ \cite{Grover05}. For databases of large size (where $\epsilon $ is close to 1), we show that the deviation for any phase shift $\theta >\pi /3$ is smaller than $\epsilon ^{3}$, and that the closer the phase shifts are to $\pi $, the smaller the deviation is. In this paper, we study the performance of the general phase-shift search for only one iteration. This also determines the failure probability and success probability of the general phase-shift search after recursively applying the single iteration $n$ times. Note that we neglect the effects of decoherence completely in this paper.

This paper is organized as follows. In section 3, we give the necessary and sufficient conditions for a deviation smaller than $\epsilon ^{3}$. In section 4, we show that the Phase-$\pi /3$ search algorithm performs well for small $\epsilon $. In section 6, we demonstrate that the closer the phase shifts are to $\pi $, the smaller the deviation is. In section 7, we propose the ratio measurement of the behavior of the Phase-$\theta $ search algorithm for one query.

\section{Grover's phase-shift search and the reduction of the deviation}

The standard amplitude amplification algorithm would overshoot the target state. To avoid drifting away from the target state, Grover presented the phase-shift search \cite{Grover05}. In \cite{Grover05} the transformation $UR_{s}^{\pi /3}U^{+}R_{t}^{\pi /3}U$ was applied to the initial state $|s\rangle $, where \begin{eqnarray} R_{s}^{\pi /3} &=&I-[1-e^{i\frac{\pi }{3}}]|s\rangle \langle s|, \nonumber \\ R_{t}^{\pi /3} &=&I-[1-e^{i\frac{\pi }{3}}]|t\rangle \langle t|, \label{grover1} \end{eqnarray} \noindent and $|t\rangle $ stands for the target state. The transformation $UR_{s}^{\pi /3}U^{+}R_{t}^{\pi /3}U$ is referred to as Grover's Phase-$\pi /3$ search algorithm in \cite{Tulsi}.
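The action of $UR_{s}^{\pi /3}U^{+}R_{t}^{\pi /3}U$ on $|s\rangle $ can be checked by direct simulation. The following sketch (using NumPy; the dimension, random seed, and basis choices for $|s\rangle $ and $|t\rangle $ are our own illustrative assumptions, not taken from the paper) applies the transformation to a random unitary $U$ and confirms that the output agrees with the closed form derived below in Eq. (\ref{grover3}).

```python
import numpy as np

# Illustrative sanity check of the Phase-pi/3 transformation U R_s U^+ R_t U.
# N, the seed, and the basis choices for |s>, |t> are arbitrary assumptions.
rng = np.random.default_rng(0)
N = 8
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(M)  # the Q factor of a complex matrix is unitary

s = np.zeros(N, dtype=complex); s[0] = 1.0  # start state |s>
t = np.zeros(N, dtype=complex); t[1] = 1.0  # target state |t>

phase = np.exp(1j * np.pi / 3)
R_s = np.eye(N) - (1 - phase) * np.outer(s, s.conj())  # Eq. (grover1)
R_t = np.eye(N) - (1 - phase) * np.outer(t, t.conj())

out = U @ R_s @ U.conj().T @ R_t @ U @ s
U_ts = t.conj() @ U @ s  # the amplitude <t|U|s>

# Closed form: the output lies in span{U|s>, |t>} with these coefficients.
predicted = (U @ s) * (phase + abs(U_ts) ** 2 * (phase - 1) ** 2) \
            + t * U_ts * (phase - 1)
assert np.allclose(out, predicted)
assert np.isclose(np.linalg.norm(out), 1.0)  # product of unitaries
```

The same check passes for any unitary $U$ and any orthonormal choice of $|s\rangle $ and $|t\rangle $, since the expansion uses only $\langle t|U|s\rangle $.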
We now let $\theta $ denote a general phase shift (Grover's choice was $\theta =\pi /3$). Then \begin{eqnarray} R_{s}^{\theta } &=&I-[1-e^{i\theta }]|s\rangle \langle s|, \nonumber \\ R_{t}^{\theta } &=&I-[1-e^{i\theta }]|t\rangle \langle t|. \label{grover2} \end{eqnarray} \noindent The transformation $UR_{s}^{\theta }U^{+}R_{t}^{\theta }U$ is called the Phase-$\theta $ search algorithm in this paper. As indicated in \cite{Grover05}, when $\theta =\pi $ this becomes one iteration of the amplitude amplification algorithm \cite{Grover98}\cite{Brassard97}. Note that if we apply $U$ to the initial state $|s\rangle $, then the amplitude of reaching the target state $|t\rangle $ is $U_{ts}$ \cite{Grover98}. Applying the transformation $UR_{s}^{\theta }U^{+}R_{t}^{\theta }U$ to the start state $|s\rangle $, Grover derived the following: \begin{equation} UR_{s}^{\theta }U^{+}R_{t}^{\theta }U|s\rangle =U|s\rangle \lbrack e^{i\theta }+\left\vert U_{ts}\right\vert ^{2}(e^{i\theta }-1)^{2}]+|t\rangle U_{ts}(e^{i\theta }-1). \label{grover3} \end{equation} Let $D(\theta )$ be the deviation from the $t$ state for phase shift $\theta $. Then from (\ref{grover3}) the following was obtained in \cite{Grover05}: \begin{equation} D(\theta )=(1-\left\vert U_{ts}\right\vert ^{2})|e^{i\theta }+\left\vert U_{ts}\right\vert ^{2}(e^{i\theta }-1)^{2}|^{2}. \label{groverdev} \end{equation} \noindent Grover chose $\pi /3$ as the phase shift and let $\left\vert U_{ts}\right\vert ^{2}=1-\epsilon $, where $0<\epsilon <1$. Substituting $\left\vert U_{ts}\right\vert ^{2}=1-\epsilon $, the deviation from the $t$ state becomes $D(\pi /3)=\epsilon ^{3}$ \cite{Grover05}. Deviation $D(\theta )$ in (\ref{groverdev}) can be simplified as follows. For any $\theta $, \begin{equation} e^{i\theta }+\left\vert U_{ts}\right\vert ^{2}(e^{i\theta }-1)^{2}=e^{i\theta }+2(\cos \theta -1)e^{i\theta }(1-\epsilon )=e^{i\theta }[1+2(\cos \theta -1)(1-\epsilon )].
\label{reduction} \end{equation} \noindent So by (\ref{reduction}), we obtain \begin{equation} D(\theta )=\epsilon \lbrack 1+2(\cos \theta -1)(1-\epsilon )]^{2}. \label{dev1} \end{equation} In this paper, we study the phase-shift search algorithm with two equal phase shifts. It is clearly enough to consider $\theta $ in $[0,\pi ]$. It can be shown that the maximum and minimum of deviation $D(\theta )$ are $1$ and $0$. That is, \begin{equation} 0\leq D(\theta )\leq 1. \label{maxmin} \end{equation}

\section{The phase shifts for smaller deviation}

As indicated in \cite{Grover98}, in the case of database search $|U_{ts}|$ is almost $1/\sqrt{N}$, where $N$ is the size of the database. Thus, $\epsilon $ is almost $1-1/N$, and $\epsilon $ is close to $1$ for a database of large size. It is known that the deviation for Grover's Phase-$\pi /3$ search is $\epsilon ^{3}$. In this section, we give the phase shifts for which the deviation is smaller than $\epsilon ^{3}$.

\subsection{Necessary and sufficient conditions}

From (\ref{dev1}), let us calculate \begin{eqnarray} D(\theta )-\epsilon ^{3} &=&\epsilon \lbrack 1+2(\cos \theta -1)(1-\epsilon )]^{2}-\epsilon ^{3} \nonumber \\ &=&\epsilon (1-\epsilon )(2\cos \theta -1)[2+(2\cos \theta -3)(1-\epsilon )]. \label{dev2} \end{eqnarray} See Figs. 1 and 4. From (\ref{dev2}), we have the following statement.

Lemma 1. Deviation $D(\theta )$ in (\ref{dev1}) for any phase shift $\theta $ in $[0,\pi /3)$ is greater than $\epsilon ^{3}$ for any $\epsilon $. That is, $D(\theta )>\epsilon ^{3}$ for any $\theta $ in $[0,\pi /3)$ and any $\epsilon $. See table 1.

The argument is as follows. When $0\leq \theta <\pi /3$, we have $0<2\cos \theta -1\leq 1$ and $2\epsilon <2+(2\cos \theta -3)(1-\epsilon )\leq 1+\epsilon $ for any $\epsilon $. Therefore, when $0\leq \theta <\pi /3$, it follows from (\ref{dev2}) that $D(\theta )>\epsilon ^{3}$ for any $\epsilon $.
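The reduction to Eq. (\ref{dev1}), the special case $D(\pi /3)=\epsilon ^{3}$, and Lemma 1 are all easy to spot-check numerically. The following sketch (the function names are ours) compares Eq. (\ref{dev1}) with Grover's original form (\ref{groverdev}) under $\left\vert U_{ts}\right\vert ^{2}=1-\epsilon $.

```python
import cmath
import math

def deviation(theta, eps):
    """D(theta) = eps*[1 + 2(cos theta - 1)(1 - eps)]^2, Eq. (dev1)."""
    return eps * (1 + 2 * (math.cos(theta) - 1) * (1 - eps)) ** 2

def deviation_raw(theta, eps):
    """Grover's form, Eq. (groverdev), with |U_ts|^2 = 1 - eps."""
    z = cmath.exp(1j * theta) + (1 - eps) * (cmath.exp(1j * theta) - 1) ** 2
    return eps * abs(z) ** 2

for eps in (0.1, 0.5, 0.9):
    for theta in (0.0, math.pi / 3, math.pi / 2, math.pi):
        # The algebraic reduction (reduction) agrees with (groverdev).
        assert abs(deviation(theta, eps) - deviation_raw(theta, eps)) < 1e-12
    # Grover's special case: D(pi/3) = eps^3.
    assert abs(deviation(math.pi / 3, eps) - eps ** 3) < 1e-12
    # Lemma 1: for theta in [0, pi/3), D(theta) > eps^3.
    for theta in (0.0, 0.5, 1.0):
        assert deviation(theta, eps) > eps ** 3
```

Each assertion is an instance of the identities (\ref{groverdev})--(\ref{dev2}); nothing beyond them is tested.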
From (\ref{dev2}) and Lemma 1, the following lemma holds immediately. See table 1.

Lemma 2. $D(\theta )<\epsilon ^{3}$ if and only if \begin{eqnarray} \theta >\pi /3\wedge \epsilon >1-2/(3-2\cos \theta ). \label{cond2} \end{eqnarray} \noindent The following remark describes the monotonicity of $1-2/(3-2\cos \theta )$ in (\ref{cond2}). The monotonicity is used below to find deviations smaller than $\epsilon ^{3}$.

Remark 1. $1-2/(3-2\cos \theta )$ increases from $-1$ to $3/5$ as $\theta $ increases from $0$ to $\pi $. Thus, \begin{eqnarray} -1\leq 1-2/(3-2\cos \theta )\leq 3/5. \label{remark1} \end{eqnarray}

Table 1. The phase shifts for deviations

\begin{tabular}{|c|c|c|} \hline $\theta $ & & $\epsilon $ \\ \hline When $\theta >\pi /3$ & $D(\theta )<\epsilon ^{3}$ & for $\epsilon >1-2/(3-2\cos \theta )$ \\ \hline When $\theta <\pi /3$ & $D(\theta )>\epsilon ^{3}$ & for any $\epsilon $ \\ \hline When $\theta >\pi /3$ & $D(\theta )>\epsilon ^{3}$ & for $\epsilon <1-2/(3-2\cos \theta )$ \\ \hline \end{tabular}

\subsection{The phase shifts for smaller deviation}

In this subsection, we give the phase shifts for which the deviations are smaller than $\epsilon ^{3}$.

Corollary 1. Deviation $D(\theta )$ for any phase shift $\theta $ in $(\pi /3,\alpha ]$ is smaller than $\epsilon ^{3}$ whenever $\epsilon >1-2/(3-2\cos \alpha )$.

Proof. By Remark 1, $1-2/(3-2\cos \theta )$ increases from $0$ to $1-2/(3-2\cos \alpha )$ as $\theta $ increases from $\pi /3$ to $\alpha $. Thus, $0<1-2/(3-2\cos \theta )\leq 1-2/(3-2\cos \alpha )$ whenever $\pi /3<\theta \leq \alpha $. Therefore, when $\epsilon >1-2/(3-2\cos \alpha )$, always $\epsilon >1-2/(3-2\cos \theta )$. Hence, this corollary follows from Lemma 2.

When $\alpha =\pi $, $2\pi /3$, $\pi /2$ and $\arccos \frac{1-3\delta }{2(1-\delta )}$, from Corollary 1 we have the following phase shifts for deviations smaller than $\epsilon ^{3}$. See table 2.

Result 1.
For any phase shift $\theta >\pi /3$, deviation $D(\theta )<\epsilon ^{3}$ for $\epsilon >3/5$. See Fig. 2 (a).

Result 2. For any phase shift $\theta $ in $(\pi /3,2\pi /3]$, deviation $D(\theta )<\epsilon ^{3}$ for $\epsilon >1/2$. See Fig. 2 (b).

Result 3. For any phase shift $\theta $ in $(\pi /3,\pi /2]$, deviation $D(\theta )<\epsilon ^{3}$ for $\epsilon >1/3$. See Fig. 2 (c).

Result 4. When $\epsilon >\delta $, for any phase shift $\theta $ in $(\pi /3,\arccos \frac{1-3\delta }{2(1-\delta )}]$, deviation $D(\theta )<\epsilon ^{3}$. Note that $\lim_{\delta \rightarrow 0}\arccos \frac{1-3\delta }{2(1-\delta )}=\pi /3$.

Our conclusion is that when we search a large database, i.e., when $\epsilon $ is large (here $\epsilon >3/5$), for any phase shift $\theta >\pi /3$ the deviation is smaller than $\epsilon ^{3}$.

Table 2. The phase shifts for $D(\theta )<\epsilon ^{3}$

\begin{tabular}{|c|c|c|} \hline $\theta $ & & $\epsilon $ \\ \hline When $\theta >\pi /3$ & $D(\theta )<\epsilon ^{3}$ & for $\epsilon >3/5$ \\ \hline When $\pi /3<\theta \leq 2\pi /3$ & $D(\theta )<\epsilon ^{3}$ & for $\epsilon >1/2$ \\ \hline When $\pi /3<\theta \leq \pi /2$ & $D(\theta )<\epsilon ^{3}$ & for $\epsilon >1/3$ \\ \hline When $\pi /3<\theta \leq \arccos \frac{1-3\delta }{2(1-\delta )}$ & $D(\theta )<\epsilon ^{3}$ & for $\epsilon >\delta $ \\ \hline \end{tabular}

\section{The Phase-$\protect\pi /3$ search is optimal for small uncertainties}

As indicated in \cite{Grover98}, the size of the database is typically very large, i.e., $\epsilon $ is large. However, it is interesting to investigate the performance of the Phase-$\pi /3$ search and the Phase-$\theta $ search for small $\epsilon $.

\subsection{The Phase-$\protect\pi /3$ search possesses a smaller failure probability}

As said in \cite{Grover05}, $\epsilon ^{3}$ and $D(\theta )$ are the failure probabilities of the Phase-$\pi /3$ search and the Phase-$\theta $ search, respectively.
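Both regimes, the threshold $1-2/(3-2\cos \theta )$ of Lemma 2 and Table 2, and the small-$\epsilon $ advantage of the Phase-$\pi /3$ search discussed in this section, can be spot-checked numerically. The sketch below (the function names are ours) does so for a handful of phase shifts.

```python
import math

def deviation(theta, eps):
    """D(theta) = eps*[1 + 2(cos theta - 1)(1 - eps)]^2, Eq. (dev1)."""
    return eps * (1 + 2 * (math.cos(theta) - 1) * (1 - eps)) ** 2

def threshold(theta):
    """The cutoff 1 - 2/(3 - 2 cos theta) of Lemma 2 and Table 1."""
    return 1 - 2 / (3 - 2 * math.cos(theta))

# Small eps: the failure-probability ratio eps^3 / D(theta) shrinks to 0
# for theta != pi/3, i.e. eps^3 = o(D(theta)).
for theta in (0.0, math.pi / 2, 2 * math.pi / 3, math.pi):
    ratios = [eps ** 3 / deviation(theta, eps) for eps in (0.1, 0.01, 0.001)]
    assert ratios[0] > ratios[1] > ratios[2] and ratios[2] < 1e-4

# Lemma 2 / Table 2: D < eps^3 just above the threshold, D > eps^3 below it.
for theta in (math.pi / 2, 2 * math.pi / 3, math.pi):
    th = threshold(theta)
    assert deviation(theta, th + 0.05) < (th + 0.05) ** 3
    assert deviation(theta, th - 0.05) > (th - 0.05) ** 3
```

The three thresholds tested are exactly the Table 2 entries $1/3$, $1/2$ and $3/5$ for $\theta =\pi /2$, $2\pi /3$ and $\pi $.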
Let us consider the ratio of the two failure probabilities. It is easy to see that $\lim_{\epsilon \rightarrow 0}\epsilon ^{3}/D(\theta )=0$ for any $\theta \neq \pi /3$. That is, $\epsilon ^{3}=o(D(\theta ))$. In other words, $\epsilon ^{3}$ is smaller than $D(\theta )$ for small $\epsilon $: $\epsilon ^{3}$ approaches $0$ more rapidly than $D(\theta )$ as $\epsilon $ approaches $0$.

\subsection{The conditions under which the Phase-$\protect\pi /3$ search behaves well}

Here, we discuss which $\epsilon $ satisfy $\epsilon ^{3}<D(\theta )$. From (\ref{dev2}) and Lemma 1, we have the following lemma.

Lemma 3. $D(\theta )>\epsilon ^{3}$ if and only if either $0\leq \theta <\pi /3$, or $\theta >\pi /3$ and $\epsilon <1-2/(3-2\cos \theta )$. See table 1.

The following corollary follows from Lemma 3.

Corollary 2. When $\pi /3<\alpha \leq \theta $ and $\epsilon <1-2/(3-2\cos \alpha )$, $D(\theta )>\epsilon ^{3}$.

The argument is as follows. By Remark 1, $1-2/(3-2\cos \theta )$ increases from $1-2/(3-2\cos \alpha )$ to $3/5$ as $\theta $ increases from $\alpha $ to $\pi $. Thus, when $\pi /3<\alpha \leq \theta $, $1-2/(3-2\cos \alpha )\leq 1-2/(3-2\cos \theta )$. Consequently, this corollary holds by Lemma 3.

From Corollary 2 we have the following results.

Result 5. When $\theta \geq \pi /2$, $D(\theta )>\epsilon ^{3}$ for $\epsilon <1/3$.

Result 6. When $\theta \geq 2\pi /3$, $D(\theta )>\epsilon ^{3}$ for $\epsilon <1/2$.

Result 7. When $\theta \geq \arccos \frac{1-3\delta }{2(1-\delta )}$, $D(\theta )>\epsilon ^{3}$ for $\epsilon <\delta $.

Our conclusion is that for small $\epsilon $, the search algorithm performs optimally for $\theta =\pi /3$. Owing to this performance, the Phase-$\pi /3$ search algorithm can be applied to quantum error correction.

\section{Zero deviation and average zero deviation points}

\subsection{Zero deviation}

Let $d=1+2(\cos \theta -1)(1-\epsilon )$.
Then deviation $D(\theta )$ in (\ref{dev1}) can be rewritten as $D(\theta )=\epsilon d^{2}$. Let $d=0$. Then we obtain $\cos \theta =1-\frac{1}{2(1-\epsilon )}$, where $0<\epsilon \leq \frac{3}{4}$ to make $\left\vert 1-\frac{1}{2(1-\epsilon )}\right\vert \leq 1$. In conclusion, if $U_{ts}$ is given, that is, $\epsilon $ is fixed, then we choose $\theta =\arccos [1-\frac{1}{2(1-\epsilon )}]$, which is in $(\pi /3$, $\pi ]$, as the phase shift. $\arccos [1-\frac{1}{2(1-\epsilon )}]$ obviously makes the deviation vanish and is called a zero deviation point. It means that one iteration will reach the $t$ state with certainty if the zero deviation point is chosen as the phase shift. Note that $\lim_{\epsilon \rightarrow 0}\arccos [1-\frac{1}{2(1-\epsilon )}]=\pi /3$. This says that $\pi /3$ is the limit of the zero deviation points $\theta $, though it is not itself a zero deviation point.

\subsection{Average zero deviation points}

When $0<\epsilon \leq \frac{3}{4}$, $\arccos [1-\frac{1}{2(1-\epsilon )}]$ is a zero deviation point. Since $\epsilon $ is not given, the zero deviation point is unknown. However, if we know the range of $\epsilon $, then by the mean-value theorem for integrals we can find the average value $\bar{\theta}$ of the zero deviation points $\theta $. Here, we assume that $\epsilon $ is uniformly distributed in the interval $(\beta ,\alpha )\subseteq (0,3/4]$. We calculate the average value of $1-\frac{1}{2(1-\epsilon )}$ over the range $(\beta ,\alpha )$ as follows: \begin{eqnarray} \frac{1}{\alpha -\beta }\int_{\beta }^{\alpha }[1-\frac{1}{2(1-\epsilon )}]d\epsilon =1+\frac{1}{2(\alpha -\beta )}\ln \frac{1-\alpha }{1-\beta }. \end{eqnarray} It can be argued that $-1\leq 1+\frac{1}{2(\alpha -\beta )}\ln \frac{1-\alpha }{1-\beta }<\frac{1}{2}$.
Thus, it is reasonable to define \begin{eqnarray} \bar{\theta}=\arccos [1+\frac{1}{2(\alpha -\beta )}\ln \frac{1-\alpha }{1-\beta }]. \label{zeropoint} \end{eqnarray} \noindent $\bar{\theta}$ can be considered as the average value of the zero deviation points $\theta $ and is called the average zero deviation point. It can be seen that $\pi /3<\bar{\theta}\leq \pi $. When $\bar{\theta}$ is chosen as the phase shift, we obtain the following deviation: \begin{eqnarray} D(\bar{\theta})=\epsilon (1+\frac{1-\epsilon }{\alpha -\beta }\ln \frac{1-\alpha }{1-\beta })^{2}. \label{zpdev} \end{eqnarray} Let us compute $D(\bar{\theta})-\epsilon ^{3}$ as follows: \begin{eqnarray} D(\bar{\theta})-\epsilon ^{3}=\epsilon (1+\frac{1}{\alpha -\beta }\ln \frac{1-\alpha }{1-\beta })(1-\epsilon )(1+\frac{1-\epsilon }{\alpha -\beta }\ln \frac{1-\alpha }{1-\beta }+\epsilon ). \end{eqnarray} \noindent Notice that $1+\frac{1}{\alpha -\beta }\ln \frac{1-\alpha }{1-\beta }<0$ and $1-\frac{1}{\alpha -\beta }\ln \frac{1-\alpha }{1-\beta }>0$. Let $\kappa =1-2/(1-\frac{1}{\alpha -\beta }\ln \frac{1-\alpha }{1-\beta })$. It can be proven that $0<\kappa <1$. We conclude that when $\epsilon >\kappa $, $D(\bar{\theta})<\epsilon ^{3}$.

We will find the average zero deviation point $\bar{\theta}$ for the ranges $(0,1/2]$ and $(0,3/4]$ of $\epsilon $, respectively, as follows.

Example 1. Let $\epsilon $ lie in the range $(0,1/2]$. By (\ref{zeropoint}), the average zero deviation point is $\bar{\theta}_{1}=\arccos (1-\ln 2)\approx 72.1^{\circ }$. Taking $\bar{\theta}_{1}$ as the phase shift, by (\ref{zpdev}) deviation $D(\bar{\theta}_{1})=\epsilon \lbrack 1-2(1-\epsilon )\ln 2]^{2}$. Deviation $D(\bar{\theta}_{1})$ for phase shift $\bar{\theta}_{1}$ is smaller than $\epsilon ^{3}$, i.e., $D(\bar{\theta}_{1})<\epsilon ^{3}$, if and only if $\epsilon >\frac{2\ln 2-1}{2\ln 2+1}\approx 0.16$.

Example 2. Let $(0,3/4]$ be the range of $\epsilon $.
Then by (\ref{zeropoint}), the average zero deviation point is $\bar{\theta}_{2}=\arccos (1-\frac{4}{3}\ln 2)\approx 86^{\circ }$. Choosing $\bar{\theta}_{2}$ as the phase shift, by (\ref{zpdev}) deviation $D(\bar{\theta}_{2})=\epsilon \lbrack 1-\frac{8}{3}(1-\epsilon )\ln 2]^{2}$, and $D(\bar{\theta}_{2})$ is smaller than $\epsilon ^{3}$ when $\epsilon >0.30$.

\section{Monotonicity of the deviation for large $\protect\epsilon $}

As discussed above, when $\epsilon $ is fixed and lies in the range $(0,3/4]$, and $\arccos (1-\frac{1}{2(1-\epsilon )})$ is chosen as the phase shift, the deviation vanishes. When $\epsilon >3/4$, since $\left\vert 1-\frac{1}{2(1-\epsilon )}\right\vert >1$, deviation $D(\theta )$ does not vanish for any phase shift $\theta $ in $[0,\pi ]$. When $\epsilon \geq \frac{3}{4}$, \begin{eqnarray} -1\leq 2(\cos \theta -1)(1-\epsilon )\leq 0 \label{cond1} \end{eqnarray} \noindent and $0\leq d\leq 1$. When $U_{ts}$ is given, that is, $\epsilon $ is fixed, by using (\ref{cond1}) it can be shown that deviation $D(\theta )$ monotonically decreases from $\epsilon $ to $\epsilon (4\epsilon -3)^{2}$ as $\theta $ increases from $0$ to $\pi $. See Fig. 3 for the monotonicity of $D(\theta )$. When $\theta =\pi $, the deviation attains its minimum $\epsilon (4\epsilon -3)^{2}$. That is, \begin{equation} \epsilon (4\epsilon -3)^{2}\leq D(\theta ) \label{inequality} \end{equation} \noindent for any phase shift $\theta $ in $[0,\pi ]$, whenever $\epsilon \geq 3/4$. In particular, the deviation $\epsilon (4\epsilon -3)^{2}<\epsilon ^{3}$ whenever $\epsilon >3/5$. The inequality in (\ref{inequality}) also follows from the fact that $4\epsilon -3\leq d$ for any phase shift $\theta $ in $[0,\pi ]$ whenever $\epsilon \geq 3/4$.

See table 3 for the deviations $D(\theta )$ for $\theta =\pi /2,2\pi /3,3\pi /4,5\pi /6$. Also see Fig. 4.

Table 3.
The deviations for $\epsilon >3/4$

\begin{tabular}{|c|c|c|c|c|} \hline ${\small \theta }$ & ${\small \pi /2}$ & ${\small 2\pi /3}$ & ${\small 3\pi /4}$ & ${\small 5\pi /6}$ \\ \hline ${\tiny D(\theta )}$ & ${\tiny \epsilon (2\epsilon -1)}^{2}$ & ${\tiny \epsilon (3\epsilon -2)}^{2}$ & ${\tiny \epsilon ((}\sqrt{2}{\tiny +2)\epsilon -(}\sqrt{2}{\tiny +1))}^{2}$ & ${\tiny \epsilon ((}\sqrt{3}{\tiny +2)\epsilon -(}\sqrt{3}{\tiny +1))}^{2}$ \\ \hline \end{tabular}

Remark 2. From the discussion above, it is easy to see that the closer the phase shifts are to $\pi $, the smaller the deviation is, when $\epsilon \geq \frac{3}{4}$. By means of the inequality in (\ref{inequality}) we can discuss the lower bound on the number of iterations needed to find the $t$ state. Note that when the selective phase shift $\theta $ becomes $\pi $, the Phase-$\pi $ search is the amplitude amplification search.

\section{The ratio measurement of the success probabilities for one query}

\subsection{The ratio of the success probabilities}

Clearly, the greater the success probability is, the better the algorithm performs; in other words, the more rapidly the algorithm converges. In this section, it is demonstrated that the limit of the ratio of success probabilities of the Phase-$\theta $ and the Phase-$\pi /3$ search algorithms can be used to quantify the performance of the Phase-$\theta $ search algorithm. From (\ref{maxmin}), let $\Delta (\theta )=1-D(\theta )$. Then $\Delta (\theta )$ is the success probability with which the transformation $UR_{s}^{\theta }U^{+}R_{t}^{\theta }U$ in (\ref{grover3}) drives the start state to the target state. For instance, $\Delta (\pi /3)=1-D(\pi /3)=1-\epsilon ^{3}$, which is the success probability of the Phase-$\pi /3$ search algorithm for one query. See Page 1 in \cite{Grover05}.
Explicitly, $\Delta (\theta )$ is not the desired measurement, free of $\epsilon $, for the Phase-$\theta $ search algorithm, because $\Delta (\theta )$ is also a function of $\epsilon $. Let us compute the limit of $\Delta (\theta )$ as $\epsilon $ approaches 1: $\lim_{\epsilon \rightarrow 1}\Delta (\theta )=\lim_{\epsilon \rightarrow 1}(1-\epsilon (1+2(\cos \theta -1)(1-\epsilon ))^{2})=0$, for any $\theta $ in $[0,\pi ]$. Hence this limit cannot be used to describe the performance of the Phase-$\theta $ search algorithm, because it is zero for every $\theta $ in $[0,\pi ]$. It is natural to consider and calculate $\frac{\Delta (\theta )}{\Delta (\pi /3)}$ as follows.
\begin{eqnarray}
\frac{\Delta (\theta )}{\Delta (\pi /3)}=\frac{4\left( \cos ^{2}\theta \right) \epsilon ^{2}-8\left( \cos \theta \right) \epsilon ^{2}+4\epsilon ^{2}+4\left( \cos \theta \right) \epsilon -4\left( \cos ^{2}\theta \right) \epsilon +1}{\epsilon ^{2}+\epsilon +1}.
\end{eqnarray}

\noindent Then we obtain the following limit of $\frac{\Delta (\theta )}{\Delta (\pi /3)}$ as $\epsilon $ approaches 1. Let
\begin{eqnarray}
\rho =\lim_{\epsilon \rightarrow 1}\frac{\Delta (\theta )}{\Delta (\pi /3)}=\frac{5-4\cos \theta }{3}. \label{rate}
\end{eqnarray}

\noindent Then $\rho $ can be considered as the ratio of the success probabilities of the Phase-$\theta $ and the Phase-$\pi /3$ search algorithms for large $\epsilon $. Notice that $\rho $ is free of $\epsilon $ and depends only on $\theta $. Hence, $\rho $ can be considered as a measurement of the performance of the Phase-$\theta $ search algorithm for any phase shift $\theta $ in $[0,\pi ]$. Following \cite{Grover05}, we define $U_{m+1}$ by the recursion $U_{m+1}=U_{m}R_{s}^{\theta }U_{m}^{+}R_{t}^{\theta }U_{m}$, where $U_{0}=U$.
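These algebraic manipulations are easy to check numerically. The sketch below (our own; `Delta` and `ratio_closed` are our names, not the paper's) verifies the displayed closed form, the limit (\ref{rate}), and the $\epsilon ^{3^{m}}$ failure probability of the Phase-$\pi /3$ recursion:

```python
import math

def Delta(theta, eps):
    # success probability Delta(theta) = 1 - D(theta) for one basic iteration
    return 1 - eps*(1 + 2*(math.cos(theta) - 1)*(1 - eps))**2

def ratio_closed(theta, eps):
    # the closed form of Delta(theta)/Delta(pi/3) displayed above
    c = math.cos(theta)
    num = (4*c*c*eps*eps - 8*c*eps*eps + 4*eps*eps
           + 4*c*eps - 4*c*c*eps + 1)
    return num/(eps*eps + eps + 1)

for theta in (math.pi/2, 2*math.pi/3, 5*math.pi/6):
    for eps in (0.3, 0.7, 0.95):
        assert abs(Delta(theta, eps)/Delta(math.pi/3, eps)
                   - ratio_closed(theta, eps)) < 1e-12

# The eps -> 1 limit: rho = (5 - 4*cos(theta))/3.
theta = 2*math.pi/3
rho = (5 - 4*math.cos(theta))/3   # 7/3 for theta = 2*pi/3
assert abs(ratio_closed(theta, 1.0) - rho) < 1e-12

# Recursion U_{m+1} = U_m R_s U_m^+ R_t U_m: for theta = pi/3 the failure
# probability iterates eps -> eps**3, giving eps**(3**m) after m steps.
eps, m = 0.5, 3
fail = eps
for _ in range(m):
    fail = 1 - Delta(math.pi/3, fail)
assert abs(fail - eps**(3**m)) < 1e-12
```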
For the Phase-$\pi /3$ search, after recursive application of the basic iteration $m$ times, the success probability is $\left\vert U_{m,ts}\right\vert =1-\epsilon ^{3^{m}}$ \cite{Grover05}. For the Phase-$\theta $ search, we can likewise derive the success probability $\left\vert U_{m,ts}\right\vert $ and the failure probability $1-\left\vert U_{m,ts}\right\vert $ after recursive application of the basic iteration $m$ times. Fixed points of the Phase-$\theta $ search algorithm are discussed in \cite{reviewer}.

\subsection{Phase shifts larger than $\protect\pi /3$ for larger databases}

It can be shown that $\rho $ increases from $1/3$ to $3$ as $\theta $ increases from $0$ to $\pi $. In particular, $\rho $ increases from $1$ to $3$ as $\theta $ increases from $\pi /3$ to $\pi $. This says that for large databases, the larger the phase shifts are, the greater the success probabilities are. For instance, $\rho \approx 2.8$ for the Phase-$5\pi /6$ search. This means that for large $\epsilon $, the ratio of the success probabilities of the Phase-$5\pi /6$ and the Phase-$\pi /3$ search is about 2.8. See Table 4.

Table 4. $\rho $'s values for the Phase-$\theta $ search

\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\theta $ & $\pi /2$ & $2\pi /3$ & $3\pi /4$ & $5\pi /6$ & $\pi $ \\ \hline
$\rho $ & $5/3$ & $7/3$ & $(5+2\sqrt{2})/3\approx 2.6$ & $(5+2\sqrt{3})/3\approx 2.8$ & $3$ \\ \hline
\end{tabular}

\section{Summary}

In this paper, we give phase shifts yielding deviation smaller than $\epsilon ^{3}$. When $\epsilon \leq 3/4$ and $\epsilon $ is given, we choose the zero deviation point as the phase shift to find the desired state in one iteration. When $\epsilon \geq 3/4$, the deviation decreases from $\epsilon ^{3}$ to $\epsilon (4\epsilon -3)^{2}$ as $\theta $ increases from $\pi /3$ to $\pi $. It is shown that for small $\epsilon $, the Phase-$\pi /3$ search performs better than the general Phase-$\theta $ search.
Therefore the Phase-$\pi /3$ search can be applied to quantum error correction. We propose the limit of the ratio of the success probabilities of the Phase-$\theta $ and the Phase-$\pi /3$ search algorithms as a measure of the efficiency of a single Phase-$\theta $ iteration. The measure can help us find optimal phase shifts with small deviation and large success probability. Thus, there are more choices of phase shifts for adjusting an algorithm to a large database, and the looser constraint opens the door to more feasible or robust realizations.

Acknowledgement

We want to thank Lov K. Grover for his helpful discussions and comments on the original manuscript (in December 2005) and the reviewer for the helpful comments on this paper and useful discussions about fixed points of the Phase-$\theta $ search.

\end{document}
\begin{document} \maketitle \begin{abstract} Geometric conditions on general polygons are given in \cite{GRB} in order to guarantee the error estimate for interpolants built from generalized barycentric coordinates, and the question of identifying sharp geometric restrictions in this setting is proposed. In this work, we address that question when the construction uses Wachspress coordinates. We show that the three imposed conditions, the {\it bounded aspect ratio property $(barp)$}, the {\it maximum angle condition $(MAC)$} and the {\it minimum edge length property $(melp)$}, are actually equivalent to $[MAC,melp]$, and that if any of these conditions is not satisfied, then there is no guarantee that the error estimate is valid. In this sense, $MAC$ and $melp$ can be regarded as sharp geometric requirements in the Wachspress interpolation error estimate. \end{abstract} \section{Introduction} Many different conditions on the geometry of finite elements have been required in order to guarantee optimal convergence in the interpolation error estimate. Some of them deal with interior angles, like the {\it maximum angle condition} (maximum interior angle bounded away from $\pi$) and the {\it minimum angle condition} (minimum interior angle bounded away from $0$), while others deal with certain lengths of the element, like the {\it minimum edge length property} (the diameter of the element is comparable to the length of the segment determined by any two vertices) and the {\it bounded aspect ratio property}, often called the {\it regularity condition} (the diameter of the element and the diameter of the largest inscribed ball are comparable). Classical results on general Lagrange finite elements consider the regularity condition \cite{CR}. On triangular elements, the error estimate holds under the minimum angle condition \cite{Ze,Z}. However, on triangles, the minimum angle condition and the regularity condition are equivalent.
From \cite{BA,BG,J} we know that the weakest sufficient condition on triangular elements is the maximum angle condition. Examples can be constructed to show that if a family of triangles does not satisfy the maximum angle condition, then the error estimate on these elements does not hold. Recently, it was proved \cite{AM:2} that, for quadrilateral elements, the minimum angle condition ($mac$) is the weakest known geometric condition required to obtain the classical $W^{1,p}$-error estimate, when $1 \leq p < 3$, for any order $k$ greater than 1. Moreover, in this case, $mac$ is also necessary. In \cite{AM,AM:2} it was proved that the {\emph{double angle condition}} (any interior angle bounded away from zero and $\pi$) is a sufficient requirement to obtain the error estimate for any order and any $p \geq 1$. When $k=1$ and $1 \leq p < 3$, a less restrictive condition ensures the error estimate \cite{AD,AM}: the {\it regular decomposition property} ($RDP$). Property $RDP$ requires that, after dividing the quadrilateral into two triangles along one of its diagonals, each resulting triangle verifies the maximum angle condition and the quotient between the lengths of the diagonals is uniformly bounded. This brief picture intends to show that the study of sharp geometric restrictions on finite elements under which the optimal error estimate remains valid is an interesting and active field of research. In \cite{GRB,GRB:2}, geometric conditions on general polygons are given in order to guarantee the error estimate for interpolants built from generalized barycentric coordinates, and the question of identifying sharp geometric restrictions in this setting is proposed. In this work, we address the question for the first-order Wachspress interpolation operator.
We show that the three sufficient conditions considered in \cite{GRB} ({\it regularity condition}, {\it maximum angle condition} and {\it minimum edge length property}) are actually equivalent to the last two, since the regularity condition is a consequence of the maximum angle condition and the minimum edge length property. Then we exhibit families of polygons satisfying only one of these conditions and show that the interpolation error estimate does not hold for suitably chosen functions. In this sense, the {\it maximum angle condition} and the {\it minimum edge length property} can be regarded as sharp geometric requirements to obtain the optimal error estimate. This work is structured as follows: In Section \ref{geoimpl}, we introduce notation and exhibit some basic relationships between different geometric conditions on general convex polygons. Section \ref{wach} is devoted to recalling Wachspress coordinates and some elementary results associated with them; a brief overview of error estimates for the first-order Wachspress interpolation operator is also given there. Finally, in Section \ref{sharp}, we present two counterexamples to show that $MAC$ and $melp$ are sharp geometric requirements under which the optimal error estimate is valid.
\item[(ii)] {\it (Minimum edge length property)} We say that $\Omega$ satisfies the {\emph{minimum edge length property}} if there exists a constant $d_m>0$ such that \begin{equation} \label{mel} 0<d_m \leq \frac{\left\| {\bf v}_i-{\bf v}_j \right\|}{diam(\Omega)} \end{equation} for all $i \neq j$, where ${\bf v}_1, {\bf v}_2, \dots, {\bf v}_n$ are the vertices of $\Omega$. In this case, we write $melp(d_m)$. \item[(iii)] {\it (Maximum angle condition)} We say that $\Omega$ satisfies the {\emph{maximum angle condition}} if there exists a constant $\psi_M>0$ such that \begin{equation} \label{MAC} \beta \leq \psi_M < \pi \end{equation} for every interior angle $\beta$ of $\Omega$. In this case, we write $MAC(\psi_M)$. \item[(iv)] {\it (Minimum angle condition)} We say that $\Omega$ satisfies the {\emph{minimum angle condition}} if there exists a constant $\psi_m>0$ such that \begin{equation} \label{mac} 0 < \psi_m \leq \beta \end{equation} for every interior angle $\beta$ of $\Omega$. In this case, we write $mac(\psi_m)$. \end{enumerate} Throughout this work, when we say {\it regular polygon}, we refer to a polygon satisfying the regularity condition given by \eqref{barp}. \subsection{Some basic relationships} It is well known that the regularity assumption implies that the minimum interior angle is bounded away from zero. We state this result in the following lemma. \begin{lemma} \label{lemma:regmac} If $\Omega$ is a convex polygon satisfying $barp(\sigma)$, then $\Omega$ verifies $mac(\psi_m)$, where $\psi_m$ is a constant depending only on $\sigma$. \end{lemma} \proof See for instance \cite[Proposition 4 (i)]{GRB}. \qed Considering the rectangle $R=[0,1] \times [0,s]$, where $0<s<1$, and taking $s \to 0^+$, we see that the converse statement of Lemma \ref{lemma:regmac} does not hold. Indeed, $R$ verifies $mac(\pi/2)$ (independently of $s$), but, when $s$ tends to zero, $R$ is not regular in the sense given by \eqref{barp}.
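These conditions are easy to evaluate mechanically. The sketch below (our own illustration; none of the helper names come from the paper) checks the rectangle $R=[0,1]\times[0,s]$ discussed above: $mac(\pi/2)$ holds independently of $s$, while the aspect ratio $diam(R)/\rho(R)$ blows up as $s\to 0^+$ (for such a rectangle with $s<1$ the inscribed ball has diameter $s$).

```python
import math
from itertools import combinations

def diam(P):
    # diameter of the polygon: longest distance between two vertices
    return max(math.dist(u, v) for u, v in combinations(P, 2))

def melp_constant(P):
    # smallest ||vi - vj|| / diam over all vertex pairs
    return min(math.dist(u, v) for u, v in combinations(P, 2)) / diam(P)

def interior_angles(P):
    # P: vertices of a convex polygon in counterclockwise order
    n = len(P)
    out = []
    for i in range(n):
        a, b, c = P[i - 1], P[i], P[(i + 1) % n]
        u = (a[0] - b[0], a[1] - b[1])
        w = (c[0] - b[0], c[1] - b[1])
        cosang = (u[0]*w[0] + u[1]*w[1])/(math.hypot(*u)*math.hypot(*w))
        out.append(math.acos(max(-1.0, min(1.0, cosang))))
    return out

aspects = []
for s in (0.5, 0.1, 0.001):
    R = [(0, 0), (1, 0), (1, s), (0, s)]
    # mac(pi/2) holds independently of s ...
    assert all(abs(t - math.pi/2) < 1e-9 for t in interior_angles(R))
    # ... but barp fails as s -> 0+: diam(R)/rho(R) = sqrt(1 + s*s)/s
    aspects.append(diam(R)/s)
assert aspects[0] < aspects[1] < aspects[2] and aspects[2] > 1000

# On the unit square the melp constant is 1/sqrt(2).
assert abs(melp_constant([(0, 0), (1, 0), (1, 1), (0, 1)]) - 1/math.sqrt(2)) < 1e-12
```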
However, on triangular elements, $barp$ and $mac$ are equivalent. We use this fact to show that, on general polygons, the regularity condition is a consequence of the minimum edge length property and the maximum angle condition. To our knowledge, this elementary result has not been established or demonstrated previously. \begin{figure} \caption{(A): A polygon with its diameter attained as the length of the straight line joining two non-consecutive vertices. (B): A polygon with its diameter attained as the length of the straight line joining two consecutive vertices.} \label{fig:macmel=reg} \end{figure} \begin{lemma} \label{lemma:macmelpeqreg} If $\Omega$ is a convex polygon satisfying $MAC(\psi_M)$ and $melp(d_m)$, then $\Omega$ verifies $barp(\sigma)$, where $\sigma=\sigma(\psi_M,d_m)$. \end{lemma} \proof We prove this by induction on the number $n$ of vertices of $\Omega$. If $n=3$, i.e., $\Omega$ is a triangle, the result follows from the law of sines. Indeed, we only have to prove that $\Omega$ has its minimum interior angle bounded away from zero. Let $\alpha$ be the minimum angle of $\Omega$ (if there is more than one choice, we choose it arbitrarily) and let $l$ be the length of its opposite side. Since $diam(\Omega)$ is attained on one side of $\Omega$, we can assume, without loss of generality, $l \neq diam(\Omega)$. We call $\beta$ the opposite angle to $diam(\Omega)$. It is clear that $\beta$ cannot approach zero, and since it is bounded above by $\psi_M$, we get that $1/\sin(\beta) \le C$ for some positive constant $C$. Then, from the law of sines and the assumption $melp(d_m)$, we have $$\frac{\sin(\alpha)}{\sin(\beta)} = \frac{l}{diam(\Omega)} \geq d_m.$$ Consequently, $\sin(\alpha) \ge C^{-1} d_m$, which proves that $\alpha$ is bounded away from zero. Let $n>3$.
Since the diameter of $\Omega$ is realized as the length of its longest {\it diagonal}, i.e., the longest straight line joining two vertices of $\Omega$, we need to consider two cases, depending on whether these vertices are consecutive or not. Assume that $diam(\Omega)$ is attained as the length of the line joining two non-consecutive vertices (these may not be unique; in this case we choose them arbitrarily). We can divide $\Omega$ by this diagonal into two convex polygons $\Omega_1$ and $\Omega_2$ with fewer vertices (see Figure \ref{fig:macmel=reg} (A)). It is clear that both of them satisfy $MAC(\psi_M)$ and, since $diam(\Omega_i)=diam(\Omega)$ and the set of vertices of $\Omega_i$ is a subset of the vertices of $\Omega$, we conclude that $\Omega_i$ also verifies $melp(d_m)$. Therefore, by the inductive hypothesis, $\Omega_1$ and $\Omega_2$ verify $barp(\sigma_1)$ and $barp(\sigma_2)$, respectively, for some constants $\sigma_1, \sigma_2$ depending only on $\psi_M$ and $d_m$. Then, since $\rho(\Omega) \geq \rho(\Omega_i)$, $i=1,2$, we have $$\displaystyle \frac{diam(\Omega)}{\rho(\Omega)} = \frac{diam(\Omega_i)}{\rho(\Omega)} \leq \frac{diam(\Omega_i)}{\rho(\Omega_i)} \leq \sigma_i.$$ Finally, if $diam(\Omega)$ is attained on a side of $\Omega$, i.e., is the length of the line joining two consecutive vertices ${\bf v}_{j-1}$ and ${\bf v}_j$ (these may not be unique; in this case we choose them arbitrarily), we divide $\Omega$ by the diagonal joining ${\bf v}_{j-1}$ and ${\bf v}_{j+1}$ into the triangle $T_1=\Delta({\bf v}_{j-1}{\bf v}_j{\bf v}_{j+1})$ and a convex polygon $\Omega_2$ (see Figure \ref{fig:macmel=reg} (B)). It is clear that $T_1$ verifies $melp(d_m)$ and $MAC(\psi_M)$, so (by the case $n=3$) we have that $T_1$ satisfies $barp(\sigma_1)$ for some positive constant $\sigma_1$.
Then, since $diam(T_1)=diam(\Omega)$ and $\rho(\Omega) \geq \rho(T_1)$, we have $$\displaystyle \frac{diam(\Omega)}{\rho(\Omega)} = \frac{diam(T_1)}{\rho(\Omega)} \leq \frac{diam(T_1)}{\rho(T_1)} \leq \sigma_1.$$ \qed \begin{cor} \label{cor:equiv} $[MAC, melp]$ and $[barp, MAC, melp]$ are equivalent conditions. \end{cor} Finally, notice that the converse of Lemma \ref{lemma:macmelpeqreg} is false. Consider the following families of quadrilaterals: $\mathcal{F}_1=\{ K(1,1-s,s,1-s) \}_{0<s<1}$, where $K(1,1-s,s,1-s)$ denotes the convex quadrilateral with vertices $(0,0), (1,0), (s, 1-s)$ and $(0,1-s)$, and $\mathcal{F}_2=\{ K(1,1,s,s) \}_{1/2<s<1}$, where $K(1,1,s,s)$ denotes the convex quadrilateral with vertices $(0,0), (1,0), (s, s)$ and $(0,1)$. Clearly, any quadrilateral belonging to $\mathcal{F}_1 \cup \mathcal{F}_2$ is regular in the sense given by \eqref{barp}. Each element of $\mathcal{F}_1$ satisfies $MAC(3\pi/4)$, but taking $s \to 0^+$, we see that the minimum edge length property is violated. On the other hand, each element of $\mathcal{F}_2$ verifies $melp(1/2)$; but taking $s \to 1/2^+$, we see that the maximum angle condition is not satisfied. \section{Wachspress coordinates and the error estimate} \label{wach} \setcounter{equation}{0} \subsection{Wachspress coordinates} We start this section by recalling the definition of Wachspress coordinates and some of their main properties \cite{Fl:2, W}. Henceforth, we denote by ${\bf v}_1, {\bf v}_2, \dots, {\bf v}_n$ the vertices of $\Omega$, enumerated in counterclockwise order starting at an arbitrary vertex. Let ${\bf x}$ denote an interior point of $\Omega$ and let $A_i({\bf x})$ denote the area of the triangle with vertices ${\bf x}$, ${\bf v}_i$ and ${\bf v}_{i+1}$, i.e., $A_i({\bf x})=|\Delta ({\bf x} {\bf v}_i {\bf v}_{i+1})|$, where, by convention, ${\bf v}_0:= {\bf v}_n$ and ${\bf v}_{n+1}:={\bf v}_1$.
Let $B_i$ denote the area of the triangle with vertices ${\bf v}_{i-1}$, ${\bf v}_i$ and ${\bf v}_{i+1}$, i.e., $B_i=|\Delta ({\bf v}_{i-1} {\bf v}_i {\bf v}_{i+1})|$. We summarize the notation in Figure \ref{fig:notation}. \begin{figure} \caption{(A): Notation for $A_i({\bf x})$, $\alpha_i$ and $\delta_i$. (B): Notation for $B_i$.} \label{fig:notation} \end{figure} Define the Wachspress weight function $w_i$ as the product of the area of the ``boundary'' triangle, formed by ${\bf v}_i$ and its two adjacent vertices, and the areas of the $n-2$ interior triangles, formed by the point ${\bf x}$ and the polygon's adjacent vertices (excluding the two interior triangles that contain the vertex ${\bf v}_i$), i.e., \begin{equation} \label{wi} \displaystyle w_i({\bf x}) = B_i \prod_{j \neq i,i-1} A_j({\bf x}). \end{equation} After applying the standard normalization, Wachspress coordinates are then given by \begin{equation} \label{lambdai} \displaystyle \lambda_i({\bf x}) = \frac{w_i({\bf x})}{\sum_{j=1}^n w_j({\bf x})}. \end{equation} An equivalent expression of \eqref{wi} for $w_i$ is given in \cite{Mey}; the main advantage of this alternative expression is that it is easy to implement and shows that only the edge $\overline{{\bf x} {\bf v}_i}$ and its two adjacent angles $\alpha_i$ and $\delta_i$ are needed (see Figure \ref{fig:notation} (A)). Indeed, $w_i$ can be written as \begin{equation} \label{weights} w_i({\bf x}) = \frac{\cot(\alpha_i)+\cot(\delta_i)}{\left\| {\bf x}-{\bf v}_i \right\|^2} \end{equation} where $\alpha_i=\angle\ {\bf x} {\bf v}_i {\bf v}_{i+1}$ and $\delta_i=\beta_i-\alpha_i$, with $\beta_i$ being the inner angle of $\Omega$ associated with ${\bf v}_i$ (see Figure \ref{fig:notation}). The evaluation of the Wachspress basis functions is carried out using elementary vector calculus operations: the angles $\alpha_i$ and $\delta_i$ are not explicitly computed; as suggested in \cite{Mey}, vector cross product and dot product formulas are used to find the cotangents.
Wachspress coordinates have the following well-known properties: \begin{itemize} \item[(I)] {\it (Non-negativeness)} $\lambda_i \geq 0$ on $\Omega$. \item[(II)] {\it (Linear Completeness)} for any linear function $\ell :\Omega \to \mathbb R$, there holds $\ell = \sum_{i} \ell({\bf v}_i) \lambda_i$. \item[] (Considering the linear map $\ell \equiv 1$ yields $\sum_{i} \lambda_i = 1$; this property is usually named {\it partition of unity}). \item[(III)] {\it (Invariance)} If $L:\mathbb R^2 \to \mathbb R^2$ is a linear map and $S:\mathbb R^2 \to \mathbb R^2$ is a composition of rotation, translation and uniform scaling transformations, then $\lambda_i({\bf x})=\lambda_i^L(L({\bf x}))=\lambda_i^S(S({\bf x}))$, where $\lambda_i^F(F({\bf x}))$ denotes a set of barycentric coordinates on $F(\Omega)$. \item[(IV)] {\it (Linear precision)} $\sum_{i} {\bf v}_i \lambda_i({\bf x})={\bf x}$, i.e., every point of $\Omega$ can be written as a convex combination of the vertices ${\bf v}_1, {\bf v}_2, \dots, {\bf v}_n$. \item[(V)] {\it (Interpolation)} $\lambda_i({\bf v}_j)=\delta_{ij}$. \end{itemize} \subsection{Error estimate for the first-order Wachspress interpolation operator} We only give a brief overview of some definitions and results which are of interest to us; for more details we refer to \cite{Das, GRB, Suk:2, Suk}. Let $\{ \lambda_i \}$ be the Wachspress coordinates associated with $\Omega$ (see \eqref{lambdai}). Then, we can consider the first-order interpolation operator $I:H^2(\Omega) \to span \{ \lambda_i \} \subset H^1(\Omega)$ defined as \begin{equation} \label{defI} \displaystyle I_{\Omega}u=Iu := \sum_{i} u({\bf v}_i) \lambda_i. \end{equation} Properties (I)-(V) of the Wachspress coordinates (more generally, of generalized barycentric coordinates) guarantee that $I$ has the desirable properties of an interpolant.
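To make the construction concrete, the following numerical sketch (ours; the function names are not from the paper) evaluates \eqref{lambdai} through the area formula \eqref{wi} on a regular pentagon, and checks non-negativeness, the partition of unity, linear precision and the exactness of the interpolant \eqref{defI} on linear functions.

```python
import math

def wachspress(P, x):
    # Wachspress coordinates on a convex polygon P (counterclockwise
    # vertices) at interior point x, via w_i = B_i * prod_{j != i, i-1} A_j(x).
    n = len(P)
    area = lambda a, b, c: 0.5*((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    A = [area(x, P[i], P[(i + 1) % n]) for i in range(n)]
    B = [area(P[i - 1], P[i], P[(i + 1) % n]) for i in range(n)]
    w = [B[i]*math.prod(A[j] for j in range(n) if j not in (i, (i - 1) % n))
         for i in range(n)]
    W = sum(w)
    return [wi/W for wi in w]

def interpolate(P, u, x):
    # first-order interpolation operator Iu(x) = sum_i u(v_i) * lambda_i(x)
    return sum(u(v)*l for v, l in zip(P, wachspress(P, x)))

# Regular pentagon with unit circumradius.
pent = [(math.cos(2*math.pi*k/5), math.sin(2*math.pi*k/5)) for k in range(5)]
ell = lambda v: 3*v[0] - 2*v[1] + 1                       # a linear test function
for x in [(0.0, 0.0), (0.2, -0.1), (-0.3, 0.25)]:
    lam = wachspress(pent, x)
    assert all(l >= 0 for l in lam)                       # non-negativeness
    assert abs(sum(lam) - 1) < 1e-12                      # partition of unity
    bx = sum(l*v[0] for l, v in zip(lam, pent))
    by = sum(l*v[1] for l, v in zip(lam, pent))
    assert abs(bx - x[0]) < 1e-12 and abs(by - x[1]) < 1e-12  # linear precision
    assert abs(interpolate(pent, ell, x) - ell(x)) < 1e-12    # completeness
```

A genuinely quadratic function is generally not reproduced, which is why the interpolation error in $H^1$ is the quantity of interest below.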
For this interpolant, called here the {\it first-order Wachspress interpolation operator}, the optimal convergence estimate \begin{equation} \label{errorestimate} \left\| u-Iu \right\|_{H^1(\Omega)} \leq C diam(\Omega) |u|_{H^2(\Omega)} \end{equation} on polygons satisfying $[barp, MAC, melp]$ was proved in \cite[Lemma 6]{GRB}. \begin{rem} \label{rem:red} Thanks to {\rm Corollary \ref{cor:equiv}}, we can affirm that \eqref{errorestimate} holds on general convex polygons satisfying $[MAC, melp]$. \end{rem} \section{About sharpness of the geometric restrictions} \label{sharp} \setcounter{equation}{0} Since $[MAC, melp]$ are sufficient conditions to obtain \eqref{errorestimate}, we wonder if some of these requirements can be relaxed in order to obtain the error estimate. This question was partially answered in \cite{GRB}, where a counterexample, using pentagonal elements, is given in order to show that the $MAC$ cannot be removed. For the sake of completeness, in Counterexample \ref{necmac}, we give a family of quadrilateral elements which does not satisfy $MAC$ but verifies $melp$, and for which \eqref{errorestimate} does not hold. This example shows two things: $MAC$ is necessary in order to obtain the error estimate and, since every element in this family is regular in the sense given by \eqref{barp}, $barp$ is not enough to obtain \eqref{errorestimate}. On the other hand, in Counterexample \ref{necmel}, we present a family of quadrilaterals which does not satisfy $melp$ but verifies $MAC$, and for which \eqref{errorestimate} does not hold. Then, in order to obtain the interpolation error estimate, $melp$ is necessary. In this sense, the question raised in \cite{GRB} about identifying sharp geometric restrictions under which the error estimate for the first-order Wachspress interpolation operator holds can be considered as answered.
\begin{figure} \caption{Schematic picture of $K_s$ and $T_s$ (hatched area) considered in Counterexample \ref{necmac}.} \label{fig:cex1} \end{figure} \begin{cex} \label{necmac} Consider the convex quadrilateral $K_s$ with the vertices ${\bf v}_1=(0,0), {\bf v}_2=(1,0), {\bf v}_3=(s,s)$ and ${\bf v}_4=(0,1)$, where $1/2<s<1$. We will be interested in the case when $s$ tends to $1/2$, since then the family of quadrilaterals $\{ K_s \}$ does not satisfy the maximum angle condition although it satisfies $melp(1/2)$. Consider the function $u({\bf x})=x(1-x)$. Since $u({\bf v}_1)=0=u({\bf v}_2)=u({\bf v}_4)$, we have $$Iu({\bf x})=u({\bf v}_3) \lambda_3({\bf x})= s(1-s) \lambda_3({\bf x}).$$ A straightforward computation yields $$\displaystyle \lambda_3({\bf x}) = \frac{(2s-1)x}{s} \frac{y}{(s-1)(x+y)+s},$$ therefore $$\displaystyle \frac{\partial \lambda_3}{\partial y} = \frac{(2s-1)x}{s} \frac{(s-1)x+s}{[(s-1)(x+y)+s]^2}.$$ Consider the triangle $T_s$ with vertices $(1/4,3/4)$, $(1/2,1/2)$ and $(1/2, (3s-1)/(2s))$ (see Figure \ref{fig:cex1}). Then, on $T_s$, we have $1/4 \leq x \leq 1/2$, $1/2 \le y \le (3s-1)/(2s)$ and $x+y \geq 1$, so it follows that $$0<(s-1)(x+y)+s \leq 2s-1 \quad \text{and} \quad (s-1)x+s \geq (3s-1)/2$$ and hence $$\displaystyle \frac{\partial \lambda_3}{\partial y} \geq \frac{(2s-1)}{4s} \frac{3s-1}{2(2s-1)^2}=\frac{3s-1}{8s(2s-1)}.$$ Then $$|u-Iu|_{H^1(K_s)} \ge \left\| \frac{\partial (u-Iu)}{\partial y} \right\|_{L^2(K_s)} = \left\| \frac{\partial Iu}{\partial y} \right\|_{L^2(K_s)} = s(1-s)\left\| \frac{\partial \lambda_3}{\partial y} \right\|_{L^2(K_s)}$$ and, consequently, $$|u-Iu|_{H^1(K_s)} \ge s(1-s)\left\| \frac{\partial \lambda_3}{\partial y} \right\|_{L^2(T_s)}.$$ Since $|T_s|=(2s-1)/(2^4s)$, we have $$\left\| \frac{\partial \lambda_3}{\partial y} \right\|_{L^2(T_s)}^2 \geq \frac{(3s-1)^2}{(8s)^2(2s-1)^2}|T_s|= \frac{(3s-1)^2}{2^{10}s^3(2s-1)} \to \infty$$ when $s \to 1/2^+$.
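The closed form for $\lambda_3$ obtained above can be spot-checked against a direct evaluation of the Wachspress weights \eqref{wi} on $K_s$; this is our own numerical sketch, not part of the paper.

```python
import math

def wachspress(P, x):
    # Wachspress coordinates via w_i = B_i * prod_{j != i, i-1} A_j(x)
    n = len(P)
    area = lambda a, b, c: 0.5*((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    A = [area(x, P[i], P[(i + 1) % n]) for i in range(n)]
    B = [area(P[i - 1], P[i], P[(i + 1) % n]) for i in range(n)]
    w = [B[i]*math.prod(A[j] for j in range(n) if j not in (i, (i - 1) % n))
         for i in range(n)]
    W = sum(w)
    return [wi/W for wi in w]

def lam3_closed(s, x, y):
    # the closed form derived above for K_s
    return (2*s - 1)*x/s * y/((s - 1)*(x + y) + s)

for s in (0.75, 0.6, 0.51):
    K = [(0, 0), (1, 0), (s, s), (0, 1)]
    for pt in [(0.3, 0.4), (0.45, 0.52)]:   # interior points of K_s
        lam = wachspress(K, pt)
        assert abs(lam[2] - lam3_closed(s, *pt)) < 1e-12
```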
Finally, as $|u|_{H^2(K_s)} = 2 |K_s|^{1/2} \leq 2$ and $diam(K_s)=\sqrt{2}$, we conclude that \eqref{errorestimate} cannot hold. \end{cex} \begin{figure} \caption{Schematic picture of $K_s$ and $D_s$ (hatched area) considered in Counterexample \ref{necmel}.} \label{fig:cex2} \end{figure} \begin{cex} \label{necmel} Consider now the convex quadrilateral $K_s$ with the vertices ${\bf v}_1=(0,0), {\bf v}_2=(1,0), {\bf v}_3=(1-\sqrt[4]{s},s)$ and ${\bf v}_4=(0,s)$, where $0 < s < (1/2)^4$. Note that the family of quadrilaterals $\{ K_s \}$ satisfies $MAC(\pi/2+\tan^{-1}(2^3))$ (independently of $s$) but does not satisfy the minimum edge length property when $s$ tends to zero, since $\left\| {\bf v}_1-{\bf v}_4 \right\| = s \to 0^+$ and $diam(K_s) \sim 1$. Consider the function $u({\bf x})=x^2$. Since $u({\bf v}_1)=0=u({\bf v}_4)$, we have, calling $a := 1-\sqrt[4]{s}$, $$Iu({\bf x})=u({\bf v}_2) \lambda_2({\bf x})+u({\bf v}_3) \lambda_3({\bf x}) = \lambda_2({\bf x})+ a^2 \lambda_3({\bf x})$$ where $$\lambda_2({\bf x})=\frac{x(s-y)}{s+y(a-1)} \quad \text{and} \quad \lambda_3({\bf x})=\frac{xy}{s+y(a-1)}.$$ A simple computation yields $$\frac{\partial (Iu-u)}{\partial y} = \frac{\partial Iu}{\partial y} = \frac{xsa(a-1)}{(s+y(a-1))^2}.$$ Let $D_s = K_s \cap \{ x \geq 1/2 \}$ (see Figure \ref{fig:cex2}). Since $a-1 <0$, we get $s+y(a-1) \leq s$ and then, on $D_s$, we have $$\left| \frac{\partial (Iu-u)}{\partial y} \right| \geq \frac{xa(1-a)}{s} \geq \frac{a(1-a)}{2s}.$$ Therefore, $$|Iu-u|_{H^1(K_s)}^2 \geq \left\| \frac{\partial (Iu-u)}{\partial y} \right\|_{L^2(K_s)}^2 \geq \left\| \frac{\partial (Iu-u)}{\partial y} \right\|_{L^2(D_s)}^2 \geq \frac{a^2(1-a)^2}{4s^2} |D_s|,$$ and since $|D_s|=as/2$, we conclude that $$|Iu-u|_{H^1(K_s)}^2 \geq \frac{a^3(1-a)^2}{8s} = \frac{(1-\sqrt[4]{s})^3}{8\sqrt{s}}$$ which tends to infinity when $s$ tends to zero.
Finally, since $|u|_{H^2(K_s)} = 2 |K_s|^{1/2} \leq 2$ and $diam(K_s) \sim 1$, we conclude that \eqref{errorestimate} cannot hold. \end{cex} \end{document}
\begin{document} \title{Bounds for the first several prime character nonresidues} \begin{abstract}\noindent Let $\varepsilon > 0$. We prove that there are constants $m_0=m_0(\varepsilon)$ and $\kappa=\kappa(\varepsilon) > 0$ for which the following holds: For every integer $m > m_0$ and every nontrivial Dirichlet character $\chi$ modulo $m$, there are more than $m^{\kappa}$ primes $\ell \le m^{\frac{1}{4\sqrt{e}}+\varepsilon}$ with $\chi(\ell)\notin \{0,1\}$. The proof uses the fundamental lemma of the sieve, Norton's refinement of the Burgess bounds, and a result of Tenenbaum on the distribution of smooth numbers satisfying a coprimality condition. For quadratic characters, we demonstrate a somewhat weaker lower bound on the number of primes $\ell \le m^{\frac14+\varepsilon}$ with $\chi(\ell)=1$. \end{abstract} \section{Introduction} Let $\chi$ be a nonprincipal Dirichlet character. An integer $n$ is called a $\chi$-\emph{nonresidue} if $\chi(n) \notin \{0,1\}$. Problems about character nonresidues go back to the beginnings of modern number theory. Indeed, one can read out of Gauss's \emph{Disquisitiones} that for primes $p\equiv 1\pmod{8}$ and $\chi(\cdot) = \leg{p}{\cdot}$, the smallest $\chi$-nonresidue does not exceed $2\sqrt{p}+1$ \cite[Article 129]{gauss86}. This was an auxiliary result required for Gauss's first proof of the quadratic reciprocity law. In the early 20th century, I.\,M. Vinogradov initiated the study of how the quadratic residues and nonresidues modulo a prime $p$ are distributed in the interval $[1,p-1]$. A particularly natural problem is to estimate the size of $n_p$, the smallest quadratic nonresidue modulo $p$. Vinogradov conjectured that $n_p \ll_{\varepsilon} p^{\varepsilon}$ for each $\varepsilon >0$. By means of a novel estimate for character sums (independently discovered by P\'olya), coupled with a clever sieving argument, he showed \cite{vinogradov18} that $n_p \ll_{\varepsilon} p^{\frac{1}{2\sqrt{e}} + \varepsilon}$.
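For illustration only (our own sketch, not part of the paper), $n_p$ is easy to compute via Euler's criterion, and for small primes one can watch it stay far below Gauss's bound $2\sqrt{p}+1$:

```python
def least_qnr(p):
    # smallest quadratic nonresidue modulo an odd prime p, via Euler's
    # criterion: n is a nonresidue iff n^((p-1)/2) == -1 (mod p)
    n = 2
    while pow(n, (p - 1)//2, p) != p - 1:
        n += 1
    return n

assert least_qnr(7) == 3
assert least_qnr(23) == 5
assert least_qnr(71) == 7
for p in (7, 23, 71, 311):
    n = least_qnr(p)
    # n_p is always prime: a product of residues would itself be a residue
    assert all(n % d for d in range(2, n))
    # and, for these p, it lies below Gauss's bound 2*sqrt(p) + 1
    assert n < 2*p**0.5 + 1
```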
Burgess's character sum bounds \cite{burgess57}, in conjunction with Vinogradov's methods, yield the sharper estimate \begin{equation}\label{eq:burgessquadratic} n_p \ll_{\varepsilon} p^{\frac{1}{4\sqrt{e}}+\varepsilon}. \end{equation} Fifty years of subsequent research has not led to any improvement in the exponent $\frac{1}{4\sqrt{e}}$. But generalizing \eqref{eq:burgessquadratic}, Norton showed that if $\chi$ is any nontrivial character modulo $m$, then the least $\chi$-nonresidue is $O_{\varepsilon}(m^{1/4\sqrt{e} + \varepsilon})$. See \cite[Theorem 1.30]{norton98}. Since $\chi$ is completely multiplicative, the smallest $\chi$-nonresidue is necessarily prime. In this note, we prove that there are actually many prime $\chi$-nonresidues satisfying the Burgess--Norton upper bound. \begin{thm}\label{thm:main} For each $\varepsilon > 0$, there are numbers $m_0(\varepsilon)$ and $\kappa=\kappa(\varepsilon)> 0$ for which the following holds: For all $m > m_0$ and each nontrivial character $\chi$ mod $m$, there are more than $m^{\kappa}$ prime $\chi$-nonresidues not exceeding $m^{\frac{1}{4\sqrt{e}}+\varepsilon}$. \end{thm} The problem of obtaining an upper bound on the first several prime character nonresidues was considered already by Vinogradov. In \cite{vinogradov18}, he showed that for large $p$, there are at least $\frac{\log{p}}{7\log\log{p}}$ prime quadratic nonresidues modulo $p$ not exceeding \[ p^{\frac{1}{2}-\frac{1}{\log\log{p}}}. \] For characters to prime moduli, a result resembling Theorem \ref{thm:main} was proved by Hudson in 1983 \cite{hudson83}. (See also Hudson's earlier investigations \cite{hudson73,hudson74,hudson74A}.) But even restricted to prime $m$, Theorem \ref{thm:main} improves on \cite{hudson83} in multiple respects. In \cite{hudson83}, the exponent on $p$ is $\frac{1}{4}+\varepsilon$ instead of $\frac{1}{4\sqrt{e}} +\varepsilon$, and the number of nonresidues produced is only $c_{\varepsilon} \frac{\log{p}}{\log\log{p}}$. 
Moreover, it is assumed in \cite{hudson83} that the order of $\chi$ is fixed. Stronger results than those of \cite{hudson83} were announced by Norton already in 1973 \cite{norton74}.\footnote{Norton claims in \cite{norton74}: \emph{Let $\varepsilon>0$ and $k_0 \ge 2$. If $m \ge 3$ and $[(\mathbf{Z}/m\mathbf{Z})^{\times}: {(\mathbf{Z}/m\mathbf{Z})^{\times}}^k] \ge k_0$, then each of the smallest $\lfloor \log{m}/\log\log{m}\rfloor$ primes not dividing $m$ that are $k$th power nonresidues modulo $m$ is $\ll_{\varepsilon,k_0}n^{1/4u_{k_0} + \varepsilon}$}. Here $u_{k_0}$ has the same meaning as in our introduction.} Unfortunately, a full account of Norton's work seems to have never appeared. It becomes easier to produce small character nonresidues as the order of $\chi$ increases. This phenomenon was noticed by Vinogradov \cite{vinogradov27} and further investigated by Buchstab \cite{buchstab49} and Davenport and Erd\H{o}s \cite{DE52}. To explain their results requires us to first recall the rudiments of the theory of smooth numbers. For each positive integer $n$, let $P^{+}(n)$ denote the largest prime factor of $n$, with the convention that $P^{+}(1)=1$. A natural number $n$ is called \emph{$y$-smooth} (or \emph{$y$-friable}) if $P^{+}(n) \le y$. For $x \ge y \ge 2$, we let $\Psi(x,y)$ be the count of $y$-smooth numbers up to $x$. We let $\rho$ be Dickman's function, defined by \[ \rho(u)=1\text{ for $0 \le u \le 1$}, \quad \text{and}\quad u \rho'(u) = -\rho(u-1) \quad\text{for $u > 1$}. \] The functions $\Psi(x,y)$ and $\rho(u)$ are intimately connected; it is known that $\Psi(x,y) \sim x\rho(u)$, where $u:=\frac{\log{x}}{\log{y}}$, in a wide range of $x$ and $y$. In fact, Hildebrand \cite{hildebrand86} has shown that this asymptotic formula holds whenever $x\to\infty$, as long as \[ y \ge \exp((\log\log{x})^{5/3+\lambda}) \] for some fixed positive $\lambda$. For this estimate to be useful, one needs to understand the behavior of $\rho(u)$. 
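As a small sanity check on these definitions (ours, not part of the paper): on $1 \le u \le 2$ the delay equation gives $u\rho'(u) = -\rho(u-1) = -1$, so $\rho(u) = 1 - \log u$ there, which pins down $u_2$ and $u_3$ (the solutions of $\rho(u_k)=1/k$) exactly.

```python
import math

def rho(u):
    # Dickman's function on [0, 2]: rho = 1 on [0, 1]; on [1, 2] the delay
    # equation u*rho'(u) = -rho(u - 1) = -1 integrates to rho(u) = 1 - log(u).
    if u <= 1:
        return 1.0
    assert u <= 2
    return 1 - math.log(u)

# rho(u_k) = 1/k  =>  u_k = exp(1 - 1/k), valid while u_k <= 2 (k = 2, 3).
u2, u3 = math.exp(1/2), math.exp(2/3)
assert abs(rho(u2) - 1/2) < 1e-12 and abs(u2 - 1.6487) < 1e-4
assert abs(rho(u3) - 1/3) < 1e-12 and abs(u3 - 1.9477) < 1e-4

# rho is strictly decreasing on (1, 2] and bounded by 1/Gamma(u + 1).
us = [1 + k/1000 for k in range(1, 1001)]
assert all(rho(a) > rho(b) for a, b in zip(us, us[1:]))
assert all(rho(u) <= 1/math.gamma(u + 1) + 1e-12 for u in us)
```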
It is not hard to show that $\rho$ is strictly decreasing for $u > 1$ and that $\rho(u) \le 1/\Gamma(u+1)$. So for any $k > 1$, there is a unique $u_k > 1$ with $\rho(u_k)=\frac{1}{k}$. Buchstab and, independently, Davenport and Erd\H{o}s (developing ideas implicit in \cite{vinogradov27}) showed that if $\chi$ mod $p$ has order $k \ge 2$, then the least $\chi$-nonresidue is $O_{\varepsilon,k}(p^{1/2u_k+\varepsilon})$. If in their argument Burgess's method (which was not available at the time) is used in place of the P\'olya--Vinogradov inequality, then $1/2u_k$ may be replaced by $1/4u_k$ \cite{wy64}. We prove the following: \begin{thm}\label{thm:fixedprime} Let $\varepsilon >0$ and $k_0 \ge 2$. There are numbers $m_0(\varepsilon,k_0)$ and $\kappa = \kappa(\varepsilon,k_0) > 0$ for which the following holds: For all $m > m_0$ and each nontrivial character $\chi$ mod $m$ of order $k \ge k_0$, there are more than $m^{\kappa}$ prime $\chi$-nonresidues not exceeding $m^{\frac{1}{4u_{k_0}}+\varepsilon}$.\end{thm} \begin{rmk}\mbox{ } \begin{itemize} \item It follows readily from the definition that $\rho(u) = 1-\log{u}$ for $1 \le u \le 2$, and so $u_2 = e^{1/2} = 1.6487\ldots$ and $u_3 = e^{2/3} = 1.9477\ldots$. For $k > 3$, it does not seem that $u_k$ has a simple closed form expression. \item Theorem \ref{thm:main} is the special case $k_0=2$ of Theorem \ref{thm:fixedprime}. \end{itemize} \end{rmk} One might compare Theorem \ref{thm:main} for the quadratic character modulo a prime $p$ with a result of Banks--Garaev--Heath-Brown--Shparlinski \cite{BGHBS08}. They show that for each fixed $\varepsilon > 0$, and each $N \ge p^{1/4\sqrt{e}+\varepsilon}$, the proportion of quadratic nonresidues modulo $p$ in $[1,N]$ is $\gg_{\varepsilon} 1$ for all primes $p > p_0(\varepsilon)$. Our arguments use the ideas of Vinogradov and Davenport--Erd\H{o}s but take advantage of modern developments in sieve methods and the theory of smooth numbers. 
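For concreteness, the constants $u_k$ appearing above are easy to approximate numerically by integrating the delay equation for $\rho$ on a grid; the following Python sketch (step size and grid extent are our choices, not from the text) recovers $u_2 = e^{1/2}$ and $u_3 = e^{2/3}$ and evaluates $u_k$ for larger $k$, where no simple closed form is known:

```python
import math

M = 10_000                      # grid steps per unit interval; h = 1/M
h = 1.0 / M
U_MAX = 6
rho = [1.0] * (U_MAX * M + 1)   # rho(u) = 1 for 0 <= u <= 1
for i in range(M, U_MAX * M):
    # trapezoid step for rho'(u) = -rho(u-1)/u; the delayed values are
    # already tabulated, so the step is explicit
    d0 = -rho[i - M] / (i * h)
    d1 = -rho[i + 1 - M] / ((i + 1) * h)
    rho[i + 1] = rho[i] + h * (d0 + d1) / 2.0

def u_k(k):
    """Approximate the unique u > 1 with rho(u) = 1/k (rho decreases for u > 1)."""
    target = 1.0 / k
    i = next(j for j in range(M, U_MAX * M) if rho[j + 1] < target)
    # linear interpolation between adjacent grid points
    return (i + (rho[i] - target) / (rho[i] - rho[i + 1])) * h

print(u_k(2), math.exp(1 / 2))  # both ~1.6487
print(u_k(3), math.exp(2 / 3))  # both ~1.9477
print(u_k(10))                  # lies between 2 and 3
```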
A variant of the Burgess bounds developed by Norton also plays an important role. We note that an application of the sieve that is similar in spirit to ours appears in work of Bourgain and Lindenstrauss \cite[Theorem 5.1]{BL03}.\footnote{A special case of their result: \emph{Given $\varepsilon >0$, there is an $\alpha>0$ such that $\sum_{\substack{p^{\alpha} \le \ell \le p^{1/4+\varepsilon} \\ \leg{\ell}{p}=-1}}\frac{1}{\ell} > \frac{1}{2}-\varepsilon$, for all $p > p_0(\varepsilon)$.}} It is equally natural to ask for small prime character \emph{residues}, i.e., primes $\ell$ with $\chi(\ell)=1$. The most significant unconditional result in this direction is due to Linnik and A.\,I. Vinogradov \cite{VL66}. They showed that if $\chi$ is the quadratic character modulo a prime $p$, then the smallest prime $\ell$ with $\chi(\ell)=1$ satisfies $\ell \ll_{\varepsilon} p^{1/4+\varepsilon}$. More generally, Elliott \cite{elliott71} proved that when $\chi$ has order $k$, the least such $\ell$ is $O_{k,\varepsilon}(p^{\frac{k-1}{4}+\epsilon})$. As Elliott notes, this bound is only interesting for small values of $k$; otherwise, it is inferior to what follows from known forms of Linnik's theorem on primes in progressions. For extensions of the Linnik--Vinogradov method in a different direction, see \cite{pollack14B, pollack14}. Our final result is a partial analogue of Theorem \ref{thm:main} for prime residues of quadratic characters. Regrettably, the number of primes produced falls short of a fixed power of $m$. \begin{thm}\label{thm:smallresidue} Let $\varepsilon > 0$ and let $A >0$. There is an $m_0=m_0(\varepsilon,A)$ with the following property: If $m > m_0$, and $\chi$ is a quadratic character modulo $m$, then there are at least $(\log{m})^{A}$ primes $\ell \le m^{\frac{1}{4}+\varepsilon}$ with $\chi(\ell)=1$. \end{thm} Results of the sort proven here have direct consequences for prime splitting in cyclic extensions of $\mathbf{Q}$. 
For example, Theorem \ref{thm:main} (respectively Theorem \ref{thm:smallresidue}) implies that there are more than $|\Delta|^{\kappa}$ inert (respectively, more than $(\log|\Delta|)^{A}$ split) primes $p \le |\Delta|^{\frac{1}{4\sqrt{e}}+\varepsilon}$ (respectively, $p \le |\Delta|^{\frac{1}{4}+\varepsilon}$) in the quadratic field of discriminant $\Delta$, as soon as $|\Delta|$ is large enough in terms of $\varepsilon$ (and $A$). \section{Small prime nonresidues: Proofs of Theorems \ref{thm:main} and \ref{thm:fixedprime}} \subsection{Preparation} As might be expected, the Burgess bounds play the key role in our analysis. The following version is due to Norton (see \cite[Theorem 1.6]{norton98}). \begin{prop}\label{prop:norton} Let $\chi$ be a nontrivial character modulo $m$ of order dividing $k$. Let $r$ be a positive integer, and let $\epsilon > 0$. For all $x > 0$, \[ \sum_{n \le x} \chi(n) \ll_{\epsilon,r} R_k(m)^{1/r} x^{1-\frac{1}{r}} m^{\frac{r+1}{4r^2}+\epsilon}. \] Here \[ R_k(m) = \min\left\{M(m)^{3/4},Q(k)^{9/8}\right\}, \] where \[ M(m) = \prod_{p^e \parallel m,~e\ge 3} p^e \qquad\text{and}\qquad Q(k) = \prod_{p^e \parallel k,~e\ge 2} p^e. \] The factor of $R_k(m)^{1/r}$ can be omitted if $r \le 3$. \end{prop} Another crucial tool is a theorem of Tenenbaum concerning the distribution of smooth numbers satisfying a coprimality condition. For $x\ge y\ge 2$, let \[ \Psi_q(x,y) = \#\{n\le x: \gcd(n,q)=1, P^{+}(n) \le y\}. \] \begin{prop}\label{prop:FT} For positive integers $q$ and real numbers $x, y$ satisfying \[ P^{+}(q) \le y \le x \quad\text{and}\quad \omega(q) \le y^{1/\log(1+u)}, \] we have \[ \Psi_q(x,y) = \frac{\varphi(q)}{q} \Psi(x,y) \left(1+O\left(\frac{\log(1+u) \log(1+\omega(q))}{\log{y}}\right)\right). \] As before, $u$ denotes the ratio $\log{x}/\log{y}$.
\end{prop} \begin{proof} This is the main result of \cite{tenenbaum93} in the case $A=1$.\end{proof} \begin{remark} If $q'$ is the largest divisor of $q$ supported on the primes not exceeding $y$, then $\Psi_{q}(x,y) = \Psi_{q'}(x,y)$. So the assumption in Proposition \ref{prop:FT} that $P^{+}(q) \le y$ does not entail any loss of generality. \end{remark} Theorem \ref{thm:fixedprime} will be deduced from two variant results claiming weaker upper bounds. \begin{thm}\label{thm:fixedprime0} Let $\varepsilon >0$ and $k_0 \ge 2$. There are numbers $m_0(\varepsilon,k_0)$ and $\kappa = \kappa(\varepsilon,k_0) > 0$ for which the following holds: For all $m > m_0$ and each nontrivial character $\chi$ mod $m$ of order $k \ge k_0$, there are more than $m^{\kappa}$ prime $\chi$-nonresidues not exceeding $m^{\frac{1}{3u_{k_0}}+\varepsilon}$.\end{thm} \begin{thm}\label{thm:fixedprime1} Let $\varepsilon >0$ and $k_0 \ge 2$. There are numbers $m_0(\varepsilon,k_0)$ and $\kappa = \kappa(\varepsilon,k_0) > 0$ for which the following holds: For all $m > m_0$ and each nontrivial character $\chi$ mod $m$ of order $k \ge k_0$, there are more than $m^{\kappa}$ prime $\chi$-nonresidues not exceeding $R_k(m) m^{\frac{1}{4u_{k_0}}+\varepsilon}$. Here $R_k(m)$ is as defined in Proposition \ref{prop:norton}. \end{thm} The proof of Theorem \ref{thm:fixedprime1} is given in detail in the next section. We include only a brief remark about the proof of Theorem \ref{thm:fixedprime0}, which is almost entirely analogous (but slightly simpler). We then present the derivation of Theorem \ref{thm:fixedprime} from Theorems \ref{thm:fixedprime0} and \ref{thm:fixedprime1}. We remind the reader that Theorem \ref{thm:main} is the special case $k_0=2$ of Theorem \ref{thm:fixedprime}. \subsection{Proof of Theorem \ref{thm:fixedprime1}}\label{sec:proofs} We let $\chi$ be a nontrivial character modulo $m$ of order $k \ge k_0$, where $k_0 \ge 2$ is fixed. 
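As the quantity $R_k(m)$ from Proposition \ref{prop:norton} will enter the choice of parameters, it may help to see $M(m)$, $Q(k)$, and $R_k(m)$ computed on small inputs; a short Python sketch (the trial-division helper and function names are ours):

```python
def factorint(n):
    """Prime factorization by trial division (adequate for small n)."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def M(m):
    # product of the prime powers p^e exactly dividing m with e >= 3
    out = 1
    for p, e in factorint(m).items():
        if e >= 3:
            out *= p ** e
    return out

def Q(k):
    # product of the prime powers p^e exactly dividing k with e >= 2
    out = 1
    for p, e in factorint(k).items():
        if e >= 2:
            out *= p ** e
    return out

def R(k, m):
    return min(M(m) ** (3 / 4), Q(k) ** (9 / 8))

print(M(360), Q(12))   # 360 = 2^3 * 3^2 * 5 gives M(360) = 8; Q(12) = 4
print(R(2, 360))       # Q(2) = 1, so R = 1 for any quadratic character
```

In particular $R_k(m)=1$ whenever $m$ is cubefree, since then $M(m)=1$.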
With $\delta \in (0,\frac{1}{4})$, we set \[ x= R_k(m) \cdot m^{\frac14 + \delta}, \quad y = x^{\frac{1}{u_{k_0}} + \delta}. \] To prove Theorem \ref{thm:fixedprime1}, it suffices to show that for all large $m$ (depending only on $k_0$ and $\delta$), there are at least $x^{\kappa}$ prime $\chi$-nonresidues in $[1,y]$ for a certain constant $\kappa = \kappa(k_0,\delta) > 0$. Let $q$ be the product of the prime $\chi$-nonresidues in $[1,y]$. Note that $\gcd(q,m)=1$, from the definition of a $\chi$-nonresidue. Our strategy is to estimate \begin{equation}\label{eq:different} \sum_{\substack{n \le x \\ \gcd(n,mq)=1}} (1+\chi(n) + \chi^2(n) + \dots + \chi^{k-1}(n)) \end{equation} in two different ways. We first derive a lower bound on \eqref{eq:different}, under the assumption that there are not so many prime $\chi$-nonresidues in $[1,y]$. \begin{lem}\label{lem:lower} There are constants $\eta = \eta(\delta,k_0) > 0$, $\kappa =\kappa(\delta,k_0) > 0$, and $m_0 = m_0(\delta,k_0)$ with the following property: If $m > m_0$ and $\omega(q) \le x^{\kappa}$, then \[ \sum_{\substack{n \le x \\ \gcd(n,mq)=1}} (1+\chi(n) + \dots +\chi(n)^{k-1}) \ge \left(1+\frac{2k}{3}\eta\right) \frac{\varphi(mq)}{mq} x. \] \end{lem} \begin{proof} Observe that \[ \sum_{\substack{n \le x \\ \gcd(n,mq)=1}} (1+\chi(n) + \dots + \chi(n)^{k-1}) = k \sum_{\substack{n \le x \\ \gcd(n,q)=1,~\chi(n)=1}} 1 \ge k\sum_{\substack{n \le x\\ \gcd(n,mq)=1 \\ p \mid n \Rightarrow p \le y}} 1 = k \cdot \Psi_{mq}(x,y).\] We estimate $\Psi_{mq}(x,y)$ using Proposition \ref{prop:FT} and the succeeding remark. We have $u \asymp_{k_0} 1$, or equivalently, $\log y\asymp_{k_0} \log{x}$. 
So if $\kappa$ is sufficiently small in terms of $k_0$, and $\omega(q) \le x^{\kappa}$, Proposition \ref{prop:FT} gives \begin{align*} \Psi_{mq}(x,y) &= \bigg(\Psi(x,y)\prod_{\substack{p \mid mq \\ p \le y}}\left(1-\frac{1}{p}\right)\bigg) \left(1+O_{k_0}\left(\frac{\log(1+x^{\kappa})}{\log{x}}\right)\right)\\ &\ge \Psi(x,y) \frac{\varphi(mq)}{mq} \left(1+O_{k_0}\left(\frac{\log(1+x^{\kappa})}{\log{x}}\right)\right).\end{align*} Now the result of Hildebrand quoted in the introduction (or a much more elementary theorem) shows that $\Psi(x,y) = \Psi(x, x^{\frac{1}{u_{k_0}}+\delta}) \ge (\frac{1}{k_0}+\eta) x$ for a certain $\eta= \eta(k_0,\delta) > 0$ and all large $x$. So if $\kappa$ is fixed sufficiently small, depending on $k_0$ and $\delta$, and $x$ is sufficiently large, \[ \Psi_{mq}(x,y) > \left(\frac{1}{k_0} + \frac{2}{3}\eta\right) \frac{\varphi(mq)}{mq} x. \] Hence, \[ \sum_{\substack{n \le x \\ \gcd(n,mq)=1}} (1+\chi(n) + \dots + \chi(n)^{k-1}) \ge \left(\frac{k}{k_0} + \frac{2k}{3}\eta\right)\frac{\varphi(mq)}{mq} x \ge \left(1+\frac{2k}{3}\eta\right) \frac{\varphi(mq)}{mq} x. \qedhere\] \end{proof} We turn next to an upper bound. \begin{lem}\label{lem:upper} Let $\beta > 0$. There are numbers $\eta' = \eta'(\delta) > 0$, $\kappa' = \kappa'(\delta,\beta)>0$ and $m_0 = m_0(\delta,\beta)$ with the following property: If $m > m_0$ and $\omega(q) \le x^{\kappa'}$, then \[ \sum_{\substack{n \le x \\ \gcd(n,mq)=1}} (1+\chi(n) + \chi(n)^2 + \dots + \chi(n)^{k-1}) \le (1+\beta) \frac{\varphi(mq)}{mq}x +O_{\delta}(k x^{1-\eta'}). \] \end{lem} \begin{proof} We let $\mathcal{A} = \{n \le x: \gcd(n,m)=1,~\chi(n)=1\}$ and observe that \begin{equation}\label{eq:fundidentity} \sum_{\substack{n \le x \\ \gcd(n,mq)=1}} (1+\chi(n) + \chi(n)^2 + \dots + \chi(n)^{k-1}) = k \sum_{\substack{n \in \mathcal{A} \\ \gcd(n,q)=1}} 1. \end{equation} We apply the fundamental lemma of the sieve to estimate the right-hand sum.
(The precise form of the fundamental lemma is not so important, but we have in mind \cite[Theorem 4.1, p. 29]{diamond08}.) Let $d \in [1,x]$ be a squarefree integer dividing $q$. Then \begin{align*} \sum_{\substack{n \in \mathcal{A} \\ d \mid n}} 1 &= \frac{1}{k} \sum_{\substack{n \le x \\\gcd(n,m)=1,~d\mid n}} (1+\chi(n) + \dots + \chi(n)^{k-1}). \end{align*} For each $j=0,1,2,\dots, k-1$, \[ \sum_{\substack{n \le x \\ \gcd(n,m)=1,~d\mid n}}\chi^j(n) = \chi^j(d) \sum_{\substack{e \le x/d \\ \gcd(e,m)=1}} \chi^j(e). \] When $j=0$, the right-hand side is $\frac{x}{d}\frac{\varphi(m)}{m} +O_{\epsilon}(m^{\epsilon})$, by a straightforward inclusion-exclusion. For $j \in \{1,2,\dots, k-1\}$, Proposition \ref{prop:norton} gives \begin{align*} \sum_{\substack{e \le x/d \\ \gcd(e,m)=1}} \chi^j(e) = \sum_{e \le x/d} \chi^j(e) \sum_{\substack{f \mid e \\ f\mid m}} \mu(f) &= \sum_{f \mid m} \mu(f) \chi^j(f) \sum_{g \le x/df} \chi^j(g) \\ &\ll_{\epsilon,r} R_k(m)^{1/r} x^{1-\frac1r} d^{-1+\frac{1}{r}} m^{\frac{r+1}{4r^2}+\epsilon} \sum_{f \mid m} f^{-1+\frac{1}{r}} \\&\ll_{\epsilon} R_k(m)^{1/r} x^{1-\frac1r} d^{-1+\frac{1}{r}} m^{\frac{r+1}{4r^2}+2\epsilon}; \end{align*} here $r \ge 2$ and $\epsilon > 0$ are parameters to be chosen. (We used in the last step that the sum on $f$ has only $O_{\epsilon}(m^{\epsilon})$ terms, each of which is $O(1)$.) Assembling the preceding estimates, \[ \sum_{\substack{n \in \mathcal{A} \\ d \mid n}} 1 = \frac{x}{dk}\frac{\varphi(m)}{m} + r(d), \quad\text{where}\quad r(d) \ll_{\epsilon,r} R_k(m)^{1/r} x^{1-\frac1r} d^{-1+\frac{1}{r}} m^{\frac{r+1}{4r^2}+2\epsilon}. 
\] By the fundamental lemma, for any choices of real parameters $z\ge 2$ and $v\ge 1$ with $z^{2v} < x$, \begin{multline*} \sum_{\substack{n \in \mathcal{A}\\\gcd(n,q)=1}} 1 \le \sum_{\substack{n \in \mathcal{A} \\ p \mid \gcd(n,q) \Rightarrow p \ge z}} 1 = \Bigg(\frac{x}{k}\frac{\varphi(m)}{m} \prod_{\substack{p \mid q \\ p< z}}\left(1-\frac{1}{p}\right)\Bigg)\left(1 + O(v^{-v})\right) \\ + O_{\epsilon,r}\Bigg(R_k(m)^{1/r} x^{1-\frac1r} m^{\frac{r+1}{4r^2}+2\epsilon} \sum_{\substack{d < z^{2v} \\ d\mid q}} \mu^2(d) 3^{\omega(d)} d^{-1+\frac1r}\Bigg).\end{multline*} We now make a choice of parameters. Let $r = \lceil \frac{1}{2\delta}\rceil$ (so that $\delta \ge \frac{1}{2r}$). Since $x=R_k(m)\cdot m^{1/4+\delta}$, we have \[ R_k(m)^{1/r} x^{1-\frac1r} m^{\frac{r+1}{4r^2}} = x \cdot m^{-\frac{1}{4r} -\delta/r} m^{\frac{r+1}{4r^2}} = x \cdot m^{\frac{1}{r}(\frac{1}{4r}-\delta)} \le x\cdot m^{-\frac{\delta}{4r^2}}. \] We take $\epsilon = \frac{\delta}{16r^2}$, so that \[ m^{2\epsilon} = m^{\frac{\delta}{8r^2}}. \] Since $r\ge 2$ and $3^{\omega(d)} \ll d^{1/2}$, each term in the sum on $d$ is $O(1)$. Putting it all together, the $O$-term above is \[ \ll_{\delta} x \cdot m^{-\frac{\delta}{4r^2}} \cdot m^{\frac{\delta}{8r^2}} \cdot z^{2v}. \] Since $x = R_k(m) \cdot m^{1/4+\delta} \le m^{3/4} \cdot m^{1/4+\delta} < m^2$, this upper bound is $\ll_{\delta} x^{1-\frac{\delta}{16r^2}} z^{2v}$. Taking $z = x^{\frac{\delta}{64r^2 v}}$ gives a final upper bound on the $O$-term of \[ \ll_{\delta} x^{1-\eta'},\quad\text{where}\quad \eta' = \frac{\delta}{32r^2}. \] Turning attention to the main term, we fix $v$ large enough that the factor $1+O(v^{-v})$ is smaller than $1+\frac{1}{2}\beta$. 
Then our main term above does not exceed \begin{align*} \frac{x}{k} \frac{\varphi(mq)}{mq} \left(1+\frac{1}{2}\beta\right) \prod_{\substack{p \mid q\\ p \ge z}} \left(1-\frac{1}{p}\right)^{-1} &\le \frac{x}{k} \frac{\varphi(mq)}{mq} \left(1+\frac{1}{2}\beta\right) \exp\bigg(2\sum_{\substack{p \mid q \\ p \ge z}}\frac{1}{p}\bigg) \\&\le \frac{x}{k} \frac{\varphi(mq)}{mq} \left(1+\frac{1}{2}\beta\right) \exp(2\omega(q) z^{-1}). \end{align*} Take $\kappa' = \frac{\delta}{128r^2 v}$. Under the assumption that $\omega(q) \le x^{\kappa'}$, we have $2 \omega(q) z^{-1} \le 2x^{-\delta/128r^2 v}$, and $\exp(2\omega(q) z^{-1}) = 1 + O(x^{-\delta/128r^2 v})$. So once $x$ (or equivalently, $m$) is large enough, our main term is smaller than $\frac{x}{k}\frac{\varphi(mq)}{mq}(1+\beta)$. So we have shown that for large $m$, \[ \sum_{\substack{n \in \mathcal{A} \\ \gcd(n,q)=1}} 1 \le \frac{x}{k}\frac{\varphi(mq)}{mq}(1+\beta) + O_{\delta}(x^{1-\eta'}). \] Recalling \eqref{eq:fundidentity} finishes the proof. \end{proof} \begin{proof}[Completion of the proof of Theorem \ref{thm:fixedprime1}] We keep the notation from earlier in this section. Let $\eta$, $\kappa$ be as specified in Lemma \ref{lem:lower}. With $\beta = \eta/2$, choose $\eta'$ and $\kappa'$ as in Lemma \ref{lem:upper}. If $m$ is large and we assume that \[ \omega(q) \le x^{\kappa''}, \quad\text{where}\quad \kappa'' = \min\{\kappa,\kappa'\}, \] then these lemmas imply that \[ \bigg(1+\frac{2k}{3}\eta\bigg) \frac{\varphi(mq)}{mq} x \le \bigg(1+\frac{1}{2}\eta\bigg) \frac{\varphi(mq)}{mq} x + O_{\delta}(kx^{1-\eta'}). \] Rearranging, \[ k \eta \frac{\varphi(mq)}{mq} x \ll \frac{4k-3}{6} \eta \cdot \frac{\varphi(mq)}{mq}x \ll_{\delta} k x^{1-\eta'}, \] and so \[ \frac{mq}{\varphi(mq)} \gg_{k_0,\delta} x^{\eta'}. \] Noting that $m < x^{4}$ and $q \le y^{\omega(q)}\le x^{\omega(q)}$, we see that for large $x$, \[ \frac{mq}{\varphi(mq)} \ll \log\log(mq+2) \ll \log\log{x} + \log(\omega(q)+2) \ll \log{x}.
\] Comparing with the above lower bound, we see that $x$, and hence $m$, is bounded. Turning it around, for $m$ large enough, there are at least $x^{\kappa''}$ prime $\chi$-nonresidues in $[1,y]$. \end{proof} \begin{proof}[Sketch of the proof of Theorem \ref{thm:fixedprime0}] The proof of Theorem \ref{thm:fixedprime0} is quite similar, except that now we take $x = m^{1/3+\delta}$. With this choice of $x$, we can apply the Burgess bounds with $r=3$, which allows us to omit the factor of $R_k(m)$ in the resulting estimates.\end{proof} \subsection{Deduction of Theorem \ref{thm:fixedprime}} Let $\varepsilon> 0$ and $k_0 \ge 2$ be fixed. Let $\chi$ be a nonprincipal character mod $m$ of order $k$, where $k \ge k_0$. We would like to show that as long as $m$ is large enough, there are at least $m^{\kappa}$ prime $\chi$-nonresidues not exceeding $m^{\frac{1}{4u_{k_0}}+\varepsilon}$, for a certain $\kappa = \kappa(\varepsilon,k_0) > 0$. Let $k_1$ be the smallest positive integer with $3u_{k_1} > 4u_{k_0}$. If $k \ge k_1$, apply Theorem \ref{thm:fixedprime0}: We find that for large $m$, there are at least $m^{\kappa_0}$ prime $\chi$-nonresidues \[ \le m^{\frac{1}{3u_{k_1}} + \varepsilon} \le m^{\frac{1}{4u_{k_0}} + \varepsilon}, \] where $\kappa_0 = \kappa(\varepsilon,k_1)$ in the notation of Theorem \ref{thm:fixedprime0}. Suppose instead that $k_0 \le k< k_1$. Then $R_k(m)$ is bounded in terms of $k_0$. Theorem \ref{thm:fixedprime1} thus shows that for large $m$, there are at least $m^{\kappa_1}$ prime $\chi$-nonresidues \[ \le R_k(m) m^{\frac{1}{4u_{k_0}} + \varepsilon/2} \le m^{\frac{1}{4u_{k_0}}+\varepsilon}, \] where $\kappa_1 = \kappa(\varepsilon/2,k_0)$ in the notation of Theorem \ref{thm:fixedprime1}. Theorem \ref{thm:fixedprime} follows with $\kappa = \min\{\kappa_0,\kappa_1\}$. \begin{remark} By a minor modification of our proof, one can establish the following more general result. Theorem \ref{thm:fixedprime} corresponds to the case $H = \ker\chi$.
\begin{thm} Let $\varepsilon >0$ and $k_0 \ge 2$. There are numbers $m_0(\varepsilon,k_0)$ and $\kappa = \kappa(\varepsilon,k_0) > 0$ for which the following holds: For all $m > m_0$ and every proper subgroup $H$ of $G=(\mathbf{Z}/m\mathbf{Z})^{\times}$ of index $k \ge k_0$, there are more than $m^{\kappa}$ primes $\ell$ not exceeding $m^{\frac{1}{4u_{k_0}}+\varepsilon}$ with $\ell \nmid m$ and $\ell \bmod{m}\notin H$.\end{thm} \noindent This strengthens \cite[Theorem 1.20]{norton98}, where the bound $O_{k_0,\epsilon}(m^{\frac{1}{4u_{k_0}}+\varepsilon})$ is established for the first such prime $\ell$. The main idea in the proof of the generalization is to replace $1+\chi(n)+\dots + \chi(n)^{k-1}$ with $\sum_{\chi\in \widehat{G/H}} \chi(n)$, where $\widehat{G/H}$ denotes the group of characters $\chi$ mod $m$ with $\ker \chi \supset H$. We leave the remaining details to the reader. \end{remark} \section{Small prime residues of quadratic characters: Proof of Theorem \ref{thm:smallresidue}} The next proposition is a variant of \cite[Theorem 2]{VL66}. Given a character $\chi$, we let $r_{\chi}(n) = \sum_{d \mid n} \chi(d)$. Since $\chi$ will be clear from context, we will suppress the subscript. \begin{prop}\label{prop:LV} For each $\epsilon > 0$, there is a constant $\eta = \eta(\epsilon) >0$ for which the following holds: If $\chi$ is a quadratic character modulo $m$ and $x \ge m^{1/4+\epsilon}$, then \[ \sum_{n \le x} r(n) = L(1,\chi) x + O_{\epsilon}(x^{1-\eta}). \] \end{prop} \begin{proof} With $\upsilon = \frac{1/4+\epsilon/2}{1/4+\epsilon}$, put $y = x^{\upsilon}$, so that $y \ge m^{\frac{1}{4}+\frac{1}{2}\epsilon}$. Put $z=x/y$. By Dirichlet's hyperbola method, \begin{equation}\label{eq:hyperbola} \sum_{n \le x} r(n) = \sum_{d \le y}\chi(d) \sum_{e \le x/d} 1 + \sum_{e \le z} \sum_{d \le x/e} \chi(d) - \sum_{d\le y}\chi(d)\sum_{e\le z}1. 
\end{equation} By Proposition \ref{prop:norton} (with $k=2$, so that $R_k(m)^{1/r}=1$), there is an $\eta_0 =\eta_0(\epsilon) > 0$ with $\sum_{d \le T} \chi(d) \ll_{\epsilon} T^{1-\eta_0} \quad\text{for all}\quad T \ge y$. Thus, the second double sum on the right of \eqref{eq:hyperbola} is $\ll_{\epsilon} x^{1-\eta_0} \sum_{e \le z} e^{\eta_0-1} \ll_{\epsilon} x (z/x)^{\eta_0} = x y^{-\eta_0}$. Similarly, the third double sum is $\ll_{\epsilon} z y^{1-\eta_0} = x y^{-\eta_0}$. Finally, \[ \sum_{d \le y}\chi(d) \sum_{e \le x/d} 1=\sum_{d\le y} \chi(d) \left(\frac{x}{d}+O(1)\right) = xL(1,\chi) - x\sum_{d > y} \frac{\chi(d)}{d} + O(y) = x L(1,\chi) + O_{\epsilon}(xy^{-\eta_0}) + O(y). \] (Here the sum on $d>y$ has been handled by partial summation.) Collecting our estimates and keeping in mind that $y=x^\upsilon$, we obtain the proposition with $\eta$ defined by $1-\eta = \max\{\upsilon,1-\upsilon\eta_0\}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:smallresidue}] Let $\varepsilon \in (0, \frac14)$ and let $\chi$ be a quadratic character modulo $m$. Let \[ x = m^{\frac{1}{4}+\varepsilon}, \] and let $q$ be the product of the primes $\ell \le x$ with $\chi(\ell)=1$. We suppose that $\omega(q) \le (\log{m})^{A}$, and we show this implies that $m$ is bounded by a constant depending on $\varepsilon$ and $A$. Throughout this proof, we suppress any dependence on $\varepsilon$ and $A$ in our $O$-notation. By Proposition \ref{prop:LV}, \begin{equation} \sum_{n \le x} r(n) = L(1,\chi) \cdot x + O(x^{1-\eta}). \label{eq:ldub}\end{equation} We can estimate the sum in a second way. Observe that \begin{equation}\label{eq:rnexpression} r(n) = \prod_{\ell^e \parallel n} \left(1+\chi(\ell) + \dots + \chi(\ell^e)\right) \ge 0. \end{equation} Hence, if the subset $\mathcal{S}$ of $[1,x]$ is chosen to contain the support of $r(n)$ on $[1,x]$, then \[ 0 \le \sum_{n \le x} r(n) \le \#\mathcal{S} \cdot \left(\max_{n \in \mathcal{S}} r(n)\right).
\] Examining the expression in \eqref{eq:rnexpression} for $r(n)$, we see $\mathcal{S}$ can be chosen as the set of $n\le x$ where every prime that appears to the first power in the factorization of $n$ divides $mq$. For each $n \in \mathcal{S}$, we can write $n=n_1 n_2$, where $n_1$ is a squarefree divisor of $mq$ and $n_2$ is squarefull. The number of elements of $\mathcal{S}$ with $n_2 > x^{1/2}$ is $O(x^{3/4})$. For the remaining elements of $\mathcal{S}$, we have $n_1 \le x/n_2$ and $n_1$ is a squarefree product of primes dividing $mq$. There is a bijection \[ \iota\colon \{\text{squarefree divisors of $mq$}\} \to \{\text{squarefrees composed of the first $\omega(mq)$ primes}\} \] with $\iota(r) \le r$ for all $r$. Hence, given $n_2$, the number of choices for $n_1$ is at most the number of integers in $[1,x/n_2]$ supported on the product of the first $\omega(mq)$ primes. By our assumption on $\omega(q)$, those primes all belong to the interval $[1, (\log{x})^{A+1}]$, once $x$ is large. Hence, given $n_2$, the number of possible values of $n_1$ is at most \[ \Psi(x/n_2, (\log{x})^{A+1}). \] For fixed $\theta \ge 1$, a classical theorem of de Bruijn \cite{dB66} asserts that $\Psi(X,(\log{X})^{\theta}) = X^{1-\frac{1}{\theta}+o(1)}$, as $X\to\infty$. Since $x/n_2 \ge x^{1/2}$, we deduce that \[ \Psi(x/n_2, (\log{x})^{A+1}) \le (x/n_2)^{1-\frac{1}{A+2}} \] if $x$ is large. Summing on squarefull $n_2 \le x^{1/2}$, we see that the number of elements of $\mathcal{S}$ arising in this way is $O(x^{1-\frac{1}{A+2}})$. Hence, \[ \#\mathcal{S} \ll x^{3/4} + x^{1-\frac{1}{A+2}} \ll x^{1-\eta'}, \quad\text{where}\quad \eta'=\min\left\{\frac14,\frac{1}{A+2}\right\}. \] Since $r(n) \le \tau(n) \ll x^{\eta'/2}$ for $n \le x$, \begin{equation}\label{eq:udub} \sum_{n\le x} r(n) \ll \#\mathcal{S}\cdot x^{\eta'/2} \ll x^{1-\eta'/2}. \end{equation} Comparing \eqref{eq:ldub} and \eqref{eq:udub} gives \[ L(1,\chi) \ll x^{-\min\{\eta'/2,\eta\}}.
\] But for large $x$, this contradicts Siegel's theorem \cite[Theorem 11.14, p. 372]{MV07}. \end{proof} \begin{remark} Any improvement on Siegel's lower bound for $L(1,\chi)$ would boost the number of $\ell$ produced in Theorem \ref{thm:smallresidue}. Substantial improvements of this kind would have other closely related implications. For example, a simple modification of an argument of Wolke \cite{wolke69} shows that for any quadratic character $\chi$ mod $m$, \[ \sum_{\substack{\ell \le m \\ \chi(\ell)= 1}}\frac{1}{\ell} \ge \frac{1}{2} \log\left(\frac{\varphi(m)}{m} L(1,\chi) \log{m}\right) + O(1), \] where the $O(1)$ constant is absolute. {(Here is the short proof: By Proposition \ref{prop:LV}, $\frac{1}{m}\sum_{n \le m}r(n) \gg L(1,\chi)$. On the other hand, \cite[Theorem 5, p. 308]{tenenbaum95} yields $\frac{1}{m}\sum_{n \le m} r(n) \ll \frac{1}{\log{m}} \sum_{n \le m} \frac{r(n)}{n} \ll \frac{1}{\log{m}} \cdot \frac{m}{\varphi(m)} \cdot \exp\left(2 \sum_{\ell \le m,~\chi(\ell)=1}\frac{1}{\ell}\right)$.)} \end{remark} \section*{Acknowledgments} This work was motivated in part by observations made on \texttt{mathoverflow} by ``GH from MO'' \cite{52393}. The author is also grateful to ``Lucia'' for pointing out there the work of Bourgain--Lindenstrauss. He thanks Enrique Trevi\~no for useful feedback on an early draft. This research was supported by NSF award DMS-1402268. \end{document}
\begin{document} \title{Measuring the quantum state of a single system with minimum state disturbance} \author{Maximilian Schlosshauer} \affiliation{Department of Physics, University of Portland, 5000 North Willamette Boulevard, Portland, Oregon 97203, USA} \begin{abstract} Conventionally, unknown quantum states are characterized using quantum-state tomography based on strong or weak measurements carried out on an ensemble of identically prepared systems. By contrast, the use of protective measurements offers the possibility of determining quantum states from a series of weak, long measurements performed on a single system. Because the fidelity of a protectively measured quantum state is determined by the amount of state disturbance incurred during each protective measurement, it is crucial that the initial quantum state of the system is disturbed as little as possible. Here we show how to systematically minimize the state disturbance in the course of a protective measurement, thus enabling the maximization of the fidelity of the quantum-state measurement. Our approach is based on a careful tuning of the time dependence of the measurement interaction and is shown to be dramatically more effective in reducing the state disturbance than the previously considered strategy of weakening the measurement strength and increasing the measurement time. We describe a method for designing the measurement interaction such that the state disturbance exhibits polynomial decay to arbitrary order in the inverse measurement time $1/T$. We also show how one can achieve even faster, subexponential decay, and we find that it represents the smallest possible state disturbance in a protective measurement. In this way, our results show how to optimally measure the state of a single quantum system using protective measurements. 
\\[-.1cm] \noindent Journal reference: \emph{Phys.\ Rev.\ A\ }\textbf{93}, 012115 (2016), DOI: \href{http://dx.doi.org/10.1103/PhysRevA.93.012115}{10.1103/PhysRevA.93.012115} \end{abstract} \pacs{03.65.Ta, 03.65.Wj} \maketitle \section{Introduction} The characterization of unknown quantum states is an important experimental task and of great significance to quantum information processing \cite{Vogel:1989:uu,Dunn:1995:oo,Smithey:1993:lm,Breitenbach:az:1997,White:1999:az,James:2001:uu,Haffner:2005:sc,Leibfried:2005:yy,Altepeter:2005:ll,Lvovsky:2009:zz}. In conventional quantum-state tomography \cite{Vogel:1989:uu,Paris:2004:uu,Altepeter:2005:ll}, the quantum state is reconstructed from expectation values obtained from strong measurements of different observables, performed on an ensemble of identically prepared systems. An alternative approach to quantum-state measurement \cite{Lundeen:2011:ii,Lundeen:2012:rr,Fischbach:2012:za,Bamber:2014:ee,Dressel:2011:au} uses a combination of weak and strong measurements on an ensemble of identically prepared systems, together with the concept of weak values \cite{Aharonov:1988:mz,Duck:1989:uu}. However, since both approaches require an ensemble of identically prepared systems, they can only be said to reconstruct the quantum state in the statistical sense of measurement averages over an ensemble of systems presumed to have been prepared in the same quantum state. This raises the question of whether it might be possible to determine the quantum state of an individual system from measurements carried out not on an ensemble but on this single system only. Such single-system state determination would not only offer a conceptually transparent and rigorous version of quantum-state measurement, but also avoid time-consuming postprocessing and error propagation associated with quantum-state tomography \cite{Altepeter:2005:ll,Maccone:2014:uu,Dressel:2011:au}. 
If one demands perfect fidelity of the state reconstruction and possesses no prior knowledge of the initial quantum-state subspace, then it is well known that single-system state determination is impossible \cite{Wootters:1982:ww,Ariano:1996:om}. However, if one weakens these conditions, then it has been shown that one can, in principle, measure the quantum state of a single system by using the protective-measurement protocol \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Alter:1996:oo,Dass:1999:az,Vaidman:2009:po,Gao:2014:cu}. Protective measurement allows for a set of expectation values to be obtained from weak measurements performed on the same single system, provided the system is initially in a (potentially unknown) nondegenerate eigenstate of its (potentially unknown) Hamiltonian. A defining feature of a protective measurement is that the disturbance of the system's quantum state during the measurement can be made arbitrarily small by weakening the measurement interaction and increasing the measurement time \cite{Aharonov:1993:jm,Dass:1999:az,Vaidman:2009:po}. Thus, a series of expectation values can be measured on the same system while the system remains in its initial state with probability arbitrarily close to unity. In this sense, one can measure the quantum state of a single system with a fidelity arbitrarily close to unity \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po,Auletta:2014:yy,Diosi:2014:yy,Aharonov:2014:yy}, providing an important complementary approach to conventional quantum-state tomography based on ensembles.
Recently, the possibility of using protective measurement for quantum-state determination has attracted renewed interest \cite{Gao:2014:cu}, and protective measurement has been shown to have many related applications, such as the determination of stationary states \cite{Diosi:2014:yy}, investigation of particle trajectories \cite{Aharonov:1996:ii,Aharonov:1999:uu}, translation of ergodicity into the quantum realm \cite{Aharonov:2014:yy}, studies of fundamental issues of quantum measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Alter:1997:oo,Dass:1999:az,Gao:2014:cu}, and the complete description of two-state thermal ensembles \cite{Aharonov:2014:yy}. The fact that each protective measurement has a nonzero probability of disturbing the quantum state of the measured system leads to error propagation and reduced fidelity over the course of the multiple measurements required to determine the set of expectation values \cite{Aharonov:1993:qa,Alter:1996:oo,Dass:1999:az,Schlosshauer:2014:tp,Schlosshauer:2014:pm,Schlosshauer:2015:pm}. Therefore, a chief goal when using protective measurement to characterize quantum states of single systems is the minimization of the state disturbance. However, the conventional approach of making the measurement interaction arbitrarily weak while allowing it to last for an arbitrarily long time \cite{Aharonov:1993:jm,Dass:1999:az,Vaidman:2009:po} is not only unlikely to be practical in an experimental setting but is also, as we will show in this paper, comparably ineffective. Here we will describe a dramatically more effective approach that allows one to minimize the state disturbance while keeping the strength and duration of the measurement interaction constant. In this way, we demonstrate how to optimally implement the measurement of an unknown quantum state of a single system using protective measurement. 
Our approach consists of a systematic tuning of the time dependence of the measurement interaction, such that the state disturbance becomes dramatically reduced even for modestly weak and relatively short interactions. While early expositions of protective measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp} had hinted at the role of the time dependence of the measurement interaction, this role had not been explicitly explored and was instead relegated to a reference to the quantum adiabatic theorem \cite{Born:1928:yf}, which, as we will see in this paper, provides a condition that is neither necessary nor sufficient for minimizing the state disturbance in a protective measurement. Issues of time dependence of the protective-measurement interaction were first considered explicitly in Ref.~\cite{Dass:1999:az}, which estimated the effect of the turn-on and turnoff of the measurement interaction on the adiabaticity of the interaction. Recently, the case of finite measurement times in a protective measurement and its influence on the reliability of the measurement were studied \cite{Auletta:2014:yy,Schlosshauer:2014:tp}, and a framework for the perturbative treatment of time-dependent measurement interactions in a protective measurement has been developed and applied to specific examples \cite{Schlosshauer:2014:pm,Schlosshauer:2015:pm}. None of these existing studies, however, have shown how to systematically minimize the state disturbance in a protective measurement for the physically and experimentally relevant case of finite measurement times and interaction strengths, such that the reliability of the protective measurement can be maximized. Here we present a rigorous and comprehensive solution to this problem. Our results demonstrate how one can optimally measure the quantum state of an individual quantum system using protective measurements. 
In any future experimental implementation of protective quantum-state measurement, this will enable one to optimize the measurement interaction to produce a high fidelity of the quantum-state measurement. While our analysis is motivated by the goal of optimizing protective measurements, it also provides insights into the issue of state disturbance in any quantum measurement. This paper is organized as follows. After a brief review of the basics of protective measurements involving time-dependent measurement interactions (Sec.~\ref{sec:prot-meas}), we first use a Fourier-like series approach to construct measurement interactions that achieve a state disturbance that decreases as $1/T^N$, where $T$ is the measurement time and $N$ can be made arbitrarily large by modifying the functional form of the time dependence of the measurement interaction using a systematic procedure (Sec.~\ref{sec:seri-appr-minim}). We also make precise the relationship between the smoothness of the measurement interaction and the dependence of the state disturbance on $T$. We then show that the measurement interaction can be further optimized, leading to an even faster, subexponential decay of the state disturbance with $T$, and we show that this constitutes the optimal choice (Sec.~\ref{sec:minim-state-dist}). These results are established by calculating the state disturbance from the perturbative transition amplitude to first order in the interaction strength. To justify this approach, we prove that this amplitude accurately represents the exact transition amplitude to leading order in $1/T$ (Sec.~\ref{sec:suff-first-order}). \section{\label{sec:prot-meas}Protective measurement} We begin by briefly reviewing protective measurements and their treatment with time-dependent perturbation theory. 
In a protective measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po,Gao:2014:cu}, the interaction between system $S$ and apparatus $A$ is treated quantum mechanically and described by the interaction Hamiltonian $\op{H}_\text{int}(t) = g(t)\op{O} \otimes \op{P}$, where $\op{O}$ is an arbitrary observable of $S$, $\op{P}$ generates the shift of the pointer of $A$, and the coupling function $g(t)$ describes the time dependence of the interaction strength during the measurement interval $0 \le t \le T$, with $g(t)=0$ for $t <0$ and $t >T$. The function $g(t)$ is normalized, $\int_{0}^{T} \text{d} t\, g(t) =1$, which introduces an inverse relationship between the duration $T$ and the average strength of the interaction, so that the pointer shift depends neither on these two parameters nor on the functional form of $g(t)$. The spectrum $\{ E_n \}$ of $\op{H}_S$ is assumed to be nondegenerate and $S$ is assumed to be in an eigenstate $\ket{n}$ of $\op{H}_S$ at $t=0$. One can then show \cite{Aharonov:1993:qa,Aharonov:1993:jm,Dass:1999:az,Vaidman:2009:po,Gao:2014:cu} that for $T \rightarrow \infty$ the system remains in the state $\ket{n}$, while the apparatus pointer shifts by an amount proportional to $\bra{n}\op{O}\ket{n}$, thus providing partial information about $\ket{n}$. However, in the realistic case of finite $T$ and a corresponding non-infinitesimal average interaction strength, the system becomes entangled with the apparatus, disturbing the initial state \cite{Auletta:2014:yy,Schlosshauer:2014:tp,Schlosshauer:2014:pm,Schlosshauer:2015:pm}. To quantify this state disturbance, we calculate the probability amplitude $A_m(T)$ for finding the system in an orthogonal state $\ket{m}\not=\ket{n}$ at the conclusion of the measurement. We write $A_m(T)$ as a perturbative series, $A_m(T) = \sum_{\ell=1}^\infty A_m^{(\ell)}(T)$. 
Here $A_m^{(1)}(T)$ is the transition amplitude to first order in the interaction strength and the amplitudes $A_m^{(\ell)}(T)$ for $\ell \ge 2$ are the $\ell$th-order corrections to $A_m^{(1)}(T)$, where \cite{Sakurai:1994:om,Schlosshauer:2014:pm} \begin{align}\label{eq:g8fbvsv1} A^{(\ell)}_{m}(T) &= \left(-\frac{\text{i}}{\hbar}\right)^\ell \sum_{k_1,\hdots,k_{\ell-1}} O_{mk_1}O_{k_1k_2}\cdots O_{k_{\ell-1}n}\notag \\ & \quad \times\int_{0}^{T} \text{d} t' \, \text{e}^{\text{i} \omega_{mk_1} t'} g(t') \cdots \notag \\ & \quad \times\int_{0}^{t^{(\ell-1)}} \text{d} t^{(\ell)} \,\text{e}^{\text{i} \omega_{k_{\ell-1} n} t^{(\ell)}} g(t^{(\ell)}). \end{align} Here $O_{ij}\equiv \bra{i} \op{O} \ket{j}$, and $\omega_{mn}\equiv (E_m-E_n)/\hbar$ is the frequency of the transition $\ket{n} \rightarrow \ket{m}$ \footnote{The full expression for $A^{(\ell)}_{m}(T)$ also contains contributions from the apparatus subspace \cite{Schlosshauer:2014:pm}. They are irrelevant to our present analysis.}. Of particular interest is the first-order transition amplitude $A_m^{(1)}(T)$, \begin{align}\label{eq:8aadhj7gr7ss82} A_m^{(1)}(T) &= -\frac{\text{i}}{\hbar} O_{mn} \int_{0}^{T} \text{d} t\, \text{e}^{\text{i} \omega_{mn} t} g(t). \end{align} The total state disturbance is measured by the probability $\abs{\sum_{m\not=n} A_m(T)}^2$ of a transition to the subspace orthogonal to the initial state $\ket{n}$. Our goal is now to determine coupling functions $g(t)$ that minimize this transition probability. \section{\label{sec:seri-appr-minim}Series approach to minimization of state disturbance} Our first approach will consist of building up coupling functions $g(t)$ from sinusoidal components such that the coupling functions become increasingly smooth (in a sense to be defined below).
We take $g(t)$ to be symmetric about $t=T/2$ and expand it in terms of the functions \begin{equation}\label{eq:bvdhkjb678678} f_n(t)= (-1)^{n+1}\cos\left[\frac{2n\pi (t-T/2)}{T}\right], \quad n=1,2,3,\hdots, \end{equation} which form an orthogonal basis over the interval $[0,T]$ for functions symmetric about $t=T/2$. That is, we write $g(t)$ as \begin{equation}\label{eq:bvdhkjbvd} g(t) = \begin{cases} \frac{1}{T} \left( 1 + \sum_{n=1}^N a_n f_n(t)\right), & 0 \le t \le T, \\ 0, & \text{otherwise}, \end{cases} \end{equation} where the coefficients $a_n$ are dimensionless and do not depend on $T$. Since $\int_0^T \text{d} t \, f_n(t)=0$, the area under $g(t)$ is normalized as required. The dominant contribution comes from the $f_1(t)$ term describing a gradual increase and decrease. The terms $f_n(t)$ for $n \ge 2$ represent sinusoidal components with multiple peaks that we will now use to suitably shape the basic pulse represented by $f_1(t)$. We will first consider the first-order transition amplitude $A_m^{(1)}(T)$ given by Eq.~\eqref{eq:8aadhj7gr7ss82}, and then subsequently justify this approach by showing that higher-order corrections $A_m^{(\ell\ge 2)}(T)$ do not modify the results. Equation~\eqref{eq:8aadhj7gr7ss82} shows that the coupling-dependent part of $A_m^{(1)}(T)$ is represented by the Fourier transform $G(\omega T) = \int_0^T \text{d} t\, \text{e}^{\text{i} \omega t} g(t)$ of $g(t)$, where $\omega\equiv \omega_{mn}$. Thus, to quantify the state disturbance we evaluate the Fourier transform of $g(t)$ given by Eq.~\eqref{eq:bvdhkjbvd}, \begin{align}\label{eq:bvdhkjbvd0} G(\omega T) = \frac{2\text{e}^{\text{i} \omega T/2}}{\omega T} \sin\left( \omega T/2 \right) \left[ 1 - \sum_{n=1}^N \frac{a_n}{1-(2\pi n/\omega T)^2 }\right], \end{align} where $\omega T$ is a dimensionless quantity that measures the ratio of the measurement time to the internal timescale $\omega^{-1}$ associated with the transition $\ket{n}\rightarrow\ket{m}$. 
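As a numerical sanity check, the closed form of $G(\omega T)$ above can be compared against direct quadrature of the defining integral $\int_0^T \text{d} t\, \text{e}^{\text{i}\omega t} g(t)$. The following Python sketch is our own illustration, not part of the analysis; it assumes $T=1$ and an arbitrary choice of coefficients $a_n$, with $\omega T$ kept away from the points $\omega T = 2\pi n$ (where the coded closed form would divide by zero even though the full expression remains finite there).

```python
import cmath
import math

T = 1.0

def g(t, a):
    # g(t) = (1/T) * (1 + sum_n a_n f_n(t)) on [0, T], with
    # f_n(t) = (-1)^(n+1) * cos(2*n*pi*(t - T/2)/T).
    s = sum(an * (-1) ** (n + 1) * math.cos(2 * n * math.pi * (t - T / 2) / T)
            for n, an in enumerate(a, start=1))
    return (1.0 + s) / T

def G_quadrature(w, a, steps=20000):
    # Composite trapezoidal rule for int_0^T exp(i*w*t) g(t) dt.
    h = T / steps
    total = 0.5 * (g(0.0, a) + g(T, a) * cmath.exp(1j * w * T))
    for k in range(1, steps):
        t = k * h
        total += g(t, a) * cmath.exp(1j * w * t)
    return total * h

def G_closed(w, a):
    # Closed form of the Fourier transform for this family of g(t).
    x = w * T
    pref = 2.0 * cmath.exp(1j * x / 2) * math.sin(x / 2) / x
    corr = sum(an / (1.0 - (2 * math.pi * n / x) ** 2)
               for n, an in enumerate(a, start=1))
    return pref * (1.0 - corr)

# Example: arbitrary (hypothetical) coefficients for N = 2, with w*T = 37.
a = [0.5, 0.25]
w = 37.0
print(abs(G_quadrature(w, a) - G_closed(w, a)))  # small: quadrature-level agreement
```

Agreement to quadrature accuracy, including for the constant-coupling case $a_n = 0$, confirms the closed form.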
In physical situations, $\omega^{-1}$ typically represents atomic timescales and we may safely assume that $\omega T \gg N$. Then we can write Eq.~\eqref{eq:bvdhkjbvd0} as a power series in $1/\omega T$, \begin{equation}\label{eq:uuuun} G(\omega T) = \frac{2\text{e}^{\text{i} \omega T/2}}{\omega T} \sin\left( \omega T/2 \right)\left[ 1 - \sum_{k=0}^\infty \sum_{n=1}^N a_n \left(\frac{2\pi n}{\omega T}\right)^{2k} \right]. \end{equation} To minimize the state disturbance, we want $G(\omega T)$ to decay quickly with $T$ from its initial value of 1 at $T=0$. For the constant-coupling function $g(t)=1/T$ (all $a_n=0$), which describes a sudden turn-on and turnoff, we obtain $A_m^{(1)}(T) \propto 1/\omega T$, where the $T$ dependence is due to the fact that the average interaction strength is proportional to $1/T$. Clearly, we must have $\omega T \gg 1$ to achieve small state disturbance. For arbitrary coefficients $a_n$, $A_m^{(1)}(T)$ is still of first order in $1/\omega T$. Equation~\eqref{eq:uuuun} shows that we may increase the order of the leading term in $1/\omega T$ by imposing the conditions \begin{align}\label{eq:conds} \sum_{n=1}^N a_n=1, \quad \sum_{n=1}^N a_n n^{2k}=0, \quad 1 \le k \le N-1, \end{align} which define a set of $N$ linearly independent coupled equations for $N$ coefficients $a_n$ with a unique solution $\bvec{a}_N=(a_1,\hdots,a_N)$; e.g., $\bvec{a}_1=\left(1 \right)$, $\bvec{a}_2=\left(\frac{4}{3},-\frac{1}{3}\right)$, $\bvec{a}_3=\left(\frac{3}{2},-\frac{3}{5},\frac{1}{10}\right)$, etc. 
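The coefficient vectors quoted above follow from solving the linear system~\eqref{eq:conds} exactly; the coefficient matrix $M_{kn} = n^{2k}$ is a Vandermonde matrix in $n^2$ and hence nonsingular, so $\bvec{a}_N$ is indeed unique. A short Python sketch using exact rational arithmetic (our illustration):

```python
from fractions import Fraction

def coefficients(N):
    """Solve sum_n a_n = 1 and sum_n a_n n^(2k) = 0 for 1 <= k <= N - 1."""
    # Augmented matrix: rows k = 0..N-1, columns n = 1..N with entry n^(2k);
    # right-hand side is (1, 0, ..., 0).
    M = [[Fraction(n ** (2 * k)) for n in range(1, N + 1)]
         + [Fraction(1 if k == 0 else 0)] for k in range(N)]
    # Gauss-Jordan elimination in exact rational arithmetic.
    for c in range(N):
        p = next(r for r in range(c, N) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(N):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[N] for row in M]

print(coefficients(3))  # [Fraction(3, 2), Fraction(-3, 5), Fraction(1, 10)]
```

The outputs for $N = 1, 2, 3$ reproduce the vectors $\bvec{a}_1$, $\bvec{a}_2$, and $\bvec{a}_3$ given above.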
Using the solution $\bvec{a}_N$, $A_m^{(1)}(T)$ to leading order in $1/\omega T$ becomes [see Eqs.~\eqref{eq:8aadhj7gr7ss82} and \eqref{eq:uuuun}] \begin{align}\label{eq:uuuun2233} \widetilde{A}_m^{(1)}(T)&= -\frac{2\text{i}}{\hbar} O_{mn} \text{e}^{\text{i} \omega T/2} \sin\left( \omega T/2 \right)\left(2\pi\right)^{2N} \notag \\ & \quad \times \left(\sum_{n=1}^N a_n n^{2N}\right) \left(\frac{1}{\omega T}\right)^{2N+1}, \end{align} where the tilde indicates leading-order expressions. This amplitude is of order $(\omega T)^{-(2N+1)}$.
\begin{figure} \caption{\label{fig:g}Coupling functions $g(t)$ determined from the conditions~\eqref{eq:conds} for different values of $N$.} \end{figure}
\begin{figure} \caption{\label{fig:decay}Squared Fourier transforms $\abs{G(\omega T)}^2$ of the coupling functions of Fig.~\ref{fig:g}: (a) the regime $\omega T \gg N$ relevant to protective measurement; (b) the initial decay.} \end{figure}
Figure~\ref{fig:g} displays the coupling functions determined from the conditions~\eqref{eq:conds} for different values of $N$. Functions with larger $N$ describe a smoother turn-on and turnoff behavior. Figure~\ref{fig:decay}(a) shows the corresponding squared Fourier transforms $\abs{G(\omega T)}^2$ of these coupling functions in the regime $\omega T \gg N$ relevant to protective measurement, with $\abs{G(\omega T)}^2$ representing the dependence of the state disturbance on the choice of $g(t)$. We have neglected the rapid oscillations of $\abs{G(\omega T)}^2$, since they are irrelevant to considerations of state disturbance in protective measurements \footnote{Targeting the zeros of $\abs{G(\omega T)}^2$ to minimize the state disturbance by tuning $T$ would require precise knowledge of $\omega$ and therefore of $\op{H}_S$. But in a protective measurement, $\op{H}_S$ is \emph{a priori} unknown \cite{Aharonov:1993:jm,Dass:1999:az}.}. Small values of $N$ already achieve a strong reduction of the state disturbance. Figures~\ref{fig:g} and \ref{fig:decay}(a) show that while increasing $N$ entails a higher rate of change of the measurement strength outside the turn-on and turnoff region and a larger peak strength at $t=T/2$, it nevertheless reduces the state disturbance.
This indicates that the smoothness of the turn-on and turnoff of the interaction has a decisive influence on the state disturbance. Increasing $N$ also makes $g(t)$ narrower (see Fig.~\ref{fig:g}), making its Fourier transform wider and the initial decay of the transition amplitude slower, as seen in Fig.~\ref{fig:decay}(b). However, Fig.~\ref{fig:decay}(a) shows that this increase in width is insignificant in the relevant regime $\omega T \gg N$. Fundamentally, if $N \rightarrow \infty$, $g(t)$ becomes infinitely narrow and the transition amplitude becomes infinitely wide. Thus, one cannot eliminate the state disturbance altogether even in the limit of infinitely many $f_n(t)$. We now make precise the connection between smoothness and state disturbance. Mathematically, smoothness is measured by how many times a function is continuously differentiable over a given domain; we call a function that is $k$ times continuously differentiable a $C^k$-smooth function. The $j$th-order derivative of $g(t)=\frac{1}{T} \left( 1 + \sum_{n=1}^N a_n f_n(t)\right)$ [Eq.~\eqref{eq:bvdhkjbvd}] at $t=0$ and $t=T$ is proportional to $\sum_{n=1}^N a_n (2\pi n)^j$ for even $j$ and zero for odd $j$. Since all derivatives of $g(t)$ vanish for $t<0$ and $t>T$, the turn-on and turnoff points introduce a discontinuity in the derivatives. We can make all derivatives up to order $2N-1$ vanish (and thus continuous) at $t=0$ and $t=T$ by requiring that $\sum_{n=1}^N a_n (2\pi n)^{2k}=0$ for $k=1,2,\hdots,N-1$, in addition to the requirement $\sum_{n=1}^N a_n=1$ ensuring continuity of $g(t)$ itself. These, however, are precisely the conditions~\eqref{eq:conds} previously derived from the requirement of eliminating lower-order terms in the Fourier transform. Thus, increasing $N$ makes $g(t)$ arbitrarily smooth, resulting in a polynomial decay of the transition probability to arbitrary order in $1/\omega T$. 
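The connection just described can be checked numerically: with the coefficients $\bvec{a}_N$, the coupling function rises from the turn-on point as $g(t) \sim t^{2N}$, and the coupling-dependent bracket in $G(\omega T)$ [Eq.~\eqref{eq:bvdhkjbvd0}] scales as $(\omega T)^{-2N}$, which together with the $1/\omega T$ prefactor reproduces the $(\omega T)^{-(2N+1)}$ behavior of the amplitude. A Python sketch of both checks (our illustration, assuming $T=1$):

```python
import math

def g(t, a):
    # g(t) for T = 1: 1 + sum_n a_n * (-1)^(n+1) * cos(2*n*pi*(t - 1/2)).
    return 1.0 + sum(an * (-1) ** (n + 1) * math.cos(2 * n * math.pi * (t - 0.5))
                     for n, an in enumerate(a, 1))

def bracket(x, a):
    # Coupling-dependent factor of G(omega*T), with x = omega*T.
    return 1.0 - sum(an / (1.0 - (2 * math.pi * n / x) ** 2)
                     for n, an in enumerate(a, 1))

a2 = [4 / 3, -1 / 3]           # solution a_2 (N = 2)
a3 = [3 / 2, -3 / 5, 1 / 10]   # solution a_3 (N = 3)

# Near the turn-on point g(t) ~ t^(2N): doubling t scales g by ~2^(2N).
print(g(2e-3, a2) / g(1e-3, a2))  # ~ 16 for N = 2
print(g(6e-3, a3) / g(3e-3, a3))  # ~ 64 for N = 3

# The bracket scales as x^(-2N): doubling x scales it by ~2^(-2N).
print(bracket(1000.0, a2) / bracket(500.0, a2) * 2 ** 4)  # ~ 1
print(bracket(400.0, a3) / bracket(200.0, a3) * 2 ** 6)   # ~ 1
```

Both ratios approach their predicted values as the evaluation points move deeper into the asymptotic regimes $t \rightarrow 0$ and $\omega T \rightarrow \infty$.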
\section{\label{sec:minim-state-dist}Minimization of state disturbance using bump coupling functions}
\begin{figure} \caption{\label{fig:gbump}Bump coupling functions $g_{\alpha\beta}(t)$ of Eq.~\eqref{eq:bumfcts} for several choices of $\alpha$ and $\beta$.} \end{figure}
The construction of coupling functions from Eq.~\eqref{eq:bvdhkjbvd} progressively increases smoothness and illuminates the relationship between smoothness and state disturbance. However, the decay of the corresponding transition probability with $T$ is only polynomial. This raises the question of whether coupling functions exist that achieve superpolynomial decay. Clearly, this will require functions with compact support $[0,T]$ that are $C^\infty$-smooth, known as bump functions \cite{Lee:2003:oo}. No such function can have a Fourier transform that decays exponentially in $\omega T$, since a function whose Fourier transform decays exponentially cannot have compact support. Thus, the state disturbance can at most exhibit subexponential decay. A suitable class of bump functions with support $[0,T]$ is given by \begin{equation}\label{eq:bumfcts} g_{\alpha\beta} (t) = \begin{cases} c_{\alpha\beta}^{-1}\exp\left(-\beta \left[1-\left(\frac{2t}{T}-1\right)^2\right]^{1-\alpha}\right), & 0 < t < T, \\ 0, & \text{otherwise}, \end{cases} \end{equation} where $\alpha \ge 2$ and $\beta\ge 1$ are integers, and $c_{\alpha\beta}$ normalizes the area under $g_{\alpha\beta} (t)$. These functions are $C^\infty$-smooth, with all derivatives vanishing at $t=0$ and $t=T$, where the exponent has essential singularities. Figure~\ref{fig:gbump} shows $g_{\alpha\beta} (t)$ for several different choices of $\alpha$ and $\beta$.
\begin{figure} \caption{\label{fig:bumpsd}Decay of the Fourier transforms of the bump coupling functions $g_{\alpha\beta}(t)$ for different choices of $\alpha$ and $\beta$.} \end{figure}
For $\alpha=2$ and $\beta=1$, the Fourier transform exhibits subexponential decay proportional to $(\omega T)^{-3/4}\text{e}^{-\sqrt{\omega T}}$ (Fig.~\ref{fig:bumpsd}).
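A numerical illustration for $\alpha=2$ and $\beta=1$ (our own sketch, assuming $T=1$): the code below estimates the normalization constant $c_{2,1}$ and compares window maxima of $\abs{G(\omega T)}$ between $\omega T \approx 100$ and $\omega T \approx 200$; taking maxima over small frequency windows is simply a device for sidestepping the rapid oscillations of $\abs{G}$. A power law $(\omega T)^{-2}$ would reduce the maximum by only a factor of $0.25$ upon doubling $\omega T$; a far larger drop is consistent with the $\text{e}^{-\sqrt{\omega T}}$-type decay.

```python
import cmath
import math

T = 1.0

def bump(t, alpha=2, beta=1):
    # Unnormalized g_{alpha,beta}(t): exp(-beta * [1 - (2t/T - 1)^2]^(1 - alpha)).
    if t <= 0.0 or t >= T:
        return 0.0
    s = 1.0 - (2.0 * t / T - 1.0) ** 2
    return math.exp(-beta * s ** (1 - alpha))

steps = 20000
h = T / steps
ts = [k * h for k in range(1, steps)]  # integrand vanishes at the endpoints

# Normalization constant c_{2,1} = int_0^T bump(t) dt (trapezoidal rule).
c = sum(bump(t) for t in ts) * h
print(round(c, 3))  # ~ 0.222 for alpha = 2, beta = 1

def G(w):
    # Fourier transform of the normalized bump; the trapezoidal rule is very
    # accurate here since all endpoint derivatives of the integrand vanish.
    return sum(bump(t) * cmath.exp(1j * w * t) for t in ts) * h / c

m100 = max(abs(G(float(w))) for w in range(100, 119, 2))
m200 = max(abs(G(float(w))) for w in range(200, 219, 2))
print(m200 / m100)  # far below the factor 0.25 of an (omega*T)^(-2) law
```

The observed drop in the window maximum upon doubling $\omega T$ is roughly two orders of magnitude, far beyond any low-order power law.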
Increasing $\alpha$ and $\beta$ enhances the decay (see again Fig.~\ref{fig:bumpsd}), with Fourier transform (to leading order in $1/\omega T$) proportional to $(\omega T)^{-(\alpha+1)/(2\alpha)} \exp\left[-\gamma_{\alpha\beta} (\omega T)^{(\alpha-1)/\alpha}\right]$, where $\gamma_{\alpha\beta}$ is a constant. By increasing $\alpha$ we can asymptotically approach exponential decay. As seen in Fig.~\ref{fig:gbump}, this will also make $g(t)$ narrower, rendering the initial decay less rapid, just as for $g(t)$ constructed from an increasing number of sinusoidal components. Figure~\ref{fig:bumpsd} makes clear that since $\omega T \gg 1$, bump functions are superior to coupling functions composed of the sinusoidal components defined in Eq.~\eqref{eq:bvdhkjb678678}. \section{\label{sec:suff-first-order}Sufficiency of the first-order amplitude} The higher-order corrections $A_m^{(\ell \ge 2)}(T)$ [Eq.~\eqref{eq:g8fbvsv1}] are of $\ell$th order in the interaction strength, but in general contain terms of first order in $1/T$ \cite{Schlosshauer:2014:pm}. This raises the question of whether the conditions~\eqref{eq:conds}, which eliminate terms up to order $(\omega T)^{-(2N+1)}$ in $A_m^{(1)}(T)$, also eliminate these orders in $A_m^{(\ell)}(T)$ for all $\ell \ge 2$. We find that this is indeed the case. Evaluating $A_m^{(\ell)}(T)$ for $g(t)$ with $N$ nonzero coefficients $a_n$ satisfying the $N$ conditions~\eqref{eq:conds} gives, to leading order in $1/\omega T$, \begin{align}\label{eq:uuuun22243} \widetilde{A}_m^{(\ell)}(T)&= \left(-\frac{\text{i}}{\hbar}\right)^\ell \frac{\text{i} O_{mn} }{(\ell-1)!} \left[O_{mm} ^{\ell-1}-O_{nn} ^{\ell-1}\text{e}^{\text{i} \omega T}\right] \left(2\pi\right)^{2N} \notag\\ &\quad \times \left(\sum_{n=1}^N a_n n^{2N}\right) \left(\frac{1}{\omega T}\right)^{2N+1}.
\end{align} Since this is of the same leading order in $1/\omega T$ as the first-order transition amplitude $A_m^{(1)}(T)$ [see Eq.~\eqref{eq:uuuun2233}], the total transition amplitude $A_m(T) = \sum_{\ell=1}^\infty A_m^{(\ell)}(T)$ is also of the same leading order as $A_m^{(1)}(T)$. We establish a stronger result still. We calculate the total transition amplitude to leading order in $1/\omega T$ by summing Eq.~\eqref{eq:uuuun22243} over all orders $\ell$. The result is \begin{align}\label{eq:uuudfb43} \widetilde{A}_m(T) &\approx -\frac{2\text{i}}{\hbar} O_{mn} \text{e}^{\text{i}\omega T/2} \sin \left\{ \frac{\omega T }{2} \left[1 +\chi_{mn}(T) \right]\right\} \left(2\pi\right)^{2N} \notag\\ &\quad \times\left(\sum_{n=1}^N a_n n^{2N}\right) \left(\frac{1}{\omega T}\right)^{2N+1}, \end{align} where $\chi_{mn}(T) = (\hbar\omega T)^{-1} \left[O_{mm} - O_{nn}\right]$ \footnote{Eq.~\eqref{eq:uuudfb43} omits an overall phase factor, which does not influence the transition probability.}. Comparison with Eq.~\eqref{eq:uuuun2233} shows that the corrections $A^{(\ell \ge 2)}_{m}(T)$ merely introduce a scaling factor $1+\chi_{mn}(T)$ into the argument of the sine function, whose oscillations, however, may be disregarded (see note~\cite{Note2}). Hence we may replace the sine function by 1, in which case Eqs.~\eqref{eq:uuuun2233} and \eqref{eq:uuudfb43} become identical. Thus, to leading order in $1/\omega T$, the first-order transition probability $\abs{A^{(1)}_m(T)}^2$ accurately describes the state disturbance. This offers an important calculational advantage and enables the analysis of state disturbance in terms of properties of Fourier-transform pairs. \section{Discussion} A particularly intriguing application of protective measurement is the possibility of characterizing the quantum state of a single system from a set of protectively measured expectation values.
While this approach is intrinsically limited by its requirement that the system initially be in an eigenstate of its Hamiltonian \cite{Aharonov:1993:qa,Aharonov:1993:jm,Dass:1999:az}, it has the distinct conceptual and practical advantage of not requiring ensembles of identically prepared systems, in contrast with conventional quantum-state tomography based on strong \cite{Vogel:1989:uu,Dunn:1995:oo,Smithey:1993:lm,Breitenbach:az:1997,White:1999:az,James:2001:uu,Haffner:2005:sc,Leibfried:2005:yy,Altepeter:2005:ll,Lvovsky:2009:zz} or weak \cite{Lundeen:2011:ii,Lundeen:2012:rr,Fischbach:2012:za,Bamber:2014:ee,Dressel:2011:au} measurements. Thus, it provides an important alternative and complementary strategy for quantum-state measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po,Auletta:2014:yy,Diosi:2014:yy,Aharonov:2014:yy}. To successfully characterize the initial state of the system with protective measurements, it is crucial that the initial state of the system is minimally disturbed during the series of protective measurements that determine the set of expectation values. We have shown how one can minimize this state disturbance, given a fixed duration $T$ and average strength ($\propto 1/T$) of each protective measurement. Specifically, we have described a systematic procedure for designing the time dependence of the system--apparatus interaction (described by the coupling function) such that the state disturbance decreases polynomially or subexponentially with $T$. The leading order in $1/T$ can be made arbitrarily large for polynomial decay, and one may also come arbitrarily close to exponential-decay behavior by using bump functions. Since strictly exponential decay cannot be attained, bump functions are the optimal choice, as they produce the least possible state disturbance in a protective measurement. 
Previous discussions of protective measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp} have appealed to the condition that the coupling function change slowly during the measurement such that the quantum adiabatic theorem \cite{Born:1928:yf} can be applied. But our results indicate that this condition is both too weak and too strict. It is too weak, because it concerns only the smallness of the first-order derivative of the coupling function, rather than the number of continuous derivatives. It is too strict, because our analysis shows that the state disturbance in a protective measurement is chiefly due to discontinuities in the coupling function and its derivatives during the turn-on and turnoff of the measurement interaction. Once a sufficiently smooth turn-on and turnoff is achieved, the interaction strength may be changed comparably rapidly during the remaining period without creating significant additional state disturbance. Thus, the reduction of the state disturbance through an optimization of the coupling function does not necessitate adjustment of the measurement time or average interaction strength. Furthermore, compared to the condition of smoothness, the weakness of the interaction has a small effect on the state disturbance, which depends only quadratically on the average interaction strength. The optimization procedure described here is very general, because it solely modifies the time dependence of the coupling function and is independent of the physical details of the system and the apparatus. In particular, it is independent of the Hamiltonian and the measured observable. This raises the question of whether and how one might further improve the fidelity of the state measurement if the specifics of the physical system and measured observables are taken into account. One approach would be to make use of any available partial knowledge of the Hamiltonian of the system. 
Such knowledge may be used to additionally reduce the state disturbance, since then the system--apparatus interaction can be designed to target the partially known eigenspaces of the Hamiltonian \footnote{Of course, in the limiting case of a completely known Hamiltonian, a projective measurement in the energy eigenbasis permits determination of the state of the system without any state disturbance, since the system is assumed to be in one of these eigenstates.}. In this case, one may also be able to reduce particular transition amplitudes by minimizing some of the transition matrix elements $O_{mn}=\bra{m} \op{O} \ket{n}$ [see Eqs.~\eqref{eq:uuuun22243} and \eqref{eq:uuudfb43}]. However, this approach can be expected to succeed only for a subset of eigenstates and very few particular choices (if any) of observables $\op{O}$, while state determination requires the protective measurement of multiple complementary (and practically measurable) observables. In summary, we have shown how to optimally implement protective measurements and thereby maximize the likelihood of success of protective measurements that seek to determine the quantum state of single systems. Our results dramatically improve the performance of protective measurements and may aid in their future experimental realization. 
\begin{thebibliography}{42} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Vogel}\ and\ \citenamefont {Risken}(1989)}]{Vogel:1989:uu} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Vogel}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Risken}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
A}\ }\textbf {\bibinfo {volume} {40}},\ \bibinfo {pages} {2847} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dunn}\ \emph {et~al.}(1995)\citenamefont {Dunn}, \citenamefont {Walmsley},\ and\ \citenamefont {Mukamel}}]{Dunn:1995:oo} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Dunn}}, \bibinfo {author} {\bibfnamefont {I.~A.}\ \bibnamefont {Walmsley}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Mukamel}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {884} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Smithey}\ \emph {et~al.}(1993)\citenamefont {Smithey}, \citenamefont {Beck}, \citenamefont {Raymer},\ and\ \citenamefont {Faridani}}]{Smithey:1993:lm} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~T.}\ \bibnamefont {Smithey}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Beck}}, \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Raymer}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Faridani}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages} {1244} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breitenbach}\ \emph {et~al.}(1997)\citenamefont {Breitenbach}, \citenamefont {Schiller},\ and\ \citenamefont {Mlynek}}]{Breitenbach:az:1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Breitenbach}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Schiller}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mlynek}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {387}},\ \bibinfo {pages} {471} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {White}\ \emph {et~al.}(1999)\citenamefont {White}, \citenamefont {James}, \citenamefont {Eberhard},\ and\ \citenamefont {Kwiat}}]{White:1999:az} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {D.~F.~V.}\ \bibnamefont {James}}, \bibinfo {author} {\bibfnamefont {P.~H.}\ \bibnamefont {Eberhard}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~G.}\ \bibnamefont {Kwiat}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {3103} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {James}\ \emph {et~al.}(2001)\citenamefont {James}, \citenamefont {Kwiat}, \citenamefont {Munro},\ and\ \citenamefont {White}}]{James:2001:uu} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~F.~V.}\ \bibnamefont {James}}, \bibinfo {author} {\bibfnamefont {P.~G.}\ \bibnamefont {Kwiat}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Munro}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {White}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
A}\ }\textbf {\bibinfo {volume} {64}},\ \bibinfo {pages} {052312} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {H{\"a}ffner}\ \emph {et~al.}(2005)\citenamefont {H{\"a}ffner}, \citenamefont {H{\"a}nsel}, \citenamefont {Roos}, \citenamefont {Benhelm}, \citenamefont {{Chek-al-kar}}, \citenamefont {Chwalla}, \citenamefont {K{\"o}rber}, \citenamefont {Rapol}, \citenamefont {Riebe}, \citenamefont {Schmidt}, \citenamefont {Becher}, \citenamefont {G{\"u}hne}, \citenamefont {D{\"u}rr},\ and\ \citenamefont {Blatt}}]{Haffner:2005:sc} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {H{\"a}ffner}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {H{\"a}nsel}}, \bibinfo {author} {\bibfnamefont {C.~F.}\ \bibnamefont {Roos}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Benhelm}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {{Chek-al-kar}}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Chwalla}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {K{\"o}rber}}, \bibinfo {author} {\bibfnamefont {U.~D.}\ \bibnamefont {Rapol}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Riebe}}, \bibinfo {author} {\bibfnamefont {P.~O.}\ \bibnamefont {Schmidt}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Becher}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {G{\"u}hne}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {D{\"u}rr}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {438}},\ \bibinfo {pages} {643} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Leibfried}\ \emph {et~al.}(2005)\citenamefont {Leibfried}, \citenamefont {Knill}, \citenamefont {Seidelin}, \citenamefont {Britton}, \citenamefont {Blakestad}, \citenamefont {Chiaverini}, \citenamefont {Hume}, \citenamefont {Itano}, \citenamefont {Jost}, \citenamefont {Langer}, \citenamefont 
{Ozeri}, \citenamefont {Reichle},\ and\ \citenamefont {Wineland}}]{Leibfried:2005:yy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Seidelin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Britton}}, \bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Blakestad}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chiaverini}}, \bibinfo {author} {\bibfnamefont {D.~B.}\ \bibnamefont {Hume}}, \bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Jost}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Langer}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ozeri}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Reichle}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {438}},\ \bibinfo {pages} {639} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Altepeter}\ \emph {et~al.}(2005)\citenamefont {Altepeter}, \citenamefont {Jeffrey},\ and\ \citenamefont {Kwiat}}]{Altepeter:2005:ll} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~B.}\ \bibnamefont {Altepeter}}, \bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont {Jeffrey}}, \ and\ \bibinfo {author} {\bibfnamefont {P.~G.}\ \bibnamefont {Kwiat}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Adv. Atom. Mol. Opt. 
Phy.}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {105} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lvovsky}\ and\ \citenamefont {Raymer}(2009)}]{Lvovsky:2009:zz} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~I.}\ \bibnamefont {Lvovsky}}\ and\ \bibinfo {author} {\bibfnamefont {M.~G.}\ \bibnamefont {Raymer}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {299} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Paris}\ and\ \citenamefont {Rehacek}(2004)}]{Paris:2004:uu} \BibitemOpen \bibinfo {editor} {\bibfnamefont {M.}~\bibnamefont {Paris}}\ and\ \bibinfo {editor} {\bibfnamefont {J.}~\bibnamefont {Rehacek}},\ eds.,\ \href@noop {} {\emph {\bibinfo {title} {Quantum State Estimation}}},\ \bibinfo {series} {Lecture Notes in Physics}, Vol.\ \bibinfo {volume} {649}\ (\bibinfo {publisher} {Springer},\ \bibinfo {address} {Heidelberg},\ \bibinfo {year} {2004})\BibitemShut {NoStop} \bibitem [{\citenamefont {Lundeen}\ \emph {et~al.}(2011)\citenamefont {Lundeen}, \citenamefont {Sutherland}, \citenamefont {Patel}, \citenamefont {Stewart},\ and\ \citenamefont {Bamber}}]{Lundeen:2011:ii} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Lundeen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Sutherland}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Patel}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Stewart}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Bamber}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {474}},\ \bibinfo {pages} {188} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lundeen}\ and\ \citenamefont {Bamber}(2012)}]{Lundeen:2012:rr} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Lundeen}}\ and\ \bibinfo 
{author} {\bibfnamefont {C.}~\bibnamefont {Bamber}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {070402} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fischbach}\ and\ \citenamefont {Freyberger}(2012)}]{Fischbach:2012:za} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Fischbach}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Freyberger}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {052110} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bamber}\ and\ \citenamefont {Lundeen}(2014)}]{Bamber:2014:ee} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Bamber}}\ and\ \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Lundeen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {070405} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dressel}\ \emph {et~al.}(2011)\citenamefont {Dressel}, \citenamefont {Broadbent}, \citenamefont {Howell},\ and\ \citenamefont {Jordan}}]{Dressel:2011:au} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Dressel}}, \bibinfo {author} {\bibfnamefont {C.~J.}\ \bibnamefont {Broadbent}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Howell}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Jordan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {040402} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(1988)\citenamefont {Aharonov}, \citenamefont {Albert},\ and\ \citenamefont {Vaidman}}]{Aharonov:1988:mz} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {D.~Z.}\ \bibnamefont {Albert}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Vaidman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {60}},\ \bibinfo {pages} {1351} (\bibinfo {year} {1988})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Duck}\ \emph {et~al.}(1989)\citenamefont {Duck}, \citenamefont {Stevenson},\ and\ \citenamefont {Sudarshan}}]{Duck:1989:uu} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~M.}\ \bibnamefont {Duck}}, \bibinfo {author} {\bibfnamefont {P.~M.}\ \bibnamefont {Stevenson}}, \ and\ \bibinfo {author} {\bibfnamefont {E.~C.~G.}\ \bibnamefont {Sudarshan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {40}},\ \bibinfo {pages} {2112} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Maccone}\ and\ \citenamefont {Rusconi}(2014)}]{Maccone:2014:uu} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Maccone}}\ and\ \bibinfo {author} {\bibfnamefont {C.~C.}\ \bibnamefont {Rusconi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {022122} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wootters}\ and\ \citenamefont {Zurek}(1982)}]{Wootters:1982:ww} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Wootters}}\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zurek}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {299}},\ \bibinfo {pages} {802} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {D'Ariano}\ and\ \citenamefont {Yuen}(1996)}]{Ariano:1996:om} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont {D'Ariano}}\ and\ \bibinfo {author} {\bibfnamefont {H.~P.}\ \bibnamefont {Yuen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {2832} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ and\ \citenamefont {Vaidman}(1993)}]{Aharonov:1993:qa} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Vaidman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. A}\ }\textbf {\bibinfo {volume} {178}},\ \bibinfo {pages} {38} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(1993)\citenamefont {Aharonov}, \citenamefont {Anandan},\ and\ \citenamefont {Vaidman}}]{Aharonov:1993:jm} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Anandan}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Vaidman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {4616} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(1996)\citenamefont {Aharonov}, \citenamefont {Anandan},\ and\ \citenamefont {Vaidman}}]{Aharonov:1996:fp} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Anandan}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Vaidman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Found. Phys.}\ }\textbf {\bibinfo {volume} {26}},\ \bibinfo {pages} {117} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alter}\ and\ \citenamefont {Yamamoto}(1996)}]{Alter:1996:oo} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Alter}}\ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yamamoto}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {R2911} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Hari Dass}}\ and\ \citenamefont {Qureshi}(1999)}]{Dass:1999:az} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~D.}\ \bibnamefont {{Hari Dass}}}\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Qureshi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo {pages} {2590} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vaidman}(2009)}]{Vaidman:2009:po} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Vaidman}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Compendium of Quantum Physics: Concepts, Experiments, History and Philosophy}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {D.}~\bibnamefont {Greenberger}}, \bibinfo {editor} {\bibfnamefont {K.}~\bibnamefont {Hentschel}}, \ and\ \bibinfo {editor} {\bibfnamefont {F.}~\bibnamefont {Weinert}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {address} {Berlin/Heidelberg},\ \bibinfo {year} {2009}), pp.\ \bibinfo {pages} {505--508}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gao}(2014)}]{Gao:2014:cu} \BibitemOpen \bibinfo {editor} {\bibfnamefont {S.}~\bibnamefont {Gao}},\ ed.,\ \href@noop {} {\emph {\bibinfo {title} {Protective Measurement and Quantum Reality: Towards a New Understanding of Quantum Mechanics}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {address} {Cambridge},\ \bibinfo {year} {2014})\BibitemShut {NoStop} \bibitem [{\citenamefont {Auletta}(2014)}]{Auletta:2014:yy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Auletta}},\ }in\ Ref.~\cite{Gao:2014:cu}, pp.\ \bibinfo {pages} {39--62}\BibitemShut {NoStop} \bibitem [{\citenamefont {Di{\'o}si}(2014)}]{Diosi:2014:yy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Di{\'o}si}},\ }in\ Ref.~\cite{Gao:2014:cu}, pp.\ \bibinfo {pages} {63--67}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ and\ \citenamefont {Cohen}(2014)}]{Aharonov:2014:yy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Cohen}},\ }in\ Ref.~\cite{Gao:2014:cu}, pp.\ \bibinfo {pages} {28--38}\BibitemShut 
{NoStop} \bibitem [{\citenamefont {Aharonov}\ and\ \citenamefont {Vaidman}(1996)}]{Aharonov:1996:ii} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Vaidman}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Bohmian Mechanics and Quantum Theory: An Appraisal}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {J.~T.}\ \bibnamefont {Cushing}}, \bibinfo {editor} {\bibfnamefont {A.}~\bibnamefont {Fine}}, \ and\ \bibinfo {editor} {\bibfnamefont {S.}~\bibnamefont {Goldstein}}}\ (\bibinfo {publisher} {Kluwer},\ \bibinfo {address} {Dordrecht},\ \bibinfo {year} {1996}), pp.\ \bibinfo {pages} {141--154}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(1999)\citenamefont {Aharonov}, \citenamefont {Englert},\ and\ \citenamefont {Scully}}]{Aharonov:1999:uu} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {B.~G.}\ \bibnamefont {Englert}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~O.}\ \bibnamefont {Scully}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. A}\ }\textbf {\bibinfo {volume} {263}},\ \bibinfo {pages} {137} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Alter}\ and\ \citenamefont {Yamamoto}(1997)}]{Alter:1997:oo} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Alter}}\ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yamamoto}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {56}},\ \bibinfo {pages} {1057} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schlosshauer}\ and\ \citenamefont {Claringbold}(2014)}]{Schlosshauer:2014:tp} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schlosshauer}}\ and\ \bibinfo {author} {\bibfnamefont {T.~V.~B.}\ \bibnamefont {Claringbold}},\ }in\ Ref.~\cite{Gao:2014:cu}, pp.\ \bibinfo {pages} {180--194}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schlosshauer}(2014)}]{Schlosshauer:2014:pm} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schlosshauer}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {052106} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schlosshauer}(2015)}]{Schlosshauer:2015:pm} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schlosshauer}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {062116} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Born}\ and\ \citenamefont {Fock}(1928)}]{Born:1928:yf} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Born}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Fock}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Z. 
Phys.}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {165} (\bibinfo {year} {1928})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sakurai}(1994)}]{Sakurai:1994:om} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Sakurai}},\ }\href@noop {} {\emph {\bibinfo {title} {Modern Quantum Mechanics}}},\ \bibinfo {edition} {2nd}\ ed.\ (\bibinfo {publisher} {Addison-Wesley},\ \bibinfo {address} {Reading, Massachusetts},\ \bibinfo {year} {1994})\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1} \BibitemOpen \bibinfo {note} {The full expression for $A^{(\ell )}_{m}(T)$ also contains contributions from the apparatus subspace \cite {Schlosshauer:2014:pm}. They are irrelevant to our present analysis.}\BibitemShut {Stop} \bibitem [{Note2()}]{Note2} \BibitemOpen \bibinfo {note} {Targeting the zeros of $\left| G(\omega T)\right|^2$ to minimize the state disturbance by tuning $T$ would require precise knowledge of $\omega$ and therefore of $\hat{H}_S$. 
But in a protective measurement, $\hat{H}_S$ is \emph{a priori} unknown \cite {Aharonov:1993:jm,Dass:1999:az}.}\BibitemShut {Stop} \bibitem [{\citenamefont {Lee}(2003)}]{Lee:2003:oo} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Lee}},\ }\href@noop {} {\emph {\bibinfo {title} {Introduction to Smooth Manifolds}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {address} {New York},\ \bibinfo {year} {2003})\BibitemShut {NoStop} \bibitem [{Note3()}]{Note3} \BibitemOpen \bibinfo {note} {Eq.~(\ref{eq:uuudfb43}) omits an overall phase factor, which does not influence the transition probability.}\BibitemShut {Stop} \bibitem [{Note4()}]{Note4} \BibitemOpen \bibinfo {note} {Of course, in the limiting case of a completely known Hamiltonian, a projective measurement in the energy eigenbasis permits determination of the state of the system without any state disturbance, since the system is assumed to be in one of these eigenstates.}\BibitemShut {Stop} \end{thebibliography} \end{document}
\begin{document} \title{Spatial entanglement using a quantum walk on a many-body system} \author{Sandeep K. \surname{Goyal}} \email{[email protected]} \affiliation{The Institute of Mathematical Sciences, CIT campus, Chennai 600 113, India} \author{C. M. \surname{Chandrashekar}} \email{[email protected]} \affiliation{Institute for Quantum Computing, University of Waterloo, Ontario N2L 3G1, Canada} \affiliation{Perimeter Institute for Theoretical Physics, Waterloo, ON, N2L 2Y5, Canada} \begin{abstract} The evolution of a many-particle system on a one-dimensional lattice, when subjected to a quantum walk, can generate spatial entanglement between the lattice positions, which can be exploited for quantum information and communication purposes. We demonstrate the evolution of spatial entanglement and its dependence on the quantum coin operation parameters, the number of particles present in the lattice, and the number of steps of the quantum walk. Thus, spatial entanglement can be controlled and optimized using a many-particle discrete-time quantum walk. \end{abstract} \maketitle \preprint{Version} \section{Introduction} \label{intro} Entanglement is a fundamental resource in many quantum information and computation protocols, such as cryptography, communication, teleportation, and algorithms \cite{NC00, HHH09}; implementing these protocols requires the generation of entangled states. Similarly, studies at the interface between condensed matter systems and quantum information have identified entanglement as a signature of quantum phase transitions \cite{ON02, OAF02, OROM06}, so an analysis of entanglement in many-body systems is essential for understanding their phases and dynamics. Hence, various schemes have been proposed for entanglement generation in quantum systems \cite{BH02, LHL03, RER07, WS09} and for understanding entanglement in many-body systems \cite{AFOV08}. 
The quantum walk (QW) is one such process: it evolves an uncorrelated state into an entangled one and can be used to analyze the evolution of entanglement \cite{Kem03, CLX05}. \par The QW, developed as a quantum analog of the classical random walk (CRW), entangles the internal and position degrees of freedom of a particle. It has played a significant role in the development of quantum algorithms \cite{Amb03}. Furthermore, the QW has been used to demonstrate coherent quantum control over atoms and quantum phase transitions \cite{CL08}, to explain phenomena such as the breakdown of an electric-field-driven system \cite{OKA05}, and to provide direct experimental evidence for wavelike energy transfer within photosynthetic systems \cite{ECR07, MRL08}. Experimental implementations of the QW have been reported \cite{DLX03, RLB05, PLP08, KFC09}, and various other schemes have been proposed for its physical realization \cite{TM02, RKB02, EMB05, Cha06, MBD06}. Therefore, studying entanglement during the QW process is useful from a quantum information theory perspective and contributes to further investigation of the practical applications of the QW. In this direction, the evolution of entanglement between a single particle and its position with time (the number of steps of the discrete-time QW) has been reported \cite{CLX05}. \par In this paper, we consider a multipartite quantum walk on a one-dimensional lattice and study the evolution of {\it spatial entanglement}, the entanglement between different lattice points. All the particles considered in the system are identical and indistinguishable, each with two internal states (the sides of the quantum coin). The spatial entanglement generated by the QW can be controlled by tuning different parameters, such as the parameters of the quantum coin operation, the number of particles in the system, and the evolution time (number of steps). To quantify the entanglement in the system we use the Meyer-Wallach multipartite entanglement measure. 
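For an $n$-qubit pure state, the Meyer-Wallach measure mentioned above admits the simple closed form $Q = 2\bigl(1 - \tfrac{1}{n}\sum_{k}\mathrm{Tr}\,\rho_k^2\bigr)$, where $\rho_k$ is the reduced density matrix of qubit $k$ (Brennen's single-qubit-purity formulation). A minimal numerical sketch, with function and variable names ours rather than from this paper:

```python
import numpy as np

def meyer_wallach_Q(psi, n):
    """Meyer-Wallach global entanglement of an n-qubit pure state,
    in Brennen's form: Q = 2 * (1 - (1/n) * sum_k Tr[rho_k^2])."""
    psi = np.asarray(psi, dtype=complex).reshape([2] * n)
    purities = []
    for k in range(n):
        # Single-qubit reduced state rho_k = Tr_{j != k} |psi><psi|:
        # move qubit k to the front and flatten the rest.
        m = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho_k = m @ m.conj().T
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    return 2.0 * (1.0 - np.mean(purities))

# Q = 0 for a product state, Q = 1 for a GHZ state.
prod = np.zeros(8); prod[0] = 1.0                 # |000>
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)  # (|000>+|111>)/sqrt(2)
```

The measure is polynomial in the state amplitudes ($n$ partial traces of $2\times 2$ matrices), which is what makes it scalable to many particles.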
\par In Sec. \ref{qw}, we describe single-particle and many-particle discrete-time QWs. In Sec. \ref{entanglement}, entanglement between a particle and the position space, and spatial entanglement using single- and many-particle QWs, are discussed. In Sec. \ref{mpent}, we present the measure of spatial entanglement of the system using the Meyer-Wallach global entanglement measure for particles in a one-dimensional lattice and in a closed chain ($n$-cycle). We also demonstrate control over spatial entanglement by exploiting the dynamical properties of the QW. We conclude with a summary in Sec. \ref{conc}. \section{Quantum Walk} \label{qw} The classical random walk (CRW) describes the probabilistic dynamics of a particle in position space. The QW is its quantum analog, developed by exploiting features of quantum mechanics such as superposition and interference of quantum amplitudes \cite{GVR58, FH65, ADZ93}. Because the QW evolves a superposition of states, it simultaneously explores multiple possible paths, with the amplitudes corresponding to the different paths interfering. As a result, the variance of the QW on a line grows quadratically with the number of steps, in sharp contrast to the linear growth for the CRW. \par The study of QWs has largely been divided into two standard variants: the discrete-time QW (DTQW) \cite{ADZ93, DM96, ABN01} and the continuous-time QW (CTQW) \cite{FG98}. In the CTQW, the walk is defined directly on the {\it position} Hilbert space $\mathcal{H}_p$, whereas the DTQW requires an additional {\it coin} Hilbert space $\mathcal{H}_c$ and a quantum coin operation to define the direction in which the particle amplitude evolves. The connection between these two variants and the generic version of the QW has been studied \cite{FS06, C08}. However, the coin degree of freedom in the DTQW is an advantage over the CTQW, as it allows control over the dynamics of the walk \cite{AKR05, CSL08}. 
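The quadratic growth of the variance can be checked with a short simulation of the Hadamard walk. This sketch (all names ours) assumes the standard coin-then-conditional-shift update of the DTQW, which is defined in detail in the next subsection:

```python
import numpy as np

def hadamard_walk(steps, delta=np.pi / 4, eta=np.pi / 2):
    """DTQW on a line: Hadamard coin followed by a conditional shift.
    Initial coin state cos(delta)|0> + e^{i eta} sin(delta)|1> at the origin.
    Returns the position probability distribution and its variance."""
    npos = 2 * steps + 1                    # positions -steps .. +steps
    amp = np.zeros((2, npos), dtype=complex)
    amp[0, steps] = np.cos(delta)
    amp[1, steps] = np.exp(1j * eta) * np.sin(delta)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = H @ amp                       # coin operation on every site
        amp[0] = np.roll(amp[0], -1)        # |0> amplitudes step left
        amp[1] = np.roll(amp[1], +1)        # |1> amplitudes step right
    prob = np.sum(np.abs(amp) ** 2, axis=0)
    x = np.arange(-steps, steps + 1)
    mean = np.sum(x * prob)
    var = np.sum(x ** 2 * prob) - mean ** 2
    return prob, var
```

Doubling the number of steps roughly quadruples the variance (quadratic spread), whereas the classical random walk has variance equal to the number of steps.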
Therefore, we take full advantage of the coin degree of freedom in this work and study the DTQW on a many-particle system. \subsection{Single-particle quantum walk} \label{spqw} The DTQW is defined on the Hilbert space $\mathcal H= \mathcal H_{c} \otimes \mathcal H_{p}$. In one dimension, the coin Hilbert space $\mathcal H_{c}$, spanned by the basis states $|0\rangle$ and $|1\rangle$, represents the two sides of the quantum coin, and the position Hilbert space $\mathcal H_{p}$, spanned by the basis states $|\psi_j\rangle$, $j \in \mathbb{Z}$, represents the positions on the lattice. To implement the DTQW, we consider a three-parameter U(2) operator $C_{\xi, \theta, \zeta}$ of the form
\begin{equation} \label{coin} C_{\xi,\theta,\zeta} \equiv \left( \begin{array}{clcr} e^{i\xi}\cos(\theta) & & e^{i\zeta}\sin(\theta) \\ e^{-i\zeta} \sin(\theta) & & -e^{-i\xi}\cos(\theta) \end{array} \right) \end{equation}
as the quantum coin operation \cite{CSL08}. The quantum coin operation is applied to the particle state ($C_{\xi, \theta, \zeta} \otimes {\mathbbm 1}$) when the initial state of the complete system is
\begin{equation} \label{qw:in} |\Psi_{in}\rangle= \left [ \cos(\delta)|0\rangle + e^{i\eta}\sin(\delta)|1\rangle \right ] \otimes |\psi_{0}\rangle. \end{equation}
The state $\cos(\delta)|0\rangle + e^{i\eta}\sin(\delta)|1\rangle$ is the state of the particle and $|\psi_{0}\rangle$ is the position state at lattice position $j=0$. \par The quantum coin operation on the particle is followed by the conditional unitary shift operation $S$, which acts on the complete Hilbert space of the system:
\begin{equation} \label{eq:alter} S =\exp(-i \sigma_{z}\otimes P l), \end{equation}
where $P$ is the momentum operator, $\sigma_{z}$ is the Pauli spin operator in the $z$ direction, and $l$ is the length of each step. The eigenstates of $\sigma_{z}$ are denoted by $|0\rangle$ and $|1\rangle$. 
Therefore, $S$, which delocalizes the wave packet over the positions $(j-1)$ and $(j+1)$, can also be written as
\begin{eqnarray} \label{eq:condshift} S = |0\rangle \langle 0|\otimes \sum_{j \in \mathbb{Z}}|\psi_{j-1}\rangle \langle \psi_{j} |+|1\rangle \langle 1 |\otimes \sum_{j \in \mathbb{Z}} |\psi_{j+1}\rangle \langle \psi_{j}|. \end{eqnarray}
\par The operation
\begin{equation} \label{dtqwev} W_{\xi, \theta, \zeta} = S(C_{\xi, \theta, \zeta} \otimes {\mathbbm 1}) \end{equation}
is iterated, without resorting to intermediate measurements, to realize a large number of steps of the QW. The parameters $\delta$ and $\eta$ in Eq. (\ref{qw:in}) can be varied to obtain different initial states of the particle, and the three parameters $\xi$, $\theta$ and $\zeta$ of $C_{\xi, \theta, \zeta}$ can be varied to choose the quantum coin operation. By varying the parameter $\theta$, the variance can be increased or decreased according to the functional form $\sigma^{2} \approx (1-\sin(\theta))t^{2}$, where $t$ is the number of steps of the QW, as shown in Fig. \ref{fig:qw1a}. \begin{figure}[ht] \begin{center} \epsfig{figure=fig1.eps, width=9.0cm} \caption{\label{fig:qw1a}(color online) Spread of the probability distribution for different values of $\theta$ using the quantum coin operator $C_{0, \theta, 0}$. The distribution is wider for (a) $(0, \theta, 0)= (0, \frac{\pi}{12}, 0)$ than for (b) $(0, \theta, 0)= (0, \frac{\pi}{4}, 0)$ and (c) $(0, \theta, 0)= (0, \frac{5 \pi}{12}, 0)$, showing the decrease in spread with increase in $\theta$. 
The initial state of the particle is $|\Psi_{ins}\rangle = \frac{1}{\sqrt 2}\left ( |0\rangle + i |1\rangle \right ) \otimes |\psi_{0}\rangle$ and the distribution is for 100 steps.} \end{center} \end{figure} \par {\it Biased coin operation and biased QW:} The most widely studied form of the DTQW is the walk using the Hadamard operation
\begin{equation} H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & \mbox{~} 1 \\ 1 & -1 \end{pmatrix}, \label{hadamard} \end{equation}
corresponding to the quantum coin operation with $\xi = \zeta = 0$ and $\theta = \pi/4$ in Eq. (\ref{coin}). The Hadamard operation is an unbiased coin operation, and the resulting walk is known as the Hadamard walk. This walk, implemented on a particle initially in the symmetric superposition state
\begin{equation} \label{qw:in1} |\Psi_{ins}\rangle = \frac{1}{\sqrt 2} \left [ |0\rangle + i |1\rangle \right ] \otimes |\psi_{0}\rangle, \end{equation}
obtained by choosing $\delta = \pi/4$ and $\eta = \pi/2$ in Eq. (\ref{qw:in}), returns a symmetric, unbiased probability distribution of the particle in position space. However, the Hadamard walk on any asymmetric initial state of the particle results in an asymmetric, biased probability distribution in position space \cite{ABN01}. We should note that the role of the initial state in fixing the symmetry of the probability distribution is not vital for a QW using the three-parameter operator given by Eq. (\ref{coin}) as the quantum coin operation. \par To elaborate on this further, we consider the first step of the DTQW using the three-parameter quantum coin operation given by Eq. (\ref{coin}) on a particle initially in the symmetric superposition state. 
After the first step of the DTQW, the state can be written as
\begin{eqnarray} \label{eq:condshift2} W_{\xi, \theta, \zeta}|\Psi_{ins}\rangle = \frac{1}{\sqrt 2} \left [ \left(e^{i\xi} \cos(\theta)+ i e^{i\zeta} \sin(\theta)\right ) |0\rangle|\psi_{-1}\rangle + \left( e^{-i\zeta}\sin (\theta) - i e^{-i\xi} \cos(\theta)\right) |1\rangle|\psi_{+1}\rangle \right ]. \end{eqnarray}
If $\xi=\zeta$, Eq. (\ref{eq:condshift2}) has left-right symmetry in the position probability distribution, but not otherwise; that is, the parameters $\xi$ and $\zeta$ introduce asymmetry into the position-space probability distribution. Therefore, a coin operation with $\xi \neq \zeta$ in Eq. (\ref{coin}) can be called a biased quantum coin operation: it biases the QW probability distribution even for a particle initially in the symmetric superposition state (Fig. \ref{fig:qw2}) \cite{CSL08}. However, we should note that, irrespective of the quantum coin operation used, the QW can also be biased by choosing an asymmetric initial state of the particle (for example, the Hadamard walk of a particle initially in the state $|0\rangle$ or the state $|1\rangle$). \begin{figure}[ht] \begin{center} \epsfig{figure=fig2.eps, width=9.0cm} \caption{Spread of the probability distribution for different values of $\xi$, $\theta$, $\zeta$ using the quantum coin operator $C_{\xi, \theta, \zeta}$. The parameter $\xi$ shifts the distribution to the left: (a) $(\xi, \theta, \zeta) = ( \frac{\pi}{6}, \frac{\pi}{6}, 0)$ and (c) $(\xi, \theta, \zeta)= (\frac{5 \pi}{12}, \frac{\pi}{3}, 0 )$. The parameter $\zeta$ shifts it to the right: (b) $(\xi, \theta, \zeta)= (0, \frac{ \pi}{6}, \frac{\pi}{6})$ and (d) $(\xi, \theta, \zeta) = (0, \frac{\pi}{3}, \frac{5 \pi}{12})$. 
The initial state of the particle is $|\Psi_{ins}\rangle = \frac{1}{\sqrt{2}}(|0\rangle + i |1\rangle) \otimes |\psi_{0}\rangle$ and the distribution is for 100 steps.} \label{fig:qw2} \end{center} \end{figure} \subsection{Many-particle quantum walk} \label{mbqw1} To define a many-particle QW in one dimension, we consider an $M$-particle system with one non-interacting particle at each position (Fig. \ref{mi}). The $M$ identical particles on $M$ lattice points, with each particle having its own coin and position Hilbert space, have the total Hilbert space $\mathcal{H}=\left( \mathcal{H}_c \otimes \mathcal{H}_p \right)^{\otimes M}$. We assume the particles to be distinguishable. \par Each step of the QW on the $M$-particle system is given by the application of the operator $W_{0,\theta, 0}^{\otimes M}$. The initial state that we consider for the many-particle system in one dimension is
\begin{equation} |\Psi_{ins}^{M}\rangle = \bigotimes_{j=-\frac{M-1}{2}}^{j=\frac{M-1}{2}} \left( \frac{|0\rangle + i|1\rangle}{\sqrt{2}} \right) \otimes |\psi_{j}\rangle. \label{initialMBQWstate} \end{equation}
\begin{figure}[ht] \begin{center} \includegraphics[width=8.5cm]{fig3.eps} \caption{Many-particle state with one non-interacting particle at each lattice position.} \label{mi} \end{center} \end{figure} \begin{figure}[ht] \includegraphics[width=9.0cm]{fig4.eps} \caption{(color online) Probability distribution of 40 particles, initially one per lattice position, subjected to QWs of different numbers of steps. The initial state of each particle is $\frac{1}{\sqrt 2}(|0\rangle + i |1\rangle)$, evolved in position space using the Hadamard operator $C_{0, \pi/4, 0}$ as the quantum coin. 
The distribution spreads in the position space with an increase in the number of steps.}
\label{miqw}
\end{figure}
\par
For an $M$-particle system after $t$ steps of the QW, the Hilbert space consists of the tensor product of the single-lattice-position Hilbert spaces, which are $(2t+M+1)$ in number. That is, after $t$ steps of the QW, the $M$ particles are spread between $(j-t)$ and $(j+t)$. In principle, each lattice point is associated with a Hilbert space spanned by two subspaces, a zero-particle subspace and a one-particle subspace spanned by the two possible states of the coin, $|0\rangle$ and $|1\rangle$. Therefore, the dimension associated with each lattice point will be $3^{M}$ and the dimension of the total Hilbert space is $(3^{M})^{2t+M+1}$. Fig. \ref{miqw} shows the probability distribution of the many-particle system with an increase in the number of steps of the QW.
\section{Entanglement}
\label{entanglement}
To efficiently make use of entanglement as a physical resource, the amount of entanglement in a given system has to be quantified. Entanglement in a pure bipartite system, or a system with two Hilbert spaces, is quantified using standard measures known as the entropy of entanglement and the Schmidt number \cite{NC00}. The entropy of entanglement corresponds to the von Neumann entropy, a functional of the eigenvalues of the reduced density matrix, and the Schmidt number is the number of non-zero Schmidt coefficients in the Schmidt decomposition. For multipartite states, quite a few good entanglement measures have been proposed \cite{CKW00, BL01, EB01, MW02, VDM03, Miy08, HJ08}. However, as the number of particles in the system increases, the complexity of finding an appropriate entanglement measure also increases, making scalability impractical.
Among the proposed measures, to address this scalability problem, Meyer and Wallach proposed a {\em scalable} global entanglement measure (a polynomial measure) to quantify entanglement in many-particle systems \cite{MW02}.
\par
In this section, we will first discuss the entanglement of a particle with the position space, quantified using the entropy of entanglement. Later we will discuss spatial entanglement quantified using the Meyer-Wallach (M-W) measure. Spatial entanglement has been explored earlier using different methods; for example, in an ideal bosonic gas it has been studied using off-diagonal long-range order \cite{HAKV07}. For our investigations, we consider a distinguishable many-particle system, implement the QW, and use the M-W measure to quantify spatial entanglement. In this system the dynamics of the particles can be controlled by varying the quantum coin parameters, the initial state of the particles, the number of particles in the system, and the number of steps of the QW. In particular, we consider particles in one-dimensional open and closed chains. The spatial entanglement thus created can be used, for example, to create entanglement between distant atoms in an optical lattice \cite{SGC09} or as a channel for state transfer in spin-chain systems \cite{Bos03, CDE04, CDD05}.
\subsection{Single-particle--position entanglement}
\label{cpent}
The QW entangles the particle (coin) and the position degrees of freedom. To quantify this, let us consider a DTQW on a particle initially in the state given by Eq. (\ref{qw:in1}) with a simple form of the coin operation,
\begin{equation}
\label{coin1}
C_{0,\theta, 0} \equiv \left( \begin{array}{clcr} \cos(\theta) & & \sin(\theta) \\ \sin(\theta) & & -\cos(\theta) \end{array} \right).
\end{equation}
After the first step, $W_{0,\theta,0} = S(C_{0, \theta, 0} \otimes {\mathbbm 1})$, the state takes the form
\begin{eqnarray}
| \Psi_{1} \rangle &=& W_{0,\theta, 0} | \Psi_{ins}\rangle = \gamma \left( |0 \rangle \otimes |\psi_{j-1} \rangle \right) + \delta \left( |1 \rangle \otimes |\psi_{j+1} \rangle \right), \label{evolution01}
\end{eqnarray}
where $\gamma = \left( \frac{\cos(\theta) + i\sin(\theta)}{\sqrt{2}} \right)$ and $\delta = \left( \frac{\sin(\theta) - i\cos(\theta)}{\sqrt{2}} \right)$. The Schmidt rank of $|\Psi_1\rangle$ is $2$, which implies entanglement in the system. The entanglement as a function of the number of steps can be further quantified by computing the von Neumann entropy of the reduced density matrix of the position subspace.
\begin{figure}[ht]
\includegraphics[width=9.0cm]{fig5.eps}
\caption{(color online) Entanglement of a single particle with the position space when subjected to the QW. The initial state of the particle is $\frac{1}{\sqrt 2}(|0\rangle + i |1\rangle)$ and is evolved in position space using different values of $\theta$ in the quantum coin operation $C_{0, \theta, 0}$. The entanglement initially oscillates and approaches an asymptotic value with an increase in the number of steps. For smaller values of $\theta$ the entanglement is higher, and it decreases with an increase in $\theta$. The initial oscillation is also larger for higher $\theta$.}
\label{enta}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=9.0cm]{fig6.eps}
\caption{(color online) Entanglement of a single particle with the position space when subjected to the QW. The initial state of the particle is given by Eq. (\ref{qw:in}) with $\delta = \frac{2\pi}{9}$ and $\eta = \frac{\pi}{6}$ and is evolved in position space using different values of $\theta$ in the quantum coin operation $C_{0, \theta, 0}$.
The entanglement initially oscillates and approaches an asymptotic value with an increase in the number of steps. For smaller values of $\theta$ the entanglement is higher, and it decreases with an increase in $\theta$. The initial oscillation is also larger for higher $\theta$.}
\label{enta1}
\end{figure}
\par
Fig. \ref{enta} shows a plot of the entanglement against the number of steps of the QW on a particle initially in a symmetric superposition state, using different values of $\theta$ in the operation $W_{0, \theta, 0}$. The von Neumann entropy of the reduced density matrix of the coin is used to quantify the entanglement between the coin and the position in Fig. \ref{enta}. That is,
\begin{equation}
E_{c}(t) = - \sum_{j} \lambda_{j} \log_{2}(\lambda_{j}),
\end{equation}
where $\lambda_{j}$ are the eigenvalues of the reduced density matrix of the coin after $t$ steps (time). The entanglement initially oscillates and reaches an asymptotic value with increasing number of steps. In the asymptotic limit, the entanglement value decreases with an increase in $\theta$, and this dependence can be attributed to the spread of the amplitude distribution in position space. That is, with an increase in $\theta$, constructive interference of quantum amplitudes toward the origin becomes prominent, narrowing the distribution in the position space. In Fig. \ref{enta1}, the process is repeated for a particle initially in an asymmetric superposition state $|\Psi_{in}\rangle = \left [\cos(\frac{2\pi}{9}) |0\rangle + e^{i \frac{\pi}{6}} \sin(\frac{2\pi}{9})|1\rangle \right ] \otimes |\psi_{0}\rangle$. Comparing Fig. \ref{enta1} with Fig. \ref{enta}, we can note the increase in entanglement and the decrease in the oscillation. This observation can be explained by going back to our earlier note on the biased QW in Sec. \ref{spqw}. In Fig.
\ref{fig:qw2} we note that biasing the coin operation leads to an asymmetry in the probability distribution, with an increase in peak height on one side and a decrease on the other (the increase and decrease are in reference to the symmetric distribution). A similar biasing effect can also be reproduced by choosing an asymmetric initial state of the particle. The biased distribution, with an increased probability on one side, contributes to a reduced oscillation in the distribution. This in turn results in an increase of the von Neumann entropy, that is, the entanglement.
\subsection{Spatial entanglement}
\label{spentqw}
{\em Spatial entanglement} is the entanglement between the lattice points. This entanglement takes the form of non-local particle-number correlations between spatial modes. To observe spatial entanglement we first need to associate the lattice with the state of a particle. We then consider the evolution of a single-particle QW, followed by the evolution of a many-particle QW, in order to understand spatial entanglement.
\subsubsection{Using a single-particle quantum walk}
\label{spentqw1}
In a single-particle QW, each lattice point is associated with a Hilbert space spanned by two subspaces. The first is the zero-particle subspace, which does not involve any coin (particle) states. The other is the one-particle subspace, spanned by the two possible states of the coin, $|0\rangle$ and $|1\rangle$. To obtain the spatial entanglement we will write the state of the particle in the form of the state of a lattice. Following from Eq.
(\ref{evolution01}), the state of the particle after the first two steps of the QW takes the form
\begin{eqnarray}
| \Psi_{2} \rangle &=& W_{0,\theta, 0} | \Psi_{1} \rangle = \gamma \left [ \cos (\theta) |0\rangle |\psi_{j-2} \rangle + \sin (\theta) |1\rangle|\psi_{j} \rangle \right ] + \delta \left [ \sin(\theta) |0\rangle |\psi_{j} \rangle - \cos(\theta) |1\rangle|\psi_{j+2} \rangle \right ]. \label{evolution02}
\end{eqnarray}
In order to obtain the state of the lattice we can redefine the position states in the following way: the occupied position state $|\psi_j\rangle$ becomes $|1_{j}\rangle$, meaning that the $j^{th}$ position is occupied and the rest of the lattice is empty. Therefore, we can rewrite Eq. (\ref{evolution02}) as
\begin{eqnarray}
| \Psi_{2} \rangle & = & \gamma \left [ \cos (\theta) |0\rangle | 1_{j-2} \rangle + \sin (\theta) |1\rangle| 1_{j} \rangle \right ] + \delta \left [ \sin(\theta) |0\rangle | 1_{j} \rangle - \cos(\theta) |1\rangle| 1_{j+2} \rangle \right ]. \label{evolution03}
\end{eqnarray}
Since we are interested in the spatial entanglement, we project this state onto one of the coin states, so that we can ignore the entanglement between the coin and the position states and consider only the lattice states. Here we choose the coin state $|0\rangle$ and project to obtain the state of the lattice in the form
\begin{equation}
|\Psi_{lat}\rangle = |0\rangle \left(\gamma \cos(\theta)|1_{j-2}\rangle + \delta \sin(\theta) |1_{j}\rangle \right). \label{latticestate}
\end{equation}
Each lattice site $j$ can be considered as a Hilbert space with basis states $|1_{j}\rangle$ (occupied) and $|0_{j}\rangle$ (unoccupied). Then, the above Eq.
(\ref{latticestate}) in the extended Hilbert space of the lattice can be rewritten in terms of occupied and unoccupied lattice states as
\begin{equation}
|\Psi_{lat}^{\prime}\rangle = \gamma \cos(\theta) |1_{j-2} \,0_{j}\rangle+\delta \sin(\theta) |0_{j-2} \,1_{j} \rangle. \label{latticestate1}
\end{equation}
We can see that after the first two steps of the QW the lattice points $j$ and $(j-2)$ are entangled. One can check that the lattice points $j$ and $(j+2)$ are entangled if we instead choose the coin state $|1\rangle$. With an increase in the number of steps, the state of the particle spreads in position space, and the projection onto one of the coin states reduces it to a pure state for which one may compute the spatial entanglement according to the above prescription. Therefore, with an increase in the number of steps, the spatial entanglement from a single-particle QW decreases.
\subsubsection{Using many-particle quantum walk}
\label{mbqw}
We will now extend the study to the evolution of spatial entanglement as the QW progresses on a many-particle system.
\par
Let us first consider the analysis of the first two steps of the Hadamard walk ($\theta = \pi/4$ in Eq. (\ref{coin1})) on a three-particle system with the initial state
\begin{equation}
|\Psi_{ins}^{3p}\rangle = \bigotimes_{j=-1}^{+1} \left( \frac{|0\rangle + i|1\rangle}{\sqrt{2}} \right) \otimes |\psi_{j}\rangle. \label{initialMBQWstate3p}
\end{equation}
We will label the three particles at positions $-1$, $0$ and $1$ as ${\rm A}$, ${\rm B}$ and ${\rm C}$, respectively.
Since the evolution of these particles is independent, we write down the state after the first step as a tensor product of the states of the three particles:
\begin{align}
|\Psi^{3p}_{1}\rangle = W_{0, \theta, 0}^{\otimes 3}|\Psi_{ins}^{3p}\rangle = \left[ \gamma |0\rangle |-2 \rangle + \delta |1\rangle| 0 \rangle \right]_{\rm A} \otimes \left[ \gamma |0\rangle |-1 \rangle + \delta |1\rangle| +1 \rangle \right]_{\rm B} \otimes \left[ \gamma |0\rangle |0 \rangle + \delta |1\rangle| +2 \rangle \right]_{\rm C}, \label{threeparticle1step}
\end{align}
where $\gamma = (1+i)/2$ and $\delta = (1-i)/2$. After two steps the tensor product of the states of the three particles is given by
\begin{align}
|\Psi^{3p}_{2}\rangle &= \left[ \gamma \left( \frac{|0\rangle |-3 \rangle+|1\rangle|-1 \rangle}{\sqrt{2}} \right) + \delta \left( \frac{|0\rangle |-1 \rangle - |1\rangle|+1 \rangle}{\sqrt{2}} \right) \right]_{\rm A} \nonumber \\ &\otimes \left[ \gamma \left( \frac{|0\rangle |-2 \rangle+|1\rangle|0 \rangle}{\sqrt{2}} \right) + \delta \left( \frac{|0\rangle |0 \rangle - |1\rangle|+2 \rangle}{\sqrt{2}} \right) \right]_{\rm B} \nonumber \\ &\otimes \left[ \gamma \left( \frac{|0\rangle |-1 \rangle+|1\rangle|+1 \rangle}{\sqrt{2}} \right) + \delta \left( \frac{|0\rangle |+1 \rangle - |1\rangle|+3 \rangle}{\sqrt{2}} \right) \right]_{\rm C}. \label{threeparticlestate}
\end{align}
By projecting this state onto one of the coin states (we choose the state $|0\rangle \otimes |0\rangle \otimes |0\rangle$) we can obtain a state of the lattice for which the spatial entanglement may be computed.
Then the state of the lattice after projection and normalization is
\begin{align}
|\Psi_{lat}\rangle =& \gamma^3 \, |{\rm A}\rangle_{-3}|{\rm B}\rangle_{-2}|{\rm C}\rangle_{-1} \nonumber \\ & + \gamma^2 \delta \left( \, |{\rm A}\rangle_{-3}|{\rm B}\rangle_{-2}|{\rm C}\rangle_{1} + |{\rm A}\rangle_{-3}|{\rm B}\rangle_{0}|{\rm C}\rangle_{-1} +|{\rm AC}\rangle_{-1}|{\rm B}\rangle_{-2} \right) \nonumber \\ & + \gamma \delta^2 \left( \, |{\rm A}\rangle_{-3}|{\rm B}\rangle_{0}|{\rm C}\rangle_{1} + |{\rm AC}\rangle_{-1}|{\rm B}\rangle_{0} + |{\rm A}\rangle_{-1}|{\rm B}\rangle_{-2}|{\rm C}\rangle_{1} \right) \nonumber \\ & + \delta^3 \, |{\rm A}\rangle_{-1}|{\rm B}\rangle_{0}|{\rm C}\rangle_{1}, \label{mblatticestate}
\end{align}
where ${\rm A}$, ${\rm B}$ and ${\rm C}$ are the particle labels and the subscripts are the position labels. In a similar manner we can obtain $|\Psi_{lat}\rangle$ for a system with a larger number of particles. The next task is then to calculate the spatial entanglement.
\section{Calculating spatial entanglement in a multipartite system}
\label{mpent}
In a system with two particles, the state is separable if we can write it as a tensor product of the individual particle states, and entangled if not. For a system with $M > 2$ particles, a state is said to be fully separable if it can be written as
\begin{equation}
| \psi \rangle = | \phi_1 \rangle \otimes | \phi_2 \rangle \otimes \cdots \otimes | \phi_k \rangle, \label{partialentangled}
\end{equation}
with $k=M$, where $|\phi_i\rangle$ denotes the state of the $i^{th}$ particle. When $k < M$ the state is said to be \emph{partially} entangled, and when $k=1$ the state is fully entangled.
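As a concrete illustration of entanglement between lattice points, the two-site lattice state of Eq. (\ref{latticestate1}) is small enough to check numerically. The following Python sketch (our own illustration, not part of the original analysis; the function names are ours) builds the normalized two-site state, traces out site $j$, and evaluates the von Neumann entropy of the remaining site; for the Hadamard walk ($\theta=\pi/4$) the two sites come out maximally entangled.

```python
import numpy as np

def lattice_state(theta):
    # Amplitudes of Eq. (latticestate1): gamma*cos(theta)|1_{j-2} 0_j> + delta*sin(theta)|0_{j-2} 1_j>
    gamma = (np.cos(theta) + 1j * np.sin(theta)) / np.sqrt(2)
    delta = (np.sin(theta) - 1j * np.cos(theta)) / np.sqrt(2)
    psi = np.zeros(4, dtype=complex)      # basis |00>, |01>, |10>, |11> for sites (j-2, j)
    psi[2] = gamma * np.cos(theta)        # |1_{j-2} 0_j>
    psi[1] = delta * np.sin(theta)        # |0_{j-2} 1_j>
    return psi / np.linalg.norm(psi)      # renormalize after the projection

def entropy_site(psi):
    # von Neumann entropy of site (j-2) after tracing out site j
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_a = np.trace(rho, axis1=1, axis2=3)
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

print(entropy_site(lattice_state(np.pi / 4)))   # Hadamard walk: 1.0 (maximally entangled sites)
```

For general $\theta$ the sketch returns $-\cos^2\theta \log_2 \cos^2\theta - \sin^2\theta \log_2 \sin^2\theta$, consistent with the amplitudes in Eq. (\ref{latticestate1}).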
\par
Rather than using the von Neumann entropy to quantify the multipartite entanglement of a given state $\rho$, one often prefers to consider the purity, which corresponds (up to a constant) to the linear entropy, that is, the first-order term in the expansion of the von Neumann entropy around its maximum, given by
\begin{equation}
E = \frac{d}{d-1} \left[ 1 - {\rm Tr} \rho^2 \right] \label{linentropy}
\end{equation}
for a $d$-dimensional particle Hilbert space \cite{PWK04}. To quantify the entanglement of multipartite pure states, one commonly used measure is the Meyer-Wallach (M-W) measure \cite{MW02}. It measures the entanglement of a single particle with the rest of the system, averaged over the whole system, and is given by
\begin{equation}
E_{MW} = \frac{d}{d-1} \left[ 1 - \frac{1}{L} \sum_{i=1}^{L} {\rm Tr} \rho_i^2 \right], \label{W-M}
\end{equation}
where $L$ is the system size and $\rho_i$ is the reduced density matrix of the $i^{th}$ subsystem. The M-W measure does not diverge with increasing system size and is relatively easy to calculate.
\par
In a multipartite QW the dimension at each lattice point, after projection onto one particular coin state, is $2^M$, where $M$ is the number of particles. Hence, the expression for the entanglement will be
\begin{align}
E_{MW}(|\psi_{lat}\rangle) & = \frac{2^M}{2^M-1}\left(1-\frac{1}{2t+M+1}\sum_{j=-(t+\frac{M}{2})}^{t+\frac{M}{2}}{\rm Tr}\,\rho_j^2\right), \label{mw}
\end{align}
where $t$ is the number of steps and $\rho_j$ is the reduced density matrix of the $j^{th}$ lattice point. The reduced density matrix $\rho_{j}$ can be written as
\begin{align}
\rho_j &= \sum_{k} p^j_k|k\rangle\langle k|,
\end{align}
where $|k\rangle$ is one of the $2^M$ possible states available for a lattice point and $p^j_k$ can be calculated once we have the probability distribution of each individual particle on the lattice.
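Since $\rho_j$ is diagonal in the configuration basis, ${\rm Tr}\,\rho_j^2 = \sum_k (p^j_k)^2$, and for distinguishable non-interacting particles the product form $p^j_k = \prod_i a_j^{(l_i)}$ lets this factorize as $\prod_i [(a_j^{(1)})^2 + (1 - a_j^{(1)})^2]$. A minimal Python sketch of Eq. (\ref{mw}) under this assumption (our own illustration; the function name is ours, and the $1/(2t+M+1)$ average is replaced by the mean over the sites supplied):

```python
import numpy as np

def meyer_wallach(occ):
    """occ[i, j]: probability that particle i occupies lattice site j.
    Each site's rho_j is diagonal with p^j_k = prod_i a_j^{(l_i)},
    so Tr rho_j^2 factorizes over the particles."""
    M, L = occ.shape
    tr_rho2 = np.prod(occ**2 + (1 - occ)**2, axis=0)   # one value per site j
    return (2**M / (2**M - 1)) * (1 - tr_rho2.mean())

# Two particles, each spread over two of three sites:
occ = np.array([[0.5, 0.5, 0.0],
                [0.0, 0.5, 0.5]])
print(meyer_wallach(occ))   # 7/9
```

Fully localized particles (each $a_j^{(1)} \in \{0,1\}$) give $E_{MW}=0$, as expected for a separable lattice state.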
\par
Since we have $M$ distinguishable particles, there are $2^M$ configurations of a lattice point, depending on whether or not each particle is present at that lattice point after freezing the coin state of the particles. This set of configurations forms the basis for a single-lattice-point Hilbert space. We can now calculate $p^j_k$, the probability of the $k^{th}$ configuration at the $j^{th}$ lattice point, as follows. Let $a_j^{(l_i)}$ be the probability of the $i^{th}$ particle being or not being at the $j^{th}$ lattice point, depending on $l_i$. If $l_i$ is $1$, then it gives the probability of the particle being at the lattice point. If $l_i$ is $0$, then $a^{(l_i)}_j$ is the probability of the particle not being at the lattice point, that is, $a_j^{(0)} = 1-a_j^{(1)}$. Hence, we can write
\begin{align}
p^j_k &= \prod_{i}a^{(l_i)}_j.
\end{align}
Once we have the probability of each particle at a given lattice position, the spatial entanglement can be conveniently calculated. Since the QW is a controlled evolution, one can obtain the probability distribution of each particle over all lattice positions. In fact, one can easily control the probability distribution, and hence the entanglement, by varying the quantum coin parameters during the QW process.
\par
\begin{figure}[ht]
\begin{center}
\includegraphics[width=9.5cm]{fig7.eps}
\caption{(color online) Evolution of spatial entanglement with an increase in the number of steps of the QW for different numbers of particles in an open one-dimensional lattice chain. The entanglement first increases; with a further increase in the number of steps, the number of lattice positions exceeds the number of particles in the system, resulting in a decrease of the spatial entanglement.
The distribution is obtained by implementing the QW on particles in the initial state $\frac{1}{\sqrt 2}(|0\rangle + i |1\rangle)$ with the Hadamard operation $C_{0, \pi/4, 0}$ as the quantum coin operation.}
\label{eeqw}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.5cm]{fig8.eps}
\caption{(color online) Quantity of spatial entanglement for 10 particles after 10 steps and 20 particles after 20 steps of the QW on a one-dimensional lattice using different values of $\theta$ in the quantum coin operation $C_{0, \theta, 0}$. For (a) and (b), the distribution is for particles initially in the symmetric superposition state $\frac{1}{\sqrt{2}}(|0\rangle + i|1\rangle)$, and for (c) the particles' initial state is $|0\rangle$ (the result is the same for the state $|1\rangle$). The quantity of entanglement is higher for $\theta$ closer to $0$ and $\pi/2$ than for intermediate values. We note that the asymmetric probability distribution due to an asymmetric initial state in case (c) contributes to an increase in the quantity of spatial entanglement. When $\theta = \pi/2$, for every even number of steps of the QW the system returns to the initial state, where the entanglement is $0$. The entanglement is also $0$ for $\theta = 0$.}
\label{entqwtheta}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.5cm]{fig9.eps}
\caption{(color online) Evolution of spatial entanglement for systems with different numbers of particles in a closed chain. With an increase in the number of steps, the entanglement value remains close to the asymptotic value, with some peaks in between. The peaks can be accounted for by the crossover of the leftward- and rightward-propagating amplitudes of the internal state of the particles during the QW. The peaks are more frequent for a chain with a smaller number of particles.
An increase in the number of particles in the system results in a decrease of the entanglement value. The distribution is obtained by using $\frac{1}{\sqrt{2}}(|0\rangle + i|1\rangle)$ as the initial state of all particles and the Hadamard operation $C_{0, \pi/4, 0}$ as the quantum coin operation.}
\label{Ering}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.5cm]{fig10.eps}
\caption{(color online) Quantity of spatial entanglement for 20 particles on a closed chain after 20 steps of the QW using different values of $\theta$ in the quantum coin operation $C_{0, \theta, 0}$. The distribution is for particles initially in the state $\frac{1}{\sqrt{2}}(|0\rangle + i|1\rangle)$. Since the system is a closed chain, the QW does not expand the position Hilbert space, and therefore for all values of $\theta$ from $0$ to $\pi/2$ the entanglement value remains roughly uniform, except for a small peak at smaller values of $\theta$. For $\theta=0$, when the number of steps is equal to the number of particles, the amplitudes go round the chain and return to the initial state, making the entanglement $0$; for $\theta = \pi/2$, for every even number of steps of the QW the system returns to the initial state, where the entanglement is again $0$.}
\label{Eringtheta}
\end{center}
\end{figure}
\par
Fig. \ref{eeqw} shows the phase diagram of the spatial entanglement using a many-particle QW. Data for the phase diagram were obtained numerically by subjecting many-particle systems with different numbers of particles to the QW with an increasing number of steps. The quantity of spatial entanglement was computed using Eq. (\ref{mw}).
\par
Here, we have chosen the Hadamard operation $C_{0, \pi/4, 0}$ as the quantum coin operation and $\frac{1}{\sqrt 2}(|0\rangle + i |1\rangle)$ as the initial state of the particles for the evolution of the many-particle QW.
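The numerical procedure behind these plots can be outlined in a few lines. The Python sketch below is our own reconstruction, not the original code, with illustrative parameters $M=6$ particles and $t=4$ steps and function names of our choosing: each particle's single-particle Hadamard walk is evolved to obtain the occupation probabilities $a_j^{(1)}$, which are then inserted into Eq. (\ref{mw}).

```python
import numpy as np

def walk_distribution(t, start, size, theta=np.pi / 4):
    # Single-particle DTQW on a line of `size` sites with coin C_{0,theta,0};
    # initial coin state (|0> + i|1>)/sqrt(2) at site `start`.
    amp = np.zeros((2, size), dtype=complex)
    amp[:, start] = np.array([1, 1j]) / np.sqrt(2)
    c, s = np.cos(theta), np.sin(theta)
    for _ in range(t):
        a0 = c * amp[0] + s * amp[1]      # coin operation on each site
        a1 = s * amp[0] - c * amp[1]
        amp[0] = np.roll(a0, -1)          # coin |0>: shift left
        amp[1] = np.roll(a1, +1)          # coin |1>: shift right (no wrap is reached for t <= start)
    return (np.abs(amp)**2).sum(axis=0)   # occupation probabilities a_j^{(1)}

M, t = 6, 4
size = 2 * t + M + 1                      # lattice large enough for the spread
occ = np.array([walk_distribution(t, j + t, size) for j in range(M)])
tr_rho2 = np.prod(occ**2 + (1 - occ)**2, axis=0)    # Tr rho_j^2 per site (diagonal rho_j)
E = (2**M / (2**M - 1)) * (1 - tr_rho2.mean())      # Eq. (mw)
print(round(E, 3))
```

Varying $\theta$, $t$ and $M$ in this sketch traces out curves of the kind shown in Figs. \ref{eeqw} and \ref{entqwtheta}.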
To see the variation of entanglement for a fixed number of particles with an increase in the number of steps, we can pick a line parallel to the $y$ axis; that is, fix the number of particles and follow the variation of entanglement with the number of steps.
\par
In Fig. \ref{eeqw}, we see that for a fixed number of particles the entanglement at first increases to some value before gradually falling. For $M=12$ we can note that the peak value is about $0.5$. With an increase in the number of steps of the QW, the number of lattice positions over which the particles spread increases, resulting in a decrease of the spatial entanglement (see Eq. (\ref{mw})). Note that the entanglement begins to decrease before the number of steps equals the number of particles. This is because for the Hadamard walk the spread of the probability distribution after $t$ steps is between $\frac{-t}{\sqrt 2}$ and $\frac{t}{\sqrt 2}$ \cite{CSL08}.
\par
If we instead fix the number of steps and measure the entanglement while increasing the number of particles in the system, the quantity of spatial entanglement first decreases and then starts increasing with an increase in the number of particles.
\par
To show the variation of spatial entanglement with the quantum coin parameter $\theta$, we plot the spatial entanglement as a function of $\theta$ for a system with 10 particles after 10 steps of the QW and for a system with 20 particles after 20 steps of the QW in Fig. \ref{entqwtheta}. In this figure, (a) and (b) are plots that use the symmetric superposition state $\frac{1}{\sqrt 2}(|0\rangle + i|1\rangle)$ (unbiased QW) as the initial state of all the particles, and (c) is the plot with all the particles in one of the basis states, $|0\rangle$ or $|1\rangle$ (biased QW), as the initial state. We note that the quantity of entanglement is higher for $\theta$ values closer to $0$ and $\pi/2$ and dips for values in between, for all three cases.
Biasing the QW, plot (c), shows a slight increase in the quantity of entanglement compared to the unbiased case, plot (b). A similar effect is seen by biasing the QW using the two parameters $\xi, \zeta$ in the coin operation $C_{\xi, \theta,\zeta}$ on particles initially in a symmetric superposition state.
\par
{\it Closed chain:} Since most physical systems considered for implementation will be of a definite dimension, we extend our calculations to one of the simplest examples of a closed geometry, an $n$-cycle. For a QW on an $n$-cycle, the shift operation, Eq. (\ref{eq:condshift}), takes the form
\begin{eqnarray}
\label{eq:condshift1}
S = |0\rangle \langle 0|\otimes \sum_{j = 0}^{n-1} |\psi_{j-1 \mbox{~mod~}n}\rangle \langle \psi_{j} | +|1\rangle \langle 1 |\otimes \sum_{j =0}^{n-1} |\psi_{j+1 \mbox{~mod~}n}\rangle \langle \psi_{j}|.
\end{eqnarray}
When we consider a many-particle system in a closed chain, with the number of lattice positions equal to the number of particles $M$, the QW process does not expand the position Hilbert space as it does on an open chain (line). Therefore the spatial entanglement does not decrease at later times as it does for a walk on an open chain, but remains close to the asymptotic value. Fig. \ref{Ering} shows the evolution of entanglement for systems with different numbers of particles in a closed chain. The peaks seen in the plot can be accounted for by the crossover of the leftward- and rightward-propagating amplitudes of the internal state of the particles during the QW process. The frequency of the peaks is higher for a smaller number of particles (a smaller closed chain). Also, note that an increase in the number of particles and the number of lattice points in the closed cycle results in a decrease in the spatial entanglement of the system.
\par
In Fig.
\ref{Eringtheta}, the value of the spatial entanglement for 20 particles on a closed chain after 20 steps of the QW, using different values of $\theta$ in the quantum coin operation $C_{0, \theta, 0}$, is presented. For all values of $\theta$ from $0$ to $\pi/2$ the entanglement value remains roughly uniform except for the extreme values of $\theta$. For $\theta=0$, the amplitudes go round the ring and return to the initial state, making the spatial entanglement $0$. For $\theta = \pi/2$, for every even number of steps of the QW the system returns to the initial state, where the spatial entanglement is again $0$.
\par
Therefore, spatial entanglement on a large lattice space can be created, controlled and optimized for a maximum entanglement value by varying the quantum coin parameters and the number of particles in the multi-particle QW.
\section{Conclusion}
\label{conc}
We have presented the evolution of spatial entanglement in a many-particle system subjected to a QW process. By considering many particles in one-dimensional open and closed chains, we have shown that spatial entanglement can be generated and controlled by varying the quantum coin parameters, the initial state and the number of steps in the dynamics of the QW process. The spatial entanglement generated can have potential applications in quantum information theory and other physical processes.
\begin{center}
{\bf Acknowledgement}
\end{center}
C.M.C. is thankful to Mike and Ophelia Lazaridis for financial support at IQC, and to ARO, QuantumWorks and CIFAR for travel support. C.M.C. also thanks IMSc, Chennai, India for the hospitality during November--December 2008. S.K.G. thanks Aiswarya Cyriac for help with the programming.
\begin{thebibliography}{99}
\bibitem{NC00} Michael Nielsen and Isaac Chuang, {\it Quantum Computation and Quantum Information}, Cambridge University Press, 2000.
\bibitem{HHH09} Ryszard Horodecki, Pawel Horodecki, Michal Horodecki, and Karol Horodecki, Rev. Mod. Phys.
{\bf 81}, No. 2, 865-942 (2009).
\bibitem{ON02} T. J. Osborne and M. A. Nielsen, Phys. Rev. A {\bf 66}, 032110 (2002).
\bibitem{OAF02} A. Osterloh, L. Amico, G. Falci, and R. Fazio, Nature (London) {\bf 416}, 608 (2002).
\bibitem{OROM06} T. R. de Oliveira, G. Rigolin, M. C. de Oliveira, and E. Miranda, Phys. Rev. Lett. {\bf 97}, 170401 (2006).
\bibitem{BH02} S. Bose and D. Home, Phys. Rev. Lett. {\bf 88}, 050401 (2002).
\bibitem{LHL03} M. S. Leifer, L. Henderson, and N. Linden, Phys. Rev. A {\bf 67}, 012306 (2003).
\bibitem{RER07} O. Romero-Isart, K. Eckert, C. Rod\'o, and A. Sanpera, J. Phys. A: Math. Theor. {\bf 40}, 8019-8031 (2007).
\bibitem{WS09} Xiaoting Wang and S. G. Schirmer, Phys. Rev. A {\bf 80}, 042305 (2009).
\bibitem{AFOV08} L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Rev. Mod. Phys. {\bf 80}, 517 (2008).
\bibitem{Kem03} Julia Kempe, Contemporary Physics {\bf 44}, 307-327 (2003).
\bibitem{CLX05} Ivens Carneiro, Meng Loo, Xibai Xu, Mathieu Girerd, Viv Kendon and Peter L. Knight, New J. Phys. {\bf 7}, 156 (2005).
\bibitem{Amb03} A. Ambainis, Int. Journal of Quantum Information {\bf 1}, No. 4, 507-518 (2003).
\bibitem{CL08} C. M. Chandrashekar and R. Laflamme, Phys. Rev. A {\bf 78}, 022314 (2008).
\bibitem{OKA05} Takashi Oka, Norio Konno, Ryotaro Arita, and Hideo Aoki, Phys. Rev. Lett. {\bf 94}, 100602 (2005).
\bibitem{ECR07} Gregory S. Engel {\it et al.}, Nature {\bf 446}, 782-786 (2007).
\bibitem{MRL08} M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, J. Chem. Phys. {\bf 129}, 174106 (2008).
\bibitem{DLX03} Jiangfeng Du, Hui Li, Xiaodong Xu, Mingjun Shi, Jihui Wu, Xianyi Zhou, and Rongdian Han, Phys. Rev. A {\bf 67}, 042316 (2003).
\bibitem{RLB05} C. A. Ryan, M. Laforest, J. C. Boileau, and R. Laflamme, Phys. Rev. A {\bf 72}, 062317 (2005).
\bibitem{PLP08} H. B. Perets, Y. Lahini, F. Pozzi, M. Sorel, R. Morandotti, and Y. Silberberg, Phys. Rev. Lett. {\bf 100}, 170506 (2008).
\bibitem{KFC09} M. Karski, L. F\"orster, J.-M. Choi, A. Steffen, W.
Alt, D. Meschede, and A. Widera, Science {\bf 325}, 174 (2009).
\bibitem{TM02} B. C. Travaglione and G. J. Milburn, Phys. Rev. A {\bf 65}, 032310 (2002).
\bibitem{RKB02} W. D\"ur, R. Raussendorf, V. M. Kendon, and H. J. Briegel, Phys. Rev. A {\bf 66}, 052319 (2002).
\bibitem{EMB05} K. Eckert, J. Mompart, G. Birkl, and M. Lewenstein, Phys. Rev. A {\bf 72}, 012327 (2005).
\bibitem{Cha06} C. M. Chandrashekar, Phys. Rev. A {\bf 74}, 032307 (2006).
\bibitem{MBD06} Z.-Y. Ma, K. Burnett, M. B. d'Arcy, and S. A. Gardiner, Phys. Rev. A {\bf 73}, 013401 (2006).
\bibitem{GVR58} G. V. Riazanov, Sov. Phys. JETP {\bf 6}, 1107 (1958).
\bibitem{FH65} R. P. Feynman and A. R. Hibbs, {\it Quantum Mechanics and Path Integrals} (McGraw-Hill, New York, 1965).
\bibitem{ADZ93} Y. Aharonov, L. Davidovich and N. Zagury, Phys. Rev. A {\bf 48}, 1687 (1993).
\bibitem{DM96} David A. Meyer, J. Stat. Phys. {\bf 85}, 551 (1996).
\bibitem{ABN01} A. Ambainis, E. Bach, A. Nayak, A. Vishwanath, and J. Watrous, Proceedings of the 33rd ACM Symposium on Theory of Computing (ACM Press, New York), 60 (2001).
\bibitem{FG98} E. Farhi and S. Gutmann, Phys. Rev. A {\bf 58}, 915 (1998).
\bibitem{FS06} Frederick Strauch, Phys. Rev. A {\bf 74}, 030310 (2006).
\bibitem{C08} C. M. Chandrashekar, Phys. Rev. A {\bf 78}, 052309 (2008).
\bibitem{AKR05} A. Ambainis, J. Kempe and A. Rivosh, Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA) (ACM Press, New York), 1099-1108 (2005).
\bibitem{CSL08} C. M. Chandrashekar, R. Srikanth, and R. Laflamme, Phys. Rev. A {\bf 77}, 032326 (2008).
\bibitem{CKW00} Valerie Coffman, Joydip Kundu, and William K. Wootters, Phys. Rev. A {\bf 61}, 052306 (2000).
\bibitem{BL01} H. Barnum and N. Linden, J. Phys. A: Math. Gen. {\bf 34}, 6787 (2001).
\bibitem{EB01} Jens Eisert and Hans J. Briegel, Phys. Rev. A {\bf 64}, 022306 (2001).
\bibitem{MW02} D. A. Meyer and N. R. Wallach, J. Math. Phys. {\bf 43}, 4273 (2002).
\bibitem{VDM03} Frank Verstraete, Jeroen Dehaene, and Bart De Moor, Phys. Rev. A {\bf 68}, 012103 (2003). \bibitem{Miy08} Akimasa Miyake, Phys. Rev. A {\bf 67}, 012108 (2003). \bibitem{HJ08} Ali Saif M. Hassan and Pramod S. Joag, Phys. Rev. A {\bf 77}, 062334 (2008). \bibitem{HAKV07} L. Heaney, J. Anders, D. Kaszlikowski and V. Vedral, Phys. Rev. A {\bf 76}, 053605 (2007). \bibitem{SGC09} Kathy-Anne Brickman Soderberg, Nathan Gemelke, and Cheng Chin, New Journal of Physics {\bf 11}, 055022 (2009). \bibitem{Bos03} Sougato Bose, Phys. Rev. Lett. {\bf 91}, 207901 (2003). \bibitem{CDE04} Matthias Christandl, Nilanjana Datta, Artur Ekert, and Andrew J. Landahl, Phys. Rev. Lett. {\bf 92}, 187902 (2004). \bibitem{CDD05} Matthias Christandl, Nilanjana Datta, Tony C. Dorlas, Artur Ekert, Alastair Kay, and Andrew J. Landahl, Phys. Rev. A {\bf 71}, 032312 (2005). \bibitem{PWK04} Nicholas A. Peters, Tzu-Chieh Wei, and Paul G. Kwiat, Phys. Rev. A {\bf 70}, 052309 (2004). \end{thebibliography} \end{document}
\begin{document} \title{Characterisations of Testing Preorders for a Finite Probabilistic $\pi$-Calculus} \begin{abstract} We consider two characterisations of the may and must testing preorders for a probabilistic extension of the finite $\pi$-calculus: one based on notions of probabilistic weak simulations, and the other on a probabilistic extension of a fragment of Milner-Parrow-Walker modal logic for the $\pi$-calculus. We base our notions of simulations on the similar concepts used in previous work for probabilistic CSP. However, unlike the case with CSP (or other non-value-passing calculi), there are several possible definitions of simulation for the probabilistic $\pi$-calculus, which arise from different ways of scoping the name quantification. We show that in order to capture the testing preorders, one needs to use the ``earliest'' simulation relation (in analogy to the notion of early (bi)simulation in the non-probabilistic case). The key ideas in both characterisations are the notion of a ``characteristic formula'' of a probabilistic process, and the notion of a ``characteristic test'' for a formula. As in an earlier work on testing equivalence for the $\pi$-calculus by Boreale and De Nicola, we extend the language of the $\pi$-calculus with a mismatch operator, without which the formulation of a characteristic test will not be possible. \vskip12pt Keywords: Probabilistic $\pi$-calculus; Testing semantics; Bisimulation; Modal logic \end{abstract} \section{Introduction} We consider an extension of a finite version (without replication or recursion) of the $\pi$-calculus~\cite{Milner92IC2} with a probabilistic choice operator, alongside the non-deterministic choice operator of the $\pi$-calculus. Such an extension has been shown to be useful in modelling protocols and their properties, see, e.g., \cite{Norman09,Chatzikokolakis07}. 
The combination of both probabilistic and non-deterministic choice has long been a subject of study in process theories, see, e.g., \cite{Hansson90,Yi92,Segala94CONCUR,Deng08LMCS}. In this paper, we consider a natural notion of preorders for the probabilistic $\pi$-calculus, based on the notion of {\em testing}~\cite{Nicola84,Hennessy88}. In this testing theory, one defines a notion of test, what it means to apply a test to a process, the outcome of a test, and how the outcomes of tests can be compared. In general, the outcome of a test can be any non-empty set, endowed with a (partial) order; in the case of the original theory, this is simply a two-element lattice, with the top element representing success and the bottom element representing failure. In the probabilistic case, the set of outcomes is the unit interval [0,1], denoting probabilities of success, with the standard mathematical ordering $\leq$. In the presence of non-determinism, it is natural to consider a set of such probabilities as the result of applying a test to a process. Two standard approaches for comparing results of a test are the so-called Hoare preorder, written $\sqsubseteq_{Ho}$, and the Smyth preorder, $\sqsubseteq_{Sm}$~\cite{Hennessy82}: \begin{itemize} \item $O_1 \sqsubseteq_{Ho} O_2$ if for every $o_1 \in O_1$ there exists $o_2 \in O_2$ such that $o_1 \leq o_2.$ \item $O_1 \sqsubseteq_{Sm} O_2$ if for every $o_2 \in O_2$ there exists $o_1 \in O_1$ such that $o_1 \leq o_2.$ \end{itemize} Correspondingly, these give rise to two semantic preorders for processes: \begin{itemize} \item {\em may-testing}: $P \sqsubseteq_{pmay} Q$ iff for every test $T$, $Apply(T,P) \sqsubseteq_{Ho} Apply(T,Q)$ \item {\em must-testing}: $P \sqsubseteq_{pmust} Q$ iff for every test $T$, $Apply(T,P) \sqsubseteq_{Sm} Apply(T,Q)$, \end{itemize} where $Apply(T,P)$ refers to the result of applying the test $T$ to process $P$. 
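The Hoare and Smyth comparisons above can be sketched directly for finite sets of outcomes. The following Python fragment is illustrative only (the representation of outcomes as floats and all names are ours, not part of the calculus):

```python
# A minimal sketch of the Hoare and Smyth preorders on finite sets of
# outcomes in [0,1]; the example sets are illustrative.

def hoare_leq(O1, O2):
    """O1 <=_Ho O2: every o1 in O1 is dominated by some o2 in O2."""
    return all(any(o1 <= o2 for o2 in O2) for o1 in O1)

def smyth_leq(O1, O2):
    """O1 <=_Sm O2: every o2 in O2 dominates some o1 in O1."""
    return all(any(o1 <= o2 for o1 in O1) for o2 in O2)

O1 = {0.2}        # a process that may succeed with probability 0.2
O2 = {0.1, 0.9}   # a nondeterministic choice between 0.1 and 0.9
print(hoare_leq(O1, O2))  # True: 0.2 <= 0.9
print(smyth_leq(O1, O2))  # False: the resolution 0.1 dominates nothing in O1
```

The example shows how the two preorders diverge: may-testing rewards the best resolution of non-determinism, must-testing penalises the worst.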
We derive two characterisations of both may-testing and must-testing: one based on a notion of probabilistic weak (failure) simulation~\cite{Segala94CONCUR}, and the other based on a modal logic obtained by extending the Milner-Parrow-Walker (MPW) modal logic for the (non-probabilistic) $\pi$-calculus~\cite{Milner93TCS}. The probabilistic $\pi$-calculus that we consider here is a variant of the probabilistic $\pi$-calculus considered in \cite{Chatzikokolakis07}, but extended with the mismatch operator. As has already been observed in the testing semantics for the non-probabilistic $\pi$-calculus~\cite{Boreale95IC}, the omission of mismatch would result in a strictly less discriminating test. This is essentially due to the possibility of two kinds of output transitions in the $\pi$-calculus: a bound-output action, which outputs a new name, e.g., $\bar x(w).0$, and a free-output action, e.g., $\bar x y.0.$ Without the mismatch operator, these two example processes are related via may-testing, because a test cannot distinguish between output of a fresh name and output of an arbitrary name (see \cite{Boreale95IC}). The technical framework used to prove the main results in this paper is based on previous work on probabilistic CSP (pCSP)~\cite{Deng07ENTCS,Deng08LMCS}, an extension of Hoare's CSP~\cite{Hoare85} with a probabilistic choice operator. This allows us to adapt some proofs and results from \cite{Deng07ENTCS,Deng08LMCS} that are not calculus-specific. The name-passing feature of the $\pi$-calculus, however, gives rise to several difficulties not found in non-name-passing calculi such as pCSP, and consequently requires new techniques. For instance, there is no canonical notion of (weak) simulation in the $\pi$-calculus, unlike the case with pCSP. Different variants arise from different ways of scoping the name quantification in the simulation clause dealing with input transitions, e.g., the ``early'' vs.
the ``late'' variants of (bi)simulation~\cite{Milner92IC2}. In the case of weak simulation, one also gets a ``delay'' variant of (bi)simulation~\cite{Ferrari95,Sangiorgi96,vanGlabbeek96}. As we show in Section~\ref{sec:sim}, the right notion of simulation is the early variant, as all other weak simulation relations are strictly more discriminating than the early one. Another difficulty is in proving congruence properties, a prerequisite for the soundness of the (failure) simulation preorders. The possibility of performing a `close' communication in the $\pi$-calculus requires a combination of closure under parallel composition and name restriction (see Section \ref{sec:sound}). We use the so-called ``up-to'' techniques~\cite{Sangiorgi98MSCS} for non-probabilistic calculi to prove these congruences. We show that $\sqsubseteq_{pmay}$ coincides with a simulation preorder $\sqsubseteq_S$ and a preorder $\sqsubseteq_{{\cal L}}$ induced by a modal logic ${\cal L}$ extending the MPW logic. Dually, the must-testing preorder is shown to coincide with a failure simulation preorder, $\sqsubseteq_{FS}$, and a preorder $\sqsubseteq_{{\cal F}}$ induced by a modal logic ${\cal F}$ extending ${\cal L}.$ For technical reasons in proving the completeness result of (failure) simulation, we make use of testing preorders involving vector-based testing ($\sqsubseteq_{pmay}^\Omega$ and $\sqsubseteq_{pmust}^\Omega$ below). The precise relations among these preorders are as follows: $$ \sqsubseteq_S ~ \subseteq ~ \sqsubseteq_{pmay} ~ = ~ \sqsubseteq_{pmay}^\Omega ~ \subseteq ~ \sqsubseteq_\Lcal ~ \subseteq ~ \sqsubseteq_S $$ $$ \sqsubseteq_{FS} ~ \subseteq ~ \sqsubseteq_{pmust} ~ = ~ \sqsubseteq_{pmust}^\Omega ~ \subseteq ~ \sqsubseteq_\Fcal ~ \subseteq ~ \sqsubseteq_{FS}. $$ The proofs of these inclusions are the subjects of Section~\ref{sec:sound}, Section~\ref{sec:modal} and Section~\ref{sec:comp}. Let us highlight the characterisations of the may-testing preorder.
As in the case of pCSP \cite{Deng08LMCS}, the key idea in the proof of the inclusion $\sqsubseteq_\Lcal\; \subseteq\; \sqsubseteq_S$ is to show that for each process $P$, there exists a {\em characteristic formula} $\varphi_P$ such that if $Q\models \varphi_P$ then $P\sqsubseteq_S Q$. The inclusion $\sqsubseteq_{pmay}^\Omega \; \subseteq\; \sqsubseteq_\Lcal$ is proved by showing that for each formula $\varphi$, there exists a {\em characteristic test} $T_\varphi$ such that for every process $P$, $P \models \varphi$ iff $P$ passes the test $T_\varphi$ with some threshold testing outcome. \section{Processes and probabilistic distributions} \label{sec:pi} We consider an extension of the (finite) $\pi$-calculus with a probabilistic choice operator, $\pch p$, where $p \in (0,1].$ We shall be using the late version of the operational semantics, formulated in the reactive style (in the sense of \cite{vanGlabbeek95}) following previous work \cite{Deng07ENTCS,Deng08LMCS}. The use of the late semantics allows for a straightforward definition of characteristic formulas (see Section~\ref{sec:modal}), which are used in the completeness proof. So our testing equivalence is essentially a ``late'' testing equivalence. However, as has been shown in \cite{Ingolfsdottir95,Boreale95IC}, late and early testing equivalences coincide for value-passing/name-passing calculi. We assume a countably infinite set of {\em names}, ranged over by $a,b,x,y$ etc. Given a name $a$, its {\em co-name} is $\bar a.$ We use $\mu$ to denote a name or a co-name. Process expressions are generated by the following two-sorted grammar: \[\begin{array}{rcl} P & ::= & s \mid P {\pch p} P \\ s & ::= & \nil \mid a(x).s \mid \bar a x.s \mid [x=y]s \mid [x\not = y]s \mid s + s \mid s | s \mid \nu x.s \end{array}\] We let $P,Q,...$ range over process terms defined by this grammar, and $s,t$ range over the subset $S_p$ comprising only the state-based process terms, i.e. the sub-sort $s$.
The input prefix $a(x)$ and restriction $\nu x$ are name-binding constructs; $x$ in this case is a bound name. We denote with $fn(P)$ the set of free names in $P$ and $bn(P)$ the set of bound names. The set of names in $P$ (free or bound) is denoted by $n(P).$ We shall assume that bound names are different from each other and different from any free names. Processes are considered equivalent modulo renaming of bound names. Processes are ranged over by $P$, $Q$, $R$, etc. We shall refer to our probabilistic extension of the $\pi$-calculus as $\pi_p.$ We shall sometimes use an $n$-ary version of the binary operators. For example, we use $\bigoplus_{i \in I} p_iP_i$, where $\sum_{i\in I} p_i = 1$, to denote a process obtained by several applications of the probabilistic choice operator. Similarly, $\sum_{i \in I} P_i$ denotes several applications of the non-deterministic choice operator $+.$ We shall use the $\tau$-prefix, as in $\tau.P$, as an abbreviation of $\nu x(x(y).\nil \mid \bar x x. P),$ where $x,y \not \in fn(P).$ In this paper, we take the viewpoint that a probabilistic process represents an unstable state that may probabilistically evolve into some stable states. Formally, we describe unstable states as distributions and stable states as state-based processes. Note that in a state-based process, probabilistic choice can only appear under input/output prefixes. The operational semantics of $\pi_p$ will be defined only for state-based processes.
Probabilistic distributions are ranged over by $\Delta.$ A {\em discrete probabilistic distribution} over a set $S$ is a mapping $\Delta : S \rightarrow [0,1]$ with $\sum_{s \in S} \Delta(s) = 1.$ The {\em support} of a distribution $\Delta$, denoted by $\supp \Delta$, is the set $\{s \mid \Delta(s) > 0 \}.$ From now on, we shall restrict attention to probabilistic distributions with finite support, and we let ${\cal D}(S)$ denote the collection of such distributions over $S.$ If $s$ is a state-based process, then $\pdist s$ denotes the point distribution that maps $s$ to $1.$ For a finite index set $I$, given $p_i$ and distribution $\Delta_i$, for each $i\in I$, such that $\sum_{i\in I} p_i = 1$, we define another probability distribution $\sum_{i\in I} p_i \cdot \Delta_i$ as $(\sum_{i\in I} p_i \cdot \Delta_i)(s) = \sum_{i\in I} p_i \cdot \Delta_i(s),$ where $\cdot$ here denotes multiplication. We shall sometimes write this distribution as a summation $p_1 \cdot \Delta_1 + p_2 \cdot \Delta_2 + \ldots + p_n \cdot \Delta_n$ when the index set $I$ is $\{1,\ldots,n\}.$ Probabilistic processes are interpreted as distributions over state-based processes as follows. \[\begin{array}{rcl} \interp s & ::= & \pdist s \ \mbox{ for $s\in S_p$}\\ \interp {P \pch p Q} & ::= & p \cdot \interp P + (1-p) \cdot \interp Q \end{array}\] Note that for each process term $P$ the distribution $\interp P$ is finite, that is, it has finite support. A transition judgment can take one of the following forms: $$ \one{s}{a(x)}{\Delta} \qquad \one{s}{\tau}{\Delta} \qquad \one{s}{\bar a x}{\Delta} \qquad \one{s}{\bar a(x)}{\Delta} $$ The action $a(x)$ is called a {\em bound-input action}; $\tau$ is the silent action; $\bar ax$ is a {\em free-output action} and $\bar a(x)$ is a {\em bound-output action}. In actions $a(x)$ and $\bar a(x)$, $x$ is a bound name. Given an action $\alpha$, we denote with $fn(\alpha)$ the set of free names in $\alpha$, i.e., those names in $\alpha$ which are not bound names.
The set of bound names in $\alpha$ is denoted by $bn(\alpha)$, and the set of all names (free and bound) in $\alpha$ is denoted by $n(\alpha).$ The set of free names of a distribution is the union of the free names of the processes in its support, i.e., $ fn(\Delta) = \bigcup \{fn(s) \mid s \in \supp \Delta \}. $ A substitution is a mapping from names to names; substitutions are ranged over by $\rho, \sigma$ and $\theta.$ A substitution $\theta$ is a {\em renaming substitution} if $\theta$ is an injective map, i.e., $\theta(x) = \theta(y)$ implies $x = y$. A substitution is extended to a mapping between processes in the standard way, avoiding capture of names. We use the notation $s[y/x]$ to denote the result of substituting free occurrences of $x$ in $s$ with $y.$ Substitution is lifted to a mapping between distributions as follows: $$ \Delta[y/x] (s) = \sum \{\Delta(s') \mid s'[y/x] = s \}. $$ It can be verified that $\interp {P[y/x]} = \interp P [y/x]$ for every process $P.$ The operational semantics is given in Figure~\ref{fig:pi}. The rules for parallel composition and restriction use an obvious notation for distributing an operator over distributions, for example: \[\begin{array}{rcl} (\Delta_1 ~|~ \Delta_2)(s) & = & \left\{ \begin{array}{ll} \Delta_1(s_1) \cdot \Delta_2(s_2) & \hbox{ if $s = s_1 | s_2$ } \\ 0 & \hbox{ otherwise} \end{array} \right . \\ (\nu x.\Delta)(s) & = & \left\{ \begin{array}{ll} \Delta(s') & \hbox{ if $s = \nu x.s'$ } \\ 0 & \hbox{ otherwise.} \end{array} \right. \end{array}\] The symmetric counterparts of \textbf{Sum}, \textbf{Par}, \textbf{Com} and \textbf{Close} are omitted. The semantics of $\pi_p$ processes is presented in terms of simple probabilistic automata \cite{Segala94CONCUR}.
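The distribution operations used above (point distributions, convex combination, and the lifting of substitution) can be sketched concretely. This Python fragment is illustrative only: states are opaque hashable values, and all names are ours rather than part of the calculus:

```python
# A minimal sketch of finite-support distributions as dicts from
# states to probabilities.
from collections import defaultdict

def point(s):
    """The point distribution mapping s to 1."""
    return {s: 1.0}

def combine(pairs):
    """Convex combination sum_i p_i * Delta_i, given (p_i, Delta_i)
    pairs with the p_i summing to 1."""
    out = defaultdict(float)
    for p, delta in pairs:
        for s, q in delta.items():
            out[s] += p * q
    return dict(out)

def apply_subst(delta, subst):
    """Lift a substitution on states to distributions:
    (Delta.sigma)(s) = sum of Delta(s') over s' with s'.sigma = s."""
    out = defaultdict(float)
    for s, q in delta.items():
        out[subst(s)] += q
    return dict(out)

d = combine([(0.5, point("s")), (0.5, point("t"))])
# a non-injective substitution merges states and sums their probabilities
print(apply_subst(d, lambda s: "u"))  # {'u': 1.0}
```

The summation in `apply_subst` mirrors the defining equation $\Delta[y/x](s) = \sum \{\Delta(s') \mid s'[y/x] = s\}$: probabilities of states identified by the substitution are added.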
\begin{figure} \caption{The operational semantics of $\pi_p$.} \label{fig:pi} \end{figure} \section{Testing probabilistic processes} \label{sec:test} As standard in testing theories~\cite{Nicola84,Hennessy88,Boreale95IC}, to define a test, we introduce a distinguished name $\omega$ which can only be used in tests and is not part of the processes being tested. A {\em test} is just a probabilistic process with possible free occurrences of the name $\omega$ as a channel name in output prefixes, i.e., a test is a process which may have subterms of the form $\bar \omega a.P$. Note that the object of the action prefix (i.e., the name $a$) is irrelevant for the purpose of testing. Note also that it makes no difference whether the name $\omega$ appears in input prefixes instead of output prefixes; the notion of testing preorder will remain the same. Therefore we shall often simply write $\omega.P$ to denote $\bar \omega a.P$, and $\one {P} {\omega} {\Delta}$ to denote $\one {P}{\bar \omega a}{\Delta}.$ The definitions of the may-testing preorder, $\sqsubseteq_{pmay}$, and the must-testing preorder, $\sqsubseteq_{pmust}$, have already been given in the introduction, but we left out the definition of the $Apply$ function. This will be given below. Following \cite{Deng07ENTCS}, to define the $Apply$ function, we first define a {\em results-gathering function} ${\mathbb V}: S_p \rightarrow {\cal P}([0,1])$ as follows: $$ {\mathbb V}(s) = \left\{ \begin{array}{ll} \{1\} & \qquad \hbox{if $\one s \omega {}$} \\ \bigcup \{{\mathbb V}(\Delta) \mid \one s \tau \Delta \} & \qquad \hbox{if $s \not \stackrel{\omega}{\longrightarrow} {}$ but $\one s {\tau} {}$} \\ \{0\} & \qquad \hbox{ otherwise.} \end{array} \right. $$ Here the notation ${\cal P}([0,1])$ stands for the powerset of $[0,1]$, and we use ${\mathbb V}(\Delta)$ to denote the set of probabilities $\{\sum_{s\in \supp \Delta} \Delta(s) \cdot p_s \mid p_s\in{\mathbb V}(s)\}$.
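The recursion in ${\mathbb V}$ can be made concrete on a small example. The following Python fragment is a toy rendition (not the paper's code) on a finite acyclic automaton; the state names, the encoding of transitions as a dictionary, and the string labels are all illustrative assumptions:

```python
# A toy rendition of the results-gathering function V. trans maps a
# state to its (action, distribution) moves; "omega" marks success and
# "tau" is the silent action; distributions map states to probabilities.

trans = {
    "s0": [("tau", {"s1": 0.5, "s2": 0.5})],
    "s1": [("omega", {"halt": 1.0})],  # success is possible here
    "s2": [],                          # deadlock: outcome 0
}

def V(s):
    moves = trans.get(s, [])
    if any(a == "omega" for a, _ in moves):
        return {1.0}
    taus = [d for a, d in moves if a == "tau"]
    if taus:
        out = set()
        for d in taus:
            out |= V_dist(d)
        return out
    return {0.0}

def V_dist(delta):
    # V(Delta): weighted sums over all ways of resolving each state
    results = {0.0}
    for s, p in delta.items():
        results = {r + p * v for r in results for v in V(s)}
    return results

print(V("s0"))  # {0.5}: half of the mass reaches success, half deadlocks
```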
The $Apply$ function is then defined as follows: given a test $T$ and a process $P$, $$ Apply(T,P) = {\mathbb V}(\interp {\nu \vec x.(T~|~P)}) $$ where $\{\vec x\}$ is the set of free names in $T$ and $P$, excluding $\omega.$ So the process (or rather, the distribution) $\nu \vec x.(T ~|~P)$ can only perform an observable action on $\omega.$ \paragraph{Vector-based testing.} Following \cite{Deng08LMCS}, we introduce another approach to testing called {\em vector-based testing}, which will play an important role in Section~\ref{sec:comp}. Let $\Omega$ be a set of fresh success actions different from any normal channel names. An $\Omega$-test is a $\pi_p$-process, but allowing subterms $\omega.P$ for any $\omega\in\Omega$. Applying such a test $T$ to a process $P$ yields a non-empty set of test outcome-tuples $Apply^\Omega(T,P)\subseteq [0,1]^\Omega$. For each such tuple, its $\omega$-component gives the probability of successfully performing action $\omega$. To define a results-gathering function for vector-based testing, we need some auxiliary notations. For any action $\alpha$ define $\alpha!:[0,1]^\Omega\rightarrow[0,1]^\Omega$ by \[\alpha!o(\omega)=\left\{\begin{array}{ll} 1 & \mbox{if $\omega=\alpha$}\\ o(\omega) & \mbox{otherwise} \end{array}\right.\] so that if $\alpha$ is a success action in $\Omega$ then $\alpha!$ sets the $\alpha$-component of the tuple to $1$, leaving the other components unchanged, and when $\alpha\not\in\Omega$ the function $\alpha!$ is the identity. For any set $O\subseteq [0,1]^\Omega$, we write $\alpha!O$ for the set $\{\alpha!o \mid o\in O\}$. For any set $X$ define its \emph{convex closure} $\updownarrow X$ by \[\updownarrow X ~:=~ \{\sum_{i\in I}p_i\cdot o_i \mid o_i\in X \mbox{ for each $i\in I$ and $\sum_{i\in I}p_i=1$}\}.\] Here, $I$ is assumed to be a finite index set. Finally, the zero vector $\vec{0}$ is given by $\vec{0}(\omega)=0$ for all $\omega\in\Omega$. Let $S_p^\Omega$ be the set of state-based $\Omega$-tests.
\begin{definition} \label{def:vector-based-results} The vector-based results-gathering function ${\mathbb V}^\Omega:S_p^\Omega\rightarrow {\cal P}([0,1]^\Omega)$ is given by \[{\mathbb V}^\Omega(s)~:=~ \left\{\begin{array}{ll} \updownarrow\bigcup\{\alpha!({\mathbb V}^\Omega(\Delta)) \mid s\ar{\alpha}\Delta\} & \mbox{if $s\rightarrow$}\\ \{\vec{0}\} & \mbox{otherwise} \end{array}\right.\] The notation $s\rightarrow$ means that $s$ is not a deadlock state, i.e. there is some $\alpha$ and $\Delta$ such that $s\ar{\alpha}\Delta$. For any process $P$ and $\Omega$-test $T$, we define $Apply^\Omega(T,P)$ as ${\mathbb V}^\Omega(\interp{\nu\vec{x}.(T|P)})$, where $\{\vec x\} = fn(T,P) - \Omega.$ The vector-based may and must preorders are given by \[\begin{array}{rcl} P \sqsubseteq_{pmay}^\Omega Q & \mbox{ iff } & \mbox{for every $\Omega$-test $T: Apply^\Omega(T,P) \sqsubseteq_{Ho} Apply^\Omega(T,Q)$}\\ P \sqsubseteq_{pmust}^\Omega Q & \mbox{ iff } & \mbox{for every $\Omega$-test $T: Apply^\Omega(T,P) \sqsubseteq_{Sm} Apply^\Omega(T,Q)$}\\ \end{array}\] where $\sqsubseteq_{Ho}$ and $\sqsubseteq_{Sm}$ are the Hoare and Smyth preorders on ${\cal P}([0,1]^\Omega)$ generated from $\leq$ index-wise on $[0,1]^\Omega$. \end{definition} Notice a subtle difference between the definition of ${\mathbb V}^\Omega$ above and the definition of ${\mathbb V}$ given earlier. In ${\mathbb V}^\Omega$, we use {\em action-based testing}, i.e., the actual execution of $\omega$ constitutes a success. This is in contrast to the {\em state-based testing} in ${\mathbb V}$, where a success is defined for a state where a success action $\omega$ is possible, without having to actually perform the action $\omega.$ In the case where there is no divergence, as in our case, these two notions of testing coincide; see \cite{Deng08LMCS} for more details.
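The $\alpha!$ update on outcome tuples can be sketched directly; the convex closure $\updownarrow$ is omitted here since it yields infinite sets. In this Python fragment, the set `OMEGA` and the representation of tuples as dicts are illustrative assumptions:

```python
# A minimal sketch of the alpha! update on outcome tuples in
# [0,1]^Omega, with tuples represented as dicts.

OMEGA = {"w1", "w2"}

def bang(alpha, o):
    """alpha!: set the alpha-component to 1 if alpha is a success
    action in OMEGA, and act as the identity otherwise."""
    if alpha not in OMEGA:
        return dict(o)
    out = dict(o)
    out[alpha] = 1.0
    return out

zero = {"w1": 0.0, "w2": 0.0}
print(bang("w1", zero))   # {'w1': 1.0, 'w2': 0.0}
print(bang("tau", zero))  # identity: {'w1': 0.0, 'w2': 0.0}
```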
The following theorem can be shown by adapting the proof of Theorem 6.6 in \cite{Deng08LMCS}, which states a general property about probabilistic automata \cite{DGMZ07}. \begin{theorem}\label{thm:multi-uni} Let $P$ and $Q$ be any $\pi_p$-processes. \begin{enumerate} \item $P\sqsubseteq_{pmay}^\Omega Q$ iff $P\sqsubseteq_{pmay} Q$ \item $P\sqsubseteq_{pmust}^\Omega Q$ iff $P\sqsubseteq_{pmust} Q$. \end{enumerate} \end{theorem} \section{Simulation and Failure Simulation} \label{sec:sim} To define simulation and failure simulation, we need to generalise the transition relations between states and distributions to those between distributions and distributions. This is defined via a notion of lifting of a relation. \begin{definition}[Lifting \cite{Deng09CONCUR}] \label{def:lifting} Given a relation ${\cal R} \subseteq S_p \times {\cal D}(S_p)$, define a {\em lifted relation} $\overline {\cal R} \subseteq {\cal D}(S_p) \times {\cal D}(S_p)$ as the smallest relation that satisfies \begin{enumerate} \item $s {\cal R} \Theta$ implies $\pdist{s} \lift{\cal R} \Theta$ \item (Linearity) $\Delta_i \lift{\cal R} \Theta_i$ for all $i\in I$ implies $(\sum_{i\in I}p_i\cdot\Delta_i) \lift{\cal R} (\sum_{i\in I}p_i\cdot\Theta_i)$ for any $p_i\in [0,1]$ with $\sum_{i\in I}p_i = 1$. \end{enumerate} \end{definition} The following is a useful property of the lifting operation. \begin{proposition}[\cite{Deng07ENTCS}] \label{prop:lifting} Suppose ${\cal R} \subseteq S \times {\cal D}(S)$ and $\sum_{i \in I} p_i = 1.$ If $(\sum_{i\in I} p_i \cdot \Delta_i) \lift {\cal R} \Theta$ then $\Theta = \sum_{i \in I} p_i \cdot \Theta_i$ for some set of distributions $\Theta_i$ such that $\Delta_i \lift {\cal R} \Theta_i$ for all $i\in I$. \end{proposition} For simplicity of presentation, the lifted version of the transition relation $\sstep{\alpha}$ will be denoted by the same notation as the unlifted version.
So we shall write $\one \Delta \alpha \Theta$ when $\Delta$ and $\Theta$ are related by the lifted relation from $\sstep{\alpha}.$ Note that in the lifted transition $\one \Delta \alpha \Theta$, {\em all} processes in $\supp{\Delta}$ must be able to simultaneously make the transition $\alpha$. For example, $$ \one{ \frac{1}{2} \cdot \pdist{\bar a x.s} + \frac{1}{2} \cdot \pdist{\bar a x.t} } {\bar a x} {\frac{1}{2} \cdot \pdist s + \frac{1}{2} \cdot \pdist t} $$ but the distribution $\frac{1}{2} \cdot \pdist{\bar a x.s} + \frac{1}{2} \cdot \pdist{\bar b x.t}$ will not be able to make that transition. We need a few more relations to define (failure) simulation: \begin{itemize} \item We write $\one s {\hat \tau} \Delta$ to denote either $\one s \tau \Delta$ or $\Delta = \pdist s.$ Its lifted version will be denoted by the same notation, e.g., $\one {\Delta_1}{\hat\tau} {\Delta_2}.$ The reflexive-transitive closure of the latter is denoted by $\stackrel{\hat \tau}{\Longrightarrow}.$ \item$\bigstep {\Delta_1} {\hat \alpha} {\Delta_2}$, for $\alpha \not = \tau$, iff $\Delta_1 \bstep{\hat\tau} \Delta' \sstep{\alpha} \Delta'' \bstep{\hat\tau} \Delta_2$ for some $\Delta'$ and $\Delta''.$ \item We write $s\barb{a}$ to denote $s\ar{a(x)}$, and $s\barb{\bar a}$ to denote either $s\ar{\bar{a}(x)}$ or $s\ar{\bar{a}x}$; $s\not \barb{\mu}$ stands for the negation. We write $s\not\barb{X}$ when $s \not \! \ar{\tau}$ and $\forall\mu\in X: s\not\barb{\mu}$, and $\Delta\not\barb{X}$ when $\forall s\in\supp{\Delta}:s\not\barb{X}$. 
\end{itemize} \begin{definition} \label{def:sim} A relation ${\cal R} \subseteq S_p \times {\cal D}(S_p)$ is said to be a {\em failure simulation} if $s {\cal R} \Theta$ implies: \begin{enumerate} \item If $\one s {a(x)} {\Delta}$ and $x \not \in fn(s,\Theta)$, then for every name $w$, there exist $\Theta_1$, $\Theta_2$ and $\Theta'$ such that $$\Theta \bstep{\hat \tau} \Theta_1 \sstep {a(x)} {\Theta_2}, \qquad \Theta_2[w/x] \bstep{\hat \tau} \Theta', \qquad \hbox{ and } \qquad (\Delta[w/x]) ~ \overline {\cal R} ~ \Theta'. $$ \item If $\one s \alpha \Delta$ and $\alpha$ is not an input action, then there exists $\Theta'$ such that $\bigstep \Theta {\hat \alpha} {\Theta'}$ and $\Delta ~ \overline {\cal R} ~ \Theta'$ \item If $s\not\barb{X}$ then there exists $\Theta'$ such that $\Theta\dar{\hat \tau} \Theta'\not\barb{X}$. \end{enumerate} We denote with $\triangleleft_{FS}$ the largest failure simulation relation. Similarly, we define \emph{simulation} and $\triangleleft_S$ by dropping the third clause above. The {\em simulation preorder} $\sqsubseteq_S$ and \emph{failure simulation preorder} $\sqsubseteq_{FS}$ on process terms are defined by letting \[\begin{array}{rll} P \sqsubseteq_S Q \mbox{ iff } \mbox{there is a distribution $\Theta$ with $\bigstep {\interp Q}{\hat \tau} \Theta$ and $\interp P ~ \lift{\triangleleft_S} ~ \Theta.$}\\ P \sqsubseteq_{FS} Q \mbox{ iff } \mbox{there is a distribution $\Theta$ with $\bigstep {\interp P}{\hat \tau} \Theta$ and $\interp Q ~ \lift{\triangleleft_{FS}} ~ \Theta.$} \end{array}\] \end{definition} Notice the rather unusual clause for input action, where no silent action from $\Theta_2$ is permitted after the input transition. This is reminiscent of the notion of {\em delay (bi)simulation}~\cite{Ferrari95,Sangiorgi96,vanGlabbeek96}.
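The lifted transitions used in the clauses above require every state in the support of a distribution to move, as in the example preceding Definition~\ref{def:sim}. A toy Python check of this requirement (assuming, for simplicity, that each state has at most one $\alpha$-derivative; all names are illustrative):

```python
# A toy check of the lifted transition: a distribution moves on alpha
# only if every state in its support does, and the result is the convex
# combination of the targets.

def lifted_step(delta, alpha, step):
    """Return the combination of the states' alpha-derivatives, or
    None if some state in the support cannot perform alpha."""
    out = {}
    for s, p in delta.items():
        d = step(s, alpha)
        if d is None:
            return None
        for t, q in d.items():
            out[t] = out.get(t, 0.0) + p * q
    return out

moves = {("out_ax.s", "out_ax"): {"s": 1.0},
         ("out_ax.t", "out_ax"): {"t": 1.0}}
step = lambda s, a: moves.get((s, a))

print(lifted_step({"out_ax.s": 0.5, "out_ax.t": 0.5}, "out_ax", step))
# {'s': 0.5, 't': 0.5}
print(lifted_step({"out_ax.s": 0.5, "out_bx.t": 0.5}, "out_ax", step))
# None: the second summand has no out_ax move
```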
If instead of that clause, we simply require $\Theta \bstep {\widehat{a(x)}} {\Theta''}$ and $\Delta[w/x] ~ \overline {\cal R} ~ \Theta''[w/x]$ then, in the presence of mismatch, simulation is not sound w.r.t. the may-testing preorder, even in the non-probabilistic case. Consider, for example, the following processes: $$ P = a(x).\bar a b \qquad Q = a(x).[x \not = c] \tau.\bar a b $$ where we recall that $\tau.R$ abbreviates $\nu z.(z(u) ~|~ \bar z z.R)$ for some $z \not \in fn(R).$ The process $P$ can make an input transition, and regardless of the value of the input, it can then output $b$ on channel $a.$ Notice that for $Q$, we have $$ Q \sstep{a(x)} [x \not = c] \tau. \bar a b \sstep{\tau} \nu z(0 ~|~ \bar a b) = Q'. $$ $Q'$ can also output $b$ on channel $a$, so under this alternative definition, $Q$ can simulate $P.$ But $P \not \sqsubseteq_{pmay} Q$, as the test $\bar a c.a(y).\omega$ will distinguish them. This issue has also appeared in the theory of weak (late) bisimulation for the non-probabilistic $\pi$-calculus; see, e.g., \cite{Sangiorgi01book}. Note that the above definition of $\triangleleft_S$ is what is usually called the ``early'' simulation. One can obtain different variants of ``late'' simulation using different alternations of the universal quantification on names and the existential quantifications on distributions in clause 1 of Definition~\ref{def:sim}. Any of these variants leads to a strictly more discriminating simulation. To see why, consider the weaker of such late variants, i.e., one in which the universal quantifier on $w$ comes after the existential quantifier on $\Theta_1$: \begin{quote} If $\one s {a(x)} {\Delta}$ and $x \not \in fn(s,\Theta)$, then there exists $\Theta_1$ such that for every name $w$, there exist $\Theta_2$ and $\Theta'$ such that $$\Theta \bstep{\hat \tau} \Theta_1 \sstep {a(x)} {\Theta_2}, \qquad \Theta_2[w/x] \bstep{\hat \tau} \Theta', \qquad \hbox{ and } \qquad (\Delta[w/x]) ~ \overline {\cal R} ~ \Theta'.
$$ \end{quote} Let us denote this variant with $\sqsubseteq_{S'}.$ Consider the following processes: $$ P = a(x).\bar b x.\nil + a(x).\nil + a(x).[x=z]\bar b x.\nil \qquad Q = \tau.a(x).\bar b x.\nil + \tau.a(x).\nil $$ It is easy to see that $P \sqsubseteq_S Q$ but $P {\not \sqsubseteq}_{S'} Q.$ If we drop the silent transitions $\Theta_2[w/x] \bstep{\hat \tau} \Theta'$ in clause (1) of Definition~\ref{def:sim}, i.e., we let $\Theta' = \Theta_2[w/x]$ (hence, we get a delay simulation), then again we get a strictly stronger relation than $\sqsubseteq_S$. Let us refer to this stronger relation as $\sqsubseteq_{D}$. Let $P$ be $a(x).(c {\pch {\frac{1}{2}}} d)$ and let $Q$ be $a(x).\tau.(c {\pch {\frac{1}{2}}} d).$ Here we remove the parameters in the input prefixes $c$ and $d$ to simplify presentation. Again, it can be shown that $P \sqsubseteq_S Q$ but $P ~ {\not \sqsubseteq}_{D} ~ Q.$ For the latter to hold, we would have to prove $ \frac{1}{2} \cdot \pdist c + \frac{1}{2} \cdot \pdist d ~ \overline{\triangleleft_S} ~ \pdist{\tau.(c {\pch {\frac{1}{2}}} d)}, $ which is impossible. Note that (failure) simulation is a relation between processes and distributions, rather than between processes, so it is not immediately obvious that it is a preorder. This is established in Corollary~\ref{cor:sim.fail.preorder} below, whose proof requires a series of lemmas. In the following, when we apply a substitution to an action, we assume that the substitution affects both the free and the bound names in the action. For example, if $\alpha = a(x)$ and $\theta = [b/a, y/x]$ then $\alpha\theta = b(y).$ However, application of a substitution to processes or distributions must still avoid capture. \begin{lemma}\label{lm:rename} Suppose $\sigma$ is a renaming substitution. 
\begin{enumerate} \item If $\one s \alpha \Delta$ then $\one {s\sigma}{\alpha \sigma}{\Delta\sigma}.$ \item If $\bigstep {\Delta} {\hat \alpha} {\Delta'}$ then $\bigstep {\Delta\sigma}{\hat \alpha\sigma} {\Delta'\sigma}.$ \end{enumerate} \end{lemma} \begin{lemma} \label{lm:lifted-trans-rename} Let $I$ be a finite index set, and let $\sum_{i \in I} p_i = 1.$ Suppose $\one {s_i} {a(x_i)} {\Delta_i}$ for each $i \in I$. Let $x$ be a fresh name not occurring in any of $s_i$, $a(x_i)$ or $\Delta_i.$ Then $$ \one {\sum_{i \in I} p_i \cdot \pdist {s_i}} {a(x)}{\sum_{i \in I} p_i \cdot \Delta_i[x/x_i]}. $$ \end{lemma} Given the above lemma, whenever we have transitions $\one {s_i}{a(x_i)}{\Delta_i}$, we can always assume that all the $x_i$'s are the same fresh name, so that when lifting those transitions to distributions, we shall omit the explicit renaming of individual $x_i.$ This will simplify the presentation of the proofs in the following. The same remark applies to bound output transitions. \begin{lemma} \label{lm:bigstep-dist} Suppose $\sum_{i \in I} p_i = 1$ and $\Delta_i \bstep {\hat \alpha} \Phi_i$ for each $i \in I,$ where $I$ is a finite index set. Then $$ \sum_{i \in I} p_i \cdot \Delta_i \bstep {\hat \alpha} \sum_{i \in I} p_i \cdot \Phi_i. $$ \end{lemma} \begin{proof} Same as in the proof of Lemma 6.6 in \cite{Deng07ENTCS}. \qed \end{proof} \begin{lemma} \label{lm:sim-refl} For every state-based process $s$, we have $s \triangleleft_S \pdist s$ and $s \triangleleft_{FS} \pdist s.$ \end{lemma} \begin{proof} Let ${\cal R} \subseteq S_p \times {\cal D}(S_p)$ be the relation defined as follows: $s ~ {\cal R} ~ \Theta$ iff $\Theta = \pdist s.$ It is easy to see that ${\cal R}$ is a simulation and also a failure simulation. \qed \end{proof} \begin{lemma} \label{lm:sim-like1} Suppose $\Delta ~ \lift{\triangleleft_S} ~ \Phi$ and $\one {\Delta} {\alpha} {\Delta'}$, where $\alpha$ is either $\tau$, a free action or a bound output action.
Then $\one \Phi {\hat \alpha} {\Phi'}$ for some $\Phi'$ such that $\Delta' ~ \overline{\triangleleft_S} ~ \Phi'.$ \end{lemma} \begin{proof} Similar to the proof of Lemma 6.7 in \cite{Deng07ENTCS}. \qed \end{proof} \begin{lemma} \label{lm:sim-like2} Suppose $\Delta ~ \overline{\triangleleft_S} ~ \Phi$ and $\one{\Delta}{a(x)}{\Delta'}$. Then for every name $w$, there exist $\Psi_1$, $\Psi_2$ and $\Psi$ such that $$ \Phi \bstep {\hat \tau} \Psi_1 \sstep {a(x)} \Psi_2, \qquad \Psi_2[w/x] \bstep {\hat \tau} \Psi, \qquad \mbox{ and } \qquad (\Delta'[w/x]) ~ \overline{\triangleleft_S} ~ \Psi. $$ \end{lemma} \begin{proof} From $\Delta ~ \overline{\triangleleft_S} ~ \Phi$ we have that \begin{equation} \label{eq:sim-like2-1} \Delta = \sum_{i \in I} p_i \cdot \pdist{s_i}, \qquad s_i \triangleleft_S \Phi_i, \qquad \Phi = \sum_{i \in I} p_i \cdot \Phi_i. \end{equation} and from $\Delta \sstep {a(x)} \Delta'$ we have: \begin{equation} \label{eq:sim-like2-2} \Delta = \sum_{j \in J} q_j \cdot \pdist{t_j}, \qquad t_j \sstep {a(x)} \Theta_j, \qquad \Delta' = \sum_{j \in J} q_j \cdot \Theta_j. \end{equation} We assume w.l.o.g. that all $p_i$ and $q_j$ are non-zero. Following \cite{Deng07ENTCS}, we define two index sets: $I_j = \{ i \in I \mid s_i = t_j \}$ and $J_i = \{ j \in J \mid t_j = s_i \}.$ Obviously, we have \begin{equation} \label{eq:sim-like2-3} \{(i,j) \mid i \in I, j \in J_i\} = \{(i,j) \mid j \in J, i \in I_j \}, \quad \mbox{and} \end{equation} \begin{equation} \label{eq:sim-like2-4} \Delta(s_i) = \sum_{j \in J_i} q_j \qquad \Delta(t_j) = \sum_{i \in I_j} p_i. \end{equation} It follows from (\ref{eq:sim-like2-4}) that we can rewrite $\Phi$ as $$ \Phi = \sum_{i \in I} \sum_{j \in J_i} \frac{p_i \cdot q_j}{\Delta(s_i)} \cdot \Phi_i.
$$ Note that $s_i = t_j$ when $j \in J_i.$ Since $s_i \triangleleft_S \Phi_i$, and $s_i = t_j \sstep {a(x)}{\Theta_j}$, we have, given any name $w$, some $\Phi_{ij}^1$, $\Phi_{ij}^2$ and $\Phi_{ij}$ such that: \begin{equation} \label{eq:sim-like2-5} \Phi_i \bstep {\hat \tau} \Phi_{ij}^1 \sstep {a(x)} \Phi_{ij}^2, \qquad \Phi_{ij}^2[w/x] \bstep{\hat \tau}{\Phi_{ij}}, \qquad \Theta_j[w/x] ~ \overline{\triangleleft_S} ~ \Phi_{ij}. \end{equation} Let $$ \Psi_1 = \sum_{i \in I} \sum_{j \in J_i} \frac{p_i \cdot q_j}{\Delta(s_i)} \cdot \Phi_{ij}^1 \qquad \Psi_2 = \sum_{i \in I} \sum_{j \in J_i} \frac{p_i \cdot q_j}{\Delta(s_i)} \cdot \Phi_{ij}^2 \qquad \Psi = \sum_{i \in I} \sum_{j \in J_i} \frac{p_i \cdot q_j}{\Delta(s_i)} \cdot \Phi_{ij}. $$ Lemma~\ref{lm:bigstep-dist} and (\ref{eq:sim-like2-5}) above give us: $$ \Phi = \sum_{i \in I} \sum_{j \in J_i} \frac{p_i \cdot q_j}{\Delta(s_i)} \cdot \Phi_{i} \bstep {\hat \tau} \Psi_1 \sstep {a(x)} \Psi_2 \qquad \Psi_2[w/x] \bstep {\hat \tau} \Psi $$ It remains to show that $\Delta'[w/x] ~ \overline{\triangleleft_S} ~ \Psi.$ \begin{align*} \Delta'[w/x] & = \sum_{j \in J} q_j \cdot \Theta_j[w/x] & \\ & = \sum_{j \in J} q_j \cdot \sum_{i \in I_j} \frac{p_i}{\Delta(t_j)} \cdot \Theta_j[w/x] & \mbox{ using (\ref{eq:sim-like2-4})} \\ & = \sum_{j \in J} \sum_{i \in I_j} \frac{p_i \cdot q_j}{\Delta(t_j)} \cdot \Theta_j[w/x] \\ & = \sum_{i \in I} \sum_{j \in J_i} \frac{p_i \cdot q_j}{\Delta(s_i)} \cdot \Theta_j[w/x] & \mbox{ using (\ref{eq:sim-like2-3}) } \\ & \overline{\triangleleft_S} ~ \sum_{i \in I} \sum_{j \in J_i} \frac{p_i \cdot q_j}{\Delta(s_i)} \cdot \Phi_{ij} = \Psi & \mbox{ using (\ref{eq:sim-like2-5}) and linearity of $\overline{\triangleleft_S}$} \end{align*} \qed \end{proof} \begin{lemma} \label{lm:sim-like3} Suppose $\Delta ~ \overline{\triangleleft_S} ~ \Phi$ and $\Delta ~ \bstep {\hat \alpha} \Delta'$, where $\alpha$ is either $\tau$, a free action or a bound output.
Then $\Phi \bstep {\hat \alpha} \Phi'$ for some $\Phi'$ such that $\Delta' ~ \overline{\triangleleft_S} ~ \Phi'$. \end{lemma} \begin{proof} Similar to the proof of Lemma 6.8 in \cite{Deng07ENTCS}. \qed \end{proof} \begin{proposition}\label{prop:sim-refl-trans} The relation $\overline{\triangleleft_S}$ is reflexive and transitive. \end{proposition} \begin{proof} Reflexivity of $\overline{\triangleleft_S}$ follows from Lemma~\ref{lm:sim-refl}. To show transitivity, let us define a relation ${\cal R} \subseteq S_p \times {\cal D}(S_p)$ as follows: $ s ~ {\cal R} ~ \Theta $ iff there exists $\Delta$ such that $s ~ \triangleleft_S ~ \Delta$ and $\Delta ~ \overline{\triangleleft_S} ~ \Theta.$ We show that ${\cal R}$ is a simulation. But first, we claim that $\Theta ~ \overline{\triangleleft_S} ~ \Delta ~ \overline{\triangleleft_S} ~ \Phi$ implies $\Theta ~ \overline {\cal R} ~ \Phi.$ This can be proved similarly to the case of CSP (see the proof of Proposition 6.9 in \cite{Deng07ENTCS}). Now to show that ${\cal R}$ is a simulation, there are two cases to consider. Suppose $s ~ {\cal R} ~ \Phi$, i.e., $s ~ \triangleleft_S ~ \Delta ~ \overline{\triangleleft_S} ~ \Phi.$ \begin{itemize} \item Suppose $s \sstep {\alpha} \Theta$, where $\alpha$ is either $\tau$, a free action or a bound output action. From $s ~ \triangleleft_S ~ \Delta$, we have \begin{equation} \label{eq:sim-trans-1} \Delta \bstep {\hat \alpha} \Delta' \qquad \mbox{ and } \qquad \Theta ~ \overline{\triangleleft_S} ~ \Delta'. \end{equation} By Lemma~\ref{lm:sim-like3} and (\ref{eq:sim-trans-1}), we have $\Phi \bstep{\hat \alpha} \Phi'$ and $\Delta' ~ \overline{\triangleleft_S} ~ \Phi'$, and by the above claim and (\ref{eq:sim-trans-1}), $\Theta ~ \overline {\cal R} ~ \Phi'$.
\item Suppose $s \sstep {a(x)} \Theta,$ so we have: for all $w$, there exist $\Delta_1$, $\Delta_2$, and $\Delta'$ such that \begin{equation} \Delta \bstep{\hat \tau} \Delta_1 \sstep{a(x)} \Delta_2, \qquad \Delta_2[w/x] \bstep{\hat \tau} \Delta', \qquad \mbox{ and } \Theta[w/x] ~ \overline{\triangleleft_S} ~ \Delta'. \end{equation} Since $\Delta ~ \overline{\triangleleft_S} ~ \Phi$, by Lemma~\ref{lm:sim-like3} we have $\Phi \bstep {\hat \tau} \Phi_1$ and $\Delta_1 ~ \overline{\triangleleft_S} ~ \Phi_1.$ And since $\Delta_1 \sstep{a(x)} \Delta_2$, by Lemma~\ref{lm:sim-like2}, for all $w$, there exist $\Phi_2$, $\Phi_3$ and $\Phi_4$ such that: $$ \Phi_1 \bstep{\hat\tau} \Phi_2 \sstep{a(x)} \Phi_3, \qquad \Phi_3[w/x] \bstep{\hat\tau} \Phi_4, \qquad \Delta_2[w/x] ~ \overline{\triangleleft_S} ~ \Phi_4. $$ Lemma~\ref{lm:sim-like3}, together with $\Delta_2[w/x] ~ \overline{\triangleleft_S} ~ \Phi_4$ and $\Delta_2[w/x] \bstep{\hat\tau} \Delta'$, implies that $\Phi_4 \bstep{\hat\tau} \Phi_5$ and $\Delta' ~ \overline{\triangleleft_S} ~ \Phi_5$ for some $\Phi_5.$ From $\Theta[w/x] ~ \overline{\triangleleft_S} ~ \Delta'$ and $\Delta' ~ \overline{\triangleleft_S} ~ \Phi_5$, we have $\Theta[w/x] ~ \overline {\cal R} ~ \Phi_5.$ Putting it all together, we have: $$ \Phi \bstep{\hat\tau} \Phi_2 \sstep{a(x)} \Phi_3, \qquad \Phi_3[w/x] \bstep{\hat\tau} \Phi_5, \qquad \Theta[w/x] ~ \overline {\cal R} ~ \Phi_5. $$ \end{itemize} Thus ${\cal R}$ is indeed a simulation. \qed \end{proof} \begin{proposition} \label{prop:failsim-refl-trans} The relation $\overline{\triangleleft_{FS}}$ is reflexive and transitive. \end{proposition} \begin{proof} Reflexivity of $\overline{\triangleleft_{FS}}$ follows from Lemma~\ref{lm:sim-refl}. To show transitivity, we use a similar argument as in the proof of Proposition~\ref{prop:sim-refl-trans}: define ${\cal R}$ such that $s ~ {\cal R} ~ \Theta$ iff there exists $\Delta$ such that $s ~ \triangleleft_{FS} ~ \Delta$ and $\Delta ~ \overline{\triangleleft_{FS}} ~ \Theta.$ We show that ${\cal R}$ is a failure simulation.
Suppose $s ~ {\cal R} ~ \Theta$. The matching up of transitions between $s$ and $\Theta$ is proved similarly to the case with simulation, by proving the analog of Lemmas~\ref{lm:sim-like1} - \ref{lm:sim-like3} for failure simulation. It then remains to show that whenever $s \not \barb{X}$, there exists $\Theta'$ such that $\Theta \dar{\hat \tau} \Theta' \not \barb{X}.$ Since $s ~{\cal R} ~ \Theta$, by the definition of ${\cal R}$, we have a $\Delta$ s.t. $s ~ \triangleleft_{FS} ~ \Delta$ and $\Delta ~ \overline{\triangleleft_{FS}} ~ \Theta.$ The former implies that $\Delta \dar{\hat \tau} \Delta' \not \barb{X}$, for some $\Delta'$. It can be shown, using arguments similar to the proof of Lemma~\ref{lm:sim-like3}, that $\Theta \dar{\hat\tau} \Theta'$ for some $\Theta'$ such that $\Delta' ~ \overline{\triangleleft_{FS}} ~ \Theta'.$ Suppose $\supp {\Delta'} = \{s_i\}_{i \in I},$ i.e., $\Delta' = \sum_{i\in I} p_i \cdot \pdist{s_i}$ with $\sum_{i\in I} p_i = 1.$ Obviously, $s_i \not \barb{X}$ for each $i\in I.$ By Proposition~\ref{prop:lifting}, $\Theta' = \sum_{i\in I} p_i \cdot \Theta_i$ for some distributions $\Theta_i$ such that $\pdist{s_i} ~ \overline{\triangleleft_{FS}} ~ \Theta_i.$ The latter implies, by Definition~\ref{def:lifting}, that $s_i ~ \triangleleft_{FS} ~ \Theta_i.$ Since $s_i \not \barb{X}$, it follows that $\Theta_i \dar{\hat \tau} \Theta_i' \not \barb{X}$, for some $\Theta_i'.$ Thus $\Theta \dar{\hat \tau} \Theta' \dar{\hat \tau} (\sum_{i \in I} p_i \cdot \Theta_i') \not \barb{X}.$ \qed \end{proof} \begin{corollary}\label{cor:sim.fail.preorder} The relations $\sqsubseteq_S$ and $\sqsubseteq_{FS}$ are preorders. \end{corollary} \begin{proof} The fact that $\sqsubseteq_S$ is a preorder follows from Lemma~\ref{lm:sim-like3} and Proposition~\ref{prop:sim-refl-trans}. Similar arguments hold for $\sqsubseteq_{FS}$, using an analog of Lemma~\ref{lm:sim-like3} and Proposition~\ref{prop:failsim-refl-trans}.
\qed \end{proof} \section{Soundness of the simulation preorders} \label{sec:sound} In proving soundness of the simulation preorders with respect to testing preorders, we first need to prove certain congruence properties, i.e., closure under restriction and parallel composition. For this, it is helpful to consider a slightly more general definition of simulation, which incorporates another relation. This technique, called the {\em up-to} technique, has been used in the literature to prove congruence properties of various preorders for the $\pi$-calculus~\cite{Sangiorgi98MSCS}. \begin{definition}[Up-to rules] Let ${\cal R} \subseteq S_p \times {\cal D}(S_p).$ Define the relation ${\cal R}^t$ where $t \in \{r, \nu, p\}$ as the smallest relation which satisfies the closure rule for $t$, given below (where $\sigma$ is a renaming substitution): $$ \infer[r] {s \sigma ~{\cal R}^r ~ \Delta \sigma} {s ~ {\cal R} ~ \Delta} \qquad \infer[\nu] {(\nu \vec x.s) ~ {\cal R}^\nu ~ (\nu \vec x.\Delta)} {s ~ {\cal R} ~ \Delta} \qquad \infer[p] {(s_1 ~|~ s_2) ~ {\cal R}^p ~ (\Delta_1 ~|~ \Delta_2)} { s_1 ~ {\cal R} ~ \Delta_1 & s_2 ~ {\cal R} ~ \Delta_2 } $$ \end{definition} \begin{definition}[(Failure) Simulation up-to] A relation ${\cal R} \subseteq S_p \times {\cal D}(S_p)$ is said to be a {\em (failure) simulation up to renaming} (likewise, restriction and parallel composition) if it satisfies clauses 1 and 2 (and 3 for failure simulation) in Definition~\ref{def:sim}, but with $\overline {\cal R}$ in the clauses replaced by $\overline {{\cal R}^r}$ (respectively, $\overline{{\cal R}^\nu}$ and $\overline{{\cal R}^p}$). \end{definition} It is easy to see that ${\cal R} \subseteq {\cal R}^t$ for any $t \in \{r,\nu\}$ (via the identity renaming substitution in the former case, and the empty restriction in the latter). The following lemma is then an easy consequence.
\begin{lemma} \label{lm:sim-upto} If ${\cal R}$ is a (failure) simulation then it is a (failure) simulation up-to renaming, and also a (failure) simulation up to restriction. \end{lemma} Our ultimate objective is to show that simulation up-to parallel composition is itself a simulation. This would then entail that (the lifted) simulation is closed under parallel composition, from which soundness w.r.t. may-testing follows. We prove this indirectly in three stages: \begin{itemize} \item simulation up-to renaming is a simulation; \item simulation up-to restriction is a simulation up-to renaming (hence also a simulation by the previous item); \item and, finally, simulation up-to parallel composition is a simulation up-to restriction. \end{itemize} \subsection{Up to renaming} Note that as a consequence of Lemma~\ref{lm:rename} (1), given an injective renaming substitution $\sigma$, we have: if $\one {s\sigma}{\alpha'}{\Delta'}$ then there exist $\alpha$ and $\Delta$ such that $\alpha' = \alpha \sigma$, $\Delta' = \Delta\sigma$ and $\one s \alpha \Delta.$ This is proved by simply applying Lemma~\ref{lm:rename} (1) to $\one {s\sigma}{\alpha'}{\Delta'}$ using the inverse of $\sigma$.
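For a concrete instance of this inversion argument, take the injective renaming $\sigma = [b/a, a/b]$ (so that $\sigma^{-1} = \sigma$) and $s = a(x).\nil$, whence $s\sigma = b(x).\nil.$ From the transition $$ \one{b(x).\nil}{b(x)}{\pdist{\nil}} $$ an application of Lemma~\ref{lm:rename} (1) with $\sigma^{-1}$ yields $\one{a(x).\nil}{a(x)}{\pdist{\nil}},$ and indeed $b(x) = a(x)\sigma$ and $\pdist{\nil} = \pdist{\nil}\sigma,$ as required; recall that, by our convention, the substitution acts on the free name of the action but leaves the (fresh) bound name $x$ untouched.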
In the following, we shall write ${\cal R}^{tt}$ to denote $({\cal R}^t)^t$, i.e., the result of applying the up-to closure rule $t$ twice to ${\cal R}.$ \begin{lemma} ${\cal R}^{rr} = {\cal R}^r.$ \end{lemma} \begin{lemma} \label{lm:lift-renaming} If $\Delta_1 ~ \overline{{\cal R}^r} ~ \Delta_2$ then $(\Delta_1\sigma) ~ \overline {{\cal R}^r} ~ (\Delta_2 \sigma)$ for any renaming substitution $\sigma.$ \end{lemma} \begin{proof} This follows from the fact that $\Delta_1 ~ \overline{{\cal R}^r} ~ \Delta_2$ implies $\Delta_1\sigma ~ \overline{{\cal R}^{rr}} ~ \Delta_2\sigma$ and that ${\cal R}^{rr} = {\cal R}^r.$ \qed \end{proof} \begin{lemma} \label{lm:upto-renaming} If ${\cal R}$ is a (failure) simulation up to renaming, then ${\cal R}^r \subseteq \triangleleft_S$ (respectively, ${\cal R}^r \subseteq \triangleleft_{FS}$). \end{lemma} \begin{proof} Suppose ${\cal R}$ is a simulation up to renaming. It is enough to show that ${\cal R}^r$ is a simulation. So suppose $s ~{\cal R}^r ~ \Delta$ and $\one {s} \alpha \Theta.$ By the definition of ${\cal R}^r$, $s = s'\sigma$ and $\Delta = \Delta'\sigma$ for some renaming substitution $\sigma$ and some $s'$ and $\Delta'$ such that $s' ~ {\cal R} ~ \Delta'.$ There are several cases to consider depending on the type of $\alpha$.
\begin{itemize} \item $\alpha$ is $\tau$ or a free action: By Lemma~\ref{lm:rename} (1) we have $\one {s'} {\alpha'} {\Theta'}$ for some $\alpha'$ and $\Theta'$ such that $\alpha = \alpha'\sigma$ and $\Theta = \Theta'\sigma.$ Since ${\cal R}$ is a simulation up to renaming, $s' {\cal R} \Delta'$ implies that $\bigstep {\Delta'}{\hat{\alpha'}}{\Delta_1}$ and $\Theta' ~ \overline{{\cal R}^r} ~ \Delta_1.$ The former implies, by Lemma~\ref{lm:rename} (2), that $\bigstep{\Delta}{\hat \alpha}{\Delta_2}$ for some $\Delta_2$ such that $\Delta_2 = \Delta_1\sigma,$ while the latter implies, by Lemma~\ref{lm:lift-renaming}, that $\Theta = (\Theta'\sigma) ~ \overline {{\cal R}^r} ~ (\Delta_1\sigma) = \Delta_2.$ \item $\alpha = a(x)$ for some $a$ and $x$: In this case, $x \not \in fn(s,\Delta),$ so we can assume, without loss of generality, that $x$ does not occur in $\sigma.$ Using a similar argument as in the previous case, we have that $\one {s'} {b(x)} {\Theta'}$ for some $b$ and $\Theta'$ such that $\sigma(b) = a$ and $\Theta = \Theta'\sigma.$ Since ${\cal R}$ is a simulation up to renaming, $s' {\cal R} \Delta'$ implies that for every name $w$, there exist $\Delta_w^1$, $\Delta_w^2$ and $\Delta_w$ such that: \begin{equation} \label{eq:ren1} \Delta' \bstep {\hat \tau} \Delta_w^1 \sstep {b(x)} \Delta_w^2, \qquad \Delta_w^2[w/x] \bstep {\hat \tau} \Delta_w, \quad \mbox{ and } \end{equation} \begin{equation} \label{eq:ren2} \Theta'[w/x] ~ \overline{{\cal R}^r} ~ \Delta_w. \end{equation} Let $\Phi_1 = \Delta_w^1\sigma$, $\Phi_2 = \Delta_w^2 \sigma$ and $\Phi = \Delta_w\sigma.$ From (\ref{eq:ren1}) and Lemma~\ref{lm:rename} (2) we get: $$ \Delta = \Delta' \sigma \bstep {\hat \tau} \Delta_w^1\sigma = \Phi_1 \sstep{a(x)} \Delta_w^2 \sigma = \Phi_2. $$ By (\ref{eq:ren1}), the freshness assumption of $x$ w.r.t. $\sigma$, and Lemma~\ref{lm:rename} (2), we get $$ \Phi_2[w/x] = \Delta_w^2\sigma [w/x] = \Delta_w^2 [w/x] \sigma \bstep{\hat\tau} \Delta_w\sigma = \Phi. 
$$ Finally, by (\ref{eq:ren2}) and Lemma~\ref{lm:lift-renaming}, $ \Theta[w/x] = \Theta'\sigma [w/x] = \Theta'[w/x] \sigma ~ \overline{{\cal R}^r} ~ \Delta_w\sigma = \Phi. $ \item $\alpha = \bar a(x)$: This case can be proved similarly to the previous cases. \end{itemize} For the case where ${\cal R}$ is a failure simulation, we additionally need to show that whenever $s ~ {\cal R}^r ~ \Delta$ and $s \not \barb{X}$, we have $\Delta \dar{\hat \tau} \Theta \not \barb{X}$ for some $\Theta$. Since $s ~ {\cal R}^r ~ \Delta$, we have $s = s'\sigma$ and $\Delta = \Delta'\sigma$ for some $s'$, $\Delta'$ and renaming substitution $\sigma$ such that $s' ~ {\cal R} ~ \Delta'.$ Let $X' = X\sigma^{-1}$, i.e., $X'$ is the inverse image of $X$ under $\sigma.$ Then we have that $s' \not \barb{X'}$, and $\Delta' \dar{\hat \tau} \Theta' \not \barb{X'}.$ Applying $\sigma$ to the latter, we obtain $\Delta \dar{\hat \tau} \Theta'\sigma \not \barb{X}.$ \qed \end{proof} \begin{lemma} \label{lm:clo-renaming} Suppose $P \sqsubseteq_S Q$ ($P \sqsubseteq_{FS} Q$) and $\sigma$ is a renaming substitution. Then $P \sigma \sqsubseteq_S Q\sigma$ (respectively, $P \sigma \sqsubseteq_{FS} Q\sigma$). \end{lemma} \begin{proof} Immediate from Lemma~\ref{lm:upto-renaming}. \qed \end{proof} \subsection{Up to name restriction} The following lemma says that transitions are closed under name restriction, if certain conditions are satisfied.
\begin{lemma}\label{lm:res} \begin{enumerate} \item For every state-based process $s$, every action $\alpha$ and every list of names $\vec x$ such that $\{\vec x\}\cap n(\alpha) = \emptyset$, $\one{s}{\alpha}{\Delta}$ implies $\one{\nu \vec x.s}{\alpha}{\nu \vec x.\Delta}.$ \item For every $\Delta$ and $\Phi$, every action $\alpha$ and every list of names $\vec x$ such that $\{\vec x\}\cap n(\alpha) = \emptyset$, $\Delta \sstep{\alpha} \Phi$ implies $\nu \vec x.\Delta \sstep{\alpha} \nu \vec x.\Phi.$ \item Suppose $\one s {\bar a b} \Delta$ and suppose $\vec x$ and $\vec y$ are names such that $\{\vec x,\vec y\} \cap \{a,b\} = \emptyset.$ Then $\one {\nu \vec x \nu b\nu \vec y. s} {\bar a(b)}{\nu \vec x\nu \vec y.\Delta}$. \end{enumerate} \end{lemma} \begin{lemma} \label{lm:nu} If $\Delta ~ \overline{{\cal R}^\nu} ~ \Theta$ then $(\nu \vec x.\Delta) ~ \overline{{\cal R}^\nu} ~ (\nu \vec x.\Theta) $ \end{lemma} \begin{lemma} \label{lm:upto-restriction} If ${\cal R}$ is a (failure) simulation up to restriction, then ${\cal R}^\nu \subseteq \triangleleft_S$ (respectively, ${\cal R}^\nu \subseteq \triangleleft_{FS}$). \end{lemma} \begin{proof} Suppose ${\cal R}$ is a simulation up to restriction. We show that ${\cal R}^\nu$ is a simulation up to renaming, hence by Lemma~\ref{lm:upto-renaming} we have ${\cal R}^\nu \subseteq {\cal R}^{\nu r} \subseteq \triangleleft_S.$ Suppose $s ~ {\cal R}^\nu \Delta$ and $\one {s} {\alpha}{\Theta}.$ By the definition of ${\cal R}^\nu$, we have that $s = \nu \vec x.s'$, $\Delta = \nu \vec x.\Delta'$, and $s'[\vec y/\vec x] ~ {\cal R} ~ \Delta'[\vec y/\vec x]$ for some $\vec y$ such that $\{\vec y\} \cap fn(s,\Delta) = \emptyset.$ There are several cases depending on how the transition $\one s \alpha \Theta$ is derived. Note that there may be implicit $\alpha$-renaming involved in the derivations of a transition judgment. 
We assume that the names $\vec x$ are chosen such that no $\alpha$-renaming is needed in deriving the transition relation $\one{\nu \vec x.s'}{\alpha}{\Theta}$, e.g., one such choice would be one that avoids clashes with the free names in $\vec y$, $s$, and $\Delta$. \begin{itemize} \item $\alpha$ is either $\tau$ or a free action. In this case, the transition must have been derived as follows: $$ \infer=[res] {\one {\nu \vec x.s'}{\alpha}{\nu \vec x.\Theta'}} { \one {s'}{\alpha}{\Theta'} } $$ where $\Theta = \nu \vec x.\Theta'$ and $n(\alpha) \cap \{\vec x\} = \emptyset.$ Here a double line in the inference rule indicates zero or more applications of the rule. An inspection of the operational semantics will reveal that in this case, $n(\alpha) \subseteq fn(s)$ and $fn(\Theta) \subseteq fn(s)$. So in particular, $\{ \vec y\} \cap n(\alpha) = \emptyset.$ We can thus apply the renaming substitution $[\vec y/\vec x, \vec x/\vec y]$ to get $ \one {s'[\vec y/\vec x]} {\alpha} {\Theta'[\vec y/\vec x]}. $ Since $s'[\vec y/\vec x] ~ {\cal R} ~ \Delta'[\vec y/\vec x]$, we have that $ \bigstep{\Delta'[\vec y/\vec x]}{\hat \alpha}{\Delta''[\vec y/\vec x]} $ and $ \Theta'[\vec y/\vec x] ~ \overline{{\cal R}^\nu} ~ \Delta''[\vec y/\vec x]. $ The former implies, via Lemma~\ref{lm:res} (1), that $ \bigstep{\nu \vec x. \Delta'}{\hat \alpha}{\nu \vec x.\Delta''} $ and the latter implies, via Lemma~\ref{lm:nu}, that $ (\nu \vec x. \Theta') ~ \overline{{\cal R}^\nu} ~ (\nu \vec x. \Delta'') $. Since ${\cal R}^\nu \subseteq ({\cal R}^{\nu})^r$, we also have $ (\nu \vec x. \Theta') ~ \overline{{\cal R}^{\nu r}} ~ (\nu \vec x. \Delta'').
$ \item $\alpha = a(z)$: With an argument similar to the previous case, we can show that in this case we must have $\one {s'} {a(z)}{\Theta'}$ where $\Theta = \nu \vec x.\Theta'.$ We need to show that for every name $w$, there exist $\Gamma_w^1$, $\Gamma_w^2$ and $\Gamma_w$ such that $\Delta \bstep {\hat \tau} \Gamma_w^1 \sstep {a(z)} \Gamma_w^2$, $\Gamma_w^2[w/z] \bstep{\hat \tau} \Gamma_w$, and $\Theta[w/z] ~ \overline{{\cal R}^{\nu r}} ~ \Gamma_w.$ Note that $z \not \in \{\vec x\}$, but it may be the case that $z \in \{\vec y\}.$ So we first apply a renaming $[u/z,z/u, \vec y/\vec x,\vec x/\vec y]$, for some fresh name $u$, to the transition $\one {s'}{a(z)}{\Theta'}$ to get: $$ \one{s'[\vec y/\vec x]}{a(u)}{\Theta'[u/z,\vec y/\vec x]}. $$ Since $s'[\vec y/\vec x] ~ {\cal R} ~ \Delta'[\vec y/\vec x]$, we have, for every name $w$, some $\Delta_w^1$, $\Delta_w^2$ and $\Delta_w$ such that \begin{equation} \label{eq:clo-nu2a} \Delta'[\vec y/\vec x] \bstep{\hat\tau} \Delta_w^1 \sstep{a(u)} \Delta_w^2, \qquad \Delta_w^2[w/u] \bstep{\hat \tau} \Delta_w, \qquad \mbox{and } \end{equation} \begin{equation} \label{eq:clo-nu3a} \Theta'[u/z, \vec y/\vec x][w/u] = \Theta'[w/z,\vec y/\vec x] ~ \overline{{\cal R}^\nu} ~ \Delta_w[w/u].
\end{equation} Let $\Phi_w^1$, $\Phi_w^2$ and $\Phi_w$ be distributions such that $\Delta_w^1 = \Phi_w^1[\vec y/\vec x]$, $\Delta_w^2 = \Phi_w^2[u/z, \vec y/\vec x]$, and $\Delta_w = \Phi_w[\vec y/\vec x].$ So in particular, $\Delta_w^2[w/u] = \Phi_w^2[w/z, \vec y/\vec x]$ and $\Delta_w[w/u] = \Phi_w[w/z, \vec y/\vec x].$ Then (\ref{eq:clo-nu2a}) can be rewritten as: \begin{equation} \label{eq:clo-nu2b} \Delta'[\vec y/\vec x] \bstep{\hat\tau} \Phi_w^1[\vec y/\vec x] \sstep{a(u)} \Phi_w^2[u/z, \vec y/\vec x] \qquad \Phi_w^2[w/z, \vec y/\vec x] \bstep{\hat \tau} \Phi_w[\vec y/\vec x], \end{equation} and (\ref{eq:clo-nu3a}) can be rewritten as: \begin{equation} \label{eq:clo-nu3b} \Theta'[w/z,\vec y/\vec x] ~ \overline{{\cal R}^\nu} ~ \Phi_w[w/z, \vec y/\vec x]. \end{equation} Now, to define $\Gamma_w^1$, $\Gamma_w^2$ and $\Gamma_w$, we need to consider two cases, based on the value of $w$. The reason is that in the construction of $\Gamma_w$ we need to bind the free names in $\Phi_w$, so if $z$ is substituted with a name in $\vec y$, it could get captured. \begin{itemize} \item $w \not \in \{\vec x, \vec y\}$. In this case, define: $$ \Gamma_w^1 = \nu \vec x. \Phi_w^1, \qquad \Gamma_w^2 = \nu \vec x. \Phi_w^2, \qquad \Gamma_w = \nu \vec x. \Phi_w. $$ By Lemma~\ref{lm:res} (1) and (\ref{eq:clo-nu2b}), we have: $$ \nu \vec x. \Delta' \bstep {\hat \tau} \Gamma_w^1 \sstep {a(z)} \Gamma_w^2, \qquad \Gamma_w^2[w/z] \bstep {\hat \tau} \Gamma_w $$ and by Lemma~\ref{lm:nu} and (\ref{eq:clo-nu3b}), we have $$ (\Theta[w/z]) = (\nu \vec x. \Theta')[w/z] ~ \overline{{\cal R}^\nu} ~ \Gamma_w, $$ hence also, $ (\Theta[w/z]) = (\nu \vec x. \Theta')[w/z] ~ \overline{{\cal R}^{\nu r}} ~ \Gamma_w. $ \item $w \in \{\vec x, \vec y\}.$ Let $v$ be a new name (distinct from all other names considered so far). From the previous case, we know how to construct $\Gamma_v^1$, $\Gamma_v^2$ and $\Gamma_v$ such that \begin{equation} \label{eq:clo-nu3c} \nu \vec x.
\Delta' \bstep {\hat \tau} \Gamma_v^1 \sstep {a(z)} \Gamma_v^2, \qquad \Gamma_v^2[v/z] \bstep {\hat \tau} \Gamma_v \qquad (\Theta[v/z]) ~ \overline{{\cal R}^{\nu r}} ~ \Gamma_v. \end{equation} In this case, let $\Gamma_w^1 = \Gamma_v^1$, $\Gamma_w^2 = \Gamma_v^2$ and $\Gamma_w = \Gamma_v[w/v].$ (Note that because substitution is capture-avoiding, the bound names in $\Gamma_v$ will be renamed via $\alpha$-conversion). Then by Lemma~\ref{lm:rename} (2) and Lemma~\ref{lm:lift-renaming} and (\ref{eq:clo-nu3c}): $$ \nu \vec x. \Delta' \bstep {\hat \tau} \Gamma_w^1 \sstep {a(z)} \Gamma_w^2, \qquad \Gamma_v^2[w/z] \bstep {\hat \tau} \Gamma_w \qquad (\Theta[w/z]) ~ \overline{{\cal R}^{\nu r}} ~ \Gamma_w. $$ \end{itemize} \item $\alpha$ is a bound output action, i.e., $\alpha = \bar a(b)$ for some $a$ and $b.$ There are two subcases to consider, depending on whether $b \in \{\vec x\}$ (i.e., one of the restriction names $\vec x$ is extruded) or not. The latter can be proved similarly to the previous case. We show here a proof of the former case. So suppose $b \in \{\vec x\}$, i.e., $\nu \vec x = \nu \vec x_1 \nu b \nu \vec x_2$ and suppose that $[\vec y/\vec x]$ maps $b$ to $c$, i.e., $\nu \vec y = \nu \vec y_1 \nu c \nu \vec y_2.$ Suppose the transition relation is derived as follows: $$ \infer=[res] {\one{\nu\vec x_1\nu b \nu \vec x_2.s'}{\bar a(b)}{\nu \vec x_1 \nu \vec x_2. \Theta'}} { \infer[open] {\one {\nu b \nu \vec x_2.s'} {\bar a(b)}{\nu \vec x_2.\Theta'}} { \infer=[res] {\one {\nu \vec x_2.s'}{\bar a b}{\nu \vec x_2.\Theta'} } {\one {s'}{\bar a b}{\Theta'}} } } $$ Applying the renaming $[\vec y/\vec x, \vec x/\vec y]$ we have: $ \one{s'[\vec y/\vec x]}{\bar a c}{\Theta'[\vec y/\vec x]}. $ Since $s'[\vec y/\vec x] ~ {\cal R} ~ \Delta'[\vec y/\vec x]$, we have that \begin{equation} \label{eq:clo-nu4} \bigstep {\Delta'[\vec y/\vec x]}{\bar a c}{\Phi}, \qquad \mbox{ and } \qquad \Theta'[\vec y/\vec x] ~ \overline{{\cal R}^\nu} ~ \Phi.
\end{equation} Let $\Psi$ be such that $\Psi[\vec y/\vec x] = \Phi.$ Lemma~\ref{lm:res} (3) and (\ref{eq:clo-nu4}) imply that $$ \bigstep{\nu \vec x.\Delta' = \nu \vec y_1\nu c\nu \vec y_2.\Delta'[\vec y/\vec x]} {\bar a(c)}{\nu\vec y_1\nu \vec y_2.\Psi[\vec y/\vec x] = \nu \vec x_1\nu \vec x_2. \Psi[c/b]} $$ and by an application of a renaming (Lemma~\ref{lm:rename} (2)) we get $$ \bigstep {\nu \vec x.\Delta'}{\bar a (b)}{\nu\vec x_1\nu \vec x_2.\Psi}. $$ Lemma~\ref{lm:nu} and (\ref{eq:clo-nu4}) imply $$ (\nu \vec x_1\nu\vec x_2.\Theta'[c/b]) ~ \overline{{\cal R}^\nu} ~ (\nu\vec x_1\nu \vec x_2.\Psi[c/b]) $$ hence, via the renaming $[c/b,b/c]$, $ (\nu \vec x_1\nu\vec x_2.\Theta') ~ \overline{{\cal R}^{\nu r}} ~ (\nu\vec x_1\nu \vec x_2.\Psi). $ \end{itemize} If ${\cal R}$ is a failure simulation up to restriction, we need to additionally show that ${\cal R}^\nu$ satisfies clause 3 of Definition~\ref{def:sim}. Suppose $s ~ {\cal R}^\nu ~ \Theta$. Then $s = \nu \vec x. s'$ and $\Theta = \nu \vec x.\Theta'$ for some $\vec x$, $s'$ and $\Theta'$ such that $s' ~ {\cal R} ~ \Theta'.$ Suppose $s \not \barb{X}.$ We need to show that $\Theta \dar{\hat \tau} \Delta \not \barb{X}$ for some $\Delta.$ Since name restriction hides visible actions, it can be shown that $s' \not \barb{X \setminus \{\vec x\}}$ iff $\nu \vec x. s' \not \barb{X}.$ So from $s' ~ {\cal R} ~ \Theta'$ we have that $\Theta' \dar{\hat \tau} \Delta' \not \barb{X \setminus \{\vec x\}}.$ Let $\Delta = \nu \vec x.\Delta'.$ Then by Lemma~\ref{lm:res} (2), we have $\Theta = \nu \vec x.\Theta' \dar{\hat \tau} \nu \vec x.\Delta' = \Delta \not \barb{X}.$ \qed \end{proof} \begin{lemma} \label{lm:clo-res} If $P \sqsubseteq_S Q$ ($P \sqsubseteq_{FS} Q$) then $(\nu \vec x. P) ~ \sqsubseteq_S (\nu \vec x.Q)$ (respectively, $(\nu \vec x.P) \sqsubseteq_{FS} (\nu \vec x.Q)$). \end{lemma} \begin{proof} This is a simple corollary of Lemma~\ref{lm:sim-upto} and Lemma~\ref{lm:upto-restriction}.
\qed \end{proof} \subsection{Up to parallel composition} The following lemma will be useful in proving the closure of simulation under parallel composition. It is independent of the underlying calculus, and was originally proved in \cite{Deng07ENTCS}. \begin{lemma} \label{lm:par0} \begin{enumerate} \item $ (\sum_{j\in J} p_j \cdot \Phi_j) ~|~ (\sum_{k\in K} q_k \cdot \Delta_k) = \sum_{j\in J} \sum_{k \in K} (p_j \cdot q_k) \cdot (\Phi_j ~|~ \Delta_k). $ \item Suppose ${\cal R}, {\cal R}' \subseteq S_p \times {\cal D}(S_p)$ are two relations such that $s {\cal R}' \Delta$ whenever $s = s_1 ~|~ s_2$ and $\Delta = \Delta_1 ~|~ \Delta_2$ with $s_1 {\cal R} \Delta_1$ and $s_2 {\cal R} \Delta_2.$ Then $\Phi_1 \overline {\cal R} \Delta_1$ and $\Phi_2 \overline {\cal R} \Delta_2$ imply $(\Phi_1 ~|~ \Phi_2) \overline {{\cal R}'} (\Delta_1 ~|~ \Delta_2)$. \end{enumerate} \end{lemma} We also need a slightly more general substitution lemma for transitions than the one given in Lemma~\ref{lm:rename} (1). In the following, we denote by $n(\theta)$ the set of all names appearing in the domain and range of $\theta$. \begin{lemma} \label{lm:trans-subst} For any substitution $\sigma$, the following hold: \begin{enumerate} \item If $\one s \alpha \Delta$ and $bn(\alpha) \cap n(\sigma) = \emptyset$ then $\one{s\sigma}{\alpha\sigma}{\Delta\sigma}.$ \item If $\bigstep {\Delta}{\hat \alpha}{\Phi}$ and $bn(\alpha) \cap n(\sigma) = \emptyset$ then $\bigstep{\Delta\sigma}{\hat \alpha\sigma}{\Phi\sigma}.$ \end{enumerate} \end{lemma} The following lemma shows that transitions are closed under parallel composition, under suitable conditions.
\begin{lemma}\label{lm:trans-par} \begin{enumerate} \item If $\one {s}{\alpha}{\Delta}$ and $fn(s') \cap bn(\alpha) = \emptyset$ then $\one{s~|~s'} {\alpha}{\Delta~|~\pdist{s'}}$ and $\one{s'~|~s}{\alpha}{\pdist{s'}~|~\Delta}.$ \item If $\bigstep {\Phi}{\hat \alpha}{\Delta}$, where $\alpha$ is either $\tau$, a free action or a bound output, and $fn(\Phi') \cap bn(\alpha) = \emptyset$ then $\bigstep {\Phi~|~\Phi'} {\hat \alpha}{\Delta~|~\Phi'}$ and $\bigstep{\Phi'~|~\Phi}{\hat \alpha}{\Phi'~|~\Delta}.$ \item If $\one{\Phi}{a(y)}{\Phi'}$ and $\one{\Delta}{\bar a w}{\Delta'}$ then $\one{\Phi~|~\Delta}{\tau}{\Phi'[w/y]~|~\Delta'}.$ \item If $\one{\Phi}{a(y)}{\Phi'}$ and $\one{\Delta}{\bar a(y)}{\Delta'}$ then $\one{\Phi~|~\Delta}{\tau}{\nu y.(\Phi'~|~\Delta')}.$ \end{enumerate} \end{lemma} \begin{lemma} \label{lm:upto-par} If ${\cal R}$ is a simulation, then ${\cal R}^p \subseteq \triangleleft_S$. \end{lemma} \begin{proof} We show that ${\cal R}^p$ is a simulation up to restriction, and therefore, by Lemma~\ref{lm:upto-restriction}, it is included in $\triangleleft_S$. So suppose $s ~ {\cal R}^p ~ \Delta$ and $\one{s}{\alpha}{\Theta}.$ By definition, we have $s = s_1 ~|~ s_2$ and $\Delta = \Delta_1 ~|~ \Delta_2$ such that $s_1 ~ {\cal R} ~ \Delta_1$ and $s_2 ~{\cal R} ~ \Delta_2.$ There are several cases to consider depending on the type of $\alpha$: \begin{itemize} \item $\alpha$ is a free output action. There can be two ways in which the transition $\one{s}{\alpha}{\Theta}$ is derived. We show here one case; the other case is symmetric. So suppose the transition is derived as follows: $$ \infer[par] {\one{s_1~|~s_2}{\alpha}{\Theta'~|~\pdist{s_2}}} { \one{s_1}{\alpha}{\Theta'} } $$ where $\Theta = \Theta'~|~ \pdist{s_2}.$ Since $s_1 ~ {\cal R} ~ \Delta_1$, we have $$ \bigstep{\Delta_1}{\hat \alpha}{\Delta_1'} $$ and $\Theta' ~ \overline{{\cal R}} ~ \Delta_1'$. 
The former implies, via Lemma~\ref{lm:trans-par} (2), that $\bigstep{\Delta_1 ~|~ \Delta_2}{\hat\alpha}{\Delta_1' ~|~ \Delta_2}.$ Since $s_2 ~ {\cal R} ~ \Delta_2$ by assumption, and therefore $\pdist{s_2} ~ \overline {{\cal R}} ~\Delta_2$, by Lemma~\ref{lm:par0} (2) we have $$\Theta = (\Theta'~|~\pdist{s_2})~\overline {{\cal R}^p} ~ (\Delta_1' ~|~ \Delta_2)$$ and therefore, also $$\Theta = (\Theta'~|~\pdist{s_2})~\overline {{\cal R}^{p \nu}} ~ (\Delta_1' ~|~ \Delta_2).$$ \item $\alpha = a(y)$ and $y \not \in fn(s,\Delta).$ That is, in this case, the transition is derived as follows: $$ \infer[par] {\one{s_1 ~|~ s_2}{a(y)}{\Theta' ~|~ \pdist{s_2}}} { \one{s_1}{a(y)}{\Theta'} } $$ and $y \not \in fn(s_2).$ (There is another symmetric case which we omit here.) Since $s_1 ~ {\cal R} ~\Delta_1$, we have, for every name $w$, some $\Delta_w^1$, $\Delta_w^2$ and $\Delta_w$ such that: \begin{equation} \label{eq:clo-par1} \Delta_1 \bstep{\hat \tau} \Delta_w^1 \sstep{a(y)} \Delta_w^2, \qquad \Delta_w^2[w/y] \bstep{\hat \tau} \Delta_w, \quad \mbox{ and } \end{equation} \begin{equation} \label{eq:clo-par2} \Theta'[w/y]~ \overline{{\cal R}} ~ \Delta_w. \end{equation} From (\ref{eq:clo-par1}) above and Lemma~\ref{lm:trans-par} (2), and the assumption that $y \not \in fn(s,\Delta)$, we have $$ \Delta_1~|~\Delta_2 \bstep{\hat \tau} \Delta_w^1 ~|~ \Delta_2 \sstep{a(y)} \Delta_w^2 ~|~ \Delta_2, \qquad \Delta_w^2[w/y] ~|~ \Delta_2 \bstep{\hat\tau} \Delta_w ~|~ \Delta_2. $$ Since $s_2 ~ {\cal R} ~ \Delta_2$, and therefore $\pdist{s_2} ~ \overline{{\cal R}} ~ \Delta_2$, it then follows from (\ref{eq:clo-par2}) and Lemma~\ref{lm:par0} (2) that $$ \Theta[w/y] = (\Theta'[w/y]~|~ \pdist{s_2}) ~ \overline {{\cal R}^p} ~ (\Delta_w ~|~ \Delta_2) $$ and therefore $$ \Theta[w/y] = (\Theta'[w/y]~|~ \pdist{s_2}) ~ \overline {{\cal R}^{p \nu}} ~ (\Delta_w ~|~ \Delta_2). $$ \item $\alpha = \bar a(y)$ and $y \not \in fn(s,\Delta)$. 
This case is similar to the previous cases, except that we only need to consider an instantiation of $y$ with a fresh name. This is left as an exercise for the reader. \item $\alpha = \tau$ and the transition $\one{s}{\tau}{\Theta}$ is derived via a \textbf{Com}-rule. We show here one case; the other case can be dealt with symmetrically. So suppose the transition is derived as follows: $$ \infer[com] {\one{s_1 ~|~ s_2}{\tau}{\Theta_1[w/y]~|~\Theta_2}} { \one{s_1}{a(y)}{\Theta_1} & \one{s_2}{\bar a w}{\Theta_2} } $$ Without loss of generality, we can assume that $y \not \in fn(s,\Delta).$ Since $s_1 ~ {\cal R} ~ \Delta_1$ and $s_2 ~ {\cal R} ~ \Delta_2$, we have: \begin{itemize} \item For every name $w$, there are $\Lambda_1$, $\Lambda_2$ and $\Delta_1^w$ such that \begin{equation} \label{eq:clo-par3} \Delta_1 \bstep{\hat \tau} \Lambda_1 \sstep{a(y)} \Lambda_2, \qquad \Lambda_2[w/y] \bstep{\hat \tau} \Delta_1^w \qquad \mbox{ and } \end{equation} \begin{equation} \label{eq:clo-par4} \Theta_1[w/y] ~ \overline{{\cal R}} ~ \Delta_1^w \end{equation} \item There exists $\Delta_2'$ such that \begin{equation} \label{eq:clo-par5} \Delta_2 \bstep {\hat \tau} \Phi_1 \sstep{\bar a w} \Phi_2 \bstep{\hat \tau} \Delta_2' \qquad \mbox{ and } \end{equation} \begin{equation} \label{eq:clo-par6} \Theta_2 ~ \overline{{\cal R}} ~ \Delta_2' \end{equation} \end{itemize} From (\ref{eq:clo-par3}), (\ref{eq:clo-par5}), and Lemma~\ref{lm:trans-par} (2)-(3), we have: $$ \Delta_1 ~|~ \Delta_2 \bstep{\hat \tau} {\Lambda_1 ~|~ \Phi_1} \sstep {\tau} {\Lambda_2[w/y] ~|~ \Phi_2} \bstep {\hat \tau} {\Delta_1^w ~|~ \Delta_2'}, $$ and Lemma~\ref{lm:par0} (2), together with (\ref{eq:clo-par4}) and (\ref{eq:clo-par6}), implies $$ (\Theta_1[w/y] ~|~ \Theta_2) ~ \overline{{\cal R}^p} ~ (\Delta_1^w ~|~ \Delta_2') $$ and therefore $$ (\Theta_1[w/y] ~|~ \Theta_2) ~ \overline{{\cal R}^{p \nu}} ~ (\Delta_1^w ~|~ \Delta_2'). 
$$ \item $\alpha = \tau$ and the transition $\one s \tau \Theta$ is derived via the \textbf{Close}-rule: $$ \infer[close] {\one{s_1 ~|~ s_2}{\tau}{\nu y.(\Theta_1 ~|~ \Theta_2)}} { \one{s_1}{a(y)}{\Theta_1} & \one{s_2}{\bar a(y)}{\Theta_2} } $$ Again, we only show one of the two symmetric cases. Without loss of generality, assume that $y$ is chosen to be fresh w.r.t. $s$ and $\Delta.$ Since $s_1 ~ {\cal R} ~ \Delta_1$ and $s_2 ~ {\cal R} ~ \Delta_2$, we have: \begin{itemize} \item For every name $w$, there are $\Lambda_1$, $\Lambda_2$ and $\Delta_1^w$ such that $$ \Delta_1 \bstep{\hat \tau} \Lambda_1 \sstep{a(y)} \Lambda_2, \qquad \Lambda_2[w/y] \bstep{\hat \tau} \Delta_1^w \qquad \mbox{and} \qquad \Theta_1[w/y] ~ \overline{{\cal R}} ~ \Delta_1^w. $$ Note that letting $w = y$, we have \begin{equation} \label{eq:clo-par7} \Delta_1 \bstep{\hat \tau} \Lambda_1 \sstep{a(y)} \Lambda_2, \qquad \Lambda_2 \bstep{\hat \tau} \Delta_1^y \qquad \mbox{and} \end{equation} \begin{equation} \label{eq:clo-par8} \Theta_1 ~ \overline{{\cal R}} ~ \Delta_1^y \end{equation} \item There exist $\Phi_1$, $\Phi_2$ and $\Delta_2'$ such that \begin{equation} \label{eq:clo-par9} \Delta_2 \bstep {\hat \tau} \Phi_1 \sstep{\bar a(y)} \Phi_2 \bstep{\hat \tau} \Delta_2' \qquad \mbox{and} \end{equation} \begin{equation} \label{eq:clo-par10} \Theta_2 ~ \overline{{\cal R}} ~ \Delta_2' \end{equation} \end{itemize} Then, by (\ref{eq:clo-par7}), (\ref{eq:clo-par9}), Lemma~\ref{lm:trans-par} (2) and (4), and Lemma~\ref{lm:res} (1), we have: $$ \Delta_1 ~|~ \Delta_2 \bstep{\hat \tau} {\Lambda_1 ~|~ \Phi_1} \sstep {\tau} {\nu y.(\Lambda_2 ~|~ \Phi_2)} \bstep {\hat \tau} {\nu y.(\Delta_1^y ~|~ \Delta_2')}. $$ Lemma~\ref{lm:par0} (2), together with (\ref{eq:clo-par8}) and (\ref{eq:clo-par10}), implies $$ (\Theta_1 ~|~ \Theta_2) ~ \overline{{\cal R}^p} ~ (\Delta_1^y ~|~ \Delta_2'), $$ which also means: $$ (\Theta_1 ~|~ \Theta_2) ~ \overline{{\cal R}^{p \nu}} ~ (\Delta_1^y ~|~ \Delta_2').
$$ Now by Lemma~\ref{lm:nu}, the latter implies that $$ \nu y.(\Theta_1 ~|~ \Theta_2) ~ \overline{{\cal R}^{p \nu}} ~ \nu y. (\Delta_1^y ~|~ \Delta_2'). $$ \end{itemize} \qed \end{proof} \begin{lemma} \label{lm:upto-par-failsim} If ${\cal R}$ is a failure simulation, then ${\cal R}^p \subseteq \triangleleft_{FS}$. \end{lemma} \begin{proof} Suppose $s {\cal R}^p \Delta$ and $s\not\barb{X}$. By definition, we have $s = s_1 ~|~ s_2$ and $\Delta = \Delta_1 ~|~ \Delta_2$ such that $s_1 ~ {\cal R} ~ \Delta_1$ and $s_2 ~{\cal R} ~ \Delta_2.$ Then we have $s_i\not\barb{X}$ for $i=1,2$. Define a set $A$ as follows: $$ A = \{ a, \bar a \mid a \in fn(s_1,s_2,\Delta_1,\Delta_2) \} \cup X. $$ That is, $A$ contains the free names and co-names occurring in the $s_i$ and $\Delta_i$, together with $X.$ Let $X_i$ be the largest set such that $X \subseteq X_i \subseteq A$ and $s_i \not\barb{X_i}.$ Since ${\cal R}$ is a failure simulation, it follows that there exist $\Delta_i'$ such that $\Delta_i \dar{\tau} \Delta_i' \not \barb{X_i}.$ By Lemma~\ref{lm:trans-par} (2), we have $\Delta_1~|~\Delta_2 \dar{\tau} \Delta'_1~|~\Delta'_2.$ We claim that $(\Delta_1' ~|~ \Delta_2') \not \barb{X}.$ Suppose otherwise, that is, there exist $t_1 \in \supp {\Delta_1'}$ and $t_2 \in \supp {\Delta_2'}$ such that either $(t_1 ~|~ t_2) \barb \mu$, for some $\mu \in X$, or $(t_1 ~|~ t_2) \ar{\tau}$.
If $(t_1 ~|~ t_2) \barb \mu$ then our operational semantics entails that either $t_1 \barb \mu$ or $t_2 \barb \mu$, which contradicts the fact that $\Delta_i' \not\barb {X_i}.$ So assume that $(t_1 ~|~ t_2) \ar{\tau}.$ Again, from the assumption $\Delta_i' \not\barb {X_i}$, we can immediately rule out the cases where $t_i \ar{\tau}$ or $t_i \barb \mu$, for some $\mu \in X.$ This leaves us only with the cases where $t_1 \ar{\mu}$ and $t_2 \ar{\bar \mu}$ where $\mu \not \in X$ and $\bar\mu \not \in X.$ But since $\Delta_i' \not \barb{X_i}$, this can only be the case if $\mu \not \in X_1$ and $\bar\mu \not \in X_2.$ From the operational semantics, it is easy to see that $fn(\Delta_1',\Delta_2') \subseteq fn(\Delta_1,\Delta_2)$, so it must be the case that $\mu \in A$ and $\bar\mu \in A.$ It also must be the case that $s_1 \barb \mu$, for otherwise, it would contradict the ``largest'' property of $X_1$. Similarly, we can argue that $s_2 \barb {\bar \mu}$. But then this would imply that $(s_1 ~|~ s_2) \ar{\tau}$, contradicting the fact that $(s_1 ~|~ s_2) \not\barb{X}.$ The matching of transitions, and the use of ${\cal R}$ to prove that $\triangleleft_{FS}$ is preserved under parallel composition, are similar to the corresponding arguments in the proof of Lemma~\ref{lm:upto-par} for simulations, so we omit them. \qed \end{proof} \begin{lemma}\label{lm:clo-par} \begin{enumerate} \item If $P_1 \sqsubseteq_S Q_1$ and $P_2 \sqsubseteq_S Q_2$ then $P_1 ~|~ P_2 ~ \sqsubseteq_S Q_1 ~|~ Q_2.$ \item If $P_1 \sqsubseteq_{FS} Q_1$ and $P_2 \sqsubseteq_{FS} Q_2$ then $P_1 ~|~ P_2 ~ \sqsubseteq_{FS} Q_1 ~|~ Q_2.$ \end{enumerate} \end{lemma} \begin{proof} It is enough to show that $(\triangleleft_S)^p \subseteq \triangleleft_S$ and $(\triangleleft_{FS})^p \subseteq \triangleleft_{FS}$, which follow directly from Lemmas~\ref{lm:upto-par} and \ref{lm:upto-par-failsim} respectively.
\qed \end{proof} \subsection{Soundness} We now proceed to proving the main result, which is that $P \sqsubseteq_S Q$ implies $P \sqsubseteq_{pmay} Q$, and $P \sqsubseteq_{FS} Q$ implies $P \sqsubseteq_{pmust} Q$. The structure of the proof follows closely that of \cite{Deng08LMCS}. Most of the intermediate lemmas in this section are not specific to the $\pi$-calculus; rather, they utilise the underlying probabilistic automata semantics. Let $\pi^\omega$ be the set of all $\pi$ processes that may use action $\omega$. We write $s\ar{\alpha}_\omega\Delta$ if $s\ar{\alpha}\Delta$ and, in case $\alpha\not=\omega$, additionally $s\nar{\omega}$ holds. We define $\ar{\hat{\tau}}_\omega$ as we did for $\ar{\hat{\tau}}$, using $\ar{\tau}_\omega$ in place of $\ar{\tau}$. Similarly, we define $\dar{}_\omega$ and $\dar{\hat{\alpha}}_\omega$. Simulation and failure simulation are adapted to $\pi^\omega$ as follows. \begin{definition} Let $\triangleleft_{FS}^e \subseteq \pi^\omega\times {\cal D}(\pi^\omega)$ be the largest relation such that $s \triangleleft_{FS}^e \Theta$ implies \begin{itemize} \item If $ s \ar{a(x)}_\omega {\Delta}$ and $x \not \in fn(s,\Theta)$, then for every name $w$, there exist $\Theta_1$, $\Theta_2$ and $\Theta'$ such that $$\Theta \dar{\hat \tau}_\omega \Theta_1 \ar{a(x)}_\omega {\Theta_2}, \qquad \Theta_2[w/x] \dar{\hat \tau}_\omega \Theta', \qquad \hbox{ and } \qquad (\Delta[w/x]) ~ \lift{\triangleleft_{FS}^e} ~ \Theta'. $$ \item if $s\ar{\alpha}_\omega \Delta$ and $\alpha$ is not an input action, then there is some $\Theta'$ with $\Theta\dar{\hat{\alpha}}_\omega\Theta'$ and $\Delta\lift{\triangleleft_{FS}^e}\Theta'$ \item if $s\not \barb{X}$ with $\omega\in X$ then there is some $\Theta'$ with $\Theta\dar{\hat{\tau}}_\omega\Theta'$ and $\Theta'\not \barb{X}$. \end{itemize} Similarly we can define $\triangleleft_S^e$ by dropping the third clause.
Let $P \sqsubseteq_{FS}^e Q$ if $\interp{P}\dar{\hat{\tau}}_\omega\Theta$ for some $\Theta$ with $\interp{Q}\lift{\triangleleft_{FS}^e}\Theta$. Similarly, $P\sqsubseteq_S^e Q$ if $\interp{Q}\dar{\hat{\tau}}_\omega\Theta$ for some $\Theta$ with $\interp{P}\lift{\triangleleft_S^e}\Theta$. \end{definition} Note that for $\pi$-processes $P,Q$, there is no action $\omega$, therefore we have $P\sqsubseteq_{FS} Q$ iff $P\sqsubseteq_{FS}^e Q$, and $P\sqsubseteq_S Q$ iff $P\sqsubseteq_S^e Q$. \begin{lemma}\label{lem:preserve.par} Let $P,Q$ be processes in $\pi$ and $T$ be a process in $\pi^\omega$. \begin{enumerate} \item If $P\sqsubseteq_S Q$ then $T~|~ P \sqsubseteq_S^e T ~|~Q$. \item If $P\sqsubseteq_{FS} Q$ then $T~|~ P \sqsubseteq_{FS}^e T ~|~Q$. \end{enumerate} \end{lemma} \begin{proof} Similar to the proof of Lemma~\ref{lm:clo-par}. \qed \end{proof} \begin{lemma} \label{lem:pmay-max} \begin{enumerate} \item $P \sqsubseteq_{pmay} Q$ if and only if for every test $T$ we have $$ max({\mathbb V}(\interp {\nu \vec x.(T ~|~ P)})) \leq max({\mathbb V}(\interp {\nu \vec x.(T ~|~ Q)})) $$ where $\vec x$ contain the free names of $T$, $P$ and $Q$, excluding $\omega.$ \item $P \sqsubseteq_{pmust} Q$ if and only if for every test $T$ we have $$ min({\mathbb V}(\interp {\nu \vec x.(T ~|~ P)})) \leq min({\mathbb V}(\interp {\nu \vec x.(T ~|~ Q)})) $$ where $\vec x$ contain the free names of $T$, $P$ and $Q$, excluding $\omega.$ \end{enumerate} \end{lemma} \begin{proof} The results follow from the simple fact that, for non-empty finite outcome sets $O_1,O_2$, \begin{itemize} \item $O_1\sqsubseteq_{Ho}O_2$ iff $max(O_1)\leq max(O_2)$ \item $O_1\sqsubseteq_{Sm}O_2$ iff $min(O_1)\leq min(O_2)$ \end{itemize} which is established as Proposition 2.1 in \cite{Deng07ENTCS}. 
\qed \end{proof} \begin{lemma} \label{lm:trans-max} $\Delta_1 \dar{\hat \tau} \Delta_2$ implies $max({\mathbb V}(\Delta_1)) \geq max({\mathbb V}(\Delta_2))$ and $min({\mathbb V}(\Delta_1)) \leq min({\mathbb V}(\Delta_2))$. \end{lemma} \begin{proof} Similar properties are proven in \cite[Lemma 6.15]{Deng07ENTCS} using a function $maxlive$ instead of $max\circ{\mathbb V}$. Essentially the same arguments apply here. \qed \end{proof} \begin{proposition} \label{prop:sim-max} \begin{enumerate} \item $\Delta_1\lift{\triangleleft_S^e}\Delta_2$ implies $max({\mathbb V}(\Delta_1)) \leq max({\mathbb V}(\Delta_2))$. \item $\Delta_1\lift{\triangleleft_{FS}^e}\Delta_2$ implies $min({\mathbb V}(\Delta_1)) \geq min({\mathbb V}(\Delta_2))$. \end{enumerate} \end{proposition} \begin{proof} The first clause is proven in \cite[Proposition 6.16]{Deng07ENTCS} using a function $maxlive$ instead of $max\circ{\mathbb V}$. The second clause is proven in \cite[Proposition 4.10]{Deng08LMCS}. \qed \end{proof} \begin{theorem}\label{thm:sim-sound} \begin{enumerate} \item $P \sqsubseteq_S Q$ implies $P \sqsubseteq_{pmay} Q.$ \item $P \sqsubseteq_{FS} Q$ implies $P \sqsubseteq_{pmust} Q.$ \end{enumerate} \end{theorem} \begin{proof} We prove the second statement; the proof of the first one is similar. Suppose $P \sqsubseteq_{FS} Q$. Given Lemma~\ref{lem:pmay-max}, it is sufficient to show that for every test $T$, $$ min({\mathbb V}(\interp{\nu \vec x (T ~|~ P) })) \leq min({\mathbb V}(\interp{\nu \vec x(T ~|~ Q)})) $$ where $\vec x$ contain the free names of $T$, $P$ and $Q$, but excluding $\omega.$ Since $\sqsubseteq_{FS}$ is preserved by parallel composition (cf.
Lemma~\ref{lem:preserve.par}) and name restriction, we have that $$ \nu \vec x(T ~|~ P) \sqsubseteq_{FS}^e \nu \vec x(T~|~Q), $$ which means there is a $\Theta$ such that $\interp {\nu \vec x(T~|~P)} \bstep{\hat \tau} \Theta$ and $\interp {\nu \vec x(T~|~Q)} ~ \lift{\triangleleft_{FS}^e} \Theta.$ The result then follows from Proposition~\ref{prop:sim-max} and Lemma~\ref{lm:trans-max}. \qed \end{proof} \section{A modal logic for $\pi_p$} \label{sec:modal} We consider a modal logic based on a fragment of Milner-Parrow-Walker's (MPW) modal logic for the (non-probabilistic) $\pi$-calculus~\cite{Milner93TCS}, but extended with a probabilistic disjunction operator $\oplus$, similar to that used in \cite{Deng08LMCS}. The language of formulas is given by the following grammar: $$ \varphi ::= \top ~ \mid ~ \Ref{X} ~ \mid ~ \ldia{a(x)} \varphi ~ \mid ~ \ldia{\bar a x} \varphi ~ \mid ~ \ldia{\bar a(x)} \varphi ~ \mid ~ \varphi_1 \wedge \varphi_2 ~ \mid ~ \varphi_1 {\pch p} \varphi_2 $$ The $x$'s in $\ldia{a(x)}\varphi$ and $\ldia{\bar a(x)}\varphi$ are binders, whose scope is over $\varphi.$ The diamond operator $\ldia{a(x)}$ is called a bound input modal operator, $\ldia{\bar a x}$ a free output modal operator and $\ldia{\bar a(x)}$ a bound output modal operator. Instead of binary conjunction and probabilistic disjunction, we sometimes write $\bigwedge_{i\in I}\varphi_i$ and $\bigoplus_{i\in I} p_i \cdot \varphi_i$ for a finite index set $I$; they can be expressed by nested use of their binary forms. We refer to this modal logic as ${\cal F}$. Let ${\cal L}$ be the sub-logic of ${\cal F}$ obtained by omitting the $\Ref{X}$ clause. The semantics of each operator is defined as follows. \begin{definition} \label{def:sat} The {\em satisfaction relation} $\models$ between a distribution and a modal formula is defined inductively as follows: \begin{itemize} \item $\Delta \models \top$ always.
\item $\Delta \models \Ref{X}$ iff there is a $\Delta'$ with $\Delta\dar{\hat{\tau}} \Delta'$ and $\Delta'\not\barb{X}$. \item $\Delta \models \ldia{a(x)}\varphi$ iff for all $z$ there are $\Delta_1,$ $\Delta_2,$ $\Delta'$ and $w$ such that $\Delta \bstep{\hat \tau} \Delta_1 \sstep{a(w)} \Delta_2$, $\Delta_2[z/w] \bstep{\hat \tau} \Delta'$ and $\Delta' \models \varphi[z/x].$ \item $\Delta \models \ldia{\bar a x}\varphi$ iff for some $\Delta'$, $\Delta \bstep{\widehat{\bar a x}} \Delta'$ and $\Delta' \models \varphi.$ \item $\Delta \models \ldia{\bar a(x)}\varphi$ iff for some $\Delta'$ and $w \not \in fn(\varphi, \Delta)$, $\Delta \bstep{\widehat{\bar a(w)}} \Delta'$ and $\Delta' \models \varphi[w/x].$ \item $\Delta \models \varphi_1 \wedge \varphi_2$ iff $\Delta \models \varphi_1$ and $\Delta \models \varphi_2$. \item $\Delta \models \varphi_1 {\pch p} \varphi_2$ iff there are $\Delta_1,\Delta_2 \in {\cal D}(S_p)$ with $\Delta_1\models\varphi_1$ and $\Delta_2\models\varphi_2$, such that $\bigstep{\Delta}{\hat \tau}{p \cdot \Delta_1 + (1-p)\cdot\Delta_2.}$ \end{itemize} We write $\Delta \sqsubseteq_\Lcal \Theta$ just when $\Delta \models \psi$ implies $\Theta \models \psi$ for all $\psi \in {\cal L}$, and $\Delta \sqsubseteq_\Fcal \Theta$ just when $\Theta \models \varphi$ implies $\Delta \models \varphi$ for all $\varphi\in{\cal F}$. We write $P \sqsubseteq_\Lcal Q$ when $\interp P \sqsubseteq_\Lcal \interp Q$, and $P \sqsubseteq_\Fcal Q$ when $\interp P \sqsubseteq_\Fcal \interp Q$. \end{definition} Following \cite{Deng08LMCS}, in order to show soundness of the logical preorders w.r.t. the simulation pre-orders, we need to define a notion of characteristic formulas. 
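Before doing so, we illustrate Definition~\ref{def:sat} on a small example. The process below is purely illustrative; we write $0$ for the inactive process, and assume that $\interp{\cdot}$ maps a probabilistic choice to the corresponding convex combination of interpretations: $$ \interp{\bar a x.0 \pch{\frac{1}{2}} \bar b x.0} \;=\; \frac{1}{2}\cdot\pdist{\bar a x.0} + \frac{1}{2}\cdot\pdist{\bar b x.0} \;\models\; \ldia{\bar a x}\top \pch{\frac{1}{2}} \ldia{\bar b x}\top. $$ Indeed, taking $\Delta_1 = \pdist{\bar a x.0}$ and $\Delta_2 = \pdist{\bar b x.0}$, the $\hat\tau$-move required in the clause for ${\pch p}$ is the empty one, and $\Delta_1 \models \ldia{\bar a x}\top$ since $\pdist{\bar a x.0} \bstep{\widehat{\bar a x}} \pdist{0} \models \top$; symmetrically, $\Delta_2 \models \ldia{\bar b x}\top$.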
\begin{definition}[Characteristic formula] \label{def:char-form} The {\em ${\cal F}$-characteristic formulas} $\varphi_s$ and $\varphi_\Delta$ of, respectively, a state-based process $s$ and a distribution $\Delta$ are defined inductively as follows: $$ \begin{array}{rcl} \varphi_s & := & \bigwedge\{\ldia{\alpha}\varphi_\Delta \mid s\ar{\alpha}\Delta\} \wedge \Ref{\{\mu \mid s\not\barb{\mu}\}} \qquad\mbox{ if $s\not\ar{\tau}$},\\ \varphi_s & := & \bigwedge\{\ldia{\alpha}\varphi_\Delta \mid s\ar{\alpha}\Delta, ~ \alpha \not = \tau\} ~ \wedge ~ \bigwedge\{\varphi_\Delta \mid s\ar{\tau}\Delta\} \qquad \mbox{ otherwise}.\\ \varphi_\Delta & := & \bigoplus_{s\in\supp{\Delta}}\Delta(s) \cdot \varphi_s \end{array} $$ where $\bigoplus$ is a generalised probabilistic choice as in Section~\ref{sec:pi}. The \emph{${\cal L}$-characteristic formulas} $\psi_s$ and $\psi_\Delta$ are defined likewise, but omitting the conjuncts $\Ref{\{\mu \mid s\not\barb{\mu}\}}$. \end{definition} Note that because we use the late semantics (cf. Figure~\ref{fig:pi}), the conjunction in $\varphi_s$ is finite even though there can be infinitely many (input) transitions from $s.$ Given a state-based process $s$, we define its {\em size}, $|s|$, as the number of process constructors and names in $s.$ The following lemma is straightforward from the definition of the operational semantics of $\pi_p$. \begin{lemma} \label{lm:trans-size} If $s \sstep{\alpha} \Delta$ then $|s| > |t|$ for every $t \in \supp \Delta.$ \end{lemma} \begin{lemma} \label{lm:char-form} For every $\Delta \in {\cal D}(S_p)$, $\Delta \models \varphi_\Delta$, as well as $\Delta\models\psi_\Delta$. \end{lemma} \begin{proof} It is enough to show that $\pdist s \models \varphi_s.$ This is proved by induction on $|s|.$ So suppose $s\not\ar{\tau}$.
Then we have $$ \begin{array}{ll} \varphi_s = & \Ref{\{\mu \mid s\not\barb{\mu}\}}\wedge \\ & \bigwedge \{\ldia {a(x)} \varphi_\Delta \mid \one s {a(x)}{\Delta} \} \wedge \bigwedge \{\varphi_\Delta \mid \one s \tau \Delta \} \wedge \\ & \bigwedge \{\ldia {\bar a x } \varphi_\Delta \mid \one s {\bar a x }{\Delta} \} \wedge \bigwedge \{\ldia {\bar a(x) } \varphi_\Delta \mid \one s {\bar a(x) }{\Delta} \}. \end{array} $$ where $ \varphi_\Delta = \bigoplus_{s \in \supp \Delta} \Delta(s) \cdot \varphi_s. $ For each conjunct $\phi$ above, we prove that $\pdist s \models \phi.$ We show here two cases; the other cases are similar. \begin{itemize} \item $\phi= \Ref{X}$, where $X=\{\mu \mid s\not\barb{\mu}\}$. For each $\mu \in X$ we have $s\not\barb{\mu}$. Moreover, since $s\not\ar{\tau}$, we see that $s\not\barb{X}$. Hence, taking $\Delta' = \pdist s$ (performing no $\tau$ steps), we obtain $\pdist s \models \Ref{X}.$ \item $\phi = \ldia {a(x)} \varphi_\Delta$. So suppose $s \sstep{a(x)} \Delta$ and $\supp \Delta = \{s_i \mid i \in I\}$ and $\Delta = \sum_{i \in I} p_i \cdot \pdist{s_i}.$ Since $|s_i| < |s|$, by the induction hypothesis, for every name $w$, we have $$ \pdist{s_i[w/x]} \models \varphi_{s_i[w/x]} $$ and therefore: $$ \Delta[w/x] = \sum_{i\in I} p_i \cdot \pdist{s_i[w/x]} \models \bigoplus_{i \in I} p_i \cdot \varphi_{s_i[w/x]} = \varphi_\Delta[w/x]. $$ Let $\Phi_1 = \Phi_2 = \pdist s.$ Obviously we have, for every $w$, $$ \Phi_1 \bstep {\hat \tau} \Phi_2 \sstep {a(x)} \Delta, \qquad \Delta[w/x] \models \varphi_\Delta[w/x].
$$ So by Definition~\ref{def:sat}, $\pdist s \models \phi.$ \end{itemize} \qed \end{proof} \begin{lemma} \label{lm:modal-sim} For any processes $P$ and $Q$, $\interp{P}\models\varphi_{\interp{Q}}$ implies $P\sqsubseteq_{FS} Q$, and likewise $\interp Q \models \psi_{\interp P}$ implies $P \sqsubseteq_S Q.$ \end{lemma} \begin{proof} Let ${\cal R}$ be the relation defined as follows: $s ~ {\cal R} ~ \Theta$ iff $\Theta \models \varphi_s.$ We first prove the following claim: \begin{equation} \label{claim} \mbox{ $\Theta \models \varphi_\Delta$ implies there exists $\Theta'$ such that $\bigstep{\Theta}{\hat \tau}{\Theta'}$ and $\Delta ~ \overline {\cal R} ~ \Theta'.$ } \end{equation} To prove this claim (following \cite{Deng08LMCS}), suppose that $\Theta \models \varphi_\Delta$. By definition, $\varphi_\Delta = \bigoplus_{i \in I} p_i \cdot \varphi_{s_i}$ and $\Delta = \sum_{i \in I} p_i \cdot \pdist {s_i}$. By the semantics of probabilistic disjunction, for every $i \in I$ there is some $\Theta_i \in {\cal D}(S_p)$ with $\Theta_i \models \varphi_{s_i}$ such that $\bigstep {\Theta} {\hat \tau}{\Theta'}$, where $\Theta' = \sum_{i\in I} p_i \cdot \Theta_i.$ Since $s_i ~ {\cal R} ~ \Theta_i$ for all $i \in I$, we have $\Delta ~ \overline {\cal R} ~ \Theta'.$ We now proceed to show that ${\cal R}$ is a failure simulation, hence proving the first statement of the lemma. So suppose $s ~ {\cal R} ~ \Theta$. \begin{enumerate} \item Suppose $\one s \tau \Delta$. By the definition of ${\cal R}$, we have $\Theta \models \varphi_s.$ By Definition~\ref{def:char-form}, we also have $\Theta \models \varphi_\Delta.$ By (\ref{claim}) above, there exists $\Theta'$ such that $\bigstep{\Theta}{\hat \tau}{\Theta'}$ and $\Delta ~ \overline {\cal R} ~ \Theta'.$ \item Suppose $\one s {\bar a x} \Delta$.
Then by Definition~\ref{def:char-form}, $\Theta \models \ldia{\bar a x}\varphi_\Delta.$ So $\Theta \bstep {\bar a x } \Theta'$ and $\Theta' \models \varphi_\Delta$, for some $\Theta'.$ By (\ref{claim}), there exists $\Theta''$ such that $\Theta' \bstep{\hat {\tau}}{\Theta''}$ and $\Delta ~ \overline {\cal R} ~ \Theta''.$ This means that $\Theta \bstep{\bar a x} \Theta''$ and $\Delta ~ \overline {\cal R} ~ \Theta''.$ \item Suppose $\one s {a(x)} \Delta$ for some $x \not \in fn(s,\Theta).$ By Definition~\ref{def:char-form}, $\Theta \models \ldia{a(x)}\varphi_\Delta.$ This means for every name $z$, there exists $\Theta_z^1$, $\Theta_z^2$ and $\Theta_z$ such that $\Theta \bstep {\hat \tau} \Theta_z^1 \sstep {a(x)} {\Theta_z^2}$, $\Theta_z^2[z/x] \bstep{\hat \tau} \Theta_z$ and $\Theta_z \models \varphi_\Delta[z/x].$\footnote{Strictly speaking, we should also consider the case where $\Theta_z^1 \sstep{a(w)} \Theta_z^2$, but it is easy to see that since $x \not \in fn(s,\Theta)$ we can always apply a renaming to rename $w$ to $x.$} Then by (\ref{claim}) we have $\bigstep {\Theta_z }{\hat \tau}{\Theta_z'}$ and $\Delta[z/x] ~ \overline {\cal R} ~ \Theta_z'.$ So we indeed have, for every name $z$, $\Theta_z^1$, $\Theta_z^2$ and $\Theta_z'$ such that $$ \Theta \bstep{\hat \tau} \Theta_z^1 \sstep{a(x)} \Theta_z^2, \qquad \Theta_z^2[z/x] \bstep{\hat \tau} \Theta_z' \qquad \hbox{ and } \qquad \Delta[z/x] ~ \overline {\cal R} ~ \Theta_z'. $$ \item Suppose $\one s {\bar a(x)}{\Delta}.$ This case is similar to the previous one, except that we need only to consider one instance of $x$ with a fresh name. \item Suppose $s\not\barb{X}$ for a set of channel names $X$. By Definition~\ref{def:char-form}, we have $\Theta\models\Ref{X}$. Hence, there is some $\Theta'$ with $\Theta\dar{\hat{\tau}}\Theta'$ and $\Theta'\not\barb{X}$. \end{enumerate} To establish the second statement, define ${\cal R}$ by $s{\cal R}\Theta$ iff $\Theta\models\psi_s$. 
Just as above it can be shown that ${\cal R}$ is a simulation. Then the second statement of the lemma easily follows. \qed \end{proof} \begin{theorem}\label{thm:modal-sim} \begin{enumerate} \item If $P \sqsubseteq_\Lcal Q$ then $P \sqsubseteq_S Q.$ \item If $P \sqsubseteq_\Fcal Q$ then $P \sqsubseteq_{FS} Q.$ \end{enumerate} \end{theorem} \begin{proof} Suppose $P \sqsubseteq_{\cal L} Q$. By Lemma~\ref{lm:char-form}, we have $\interp P \models \psi_{\interp P}$, hence $\interp Q \models \psi_{\interp P}$. Then by Lemma~\ref{lm:modal-sim}, we have $P \sqsubseteq_S Q.$ For the second statement, assume $P \sqsubseteq_\Fcal Q$. By Lemma~\ref{lm:char-form}, we have $\interp{Q}\models\varphi_{\interp{Q}}$, and hence $\interp{P}\models\varphi_{\interp{Q}}$; then by Lemma~\ref{lm:modal-sim}, $P\sqsubseteq_{FS} Q$. \qed \end{proof} \newcommand\vsim[1]{{\widehat \sqsubseteq}^{#1}_{pmay}} \newcommand\vapply[2]{{\widehat {\cal A}}^\Omega_{\updownarrow}(#1,#2) } \newcommand\pr[2]{\langle #1, #2\rangle} \section{Completeness of the simulation preorders} \label{sec:comp} In the following, we assume a function $new$ that takes as an argument a finite set of names and outputs a fresh name, i.e., if $new(N) = x$ then $x\not \in N.$ If $N = \{x_1,\ldots,x_n\}$, we write $[x \not = N]P$ to abbreviate $[x \not = x_1][x \not = x_2] \cdots [x \not = x_n]P.$ For convenience of presentation, we write $\vec{\omega}$ for the vector in $[0,1]^\Omega$ defined by $\vec{\omega}(\omega)=1$ and $\vec{\omega}(\omega')=0$ for any $\omega'\not=\omega$.
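To illustrate these abbreviations with hypothetical names, let $N = \{x_1, x_2\}$ and $\Omega = \{\omega_1, \omega_2\}$, identifying $[0,1]^\Omega$ with $[0,1]^2$ via the enumeration $\omega_1, \omega_2$: $$ [x \not = N]P = [x \not = x_1][x \not = x_2]P, \qquad \vec{\omega_1} = (1,0), \qquad \vec{\omega_2} = (0,1). $$ Thus $[x \not = N]P$ can proceed as $P$ exactly when the name substituted for $x$ differs from every name in $N$, i.e., when it is fresh for $N$; this is what allows the characteristic tests constructed below to detect the output of fresh names.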
We also extend the $Apply^\Omega$ function to allow applying a test to a distribution, defined as $Apply^\Omega(T, \Delta) = {\mathbb V}({\nu \vec x(\interp T~|~\Delta)})$ where $\vec x = fn(T,\Delta) - \Omega.$ \begin{lemma} \label{lm:sat-renaming} If $\Delta \models \varphi$ then $\Delta \sigma \models \varphi \sigma$ for any renaming substitution $\sigma.$ \end{lemma} In the following, given a name $a$, we write $a.P$ to denote $a(y).P$ for some $y \not \in fn(P).$ Similarly, we write $\bar a.P$ to denote $\bar a a.P.$ Recall that the size of a state-based process, $|s|$, is the number of process constructors and names in $s.$ The {\em size} of a distribution $\Delta$, written $|\Delta|$, is the {\em multiset} $\{|s| \mid s \in \supp \Delta \}.$ Such multisets of natural numbers are well-founded under the multiset ordering, which we shall denote by $\prec$. \begin{lemma} \label{lm:test1} Let $P$ be a process and $T, T_i$ be tests. \begin{enumerate} \item $o\in Apply^\Omega(\omega,P)$ iff $o=\vec{\omega}$. \item Let $X=\{\mu_1,...,\mu_n\}$ and $T=\mu_1.\omega+...+\mu_n.\omega$. Then $\vec{0} \in Apply^\Omega(T,P)$ iff $\interp{P}\dar{\hat{\tau}}\Delta$ for some $\Delta$ with $\Delta\not\barb{X}$. \item Suppose the action $\omega$ does not occur in the test $T$. Then $o\in Apply^\Omega(\omega+a(x).([x=y]\tau.T+\omega), P) $ with $o(\omega)=0$ iff there is $\Delta$ such that $\interp P \bstep {\widehat{\bar a y}} \Delta$ and $o\in Apply^\Omega(T[y/x], \Delta).$ \item Suppose the action $\omega$ does not occur in the test $T$ and $fn(P)\subseteq N$. Then $o\in Apply^\Omega(\omega+a(x).([x\not=N]\tau.T+\omega), P) $ with $o(\omega)=0$ iff there is $\Delta$ such that $\interp P \bstep {\widehat{\bar a (y)}} \Delta$ and $o\in Apply^\Omega(T[y/x], \Delta).$ \item Suppose the action $\omega$ does not occur in the test $T$.
Then $o\in Apply^\Omega(\omega+\bar a x.T, P)$ with $o(\omega)=0$ iff there are $\Delta$, $\Delta_1$ and $\Delta_2$ such that $\interp P \bstep{\hat \tau} \Delta_1 \sstep{a(y)} \Delta_2,$ $\Delta_2[x/y] \bstep{\hat \tau} \Delta$ and $o\in Apply^\Omega(T,\Delta).$ \item $o\in Apply^\Omega(\bigoplus_{i\in I} p_i \cdot T_i, P) $ iff $o=\sum_{i\in I}p_i\cdot o_i$ for some $o_i\in Apply^\Omega(T_i,P)$ for all $i\in I.$ \item $o\in Apply^\Omega(\sum_{i\in I} \tau.T_i, P)$ if for all $i \in I$ there are $q_i \in [0,1]$ and $\Delta_i$ such that $\sum_{i\in I} q_i = 1$, $\interp P \bstep{\hat \tau} \sum_{i \in I} q_i \cdot \Delta_i$ and $o=\sum_{i\in I}q_i\cdot o_i$ for some $o_i\in Apply^\Omega(T_i,\Delta_i).$ \end{enumerate} \end{lemma} \begin{proof} The proofs of items 1 and 2 are similar to the proofs of Lemma 6.7(1) and 6.7(2) in \cite{Deng08LMCS} for pCSP; items 6 and 7 correspond to Lemma 6.7(4) and Lemma 6.7(5) in \cite{Deng08LMCS}, respectively. Items 3, 4 and 5 have a counterpart in Lemma 6.7(3) of \cite{Deng08LMCS}, but they are quite different, due to the name-passing feature of the $\pi$-calculus, and the possibility of checking the identity of the input value via the match and the mismatch operators. We show here a proof of item 3; the proofs of items 4 and 5 are similar. We first generalize item 3 to distributions: given $\omega$ and $T$ as above, we have, for every distribution $\Theta$, \begin{quote} $o\in Apply^\Omega(\omega+a(x).([x=y]\tau.T+\omega), \Theta) $ with $o(\omega)=0$ iff there is $\Delta$ such that $\Theta \bstep {\widehat{\bar a y}} \Delta$ and $o\in Apply^\Omega(T[y/x], \Delta).$ \end{quote} The `if' part is straightforward from Definition~\ref{def:vector-based-results}. We show the `only if' part here. 
The proof will make use of the following claim (easily proved by induction on $|\Theta|$): \begin{equation} \label{eq:claim} \begin{array}{l} \mbox{\bf Claim:} ~ o\in Apply^\Omega([y=y]\tau.T[y/x]+\omega, \Theta) \mbox{ with } o(\omega)=0 \mbox{ iff } \\ \mbox{there is $\Delta$ such that $\Theta \bstep {\hat \tau} \Delta$ and $o\in Apply^\Omega(T[y/x], \Delta).$ } \end{array} \end{equation} So, suppose we have $o\in Apply^\Omega(\omega+a(x).([x=y]\tau.T+\omega), \Theta) $ with $o(\omega)=0$. We show, by induction on $|\Theta|$, that there exists $\Delta$ such that $\Theta \bstep {\bar a y} \Delta$ and $o\in Apply^\Omega(T[y/x], \Delta).$ Let $T' = \omega+a(x).([x=y]\tau.T+\omega)$, and suppose $\Theta = p_1 \cdot \pdist{s_1} + \ldots + p_n \cdot \pdist{s_n}$, for pairwise distinct state-based processes $s_1,\ldots,s_n$, and suppose that $\vec z$ is an enumeration of the set $fn(T',\Theta) - \Omega.$ Then $$ Apply^\Omega(T',\Theta) = {\mathbb V}^\Omega(p_1 \cdot \pdist{\nu \vec z(T'|s_1)} + \ldots + p_n \cdot \pdist{\nu \vec z(T'|s_n)}). $$ From Definition~\ref{def:vector-based-results}, in order to have $o(\omega) = 0$, it must be the case that $\nu \vec z(T' | s_j) \ar{\tau} $ for every $j \in \{1,\dots,n\}.$ From the definition of the operational semantics, there are exactly two cases where this might happen: \begin{itemize} \item For some $i$, $s_i \ar{\tau} \Lambda$ for some distribution $\Lambda.$ Let $\Theta' = p_1 \cdot \pdist{s_1} + \ldots + p_i \cdot \Lambda + \ldots + p_n \cdot \pdist{s_n}.$ Then we have $\Theta \ar{\hat \tau} \Theta'$ and $\nu\vec z(T' | \Theta) \ar{\hat \tau} \nu \vec z(T'|\Theta').$ The latter means that $o \in {\mathbb V}^\Omega(\nu \vec z(T'|\Theta'))$ as well.
By Lemma~\ref{lm:trans-size}, we know that $|\Lambda| \prec \{|s_i|\}$, and therefore $|\Theta'| \prec |\Theta|.$ By the induction hypothesis, $$ \Theta \ar{\hat\tau} \Theta' \bstep{\widehat{\bar a y}} \Delta $$ and $o \in Apply^\Omega(T[y/x], \Delta).$ \item For every $i \in \{1,\dots,n\}$, we have $s_i \not \ar{\tau}.$ This can only mean that the $\tau$ transition from $\nu \vec z(T'|s_i)$ derives from a communication between $T'$ and $s_i.$ This means that $s_i\barb{\bar a}$, for every $i \in \{1,\dots,n\}.$ We claim that, in fact, for every $i$, we have $s_i \ar{\bar a y} \Theta_i$, for some $\Theta_i.$ For otherwise, we would have that for some $j$, $\nu \vec z(T' | s_j) \ar{\tau} \nu \vec z(([u = y]\tau.T[y/x] + \omega) ~|~ \Theta_j)$, for some $u$ distinct from $y.$ But this means that only the $\omega$ action is enabled in the test, so all results of ${\mathbb V}^\Omega(\nu \vec z(([u = y]\tau.T[y/x] + \omega) ~|~ \Theta_i))$ in this case would have a non-zero $\omega$ component, which would mean that $o(\omega)$ would be non-zero as well, contradicting the assumption that $o(\omega) = 0$. So, we have $s_i \ar{\bar a y} \Theta_i$ for every $i \in \{1,\dots,n\}.$ Let $\Theta'=p_1 \cdot \Theta_1 + \ldots + p_n \cdot \Theta_n.$ Then we have $\Theta \ar{\bar a y} \Theta'$ and $\nu \vec z(T' ~|~ \Theta) \ar{\tau} \nu \vec z(T'' ~|~ \Theta')$ where $T'' = [y=y]\tau.T[y/x] + \omega$.
The latter transition means that $o \in {\mathbb V}^\Omega(\nu \vec z(T'' ~|~ \Theta')) = Apply^\Omega(T'',\Theta').$ We can therefore apply Claim~\ref{eq:claim} to get: $$ \Theta \ar{\bar a y} \Theta' \bstep{\hat \tau} \Delta $$ and $o \in Apply^\Omega(T[y/x], \Delta).$ \end{itemize} \qed \end{proof} \begin{lemma} \label{lm:test2} If $o\in Apply^\Omega(\sum_{i\in I} \tau.T_i, P)$ then for all $i \in I$ there are $q_i \in [0,1]$ and $\Delta_i$ with $\sum_{i\in I} q_i = 1$ such that $\interp P \bstep {\hat \tau} \sum_{i\in I} q_i \cdot \Delta_i$ and $o=\sum_{i\in I}q_i\cdot o_i$ for some $o_i\in Apply^\Omega(T_i,\Delta_i).$ \end{lemma} \begin{proof} The proof is similar to the proof of Lemma 6.8 in \cite{Deng08LMCS}. \qed \end{proof} The key to the completeness proof is to find a `characteristic test' for every formula $\varphi \in {\cal F}$ with a certain property. The construction of these characteristic tests is given in the following lemma. Note that unlike in the case of pCSP~\cite{Deng08LMCS}, this construction is parameterised by a finite set of names $N$, representing the set of free names of the process or distribution to which the test applies. This parameter is important for the test to be able to detect output of fresh names. \begin{lemma}\label{lm:comp} For every finite set of names $N$ and every $\varphi \in {\cal F}$ such that $fn(\varphi) \subseteq N$, there exists a test $T_{\pr N \varphi}$ and $v_\varphi \in [0,1]^\Omega$, such that \begin{equation} \label{eq:comp-ex1} \Delta \models \varphi \qquad \hbox{ iff } \qquad \exists o\in Apply^\Omega(T_{\pr N \varphi}, \Delta): o\leq v_\varphi \end{equation} for every $\Delta$ with $fn(\Delta) \subseteq N$, and in case $\varphi\in{\cal L}$ we also have \begin{equation} \label{eq:comp-ex2} \Delta \models \varphi \qquad \hbox{ iff } \qquad \exists o\in Apply^\Omega(T_{\pr N \varphi}, \Delta): o\geq v_\varphi.
\end{equation} $T_{\pr N \varphi}$ is called a \emph{characteristic test} of $\varphi$ and $v_\varphi$ its \emph{target value}. \end{lemma} \begin{proof} The characteristic tests and target values are defined by induction on $\varphi$: \begin{itemize} \item $\varphi = \top$: Let $T_{\pr N \varphi} := \omega$ for some $\omega\in\Omega$ and $v_\varphi:=\vec{\omega}$. \item $\varphi = \Ref{X}$ with $X=\{\mu_1,...,\mu_n\}$. Let $T_{\pr N \varphi}:=\mu_1.\omega+...+\mu_n.\omega$ for some $\omega\in\Omega$, and $v_\varphi:=\vec{0}$. \item $\varphi = \ldia{\bar a x} \psi$: Let $T_{\pr N \varphi} :=\omega+ a(y).([y=x]\tau.T_{\pr N \psi}+ \omega)$ for some $y \not \in fn(T_{\pr N \psi})$, where $\omega\in\Omega$ does not occur in $T_{\pr N \psi}$ and $v_\varphi:=v_\psi$. \item $\varphi = \ldia{\bar a (x)}\psi$: Let $z = new(N)$ and $N' = N \cup \{z\}.$ Without loss of generality, we can assume that $x = z$ (since we consider terms equivalent modulo $\alpha$-conversion). Then let $ T_{\pr N \varphi} := \omega+a(x).([x \not = N] \tau.T_{\pr {N'} \psi}+\omega) $, where $\omega\in\Omega$ does not occur in $T_{\pr {N'} \psi}$ and $v_\varphi:=v_\psi$. \item $\varphi = \ldia{a(x)}\psi$: Let $z = new(N)$ and $N' = N \cup \{z\}.$ Let $p_w \in (0,1]$ for $w \in N'$ be chosen arbitrarily such that $\sum_{w \in N'} p_w = 1.$ Then let $$ T_{\pr N \varphi} := \bigoplus_{w \in N'} p_w \cdot (\omega_w+\bar a w.T_{\pr {N'} {\psi[w/x]}}) $$ where $\omega_w$ does not occur in $T_{\pr {N'} {\psi[w/x]}}$ for each $w\in N'$, and $\omega_{w_1}\not=\omega_{w_2}$ if $w_1\not=w_2$. We let $v_\varphi:= \sum_{w\in N'}p_w\cdot v_{\psi[w/x]}$. \item $\varphi = \bigwedge_{i \in I} \varphi_i$ where $I$ is a finite and non-empty index set. Choose an $\Omega$-disjoint family $(T_{\pr N {\varphi_i}}, v_{\varphi_i})_{i\in I}$ of characteristic tests and target values.
Let $p_i \in (0,1]$ for $i \in I$ be chosen arbitrarily such that $\sum_{i \in I} p_i = 1.$ Then let $$T_{\pr N \varphi} := \bigoplus_{i\in I} p_i \cdot T_{\pr N {\varphi_i}}$$ and $v_\varphi:=\sum_{i\in I}p_i\cdot v_{\varphi_i}$. \item $\varphi = \bigoplus_{i\in I} p_i \cdot \varphi_i.$ Choose an $\Omega$-disjoint family $(T_i,v_i)_{i\in I}$ of characteristic tests $T_i$ with target values $v_i$ for each $\varphi_i$, such that there are distinct success actions $\omega_i$ for $i\in I$ that do not occur in any of those tests. Let $T'_i:=T_i\pch{\frac{1}{2}}\omega_i$ and $v'_i:=\frac{1}{2}v_i+\frac{1}{2}\vec{\omega_i}$. Note that for all $i\in I$ also $T'_i$ is a characteristic test of $\varphi_i$ with target value $v'_i$. Let $T_{\pr N \varphi} := \sum_{i\in I} \tau.T'_i$ and $v_\varphi:=\sum_{i\in I}p_i\cdot v'_i$. \end{itemize} We now prove (\ref{eq:comp-ex1}) above by induction on $\varphi$: \begin{itemize} \item $\varphi = \top$: obvious. \item $\varphi = \Ref{X}$. Suppose $\Delta\models\varphi$. Then there is a $\Delta'$ with $\Delta\dar{\hat{\tau}}\Delta'$ and $\Delta'\not\barb{X}$. By Lemma~\ref{lm:test1}(2), $\vec{0}\in Apply^\Omega(T_{\pr N \varphi},\Delta)$. Now suppose $\exists o\in Apply^\Omega(T_{\pr N \varphi},\Delta): o\leq v_\varphi$. This means $o=\vec{0}$, so by Lemma~\ref{lm:test1}(2) there is a $\Delta'$ with $\Delta\dar{\hat{\tau}}\Delta'$ and $\Delta'\not\barb{X}$. Hence $\Delta\models\varphi$. \item $\varphi = \ldia {\bar a x} \phi:$ Suppose $\Delta \models \varphi.$ Then $\Delta \bstep{\bar a x } \Delta'$ and $\Delta' \models \phi.$ By the induction hypothesis, $\exists o\in Apply^\Omega(T_{\pr N \phi}, \Delta'): o\leq v_\phi$. By Lemma~\ref{lm:test1}(3), this means $o\in Apply^\Omega(\omega+a(y).([y=x]\tau.T_{\pr N \phi}+\omega), \Delta)$. Therefore, we have $o\in Apply^\Omega(T_{\pr N \varphi}, \Delta)$ and $o\leq v_\varphi$. Conversely, suppose $\exists o\in Apply^\Omega(T_{\pr N \varphi}, \Delta): o\leq v_\varphi$.
This implies $o(\omega)=0$. By Lemma~\ref{lm:test1}(3), this means $ \Delta \bstep{\bar a x} \Delta' $ and $o\in Apply^\Omega(T_{\pr N \phi}, \Delta')$. By the induction hypothesis, we have $\Delta' \models \phi$, and therefore, by Definition~\ref{def:sat}, $\Delta \models \varphi.$ \item $\varphi = \ldia {\bar a(x)} \phi:$ This is similar to the previous case. The only difference is that the guard $[x \not = N]$ makes sure that it is the bound output transition that is enabled from $\Delta$, so we use Lemma~\ref{lm:test1}(4) in place of Lemma~\ref{lm:test1}(3). \item $\varphi = \ldia {a(x)} \phi:$ Suppose $\Delta \models \varphi.$ Then for every name $w$, there exist $\Delta_1$, $\Delta_2$ and $\Delta'$ such that: \begin{equation} \label{eq:comp1} \Delta \bstep{\hat \tau} \Delta_1 \sstep{a(x)} \Delta_2, \qquad \Delta_2[w/x] \bstep{\hat \tau} \Delta', \qquad \mbox{ and } \Delta' \models \phi[w/x]. \end{equation} In particular, (\ref{eq:comp1}) holds for any $w \in N'$, where $N'=N\cup\{new(N)\}$. By the induction hypothesis, $\exists o_w\in Apply^\Omega(T_{\pr {N'} {\phi[w/x]}}, \Delta'): o_w\leq v_{\phi[w/x]}$, hence by Lemma~\ref{lm:test1}(5), $$o_w\in Apply^\Omega(\omega+\bar a w.T_{\pr {N'} {\phi[w/x]}}, \Delta)$$ for each $w \in N'.$ Then by Lemma~\ref{lm:test1}(6), we have $$o\in Apply^\Omega(T_{\pr {N} \varphi}, \Delta)$$ where $o=\sum_{w\in N'}p_w\cdot o_w ~\leq~ v_\varphi$. Suppose $\exists o\in Apply^\Omega(T_{\pr {N} \varphi}, \Delta): o\leq v_\varphi$.
Then by Lemma~\ref{lm:test1}(6), we have $o=\sum_{w\in N'}p_w\cdot o_w$ for some $o_w$ with $$o_w\in Apply^\Omega(\omega+\bar{a}w.T_{\pr {N'} {\phi[w/x]}}, \Delta)$$ The latter means, by Lemma~\ref{lm:test1}(5), for each $w \in N'$, there are $\Delta_1$, $\Delta_2$ and $\Delta'$ such that \begin{equation} \label{eq:comp2} \Delta \bstep{\hat \tau} \Delta_1 \sstep {a(x)} \Delta_2, \qquad \Delta_2[w/x] \bstep{\hat \tau} \Delta', \end{equation} and \begin{equation}\label{eq:com3} o_w\in Apply^\Omega(T_{\pr{N'} {\phi[w/x]}}, \Delta'). \end{equation} Since $\sum_{w\in N'}p_w\cdot o_w = o \leq v_\varphi = \sum_{w\in N'}p_w\cdot v_{\phi[w/x]}$, we have \begin{equation}\label{eq:com4} o_w \leq v_{\phi[w/x]} \end{equation} for each $w\in N'$. Otherwise, suppose $o_w(\omega) > v_{\phi[w/x]}(\omega)$ for some $\omega\in\Omega$. We would have $o(\omega)=p_w\cdot o_w(\omega) > p_w\cdot v_{\phi[w/x]}(\omega) = v_\varphi(\omega)$, a contradiction to $o\leq v_\varphi$. By (\ref{eq:com3}), (\ref{eq:com4}), and the induction hypothesis, we have \begin{equation} \label{eq:comp3} \Delta' \models \phi[w/x]. \end{equation} To show $\Delta \models \varphi$, we need to show for every $w$, there exist $\Delta_1$, $\Delta_2$ and $\Delta'$ satisfying (\ref{eq:comp2}) and (\ref{eq:comp3}) above. We have shown this for $w \in N'$. For the case where $w \not \in N'$, this is obtained from the case where $x = z$ via the renaming $[w/z]$: Recall that $z \not \in N$, so $z \not \in fn(\Delta_2)$ and $z\not \in fn(\phi)$. Therefore, we have, from (\ref{eq:comp2}) and Lemma~\ref{lm:rename} (2), $$ \Delta_2[z/x][w/z] = \Delta_2[w/x] \bstep {\hat\tau} \Delta'[w/z] $$ and from (\ref{eq:comp3}) and Lemma~\ref{lm:sat-renaming}, we have $\Delta'[w/z] \models \phi[w/x] = \phi[z/x][w/z].$ \item $\varphi = \bigwedge_{i \in I} \varphi_i:$ Suppose $\Delta \models \varphi$.
Then $\Delta \models \varphi_i$ for all $i \in I$, and by the induction hypothesis, there exists $o_i\in Apply^\Omega(T_{\pr N {\varphi_i}}, \Delta)$ with $o_i\leq v_{\varphi_i}$, and by Lemma~\ref{lm:test1}(6) $$\sum_{i\in I}p_i\cdot o_i \in Apply^\Omega(T_{\pr N \varphi}, \Delta)$$ and $\sum_{i\in I}p_i\cdot o_i \leq \sum_{i\in I}p_i\cdot v_{\varphi_i}=v_\varphi$. Suppose $\exists o\in Apply^\Omega(T_{\pr N \varphi}, \Delta): o\leq v_\varphi$. Then by Lemma~\ref{lm:test1}(6), $o=\sum_{i\in I}p_i\cdot o_i$ with $$o_i\in Apply^\Omega(T_{\pr N {\varphi_i}}, \Delta)$$ for each $i\in I$. As in the last case, we see from $\sum_{i\in I}p_i\cdot o_i \leq \sum_{i\in I}p_i\cdot v_{\varphi_i} $ that $o_i\leq v_{\varphi_i}$ for each $i\in I$. By induction, we have $\Delta \models \varphi_i$, therefore, by Definition~\ref{def:sat}, $\Delta \models \varphi.$ \item $\varphi = \bigoplus_{i\in I} p_i \cdot \varphi_i:$ Suppose $\Delta \models \varphi$. Then $\Delta \bstep{\hat \tau} \sum_{i\in I} p_i \cdot \Delta_i$ and $\Delta_i \models \varphi_i.$ By the induction hypothesis, $$ \exists o_i\in Apply^\Omega(T_i, \Delta_i): o_i\leq v_i. $$ Hence, there are $o'_i\in Apply^\Omega(T'_i, \Delta_i)$ with $o'_i\leq v'_i$. Thus by Lemma~\ref{lm:test1}(7), $o:=\sum_{i\in I}p_i\cdot o'_i \in Apply^\Omega(T_{\pr N \varphi}, \Delta)$, and $o\leq v_\varphi$. Conversely, suppose $\exists o\in Apply^\Omega(T_{\pr N \varphi}, \Delta): o\leq v_\varphi$. Then by Lemma~\ref{lm:test2}, there are $q_i$ and $\Delta_i$, for all $i \in I$, such that $\sum_{i\in I} q_i = 1$ and $\Delta \bstep{\hat \tau} \sum_{i\in I} q_i \cdot \Delta_i$ and $o=\sum_{i\in I}q_i\cdot o'_i$ for some $o'_i\in Apply^\Omega(T'_i, \Delta_i)$. Now $o'_i(\omega_i)=v'_i(\omega_i)=\frac{1}{2}$ for each $i\in I$. Using that $(T_i)_{i\in I}$ is an $\Omega$-disjoint family of tests, $\frac{1}{2}q_i = q_i o'_i(\omega_i) = o(\omega_i) \leq v_\varphi(\omega_i)=p_i v'_i(\omega_i)=\frac{1}{2}p_i$. As $\sum_{i\in I}q_i = \sum_{i\in I}p_i =1$, it must be that $q_i=p_i$ for all $i\in I$.
Exactly as in the previous case we obtain $o'_i\leq v'_i$ for all $i\in I$. Given that $T'_i=T_i \pch{\frac{1}{2}}\omega_i$, using Lemma~\ref{lm:test1}(6), it must be that $o'_i=\frac{1}{2}o_i+\frac{1}{2}\vec{\omega_i}$ for some $o_i\in Apply^\Omega(T_i,\Delta_i)$ with $o_i\leq v_i$. By induction, $\Delta_i \models \varphi_i$ for all $i\in I$. Therefore, by Definition~\ref{def:sat}, $\Delta \models \varphi.$ \end{itemize} In case $\varphi\in{\cal L}$, the formula cannot be of the form $\Ref{X}$. Then it is easy to show that $\sum_{\omega\in\Omega}v_\varphi(\omega)=1$ and for all $\Delta$ and $o\in Apply^\Omega(T_{\pr N \varphi},\Delta)$ we have $\sum_{\omega\in\Omega}o(\omega)=1$. Therefore, $o\leq v_\varphi$ iff $o\geq v_\varphi$ iff $o=v_\varphi$, yielding (\ref{eq:comp-ex2}). \qed \end{proof} Completeness of $\sqsubseteq_{pmay}^\Omega$ and $\sqsubseteq_{pmust}^\Omega$, and hence also $\sqsubseteq_{pmay}$ and $\sqsubseteq_{pmust}$ by Theorem~\ref{thm:modal-sim} and Theorem~\ref{thm:multi-uni}, follows from Lemma~\ref{lm:comp}. \begin{theorem}\label{thm:multi-logic} \begin{enumerate} \item If $P \sqsubseteq_{pmay}^\Omega Q$ then $P \sqsubseteq_\Lcal Q.$ \item If $P \sqsubseteq_{pmust}^\Omega Q$ then $P \sqsubseteq_\Fcal Q.$ \end{enumerate} \end{theorem} \begin{proof} Suppose $P \sqsubseteq_{pmay}^\Omega Q$ and $\interp P \models \psi$ for some $\psi \in {\cal L}.$ Let $N = fn(P, \psi)$ and let $T_{\pr N \psi}$ be a characteristic test of $\psi$ with target value $v_\psi$. Then by Lemma~\ref{lm:comp}, we have $$\exists o\in Apply^\Omega(T_{\pr N \psi}, \interp P): o\geq v_\psi.$$ But since $P \sqsubseteq_{pmay}^\Omega Q$, this means $\exists o'\in Apply^\Omega(T_{\pr N \psi}, \interp Q): o\leq o'$, and thus $o'\geq v_\psi$. So again, by Lemma~\ref{lm:comp}, we have $\interp Q \models \psi$. The case for must preorder is similar, using the Smyth preorder.
\qed \end{proof} \begin{theorem} \begin{enumerate} \item If $P \sqsubseteq_{pmay} Q$ then $P \sqsubseteq_S Q.$ \item If $P \sqsubseteq_{pmust} Q$ then $P \sqsubseteq_{FS} Q.$ \end{enumerate} \end{theorem} \section{Related and future work} There have been a number of previous works on probabilistic extensions of the $\pi$-calculus by Palamidessi et al.~\cite{HerescuP00,Chatzikokolakis07,Norman09}. One distinction between our formulation and that of Palamidessi et al.\ is that we consider an interpretation of probabilistic summation as a distribution over state-based processes, whereas in those works, a process like $s {\pch p} t$ is considered as a proper process, which can evolve into the distribution $p\cdot \pdist s + (1-p) \cdot \pdist t$ via an internal transition. We could encode this behaviour by simply prefixing with $\tau$. It would be interesting to see whether similar characterisations could be obtained for this restricted calculus. As far as we know, there are no existing works in the literature that give characterisations of the may- and must-testing preorders for the probabilistic $\pi$-calculus. We structure our completeness proofs for the simulation preorders along the lines of the proofs of similar characterisations of simulation preorders for pCSP~\cite{Deng07ENTCS,Deng08LMCS}. The name-passing feature of the $\pi$-calculus, however, gives rise to several complications not encountered in pCSP, and requires new techniques to deal with them. In particular, due to the possibility of scope extrusion and close communication, the congruence properties of (failure) simulation are proved using an adaptation of the up-to techniques~\cite{Sangiorgi98MSCS}. The immediate future work is to consider replication/recursion. There is a well-known problem with handling possible divergence; some ideas developed in \cite{Deng09CONCUR,Boreale95IC} might be useful for studying the semantics of $\pi_p$ as well.
\paragraph{Acknowledgment} The second author is supported by the Australian Research Council Discovery Project DP110103173. Part of this work was done when the second author was visiting NICTA Kensington Lab in 2009; he would like to thank NICTA for the support he received during his visit. \end{document}
\begin{document} \begin{center} {\large {\bf ON THE QUOTIENT-LIFT MATROID RELATION}} \\[5ex] {\bf Jos\'e F. De Jes\'us (University of Puerto Rico, San Juan, Puerto Rico, U.S.A.) } \\[4ex] {\bf Alexander Kelmans (University of Puerto Rico, San Juan, Puerto Rico, U.S.A.) } \end{center} \date{} \vskip 3ex \begin{abstract} It is well known that a matroid $L$ is a lift of a matroid $M$ if and only if every circuit of $L$ is the union of some circuits of $M$. In this paper we give a simpler proof of this important theorem. We also prove a discrete homotopy theorem on two matroids of different ranks on the same ground set. \vskip 2ex {\bf Key words}: matroid, circuit, quotient, lift. \vskip 1ex {\bf MSC Subject Classification}: 05B35 \end{abstract} \section{Introduction} \label{Intro} \indent It is well known that a matroid $L$ is a lift of a matroid $M$ if and only if every circuit of $L$ is the union of some circuits of $M$ \cite{Ox}. The proof of this characterization given in the classic book ``Matroid Theory'' by James Oxley depends heavily on mathematical induction, assumes the elementary quotient construction, and is based on the notion of modular cuts of flats. In this paper we give a simpler (constructive) proof of this important theorem. Our proof is based only on the simple notions of cyclic sets and matroid cyclomaticity. We also prove a discrete homotopy theorem on two matroids $M_1$ and $M_2$ of different ranks $r_1 < r_2$ on the same ground set $E$, saying that $M_2$ can be obtained from $M_1$ by a series of $r_2 - r_1$ one-element extensions. \section{Preliminaries} All notions and basic facts on matroids that are used here can be found in \cite{Ox,Wlsh}. In this section we remind the reader of the main matroid notions and facts we need for our proof. Given a family ${\cal F}$ of subsets of a set $E$ (i.e.
${\cal F}\subseteq 2^E$), let ${\cal M}in \hskip 0.5ex {\cal F}$ and ${\cal M}ax \hskip 0.5ex {\cal F}$ denote the family of elements of ${\cal F}$ which are minimal and maximal by the set inclusion, respectively. A family ${\cal P}$ of subsets of $E$ is called a {\em clutter} if $P, Q \in {\cal P}~and ~ P \subseteq Q \Rightarrow P = Q$, and so ${\cal M}in \hskip 0.5ex {\cal F}$ and ${\cal M}ax \hskip 0.5ex {\cal F}$ are clutters. \vskip 1.0ex Let $E$ be a finite non-empty set and $\cal{I}$ a family of subsets of $E$, i.e. ${\cal I} \subseteq 2^E$. A pair $M = (E, {\cal I})$ is called a {\em matroid} if \vskip 0.7ex \noindent $(AI0)$ $\emptyset \in {\cal I}$, \vskip 0.7ex \noindent $(AI1)$ if $X \in {\cal I}$ and $Z \subseteq X$, then $Z \in {\cal I}$, and \vskip 0.7ex \noindent $(AI2)$ if $X,Y \in {\cal I}$ and $|X| < |Y|$, then there exists $y \in Y\setminus X$ such that $X + y \in {\cal I}$. \vskip 1.0ex The set $E$ is called the {\em ground set of} $M$ and an element of ${\cal I}$ is called an {\em independent set} of $M$. The family ${\cal I} = {\cal I}(M)$ is the family of {\em independent sets} of $M$, ${\cal D} = {\cal D}(M) = 2^E \setminus \cal{I}$ is the family of {\em dependent sets} of $M$, ${\cal B} = {\cal B}(M)= {\cal M}ax \hskip 0.5ex \cal{I}$ is the family of {\em bases} of $M$, and ${\cal C} = {\cal C}(M) ={\cal M}in \hskip 0.5ex \cal{D}$ is the family of {\em circuits} of $M$. Let ${\cal B}^* = {\cal B^*}(M) = \{E \setminus B: B \in {\cal B}(M) \}$. It is easy to see that ${\cal B}^*$ is the set of bases of a matroid (denoted by $M^*$) on the ground set $E$. Matroids $M$ and $M^*$ are called {\em dual matroids}. \vskip 1ex Given a matroid $M = (E, {\cal I})$ and $Z \subset E$, let $E' = E \setminus Z$ and ${\cal I}' = \{I \in {\cal I}: I \subseteq E' \}$. Then, obviously, $R= (E', {\cal I}')$ is a matroid. Put $R = M \setminus Z$.
We say that $(d)$ {\em $R = M \setminus Z$ is obtained from $M$ by deleting set $Z$} and $(c)$ {\em the matroid $M / Z = (M^* \setminus Z)^*$ is obtained from $M$ by codeleting $($or contracting$)$ set $Z$}. \vskip 1ex Let $\rho(M) = |B|$, where $B \in {\cal B} (M)$. \vskip 1.5ex Obviously, the family ${\cal C}= {\cal C}(M)$ of circuits of a matroid $M$ has the following properties: \vskip 0.7ex \noindent $(AC0)$ $\emptyset \not \in {\cal C}$ and \vskip 0.7ex \noindent $(AC1)$ $C_1, C_2 \in {\cal C}~and ~ C_1\subseteq C_2 \Rightarrow C_1 = C_2$, and so ${\cal C}$ is a clutter. Moreover, ${\cal C}$ is the set of circuits of a matroid if and only if ${\cal C}$ satisfies axioms $(AC0)$, $(AC1)$, and the following axiom \vskip 0.7ex \noindent $(AC2)$ if $C_1, C_2 \in {\cal C}$, $C_1\ne C_2$, and $e \in C_1\cap C_2$, then there exists $C \in {\cal C}$ such that $C \subseteq C_1\cup C_2 - e$. \vskip 0.7ex \noindent Axiom $(AC2)$ is called the {\em circuit elimination axiom} of a matroid (CEA, for short). \vskip 1ex It turns out that the set ${\cal C}$ of circuits of a matroid also satisfies the following {\em strong circuit elimination axiom} of a matroid (SCEA, for short): \vskip 0.7ex \noindent $(AC2!)$ if $C_1, C_2 \in {\cal C}$, $C_1\ne C_2$, $e \in C_1\cap C_2$ and $d \in C_1\setminus C_2$, then there exists $C \in {\cal C}$ such that $d \in C \subseteq C_1\cup C_2 - e$. \vskip 1ex Consider an independent set $I$ of a matroid $M = (E, {\cal I})$ and an element $x \in E \setminus I$. If $I + x$ is not independent, then by $(AC2)$ there exists a unique circuit $C$ of $M$ such that $C \subseteq I + x$ and, obviously, $x \in C$. We call $C$ the {\em $x$-fundamental circuit of $M$} ({\em with respect to $I$}) and denote it $C(x,I)$. \vskip 1ex We call a subset $A$ of $E$ a {\em cyclic set} of $M$ if $A$ is the union of some circuits of $M$. \vskip 1ex Let $E$ and $X$ be non-empty finite disjoint sets. Let $N$ be a matroid on the ground set $E \cup X$ and ${\cal C}_N$ the set of circuits of $N$.
As above, $N \setminus X$ is the matroid on $E$ obtained from $N$ by {\em deleting} set $X$ and $N / X$ is the matroid on $E$ obtained from $N$ by {\em contracting} set $X$. Obviously, the circuits of $N \setminus X$ are the members of ${\cal C}_N$ that are contained in $E$. Given two sets $Y$ and $Z$, we call $Y \cap Z$ the {\em trace of $Z$ in $Y$} and also the {\em trace of $Y$ in $Z$}. Obviously, $R$ is a circuit of $M = N / X$ if and only if $R$ is a minimal non-empty trace of a circuit of $N$ in $E$. \section{Main results} Let $M$ be a matroid on a finite set $E$. \begin{definition} \label{cdpair} Let $M$ and $L$ be matroids on $E$. We call $(M, L)$ an {\em $X$-codeletion-deletion pair} (or simply, an {\em $X$-$(c,d)$-pair} or just a {\em $(c,d)$-pair}) if there exists a finite non-empty set $X$ disjoint from $E$ and a matroid $N$ on $E \cup X$ such that $M = N / X$ and $L = N \setminus X$. In this case $M$ is also called a {\em quotient of} $L$ and $L$ is called a {\em lift of} $M$, and so we can also call $(M, L)$ a {\em quotient-lift pair}. \end{definition} \begin{Theorem} \label{liftproperty} Let $M$ and $L$ be matroids on $E$. Suppose that $(M, L)$ is an $X$-$(c,d)$-pair for some set $X$. Then every circuit of $L$ is the union of some circuits of $M$. \end{Theorem} \noindent{\bf Proof} \ Since $(M, L)$ is an $X$-$(c,d)$-pair, we have: $X \cap E = \emptyset $ and there exists a matroid $N$ on $X \cup E$ such that $N / X = M$ and $N \setminus X = L$. Also ${\cal C}(L)\subseteq {\cal C}(N)$. Let $D \in {\cal C}(L)$. If $D \in {\cal C}(M)$, then we are done. So suppose that $D \not \in {\cal C}(M)$. Since $(M, L)$ is an $X$-$(c,d)$-pair, there exists $C \in {\cal C}(M)$ such that $C \subset D$ and $C$ is a minimal trace of a circuit of $N$, say $Q$, in $E$. We need to prove that every element $d$ of $D$ is in some minimal trace of a circuit of $N$ in $E$ which is a subset of $D$. If $d \in C$, we are done. So suppose $d \not \in C$.
Note that $Q \cap X \ne \emptyset$ for otherwise $C \in {\cal C}(L)$, a contradiction. Since $C \ne \emptyset$ and $C \subseteq Q \cap D$, there exists $a \in Q \cap D$. By (SCEA), applied to circuits $Q$ and $D$ in $N$ with $a \in Q \cap D$ and $d \in D \setminus Q$, there exists a circuit $S$ of $N$ such that $d \in S \subseteq (Q \cup D) - a$, and so $d$ is in the trace $S'$ of $S$ in $E$, which is a subset of $D$. Let ${\cal T}'$ be the (non-empty) set of all traces of circuits of $N$ in $E$ containing $d$ which are subsets of $D$. Let $T'$ be a minimal set in ${\cal T}'$ and let $T$ be a circuit of $N$ whose trace in $E$ is $T'$. If $T'$ is a minimal trace of a circuit of $N$ in $E$, then we are done. So suppose that $T'$ is not a minimal trace of a circuit of $N$ in $E$. Then there is a circuit $Z$ of $N$ with trace $Z'$ in $E$ and $z \in Z'$ such that $Z' \subseteq T' - d \subseteq D$ (note that $d \notin Z'$, for otherwise $Z'$ would be a member of ${\cal T}'$ properly contained in $T'$). Now by (SCEA), applied to circuits $Z$ and $T$ in $N$ with $z \in Z \cap T$ and $d \in T \setminus Z$, there exists a circuit $P$ of $N$ such that $d \in P \subseteq (Z \cup T) - z$. Let $P'$ be the trace of $P$ in $E$. Then $d \in P' \subseteq (Z' \cup T') - z \subseteq T' - z$. Thus, $P'$ is a member of ${\cal T}'$ properly contained in $T'$, contradicting the minimality of $T'$ in ${\cal T}'$. $\Box$ \vskip 1.5ex Below we will show that the converse of Theorem \ref{liftproperty} is also true. We need some more definitions and preliminary facts. \begin{definition} $(${\sc Fundamental $s$-family of circuits in a matroid}$)$ \label{FundList} \\ Let ${\cal F} \subseteq {\cal C}(M)$ and $I \in {\cal I}(M)$. We call ${\cal F}$ a {\em fundamental $s$-family of circuits} of $M$ {\em with respect to $I$} if there exists $S \subseteq E \setminus I$ such that ${\cal F} = \{C(x,I): x \in S\}$ and $|S| = s$. \end{definition} \begin{definition} \label{cyclomaticity} Let $A \subseteq E$ and $I$ be a maximal independent subset of $A$. We call $c(A) = | A \setminus I|$ {\em the cyclomaticity of} $A$ (also known as {\em the cyclomatic number} or {\em the nullity of} $A$).
\end{definition} We will need the following well-known fact due to J. Edmonds. \begin{lemma} $(${\sc Spanning property of a fundamental family of circuits in a matroid}$)$ \label{spanprop} Suppose that $A \subseteq E$, $A$ is a dependent set of $M$, $I$ is a maximal independent set of $A$ in $M$, and $c = c(A) = |A \setminus I|$. Then $~~~~~~~~~~~\cup \{C \in {\cal C}(M): C \subseteq A\} = \cup \{C(x,I): x \in A \setminus I\}$ \\ and $\{C(x,I): x \in A \setminus I\} $ is a fundamental $c$-family of circuits of $M$ with respect to $I$. \end{lemma} \begin{claim} \label{A,Q} Suppose that $A$ is a cyclic set of $M$, $I$ is a maximal independent subset of $A$, ${\cal F} = \{C(a, I): a \in A \setminus I\}$ is a fundamental $c$-family of circuits $ C(a, I)$ of $M$ such that $\cup \{F \in {\cal F}\} = A$, and (as above) $|A \setminus I| = c(A) = c$. Then for every circuit $Q$ of $M$ which is not a subset of $A$ there exists a circuit $Q'$ of $M$ such that $Q'$ is a subset of $A \cup Q$ and ${\cal F}' = {\cal F} \cup \{Q'\}$ is a fundamental $(c +1)$-family of circuits of $M$. \end{claim} \noindent{\bf Proof} \ Let $A' = A \cup Q$ and $I'$ be a maximal independent set in $ A'$ such that $I \subseteq I'$. Then every $C(a, I)$ is also a fundamental circuit (rooted at $a$) with respect to the independent set $I'$. Obviously, $Q \setminus I' \ne \emptyset $, say $q \in Q \setminus I'$, and $q \not \in A \setminus I$. Then $I' + q$ has a unique circuit $Q' = C(q,I')$ of $M$ containing $q$ and $Q' \subseteq A \cup Q$. Thus, ${\cal F}' = {\cal F} \cup \{Q'\}$ is a fundamental $(c +1)$-family of circuits of $M$. $\Box$ \vskip 1.5ex From Claim \ref{A,Q} we have: \begin{lemma} $(${\sc Extension property of cyclic sets in a matroid}$)$ \label{cycl-sets-extnsion} \\ Let $A_1$ and $A_2$ be distinct cyclic sets of $M$ such that $A_1 \not \subseteq A_2$ and let $c(A_1) = c$. Then there exists a cyclic set $A$ of $M$ such that $A \subseteq A_1 \cup A_2$ and $c(A) = c+1$.
\end{lemma} Using Lemma \ref{spanprop}, it is easy to prove the following \begin{claim} \label{A,B,D} Let $A$ be a cyclic set of $M$, $c(A) = c$, $D \in {\cal C}(M)$, and $d \in D \subseteq A$. Then there exists a base $B$ of $M$ and a fundamental $c$-family ${\cal F}$ of circuits of $M$ $($with respect to $B$$)$ such that $D \in {\cal F}$, $\cup\{F \in {\cal F}\} = A$, and $d \notin B$. \end{claim} \noindent{\bf Proof} \ Obviously, $D - d$ is an independent set of $M$. Let $I$ be a maximal independent set in $A$ such that $D - d \subseteq I$. By Lemma \ref{spanprop}, $~~~~~~~~\cup \{C \in {\cal C}(M): C \subseteq A\} = \cup \{C(x,I): x \in A \setminus I\}$. \\ Since $D - d \subseteq I$, clearly $C(d,I) = D$. Let $B$ be a base of $M$ containing $I$ as a subset and ${\cal F} = \{C(x,I): x \in A \setminus I\}$. Then ${\cal F}$ is a fundamental $c$-family of circuits of $M$ $($with respect to $B$$)$, $D\in {\cal F}$, $\cup \{F \in {\cal F}\} = A$, and $d \notin B$. $\Box$ \begin{claim} \label{cycl-sets-elimination1} Let $A$ be a cyclic set of $M$ with $c(A) = c$ and $Q$ be a circuit of $M$ which is not a subset of $A$. Then for every $a \in A \cap Q$ there exists a cyclic set $A'$ of $M$ such that $a \notin A' \subseteq A \cup Q$ and $c(A') = c$. \end{claim} \noindent{\bf Proof} \ By Lemma \ref{spanprop}, $A = \cup \{C \in {\cal C}(M): C \subseteq A\} = \cup \{C(x,I): x \in A \setminus I\}$, where $c = |A \setminus I|$. Let $I'$ be a maximal independent set in $ A \cup Q$ such that $I \subseteq I'$. Then for every $q \in Q \setminus I'$ there is a unique circuit $Z$ such that $Z$ is a fundamental circuit $C(q, I')$ with respect to $I'$ rooted at $q$. First, suppose that $a \in A \setminus I$. Since $C(x,I) \subseteq I + x$ for every $x \in A \setminus I$, the element $a$ belongs to no fundamental circuit $C(x,I)$ with $x \ne a$. Let $A' = \bigl(\cup \{C(x,I): x \in (A \setminus I) - a\}\bigr) \cup C(q,I')$. Then $A'$ is a required set. Now suppose that $a \in I$. Then $a \in C(z,I) - z$ for some $z \in A\setminus I$. Also by (SCEA) in $M$, there exists a circuit $Q'$ of $M$ such that $a \notin Q'$ and $q \in Q' \subseteq Q \cup C(z,I)$.
Let $A' = (A \setminus C(z,I)) \cup Q'$. Then again $A'$ is a required set. $\Box$ \vskip 1.5ex From Claim \ref{cycl-sets-elimination1} we have: \begin{lemma} $(${\sc Elimination property of cyclic sets in a matroid}$)$ \label{cycl-sets-elimination2} \\ Let $A_1$ and $A_2$ be distinct cyclic sets of $M$ and let $c(A_1) = c$. Then for every $a \in A_1 \cap A_2$ there exists a cyclic set $A$ of $M$ such that $a \notin A \subseteq A_1 \cup A_2$ and $c(A) = c$. \end{lemma} \begin{claim} \label{r(L)>r(M)} Let $M$ and $L$ be distinct matroids on $E$. Suppose that $(M, L)$ is an $X$-$(c,d)$-pair for some set $X$. Then $\rho(L) > \rho(M)$. \end{claim} \noindent{\bf Proof} \ By Theorem \ref{liftproperty}, every circuit of $L$ is the union of some circuits of $M$. Therefore every dependent set of $L$ is also a dependent set of $M$ or, equivalently, every independent set of $M$ is an independent set of $L$. In particular, every base of $M$ is an independent set of $L$. Therefore $\rho(L) \ge \rho(M)$. Since $M \ne L$, clearly $B \in {\cal B}(M) \Rightarrow B \in {\cal I}(L) \setminus {\cal B}(L)$. Thus, $\rho(L) > \rho(M)$. $\Box$ \vskip 1.5ex We need the following \begin{lemma} \label{bigcyclo} Let $M$ and $L$ be matroids on $E$ and $\rho(L) = \rho(M) + s$, where $s \in \mathbb{N}$. Suppose that every circuit of $L$ is the union of some circuits of $M$. If $A$ is a cyclic set of $M$ with $c(A) = s+1$, then $A$ is not an independent set of $L$. \end{lemma} \noindent{\bf Proof} \ Suppose, on the contrary, that $A$ is an independent set of $L$. Since $A$ is a cyclic set of $M$ with $c(A) = s+1$, there exists a subset $R$ of $A$ with $s+1$ elements such that $I = A \setminus R$ is a maximal independent set of $A$ in $M$. Let $B$ be a base of $M$ such that $I \subseteq B$. Note that $|R \cup B| = \rho(M) + s + 1 > \rho(L)$. Thus, $R \cup B$ is a dependent set of $L$, i.e. there exists $D \in {\cal C}(L)$ such that $D \subseteq R \cup B$.
Since $A$ is an independent set of $L$, clearly $D \not \subseteq A$. Let $d \in D \setminus A$. Since $D$ is the union of some circuits of $M$, there exists $D' \in {\cal C}(M)$ such that $d \in D' \subseteq D \subseteq R \cup B$. Now $\cup \{C(x,I): x \in A \setminus I\} = \cup \{C(x,I): x \in R\} = \cup \{C(x,B): x \in R\}$. By Lemma \ref{spanprop}, $ \cup \{C(x,B): x \in R\} = \cup \{C \in {\cal C}(M): C \subseteq B \cup R \}$. It follows that $D' \subseteq \cup \{C(x,I): x \in A \setminus I\} = A$, and so $d \in D' \subseteq A$. However, $d \in D \setminus A$, a contradiction. $\Box$ \vskip 1.5ex Now we are ready to prove the converse of Theorem \ref{liftproperty}. By Claim \ref{r(L)>r(M)}, if $(M, L)$ is an $X$-$(c,d)$-pair of matroids for some set $X$, then $\rho(L) - \rho(M) = s \in \mathbb{N}$. Therefore in the converse of Theorem \ref{liftproperty} we can assume that $\rho(L) - \rho(M) = s \in \mathbb{N}$. \begin{Theorem} \label{liftcriterion} Let $M$ and $L$ be matroids on $E$ and $\rho(L) = \rho(M) + s$, where $s \in \mathbb{N}$. Suppose that every circuit of $L$ is the union of some circuits of $M$. Then $(M, L)$ is an $X$-$(c,d)$-pair for some set $X$ with $s$ elements. \end{Theorem} \noindent{\bf Proof} \ Let $X$ be a set with $s$ elements. By definition, a pair $(M, L)$ is an $X$-$(c,d)$-pair if and only if there exists a matroid $N$ on $E \cup X$ such that $N / X = M$ and $N \setminus X = L$. Let \vskip 0.5ex $~~~~~~~~~{\cal X} = \{A \cup Z : A \in {\cal CS}(M) \cap {\cal I}(L), \emptyset \ne Z \subseteq X, ~and~c(A) + |Z| = s + 1\}$, \vskip 0.5ex \noindent where ${\cal CS}(M)$ denotes the family of cyclic sets of $M$. We prove that ${\cal C}(L) \cup {\cal X}$ satisfies the elimination axiom of the set of circuits of a matroid, say $N$, on $E \cup X$, i.e. that ${\cal C}(L) \cup {\cal X} = {\cal C}(N)$. Let $C_1, C_2 \in {\cal C}(N)$, $C_1 \ne C_2$, and $a \in C_1 \cap C_2$. \vskip 1.5ex \noindent ${\bf (p1)}$ Suppose that $C_1,C_2 \in {\cal C}(L)$. Then, obviously, our claim is true.
\vskip 1.5ex \noindent ${\bf (p2)}$ Suppose that $C_1\in {\cal C}(L)$ and $C_2 \in {\cal X}$. Then $C_2 = A \cup Z$, where $A \in {\cal CS}(M) \cap {\cal I}(L), \emptyset \ne Z \subseteq X, ~and~c(A) + |Z| = s + 1$. \\ Since $a \in C_1 \subseteq E $, $Z \subseteq X$, and $X \cap E = \emptyset$, clearly $a \notin Z$, and so $a \in A$. Both $A$ and $C_1$ are cyclic sets of $M$. By Lemma \ref{cycl-sets-elimination2} with $A_1=A$ and $A_2=C_1$, there exists a cyclic set $A'$ of $M$ such that $a \notin A' \subseteq C_1 \cup A \subseteq C_1 \cup C_2$ and $c(A') = c(A)$. If $A'$ contains no circuit of $L$, then $A' \cup Z \in {\cal X}$. Since $a \notin A' \cup Z \subseteq C_1 \cup C_2$, we are done. If $A'$ contains a circuit $C'$ of $L$, then $a \notin C' \subseteq A' \subseteq C_1 \cup C_2$, and we are also done. \vskip 1.5ex \noindent ${\bf (p3)}$ Suppose that $C_1,C_2 \in {\cal X}$, namely, $C_1= A_1 \cup Z_1 \in {\cal X}$ and $C_2= A_2 \cup Z_2 \in {\cal X}$. \vskip 1.5ex ${\bf (p3.1)}$ Suppose that $Z_1 = Z_2 = \{a\}$. Then $A_1 \ne A_2$ and $c(A_1) = c(A_2) = s$. Since $A_1$ and $A_2$ are distinct cyclic sets of $M$, by Lemma \ref{cycl-sets-extnsion}, there exists a cyclic set $A$ of $M$ such that $A \subseteq A_1 \cup A_2$ and $c(A) = c(A_1)+1 = s + 1$. Now by Lemma \ref{bigcyclo}, $A$ contains a circuit, say $Q$, of $L$ as a subset, and we are done because $Q \subseteq A_1 \cup A_2$ and $a \not \in A_1 \cup A_2$. \vskip 1.5ex ${\bf (p3.2)}$ Now suppose that at least one of the $Z_i$'s, say $Z_1$, has an element distinct from $a$. We recall that $(A_1 \cup A_2) \cap (Z_1 \cup Z_2) = \emptyset$. Thus, either $a \in Z_1 \cap Z_2$ or $a \in A_1 \cap A_2$. First, suppose that $a \in Z_1 \cap Z_2$. Since $A_1$ and $A_2$ are distinct cyclic sets of $M$, by Lemma \ref{cycl-sets-extnsion}, there exists a cyclic set $A$ of $M$ such that $A \subseteq A_1 \cup A_2$ and $c(A) = c(A_1)+1$. If $A$ contains a circuit of $L$, we are done.
If $c(A) = s + 1$, then we are done by Lemma \ref{bigcyclo}. If $c(A) < s +1$, then $c(A_1) < s$, and therefore $|Z_1| > 1$. If $A$ contains no circuit of $L$, then $(A \cup Z_1 \setminus a ) \in {\cal X}$, and we are also done. Finally, suppose that $a \in A_1 \cap A_2$. Then by Lemma \ref{cycl-sets-elimination2}, there exists a cyclic set $A$ of $M$ such that $a \notin A \subseteq A_1 \cup A_2$ and $c(A) = c(A_1)$. If $A$ contains a circuit of $L$ as a subset, then we are done. If $A$ contains no circuit of $L$ as a subset, then $A \cup Z_1 \in {\cal X}$ and again we are done. $\Box$ \vskip 1.5ex Using Theorems \ref{liftproperty} and \ref{liftcriterion} it is easy to prove the following useful fact. \begin{lemma} {\sc (Transitivity of (c,d)-pair relation between matroids)} \label{(c,d)-pair-transitivity} \\ Let $K$, $L$, and $M$ be matroids on $E$. If $(M,L)$ is an $X$-$($c,d$)$-pair, $(L,K)$ is a $Y$-$($c,d$)$-pair, and $X \cap Y = \emptyset $, then $(M,K)$ is an $X \cup Y$-$($c,d$)$-pair. \end{lemma} \noindent{\bf Proof} \ By Theorem \ref{liftproperty}, every circuit of $L$ is the union of some circuits of $M$ and every circuit of $K$ is the union of some circuits of $L$. Therefore every circuit of $K$ is the union of some circuits of $M$. Thus, by Theorem \ref{liftcriterion}, $(M, K)$ is a $Z$-$(c,d)$-pair for some set $Z$. Since $(M,L)$ is an $X$-$(c,d)$-pair and $(L,K)$ is a $Y$-$(c,d)$-pair with $X \cap Y = \emptyset$, we have $|Z| = \rho(K) - \rho(M) = (\rho(K) - \rho(L)) + (\rho(L) - \rho(M)) = |Y| + |X|$, and so we may take $Z = X \cup Y$; thus $(M,K)$ is an $X \cup Y$-$(c,d)$-pair. $\Box$ \vskip 1.5ex Here is another criterion for an $X$-{\em $(c,d)$}-pair of matroids. \begin{Theorem} \label{delete-contract-pair} Let $M$ and $L$ be matroids on $E$, $X = \{x_1, \ldots , x_k\}$, and $E \cap X = \emptyset $.
Then the following are equivalent: \vskip 1ex \noindent $(c1)$ $(M, L)$ is an $X$-$(c,d) $-pair and \vskip 1ex \noindent $(c2)$ there exists a sequence $(L_0, L_1, \ldots , L_k)$ such that $L_0 = M$, $L_k = L$, each $L_i$ is a matroid on $E$, and each $(L_{i-1}, L_i)$ is an $x_i$-$(c,d) $-pair, where $1\le i \le k$. \end{Theorem} \noindent{\bf Proof} \ Claim $(c2) \Rightarrow (c1)$ can be easily proved by induction on $|X| = k$ using Lemma \ref{(c,d)-pair-transitivity}. Now we prove Claim $(c1) \Rightarrow (c2)$ by induction on $|X| = k$. If $|X| = 1$, then Claim $(c1) \Rightarrow (c2)$ is obviously true. Suppose that Claim $(c1) \Rightarrow (c2)$ is true for $|X| = k-1$. We need to prove that Claim $(c1) \Rightarrow (c2)$ is also true for $|X| = k$. Let $X' = X - x_k$, and so $|X'| = k-1$. By $(c1)$, there exists a matroid $N$ on $E \cup X$ with $X = \{x_1, \ldots , x_k\}$ such that $M = N/ X = (N / x_k) / X'$ and $L = N \setminus X = (N \setminus x_k) \setminus X'$. Let $N' = N / x_k$. Then $N'$ is a matroid on $E \cup X'$ with $X' =\{x_1, \ldots , x_{k-1}\}$. Let $L' = N' \setminus X' = (N / x_k) \setminus X'$. Obviously, $(M, L')$ is an $X'$-$(c,d) $-pair. Put $M = L_0$ and $L' = L_{k-1}$. By the induction hypothesis, $(c2)$ holds for pair $(M, L')$, namely, there exists a sequence $(L_0, L_1, \ldots , L_{k-1})$ such that $L_0 = M$, $L_{k-1} = L'$, each $L_i$ is a matroid on $E$, and each $(L_{i-1}, L_i)$ is an $x_i$-$(c,d) $-pair, where $1\le i \le k-1$. Then \vskip 1ex $ ~~~~~~~~~~~~~ (N \setminus X')/ x_k = (N / x_k) \setminus X' = L' = L_{k-1}$. \vskip 1ex \noindent Put $L_k = L$. Then $L_k = (N \setminus x_k) \setminus X' = (N \setminus X')\setminus x_k$. Thus, $(L_{k-1}, L_k)$ is an $x_k$-$(c,d)$-pair, and so $(c2)$ also holds for $|X| = k$. 
$\Box$ \begin{remark} $(${\sc Construction of intermediate matroids in homotopy}$)$ \label{c1-c2} Claim $(c1) \Rightarrow (c2)$ in Theorem \ref{delete-contract-pair} can also be proved by putting \vskip 1ex ${\cal C}(L_i) = \{C \in {\cal C}(L): c(C) \leq i\} \cup \{A \in {\cal CS}(M): A \in {\cal I}(L) ~and~ c(A) = i\}$ \vskip 1ex \noindent and using arguments similar to those in the proof of Theorem \ref{liftcriterion}. \end{remark} \vskip 1ex \begin{remark} \label{reduction} From Theorem \ref{delete-contract-pair} it follows that the problem of constructing, for a given matroid $M$, all matroids $L$ such that $(M, L)$ is an $X$-$(c,d) $-pair can be reduced to the same problem for $|X| = 1$, i.e. to the problem of constructing all so-called {\em elementary $(c,d)$-pairs} $(M, L)$. \end{remark} \vskip 2ex \noindent \end{document}
\begin{document} \title{A Comparison of Quantum Oracles} \author{Elham Kashefi$^{*\dagger}$, Adrian Kent$^{\S}$, Vlatko Vedral$^{*}$ and Konrad Banaszek$^{*\dagger}$} \address{$^{*}$Optics Section, The Blackett Laboratory, Imperial College, London SW7 2BZ, England \\ $^{\dagger}$ Centre for Quantum Computation, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, England \\ $^{\S}$ Hewlett-Packard Laboratories, Filton Road, Stoke Gifford, Bristol BS34 8QZ, England} \date{\today} \begin{abstract} A standard quantum oracle $S_f$ for a general function $f: Z_N \rightarrow Z_N $ is defined to act on two input states and return two outputs, with inputs $\ket{i}$ and $\ket{j}$ ($i,j \in Z_N $) returning outputs $\ket{i}$ and $\ket{j \oplus f(i)}$. However, if $f$ is known to be a one-to-one function, a simpler oracle, $M_f$, which returns $\ket{f(i)}$ given $\ket{i}$, can also be defined. We consider the relative strengths of these oracles. We define a simple promise problem which minimal quantum oracles can solve exponentially faster than classical oracles, via an algorithm which cannot be naively adapted to standard quantum oracles. We show that $S_f$ can be constructed by invoking $M_f$ and $(M_f )^{-1}$ once each, while $\Theta(\sqrt{N})$ invocations of $S_f$ and/or $(S_f )^{-1}$ are required to construct $M_f$. \end{abstract} \draft \begin{multicols}{2} Recent years have witnessed an explosion of interest in quantum computation, as it becomes clearer that quantum algorithms are more efficient than any known classical algorithm for a variety of tasks \cite{Deutsch85,Shor94,Grover96,BBBV97}.
One important way of comparing the efficiencies is by analysing {\it query complexity}, which measures the number of invocations of an ``oracle'' --- which may be a standard circuit implementing a useful sub-routine, a physical device, or a purely theoretical construct --- needed to complete a task. A number of general results show the limitations and advantages of quantum computers using the query complexity models \cite{BBCMW98,vanDam98,Cleve99}. In this paper we compare the query complexity analysis of quantum algorithms given two different ways of representing a permutation in terms of a black box quantum oracle. We begin with a short discussion of graph isomorphism problems, which motivates the rest of the paper. Suppose we are given two graphs, $G_1 = (V_1 , E_1 )$ and $G_2 = (V_2 , E_2)$, represented as sets of vertices and edges in some standard notation. The graph isomorphism (GI) problem is to determine whether $G_1$ and $G_2$ are isomorphic: that is, whether there is a bijection $f: V_1 \rightarrow V_2$ such that $( f(u) , f(v) ) \in E_2$ if and only if $(u,v) \in E_1$. (We assume $| V_1| = | V_2 |$, else the problem is trivial.) GI is a problem which is in NP but not known to be NP-complete for classical computers, and for which no polynomial time quantum algorithm is currently known. We are interested in a restricted version (NAGI) of GI, in which it is given that $G_1$ and $G_2$ are non-automorphic: i.e., they have no non-trivial automorphisms. So far as we are aware, no polynomial time classical or quantum algorithms are known for NAGI either. The following observations suggest a possible line of attack in the quantum case. First, for any non-automorphic graph $G = (V,E)$, we can define a unitary map $M_G$ that takes permutations $\rho$ of $V$ as inputs and outputs the permuted graph $\rho(G) = (\rho(V), \rho(E))$, with some standard ordering (e.g. alphabetical) of the vertices and edges, in some standard computational basis representations.
That is, writing $| V | = N$, for any $\rho \in S_N$, $M_G$ maps $\ket{\rho}$ to $\ket{\rho(G)}$. Consider a pair $(G_1 , G_2 )$ of non-automorphic graphs. Given circuits implementing $M_{G_1}$, $M_{G_2}$, we could input copies of the state $ \frac{1}{\sqrt{N!}} \sum_{\rho \in S_N} \ket{\rho}$ to each circuit, and compare the outputs $\ket{\psi_i} = \frac{1}{\sqrt{N!}} \sum_{\rho \in S_N} \ket{ \rho (G_i ) }$. Now, if the graphs are isomorphic, these outputs are equal; if not, they are orthogonal. These two cases can be distinguished with arbitrarily high confidence in polynomial time (see below), so this would solve the problem. Unfortunately, our algorithm for NAGI requires constructing circuits for the $M_{G_i}$, which could be at least as hard as solving the original problem. On the other hand, it is easy to devise a circuit, $S_G$, which takes two inputs, $\ket{\rho}$ and a blank set of states $\ket{0}$, and outputs $\ket{\rho}$ and $\ket{\rho (G) }$. Since $S_G$ and $M_G$ implement apparently similar tasks, one might hope to find a way of constructing $M_G$ from a network involving a small number of copies of $S_G$. Such a construction would solve NAGI. Alternatively, one might hope to prove such a construction is impossible, and so definitively close off this particular line of attack. Thus motivated, we translate this into an abstract problem in query complexity. Consider the following oracles, defined for a general function $f: \{0,1\}^m \rightarrow \{0,1\}^n$: \begin{itemize} \itemsep0pt \parsep0pt \item the {\it standard} oracle, $ S_f : |x\rangle |b\rangle \rightarrow |x\rangle |b \oplus f(x)\rangle$. \item the {\it Fourier phase} oracle, $P_f : |x\rangle |b\rangle \rightarrow e^{2\pi i f(x) b / 2^n }|x\rangle |b\rangle$.
\end{itemize} Here $x$ and $b$ are strings of $m$ and $n$ bits respectively, represented as numbers modulo $M=2^m$ and $N = 2^n$, $| x \rangle$ and $| b \rangle$ are the corresponding computational basis states, and $\oplus$ is addition modulo $2^n$. Note that the oracles $P_f$ and $S_f$ are equivalent, in the sense that each can be constructed by an $f$-independent quantum circuit containing just one copy of the other, and also equivalent to their inverses. To see this, define the quantum Fourier transform operation $F$ and the parity reflection $R=F^2$ by $$ F : \ket{j} \rightarrow \frac{1}{ \sqrt{N}} \sum_{k=0}^{N - 1} \exp ( 2 \pi i j k / N ) \ket{k} \, , \qquad R: \ket{j} \rightarrow \ket{-j} \, . $$ Then we have \begin{eqnarray*} &(& I \otimes F ) \circ S_f \circ (I \otimes F^{-1} ) = P_f \, , \\ &(& I \otimes F^{-1} ) \circ P_f \circ (I \otimes F) = S_f \, , \\ &(& I \otimes R ) \circ S_f \circ (I \otimes R ) = (S_f )^{-1} \, , \\ & (& I \otimes R ) \circ P_f \circ (I \otimes R ) = (P_f )^{-1} \, . \end{eqnarray*} For the rest of the paper we take $m=n$ and suppose we know $f$ is a permutation on the set $\{ 0,1 \}^n$. There is then a simpler invertible quantum map associated to $f$: \begin{itemize} \itemsep0pt \parsep0pt \item the {\it minimal} oracle: $ M_f : |x\rangle \rightarrow |f(x)\rangle$. \end{itemize} We can model NAGI, and illustrate the different behaviour of standard and minimal oracles, by a promise problem. Suppose we are given two permutations, $\alpha$ and $\beta$, of $Z_N$, and a subset $S$ of $Z_N$, and are promised that the images $\alpha(S)$ and $\beta(S)$ are either identical or disjoint. The problem is to determine which. (This problem has been considered in a different context by Buhrman et al \cite{BCWW01}.) We represent elements $x \in Z_N$ by computational basis states of $n$ qubits in the standard way, and write $|S \rangle = \sum_{x \in S} \ket{x}$.
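The equivalence between $S_f$ and $P_f$ stated above can be checked numerically on a small register. The following sketch is an illustration only: it assumes NumPy and an arbitrary example function $f$ on $Z_N$, builds the three operators as explicit matrices in the computational basis, and verifies $(I \otimes F) \circ S_f \circ (I \otimes F^{-1}) = P_f$.

```python
import numpy as np

n = 2
N = 2 ** n
f = lambda x: (3 * x + 1) % N                  # an arbitrary example function on Z_N

# Standard oracle S_f on C^N (x) C^N: |x>|b> -> |x>|b (+) f(x)>, (+) = addition mod N.
S_f = np.zeros((N * N, N * N))
for x in range(N):
    for b in range(N):
        S_f[x * N + (b + f(x)) % N, x * N + b] = 1.0

# Fourier phase oracle P_f: diagonal phases e^{2 pi i f(x) b / N}.
P_f = np.diag([np.exp(2j * np.pi * f(x) * b / N)
               for x in range(N) for b in range(N)])

# Quantum Fourier transform: F|j> = (1/sqrt(N)) sum_k e^{2 pi i jk/N} |k>.
F = np.array([[np.exp(2j * np.pi * j * k / N) for j in range(N)]
              for k in range(N)]) / np.sqrt(N)
I = np.eye(N)

lhs = np.kron(I, F) @ S_f @ np.kron(I, F.conj().T)
assert np.allclose(lhs, P_f)                   # (I ⊗ F) S_f (I ⊗ F^{-1}) = P_f
```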
Figure $1$ gives a quantum network with minimal oracles that identifies disjoint images with probability at least $1/2$. \begin{figure} \caption{A quantum circuit for the permutation promise problem.} \end{figure} Let $A=\{ \alpha(x)|x \in S \}$ and $B=\{ \beta(x)| x \in S \}$. One query to the oracles $M_{\alpha}$ and $M_{\beta}$ creates the (unnormalised) states $| A \rangle$ and $| B \rangle$ respectively. The state before applying the controlled gates is: \begin{eqnarray*} | A \rangle | B \rangle \otimes (|0\rangle - |1\rangle ) \end{eqnarray*} After controlled swap gates, the state becomes: $$ | A \rangle | B \rangle |0\rangle - | B \rangle | A \rangle |1\rangle \, . $$ The final Hadamard gate on the ancilla qubit gives: $$ ( | A \rangle | B \rangle - | B \rangle | A \rangle ) |0\rangle + ( | A \rangle | B \rangle + | B \rangle | A \rangle ) |1\rangle $$ A $|0\rangle$ outcome shows unambiguously that the images are disjoint. A $|1\rangle$ outcome is generated with probability $1$ if the images are identical, and with probability $1/2$ if the images are disjoint. Repeating the computation $K$ times allows one to exponentially improve the confidence of the result. If after $K$ trials we get $|0\rangle$ at least once, we know for certain that $\alpha(S) \neq \beta(S)$. If all $K$ outcomes are $|1\rangle$, the conclusion that $\alpha(S)=\beta(S)$ has the conditional probability $p_K = \frac{1}{2^K}$ of having been erroneously generated by disjoint input images. Note that $p_K$ is independent of the problem size and decreases exponentially with the number of repetitions. Clearly, a naive adaptation of the algorithm to standard oracles does not work. Replacing $M_{\alpha}$ and $M_{\beta}$ by $S_{\alpha}$ and $S_{\beta}$, and replacing the inputs by $\ket{S} \otimes \ket{0}$, results in output states which are orthogonal if the images are disjoint, but also in general very nearly orthogonal if the images are identical.
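The behaviour of the Figure $1$ network with minimal oracles can be reproduced with a small state-vector simulation. The sketch below (NumPy-based, with normalised $|A\rangle$, $|B\rangle$; the subsets and the value of $N$ are illustrative) computes the probability of the $|0\rangle$ ancilla outcome, which is $(1 - |\langle A|B\rangle|^2)/2$: zero for identical images and $1/2$ for disjoint ones.

```python
import numpy as np

def swap_test_p0(A, B, N):
    """Probability of the |0> ancilla outcome for image sets A, B of Z_N."""
    a = np.zeros(N); a[list(A)] = 1.0; a /= np.linalg.norm(a)   # |A>
    b = np.zeros(N); b[list(B)] = 1.0; b /= np.linalg.norm(b)   # |B>
    # State (|A>|B>|0> - |B>|A>|1>)/sqrt(2) after the controlled swaps.
    psi0 = np.kron(a, b) / np.sqrt(2)
    psi1 = -np.kron(b, a) / np.sqrt(2)
    # Hadamard on the ancilla: |0>-branch amplitude is (psi0 + psi1)/sqrt(2).
    out0 = (psi0 + psi1) / np.sqrt(2)
    return float(np.dot(out0, out0))

N, S = 8, {0, 1, 2}
alpha = lambda x: (x + 1) % N          # illustrative permutations
beta = lambda x: (x + 4) % N
A = {alpha(x) for x in S}

assert swap_test_p0(A, A, N) < 1e-12                        # identical: never |0>
assert abs(swap_test_p0(A, {beta(x) for x in S}, N) - 0.5) < 1e-12  # disjoint: 1/2
```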
Applying a symmetric projection as above thus almost always fails to distinguish the cases. To the best of our knowledge a non-trivial lower bound for this problem using the $S_f$ is not known (however, see \cite{Aaronson01}). This example suggests that minimal oracles may be rather more powerful than standard oracles. To establish a more precise version of this hypothesis, we examine how good each oracle is at simulating the other. One way round turns out to be simple. We can construct $S_f$ from $M_f$ and $(M_f)^{-1} = M_{f^{-1}}$ as follows: $$ S_f = ( M_{f^{-1}} \otimes I ) \circ A \circ ( M_f \otimes I ) \, $$ where $\circ$ represents the composition of operations (or the concatenation of networks) and the modulo $N$ adder $A$ is defined by $A : \ket{a} \otimes \ket{b} \rightarrow \ket{a} \otimes \ket{a \oplus b }$. Suppose that we are given $M_f$ in the form of a specified complicated quantum circuit. We may be completely unable to simplify the circuit or deduce a simpler form of $f$ from it. However, by reversing the circuit gate by gate, we can construct a circuit for $(M_f )^{-1}$. Hence, by the above construction, we can produce a circuit for $S_f$, using one copy and one reversed copy of the circuit for $M_f$. This way of looking at oracles can be formalised into the {\it circuit model}, in which the query complexity of an algorithm involving an oracle $O_f$ associated to a function $f$ is the number of copies of $O_f$ and/or $O^{-1}_f$ required to implement the algorithm in a circuit that, apart from the oracles, is independent of $f$. In the circuit model, a standard oracle can easily be simulated given a minimal oracle. Ignoring constant factors, we say that the minimal oracle is at least as strong as the standard oracle. It should be stressed that, while the circuit model has a natural justification, there are other interesting oracle models, to which our arguments will not apply. 
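Because $M_f$, the adder $A$, and $S_f$ all permute computational basis states, the construction $S_f = ( M_{f^{-1}} \otimes I ) \circ A \circ ( M_f \otimes I )$ can be verified by bookkeeping on basis labels alone. A minimal sketch (the permutation $f$ is an illustrative choice):

```python
N = 8
f = [(3 * x + 5) % N for x in range(N)]        # an example permutation of Z_N
finv = [f.index(y) for y in range(N)]

def standard_oracle(x, b):                     # S_f: |x>|b> -> |x>|b (+) f(x)>
    return x, (b + f[x]) % N

def built_from_minimal(x, b):
    x = f[x]                                   # M_f on the first register
    b = (x + b) % N                            # modulo-N adder A
    x = finv[x]                                # M_{f^{-1}} on the first register
    return x, b

# The composite reproduces S_f on every basis state.
assert all(standard_oracle(x, b) == built_from_minimal(x, b)
           for x in range(N) for b in range(N))
```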
For example, if we think of the oracle $M_f$ as a black box supplied by a third party, then we should not assume that $(M_f)^{-1}$ can easily be constructed from $M_f$, as we know no way of efficiently reversing the operation of an unknown physical evolution. Remaining within the circuit model, we now show that $M_f$ and $S_f$ are not (even up to constant factors) equivalent. In fact, simulating $M_f$ requires exponentially many uses of $S_f$. First, consider the standard oracle $S_{f^{-1}}$ which maps a basis state $|y\rangle |b \rangle$ to $|y\rangle |b \oplus f^{-1} (y)\rangle$. Since $S_{f^{-1}} : |y\rangle |0 \rangle \rightarrow |y\rangle | f^{-1}(y)\rangle$, simulating it allows us to solve the search problem of identifying $|f^{-1}(y)\rangle$ from a database of $N$ elements. It is known that, using Grover's search algorithm, one can simulate $S_{f^{-1}}$ with $O(\sqrt{N})$ invocations of $S_f$ \cite{BHT98,BHMT00}. In the following we explain one possible way of doing that. Prepare the state $|y\rangle|0\rangle|0\rangle|0\rangle$, where the first three registers consist of $n$ qubits and the last register is a single qubit.
Apply Hadamard transformations on the second register to get $|\phi_1 \rangle = |y\rangle\sum_{x \in Z_N}|x\rangle|0\rangle|0\rangle \mbox{.}$ Invoking $S_f$ on the second and third registers now gives $$ |y\rangle( \sum_{x \in Z_N}|x\rangle|f(x)\rangle ) |0\rangle \mbox{.}$$ Using CNOT gates, compare the first and third registers and put the result in the fourth, obtaining $$ \Big(|y\rangle\sum_{x \in Z_N , x \ne f^{-1}(y)}|x\rangle|f(x)\rangle|0\rangle\Big)+ \Big(|y\rangle|f^{-1}(y)\rangle|y\rangle|1\rangle\Big) \mbox{.}$$ Now apply $( S_f )^{-1}$ on the second and third registers, obtaining $$ \Big(|y\rangle\sum_{x \in Z_N , x \ne f^{-1}(y)}|x\rangle|0\rangle|0\rangle \Big) + \Big(|y\rangle|f^{-1}(y)\rangle|0\rangle|1\rangle \Big) \mbox{.} $$ Taken together, these operations leave the first and third registers unchanged, while their action on the second and fourth defines an oracle for the search problem. Applying Grover's algorithm\cite{Grover96} to this oracle, we obtain the state $|y\rangle|f^{-1}(y)\rangle$ after $O(\sqrt{N})$ invocations. {\bf Lemma 1} \qquad To simulate the inverse oracle $S_{f^{-1}}$ with a quantum network using oracles $S_f$ and $(S_f )^{-1}$, a total number of $\Theta(\sqrt{N})$ invocations of $S_f$ are necessary. \textbf{Proof} The upper bound of $O(\sqrt{N})$ is implied by the Grover-based algorithm just discussed. Ambainis \cite{Ambainis00} has shown that $\Omega(\sqrt{N})$ invocations of the standard oracle $S_f$ are required to invert a general permutation $f$. \qquad{\bf QED.} \vskip5pt Given $S_f$ and $S_{f^{-1}}$, Bennett has shown how to simulate $M_f$ within classical reversible computation \cite{Bennett73}.
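The $S_f$ / compare / $(S_f)^{-1}$ sequence described above acts on basis states as a marking oracle for $x = f^{-1}(y)$, leaving the first and third registers unchanged. Since every step permutes basis states, this can be checked classically; the following sketch uses an illustrative permutation of $Z_N$.

```python
N = 8
f = [(5 * x + 2) % N for x in range(N)]        # an example permutation of Z_N
finv = [f.index(y) for y in range(N)]

def grover_marking_oracle(y, x, c, q):
    c = (c + f[x]) % N            # S_f on registers 2 and 3
    q ^= int(y == c)              # CNOT-compare registers 1 and 3 into register 4
    c = (c - f[x]) % N            # (S_f)^{-1} on registers 2 and 3
    return y, x, c, q

# Net effect: flip the last register exactly when x = f^{-1}(y);
# registers 1 and 3 come back unchanged.
for y in range(N):
    for x in range(N):
        assert grover_marking_oracle(y, x, 0, 0) == (y, x, 0, int(x == finv[y]))
```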
Using a quantum version of this construction, we can establish our main result: \vskip5pt {\bf Lemma 2} \qquad To simulate the minimal oracle $M_f$ with a quantum network using oracles $S_f$ and $(S_f )^{-1}$, a total number of $\Theta(\sqrt{N})$ invocations of $S_f$ are necessary. \textbf{Proof} Given $S_f$ and $S_{f^{-1}}$, we can simulate $M_f$ as follows: $$ M_f \otimes I = ( S_{f^{-1}} )^{-1} \circ X \circ S_f \, , $$ where the swap gate $X$ is defined by $ X: \ket{a} \otimes \ket{b} \rightarrow \ket{b} \otimes \ket{a}$. From Lemma $1$, $S_{f^{-1}}$ needs $\Theta(\sqrt{N})$ invocations of $S_f$ and $(S_f )^{-1}$. Therefore we get the upper bound of $O(\sqrt{N})$ for the simulation of $M_f$. This simulation is, however, optimal. For suppose there is a network which simulates $M_f$ with $o( \sqrt{N} )$ queries. The reversed network simulates $M_{f^{-1}}$. From these two, by our earlier results, we can construct a network that simulates $S_{f^{-1}}$ with $o( \sqrt{N} ) $ queries, which contradicts Lemma $1$. \qquad{\bf QED.} \vskip5pt It is worth remarking that we could equally well have carried through our discussion using variants of $S_f$ and $P_f$, such as the bitwise acting versions: \begin{itemize} \item the {\it bit string standard} oracle, $ S^{\rm bit}_f : |{\bf x}\rangle | \bf{b} \rangle \rightarrow |\bf{x} \rangle |\bf{b} \oplus \bf{f(x)} \rangle $. \item the {\it bit string phase} oracle, $P^{\rm bit}_f : |{\bf x}\rangle |{\bf b}\rangle \rightarrow e^{2\pi i {\bf f(x) \cdot b } / 2 }|{\bf x}\rangle |{\bf b} \rangle$. \end{itemize} Here $\bf{b} \oplus \bf{x}$ denotes the bitwise sum mod $2$ of the strings $\bf{b}$ and $\bf{x}$, and ${\bf b \cdot x}$ their inner product mod $2$.
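On basis states, the construction in the proof of Lemma 2 can be traced classically when the second register is prepared in $\ket{0}$ (the Bennett-style uncomputation returns it to $\ket{0}$). The sketch below uses an illustrative permutation $f$ and takes $\oplus$ to be addition modulo $N$, as in the definition of $S_f$.

```python
N = 8
f = [(3 * x + 1) % N for x in range(N)]        # an example permutation of Z_N
finv = [f.index(y) for y in range(N)]

def simulate_M_f(x):
    a, b = x, 0                   # registers |x>|0>
    b = (b + f[a]) % N            # S_f:            |x>|0>    -> |x>|f(x)>
    a, b = b, a                   # swap gate X:    |x>|f(x)> -> |f(x)>|x>
    b = (b - finv[a]) % N         # (S_{f^-1})^{-1}: uncompute, |f(x)>|x> -> |f(x)>|0>
    return a, b

# The composite acts as M_f on the first register, returning the ancilla to |0>.
assert all(simulate_M_f(x) == (f[x], 0) for x in range(N))
```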
Again, $S^{\rm bit}_f$ and $P^{\rm bit}_f$ are equivalent: writing $$ {\cal F} = H \otimes H \otimes \cdots \otimes H \, ,$$ for the tensor product of $n$ Hadamard operators acting on register qubits, we have \begin{eqnarray*} &(& I \otimes {\cal F} ) \circ S^{\rm bit}_f \circ (I \otimes {\cal F}^{-1} ) = P^{\rm bit}_f \, , \\ &(& I \otimes {\cal F}^{-1} ) \circ P^{\rm bit}_f \circ (I \otimes {\cal F}) = S^{\rm bit}_f \, . \end{eqnarray*} Note also that $S^{\rm bit}_f = (S^{\rm bit}_f )^{-1}$, $P^{\rm bit}_f = (P^{\rm bit}_f )^{-1}$. Our results still apply: $S^{\rm bit}_f$ has essentially the same relation to $M_f$ that $S_f$ does. In summary, constructing a minimal oracle requires exponentially many invocations of a standard oracle. We can thus indeed definitively exclude the possibility of efficiently solving NAGI by simulating $M_f$ using $S_f$, which motivated our discussion. We have not, however, been able to exclude the possibility of directly constructing a polynomial size network defining an $M_f$ oracle for any given $1-1$ function $f$, which would lead to a polynomial time solution of NAGI. \noindent {\em Acknowledgments}. We thank Charles Bennett for helpful discussions and for drawing our attention to Refs. \cite{Bennett73}, and Richard Jozsa for helpful comments. E. K. thanks Mike Mosca for useful discussions and Waterloo University for hospitality. This work was supported by EPSRC and by the European projects EQUIP, QAIP and QUIPROCONE. \end{multicols} \end{document}
\begin{document} \title{A near deterministic linear optical CNOT gate} \author{Kae Nemoto}\email{[email protected]} \affiliation{National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan} \author{W. J. Munro}\email{[email protected]} \affiliation{National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan} \affiliation{Hewlett-Packard Laboratories, Filton Road, Stoke Gifford, Bristol BS34 8QZ, United Kingdom} \date{\today} \begin{abstract} We show how to construct a near deterministic CNOT using several single photon sources, linear optics, photon number resolving quantum non-demolition detectors and feed-forward. This gate does not require the use of massively entangled states common to other implementations and is very efficient on resources with only one ancilla photon required. The key elements of this gate are non-demolition detectors that use a weak cross-Kerr nonlinearity effect to conditionally generate a phase shift on a coherent probe, if a photon is present in the signal mode. These potential phase shifts can then be measured using highly efficient homodyne detection. \end{abstract} \pacs{03.67.Lx, 03.67.-a, 03.67.Mn, 42.50.-p} \maketitle In the past few years we have seen the emergence of single photon optics with polarisation states as a realistic path for achieving universal quantum computation. This started with the pioneering work of Knill, Laflamme and Milburn [KLM]\cite{KLM} who showed that with only single photon sources and detectors and linear elements such as beam-splitters, a near deterministic CNOT gate could be created, though with the use of significant but polynomial resources. With this architecture for the CNOT gate and trivial single qubit rotations a universal set of gates is hence possible and a route forward for creating large devices can be seen.
Since this original work there has been significant progress both theoretically\cite{Pittman01,Knill02,Knill03,Nielsen04,Browne04} and experimentally\cite{Pittman03,OBrien03,Gasparoni04}, with a number of CNOT gates actually demonstrated. Much of the theoretical effort has focused on determining more efficient ways to perform the controlled logic. The standard model for linear logic uses only\cite{KLM}: \begin{itemize} \item Single photon sources, \item Linear optical elements including feed-forward, \item Photon number resolving single photon detectors, \end{itemize} and it has been shown by Knill\cite{Knill03} that the maximum probability for achieving the CNOT gate is 3/4. While these upper bounds are not thought to be tight, with the best success probabilities for the CNOT gate being 2/27\cite{Knill01}, it does indicate that near deterministic gates are not possible using only the above resources and strategy. These gates can be made efficient using the ``standard'' optical teleportation tricks which require the use of massively entangled resources. Are there other natural ways to increase the efficiency of these gate operations? Franson {\it et al.}\cite{Pittman01} showed that if you can increase your allowed physical resources to include maximally entangled two photon states, then the CNOT gate can have its probability of success boosted to 1/4, though this is still far below the 3/4 maximum. Alternatively it is possible to use single photons for the cluster state method of one way quantum computation\cite{Nielsen04,Browne04}. This can dramatically decrease the number of single photon sources required to perform a CNOT gate (from up to 10000 for KLM logic to 45 for the cluster approaches). The overhead here in single photon sources is large (but polynomial and hence still efficient in a sense).
Can we however build near deterministic (or deterministic) linear optics gates with a low overhead for sources and detectors by relaxing the constraints in the standard model? There are several options here: we can change the way in which we encode our information (from polarisation encoded single photon qubits) or the mechanism by which we condition and detect them. There have been schemes by Yoran and Reznik\cite{Yoran03} that encode their information in both polarisation and which path. This encoding allows a deterministic Bell state measurement but the basic gate operations are still relatively inefficient. Alternatively one could encode the information in coherent states of light as proposed by Ralph {\it et al.}\cite{Ralph03}. A key issue here becomes the creation and detection of superpositions of coherent states. If we want to maintain encoding our information in polarisation states of light, what else is possible? The main architectural freedom we have left to change is the choice of single photon detectors. We could move to nondestructive quantum non-demolition (QND) detectors, which have the potential to condition the evolution of our system without necessarily destroying the single photons\cite{Milburn84,Yamamoto85,Grangier98}. They can also resolve one photon from a superposition of zero and two. QND devices are generally based on cross-Kerr nonlinearities. Historically these reversible nonlinearities have been extremely tiny and unsuitable for single photon interactions but recently giant Kerr nonlinearities have become available with electromagnetically induced transparency (EIT)\cite{Imamoglu96}. It is currently not clear whether these nonlinearities are sufficient for the natural implementation of single photon--single photon quantum gates; however, they can be used for QND detection, where we require a single photon--large coherent beam interaction.
Here the nonlinearity strength needs to be sufficient only for a small phase shift to be induced onto a coherent probe beam (which is distinguishable from the original probe)\cite{Munro03}. Now that we have decided to use QND detection for linear optical quantum computation, we need to investigate its effect on the CNOT gates, and this is the key purpose of this paper. We could investigate each of the known gates in turn but we will focus on Franson's three photon CNOT gate\cite{Pittman03}, the reason being that it requires fewer physical detectors to condition the results\footnote{Our results generalise to most of the other linear logic CNOT gates known. The Franson four photon gate follows most naturally.}. We will show that a near deterministic CNOT gate can be performed with such QND detectors without destroying the ancilla photon provided feed-forward is available. More generally we will show that for an $n$ qubit circuit, the number of single photon sources required scales as $n+1$. The extra photon is, however, not consumed in the computation and is left over at the end. This approach can also be applied to achieve cluster state computing or computing by measurement alone\cite{Nielsen04,Browne04}. Before we begin our detailed discussion, let us first consider the photon number QND measurement using a cross-Kerr nonlinearity, which has a Hamiltonian of the form $H_{QND}= \hbar \chi a_s^\dagger a_s a_p^\dagger a_p$ where the signal (probe) mode has the creation and destruction operators given by $a_s^\dagger, a_s$ ($a_p^\dagger, a_p$) respectively and $\chi$ is the strength of the nonlinearity.
If we consider the signal state to have the form $|\psi\rangle= c_0 |0 \rangle_s + c_1 |1 \rangle_s $ with the probe beam initially in a coherent state $|\alpha\rangle_p$ then the cross-Kerr interaction causes the combined signal/probe system to evolve as \begin{eqnarray} U_{ck} |\psi\rangle_s |\alpha\rangle_p &=& e^{i H_{QND}t/\hbar} \left[c_0 |0 \rangle_s + c_1 |1 \rangle_s\right] |\alpha\rangle_p \nonumber \\ &=& c_0 |0 \rangle_s |\alpha\rangle_p + c_1 |1 \rangle_s |\alpha e^{i \theta }\rangle_p \end{eqnarray} where $\theta=\chi t$ with $t$ being the interaction time. We observe immediately that the Fock state $|n_a\rangle$ is unaffected by the interaction but the coherent state $|\alpha\rangle_p$ picks up a phase shift directly proportional to the number of photons $n_a$ in the $|n_a\rangle$ state. For $n_a$ photons in the signal mode, the probe beam evolves to $|\alpha e^{i n_a \theta }\rangle_p$. Assuming $\alpha \theta \gg 1$ a measurement of the phase of the probe beam (via homodyne/heterodyne techniques) projects the signal mode into a definite number state or superposition of number states. The requirement $\alpha \theta \gg 1$ is interesting as it tells us that a large nonlinearity $\theta$ is not absolutely required to distinguish different $|n_a\rangle$, even for zero, one and two Fock states. We could have $\theta$ small but would then require $\alpha$, the amplitude of the probe beam, to be large. This is entirely possible and means that we can operate in the regime $\theta \ll 1$ which is experimentally more realizable. If this cross-Kerr nonlinearity were going to be used directly to implement a CPhase/CNOT gate between single photons then we would require $\theta=\pi$. In this Fock state detection model we measure the phase of the probe beam immediately after it has interacted with the weak cross-Kerr nonlinearity. This is the regime where the QND detector functions like the standard single photon detector.
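The role of the condition $\alpha \theta \gg 1$ can be made quantitative through the coherent-state overlap $|\langle \alpha | \alpha e^{i\theta}\rangle|^2 = \exp[-2\alpha^2(1-\cos\theta)] \approx \exp[-(\alpha\theta)^2]$ for small $\theta$ and real $\alpha$: the shifted and unshifted probes become nearly orthogonal once $\alpha\theta$ is large, however weak the nonlinearity. A minimal numerical sketch (the values of $\alpha$ and $\theta$ are illustrative):

```python
import numpy as np

def coherent_overlap_sq(alpha, theta):
    # |<alpha | alpha e^{i theta}>|^2 = exp(-2 alpha^2 (1 - cos theta))
    return float(np.exp(-2 * alpha**2 * (1 - np.cos(theta))))

theta = 0.01                                          # weak cross-Kerr phase, theta << 1
assert coherent_overlap_sq(10.0, theta) > 0.9         # alpha*theta = 0.1: barely distinguishable
assert coherent_overlap_sq(1000.0, theta) < 1e-40     # alpha*theta = 10: nearly orthogonal
```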
However, if we want to do a more ``generalised'' type of measurement between different signal beams, we could delay the measurement of the probe beam, instead having it interact with several cross-Kerr nonlinearities, with a different signal mode in each case. The probe beam measurement then occurs after all these interactions in a collective way which could for instance allow a nondestructive detection that distinguishes superpositions and mixtures of the states $|HH\rangle$ and $|VV\rangle$ from $|HV\rangle$ and $|VH\rangle$. The key here is that we could have no net phase shifts on the $|HH\rangle$ and $|VV\rangle$ terms while having a phase shift on the $|HV\rangle$ and $|VH\rangle$ terms. We will call this generalization a {\it two qubit polarisation parity QND detector} and it is this type of detector that allows us to circumvent the Knill bounds. \begin{figure} \caption{Schematic diagram of a two qubit polarisation QND detector that distinguishes superpositions and mixtures of the states $|HH\rangle$ and $|VV\rangle$ from $|HV\rangle$ and $|VH\rangle$ using several cross-Kerr nonlinearities and a coherent laser probe beam $|\alpha\rangle$. The scheme works by first splitting each polarisation qubit into a which path qubit on a polarising beamsplitter. The action of the first cross-Kerr nonlinearity puts a phase shift $\theta$ on the probe beam only if a photon was present in that mode. The second cross-Kerr nonlinearity puts a phase shift $-\theta$ on the probe beam only if a photon was present in that mode. After the nonlinear interactions the which path qubits are converted back to polarisation encoded qubits. The probe beam only picks up a phase shift if the states $|HV\rangle$ and/or $|VH\rangle$ were present and hence the appropriate homodyne measurement allows the states $|HH\rangle$ and $|VV\rangle$ to be distinguished from $|HV\rangle$ and $|VH\rangle$.
The two qubit polarisation QND detector thus acts like a parity checking device.} \label{fig-qnd-parity} \end{figure} Consider two polarisation qubits initially prepared in the states $|\Psi_1\rangle = c_0 |H \rangle_a+ c_1 |V \rangle_a$ and $|\Psi_2\rangle = d_0 |H \rangle_b+ d_1 |V \rangle_b$. These qubits are split individually on polarizing beam-splitters (PBS) into spatial modes which then interact with cross-Kerr nonlinearities as shown in Figure (\ref{fig-qnd-parity}). The action of the PBS's and cross-Kerr nonlinearities evolves the combined system $|\Psi_1\rangle|\Psi_2\rangle |\alpha\rangle_p $ to $|\psi\rangle_{T}=\left[c_0 d_0 |H H \rangle +c_1 d_1|V V \rangle \right] |\alpha\rangle_p +c_0 d_1 |H V \rangle |\alpha e^{i \theta}\rangle_p+c_1 d_0 |V H \rangle |\alpha e^{-i \theta}\rangle_p$. We observe immediately that the $|H H \rangle$ and $|V V \rangle$ terms pick up no phase shift and remain coherent with respect to each other. The $|H V \rangle$ and $|V H \rangle$ terms pick up opposite-signed phase shifts $\pm\theta$, which could allow them to be distinguished by a general homodyne/heterodyne measurement. However, if we choose the local oscillator phase $\pi/2$ offset from the probe phase (we will call this an $X$ quadrature measurement), then the states $|\alpha e^{\pm i \theta}\rangle_p$ cannot be distinguished \cite{barrett04}. More specifically, with $\alpha$ real, an $X$ homodyne measurement conditions $|\psi\rangle_{T}$ to \begin{eqnarray} &&|\psi_X\rangle_{T}={\it f}(X,\alpha)\left[c_0 d_0 |H H \rangle +c_1 d_1|V V \rangle \right] \\ &&\;+ {\it f}(X,\alpha \cos \theta) \left[ c_0 d_1 e^{i \phi(X)} |H V \rangle+c_1 d_0 e^{-i \phi(X)}|V H \rangle \right] \nonumber \end{eqnarray} where ${\it f}(X,\beta)=\exp \left[-\frac{1}{4} \left(X-2\beta\right)^2\right]/(2 \pi)^{1/4}$ and $\phi(X)= \alpha X \sin \theta -\alpha^2 \sin 2\theta \;\,({\rm mod}\ 2 \pi)$.
We see that ${\it f}(X,\alpha)$ and ${\it f}(X,\alpha \cos \theta)$ are two Gaussian curves, with the midpoint between the peaks located at $X_0=\alpha \left[1+\cos \theta \right]$ and the peaks separated by a distance $X_d=2 \alpha \left[1-\cos \theta \right]$. As long as this separation is large, $X_d \sim \alpha \theta^2 \gg 1$, there is little overlap between these curves. Hence for $X>X_0$ we have \begin{eqnarray}\label{even-parity} |\psi_{X>X_0}\rangle_{T}\sim c_0 d_0 |H H \rangle +c_1 d_1|V V \rangle \end{eqnarray} while for $X<X_0$ \begin{eqnarray}\label{odd-parity} |\psi_{X<X_0}\rangle_{T}\sim c_0 d_1 e^{i \phi(X)} |H V \rangle+c_1 d_0 e^{-i \phi(X)}|V H \rangle. \end{eqnarray} We have used the approximation symbol $\sim$ in these equations as there is a small but finite probability that the state (\ref{even-parity}) can occur for $X<X_0$. The probability of this error occurring is given by $P_{\rm error}=\frac{1}{2}\left(1-{\rm erf}\left[X_d/{2\sqrt 2}\right]\right)$, which is less than $10^{-5}$ when the distance $X_d \sim \alpha \theta^2 > 9$. This shows that it is still possible to operate in the regime of weak cross-Kerr nonlinearities, $\theta \ll \pi$. The action of this two mode polarisation non-demolition parity detector is now very clear: it separates the even parity terms (\ref{even-parity}) from the odd parity terms (\ref{odd-parity}) nearly deterministically. This is really the power enabled by non-demolition measurements, and why we can engineer strong nonlinear interactions using weak cross-Kerr effects. Above we have chosen to call \{$|HH\rangle, |VV\rangle$\} the even parity states and \{$|HV\rangle, |VH\rangle$\} the odd parity states, but this is an arbitrary choice, primarily dependent on the form/type of PBS used to convert the polarisation encoded qubits to which-path encoded qubits. Any other choice is also acceptable, and it does not have to be symmetric between the two qubits. It is also interesting to look at the $X<X_0$ solution given by (\ref{odd-parity}).
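The midpoint, peak separation and error probability above can be checked numerically (an illustration we add here, under the stated Gaussian model; the parameter values below are our own):

```python
import math

def midpoint_and_separation(alpha, theta):
    # Peaks of f(X, alpha) and f(X, alpha cos(theta)) sit at 2*alpha and
    # 2*alpha*cos(theta); midpoint X0 and separation X_d as in the text.
    x0 = alpha * (1 + math.cos(theta))
    xd = 2 * alpha * (1 - math.cos(theta))
    return x0, xd

def p_error(xd):
    # P_error = (1/2)(1 - erf(X_d / (2 sqrt 2))): the tail of one
    # unit-variance Gaussian lying on the wrong side of the midpoint X0.
    return 0.5 * (1 - math.erf(xd / (2 * math.sqrt(2))))

# Weak nonlinearity with a large probe amplitude: alpha * theta^2 ~ 10 > 9
alpha, theta = 1.0e8, 3.2e-4
x0, xd = midpoint_and_separation(alpha, theta)
print(xd, p_error(xd))
```

Running this confirms the claim in the text: for small $\theta$ one has $X_d \approx \alpha\theta^2$, and once $X_d > 9$ the misidentification probability drops below $10^{-5}$.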
We observe immediately that this state depends on the measured $X$ homodyne value, and hence is conditioned on our measurement result $X$. However, simple local rotations using phase shifters dependent on the measurement result $X$ can be performed via a feed-forward process to transform this state to $c_0 d_1 |H \rangle_a |V \rangle_b+c_1 d_0 |V \rangle_a |H \rangle_b$, which is independent of $X$. These transformations are very interesting, as it seems possible with the appropriate choice of $c_0, c_1$ and $d_0, d_1$ to create arbitrary entangled states near deterministically. For instance, if we choose $d_0=d_1=1/\sqrt{2}$, then our device outputs either the state $c_0 |HH \rangle +c_1|V V \rangle$ or $c_0|HV \rangle+c_1 |VH \rangle$. A simple bit flip on the second polarisation qubit transforms the latter into the former. Thus our two mode parity QND detector can be configured to act as a near deterministic entangler (see Figure \ref{fig-qnd-entangler}). \begin{figure} \caption{Schematic diagram of a two polarisation qubit entangling gate. The basis of the scheme is the QND-based parity detector described in Fig.~(\ref{fig-qnd-parity}).} \label{fig-qnd-entangler} \end{figure} This gate allows us to take two separable polarisation qubits and efficiently entangle them (near deterministically). If each of our qubits is initially $|H \rangle+ |V \rangle$, then the action of this entangling gate is to create the maximally entangled state $|HH \rangle +|VV \rangle$. Generally it was thought that strong nonlinearities are required to do this near deterministically; however, our scheme uses only weak nonlinearities, $\theta \ll \pi$. This gate is critical and forms the key element of our efficient Franson CNOT gate. It can also obviously be used to generate the maximally entangled states required for several of the other CNOT implementations.
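A toy state-vector sketch of the entangler (our own illustration, not from the original; it assumes the ideal parity projection and that the feed-forward phase correction $\phi(X)$ and bit flip have already been applied on the odd-parity outcome):

```python
import math, random

def entangler(c, d, rng=random.random):
    """Sketch of the QND parity-based entangler: an X-homodyne result
    projects |Psi1>|Psi2> onto the even {HH,VV} or odd {HV,VH} parity
    subspace; on the odd outcome, feed-forward (phase correction and a
    bit flip on the second qubit) maps the state back to even parity."""
    c0, c1 = c
    d0, d1 = d
    amp = {'HH': c0 * d0, 'HV': c0 * d1, 'VH': c1 * d0, 'VV': c1 * d1}
    p_even = abs(amp['HH'])**2 + abs(amp['VV'])**2
    if rng() < p_even:                       # even-parity outcome
        out = {'HH': amp['HH'], 'VV': amp['VV']}
    else:                                    # odd outcome + bit flip on qubit 2
        out = {'HH': amp['HV'], 'VV': amp['VH']}
    norm = math.sqrt(sum(abs(a)**2 for a in out.values()))
    return {k: a / norm for k, a in out.items()}

s = 1 / math.sqrt(2)
print(entangler((s, s), (s, s)))   # (|HH> + |VV>)/sqrt(2), either outcome
```

With $d_0=d_1=1/\sqrt{2}$ both branches yield $c_0|HH\rangle+c_1|VV\rangle$, so the output is the same whichever homodyne outcome occurs, illustrating why the gate is near deterministic.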
\begin{figure} \caption{Schematic diagram of a near deterministic CNOT composed of two polarisation qubit entangling gates (one with PBS acting in the \{H,V\} basis, the other in the $\{D,\bar D\}$ basis).} \label{fig-qnd-franson} \end{figure} Now let us move our attention to the construction of the CNOT gate (depicted in Fig.~\ref{fig-qnd-franson}). This is the analogue of the Franson CNOT gate from \cite{Pittman03}, but with the key PBS and 45-PBS replaced with \{H,V\} and $\{D=H+V,\bar D=H-V\}$ two polarisation qubit entangling gates. Franson's photon number resolving detectors have also been replaced with single photon number resolving QND detectors. Consider an initial state of the form $\left[ c_0 |H \rangle_c + c_1 |V \rangle_c \right]\otimes \left[|H\rangle +|V \rangle\right] \otimes \left[d_0 |H \rangle_t + d_1 |V \rangle_t\right]$. The action of the left hand side entangler evolves the system to \begin{eqnarray} \left[c_0 |HH \rangle + c_1 |VV \rangle\right]\otimes \left[d_0 |H \rangle_t + d_1 |V \rangle_t\right]. \end{eqnarray} Now the action of the 45-entangling gate (where the PBS's in the original gate have been replaced with 45-PBS's) transforms the state to $\left\{c_0 |H \rangle - c_1 |V \rangle\right\} (d_0-d_1)|\bar D,\bar D\rangle+ \left\{c_0 |H \rangle + c_1 |V \rangle\right\} (d_0+d_1)|D,D\rangle$, where for the $X<X_0$ measurement we have performed the usual phase correction, bit flip and an additional sign change $|V \rangle\rightarrow -|V \rangle$ on the first qubit. The second mode is now split on a normal \{H,V\} PBS and a QND photon number measurement performed. A bit flip is performed if a photon is detected in the $V$ mode. The final state from these interactions and feed-forward operations\footnote{There are feed-forward operations both in the entangling gate and the final measurement step.
These can be delayed and performed together at the end of the gate.} is \begin{eqnarray} c_0 d_0 |HH \rangle+c_0 d_1 |HV \rangle+c_1 d_0 |VV \rangle+c_1 d_1 |VH \rangle, \end{eqnarray} which is the same state obtained by performing a CNOT operation on the state $\left[ c_0 |H \rangle_c + c_1 |V \rangle_c \right]\otimes \left[d_0 |H \rangle_t + d_1 |V \rangle_t\right]$. This shows that our QND-based gate has performed a near deterministic CNOT operation. The core element of this gate is the {\it two qubit polarisation parity QND detector}, which engineers a two polarisation qubit interaction via a strong probe beam. At the heart of this detector are weak cross-Kerr nonlinearities that make it possible to distinguish subspaces of basis states from others, which is not possible with conventional destructive photon counters. It is this that allows us to exceed the Knill bounds presented in \cite{Knill03}. From a different perspective, our two mode QND entangling gate is acting like a fermionic polarizing beam-splitter, that is, it does not allow photon bunching effects. Without these photon bunching effects, simple feed-forward operations allow our overall CNOT gate to be made near deterministic. This represents a huge saving in the physical resources needed to implement single photon quantum logic. For the CNOT operation, only one extra ancilla photon is needed beyond the control and target photons to perform the gate operation in the near deterministic fashion. In fact it is straightforward to observe that if we want to do an $n$ qubit computation (with any number of one and two qubit gates), only $n+1$ single photon sources would be required in principle.
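The claim that the final state coincides with the action of an ideal CNOT can be checked directly (a small sanity check we add here; the amplitudes are arbitrary test values):

```python
def cnot(state):
    """Ideal CNOT in the amplitude ordering [HH, HV, VH, VV], where the
    first letter is the control: the target is flipped exactly when the
    control is V."""
    a_hh, a_hv, a_vh, a_vv = state
    return [a_hh, a_hv, a_vv, a_vh]

def qnd_gate_output(c0, c1, d0, d1):
    # Final state quoted in the text:
    # c0 d0 |HH> + c0 d1 |HV> + c1 d0 |VV> + c1 d1 |VH>
    return [c0 * d0, c0 * d1, c1 * d1, c1 * d0]

c0, c1, d0, d1 = 0.6, 0.8, 0.28, 0.96
product = [c0 * d0, c0 * d1, c1 * d0, c1 * d1]   # control (x) target
assert cnot(product) == qnd_gate_output(c0, c1, d0, d1)
print("QND-based gate output matches an ideal CNOT on the product input")
```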
The resources required to perform this QND-based CNOT gate as presented here are: three single photon sources (two to encode the control and target qubits and one ancilla), six weak cross-Kerr nonlinearities, two coherent laser probe beams with homodyne detectors, plus basic linear optics elements to convert polarisation encoded qubits to spatially encoded ones and to perform the feed-forward. The single photon ancilla is not consumed in the gate operation and can be recycled for further use. This compares with potentially thousands of single photon sources, detectors and linear optical elements needed to implement the original KLM gate. It is possible to construct this near deterministic CNOT with fewer cross-Kerr nonlinearities (potentially as few as two, by recycling them), but at the cost of more feed-forward operations. Finally, we should discuss the size of the weak cross-Kerr nonlinearity required. Previously we specified the constraint $\alpha \theta^2 \gg 1$. Thus for realistic pumps with mean photon number of the order of $10^{12}$, a weak nonlinearity of the order of $\theta=10^{-3}$ could be sufficient. While this is still a technological challenge, it is likely to be achievable in the near future, and really shows the potential power of weak (but not tiny) cross-Kerr nonlinearities. Strong nonlinearities are not a prerequisite for being able to perform quantum computation. {\it To summarize}, we have shown in this letter that weak cross-Kerr nonlinearities can be used to construct near deterministic CNOT gates with far fewer physical resources than other linear optical schemes. This has enormous implications for the development of single photon quantum computing and information processing, using either the conventional models or cluster state techniques. It can immediately be applied to optical cluster state computing, allowing a significant reduction in the physical resources.
At the core of the scheme are generalised QND detectors that allow one to distinguish subspaces of the basis states, rather than all the basis states as occurs with classic photon counters. The strength of the nonlinearities required for our gate is orders of magnitude weaker than that required to perform CNOT gates naturally between single photons. Such nonlinearities are potentially available today using doped optical fibers, cavity QED and EIT. We hope this work motivates the search for weak cross-Kerr nonlinearities, which now have applications beyond, for instance, single photon number resolving detectors. \noindent {\em Acknowledgments}: We would like to thank S. Barrett, R. Beausoleil, P. Kok and T. Spiller for valuable discussions. This work was supported in part by a JSPS research grant and fellowship, an Asahi-Glass research grant and the European Project RAMBOQ. \end{document}
\begin{document} \title [Carath\'eodory functions in Riemann surfaces] {Carath\'eodory functions on Riemann surfaces and reproducing kernel spaces} \author[D. Alpay]{Daniel Alpay} \address{(DA) Schmid College of Science and Technology, Chapman University, One University Drive, Orange, California 92866, USA} \email{[email protected]} \author[A. Pinhas]{Ariel Pinhas} \address{(AP) Department of Mathematics, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel} \email{[email protected]} \author[V. Vinnikov]{Victor Vinnikov} \address{(VV) Department of Mathematics, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel} \email{[email protected]} \date{} \thanks{The first author thanks the Foster G. and Mary McGaw Professorship in Mathematical Sciences, which supported this research.} \begin{abstract} Carath\'eodory functions, i.e.\ functions analytic in the open upper half-plane and with a positive real part there, play an important role in operator theory, $1D$ system theory and in the study of de Branges-Rovnyak spaces. The Herglotz integral representation theorem associates to each Carath\'eodory function a positive measure on the real line and hence allows one to further examine these subjects. In this paper, we study these relations when the Riemann sphere is replaced by a real compact Riemann surface. The generalization of Herglotz's theorem to the compact real Riemann surface setting is presented. Furthermore, we study de Branges-Rovnyak spaces associated with functions with positive real part defined on compact Riemann surfaces. Their elements are no longer functions, but sections of a related line bundle.
\end{abstract} \subjclass{46E22,30F15} \keywords{compact Riemann surface, de Branges-Rovnyak spaces, Carath\'eodory function} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction and overview} A Carath\'eodory function $\varphi(z)$, that is, a function analytic with positive real part in the open upper half-plane ${\mathbb C}_+$, admits an integral representation, also known as Herglotz's representation theorem (see e.g.\ \cite{MR48:904,RosenblumRovnyak}). More precisely, the Carath\'eodory function $\varphi(z)$ can be written as: \begin{equation} \label{27-octobre-2000} \varphi(z)=iA-iBz - i\int_{{\mathbb R}}\left(\frac{1}{t-z}-\frac{t}{t^2+1} \right)d\mu(t), \end{equation} where $A\in{\mathbb R}$, $B\geq 0$ and $d\mu(t)$ is a positive measure on the real line such that $$\int_{\mathbb R}\frac{d\mu(t)}{t^2+1}<\infty.$$ One of the two main objectives of this paper is to extend \eqref{27-octobre-2000} to the nonzero genus case; this is done in Theorem \ref{caraTmRS}. The second is to extend \eqref{4-juin-2000} to the nonzero genus case as well; this result is presented in Theorem \ref{thm41}. The right-hand side of \eqref{27-octobre-2000} defines an analytic extension of $\varphi(z)$ to ${\mathbb C}\setminus{\mathbb R}$ such that $\overline{\varphi(\overline{z})}+\varphi(z)=0$. Thus, for any $z,w \in{\mathbb C}\setminus {\mathbb R}$, we have \begin{equation} \label{4-juin-2000} \frac{\varphi(z)+\overline{\varphi(w)}}{-i(z-\overline{w})}= B+\int_{\mathbb R}\frac{d\mu(t)}{(t-z)(t-\overline{w})}= B+\innerProductTri{\frac{1}{t-z}}{\frac{1}{t-w}}{{\bf L}^2(d\mu)}, \end{equation} where ${\bf L}^2(d\mu)$ stands for the Lebesgue (Hilbert) space associated with the measure $d\mu$. Thus, the kernel $\frac{\varphi(z)+\overline{\varphi(w)}}{-i(z-\overline{w})}$ is positive in ${\mathbb C}\setminus{\mathbb R}$.
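As an elementary genus-zero check of the representation and the kernel formula (a worked example we add here, not taken from the original), take $A=B=0$ and $d\mu=\delta_0$, the unit point mass at the origin:

```latex
% With A = B = 0 and d\mu = \delta_0 the representation gives
\varphi(z) = -i\left(\frac{1}{0-z}-\frac{0}{0^{2}+1}\right) = \frac{i}{z},
\qquad
\operatorname{Re}\varphi(x+iy) = \frac{y}{x^{2}+y^{2}} > 0
\quad\text{for } y>0,
% so \varphi is indeed a Carath\'eodory function, and the kernel reduces to
\frac{\varphi(z)+\overline{\varphi(w)}}{-i(z-\overline{w})}
 = \frac{\tfrac{i}{z}-\tfrac{i}{\overline{w}}}{-i(z-\overline{w})}
 = \frac{1}{z\,\overline{w}}
 = \int_{\mathbb R}\frac{d\delta_0(t)}{(t-z)(t-\overline{w})},
% a rank-one positive kernel, so the associated reproducing kernel
% Hilbert space is the one-dimensional span of 1/z.
```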
When $B=0$, the associated reproducing kernel Hilbert space, denoted by $\mathcal{L}(\varphi)$, is described in the theorem below. \begin{Tm}[{\cite[Section 5]{MR0229011}}] \label{Thm21} The space $\mathcal{L}(\varphi)$ consists of the functions of the form \begin{equation} \label{Thm21A} F(z)=\int_{\mathbb R}\frac{f(t)d\mu(t)}{t-z} \end{equation} where $f\in{\bf L}^2(d\mu)$. Furthermore, $\mathcal{L}(\varphi)$ is invariant under the resolvent-like operators $R_\alpha$, which, for $\alpha \in \mathbb C \setminus \mathbb R$, are given by: \begin{equation}\label{liberte'} (R_\alpha F)(z)=\frac{F(z)-F(\alpha)}{z-\alpha}. \end{equation} Finally, under the hypothesis $\int_{\mathbb R}d\mu(t)<\infty$, the elements of $\mathcal{L}(\varphi)$ satisfy: $$ \lim_{y\rightarrow\infty}F(iy)=0.$$ \end{Tm} Moreover, the resolvent operator satisfies $R_\alpha = (M- \alpha I)^{-1}$, where $M$ is the multiplication operator defined by \[ M \, F(z) = z F(z) - \lim_{z \rightarrow \infty} z F(z). \] We note that $M$ corresponds, through \eqref{Thm21A}, to the operator of multiplication by $t$ in ${\bf L}^2(d\mu)$. We provide here the outline of the proof in order to motivate the analysis presented in the sequel in the compact real Riemann surface setting. \begin{pf}[of Theorem \ref{Thm21}] Let $N\in{\mathbb N}$, $w_1,\ldots , w_N \in{\mathbb C}\setminus{\mathbb R}$ and $c_1, \ldots, c_N\in{\mathbb C}$. Then, \[ F(z) \overset{\text{def} } {=} \sum_{j=1}^{N} c_j \frac{\varphi(z)+\overline{\varphi(w_j)}}{-i(z-\overline{w_j})} = \int_{\mathbb R}\frac{d\mu (t)}{t-z}f(t) \] where \[ f(t)=\sum_{j=1}^{N}\frac{c_j}{t-\overline{w_j}}\in{\bf L}^2(d\mu). \] In view of \eqref{4-juin-2000}, we have \[ \|F\|^2_{\mathcal{L}(\varphi)} = \|f\|^2_{{\bf L}^2(d\mu)} = \sum_{\ell,j}\overline{c_\ell} \frac{\varphi(w_\ell)+\overline{\varphi(w_j)}}{-i(w_\ell-\overline{w_j})}c_j.
\] The first claim (Equation \eqref{Thm21A}) follows from the fact that the linear span of the functions $\frac{1}{-i(z-\overline{w})}$, $w\in{\mathbb C}\setminus{\mathbb R}$, is dense in ${\bf L}^2(d\mu)$. Next, let $F(z)=\int_{\mathbb R}\frac{f(t)d\mu(t)}{t-z}\in\mathcal{L}(\varphi)$. Then, \begin{equation} \label{tatche} (R_\alpha F)(z)=\int_{\mathbb R}\frac{f(t)d\mu(t)}{(t-\alpha)(t-z)} \end{equation} belongs to $\mathcal{L}(\varphi)$ since $f(t)/(t-\alpha)\in{\bf L}^2( d\mu)$, where $\alpha$ lies outside the real line. \end{pf} Furthermore, using \eqref{tatche}, the structure identity \begin{equation} \label{strucId} [R_\alpha f, g] - [f, R_\beta g] - (\alpha - \overline{\beta}) [R_\alpha f, R_\beta g] = 0,\quad \alpha,\beta\in\mathbb C\setminus\mathbb R, \end{equation} holds in the $\mathcal{L}(\varphi)$ spaces. In fact, this is an ``if and only if'' relation: if the identity \eqref{strucId} holds in a space $\mathcal L$ of functions analytic in $\mathbb C\setminus\mathbb R$, then $\mathcal{L}=\mathcal{L}(\varphi)$ for some Carath\'eodory function $\varphi(z)$ (see \cite[Theorem 6]{MR0229011}). Using the observation that an ${\bf L}^2(d\mu)$ space is finite dimensional if and only if the measure $d\mu$ has only a singular part, consisting of a finite number of jumps, we may continue and mention the following result (see for instance \cite{dbbook}): \begin{Tm} \label{finiteDimentionalLphi} Let $\varphi(z)$ be a Carath\'eodory function associated via \eqref{27-octobre-2000} to a positive measure $d \mu$ and let $\mathcal{L}(\varphi)$ be the corresponding reproducing kernel Hilbert space. Then the following are equivalent: \begin{enumerate} \item $\mathcal{L}(\varphi)$ is finite dimensional. \item ${\rm dim} \, {\bf L}^2(d \mu) < \infty$. \item $d \mu$ is a jump measure with a finite number of jumps.
\item The Carath\'eodory function is of the form $$\varphi(z) = i A - i B z + \sum _{j=1}^N \frac{i c_j}{z-t_j},$$ where $c_j>0$, $B\geq 0$ and $A, t_j\in \mathbb R$ for all $1\leq j\leq N$. \end{enumerate} \end{Tm} \begin{Rk} There are two different ways to obtain the positive measure $d\mu$ given in \eqref{27-octobre-2000}. \begin{enumerate} \item Using the Cauchy formula on the boundary and the Banach-Alaoglu Theorem. \item Using the spectral theorem for $R_0$ in the space $\mathcal{L}(\varphi)$. In this case, $R_0$ is self-adjoint and the measure $d\mu$ is given by $d\mu(t) = \innerProductReg{dE(t)u}{u}$, where $E$ is the spectral measure of $R_0$. \end{enumerate} In this paper we focus on the first approach, while in \cite{AVP3} we explore the second approach. \end{Rk} We mention here that Carath\'eodory functions are the characteristic functions or transfer functions of selfadjoint vessels or impedance $2D$ systems, respectively. Furthermore, they are also related to de Branges-Rovnyak spaces $\mathcal{L}(\varphi)$ of sections of certain vector bundles defined on compact Riemann surfaces of nonzero genus. These subjects and their interconnections are further studied by the authors in \cite{AVP3}.\\ {\bf Outline of the paper:} The paper consists of five sections besides the introduction. In Section \ref{secPrel}, we give a brief overview of compact real Riemann surfaces and the associated Cauchy kernels. In Section \ref{secHerg}, we describe explicitly the Green function on $X$ in terms of the canonical homology basis. As a consequence, we present the Herglotz representation theorem for compact real Riemann surfaces. We utilize the integral representation of Carath\'eodory functions in order to study, in Section \ref{secdBLphi}, the de Branges space $\mathcal{L}(\varphi)$.
In Section \ref{secPhiSingleVal}, we examine the case where $\varphi(z)$ is a single-valued function which defines a contractive function $s(z)$ through the Cayley transform. Hence, we may determine the relation between the de Branges space $\mathcal{L}(\varphi)$ and the de Branges-Rovnyak space $\mathcal H (s)$ associated to $s$. Finally, in Section \ref{chSumm43}, we summarize some of the results by comparing the $\mathcal{L}(\varphi)$ theory in the genus zero case and on real compact Riemann surfaces of genus $g>0$. \section{Preliminaries} \label{secPrel} In this section, we give a brief review of the basic properties and definitions of compact real Riemann surfaces. We replace the open upper half-plane (or, more precisely, its double, i.e.\ the Riemann sphere) by a compact real Riemann surface $X$ of genus $g>0$. A survey of the main tools required in the present study (including the prime form and the Jacobian) can be found in \cite[Section 2]{av3}; in particular, the descriptions of the Jacobian variety of a real curve and of the real tori are in \cite{vinnikov5}. For general background, we refer to \cite{fay1,GrHa,gunning2,mumford1} and \cite{mumford2}. It is crucial to choose a canonical basis of the homology group $H_1(X,\mathbb Z)$ which is symmetric, in a suitable sense, under the involution $\tau$ (for more details we refer to \cite{gross1981real}; here we use the conventions of \cite{av3,vinnikov5}). Let $X_{\mathbb R}$ be the set of points invariant under $\tau$, $X_{\mathbb R} = \{ p\in X \,|\, \tauBa{p} = p\}$, which is always assumed to be nonempty. Then $X_{\mathbb R}$ consists of $k$ connected components (disjoint analytic simple closed curves) denoted by $X_j$, where $j=0,...,k-1$. We choose for each component $X_j$ a point $p_j \in X_j$. Then, we set $A_{g+1-k+j} = X_j$ and $B_{g+1-k+j}=C_j - \tauBa{C_j}$, where $j=1,...,k-1$ and $C_j$ is a path from $p_0$ to $p_j$ which does not contain any other fixed point.
We can extend this to a homology basis $A_1,...,A_g,B_1,...,B_g$ with respect to which the involution is given by $\begin{psmallmatrix}I & H\\0 & -I\end{psmallmatrix}$, where the matrix $H$ is given by \[ H= \left( \begin{smallmatrix} 0 & 1 \\ 1 & 0 \\ & & \ddots \\ & & & 0 & 1 \\ & & & 1 & 0 \\ & & & & & & 0 \\ & & & & & & & \ddots \\ & & & & & & & & 0 \\ \end{smallmatrix} \right) \quad {\rm and} \quad H= \left( \begin{smallmatrix} 1 & \\ & \ddots \\ & & 1 & \\ & & & 0 \\ & & & & \ddots \\ & & & & & 0 \\ \end{smallmatrix} \right) , \] in the dividing case and the non-dividing case, respectively. In both cases, $H$ is of rank $g+1-k$. Then, we choose a normalized basis of holomorphic differentials on $X$ satisfying $\int _{A_i } \omega_j = \delta_{ij}$. The matrix $Z \in \mathbb C ^{g \times g}$, with entries $Z_{i,j} = \int _{B_i } \omega_j $, is symmetric, has positive real part, satisfies \[ Z^* = H - Z, \] and is referred to as the period matrix of $X$ associated with the basis $\left( \omega_j \right)_{j=1} ^g$. The Jacobian variety is defined by $J(X) = \mathbb C ^ g / \Gamma$, where $\Gamma = \mathbb Z ^g + Z \mathbb Z ^g$, and the Abel-Jacobi map from $X$ to the Jacobian variety is given by \[ \mu:p \rightarrow \begin{pmatrix}\int_{p_0}^p \omega_1\\ \vdots\\ \int_{p_0}^p \omega_g\end{pmatrix}. \] It is convenient to write \begin{equation}\label{eqZHY}Z= \frac{1}{2}H + i Y^{-1}.\end{equation} We denote the universal covering of $X$ by $\pi:\widetilde{X}\rightarrow X$. The group of deck transformations of $X$, denoted by $\mathrm{Deck} (\widetilde{X} / X)$, consists of the homeomorphisms $\mathcal{T}: \widetilde{X} \rightarrow \widetilde{X}$ such that $\pi \circ \mathcal{T} = \pi$. It is well-known that the group of deck transformations of the universal covering is isomorphic to the fundamental group $\pi_1(X)$.
The analogue of the kernel $\frac{1}{-i(z-\overline{w})}$ is given by $\frac{K_{\zeta}(u,\tauBa{v})}{-i}$, where \[ K_{\zeta}(u,v) \overset{\text{def} } {=} \frac{ \vartheta [{\zeta}](v-u)} { \vartheta [ \zeta ](0)E(v,u)}. \] The analogue of the kernel $\frac{1-s(z)\overline{s(w)}}{-i(z-\overline{w})}$ is now given by the expression \begin{equation*} K_{\tilde{\zeta},s}(u,v)=\frac{\vartheta [\tilde{\zeta} ](\tauBa{v}-u)} {i\vartheta [\tilde{\zeta} ](0)E(u,\tauBa{v})}- s(u) \frac{ \vartheta [{\zeta}](\tauBa{v}-u)} {i\vartheta [ \zeta ](0)E(u,\tauBa{v})} \overline{s(v)} , \end{equation*} where $u$ and $v$ are points on $X$ (see \cite{vinnikov4} and \cite{vinnikov5}). Furthermore, $\zeta$ and $\tilde{\zeta}$ are points on the Jacobian $J(X)$ of $X$ (in fact $\zeta$ and $\tilde{\zeta}$ belong to the real tori $T_\nu$, see \cite{MR1634421}) such that $\vartheta(\zeta)$ and $\vartheta(\tilde{\zeta})$ are nonzero, and: \begin{enumerate} \item $\vartheta [\zeta ]$ denotes the theta function of $X$ with characteristic $\left[ \begin{array}{c} a \\ b \end{array} \right]$, where $\zeta=b+Za$ (with $a$ and $b$ in ${\mathbb R}^g$). \item $E(u,v)$ is the prime form on $X$; for more details see \cite{fay1,mumford2}. \item For fixed $v$, the map $u\mapsto K_{\tilde{\zeta},s}(u,v)$ is a multiplicative half order differential (with multipliers corresponding to $\tilde{\zeta}$). \item $s$ is a map of line bundles on $X$ with multipliers corresponding to $\tilde{\zeta}-\zeta$ and satisfying $s(u)s(\tauBa{u})^*=1$.
\end{enumerate} The analogue of the operators \eqref{liberte'} is now given by \begin{equation*} R_\alpha^{y}f(u)= \frac{f(u)}{y(u)-\alpha}-\sum_{j=1}^n \frac{1}{ d y(u^{(j)})} \frac{\vartheta[\zeta](u^{(j)}-u)}{\vartheta[\zeta](0) E(u^{(j)},u)} f(u^{(j)}), \end{equation*} where $y$ is a real meromorphic function of degree $n$, $\alpha\in\mathbb C$ is such that there are $n$ distinct points $u^{(j)}$ in $X$ with $y(u^{(j)})= \alpha$, and $f$ is a section of $L_{\zeta}\otimes\Delta$ analytic at the points $u^{(j)}$. Furthermore ({\cite[Lemma 4.3]{av3}}), the Cauchy kernels are eigenvectors of $R^y_\alpha$ with eigenvalues $\frac{1}{\overline{y(w)}-\alpha}$. We conclude with the definition of the model operator $M^{y}$ \cite[Equation 3-3]{MR1634421}, satisfying $(M^y - \alpha I ) ^{-1} = R_\alpha ^y$ for $\alpha$ large enough. It is defined on sections of the line bundle $L_{\zeta}\otimes \Delta$ analytic in a neighborhood of the poles of $y$, and is explicitly given by \begin{equation} \label{m_y} M^{y}f(u) = y(u)f(u) + \sum_{m=1}^{n}{c_m f(p^{(m)}) \frac {\vartheta[\zeta](p^{(m)}-u)} {\vartheta[\zeta] (0)E(p^{(m)},u)}}, \end{equation} where $y(u)$ is a meromorphic function on $X$ with $n$ distinct simple poles $p^{(1)},...,p^{(n)}$. \section{Herglotz theorem for compact real Riemann surfaces} \label{secHerg} We first develop the analogue of Herglotz's formula for analytic functions with a positive real part in ${X}_+$ instead of $\mathbb C_+$. We consider the case of multi-valued functions, but with purely imaginary periods, i.e.\ multi-valued functions that satisfy \begin{equation*} \varphi(\mathcal{T}(\widetilde{p}))=\varphi(\widetilde{p})+\chi(\mathcal{T}). \end{equation*} Here $\mathcal{T}$ is an element of the group of deck transformations of the universal covering of ${X}$ and $$\chi:\,\,\pi_1(X)\rightarrow i{\mathbb R}$$ is a homomorphism of groups.
We call such a mapping an {\it additive function}. Although in general it is not uniquely defined, the real part of $\varphi(p)$ is well-defined. The involution $\tau$ extends to the universal covering of $X$. In particular, for $\widetilde{x} \in \widetilde{X}$, a pre-image under $\pi$ of an element of $X_\mathbb R$, there exists $\mathcal{T}_{\widetilde{x}} \in \mathrm{Deck}(\widetilde{X} / X)$ such that $\tauBa{\widetilde{x}} = \mathcal{T} _{\widetilde{x}}(\widetilde{x})$. Hence, since $\mathrm{Deck}(\widetilde{X} / X)$ is isomorphic to $\pi_1(X)$, we write $\mathcal{T}_{\widetilde{x}}$ in the form $\mathcal{T}_{\widetilde{x}} = \sum_{j=1}^{g}{m_j A_j + n_j B_j}$, and we extensively use the notation \begin{align*} n( \cdot ) : & \ \widetilde{X} \longrightarrow \mathbb Z^g \\ & \ \widetilde{x} \longmapsto (n_1 \cdots n_g)^{t}. \end{align*} We note that $n(\widetilde{x}) = 0$ when $\widetilde{x} \in \widetilde{X}_0$, and $n(\widetilde{x}) = e_{g+1-k+j}$ whenever $\widetilde{x} \in \widetilde{X}_j$ for $j=1,...,k-1$ (where $e_1,...,e_g$ is the canonical basis of $\mathbb R ^g$). \begin{theorem} \label{harmonicIntRep} Let $X$ be a compact real Riemann surface and let $\psi(p)$ be a positive harmonic function defined on ${X}\setminus {X}_{\mathbb R}$. Then for every $p \in {X}\setminus {X}_{\mathbb R}$ there exists a positive measure $d \eta(p,x)$ on $X_{\mathbb R}$ such that \begin{align} \label{la-guerre-commence1} \psi(p) = & \int_{X_{\mathbb R}} \psi(x) d \eta(p,x). \end{align} \end{theorem} We start by presenting a preliminary lemma, revealing a useful property of the prime form. \begin{lem} \label{primeFormA} Let $x$ be an element of $X_j$.
Then the prime form satisfies the following relation: \begin{align} \label{primeFormEqA} \overline{\frac{\partial}{\partial x}\ln E(\tauBa{p},x)} & = \frac{\partial}{\partial x}\ln E(p,x)-2\pi i \omega_{g+1-k+j}(x) \\ & = \frac{\partial}{\partial x}\ln E(p,x)-2\pi i \Big[ \omega_1(x) \cdots \omega_g(x) \Big] n(x). \nonumber \end{align} \end{lem} \begin{pf} Let $x\in{ X}_j$, where $j=1,2,\ldots , k-1$. Then, $\tau$ is lifted to the universal covering as follows: \[ \widetilde{x} - \tauBa{\widetilde{x}} = \sum_{\ell=1}^g m_\ell A_\ell + n_\ell B_\ell, \] where $n_\ell = \delta (g+1-k+j - \ell )$ and where $\delta$ stands for the Kronecker delta. We also recall that the prime form (see \cite[Lemma 2.3]{av3}) satisfies \begin{align} \nonumber E(\widetilde{p},\widetilde{u}_1) = & E(\widetilde{p},\widetilde{u}_2) \exp \left( {-i \pi n^t \Gamma n + 2 \pi i (\widetilde{\mu}(\widetilde{p})-\widetilde{\mu}(\widetilde{u}_2))^t n} \right) \times \\&\times \exp \left( 2 \pi i (\beta^t_0 n - \alpha ^t _0 m ) \right), \label{eqPrimeFormConj} \end{align} where $\widetilde{\mu}$ is the lifting of the Abel-Jacobi mapping to the universal covering and where $\zeta = \alpha_0 + \beta_0 \Gamma$. Thus, choosing $\tauBa{\widetilde{u}_2} = \widetilde{u}_1 = \widetilde{x}$, the relation in \eqref{eqPrimeFormConj} becomes \begin{align} \label{primeFormProp} \ln \, E(\widetilde{p},\tauBa{\widetilde{x}}) = & \ln \, E(\widetilde{p},\widetilde{x}) -i \pi \Gamma_{jj} + 2 \pi i (\widetilde{\mu}(\widetilde{p})-\widetilde{\mu}(\widetilde{x}))_j + \\ &+ 2 \pi i (\beta^t_0 n - \alpha ^t _0 m ) . \nonumber \end{align} We note that, by using \cite[Lemma 2.4]{av3}, the prime form satisfies the identity $\overline{E(\tauBa{p},x)} = E(p,\tauBa{x})$.
It remains to differentiate \eqref{primeFormProp} with respect to $x$, and \eqref{primeFormEqA} follows. \end{pf} \begin{pf}[of Theorem \ref{harmonicIntRep}] We show that the expression \begin{align} \label{la-guerre-commence2} G(p,x) \overset{\text{def} } {=} & \pi \Big[ \omega_1(x) \cdots \omega_g(x) \Big] \cdot \left( \frac{n(x)}{2} + i (Yp) \right) - \\ & \nonumber -\frac{i}{2} \frac{\partial}{\partial x} \ln E(p,x) , \end{align} is the differential with respect to $x$ of the Green function, where $x\in X_\mathbb R$, $p \in X\setminus X_\mathbb R$, and where $\omega(x)$ is a section of the canonical bundle (denoted by $K_X$; for an atlas $(V_j,z_j)$ defining the analytic structure of $X$, it is given by the cocycles $dz_j/dz_i$). Here and in the following pages, with an abuse of notation, $Yp$ denotes $Y \widetilde{\mu}(\widetilde{p})$. The existence of a Green function on a Riemann surface is a well-known result, see for instance \cite[Chapter V]{bergman} or \cite[Chapter X]{Tsuji}. Therefore, there exists a (unique) Green function, denoted by $g(p,x)$, whose differential $G(p,x)$ has singularities of the form $\frac{1}{x-p}$ along the diagonal. Hence, it is enough to show that the expression in \eqref{la-guerre-commence2} and the Green function satisfy the following properties: \begin{enumerate} \item {\it The function $g(x,p)$ contains a logarithmic singularity, while $G(x,p)$ has a simple pole at $p=x$.} It follows immediately, by using the prime form properties and moving to local coordinates, that the following relation holds (see for instance \cite[Section II]{fay1}): \[ \frac{i}{2} \frac{\partial}{\partial x} \ln E(p,x) = \frac{i}{2} \frac{\partial}{\partial v} \ln (t(u)-t(v)) = \frac{i}{2(t(u)-t(v))} . \] \item {\it The real part of the differential $G(x,p)$ is single-valued:} Let $p$ and $p_1$ be two elements of $\widetilde{X}$ which are the pre-images of the same element in $X_j$, i.e.
$\pi(p)=\pi(p_1) \in X_j $. It follows, using \eqref{eqZHY}, that \begin{equation} \label{eqPp1} \widetilde{\mu}(p) - \widetilde{\mu}(p_1) = n + \Gamma m = n + \left(\frac{1}{2} H + i Y^{-1}\right) m, \end{equation} for some $n,m \in \mathbb R^g$, and thus, using again \cite[Lemma 2.3]{av3}, we have, modulo $2\pi i$: \begin{align*} \ln\left( E(p_1,x) \right) = & \ln \bigg( E(p,x) \exp \big( 2 \pi i (\mu(x) - \mu(p))^t m \big) \times \\ & \exp \left( 2 \pi i (\beta_0^t m - \alpha_0^t n) -i \pi m^t \Gamma m \right) \bigg) \\ = & \ln \, E(p,x) - \frac{i}{2} \pi m^t H m - \pi m^t Y^{-1} m + \\ & 2 \pi i (\widetilde{\mu}(x) - \widetilde{\mu}(p))^t m + 2 \pi i (\beta_0^t m - \alpha_0^t n). \end{align*} Then, the real part of a multiplier of $\ln\left( E(p,x) \right)$ is: \begin{align} \label{eqReLnMult} \mathfrak{Re} \, \big( \ln E(p,x) - &\ln E(p_1,x) \big) = \\ & \nonumber 2 \pi \left( \frac{1}{2} m^t Y^{-1} + \mathfrak{Im} \, \{ \mu(p) - \mu(x)\} ^t \right) m. \end{align} Clearly, using \eqref{eqPp1}, we have that $$ m = Y \, \mathfrak{Im} \, \{ \widetilde{\mu}(p) - \widetilde{\mu}(p_1)\},$$ and hence the derivative with respect to $x$ of \eqref{eqReLnMult} is \begin{align*} \frac{\partial}{\partial x} \mathfrak{Re} \, (\ln \, \left( E(p,x)\right) & - \ln \, \left( E(p_1,x) \right) ) \\ = & - 2 \pi \, \mathfrak{Im} \, \Big[\omega_1(x) \cdots \omega_g(x) \Big] m \\ = & - 2 \pi \, \mathfrak{Im} \, \Big[\omega_1(x) \cdots \omega_g(x) \Big] Y \widetilde{\mu} (p - p_1) . \end{align*} Hence, $G(x,p)$ has the appropriate singularity and a single-valued real part if it is of the following form: \[ \frac{\partial}{\partial x} \ln\left( E(p,x) \right)+ 2 \pi \Big[\omega_1(x) \cdots \omega_g(x) \Big] Y \widetilde{\mu} (p) + h(x), \] for some $h(x)$ with purely imaginary periods.
\item {\it The real part of the complex Green function vanishes on the boundary components:} Let $x \in X_j$ for some $0 \leq j \leq k-1$ and let $p \in X_l$ for some $0 \leq l \leq k-1$ such that $p \neq x$. Then, we integrate $G(x,p)$ with respect to $x$ and note that the integral of the vector $\Big[ \omega_1(x) \cdots \omega_g(x) \Big]$ is just the Abel-Jacobi mapping at $x$. Then, the Green function is: \[ g(x,p)= \left( \frac{n(x)}{2} + i (Yp) \right) \mu(x) - \frac{i}{2} \ln E(p,x). \] We use the equality (see \cite[Lemma 2.4]{av3}) $$ E(x,p) = \overline{E(\tauBa{x},\tauBa{p})} $$ to conclude that whenever $x$ and $p$ are both real, the relation \begin{align*} \overline{ \frac{\partial}{\partial x} \ln{(E(x,p))}} = & \frac{\partial}{\partial x} \ln{(E(\tauBa{x},\tauBa{p}))} \\ = & \frac{\partial}{\partial x} \ln{(E(x,p))} + 2\pi i \omega _{g-k+j-1}(x) \end{align*} holds. Hence, the real part of $g(x,p)$, using Equation \ref{primeFormEqA}, is equal to \begin{align*} \mathfrak{Re} \, \{g(x,p)\} = & i (Yp) \mu(x) - \frac{i}{2} \ln E(p,x) + \overline{i (Yp) \mu(x) } - \\ & \overline{\frac{i}{2} \ln E(p,x)} + \mathfrak{Re} \, \{h(x)\} \\ = & \mathfrak{Re} \, \frac{i}{2} \left( \ln E(\tauBa{p},\tauBa{x}) - \ln E(p,x) \right) + \mathfrak{Re} \, \{h(x)\} \\ = & \frac{\pi }{2} \omega(x) n(x) + \mathfrak{Re} \, \{h(x)\} , \end{align*} and therefore, setting $\mathfrak{Re} \, h(x) = - \frac{\pi}{2} \omega(x) n(x)$, the Green function vanishes on the real points. \end{enumerate} Thus, $G(x,p)$ is the differential of the complex Green function and so, for any $p \in X \setminus X_\mathbb R$ and for sufficiently small $\varepsilon$, it defines the solution to the Dirichlet problem, i.e. \[ \psi(p) = \int_{X_\mathbb R(\varepsilon)} \psi(x) G(x,p). \] Here, the integration contour $X_\mathbb R(\varepsilon)$ is a collection of smooth simple closed curves, located within distance $\varepsilon$ of $X_\mathbb R$, approximating $X_\mathbb R$.
We then consider a sequence $(\varepsilon_n)_{n \in \mathbb N}$ such that $\varepsilon_n \rightarrow 0$ as $n \rightarrow \infty$. Then, by the Banach-Alaoglu Theorem (see for instance \cite[p. 223]{MR1681462}), there exists a subsequence $(\varepsilon_{n_k})_{k \in \mathbb N}$ such that the limit $$\lim_{k \rightarrow \infty} \int_{X_\mathbb R(\varepsilon_{n_k})} \psi(x) G(x,p)$$ exists. Thus, the weak-star limit defines a positive measure on $X_\mathbb R$ satisfying \eqref{la-guerre-commence1}. \end{pf} Using the previous result, we may state the Herglotz theorem for real compact Riemann surfaces. \begin{Tm} \label{caraTmRS} Let $X$ be a compact real Riemann surface of dividing type. Then an additive function $\varphi$ is analytic in ${X}\setminus {X}_{\mathbb R}$, has positive real part there, and satisfies \begin{equation*} \varphi(p)+\overline{\varphi(\tauBa{p})}=0,\quad p\in{ X}\setminus { X}_{\mathbb R}, \end{equation*} if and only if \begin{align} \nonumber \varphi(p) = & \frac{\pi}{2} \int_{X_{\mathbb R}} \Big[\omega_1(x) \cdots \omega_g(x) \Big] n(\widetilde{x}) \, \frac{d \eta(x)}{\omega(x)} - \frac{i}{2}\int_{X_{\mathbb R}} \frac{\partial}{\partial x} \ln E(p,x) \, \frac{d \eta(x)}{\omega(x)} + \\ \label{la-guerre-commence} & \pi i\int_{X_{\mathbb R}} \Big[ \omega_1(x) \cdots \omega_g(x) \Big] (Yp) \, \frac{d \eta(x)}{\omega(x)} +iM. \end{align} Here, $M$ is a real number, $d \eta$ is a positive finite measure on $X _{\mathbb R}$, and $\omega(x)$ is a section of the canonical line bundle which is positive with respect to the measure $d \eta$.
\end{Tm} \begin{pf} We start with the ``if'' part by computing $\overline{\varphi(\tauBa{p})}$: \begin{eqnarray*} \overline{\varphi(\tauBa{p})} &=& \frac{\pi}{2} \int_{{X}_{\mathbb R}} [\omega_1(x) \cdots \omega_g(x) ] n(\widetilde{x}) \frac{d \eta(x)}{\omega(x)} - \\ & & \pi i \int_{{X}_{\mathbb R}} [\omega_1(x) \cdots \omega_g(x)] (\overline{Y p})~ \frac{d \eta(x)}{\omega(x)} + \\ & & \frac{i}{2} \int_{{X}_{\mathbb R}} \frac{\partial}{\partial x} \ln \overline{E(\tauBa{p},x)} \frac{d \eta(x)}{\omega(x)} -iM . \end{eqnarray*} Thus, using Lemma \ref{primeFormA} and since $\omega$ is real (i.e. $\overline{\tauBa{ \omega_i}} = \omega_i$), we have: \begin{align} \nonumber \overline{\varphi(\tauBa{p})} =& \frac{\pi}{2} \int_{{X}_{\mathbb R}} [\omega_1(x) \, \cdots \, \omega_g(x)] n(\widetilde{x}) \frac{d\eta(x)}{\omega(x)} - \\ \nonumber & \pi i \int_{{ X}_{\mathbb R}} [\omega_1(x) \, \cdots \, \omega_g(x)] (Yp)~ \frac{d \eta(x)}{\omega(x)} + \\ \label{la-guerre-commence11} & \frac{i}{2} \int_{{X}_{\mathbb R}} \frac{\partial}{\partial x} \ln E(p,x) \frac{d \eta(x)}{\omega(x)} - \pi \sum_{j=1}^{k-1}\int_{{X}_j}\omega_j(x) \frac{d \eta(x)}{{\omega(x)}} -iM. \end{align} Summing up \eqref{la-guerre-commence} and \eqref{la-guerre-commence11} leads to \begin{align*} \varphi(p)+\overline{\varphi(\tauBa{p})} = & \pi \int_{{X}_{\mathbb R}} [ \omega_1(x) \, \cdots \, \omega_g(x)] n (\widetilde{x}) \frac{d \eta(x)}{\omega(x)} - \\ & \pi \sum_{j=1}^{k-1}\int_{{X}_j} \omega_j(x) \frac{d \eta(x)}{\omega(x)} = 0. \end{align*} For the ``only if'' statement: the real part of $\varphi(p)$ is positive, harmonic, and single-valued in $X \setminus X_{\mathbb R}$.
Thus, by Theorem \ref{harmonicIntRep}, $\mathfrak{Re} \, \varphi(p)$ has an integral representation as given in \eqref{la-guerre-commence1} for some positive measure $d \eta_{\varphi}$ on $X_\mathbb R$. Finally, it is well known that two analytic functions defined on a connected domain with the same real part differ only by an imaginary constant. Hence we may summarize that \[ \varphi(p) = \int_{X_{\mathbb R}} G(p,x) d \eta_{\varphi}(x) + iM, \] for some $M \in \mathbb R$. \end{pf} In the case where $X = \mathbb P^1$, coupled with the anti-holomorphic involution $z \rightarrow \overline{z}$, we set $\omega = \frac{d \, t}{t^2 + 1}$, and then \eqref{27-octobre-2000} can be extracted from \eqref{la-guerre-commence} by setting \begin{align*} d \nu (t) & = \frac{1}{2} d \eta (t) (t^2 +1), \qquad B = \frac{1}{2} \eta (\infty), \\ A & = M - \frac{1}{2} \int_{I} t \, d \eta (t) + \frac{1}{2} \int_{\mathbb R \backslash I} \frac{d \eta (t)}{t}, \end{align*} where $I$ is any interval of $\mathbb R$ containing zero. Similarly, in the case of the torus, one may deduce H. Villat's formula, see \cite{MR1629812}. (Akhiezer, in \cite[Section 56]{MR1054205}, presented a different but equivalent formula.) \section{de Branges \texorpdfstring{$\mathcal{L}(\varphi)$}{ $\mathcal{L}(\varphi)$ } spaces in the nonzero genus case} \label{secdBLphi} In this section, we further study the reproducing kernel Hilbert space associated with an additive function defined on a compact real Riemann surface. To do so, we utilize the Herglotz integral representation proved in the previous section in order to examine $\mathcal{L}(\varphi)$ spaces and their properties. First, we introduce the analogue of formula \eqref{4-juin-2000}. \begin{Tm} \label{thm41} Let $X$ be a compact real Riemann surface of dividing type, let $\zeta \in T_0$, and let $\varphi$ be analytic with positive real part in $X_+$.
Then the identity \begin{align} \nonumber \int_{{X}_{\mathbb R}} & \frac{\vartheta[\zeta](p-x)}{\vartheta[\zeta](0)E(x,p)} \frac{\vartheta[\zeta](x-{\tauBa{q}})}{\vartheta[\zeta](0)E(x,{\tauBa{q}})} \frac{d \eta(x)}{\omega(x)} = \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)} \times \\ \nonumber & \times\bigg[ \left({\varphi}(p)+\overline{{\varphi}(\tauBa{q})}\right) + \sum_{j=0}^{k-1} a_{jj}\frac{\partial}{\partial z_j} \ln\frac{\vartheta(\zeta)}{\vartheta(\zeta+p-{\tauBa{q}})} - \\ & -2\pi i \row_{i=0,\ldots,g} \left( \sum_{j=0}^{k-1} a_{ji} \right) Y(p-{\tauBa{q}}) \bigg] \label{jfk-le-26-octobre-2000} \end{align} \label{la-fin-du-sionisme?} holds, where \begin{equation} \label{a_j} a_{ji} \overset{\text{def} } {=} \int_{{X}_{j}} \frac{\omega_i(x)}{\omega(x)}d \eta(x),\quad j=0,\ldots,k-1, \quad i=1,\ldots,g. \end{equation} \end{Tm} Before proving Theorem \ref{thm41}, we make a number of remarks. The left hand side of \eqref{jfk-le-26-octobre-2000} may be written as $$ \innerProductTri{K_{\zeta}(p,x)}{K_{\zeta}(x,\tauBa{q})} {{\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right) } $$ and hence it is precisely the counterpart of the right hand side of \eqref{4-juin-2000}. Furthermore, whenever we additionally assume zero cycles along the boundary components, that is, \begin{equation} \label{eqCycles} \int_{{ X}_j}\frac{\omega_i(x)}{\omega(x)}d \eta(x)=0, \quad j=0,\ldots,k-1, \quad i=1,\ldots,g, \end{equation} the right hand side of \eqref{jfk-le-26-octobre-2000} is the counterpart of the left hand side of \eqref{4-juin-2000}. Hence, we may summarize and present the following result.
\begin{corollary} \label{kernelAj0} Let $X$ be a compact real Riemann surface of dividing type, let $\zeta \in T_0$, and let $\varphi$ be an additive function on $X$ such that \eqref{eqCycles} holds. Then the identity \begin{align} \left({\varphi}(p)+\overline{{\varphi}(\tauBa{q})}\right) & \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)} \nonumber = \\ & \innerProductTri{K_{\zeta}(p,u)}{K_{\zeta}(u,\tauBa{q})} {{\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right) } , \label{jfk-le-26-octobre-2000_SV} \end{align} holds. \end{corollary} \begin{pf}[of Theorem \ref{thm41}] Using the Herglotz-type formula \eqref{la-guerre-commence}, we may write \begin{align} \nonumber (\varphi(p) + &\overline{\varphi(\tauBa{q})}) \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)} = \\ =& 2\pi i\left(\int_{X_{\mathbb R}} [ \omega_1(x) \cdots \omega_g(x) ] Y(p-\tauBa{q}) \frac{d\eta(x)}{\omega(x)}\right) \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)} \nonumber - \\ \label{Thm43A} & \frac{i}{2} \int_{X_{\mathbb R}} \left(\frac{\partial}{\partial x}\ln E(p,x)-\frac{\partial}{\partial x}\ln E({\tauBa{q}},x)\right) \frac{d\eta(x)}{\omega(x)} \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)}. \end{align} Furthermore, by \cite[Proposition 2.10, p. 25]{fay1}, we have \begin{align} \frac{\partial}{\partial x}\ln\frac{E(p,x)}{E({\tauBa{q}},x)} + & \sum_{j=1}^g\left(\frac{\partial}{\partial z_j}\ln \vartheta(\zeta+p-{\tauBa{q}}) -\frac{\partial}{\partial z_j}\ln \vartheta(\zeta)\right)\omega_j(x) = \nonumber \\ \label{eq123} & \frac{E({\tauBa{q}}, p)}{E(x,{\tauBa{q}})E(x,p)} \frac {\vartheta[\zeta](x-{\tauBa{q}})\vartheta[\zeta](p-x)} {\vartheta[\zeta](p-{\tauBa{q}})\vartheta[\zeta](0)}.
\end{align} Thus, multiplying both sides of \eqref{eq123} by $\frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)}$ leads to \begin{align} \nonumber & \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)} \frac{\partial}{\partial x}\ln\frac{E(p,x)}{E({\tauBa{q}},x)} = \frac{\vartheta[\zeta](x-{\tauBa{q}})}{ \vartheta[\zeta](0)E(x,{\tauBa{q}})} \frac{\vartheta[\zeta](p-x)}{ \vartheta[\zeta](0)E(x,p)}- \\ & \sum_{j=1}^g \frac{\partial}{\partial z_j} \left( \ln \vartheta(\zeta+p-{\tauBa{q}}) - \ln \vartheta(\zeta)\right)\omega_j(x) \frac{\vartheta[\zeta](p-{\tauBa{q}})}{ \vartheta[\zeta](0)E({\tauBa{q}},p)}. \label{Thm43B} \end{align} Finally, by substituting \eqref{Thm43B} into \eqref{Thm43A}, we conclude that the identity \begin{align*} (\varphi(p)+\overline{\varphi(\tauBa{q})}) & \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)} = \frac{\vartheta[\zeta](p-{\tauBa{q}})}{ \vartheta[\zeta](0)E({\tauBa{q}},p)} \times \\ & \bigg[ \frac{i}{2} \sum_{j=1}^{g} \int_{X_{j}} \frac{\omega_j(x) d \eta(x)}{\omega(x)} \frac{\partial}{\partial z_j} \left( \ln \vartheta(\zeta+p-{\tauBa{q}}) -\ln \vartheta(\zeta) \right) + \\ & 2\pi i\int_{X_{\mathbb R}} \frac{[\omega_1(x)\,\cdots \, \omega_g(x)]}{\omega(x)}Y(p-{\tauBa{q}})d \eta(x) \bigg] - \\ & \frac{i}{2} \int_{{X}_{\mathbb R}} \frac{\vartheta[\zeta](p-x)}{\vartheta[\zeta](0)E(x,p)} \frac{\vartheta[\zeta](x-{\tauBa{q}})}{ \vartheta[\zeta](0)E(x,{\tauBa{q}})}\frac{d \eta(x)}{\omega(x)} \end{align*} follows. Setting $a_{ji}$ as in \eqref{a_j} completes the proof. \end{pf} From this point onward we assume that \eqref{eqCycles} holds. \begin{definition} Let $\varphi(x)$ be analytic in ${X}\setminus {X}_{\mathbb R}$ with positive real part in ${X}\setminus {X}_{\mathbb R}$.
The reproducing kernel Hilbert space of sections of the line bundle $L_\zeta \otimes \Delta$ with the reproducing kernel \[ K(p,q) = (\varphi(p) + \overline{\varphi(\tauBa{q})}) \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)} \] is denoted by $\mathcal{L}(\varphi)$. \end{definition} The analogue of the first part of Theorem \ref{Thm21} is given below in Theorem \ref{phiIntPresentation}. However, we first present a preliminary lemma that will be required in this section (see \cite[Ex. 6.3.2]{capb2} for the unit-disk case). \begin{lem} \label{denseL2} Let $X$ be a compact real Riemann surface of dividing type. Then the linear span of the Cauchy kernels {\allowbreak $\frac{\vartheta[\zeta](x-u)}{i \vartheta[\zeta](0)E(u,x)}$ }, where $u$ varies in $ X \setminus X_{\mathbb R}$, is dense in {\allowbreak ${\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right)$}. \end{lem} \begin{pf} Let us assume that a section $f$ of $L_{\zeta} \otimes \Delta$ satisfies \begin{equation} \label{eqCkDense} \int_{{X}_{\mathbb R}}K_{\zeta}(u, x)f(x)\frac{d \eta(x)}{\omega(x)} = 0, \end{equation} for all $u\in X \setminus X_\mathbb R$. We recall that by \cite{av2} there exists an isometric isomorphism from ${\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right)$ to $ ({\bf L}^2(\mathbb T))^n$, and therefore there exists an orthogonal decomposition, see \cite[Equation 4.14]{av2}, \begin{align*} {\bf L}^2\bigg( & X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \bigg) \\ = & {\bf H}^2\left( X_{+} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right) \oplus {\bf H}^2\left( X_{-} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right) .
\end{align*} Furthermore, for $u \in X_+$, the left hand side of \eqref{eqCkDense} is just the projection of $f$ from ${\bf L}^2 \left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right)$ onto ${\bf H}^2 \left( X_{+} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right)$, evaluated at $u$. Thus $P_+(f)(u)=0$ and, similarly, $P_-(f)(u)=0$; we may conclude that $f=0$, and the claim follows. \end{pf} \begin{Tm} \label{phiIntPresentation} The elements of $\mathcal{L}(\varphi)$ are of the form \begin{equation} \label{l-phi} F(u) = \int_{{X}_{\mathbb R}}K_{\zeta}(u, x)f(x)\frac{d \eta(x)}{\omega(x)}, \end{equation} where $f(x)$ is a section of $L_\zeta\otimes\Delta$ which is square summable with respect to $\frac{d \eta (x)}{\omega(x)}$. \end{Tm} \begin{pf} Equation \ref{l-phi} follows by Corollary \ref{kernelAj0}. Let $N\in{\mathbb N}$; then, for any choice of $w_1,\ldots,w_N \in{X}\setminus X_{\mathbb R}$ and $c_1,\ldots,c_N \in {\mathbb C}$, the identity \begin{align} \label{lPhinorm} F(u) \overset{\text{def} } {=} & \sum_{j=1}^{N} c_j (\varphi(u) + \overline{\varphi(w_j)}) K_{\zeta}( u, w_j) \\ = & \int_{X_{\mathbb R}} K_{\zeta}(u, x) f(x) \frac{d \eta (x)}{\omega(x)} \nonumber \end{align} holds, where \[ f(u) = \sum_{j=1}^{N} c_j K_{\zeta}( w_j, u) \in {{\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right)} . \] Due to Lemma \ref{denseL2}, the linear span of the Cauchy kernels is dense in ${\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(x)}{\omega(x)} \right)$, and hence \eqref{l-phi} follows.
\end{pf} \begin{Tm} The norm of an element $F$ in $\mathcal{L}(\varphi)$ is given by \begin{equation*} \normTwo{ F }{\mathcal{L}(\varphi)} \overset{\text{def} } {=} \normTwo{f}{{\bf L}^2 \left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(t)}{\omega(t)} \right)}. \end{equation*} \end{Tm} \begin{pf} Since, by Lemma \ref{denseL2}, the linear span of the Cauchy kernels is dense in ${\bf L}^2(d \eta)$, it is enough to check the equality of the norms for a linear combination of the Cauchy kernels. The norm of an element in the reproducing kernel Hilbert space $\mathcal{L}(\varphi)$ is given by \[ \norm{F}^2_{\mathcal{L}(\varphi)} = \sum_{\ell,j=1}^{N} \overline{c_\ell} (\varphi(u_\ell) + \overline{\varphi(u_j)}) K_{\zeta}( u_\ell, u_j)c_j. \] Then, by \eqref{lPhinorm}, \[ \norm{F}^2_{\mathcal{L}(\varphi)} = \sum_{\ell,j=1}^{N} \overline{c_j} \innerProductTri {K_{\zeta}( u_\ell, w )} {K_{\zeta}( w, u_j)} {{\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(t)}{\omega(t)} \right)} c_\ell, \] which is exactly the norm of $f(w)$ in ${\bf L}^2\left( X_{\mathbb R} , L_{\zeta} \otimes \Delta , \frac{d \eta(t)}{\omega(t)} \right)$. \end{pf} Whenever $y$ is a real meromorphic function, we may state an additional result; it shows that $M^y$ is simply the operator of multiplication by $y$ in ${\bf L}^2(d \eta)$. \begin{Tm} Let $y$ be a meromorphic function with simple poles such that the poles of $y$ do not belong to the support of the measure $d \eta$.
Then the multiplication model operator $M^{y}$, defined on $\mathcal{L}(\varphi)$, satisfies the following properties: \begin{enumerate} \item $M^{y}$ is given explicitly by \begin{equation} \label{MyLPhi} \left(M^y F \right) (u) = \int_{{X}_{\mathbb R}} { K_{\zeta}( u, x )}f(x)y(x)\frac{d \eta(x)}{\omega(x)}, \end{equation} where $f$ is a section of $L_\zeta\otimes\Delta$ which is square summable with respect to $d \eta$. \item $\mathcal{L}(\varphi)$ is invariant under $M^{y}$. \item $M^{y}$ is bounded. \end{enumerate} \end{Tm} \begin{pf} Considering the model operator \eqref{m_y} together with Theorem \ref{phiIntPresentation}, we conclude the following: \begin{align*} (M^y F)(u) = & y(u)F(u) + \sum_{m=1}^{n}{c_m F(p^{(m)}) K_{\zeta}( u, p^{(m)})} \\ = & y(u)F(u) + \sum_{m=1}^{n}{c_m \int_{X_{\mathbb R}} f(x) K_{\zeta} (p^{(m)},x) \frac{d \eta(x)}{\omega(x)} K_{\zeta} ( u, p^{(m)})} \\ = & y(u)F(u) + \int_{X_{\mathbb R}} f(x) \frac{d \eta(x)}{\omega(x)} \sum_{m=1}^{n}{c_m K_{\zeta}( p^{(m)},x) K_{\zeta} ( u, p^{(m)})}, \end{align*} where $p^{(1)},\ldots,p^{(n)}$ are the distinct poles of $y$. Using the collection formula \cite[Proposition 3.1]{av2} and using again Theorem \ref{phiIntPresentation}, we have: \begin{align*} (M^y F)(u) = & y(u)F(u) + \int_{X_{\mathbb R}} f(x) \frac{d \eta(x)}{\omega(x)} K_{\zeta} (u,x) (y(x)-y(u)) \\ = & \int_{{X}_{\mathbb R}} { K_{\zeta}( u, x )}f(x)y(x)\frac{d \eta(x)}{\omega(x)}. \end{align*} We note that $f$ is a section of $L_\zeta\otimes\Delta$ and remains so after multiplication by the meromorphic function $y$. Furthermore, it is square summable with respect to the measure $d \eta$, since, by assumption, the poles of $y$ lie outside the support of $d \eta$.
\end{pf} \begin{corollary} Let $y$ be a real meromorphic function on $X$ such that the poles of $y$ do not belong to the support of the measure $d \eta$. Then the multiplication model operator $M^{y}$ is selfadjoint. \end{corollary} \begin{pf} The model operator $M^y$ satisfies $\innerProductReg{M^y F}{G} = \innerProductReg{ F}{M^yG}$, as follows from \[ \innerProductReg{M^y F}{G} = \int_{{X}_{\mathbb R}} f(x)y(x) \overline{ g(x)} \frac{d \eta(x)}{\omega(x)} \] and from the assumption that $y$ is a real meromorphic function. \end{pf} Equation \ref{MyLPhi} immediately produces the $\mathcal{L}(\varphi)$ counterpart of \cite[Theorem 4.6]{av3}. \begin{corollary} Let $y_1$ and $y_2$ be two meromorphic functions of degree $n_1$ and $n_2$, respectively. Furthermore, we assume that the poles of $y_1$ and $y_2$ lie outside the support of $d \eta$ on $X _{\mathbb R}$. Then $M^{y_1}$ and $M^{y_2}$ commute on $\mathcal{L}(\varphi)$, that is, for every $F \in \mathcal{L}(\varphi)$, $M^{y_1} M^{y_2} F = M^{y_2} M^{y_1} F$ holds. \end{corollary} Using the observation that $R^y_{\alpha}$ is just the operator $M^{\frac{1}{y(u)-\alpha}}$, we present the counterpart of \eqref{tatche}, that is, an integral representation of the resolvent operator at $\alpha$. \begin{corollary} $\mathcal{L}(\varphi)$ is invariant under the resolvent operator $R^y_\alpha$, where $\alpha$ is a non-real complex number. Moreover, the resolvent operator has the integral representation \begin{equation} \label{RyInLPhi} (R^y_\alpha F)(u) = \int_{X _ {\mathbb R}}K_{\zeta}(u,x)\frac{f(x)}{y(x)-\alpha} \frac{d \eta(x)}{\omega(x)} , \end{equation} where the poles of $y$ do not belong to the support of $d \eta$. \end{corollary} As another immediate corollary, we note that any pair of resolvent operators commutes.
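Indeed, in the integral model the composition of two resolvents is transparent: applying \eqref{RyInLPhi} twice (a direct computation, under the standing assumption that the poles of $y_1$ and $y_2$ lie outside the support of $d \eta$) gives
\[
(R^{y_1}_{\alpha} R^{y_2}_{\beta} F)(u) = \int_{X_{\mathbb R}} K_{\zeta}(u,x)\, \frac{f(x)}{(y_1(x)-\alpha)(y_2(x)-\beta)}\, \frac{d \eta(x)}{\omega(x)},
\]
an expression which is symmetric in the pairs $(y_1,\alpha)$ and $(y_2,\beta)$.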
\begin{corollary} Let $y_1$ and $y_2$ be two meromorphic functions of degree $n_1$ and $n_2$, respectively. Furthermore, assume that the poles of $y_1$ and $y_2$ lie outside the support of $d \eta$ on $X _{\mathbb R}$, and let $\alpha$ and $\beta$ be two elements of $\mathbb C \setminus \mathbb R$. Then the resolvent operators $R^{y_1}_{\alpha}$ and $R^{y_2}_{\beta}$ commute, i.e.\ for any $F \in \mathcal{L}(\varphi)$ we have $R^{y_1}_{\alpha} R^{y_2}_{\beta} F= R^{y_2}_{\beta} R^{y_1}_{\alpha} F$. \end{corollary} The counterpart of Theorem \ref{finiteDimentionalLphi} is given below. \begin{Tm} Let $\varphi$ be analytic in $X \backslash X_{\mathbb R}$ with positive real part. Then the following are equivalent: \begin{enumerate} \item \label{tfre1} The reproducing kernel space $\mathcal{L}(\varphi)$ is finite dimensional. \item \label{tfre2} $\varphi$ is meromorphic on ${X}$. \item \label{tfre3} ${\bf L}^2(d \eta)$ is finite dimensional. \end{enumerate} \end{Tm} \begin{pf} Since $\mathcal{L}(\varphi)$ is isomorphic to ${\bf L}^2(d \eta)$, $\mathcal{L}(\varphi)$ is finite dimensional if and only if ${\bf L}^2(d \eta)$ is finite dimensional. If ${\bf L}^2(d \eta)$ is finite dimensional, then $d \eta$ has a finite number of atoms and hence $\varphi$ is meromorphic. On the other hand, assume that $\varphi$ is meromorphic on $X$. As in the classical case, the measure in the integral representation of $\varphi$ is obtained as the weak-star limit of $\varphi(p) + \overline{\varphi(p)}$, and the poles of $\varphi$ correspond to the atoms of $d \eta$. \end{pf} \begin{Tm} \label{thm432} Let $f,g \in \mathcal{L}(\varphi)$ and let $\alpha, \beta \in \mathbb C$ have non-zero imaginary part.
Then the following identity holds: \begin{equation} \label{strucIdPhi} \innerProductReg{R^y_\alpha f}{ g} - \innerProductReg{f}{ R^y_\beta g} - (\alpha - \overline{\beta}) \innerProductReg{R^y_\alpha f}{ R^y_\beta g} = 0. \end{equation} \end{Tm} \begin{pf} Using \eqref{RyInLPhi}, the left hand side of \eqref{strucIdPhi} can be written as \begin{align*} \innerProductReg{R^y_\alpha f}{ g} & - \innerProductReg{f}{ R^y_\beta g} - (\alpha - \overline{\beta}) \innerProductReg{R^y_\alpha f}{ R^y_\beta g} = \int_{X _ {\mathbb R}} f(u) g(u) K_{\zeta}(p,u) \times \\ & \left( \frac{1}{y(u)-\alpha} - \frac{1}{y(u)-\overline{\beta}} - \frac{\alpha - \overline{\beta}}{(y(u)-\alpha)(y(u)-\overline{\beta})} \right) \frac{ d \eta(u)}{\omega(u)}. \end{align*} One may note that \[ \frac{1}{y(u)-\alpha} - \frac{1}{y(u)-\overline{\beta}} - \frac{\alpha - \overline{\beta}}{(y(u)-\alpha)(y(u)-\overline{\beta})} \] is identically zero, since over the common denominator $(y(u)-\alpha)(y(u)-\overline{\beta})$ the numerator is $(y(u)-\overline{\beta}) - (y(u)-\alpha) - (\alpha - \overline{\beta}) = 0$; hence the result follows. \end{pf} In fact, Theorem \ref{thm432} is an if and only if relation, and we refer the reader to \cite{AVP3} for the related de Branges structure theorems. \section{The \texorpdfstring{$\mathcal{L}(\varphi)$}{ $\mathcal{L}(\varphi)$ } spaces in the single-valued case} \label{secPhiSingleVal} Whenever an additive function $\varphi$ is single-valued, the formula \begin{equation*} s(p)= \frac{1-\varphi(p)}{1+\varphi(p)} \end{equation*} makes sense and defines a single-valued function $s(p)$. Then, the reproducing kernel Hilbert space associated with $s(p)$, denoted by ${\mathcal H}(s)$, has the reproducing kernel \[ i(1-s(p)s(q)^*)K_{\zeta}( p,{\tauBa{q}}). \] These spaces were studied in \cite{av3} in the finite dimensional setting and in \cite{AVP1} in the infinite dimensional case. We note that the multiplication operator $ u \mapsto \frac{(1+\varphi(p))}{{\sqrt{2}}} u $ maps, as in the zero genus case, ${\mathcal H}(s)$ onto $\mathcal{L}(\varphi)$ unitarily.
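For later use we record two elementary identities satisfied by the Cayley transform; both follow by direct computation from the definition of $s$ (equivalently, from $\varphi = \frac{1-s}{1+s}$):
\begin{align*}
(1+\varphi(p))\,\frac{1+s(p)}{2} &= \frac{1+\varphi(p)}{2}\left(1+\frac{1-\varphi(p)}{1+\varphi(p)}\right) = 1, \\
\varphi(p)-\varphi(q) &= \frac{1-s(p)}{1+s(p)} - \frac{1-s(q)}{1+s(q)} = \frac{2\,(s(q)-s(p))}{(1+s(p))(1+s(q))}.
\end{align*}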
Hence, we may pair any $u\in{\mathcal H}(s)$ with a function $f\in{\bf L}^2(d \eta)$ through the corresponding $\mathcal{L}(\varphi)$ space, such that \begin{equation*} \frac{1}{\sqrt{2}}(1+\varphi(p))u(p) = \int_{{X}_{\mathbb R}} K_{\zeta}( p,x)f(x)\frac{d \eta(x)}{\omega(x)}. \end{equation*} We denote the mapping from ${\mathcal H}(s)$ onto ${\bf L}^2(d \eta)$ by $$\Lambda: u(p) \longmapsto f(x).$$ We now express the operator $M^y$ using the operator of multiplication by $y$ in ${\bf L}^2(d \eta)$. \begin{Tm} Let $\varphi$ be a single-valued function with positive real part on a compact Riemann surface $X$ of dividing type, and let $y$ be a meromorphic function of degree $n$ on $X$. Then any $f\in {\bf L}^2(d \eta)$ satisfies \begin{align} \nonumber (\Lambda M^y \Lambda^*)f(x) = & y(x)f(x)+ \\ \label{jardin-des-plantes} & i \sum_{j=1}^{n} \frac{c_j}{1+\varphi(p^{(j)})}K_{\zeta}( x, p^{(j)}) \left(\int_{{X}_{\mathbb R}} K_{\zeta}( p^{(j)}, p)f(p) \frac{d \eta(p)}{\omega(p)}\right), \end{align} where $p^{(1)},\ldots,p^{(n)}$ are the $n$ distinct poles of $y$. \end{Tm} \begin{pf} Let $u\in{\mathcal H}(s)$ and $f\in{\bf L}^2(d \eta)$ be such that $\Lambda u = f$, that is, they satisfy the relation \begin{equation} \label{phiInt} \frac{1+\varphi(p)}{{\sqrt{2}}}u(p)=\int_{{ X}_{\mathbb R}}K_{\zeta}(x,p)f(x)\frac{d \eta(x)}{\omega(x)}. \end{equation} Then, multiplying both sides of \eqref{phiInt} by $y(p)$, we obtain \begin{align} \nonumber y(p) \frac{1+\varphi(p)}{\sqrt{2}}u(p) = & \int_{{ X}_{\mathbb R}}y(p)K_{\zeta}( p, x)f(x)\frac{ d \eta(x)}{\omega(x)} \\ \nonumber =& \int_{{ X}_{\mathbb R}}y(x)K_{\zeta}( p, x)f(x)\frac{d \eta(x)}{\omega(x)}+ \\ & \int_{{ X}_{\mathbb R}}(y(p)-y(x))K_{\zeta}( p, x)f(x) \frac{d \eta(x)}{\omega(x)}.
\label{phiInt1}
\end{align}
Then, using the collection-type formula (see \cite[Proposition 3.1, Eq. 3.5]{av2}), we have
\begin{equation*}
(y(p)-y(q))K_{\zeta}( p, q) = - \sum_{j=1}^{n} \frac{c_j}{dt_j(p^{(j)})}K_{\zeta}( p, p^{(j)})K_{\zeta}( p^{(j)},q).
\end{equation*}
Then \eqref{phiInt1} becomes:
\begin{align}
\nonumber
\int_{{X}_{\mathbb R}}y(x)K_{\zeta}( p, x)f(x) \frac{d \eta(x)}{\omega(x)} = & \frac{(1+\varphi(p))}{\sqrt{2}} y(p)u(p) - \\
& \sum_{j=1}^{n} \frac{c_j K_{\zeta}( p, p^{(j)})}{dt_j(p^{(j)})} \int_{{X}_{\mathbb R}} K_{\zeta}( p^{(j)}, x)f(x)\frac{d \eta(x)}{\omega(x)}.
\label{phiInt2}
\end{align}
On the other hand, using the equality
\begin{equation}
(1+\varphi(p))\displaystyle{\frac{1+s(p)}{2}} = 1,
\label{phiInt3}
\end{equation}
equation \eqref{phiInt} becomes
\begin{equation}
\label{phiInt4}
\int_{{X}_{\mathbb R}} K_{\zeta}(x,p^{(j)})f(x)\frac{d \eta(x)}{\omega(x)}= \displaystyle{\frac{\sqrt{2}}{1+s(p^{(j)})}}u(p^{(j)}).
\end{equation}
Now, substituting \eqref{phiInt3} and \eqref{phiInt4} in \eqref{phiInt2}, we obtain:
\begin{align}
\frac{\sqrt{2}}{1+\varphi(p)} & \int_{{X}_{\mathbb R}} K_{\zeta} ( p, x) y(x)f(x) \frac{d \eta(x)}{\omega(x)} \nonumber \\
= & y(p)u(p) - \frac{1+s(p)}{\sqrt{2}} \sum_{j=1}^{n} c_j K_{\zeta}( p, p^{(j)})\frac{\sqrt{2}u(p^{(j)})}{1+s(p^{(j)})} \nonumber \\
=& y(p)u(p) - \sum_{j=1}^{n} c_j\frac{1+s(p)}{1+s(p^{(j)})} K_{\zeta}( p , p^{(j)})u(p^{(j)}) \nonumber \\
\nonumber
=& y(p)u(p) - \sum_{j=1}^{n} c_j K_{\zeta}( p , p^{(j)})u(p^{(j)}) \left(1 + \frac{s(p)-s(p^{(j)})}{1+s(p^{(j)})} \right) \\
= & (M^y u)(p) + \sum_{j=1}^{n} c_j \frac{s(p)-s(p^{(j)})}{\sqrt{2}} K_{\zeta}( p , p^{(j)}) u(p^{(j)}) .
\label{phiInt5}
\end{align}
On the other hand, using \eqref{phiInt3}, we have
\begin{align}
\nonumber
\varphi(p)-\varphi(p^{(j)}) =& \frac{2(s(p^{(j)})-s(p))}{(1+s(p))(1+s(p^{(j)}))} \\
=& (1+\varphi(p))(1+\varphi(p^{(j)})) \frac{s(p^{(j)})-s(p)}{2}.
\label{phiInt6}
\end{align}
Thus, multiplying both sides of \eqref{phiInt6} on the right by the Cauchy kernel $K_{\zeta}( p, p^{(j)})$ and using \eqref{jfk-le-26-octobre-2000_SV}, we conclude that
\begin{align*}
(1+\varphi(p)) \frac{s(p)-s(p^{(j)})}{2} & K_{\zeta}( p, p^{(j)}) = \frac{\varphi(p^{(j)})-\varphi(p)}{1+\varphi(p^{(j)})} K_{\zeta}( p, p^{(j)}) \\
=& \frac{i}{1+\varphi(p^{(j)})} \int_{{X}_{\mathbb R}} K_{\zeta}( p, x)K_{\zeta}( x,p^{(j)})\frac{d \eta(x)}{\omega(x)} .
\end{align*}
Thus, \eqref{phiInt5} becomes
\begin{align*}
\frac{1+\varphi(p)}{\sqrt{2}} & (M^yu)(p) = \int_{{X}_{\mathbb R}} K_{\zeta}( p, x)y(x)f(x) \frac{d \eta(x)}{\omega(x)} + i \sum_{j=1}^{n} \frac{c_j}{1+\varphi(p^{(j)})} \times \nonumber \\
& \times\left( \int_{{X}_{\mathbb R}}K_{\zeta}( p, x)K_{\zeta}( x,p^{(j)})\frac{d \eta(x)}{\omega(x)} \right) \left( \int_{{X}_{\mathbb R}} K_{\zeta}( p^{(j)}, s)f(s)\frac{d \eta(s)}{\omega(s)} \right),
\end{align*}
and by setting
\begin{align*}
{\bf \widehat{f}}(q) = y(q)f(q) + i \sum_{j=1}^{n} \frac{c_jK_{\zeta}( q, p^{(j)})}{1+\varphi(p^{(j)})} \left( \int_{{X}_{\mathbb R}} K_{\zeta}( p^{(j)}, x)f(x) \frac{d \eta(x)}{\omega(x)} \right),
\end{align*}
the previous identity becomes
\[
\frac{1+\varphi(p)}{\sqrt{2}}(M^yu)(p) = \int_{{X}_{\mathbb R}} K_{\zeta}( p, x){\bf \widehat{f}}(x)\frac{d \eta(x)}{\omega(x)} .
\]
\end{pf}
\begin{Cy}
Let $\varphi$ be a single-valued function with positive real part on a dividing-type compact Riemann surface $X$.
Furthermore, let us assume that $y(p)$ is a real meromorphic function of degree $n$ such that $s(p^{(j)})=1$ for all $1\leq j \leq n$. Then the following identity holds:
\begin{equation*}
(\Lambda (\mathfrak{Re} ~ M^y)\Lambda^*)f(p)=y(p)f(p).
\end{equation*}
\end{Cy}
We note that, in fact, it suffices to assume that the values $s(p^{(j)})$, $1 \leq j \leq n$, are all equal to a common constant of modulus one. Furthermore, for an arbitrary $f\in{\bf L}^2(d \eta)$ we set
\[
\Phi_{y}(f) \overset{\text{def} } {=} \col_{1\leq j \leq n}~\left( \frac{1}{1+\varphi(p^{(j)})}\int_{{X}_{\mathbb R}} K_{\zeta}( p^{(j)}, x)f(x)\frac{d \eta(x)}{\omega(x)}\right),
\]
and then, for an element $d= (d_1,...,d_n)^t \in {\mathbb C}^{n}$, the adjoint operator $\Phi_{y}^*:\mathbb C^n \rightarrow \mathcal{L}(\varphi)$ is given explicitly by
\[
\Phi_{y}^*d = \sum_{j=1}^{n} \frac{1}{1+\overline{\varphi(p^{(j)})}}d_j K_{\zeta}( p, \tauBa{p^{(j)}}).
\]
Then, if we further use the notation
$$\sigma_y \overset{\text{def} } {=} {\rm diag}~ c_j \, (1+\overline{\varphi(p^{(j)})}),$$
the formula in \eqref{jardin-des-plantes} may be rewritten and simplified as
\begin{equation*}
(\Lambda M^y \Lambda^*)f(p) = y(p)f(p)+\frac{i}{2}\Phi_{y}^*\sigma_y\Phi_{y} f,
\end{equation*}
while the real part of the operator $M^y$ has the form
\begin{equation*}
(\Lambda (\mathfrak{Re} ~ M^y)\Lambda^*)f(p)=y(p)f(p)+ \Phi_{y}^*{\rm diag}~({\rm Im}~\varphi(p^{(j)}))\Phi_{y} f.
\end{equation*}
\begin{landscape}
\section{Summary}
\label{chSumm43}
\setcounter{equation}{0}
The table below summarizes the comparison between the $\mathcal{L}(\varphi)$ spaces in the Riemann sphere case and in the compact real Riemann surface setting.
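The passage between $\varphi$ and $s$ above rests on two algebraic identities of the Cayley transform, \eqref{phiInt3} and \eqref{phiInt6}. They hold at the level of scalars; the following standalone Python sketch (random complex values standing in for $\varphi(p)$ and $\varphi(p^{(j)})$, not part of the paper's framework) checks both.

```python
# Scalar check of the Cayley-transform identities: with s = (1 - phi)/(1 + phi),
#   (1 + phi) * (1 + s) / 2 == 1                                   (phiInt3)
#   phi(p) - phi(q) == 2*(s(q) - s(p)) / ((1 + s(p))*(1 + s(q)))   (phiInt6)
import random

random.seed(1)

def cayley(phi):
    return (1 - phi) / (1 + phi)

for _ in range(1000):
    # positive real part guarantees 1 + phi != 0 and 1 + s = 2/(1 + phi) != 0
    phi_p = complex(random.uniform(0.1, 5), random.uniform(-5, 5))
    phi_q = complex(random.uniform(0.1, 5), random.uniform(-5, 5))
    s_p, s_q = cayley(phi_p), cayley(phi_q)
    assert abs((1 + phi_p) * (1 + s_p) / 2 - 1) < 1e-9
    lhs = phi_p - phi_q
    rhs = 2 * (s_q - s_p) / ((1 + s_p) * (1 + s_q))
    assert abs(lhs - rhs) < 1e-9
print("Cayley identities verified")
```

Since $1+s = 2/(1+\varphi)$, the second form of \eqref{phiInt6} follows from the first by clearing both denominators.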
\begin{center}
\begin{tabular}{|m{5cm}||M{6.0cm}|M{8.3cm}|}
\hline
& {\bf The $g=0$ setting} & {\bf The $g>0$ setting} \\
\hline\hline
The prime form & $z-w$ & $E(p,q)$ \\
\hline
The Cauchy kernel & $\frac{1}{-i(z-\overline{w})}$ & $K_{\zeta}(p,\tauBa{q}) \overset{\text{def} } {=} \frac {\vartheta[\zeta](p-{\tauBa{q}})} {i\vartheta[\zeta](0)E(p,{\tauBa{q}})} $ \\
\hline
The Hardy space $H^2$ & The reproducing kernel Hilbert space with kernel $\frac{1}{-i(z-\overline{w})}$ & The reproducing kernel Hilbert space with the kernel $\frac{1}{-i}K_{\zeta}(p, \tauBa{q})$ where $\zeta\in T_0$. \\
\hline
The kernel of $\mathcal{L}(\varphi)$ & $\frac{\varphi(z)+\varphi(w)^*}{-i(z-\overline{w})} = \innerProductTri{\frac{1}{t-z}}{\frac{1}{t-w}}{{\bf L}^2(d \eta)}$ & $ \left({\varphi}(p)+{\varphi}(q)^*\right) \frac{\vartheta[\zeta](p-{\tauBa{q}})}{\vartheta[\zeta](0)E({\tauBa{q}},p)}$ \\
\hline
Reproducing kernel in ${\bf L}^2( d \eta) $ & $\int_{\mathbb R}\frac{d \eta(t)}{(t-z)(t-\overline{w})}$ & \small $\begin{aligned} \innerProductTri {K_{\zeta}(\tauBa{q},x)}{ K_{\zeta}( \tauBa{p}, x)}{{\bf L}^2\left(\frac{d \eta}{ \omega}\right)} \end{aligned}$ \\
\hline
The elements of $\mathcal{L}(\varphi)$ & $F(z)=\int_{\mathbb R}\frac{f(t)d \eta(t)}{t-z}$ & \small $ \innerProductReg {f}{ K_{\zeta}( \tauBa{p}, x)} = \int_{{X}_{\mathbb R}}f(x)K_{\zeta}(x, p)\frac{d \eta(x)}{\omega(x)} $ \\
\hline
{The Herglotz integral representation formula} & \small $\begin{aligned} \varphi(z)= & iA-iBz + \\ & i\int_{{\mathbb R}}\left(\frac{1}{t-z}-\frac{t}{t^2+1} \right)d \eta(t) \end{aligned}$ & \small $\setlength{\jot}{0pt}\begin{aligned} \varphi(z) = & \frac{\pi}{2} \int_{X_{\mathbb R}} \frac{[\omega_1(x) \cdots \omega_g(x)]}{\omega(x)} n(\widetilde{x}) \, d \eta(x) + \\ & \pi i\int_{X_{\mathbb R}} \frac{[\omega_1(x)\,\cdots \,\omega_g(x)]}{\omega(x)}(Yp)d \eta(x) - \\ &
\frac{i}{2}\int_{X_{\mathbb R}}\frac{ \frac{\partial}{\partial x} \ln E(p,\widetilde{x})}{\omega(x)}d \eta(x) + iM \end{aligned}$ \\
\hline
Integral representation of the model operator $M^y$ & $(M F)(z)=\int_{\mathbb R}{\frac{t f(t)d \eta(t)}{t-z}}$ & $ (M^y F)(p)= \int_{{X}_{\mathbb R}} K_{\zeta}( p, x) f(x)y(x)\frac{d \eta(x)}{\omega(x)}$ \\
\hline
Integral representation of the resolvent operator $R_\alpha^y$ & $(R_\alpha F)(z)=\int_{\mathbb R}\frac{f(t)d \eta(t)}{(t-z)(t-\alpha)}$ & \small $\begin{aligned} (R_\alpha F)(p) & = \int_{ X_{\mathbb R}} K_{\zeta}( p,u) \frac{f(u)}{y(u) - \alpha} \frac{ d \eta(u)}{\omega(u)} \end{aligned}$ \\
\hline
\end{tabular}
\end{center}
\end{landscape}
\normalsize
\def\cfgrv#1{\ifmmode\setbox7\hbox{$\accent"5E#1$}\else \setbox7\hbox{\accent"5E#1}\penalty 10000\relax\fi\raise 1\ht7 \hbox{\lower1.05ex\hbox to 1\wd7{\hss\accent"12\hss}}\penalty 10000 \hskip-1\wd7\penalty 10000\box7}
\def\lfhook#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth \lower1.5ex\hbox{'}\hidewidth\crcr\unhbox0}}}
\begin{thebibliography}{10}

\bibitem{MR1054205} N.~I. Akhiezer.
\newblock {\em Elements of the theory of elliptic functions}, volume~79 of {\em Translations of Mathematical Monographs}.
\newblock American Mathematical Society, Providence, RI, 1990.
\newblock Translated from the second Russian edition by H. H. McFaden.

\bibitem{capb2} D.~Alpay.
\newblock {\em An advanced complex analysis problem book. Topological vector spaces, functional analysis, and Hilbert spaces of analytic functions}.
\newblock Birkh\"auser {B}asel, 2015.

\bibitem{AVP3} D.~Alpay, A.~Pinhas, and V.~Vinnikov.
\newblock Impedance multidimensional systems and selfadjoint vessels.
\newblock arXiv.
\bibitem{AVP1} D.~Alpay, A.~Pinhas, and V.~Vinnikov.
\newblock de {B}ranges spaces on compact {R}iemann surfaces and {B}eurling-{L}ax type theorem.
\newblock {\em arXiv preprint arXiv:1806.08670}, 2018.

\bibitem{av2} D.~Alpay and V.~Vinnikov.
\newblock Indefinite {H}ardy spaces on finite bordered {R}iemann surfaces.
\newblock {\em J. Funct. Anal.}, 172:221--248, 2000.

\bibitem{av3} D.~Alpay and V.~Vinnikov.
\newblock Finite dimensional de {B}ranges spaces on {R}iemann surfaces.
\newblock {\em J. Funct. Anal.}, 189(2):283--324, 2002.

\bibitem{bergman} S.~Bergman.
\newblock {\em The kernel function and conformal mapping}.
\newblock {American {M}athematical {S}ociety}, 1950.

\bibitem{MR0229011} {L. de} Branges.
\newblock {\em Hilbert spaces of entire functions}.
\newblock Prentice-Hall Inc., Englewood Cliffs, N.J., 1968.

\bibitem{dbbook} {L. de} Branges.
\newblock {\em Espaces {H}ilbertiens de fonctions enti\`{e}res}.
\newblock Masson, {P}aris, 1972.

\bibitem{MR48:904} M.~S. Brodski{\u\i}.
\newblock {\em Triangular and {J}ordan representations of linear operators}.
\newblock American Mathematical Society, Providence, R.I., 1971.
\newblock Translated from the Russian by J. M. Danskin, Translations of Mathematical Monographs, Vol. 32.

\bibitem{MR1629812} D.~Crowdy and S.~Tanveer.
\newblock A theory of exact solutions for annular viscous blobs.
\newblock {\em J. Nonlinear Sci.}, 8(4):375--400, 1998.

\bibitem{fay1} J.~D. Fay.
\newblock {\em Theta functions on {R}iemann surfaces}.
\newblock Lecture Notes in Mathematics, Vol. 352. Springer-Verlag, Berlin-New York, 1973.

\bibitem{MR1681462} G.~B. Folland.
\newblock {\em Real analysis}.
\newblock Pure and Applied Mathematics (New York). John Wiley \& Sons Inc., New York, second edition, 1999.
\newblock Modern techniques and their applications, A Wiley-Interscience Publication.

\bibitem{GrHa} P.~A. Griffiths and J.~Harris.
\newblock {\em Principles of algebraic geometry}.
\newblock Wiley-Interscience [John Wiley \& Sons], New York, 1978.
\newblock Pure and Applied Mathematics.

\bibitem{gross1981real} B.~H. Gross and J.~Harris.
\newblock Real algebraic curves.
\newblock In {\em Annales Scientifiques de l'{\'E}cole Normale Sup{\'e}rieure}, volume~14, pages 157--182, 1981.

\bibitem{gunning2} R.~C. Gunning.
\newblock {\em Lectures on {R}iemann surfaces}.
\newblock Princeton Mathematical Notes. Princeton University Press, Princeton, N.J., 1966.

\bibitem{mumford1} D.~Mumford.
\newblock {\em Tata lectures on theta. {I}}, volume~28 of {\em Progress in Mathematics}.
\newblock Birkh\"{a}user Boston, Inc., Boston, MA, 1983.
\newblock With the assistance of C. Musili, M. Nori, E. Previato and M. Stillman.

\bibitem{mumford2} D.~Mumford.
\newblock {\em Tata lectures on theta. {II}}, volume~43 of {\em Progress in Mathematics}.
\newblock Birkh\"{a}user Boston, Inc., Boston, MA, 1984.
\newblock Jacobian theta functions and differential equations, With the collaboration of C. Musili, M. Nori, E. Previato, M. Stillman and H. Umemura.

\bibitem{RosenblumRovnyak} M.~Rosenblum and J.~Rovnyak.
\newblock {\em Topics in {H}ardy classes and univalent functions}.
\newblock Birkh{\" a}user Verlag, Basel, 1985.

\bibitem{Tsuji} M.~Tsuji.
\newblock {\em Potential theory in modern function theory}.
\newblock Maruzen, {Tokyo}, 1959.

\bibitem{vinnikov4} V.~Vinnikov.
\newblock Commuting nonselfadjoint operators and algebraic curves.
\newblock In {\em Operator theory and complex analysis ({S}apporo, 1991)}, volume~59 of {\em Oper. Theory Adv. Appl.}, pages 348--371. Birkh\"{a}user, Basel, 1992.

\bibitem{vinnikov5} V.~Vinnikov.
\newblock Self--adjoint determinantal representations of real plane curves.
\newblock {\em Math. {A}nn.}, 296:453--479, 1993.

\bibitem{MR1634421} V.~Vinnikov.
\newblock Commuting operators and function theory on a {R}iemann surface.
\newblock In {\em Holomorphic spaces (Berkeley, CA, 1995)}, volume~33 of {\em Math. Sci. Res. Inst. Publ.}, pages 445--476. Cambridge Univ. Press, Cambridge, 1998.

\end{thebibliography}
\end{document}
\begin{document} \maketitle \section{Introduction} The classical connection between dynamical systems and $C^*$-algebras is the crossed product construction which associates a $C^*$-algebra to a ho\-meo\-mor\-phism of a compact metric space. This construction has been generalized stepwise by J. Renault (\cite{Re}), V. Deaconu (\cite{De1}) and C. Anantharaman-Delaroche (\cite{An}) to local ho\-meo\-mor\-phisms and recently also to locally injective surjections by the second named author in \cite{Th1}. The main motivation for the last generalisation was the wish to include the Matsumoto-type $C^*$-algebra of a subshift which was introduced by the first named author in \cite{C}. In this paper we continue the investigation of the structure of the $C^*$-algebra of a locally injective surjection which was begun in \cite{Th1}. The main goal here is to give necessary and sufficient conditions for the algebras, or at least any simple quotient of them, to be purely infinite, a property they are known to have in many cases. Recall that a simple $C^*$-algebra is said to be purely infinite when all its non-zero hereditary $C^*$-subalgebras contain an infinite projection. Our main result is that a simple quotient of the $C^*$-algebra arising from a locally injective surjection on a compact metric space of finite covering dimension, as in Section 4 of \cite{Th1}, is one of the following kinds: \begin{enumerate} \item[1)] a full matrix algebra $M_n(\mathbb C)$ for some $n \in \mathbb N$, or \item[2)] the crossed product $C(K) \times_f \mathbb Z$ corresponding to a minimal homeomorphism $f$ of a compact metric space $K$ of finite covering dimension, or \item[3)] a unital purely infinite simple $C^*$-algebra. \end{enumerate} In particular, when the algebra itself is simple it must be one of the three types, and in fact purely infinite unless the underlying map is a homeomorphism.
Hence the problem of finding necessary and sufficient conditions for the $C^*$-algebra of a locally injective surjection on a compact metric space of finite covering dimension to be both simple and purely infinite has a strikingly straightforward solution: If the algebra is simple (and \cite{Th1} gives necessary and sufficient conditions for this to happen) then it is automatically purely infinite unless the map in question is a homeomorphism. A corollary of this result is that if the $C^*$-algebra of a one-sided subshift is simple, then it is also purely infinite. On the way to the proof of the main result we study the ideal structure. We find first the gauge invariant ideals, obtaining an insight which combined with methods and results of Katsura (\cite{K}) leads to a list of the primitive ideals. We then identify the maximal ideals among the primitive ones and obtain in this way a description of the simple quotients which we use to obtain the conclusions described above. A fundamental tool all the way is the canonical locally homeomorphic extension discovered in \cite{Th2} which allows us to replace the given locally injective map with a local homeomorphism. It means, however, that much of the structure we investigate is described in terms of the canonical locally homeomorphic extension, and this is unfortunate since it may not be easy to obtain a satisfactory understanding of it for a given locally injective surjection. Still, it allows us to obtain qualitative conclusions of the type mentioned above. Besides the $C^*$-algebras of subshifts, our results of course also cover the $C^*$-algebras associated to a local homeomorphism by the construction of Renault, Deaconu and Anantharaman-Delaroche, provided the map is surjective and the space is of finite covering dimension.
This means that the results have bearing on many classes of $C^*$-algebras which have been associated to various structures, for example the $\lambda$-graph systems of Matsumoto (\cite{Ma}) and the continuous graphs of Deaconu (\cite{De2}). \emph{Acknowledgement:} This work was supported by the NordForsk Research Network 'Operator Algebras and Dynamics' (grant 11580). The first named author was also supported by the Research Council of Norway through project 191195/V30. \section{The $C^*$-algebra of a locally injective surjection} Let $X$ be a compact metric space and $\varphi : X \to X$ a locally injective surjection. Set $$ \Gamma_{\varphi} = \left\{ (x,k,y) \in X \times \mathbb Z \times X : \ \exists n,m \in \mathbb N, \ k = n -m , \ \varphi^n(x) = \varphi^m(y)\right\} . $$ This is a groupoid with the set of composable pairs being $$ \Gamma_{\varphi}^{(2)} \ = \ \left\{\left((x,k,y), (x',k',y')\right) \in \Gamma_{\varphi} \times \Gamma_{\varphi} : \ y = x'\right\}. $$ The multiplication and inversion are given by $$ (x,k,y)(y,k',y') = (x,k+k',y') \ \text{and} \ (x,k,y)^{-1} = (y,-k,x) . $$ Note that the unit space of $\Gamma_{\varphi}$ can be identified with $X$ via the map $x \mapsto (x,0,x)$. To turn $\Gamma_{\varphi}$ into a locally compact topological groupoid, fix $k \in \mathbb Z$. For each $n \in \mathbb N$ such that $n+k \geq 0$, set $$ {\Gamma_{\varphi}}(k,n) = \left\{ \left(x,l, y\right) \in X \times \mathbb Z \times X: \ l =k, \ \varphi^{k+i}(x) = \varphi^i(y), \ i \geq n \right\} . $$ This is a closed subset of the topological product $X \times \mathbb Z \times X$ and hence a locally compact Hausdorff space in the relative topology. Since $\varphi$ is locally injective $\Gamma_{\varphi}(k,n)$ is an open subset of $\Gamma_{\varphi}(k,n+1)$ and hence the union $$ {\Gamma_{\varphi}}(k) = \bigcup_{n \geq -k} {\Gamma_{\varphi}}(k,n) $$ is a locally compact Hausdorff space in the inductive limit topology. 
The disjoint union
$$
\Gamma_{\varphi} = \bigcup_{k \in \mathbb Z} {\Gamma_{\varphi}}(k)
$$
is then a locally compact Hausdorff space in the topology where each ${\Gamma_{\varphi}}(k)$ is an open and closed set. In fact, as is easily verified, $\Gamma_{\varphi}$ is a locally compact groupoid in the sense of \cite{Re} and a semi \'etale groupoid in the sense of \cite{Th1}. The paper \cite{Th1} contains a construction of a $C^*$-algebra from any semi \'etale groupoid, but we give here only a description of the construction for $\Gamma_{\varphi}$. Consider the space $B_c\left(\Gamma_{\varphi}\right)$ of compactly supported bounded functions on $\Gamma_{\varphi}$. They form a $*$-algebra with respect to the convolution-like product
$$
f \star g (x,k,y) = \sum_{z,n+ m = k} f(x,n,z)g(z,m,y)
$$
and the involution
$$
f^*(x,k,y) = \overline{f(y,-k,x)} .
$$
To turn it into a $C^*$-algebra, let $x \in X$ and consider the Hilbert space $H_x$ of square summable functions on $\left\{ (x',k,y') \in \Gamma_{\varphi} : \ y' = x \right\}$ which carries a representation $\pi_x$ of the $*$-algebra $B_c\left(\Gamma_{\varphi}\right)$ defined such that
\begin{equation}\label{pirep}
\left(\pi_x(f)\psi\right)(x',k, x) = \sum_{z, n+m = k} f(x',n,z)\psi(z,m,x)
\end{equation}
when $\psi \in H_x$. One can then define a $C^*$-algebra $B^*_r\left(\Gamma_{\varphi}\right)$ as the completion of $B_c\left(\Gamma_{\varphi}\right)$ with respect to the norm
$$
\left\|f\right\| = \sup_{x \in X} \left\|\pi_x(f)\right\| .
$$
The space $C_c\left(\Gamma_{\varphi}\right)$ of continuous and compactly supported functions on $\Gamma_{\varphi}$ generates a $*$-subalgebra $\operatorname{alg}^* \Gamma_{\varphi}$ of $B^*_r\left(\Gamma_{\varphi}\right)$ which, when completed with respect to the above norm, becomes the $C^*$-algebra $C^*_r\left(\Gamma_{\varphi}\right)$ which is our object of study.
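To see the groupoid $\Gamma_{\varphi}$ concretely, one can enumerate a truncation of it for a small example. On a finite set every surjection is a bijection, so a finite model of a genuinely non-injective $\varphi$ is impossible; the Python sketch below (the map $\varphi(x)=x+2 \bmod 5$ and the truncation bounds are arbitrary choices, not from the paper) nevertheless illustrates the membership condition and checks the inverse and composition rules stated above.

```python
# Toy enumeration of Gamma_phi = {(x,k,y) : exists n,m >= 0, k = n-m, phi^n(x) = phi^m(y)}
# for phi(x) = x + 2 mod 5, truncated to |k| <= K and n, m <= N.
X = range(5)
phi = lambda x: (x + 2) % 5

def iterate(x, n):
    for _ in range(n):
        x = phi(x)
    return x

K, N = 3, 6
gamma = set()
for x in X:
    for y in X:
        for n in range(N + 1):
            for m in range(N + 1):
                if abs(n - m) <= K and iterate(x, n) == iterate(y, m):
                    gamma.add((x, n - m, y))

# Check the groupoid structure on the truncation: inverses, and closure of
# the partial product (x,k,y)(y,k',y') = (x,k+k',y') where it is defined.
for (x, k, y) in gamma:
    assert (y, -k, x) in gamma                    # (x,k,y)^{-1} = (y,-k,x)
for (x, k, y) in gamma:
    for (a, l, b) in gamma:
        if a == y and abs(k + l) <= K:
            assert (x, k + l, b) in gamma         # composable pairs stay inside
print(len(gamma), "elements in the truncated groupoid")
```

For this bijective $\varphi$ the condition $\varphi^n(x)=\varphi^m(y)$ collapses to $y=\varphi^{k}(x)$, so the truncation consists of exactly one arrow $(x,k,\varphi^k(x))$ per pair $(x,k)$.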
When $\varphi$ is open, and hence a local homeomorphism, $C_c\left(\Gamma_{\varphi}\right)$ is a $*$-subalgebra of $B_c\left(\Gamma_{\varphi}\right)$ so that $\operatorname{alg}^* \Gamma_{\varphi} = C_c\left(\Gamma_{\varphi}\right)$ and $C^*_r\left(\Gamma_{\varphi}\right)$ is then the completion of $C_c\left(\Gamma_{\varphi}\right)$. In this case $C_r^*\left(\Gamma_{\varphi}\right)$ is the algebra studied by Renault in \cite{Re}, by Deaconu in \cite{De1}, and by Anantharaman-Delaroche in \cite{An}. The algebra $ C^*_r\left(\Gamma_{\varphi}\right)$ contains several canonical $C^*$-subalgebras which we shall need in our study of its structure. One is the $C^*$-algebra of the open sub-groupoid $$ R_{\varphi} = \Gamma_{\varphi}(0) $$ which is a semi \'etale groupoid (equivalence relation, in fact) in itself. The corresponding $C^*$-algebra $C^*_r\left(R_{\varphi}\right)$ is the $C^*$-subalgebra of $C^*_r\left(\Gamma_{\varphi}\right)$ generated by the continuous and compactly supported functions on $R_{\varphi}$. Equally important are two canonical abelian $C^*$-subalgebras, $D_{\Gamma_{\varphi}}$ and $D_{R_{\varphi}}$. They arise from the fact that the $C^*$-algebra $B(X)$ of bounded functions on $X$ sits canonically inside $B^*_r\left(\Gamma_{\varphi}\right)$, cf. p. 765 of \cite{Th1}, and are then defined as $$ D_{\Gamma_{\varphi}} = C^*_r\left(\Gamma_{\varphi}\right) \cap B(X) $$ and $$ D_{R_{\varphi}} = C^*_r\left(R_{\varphi}\right) \cap B(X), $$ respectively. There are faithful conditional expectations $P_{\Gamma_{\varphi}} : C^*_r\left(\Gamma_{\varphi}\right) \to D_{\Gamma_{\varphi}}$ and $P_{R_{\varphi}} : C^*_r\left(R_{\varphi}\right) \to D_{R_{\varphi}}$, obtained as extensions of the restriction map $\operatorname{alg}^* \Gamma_{\varphi} \to B(X)$ to $C^*_r\left(\Gamma_{\varphi}\right)$ and $C^*_r\left(R_{\varphi}\right)$, respectively. 
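In finite dimensions a conditional expectation of this kind is simply compression to the diagonal. The sketch below is a toy model, not the paper's construction: it takes the full equivalence relation on three points, so that the groupoid algebra is $M_3(\mathbb C)$, and checks on one matrix that the restriction-to-unit-space map $P$ is positive and faithful, mirroring the faithfulness of $P_{R_{\varphi}}$.

```python
# For the full equivalence relation on X = {0,1,2}, functions on the groupoid
# are 3x3 matrices and restriction to the unit space keeps only the diagonal.
# Since P(a* a)_{ii} = sum_r |a_{ri}|^2, P(a* a) = 0 forces a = 0 (faithfulness).
def conj_transpose(a):
    n = len(a)
    return [[a[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def P(a):  # conditional expectation onto the diagonal subalgebra
    return [a[i][i] for i in range(len(a))]

a = [[1, 2j, 0], [0, -1, 3], [1j, 0, 0.5]]
d = P(matmul(conj_transpose(a), a))
for i, v in enumerate(d):
    col = sum(abs(a[r][i]) ** 2 for r in range(3))  # squared norm of column i
    assert abs(v - col) < 1e-12 and v.real >= 0 and abs(v.imag) < 1e-12
print("P is positive and faithful on this example:", d)
```

The same computation, run over all matrix units, shows that $P(a^*a)=0$ annihilates every column of $a$, which is the finite-dimensional shadow of faithfulness.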
When $\varphi$ is open and hence a local homeomorphism, the two algebras $D_{\Gamma_{\varphi}}$ and $D_{R_{\varphi}}$ are identical and equal to $C(X)$, but in general the inclusion $D_{R_{\varphi}} \subseteq D_{\Gamma_{\varphi}}$ is strict. Our approach to the study of $C^*_r\left(\Gamma_{\varphi}\right)$ hinges on a construction introduced in \cite{Th2} of a compact Hausdorff space $Y$ and a surjective local homeomorphism $\phi : Y \to Y$ such that $(X,\varphi)$ is a factor of $(Y,\phi)$ and
\begin{equation}\label{basiciso}
C^*_r\left(\Gamma_{\varphi}\right) \simeq C^*_r\left(\Gamma_{\phi}\right) .
\end{equation}
Everything we can say about ideals and simple quotients of $C^*_r\left(\Gamma_{\phi}\right)$ will have bearing on $C^*_r\left(\Gamma_{\varphi}\right)$, but while the isomorphism (\ref{basiciso}) is equivariant with respect to the canonical gauge actions (see Section \ref{gaugeac}), it will not in general take $C^*_r\left(R_{\varphi}\right)$ onto $C^*_r\left(R_{\phi}\right)$. This is one reason why we will work with $C^*_r\left(\Gamma_{\varphi}\right)$ whenever possible, instead of using (\ref{basiciso}) as a valid excuse for working with local homeomorphisms only. Another is that it is generally not so easy to get a workable description of $(Y,\phi)$. As in \cite{Th2} we will refer to $(Y,\phi)$ as the \emph{canonical locally homeomorphic extension} of $(X,\varphi)$. The space $Y$ is the Gelfand spectrum of $D_{\Gamma_{\varphi}}$, so when $\varphi$ is already a local homeomorphism itself, the extension is redundant and $(Y,\phi) = (X,\varphi)$.
\section{Ideals in $C^*_r\left(R_{\varphi}\right)$}
Recall from \cite{Th1} that there is a semi \'etale equivalence relation
$$
R\left(\varphi^n\right) = \left\{ (x,y) \in X \times X : \varphi^n(x) = \varphi^n(y) \right\}
$$
for each $n \in \mathbb N$. They will be considered as open sub-equivalence relations of $R_{\varphi}$ via the embedding $(x,y) \mapsto (x,0,y) \in \Gamma_{\varphi}(0)$.
In this way we get embeddings $C^*_r\left(R\left(\varphi^n\right)\right) \subseteq C^*_r\left(R\left(\varphi^{n+1}\right)\right) \subseteq C^*_r\left(R_{\varphi}\right)$ by Lemma 2.10 of \cite{Th1}, and then
\begin{equation}\label{crux}
C^*_r\left(R_{\varphi}\right) = \overline{\bigcup_n C^*_r\left(R\left(\varphi^n\right)\right)} ,
\end{equation}
cf. (4.2) of \cite{Th1}. This inductive limit decomposition of $C^*_r\left(R_{\varphi}\right)$ defines in a natural way a similar inductive limit decomposition of $D_{R_{\varphi}}$. Set
$$
D_{R\left(\varphi^n\right)} = C^*_r\left(R\left(\varphi^n\right)\right) \cap B(X) .
$$
\begin{lemma}\label{AA1} $D_{R_{\varphi}} = \overline{\bigcup_{n=1}^{\infty} D_{R\left(\varphi^n\right)}}$.
\begin{proof}
Since $C^*_r\left(R\left(\varphi^n\right)\right) \subseteq C^*_r\left(R_{\varphi}\right)$, it follows that
$$
D_{R(\varphi^n)} = C^*_r\left(R\left(\varphi^n\right)\right) \cap B(X) \subseteq C^*_r\left(R_{\varphi}\right) \cap B(X) = D_{R_{\varphi}}.
$$
Hence
\begin{equation}\label{AA2}
\overline{\bigcup_{n=1}^{\infty} D_{R\left(\varphi^n\right)}} \subseteq D_{R_{\varphi}} .
\end{equation}
Let $x \in D_{R_{\varphi}}$ and let $\epsilon > 0$. It follows from (\ref{crux}) that there is an $n \in \mathbb N$ and an element $y \in \operatorname{alg}^* R\left(\varphi^n\right)$ such that
\begin{equation*}
\left\|x - P_{R_{\varphi}}(y)\right\| \leq \epsilon .
\end{equation*}
On $\operatorname{alg}^* R_{\varphi}$ the conditional expectation $P_{R_{\varphi}}$ is just the map which restricts functions to $X$ and the same is true for the conditional expectation $P_{R\left(\varphi^n\right)}$ on $\operatorname{alg}^* R\left(\varphi^n\right)$, where $P_{R\left(\varphi^n\right)}$ is the conditional expectation of Lemma 2.8 in \cite{Th1} obtained by considering $R\left(\varphi^n\right)$ as a semi \'etale groupoid in itself. Hence $P_{R_{\varphi}}(y) = P_{R\left(\varphi^n\right)}(y) \in D_{R\left(\varphi^n\right)}$. It follows that we have equality in (\ref{AA2}).
\end{proof}
\end{lemma}
In the following, by an ideal of a $C^*$-algebra we will always mean a closed and two-sided ideal. The next lemma is well known and crucial for the sequel.
\begin{lemma}\label{AA3} Let $Y$ be a compact Hausdorff space, $M_n$ the $C^*$-algebra of $n$-by-$n$ matrices for some $n \in \mathbb N$ and $p$ a projection in $C(Y,M_n)$. Set $A = pC(Y,M_n)p$ and let $C_A$ be the center of $A$. For every ideal $I$ in $A$ there is an approximate unit for $I$ in $I \cap C_A$.
\end{lemma}
\begin{lemma}\label{A7} Let $I,J \subseteq C^*_r\left(R_{\varphi}\right)$ be two ideals. Then
$$
I \cap D_{R_{\varphi}} \subseteq J \cap D_{R_{\varphi}} \ \Rightarrow \ I \subseteq J.
$$
\begin{proof}
If $I \cap D_{R_{\varphi}} \subseteq J \cap D_{R_{\varphi}}$ it follows that $I \cap D_{R\left(\varphi^n\right)} \subseteq J \cap D_{R\left(\varphi^n\right)}$ for all $n$. Note that the center of $C^*_r\left(R\left(\varphi^n\right)\right)$ is contained in $D_{R\left(\varphi^n\right)}$ since $D_{R\left(\varphi^n\right)}$ is maximal abelian in $C^*_r\left(R\left(\varphi^n\right)\right)$ by Lemma 2.19 of \cite{Th1}. By using Corollary 3.3 of \cite{Th1} it follows therefore from Lemma \ref{AA3} that there is a sequence $\{x_n\}$ in $ I \cap D_{R\left(\varphi^n\right)}$ such that $\lim_{n \to \infty} x_na = a$ for all $a \in I \cap C^*_r\left(R\left(\varphi^n\right)\right)$. Since $x_n \in J \cap D_{R\left(\varphi^n\right)}$ this implies that $I \cap C^*_r\left(R\left(\varphi^n\right)\right) \subseteq J \cap C^*_r\left(R\left(\varphi^n\right)\right)$ for all $n$. Combining with (\ref{crux}) we find that
$$
I = \overline{\bigcup_n I \cap C^*_r\left(R\left(\varphi^n\right)\right)} \subseteq \overline{\bigcup_n J \cap C^*_r\left(R\left(\varphi^n\right)\right)} = J .
$$
\end{proof}
\end{lemma}
Recall from \cite{Th1} that an ideal $J$ in $D_{R_{\varphi}}$ is said to be \emph{$R_{\varphi}$-invariant} when $n^*Jn \subseteq J$ for all $n \in \operatorname{alg}^* R_{\varphi}$ supported in a bisection of $R_{\varphi}$. For every $R_{\varphi}$-invariant ideal $J$ in $D_{R_{\varphi}}$, set
$$
\widehat{J} = \left\{ a \in C^*_r\left(R_{\varphi}\right) : \ P_{R_{\varphi}}(a^*a) \in J \right\} .
$$
\begin{thm}\label{A4} The map $J \mapsto \widehat{J}$ is a bijection between the $R_{\varphi}$-invariant ideals in $D_{R_{\varphi}}$ and the ideals in $C^*_r\left(R_{\varphi}\right)$. The inverse is given by the map $I \mapsto I \cap D_{R_{\varphi}}$.
\begin{proof}
It follows from Lemma 2.13 of \cite{Th1} that $\widehat{J} \cap D_{R_{\varphi}} = J$ for any $R_{\varphi}$-invariant ideal in $D_{R_{\varphi}}$. It suffices therefore to show that every ideal in $C^*_r\left(R_{\varphi}\right)$ is of the form $\widehat{J}$ for some $R_{\varphi}$-invariant ideal $J$ in $D_{R_{\varphi}}$. Let $I$ be an ideal in $C^*_r\left(R_{\varphi}\right)$. Set $J = I \cap D_{R_{\varphi}}$, which is clearly an $R_{\varphi}$-invariant ideal in $D_{R_{\varphi}}$. Since $\widehat{J} \cap D_{R_{\varphi}} = J = I \cap D_{R_{\varphi}}$ by Lemma 2.13 of \cite{Th1}, we conclude from Lemma \ref{A7} that $\widehat{J} = I$.
\end{proof}
\end{thm}
A subset $A \subseteq Y$ is said to be \emph{$\phi$-saturated} when $\phi^{-k}\left(\phi^k(A)\right) = A$ for all $k \in \mathbb N$.
\begin{cor}\label{A5} (Cf. Proposition II.4.6 of \cite{Re}) The map
$$
L \mapsto I_L = \left\{ a \in C^*_r\left(R_{\phi}\right) : \ P_{R_{\phi}}(a^*a)(x) = 0\ \forall x \in L \right\}
$$
is a bijection from the non-empty closed $\phi$-saturated subsets $L$ of $Y$ onto the set of proper ideals in $C^*_r\left(R_{\phi}\right)$.
\begin{proof}
Since $\phi$ is a local homeomorphism, we have that $D_{R_{\phi}} = C(Y)$, so the corollary follows from Theorem \ref{A4} by use of the well-known bijection between ideals in $C(Y)$ and closed subsets of $Y$. The only thing to show is that an open subset $U$ of $Y$ is $\phi$-saturated if and only if the ideal $C_0(U)$ of $C(Y)$ is $R_{\phi}$-invariant, which is straightforward, cf. the proof of Corollary 2.18 in \cite{Th1}.
\end{proof}
\end{cor}
The next issue will be to determine which closed $\phi$-saturated subsets of $Y$ correspond to primitive ideals. For a point $x \in Y$ we define the \emph{$\phi$-saturation} of $x$ to be the set
$$
H(x) = \bigcup_{n=1}^{\infty} \left\{ y \in Y : \ \phi^n(y) = \phi^n(x) \right\} .
$$
The closure $\overline{H(x)}$ of $H(x)$ will be referred to as the \emph{closed $\phi$-saturation} of $x$. Observe that both $H(x)$ and $\overline{H(x)}$ are $\phi$-saturated.
\begin{prop}\label{A17} Let $L \subseteq Y$ be a non-empty closed $\phi$-saturated subset. The ideal $I_L$ is primitive if and only if $L$ is the closed $\phi$-saturation of a point in $Y$.
\begin{proof}
Since $C^*_r\left(R_{\phi}\right)$ is separable, an ideal is primitive if and only if it is prime, cf. \cite{Pe}. We show that $I_L$ is prime if and only if $L = \overline{H(x)}$ for some $x \in Y$. Assume first that $L = \overline{H(x)}$ and consider two ideals, $I_1$ and $I_2$, in $C^*_r\left(R_{\phi}\right)$ such that $I_1I_2 \subseteq I_{\overline{H(x)}}$. By Corollary \ref{A5} there are closed $\phi$-saturated subsets, $L_1$ and $L_2$, in $Y$ such that $I_j = I_{L_j}$, $j =1,2$. It follows from Corollary \ref{A5} that $\overline{H(x)} \subseteq L_1 \cup L_2$. At least one of the $L_j$'s must contain $x$, say $x \in L_1$. Since $L_1$ is $\phi$-saturated and closed it follows that $\overline{H(x)} \subseteq L_1$, and hence that $I_1 \subseteq I_{\overline{H(x)}}$. Thus $I_{\overline{H(x)}}$ is prime. Assume next that $I_L$ is prime.
Let $\{U_k\}_{k=0}^{\infty}$ be a base for the topology of $L$ consisting of non-empty sets. We will construct sequences $\{B_k\}_{k=0}^{\infty}$ of compact non-empty neighbourhoods in $L$ and non-negative integers $\left\{n_k\right\}_{k=0}^{\infty}$ such that \begin{enumerate} \item[i)] $B_k \subseteq B_{k-1}$ for $ k \geq 1$, and \item[ii)] $\phi^{n_{k}}\left(B_{k}\right) \subseteq \phi^{n_{k}}\left(U_{k}\right)$ for $ k \geq 0$. \end{enumerate} We start the induction by letting $B_0$ be any compact non-empty neighbourhood in $U_0$ and $n_0 = 0$. Assume then that $B_0,B_1,B_2,\dots , B_m$ and $n_0,n_1, \dots, n_m$ have been constructed. Choose a non-empty open subset $V_{m+1} \subseteq B_{m}$. Note that both of $$ L \backslash \bigcup_l \phi^{-l}\left(\phi^l(V_{m+1})\right) $$ and $$ L \backslash \bigcup_l \phi^{-l}\left(\phi^l(U_{m+1})\right) $$ are closed $\phi$-saturated subsets of $L$, and hence of $Y$, and neither of them is all of $L$. It follows from Corollary \ref{A5} and primeness of $I_L$ that $L$ is not contained in their union, which in turn implies that $$ \phi^{-n_{m+1}}\left(\phi^{n_{m+1}}(V_{m+1})\right) \cap \phi^{-n_{m+1}}\left(\phi^{n_{m+1}}(U_{m+1})\right) $$ is non-empty for some $n_{m+1} \in \mathbb N$. There is therefore a point $z \in V_{m+1}$ such that $\phi^{n_{m+1}}(z) \in \phi^{n_{m+1}}\left(U_{m+1}\right)$, and therefore also a compact non-empty neighbourhood $B_{m+1} \subseteq V_{m+1}$ of $z$ such that $\phi^{n_{m+1}}(B_{m+1}) \subseteq \phi^{n_{m+1}}\left(U_{m+1}\right)$. This completes the induction. Let $x \in \bigcap_m B_m$, which is non-empty since the $B_m$ are compact, non-empty and decreasing. By construction every $U_k$ contains an element from $H(x)$. It follows that $\overline{H(x)} = L$.
\end{proof} \end{prop} \section{On the ideals of $C^*_r\left(\Gamma_{\varphi}\right)$}\label{gaugeac} The $C^*$-algebra $C^*_r\left(\Gamma_{\varphi}\right)$ carries a canonical circle action $\beta$, called the \emph{gauge action}, defined such that $$ \beta_{\lambda}(f)(x,k,y) = \lambda^k f(x,k,y) $$ when $f \in C_c\left(\Gamma_{\varphi}\right)$ and $\lambda \in \mathbb T$, cf. \cite{Th1}. As the next step we describe in this section the gauge-invariant ideals in $C^*_r\left(\Gamma_{\varphi}\right)$. Consider first the function $m : X \to \mathbb N$ defined such that \begin{equation}\label{m-funk} m(x) = \# \left\{ y \in X : \ \varphi(y) = \varphi(x) \right\} . \end{equation} As shown in \cite{Th1}, $m \in D_{R(\varphi)} \subseteq D_{R_{\varphi}}$. Define a function $V_{\varphi} : \Gamma_{\varphi} \to \mathbb C$ such that $$ V_{\varphi} (x,k,y) = \begin{cases} m(x)^{-\frac{1}{2}} & \ \text{when} \ k = 1 \ \text{and} \ y = \varphi(x) \\ 0 & \ \text{otherwise.} \end{cases} $$ Then $V_{\varphi}$ is the product $V_{\varphi} = m^{-\frac{1}{2}} 1_{\Gamma_{\varphi}(1,0)}$ in $C^*_r\left(\Gamma_{\varphi}\right)$ and in fact an isometry which induces an endomorphism $\widehat{\varphi}$ of $C^*_r\left(R_{\varphi}\right)$, viz. $$ \widehat{\varphi}(a) = V_{\varphi}aV_{\varphi}^* . $$ Together with $C^*_r\left(R_{\varphi}\right)$ the isometry $V_{\varphi}$ generates $C^*_r\left(\Gamma_{\varphi}\right)$, which in this way becomes a crossed product $C^*_r\left(R_{\varphi}\right) \times_{\widehat{\varphi}} \mathbb N$ in the sense of Stacey, cf. \cite{St} and \cite{Th1}; in particular Theorem 4.6 in \cite{Th1}. \subsection{Gauge invariant ideals} Let $C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$ denote the fixed point algebra of the gauge action.
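To fix ideas we record a standard example, included here only as an illustration; it plays no role in the arguments below. Let $\varphi$ be the full one-sided shift on $X = \{1,2,\dots,N\}^{\mathbb N}$, i.e.\ $\varphi(x_1,x_2,x_3,\dots) = (x_2,x_3,\dots)$. Two points of $X$ have the same image under $\varphi$ exactly when they agree from the second coordinate on, so $$ m(x) = \# \left\{ y \in X : \ \varphi(y) = \varphi(x) \right\} = N \ \ \text{for all} \ x \in X , $$ and hence $V_{\varphi} = N^{-\frac{1}{2}} 1_{\Gamma_{\varphi}(1,0)}$. In this case $C^*_r\left(\Gamma_{\varphi}\right)$ is isomorphic to the Cuntz algebra $\mathcal O_N$ with $\beta$ corresponding to the standard gauge action on $\mathcal O_N$, and the fixed point algebra $C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$ is the UHF algebra of type $N^{\infty}$.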
\begin{lemma}\label{kalgs} For each $k \in \mathbb N$ we have that ${V_{\varphi}^*}^k C^*_r\left(R_{\varphi}\right)V_{\varphi}^k$ is a $C^*$-subalgebra of $C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$, \begin{equation}\label{bkr0} {V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k \subseteq {V_{\varphi}^*}^{k+1}C^*_r\left(R_{\varphi}\right)V_{\varphi}^{k+1}, \end{equation} and \begin{equation}\label{bkr} C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T} = \overline{\bigcup_{k=0}^{\infty} {V_{\varphi}^*}^k C^*_r\left(R_{\varphi}\right)V_{\varphi}^k }. \end{equation} \begin{proof} Since $V_{\varphi}^k{V_{\varphi}^*}^k \in C^*_r\left(R_{\varphi}\right)$, it is easy to check that ${V_{\varphi}^*}^k C^*_r\left(R_{\varphi}\right)V_{\varphi}^k$ is a $*$-algebra. To see that ${V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k$ is closed, let $\{a_n\}$ be a sequence in $C^*_r\left(R_{\varphi}\right)$ such that $\left\{{V_{\varphi}^*}^ka_nV_{\varphi}^k\right\}$ converges in $C^*_r\left(\Gamma_{\varphi}\right)$, say $\lim_{n \to \infty} {V_{\varphi}^*}^ka_nV_{\varphi}^k = b$. It follows that $$ \left\{V_{\varphi}^k{V_{\varphi}^*}^ka_nV_{\varphi}^k{V_{\varphi}^*}^k\right\} $$ is Cauchy in $C^*_r\left(R_{\varphi}\right)$ and hence convergent, say to $a \in C^*_r\left(R_{\varphi}\right)$. But then $b = \lim_{n \to \infty} {V_{\varphi}^*}^k a_nV_{\varphi}^k = \lim_{n \to \infty} {V_{\varphi}^*}^kV_{\varphi}^k {V_{\varphi}^*}^k a_nV_{\varphi}^k{V_{\varphi}^*}^kV_{\varphi}^k = {V_{\varphi}^*}^kaV_{\varphi}^k$. It follows that $$ {V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k $$ is closed and hence a $C^*$-subalgebra. The inclusion (\ref{bkr0}) follows from the observation that $V_{\varphi}^k = V_{\varphi}^*V_{\varphi}^{k+1}$ and $V_{\varphi}C^*_r\left(R_{\varphi}\right)V_{\varphi}^* \subseteq C^*_r\left(R_{\varphi}\right)$.
It is straightforward to check that $\beta_{\lambda}(V_{\varphi}) = \lambda V_{\varphi}$ and that $C^*_r\left(R_{\varphi}\right) \subseteq C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$. The inclusion $\supseteq$ in (\ref{bkr}) follows from this. To obtain the other, let $x \in C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}$ and let $\epsilon > 0$. It follows from Theorem 4.6 of \cite{Th1} and Lemma 1.1 of \cite{BoKR} that there is an $n \in \mathbb N$ and an element $$ y \in \operatorname{Span} \bigcup_{i,j \leq n} {V_{\varphi}^*}^i C^*_r\left(R_{\varphi}\right)V_{\varphi}^j $$ such that $\left\|x - y\right\| \leq \epsilon$. Then $\left\| x - \int_{\mathbb T} \beta_{\lambda}(y) \ d\lambda\right\| \leq \epsilon$ and since $$ \int_{\mathbb T} \beta_{\lambda}(y) \ d\lambda \in {V_{\varphi}^*}^nC^*_r\left(R_{\varphi}\right)V_{\varphi}^n, $$ we see that (\ref{bkr}) holds. \end{proof} \end{lemma} \begin{lemma}\label{ident} Let $I$ be a gauge invariant ideal in $C^*_r\left(\Gamma_{\varphi}\right)$. It follows that $$ I = \left\{ a \in C^*_r\left(\Gamma_{\varphi}\right) : \ \int_{\mathbb T} \beta_{\lambda}(a^*a) \ d \lambda \in I \cap C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T} \right\} . $$ \begin{proof} Set $B = C^*_r\left(\Gamma_{\varphi}\right)/I$. Since $I$ is gauge-invariant there is an action $\hat{\beta}$ of $\mathbb T$ on $B$ such that $q \circ \beta = \hat{\beta} \circ q$, where $q : C^*_r\left(\Gamma_{\varphi}\right) \to B$ is the quotient map. Thus, if $$ y \in \left\{ a \in C^*_r\left(\Gamma_{\varphi}\right) : \ \int_{\mathbb T} \beta_{\lambda}(a^*a) \ d \lambda \in I \cap C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T} \right\} , $$ we find that $$ \int_{\mathbb T} \hat{\beta}_{\lambda}(q(y^*y)) \ d \lambda = q\left(\int_{\mathbb T} \beta_{\lambda}(y^*y) \ d\lambda \right) = 0 . $$ Since $\int_{\mathbb T} \hat{\beta}_{\lambda}( \cdot ) \ d \lambda$ is faithful we conclude that $q(y) = 0$, i.e. $y \in I$.
This establishes the non-trivial part of the asserted identity. \end{proof} \end{lemma} \begin{lemma}\label{intersectunique} Let $I,I'$ be gauge invariant ideals in $C^*_r\left(\Gamma_{\varphi}\right)$. Then $$ I \cap D_{R_{\varphi}} \subseteq I' \cap D_{R_{\varphi}} \ \Rightarrow \ I \subseteq I' . $$ \begin{proof} Assume that $I \cap D_{R_{\varphi}} \subseteq I' \cap D_{R_{\varphi}}$. It follows from Lemma \ref{A7} that $I \cap C^*_r\left(R_{\varphi}\right) \subseteq I' \cap C^*_r\left(R_{\varphi}\right)$. Then \begin{equation*} \begin{split} &I \cap {V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k = {V_{\varphi}^*}^k\left(I \cap C^*_r\left(R_{\varphi}\right)\right)V_{\varphi}^k \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \subseteq {V_{\varphi}^*}^k\left(I' \cap C^*_r\left(R_{\varphi}\right)\right)V_{\varphi}^k = I' \cap {V_{\varphi}^*}^kC^*_r\left(R_{\varphi}\right)V_{\varphi}^k \end{split} \end{equation*} for all $k \in \mathbb N$. Hence Lemma \ref{kalgs} implies that $I \cap C_r^*\left(\Gamma_{\varphi}\right)^{\mathbb T} \subseteq I' \cap C_r^*\left(\Gamma_{\varphi}\right)^{\mathbb T}$. It follows then from Lemma \ref{ident} that $I \subseteq I'$. \end{proof} \end{lemma} \begin{prop}\label{gaugeideals} The map $J \mapsto \widehat{J}$, where $$ \widehat{J} = \left\{ a \in C^*_r\left(\Gamma_{\varphi}\right) : \ P_{\Gamma_{\varphi}}(a^*a) \in J\right\}, $$ is a bijection from the $\Gamma_{\varphi}$-invariant ideals of $D_{\Gamma_{\varphi}}$ onto the gauge invariant ideals of $C^*_r\left(\Gamma_{\varphi}\right)$. Its inverse is the map $I \mapsto I \cap D_{\Gamma_{\varphi}}$. \begin{proof} Since $P_{\Gamma_{\varphi}} \circ \beta = P_{\Gamma_{\varphi}}$ the ideal $\widehat{J}$ is gauge invariant.
It follows from Lemma 2.13 of \cite{Th1} that $\widehat{J} \cap D_{\Gamma_{\varphi}} = J$ so it suffices to show that \begin{equation}\label{gaugeb} \widehat{I \cap D_{\Gamma_{\varphi}}} = I \end{equation} when $I$ is a gauge invariant ideal in $C^*_r\left(\Gamma_{\varphi}\right)$. It follows from Lemma 2.13 of \cite{Th1} that $\widehat{I \cap D_{\Gamma_{\varphi}}} \cap D_{\Gamma_{\varphi}} = I \cap D_{\Gamma_{\varphi}}$. Since $D_{R_{\varphi}} \subseteq D_{\Gamma_{\varphi}}$ this implies that $\widehat{I \cap D_{\Gamma_{\varphi}}} \cap D_{R_{\varphi}} = I \cap D_{R_{\varphi}}$. Then (\ref{gaugeb}) follows from Lemma \ref{intersectunique}. \end{proof} \end{prop} To simplify notation, set $D = D_{\Gamma_{\phi}} = C(Y)$. Every ideal $I$ in $C^*_r\left(\Gamma_{\phi}\right)$ determines a closed subset $\rho(I)$ of $Y$ defined such that \begin{equation}\label{rho} \rho(I) = \left\{ y \in Y : \ f(y) = 0 \ \forall f \in I \cap D \right\} . \end{equation} We say that a subset $F \subseteq Y$ is \emph{totally $\phi$-invariant} when $\phi^{-1}(F) = F$. \begin{lemma}\label{psiinv} $\rho(I)$ is totally $\phi$-invariant for every ideal $I$ in $C^*_r\left(\Gamma_{\phi}\right)$. \begin{proof} It suffices to show that $Y\setminus \rho(I)$ is totally $\phi$-invariant. Assume first that $x\in Y \setminus \rho(I)$. Then there is an $f\in I\cap D$ such that $f(x)\neq 0$. Choose an open bisection $W \subseteq \Gamma_{\phi}$ such that $(x,1,\phi(x)) \in W$. Choose then $\eta\in C_c(\Gamma_\phi)$ such that $\eta(x,1,\phi(x))=1$ and $\operatorname{supp} \eta \subseteq W$. It is not difficult to check that $\eta^*f\eta\in D$ and that $\eta^*f\eta(\phi(x))= f(x)\ne 0$, and since $\eta^*f\eta\in I$, it follows that $\phi(x)\in Y \setminus \rho(I)$. Assume then that $\phi(x)\in Y \setminus \rho(I)$. Then there is a $g\in I\cap D$ such that $g(\phi(x))\ne 0$.
Choose an open bisection $W \subseteq \Gamma_{\phi}$ such that $(x,1,\phi(x)) \in W$ and $\eta\in C_c(\Gamma_\phi)$ such that $\eta(x,1,\phi(x))=1$ and $\operatorname{supp} \eta \subseteq W$. Then $\eta g\eta^*\in D$ and $\eta g \eta^*(x)= g(\phi(x))\ne 0$, and since $\eta g\eta^*\in I$, this shows that $x \in Y\backslash \rho(I)$, proving that $\phi^{-1}\left(\rho(I)\right) = \rho(I)$. \end{proof} \end{lemma} Thus every ideal in $C^*_r\left(\Gamma_{\phi}\right)$ gives rise to a closed totally $\phi$-invariant subset of $Y$. To go in the other direction, let $F$ be a closed totally $\phi$-invariant subset of $Y$. Then $Y\backslash F$ is open and totally $\phi$-invariant so that the reduction $\Gamma_{\phi}|_{Y \backslash F}$ is an \'etale groupoid in its own right, cf. \cite{An}. In fact, $\phi$ restricts to surjective local homeomorphisms $\phi : Y \backslash F \to Y \backslash F$ and $\phi : F \to F$, and $$ \Gamma_{\phi}|_{Y \backslash F} = \Gamma_{\phi|_{Y \backslash F}} . $$ Note that $C^*_r\left( \Gamma_{\phi}|_{Y \backslash F}\right) = C^*_r\left( \Gamma_{\phi|_{Y \backslash F}}\right)$ is an ideal in $C^*_r\left(\Gamma_{\phi}\right)$ because $Y \backslash F$ is totally $\phi$-invariant. \begin{prop}\label{prop:canonic} (Cf. Proposition II.4.5 of \cite{Re}.) Let $F$ be a non-empty, closed and totally $\phi$-invariant subset of $Y$. There is then a surjective $*$-homomorphism $\pi_F: C_r^*(\Gamma_\phi)\to C_r^*(\Gamma_{\phi|_F})$ which extends the restriction map $C_c\left(\Gamma_{\phi}\right) \to C_c\left(\Gamma_{\phi|_F}\right)$ and has the property that $\ker \pi_F = C^*_r\left(\Gamma_{\phi|_{Y \backslash F}}\right)$, i.e. \begin{equation*} \xymatrix{ 0 \ar[r] & C^*_r\left(\Gamma_{\phi|_{Y \backslash F}}\right) \ar[r] & C_r^*(\Gamma_\phi) \ar[r]^-{\pi_F} & C_r^*(\Gamma_{\phi|_F}) \ar[r] & 0} \end{equation*} is exact. Furthermore, \begin{equation}\label{rhoF} \rho(\ker\pi_F)=F.
\end{equation} \end{prop} \begin{proof} Let $\dot{\pi_F} : C_c\left(\Gamma_{\phi}\right) \to C_c\left(\Gamma_{\phi|_F}\right)$ denote the restriction map, which is surjective by Tietze's extension theorem. By using that $F$ is totally $\phi$-invariant, it follows straightforwardly that $\dot{\pi_F}$ is a $*$-homomorphism. Since $\pi_x \circ \dot{\pi_F} = \pi_x$ when $x \in F$, it follows that $\dot{\pi_F}$ extends by continuity to a $*$-homomorphism $\pi_F : C_r^*(\Gamma_\phi)\to C_r^*(\Gamma_{\phi|_F})$ which is surjective because $\dot{\pi_F}$ is. To complete the proof observe that \begin{equation*}\label{estblis} \ker \pi_F \cap D = C_0\left(Y \backslash F\right) = C^*_r\left(\Gamma_{\phi|_{Y \backslash F}}\right) \cap D . \end{equation*} The first identity shows that (\ref{rhoF}) holds, and since $\ker \pi_F$ and $C^*_r\left(\Gamma_{\phi|_{Y \backslash F}}\right)$ are both gauge-invariant ideals, the second shows that they are identical by Lemma \ref{intersectunique}. \end{proof} By combining Proposition \ref{gaugeideals}, Lemma \ref{psiinv} and Proposition \ref{prop:canonic} we obtain the following. \begin{thm}\label{psi-invariant} The map $\rho$ is a bijection from the gauge-invariant ideals in $C^*_r\left(\Gamma_{\phi}\right)$ onto the set of closed totally $\phi$-invariant subsets of $Y$. The inverse is the map which sends a closed totally $\phi$-invariant subset $F \subseteq Y$ to the ideal $$ \ker \pi_F = \left\{ a \in C^*_r\left(\Gamma_{\phi}\right) : \ P_{\Gamma_{\phi}}(a^*a)(x) = 0 \ \forall x \in F \right\} . $$ \end{thm} We remark that since the isomorphism (\ref{basiciso}) is equivariant with respect to the gauge actions, Theorem \ref{psi-invariant} gives also a description of the gauge invariant ideals in $C^*_r\left(\Gamma_{\varphi}\right)$, as a complement to the one of Proposition \ref{gaugeideals}. \subsection{The primitive ideals} We are now in a position to obtain a complete description of the primitive ideals of $C^*_r\left(\Gamma_{\phi}\right)$.
Much of what we do is merely a translation of Katsura's description of the primitive ideals in the more general $C^*$-algebras considered by him in \cite{K}. Recall that because we only deal with separable $C^*$-algebras the primitive ideals are the same as the prime ideals, cf. 3.13.10 and 4.3.6 in \cite{Pe}. \begin{lemma}\label{prop:ideal-gen} Let $I$ be an ideal in $C_r^*(\Gamma_\phi)$ and let $A$ be a closed totally $\phi$-invariant subset of $Y$. If $\rho(I)\subseteq A$, then $\ker\pi_A\subseteq I$. \end{lemma} \begin{proof} Since $\rho(I)\subseteq A$ it follows from the Stone-Weierstrass theorem that $C_0(Y\setminus A)\subseteq I \cap C(Y)$. Let $\left\{i_n\right\}$ be an approximate unit in $C_0(Y \backslash A)$. It follows from Proposition \ref{prop:canonic} that $\{i_n\}$ is also an approximate unit in $\ker \pi_A$. Since $\{i_n\} \subseteq I$ it follows that $\ker\pi_A\subseteq I$. \end{proof} We say that a closed totally $\phi$-invariant subset $A$ of $Y$ is \emph{prime} when it has the property that if $B$ and $C$ are also closed and totally $\phi$-invariant subsets of $Y$ and $A\subseteq B\cup C$, then either $A\subseteq B$ or $A\subseteq C$. Let ${\mathcal{M}}:=\{A\subseteq Y : A\text{ is non-empty, closed, totally $\phi$-invariant and prime}\}$. For $x\in Y$ let $$ \orb(x) =\{y\in Y : \exists m,n\in{\mathbb{N}}:\phi^n(x)=\phi^m(y)\}. $$ We call $\orb(x)$ the \emph{total $\phi$-orbit of $x$}. \begin{prop}(Cf. Propositions 4.13 and 4.4 of \cite{K}.)\label{glemt} \begin{equation*} {\mathcal{M}}=\{\overline{\orb(x)} : x\in Y\}. \end{equation*} \end{prop} \begin{proof} It is clear that $\overline{\orb(x)}\in{\mathcal{M}}$ for every $x\in Y$. Assume that $A\in{\mathcal{M}}$ and let $\{U_k\}_{k=1}^\infty$ be a basis for the topology of $A$.
We will show by induction that we can choose compact neighbourhoods $\{C_k\}_{k=0}^\infty$ and $\{C_k'\}_{k=0}^\infty$ in $A$ and positive integers $(n_k)_{k=0}^\infty$ and $(n_k')_{k=0}^\infty$ such that $C_k\subseteq U_k$ and $C_k'\subseteq \phi^{n_{k-1}}(C_{k-1})\cap \phi^{n'_{k-1}}(C'_{k-1})$ for $k\ge 1$. For this set $C_0=C'_0=A$. Assume then that $n\ge 1$ and that $C_1,\dots,C_n$, $C'_1,\dots,C'_n$, $n_0,\dots,n_{n-1}$ and $n'_0,\dots,n'_{n-1}$ satisfying the conditions above have been chosen. Choose non-empty open subsets $V_n\subseteq C_n$ and $V'_n\subseteq C'_n$. We then have that \begin{equation*} \bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\text{ and }\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n)) \end{equation*} are non-empty open and totally $\phi$-invariant subsets of $A$, and thus that \begin{equation} \label{eq:1} A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\text{ and }A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n)) \end{equation} are closed, totally $\phi$-invariant subsets of $Y$. Since $A$ is prime and is not contained in either of the sets from \eqref{eq:1}, it follows that $A$ is not contained in \begin{equation*} \left(A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\right)\bigcup \left(A\setminus\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n)) \right), \end{equation*} whence \begin{equation*} \left(\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V_n))\right)\bigcap \left(\bigcup_{l,m=0}^\infty\phi^{-l}(\phi^m(V'_n))\right) \ne\emptyset. \end{equation*} It follows that there are positive integers $n_n$ and $n'_n$ such that $\phi^{n_n}(V_n)\cap\phi^{n'_n}(V'_n)$ is non-empty. Thus we can choose a compact neighbourhood $C_{n+1}\subseteq U_{n+1}$ and a compact neighbourhood $C'_{n+1}\subseteq \phi^{n_n}(V_n)\cap\phi^{n'_n}(V'_n)\subseteq \phi^{n_n}(C_n)\cap\phi^{n'_n}(C'_n)$, which is what is required for the induction step.
It is easy to check that \begin{equation*} C'_0\cap\phi^{-n'_0}(C'_1)\cap\dots\cap\phi^{-n'_0- \dots -n'_k}(C'_{k+1}),\ k=0,1,\dots \end{equation*} is a decreasing sequence of non-empty compact sets. It follows that there is an \begin{equation*} x\in \bigcap_{k=0}^\infty \phi^{-n'_0-\dots -n'_k}(C'_{k+1})\cap C'_0. \end{equation*} We have for every $k\in{\mathbb{N}}$ that $\phi^{n'_0+\dots+n'_k}(x)\in C'_{k+1}\subseteq \phi^{n_k}(C_k)\subseteq \phi^{n_k}(U_k)$, and it follows that $\orb(x)$ is dense in $A$, and thus that $A=\overline{\orb(x)}$. \end{proof} \begin{prop}(Cf. Proposition 9.3 of \cite{K}.) \label{prop:prime} Assume that $I$ is a prime ideal in $C_r^*(\Gamma_\phi)$. It follows that $\rho(I)\in{\mathcal{M}}$. \end{prop} \begin{proof} It follows from Lemma \ref{psiinv} that $\rho(I)$ is closed and totally $\phi$-invariant. To show that $\rho(I)$ is also prime, assume that $B$ and $C$ are closed totally $\phi$-invariant subsets such that $\rho(I)\subseteq B\cup C$. It follows then from Lemma \ref{prop:ideal-gen} that $\ker(\pi_{B\cup C})\subseteq I$. Since $\ker \pi_B \cap \ker \pi_C \cap D = C_0(Y \backslash B) \cap C_0(Y \backslash C) = C_0\left(Y \backslash (B\cup C)\right) = \ker \pi_{B \cup C} \cap D$ it follows from Lemma \ref{intersectunique} that $\ker \pi_B \cap \ker \pi_C = \ker \pi_{B\cup C}$. Therefore $\ker(\pi_B)\subseteq I$ or $\ker(\pi_C)\subseteq I$ since $I$ is prime. Hence $\rho(I)\subseteq B$ or $\rho(I)\subseteq C$, thanks to (\ref{rhoF}). \end{proof} We say that a point $x\in Y$ is $\phi$-periodic if $\phi^n(x)=x$ for some $n>0$. Let $\per $ denote the set of $\phi$-periodic points $x\in Y$ which are isolated in $\orb(x)$ and let $$ {\mathcal{M}}_{\per} =\{\overline{\orb(x)} : x\in \per\} $$ and $$ {\mathcal{M}}_{Aper} ={\mathcal{M}}\setminus{\mathcal{M}}_{\per}. $$ Let $A \subseteq Y$ be a closed totally $\phi$-invariant subset.
We say that $\phi|_A$ is \emph{topologically free} if the set of $\phi$-periodic points in $A$ has empty interior in $A$. \begin{prop} (Cf. Proposition 11.3 of \cite{K}.) \label{prop:free} Let $A\in{\mathcal{M}}$. Then $\phi|_A$ is topologically free if and only if $A\in{\mathcal{M}}_{Aper}$. \end{prop} \begin{proof} We will show that $\phi|_A$ is not topologically free if and only if $A\in{\mathcal{M}}_{\per}$. If $x\in\per$ and $A=\overline{\orb(x)}$, then $\phi|_A$ is not topologically free because $x$ is periodic and isolated in $\orb(x)$ and thus in $A$. Assume then that $\phi|_A$ is not topologically free. There is then a non-empty open subset $U\subseteq A$ such that every element of $U$ is $\phi$-periodic. Choose $x\in A$ such that $A=\overline{\orb(x)}$. Then $U\cap\orb(x)\ne\emptyset$. Let $y\in U\cap\orb(x)$. Then $y$ is $\phi$-periodic and $\overline{\orb(y)}=\overline{\orb(x)}=A$, so if we can show that $y$ is isolated in $\orb(y)$, then we have that $A\in{\mathcal{M}}_{\per}$. Since $y$ is $\phi$-periodic there is an $n\ge 1$ such that $\phi^n(y)=y$. We claim that $U\subseteq\{y,\phi(y),\dots,\phi^{n-1}(y)\}$. It will then follow that $y$ is isolated in $A$ and thus in $\orb(y)$. Assume that $U\setminus \{y,\phi(y),\dots,\phi^{n-1}(y)\}$ is non-empty. Since it is also open it follows that $\orb(y)\cap U\setminus \{y,\phi(y),\dots,\phi^{n-1}(y)\}$ is non-empty. Let $z\in \orb(y)\cap U\setminus \{y,\phi(y),\dots,\phi^{n-1}(y)\}$. Since $z\in U$ there is an $m\ge 1$ so that $\phi^m(z)=z$, and since $z\in\orb(y)$ there are $k,l\in{\mathbb{N}}$ such that $\phi^k(z)=\phi^l(y)$. But then $z=\phi^{mk}(z)=\phi^{(m-1)k+l}(y)\in \{y,\phi(y),\dots,\phi^{n-1}(y)\}$ and we have a contradiction. It follows that $U\subseteq\{y,\phi(y),\dots,\phi^{n-1}(y)\}$. \end{proof} In particular, it follows from Proposition \ref{prop:free} that the elements of ${\mathcal{M}}_{Aper}$ are infinite sets. \begin{prop} (Cf. Proposition 11.5 of \cite{K}.) \label{prop:aper} Let $A\in{\mathcal{M}}_{Aper}$.
Then $\ker\pi_A$ is the unique ideal $I$ in $C_r^*(\Gamma_\phi)$ with $\rho(I)=A$. \end{prop} \begin{proof} We have already seen in Proposition \ref{prop:canonic} that $\rho(\ker\pi_A)=A$. Assume that $I$ is an ideal in $C_r^*(\Gamma_\phi)$ with $\rho(I)=A$. It follows then from Lemma \ref{prop:ideal-gen} that $\ker\pi_A\subseteq I$. Thus it suffices to show that $\pi_A(I)=\{0\}$. Note that $\pi_A(I)$ is an ideal in $C_r^*(\Gamma_{\phi|_A})$ with $\rho(\pi_A(I))=A$. It follows that $\pi_A(I)\cap C(A)=\{0\}$. To conclude from this that $\pi_A(I) = \{0\}$ we will show that the points of $A$ whose isotropy group in $\Gamma_{\phi|_A}$ is trivial are dense in $A$. It will then follow from Lemma 2.15 of \cite{Th1} that $\pi_A(I)=\{0\}$ because $\pi_A(I) \cap C(A) = \{0\}$. That the points of $A$ with trivial isotropy in $\Gamma_{\phi|_A}$ are dense in $A$ is established as follows: The points in $A$ with non-trivial isotropy in $\Gamma_{\phi|_A}$ are the pre-periodic points in $A$. Let $\operatorname{Per}_n A$ denote the set of points $x$ in $A$ with $\phi^n(x) = x$ and note that $\operatorname{Per}_n A$ is closed and has empty interior since $\phi|_A$ is topologically free by Proposition \ref{prop:free}. It follows that $A \backslash \phi^{-k}\left(\operatorname{Per}_n A\right)$ is open and dense in $A$ for all $k,n$. By the Baire category theorem it follows that $$ A \backslash \left( \bigcup_{k,n} \phi^{-k}\left(\operatorname{Per}_n A\right) \right) = \bigcap_{k,n} A \backslash \phi^{-k}\left(\operatorname{Per}_n A\right) $$ is dense in $A$, proving the claim. \end{proof} \begin{lemma}\label{prop2} Let $A\in{\mathcal{M}}_{Aper}$. Then $\ker\pi_A$ is a primitive ideal. \begin{proof} Let $A = \overline{\orb(x)}$. To show that $\ker \pi_A$ is primitive it suffices to show that it is prime, cf. Proposition 4.3.6 of \cite{Pe}. Equivalently, it suffices to show that $C^*_r\left(\Gamma_{\phi|_A}\right)$ is a prime $C^*$-algebra.
Consider therefore two ideals $I_j \subseteq C^*_r\left(\Gamma_{\phi|_A}\right)$, $j = 1,2$, such that $I_1I_2 = \{0\}$. Then $$ \left\{ y \in A : \ f(y) = 0 \ \forall f \in I_1 \cap C(A) \right\} \cup \left\{ y \in A : \ f(y) = 0 \ \forall f \in I_2 \cap C(A) \right\} = A . $$ In particular, $x$ must be in $\left\{ y \in A : \ f(y) = 0 \ \forall f \in I_j \cap C(A) \right\}$, either for $j = 1$ or $j =2$. It follows then from Lemma \ref{psiinv}, applied to $\phi|_A$, that $$ A = \left\{ y \in A : \ f(y) = 0 \ \forall f \in I_j \cap C(A) \right\}. $$ Hence $I_j = \{0\}$ by Proposition \ref{prop:aper} applied to $\phi|_A$. \end{proof} \end{lemma} Let $A\in{\mathcal{M}}_{\per}$. Choose $x\in\per$ such that $\overline{\orb(x)}=A$, and let $n$ be the minimal period of $x$. Then $x$ is isolated in $A$. It follows that the characteristic functions $1_{(x,0,x)}$ and $1_{(x,n,x)}$ belong to $C_r^*(\Gamma_{\phi|_A})$. Let $p_x=1_{(x,0,x)}$ and $u_x=1_{(x,n,x)}$. For $w\in{\mathbb{T}}$ let $\dot{P}_{x,w}$ denote the ideal in $C_r^*(\Gamma_{\phi|_A})$ generated by $u_x-wp_x$. \begin{lemma} \label{remark:ens} Suppose that $x,y\in\per$ and that $\overline{\orb(x)}=\overline{\orb(y)}$ and let $w\in{\mathbb{T}}$. Then $\dot{P}_{x,w}=\dot{P}_{y,w}$. \end{lemma} \begin{proof} By symmetry, it is enough to show that $\dot{P}_{y,w}\subseteq\dot{P}_{x,w}$. Since $y$ is isolated in $\orb(y)$, it is isolated in $\overline{\orb(y)}=\overline{\orb(x)}$. Thus $y$ must belong to $\orb(x)$. This means that there are $k,l\in{\mathbb{N}}$ such that $\phi^k(x)=\phi^l(y)$. Since $y$ is $\phi$-periodic, it follows that there is an $i\in{\mathbb{N}}$ such that $y=\phi^i(x)$. Let $A=\overline{\orb(y)}=\overline{\orb(x)}$. Since $x$ and $y$ are isolated in $A$ we have that $1_{(x,i,y)}\in C_r^*(\Gamma_{\phi|_A})$. Let $v=1_{(x,i,y)}$. It is easy to check that $v^*p_xv=p_y$ and that $v^*u_xv=u_y$. Thus $u_y-wp_y=v^*(u_x-wp_x)v\in \dot{P}_{x,w}$ and it follows that $\dot{P}_{y,w}\subseteq\dot{P}_{x,w}$.
\end{proof} Let $A\in{\mathcal{M}}_{\per}$ and let $w\in{\mathbb{T}}$. It follows from Lemma \ref{remark:ens} that the ideal $\dot{P}_{x,w}$ does not depend on the particular choice of $x \in A \cap \per$, as long as $\overline{\orb(x)}=A$. We will therefore simply write $\dot{P}_{A,w}$ for $\dot{P}_{x,w}$. We then define $P_{A,w}$ to be the ideal $\pi_A^{-1}(\dot{P}_{A,w})$ in $C_r^*(\Gamma_\phi)$. \begin{prop} (Cf. Proposition 11.13 of \cite{K}.) \label{prop:per} Let $A\in{\mathcal{M}}_{\per}$. Then \begin{equation*} w\mapsto P_{A,w} \end{equation*} is a bijection between ${\mathbb{T}}$ and the set of primitive ideals $P$ in $C_r^*(\Gamma_\phi)$ with $\rho(P)=A$. \end{prop} \begin{proof} The map $P\mapsto\pi_A(P)$ gives a bijection between the primitive ideals in $C_r^*(\Gamma_\phi)$ with $\ker\pi_A\subseteq P$ and the primitive ideals in $C_r^*(\Gamma_{\phi|_A})$, cf. Theorem 4.1.11 (ii) in \cite{Pe}. The inverse of this bijection is the map $Q\mapsto \pi_A^{-1}(Q)$. If $P$ is a primitive ideal in $C_r^*(\Gamma_\phi)$ with $\rho(P)=A$, it follows from Lemma \ref{prop:ideal-gen} that $\ker\pi_A\subseteq P$. In addition $\rho(\pi_A(P))=A$. If on the other hand $Q$ is a primitive ideal in $C_r^*(\Gamma_{\phi|_A})$ with $\rho(Q)=A$, then $\pi_A^{-1}(Q)$ is a primitive ideal in $C_r^*(\Gamma_\phi)$ and $\rho(\pi_A^{-1}(Q))=A$. Thus $P\mapsto\pi_A(P)$ gives a bijection between the primitive ideals in $C_r^*(\Gamma_\phi)$ with $\rho(P)=A$ and the primitive ideals $Q$ in $C_r^*(\Gamma_{\phi|_A})$ with $\rho(Q)=A$. Choose $x\in\per$ such that $\overline{\orb(x)}=A$. Let $\langle p_x\rangle$ be the ideal in $C_r^*(\Gamma_{\phi|_A})$ generated by $p_x$. Observe that $\dot{P}_{A,w} \subseteq \langle p_x \rangle$ for all $w \in \mathbb T$ since $p_x\left(u_x -wp_x\right) = u_x - wp_x$.
The map $Q\mapsto Q\cap\langle p_x\rangle$ gives a bijection between the primitive ideals in $C_r^*(\Gamma_{\phi|_A})$ with $\langle p_x\rangle\nsubseteq Q$ and the primitive ideals in $\langle p_x\rangle$, cf. Theorem 4.1.11 (ii) in \cite{Pe}. We claim that $\langle p_x\rangle\nsubseteq Q$ if and only if $\rho(Q)=A$. To see this, let $Q$ be an ideal in $C_r^*(\Gamma_{\phi|_A})$. If $p_x\in Q$, then $x\notin \rho(Q)$ and $\rho(Q)\ne A$. If on the other hand $\rho(Q)\ne A$, then $x\notin \rho(Q)$ because $\rho(Q)$ is closed and totally $\phi$-invariant and $\overline{\orb(x)}=A$. It follows that there is an $f\in Q\cap C(A)$ such that $f(x)\ne 0$, whence $p_x\in Q$. This proves the claim and it follows that $Q\mapsto Q\cap\langle p_x\rangle$ gives a bijection between the primitive ideals $Q$ in $C_r^*(\Gamma_{\phi|_A})$ with $\rho(Q)=A$ and the primitive ideals in $\langle p_x\rangle$. The $C^*$-algebra $\langle p_x\rangle$ is Morita equivalent to $p_xC_r^*(\Gamma_{\phi|_A})p_x$ via the $p_xC_r^*(\Gamma_{\phi|_A})p_x$-$\langle p_x\rangle$ imprimitivity bimodule $p_xC_r^*(\Gamma_{\phi|_A})$, and therefore $T\mapsto p_xTp_x$ gives a bijection between the primitive ideals $T$ in $\langle p_x\rangle$ and the primitive ideals in $p_xC_r^*(\Gamma_{\phi|_A})p_x$, cf. Proposition 3.24 and Corollary 3.33 in \cite{RW}. Now note that \begin{equation*} \{(x',n',y') \in\Gamma_{\phi|_A} : x'=y' =x \}=\{(x,kn,x) : k\in{\mathbb{Z}}\} \end{equation*} where $n$ is the smallest positive integer such that $\phi^n(x)=x$. It follows that $p_xC_r^*(\Gamma_{\phi|_A})p_x$ is isomorphic to $C({\mathbb{T}})$ under an isomorphism taking the canonical unitary generator of $C(\mathbb T)$ to $u_x$. In this way we conclude that the primitive ideals of $p_xC_r^*(\Gamma_{\phi|_A})p_x$ are in one-to-one correspondence with $\mathbb T$ under the map $$ \mathbb T \ni w \mapsto p_x\overline{C_r^*(\Gamma_{\phi|_A}) \left(u_x-wp_x\right)C_r^*(\Gamma_{\phi|_A})}p_x = p_x \dot{P}_{A,w} p_x.
$$ This completes the proof. \end{proof} By combining Propositions \ref{prop:prime}, \ref{prop:aper} and \ref{prop:per} we get the following theorem. \begin{thm}\label{primitive} The set of primitive ideals in $C_r^*(\Gamma_\phi)$ is the disjoint union of $\{\ker\pi_A : A\in{\mathcal{M}}_{Aper}\}$ and $\{P_{A,w} : A\in{\mathcal{M}}_{\per},\ w\in{\mathbb{T}}\}$. \end{thm} \subsection{The maximal ideals} The next step is to identify the maximal ideals among the primitive ones. \begin{lemma}\label{intcx} Assume that not all points of $Y$ are pre-periodic and that $C^*_r\left(\Gamma_{\phi}\right)$ contains a non-trivial ideal. It follows that there is a non-trivial gauge-invariant ideal $J$ in $C^*_r\left(\Gamma_{\phi}\right)$ such that $J \cap C(Y) \neq \{0\}$. \begin{proof} Let $I$ be a non-trivial ideal in $C^*_r\left(\Gamma_{\phi}\right)$. Assume first that $I \cap C(Y) = \{0\}$. Since we assume that not all points of $Y$ are pre-periodic we can apply Lemma 2.16 of \cite{Th1} to conclude that $J_0 = \overline{P_{\Gamma_{\phi}}(I)}$ is a non-trivial $\Gamma_{\phi}$-invariant ideal in $C(Y)$. Then $$ J = \left\{ a \in C^*_r\left(\Gamma_{\phi}\right) : \ P_{\Gamma_{\phi}}(a^*a) \in J_0 \right\} $$ is a non-trivial gauge-invariant ideal by Proposition \ref{gaugeideals}, and $J \cap C(Y) = J_0 \neq \{0\}$. Note that $J$ contains $I$ in this case. If $I \cap C(Y) \neq \{0\}$ we set $$ J = \left\{ a \in C^*_r\left(\Gamma_{\phi}\right) : \ P_{\Gamma_{\phi}}(a^*a) \in I \cap C(Y) \right\} $$ which is a non-trivial ideal in $C^*_r\left(\Gamma_{\phi}\right)$ such that $J \cap C(Y) = I \cap C(Y)$ by Lemma 2.13 of \cite{Th1}. Since $J$ is gauge-invariant, this completes the proof. \end{proof} \end{lemma} \begin{lemma}\label{minimal} Let $F \subseteq Y$ be a minimal closed non-empty totally $\phi$-invariant subset.
Then either \begin{enumerate} \item[1)] $F \in \mathcal M_{Aper}$ and $\ker \pi_F$ is a maximal ideal, or \item[2)] $F = \orb(x) = \left\{ \phi^n(x) : n \in \mathbb N\right\}$, where $x \in \operatorname{Per}$. \end{enumerate} \begin{proof} It follows from the minimality of $F$ that $\overline{\orb(x)}=F$ for all $x\in F$. We will show that 1) holds when $F$ does not contain an element of $\per$, and that 2) holds when it does. Assume first that $F$ does not contain any elements of $\per$. Then $F\in \mathcal M_{Aper}$. If there is a proper ideal $I$ in $C_r^*(\Gamma_\phi)$ such that $\ker \pi_F \subsetneq I$, then $\pi_F(I)$ is a non-trivial ideal in $C^*_r\left(\Gamma_{\phi|_F}\right)$, and then it follows from Lemma \ref{intcx} that there is a non-trivial gauge-invariant ideal $J$ in $C^*_r\left(\Gamma_{\phi|_F}\right)$. By Theorem \ref{psi-invariant}, $\rho(\pi_F^{-1}(J))$ is then a non-trivial closed totally $\phi$-invariant subset of $F$, contradicting the minimality of $F$. Thus 1) holds when $F$ does not contain an element from $\per$. Assume instead that there is an $x\in F\cap\operatorname{Per}$. Then $x$ is isolated in $\orb(x)$, and thus in $F$. It follows that $F=\orb(x)$, because if $y\in F\setminus\orb(x)$ we would have that $x\notin \overline{\orb(y)}=F$, which is absurd. Since $F$ is compact, $\orb(x)$ must be finite. Since $\phi$ is surjective we must then have that $\orb(x) = \left\{ \phi^n(x) : n \in \mathbb N\right\}$. Thus 2) holds if $F$ contains an element from $\operatorname{Per}$. \end{proof} \end{lemma} \begin{lemma}\label{maxideal1} Let $I$ be a maximal ideal in $C^*_r\left(\Gamma_{\phi}\right)$. Then either $I = \ker \pi_F$ for some minimal closed totally $\phi$-invariant subset $F \in \mathcal M_{Aper}$, or $I = P_{\orb(x),w}$ for some $w \in \mathbb T$ and some $x \in \operatorname{Per}$ such that $\orb(x) = \left\{\phi^n(x): \ n \in \mathbb N\right\}$.
\begin{proof} Since $I$ is also primitive we know from Theorem \ref{primitive} that $I = \ker \pi_A$ for some $A \in \mathcal M_{Aper}$ or $I = P_{A,w}$ for some $A \in \mathcal M_{\per}$ and some $w \in \mathbb T$. In the first case it follows that $A$ must be a minimal closed totally $\phi$-invariant subset since $I$ is a maximal ideal. Assume then that $I = P_{A,w}$ for some $A \in \mathcal M_{\per}$ and some $w \in \mathbb T$. In the notation from the proof of Proposition \ref{prop:per}, observe that $\dot{P}_{A,w} \subseteq \langle p_x \rangle$ since $p_x\left(u_x-wp_x\right) = u_x -wp_x$. Note that $\dot{P}_{A,w} \neq \langle p_x \rangle$ because the latter of these ideals is gauge-invariant and the first is not. By maximality of $I$ this implies that $\langle p_x \rangle = C^*_r\left(\Gamma_{\phi|_A}\right)$. On the other hand, $\orb(x)$ is an open totally $\phi$-invariant subset of $A$ and $p_x \in C^*_r\left( \Gamma_{\phi|_{\orb(x)}}\right)$, so we see that $\langle p_x \rangle = C^*_r\left(\Gamma_{\phi|_A}\right) = C^*_r\left( \Gamma_{\phi|_{\orb(x)}}\right)$. This implies that $$ C_0\left(\orb(x)\right) = C(A) \cap C^*_r\left( \Gamma_{\phi|_{\orb(x)}}\right) = C(A), $$ and hence that $A = \orb(x)$. Compactness of $A$ implies that $\orb(x)$ is finite and surjectivity of $\phi$ that $\orb(x) = \left\{\phi^n(x): \ n \in \mathbb N\right\}$. \end{proof} \end{lemma} \begin{thm}\label{maximal2} The maximal ideals in $C^*_r\left(\Gamma_{\phi}\right)$ consist of the primitive ideals of the form $\ker \pi_F$ for some infinite minimal closed totally $\phi$-invariant subset $F \subseteq Y$ and the primitive ideals $P_{A,w}$ for some $w \in {\mathbb{T}}$, where $A = \orb(x) = \left\{\phi^n(x) : \ n \in \mathbb N\right\}$ for a $\phi$-periodic point $x \in Y$. \begin{proof} This follows from the last two lemmas, after the observation that a primitive ideal $P_{A,w}$ of the form described in the statement is maximal.
\end{proof} \end{thm} \begin{cor}\label{maximal3} Let $A$ be a simple quotient of $C^*_r\left(\Gamma_{\phi}\right)$. Assume $A$ is not finite dimensional. It follows that there is an infinite minimal closed totally $\phi$-invariant subset $F$ of $Y$ such that $A \simeq C^*_r\left( \Gamma_{\phi|_F}\right)$. \end{cor} To make more detailed conclusions about the simple quotients we need to restrict to the case where $Y$ is of finite covering dimension so that the result of \cite{Th3} applies. For this reason we prove first that finite dimensionality of $Y$ follows from finite dimensionality of $X$. \section{On the dimension of $Y$} Let $\operatorname{Dim} X$ and $\operatorname{Dim} Y$ denote the covering dimensions of $X$ and $Y$, respectively. The purpose of this section is to establish \begin{prop}\label{dim!!!} $\operatorname{Dim} Y \leq \operatorname{Dim} X$.\end{prop} \begin{proof}By definition $Y$ is the Gelfand spectrum of $D_{\Gamma_{\varphi}}$. Since the conditional expectation $P_{\Gamma_{\varphi}} : C^*_r\left(\Gamma_{\varphi}\right) \to D_{\Gamma_{\varphi}}$ is invariant under the gauge action, in the sense that $P_{\Gamma_{\varphi}} \circ \beta_{\lambda} = P_{\Gamma_{\varphi}}$ for all $\lambda$, it follows that \begin{equation*} D_{\Gamma_{\varphi}} = P_{\Gamma_{\varphi}}\left(C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T}\right) . \end{equation*} To make use of this description of $D_{\Gamma_{\varphi}}$ we need a refined version of (\ref{bkr}). Note first that it follows from (4.4) and (4.5) of \cite{Th1} that $V_{\varphi}C^*_r\left(R\left(\varphi^l\right)\right)V_{\varphi}^* \subseteq C^*_r\left(R\left(\varphi^{l+1}\right)\right)$ for all $l \in \mathbb N$.
Consequently $$ {V_{\varphi}^*}^k C^*_r\left(R\left(\varphi^l\right)\right) V_{\varphi}^k = {V_{\varphi}^*}^{k+1}V_{\varphi} C^*_r\left(R\left(\varphi^l\right)\right)V_{\varphi}^* V_{\varphi}^{k+1} \subseteq {V_{\varphi}^*}^{k+1} C^*_r\left(R\left(\varphi^{l+1}\right)\right) V_{\varphi}^{k+1} $$ for all $k,l \in \mathbb N$. It follows therefore from (\ref{crux}) and (\ref{bkr}) that there are sequences $\{k_n\}$ and $\left\{l_n\right\}$ in $\mathbb N$ such that $l_n \geq k_n$, \begin{equation}\label{D17} {V_{\varphi}^*}^{k_n} C^*_r\left(R\left(\varphi^{l_n}\right)\right)V_{\varphi}^{k_n} \subseteq {V_{\varphi}^*}^{k_{n+1}} C^*_r\left(R\left(\varphi^{l_{n+1}}\right)\right)V_{\varphi}^{k_{n+1}} \end{equation} and \begin{equation}\label{D1} C^*_r\left(\Gamma_{\varphi}\right)^{\mathbb T} = \overline{ \bigcup_n {V_{\varphi}^*}^{k_n} C^*_r\left(R\left(\varphi^{l_n}\right)\right)V_{\varphi}^{k_n}}; \end{equation} we can for example use $k_n=n$ and $l_n=2n$. Let $D_n$ denote the $C^*$-subalgebra of $D_{\Gamma_{\varphi}}$ generated by $$ P_{\Gamma_{\varphi}}\left({V^*_{\varphi}}^{k_n} C^*_r\left(R\left(\varphi^{l_n} \right)\right)V_{\varphi}^{k_n}\right) $$ and let $Y_n$ be the character space of $D_n$. Note that $C(X) \subseteq D_n$ since $V_{\varphi}^{k_n}g{V_{\varphi}^*}^{k_n} \in C^*_r\left(R\left(\varphi^{l_n}\right)\right)$ and $g = P_{\Gamma_{\varphi}}\left( {V^*_{\varphi}}^{k_n} V_{\varphi}^{k_n}g {V_{\varphi}^*}^{k_n} V^{k_n}_{\varphi} \right)$ when $g \in C(X)$. There is therefore a continuous surjection $$ \pi_n : Y_n \to X $$ defined such that $g\left(\pi_n(y)\right) = y(g), \ g \in C(X)$. We claim that $\# \pi_n^{-1}(x) < \infty$ for all $x \in X$.
To show this note that by definition $D_n$ is generated as a $C^*$-algebra by functions of the form \begin{equation}\label{expresssion} \begin{split} &x \mapsto P_{\Gamma_{\varphi}}\left( {V^*_{\varphi}}^{k_n} fV_{\varphi}^{k_n}\right)(x) = \sum_{z,z' \in \varphi^{-k_n}(x)} f(z,z') \prod_{j=0}^{k_n-1} m(\varphi^j(z))^{-\frac{1}{2}} m(\varphi^j(z'))^{-\frac{1}{2}} \end{split} \end{equation} for some $f \in C^*_r\left(R\left(\varphi^{l_n}\right)\right)$. In fact, since $\operatorname{alg}^* R\left(\varphi^{l_n}\right)$ is dense in $C^*_r\left(R\left(\varphi^{l_n}\right)\right)$, already functions of the form (\ref{expresssion}) with \begin{equation}\label{expression7} f = f_1 \star f_2 \star \dots \star f_N, \end{equation} for some $f_i \in C\left(R\left(\varphi^{l_n}\right)\right), i = 1,2,\dots, N$, will generate $D_n$. Fix $x \in X$ and consider an element $y \in \pi_n^{-1}(x)$. Every $x' \in X$ defines a character $\iota_{x'}$ of $D_n$ by evaluation, viz. $\iota_{x'}(h) = h(x')$, and $\left\{\iota_{x'} : x' \in X \right\}$ is dense in $Y_n$ because the implication $$ h \in D_n, \ h(x') = 0 \ \forall x' \in X \ \Rightarrow \ h = 0 $$ holds. In particular, there is a sequence $\left\{x_l\right\}$ in $X$ such that $\lim_{l \to \infty} \iota_{x_l} = y$ in $Y_n$. Recall now from Lemma 3.6 of \cite{Th1} that there is an open neighbourhood $U$ of $x$ and open sets $V_j, j=1,2, \dots,d$, where $d = \# \varphi^{-k_n}(x)$, in $X$ such that \begin{enumerate} \item[1)] $\varphi^{-k_n}\left(\overline{U}\right) \subseteq V_1 \cup V_2 \cup \dots \cup V_d$, \item[2)] $\overline{V_i} \cap \overline{V_j} = \emptyset, \ i \neq j$, and \item[3)] $\varphi^{k_n}$ is injective on $\overline{V_j}$ for each $j$. \end{enumerate} Since $\lim_{l \to \infty} x_l = x$ in $X$ we can assume that $x_l \in U$ for all $l$. For each $l$, set $$ F_l = \left\{ j : \ \varphi^{-k_n}(x_l) \cap V_j \neq \emptyset \right\} \subseteq \left\{1,2,\dots,d\right\}.
$$ Note that there is a subset $F \subseteq \left\{1,2,\dots,d\right\}$ such that $F_l = F$ for infinitely many $l$. Passing to a subsequence we can therefore assume that $F_l = F$ for all $l$. For each $k \in F$ we define a continuous map $\lambda_k : \varphi^{k_n}\left(\overline{V_k}\right) \to \overline{V_k}$ such that $\varphi^{k_n} \circ \lambda_k (z) = z$. Set $T = \max_{z \in X} \# \varphi^{-1}(z)$. For each $j \in \left\{1,2, \dots, T\right\}$, set $$ A_j = \left\{ z \in X : \# \varphi^{-1}\left(\varphi(z)\right) = j \right\} = m^{-1}(j). $$ For each $l$ and each $k \in F$ there is a unique tuple $\left(j_0(k),j_1(k), \dots, j_{k_n-1}(k)\right) \in \left\{1,2, \dots, T\right\}^{k_n}$ such that $$ \varphi^{-k_n}(x_l) \cap V_k \cap A_{j_0(k)} \cap \varphi^{-1}\left(A_{j_1(k)}\right) \cap \varphi^{-2}\left(A_{j_2(k)}\right) \cap \dots \cap \varphi^{-k_n+1 }\left(A_{j_{k_n-1}(k)}\right) \neq \emptyset . $$ Since there are only finitely many choices we can arrange that the same tuples, $\left(j_0(k),j_1(k), \dots, j_{k_n-1}(k)\right), k \in F$, work for all $l$. Then \begin{equation}\label{expresssion2} \begin{split} &\iota_{x_l}\left(P_{\Gamma_{\varphi}}\left( {V^*_{\varphi}}^{k_n} fV_{\varphi}^{k_n}\right)\right) = \sum_{k,k' \in F} f\left(\lambda_k(x_l),\lambda_{k'}(x_l)\right) \prod_{i=0}^{k_n-1} j_i(k)^{-\frac{1}{2}} j_i(k')^{-\frac{1}{2}} \end{split} \end{equation} for all $f \in C^*_r\left(R\left(\varphi^{l_n}\right)\right)$ and all $l$. There is an open neighbourhood $U'$ of $\varphi^{l_n-k_n}(x)$ and open sets $V'_j, j=1,2, \dots,d'$, where $d' = \# \varphi^{-l_n}\left(\varphi^{l_n-k_n}(x)\right)$, in $X$ such that \begin{enumerate} \item[1')] $\varphi^{-l_n}\left(\overline{U'}\right) \subseteq V'_1 \cup V'_2 \cup \dots \cup V'_{d'}$, \item[2')] $\overline{V'_i} \cap \overline{V'_j} = \emptyset, \ i \neq j$, and \item[3')] $\varphi^{l_n}$ is injective on $\overline{V'_j}$ for each $j$.
\end{enumerate} Since $\lim_{l \to \infty} \varphi^{l_n-k_n}(x_l) = \varphi^{l_n -k_n}(x)$ we can assume that $\varphi^{l_n -k_n}(x_l) \in U'$ for all $l$. By an argument identical to the way we found $F$ above we can now find a subset $F' \subseteq \{1,2,\dots, d'\}$ such that $$ F' = \left\{ j : \varphi^{-l_n}\left(\varphi^{l_n-k_n}(x_l)\right) \cap V'_j \neq \emptyset \right\} $$ for all $l$. For $i \in F'$ we define a continuous map $\mu'_i : \varphi^{l_n}\left(\overline{V'_i}\right) \to \overline{V'_i}$ such that $\mu'_i \circ \varphi^{l_n}(z) = z$ when $z \in \overline{V'_i}$. Set $$ \mu_i = \mu'_i \circ \varphi^{l_n-k_n} $$ on $\varphi^{-(l_n-k_n)}\left(\varphi^{l_n}\left(\overline{V'_i}\right)\right)$. Assuming that $f$ has the form (\ref{expression7}) we find now that \begin{equation}\label{yrk} \begin{split} &f\left(\lambda_k(x_l),\lambda_{k'}(x_l)\right) = \\ & \sum_{i_1,i_2, \dots, i_{N-1} \in F'} f_1\left(\lambda_k(x_l), \mu_{i_1}(x_l)\right)f_2\left(\mu_{i_1}(x_l), \mu_{i_2}(x_l)\right) \dots f_N\left(\mu_{i_{N-1}}(x_l), \lambda_{k'}(x_l)\right) \end{split} \end{equation} for all $k,k' \in F$. By combining (\ref{yrk}) with (\ref{expresssion2}) we find by letting $l$ tend to infinity that $$ y\left(P_{\Gamma_{\varphi}}\left( {V^*_{\varphi}}^{k_n} fV_{\varphi}^{k_n}\right)\right) = \sum_{k,k' \in F} H_{k,k'}(x)\prod_{i=0}^{k_n-1} j_i(k)^{-\frac{1}{2}} j_i(k')^{-\frac{1}{2}}, $$ where $$ H_{k,k'}(x) = \sum_{i_1,i_2, \dots, i_{N-1} \in F'} f_1\left(\lambda_k(x), \mu_{i_1}(x)\right)f_2\left(\mu_{i_1}(x), \mu_{i_2}(x)\right) \dots f_N\left(\mu_{i_{N-1}}(x), \lambda_{k'}(x)\right) . $$ Since this expression only depends on $F,F'$ and the tuples $$ \left(j_0(k),j_1(k), \dots, j_{k_n-1}(k)\right), k \in F, $$ it follows that the number of possible values of an element from $\pi_n^{-1}(x)$ on the generators of the form (\ref{expresssion}) does not exceed $2^d2^{d'}T^{k_n}$, proving that $\# \pi_n^{-1}(x) < \infty$ as claimed.
We can then apply Theorem 4.3.6 on page 281 of \cite{En} to conclude that $\operatorname{Dim} Y_n \leq \operatorname{Dim} X$. Note that $D_n \subseteq D_{n+1}$ and $D_{\Gamma_{\varphi}} = \overline{\bigcup_n D_n}$ by (\ref{D17}) and (\ref{D1}). Hence $Y$ is the projective limit of the sequence $Y_1 \gets Y_2 \gets Y_3 \gets \dots $. Since $\operatorname{Dim} Y_n \leq \operatorname{Dim} X$ for all $n$ we conclude now from Theorem 1.13.4 in \cite{En} that $\operatorname{Dim} Y \leq \operatorname{Dim} X$. \end{proof} \section{The simple quotients} Following \cite{DS} we say that $\phi$ is \emph{strongly transitive} when for any non-empty open subset $U \subseteq Y$ there is an $n \in \mathbb N$ such that $Y = \bigcup_{j=0}^n \phi^j(U)$. By Proposition 4.3 of \cite{DS}, $C^*_r\left(\Gamma_{\phi}\right)$ is simple if and only if $Y$ is infinite and $\phi$ is strongly transitive. \begin{lemma}\label{hmzero} Assume that $\phi$ is strongly transitive but not injective. It follows that $$ \lim_{k \to \infty} \frac{1}{k} \log \left(\inf_{x \in Y}\# \phi^{-k}(x)\right) > 0. $$ \begin{proof} Note that $U = \left\{ x \in Y : \ \# \phi^{-1}(x) \geq 2 \right\}$ is open and not empty since $\phi$ is a local homeomorphism and not injective. It follows that there is an $m \in \mathbb N$ such that \begin{equation}\label{D18} \bigcup_{j=0}^{m-1} \phi^j(U) = Y \end{equation} because $\phi$ is strongly transitive. We claim that \begin{equation}\label{est1} \inf_{z \in Y} \# \phi^{-k}(z) \geq 2^{\left[\frac{k}{m}\right]} \end{equation} for all $k \in \mathbb N$ where $\left[\frac{k}{m}\right]$ denotes the integer part of $\frac{k}{m}$. This follows by induction: Assume that it is true for all $k' < k$. Consider any $z \in Y$. If $k < m$ there is nothing to prove so assume that $k \geq m$. By (\ref{D18}) we can then write $z = \phi^j(z_1) = \phi^j(z_2)$ for some $j \in \left\{1,2,\dots, m\right\}$ and some $z_1 \neq z_2$.
It follows that $$ \# \phi^{-k}(z) \geq \# \phi^{-(k-j)}(z_1) + \# \phi^{-(k-j)}(z_2) \geq 2 \cdot 2^{\left[\frac{k-j}{m}\right]} \geq 2^{\left[\frac{k}{m}\right]} . $$ It follows from (\ref{est1}) that $\lim_{k \to \infty} \frac{1}{k} \log \left(\inf_{x \in Y}\# \phi^{-k}(x)\right) \geq \frac{1}{m} \log 2$. \end{proof} \end{lemma} Let $M_l$ denote the $C^*$-algebra of complex $l \times l$-matrices. In the following a \emph{homogeneous $C^*$-algebra} will be a $C^*$-algebra isomorphic to a $C^*$-algebra of the form $eC(X,M_l)e$ where $X$ is a compact metric space and $e$ is a projection in $C(X,M_l)$ such that $e(x) \neq 0$ for all $x \in X$. \begin{defn}\label{slowdim} A unital $C^*$-algebra $A$ is an \emph{AH-algebra} when there is an increasing sequence $A_1 \subseteq A_2 \subseteq A_3 \subseteq \dots$ of unital $C^*$-subalgebras of $A$ such that $A = \overline{\bigcup_n A_n}$ and each $A_n$ is a homogeneous $C^*$-algebra. We say that $A$ has \emph{no dimension growth} when the sequence $\{A_n\}$ can be chosen such that $$ A_n \simeq e_nC\left(X_n,M_{l_n}\right)e_n $$ with $\sup_n \operatorname{Dim} X_n < \infty$ and $\lim_{n \to \infty} \min_{x \in X_n} \operatorname{rank} e_n(x) = \infty$. \end{defn} Note that the no dimension growth condition is stronger than the slow dimension growth condition used in \cite{Th3}. \begin{prop}\label{AHthm} Assume that $\operatorname{Dim} Y < \infty$ and that $\phi$ is strongly transitive and not injective. It follows that $C^*_r\left(R_{\phi}\right)$ is an AH-algebra with no dimension growth. \end{prop} \begin{proof} For each $n$ we have that \begin{equation}\label{renu} C^*_r\left(R\left(\phi^n\right)\right) \simeq e_nC\left(Y,M_{m_n}\right)e_n \end{equation} for some $m_n \in \mathbb N$ and some projection $e_n \in C\left(Y,M_{m_n}\right)$.
Although this seems to be well known it is hard to find a proof anywhere, so we point out that it can be proved by specializing the proof of Theorem 3.2 in \cite{Th1} to the case of a surjective local homeomorphism $\phi$. In fact, it suffices to observe that the $C^*$-algebra $A_{\phi}$ which features in Theorem 3.2 of \cite{Th1} is $C(Y)$ in this case. Since $\min_{y \in Y} \operatorname{rank} e_n(y)$ is the minimal dimension of an irreducible representation of $C^*_r\left(R\left(\phi^n\right)\right)$ it therefore now suffices to show that the minimal dimension of the irreducible representations of $C^*_r\left(R(\phi^n)\right)$ goes to infinity when $n$ does. It follows from Lemma 3.4 of \cite{Th1} that the minimal dimension of the irreducible representations of $C^*_r\left(R(\phi^n)\right)$ is the same as the number $\min_{y \in Y} \# \phi^{-n}(y)$. It follows from Lemma \ref{hmzero} that $$ \lim_{n \to \infty} \min_{y \in Y} \# \phi^{-n}(y) = \infty , $$ exponentially fast in fact. \end{proof} \begin{lemma}\label{?1} Assume that $C^*_r\left(\Gamma_{\phi }\right)$ is simple. Then either $\phi $ is a homeomorphism or else \begin{equation}\label{limit} \lim_{n \to \infty} \sup_{x \in Y} m(x)^{-1}m(\phi (x))^{-1}m\left(\phi ^2(x)\right)^{-1} \dots m\left(\phi ^{n-1}(x)\right)^{-1} = 0 , \end{equation} where $m : Y \to \mathbb N$ is the function (\ref{m-funk}). \begin{proof} Assume (\ref{limit}) does not hold. Since $\phi$ is a local homeomorphism, the function $m$ is continuous, so it follows from Dini's theorem that there is at least one $x$ for which \begin{equation}\label{limit2} \lim_{n \to \infty} m(x)^{-1}m(\phi (x))^{-1}m\left(\phi ^2(x)\right)^{-1} \dots m\left(\phi ^{n-1}(x)\right)^{-1} \end{equation} is not zero. For this $x$ there is a $K$ such that $\# \phi^{-1} \left(\phi^k(x)\right) = 1$ when $k \geq K$, whence the set $$ F = \left\{ y \in Y : \ \# \phi ^{-1}\left( \phi^k(y)\right) =1 \ \forall k \geq 0\right\} $$ is not empty.
Note that $F$ is closed and that $\phi ^{-k}\left(\phi ^k(F)\right) = F$ for all $k$, i.e. $F$ is $\phi$-saturated. It follows from Corollary \ref{A5} that $F$ determines a proper ideal $I_F$ in $C_r^*(R_\phi)$. Since $\phi(F) \subseteq F$, it follows that $\widehat{\phi}(I_F)\subseteq I_F$. Then Theorem 4.10 of \cite{Th1} and the simplicity of $C^*_r\left(\Gamma_{\phi}\right)$ imply that either $\phi$ is injective or $I_F = \{0\}$. But $I_F = \{0\}$ means that $F=Y$ and thus that $\phi$ is injective. Hence $\phi$ is a homeomorphism in both cases. \end{proof} \end{lemma} \begin{thm}\label{quotients} Let $\varphi : X \to X$ be a locally injective surjection on a compact metric space $X$ of finite covering dimension, and let $(Y,\phi)$ be its canonical locally homeomorphic extension. Let $A$ be a simple quotient of $C^*_r\left(\Gamma_{\varphi}\right)$. It follows that $A$ is $*$-isomorphic to either \begin{enumerate} \item[1)] a full matrix algebra $M_n(\mathbb C)$ for some $n \in \mathbb N$, or \item[2)] the crossed product $C(F) \times_{\phi|_F} \mathbb Z$ corresponding to an infinite minimal closed totally $\phi$-invariant subset $F \subseteq Y$ on which $\phi$ is injective, or \item[3)] a purely infinite, simple, nuclear, separable $C^*$-algebra; more specifically to the crossed product $C^*_r\left(R_{\phi|_F}\right) \times_{\widehat{\phi|_F}} \mathbb N$ where $F$ is an infinite minimal closed totally $\phi$-invariant subset of $Y$ and $C^*_r\left(R_{\phi|_F}\right)$ is an AH-algebra with no dimension growth. \end{enumerate} \begin{proof} If $A$ is not a matrix algebra it has the form $C^*_r\left(\Gamma_{\phi|_F}\right)$ for some infinite minimal closed totally $\phi$-invariant subset $F \subseteq Y$ by (\ref{basiciso}) and Corollary \ref{maximal3}. If $\phi$ is injective on $F$ we are in case 2). Assume not.
Since $\operatorname{Dim} F \leq \operatorname{Dim} Y \leq \operatorname{Dim} X$ by Proposition \ref{dim!!!} it follows from Proposition \ref{AHthm} that $C^*_r\left(R_{\phi|_F}\right)$ is an AH-algebra with no dimension growth. By \cite{An} (or Theorem 4.6 of \cite{Th1}) we have an isomorphism $$ C^*_r\left(\Gamma_{\phi|_F}\right) \simeq C^*_r\left(R_{\phi|_F}\right) \times_{\widehat{\phi|_F}} \mathbb N, $$ where $\widehat{\phi|_F}$ is the endomorphism of $C^*_r\left(R_{\phi|_F}\right)$ given by conjugation with $V_{\phi|_F}$. We claim that the pure infiniteness of $ C^*_r\left(R_{\phi|_F}\right) \times_{\widehat{\phi|_F}} \mathbb N$ follows from Theorem 1.1 of \cite{Th3}. For this it remains only to check that $\widehat{\phi|_F} = \operatorname{Ad} V_{\phi|_F}$ satisfies the two conditions on $\beta$ in Theorem 1.1 of \cite{Th3}, i.e. that $\widehat{\phi|_F}(1) = V_{\phi|_F}V_{\phi|_F}^*$ is a full projection and that there is no $\widehat{\phi|_F}$-invariant trace state on $C^*_r\left(R_{\phi|_F}\right)$. The first was observed already in Lemma 4.7 of \cite{Th1} so we focus on the second. Observe that it follows from Lemma 2.24 of \cite{Th1} that $\omega = \omega \circ P_{R_{\phi}}$ for every trace state $\omega$ of $C^*_r\left(R_{\phi}\right)$. By using this, a direct calculation as on page 787 of \cite{Th1} shows that $$ \omega\left( V_{\phi|_F}^n{V_{\phi|_F}^*}^n\right) \leq \sup_{y \in Y} \left[m(y)m(\phi(y))\dots m\left(\phi^{n-1}(y)\right)\right]^{-1} . $$ Then Lemma \ref{?1} implies that $\lim_{n \to \infty} \omega\left( V_{\phi|_F}^n{V_{\phi|_F}^*}^n\right) = 0$. In particular, $\omega$ is not $\widehat{\phi|_F}$-invariant. \end{proof} \end{thm} \begin{cor}\label{cor1} Assume that $C^*_r\left(\Gamma_{\varphi}\right)$ is simple and that $\operatorname{Dim} X < \infty$. It follows that $C^*_r\left(\Gamma_{\varphi}\right)$ is purely infinite if and only if $\varphi$ is not injective. \end{cor} \begin{proof} Assume first that $\varphi$ is injective.
Then $C^*_r\left(\Gamma_{\varphi}\right)$ is the crossed product $C(X) \times_{\varphi} \mathbb Z$ which is stably finite and thus not purely infinite. Conversely, assume that $\varphi$ is not injective. Then a direct calculation, as in the proof of Theorem 4.8 in \cite{Th1}, shows that $V_{\varphi}$ is a non-unitary isometry in $C^*_r\left(\Gamma_{\varphi}\right)$. Since the $C^*$-algebras which feature in case 1) and case 2) of Theorem \ref{quotients} are stably finite, the presence of a non-unitary isometry implies that $C^*_r\left(\Gamma_{\varphi}\right)$ is purely infinite. \end{proof} \begin{cor}\label{tokecor} Let $S$ be a one-sided subshift. If the $C^*$-algebra $\mathcal{O}_S$ associated with $S$ in \cite{C} is simple, then it is also purely infinite. \end{cor} \begin{proof} It follows from Theorem 4.18 in \cite{Th1} that $\mathcal{O}_S$ is isomorphic to $C^*_r\left(\Gamma_{\sigma}\right)$ where $\sigma$ is the shift map on $S$. If $\mathcal{O}_S$ is simple, $S$ must be infinite and it then follows from Proposition 2.4.1 in \cite{BS} (cf. Theorem 3.9 in \cite{BL}) that $\sigma$ is not injective. The conclusion then follows from Corollary \ref{cor1}. \end{proof} In Corollary \ref{tokecor} we assume that the shift map $\sigma$ on $S$ is surjective. It is not clear if the result holds without this assumption. For completeness we point out that when $X$ is totally disconnected (i.e. zero dimensional) the algebra $C^*_r\left(R_{\phi|_F}\right)$ which features in case 3) of Theorem \ref{quotients} is approximately divisible, cf. \cite{BKR}. We don't know if this is the case in general, but a weak form of divisibility is always present in $C^*_r\left(R_{\phi}\right)$ when $C^*_r\left(\Gamma_{\varphi}\right)$ is simple and $\phi$ is not injective, cf. \cite{Th3}. \begin{prop}\label{propfinish} Assume that $Y$ is totally disconnected and $\phi$ strongly transitive and not injective. It follows that $C^*_r\left(R_{\phi}\right)$ is an approximately divisible AF-algebra.
\begin{proof} It follows from Proposition 6.8 of \cite{DS} that $C^*_r\left(R_{\phi}\right)$ is an AF-algebra. As pointed out in Proposition 4.1 of \cite{BKR} a unital AF-algebra fails to be approximately divisible only if it has a quotient with a non-zero abelian projection. If $C^*_r\left(R_{\phi}\right)$ has such a quotient there is also a primitive quotient with an abelian projection; i.e. by Proposition \ref{A17} there is an $x \in Y$ such that $C^*_r \left(R_{\phi|_{\overline{H(x)}}}\right)$ has a non-zero abelian projection $p$. It follows from (\ref{crux}) that every projection of $C^*_r \left(R_{\phi|_{\overline{H(x)}}}\right)$ is unitarily equivalent to a projection in $C^*_r \left(R\left(\phi^n|_{\overline{H(x)}}\right)\right)$ for some $n$. Since $\overline{H(x)}$ is totally disconnected we can use Proposition 6.1 of \cite{DS} to conclude that every projection in $C^*_r \left(R\left(\phi^n|_{\overline{H(x)}}\right)\right)$ is unitarily equivalent to a projection in $D_{R_{\phi|_{\overline{H(x)}}}} = C\left(\overline{H(x)}\right)$. We may therefore assume that $p \in C\left(\overline{H(x)}\right)$ so that $p = 1_A$ for some clopen $A \subseteq \overline{H(x)}$. Then $H(x) \cap A \neq \emptyset$ so by exchanging $x$ with some element in $H(x)$ we may assume that $x \in A$. If there is a $y \neq x$ in $A$ such that $\phi^k(x) = \phi^k(y)$ for some $k \in \mathbb N$, consider functions $g \in C\left(\overline{H(x)}\right)$ and $f \in C_c\left(R_{\phi}\right)$ such that $g(x) = 1, g(y) = 0$, $\operatorname{supp} g \subseteq A$, $\operatorname{supp} f \subseteq R_{\phi} \cap \left(A \times A\right)$ and $f(x,y) \neq 0$. Then $f,g \in 1_AC^*_r \left(R_{\phi|_{\overline{H(x)}}}\right)1_A$ and $gf \neq 0$ while $fg = 0$, contradicting that $1_AC^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)1_A$ is abelian. 
Thus no such $y$ can exist which implies that $\pi_x(1_A) = 1_{\{x\}}$, where $\pi_x$ is the representation (\ref{pirep}), restricted to the subspace of $H_x$ consisting of the functions supported in $\left\{(x',k,x) \in \Gamma_{\phi} : \ k = 0 \right\}$. It follows that $\pi_x\left(1_AC^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)1_A\right) \simeq \mathbb C$. Consider a non-zero ideal $J \subseteq \pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$. Then $\pi_x^{-1}(J)$ is a non-zero ideal in $C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)$ and it follows from Corollary \ref{A5} that there is an open non-empty subset $U$ of $\overline{H(x)}$ such that $\phi^{-k}\left( \phi^k(U)\right) = U$ for all $k$ and $C_0(U) = \pi_x^{-1}(J) \cap C\left(\overline{H(x)}\right)$. Since $H(x) \cap U \neq \emptyset$, it follows that $x \in U$ so there is a function $g \in \pi_x^{-1}(J) \cap C\left(\overline{H(x)}\right)$ such that $g(x) = 1$. It follows that $\pi_x\left(g1_A\right) = 1_{\left\{x\right\}} = \pi_x\left(1_A\right) \in J$. This shows that $\pi_x(1_A)$ is a full projection in $\pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$ and Brown's theorem, \cite{Br}, shows now that $\pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$ is stably isomorphic to $\pi_x\left(1_AC^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)1_A\right) \simeq \mathbb C$. Since $\pi_x\left(C^*_r\left(R_{\phi|_{\overline{H(x)}}}\right)\right)$ is unital this means that it is a full matrix algebra. In conclusion we deduce that if $C^*_r\left(R_{\phi}\right)$ is not approximately divisible it has a full matrix algebra as a quotient. By Corollary \ref{A5} this implies that there is a finite set $F' \subseteq Y$ such that $F' = \phi^{-k}\left(\phi^k(F')\right)$ for all $k \in \mathbb N$. 
Since $$ \phi^{-k}\left(\phi^k(x)\right) \subseteq \phi^{-k-1}\left(\phi^{k+1}(x)\right) \subseteq F' $$ for all $k$ when $x \in F'$, there is for each $x \in F'$ a natural number $K$ such that $\phi^{-k}\left(\phi^k(x)\right) = \phi^{-K}\left(\phi^K(x)\right)$ when $k \geq K$. Then $\# \phi^{-1} \left( \phi^k(x)\right) =1$ for $k \geq K+1$, so that $m\left(\phi^k(x)\right) = 1$ for all $k \geq K$, which contradicts Lemma \ref{hmzero} since $\phi$ is strongly transitive and not injective. This contradiction finally shows that $C^*_r\left(R_{\phi}\right)$ is approximately divisible, as desired. \end{proof} \end{prop} \end{document}
\begin{document} \title[Asymptotic formulas for the gamma function]{Asymptotic formulas for the gamma function constructed by bivariate means} \author{Zhen-Hang Yang} \address{Power Supply Service Center, ZPEPC Electric Power Research Institute, Hangzhou, Zhejiang, China, 310007} \email{[email protected]} \date{July 19, 2014} \subjclass[2010]{Primary 33B15, 26E60; Secondary 26D15, 11B83} \keywords{Stirling's formula, gamma function, mean, inequality, polygamma function} \thanks{This paper is in final form and no version of it will be submitted for publication elsewhere.} \begin{abstract} Let $K,M,N$ denote three bivariate means. In this paper, the author proves asymptotic formulas for the gamma function of the form \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi }M\left( x+\theta ,x+1-\theta \right) ^{K\left( x+\epsilon ,x+1-\epsilon \right) }e^{-N\left( x+\sigma ,x+1-\sigma \right) } \end{equation*} or \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi }M\left( x+\theta ,x+\sigma \right) ^{K\left( x+\epsilon ,x+1-\epsilon \right) }e^{-M\left( x+\theta ,x+\sigma \right) } \end{equation*} as $x\rightarrow \infty $, where $\epsilon ,\theta ,\sigma $ are fixed real numbers. This idea can be extended to the psi and polygamma functions. As examples, some new asymptotic formulas for the gamma function are presented. \end{abstract} \maketitle \section{Introduction} Stirling's formula \begin{equation} n!\thicksim \sqrt{2\pi n}n^{n}e^{-n}:=s_{n} \label{S} \end{equation} has important applications in statistical physics, probability theory and number theory. Due to its practical importance, it has attracted the interest of many mathematicians and has motivated a large number of research papers concerning various generalizations and improvements. Burnside's formula \cite{Burnside-MM-46-1917} \begin{equation} n!\thicksim \sqrt{2\pi }\left( \frac{n+1/2}{e}\right) ^{n+1/2}:=b_{n} \label{B} \end{equation} slightly improves (\ref{S}).
Gosper \cite{Gosper-PNAS-75-1978} replaced $\sqrt{2\pi n}$ by $\sqrt{2\pi \left( n+1/6\right) }$ in (\ref{S}) to get \begin{equation} n!\thicksim \sqrt{2\pi \left( n+\tfrac{1}{6}\right) }\left( \frac{n}{e} \right) ^{n}:=g_{n}, \label{G} \end{equation} which is better than (\ref{S}) and (\ref{B}). In the recent paper \cite{Batir-P-27(1)-2008}, N. Batir obtained an asymptotic formula similar to (\ref{G}): \begin{equation} n!\thicksim \frac{n^{n+1}e^{-n}\sqrt{2\pi }}{\sqrt{n-1/6}}:=b_{n}^{\prime }, \label{Batir1} \end{equation} which is stronger than (\ref{S}) and (\ref{B}). A more accurate approximation for the factorial function \begin{equation} n!\thicksim \sqrt{2\pi }\left( \frac{n^{2}+n+1/6}{e^{2}}\right) ^{n/2+1/4}:=m_{n} \label{M} \end{equation} was presented in \cite{Mortici-CMI-19(1)-2010} by Mortici. The classical Euler gamma function $\Gamma $ may be defined by \begin{equation} \Gamma \left( x\right) =\int_{0}^{\infty }t^{x-1}e^{-t}dt \label{Gamma} \end{equation} for $x>0$, and its logarithmic derivative $\psi \left( x\right) =\Gamma ^{\prime }\left( x\right) /\Gamma \left( x\right) $ is known as the psi or digamma function, while $\psi ^{\prime }$, $\psi ^{\prime \prime }$, ... are called polygamma functions (see \cite{Anderson-PAMS-125(11)-1997}). The gamma function is closely related to Stirling's formula, since $\Gamma (n+1)=n!$ for all $n\in \mathbb{N}$. This has inspired many authors to seek better approximations for the gamma function. For example, Ramanujan \cite[p.~339]{Ramanujan-SB-1988} gave the double inequality \begin{equation} \sqrt{\pi }\left( \tfrac{x}{e}\right) ^{x}\left( 8x^{3}+4x^{2}+x+\tfrac{1}{ 100}\right) ^{1/6}<\Gamma \left( x+1\right) <\sqrt{\pi }\left( \tfrac{x}{e} \right) ^{x}\left( 8x^{3}+4x^{2}+x+\tfrac{1}{30}\right) ^{1/6} \label{R} \end{equation} for $x\geq 1$.
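The relative accuracy of the approximations (\ref{S})--(\ref{M}) quoted above is easy to check numerically. The following Python sketch is not part of the paper; the function names are ours, and it simply evaluates each closed-form expression at a modest value of $n$ and compares it with $n!$.

```python
import math

# Factorial approximations (S), (B), (G), (Batir1) and (M) quoted above.
def stirling(n):   # (S): sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def burnside(n):   # (B): sqrt(2*pi) * ((n+1/2)/e)^(n+1/2)
    return math.sqrt(2 * math.pi) * ((n + 0.5) / math.e) ** (n + 0.5)

def gosper(n):     # (G): sqrt(2*pi*(n+1/6)) * (n/e)^n
    return math.sqrt(2 * math.pi * (n + 1 / 6)) * (n / math.e) ** n

def batir(n):      # (Batir1): n^(n+1) e^(-n) sqrt(2*pi) / sqrt(n-1/6)
    return n ** (n + 1) * math.exp(-n) * math.sqrt(2 * math.pi) / math.sqrt(n - 1 / 6)

def mortici(n):    # (M): sqrt(2*pi) * ((n^2+n+1/6)/e^2)^(n/2+1/4)
    return math.sqrt(2 * math.pi) * ((n * n + n + 1 / 6) / math.e ** 2) ** (n / 2 + 0.25)

n = 10
exact = math.factorial(n)
for name, f in [("S", stirling), ("B", burnside), ("G", gosper),
                ("Batir1", batir), ("M", mortici)]:
    # Relative error of each approximation at n = 10.
    print(name, abs(f(n) - exact) / exact)
```

At $n=10$ one observes that (\ref{B}) beats (\ref{S}), and that (\ref{G}), (\ref{Batir1}) and (\ref{M}) are all markedly more accurate still, in line with the comparisons stated above.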
Batir \cite{Batir-AM-91-2008} showed that for $x>0$, \begin{eqnarray} &&\sqrt{2}e^{4/9}\left( \frac{x}{e}\right) ^{x}\sqrt{x+\frac{1}{2}}\exp \left( -\tfrac{1}{6\left( x+3/8\right) }\right) \label{Batir2} \\ &<&\Gamma \left( x+1\right) <\sqrt{2\pi }\left( \frac{x}{e}\right) ^{x}\sqrt{ x+\frac{1}{2}}\exp \left( -\tfrac{1}{6\left( x+3/8\right) }\right) . \notag \end{eqnarray} Mortici \cite{Mortici-AM-93-2009-1} proved that for $x\geq 0$, \begin{eqnarray} \sqrt{2\pi e}e^{-\omega }\left( \frac{x+\omega }{e}\right) ^{x+1/2} &<&\Gamma \left( x+1\right) \leq \alpha \sqrt{2\pi e}e^{-\omega }\left( \frac{x+\omega }{e}\right) ^{x+1/2}, \label{Ml} \\ \beta \sqrt{2\pi e}e^{-\varsigma }\left( \frac{x+\varsigma }{e}\right) ^{x+1/2} &<&\Gamma \left( x+1\right) \leq \sqrt{2\pi e}e^{-\varsigma }\left( \frac{x+\varsigma }{e}\right) ^{x+1/2} \label{Mr} \end{eqnarray} where $\omega =\left( 3-\sqrt{3}\right) /6$, $\alpha =1.072042464...$ and $ \varsigma =\left( 3+\sqrt{3}\right) /6$, $\beta =0.988503589...$. More results involving asymptotic formulas for the factorial or gamma functions can be found in \cite{Shi-JCAM-195-2006}, \cite{Guo-JIPAM-9(1)-2008}, \cite{Mortici-MMN-11(1)-2010}, \cite{Mortici-CMA-61-2011}, \cite{Zhao-PMD-80(3-4)-2012}, \cite{Mortici-MCM-57-2013}, \cite{Qi-JCAM-268-2014}, \cite{Lu-RJ-35(1)-2014} and the references cited therein. Mortici \cite{Mortici-BTUB-iii-3(52)-2010} presented the idea of replacing an under-approximation and an upper-approximation of the factorial function by their geometric mean in order to improve certain approximation formulas for the factorial.
In fact, by observing and analyzing these asymptotic formulas for the factorial and gamma functions, we find that they share the common form \begin{equation} \ln \Gamma \left( x+1\right) \thicksim \frac{1}{2}\ln 2\pi +P_{1}\left( x\right) \ln P_{2}\left( x\right) -P_{3}\left( x\right) +P_{4}\left( x\right) , \label{g-form} \end{equation} where $P_{1}\left( x\right) ,P_{2}\left( x\right) $ and $P_{3}\left( x\right) $ are all means of $x$ and $\left( x+1\right) $, while $P_{4}\left( x\right) $ satisfies $P_{4}\left( \infty \right) =0$. For example, (\ref{S})--(\ref{M}) can be written as \begin{eqnarray*} &&\ln n!\thicksim \frac{1}{2}\ln 2\pi +\left( n+\frac{1}{2}\right) \ln n-n, \\ &&\ln n!\thicksim \frac{1}{2}\ln 2\pi +\left( n+\frac{1}{2}\right) \ln \left( n+\frac{1}{2}\right) -\left( n+\frac{1}{2}\right) , \\ &&\ln n!\thicksim \frac{1}{2}\ln 2\pi +\left( n+\frac{1}{2}\right) \ln n-n+\frac{1}{2}\ln \left( 1+\tfrac{1}{6n}\right) , \\ &&\ln n!\thicksim \frac{1}{2}\ln 2\pi +\left( n+\frac{1}{2}\right) \ln n-n-\frac{1}{2}\ln \left( 1-\tfrac{1}{6n}\right) , \\ &&\ln n!\thicksim \frac{1}{2}\ln 2\pi +\left( n+\frac{1}{2}\right) \ln \sqrt{\frac{n^{2}+4n\left( n+1\right) +\left( n+1\right) ^{2}}{6}}-\left( n+\frac{1}{2}\right) . \end{eqnarray*} Inequalities (\ref{R})--(\ref{Mr}) imply that \begin{eqnarray*} &&\ln \Gamma \left( x+1\right) \thicksim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln x-x+\frac{1}{6}\ln \left( 1+\frac{1}{2x}+\frac{1}{8x^{2}}+\frac{1}{240x^{3}}\right) , \\ &&\ln \Gamma \left( x+1\right) \thicksim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln x-x+\frac{1}{2}\ln \left( 1+\frac{1}{2x}\right) -\tfrac{1}{6\left( x+3/8\right) }, \\ &&\ln \Gamma \left( x+1\right) \thicksim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln \left( \left( 1-a\right) x+a\left( x+1\right) \right) -\left( \left( 1-a\right) x+a\left( x+1\right) \right) , \end{eqnarray*} where $a=\omega =(3-\sqrt{3})/6$ or $a=\varsigma =(3+\sqrt{3})/6$.
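These rewrites are exact algebraic identities, not merely asymptotic equivalences; for instance, the third and fifth lines above are just $\ln g_{n}$ and $\ln m_{n}$ rearranged. A small numerical sketch (ours; function names are assumptions for illustration) confirms this:

```python
import math

def ln_gosper(n):
    # ln g_n computed directly from (G)
    return 0.5 * math.log(2 * math.pi * (n + 1 / 6)) + n * math.log(n) - n

def ln_gosper_common_form(n):
    # third rewrite: (1/2)ln 2pi + (n + 1/2)ln n - n + (1/2)ln(1 + 1/(6n))
    return (0.5 * math.log(2 * math.pi) + (n + 0.5) * math.log(n) - n
            + 0.5 * math.log(1 + 1 / (6 * n)))

def ln_mortici(n):
    # ln m_n computed directly from (M)
    return (0.5 * math.log(2 * math.pi)
            + (n / 2 + 0.25) * math.log(n * n + n + 1 / 6) - (n + 0.5))

def ln_mortici_common_form(n):
    # fifth rewrite: note (n^2 + 4n(n+1) + (n+1)^2)/6 = n^2 + n + 1/6
    mean = math.sqrt((n * n + 4 * n * (n + 1) + (n + 1) ** 2) / 6)
    return 0.5 * math.log(2 * math.pi) + (n + 0.5) * math.log(mean) - (n + 0.5)
```

Here $P_{2}$ is, respectively, the arithmetic-like mean hidden in $g_{n}$ and the quadratic-type mean $\sqrt{(n^{2}+4n(n+1)+(n+1)^{2})/6}$ of $n$ and $n+1$ hidden in $m_{n}$.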
The aim of this paper is to prove the validity of the form (\ref{g-form}), which offers a new way to construct asymptotic formulas for the Euler gamma function in terms of bivariate means. Our main results are included in Section 2. Some new examples are presented in the last section. \section{Main results} Before stating and proving our main results, we recall some basic facts about means. Let $I$ be an interval on $\mathbb{R}$. A bivariate real-valued function $M:I^{2}\rightarrow \mathbb{R}$ is said to be a bivariate mean if \begin{equation*} \min \left( a,b\right) \leq M\left( a,b\right) \leq \max \left( a,b\right) \end{equation*} for all $a,b\in I$. Clearly, each bivariate mean $M$ is reflexive, that is, \begin{equation*} M\left( a,a\right) =a \end{equation*} for any $a\in I$. $M$ is symmetric if \begin{equation*} M\left( a,b\right) =M\left( b,a\right) \end{equation*} for all $a,b\in I$, and $M$ is said to be homogeneous (of degree one) if \begin{equation} M\left( ta,tb\right) =tM\left( a,b\right) \label{M-h} \end{equation} for any $a,b\in I$ and $t>0$. The following lemma is crucial for proving our results. \begin{lemma}[{\protect\cite[Theorems 1, 2, 3]{Toader.MIA.5.2002}}] \label{Lemma M}If $M:I^{2}\rightarrow \mathbb{R}$ is a differentiable mean, then for $c\in I$, \begin{equation*} M_{a}^{\prime }\left( c,c\right) ,M_{b}^{\prime }\left( c,c\right) \in \left( 0,1\right) \text{ \ and \ }M_{a}^{\prime }\left( c,c\right) +M_{b}^{\prime }\left( c,c\right) =1\text{.} \end{equation*} In particular, if $M$ is symmetric, then \begin{equation*} M_{a}^{\prime }\left( c,c\right) =M_{b}^{\prime }\left( c,c\right) =1/2. \end{equation*} \end{lemma} Now we are in a position to state and prove our main results.
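Lemma \ref{Lemma M} is easy to illustrate numerically (our sketch, using central finite differences): for symmetric differentiable means such as the geometric mean, or the weighted product $A^{2/3}G^{1/3}$ used later in Section 3, both partial derivatives on the diagonal equal $1/2$ and their sum equals $1$.

```python
import math

def geometric(a, b):
    # geometric mean G(a, b)
    return math.sqrt(a * b)

def ag_mix(a, b):
    # A^(2/3) * G^(1/3), a symmetric homogeneous mean used in Section 3
    return ((a + b) / 2) ** (2 / 3) * math.sqrt(a * b) ** (1 / 3)

def partials_on_diagonal(M, c, h=1e-6):
    # central finite-difference estimates of M_a'(c, c) and M_b'(c, c)
    da = (M(c + h, c) - M(c - h, c)) / (2 * h)
    db = (M(c, c + h) - M(c, c - h)) / (2 * h)
    return da, db

print(partials_on_diagonal(geometric, 3.0))  # both close to 0.5
```

This diagonal behavior is exactly what drives the limits $D_{1}=D_{2}=0$ in the proofs below.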
\begin{theorem} \label{MT-p2><p3}Let $M:\left( 0,\infty \right) \times \left( 0,\infty \right) \rightarrow \left( 0,\infty \right) $ and $N:\left( -\infty ,\infty \right) \times \left( -\infty ,\infty \right) \rightarrow \left( -\infty ,\infty \right) $ be two symmetric, homogeneous and differentiable means and let $r$ be defined on $\left( 0,\infty \right) $ satisfying $\lim_{x\rightarrow \infty }r\left( x\right) =0$. Then for fixed real numbers $\theta ,\theta ^{\ast },\sigma ,\sigma ^{\ast }$ with $\theta +\theta ^{\ast }=\sigma +\sigma ^{\ast }=1$ such that $x>-\min \left( 1,\theta ,\theta ^{\ast }\right) $, we have \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi }M\left( x+\theta ,x+\theta ^{\ast }\right) ^{x+1/2}e^{-N\left( x+\sigma ,x+\sigma ^{\ast }\right) }e^{r\left( x\right) }\text{, as }x\rightarrow \infty . \end{equation*} \end{theorem} \begin{proof} Since $\lim_{x\rightarrow \infty }r\left( x\right) =0$, the desired result is equivalent to \begin{equation*} \lim_{x\rightarrow \infty }\left( \ln \Gamma \left( x+1\right) -\ln \sqrt{2\pi }-\left( x+\frac{1}{2}\right) \ln M\left( x+\theta ,x+\theta ^{\ast }\right) +N\left( x+\sigma ,x+\sigma ^{\ast }\right) \right) =0. \end{equation*} By the known relation \begin{equation*} \lim_{x\rightarrow \infty }\left( \ln \Gamma \left( x+1\right) -\left( x+\frac{1}{2}\right) \ln \left( x+\frac{1}{2}\right) +\left( x+\frac{1}{2}\right) \right) =\frac{1}{2}\ln 2\pi , \end{equation*} it suffices to prove that \begin{eqnarray*} D_{1} &:&=\lim_{x\rightarrow \infty }\left( x+\frac{1}{2}\right) \ln \frac{M\left( x+\theta ,x+\theta ^{\ast }\right) }{x+1/2}=0, \\ D_{2} &:&=\lim_{x\rightarrow \infty }\left( N\left( x+\sigma ,x+\sigma ^{\ast }\right) -\left( x+\frac{1}{2}\right) \right) =0.
\end{eqnarray*} Letting $x=1/t$, using the homogeneity of $M$, that is, (\ref{M-h}), and applying L'Hospital's rule, we obtain \begin{eqnarray*} D_{1} &=&\lim_{t\rightarrow 0^{+}}\frac{1+t/2}{t}\ln \frac{M\left( 1+\theta t,1+\theta ^{\ast }t\right) }{1+t/2} \\ &=&\lim_{t\rightarrow 0^{+}}\frac{\ln M\left( 1+\theta t,1+\theta ^{\ast }t\right) -\ln \left( 1+t/2\right) }{t} \\ &=&\lim_{t\rightarrow 0^{+}}\left( \frac{\theta M_{x}\left( 1+\theta t,1+\theta ^{\ast }t\right) +\theta ^{\ast }M_{y}\left( 1+\theta t,1+\theta ^{\ast }t\right) }{M\left( 1+\theta t,1+\theta ^{\ast }t\right) }-\frac{1}{2+t}\right) \\ &=&\frac{\theta M_{x}\left( 1,1\right) +\theta ^{\ast }M_{y}\left( 1,1\right) }{M\left( 1,1\right) }-\frac{1}{2}=0, \end{eqnarray*} where the last equality holds due to Lemma \ref{Lemma M}. Similarly, we have \begin{eqnarray*} D_{2} &=&\lim_{x\rightarrow \infty }\left( N\left( x+\sigma ,x+\sigma ^{\ast }\right) -\left( x+\frac{1}{2}\right) \right) \\ &&\overset{1/x=t}{=\!=\!=}\lim_{t\rightarrow 0^{+}}\frac{N\left( 1+\sigma t,1+\sigma ^{\ast }t\right) -\left( 1+t/2\right) }{t} \\ &=&\lim_{t\rightarrow 0^{+}}\left( \sigma N_{x}\left( 1+\sigma t,1+\sigma ^{\ast }t\right) +\sigma ^{\ast }N_{y}\left( 1+\sigma t,1+\sigma ^{\ast }t\right) -\frac{1}{2}\right) \\ &=&\frac{\sigma +\sigma ^{\ast }}{2}-\frac{1}{2}=0, \end{eqnarray*} which proves the desired result. \end{proof} \begin{theorem} \label{MT-p2=p3}Let $M:\left( 0,\infty \right) \times \left( 0,\infty \right) \rightarrow \left( 0,\infty \right) $ be a mean and let $r$ be defined on $\left( 0,\infty \right) $ satisfying $\lim_{x\rightarrow \infty }r\left( x\right) =0$. Then for fixed real numbers $\theta ,\sigma $ such that $x>-\min \left( 1,\theta ,\sigma \right) $, we have \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi }M\left( x+\theta ,x+\sigma \right) ^{x+1/2}e^{-M\left( x+\theta ,x+\sigma \right) }e^{r\left( x\right) }\text{, as }x\rightarrow \infty .
\end{equation*} \end{theorem} \begin{proof} Since $\lim_{x\rightarrow \infty }r\left( x\right) =0$, the desired result is equivalent to \begin{equation*} \lim_{x\rightarrow \infty }\left( \ln \Gamma \left( x+1\right) -\ln \sqrt{2\pi }-\left( x+\frac{1}{2}\right) \ln M\left( x+\theta ,x+\sigma \right) +M\left( x+\theta ,x+\sigma \right) \right) =0. \end{equation*} Similarly, it suffices to prove that \begin{eqnarray*} D_{3} &:&=\lim_{x\rightarrow \infty }\left( \left( x+\frac{1}{2}\right) \ln \frac{M\left( x+\theta ,x+\sigma \right) }{x+1/2}-\left( M\left( x+\theta ,x+\sigma \right) -\left( x+\frac{1}{2}\right) \right) \right) \\ &=&\lim_{x\rightarrow \infty }\left( \left( M\left( x+\theta ,x+\sigma \right) -\left( x+\frac{1}{2}\right) \right) \times \left( \frac{1}{L\left( y,1\right) }-1\right) \right) =0, \end{eqnarray*} where $L\left( a,b\right) =\left( a-b\right) /\left( \ln a-\ln b\right) $ ($a\neq b$), $L\left( a,a\right) =a$, is the logarithmic mean of positive numbers $a$ and $b$, $y=M\left( x+\theta ,x+\sigma \right) /\left( x+1/2\right) $, and the second equality follows from the identity $\ln y=\left( y-1\right) /L\left( y,1\right) $. Now we first show that \begin{equation*} D_{4}:=M\left( x+\theta ,x+\sigma \right) -\left( x+\frac{1}{2}\right) \end{equation*} is bounded. In fact, by the defining property of a mean we see that \begin{equation*} x+\min \left( \theta ,\sigma \right) -\left( x+\frac{1}{2}\right) <D_{4}<x+\max \left( \theta ,\sigma \right) -\left( x+\frac{1}{2}\right) , \end{equation*} that is, \begin{equation*} \min \left( \theta ,\sigma \right) -\frac{1}{2}<D_{4}<\max \left( \theta ,\sigma \right) -\frac{1}{2}. \end{equation*} It remains to prove that \begin{equation*} \lim_{x\rightarrow \infty }D_{5}:=\lim_{x\rightarrow \infty }\left( \frac{1}{L\left( y,1\right) }-1\right) =0. \end{equation*} Since \begin{equation*} \frac{x+\min \left( \theta ,\sigma \right) }{x+1/2}<y=\frac{M\left( x+\theta ,x+\sigma \right) }{x+1/2}<\frac{x+\max \left( \theta ,\sigma \right) }{x+1/2}, \end{equation*} we have $\lim_{x\rightarrow \infty }y=1$.
This together with \begin{equation*} \min \left( y,1\right) \leq L\left( y,1\right) \leq \max \left( y,1\right) \end{equation*} yields $\lim_{x\rightarrow \infty }L\left( y,1\right) =1$, and therefore, $\lim_{x\rightarrow \infty }D_{5}=0$. This completes the proof. \end{proof} \begin{theorem} \label{MT-p2=p3=x+1/2}Let $K:\left( -\infty ,\infty \right) \times \left( -\infty ,\infty \right) \rightarrow \left( -\infty ,\infty \right) $ be a symmetric, homogeneous and twice differentiable mean and let $r$ be defined on $\left( 0,\infty \right) $ satisfying $\lim_{x\rightarrow \infty }r\left( x\right) =0$. Then for fixed real numbers $\epsilon ,\epsilon ^{\ast }$ with $\epsilon +\epsilon ^{\ast }=1$, we have \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi }\left( x+\frac{1}{2}\right) ^{K(x+\epsilon ,x+\epsilon ^{\ast })}e^{-\left( x+1/2\right) }e^{r\left( x\right) }\text{, as }x\rightarrow \infty . \end{equation*} \end{theorem} \begin{proof} Due to $\lim_{x\rightarrow \infty }r\left( x\right) =0$, the result in question is equivalent to \begin{equation*} \lim_{x\rightarrow \infty }\left( \ln \Gamma \left( x+1\right) -\ln \sqrt{2\pi }-K\left( x+\epsilon ,x+\epsilon ^{\ast }\right) \ln \left( x+\frac{1}{2}\right) +\left( x+\frac{1}{2}\right) \right) =0. \end{equation*} Clearly, we only need to prove that \begin{equation*} D_{6}:=\lim_{x\rightarrow \infty }\left( K\left( x+\epsilon ,x+\epsilon ^{\ast }\right) -\left( x+\frac{1}{2}\right) \right) \ln \left( x+\frac{1}{2}\right) =0.
\end{equation*} By the homogeneity of $K$, we get \begin{eqnarray*} &&D_{6}\!\overset{1/x=t}{=\!=\!=}\lim_{t\rightarrow 0^{+}}\frac{K\left( 1+\epsilon t,1+\epsilon ^{\ast }t\right) -\left( 1+t/2\right) }{t}\left( \ln \left( 1+\frac{t}{2}\right) -\ln t\right) \\ &=&\lim_{t\rightarrow 0^{+}}\frac{K\left( 1+\epsilon t,1+\epsilon ^{\ast }t\right) -\left( 1+t/2\right) }{t^{2}}\lim_{t\rightarrow 0^{+}}\left( t\ln \left( 1+\frac{t}{2}\right) -t\ln t\right) =0, \end{eqnarray*} where the first limit, by L'Hospital's rule, is equal to \begin{eqnarray*} &&\lim_{t\rightarrow 0^{+}}\frac{\epsilon K_{x}\left( 1+\epsilon t,1+\epsilon ^{\ast }t\right) +\epsilon ^{\ast }K_{y}\left( 1+\epsilon t,1+\epsilon ^{\ast }t\right) -1/2}{2t} \\ &=&\lim_{t\rightarrow 0^{+}}\frac{\epsilon ^{2}K_{xx}\left( 1+\epsilon t,1+\epsilon ^{\ast }t\right) +2\epsilon \epsilon ^{\ast }K_{xy}\left( 1+\epsilon t,1+\epsilon ^{\ast }t\right) +\epsilon ^{\ast 2}K_{yy}\left( 1+\epsilon t,1+\epsilon ^{\ast }t\right) }{2} \\ &=&\frac{\epsilon ^{2}K_{xx}\left( 1,1\right) +2\epsilon \epsilon ^{\ast }K_{xy}\left( 1,1\right) +\epsilon ^{\ast 2}K_{yy}\left( 1,1\right) }{2}=-\frac{\left( 2\epsilon -1\right) ^{2}}{2}K_{xy}\left( 1,1\right) , \end{eqnarray*} while the second one is clearly equal to zero. This completes the proof. \end{proof} By the above three theorems, the following assertion is immediate. \begin{corollary} \label{MCg-form1}Suppose that (i) the function $K:\mathbb{R}^{2}\rightarrow \mathbb{R}$ is a symmetric, homogeneous and twice differentiable mean; (ii) the functions $M:\left( 0,\infty \right) \times \left( 0,\infty \right) \rightarrow \left( 0,\infty \right) $ and $N:\mathbb{R}^{2}\rightarrow \mathbb{R}$ are two symmetric, homogeneous, and differentiable means; (iii) the function $r:\left( 0,\infty \right) \rightarrow \left( -\infty ,\infty \right) $ satisfies $\lim_{x\rightarrow \infty }r\left( x\right) =0$.
Then for fixed real numbers $\epsilon ,\epsilon ^{\ast },\theta ,\theta ^{\ast },\sigma ,\sigma ^{\ast }$ with $\epsilon +\epsilon ^{\ast }=\theta +\theta ^{\ast }=\sigma +\sigma ^{\ast }=1$ such that $x>-\min \left( 1,\theta ,\theta ^{\ast }\right) $, we have \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi }M\left( x+\theta ,x+\theta ^{\ast }\right) ^{K\left( x+\epsilon ,x+\epsilon ^{\ast }\right) }e^{-N\left( x+\sigma ,x+\sigma ^{\ast }\right) }e^{r\left( x\right) },\text{ as }x\rightarrow \infty . \end{equation*} \end{corollary} \begin{corollary} \label{MCg-form2}Suppose that (i) the function $K:\left( -\infty ,\infty \right) ^{2}\rightarrow \left( -\infty ,\infty \right) $ is a symmetric, homogeneous and twice differentiable mean; (ii) the functions $M,N:\left( 0,\infty \right) ^{2}\rightarrow \left( 0,\infty \right) $ are two means; (iii) the function $r:\left( 0,\infty \right) \rightarrow \left( -\infty ,\infty \right) $ satisfies $\lim_{x\rightarrow \infty }r\left( x\right) =0$. Then for fixed real numbers $\epsilon ,\epsilon ^{\ast },\theta ,\sigma $ with $\epsilon +\epsilon ^{\ast }=1$ such that $x>-\min \left( 1,\theta ,\sigma \right) $, we have \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi }M\left( x+\theta ,x+\sigma \right) ^{K\left( x+\epsilon ,x+\epsilon ^{\ast }\right) }e^{-M\left( x+\theta ,x+\sigma \right) }e^{r\left( x\right) },\text{ as }x\rightarrow \infty . \end{equation*} \end{corollary} Further, it is obvious that our idea of constructing asymptotic formulas for the gamma function in terms of bivariate means can be extended to the psi and polygamma functions. \begin{theorem} Let $M:\left( 0,\infty \right) ^{2}\rightarrow \left( 0,\infty \right) $ be a mean and let $r$ be defined on $\left( 0,\infty \right) $ satisfying $\lim_{x\rightarrow \infty }r\left( x\right) =0$.
Then for fixed real numbers $\theta $, $\sigma $ such that $x>-\min \left( 1,\theta ,\sigma \right) $, the asymptotic formula for the psi function \begin{equation*} \psi \left( x+1\right) \thicksim \ln M\left( x+\theta ,x+\sigma \right) +r\left( x\right) \end{equation*} holds as $x\rightarrow \infty $. \end{theorem} \begin{proof} It suffices to prove \begin{equation*} \lim_{x\rightarrow \infty }\left( \psi \left( x+1\right) -\ln M\left( x+\theta ,x+\sigma \right) \right) =0. \end{equation*} Since $M$ is a mean, we have $x+\min \left( \theta ,\sigma \right) \leq M\left( x+\theta ,x+\sigma \right) \leq x+\max \left( \theta ,\sigma \right) $, and so \begin{equation*} \psi \left( x+1\right) -\ln \left( x+\max \left( \theta ,\sigma \right) \right) \leq \psi \left( x+1\right) -\ln M\left( x+\theta ,x+\sigma \right) \leq \psi \left( x+1\right) -\ln \left( x+\min \left( \theta ,\sigma \right) \right) , \end{equation*} which yields the required result, since \begin{equation*} \lim_{x\rightarrow \infty }\left( \psi \left( x+1\right) -\ln \left( x+\max \left( \theta ,\sigma \right) \right) \right) =\lim_{x\rightarrow \infty }\left( \psi \left( x+1\right) -\ln \left( x+\min \left( \theta ,\sigma \right) \right) \right) =0. \end{equation*} \end{proof} \begin{theorem} Let $M:\left( 0,\infty \right) ^{2}\rightarrow \left( 0,\infty \right) $ be a mean and let $r$ be defined on $\left( 0,\infty \right) $ satisfying $\lim_{x\rightarrow \infty }r\left( x\right) =0$. Then for fixed real numbers $\theta ,\sigma $ such that $x>-\min \left( 1,\theta ,\sigma \right) $, the asymptotic formula for the polygamma function \begin{equation*} \psi ^{(n)}\left( x+1\right) \thicksim \frac{\left( -1\right) ^{n-1}\left( n-1\right) !}{M^{n}\left( x+\theta ,x+\sigma \right) }+r\left( x\right) \end{equation*} holds as $x\rightarrow \infty $.
\end{theorem} \begin{proof} It suffices to show \begin{equation*} \lim_{x\rightarrow \infty }\left( \left( -1\right) ^{n-1}\psi ^{(n)}\left( x+1\right) -\frac{\left( n-1\right) !}{M^{n}\left( x+\theta ,x+\sigma \right) }\right) =0. \end{equation*} For this purpose, we utilize a known double inequality proved by Guo and Qi in \cite[Lemma 3]{Guo-BKMS-47(1)-2010}, namely that for $k\in \mathbb{N}$, \begin{equation*} \frac{(k-1)!}{x^{k}}+\frac{k!}{2x^{k+1}}<\left( -1\right) ^{k+1}\psi ^{(k)}\left( x\right) <\frac{(k-1)!}{x^{k}}+\frac{k!}{x^{k+1}} \end{equation*} holds on $(0,\infty )$, to get \begin{equation*} \frac{k!}{2x^{k+1}}<\left( -1\right) ^{k+1}\psi ^{(k)}\left( x\right) -\frac{(k-1)!}{x^{k}}<\frac{k!}{x^{k+1}}. \end{equation*} This implies that \begin{equation} \lim_{x\rightarrow \infty }\left( \left( -1\right) ^{k-1}\psi ^{(k)}\left( x\right) -\frac{(k-1)!}{x^{k}}\right) =0. \label{GQ} \end{equation} On the other hand, without loss of generality, we assume that $\theta \leq \sigma $. By the property of a mean, we see that \begin{equation*} x+\theta \leq M\left( x+\theta ,x+\sigma \right) \leq x+\sigma , \end{equation*} and so \begin{eqnarray*} \left( -1\right) ^{n-1}\psi ^{(n)}\left( x+1\right) -\frac{\left( n-1\right) !}{\left( x+\theta \right) ^{n}} &<&\left( -1\right) ^{n-1}\psi ^{(n)}\left( x+1\right) -\frac{\left( n-1\right) !}{M^{n}\left( x+\theta ,x+\sigma \right) } \\ &<&\left( -1\right) ^{n-1}\psi ^{(n)}\left( x+1\right) -\frac{\left( n-1\right) !}{\left( x+\sigma \right) ^{n}}.
\end{eqnarray*} Then, by (\ref{GQ}), for $a=\theta ,\sigma $, we get \begin{eqnarray*} &&\left( -1\right) ^{n-1}\psi ^{(n)}\left( x+1\right) -\frac{\left( n-1\right) !}{\left( x+a\right) ^{n}} \\ &=&\left( \left( -1\right) ^{n-1}\psi ^{(n)}\left( x+1\right) -\frac{(n-1)!}{\left( x+1\right) ^{n}}\right) +\left( \frac{(n-1)!}{\left( x+1\right) ^{n}}-\frac{\left( n-1\right) !}{\left( x+a\right) ^{n}}\right) \\ &\rightarrow &0+0=0\text{, as }x\rightarrow \infty , \end{eqnarray*} which gives the desired result. Thus we complete the proof. \end{proof} \section{Examples} In this section, we list some examples to illustrate applications of Theorems \ref{MT-p2><p3} and \ref{MT-p2=p3}. To this end, we first recall the arithmetic mean $A$, geometric mean $G$, and identric (exponential) mean $I$ of two positive numbers $a$ and $b$, defined by \begin{eqnarray*} A\left( a,b\right) &=&\frac{a+b}{2}\text{, \ \ \ }G\left( a,b\right) =\sqrt{ab}, \\ I\left( a,b\right) &=&\left( b^{b}/a^{a}\right) ^{1/\left( b-a\right) }/e\text{ if }a\neq b\text{ and }I\left( a,a\right) =a, \end{eqnarray*} (see \cite{Stolarsky-MM-48-1975}, \cite{Yang-MPT-4-1987}). Clearly, these means are symmetric and homogeneous. Another possible mean is defined by \begin{equation} H_{^{p_{k};q_{k}}}^{n,n-1}\left( a,b\right) =\frac{\sum_{k=0}^{n}p_{k}a^{k}b^{n-k}}{\sum_{k=0}^{n-1}q_{k}a^{k}b^{n-1-k}}, \label{H^n,n-1} \end{equation} where \begin{equation} \sum_{k=0}^{n}p_{k}=\sum_{k=0}^{n-1}q_{k}=1. \label{pk-qk1} \end{equation} It is clear that $H_{^{p_{k};q_{k}}}^{n,n-1}\left( a,b\right) $ is homogeneous and satisfies $H_{^{p_{k};q_{k}}}^{n,n-1}\left( a,a\right) =a$.
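A small numerical sketch (our own, with an arbitrary admissible choice of $p_{k}$ and $q_{k}$) of the homogeneity and reflexivity of $H_{p_{k};q_{k}}^{n,n-1}$ just noted:

```python
def H(a, b, p, q):
    # H^{n,n-1}_{p_k;q_k}(a, b) from (H^n,n-1); here n = len(p) - 1, len(q) = n
    n = len(p) - 1
    num = sum(p[k] * a**k * b**(n - k) for k in range(n + 1))
    den = sum(q[k] * a**k * b**(n - 1 - k) for k in range(n))
    return num / den

# an admissible choice for n = 2: each coefficient list sums to 1, as in (pk-qk1)
p = [0.25, 0.5, 0.25]
q = [0.5, 0.5]

print(H(2.0, 5.0, p, q))  # a ratio of a quadratic to a linear symmetric form
```

Homogeneity $H(ta,tb)=tH(a,b)$ is immediate because the numerator and denominator are homogeneous of degrees $n$ and $n-1$; reflexivity follows from (\ref{pk-qk1}).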
When $p_{k}=p_{n-k}$ and $q_{k}=q_{n-1-k}$, we denote $H_{^{p_{k};q_{k}}}^{n,n-1}\left( a,b\right) $ by $S_{^{p_{k};q_{k}}}^{n,n-1}\left( a,b\right) $, which can be expressed as \begin{equation} S_{^{p_{k};q_{k}}}^{n,n-1}\left( a,b\right) =\frac{\sum_{k=0}^{[n/2]}p_{k}\left( ab\right) ^{k}\left( a^{n-2k}+b^{n-2k}\right) }{\sum_{k=0}^{[\left( n-1\right) /2]}q_{k}\left( ab\right) ^{k}\left( a^{n-1-2k}+b^{n-1-2k}\right) }, \label{S^n,n-1} \end{equation} where $p_{k}$ and $q_{k}$ satisfy \begin{equation} \sum_{k=0}^{[n/2]}\left( 2p_{k}\right) =\sum_{k=0}^{[\left( n-1\right) /2]}\left( 2q_{k}\right) =1, \label{pk-qk2} \end{equation} and $[x]$ denotes the integer part of the real number $x$. Evidently, $S_{^{p_{k};q_{k}}}^{n,n-1}$ is symmetric and homogeneous, and $S_{^{p_{k};q_{k}}}^{n,n-1}\left( a,a\right) =a$. However, $H_{^{p_{k};q_{k}}}^{n,n-1}\left( a,b\right) $ and $S_{^{p_{k};q_{k}}}^{n,n-1}\left( a,b\right) $ are not always means of $a$ and $b$. For instance, when $p=2/3$, \begin{equation*} S_{^{p;1/2}}^{2,1}\left( a,b\right) =\frac{pa^{2}+pb^{2}+\left( 1-2p\right) ab}{\left( a+b\right) /2}=\frac{2}{3}\frac{2a^{2}+2b^{2}-ab}{a+b}>\max (a,b) \end{equation*} in the case of $\max (a,b)>4\min \left( a,b\right) $. Indeed, it is easy to prove that $S_{^{p;1/2}}^{2,1}\left( a,b\right) $ is a mean if and only if $p\in \lbrack 0,1/2]$. Secondly, we recall the so-called completely monotone functions. A function $f$ is said to be completely monotonic on an interval $I$ if $f$ has derivatives of all orders on $I$ and satisfies \begin{equation} (-1)^{n}f^{(n)}(x)\geq 0\text{ for all }x\in I\text{ and }n=0,1,2,\ldots . \label{cm} \end{equation} If the inequality (\ref{cm}) is strict, then $f$ is said to be strictly completely monotonic on $I$.
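The counterexample above is easy to confirm numerically; the following sketch (ours) checks that $S_{2/3;1/2}^{2,1}$ exceeds $\max(a,b)$ for a pair with $\max(a,b)>4\min(a,b)$, while the boundary choice $p=1/2$ stays within the mean bounds for the same pair:

```python
def S21(a, b, p):
    # S^{2,1}_{p;1/2}(a, b) = (p a^2 + p b^2 + (1 - 2p) a b) / ((a + b) / 2)
    return (p * a * a + p * b * b + (1 - 2 * p) * a * b) / ((a + b) / 2)

a, b = 1.0, 5.0          # here max(a, b) > 4 * min(a, b)
print(S21(a, b, 2 / 3))  # exceeds max(a, b) = 5, so not a mean
print(S21(a, b, 1 / 2))  # lies between min and max
```

For $p=2/3$ and $(a,b)=(1,5)$ the value is $47/9\approx 5.22>5$, illustrating why the admissible range for the mean property is $p\in[0,1/2]$.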
It is known (Bernstein's Theorem) that $f$ is completely monotonic on $(0,\infty )$ if and only if \begin{equation*} f(x)=\int_{0}^{\infty }e^{-xt}d\mu \left( t\right) , \end{equation*} where $\mu $ is a nonnegative measure on $[0,\infty )$ such that the integral converges for all $x>0$; see \cite[p. 161]{Widder-PUPP-1941}. \begin{example} Let \begin{eqnarray*} K\left( a,b\right) &=&N\left( a,b\right) =A\left( a,b\right) =\frac{a+b}{2}, \\ M\left( a,b\right) &=&A^{2/3}\left( a,b\right) G^{1/3}\left( a,b\right) =\left( \frac{a+b}{2}\right) ^{2/3}\left( \sqrt{ab}\right) ^{1/3} \end{eqnarray*} and $\theta =\sigma =0$ in Theorem \ref{MT-p2><p3}. Then we can obtain an asymptotic formula for the gamma function as follows: \begin{eqnarray*} \ln \Gamma (x+1) &\thicksim &\frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln \left( \left( x+\frac{1}{2}\right) ^{2/3}\left( \sqrt{x\left( x+1\right) }\right) ^{1/3}\right) -\left( x+\frac{1}{2}\right) \\ &=&\frac{1}{2}\ln 2\pi +\frac{2}{3}\left( x+\frac{1}{2}\right) \ln \left( x+\frac{1}{2}\right) +\frac{1}{6}\left( x+\frac{1}{2}\right) \ln x \\ &&+\frac{1}{6}\left( x+\frac{1}{2}\right) \ln \left( x+1\right) -\left( x+\frac{1}{2}\right) ,\text{ as }x\rightarrow \infty . \end{eqnarray*} \end{example} Further, we can prove \begin{proposition} For $x>0$, the function \begin{eqnarray*} f_{1}(x) &=&\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\frac{2}{3}\left( x+\frac{1}{2}\right) \ln \left( x+\frac{1}{2}\right) -\frac{1}{6}\left( x+\frac{1}{2}\right) \ln x \\ &&-\frac{1}{6}\left( x+\frac{1}{2}\right) \ln \left( x+1\right) +\left( x+\frac{1}{2}\right) \end{eqnarray*} is a completely monotone function.
\end{proposition} \begin{proof} Differentiating and utilizing the relations \begin{equation} \psi (x)=\int_{0}^{\infty }\left( \frac{e^{-t}}{t}-\frac{e^{-xt}}{1-e^{-t}}\right) dt\text{ \ and \ }\ln x=\int_{0}^{\infty }\frac{e^{-t}-e^{-xt}}{t}dt \label{psi-ln} \end{equation} yield \begin{eqnarray*} f_{1}^{\prime }(x) &=&\psi \left( x+1\right) -\frac{1}{6}\ln \left( x+1\right) -\frac{1}{6}\ln x-\frac{2}{3}\ln \left( x+\frac{1}{2}\right) +\frac{1}{12\left( x+1\right) }-\frac{1}{12x} \\ &=&\int_{0}^{\infty }\left( \frac{e^{-t}}{t}-\frac{e^{-\left( x+1\right) t}}{1-e^{-t}}\right) dt-\int_{0}^{\infty }\frac{e^{-t}-e^{-xt}}{6t}dt-\int_{0}^{\infty }\frac{e^{-t}-e^{-\left( x+1\right) t}}{6t}dt \\ &&-\int_{0}^{\infty }\frac{2\left( e^{-t}-e^{-\left( x+1/2\right) t}\right) }{3t}dt+\frac{1}{12}\int_{0}^{\infty }e^{-\left( x+1\right) t}dt-\frac{1}{12}\int_{0}^{\infty }e^{-xt}dt \\ &=&\int_{0}^{\infty }e^{-xt}\left( \frac{1}{6t}+\frac{e^{-t}}{6t}+\frac{2e^{-t/2}}{3t}-\frac{e^{-t/2}}{1-e^{-t}}+\frac{1}{12}\left( e^{-t}-1\right) \right) dt \\ &=&\int_{0}^{\infty }e^{-xt}e^{-t/2}\left( \frac{\cosh \left( t/2\right) }{3t}+\frac{2}{3t}-\frac{1}{2\sinh \left( t/2\right) }-\frac{1}{6}\sinh \frac{t}{2}\right) dt \\ &:&=\int_{0}^{\infty }e^{-xt}e^{-t/2}u\left( \frac{t}{2}\right) dt, \end{eqnarray*} where \begin{equation*} u\left( t\right) =\frac{\cosh t}{6t}+\frac{1}{3t}-\frac{1}{2\sinh t}-\frac{1}{6}\sinh t. \end{equation*} Factoring and expanding in power series lead to \begin{eqnarray*} u\left( t\right) &=&-\frac{t\cosh 2t-\sinh 2t-4\sinh t+5t}{12t\sinh t} \\ &=&-\frac{\sum_{n=1}^{\infty }\frac{2^{2n-2}t^{2n-1}}{\left( 2n-2\right) !}-\sum_{n=1}^{\infty }\frac{2^{2n-1}t^{2n-1}}{\left( 2n-1\right) !}-4\sum_{n=1}^{\infty }\frac{t^{2n-1}}{\left( 2n-1\right) !}+5t}{12t\sinh t} \\ &=&-\frac{\sum_{n=3}^{\infty }\frac{\left( 2n-3\right) 2^{2n-2}-4}{\left( 2n-1\right) !}t^{2n-1}}{12t\sinh t}<0 \end{eqnarray*} for $t>0$.
This reveals that $-f_{1}^{\prime }$ is a completely monotone function, which together with $f_{1}(x)>\lim_{x\rightarrow \infty }f_{1}(x)=0$ leads us to the desired result. \end{proof} Using the decreasing property of $f_{1}$ on $\left( 0,\infty \right) $ and noticing that \begin{equation*} f_{1}(1)=\ln \frac{2^{3/4}e^{3/2}}{3\sqrt{2\pi }}\text{ \ and \ }f_{1}(\infty )=0, \end{equation*} we immediately get \begin{corollary} For $n\in \mathbb{N}$, it is true that \begin{equation*} \sqrt{2\pi }\left( \frac{(n+1/2)^{4}n\left( n+1\right) }{e^{6}}\right) ^{\left( n+1/2\right) /6}<n!<\frac{2^{3/4}e^{3/2}}{3}\left( \frac{(n+1/2)^{4}n\left( n+1\right) }{e^{6}}\right) ^{\left( n+1/2\right) /6}, \end{equation*} with the optimal constants $\sqrt{2\pi }\approx 2.5066$ and $2^{3/4}e^{3/2}/3\approx 2.5124$. \end{corollary} \begin{example} Let \begin{eqnarray*} K\left( a,b\right) &=&N\left( a,b\right) =A\left( a,b\right) =\frac{a+b}{2}, \\ M\left( a,b\right) &=&I\left( a,b\right) =\left( b^{b}/a^{a}\right) ^{1/\left( b-a\right) }/e\text{ if }a\neq b\text{ and }I\left( a,a\right) =a \end{eqnarray*} and $\theta =\sigma =0$ in Theorem \ref{MT-p2><p3}. Then we get the asymptotic formula \begin{equation*} \ln \Gamma (x+1)\thicksim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \left( (x+1)\ln (x+1)-x\ln x-1\right) -\left( x+\frac{1}{2}\right) , \end{equation*} as $x\rightarrow \infty $. \end{example} Moreover, we have \begin{proposition} For $x>0$, the function \begin{equation*} f_{2}(x)=\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+\frac{1}{2}\right) \left( (x+1)\ln (x+1)-x\ln x-1\right) +x+\frac{1}{2} \end{equation*} is a completely monotone function.
\end{proposition} \begin{proof} Differentiation gives \begin{eqnarray*} f_{2}^{\prime }(x) &=&\psi \left( x+1\right) -\left( 2x+\frac{3}{2}\right) \ln \left( x+1\right) +\left( 2x+\frac{1}{2}\right) \ln x+2, \\ f_{2}^{\prime \prime }(x) &=&\psi ^{\prime }\left( x+1\right) -2\ln \left( x+1\right) +2\ln x+\frac{1}{2\left( x+1\right) }+\frac{1}{2x}. \end{eqnarray*} Applying the relations (\ref{psi-ln}), we can express $f_{2}^{\prime \prime }(x)$ as \begin{eqnarray*} f_{2}^{\prime \prime }(x) &=&\int_{0}^{\infty }t\frac{e^{-\left( x+1\right) t}}{1-e^{-t}}dt-2\int_{0}^{\infty }\frac{e^{-xt}-e^{-\left( x+1\right) t}}{t}dt+\frac{1}{2}\int_{0}^{\infty }\left( e^{-\left( x+1\right) t}+e^{-xt}\right) dt \\ &=&\int_{0}^{\infty }e^{-xt}\left( \frac{te^{-t}}{1-e^{-t}}-2\frac{1-e^{-t}}{t}+\frac{1}{2}\left( e^{-t}+1\right) \right) dt \\ &=&\int_{0}^{\infty }e^{-xt}e^{-t/2}\left( \frac{t}{2\sinh \left( t/2\right) }-4\frac{\sinh \left( t/2\right) }{t}+\cosh \frac{t}{2}\right) dt \\ &:&=\int_{0}^{\infty }e^{-xt}e^{-t/2}v\left( \tfrac{t}{2}\right) dt, \end{eqnarray*} where \begin{equation*} v\left( t\right) =\frac{t}{\sinh t}-2\frac{\sinh t}{t}+\cosh t. \end{equation*} Employing the hyperbolic version of the Wilker inequality proved in \cite{Zhu-MIA-10(4)-2007} (see also \cite{Zhu-AAA-485842-2009}, \cite{Yang-JIA-2014-166}), \begin{equation*} \left( \frac{t}{\sinh t}\right) ^{2}+\frac{t}{\tanh t}>2, \end{equation*} we get \begin{equation*} \frac{t}{\sinh t}v\left( t\right) =\left( \frac{t}{\sinh t}\right) ^{2}+\frac{t}{\tanh t}-2>0, \end{equation*} and so $f_{2}^{\prime \prime }(x)$ is completely monotone for $x>0$. Hence, $f_{2}^{\prime }(x)<\lim_{x\rightarrow \infty }f_{2}^{\prime }(x)=0$, and then $f_{2}(x)>\lim_{x\rightarrow \infty }f_{2}(x)=0$, which indicates that $f_{2}$ is completely monotone for $x>0$. This completes the proof.
\end{proof} The decreasing property of $f_{2}$ on $\left( 0,\infty \right) $ and the facts that \begin{equation*} f_{2}\left( 0^{+}\right) =\ln \frac{e}{\sqrt{2\pi }}\text{, \ }f_{2}\left( 1\right) =\ln \frac{e^{3}}{8\sqrt{2\pi }}\text{, \ }f_{2}\left( \infty \right) =0 \end{equation*} give the following \begin{corollary} For $x>0$, the sharp double inequality \begin{equation*} \sqrt{2\pi }e^{-2x-1}\frac{(x+1)^{(x+1)\left( x+1/2\right) }}{x^{x\left( x+1/2\right) }}<\Gamma (x+1)<e^{-2x}\frac{(x+1)^{(x+1)\left( x+1/2\right) }}{x^{x\left( x+1/2\right) }} \end{equation*} holds. For $n\in \mathbb{N}$, it holds that \begin{equation*} \sqrt{2\pi }e^{-2n-1}\frac{(n+1)^{(n+1)\left( n+1/2\right) }}{n^{n\left( n+1/2\right) }}<n!<\frac{e^{3}}{8}e^{-2n-1}\frac{(n+1)^{(n+1)\left( n+1/2\right) }}{n^{n\left( n+1/2\right) }} \end{equation*} with the best constants $\sqrt{2\pi }\approx 2.5066$ and $e^{3}/8\approx 2.5107$. \end{corollary} \begin{example} \label{E-M3,2}Let \begin{eqnarray*} K\left( a,b\right) &=&N\left( a,b\right) =A\left( a,b\right) =\frac{a+b}{2}, \\ M\left( a,b\right) &=&S_{^{p;q}}^{3,2}\left( a,b\right) =\frac{pa^{3}+pb^{3}+\left( 1/2-p\right) a^{2}b+\left( 1/2-p\right) ab^{2}}{qa^{2}+qb^{2}+(1-2q)ab} \\ &=&\frac{a+b}{2}\frac{2pa^{2}+2pb^{2}+\left( 1-4p\right) ab}{qa^{2}+qb^{2}+\left( 1-2q\right) ab} \end{eqnarray*} and $\theta =\sigma =0$ in Theorem \ref{MT-p2><p3}, where $p$ and $q$ are parameters to be determined. Then, we have \begin{eqnarray*} K\left( x,x+1\right) &=&N\left( x,x+1\right) =x+\frac{1}{2}, \\ M\left( x,x+1\right) &=&S_{^{p;q}}^{3,2}\left( x,x+1\right) =\left( x+1/2\right) \frac{x^{2}+x+2p}{x^{2}+x+q}.
\end{eqnarray*} Straightforward computations give \begin{eqnarray*} \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\ln \sqrt{2\pi }-\left( x+1/2\right) \ln S_{p;q}^{3,2}\left( x,x+1\right) +x+1/2}{x^{-1}} &=&q-2p-\frac{1}{24}, \\ \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\ln \sqrt{2\pi }-\left( x+1/2\right) \ln S_{p;2p+1/24}^{3,2}\left( x,x+1\right) +x+1/2}{x^{-3}} &=&-\frac{160}{1920}\left( p-\frac{23}{160}\right) , \end{eqnarray*} and solving the equations \begin{equation*} q-2p-\frac{1}{24}=0\text{ and }-\frac{160}{1920}\left( p-\frac{23}{160}\right) =0 \end{equation*} leads to \begin{equation*} p=\frac{23}{160},\quad q=\frac{79}{240}. \end{equation*} Then \begin{equation*} M\left( x,x+1\right) =\left( x+\frac{1}{2}\right) \frac{x^{2}+x+\frac{23}{80}}{x^{2}+x+\frac{79}{240}}. \end{equation*} It is easy to check that $S_{^{p;q}}^{3,2}\left( a,b\right) $ is a symmetric and homogeneous mean of positive numbers $a$ and $b$ for $p=23/160$, $q=79/240$. Hence, by Theorem \ref{MT-p2><p3}, we have the optimal asymptotic formula for the gamma function \begin{equation*} \ln \Gamma (x+1)\thicksim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln \tfrac{\left( x+1/2\right) \left( x^{2}+x+23/80\right) }{x^{2}+x+79/240}-\left( x+\frac{1}{2}\right) , \end{equation*} as $x\rightarrow \infty $, and \begin{equation*} \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\ln \sqrt{2\pi }-\left( x+1/2\right) \ln \tfrac{\left( x+1/2\right) \left( x^{2}+x+23/80\right) }{x^{2}+x+79/240}+x+1/2}{x^{-5}}=-\tfrac{18\,029}{29\,030\,400}. \end{equation*} \end{example} Moreover, this asymptotic formula has a nice property. \begin{proposition} For $x>-1/2$, the function $f_{3}$ defined by \begin{equation} f_{3}\left( x\right) =\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+\frac{1}{2}\right) \ln \tfrac{\left( x+1/2\right) \left( x^{2}+x+23/80\right) }{x^{2}+x+79/240}+\left( x+\frac{1}{2}\right) \label{f3} \end{equation} is increasing and concave.
\end{proposition} \begin{proof} Differentiation gives \begin{eqnarray*} f_{3}^{\prime }\left( x\right) &=&\psi \left( x+1\right) +\ln \left( x^{2}+x+\frac{79}{240}\right) -\ln \left( x^{2}+x+\frac{23}{80}\right) \\ &&-\ln \left( x+\frac{1}{2}\right) -2\frac{\left( x+1/2\right) ^{2}}{x^{2}+x+23/80}+2\frac{\left( x+1/2\right) ^{2}}{x^{2}+x+79/240}, \end{eqnarray*} \begin{eqnarray*} f_{3}^{\prime \prime }\left( x\right) &=&\psi ^{\prime }\left( x+1\right) +6\frac{x+1/2}{x^{2}+x+79/240}-6\frac{x+1/2}{x^{2}+x+23/80} \\ &&-\frac{1}{x+1/2}+4\frac{\left( x+1/2\right) ^{3}}{\left( x^{2}+x+23/80\right) ^{2}}-4\frac{\left( x+1/2\right) ^{3}}{\left( x^{2}+x+79/240\right) ^{2}}. \end{eqnarray*} Setting $x+1/2=t$ and making use of the recurrence relation \begin{equation} \psi ^{\left( n\right) }(x+1)-\psi ^{\left( n\right) }(x)=\left( -1\right) ^{n}\frac{n!}{x^{n+1}} \label{psi-rel.} \end{equation} we obtain \begin{eqnarray*} &&f_{3}^{\prime \prime }(t+\frac{1}{2})-f_{3}^{\prime \prime }(t-\frac{1}{2}) \\ &=&-\tfrac{1}{\left( t+1/2\right) ^{2}}+6\tfrac{t+1}{\left( t+1\right) ^{2}+19/240}-6\tfrac{t+1}{\left( t+1\right) ^{2}+3/80}-\frac{1}{\left( t+1\right) }+4\tfrac{\left( t+1\right) ^{3}}{\left( \left( t+1\right) ^{2}+3/80\right) ^{2}} \\ &&-4\tfrac{\left( t+1\right) ^{3}}{\left( \left( t+1\right) ^{2}+19/240\right) ^{2}}-\left( 6\tfrac{t}{t^{2}+19/240}-6\tfrac{t}{t^{2}+3/80}-\frac{1}{t}+4\tfrac{t^{3}}{\left( t^{2}+3/80\right) ^{2}}-4\tfrac{t^{3}}{\left( t^{2}+19/240\right) ^{2}}\right) \\ &=&\frac{f_{31}\left( t\right) }{t\left( t+1\right) \left( t+\frac{1}{2}\right) ^{2}\left( t^{2}+2t+83/80\right) ^{2}\left( t^{2}+3/80\right) ^{2}\left( t^{2}+2t+259/240\right) ^{2}\left( t^{2}+19/240\right) ^{2}}, \end{eqnarray*} where \begin{eqnarray*} f_{31}\left( t\right) &=&\tfrac{18\,029}{138\,240}t^{12}+\tfrac{18\,029}{23\,040}t^{11}+\tfrac{83\,674\,657}{41\,472\,000}t^{10}+\tfrac{24\,178\,957}{8294\,400}t^{9}+\tfrac{34\,366\,211\,867}{13\,271\,040\,000}t^{8}+\tfrac{
4894\,651\,067}{3317\,760\,000}t^{7} \\ &&+\tfrac{74\,296\,657\,243}{132\,710\,400\,000}t^{6}+\tfrac{20\,147\,292\,749}{132\,710\,400\,000}t^{5}+\tfrac{297\,092\,035\,417}{9437\,184\,000\,000}t^{4}+\tfrac{66\,777\,391\,051}{14\,155\,776\,000\,000}t^{3} \\ &&+\tfrac{295\,012\,866\,563}{566\,231\,040\,000\,000}t^{2}+\tfrac{3972\,595\,981}{188\,743\,680\,000\,000}t+\tfrac{166\,825\,684\,249}{60\,397\,977\,600\,000\,000} \\ &>&0\text{ for }t=x+1/2>0\text{.} \end{eqnarray*} This shows that $f_{3}^{\prime \prime }(t+\frac{1}{2})-f_{3}^{\prime \prime }(t-\frac{1}{2})>0$, that is, $f_{3}^{\prime \prime }(x+1)-f_{3}^{\prime \prime }(x)>0$, and so \begin{equation*} f_{3}^{\prime \prime }(x)<f_{3}^{\prime \prime }(x+1)<f_{3}^{\prime \prime }(x+2)<...<f_{3}^{\prime \prime }(\infty )=0. \end{equation*} This reveals that $f_{3}$ is concave on $\left( -1/2,\infty \right) $, and we conclude that $f_{3}^{\prime }(x)>\lim_{x\rightarrow \infty }f_{3}^{\prime }(x)=0$, which proves the desired result. \end{proof} As a consequence of the above proposition, we have \begin{corollary} For $x>0$, the double inequality \begin{equation*} \sqrt{\tfrac{158e}{69}}\left( \tfrac{x+1/2}{e}\tfrac{x^{2}+x+23/80}{x^{2}+x+79/240}\right) ^{x+1/2}<\Gamma (x+1)<\sqrt{2\pi }\left( \tfrac{x+1/2}{e}\tfrac{x^{2}+x+23/80}{x^{2}+x+79/240}\right) ^{x+1/2} \end{equation*} holds true, where $\sqrt{158e/69}\approx 2.4949$ and $\sqrt{2\pi }\approx 2.5066$ are the best. For $n\in \mathbb{N}$, the double inequality \begin{equation*} \left( \tfrac{1118e}{1647}\right) ^{3/2}\left( \tfrac{n+1/2}{e}\tfrac{n^{2}+n+23/80}{n^{2}+n+79/240}\right) ^{n+1/2}<n!<\sqrt{2\pi }\left( \tfrac{n+1/2}{e}\tfrac{n^{2}+n+23/80}{n^{2}+n+79/240}\right) ^{n+1/2} \end{equation*} holds true with the best constants $\left( 1118e/1647\right) ^{3/2}\approx 2.5065$ and $\sqrt{2\pi }\approx 2.5066$.
\end{corollary} \begin{example} \label{E-N3,2}Let \begin{eqnarray*} K\left( a,b\right) &=&M\left( a,b\right) =A\left( a,b\right) =\frac{a+b}{2}, \\ N\left( a,b\right) &=&S_{^{p;q}}^{3,2}\left( a,b\right) =\frac{pa^{3}+pb^{3}+\left( 1/2-p\right) ab^{2}+\left( 1/2-p\right) a^{2}b}{qa^{2}+qb^{2}+\left( 1-2q\right) ab} \\ &=&\frac{a+b}{2}\frac{2pa^{2}+2pb^{2}+\left( 1-4p\right) ab}{qa^{2}+qb^{2}+\left( 1-2q\right) ab} \end{eqnarray*} and $\sigma =0$ in Theorem \ref{MT-p2><p3}, where $p$ and $q$ are parameters to be determined. Direct computations give \begin{eqnarray*} \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln \left( x+1/2\right) +\left( x+1/2\right) \frac{x^{2}+x+2p}{x^{2}+x+q}}{x^{-1}} &=&2p-q-\frac{1}{24}, \\ \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln \left( x+1/2\right) +\left( x+1/2\right) \frac{x^{2}+x+2p}{x^{2}+x+2p-1/24}}{x^{-3}} &=&\frac{7}{480}-\frac{1}{12}p. \end{eqnarray*} Solving the simultaneous equations \begin{eqnarray*} 2p-q-\frac{1}{24} &=&0, \\ \frac{7}{480}-\frac{1}{12}p &=&0 \end{eqnarray*} leads to $p=7/40$, $q=37/120$. Then, \begin{equation*} N\left( x,x+1\right) =\left( x+1/2\right) \frac{x^{2}+x+7/20}{x^{2}+x+37/120}. \end{equation*} An easy verification shows that $S_{^{p;q}}^{3,2}\left( a,b\right) $ is a symmetric and homogeneous mean of positive numbers $a$ and $b$ for $p=7/40$, $q=37/120$. Hence, by Theorem \ref{MT-p2><p3} we get the best asymptotic formula for the gamma function \begin{equation*} \ln \Gamma (x+1)\thicksim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln \left( x+\frac{1}{2}\right) -\left( x+\frac{1}{2}\right) \frac{x^{2}+x+7/20}{x^{2}+x+37/120}, \end{equation*} as $x\rightarrow \infty $.
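The rational bookkeeping above is easy to confirm mechanically. The following minimal sketch (not part of the original text) uses Python's \texttt{fractions} module to verify, in exact arithmetic, that $p=7/40$ and $q=37/120$ solve the two coefficient equations:

```python
from fractions import Fraction

# Check that p = 7/40 and q = 37/120 solve the simultaneous equations
#   2p - q - 1/24 = 0   and   7/480 - p/12 = 0.
p = Fraction(7, 40)
q = Fraction(37, 120)

eq1 = 2 * p - q - Fraction(1, 24)
eq2 = Fraction(7, 480) - p / 12

assert eq1 == 0 and eq2 == 0
```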
And we have \begin{equation*} \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln \left( x+1/2\right) +\left( x+1/2\right) \frac{x^{2}+x+7/20}{x^{2}+x+37/120}}{x^{-5}}=-\frac{1517}{2419\,200}. \end{equation*} \end{example} Now we prove the following assertion related to this asymptotic formula. \begin{proposition} Let the function $f_{4}$ be defined on $\left( -1/2,\infty \right) $ by \begin{equation*} f_{4}(x)=\ln \Gamma (x+1)-\tfrac{1}{2}\ln 2\pi -\left( x+\tfrac{1}{2}\right) \ln (x+\tfrac{1}{2})+\left( x+\tfrac{1}{2}\right) \frac{x^{2}+x+7/20}{x^{2}+x+37/120}. \end{equation*} Then $f_{4}$ is increasing and concave on $\left( -1/2,\infty \right) $. \end{proposition} \begin{proof} Differentiation gives \begin{eqnarray*} f_{4}^{\prime }(x) &=&\psi \left( x+1\right) -\ln \left( x+\frac{1}{2}\right) +\frac{1}{24}\frac{1}{x^{2}+x+37/120}-\frac{1}{12}\frac{\left( x+1/2\right) ^{2}}{\left( x^{2}+x+37/120\right) ^{2}}, \\ f_{4}^{\prime \prime }(x) &=&\psi ^{\prime }\left( x+1\right) -\frac{1}{x+1/2}-\frac{1}{4}\frac{x+1/2}{\left( x^{2}+x+37/120\right) ^{2}}+\frac{1}{3}\frac{\left( x+\frac{1}{2}\right) ^{3}}{\left( x^{2}+x+37/120\right) ^{3}}.
\end{eqnarray*} Setting $x+1/2=t$ and making use of the recurrence relation (\ref{psi-rel.}), we obtain \begin{eqnarray*} &&f_{4}^{\prime \prime }(t+\frac{1}{2})-f_{4}^{\prime \prime }(t-\frac{1}{2}) \\ &=&-\tfrac{1}{\left( t+1/2\right) ^{2}}-\frac{1}{t+1}-\frac{1}{4}\frac{t+1}{\left( \left( t+1\right) ^{2}+7/120\right) ^{2}}+\frac{1}{3}\frac{\left( t+1\right) ^{3}}{\left( \left( t+1\right) ^{2}+7/120\right) ^{3}} \\ &&-\left( -\frac{1}{t}-\frac{1}{4}\frac{t}{\left( t^{2}+7/120\right) ^{2}}+\frac{1}{3}\frac{t^{3}}{\left( t^{2}+7/120\right) ^{3}}\right) \\ &=&\frac{f_{41}\left( t\right) }{t\left( t+1\right) \left( t+1/2\right) ^{2}\left( t^{2}+7/120\right) ^{3}\left( t^{2}+2t+127/120\right) ^{3}}, \end{eqnarray*} where \begin{eqnarray*} f_{41}\left( t\right) &=&\frac{1517}{11\,520}t^{8}+\frac{1517}{2880}t^{7}+\frac{161\,087}{192\,000}t^{6}+\frac{387\,883}{576\,000}t^{5}+\frac{39\,563\,149}{138\,240\,000}t^{4} \\ &&+\frac{4462\,549}{69\,120\,000}t^{3}+\frac{67\,788\,161}{8294\,400\,000}t^{2}+\frac{2794\,421}{8294\,400\,000}t+\frac{702\,595\,369}{11\,943\,936\,000\,000} \\ &>&0\text{ for }t=x+1/2>0. \end{eqnarray*} This implies that $f_{4}^{\prime \prime }(t+\frac{1}{2})-f_{4}^{\prime \prime }(t-\frac{1}{2})>0$, that is, $f_{4}^{\prime \prime }(x+1)-f_{4}^{\prime \prime }(x)>0$, and so \begin{equation*} f_{4}^{\prime \prime }(x)<f_{4}^{\prime \prime }(x+1)<f_{4}^{\prime \prime }(x+2)<...<f_{4}^{\prime \prime }(\infty )=0. \end{equation*} This reveals that $f_{4}$ is concave on $\left( -1/2,\infty \right) $, and therefore, $f_{4}^{\prime }(x)>\lim_{x\rightarrow \infty }f_{4}^{\prime }(x)=0$, which proves the desired result.
\end{proof} By the increasing property of $f_{4}$ on $\left( -1/2,\infty \right) $ and the facts \begin{equation*} f_{4}\left( 0\right) =\ln \frac{e^{21/37}}{\sqrt{\pi }}\text{, \ }f_{4}\left( 1\right) =\ln \frac{2e^{423/277}}{3\sqrt{3\pi }}\text{, \ }f_{4}\left( \infty \right) =0, \end{equation*} we have \begin{corollary} For $x>0$, the double inequality \begin{equation*} e^{21/37}\sqrt{2}\left( \tfrac{x+1/2}{\exp \left( \frac{x^{2}+x+7/20}{x^{2}+x+37/120}\right) }\right) ^{x+1/2}<\Gamma (x+1)<\sqrt{2\pi }\left( \tfrac{x+1/2}{\exp \left( \frac{x^{2}+x+7/20}{x^{2}+x+37/120}\right) }\right) ^{x+1/2} \end{equation*} holds, where $e^{21/37}\sqrt{2}\approx 2.4946$ and $\sqrt{2\pi }\approx 2.5066$ are the best. For $n\in \mathbb{N}$, the double inequality \begin{equation*} e^{423/277}\tfrac{2\sqrt{2}}{3\sqrt{3}}(\tfrac{n+1/2}{e})^{n+1/2}\exp \left( -\tfrac{1}{24}\tfrac{n+1/2}{n^{2}+n+37/120}\right) <n!<\sqrt{2\pi }(\tfrac{n+1/2}{e})^{n+1/2}\exp \left( -\tfrac{1}{24}\tfrac{n+1/2}{n^{2}+n+37/120}\right) \end{equation*} holds true with the best constants $2\sqrt{2}e^{423/277}/\left( 3\sqrt{3}\right) \approx 2.5065$ and $\sqrt{2\pi }\approx 2.5066$. \end{corollary} \begin{example} \label{E-N4,3}Let \begin{eqnarray*} K\left( a,b\right) &=&M\left( a,b\right) =A\left( a,b\right) =\frac{a+b}{2}, \\ N\left( a,b\right) &=&S_{^{p,q;r}}^{4,3}\left( a,b\right) =\frac{pa^{4}+pb^{4}+qa^{3}b+qab^{3}+\left( 1-2p-2q\right) a^{2}b^{2}}{ra^{3}+rb^{3}+\left( 1/2-r\right) a^{2}b+\left( 1/2-r\right) ab^{2}} \end{eqnarray*} and $\sigma =0$ in Theorem \ref{MT-p2><p3}. In a similar way, we can determine that the best parameters satisfy \begin{equation*} r=2p+\frac{1}{2}q-\frac{7}{48}\text{, \ }p=\frac{21}{40}-\frac{7}{4}q\text{, \ }q=\frac{7303}{35\,280}, \end{equation*} which imply \begin{equation*} p=\frac{3281}{20\,160},\quad q=\frac{7303}{35\,280},\quad r=\frac{111}{392}.
\end{equation*} Then, \begin{equation} N\left( x,x+1\right) =x+\tfrac{1}{2}+\tfrac{1517}{44\,640}\tfrac{1}{x+1/2}+\tfrac{343}{44\,640}\tfrac{x+1/2}{x^{2}+x+111/196}:=N_{4/3}\left( x,x+1\right) . \label{N4/3} \end{equation} In this case, we easily check that $S_{^{p,q;r}}^{4,3}\left( a,b\right) $ is a mean of $a$ and $b$. Consequently, from Theorem \ref{MT-p2><p3} the following best asymptotic formula for the gamma function \begin{equation*} \ln \Gamma (x+1)\sim \frac{1}{2}\ln 2\pi +\left( x+1/2\right) \ln (x+1/2)-N_{4/3}\left( x,x+1\right) \end{equation*} holds true as $x\rightarrow \infty $. And, we have \begin{equation*} \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln \left( x+1/2\right) +N_{4/3}\left( x,x+1\right) }{x^{-7}}=\tfrac{10\,981}{31\,610\,880}. \end{equation*} \end{example} We now present monotonicity and convexity properties related to this asymptotic formula. \begin{proposition} Let $f_{5}$ be defined on $\left( -1/2,\infty \right) $ by \begin{equation*} f_{5}(x)=\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln (x+1/2)+N_{4/3}\left( x,x+1\right) , \end{equation*} where $N_{4/3}\left( x,x+1\right) $ is defined by (\ref{N4/3}). Then $f_{5}$ is decreasing and convex on $\left( -1/2,\infty \right) $. \end{proposition} \begin{proof} Differentiation gives \begin{eqnarray*} f_{5}^{\prime }(x) &=&\psi \left( x+1\right) -\ln \left( x+\frac{1}{2}\right) -\frac{1517}{44\,640\left( x+1/2\right) ^{2}} \\ &&+\frac{343}{44\,640\left( x^{2}+x+111/196\right) }-\frac{343}{22\,320}\frac{\left( x+1/2\right) ^{2}}{\left( x^{2}+x+111/196\right) ^{2}}, \end{eqnarray*} \begin{eqnarray*} f_{5}^{\prime \prime }(x) &=&\psi ^{\prime }\left( x+1\right) -\frac{1}{x+1/2}+\frac{1517}{22\,320\left( x+1/2\right) ^{3}} \\ &&-\frac{343}{7440}\frac{x+1/2}{\left( x^{2}+x+111/196\right) ^{2}}+\frac{343}{5580}\frac{\left( x+1/2\right) ^{3}}{\left( x^{2}+x+111/196\right) ^{3}}.
\end{eqnarray*} Setting $x+1/2=t$ and making use of the recurrence relation (\ref{psi-rel.}), we obtain \begin{eqnarray*} &&f_{5}^{\prime \prime }(t+\frac{1}{2})-f_{5}^{\prime \prime }(t-\frac{1}{2}) \\ &=&-\tfrac{1}{\left( t+1/2\right) ^{2}}-\tfrac{1}{\left( t+1\right) }+\tfrac{1517}{22\,320\left( t+1\right) ^{3}}-\tfrac{343}{7440}\tfrac{t+1}{\left( \left( t+1\right) ^{2}+31/98\right) ^{2}}+\tfrac{343}{5580}\tfrac{\left( t+1\right) ^{3}}{\left( \left( t+1\right) ^{2}+31/98\right) ^{3}} \\ &&-\left( -\tfrac{1}{t}+\tfrac{1517}{22\,320t^{3}}-\tfrac{343}{7440}\tfrac{t}{\left( t^{2}+31/98\right) ^{2}}+\tfrac{343}{5580}\tfrac{t^{3}}{\left( t^{2}+31/98\right) ^{3}}\right) \\ &=&-\frac{f_{51}\left( t\right) }{80\left( t+1/2\right) ^{2}t^{3}\left( t+1\right) ^{3}\left( t^{2}+2t+129/98\right) ^{3}\left( t^{2}+31/98\right) ^{3}}, \end{eqnarray*} where \begin{eqnarray*} f_{51}\left( t\right) &=&\tfrac{10\,981}{784}t^{10}+\tfrac{54\,905}{784}t^{9}+\tfrac{21\,028\,039}{134\,456}t^{8}+\tfrac{27\,614\,911}{134\,456}t^{7}+\tfrac{294\,820\,517}{1647\,086}t^{6}+\tfrac{739\,744\,471}{6588\,344}t^{5}+ \\ &&\tfrac{138\,266\,105\,451}{2582\,630\,848}t^{4}+\tfrac{25\,165\,604\,049}{1291\,315\,424}t^{3}+\tfrac{2726\,271\,884\,261}{506\,195\,646\,208}t^{2}+\tfrac{574\,150\,150\,569}{506\,195\,646\,208}t+\tfrac{347\,724\,739\,077}{3543\,369\,523\,456} \\ &>&0\text{ for }t=x+1/2>0\text{.} \end{eqnarray*} This implies that $f_{5}^{\prime \prime }(t+\frac{1}{2})-f_{5}^{\prime \prime }(t-\frac{1}{2})<0$, that is, $f_{5}^{\prime \prime }(x+1)-f_{5}^{\prime \prime }(x)<0$, and so \begin{equation*} f_{5}^{\prime \prime }(x)>f_{5}^{\prime \prime }(x+1)>f_{5}^{\prime \prime }(x+2)>...>f_{5}^{\prime \prime }(\infty )=0. \end{equation*} This reveals that $f_{5}$ is convex on $\left( -1/2,\infty \right) $, and therefore, $f_{5}^{\prime }(x)<\lim_{x\rightarrow \infty }f_{5}^{\prime }(x)=0$, which proves the desired statement.
\end{proof} Employing the decreasing property of $f_{5}$ on $\left( -1/2,\infty \right) $, we obtain \begin{corollary} For $x>0$, the double inequality \begin{eqnarray*} &&\sqrt{2\pi }\left( \tfrac{x+1/2}{e}\right) ^{x+1/2}\exp \left( -\tfrac{1517}{44\,640}\tfrac{1}{x+1/2}-\tfrac{343}{44\,640}\tfrac{x+1/2}{x^{2}+x+111/196}\right) \\ &<&\Gamma (x+1)<e^{2987/39960}\sqrt{2e}\left( \tfrac{x+1/2}{e}\right) ^{x+1/2}\exp \left( -\tfrac{1517}{44\,640}\tfrac{1}{x+1/2}-\tfrac{343}{44\,640}\tfrac{x+1/2}{x^{2}+x+111/196}\right) \end{eqnarray*} holds, where $\sqrt{2\pi }\approx 2.5066$ and $e^{2987/39960}\sqrt{2e}\approx 2.5126$ are the best constants. For $n\in \mathbb{N}$, it holds that \begin{eqnarray*} &&\sqrt{2\pi }\left( \tfrac{n+1/2}{e}\right) ^{n+1/2}\exp \left( -\tfrac{1517}{44\,640}\tfrac{1}{n+1/2}-\tfrac{343}{44\,640}\tfrac{n+1/2}{n^{2}+n+111/196}\right) \\ &<&n!<\frac{2\sqrt{6}}{9}\exp \left( \tfrac{829\,607}{543\,240}\right) \left( \tfrac{n+1/2}{e}\right) ^{n+1/2}\exp \left( -\tfrac{1517}{44\,640}\tfrac{1}{n+1/2}-\tfrac{343}{44\,640}\tfrac{n+1/2}{n^{2}+n+111/196}\right) \end{eqnarray*} with the best constants $\sqrt{2\pi }\approx 2.5066$ and $2\sqrt{6}\exp \left( \tfrac{829\,607}{543\,240}\right) /9\approx 2.5067$. \end{corollary} Lastly, we give an application example of Theorem \ref{MT-p2=p3}. \begin{example} Let \begin{equation*} M\left( a,b\right) =H_{p,q;r}^{2,1}\left( a,b\right) =\frac{pb^{2}+qa^{2}+(1-p-q)ab}{rb+(1-r)a} \end{equation*} and $\theta =0,\sigma =1$ in Theorem \ref{MT-p2=p3}. Then, by the same method as before, we can derive the two best parameter arrays \begin{eqnarray*} \left( p_{1},q_{1},r_{1}\right) &=&\left( \frac{129-59\sqrt{3}}{360},\frac{129+59\sqrt{3}}{360},\frac{90-29\sqrt{3}}{180}\right) , \\ \left( p_{2},q_{2},r_{2}\right) &=&\left( \frac{129+59\sqrt{3}}{360},\frac{129-59\sqrt{3}}{360},\frac{90+29\sqrt{3}}{180}\right) .
\end{eqnarray*} Then, \begin{eqnarray} H_{p_{1},q_{1};r_{1}}^{2,1}\left( x,x+1\right) &=&\frac{x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59\sqrt{3}}{360}}{x+\frac{90-29\sqrt{3}}{180}}:=M_{1}\left( x,x+1\right) , \label{M1} \\ H_{p_{2},q_{2};r_{2}}^{2,1}\left( x,x+1\right) &=&\frac{x^{2}+\frac{180+59\sqrt{3}}{180}x+\frac{129+59\sqrt{3}}{360}}{x+\frac{90+29\sqrt{3}}{180}}:=M_{2}\left( x,x+1\right) . \label{M2} \end{eqnarray} It is easy to check that $M\left( a,b\right) $ is a mean of $a$ and $b$ for both $\left( p,q,r\right) =\left( p_{1},q_{1},r_{1}\right) $ and $\left( p_{2},q_{2},r_{2}\right) $. Thus, application of Theorem \ref{MT-p2=p3} implies that the following two asymptotic formulas \begin{equation*} \ln \Gamma (x+1)\sim \frac{1}{2}\ln 2\pi +\left( x+1/2\right) \ln M_{i}\left( x,x+1\right) -M_{i}\left( x,x+1\right) \text{, }i=1,2 \end{equation*} are valid as $x\rightarrow \infty $. And, we have \begin{eqnarray*} \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln M_{1}\left( x,x+1\right) +M_{1}\left( x,x+1\right) }{x^{-4}} &=&-\tfrac{1481\sqrt{3}}{2332\,800}, \\ \lim_{x\rightarrow \infty }\tfrac{\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln M_{2}\left( x,x+1\right) +M_{2}\left( x,x+1\right) }{x^{-4}} &=&\tfrac{1481\sqrt{3}}{2332\,800}. \end{eqnarray*} \end{example} The above two asymptotic formulas also enjoy nice properties. \begin{proposition} Let $f_{6},f_{7}$ be defined on $\left( 0,\infty \right) $ by \begin{eqnarray*} f_{6}(x) &=&\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln M_{1}\left( x,x+1\right) +M_{1}\left( x,x+1\right) , \\ f_{7}(x) &=&\ln \Gamma (x+1)-\frac{1}{2}\ln 2\pi -\left( x+1/2\right) \ln M_{2}\left( x,x+1\right) +M_{2}\left( x,x+1\right) , \end{eqnarray*} where $M_{1}$ and $M_{2}$ are defined by (\ref{M1}) and (\ref{M2}), respectively.
Then $f_{6}\ $is increasing and concave on $\left( 0,\infty \right) $, while $f_{7}$ is decreasing and convex on $\left( 0,\infty \right) $. \end{proposition} \begin{proof} Differentiation gives \begin{eqnarray*} f_{6}^{\prime }\left( x\right) &=&\psi (x+1)-\ln \frac{x^{2}+\frac{180-59 \sqrt{3}}{180}x+\frac{129-59\sqrt{3}}{360}}{x+\frac{90-29\sqrt{3}}{180}}- \frac{\left( x+\frac{1}{2}\right) \left( 2x+\frac{180-59\sqrt{3}}{180} \right) }{x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59\sqrt{3}}{360}} \\ &&+\frac{x+\frac{1}{2}}{x+\frac{90-29\sqrt{3}}{180}}+\frac{2x+\frac{180-59 \sqrt{3}}{180}}{x+\frac{90-29\sqrt{3}}{180}}-\frac{x^{2}+\frac{180-59\sqrt{3} }{180}x+\frac{129-59\sqrt{3}}{360}}{\left( x+\frac{90-29\sqrt{3}}{180} \right) ^{2}}, \end{eqnarray*} \begin{eqnarray*} f_{6}^{\prime \prime }\left( x\right) &=&\psi ^{\prime }(x+1)-\frac{2x+\frac{ 180-59\sqrt{3}}{180}}{x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59\sqrt{3} }{360}}+\frac{1}{x+\frac{90-29\sqrt{3}}{180}} \\ &&+\frac{59\sqrt{3}}{180}\frac{x^{2}+\frac{59-26\sqrt{3}}{59}x+\frac{43}{120} -\frac{13\sqrt{3}}{59}}{\left( x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59 \sqrt{3}}{360}\right) ^{2}} \\ &&-\frac{7\sqrt{3}}{45}\frac{1}{\left( x+\frac{90-29\sqrt{3}}{180}\right) ^{2}}-\frac{\sqrt{3}}{180}\frac{x-\frac{629\sqrt{3}-90}{180}}{\left( x+\frac{ 90-29\sqrt{3}}{180}\right) ^{3}}. 
\end{eqnarray*} Employing the recursive relation (\ref{psi-rel.}) and factoring reveal that \begin{equation*} f_{6}^{\prime \prime }\left( x+1\right) -f_{6}^{\prime \prime }\left( x\right) =\frac{1481\sqrt{3}}{19\,440}\frac{f_{61}\left( x\right) }{ f_{62}\left( x\right) }, \end{equation*} where \begin{eqnarray*} f_{61}\left( x\right) &=&x^{9}+\left( 9-\tfrac{337\,153}{266\,580}\sqrt{3} \right) x^{8}+\left( \tfrac{991\,207\,423}{26\,658\,000}-\tfrac{674\,306}{ 66\,645}\sqrt{3}\right) x^{7} \\ &&+\left( \tfrac{2459\,907\,961}{26\,658\,000}-\tfrac{169\,081\,132\,727}{ 4798\,440\,000}\sqrt{3}\right) x^{6}+\left( \tfrac{4335\,292\,090\,469}{ 28\,790\,640\,000}-\tfrac{55\,797\,724\,727}{799\,740\,000}\sqrt{3}\right) x^{5} \\ &&+\left( \tfrac{956\,621\,902\,709}{5758\,128\,000}-\tfrac{ 148\,442\,768\,304\,491}{1727\,438\,400\,000}\sqrt{3}\right) x^{4} \\ &&+\left( \tfrac{229\,288\,958\,388\,788\,929}{1865\,633\,472\,000\,000}- \tfrac{29\,135\,013\,047\,291}{431\,859\,600\,000}\sqrt{3}\right) x^{3} \\ &&+\left( \tfrac{36\,305\,075\,316\,164\,929}{621\,877\,824\,000\,000}- \tfrac{55\,416\,459\,045\,055\,111\,861}{1679\,070\,124\,800\,000\,000}\sqrt{ 3}\right) x^{2} \\ &&+\left( \tfrac{179\,958\,708\,278\,174\,628\,611}{11\,193\,800\,832\,000 \,000\,000}-\tfrac{7731\,435\,289\,282\,423\,861}{839\,535\,062\,400\,000 \,000}\sqrt{3}\right) x \\ &&+\left( \tfrac{21\,826\,051\,463\,638\,680\,611}{11\,193\,800\,832\,000 \,000\,000}-\tfrac{5586\,677\,417\,732\,710\,687}{4975\,022\,592\,000\,000 \,000}\sqrt{3}\right) , \end{eqnarray*} \begin{eqnarray*} f_{62}\left( x\right) &=&\left( x+1\right) ^{2}\left( x^{2}+\tfrac{180-59 \sqrt{3}}{180}x+\tfrac{129-59\sqrt{3}}{360}\right) ^{2}\left( x^{2}+\tfrac{ 540-59\sqrt{3}}{180}x+\tfrac{283-59\sqrt{3}}{120}\right) ^{2} \\ &&\times \left( x+\tfrac{270-29\sqrt{3}}{180}\right) ^{3}\left( x+\tfrac{ 90-29\sqrt{3}}{180}\right) ^{3}. 
\end{eqnarray*} By direct verification we see that all coefficients of $f_{61}$ and $f_{62}$ are positive, so $f_{61}\left( x\right) $, $f_{62}\left( x\right) >0$ for $x>0$. Therefore, we get $f_{6}^{\prime \prime }\left( x+1\right) -f_{6}^{\prime \prime }\left( x\right) >0$, which yields \begin{equation*} f_{6}^{\prime \prime }(x)<f_{6}^{\prime \prime }(x+1)<f_{6}^{\prime \prime }(x+2)<...<f_{6}^{\prime \prime }(\infty )=0. \end{equation*} This shows that $f_{6}$ is concave on $\left( 0,\infty \right) $, and therefore, $f_{6}^{\prime }(x)>\lim_{x\rightarrow \infty }f_{6}^{\prime }(x)=0$, which proves the monotonicity and concavity of $f_{6}$. In the same way, we can prove the monotonicity and convexity of $f_{7}$ on $\left( 0,\infty \right) $, whose details are omitted. \end{proof} As direct consequences of the previous proposition, we have \begin{corollary} For $x>0$, the double inequality \begin{eqnarray*} &&\delta _{0}\sqrt{2\pi }\left( \tfrac{x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59\sqrt{3}}{360}}{x+\frac{90-29\sqrt{3}}{180}}\right) ^{x+1/2}\exp \left( -\tfrac{x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59\sqrt{3}}{360}}{x+\frac{90-29\sqrt{3}}{180}}\right) \\ &<&\Gamma (x+1)<\sqrt{2\pi }\left( \tfrac{x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59\sqrt{3}}{360}}{x+\frac{90-29\sqrt{3}}{180}}\right) ^{x+1/2}\exp \left( -\tfrac{x^{2}+\frac{180-59\sqrt{3}}{180}x+\frac{129-59\sqrt{3}}{360}}{x+\frac{90-29\sqrt{3}}{180}}\right) \end{eqnarray*} holds, where $\delta _{0}=\exp f_{6}\left( 0\right) \approx 0.96259$ and $1$ are the best constants.
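A quick numerical sanity check of the continuous double inequality just stated (a sketch, not part of the original text; the constant $0.96259$ is the rounded value of $\delta_0$ quoted above, and the check is made on the logarithmic scale via \texttt{math.lgamma}):

```python
import math

SQRT3 = math.sqrt(3.0)

def M1(x):
    # M_1(x, x+1) from Eq. (M1): (x^2 + a x + b) / (x + c)
    a = (180 - 59 * SQRT3) / 180
    b = (129 - 59 * SQRT3) / 360
    c = (90 - 29 * SQRT3) / 180
    return (x * x + a * x + b) / (x + c)

def log_upper(x):
    # log of sqrt(2*pi) * M1(x)^(x+1/2) * exp(-M1(x))
    m = M1(x)
    return 0.5 * math.log(2 * math.pi) + (x + 0.5) * math.log(m) - m

# delta_0 * upper < Gamma(x+1) < upper, on the log scale
for x in (0.5, 1.0, 2.0, 5.0):
    lg = math.lgamma(x + 1.0)
    assert math.log(0.96259) + log_upper(x) < lg < log_upper(x)
```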
For $n\in \mathbb{N}$, it holds that \begin{eqnarray*} &&\delta _{1}\sqrt{2\pi }\left( \tfrac{n^{2}+\frac{180-59\sqrt{3}}{180}n+\frac{129-59\sqrt{3}}{360}}{n+\frac{90-29\sqrt{3}}{180}}\right) ^{n+1/2}\exp \left( -\tfrac{n^{2}+\frac{180-59\sqrt{3}}{180}n+\frac{129-59\sqrt{3}}{360}}{n+\frac{90-29\sqrt{3}}{180}}\right) \\ &<&n!<\sqrt{2\pi }\left( \tfrac{n^{2}+\frac{180-59\sqrt{3}}{180}n+\frac{129-59\sqrt{3}}{360}}{n+\frac{90-29\sqrt{3}}{180}}\right) ^{n+1/2}\exp \left( -\tfrac{n^{2}+\frac{180-59\sqrt{3}}{180}n+\frac{129-59\sqrt{3}}{360}}{n+\frac{90-29\sqrt{3}}{180}}\right) \end{eqnarray*} with the best constants $\delta _{1}=\exp f_{6}\left( 1\right) \approx 0.99965$ and $1$. \end{corollary} \begin{corollary} For $x>0$, the double inequality \begin{eqnarray*} &&\sqrt{2\pi }\left( \tfrac{x^{2}+\frac{180+59\sqrt{3}}{180}x+\frac{129+59\sqrt{3}}{360}}{x+\frac{90+29\sqrt{3}}{180}}\right) ^{x+1/2}\exp \left( -\tfrac{x^{2}+\frac{180+59\sqrt{3}}{180}x+\frac{129+59\sqrt{3}}{360}}{x+\frac{90+29\sqrt{3}}{180}}\right) \\ &<&\Gamma (x+1)<\tau _{0}\sqrt{2\pi }\left( \tfrac{x^{2}+\frac{180+59\sqrt{3}}{180}x+\frac{129+59\sqrt{3}}{360}}{x+\frac{90+29\sqrt{3}}{180}}\right) ^{x+1/2}\exp \left( -\tfrac{x^{2}+\frac{180+59\sqrt{3}}{180}x+\frac{129+59\sqrt{3}}{360}}{x+\frac{90+29\sqrt{3}}{180}}\right) \end{eqnarray*} holds, where $\tau _{0}=\exp f_{7}\left( 0\right) \approx 1.0020$ and $1$ are the best constants.
For $n\in \mathbb{N}$, it holds that \begin{eqnarray*} &&\sqrt{2\pi }\left( \tfrac{n^{2}+\frac{180+59\sqrt{3}}{180}n+\frac{129+59\sqrt{3}}{360}}{n+\frac{90+29\sqrt{3}}{180}}\right) ^{n+1/2}\exp \left( -\tfrac{n^{2}+\frac{180+59\sqrt{3}}{180}n+\frac{129+59\sqrt{3}}{360}}{n+\frac{90+29\sqrt{3}}{180}}\right) \\ &<&n!<\tau _{1}\sqrt{2\pi }\left( \tfrac{n^{2}+\frac{180+59\sqrt{3}}{180}n+\frac{129+59\sqrt{3}}{360}}{n+\frac{90+29\sqrt{3}}{180}}\right) ^{n+1/2}\exp \left( -\tfrac{n^{2}+\frac{180+59\sqrt{3}}{180}n+\frac{129+59\sqrt{3}}{360}}{n+\frac{90+29\sqrt{3}}{180}}\right) \end{eqnarray*} with the best constants $\tau _{1}=\exp f_{7}\left( 1\right) \approx 1.0001$ and $1$. \end{corollary} \section{Open problems} Inspired by Examples \ref{E-M3,2}--\ref{E-N4,3}, we propose the following problems. \begin{problem} Let $S_{p_{k};q_{k}}^{n,n-1}\left( a,b\right) $ be defined by (\ref{S^n,n-1}). Find $p_{k}$ and $q_{k}$ such that the asymptotic formula for the gamma function \begin{equation*} \ln \Gamma (x+1)\sim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln S_{p_{k};q_{k}}^{n,n-1}\left( x,x+1\right) -\left( x+\frac{1}{2}\right) :=F_{1}\left( x\right) \end{equation*} holds as $x\rightarrow \infty $ with \begin{equation*} \lim_{x\rightarrow \infty }\frac{\ln \Gamma (x+1)-F_{1}\left( x\right) }{x^{-2n+1}}=c_{1}\neq 0,\pm \infty . \end{equation*} \end{problem} \begin{problem} Let $S_{p_{k};q_{k}}^{n,n-1}\left( a,b\right) $ be defined by (\ref{S^n,n-1}). Find $p_{k}$ and $q_{k}$ such that the asymptotic formula for the gamma function \begin{equation*} \ln \Gamma (x+1)\sim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln \left( x+\frac{1}{2}\right) -S_{p_{k};q_{k}}^{n,n-1}\left( x,x+1\right) :=F_{2}\left( x\right) \end{equation*} holds as $x\rightarrow \infty $ with \begin{equation*} \lim_{x\rightarrow \infty }\frac{\ln \Gamma (x+1)-F_{2}\left( x\right) }{x^{-2n+1}}=c_{2}\neq 0,\pm \infty .
\end{equation*} \end{problem} \begin{problem} Let $H_{p_{k};q_{k}}^{n,n-1}\left( a,b\right) $ be defined by (\ref{H^n,n-1}). Find $p_{k}$ and $q_{k}$ such that the asymptotic formula for the gamma function \begin{equation*} \ln \Gamma (x+1)\sim \frac{1}{2}\ln 2\pi +\left( x+\frac{1}{2}\right) \ln H_{p_{k};q_{k}}^{n,n-1}\left( x,x+1\right) -H_{p_{k};q_{k}}^{n,n-1}\left( x,x+1\right) :=F_{3}\left( x\right) \end{equation*} holds as $x\rightarrow \infty $ with \begin{equation*} \lim_{x\rightarrow \infty }\frac{\ln \Gamma (x+1)-F_{3}\left( x\right) }{x^{-2n}}=c_{3}\neq 0,\pm \infty . \end{equation*} \end{problem} \end{document}
\begin{document} \title{Polytopes of Absolutely Wigner Bounded Spin States} \author{J\'{e}r\^{o}me Denis} \affiliation{Institut de Physique Nucl\'{e}aire, Atomique et de Spectroscopie, CESAM, University of Li\`{e}ge, B-4000 Li\`{e}ge, Belgium} \author{Jack Davis} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1} \author{Robert B. Mann} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1} \affiliation{Perimeter Institute for Theoretical Physics, 31 Caroline St N, Waterloo, Ontario, Canada N2L 2Y5} \author{John Martin} \affiliation{Institut de Physique Nucl\'{e}aire, Atomique et de Spectroscopie, CESAM, University of Li\`{e}ge, B-4000 Li\`{e}ge, Belgium} \maketitle \begin{abstract} We study the properties of unitary orbits of mixed spin states that are characterized by Wigner functions lower bounded by a specified value. To this end, we extend a characterization of the set of absolutely Wigner positive states as a set of linear eigenvalue constraints, which together define a polytope in the simplex of spin-$j$ mixed states centred on the maximally mixed state. The lower bound determines the relative size of such absolutely Wigner bounded (AWB) polytopes and we study their geometric properties. In particular, in each dimension a Hilbert-Schmidt ball representing a tight AWB sufficiency criterion based on the purity is exactly determined, while another ball representing AWB necessity is conjectured. Special attention is given to the case where the polytope separates orbits containing only positive Wigner functions from other orbits because of the use of Wigner negativity as a witness of non-classicality of spin states. 
Comparisons are made to absolute symmetric state separability and spherical Glauber-Sudarshan positivity, with additional details given for low spin quantum numbers. \end{abstract} \section{Introduction} Negative quasiprobability in the phase space representation has long been an indicator of non-classicality in quantum systems. The three most studied types of quasiprobability are those associated with the Wigner function, the Glauber-Sudarshan function, and the Kirkwood-Dirac function, particularly so in recent years due to the rise of quantum information theory. Wigner negativity in particular has received special attention because of its relationship to quantum advantage in the magic state injection model of universal fault-tolerant quantum computation \cite{Veitch_Ferrie_Gross_Emerson_2012,mari_eisert_simulation_2012,Howard_Wallman_Veitch_Emerson_2014,Delfosse_qudits_2017,booth_cv_connection_2022}. In this setting Wigner negativity acts as a magic monotone with respect to Gaussian/Clifford group operations, and so offers some credence to the idea that more negativity implies more non-classicality \cite{Veitch_resource_2014,Albarelli_cv_resource_2018,Wang_magic_channels_2019}. For pure states in bosonic systems the set of Wigner-positive states is fully characterized by Hudson's theorem, be it Gaussian states in the continuous variable regime or stabilizer states in the discrete variable regime \cite{Hudson_1974,Gross_2006}. However the relationship between negative quasiprobability and state mixedness is not well understood. For both practical and theoretical reasons this relationship is important. In the mixed bosonic setting, Gaussianity is no longer necessary to infer Wigner function positivity and the situation becomes more complicated \cite{Gracia_mixed_1988,Brocker_Werner_1995,Mandilara_extending_hudson_2010}. A general observation is that negativity tends to decrease as purity decreases. 
This may be attributed to the point-wise convexity of Wigner functions over state decompositions, together with the fact that the maximally mixed state is guaranteed to be Wigner-positive (at least in a limiting sense of increasingly flatter Gaussians), although the precise relationship is not fully understood. Even less understood is how Wigner negativity manifests in spin-$j$ systems, equivalent to the symmetric subspace of $2j$ qubits, which have a Moyal representation on a spherical phase space \cite{stratonovich_distributions_1956, varilly_moyal_1989, dowling_agarwal_1994, Klimov_Romero_de_Guise_2017, koczor_parity_2020, harrow2013church}. Evidence suggests that no pure spin state is completely Wigner-positive \cite{davis2022stellar}, and the question of mixed spin states remains largely unexplored. Inspired by work on characterizing mixed spin state entanglement in the symmetrized multi-qubit picture, in particular that of absolute separability \cite{Verstraete_absolute_2001,serrano2021maximally}, here we address the question of Wigner positivity by investigating unitary orbits of spin states. The unitary orbit of a spin-$j$ state $\rho$ is defined as the set of states $\{U\rho {U^\dagger}: U \in \text{SU}(2j+1)\}$. In particular, we call a general spin state \textit{absolutely Wigner-positive} (AWP) if its spherical Wigner function remains positive under the action of all global unitaries $U \in \text{SU}(2j+1)$. In order to position our work in a wider context, we begin with a brief note on related research. Recent works have studied the sets of AWP states \cite{Abbasli2020, Abgaryan2021, Abgaryan2020, Abgaryan2021b}, taking a broad perspective on the Moyal picture in finite dimensions by simultaneously considering the set of all candidate SU($N$)-covariant Wigner functions for each dimension $N$.
It is in this general setting that the relationship between generalized Moyal theory, the existence of Wigner-positive polytopes, and the Birkhoff-von Neumann theorem was first established. It was furthermore abstractly demonstrated that there always exists a compatible reduction to an appropriate $N$-dimensional SU($2$) symbol correspondence on the sphere. By contrast, here we work exclusively with the symmetry group SU(2) in each dimension, as well as a single concrete Wigner function, Eq.\ \eqref{eq:defWignerFunction}, which we consider to be the canonical Wigner function for spin systems because it is the only SU(2)-covariant Wigner function to satisfy, in addition to the usual Stratonovich-Weyl axioms, either of the following two properties: \begin{itemize} \item Compatibility with the spherical $s$-ordered class of functions: it is exactly ``in between'' the Husimi $Q$ function and the Glauber-Sudarshan $P$ function (as generated by the standard spin-coherent state $\ket{j,j}$) \cite{varilly_moyal_1989}. \item Compatibility with Heisenberg-Weyl symmetry: its infinite-spin limit is the original Wigner function on $\mathbb{R}^2$ \cite{Weigert_contracting_2000}. \end{itemize} In addition to offering a related but alternative argument showing the existence of such polytopes, here we go beyond previous investigations in three ways. The first is that we extend the argument to include orbits of Wigner functions lower-bounded by numbers not necessarily zero. These one-parameter families of polytopes, which we refer to as \textit{absolutely Wigner bounded} (AWB) polytopes, are of interest not only for Wigner functions but also for other quasiprobability distributions. The second is that we go into explicit detail on the geometric properties of these polytopes and explore their relevance in the context of spin systems and quantum information.
The third is that we contrast the Wigner negativity structure with the Glauber-Sudarshan negativity structure, which amounts to an accessible comparison between Wigner negativity and entanglement in the mixed state setting. Having established the context for this work with the above description, our first result is the complete characterization of the set of AWB spin states in all finite dimensions, with AWP states appearing as a special case. As similarly discussed in \cite{Abgaryan2020}, this may be phrased as a natural application of the Birkhoff-von Neumann theorem on doubly stochastic matrices, though here we extend and specialize to the SU(2)-covariant Wigner kernel associated with the canonical Wigner function on the sphere. In particular, the set of AWB states forms a polytope in the simplex of density matrix spectra, the $(2j+1)!$ hyperplane boundaries of which are defined by permutations of the eigenvalues of the phase-point operators. We also find exactly the largest Hilbert-Schmidt ball, centred on the maximally mixed state in each dimension, that contains nothing but AWB states; this amounts to the strictest AWB sufficiency criterion based solely on the purity of mixed states. We also obtain an expression that we conjecture describes the smallest Hilbert-Schmidt ball containing all AWB states, which amounts to a tight necessity criterion. Numerical evidence supports this conjecture. For both criteria, we discuss their geometric interpretation in relation to the full AWB polytope. We then specialize to absolute Wigner positivity and compare it with symmetric absolute separability (SAS), which in the case of a single spin-$j$ system is equivalent to absolute Glauber-Sudarshan positivity \cite{Giraud_classicality_2008,Bohnet-Waldraff-PPT_2016,Bohnet-Waldraff2017absolutely}. Our paper is organized as follows.
Section \ref{sec:background} briefly outlines the generalized phase space picture using the parity-operator/Stratonovich framework for the group SU(2). Section \ref{sec:AWPPolytope} proves our first result on AWB polytopes, while Sec.\ \ref{sec:AWP_balls} determines the largest Hilbert-Schmidt ball contained in the AWB polytopes and conjectures the smallest ball containing them. Section \ref{sec:entanglement} explores low-dimensional cases in more detail and draws comparisons to entanglement. Finally, conclusions are drawn and perspectives are outlined in Sec.~\ref{sec:conclusion}. \section{Background} \label{sec:background} The parity-operator framework is the generalization of Moyal quantum mechanics to physical systems other than a collection of non-relativistic spinless particles. Each type of system has a different phase space, and the various types are classified by the system's dynamical symmetry group \cite{brif_mann_lie_1999}. In each case the central object is a map, $\Delta$, called the \textit{kernel}, which takes points in phase space to operators on Hilbert space. A quasi-probability representation of a quantum state, evaluated at a point in phase space, is the expectation value of the phase-point operator assigned to that point. Different kernels yield different distributions but all must obey the Stratonovich-Weyl axioms, which ensure, among other properties, the existence of an inverse map and that the Moyal picture is as close as possible to classical statistical physics over the same phase space (i.e.\ the Born rule as an $L^2$ inner product).
When applied to the Heisenberg-Weyl group (i.e.\ the group of displacement operators generated by the canonical commutation relations, $[ x,p ] = i \mathbb{1}$) this framework reduces to the common phase space associated with $n$ canonical degrees of freedom, $\mathbb{R}^{2n}$, and the phase-point operators corresponding to the Wigner function appear as a set of displaced parity operators \cite{brif_mann_lie_1999, Grossmann_1976, royer_parity_1977}. A spin-$j$ system on the other hand corresponds to the group SU(2), which yields a spherical phase space, $S^2$. Here we list some necessary results from this case; see Refs.\ \cite{stratonovich_distributions_1956,varilly_moyal_1989,dowling_agarwal_1994,Klimov_Romero_de_Guise_2017,koczor_parity_2020} for more information. \subsection{Wigner function of a spin state} Consider a single spin system with spin quantum number $j$. Pure states live in the Hilbert space $\mathcal{H}\simeq \mathbb{C}^{2j+1}$, which carries an irreducible SU(2) representation that acts as rotations up to global phase: $U_g\ket{\psi} \simeq R(\theta,\phi)\ket{\psi}$ where $g \in$ SU(2). Mixed states live in the space of operators, $\mathcal{L(H)}$, where SU(2) acts via conjugation: $U_g \rho U^\dagger_g$. This action on operator space is not irreducible and may be conveniently decomposed into irreducible multipoles. The SU(2) Wigner kernel of a spin-$j$ system is \begin{equation}\label{Wignerkernel} \begin{aligned} &\Delta: S^2\rightarrow \mathcal{L(H)} \\ &\Delta(\Omega)= \sqrt{\frac{4\pi}{2j+1}}\sum_{L=0}^{2j}\sum_{M=-L}^{L}Y_{LM}^{*}(\Omega)T_{LM}, \end{aligned} \end{equation} where $\Omega = (\theta,\phi) \in S^2$, $Y_{LM}(\Omega)$ are the spherical harmonics, and $T_{LM} \equiv T_{LM}^{(j)}$ are the spherical tensor operators associated with spin $j$ \cite{Varshalovich_1988}. To avoid cluttered notation we do not label the operator $\Delta$ with a $j$; the surrounding context should be clear on which dimension/spin is being discussed. 
The Wigner function of a spin state $\rho$ is defined as \begin{equation} \begin{aligned} W_\rho(\Omega) & = \mathrm{Tr}\left[\rho\Delta(\Omega)\right]\\ &= \frac{1}{2j+1} + \sqrt{\frac{4\pi}{2j+1}}\sum_{L=1}^{2j}\sum_{M=-L}^{L}\rho_{LM}Y_{LM}(\Omega), \label{eq:defWignerFunction} \end{aligned} \end{equation} where $\rho_{LM} = \tr[\rho \, T^\dagger_{LM}]$ are state multipoles~\cite{1981Agarwal}. This function is normalized according to \begin{equation}\label{eq:normalization} \frac{2j+1}{4\pi} \int_{S^2} W_\rho(\Omega) \, d\Omega = 1, \end{equation} and, as Eq.\ \eqref{eq:defWignerFunction} suggests, the maximally mixed state (MMS) $\rho_0 = \mathbb{1}/(2j+1)$ is mapped to the constant function \begin{equation} W_{\rho_0}(\Omega) = \frac{1}{2j+1}. \end{equation} An important property is SU(2) covariance: \begin{equation}\label{eq:covariance} W_{U_g \rho U^\dagger_g}(\Omega) = W_{\rho}(g^{-1}\, \Omega), \end{equation} where the right-hand side denotes the spatial action of SU(2) on the sphere. As this is simply a rigid rotation, analogous to an optical displacement operator rigidly translating $\mathbb{R}^{2n}$, the overall functional form of any Wigner function is unaffected. Hence the Wigner negativity defined as~\cite{2021Everitt,davis2021wigner} \begin{equation} \label{eq:WignerNeg} \delta(\rho)=\frac{1}{2}\left(\int_{S^2}\left|W_{\rho}(\Omega)\right|d\mu(\Omega)-1\right), \end{equation} often used as a measure of non-classicality, is invariant under SU(2) transformations. Note that the action of a general unitary $U \in $ SU$(2j+1)$ on a state $\rho$ can of course radically change its Wigner function and thus also its negativity. The quantity $d\mu(\Omega) = (2j+1)/(4\pi) \sin\theta \, d\theta \, d\phi$ is the invariant measure on the phase space. A related consequence of SU(2) covariance is that all phase-point operators have the same spectrum \cite{heiss-weigert-discrete-2000}.
The set of kernel eigenvectors at the point $\Omega$ is the Dicke basis quantized along the axis $\mathbf{n}$ pointing to $\Omega$, such that we have \begin{equation}\label{eq:SU(2)-kernel-diagonal} \Delta (\Omega) = \sum_{m=-j}^j \Delta_{j,m} \ketbra{j,m;\mathbf{n}}{j,m;\mathbf{n}}, \end{equation} with rotationally-invariant eigenvalues \begin{equation}\label{eq:kernel_eigenvalues} \Delta_{j,m} = \sum_{L=0}^{2j} \frac{2L+1}{2j+1} C^{j,m}_{j,m; L, 0} \end{equation} where $C_{j_1, m_1;j_2, m_2}^{J,M}$ are Clebsch-Gordan coefficients. In particular, at the North pole ($\Omega=0$) the kernel is diagonal in the standard Dicke basis and its matrix elements are \begin{equation} [\Delta(0)]_{mn} = \langle j,m | \Delta(0) | j,n \rangle = \Delta_{j,m}\delta_{mn}. \end{equation} The kernel is guaranteed to have unit trace at all points and in all dimensions: \begin{equation}\label{eq:kernel_eigs_unit_sum} \sum_{m=-j}^j \Delta_{j,m} = 1 \quad \forall\, j, \end{equation} and satisfies the relationship~\cite{Abgaryan2021} \begin{equation}\label{identity2} \sum_{m=-j}^j \Delta_{j,m}^2 = 2j+1 \quad \forall\, j, \end{equation} for which we give a proof in Appendix \ref{sec:remarkablerelation} for the sake of completeness. Finally, we note the following observations on the set of kernel eigenvalues \eqref{eq:kernel_eigenvalues}: \begin{equation}\label{eq:kernel_eigenvalue_assumption} \begin{split} & |\Delta_{j,m}| > |\Delta_{j,m-1}| \neq 0, \\[2pt] & \sgn(\Delta_{j,k}) = (-1)^{j-k} \end{split} \end{equation} for all $m \in \{-j+1,...,j \}$. That is, as $m$ ranges from $j$ to $-j$ the eigenvalues alternate in sign (starting from a positive value at $m=j$) and strictly decrease in absolute value without vanishing. Numerics support this assumption, though we are not aware of any proof; see also \cite{davis2021wigner,koczor_parity_2020} for discussions on this point. Note that this implies that the kernel has multiplicity-free eigenvalues for all finite spin.
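As a numerical illustration (our addition, not part of the original manuscript), the kernel eigenvalues of Eq.~\eqref{eq:kernel_eigenvalues} can be computed directly from the Racah formula for Clebsch-Gordan coefficients using only standard-library Python. The sketch below checks the unit-trace identity, the sum-of-squares identity \eqref{identity2}, and the conjectured sign/ordering pattern \eqref{eq:kernel_eigenvalue_assumption} for small $j$; all function names are ours.

```python
from math import factorial, sqrt

def _f(x):
    """Factorial of an argument that is an integer up to rounding."""
    return factorial(round(x))

def cg(j1, m1, j2, m2, J, M):
    """Clebsch-Gordan coefficient C^{J,M}_{j1,m1;j2,m2} via the Racah formula.
    Arguments may be half-integers; every factorial argument below is an
    integer whenever the triangle and projection conditions hold."""
    if round(2 * (m1 + m2)) != round(2 * M) or not abs(j1 - j2) <= J <= j1 + j2:
        return 0.0
    pre = (2 * J + 1) * _f(J + j1 - j2) * _f(J - j1 + j2) * _f(j1 + j2 - J) \
        / _f(j1 + j2 + J + 1)
    pre *= _f(J + M) * _f(J - M) * _f(j1 - m1) * _f(j1 + m1) * _f(j2 - m2) * _f(j2 + m2)
    kmin = round(max(0, j2 - J - m1, j1 + m2 - J))
    kmax = round(min(j1 + j2 - J, j1 - m1, j2 + m2))
    s = sum((-1) ** k / (_f(k) * _f(j1 + j2 - J - k) * _f(j1 - m1 - k)
                         * _f(j2 + m2 - k) * _f(J - j1 - m2 + k) * _f(J - j2 + m1 + k))
            for k in range(kmin, kmax + 1))
    return sqrt(pre) * s

def kernel_eigs(j):
    """Kernel eigenvalues Delta_{j,m} of Eq. (eq:kernel_eigenvalues),
    listed for m = j, j-1, ..., -j."""
    N = round(2 * j + 1)
    return [sum((2 * L + 1) / N * cg(j, j - i, L, 0, j, j - i) for L in range(N))
            for i in range(N)]

for j in (0.5, 1, 1.5, 2):
    d = kernel_eigs(j)
    assert abs(sum(d) - 1) < 1e-12                        # unit trace of the kernel
    assert abs(sum(x * x for x in d) - (2 * j + 1)) < 1e-12   # Eq. (identity2)
    # conjectured pattern: alternating signs, strictly decreasing magnitudes
    assert all(abs(d[i]) > abs(d[i + 1]) > 0 for i in range(len(d) - 1))
    assert all((-1) ** i * d[i] > 0 for i in range(len(d)))
```

For $j=1$ this reproduces the closed-form spectrum $(\tfrac{1}{3}+\tfrac{1}{\sqrt{2}}+\tfrac{\sqrt{10}}{6},\ \tfrac{1}{3}-\tfrac{\sqrt{10}}{3},\ \tfrac{1}{3}-\tfrac{1}{\sqrt{2}}+\tfrac{\sqrt{10}}{6})$ obtained by evaluating the three Clebsch-Gordan coefficients by hand.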
This is in contrast to the Wigner function on $\mathbb{R}^2$, which has a highly degenerate kernel (i.e.~it acts on an infinite-dimensional Hilbert space but only has two eigenvalues) \cite{royer_parity_1977}. Only some of our results depend on \eqref{eq:kernel_eigenvalue_assumption}, and we will highlight when this is the case. In what follows we use the vector notation $\boldsymbol{\lambda}$ for the spectrum $(\lambda_0,\lambda_1,\ldots,\lambda_{2j})$ of a density operator $\rho$, and likewise $\boldsymbol{\Delta}$ for the spectrum $(\Delta_{j,-j}, \Delta_{j,-j+1},...,\Delta_{j,j})$ of the kernel $\Delta$. We also alternate between the double-subscript notation $\Delta_{j,m}$, which refers directly to Eq.~\eqref{eq:kernel_eigenvalues}, and the single-subscript notation $\Delta_i$ where $i\in\{0,...,2j\}$, which denotes a vector component, similar to $\lambda_i$. \section{Polytopes of absolutely Wigner bounded states} \label{sec:AWPPolytope} We present in this section our first result. We prove there exists a polytope containing all absolutely Wigner bounded (AWB) states with respect to a given lower bound, and fully characterize it. When this bound is zero we refer to such states as absolutely Wigner positive (AWP). We also determine a necessary and sufficient condition for a state to be inside the AWB polytope based on a majorization criterion. These results offer a strong characterization of the classicality of mixed spin states. We start with the following definition of AWB states: \begin{definition} A spin-$j$ state $\rho$ is absolutely Wigner bounded (AWB) with respect to $W_\mathrm{min}$ if the Wigner function of each state unitarily connected to $\rho$ is lower bounded by $W_\mathrm{min}$. That is, if \begin{equation} \begin{split} W_{U\rho U^\dagger}(\Omega) \geq W_\mathrm{min} \end{split} \qquad \begin{split} &\forall \,\, \Omega \in S^2 \\ &\forall \,\, U \in \mathrm{SU}(2j+1). 
\end{split} \end{equation} When $W_\mathrm{min} = 0$ we refer to such states as absolutely Wigner positive (AWP). Hence, every state in the unitary orbit of an AWP state has a non-negative Wigner function. \end{definition} \subsection{Full set of AWB states} The following proposition is an extension and alternative derivation of a result on absolute positivity obtained in~\cite{Abgaryan2020,Abgaryan2021}. It gives a complete characterization of the set of states whose unitary orbit contains only states whose Wigner function is bounded below by a specified constant value, and is valid for any spin quantum number $j$. \begin{proposition} \label{AWP_Theorem} Let $\boldsymbol{\Delta}^\uparrow$ denote the vector of kernel eigenvalues sorted into increasing order, and let \begin{equation} W_\mathrm{min}\in [ \Delta^\uparrow_0, \tfrac{1}{2j+1} ]. \end{equation} Then a spin state $\rho$ has in its unitary orbit only states whose Wigner function satisfies $W(\Omega)\geq W_\mathrm{min}\;\forall \,\Omega$ iff its decreasingly ordered eigenvalues $\boldsymbol{\lambda}^\downarrow$ satisfy the following inequality \begin{equation}\label{eq:THM} \sum_{i=0}^{2j}\lambda_{i}^\downarrow\Delta_i^\uparrow\geq W_\mathrm{min}. \end{equation} \end{proposition} \textit{Remark.} While not necessary for the proof to hold, note that according to Eq.~\eqref{eq:kernel_eigenvalue_assumption} the sorted kernel eigenspectrum becomes $\boldsymbol{\Delta}^\uparrow=(\Delta_{j,j-1}, \Delta_{j,j-3},...,\Delta_{j,-j},...,\Delta_{j,j-2},\Delta_{j,j})$ and so $W_\mathrm{min} \in [\Delta_{j,j-1}, \frac{1}{2j+1}]$. The upper bound comes from Eq.\ \eqref{eq:normalization}, which implies that any Wigner function with $W_\mathrm{min} > 1/(2j+1)$ would not be normalized. Furthermore, for $W_\mathrm{min} = 0$, this proposition provides a characterisation of the set of AWP states, as previously found in a more abstract and general setting in~\cite{Abgaryan2020, Abgaryan2021}.
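Before turning to the proof, note that the criterion of Eq.~\eqref{eq:THM} reduces the AWB question to a single inner product between sorted spectra. A minimal self-contained sketch of this membership test (our own addition; the spin-1 kernel spectrum is hardcoded from the closed-form evaluation of Eq.~\eqref{eq:kernel_eigenvalues}, and the function name is ours):

```python
from math import sqrt

# Spin-1 kernel spectrum (Delta_{1,1}, Delta_{1,0}, Delta_{1,-1}),
# evaluated in closed form from Eq. (eq:kernel_eigenvalues).
DELTA_J1 = (1/3 + 1/sqrt(2) + sqrt(10)/6,
            1/3 - sqrt(10)/3,
            1/3 - 1/sqrt(2) + sqrt(10)/6)

def is_awb(lams, delta, w_min=0.0):
    """Criterion of Eq. (eq:THM): the spectrum sorted in decreasing order,
    dotted with the kernel spectrum sorted in increasing order, must be
    at least w_min."""
    lam_down = sorted(lams, reverse=True)   # lambda^downarrow
    delta_up = sorted(delta)                # Delta^uparrow
    return sum(l * d for l, d in zip(lam_down, delta_up)) >= w_min

# The maximally mixed state is AWP; a pure state never is (here j = 1).
print(is_awb([1/3, 1/3, 1/3], DELTA_J1))  # True
print(is_awb([1, 0, 0], DELTA_J1))        # False
```

For the MMS the test value is $\tfrac{1}{3}\sum_i\Delta_i=\tfrac{1}{3}>0$, while for a pure state it is the most negative kernel eigenvalue, consistent with the claim that no pure spin state is AWP.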
\begin{proof} Consider a general spin state $\rho$. We first derive a sufficient condition on the spectrum of $\rho$ for every element $U\rho U^{\dagger}$ of its unitary orbit to have a Wigner function $W(\Omega)\geq W_\mathrm{min}$ at every point $\Omega\in S^2$. Since the unitary transformation applied to $\rho$ may in particular be an SU(2) rotation, the value of the Wigner function of $\rho$ at any point $\Omega$ equals the value at $\Omega=0$ of the Wigner function of an element of its unitary orbit (the rotated version of $\rho$). Because we consider the full unitary orbit, i.e.\ all possible $U$'s, we may therefore set the Wigner function argument to $\Omega=0$ without loss of generality. The state $\rho$ can always be diagonalized by a unitary matrix $M$, i.e.\ $M\rho M^\dagger=\Lambda$ with $\Lambda=\mathrm{diag}(\lambda_0,...,\lambda_{2j})$ a diagonal positive semi-definite matrix. The Wigner function at $\Omega=0$ of $U\rho U^{\dagger}$ is then given by \begin{eqnarray*} W_{U\rho U^{\dagger}}(0) & = & \mathrm{Tr}\left[U\rho U^{\dagger}\Delta(0)\right] \\ &=& \mathrm{Tr}\left[U M^\dagger \Lambda M U^{\dagger}\Delta(0)\right]. \end{eqnarray*} By defining the unitary matrix $V=U M^\dagger$ and calculating the trace in the Dicke basis, we obtain (where we drop the Wigner function argument in the following) \begin{eqnarray*} W_{U\rho U^{\dagger}} & = & \mathrm{Tr}\left[V \Lambda V^{\dagger}\Delta(0)\right]\\ & = & \sum_{p,q,k,l=0}^{2j}V_{pq}\lambda_{q}\delta_{qk}V_{lk}^{*}\Delta_{l}\delta_{lp}\\ & = & \sum_{q,p=0}^{2j}\lambda_{q}\left|V_{pq}\right|^{2}\Delta_{p}. \end{eqnarray*} The non-negative numbers $|V_{pq}|^{2}$ in the previous equation define the entries of a unistochastic (hence also doubly stochastic) matrix of dimension $(2j+1)\times(2j+1)$ which we denote by $X$, \begin{equation} X_{qp}=\left|V_{pq}\right|^{2}.
\end{equation} By the Birkhoff-von Neumann theorem, we know that $X$ can be expressed as a convex combination of permutation matrices $P_{k}$, \begin{equation} X=\sum_{k=1}^{N_p}c_{k}P_{k}, \end{equation} where $N_p=(2j+1)!$ is the total number of permutations $\pi_{k} \in S_{2j+1}$ with $S_{2j+1}$ the symmetric group over $2j+1$ symbols, \begin{equation} c_{k}\geq0 \quad\forall\, k\quad \mathrm{and} \quad \sum_{k=1}^{N_p}c_{k}=1. \end{equation} Consequently, we have \begin{eqnarray*} W_{U\rho U^{\dagger}} & = & \sum_{p,q=0}^{2j}\lambda_{p}X_{pq}\Delta_{q}\\ & = & \sum_{k=1}^{N_p}c_{k}\sum_{p,q=0}^{2j}\lambda_{p}\left[P_{k}\right]_{pq}\Delta_{q}\\ & = & \sum_{k=1}^{N_p}c_{k}\sum_{p=0}^{2j}\lambda_{p}\Delta_{\pi_{k}(p)}. \end{eqnarray*} For a state $\rho$ whose eigenspectrum $\boldsymbol{\lambda}$ satisfies the $N_p$ inequalities \begin{equation} \sum_{p=0}^{2j}\lambda_{p}\Delta_{\pi(p)}\geq W_\mathrm{min} \qquad\forall\, \pi\in S_{2j+1}\label{eq:AWPCondition} \end{equation} we then have \begin{equation*} W_{U\rho U^{\dagger}} = \sum_{k=1}^{N_p}c_{k}\sum_{p=0}^{2j}\lambda_{p}\Delta_{\pi_{k}(p)} \geq W_\mathrm{min} \end{equation*} for any unitary $U$, which establishes sufficiency. Conversely, if a state has in its unitary orbit only states whose Wigner function satisfies $W(\Omega)\geq W_\mathrm{min}\;\forall \,\Omega$, then \begin{equation} W_{U\rho U^{\dagger}} = \sum_{k=1}^{N_p}c_{k}\sum_{p=0}^{2j}\lambda_{p}\Delta_{\pi_{k}(p)} \geq W_\mathrm{min} \qquad \forall\, U. \end{equation} In particular, the unitary matrix $U$ can correspond to any permutation matrix $P$, so that we have \begin{equation} W_{P \rho P^{\dagger}} = \sum_{p=0}^{2j}\lambda_{p}\Delta_{\pi(p)} \geq W_\mathrm{min} \qquad \forall \, \pi \end{equation} and we conclude that the state satisfies \eqref{eq:AWPCondition}.
In fact, it is enough to consider the ordered eigenvalues $\boldsymbol{\lambda}^{\downarrow}$, so that a state is AWB iff it satisfies the most stringent of these inequalities (by the rearrangement inequality, the anti-ordered pairing minimizes the scalar product), \begin{equation}\label{eq:ordered_awp_ineq} \boldsymbol{\lambda}^\downarrow \boldsymbol{\cdot} \boldsymbol{\Delta}^\uparrow = \sum_{p=0}^{2j}\lambda_{p}^\downarrow \Delta_{p}^\uparrow \geq W_\mathrm{min} \end{equation} with the ordered eigenvalues of the kernel $\boldsymbol{\Delta}^\uparrow$. \end{proof} The proof provided for Proposition 1 can in fact be reproduced for any quasiprobability distribution $\mathcal{W}$ defined on the spherical phase space $S^2$ as the expectation value of a specific kernel operator $\tilde{\Delta}(\Omega)$ in a quantum state $\rho$; that is, via $\mathcal{W}_\rho(\Omega) = \mathrm{Tr}\left[\rho \tilde{\Delta}(\Omega)\right]$; see also Refs.~\cite{Abgaryan2020,Abgaryan2021} for other generalizations. A polytope in the simplex of states will describe the absolute positivity of each quasiprobability distribution and its vertices will be determined by the eigenspectrum of the defining kernel. A family of such (normalized) distributions is obtained from the $s$-parametrized Stratonovich-Weyl kernel (see e.g.\ Refs.~\cite{1981Agarwal,varilly_moyal_1989,Brif1998}) \begin{equation}\label{sSWkernel} \Delta^{(s)}(\Omega) = \sqrt{\frac{4\pi}{2j+1}}\sum_{L,M}\left(C^{j,j}_{j,j; L, 0}\right)^{-s} Y_{LM}^{*}(\Omega)T_{LM} \end{equation} with $s\in [-1,1]$. For $s=0$, it reduces to the Wigner kernel given in Eq.~\eqref{Wignerkernel}. As negative values of the Wigner function are generally considered to indicate non-classicality, the value $W_\mathrm{min}=0$ plays a special role.
Nevertheless, since Proposition 1 holds for any $W_{\mathrm{min}}\in [ \min\{\Delta_i\},\frac{1}{2j+1}]$, the corresponding sets of states also form polytopes, which become larger as $W_{\mathrm{min}}$ becomes more negative, culminating in the entire simplex when $W_{\mathrm{min}}$ is the smallest kernel eigenvalue $\min\{\Delta_i\}$ (which according to Eq.\ \eqref{eq:kernel_eigenvalue_assumption} is $\Delta_{j,j-1}$). There is thus a continuous transition between the one-point polytope, which represents the maximally mixed state, and the polytope containing the whole simplex. As discussed later, Fig.\ \ref{fig:criticalpolytope} in Sec.\ \ref{sec:AWP_balls} shows a special example of this family for spin-1. Quasiprobability distributions other than the Wigner function, such as the Husimi $Q$ function derived from the $s$-ordered SW kernel \eqref{sSWkernel} for $s=-1$, are positive by construction, implying that the polytope for $Q_{\mathrm{min}}=0$ contains the entire simplex of state spectra. In this case it becomes especially interesting to consider lower bounds $Q_{\mathrm{min}}>0$ and study the properties of the associated polytopes. \subsection{AWP polytopes} \label{subsec:AWPPolytope} Since the conditions for being AWP depend only on the eigenspectrum $\boldsymbol{\lambda}$ of a state, it is sufficient in the following to focus on diagonal states in the Dicke basis. The condition \eqref{eq:THM} for $W_{\mathrm{min}}=0$ defines a polytope of AWP states in the simplex of mixed spin states. Indeed, we start by noting that the equalities \begin{equation} \sum_{i=0}^{2j}\lambda_{i}\Delta_{\pi(i)} = 0 \label{eq:AWP_hyperplanes} \end{equation} define, for all possible permutations $\pi$, $(2j+1)!$ hyperplanes in $\mathbb{R}^{2j}$. Together they delimit a particular polytope that contains all absolutely Wigner positive states.
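The $(2j+1)!$ hyperplane inequalities behind Eq.~\eqref{eq:AWP_hyperplanes} can be checked exhaustively for small $j$, and the rearrangement inequality guarantees that the sorted pairing $\boldsymbol{\lambda}^\downarrow\cdot\boldsymbol{\Delta}^\uparrow$ is always the minimal, hence binding, one. A brute-force sketch of this (our own illustration for $j=1$, with the kernel spectrum hardcoded in closed form):

```python
from itertools import permutations
from math import sqrt

# Spin-1 kernel spectrum, closed form from Eq. (eq:kernel_eigenvalues).
DELTA = [1/3 + 1/sqrt(2) + sqrt(10)/6,   # Delta_{1,1}
         1/3 - sqrt(10)/3,               # Delta_{1,0}
         1/3 - 1/sqrt(2) + sqrt(10)/6]   # Delta_{1,-1}

def orbit_wigner_extremes(lams):
    """Min and max of lambda . Delta_pi over all permutations pi in S_3,
    i.e. the extreme Wigner values at Omega = 0 over the unitary orbit."""
    vals = [sum(l * d for l, d in zip(lams, p)) for p in permutations(DELTA)]
    return min(vals), max(vals)

lams = [0.4, 0.35, 0.25]                 # some ordered spectrum
lo, hi = orbit_wigner_extremes(lams)
# The minimum over permutations equals the sorted, anti-aligned pairing:
sorted_pairing = sum(l * d for l, d in
                     zip(sorted(lams, reverse=True), sorted(DELTA)))
assert abs(lo - sorted_pairing) < 1e-12
print(lo >= 0)   # True -> this spectrum is AWP
```

The same exhaustive scan, with $0$ replaced by $W_{\mathrm{min}}$, tests membership in any AWB polytope, though of course only the single ordered inequality is ever needed.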
The AWP polytopes for $j=1$ and $j=3/2$ are respectively represented in Figs.~\ref{fig:spin-1-simplex_2} and \ref{fig:spin-3/2-simplex} in a barycentric coordinate system (see Appendix \ref{sec:barycentricCoordinatesSystem} for a reminder). If we now restrict our attention to ordered eigenvalues $\boldsymbol{\lambda}^\downarrow$, we get a minimal polytope that is represented in Fig.~\ref{fig:minimalpolytope} for $j=1$. The full polytope is reconstructed by taking all possible permutations of the barycentric coordinates of the vertices of the minimal polytope. \begin{figure} \caption{AWP polytope for $j=1$ displayed in the barycentric coordinate system. The AWP polytope is the area shaded in dark red, with the blue dashed lines marking the hyperplanes defined by Eq.~\eqref{eq:AWP_hyperplanes}.} \label{fig:spin-1-simplex_2} \end{figure} These vertices can be found as follows. In general we need $2j+1$ independent conditions on the vector $(\lambda_0,\lambda_1,\ldots,\lambda_{2j})$ to uniquely define (the unitary orbit of) a state $\rho$. One of them is given by the normalization condition $\sum_{i=0}^{2j}\lambda_{i}=1$. The others correspond to the fact that a vertex of the AWP polytope is the intersection of $2j$ hyperplanes each specified by an equation of the form \eqref{eq:AWP_hyperplanes}. One of them is \begin{equation} \sum_{i=0}^{2j}\lambda_{i}^{\downarrow}\Delta_{i}^{\uparrow} = 0. \label{eq:ordered_awp_eq} \end{equation}\begin{figure} \caption{The AWP polytope for $j=3/2$ in the barycentric coordinate system (top). The grey rods (shown in the enlarged polytope at the bottom) are the edges of the AWP polytope and the blue sphere is its largest inner ball, with radius $r_{\mathrm{in}}^{\mathrm{AWP}}$.} \label{fig:spin-3/2-simplex} \end{figure}\begin{figure} \caption{AWP minimal polytope for $j=1$ in the barycentric coordinate system. The structure is similar to Fig.~\ref{fig:spin-1-simplex_2}.} \label{fig:minimalpolytope} \end{figure}Let us focus on the remaining $2j-1$ conditions.
For simplicity, consider a transposition $\pi=(p,q)$ with $q>p$. The condition \eqref{eq:AWP_hyperplanes} becomes in this case, using \eqref{eq:ordered_awp_eq}, \begin{align}\label{condtransposition} & \lambda_{p}^{\downarrow}\Delta_{q}^{\uparrow}+\lambda_{q}^{\downarrow}\Delta_{p}^{\uparrow}+ \sum_{\substack{i=0\\i\neq p,q}}^{2j}\lambda_{i}^{\downarrow}\Delta_{i}^{\uparrow} = 0 \nonumber\\[2pt] \Leftrightarrow\quad & \lambda_{p}^{\downarrow}(\Delta_{q}^{\uparrow}-\Delta_{p}^{\uparrow})+\lambda_{q}^{\downarrow}(\Delta_{p}^{\uparrow}-\Delta_{q}^{\uparrow})=0. \end{align} As all the eigenvalues of the kernel are different by assumption \eqref{eq:kernel_eigenvalue_assumption}, Eq.~\eqref{condtransposition} is satisfied iff $\lambda_{p}^{\downarrow}=\lambda_{q}^{\downarrow}$ and, as the eigenvalues are ordered, this also means that $\lambda_{k}^{\downarrow}=\lambda_{p}^{\downarrow}$ for all $k$ between $p$ and $q$. Note that in this reasoning, the only forbidden transposition is $(0,2j)$ because it would give the MMS. Hence, to a given transposition $(p,q)$ there corresponds a set of $q-p$ conditions $\lambda_{l}^{\downarrow} = \lambda_{l+1}^{\downarrow}$ for $l=p,\ldots,q-1$. Therefore, as any permutation is a composition of transpositions, the $2j-1$ conditions that follow from \eqref{eq:AWP_hyperplanes} eventually reduce to a set of $2j-1$ nearest-neighbour eigenvalue equalities taken from \begin{equation}\label{eq:set_of_NN_constraints} \mathcal{E}=\left(\lambda_{0}^\downarrow = \lambda_{1}^\downarrow, \lambda_{1}^\downarrow=\lambda_{2}^\downarrow, ... , \lambda_{2j-1}^\downarrow=\lambda_{2j}^\downarrow\right). \end{equation} Since we need $2j-1$ conditions, we can draw $2j-1$ equalities from $\mathcal{E}$ in order to obtain a vertex. This method gives $\binom{2j}{2j-1}=2j$ different draws and so we get $2j$ vertices for the minimal polytope. As explained previously, all other vertices of the full polytope are obtained by permuting the coordinates of the vertices of the minimal polytope.
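The vertex construction just described reduces, for each choice of dropped equality, to a two-block spectrum $(a,\ldots,a,b,\ldots,b)$ fixed by normalization and the ordered hyperplane equality \eqref{eq:ordered_awp_eq}, i.e.\ a $2\times 2$ linear solve. A sketch of this construction (our own code, here for $j=1$ and $W_{\mathrm{min}}=0$; the kernel spectrum is hardcoded in closed form):

```python
from math import sqrt

# Spin-1 kernel spectrum in increasing order (Delta^uparrow).
DELTA_UP = sorted([1/3 + 1/sqrt(2) + sqrt(10)/6,
                   1/3 - sqrt(10)/3,
                   1/3 - 1/sqrt(2) + sqrt(10)/6])

def minimal_polytope_vertices(delta_up, w_min=0.0):
    """The 2j vertices of the minimal AWB polytope: dropping the equality
    between positions l and l+1 of the chain E leaves a two-block spectrum
    (a,...,a, b,...,b) fixed by normalization and Eq. (eq:ordered_awp_eq)."""
    n = len(delta_up)                  # n = 2j + 1
    verts = []
    for l in range(n - 1):             # index of the dropped equality
        s1, s2 = sum(delta_up[:l + 1]), sum(delta_up[l + 1:])
        # Solve (l+1) a + (n-1-l) b = 1 and a s1 + b s2 = w_min by Cramer's rule.
        det = (l + 1) * s2 - (n - 1 - l) * s1
        a = (s2 - (n - 1 - l) * w_min) / det
        b = ((l + 1) * w_min - s1) / det
        verts.append([a] * (l + 1) + [b] * (n - 1 - l))
    return verts

for v in minimal_polytope_vertices(DELTA_UP):
    assert abs(sum(v) - 1) < 1e-12                                # normalized
    assert abs(sum(x * d for x, d in zip(v, DELTA_UP))) < 1e-12   # on the W=0 face
    assert all(v[i] >= v[i + 1] for i in range(len(v) - 1))       # ordered
```

For spin 1 this yields the two minimal-polytope vertices (approximately $(0.544, 0.228, 0.228)$ and $(0.423, 0.423, 0.153)$); permuting their components generates the vertices of the full AWP polytope.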
In Appendix~\ref{sec:polytope_coordinates}, we give the barycentric coordinates of the vertices of the minimal polytope up to $j=2$. The entirety of the preceding discussion of the AWP polytope vertices naturally extends to the AWB polytope vertices, for which we must replace $0$ by $W_{\text{min}}$ on the right-hand side of Eq.~\eqref{eq:AWP_hyperplanes}. However, for negative values of $W_{\text{min}}$, the polytope may be partially outside the simplex and some vertices will have negative-valued components, resulting in unphysical states. A peculiar characteristic of the AWP polytope is that each point on its surface has a state in its orbit satisfying $W(0)=0$. Indeed, for an eigenspectrum $\boldsymbol{\lambda}$ that satisfies \eqref{eq:AWP_hyperplanes} for a given permutation $\pi$, the diagonal state $\rho$ in the Dicke basis with $\rho_{ii}=\lambda_{\pi^{-1}(i)}$ satisfies \begin{equation} W(0)=\sum_{i=0}^{2j}\lambda_{\pi^{-1}(i)}\Delta_{i} = \sum_{i=0}^{2j}\lambda_{i}\Delta_{\pi(i)} = 0 \end{equation} and is in the unitary orbit of $\boldsymbol{\lambda}$. Following the same reasoning, for eigenspectra in the interior of the AWP polytope, no state in the corresponding unitary orbit has a Wigner function that vanishes at any point. \subsection{Majorization condition} Here we find a condition equivalent to \eqref{eq:THM} for a state to be AWB based on its majorization by a mixture of the vertices of the minimal polytope. \begin{definition} For two vectors $\mathbf{u}$ and $\mathbf{v}$ of the same length $n$, we say that $\mathbf{u}$ majorizes $\mathbf{v}$, denoted $\mathbf{u}\succ\mathbf{v}$, iff \begin{equation} \sum_{k=1}^{l}u_{k}^{\downarrow} \geq \sum_{k=1}^{l}v_{k}^{\downarrow} \end{equation} for all $l<n$, with $\sum_{k=1}^{n}u_{k}=\sum_{k=1}^{n}v_{k}$ and $\mathbf{u}^{\downarrow}$ denoting the vector $\mathbf{u}$ with components sorted in decreasing order.
\end{definition} \begin{proposition} \label{AWP_Majorization} A state $\rho$ is AWB iff its eigenvalues $\boldsymbol{\lambda}$ are majorized by a convex combination of the ordered vertices $\{\boldsymbol{\lambda}^{\downarrow}_{\mathrm{v}_k}\}$ of the corresponding AWB polytope, i.e.~$\exists\,\mathbf{c}\in\mathbb{R}_{+}^{2j}$ such that \begin{equation}\label{eq:majorizationcondition} \boldsymbol{\lambda}\prec\sum_{k=1}^{2j}c_{k}\boldsymbol{\lambda}^{\downarrow}_{\mathrm{v}_k} \end{equation} with $\sum_{k=1}^{2j}c_{k}=1$. \end{proposition} \begin{proof} If $\boldsymbol{\lambda}$ is AWB then it can be expressed as a mixture of the vertices of the AWB polytope \begin{equation} \boldsymbol{\lambda} = \sum_{k}c_{k}\boldsymbol{\lambda}_{\text{v}_k} \end{equation} and the majorization \eqref{eq:majorizationcondition} follows: each vertex $\boldsymbol{\lambda}_{\text{v}_k}$ is a permutation of an ordered vertex, so for every $l$ the sum of the $l$ largest components of $\boldsymbol{\lambda}$ is bounded by the corresponding convex combination of the sums of the $l$ largest components of the $\boldsymbol{\lambda}^{\downarrow}_{\mathrm{v}_k}$. Conversely, it is a classical result of majorization theory (obtained, e.g., from the Birkhoff-von Neumann theorem) that $\mathbf{x}\succ\mathbf{y}$ iff $\mathbf{y}$ is in the convex hull of the vectors obtained by permuting the elements of $\mathbf{x}$ (i.e.\ the permutahedron generated by $\mathbf{x}$). Hence, if $\boldsymbol{\lambda}$ respects \eqref{eq:majorizationcondition}, it can be expressed as a convex combination of the vertices of the AWB polytope and is therefore inside it. \end{proof} \section{Balls of absolutely Wigner bounded states} \label{sec:AWP_balls} \subsection{Largest inner ball of the AWB polytope} In this section, we calculate the radius $r_{\mathrm{in}}^{W_\mathrm{min}}$ of the largest ball centred on the MMS contained in the polytope of AWB states and find a state $\rho^*$ that is both on the surface of this ball and on a face of the polytope. Denoting by $r(\rho)$ the Hilbert-Schmidt distance between a state $\rho$ and the MMS, \begin{equation} \label{eq:distancetoMMS} r(\rho) = \lVert \rho - \rho_0 \rVert_{\mathrm{HS}} = \sqrt{\mathrm{Tr}\left[\left(\rho-\rho_0\right)^2\right]}, \end{equation} we have that all valid states with $r(\rho)\leq r_{\mathrm{in}}^{W_\mathrm{min}}$ are AWB.
\begin{proposition} \label{AWPBalls} The radius of the largest inner ball of the AWB polytope associated with a $W_{\mathrm{min}}$ value such that the ball is contained within the state simplex is \begin{equation}\label{eq:rmaxAna} r_{\mathrm{in}}^{W_{\mathrm{min}}} = \frac{1-(2j+1)W_{\mathrm{min}}}{2\sqrt{j(2j+1)(j+1)}}. \end{equation} \end{proposition} \begin{proof} Note that the distance \eqref{eq:distancetoMMS} is equivalent to the Euclidean distance in the simplex between the spectra $\boldsymbol{\lambda}$ and $\boldsymbol{\lambda}_{0}$ of $\rho$ and the MMS respectively, i.e. \begin{equation*} r(\rho) = \sqrt{\left(\sum_{i=0}^{2j}\lambda_{i}^{2}\right)-\frac{1}{2j+1}} = \lVert\boldsymbol{\lambda}-\boldsymbol{\lambda}_{0}\rVert. \end{equation*} In order to find the radius $r_{\mathrm{in}}^{W_{\mathrm{min}}}$ (see Fig.~\ref{fig:minimalpolytope} for $W_{\mathrm{min}}=0$) of the largest inner ball of the AWB polytope, we need to find the spectra on the hyperplanes of the AWB polytope with the minimum distance to the MMS. Mathematically, this translates into the following constrained minimization problem \begin{equation}\label{eq:Minimization_rin} \min_{\boldsymbol{\lambda}} \; \lVert\boldsymbol{\lambda}-\boldsymbol{\lambda}_0\rVert^2 \;\;\; \text{ subject to } \left\{\begin{array}{l} \sum_{i=0}^{2j}\lambda_{i}=1\\[8pt] \boldsymbol{\lambda\cdot\Delta}=W_{\mathrm{min}} \end{array}\right. \end{equation} where $\boldsymbol{\Delta}=\left(\Delta_{0},\Delta_{1},...,\Delta_{2j}\right)$. For this purpose, we use the method of Lagrange multipliers with the Lagrangian \begin{equation*} L = \lVert\boldsymbol{\lambda}-\boldsymbol{\lambda}_0\rVert^2+\mu_{1}\left(\boldsymbol{\lambda\cdot\Delta}-W_{\mathrm{min}}\right)+\mu_{2}\left(1-\sum_{i=0}^{2j}\lambda_{i}\right) \end{equation*} where $\mu_{1}, \mu_{2}$ are two Lagrange multipliers to be determined.
The stationary points $\boldsymbol{\lambda}^*$ of the Lagrangian must satisfy the following condition \begin{equation}\label{eq:LagrangianDerivative} \frac{\partial L}{\partial\boldsymbol{\lambda}}\Big|_{\boldsymbol{\lambda}=\boldsymbol{\lambda}^*} = \boldsymbol{0} \quad \Leftrightarrow \quad 2\boldsymbol{\lambda}^*+\mu_{1}\boldsymbol{\Delta}-\mu_{2}\boldsymbol{1} = \boldsymbol{0} \end{equation} with $\boldsymbol{1}=(1,1,...,1)$ of length $2j+1$. By summing over the components of \eqref{eq:LagrangianDerivative} and using Eq.~\eqref{eq:kernel_eigs_unit_sum}, we readily get \begin{equation}\label{mu2mu1} \mu_{2} = \frac{\mu_{1}+2}{2j+1}. \end{equation} Then, by taking the scalar product of \eqref{eq:LagrangianDerivative} with $\boldsymbol{\Delta}$ and using Eqs.~\eqref{identity2} and \eqref{mu2mu1}, we obtain \begin{equation*} \mu_{1} = \frac{1-(2j+1)W_{\mathrm{min}}}{2j(j+1)}\quad \mathrm{and} \quad \mu_{2} = \frac{(2j+1)-W_{\mathrm{min}}}{2j(j+1)}. \end{equation*} Finally, by substituting the above values for $\mu_1$ and $\mu_2$ in Eq.~\eqref{eq:LagrangianDerivative} and solving for the stationary point $\boldsymbol{\lambda}^*$, we get \begin{equation}\label{rhostar} \boldsymbol{\lambda}^*=\frac{\left[(2j+1)-W_{\mathrm{min}}\right]\boldsymbol{1}-\left[1-(2j+1)W_{\mathrm{min}}\right]\boldsymbol{\Delta}}{4j(j+1)} \end{equation} from which the inner ball radius follows as \begin{equation*} r_{\mathrm{in}}^{W_{\mathrm{min}}} = r(\rho^*) = \frac{1-(2j+1)W_{\mathrm{min}}}{2\sqrt{j(2j+1)(j+1)}} \end{equation*} with $\rho^*$ a state with eigenspectrum \eqref{rhostar}. \end{proof} Let us first consider positive values of $W_{\mathrm{min}}$. The inner radius \eqref{eq:rmaxAna} vanishes for $W_{\mathrm{min}}=1/(2j+1)$, corresponding to the fact that only the MMS has a Wigner function with this minimal (and constant) value. The radius then increases as $W_{\mathrm{min}}$ decreases.
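The closed-form stationary spectrum of Eq.~\eqref{rhostar} and the radius of Eq.~\eqref{eq:rmaxAna} are straightforward to verify numerically. The following sketch (our addition, for spin 1 with the kernel spectrum hardcoded in closed form) checks that $\boldsymbol{\lambda}^*$ is normalized, lies on the $W_{\mathrm{min}}$ face, and sits at distance $r_{\mathrm{in}}^{W_{\mathrm{min}}}$ from the MMS:

```python
from math import sqrt

def inner_ball_check(j, delta, w_min=0.0):
    """Evaluate Eq. (rhostar) for a given kernel spectrum and compare the
    resulting Hilbert-Schmidt distance to the MMS with Eq. (eq:rmaxAna)."""
    n = round(2 * j + 1)
    lam = [((n - w_min) - (1 - n * w_min) * d) / (4 * j * (j + 1)) for d in delta]
    r = sqrt(sum(x * x for x in lam) - 1 / n)       # HS distance to the MMS
    r_formula = (1 - n * w_min) / (2 * sqrt(j * n * (j + 1)))
    return lam, r, r_formula

# Spin-1 kernel spectrum, closed form from Eq. (eq:kernel_eigenvalues).
DELTA_J1 = [1/3 + 1/sqrt(2) + sqrt(10)/6,
            1/3 - sqrt(10)/3,
            1/3 - 1/sqrt(2) + sqrt(10)/6]

lam, r, r_formula = inner_ball_check(1, DELTA_J1)
assert abs(sum(lam) - 1) < 1e-12                               # normalized spectrum
assert abs(sum(l * d for l, d in zip(lam, DELTA_J1))) < 1e-12  # on the W = 0 face
assert abs(r - r_formula) < 1e-12                              # matches Eq. (eq:rmaxAna)
```

For $j=1$ and $W_{\mathrm{min}}=0$ this gives $r_{\mathrm{in}}^{\mathrm{AWP}} = 1/(2\sqrt{6}) \approx 0.204$.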
At $W_{\mathrm{min}}=0$, it reduces to the radius of the largest ball of AWP states, \begin{equation} r_{\mathrm{in}}^{\mathrm{AWP}} = \frac{1}{2\sqrt{j(2j+1)(j+1)}}. \end{equation} Expressed as a function of dimension $N=2j+1$ and re-scaled to generalized Bloch length, this result was also recently found in the context of SU($N$)-covariant Wigner functions (where the phase space manifold changes with the Hilbert space dimension, rather than always being the sphere) \cite{Abgaryan2021b}. While our bound is tight for all $j$ in the SU(2) setting (i.e.\ there always exist orbits infinitesimally further away that contain Wigner-negative states), it is unknown whether this bound remains tight for such SU($N$)-covariant Wigner functions for $N>2$. At the critical value\footnote{In the limit $j\to\infty$, as $\Delta_{j,j}\to 2$ \cite{Weigert_contracting_2000}, Eq.~\eqref{Wmincritical} tends to $-1/2$.} \begin{equation}\label{Wmincritical} W_{\mathrm{min}}=\frac{\Delta_{j,j} - (2 j+1)}{\Delta_{j,j} (2j+1)-1}<0, \end{equation} the spectrum \eqref{rhostar} acquires a first zero eigenvalue, $\lambda^*_{2j}=0$. This corresponds to the situation where $\boldsymbol{\lambda}^*$ is simultaneously on the surface of the ball, on a face of the polytope and on an edge of the simplex as seen in Fig.~\ref{fig:criticalpolytope}. For more negative values of $W_{\mathrm{min}}$, Eq.~\eqref{rhostar} no longer represents a physical state because $\lambda^*_{2j}$ becomes negative. In this situation, in order to determine the radius of larger balls containing only AWB states, additional constraints must be imposed in the optimisation procedure, reflecting the fact that some elements of the spectrum of $\rho$ are zero. Since the possible number of zero eigenvalues depends on $j$, we will not go further in this development.
Nevertheless, in the end, when there is only one non-zero eigenvalue left (equal to 1, in which case the states are pure), the most negative $W_{\mathrm{min}}$ corresponds to the smallest kernel eigenvalue $\Delta_{j,j-1}$ (according to the conjecture \eqref{eq:kernel_eigenvalue_assumption}), and the radius is the distance $r=\sqrt{2j/(2j+1)}$ from pure states to the MMS. \begin{figure} \caption{AWB polytope in the barycentric coordinate system for $j=1$ and $W_{\mathrm{min}}$ equal to the critical value \eqref{Wmincritical}.} \label{fig:criticalpolytope} \end{figure} Finally, it should be noted that any state resulting from the permutation of the elements of $\boldsymbol{\lambda}^*$ is also on the surface of the AWB inner ball and satisfies an equality analogous to \eqref{eq:AWP_hyperplanes} for any permutation $\pi$. Thus, by considering all permutations of the elements of $\boldsymbol{\lambda}^*$ we can find all states located where the AWB polytope is tangent to the AWB inner ball, as shown in Fig.\ \ref{fig:spin-1-simplex_2} for $j=1$ and $W_{\mathrm{min}}=0$. \subsection{Smallest outer ball of the AWB polytope} \label{sec:r_min} We now formulate a conjecture for the radius $r_{\text{out}}^{W_\mathrm{min}}$ of the smallest outer ball of the polytope containing all AWB states. With the set of AWB states forming a convex polytope, $r_{\text{out}}^{W_\mathrm{min}}$ must be the radius associated with the outermost vertex. Hence the problem is equivalent to finding this furthest vertex of the minimal polytope. As mentioned above, as $W_\mathrm{min}$ gets smaller and the polytopes get bigger, both the polytopes and their inner and outer Hilbert-Schmidt balls will eventually encompass unphysical states. We therefore acknowledge that intermediate calculations may take us outside of the state simplex, but final results must of course be restricted to the intersection of these objects with the simplex. When a vertex lies inside the simplex it may be referred to as a \textit{vertex state}.
In principle, this can always be determined on a case-by-case basis via the following procedure. Recall from Sec.\ \ref{subsec:AWPPolytope} that an AWB state with ordered spectrum $\boldsymbol{\lambda}^{\downarrow}$ located on a vertex is specified by $2j+1$ linear eigenvalue constraints. The first is normalization, the second is the AWB vertex criterion (i.e.\ Eq.\ \eqref{eq:ordered_awp_eq} with a $W_{\mathrm{min}}$), and the remaining $2j-1$ come from a ($2j-1$)-sized sample from the $(2j)$-sized set of nearest-neighbour constraints \eqref{eq:set_of_NN_constraints}. Thus the $2j$ states sitting on the $2j$ distinct vertices correspond to the $\binom{2j}{2j-1} = 2j$ choices of bi-partitioning the ordered eigenvalues into a ``left'' set, $\boldsymbol{\omega}_n$, of size $n$ and a ``right'' set, $\boldsymbol{\sigma}_n$, of size $2j+1-n$, each of which contains eigenvalues of equal value $\omega_n$ and $\sigma_n$ respectively such that $\omega_n > \sigma_n$. The full eigenspectrum is the concatenation $\boldsymbol{\lambda}^{\downarrow}_{\mathrm{v}_n} = \boldsymbol{\omega}_n \circ \boldsymbol{\sigma}_n$, and normalization becomes \begin{equation}\label{eq:vertex_state_normalization} n\omega_n + (2j+1-n)\sigma_n = 1, \quad n \in \{ 1,...,2j \}. \end{equation} As we are temporarily allowing the ordered spectrum $\boldsymbol{\lambda}^{\downarrow}$ to have negative components, Eq.\ \eqref{eq:vertex_state_normalization} should be interpreted only as requiring the vertices to lie in the hyperplane generated by the state simplex (i.e.\ not necessarily within the simplex).
Inserting $\boldsymbol{\lambda}^{\downarrow}_{\mathrm{v}_n}$ and \eqref{eq:vertex_state_normalization} into the AWB vertex criterion, the weights $\omega_n$ can be solved as a function of the kernel eigenvalues and $W_\mathrm{min}$: \begin{align}\label{eq:omega_n_explicit} \omega_n &= \frac{\sum_{i=n}^{2j} \Delta_i^\uparrow - (2j+1-n) W_\mathrm{min}}{ n \sum_{i=n}^{2j} \Delta_i^\uparrow - (2j+1-n)\sum_{i=0}^{n-1} \Delta_i^\uparrow } \nonumber \\ &= \frac{\tau_n - (2j+1 - n) W_\mathrm{min}}{(2j+1)\tau_n - (2j+1-n)} \end{align} where in the second line we used the unit-trace property \eqref{eq:kernel_eigs_unit_sum} of the kernel and \begin{equation} \tau_n = \sum_{i=n}^{2j} \Delta_i^\uparrow = \sum_{i=0}^{2j-n} \Delta_i^\downarrow \end{equation} is the sum over the largest $2j+1-n$ kernel eigenvalues. The purity $\gamma_{\mathrm{v}_n}$ and distance $r_{\mathrm{v}_n}$ of the $n$-th vertex are then given by \begin{align} \gamma_{\mathrm{v}_n} &= n \omega_n^2 + (2j+1-n)\sigma_n^2 \\ r_{\mathrm{v}_n} &= \sqrt{\gamma_{\mathrm{v}_n} - \frac{1}{2j+1}}, \end{align} which are functions of only the kernel eigenvalues and $W_\mathrm{min}$. Note that purity, being defined as the sum of squares of the eigenvalues, remains a faithful notion of distance to the MMS even when such spectra are allowed to go negative. After computing each of these numbers, $r_{\text{out}}^{W_\mathrm{min}}$ would correspond to the largest one, and the set of states satisfying this condition would be the intersection of the associated ball with the state simplex. In Sec.\ \ref{sec:spin1} we present details of this procedure for $j=1$ and $W_\mathrm{min} = 0$. Despite this somewhat involved procedure, we numerically find it is always the case that the first vertex, $\mathrm{v}_1$, remains within the state simplex for all $W_\mathrm{min} \in [\Delta^\uparrow_0,\frac{1}{2j+1}]$ and, relatedly, that \begin{equation} r^{W_\mathrm{min}}_{\text{out}} = r_{\text{v}_1}.
\end{equation} We conjecture this to be true in all finite dimensions. Part of the difficulty in proving this in general comes from the non-trivial nature of the kernel eigenvalues \eqref{eq:kernel_eigenvalues} and from further numerical evidence suggesting that no vertex state ever majorizes any other vertex state. Furthermore, with the most negative kernel eigenvalue \eqref{eq:kernel_eigenvalue_assumption} being $\Delta^\uparrow_0 = \Delta_{j,j-1}$, the vertex state $\rho_{\text{v}_1}$ takes the special form \begin{equation}\label{eq:outer_vertex_state} \omega_1 \ketbra{j,j-1}{j,j-1} + \frac{1-\omega_1}{2j} \sum_{m\neq j-1} \ketbra{j,m}{j,m} \end{equation} where \begin{align} \omega_1 &= \frac{\sum_{m\neq j-1}\Delta_{j,m} - 2j W_{\mathrm{min}} }{\sum_{m\neq j-1}\Delta_{j,m} - 2j\Delta_{j,j-1}} \nonumber\\ &= \frac{1-\Delta_{j,j-1} - 2jW_{\mathrm{min}}}{1-(2j+1)\Delta_{j,j-1}}. \label{eq:outer_vertex_weight} \end{align} The minimal outer radius $r_{\text{out}}^{W_\mathrm{min}}$ is then conjectured to be \begin{align} r_{\text{out}}^{W_\mathrm{min}} &= \sqrt{ \gamma_{\text{v}_1} - \frac{1}{2j+1} } \nonumber\\ &= \sqrt{\omega_1^2 + 2j\left( \frac{1-\omega_1}{2j} \right)^2 - \frac{1}{2j+1}}. \label{eq:r_out_conjecture} \end{align} An operational interpretation of this radius is available by noting that the multiqubit realization of the $\ket{j,j-1}$ state, which has the most pointwise-negative Wigner function allowable (occurring at the North pole), is in fact the $W$ state introduced in the context of LOCC entanglement classification \cite{Dur_LOCC_2000}. And since the maximally mixed state has uniform eigenvalues, Eq.\ \eqref{eq:outer_vertex_state} may be interpreted as the end result of mixing the $W$ state with the maximally mixed state until the Wigner function at the North pole hits $W_\mathrm{min}$. The distance between the resulting state and the maximally mixed state is exactly our conjectured $r_{\text{out}}^{W_\mathrm{min}}$. 
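For $j=1$ and $W_\mathrm{min}=0$, the conjectured outer radius can be evaluated directly. In the sketch below, the closed form $\Delta_{1,0}=(1-\sqrt{10})/3$ for the most negative kernel eigenvalue is an assumption of this illustration (a standard Clebsch--Gordan evaluation of the kernel eigenvalues):

```python
import math

j = 1
W_min = 0.0
# Most negative kernel eigenvalue Delta_{j,j-1} for j = 1 (assumed
# closed form from a standard Clebsch-Gordan evaluation).
Delta_min = (1 - math.sqrt(10)) / 3

# Weight of the outermost vertex state, Eq. (outer_vertex_weight)
omega1 = (1 - Delta_min - 2 * j * W_min) / (1 - (2 * j + 1) * Delta_min)
# Conjectured outer radius, Eq. (r_out_conjecture)
r_out = math.sqrt(omega1 ** 2
                  + 2 * j * ((1 - omega1) / (2 * j)) ** 2
                  - 1 / (2 * j + 1))

assert abs(omega1 - (5 + math.sqrt(10)) / 15) < 1e-12
assert abs(r_out - 1 / math.sqrt(15)) < 1e-12
```

The values $\omega_1=(5+\sqrt{10})/15$ and $r_{\text{out}}=1/\sqrt{15}$ recovered here match the spin-1 results derived in Sec.~\ref{sec:spin1}.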
In particular, when the lower bound on the Wigner function at the North pole is set to zero, the outer radius provides a tight, purity-based necessary condition for a state to be AWP. Finally, when the lower bound is set to $W_{\mathrm{min}}=\Delta_{j,j-1}$, the weight \eqref{eq:outer_vertex_weight} becomes unity and the outer radius becomes the Hilbert-Schmidt distance to pure states, which reflects the fact that now the entire simplex is contained within the AWB polytope. \section{Relationship with entanglement and absolute Glauber-Sudarshan positivity} \label{sec:entanglement} Another common quasi-probability distribution studied in the context of single spins is the Glauber-Sudarshan $P$ function, defined through the equality \begin{equation}\label{Pfunc} \rho=\frac{2j+1}{4 \pi} \int P_{\rho}(\Omega)\,\ket{\Omega}\bra{\Omega}\, d \Omega. \end{equation} Compared to the Wigner function, the $P$ function is not unique. Negativity of all the $P$ functions representing a given state can be interpreted as the presence of entanglement within the multi-qubit realization of the system \cite{Bohnet-Waldraff-PPT_2016}. In other words, a general state $\rho$ of a single spin-$j$ system admits a positive $P$ function if and only if the many-body realization is separable (necessarily over symmetric states). This follows from the definition \eqref{Pfunc} of the $P$ function as the expansion coefficients of a state $\rho$ in the spin coherent state projector basis, and the fact that spin coherent states are the only pure product states available when the qubits are indistinguishable. States that admit a positive $P$ function after any global unitary transformation are called \textit{absolutely classical} spin states \cite{Bohnet-Waldraff2017absolutely} or \textit{symmetric absolutely separable (SAS)} states \cite{serrano2021maximally}.
In this section we focus entirely on the case of $W_\mathrm{min}=0$, since negative values of the Wigner function are generally used as a witness of non-classicality, and we compare the AWP polytopes to the known results on SAS states. In the context of single spins, the set of SAS states is only completely characterized for spin-1/2 and spin-1. We also show that the Wigner negativity \eqref{eq:WignerNeg} of a positive-valued $P$-function state is upper-bounded by the Wigner negativity of a coherent state. \subsection{Spin-1/2} In the familiar case of a single qubit state $\rho$, the spectrum $(\lambda, 1 - \lambda)$ is characterized by one number $\lambda$. The kernel eigenvalues, Eq.\ \eqref{eq:kernel_eigenvalues}, are \begin{equation} \Delta_{0} = \frac{1}{2}(1 - \sqrt{3}), \quad \Delta_{1} = \frac{1}{2}(1 + \sqrt{3}) = 1 - \Delta_0. \end{equation} Letting $\lambda \geq \frac{1}{2}$ denote the larger of the two eigenvalues, the strong ordered form \eqref{eq:ordered_awp_ineq} becomes \begin{equation} \begin{split} \lambda_0 \Delta_0 + \lambda_1 \Delta_1 &= \lambda \Delta_0 + (1-\lambda)(1 - \Delta_0) \\ &= \lambda(2\Delta_0 - 1) + 1 - \Delta_0. \end{split} \end{equation} Thus the AWP polytope is described, in the 1-dimensional projection to the $\lambda$ axis, as \begin{equation} \frac{1}{2} \leq \lambda \leq \frac{1-\Delta_0}{1-2\Delta_0} = \frac{1}{2} + \frac{1}{2\sqrt{3}}. \end{equation} This may be equivalently expressed either in terms of purity $\gamma$ or Bloch length $|\mathbf{n}| = \sqrt{2\gamma - 1}$, \begin{equation} \frac{1}{2} \leq \gamma \leq \frac{2}{3} \qquad \text{and} \qquad |\mathbf{n}| \leq \frac{1}{\sqrt{3}}. \end{equation} Additionally, the distance to the maximally mixed state via Eq.\ \eqref{eq:distancetoMMS} is $r \leq 1/\sqrt{6}$, which matches the inner-ball radius derived earlier, Eq.~\eqref{eq:rmaxAna}. In the case of spin-1/2 the set of AWP states is itself a ball, so the largest ball containing only AWP states coincides with the smallest ball containing all of them.
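The spin-1/2 numbers above follow from elementary arithmetic and can be checked in a few lines (a sketch; the kernel eigenvalues are the ones quoted in the text):

```python
import math

# Spin-1/2 kernel eigenvalues from the text.
D0 = (1 - math.sqrt(3)) / 2
D1 = (1 + math.sqrt(3)) / 2
assert abs(D0 + D1 - 1) < 1e-12               # unit trace
assert abs(D0**2 + D1**2 - 2) < 1e-12         # sum of squares = 2j+1

# Upper end of the AWP interval and the derived quantities.
lam_max = (1 - D0) / (1 - 2 * D0)
gamma = lam_max**2 + (1 - lam_max)**2         # purity
bloch = math.sqrt(2 * gamma - 1)              # Bloch length
r = math.sqrt(gamma - 1 / 2)                  # distance to the MMS

assert abs(lam_max - (0.5 + 1 / (2 * math.sqrt(3)))) < 1e-12
assert abs(gamma - 2 / 3) < 1e-12
assert abs(bloch - 1 / math.sqrt(3)) < 1e-12
assert abs(r - 1 / math.sqrt(6)) < 1e-12
```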
Regarding absolute $P$-positivity, all qubit states are SAS. This is a consequence of the Bloch ball being the convex hull of the spin-$\frac{1}{2}$ coherent states and global unitaries corresponding only to rigid rotations. Thus AWP qubit states are a strict subset of SAS qubit states. \begin{figure} \caption{Maximal PT negativity over each unitary orbit in the $j=1$ simplex of state spectra. The dashed blue line and red circle are respectively the AWP polytope and ball. The camel curve shows the boundary at which the negativity along the unitary orbit becomes non-zero.} \label{fig:OrbitMaximalNegativity_N=2} \end{figure} Furthermore, due to the invariance of negativity under rigid rotation, for a single qubit there is no distinction between a state being positive (in either the Wigner or $P$ sense) and being absolutely positive. This means that any state with Bloch radius $|\mathbf{n}|\in (1/\sqrt{3},1]$ has a positive $P$ function but a negative Wigner function. This is perhaps the simplest example of the fact that, unlike the planar phase space associated with optical systems, in spin systems Glauber-Sudarshan positivity does not imply Wigner positivity. \subsection{Spin-1} \label{sec:spin1} For qutrits the set of AWP states and the set of SAS states are both more complicated, with neither being a strict subset of the other. For SAS states we need the following result in \cite{serrano2021maximally}: \emph{the maximal value of the negativity, in the sense of the PPT criterion, in the unitary orbit of a two-qubit symmetric (or equivalently a spin-1) state $\rho$ with spectrum $\lambda_0\geq\lambda_1\geq\lambda_2$ is} \begin{equation} \label{eq:maxNeg_j1/2} \max\left[ 0,\sqrt{\lambda_0^2+(\lambda_1-\lambda_2)^2}-\lambda_1-\lambda_2 \right]. \end{equation} In Fig.~\ref{fig:OrbitMaximalNegativity_N=2}, we plot the resulting maximal negativity in the $j=1$ simplex with the AWP polytope. 
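The maximal orbit negativity \eqref{eq:maxNeg_j1/2} is easy to evaluate at specific spectra. As an illustration (a sketch; the two spectra are the approximate $j=1$ vertex spectra of the minimal AWP polytope listed in Table~\ref{table:polytopeVertices}), one polytope vertex has no entanglement anywhere in its unitary orbit while the other does:

```python
import math

def max_orbit_negativity(l0, l1, l2):
    """Eq. (maxNeg): maximal PT negativity over the unitary orbit of a
    spin-1 state with ordered spectrum l0 >= l1 >= l2."""
    return max(0.0, math.sqrt(l0**2 + (l1 - l2)**2) - l1 - l2)

# Approximate j = 1 AWP polytope vertex spectra (from the table of
# polytope vertices): two equal large eigenvalues vs. one large one.
flat = (0.423, 0.423, 0.153)
peak = (0.544, 0.228, 0.228)

print(max_orbit_negativity(*flat))  # 0.0: no NPT state in the orbit
print(max_orbit_negativity(*peak))  # ~0.088: the orbit contains entangled states
```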
There are clearly regions of spectra that satisfy either, both, or neither of the AWP and SAS conditions. Thus already for spin-1 there exist states with a positive $P$ function and a negative $W$ function and vice-versa. For $j=1$ specifically, it was also shown in \cite{serrano2021maximally} that the \emph{largest} ball of SAS states has a radius $r_{\mathrm{in}}^{P}=1/(2\sqrt{6})\approx 0.20412$, which is the same value as the radius $r_{\mathrm{in}}^{\mathrm{AWP}}=1/(2\sqrt{6})$. Hence, for $j=1$, the largest ball of AWP states coincides with the largest ball of SAS states as we can see in Fig.~\ref{fig:OrbitMaximalNegativity_N=2}. We now illustrate the procedure described in Sec.\ \ref{sec:r_min} and compute the vertex states and their radii for the case of spin-$1$. The two diagonal states associated with the vertices of the minimal polytope for $j=1$ (see Fig.\ \ref{fig:minimalpolytope}) are \begin{align} \rho_{\text{v}_1} &= \omega_1 \ketbra{1,-1}{1,-1} \nonumber \\ &\quad + \frac{1-\omega_1}{2}(\ketbra{1,0}{1,0} + \ketbra{1,1}{1,1} ), \\ \rho_{\text{v}_2} &= \omega_2 ( \ketbra{1,-1}{1,-1} + \ketbra{1,0}{1,0} ) \nonumber \\ &\quad + (1 - 2\omega_2)\ketbra{1,1}{1,1} \end{align} where the parameters $\omega_1$ and $\omega_2$ are found by solving the AWP criterion \eqref{eq:ordered_awp_eq}: \begin{equation} \begin{split} \omega_1 &= \frac{\Delta_{1,-1} + \Delta_{1,1}}{\Delta_{1,-1} + \Delta_{1,1} - 2\Delta_{1,0}} = \frac{1}{15}(5 + \sqrt{10}),\\ \omega_2 &= \frac{\Delta_{1,1}}{2\Delta_{1,1}-\Delta_{1,0}-\Delta_{1,-1}} = \frac{1}{6} \left(2 + \sqrt{7-3 \sqrt{5}}\right). \end{split} \end{equation} The two Hilbert-Schmidt radii \eqref{eq:distancetoMMS} of the vertex states are then \begin{equation} \begin{split} r_{\text{v}_1} &= r_{\text{out}}^\mathrm{AWP} = \frac{1}{\sqrt{15}} \approx 0.2582, \\ r_{\text{v}_2} &= \sqrt{\frac{1}{6} \left(7-3 \sqrt{5}\right)} \approx 0.2205 .
\end{split} \end{equation} As conjectured, we see that $r_{\text{v}_1} = r_{\text{out}}^W$ for spin-1. \begin{figure} \caption{Maximal PT negativity over each unitary orbit on the face of the minimal $j=3/2$ AWP polytope. The camel curve shows the boundary at which the negativity along the unitary orbit becomes non-zero. The notation of the vertices corresponds to the eigenspectra given in Table \ref{table:polytopeVertices}.} \label{fig:OrbitMaximalNegativity_N=3} \end{figure} \subsection{Spin-3/2} \label{sec:spin3/2} For spin-$3/2$, a numerical optimization (see Ref.~\cite{serrano2021maximally} for more information) yielded the maximum negativity (in the sense of the negativity of the partial transpose of the state) in the unitary orbit of the states located on a face of the polytope. The results are displayed in Fig.~\ref{fig:OrbitMaximalNegativity_N=3} where, similar to the spin-1 case, we observe both SAS and entangled states on the face of the minimal AWP polytope. A notable difference is that, for $j=3/2$, the largest ball containing only SAS states has a radius $r_{\mathrm{in}}^{P}=1/(2\sqrt{19})$~\cite{serrano2021maximally} which is strictly smaller than $r_{\mathrm{in}}^{\mathrm{AWP}}=1/(2\sqrt{15})$. Therefore, the SAS states on the face of the polytope are necessarily outside this ball. \subsection{Spin-\texorpdfstring{$j>3/2$}{j>3/2}} In Fig.~\ref{fig:radiicomparison}, we compare the radius of the AWP ball (\ref{eq:rmaxAna}) with the lower bound on the radius of the ball of SAS states \cite{Bohnet-Waldraff2017absolutely} \begin{equation} \label{eq:boundSAS} r^P \equiv \frac{\left[(4j+1)\tbinom{4j}{2j}-(j+1)\right]^{-1/2}}{\sqrt{4j+2}}\leq r^P_{\mathrm{in}}. \end{equation} This plot suggests that the balls of AWP states can be much larger than the balls of SAS states.
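The comparison underlying this plot is straightforward to reproduce. The following sketch evaluates the inner AWP radius \eqref{eq:rmaxAna} (at $W_\mathrm{min}=0$) against the SAS lower bound \eqref{eq:boundSAS} for the first few spin values, and also checks the $j^{-3/2}$ scaling of the inner radius:

```python
import math

def r_in_awp(j):
    # Largest inner AWP ball, Eq. (rmaxAna) at W_min = 0.
    return 1 / (2 * math.sqrt(j * (2 * j + 1) * (j + 1)))

def r_p_bound(j):
    # Lower bound on the SAS inner-ball radius, Eq. (boundSAS);
    # 4j and 2j are integers for integer or half-integer j.
    binom = math.comb(round(4 * j), round(2 * j))
    return ((4 * j + 1) * binom - (j + 1)) ** -0.5 / math.sqrt(4 * j + 2)

# The AWP inner ball exceeds the SAS lower bound, with a growing gap.
ratios = [r_in_awp(j) / r_p_bound(j) for j in (1, 1.5, 2, 2.5, 3)]
assert all(r > 1 for r in ratios)
assert ratios == sorted(ratios)   # the gap grows monotonically here

# Scaling: r_in ~ j^(-3/2), so quadrupling j divides it by about 8.
assert abs(r_in_awp(100) / r_in_awp(400) - 8) < 0.1
```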
This is confirmed by our numerical observations that sampling the hypersurface of the polytope for $j=2$, $5/2$ and $3$ always yields states that have negative partial transpose in their unitary orbit. We also plot in Fig.~\ref{fig:radiicomparison} the conjectured radius $r_{\mathrm{out}}^\mathrm{AWP}$ of the minimal ball containing all AWP states. Notably, the scalings of $r_{\mathrm{out}}^\mathrm{AWP}$ and $r_{\mathrm{in}}^\mathrm{AWP}$ with $j$ are different. The scaling $r_{\mathrm{in}}^\mathrm{AWP} \propto j^{-3/2}$ follows directly from Eq.~\eqref{eq:rmaxAna}. The scaling $r_{\mathrm{out}}^\mathrm{AWP} \propto j^{-1}$ can be explained by noting that the infinite-spin limit of the SU(2) Wigner kernel is the Heisenberg-Weyl Wigner kernel, which only has the two eigenvalues $\pm 2$ \cite{Weigert_contracting_2000}. Hence for sufficiently large $j$ we may approximate $\Delta_{j,j-1} \approx -2$, which yields $\omega_1 \approx 3/(3+4j)$ from \eqref{eq:outer_vertex_weight}. The Laurent series of Eq.\ \eqref{eq:r_out_conjecture} with this approximation has leading term $1/(4j)$, exactly matching the results shown in Fig.\ \ref{fig:radiicomparison}. \begin{figure} \caption{Comparison of the radii of the outer AWP ball (dark blue), the inner AWP ball (blue), and the lower bound on the SAS ball radius (orange). For $j\geq10$, we found excellent fits with $r_{\mathrm{out,fit}}\propto j^{-1}$ and $r_{\mathrm{in,fit}}\propto j^{-3/2}$.} \label{fig:radiicomparison} \end{figure} \subsection{Bound on Wigner negativity} The spin-1 case showed us that there are SAS states outside the AWP polytope, i.e.\ with a Wigner function admitting negative values. Here, we show very generally that the Wigner negativity \eqref{eq:WignerNeg} of states with an everywhere positive $P$ function (in particular SAS states), denoted hereafter by $\rho_{P\geqslant0}$, is upper bounded by the Wigner negativity of coherent states.
Indeed, such states can always be represented as a mixture of coherent states \begin{equation} \rho_{P\geqslant0}=\sum_{i}w_{i}\left|\alpha_{i}\right\rangle \left\langle \alpha_{i}\right| \end{equation} with $w_{i}\geqslant 0$ and $\sum_{i}w_{i}=1$. Their Wigner negativity can then be upper bounded as follows \begin{equation} \begin{aligned} \delta(\rho_{P\geqslant0}) & = \frac{1}{2}\int_{\Gamma}\left|W_{\rho_{P\geqslant0}}(\Omega)\right|d\mu(\Omega)-\frac{1}{2}\\ & = \frac{1}{2}\int_{\Gamma}\left|\sum_{i}w_{i}W_{\left|\alpha_{i}\right\rangle }(\Omega)\right|d\mu(\Omega)-\frac{1}{2}\\ & \leqslant \underbrace{\sum_{i}w_{i}}_{=1}\underbrace{\left(\frac{1}{2}\int_{\Gamma}\left|W_{\left|\alpha_{i}\right\rangle }(\Omega)\right|d\mu(\Omega)\right)}_{=\delta\left(|\alpha\rangle\right)+\frac{1}{2}}-\frac{1}{2}\\ & =\delta\left(|\alpha\rangle\right) \end{aligned} \end{equation} where $\delta\left(|\alpha\rangle\right)$ is the Wigner negativity of a coherent state (the same for every coherent state, by rotational invariance). Since it has been observed that the negativity of a coherent state decreases with $j$~\cite{davis2021wigner}, this upper bound on the negativity of positive $P$ function states decreases with $j$ as well. \section{Conclusion} \label{sec:conclusion} We have investigated the non-classicality of unitary orbits of mixed spin-$j$ states. Our first result is Proposition~\ref{AWP_Theorem}, which gives a complete characterization for any spin quantum number $j$ of the set of absolutely Wigner bounded (AWB) states in the form of a polytope centred on the maximally mixed state in the simplex of mixed spin states. This amounts to an extension and alternative derivation of results from \cite{Abgaryan2020, Abgaryan2021} in the setting of quantum spin. We have studied the properties of the vertices of this polytope for different spin quantum numbers, as well as of its largest/smallest inner/outer Hilbert-Schmidt balls.
In particular, we have shown that the radii of the inner and outer balls scale differently as a function of $j$ (see Eqs.~\eqref{eq:rmaxAna} and \eqref{eq:r_out_conjecture} as well as Fig.~\ref{fig:radiicomparison}). We have provided an equivalent condition for a state to be AWB based on majorization theory (Proposition~\ref{AWP_Majorization}). We have also compared our results on the positivity of the Wigner function with those on the positivity of the spherical Glauber-Sudarshan function, which can be equivalently used as a classicality criterion for spin states or a separability criterion for symmetric multiqubit states. The spin-1 and spin-3/2 cases, for which analytical results are known, were closely examined and important differences were highlighted, such as the existence of Wigner-negative absolutely separable states, and, conversely, the existence of entangled absolutely Wigner-positive states. However, a notable observation drawn from our numerics is that the set of SAS states appears to shrink relative to the set of AWP states as $j$ increases, while the set of AWP states itself occupies a progressively smaller volume of the simplex. Further research is needed to explore this behaviour. A related direction for future work could be to explore the ratio of the volume of the AWB polytopes to the volume of the full simplex; this would provide a \textit{global indicator of classicality} like those introduced and studied in Refs.~\cite{Abbasli2020, Abgaryan2020, Abgaryan2021b} particularised to spin systems. Another perspective, as briefly mentioned in Sec.\ \ref{sec:AWPPolytope}, is to apply the techniques presented here to other distinguished quasiprobability distributions. For example, preliminary results suggest that the absolutely Husimi bounded (AHB) polytopes have the same geometry as the simplex, but are simply reduced in size by a factor depending on $Q_{\mathrm{min}}\in[0,\tfrac{1}{2j+1}]$.
Future work could explore this further and investigate its consequences for the geometric measure of entanglement of multiqubit symmetric states. Another idea is to study how these polytopes change with respect to the spherical $s$-ordering parameter (see Eq.~\eqref{sSWkernel}). Finally, given the importance of Wigner negativity in fields like quantum information science, our results shed new and interesting light on its manifestation in spin-$j$ systems, focusing on its relation to purity and entanglement. We believe that this will be relevant for various quantum information processing tasks, in particular those involving the symmetric subspace. \begin{acknowledgments} We would like to thank Yves-Eric Corbisier for his help in creating Fig.~\ref{fig:spin-3/2-simplex} with Blender~\cite{Blender}. Most of the other figures were produced with the package Makie~\cite{Makie}. We would also like to thank V.\ Abgaryan and his colleagues for their correspondence regarding Refs.\ \cite{Abgaryan2021, Abgaryan2020, Abgaryan2021b}. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. Computational resources were provided by the Consortium des Equipements de Calcul Intensif (CECI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No. 2.5020.11. \end{acknowledgments} \appendix \section{Proof of relation \eqref{identity2}} \label{sec:remarkablerelation} We show here that the eigenvalues $\Delta_{m}\equiv \Delta_{j,m}$ of the Wigner kernel \eqref{Wignerkernel} satisfy \begin{equation} \sum_{m=-j}^j \Delta_{m}^2 = 2j+1.
\end{equation} Using the expression \eqref{eq:kernel_eigenvalues} we get \begin{equation} \label{eq:sum_ev_squared} \begin{aligned} \sum_{m=-j}^j\Delta_m^2 = \sum_{m=-j}^j\sum_{L,L'=0}^{2j}&\frac{(2L+1)(2L'+1)}{(2j+1)^2}\\ & \times C_{j,m;L,0}^{j,m}C_{j,m;L',0}^{j,m}. \end{aligned} \end{equation} The Clebsch-Gordan coefficients satisfy the following relations~\cite{Varshalovich_1988} \begin{equation}\label{Clebrule1} C_{a,\alpha;b,\beta}^{c,\gamma}=(-1)^{a-\alpha}\sqrt{\frac{2c+1}{2b+1}}C_{a,\alpha;c,-\gamma}^{b,-\beta} \end{equation} \begin{equation}\label{Clebrule2} \sum_{\alpha,\beta=-j}^j C_{a,\alpha;b,\beta}^{c,\gamma}C_{a,\alpha;b,\beta}^{c',\gamma'} = \delta_{cc'}\delta_{\gamma\gamma'}. \end{equation} Hence, by splitting the sum over $m$ into two \begin{equation} \sum_{m}C_{j,m;L,0}^{j,m}C_{j,m;L',0}^{j,m} = \sum_{m_1,m_2}C_{j,m_1;L,0}^{j,m_2}C_{j,m_1;L',0}^{j,m_2} \end{equation} and using \eqref{Clebrule1} and \eqref{Clebrule2}, we get from \eqref{eq:sum_ev_squared} \begin{equation} \begin{aligned} \sum_{m=-j}^j \Delta_{m}^2 & = \frac{1}{2j+1}\underbrace{\sum_{L=0}^{2j} 2L+1}_{=(2j+1)^2}\\ & = 2j+1. \end{aligned} \end{equation} \section{Barycentric coordinates} \label{sec:barycentricCoordinatesSystem} A mixed spin-$j$ state necessarily has eigenvalues $\lambda_i$ that are positive and add up to one: \begin{equation} \lambda_i\geq 0,\qquad \sum_{i=0}^{2j}\lambda_i=1. \end{equation} This means that every state $\rho$ has its eigenvalue spectrum in the probability simplex of dimension $2j$. For example, for $j=1$, this simplex is a triangle shown in grey in Fig.~\ref{fig:barycentricCoordinatesSystem}. In geometric terms, the spectrum $(\lambda_0,\lambda_1,\lambda_2)$ defines the barycentric coordinates of a point $\boldsymbol{\lambda}$ in the simplex, as it can be considered as the centre of mass of a system of $2j+1$ masses placed on the vertices of the triangle. \begin{figure} \caption{Barycentric and Cartesian coordinate systems of spin state spectra for $j=1$.
The simplex in this case is an equilateral triangle, shown here in gray. The red dot corresponds to a given spectrum and its projections onto the barycentric and Cartesian coordinate systems are indicated by the red and green dashed lines respectively.} \label{fig:barycentricCoordinatesSystem} \end{figure} Let us explain how to go from the barycentric coordinate system to the Cartesian coordinate system spanning the simplex. If we denote by $\{\mathbf{r}^{(i)}: i=0,\ldots,2j\}$ the set of $2j+1$ vertices of the simplex, the Cartesian coordinates of a point $\boldsymbol{\lambda}$ are given by \begin{equation} x_k = \sum_{i=0}^{2j} \lambda_i \, r_{k}^{(i)} \end{equation} where $r_{k}^{(i)}$ is the $k$-th Cartesian coordinate of the $i$-th vertex of the simplex. For $j=1$, the simplex is an equilateral triangle with vertices having Cartesian coordinates $\mathbf{r}_1=(0,0)$, $\mathbf{r}_2=(1,0)$ and $\mathbf{r}_3=(1/2,\sqrt{3}/2)$. For $j=3/2$, it is a regular tetrahedron with vertices having Cartesian coordinates $\mathbf{r}_1=(0,0,0)$, $\mathbf{r}_2=(1,0,0)$, $\mathbf{r}_3=(1/2,\sqrt{3}/2,0)$ and $\mathbf{r}_4=(1/2,(2\sqrt{3})^{-1}, \sqrt{2/3})$. \section{AWP polytope vertices for $j\leq 2$} \label{sec:polytope_coordinates} We give in Table~\ref{table:polytopeVertices} for $j\leq 2$ the spin state spectra associated with the vertices of the minimal AWP polytope, as determined following the procedure explained in Sec.~\ref{subsec:AWPPolytope}. \begin{table}[h!]
\begin{centering} \begin{tabular}{|c|c|} \hline $j$ & Vertices in barycentric coordinates \tabularnewline \hline \hline 1/2 & $\boldsymbol{\lambda}_{\mathrm{v}_1}\approx $ (0.789, 0.211)\tabularnewline \hline 1 & $\boldsymbol{\lambda}_{\mathrm{v}_1} \approx $ (0.423, 0.423, 0.153)\tabularnewline & $\boldsymbol{\lambda}_{\mathrm{v}_2} \approx $ (0.544, 0.228, 0.228)\tabularnewline \hline 3/2 & $\boldsymbol{\lambda}_{\mathrm{v}_1}\approx $ (0.294, 0.294, 0.294, 0.119)\tabularnewline & $\boldsymbol{\lambda}_{\mathrm{v}_2} \approx $ (0.33, 0.33, 0.170, 0.170)\tabularnewline & $\boldsymbol{\lambda}_{\mathrm{v}_3} \approx $ (0.4, 0.2, 0.2, 0.2)\tabularnewline \hline 2 & $\boldsymbol{\lambda}_{\mathrm{v}_1} \approx $ (0.313, 0.172, 0.172, 0.172, 0.172)\tabularnewline & $\boldsymbol{\lambda}_{\mathrm{v}_2} \approx $ (0.266, 0.266, 0.156, 0.156, 0.156)\tabularnewline & $\boldsymbol{\lambda}_{\mathrm{v}_3} \approx $ (0.24, 0.24, 0.24, 0.14, 0.14)\tabularnewline & $\boldsymbol{\lambda}_{\mathrm{v}_4} \approx $ (0.226, 0.226, 0.226, 0.226, 0.097)\tabularnewline \hline \end{tabular} \caption{Barycentric coordinates (corresponding to the eigenspectrum of a mixed spin state) of the vertices of the minimal polytope of AWP states. \label{table:polytopeVertices}} \par\end{centering} \end{table} \end{document}
\begin{document} \title{The chromatic number of heptagraphs\thanks{Partially supported by NSFC projects 11931006 and 12101117, and NSFJS No. BK20200344}} \author{Di Wu$^{1,}$\footnote{Email: [email protected]}, \;\; Baogang Xu$^{1,}$\footnote{Email: [email protected] OR [email protected].}\;\; and \;\; Yian Xu$^{2,}$\footnote{Email: yian$\[email protected]. }\\\\ \small $^1$Institute of Mathematics, School of Mathematical Sciences\\ \small Nanjing Normal University, 1 Wenyuan Road, Nanjing, 210023, China\\ \small $^2$School of Mathematics, Southeast University, 2 SEU Road, Nanjing, 211189, China} \date{} \maketitle \begin{abstract} A hole is an induced cycle of length at least 4. A graph is called a pentagraph if it has no cycles of length 3 or 4 and has no holes of odd length at least 7, and is called a heptagraph if it has no cycles of length less than 7 and has no holes of odd length at least 9. Let $\l\ge 2$ be an integer. The current authors proved that a graph is 4-colorable if it has no cycles of length less than $2\l+1$ and has no holes of odd length at least $2\l+3$. Confirming a conjecture of Plummer and Zha, Chudnovsky and Seymour proved that every pentagraph is 3-colorable. Following their idea, we show that every heptagraph is 3-colorable. \begin{flushleft} {\em Key words and phrases:} heptagraph, odd hole, chromatic number\\ {\em AMS 2000 Subject Classifications:} 05C15, 05C75\\ \end{flushleft} \end{abstract} \section{Introduction} Let $G$ be a graph, and let $u$ and $v$ be two vertices of $G$. We simply write $u\sim v$ if $uv\in E(G)$, and write $u\not\sim v$ if $uv\not\in E(G)$. We use $d_G(u)$ (or simply $d(u)$) to denote the degree of $u$ in $G$, and let $\delta(G)=\min\{d(u):u\in V(G)\}$. Let $S$ be a subset of $V(G)$. We use $G[S]$ to denote the subgraph of $G$ induced by $S$. For two graphs $G$ and $H$, we say that $G$ induces $H$ if $H$ is an induced subgraph of $G$. 
Let $S$ and $T$ be two subsets of $V(G)$, and let $x$ and $y$ be two vertices of $G$. We use $N_S(x)$ to denote the neighbors of $x$ in $S$, and define $N_S(T)=\cup_{x\in T} N_S(x)$ (if $S=V(G)$ then we omit the subscript and simply write $N(x)$ or $N(T)$). An $xy$-path is a path between $x$ and $y$, and an $(S, T)$-path is a path $P$ with $|S\cap V(P)|=|T\cap V(P)|=1$. A cycle on $k$ vertices is denoted by $C_k$. For a path $P$, we use $\l(P)$ and $P^*$ to denote the length and the set of internal vertices of $P$, respectively. If $u, v\in V(P)$, then $P[u, v]$ denotes the segment of $P$ between $u$ and $v$. Let $k$ be a positive integer. A {\em hole} is an induced cycle of length at least 4, a hole of length $k$ is called a $k$-hole, and a $k$-hole is said to be an {\em odd} (resp. {\em even}) hole if $k$ is odd (resp. even). A $k$-{\em coloring} of $G$ is a mapping $c: V(G)\to \{1, 2, \ldots, k\}$ such that $c(u)\neq c(v)$ whenever $u\sim v$ in $G$. The {\em chromatic number} $\chi(G)$ of $G$ is the minimum integer $k$ such that $G$ admits a $k$-coloring. Let $\l\ge 2$ be an integer. Let ${\cal G}_l$ denote the family of graphs which have no cycles of length less than $2\l+1$ and have no odd holes of length at least $2\l+3$. The graphs in ${\cal G}_2$ are called {\em pentagraphs}, and the graphs in ${\cal G}_3$ are called {\em heptagraphs}. A 3-connected graph is said to be {\em internally 4-connected} if every cutset of size 3 is the neighbor set of a vertex of degree 3. Robertson conjectured (see \cite{NPRZ2011}) that the Petersen graph is the only non-bipartite pentagraph which is 3-connected and internally 4-connected. In 2014, Plummer and Zha \cite{MPXZ} presented some counterexamples to Robertson's conjecture, and posed the following new conjecture. \begin{conjecture}\label{conj-P-Z}{\em (\cite{MPXZ})} Every pentagraph is $3$-colorable. \end{conjecture} Xu, Yu and Zha \cite{XYZ2017} proved that every pentagraph is 4-colorable.
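As a side illustration of the colouring definitions above (this sketch is ours and not part of the paper's argument; the construction of the Petersen graph from an outer 5-cycle, five spokes and an inner pentagram is the standard one), one can verify by brute force that the Petersen graph appearing in Robertson's conjecture has chromatic number 3:

```python
from itertools import product

def chromatic_number(vertices, edges, max_k=5):
    """Smallest k for which a proper k-colouring exists (brute force, exponential)."""
    for k in range(1, max_k + 1):
        for colours in product(range(k), repeat=len(vertices)):
            c = dict(zip(vertices, colours))
            if all(c[u] != c[v] for u, v in edges):
                return k
    return None

# Petersen graph: outer 5-cycle, five spokes, and the inner 5-cycle with step 2.
V = list(range(10))
E = ([(i, (i + 1) % 5) for i in range(5)]             # outer cycle
     + [(i, i + 5) for i in range(5)]                 # spokes
     + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])  # inner pentagram

print(chromatic_number(V, E))  # 3
```

The same routine gives a quick sanity check of the 3-colorability statements on any small example, although the search is exponential in the number of vertices and is useful only for graphs of modest size.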
Very recently, Chudnovsky and Seymour \cite{MCPS2022} presented a structural characterization for pentagraphs and confirmed Conjecture~\ref{conj-P-Z}. A $P_3$-{\em cutset} of $G$ is an induced path $P$ on three vertices such that $V(P)$ is a cutset. A {\em parity star}-{\em cutset} is a cutset $X\subseteq V(G)$ such that $X$ has a vertex, say $x$, which is adjacent to every other vertex in $X$, and $G-X$ has a component, say $A$, such that every two vertices in $X\setminus\{x\}$ are joined by an induced even path with interior in $V(A)$. \renewcommand{\baselinestretch}{1} \begin{theorem}\label{theo-1-1}{\em (\cite{MCPS2022})} Let $G$ be a pentagraph which is not the Petersen graph. If $\delta(G)\ge 3$, then $G$ is either bipartite, or admits a $P_3$-cutset or a parity star-cutset. \end{theorem}\renewcommand{\baselinestretch}{1.2} Below is a lemma contained in the proof of \cite[Theorem 1.1]{MCPS2022}. \renewcommand{\baselinestretch}{1} \begin{lemma}\label{lem-critical} {\em (\cite{MCPS2022})} Let $G$ be a pentagraph that is not the Petersen graph. If $\chi(G)=4$ and every proper induced subgraph of $G$ is $3$-colorable, then $G$ has neither $P_3$-cutsets nor parity star-cutsets. \end{lemma}\renewcommand{\baselinestretch}{1.2} As a direct consequence of Theorem~\ref{theo-1-1} and Lemma~\ref{lem-critical}, one can easily verify that every pentagraph is 3-colorable. Thus, Conjecture~\ref{conj-P-Z} is true. By generalizing the result of \cite{XYZ2017}, the current authors \cite{WXX2022} proved that $\chi(G)\le 4$ for each graph $G\in \cup_{\l\ge 2}{\cal G}_l$, and conjectured that $\chi(G)\le 3$ for such graphs. \begin{theorem}\label{theo-1-2-0}{\em (\cite{WXX2022})} All graphs in $\cup_{\l\ge 2}{\cal G}_l$ are $4$-colorable. \end{theorem} In this paper, we prove the conjecture for heptagraphs. \begin{theorem}\label{theo-1-2} Every heptagraph is $3$-colorable.
\end{theorem} We follow the idea of Chudnovsky and Seymour and prove a structural theorem for heptagraphs. \renewcommand{\baselinestretch}{1} \begin{theorem}\label{theo-1-3} Let $G$ be a heptagraph. If $\delta(G)\ge 3$, then $G$ is bipartite, or admits a $P_3$-cutset or a parity star-cutset. \end{theorem}\renewcommand{\baselinestretch}{1.2} A conclusion similar to Lemma~\ref{lem-critical} also holds for graphs in $\cup_{\l\ge 2}{\cal G}_l$. Since its proof is almost the same as that of Lemma~\ref{lem-critical}, we leave the proof to the readers. \renewcommand{\baselinestretch}{1} \begin{lemma}\label{lem-critical-H} Let $G$ be a graph in $\cup_{\l\ge 2}{\cal G}_l$. If $\chi(G)=4$ and every proper induced subgraph of $G$ is $3$-colorable, then $G$ has neither $P_3$-cutsets nor parity star-cutsets. \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent{\bf Assuming Theorem~\ref{theo-1-3}, we can prove Theorem~\ref{theo-1-2}}: Suppose to the contrary that Theorem~\ref{theo-1-2} fails, and let $G$ be a heptagraph with $\chi(G)=4$ such that all proper induced subgraphs of $G$ are 3-colorable. It is certain that $G$ is not bipartite, and $\delta(G)\ge 3$. By Theorem~\ref{theo-1-3}, $G$ must have a $P_3$-cutset or a parity star-cutset, contradicting Lemma~\ref{lem-critical-H}. Therefore, Theorem~\ref{theo-1-2} holds. \rule{4pt}{7pt} In Sections 2 and 3, we discuss the structure of heptagraphs and prove some lemmas. Theorem~\ref{theo-1-3} is proved in Section 4. \section{Subgraphs ${\cal P}$ and ${\cal P}'$} If a cutset is a clique, then we call it a {\em clique cutset}. It is certain that in any triangle-free graph, every clique cutset is a parity star-cutset. Let $H$ be a proper induced subgraph of $G$, let $s, t\in V(H)$ with $s\not\sim t$, and let $P$ be an induced $st$-path such that $\l(P)\ge 3$ and $P^*\subseteq V(G)\setminus V(H)$.
If every vertex of $H-\{s, t\}$ that has a neighbor in $P^*$ is adjacent to both $s$ and $t$, then we call $P$ an $st$-{\em ear} of $H$. \renewcommand{\baselinestretch}{1} \begin{lemma}\label{MCPS-lem-2-1}{\em (\cite[2.1]{MCPS2022})} Let $G$ be a pentagraph without clique cutsets, and let $H$ be a proper induced subgraph of $G$ with $|V(H)|\ge3$. If each vertex of $V(G)\setminus V(H)$ has at most one neighbor in $V(H)$, then there exist nonadjacent vertices $s,t\in V(H)$ such that $H$ has an $st$-ear. \end{lemma}\renewcommand{\baselinestretch}{1.2} A similar conclusion holds for heptagraphs. We omit its proof as it is the same as the proof of Lemma~\ref{MCPS-lem-2-1} in \cite{MCPS2022}. \renewcommand{\baselinestretch}{1} \begin{lemma}\label{lem-2-1} Let $G$ be a heptagraph without clique cutsets, and let $H$ be a proper induced subgraph of $G$ with $|V(H)|\ge3$. If each vertex of $V(G)\setminus V(H)$ has at most one neighbor in $V(H)$, then there exist nonadjacent vertices $s,t\in V(H)$ such that $H$ has an $st$-ear. \end{lemma}\renewcommand{\baselinestretch}{1.2} A {\em big odd hole} is an odd hole of length at least 9, and a {\em short cycle} is a cycle of length at most 6. \begin{figure} \caption{Graphs ${\cal P}^0$ and ${\cal P}^1$.} \label{fig-4} \end{figure} \begin{figure} \caption{Graphs ${\cal P}$ and ${\cal P}'$.} \label{fig-3} \end{figure} Let $C_8=1-2-3-4-5-6-7-8-1$, and $C_{12}=1-2-3-4-5-6-7-8-9-10-11-12-1$. We use ${\cal P}^0$ to denote the graph obtained from $C_8$ by adding four induced paths 1-13-15-5, 2-12-10-6, 3-16-14-7, and 4-11-9-8 (see Figure~\ref{fig-4}$(a)$), use ${\cal P}^1$ to denote the graph obtained from ${\cal P}^0$ by deleting vertices 14 and 16 (see Figure~\ref{fig-4}$(b)$), use ${\cal P}$ to denote the graph obtained from ${\cal P}^1$ by deleting vertices 13 and 15 (see Figure~\ref{fig-3}$(a)$), and use ${\cal P}'$ to denote the graph obtained from $C_{12}$ by adding edges 3-9 and 6-12 (see Figure~\ref{fig-3}$(b)$).
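Since the four graphs above are specified purely by edge lists, their basic properties can be checked mechanically. The following pure-Python sketch (ours, not part of the paper; the helper names are our own) rebuilds ${\cal P}^0$, ${\cal P}^1$, ${\cal P}$ and ${\cal P}'$ from the description above and computes their girths, confirming that none of them contains a short cycle:

```python
from collections import deque

def girth(adj):
    """Shortest cycle length: min over edges uv of 1 + dist(u, v) in G - uv."""
    best = float("inf")
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    for u, v in edges:
        dist = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if {x, y} == {u, v}:
                    continue  # BFS in the graph with the edge uv removed
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        if v in dist:
            best = min(best, dist[v] + 1)
    return best

def graph(edges):
    """Adjacency-set representation from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

# C_8 = 1-2-...-8-1 plus the four induced paths from the text.
C8 = [(i, i % 8 + 1) for i in range(1, 9)]
paths = [(1, 13), (13, 15), (15, 5), (2, 12), (12, 10), (10, 6),
         (3, 16), (16, 14), (14, 7), (4, 11), (11, 9), (9, 8)]
P0 = graph(C8 + paths)                                                      # P^0
P1 = graph([e for e in C8 + paths if set(e).isdisjoint({14, 16})])          # P^1 = P^0 - {14, 16}
P  = graph([e for e in C8 + paths if set(e).isdisjoint({13, 14, 15, 16})])  # P = P^1 - {13, 15}
Pp = graph([(i, i % 12 + 1) for i in range(1, 13)] + [(3, 9), (6, 12)])     # P' = C_12 + {3-9, 6-12}

print([girth(g) for g in (P0, P1, P, Pp)])  # [7, 7, 7, 7]: none has a short cycle
```

The `girth` routine uses the standard observation that the shortest cycle through an edge $uv$ has length $1+\mathrm{dist}_{G-uv}(u,v)$, so minimizing over all edges yields the girth exactly.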
Let $G$ be a non-bipartite heptagraph with $\delta(G)\ge 3$ and without $P_3$-cutsets or parity star-cutsets. We will show that $G$ does not induce ${\cal P}$ or ${\cal P}'$. Firstly, we prove that $G$ does not induce ${\cal P}^0$ or ${\cal P}^1$. \begin{lemma}\label{theo-2-1} Let $G$ be a heptagraph that induces a ${\cal P}^0$. If $G$ has no clique cutsets, then $G={\cal P}^0$. \end{lemma} \noindent {\it Proof. } Let $H$ be an induced subgraph of $G$ isomorphic to ${\cal P}^0$. Since the distance of any two vertices of $H$ is at most four, no vertex in $V(G)\setminus V(H)$ has more than one neighbor in $V(H)$. We may suppose that $G$ has no clique cutsets and $G\ne H$. By Lemma~\ref{lem-2-1}, there are nonadjacent vertices $s, t\in V(H)$ and an $st$-ear $P$ in $H$. We choose $s$ and $t$ minimizing $\l(P)$. By symmetry, we need only consider the cases with $s\in\{1, 8, 9, 13\}$. If $(s, t)=(1,7)$, let $C=1-P-7-6-10-12-2-1$ and $C'=1-P-7-6-5-4-3-2-1$. If $(s, t)=(1, 6)$, let $C=1-P-6-10-12-2-1$ and $C'=1-P-6-5-4-3-2-1$. If $(s, t)=(1,5)$, let $C=1-P-5-6-10-12-2-1$ and $C'=1-P-5-4-3-2-1$. If $(s, t)=(1, 9)$, let $C=1-P-9-11-4-3-2-1$ and $C'=1-P-9-11-4-5-6-10-12-2-1$. If $(s, t)=(1, 14)$, let $C=1-P-14-16-3-2-1$ and $C'=1-P-14-16-3-4-11-9-8-1$. If $(s, t)=(1, 10)$, let $C=1-P-10-6-7-8-1$ and $C'=1-P-10-6-5-4-11-9-8-1$. If $(s, t)=(1, 15)$, let $C=1-P-15-5-4-3-2-1$ and $C'=1-P-15-5-4-11-9-8-1$. One can check that either $C$ or $C'$ is a big odd hole in all cases. Thus we suppose by symmetry that $\{s, t\}\cap \{1, 3, 5, 7\}=\mbox{{\rm \O}}$. If $(s, t)=(8, 6)$, let $C=8-P-6-10-12-2-1-8$ and $C'=8-P-6-10-12-2-3-4-11-9-8$. If $(s, t)=(8, 4)$, let $C=8-P-4-5-6-7-8$ and $C'=8-P-4-3-2-12-10-6-7-8$. If $(s, t)=(8, 13)$, let $C=8-P-13-15-5-6-7-8$ and $C'=8-P-13-15-5-4-11-9-8$. If $(s, t)=(8, 12)$, let $C=8-P-12-10-6-7-8$ and $C'=8-P-12-10-6-5-4-11-9-8$. If $(s, t)=(8, 16)$, let $C=8-P-16-3-2-1-8$ and $C'=8-P-16-3-4-11-9-8$.
If $(s, t)=(8, 11)$, let $C=8-P-11-4-3-2-1-8$ and $C'=8-P-11-4-5-6-10-12-2-1-8$. In all cases, one of $C$ and $C'$ is a big odd hole. We may further suppose, by symmetry, that $\{s, t\}\cap \{1, 2, 3, 4, 5, 6, 7, 8\}=\mbox{{\rm \O}}$. If $(s, t)=(13, 9)$, let $C=13-P-9-11-4-5-15-13$ and $C'=13-P-9-8-7-6-5-15-13$. If $(s, t)=(13, 14)$, let $C=13-P-14-16-3-2-1-13$ and $C'=13-P-14-16-3-4-5-15-13$. If $(s, t)=(13, 10)$, let $C=13-P-10-6-7-8-1-13$ and $C'=13-P-10-6-5-4-11-9-8-1-13$. In each case, one of $C$ and $C'$ must be a big odd hole. By symmetry, it remains to consider $(s, t)=(9, 12)$. Now, $\l(P)\ge3$, and either $9-P-12-10-6-7-8-9$ or $9-P-12-10-6-5-4-11-9$ is a big odd hole. This proves Lemma~\ref{theo-2-1}. \rule{4pt}{7pt} Notice that ${\cal P}^1$ is an induced subgraph of ${\cal P}^0$. With almost the same arguments, one can check that the following lemma holds. \renewcommand{\baselinestretch}{1} \begin{lemma}\label{theo-2-2} Let $G$ be a heptagraph with no clique cutsets. If $G$ induces a ${\cal P}^1$, then $G\in \{{\cal P}^0, {\cal P}^1\}$. \end{lemma} \renewcommand{\baselinestretch}{1.2} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{theo-1-4} Let $G$ be a heptagraph with no clique cutsets. If $G$ induces a ${\cal P}$, then $\delta(G)\le 2$. \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } Let $H$ be an induced subgraph of $G$ isomorphic to ${\cal P}$. Suppose that $\delta(G)\ge 3$ and $G$ has no clique cutsets. Since the distance of any two vertices of $H$ is at most four, no vertex in $V(G)\setminus V(H)$ has more than one neighbor in $V(H)$. By Lemma~\ref{lem-2-1}, there exist nonadjacent vertices $s,t\in V(H)$ such that $H$ has an $st$-ear $P$. If $(s, t)\in\{(1, 5), (3, 7)\}$ and $\l(P)=3$, then $G[V(H)\cup V(P)]\in \{{\cal P}^0, {\cal P}^1\}$. By Lemmas~\ref{theo-2-1} and \ref{theo-2-2}, we have that $G\in \{{\cal P}^0, {\cal P}^1\}$, a contradiction.
Suppose that either $(s, t)\not\in \{(1, 5), (3, 7)\}$ or $\l(P)>3$. Notice that ${\cal P}$ is an induced subgraph of ${\cal P}^0$. With exactly the same arguments as those used in the proof of Lemma~\ref{theo-2-1}, we can always find a big odd hole in $G$. Therefore, Lemma~\ref{theo-1-4} holds. \rule{4pt}{7pt} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{theo-1-5} Let $G$ be a heptagraph that induces a ${\cal P}'$. If $\delta(G)\ge 3$, then $G$ admits a clique cutset or a $P_3$-cutset. \end{lemma} \renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } Let $H$ be an induced subgraph of $G$ isomorphic to ${\cal P}'$. Since the distance of any two vertices of $H$ is at most four, no vertex in $V(G)\setminus V(H)$ has more than one neighbor in $V(H)$. We assume that $\delta(G)\ge 3$ and $G$ does not admit a clique cutset or a $P_3$-cutset. It is certain that $G\ne H$. By Lemma~\ref{theo-1-4}, we may assume that $G$ induces no ${\cal P}$. Let us call the four sets $\{2,3,4\}, \{5,6,7\}, \{8,9,10\}$, and $\{1,11,12\}$ the {\em sides} of $H$. Since $G$ has no $P_3$-cutsets, there is a connected subgraph $F$ of $G - V(H)$ such that $N_H(F)$ is not a subset of any side of $H$. Choose such an $F$ with $|V(F)|$ minimal. Since $N_H(F)$ cannot be a clique, there exist nonadjacent vertices $s, t\in N_H(F)$ and an induced $st$-path $P$ with $P^*\subseteq V(F)$. We choose $P$ minimizing $\l(P)$; then every vertex of $H-\{s, t\}$ with a neighbor in $P^*$ is adjacent to both $s$ and $t$. By symmetry, we may assume that $s\in\{1, 12\}$. Firstly, suppose that $s=1$ and $t\ne 11$. If $(s, t)=(1, 8)$ and $\l(P)=3$, then $V(P)\cup\{2,3,4,5,6,7,9,12\}$ induces a ${\cal P}$ in $G$, contradicting Lemma~\ref{theo-1-4}. If $(s, t)=(1, 8)$ and $\l(P)>3$, then either $1-P-8-9-10-11-12-1$ or $1-P-8-9-3-2-1$ is a big odd hole, contradicting the choice of $G$. Thus, we assume that $t\ne 8$. If $(s, t)=(1, 10)$, let $C=1-P-10-9-3-2-1$ and $C'=1-P-10-9-3-4-5-6-12-1$.
If $(s, t)=(1, 9)$, let $C=1-P-9-10-11-12-1$ and $C'=1-P-9-8-7-6-12-1$. If $(s, t)=(1, 7)$, let $C=1-P-7-8-9-3-2-1$ and $C'=1-P-7-6-5-4-3-2-1$. If $(s, t)=(1, 6)$, let $C=1-P-6-5-4-3-2-1$ and $C'=1-P-6-7-8-9-3-2-1$. If $(s, t)=(1, 5)$, let $C=1-P-5-4-3-2-1$ and $C'=1-P-5-4-3-9-10-11-12-1$. The case $(s, t)=(1, 4)$ can be treated with an argument similar to that for $(s, t)=(1, 10)$. If $(s, t)=(1, 3)$, let $C=1-P-3-9-10-11-12-1$ and $C'=1-P-3-9-8-7-6-12-1$. One of $C$ and $C'$ must be a big odd hole. Suppose that $s=12$. The cases where $t\in \{2, 4, 5, 7, 8, 10\}$ can be treated similarly to the above. By symmetry it remains to deal with $(s, t)=(12, 9)$. Now, we have $\l(P)\ge4$, and so either $12-P-9-3-2-1-12$ or $12-P-9-3-4-5-6-12$ is a big odd hole. Now, suppose that $(s, t)=(1, 11)$. Except for 1, 11 and possibly 12, no vertex of $H$ may have neighbors in $P^*$. Since $N_H(F)\not\subseteq\{1, 11, 12\}$, there is a vertex, say $u$, in $V(H)\setminus\{1, 11, 12\}$ and an induced $(u, P^*)$-path $Q$ with interior in $V(F)$. We choose such a $Q$ minimizing $\l(Q)$. By symmetry, we may assume that $u\in\{6, 7, 8, 9, 10\}$. Let $R$ be an induced $1u$-path with $R^*\subseteq P^*\cup Q^*$. It is certain that no vertex of $H - \{1, 11, 12, u\}$ may have neighbors in $R^*$. If $u=8$, then $\l(R)\ge 4$, as otherwise $G[V(H)\cup R^*\setminus\{10, 11\}]={\cal P}$; let $C=1-R-8-9-3-2-1$ and $C'=1-R-8-7-6-5-4-3-2-1$. If $u=7$, let $C=1-R-7-8-9-3-2-1$ and $C'=1-R-7-6-5-4-3-2-1$. If $u=6$, let $C=1-R-6-5-4-3-2-1$ and $C'=1-R-6-7-8-9-3-2-1$. Each case leads to a contradiction as one of $C$ and $C'$ must be a big odd hole. Thus, we have that $u\in\{9, 10\}$. If $u=9$ and $N_{R^*}(12)\neq\mbox{{\rm \O}}$, let $R'$ be the shortest induced $(9, 12)$-path with interior in $R^*$; then either $12-R'-9-3-2-1-12$ or $12-R'-9-3-4-5-6-12$ is a big odd hole. If $u=9$ and $N_{R^*}(12)=\mbox{{\rm \O}}$, then either $1-R-9-8-7-6-12-1$ or $1-R-9-3-4-5-6-12-1$ is a big odd hole.
If $u=10$ and $N_{R^*}(12)\neq\mbox{{\rm \O}}$, let $R'$ be the shortest induced $(10, 12)$-path with interior in $R^*$; then either $12-R'-10-9-3-2-1-12$ or $12-R'-10-9-3-4-5-6-12$ is a big odd hole. If $u=10$ and $N_{R^*}(12)=\mbox{{\rm \O}}$, then either $1-R-10-9-3-2-1$ or $1-R-10-9-3-4-5-6-12-1$ is a big odd hole. This proves Lemma~\ref{theo-1-5}. \rule{4pt}{7pt} \section{Jumps} In this section, we always let $G$ be a heptagraph, and let $C=c_1\cdots c_7c_1$ be a 7-hole in $G$. Since $G$ has no short cycles, we have that every vertex in $V(G)\setminus V(C)$ has at most one neighbor in $V(C)$. For two disjoint subsets $X$ and $Y$ of $V(G)$, we say that $X$ is {\em anticomplete} to $Y$ if no vertex of $X$ has neighbors in $Y$. Let $sxyt$ be a segment of $C$. An induced $st$-path $P$ with $P^*\subseteq V(G)\setminus V(C)$ is called an {\em $st$ $e$-jump across} $xy$. If $P$ is an $st$ $e$-jump across $xy$ such that $V(C)\setminus\{s, t, x, y\}$ is anticomplete to $P^*$, then we call $P$ a {\em local $e$-jump}. A local $e$-jump of length four is called a {\em short $e$-jump}. Let $sct$ be a segment of $C$. An induced $st$-path $P$ with $P^*\subseteq V(G)\setminus V(C)$ is called an {\em $st$ $v$-jump across} $c$. If $P$ is an $st$ $v$-jump across $c$ such that $V(C)\setminus\{c, s, t\}$ is anticomplete to $P^*$, then we call $P$ a {\em local $v$-jump}. A local $v$-jump of length five is called a {\em short $v$-jump}. All $v$-jumps and $e$-jumps of $C$ are referred to as {\em jumps} of $C$. A {\em non-local jump} is one which is not local. Clearly, \renewcommand{\baselinestretch}{1} \begin{itemize} \item all $e$-jumps have length at least four, and all $v$-jumps have length at least five, \item a jump $P$ is short if and only if $V(C)\setminus V(P)$ is anticomplete to $P^*$, and \item each local $e$-jump (resp. $v$-jump) has even (resp. odd) length.
\end{itemize}\renewcommand{\baselinestretch}{1.2} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{coro-2-1} Suppose that, for $1\le i,j\le 7$, $C$ has a short $v$-jump across $c_i$ and a short $v$-jump across $c_j$ such that $d_C(c_i, c_j)\in \{2, 3\}$. Then $G$ induces a ${\cal P}$. \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } We first discuss the case where $d_C(c_i,c_j)=2$. Without loss of generality, suppose that $i=2$ and $j=7$. Let $P_1=c_1a_1a_2a_3a_4c_3$ and $P_2=c_1b_1b_2b_3b_4c_6$. To avoid a big odd hole on $V(P_1)\cup V(P_2)\cup \{c_4,c_5\}$, there must exist $u\in P_1^*$ and $v\in P_2^*$ such that $u\sim v$. If $u=a_4$, then $a_4b_4c_6c_5c_4c_3a_4$ is a 6-hole when $v=b_4$, $V(C)\cup\{b_1,b_2,b_3,b_4,a_4\}$ induces a ${\cal P}$ when $v=b_3$, $a_4b_2b_1c_1c_2c_3a_4$ is a 6-hole when $v=b_2$, and $a_4b_1c_1c_2c_3a_4$ is a 5-hole when $v=b_1$. Thus, we assume that $u\ne a_4$, and $v\ne b_4$ by symmetry. If $u=a_3$, then $a_3b_3b_4c_6c_7c_1c_2c_3a_4a_3$ is a 9-hole when $v=b_3$, $a_3b_2b_3b_4c_6c_5c_4c_3a_4a_3$ is a 9-hole when $v=b_2$, and $a_3b_1c_1c_2c_3a_4a_3$ is a 6-hole when $v=b_1$. Thus, $u\ne a_3$, and $v\ne b_3$ by symmetry. Consequently, we have that $u\notin \{a_3,a_4\}$ and $v\notin \{b_3,b_4\}$. If $u=a_2$, then $a_2b_2b_3b_4c_6c_7c_1c_2c_3a_4a_3a_2$ is an 11-hole when $v=b_2$, and $a_2b_1b_2b_3b_4c_6c_5c_4c_3a_4a_3a_2$ is an 11-hole when $v=b_1$. If $(u,v)=(a_1,b_1)$, then $a_1=b_1$ to avoid a triangle $c_1a_1b_1c_1$, which implies an 11-hole $a_1b_2b_3b_4c_6c_5c_4c_3a_4a_3a_2a_1$. Therefore, the lemma holds if $d_C(c_i,c_j)=2$. Now, suppose $d_C(c_i,c_j)=3$. Without loss of generality, suppose that $i=3$ and $j=7$. Let $P_1=c_2a_1a_2a_3a_4c_4$ and $P_2=c_1b_1b_2b_3b_4c_6$. To avoid a 13-hole, there must exist $u\in P_1^*$ and $v\in P_2^*$ with $u\sim v$.
If $u=a_4$, then $a_4b_4c_6c_5c_4a_4$ is a 5-hole when $v=b_4$, $a_4b_3b_4c_6c_5c_4a_4$ is a 6-hole when $v=b_3$, $V(C)\cup\{b_4,b_3,b_2,b_1,a_4\}$ induces a ${\cal P}$ when $v=b_2$, and $a_4b_1c_1c_2c_3c_4a_4$ is a 6-hole when $v=b_1$. Thus, we assume that $u\ne a_4$, and $v\ne b_4$ by symmetry. If $u=a_3$, then $a_3b_3b_4c_6c_7c_1c_2a_1a_2a_3$ is a 9-hole when $v=b_3$, $a_3b_2b_3b_4c_6c_7c_1c_2c_3c_4a_4a_3$ is an 11-hole when $v=b_2$, and $a_3b_1c_1c_2a_1a_2a_3$ is a 6-hole when $v=b_1$. Now, suppose that $u\ne a_3$ and $v\ne b_3$; then $u\sim v$ implies a short cycle in $G[\{c_1,c_2,a_1,a_2,b_1,b_2\}]$. Therefore, Lemma~\ref{coro-2-1} holds. \rule{4pt}{7pt} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{coro-2-2} Let $P_1$ be a short $v$-jump, and $P_2$ be a short $e$-jump of $C$. If $P_1$ and $P_2$ share exactly one common end, and the other ends of them are not adjacent, then $G$ induces a ${\cal P}$. \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } Without loss of generality, suppose that $P_1=c_1a_1a_2a_3a_4c_3$ is a short $v$-jump across $c_2$, and $P_2=c_1b_1b_2b_3c_5$ is a short $e$-jump across $c_6c_7$. To avoid an 11-hole on $V(P_1)\cup V(P_2)\cup \{c_4\}$, there must exist $u\in P_1^*$ and $v\in P_2^*$ with $u\sim v$. To avoid a short cycle, $u$ cannot be $a_4$. If $u=a_3$, then $a_3b_3c_5c_4c_3a_4a_3$ is a 6-hole when $v=b_3$, $G[V(C)\cup\{a_4,a_3,a_1,b_2,b_3\}]={\cal P}$ when $v=b_2$, and $a_3b_1c_1c_2c_3a_4a_3$ is a 6-hole when $v=b_1$. If $u=a_2$, then $a_2b_2b_3c_5c_6c_7c_1c_2c_3a_4a_3a_2$ is an 11-hole when $v=b_2$, and $a_2b_1b_2b_3c_5c_4c_3a_4a_3a_2$ is a 9-hole when $v=b_1$. If $u=a_1$, then $a_1b_3c_5c_6c_7c_1a_1$ is a 6-hole when $v=b_3$, and $a_1b_2b_3c_5c_4c_3a_4a_3a_2a_1$ is a 9-hole when $v=b_2$. If $(u,v)=(a_1,b_1)$, then $a_1=b_1$ to avoid a triangle $c_1a_1b_1c_1$, and so $a_1b_2b_3c_5c_4c_3a_4a_3a_2a_1$ is a 9-hole. Therefore, Lemma~\ref{coro-2-2} holds.
\rule{4pt}{7pt} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{coro-2-3} If $C$ has two short $e$-jumps sharing exactly one common end, then $G$ induces a ${\cal P}'$. \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } Without loss of generality, suppose that $P_1=c_1a_1a_2a_3c_4$ and $P_2=c_1b_1b_2b_3c_5$ are two short $e$-jumps. To avoid a 9-hole on $V(P_1)\cup V(P_2)$, there must exist $u\in P_1^*$ and $v\in P_2^*$ with $u\sim v$. If $u=a_3$ or $v=b_3$, then a short cycle occurs. Thus, we have that $u\ne a_3$ and $v\ne b_3$. If $u=a_2$, then we have a 6-hole $a_2b_2b_3c_5c_4a_3a_2$ when $v=b_2$, and an induced ${\cal P}'$ on $V(C)\cup\{b_1,b_2,b_3,a_2,a_3\}$ when $v=b_1$. By symmetry, the same contradiction occurs when $(u, v)=(a_1, b_2)$. If $(u,v)=(a_1,b_1)$, then $a_1=b_1$ to avoid a triangle $c_1a_1b_1c_1$, and so $G[V(C)\cup\{a_1,a_2,a_3,b_2,b_3\}]={\cal P}'$. This proves Lemma~\ref{coro-2-3}. \rule{4pt}{7pt} \renewcommand{\baselinestretch}{1} We say that \begin{itemize} \item $C$ is of {\em type $1$} if it has two local $v$-jumps sharing exactly one common end, \item $C$ is of {\em type $2$} if $C$ is not of {\em type $1$}, and has a local $v$-jump $P_1$ and a local $e$-jump $P_2$ such that $P_1$ and $P_2$ share exactly one common end and the other ends of them are not adjacent, and \item $C$ is of {\em type $3$} if $C$ is not of {\em type $1$} or {\em type $2$}, and has two local $e$-jumps sharing exactly one common end. \end{itemize}\renewcommand{\baselinestretch}{1.2} In what follows, sums of subscripts are taken modulo 7, and we set $7+1\equiv 1$. \renewcommand{\baselinestretch}{1} \begin{lemma}\label{lem-3-1} Suppose that $G$ induces no ${\cal P}$ and ${\cal P}'$, and suppose that $C$ is of type $1$ with two local $v$-jumps $P_1$ and $P_2$ that share exactly one common end $c_j$ for some $j\in\{1, 2, \ldots, 7\}$.
Then, at least one of $P_1$ and $P_2$ is not short, and $C$ has a short jump $T$ with interior in $P_1^*\cup P_2^*$ such that \begin{itemize} \item [$(a)$] $T$ is a $v$-jump across $c_j$, or an $e$-jump across $c_{j-1}c_j$ or $c_jc_{j+1}$, and \item [$(b)$] neither of $P_1$ and $P_2$ is short if $T$ is a short $v$-jump across $c_j$. \end{itemize} \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } It follows directly from Lemma~\ref{coro-2-1} that at least one of $P_1$ and $P_2$ is not short. The statement $(b)$ follows directly from the fact that $G$ has no short cycles. It remains to prove $(a)$. Without loss of generality, suppose that $j=1$, and suppose that for each short jump $Q$ of $C$ with interior in $P_1^*\cup P_2^*$, $Q$ is neither a $v$-jump across $c_1$ nor an $e$-jump across $c_1c_{2}$ or $c_{1}c_7$. We may choose $P_1$ and $P_2$ such that $|P_1^*\cup P_2^*|$ is minimum. Let $P_1=c_1a_1a_2a_3\ldots a_kc_3$ and $P_2=c_1b_1b_2b_3\ldots b_tc_6$. Let $D_1=V(P_1[a_4,a_{k-1}])$ and $D_2=V(P_2[b_4,b_{t-1}])$. \begin{claim}\label{clm-3-1} $D_1\cup \{a_k\}$ is disjoint from and anticomplete to $D_2\cup \{b_t\}$. \end{claim} \noindent {\it Proof. } Since $P_1$ and $P_2$ are both local, we have that $a_k\notin V(P_2)$ and $b_t\not\in V(P_1)$, and so $a_k\not\sim b_t$ to avoid a short cycle. Suppose that the claim is not true, and suppose by symmetry that there is an $(a_k, D_2)$-path in $G[D_1\cup D_2\cup\{a_k\}]$. Then $P_2$ cannot be short, and so $N_{D_2}(c_7)\ne\mbox{{\rm \O}}$. We may choose $P'$ to be a $(c_7, \{c_2,c_3\})$-path with shortest length and interior in $D_1\cup D_2\cup \{a_k\}$. Let $x$ be the end of $P'$ other than $c_7$. It is certain that $V(C)\setminus\{c_7, x\}$ is anticomplete to $P'^*$, which implies that $P'$ is either a short $v$-jump across $c_1$ or a short $e$-jump across $c_1c_2$, contradicting our assumption. This proves Claim~\ref{clm-3-1}.
\rule{4pt}{7pt} Note that both $\l(P_1)$ and $\l(P_2)$ are odd. If $a_3=b_3$, then $G[D_1\cup D_2\cup \{a_k,b_t,a_3,c_3,c_4,c_5,c_6\}]$ is a 7-hole, and so $P_1$ and $P_2$ are both short, contradicting Lemma~\ref{coro-2-1}. Hence, $a_3\ne b_3$. To avoid short cycles, we have $a_3\not\sim b_3$. To avoid big odd holes, we have $\{a_1, a_2\}\cap \{b_1, b_2\}=\mbox{{\rm \O}}$. By the minimality of $|P_1^*\cup P_2^*|$, we have that $P_1^*$ is disjoint from and anticomplete to $P_2^*$. This implies that $G[V(P_1)\cup V(P_2)\cup\{c_4,c_5\}]$ is a big odd hole. This proves Lemma~\ref{lem-3-1}. \rule{4pt}{7pt} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{lem-3-2} Suppose that $G$ induces no ${\cal P}$ and ${\cal P}'$, and suppose that $C$ is of type $2$ with a local $v$-jump $P_1$ and a local $e$-jump $P_2$ such that $P_1$ and $P_2$ share exactly one common end $c_j$ for some $j\in\{1, 2, \ldots, 7\}$. Then, $C$ has a short jump $T$ with interior in $P_1^*\cup P_2^*$ such that \begin{itemize} \item [$(a)$] $T$ is either a $v$-jump across $c_j$, or an $e$-jump across $c_{j-1}c_j$ or $c_jc_{j+1}$, and \item [$(b)$] $P_2$ is not short, and $P_1$ is not short if $T$ is a short $v$-jump across $c_j$. \end{itemize} \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } We only need to prove $(a)$, since the statement $(b)$ follows from $(a)$ directly. Suppose that $(a)$ does not hold. Without loss of generality, we suppose that $j=1$, and suppose that $P_1$ and $P_2$ are chosen with $|P_1^*\cup P_2^*|$ minimum. Let $P_1=c_1a_1a_2\dots a_kc_3$ be a local $v$-jump, and $P_2=c_1b_1b_2\dots b_tc_5$ be a local $e$-jump. Let $D_1=V(P_1[a_3,a_k])$ and $D_2=V(P_2[b_3,b_t])$. Since $C$ is not of type 1, we have that $N_{P_2}(c_6)=\mbox{{\rm \O}}$. Furthermore, we have the following \begin{claim}\label{clm-3-2} $D_1\cup \{a_k\}$ is disjoint from and anticomplete to $D_2\cup \{b_t\}$. \end{claim} \noindent {\it Proof.
} Since both $P_1$ and $P_2$ are local, we have that $a_k\ne b_t$, and so $a_k\not\sim b_t$ to avoid short cycles. If the claim is not true, then $G[D_1\cup D_2\cup \{a_k\}]$ has an $(a_k, D_2)$-path, and so $C$ has a $v$-jump $P'$ across $c_4$. Since $P'$ is not local and $N_{P^*_2}(c_6)=\mbox{{\rm \O}}$, we have that $N_{D_2}(c_7)\ne\mbox{{\rm \O}}$, and we may choose $Q$ to be a $(c_7, \{c_2,c_3\})$-path with shortest length and interior in $D_1\cup D_2\cup \{a_k\}$. Let $x$ be the end of $Q$ other than $c_7$. It is certain that $V(C)\setminus\{c_7, x\}$ is anticomplete to $Q^*$, which implies that $Q$ is either a short $v$-jump across $c_1$ or a short $e$-jump across $c_1c_2$, a contradiction. This proves Claim~\ref{clm-3-2}. With the same arguments as those used in the proof of Lemma~\ref{lem-3-1}, we have that $P_1^*$ is disjoint from and anticomplete to $P_2^*$, which implies a big odd hole on $V(P_1)\cup V(P_2)\cup\{c_4\}$. This proves Lemma~\ref{lem-3-2}. \rule{4pt}{7pt} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{lem-3-3} Suppose that $G$ induces no ${\cal P}$ and ${\cal P}'$, and suppose that $C$ is of type $3$ with two local $e$-jumps $P_1$ and $P_2$ that share exactly one common end $c_j$ for some $j\in\{1, 2, \ldots, 7\}$. Then \begin{itemize} \item [$(a)$] $C$ has a short jump with interior in $P_1^*\cup P_2^*$ which is either a $v$-jump across $c_j$, or an $e$-jump across $c_{j-1}c_j$ or $c_jc_{j+1}$, and \item [$(b)$] neither of $P_1$ and $P_2$ is short. \end{itemize} \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } We only need to prove $(a)$. Suppose to the contrary that $(a)$ is not true. Without loss of generality, suppose that $j=1$ and $P_1$, $P_2$ are chosen such that $|P_1^*\cup P_2^*|$ is minimum. Let $P_1=c_1a_1\dots a_kc_4$ and $P_2=c_1b_1\dots b_tc_5$. Let $D_1=V(P_1[a_2,a_k])$ and $D_2=V(P_2[b_2,b_t])$. Since $C$ is not of type 1 or type 2, we have that $N_{P^*_1}(c_3)=N_{P^*_2}(c_6)=\mbox{{\rm \O}}$.
By Lemma~\ref{coro-2-3}, we have that one of $P_1$ and $P_2$, say $P_2$, is not short. Thus, $N_{P^*_2}(c_7)\neq \mbox{{\rm \O}}$. \begin{claim}\label{clm-3-3} $D_1\cup \{a_k\}$ is disjoint from and anticomplete to $D_2\cup \{b_t\}$. \end{claim} \noindent {\it Proof. } Since both $P_1$ and $P_2$ are local jumps, we have that $a_k\not\in P^*_2$ and $b_t\not\in P^*_1$, and so $a_k\not\sim b_t$ to avoid a 4-cycle $c_4a_kb_tc_5c_4$. Suppose that $N_{P^*_2}(a_k)\ne\mbox{{\rm \O}}$. Choose $x\in N_{P^*_2}(a_k)$ to be a vertex closest to $b_t$. By the minimality of $|P_1^*\cup P_2^*|$, we may assume that $x=a_{k-1}$ and $P_1[c_1, x]=P_2[c_1, x]$. Notice that $N_{P^*_2}(c_7)\ne \mbox{{\rm \O}}$. If $N_{P^*_1}(c_2)\ne \mbox{{\rm \O}}$, we may choose $P$ to be an induced $c_2c_7$-path with $P^*\subseteq P_1^*\cup P_2^*\setminus\{a_1, a_k, b_t\}$; then $P^*$ is anticomplete to $V(C)\setminus\{c_2, c_7\}$, which gives a short jump across $c_1$, a contradiction. Thus, $N_{P^*_1}(c_2)=\mbox{{\rm \O}}$, which implies that $P_1$ is a short jump. Hence, $k=3$ and $c_7$ must have neighbors in $P_2[x, b_t]-x$. Consequently, $\l(P_2[x, b_t])$ is odd and at least 7. Thus $c_4c_5b_tP_2[x, b_t]xa_3c_4$ is a big odd hole. Therefore, $N_{P^*_2}(a_k)=\mbox{{\rm \O}}$, and $N_{P^*_1}(b_t)=\mbox{{\rm \O}}$ by symmetry. If the claim is not true, let $i$ and $j$ be the largest indices such that $a_i\sim b_j$; then either $C$ has a short jump across $c_1$ when $P_1$ is not short, or $c_4P_1[c_4, a_i]a_ib_jP_2[b_j, c_5]c_5c_4$ is a big odd hole when $P_1$ is short. This proves Claim~\ref{clm-3-3}. \rule{4pt}{7pt} With arguments similar to those used in the proof of Lemma~\ref{lem-3-1}, we conclude that $P_1^*$ is disjoint from and anticomplete to $P_2^*$, which gives a big odd hole $G[V(P_1)\cup V(P_2)]$. Therefore, Lemma~\ref{lem-3-3} holds. \rule{4pt}{7pt} \renewcommand{\baselinestretch}{1} \begin{lemma}\label{lem-3-4} Let $P$ be a jump of $C$.
Suppose that $G$ induces no ${\cal P}$ and ${\cal P}'$ and $P$ is not a local jump. Then $C$ has a short jump with interior in $P^*$. \end{lemma}\renewcommand{\baselinestretch}{1.2} \noindent {\it Proof. } Without loss of generality, suppose that $P$ is a $v$-jump across $c_2$ or an $e$-jump across $c_2c_3$. If $P$ is a $v$-jump and $N_{P^*}(c_2)\ne\mbox{{\rm \O}}$, then let $Q$ be a $(c_2, \{c_4, c_5, c_6, c_7\})$-path with shortest length and $Q^*\subseteq P^*$. If $P$ is an $e$-jump and $N_{P^*}(c_2)\cup N_{P^*}(c_{3})\ne\mbox{{\rm \O}}$, then let $Q$ be a $(\{c_2, c_3\}, \{c_5, c_6, c_7\})$-path with shortest length and $Q^*\subseteq P^*$. It is easy to verify that $Q$ must be a short jump in both cases. Now suppose that $N_{P^*}(c_2)=\mbox{{\rm \O}}$ when $P$ is a $v$-jump, and $N_{P^*}(c_2)\cup N_{P^*}(c_{3})=\mbox{{\rm \O}}$ when $P$ is an $e$-jump. We only prove the case where $P$ is a $v$-jump; the case where $P$ is an $e$-jump can be treated with almost the same arguments. Suppose to the contrary that the lemma is not true. Firstly, we show that \begin{equation}\label{eqa-nonlocal-c4-0} N_{P^*}(c_4)=\mbox{{\rm \O}}. \end{equation} Suppose that $N_{P^*}(c_4)\ne\mbox{{\rm \O}}$. Let $Q$ be the shortest $(c_1, c_4)$-path with $Q^*\subseteq P^*$. Then $N_{Q^*}(c_2)=N_{Q^*}(c_{3})=\mbox{{\rm \O}}$ and $N_{Q^*}(\{c_5, c_6, c_7\})\ne\mbox{{\rm \O}}$. If $N_{Q^*}(c_6)\ne \mbox{{\rm \O}}$, then let $Q_{1, 6}$ be the shortest $c_1c_6$-path and $Q_{4, 6}$ be the shortest $c_4c_6$-path, both with interior in $Q^*$. If $Q_{1, 6}$ and $Q_{4, 6}$ are both local, then by applying Lemma~\ref{lem-3-1} to $Q_{1, 6}$ and $Q_{4, 6}$, we can find a short jump as required. Thus by symmetry we assume that $Q_{1, 6}$ is not local. Then, $N_{Q^*_{1, 6}}(c_5)\ne\mbox{{\rm \O}}$. Thus, either $C$ has a short $e$-jump across $c_6c_7$ when $N_{Q^*_{1, 6}}(c_7)=\mbox{{\rm \O}}$, or $C$ has a short $v$-jump across $c_6$ when $N_{Q^*_{1, 6}}(c_7)\ne \mbox{{\rm \O}}$.
This shows that $N_{Q^*}(c_6)=\mbox{{\rm \O}}$. If $N_{Q^*}(c_5)\ne \mbox{{\rm \O}}$ and $N_{Q^*}(c_7)\ne \mbox{{\rm \O}}$, then the shortest $c_5c_7$-path, with interior in $Q^*$, is a short $v$-jump as required. Otherwise, we may assume by symmetry that $N_{Q^*}(c_5)\ne \mbox{{\rm \O}}$ and $N_{Q^*}(c_7)=\mbox{{\rm \O}}$; then the shortest $c_1c_5$-path, with interior in $Q^*$, is a short $e$-jump as required. Therefore, (\ref{eqa-nonlocal-c4-0}) holds. By symmetry, we may suppose that $N_{P^*}(c_4)=\mbox{{\rm \O}}$ and $N_{P^*}(c_7)=\mbox{{\rm \O}}$. Thus, a $(\{c_1, c_3\}, \{c_5, c_6\})$-path, with shortest length and interior in $P^*$, is a short jump as required. This proves Lemma~\ref{lem-3-4}. \rule{4pt}{7pt} \section{Proof of Theorem~\ref{theo-1-3}} In this section we prove Theorem~\ref{theo-1-3}. If a heptagraph has no 7-hole, then it is bipartite. Thus we always use $G$ to denote a heptagraph, use $C=c_1\cdots c_7c_1$ to denote a 7-hole in $G$, and let ${\cal X}$ be the set of all vertices which are in the interior of some short jumps of $C$. For two integers $i$ and $j$ with $1\le i<j\le 7$, we use $X_{i, j}$ to denote the set of all vertices which are in the interior of some short jumps joining $c_i$ and $c_j$. The proof of Theorem~\ref{theo-1-3} is divided into several lemmas. By Lemma~\ref{lem-2-1}, we have that $C$ must have some local jumps. We say that two local jumps are {\em equivalent} if they have the same ends. We start with the case where all local jumps of $C$ are equivalent. After that, we discuss the cases where $C$ is of type $i$ for some $i\in\{1, 2, 3\}$. Finally, we consider the case where $C$ has two kinds of equivalent local jumps and is not of type $i$ for any $i$. In the proof of each lemma, we will choose a subset ${\cal D}$ of $V(G)$ which is disjoint from $V(C)\cup {\cal X}$, and call a short jump {\em bad} if it has some interior vertex in ${\cal D}$.
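The repeated ``big odd hole'' contradictions in this section all rest on the same parity bookkeeping: an induced path between two vertices of $C$, closed up by an arc of $C$ between its ends, yields a cycle whose length is the sum of the two lengths, and in a heptagraph no chordless odd cycle longer than 7 can exist. The following toy sketch (not part of the paper; the function names and the reduction to two lengths are our own simplifications) records this arithmetic.

```python
# Toy sketch (not from the paper) of the parity bookkeeping behind the
# "big odd hole" contradictions.  A jump P between two vertices of the
# 7-hole C, together with an arc of C joining its ends, closes into a
# cycle of length len(P) + len(arc).  In a heptagraph every odd hole has
# length exactly 7, so a chordless combined cycle that is odd and longer
# than 7 (a "big odd hole") yields a contradiction.

def combined_cycle_length(jump_length: int, arc_length: int) -> int:
    """Length of the cycle formed by a jump and a C-arc sharing both ends."""
    return jump_length + arc_length

def is_big_odd_hole(jump_length: int, arc_length: int) -> bool:
    """True when the combined cycle is odd and longer than 7.

    Assumes the combined cycle is chordless, which the surrounding
    proofs establish separately in each case.
    """
    n = combined_cycle_length(jump_length, arc_length)
    return n % 2 == 1 and n > 7

# Earlier in this section, a subpath of odd length at least 7 is closed up
# through two consecutive cycle vertices by an arc of length 4, giving an
# odd cycle of length at least 11: a big odd hole.
assert is_big_odd_hole(7, 4)
# A jump of length 5 closed by an arc of length 2 gives a 7-cycle, which
# is an odd hole but not a big one.
assert not is_big_odd_hole(5, 2)
```

The case analyses in the lemmas below can be read as arranging, again and again, for one of these two lengths to have the parity that forces an odd combined cycle longer than 7.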
It is certain that \begin{equation}\label{eqa-3-3-1} \mbox{$C$ has no bad jumps.} \end{equation} We always use ${\cal N}$ to denote the set of vertices in $V(C)\cup {\cal X}$ that have neighbors in ${\cal D}$. In the proofs, Lemmas~\ref{lem-3-1}, \ref{lem-3-2}, \ref{lem-3-3} and \ref{lem-3-4} will be cited frequently. We use Lemma~\ref{lem-3-4}($P$) to denote the set of short jumps obtained by applying Lemma~\ref{lem-3-4} to a jump $P$, and use Lemma~\ref{lem-3-1}$(P, Q)$ to denote the set of short jumps obtained by applying Lemma~\ref{lem-3-1} to local $v$-jumps $P$ and $Q$ which share exactly one end. Similarly, we define Lemma~\ref{lem-3-2}$(P, Q)$ and Lemma~\ref{lem-3-3}$(P, Q)$. Since $G$ has no triangles, we have that each clique cutset is a single vertex or the two ends of an edge, which is a parity star-cutset. In the rest of the paper, we always choose $G$ to be a heptagraph such that \renewcommand{\baselinestretch}{1} \begin{itemize} \item $\delta(G)\ge 3$, $G$ induces neither ${\cal P}$ nor ${\cal P}'$, and $G$ has no clique cutsets and no $P_3$-cutsets. \end{itemize}\renewcommand{\baselinestretch}{1.2} \begin{lemma}\label{lem-unique-local} Suppose that all local jumps of $C$ are equivalent. Then $G$ admits a parity star-cutset. \end{lemma} \noindent {\it Proof. } Let $P$ be a local jump of the shortest length. We first suppose that $P$ is a local $v$-jump across $c_2$. Then ${\cal X}=X_{1, 3}$. Since $N_{\cal X}(c_7)=\mbox{{\rm \O}}$ and $d(c_7)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_7)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) holds. Let $X_1=N(c_1)\cap X_{1, 3}$ and $X_3=X_{1, 3}\setminus X_1$. Suppose that $(X_3\cup\{c_3\})\cap {\cal N}\ne\mbox{{\rm \O}}$. Let $Q_{3, 7}$ be a $c_3c_7$-path with shortest length and $Q^*_{3, 7}\subseteq {\cal D}\cup X_3$.
Since $Q_{3, 7}$ is not a local jump, we have that $N_{Q^*_{3, 7}}(\{c_4, c_5, c_6\})\ne \mbox{{\rm \O}}$, and so Lemma~\ref{lem-3-4}($Q_{3, 7}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Therefore, $(X_3\cup\{c_3\})\cap {\cal N}=\mbox{{\rm \O}}$. Suppose that $c_4\in {\cal N}$. Let $Q_{4, 7}$ be a $c_4c_7$-path with shortest length and interior in ${\cal D}$. Since $Q_{4, 7}$ is not a local jump, we have that Lemma~\ref{lem-3-4}($Q_{4, 7}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Therefore, $c_4\not\in {\cal N}$. With a similar argument we can show that $c_5\not\in {\cal N}$ and $c_6\not\in {\cal N}$. Thus ${\cal N}\subseteq X_1\cup\{c_1, c_2, c_7\}$. Since $G$ has no $P_3$-cutsets, we have that ${\cal N}\cap X_1\ne\mbox{{\rm \O}}$, and we choose $x_1\in {\cal N}\cap X_1$. If $c_2\in {\cal N}$, then the shortest $c_2c_7$-path, with interior in ${\cal D}\cup\{x_1\}$, is a local jump, a contradiction. Therefore, ${\cal N}\subseteq X_1\cup \{c_1,c_7\}$. Since every two vertices in $X_1\cup\{c_7\}$ are joined by an induced path of length six or eight with interior in $X_3\cup\{c_3, c_4,c_5,c_6\}$, we have that $X_1\cup\{c_1,c_7\}$ is a parity star-cutset. Next, we suppose that $P$ is a local $e$-jump across $c_2c_3$. Then, ${\cal X}=X_{1, 4}$. Let $X_1=N(c_1)\cap X_{1, 4}$ and $X_4=X_{1, 4}\setminus X_1$. Since all local jumps of $C$ are equivalent to $P$, we have that $N_{{\cal X}}(\{c_2, c_3, c_5, c_6, c_7\})=\mbox{{\rm \O}}$. Since $d(c_6)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_6)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Thus (\ref{eqa-3-3-1}) still holds. We claim that \begin{equation}\label{eqa-c1c2c3c4-N} (X_4\cup\{c_4\})\cap {\cal N}=(X_1\cup\{c_1\})\cap {\cal N}=\{c_2, c_3\}\cap {\cal N}=\mbox{{\rm \O}}. \end{equation} Suppose that $(X_4\cup\{c_4\})\cap {\cal N}\ne\mbox{{\rm \O}}$.
Let $Q_{4, 6}$ be a $c_4c_6$-path with shortest length and $Q^*_{4, 6}\subseteq {\cal D}\cup X_4$. Since $Q_{4, 6}$ is not a local jump, we have that $N_{Q^*_{4, 6}}(\{c_1, c_2, c_3,c_7\})\ne\mbox{{\rm \O}}$, and so Lemma~\ref{lem-3-4}($Q_{4, 6}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Hence, $(X_4\cup\{c_4\})\cap {\cal N}=\mbox{{\rm \O}}$, and $(X_1\cup\{c_1\})\cap {\cal N}=\mbox{{\rm \O}}$ by symmetry. If $c_3\in {\cal N}$, let $Q_{3, 6}$ be a $c_3c_6$-path with shortest length and $Q^*_{3, 6}\subseteq {\cal D}$; then $Q_{3, 6}$ is not a local jump, and $N_{Q^*_{3, 6}}(\{c_1, c_2, c_7\})\ne\mbox{{\rm \O}}$. Consequently, $C$ has a bad jump in Lemma~\ref{lem-3-4}($Q_{3, 6}$). Therefore, $c_3\not\in {\cal N}$, and $c_2\not\in {\cal N}$ by symmetry. This proves (\ref{eqa-c1c2c3c4-N}). By (\ref{eqa-c1c2c3c4-N}), ${\cal N}\subseteq \{c_5,c_6,c_7\}$, and so $\{c_5, c_6, c_7\}$ is a $P_3$-cutset of $G$, contradicting the choice of $G$. This completes the proof of Lemma~\ref{lem-unique-local}. \rule{4pt}{7pt} Now suppose that $C$ has at least two kinds of equivalent local jumps. If $C$ is of type $i$ for some $i\in \{1, 2, 3\}$, then we always choose $j$ and the two local jumps $P_1$ and $P_2$ such that $P_1$ and $P_2$ share $c_j$ and $|P_1^*\cup P_2^*|$ is minimum. Without loss of generality, suppose that $j=1$. \begin{lemma}\label{theo-3-1} Suppose that $C$ is of type $1$. Then $G$ admits a parity star-cutset. \end{lemma} \noindent {\it Proof. } Since $P_1$ and $P_2$ are local jumps sharing $c_1$, we have that Lemma~\ref{lem-3-1}($P_1, P_2$) has a short jump $T$, with $T^*\subseteq P_1^*\cup P_2^*$, which is a $v$-jump across $c_1$, or an $e$-jump across $c_1c_2$ or $c_1c_7$. Firstly, we prove \begin{claim}\label{clm-3-4} Lemma~{\em \ref{theo-3-1}} holds if $T$ is a short $v$-jump across $c_1$. \end{claim} \noindent {\it Proof. } Suppose that $T$ is a $v$-jump across $c_1$. It is certain that neither $P_1$ nor $P_2$ is short.
By the minimality of $|P_1^*\cup P_2^*|$, $C$ has no short $v$-jumps across $c_2$ or $c_7$. Since the two ends of any jump of $C$ are not adjacent, by Lemmas~\ref{coro-2-1}, \ref{coro-2-2}, \ref{lem-3-1}, and \ref{lem-3-2}, we have that no short jumps have end $c_4$ or $c_5$. Thus, ${\cal X}=X_{2,7}\cup X_{2,6}\cup X_{3,7}$, and $c_4$ has no neighbor in ${\cal X}\cup \{c_1,c_2,c_6,c_7\}$ as each vertex in $N_{{\cal X}}(c_4)$ provides us with a short jump starting from $c_4$. Since $d(c_4)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_4)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. It is certain that (\ref{eqa-3-3-1}) holds. Let $X_3=X_{3,7}\cap N(c_3)$. First we claim that \begin{equation}\label{eqa-3-4} \mbox{$((X_{2,6}\cap N(c_2))\cup (X_{2,7}\cap N(c_2))\cup\{c_2\})\cap {\cal N}=\mbox{{\rm \O}}$.} \end{equation} Suppose that (\ref{eqa-3-4}) is not true. Let $Q_{2, 4}$ be a $c_2c_4$-path with shortest length and $Q^*_{2, 4}\subseteq {\cal D}\cup (X_{2,6}\cap N(c_2))\cup (X_{2,7}\cap N(c_2))$. If $Q_{2, 4}$ is a local jump, then Lemma~\ref{lem-3-1}($Q_{2, 4}, T$) has a bad jump. If $Q_{2, 4}$ is not a local jump, then Lemma~\ref{lem-3-4}($Q_{2, 4}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, (\ref{eqa-3-4}) holds. Next we claim that \begin{equation}\label{eqa-3-5} \mbox{$c_1\notin {\cal N}$.} \end{equation} Suppose that $c_1\in {\cal N}$. Let $Q_{1, 4}$ be a $c_1c_4$-path with shortest length and interior in ${\cal D}$. By (\ref{eqa-3-4}), we have that $N_{Q^*_{1, 4}}(c_2)=\mbox{{\rm \O}}$. If $Q_{1, 4}$ is a local $e$-jump, then Lemma~\ref{lem-3-2}($P_2, Q_{1, 4}$) has a bad jump. If $Q_{1, 4}$ is not local, then Lemma~\ref{lem-3-4}($Q_{1, 4}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, (\ref{eqa-3-5}) holds.
Now we claim that \begin{equation}\label{eqa-3-6} \mbox{$((X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))\cup\{c_7\})\cap {\cal N}=\mbox{{\rm \O}}$.} \end{equation} Suppose that (\ref{eqa-3-6}) does not hold. Let $Q_{4, 7}$ be a $c_4c_7$-path with shortest length and interior in ${\cal D}\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))$. By (\ref{eqa-3-4}) and (\ref{eqa-3-5}), we have that $N_{Q^*_{4, 7}}(\{c_1, c_2\})=\mbox{{\rm \O}}$. If $Q_{4, 7}$ is a local jump, then Lemma~\ref{lem-3-2}($Q_{4, 7}, T$) has a bad $e$-jump across $c_1c_7$. If $Q_{4, 7}$ is not local, then $N_{Q^*_{4, 7}}(c_3)\ne\mbox{{\rm \O}}$, and the shortest $(c_3, \{c_5, c_6, c_7\})$-path, with interior in $Q^*_{4, 7}$, is a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, (\ref{eqa-3-6}) holds. Finally we claim that \begin{equation}\label{eqa-3-7} \mbox{$((X_{2,6}\setminus N(c_2))\cup\{c_6\})\cap {\cal N}=\mbox{{\rm \O}}$.} \end{equation} Suppose that (\ref{eqa-3-7}) does not hold. Let $Q_{4, 6}$ be a $c_4c_6$-path with shortest length and interior in ${\cal D}\cup (X_{2,6}\setminus N(c_2))$. By (\ref{eqa-3-4}), (\ref{eqa-3-5}) and (\ref{eqa-3-6}), we have that $N_{Q^*_{4, 6}}(\{c_1, c_2, c_7\})=\mbox{{\rm \O}}$. If $N_{Q^*_{4, 6}}(c_3)=\mbox{{\rm \O}}$, then $Q_{4, 6}$ is a local $v$-jump, and Lemma~\ref{lem-3-1}($P_2, Q_{4, 6}$) has a bad jump (with the end either $c_4$ or $c_5$). Otherwise, the shortest $(c_3, \{c_5, c_6\})$-path, with interior in $Q^*_{4, 6}\subseteq {\cal D}\cup (X_{2,6}\setminus N(c_2))$, is a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, (\ref{eqa-3-7}) holds. By (\ref{eqa-3-4}), (\ref{eqa-3-5}), (\ref{eqa-3-6}), and (\ref{eqa-3-7}), we have that ${\cal N}\subseteq X_3\cup \{c_3,c_4,c_5\}$. Since $G$ has no $P_3$-cutsets, we have ${\cal N}\cap X_3\ne\mbox{{\rm \O}}$. If $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$, then there is a bad $v$-jump across $c_4$, contradicting (\ref{eqa-3-3-1}). Hence, we have that ${\cal N}\subseteq X_3\cup \{c_3,c_4\}$.
Notice that each pair of vertices in $X_3\cup\{c_4\}$ are joined by an induced path of length six with interior in $(X_{3,7}\setminus N(c_3))\cup\{c_5,c_6,c_7\}$. Thus $X_3\cup \{c_3,c_4\}$ is a parity star-cutset. This proves Claim~\ref{clm-3-4}. \rule{4pt}{7pt} Now suppose that $C$ has no short $v$-jumps across $c_1$ with interior in $P_1^*\cup P_2^*$. Thus $T$ must be a short $e$-jump across either $c_1c_2$ or $c_1c_7$. Without loss of generality, suppose that $T$ is a short $e$-jump across $c_1c_2$. \begin{claim}\label{clm-3-5} Lemma~$\ref{theo-3-1}$ holds if $C$ has no short $v$-jump across $c_3$. \end{claim} \noindent {\it Proof. } Suppose that $C$ has no short $v$-jump across $c_3$. By Lemma~\ref{coro-2-1}, we have that either $X_{1,3}=\mbox{{\rm \O}}$ or $X_{1,6}=\mbox{{\rm \O}}$. Since the two ends of any jump of $C$ are not adjacent, by Lemmas~\ref{coro-2-1}, \ref{coro-2-2}, \ref{lem-3-1} and \ref{lem-3-2}, we have that no short jumps contain $c_4$ or $c_5$. Thus, ${\cal X}=X_{2,7}\cup X_{2,6}\cup X_{3,7}\cup X_{1,3}\cup X_{1,6}$, and $c_5$ has no neighbors in ${\cal X}\cup \{c_1,c_2,c_3,c_7\}$ as each vertex in $N_{{\cal X}}(c_5)$ provides us with a short jump starting from $c_5$. Since $d(c_5)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) holds. Let $X_6=N(c_6)\cap X_{2,6}$ and $X'_6=N(c_6)\cap X_{1,6}$. By arguments similar to those used in the proof of Claim~\ref{clm-3-4}, we now prove that \begin{equation}\label{eqa-3-9-0} {\cal N}\subseteq (X_6'\cup X_6)\cup \{c_4,c_5,c_6\}. \end{equation} Suppose that $[(X_{3,7}\setminus N(c_7))\cup (X_{1,3}\cap N(c_3))\cup\{c_3\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{3, 5}$ be a $c_3c_5$-path with shortest length and $Q^*_{3, 5}\subseteq {\cal D}\cup (X_{3,7}\setminus N(c_7))\cup (X_{1,3}\cap N(c_3))$.
If $Q_{3, 5}$ is a local jump, then Lemma~\ref{lem-3-2}($Q_{3, 5}, T$) has a bad jump with end either $c_4$ or $c_5$. If $Q_{3, 5}$ is not a local jump, then Lemma~\ref{lem-3-4}($Q_{3, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus \{c_3\})\cup X_{2,7}\cup X_{2,6}\cup X_{1,6}\cup (X_{3,7}\cap N(c_7))\cup (X_{1,3}\setminus N(c_3))$. Suppose that $[(X_{2,7}\setminus N(c_2))\cup (X_{3,7}\cap N(c_7))\cup\{c_7\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{5, 7}$ be a $c_5c_7$-path with shortest length and $Q^*_{5, 7}\subseteq {\cal D}\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\cap N(c_7))$. Then, we have $N_{Q^*_{5, 7}}(c_3)=\mbox{{\rm \O}}$ as $c_3\not\in {\cal N}$. If $Q_{5, 7}$ is a local jump then Lemma~\ref{lem-3-2}($Q_{5, 7}, T$) has a bad jump. If $Q_{5, 7}$ is not a local jump then Lemma~\ref{lem-3-4}($Q_{5, 7}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus we have ${\cal N}\subseteq (V(C)\setminus \{c_3, c_7\})\cup X_{2,6}\cup X_{1,6}\cup (X_{1,3}\setminus N(c_3))\cup (X_{2,7}\cap N(c_2))$. Suppose that $[(X_{2,7}\cap N(c_2))\cup (X_{2,6}\setminus N(c_6))\cup\{c_2\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{2, 5}$ be a $c_2c_5$-path with shortest length and $Q^*_{2, 5}\subseteq {\cal D}\cup (X_{2,7}\cap N(c_2))\cup (X_{2,6}\setminus N(c_6))$. Then $N_{Q^*_{2, 5}}(c_3)=N_{Q^*_{2, 5}}(c_7)=\mbox{{\rm \O}}$ as $c_3, c_7\not\in {\cal N}$. Suppose that $Q_{2, 5}$ is a local jump. Since $Q_{2, 5}$ cannot be short by (\ref{eqa-3-3-1}), we have that $N_{Q^*_{2, 5}}(c_4)\ne \mbox{{\rm \O}}$. Thus the shortest $c_2c_4$-path with interior in $Q^*_{2, 5}$ is a bad $v$-jump. If $Q_{2, 5}$ is not a local jump, then Lemma~\ref{lem-3-4}($Q_{2, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus we further have that ${\cal N}\subseteq (V(C)\setminus \{c_2, c_3, c_7\})\cup X_6\cup X_{1,6}\cup (X_{1,3}\setminus N(c_3))$.
Suppose that $[(X_{1,3}\setminus N(c_3))\cup (X_{1,6}\setminus N(c_6))\cup\{c_1\}]\cap {\cal N}\ne\mbox{{\rm \O}}$. Let $Q_{1, 5}$ be a $c_1c_5$-path with shortest length and $Q^*_{1, 5}\subseteq {\cal D}\cup (X_{1,3}\setminus N(c_3))\cup (X_{1,6}\setminus N(c_6))$. Then $\{c_2, c_3, c_7\}$ is anticomplete to $Q^*_{1, 5}$. If $Q_{1, 5}$ is a local jump then Lemma~\ref{lem-3-2}($P_1, Q_{1, 5}$) has a bad jump. If $Q_{1, 5}$ is not local, then $N_{Q^*_{1, 5}}(c_4)\ne \mbox{{\rm \O}}$, and the shortest $(c_4, \{c_1, c_6\})$-path, with interior in $Q^*_{1, 5}$, is a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus \{c_1, c_2, c_3, c_7\})\cup X_6\cup X'_6$. This proves (\ref{eqa-3-9-0}). Since $G$ has no $P_3$-cutsets, we have that ${\cal N}\cap (X_6\cup X'_6)\ne\mbox{{\rm \O}}$. If $c_4\in {\cal N}$, then there is a local $v$-jump $Q_{4, 6}$ across $c_5$ with interior in ${\cal D}$, and so Lemma~\ref{lem-3-1}($P_2, Q_{4, 6}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Thus, ${\cal N}\subseteq (X_6'\cup X_6)\cup \{c_5, c_6\}$. Notice that every two vertices in $(X_6'\cup X_6)\cup\{c_5\}$ are joined by an induced path of length six or eight with interior in $((X_{1,6}\cup X_{2,6})\setminus N(c_6))\cup\{c_1,c_2,c_3,c_4\}$. Hence $(X_6'\cup X_6)\cup \{c_5,c_6\}$ is a parity star-cutset. This proves Claim~\ref{clm-3-5}. \rule{4pt}{7pt} To finish the proof of Lemma~\ref{theo-3-1}, it remains to consider the case where $T$ is a short $e$-jump across $c_1c_2$ and $C$ has a short $v$-jump across $c_3$. \begin{claim}\label{clm-3-6} Suppose that $T$ is a short $e$-jump across $c_1c_2$ and $C$ has a short $v$-jump across $c_3$. Then Lemma~\ref{theo-3-1} holds. \end{claim} \noindent {\it Proof. } Let $Q_{2, 4}$ be a short $v$-jump across $c_3$. By Lemma~\ref{coro-2-2}, we have that $C$ has no short $v$-jumps across $c_1$, and no short $e$-jumps across $c_1c_7$.
Similar to the proofs of Claims~\ref{clm-3-4} and \ref{clm-3-5}, we can deduce that ${\cal X}=X_{2,4}\cup X_{3,7}\cup X_{1,3}$. Thus $c_5$ has no neighbor in ${\cal X}\cup \{c_1,c_2,c_3,c_7\}$, as otherwise a vertex in $N_{{\cal X}}(c_5)$ would provide a short jump starting from $c_5$. Since $d(c_5)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$, and so (\ref{eqa-3-3-1}) holds. Let $X_4=N(c_4)\cap X_{2,4}$. We claim that \begin{equation}\label{eqa-3-14-0} {\cal N}\subseteq X_4\cup \{c_4,c_5,c_6\}. \end{equation} Suppose that $[(X_{3, 7}\setminus N(c_7))\cup (X_{1, 3}\setminus N(c_1))\cup\{c_3\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{3, 5}$ be a $c_3c_5$-path with shortest length and $Q^*_{3, 5}\subseteq {\cal D}\cup (X_{3, 7}\setminus N(c_7))\cup (X_{1, 3}\setminus N(c_1))$. If $Q_{3, 5}$ is a local jump, then Lemma~\ref{lem-3-2}($Q_{3, 5}, T$) has a bad jump. If $Q_{3, 5}$ is not a local jump, then Lemma~\ref{lem-3-4}($Q_{3, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_3\})\cup X_{2,4}\cup (X_{3,7}\cap N(c_7))\cup (X_{1, 3}\cap N(c_1))$. Suppose that $[(X_{3,7}\cap N(c_7))\cup\{c_7\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{5, 7}$ be a $c_5c_7$-path with shortest length and $Q^*_{5, 7}\subseteq {\cal D}\cup (X_{3,7}\cap N(c_7))$. Then $N_{Q_{5, 7}^*}(c_3)=\mbox{{\rm \O}}$ as $c_3\not\in {\cal N}$. If $Q_{5, 7}$ is a local jump then Lemma~\ref{lem-3-2}($Q_{5, 7}, T$) has a bad jump. If $Q_{5, 7}$ is not a local jump then Lemma~\ref{lem-3-4}($Q_{5, 7}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_3, c_7\})\cup X_{2,4}\cup (X_{1, 3}\cap N(c_1))$. Suppose that $[(X_{2,4}\setminus N(c_4))\cup\{c_2\}]\cap {\cal N}\ne \mbox{{\rm \O}}$.
Let $Q_{2, 5}$ be a $c_2c_5$-path with shortest length and $Q^*_{2, 5}\subseteq {\cal D}\cup (X_{2,4}\setminus N(c_4))$. Then $N_{Q_{2, 5}^*}(c_3)=N_{Q_{2, 5}^*}(c_7)=\mbox{{\rm \O}}$ as $c_3, c_7\not\in {\cal N}$. If $Q_{2, 5}$ is a local jump, then $N_{Q_{2, 5}^*}(c_4)\ne\mbox{{\rm \O}}$ as $Q_{2, 5}$ cannot be short by (\ref{eqa-3-3-1}). This implies that the shortest $c_2c_4$-path, with interior in $Q_{2, 5}^*$, is a bad jump. If $Q_{2, 5}$ is not a local jump, then Lemma~\ref{lem-3-4}($Q_{2, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus we have that ${\cal N}\subseteq (V(C)\setminus\{c_2, c_3, c_7\})\cup X_4\cup (X_{1, 3}\cap N(c_1))$. Suppose that $[(X_{1, 3}\cap N(c_1))\cup\{c_1\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{1, 5}$ be a $c_1c_5$-path with shortest length and $Q^*_{1, 5}\subseteq {\cal D}\cup (X_{1, 3}\cap N(c_1))$. Then $N_{Q_{1, 5}^*}(c_3)=N_{Q_{1, 5}^*}(c_7)=N_{Q_{1, 5}^*}(c_2)=\mbox{{\rm \O}}$. If $Q_{1, 5}$ is a local jump, then Lemma~\ref{lem-3-2}($P_1, Q_{1, 5}$) has a bad jump. If $Q_{1, 5}$ is not local and $N_{Q_{1, 5}^*}(c_4)\ne\mbox{{\rm \O}}$, then Lemma~\ref{lem-3-4}($Q_{1, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3, c_7\})\cup X_4$. This proves (\ref{eqa-3-14-0}). Since $G$ has no $P_3$-cutsets, we have that ${\cal N}\cap X_4\neq\mbox{{\rm \O}}$. If $c_6\in {\cal N}$, then there is a local $v$-jump $Q_{4, 6}$ across $c_5$ with interior in ${\cal D}$ such that Lemma~\ref{lem-3-1}($P_2, Q_{4, 6}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq X_4\cup \{c_4,c_5\}$. Since every pair of distinct vertices in $X_4\cup\{c_5\}$ are joined by an induced path of length six or eight with interior in $\{c_1,c_2,c_6,c_7\}\cup (X_{2,4}\setminus N(c_4))$, we have that $X_4\cup\{c_4,c_5\}$ is a parity star-cutset. This proves Claim~\ref{clm-3-6}, and completes the proof of Lemma~\ref{theo-3-1}.
\rule{4pt}{7pt} The proofs of the following lemmas use the same idea as the proof of Lemma~\ref{theo-3-1} above. \begin{lemma}\label{theo-3-2} Suppose that $C$ is of type $2$. Then $G$ admits a parity star-cutset. \end{lemma} \noindent {\it Proof. } Let $P_1$ be a local $v$-jump across $c_2$, and $P_2$ be a local $e$-jump across $c_6c_7$. Let $T$ be a short jump with $T^*\subseteq P_1^*\cup P_2^*$ in Lemma~\ref{lem-3-2}($P_1, P_2$), such that $T$ is a $v$-jump across $c_1$ or an $e$-jump across $c_1c_2$ or $c_1c_7$. By the definition of type 2, we have that \begin{equation}\label{eqa-no-c4c7-v-jump} \mbox{$C$ has no local $v$-jumps across $c_4$ or $c_7$,} \end{equation} and so \begin{equation}\label{eqa-c6-empty} \mbox{$N_{P_2^*}(c_6)=\mbox{{\rm \O}}$, and $T$ is not a jump across $c_1c_7$.} \end{equation} \begin{claim}\label{clm-3-7} Suppose that $T$ is a short $v$-jump across $c_1$. Then Lemma~$\ref{theo-3-2}$ holds. \end{claim} \noindent {\it Proof. } It is certain that neither $P_1$ nor $P_2$ is short. By the minimality of $|P_1^*\cup P_2^*|$, $C$ has neither short $v$-jumps across $c_2$ nor short $e$-jumps across $c_6c_7$. By Lemmas~\ref{coro-2-1} and \ref{theo-3-1}, we may assume that $C$ has no short $v$-jumps across any vertex in $\{c_3, c_4, c_5, c_6\}$. Thus, except for those across $c_1$, $C$ has no short $v$-jumps. By Lemma~\ref{coro-2-2}, we have that $C$ has no short $e$-jumps across $c_3c_4$ or $c_5c_6$. By Lemma~\ref{lem-3-3}, $C$ has no short jump across $c_2c_3$. Hence ${\cal X}=X_{2,7}\cup X_{2,6}\cup X_{3,7}$, and $c_5$ has no neighbor in ${\cal X}\cup \{c_1,c_2,c_3,c_7\}$ as $C$ has no short jumps starting from $c_5$. Since $d(c_5)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Let $X_6=X_{2,6}\cap N(c_6)$. Now we prove that \begin{equation}\label{eqa-3-19-0} {\cal N}\subseteq X_6\cup \{c_4,c_5,c_6\}.
\end{equation} Suppose that $[(X_{3,7}\cap N(c_3))\cup\{c_3\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{3, 5}$ be a $c_3c_5$-path with shortest length and $Q^*_{3, 5}\subseteq {\cal D}\cup (X_{3,7}\cap N(c_3))$. Since $C$ has no local $v$-jump across $c_4$ by (\ref{eqa-no-c4c7-v-jump}), we have that $N_{Q^*_{3, 5}}(\{c_1, c_2, c_6, c_7\})\ne\mbox{{\rm \O}}$, and so Lemma~\ref{lem-3-4}($Q_{3, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_3\})\cup X_{2,7}\cup X_{2,6}\cup (X_{3,7}\setminus N(c_3))$. Suppose that $[(X_{2,6}\setminus N(c_6))\cup (X_{2,7}\cap N(c_2))\cup\{c_2\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{2, 5}$ be a $c_2c_5$-path with shortest length and $Q^*_{2, 5}\subseteq {\cal D}\cup (X_{2,6}\setminus N(c_6))\cup (X_{2,7}\cap N(c_2))$. If $Q_{2, 5}$ is a local jump, then Lemma~\ref{lem-3-2}($Q_{2, 5}, T$) has a bad jump. If $Q_{2, 5}$ is not a local jump, then Lemma~\ref{lem-3-4}($Q_{2, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_2, c_3\})\cup X_6\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))$. Suppose that $c_1\in {\cal N}$. Let $Q_{1, 5}$ be a $c_1c_5$-path with shortest length and $Q^*_{1, 5}\subseteq {\cal D}$. Then $N_{Q^*_{1, 5}}(c_2)=N_{Q^*_{1, 5}}(c_3)=\mbox{{\rm \O}}$ as $c_2, c_3\not\in {\cal N}$. If $Q_{1, 5}$ is local, then Lemma~\ref{lem-3-2}($P_1, Q_{1, 5}$) has a bad jump. If $Q_{1, 5}$ is not local, then Lemma~\ref{lem-3-4}($Q_{1, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3\})\cup X_6\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))$. Suppose that $[(X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))\cup\{c_7\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{5, 7}$ be a $c_5c_7$-path with shortest length and $Q^*_{5, 7}\subseteq {\cal D}\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))$.
Then $N_{Q^*_{5, 7}}(c_1)=N_{Q^*_{5, 7}}(c_2)=N_{Q^*_{5, 7}}(c_3)=\mbox{{\rm \O}}$. If $N_{Q^*_{5, 7}}(c_4)=\mbox{{\rm \O}}$, then let $Q'=Q_{5, 7}$. Otherwise, let $Q'$ be the shortest $c_4c_7$-path with interior in $Q^*_{5, 7}$. Then $Q'$ is a local jump, and so Lemma~\ref{lem-3-2}($Q', T$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3, c_7\})\cup X_6$. This proves (\ref{eqa-3-19-0}). Since $G$ has no $P_3$-cutsets, we have that ${\cal N}\cap X_6\ne\mbox{{\rm \O}}$. Then there is a short $e$-jump $Q_{2, 6}$ across $c_1c_7$. If $c_4\in {\cal N}$, then there is a local $v$-jump $Q_{4, 6}$ across $c_5$ with interior in ${\cal D}$, and Lemma~\ref{lem-3-2}($Q_{2, 6}, Q_{4, 6}$) has a bad jump. Hence ${\cal N}\subseteq X_6\cup \{c_5,c_6\}$. Since every two vertices in $X_6\cup\{c_5\}$ are joined by an induced path of length six or eight with interior in $(X_{2,6}\setminus N(c_6))\cup\{c_2,c_3,c_4\}$, we have that $X_6\cup\{c_5,c_6\}$ is a parity star-cutset. This proves Claim~\ref{clm-3-7}. \rule{4pt}{7pt} By (\ref{eqa-c6-empty}), now suppose that $T$ is a short $e$-jump across $c_1c_2$. By Lemma~\ref{lem-3-2}, we have that \begin{equation}\label{eqa-c4-local} \mbox{$C$ has no local $v$-jumps across $c_4$ or $c_6$}. \end{equation} \begin{claim}\label{clm-3-8} Suppose that $C$ has no short $v$-jump across $c_3$. Then Lemma~\ref{theo-3-2} holds. \end{claim} \noindent {\it Proof. } By the same arguments as those used in the proof of Claim~\ref{clm-3-7}, we have that no short jumps may contain $c_4$ or $c_5$. Thus ${\cal X}=X_{2,7}\cup X_{2,6}\cup X_{3,7}\cup X_{1,3}$, and $c_5$ is anticomplete to ${\cal X}\cup \{c_1,c_2,c_3,c_7\}$ as each vertex in $N_{{\cal X}}(c_5)$ would produce a short jump starting from $c_5$. Since $d(c_5)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$.
Then (\ref{eqa-3-3-1}) still holds. Let $X_6=X_{2,6}\cap N(c_6)$. We claim that \begin{equation}\label{eqa-3-23-0} {\cal N}\subseteq X_6\cup \{c_4,c_5,c_6\}. \end{equation} Suppose that $[(X_{3,7}\setminus N(c_7))\cup (X_{1,3}\cap N(c_3))\cup\{c_3\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{3, 5}$ be a $c_3c_5$-path with shortest length and $Q^*_{3, 5}\subseteq {\cal D}\cup (X_{3,7}\setminus N(c_7))\cup (X_{1,3}\cap N(c_3))$. Since $Q_{3, 5}$ is not a local jump by (\ref{eqa-c4-local}), we have that Lemma~\ref{lem-3-4}($Q_{3, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_3\})\cup X_{2,7}\cup X_{2,6}\cup (X_{3,7}\cap N(c_7))\cup (X_{1,3}\setminus N(c_3))$. Suppose that $[(X_{2,7}\setminus N(c_2))\cup (X_{3,7}\cap N(c_7))\cup\{c_7\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{5, 7}$ be a $c_5c_7$-path with shortest length and $Q^*_{5, 7}\subseteq {\cal D}\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\cap N(c_7))$. Then $N_{Q_{5, 7}^*}(c_3)=\mbox{{\rm \O}}$. Since $Q_{5, 7}$ is not a local jump by (\ref{eqa-c4-local}), Lemma~\ref{lem-3-4}($Q_{5, 7}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). So, ${\cal N}\subseteq (V(C)\setminus\{c_3, c_7\})\cup (X_{2,7}\cap N(c_2))\cup X_{2,6}\cup (X_{1,3}\setminus N(c_3))$. Suppose that $[(X_{2,7}\cap N(c_2))\cup (X_{2,6}\setminus N(c_6))\cup\{c_2\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{2, 5}$ be a $c_2c_5$-path with shortest length and $Q^*_{2, 5}\subseteq {\cal D}\cup (X_{2,7}\cap N(c_2))\cup (X_{2,6}\setminus N(c_6))$. Then $N_{Q_{2, 5}^*}(c_3)=N_{Q_{2, 5}^*}(c_7)=\mbox{{\rm \O}}$. By Lemma~\ref{coro-2-2}, we see that $Q_{2, 5}$ is not a short jump. If $Q_{2, 5}$ is a local jump, then $N_{Q_{2, 5}^*}(c_4)\ne\mbox{{\rm \O}}$, and the shortest $c_2c_4$-path with interior in $Q_{2, 5}^*$ is a bad jump, contradicting (\ref{eqa-3-3-1}). Thus $Q_{2, 5}$ is not a local jump, and so Lemma~\ref{lem-3-4}($Q_{2, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}).
Hence we have that ${\cal N}\subseteq (V(C)\setminus\{c_2, c_3, c_7\})\cup X_{6}\cup (X_{1,3}\setminus N(c_3))$. Suppose that $[(X_{1,3}\setminus N(c_3))\cup\{c_1\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{1, 5}$ be a $c_1c_5$-path with shortest length and $Q^*_{1, 5}\subseteq {\cal D}\cup (X_{1,3}\setminus N(c_3))$. Then $N_{Q_{1, 5}^*}(c_2)=N_{Q_{1, 5}^*}(c_3)=N_{Q_{1, 5}^*}(c_7)=\mbox{{\rm \O}}$. If $Q_{1, 5}$ is a local jump, then Lemma~\ref{lem-3-2}($P_1, Q_{1, 5}$) has a bad jump. If $Q_{1, 5}$ is not local, then Lemma~\ref{lem-3-4}($Q_{1, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3, c_7\})\cup X_{6}$. This proves (\ref{eqa-3-23-0}). Since $G$ has no $P_3$-cutsets, we have that ${\cal N}\cap X_6\ne \mbox{{\rm \O}}$. Consequently, $X_{2, 6}\ne\mbox{{\rm \O}}$ and $C$ has a short jump, say $Q_{2,6}$, across $c_1c_7$. If $N_{\cal D}(c_4)\ne \mbox{{\rm \O}}$, then $C$ has a local jump $Q_{4, 6}$ across $c_5$ with interior in ${\cal D}$, and Lemma~\ref{lem-3-2}($Q_{2,6}, Q_{4, 6}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq X_6\cup \{c_5,c_6\}$. Notice that each pair of vertices in $X_6\cup \{c_5\}$ are joined by an induced path of length six or eight with interior in $\{c_2,c_3,c_4\}\cup (X_{2,6}\setminus N(c_6))$. Hence $X_6\cup \{c_5,c_6\}$ is a parity star-cutset. This proves Claim~\ref{clm-3-8}. \rule{4pt}{7pt} \begin{claim}\label{clm-3-9} Lemma~$\ref{theo-3-2}$ holds if $C$ has a short $v$-jump across $c_3$. \end{claim} \noindent {\it Proof. } Suppose that $C$ has a short $v$-jump $Q_{2, 4}$ across $c_3$.
By Lemmas~\ref{coro-2-2} and \ref{theo-3-1}, we have that \begin{equation}\label{eqa-no-local-c5} \mbox{$C$ has neither local $v$-jumps across $c_1$ or $c_5$, nor short $e$-jumps across $c_1c_7$ or $c_5c_6$.} \end{equation} By arguments similar to those used in the proofs of Claims~\ref{clm-3-4} and \ref{clm-3-5}, we conclude that ${\cal X}=X_{2,4}\cup X_{3,7}\cup X_{1,3}$, and $c_5$ has no neighbors in ${\cal X}\cup \{c_1,c_2,c_3,c_7\}$. Since $d(c_5)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) still holds. Let $X_4=X_{2,4}\cap N(c_4)$. We will prove that \begin{equation}\label{eqa-3-28-0} {\cal N}\subseteq X_4\cup \{c_4,c_5,c_6\}. \end{equation} Suppose that $[(X_{1,3}\setminus N(c_1))\cup (X_{3,7}\setminus N(c_7))\cup\{c_3\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{3, 5}$ be a $c_3c_5$-path with shortest length and $Q^*_{3, 5}\subseteq {\cal D}\cup (X_{1,3}\setminus N(c_1))\cup (X_{3,7}\setminus N(c_7))$. Since $Q_{3, 5}$ is not a local jump by (\ref{eqa-c4-local}), we have that Lemma~\ref{lem-3-4}($Q_{3, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). This shows that ${\cal N}\subseteq (V(C)\setminus\{c_3\})\cup X_{2,4}\cup (X_{3,7}\cap N(c_7))\cup (X_{1,3}\cap N(c_1))$. Suppose that $[(X_{3,7}\cap N(c_7))\cup\{c_7\}]\cap {\cal N}\ne\mbox{{\rm \O}}$. Let $Q_{5, 7}$ be a $c_5c_7$-path with shortest length and $Q^*_{5, 7}\subseteq {\cal D}\cup (X_{3,7}\cap N(c_7))$. Then $N_{Q_{5, 7}^*}(c_3)=\mbox{{\rm \O}}$. Since $Q_{5, 7}$ is not a local jump by (\ref{eqa-c4-local}), we have that Lemma~\ref{lem-3-4}($Q_{5, 7}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_3, c_7\})\cup X_{2,4}\cup (X_{1,3}\cap N(c_1))$. Suppose that $[(X_{2,4}\setminus N(c_4))\cup\{c_2\}]\cap {\cal N}\ne \mbox{{\rm \O}}$.
Let $Q_{2, 5}$ be a $c_2c_5$-path with shortest length and $Q^*_{2, 5}\subseteq {\cal D}\cup (X_{2,4}\setminus N(c_4))$. Then $N_{Q_{2, 5}^*}(c_3)=N_{Q_{2, 5}^*}(c_7)=\mbox{{\rm \O}}$, and by (\ref{eqa-3-3-1}) $Q_{2, 5}$ is not a short jump. If $Q_{2, 5}$ is a local jump, then $N_{Q_{2, 5}^*}(c_4)\ne\mbox{{\rm \O}}$, and the shortest $c_2c_4$-path with interior in $Q_{2, 5}^*$ is a bad jump, contradicting (\ref{eqa-3-3-1}). Therefore, $Q_{2, 5}$ is not a local jump, and so Lemma~\ref{lem-3-4}($Q_{2, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). This shows that ${\cal N}\subseteq (V(C)\setminus\{c_2, c_3, c_7\})\cup X_{4}\cup (X_{1,3}\cap N(c_1))$. Suppose that $[(X_{1,3}\cap N(c_1))\cup\{c_1\}]\cap {\cal N}\ne \mbox{{\rm \O}}$. Let $Q_{1, 5}$ be a $c_1c_5$-path with shortest length and $Q^*_{1, 5}\subseteq {\cal D}\cup (X_{1,3}\cap N(c_1))$. Then $N_{Q_{1, 5}^*}(c_2)=N_{Q_{1, 5}^*}(c_3)=N_{Q_{1, 5}^*}(c_7)=\mbox{{\rm \O}}$, and by (\ref{eqa-3-3-1}) $Q_{1, 5}$ is not a short jump. Now the shortest ($c_1, \{c_4, c_6\}$)-path, with interior in $Q_{1, 5}^*$, is a bad jump, contradicting (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3, c_7\})\cup X_{4}$, and (\ref{eqa-3-28-0}) holds. Since $G$ has no $P_3$-cutsets, we have that $|{\cal N}\cap X_4|\ge 1$. Since $C$ has no local $v$-jump across $c_5$ by (\ref{eqa-no-local-c5}), we have that $N_{\cal D}(c_6)=\mbox{{\rm \O}}$, and so ${\cal N}\subseteq X_4\cup \{c_4,c_5\}$. Since each pair of vertices in $X_4\cup\{c_5\}$ is joined by an induced path of length six or eight with interior in $\{c_1,c_2,c_6,c_7\}\cup X_{2,4}\setminus N(c_4)$, we have that $X_4\cup\{c_4,c_5\}$ is a parity star-cutset. This proves Claim~\ref{clm-3-9}, and also completes the proof of Lemma~\ref{theo-3-2}. \rule{4pt}{7pt} \begin{lemma}\label{theo-3-3} Suppose that $C$ is of type $3$. Then $G$ admits a parity star-cutset. \end{lemma} \noindent {\it Proof. 
} Let $P_1$ be a local $e$-jump across $c_2c_3$, and $P_2$ be a local $e$-jump across $c_6c_7$. It follows from the definition of type 3 that $C$ has no local $v$-jumps across any vertex in $\{c_2, c_4, c_5, c_7\}$, and so $N_{P_1^*}(c_3)=N_{P_2^*}(c_6)=\mbox{{\rm \O}}$. Consequently, Lemma~\ref{lem-3-3}($P_1, P_2$) cannot have short jumps across $c_1c_2$ or $c_1c_7$. Let $T$ be a short $v$-jump across $c_1$ in Lemma~\ref{lem-3-3}($P_1, P_2$). Then neither $P_1$ nor $P_2$ can be short by Lemma~\ref{lem-3-3}. By Lemma~\ref{coro-2-1}, we have that $C$ has no short $v$-jumps across $c_3$ or $c_6$. By Lemma~\ref{theo-3-2}, we may suppose that $C$ has no local $e$-jumps across $c_3c_4$ or $c_5c_6$. Therefore, \begin{equation}\label{eqa-v-c1-jump} \mbox{the only possible local $v$-jumps are those across $c_1$,} \end{equation} and \begin{equation}\label{eqa-e-c1c2-jump} \mbox{the only possible local $e$-jumps are those across some edge in $\{c_1c_2, c_2c_3, c_6c_7\}$}. \end{equation} Consequently, ${\cal X}=X_{2,7}\cup X_{3,7}\cup X_{2,6}$, and $c_5$ has no neighbor in ${\cal X}\cup \{c_1,c_2,c_3,c_7\}$ as each vertex in $N_{{\cal X}}(c_5)$ provides us with a short jump starting from $c_5$. Since $d(c_5)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then, (\ref{eqa-3-3-1}) still holds. Let $X_6=X_{2, 6}\cap N(c_6)$. We will prove that \begin{equation}\label{eqa-3-33-0} {\cal N}\subseteq X_6\cup \{c_4,c_5,c_6\}. \end{equation} Suppose that $[(X_{3,7}\cap N(c_3))\cup\{c_3\}]\cap {\cal N}\ne \mbox{{\rm \O}}$, and let $Q_{3, 5}$ be a $c_3c_5$-path with shortest length and $Q^*_{3, 5}\subseteq {\cal D}\cup (X_{3,7}\cap N(c_3))$. Since $C$ has no local $v$-jump across $c_4$ by (\ref{eqa-v-c1-jump}), we have that Lemma~\ref{lem-3-4}($Q_{3, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). 
Thus ${\cal N}\subseteq (V(C)\setminus\{c_3\})\cup X_{2,7}\cup X_{2,6}\cup (X_{3,7}\setminus N(c_3))$. Suppose that $[(X_{2,6}\setminus N(c_6))\cup (X_{2,7}\cap N(c_2))\cup\{c_2\}]\cap {\cal N}\ne \mbox{{\rm \O}}$, and let $Q_{2, 5}$ be a $c_2c_5$-path with shortest length and $Q^*_{2, 5}\subseteq {\cal D}\cup (X_{2,6}\setminus N(c_6))\cup (X_{2,7}\cap N(c_2))$. Since $Q_{2, 5}$ is not a local jump by (\ref{eqa-e-c1c2-jump}), Lemma~\ref{lem-3-4}($Q_{2, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Consequently, ${\cal N}\subseteq (V(C)\setminus\{c_2, c_3\}) \cup X_{6}\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))$. Suppose $c_1\in {\cal N}$. Let $Q_{1, 5}$ be a $c_1c_5$-path with shortest length and $Q^*_{1, 5}\subseteq {\cal D}$. If $Q_{1, 5}$ is a local jump then Lemma~\ref{lem-3-3}($P_1, Q_{1, 5}$) has a bad jump. If $Q_{1, 5}$ is not a local jump then Lemma~\ref{lem-3-4}($Q_{1, 5}$) has a bad jump. Both contradict (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3\}) \cup X_{6}\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))$. Suppose that $[(X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))\cup\{c_7\}]\cap {\cal N}\ne \mbox{{\rm \O}}$, and let $Q_{5, 7}$ be a $c_5c_7$-path with shortest length and $Q^*_{5, 7}\subseteq {\cal D}\cup (X_{2,7}\setminus N(c_2))\cup (X_{3,7}\setminus N(c_3))$. Then $N_{Q^*_{5, 7}}(c_1)=N_{Q^*_{5, 7}}(c_2)=N_{Q^*_{5, 7}}(c_3)=\mbox{{\rm \O}}$. If $N_{Q^*_{5, 7}}(c_4)=\mbox{{\rm \O}}$, then $Q_{5, 7}$ is a local $v$-jump, contradicting (\ref{eqa-v-c1-jump}). Otherwise, $N_{Q^*_{5, 7}}(c_4)\ne \mbox{{\rm \O}}$ and $C$ has a local $e$-jump across $c_5c_6$, contradicting (\ref{eqa-e-c1c2-jump}). Therefore, ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3, c_7\}) \cup X_{6}$. This proves (\ref{eqa-3-33-0}). Since $G$ has no $P_3$-cutsets, we have that ${\cal N}\cap X_6\ne\mbox{{\rm \O}}$. 
If $c_4\in {\cal N}$, then $C$ has a local $v$-jump $Q_{4, 6}$ across $c_5$, contradicting (\ref{eqa-v-c1-jump}). Thus ${\cal N}\subseteq X_6\cup \{c_5,c_6\}$. Since each pair of vertices in $X_6\cup\{c_5\}$ is joined by an induced path of length six or eight with interior in $\{c_2,c_3,c_4\}\cup X_{2,6}\setminus N(c_6)$, we have that $X_6\cup\{c_5,c_6\}$ is a parity star-cutset. This proves Lemma~\ref{theo-3-3}. \rule{4pt}{7pt} Finally, we consider the case where $C$ has at least two kinds of equivalent local jumps, and is not of type $i$ for any $i\in \{1, 2, 3\}$. \begin{lemma}\label{theo-3-4} Suppose that $C$ is not of type $i$ for any $i\in \{1, 2, 3\}$, and $C$ has at least two kinds of equivalent local jumps. If $C$ has a local $v$-jump, then $G$ admits a parity star-cutset. \end{lemma} \noindent {\it Proof. } Without loss of generality, suppose that $C$ has a local $v$-jump across $c_3$. Let $P_1$ be a shortest local jump across $c_3$. By Lemmas~\ref{theo-3-1} and \ref{theo-3-2}, we may assume that \begin{equation}\label{eqa-3-4-local-jumps} \mbox{$C$ has no local $v$-jumps across $c_1$ or $c_5$, and no local $e$-jumps across $c_1c_7$ or $c_5c_6$.} \end{equation} By symmetry, we need to consider the situations where $C$ has a local $v$-jump across one vertex in $\{c_2, c_7\}$, or a local $e$-jump across one edge in $\{c_1c_2, c_3c_4, c_6c_7\}$. \begin{claim}\label{clm-4-1} $C$ has no local $v$-jumps across $c_7$. \end{claim} \noindent {\it Proof. } Suppose to the contrary that $C$ has a local $v$-jump across $c_7$, and let $P_2$ be such a jump with shortest length. 
By Lemmas~\ref{theo-3-1} and \ref{theo-3-2}, we may suppose that \begin{equation}\label{eqa-2-3-local-jumps} \mbox{$C$ has no local $v$-jumps across $c_2$, and no local $e$-jumps across $c_2c_3$ or $c_4c_5$.} \end{equation} Firstly we prove that \begin{equation}\label{eqa-4-1-1} \mbox{$C$ has a short $e$-jump $Q_{3, 7}$ across $c_1c_2$, with interior in $P_1^*\cup P_2^*$.} \end{equation} Since $G$ induces no big odd holes, we have that $P_1^*$ cannot be disjoint from and anticomplete to $P_2^*$. Let $Q_{2, 6}$ be the shortest $c_2c_6$-path with interior in $P_1^*\cup P_2^*$. Then $Q_{2, 6}$ is not a local jump by (\ref{eqa-3-4-local-jumps}). If $N_{Q^*_{2, 6}}(c_4)\ne\mbox{{\rm \O}}$, then $C$ has a local $v$-jump across $c_5$ or a local $e$-jump across $c_2c_3$ or $c_5c_6$, contradicting (\ref{eqa-3-4-local-jumps}) or (\ref{eqa-2-3-local-jumps}). Thus $N_{Q^*_{2, 6}}(c_3)\ne\mbox{{\rm \O}}$. Let $Q_{3,6}$ be the shortest $c_3c_6$-path with interior in $Q^*_{2, 6}$. By (\ref{eqa-2-3-local-jumps}), we have that $Q_{3,6}$ cannot be a local jump, which implies that $N_{Q^*_{3,6}}(c_7)\ne\mbox{{\rm \O}}$. This implies that $C$ has a short $e$-jump across $c_1c_2$, with interior in $P_1^*\cup P_2^*$. Therefore, (\ref{eqa-4-1-1}) holds. Consequently, we have that neither $P_1$ nor $P_2$ can be short, that is \begin{equation}\label{eqa-3-7-local-jumps} \mbox{$C$ has no short $v$-jumps across $c_3$ or $c_7$.} \end{equation} By Lemmas~\ref{coro-2-2}, \ref{coro-2-3}, \ref{theo-3-2}, and \ref{theo-3-3}, we may assume that \begin{equation}\label{eqa-4-5-6-local-jumps} \mbox{$C$ has no local $v$-jumps across $c_4$ or $c_6$.} \end{equation} If $C$ has a short $e$-jump $Q_{1, 5}$ across $c_6c_7$, then $Q^*_{1, 5}$ is disjoint from and anticomplete to $Q^*_{3, 7}$, and so $c_1c_7Q_{3, 7}c_3c_4c_5Q_{1, 5}$ is a big odd hole, a contradiction. 
By symmetry, we may assume that \begin{equation}\label{eqa-34-67-short} \mbox{$C$ has no short $e$-jumps across $c_3c_4$ or $c_6c_7$.} \end{equation} Combining this with (\ref{eqa-3-4-local-jumps}), (\ref{eqa-2-3-local-jumps}), (\ref{eqa-3-7-local-jumps}) and (\ref{eqa-4-5-6-local-jumps}), we have that $C$ has no short $v$-jumps, and the only possible short $e$-jumps are those across $c_1c_2$. Thus, ${\cal X}=X_{3,7}$, and $c_5$ has no neighbor in ${\cal X}\cup \{c_1,c_2,c_3,c_7\}$ as each vertex in $N_{{\cal X}}(c_5)$ provides us with a short jump starting from $c_5$. Since $d(c_5)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_5)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) holds. Suppose that $[(X_{3,7}\cap N(c_3))\cup\{c_3\}]\cap {\cal N}\ne \mbox{{\rm \O}}$, and let $Q_{3, 5}$ be a $c_3c_5$-path with shortest length and $Q^*_{3, 5}\subseteq {\cal D}\cup (X_{3,7}\cap N(c_3))$. Since $Q_{3, 5}$ is not a local jump by (\ref{eqa-4-5-6-local-jumps}), Lemma~\ref{lem-3-4}($Q_{3, 5}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus\{c_3\})\cup (X_{3,7}\setminus N(c_3))$. If $c_2\in {\cal N}$, we can get a contradiction with the same argument as above. Thus $c_2\notin {\cal N}$. By symmetry, $[(X_{3,7}\setminus N(c_3))\cup\{c_7\}]\cap {\cal N}=\mbox{{\rm \O}}$ and $c_1\notin {\cal N}$. Therefore, ${\cal N}\subseteq \{c_4,c_5,c_6\}$, contradicting the choice of $G$. This proves Claim~\ref{clm-4-1}. \rule{4pt}{7pt} \begin{claim}\label{clm-4-2} Suppose that $C$ has a local $v$-jump across $c_2$. Then Lemma~$\ref{theo-3-4}$ holds. \end{claim} \noindent {\it Proof. } We choose $P_2$ to be a local jump across $c_2$ with shortest length. 
By Lemmas~\ref{theo-3-1} and \ref{theo-3-2}, and by Claim~\ref{clm-4-1}, we may assume that \begin{equation}\label{eqa-4-2-1} \mbox{$C$ has no local $v$-jumps across $c_4$, $c_6$ or $c_7$, and no local $e$-jumps across $c_4c_5$ or $c_6c_7$.} \end{equation} Since $G$ is big odd hole free, we have that $C$ cannot have both short $e$-jumps across $c_1c_2$ and short $e$-jumps across $c_3c_4$. Thus we may suppose, by symmetry, that \begin{equation}\label{eqa-4-2-2} \mbox{$C$ has no short $e$-jumps across $c_3c_4$.} \end{equation} By (\ref{eqa-3-4-local-jumps}) and (\ref{eqa-4-2-1}), we have that \begin{equation}\label{eqa-5-6-localjumps} \mbox{$C$ has no local jumps with end either $c_5$ or $c_6$.} \end{equation} Thus the only possible short $v$-jumps of $C$ are those across $c_2$ or $c_3$, and the only possible short $e$-jumps of $C$ are those across $c_2c_3$ or $c_1c_2$. Hence ${\cal X}=X_{1,3}\cup X_{2,4}\cup X_{1,4}\cup X_{3,7}$, and neither $c_5$ nor $c_6$ has neighbors in ${\cal X}\cup \{c_1,c_2,c_3,c_4\}$. Since $d(c_6)\ge 3$, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_6)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) still holds. Let $X_7=X_{3,7}\cap N(c_7)$. We will prove that \begin{equation}\label{eqa-4-2-3-0} {\cal N}\subseteq X_7\cup \{c_5,c_6,c_7\}. \end{equation} Suppose that $[((X_{1,4}\cup X_{2,4})\cap N(c_4))\cup\{c_4\}]\cap {\cal N}\ne \mbox{{\rm \O}}$, and let $Q_{4, 6}$ be a $c_4c_6$-path with shortest length and $Q^*_{4, 6}\subseteq {\cal D}\cup ((X_{1,4}\cup X_{2,4})\cap N(c_4))$. Since $Q_{4, 6}$ is not a local jump by (\ref{eqa-3-4-local-jumps}), Lemma~\ref{lem-3-4}($Q_{4, 6}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Hence ${\cal N}\subseteq (V(C)\setminus\{c_4\})\cup X_{1,3}\cup X_{3,7}\cup (X_{2,4}\cup X_{1,4})\setminus N(c_4)$. 
Suppose that $[(X_{1,3}\cap N(c_3))\cup (X_{3,7}\setminus N(c_7))\cup\{c_3\}]\cap {\cal N}\ne\mbox{{\rm \O}}$, and let $Q_{3, 6}$ be a $c_3c_6$-path with shortest length and $Q^*_{3, 6}\subseteq {\cal D}\cup (X_{1,3}\cap N(c_3))\cup (X_{3,7}\setminus N(c_7))$. Since $Q_{3, 6}$ is not a local jump by (\ref{eqa-4-2-1}), Lemma~\ref{lem-3-4}($Q_{3, 6}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Consequently, ${\cal N}\subseteq (V(C)\setminus\{c_3, c_4\})\cup X_{7}\cup (X_{1,3}\setminus N(c_3))\cup (X_{2,4}\cup X_{1,4})\setminus N(c_4)$. Suppose that $[(X_{2,4}\setminus N(c_4))\cup\{c_2\}]\cap {\cal N}\ne \mbox{{\rm \O}}$, and let $Q_{2, 6}$ be a $c_2c_6$-path with shortest length and $Q^*_{2, 6}\subseteq {\cal D}\cup (X_{2,4}\setminus N(c_4))$. Since $Q_{2, 6}$ is not a local jump by (\ref{eqa-5-6-localjumps}), Lemma~\ref{lem-3-4}($Q_{2, 6}$) has a bad jump, a contradiction to (\ref{eqa-3-3-1}). Thus ${\cal N}\subseteq (V(C)\setminus\{c_2, c_3, c_4\})\cup X_{7}\cup (X_{1,3}\setminus N(c_3))\cup (X_{1,4}\setminus N(c_4))$. Suppose that $[(X_{1,3}\setminus N(c_3))\cup (X_{1,4}\setminus N(c_4))\cup\{c_1\}]\cap {\cal N}\ne \mbox{{\rm \O}}$, and let $Q_{1, 6}$ be a $c_1c_6$-path with shortest length and $Q^*_{1, 6}\subseteq {\cal D}\cup (X_{1,3}\setminus N(c_3))\cup (X_{1,4}\setminus N(c_4))$. Since $Q_{1, 6}$ is not a local jump by (\ref{eqa-5-6-localjumps}), Lemma~\ref{lem-3-4}($Q_{1, 6}$) has a bad jump, which contradicts (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq (V(C)\setminus\{c_1, c_2, c_3, c_4\})\cup X_{7}$. This proves (\ref{eqa-4-2-3-0}). Since $G$ has no $P_3$-cutsets, we have that ${\cal N}\cap X_7\ne\mbox{{\rm \O}}$. Let $Q_{3, 7}$ be a short jump across $c_1c_2$. If $c_5\in {\cal N}$, then $C$ has a local jump $Q_{5, 7}$ across $c_6$ with interior in ${\cal D}$, and Lemma~\ref{lem-3-2}($Q_{3,7}, Q_{5, 7}$) has a bad jump, contradicting (\ref{eqa-3-3-1}). Therefore, ${\cal N}\subseteq X_7\cup \{c_7, c_6\}$. 
Notice that each pair of vertices in $X_7\cup \{c_6\}$ is joined by an induced path of length six or eight with interior in $\{c_3,c_4,c_5\}\cup X_{3,7}\setminus N(c_7)$. Hence $X_7\cup \{c_6,c_7\}$ is a parity star-cutset. This proves Claim~\ref{clm-4-2}. \rule{4pt}{7pt} \begin{claim}\label{clm-4-3} Suppose that $C$ has a local $e$-jump across $c_6c_7$. Then Lemma~$\ref{theo-3-4}$ holds. \end{claim} \noindent {\it Proof. } We choose $P_2$ to be a local jump across $c_6c_7$ with shortest length. By Claim~\ref{clm-4-1}, we may assume that $N_{P_2^*}(c_6)=N_{P_2^*}(c_7)=\mbox{{\rm \O}}$. Thus $P_2$ is short. Since $G$ is big odd hole free, we have that $P_1^*$ is not anticomplete to $P_2^*$. Then the shortest $c_2c_5$-path $Q_{2, 5}$, with interior in $P_1^*\cup P_2^*$, is a local jump, which together with $P_2$ implies that $C$ is of type 3, a contradiction. This proves Claim~\ref{clm-4-3}. \rule{4pt}{7pt} \begin{claim}\label{clm-4-4} Suppose that $C$ has a local $e$-jump across $c_1c_2$. Then Lemma~$\ref{theo-3-4}$ holds. \end{claim} \noindent {\it Proof. } We choose $P_2$ to be a local jump across $c_1c_2$ with shortest length. By Lemmas~\ref{theo-3-1}, \ref{theo-3-2} and \ref{theo-3-3}, and by Claims~\ref{clm-4-1}, \ref{clm-4-2} and \ref{clm-4-3}, we may assume that $C$ has no local $v$-jumps across any vertex in $V(C)\setminus\{c_3\}$, and no local $e$-jumps across any edge in $\{c_1c_7, c_4c_5, c_5c_6, c_6c_7\}$. Since $G$ is big odd hole free, we have that $C$ cannot have both a short $e$-jump across $c_1c_2$ and a short $e$-jump across $c_3c_4$. If $C$ has no short $e$-jumps across $c_3c_4$, then the only possible short $e$-jumps of $C$ are those across $c_1c_2$ or $c_2c_3$. Thus ${\cal X}=X_{1,4}\cup X_{2,4}\cup X_{3,7}$, and $c_6$ has no neighbor in ${\cal X}\cup \{c_1,c_2,c_3,c_4\}$ as otherwise each vertex in $N_{{\cal X}}(c_6)$ provides us with a short jump starting from $c_6$. 
If $C$ has no short $e$-jumps across $c_1c_2$, then the only possible short $e$-jumps are those across $c_3c_4$ or $c_2c_3$. Thus ${\cal X}=X_{1,4}\cup X_{2,4}\cup X_{2,5}$, and $c_6$ has no neighbor in ${\cal X}\cup \{c_1,c_2,c_3,c_4\}$ as otherwise each vertex in $N_{{\cal X}}(c_6)$ provides us with a short jump starting from $c_6$. In both cases above, we choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph with $N_{{\cal D}}(c_6)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) still holds. With an argument similar to that used in the proof of Claim~\ref{clm-4-2}, we conclude that \begin{itemize} \item $(X_{3,7}\cap N(c_7))\cup \{c_6,c_7\}$ is a parity star-cutset if $C$ has no short $e$-jumps across $c_3c_4$, and \item $(X_{2,5}\cap N(c_5))\cup \{c_5,c_6\}$ is a parity star-cutset if $C$ has no short $e$-jumps across $c_1c_2$. \end{itemize} This completes the proof of Claim~\ref{clm-4-4}. \rule{4pt}{7pt} \begin{claim}\label{clm-4-5} Suppose that $C$ has a local $e$-jump across $c_3c_4$. Then Lemma~$\ref{theo-3-4}$ holds. \end{claim} \noindent {\it Proof. } We choose $P_2$ to be a local $e$-jump across $c_3c_4$ with shortest length. By Lemmas~\ref{theo-3-1}, \ref{theo-3-2} and \ref{theo-3-3}, and by Claims~\ref{clm-4-1}, \ref{clm-4-2}, \ref{clm-4-3} and \ref{clm-4-4}, we may assume that the only possible local jumps of $C$ are those across $c_3$ or $c_2c_3$ or $c_3c_4$. Thus ${\cal X}=X_{1,4}\cup X_{2,4}\cup X_{2,5}$, and $c_6$ is anticomplete to ${\cal X}\cup \{c_1,c_2,c_3,c_4\}$ as otherwise any vertex in $N_{{\cal X}}(c_6)$ provides us with a local jump starting from $c_6$. We choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_6)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) still holds. 
With an argument similar to that used in the proof of Claim~\ref{clm-4-2}, we conclude that $(X_{2,5}\cap N(c_5))\cup \{c_5,c_6\}$ is a parity star-cutset. This proves Claim~\ref{clm-4-5}, and completes the proof of Lemma~\ref{theo-3-4}. \rule{4pt}{7pt} \begin{lemma}\label{theo-3-5} Suppose that $C$ is not of type $i$ for any $i\in\{1, 2, 3\}$, and $C$ has at least two kinds of equivalent local jumps. If $C$ has no local $v$-jumps, then $G$ admits a parity star-cutset. \end{lemma} \noindent {\it Proof. } Suppose that $C$ has no local $v$-jumps. Without loss of generality, we suppose by Lemma~\ref{lem-2-1} that \begin{equation}\label{eqa-no-local-v-local-34} \mbox{$C$ has a local $e$-jump across $c_3c_4$,} \end{equation} and let $P_1$ be a local $e$-jump across $c_3c_4$ with shortest length. Since $P_1$ is a local $e$-jump across $c_3c_4$ with shortest length, by Lemma~\ref{theo-3-3} we may assume that \begin{equation}\label{eqa-12-23-local-e-jumps} \mbox{$C$ has no local $e$-jumps across $c_1c_7$ or $c_6c_7$.} \end{equation} By (\ref{eqa-no-local-v-local-34}) and (\ref{eqa-12-23-local-e-jumps}), and by symmetry, we need to consider the cases where $C$ has a local $e$-jump across $c_1c_2$ or $c_2c_3$. \begin{claim}\label{clm-5-1} Suppose that $C$ has a local $e$-jump across $c_1c_2$. Then Lemma~$\ref{theo-3-5}$ holds. \end{claim} \noindent {\it Proof. } We choose $P_2$ to be a local jump across $c_1c_2$ with shortest length. By Lemmas~\ref{theo-3-1}, \ref{theo-3-2}, \ref{theo-3-3} and \ref{theo-3-4}, we may assume that the only possible local jumps of $C$ are those across $c_1c_2$ or $c_2c_3$ or $c_3c_4$. Since $G$ is big odd hole free, at most one of $P_1$ and $P_2$ is short. By symmetry, we suppose that $P_2$ is not a short jump. Then ${\cal X}=X_{1, 4}\cup X_{2, 5}$, and $c_6$ is anticomplete to ${\cal X}\cup \{c_1,c_2,c_3,c_4\}$ as otherwise any vertex in $N_{{\cal X}}(c_6)$ provides us with a local jump starting from $c_6$. 
We choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_6)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) still holds. With an argument similar to that used in the proof of Claim~\ref{clm-4-2}, we conclude that $(X_{2,5}\cap N(c_5))\cup \{c_5,c_6\}$ is a parity star-cutset. This proves Claim~\ref{clm-5-1}. \rule{4pt}{7pt} \begin{claim}\label{clm-5-2} Suppose that $C$ has a local $e$-jump across $c_2c_3$. Then Lemma~$\ref{theo-3-5}$ holds. \end{claim} \noindent {\it Proof. } We choose $P_2$ to be a local jump across $c_2c_3$ with shortest length. By Lemmas~\ref{theo-3-1}, \ref{theo-3-2}, \ref{theo-3-3} and \ref{theo-3-4}, and by Claim~\ref{clm-5-1}, we may assume that the only possible local jumps are those across $c_2c_3$ or $c_3c_4$. Again we have that ${\cal X}=X_{1, 4}\cup X_{2, 5}$, and $c_6$ is anticomplete to ${\cal X}\cup \{c_1,c_2,c_3,c_4\}$. We choose ${\cal D}$ to be the vertex set of a maximal connected induced subgraph such that $N_{{\cal D}}(c_6)\ne \mbox{{\rm \O}}$ and ${\cal D}\cap (V(C)\cup {\cal X})=\mbox{{\rm \O}}$. Then (\ref{eqa-3-3-1}) still holds. With an argument similar to that used in the proof of Claim~\ref{clm-4-2}, we conclude that $(X_{2,5}\cap N(c_5))\cup \{c_5,c_6\}$ is a parity star-cutset. This proves Claim~\ref{clm-5-2}. \rule{4pt}{7pt} \noindent{\bf Proof of Theorem~\ref{theo-1-3}}. Suppose to the contrary that Theorem~\ref{theo-1-3} fails. By the conclusion of \cite{WXX2022}, we choose $G$ to be a minimal heptagraph with $\chi(G)=4$. Then, $\delta(G)\ge 3$, and $G$ is not bipartite. Let $C$ be a 7-hole of $G$. By Lemma~\ref{lem-critical-H}, $G$ has no $P_3$-cutsets or parity star-cutsets. By Lemmas~\ref{theo-1-4} and \ref{theo-1-5}, $G$ induces no ${\cal P}$ or ${\cal P}'$. By Lemma~\ref{lem-unique-local}, $C$ has at least two kinds of equivalent local jumps. 
By Lemmas~\ref{theo-3-1}, \ref{theo-3-2} and \ref{theo-3-3}, $C$ cannot be of type $i$ for any $i\in\{1, 2, 3\}$. By Lemma~\ref{theo-3-4}, $C$ has no local $v$-jumps. This indicates that the only possible local jumps of $C$ must be $e$-jumps, contradicting Lemma~\ref{theo-3-5}. \rule{4pt}{7pt} \noindent{\bf Remark.} Recall that ${\cal G}_{\l}$ is the family of graphs without cycles of length at most $2\l$ and without odd holes of length at least $2\l+3$. The current authors proposed a conjecture claiming that all graphs in $\cup_{\l\ge 2} {\cal G}_{\l}$ are 3-colorable. It seems that the structures of graphs in ${\cal G}_{\l}$ have some connection with cages. For given integers $k$ and $g$, a $(k, g)$-{\em cage} is a $k$-regular graph of girth $g$ with the smallest possible number of vertices. The unique $(3, 5)$-cage is the Petersen graph, and the unique $(3, 7)$-cage is the McGee graph \cite{WM1960, WT1966}. Notice that the graph ${\cal P}'$ can be obtained from the McGee graph by deleting four disjoint groups of vertices such that each group induces a path of length 2. The graph obtained from the Petersen graph by deleting two adjacent vertices plays an important role in \cite{MCPS2022}, and the graph ${\cal P}'$ is also crucial in the proof of Theorem~\ref{theo-1-3}. Since the Balaban graph \cite{ATB1973, MMN1998} is the unique $(3, 11)$-cage with 112 vertices, perhaps one can prove that all graphs in ${\cal G}_{5}$ are 3-colorable following the idea of Chudnovsky and Seymour \cite{MCPS2022}, with much more detailed analysis. But it seems very hard to solve the 3-colorability of graphs in ${\cal G}_{4}$ along this approach, as there are eighteen $(3, 9)$-cages each of which has 58 vertices (see \cite{GERJ2013, PW1982}). 
Nelson, Plummer, Robertson, and Zha \cite{NPRZ2011} proved that the Petersen graph is the only non-bipartite cubic pentagraph which is 3-connected and internally 4-connected, and Plummer and Zha \cite{MPXZ} presented some 3-connected and internally 4-connected non-bipartite non-cubic pentagraphs. It is known that for $g\ge 5$, all $(3, g)$-cages are 3-connected and internally 4-connected (see \cite{MPB2003}). Notice that the McGee graph has 9-holes, and so is not in ${\cal G}_{3}$. It seems interesting to consider the existence of non-bipartite 3-connected, internally 4-connected graphs in ${\cal G}_{\l}$ ($\l\ge 3$). \end{document}
\begin{document} \title{Ziegler Partial Morphisms in additive exact categories} \author[Cort\'es-Izurdiaga]{Manuel Cort\'es-Izurdiaga} \address{Departamento de Matem\'aticas, Universidad de Almeria, E-04071, Almeria, Spain} \email{[email protected]} \thanks{The first author is partially supported by the Spanish Government under grants MTM2016-77445-P and MTM2017-86987-P which include FEDER funds of the EU} \author[Guil Asensio]{Pedro A. Guil Asensio} \address{Departamento de Matem\'aticas, Universidad de Murcia, Murcia, 30100, Spain} \email{[email protected]} \thanks{The second author is partially supported by the Spanish Government under grant MTM2016-77445-P which includes FEDER funds of the EU, and by Fundaci\'on S\'eneca of Murcia under grant 19880/GERM/15} \author[Kalebo\u{g}az]{Berke Kalebo\u{g}az} \address{Department of Mathematics, Hacettepe University, Ankara, Turkey} \email{[email protected]} \author[Srivastava]{Ashish K. Srivastava} \address{Department of Mathematics and Statistics, Saint Louis University, St. Louis, MO-63103, USA} \email{[email protected]} \thanks{The fourth author is partially supported by a grant from Simons Foundation (grant number 426367).} \maketitle \begin{abstract} We develop a general theory of partial morphisms in additive exact categories which extends the model theoretic notion introduced by Ziegler in the particular case of pure-exact sequences in the category of modules over a ring. We relate partial morphisms with (co-)phantom morphisms and injective approximations and study the existence of such approximations in these exact categories. \end{abstract} \section*{Introduction} \noindent We introduce and develop a general theory of partial morphisms in arbitrary additive exact categories, in the sense of Quillen. Exact categories are a natural generalization of abelian categories, and they play a quite useful role in several areas, like Representation Theory, Algebraic Geometry, Algebraic Analysis and Algebraic $K$-Theory. 
The main reason behind their usefulness is that they are applicable in many situations in which the classical theory of abelian categories does not apply, for instance, in the study of filtered objects and tilting theory. Partial morphisms were introduced by Ziegler in \cite{Ziegler} in his study of Model Theory of Modules, in order to prove the existence of pure-injective envelopes. Recall that a short exact sequence of right modules is called pure if it remains exact upon tensoring by any left module (equivalently, when it is a direct limit of splitting short exact sequences). And therefore, purity reflects all decomposition properties of modules into direct summands. Ziegler realized that pure-injective modules (i.e., those modules which are injective with respect to pure-exact sequences) also extend other types of morphisms and called such morphisms {\em partial morphisms}. These partial morphisms were central to giving a right pure version of the notion of essential monomorphisms in the category of modules. This concept was later stated in an algebraic language by Monari Martinez \cite{Monari} in terms of systems of linear equations. Namely, she gave a matrix-theoretic reformulation of it. Given a ring $R$ (not necessarily commutative), a submodule $K$ of a right $R$-module $M$ and a right $R$-module $N$, a homomorphism $f:K\rightarrow N$ is called a partial morphism from $M$ to $N$ if whenever we have a system of linear equations $$\begin{bmatrix} x_1 & \cdots & x_m \end{bmatrix} A=\begin{bmatrix} b_1 & \cdots & b_n \end{bmatrix} $$ with $A\in \mathbb M_{m\times n}(R)$ and $b_1, \ldots, b_n \in K$, which is solvable in $M$, then the system $$\begin{bmatrix} x_1 & \cdots & x_m \end{bmatrix} A=\begin{bmatrix} f(b_1) & \cdots & f(b_n) \end{bmatrix} $$ is also solvable in $N$. However, the above algebraic translation of the notion of partial morphisms does not shed much light on their role in the categorical study of purity. 
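To illustrate the above definition with an elementary example (which is ours, and not taken from \cite{Ziegler} or \cite{Monari}), take $R=\mathbb{Z}$, $M=\mathbb{Z}$, $K=2\mathbb{Z}$ and $N=\mathbb{Z}/4\mathbb{Z}$. The homomorphism $f:K\rightarrow N$ determined by $f(2)=2+4\mathbb{Z}$ is a partial morphism from $M$ to $N$: it is the restriction to $K$ of the canonical projection $\pi:\mathbb{Z}\rightarrow \mathbb{Z}/4\mathbb{Z}$, and $\pi$ carries any solution in $M$ of a system as above to a solution in $N$ of the corresponding system. In contrast, the homomorphism $g:K\rightarrow N$ determined by $g(2)=1+4\mathbb{Z}$ is not a partial morphism from $M$ to $N$: the equation $x\cdot 2=2$ is solvable in $M$, while $x\cdot 2=1+4\mathbb{Z}$ has no solution in $N$.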
In the present paper, we give a categorical definition of this concept which can be stated in any additive exact category $(\mathcal A;\mathcal E)$ (i.e., an additive category $\mathcal A$ with a distinguished class $\mathcal E$ of kernel-cokernel pairs which play the role of short exact sequences). This definition reduces to the original one introduced by Ziegler in the specific case of the pure-exact structure in $\textrm{Mod-}R$ consisting of all pure-exact sequences and it explains the importance of partial morphisms in a much more transparent way: a homomorphism $f:K\rightarrow N$ is partial with respect to the inclusion $u$ of $K$ in a module $M$ if and only if the induced morphism ${\rm Ext}^1(-,f)$ transforms $u$ into a pure monomorphism (see Theorem \ref{p:CharacterizationZieglerPartial}). As Ziegler himself observed for the particular case of modules, this notion of partial morphisms allows us to introduce the definition of {\em small} morphisms in exact categories. And it is therefore related to the existence of approximations of modules. We explain how this idea of approximation is interrelated with others used in the literature. Namely, we show, in Theorem \ref{t:InjectiveHulls}, that this idea of approximation in terms of small extensions is equivalent to the notion of monomorphic envelopes in the category of modules introduced by Enochs \cite{Enochs}, and to the classical one defined in terms of essential or pure-essential subobjects. Then we prove the existence of enough injectives in certain additive exact categories (see Theorem 4.4, which is one of the main results of this paper), and the existence of injective approximations (in the sense of small morphisms mentioned before) in certain exact structures of abelian categories (see Theorem \ref{t:ExistenceHulls}). 
As an application of our results, we are able to recover several well-known classical results such as the existence of injective hulls in Grothendieck categories, and the existence of pure-injective hulls in finitely accessible additive categories. Moreover, our theory also includes the known results about approximations relative to a class of modules \cite{GobelTrlifaj}. The key idea is that, under quite general assumptions, finding preenvelopes in an exact category with respect to a class $\mathcal X$ of objects is equivalent to showing that there exist enough $E^\mathcal{X}$-injectives, where $E^\mathcal{X}$ is the exact structure consisting of all conflations $A \rightarrow B \rightarrow C$ which are $\Hom(-,X)$-exact for every $X\in\mathcal{X}$. Applying these arguments to Theorem 4.4, we deduce Corollary~5.4, a result which recovers [17, Theorem 2.13(4)]. This is probably the most general known result on the existence of (pre-)envelopes in exact categories. The same ideas are later applied to Theorem 4.4 and Theorem 4.11 to prove our Theorem 5.6, which covers all known results of approximations relative to cotorsion pairs in Grothendieck categories. We also relate all these constructions with the recent theory of approximations of objects by ideals of morphisms introduced in \cite{FuGuilHerzogTorrecillas} (see Corollary~\ref{c:Cophantom}). In conclusion, we provide a quite general theory in which most known results of approximations of objects in exact categories are deduced as consequences of our general results, and we also explain how they are interrelated with each other. Let us briefly outline the structure of this paper. After recalling some terminology and preliminary facts, we define, in Section 2, partial morphisms with respect to an additive exact substructure $\mathcal F$ (the $\mathcal F$-partial morphisms) of an exact structure $\mathcal E$ in an additive category $\mathcal A$ (see Section 1). 
In order to do so, we first need to give a categorical characterization of partial morphisms relative to the pure-exact structure in a module category (see Theorem \ref{p:CharacterizationZieglerPartial}). This characterization is obtained in terms of pushouts and thus allows us to extend the notion of partial morphism to the wider framework of additive exact categories. Then, we study the properties of $\mathcal F$-partial morphisms and extend several of the results proved by Ziegler to this new setting. It is especially relevant that, as in the case of pure-injective modules, $\mathcal F$-partial morphisms can be used to characterize $\mathcal F$-injective objects. More precisely, we prove, in Theorem \ref{t:FInjectivePartial}, that an object $E$ in $\mathcal A$ is $\mathcal F$-injective if and only if any $\mathcal F$-partial morphism $f$ from an object $X$ to $E$ extends to a morphism $g:X \rightarrow E$. This extends the corresponding theorem for pure-injective modules proved by Ziegler \cite[Theorem 1.1, Corollary 3.3]{Ziegler}. Another advantage of our definition in terms of pushouts is that it allows us to relate partial and phantom morphisms (see \cite{FuGuilHerzogTorrecillas} for the definition and main properties of phantom morphisms). In Section 3, we introduce small subobjects using partial morphisms. Then, we can define when an inclusion $u:U \rightarrow E$, with $E$ injective, is small, which in turn is related to the notion of injective approximations in the category. Recall that an injective hull in an abelian category $\mathcal B$ is an essential inclusion $u:U \rightarrow E$ with $E$ injective, in the sense that $U \cap V \neq 0$ for each non-zero subobject $V$ of $E$. It is well known that the injective hull $u$ is an injective envelope too, in the sense that any endomorphism $f:E \rightarrow E$ such that $fu=u$ is an automorphism.
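\noindent A standard example may help to fix ideas: in the category of abelian groups, the inclusion $u:\mathbb Z \rightarrow \mathbb Q$ is essential, for if $V$ is a non-zero subgroup of $\mathbb Q$ and $0 \neq \frac{a}{b} \in V$ with $a,b$ integers, then
\begin{displaymath}
0 \neq a = b \cdot \frac{a}{b} \in V \cap \mathbb Z.
\end{displaymath}
Since $\mathbb Q$ is divisible, and hence injective, the inclusion $u$ is an injective hull (and an injective envelope) of $\mathbb Z$.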
We compare these notions of small injective extensions with the one defined in terms of partial morphisms and prove, in Theorem \ref{t:InjectiveHulls}, that for nice categories, all of them are equivalent. Our discussion of injective approximations in exact categories leads us in Section 4 to study when these approximations do exist. The solution to this problem requires answering the following two questions: \begin{enumerate}[(1)] \item Do there exist enough injectives in the category (in the sense that each object can be embedded in an injective one)? \item Assuming the category has enough injectives, can these embeddings be chosen small? \end{enumerate} \noindent In Theorem \ref{t:ExistenceInjectives}, we prove that Question 1 has a positive answer for additive exact categories satisfying a generalization of Baer's lemma. In Theorem \ref{t:ExistenceHulls}, we describe a construction of small injective approximations for exact substructures of abelian categories. We end the paper with Section 5, in which we apply our results to study the approximation by objects in exact, Grothendieck and finitely accessible additive categories. In Corollary \ref{fp-inj} we prove that every module has an fp-injective preenvelope. In Corollary \ref{Grothendieck} we prove that every object in a Grothendieck category has an injective hull. In Corollary \ref{pinj} we prove that every object in an abelian finitely accessible additive category has a pure-injective hull. \section{Preliminaries} \noindent Given a set $A$, we shall denote by $|A|$ its cardinality. Given a map $f:A \rightarrow B$ and a subset $C$ of $A$, we shall denote by $f \rest C$ the restriction of $f$ to $C$. All our categories are additive (that is, they have finite direct products and an abelian group structure on each of their hom-sets which is compatible with composition). Let us fix some notation about subobjects in a category. \begin{defn} Let $\mathcal A$ be a category and $A$ an object of $\mathcal A$.
\begin{enumerate} \item Two monomorphisms $u:U \rightarrow A$ and $v:V \rightarrow A$ are equivalent if there exists an isomorphism $w:V \rightarrow U$ such that $uw=v$. An equivalence class of monomorphisms under this equivalence relation is a subobject of $A$. Given a representative $u:U \rightarrow A$ of this equivalence class, we shall simply say that $U$ is a subobject of $A$, we shall write $U \leq A$ and the monomorphism $u$ will be called an inclusion of $U$ in $A$. \item Given two subobjects $U$ and $V$ of $A$, we shall write $U \subseteq V$ if $U \leq V$ and there exist inclusions $u:U \rightarrow A$, $v:V \rightarrow A$ and $w:U \rightarrow V$ such that $vw=u$. \end{enumerate} \end{defn} \noindent By a {\em kernel-cokernel} pair in $\mathcal A$ we mean a pair of composable morphisms \begin{displaymath} \begin{tikzcd} B \arrow{r}{i} & C \arrow{r}{p} &A \end{tikzcd} \end{displaymath} such that $i$ is a kernel of $p$ and $p$ is a cokernel of $i$. \noindent The following lemma is straightforward but very useful, so we state it without any proof. \begin{lem}\label{l:DiagramLemma} Let $\mathcal A$ be a category. Consider the following commutative diagram \begin{displaymath} \begin{tikzcd} B \arrow{r}{i} \arrow{d}{\varphi_1}& C \arrow{r}{p} \arrow{d}{\varphi_2} & A \arrow{d}{\varphi_3}\\ B' \arrow{r}{i'} & C' \arrow{r}{p'} & A' \end{tikzcd} \end{displaymath} in which $p$ is a cokernel of $i$ and $i'$ is a kernel of $p'$. Then the following assertions are equivalent: \begin{enumerate} \item There exists $\alpha \colon A \rightarrow C'$ such that $p'\alpha = \varphi_3$. \item There exists $\beta \colon C \rightarrow B'$ such that $\beta i=\varphi_1$. 
\end{enumerate} \end{lem} \noindent Given two morphisms $f:K \rightarrow M$ and $g:K \rightarrow N$ in any category $\mathcal A$, the pushout diagram of $f$ and $g$ consists of an object $P$ and morphisms $i_1:M\rightarrow P$ and $i_2:N\rightarrow P$ such that the following diagram commutes \begin{displaymath} \begin{tikzcd} K \arrow{r}{f} \arrow{d}{g} & M \arrow{d}{i_1}\\ N \arrow{r}{i_2} & P \end{tikzcd} \end{displaymath} and the triple $(P, i_1, i_2)$ is universal in the sense that whenever $(Q, j_1, j_2)$ is any other triple making the above diagram commutative, then there exists a unique morphism $\varphi: P\rightarrow Q$ such that $j_1=\varphi i_1$ and $j_2=\varphi i_2$. We recall some well-known facts about pushouts, which shall be used throughout the paper. \begin{lem}\label{l:PushoutCokernel} Let $\mathcal A$ be a category. Consider the following pushout diagram: \begin{displaymath} \begin{tikzcd} M \arrow{r}{f} \arrow{d}{g} & N \arrow{d}{\overline g}\\ L \arrow{r}{\overline f} & P \end{tikzcd} \end{displaymath} Then: \begin{enumerate} \item The morphism $\overline g$ is a split monomorphism if and only if there exists $h:L \rightarrow N$ with $hg=f$. \item If $f$ has a cokernel $c:N \rightarrow C$, then the unique morphism $c':P \rightarrow C$ satisfying $c'\overline g=c$ and $c'\overline f = 0$ is a cokernel of $\overline f$. \item If $\overline f$ has a cokernel $c'$, then $c'\overline g$ is a cokernel of $f$. \end{enumerate} \end{lem} \noindent For exact categories, we mostly rely on \cite{Buhler}, but we also use some terminology from \cite{Keller}. Let $\mathcal A$ be a category. An \textit{exact structure} on $\mathcal A$ is a family $\mathcal E$ of distinguished kernel-cokernel pairs satisfying axioms [E0] - [E2] and [E0$^{\textrm{op}}$] - [E2$^{\textrm{op}}$] from \cite{Buhler}. We shall denote the resulting exact category by $(\mathcal A;\mathcal E)$, and elements of $\mathcal E$ will be called {\it conflations}.
The kernel of a conflation is called an {\it inflation} and the cokernel of a conflation is called a {\it deflation}. An \textit{admissible subobject} of an object $A$ is a subobject $U$ of $A$ such that one (and then any) inclusion $i:U \rightarrow A$ is an inflation. The main example of an exact category is an abelian category with the exact structure formed by all kernel-cokernel pairs. We shall call this exact structure the \textit{abelian exact structure}. Let $(\mathcal A; \mathcal E)$ be an exact category. Given an object $E$ and an inflation $u:K \rightarrow A$, we say that $E$ is \textit{$u$-injective} (or injective with respect to $u$) if for each morphism $f:K \rightarrow E$, there exists $g:A \rightarrow E$ with $gu=f$. If $\mathcal H$ is a class of inflations, we say that the object $E$ is $\mathcal H$-\textit{injective} if it is $u$-injective for each inflation $u \in \mathcal H$. If $X$ is another object, we say that $E$ is $X$-injective if it is injective with respect to each inflation $u:K \rightarrow X$. Finally, we say that $E$ is injective if it is injective with respect to each inflation. This is equivalent to the functor $\Hom_\mathcal{A}(-,E)$, from $\mathcal A$ to the category $\mathbf{Ab}$ of abelian groups, carrying inflations to epimorphisms. We shall say that $\mathcal A$ has enough injective objects if for each object $A$ in $\mathcal A$, there exists an inflation $A \rightarrow E$ with $E$ an injective object in $\mathcal A$. The notions about projectivity in exact categories are defined dually. We shall use the following result about relative injective objects, which is well known for the abelian exact structure of an abelian category, and for the pure-exact structure in module categories. \begin{lem}\label{l:AInjective} Let $(\mathcal A; \mathcal E)$ be an exact category. Let $M$ be an object of $\mathcal A$ and \begin{displaymath} \begin{tikzcd} A \arrow{r}{i} & B \arrow{r}{p} & C \end{tikzcd} \end{displaymath} be a conflation.
If $M$ is $B$-injective, then $M$ is both $A$-injective and $C$-injective. \end{lem} \begin{proof} Given an inflation $u:K \rightarrow A$ and a morphism $f:K \rightarrow M$, the composite $iu$ is an inflation, so there exists $g:B \rightarrow M$ with $giu= f$. Then $M$ is $A$-injective. In order to see that $M$ is $C$-injective, take an inflation $u:K \rightarrow C$ and a morphism $f:K \rightarrow M$. Taking the pullback of $u$ along $p$, we get the following commutative diagram \begin{displaymath} \begin{tikzcd} P \arrow{r}{\overline p} \arrow{d}{\overline u} & K \arrow{d}{u}\\ B \arrow{r}{p} & C \\ \end{tikzcd} \end{displaymath} in which $\overline u$ is an inflation by \cite[Proposition 2.15]{Buhler}. Let $\overline i:\overline A \rightarrow P$ be a kernel of $\overline p$. Using the universal property of the pullback, we can construct a commutative diagram with a conflation in each row, \begin{displaymath} \begin{tikzcd} \overline A \arrow{r}{\overline i} \arrow{d}{w} & P \arrow{r}{\overline p} \arrow{d}{\overline u} & K \arrow{d}{u}\\ A \arrow{r}{i} & B \arrow{r}{p} & C \end{tikzcd}, \end{displaymath} with $w$ an isomorphism. Now, using that $M$ is $B$-injective, there exists $g':B \rightarrow M$ with $g'\overline u=f \overline p$. Notice that $g'iw=g' \overline u \overline i = f \overline p \overline i=0$ and, since $w$ is an isomorphism, $g'i=0$, so that there exists $g:C \rightarrow M$ with $gp=g'$. Then $gu\overline p=f \overline p$ and, since $\overline p$ is an epimorphism, $gu=f$ as well. Then $M$ is $C$-injective. \end{proof} Let $(\mathcal A; \mathcal E)$ be an exact category. Given two objects $A,B$ in $\mathcal A$, we shall denote by $\Ext(A,B)$ the abelian group whose elements are the isomorphism classes of all conflations of the form \begin{displaymath} \begin{tikzcd} B \arrow{r}{i} & C \arrow{r}{p} &A \end{tikzcd} \end{displaymath} equipped with the {\em Baer sum} operation.
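\noindent For instance, for the abelian exact structure on the category of abelian groups, a standard computation gives $\Ext(\mathbb Z/2\mathbb Z,\mathbb Z) \cong \mathbb Z/2\mathbb Z$: the zero element is the class of the split conflation, while the non-zero element is represented by the conflation
\begin{displaymath}
\begin{tikzcd}
\mathbb Z \arrow{r}{2} & \mathbb Z \arrow{r} & \mathbb Z/2\mathbb Z
\end{tikzcd}
\end{displaymath}
whose Baer sum with itself is again the class of the split conflation.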
Given any morphism $g\colon B \rightarrow X$, we can define a morphism $\Ext(A,g):\Ext(A,B) \rightarrow \Ext(A,X)$ as follows: for any conflation \begin{displaymath} \begin{tikzcd} \eta: & B \arrow{r}{i} & C \arrow{r}{p} &A \end{tikzcd} \end{displaymath} we take the pushout of $i$ along $g$ to get a commutative diagram \begin{displaymath} \begin{tikzcd} \eta: & B \arrow{r}{i} \arrow{d}{g} & C \arrow{d}{\overline g} \arrow{r}{p} & A \arrow[equal]{d}\\ \eta': & X \arrow{r}{\overline i} & P \arrow{r}{\overline p} & A \end{tikzcd} \end{displaymath} in which $\eta'$ is a conflation by \cite[Proposition 2.12]{Buhler}. Then define $\Ext(A,g)(\eta)=\eta'$. Similarly, we can define, using pullbacks, $\Ext(f,B):\Ext(A,B) \rightarrow \Ext(X,B)$ for each morphism $f:X \rightarrow A$. If we fix the objects $A$, $B$ and $X$, the assignment $g \mapsto \Ext(A,g)$ defines a map from $\Hom_{\mathcal A}(B,X)$ to $\Hom_{\mathbb Z}\big(\Ext(A,B),\Ext(A,X)\big)$ which is in fact a morphism of abelian groups. Similarly, we obtain a morphism of abelian groups $\Ext(-,B)$ from $\Hom_{\mathcal A}(X,A)$ to $\Hom_{\mathbb Z}\big(\Ext(A,B),\Ext(X,B)\big)$. An \textit{exact substructure} $\mathcal F$ of $\mathcal E$ is an exact structure on $\mathcal A$ such that each conflation in $\mathcal F$ (which we shall call $\mathcal F$-conflations) is a conflation in $\mathcal E$. Inflations, deflations, admissible subobjects and injective objects with respect to $\mathcal F$ will be called $\mathcal F$-inflations, $\mathcal F$-deflations, $\mathcal F$-admissible subobjects and $\mathcal F$-injective objects, respectively. Moreover, if $(\mathcal A; \mathcal F)$ has enough injective objects, we shall say that $\mathcal A$ has enough $\mathcal F$-injective objects.
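\noindent Two extreme cases of exact substructures are worth keeping in mind. On the one hand, the conflations isomorphic to
\begin{displaymath}
\begin{tikzcd}
A \arrow{r} & A \oplus C \arrow{r} & C
\end{tikzcd}
\end{displaymath}
(with the canonical inclusion and projection) form the smallest exact substructure of $\mathcal E$, and every object of $\mathcal A$ is injective with respect to it, since any morphism $f:A \rightarrow E$ extends to $A \oplus C$ through the projection onto $A$. On the other hand, for $\mathcal F = \mathcal E$ the $\mathcal F$-injective objects are precisely the injective objects of $(\mathcal A;\mathcal E)$.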
Given a class $\mathcal X$ of objects, we shall denote by $\mathcal E_{\mathcal X}$ the class of all $\Hom_{\mathcal A}(\mathcal X,-)$-exact conflations, i.e., those conflations \begin{displaymath} \begin{tikzcd} A \arrow{r} & B \arrow{r} & C \end{tikzcd} \end{displaymath} such that \begin{displaymath} \begin{tikzcd} \Hom_{\mathcal A}(X,A) \arrow{r} & \Hom_{\mathcal A}(X,B) \arrow{r} & \Hom_{\mathcal A}(X,C) \end{tikzcd} \end{displaymath} is a short exact sequence in the category of abelian groups for each $X \in \mathcal X$. Dually, we define $\mathcal E^{\mathcal X}$ to be the class of all $\Hom_{\mathcal A}(-,\mathcal X)$-exact conflations, that is, those conflations \begin{displaymath} \begin{tikzcd} A \arrow{r} & B \arrow{r} & C \end{tikzcd} \end{displaymath} such that \begin{displaymath} \begin{tikzcd} \Hom_{\mathcal A}(C,X) \arrow{r} & \Hom_{\mathcal A}(B,X) \arrow{r} & \Hom_{\mathcal A}(A,X) \end{tikzcd} \end{displaymath} is a short exact sequence in the category of abelian groups for each $X \in \mathcal X$. Both $\mathcal E_{\mathcal X}$ and $\mathcal E^{\mathcal X}$ are additive exact substructures of $\mathcal E$ \cite[Exercise 5.6]{Buhler}. Using Lemma \ref{l:DiagramLemma}, we get a description of $\mathcal E_{\mathcal X}$-conflations similar to that of pure-exact sequences in module categories (see \cite[34.5]{Wisbauer}). The result can be easily dualized for $\mathcal E^{\mathcal X}$-conflations. \begin{lem} Let $(\mathcal A; \mathcal E)$ be an exact category, $\mathcal X$ be a class of objects and \begin{displaymath} \begin{tikzcd} \eta: & A \arrow{r}{i} & B \arrow{r}{j} & C \end{tikzcd} \end{displaymath} be a conflation.
\begin{enumerate} \item If $\eta \in \mathcal E_{\mathcal X}$ then for each morphism $f \colon M \rightarrow N$ with $\Coker f \in \mathcal X$ and commutative diagram \begin{displaymath} \begin{tikzcd} M \arrow{r}{f} \arrow{d}{\varphi_1} & N \arrow{d}{\varphi_2}\\ A \arrow{r}{i} & B \end{tikzcd} \end{displaymath} there exists $\beta:N \rightarrow A$ such that $\beta f = \varphi_1$. \item If there exist enough $\mathcal E_{\mathcal X}$-projective objects and $\eta$ satisfies the condition in (1), then $\eta \in \mathcal E_{\mathcal X}$. \item If $\eta \in \mathcal E^{\mathcal X}$ then for each morphism $f:M \rightarrow N$ with $\Ker f \in \mathcal X$ and commutative diagram \begin{displaymath} \begin{tikzcd} B \arrow{r}{j} \arrow{d}{\psi_1}& C \arrow{d}{\psi_2}\\ M \arrow{r}{f} & N \end{tikzcd} \end{displaymath} there exists $\alpha:C \rightarrow M$ with $f\alpha = \psi_2$. \item If there exist enough $\mathcal E^{\mathcal X}$-injective objects and $\eta$ satisfies the condition in (3), then $\eta \in \mathcal E^{\mathcal X}$. \end{enumerate} \end{lem} \begin{proof} (1) This follows from Lemma \ref{l:DiagramLemma}. (2) Take $X \in \mathcal X$ and a morphism $\varphi_3:X \rightarrow C$. Let \begin{displaymath} \begin{tikzcd} K \arrow{r}{u} & P \arrow{r}{q} & X \end{tikzcd} \end{displaymath} be an $\mathcal E_\mathcal{X}$-conflation with $P$ an $\mathcal E_{\mathcal X}$-projective object. Using the projectivity of $P$, we can construct a commutative diagram \begin{displaymath} \begin{tikzcd} K \arrow{r}{u} \arrow{d}{\varphi_1}& P \arrow{r}{q} \arrow{d}{\varphi_2} & X \arrow{d}{\varphi_3}\\ A \arrow{r}{i} & B \arrow{r}{j} & C \end{tikzcd} \end{displaymath} Then the result follows from (1) and Lemma \ref{l:DiagramLemma}. (3) and (4) are proved dually.
\end{proof} Given a class of objects $\mathcal X$ in $\mathcal A$, we define the right and left perpendicular classes to $\mathcal X$, $\mathcal X^\perp$ and ${^\perp}{\mathcal X}$, by \begin{displaymath} \mathcal X^{\perp} = \{Y \in \mathcal A \mid \Ext(X,Y)=0, \forall X \in \mathcal X\} \end{displaymath} and \begin{displaymath} {^\perp}{\mathcal X} = \{Y \in \mathcal A \mid \Ext(Y,X)=0, \forall X \in \mathcal X\} \end{displaymath} respectively. A cotorsion pair in $\mathcal A$ is a pair of classes $(\mathcal B, \mathcal C)$ of objects of $\mathcal A$ such that $\mathcal B = {^\perp}{\mathcal C}$ and $\mathcal C = \mathcal B^{\perp}$. The cotorsion pair is said to be complete if for each object $A$ of $\mathcal A$ there exist conflations \begin{displaymath} \begin{tikzcd} A \arrow{r} & C_1 \arrow{r} & B_1 \end{tikzcd} \end{displaymath} and \begin{displaymath} \begin{tikzcd} C_2 \arrow{r} & B_2 \arrow{r} & A \end{tikzcd} \end{displaymath} with $B_1, B_2 \in \mathcal B$ and $C_1, C_2 \in \mathcal C$. All rings in this paper will be associative with unit (except those in Section 5.3) and all modules will be right modules. Let $R$ be a ring. As in any abelian category, we have the abelian exact structure $\mathcal E$ in $\textrm{Mod-}R$ consisting of all kernel-cokernel pairs. If $\mathcal P$ is the class of all finitely presented modules, the exact structure $\mathcal E_{\mathcal P}$ consists of all pure conflations and will be called the \textit{pure-exact structure} on $\textrm{Mod-}R$. Conflations in the pure-exact structure can be characterized in terms of systems of equations \cite[34.5]{Wisbauer}. Given a module $M$, recall that a \textit{system of linear equations over $M$} is a system of equations \begin{displaymath} \sum_{i=1}^n X_ir_{ij} = a_j \quad j \in \{1, \ldots, m\} \end{displaymath} with $r_{ij} \in R$ and $a_j\in M$ for each $i \in \{1, \ldots, n\}$ and $j \in \{1, \ldots, m\}$.
Then a conflation in $\textrm{Mod-}R$, \begin{displaymath} \begin{tikzcd} K \arrow{r}{f} & M \arrow{r}{g} & L \end{tikzcd} \end{displaymath} is pure if and only if any system of linear equations over $\Img f$ that has a solution in $M$ has a solution in $\Img f$. We shall denote by $\Inj$ the class of all injective modules and by $\PInj$ the class of all pure-injective modules (that is, the class of all injective objects in the exact category $\textrm{Mod-}R$ with the pure-exact structure). \section{Partial Morphisms} \label{sec:ginj-peri-modul} \noindent The initial inspiration for our work comes from the classical notion of partial morphism introduced by Ziegler in \cite{Ziegler} in the category of right modules over a ring. \begin{defn} \label{d:PartialZiegler} Let $R$ be a ring and $M, N$ be right $R$-modules. \begin{enumerate} \item A partial morphism from $M$ to $N$ is a morphism $f \colon K \rightarrow N$, where $K$ is a submodule of $M$, such that for any system of linear equations over $K$, \[\sum_{i=1}^nX_ir_{ij}=k_j \quad j \in \{1, \ldots, m\},\] if the system has a solution in $M$, then the system \[\sum_{i=1}^nX_ir_{ij}=f(k_j) \quad j \in \{1, \ldots, m\}\] has a solution in $N$ as well. We shall call the submodule $K$ the domain of $f$ and we shall denote it by $\dom f$. \item A partial morphism from $M$ to $N$ is called a partial isomorphism if each system of linear equations over $\dom f$, \[\sum_{i=1}^nX_ir_{ij}=k_j \quad j \in \{1, \ldots, m\},\] has a solution in $M$ if and only if the system of linear equations \[\sum_{i=1}^nX_ir_{ij}=f(k_j) \quad j \in \{1, \ldots, m\}\] has a solution in $N$. \end{enumerate} \end{defn} \noindent The following characterization relates partial morphisms to the pure-exact structure in categories of modules. It will allow us to define partial morphisms in any exact category. Let us recall the construction of pushouts in module categories.
Given a ring $R$ and two morphisms $f:K \rightarrow M$ and $g:K \rightarrow N$ in $\textrm{Mod-} R$, the pushout of $g$ along $f$ is given by the commutative diagram, \begin{displaymath} \begin{tikzcd} K \arrow{r}{f} \arrow{d}{g} & M \arrow{d}{\overline g}\\ N \arrow{r}{\overline f} & P \end{tikzcd} \end{displaymath} in which the module $P$ can be taken to be $\frac{N \oplus M}{U}$, where $U=\{(g(k),f(k)):k \in K\}$ and, if we denote by $\overline{(n,m)}$ the corresponding element in $P$ for each $n \in N$ and $m \in M$, then $\overline{f}(n) = \overline{(n,0)}$ and $\overline{g}(m) = \overline{(0,-m)}$. \begin{theorem}\label{p:CharacterizationZieglerPartial} Let $R$ be a ring. Let $M$ and $N$ be modules, $K \leq M$ a submodule and $f:K \rightarrow N$ a morphism. The following assertions are equivalent: \begin{enumerate} \item $f$ is a partial morphism (resp. isomorphism) from $M$ to $N$ with $\dom f = K$. \item In the pushout diagram \begin{displaymath} \begin{tikzcd} K \arrow[hook]{r}{i} \arrow{d}{f}& M \arrow{d}{\overline f}\\ N \arrow{r}{\overline i} & P \end{tikzcd} \end{displaymath} $\overline i$ (resp. $\overline i$ and $\overline f$) is a pure monomorphism (resp. are pure monomorphisms). \end{enumerate} \end{theorem} \begin{proof} (1) $\Rightarrow$ (2). First assume that $f$ is a partial morphism and let us prove that $\Img \overline i = \{\overline{(u,0)}:u \in N\}$ is a pure submodule of $P$. Let \begin{equation} \label{eq:1} \sum_{i=1}^nX_ir_{ij}=\overline{(s_j,0)} \quad j \in \{1, \ldots, m\} \end{equation} be a system of linear equations over $\Img \overline i$ which has a solution in $P$. Then there exist $u_1, \ldots, u_n \in N$ and $v_1, \ldots, v_n\in M$ such that $\sum_{i=1}^n \overline{(u_i,v_i)}r_{ij}=\overline{(s_j,0)}$ for each $j \in \{1, \ldots, m\}$. Then there exist $k_1, \ldots, k_m \in K$ such that $\sum_{i=1}^nu_ir_{ij}-s_j = f(k_j)$ and $\sum_{i=1}^nv_ir_{ij}=k_j$ for each $j \in \{1, \ldots, m\}$. 
This last equality says that the system \[\sum_{i=1}^nX_ir_{ij}=k_j \quad j \in \{1, \ldots, m\}\] has a solution in $M$ so that, as $f$ is a partial morphism, the system \[\sum_{i=1}^nX_ir_{ij}=f(k_j) \quad j \in \{1, \ldots, m\}\] has a solution, $u'_1, \ldots, u'_n$, in $N$. Then $\overline{(u_1-u'_1,0)}, \ldots, \overline{(u_n-u'_n,0)}$ is a solution of (\ref{eq:1}) in $\Img \overline i$. This implies that $\Img \overline i$ is a pure submodule of $P$ and $\overline i$ is a pure monomorphism. Now suppose that $f$ is a partial isomorphism and let us prove that $\Img \overline{f} = \{\overline{(0,v)}: v \in M\}$ is a pure submodule of $P$. Let \begin{equation} \label{eq:2} \sum_{i=1}^nX_ir_{ij}=\overline{(0,s_j)} \quad j \in \{1, \ldots, m\} \end{equation} be a system of linear equations over $\Img \overline f$ which has a solution in $P$. Then there exist $u_1, \ldots, u_n \in N$ and $v_1, \ldots, v_n \in M$ such that $\sum_{i=1}^n \overline{(u_i,v_i)}r_{ij}=\overline{(0,s_j)}$ for each $j \in \{1, \ldots, m\}$. This implies that there exist $k_1, \ldots, k_m \in K$ such that $\sum_{i=1}^nu_ir_{ij} = f(k_j)$ and $\sum_{i=1}^nv_ir_{ij}-s_j=k_j$ for each $j \in \{1, \ldots, m\}$. The first identity says that the system \[\sum_{i=1}^nX_ir_{ij}=f(k_j) \quad j \in \{1, \ldots, m\}\] has a solution in $N$. Using that $f$ is a partial isomorphism, the system \[\sum_{i=1}^nX_ir_{ij}=k_j \quad j \in \{1, \ldots, m\}\] has a solution in $M$, say $v'_1, \ldots, v'_n$. Then $\overline{(0,v_1-v'_1)}, \ldots, \overline{(0,v_n-v'_n)}$ is a solution of (\ref{eq:2}) in $\Img \overline f$. This implies that $\Img \overline f$ is a pure submodule of $P$ and $\overline f$ is a pure monomorphism. (2) $\Rightarrow$ (1). First of all assume that $\overline i$ is a pure monomorphism and let \[\sum_{i=1}^nX_ir_{ij}=k_j \quad j \in \{1, \ldots, m\}\] be a system of linear equations over $K$ which has a solution in $M$. 
Then the system over $\Img \overline i$, \[\sum_{i=1}^nX_ir_{ij}=\overline i f(k_j) \quad j \in \{1, \ldots, m\}\] has a solution in $P$ and, using that $\overline i$ is pure, it has a solution in $\Img \overline i$. Since $\overline i$ is monic, this implies that the system \[\sum_{i=1}^nX_ir_{ij}=f(k_j) \quad j \in \{1, \ldots, m\}\] has a solution in $N$. Thus, $f$ is a partial morphism. Now assume that $\overline f$ is a pure monomorphism too, and let \[\sum_{i=1}^nX_ir_{ij}=k_j \quad j \in \{1, \ldots, m\}\] be a system of linear equations over $K$ such that \[\sum_{i=1}^nX_ir_{ij}=f(k_j) \quad j \in \{1, \ldots, m\}\] has a solution in $N$. Then the system \[\sum_{i=1}^nX_ir_{ij}=\overline f i( k_j) \quad j \in \{1, \ldots, m\}\] has a solution in $P$ and, as $\overline f$ is a pure monomorphism, it has a solution in $\Img \overline f$. But, as $\overline f$ is monic, this implies that the system \[\sum_{i=1}^nX_ir_{ij}=k_j \quad j \in \{1, \ldots, m\}\] has a solution in $M$. Thus, $f$ is a partial isomorphism. \end{proof} With this characterization we can extend the notion of partial morphism to any exact category. For the rest of the paper, we fix an exact category $(\mathcal A;\mathcal E)$ and an additive exact substructure $\mathcal F$ of $\mathcal E$. \begin{defn}\label{d:Partial} Let $X$ and $Y$ be objects of $\mathcal A$. An $\mathcal F$-partial morphism (resp. $\mathcal F$-partial isomorphism) $f$ from $X$ to $Y$ is a morphism $f:U \rightarrow Y$, where $U$ is an admissible subobject of $X$ with inclusion $u:U \rightarrow X$, such that in the pushout of $f$ along $u$, \begin{displaymath} \begin{tikzcd} U \arrow{r}{u} \arrow{d}{f} & X \arrow{d}{\overline{f}}\\ Y \arrow{r}{\overline u} & P \end{tikzcd} \end{displaymath} $\overline u$ is an $\mathcal F$-inflation (resp. $\overline u$ and $\overline f$ are $\mathcal F$-inflations). We shall call the subobject $U$ the domain of $f$ and we shall denote it by $\dom f$. 
\end{defn} Sometimes we shall speak about partial morphisms with respect to $\mathcal F$ instead of $\mathcal F$-partial morphisms. Note that the definition of $\mathcal F$-partial morphism does not depend on the selected inclusion $u$ of $U$ since, following the notation of the definition, if $v:V \rightarrow X$ is a monomorphism equivalent to $u:U \rightarrow X$ and $w:V \rightarrow U$ is an isomorphism such that $uw=v$, then $f$ is an $\mathcal F$-partial morphism (resp. isomorphism) if and only if $fw$ is an $\mathcal F$-partial morphism (resp. isomorphism). \begin{rem} \rm In \cite[Definition 28]{AdamekHerrlichStrecker} another definition of partial morphism is given. For a fixed class $\mathcal M$ of morphisms in a category $\mathcal C$, an $\mathcal M$-partial morphism from $A$ to $B$ is a morphism $f:C \rightarrow B$ defined on an object $C$ for which there exists a morphism $m:C \rightarrow A$ in $\mathcal M$. We would like to emphasize here that this definition has nothing to do with ours, which is inspired by Ziegler's partial morphisms. \end{rem} Now we obtain some basic properties of partial morphisms: \begin{prop}\label{p:PropertiesPartialMorphisms} Let $X$, $Y$, $Z$ be objects of $\mathcal A$ and $U$ an admissible subobject of $X$ with inclusion $u:U \rightarrow X$. \begin{enumerate} \item Suppose that $u$ is an $\mathcal F$-inflation. Then any morphism $f:U \rightarrow Y$ is an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f = U$. Moreover, a morphism $f:U \rightarrow Y$ is an $\mathcal F$-partial isomorphism from $X$ to $Y$ if and only if it is an $\mathcal F$-inflation. \item If $f:U \rightarrow Y$ is a morphism that has an extension to $X$, then $f$ is an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f = U$.
\item If $f:U \rightarrow Y$ is a morphism, then $f$ defines an $\mathcal F$-partial isomorphism from $X$ to $Y$ with $\dom f = U$ if and only if $f$ is an inflation, $f$ is an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f = U$ and $u$ is an $\mathcal F$-partial morphism from $Y$ to $X$ with domain the subobject $U$ of $Y$ determined by the monomorphism $f$. \item Let $f$ be an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f = U$. Then: \begin{enumerate} \item If there exists $h:Y \rightarrow X$ such that $hf=u$ then $f$ is an $\mathcal F$-partial isomorphism. \item The converse is true if $X$ is $\mathcal F$-injective. \end{enumerate} \item Let \begin{displaymath} \begin{tikzcd} \eta: & U \arrow{r}{u} & X \arrow{r}{p} &A \end{tikzcd} \end{displaymath} be a conflation whose kernel is $u$. Then a morphism $f:U \rightarrow Y$ defines an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f=U$ if and only if $\Ext(A,f)(\eta) \in \mathcal F$. \item If $f$ is an $\mathcal F$-partial morphism (resp. $\mathcal F$-partial isomorphism) from $X$ to $Y$ and $g$ is any morphism (resp. $\mathcal F$-inflation) from $Y$ to $Z$, then $gf$ is an $\mathcal F$-partial morphism (resp. $\mathcal F$-partial isomorphism) from $X$ to $Z$ with $\dom gf = U$. \item If $f$ and $g$ are $\mathcal F$-partial morphisms from $X$ to $Y$ with $\dom f = \dom g = U$, then $f+g$ is an $\mathcal F$-partial morphism from $X$ to $Y$. \item If $f$ is an $\mathcal F$-partial morphism (resp. $\mathcal F$-partial isomorphism) from $X$ to $Y$ with $\dom f = U$, and $X$ is an $\mathcal F$-admissible subobject of $Z$ with inclusion $v$, then $f$ is an $\mathcal F$-partial morphism (resp. $\mathcal F$-partial isomorphism) from $Z$ to $Y$ with domain the subobject $U$ of $Z$ determined by $vu$. \end{enumerate} \end{prop} \begin{proof} (1) The pushout of an $\mathcal F$-inflation along any morphism is again an $\mathcal F$-inflation, so any morphism $f:U \rightarrow Y$ is $\mathcal F$-partial.
Moreover, as a consequence of the obscure axiom \cite[Proposition 2.16]{Buhler}, $f$ is an $\mathcal F$-partial isomorphism if and only if it is an $\mathcal F$-inflation. (2) Let $g:X \rightarrow Y$ be an extension of $f$ and consider the pushout of $f$ along $u$: \begin{displaymath} \begin{tikzcd} U \arrow{r}{u} \arrow{d}{f} & X \arrow{d}{f_2}\\ Y \arrow{r}{f_1} & Q \end{tikzcd} \end{displaymath} Since the identity of $Y$ and $g:X\rightarrow Y$ satisfy $1_Yf=gu$, there exists $h:Q \rightarrow Y$ such that $hf_1=1_Y$ and $hf_2=g$. Since $f_1$, being an inflation, has a cokernel, the obscure axiom \cite[Proposition 2.16]{Buhler} says that $f_1$ is an $\mathcal F$-inflation. Thus, $f$ is $\mathcal F$-partial. (3) Note that $f$ is an inflation by the obscure axiom \cite[Proposition 2.16]{Buhler} and Lemma \ref{l:PushoutCokernel}. The rest of the assertion is trivial. (4) Consider the pushout of $f$ and $u$ \begin{equation*} \begin{tikzcd} U \arrow{r}{u} \arrow{d}{f} & X \arrow{d}{\overline f}\\ Y \arrow{r}{\overline u} & P\\ \end{tikzcd} \end{equation*} If there exists $h:Y \rightarrow X$ with $hf=u$ then, by Lemma \ref{l:PushoutCokernel}, $\overline f$ is a split monomorphism and, in particular, an $\mathcal F$-inflation. Thus $f$ is an $\mathcal F$-partial isomorphism. If $X$ is $\mathcal F$-injective and $f$ is an $\mathcal F$-partial isomorphism, then $\overline f$ is actually a split monomorphism. Then there exists $h:Y \rightarrow X$ with $hf=u$ by Lemma \ref{l:PushoutCokernel}. (5) This follows from the definition of $\Ext(A,f)$. (6) First assume that $f$ is an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f = U$.
We get the following commutative diagram, \begin{equation} \label{eq:3} \begin{tikzcd} \dom f \arrow{r}{u} \arrow{d}{f} & X \arrow{d}{\overline f}\\ Y \arrow{r}{\overline u} \arrow{d}{g} & P \arrow{d}{\overline g}\\ Z \arrow{r}{\overline v} & Q \end{tikzcd} \end{equation} by considering the pushout of $f$ along $u$ and of $g$ along $\overline u$. Then the outer diagram is a pushout and $\overline v$ is an $\mathcal F$-inflation, as $f$ is $\mathcal F$-partial. This means that $gf$ is an $\mathcal F$-partial morphism from $X$ to $Z$ with $\dom gf = \dom f$. If, in addition, $g$ is an $\mathcal F$-inflation and $f$ is an $\mathcal F$-partial isomorphism from $X$ to $Y$, then in diagram (\ref{eq:3}) both $\overline f$ and $\overline g$ are $\mathcal F$-inflations, so that $\overline g \overline f$ is an $\mathcal F$-inflation too. Consequently, $gf$ is an $\mathcal F$-partial isomorphism from $X$ to $Z$. (7) Let \begin{displaymath} \begin{tikzcd} \eta: & U \arrow{r}{u} & X \arrow{r}{p} &A \end{tikzcd} \end{displaymath} be a conflation whose kernel is $u$. Then, since $\Ext(A,-)$ defines a morphism of abelian groups, $\Ext(A,f+g)(\eta) = \Ext(A,f)(\eta)+\Ext(A,g)(\eta)$. Now, using that $\mathcal F(A,Y)$, the subset of $\Ext(A,Y)$ consisting of the classes of $\mathcal F$-conflations, is a subgroup of $\Ext(A,Y)$, we deduce that $\Ext(A,f+g)(\eta) \in \mathcal F$. By (5), $f+g$ is an $\mathcal F$-partial morphism. (8) Let $v:X \rightarrow Z$ be the $\mathcal F$-inflation given by the inclusion of $X$ in $Z$. We can construct the following commutative diagram \begin{displaymath} \begin{tikzcd} \dom f \arrow{r}{u} \arrow{d}{f} & X \arrow{r}{v} \arrow{d}{\overline f} & Z \arrow{d}{\overline g}\\ Y \arrow{r}{\overline u} & P \arrow{r}{\overline v} & Q \end{tikzcd} \end{displaymath} by considering the pushout of $f$ along $u$ and of $\overline f$ along $v$. Then the outer diagram is a pushout and both $\overline u$ and $\overline v$ are $\mathcal F$-inflations.
Consequently $\overline v\circ \overline u$ is an $\mathcal F$-inflation, which means that $f$ is $\mathcal F$-partial from $Z$ to $Y$ with dominion the subobject $U$ of $Z$ determined by $vu$. If, in addition, $f$ is an $\mathcal F$-partial isomorphism from $X$ to $Y$, then both $\overline f$ and $\overline g$ are $\mathcal F$-inflations ($\overline g$ being a pushout of $\overline f$), so that $f$ is an $\mathcal F$-partial isomorphism from $Z$ to $Y$. \end{proof} \begin{expls}\label{e:PartialMorphisms} \rm We give below some examples of partial morphisms and partial isomorphisms. \begin{enumerate} \item Let $X$ and $Y$ be objects in $\mathcal A$ and $f:X \rightarrow Y$ be a morphism. Then, by Proposition \ref{p:PropertiesPartialMorphisms}(1), $f$ is an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f=X$. Moreover, $f$ is an $\mathcal F$-partial isomorphism with $\dom f = X$ if and only if it is an $\mathcal F$-inflation. \item Let $R$ be a ring. By Proposition \ref{p:CharacterizationZieglerPartial}, the partial morphisms with respect to the pure-exact structure in the sense of Definition \ref{d:Partial} coincide with those introduced by Ziegler (Definition \ref{d:PartialZiegler}). \end{enumerate} \end{expls} \noindent Phantom morphisms, which have their origin in homotopy theory \cite{MacGibbon}, were introduced by Gnacadja \cite{Gnacadja} in the category of modules over a finite group ring, and considered by Herzog for a general module category in \cite{Herzog}. In \cite{FuGuilHerzogTorrecillas}, phantom morphisms with respect to the exact substructure $\mathcal F$ have been defined, together with the dual notion, that of cophantom morphisms. A morphism $f:B \rightarrow Y$ is called $\mathcal F$-cophantom if the pushout of any conflation (beginning in $B$) along $f$ gives a conflation that belongs to $\mathcal F$ (equivalently, if $\Ext(A,f)(\eta) \in \mathcal F$ for each conflation of the form $\eta: B \rightarrow C \rightarrow A$).
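The defining condition just stated can be displayed in the diagrammatic conventions used above; the labels $\eta$, $f\eta$, $i$, $p$ and $P$ are ours, introduced only for this restatement.

```latex
% The F-cophantom condition: for every conflation \eta starting at B,
% the pushout f\eta of \eta along f is a conflation and belongs to F.
\begin{displaymath}
  \begin{tikzcd}
    \eta:  & B \arrow{r}{i} \arrow{d}{f} & C \arrow{r}{p} \arrow{d} & A \arrow[equal]{d}\\
    f\eta: & Y \arrow{r}{\overline i} & P \arrow{r}{\overline p} & A
  \end{tikzcd}
\end{displaymath}
```

Thus $f$ is $\mathcal F$-cophantom precisely when $f\eta \in \mathcal F$ for every such conflation $\eta$.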
With the preceding result, it is easy to characterize $\mathcal F$-cophantom morphisms in terms of $\mathcal F$-partial morphisms. \begin{cor}\label{c:Cophantom} Let $f:B \rightarrow Y$ be a morphism in $\mathcal A$. Then $f$ is an $\mathcal F$-cophantom morphism if and only if for any admissible inclusion $u:B \rightarrow X$, $f$ is an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f = B$. \end{cor} \noindent In \cite{Ziegler} (see also \cite[Theorem 1.1]{Monari}) Ziegler characterized pure-injective modules in terms of partial morphisms with respect to the pure-exact structure. We proceed to extend this result to injective objects relative to the exact structure $\mathcal F$. \begin{theorem}\label{t:FInjectivePartial} An object $E$ is $\mathcal F$-injective if and only if any $\mathcal F$-partial morphism $f$ from an object $X$ to $E$ extends to a morphism $g:X \rightarrow E$. \end{theorem} \begin{proof} If $E$ is $\mathcal F$-injective and $f$ is an $\mathcal F$-partial morphism from an object $X$ to $E$, we can consider the following pushout \begin{displaymath} \begin{tikzcd} \dom f \arrow{r}{v} \arrow{d}{f} & X \arrow{d}{\overline f}\\ E \arrow{r}{\overline v} & P \end{tikzcd} \end{displaymath} Since $E$ is $\mathcal F$-injective and $\overline v$ is an $\mathcal F$-inflation, there exists $w:P \rightarrow E$ with $w \overline v=1_E$. Then $w\overline f$ is an extension of $f$ to $X$. Conversely, if $v:V \rightarrow X$ is an $\mathcal F$-inflation and $f:V \rightarrow E$ is any morphism then, by Proposition \ref{p:PropertiesPartialMorphisms}, $f$ is an $\mathcal F$-partial morphism from $X$ to $E$. By hypothesis there exists $w:X \rightarrow E$ such that $wv=f$. Then $E$ is $\mathcal F$-injective. \end{proof} As an application of the preceding theorem we can characterize when an object belongs to the right-hand class of a cotorsion pair.
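It may help to fix a concrete instance for the next corollary; the following example is ours and relies on the completeness of the flat cotorsion pair (the flat cover theorem of Bican, El Bashir and Enochs), which is not discussed in the text.

```latex
% A standard complete cotorsion pair in Mod-R to which the corollary applies.
\begin{expls} \rm
Let $R$ be a ring, $\mathcal B$ the class of flat right $R$-modules and
$\mathcal C$ the class of cotorsion right $R$-modules. Then
$(\mathcal B,\mathcal C)$ is a complete cotorsion pair in $\textrm{Mod-}R$,
so a module $A$ is cotorsion if and only if every
$\mathcal E^{\mathcal C}$-partial morphism from a module $X$ to $A$
extends to a homomorphism from $X$ to $A$.
\end{expls}
```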
\begin{cor}\label{c:CotorsionPair} Let $(\mathcal B,\mathcal C)$ be a complete cotorsion pair and $A$ an object of $\mathcal A$. Then the following assertions are equivalent: \begin{enumerate} \item $A \in \mathcal C$. \item $A$ is $\mathcal E^{\mathcal C}$-injective. \item Any $\mathcal E^{\mathcal C}$-partial morphism from an object $X$ to $A$ extends to a homomorphism from $X$ to $A$. \end{enumerate} \end{cor} \begin{proof} (1) $\Rightarrow$ (2) is trivial. (2) $\Leftrightarrow$ (3) follows from Theorem \ref{t:FInjectivePartial}. (2) $\Rightarrow$ (1). Since the cotorsion pair is complete, there exists a conflation $A \rightarrow C \rightarrow B$ with $C \in \mathcal C$ and $B \in \mathcal B$; denote its inflation by $f:A \rightarrow C$. Then, for each $C' \in \mathcal C$, the long exact sequence induced by this conflation when applying $\Hom(-,C')$, together with the vanishing of $\Ext(B,C')$, gives that $f$ actually is an $\mathcal E^{\mathcal C}$-inflation. Since $A$ is $\mathcal E^{\mathcal C}$-injective, this inflation is a split monomorphism and $A$ is isomorphic to a direct summand of $C$. Now, using that $\mathcal C$ is closed under direct summands, we conclude that $A$ belongs to $\mathcal C$. \end{proof} We end this section characterizing partial morphisms relative to the exact structures $\mathcal E^{\mathcal X}$ and $\mathcal E_{\mathcal X}$ for a given class of objects $\mathcal X$. Using the preceding theorem, it is easy to handle the case $\mathcal E^{\mathcal X}$. \begin{prop} Let $\mathcal X$ be a class of objects, $X$ an object in $\mathcal A$, $U$ an admissible subobject with inclusion $u:U \rightarrow X$ and $f:U \rightarrow Y$ a morphism. The following assertions are equivalent: \begin{enumerate} \item $f$ is an $\mathcal E^{\mathcal X}$-partial morphism from $X$ to $Y$ with $\dom f = U$. \item For each morphism $g:Y \rightarrow Z$ with $Z \in \mathcal X$, there exists $h:X \rightarrow Z$ with $hu=gf$. \end{enumerate} \end{prop} \begin{proof} (1) $\Rightarrow$ (2). Take any $Z \in \mathcal X$ and $g:Y \rightarrow Z$.
By Proposition \ref{p:PropertiesPartialMorphisms}(6), $gf$ is an $\mathcal E^{\mathcal X}$-partial morphism from $X$ to $Z$. Since $Z$ is $\mathcal E^{\mathcal X}$-injective, (2) follows from Theorem \ref{t:FInjectivePartial}. (2) $\Rightarrow$ (1). Consider the pushout of $f$ along $u$ and any morphism $g:Y \rightarrow Z$ with $Z \in \mathcal X$: \begin{displaymath} \begin{tikzcd} U \arrow{r}{u} \arrow{d}{f} & X \arrow{d}{\overline f}\\ Y \arrow{r}{\overline u} \arrow{d}{g} & P\\ Z & \end{tikzcd} \end{displaymath} By (2) there exists $h:X \rightarrow Z$ such that $hu=gf$. Using that $P$ is the pushout, there exists $h':P \rightarrow Z$ such that $h'\overline u=g$. This means that $\Hom(P,Z) \rightarrow \Hom(Y,Z)$ is surjective for each $Z \in \mathcal X$ and, consequently, $\overline u$ is an $\mathcal E^{\mathcal X}$-inflation. Then, $f$ is an $\mathcal E^{\mathcal X}$-partial morphism. \end{proof} Now we treat the case $\mathcal E_{\mathcal X}$. Having in mind the interpretation of systems of equations in terms of morphisms (see \cite[34.3]{Wisbauer}), the following characterization of $\mathcal E_{\mathcal X}$-partial morphisms can be viewed as an extension of Ziegler's definition of partial morphisms in the pure-exact structure on the module category (Definition \ref{d:PartialZiegler}). \begin{prop} Suppose that there exist enough projective objects. Let $\mathcal X$ be a class of objects, $A$ an object, $U$ an admissible subobject of $A$ with inclusion $u:U \rightarrow A$ and $f:U \rightarrow B$ a morphism. The following assertions are equivalent: \begin{enumerate} \item $f$ is an $\mathcal E_{\mathcal X}$-partial morphism. \item For each commutative diagram \begin{displaymath} \begin{tikzcd} M \arrow{r}{i} \arrow{d}{\varphi_1} & N \arrow{d}{\varphi_2}\\ U \arrow{r}{u} & A \end{tikzcd} \end{displaymath} in which $\Coker i \in \mathcal X$, there exists $g:N \rightarrow B$ such that $g i = f \varphi_1$.
\end{enumerate} \end{prop} \begin{proof} (1) $\Rightarrow$ (2) Take a diagram as in (2) and consider the pushout of $f$ along $u$ to get the following commutative diagram \begin{displaymath} \begin{tikzcd} & M \arrow{r}{i} \arrow{d}{\varphi_1} & N \arrow{r}{p} \arrow{d}{\varphi_2} & \Coker i \arrow{d}{\varphi_3}\\ & U \arrow{r}{u} \arrow{d}{f} & A \arrow{r}{q} \arrow{d}{\overline f} & C \arrow[equal]{d}\\ \eta: & B \arrow{r}{\overline u} & P \arrow{r}{\overline q} & C \end{tikzcd} \end{displaymath} in which, since $f$ is $\mathcal E_{\mathcal X}$-partial, the bottom row is an $\mathcal E_{\mathcal X}$-conflation, $q=\overline q \overline f$ is a cokernel of $u$ by Lemma \ref{l:PushoutCokernel}, and $\varphi_3$ exists by the universal property of the cokernel. Since $\Coker i \in \mathcal X$ and $\eta \in \mathcal E_{\mathcal X}$, there exists $h:\Coker i \rightarrow P$ such that $\overline q h=\varphi_3$. By Lemma \ref{l:DiagramLemma}, there exists $g:N \rightarrow B$ such that $gi=f\varphi_1$. (2) $\Rightarrow$ (1) The pushout of $f$ along $u$ gives the commutative diagram \begin{displaymath} \begin{tikzcd} & U \arrow{r}{u} \arrow{d}{f} & A \arrow{r}{q} \arrow{d}{\overline f} & C \arrow[equal]{d}\\ \eta: & B \arrow{r}{\overline u} & P \arrow{r}{\overline q} & C \end{tikzcd} \end{displaymath} in which $q$ is a cokernel of $u$ by Lemma \ref{l:PushoutCokernel}. In order to see that $\eta$ is an $\mathcal E_{\mathcal X}$-conflation, let $\varphi:X \rightarrow C$ be a morphism with $X \in \mathcal X$. Since there exist enough projective objects, we can find a conflation \begin{displaymath} \begin{tikzcd} K \arrow{r}{i} & Q \arrow{r}{p} & X \end{tikzcd} \end{displaymath} with $Q$ projective.
Using the projectivity of $Q$, we can construct the commutative diagram \begin{displaymath} \begin{tikzcd} K \arrow{r}{i} \arrow{d}{\varphi_1} & Q \arrow{r}{p} \arrow{d}{\varphi_2} & X \arrow{d}{\varphi}\\ U \arrow{r}{u} \arrow{d}{f} & A \arrow{r}{q} \arrow{d}{\overline f} & C \arrow[equal]{d}\\ B \arrow{r}{\overline u} & P \arrow{r}{\overline q} & C \end{tikzcd} \end{displaymath} By hypothesis, there exists $g:Q \rightarrow B$ such that $gi=f\varphi_1$. By Lemma \ref{l:DiagramLemma}, there exists $h:X \rightarrow P$ such that $\overline q h =\varphi$. Thus, $\eta$ is an $\mathcal E_{\mathcal X}$-conflation. \end{proof} \section{Small Subobjects, Hulls and Envelopes} \label{sec:small-subobj-envel} \noindent Approximations by a fixed class of objects are formalized by the notions of preenvelope and precover. Recall that if $\mathcal B$ is a category, $\mathcal X$ is a class of objects and $B$ is an object of $\mathcal B$, an \textit{$\mathcal X$-preenvelope} of $B$ is a morphism $u:B \rightarrow X$, with $X$ being an object in $\mathcal X$, such that any morphism $f:B \rightarrow Y$ with $Y \in \mathcal X$ factors through $u$. Note that if $\mathcal B$ is the module category over a ring $R$, then an $\Inj$-preenvelope is just a monomorphism $B \rightarrow I$ with $I$ injective and a $\PInj$-preenvelope is a pure monomorphism $B \rightarrow E$ with $E$ pure-injective. There are two ways of defining a minimal approximation in module categories. The first of them, which can be defined in any category, is the notion of envelope: an $\mathcal X$-preenvelope $u:B \rightarrow X$ is an $\mathcal X$-envelope if $u$ is a minimal morphism in the sense that any morphism $f:X \rightarrow X$ satisfying $fu=u$ is an isomorphism. The second of them uses the notion of essential and pure-essential monomorphism. Recall that a monomorphism (resp. a pure monomorphism) $f:A \rightarrow B$ is essential (resp. 
pure-essential) if, for any $g:B \rightarrow C$ such that $gf$ is a monomorphism (resp. a pure monomorphism), $g$ is a monomorphism (resp. a pure monomorphism). Then an injective hull in $\textrm{Mod-}R$ is an essential monomorphism $u:B \rightarrow I$ with $I$ injective, and a pure-injective hull is a pure-essential pure monomorphism $v:B \rightarrow E$ with $E$ pure-injective (we shall use the term \textit{hull} for minimal approximations defined by essentiality). It is well known that $u$ is precisely the injective envelope of $B$ and $v$ the pure-injective envelope of $B$ (as defined in the preceding paragraph). Concerning the pure-exact structure, there is another notion of small extension which was introduced by Ziegler in \cite[p. 161]{Ziegler} using partial morphisms. With this definition Ziegler constructs, for a submodule $A$ of a pure-injective module $E$, a weak version of the pure-injective hull of $A$, $A \leq H(A) \leq E$ (see \cite[Theorem 3.6]{Ziegler}), which gives, in case $A$ is a pure submodule of $E$, the pure-injective hull of $A$. The objective of this section is to define $\mathcal F$-essential and $\mathcal F$-small extensions in our exact category $(\mathcal A;\mathcal E)$, and to relate all approximations of objects by injectives: $\mathcal F$-injective envelopes, $\mathcal F$-injective hulls and $\mathcal F$-small extensions. We shall start with the definition of $\mathcal F$-small extension. Note that if $X$ and $Y$ are objects in $\mathcal A$, $f$ is an $\mathcal F$-partial morphism from $X$ to $Y$ and $V$ is an admissible subobject of $\dom f$, then $f\upharpoonright V$ defines an $\mathcal F$-partial morphism from $X$ to $Y$ (with $\dom f\upharpoonright V = V$). \begin{defn}\label{d:small} Let $X$ be an object and $U \leq V$ be admissible subobjects of $X$.
\begin{enumerate} \item We shall say that $V$ is $\mathcal F$-small over $U$ in $X$ if for any $\mathcal F$-partial morphism $f$ from $X$ to another object $Y$ with $\dom f = V$, the following holds: \begin{center} $f \upharpoonright U$ is an $\mathcal F$-partial isomorphism from $X$ to $Y \Rightarrow f$ is an $\mathcal F$-partial isomorphism. \end{center} \item We shall say that $X$ is $\mathcal F$-small over $U$ if $X$ is $\mathcal F$-small over $U$ in $X$. \end{enumerate} \end{defn} \noindent If $R$ is a ring, $\mathcal A=\textrm{Mod-}R$ and $\mathcal F$ is the pure-exact structure in $\textrm{Mod-}R$, then the $\mathcal F$-small objects coincide with the small objects introduced by Ziegler in \cite{Ziegler}. \noindent As an immediate consequence of the above definition we get: \begin{lem}\label{l:CharSmall} Let $X$ be an object and $U \leq X$ be an admissible subobject. Then $X$ is $\mathcal F$-small over $U$ if and only if each morphism $f:X \rightarrow Z$ such that $f \upharpoonright U$ defines an $\mathcal F$-partial isomorphism from $X$ to $Z$ is actually an $\mathcal F$-inflation. \end{lem} \begin{proof} Simply note that, by Example \ref{e:PartialMorphisms}, any morphism $f:X \rightarrow Z$ is $\mathcal F$-partial with $\dom f = X$ and that $f$ is a $\mathcal F$-partial isomorphism with $\dom f = X$ if and only if $f$ is an $\mathcal F$-inflation. \end{proof} Next, we establish some fundamental properties of $\mathcal F$-small objects. \begin{prop}\label{p:PropertiesSmall} Let $X$ be an object and $U \subseteq V \subseteq W$ be admissible subobjects of $X$. Then: \begin{enumerate} \item If $V$ is $\mathcal F$-small over $U$ in $X$ and $W$ is $\mathcal F$-small over $V$ in $X$ then $W$ is $\mathcal F$-small over $U$ in $X$. 
\item If $X$ is $\mathcal F$-injective then $V$ is $\mathcal F$-small over $U$ in $X$ if and only if for each $\mathcal F$-partial morphism $f$ from $X$ to $Y$ with $\dom f = V$ we have that: if the inclusion $u:U \rightarrow X$ factors through $f \upharpoonright U$, then the inclusion $v:V \rightarrow X$ factors through $f$. \item If $V$ is an $\mathcal F$-admissible subobject of $X$, then $V$ is $\mathcal F$-small over $U$ in $X$ if and only if $V$ is $\mathcal F$-small over $U$ (in $V$). \end{enumerate} \end{prop} \begin{proof} (1) is straightforward. (2) follows from the description of $\mathcal F$-partial isomorphisms defined over $\mathcal F$-injective objects obtained in Proposition \ref{p:PropertiesPartialMorphisms}(4). (3) First of all, assume that $V$ is $\mathcal F$-small over $U$ in $X$ and let us use the preceding lemma to prove that $V$ is $\mathcal F$-small over $U$. Take any morphism $f:V \rightarrow Y$ such that $f \upharpoonright U$ is an $\mathcal F$-partial isomorphism. Since $V$ is an $\mathcal F$-admissible subobject, $f$ is an $\mathcal F$-partial morphism from $X$ to $Y$ with $\dom f = V$ by Proposition \ref{p:PropertiesPartialMorphisms}(1). Since $V$ is $\mathcal F$-small over $U$ in $X$, $f$ is an $\mathcal F$-partial isomorphism from $X$ to $Y$ with dominion $V$. Again by Proposition \ref{p:PropertiesPartialMorphisms}(1), $f$ is an $\mathcal F$-inflation. Now assume that $V$ is $\mathcal F$-small over $U$ and let $f$ be an $\mathcal F$-partial morphism from $X$ to an object $Y$ with $\dom f = V$ such that $f \upharpoonright U$ defines an $\mathcal F$-partial isomorphism from $X$ to $Y$. Then, trivially, $f \upharpoonright U$ defines an $\mathcal F$-partial isomorphism from $V$ to $Y$ and, since $V$ is $\mathcal F$-small over $U$, $f$ is an $\mathcal F$-inflation by Lemma \ref{l:CharSmall}. Since $V$ is $\mathcal F$-admissible, $f$ is an $\mathcal F$-partial isomorphism from $X$ to $Y$ by Proposition \ref{p:PropertiesPartialMorphisms}(1).
\end{proof} With the notion of $\mathcal F$-small objects we can define $\mathcal F$-small extensions. \begin{defn} An $\mathcal F$-small extension is an inflation $f:U \rightarrow X$ such that $X$ is $\mathcal F$-small over $U$. \end{defn} The following characterization follows from the definition of partial isomorphism with respect to the pure-exact structure. \begin{prop} Let $R$ be a ring. A monomorphism $v:U \rightarrow X$ is a pure-small extension if and only if every morphism $g:X \rightarrow Y$ satisfying the following two conditions is a pure monomorphism: \begin{enumerate} \item $gv$ is monic. \item For each system of linear equations over $U$, $\sum_{j =1}^m X_jr_{ij}=u_i \ (i=1, \ldots, n)$, if $\sum_{j=1}^mX_jr_{ij}=gv(u_i) \ (i=1, \ldots, n)$ has a solution in $Y$, then $\sum_{j =1}^m X_jr_{ij}=u_i\ (i=1, \ldots, n)$ has a solution in $X$. \end{enumerate} \end{prop} \begin{rem} \rm Note that $g:X \rightarrow Y$ is a pure monomorphism if and only if: \begin{enumerate} \item $g$ is monic. \item Each system of linear equations over $X$, $\sum_{j =1}^m X_jr_{ij}=x_i\ (i=1, \ldots, n)$, satisfies that if the system $\sum_{j =1}^m X_jr_{ij}=g(x_i)\ (i=1, \ldots, n)$ has a solution in $Y$, then the system $\sum_{j =1}^m X_jr_{ij}=x_i\ (i=1, \ldots, n)$ has a solution in $X$. \end{enumerate} The previous result says that, when $X$ has a submodule $U$ such that the extension $U \leq X$ is pure-small, we only have to check the condition on systems of equations over $U$ in order to see that a morphism $g:X \rightarrow Y$ is a pure monomorphism. \end{rem} Now we define $\mathcal F$-essential extensions and weakly $\mathcal F$-essential extensions. \begin{defn} A weakly $\mathcal F$-essential extension (resp. $\mathcal F$-essential extension) is an $\mathcal F$-inflation $u\colon X \rightarrow Y$ such that for any morphism $f:Y \rightarrow Z$, the following holds: \begin{center} $f u$ is an $\mathcal F$-inflation $\Rightarrow$ $f$ is an inflation (resp.
$f$ is an $\mathcal F$-inflation). \end{center} \end{defn} \noindent If $\mathcal A = \textrm{Mod-}R$ and $\mathcal E$ is the abelian exact structure, then both the weakly $\mathcal E$-essential extensions and the $\mathcal E$-essential extensions coincide, since each monic is an inflation. If we consider $\mathcal F$ to be the pure-exact structure on $\textrm{Mod-}R$, then the weakly $\mathcal F$-essential extensions are the pure-essential extensions introduced in \cite{Warfield}; we shall call them weakly pure-essential. The $\mathcal F$-essential extensions are the purely essential monomorphisms introduced in \cite{GomezGuil} (caution: they are called pure-essential in \cite[p. 45]{Prest09}). We shall use the name pure-essential extension. In \cite[Example 2.3]{GomezGuil} it is proved that there exist weakly pure-essential extensions which are not pure-essential. We establish the relationship between $\mathcal F$-essential extensions and $\mathcal F$-small extensions in the sense of Definition \ref{d:small}. \begin{prop}\label{p:EssentialSmall} Let $u\colon X \rightarrow Y$ be an inflation. \begin{enumerate} \item The following assertions are equivalent: \begin{enumerate} \item $u$ is an $\mathcal F$-essential extension. \item $u$ is an $\mathcal F$-inflation and $Y$ is $\mathcal F$-small over $X$. \end{enumerate} \item If $u$ is a weakly $\mathcal F$-essential extension then $u$ does not factor through a proper direct summand of $Y$, that is, if $v:Z \rightarrow Y$ is a split monomorphism and $w:X \rightarrow Z$ is an inflation such that $vw=u$, then $v$ is an isomorphism. \end{enumerate} \end{prop} \begin{proof} (1) First of all, suppose that $u$ is an $\mathcal F$-essential extension and let us prove that $Y$ is small over $X$. We will use Lemma \ref{l:CharSmall}. Let $f:Y \rightarrow Z$ be a morphism such that $f\upharpoonright X = fu$ defines an $\mathcal F$-partial isomorphism from $Y$ to $Z$. 
Since $X$ is an $\mathcal F$-admissible subobject, $f\upharpoonright X$ is actually an $\mathcal F$-inflation by Proposition \ref{p:PropertiesPartialMorphisms}(1). Since $u$ is an $\mathcal F$-essential extension, $f$ is an $\mathcal F$-inflation. By Lemma \ref{l:CharSmall}, $Y$ is $\mathcal F$-small over $X$. Conversely, assume that $u$ is an $\mathcal F$-inflation and $Y$ is $\mathcal F$-small over $X$. Let $f:Y \rightarrow Z$ be a morphism such that $f \upharpoonright X=fu$ is an $\mathcal F$-inflation. Then, by Proposition \ref{p:PropertiesPartialMorphisms}(1), $f\upharpoonright X$ defines an $\mathcal F$-partial isomorphism from $Y$ to $Z$. Since $Y$ is $\mathcal F$-small over $X$, $f$ is an $\mathcal F$-inflation by Lemma \ref{l:CharSmall}. Thus, $u$ is an $\mathcal F$-essential extension. (2) Let $v:Z \rightarrow Y$ be a split monomorphism, $v':Y \rightarrow Z$ a morphism with $v'v=1_Z$, and $w:X \rightarrow Z$ an inflation with $vw=u$. Since $w$ is an inflation, $w$ is an $\mathcal F$-inflation by the obscure axiom. Using that $v'u=w$ and that $u$ is weakly $\mathcal F$-essential, we get that $v'$ is monic. Then $v'vv'=v'=v'1_Y$, from which it follows that $vv'=1_Y$ and, consequently, $v$ is an isomorphism. \end{proof} With the notion of $\mathcal F$-essential extension we can define $\mathcal F$-injective hulls. \begin{defn} An $\mathcal F$-injective hull of an object $X$ is an $\mathcal F$-essential extension $u:X \rightarrow E$ with $E$ an $\mathcal F$-injective object. \end{defn} In the next result we see that, under certain circumstances, a weakly $\mathcal F$-essential extension $u:X \rightarrow E$ with $E$ being $\mathcal F$-injective is actually an $\mathcal F$-injective hull. In addition, we establish the relationship between $\mathcal F$-injective hulls and $\mathcal F$-injective envelopes as defined at the beginning of this section.
If $\FInj$ is the class of all $\mathcal F$-injective objects, we shall refer to $\FInj$-envelopes as $\mathcal F$-injective envelopes. \begin{theorem}\label{t:InjectiveHulls} Let $u:X \rightarrow Y$ be a morphism. The following assertions are equivalent: \begin{enumerate} \item $u$ is an $\mathcal F$-injective hull. \item $u$ is an $\mathcal F$-inflation, $Y$ is $\mathcal F$-injective and $Y$ is $\mathcal F$-small over $X$. \item $u$ is an $\mathcal F$-inflation, $Y$ is $\mathcal F$-injective and each morphism $f:Y \rightarrow Z$ such that $fu$ is an $\mathcal F$-inflation is a split monomorphism. \end{enumerate} If, in addition, $u$ has a cokernel and there exists an $\mathcal F$-inflation $v:X \rightarrow E$ with $E$ an $\mathcal F$-injective object, the following assertion is also equivalent: \begin{enumerate} \setcounter{enumi}{3} \item $u$ is an $\mathcal F$-injective envelope. \end{enumerate} Finally, if there exists an $\mathcal F$-essential extension $v:X \rightarrow E$ with $E$ an $\mathcal F$-injective object, the following assertion is also equivalent: \begin{enumerate} \setcounter{enumi}{4} \item $u$ is a weakly $\mathcal F$-essential extension with $Y$ being $\mathcal F$-injective. \end{enumerate} \end{theorem} \begin{proof} (1) $\Leftrightarrow$ (2) is Proposition \ref{p:EssentialSmall} and (1) $\Leftrightarrow$ (3) is trivial. (1) $\Rightarrow$ (4). Since $u$ is an $\mathcal F$-inflation, it is an $\mathcal F$-injective preenvelope. In order to see that it is an envelope, let $f:Y \rightarrow Y$ be a morphism such that $fu=u$. Since $u$ is $\mathcal F$-essential, $f$ is an $\mathcal F$-inflation. Using that $Y$ is $\mathcal F$-injective, we deduce that $f$ is a split monomorphism, i.e., there exists $g:Y \rightarrow Y$ such that $gf=1_Y$. Then $gu=gfu=u$ is an $\mathcal F$-inflation and, since $u$ is $\mathcal F$-essential, $g$ is an $\mathcal F$-inflation; in particular, $g$ is a monomorphism. Then $gfg=g=g1_Y$. In particular, $fg=1_Y$, which implies that $f$ is an isomorphism. (4) $\Rightarrow$ (3).
Since $u$ is an $\mathcal F$-injective envelope, there exists $w:Y \rightarrow E$ such that $wu=v$. By the obscure axiom \cite[Proposition 2.16]{Buhler}, $u$ is an $\mathcal F$-inflation. Now let $f:Y \rightarrow Z$ be a morphism such that $fu$ is an $\mathcal F$-inflation. Since $Y$ is $\mathcal F$-injective, there exists $g:Z \rightarrow Y$ such that $gfu=u$. Using that $u$ is an $\mathcal F$-injective envelope, we get that $gf$ is an isomorphism. This implies that $f$ is a split monomorphism. (5) $\Rightarrow$ (1). Since $v$ is an $\mathcal F$-inflation and $Y$ is $\mathcal F$-injective, there exists $w:E \rightarrow Y$ with $wv=u$. Since $wv=u$ is an $\mathcal F$-inflation and $v$ is $\mathcal F$-essential, $w$ is an $\mathcal F$-inflation. Using that $E$ is $\mathcal F$-injective, there exists $w':Y \rightarrow E$ such that $w'w=1_E$. Then $w'u=v$ is an $\mathcal F$-inflation so that, since $u$ is weakly $\mathcal F$-essential, $w'$ has to be monic. Then $w'ww'=w'=w'1_Y$ implies that $ww'=1_Y$, so that $w$ is an isomorphism. Now the identity $wv=u$ gives that $u$ is $\mathcal F$-essential as well. \end{proof} \begin{rem} Note that the additional hypotheses of (4) (resp. (5)) are only needed to prove the implication $(4) \Rightarrow (1)$ (resp. $(5) \Rightarrow (1)$). The implication $(1) \Rightarrow (4)$ (resp. $(1) \Rightarrow (5)$) is true without those hypotheses. In particular, any $\mathcal F$-injective hull is always an $\mathcal F$-injective envelope. \end{rem} \noindent Let $R$ be any ring. In \cite[Proposition 6]{Warfield} it is proved that for each module $M$ there exists a weakly pure-essential extension $u:M \rightarrow E$ with $E$ a pure-injective module. A priori, $u$ need not be the pure-injective hull of $M$. However, one can prove that pure-injective hulls exist by using the existence of injective hulls in the functor category \cite[Theorem 4.3.18]{Prest09}, so that, by (5) of the preceding theorem, $u$ is actually pure-essential.
That is, \cite[Proposition 6]{Warfield} actually gives the existence of pure-injective hulls in $\textrm{Mod-}R$. \section{Existence of Hulls and Envelopes} \label{sec:exist-hulls-envel} \noindent In this section we study the problem of the existence of injective hulls and envelopes in our exact category $\mathcal A$. First, we study when there exist enough injectives (equivalently, injective preenvelopes). Then, we prove that in certain abelian categories these preenvelopes can be used to produce injective envelopes and hulls. \noindent Recall that a $\lambda$-sequence, where $\lambda$ is an ordinal, is a direct system of objects of $\mathcal A$, $(X_\alpha,i_{\beta\alpha})_{\alpha<\beta<\lambda}$, which is continuous in the sense that for each limit ordinal $\beta$, the direct limit of the system $(X_\alpha,i_{\gamma\alpha})_{\alpha<\gamma<\beta}$ exists and the canonical morphism $\displaystyle \lim_{\substack{\longrightarrow\\ \alpha < \beta}}X_\alpha \rightarrow X_\beta$ is an isomorphism. If the direct limit of the system exists, we shall call the morphism $\displaystyle X_0 \rightarrow \lim_{\longrightarrow}X_\alpha$ the transfinite composition of the $\lambda$-sequence. In many results of this section we shall use that transfinite compositions of inflations exist and are inflations. When this condition is satisfied, the category $\mathcal A$ has arbitrary direct sums and direct sums of conflations are conflations \cite[Lemma 1.4]{SaorinStovicek11}. Moreover, it is easy to see that when direct limits of inflations are inflations, then transfinite compositions of $\lambda$-sequences of inflations are inflations for each ordinal number $\lambda$. Now we define the notion of small object.
Given an object $X$ and a direct system in $\mathcal A$, $(Y_i,u_{ji})_{i < j \in I}$, such that its direct limit exists, the functor $\Hom_{\mathcal A}(X,-)$ is said to \textit{preserve the direct limit of the system} if the canonical morphism from $\displaystyle \lim_{\longrightarrow}\Hom_{\mathcal A}(X,Y_i)$ to $\displaystyle \Hom_{\mathcal A}\left(X,\lim_{\longrightarrow}Y_i\right)$ is an isomorphism. It is very easy to see the following \cite[p. 9]{AdamekRosicky}: \begin{lem}\label{l:PreserveLimits} Let $X$ be an object and $(Y_i,u_{ji})_{i < j \in I}$ a direct system such that its direct limit exists, and denote by $\displaystyle u_i:Y_i \rightarrow \lim_{\longrightarrow}Y_j$ the canonical map for each $i \in I$. Then $\Hom_{\mathcal A}(X,-)$ preserves the direct limit of the system if and only if the following conditions hold: \begin{enumerate} \item For each $\displaystyle f:X \rightarrow \lim_{\longrightarrow}Y_j$ there exist $i \in I$ and $g:X \rightarrow Y_i$ such that $f=u_ig$. \item For each $i \in I$ and morphism $g:X \rightarrow Y_i$ satisfying $u_ig=0$, there exists $j \geq i$ such that $u_{ji}g=0$. \end{enumerate} \end{lem} Recall that the cofinality of a cardinal $\kappa$ is the least cardinal, denoted $\cf(\kappa)$, such that there exists a family $\{\kappa_\alpha:\alpha < \cf(\kappa)\}$ of cardinals smaller than $\kappa$ whose union is $\kappa$. The cardinal $\kappa$ is said to be regular if $\cf(\kappa)=\kappa$. \begin{defn} Suppose that transfinite compositions of inflations exist and are inflations. Let $\kappa$ be an infinite regular cardinal and $X$ be an object. We say that $X$ is $\kappa$-small if for each cardinal $\lambda$ with $\cf(\lambda) \geq \kappa$, $\Hom_{\mathcal A}(X,-)$ preserves the transfinite composition of any $\lambda$-sequence of inflations. We say that the object $X$ is small if it is $\kappa$-small for some infinite regular cardinal $\kappa$.
\end{defn} \begin{lem} Suppose that transfinite compositions of inflations exist and are inflations. Let $\kappa$ be an infinite regular cardinal and $\{X_k:k \in K\}$ a family of $\kappa$-small objects with $|K| < \kappa$. Then $\bigoplus_{k \in K}X_k$ is $\kappa$-small. In particular, the direct sum of any family of small objects is small. \end{lem} \begin{proof} Let $\lambda$ be any cardinal with $\cf(\lambda) \geq \kappa$ and $(Y_\alpha,u_{\beta\alpha})_{\alpha < \beta < \lambda}$, a $\lambda$-sequence of inflations whose direct limit is $Y$. Denote by $u_\alpha:Y_\alpha \rightarrow Y$ the canonical morphism for each $\alpha < \lambda$. We are going to use Lemma \ref{l:PreserveLimits} in order to prove that $\bigoplus_{k \in K}X_k$ is $\kappa$-small. Let $f:\bigoplus_{k \in K}X_k \rightarrow Y$ be a morphism and denote by $\tau_k:X_k \rightarrow \bigoplus_{k \in K}X_k$ the inclusion for each $k \in K$. Since, for each $k \in K$, $X_k$ is $\kappa$-small, there exists $\alpha_k < \lambda$ and a morphism $g_k:X_k \rightarrow Y_{\alpha_k}$ such that $u_{\alpha_k}g_k=f\tau_k$. Since $|K| < \cf(\lambda)$, we can find an ordinal $\alpha$ with $\alpha_k < \alpha$ for each $k \in K$. Now let $g:\bigoplus_{k \in K}X_k \rightarrow Y_\alpha$ be the morphism induced in the direct sum by the family $\{u_{\alpha\alpha_k}g_k:k \in K\}$ and note that $g$ satisfies $u_\alpha g = f$, as $u_\alpha g \tau_k = f\tau_k$ for each $k \in K$. This proves (1) of Lemma \ref{l:PreserveLimits}. In order to prove (2), let $\alpha < \lambda$ and $f:\bigoplus_{k \in K}X_k \rightarrow Y_\alpha$ such that $u_\alpha f=0$. Since, for each $k \in K$, $X_k$ is $\kappa$-small, there exists $\alpha_k \geq \alpha$ such that $u_{\alpha_k\alpha}f\tau_k=0$. Using that $|K|<\cf(\lambda)$, there exists a $\beta < \lambda$ such that $\alpha_k < \beta$ for each $k \in K$. Then $u_{\beta\alpha}f\tau_k=0$ for each $k \in K$. This means that $u_{\beta\alpha}f=0$, which proves (2) of Lemma \ref{l:PreserveLimits}. 
\end{proof} Now we can prove the existence of enough injective objects in exact categories in which transfinite compositions of inflations exist and are inflations, and in which a certain generalized version of Baer's lemma for injectivity holds. \begin{theorem}\label{t:ExistenceInjectives} Assume that the exact category $(\mathcal A;\mathcal E)$ satisfies the following: \begin{enumerate} \item Transfinite compositions of inflations exist and are inflations. \item There exists a set of inflations $\mathcal H = \{u_i:K_i \rightarrow H_i|i \in I\}$ such that $K_i$ is small for each $i \in I$ and any $\mathcal H$-injective object is injective. \end{enumerate} Then $\mathcal A$ has enough injectives. \end{theorem} \begin{proof} Let $M$ be any object of $\mathcal A$. Let $J$ be the set of all pairs $(i,f)$, where $i$ is an element of $I$ and $f:K_i \rightarrow M$ is a morphism. For any pair $(i,f) \in J$, let $u_{(i,f)}:K_{(i,f)}\rightarrow H_{(i,f)}$ be a copy of $u_i$, where $K_{(i,f)} = K_i$ and $H_{(i,f)} = H_i$, and let $u$ be the morphism from $\displaystyle \bigoplus_{(i,f) \in J}K_{(i,f)}$ to $\displaystyle \bigoplus_{(i,f) \in J}H_{(i,f)}$ induced by all these copies, that is, the direct sum of the $u_{(i,f)}$'s. By the properties of $\mathcal H$, it is easy to see that an object $E$ is injective if and only if it is $u$-injective. Denote $\displaystyle \bigoplus_{(i,f) \in J}K_{(i,f)}$ by $K$ and $\displaystyle \bigoplus_{(i,f) \in J}H_{(i,f)}$ by $H$. Since $K_i$ is small for each $i \in I$, we can apply the preceding lemma to find an infinite regular cardinal $\kappa$ such that $K$ is $\kappa$-small. Now we are going to construct a family of objects $\{P_\alpha:\alpha < \kappa\}$ and of inflations $\{f_{\alpha\beta} \in \Hom(P_\beta,P_\alpha): \beta \leq \alpha < \kappa\}$ such that: \begin{enumerate} \item[(A)] $P_0=M$. \item[(B)] For each $\alpha < \kappa$, the system $(P_\gamma,f_{\delta\gamma})_{\gamma < \delta \leq \alpha}$ is direct.
\item[(C)] For each $\alpha < \kappa$ and $f:K \rightarrow P_\alpha$, there exists $g:H \rightarrow P_{\alpha+1}$ with $gu=f_{\alpha+1,\alpha}f$. \end{enumerate} We make the construction by transfinite recursion. Suppose that $\alpha$ is a limit ordinal and that we have made the construction for all $\gamma < \alpha$. Then set $P_\alpha = \varinjlim_{\gamma < \alpha}P_\gamma$ and, for each $\gamma < \alpha$, let $f_{\alpha\gamma}$ be the canonical morphism associated to this direct limit. Now suppose that we have made the construction for the ordinal $\alpha$ and let us make it for $\alpha+1$. For each morphism $f \in \Hom(K,P_\alpha)$, let $K^\alpha_f$ and $H^\alpha_f$ be copies of $K$ and $H$ respectively. Set $I_\alpha = \Hom(K,P_\alpha)$, let $u_\alpha:\bigoplus_{f \in I_\alpha}K^\alpha_f \rightarrow \bigoplus_{f \in I_\alpha}H^\alpha_f$ be the direct sum of copies of $u$, and let $\varphi_\alpha:\bigoplus_{f \in I_\alpha}K^\alpha_f \rightarrow P_{\alpha}$ be the morphism induced on the direct sum by all morphisms from $K$ to $P_\alpha$. Then let $P_{\alpha+1}$, together with the lower arrow $f_{\alpha+1,\alpha}$, be given by the pushout of $u_\alpha$ along $\varphi_\alpha$: \begin{displaymath} \begin{tikzcd} \bigoplus_{f \in I_\alpha}K^\alpha_f \arrow{r}{u_\alpha} \arrow{d}{\varphi_\alpha} & \bigoplus_{f \in I_\alpha}H^\alpha_f \arrow{d}{\psi_\alpha}\\P_\alpha \arrow{r}{f_{\alpha+1,\alpha}} & P_{\alpha+1} \end{tikzcd} \end{displaymath} Moreover, set $f_{\alpha+1,\gamma} = f_{\alpha+1,\alpha}f_{\alpha,\gamma}$ for each $\gamma < \alpha$. Let us prove that $P_{\alpha+1}$ and $f_{\alpha+1,\alpha}$ satisfy (C). Denote, for each $f \in I_\alpha$, by $i_f$ and $k_f$ the corresponding inclusions of $K_f^{\alpha}$ and $H_f^{\alpha}$ into $\bigoplus_{f \in I_\alpha}K^\alpha_f$ and $\bigoplus_{f \in I_\alpha}H^\alpha_f$ respectively. Given $f:K \rightarrow P_{\alpha}$, note that $u_\alpha i_f = k_f u$ and $f=\varphi_\alpha i_f$.
Consequently: \begin{displaymath} f_{\alpha+1,\alpha}f = f_{\alpha+1,\alpha}\varphi_\alpha i_f = \psi_\alpha u_\alpha i_f = \psi_\alpha k_f u \end{displaymath} Then the morphism $g=\psi_\alpha k_f$ satisfies (C). This concludes the construction. Finally, let $\displaystyle E = \lim_{\substack{\longrightarrow\\ \alpha < \kappa}}P_\alpha$ and denote by $f_\alpha:P_\alpha \rightarrow E$ the canonical maps associated to this direct limit. By (1), $f_0:M \rightarrow E$ is an inflation. Let us prove that $E$ is injective which, by (2), is equivalent to seeing that $E$ is $u$-injective. Let $f:K \rightarrow E$ be any morphism. Since $K$ is $\kappa$-small, there exist, by Lemma \ref{l:PreserveLimits}, an $\alpha < \kappa$ and a morphism $\overline f:K \rightarrow P_\alpha$ such that $f = f_\alpha \overline f$. By the construction of $E$, there exists $\overline g:H \rightarrow P_{\alpha+1}$ such that $f_{\alpha+1,\alpha}\overline f = \overline g u$. Then $f = f_{\alpha+1}\overline g u$, so that $f$ factors through $u$, and the proof is finished. \end{proof} \begin{rem}\label{r:StructurePreenvelope} Let $M$ be an object of $\mathcal A$. Note that the inflation $i:M \rightarrow E$, with $E$ an injective object, constructed in the preceding proof satisfies the following property: there exists an infinite regular cardinal $\kappa$ and a $\kappa$-sequence $(P_\alpha,f_{\beta\alpha})_{\alpha < \beta < \kappa}$ such that $P_0=M$, $i$ is the transfinite composition of the sequence and, for each $\alpha < \kappa$, $f_{\alpha+1,\alpha}$ is a pushout of a direct sum of inflations belonging to $\mathcal H$. \end{rem} \begin{rem}\label{r:Hyphoteses} Note that (2) in the preceding theorem is satisfied by those exact categories for which there exists a set of objects $\mathcal G$ such that: \begin{enumerate} \item The class of admissible subobjects of any $G \in \mathcal G$ is a set. \item Any admissible subobject of any $G \in \mathcal G$ is small. \item If an object $A$ of $\mathcal A$ is $\mathcal G$-injective, then it is injective.
\end{enumerate} In this case, we only have to take $\mathcal H$ to be the set of all inflations $u:K \rightarrow G$ with $G$ an object in $\mathcal G$. \end{rem} We finish the paper by studying the existence of injective hulls. We assume that our category $\mathcal A$ is abelian and that $\mathcal E$ is the abelian exact structure. Using the argument of Enochs and Xu in \cite[$\S$2.2]{Xu}, we will prove that in an exact substructure $\mathcal F$, if an object $M$ is an $\mathcal F$-admissible subobject of an $\mathcal F$-injective object, then $M$ actually has an $\mathcal F$-injective hull. We shall need the hypothesis that $\mathcal F$ is closed under well ordered limits. This condition is stronger than being closed under transfinite compositions, as the next example shows. \begin{expl} \rm Let $R$ be a non-noetherian countable ring. Then there exists an fp-injective module $M$ which is not injective. Consider the exact structure $\mathcal E^M$. Since $M$ is not injective, there exists an inclusion $u:I \rightarrow R$ which is not an $\mathcal E^M$-inflation. Since $I$ is countable, $I=\bigcup_{n < \omega}I_n$ for a chain of finitely generated right ideals of $R$. Now each inclusion $u_n:I_n \rightarrow R$ is an $\mathcal E^M$-inflation and the direct limit of all of them is $u$, so $\mathcal E^M$ is not closed under well ordered direct limits. However, $\mathcal E^M$ is closed under transfinite compositions by Lemma \ref{l:TransfiniteCompositions}. \end{expl} \begin{lem}\label{l:epim} Suppose that $\mathcal A$ is an abelian category, $A$ is an object of $\mathcal A$ and $f,g \in \textrm{End}_{\mathcal A}(A)$ are two monomorphisms. If $\Img f \subseteq \Img fg$ then $g$ is epic. \end{lem} \begin{proof} Since $\mathcal A$ is abelian, each monomorphism is the kernel of its cokernel, so that the images of $f$ and $fg$ are represented by the monomorphisms $f$ and $fg$ respectively. The inclusion $\Img f \subseteq \Img fg$ as subobjects of $A$ implies that there exists a morphism $h:A \rightarrow A$ such that $f=fgh$.
Then $f(1-gh)=0$ and, since $f$ is monic, $1-gh=0$. Thus $gh=1$, which implies that $g$ is an epimorphism. \end{proof} Recall that an abelian category $\mathcal A$ is said to satisfy AB5 if $\mathcal A$ is cocomplete and direct limits are exact. \begin{lem}\label{l:monomorphism} Suppose that $\mathcal A$ is an abelian category satisfying AB5. Let $\kappa$ be an ordinal, $(A_\alpha,u_{\beta\alpha})_{\alpha < \beta < \kappa}$ be a direct system of objects and $f: \displaystyle \lim_{\longrightarrow}A_\alpha \rightarrow A$ be a morphism. Suppose that for each $\alpha < \kappa$, $\Ker (fu_\alpha) = \Ker (u_{\alpha+1,\alpha})$, where $\displaystyle u_\alpha:A_\alpha \rightarrow \lim_{\longrightarrow}A_\gamma$ is the canonical morphism. Then $f$ is a monomorphism. \end{lem} \begin{proof} Since $\mathcal A$ satisfies AB5, direct limits are exact and, consequently, $\displaystyle \Ker f= \lim_{\longrightarrow}\Ker(fu_\alpha)$. Denote $\Ker (fu_\alpha)=K_\alpha$ for each $\alpha < \kappa$. Since $K_\alpha$ is the kernel of $u_{\alpha+1,\alpha}$, we can construct, for each $\alpha < \kappa$, the following commutative diagram with exact rows \begin{displaymath} \begin{tikzcd} 0 \arrow{r} & K_\alpha \arrow{r}{k_\alpha} \arrow{d}{k_{\alpha+1,\alpha}} & A_\alpha \arrow{r}{u_{\alpha+1,\alpha}} \arrow{d}{u_{\alpha+1,\alpha}} & A_{\alpha+1} \arrow{r} \arrow{d}{u_{\alpha+2,\alpha+1}}& 0\\ 0 \arrow{r} & K_{\alpha+1} \arrow{r}{k_{\alpha+1}} & A_{\alpha+1} \arrow{r}{u_{\alpha+2,\alpha+1}} & A_{\alpha+2} \arrow{r} & 0 \end{tikzcd} \end{displaymath} which actually defines a direct system of conflations. Taking direct limits and noting that $\displaystyle \lim_{\longrightarrow}u_{\alpha+1,\alpha}$ is the identity, the exactness of direct limits gives that $\displaystyle \lim_{\longrightarrow}K_\alpha=0$. Then $\Ker f=0$ and $f$ is a monomorphism.
\end{proof} Given an object $X$ of $\mathcal A$, recall that the \textit{comma category} $X \downarrow \mathcal A$ is the category whose class of objects consists of all morphisms $f:X \rightarrow A$ with $A \in \mathcal A$, and whose morphisms between two objects, $u:X \rightarrow A$ and $v:X \rightarrow B$, are the morphisms $f:A \rightarrow B$ in $\mathcal A$ satisfying $fu=v$. Abusing language, we shall denote the morphism between $u$ and $v$ by $f:u \rightarrow v$ as well. Given a class $\mathcal I$ of inflations of $\mathcal E$, we denote by $X \downarrow_{\mathcal I} \mathcal A$ the full subcategory of the comma category $X \downarrow \mathcal A$ whose objects are all morphisms in $\mathcal I$. We shall call an object $u$ of $X \downarrow_{\mathcal I} \mathcal A$ a cogenerator if for any other object $v$ of $X \downarrow_{\mathcal I} \mathcal A$, there exists a morphism $f:v \rightarrow u$. Recall that an abelian category is said to be \textit{locally small} if the class of subobjects of any object actually is a set. \begin{lem}\label{l:ExistenceCogenerator} Let $\mathcal A$ be a locally small abelian category, $\mathcal E$ the abelian exact structure and $\mathcal I$ a class of inflations of $\mathcal E$ which is closed under well ordered direct limits. Let $u$ be a cogenerator in $X \downarrow_{\mathcal I} \mathcal A$. Then there exist a cogenerator $\overline u:X \rightarrow \overline E$ in $X \downarrow_{\mathcal I} \mathcal A$ and a morphism $\overline f:u \rightarrow \overline u$ such that any morphism $f':\overline u \rightarrow u'$ in $X \downarrow_{\mathcal I} \mathcal A$ in which $u'$ is a cogenerator satisfies $\Ker(f'\overline f)=\Ker \overline f$. \end{lem} \begin{proof} Assume that the claim of the lemma is not true.
We are going to construct, for each ordinal $\alpha$, a cogenerator $u_\alpha$ and, for each pair of ordinals $\beta < \alpha$, a morphism $f_{\alpha\beta}:u_\beta \rightarrow u_\alpha$ such that $u_0=u$, the system $(u_\gamma,f_{\gamma\delta})_{\delta < \gamma \leq \alpha}$ is directed, and $\Ker f_{\beta 0} \subsetneq \Ker f_{\alpha 0}$. This is a contradiction, since the category is locally small. We make the construction recursively on $\alpha$. For $\alpha=0$ let $u_0=u$. Let $\alpha > 0$ and assume that we have constructed $u_\delta$ and $f_{\delta\gamma}$ for each $\gamma < \delta < \alpha$. If $\alpha$ is a successor, say $\alpha = \beta+1$, then, as $u_\beta$ does not satisfy the claim of the lemma, there exist an inflation $u_{\beta+1}$ in $X \downarrow_{\mathcal I} \mathcal A$ and a morphism $f_{\beta+1\beta}:u_\beta \rightarrow u_{\beta+1}$ such that $\Ker f_{\beta 0} \subsetneq \Ker(f_{\beta+1\beta}f_{\beta 0})$. Then set $f_{\beta+1 \delta}=f_{\beta+1\beta}f_{\beta \delta}$ for each $\delta < \alpha$. Clearly, $\Ker f_{\beta 0} \subsetneq \Ker f_{\alpha 0}$. If $\alpha$ is a limit, set $\displaystyle u_\alpha = \lim_{\substack{\longrightarrow\\ \delta < \alpha}} u_\delta$ and let $f_{\alpha \delta}:u_\delta \rightarrow u_\alpha$ be the structural morphisms of this direct limit. By hypothesis, $u_\alpha$ is an element of $X \downarrow_{\mathcal I} \mathcal A$, and it is a cogenerator since $u_\beta$ is a cogenerator for each $\beta < \alpha$. Moreover, $\Ker f_{\beta 0} \subsetneq \Ker f_{\alpha 0}$ for each $\beta < \alpha$ because, otherwise, if $\Ker f_{\beta 0} = \Ker f_{\alpha 0}$, then $\Ker f_{\beta+1, 0} = \Ker f_{\alpha 0}$ as well, so that $\Ker f_{\beta 0} = \Ker f_{\beta+1, 0}$, a contradiction. This finishes the proof.
\end{proof} \begin{theorem}\label{t:ExistenceHulls} Let $\mathcal A$ be a locally small abelian category satisfying AB5, $\mathcal E$ be the abelian exact structure of $\mathcal A$, and $\mathcal I$ a class of inflations of $\mathcal E$ which is closed under well ordered limits. Let $X$ be any object of $\mathcal A$ such that there exists an inflation $u:X \rightarrow E$ in $\mathcal I$ with $E$ an $\mathcal I$-injective object. Then there exists an inflation $v:X \rightarrow F$ with $F$ an $\mathcal I$-injective object such that $v$ is minimal. \end{theorem} \begin{proof} Note that $u$ is a cogenerator in $X \downarrow_{\mathcal I} \mathcal A$ since $E$ is $\mathcal I$-injective. First of all, by setting $u_0=u$, we can apply the preceding lemma recursively to get, for each $n < \omega$, a cogenerator $u_n$ in $X \downarrow_{\mathcal I} \mathcal A$ and a morphism $f_{n+1,n}:u_n \rightarrow u_{n+1}$ such that any morphism $f':u_{n+1} \rightarrow u'$ with $u'$ a cogenerator satisfies $\Ker (f'f_{n+1,n}) = \Ker f_{n+1,n}$. Let $\displaystyle w=\lim_{\substack{\longrightarrow\\n < \omega}} u_n$ and note that $w$ is a cogenerator in $X \downarrow_{\mathcal I} \mathcal A$. Since any $f':w \rightarrow u'$ with $u'$ a cogenerator satisfies, for each natural number $n$, that $\Ker(f'f_n) = \Ker(f_{n+1,n})$, where $f_n$ is the canonical morphism of the direct limit, Lemma \ref{l:monomorphism} says that any such $f'$ is actually a monomorphism. Suppose that the cogenerator $w$ is of the form $w:X \rightarrow F$, and let us prove that $w$ is a minimal morphism, that is, that any $f:w \rightarrow w$ is an isomorphism. Let $f:w \rightarrow w$ be a morphism and assume that $f$ is not an isomorphism. By the previous claim $f$ is monic, so it cannot be an epimorphism.
Now we can construct, by transfinite recursion, a monomorphism $f_{\alpha\beta}:w_\beta \rightarrow w_\alpha$ for each $\beta < \alpha$, where $w_\alpha=w$ if $\alpha$ is a successor and, otherwise, $\displaystyle w_\alpha = \lim_{\substack{\longrightarrow\\ \gamma < \alpha}}w_\gamma$, such that $f_{\alpha\beta} = f$ if $\alpha = \beta+1$. The cases $\alpha=0$ and $\alpha$ a successor are easy. If $\alpha$ is a limit ordinal, set $\displaystyle w_\alpha = \lim_{\substack{\longrightarrow\\ \gamma < \alpha}} w_\gamma$ with structural maps $f'_{\alpha\gamma}:w_\gamma \rightarrow w_\alpha$ for each $\gamma < \alpha$. Since $w$ is a cogenerator, there exists $f'_\alpha:w_\alpha \rightarrow w$. Then define $f_{\alpha\beta}=f'_\alpha f'_{\alpha \beta}$. Now we prove that, for each ordinal $\alpha$, $\{\Img f_{\alpha+1,\beta}:\beta < \alpha+1\}$ is a strictly ascending chain of subobjects of $F$, which is a contradiction since the category is locally small. Take $\beta < \alpha+1$ and suppose that $\Img f_{\alpha+1,\beta+1} = \Img f_{\alpha+1,\beta}$. Since $f_{\alpha+1,\beta} = f_{\alpha+1,\beta+1}f$, Lemma \ref{l:epim} implies that $f$ is an epimorphism. This contradicts the previous hypothesis, so $f$ has to be an isomorphism. \end{proof} \begin{cor} Let $\mathcal A$ be a locally small abelian category satisfying AB5, $\mathcal E$ be the abelian exact structure of $\mathcal A$, and $\mathcal F$ an additive exact substructure of $\mathcal E$ which is closed under well ordered limits. Let $X$ be any object of $\mathcal A$ such that there exists an $\mathcal F$-inflation $u:X \rightarrow E$ with $E$ an $\mathcal F$-injective object. Then $X$ has an $\mathcal F$-injective envelope. Moreover, this $\mathcal F$-injective envelope is an $\mathcal F$-injective hull as well. \end{cor} \begin{proof} This follows immediately from the previous result.
By Theorem \ref{t:InjectiveHulls}, every $\mathcal F$-injective envelope actually is an $\mathcal F$-injective hull. \end{proof} \section{Applications} \noindent In this section we give several applications of the results obtained in the previous sections. \subsection{Approximations in exact categories} In recent years, several papers studying approximations in exact categories have appeared in the literature. There are two ways of defining approximations in a category $\mathcal D$. The first of them takes a fixed class of objects $\mathcal X$ and is based on the notions of $\mathcal X$-preenvelope and $\mathcal X$-precover defined at the beginning of Section 3. These are the approximations widely studied for module categories and the ones extended in \cite{SaorinStovicek11} to exact categories. The other way of defining approximations takes an ideal $\mathcal I$ in the category (that is, a subfunctor of the bifunctor $\Hom_{\mathcal D}$) and is based on the notions of $\mathcal I$-preenvelope and $\mathcal I$-precover (recall that an $\mathcal I$-preenvelope of an object $D$ of $\mathcal D$ is a morphism $i:D \rightarrow X$ that belongs to $\mathcal I$ and such that, for any other morphism $j:D \rightarrow Y$ in $\mathcal I$, there exists $f:X \rightarrow Y$ with $fi=j$; $\mathcal I$-precovers are defined dually). This is the approach of \cite{FuGuilHerzogTorrecillas}. In this paper, we apply the results of the previous sections to the study of approximations by objects. As a direct consequence of Theorem \ref{t:ExistenceInjectives} we get that the classes of objects that are injective with respect to certain sets of inflations are preenveloping. We shall use the following lemma for the exact structure $\mathcal E^{\mathcal X}$, where $\mathcal X$ is a class of objects. \begin{lem}\label{l:TransfiniteCompositions} Suppose that transfinite compositions of inflations in $\mathcal E$ exist and are inflations and let $\mathcal X$ be any class of objects.
Then transfinite compositions of $\mathcal E^{\mathcal X}$-inflations exist and are $\mathcal E^{\mathcal X}$-inflations. \end{lem} \begin{proof} Let $(Y_\alpha,u_{\beta\alpha})_{\alpha < \beta < \kappa}$ be a direct system of objects indexed by an ordinal $\kappa$, such that $u_{\beta\alpha}$ is an $\mathcal E^{\mathcal X}$-inflation for each $\alpha < \beta < \kappa$. Denote by $\displaystyle u_\alpha:Y_\alpha \rightarrow \lim_{\longrightarrow}Y_\beta$ the canonical morphism for each $\alpha < \kappa$. Note that $u_0$ is an inflation of $\mathcal E$, being a transfinite composition of inflations. Given any $X \in \mathcal X$ and any $f:Y_0 \rightarrow X$ we can construct, using that $u_{\beta\alpha}$ is an $\mathcal E^{\mathcal X}$-inflation for each $\alpha < \beta < \kappa$, a direct system of morphisms, $(f_\alpha:Y_\alpha \rightarrow X)_{\alpha < \kappa}$, with $f_0 = f$. Then the induced morphism $g:\varinjlim_{\beta < \kappa}Y_\beta \rightarrow X$ satisfies $gu_0 = f$. This means that $u_0$ is an $\mathcal E^{\mathcal X}$-inflation. \end{proof} \begin{cor} Suppose that transfinite compositions of inflations in $\mathcal A$ exist and are inflations. Let $\mathcal H$ be a set of inflations such that, for each $i:K \rightarrow H$ in $\mathcal H$, $K$ is small. Let $\mathcal X$ be the class of all $\mathcal H$-injective objects. Then each object in $\mathcal A$ has an $\mathcal X$-preenvelope. \end{cor} \begin{proof} By Lemma \ref{l:TransfiniteCompositions}, the exact category $(\mathcal A; \mathcal E^{\mathcal X})$ satisfies the hypotheses of Theorem \ref{t:ExistenceInjectives}, since an object is $\mathcal E^{\mathcal X}$-injective if and only if it is $\mathcal H$-injective. Then, for each object $A$ of $\mathcal A$, there exists an $\mathcal E^{\mathcal X}$-inflation $i:A \rightarrow E$ with $E$ an $\mathcal E^{\mathcal X}$-injective object. But any $\mathcal E^{\mathcal X}$-injective object actually belongs to $\mathcal X$ (since morphisms in $\mathcal H$ are $\mathcal E^{\mathcal X}$-inflations), so that $i$ is an $\mathcal X$-preenvelope.
\end{proof} One situation that fits the hypotheses of the preceding result is when, in the category of right modules over a unitary ring $R$, we take $\mathcal H$ to be the set of all inflations $K \rightarrow R^n$ appearing in conflations $K \rightarrow R^n \rightarrow L$ with $n$ a natural number and $K$ finitely generated. In this case, the class $\mathcal X$ consists of all fp-injective modules. The preceding result gives that every module has an fp-injective preenvelope (see \cite[Theorem 4.1.6]{GobelTrlifaj}). \begin{cor} \label{fp-inj} Let $R$ be a ring. Then every module has an fp-injective preenvelope. \end{cor} Perhaps the most general result regarding approximations in exact categories is \cite[Theorem 2.13(4)]{SaorinStovicek11}. We see that this result can be deduced from our Theorem \ref{t:ExistenceInjectives}. Let $\mathcal I$ be a set of inflations and denote by $\Coker(\mathcal I)$ the class consisting of all cokernels of morphisms in $\mathcal I$. Recall that $\mathcal I$ is called \textit{homological} \cite[Definition 2.3]{SaorinStovicek11} if the following two conditions are equivalent for any object $T$: \begin{enumerate} \item $T \in \Coker(\mathcal I)^\perp$. \item $T$ is $\mathcal I$-injective. \end{enumerate} \begin{cor} Suppose that transfinite compositions of inflations exist and are inflations. Let $\mathcal I$ be a homological set of inflations such that, for each $i:K \rightarrow L$ in $\mathcal I$, $K$ is small. Then $\Coker(\mathcal I)^\perp$ is preenveloping. \end{cor} \begin{proof} Denote $\Coker(\mathcal I)^\perp$ by $\mathcal S$. Note that, since $\mathcal I$ is homological, an object belongs to $\mathcal S$ if and only if it is $\mathcal I$-injective, if and only if it is $\mathcal E^{\mathcal S}$-injective, so that the result is equivalent to proving that there exist enough $\mathcal E^{\mathcal S}$-injectives.
But, by Lemma \ref{l:TransfiniteCompositions}, transfinite compositions of $\mathcal E^{\mathcal S}$-inflations exist and are $\mathcal E^{\mathcal S}$-inflations, and an object is $\mathcal E^{\mathcal S}$-injective (equivalently, belongs to $\mathcal S$) if and only if it is $\mathcal I$-injective. Then the existence of enough $\mathcal E^{\mathcal S}$-injectives follows from Theorem \ref{t:ExistenceInjectives}. \end{proof} \subsection{Approximations in Grothendieck categories} In this subsection, $\mathcal D$ will be a Grothendieck category with the abelian exact structure. The first application of our results is the existence of injective hulls in $\mathcal D$. \begin{cor} \label{Grothendieck} Every object in the Grothendieck category $\mathcal D$ has an injective hull. \end{cor} \begin{proof} First we show that $\mathcal D$ satisfies the hypotheses of Theorem \ref{t:ExistenceInjectives} in order to prove that $\mathcal D$ has enough injectives. That transfinite compositions of inflations are inflations follows from (AB5), \cite[Proposition V.1.1]{Stenstrom}. In order to see that $\mathcal D$ satisfies (2) of Theorem \ref{t:ExistenceInjectives}, we shall see that it satisfies the conditions of Remark \ref{r:Hyphoteses}. First note that $\mathcal D$ is locally small by \cite[Proposition IV.6.6]{Stenstrom}. On the other hand, it is well known that all objects in a Grothendieck category are small and, if $G$ is a generator of $\mathcal D$, an object is $G$-injective if and only if it is injective by \cite[Proposition V.2.9]{Stenstrom}. Consequently, $\mathcal D$ has enough injectives by Theorem \ref{t:ExistenceInjectives}. Now, the existence of injective hulls in $\mathcal D$ is a direct consequence of Theorem \ref{t:ExistenceHulls}. \end{proof} Now, let us look at approximations in $\mathcal D$ by a class of objects. In many situations, the classes providing approximations belong to a cotorsion pair.
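Pictorially, for a cotorsion pair $(\mathcal B,\mathcal C)$, the two kinds of special approximations recalled below fit into conflations
\begin{displaymath}
\begin{tikzcd}
C' \arrow{r} & B \arrow{r}{f} & M & \textrm{and} & M \arrow{r}{g} & C \arrow{r} & B'
\end{tikzcd}
\end{displaymath}
with $B, B' \in \mathcal B$ and $C, C' \in \mathcal C$, where $f$ is a special $\mathcal B$-precover of $M$ and $g$ is a special $\mathcal C$-preenvelope of $M$.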
The relationship between cotorsion pairs and approximations in module categories was first observed by Salce in the late 1970s, who proved that, if $(\mathcal B,\mathcal C)$ is a cotorsion pair, then the existence of special $\mathcal B$-precovers is equivalent to the existence of special $\mathcal C$-preenvelopes (a special $\mathcal B$-precover of a module $M$ is a morphism $f:B \rightarrow M$ with $B \in \mathcal B$ and $\Ker f \in \mathcal B^\perp$; special $\mathcal C$-preenvelopes are defined dually). Later on, Enochs proved the important fact that a \textit{closed} (in the sense that $\mathcal B$ is closed under direct limits) and complete cotorsion pair provides minimal approximations: covers and envelopes. Finally, Eklof and Trlifaj proved that complete cotorsion pairs are abundant: any cotorsion pair generated by a set is complete. All these works were motivated by the study of the existence of flat covers, the so-called ``Flat cover conjecture", solved by Bican, El Bashir and Enochs in \cite{BicanBashirEnochs}. In this section we see that these results are consequences of our results in the previous sections. Recall that for a class $\mathcal X$ of objects we can form the cotorsion pair $({^\perp}(\mathcal X^\perp), \mathcal X^\perp)$, which is called the cotorsion pair generated by $\mathcal X$. We say that a cotorsion pair $(\mathcal B,\mathcal C)$ is \textit{generated by a set} if there exists a set of objects $\mathcal S$ such that $(\mathcal B,\mathcal C)$ coincides with the cotorsion pair generated by $\mathcal S$. Moreover, we say that $(\mathcal B, \mathcal C)$ is \textit{closed} if $\mathcal B$ is closed under direct limits. \begin{theorem} Let $(\mathcal B,\mathcal C)$ be a cotorsion pair in $\mathcal D$. \begin{enumerate} \item If $\mathcal D$ has a projective generator and $(\mathcal B,\mathcal C)$ is generated by a set, then $(\mathcal B,\mathcal C)$ is complete.
\item If $(\mathcal B,\mathcal C)$ is complete and closed, then every object has a $\mathcal C$-envelope. \end{enumerate} \end{theorem} \begin{proof} (1) We prove, using Theorem \ref{t:ExistenceInjectives}, that the exact structure $\mathcal E^{\mathcal C}$ has enough injective objects. First note that transfinite compositions of inflations are inflations by \cite[Proposition V.1.1]{Stenstrom}. Using Lemma \ref{l:TransfiniteCompositions}, we deduce that transfinite compositions of $\mathcal E^{\mathcal C}$-inflations are $\mathcal E^{\mathcal C}$-inflations as well. Now, since $\mathcal D$ has a projective generator, for each $S \in \mathcal S$ there exists a conflation \begin{displaymath} \begin{tikzcd} K_S \arrow{r}{i_S} & P_S \arrow{r}{p_S} & S \end{tikzcd} \end{displaymath} with $P_S$ projective. Let $\mathcal H$ be the set $\{i_S:S \in \mathcal S\}$. As $\mathcal D$ is a Grothendieck category, $K_S$ is small for each $S \in \mathcal S$. Moreover, it is easy to show that $\mathcal H$ is contained in $\mathcal E^{\mathcal C}$, which implies that an object $M$ is $\mathcal E^{\mathcal C}$-injective if and only if it is $\mathcal H$-injective. We can apply Theorem \ref{t:ExistenceInjectives} to get that $\mathcal E^{\mathcal C}$ has enough injective objects. Then, noting that an object $M$ is $\mathcal H$-injective if and only if $M \in \mathcal C$, we conclude that any $\mathcal E^{\mathcal C}$-inflation $i:M \rightarrow E$ with $E$ an $\mathcal E^{\mathcal C}$-injective object actually is a $\mathcal C$-preenvelope. Consequently, $\mathcal C$ is preenveloping. Now let us take the $\mathcal C$-preenvelope $i:M \rightarrow E$ of an object $M$ as constructed in Theorem \ref{t:ExistenceInjectives}.
By Remark \ref{r:StructurePreenvelope}, there exist an infinite regular cardinal $\kappa$ and a $\kappa$-sequence $(P_\alpha,f_{\beta\alpha})_{\alpha<\beta<\kappa}$ such that $P_0=M$, $i$ is the transfinite composition of the sequence and, for each $\alpha < \kappa$, $f_{\alpha+1,\alpha}$ is a pushout of a direct sum of inflations belonging to $\mathcal H$. This last condition implies that $\Coker f_{\alpha+1,\alpha}$ is a direct sum of objects belonging to $\mathcal S$ and, consequently, belongs to $\mathcal B$. Now, for each $\alpha < \beta < \kappa$ we get the commutative diagram of conflations \begin{displaymath} \begin{tikzcd} M \arrow{r}{f_{\alpha 0}} \arrow{d} & P_\alpha \arrow{r}{\overline f_{\alpha 0}} \arrow{d}{f_{\beta\alpha}} & \Coker f_{\alpha 0} \arrow{d}{\overline f_{\beta\alpha}}\\ M \arrow{r}{f_{\beta 0}} & P_\beta \arrow{r}{\overline f_{\beta 0}} & \Coker f_{\beta 0}\\ \end{tikzcd} \end{displaymath} whose direct limit is the conflation \begin{displaymath} \begin{tikzcd} M \arrow{r}{i} & E \arrow{r}{p} & \Coker i \end{tikzcd} \end{displaymath} In particular, we get that $\Coker i$ is the composition of the transfinite sequence $(\Coker f_{\alpha 0}, \overline f_{\beta\alpha})_{\alpha < \beta < \kappa}$. Using the snake lemma, it is easily verified that $\Coker \overline{f}_{\alpha+1,\alpha} \cong \Coker f_{\alpha+1,\alpha} \in \mathcal B$, so that, by the Eklof lemma \cite[Proposition 2.12]{SaorinStovicek11}, $\Coker i \in \mathcal B$ as well. This means that $i$ is a special $\mathcal C$-preenvelope and that the cotorsion pair is complete. (2) Let $M$ be an object in $\mathcal D$. Using that the cotorsion pair is complete, there exists an inflation $i_M:M \rightarrow E$ with $E \in \mathcal C$ and $\Coker i_M \in \mathcal B$. Now denote by $\mathcal I$ the class of inflations $i:A \rightarrow B$ in $\mathcal E$ such that $\Coker i \in \mathcal B$. Since $\mathcal B$ and $\mathcal E$ are closed under direct limits, so is $\mathcal I$.
Moreover, notice that $i_M \in \mathcal I$ and, by Corollary \ref{c:CotorsionPair}, $E$ is $\mathcal I$-injective. Then we are in a position to apply Theorem \ref{t:ExistenceHulls} to obtain a minimal inflation $j_M:M \rightarrow F$ in $\mathcal I$ with $F$ an $\mathcal I$-injective object. Using that the cotorsion pair is complete, there exists a conflation \begin{displaymath} \begin{tikzcd} F \arrow{r}{u} & C \arrow{r} & B \end{tikzcd} \end{displaymath} with $C \in \mathcal C$ and $B \in \mathcal B$. In particular, $u \in \mathcal I$ and, as $F$ is $\mathcal I$-injective, this conflation splits. This means that $F \in \mathcal C$. Consequently, the inflation $j_M$ actually is a $\mathcal C$-envelope. \end{proof} \subsection{Pure-injective hulls in finitely accessible additive categories} The notion of purity in module categories can be considered in general in finitely accessible additive categories. Let $\mathcal K$ be an additive category with direct limits. Recall that an object $F$ of $\mathcal K$ is \textit{finitely presented} if, for each direct system of objects in $\mathcal K$, $(K_i,u_{ji})_{i < j \in I}$, the canonical morphism $\varinjlim \Hom_{\mathcal K}(F,K_i) \rightarrow \Hom_{\mathcal K}\left(F,\varinjlim K_i\right)$ is an isomorphism. The category $\mathcal K$ is said to be finitely accessible if it has all direct limits and there exists a set $\mathcal S$ of finitely presented objects such that every object of $\mathcal K$ can be expressed as a direct limit of objects from $\mathcal S$. Let $\mathcal K$ be a finitely accessible additive category.
A kernel-cokernel pair in $\mathcal K$ \begin{displaymath} \begin{tikzcd} K \arrow{r}{i} & M \arrow{r}{p} & L \end{tikzcd} \end{displaymath} is said to be pure if for each finitely presented object $P$, the sequence of abelian groups \begin{displaymath} \begin{tikzcd} 0 \arrow{r} & \Hom_{\mathcal K}(P,K) \arrow{r} & \Hom_{\mathcal K}(P,M) \arrow{r} & \Hom_{\mathcal K}(P,L) \arrow{r} & 0 \end{tikzcd} \end{displaymath} is exact. The class $\mathcal E_{\mathcal P}$ of all such kernel-cokernel pairs is an exact structure on $\mathcal K$, which we shall call the pure-exact structure. As in the case of modules, inflations, deflations, injectives and projectives with respect to this exact structure will be called pure-monomorphisms, pure-epimorphisms, pure-injectives and pure-projectives respectively. The main objective of this section is to apply the results of the previous one to study the existence of pure-injective hulls in $\mathcal K$. It was proved in \cite{Crawley} that every finitely accessible additive category is equivalent to the full subcategory Flat-$\mathcal S$ of additive flat functors from a small preadditive category $\mathcal S$ to the category of abelian groups. Using that any functor category (additive functors from a small preadditive category to the category of abelian groups) is equivalent to the category of unitary modules over a ring $T$ with enough idempotents (that is, an associative ring without unit but with a family of pairwise orthogonal idempotent elements $\{e_i\mid i \in I\}$ such that $T=\oplus_{i \in I}Te_i = \oplus_{i \in I}e_iT$), we get that any finitely accessible additive category is equivalent to the full subcategory of flat modules over a ring with enough idempotents.
More precisely, if $\mathcal K$ is a finitely accessible additive category and $\{F_i: i \in I\}$ is a representing set of the isomorphism classes of finitely presented objects of $\mathcal K$, and we write $F=\oplus_{i \in I}F_i$, then we consider $T=\widehat{\End}_\mathcal{K}(F)$, the subring of $\End_{\mathcal K}(F)$ consisting of all endomorphisms $f$ of $F$ such that $f(F_i)=0$ for all but finitely many indices $i \in I$. This ring, called the \textit{functor ring} of the family $\{F_i:i \in I\}$, is a ring with enough idempotents such that $\mathcal K$ is equivalent to the full subcategory of Mod-$T$ consisting of flat modules, in such a way that pure-exact sequences in $\mathcal K$ correspond to exact sequences in Flat-$T$. So, in order to study a finitely accessible additive category, we can restrict ourselves to the full subcategory of flat modules over a ring with enough idempotents. The following result can be proved as in the unitary case (see \cite[Lemma 5.3.12]{EnochsJenda}): \begin{lem} Let $T$ be a ring with enough idempotents with $|T| = \kappa$. For each unitary $T$-module $M$ and element $x \in M$ there exists a pure submodule $S$ of $M$ containing $x$ such that $|S| \leq \kappa$. \end{lem} Now we prove that any finitely accessible category satisfies Baer's criterion. \begin{theorem} Let $\mathcal K$ be a finitely accessible additive category. There exists a cardinal number $\kappa$ such that if $\mathcal G$ is the set of objects \begin{displaymath} \left\{\bigoplus_{i \in I}G_i\mid G_i \textrm{ is finitely presented and }|I| \leq \kappa\right\} \end{displaymath} then any $\mathcal G$-pure-injective object is pure-injective. \end{theorem} \begin{proof} As mentioned before, we may assume that $\mathcal K$ is the full subcategory consisting of unitary flat right $T$-modules over a ring $T$ with enough idempotents. Let $M$ be a flat $T$-module which is $\mathcal G$-pure-injective.
In order to see that it is pure-injective, by Lemma \ref{l:AInjective} and \cite[33.5]{Wisbauer} we only have to check that it is pure-injective with respect to all direct sums of finitely presented modules. Let $I$ be a set, $\{F_i:i \in I\}$ a family of finitely presented modules in $\mathcal K$ (that is, finitely generated and projective in Mod-$T$), $K$ a pure submodule of $\bigoplus_{i \in I}F_i$ and $f:K \rightarrow M$ a morphism. Write $|K| = \lambda$. Using the preceding lemma, we can construct a chain of subsets of $I$, $\{I_\alpha:\alpha < \lambda\}$, satisfying $\bigcup_{\alpha < \lambda}I_\alpha=I$ and, for each $\beta < \lambda$, $|I_{\beta+1}-I_\beta| \leq \kappa$ and $I_\beta = \bigcup_{\alpha < \beta}I_\alpha$ when $\beta$ is limit; and a chain of pure submodules of $K$, $\{K_\alpha:\alpha < \lambda\}$, satisfying $K=\bigcup_{\alpha < \lambda}K_\alpha$ and, for each $\beta < \lambda$, $K_\beta \leq \bigoplus_{i \in I_\beta}F_i$ and $K_\beta = \bigcup_{\alpha < \beta}K_\alpha$ when $\beta$ is limit. Now, using that $M$ is $\mathcal G$-pure-injective, we can define, recursively on $\alpha$, a morphism $f_\alpha:\bigoplus_{i \in I_\alpha}F_i \rightarrow M$ such that $f_\alpha \rest K_\alpha = f \rest K_\alpha$ and $f_\alpha \rest \bigoplus_{i \in I_\beta}F_i = f_\beta$ for each $\beta < \alpha$. Then the direct limit of these $f_\alpha$'s is the desired extension of $f$ to $\bigoplus_{i \in I}F_i$. This finishes the proof. \end{proof} Then we get: \begin{cor} \label{pinj} Let $\mathcal K$ be a finitely accessible additive category. Then $\mathcal K$ has enough pure-injective objects. If, in addition, $\mathcal K$ is abelian, then $\mathcal K$ has pure-injective hulls. \end{cor} \begin{proof} Again, we can assume that $\mathcal K$ is the full subcategory consisting of unitary flat right $T$-modules over a ring $T$ with enough idempotents.
Since direct limits of conflations in Mod-$T$ are conflations and $\mathcal K$ is closed under direct limits in Mod-$T$, direct limits of pure-exact sequences in $\mathcal K$ are again pure-exact. Then $\mathcal K$ satisfies (1) of Theorem \ref{t:ExistenceInjectives}. Let $\mathcal G$ be the set of objects constructed in the previous result; let us see that $\mathcal G$ satisfies the conditions of Remark \ref{r:Hyphoteses}. Since Mod-$T$ is locally small and each module is small, $\mathcal G$ satisfies (1) and (2) of Remark \ref{r:Hyphoteses}. Moreover, it satisfies (3) by the preceding theorem. By Theorem \ref{t:ExistenceInjectives}, $\mathcal K$ has enough pure-injective objects. If $\mathcal K$ is abelian, then the existence of pure-injective hulls follows from Theorem \ref{t:ExistenceHulls}. \end{proof} \end{document}
\begin{document} \title{Generic property of the partial calmness condition for bilevel programming problems\thanks{Submitted to the editors DATE. \funding{Ke's work is supported by National Science Foundation of China 71672177. The research of Ye is supported by NSERC. Zhang's work is supported by the Stable Support Plan Program of Shenzhen Natural Science Fund (No. 20200925152128002), National Science Foundation of China 11971220, Shenzhen Science and Technology Program (No. RCYX20200714114700072). The alphabetical order of the authors indicates the equal contribution to the paper.}}} \section{A detailed example} Here we include some equations and theorem-like environments to show how these are labeled in a supplement and can be referenced from the main text. Consider the following equation: \begin{equation} \label{eq:suppa} a^2 + b^2 = c^2. \end{equation} You can also reference equations such as \cref{eq:matrices,eq:bb} from the main article in this supplement. \lipsum[100-101] \begin{theorem} An example theorem. \end{theorem} \lipsum[102] \begin{lemma} An example lemma. \end{lemma} \lipsum[103-105] Here is an example citation: \cite{KoMa14}. \section[Proof of Thm]{Proof of \cref{thm:bigthm}} \label{sec:proof} \lipsum[106-112] \section{Additional experimental results} \Cref{tab:foo} shows additional supporting evidence. \begin{table}[htbp] {\footnotesize \caption{Example table} \label{tab:foo} \begin{center} \begin{tabular}{|c|c|c|} \hline \bf Species & \bf Mean & \bf Std.~Dev. \\ \hline 1 & 3.4 & 1.2 \\ 2 & 5.4 & 0.6 \\ \hline \end{tabular} \end{center} } \end{table} \end{document}
\begin{document}
\section*{\centerline{Whirly 3-Interval Exchange Transformations}}
\begin{center}
\textsc{Yue Wu\footnote{Schlumberger PTS Full Waveform Inversion Center of Excellence, Houston, Texas, USA}\\
3519 Heartland Key LN, Katy, TX, 77494\\
[email protected]}
\end{center}
\begin{abstract}
Irreducible interval exchange transformations are studied with regard to the whirly property, a condition for the absence of a non-trivial spatial factor. Uniformly whirly transformations are defined, to be studied further. An equivalent condition for whirly transformations is introduced. We prove that almost all 3-interval exchange transformations are whirly, using a combinatorial approach together with the Rauzy--Veech induction. It is still an open question whether the whirly property is generic for $m$-interval exchange transformations ($m\geq 4$).
\end{abstract}
\section*{}
\indent \indent Interval exchange transformations, an important class of dynamical systems, have been actively studied for decades. We recall some of the key theorems whose results or methods are related to the present study or to possible extensions of this paper. The unique ergodicity of measure-theoretically generic interval exchange transformations was proved by H. Masur \cite{MASUR} and W.~A. Veech \cite{VEE1} independently using geometric methods, and was proved later using mainly combinatorial methods by M. Boshernitzan \cite{BOS}. A. Avila and G. Forni \cite{AVILA-FORNI} showed that weak mixing is a measure-theoretically generic property for irreducible $m$-interval exchange transformations ($m\geq 3$). J. Chaika \cite{CHAI} developed a general result showing that any ergodic transformation is disjoint from almost all interval exchange transformations. J. Chaika and J. Fickenscher \cite{CHAI2} showed that topological mixing is a topologically residual property for interval exchange transformations.\\
\indent The concept of a whirly transformation was introduced and studied in E. Glasner, B. Weiss \cite{GW} and E. Glasner, B. Tsirelson, B. Weiss \cite{GTW}. Proposition 1.9 of E. Glasner, B. Weiss \cite{GW2} states that the near action (weak closure of all the powers) of a transformation admits no non-trivial spatial factors if and only if it is whirly. In the Introduction we recall the relevant notions and facts about interval exchange transformations and whirly transformations. A new notion introduced in this section is that of a uniformly whirly transformation. In the second section we study the space of three interval exchange transformations and deduce facts about the visitation times of the Rauzy--Veech induction. In the last section we complete the proof of the main theorem, which states that almost all three interval exchange transformations are whirly, and thus admit no non-trivial spatial factor. To prove this main theorem, we first establish an equivalent definition of whirly transformations under the assumption of ergodicity. Then the major key facts are deduced as Claim 1, Claim 2 and Claim 3, which show the whirly property for the base of the Rohlin tower associated with the Veech-induction map. Finally we apply a density point argument to extend the property to arbitrary non-null measurable sets.
\section{Introduction}\label{section1}
\indent \indent An interval exchange transformation permutes the half-closed, half-open subintervals of a half-closed, half-open interval. The subintervals have lengths given by a vector $\lambda=(\lambda_{1},\cdots,\lambda_{m})$, $\lambda_{i}>0$, $1\leq i\leq m$. All such vectors form a positive cone $\Lambda_{m}\subset \mathbb{R}^{m}$.
The subintervals thus are $[\beta_{i-1},\beta_{i})$, $1\leq i\leq m$, with $\bigcup[\beta_{i-1},\beta_{i})=[0,\left|\lambda\right|)$, where
\begin{equation}\begin{array}{l}
\mid\lambda\mid=\sum\limits_{i=1}^{m}\lambda_{i}\\
\text{and }\\
\beta_{i}(\lambda)=\left\{\begin{array}{ll}
0 & i=0\\
\sum\limits_{j=1}^{i}\lambda_{j} & 1\leq i\leq m .
\end{array}\right.
\end{array}\end{equation}
\indent Let ${\mathcal G}_{m}$ be the group of $m$-permutations, and let $\mathcal{G}_{m}^{0}$ be the subset of $\mathcal{G}_{m}$ consisting of all irreducible permutations of $\{1,2,\cdots,m\}$. A permutation $\pi$ is irreducible if and only if for any $1\leq k<m$, $\{1,2,\cdots,k\}\neq\{\pi(1),\cdots,\pi(k)\}$, or equivalently $\sum\limits_{j=1}^{k}(\pi(j)-j)>0$ $(1\leq k<m)$. Given $\lambda\in\Lambda_{m}$, $\pi\in\mathcal{G}_{m}^{0}$, the corresponding interval exchange transformation is defined by
\begin{equation}\begin{array}{l}
T_{\lambda,\pi}(x)=x-\beta_{i-1}(\lambda)+\beta_{\pi i-1}(\lambda^{\pi}),\quad (x\in[\beta_{i-1}(\lambda),\,\beta_{i}(\lambda))\,),\\
\text{where }\lambda^{\pi}=(\lambda_{\pi^{-1}1},\lambda_{\pi^{-1}2},\cdots,\lambda_{\pi^{-1}m}) .
\end{array}\end{equation}
Obviously $\beta_{\pi i-1}(\lambda^{\pi})=\sum\limits_{j=1}^{\pi i-1}\lambda_{\pi^{-1}j}$, and the transformation $T_{\lambda,\pi}$, which is also denoted by $(\lambda,\pi)$, sends the $i$th interval to the $\pi(i)$th position.\\
In M. Keane \cite{KEA}, the i.d.o.c.\ (infinite distinct orbit condition) is introduced as a sufficient condition for minimality: $(\lambda,\pi)$ is said to satisfy the {\em i.d.o.c.\/} if
\begin{enumerate}
\item[{\em i)\/}] for any $0\leq i<m$, $\{T^{k}\beta_{i},\,k\in{\mathbb Z}\}$ is an infinite set;
\item[{\em ii)\/}] $\{T^{k}\beta_{i},k\in{\mathbb Z}\}\cap\{T^{k}\beta_{j},k\in{\mathbb Z}\}=\emptyset$ whenever $i\neq j$.
\end{enumerate}
\indent Suppose $m>1$ and $(\lambda,\pi)\in\Lambda_{m}\times{\mathcal G}^{*}_{m}$, where ${\mathcal G}^{*}_{m}$ is the set of irreducible permutations with the property that $\pi(j+1)\neq\pi(j)+1$ for all $1\leq j\leq m-1$. Let $I$ be an interval of the form $I=[\xi,\eta)$, $0\leq\xi<\eta\leq\left|\lambda\right|$. Since $T$ is defined on $[0,\left|\lambda\right|)$ and preserves Lebesgue measure, Lebesgue almost all points of $I$ return to $I$ infinitely often under iteration of $T$. We use $T|_{I}$ to denote the induced transformation of $T$ on $I$. By W.~A. Veech \cite{VEE4}, $T|_{I}$ is an interval exchange transformation with $(m-2)$, $(m-1)$, or $m$ discontinuities.
\begin{definition}[Admissible Interval; W.~A. Veech \cite{VEE4}]\label{AdmInt}
Suppose $(\lambda,\pi)$ satisfies the i.d.o.c., and $I=(\xi,\eta)$ where $\xi=T^{k}\beta_{s}$ $(1\leq s<m)$, $\eta=T^{l}\beta_{t}$ $(1\leq t<m)$, and $\tau\in\{k,l\}$ has the following property: if $\tau\geq 0$, there is no $j$, $0<j<\tau$, such that $T^{j}\beta_{s}\in I$; if $\tau<0$, there is no $j$, $0\geq j>\tau$, such that $T^{j}\beta_{s}\in I$. Then we say that $I$ is an admissible subinterval of $(\lambda,\pi)$.
\end{definition}
{\em\bf Rauzy--Veech induction.\/} For $T_{\lambda,\pi}$, the Rauzy map sends it to its induced map on $[0,\left|\lambda\right|-\min\left\{\lambda_{m},\lambda_{\pi^{-1}m}\right\})$, which is the largest admissible interval of the form $J=[0,L)$, $0<L<\left|\lambda\right|$.\\
Given any permutation, the two actions $a$ and $b$ are:
\begin{eqnarray}
a(\pi)(i)=\left\{\begin{array}{ll}
\pi(i) & i\leq\pi^{-1}m\\
\pi(i-1) & \pi^{-1}m+1<i\leq m\\
\pi(m) & i=\pi^{-1}m+1
\end{array}\right.
\end{eqnarray}
and
\begin{eqnarray}
b(\pi)(i)=\left\{\begin{array}{ll}
\pi(i) & \pi(i)\leq\pi(m)\\
\pi(i)+1 & \pi(m)+1<\pi(i)<m\\
\pi(m)+1 & \pi(i)=m .
\end{array}\right.
\end{eqnarray}
\indent The Rauzy--Veech map $\mathcal{Z}(\lambda,\pi):\,\Lambda_{m}\times\mathcal{G}_{m}^{0}\rightarrow\Lambda_{m}\times\mathcal{G}_{m}^{0}$ is determined by
\begin{eqnarray}\label{RzVch}
\mathcal{Z}(\lambda,\pi)=(A(\pi,c)^{-1}\lambda,\,c\pi) ,
\end{eqnarray}
where $c=c(\lambda,\pi)$ is defined by
\begin{eqnarray}
c(\lambda,\pi)=\left\{\begin{array}{ll}
a, & \lambda_{m}<\lambda_{\pi^{-1}m}\\
b, & \lambda_{m}>\lambda_{\pi^{-1}m} .
\end{array}\right.
\end{eqnarray}
$\mathcal{Z}(\lambda,\pi)$ is defined a.e.\ on $\Lambda_{m}\times\left\{\pi\right\}$, for each $\pi\in\mathcal{G}_{m}^{0}$.\\
\indent The matrices $A=A(\pi,c)$ in (\ref{RzVch}) are defined as follows:
\begin{equation}
A(\pi,a)=\left(\begin{tabular}{c|c}
$I_{\pi^{-1}m}$ &
$\begin{array}{ccccc}
0&0&\cdots&0&0\\
.&.&\cdots&.&.\\
0&0&\cdots&0&0\\
1&0&\cdots&0&0
\end{array}$\\ \hline\\
0 &
$\begin{array}{ccccc}
0&1&0&\cdots&0\\
0&0&1&\cdots&0\\
.&.&.&\cdots&.\\
0&0&0&\cdots&1\\
1&0&0&\cdots&0
\end{array}$\\
\end{tabular}\right)
\end{equation}
\begin{eqnarray}
A(\pi,b)=\left(\begin{tabular}{c|c}
$I_{m-1}$ & 0\\ \hline\\
$\underbrace{\begin{array}{ccccccc}
0&\cdots&0&1&0&\cdots&0\end{array}}_{\mbox{1 at the $j$th position}}$ & 1\\
\end{tabular}\right)
\end{eqnarray}
where $I_{k}$ is the $k\times k$ identity matrix, and $j=\pi^{-1}m$.\\
\indent The normalized Rauzy map ${\mathcal R}:\,\Delta_{m-1}\times{\mathcal G}^{0}_{m}\rightarrow\Delta_{m-1}\times{\mathcal G}^{0}_{m}$ is defined by
\begin{eqnarray}
{\mathcal R}(\lambda,\pi)=\left(\frac{A(\pi,c)^{-1}\lambda}{\left|A(\pi,c)^{-1}\lambda\right|},\,c\pi\right)=\left(\frac{\pi^{*}_{1}{\mathcal Z}(\lambda,\pi)}{\left|\pi^{*}_{1}{\mathcal Z}(\lambda,\pi)\right|},\,\pi^{*}_{2}{\mathcal Z}(\lambda,\pi)\right),
\end{eqnarray}
where $\pi^{*}_{1}$ and $\pi^{*}_{2}$ are the projections to the first and the second coordinate respectively.\\
Iteratively,
\begin{eqnarray}\label{Cn}
\mathcal{Z}^{n}(\lambda,\pi)=((A^{(n)})^{-1}\lambda,\,c^{(n)}\pi)=(\lambda^{(n)},\pi^{(n)}),
\end{eqnarray}
where
\begin{eqnarray}\label{An}
c^{(n)}=c_{n}c_{n-1}\cdots c_{1},\quad c_{1},\cdots,c_{n}\in\{a,b\},\ c_{i}=c(\mathcal{Z}^{i-1}(\lambda,\pi)),
\end{eqnarray}
and
\begin{eqnarray}
A^{(n)}=A(\pi,c_{1})A(c^{(1)}\pi,c_{2})A(c^{(2)}\pi,c_{3})\cdots A(c^{(n-1)}\pi,c_{n}) .
\end{eqnarray}
\indent The Rauzy class ${\mathcal C}\subseteq{\mathcal G}_{m}$ of $\pi$ is the orbit of $\pi$ under the group of maps generated by $a$ and $b$. On the ${\mathcal R}$-invariant component $\Delta_{m-1}\times{\mathcal C}$, we have:
\begin{theorem}[H. Masur \cite{MASUR}; W.~A. Veech \cite{VEE1}]\label{Ergodic}
Let $\pi\in{\mathcal G}_{m}^{0}$, the set of irreducible permutations. For Lebesgue almost all $\lambda\in\Lambda_{m}$, normalized Lebesgue measure on $I^{\lambda}$ is the unique invariant Borel probability measure for $T_{(\lambda,\pi)}$. In particular, $T_{(\lambda,\pi)}$ is ergodic for almost all $\lambda$.
\end{theorem}
{\em\bf Whirly action, whirly automorphism.\/} In this paper, we use the weak topology as defined in the following Definition \ref{WeakTp}.
\begin{definition}[Weak Topology on the Automorphism Group]\label{WeakTp}
Let $({\mathbb X},{\mathcal B},\mu)$ be a standard probability Borel space, and let $G=Aut({\mathbb X},{\mathcal B},\mu)$ be the group of all non-singular measurable automorphisms of $({\mathbb X},{\mathcal B},\mu)$. Suppose $(E_{n})$ is a countable family of measurable subsets generating ${\mathcal B}$. The weak topology of $G$ is generated by the metric $d(S,T)$, for $S,T\in G$, where $d(S,T)=\sum_{n=1}^{\infty}2^{-n}\mu(SE_{n}\triangle TE_{n})$.
\end{definition}
Using the weak topology defined above, one can carry the concept of a whirly action (Definition \ref{WhirlyAc}) over to whirly automorphisms (Definition \ref{whirly1}). This is included in the following review of the definitions and fundamental propositions about whirly actions and whirly automorphisms.\\
The whirly action was introduced in E. Glasner, B. Tsirelson, B. Weiss \cite{GTW}, Definition 3.1. The purpose is to study conditions under which a Polish group action admits a spatial model. In the same paper, they transferred the concept of whirly from group actions to automorphisms, since the weak closure of a rigid automorphism is a near action. They showed that in the group $G$ of automorphisms of a finite Lebesgue space, whirly (in the sense of ${\mathbb Z}$-actions) is a topologically generic property, i.e.\ the set of whirly automorphisms is residual in $G$. The concept of a whirly transformation is inherited from the theory of general group actions, and implies weak mixing. It is interesting to ask whether whirly is a generic property in the space of interval exchange transformations.
Theorem \ref{Main} gives a positive answer for three interval exchange transformations.\\
\indent Without considering the measure, we have the Borel action, satisfying conditions similar to those in Definition \ref{Near}, defined below:
\begin{definition}[Borel Action]\label{Borel}
Suppose $G$ is a Polish group and $({\mathbb X},{\mathcal B},\mu)$ is a standard probability Borel space. We say a Borel map $G\times{\mathbb X}\rightarrow{\mathbb X}$ $((g,x)\rightarrow gx)$ is a Borel action of $G$ on $({\mathbb X},{\mathcal B},\mu)$ if it satisfies the following properties:\\
{\em (i)\/} $\quad$ $ex=x$ for all $x\in{\mathbb X}$, where $e$ is the identity element of $G$;\\
{\em (ii)\/} $\quad$ $g(hx)=(gh)x$ for all $x\in{\mathbb X}$, where $g,h\in G$.
\end{definition}
\begin{definition}[Spatial ${\mathbb P}$-Action: E. Glasner, B. Tsirelson, B. Weiss \cite{GTW}]\label{Spatial}
A spatial ${\mathbb P}$-action on a standard Lebesgue space $({\mathbb X},{\mathcal B},\mu)$ is a Borel action of ${\mathbb P}$ on the space such that each $g\in{\mathbb P}$ preserves $\mu$.
\end{definition}
The concept of a near action is introduced measure theoretically:
\begin{definition}[Near Action: E. Glasner, B. Tsirelson, B. Weiss \cite{GTW}]\label{Near}
Suppose ${\mathbb P}$ is a Polish group and $({\mathbb X},{\mathcal B},\mu)$ is a standard probability Borel space. We say a Borel map ${\mathbb P}\times{\mathbb X}\rightarrow{\mathbb X}$ $((g,x)\rightarrow gx)$ is a near action of ${\mathbb P}$ on $({\mathbb X},{\mathcal B},\mu)$ if it satisfies the following properties:\\
{\em (i)\/} $\quad$ $ex=x$ for a.e.\ $x\in{\mathbb X}$, where $e$ is the identity element of ${\mathbb P}$;\\
{\em (ii)\/} $\quad$ $g(hx)=(gh)x$ for a.e.\ $x\in{\mathbb X}$, where $g,h\in{\mathbb P}$;\\
{\em (iii)\/} $\quad$ each $g\in{\mathbb P}$ preserves the measure $\mu$.
\end{definition}
{\bf Note.\/} The set of measure one in Definition \ref{Near} (ii) may depend on the pair $g,h$. It is easy to see that a near action is a continuous homomorphism from ${\mathbb P}$ to $G$ ($G$ being the automorphism group of ${\mathbb X}$).\\
Now we define the key concept of this paper:
\begin{definition}[Whirly Action: E. Glasner, B. Tsirelson, B. Weiss \cite{GTW}]\label{WhirlyAc}
We say the near action of ${\mathbb P}$ on $({\mathbb X},{\mathcal B},\mu)$ is whirly if, given $\varepsilon>0$, for all sets $E,F\in{\mathcal B}$ with $\mu(E),\mu(F)>0$ there exists $\gamma\in N_{\varepsilon}(Id)$ (the $\varepsilon$-neighborhood of the identity $Id=e$ in ${\mathbb P}$) such that $\mu(E\cap\gamma F)>0$.
\end{definition}
\begin{theorem}[E. Glasner, B. Tsirelson, B. Weiss \cite{GTW}, Proposition 3.3]\label{Factor}
A whirly action does not admit a non-trivial spatial factor, and thus has no spatial model.
\end{theorem}
\begin{remark}
If an automorphism $({\mathbb X},{\mathcal B},T,\mu)$ is rigid, then its weak closure $Wcl(T)$ is a closed subgroup of $G=Aut({\mathbb X},{\mathcal B},\mu)$. With the induced topology, $Wcl(T)$ is also a Polish space. Based on this fact, the whirly transformation is a concept induced from the whirly action.
\end{remark}
Let $({\mathbb X},{\mathcal B},\mu)$ be the standard Lebesgue probability space, ${\mathbb X}=[0,1]$, and denote by $G=Aut({\mathbb X})$ the Polish group of its automorphisms.
\begin{definition}[Whirly Automorphism]\label{whirly1}
We say a rigid system $({\mathbb X},{\mathcal B},\mu,T)$ is whirly if, given $\varepsilon>0$, for any positive measure sets $E$ and $F$ $(\mu(E),\mu(F)>0)$ in ${\mathcal B}$, there exists $n$ such that $T^{n}\in U_{\varepsilon}$ (the $\varepsilon$-neighborhood of the identity map in the weak topology of $G$) and $\mu(T^{n}E\cap F)>0$.
\end{definition}
Whirly implies rigid. E. Glasner, B. Weiss \cite{GW}, Corollary 4.2, showed that if $({\mathbb X},{\mathcal B},\mu,T)$ is whirly then it is weakly mixing. In the same paper it is proved, as Theorem 5.2, that:
\begin{theorem}[E. Glasner, B. Weiss \cite{GW}, Theorem 5.2]
The set of all whirly transformations is residual (a dense $G_{\delta}$ subset) in $G$.
\end{theorem}
Next we introduce a new notion, uniformly whirly, which is stronger than or equivalent to whirly:
\begin{definition}[Uniformly Whirly]
A rigid system $({\mathbb X},{\mathcal B},\mu,T)$ is uniformly whirly if, given $\varepsilon>0$, for any $0<\alpha,\beta<1$ we have
$$\underset{\mu(E)=\alpha,\,\mu(F)=\beta}{\inf}\quad\underset{T^{n}\in U_{\varepsilon}}{\sup}\{\mu(T^{n}E\cap F)\}>0 .$$
\end{definition}
Uniformly whirly implies whirly.\\
\textbf{Questions:} Is uniformly whirly equivalent to whirly? If not, is the collection of uniformly whirly automorphisms a dense $G_{\delta}$ subset of $G$?\\
\indent It is interesting to ask whether the whirly property is generic in the space of $m$-interval exchange transformations ($m\geq 3$). The main theorem (Theorem \ref{Main}) of this paper provides a positive answer for $m=3$. Below is the main result and an outline of the proof:
\begin{theorem}\label{Main}
Let $\pi=(3,2,1)$. For Lebesgue almost all $\lambda\in\Lambda_{3}$, the three-dimensional cone of positive real numbers, the interval exchange transformation $T_{(\lambda,\pi)}$ is whirly.
\end{theorem}
{\em\bf Outline of the proof of Theorem \ref{Main}.\/}\\
\indent First, we give an equivalent definition of whirly transformations (Definition \ref{whirly2}).
This definition enables us to use the cyclic approximation by the rank-one stacking structure (\cite{VEE3}, Section 3) associated with the Rauzy--Veech induction more effectively.\\
\indent Second, for the symmetric $3$-permutation $\pi$, we study the Veech induction map ${\mathcal T}_{2}:\lambda\rightarrow\frac{\alpha}{\left|\alpha\right|}$, $\left|\alpha\right|=\max\{\lambda_{1},\lambda_{3}\}$, and we observe that the visitation times $(a_{1},a_{2},a_{3})$ of the subintervals of $\frac{\alpha}{\left|\alpha\right|}$ satisfy the equation $a_{2}=a_{1}+a_{3}-1$. Considering the cyclic approximation by the rank-one stacking structure, we construct a series of cyclic approximations whose base interval is the second subinterval of $\alpha$. Together with the relation $a_{2}=a_{1}+a_{3}-1$, we demonstrate the fundamental structure underlying the whirly property, summarized as Lemma \ref{Major}.\\
\indent The last part of the proof uses a density point argument to extend the fundamental structure based on the Veech induction to general measurable subsets.
\begin{conjecture}\label{Conj}
Let $\pi\in\mathcal{G}_{m}^{0}$, $m\geq 3$. For Lebesgue almost all $\lambda\in\Lambda_{m}$, the $m$-dimensional cone of positive real numbers, the interval exchange transformation $T_{(\lambda,\pi)}$ is whirly.
\end{conjecture}
\section{The Space of Three Interval Exchange Transformations}
\indent \indent In W.~A. Veech \cite{VEE3}, key results in the theory of the space of interval exchange transformations are established. We will utilize results from W.~A. Veech \cite{VEE1} and \cite{VEE3}. Let $m>1$ and, specifically here, let $\pi$ be the symmetric permutation (i.e.\ $\pi=(m,m-1,\cdots,1)$).
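As a concrete illustration of the transformation $T_{\lambda,\pi}$ defined in Section 1, the sketch below implements a symmetric 3-interval exchange directly; the length vector is an arbitrary choice made for this example and is not taken from the paper.

```python
# Minimal sketch (our illustration, not code from the paper) of the interval
# exchange transformation T_{lambda,pi} of Section 1, for pi = (3,2,1).

def iet(lengths, pi):
    """Return T_{lambda,pi} as a function on [0, |lambda|).

    `pi` is 1-based: pi[i-1] is the position interval i is sent to.
    """
    m = len(lengths)
    beta = [0.0]
    for length in lengths:                 # beta_i = lambda_1 + ... + lambda_i
        beta.append(beta[-1] + length)
    inv = {pi[i]: i for i in range(m)}     # 0-based pi^{-1}
    lengths_pi = [lengths[inv[j + 1]] for j in range(m)]   # lambda^pi
    beta_pi = [0.0]
    for length in lengths_pi:
        beta_pi.append(beta_pi[-1] + length)

    def T(x):
        # locate the subinterval [beta_{i-1}, beta_i) containing x ...
        i = next(k for k in range(1, m + 1) if x < beta[k])
        # ... and translate it to position pi(i)
        return x - beta[i - 1] + beta_pi[pi[i - 1] - 1]

    return T

lam = [0.3, 0.5, 0.2]             # an arbitrary lambda in the cone Lambda_3
T = iet(lam, (3, 2, 1))           # symmetric permutation: i -> m - i + 1
assert abs(T(0.0) - 0.7) < 1e-12  # [0, 0.3) is sent to the last position
ys = sorted(T(k / 100) for k in range(100))
assert all(ys[i + 1] - ys[i] > 1e-12 for i in range(99))   # T is injective
```

Because $T_{\lambda,\pi}$ is a piecewise translation, injectivity on sample points is a cheap sanity check that the break points $\beta_i$ and $\beta_i(\lambda^{\pi})$ were assembled consistently.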
In W.~A. Veech \cite{VEE2} it is proved that for almost every $\lambda$ the induced transformation of $T_{\lambda,\pi}$ on $[0,\max\{\lambda_{1},\lambda_{m}\})$ is an $(\alpha,\pi)$ interval exchange transformation with $\left|\alpha\right|=\max\{\lambda_{1},\lambda_{m}\}$ and $\pi$ still the same symmetric permutation. That is, there is a transformation ${\mathcal T}_{2}:(\lambda,\pi)\rightarrow(\frac{\alpha}{\left|\alpha\right|},\pi)$, or simply ${\mathcal T}_{2}(\lambda)\sim{\mathcal T}_{2}(\lambda,\pi)$. So, without confusion, let ${\mathcal T}_{2}(\lambda)={\mathcal T}_{2}(\lambda,\pi)$. When $m=3$, $f_{2}(\lambda)=\left(\frac{1}{1-\lambda_{1}}+\frac{1}{1-\lambda_{3}}\right)\prod^{2}_{j=1}\frac{1}{\lambda_{j}+\lambda_{j+1}}$ is the density of a conservative ergodic invariant measure for ${\mathcal T}_{2}$, by W.~A. Veech \cite{VEE1}.\\
\indent We claim that if $(\lambda,\pi)$ satisfies the i.d.o.c.\ and $\pi(j)=m-j+1$, there exists some $k$ such that ${\mathcal Z}^{k}(\lambda,\pi)=(\alpha,\pi)$ with $\left|\alpha\right|=\max\{\lambda_{1},\lambda_{m}\}$. To verify this we need the following lemma:
\begin{lemma}\label{AdmLger}
If $\lambda\in\Lambda_{m}$, $m\geq 3$, $T_{(\lambda,\pi)}$ satisfies the i.d.o.c., and ${\mathcal Z}^{k}(\lambda,\pi)=(\lambda',\pi')$, where $k$ is the largest integer such that $\left|\lambda'\right|>\max\{\lambda_{1},\lambda_{m}\}$, then $J=[0,\max\{\lambda_{1},\lambda_{m}\})$ is an admissible interval of $(\lambda',\pi')$.
\end{lemma}
\begin{proof}
If $\lambda_{1}>\lambda_{m}$, then $\lambda_{1}$ is a discontinuity point of $T_{(\lambda',\pi')}$, and $[0,\lambda_{1})$ is an admissible interval of $(\lambda',\pi')$. If $\lambda_{m}>\lambda_{1}$, let $\beta_{t}'=\beta_{t}'(\lambda')=\sum_{i=1}^{t}\lambda_{i}'$. Since $\lambda_{m}=T(\beta_{m-2})$ and $T_{(\lambda',\pi')}$ is the induced transformation of $T_{(\lambda,\pi)}$ on $[0,|\lambda'|)$, there exist $1\leq t\leq m-1$ and $k_{t}>0$ such that $\lambda_{m}=T_{(\lambda',\pi')}(\beta_{t}')=T_{(\lambda,\pi)}^{k_{t}}(\beta_{t}')$. By the definition of an admissible interval, $[0,\lambda_{m})$ is an admissible interval associated with $T_{\lambda',\pi'}$.
\end{proof}
\begin{proposition}
Suppose $\lambda\in\Lambda_{m}$, $\pi(j)=m-j+1$, $1\leq j\leq m$, and $(\lambda,\pi)$ satisfies the i.d.o.c. Then there exists $k_{0}\in{\mathbb N}$ such that ${\mathcal Z}^{k_{0}}(\lambda,\pi)=(\alpha,\pi)$, where $\left|\alpha\right|=\max\{\lambda_{1},\lambda_{m}\}$. Therefore, ${\mathcal T}_{2}(\lambda,\pi)={\mathcal R}^{k_{0}}(\lambda,\pi)$.
\end{proposition}
\begin{proof}
Assume that for all $k\in{\mathbb N}$, ${\mathcal Z}^{k}(\lambda,\pi)=(\alpha^{(k)},\pi)$ with $\left|\alpha^{(k)}\right|\neq\max\{\lambda_{1},\lambda_{m}\}$. Since $(\lambda,\pi)$ satisfies the i.d.o.c., $\left|\pi^{*}_{1}({\mathcal Z}^{k}(\lambda,\pi))\right|\rightarrow 0$ as $k\rightarrow\infty$ (see M. Viana \cite{VIA}, Corollary 5.2, for a detailed proof), so there exists $k_{0}\geq 0$ such that $\left|\pi^{*}_{1}({\mathcal Z}^{k_{0}}(\lambda,\pi))\right|>\max\{\lambda_{1},\lambda_{m}\}$ and $\left|\pi^{*}_{1}({\mathcal Z}^{k_{0}+1}(\lambda,\pi))\right|<\max\{\lambda_{1},\lambda_{m}\}$. By Lemma \ref{AdmLger}, for any $r>\left|\pi^{*}_{1}({\mathcal Z}^{k_{0}+1}(\lambda,\pi))\right|$, $[0,r)$ is not an admissible interval of $(\lambda',\pi')={\mathcal Z}^{k_{0}}(\lambda,\pi)$, a contradiction to the fact that $[0,\max\{\lambda_{1},\lambda_{m}\})$ is an admissible interval of $(\lambda',\pi')$.
\end{proof}
The above argument assures us that essential general results about the iteration of the Rauzy--Veech induction may be applied to ${\mathcal T}_{2}$. For convenience, let us denote the induced map of $T_{\lambda,\pi}$ on $[0,\max\{\lambda_{1},\lambda_{m}\})$ by $(\alpha,\pi)$, and define ${\mathcal Z}_{*}:\Lambda_{m}\times\{\pi\}\rightarrow\Lambda_{m}\times\{\pi\}$ by ${\mathcal Z}_{*}(\lambda,\pi)=(\alpha,\pi)$ with $\left|\alpha\right|=\max\{\lambda_{1},\lambda_{m}\}$.\\
\indent Next we limit the discussion to the case $m=3$. Recall from Section 1 the visitation matrix associated with ${\mathcal Z}^{n}(\lambda,\pi)$, ${\mathcal Z}^{n}(\lambda,\pi)=(\alpha^{(n)},\pi)$. We have $\lambda=A^{(n)}\alpha^{(n)}$, and the sum of the $i$th column of $A^{(n)}$, $a^{(n)}_{i}$, is the first return time of the $i$th subinterval of $[0,\left|\alpha^{(n)}\right|)$ under $T_{(\lambda,\pi)}$. It will be shown that for all $n\in{\mathbb N}$, $a^{(n)}_{2}=a^{(n)}_{1}+a^{(n)}_{3}-1$. In fact we will verify the same equality in a more general case.
This is done by looking at the Rauzy graph for the closed paths based at $\pi=(3,2,1)$. The Rauzy class of $\pi=(3,2,1)$ is $\{\pi,\pi_{1},\pi_{2}\,|\,\pi_{1}=a\pi=(3,1,2),\ \pi_{2}=b\pi=(2,3,1)\}$.\\
$$A(\pi,a)=A(\pi_{1},a)=\left(\begin{array}{ccc} 1&1&0\\ 0&0&1\\ 0&1&0 \end{array}\right)$$
$$A(\pi,b)=A(\pi_{1},b)=\left(\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 1&0&1 \end{array}\right)$$
$$A(\pi_{2},a)=\left(\begin{array}{ccc} 1&0&0\\ 0&1&1\\ 0&0&1 \end{array}\right)$$
$$A(\pi_{2},b)=\left(\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&1&1 \end{array}\right).$$
\begin{lemma}\label{RecurTime}
Suppose ${\mathcal Z}_{*}(\lambda,\pi)=(\alpha,\pi)$ with $\lambda\in\Lambda_{3}$, $\pi=(3,2,1)$, and let $A$ be the associated visitation matrix (i.e.\ $\lambda=A\alpha$). Then $a_{2}+1=a_{1}+a_{3}$.
\end{lemma}
\begin{proof}
To prove this lemma, we examine the following two cases:
\begin{case}\label{case1}[$ab^{l}a\mbox{ or }ba^{l}b$]
\begin{enumerate}
\item Starting from $\pi$, go along the path $ab^{l}a$ and come back to $\pi$. Then the associated visitation matrix is $A^{(l+2)}$, and we want to show that
$$a^{(l+2)}_{2}=a^{(l+2)}_{1}+a^{(l+2)}_{3}-1.$$
Since
$$A^{(1)}=A(\pi,a)=\left(\begin{array}{ccc} 1&1&0\\ 0&0&1\\ 0&1&0 \end{array}\right),$$
we have
$$a^{(1)}_{1}=a^{(1)}_{3}=1,\quad a^{(1)}_{2}=2,$$
and
$$A^{(2)}=A^{(1)}\cdot A(\pi,b)=(A^{(1)}_{1},\,A^{(1)}_{2},\,A^{(1)}_{3})\cdot\left(\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 1&0&1 \end{array}\right)=(A^{(1)}_{1}+A^{(1)}_{3},\,A^{(1)}_{2},\,A^{(1)}_{3}),$$
where $A^{(n)}_{i}$ is the $i$th column vector of $A^{(n)}$. Iterating the $b$-step $l$ times gives
$$A^{(l+1)}=(A^{(1)}_{1}+lA^{(1)}_{3},\,A^{(1)}_{2},\,A^{(1)}_{3}),$$
\begin{eqnarray}
\begin{array}{l}
A^{(l+2)}=A^{(l+1)}\cdot A(\pi_{1},a)\\
=(A^{(1)}_{1}+lA^{(1)}_{3},\,A^{(1)}_{2},\,A^{(1)}_{3})\cdot\left(\begin{array}{ccc} 1&1&0\\ 0&0&1\\ 0&1&0 \end{array}\right)\\
=(A^{(1)}_{1}+lA^{(1)}_{3},\,A^{(1)}_{1}+(l+1)A^{(1)}_{3},\,A^{(1)}_{2}).
\end{array}
\end{eqnarray}
Therefore
\begin{eqnarray}
\begin{array}{l}
a^{(l+2)}_{1}=a^{(1)}_{1}+la^{(1)}_{3}=l+1\\
a^{(l+2)}_{2}=a^{(1)}_{1}+(l+1)a^{(1)}_{3}=l+2\\
a^{(l+2)}_{3}=a^{(1)}_{2}=2.
\end{array}
\end{eqnarray}
Thus $a_{2}=a_{1}+a_{3}-1$ is proved for the path $ab^{l}a$.
\item By an argument similar to the one above, if we replace the path $ab^{l}a$ by the path $ba^{l}b$, the associated matrix $A^{(l+2)}$ satisfies
$$a^{(l+2)}_{2}=a^{(l+2)}_{1}+a^{(l+2)}_{3}-1.$$
\end{enumerate}
\end{case}
\begin{case}\label{case2}[$p_{0}ab^{l}a\mbox{ or }p_{0}ba^{l}b$]
\begin{enumerate}
\item\label{step01} Suppose the closed path is $p=p_{0}ab^{l}a$, where $p_{0}$ is a closed path based at $\pi=(3,2,1)$ of length $n_{0}$, and the matrix $A^{(n_{0})}$ associated with $p_{0}$ has column sums $a^{(n_{0})}_{1},a^{(n_{0})}_{2},a^{(n_{0})}_{3}$ satisfying $a^{(n_{0})}_{2}+1=a^{(n_{0})}_{1}+a^{(n_{0})}_{3}$. Then, by a computation similar to that of Case \ref{case1}, we conclude that after going along $p$ the return times satisfy
\begin{eqnarray}
a^{(n_{0}+l+2)}_{2}=a^{(n_{0}+l+2)}_{1}+a^{(n_{0}+l+2)}_{3}-1.
\end{eqnarray}
\item Similarly to \ref{step01} above, the same relation on the three return times holds for the path $p=p_{0}ba^{l}b$.
\end{enumerate}
\end{case}
\indent By Case \ref{case1} and Case \ref{case2} we have proved Lemma \ref{RecurTime}.
\end{proof}
\section{Whirly Three-Interval Exchange Transformations}
\indent \indent Before discussing the 3-interval exchange transformations, let us introduce another way to define the concept of a whirly automorphism and verify the equivalence of the two definitions:
\begin{definition}[Whirly Automorphism]\label{whirly2}
A rigid ergodic automorphism $T\in G$ is said to be whirly if, given $\varepsilon>0$, for any $l\in{\mathbb N}$ (or for any $-l\in{\mathbb N}$) and any set $E\in{\mathcal B}$ of positive $\mu$-measure, there exists $n\in{\mathbb N}$ such that $T^{n}\in U_{\varepsilon}$ and $\mu(T^{n}E\cap T^{l}E)>0$.
\end{definition}
\begin{theorem}\label{Equiv}
The conditions in Definition \ref{whirly1} and Definition \ref{whirly2} for an automorphism to be whirly are equivalent.
\end{theorem}
\begin{proof}
Suppose $T\in G$ satisfies the condition in Definition \ref{whirly2} (w.l.o.g.\ we take the case $-l\in{\mathbb N}$). We claim that for any $E,F\in{\mathcal B}$ with $\mu(E),\mu(F)>0$ there exists $n\in{\mathbb N}$ such that $T^{n}\in U_{\varepsilon}$ and $\mu(T^{n}E\cap F)>0$. Since $T$ is ergodic, there exists $-q\in{\mathbb N}$ such that
$$\mu(T^{q}E\cap F)>0.$$
\indent Then
$$\mu(E\cap T^{-q}F)>0,$$
so there exists $n\in{\mathbb N}$ such that $T^{n}\in U_{\varepsilon}$ and
$$\mu(T^{n}(E\cap T^{-q}F)\cap T^{q}(E\cap T^{-q}F))>0.$$
\indent Thus $\mu(T^{n}E\cap F)>0$.\\
\indent The opposite direction is obvious.
\end{proof}
\indent Let $\pi$ be the symmetric $m$-permutation. According to W.~A.~Veech \cite{VEE3}, there exist $c_{1},c_{2},\cdots,c_{n}\in\{a,b\}$ such that $c_{n}\circ c_{n-1}\circ\cdots\circ c_{1}(\pi)=\pi$.\\
Let $\pi^{(0)}=\pi,\ \pi^{(1)}=c_{1}\pi^{(0)},\ \pi^{(2)}=c_{2}\pi^{(1)},\cdots,\pi^{(n)}=c_{n}\pi^{(n-1)}=\pi$, and let $A^{(i)}=A(\pi^{(i-1)},c_{i})$, $1\leq i\leq n$. Then $B=A^{(1)}A^{(2)}\cdots A^{(n)}$ is a positive $m\times m$ matrix.
\begin{remark}
If $\lambda\in\Lambda_{m}$, then ${\mathcal Z}^{n}(B\lambda,\pi)=(\lambda,\pi)$, and the orbit of $(B\lambda,\pi)$ under ${\mathcal Z}$ passes through the same sequence of permutations $\{\pi^{(j)},0\leq j\leq n\}$.
\end{remark}
Let $\nu(A)=\underset{1\leq i,j,k\leq m}{\max}\{\frac{\displaystyle a_{ij}}{\displaystyle a_{ik}}\}$, where $A$ is a positive matrix. Then:
\begin{eqnarray}\label{Mv}
a_{i}\leq\nu(A)a_{j},\; 1\leq i,j\leq m\mbox{ }(a_{i}\mbox{ is the }i\mbox{th column sum of }A)
\end{eqnarray}
\begin{eqnarray}\label{MvStar}
\nu(MA)\leq\nu(A),\mbox{ for any nonnegative matrix }M\mbox{ with at least one nonzero element.}
\end{eqnarray}
We see that $\nu(B)$ and $\nu(B^{t})$ are both greater than one. Let $r=\nu(B)$ and $r'=\nu(B^{t})$.\\
\indent Next we fix $m=3$, while still keeping the notations as above. Let $l$ be a given positive integer; we will set up an open set in $\Lambda_{3}\times\{\pi\}$ and carry out some computations on the approximation by the Kakutani tower associated with the Rauzy induction.\\
\indent Let $\varepsilon_{1},\varepsilon_{2}$ be two small positive numbers to be specified later. Let $Y^{*}(\varepsilon_{1},\varepsilon_{2})=\{\alpha\in\Lambda_{m}\,|\,(1-\frac{\varepsilon_{1}}{2})\left|\alpha\right|>\alpha_{2}>(1-\varepsilon_{1})\left|\alpha\right|\mbox{ and }(1+\varepsilon_{2})\alpha_{3}>\alpha_{1}>\alpha_{3}\}$, an open subset of $\Lambda_{m}$. Let $W(\varepsilon_{1},\varepsilon_{2})=B^{2}Y^{*}(\varepsilon_{1},\varepsilon_{2})\times\{\pi\}$, an open subset of $\Lambda_{m}\times\{\pi\}$.\\
Suppose $(\lambda,\pi)\in\Delta_{2}\times\{\pi\}$ and there exists $k\in{\mathbb N}$ such that ${\mathcal Z}^{k}(\lambda,\pi)=(\xi,\pi)\in W(\varepsilon_{1},\varepsilon_{2})$. We know $\xi=B^{2}\alpha$ for some $\alpha\in Y^{*}(\varepsilon_{1},\varepsilon_{2})$.
Then $\lambda=A^{(k)}\xi$, where $A^{(k)}$ is the visitation matrix associated with ${\mathcal Z}^{k}(\lambda,\pi)$; moreover ${\mathcal Z}^{k+2n}(\lambda,\pi)={\mathcal Z}^{2n}(\xi,\pi)=(\alpha,\pi)$ and $\lambda=A^{(k)}B^{2}\alpha$. Let $A=A^{(k)}B^{2}$. Since $A^{(k)}$ is a nonnegative matrix, by \ref{MvStar} we have $\nu(A)\leq\nu(B)=r$. Therefore the following claims give us a clear view of the stack structure associated with the Veech-induction map ${\mathcal T}_{2}$:
\begin{itemize}
\item[\bf Claim 1\/] $T^{a_{2}}$ translates the subinterval $I^{\alpha}_{2}$ (i.e.\ the second subinterval of $I^{\alpha}$) to the left by $(\alpha_{1}-\alpha_{3})$, so that
$$I^{\alpha}_{2}\cap T^{la_{2}}(I^{\alpha}_{2})=\left[\alpha_{1},\,\alpha_{1}+\alpha_{2}-l(\alpha_{1}-\alpha_{3})\right).$$
Since $l$ is a fixed positive integer and $\varepsilon_{1}$, $\varepsilon_{2}$ are small enough, we have
\begin{eqnarray}\label{Claim1}
\begin{array}{l}
\mu(I^{\alpha}_{2}\cap T^{la_{2}}(I^{\alpha}_{2}))=\alpha_{2}-l(\alpha_{1}-\alpha_{3})>\alpha_{2}-l\varepsilon_{2}\alpha_{3}\\
>\alpha_{2}-l\varepsilon_{1}\varepsilon_{2}\left|\alpha\right|>\alpha_{2}-l\frac{\displaystyle\varepsilon_{1}\varepsilon_{2}}{\displaystyle 1-\varepsilon_{1}}\alpha_{2}\\
=\left(1-l\frac{\displaystyle\varepsilon_{1}\varepsilon_{2}}{\displaystyle 1-\varepsilon_{1}}\right)\alpha_{2}\,.
\end{array}
\end{eqnarray}
\item[\bf Claim 2\/] The remainder of the column with base $I^{\alpha}_{2}$ and height $a_{2}$ has measure
\begin{eqnarray}\label{Claim2}
\begin{array}{l}
\left|\lambda\right|-\mu(\cup^{a_{2}-1}_{i=0}T^{i}(I^{\alpha}_{2}))=a_{1}\alpha_{1}+a_{3}\alpha_{3}<ra_{2}(\alpha_{1}+\alpha_{3})\\
<ra_{2}\varepsilon_{1}\left|\alpha\right|<ra_{2}\frac{\displaystyle\varepsilon_{1}}{\displaystyle 1-\varepsilon_{1}}\alpha_{2}<\frac{\displaystyle r\varepsilon_{1}}{\displaystyle 1-\varepsilon_{1}}\left|\lambda\right|.
\end{array}
\end{eqnarray}
From (\ref{Claim1}) and (\ref{Claim2}), for any $\varepsilon>0$ we can select $\varepsilon_{1},\varepsilon_{2}$ small enough such that $T^{la_{2}}\in U_{\varepsilon}(Id)$.\\
\item[\bf Claim 3\/] (the ``whirly part'') $T^{a_{3}}$ sends $[\alpha_{1}+\alpha_{2}+(\alpha_{1}-\alpha_{3}),\left|\alpha\right|)$ to $[\alpha_{1}-\alpha_{3},\alpha_{3})$, which is continuous under $T^{a_{1}}$. That is to say, $[\alpha_{1}+\alpha_{2}+(\alpha_{1}-\alpha_{3}),\left|\alpha\right|)$ is continuous under $T^{a_{1}+a_{3}}=T^{a_{2}+1}$.
\end{itemize}
\indent Similarly, by induction: let
\begin{eqnarray}
I^{\alpha}_{\omega}=[\alpha_{1}+\alpha_{2}+l(\alpha_{1}-\alpha_{3}),\left|\alpha\right|).
\end{eqnarray}
Then the maps $T^{i}$ are all continuous (linear) on $I^{\alpha}_{\omega}$ for $i=1,2,\cdots,l(a_{1}+a_{3})$, and $T^{l(a_{1}+a_{3})}(I^{\alpha}_{\omega})\subset I^{\alpha}_{3}\subset I^{\alpha}$.\\
\indent Therefore
$$T^{la_{2}}(I_{\omega}^{\alpha})=T^{l(a_{1}+a_{3}-1)}(I^{\alpha}_{\omega})=T^{-l}(T^{l(a_{1}+a_{3})}(I^{\alpha}_{\omega}))\subset T^{-l}(I^{\alpha}),$$
which implies
\begin{eqnarray}
T^{la_{2}}(I^{\alpha}_{\omega})\subset(T^{la_{2}}(I^{\alpha}))\cap T^{-l}(I^{\alpha}).
\end{eqnarray}
\indent Hence
\begin{eqnarray}\label{Inter}
\begin{array}{l}
\mu(T^{la_{2}}(I^{\alpha})\cap T^{-l}(I^{\alpha}))\\
\geq\mu(T^{la_{2}}(I^{\alpha}_{\omega}))=\alpha_{3}-l(\alpha_{1}-\alpha_{3})>\alpha_{3}-l\varepsilon_{2}\alpha_{3}=(1-l\varepsilon_{2})\alpha_{3}.
\end{array}
\end{eqnarray}
Note: {\bf Claim 1\/} and {\bf Claim 2\/} show that $T^{la_{2}}$ is close to the identity map, while (\ref{Inter}) shows that we are on the right track toward the whirly property (Definition \ref{whirly2}).\\
\indent By {\bf Claim 1\/}, {\bf Claim 2\/} and {\bf Claim 3\/}, choosing a positive constant $\mathfrak{C}_{\varepsilon,l}$ associated with $\varepsilon$ and $l$, and small enough, we have the following lemma:
\begin{lemma}\label{Major}
\indent Let $\pi=(3,2,1)$. For almost all $\lambda\in\Lambda_{3}$ and for any $0<\varepsilon<\frac{1}{10}$, $l\in{\mathbb N}$, there exists $\mathfrak{C}_{\varepsilon,l}$ small enough such that, for $k$ large enough with ${\mathcal Z}^{k}(\lambda,\pi)=(\eta,\pi)\in W(\mathfrak{C}_{\varepsilon,l},\mathfrak{C}_{\varepsilon,l})$, there exists $n\in{\mathbb N}$ with ${\mathcal Z}^{k+2n}(\lambda,\pi)=(\alpha,\pi)$ such that:
$$\begin{array}{l}
P1)\quad\mu(I^{\alpha}\cap T^{la_{2}}(I^{\alpha}))>(1-\varepsilon)\left|\alpha\right|\\
P2)\quad\left|\lambda\right|-\mu(\cup^{a_{2}-1}_{i=0}T^{i}(I^{\alpha}_{2}))<\varepsilon\left|\lambda\right|\\
P3)\quad\mu(T^{la_{2}}(I^{\alpha})\cap T^{-l}(I^{\alpha}))>\frac{\varepsilon}{3}\left|\alpha\right|.\\
\end{array}$$
\end{lemma}
\indent Now let $N^{(\lambda)}_{\varepsilon,l}\subset{\mathbb N}$ be defined by
\begin{eqnarray}\label{nt}
N^{(\lambda)}_{\varepsilon,l}=\{n_{t}\,|\,n_{1}<n_{2}<\cdots<n_{t}<\cdots,\ {\mathcal Z}^{n_{t}-n}(\lambda,\pi)\in W(\mathfrak{C}_{\varepsilon,l},\mathfrak{C}_{\varepsilon,l})\}.
\end{eqnarray}
\indent By Veech's Ergodic Theorem (W.~A.~Veech \cite{VEE1}, Theorem 1.1) applied to ${\mathcal T}_{2}$, we know that for Lebesgue a.e.\ $\lambda\in\Lambda_{3}$, $T_{(\lambda,\pi)}$ is uniquely ergodic (thus ergodic with respect to Lebesgue measure), and $N^{(\lambda)}_{\varepsilon,l}$ has infinitely many elements, for any $0<\varepsilon<\frac{1}{10}$, $l\in{\mathbb N}$. Let us continue to study such a $T_{(\lambda,\pi)}$. As usual we use $T$ to denote $T_{(\lambda,\pi)}$.\\
\indent We know that $A=A^{(k)}B^{2}$, $1\leq\nu(A)\leq\nu(B)=r$. We need $B^{2}$ here instead of $B$ in order to get a $T$-stack whose base $I^{\alpha}$ occupies a relatively large portion of $I^{\lambda}$. The following lemma will be used in the last step, a density point argument, of the proof of Theorem \ref{Main}.
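As an aside, the return-time identity $a_{2}=a_{1}+a_{3}-1$ of Lemma \ref{RecurTime} can be sanity-checked numerically by multiplying the Rauzy matrices listed in the previous section along the closed paths $ab^{l}a$ and $ba^{l}b$. The following sketch is purely illustrative (it is not part of the proofs, and the helper \texttt{column\_sums} is our own naming):

```python
# Illustrative numerical check (not part of the proofs) of Lemma RecurTime:
# along the closed Rauzy paths a b^l a and b a^l b based at pi = (3,2,1),
# the column sums of the visitation matrix satisfy a2 = a1 + a3 - 1.
import numpy as np

# Rauzy matrices as listed above; recall A(pi,a) = A(pi1,a), A(pi,b) = A(pi1,b).
A_pi_a  = np.array([[1, 1, 0], [0, 0, 1], [0, 1, 0]])
A_pi_b  = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]])
A_pi2_a = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
A_pi2_b = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]])

def column_sums(path):
    """Multiply the visitation matrices along a path; return the column sums."""
    M = np.eye(3, dtype=int)
    for step in path:
        M = M @ step
    return M.sum(axis=0)

for l in range(1, 8):
    # closed path a b^l a : pi -> pi1, loop l times at pi1 via b, back to pi
    a1, a2, a3 = column_sums([A_pi_a] + [A_pi_b] * l + [A_pi_a])
    assert a2 == a1 + a3 - 1 and (a1, a3) == (l + 1, 2)
    # closed path b a^l b : pi -> pi2, loop l times at pi2 via a, back to pi
    a1, a2, a3 = column_sums([A_pi_b] + [A_pi2_a] * l + [A_pi2_b])
    assert a2 == a1 + a3 - 1
print("a2 = a1 + a3 - 1 verified on both closed paths for l = 1, ..., 7")
```

For the path $ab^{l}a$ this reproduces the column sums $(l+1,\,l+2,\,2)$ computed in the proof of Lemma \ref{RecurTime}.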
\begin{lemma}\label{3p5}
With all notations as above, let $T=T_{(\lambda,\pi)}$. Then for a.e.\ $\lambda\in\Lambda_{3}$ there exists a positive integer $a_{*}$ such that the maps $T^{i}$ $(1\leq i\leq a_{*})$ are continuous (linear) on $I^{\alpha}$, $T^{i}(I^{\alpha})\cap T^{j}(I^{\alpha})=\emptyset$ for $i\neq j$, $0\leq i,j<a_{*}$, and
$$a_{*}\left|\alpha\right|>\frac{1}{b_{M}(1+2r\cdot r')}\left|\lambda\right|,$$
where $b_{M}=\max\{b_{11},b_{12},b_{13}\}$.
\end{lemma}
\begin{proof}
\indent We know that $A=A^{(k)}B^{2}$.\\
\indent Suppose ${\mathcal Z}^{k+n}(\lambda,\pi)=(\eta,\pi)=(B\alpha,\pi)$, where $\eta=(\eta_{1},\eta_{2},\eta_{3})$, $I^{\eta}_{1}=[0,\eta_{1})$, $\eta_{1}=b_{11}\alpha_{1}+b_{12}\alpha_{2}+b_{13}\alpha_{3}$, and
\begin{eqnarray}\label{Eta1}
I^{\alpha}\subset I^{\eta}_{1}.
\end{eqnarray}
\indent Meanwhile $\eta_{1}\leq b_{M}(\alpha_{1}+\alpha_{2}+\alpha_{3})=b_{M}\left|\alpha\right|$; that is,
\begin{eqnarray}\label{AlEta}
\left|\alpha\right|\geq\frac{1}{b_{M}}\eta_{1}.
\end{eqnarray}
\indent At the same time, since
$$\eta_{1}=b_{11}\alpha_{1}+b_{12}\alpha_{2}+b_{13}\alpha_{3},$$
$$\eta_{2}=b_{21}\alpha_{1}+b_{22}\alpha_{2}+b_{23}\alpha_{3},$$
$$\eta_{3}=b_{31}\alpha_{1}+b_{32}\alpha_{2}+b_{33}\alpha_{3},$$
it follows that
\begin{eqnarray}\label{Eta}
\eta_{2},\eta_{3}<r'\eta_{1}.
\end{eqnarray}
\indent Remembering that $\lambda=A^{(k)}B\eta$, i.e.\ $\left|\lambda\right|=a^{(k+n)}_{1}\eta_{1}+a^{(k+n)}_{2}\eta_{2}+a^{(k+n)}_{3}\eta_{3}$, by \ref{Mv} and \ref{MvStar} we have
$$a^{(k+n)}_{2},a^{(k+n)}_{3}<ra^{(k+n)}_{1},$$
and by \ref{Eta} we have
\begin{eqnarray}\label{Cover}
a^{(k+n)}_{1}\eta_{1}>\frac{1}{1+2rr'}\left|\lambda\right|.
\end{eqnarray}
(\ref{AlEta}) and (\ref{Cover}) imply that $a^{(k+n)}_{1}\left|\alpha\right|>\frac{1}{b_{M}(1+2rr')}\left|\lambda\right|$; combining this with (\ref{Eta1}), the lemma is proved.
\end{proof}
{\em\bf Proof of Theorem \ref{Main} (with a Density Point Argument)\/}
\begin{proof}
\indent Let $\pi=(3,2,1)$, and let $\lambda$ be in the full measure subset of $\Lambda_{3}$ as required by Lemma \ref{Major}.\\
\indent Define $\mathfrak{G}=\underset{N=1}{\overset{\infty}{\cap}}\underset{\overset{t\geq N}{n_{t}\in N^{(\lambda)}_{\varepsilon,l}}}{\cup}\mathfrak{G}_{t}$, where $\mathfrak{G}_{t}=\cup^{a_{*}^{(n_{t})}-1}_{i=0}T^{i}(I^{\alpha^{(n_{t})}})$, with $n_{t}$ as defined in (\ref{nt}).\\
\indent According to Lemma \ref{3p5}, $\mu(\mathfrak{G})\geq\frac{\displaystyle 1}{\displaystyle b_{M}}\frac{\displaystyle 1}{\displaystyle 1+2rr'}\left|\lambda\right|$.\\
\indent Suppose $E$ is an arbitrary measurable set, $E\subset[0,\left|\lambda\right|)$, $\mu(E)>0$. Then by the ergodicity of $T$ there exists $q\in{\mathbb N}$ such that $\mu(T^{-q}(E)\cap\mathfrak{G})>0$. Therefore, by the Lebesgue Density Theorem, there exists a point of density one $x\in T^{-q}(E)\cap\mathfrak{G}$. By the definition of $\mathfrak{G}$, we have $x\in T^{-q}(E)\cap J_{k}$, where the half-open interval $J_{k}=T^{i_{k}}(I^{\alpha^{(S_{k})}})$, $S_{k}=n_{t_{k}}$, $0\leq i_{k}<a^{(S_{k})}_{*}$, and the approximate density satisfies:
\begin{eqnarray}\label{Density}
\underset{k\rightarrow\infty}{\lim}\frac{\mu(T^{-q}(E)\cap J_{k})}{\mu(J_{k})}=1.
\end{eqnarray}
\indent By Lemma \ref{Major}, since $S_{k}=n_{t_{k}}\in N^{(\lambda)}_{\varepsilon,l}$, we have
\begin{eqnarray}\label{Lap}
\begin{array}{l}
\mu((T^{la^{(S_{k})}_{2}}J_{k})\cap T^{-l}(J_{k}))\\
=\mu(T^{i_{k}}(T^{la^{(S_{k})}_{2}}(I^{\alpha^{(S_{k})}})\cap T^{-l}(I^{\alpha^{(S_{k})}})))>\frac{\varepsilon}{3}\left|\alpha^{(S_{k})}\right|.
\end{array}
\end{eqnarray}
\indent (\ref{Density}) implies that there exists $k_{0}$ such that
$$\frac{\mu(T^{-q}(E)\cap J_{k_{0}})}{\mu(J_{k_{0}})}>1-\frac{\varepsilon}{10}.$$
\indent Therefore by (\ref{Lap}) we have
$$\mu(T^{la^{(S_{k_{0}})}_{2}}(T^{-q}(E))\cap T^{-l}(T^{-q}(E)))>0.$$
\indent Thus $\mu(T^{la^{(S_{k_{0}})}_{2}}(E)\cap T^{-l}(E))>0$. Since $S_{k_{0}}=n_{t_{k_{0}}}\in N^{(\lambda)}_{\varepsilon,l}$, together with Theorem \ref{Equiv} and Lemma \ref{Major}, we have proved Theorem \ref{Main}.
\end{proof}
\begin{corollary}
Let $\pi=(3,2,1)$. For Lebesgue almost all $\lambda\in\Lambda_{3}$, the interval exchange transformation $({\mathbb X},{\mathcal B},T_{(\lambda,\pi)})$ admits no nontrivial spatial factor.
\end{corollary}
\begin{proof}
This follows from Proposition 1.9 of E. Glasner and B. Weiss \cite{GW}.
\end{proof}
\renewcommand{\abstractname}{Comments and Acknowledgements}
\begin{abstract}
The result about 3-interval exchange transformations is part of the author Y. Wu's 2006 Ph.D. Thesis at Rice University, Department of Mathematics. The author thanks Rice University for posting his Doctor of Philosophy Thesis online at the Rice University Digital Scholarship Archive. He would also like to thank his Ph.D. advisor W. A. Veech for directing his Ph.D. Thesis.
\end{abstract}
\begin{thebibliography}{999}
\bibitem{AVILA-FORNI} A. Avila, G. Forni, \emph{Weak mixing for interval exchange transformations and translation flows}, Ann. of Math. 165 (2007), no. 2, 637--664.
\bibitem{BOS} M. Boshernitzan, \emph{A condition for minimal interval exchange maps to be uniquely ergodic}, Duke Math. J. 52 (1985), no. 3, 723--752.
\bibitem{CHA} R. V. Chacon, \emph{Weakly mixing transformations which are not strongly mixing}, Proc. Amer. Math. Soc. 22 (1969), 559--562.
\bibitem{CHAI} J. Chaika, \emph{Every transformation is disjoint from almost every IET}, Ann. of Math. 175 (2012), 237--253.
\bibitem{CHAI2} J. Chaika, J. Fickenscher, \emph{Topological mixing for some residual sets of interval exchange transformations}, Comm. Math. Phys., October 2014.
\bibitem{GLA} E. Glasner, \emph{Ergodic theory via joinings}, Mathematical Surveys and Monographs 101, American Mathematical Society, Providence, RI, 2003.
\bibitem{GW} E. Glasner, B. Weiss, \emph{Spatial and non-spatial actions of Polish groups}, Ergodic Theory Dynam. Systems 25 (2005), no. 5, 1521--1538.
\bibitem{GTW} E. Glasner, B. Tsirelson, B. Weiss, \emph{The automorphism group of the Gaussian measure cannot act pointwise}, Israel J. Math. 148 (2005), no. 1, 305--329.
\bibitem{GW2} E. Glasner, B. Weiss, \emph{G-continuous functions and whirly actions}, preprint, arXiv:math.DS/0311450.
\bibitem{HAL1} P. R. Halmos, \emph{Measure Theory}, Graduate Texts in Mathematics 18, Springer-Verlag.
\bibitem{HAL2} P. R. Halmos, \emph{Introduction to Ergodic Theory}, New York Press.
\bibitem{KEA} M. Keane, \emph{Interval exchange transformations}, Math. Z. 141 (1973), 25--31.
\bibitem{KIN1} J. L. King, \emph{The commutant is the weak closure of the powers, for rank-1 transformations}, Ergodic Theory Dynam. Systems 6 (1986), no. 3, 363--384.
\bibitem{MASUR} H. Masur, \emph{Interval exchange transformations and measured foliations}, Ann. of Math. 115 (1982), 169--200.
\bibitem{RAU} G. Rauzy, \emph{\'Echanges d'intervalles et transformations induites}, Acta Arith. 34 (1979), 315--328.
\bibitem{VEE1} W. A. Veech, \emph{Gauss measures for transformations on the space of interval exchange maps}, Ann. of Math. 115 (1982), 201--242.
\bibitem{VEE2} W. A. Veech, \emph{Projective swiss cheeses and uniquely ergodic interval exchange transformations}, in Ergodic Theory and Dynamical Systems I, Progress in Mathematics, Birkh\"auser, Boston, 1981, 113--193.
\bibitem{VEE3} W. A. Veech, \emph{The metric theory of interval exchange transformations I: Generic spectral properties}, Amer. J. Math. 107 (1984), no. 6, 1331--1359.
\bibitem{VEE4} W. A. Veech, \emph{Interval exchange transformations}, J. Analyse Math. 33 (1978), 222--278.
\bibitem{VIA} M. Viana, \emph{Ergodic theory of interval exchange maps}, Rev. Mat. Complut. 19 (2006), no. 1, 7--100.
\bibitem{WU} Y. Wu, \emph{Applications of Rauzy induction on the generic ergodic theory of interval exchange transformations}, Ph.D. Thesis, Rice University Electronic Theses and Dissertations, 2006.
\end{thebibliography}
\end{document}
\begin{document}
\title{Optimal convergence rates for Nesterov acceleration}
\begin{abstract}
In this paper, we study the behavior of solutions of the ODE associated with Nesterov acceleration. It is well known, since the pioneering work of Nesterov, that the rate of convergence $O(1/t^2)$ is optimal for the class of convex functions with Lipschitz gradient. In this work, we show that better convergence rates can be obtained under additional geometrical conditions, such as the \L ojasiewicz property. More precisely, we prove the optimal convergence rates that can be obtained depending on the geometry of the function $F$ to minimize. These convergence rates are new, and they shed new light on the behavior of Nesterov acceleration schemes. We prove in particular that the classical Nesterov scheme may provide convergence rates that are worse than those of the classical gradient descent scheme on sharp functions: for instance, the convergence rate for strongly convex functions is not geometric for the classical Nesterov scheme (while it is for the gradient descent algorithm). This shows that applying the classical Nesterov acceleration to convex functions without examining the geometrical properties of the objective function may lead to sub-optimal algorithms.
\end{abstract}
\begin{keywords}
Lyapunov functions, rate of convergence, ODEs, optimization, \L ojasiewicz property.
\end{keywords}
\begin{AMS}
34D05, 65K05, 65K10, 90C25, 90C30
\end{AMS}
\section{Introduction}\label{sec_intro}
The motivation of this paper lies in the minimization of a differentiable function $F:\mathbb{R}^n\rightarrow\mathbb{R}$ with at least one minimizer.
Inspired by Nesterov's pioneering work \cite{nesterov1983method}, we study the following ordinary differential equation (ODE):
\begin{equation}\label{ODE}
\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\nabla F(x(t))=0,
\end{equation}
where $\alpha>0$, with $t_0>0$, $x(t_0)=x_0$ and $\dot x(t_0)=v_0$. This ODE is associated with the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) \cite{beck2009fast} and the Accelerated Gradient Method \cite{nesterov1983method}:
\begin{equation}
x_{n+1}=y_n-h\nabla F(y_n)\text{ and }y_n=x_n+\frac{n}{n+\alpha}(x_n-x_{n-1}),
\end{equation}
with $h$ and $\alpha$ positive parameters. This equation, including or not a perturbation term, has been widely studied in the literature \cite{attouch2000heavy,su2016differential,cabot2009long,balti2016asymptotic,may2015asymptotic}. It belongs to a family of similar equations with various viscosity terms, and it is impossible to mention all works related to the heavy ball equation or to other viscosity terms; we refer the reader to the recent works \cite{Begout2015,jendoubi2015asymptotics,may2015asymptotic,cabot2007asymptotics,attouch2002dynamics,polyak2017lyapunov,attouch2017asymptotic} and the references therein. Throughout the paper, we assume that, for any initial conditions $(x_0,v_0)\in\mathbb{R}^n\times\mathbb{R}^n$, the Cauchy problem associated with the differential equation \eqref{ODE} has a unique global solution $x$ satisfying $(x(t_0),\dot x(t_0))=(x_0,v_0)$. This is guaranteed, for instance, when the gradient $\nabla F$ is Lipschitz continuous on bounded subsets of $\mathbb{R}^n$. In this work we investigate the convergence rates of the values $F(x(t))-F^*$ along the trajectories of the ODE \eqref{ODE}. It was proved in \cite{attouch2018fast} that if $F$ is convex with Lipschitz gradient and if $\alpha>3$, then $F(x(t))$ converges to the minimum $F^*$ of $F$.
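To fix ideas, the discrete scheme above can be run on a toy problem. The following minimal sketch is our own illustration, not taken from the cited works; the quadratic $F(x)=\frac{1}{2}\|x\|^2$, the step size $h=0.1$ (chosen with $h\leq 1/L$ for $L=1$) and the value $\alpha=3$ are assumptions made for the example:

```python
# Minimal illustrative sketch (assumed toy setting, not from the cited works):
# the accelerated scheme  x_{n+1} = y_n - h * grad F(y_n),
#                         y_n     = x_n + n/(n+alpha) * (x_n - x_{n-1}),
# applied to the quadratic F(x) = 0.5*||x||^2, whose gradient is x and F* = 0.
import numpy as np

def accelerated_gradient(grad, x0, h, alpha, n_iter):
    """Run n_iter steps of the Nesterov-type scheme written above."""
    x_prev = x0.copy()
    x = x0.copy()
    for n in range(1, n_iter + 1):
        y = x + n / (n + alpha) * (x - x_prev)   # extrapolation (momentum) step
        x_prev, x = x, y - h * grad(y)           # gradient step taken at y
    return x

x0 = np.array([5.0, -3.0])
x_final = accelerated_gradient(lambda z: z, x0, h=0.1, alpha=3.0, n_iter=2000)
F_final = 0.5 * np.dot(x_final, x_final)
print(F_final)  # decays towards the minimum F* = 0
```

For $F$ convex with an $L$-Lipschitz gradient and $h\leq 1/L$, the values $F(x_n)-F^*$ are known to decay at least like $O(1/n^2)$; the finer behavior on flat or sharp functions is precisely what the paper analyzes.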
It is also known that for $\alpha\geqslant 3$ and $F$ convex we have:
\begin{equation}
F(x(t))-F^*=O\left(t^{-2}\right).
\end{equation}
Extending to the continuous setting the work of Chambolle and Dossal~\cite{chambolle2015convergence} on the convergence of the iterates of FISTA, Attouch et al. \cite{attouch2018fast} proved that for $\alpha>3$ the trajectory $x$ converges (weakly in an infinite-dimensional Hilbert space) to a minimizer of $F$. Su et al. \cite{su2016differential} proposed some new results, proving the integrability of $t\mapsto t(F(x(t))-F^*)$ when $\alpha>3$, and they gave more accurate bounds on $F(x(t))-F^*$ in the case of strong convexity. Still in the case where $F$ is strongly convex, Attouch, Chbani, Peypouquet and Redont proved in \cite{attouch2018fast} that the trajectory $x(t)$ satisfies $F(x(t))-F^*=O\left(t^{-\frac{2\alpha}{3}}\right)$ for any $\alpha>0$. More recently, several studies including a perturbation term \cite{attouch2018fast,AujolDossal,aujol2015stability,vassilis2018differential} have been proposed. In this work, we focus on the decay of $F(x(t))-F^*$ under more general assumptions on the geometry of $F$ around its set of minimizers than strong convexity. Indeed, Attouch et al. \cite{attouch2018fast} proved that if $F$ is convex then, for any $\alpha>0$, $F(x(t))-F^*$ tends to $0$ as $t$ goes to infinity. Combined with the coercivity of $F$, this convergence implies that the distance $d(x(t),X^*)$ between $x(t)$ and the set of minimizers $X^*$ tends to $0$. To analyse the asymptotic behavior of $F(x(t))-F^*$ we can thus make hypotheses on $F$ only in a neighborhood of $X^*$, and we may avoid the tough question of the convergence of the trajectory $x(t)$ to a point of $X^*$. More precisely, we consider functions behaving like $\norm{x-x^*}^{\gamma}$ around their set of minimizers, for any $\gamma\geqslant 1$. Our aim is to establish the optimal convergence rates that can be obtained depending on this local geometry.
In particular, we prove that if $F$ is strongly convex with a Lipschitz continuous gradient, the decay is actually better than $O\left(t^{-\frac{2\alpha}{3}}\right)$. We also prove that the actual decay for quadratic functions is $O\left(t^{-\alpha}\right)$. These results rely on two geometrical conditions: a first one ensuring that the function is sufficiently flat around the set of minimizers, and a second one ensuring that it is sufficiently sharp. In this paper, we will show that both conditions are important to get the expected convergence rates: the flatness assumption ensures that the function is not too sharp and may prevent bad oscillations of the solution, while the sharpness condition ensures that the magnitude of the gradient of the function is not too low in the neighborhood of the minimizers. The paper is organized as follows. In Section~\ref{sec_geom}, we introduce the geometrical hypotheses we consider on the function $F$, and their relation with the \L ojasiewicz property. We then recap the state-of-the-art results on the ODE \eqref{ODE} in Section~\ref{sec_state}. We present the contributions of the paper in Section~\ref{sec_contrib}: depending on the geometry of the function $F$ and the value of the damping parameter $\alpha$, we give optimal rates of convergence. The proofs of the theorems are given in Section~\ref{sec_proofs}. Some technical proofs are postponed to Appendix~\ref{appendix}.
\section{Local geometry of convex functions}\label{sec_geom}
Throughout the paper we assume that the ODE \eqref{ODE} is defined on $\mathbb{R}^n$ equipped with the Euclidean scalar product $\langle\cdot,\cdot\rangle$ and the associated norm $\|\cdot\|$. As usual, $B(x^*,r)$ denotes the open Euclidean ball with center $x^*$ and radius $r>0$, while $\bar B(x^*,r)$ denotes the closed Euclidean ball with center $x^*$ and radius $r>0$. In this section we introduce two notions describing the geometry of a convex function around its minimizers.
\begin{equation}gin{definition} Let $F:\mathbb{R}^n\rightarrow \mathbb{R}$ be a convex differentiable function, $X^*:=\textup{arg}\,min F\neq \emptyset$ and: $F^*:=\inf F$. \begin{equation}gin{enumerate} \item[(i)] Let $\gamma \geqslant 1$. The function $F$ satisfies the hypothesis $\textbf{H}_1(\gamma)$ if, for any minimizer $x^*\in X^*$, there exists $\eta>0$ such that: $$\alphaorall x\in B(x^*,\eta),\quad F(x) - F^* \leqslant \alpharac{1}{\gamma} \langle \nabla F(x),x-x^*\rangle.$$ \item[(ii)] Let $r\geqslant 1$. The function $F$ satisfies the growth condition $\textbf{H}_2(r)$ if, for any minimizer $x^*\in X^*$, there exist $K>0$ and $\varepsilon>0$, such that: \begin{equation}gin{equation*} \alphaorall x\in B(x^*,\varepsilon),\quad K d(x,X^*)^{r}\leqslant F(x)-F^*. \end{equation*} \end{enumerate} \end{definition} The hypothesis $\textbf{H}_1(\gamma)$ has already been used in \cite{cabot2009long} and later in \cite{su2016differential,AujolDossal}. This is a mild assumption, requesting slightly more than the convexity of $F$ in the neighborhood of its minimizers. Observe that any convex function automatically satisfies $\textbf{H}_1(1)$ and that any differentiable function $F$ for which $(F-F^*)^{\alpharac{1}{\gamma}}$ is convex for some $\gamma\geq 1$, satisfies $\textbf{H}_1(\gamma)$. Nevertheless having a better intuition of the geometry of convex functions satisfying $\textbf{H}_1(\gamma)$ for some $\gamma\geq 1$, requires a little more effort: \begin{equation}gin{lemma} Let $F:\mathbb{R}^n \rightarrow \mathbb{R}$ be a convex differentiable function with $X^*=\textup{arg}\,min F\neq \emptyset$, and $F^*=\inf F$. If $F$ satisfies $\textbf{H}_1(\gamma)$ for some $\gamma\geq 1$, then: \begin{equation}gin{enumerate} \item $F$ satisfies $\textbf{H}_1(\gamma')$ for all $\gamma'\in[1,\gamma]$. 
\item For any minimizer $x^*\in X^*$, there exist $M>0$ and $\eta >0$ such that: \begin{equation} \forall x\in B(x^*,\eta),~F(x) -F^* \leqslant M \|x-x^*\|^\gamma.\label{hyp:H1} \end{equation} \end{enumerate} \label{lem:geometry} \end{lemma} \begin{proof} The proof of the first point of Lemma \ref{lem:geometry} is straightforward. The second point relies on the following elementary result in dimension $1$: let $g:\mathbb{R}\rightarrow \mathbb{R}$ be a convex differentiable function such that $0\in \textup{arg}\,min g$, $g(0)=0$ and: $$\forall t\in [0,1],~g(t) \leq \frac{t}{\gamma}g'(t),$$ for some $\gamma\geqslant 1$. Then the function $t\mapsto t^{-\gamma}g(t)$ is monotonically increasing on $(0,1]$, since its derivative equals $t^{-\gamma-1}\left(tg'(t)-\gamma g(t)\right)\geqslant 0$, and: \begin{equation} \forall t\in [0,1],~g(t)\leqslant g(1)t^\gamma.\label{majo1D} \end{equation} Consider now any convex differentiable function $F:\mathbb{R}^n \rightarrow \mathbb{R}$ satisfying the condition $\textbf{H}_1(\gamma)$, and $x^*\in X^*$. There then exists $\eta>0$ such that: $$\forall x\in B(x^*,\eta),\quad 0\leqslant F(x) - F^* \leqslant \frac{1}{\gamma} \langle \nabla F(x),x-x^*\rangle.$$ Let $\eta'\in(0,\eta)$. For any $x\in\bar B(x^*,\eta')$ with $x \neq x^*$, we introduce the following univariate function: $$g_x:t\in [0,1]\mapsto F\left(x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\right)-F^*.$$ First observe that, for all $x\in \bar B(x^*,\eta')$ with $x \neq x^*$ and for all $t\in [0,1]$, we have: $x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\in \bar B(x^*,\eta').$ Since $F$ is continuous on the compact set $\bar B(x^*,\eta')$, we deduce that: \begin{equation} \exists M>0,~\forall x\in \bar B(x^*,\eta') \ \mbox{ with $x \neq x^*$},~ \forall t\in[0,1],~g_x(t)\leq M.\label{bounded} \end{equation} Note here that the constant $M$ only depends on the point $x^*$ and the real constant $\eta'$.
Then, by construction, $g_x$ is a convex differentiable function satisfying: $0\in \textup{arg}\,min(g_x)$, $g_{x}(0)=0$ and: \begin{eqnarray*} \forall t\in (0,1],~g_x'(t) &=& \left\langle \nabla F\left(x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\right),\eta'\frac{x-x^*}{\|x-x^*\|}\right\rangle\\ &\geqslant & \frac{\gamma}{t}\left(F\left(x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\right)-F^*\right) = \frac{\gamma}{t} g_x(t). \end{eqnarray*} Thus, using the one-dimensional result \eqref{majo1D} and the uniform bound \eqref{bounded}, we get: \begin{equation} \forall x\in \bar B(x^*,\eta') \ \mbox{ with $x \neq x^*$},~\forall t\in [0,1],~g_{x}(t) \leqslant g_x(1)t^{\gamma}\leqslant Mt^{\gamma}. \end{equation} Finally, by choosing $t=\frac{1}{\eta'}\|x-x^{\ast}\|$, we obtain the expected result. \end{proof} In other words, the hypothesis $\textbf{H}_1(\gamma)$ can be seen as a ``flatness'' condition on the function $F$ in the sense that it ensures that $F$ is sufficiently flat (at least as flat as $x\mapsto \|x\|^\gamma$) in the neighborhood of its minimizers. The hypothesis $\textbf{H}_2(r)$, $r\geqslant 1$, is a growth condition on the function $F$ around any minimizer (any critical point in the non-convex case). It is sometimes also called $r$-conditioning \cite{garrigos2017convergence} or a H\"olderian error bound \cite{Bolte2017}.
This assumption is motivated by the fact that, when $F$ is convex, $\textbf{H}_2(r)$ is equivalent to the famous \L ojasiewicz inequality \cite{Loja63,Loja93} with exponent $\theta = 1-\frac{1}{r}$, a key tool in the mathematical analysis of continuous (or discrete) subgradient dynamical systems: \begin{definition} \label{def_loja} A differentiable function $F:\mathbb R^n \to \mathbb R$ is said to have the \L ojasiewicz property with exponent $\theta \in [0,1)$ if, for any critical point $x^*$, there exist $c> 0$ and $\varepsilon >0$ such that: \begin{equation} \forall x\in B(x^*,\varepsilon),~\|\nabla F(x)\| \geqslant c|F(x)-F(x^*)|^{\theta},\label{loja} \end{equation} with the convention $0^0=0$ when $\theta=0$. \end{definition} When the set $X^*$ of the minimizers is a connected compact set, the \L ojasiewicz inequality turns into a geometrical condition on $F$ around its set of minimizers $X^*$, usually referred to as H\"older metric subregularity \cite{kruger2015error}, whose proof can be easily adapted from \cite[Lemma 1]{AttouchBolte2009}: \begin{lemma} Let $F:\mathbb{R}^n\rightarrow \mathbb{R}$ be a convex differentiable function satisfying the growth condition $\textbf{H}_2(r)$ for some $r\geqslant 1$. Assume that the set $X^*=\textup{arg}\,min F$ is compact. Then there exist $K>0$ and $\varepsilon >0$ such that for all $x\in \mathbb{R}^n$: $$d(x,X^*) \leqslant \varepsilon\Rightarrow K d(x,X^*)^{r}\leqslant F(x)-F^*.$$\label{lem:H2} \end{lemma} Typical examples of functions having the \L ojasiewicz property are real-analytic functions, $C^1$ subanalytic functions and semi-algebraic functions \cite{Loja63,Loja93}.
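To make the exponent $\theta=1-\frac{1}{r}$ concrete, the following small script (an illustrative numerical check added here, not part of the original analysis) verifies that the model function $F(x)=|x|^\gamma$ satisfies the \L ojasiewicz inequality \eqref{loja} with $\theta=1-\frac{1}{\gamma}$ and $c=\gamma$, in fact with equality, since $\|\nabla F(x)\|=\gamma |x|^{\gamma-1}=\gamma\,(F(x)-F^*)^{1-1/\gamma}$:

```python
import math

def F(x, gamma):
    # model function F(x) = |x|^gamma, minimized at x* = 0 with F* = 0
    return abs(x) ** gamma

def grad_F(x, gamma):
    # derivative of |x|^gamma for x != 0
    return gamma * abs(x) ** (gamma - 1) * math.copysign(1.0, x)

def lojasiewicz_gap(x, gamma):
    # |F'(x)| - c (F(x) - F*)^theta with theta = 1 - 1/gamma and c = gamma
    theta = 1.0 - 1.0 / gamma
    return abs(grad_F(x, gamma)) - gamma * F(x, gamma) ** theta

# the gap vanishes identically for this model function
for gamma in (1.5, 2.0, 3.0):
    for x in (-0.9, -0.1, 0.3, 0.8):
        assert abs(lojasiewicz_gap(x, gamma)) < 1e-10
```

For a general function satisfying $\textbf{H}_2(r)$ the inequality is of course only an inequality, valid in a neighborhood of the minimizers.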
Strongly convex functions satisfy a global \L ojasiewicz property with exponent $\theta=\frac{1}{2}$ \cite{AttouchBolte2009}, or equivalently a global version of the hypothesis $\textbf{H}_2(2)$, namely: $$\forall x\in \mathbb{R}^n,\quad F(x)-F^*\geqslant \frac{\mu}{2}\|x-x^*\|^2,$$ where $\mu>0$ denotes the parameter of strong convexity and $x^*$ the unique minimizer of $F$. By extension, uniformly convex functions of order $p\geqslant 2$ satisfy the global version of the hypothesis $\textbf{H}_2(p)$ \cite{garrigos2017convergence}. Let us now present two simple examples of convex differentiable functions to illustrate situations where the hypotheses $\textbf{H}_1$ and $\textbf{H}_2$ are satisfied. Let $\gamma > 1$ and consider the function defined by: $F:x\in \mathbb{R}\mapsto |x|^\gamma$. We easily check that $F$ satisfies the hypothesis $\textbf{H}_1(\gamma')$ for some $\gamma'\geq 1$ if and only if $\gamma'\in [1,\gamma]$. By definition, $F$ also naturally satisfies $\textbf{H}_2(r)$ if and only if $r\geqslant \gamma$. The same conditions on $\gamma'$ and $r$ can be derived without uniqueness of the minimizer for functions of the form: \begin{equation} F(x) = \left\{\begin{array}{ll} (|x|-a)^\gamma &\mbox{ if } |x| \geqslant a,\\ 0 &\mbox{ otherwise,} \end{array}\right.\label{ex2} \end{equation} with $a>0$, whose set of minimizers is $X^*=[-a,a]$, since the conditions $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$ only make sense around the extremal points of $X^*$. Let us now investigate the relation between the parameters $\gamma$ and $r$ in the general case: any convex differentiable function $F$ satisfying both $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$ has to be at least as flat as $x\mapsto \|x\|^\gamma$ and as sharp as $x\mapsto \|x\|^r$ in the neighborhood of its minimizers.
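As a complement (a numerical sketch added for illustration, not taken from the original), one can check that the example $F(x)=|x|^\gamma$ above satisfies $\textbf{H}_1(\gamma)$ with equality, since $\langle \nabla F(x),x-x^*\rangle = \gamma |x|^\gamma = \gamma\,(F(x)-F^*)$ with $x^*=0$, and that the growth bound of $\textbf{H}_2(r)$ holds on $[-1,1]$ with $K=1$ precisely when $r\geqslant\gamma$:

```python
def F(x, gamma):
    # F(x) = |x|^gamma with unique minimizer x* = 0 and F* = 0
    return abs(x) ** gamma

def grad_F(x, gamma):
    s = 1.0 if x >= 0 else -1.0
    return gamma * abs(x) ** (gamma - 1) * s

gamma = 3.0
for x in (-0.7, 0.2, 1.3):
    h1_rhs = grad_F(x, gamma) * x / gamma      # (1/gamma) <grad F(x), x - x*>
    assert abs(F(x, gamma) - h1_rhs) < 1e-12   # H1(gamma) holds with equality

# H2(r) near 0 with K = 1: |x|^r <= F(x) on [-1, 1] requires r >= gamma
for x in (0.01, 0.5, 1.0):
    assert abs(x) ** 4.0 <= F(x, gamma)   # r = 4 >= gamma = 3: lower bound holds
    assert abs(x) ** 2.0 >= F(x, gamma)   # r = 2 < gamma: |x|^2 dominates F near 0,
                                          # so K|x|^2 <= F(x) fails as x -> 0
```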
Combining the flatness condition $\textbf{H}_1(\gamma)$ and the growth condition $\textbf{H}_2(r)$, we consistently deduce: \begin{lemma} If a convex differentiable function satisfies both $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$, then necessarily $r\geqslant \gamma$. \label{lem:geometry2} \end{lemma} Finally, we conclude this section by showing that the additional assumption of Lipschitz continuity of the gradient provides additional information on the local geometry of $F$: indeed, for convex functions, the Lipschitz continuity of the gradient is equivalent to a quadratic upper bound on $F$: \begin{equation}\label{lipschitz} \forall (x,y)\in \mathbb{R}^n\times \mathbb{R}^n, ~F(x)-F(y) \leqslant \langle \nabla F(y),x-y \rangle + \frac{L}{2}\|x-y\|^{2}. \end{equation} Applying \eqref{lipschitz} at $y=x^*$, we then deduce: \begin{equation}\label{lipschitz2} \forall x\in \mathbb{R}^n,~F(x)-F^* \leqslant \frac{L}{2}\|x-x^*\|^{2}, \end{equation} which indicates that $F$ is at least as flat as $\|x-x^*\|^2$ around $X^*$. More precisely: \begin{lemma} Let $F:\mathbb{R}^n \rightarrow \mathbb{R}$ be a convex differentiable function with an $L$-Lipschitz continuous gradient for some $L>0$. Assume also that $F$ satisfies the growth condition $\textbf{H}_2(2)$ for some constant $K>0$. Then $F$ automatically satisfies $\textbf{H}_1(\gamma)$ with $\gamma=1+\frac{K}{2L}\in(1,2]$.\label{lem:Lipschitz} \end{lemma} \begin{proof} Since $F$ is convex with a Lipschitz continuous gradient, we have: $$\forall (x,y)\in \mathbb{R}^n\times \mathbb{R}^n,\quad F(y)-F(x)-\langle \nabla F(x),y-x\rangle \geqslant \frac{1}{2L}\|\nabla F(y)-\nabla F(x)\|^2,$$ hence: $$\forall x\in \mathbb{R}^n,\quad F(x)-F^*\leqslant \langle\nabla F(x),x-x^*\rangle -\frac{1}{2L}\|\nabla F(x)\|^2.$$ Assume in addition that $F$ satisfies the growth condition $\textbf{H}_2(2)$ for some constant $K>0$.
Then $F$ has the \L ojasiewicz property with exponent $\theta=\frac{1}{2}$ and constant $c=\sqrt{K}$. Thus: $$\left(1+\frac{K}{2L}\right)(F(x)-F^*) \leqslant\langle\nabla F(x),x-x^*\rangle,$$ in the neighborhood of its minimizers, which means that $F$ satisfies $\textbf{H}_1(\gamma)$ with $\gamma=1+\frac{K}{2L}$. \end{proof} \begin{remark} Observe that Lemma \ref{lem:Lipschitz} can be easily extended to the case of convex differentiable functions with a $\nu$-H\"older continuous gradient. Indeed, let $F$ be a convex differentiable function with a $\nu$-H\"older continuous gradient for some $\nu\in(0,1]$. If $F$ also satisfies the growth condition $\textbf{H}_2(1+\nu)$ (for some constant $K>0$), then $F$ automatically satisfies $\textbf{H}_1(\gamma)$ with $\gamma =1 + \frac{\alpha K}{(1+\nu)L^{\frac{1}{\nu}}}$. This result is based on a notion of generalized co-coercivity for functions having a H\"older continuous gradient. \end{remark} \section{Related results}\label{sec_state} In this section, we recall some classical state-of-the-art results on the convergence properties of the trajectories of the ODE \eqref{ODE}. Let us first recall that, as soon as $\alpha>0$, $F(x(t))$ converges to $F^*$ \cite{AujolDossal,attouch2017rate}, but a larger value of $\alpha$ is required to show the convergence of the trajectory $x(t)$. More precisely, if $F$ is convex and $\alpha >3$, or if $F$ satisfies the hypothesis $\textbf{H}_1(\gamma)$ and $\alpha>1+\frac{2}{\gamma}$, then: \begin{equation*} F(x(t))-F^*=o\left(\frac{1}{t^2}\right), \end{equation*} and the trajectory $x(t)$ converges (weakly in an infinite dimensional space) to a minimizer $x^*$ of $F$ \cite{su2016differential,AujolDossal,may2015asymptotic}.
This last point generalizes what is known for convex functions: thanks to the additional hypothesis $\textbf{H}_1(\gamma)$, the optimal decay $\frac{1}{t^2}$ can be achieved for a damping parameter $\alpha$ smaller than $3$. In the sub-critical case (namely when $\alpha <3$), it has been proven in \cite{attouch2017rate,AujolDossal} that if $F$ is convex, the convergence rate is then given by: \begin{equation} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\alpha}{3}}}\right), \end{equation} but we can no longer prove the convergence of the trajectory $x(t)$. The purpose of this paper is to prove that, by exploiting the geometry of the function $F$, better convergence rates can be achieved for the values $F(x(t))-F^*$. Consider first the case when $F$ is convex and $\alpha \leqslant 1+\frac{2}{\gamma}$. A first contribution of this paper is to provide convergence rates for the values when $F$ only satisfies $\textbf{H}_1(\gamma)$. Although we can no longer prove the convergence of the trajectory $x(t)$, we still have the following convergence rate for $F(x(t))-F^*$: \begin{equation} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma\alpha}{2+\gamma}}}\right), \end{equation} and this decay is optimal and achieved for $F(x)=\vert x\vert^{\gamma}$ for any $\gamma\geqslant 1$. These results were first stated and proved in the unpublished report \cite{AujolDossal} by Aujol and Dossal in 2017 for convex differentiable functions such that $(F-F^*)^{\frac{1}{\gamma}}$ is convex. Observe that this decay is still valid for $\gamma=1$, i.e.\ with the sole assumption of convexity, as shown in \cite{attouch2017rate}, and that the constant hidden in the big $O$ is explicit and available also for $\gamma<1$, that is, for non-convex functions (for example, functions whose square is convex). Consider now the case when $\alpha > 1+\frac{2}{\gamma}$.
In that case, with the sole assumption $\textbf{H}_1(\gamma)$ on $F$ for some $\gamma \geqslant 1$, it is not possible to get a bound on the decay rate like $O(\frac{1}{t^\delta})$ with $\delta>2$. Indeed, as shown in \cite[Example 2.12]{attouch2018fast}, for any $\eta>2$ and for a large friction parameter $\alpha$, the solution $x$ of the ODE associated with $F(x)=|x|^{\eta}$ satisfies: $$F(x(t))-F^*=Kt^{-\frac{2\eta}{\eta-2}},$$ and the power $\frac{2\eta}{\eta-2}$ can be chosen arbitrarily close to $2$. More conditions are thus needed to obtain a decay faster than $O\left(\frac{1}{t^2}\right)$, which is the uniform rate that can be achieved for $\alpha \geqslant 3$ for convex functions. Our main contribution is to show that the flatness condition $\textbf{H}_1$, combined with classical sharpness conditions such as the \L ojasiewicz property, provides new and better decay rates on the values $F(x(t))-F^*$, and to prove the optimality of these rates in the sense that they are achieved, for instance, for the function $F(x)=|x|^\gamma$, $x\in \mathbb{R}$, $\gamma\geqslant 1$. We will then compare our results with well-known results from the literature. In particular we will focus on the case when $F$ is strongly convex or has a strong minimizer \cite{cabot2009long}. In that case, Attouch, Chbani, Peypouquet and Redont in \cite{attouch2018fast}, following Su, Boyd and Candes \cite{su2016differential}, proved that for any $\alpha>0$ we have: $$F(x(t))-F^*=O\left(t^{-\frac{2\alpha}{3}}\right)$$ (see also \cite{attouch2017rate} for a more general viscosity term in that setting). In Section~\ref{sec_contrib}, we will prove the optimality of the power $\frac{2\alpha}{3}$ in \cite{attouch2016fast}, and show that if $F$ additionally has a Lipschitz continuous gradient, then the decay rate of $F(x(t))-F^*$ is always strictly better than $O\left(t^{-\frac{2\alpha}{3}}\right)$.
Finally, several results about the convergence rate of the solutions of the ODE associated with classical gradient descent: \begin{equation}\label{EDOGrad} \dot{x}(t)+\nabla F(x(t))=0, \end{equation} or the ODE associated with the heavy ball method: \begin{equation}\label{EDO} \ddot{x}(t)+\alpha\dot x(t)+\nabla F(x(t))=0, \end{equation} under geometrical conditions such as the \L ojasiewicz property, have been proposed; see for example Polyak and Shcherbakov~\cite{polyak2017lyapunov}. The authors prove that if the function $F$ satisfies $\textbf{H}_2(2)$ and some other conditions, the decay of $F(x(t))-F^*$ is exponential for the solutions of both previous equations. These rates are the continuous counterparts of the exponential decay rates of the classical gradient descent algorithm and of the heavy ball method for strongly convex functions. In the next section we will prove that this exponential rate does not hold for solutions of \eqref{ODE}, even for quadratic functions, and we will prove that, from an optimization point of view, the classical Nesterov acceleration may be less efficient than classical gradient descent. \section{Contributions}\label{sec_contrib} In this section, we state the optimal convergence rates that can be achieved when $F$ satisfies hypotheses such as $\textbf{H}_1(\gamma)$ and/or $\textbf{H}_2(r)$. The first result gives an optimal control for functions whose geometry is sharp: \begin{theorem}\label{Theo1} Let $\gamma\geqslant 1$ and $\alpha >0$. If $F$ satisfies $\textbf{H}_1(\gamma)$ and if $\alpha\leqslant 1+\frac{2}{\gamma}$ then: \begin{equation*} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma\alpha}{\gamma+2}}}\right).
\end{equation*} \end{theorem} \begin{figure}[h] \includegraphics[width=\textwidth]{RateTheo1.png} \caption{Decay rate $r(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}$ depending on $\alpha$ when $\alpha\leqslant 1+\frac{2}{\gamma}$ and when $F$ satisfies $\textbf{H}_1(\gamma)$ (as in Theorem \ref{Theo1}) for four values of $\gamma$: $\gamma_1=1.5$ (dashed line), $\gamma_2=2$ (solid line), $\gamma_3=3$ (dotted line) and $\gamma_4=5$ (dash-dotted line).} \end{figure} Note that a proof of Theorem \ref{Theo1} has been proposed in the unpublished report \cite{AujolDossal}. The obtained decay is proved to be optimal in the sense that it is achieved for some explicit functions $F$ for any $\gamma \geqslant 1$. As a consequence, one cannot expect a $o(t^{-\frac{2\gamma\alpha}{\gamma+2}})$ decay when $\alpha <1+\frac{2}{\gamma}$. Let us now consider the case when $\alpha > 1+\frac{2}{\gamma}$. The second result of this paper provides optimal convergence rates for functions whose geometry is sharp, with a large friction coefficient: \begin{theorem}\label{Theo1b} Let $\gamma\in[1,2]$ and $\alpha >0$. If $F$ satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(2)$, if $F$ has a unique minimizer and if $\alpha>1+\frac{2}{\gamma}$ then: \begin{equation*}\label{eqTheo1} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma\alpha}{\gamma+2}}}\right). \end{equation*} Moreover this decay is optimal in the sense that for any $\gamma\in(1,2]$ this rate is achieved for the function $F(x)=\vert x\vert^\gamma$.
\end{theorem} \begin{figure}[h] \includegraphics[width=\textwidth]{RateSharp.png} \caption{Decay rate $r(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}$ depending on the value of $\alpha$ when $F$ satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(2)$ (as in Theorem \ref{Theo1b}) with $\gamma\leqslant 2$, for two values of $\gamma$: $\gamma_1=1.5$ (dashed line) and $\gamma_2=2$ (solid line).} \end{figure} Note that Theorem~\ref{Theo1b} only applies for $\gamma\leqslant 2$, since there is no function that satisfies both conditions $\textbf{H}_1(\gamma)$ with $\gamma>2$ and $\textbf{H}_2(2)$ (see Lemma \ref{lem:geometry2}). The optimality of the convergence rate result is precisely stated in the next proposition: \begin{proposition} \label{prop_optimal} Let $\gamma\in (1,2]$ and $\alpha>0$. Let $x$ be a solution of \eqref{ODE} with $F(x)=\vert x\vert^{\gamma}$, $|x(t_0)|<1$ and $\dot{x}(t_0)=0$, where $t_0>\sqrt{\max\left(0,\frac{\alpha \gamma(\alpha -1-2/\gamma)}{(\gamma +2)^2}\right)}$. Then there exists $K >0$ such that for any $T>0$, there exists $t \geqslant T$ such that: \begin{equation} F(x(t))-F^* \geqslant \frac{K}{t^{\frac{2 \gamma \alpha}{\gamma +2}}}. \end{equation} \end{proposition} Let us make several observations. First, to apply Theorem~\ref{Theo1b}, more conditions are needed than for Theorem~\ref{Theo1}: the hypothesis $\textbf{H}_2(2)$ and the uniqueness of the minimizer are needed to prove a decay faster than $O(\frac{1}{t^2})$, which is the uniform rate that can be achieved with $\alpha\geqslant 3$ for convex functions \cite{su2016differential}. The uniqueness of the minimizer is crucial in the proof of Theorem~\ref{Theo1b}, but it is still an open problem to know whether this uniqueness is a necessary condition.
In particular, observe that if $\dot x(t_0)=0$, then for all $t \geqslant t_0$, $x(t)$ belongs to $x_0+ {\rm Im} (\nabla F)$, where ${\rm Im} (\nabla F)$ stands for the vector space generated by $\nabla F (x)$ for all $x$ in $\mathbb{R}^n$. As a consequence, Theorem~\ref{Theo1b} still holds true as long as the assumptions are valid on $x_0+ {\rm Im} (\nabla F)$. \begin{remark}[The Least-Squares problem] Let us consider the classical Least-Squares problem defined by: $$\displaystyle\min_{x\in \mathbb{R}^n}F(x):=\frac{1}{2}\|Ax-b\|^2,$$ where $A$ is a linear operator and $b\in \mathbb{R}^n$. If $\dot x(t_0)=0$, then for all $t \geqslant t_0$, $x(t)$ belongs to the affine subspace $x_0+{\rm Im}(A^*)$. Since the restriction of $F$ to $x_0+{\rm Im}(A^*)$ has a unique minimizer, Theorem~\ref{Theo1b} can be applied. \end{remark} We can also remark that if $F$ is a quadratic function in the neighborhood of $x^*$, then $F$ satisfies $\textbf{H}_1(\gamma)$ for any $\gamma \in [1,2]$. Consequently, Theorem~\ref{Theo1b} applies with $\gamma=2$ and thus: \begin{equation*} F(x(t))-F^*=O\left(\frac{1}{t^{\alpha}}\right). \end{equation*} Observe that the optimality result provided by Proposition~\ref{prop_optimal} ensures that we cannot expect an exponential decay of $F(x(t))-F^*$ for quadratic functions, whereas this exponential decay can be achieved for the ODEs associated with gradient descent and the heavy ball method \cite{polyak2017lyapunov}. Likewise, if $F$ is a convex differentiable function with a Lipschitz continuous gradient, and if $F$ satisfies the growth condition $\textbf{H}_2(2)$, then $F$ automatically satisfies the assumption $\textbf{H}_1(\gamma)$ for some $1<\gamma\leqslant 2$, as shown by Lemma~\ref{lem:Lipschitz}, and Theorem~\ref{Theo1b} applies with $\gamma>1$. Finally, if $F$ is strongly convex or has a strong minimizer, then $F$ naturally satisfies $\textbf{H}_1(1)$ and a global version of $\textbf{H}_2(2)$.
Since we prove the optimality of the decay rates given by Theorem~\ref{Theo1b}, a consequence of this work is also the optimality of the power $\frac{2\alpha}{3}$ in \cite{attouch2016fast} for strongly convex functions and functions having a strong minimizer. In both cases, we thus obtain convergence rates which are strictly better than the rate $O(t^{-\frac{2\alpha}{3}})$ proposed for strongly convex functions by Su et al. \cite{su2016differential} and Attouch et al. \cite{attouch2018fast}. Finally, it is worth noticing that the decay for strongly convex functions is not exponential, while it is for the classical gradient descent scheme (see e.g. \cite{garrigos2017convergence}). This shows that applying the classical Nesterov acceleration to convex functions without looking further at the geometrical properties of the objective function may lead to sub-optimal algorithms. Let us now focus on flat geometries, i.e.\ geometries associated with $\gamma>2$. Note that the uniqueness of the minimizer is not needed anymore: \begin{theorem}\label{Theo2} Let $\gamma_1>2$ and $\gamma_2 >2$. Assume that $F$ is coercive and satisfies $\textbf{H}_1(\gamma_1)$ and $\textbf{H}_2(\gamma_2)$ with $\gamma_1\leqslant \gamma_2$. If $\alpha\geqslant \frac{\gamma_1+2}{\gamma_1-2}$ then we have: \begin{equation*}\label{eqTheo2} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma_2}{\gamma_2-2}}}\right). \end{equation*} \end{theorem} In the case when $\gamma_1= \gamma_2$, we have furthermore the convergence of the trajectory: \begin{corollary}\label{Corol2} Let $\gamma>2$.
If $F$ is coercive and satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(\gamma)$, and if $\alpha\geqslant \frac{\gamma+2}{\gamma-2}$, then we have: \begin{equation*}\label{eqCorol2} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma}{\gamma-2}}}\right), \end{equation*} and \begin{equation} \norm{\dot x(t)}=O\left(\frac{1}{t^{\frac{\gamma}{\gamma-2}}}\right). \end{equation} Moreover the trajectory $x(t)$ has a finite length and it converges to a minimizer $x^*$ of $F$. \end{corollary} \begin{figure}[h] \includegraphics[width=\textwidth]{RateFlat.png} \caption{Decay rate $r(\alpha,\gamma)=\frac{2\gamma}{\gamma-2}$ depending on the value of $\alpha$ when $\alpha\geqslant \frac{\gamma+2}{\gamma-2}$ and when $F$ satisfies $\textbf{H}_1(\gamma)$ (as in Theorem \ref{Theo2}) for two values of $\gamma$: $\gamma_3=3$ (dotted line) and $\gamma_4=5$ (dash-dotted line).}\label{fig:flat1} \end{figure} \begin{figure}[h] \includegraphics[width=\textwidth]{RateAll.png} \caption{Decay rate $r(\alpha,\gamma)$ depending on the value of $\alpha$ if $F$ satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$ with $r=\max(2,\gamma)$ for four values of $\gamma$: $\gamma_1=1.5$ (dashed line), $\gamma_2=2$ (solid line), $\gamma_3=3$ (dotted line) and $\gamma_4=5$ (dash-dotted line).}\label{fig:flat2} \end{figure} Observe that the decay obtained in Corollary \ref{Corol2} is optimal, since Attouch et al. proved in \cite{attouch2018fast} that it is achieved for the function $F(x)=\vert x\vert^\gamma$.\\ From Theorems \ref{Theo1}, \ref{Theo1b} and \ref{Theo2}, we can make the following comments. First, in Theorems~\ref{Theo1b} and \ref{Theo2}, both conditions $\textbf{H}_1$ and $\textbf{H}_2$ are used to get a decay rate, and it turns out that these two conditions are important. With the sole hypothesis $\textbf{H}_2(\gamma)$, it seems difficult to establish an optimal rate.
Consider for instance the function $F(x)=|x|^3$, which satisfies $\textbf{H}_1(3)$ and $\textbf{H}_2(3)$. Applying Theorem \ref{Theo2} with $\gamma_1=\gamma_2=3$, we know that for this function, with $\alpha=\frac{\gamma_1+2}{\gamma_1-2}=5$, we have $F(x(t))-F^*=O\left(\frac{1}{t^6}\right)$. But, with the sole hypothesis $\textbf{H}_2(3)$, such a decay cannot be achieved. Indeed, the function $F(x)=|x|^{2}$ satisfies $\textbf{H}_2(3)$, but from the optimality part of Theorem \ref{Theo1b} we know that we cannot achieve a decay better than $\frac{1}{t^{\frac{2\alpha \gamma}{\gamma+2}}}=\frac{1}{t^5}$ for $\alpha=5$. Consider now a convex function $F$ behaving like $\norm{x-x^*}^{\gamma}$ in the neighborhood of its unique minimizer $x^*$. The decay of $F(x(t))-F^*$ then depends directly on $\alpha$ if $\gamma\leqslant 2$, but it does not depend on $\alpha$ for large $\alpha$ if $\gamma>2$. Moreover, for such functions the best decay rate of $F(x(t))-F^*$ is $O\left(\frac{1}{t^{\alpha}}\right)$ and is achieved for $\gamma=2$, i.e.\ for quadratic-like functions around the minimizer. If $\gamma<2$, it seems that the oscillations of the solution $x(t)$ prevent us from getting an optimal decay rate: the inertia seems to be too large for such functions. If $\gamma>2$, for large $\alpha$, the decay is not as fast because the gradient of the function decays too fast in the neighborhood of the minimizer. For these functions a larger inertia could be more efficient. Finally, observe that, as shown in Figures \ref{fig:flat1} and \ref{fig:flat2}, the case when $1+\frac{2}{\gamma}<\alpha<\frac{\gamma+2}{\gamma-2}$ is not covered by our results.
Although we did not get a better convergence rate than $\frac{1}{t^2}$ in that case, we can prove that there exist some initial conditions for which the convergence rate cannot be better than $t^{-\frac{2\gamma\alpha}{\gamma+2}}$: \begin{proposition}\label{PropOpt2} Let $\gamma >2$ and $1+\frac{2}{\gamma}<\alpha<\frac{\gamma+2}{\gamma-2}$. Let $x$ be a solution of \eqref{ODE} with $F(x)=|x|^\gamma$, $|x(t_0)|<1$ and $\dot x(t_0)=0$ for any given $t_0>0$. Then there exists $K>0$ such that for any $T>0$, there exists $t\geqslant T$ such that: $$F(x(t))-F^* \geqslant \dfrac{K}{t^{\frac{2\gamma\alpha}{\gamma+2}}}.$$\label{prop:gap} \end{proposition} \paragraph{Numerical Experiments} In the following numerical experiments, the optimality of the decays given in all the previous theorems is tested for various choices of $\alpha$ and $\gamma$. More precisely, we use a discrete Nesterov scheme to approximate the solution of \eqref{ODE} for $F(x)=|x|^{\gamma}$ on the interval $[t_0,T]$ with $t_0=0$ and $\dot{x}(t_0)=0$; see \cite{su2016differential}. If $\gamma\geqslant 2$, $\nabla F$ is Lipschitz continuous and we define the sequence $(x_n)_{n\in\mathbb{N}}$ as follows: \begin{equation*} x_{n+1}=y_n-h\nabla F(y_n)\text{ with }y_n=x_n+\frac{n}{n+\alpha}(x_n-x_{n-1}), \end{equation*} where $h\in(0,1)$ is a time step. If $\gamma<2$, we use a proximal step: \begin{equation*} x_{n+1}=\mathrm{prox}_{h F}(y_n)\text{ with }y_n=x_n+\frac{n}{n+\alpha}(x_n-x_{n-1}). \end{equation*} It has been shown that $x_n\approx x(n\sqrt{h})$, where the function $x$ is a solution of the ODE \eqref{ODE}. In the following numerical experiments the sequence $(x_n)_{n\in\mathbb{N}}$ is computed for various pairs $(\gamma,\alpha)$. The step size is always set to $h=10^{-7}$.
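The scheme above, in the smooth case $\gamma\geqslant 2$, can be sketched as follows (an illustrative implementation added here, with a coarser step size and horizon than the $h=10^{-7}$ used for the figures):

```python
def nesterov_values(gamma, alpha, h=1e-3, n_steps=5000, x0=0.5):
    """Discrete Nesterov scheme for F(x) = |x|^gamma, gamma >= 2 (gradient step)."""
    def grad_F(x):
        s = 1.0 if x >= 0 else -1.0
        return gamma * abs(x) ** (gamma - 1) * s

    x_prev, x_cur = x0, x0                  # encodes the zero initial velocity
    values = []
    for n in range(1, n_steps + 1):
        y = x_cur + n / (n + alpha) * (x_cur - x_prev)
        x_prev, x_cur = x_cur, y - h * grad_F(y)
        values.append(abs(x_cur) ** gamma)  # F(x_n) - F^*, with F^* = 0
    return values

vals = nesterov_values(gamma=2.0, alpha=6.0)
# the values F(x_n) - F^* should be driven far below their initial level
assert vals[-1] < 1e-3 * vals[0]
```

The rescaled quantities $z_n=(F(x_n)-F^*)(n\sqrt{h})^{rate(\alpha,\gamma)}$ used in the figures can then be computed from the returned values; the proximal variant for $\gamma<2$ is omitted here, since $\mathrm{prox}_{hF}$ has no simple closed form for general $\gamma$.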
We define the function $rate(\alpha,\gamma)$ as the expected rate given in all the previous theorems and Proposition \ref{PropOpt2}, that is: \begin{eqnarray*} rate(\alpha,\gamma)&:=&\left\{\begin{array}{ll} \dfrac{2\alpha \gamma}{\gamma+2} &\text{ if } \gamma\leqslant 2, \text{ or if }\gamma>2\text{ and }\alpha\leqslant 1+\frac{2}{\gamma}, \\ \dfrac{2\gamma}{\gamma-2}& \text{ if } \gamma>2\text{ and }\alpha\geqslant \frac{\gamma+2}{\gamma-2}, \\ \dfrac{2\alpha \gamma}{\gamma+2} &\text{ if } \gamma>2 \text{ and } \alpha\in\left(1+\frac{2}{\gamma}, \frac{\gamma+2}{\gamma-2}\right). \end{array}\right. \end{eqnarray*} If the function $z(t):=\left(F(x(t))-F(x^*)\right)t^{\delta}$ is bounded but does not tend to $0$, we can deduce that $\delta$ is the largest value such that $F(x(t))-F(x^*)=O\left(t^{-\delta}\right)$. We define \begin{equation*} z_n:=(F(x_n)-F(x^*))\times (n\sqrt{h})^{rate(\alpha,\gamma)}\approx (F(x(t))-F(x^*))t^{rate(\alpha,\gamma)}, \end{equation*} and if the function $rate(\alpha,\gamma)$ is optimal, we expect the sequence $(z_n)_{n\in\mathbb{N}}$ to be bounded but not to decay to $0$. The following figures show, for various choices of $(\alpha,\gamma)$, the trajectory of the sequence $(z_n)_{n\in\mathbb{N}}$. The values are re-scaled such that the maximum is always $1$. In all these numerical examples, we will observe that the sequence $(z_n)_{n\in\mathbb{N}}$ is bounded and does not tend to $0$. \begin{figure}[h] \includegraphics[width=0.495\textwidth]{Trajgam1dot5alpha1r0dot86.png} \includegraphics[width=0.495\textwidth]{Trajgam1dot5alpha6r5dot14.png} \caption{Case when $\gamma=1.5$. On the left, $\alpha=1$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=\frac{6}{7}$.
On the right, $\alpha=6$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=\frac{36}{7}$.}\label{fig:gamma1.5} \end{figure} \begin{figure}[h] \includegraphics[width=0.495\textwidth]{Trajgam2alpha1r1.png} \includegraphics[width=0.495\textwidth]{Trajgam2alpha6r6.png} \caption{Case when $\gamma=2$. On the left, $\alpha=1$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=1$. On the right, $\alpha=6$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=6$.}\label{fig:gamma2} \end{figure} \begin{figure}[h] \includegraphics[width=0.495\textwidth]{Trajgam3alpha1r1dot2.png} \includegraphics[width=0.495\textwidth]{Trajgam3alpha4r4dot8.png} \includegraphics[width=0.495\textwidth]{Trajgam3alpha6r6.png} \includegraphics[width=0.495\textwidth]{Trajgam3alpha8r6.png} \caption{Case when $\gamma=3$. On the top left, $\alpha=1$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=1.2$; on the top right, $\alpha=4$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=4.8$; on the bottom left, $\alpha=6$ and $rate(\alpha,\gamma)=\frac{2\gamma}{\gamma-2}=6$; on the bottom right, $\alpha=8$ and $rate(\alpha,\gamma)=\frac{2\gamma}{\gamma-2}=6$.}\label{fig:gamma3} \end{figure} \begin{itemize} \item Figures \ref{fig:gamma1.5} and \ref{fig:gamma2}, with $\gamma=1.5$ and $\gamma=2$, illustrate Theorem \ref{Theo1}, Theorem \ref{Theo1b} and Proposition \ref{prop_optimal}. Indeed, for sharp functions (i.e.\ for $\gamma\leqslant 2$) the rate is proved to be optimal. \item In the case $\gamma=3$ and $\alpha=1$, the fact that $(F(x(t))-F(x^*))t^{rate(\alpha,\gamma)}$ is bounded is also a consequence of Theorem \ref{Theo1}. The optimality of this rate is not proven, but the experiments indicate that it holds numerically.
\item In the case $\gamma=3$ and $\alpha=4$, i.e.\ $\alpha\in(\frac{\gamma+2}{\gamma},\frac{\gamma+2}{\gamma-2})$, the fact that $(F(x(t))-F(x^*))t^{rate(\alpha,\gamma)}$ is bounded is not proved, but the experiments from Figure \ref{fig:gamma3} indicate that it holds numerically. However, Proposition \ref{PropOpt2} proves that the sequence $(z_n)_{n\in\mathbb{N}}$ does not tend to 0, which is illustrated by the experiments.
\item When $\gamma=3$ and $\alpha=6$ or $\alpha=8$, Theorem \ref{Theo2} ensures that the sequence $(z_n)_{n\in\mathbb{N}}$ is bounded. This rate is proved to be optimal, and the numerical experiments from Figure \ref{fig:gamma3} show that it is actually achieved for this specific choice of parameters.
\end{itemize}
\section{Proofs}\label{sec_proofs}
In this section, we detail the proofs of the results presented in Section~\ref{sec_contrib}, namely Theorems \ref{Theo1}, \ref{Theo1b} and \ref{Theo2}, Propositions~\ref{prop_optimal} and \ref{prop:gap}, and Corollary~\ref{Corol2}. The proofs of the theorems rely on the Lyapunov functions $\mathcal{E}$ and $\mathcal{H}$ introduced by Su, Boyd and Candes \cite{su2016differential}, Attouch, Chbani, Peypouquet and Redont \cite{attouch2018fast} and Aujol and Dossal \cite{AujolDossal}:
\begin{equation*}
\mathcal{E}(t)=t^2(F(x(t))-F^*)+\frac{1}{2} \norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2+\frac{\xi}{2}\norm{x(t)-x^*}^2,
\end{equation*}
where $x^*$ is a minimizer of $F$ and $\lambda$ and $\xi$ are two real numbers. The function $\mathcal{H}$ is defined from $\mathcal{E}$ and depends on another real parameter $p$:
\begin{equation*}
\mathcal{H}(t)=t^p\mathcal{E}(t).
\end{equation*}
Using the following notations:
\begin{align*}
a(t)&=t(F(x(t))-F^*),\\
b(t)&=\frac{1}{2t}\norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2,\\
c(t)&=\frac{1}{2t}\norm{x(t)-x^*}^2,
\end{align*}
we have:
\begin{equation*}
\mathcal{E}(t)=t(a(t)+b(t)+\xi c(t)).
\end{equation*}
From now on we will choose
\begin{equation*}
\xi=\lambda(\lambda+1-\alpha),
\end{equation*}
and we will use the following lemma, whose proof is postponed to Appendix~\ref{appendix}:
\begin{lemma}\label{LemmeFonda}
If $F$ satisfies $\textbf{H}_1(\gamma)$ for any $\gamma\geq 1$, and if $\xi=\lambda(\lambda-\alpha+1)$ then
\begin{equation*}
\mathcal{H}'(t)\leqslant t^{p}\left((2-\gamma\lambda+p)a(t)+(2\lambda+2-2\alpha+p)b(t)+\lambda(\lambda+1-\alpha)(-2\lambda+p)c(t)\right).
\end{equation*}
\end{lemma}
Note that this inequality is actually an equality for the specific choice $F(x)=\vert x\vert^\gamma$, $\gamma>1$.
\subsection{Proof of Theorems \ref{Theo1} and \ref{Theo1b}}
In this section we prove Theorem \ref{Theo1} and Theorem \ref{Theo1b}. Note that a complete proof of Theorem~\ref{Theo1}, including the optimality of the rate, can be found in the unpublished report \cite{AujolDossal} under the hypothesis that $(F-F^*)^{\frac{1}{\gamma}}$ is convex. The proofs of both theorems are actually similar. The choices of $p$ and $\lambda$ are the same but, in the first case, due to the value of $\alpha$, the function $\mathcal{H}$ is non-increasing and a sum of non-negative terms, which simplifies the analysis and requires fewer hypotheses to conclude. We choose here $p=\frac{2\gamma \alpha}{\gamma+2}-2$ and $\lambda=\frac{2\alpha}{\gamma+2}$ and thus
\begin{equation*}
\xi=\frac{2\alpha\gamma}{(\gamma+2)^2}\left(1+\frac{2}{\gamma}-\alpha\right).
\end{equation*}
From Lemma \ref{LemmeFonda}, it appears that:
\begin{equation}\label{ineqH1}
\mathcal{H}'(t)\leqslant K_1t^{p}c(t)
\end{equation}
where the real constant $K_1$ is given by:
\begin{eqnarray*}
K_1 & = & \lambda(\lambda+1-\alpha)(-2\lambda+p)\\
&=& \frac{2\alpha}{\gamma+2} \left(\frac{2\alpha}{\gamma+2}+1-\alpha\right) \left(-2 \frac{2\alpha}{\gamma+2}+\frac{2\gamma \alpha}{\gamma+2}-2 \right) \\
& = & \frac{4\alpha}{(\gamma+2)^3} \left(2\alpha+\gamma+2-\alpha \gamma -2 \alpha \right) \left(-2\alpha+\gamma \alpha - \gamma -2 \right) \\
& = & \frac{4\alpha}{(\gamma+2)^3} \left(\gamma+2-\alpha \gamma\right) \left(\alpha(-2+\gamma) - \gamma -2 \right).
\end{eqnarray*}
Hence:
\begin{equation}\label{eqdefK1}
K_1=\frac{4\alpha\gamma}{(\gamma+2)^3} \left(1+\frac{2}{\gamma}-\alpha\right) \left(\alpha(-2+\gamma) - \gamma -2 \right).
\end{equation}
Consider first the case when $\alpha\leqslant 1+\frac{2}{\gamma}$. In that case, we observe that $\xi\geqslant 0$, so that the energy $\mathcal{H}$ is actually a sum of non-negative terms. Coming back to \eqref{ineqH1}, we have:
\begin{equation}
\mathcal{H}'(t)\leqslant K_1t^{p}c(t).\label{ineqH1b1}
\end{equation}
Since $\alpha\leqslant 1+\frac{2}{\gamma}$, the sign of the constant $K_1$ is the same as that of $\alpha(-2+\gamma) - \gamma -2$, and thus $K_1\leqslant 0$ for any $\gamma\geqslant 1$. According to \eqref{ineqH1b1}, the energy $\mathcal{H}$ is thus non-increasing, and hence bounded, i.e.:
$$\forall t\geqslant t_0,~\mathcal{H}(t)\leqslant \mathcal{H}(t_0).$$
Since $\mathcal{H}$ is a sum of non-negative terms, it follows directly that:
$$\forall t\geqslant t_0,~t^{p+2}(F(x(t))-F^*)\leqslant \mathcal{H}(t_0),$$
which concludes the proof of Theorem~\ref{Theo1}. Consider now the case when $\alpha > 1+\frac{2}{\gamma}$.
In that case, we first observe that $\xi<0$, so that $\mathcal{H}$ is not a sum of non-negative functions anymore, and an additional growth condition $\textbf{H}_2(2)$ will be needed to bound the term in $\norm{x(t)-x^*}^2$. Coming back to \eqref{ineqH1}, we have:
\begin{equation}
\mathcal{H}'(t)\leqslant K_1t^{p}c(t).\label{ineqH1b2}
\end{equation}
Since $\alpha > 1+\frac{2}{\gamma}$, the sign of the constant $K_1$ is the opposite of the sign of $\alpha(\gamma -2)-(\gamma+2)$. Moreover, since $\gamma\leqslant 2$, we have $\alpha(\gamma -2)-(\gamma+2)<0$ and thus $K_1 >0$. Using Hypothesis $\textbf{H}_2(2)$ and the uniqueness of the minimizer, there exists $K>0$ such that:
\begin{equation*}
Kt\norm{x(t)-x^*}^2\leqslant t(F(x(t))-F^*)=a(t),
\end{equation*}
and thus
\begin{equation}
c(t)\leqslant \frac{1}{2Kt^2}a(t).\label{eqct}
\end{equation}
Since $\xi<0$ with our choice of parameters, we get:
\begin{eqnarray}
\mathcal{H}(t) &\geqslant & t^{p+1}(a(t) + \xi c(t)) \geqslant t^{p+1}\left(1+\frac{\xi}{2Kt^2}\right)a(t).\label{H:bound}
\end{eqnarray}
It follows that there exists $t_1$ such that for all $t\geqslant t_1$, $\mathcal{H}(t)\geqslant 0$ and:
\begin{equation}
\mathcal{H}(t) \geqslant \frac{1}{2}t^{p+1}a(t).\label{eqat}
\end{equation}
From \eqref{ineqH1b2}, \eqref{eqct} and \eqref{eqat}, we get:
\begin{equation*}
\mathcal{H}'(t)\leqslant \frac{K_1}{K}\frac{\mathcal{H}(t)}{t^3}.
\end{equation*}
From the Gr\"onwall Lemma in its differential form, there exists $A>0$ such that for all $t\geqslant t_1$, we have $\mathcal{H}(t)\leqslant A$. According to \eqref{eqat}, we then conclude that $t^{p+2}(F(x(t))-F^*)=t^{p+1}a(t)$ is bounded, which concludes the proof of Theorem \ref{Theo1b}.
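As an illustration of the rates established above, one can integrate the ODE $\ddot x+\frac{\alpha}{t}\dot x+\nabla F(x)=0$ with $F(x)=\vert x\vert^\gamma$ numerically and monitor $z(t)=(F(x(t))-F^*)\,t^{rate(\alpha,\gamma)}$, as in the figures of the previous section. The following sketch is ours, not the authors': the semi-implicit Euler scheme, step size and initial data are arbitrary illustrative choices.

```python
import math

def rate(alpha, gamma):
    """Piecewise expected rate from the theorems and Proposition PropOpt2."""
    if gamma <= 2 or alpha <= 1 + 2 / gamma:
        return 2 * alpha * gamma / (gamma + 2)
    if alpha >= (gamma + 2) / (gamma - 2):
        return 2 * gamma / (gamma - 2)
    return 2 * alpha * gamma / (gamma + 2)  # intermediate regime

def trajectory_z(alpha, gamma, t0=1.0, x0=1.0, h=1e-4, n_steps=200_000):
    """Integrate x'' + (alpha/t) x' + gamma |x|^(gamma-1) sign(x) = 0 from rest
    with semi-implicit Euler, sampling z(t) = |x(t)|^gamma * t^rate(alpha, gamma)."""
    r = rate(alpha, gamma)
    t, x, v = t0, x0, 0.0
    zs = []
    for k in range(n_steps):
        grad = gamma * abs(x) ** (gamma - 1) * math.copysign(1.0, x) if x else 0.0
        v += h * (-(alpha / t) * v - grad)  # update velocity first ...
        x += h * v                          # ... then position (symplectic flavour)
        t += h
        if k % 10_000 == 0:
            zs.append(abs(x) ** gamma * t ** r)
    return zs
```

For $(\alpha,\gamma)=(1,1.5)$, the samples of $z$ oscillate but stay within a bounded band, qualitatively matching the left panel of Figure \ref{fig:gamma1.5}.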
\subsection{Proof of Proposition~\ref{prop_optimal} (Optimality of the convergence rates)}
Before proving the optimality of the convergence rate stated in Proposition~\ref{prop_optimal}, we need the following technical lemma:
\begin{lemma} \label{lemmatech}
Let $y$ be a continuously differentiable function with values in $\mathbb{R}$. Let $T>0$ and $\epsilon > 0$. If $y$ is bounded, then there exists $t_1 >T$ such that:
\begin{equation*}
|\dot{y}(t_1) | \leqslant \frac{\epsilon}{t_1}.
\end{equation*}
\end{lemma}
\begin{proof}
We split the proof into two cases.
\begin{enumerate}
\item There exists $t_1 >T$ such that $\dot{y}(t_1)=0$.
\item $\dot{y}(t)$ is of constant sign for $t> T$. For instance we assume $\dot{y}(t)>0$. By contradiction, let us assume that $\dot{y}(t)>\frac{\epsilon}{t}$ for all $t > T$. Integrating, we would get $y(t)\geqslant y(T)+\epsilon\ln(t/T)$ for all $t>T$, so that $y(t)$ could not be a bounded function as assumed.
\end{enumerate}
\end{proof}
Let us now prove Proposition~\ref{prop_optimal}. The idea of the proof is the following: we first show that $\mathcal{H}$ is bounded from below. Since $\mathcal{H}$ is a sum of 3 terms including the term $F-F^*$, we then show that, given $t_1 \geq t_0$, there always exists a time $t \geq t_1$ at which the value of $\mathcal{H}$ is concentrated on the term $F-F^*$. We start the proof by using the fact that, for the function $F(x)=\vert x\vert^{\gamma}$, $\gamma>1$, the inequality of Lemma~\ref{LemmeFonda} is actually an equality. Using the values $p=\frac{2\gamma \alpha}{\gamma+2}-2$ and $\lambda=\frac{2\alpha}{\gamma+2}$ of Theorems~\ref{Theo1} and \ref{Theo1b}, we have a closed form for the derivative of the function $\mathcal{H}$:
\begin{equation}\label{eqH1}
\mathcal{H}'(t) = K_1 t^{p}c(t) =\frac{K_1}{2} t^{p-1}\vert x(t)\vert^{2},
\end{equation}
where $K_1$ is the constant given in \eqref{eqdefK1}.
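The sign discussion around $K_1$ relies on the factorization \eqref{eqdefK1}, which is easy to get wrong by hand. The following small cross-check is ours, not part of the paper: it compares the defining product $\lambda(\lambda+1-\alpha)(-2\lambda+p)$ with the factored form on a grid of parameter values.

```python
import math

def K1_product(alpha, gamma):
    # K1 = lambda (lambda + 1 - alpha)(-2 lambda + p) with the chosen p, lambda
    lam = 2 * alpha / (gamma + 2)
    p = 2 * gamma * alpha / (gamma + 2) - 2
    return lam * (lam + 1 - alpha) * (-2 * lam + p)

def K1_factored(alpha, gamma):
    # the factorized closed form (eqdefK1)
    return (4 * alpha * gamma / (gamma + 2) ** 3
            * (1 + 2 / gamma - alpha)
            * (alpha * (gamma - 2) - gamma - 2))

# the two expressions agree (up to rounding) for every tested pair
for alpha in (0.5, 1.0, 2.5, 6.0):
    for gamma in (1.0, 1.5, 2.0, 3.0):
        assert math.isclose(K1_product(alpha, gamma), K1_factored(alpha, gamma),
                            rel_tol=1e-9, abs_tol=1e-12)
```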
We now prove that there exists $\ell>0$ such that for $t$ large enough:
\begin{equation*}
\mathcal{H}(t) \geqslant \ell.
\end{equation*}
To prove that point we consider two cases, depending on the sign of $\alpha-(1+\frac{2}{\gamma})$.
\begin{enumerate}
\item Case when $\alpha\leqslant 1+\frac{2}{\gamma}$, so that $\xi\geqslant 0$ and $K_1\leqslant 0$. We first observe that $\mathcal{H}$ is a non-negative and non-increasing function. Moreover, there exists $\tilde t\geqslant t_0$ such that for $t\geqslant \tilde t$, $|x(t)|\leqslant 1$ and:
\begin{equation*}
t^pc(t)\leqslant \frac{t^pa(t)}{2t^2}\leqslant \frac{\mathcal{H}(t)}{t^3},
\end{equation*}
which implies, using \eqref{eqH1}, that:
\begin{equation*}
|\mathcal{H}'(t)|\leqslant |K_1|\frac{\mathcal{H}(t)}{t^3}.
\end{equation*}
If we denote $G(t)=\ln (\mathcal{H}(t))$ we get, for all $t\geqslant \tilde t$,
\begin{equation*}
|G(t)-G(\tilde t)|\leqslant \int_{\tilde t}^t\frac{|K_1|}{s^3}ds.
\end{equation*}
We deduce that $G$ is bounded, and then that there exists $\ell>0$ such that for $t$ large enough:
\begin{equation*}
\mathcal{H}(t) \geqslant \ell.
\end{equation*}
\item Case when $\alpha> 1+\frac{2}{\gamma}$, so that $\xi< 0$ and $K_1> 0$. This implies in particular that $\mathcal{H}$ is non-decreasing. Moreover, from Theorem~\ref{Theo1b}, $\mathcal{H}$ is bounded above.
Coming back to the inequality \eqref{H:bound}, we observe that $\mathcal{H}(t_0)>0$ provided that $1+\frac{\xi}{2t_0^2}>0$, with $K=1$ and $\xi = \lambda(\lambda-\alpha+1)$, i.e.:
$$t_0>\sqrt{\frac{\alpha\gamma}{(\gamma +2)^2}\left(\alpha-\left(1+\frac{2}{\gamma}\right)\right)}.$$
In particular, we have that for any $t\geqslant t_0$
\begin{equation*}
\mathcal{H}(t) \geqslant \ell,
\end{equation*}
with $\ell=\mathcal{H}(t_0)$.
\end{enumerate}
Hence for any $\alpha>0$ and for $t$ large enough
\begin{equation*}
a(t)+b(t)+ \xi c(t) \geqslant \frac{\ell}{t^{p+1}}.
\end{equation*}
Moreover, since $c(t)=o(a(t))$ when $t \to + \infty$, we have that for $t$ large enough,
\begin{equation*}
a(t)+b(t) \geqslant \frac{\ell}{2 t^{p+1}}.
\end{equation*}
Let $T>0$ and $\epsilon >0$. We set:
\begin{equation*}
y(t):=t^{\lambda} x(t),
\end{equation*}
where $\lambda =\frac{2 \alpha}{\gamma+2}$. From Theorems~\ref{Theo1} and \ref{Theo1b}, we know that $y(t)$ is bounded. Hence, from Lemma~\ref{lemmatech}, there exists $t_1 > T$ such that
\begin{equation} \label{eqt1}
|\dot{y}(t_1) |\leqslant \frac{\epsilon}{t_1}.
\end{equation}
But:
\begin{equation*}
\dot{y}(t) = t^{\lambda-1} \left(\lambda x(t) +t \dot{x}(t) \right).
\end{equation*}
Hence, using \eqref{eqt1}:
\begin{equation*}
t_1^{\lambda} \left|\lambda x(t_1) +t_1 \dot{x}(t_1) \right| \leqslant \epsilon.
\end{equation*}
We recall that $b(t)=\frac{1}{2t}\norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2$. We thus have:
\begin{equation*}
b(t_1) \leqslant \frac{\epsilon^2}{2 t_1^{2\lambda+1}}.
\end{equation*}
Since $\gamma \leqslant 2$, $\lambda =\frac{2 \alpha}{\gamma+2}$ and $p=\frac{2\gamma \alpha}{\gamma+2}-2$, we have $ 2\lambda+1 \geq p+1 $, and thus
\begin{equation*}
b(t_1) \leqslant \frac{\epsilon^2}{2 t_1^{p+1}}.
\end{equation*}
For $\epsilon =\sqrt{\frac{\ell}{2}}$ for example, there thus exists some $t_1 >T$ such that $b(t_1) \leqslant \frac{\ell}{4 t_1^{p+1}}$. Then $a(t_1) \geqslant \frac{\ell}{4 t_1^{p+1}}$, i.e. $F(x(t_1))-F^* \geqslant \frac{\ell}{4 t_1^{p+2}}$. Since $p+2= \frac{2 \gamma \alpha}{\gamma +2}$, this concludes the proof.
\subsection{Proof of Theorem \ref{Theo2}}
We detail here the proof of Theorem \ref{Theo2}. Let us consider $\gamma_1>2$, $\gamma_2 >2$, and $\alpha\geqslant \frac{\gamma_1+2}{\gamma_1-2}$. We consider here the functions $\mathcal{H}$ for all $x^*$ in the set $X^*$ of minimizers of $F$ and prove that these functions are uniformly bounded. More precisely, for any $x^*\in X^*$ we define $\mathcal{H}(t)$ with $p=\frac{4}{\gamma_1-2}$ and $\lambda=\frac{2}{\gamma_1-2}$. With this choice of $\lambda$ and $p$, using Hypothesis $\textbf{H}_1(\gamma_1)$ we have from Lemma~\ref{LemmeFonda}:
\begin{equation*}
\mathcal{H}'(t)\leqslant 2t^{\frac{4}{\gamma_1-2}}\left(\frac{\gamma_1+2}{\gamma_1-2}-\alpha\right)b(t),
\end{equation*}
which is non-positive when $\alpha\geqslant\frac{\gamma_1+2}{\gamma_1-2}$. This implies that the function $\mathcal{H}$ is bounded above. Hence, for any choice of $x^*$ in the set of minimizers $X^*$, the function $\mathcal{H}$ is bounded above, and since the set of minimizers is bounded ($F$ is coercive), there exists $A>0$ and $t_0$ such that for all choices of $x^*$ in $X^*$,
\begin{equation*} \label{inegHHAtz}
\mathcal{H}(t_0)\leqslant A,
\end{equation*}
which implies that for all $x^* \in X^{*}$ and for all $t\geqslant t_0$
\begin{equation*} \label{inegHHA}
\mathcal{H}(t)\leqslant A.
\end{equation*}
Hence for all $t\geqslant t_0$ and for all $x^*\in X^*$
\begin{equation*}\label{BoundWold}
t^{\frac{4}{\gamma_1-2}}t^2(F(x(t))-F^*)\leqslant \frac{\vert \xi\vert}{2} t^{\frac{4}{\gamma_1-2}}\norm{x(t)-x^*}^2 +A,
\end{equation*}
which implies that
\begin{equation}\label{BoundW}
t^{\frac{4}{\gamma_1-2}}t^2(F(x(t))-F^*)\leqslant \frac{\vert \xi\vert}{2} t^{\frac{4}{\gamma_1-2}}d(x(t),X^{*})^2 +A.
\end{equation}
We now set:
\begin{equation}\label{defv}
v(t):=t^{\frac{4}{\gamma_2-2}}d(x(t),X^*)^2.
\end{equation}
Using \eqref{BoundW} we have:
\begin{equation}\label{Boundu1bos}
t^{\frac{2 \gamma_1}{\gamma_1-2}}(F(x(t))-F^*)\leqslant \frac{\vert \xi\vert}{2} t^{\frac{4}{\gamma_1-2}-\frac{4}{\gamma_2-2}} v(t)+A.
\end{equation}
Using the hypothesis $\textbf{H}_2(\gamma_2)$ applied under the form given by Lemma \ref{lem:H2} (since $X^*$ is compact), there exists $K>0$ such that
\begin{equation*}
K\left(t^{-\frac{4}{\gamma_2-2}}v(t)\right)^{\frac{\gamma_2}{2}}\leqslant F(x(t))-F^*,
\end{equation*}
which is equivalent to
\begin{equation*}
Kv(t)^{\frac{\gamma_2}{2}} t^{\frac{-2\gamma_2}{\gamma_2-2}} \leqslant F(x(t))-F^*.
\end{equation*}
Hence:
\begin{equation*}
K t^{\frac{2\gamma_1}{\gamma_1-2}} t^{\frac{-2\gamma_2}{\gamma_2-2}} v(t)^{\frac{\gamma_2}{2}}\leqslant t^{\frac{2\gamma_1}{\gamma_1-2}} (F(x(t))-F^*).
\end{equation*}
Using \eqref{Boundu1bos}, we obtain:
\begin{equation*}
K t^{\frac{2\gamma_1}{\gamma_1-2}-\frac{2\gamma_2}{\gamma_2-2}} v(t)^{\frac{\gamma_2}{2}}\leqslant \frac{|\xi|}{2} t^{\frac{4}{\gamma_1-2}-\frac{4}{\gamma_2-2}} v(t)+A,
\end{equation*}
i.e.:
\begin{equation}\label{vbounded}
K v(t)^{\frac{\gamma_2}{2}}\leqslant \frac{|\xi|}{2} v(t)+A t^{\frac{4}{\gamma_2-2}-\frac{4}{\gamma_1-2}}.
\end{equation}
Since $2<\gamma_1 \leqslant \gamma_2$, the exponent $\frac{4}{\gamma_2-2}-\frac{4}{\gamma_1-2}$ is non-positive, and since $\frac{\gamma_2}{2}>1$, we deduce from \eqref{vbounded} that $v$ is bounded. Hence, using \eqref{Boundu1bos}, there exists some positive constant $B$ such that:
\begin{equation*}
F(x(t))-F^*\leqslant B t^{\frac{-2 \gamma_2}{\gamma_2-2}} +A t^{\frac{-2 \gamma_1}{\gamma_1-2}}.
\end{equation*}
Since $2<\gamma_1 \leqslant \gamma_2$, we have $\frac{-2 \gamma_2}{\gamma_2-2} \geqslant \frac{-2 \gamma_1}{\gamma_1-2}$. Hence we deduce that $F(x(t))-F^* = O \left( t^{\frac{-2 \gamma_2}{\gamma_2-2}}\right)$.\\
\subsection{Proof of Corollary~\ref{Corol2}}
We are now in a position to prove Corollary~\ref{Corol2}. The first point of Corollary~\ref{Corol2} is just a particular instance of Theorem~\ref{Theo2}. In the sequel, we prove the second point of Corollary~\ref{Corol2}. Let $t\geqslant t_0$ and let $\tilde x\in X^*$ be such that
\begin{equation*}
\|x(t)-\tilde x\|=d(x(t),X^*).
\end{equation*}
We previously proved that there exists $A>0$ such that for any $t\geqslant t_0$ and any $x^*\in X^*$,
\begin{equation*}
\mathcal{H}(t)\leqslant A.
\end{equation*}
For the choice $x^*=\tilde x$ this inequality ensures that
\begin{equation*}
\frac{t^{\frac{4}{\gamma-2}}}{2}\norm{\lambda (x(t)-\tilde x)+t\dot x(t)}^2+t^{\frac{4}{\gamma-2}}\frac{\xi}{2}d(x(t),\tilde x)^2\leqslant A,
\end{equation*}
which is equivalent to
\begin{equation*}
\frac{t^{\frac{4}{\gamma-2}}}{2}\norm{\lambda (x(t)-\tilde x)+t\dot x(t)}^2\leqslant \frac{|\xi|}{2}v(t)+A,
\end{equation*}
where $v(t)$ is defined in \eqref{defv} with $\gamma=\gamma_2$. Using the fact that the function $v$ is bounded (a consequence of \eqref{vbounded}), we deduce that there exists a positive constant $A_1>0$ such that:
\begin{equation*}
\norm{\lambda (x(t)-\tilde x)+t\dot x(t)}\leqslant \frac{A_1}{t^{\frac{2}{\gamma-2}}}.
\end{equation*}
Thus:
\begin{equation*}
t\norm{\dot x(t)}\leqslant \frac{A_1}{t^{\frac{2}{\gamma-2}}}+|\lambda|d(x(t),\tilde x)=\frac{A_1+|\lambda|\sqrt{v(t)}}{t^{\frac{2}{\gamma-2}}}.
\end{equation*}
Using once again the fact that the function $v$ is bounded, we deduce that there exists a real number $A_2$ such that
\begin{equation*}
\norm{\dot x(t)}\leqslant \frac{A_2}{t^{\frac{\gamma}{\gamma-2}}},
\end{equation*}
which implies that $\norm{\dot x(t)}$ is an integrable function. As a consequence, we deduce that the trajectory $x(t)$ has a finite length.
\subsection{Proof of Proposition \ref{prop:gap}}
The idea of the proof is very similar to that of Proposition~\ref{prop_optimal} (optimality of the convergence rate in the sharp case, i.e.\ when $\gamma \in (1,2]$). For the exact same choice of parameters $p=\frac{2\gamma\alpha}{\gamma+2}-2$ and $\lambda=\frac{2\alpha}{\gamma+2}$, and assuming that $1+\frac{2}{\gamma}<\alpha < \frac{\gamma+2}{\gamma-2}$, we first show that the energy $\mathcal H$ is non-decreasing and then:
\begin{equation}
\forall t\geqslant t_0,~\mathcal H(t) \geqslant \ell,\label{infH}
\end{equation}
where $\ell=\mathcal H(t_0)>0$. Indeed, since $\gamma>2$ and $\alpha <\frac{\gamma+2}{\gamma-2}$, a straightforward computation shows that $\lambda^2 -|\xi|>0$, so that:
\begin{eqnarray*}
\mathcal H(t_0) &=& t_0^{p+2}|x(t_0)|^\gamma + \frac{t_0^p}{2}\left(|\lambda x(t_0)+t_0\dot x(t_0)|^2 -|\xi||x(t_0)|^2\right)\\
&=& t_0^{p+2}|x(t_0)|^\gamma + \frac{t_0^p}{2}\left( \lambda^2 -|\xi|\right)|x(t_0)|^2 >0,
\end{eqnarray*}
without any additional assumption on the initial time $t_0>0$. Let $T>t_0$. We set $y(t)=t^\lambda x(t)$. If $y(t)$ is bounded, then, as in Proposition~\ref{prop_optimal}, by the exact same arguments we prove that there exists $t_1>T$ such that $b(t_1) \leq \frac{\ell}{4t_1^{p+1}}$.
Moreover, since $\xi<0$, we deduce from \eqref{infH} that:
$$t_1^{p+1}(a(t_1)+b(t_1)) \geqslant \ell.$$
Hence:
$$a(t_1)=t_1(F(x(t_1))-F^*) \geqslant \frac{\ell}{4t_1^{p+1}},$$
i.e.: $F(x(t_1))-F^* \geqslant \frac{\ell}{4t_1^{p+2}} = \frac{\ell}{4t_1^{\frac{2\alpha\gamma}{\gamma+2}}}$. If $y(t)$ is not bounded, then the proof is even simpler: indeed, in that case, for any $K>0$, there exists $t_1\geqslant T$ such that $|y(t_1)|\geqslant K$, hence:
$$F(x(t_1))-F^*=|x(t_1)|^\gamma \geqslant \frac{K^\gamma}{t_1^{\lambda\gamma}}=\frac{K^\gamma}{t_1^{\frac{2\alpha\gamma}{\gamma+2}}},$$
which concludes the proof.
\section{Proof of Lemma~\ref{LemmeFonda}}
\label{appendix}
We prove here Lemma~\ref{LemmeFonda}. Notice that the computations are standard (see e.g. \cite{AujolDossal}).
\begin{lemma} \label{lemma_tec}
\begin{eqnarray*}
\mathcal{E}'(t)&=&2a(t)+\lambda t\ps{-\nabla F(x(t))}{x(t)-x^*} + (\xi- \lambda (\lambda + 1 -\alpha) )\ps{\dot{x}(t)}{x(t)-x^*}\\
&&+ 2(\lambda+1-\alpha) b(t) -2\lambda^2(\lambda+1-\alpha) c(t)
\end{eqnarray*}\label{lem:tec1}
\end{lemma}
\begin{proof}
Let us differentiate the energy $\mathcal E$:
\begin{eqnarray*}
\mathcal{E}'(t)&=&2a(t)+t^2\ps{\nabla F(x(t))}{\dot{x}(t)}+\ps{\lambda \dot{x}(t)+t\ddot{x}(t)+\dot{x}(t)}{\lambda(x(t)-x^*)+t\dot{x}(t)}\\
&&+\xi\ps{\dot{x}(t)}{x(t)-x^*} \\
&=&2a(t)+t^2\ps{\nabla F(x(t))+\ddot{x}(t)}{\dot{x}(t)}+(\lambda+1)t\norm{\dot{x}(t)}^2+\lambda t\ps{\ddot{x}(t)}{x(t)-x^*}\\
&&+(\lambda(\lambda+1)+\xi)\ps{\dot{x}(t)}{x(t)-x^*} \\
&=&2a(t)+t^2\ps{-\frac{\alpha}{t}\dot{x}(t)}{\dot{x}(t)}+(\lambda+1)t\norm{\dot{x}(t)}^2+\lambda t\ps{\ddot{x}(t)}{x(t)-x^*}\\
&&+(\lambda(\lambda+1)+\xi)\ps{\dot{x}(t)}{x(t)-x^*}\\
&=&2a(t)+t(\lambda+1-\alpha)\norm{\dot{x}(t)}^2 +\lambda t\ps{\ddot{x}(t)}{x(t)-x^*}+ (\lambda(\lambda+1)+\xi)\ps{\dot{x}(t)}{x(t)-x^*}.
\end{eqnarray*}
Using the ODE \eqref{ODE}, we get:
\begin{eqnarray*}
\mathcal{E}'(t)&=&2a(t)+t(\lambda+1-\alpha)\norm{\dot{x}(t)}^2 +\lambda t\ps{-\nabla F(x(t))-\frac{\alpha}{t}\dot{x}(t)}{x(t)-x^*}\\
&&+ (\lambda(\lambda+1)+\xi)\ps{\dot{x}(t)}{x(t)-x^*}\\
&=&2a(t)+t(\lambda+1-\alpha)\norm{\dot{x}(t)}^2+\lambda t\ps{-\nabla F(x(t))}{x(t)-x^*}\\
&&+ (\lambda(\lambda+1)-\alpha\lambda+\xi)\ps{\dot{x}(t)}{x(t)-x^*}.
\end{eqnarray*}
Observing now that:
\begin{equation*}\label{eqReformEner}
\frac{1}{t}\norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2=t\norm{\dot{x}(t)}^2+2\lambda\ps{\dot{x}(t)}{x(t)-x^*}+\frac{\lambda^2}{t}\norm{x(t)-x^*}^2,
\end{equation*}
we can write:
\begin{eqnarray*}
\mathcal{E}'(t)&=&2a(t)+\lambda t\ps{-\nabla F(x(t))}{x(t)-x^*} + (\xi- \lambda (\lambda + 1 -\alpha) )\ps{\dot{x}(t)}{x(t)-x^*} \\
&&+ (\lambda+1-\alpha)\frac{1}{t}\norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2-\frac{\lambda^2(\lambda+1-\alpha)}{t}\norm{x(t)-x^*}^2.
\end{eqnarray*}
\end{proof}
\begin{corollary}\label{lemma_tec_conv_beta2}
If $F$ satisfies the hypothesis $\textbf{H}_1(\gamma)$ and if $\xi = \lambda (\lambda + 1 -\alpha)$, then:
\begin{align}\label{EqConvBeta}
\mathcal{E}'(t)\leqslant& (2- \gamma \lambda) a(t) +2(\lambda+1-\alpha) b(t) -2\lambda^2(\lambda+1-\alpha) c(t)
\end{align}
\end{corollary}
\begin{proof}
Choosing $\xi = \lambda (\lambda + 1 -\alpha)$ in Lemma~\ref{lem:tec1}, we get:
$$\mathcal{E}'(t)=2a(t)+\lambda t\ps{-\nabla F(x(t))}{x(t)-x^*} +2(\lambda+1-\alpha) b(t) -2\lambda^2(\lambda+1-\alpha) c(t). $$
Applying now the assumption $\textbf{H}_1(\gamma)$, we finally obtain the expected result.
\end{proof}
One can notice that if $F(x)=|x|^{\gamma}$, the inequality of Corollary~\ref{lemma_tec_conv_beta2} is actually an equality when $\xi = \lambda (\lambda + 1 -\alpha)$. This ensures that for this specific function $F$, the inequality in Lemma~\ref{LemmeFonda} is an equality.
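The equality claim for $F(x)=|x|^\gamma$ rests on Euler's identity $\langle \nabla F(x),x-x^*\rangle=\gamma(F(x)-F^*)$ with $x^*=0$ and $F^*=0$, i.e.\ $\textbf{H}_1(\gamma)$ holds with equality. The following dependency-free numerical check is our own illustration (the central finite difference and the sample points are arbitrary choices):

```python
import math

def F(x, gamma):            # F(x) = |x|^gamma, with minimizer x* = 0 and F* = 0
    return abs(x) ** gamma

def dF(x, gamma, h=1e-6):   # central finite-difference approximation of F'(x)
    return (F(x + h, gamma) - F(x - h, gamma)) / (2 * h)

# Euler's identity: <grad F(x), x - x*> = gamma (F(x) - F*), here with x* = 0.
for gamma in (1.5, 2.0, 3.0):
    for x in (-1.3, -0.4, 0.7, 2.0):
        assert math.isclose(dF(x, gamma) * x, gamma * F(x, gamma), rel_tol=1e-6)
```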
\begin{lemma} \label{lemma_tec4}
If $F(x)=|x|^{\gamma}$ and if $\xi = \lambda (\lambda + 1 -\alpha)$, then
\begin{eqnarray*}
\mathcal{H}'(t)&=& t^{p} \left[ (2+p) a(t)+\lambda t\ps{-\nabla F(x(t))}{x(t)-x^*} \right. +(2 \lambda+2 -2\alpha +p)b(t) \\
&& \left. ~~~+\lambda(\lambda+1-\alpha) (-2\lambda +p)c(t) \right]
\end{eqnarray*}
\end{lemma}
\begin{proof}
We have $\mathcal{H}(t)=t^p \mathcal{E}(t)$. Hence $\mathcal{H}'(t)=t^p \mathcal{E}'(t)+pt^{p-1}\mathcal{E}(t)=t^{p-1} (t \mathcal{E}'(t)+p\mathcal{E}(t))$. We conclude by using Lemma~\ref{lem:tec1}.
\end{proof}
In conclusion, to prove Lemma \ref{LemmeFonda}, it is sufficient to plug the assumption $\textbf{H}_1(\gamma)$ into the equality of Lemma \ref{lemma_tec4}.
\bibliographystyle{siamplain}
\bibliography{reference}
\end{document}
\begin{document} \title[SDEs and PDEs on non-smooth time-dependent domains]{Stochastic and partial differential equations on non-smooth time-dependent domains} \address{Niklas L.P. Lundstr\"{o}m\\ Department of Mathematics and Mathematical Statistics, Ume\aa\ University\\ SE-901 87 Ume\aa , Sweden} \email{[email protected]} \address{Thomas \"{O}nskog\\ Department of Mathematics, Royal Institute of Technology (KTH)\\ SE-100 44 Stockholm, Sweden} \email{[email protected]} \author{Niklas L.P. Lundstr\"{o}m, Thomas \"{O}nskog} \begin{abstract} In this article, we consider non-smooth time-dependent domains whose boundary is $\mathcal{W}^{1,p}$ in time and single-valued, smoothly varying directions of reflection at the boundary. In this setting, we first prove existence and uniqueness of strong solutions to stochastic differential equations with oblique reflection. Secondly, we prove, using the theory of viscosity solutions, a comparison principle for fully nonlinear second-order parabolic partial differential equations with oblique derivative boundary conditions. As a consequence, we obtain uniqueness, and, by barrier construction and Perron's method, we also conclude existence of viscosity solutions. Our results generalize two articles by Dupuis and Ishii to time-dependent domains. \noindent 2000\textit{\ Mathematics Subject Classification. }35D05, 49L25, 60J50, 60J60. \noindent \textit{Keywords and phrases. } Reflected diffusion, Skorohod problem, oblique reflection, time-dependent domain, stochastic differential equations, non-smooth domain, viscosity solution, parabolic partial differential equation, comparison principle, existence, uniqueness. 
\end{abstract} \maketitle \setcounter{equation}{0} \setcounter{theorem}{0} \section{Introduction\label{intro}} In this article we establish existence and uniqueness of strong solutions to stochastic differential equations (SDE) with single-valued, smoothly varying oblique reflection at the boundary of a bounded, non-smooth time-dependent domain whose boundary is $\mathcal{W}^{1,p}$ in time. In the same geometric setting, we also prove a comparison principle, uniqueness and existence of viscosity solutions to partial differential equations (PDE) with oblique derivative boundary conditions. In the SDE case, our approach is based on the Skorohod problem, which, in the form studied in this article, was first described by Tanaka \cite {Tanaka1979}. Tanaka established existence and uniqueness of solutions to the Skorohod problem in convex domains with normal reflection. These results were subsequently substantially generalized by, in particular, Lions and Sznitman \cite{LionsSznitman1984} and Saisho \cite{Saisho1987}. To the authors' knowledge, the most general results on strong solutions to reflected SDEs in time-independent domains based on the Skorohod problem are those established by Dupuis and Ishii \cite{DupuisIshii1993}. The aim here is to generalize the SDE results mentioned above, in particular those of Case 1 in \cite{DupuisIshii1993}, to the setting of time-dependent domains. There is, by now, a number of articles on reflected SDEs in time-dependent domains. Early results on this topic include the exhaustive study of the heat equation and reflected Brownian motion in smooth time-dependent domains by Burdzy, Chen, and Sylvester \cite{BurdzyChenSylvester2004AP} and the study of reflected SDEs in smooth time-dependent domains with reflection in the normal direction by Costantini, Gobet, and El Karoui \cite {CostantiniGobetKaroui2006}. 
We also mention that Burdzy, Kang, and Ramanan \cite{BurdzyKangRamanan2009} investigated the Skorohod problem in a one-dimensional, time-dependent domain and, in particular, found conditions for when there exists a solution to the Skorohod problem in the event that the two boundaries meet. Existence of weak solutions to SDEs with oblique reflection in non-smooth time-dependent domains was established by Nystr\"{o}m and \"{O}nskog \cite{NystromOnskog2010a} under fairly general conditions using the approach of \cite{Costantini1992}. In the article at hand, we use the approach of \cite{DupuisIshii1993} and derive regularity conditions, under which we can obtain existence and also uniqueness of strong solutions to SDEs with oblique reflection in time-dependent domains. Turning to the PDE case, we recall that the approach of \cite{DupuisIshii1993} relies on the construction of test functions used earlier in Dupuis and Ishii \cite{DupuisIshii1990} to prove the comparison principle, existence and uniqueness for fully nonlinear second-order elliptic PDEs in non-smooth time-independent domains. Here we generalize these test functions to our time-dependent setting, and obtain the corresponding results for both SDEs and PDEs in time-dependent domains. In particular, our PDE results generalize the main part of \cite{DupuisIshii1990} to hold in the setting of fully nonlinear second-order parabolic PDEs in non-smooth time-dependent domains. Our proofs are based on the theory of viscosity solutions. The first step is to observe that the maximum principle for semicontinuous functions by Crandall and Ishii \cite{CrandallIshii1990} holds in time-dependent domains. Using the maximum principle and the above-mentioned test functions, we prove the comparison principle by following the nowadays standard method, see Crandall, Ishii, and Lions \cite{CrandallIshiiLions1992} and \cite{DupuisIshii1990}.
Next, we prove existence of a unique solution to the PDE problem by means of Perron's method, the comparison principle and by constructing several explicit sub- and supersolutions (barriers) to the PDE. To the authors' knowledge, there are no previous results on the oblique derivative problem for parabolic PDEs in non-smooth time-dependent domains. For time-independent domains, however, there are several articles in the literature. Besides \cite{DupuisIshii1990}, Dupuis and Ishii studied oblique derivative problems for fully nonlinear elliptic PDEs on domains with corners in \cite{DupuisIshii1991}. Moreover, Barles \cite{Barles1993} proved a comparison principle and existence of unique solutions to degenerate elliptic and parabolic boundary value problems with nonlinear Neumann type boundary conditions in bounded domains with $\mathcal{W}^{3,\infty}$-boundary. Ishii and Sato \cite{IshiiSato2004} proved similar theorems for boundary value problems for some singular degenerate parabolic partial differential equations with nonlinear oblique derivative boundary conditions in bounded $\mathcal{C}^{1}$-domains. Further, in bounded domains with $\mathcal{W}^{3,\infty}$-boundary, Bourgoing \cite{Bourgoing2008} considered singular degenerate parabolic equations and equations having $L^{1}$ dependence in time. Concerning PDEs in the setting of time-dependent domains, we mention that Bj\"{o}rn \textit{et al.}~\cite{BjornBjornGianazzaParviainen2015} proved, among other results, a comparison principle for solutions of degenerate and singular parabolic equations with Dirichlet boundary conditions using a different technique and that Avelin \cite{Avelin2016} proved boundary estimates of solutions to the degenerate $p$-parabolic equation.
As a motivation for considering SDEs and PDEs in time-dependent domains, we mention that such geometries arise naturally in a wide range of applications in which the governing equation of interest is a differential equation, for example in modelling of crack propagation \cite{NicaiseSandig2007}, modelling of fluids \cite{FiloZauskova2008}, \cite{HeHsiao2000} and modelling of chemical, petrochemical and pharmaceutical processes \cite {IzadiAbdollahiDubljevic2014}. The rest of the paper is organized as follows. In Section \ref{DNA} we give preliminary definitions, notations, assumptions and also state our main results. In Section \ref{test} we construct the test functions crucial for the proofs of both the SDE and the PDE results. Using these test functions, we prove existence of solutions to the Skorohod problem in Section \ref{SP}. The results on the Skorohod problem are subsequently used, in Section \ref {RSDE}, to prove the main results for SDEs. Finally, in Section \ref{PDE}, we use the theory of viscosity solutions together with the test functions derived in Section \ref{test} to establish the PDE results. \setcounter{equation}{0} \setcounter{theorem}{0} \section{Preliminaries and statement of main results\label{DNA}} Throughout this article we will use the following definitions and assumptions. Given $n\geq 1$, $T>0$ and a bounded, open, connected set $ \Omega ^{\prime }\subset \mathbb{R} ^{n+1}$ we will refer to \begin{equation} \Omega =\Omega ^{\prime }\cap ([0,T]\times \mathbb{R} ^{n}), \label{timedep} \end{equation} as a time-dependent domain. Given $\Omega $ and $t\in \left[ 0,T\right] $, we define the time sections of $\Omega $ as $\Omega _{t}=\left\{ x:\left( t,x\right) \in \Omega \right\} $, and we assume that \begin{equation} \Omega _{t}\neq \emptyset \text{ and that }\Omega _{t}\text{ is bounded and connected for every }t\in \left[ 0,T\right] . 
\label{timesect}
\end{equation}
Let $\partial \Omega _{t}$, for $t\in \left[ 0,T\right] $, denote the boundary of $\Omega _{t}$. Let $\left\langle \cdot ,\cdot \right\rangle $ and $\left\vert \cdot \right\vert =\left\langle \cdot ,\cdot \right\rangle ^{1/2}$ denote the Euclidean inner product and norm, respectively, on $\mathbb{R}^{n}$ and define, whenever $a\in \mathbb{R}^{n}$ and $b>0$, the sets $B\left( a,b\right) =\left\{ x\in \mathbb{R}^{n}:\left\vert x-a\right\vert \leq b\right\} $ and $S\left( a,b\right) =\left\{ x\in \mathbb{R}^{n}:\left\vert x-a\right\vert =b\right\} $. For any Euclidean spaces $E$ and $F$, we define the following spaces of functions mapping $E$ into $F$. $\mathcal{C}\left( E,F\right) $ denotes the set of continuous functions, $\mathcal{C}^{k}\left( E,F\right) $ denotes the set of $k$ times continuously differentiable functions and $\mathcal{W}^{1,p}\left( E,F\right) $ denotes the Sobolev space of functions whose first order weak derivatives belong to $L^{p}\left( E\right) $. When the time variable can be distinguished from the spatial variables, we let $\mathcal{C}^{1,2}\left( E,F\right) $ denote the set of functions which are continuously differentiable once with respect to the time variable and twice with respect to each space variable, and by $\mathcal{C}_{b}^{1,2}\left( E,F\right) $ we denote the space of bounded functions in $\mathcal{C}^{1,2}\left( E,F\right) $ having bounded derivatives. Moreover, $\mathcal{BV}\left( E,F\right) $ denotes the set of functions of bounded variation. In particular, for $\eta \in \mathcal{BV}\left( \left[ 0,T\right] ,\mathbb{R}^{n}\right) $, we let $\left\vert \eta \right\vert \left( t\right) $ denote the total variation of $\eta $ over the interval $\left[ 0,t\right] $.

\subsection{Assumptions on the domain and directions of reflection\label{geoassume}}

Throughout this article we consider non-smooth time-dependent domains of the following type.
Let $\Omega \subset \mathbb{R}^{n+1}$ be a time-dependent domain satisfying \eqref{timesect}. The direction of reflection at $x\in \partial \Omega _{t}$, $t\in \left[ 0,T\right] $, is given by $\gamma \left( t,x\right) $ satisfying
\begin{equation}
\gamma \in \mathcal{C}_{b}^{1,2}\left( \mathbb{R}^{n+1},B\left( 0,1\right) \right) ,  \label{smooth_gamma}
\end{equation}
such that $\gamma \left( t,x\right) \in S\left( 0,1\right) $ for all $\left( t,x\right) \in V$, where $V$ is an open set satisfying $\Omega _{t}^{c}\subset V$ for all $t\in \lbrack 0,T]$. Moreover, there is a constant $\rho \in \left( 0,1\right) $ such that the exterior cone condition
\begin{equation}
\bigcup_{0\leq \zeta \leq \rho }B\left( x-\zeta \gamma \left( t,x\right) ,\zeta \rho \right) \subset \Omega _{t}^{c},  \label{boundarylip}
\end{equation}
holds for all $x\in \partial \Omega _{t}$, $t\in \left[ 0,T\right] $. Note that it follows from \eqref{boundarylip} that $\gamma $ points into the domain, and this is indeed the standard convention for SDEs. For PDEs, however, the standard convention is to let $\gamma $ point out of the domain. To accommodate readers accustomed to either of these conventions we, in the following, let $\gamma $ point inward whenever SDEs are treated, whereas when we treat PDEs we assume the existence of a function
\begin{equation}
\widetilde{\gamma }\in \mathcal{C}_{b}^{1,2}\left( \mathbb{R}^{n+1},B\left( 0,1\right) \right) ,  \label{smooth_gamma2}
\end{equation}
defined as $\widetilde{\gamma }\left( t,x\right) =-\gamma \left( t,x\right) $, with $\gamma $ as in \eqref{smooth_gamma}. In particular, we have
\begin{equation}
\bigcup_{0\leq \zeta \leq \rho }B\left( x+\zeta \widetilde{\gamma }\left( t,x\right) ,\zeta \rho \right) \subset \Omega _{t}^{c},  \label{boundarylip2}
\end{equation}
for all $x\in \partial \Omega _{t}$, $t\in \left[ 0,T\right] $.
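For orientation, we note that \eqref{boundarylip} is easily verified in the model case of a flat boundary: if, near a point $x\in \partial \Omega _{t}$, the domain coincides with the half-space $\left\{ y\in \mathbb{R}^{n}:y_{n}>x_{n}\right\} $ and $\gamma \left( t,x\right) =e_{n}$, then every $y\in B\left( x-\zeta e_{n},\zeta \rho \right) $ satisfies
\begin{equation*}
y_{n}\leq x_{n}-\zeta +\zeta \rho =x_{n}-\zeta \left( 1-\rho \right) <x_{n},
\end{equation*}
so that $B\left( x-\zeta \gamma \left( t,x\right) ,\zeta \rho \right) \subset \Omega _{t}^{c}$ for every $\rho \in \left( 0,1\right) $. Domains with corners can be treated similarly by choosing $\gamma $ to point suitably into the domain; cf.\ \cite{DupuisIshii1991}.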
Finally, regarding the temporal variation of the domain, we define $d\left( t,x\right) =\inf_{y\in \Omega _{t}}\left\vert x-y\right\vert $, for all $t\in \left[ 0,T\right] $, $x\in \mathbb{R}^{n}$, and assume that for some fixed $p\in \left( 1,\infty \right) $ and all $x\in \mathbb{R}^{n}$,
\begin{equation}
d\left( \cdot ,x\right) \in \mathcal{W}^{1,p}\left( \left[ 0,T\right] ,\left[ 0,\infty \right) \right) ,  \label{templip}
\end{equation}
with Sobolev norm uniformly bounded in space. We also assume that $D_{t}d(t,x)$ is jointly measurable in $(t,x)$.

\begin{remark}
\label{spaceremark}A simple contradiction argument based on the exterior cone condition \eqref{boundarylip} for the time sections and the regularity of $\gamma $ and $\Omega _{t}$ shows that the time sections satisfy the interior cone condition
\begin{equation*}
\bigcup_{0\leq \zeta \leq \rho }B\left( x+\zeta \gamma \left( t,x\right) ,\zeta \rho \right) \subset \overline{\Omega }_{t},
\end{equation*}
for all $x\in \partial \Omega _{t}$, $t\in \left[ 0,T\right] $. The exterior and interior cone conditions together imply that the boundary of $\Omega _{t}$ is Lipschitz continuous (in space) with a Lipschitz constant $K_{t}$ satisfying $\sup_{t\in \left[ 0,T\right] }K_{t}<\infty $. Moreover, these conditions imply that for a suitable constant $\theta \in \left( 0,1\right) $ with $\theta ^{2}>1-\rho ^{2}$, there exists $\delta >0$ such that
\begin{equation*}
\left\langle y-x,\gamma \left( t,x\right) \right\rangle \geq -\theta \left\vert y-x\right\vert ,
\end{equation*}
for all $x\in \partial \Omega _{t}$, $y\in \overline{\Omega }_{t}$, $t\in \left[ 0,T\right] $ satisfying $\left\vert x-y\right\vert \leq \delta $.
\end{remark}

\begin{remark}
\label{timeholder}By Morrey's inequality, condition \eqref{templip} implies the existence of a H\"{o}lder exponent $\widehat{\alpha }=1-1/p\in \left( 0,1\right) $ and a H\"{o}lder constant $K\in \left( 0,\infty \right) $ such that, for all $s,t\in \left[ 0,T\right] $, $x\in \mathbb{R}^{n}$,
\begin{equation}
\left\vert d\left( s,x\right) -d\left( t,x\right) \right\vert \leq K\left\vert s-t\right\vert ^{\widehat{\alpha }}.  \label{tempholder}
\end{equation}
\end{remark}

\begin{remark}
The assumptions imposed on the time sections of the time-dependent domain in \eqref{smooth_gamma}, \eqref{boundarylip} coincide with those imposed on the time-independent domains in \cite{DupuisIshii1990} and in Case 1 of \cite{DupuisIshii1993}. For time-independent domains, existence and uniqueness results for SDEs and PDEs have also been obtained under the conditions given in \cite{DupuisIshii1991a} and in Case 2 of \cite{DupuisIshii1993}. It is likely that these results can also be extended to time-dependent domains using a procedure similar to that of the article at hand, but we leave this as a topic of future research.
\end{remark}

\begin{remark}
Consider the function
\begin{equation*}
l\left( r\right) =\sup_{s,t\in \left[ 0,T\right] ,\text{ }\left\vert s-t\right\vert \leq r}\,\,\sup_{x\in \overline{\Omega }_{s}}\,\,\inf_{y\in \overline{\Omega }_{t}}\left\vert x-y\right\vert ,
\end{equation*}
introduced in \cite{CostantiniGobetKaroui2006} and frequently used in \cite{NystromOnskog2010a}. Condition \eqref{tempholder} is equivalent to
\begin{equation*}
l\left( r\right) \leq Kr^{\widehat{\alpha }},
\end{equation*}
which is considerably stronger than the condition $\lim_{r\rightarrow 0^{+}}l\left( r\right) =0$ assumed in \cite{NystromOnskog2010a}. On the other hand, it was assumed in \cite{NystromOnskog2010a} that $\Omega _{t}$ satisfies a uniform exterior sphere condition, and this does not hold in general for domains satisfying \eqref{boundarylip}.
\end{remark} \subsection{Statement of main result for SDEs} We consider the Skorohod problem in the following form. \begin{definition} \label{skorohodprob}Given $\psi \in \mathcal{C}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $, with $\psi \left( 0\right) \in \overline{\Omega }_{0}$, we say that the pair $\left( \phi ,\lambda \right) \in \mathcal{C}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) \times \mathcal{C}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $ is a solution to the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $ if $\left( \psi ,\phi ,\lambda \right) $ satisfies, for all $t\in \left[ 0,T\right] $, \begin{eqnarray} \phi \left( t\right) &=&\psi \left( t\right) +\lambda \left( t\right) ,\quad \phi \left( 0\right) =\psi \left( 0\right) , \label{SP1} \\ \phi \left( t\right) &\in &\overline{\Omega }_{t}, \label{SP2} \\ \left\vert \lambda \right\vert \left( T\right) &<&\infty , \label{SP3} \\ \left\vert \lambda \right\vert \left( t\right) &=&\int_{\left( 0,t\right] }I_{\left\{ \phi \left( s\right) \in \partial \Omega _{s}\right\} }d\left\vert \lambda \right\vert \left( s\right) , \label{SP4} \\ \lambda \left( t\right) &=&\int_{\left( 0,t\right] }\widehat{\gamma }\left( s\right) d\left\vert \lambda \right\vert \left( s\right) , \label{SP5} \end{eqnarray} for some measurable function $\widehat{\gamma }:\left[ 0,T\right] \rightarrow \mathbb{R} ^{n}$ satisfying $\widehat{\gamma }\left( s\right) =\gamma \left( s,\phi \left( s\right) \right) $ $d\left\vert \lambda \right\vert $-a.s. \end{definition} We use the Skorohod problem to construct solutions to SDEs confined to the given time-dependent domain $\overline{\Omega }$ and with direction of reflection given by $\gamma $. We shall consider the following notion of SDEs. Let $\left( \Omega ,\mathcal{F},\mathbb{P}\right) $ be a complete probability space and let $\left\{ \mathcal{F}_{t}\right\} _{t\geq 0}$ be a filtration satisfying the usual conditions. 
Let $m$ be a positive integer, let $W=\left( W_{i}\right) $ be an $m$-dimensional Wiener process and let $b: \left[ 0,T\right] \times \mathbb{R} ^{n}\rightarrow \mathbb{R} ^{n}$ and $\sigma :\left[ 0,T\right] \times \mathbb{R} ^{n}\rightarrow \mathbb{R} ^{n\times m}$ be continuous functions. \begin{definition} \label{strong}A strong solution to the SDE in $\overline{\Omega }$ driven by the Wiener process $W$ and with coefficients $b$ and $\sigma $, direction of reflection along $\gamma $ and initial condition $x\in \overline{\Omega } _{0} $ is an $\left\{ \mathcal{F}_{t}\right\} $-adapted continuous stochastic process $X\left( t\right) $ which satisfies, $\mathbb{P}$-almost surely, whenever $t\in \left[ 0,T\right] $, \begin{equation} X\left( t\right) =x+\int_{0}^{t}b\left( s,X\left( s\right) \right) ds+\int_{0}^{t}\left\langle \sigma \left( s,X\left( s\right) \right) ,dW\left( s\right) \right\rangle +\Lambda \left( t\right) , \label{RSDE1} \end{equation} where \begin{equation} X\left( t\right) \in \overline{\Omega }_{t},\quad \left\vert \Lambda \right\vert \left( t\right) =\int_{\left( 0,t\right] }I_{\left\{ X\left( s\right) \in \partial \Omega _{s}\right\} }d\left\vert \Lambda \right\vert \left( s\right) <\infty , \label{RSDE2} \end{equation} and where \begin{equation} \Lambda \left( t\right) =\int_{\left( 0,t\right] }\widehat{\gamma }\left( s\right) d|\Lambda |\left( s\right) , \label{RSDE3} \end{equation} for some measurable stochastic process $\widehat{\gamma }:\left[ 0,T\right] \rightarrow \mathbb{R} ^{n}$ satisfying $\widehat{\gamma }\left( s\right) =\gamma \left( s,X\left( s\right) \right) $ $d|\Lambda |$-a.s. 
\end{definition}

Comparing Definition \ref{skorohodprob} with Definition \ref{strong}, it is clear that $\left( X\left( \cdot \right) ,\Lambda \left( \cdot \right) \right) $ should solve the Skorohod problem for $\psi \left( \cdot \right) =x+\int_{0}^{\cdot }b\left( s,X\left( s\right) \right) ds+\int_{0}^{\cdot }\left\langle \sigma \left( s,X\left( s\right) \right) ,dW\left( s\right) \right\rangle $ on an a.s.~pathwise basis. We assume that the coefficient functions $b\left( t,x\right) $ and $\sigma \left( t,x\right) $ satisfy the Lipschitz continuity condition
\begin{equation}
\left\vert b_i\left( t,x\right) -b_i\left( t,y\right) \right\vert \leq K\left\vert x-y\right\vert \quad \text{and\quad }\left\vert \sigma_{i,j} \left( t,x\right) -\sigma_{i,j} \left( t,y\right) \right\vert \leq K\left\vert x-y\right\vert ,  \label{lipcoeff}
\end{equation}
for all $(i,j) \in \{1,\dots ,n\} \times \{1,\dots ,m\}$, $x,y \in \mathbb{R}^n$ and for some constant $K\in \left( 0,\infty \right)$. Our main result for SDEs is the following theorem.

\begin{theorem}
\label{main}Let $\Omega \subset \mathbb{R}^{n+1}$ be a time-dependent domain satisfying \eqref{timesect} and assume that \eqref{smooth_gamma}, \eqref{boundarylip}, \eqref{templip} and \eqref{lipcoeff} hold. Then there exists a unique strong solution to the SDE in $\overline{\Omega }$ driven by the Wiener process $W$ and with coefficients $b$ and $\sigma$, direction of reflection along $\gamma $ and initial condition $x\in \overline{\Omega}_{0}$.
\end{theorem}

We prove Theorem \ref{main} by completing the following steps. First, in Lemma \ref{smoothexist}, we use a penalty method to prove existence of solutions to the Skorohod problem for smooth functions. In Lemma \ref{compactest}, we then derive a compactness estimate for solutions to the Skorohod problem.
Based on the compactness estimate, we are, in Lemma \ref {contexist}, able to generalize the existence result for the Skorohod problem to all continuous functions. Finally, in Section \ref{RSDE}, we use two classes of test functions and the existence result for the Skorohod problem to obtain existence and uniqueness of strong solutions to SDEs with oblique reflection at the boundary of a bounded, time-dependent domain. Note that we are able to obtain uniqueness of the reflected SDE although the solution to the corresponding Skorohod problem need not be unique. \subsection{Statement of main results for PDEs} To state and prove our results for PDEs we introduce some more notation. Let $\Omega ^{\prime }$ be as in \eqref{timedep} and put \begin{equation*} \Omega ^{\circ }=\Omega ^{\prime }\cap \left( \left(0,T\right)\times \mathbb{R} ^{n}\right), \quad \widetilde{\Omega }=\overline{\Omega }^{\prime }\cap \left( [0,T)\times \mathbb{R}^{n}\right) , \quad \partial \Omega = \left( \overline{\Omega}^{\prime} \setminus \Omega ^{\prime }\right) \cap \left( \left(0,T\right)\times \mathbb{R}^{n}\right). \end{equation*} We consider fully nonlinear parabolic PDEs of the form \begin{equation} u_{t}+F\left(t,x,u,Du,D^{2}u\right)=0\quad \text{in}\;\Omega ^{\circ }. \label{huvudekvationen} \end{equation} Here $F$ is a given real function on $\overline{\Omega }\times \mathbb{R} \times \mathbb{R}^{n}\times \mathbb{S}^{n}$, where $\mathbb{S}^{n}$ denotes the space of $n\times n$ real symmetric matrices equipped with the positive semi-definite ordering; that is, for $X,Y\in \mathbb{S}^{n}$, we write $ X\leq Y$ if $\langle \left(X-Y\right)\xi ,\xi \rangle \leq 0$ for all $\xi \in \mathbb{R }^{n}$. We also adopt the matrix norm notation \begin{equation*} \left\Vert A\right\Vert =\sup \{|\lambda |:\lambda \text{ is an eigenvalue of }A\}=\sup \{|\langle A\xi ,\xi \rangle |:|\xi |\leq 1\}. 
\end{equation*}
Moreover, $u$ represents a real function in $\Omega ^{\circ }$ and $Du$ and $D^{2}u$ denote the gradient and Hessian matrix, respectively, of $u$ with respect to the spatial variables. On the boundary we impose the oblique derivative condition on the unknown $u$
\begin{equation}
\frac{\partial u}{\partial \widetilde{\gamma }}+f\left(t,x,u\left(t,x\right)\right)=0\quad \text{on }\;\partial \Omega ,  \label{randvillkor}
\end{equation}
where $f$ is a real valued function on $\overline{\partial \Omega }\times \mathbb{R}$ and $\widetilde{\gamma }\left( t,\cdot \right) $ is the vector field on $\mathbb{R}^{n}$, oblique to $\partial \Omega _{t}$, introduced in \eqref{smooth_gamma2} and \eqref{boundarylip2}. Regarding the function $F$, we make the following assumptions.
\begin{equation}
F\in C\left(\overline{\Omega }\times \mathbb{R}\times \mathbb{R}^{n}\times \mathbb{S}^{n}\right).  \label{ass_F_cont}
\end{equation}
For some $\lambda \in \mathbb{R}$ and each $\left(t,x,p,A\right)\in \overline{\Omega }\times \mathbb{R}^{n}\times \mathbb{S}^{n}$ the function
\begin{equation}
\text{$r\rightarrow F\left(t,x,r,p,A\right)-\lambda r$ is nondecreasing on $\mathbb{R}$. }  \label{ass_F_nondecreasing}
\end{equation}
There is a function $m_{1}\in C\left([0,\infty)\right)$ satisfying $m_{1}\left(0\right)=0$ for which
\begin{align}
& F\left(t,y,r,p,-Y\right)-F\left(t,x,r,p,X\right)\leq m_{1}\left(|x-y|\left(|p|+1\right)+\alpha |x-y|^{2}\right)  \label{ass_F_XY} \\
& \text{if}\qquad -\alpha \left(
\begin{array}{cc}
I & 0 \\
0 & I
\end{array}
\right) \leq \left(
\begin{array}{cc}
X & 0 \\
0 & Y
\end{array}
\right) \leq \alpha \left(
\begin{array}{cc}
I & -I \\
-I & I
\end{array}
\right) ,  \notag
\end{align}
for all $\alpha \geq 1$, $\left(t,x\right),\left(t,y\right)\in \overline{\Omega }$, $r\in \mathbb{R}$, $p\in \mathbb{R}^{n}$ and $X,Y\in \mathbb{S}^{n}$, where $I$ denotes the unit matrix of size $n\times n$.
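To illustrate the structure condition \eqref{ass_F_XY}, consider for instance $F\left(t,x,r,p,A\right)=-\mathrm{tr}\left( A\right) +\lambda r$, for which \eqref{huvudekvationen} is the heat equation $u_{t}-\Delta u+\lambda u=0$. Applying the right-hand matrix inequality in \eqref{ass_F_XY} to the vectors $\left( e_{i},e_{i}\right) \in \mathbb{R}^{2n}$, $1\leq i\leq n$, and summing over $i$ yields
\begin{equation*}
\mathrm{tr}\left( X\right) +\mathrm{tr}\left( Y\right) =\sum_{i=1}^{n}\left\langle \left(
\begin{array}{cc}
X & 0 \\
0 & Y
\end{array}
\right) \left(
\begin{array}{c}
e_{i} \\
e_{i}
\end{array}
\right) ,\left(
\begin{array}{c}
e_{i} \\
e_{i}
\end{array}
\right) \right\rangle \leq \alpha \sum_{i=1}^{n}\left\langle \left(
\begin{array}{cc}
I & -I \\
-I & I
\end{array}
\right) \left(
\begin{array}{c}
e_{i} \\
e_{i}
\end{array}
\right) ,\left(
\begin{array}{c}
e_{i} \\
e_{i}
\end{array}
\right) \right\rangle =0,
\end{equation*}
so that $F\left(t,y,r,p,-Y\right)-F\left(t,x,r,p,X\right)=\mathrm{tr}\left( Y\right) +\mathrm{tr}\left( X\right) \leq 0$, and \eqref{ass_F_XY} holds with $m_{1}\equiv 0$.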
There is a neighborhood $U$ of $\partial \Omega $ in $\overline{\Omega }$ and a function $m_{2}\in C\left([0,\infty )\right)$ satisfying $m_{2}\left(0\right)=0$ for which \begin{equation} |F\left(t,x,r,p,X\right)-F\left(t,x,r,q,Y\right)|\leq m_{2}\left(|p-q|+||X-Y||\right), \label{ass_F_boundary} \end{equation} for $\left(t,x\right)\in U$, $r\in \mathbb{R}$, $p,q\in \mathbb{R}^{n}$ and $X,Y\in \mathbb{S}^{n}$. Regarding the function $f$ we assume that \begin{equation} f\left(t,x,r\right)\in C\left(\overline{\partial \Omega }\times \mathbb{R}\right), \label{f_kontinuerlig} \end{equation} and that for each $\left(t,x\right)\in \overline{\partial \Omega }$ the function \begin{equation} \text{ $r\rightarrow f\left(t,x,r\right)$ is nondecreasing on $\mathbb{R}$}. \label{ass_f_nondecreasing} \end{equation} We remark that assumptions \eqref{ass_F_cont} and \eqref{ass_F_XY} imply the degenerate ellipticity \begin{equation} F\left(t,x,r,p,A+B\right)\leq F\left(t,x,r,p,A\right)\quad \text{if}\;B\geq 0, \label{F_fundamental} \end{equation} for $\left(t,x\right)\in \overline{\Omega }$, $r\in \mathbb{R}$, $p\in \mathbb{R}^{n}$ and $A,B\in \mathbb{S}^{n}$, see Remark 3.4 in \cite{CrandallIshiiLions1992} for a proof. To handle the strong degeneracy allowed, we will adapt the notion of viscosity solutions \cite{CrandallIshiiLions1992}, which we recall for problem \eqref{huvudekvationen}-\eqref{randvillkor} in Section \ref{PDE}. Let $USC(E)$ ($LSC(E)$) denote the set of upper (lower) semi-continuous functions on $E\subset\mathbb{R}^{n+1}$. Our main results for PDEs are given in the following theorems. \begin{theorem} \label{comparison}Let $\Omega ^{\circ }$ be a time-dependent domain satisfying \eqref{timesect} and assume that \eqref{smooth_gamma2}- \eqref{templip} and \eqref{ass_F_cont}-\eqref{ass_f_nondecreasing} hold. 
Let $u\in USC(\widetilde{\Omega})$ be a viscosity subsolution, and $v\in LSC(\widetilde{\Omega})$ be a viscosity supersolution of problem \eqref{huvudekvationen}-\eqref{randvillkor} in $\Omega ^{\circ }$. If $u\left(0,x\right)\leq v\left(0,x\right)$ for all $x\in \overline{\Omega }_{0}$, then $u\leq v\;\text{in}\;\widetilde{\Omega }$.
\end{theorem}

\begin{theorem}
\label{existence}Let $\Omega ^{\circ }$ be a time-dependent domain satisfying \eqref{timesect} and assume that \eqref{smooth_gamma2}-\eqref{templip} and \eqref{ass_F_cont}-\eqref{ass_f_nondecreasing} hold. Then there exists a unique viscosity solution, continuous on $\widetilde{\Omega }$, to the initial value problem
\begin{align}
u_{t}+F\left(t,x,u,Du,D^{2}u\right)& =0\qquad \quad \,\text{in}\quad \Omega ^{\circ }, \notag  \label{initial_value_problem} \\
\frac{\partial u}{\partial \widetilde{\gamma }}+f\left(t,x,u\left(t,x\right)\right)& =0\qquad \quad \,\text{on}\quad \partial \Omega , \notag \\
u\left(0,x\right)& =g\left(x\right)\qquad \text{for}\quad x\in \overline{\Omega }_{0},
\end{align}
where $g\in C\left(\overline{\Omega }_{0}\right)$.
\end{theorem}

Theorems \ref{comparison} and \ref{existence} are proved in Section \ref{PDE}. The comparison principle in Theorem \ref{comparison} is obtained using two of the test functions constructed in Section \ref{test} together with by now standard techniques from the theory of viscosity solutions for fully nonlinear PDEs as described in \cite{CrandallIshiiLions1992}. Our proof uses ideas from the corresponding elliptic result given in \cite{DupuisIshii1990}. The uniqueness part of Theorem \ref{existence} is immediate from the formulation of Theorem \ref{comparison}, which also, together with the maximum principle in Lemma \ref{maxrand}, allows comparison in the setting of mixed boundary conditions, as follows.
\begin{corollary} \label{maxrand_partial} Let $\Omega ^{\circ }$ be a time-dependent domain satisfying \eqref{timesect} and assume that \eqref{smooth_gamma2}- \eqref{templip} and \eqref{ass_F_cont}-\eqref{ass_f_nondecreasing} hold. Let $u\in USC(\widetilde{\Omega })$ be a viscosity subsolution, and $v\in LSC( \widetilde{\Omega })$ be a viscosity supersolution of \eqref{huvudekvationen} in $\Omega ^{\circ }$. Suppose also that $u$ and $v$ satisfy the oblique derivative boundary condition \eqref{randvillkor} on a subset $G\subset \partial \Omega$. Then $\sup_{\widetilde{\Omega}}u-v\leq \sup_{\left(\partial \Omega \setminus G\right)\cup \,\overline{\Omega }_{0}}\left(u-v\right)^{+}$. \end{corollary} The existence part of Theorem \ref{existence} is proved using Perron's method and Corollary \ref{maxrand_partial}, together with constructions of several explicit viscosity sub- and supersolutions to the problem \eqref{huvudekvationen}-\eqref{randvillkor}. \setcounter{equation}{0} \setcounter{theorem}{0} \section{Construction of test functions\label{test}} In this section we show how the classes of test functions constructed in \cite{DupuisIshii1990} for time-independent domains can be generalized to similar classes of test functions valid for time-dependent domains. Lemma \ref{testlemma3} and Lemma \ref{testlemma4} provide test functions that are modifications of the square function, but which interact with the direction of $\gamma $ in a suitable way. The derivations of these functions follow the lines of the derivations of the corresponding test functions in \cite {DupuisIshii1990} with the addition that it has to be verified that the time derivative of the test functions has a certain order. Lemma \ref{testlemma5} provides a non-negative test function in $\mathcal{C}^{1,2}\left( \overline{ \Omega }, \mathbb{R} \right) $, whose gradient is aligned with $\gamma $ at the boundary. 
To verify the existence of this function, the proof for the corresponding function in \cite{DupuisIshii1990} has to be extended considerably due to the time-dependence of the domain. In particular, new methods have to be used to obtain differentiability with respect to the time variable. The constructions of the test functions below are given with sufficient detail and for those parts of the constructions that are identical in time-dependent and time-independent domains, we refer the reader to \cite {DupuisIshii1990}. We start by stating a straightforward extension of Lemma 4.4 in \cite{DupuisIshii1990} from $\xi \in S\left( 0,1\right) $ to $\xi \in B\left( 0,1\right) $. The proof follows directly from the construction in Lemma 4.4 in \cite{DupuisIshii1990} and is omitted. For any $\theta \in \left( 0,1\right) $, there exists a function $g\in \mathcal{C}\left( \mathbb{R} ^{n}\times \mathbb{R} ^{n}, \mathbb{R} \right) $ and positive constants $\chi ,C$ such that \begin{equation} g\in \mathcal{C}^{1}\left( \mathbb{R} ^{n}\times \mathbb{R} ^{n}, \mathbb{R} \right) \cap \mathcal{C}^{2}\left( \mathbb{R} ^{n}\times \left( \mathbb{R} ^{n}\setminus \left\{ 0\right\} \right) , \mathbb{R} \right) , \label{testlemma21} \end{equation} \begin{equation} g\left( \xi ,p\right) \geq \chi \left\vert p\right\vert ^{2},\quad \text{for }\xi \in B\left( 0,1\right) \text{, }p\in \mathbb{R} ^{n}, \label{testlemma22} \end{equation} \begin{equation} g\left( \xi ,0\right) =0,\quad \text{for }\xi \in \mathbb{R} ^{n}, \label{testlemma23} \end{equation} \begin{equation} \left\langle D_{p}g\left( \xi ,p\right) ,\xi \right\rangle \geq 0,\quad \text{for }\xi \in S\left( 0,1\right) \text{, }p\in \mathbb{R} ^{n}\text{ and }\left\langle p,\xi \right\rangle \geq -\theta \left\vert p\right\vert \text{,} \label{testlemma24} \end{equation} \begin{equation} \left\langle D_{p}g\left( \xi ,p\right) ,\xi \right\rangle \leq 0,\quad \text{for }\xi \in S\left( 0,1\right) \text{, }p\in \mathbb{R} ^{n}\text{ and 
}\left\langle p,\xi \right\rangle \leq \theta \left\vert p\right\vert , \label{testlemma25} \end{equation} \begin{equation} \left\vert D_{\xi }g\left( \xi ,p\right) \right\vert \leq C\left\vert p\right\vert ^{2},\quad \left\vert D_{p}g\left( \xi ,p\right) \right\vert \leq C\left\vert p\right\vert ,\quad \text{for }\xi \in B\left( 0,1\right) \text{, }p\in \mathbb{R} ^{n}, \label{testlemma26} \end{equation} and \begin{equation} \left\Vert D_{\xi }^{2}g\left( \xi ,p\right) \right\Vert \leq C\left\vert p\right\vert ^{2},\quad \left\Vert D_{\xi }D_{p}g\left( \xi ,p\right) \right\Vert \leq C\left\vert p\right\vert ,\quad \left\Vert D_{p}^{2}g\left( \xi ,p\right) \right\Vert \leq C, \label{testlemma28} \end{equation} for $\xi \in B\left( 0,1\right) $, $p\in \mathbb{R} ^{n}\setminus \left\{ 0\right\} $. The test function provided by the following lemma will be used to assert relative compactness of solutions to the Skorohod problem in Lemma \ref{compactest} below. \begin{lemma} \label{testlemma3}For any $\theta \in \left( 0,1\right) $, there exists a function $h\in \mathcal{C}^{1,2}\left( \left[ 0,T\right] \times \mathbb{R} ^{n}\times \mathbb{R} ^{n}, \mathbb{R} \right) $ and positive constants $\chi ,C$ such that, for all $\left( t,x,p\right) \in \lbrack 0,T]\times \mathbb{R} ^{n}\times \mathbb{R} ^{n}$, \begin{equation} h\left( t,x,p\right) \geq \chi \left\vert p\right\vert ^{2}, \label{testlemma31} \end{equation} \begin{equation} h\left( t,x,0\right) =1, \label{testlemma32} \end{equation} \begin{equation} \left\langle D_{p}h\left( t,x,p\right) ,\gamma \left( t,x\right) \right\rangle \geq 0,\quad \text{for }x\in \partial \Omega _{t}\text{ and } \left\langle p,\gamma \left( t,x\right) \right\rangle \geq -\theta \left\vert p\right\vert , \label{testlemma33} \end{equation} \begin{equation} \left\langle D_{p}h\left( t,x,p\right) ,\gamma \left( t,x\right) \right\rangle \leq 0,\quad \text{for }x\in \partial \Omega _{t}\text{ and } \left\langle p,\gamma \left( t,x\right) 
\right\rangle \leq \theta \left\vert p\right\vert , \label{testlemma34} \end{equation} \begin{equation} \left\vert D_{t}h\left( t,x,p\right) \right\vert \leq C\left\vert p\right\vert ^{2},\quad \left\vert D_{x}h\left( t,x,p\right) \right\vert \leq C\left\vert p\right\vert ^{2},\left\vert D_{p}h\left( t,x,p\right) \right\vert \leq C\left\vert p\right\vert , \label{testlemma35} \end{equation} and \begin{equation} \left\Vert D_{x}^{2}h\left( t,x,p\right) \right\Vert \leq C\left\vert p\right\vert ^{2},\quad \left\Vert D_{x}D_{p}h\left( t,x,p\right) \right\Vert \leq C\left\vert p\right\vert ,\quad \left\Vert D_{p}^{2}h\left( t,x,p\right) \right\Vert \leq C. \label{testlemma37} \end{equation} \end{lemma} \noindent \textbf{Proof.} Let $\nu \in \mathcal{C}^{2}\left( \mathbb{R} , \mathbb{R} \right) $ be such that $\nu \left( t\right) =t$ for $t\geq 2$, $\nu \left( t\right) =1$ for $t\leq 1/2$, $\nu ^{\prime }\left( t\right) \geq 0$ and $ \nu \left( t\right) \geq t$ for all $t\in \mathbb{R} $. Let $\theta \in \left( 0,1\right) $ be given, choose $g\in \mathcal{C} \left( \mathbb{R} ^{n}\times \mathbb{R} ^{n}, \mathbb{R} \right) $ satisfying \eqref{testlemma21}-\eqref{testlemma28} and define \begin{equation*} h\left( t,x,p\right) =\nu \left( g\left( \gamma \left( t,x\right) ,p\right) \right) . \end{equation*} The regularity of $h$ follows easily from the regularity of $g$ and $\nu $ and \eqref{testlemma23}. It is straightforward to deduce properties \eqref{testlemma31}-\eqref{testlemma37} from \eqref{testlemma21}- \eqref{testlemma28} and we limit the proof to two examples, which are not fully covered in \cite{DupuisIshii1990}. 
We have \begin{equation*} \left\vert D_{t}h\left( t,x,p\right) \right\vert =\left\vert \nu ^{\prime }\left( g\left( \gamma \left( t,x\right) ,p\right) \right) \right\vert \left\vert D_{\xi }g\left( \gamma \left( t,x\right) ,p\right) \right\vert \left\vert \frac{\partial \gamma }{\partial t}\right\vert \leq C\left\vert p\right\vert ^{2}, \end{equation*} by \eqref{testlemma26} and the regularity of $\nu $ and $\gamma $. Moreover, \begin{eqnarray*} \left\Vert D_{x}^{2}h\left( t,x,p\right) \right\Vert &\leq &C(n)\bigg( \left\vert \nu ^{\prime \prime }\left( g\left( \gamma \left( t,x\right) ,p\right) \right) \right\vert \left\vert D_{\xi }g\left( \gamma \left( t,x\right) ,p\right) \right\vert ^{2}\left\Vert \frac{\partial \gamma }{ \partial x}\right\Vert ^{2} \\ &&+\left\vert \nu ^{\prime }\left( g\left( \gamma \left( t,x\right) ,p\right) \right) \right\vert \left\Vert D_{\xi }^{2}g\left( \gamma \left( t,x\right) ,p\right) \right\Vert \left\Vert \frac{\partial \gamma }{\partial x}\right\Vert ^{2} \\ &&+\left\vert \nu ^{\prime }\left( g\left( \gamma \left( t,x\right) ,p\right) \right) \right\vert \left\vert D_{\xi }g\left( \gamma \left( t,x\right) ,p\right) \right\vert \max_{1\leq k\leq n}\left\Vert \frac{ \partial ^{2}\gamma _{k}}{\partial x^{2}}\right\Vert \bigg). \end{eqnarray*} Since $\nu ^{\prime \prime }$ is zero unless $2\geq g\left( \gamma \left( t,x\right) ,p\right) \geq \chi \left\vert p\right\vert ^{2}$, the first term, which is of order $C\left\vert p\right\vert ^{4}$, only contributes for small $\left\vert p\right\vert ^{2}$ and can thus be bounded from above by $C\left\vert p\right\vert ^{2}$. By \eqref{testlemma26}- \eqref{testlemma28}, the two latter terms are also bounded from above by $ C\left\vert p\right\vert ^{2}$. $\Box $ The test function in Lemma \ref{testlemma3} is also used to verify the existence of the following test function, which will be useful in the proofs of Theorem \ref{comparison} and Lemma \ref{rsdetheorem}. 
\begin{lemma} \label{testlemma4}For any $\theta \in \left( 0,1\right) $, there exists a family $\left\{ w_{\varepsilon }\right\} _{\varepsilon >0}$ of functions $ w_{\varepsilon }\in \mathcal{C}^{1,2}\left( \left[ 0,T\right] \times \mathbb{R} ^{n}\times \mathbb{R} ^{n}, \mathbb{R} \right) $ and positive constants $\chi ,C$ (independent of $\varepsilon $) such that, for all $\left( t,x,y\right) \in \lbrack 0,T]\times \mathbb{R} ^{n}\times \mathbb{R} ^{n}$, \begin{equation} w_{\varepsilon }\left( t,x,y\right) \geq \chi \frac{\left\vert x-y\right\vert ^{2}}{\varepsilon }, \label{testlemma41} \end{equation} \begin{equation} w_{\varepsilon }\left( t,x,y\right) \leq C\left( \varepsilon +\frac{ \left\vert x-y\right\vert ^{2}}{\varepsilon }\right) , \label{testlemma42} \end{equation} \begin{equation} \left\langle D_{x}w_{\varepsilon }\left( t,x,y\right) ,\gamma \left( t,x\right) \right\rangle \leq C\frac{\left\vert x-y\right\vert ^{2}}{ \varepsilon },\quad \text{for }x\in \partial \Omega _{t}\text{, } \left\langle y-x,\gamma \left( t,x\right) \right\rangle \geq -\theta \left\vert x-y\right\vert , \label{testlemma43} \end{equation} \begin{equation} \left\langle D_{y}w_{\varepsilon }\left( t,x,y\right) ,\gamma \left( t,x\right) \right\rangle \leq 0,\quad \text{for }x\in \partial \Omega _{t} \text{, }\left\langle x-y,\gamma \left( t,x\right) \right\rangle \geq -\theta \left\vert x-y\right\vert , \label{testlemma49} \end{equation} \begin{equation} \left\langle D_{y}w_{\varepsilon }\left( t,x,y\right) ,\gamma \left( t,y\right) \right\rangle \leq C\frac{\left\vert x-y\right\vert ^{2}}{ \varepsilon },\quad \text{for }y\in \partial \Omega _{t}\text{, } \left\langle x-y,\gamma \left( t,y\right) \right\rangle \geq -\theta \left\vert x-y\right\vert , \label{testlemma44} \end{equation} \begin{equation} \left\vert D_{t}w_{\varepsilon }\left( t,x,y\right) \right\vert \leq C\frac{ \left\vert x-y\right\vert ^{2}}{\varepsilon }, \label{testlemma45} \end{equation} \begin{equation} 
\left\vert D_{y}w_{\varepsilon }\left( t,x,y\right) \right\vert \leq C\frac{ \left\vert x-y\right\vert }{\varepsilon },\quad \left\vert D_{x}w_{\varepsilon }\left( t,x,y\right) +D_{y}w_{\varepsilon }\left( t,x,y\right) \right\vert \leq C\frac{\left\vert x-y\right\vert ^{2}}{ \varepsilon }, \label{testlemma46} \end{equation} and \begin{equation} D^{2}w_{\varepsilon }\left( t,x,y\right) \leq \frac{C}{\varepsilon }\left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) +\frac{C\left\vert x-y\right\vert ^{2}}{\varepsilon }\left( \begin{array}{cc} I & 0 \\ 0 & I \end{array} \right) . \label{testlemma47} \end{equation} \end{lemma} \noindent \textbf{Proof.} Let $\theta \in \left( 0,1\right) $ be given and choose $h\in \mathcal{C}^{1,2}\left( \left[ 0,T\right] \times \mathbb{R} ^{n}\times \mathbb{R} ^{n}, \mathbb{R} \right) $ as in Lemma \ref{testlemma3}. For all $\varepsilon >0$, we define the function $w_{\varepsilon }$ as \begin{equation*} w_{\varepsilon }\left( t,x,y\right) =\varepsilon h\left( t,x,\frac{x-y}{ \varepsilon }\right) . \end{equation*} Property \eqref{testlemma41} follows easily from \eqref{testlemma31} and property \eqref{testlemma42} was verified in Remark 3.3 in \cite {DupuisIshii1993}. Moreover, properties \eqref{testlemma43}, \eqref{testlemma49}, \eqref{testlemma46} and \eqref{testlemma47} were verified in the proof of Theorem 4.1 in \cite{DupuisIshii1990} and \eqref{testlemma45} is a simple consequence of \eqref{testlemma35}. 
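For completeness, we spell out the one-line computation behind the last claim: since $w_{\varepsilon }\left( t,x,y\right) =\varepsilon h\left( t,x,\left( x-y\right) /\varepsilon \right) $ and, by \eqref{testlemma35} (the bound $\left\vert D_{t}h\left( t,x,p\right) \right\vert \leq C\left\vert p\right\vert ^{2}$ established above), we obtain
\begin{equation*}
\left\vert D_{t}w_{\varepsilon }\left( t,x,y\right) \right\vert =\varepsilon \left\vert D_{t}h\left( t,x,\frac{x-y}{\varepsilon }\right) \right\vert \leq \varepsilon C\left\vert \frac{x-y}{\varepsilon }\right\vert ^{2}=C\frac{\left\vert x-y\right\vert ^{2}}{\varepsilon },
\end{equation*}
which is precisely \eqref{testlemma45}.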
To prove \eqref{testlemma44}, we note that \begin{eqnarray*} \left\langle D_{y}w_{\varepsilon }\left( t,x,y\right) ,\gamma \left( t,y\right) \right\rangle &=&-\left\langle D_{p}h\left( t,x,\frac{x-y}{ \varepsilon }\right) ,\gamma \left( t,y\right) \right\rangle \\ &=&-\left\langle D_{p}h\left( t,y,\frac{x-y}{\varepsilon }\right) ,\gamma \left( t,y\right) \right\rangle \\ &&+\left\langle D_{p}h\left( t,y,\frac{x-y}{\varepsilon }\right) -D_{p}h\left( t,x,\frac{x-y}{\varepsilon }\right) ,\gamma \left( t,y\right) \right\rangle . \end{eqnarray*} Moreover, if $\left\langle x-y,\gamma \left( t,y\right) \right\rangle \geq -\theta \left\vert x-y\right\vert $, then by \eqref{testlemma33}, $ \left\langle D_{p}h\left( t,y,p\right) ,\gamma \left( t,y\right) \right\rangle \geq 0$ with $p=\left( x-y\right) /\varepsilon $. Hence, for some $\xi $ in the segment joining $x$ and $y$, we obtain, with the aid of the mean value theorem and \eqref{testlemma37}, \begin{eqnarray*} \left\langle D_{y}w_{\varepsilon }\left( t,x,y\right) ,\gamma \left( t,y\right) \right\rangle &\leq &\left\Vert D_{x}D_{p}h\left( t,\xi ,\frac{x-y }{\varepsilon }\right) \right\Vert \left\vert x-y\right\vert \\ &\leq &C\left\vert \frac{x-y}{\varepsilon }\right\vert \left\vert x-y\right\vert =C\frac{\left\vert x-y\right\vert ^{2}}{\varepsilon }. \end{eqnarray*} $\Box $ We conclude this section by proving Lemma \ref{testlemma5} using an appropriate Cauchy problem. The test function $\alpha $ in Lemma \ref {testlemma5} will be crucial for the proofs of Theorems \ref{comparison}, \ref{existence} and Lemma \ref{rsdetheorem}. \begin{lemma} \label{testlemma5} There exists a nonnegative function $\alpha \in \mathcal{C }^{1,2}\left( \overline{\Omega }, \mathbb{R} \right) $, which satisfies \begin{equation} \left\langle D_{x}\alpha \left( t,x\right) ,\gamma \left( t,x\right) \right\rangle \geq 1, \label{alfaprop} \end{equation} for $x\in \partial \Omega _{t}$, $t\in \left[ 0,T\right] $. 
Moreover, the support of $\alpha$ can be assumed to lie in the neighbourhood $U$ defined in \eqref{ass_F_boundary}. \end{lemma} \noindent \textbf{Proof.} Fix $s\in \left[ 0,T\right] $ and $z\in \partial \Omega _{s}$ and define $H_{s,z}$ as the hyperplane \begin{equation*} H_{s,z}=\left\{ x\in \mathbb{R} ^{n}:\left\langle x-z,\gamma \left( s,z\right) \right\rangle =0\right\} . \end{equation*} Given a function $u_{0}\in \mathcal{C}^{2}\left( H_{s,z}, \mathbb{R} \right) $, such that $u_{0}\left( z\right) =1$, $u_{0}\geq 0$ and supp $ u_{0}\subset B\left( z,\delta ^{2}/4\right) \cap H_{s,z}$, we can use the method of characteristics to solve the Cauchy problem \begin{eqnarray*} \left\langle D_{x}u_{\left( t\right) }\left( x\right) ,\gamma \left( t,x\right) \right\rangle &=&0, \\ \left. u_{\left( t\right) }\right\vert _{H_{s,z}} &=&u_{0}. \end{eqnarray*} Choosing the positive constants $\delta $ and $\eta $ sufficiently small, the Cauchy problem above has, for all $t\in \left[ s-\eta ,s+\eta \right] $, a solution $u_{\left( t\right) }\in \mathcal{C}^{2}\left( B\left( z,\delta \right) , \mathbb{R} \right) $ satisfying $u_{\left( t\right) }\geq 0$. Based on the continuity of $\gamma $ and the restriction on the support of $u_{0}$, we may also assume that \begin{equation*} \text{supp }u_{\left( t\right) }\subset \bigcup_{\zeta \in \mathbb{R} }B(z-\zeta \gamma (s,z),\delta ^{2}/3)\cap B\left( z,\delta \right) . \end{equation*} Next, we define the combined function \begin{equation*} u\left( t,x\right) =u_{\left( t\right) }\left( x\right) , \end{equation*} and we claim for now that $u\in \mathcal{C}^{1,2}\left( \left[ s-\eta ,s+\eta \right] \times B\left( z,\delta \right) , \mathbb{R} \right) $ and postpone the proof of this claim to the end of the proof of the lemma. 
By the exterior and interior cone conditions, we can, for sufficiently small $\delta $, find $\varepsilon >0$ such that \begin{eqnarray*} &&\bigcup_{\zeta >0}B(z-\zeta \gamma (s,z),\delta ^{2}/3)\cap \left(B\left( z,\delta \right) \setminus B\left( z,\delta -2\varepsilon \right)\right) \\ &\subset &\bigcup_{\zeta >0}B(z-\zeta \gamma (s,z),\zeta \delta )\cap B\left( z,\delta \right) \subset \Omega _{s}^{c}, \end{eqnarray*} and such that the corresponding union over $\zeta <0$ is contained in $\Omega _{s}$. Hence \begin{equation*} \partial \Omega _{s}\cap \left(\text{supp }u_{\left( t\right) }\setminus B\left( z,\delta -2\varepsilon \right)\right) =\emptyset , \end{equation*} and, by \eqref{tempholder}, it follows that if $\eta $ also satisfies the constraint $\eta <\left( \varepsilon /K\right) ^{1/\widehat{\alpha }}$, then \begin{equation} \partial \Omega _{t}\cap \left(\text{supp }u_{\left( t\right) }\setminus B\left( z,\delta -\varepsilon \right)\right) =\emptyset \text{,} \label{suppbound} \end{equation} for all $t\in \left[ s-\eta ,s+\eta \right] $. Now, choose a function $\xi \in \mathcal{C}_{0}^{1,2}\left( \left[ s-\eta ,s+\eta \right] \times B\left( z,\delta \right) , \mathbb{R} \right) $ so that $\xi \left( t,x\right) =1$ for $t\in \left[ s-\eta + \varepsilon ,s+\eta - \varepsilon \right] $, $x\in B\left( z,\delta -\varepsilon \right) $ and $\xi \geq 0$, and set \begin{equation*} v_{s,z}\left( t,x\right) =u\left( t,x\right) \xi \left( t,x\right) . \end{equation*} Then $v_{s,z}\in \mathcal{C}_{0}^{1,2}\left( \left[ s-\eta ,s+\eta \right] \times B\left( z,\delta \right) , \mathbb{R} \right) $ satisfies $v_{s,z}\geq 0$. By \eqref{suppbound} and the construction of $u$ and $\xi $, we obtain \begin{equation*} \left\langle D_{x}v_{s,z}\left( t,x\right) ,\gamma \left( t,x\right) \right\rangle =0\text{ for }x\in B\left( z,\delta \right) \cap \partial \Omega _{t}\text{, }t\in \left[ s-\eta ,s+\eta \right] .
\end{equation*} Define $w_{s,z}\in \mathcal{C}^{2}\left( B\left( z,\delta \right) , \mathbb{R} \right) $ by \begin{equation*} w_{s,z}\left( x\right) =\left\langle x-z,\gamma \left( s,z\right) \right\rangle +M, \end{equation*} where $M$ is large enough so that $w_{s,z}\geq 0$. Using the continuity of $ \gamma $, we can find $\delta $ and $\eta $ such that $\left\langle \gamma \left( s,z\right) ,\gamma \left( t,x\right) \right\rangle \geq 0$ for all $ \left( t,x\right) \in \left[ s-\eta ,s+\eta \right] \times B\left( z,\delta \right) $. Setting \begin{equation*} g_{s,z}\left( t,x\right) =v_{s,z}\left( t,x\right) w_{s,z}\left( x\right) , \end{equation*} we find that $g_{s,z}\in \mathcal{C}_{0}^{1,2}\left( \left[ s-\eta ,s+\eta \right] \times B\left( z,\delta \right) , \mathbb{R} \right) $ satisfies $g_{s,z}\geq 0$. Moreover, using $\left\vert \gamma \left( t,x\right) \right\vert =1$, we have \begin{eqnarray*} \left\langle D_{x}g_{s,z}\left( s,z\right) ,\gamma \left( s,z\right) \right\rangle &=&v_{s,z}\left( s,z\right) \left\langle D_{x}w_{s,z}\left( z\right) ,\gamma \left( s,z\right) \right\rangle \\ &&+w_{s,z}\left( z\right) \left\langle D_{x}v_{s,z}\left( s,z\right) ,\gamma \left( s,z\right) \right\rangle \\ &=&u\left( s,z\right) \xi \left( s,z\right) \left\vert \gamma \left( s,z\right) \right\vert ^{2}=1, \end{eqnarray*} and a similar calculation shows that \begin{eqnarray*} \left\langle D_{x}g_{s,z}\left( t,x\right) ,\gamma \left( t,x\right) \right\rangle &=&v_{s,z}\left( t,x\right) \left\langle D_{x}w_{s,z}\left( x\right) ,\gamma \left( t,x\right) \right\rangle \\ &&+w_{s,z}\left( x\right) \left\langle D_{x}v_{s,z}\left( t,x\right) ,\gamma \left( t,x\right) \right\rangle \\ &=&v_{s,z}\left( t,x\right) \left\langle \gamma \left( s,z\right) ,\gamma \left( t,x\right) \right\rangle \geq 0, \end{eqnarray*} for $x\in B\left( z,\delta \right) \cap \partial \Omega _{t}$, $t\in \left[ s-\eta ,s+\eta \right] $. 
Now, using a standard compactness argument we conclude the existence of a nonnegative function $\alpha \in \mathcal{C}^{1,2}(\overline{\Omega }, \mathbb{R})$, which satisfies $\left\langle D_{x}\alpha \left( t,x\right) ,\gamma \left( t,x\right) \right\rangle \geq 1$ for $x\in \partial \Omega _{t}$, $t\in \left[ 0,T\right] $. Moreover, by the above construction, we can assume that the support of $\alpha $ lies within the neighbourhood $U$ defined in \eqref{ass_F_boundary}. It remains to prove the claimed regularity $u\in \mathcal{C}^{1,2}\left( \left[ s-\eta ,s+\eta \right] \times B\left( z,\delta \right) , \mathbb{R} \right) $. The regularity in the spatial variables follows directly by construction, so it remains to show that $u$ is continuously differentiable in the time variable. Let $x\in B\left( z,\delta \right) $ and let $t$ and $ t+h$ belong to $\left[ s-\eta ,s+\eta \right] $. Denote by $y\left( t,\cdot \right) $ and $y\left( t+h,\cdot \right) $ the characteristic curves through $x$ for the vector fields $\gamma \left( t,\cdot \right) $ and $\gamma \left( t+h,\cdot \right) $, respectively, so that \begin{eqnarray*} \frac{\partial y}{\partial r}\left( t,r\right) &=&\pm \gamma \left( t,y\left( t,r\right) \right) , \\ y\left( t,0\right) &=&x, \end{eqnarray*} and analogously for $y\left( t+h,\cdot \right) $. Choose the sign in the parametrization of $y\left( t,\cdot \right) $ so that there exists some $ r\left( t\right) >0$ such that $y\left( t,r\left( t\right) \right) =z\left( t\right) \in H_{s,z}$. Choosing the same sign in the parametrization of $ y\left( t+h,\cdot \right) $ guarantees the existence of some $r\left( t+h\right) >0$ such that $y\left( t+h,r\left( t+h\right) \right) =z\left( t+h\right) \in H_{s,z}$. Without loss of generality, we assume the sign above to be positive.
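As a purely illustrative aside (no step of the proof depends on it), the fact established next, namely that the time derivative of the characteristic flow solves the problem obtained by formally differentiating the characteristic equation in $t$, can be checked numerically. The sketch below assumes the one-dimensional model field $\gamma \left( t,y\right) =t+y$, for which the flow is $y\left( t,r\right) =\left( x+t\right) e^{r}-t$ and hence $\partial y/\partial t=e^{r}-1$; all names and constants in the code are illustrative.

```python
import math

def flow(t, x, r_max, n=2000):
    """Integrate the characteristic ODE dy/dr = gamma(t, y) = t + y,
    y(t, 0) = x, by classical RK4; t enters only as a frozen parameter."""
    y, dr = x, r_max / n
    for _ in range(n):
        k1 = t + y
        k2 = t + (y + 0.5 * dr * k1)
        k3 = t + (y + 0.5 * dr * k2)
        k4 = t + (y + dr * k3)
        y += dr * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return y

t0, x0, r0, h = 0.5, 1.0, 1.0, 1e-5
# difference quotient (y(t+h, r) - y(t, r)) / h of the flow ...
psi_fd = (flow(t0 + h, x0, r0) - flow(t0, x0, r0)) / h
# ... versus the solution of the formally differentiated (variational)
# problem psi' = D_t gamma + D_y gamma * psi = 1 + psi, psi(0) = 0,
# integrated by explicit Euler
psi, dr = 0.0, r0 / 2000
for _ in range(2000):
    psi += dr * (1.0 + psi)
# for gamma(t, y) = t + y the exact time derivative is e^r - 1
print(psi_fd, psi, math.e - 1.0)
```

Both the difference quotient of the flow and the Euler solution of the variational equation agree with $e^{r}-1$ to the expected discretization accuracy.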
Since $u\left( t,x\right) =u_{0}\left( z\left( t\right) \right) $, where $u_{0}$ is continuously differentiable, it remains to show that the function $z$ is continuously differentiable. We will first show that $y\left( \cdot ,r\right) $ is continuously differentiable by following an argument that can be found in e.g. \cite {Monti2010}. Differentiating the Cauchy problem formally with respect to the time variable and introducing the function $\psi \left( t,r\right) =\dfrac{ \partial y}{\partial t}\left( t,r\right) $, we obtain \begin{eqnarray*} \frac{\partial }{\partial r}\psi \left( t,r\right) &=&\frac{\partial \gamma }{\partial t}\left( t,y\left( t,r\right) \right) +\frac{\partial \gamma }{ \partial y}\left( t,y\left( t,r\right) \right) \psi \left( t,r\right) , \\ \psi \left( t,0\right) &=&0. \end{eqnarray*} This Cauchy problem has a unique solution, which we will next show satisfies \begin{equation} \psi \left( t,r\right) =\lim_{h\rightarrow 0}\frac{y\left( t+h,r\right) -y\left( t,r\right) }{h}, \label{eq:limit_true} \end{equation} so that $\psi $ is in fact the time derivative of $y$ (not just formally). Define \begin{equation} R\left( t,r,h\right) =\frac{y\left( t+h,r\right) -y\left( t,r\right) }{h} -\psi \left( t,r\right) . \label{eq:R_def} \end{equation} Now \begin{eqnarray*} R\left( t,r,h\right) &=&\int_{0}^{r}\left( \frac{\gamma \left( t+h,y\left( t+h,u\right) \right) -\gamma \left( t,y\left( t,u\right) \right) }{h}\right) du \\ &&-\int_{0}^{r}\left( \frac{\partial \gamma }{\partial t}\left( t,y\left( t,u\right) \right) +\frac{\partial \gamma }{\partial y}\left( t,y\left( t,u\right) \right) \psi \left( t,u\right) \right) du. 
\end{eqnarray*} By the mean value theorem \begin{eqnarray*} &&\gamma _{i}\left( t+h,y\left( t+h,u\right) \right) -\gamma _{i}\left( t,y\left( t,u\right) \right) \\ &=&\frac{\partial \gamma _{i}}{\partial t}\left( \overline{t}_{i},y\left( t+h,u\right) \right) h+\frac{\partial \gamma _{i}}{\partial y}\left( t, \overline{y}_{i}\right) \left( y\left( t+h,u\right) -y\left( t,u\right) \right) , \end{eqnarray*} for some $\overline{t}_{i}$ between $t$ and $t+h$, some $\overline{y}_{i}$ between $y\left( t,u\right) $ and $y\left( t+h,u\right) $ and all $i\in \left\{ 1,...,n\right\} $. Hence, the $i^{\text{th}}$ component of $R\left( t,r,h\right) $ is \begin{eqnarray*} R_{i}\left( t,r,h\right) &=&\int_{0}^{r}\left( \frac{\partial \gamma _{i}}{ \partial t}\left( \overline{t}_{i},y\left( t+h,u\right) \right) -\frac{ \partial \gamma _{i}}{\partial t}\left( t,y\left( t,u\right) \right) \right) du \\ &&+\int_{0}^{r}\left( \frac{\partial \gamma _{i}}{\partial y}\left( t, \overline{y}_{i}\right) \frac{y\left( t+h,u\right) -y\left( t,u\right) }{h}- \frac{\partial \gamma _{i}}{\partial y}\left( t,y\left( t,u\right) \right) \psi \left( t,u\right) \right) du, \end{eqnarray*} where the second term on the right hand side can be rewritten as \begin{equation*} \int_{0}^{r}\left( \frac{\partial \gamma _{i}}{\partial y}\left( t,\overline{ y}_{i}\right) R\left( t,u,h\right) +\left( \frac{\partial \gamma _{i}}{ \partial y}\left( t,\overline{y}_{i}\right) -\frac{\partial \gamma _{i}}{ \partial y}\left( t,y\left( t,u\right) \right) \right) \psi \left( t,u\right) \right) du. 
\end{equation*} Therefore we have \begin{eqnarray*} \left\vert R\left( t,r,h\right) \right\vert &\leq &\int_{0}^{r}\sum_{i=1}^{n}\left\vert \frac{\partial \gamma _{i}}{\partial t} \left( \overline{t}_{i},y\left( t+h,u\right) \right) -\frac{\partial \gamma _{i}}{\partial t}\left( t,y\left( t,u\right) \right) \right\vert du \\ &&+\int_{0}^{r}\left\vert R\left( t,u,h\right) \right\vert \sum_{i=1}^{n}\left\vert \frac{\partial \gamma _{i}}{\partial y}\left( t, \overline{y}_{i}\right) \right\vert du \\ &&+\int_{0}^{r}\left\vert \psi \left( t,u\right) \right\vert \sum_{i=1}^{n}\left\vert \frac{\partial \gamma _{i}}{\partial y}\left( t, \overline{y}_{i}\right) -\frac{\partial \gamma _{i}}{\partial y}\left( t,y\left( t,u\right) \right) \right\vert du, \end{eqnarray*} and by Gronwall's inequality we obtain \begin{eqnarray} \left\vert R\left( t,r,h\right) \right\vert &\leq &C\int_{0}^{r}\sum_{i=1}^{n}\left\vert \frac{\partial \gamma _{i}}{\partial t }\left( \overline{t}_{i},y\left( t+h,u\right) \right) -\frac{\partial \gamma _{i}}{\partial t}\left( t,y\left( t,u\right) \right) \right\vert du \notag \label{eq:gronvall_deriv} \\ &&+C\int_{0}^{r}\left\vert \psi \left( t,u\right) \right\vert \sum_{i=1}^{n}\left\vert \frac{\partial \gamma _{i}}{\partial y}\left( t, \overline{y}_{i}\right) -\frac{\partial \gamma _{i}}{\partial y}\left( t,y\left( t,u\right) \right) \right\vert du, \end{eqnarray} for some positive constant $C$. Since $\left\vert \psi \left( t,u\right) \right\vert $ exists and is bounded, and since the time and space derivatives of $\gamma $ are continuous, \eqref{eq:gronvall_deriv} implies boundedness of $\left\vert R\left( t,r,h\right) \right\vert $. Therefore, by \eqref{eq:R_def} we have $|y\left( t+h,r\right) -y\left( t,r\right) |\leq Ch$ , for some constant $C$, and we can conclude that $\overline{y} _{i}\rightarrow y(t,u)$ and $\overline{t}_{i}\rightarrow t$ for all $i\in \{1,\dots ,n\}$ as $h\rightarrow 0$. 
It follows that the differences in the integrands in \eqref{eq:gronvall_deriv} vanish as $h\rightarrow 0$ and hence $\lim_{h\rightarrow 0}R\left( t,r,h\right) =0$. This proves \eqref{eq:limit_true} and therefore that $y\left( \cdot ,r\right) $ is continuously differentiable. Now, by the mean value theorem, \begin{eqnarray*} z_{i}(t+h)-z_{i}(t) &=&y_{i}(t+h,r\left( t+h\right) )-y_{i}(t,r\left( t\right) ) \\ &=&y_{i}(t+h,r\left( t+h\right) )-y_{i}(t+h,r\left( t\right) ) \\ &&+y_{i}(t+h,r\left( t\right) )-y_{i}(t,r\left( t\right) ) \\ &=&\frac{\partial y_{i}}{\partial r}\left( t+h,\overline{r}_{i}\right) \left( r\left( t+h\right) -r\left( t\right) \right) +\frac{\partial y_{i}}{ \partial t}\left( \overline{t}_{i},r\left( t\right) \right) h, \end{eqnarray*} for some $\overline{r}_{i}$ between $r\left( t\right) $ and $ r\left(t+h\right)$, some $\overline{t}_{i}$ between $t$ and $t+h$ and all $ i\in \left\{ 1,...,n\right\}$. Since the function $r(t)$ is defined so that \begin{align*} \langle y(t, r(t)) - z, \gamma(s, z)\rangle = 0, \quad t \in (s - \eta, s + \eta), \end{align*} it follows by the implicit function theorem and by the regularity of $y(t,r)$ that $r(t)$ is a continuously differentiable function. Hence, we conclude that $\overline{r}_i \rightarrow r(t)$ and $\overline{t}_i \rightarrow t$, all $i \in \{1,\dots,n\}$, as $h \rightarrow 0$ and therefore, \begin{equation*} \lim_{h\rightarrow 0}\frac{z(t+h)-z(t)}{h}=\frac{\partial y}{\partial r} \left( t,r\left( t\right) \right) r^{\prime }\left( t\right) +\frac{\partial y}{\partial t}\left( t,r\left( t\right) \right) , \end{equation*} where the right hand side is a continuous function. This proves that $z$ is continuously differentiable and, hence, that $u\in \mathcal{C}^{1,2}\left( \left[ s-\eta ,s+\eta \right] \times B\left( z,\delta \right) , \mathbb{R} \right) $. 
$\Box $ \setcounter{equation}{0} \setcounter{theorem}{0} \section{The Skorohod problem\label{SP}} In this section we prove existence of solutions to the Skorohod problem under the assumptions in Section \ref{geoassume}. This result could be achieved using the methods in \cite{NystromOnskog2010a}, but as we here assume more regularity on the direction of reflection and the temporal variation of the domain compared to the setting in \cite{NystromOnskog2010a} (and this is essential for the other sections of this article), we follow a more direct approach using a penalty method. We first note that, mimicking the proof of Lemma 4.1 in \cite{DupuisIshii1993}, we can prove the following result. \begin{lemma} \label{dlemma} There is a constant $\mu >0$ such that, for every $t\in \left[ 0,T\right] $, there exists a neighbourhood $U_{t}$ of $\partial \Omega _{t}$ such that \begin{equation} \left\langle D_{x}d\left( t,x\right) ,\gamma \left( t,x\right) \right\rangle \leq -\mu ,\quad \text{for a.e. }x\in U_{t}\setminus \overline{\Omega }_{t} \text{.} \label{unmollineq} \end{equation} \end{lemma} As \eqref{unmollineq} holds only for almost every point in a neighbourhood of a non-smooth domain, we cannot apply \eqref{unmollineq} directly and will use the following mollifier approach instead. Based on the construction of the neighbourhoods $\left\{ U_{t}\right\} _{t\in \left[ 0,T\right] }$ in Lemma \ref{dlemma} (see the proof of the corresponding lemma in \cite {DupuisIshii1993} for details), there exists a constant $\overline{\beta }>0$ such that $B\left( x,3\overline{\beta }\right) \subset U_{t}$ for all $x\in \partial \Omega _{t}$, $t\in \left[ 0,T\right] $. For the value of $p$ given in \eqref{templip}, let \begin{equation*} v\left( t,x\right) =\left( d\left( t,x\right) \right) ^{p}\quad \text{ and\quad }\widetilde{v}\left( t,x\right) =\left( d\left( t,x\right) \right) ^{p-1}. 
\end{equation*} Moreover, let $\varphi _{\beta }\in $ $\mathcal{C}^{\infty }\left( \mathbb{R} ^{n}, \mathbb{R} \right) $ be a positive mollifier with support in $B\left( 0,\beta \right) $ , for some $\beta >0$, and define the spatial convolutions \begin{equation*} v_{\beta }=v\ast \varphi _{\beta }\quad \text{and\quad }\widetilde{v}_{\beta }=\widetilde{v}\ast \varphi _{\beta }. \end{equation*} \begin{lemma} \label{vlemma} There is a constant $\kappa >0$ such that, for sufficiently small $\beta > 0$ and every $t\in \left[ 0,T\right]$, there exists a neighbourhood $\widetilde{U}_{t}$ of $\partial \Omega _{t}$, $\widetilde{U} _{t}\supset \left\{ x:d\left( x,\partial \Omega _{t}\right) <2\overline{ \beta }\right\}$, such that \begin{equation} \left\langle D_{x}v_{\beta }\left( t,x\right) ,\gamma \left( t,x\right) \right\rangle \leq -\kappa \widetilde{v}_{\beta }\left( t,x\right) ,\quad \text{for }x\in \widetilde{U}_{t}\setminus \overline{\Omega }_{t}\text{.} \label{mollineq} \end{equation} \end{lemma} \noindent \textbf{Proof.} For all $x\in U_{t}\setminus \overline{\Omega } _{t} $ such that $B\left( x,\overline{\beta }\right) \subset U_{t}$ and for all $\beta \leq \overline{\beta }$, we have \begin{eqnarray*} &&\left\langle D_{x}v_{\beta }\left( t,x\right) ,\gamma \left( t,x\right) \right\rangle =\int_{ \mathbb{R} ^{n}}\left\langle \varphi _{\beta }\left( x-y\right) D_{y}v\left( t,y\right) ,\gamma \left( t,x\right) \right\rangle dy \\ &=&\int_{ \mathbb{R} ^{n}}\left( \left\langle D_{y}v\left( t,y\right) ,\gamma \left( t,y\right) \right\rangle +\left\langle D_{y}v\left( t,y\right) ,\gamma \left( t,x\right) -\gamma \left( t,y\right) \right\rangle \right) \varphi _{\beta }\left( x-y\right) dy. 
\end{eqnarray*} The inner product in the second term is bounded from above by \begin{equation*} p\left( d\left( t,y\right) \right) ^{p-1}\left\vert D_{y}d\left( t,y\right) \right\vert L\beta , \end{equation*} where $L$ is the Lipschitz constant of $\gamma $ in the spatial variables over the compact set $\left[ 0,T\right] \times \bigcup_{t\in \left[ 0,T \right] }\overline{U}_{t}$. By Lemma \ref{dlemma}, we have, for almost every $y\in U_{t}\setminus \overline{\Omega }_{t}$, $t\in \left[ 0,T\right] $, \begin{equation*} \left\langle D_{y}v\left( t,y\right) ,\gamma \left( t,y\right) \right\rangle =p\left( d\left( t,y\right) \right) ^{p-1}\left\langle D_{y}d\left( t,y\right) ,\gamma \left( t,y\right) \right\rangle \leq -p\mu \left( d\left( t,y\right) \right) ^{p-1}, \end{equation*} and, for sufficiently small $\beta >0$, \begin{equation*} p\left( d\left( t,y\right) \right) ^{p-1}L\beta -p\mu \left( d\left( t,y\right) \right) ^{p-1}\leq -\kappa \left( d\left( t,y\right) \right) ^{p-1}, \end{equation*} for some constant $\kappa >0$. This proves \eqref{mollineq}. $\Box $ We next use a penalty method to verify the existence of a solution to the Skorohod problem for continuously differentiable driving functions. The following lemma generalizes Theorem 2.1 in \cite{LionsSznitman1984} and Lemma 4.5 in \cite{DupuisIshii1993}. \begin{lemma} \label{smoothexist}Let $\psi \in \mathcal{C}^{1}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $ with $\psi \left( 0\right) \in \overline{\Omega }_{0}$. Then there exists a solution $\left( \phi ,\lambda \right) \in \mathcal{W} ^{1,p}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) \times \mathcal{W}^{1,p}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $ to the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $.
\end{lemma} \noindent \textbf{Proof.} Choose $\varepsilon >0$ and consider the ordinary differential equation \begin{equation} \phi _{\varepsilon }^{\prime }\left( t\right) =\frac{1}{\varepsilon }d\left( t,\phi _{\varepsilon }\left( t\right) \right) \gamma \left( t,\phi _{\varepsilon }\left( t\right) \right) +\psi ^{\prime }\left( t\right) ,\quad \phi _{\varepsilon }\left( 0\right) =\psi \left( 0\right) , \label{ode} \end{equation} for $\phi _{\varepsilon }\left( t\right) $, which has a unique solution on $ \left[ 0,T\right] $. Let $\kappa >0$ and the family of neighbourhoods $\{ \widetilde{U}_{t}\}_{t\in \left[ 0,T\right] }$ be as in Lemma \ref{vlemma}. Choose a function $\zeta \in \mathcal{C}^{\infty }\left( \left[ 0,\infty \right) ,\left[ 0,\infty \right) \right) $ such that \begin{equation*} \zeta \left( r\right) =\left\{ \begin{array}{ll} r, & \text{for }r\leq \overline{\beta }^{p}/2, \\ 3\overline{\beta }^{p}/4, & \text{for }r\geq \overline{\beta }^{p}, \end{array} \right. \end{equation*} and $0\leq \zeta ^{\prime }\left( r\right) \leq 1$ for all $r\in \left[ 0,\infty \right) $. Note that if $\phi _{\varepsilon }\left( t\right) \notin \widetilde{U}_{t}\cup \overline{\Omega }_{t}$, then $d\left( t,\phi _{\varepsilon }\left( t\right) \right) \geq 2\overline{\beta }$ and, as a consequence, for all $\beta \leq \overline{\beta }$ it holds that $v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \geq \overline{\beta } ^{p}$ and $\zeta ^{\prime }\left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) =0$. We next define the function $F\left( t\right) =\zeta \left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) $, for $t\in \left[ 0,T\right] $, and investigate its time derivative. 
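As an aside, the effect of the penalization in \eqref{ode} is easy to see numerically. The sketch below is an illustration under simplifying assumptions (a static half-line $\Omega =[0,\infty )$ with $\gamma \equiv 1$, $d\left( x\right) =\max \left( 0,-x\right) $, and the driver $\psi \left( t\right) =1-t$), not the construction used in the proof; it integrates \eqref{ode} by explicit Euler and shows that the excursion outside $\overline{\Omega }$ is of size $\varepsilon $, in the spirit of \eqref{d2bound}.

```python
eps, dt, T = 0.01, 1e-3, 2.0   # penalty parameter, Euler step, horizon
phi = 1.0                      # phi_eps(0) = psi(0) = 1
traj = []
for _ in range(int(T / dt)):
    d = max(0.0, -phi)                      # distance to Omega = [0, inf)
    phi += dt * (d / eps * 1.0 + (-1.0))    # phi' = (1/eps) d * gamma + psi'
    traj.append(phi)
# the penalty balances the constant push psi' = -1 at phi = -eps,
# so the state never leaves an O(eps)-neighbourhood of Omega
print(min(traj), traj[-1])
```

The trajectory decreases to the boundary, overshoots by at most $\varepsilon $, and settles at the balance point $\phi =-\varepsilon $.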
Let $D_{t}d$ denote the weak derivative guaranteed by \eqref{templip} and note that \begin{eqnarray} F^{\prime }\left( t\right) &=&\zeta ^{\prime }\left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) \left( D_{t}v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) +\left\langle D_{x}v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) ,\phi _{\varepsilon }^{\prime }\left( t\right) \right\rangle \right) \notag \\ &=&\zeta ^{\prime }\left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) \Bigg(D_{t}v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \frac{{}}{{}} \notag \\ &&+\left\langle D_{x}v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) ,\,\frac{1}{\varepsilon }d\left( t,\phi _{\varepsilon }\left( t\right) \right) \gamma \left( t,\phi _{\varepsilon }\left( t\right) \right) +\psi ^{\prime }\left( t\right) \right\rangle \Bigg), \label{vprim} \end{eqnarray} as $\phi _{\varepsilon }\left( t\right) $ solves \eqref{ode}. From Lemma \ref {vlemma}, we have \begin{eqnarray*} &&\zeta ^{\prime }\left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) \left\langle D_{x}v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) ,\,\frac{1}{\varepsilon }d\left( t,\phi _{\varepsilon }\left( t\right) \right) \gamma \left( t,\phi _{\varepsilon }\left( t\right) \right) \right\rangle \\ &\leq &-\zeta ^{\prime }\left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) \frac{\kappa }{\varepsilon }d\left( t,\phi _{\varepsilon }\left( t\right) \right) \widetilde{v}_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) , \end{eqnarray*} for $\phi _{\varepsilon }\left( t\right) \in \widetilde{U}_{t}\setminus \overline{\Omega }_{t}$ and for all other $\phi _{\varepsilon }\left( t\right) $ both sides vanish when $\beta \leq \overline{\beta }$. 
Integrating the estimate for $F^{\prime }$, suppressing the $s$-dependence in $\phi _{\varepsilon }$ and $\psi $ and denoting $\zeta ^{\prime }\left( v_{\beta }\left( s,\phi _{\varepsilon }\right) \right) $ by $\zeta ^{\prime }\left( v_{\beta }\right) $ for simplicity, we obtain, for all $t\in \left[ 0,T\right] $, \begin{eqnarray} &&\zeta \left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) -\zeta \left( v_{\beta }\left( 0,\phi _{\varepsilon }\left( 0\right) \right) \right) +\frac{\kappa }{\varepsilon }\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) d\left( s,\phi _{\varepsilon }\right) \widetilde{v}_{\beta }\left( s,\phi _{\varepsilon }\right) ds \label{eq:nyref} \\ &\leq &\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left\vert D_{s}v_{\beta }\left( s,\phi _{\varepsilon }\right) \right\vert ds+\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left\vert D_{x}v_{\beta }\left( s,\phi _{\varepsilon }\right) \right\vert \left\vert \psi ^{\prime }\right\vert ds=I_{1}+I_{2}. \notag \end{eqnarray} Note that since $\left\vert D_{x}d\right\vert \leq 1$ a.e. we have $ \left\vert D_{x}v_{\beta }\left( s,\phi _{\varepsilon }\right) \right\vert \leq p\widetilde{v}_{\beta }\left( s,\phi _{\varepsilon }\right) $, and hence, H\"{o}lder's inequality implies \begin{eqnarray*} I_{2} &=&\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left\vert D_{x}v_{\beta }\left( s,\phi _{\varepsilon }\right) \right\vert \left\vert \psi ^{\prime }\right\vert ds\leq p\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \widetilde{v}_{\beta }\left( s,\phi _{\varepsilon }\right) \left\vert \psi ^{\prime }\right\vert ds \\ &\leq &p\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left\vert \psi ^{\prime }\right\vert ^{p}ds\right) ^{1/p}\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left( \widetilde{v} _{\beta }\left( s,\phi _{\varepsilon }\right) \right) ^{p/(p-1)}ds\right) ^{(p-1)/p}. 
\end{eqnarray*} Moreover, since $|D_{s}v_{\beta }|\leq p\left( v_{\beta }\left( s,\phi _{\varepsilon }\right) \right) ^{(p-1)/p}\left( |D_{s}d|^{p}\ast \varphi _{\beta }\right) ^{1/p}$, we also have \begin{eqnarray*} I_{1} &=&\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left\vert D_{s}v_{\beta }\left( s,\phi _{\varepsilon }\right) \right\vert ds \\ &\leq &p\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left( v_{\beta }\left( s,\phi _{\varepsilon }\right) \right) ^{(p-1)/p}\left( |D_{s}d|^{p}\ast \varphi _{\beta }\right) ^{1/p}ds \\ &\leq &p\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) v_{\beta }\left( s,\phi _{\varepsilon }\right) ds\right) ^{(p-1)/p}\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left( |D_{s}d|^{p}\ast \varphi _{\beta }\right) ds\right) ^{1/p}. \end{eqnarray*} Inserting the bounds for $I_{1}$ and $I_{2}$ into \eqref{eq:nyref} yields \begin{eqnarray} &&\frac{1}{p}\zeta \left( v_{\beta }\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) +\frac{\kappa }{\varepsilon p}\int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) d\left( s,\phi _{\varepsilon }\right) \widetilde{v}_{\beta }\left( s,\phi _{\varepsilon }\right) ds \label{betabound} \\ &\leq &\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) v_{\beta }\left( s,\phi _{\varepsilon }\right) ds\right) ^{(p-1)/p}\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left( |D_{s}d|^{p}\ast \varphi _{\beta }\right) ds\right) ^{1/p} \notag \\ &+&\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left( \widetilde{v}_{\beta }\left( s,\phi _{\varepsilon }\right) \right) ^{p/(p-1)}ds\right) ^{(p-1)/p}\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left\vert \psi ^{\prime }\right\vert ^{p}ds\right) ^{1/p}+\rho \left( \beta \right) , \notag \end{eqnarray} where $\rho \left( \beta \right) =p^{-1}\zeta \left( v_{\beta }\left( 0,\phi _{\varepsilon }\left( 0\right) \right) \right) \rightarrow 0$ as $\beta \rightarrow 0$. 
By spatial Lipschitz continuity of $d(t,x)$ we have $ v_{\beta }\left( s,\phi _{\varepsilon }\right) \rightarrow v\left( s,\phi _{\varepsilon }\right) $ and $\widetilde{v}_{\beta }\left( s,\phi _{\varepsilon }\right) \rightarrow \widetilde{v}\left( s,\phi _{\varepsilon }\right) $ as $\beta \rightarrow 0$. Moreover, since $d$ satisfies \eqref{templip}, uniformly in space, we also have \begin{equation*} \int_{0}^{t}\left\vert D_{s}d\right\vert ^{p}ds\leq C(T)^{p}, \end{equation*} for some constant $C(T)$ independent of $x$. Therefore, by the Fubini-Tonelli theorem we can conclude, since $D_{s}d(t,x)$ is jointly measurable in $(t,x)$, that \begin{equation*} \int_{0}^{t}\left( |D_{s}d|^{p}\ast \varphi _{\beta }\right) ds=\int_{ \mathbb{R}^{n}}\left( \int_{0}^{t}\left\vert D_{s}d\right\vert ^{p}ds\right) \varphi _{\beta }(x-y)dy\leq C(T)^{p}, \end{equation*} and so \begin{equation*} \left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left( |D_{s}d|^{p}\ast \varphi _{\beta }\right) ds\right) ^{1/p}+\left( \int_{0}^{t}\zeta ^{\prime }\left( v_{\beta }\right) \left\vert \psi ^{\prime }\right\vert ^{p}ds\right) ^{1/p}\leq C\left( T\right) <\infty , \end{equation*} since by construction $\left\vert \zeta ^{\prime }\left( v_{\beta }\right) \right\vert \leq 1$, and $\psi \in \mathcal{C}^{1}\left( \left[ 0,T\right] , \mathbb{R}^{n}\right) $. Thus, letting $\beta $ tend to zero in \eqref{betabound}, we obtain \begin{eqnarray*} &&\frac{1}{p}\zeta \left( v\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) +\frac{\kappa }{\varepsilon p}\int_{0}^{t}\zeta ^{\prime }\left( v\left( s,\phi _{\varepsilon }\right) \right) v\left( s,\phi _{\varepsilon }\right) ds \\ &\leq &C\left( T\right) \left( \int_{0}^{t}\zeta ^{\prime }\left( v\left( s,\phi _{\varepsilon }\right) \right) v\left( s,\phi _{\varepsilon }\right) ds\right) ^{\left( p-1\right) /p}. 
\end{eqnarray*} Both terms on the left-hand side are positive, and each of them is therefore bounded from above by the right-hand side. Hence \begin{equation*} \frac{\kappa }{\varepsilon p}\left( \int_{0}^{t}\zeta ^{\prime }\left( v\left( s,\phi _{\varepsilon }\right) \right) v\left( s,\phi _{\varepsilon }\right) ds\right) ^{1/p}\leq C\left( T\right) , \end{equation*} and, as a consequence, \begin{equation*} \zeta \left( v\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) + \frac{\kappa }{\varepsilon }\int_{0}^{t}\zeta ^{\prime }\left( v\left( s,\phi _{\varepsilon }\right) \right) v\left( s,\phi _{\varepsilon }\right) ds\leq K\left( T\right) \varepsilon ^{p-1}. \end{equation*} We may assume that $\varepsilon >0$ has been chosen small enough such that $v\left( t,\phi _{\varepsilon }\left( t\right) \right) \leq \overline{\beta }^{p}/2$, for all $t\in \left[ 0,T\right] $. Then, by the definition of $\zeta $, \begin{equation} \frac{1}{\varepsilon ^{p-1}}\left( d\left( t,\phi _{\varepsilon }\left( t\right) \right) \right) ^{p}+\frac{\kappa }{\varepsilon ^{p}}\int_{0}^{t}\left( d\left( s,\phi _{\varepsilon }\left( s\right) \right) \right) ^{p}ds\leq K\left( T\right) , \label{d2bound} \end{equation} for $t\in \left[ 0,T\right] $. The remainder of the proof follows along the lines of the proof of Lemma 4.5 in \cite{DupuisIshii1993}, but we give the details for completeness.
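Before proceeding, we record the elementary step behind the derivation of \eqref{d2bound}: if $a,I\geq 0$ satisfy $aI\leq C\left( T\right) I^{\left( p-1\right) /p}$, then $I^{1/p}\leq C\left( T\right) /a$. Applying this with $a=\kappa /\left( \varepsilon p\right) $ and $I=\int_{0}^{t}\zeta ^{\prime }\left( v\left( s,\phi _{\varepsilon }\right) \right) v\left( s,\phi _{\varepsilon }\right) ds$ gives
\begin{equation*}
\frac{\kappa }{\varepsilon }I\leq \frac{\kappa }{\varepsilon }\left( \frac{C\left( T\right) \varepsilon p}{\kappa }\right) ^{p}=\left( C\left( T\right) p\right) ^{p}\kappa ^{1-p}\varepsilon ^{p-1},
\end{equation*}
which is the origin of the factor $\varepsilon ^{p-1}$ in the bound $K\left( T\right) \varepsilon ^{p-1}$.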
Relation \eqref{d2bound} implies that the sequences $\left\{ l_{\varepsilon }\right\} _{\varepsilon >0}$ and $\left\{ \lambda _{\varepsilon }\right\} _{\varepsilon >0}$, where \begin{equation*} l_{\varepsilon }\left( t\right) =\frac{1}{\varepsilon }d\left( t,\phi _{\varepsilon }\left( t\right) \right) ,\quad \lambda _{\varepsilon }\left( t\right) =\frac{1}{\varepsilon }\int_{0}^{t}d\left( s,\phi _{\varepsilon }\left( s\right) \right) \gamma \left( s,\phi _{\varepsilon }\left( s\right) \right) ds, \end{equation*} are bounded in $L^{p}\left( \left[ 0,T\right] ,\mathbb{R}\right) $ and $\mathcal{W}^{1,p}\left( \left[ 0,T\right] ,\mathbb{R}^{n}\right) $ respectively. Thus, we may assume that $l_{\varepsilon }$ and $\lambda _{\varepsilon }$ converge weakly to $l\in L^{p}\left( \left[ 0,T\right] ,\mathbb{R}\right) $ and $\lambda \in \mathcal{W}^{1,p}\left( \left[ 0,T\right] ,\mathbb{R}^{n}\right) \subset \mathcal{C}\left( [0,T],\mathbb{R}^{n}\right) $, respectively, as $\varepsilon \rightarrow 0$. Moreover, from \eqref{ode} we conclude that $\phi _{\varepsilon }$ converges weakly to $\phi \in \mathcal{W}^{1,p}\left( \left[ 0,T\right] ,\mathbb{R}^{n}\right) $ and that $\phi \left( t\right) =\psi \left( t\right) +\lambda \left( t\right) $, $\phi \left( 0\right) =\psi \left( 0\right) $. This proves \eqref{SP1} and, moreover, \eqref{SP2} holds due to \eqref{d2bound}. By construction, $\lambda _{\varepsilon }^{\prime }\left( t\right) =l_{\varepsilon }\left( t\right) \gamma \left( t,\phi _{\varepsilon }\left( t\right) \right) $, and this implies that $\lambda ^{\prime }\left( t\right) =l\left( t\right) \gamma \left( t,\phi \left( t\right) \right) $. Moreover, let $\tau =\left\{ t\in \left[ 0,T\right] :\phi \left( t\right) \in \Omega _{t}\right\} $ and note that, for each fixed $t\in \tau $, we have $l_{\varepsilon }\left( t\right) =0$ for all sufficiently small $\varepsilon $; hence $l\left( t\right) =0$ on $\tau $.
Therefore \begin{equation*} \left\vert \lambda \right\vert \left( t\right) =\int_{0}^{t}\left\vert \lambda ^{\prime }\left( s\right) \right\vert ds=\int_{0}^{t}l\left( s\right) \left\vert \gamma \left( s,\phi \left( s\right) \right) \right\vert ds=\int_{0}^{t}l\left( s\right) ds,\quad \text{for all }t\in \left[ 0,T \right] , \end{equation*} as $\left\vert \gamma \left( s,\phi \left( s\right) \right) \right\vert =1$ for all $s\in \left[ 0,T\right] \setminus \tau $. This proves \eqref{SP3}. In addition, \begin{equation*} \lambda \left( t\right) =\int_{0}^{t}l\left( s\right) \gamma \left( s,\phi \left( s\right) \right) ds=\int_{0}^{t}\gamma \left( s,\phi \left( s\right) \right) d\left\vert \lambda \right\vert \left( s\right) ,\quad \text{for all }t\in \left[ 0,T\right] , \end{equation*} which proves \eqref{SP5}. It remains to verify \eqref{SP4}, but this follows readily from \begin{equation*} \left\vert \lambda \right\vert \left( t\right) =\int_{0}^{t}l\left( s\right) \left\vert \gamma \left( s,\phi \left( s\right) \right) \right\vert ds=\int_{0}^{t}I_{\left\{ \phi \left( s\right) \in \partial \Omega _{s}\right\} }l\left( s\right) ds=\int_{0}^{t}I_{\left\{ \phi \left( s\right) \in \partial \Omega _{s}\right\} }d\left\vert \lambda \right\vert \left( s\right) . \end{equation*} We have completed the proof that $\left( \phi ,\lambda \right) \in \mathcal{W }^{1,p}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) \times \mathcal{W}^{1,p}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $ solves the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $. $\Box $ The next step is to prove relative compactness of solutions to the Skorohod problem. The proof follows the proof of Lemma 4.7 in \cite{DupuisIshii1993}, but a number of changes must be made carefully to handle the time dependency of the domain. \begin{lemma} \label{compactest}Let $A$ be a compact subset of $\mathcal{C}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $. 
Then \begin{description} \item[(i)] There exists a constant $L<\infty $ such that \begin{equation*} \left\vert \lambda \right\vert \left( T\right) <L, \end{equation*} for all solutions $\left( \psi +\lambda ,\lambda \right) $ to the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $ with $\psi \in A$. \item[(ii)] The set of $\phi $, such that $\left( \phi ,\lambda \right) $ solves the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $ with $\psi \in A$, is relatively compact. \end{description} \end{lemma} \noindent \textbf{Proof.} By the compactness of $\overline{\Omega }$ and the continuity of $\gamma $, there exists a constant $c>0$ such that for every $t\in \left[ 0,T\right] $ and $x\in \overline{\Omega }_{t}\cap V$, where $V$ is the set defined in connection with \eqref{smooth_gamma}, there exists a vector $v\left( t,x\right) $ and a set $\left[ t,t+c\right] \times B\left( x,c\right) $ such that $\left\langle \gamma \left( s,y\right) ,v\left( t,x\right) \right\rangle >c$ for all $\left( s,y\right) \in \left[ t,t+c\right] \times B\left( x,c\right) $. Without loss of generality, we may assume that $c<\delta $, for the $\delta $ introduced in Remark \ref{spaceremark}. Let $\psi \in A$ be given and let $\left( \phi ,\lambda \right) $ be any solution to the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $. Define $T_{1}$ to be the smallest of $T$, $c$ and $\inf \left\{ t\in \left[ 0,T\right] :\phi \left( t\right) \notin B\left( \phi \left( 0\right) ,c\right) \right\} $. Next define $T_{2}$ to be the smallest of $T$, $T_{1}+c$ and $\inf \left\{ t\in \left[ T_{1},T\right] :\phi \left( t\right) \notin B\left( \phi \left( T_{1}\right) ,c\right) \right\} $. Continuing in this fashion, we obtain a sequence $\left\{ T_{m}\right\} _{m=1,2,...}$ of time instants.
By construction, for all $s\in \left[ T_{m-1},T_{m}\right) $ we have $s\in \left[ T_{m-1},T_{m-1}+c\right] $ and $ \phi \left( s\right) \in B\left( \phi \left( T_{m-1}\right) ,c\right) $. For all $m$ such that $\phi \left( T_{m-1}\right) \in $ $\overline{\Omega } _{T_{m-1}}\cap V$, we have $\left\langle \gamma \left( s,\phi \left( s\right) \right) ,v\left( T_{m-1},\phi \left( T_{m-1}\right) \right) \right\rangle >c$ and hence \begin{eqnarray*} &&\left\langle \phi \left( T_{m}\right) -\phi \left( T_{m-1}\right) ,v\left( T_{m-1},\phi \left( T_{m-1}\right) \right) \right\rangle \\ &&-\left\langle \psi \left( T_{m}\right) -\psi \left( T_{m-1}\right) ,v\left( T_{m-1},\phi \left( T_{m-1}\right) \right) \right\rangle \\ &=&\int_{T_{m-1}}^{T_{m}}\left\langle \gamma \left( s,\phi \left( s\right) \right) ,v\left( T_{m-1},\phi \left( T_{m-1}\right) \right) \right\rangle d\left\vert \lambda \right\vert \left( s\right) \geq c\left( \left\vert \lambda \right\vert \left( T_{m}\right) -\left\vert \lambda \right\vert \left( T_{m-1}\right) \right) . \end{eqnarray*} Since $A$ is compact, the set $\left\{ \psi \left( t\right) :t\in \left[ 0,T \right] ,\psi \in A\right\} $ is bounded. Moreover, since $\overline{\Omega } $ is compact and $\phi \left( t\right) \in \overline{\Omega }_{t}$ for all $ t\in \left[ 0,T\right] $, there exists a constant $M<\infty $ such that \begin{equation*} \left\vert \lambda \right\vert \left( T_{m}\right) -\left\vert \lambda \right\vert \left( T_{m-1}\right) <M. \end{equation*} Note also that, for all $m$ such that $\phi \left( T_{m-1}\right) \notin $ $ \overline{\Omega }_{T_{m-1}}\cap V$, we have, for $c$ sufficiently small, that $\left\vert \lambda \right\vert \left( T_{m}\right) -\left\vert \lambda \right\vert \left( T_{m-1}\right) =0$. 
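The constant $M$ can be traced explicitly. After possibly decreasing $c$, each vector $v\left( t,x\right) $ may be taken to be a unit vector, so the Cauchy-Schwarz inequality applied to the inner product estimate above gives
\begin{equation*}
\left\vert \lambda \right\vert \left( T_{m}\right) -\left\vert \lambda \right\vert \left( T_{m-1}\right) \leq \frac{1}{c}\left( \left\vert \phi \left( T_{m}\right) -\phi \left( T_{m-1}\right) \right\vert +\left\vert \psi \left( T_{m}\right) -\psi \left( T_{m-1}\right) \right\vert \right) \leq \frac{2}{c}\left( R_{\Omega }+R_{A}\right) =:M,
\end{equation*}
where $R_{\Omega }$ bounds $\left\vert x\right\vert $ over $\left( t,x\right) \in \overline{\Omega }$ and $R_{A}$ bounds $\left\vert \psi \left( t\right) \right\vert $ over $\psi \in A$ and $t\in \left[ 0,T\right] $.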
Define the modulus of continuity of a function $f\in \mathcal{C}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $ as $\left\Vert f\right\Vert _{s,t}=\sup_{s\leq t_{1}\leq t_{2}\leq t}\left\vert f\left( t_{2}\right) -f\left( t_{1}\right) \right\vert $ for $0\leq s\leq t\leq T$. We next prove that there exists a positive constant $R$ such that, for any $\psi \in A$ and $T_{m-1}\leq \tau \leq T_{m}$, it holds that \begin{equation} \left\Vert \lambda \right\Vert _{T_{m-1},\tau }\leq R\left( \left\Vert \psi \right\Vert _{T_{m-1},\tau }^{1/2}+\left\Vert \psi \right\Vert _{T_{m-1},\tau }^{3/2}+\left( \tau -T_{m-1}\right) ^{\widehat{\alpha } /2}\right) , \label{apriori} \end{equation} where $\widehat{\alpha }$ is the H\"{o}lder exponent in Remark \ref {timeholder}. As we are only interested in the behaviour during the time interval $\left[ T_{m-1},T_{m}\right] $, we simplify the notation by setting, without loss of generality, $T_{m-1}=0$, $\phi \left( T_{m-1}\right) =x$, $\psi \left( T_{m-1}\right) =x$, $\lambda \left( T_{m-1}\right) =0$ and $\left\vert \lambda \right\vert \left( T_{m-1}\right) =0$. Let $h$ be the function in Lemma \ref{testlemma3} and let $\chi ,C$ be the corresponding positive constants. Define $B_{\varepsilon }\left( t\right) =\varepsilon h\left( t,x,-\lambda \left( t\right) /\varepsilon \right) $ and $E\left( t\right) =e^{-\left( 2\left\vert \lambda \right\vert \left( t\right) +t\right) C/\chi }$. 
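In the computation that follows, $E$ is differentiated using the fact that $\left\vert \lambda \right\vert $ is continuous and of bounded variation, so that
\begin{equation*}
dE\left( u\right) =-\frac{C}{\chi }E\left( u\right) \left( 2\,d\left\vert \lambda \right\vert \left( u\right) +du\right) ,
\end{equation*}
and the product rule for continuous functions of bounded variation then yields the decomposition of $B_{\varepsilon }E$ below.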
Since $h\left( t,x,0\right) =1$, we get \begin{eqnarray*} B_{\varepsilon }\left( \tau \right) E\left( \tau \right) &=&B_{\varepsilon }\left( 0\right) E\left( 0\right) +\int_{0}^{\tau }\left( E\left( u\right) dB_{\varepsilon }\left( u\right) +B_{\varepsilon }\left( u\right) dE\left( u\right) \right) \\ &=&\varepsilon +\int_{0}^{\tau }E\left( u\right) dB_{\varepsilon }\left( u\right) -\frac{2C}{\chi }\int_{0}^{\tau }B_{\varepsilon }\left( u\right) E\left( u\right) d\left\vert \lambda \right\vert \left( u\right) \\ &&-\frac{C}{\chi }\int_{0}^{\tau }B_{\varepsilon }\left( u\right) E\left( u\right) du, \end{eqnarray*} where the first integral can be rewritten as \begin{eqnarray*} \int_{0}^{\tau }E\left( u\right) dB_{\varepsilon }\left( u\right) &=&\int_{0}^{\tau }E\left( u\right) \varepsilon D_{t}h\left( u,x,-\lambda \left( u\right) /\varepsilon \right) du \\ &&-\int_{0}^{\tau }E\left( u\right) \left\langle D_{p}h\left( u,x,-\lambda \left( u\right) /\varepsilon \right) ,d\lambda \left( u\right) \right\rangle . \end{eqnarray*} By \eqref{testlemma31} and \eqref{testlemma35}, the integral involving $ D_{t}h$ has the upper bound \begin{eqnarray*} &&\int_{0}^{\tau }E\left( u\right) \varepsilon D_{t}h\left( u,x,-\lambda \left( u\right) /\varepsilon \right) du \\ &\leq &C\varepsilon \int_{0}^{\tau }E\left( u\right) \left\vert \lambda \left( u\right) /\varepsilon \right\vert ^{2}du\leq \frac{C}{\chi } \int_{0}^{\tau }E\left( u\right) B_{\varepsilon }\left( u\right) du. \end{eqnarray*} Next, we would like to find an upper bound for the integral involving $ D_{p}h $ using \eqref{testlemma33} in some appropriate way, but we have to be somewhat careful due to the temporal variation of the domain. Assume that $\phi \left( u\right) \in \partial \Omega _{u}$. If $x\notin \overline{ \Omega }_{u}$, there exists at least one point $y_{u}\in \overline{\Omega } _{u}\cap B\left( x,c\right) $ such that $\left\vert x-y_{u}\right\vert =d\left( u,x\right) $. 
We have chosen $c<\delta $, so $\left\langle y_{u}-\phi \left( u\right) ,\gamma \left( u,\phi \left( u\right) \right) \right\rangle \geq -\theta \left\vert y_{u}-\phi \left( u\right) \right\vert $ holds by Remark \ref{spaceremark} and, due to \eqref{testlemma33}, we can conclude \begin{equation*} I_{1}:=-\int_{0}^{\tau }E\left( u\right) \left\langle D_{p}h\left( u,\phi \left( u\right) ,\left( y_{u}-\phi \left( u\right) \right) /\varepsilon \right) ,\gamma \left( u,\phi \left( u\right) \right) \right\rangle d\left\vert \lambda \right\vert \left( u\right) \leq 0, \end{equation*} since $d\left\vert \lambda \right\vert \left( u\right) =0$ if $\phi \left( u\right) \notin \partial \Omega _{u}$. If $x\in \overline{\Omega }_{u}$, the above estimate holds with $y_{u}$ replaced by $x$. The integral involving $ D_{p}h$ can be decomposed into \begin{equation*} -\int_{0}^{\tau }E\left( u\right) \left\langle D_{p}h\left( u,x,-\lambda \left( u\right) /\varepsilon \right) ,d\lambda \left( u\right) \right\rangle =I_{1}+I_{2}+I_{3}, \end{equation*} for $I_{1}$ as above and \begin{equation*} I_{2}=\int_{0}^{\tau }E\left( u\right) \left\langle D_{p}h\left( u,\phi \left( u\right) ,-\lambda \left( u\right) /\varepsilon \right) -D_{p}h\left( u,x,-\lambda \left( u\right) /\varepsilon \right) ,d \lambda \left( u\right) \right\rangle , \end{equation*} \begin{equation*} I_{3}=\int_{0}^{\tau }E\left( u\right) \left\langle D_{p}h\left( u,\phi \left( u\right) ,\left( y_{u}-\phi \left( u\right) \right) /\varepsilon \right) -D_{p}h\left( u,\phi \left( u\right) ,-\lambda \left( u\right) /\varepsilon \right) ,d \lambda \left( u\right) \right\rangle . 
\end{equation*} By \eqref{testlemma31} and \eqref{testlemma37}, these integrals can be bounded from above by \begin{eqnarray*} I_{2} &\leq &\frac{C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left\vert \lambda \left( u\right) \right\vert \left\vert x-\phi \left( u\right) \right\vert d\left\vert \lambda \right\vert \left( u\right) \\ &\leq &\frac{C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left( \left\vert \lambda \left( u\right) \right\vert ^{2}+\left\vert x-\psi \left( u\right) \right\vert \left\vert \lambda \left( u\right) \right\vert \right) d\left\vert \lambda \right\vert \left( u\right) \\ &\leq &\frac{2C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left( \left\vert \lambda \left( u\right) \right\vert ^{2}+\left\vert x-\psi \left( u\right) \right\vert ^{2}\right) d\left\vert \lambda \right\vert \left( u\right) \\ &\leq &\frac{2C}{\chi }\int_{0}^{\tau }E\left( u\right) B_{\varepsilon }\left( u\right) d\left\vert \lambda \right\vert \left( u\right) +\frac{2C}{ \varepsilon }\int_{0}^{\tau }E\left( u\right) \left\vert x-\psi \left( u\right) \right\vert ^{2}d\left\vert \lambda \right\vert \left( u\right) , \end{eqnarray*} and \begin{eqnarray*} I_{3} &\leq &\frac{C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left\vert y_{u}-\phi \left( u\right) -\left( -\lambda \left( u\right) \right) \right\vert d\left\vert \lambda \right\vert \left( u\right) \\ &=&\frac{C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left\vert y_{u}-\psi \left( u\right) \right\vert d\left\vert \lambda \right\vert \left( u\right) \\ &\leq &\frac{C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left( \left\vert x-\psi \left( u\right) \right\vert +\left\vert y_{u}-x\right\vert \right) d\left\vert \lambda \right\vert \left( u\right) \\ &\leq &\frac{C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left( \left\vert x-\psi \left( u\right) \right\vert +d\left( u,x\right) \right) d\left\vert \lambda \right\vert \left( u\right) . 
\end{eqnarray*} Collecting all the terms, we obtain \begin{equation*} B_{\varepsilon }\left( \tau \right) E\left( \tau \right) \leq \varepsilon + \frac{C}{\varepsilon }\int_{0}^{\tau }E\left( u\right) \left( \left\vert x-\psi \left( u\right) \right\vert +2\left\vert x-\psi \left( u\right) \right\vert ^{2}+d\left( u,x\right) \right) d\left\vert \lambda \right\vert \left( u\right) , \end{equation*} which implies \begin{equation*} B_{\varepsilon }\left( \tau \right) \leq \left( \frac{2C}{\varepsilon } \int_{0}^{\tau }E\left( u\right) \left( \left\Vert \psi \right\Vert _{0,\tau }+\left\Vert \psi \right\Vert _{0,\tau }^{2}+K\tau ^{\widehat{\alpha } }\right) d\left\vert \lambda \right\vert \left( u\right) +\varepsilon \right) e^{\left( 2\left\vert \lambda \right\vert \left( \tau \right) +\tau \right) C/\chi }, \end{equation*} where $K$ and $\widehat{\alpha }$ are the constants from Remark \ref {timeholder}. Now \begin{equation*} \int_{0}^{\tau }E\left( u\right) d\left\vert \lambda \right\vert \left( u\right) \leq \int_{0}^{\tau }e^{-2C\left\vert \lambda \right\vert \left( u\right) /\chi }d\left\vert \lambda \right\vert \left( u\right) \leq \frac{ \chi }{2C}, \end{equation*} so \begin{equation*} B_{\varepsilon }\left( \tau \right) \leq \left( \frac{\chi }{\varepsilon } \left( \left\Vert \psi \right\Vert _{0,\tau }+\left\Vert \psi \right\Vert _{0,\tau }^{2}+K\tau ^{\widehat{\alpha }}\right) +\varepsilon \right) e^{\left( 2\left\vert \lambda \right\vert \left( \tau \right) +\tau \right) C/\chi }. 
\end{equation*} Another application of \eqref{testlemma31} gives \begin{eqnarray*} \left\vert \lambda \left( \tau \right) \right\vert &\leq &\frac{1}{2}\left( \varepsilon +\frac{1}{\varepsilon }\left\vert \lambda \left( \tau \right) \right\vert ^{2}\right) \leq \frac{\varepsilon }{2}+\frac{B_{\varepsilon }\left( \tau \right) }{2\chi } \\ &\leq &\frac{\varepsilon }{2}+\left( \frac{1}{2\varepsilon }\left( \left\Vert \psi \right\Vert _{0,\tau }+\left\Vert \psi \right\Vert _{0,\tau }^{2}+K\tau ^{\widehat{\alpha }}\right) +\frac{\varepsilon }{2\chi }\right) e^{\left( 2M+T\right) C/\chi }. \end{eqnarray*} Set $\varepsilon =\max \left\{ \left\Vert \psi \right\Vert _{0,\tau }^{1/2},\tau ^{\widehat{\alpha }/2}\right\} $ so that $\varepsilon \leq \left\Vert \psi \right\Vert _{0,\tau }^{1/2}+\tau ^{\widehat{\alpha }/2}$, $ 1/\varepsilon \leq \left\Vert \psi \right\Vert _{0,\tau }^{-1/2}$ and $ 1/\varepsilon \leq \tau ^{-\widehat{\alpha }/2}$. Then \eqref{apriori} follows immediately from the above inequality. By \eqref{apriori} and the compactness of $A$, there exists a $\hat{\tau}>0$ such that \begin{equation*} \max \left\{ \left\Vert \psi \right\Vert _{T_{m-1},T_{m-1}+\hat{\tau} },\left\Vert \lambda \right\Vert _{T_{m-1},T_{m-1}+\hat{\tau}}\right\} \leq c/3, \end{equation*} which implies $\left\Vert \phi \right\Vert _{T_{m-1},T_{m-1}+\hat{\tau}}\leq 2c/3$. The definition of $\left\{ T_{m}\right\} $ then implies that $ T_{m}-T_{m-1}\geq \min \left\{ \hat{\tau},c\right\} $. This proves (i) with $ L=M\left( T/\min \left\{ \hat{\tau},c\right\} +1\right) $. Part (ii) follows from \eqref{apriori} and the bound $T_{m}-T_{m-1}\geq \min \left\{ \hat{\tau} ,c\right\} $. $\Box $ Equipped with the results above, we are now ready to state and prove the existence of solutions to the Skorohod problem. The proof is very similar to the proof of Theorem 4.8 in \cite{DupuisIshii1993}, so we only sketch the first half of the proof. 
\begin{lemma} \label{contexist}Let $\psi \in \mathcal{C}\left( \left[ 0,T\right] ,\mathbb{R}^{n}\right) $ with $\psi \left( 0\right) \in \overline{\Omega }_{0}$. Then there exists a solution $\left( \phi ,\lambda \right) $ to the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $. \end{lemma} \noindent \textbf{Proof.} Let $\psi _{n}\in \mathcal{C}^{1}\left( \left[ 0,T\right] ,\mathbb{R}^{n}\right) $ form a sequence of functions converging uniformly to $\psi $. According to Lemma \ref{smoothexist}, there exists a solution $\left( \phi _{n},\lambda _{n}\right) $ to the Skorohod problem for $\left( \Omega ,\gamma ,\psi _{n}\right) $. By Lemma \ref{compactest}, we may assume that the sequence $\left\{ \lambda _{n}\right\} _{n=1}^{\infty }$ is equibounded and equicontinuous, that is, \begin{eqnarray*} \sup_{n}\left\vert \lambda _{n}\right\vert \left( T\right) &\leq &L<\infty , \\ \lim_{\left\vert s-t\right\vert \rightarrow 0}\sup_{n}\left\vert \lambda _{n}\left( s\right) -\lambda _{n}\left( t\right) \right\vert &=&0. \end{eqnarray*} The Arzel\`{a}-Ascoli theorem asserts the existence of a function $\lambda \in \mathcal{C}\left( \left[ 0,T\right] ,\mathbb{R}^{n}\right) $ such that $\left\{ \lambda _{n}\right\} $ converges uniformly to $\lambda $. Clearly $\left\vert \lambda \right\vert \left( T\right) \leq L$. Defining the function $\phi $ by $\phi =\psi +\lambda $, we conclude that \eqref{SP1}-\eqref{SP3} of Definition \ref{skorohodprob} hold. To show properties \eqref{SP4} and \eqref{SP5} in the same definition, we define the measure $\mu _{n}$ on $\overline{\Omega }\times S\left( 0,1\right) $ as \begin{equation*} \mu _{n}\left( A\right) =\int_{\left[ 0,T\right] }I_{\left\{ \left( s,\phi _{n}\left( s\right) ,\gamma \left( s,\phi _{n}\left( s\right) \right) \right) \in A\right\} }d\left\vert \lambda _{n}\right\vert \left( s\right) , \end{equation*} for every Borel set $A\subset \overline{\Omega }\times S\left( 0,1\right) $.
Introducing the notation $\overline{\Omega }_{\left[ 0,t\right] }:=\overline{ \Omega }\cap \left( \left[ 0,t\right] \times \mathbb{R} ^{n}\right) $, we have, by definition and \eqref{SP5}, \begin{equation*} \left\vert \lambda _{n}\right\vert \left( t\right) =\mu _{n}\left( \overline{ \Omega }_{\left[ 0,t\right] }\times S\left( 0,1\right) \right) , \end{equation*} and \begin{equation*} \lambda _{n}\left( t\right) =\int_{\overline{\Omega }_{\left[ 0,t\right] }\times S\left( 0,1\right) }\gamma d\mu _{n}\left( s,x,\gamma \right) , \end{equation*} for all $t\in \left[ 0,T\right] $. Since $\left\vert \lambda _{n}\right\vert \left( T\right) \leq L<\infty $ for all $n$, the Banach-Alaoglu theorem asserts that a subsequence of $\mu _{n}$ converges to some measure $\mu $ satisfying $\mu \left( \overline{\Omega }\times S\left( 0,1\right) \right) <\infty $. By weak convergence and the continuity of $\lambda $, \begin{equation*} \lambda \left( t\right) =\int_{\overline{\Omega }_{\left[ 0,t\right] }\times S\left( 0,1\right) }\gamma d\mu \left( s,x,\gamma \right) . \end{equation*} Using the fact that $\left( \phi _{n},\lambda _{n}\right) $ solves the Skorohod problem for $\left( \Omega ,\gamma ,\psi _{n}\right) $, we can draw several conclusions regarding the properties of the measure $\mu _{n}$ and then use weak convergence of $\mu _{n}$ to $\mu $ to deduce that $\lambda $ satisfies \eqref{SP4} and \eqref{SP5}. This procedure is carried out in the proofs of Theorem 2.8 in \cite{Costantini1992}, Theorem 4.8 in \cite {DupuisIshii1993} and Theorem 5.1 in \cite{NystromOnskog2010a}, so we omit further details. 
$\Box $ \setcounter{equation}{0} \setcounter{theorem}{0} \section{SDEs with oblique reflection\label{RSDE}} Using the existence of solutions $\left( \phi ,\lambda \right) $ to the Skorohod problem for $\left( \Omega ,\gamma ,\psi \right) $, with $\psi \in \mathcal{C}\left( \left[ 0,T\right] , \mathbb{R} ^{n}\right) $ and $\psi \left( 0\right) \in \overline{\Omega }_{0}$, we can now prove existence and uniqueness of solutions to SDEs with oblique reflection at the boundary of a bounded, time-dependent domain. To this end, assume that the triple $\left( X,Y,k\right) $ satisfies \begin{equation*} Y\left( t\right) =x+\int_{0}^{t}b\left( s,X\left( s\right) \right) ds+\int_{0}^{t}\sigma \left( s,X\left( s\right) \right) dM\left( s\right) +k\left( t\right) , \end{equation*} \begin{equation*} X\left( t\right) \in \overline{\Omega }_{t},\quad Y\left( t\right) \in \overline{\Omega }_{t}, \end{equation*} \begin{equation*} \left\vert k\right\vert \left( t\right) =\int_{\left( 0,t\right] }I_{\left\{ Y\left( s\right) \in \partial \Omega _{s}\right\} }d\left\vert k\right\vert \left( s\right) <\infty ,\quad k\left( t\right) =\int_{\left( 0,t\right] }\gamma \left( s\right) d|k|\left( s\right) , \end{equation*} where $x\in \overline{\Omega }_{0}$ is fixed, $\gamma \left( s\right) =\gamma \left( s,Y\left( s\right) \right) $ $d\left\vert k\right\vert $ -a.s.~and $M$ is a continuous $\mathcal{F}_{t}$-martingale satisfying \begin{equation} d\left\langle M_{i},M_{j}\right\rangle \left( t\right) \leq Cdt, \label{mart} \end{equation} for some $C\in \left( 0,\infty \right) $. Let $\left( X^{\prime },Y^{\prime },k^{\prime }\right) $ be a similar triple, but with $x$ replaced by $x^{\prime}\in \overline{\Omega }_{0}$, and $ \gamma^{\prime} \left( s\right) =\gamma \left( s,Y^{\prime}\left( s\right) \right)$ $d\left\vert k^{\prime}\right\vert $-a.s. 
We shall prove uniqueness of solutions by a Picard iteration scheme, and a crucial ingredient is then the estimate provided by the following lemma. Note that Lemma \ref{rsdetheorem} holds for a general continuous $\mathcal{F}_{t}$-martingale satisfying \eqref{mart}, whereas in Theorem \ref{main} we restrict our interest to $M$ being a standard Wiener process. \begin{lemma} \label{rsdetheorem}There exists a positive constant $C<\infty $ such that \begin{equation*} E\left[ \sup_{0\leq s\leq t}\left\vert Y\left( s\right) -Y^{\prime }\left( s\right) \right\vert ^{4}\right] \leq C\left( \left\vert x-x^{\prime }\right\vert ^{4}+\int_{0}^{t}E\left[ \sup_{0\leq u\leq s}\left\vert X\left( u\right) -X^{\prime }\left( u\right) \right\vert ^{4}\right] ds\right) . \end{equation*} \end{lemma} \noindent \textbf{Proof.} Fix $\varepsilon >0$, let $\lambda >0$ be a constant to be specified later, and let $w_{\varepsilon }\in \mathcal{C}^{1,2}\left( \left[ 0,T\right] \times \mathbb{R}^{n}\times \mathbb{R}^{n},\mathbb{R}\right) $ and $\alpha \in \mathcal{C}^{1,2}\left( \overline{\Omega },\mathbb{R}\right) $ be the functions defined in Lemmas \ref{testlemma4}-\ref{testlemma5}. Define the stopping time \begin{equation*} \tau =\inf \left\{ s\in \left[ 0,T\right] :\left\vert Y\left( s\right) -Y^{\prime }\left( s\right) \right\vert \geq \delta \right\} , \end{equation*} where $\delta >0$ is the constant from Remark \ref{spaceremark}. Let $B$ denote the diameter of the smallest ball containing $\bigcup\nolimits_{t\in \left[ 0,T\right] }\overline{\Omega }_{t}$.
Then, assuming without loss of generality that $B/\delta \geq 1$, we have \begin{equation*} E\left[ \sup_{0\leq s\leq t}\left\vert Y\left( s\right) -Y^{\prime }\left( s\right) \right\vert ^{4}\right] \leq \left( \frac{B}{\delta }\right) ^{4}E \left[ \sup_{0\leq s\leq t\wedge \tau }\left\vert Y\left( s\right) -Y^{\prime }\left( s\right) \right\vert ^{4}\right] , \end{equation*} so it is sufficient to prove the theorem for $t\wedge \tau $. To simplify the notation, however, we write $t$ in place of $t\wedge \tau $ and assume that $\left\vert Y\left( s\right) -Y^{\prime }\left( s\right) \right\vert <\delta $ in the proof below. Define, for all $\left( t,x,y\right) $ such that $\left( t,x\right) ,\left( t,y\right) \in \overline{\Omega }$, the function $v$ as \begin{equation*} v\left( t,x,y\right) =e^{-\lambda \left( \alpha \left( t,x\right) +\alpha \left( t,y\right) \right) }w_{\varepsilon }\left( t,x,y\right) :=u\left( t,x,y\right) w_{\varepsilon }\left( t,x,y\right) . \end{equation*} The regularity of $v$ is inherited from that of $w_{\varepsilon }$ and $ \alpha $. 
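In the application of It\={o}'s formula below, the derivatives of $v$ are obtained from the product rule; for instance, since $D_{x}u=-\lambda u\,D_{x}\alpha \left( t,x\right) $, we have
\begin{equation*}
D_{x}v\left( t,x,y\right) =u\left( t,x,y\right) D_{x}w_{\varepsilon }\left( t,x,y\right) -\lambda v\left( t,x,y\right) D_{x}\alpha \left( t,x\right) ,
\end{equation*}
and analogously for $D_{y}v$ and $D_{s}v$. This identity is the source of the terms involving $D_{x}\alpha $ in the estimates of the boundary integrals further below.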
By It\={o}'s formula we have, suppressing the $s$-dependence for $ X $, $X^{\prime }$, $Y$ and $Y^{\prime }$, \begin{eqnarray} &&v\left( t,Y\left( t\right) ,Y^{\prime }\left( t\right) \right) \label{Ito} \\ &=&v\left( 0,x,x^{\prime }\right) +\int_{0}^{t}D_{s}v\left( s,Y,Y^{\prime }\right) ds \notag \\ &&+\int_{0}^{t}\left\langle D_{x}v\left( s,Y,Y^{\prime }\right) ,b\left( s,X\right) \right\rangle ds+\int_{0}^{t}\left\langle D_{y}v\left( s,Y,Y^{\prime }\right) ,b\left( s,X^{\prime }\right) \right\rangle ds \notag \\ &&+\int_{0}^{t}\left\langle D_{x}v\left( s,Y,Y^{\prime }\right) ,\sigma \left( s,X\right) dM\left( s\right) \right\rangle +\int_{0}^{t}\left\langle D_{y}v\left( s,Y,Y^{\prime }\right) ,\sigma \left( s,X^{\prime }\right) dM\left( s\right) \right\rangle \notag \\ &&+\int_{0}^{t}\left\langle D_{x}v\left( s,Y,Y^{\prime }\right) ,\gamma \left( s\right) \right\rangle d\left\vert k\right\vert \left( s\right) +\int_{0}^{t}\left\langle D_{y}v\left( s,Y,Y^{\prime }\right) ,\gamma ^{\prime }\left( s\right) \right\rangle d\left\vert k^{\prime }\right\vert \left( s\right) \notag \\ &&+\int_{0}^{t}\text{tr}\left( \left( \begin{array}{c} \sigma \left( s,X\right) \\ \sigma \left( s,X^{\prime }\right) \end{array} \right) ^{T}D^{2}v\left( s,Y,Y^{\prime }\right) \left( \begin{array}{c} \sigma \left( s,X\right) \\ \sigma \left( s,X^{\prime }\right) \end{array} \right) d\left\langle M\right\rangle \left( s\right) \right) . \notag \end{eqnarray} We define the martingale $N$ as \begin{equation*} N\left( t\right) =\int_{0}^{t}\left\langle D_{x}v\left( s,Y,Y^{\prime }\right) ,\sigma \left( s,X\right) dM\left( s\right) \right\rangle +\int_{0}^{t}\left\langle D_{y}v\left( s,Y,Y^{\prime }\right) ,\sigma \left( s,X^{\prime }\right) dM\left( s\right) \right\rangle , \end{equation*} and simplify the remaining terms in \eqref{Ito}. 
From \eqref{testlemma42}, \eqref{testlemma45} and the regularity of $u$, we have \begin{equation*} \int_{0}^{t}D_{s}v\left( s,Y,Y^{\prime }\right) ds\leq C\left( \lambda \right) \int_{0}^{t}\left( \varepsilon +\frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }\right) ds. \end{equation*} Similarly, following the proof of Theorem 5.1 in \cite{DupuisIshii1993}, we have \begin{eqnarray} &&\int_{0}^{t}\left\langle D_{x}v\left( s,Y,Y^{\prime }\right) ,b\left( s,X\right) \right\rangle ds+\int_{0}^{t}\left\langle D_{y}v\left( s,Y,Y^{\prime }\right) ,b\left( s,X^{\prime }\right) \right\rangle ds \label{driftterms} \\ &\leq &C\left( \lambda \right) \left( \varepsilon +\int_{0}^{t}\frac{ \left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }ds+\int_{0}^{t}\frac{ \left\vert X-X^{\prime }\right\vert ^{2}}{\varepsilon }ds\right) . \notag \end{eqnarray} A simple extension of Lemma 5.7 in \cite{DupuisIshii1993} to the time-dependent case shows that there exists a constant $K_{1}\left( \lambda \right) <\infty $ such that for all $t\in \left[ 0,T\right] $, $x,y\in \overline{\Omega }_{t}$, the second order derivatives of $v$ with respect to the spatial variables satisfy \begin{equation*} D^{2}v\left( t,x,y\right) \leq K_{1}\left( \lambda \right) \left( \frac{1}{ \varepsilon }\left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) +\left( \varepsilon +\frac{\left\vert x-y\right\vert ^{2}}{ \varepsilon }\right) \left( \begin{array}{cc} I & 0 \\ 0 & I \end{array} \right) \right) . 
\end{equation*} Moreover, it is an easy consequence of the Lipschitz continuity of $\sigma $ that there exists a constant $K_{2}\left( \lambda \right) <\infty $ such that for all $t\in \left[ 0,T\right] $, $x,y,\xi ,\omega \in \overline{ \Omega }_{t}$, \begin{equation*} \left( \begin{array}{c} \sigma \left( t,\xi \right) \\ \sigma \left( t,\omega \right) \end{array} \right) ^{T}D^{2}v\left( t,x,y\right) \left( \begin{array}{c} \sigma \left( t,\xi \right) \\ \sigma \left( t,\omega \right) \end{array} \right) \leq K_{2}\left( \lambda \right) \left( \varepsilon +\frac{1}{ \varepsilon }\left( \left\vert \xi -\omega \right\vert ^{2}+\left\vert x-y\right\vert ^{2}\right) \right) I. \end{equation*} Consequently, the last term in \eqref{Ito} may be simplified to \begin{eqnarray*} &&\int_{0}^{t}\text{tr}\left( \left( \begin{array}{c} \sigma \left( s,X\right) \\ \sigma \left( s,X^{\prime }\right) \end{array} \right) ^{T}D^{2}v\left( s,Y,Y^{\prime }\right) \left( \begin{array}{c} \sigma \left( s,X\right) \\ \sigma \left( s,X^{\prime }\right) \end{array} \right) d\left\langle M\right\rangle \left( s\right) \right) \\ &\leq &C\left( \lambda \right) \left( \varepsilon +\int_{0}^{t}\frac{ \left\vert X-X^{\prime }\right\vert ^{2}}{\varepsilon }ds+\int_{0}^{t}\frac{ \left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }ds\right) . 
\end{eqnarray*} Considering now the terms containing $\left\vert k\right\vert $ and $ \left\vert k^{\prime }\right\vert $, we see, following the proof of Theorem 5.1 in \cite{DupuisIshii1993}, that \begin{eqnarray*} &&\int_{0}^{t}\left\langle D_{x}v\left( s,Y,Y^{\prime }\right) ,\gamma \left( s\right) \right\rangle d\left\vert k\right\vert \left( s\right) +\int_{0}^{t}\left\langle D_{y}v\left( s,Y,Y^{\prime }\right) ,\gamma ^{\prime }\left( s\right) \right\rangle d\left\vert k^{\prime }\right\vert \left( s\right) \\ &\leq &C\int_{0}^{t}u\left( s,Y,Y^{\prime }\right) \frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }d\left\vert k\right\vert \left( s\right) +C\int_{0}^{t}u\left( s,Y,Y^{\prime }\right) \frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }d\left\vert k^{\prime }\right\vert \left( s\right) \\ &&-\lambda \int_{0}^{t}v\left( s,Y,Y^{\prime }\right) \left\langle D_{x}\alpha \left( s,Y\right) ,\gamma \left( s\right) \right\rangle d\left\vert k\right\vert \left( s\right) \\ &&-\lambda \int_{0}^{t}v\left( s,Y,Y^{\prime }\right) \left\langle D_{x}\alpha \left( s,Y^{\prime }\right) ,\gamma ^{\prime }\left( s\right) \right\rangle d\left\vert k^{\prime }\right\vert \left( s\right) . \end{eqnarray*} Moreover, \eqref{testlemma41} and \eqref{alfaprop} give, since $d\left\vert k\right\vert \left( s\right) $ is zero unless $Y\left( s\right) \in \partial \Omega _{s}$, \begin{equation*} -\lambda v\left( s,Y,Y^{\prime }\right) \left\langle D_{x}\alpha \left( s,Y\right) ,\gamma \left( s\right) \right\rangle \leq -\lambda \chi u\left( s,Y,Y^{\prime }\right) \frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{ \varepsilon }, \end{equation*} so, by putting $\lambda =C/\chi $ all integrals with respect to $\left\vert k\right\vert $ and $\left\vert k^{\prime }\right\vert $ vanish. 
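In more detail, the choice $\lambda =C/\chi $ makes the two bounds cancel exactly (a sketch; $C$ and $\chi $ are the constants appearing above): on the support of $d\left\vert k\right\vert \left( s\right) $ the integrand is bounded by
\begin{equation*}
C\,u\left( s,Y,Y^{\prime }\right) \frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }-\lambda \chi \,u\left( s,Y,Y^{\prime }\right) \frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }=\left( C-\lambda \chi \right) u\left( s,Y,Y^{\prime }\right) \frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon }=0,
\end{equation*}
and the same computation applies to the integrals with respect to $d\left\vert k^{\prime }\right\vert \left( s\right) $.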
Dropping the $\lambda $-dependence from the constants, \eqref{testlemma41} and \eqref{Ito} give \begin{eqnarray*} \frac{1}{C}\frac{\left\vert Y\left( t\right) -Y^{\prime }\left( t\right) \right\vert ^{2}}{\varepsilon } &\leq &v\left( t,Y\left( t\right) ,Y^{\prime }\left( t\right) \right) \leq v\left( 0,x,x^{\prime }\right) +\varepsilon +N\left( t\right) \\ &&+\int_{0}^{t}\frac{\left\vert Y-Y^{\prime }\right\vert ^{2}}{\varepsilon } ds+\int_{0}^{t}\frac{\left\vert X-X^{\prime }\right\vert ^{2}}{\varepsilon } ds. \end{eqnarray*} Now applying \eqref{testlemma42} to $v\left( 0,x,x^{\prime }\right) $, multiplying by $\varepsilon $, squaring, taking supremum and expectations on both sides, we obtain \begin{eqnarray*} E\left[ \sup_{0\leq s\leq t}\left\vert Y\left( s\right) -Y^{\prime }\left( s\right) \right\vert ^{4}\right] &\leq &C\left( \left\vert x-x^{\prime }\right\vert ^{4}+\varepsilon ^{4}+\varepsilon ^{2}E\left[ \sup_{0\leq s\leq t}\left( N\left( s\right) \right) ^{2}\right] \right. \\ &&\left. +\int_{0}^{t}E\left[ \left\vert X-X^{\prime }\right\vert ^{4}+\left\vert Y-Y^{\prime }\right\vert ^{4}\right] ds\right) . 
\end{eqnarray*} Then proceeding as in \eqref{driftterms}, the Doob-Kolmogorov inequality gives \begin{eqnarray*} E\left[ \sup_{0\leq s\leq t}\left( N\left( s\right) \right) ^{2}\right] &\leq &4E\left[ \left( N\left( t\right) \right) ^{2}\right] \\ &\leq &C\int_{0}^{t}\left( \varepsilon ^{2}+E\left[ \frac{\left\vert Y-Y^{\prime }\right\vert ^{4}}{\varepsilon ^{2}}+\frac{\left\vert X-X^{\prime }\right\vert ^{4}}{\varepsilon ^{2}}\right] \right) ds. \end{eqnarray*} Letting $\varepsilon $ tend to zero, we obtain \begin{equation*} E\left[ \sup_{0\leq s\leq t}\left\vert Y\left( s\right) -Y^{\prime }\left( s\right) \right\vert ^{4}\right] \leq C\left( \left\vert x-x^{\prime }\right\vert ^{4}+\int_{0}^{t}E\left[ \left( \left\vert X-X^{\prime }\right\vert ^{4}+\left\vert Y-Y^{\prime }\right\vert ^{4}\right) \right] ds\right) , \end{equation*} from which the desired inequality follows by a simple application of Gronwall's inequality. $\Box $ \noindent \textbf{Proof of Theorem \ref{main}.} Given Lemma \ref{contexist} and Lemma \ref{rsdetheorem}, the proof of Theorem \ref{main} follows exactly along the lines of the proof of Corollary 5.2 in \cite{DupuisIshii1993}, which in turn follows the same outline as Theorem 4.3 in \cite{LionsSznitman1984}. Note that the main difficulty is verifying the adaptedness property of the solutions to the reflected SDE. This property follows from an approximation of continuous $\mathcal{F}_t$-adapted semimartingales by bounded variation processes, for which one can show existence of unique bounded variation solutions to the Skorohod problem, and these bounded variation solutions will be $\mathcal{F}_t$-adapted. We omit further details. $\Box $ \setcounter{equation}{0} \setcounter{theorem}{0} \section{Fully nonlinear second-order parabolic PDEs\label{PDE}} In this section, we prove the results on partial differential equations. First, we recall the definition of viscosity solutions. Let $E\subset \mathbb{R}^{n+1}$ be arbitrary.
If $u:E\rightarrow \mathbb{R}$ and $\left(s,z\right)\in E$, then the parabolic superjet $\mathcal{P}_{E}^{2,+}u\left(s,z\right)$ consists of all triples $\left( a,p,X\right) \in \mathbb{R}\times \mathbb{R}^{n}\times \mathbb{S}^{n}$ such that \begin{align*} u\left(t,x\right)& \leq u\left(s,z\right)+a\left(t-s\right)+\langle p,x-z\rangle +\frac{1}{2}\langle X\left(x-z\right),x-z\rangle \\ & +o\left(|t-s|+|x-z|^{2}\right)\quad \text{as }E\ni \left(t,x\right)\rightarrow \left(s,z\right). \end{align*} The parabolic subjet is defined as $\mathcal{P}_{E}^{2,-}u\left(s,z\right)=-\mathcal{P}_{E}^{2,+}\left(-u\right)\left(s,z\right)$. The closures $\overline{\mathcal{P}}_{E}^{2,+}u\left(s,z\right)$ and $\overline{\mathcal{P}}_{E}^{2,-}u\left(s,z\right)$ are defined in analogy with (2.6) and (2.7) in \cite{CrandallIshiiLions1992}. A function $u\in USC(\widetilde{\Omega })$ is a \textit{viscosity subsolution} of \eqref{huvudekvationen} in $\Omega ^{\circ }$ if, for all $\left(a,p,A\right)\in \mathcal{P}_{\widetilde{\Omega }}^{2,+}u\left(t,x\right)$, it holds that \begin{equation*} a+F\left(t,x,u\left(t,x\right),p,A\right)\leq 0,\quad \text{for}\;\left(t,x\right)\in \Omega ^{\circ }. \end{equation*} If, in addition, for $\left(t,x\right)\in \partial \Omega $ it holds that \begin{equation} \min \{a+F\left(t,x,u\left(t,x\right),p,A\right),\;\langle p,\widetilde{\gamma }\left(t,x\right)\rangle +f\left(t,x,u\left(t,x\right)\right)\}\leq 0, \label{eq:BC_viscosity_sub} \end{equation} then $u$ is a viscosity subsolution of \eqref{huvudekvationen}-\eqref{randvillkor} in $\widetilde{\Omega }$. Similarly, a function $v\in LSC(\widetilde{\Omega })$ is a \textit{viscosity supersolution} of \eqref{huvudekvationen} in $\Omega ^{\circ }$ if, for all $\left(a,p,A\right)\in \mathcal{P}_{\widetilde{\Omega }}^{2,-}v\left(t,x\right)$, it holds that \begin{equation*} a+F\left(t,x,v\left(t,x\right),p,A\right)\geq 0,\quad \text{for}\;\left(t,x\right)\in \Omega ^{\circ }.
\end{equation*} If, in addition, for $\left(t,x\right)\in \partial \Omega $ it holds that \begin{equation} \max \{a+F\left(t,x,v\left(t,x\right),p,A\right),\;\langle p,\widetilde{\gamma }\left(t,x\right)\rangle +f\left(t,x,v\left(t,x\right)\right)\}\geq 0, \label{eq:BC_viscosity_sup} \end{equation} then $v$ is a viscosity supersolution of \eqref{huvudekvationen}- \eqref{randvillkor} in $\widetilde{\Omega }$. A function is a \textit{ viscosity solution} if it is both a viscosity subsolution and a viscosity supersolution. We remark that in the definition of viscosity solutions above, we may replace $\mathcal{P}_{\widetilde{\Omega }}^{2,+}u\left(t,x\right)$ and $ \mathcal{P}_{\widetilde{\Omega }}^{2,-}v\left(t,x\right)$ by $\overline{\mathcal{P}}_{ \widetilde{\Omega }}^{2,+}u\left(t,x\right)$ and $\overline{\mathcal{P}}_{\widetilde{ \Omega }}^{2,-}v\left(t,x\right)$, respectively. In the following, we often skip writing \textquotedblleft viscosity" before subsolutions, supersolutions and solutions. Note also that, given any set $E\subset \mathbb{R}^{n+1}$ and $ t\in \lbrack 0,T]$, we denote, in the following, the time sections of $E$ as $E_{t}=\{x:\left(t,x\right)\in E\}$. Next we give two lemmas. The first clarifies that the maximum principle for semicontinuous functions \cite{CrandallIshii1990}, \cite {CrandallIshiiLions1992}, holds true in time-dependent domains. \begin{lemma} \label{le:timdep_max} Suppose that $\mathcal{O}^{i}=\mathcal{\widehat{O}} ^{i}\cap \left( \left(0,T\right)\times \mathbb{R}^{n}\right) $ for $i=1,\dots ,k$ where $\mathcal{\widehat{O}}^{i}$ are locally compact subsets of $\mathbb{R}^{n+1}$ . 
Assume that $u_{i}\in USC(\mathcal{O}^{i})$ and let $\varphi :\left(t,x_{1},\dots ,x_{k}\right)\rightarrow \varphi \left(t,x_{1},\dots ,x_{k}\right)$ be defined on an open neighborhood of $\{\left(t,x\right):t\in \left(0,T\right)\;\text{and}\;x_{i}\in \mathcal{O}_{t}^{i}\;\text{for}\;i=1,\dots ,k\}$ and such that $\varphi $ is once continuously differentiable in $t$ and twice continuously differentiable in $\left(x_{1},\dots ,x_{k}\right)$. Suppose that $s\in \left(0,T\right)$, $z_{i}\in \mathcal{O}_{s}^{i}$ and \begin{equation*} w\left(t,x_{1},\dots ,x_{k}\right)\equiv u_{1}\left(t,x_{1}\right)+\dots +u_{k}\left(t,x_{k}\right)-\varphi \left(t,x_{1},\dots ,x_{k}\right)\leq w\left(s,z_{1},\dots ,z_{k}\right), \end{equation*} for $0<t<T$ and $x_{i}\in \mathcal{O}_{t}^{i}$. Assume, moreover, that there is an $r>0$ such that for every $M>0$ there is a $C$ such that, for $i=1,\dots ,k$, \begin{align} & b_{i}\leq C,\;\text{whenever }\left( b_{i},q_{i},X_{i}\right) \in \mathcal{P}_{\mathcal{O}^{i}}^{2,+}u_{i}\left(t,x_{i}\right)\text{ with }\left\Vert X_{i}\right\Vert \leq M\text{ and} \notag \label{eq:besvarlig_assumption} \\ & |x_{i}-z_{i}|+|t-s|+|u_{i}\left(t,x_{i}\right)-u_{i}\left(s,z_{i}\right)|+|q_{i}-D_{x_{i}}\varphi \left(s,z_{1},\dots ,z_{k}\right)|\leq r. \end{align} Then, for each $\varepsilon >0$ there exist $\left(b_{i},X_{i}\right)$ such that \begin{equation*} \left(b_{i},D_{x_{i}}\varphi \left(s,z_{1},\dots ,z_{k}\right),X_{i}\right)\in \overline{\mathcal{P}}_{\mathcal{O}^{i}}^{2,+}u_{i}\left(s,z_{i}\right),\quad \text{for}\;i=1,\dots ,k, \end{equation*} \begin{equation*} -\left( \frac{1}{\varepsilon }+||A||\right) I\leq \left( \begin{array}{ccc} X_{1} & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & X_{k} \end{array} \right) \leq A+\varepsilon A^{2}, \end{equation*} and \begin{equation*} b_{1}+\dots +b_{k}=D_{t}\varphi \left(s,z_{1},\dots ,z_{k}\right), \end{equation*} where $A=\left( D_{x}^{2}\varphi \right) \left(s,z_{1},\dots ,z_{k}\right)$.
\end{lemma} \noindent \textbf{Proof.} Following ideas from page 1008 in \cite{CrandallIshii1990} we let $K_{i}$ be compact neighborhoods of $\left(s,z_{i}\right)$ in $\mathcal{O}^{i}$ and define the extended functions $\widetilde{u}_{1},\dots ,\widetilde{u}_{k}$, $\widetilde{u}_{i}\in USC\left(\mathbb{R}^{n+1}\right)$ for $i=1,\dots ,k$, by \begin{equation*} \widetilde{u}_{i}\left(t,x\right)=\left\{ \begin{array}{rl} u_{i}\left(t,x\right), & \text{if}\quad \left(t,x\right)\in K_{i}, \\ -\infty , & \text{otherwise.} \end{array} \right. \end{equation*} From the definitions of sub- and superjets it follows, for $i=1,\dots ,k$, that \begin{equation} \mathcal{P}_{\mathbb{R}^{n+1}}^{2,+}\widetilde{u}_{i}\left(t,x\right)=\mathcal{P}_{\mathcal{O}^{i}}^{2,+}u_{i}\left(t,x\right), \label{eq:subjet_lika_1} \end{equation} for $\left(t,x\right)$ in the interior of $K_{i}$ relative to $\mathcal{O}^{i}$. Since $u_{i}\left(s,z_{i}\right)>-\infty $, the function $\widetilde{u}_{i}\left(t,x\right)$ cannot approach $u_{i}\left(s,z_{i}\right)$ unless $\left(t,x\right)\in K_{i}$, and it follows that \begin{equation} \overline{\mathcal{P}}_{\mathbb{R}^{n+1}}^{2,+}\widetilde{u}_{i}\left(t,x\right)=\overline{\mathcal{P}}_{\mathcal{O}^{i}}^{2,+}u_{i}\left(t,x\right). \label{eq:subjet_lika_2} \end{equation} Setting $\widetilde{w}\left(t,x_{1},\dots ,x_{k}\right)=\widetilde{u}_{1}\left(t,x_{1}\right)+\dots +\widetilde{u}_{k}\left(t,x_{k}\right)$ we see that $\left(s,z_{1},\dots ,z_{k}\right)$ is also a maximum point of the function $\left(\widetilde{w}-\varphi \right)\left(t,x_{1},\dots ,x_{k}\right)$. Moreover, we note that the proof of Lemma 8 in \cite{CrandallIshii1990} still works if (27) in \cite{CrandallIshii1990} is replaced by assumption \eqref{eq:besvarlig_assumption}. These facts, together with \eqref{eq:subjet_lika_1} and \eqref{eq:subjet_lika_2}, allow us to complete the proof of Lemma \ref{le:timdep_max} by using Theorem 7 in \cite{CrandallIshii1990}.
$\Box $ Before proving the next lemma, let us note that standard arguments imply that we can assume $\lambda >0$ in \eqref{ass_F_nondecreasing}. Indeed, if $ \lambda \leq 0$ then for $\bar{\lambda}<\lambda $ the functions $e^{\bar{ \lambda}t}u\left(t,x\right)$ and $e^{\bar{\lambda}t}v\left(t,x\right)$ are, respectively, sub- and supersolutions of \eqref{huvudekvationen}-\eqref{randvillkor} with $ F\left(t,x,r,p,X\right)$ and $f\left(t,x,r\right)$ replaced by \begin{equation} -\bar{\lambda}r+e^{\bar{\lambda}t}F\left(t,x,e^{-\bar{\lambda}t}r,e^{-\bar{\lambda }t}p,e^{-\bar{\lambda}t}X\right)\quad \text{and}\quad e^{\bar{\lambda}t}f\left(t,x,e^{- \bar{\lambda}t}r\right). \label{assume_lambda_positive} \end{equation} Hence, in the following proof we assume $\lambda >0$ in \eqref{ass_F_nondecreasing}. Next we prove the following version of the comparison principle. \begin{lemma} \label{maxrand} Let $\Omega ^{\circ }$ be a time-dependent domain satisfying \eqref{timesect}. Assume \eqref{ass_F_cont}-\eqref{ass_F_XY}. Let $u\in USC(\widetilde{\Omega })$ be a viscosity subsolution and $v\in LSC( \widetilde{\Omega })$ a viscosity supersolution of \eqref{huvudekvationen} in $\Omega ^{\circ }$. Then $\sup_{\widetilde{\Omega}}u-v\leq \sup_{\partial \Omega \cup \overline{\Omega }_{0}}\left(u-v\right)^{+}$. \end{lemma} \noindent \textbf{Proof.} We may assume, by replacing $T>0$ by a smaller number if necessary, that $u$ and $-v$ are bounded from above on $\widetilde{ \Omega}$. We can also assume that $\sup_{\widetilde{\Omega}}u-v$ is attained by using the well known fact that if $u$ is a subsolution of \eqref{huvudekvationen}, then so is \begin{equation*} u_{\beta }\left(t,x\right)=u\left(t,x\right)-\frac{\beta }{T-t}, \end{equation*} for all $\beta >0$. 
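For completeness, we sketch why $u_{\beta }$ is again a subsolution (a standard computation): if $\left(a,p,X\right)\in \mathcal{P}_{\widetilde{\Omega }}^{2,+}u_{\beta }\left(t,x\right)$, then
\begin{equation*}
\left( a+\frac{\beta }{\left( T-t\right) ^{2}},p,X\right) \in \mathcal{P}_{\widetilde{\Omega }}^{2,+}u\left(t,x\right),
\end{equation*}
so, since $u_{\beta }\leq u$ and $F$ is nondecreasing in $r$ by \eqref{ass_F_nondecreasing},
\begin{equation*}
a+F\left(t,x,u_{\beta }\left(t,x\right),p,X\right)\leq a+\frac{\beta }{\left( T-t\right) ^{2}}+F\left(t,x,u\left(t,x\right),p,X\right)\leq 0.
\end{equation*}
Moreover, $u_{\beta }\left(t,x\right)\rightarrow -\infty $ as $t\rightarrow T$, which is why the supremum of $u_{\beta }-v$ is attained.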
Assume that $\sup_{\widetilde{\Omega } }u-v=u\left(s,z\right)-v\left(s,z\right)>u\left(t,x\right)-v\left(t,x\right)$ for some $\left(s,z\right)\in \Omega ^{\circ }$ and for all $\left(t,x\right)\in \partial \Omega \cup \overline{\Omega }_{0}$. As in Section 5.B in \cite{CrandallIshiiLions1992}, we use the fact that if $u$ is a viscosity subsolution, then so is $\bar{u}=u-K$ for every constant $K>0$. Choose $K>0$ such that $\bar{u}\left(t,x\right)-v\left(t,x\right)\leq 0$ for all $\left(t,x\right)\in \partial \Omega \cup \overline{\Omega }_{0}$ and such that $\bar{u} \left(s,z\right)-v\left(s,z\right):=\delta >0$. Using Lemma \ref{le:timdep_max} in place of Theorem 8.3 in \cite{CrandallIshiiLions1992} and by observing that assumptions \eqref{ass_F_cont}-\eqref{ass_F_XY} imply (assuming $\lambda >0$ as is possible by \eqref{assume_lambda_positive}) the corresponding assumptions in \cite{CrandallIshiiLions1992}, we see that we can proceed as in the proof of Theorem 8.2 in \cite{CrandallIshiiLions1992} to complete the proof by deriving a contradiction. $\Box $ \noindent \textbf{Proof of Theorem \ref{comparison}. }In the following we may assume, by replacing $T>0$ by a smaller number if necessary, that $u$ and $-v$ in Theorem \ref{comparison} are bounded from above on $\widetilde{ \Omega }$. We will now produce approximations of $u$ and $v$ which allow us to deal only with the inequalities involving $F$ and not the boundary conditions. To construct these approximating functions, we note that Lemma \ref{testlemma5} applies with $\gamma $ replaced by $\widetilde{\gamma }$ as well. Thus, there exists a $\mathcal{C}^{1,2}$ function $\alpha $ defined on an open neighborhood of $\widetilde{\Omega }$ with the property that $\alpha \geq 0$ on $\widetilde{\Omega }$ and $\left\langle D_{x}\alpha \left(t,x\right), \widetilde{\gamma }\left(t,x\right)\right\rangle \geq 1$ for $x\in \partial \Omega _{t}$ , $t\in [0,T]$. 
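As a simple illustration of such a function $\alpha $ (a hypothetical special case, not needed in the sequel): if $\Omega _{t}=\left\{ x\in \mathbb{R}^{n}:x_{n}>g\left( t\right) \right\} $ is a moving half-space with $g$ smooth and $\widetilde{\gamma }\left(t,x\right)=e_{n}$, then $\alpha \left(t,x\right)=x_{n}-g\left( t\right) $ has the required properties, since
\begin{equation*}
\alpha \geq 0\quad \text{on }\widetilde{\Omega },\qquad \left\langle D_{x}\alpha \left(t,x\right),\widetilde{\gamma }\left(t,x\right)\right\rangle =\left\langle e_{n},e_{n}\right\rangle =1,\quad \text{for }x\in \partial \Omega _{t}.
\end{equation*}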
For $\beta _{1}>0$, $\beta _{2}>0$ and $\beta _{3}>0$ we define, for $\left(t,x\right)\in \widetilde{\Omega}$, \begin{align} u_{\beta _{1},\beta _{2},\beta _{3}}\left(t,x\right)& =u\left(t,x\right)-\beta _{1}\alpha \left(t,x\right)-\beta _{2}-\frac{\beta _{3}}{T-t}, \notag \label{approxdef} \\ v_{\beta _{1},\beta _{2}}\left(t,x\right)& =v\left(t,x\right)+\beta _{1}\alpha \left(t,x\right)+\beta _{2}. \end{align} Given $\beta _{3},\beta _{2}>0$ there is $\beta _{1}=\beta _{1}\left(\beta _{2}\right)\in \left(0,\beta _{2}\right)$ for which $u_{\beta _{1},\beta _{2},\beta _{3}}$ and $v_{\beta _{1},\beta _{2}}$ are sub- and supersolutions of \eqref{huvudekvationen}-\eqref{randvillkor}, with $f\left(t,x,r\right)$ replaced by $ f\left(t,x,r\right)+\beta _{1}$ and $f\left(t,x,r\right)-\beta _{1}$, respectively. Indeed, if $ \left(a,p,X\right)\in \mathcal{P}_{\widetilde{\Omega }}^{2,+}u_{\beta _{1},\beta _{2},\beta _{3}}\left(t,x\right)$, then \begin{equation} \left( a+\beta _{1}\alpha _{t}\left(t,x\right)+\frac{\beta _{3}}{\left(T-t\right)^{2}},p+\beta _{1}D\alpha \left(t,x\right),X+\beta _{1}D^{2}\alpha \left(t,x\right)\right) \in \mathcal{P}_{ \widetilde{\Omega }}^{2,+}u\left(t,x\right). \label{eq:punkt_i_subjet} \end{equation} Hence, if $u$ satisfies \eqref{randvillkor}, then $\langle p+\beta _{1}D\alpha \left(t,x\right),\widetilde{\gamma }\left(t,x\right)\rangle +f\left(t,x,u\left(t,x\right)\right)\leq 0$ and since $\langle D\alpha \left(t,x\right),\widetilde{\gamma }\left(t,x\right)\rangle \geq 1$, $ u_{\beta _{1},\beta _{2},\beta _{3}}\leq u$ and by \eqref{ass_f_nondecreasing} we obtain \begin{equation} \langle p,\widetilde{\gamma }\left(t,x\right)\rangle +f\left(t,x,u_{\beta _{1},\beta _{2},\beta _{3}}\right)+\beta _{1}\leq 0. 
\label{eq:RV_uppfylld_approx} \end{equation} Using \eqref{eq:punkt_i_subjet} we also see that if $u$ satisfies \eqref{huvudekvationen} then \begin{equation*} a+\beta _{1}\alpha _{t}\left(t,x\right)+\frac{\beta _{3}}{\left(T-t\right)^{2}}+F\left(t,x,u,p+\beta _{1}D\alpha \left(t,x\right),X+\beta _{1}D^{2}\alpha \left(t,x\right)\right)\leq 0. \end{equation*} Using \eqref{ass_F_nondecreasing} and \eqref{ass_F_boundary}, assuming also that the support of $\alpha $ lies within $U$, we have \begin{align} a+\beta _{1}\alpha _{t}\left(t,x\right)+F\left(t,x,u_{\beta _{1},\beta _{2},\beta _{3}},p,X\right)+\lambda \beta _{2}& \label{eq:EQ_uppfylld_approx} \\ -m_{2}\left( |\beta _{1}D\alpha \left(t,x\right)|+||\beta _{1}D^{2}\alpha \left(t,x\right)||\right) & \leq 0. \notag \end{align} From \eqref{eq:RV_uppfylld_approx} and \eqref{eq:EQ_uppfylld_approx} it follows that, given $\beta _{2},\beta _{3}>0$, there exists $\beta _{1}\in \left(0,\beta _{2}\right)$ such that $u_{\beta _{1},\beta _{2},\beta _{3}}$ is a subsolution of \eqref{huvudekvationen}-\eqref{randvillkor} with $f\left(t,x,u\right)$ replaced by $f\left(t,x,u\right)+\beta _{1}$. The fact that $v_{\beta _{1},\beta _{2}}$ is a supersolution follows by a similar calculation. To complete the proof of the comparison principle, it is sufficient to prove that \begin{equation*} \max_{\widetilde{\Omega }}\left(u_{\beta _{1},\beta _{2},\beta _{3}}-v_{\beta _{1},\beta _{2}}\right)\leq 0, \end{equation*} holds for all $\beta _{2}>0$ and $\beta _{3}>0$. Assume that \begin{equation*} \sigma =\max_{\widetilde{\Omega }}\left(u_{\beta _{1},\beta _{2},\beta _{3}}-v_{\beta _{1},\beta _{2}}\right)>0. \end{equation*} We will derive a contradiction for any $\beta _{3}$ if $\beta _{2}$ (and hence $\beta _{1}$) is small enough. To simplify notation, we write, in the following, $u,v$ in place of $u_{\beta _{1},\beta _{2},\beta _{3}},v_{\beta _{1},\beta _{2}}$.
Using Lemma \ref{maxrand}, the inequality $u\left(0,\cdot \right)\leq v\left(0,\cdot \right)$, the upper semicontinuity of $u-v$ and the boundedness from above of $u-v$, we conclude that for any $\beta _{3}>0$ \begin{equation} \sigma =\left(u-v\right)\left(s,z\right),\quad \text{for some }z\in \partial \Omega _{s}\text{ and }s\in \left( 0,T\right) . \label{sigma} \end{equation} Let $\widetilde{B}\left(\left(s,z\right),\delta \right)=\{\left(t,x\right):\left\vert \left(t,x\right)-\left(s,z\right)\right\vert \leq \delta \}$ and define \begin{equation*} E:=\widetilde{B}\left(\left(s,z\right),\delta \right)\cap \widetilde{\Omega }. \end{equation*} By Remark \ref{spaceremark}, there exists $\theta \in \left(0,1\right)$ such that \begin{equation} \left\langle x-y,\widetilde{\gamma }\left( t,x\right) \right\rangle \geq -\theta \left\vert x-y\right\vert ,\quad \text{for all }\left( t,x\right) \in E\setminus \Omega ^{\circ }\text{ and }\left( t,y\right) \in E. \label{make_use_of_cone} \end{equation} By decreasing $\delta $ if necessary, we may assume that \eqref{ass_F_boundary} holds in $E$. From now on, we restrict our attention to points in the set $E$.
By Lemma \ref{testlemma4} we obtain, for any $ \theta \in \left(0,1\right)$, a family $\left\{ w_{\varepsilon }\right\} _{\varepsilon >0}$ of functions $w_{\varepsilon }\in \mathcal{C}^{1,2}\left( \left[ 0,T \right] \times \mathbb{R} ^{n}\times \mathbb{R} ^{n}, \mathbb{R} \right) $ and positive constants $\chi ,C$ (independent of $\varepsilon $) such that \eqref{testlemma41}, \eqref{testlemma42}, \eqref{testlemma45}- \eqref{testlemma47} as well as \begin{equation} \left\langle D_{x}w_{\varepsilon }\left( t,x,y\right) ,\widetilde{\gamma } \left( t,x\right) \right\rangle \geq -C\frac{\left\vert x-y\right\vert ^{2}}{ \varepsilon },\quad \text{if\quad }\left\langle x-y,\widetilde{\gamma } \left( t,x\right) \right\rangle \geq -\theta \left\vert x-y\right\vert , \label{test3} \end{equation} \begin{equation} \left\langle D_{y}w_{\varepsilon }\left( t,x,y\right) ,\widetilde{\gamma } \left( t,y\right) \right\rangle \geq -C\frac{\left\vert x-y\right\vert ^{2}}{ \varepsilon },\quad \text{if\quad }\left\langle y-x,\widetilde{\gamma } \left( t,y\right) \right\rangle \geq -\theta \left\vert x-y\right\vert , \label{test4} \end{equation} hold. Note that \eqref{test3} and \eqref{test4} are direct analogues to \eqref{testlemma43} and \eqref{testlemma44} but with $\gamma $ replaced by $ \widetilde{\gamma }$. Let $\varepsilon >0$ be given and define \begin{equation*} \Phi \left(t,x,y\right)=u\left(t,x\right)-v\left(t,y\right)-\varphi \left(t,x,y\right), \end{equation*} where \begin{equation*} \varphi \left(t,x,y\right)=w_{\varepsilon }\left(t,x,y\right)+f\left(s,z,u\left(s,z\right)\right)\langle y-x,\widetilde{ \gamma }\left(s,z\right)\rangle +\beta _{1}|x-z|^{2}+\left(t-s\right)^{2}. \end{equation*} Let $\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right)$ be a maximum point of $\Phi $. 
From \eqref{testlemma41} and \eqref{testlemma42} we have \begin{align} \sigma & -C\varepsilon \leq \Phi \left(s,z,z\right)\leq \Phi \left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right)\leq u\left(t_{\varepsilon },x_{\varepsilon }\right)-v\left(t_{\varepsilon },y_{\varepsilon }\right)-\chi \frac{\left\vert x_{\varepsilon }-y_{\varepsilon }\right\vert ^{2}}{\varepsilon } \label{eq:maxet_gor_att_allt_konvergerar} \\ & -f\left(s,z,u\left(s,z\right)\right)\langle y_{\varepsilon }-x_{\varepsilon },\widetilde{\gamma } \left(s,z\right)\rangle -\beta _{1}|x_{\varepsilon }-z|^{2}-\left(t_{\varepsilon }-s\right)^{2}. \notag \end{align} From this we first see that \begin{equation*} |x_{\varepsilon }-y_{\varepsilon }|\rightarrow 0\qquad \text{as}\qquad \varepsilon \rightarrow 0. \end{equation*} Therefore, using the upper semi-continuity of $u-v$ and \eqref{eq:maxet_gor_att_allt_konvergerar} we also obtain \begin{align} \frac{|x_{\varepsilon }-y_{\varepsilon }|^{2}}{\varepsilon }& \rightarrow 0,\qquad x_{\varepsilon },y_{\varepsilon }\rightarrow z,\qquad t_{\varepsilon }\rightarrow s, \notag \label{as_ep_to_0} \\ u\left(t_{\varepsilon },x_{\varepsilon }\right)& \rightarrow u\left(s,z\right),\qquad v\left(t_{\varepsilon },y_{\varepsilon }\right)\rightarrow v\left(s,z\right), \end{align} as $\varepsilon \rightarrow 0$. 
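In more detail, estimate \eqref{eq:maxet_gor_att_allt_konvergerar} may be rearranged as (a sketch; $C$ denotes a constant that may change from line to line)
\begin{equation*}
\chi \frac{\left\vert x_{\varepsilon }-y_{\varepsilon }\right\vert ^{2}}{\varepsilon }+\beta _{1}|x_{\varepsilon }-z|^{2}+\left(t_{\varepsilon }-s\right)^{2}\leq u\left(t_{\varepsilon },x_{\varepsilon }\right)-v\left(t_{\varepsilon },y_{\varepsilon }\right)-\sigma +C\varepsilon +\left\vert f\left(s,z,u\left(s,z\right)\right)\right\vert \left\vert y_{\varepsilon }-x_{\varepsilon }\right\vert \left\vert \widetilde{\gamma }\left(s,z\right)\right\vert \leq C,
\end{equation*}
since $u$ and $-v$ are bounded from above and $x_{\varepsilon },y_{\varepsilon }$ range over the bounded set $E$; in particular $\left\vert x_{\varepsilon }-y_{\varepsilon }\right\vert ^{2}\leq C\varepsilon $, which gives the first convergence.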
In the following we assume $\varepsilon $ to be so small that $\left(t_{\varepsilon },x_{\varepsilon }\right)\in E$. We introduce the notation \begin{align*} \bar{p}& =D_{x}\varphi \left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right)=D_{x}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right)-f\left(s,z,u\left(s,z\right)\right)\widetilde{\gamma }\left(s,z\right)+2\beta _{1}\left( x_{\varepsilon }-z\right) , \\ \bar{q}& =D_{y}\varphi \left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right)=D_{y}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right)+f\left(s,z,u\left(s,z\right)\right)\widetilde{\gamma }\left(s,z\right), \end{align*} and observe that \begin{align} & \langle \bar{p},\widetilde{\gamma }\left(t_{\varepsilon },x_{\varepsilon }\right)\rangle +f\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right)\right) \notag \\ =& \langle D_{x}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right),\widetilde{\gamma }\left(t_{\varepsilon },x_{\varepsilon }\right)\rangle +f\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right)\right) \notag \\ & -f\left(s,z,u\left(s,z\right)\right)\langle \widetilde{\gamma }\left(s,z\right),\widetilde{\gamma }\left(t_{\varepsilon },x_{\varepsilon }\right)\rangle +2\beta _{1}\langle x_{\varepsilon }-z,\widetilde{\gamma }\left(t_{\varepsilon },x_{\varepsilon }\right)\rangle , \label{peq} \end{align} and \begin{eqnarray} &&-\langle \bar{q},\widetilde{\gamma }\left(t_{\varepsilon },y_{\varepsilon }\right)\rangle +f\left(t_{\varepsilon },y_{\varepsilon },v\left(t_{\varepsilon },y_{\varepsilon }\right)\right) \notag \\ &=&-\langle D_{y}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right),\widetilde{\gamma }\left(t_{\varepsilon },y_{\varepsilon }\right)\rangle +f\left(t_{\varepsilon },y_{\varepsilon },v\left(t_{\varepsilon },y_{\varepsilon }\right)\right) \notag
\\ &&-f\left(s,z,u\left(s,z\right)\right)\left\langle \widetilde{\gamma }\left(s,z\right) ,\widetilde{\gamma }\left(t_{\varepsilon },y_{\varepsilon }\right)\right\rangle. \label{qeq} \end{eqnarray} Using \eqref{smooth_gamma}, \eqref{f_kontinuerlig}, \eqref{ass_f_nondecreasing} and \eqref{as_ep_to_0}-\eqref{qeq} we see that if $\varepsilon $ is small enough, then \begin{align} & \langle D_{x}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right),\widetilde{\gamma }\left(t_{\varepsilon },x_{\varepsilon }\right)\rangle \geq -\frac{\beta _{1}}{2} \notag \label{boundary_cond_elimination_2} \\ & \implies \langle \bar{p},\widetilde{\gamma }\left(t_{\varepsilon },x_{\varepsilon }\right)\rangle +f\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right)\right)+\beta _{1}>0, \notag \\ & \langle D_{y}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right),\widetilde{\gamma }\left(t_{\varepsilon },y_{\varepsilon }\right)\rangle \geq -\frac{\beta _{1}}{2} \notag \\ & \implies -\langle \bar{q},\widetilde{\gamma }\left(t_{\varepsilon },y_{\varepsilon }\right)\rangle +f\left(t_{\varepsilon },y_{\varepsilon },v\left(t_{\varepsilon },y_{\varepsilon }\right)\right)-\beta _{1}<0.
\end{align} Moreover, from \eqref{make_use_of_cone}-\eqref{test4}, we also have \begin{align} & \langle D_{x}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right),\widetilde{\gamma }\left(t_{\varepsilon },x_{\varepsilon }\right)\rangle \geq -C\frac{|x_{\varepsilon }-y_{\varepsilon }|^{2}}{\varepsilon } ,\quad \text{if }x_{\varepsilon }\in \partial \Omega _{t_{\varepsilon }}, \notag \label{boundary_cond_elimination_3} \\ & \langle D_{y}w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right),\widetilde{\gamma }\left(t_{\varepsilon },y_{\varepsilon }\right)\rangle \geq -C\frac{|x_{\varepsilon }-y_{\varepsilon }|^{2}}{\varepsilon } ,\quad \text{if }y_{\varepsilon }\in \partial \Omega _{t_{\varepsilon }}. \end{align} Using \eqref{boundary_cond_elimination_2} and \eqref{boundary_cond_elimination_3}, it follows by the definition of viscosity solutions that if $\varepsilon $ is small enough, say $ 0<\varepsilon <\varepsilon _{\beta _{1}}$, then \begin{equation} a+F\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right), \bar{p},X\right)\leq 0\leq -b+F\left(t_{\varepsilon },y_{\varepsilon },v\left(t_{\varepsilon },y_{\varepsilon }\right),-\bar{q},-Y\right), \label{flyttauppskiten} \end{equation} whenever \begin{equation*} \left( a,\bar{p},X\right) \in \overline{\mathcal{P}}_{\widetilde{\Omega } }^{2,+}u\left(t_{\varepsilon },x_{\varepsilon }\right)\quad \text{and}\quad \left( -b,- \bar{q},-Y\right) \in \overline{\mathcal{P}}_{\widetilde{\Omega } }^{2,-}v\left(t_{\varepsilon },y_{\varepsilon }\right). \end{equation*} We next intend to use Lemma \ref{le:timdep_max} to show the existence of such matrices $X$, $Y$ and numbers $a,b$. Hence, we have to verify condition \eqref{eq:besvarlig_assumption}. 
To do so, we observe that \eqref{boundary_cond_elimination_2} holds true with $\bar{p}$ and $\bar{q}$ replaced by any $p$ and $q$ satisfying $|\bar{p}-p|\leq r$ and $|\bar{q}-q|\leq r$ if we choose $r=r\left(\varepsilon \right)$ small enough. It follows that also \eqref{flyttauppskiten} holds with these $p$ and $q$ and we can conclude \begin{equation*} a\leq -F\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right),p,X\right)\leq C\quad \text{and}\quad b\leq F\left(t_{\varepsilon },y_{\varepsilon },v\left(t_{\varepsilon },y_{\varepsilon }\right),-q,-Y\right)\leq C, \end{equation*} for some $C=C\left(\varepsilon \right)$ whenever $\left(a,p,X\right)$ and $\left( b,q,Y\right) $ are as in \eqref{eq:besvarlig_assumption}. Hence, condition \eqref{eq:besvarlig_assumption} holds and Lemma \ref{le:timdep_max} gives the existence of $X,Y\in \mathbb{S}^{n}$ and $a,b\in \mathbb{R}$ such that \begin{align} & -\left( \frac{1}{\varepsilon }+||A||\right) I\leq \left( \begin{array}{cc} X & 0 \\ 0 & Y \end{array} \right) \leq A+\varepsilon A^{2}, \notag \label{eq:result_from_CIL92} \\ & \left(a,\bar{p},X\right)\in \overline{\mathcal{P}}_{\widetilde{\Omega }}^{2,+}u\left(t_{\varepsilon },x_{\varepsilon }\right),\quad \left(-b,-\bar{q},-Y\right)\in \overline{\mathcal{P}}_{\widetilde{\Omega }}^{2,-}v\left(t_{\varepsilon },y_{\varepsilon }\right), \notag \\ & a+b=D_{t}\varphi \left( t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right) =D_{t}w_{\varepsilon }\left( t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right) +2\left(t_{\varepsilon }-s\right), \end{align} where $A=D_{x,y}^{2}\left( w_{\varepsilon }\left(t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right)+\beta _{1}|x_{\varepsilon }-z|^{2}\right) $.
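For the reader's convenience we record the elementary computation behind the last identity and the form of $A$: the term $f\left(s,z,u\left(s,z\right)\right)\langle y-x,\widetilde{\gamma }\left(s,z\right)\rangle $ in $\varphi $ is linear in $\left( x,y\right) $ and independent of $t$, so
\begin{equation*}
D_{t}\varphi \left( t,x,y\right) =D_{t}w_{\varepsilon }\left( t,x,y\right) +2\left( t-s\right) ,\qquad D_{x,y}^{2}\varphi \left( t,x,y\right) =D_{x,y}^{2}\left( w_{\varepsilon }\left( t,x,y\right) +\beta _{1}\left\vert x-z\right\vert ^{2}\right) .
\end{equation*}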
Using \eqref{ass_F_nondecreasing}, \eqref{testlemma45} and \eqref{flyttauppskiten} we obtain, by recalling that we can assume $\lambda >0$ in \eqref{ass_F_nondecreasing}, that \begin{eqnarray*} 0 &\geq &D_{t}w_{\varepsilon }\left( t_{\varepsilon },x_{\varepsilon },y_{\varepsilon }\right) +2\left(t_{\varepsilon }-s\right) \\ &&+F\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right),\bar{p},X\right)-F\left(t_{\varepsilon },y_{\varepsilon },v\left(t_{\varepsilon },y_{\varepsilon }\right),-\bar{q},-Y\right) \\ &\geq &-C\frac{|x_{\varepsilon }-y_{\varepsilon }|^{2}}{\varepsilon }+2\left(t_{\varepsilon }-s\right)+\lambda \left(u\left(t_{\varepsilon },x_{\varepsilon }\right)-v\left(t_{\varepsilon },y_{\varepsilon }\right)\right) \\ &&+F\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right),\bar{p},X\right)-F\left(t_{\varepsilon },y_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right),-\bar{q},-Y\right). \end{eqnarray*} Next, assumption \eqref{ass_F_boundary} gives \begin{eqnarray} 0 &\geq &-C\bar{s}+2\left(t_{\varepsilon }-s\right)+\lambda \left(u\left(t_{\varepsilon },x_{\varepsilon }\right)-v\left(t_{\varepsilon },y_{\varepsilon }\right)\right) \notag \\ &&+F\left(t_{\varepsilon },x_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right),-\bar{q},X-C\bar{s}I\right)-F\left(t_{\varepsilon },y_{\varepsilon },u\left(t_{\varepsilon },x_{\varepsilon }\right),-\bar{q},-Y+C\bar{s}I\right) \notag \\ &&-m_{2}\left(|\bar{p}+\bar{q}|+C\bar{s}\right)-m_{2}\left(C\bar{s}\right), \label{sista_med_extrasteg} \end{eqnarray} where we use the notation $\bar{s}=|x_{\varepsilon }-y_{\varepsilon }|^{2}/\varepsilon $. Note that since the eigenvalues of $\varepsilon A^{2}$ are given by $\varepsilon \mu ^{2}$, where $\mu $ is an eigenvalue of $A$, and since $\left\vert \mu \right\vert \leq \left\Vert A\right\Vert \leq C/\varepsilon $, we have $A+\varepsilon A^{2}\leq CA$.
Hence, by \eqref{testlemma47} we obtain \begin{equation*} A+\varepsilon A^{2}\leq \frac{C}{\varepsilon }\left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) +C\bar{s}I_{2n}, \end{equation*} and since $||A||\leq C/\varepsilon $ for some large $C$, we also conclude that \eqref{eq:result_from_CIL92} implies \begin{equation*} -\frac{C}{\varepsilon }I_{2n}\leq \left( \begin{array}{cc} X-C\bar{s}I & 0 \\ 0 & Y-C\bar{s}I \end{array} \right) \leq \frac{C}{\varepsilon }\left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) . \end{equation*} Using the above inequality, assumption \eqref{ass_F_XY}, \eqref{sista_med_extrasteg}, the definition of $\bar{q}$ and \eqref{testlemma46} we have \begin{eqnarray*} 0 &\geq &-C\bar{s}+2\left(t_{\varepsilon }-s\right)+\lambda \left(u\left(t_{\varepsilon },x_{\varepsilon }\right)-v\left(t_{\varepsilon },y_{\varepsilon }\right)\right) \\ &&-m_{1}\left(C|x_{\varepsilon }-y_{\varepsilon }|+2C\bar{s}\right)-m_{2}\left(|\bar{p}+\bar{ q}|+C\bar{s}\right)-m_{2}\left(C\bar{s}\right), \end{eqnarray*} when $0<\varepsilon <\varepsilon _{\beta _{1}}$ and $u\left(t_{\varepsilon },x_{\varepsilon }\right)\geq v\left(t_{\varepsilon },y_{\varepsilon }\right)$. Sending first $\varepsilon $ and then $\beta _{2}$ to zero (the latter implies $\beta _{1}\rightarrow 0$) and using \eqref{testlemma46} we obtain a contradiction. This completes the proof of the comparison principle in Theorem \ref {comparison}. $\Box $ Using the same methodology as in the proof of Theorem \ref{comparison}, we are now able to prove the comparison principle for mixed boundary conditions stated in Corollary \ref{maxrand_partial}. This result will be an important ingredient in the proof of Theorem \ref{existence}. \noindent \textbf{Proof of Corollary \ref{maxrand_partial}.} If $u$ is a viscosity subsolution, then so is $u-K$ for all $K>0$. 
It thus suffices to prove that if $u\leq v$ on $\left(\partial \Omega \setminus G\right)\cup \overline{\Omega }_{0}$, then $u\leq v$ in $\widetilde{\Omega }$. If $G=\partial \Omega $, then this implication and its proof are identical to those of Theorem \ref{comparison}. If $G\subset \partial \Omega $ is arbitrary, then we know by assumption that $u\leq v$ on $\partial \Omega \setminus G$ and so the point $\left(s,z\right)$ defined in \eqref{sigma} must belong to the set $G$ where the boundary condition is satisfied. Hence, we can follow the proof of Theorem \ref{comparison} and conclude that $u\leq v$ in $\widetilde{\Omega }$. $\Box $ \noindent \textbf{Proof of Theorem \ref{existence}. }We will prove existence using Perron's method. In particular, we show that the supremum of all subsolutions to the initial value problem given by \eqref{initial_value_problem} is indeed a solution to the same problem. To ensure that the supremum is taken over a nonempty set, we need to find at least one subsolution to the problem. We also need to know that the supremum is finite. This is obtained by producing a supersolution, which, due to the comparison principle, provides an upper bound for the supremum. To find the supersolution, let, for some constants $A$ and $B$ to be chosen later, \begin{equation*} \widehat{v}=A\alpha \left(t,x\right)+B,\quad \text{for}\;\left(t,x\right) \in \widetilde{\Omega}, \end{equation*} where $\alpha \left(t,x\right)$ is the function guaranteed by Lemma \ref{testlemma5}. By \eqref{f_kontinuerlig}, \eqref{ass_f_nondecreasing} and the boundedness of $\Omega ^{\circ }$, we can find $A>0$ such that \begin{equation*} \langle D\widehat{v}\left(t,x\right),\widetilde{\gamma }\left(t,x\right)\rangle +f\left(t,x,\widehat{v}\left(t,x\right)\right)\geq A+f\left(t,x,0\right)\geq 0, \end{equation*} for $\left(t,x\right)\in \partial \Omega $.
Moreover, since the support of $\alpha $ lies in $U$, we have, with $\lambda $ and $m_{2}$ defined in \eqref{ass_F_nondecreasing} and \eqref{ass_F_boundary}, \begin{eqnarray*} &&D_{t}\widehat{v}\left(t,x\right)+F\left(t,x,\widehat{v}\left(t,x\right),D\widehat{v}\left(t,x\right),D^{2}\widehat{v}\left(t,x\right)\right) \\ &\geq &-A\sup_{U}\{|D_{t}\alpha \left(t,x\right)|\}+B\lambda +F\left(t,x,0,0,0\right) \\ &&-\sup_{U}m_{2}\left( A\left( |D\alpha \left(t,x\right)|+||D^{2}\alpha \left(t,x\right)||\right) \right) . \end{eqnarray*} By \eqref{ass_F_cont}, the boundedness of $\Omega ^{\circ }$ and by recalling that we can assume $\lambda >0$, we see that taking $B$ large enough, $\widehat{v}$ is a classical supersolution of \eqref{initial_value_problem}. Hence, using \eqref{F_fundamental} and Proposition 7.2 in \cite{CrandallIshiiLions1992}, $\widehat{v}$ is also a viscosity supersolution. Next, we observe that $\check{u}=-\widehat{v}$ is a viscosity subsolution to the problem given by \eqref{initial_value_problem}. We now apply Perron's method by defining our solution candidate as \begin{equation*} \widetilde{w}:=\sup \{w\left(t,x\right):\text{$w\in USC(\widetilde{\Omega })$ is a viscosity subsolution of \eqref{initial_value_problem}}\}. \end{equation*} In the following we let $u^{\ast }$ and $u_{\ast }$ denote the upper and lower semicontinuous envelopes of a function $u$, respectively. By the comparison principle and by construction we obtain \begin{equation} \check{u}_{\ast }\leq \widetilde{w}_{\ast }\leq \widetilde{w}^{\ast }\leq \widehat{v}^{\ast }\quad \text{on}\;\widetilde{\Omega }.
\label{eq:by_construction} \end{equation} Let us assume for the moment that $\widetilde{w}^{\ast }$ satisfies the initial condition of being a subsolution and that $\widetilde{w}_{\ast }$ satisfies the initial condition of being a supersolution, that is \begin{equation} \widetilde{w}^{\ast }\left(0,x\right)\leq g\left(x\right)\leq \widetilde{w}_{\ast }\left(0,x\right),\quad \text{for all }x\in \overline{\Omega }_{0}. \label{eq:initial_assumed} \end{equation} We can then proceed as in \cite{CrandallIshiiLions1992} (see also \cite {Barles1993} and \cite{Ishii1987}) to show that $\widetilde{w}^{\ast }$ is a viscosity subsolution and $\widetilde{w}_{\ast }$ is a viscosity supersolution of the initial value problem in \eqref{initial_value_problem}. Using the comparison principle again, we then have $\widetilde{w}_{\ast }\geq \widetilde{w}^{\ast }$ and so by \eqref{eq:by_construction} $ \widetilde{w}_{\ast }=\widetilde{w}^{\ast }$ is the requested viscosity solution. To complete the proof of Theorem \ref{existence}, it hence suffices to prove \eqref{eq:initial_assumed}. This will be achieved by constructing families of explicit viscosity sub- and supersolutions. We first show that the subsolution candidate $\widetilde{w}^{\ast }$ satisfies the initial conditions for all $x\in \Omega _{0}$. To this end, we define, for arbitrary $z\in \Omega _{0}$ and $\varepsilon >0$, the barrier function \begin{equation*} V_{z,\varepsilon }\left(t,x\right)=g\left(z\right)+\varepsilon +B|x-z|^{2}+Ct,\quad \text{for} \;\left(t,x\right)\in \left[ 0,T\right] \times \mathbb{R}^{n}, \end{equation*} where $B$ and $C$ are constants, which may depend on $z$ and $\varepsilon $, to be chosen later. We first observe that, by continuity of $g$ and boundedness of $\Omega _{0}$, we can, for any $\varepsilon >0$, choose $B$ so large that $V_{z,\varepsilon }\left(0,x\right)\geq g\left(x\right)$, for all $x\in \overline{ \Omega }_{0}$. 
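For the reader's convenience we sketch why such a $B$ exists; this elementary estimate is not part of the original argument and uses only the uniform continuity of $g$ on the compact set $\overline{\Omega }_{0}$. Pick $\delta >0$ such that $|g\left(x\right)-g\left(z\right)|\leq \varepsilon $ whenever $|x-z|\leq \delta $, and let $B\geq 2\delta ^{-2}\sup_{\overline{\Omega }_{0}}|g|$. Then \begin{equation*} V_{z,\varepsilon }\left(0,x\right)-g\left(x\right)\geq \left\{ \begin{array}{ll} \varepsilon -|g\left(x\right)-g\left(z\right)|\geq 0, & |x-z|\leq \delta , \\ B\delta ^{2}-2\sup_{\overline{\Omega }_{0}}|g|\geq 0, & |x-z|>\delta . \end{array} \right. \end{equation*}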
Moreover, since $\widetilde{w}$ is bounded on $\overline{\Omega }$, we conclude, by increasing $B$ and $C$ if necessary, that we also have \begin{equation*} V_{z,\varepsilon }\left(t,x\right)\geq \widetilde{w}\left(t,x\right),\quad \text{for}\;\left(t,x\right)\in \partial \Omega \cup \overline{\Omega }_{0}. \end{equation*} A computation shows that, for $z$, $\varepsilon $, $B$ given, we can choose the constant $C$ so large that $V_{z,\varepsilon }$ is a classical supersolution of \eqref{huvudekvationen} in $[0,\infty )\times \mathbb{R}^{n}$. Hence, by \eqref{F_fundamental}, $V_{z,\varepsilon }$ is also a continuous viscosity supersolution of \eqref{huvudekvationen} in $\Omega ^{\circ }$. By the maximum principle in Lemma \ref{maxrand} applied to $V_{z,\varepsilon }$ and each component in the definition of $\widetilde{w}$, we obtain \begin{equation} V_{z,\varepsilon }\left(t,x\right)\geq \widetilde{w}\left(t,x\right),\quad \text{for}\;\left(t,x\right)\in \widetilde{\Omega }. \label{eq:first_barrier_downpush_inside} \end{equation} It follows that $\widetilde{w}^{\ast }\leq V_{z,\varepsilon }^{\ast }=V_{z,\varepsilon }$ in this set and hence the initial condition in $\Omega _{0}$ follows since for any $x\in \Omega _{0}$ \begin{equation} \widetilde{w}^{\ast }\left(0,x\right)\leq \inf_{\varepsilon ,z}V_{z,\varepsilon }\left(0,x\right)=g\left(x\right). \label{eq:first_barrier_downpush_inside_final} \end{equation} To prove that the supersolution candidate $\widetilde{w}_{\ast }$ satisfies the initial condition in $\Omega _{0}$, we proceed similarly by studying a family of subsolutions of the form \begin{equation*} U_{z,\varepsilon }\left(t,x\right)=g\left(z\right)-B|x-z|^{2}-\varepsilon -Ct. \end{equation*} We next prove that $\widetilde{w}^{\ast }$ satisfies the boundary conditions for each $x\in \partial \Omega _{0}$. In this case the barriers above will not work as we cannot ensure that they exceed $\widetilde{w}^{\ast }$ on $\partial \Omega $.
Instead, we will construct barriers that are sub- and supersolutions only locally, near the boundary, during a short time interval. These local barriers are useful due to the maximum principle for mixed boundary conditions proved in Corollary \ref{maxrand_partial}. To construct the local barriers, fix $\widehat{z}\in \partial \Omega _{0}$ and let $z\left(t\right)$ be the H\"{o}lder continuous function \begin{equation*} z\left(t\right)=\widehat{z}-K\widetilde{\gamma }\left(0,\widehat{z}\right)t^{\widehat{\alpha }}, \end{equation*} where $\widehat{\alpha }$ is the H\"{o}lder exponent from \eqref{tempholder} and $K$ is a constant depending on the H\"{o}lder constant and the shape of the exterior cones in \eqref{boundarylip}. It follows that $z\left(t\right)$ stays inside of $\Omega $ for a short time and that $z\left(0\right)=\widehat{z}$. Consider, for $\varepsilon >0$, the barrier function \begin{equation*} \widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)=g\left(\widehat{z}\right)+A\left( \alpha \left(t,x\right)-\alpha \left(0,\widehat{z}\right)\right) +e^{\left(\widehat{C}/\chi \right)\alpha \left(t,x\right)}w_{\varepsilon }\left(t,x,z\left(t\right)\right)+B+Ct^{\widehat{\alpha }}, \end{equation*} whenever $\left(t,x\right)\in \lbrack 0,T]\times \mathbb{R}^{n}$, where $\widehat{C}$ and $\chi $ are the constants from Lemma \ref{testlemma4} and $A,B$ and $C$ are constants to be chosen later, possibly depending on $\widehat{z}$ and $ \varepsilon $. We first show that for any choice of $A$, we can find $B$ such that \begin{equation} g\left(x\right)\leq \widetilde{V}_{\varepsilon ,\widehat{z}}\left(0,x\right),\quad \text{for all } x\in \overline{\Omega }_{0}\text{\quad and\quad }\inf_{\varepsilon } \widetilde{V}_{\varepsilon ,\widehat{z}}\left(0,\widehat{z}\right)=g\left(\widehat{z}\right). 
\label{eq:sup_barrier_above_g} \end{equation} Indeed, to prove the left inequality in \eqref{eq:sup_barrier_above_g}, observe that by \eqref{testlemma41} we have $\chi \left\vert x-\widehat{z} \right\vert ^{2}/\varepsilon \leq w_{\varepsilon }\left( 0,x,\widehat{z} \right) $. Moreover, by the continuity of $g\left( \cdot \right) -A\alpha \left( 0,\cdot \right) $ in $\overline{\Omega }_{0}$, we can find $B$, depending on $\varepsilon $ and $A$, so that \begin{equation*} g\left(x\right)-g\left(\widehat{z}\right)-A\left( \alpha \left(0,x\right)-\alpha \left(0,\widehat{z}\right)\right) \leq B+\chi \frac{\left\vert x-\widehat{z}\right\vert ^{2}}{\varepsilon }. \end{equation*} This proves the left inequality in \eqref{eq:sup_barrier_above_g}. Finally, it is no restriction to assume that $B\rightarrow 0$ as $\varepsilon \rightarrow 0$, and this implies the right inequality in \eqref{eq:sup_barrier_above_g}. We next show that $\widetilde{V}_{\varepsilon ,\widehat{z}}$ satisfies the boundary condition in a small neighborhood of $\widehat{z}$ in ${\partial \Omega }$. To do so, let $E_{\widehat{z}}=\left(0,\kappa \right)\times B\left(\widehat{z} ,\rho \right)$ for some $\kappa ,\rho >0$ to be chosen. We intend to find $\kappa $ , $\rho $, $A$ and $C$ such that \begin{equation} \langle D_{x}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right),\widetilde{\gamma }\left(t,x\right)\rangle +f\left(t,x,\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\right)\geq 0,\quad \text{for}\;\left(t,x\right)\in E_{\widehat{z}}\cap \partial \Omega . \label{eq:sup_barrier_RV} \end{equation} First, observe that $\alpha $ is differentiable in time on $\overline{\Omega }$. Therefore, by taking $C$ large enough and by using \eqref{eq:sup_barrier_above_g} we ensure that \begin{equation*} \widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\geq g\left(\widehat{z}\right),\quad \text{ for}\;\left(t,x\right)\in \overline{\Omega }. 
\end{equation*} In general, the choice of $C$ will depend on $A$, but it is evident from the next inequality that this will not give rise to circular reasoning. By \eqref{ass_f_nondecreasing} and the boundedness of $\overline{\Omega }$, we can choose $A$ so that \begin{equation*} f\left(t,x,\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\right)\geq f\left(t,x,g\left(\widehat{z} \right)\right)\geq -A,\quad \text{for}\;\left(t,x\right)\in \overline{\Omega }. \end{equation*} Thus, the boundary condition in \eqref{eq:sup_barrier_RV} will follow if we can prove \begin{equation} \langle D_{x}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right),\widetilde{\gamma }\left(t,x\right)\rangle \geq A,\quad \text{for}\;\left(t,x\right)\in E_{\widehat{z}}\cap \partial \Omega . \label{eq:RV_andra_halvan} \end{equation} To this end, choose $\rho $ and $\kappa $ so small that \begin{equation} \left\langle x-z\left(t\right),\widetilde{\gamma }\left( t,x\right) \right\rangle \geq -\theta \left\vert x-z\left(t\right)\right\vert \quad \text{whenever}\;x\in B\left( \widehat{z},\rho \right) \cap \partial \Omega _{t},\;t\in \left[ 0,\kappa \right] . \end{equation} Inequality \eqref{test3} then holds with $y=z\left( t\right) $ for all $ \left(t,x\right)\in E_{\widehat{z}}\cap \partial \Omega $. 
Together with the properties of $\alpha $, this gives \begin{eqnarray*} &&\langle D_{x}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right),\widetilde{ \gamma }\left(t,x\right)\rangle \\ &=&A\langle D_{x}\alpha \left(t,x\right),\widetilde{\gamma }\left(t,x\right)\rangle +e^{\left(\widehat{C }/\chi \right)\alpha \left(t,x\right)} \\ &&\cdot \left\langle D_{x}w_{\varepsilon }\left( t,x,z\left(t\right)\right) +w_{\varepsilon }\left(t,x,z\left(t\right)\right)\frac{\widehat{C}}{\chi }D_{x}\alpha \left(t,x\right), \widetilde{\gamma }\left(t,x\right)\right\rangle \\ &\geq &A-\widehat{C}\frac{|x-z\left(t\right)|^{2}}{\varepsilon }+\chi \frac{|x-z\left(t\right)|^{2} }{\varepsilon }\frac{\widehat{C}}{\chi }=A,\quad \text{for}\;\left(t,x\right)\in E_{ \widehat{z}}\cap \partial \Omega . \end{eqnarray*} This proves \eqref{eq:RV_andra_halvan} and hence the boundary condition \eqref{eq:sup_barrier_RV} follows. We now show that for $C$ large enough, $\widetilde{V}_{\varepsilon ,\widehat{ z}}$ is a supersolution to \eqref{huvudekvationen}, that is \begin{equation} D_{t}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)+F\left(t,x,\widetilde{V} _{\varepsilon ,\widehat{z}}\left(t,x\right),D_{x}\widetilde{V}_{\varepsilon ,\widehat{z} }\left(t,x\right),D_{x}^{2}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\right)\geq 0,\quad \text{for }\left(t,x\right)\in \Omega ^{0}. 
\label{eq:ekvationen_for_den_sista_barriaren} \end{equation} With $D_{s}$ and $D_{\eta }$ denoting differentiation with respect to the first and third arguments of $w_{\varepsilon }$, respectively, we have \begin{eqnarray} D_{t}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right) &=&AD_{t}\alpha \left(t,x\right)+e^{\left( \widehat{C}/\chi \right)\alpha \left(t,x\right)}\frac{\widehat{C}}{\chi }D_{t}\alpha \left( t,x\right) w_{\varepsilon }\left( t,x,z\left(t\right)\right) +e^{\left(\widehat{C}/\chi \right)\alpha \left(t,x\right)} \notag \\ &&\cdot \left( D_{s}w_{\varepsilon }\left( t,x,z\left(t\right)\right) -2K\widehat{\alpha }\left\langle D_{\eta }w_{\varepsilon }\left(t,x,z\left(t\right)\right),\widetilde{\gamma }\left(0,\widehat{z}\right)\right\rangle t^{\widehat{\alpha }-1}\right) \notag \\ &&+C\widehat{\alpha }t^{\widehat{\alpha }-1}. \label{eq:timederiv} \end{eqnarray} Moreover, by \eqref{ass_F_nondecreasing} with $\lambda =0$ and by \eqref{ass_F_boundary} we have \begin{eqnarray} &&F\left(t,x,\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right),D_{x}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right),D_{x}^{2}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\right) \notag \\ &\geq &F\left( t,x,g\left(\widehat{z}\right),0,0\right) -\sup_{\Omega }m_{2}\left( |D_{x}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)|+||D_{x}^{2}\widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)||\right) . \label{spacederiv} \end{eqnarray} By \eqref{testlemma45}-\eqref{testlemma47}, \eqref{eq:timederiv} and \eqref{spacederiv}, we can find $C$ so that \eqref{eq:ekvationen_for_den_sista_barriaren} is satisfied. Hence, using \eqref{F_fundamental} and Proposition 7.2 in \cite{CrandallIshiiLions1992}, $\widetilde{V}_{\varepsilon ,\widehat{z}}$ is a viscosity supersolution in $\Omega $ which satisfies the boundary condition \eqref{randvillkor} on $E_{\widehat{z}}\cap \partial \Omega $ in the viscosity sense. We now perform the localized comparison.
From the construction of $\widetilde{w}$, it is clear that $\widetilde{w}\left(0,x\right)\leq g\left(x\right)$, for all $x\in \overline{\Omega }_{0}$. Combined with the left inequality in \eqref{eq:sup_barrier_above_g}, this yields \begin{equation} \widetilde{V}_{\varepsilon ,\widehat{z}}\left(0,x\right)\geq \widetilde{w}\left(0,x\right),\quad \text{for }x\in \overline{\Omega }_{0}. \label{eq:bottom_comparison} \end{equation} Moreover, for some constant $K$ depending on $g$, $\alpha $, $\widehat{z}$, $A$, $\kappa $ and $\rho $, we have \begin{equation*} \widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\geq -K+\chi \frac{\left\vert x-z\left( t\right) \right\vert ^{2}}{\varepsilon }+B,\quad \text{for}\;\left( t,x\right) \in \left( \partial E_{\widehat{z}}\setminus \partial \Omega \right) \cap \left( \left[ 0,\kappa \right) \times \mathbb{R}^{n}\right) . \end{equation*} Since $\widetilde{w}$ is bounded, we can conclude, by increasing $B$ if necessary, that \begin{equation} \widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\geq \widetilde{w}\left(t,x\right),\quad \text{for}\;\left(t,x\right)\in \left(\partial E_{\widehat{z}}\setminus \partial \Omega \right)\cap \left( \lbrack 0,\kappa )\times \mathbb{R}^{n}\right) . \label{supbarrierdominaterest} \end{equation} Now, let $\kappa $ be so small that for some $\widetilde{\varepsilon }>0$, it holds that \begin{equation} \left\vert x-z\left( t\right) \right\vert >\widetilde{\varepsilon }>0\quad \text{whenever}\;\left( t,x\right) \in \left( \partial E_{\widehat{z}}\setminus \partial \Omega \right) \cap \left( \left[ 0,\kappa \right) \times \mathbb{R}^{n}\right) . \label{eq:extra_sista_z(t)_egenskap} \end{equation} This choice is possible by the definition of $z\left(t\right)$ and by the properties of the domain. Inequality \eqref{eq:extra_sista_z(t)_egenskap} implies that it is no restriction to assume that $B\rightarrow 0$ as $\varepsilon \rightarrow 0$, which is needed for the right identity in \eqref{eq:sup_barrier_above_g} to remain valid.
By means of \eqref{eq:sup_barrier_RV}, \eqref{eq:bottom_comparison} and \eqref{supbarrierdominaterest}, we can use Corollary \ref{maxrand_partial} to make comparison in $E_{\widehat{z}}\cap \overline{\Omega }$ of the supersolution $\widetilde{V}_{\varepsilon , \widehat{z}}$ with each subsolution in the definition of $\widetilde{w}$. Hence \begin{equation*} \widetilde{V}_{\varepsilon ,\widehat{z}}\left(t,x\right)\geq \widetilde{w}\left(t,x\right),\quad \text{for}\;\left(t,x\right)\in \overline{E}_{\widehat{z}}\cap \overline{\Omega }, \end{equation*} and, as a consequence, $\widetilde{V}_{\varepsilon ,\widehat{z}}=\widetilde{V }_{\varepsilon ,\widehat{z}}^{\ast }\geq \widetilde{w}^{\ast }$ in $ \overline{E}_{\widehat{z}}\cap \overline{\Omega }$. Thus, for any $x\in \partial \Omega _{0}$, \begin{equation*} \widetilde{w}^{\ast }\left(0,x\right)\leq \inf_{\varepsilon ,\widehat{z}}\widetilde{V} _{\varepsilon ,\widehat{z}}\left(0,x\right)=g\left(x\right). \end{equation*} To prove that $\widetilde{w}_{\ast }$ satisfies the initial condition on $ \partial \Omega _{0}$, we proceed similarly by constructing a family of subsolutions of the form \begin{equation*} \tilde{U}_{\varepsilon ,\widehat{z}}\left(t,x\right)=g\left(\widehat{z}\right)-A\left( \alpha \left(t,x\right)-\alpha \left(0,\widehat{z}\right)\right) -e^{\left(\widehat{C}/\chi \right)\alpha \left(t,x\right)}w_{\varepsilon }\left(t,x,z\left(t\right)\right)-B-Ct^{\widehat{\alpha }}. \end{equation*} This completes the proof of Theorem \ref{existence}. $ \Box $\vspace{ 0.2cm} \end{document}
\begin{document} \title{Thermal state truncation by using a quantum scissors device} \author{Hong-xia Zhao$^{1}$, Xue-xiang Xu$^{2,\dag }$, Hong-chun Yuan$^{3}$} \affiliation{$^{1}$Information Engineering College, Jiangxi University of Technology, Nanchang 330098, China \\ $^{2}$Center for Quantum Science and Technology, Jiangxi Normal University, Nanchang 330022, China\\ $^{3}$College of Electrical and Optoelectronic Engineering, Changzhou Institute of Technology, Changzhou 213002, China\\ $^{\dag }$Corresponding author: [email protected] } \begin{abstract} A non-Gaussian state, namely a mixture of the vacuum and single-photon states, can be generated by truncating a thermal state in the quantum scissors device of Pegg et al. [Phys. Rev. Lett. 81 (1998) 1604]. In contrast to the thermal state, the generated state shows nonclassical properties, including negativity of the Wigner function. In addition, signal amplification and signal-to-noise ratio enhancement can be achieved. \textbf{PACS: }42.50.Dv, 03.67.-a, 05.30.-d, 03.65.Wj \textbf{Keywords:} quantum scissors; thermal state; signal amplification; signal-to-noise ratio; Wigner function; parity \end{abstract} \maketitle \section{Introduction} The generation of quantum states is a prerequisite for universal quantum information processing (QIP) \cite{1}. Quantum states are usually classified into discrete-variable (DV) and continuous-variable (CV) descriptions \cite{2}. In the CV quantum regime, there are two classes of quantum states that play an important role in QIP, Gaussian and non-Gaussian states, distinguished by the character of their wave function or Wigner function \cite{3,4}. In general, Gaussian states are relatively easy to generate and manipulate using current standard optical technology \cite{5}. However, in recent decades, several probabilistic schemes have been proposed to generate and manipulate non-Gaussian states \cite{6,6a,6b}.
Many schemes work in postselection \cite{7}, that is, the generated state is accepted conditionally on a measurement outcome. Typical examples include photon addition and subtraction \cite{8}, and noise addition \cite{9}. Among them, an interesting scheme was based on the quantum-scissors device. In 1998, Pegg, Phillips and Barnett proposed this quantum state truncation scheme, which changes an optical state $\gamma _{0}\left\vert 0\right\rangle +\gamma _{1}\left\vert 1\right\rangle +\gamma _{2}\left\vert 2\right\rangle +\cdots $ into the qubit optical state $\gamma _{0}\left\vert 0\right\rangle +\gamma _{1}\left\vert 1\right\rangle $. The device is called a quantum scissors device (QSD), while the effect is referred to as optical state truncation via projection synthesis. This quantum mechanical phenomenon is actually a nonlocal effect relying on entanglement, because no light from the input mode can reach the output mode \cite{10}. After its proposal, a quantum scissors experiment was realized by Babichev, Ries and Lvovsky \cite{11} by applying the experimentally feasible proposals of Refs.~\cite{11a1,11a2,11a3}. The QSD was also applied and generalized to generate not only qubits but also qutrits \cite{11b} and qudits \cite{11c1,11c2} of any dimension. A similar quantum state can also be generated via a four-wave mixing process in a cavity \cite{11d}. Following these works on the QSD, Ferreyrol et al. implemented a nondeterministic optical noiseless amplifier for a coherent state \cite{12}. Moreover, heralded noiseless linear amplifications were designed and realized \cite{13,14,15}. Recently, an experimental demonstration of a practical nondeterministic quantum optical amplification scheme was presented, achieving amplification of known sets of coherent states with high fidelity \cite{16}. Notably, many systems transmitting signals using quantum states could benefit from amplification.
In fact, any attempt to amplify a signal must inevitably introduce noise. In other words, perfect deterministic amplification of an unknown quantum signal is impossible. In addition, Miranowicz et al. studied the phase-space interference of quantum states optically truncated by the QSD \cite{16a}. Inspired by the above works, in this paper we generate a non-Gaussian mixed state by using a Gaussian thermal state as the input state of the quantum scissors. This process transforms an input thermal state into an incoherent mixture of only zero-photon and single-photon components. The success probability of this event is studied. Some properties of the generated state, such as signal amplification, signal-to-noise ratio and the negativity of the Wigner function, are investigated in detail. The paper is organized as follows. In section II, we outline the framework of the QSD and introduce the scheme of thermal state truncation. The generated quantum state is derived explicitly and the success probability is discussed. Subsequently, some statistical properties, such as the average photon number, intensity gain and signal-to-noise ratio, are investigated in section III. In addition, we study the Wigner function and the parity of the output state in section IV. Conclusions are summarized in the final section. \section{Thermal state truncation scheme} In this section, we outline the basic framework of the quantum scissors device and introduce our scheme of thermal state truncation. \subsection{Framework of the quantum scissors device} The QSD mainly includes two beam splitters (BSs) and three channels, as shown in Fig.1. The three channels are described by the optical modes $a$, $b$, and $c$ in terms of their respective creation (annihilation) operators $a^{\dag }$($a$), $b^{\dag }$($b$) and $c^{\dag }$($c$). Since every channel has an input port and an output port, the QSD has six ports. The interaction includes several key stages, as follows.
Firstly, channels $a$ and $c$ are correlated through an asymmetrical beam splitter (A-BS), whose operation can be described by the unitary operator $B_{1}=e^{\theta \left( a^{\dag }c-ac^{\dag }\right) }$ with the transmissivity $T=\cos ^{2}\theta $. After that, channels $b$ and $c$ are correlated through a symmetrical beam splitter (S-BS, i.e., a 50:50 BS), whose operation can be described by the unitary operator $B_{2}=e^{\frac{\pi }{4}\left( b^{\dag }c-bc^{\dag }\right) }$. Moreover, among these six ports, four ports are fixed with special processes as follows: (1) Injecting the auxiliary single-photon state $\left\vert 1\right\rangle $ in the input port of channel $a$; (2) Injecting the auxiliary zero-photon state $\left\vert 0\right\rangle $ in the input port of channel $c$; (3) Detecting the single photon $\left\vert 1\right\rangle $ in the output port of channel $b$; and (4) Detecting the zero photon $\left\vert 0\right\rangle $ in the output port of channel $c$. The QSD leaves only one input port (i.e., the input port in channel $b$) and one output port (i.e., the output port in channel $a$). Injecting an appropriate input state in the input port, one can generate a new quantum state in the output port. Many previous theoretical and experimental schemes have used pure states as the input states to generate quantum states. Here, our proposed scheme uses a mixed state as the input state to generate a quantum state. \subsection{Thermal state truncation} Using a mixed state (i.e., a thermal state) as the input state, we shall generate another mixed state in our present protocol. The input thermal state is given by \begin{equation} \rho _{th}=\sum_{n=0}^{\infty }\frac{\bar{n}^{n}}{\left( \bar{n}+1\right) ^{n+1}}\left\vert n\right\rangle \left\langle n\right\vert , \label{1} \end{equation} where $\bar{n}$ is the average number of the thermal photons \cite{17}.
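As a quick consistency check (added here; it is implicit in the original), the coefficients in Eq.(\ref{1}) form a geometric distribution: writing $q=\bar{n}/\left( \bar{n}+1\right) $, we have $\bar{n}^{n}/\left( \bar{n}+1\right) ^{n+1}=\left( 1-q\right) q^{n}$, so that \begin{equation*} \mathrm{Tr}\rho _{th}=\sum_{n=0}^{\infty }\left( 1-q\right) q^{n}=1,\qquad \mathrm{Tr}\left( \hat{n}\rho _{th}\right) =\sum_{n=0}^{\infty }n\left( 1-q\right) q^{n}=\frac{q}{1-q}=\bar{n}, \end{equation*} confirming that $\rho _{th}$ is normalized with mean photon number $\bar{n}$.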
Therefore, the output generated state can be expressed as \begin{eqnarray} \rho _{out} &=&\frac{1}{p_{d}}\left\langle 0_{c}\right\vert \left\langle 1_{b}\right\vert B_{2}\{\rho _{th}\otimes \notag \\ &&[B_{1}(\left\vert 1_{a}\right\rangle \left\langle 1_{a}\right\vert \otimes \left\vert 0_{c}\right\rangle \left\langle 0_{c}\right\vert )B_{1}^{\dag }]\}B_{2}^{\dag }\left\vert 1_{b}\right\rangle \left\vert 0_{c}\right\rangle , \label{2} \end{eqnarray} where $p_{d}$ is the success probability. \begin{figure} \caption{(Colour online) Conceptual scheme of the ``quantum scissors device'' (QSD) for thermal state truncation. The auxiliary single-photon state $\left\vert 1\right\rangle \left\langle 1\right\vert $ in channel $a$ and the auxiliary vacuum state $\left\vert 0\right\rangle \left\langle 0\right\vert $ in channel $c$ generate an entangled state between the modes $a$ and $c$ after passing through an asymmetrical beam splitter (A-BS) with the transmissivity $T$. The input mode $b$ is accompanied by the input thermal state $\protect\rho _{th}$.} \label{Fig1} \end{figure} The explicit density operator in Eq.(\ref{2}) can further be expressed as \begin{equation} \rho _{out}=p_{0}\left\vert 0\right\rangle \left\langle 0\right\vert +p_{1}\left\vert 1\right\rangle \left\langle 1\right\vert , \label{3} \end{equation} where $p_{0}=\left( 1-T\right) \left( \bar{n}+1\right) /\left( \bar{n}+1-T\right) $ and $p_{1}=\bar{n}T/\left( \bar{n}+1-T\right) $ are, respectively, the zero-photon distribution probability and the one-photon distribution probability. Obviously, the output state is an incoherent mixture of a vacuum state $\left\vert 0\right\rangle \left\langle 0\right\vert $ and a one-photon state $\left\vert 1\right\rangle \left\langle 1\right\vert $ with ratio coefficients $p_{0}$ and $p_{1}$.
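One can also verify directly that the weights in Eq.(\ref{3}) are properly normalized (a check we add for completeness): \begin{equation*} p_{0}+p_{1}=\frac{\left( 1-T\right) \left( \bar{n}+1\right) +\bar{n}T}{\bar{n}+1-T}=\frac{\bar{n}+1-T}{\bar{n}+1-T}=1, \end{equation*} so $\rho _{out}$ is indeed a density operator.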
If $T=0$, then $\rho _{out}\rightarrow \left\vert 0\right\rangle \left\langle 0\right\vert $, while for $T=1$, $\rho _{out}\rightarrow \left\vert 1\right\rangle \left\langle 1\right\vert $. From another point of view, the output generated state in Eq.(\ref{3}) retains only the first two terms of the input thermal state in Eq.(\ref{1}), which can also be considered as a truncation of the input thermal state. However, the corresponding coefficients of these terms are changed. Moreover, the output generated state carries information about the input thermal state because it also depends on the thermal parameter $\bar{n}$. Since no light from the input port reaches the output port, this process also marks the nonlocal quantum effect of the quantum scissors operation. From the present protocol, we easily obtain $p_{d}$ as follows \begin{equation} p_{d}=\allowbreak \frac{\bar{n}+1-T}{2\left( \bar{n}+1\right) ^{2}}. \label{4} \end{equation} For a given $\bar{n}$, $p_{d}$ is a linearly decreasing function of $T$. \begin{figure} \caption{(Colour online) Probability of successfully generating the output state as a function of the beam-splitter transmissivity according to the model presented in the text. The average photon number of the input thermal state $\bar{n}$ takes different values.} \label{Fig2} \end{figure} In Fig.2, we plot $p_{d}$ as a function of $T$ for different $\bar{n}$. For instance, when $\bar{n}=1$, we have $p_{d}|_{\bar{n}=1}=0.25-0.125T$ (see the green line in Fig.2); when $\bar{n}=0$, we have $p_{d}|_{\bar{n}=0}=\allowbreak 0.5-0.5T$ (see the black line in Fig.2). The results on the success probability provide a theoretical reference for experimental realization. \section{Statistical properties of the generated state} By adjusting the interaction parameters, i.e., the thermal parameter $\bar{n}$ of the input state and the transmission parameter $T$ of the A-BS, one can obtain different output states with different figures of merit.
Some statistical properties, such as the average photon number, intensity gain and signal-to-noise ratio, are studied in this section. As a reference, we compare the properties of the output state with those of the input state. \subsection{Average photon number and intensity gain} Using the definition of the average photon number, we have $\left\langle \hat{n}\right\rangle _{\rho _{th}}=\bar{n}$ for the input thermal state and \begin{equation} \left\langle \hat{n}\right\rangle _{\rho _{out}}=\frac{\bar{n}T}{\bar{n}+1-T} \label{5} \end{equation} for the output generated state. Here $\hat{n}$ is the photon-number operator \cite{18}. In Fig.3, we plot $\left\langle \hat{n}\right\rangle _{\rho _{out}}$ as a function of $T$ for different $\bar{n}$. Two extreme cases always hold: (1) $\left\langle \hat{n}\right\rangle _{\rho _{out}}\equiv 0$ if $\bar{n}=0$ or $T=0$, and (2) $\left\langle \hat{n}\right\rangle _{\rho _{out}}\equiv 1$ if $T=1$ for any $\bar{n}\neq 0$. No matter how large the input thermal parameter $\left\langle \hat{n}\right\rangle _{\rho _{th}}$ is, we always have $\left\langle \hat{n}\right\rangle _{\rho _{out}}\in \lbrack 0,1]$. Moreover, $\left\langle \hat{n}\right\rangle _{\rho _{out}}$ is an increasing function of $T$ for a given nonzero $\bar{n}$. \begin{figure} \caption{(Colour online) Average photon number of the output state as a function of the beam-splitter transmissivity. Here $\bar{n}$ is the average photon number of the input thermal state.} \label{Fig3} \end{figure} In order to describe signal amplification, we define the intensity gain as $g=\left\langle \hat{n}\right\rangle _{\rho _{out}}/\left\langle \hat{n}\right\rangle _{\rho _{th}}$, which relates the intensity $\left\langle \hat{n}\right\rangle _{\rho _{out}}$ of the output field to that, $\left\langle \hat{n}\right\rangle _{\rho _{th}}$, of the input field. Therefore we have \begin{equation} g=\allowbreak \frac{T}{\bar{n}+1-T}.
\label{6} \end{equation} If $g>1$, there is signal amplification. \begin{figure} \caption{(Colour online) Intensity gain of the output state as a function of the beam-splitter transmissivity. Here $\bar{n}$ is the average photon number of the input thermal state.} \label{Fig4} \end{figure} Fig.4 shows the intensity gain $g$ as a function of $T$ for different $\bar{n}$. If $\bar{n}\geq 1$, $g$ cannot exceed 1, which means no amplification. In other words, amplification happens only for $\bar{n}<1$ with $T\in (\left( \bar{n}+1\right) /2,1]$. \subsection{Signal-to-noise ratio} The signal-to-noise ratio (abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise \cite{19}. Here we are interested in the effect that the process has on the noise of these states. Typically this is shown by calculating the variance of the photon number and forming the SNR, defined by $SNR=\left\langle \hat{n}\right\rangle /\sqrt{\left\langle \hat{n}^{2}\right\rangle -\left\langle \hat{n}\right\rangle ^{2}}$. From the definition, we have $\left\langle \hat{n}\right\rangle |_{\rho _{th}}=\bar{n}$, $\left\langle \hat{n}^{2}\right\rangle |_{\rho _{th}}=\bar{n}+2\bar{n}^{2}$, and then $SNR|_{\rho _{th}}=\bar{n}/\sqrt{\bar{n}+\bar{n}^{2}}$ for the input thermal state $\rho _{th}$. For our generated state $\rho _{out}$, we find $\left\langle \hat{n}\right\rangle |_{\rho _{out}}=\left\langle \hat{n}^{2}\right\rangle |_{\rho _{out}}=p_{1}$ and \begin{equation} SNR|_{\rho _{out}}=\sqrt{\allowbreak \frac{\bar{n}T}{\left( 1-T\right) \left( \bar{n}+1\right) }}. \label{7} \end{equation} \begin{figure} \caption{(Colour online) Signal-to-noise ratio of the output states (curved lines) as a function of the beam-splitter transmissivity, compared to their corresponding thermal states (straight lines).
Here $\bar{n}$ is the average photon number of the input thermal state.} \label{Fig5} \end{figure} As shown in Fig.5, the SNR of the output states is compared to that of the corresponding thermal states with the same fixed average photon number. A clear enhancement (relative to the corresponding input thermal state) is seen for $T>0.5$. Moreover, although the SNR of the input thermal state is always smaller than unity, the SNR of the output generated state exceeds unity for larger $T$ $\left( >\left( \bar{n}+1\right) /\left( 2\bar{n}+1\right) \right) $. \section{Wigner function and parity of the generated state} A negative Wigner function is a witness of the nonclassicality of a quantum state \cite{20,21,22}. For a single-mode density operator $\rho $, the Wigner function in the coherent state representation $\left\vert z\right\rangle $ can be expressed as $W(\beta )=\frac{2e^{2\left\vert \beta \right\vert ^{2}}}{\pi }\int \frac{d^{2}z}{\pi }\left\langle -z\right\vert \rho \left\vert z\right\rangle e^{-2\left( z\beta ^{\ast }-z^{\ast }\beta \right) }$, where $\beta =\left( q+ip\right) /\sqrt{2}$. Therefore we obtain $W_{\rho _{th}}(\beta )=2/\left( \pi \left( 2\bar{n}+1\right) \right) e^{-2\left\vert \beta \right\vert ^{2}/\left( 2\bar{n}+1\right) }$ for the input thermal state and \begin{equation} W_{\rho _{out}}(\beta )=p_{0}W_{\left\vert 0\right\rangle \left\langle 0\right\vert }(\beta )+p_{1}W_{\left\vert 1\right\rangle \left\langle 1\right\vert }(\beta ) \label{8} \end{equation} for the output generated state, with $W_{\left\vert 0\right\rangle \left\langle 0\right\vert }(\beta )=\frac{2}{\pi }e^{-2\left\vert \beta \right\vert ^{2}}$ and $W_{\left\vert 1\right\rangle \left\langle 1\right\vert }(\beta )=\frac{2}{\pi }(4\left\vert \beta \right\vert ^{2}-1)e^{-2\left\vert \beta \right\vert ^{2}}$. The thermal state is a Gaussian state, whose Wigner function has no negative region.
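Eq. (\ref{8}) can be evaluated numerically; the following sketch (illustrative names; since the state is phase-symmetric, only $|\beta|^2$ enters) sums the vacuum and single-photon Wigner functions with weights $p_0$ and $p_1$:

```python
import math

def wigner_out(beta_abs2, nbar, T):
    """Wigner function of the output state, Eq. (8), evaluated at |beta|^2."""
    denom = nbar + 1 - T
    p0 = (1 - T) * (nbar + 1) / denom
    p1 = nbar * T / denom
    w0 = (2 / math.pi) * math.exp(-2 * beta_abs2)                         # vacuum
    w1 = (2 / math.pi) * (4 * beta_abs2 - 1) * math.exp(-2 * beta_abs2)   # single photon
    return p0 * w0 + p1 * w1
```

For $\bar{n}=1$, for example, the value at the origin changes sign as $T$ crosses $(\bar{n}+1)/(2\bar{n}+1)=2/3$.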
However, the output generated states lose this Gaussian character, as their Wigner functions are non-Gaussian. In addition, the Wigner function is negative in the region satisfying $\left\vert \beta \right\vert ^{2}<[2T\bar{n}-\left( \bar{n}+1-T\right) ]/\left( 4\bar{n}T\right) $. In Fig.6, we plot the Wigner functions of the output generated states for two different cases; the negative region is found in the case with large $T$. \begin{figure} \caption{(Colour online) Wigner function of the output state for two different choices of $(\bar{n},T)$.} \label{Fig6} \end{figure} Since the Wigner function of the output state is symmetric in $x$ and $p$, one can determine whether it has a negative region by examining $W_{\rho _{out}}(\beta =0)$. As Gerry pointed out, the Wigner function at the origin gives the expectation value of the parity operator $\Pi =\left( -1\right) ^{\hat{n}}$, that is $\left\langle \Pi \right\rangle =\frac{\pi }{2}W(0)$ \cite{23}. Thus, we have $\left\langle \Pi \right\rangle _{\rho _{th}}=1/\left( 2\bar{n}+1\right) $ for the input thermal state and \begin{equation} \left\langle \Pi \right\rangle _{\rho _{out}}=\frac{\bar{n}+1-T-2T\bar{n}}{\allowbreak \bar{n}+1-T} \label{9} \end{equation} for the output generated state. Fig.7 shows $\left\langle \Pi \right\rangle _{\rho _{out}}$ as a function of $T$ for different $\allowbreak \bar{n}$. \begin{figure} \caption{(Colour online) Parity of the output state as a function of the beam-splitter transmissivity.} \label{Fig7} \end{figure} Photon number states are assigned a parity of $+1$ if their photon number is even and a parity of $-1$ if odd \cite{24}. According to Eq.(\ref{3}), we verify $\left\langle \Pi \right\rangle _{\rho _{out}}=p_{0}-p_{1}$.
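The closed form of Eq. (\ref{9}) can be cross-checked against $p_0-p_1$; a minimal sketch (illustrative names, not from the paper):

```python
def parity_out(nbar, T):
    """Closed form of Eq. (9): <Pi> = (nbar + 1 - T - 2*T*nbar)/(nbar + 1 - T),
    which coincides with p0 - p1 for the truncated state of Eq. (3)."""
    return (nbar + 1 - T - 2 * T * nbar) / (nbar + 1 - T)
```

The parity is negative exactly when $T>(\bar{n}+1)/(1+2\bar{n})$, i.e., when the single-photon weight $p_1$ dominates.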
If the condition $T>\left( \bar{n}+1\right) /\left( 1+2\bar{n}\right) $ holds, then $\left\langle \Pi \right\rangle _{\rho _{out}}<0$, which means that the Wigner function must exhibit a negative region in phase space. \section{Conclusion} In summary, we have applied the QSD of Pegg, Phillips and Barnett to truncate a thermal field to a completely mixed qubit state, i.e., a mixture of the vacuum and the single-photon state. The explicit expression was derived in the Schr\"{o}dinger picture and the success probability of this event was discussed. The output generated state depends on two interaction parameters, i.e., the input thermal parameter and the transmissivity of the A-BS. It is shown that the success probability is a linearly decreasing function of the transmissivity for any given input parameter. Some nonclassical properties of the qubit state were analyzed, including intensity amplification, signal-to-noise ratio, and the non-positive Wigner function. It was shown that the average photon number of the output state can be adjusted between 0 and 1. Intensity amplification happens only for a small-intensity thermal field ($\allowbreak \bar{n}<1$) and large transmissivity ($T>\left( \bar{n}+1\right) /2$). The SNR of the output state can be enhanced by the operation for a given input thermal state at larger values of $T$ ($>0.5$). An SNR higher than unity can be found in the range $T>\left( \bar{n}+1\right) /\left( 2\bar{n}+1\right) $. In addition, the negativity of the Wigner function appears only for $T>\left( \bar{n}+1\right) /\left( 1+2\bar{n}\right) $. \begin{acknowledgments} We would like to thank Li-yun Hu and Bi-xuan Fan for their helpful discussions. This work was supported by the Research Foundation of the Education Department of Jiangxi Province of China (Nos.
GJJ151150 and GJJ150338) and the Natural Science Foundation of Jiangxi Province of China (20151BAB202013) as well as the National Natural Science Foundation of China (Grants No. 11264018 and No. 11447002). \end{acknowledgments} \textbf{Appendix A: Derivation of the density operator in Eq.(\ref{3})} In this appendix, we provide a detailed derivation of the explicit expression of the output generated state in the Schr\"{o}dinger picture. Substituting $\left\vert 1_{a}\right\rangle =\frac{d}{ds_{1}}e^{s_{1}a^{\dag }}\left\vert 0_{a}\right\rangle |_{s_{1}=0}$, $\left\langle 1_{a}\right\vert =\frac{d}{dh_{1}}\left\langle 0_{a}\right\vert e^{h_{1}a}|_{h_{1}=0}$, $ \left\vert 1_{b}\right\rangle =\frac{d}{ds_{2}}e^{s_{2}b^{\dag }}\left\vert 0_{b}\right\rangle |_{s_{2}=0}$, $\left\langle 1_{b}\right\vert =\frac{d}{ dh_{2}}\left\langle 0_{b}\right\vert e^{h_{2}b}|_{h_{2}=0}$, as well as \begin{equation*} \rho _{th}=\frac{1}{\bar{n}}\int \frac{d^{2}\alpha }{\pi }e^{-\left( \frac{1 }{\bar{n}}+1\right) \left\vert \alpha \right\vert ^{2}}e^{\alpha b^{\dag }}\left\vert 0_{b}\right\rangle \left\langle 0_{b}\right\vert e^{\alpha ^{\ast }b}, \end{equation*} into Eq.(\ref{2}), we have \begin{eqnarray*} \rho _{out} &=&\frac{d^{4}}{\bar{n}p_{d}ds_{1}dh_{1}dh_{2}ds_{2}} \\ &&\int \frac{d^{2}\alpha }{\pi }e^{-\left( \frac{1}{\bar{n}}+1\right) \left\vert \alpha \right\vert ^{2}}\left\langle 0_{c}\right\vert \left\langle 0_{b}\right\vert e^{h_{2}b} \\ &&e^{s_{1}\allowbreak ta^{\dag }+\frac{\alpha -s_{1}r}{\sqrt{2}}b^{\dag }- \frac{\alpha +s_{1}r}{\sqrt{2}}c^{\dag }}\left\vert 0_{a}\right\rangle \left\vert 0_{b}\right\rangle \left\vert 0_{c}\right\rangle \\ &&\left\langle 0_{c}\right\vert \left\langle 0_{b}\right\vert \left\langle 0_{a}\right\vert e^{\allowbreak h_{1}ta+\allowbreak \frac{\alpha ^{\ast }-h_{1}r}{\sqrt{2}}b-\frac{\alpha ^{\ast }+h_{1}r}{\sqrt{2}}c} \\ &&e^{s_{2}b^{\dag }}\left\vert 0_{b}\right\rangle \left\vert 0_{c}\right\rangle |_{s_{1}=s_{2}=h_{1}=h_{2}=0}
\end{eqnarray*} where we have used the following transformations \begin{eqnarray*} B_{1}aB_{1}^{\dag } &=&at-cr,\text{ \ }B_{1}cB_{1}^{\dag }=ar+ct, \\ B_{2}bB_{2}^{\dag } &=&\frac{b-c}{\sqrt{2}},\text{ \ }B_{2}cB_{2}^{\dag }= \frac{b+c}{\sqrt{2}}. \end{eqnarray*} and $B_{1}\left\vert 0_{a}\right\rangle \left\vert 0_{c}\right\rangle =\left\vert 0_{a}\right\rangle \left\vert 0_{c}\right\rangle $, $ B_{2}\left\vert 0_{b}\right\rangle \left\vert 0_{c}\right\rangle =\left\vert 0_{b}\right\rangle \left\vert 0_{c}\right\rangle $, as well as their conjugations. In addition, $t=\cos \theta $ and $r=\sin \theta $\ are\ the transmission coefficient and the reflection coefficient\ of the A-BS, respectively. After detailed calculation, we obtain \begin{eqnarray*} \rho _{out} &=&\frac{d^{4}}{\left( \bar{n}+1\right) p_{d}ds_{1}dh_{1}dh_{2}ds_{2}} \\ &&e^{\frac{\bar{n}}{2\left( \bar{n}+1\right) }s_{2}h_{2}-\frac{r}{\sqrt{2}} \left( h_{1}s_{2}+rh_{2}s_{1}\right) } \\ &&e^{s_{1}ta^{\dag }}\left\vert 0_{a}\right\rangle \left\langle 0_{a}\right\vert e^{\allowbreak h_{1}ta}|_{s_{1}=s_{2}=h_{1}=h_{2}=0} \end{eqnarray*} Using $\left\vert 0_{a}\right\rangle \left\langle 0_{a}\right\vert =$ $ :e^{-a^{\dag }a}:$ and making the derivative in the normal ordering form (denoted by $:\cdots :$), we have \begin{equation*} \rho _{out}=:\left( p_{0}+p_{1}a^{\dag }a\right) \exp \left( -a^{\dag }a\right) : \end{equation*} Thus the density operator in Eq.(\ref{3}) is obtained. \end{document}
\begin{document} \title{Controlling Reversibility in Reversing Petri Nets with Application to Wireless Communications} \author{ Anna Philippou\inst{1}, Kyriaki Psara\inst{1}, and Harun Siljak\inst{2} } \institute{ Department of Computer Science, University of Cyprus\\ \email{ \{annap,kpsara01\}@cs.ucy.ac.cy} \and CONNECT Centre, Trinity College Dublin, \email {[email protected]} } \maketitle \begin{abstract} Petri nets are a formalism for modelling and reasoning about the behaviour of distributed systems. Recently, a reversible approach to Petri nets, Reversing Petri Nets (RPN), has been proposed, allowing transitions to be reversed spontaneously in or out of causal order. In this work we propose an approach for controlling the reversal of actions of an RPN, by associating transitions with conditions whose satisfaction/violation allows the execution of transitions in the forward/reversed direction, respectively. We illustrate the framework with a model of a novel, distributed algorithm for antenna selection in distributed antenna arrays. \end{abstract} \pagestyle{plain} \section{Introduction}\label{sec:Introduction} Reversibility is a phenomenon that occurs in a variety of systems, e.g., biochemical systems and quantum computations. At the same time, it is often a desirable system property. To begin with, technologies based on reversible computation are considered to be the only way to potentially improve the energy efficiency of computers beyond the fundamental Landauer limit. Further applications are encountered in programming languages, concurrent transactions, and fault-tolerant systems, where in case of an error a system should reverse back to a safe state.
As such, reversible computation has been an active topic of research in recent years and its interplay with concurrency is being investigated within a variety of theoretical models of computation. The notion of causally-consistent reversibility was first introduced in the process calculus RCCS~\cite{RCCS}, advocating that a transition can be undone only if all its effects, if any, have been undone beforehand. Since then the study of reversibility has continued in the context of process calculi~\cite{TransactionsRCCS,Algebraic,LaneseLMSS13,LaneseMS16,CardelliL11}, event structures~\cite{ConRev}, and Petri nets~\cite{PetriNets,RPNs,RPNtoCPN}. A distinguishing feature among the cited approaches is that of \emph{controlling} reversibility: while various frameworks make no restriction as to when a transition can be reversed (uncontrolled reversibility), it can be argued that some means of controlling the conditions of transition reversal is often useful in practice. For instance, when dealing with fault recovery, reversal should only be triggered when a fault is encountered. Based on this observation, a number of strategies for controlling reversibility have been proposed: \cite{TransactionsRCCS} introduces the concept of irreversible actions, and \cite{DBLP:conf/rc/LaneseMS12} introduces compensations to deal with irreversible actions in the context of programming abstractions for distributed systems. Another approach for controlling reversibility is proposed in~\cite{ERK}, where an external entity is employed for capturing the order in which transitions can be executed in the forward or the backward direction. In another line of work,~\cite{LaneseMSS11} defines a roll-back primitive for reversing computation, and in~\cite{LaneseLMSS13} roll-back is extended with the possibility of specifying the alternatives to be taken on resuming the forward execution.
Finally, in~\cite{statistical} the authors associate the direction of action reversal with energy parameters capturing environmental conditions of the modelled systems. In this work we focus on the framework of reversing Petri nets (RPNs)~\cite{RPNs}, which we extend with a mechanism for controlling reversibility. This control is enforced with the aid of conditions associated with transitions, whose satisfaction/violation acts as a guard for executing the transition in the forward/backward direction, respectively. The conditions are enunciated within a simple logical language expressing properties relating to available tokens. The mechanism may capture environmental conditions, e.g., changes in temperature, or the presence of faults. We present a causal-consistent semantics of the framework. Note that conditional transitions can also be found in existing Petri net models, e.g., in~\cite{CPN}, a Petri-net model that associates transitions and arcs with expressions. We conclude with the model of a novel antenna selection (AS) algorithm which inspired our framework. Centralized AS in DM MIMO (distributed, massive, multiple input, multiple output) systems \cite{gao2015massive} is computationally complex, demands a large information exchange, and must cope with a communication channel between antennas and users that changes rapidly. We introduce an RPN-based, distributed, time-evolving solution with reversibility, asynchronous execution and local condition tracking for reliable performance and fault tolerance. \section{Reversing Petri Nets}\label{sec:ReversingPetriNets} In this section we extend the reversing Petri nets of~\cite{RPNs} by associating transitions with conditions that control their execution and reversal, and by allowing tokens to carry data values of specific types (clauses (2), (6) and (7) in the following definition). We introduce a causal-consistent semantics for the framework.
\begin{definition}{\rm A \emph{\PN} (RPN) is a tuple $(P,T, \Sigma, A, B, F, C, I)$ where: \begin{enumerate} \item $P$ is a finite set of \emph{places} and $T$ is a finite set of \emph{transitions}. \item $\Sigma$ forms a finite set of data types with $V$ the associated set of data values. \item $A$ is a finite set of \emph{bases} or \emph{tokens} ranged over by $a, b,\ldots$. $\overline{A} = \{\overline{a}\mid a\in A\}$ contains a ``negative'' instance for each token and we write ${\cal{A}}=A \cup \overline{A}$. \item $B\subseteq A\times A$ is a set of undirected \emph{bonds} ranged over by $\beta,\gamma,\ldots$. We use the notation $a \bond b$ for a bond $(a,b)\in B$. $\overline{B} = \{\overline{\beta}\mid \beta\in B\}$ contains a ``negative'' instance for each bond and we write ${\cal{B}}=B \cup \overline{B}$. \item $F : (P\times T \cup T \times P)\rightarrow 2^{{\cal{A}}\cup {\cal{B}}}$ is a set of directed labelled \emph{arcs}. \item $C:T\rightarrow$ COND is a function that assigns a condition to each transition $t$ such that $type(C(t))=Bool$. \item $I : A \rightarrow V$ is a function that associates a data value from $V$ with each token $a$ such that $type(I(a))=type(a)$. \end{enumerate} }\end{definition} RPNs are built on the basis of a set of \emph{tokens} or \emph{bases} which correspond to the basic entities that occur in a system. Tokens have a type from the set $\Sigma$, and we write $type(e)$ to denote the type of a token or expression in the language. Values of these types are associated with tokens of an \RPN via the function $I$. Tokens may occur as stand-alone elements but as computation proceeds they may also merge together to form \emph{bonds}. Transitions represent events and are associated with conditions COND defined over the data values associated with the tokens of the model and functions/predicates over the associated data types. \emph{Places} have the standard meaning.
Directed arcs connect places to transitions and vice versa and are labelled by a subset of ${\cal{A}}\cup {\cal{B}}$. Intuitively, these labels express the requirements for a transition to fire when placed on arcs entering the transition, and the effects of the transition when placed on the outgoing arcs. Graphically, a Petri net is a directed bipartite graph where tokens are indicated by $\bullet$, places by circles, transitions by boxes, and bonds by lines between tokens. The association of tokens to places is called a \emph{marking} such that $M: P\rightarrow 2^{A\cup B}$ where $a \bond b \in M(x)$, for some $x\in P$, implies $a,b\in M(x)$. In addition, we employ the notion of a \emph{history}, which assigns a memory to each transition, $H : T\rightarrow 2^\mathbb{N}$. Intuitively, a history of $H(t) = \emptyset$ for some $t \in T$ captures that the transition has not taken place, while $k\in H(t)$ captures that the transition was executed as the $k^{th}$ transition occurrence and has not been reversed. Note that $|H(t)|>1$ may arise due to cycles in a model. A pair of a marking and a history, $\state{M}{H}$, describes a \emph{state} of an RPN, with $\state{M_0}{H_0}$ the initial state, where $H_0(t) = \emptyset $ for all $t\in T$. We introduce the following notations. We write $\circ t = \{x\in P\mid F(x,t)\neq \emptyset\}$ and $ t\circ = \{x\in P\mid F(t,x)\neq \emptyset\}$ for the incoming and outgoing places of transition $t$, respectively. Furthermore, we write $\guard{t} = \bigcup_{x\in P} F(x,t)$ and $\effects{t} = \bigcup_{x\in P} F(t,x)$. Finally, we define $\connected(a,C)$, where $a$ is a token and $C\subseteq A\cup B$ a set of connections, to be the tokens connected to $a$ via a sequence of bonds in $B$, together with the bonds creating these connections.
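The $\connected(a,C)$ operation is essentially a connected-component computation over the bond graph. A minimal sketch follows (an illustrative encoding with tokens as strings and bonds as two-element frozensets; it is not part of the formal definition):

```python
def connected(a, C):
    """Tokens linked to token `a` via a chain of bonds in C, plus the bonds
    creating those connections. Bonds are modelled as frozensets of two tokens."""
    bonds = {c for c in C if isinstance(c, frozenset)}
    tokens, used = {a}, set()
    changed = True
    while changed:
        changed = False
        for b in bonds - used:
            if b & tokens:          # bond touches the component built so far
                used.add(b)
                tokens |= b         # pull in the other endpoint
                changed = True
    return tokens | used
```

The fixpoint loop grows the component until no further bond touches it, mirroring the "sequence of bonds" in the definition.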
In what follows we assume that: (1) transitions do not erase tokens ($A\cap \guard{t} = A\cap \effects{t}$), and (2) tokens/bonds cannot be cloned into more than one outgoing places of a transition ($F(t,x) \cap F(t,y)=\emptyset$ for all $x,y \in P, x\neq y$). Furthermore, we assume for all $a\in A, |\multiset{x| a\in M_0(x)}|=1$, i.e., there exists exactly one base of each type in $M_0$. Note that we extend the exposition of~\cite{RPNs} by allowing transitions to break bonds and by permitting cyclic structures. \subsection{Forward execution} For a transition to be forward-enabled in an \RPN the following must hold: \begin{definition}\label{forward}{\rm Consider a \RPN $(P,T, \Sigma, A, B, F, C, I)$, a transition $t$, and a state $\state{M}{H}$. We say that $t$ is \emph{forward-enabled} in $\state{M}{H}$ if: \begin{enumerate} \item If $a\in F(x,t)$ (resp. $\beta\in F(x,t)$) for some $x\in\circ t$, then $a\in M(x)$ (resp. $\beta\in M(x)$), and if $\overline{a}\in F(x,t)$ (resp. $\overline{\beta} \in F(x,t)$) for some $x\in\circ t$, then $a\not\in M(x)$ (resp. $\beta\not\in M(x)$), \item If $\beta\in F(t,x)$ for some $x\in t\circ$ and $\beta\in M(y)$ for some $y\in \circ t$ then $\beta\in F(y,t)$, \item $E(C(t))$= True. \end{enumerate} }\end{definition} Thus, $t$ is enabled in state $\state{M}{H}$ if (1) all tokens and bonds required for the transition are available in $t$'s incoming places and none of the tokens/bonds whose absence is required exists in $t$'s incoming place, (2) if a pre-existing bond appears in an outgoing arc of a transition, then it is also a precondition of the transition to fire, and (3) the transition's condition $C(t)$ evaluates to true. We write $E(c)$ for the value of the condition based on the assignment function $I$. When a transition $t$ is executed in the forward direction, all tokens and bonds occurring in its outgoing arcs are relocated from the input to the output places along with their connected components. 
The history of $t$ is extended accordingly: \begin{definition}{\rm \label{forw} Given a \RPN $(P,T, \Sigma, A, B, F, C, I)$, a state $\langle M, H\rangle$, and a transition $t$ enabled in $\state{M}{H}$, we write $\state{M}{H} \trans{t} \state{M'}{H'}$ where: \[ \begin{array}{rcl} M'(x) & = & M(x)-\bigcup_{a\in F(x,t)}\connected(a,M(x)) \\ &&\cup \bigcup_{ a\in F(t,x), y\in\circ{t}}\connected(a,M(y)-\guard{t} \cup F(t,x)) \end{array} \] and $H'(t') = H(t')\cup \{ \max( \{0\} \cup\bigcup_{t''\in T} H(t'')) +1\},$ if $t' = t $, and $H(t')$, otherwise. }\end{definition} \subsection{Causal order reversing} We now move on to {\em causal-order reversibility}. The following definition enunciates that a transition $t$ is $co$-enabled (`$co$' standing for causal-order reversing) if it has been previously executed and all the tokens on the outgoing arcs of the transition are available in its outplaces. Furthermore, to handle causality in the presence of cycles, clause (1) additionally requires that all bonds involved in the connected components of such tokens have been constructed by transitions $t'$ that have preceded $t$. Furthermore, clause (2) of the definition requires that the condition of the transition is not satisfied. \begin{definition}\label{co-enabled}{\rm Consider a RPN $(P,T, \Sigma, A, B, F, C, I)$, a state $\state{M}{H}$, and a transition $t\in T$ with $k=\max( H(t))$. Then $t$ is $co$-enabled in $\state{M}{H}$ if: (1) for all $a\in F(t,y)$ then $a\in M(y)$, and if $\connected(a,M(y))\cap \effects{t'} \neq \emptyset$ for some $t'\in T$ with $k'\in H(t')$, then $k'\leq k$, and, (2) $E(C(t))$= False. }\end{definition} When a transition $t$ is reversed all tokens and bonds in the pre-conditions of $t$, as well as their connected components, are transferred to $t$'s incoming places. 
\begin{definition}\label{br-def}{\rm Given a \RPN a state $\langle M, H\rangle$, and a transition $t$ $co$-enabled in $\state{M}{H}$ with history $k\in H(t)$, we write $ \state{M}{H} \rtrans{t} \state{M'}{H'}$ where: \[ \begin{array}{rcl} M'(x) & = & M(x)- \bigcup_{a\in F(t,x)}\connected(a,M(x)) \\ && \cup\bigcup_{ y \in t\circ, a\in F(x,t)}\connected(a,M(y)-\effects{t} \cup F(x,t)) \end{array} \] and $H'(t') = H(t')-\{k\}$ if $t' = t$, and $H(t')$, otherwise. }\end{definition} \section{Case Study: Antenna Selection in DM MIMO}\label{sec:Case study} The search for a suitable set of antennas is a sum capacity maximization problem: \begin{equation} \mathcal{C}=\max_{\mathbf{P},\mathbf{H_{c}}}\log_{2} \det\left(\mathbf{I}+\rho\frac{N_R}{N_{TS}} \mathbf{H_{c}}\mathbf{P}\mathbf{H_{c}}^{H}\right)\label{capac} \end{equation} where $\rho$ is the signal to noise ratio, $N_{TS}$ the number of antennas selected from a total of $N_T$ antennas, $N_{R}$ the number of users, $\mathbf{I}$ the $N_{TS}\times N_{TS}$ identity matrix, $\mathbf{P}$ a diagonal $N_{R}\times N_{R}$ power matrix. $\mathbf{H_{c}}$ is the $N_{TS}\times N_{R}$ submatrix of $N_{T}\times N_{R}$ channel matrix $\mathbf{H}$ \cite{gao2015massive}. Instead of centralized AS, in our approach (\ref{capac}) is calculated locally for small sets of antennas (neighborhoods), switching on only antennas which improve the capacity: in Fig.~\ref{mechanism}(a), antenna $A_{i-1}$ will not be selected. \begin{figure} \caption{RPN for antenna selection in DM MIMO (large antenna array).} \label{mechanism} \end{figure} In the \RPN interpretation, we present the antennas by places $A_1,\ldots,A_n$, where $n=N_T$, and the overlapping neighbourhoods by places $M_1,\ldots,M_h$. These places are connected together via transitions $t_{i,j}$, connecting $A_i$, $A_j$ and $M_k$, whenever there is a connection link between antennas $A_i$ and $A_j$. 
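The sum-capacity objective of Eq. (\ref{capac}) that each neighbourhood evaluates locally can be sketched numerically as follows (an illustrative implementation; the default equal power allocation $\mathbf{P}=\mathbf{I}$ is an assumption not fixed by the text):

```python
import numpy as np

def sum_capacity(H_c, rho, N_R, N_TS, P=None):
    """Sum capacity C = log2 det(I + rho*(N_R/N_TS) * H_c P H_c^H) for the
    N_TS x N_R channel submatrix H_c of the currently selected antennas."""
    if P is None:
        P = np.eye(N_R)  # equal power allocation (illustrative assumption)
    M = np.eye(N_TS) + rho * (N_R / N_TS) * H_c @ P @ H_c.conj().T
    return np.log2(np.linalg.det(M).real)
```

A neighbourhood would compare this quantity for two candidate antenna sets (one with $A_i$ active, one with $A_j$ active) to decide the direction of transition $t_{i,j}$.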
The transition captures that, based on the neighbourhood knowledge in place $M_k$, antenna $A_i$ may be preferred over $A_j$ or vice versa (the transition may be reversed). To implement the intended mechanism, we employ three types of tokens. First, we have the power tokens $p_1,\ldots,p_l$, where $l$ is the number of enabled antennas. If token $p$ is located on place $A_i$, antenna $A_i$ is considered to be on. Transfer of these tokens results into new antenna selections, ideally converging to a locally optimal solution. Second, tokens $m_1,\ldots,m_h$, each represent one neighborhood. Finally, $a_1,\ldots,a_n$, represent the antennas. The tokens are used as follows: Given transition $t_{i,j}$ between antenna places $A_i$ and $A_j$ in neighbourhood $M_k$, transition $t_{i,j}$ is enabled if token $p$ is available on $A_i$, token $a_j$ on $A_j$, and bond $(a_i,m_k)$ on $M_k$, i.e., $F(A_i,t_{i,j}) = \{p\}$, $F(A_j,t_{i,j})= \{a_j\}$, and $F(M_k,t_{i,j})=\{(a_i,m_k)\}$. This configuration captures that antennas $A_i$ and $A_j$ are on and off, respectively. (Note that the bonds between token $m_k$ and tokens of type $a$ in $M_k$ capture the active antennas in the neighbourhood.) Then, the effect of the transition is to break the bond $(a_i,m_k)$, and release token $a_i$ to place $A_i$, transferring the power token to $A_j$, and creating the bond $(a_j,m_k)$ on $M_k$, i.e., $F(t_{i,j}, A_i) = \{a_i\}$, $F(t_{i,j},A_j)= \{p\}$, and $F(t_{i,j},M_k) = \{(a_j,m_k)\}$. The mechanism achieving this for two antennas can be seen in Fig.~\ref{mechanism}(b). Finally, to capture the transition's condition, an antenna token $a_i$ is associated with data vector $I(a_i) = \mathbf{h}_i$, $type(\mathbf{h}_i)= \mathbb{R}^2$ ($=\mathbb{C}$), i.e., the corresponding row of $\mathbf{H}$. 
The condition constructs the matrix $\mathbf{H}_c$ of (\ref{capac}) by collecting the data vectors $\mathbf{h}_i$ associated with the antenna tokens $a_i$ in place $M_k$: $\mathbf{H}_c=(\mathbf{h}_1,...,\mathbf{h}_n)^T$ where $\mathbf{h}_i=I(a_i)$ if $a_i\in M_k$, otherwise $\mathbf{h}_i=(0\;\ldots\;0)$. The transition $t_{i,j}$ will occur if the sum capacity calculated for all currently active antennas (including $a_i$), $\mathcal{C}_{a_i}$, is less than the sum capacity calculated for the same neighbourhood with the antenna $A_i$ replaced by $A_j$, $\mathcal{C}_{a_j}$, i.e., $\mathcal{C}_{a_i}<\mathcal{C}_{a_{j}}$. Note that if the condition is violated, the transition may be executed in the reverse direction. Results of the RPN-based approach on an array consisting of $64$ antennas serving $16$ users, varying the number of selected antennas from $16$ to $64$, are shown in Fig.~\ref{resant} \cite{SPP19}. If we run five RPN models in parallel and select the one with the best performance for the final selection, the results are consistently superior to those of a centralised (greedy) algorithm; if we run just one (equivalent to the average of the performance of these five models), the results are on par with those of the centralised algorithm. \begin{figure} \caption{Results of antenna selection on a distributed 64 antenna array.} \label{resant} \end{figure} \section{Conclusions}\label{sec:Conclusions} We have extended RPNs with conditions that control reversibility by determining the direction of transition execution, and we have applied our framework to model an AS algorithm. Preliminary results show superior performance to centralised approaches.
Our experience strongly suggests that resource management can be studied and understood in terms of RPNs as, along with their visual nature, they offer a number of relevant features. In subsequent work, we plan to extend RPNs to allow multiple tokens of the same base/type to occur in a model, and to develop out-of-causal-order reversibility semantics in the presence of conditional transitions as well as the destruction of bonds. \noindent{\bf{Acknowledgments:}} This work was partially supported by the European COST Action IC 1405: Reversible Computation - Extending Horizons of Computing, Science Foundation Ireland (SFI) and the European Regional Development Fund under Grant Number 13/RC/2077, and the EU Horizon 2020 research \& innovation programme under the Marie Sklodowska-Curie grant agreement No 713567. \small \end{document}
\begin{document} \title{A Quantum Model for Autonomous Learning Automata} \author{Michael Siomau} \email{[email protected]} \affiliation{Physics Department, Jazan University, P.O.~Box 114, 45142 Jazan, Kingdom of Saudi Arabia} \date{\today} \begin{abstract} The idea of encoding information on quantum bearers and processing it quantum-mechanically has revolutionized our world and brought mankind to the verge of an enigmatic era of quantum technologies. Inspired by this idea, in the present paper we search for advantages of quantum information processing in the field of machine learning. Exploiting only basic properties of the Hilbert space, the superposition principle of quantum mechanics and quantum measurements, we construct a quantum analog for Rosenblatt's perceptron, which is the simplest learning machine. We demonstrate that the quantum perceptron \cor{is superior to} its classical counterpart in learning capabilities. In particular, we show that the quantum perceptron is able to learn an arbitrary (Boolean) logical function, perform classification on previously unseen classes and even recognize superpositions of learned classes -- a task of high importance in applied medical engineering. \end{abstract} \pacs{03.67.Ac, 87.19.ll, 87.85.E-} \maketitle \section{\label{sec:1} Introduction} During the last few decades, we have been witnessing the unification of quantum physics and classical information science, which resulted in the constitution of new disciplines -- quantum information and quantum computation \cite{Nielsen:00,Georgescu:13}. While the processing of information encoded in systems exhibiting quantum properties enables, for example, unconditionally secure quantum communication \cite{Gisin:02} and superdense coding \cite{Vedral:02}, computers that operate according to the laws of quantum mechanics offer efficient solutions of problems that are intractable on conventional computers \cite{Childs:10}.
Having paramount practical importance, these announced technological benefits have set the main directions of research in the field of quantum information and quantum computation, somehow leaving aside other potential applications of quantum physics in information science. So far, for instance, very little attention has been paid to possible advantages of quantum information processing in such areas of modern information science as machine learning \cite{Kecman:01} and artificial intelligence \cite{Russel:09}. \cor{Using the standard quantum computation formalism, it has been shown that machine learning governed by quantum mechanics has certain advantages over classical learning \cite{Menneer:95, Andrecut:02, Kouda:05, Zhou:06, Zhou:07, Manzano:09}. These advantages, however, are strongly coupled with a more sophisticated optimization procedure than in the classical case, and thus require an efficiently working quantum computer \cite{Ladd:10} to handle the optimization. This paper, in contrast, presents a new approach to machine learning, which, in the simplest case, does not require any optimization at all.} Our focus is on the perceptron, which is the simplest learning machine. The perceptron is a model of a neuron that was originally introduced by Rosenblatt \cite{Rosenblat:57} to perform visual perception tasks, which, in mathematical terms, amount to solving the \textit{linear classification problem}. There are two essential stages of the perceptron's functioning: a supervised learning session and new data classification. During the first stage, the perceptron is given a labeled set of examples. Its task is to infer the weights of a linear function according to some error-correcting rule. Subsequently, this function is utilized for the classification of new, previously unseen data. In spite of its very simple internal structure and learning rule, the perceptron's capabilities are seriously limited \cite{Minsky:69}.
The perceptron can not provide the classification if there is an overlap in the data or if the data can not be linearly separated. It is also incapable of learning complex logical functions, such as the XOR function. Moreover, by its design, the perceptron can distinguish only between previously seen classes and, therefore, can not resolve the situation when the input belongs to none of the learned classes or represents a superposition of seen classes. In this paper we show that all the mentioned problems can, in principle, be overcome by \cor{a quantum analog for the perceptron}. There are also two operational stages for the quantum perceptron. During the learning stage all the data are formally represented through quantum states of physical systems. This representation allows expanding the data space to a physical Hilbert space. It is important to note that there is no need to involve real physical systems during this stage. Thus, the learning is essentially a classical procedure. The subject of the learning is a set of positive operator valued measurements (POVM) \cite{Nielsen:00}. The set is constructed by making superpositions of the training data in such a way that each operator is responsible for the detection of one particular class. This procedure is linear and does not require solving equations or optimizing parameters. When the learning is over, there are two possibilities to achieve the required classification of new data. First, new data are encoded into the states of real quantum systems, which are measured by detectors adjusted in accordance with the learned POVM. Second, new data may be formally encoded into the states of quantum systems and processed with the POVM. Both ways allow one to achieve the classification. This paper is organized as follows. In the next section, we first overview the classical perceptron and discuss the origin of the restrictions on its learning capabilities.
After this, in Section~\ref{sec:2b}, we introduce the quantum perceptron and show its properties. We demonstrate, in Section~\ref{sec:3}, three examples of how the quantum perceptron \cor{is superior to} its classical counterpart in learning capabilities: complex logical function learning, classification of new data on previously unseen classes and recognition of superpositions of classes. We conclude in Section~\ref{sec:4}. \section{\label{sec:2} Basic Constructions} \subsection{\label{sec:2a} Rosenblatt's Perceptron} The operational structure of the classical perceptron is simple. Given an input vector $\textbf{x}$ (usually called a feature vector) consisting of $n$ features, the perceptron computes a weighted sum of its components $f(\textbf{x}) = \sum_i a_i x_i$, where the weights $a_i$ have been previously learned. The output of the perceptron is given by $o = {\rm sign} (f(\textbf{x}))$, where ${\rm sign}(...)$ is the sign function \begin{equation} \label{sign} {\rm sign} (y) = \left\{ \begin{array}{cc} +1 & y>0 \\ -1 & y \leq 0 \\ \end{array} \right. \, . \end{equation} Depending on the binary output signal $o \in \{ +1,-1\}$, the input feature vector $\textbf{x}$ is classified between two feature classes, one of which is associated with the output $o=+1$ and the other with the output $o=-1$. As we have mentioned above, the perceptron needs to be trained before its autonomous operation. During the training, a set of $P$ training data pairs $\{ \textbf{x}_i, d_i, i=1,...,P \}$ is given, where $\textbf{x}_i$ are the $n$-dimensional feature vectors and $d_i$ are the desired binary outputs. Typically, at the beginning of the learning procedure the initial weights $a_i$ of the linear function are generated randomly. When a data pair is chosen from the training set, the output $o_i = {\rm sign} (f(\textbf{x}_i))$ is computed from the input feature vector $\textbf{x}_i$ and is compared to the desired output $d_i$.
If the actual and the desired outputs match, $o_i = d_i$, the weights $a_i$ are left unchanged and the next pair from the data set is taken for the analysis. If $o_i \ne d_i$, the weights of the linear function are changed according to the error-correcting rule $\textbf{a}^\prime = \textbf{a} + \Delta\textbf{a} = \textbf{a} + (d_i - o_i) \textbf{x}_i$, which is applied repeatedly until the condition $o_i = d_i$ is met. The training procedure has a clear geometric interpretation. The weights $a_i$ of the linear function define a $\left( n-1 \right)$-dimensional hyperplane in the $n$-dimensional feature space. The training procedure results in a hyperplane that divides the feature space into two subspaces, so that each feature class occupies one of the subspaces. Due to this interpretation, the origin of the restrictions on the learning capabilities of the classical perceptron becomes visible: a hyperplane that separates the two classes may not exist. The simplest example of two classes that can not be linearly separated is the XOR logical function of two variables, which is given by the truth table \begin{equation} \label{XOR} \begin{array}{ccccc} x_1 & \; 0 & \; 0 & \; 1 & \; 1 \\ x_2 & \; 0 & \; 1 & \; 0 & \; 1 \\ f & \; 0 & \; 1 & \; 1 & \; 0 \\ o & -1 & +1 & +1 & -1 \\ \end{array} \, . \end{equation} A schematic representation of this function in the two-dimensional feature space is shown in Fig.~\ref{fig-2}. \begin{figure} \caption{The feature space of the XOR function is two-dimensional and discrete (each feature takes only the values 0 and 1). There is no line (a one-dimensional hyperplane) that separates the black and grey points. The classical perceptron is incapable of classifying the input feature vectors and, therefore, can not learn the XOR function.} \label{fig-2} \end{figure} There are, however, limitations on the learning capabilities of the perceptron even in the case when the separating hyperplane exists.
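The error-correcting rule above is easily sketched numerically. The following minimal NumPy implementation (an illustration, not code from the paper; the appended constant bias feature is an added assumption so the hyperplane need not pass through the origin) converges on the linearly separable AND function but, as expected from the discussion of the truth table (\ref{XOR}), never converges on XOR.

```python
import numpy as np

def train_perceptron(X, d, epochs=100):
    """Rosenblatt's error-correcting rule a' = a + (d_i - o_i) x_i.
    A constant bias feature is appended (an assumption of this sketch)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    a = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, di in zip(Xb, d):
            oi = 1 if xi @ a > 0 else -1   # o = sign(f(x)), sign(0) = -1
            if oi != di:
                a = a + (di - oi) * xi     # error-correcting update
                errors += 1
        if errors == 0:                    # every o_i = d_i: converged
            return a, True
    return a, False

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
_, ok_and = train_perceptron(X, [-1, -1, -1, +1])  # AND: separable
_, ok_xor = train_perceptron(X, [-1, +1, +1, -1])  # XOR: not separable
print(ok_and, ok_xor)  # True False
```

The failure on XOR is not a matter of too few epochs: no weight vector can classify all four inputs correctly, so the error count in a pass never reaches zero.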
As mentioned above, the hyperplane divides the feature space into two subspaces, even though the feature classes occupy two particular hypervolumes. This enforces the classification into the two learned classes even when the given feature vector is essentially different from both classes, i.e.\ forms a new class. It is very important to note that certain tasks undoable by Rosenblatt's perceptron, such as learning complex logical functions and classifying data with an overlap, can be performed in the framework of more sophisticated classical learning models, for example, by support vector machines \cite{Kecman:01}. However, these classical implementations always demand \textit{nonlinear} optimization, which rapidly becomes harder as the feature space grows. This effect is known as the curse of dimensionality of the classical learning models \cite{Kecman:01}. In the next section, we present a new model for the learning machine, which is \textit{linear}, but \cor{is superior to} Rosenblatt's perceptron in its learning capabilities. \subsection{\label{sec:2b} \cor{A quantum analog for Rosenblatt's perceptron}} Like its classical counterpart, the quantum perceptron must be trained to perform the classification task. Suppose we are given a set of $K$ training data pairs consisting of feature vectors $\{\textbf{x}_k, d_k, k=1,...,K \}$ with the desired binary outputs $d \in \{ +1,-1 \}$, where each feature vector consists of $n$ features $\textbf{x} = \{ x_1, x_2, ..., x_n \}$. \cor{Let us suppose that each feature is restricted to a certain interval, so that all features can be normalized to the unit interval $x_j^\prime \in \left[ 0,1\right]$ for $j=1,...,n$. This allows us to represent the input feature vectors through the states of a (discrete) $2^n$-dimensional quantum system, so that $\ket{\textbf{x}} = \ket{x_1^\prime,x_2^\prime,...,x_n^\prime}$.
With this quantum representation we have extended the classical $n$-dimensional feature space to the $2^n$-dimensional Hilbert space of the quantum system. We shall drop the ``primes'' hereafter, assuming that the features are normalized.} \cor{Let us construct a projection operator $\ket{\textbf{x}}\bra{\textbf{x}}$ for each given feature vector $\ket{\textbf{x}}$. With the help of these projectors, let us define two operators \begin{eqnarray} \label{operators-def} \nonumber P_{-1} & = & \frac{1}{N_{-1}} \sum_{d=-1} \ket{\textbf{x}}\bra{\textbf{x}} \, , \\[0.1cm] P_{+1} & = & \frac{1}{N_{+1}} \sum_{d=+1} \ket{\textbf{x}}\bra{\textbf{x}} \, , \end{eqnarray} where $N_{-1}$ and $N_{+1}$ are normalization factors. All feature vectors that correspond to the output $d=-1$ are summed in the operator $P_{-1}$, while all feature vectors corresponding to $d=+1$ are collected in $P_{+1}$.} The construction of these operators concludes the learning procedure. There are only four possibilities for how the operators $P_{-1}$ and $P_{+1}$ may be related: \textit{A.} The operators $P_{-1}$ and $P_{+1}$ are orthogonal, $P_{-1} P_{+1} =0$, and form a complete set, $P_{-1} + P_{+1} = I$, where $I$ is the identity operator. This means that there was no overlap between the training data, and the two classes $P_{-1}$ and $P_{+1}$ occupy the whole feature space. As a result, any input feature vector can be classified between the two classes with no mistake. This situation can in principle be simulated by the classical perceptron. \textit{B.} The operators $P_{-1}$ and $P_{+1}$ are orthogonal, $P_{-1} P_{+1} =0$, but do not form a complete set, $P_{-1} + P_{+1} \ne I$. This is an extremely interesting case. A third operator must be defined as $P_{0} = I - P_{-1} - P_{+1}$ to fulfill the POVM completeness requirement. The operator $P_{0}$ is, moreover, orthogonal to $P_{-1}$ and $P_{+1}$, because $P_{-1} P_{+1} =0$.
When operating autonomously, the quantum perceptron generates three outputs $d \in \{ +1, 0, -1 \}$: either the feature vector belongs to one of the previously seen classes, $d \in \{ +1,-1 \}$, or it is essentially different from the learned classes, $d = 0$ -- it belongs to a new, previously unseen class. Classification on previously unseen classes is an extremely hard learning problem, which can be done neither by the classical perceptron nor by most classical perceptron networks \cite{Kecman:01}. The quantum perceptron is capable of performing this task. Moreover, there will be no mistake in the classification between the three classes because of the orthogonality of the operators $P_{-1}, P_{+1}$ and $P_0$. \textit{C.} The operators $P_{-1}$ and $P_{+1}$ are not orthogonal, $P_{-1} P_{+1} \ne 0$, but form a complete set, $P_{-1} + P_{+1} = I$. In this case all the input data can be classified between the two classes with some nonzero probability of mistake. This is the case of probabilistic classification, which can not be done by the classical perceptron, although it can be performed by more sophisticated classical learning models. \textit{D.} The most general case is when the operators $P_{-1}$ and $P_{+1}$ are not orthogonal, $P_{-1} P_{+1} \ne 0$, and do not form a complete set, $P_{-1} + P_{+1} \ne I$. One again defines the third operator $P_0 = I - P_{-1} - P_{+1}$, which this time is not orthogonal to $P_{-1}$ and $P_{+1}$. In this situation, the quantum perceptron classifies all the input feature vectors into three classes, one of which is a new class, with some nonzero probability of mistake. This situation can not be simulated by the classical perceptron. The quantum perceptron learning rule has the following geometric interpretation. In contrast to the classical perceptron, which constructs a hyperplane separating the feature space into two subspaces, the quantum perceptron constructs two (hyper-)volumes in the physical Hilbert space.
These volumes are defined by the POVM operators (\ref{operators-def}). During the autonomous functioning, the POVM operators project the given feature vector $\ket{\psi}$ onto one of the volumes (or onto the space unoccupied by them), allowing us to perform the desired classification. For example, if $\bra{\psi} P_{-1} \ket{\psi} \neq 0$, while $\bra{\psi} P_{+1} \ket{\psi} = 0$ and $\bra{\psi} P_{0} \ket{\psi} = 0$, the feature vector $\ket{\psi}$ belongs to the class $d= -1$, and the probability of misclassification equals zero. If, in contrast, $\bra{\psi} P_{-1} \ket{\psi} \neq 0$, $\bra{\psi} P_{+1} \ket{\psi} \neq 0$ and $\bra{\psi} P_{0} \ket{\psi} = 0$, the feature vector belongs to the two classes with degrees defined by the corresponding expectation values $\bra{\psi} P_{-1} \ket{\psi}$ and $\bra{\psi} P_{+1} \ket{\psi}$. In the latter situation, one may perform a probabilistic classification according to the expectation values. \cor{We would like to stress that the construction of the operators (\ref{operators-def}) is by no means unique. There may be more sophisticated ways to construct the POVM set in order to ensure a better performance of the learning model for the classification problem at hand. In fact, our construction is the simplest linear model for a quantum learning machine. Only in this sense is the presented quantum perceptron the analog for Rosenblatt's perceptron, while their learning rules are essentially different. } As we mentioned in the Introduction, there are two ways to achieve the desired classification with the POVM. One may get real physical systems involved or use the POVM operators as a purely mathematical instrument. For clarity, the advantages of the first of these approaches will be discussed in Section~\ref{sec:3a} with particular examples, while in the rest of the next section we use the quantum perceptron as a purely mathematical tool.
\section{\label{sec:3} Applications} In spite of the extreme simplicity of its learning rule, the quantum perceptron may perform a number of tasks infeasible for the classical (Rosenblatt) perceptron. In this section we give three examples of such tasks. We start with logical function learning. Historically, the fact that the classical perceptron can not learn an arbitrary logical function was the main limitation on the learning capabilities of this linear model \cite{Minsky:69}. We show that the quantum perceptron, in contrast, is able to learn an arbitrary logical function irrespective of its kind and order. In Section~\ref{sec:3b}, we show that the quantum perceptron can, in certain cases, perform classification without previous training, the so-called unsupervised learning task. The classical perceptron, in contrast, can not perform this task by construction. Finally, in Section~\ref{sec:3c} we show that the quantum perceptron may recognize superpositions of previously learned classes. This task is of particular interest in applied medical engineering, where simultaneous and proportional myoelectric control of an artificial limb is a long-desired goal \cite{Jiang:12}. \subsection{\label{sec:3a} Logical Function Learning} Let us consider a particular example of a logical function -- XOR, which is given by the truth table (\ref{XOR}). During the learning session, we are given a set of four training data pairs $\{\textbf{x}_i, d_i, i=1,...,4 \}$, where the feature vector consists of two features $\textbf{x} \in \{ x_1, x_2 \}$, and the desired output $d \in \{ +1,-1 \}$ is a binary function. Let us represent the input features through the states of a two-dimensional quantum system -- qubit, so that each feature is given by one of the basis states $\ket{x_i} \in \{ \ket{0}, \ket{1} \}$ for $i=1,2$, where $\{ \ket{0}, \ket{1} \}$ denotes the computational basis \cor{for each feature}.
In the above representation, the feature vector $\textbf{x}$ is given by one of the four two-qubit states $\ket{x_1, x_2}$. Following the procedure described in Section~\ref{sec:2b}, the POVM operators are constructed as \begin{eqnarray} \label{pure-operators} \nonumber P_{-1} & = & \ket{0,0} \bra{0,0} + \ket{1,1} \bra{1,1} \, , \\[0.1cm] P_{+1} & = & \ket{0,1} \bra{0,1} + \ket{1,0} \bra{1,0} \, . \end{eqnarray} During its autonomous operation, the quantum perceptron may be given four basis states $\ket{x_1, x_2} \in \{ \ket{0,0}, \ket{0,1}, \ket{1,0}, \ket{1,1} \}$ as inputs. Since $\bra{x_1, x_2} P_{-1} \ket{x_1, x_2} \neq 0$ only for $\ket{x_1, x_2} \in \{ \ket{0,0}, \ket{1,1} \}$, these states are classified to $d= -1$, while the other two states $\{ \ket{0,1}, \ket{1,0}\}$ are classified to $d= +1$. The fact that the operators $P_{-1}$ and $P_{+1}$ are orthogonal ensures zero probability of misclassification, while the completeness of the set of operators guarantees classification of any input. Thus, the quantum perceptron has learned the XOR function. The successful learning of the XOR function by the quantum perceptron is a consequence of the representation of the classical feature vector $\textbf{x}$ through the two-qubit states. In the classical representation, the feature vectors can not be linearly separated in the plane, see Fig.~\ref{fig-2}. In the quantum representation, the four mutually orthogonal states $\ket{x_1, x_2}$ in the four-dimensional Hilbert space can be separated into two classes in an arbitrary fashion. This implies that an arbitrary logical function of two variables can be learned by the quantum perceptron. For example, learning the logical AND function leads to the construction of the operators $P_{-1} = \ket{0,0}\bra{0,0} + \ket{0,1}\bra{0,1} + \ket{1,0}\bra{1,0}$ and $P_{+1} = \ket{1,1} \bra{1,1}$.
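As a numerical check on the XOR construction above (an illustrative sketch, not code from the paper), one can build $P_{-1}$ and $P_{+1}$ from the two-qubit basis projectors and verify that they are orthogonal, complete, and classify the four inputs according to the truth table (\ref{XOR}).

```python
import numpy as np

# Two-qubit computational basis states |00>, |01>, |10>, |11> as vectors.
basis = {s: np.eye(4)[i] for i, s in enumerate(["00", "01", "10", "11"])}
proj = lambda v: np.outer(v, v)

# P_{-1} sums the projectors of the d = -1 examples; P_{+1} those of d = +1.
P_m1 = proj(basis["00"]) + proj(basis["11"])
P_p1 = proj(basis["01"]) + proj(basis["10"])

# Case A of the four cases above: orthogonal and complete, so the
# classification is exact.
assert np.allclose(P_m1 @ P_p1, np.zeros((4, 4)))
assert np.allclose(P_m1 + P_p1, np.eye(4))

def classify(x):
    """Return -1 or +1 according to which expectation value is nonzero."""
    v = basis[x]
    return -1 if v @ P_m1 @ v > 0 else +1

print([classify(s) for s in ["00", "01", "10", "11"]])  # [-1, 1, 1, -1]
```

The printed outputs reproduce the row $o$ of the truth table, i.e.\ the XOR function has been learned without any iterative weight updates.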
Moreover, an arbitrary logical function of an arbitrary number of inputs (arbitrary order) can also be learned by the quantum perceptron, because the number of possible inputs of such a function grows exponentially as $2^n$ with the order $n$ of the function, exactly as fast as the dimensionality of the Hilbert space needed to represent the logical function. \cor{In the above discussion the need to use real quantum systems has not emerged.} Let us now consider a situation in which one can benefit from utilizing real quantum systems. Let us slightly modify the problem of XOR learning. In real-life learning tasks the training data may be corrupted by noise \cite{Kecman:01}. In some cases, noise may lead to overlapping of the training data, which results in misclassification of feature vectors during the training stage and during further autonomous functioning. For example, if, during the XOR learning, there is a small finite probability $\delta$ that the feature $x_1$ takes a wrong binary value, but the other feature and the desired output are not affected by noise, then, after a large number of training examples (which are usually required when learning from noisy data), the POVM operators are given by \begin{eqnarray} \label{noisy-operators} {P'}_{-1} & = & (1-\delta) \left( \ket{0,0} \bra{0,0} + \ket{1,1} \bra{1,1} \right) \nonumber \\[0.1cm] & & \hspace{0.5cm} + \; \delta \left(\ket{0,1} \bra{0,1} + \ket{1,0} \bra{1,0} \right) \, , \nonumber \\[0.1cm] {P'}_{+1} & = & (1-\delta) \left( \ket{0,1} \bra{0,1} + \ket{1,0} \bra{1,0}\right) \nonumber \\[0.1cm] & & \hspace{0.5cm} + \; \delta \left( \ket{0,0} \bra{0,0} + \ket{1,1} \bra{1,1}\right) \, . \end{eqnarray} The operators ${P'}_{-1}$ and ${P'}_{+1}$ are not orthogonal, ${P'}_{-1} {P'}_{+1} \ne 0$, in contrast to the operators (\ref{pure-operators}), \cor{but still form a complete set.} This means that during the autonomous operation of the quantum perceptron, the input feature vectors can be misclassified.
Nevertheless, each feature vector is classified between the two classes and, on average, most of the feature vectors are classified correctly. This means that the quantum perceptron simulates the XOR function with a degree of accuracy given by $1 - \delta$. If we use real physical systems to encode feature vectors during the autonomous functioning of the perceptron and measure the states of the systems with an experimental setup adjusted in accordance with the POVM (\ref{noisy-operators}), we can perform a probabilistic classification. Moreover, we can exactly (in a probabilistic sense) reproduce the fluctuations that have been observed during the training. In a certain sense such learning is too accurate, which may nevertheless be of use in some cases. In any case, the classical perceptron can not perform any similar task. \cor{It is, however, important to note that practical simulation of the quantum perceptron with real physical systems may not always be possible. In Section \ref{sec:2b} we discussed situations when the operators $P_{-1}$ and $P_{+1}$ do not form a complete set, and constructed the third operator $P_0 = I - P_{-1} - P_{+1}$. It is possible in principle that the constructed operator $P_0$ is negative, i.e.\ unphysical. This means that the classification problem at hand can not be physically simulated with our linear model, although the problem may still be treated mathematically within the quantum perceptron approach.} \cor{In this section we have seen how quantum representation and quantum measurements contribute to the advanced learning abilities of the quantum perceptron. Even without these features, however, the quantum perceptron is superior to its classical counterpart in learning capabilities due to the specific algebraic structure of the POVM operators.
In the following sections we provide two examples where the advanced learning relies only on the structure of the POVM set.} \subsection{\label{sec:3b} Unsupervised Learning} The (supervised) learning stage has been embedded into the quantum perceptron by analogy with the classical perceptron. Surprisingly, however, the learning rule of the quantum perceptron allows performing learning tasks beyond the supervised learning paradigm. Suppose, for example, that we are given an unlabeled set of feature vectors and need to find a possible structure of this set, i.e.\ we need to answer whether there are any feature classes in the set. The following protocol allows us to resolve such an unsupervised learning task under certain conditions. \cor{Given the first feature vector $\ket{\textbf{x}_1}$ from the set, let us define two classes with the POVM operators \begin{eqnarray} \label{operators-unsup-learn} \nonumber P^{(0)}_{-1} & = & \ket{\textbf{x}_1}\bra{\textbf{x}_1} \, , \\[0.1cm] P^{(0)}_{+1} & = & I - P^{(0)}_{-1} \, , \end{eqnarray} where $I$ is the identity operator. Here, the class $d=+1$ is formally defined as ``not $d=-1$''. The next given feature vector $\ket{\textbf{x}_2}$ is tested to belong to one of these classes. If $\bra{\textbf{x}_2} P^{(0)}_{-1} \ket{\textbf{x}_2} > \bra{\textbf{x}_2} P^{(0)}_{+1} \ket{\textbf{x}_2}$, the feature vector $\ket{\textbf{x}_2}$ is close enough to $\ket{\textbf{x}_1}$ and thus belongs to class $d=-1$. In this case the POVM operators (\ref{operators-unsup-learn}) are updated to \begin{eqnarray} \label{operators-unsup-learn-first-it-1} \nonumber P^{(1)}_{-1} & = & \ket{\textbf{x}_1}\bra{\textbf{x}_1} + \ket{\textbf{x}_2}\bra{\textbf{x}_2}\, , \\[0.1cm] P^{(1)}_{+1} & = & I - P^{(1)}_{-1} \, .
\end{eqnarray} If, in contrast, $\bra{\textbf{x}_2} P^{(0)}_{-1} \ket{\textbf{x}_2} \leq \bra{\textbf{x}_2} P^{(0)}_{+1} \ket{\textbf{x}_2}$, the feature vector $\ket{\textbf{x}_2}$ is sufficiently distant from $\ket{\textbf{x}_1}$ and therefore can be assigned a new class $d=+1$. Having found the first representative of the $d=+1$ class, we may update the formal definition of $P^{(0)}_{+1}$, introducing a new POVM set \begin{eqnarray} \label{operators-unsup-learn-first-it-2} \nonumber P^{(1)}_{-1} & = & \ket{\textbf{x}_1}\bra{\textbf{x}_1} \, , \\[0.1cm] P^{(1)}_{+1} & = & \ket{\textbf{x}_2}\bra{\textbf{x}_2} \, . \end{eqnarray} This procedure is repeated iteratively until all the feature vectors are classified between the two classes $d=-1$ and $d=+1$.} The above protocol will work only if there are at least two feature vectors $\ket{\textbf{x}}$ and $\ket{\textbf{y}}$ in the given feature set such that $\bra{\textbf{x}} (I-2P) \ket{\textbf{x}} \geq 0$, where $P = \ket{\textbf{y}} \bra{\textbf{y}}$. In the opposite case, unsupervised learning within this protocol is not possible. Moreover, the classification crucially depends on the order of the examples, because the first seen feature vectors define the classes. This situation is, however, typical for unsupervised learning models \cite{Kecman:01}. \cor{To reduce the dependence of the classification on the order in which the feature vectors appear, it is possible to repeat the learning many times with different orderings of the input feature vectors and compare the results of the classification.} In spite of the above limitations, unsupervised classification can in principle be performed by the quantum perceptron, while this task is undoable for the classical perceptron. \subsection{\label{sec:3c} Simultaneous and Proportional Myoelectric Control} The problem of signal classification has found remarkable applications in medical engineering.
It is known that muscle contraction in the human body is governed by electrical neural signals. These signals can be acquired by different means \cite{Parker:04}, but are typically summarized into a so-called electromyogram (EMG). In principle, by processing the EMG, one may predict the muscular response to the neural signals and the subsequent response of the body. This idea is widely used in many applications, including myoelectric-controlled artificial limbs, where the surface EMG is recorded from the remnant muscles of the stump and used, after processing, for activating certain prosthetic functions of the artificial limb, such as hand open/close \cite{Jiang:12}. Despite decades of research and development, however, none of the commercial prostheses uses a pattern-classification-based controller \cite{Jiang:12}. The main limitation on successful practical application of pattern classification for myoelectric control is that it leads to a very unnatural control scheme. While natural movements are continuous and require activations of several degrees of freedom (DOF) simultaneously and proportionally, classical schemes for pattern recognition allow only sequential control, i.e.\ activation of only one class that corresponds to a particular action in one decision \cite{Jiang:12}. Simultaneous activation of two DOFs is thus recognized as a new class of action, but not as a combination of known actions. Moreover, all these classes as well as their superpositions must be previously learned. This leads to higher rehabilitation cost and more frustration of the user, who must spend hours in a lab learning to control the artificial limb. Recently, we have taken the quantum perceptron approach to the problem of simultaneous and proportional myoelectric control \cite{Siomau:13}. We considered a very simple control scheme, where two quantum perceptrons were controlling two degrees of freedom of a wrist prosthesis.
We took EMG signals with the corresponding angles of the wrist position from an able-bodied subject performing wrist contractions. For the training we used only those EMG that correspond to the activation of a single DOF. During the test, the control scheme was given EMG activating multiple DOFs. We found that in 45 of 55 data blocks the actions were recognized correctly with accuracy exceeding $73\%$, which is comparable to the accuracy of the classical classification schemes. In the above example, \cor{we used a specific representation of the feature vectors. Since the features (i.e.\ the neural signals) are real and positive numbers, there was no need to expand the feature space. Moreover, in general it is not possible to scale a given feature to the unit interval, because the neural signals observed during the learning and the autonomous functioning may differ significantly in amplitude, and \textit{a priori} scaling may lead to incorrect operation of the artificial limb. Therefore, the amplitude of a signal was normalized over the amplitudes from all the channels to ensure proportional control of the prosthesis. In fact, the specific structure of the POVM set was the only feature of the quantum perceptron that we used. With this feature alone we were able to recognize 4 original classes observed during the training and 4 new (previously unseen) classes that correspond to the simultaneous activation of two DOFs.} In general, within the above control scheme, $n$ quantum perceptrons are able to recognize $2n$ original classes with $(2n)!/[2(2n-2)!]-n$ additional two-class superpositions of these classes. In contrast, $n$ classical perceptrons may recognize only the $2n$ classes that were seen during the learning. \cor{The advantage of the quantum perceptron over the classical perceptron can be understood from the geometric interpretation discussed in Section~\ref{sec:2b}.
While $n$ classical perceptrons construct $n$ hyperplanes in the feature space, which separate the feature space into $2n$ non-overlapping classes, $n$ quantum perceptrons build $n$ hypervolumes, which may not fill the whole feature space and may overlap.} \section{\label{sec:4} Conclusion} Bridging quantum information science and machine learning theory, we have shown that the capabilities of an autonomous learning automaton can be dramatically increased using the quantum information formalism. We have constructed the simplest linear quantum model of a learning machine, which, however, \cor{is superior to} its classical counterpart in learning capabilities. \cor{Owing to the quantum representation of the feature vectors, the probabilistic nature of quantum measurements and the specific structure of the POVM set, the quantum perceptron is capable of learning an arbitrary logical function, performing probabilistic classification, recognizing superpositions of previously seen classes and even classifying into previously unseen classes. Since all classical learning models trace back to Rosenblatt's perceptron, we hope that the linear quantum perceptron will serve as a basis for the future development of practically powerful quantum learning models, especially in the domain of nonlinear classification problems.} \begin{acknowledgments} This project is supported by KACST. \end{acknowledgments} \end{document}
\begin{document} \date{} \title{Stochastic HJB Equations and Regular Singular Points} \begin{abstract} In this paper we show that some HJB equations arising from both finite and infinite horizon stochastic optimal control problems have a regular singular point at the origin. This makes them amenable to solution by power series techniques. This extends the work of Al'brekht, who showed that the HJB equations of an infinite horizon deterministic optimal control problem can have a regular singular point at the origin and solved the HJB equations by power series, degree by degree. In particular, we show that the infinite horizon stochastic optimal control problem with linear dynamics, quadratic cost and bilinear noise leads to a new type of algebraic Riccati equation which we call the Stochastic Algebraic Riccati Equation (SARE). If SARE can be solved then one has a complete solution to this infinite horizon stochastic optimal control problem. We also show that a finite horizon stochastic optimal control problem with linear dynamics, quadratic cost and bilinear noise leads to a Stochastic Differential Riccati Equation (SDRE) that is well known. If these problems are the linear-quadratic-bilinear part of a nonlinear finite horizon stochastic optimal control problem then we show how the higher degree terms of the solutions can be computed degree by degree. To our knowledge this computation is new. \end{abstract} \section{Linear Quadratic Regulator with Bilinear Noise} \setcounter{equation}{0} Consider an infinite horizon, discounted, stochastic Linear Quadratic Regulator with Bilinear Noise (LQGB), \begin{eqnarray*} \min_{u(\cdot)} {1\over 2} {\rm E}\int_0^\infty e^{-\alpha t}\left(x'Qx+2x'Su+u'Ru\right) \ d t \end{eqnarray*} subject to \begin{eqnarray*} dx&=& (Fx+Gu)\ dt+ \sum_{k=1}^r (C_{k} x+D_k u )\ dw_k\\ x(0)&=&x^0 \end{eqnarray*} In a previous version of this paper we studied the case with $D_k=0$ \cite{Kr18}.
The state $x$ is $n$ dimensional, the control $u$ is $m$ dimensional and $w(t)$ is standard $r$ dimensional Brownian motion. The matrices are sized accordingly; in particular $C_{k}$ is an $n\times n$ matrix and $D_k$ is an $n \times m$ matrix for each $k=1,\ldots,r$. The discount factor is $\alpha\ge 0$. To the best of our knowledge such problems have not been considered before. The finite horizon version of this problem can be found in Chapter 6 of the excellent treatise by Yong and Zhou \cite{YZ99}. We will also treat finite horizon problems in Section \ref{FH}, but not in the same generality as Yong and Zhou. Throughout this note we will require that the coefficient of the noise is $O(x,u)$. Yong and Zhou allow the coefficient to be $O(1)$ in their linear-quadratic problems. The reason why we require $O(x,u)$ is that then the associated stochastic Hamilton-Jacobi-Bellman equations for nonlinear extensions of LQGB have regular singular points at the origin. Hence they are amenable to solution by power series techniques. If the noise is $O(1)$ these power series techniques have closure problems: the equations for the lower degree terms depend on the higher degree terms. If the coefficient of the noise is $O(x,u)$ then the equations can be solved degree by degree. A first order partial differential equation with independent variable $x$ has a regular singular point at $x=0$ if the coefficients of the first order partial derivatives are $O(x)$. A second order partial differential equation has a regular singular point at $x=0$ if the coefficients of the first order partial derivatives are $O(x)$ and the coefficients of the second order partial derivatives are $O(x)^2$. For more on regular singular points we refer the reader to \cite{BD09}.
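As a minimal one-dimensional illustration of this degree-by-degree structure (our example, not taken from the references; $n=m=1$, no noise, $\alpha=0$):

```latex
% Scalar HJB:  0 = \min_u \{ \pi'(x)(fx+gu) + \tfrac12(qx^2+ru^2) \}.
% The coefficient fx+gu of \pi' is O(x,u), so x=0 is a regular singular point.
% Substituting \pi(x)=\tfrac12 p x^2+\pi^{[3]}(x)+\cdots and
% u=kx+\kappa^{[2]}(x)+\cdots and collecting the degree-two terms gives
\begin{eqnarray*}
0 &=& 2pf + q - \frac{g^2 p^2}{r}, \qquad k \;=\; -\frac{gp}{r},
\end{eqnarray*}
% the scalar ARE and gain, while each higher degree d yields a linear
% equation for \pi^{[d]}(x) whose data involve only lower degree terms.
```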
If we can find a smooth scalar valued function $\pi(x)$ and a smooth $m$ vector valued function $\kappa(x)$ satisfying the discounted stochastic Hamilton-Jacobi-Bellman equations (SHJB) \begin{eqnarray}\label{shjb1} 0&=& \mbox{min}_u \left\{ -\alpha \pi(x) +\frac{\partial \pi}{\partial x}(x) (Fx+Gu)+{1\over 2} \left( x'Qx+2x'Su+u'Ru\right)\right. \nonumber \\ &&\left. +{1\over 2}\sum_{k=1}^r (x'C'_k+u'D'_k) \frac{\partial^2 \pi}{\partial x^2}(x) (C_kx+D_ku) \right\} \\ \nonumber \kappa(x)&=& \mbox{argmin}_u \left\{ \frac{\partial \pi}{\partial x}(x) (Fx+Gu)+{1\over 2} \left( x'Qx+2x'Su+u'Ru\right)\right. \\ &&\left.+{1\over 2}\sum_{k=1}^r (x'C'_k+u'D'_k) \frac{\partial^2 \pi}{\partial x^2}(x) (C_kx+D_ku)\right\} \label{shjb2} \end{eqnarray} then by a standard verification argument \cite{FR75} one can show that $\pi(x^0)$ is the optimal cost of starting at $x^0$ and $u(0)=\kappa(x^0)$ is the optimal control at $x^0$. We make the standard assumptions of deterministic LQR: \begin{itemize} \item The matrix \begin{eqnarray*} \left[\begin{array}{cc} Q&S\\S'&R\end{array}\right] \end{eqnarray*} is nonnegative definite. \item The matrix $R$ is positive definite. \item The pair $F$, $G$ is stabilizable. \item The pair $Q^{1/2}$, $F$ is detectable. \end{itemize} Because of the linear dynamics and quadratic cost, we expect that $\pi(x)$ is a quadratic function of $x$ and $\kappa(x)$ is a linear function of $x$, \begin{eqnarray*} \pi(x)&=& {1\over 2}x'Px\\ \kappa(x)&=& Kx \end{eqnarray*} Then the stochastic Hamilton-Jacobi-Bellman equations (\ref{shjb1}, \ref{shjb2}) simplify to \begin{eqnarray} 0&=&-\alpha P +PF+F'P +Q -K'RK\nonumber\\ &&+\sum_{k=1}^r \left(C'_k+K'D'_k\right)P\left(C_k+D_kK\right) \label{sare} \\ K&=&-\left(R+\sum_{k=1}^rD'_kPD_k\right)^{-1}\left(G'P+S'\right) \label{K} \end{eqnarray} We call these equations (\ref{sare}, \ref{K}) the Stochastic Algebraic Riccati Equations (SARE).
They reduce to the deterministic Algebraic Riccati Equations (ARE) if $C_k=0$ and $D_k=0$. Here is an iterative method for solving SARE. Let $P_{(0)}$ be the solution of the deterministic ARE \begin{eqnarray*} 0&=& -\alpha P_{(0)}+ P_{(0)}F+F'P_{(0)}+Q-(P_{(0)}G+S)R^{-1}(G'P_{(0)}+S') \end{eqnarray*} and let \begin{eqnarray*} K_{(0)}&=&-R^{-1}(G'P_{(0)}+S') \end{eqnarray*} Given $P_{(\tau-1)}$ define \begin{eqnarray*} Q_{(\tau)}&=& Q+\sum_{k=1}^r C'_k P_{(\tau-1)}C_k\\ R_{(\tau)}&=& R+\sum_{k=1}^r D'_k P_{(\tau-1)}D_k\\ S_{(\tau)}&=& S+\sum_{k=1}^r C'_k P_{(\tau-1)}D_k \end{eqnarray*} Let $P_{(\tau)}$ be the solution of \begin{eqnarray*} 0&=& -\alpha P_{(\tau)} +P_{(\tau)}F+F'P_{(\tau)}+Q_{(\tau)}-(P_{(\tau)}G+S_{(\tau)})R_{(\tau)}^{-1}(G'P_{(\tau)}+S'_{(\tau)}) \end{eqnarray*} and \begin{eqnarray*} K_{(\tau)}&=&-R_{(\tau)}^{-1}\left(G'P_{(\tau)}+S_{(\tau)}'\right) \end{eqnarray*} If the iteration on $P_{(\tau)}$ nearly converges, that is, for some $\tau$, $P_{(\tau)}\approx P_{(\tau-1)}$, then $P_{(\tau)}$ and $K_{(\tau)}$ are approximate solutions to SARE. The solution $P_{(\tau)}$ of each deterministic ARE is the kernel of the optimal cost of a deterministic LQR, and since \begin{eqnarray*} \left[\begin{array}{cc} Q& S\\ S'& R\end{array}\right] \le \left[\begin{array}{cc} Q_{(\tau-1)}& S_{(\tau-1)}\\S'_{(\tau-1)}& R_{(\tau-1)}\end{array}\right] \le \left[\begin{array}{cc} Q_{(\tau)}& S_{(\tau)}\\S'_{(\tau)}& R_{(\tau)}\end{array}\right] \end{eqnarray*} it follows that $P_{(0)}\le P_{(\tau-1)} \le P_{(\tau)}$; the iteration is monotonically increasing. We have found computationally that if the matrices $C_k$ and $D_k$ are not too big then the iteration converges. But if the $C_k$ and $D_k$ are about the same size as $F$ and $G$ or larger, the iteration can diverge. Further study of this issue is needed. The iteration does converge in the following simple example.
\section{LQGB Example} \setcounter{equation}{0} Here is a simple example with $n=2,m=1,r=2$. \begin{eqnarray*} \min_u {1\over 2}\int_0^\infty \|x\|^2+u^2\ dt \end{eqnarray*} subject to \begin{eqnarray*} dx_1&=& x_2\ dt+0.1 x_1 \ dw_1\\ dx_2&=&u\ dt+0.1 (x_2 +u)\ dw_2 \end{eqnarray*} In other words \begin{eqnarray*} Q=\left[\begin{array}{cc} 1&0\\0&1\end{array}\right],& S=\left[\begin{array}{c} 0\\0\end{array}\right], &R=1\\ F= \left[\begin{array}{cc} 0&1\\0&0\end{array}\right],& G=\left[\begin{array}{c} 0\\1\end{array}\right]& \\ C_1=\left[\begin{array}{cc} 0.1&0\\0&0\end{array}\right], &C_2=\left[\begin{array}{cc} 0&0\\0&0.1\end{array}\right]&\\ D_1=\left[\begin{array}{c} 0\\0\end{array}\right], &D_2=\left[\begin{array}{c} 0\\0.1\end{array}\right]& \end{eqnarray*} The solution of the noiseless ARE is \begin{eqnarray*} P&=& \left[\begin{array}{cc} 1.7321 & 1.0000\\1.0000&1.7321\end{array}\right] \\ K&=&-\left[\begin{array}{cc} 1.0000& 1.7321\end{array}\right] \end{eqnarray*} The eigenvalues of the noiseless closed loop matrix $F+GK$ are $-0.8660\pm0.5000i$. The above iteration converges to the solution of the noisy SARE in eight iterations; the solution is \begin{eqnarray*} P&=& \left[\begin{array}{cc} 1.7625 & 1.0176\\1.0176&1.7524\end{array}\right]\\ K&=&-\left[\begin{array}{cc} 1.0176&1.7524\end{array}\right] \end{eqnarray*} The eigenvalues of the noisy closed loop matrix $F+GK$ are $-0.8762\pm 0.4999i$. As expected, the noisy system is more difficult to control than the noiseless system. It should be noted that the above iteration diverged to infinity when the noise coefficients were increased from $0.1$ to $1$.
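The iteration described above can be sketched in Python (our sketch, not the authors' Matlab code; the function names are ours, each inner deterministic ARE is solved by the standard Hamiltonian invariant-subspace method, and we take $S=0$, consistent with the stated cost $\frac12\int \|x\|^2+u^2\,dt$):

```python
import numpy as np

def solve_are(F, G, Q, R, S, alpha=0.0):
    """Solve 0 = -alpha*P + PF + F'P + Q - (PG+S) R^{-1} (G'P+S')
    via the stable invariant subspace of the Hamiltonian matrix."""
    n = F.shape[0]
    A = F - 0.5 * alpha * np.eye(n)      # absorb the discount into A
    Ri = np.linalg.inv(R)
    As = A - G @ Ri @ S.T                # remove the cross term
    Qs = Q - S @ Ri @ S.T
    H = np.block([[As, -G @ Ri @ G.T], [-Qs, -As.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # n stable eigenvectors [X; Y]
    X, Y = stable[:n, :], stable[n:, :]
    return np.real(Y @ np.linalg.inv(X))

def sare_iteration(F, G, Q, R, S, Cs, Ds, alpha=0.0, iters=100, tol=1e-12):
    """Fixed-point iteration P_(tau) from the text: refresh Q,R,S with the
    noise corrections and re-solve a deterministic ARE until P stabilizes."""
    P = solve_are(F, G, Q, R, S, alpha)
    for _ in range(iters):
        Qt = Q + sum(C.T @ P @ C for C in Cs)
        Rt = R + sum(D.T @ P @ D for D in Ds)
        St = S + sum(C.T @ P @ D for C, D in zip(Cs, Ds))
        P_new = solve_are(F, G, Qt, Rt, St, alpha)
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    Rt = R + sum(D.T @ P @ D for D in Ds)
    St = S + sum(C.T @ P @ D for C, D in zip(Cs, Ds))
    K = -np.linalg.inv(Rt) @ (G.T @ P + St.T)
    return P, K

# Data of the LQGB example (S = 0 per the stated cost).
F = np.array([[0., 1.], [0., 0.]]); G = np.array([[0.], [1.]])
Q = np.eye(2); R = np.array([[1.]]); S = np.zeros((2, 1))
Cs = [np.array([[0.1, 0.], [0., 0.]]), np.array([[0., 0.], [0., 0.1]])]
Ds = [np.zeros((2, 1)), np.array([[0.], [0.1]])]
P, K = sare_iteration(F, G, Q, R, S, Cs, Ds)
```

At convergence the pair $(P,K)$ satisfies the fixed-point Riccati equation with $Q_{(\tau)}, R_{(\tau)}, S_{(\tau)}$ evaluated at $P$ itself, which lets the result be checked by its residual independently of any printed digits.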
\section{Nonlinear Infinite Horizon HJB} Suppose the problem is not linear-quadratic: the dynamics is given by an Ito equation \begin{eqnarray*} dx&=& f(x,u) \ dt +\sum_{k=1}^r\gamma_k(x,u) \ dw_k \end{eqnarray*} and the criterion to be minimized is \begin{eqnarray*} \min_{u(\cdot)} {\rm E}\int_0^\infty e^{-\alpha t}l(x,u)\ d t \end{eqnarray*} We assume that $f(x,u), \gamma_k(x,u), l(x,u)$ are smooth functions that have Taylor polynomial expansions around $x=0,u=0$, \begin{eqnarray*} f(x,u)&=& Fx+Gu+f^{[2]}(x,u)+\ldots+f^{[d]}(x,u)+O(x,u)^{d+1}\\ \gamma_k(x,u)&=& C_kx+D_ku+\gamma_k^{[2]}(x,u)+\ldots+\gamma_k^{[d]}(x,u)+O(x,u)^{d+1}\\ l(x,u)&=&{1\over 2}\left(x'Qx+2x'Su+u'Ru\right) +l^{[3]}(x,u)+\ldots+l^{[d+1]}(x,u)+O(x,u)^{d+2} \end{eqnarray*} where $^{[d]}$ indicates the homogeneous polynomial terms of degree $d$. Then the discounted stochastic Hamilton-Jacobi-Bellman equations become \begin{eqnarray} 0&=& \mbox{min}_u \left\{ -\alpha \pi(x) +\frac{\partial \pi}{\partial x}(x) f(x,u)+l(x,u)\right. \label{shjb3} \nonumber \\ &&\left. +{1\over 2}\sum_{k=1}^r \gamma'_k(x,u) \frac{\partial^2 \pi}{\partial x^2}(x) \gamma_k(x,u)\right\} \\ \kappa(x)&=& \mbox{argmin}_u \left\{ -\alpha \pi(x) +\frac{\partial \pi}{\partial x}(x) f(x,u)+l(x,u)\right. \nonumber \\ \label{shjb4} &&\left. +{1\over 2}\sum_{k=1}^r \gamma'_k(x,u) \frac{\partial^2 \pi}{\partial x^2}(x) \gamma_k(x,u)\right\} \end{eqnarray} If the control enters the dynamics affinely, \begin{eqnarray*} f(x,u)&=& f^0(x) +f^u(x)u\\ \gamma_k(x,u)&=&\gamma^0_k(x)+\gamma^u_k(x)u \end{eqnarray*} and $l(x,u)$ is strictly convex in $u$ for every $x$, then the quantity to be minimized in (\ref{shjb3}) is strictly convex in $u$.
If we assume that (\ref{shjb3}) is strictly convex in $u$ then the HJB equations (\ref{shjb3}, \ref{shjb4}) simplify to \begin{eqnarray} 0&=&-\alpha \pi(x) +\frac{\partial \pi}{\partial x}(x) f(x,\kappa(x))+l(x,\kappa(x))\label{shjb5} \\ && \nonumber+{1\over 2}\sum_{k=1}^r \gamma'_k(x,\kappa(x)) \frac{\partial^2 \pi}{\partial x^2}(x) \gamma_k(x,\kappa(x)) \\ 0&=&\frac{\partial \pi}{\partial x}(x) \frac{\partial f}{\partial u}(x,\kappa(x))+\frac{\partial l}{\partial u}(x,\kappa(x)) \label{shjb6} \\&& +\sum_{k=1}^r \gamma'_k(x,\kappa(x)) \frac{\partial^2 \pi}{\partial x^2}(x) \frac{\partial \gamma_k}{\partial u}(x,\kappa(x)) \nonumber \end{eqnarray} Because $f(x,u)=O(x,u)$ and $\gamma_k(x,u)=O(x,u)$, (\ref{shjb5}) has a regular singular point at $x=0,u=0$ and so is amenable to power series solution techniques. If $\gamma_k(x,u)=O(1)$ then there is persistent noise that must be overcome by persistent control action. Presumably then the optimal cost is infinite. Following Al'brekht \cite{Al61} we assume that the optimal cost $\pi(x)$ and the optimal feedback $\kappa(x)$ have Taylor polynomial expansions \begin{eqnarray*} \pi(x)&=& {1\over 2}x'Px +\pi^{[3]}(x)+\ldots+\pi^{[d+1]}(x)+O(x)^{d+2}\\ \kappa(x)&=&Kx+\kappa^{[2]}(x)+\ldots+\kappa^{[d]}(x)+O(x)^{d+1} \end{eqnarray*} We plug all these expansions into the simplified SHJB equations (\ref{shjb5}, \ref{shjb6}). At the lowest degrees, degree two in (\ref{shjb5}) and degree one in (\ref{shjb6}), we get the familiar SARE (\ref{sare}, \ref{K}). If (\ref{sare}, \ref{K}) are solvable then we may proceed to the next degrees, degree three in (\ref{shjb5}) and degree two in (\ref{shjb6}).
\begin{eqnarray} 0&=&\frac{\partial \pi^{[3]}}{\partial x}(x) (F+GK)x+x'Pf^{[2]}(x,Kx)+l^{[3]}(x,Kx) \label{shjb7}\\&& +{1\over 2} \sum_k x'(C'_k +K'D'_k) \frac{\partial^2 \pi^{[3]}}{\partial x^2}(x) (C_k+D_kK) x \nonumber \\&&+\sum_k x'(C'_k +K'D'_k)P\gamma_k^{[2]}(x,Kx)\nonumber \\ \nonumber \\ 0&=& \frac{\partial \pi^{[3]}}{\partial x}(x) G +x'P\frac{\partial f^{[2]}}{\partial u}(x,Kx)+\frac{\partial l^{[3]}}{\partial u}(x,Kx) \label{shjb8} \\&&+\sum_k x'(C_k+D_kK)'\left(P\frac{\partial \gamma^{[2]}_k}{\partial u}(x,Kx)+ \frac{\partial^2 \pi^{[3]}}{\partial x^2}(x) D_k\right) \nonumber \\&& +\sum_k (\gamma^{[2]}_k(x,Kx))'PD_k \nonumber +(\kappa^{[2]}(x))'\left(R+\sum_k D'_kPD_k\right) \nonumber \end{eqnarray} Notice that the first equation (\ref{shjb7}) is a square linear system for the unknown $\pi^{[3]}(x)$; the other unknown $\kappa^{[2]}(x)$ does not appear in it. If we can solve the first equation (\ref{shjb7}) for $\pi^{[3]}(x)$ then we can solve the second equation (\ref{shjb8}) for $\kappa^{[2]}(x)$, because $R$ is positive definite by the standard assumptions and $P\ge 0$, so $R+\sum_k D'_kPD_k$ is positive definite and hence invertible. In the deterministic case the eigenvalues of the linear operator \begin{eqnarray} \label{dop} \pi^{[3]}(x) &\mapsto& \frac{\partial \pi^{[3]}}{\partial x}(x) (F+GK)x \end{eqnarray} are the sums of three eigenvalues of $F+GK$. Under the standard LQR assumptions all the eigenvalues of $F+GK$ are in the open left half plane, so any sum of three eigenvalues of $F+GK$ is different from zero and the operator (\ref{dop}) is invertible.
In the stochastic case the relevant linear operator is a sum of two operators \begin{eqnarray} \label{sop} \pi^{[3]}(x) &\mapsto& \frac{\partial \pi^{[3]}}{\partial x}(x) (F+GK)x \\&& +{1\over 2} \sum_k x'(C'_k +K'D'_k) \frac{\partial^2 \pi^{[3]}}{\partial x^2}(x) (C_k+D_kK) x \nonumber \end{eqnarray} Consider a simple version of the second operator, for some $C$, \begin{eqnarray} \label{sim} \pi^{[3]}(x) &\mapsto&{1\over 2} x'C'\frac{\partial^2 \pi^{[3]}}{\partial x^2}(x)Cx \end{eqnarray} Suppose $C$ has a complete set of left eigenpairs, $\lambda_i\in \mathbb{C},\ w^i\in \mathbb{C}^{1\times n}$ for $i=1,\ldots,n$, \begin{eqnarray*} w^i C&=& \lambda_i w^i \end{eqnarray*} Then the eigenvalues of (\ref{sim}) are of the form $\lambda_{i_1}\lambda_{i_2}+\lambda_{i_2}\lambda_{i_3}+ \lambda_{i_3}\lambda_{i_1}$ and the corresponding eigenvectors are $(w^{i_1}x)(w^{i_2}x)(w^{i_3}x)$ for $1\le i_1\le i_2\le i_3\le n$. But this analysis does not completely clarify whether the operator (\ref{sop}) is invertible. Here is one case where it is known to be invertible. Consider the space of cubic polynomials $\pi(x)$. We can norm this space using the standard $L_2$ norm on the vector of coefficients of $\pi(x)$, which we denote by $\|\pi(x)\|$. Then there is an induced norm on operators like (\ref{dop}), (\ref{sop}) and \begin{eqnarray} \label{pop} \pi^{[3]}(x) &\mapsto& {1\over 2} \sum_k x'(C'_k +K'D'_k) \frac{\partial^2 \pi^{[3]}}{\partial x^2}(x) (C_k+D_kK) x \end{eqnarray} Since the operator (\ref{dop}) is invertible, its inverse has an operator norm $\rho<\infty$. If all the eigenvalues of $F+GK$ have real parts less than $-\tau$ then ${1 \over \rho} \ge 3\tau$. Let $\sigma$ be the supremum of the operator norms of $C_k+D_kK$ for $k=1,\ldots, r$.
Then from the discussion above we know that the operator norm of (\ref{pop}) is bounded above by ${3r\sigma^2\over 2}$. \begin{lemma} If $\tau > {r\sigma^2 \over 2}$ then the operator (\ref{sop}) is invertible. \end{lemma} \begin{proof} Suppose (\ref{sop}) is not invertible. Then there exists a cubic polynomial $\pi(x)\ne 0$ such that \begin{eqnarray*} \frac{\partial \pi^{[3]}}{\partial x}(x) (F+GK)x &=& -{1\over 2} \sum_k x'(C'_k +K'D'_k) \frac{\partial^2 \pi^{[3]}}{\partial x^2}(x) (C_k+D_kK) x \end{eqnarray*} so \begin{eqnarray*} \left\| \frac{\partial \pi^{[3]}}{\partial x}(x) (F+GK)x \right\|=\left\| {1\over 2} \sum_k x'(C'_k +K'D'_k) \frac{\partial^2 \pi^{[3]}}{\partial x^2}(x) (C_k+D_kK) x \right\| \end{eqnarray*} But we know that \begin{eqnarray*} \left\| \frac{\partial \pi^{[3]}}{\partial x}(x) (F+GK)x \right\|\ge{1\over \rho}\|\pi(x)\|\ge 3\tau \|\pi(x)\|>{3r\sigma^2 \over 2}\|\pi(x)\| \end{eqnarray*} while \begin{eqnarray*} \left\| {1\over 2} \sum_k x'(C'_k +K'D'_k) \frac{\partial^2 \pi^{[3]}}{\partial x^2}(x) (C_k+D_kK) x \right\|\le{3r\sigma^2 \over 2}\|\pi(x)\| \end{eqnarray*} a contradiction. \end{proof} The takeaway message from this lemma is that if the nonzero entries of $C_k, D_k$ are small relative to the nonzero entries of $F, G$ then we can expect that (\ref{sop}) will be invertible. There are two ways to try to solve (\ref{shjb7}): the iterative approach or the direct approach. We have written Matlab software to solve the deterministic version of these equations. This suggests an iteration scheme, similar to the one above for solving SARE. Let $\pi^{[3]}_{(0)}$ be the solution of the deterministic version of (\ref{shjb7}) where $C_k=0,\ D_k=0$.
Given $\pi^{[3]}_{(\tau-1)}(x)$ define \begin{eqnarray*} l^{[3]}_{(\tau)}(x,u)&=&l^{[3]}(x,u)+{1\over 2} \sum_k x'(C'_k +K'D'_k) \frac{\partial^2 \pi^{[3]}_{(\tau-1)}}{\partial x^2}(x) (C_k+D_kK) x \\&&+\sum_k x'(C'_k +K'D'_k)P\gamma_k^{[2]}(x,u) \end{eqnarray*} and let $\pi^{[3]}_{(\tau)}$ be the solution of \begin{eqnarray*} 0&=&\frac{\partial \pi^{[3]}_{(\tau)}}{\partial x}(x) (F+GK)x+x'Pf^{[2]}(x,Kx)+l^{[3]}_{(\tau)}(x,Kx) \end{eqnarray*} If this iteration converges then we have the solution to (\ref{shjb7}). We have also written Matlab software to solve (\ref{shjb7}) directly, assuming the operator (\ref{sop}) is invertible. If (\ref{shjb7}) is solvable then solving (\ref{shjb8}) for $\kappa^{[2]}(x)$ is straightforward, as we have assumed that $R$ is invertible. If these equations are solvable then we can move on to the equations for $\pi^{[4]}(x)$ and $\kappa^{[3]}(x)$ and higher degrees. It should be noted that if the Lagrangian is an even function and the dynamics is an odd function then the optimal cost $\pi(x)$ is an even function and the optimal feedback $\kappa(x)$ is an odd function. \section{Nonlinear Example} \setcounter{equation}{0} Here is a simple example with $n=2,m=1,r=3$. Consider a pendulum of length $1\ m$ and mass $1\ kg$ orbiting approximately 400 kilometers above Earth on the International Space Station (ISS). The ``gravity constant'' at this height is approximately $g=8.7\ m/sec^2$. The pendulum can be controlled by a torque $u$ that can be applied at the pivot, and there is damping at the pivot with linear damping constant $c_1=0.1\ kg/sec$ and cubic damping constant $c_3= 0.05\ kg\ sec/m^2$. Let $x_1$ denote the angle of the pendulum measured counterclockwise from the outward pointing ray from the center of the Earth and let $x_2$ denote the angular velocity.
The deterministic equations of motion are \begin{eqnarray*} \dot{x}_1&=& x_2 \\ \dot{x}_2&=& lg\sin x_1 -c_1 x_2-c_3 x_2^3 +u \end{eqnarray*} But the shape of the earth is not a perfect sphere and its density is not uniform, so there are fluctuations in the ``gravity constant''. We set these fluctuations in the ``gravity constant'' at one percent, although they are probably smaller. There might also be fluctuations in the damping constants of around one percent. Further assume that the commanded torque is not always realized and the relative error in the actual torque fluctuates around one percent. We model these stochastically by three white noises \begin{eqnarray*} dx_1&=& x_2\ dt\\ dx_2&=&\left(lg\sin x_1 -c_1x_2-c_3x_2^3+u\right)\ dt\\ &&+0.01 lg\sin x_1 \ dw_1- 0.01(c_1 x_2+c_3x_2^3)\ dw_2 +0.01u\ dw_3 \end{eqnarray*} This is an example of how stochastic models with noise coefficients of order $O(x)$ can arise. If the noise is modeling an uncertain environment then its coefficients are likely to be $O(1)$. But if it is the model that is uncertain then the noise coefficients are likely to be $O(x)$. The goal is to find a feedback $u=\kappa(x)$ that stabilizes the pendulum to straight up in spite of the noises, so we take the criterion to be \begin{eqnarray*} \min_u {1\over 2}\int_0^\infty \|x\|^2+u^2\ dt \end{eqnarray*} with discount factor $\alpha=0$.
Then \begin{eqnarray*} F=\left[\begin{array}{cc} 0&1\\8.7&-0.1\end{array}\right],& G=\left[\begin{array}{c} 0\\1\end{array}\right],& \\ Q=\left[\begin{array}{cc} 1&0\\0&1\end{array}\right], & R=1,& S=\left[\begin{array}{c} 0\\0\end{array}\right]\\ C_1=\left[\begin{array}{cc} 0&0\\0.087&0\end{array}\right],&C_2=\left[\begin{array}{cc} 0&0\\0&-0.001\end{array}\right],& C_3=\left[\begin{array}{cc} 0&0\\0&0\end{array}\right]\\ D_1=\left[\begin{array}{c} 0\\0\end{array}\right],& D_2 =\left[\begin{array}{c} 0\\0\end{array}\right],& D_3 =\left[\begin{array}{c} 0\\0.01\end{array}\right] \end{eqnarray*} Because the Lagrangian is an even function and the dynamics is an odd function of $x,u$, we know that $\pi(x)$ is an even function of $x$ and $\kappa(x)$ is an odd function of $x$. We have computed the optimal cost $\pi(x)$ to degree $6$ and the optimal feedback $\kappa(x)$ to degree $5$, \begin{eqnarray*} \pi(x)&=&26.7042x_1^2+ 17.4701x_1x_2 +2.9488x_2^2\\&& -4.6153x_1^4 -2.9012x_1^3x_2 -0.5535x_1^2x_2^2 -0.0802 x_1x_2^3 -0.0157x_2^4\\ && +0.3361x_1^6+ 0.1468x_1^5x_2 -0.0015x_1^4x_2^2 -0.0077x_1^3x_2^3 \\ && -0.0022x_1^2x_2^4 -0.0003x_1x_2^5 +0.0000x_2^6 \\ \kappa(x)&=&-17.4598x_1 -5.8941x_2 \\ && +2.9012x_1^3+ 1.1071x_1^2x_2+ 0.2405x_1x_2^2+ 0.0628x_2^3\\ && -0.1468x_1^5+ 0.0031x_1^4x_2+ 0.0232x_1^3x_2^2\\&&+ 0.0089x_1^2x_2^3+ 0.0014x_1x_2^4 -0.0002x_2^5 \end{eqnarray*} In making this computation we are approximating $\sin x_1$ by its Taylor polynomials \begin{eqnarray*} \sin x_1&=& x_1-{x_1^3\over 6} +{x_1^5 \over 120}+\ldots \end{eqnarray*} The alternating signs of the odd terms in these polynomials are reflected in the nearly alternating signs in the Taylor polynomials of the optimal cost $\pi(x)$ and optimal feedback $\kappa(x)$. If we take a first degree approximation to $\sin x_1$ we are overestimating the gravitational force pulling the pendulum from its upright position, so $\pi^{[2]}(x)$ overestimates the optimal cost and the feedback $u=\kappa^{[1]}(x)$ is stronger than it needs to be. The latter could be a problem if there is a bound on the magnitude of $u$ that we ignored in the analysis.
If we take a third degree approximation to $\sin x_1$ then $\pi^{[2]}(x)+\pi^{[4]}(x)$ underestimates the optimal cost and the feedback $u=\kappa^{[1]}(x)+\kappa^{[3]}(x)$ is weaker than it needs to be. If we take a fifth degree approximation to $\sin x_1$ then $\pi^{[2]}(x)+\pi^{[4]}(x)+\pi^{[6]}(x)$ overestimates the optimal cost, but by a smaller margin than $\pi^{[2]}(x)$. The feedback $u=\kappa^{[1]}(x)+\kappa^{[3]}(x)+\kappa^{[5]}(x)$ is stronger than it needs to be, but by a smaller margin than $u=\kappa^{[1]}(x)$. \section{Finite Horizon Stochastic Nonlinear Optimal Control Problem} \label{FH} \setcounter{equation}{0} Consider the finite horizon stochastic nonlinear optimal control problem, \begin{eqnarray*} \min_{u(\cdot)} {\rm E}\left\{ \int_0^T l(t,x,u) \ d t+\pi_T(x(T))\right\} \end{eqnarray*} subject to \begin{eqnarray*} d x&=& f(t,x,u)dt+\sum_{k=1}^r\gamma_k(t,x,u)d w_k\\ x(0)&=&x^0 \end{eqnarray*} Again we assume that $f, l, \gamma_k, \pi_T$ are sufficiently smooth. If they exist and are smooth, the optimal cost $\pi(t, x)$ of starting at $x$ at time $t$ and the optimal feedback $u(t)=\kappa(t,x(t))$ satisfy the time dependent Hamilton-Jacobi-Bellman equations (HJB) \begin{eqnarray*} 0&=& \mbox{min}_u \left\{ \frac{\partial \pi}{\partial t}(t,x) +\frac{\partial \pi}{\partial x}(t,x) f(t,x,u) +l(t,x,u)\right. \\&&\left. +{1\over 2}\sum_{k=1}^r \gamma'_k(t,x,u) \frac{\partial^2 \pi}{\partial x^2}(t,x) \gamma_k(t,x,u) \right\} \\ \kappa(t,x)&=& \mbox{argmin}_u \left\{ \sum_i\frac{\partial \pi}{\partial x_i}(t,x) f_i(t,x,u) +l(t,x,u)\right. \\&&\left. +{1\over 2}\sum_{k=1}^r \gamma'_k(t,x,u) \frac{\partial^2 \pi}{\partial x^2}(t,x) \gamma_k(t,x,u) \right\} \end{eqnarray*} If the quantity to be minimized is strictly convex in $u$ then the HJB equations simplify to \begin{eqnarray} \nonumber 0&=& \frac{\partial \pi}{\partial t}(t,x) + \sum_i\frac{\partial \pi}{\partial x_i}(t,x) f_i(t,x,\kappa(t,x)) +l(t,x,\kappa(t,x)) \\&&+{1\over 2} \sum_{k=1}^r \gamma'_k(t,x,\kappa(t,x)) \frac{\partial^2 \pi}{\partial x^2}(t,x) \gamma_k(t,x,\kappa(t,x)) \label{hjb1t}\\ \nonumber \\ \label{hjb2t} 0&=& \sum_{i,k} \frac{\partial \pi}{\partial x_i}(t,x) \frac{\partial f _i}{\partial u_k}(t,x,\kappa(t,x)) +\sum_k \frac{\partial l }{\partial u_k}(t,x,\kappa(t,x)) \\&& +\sum_{k=1}^r \gamma'_k(t,x,\kappa(t,x)) \frac{\partial^2 \pi}{\partial x^2}(t,x) \frac{\partial \gamma_k}{\partial u}(t,x,\kappa(t,x)) \nonumber \end{eqnarray} These equations are integrated backward in time from the final condition \begin{eqnarray} \label{hjbT} \pi(T,x)&=& \pi_T(x) \end{eqnarray} Again we assume that we have the following Taylor expansions \begin{eqnarray*} f(t,x,u)&=& F(t)x+G(t)u+f^{[2]}(t,x,u)+f^{[3]}(t,x,u)+\ldots\\ l(t,x,u)&=& {1\over 2}\left( x'Q(t)x+u'R(t)u\right)+l^{[3]}(t,x,u)+l^{[4]}(t,x,u)+\ldots\\ \gamma_k(t,x,u)&=& C_k(t)x+D_k(t)u+\gamma_k^{[2]}(t,x,u)+\gamma_{k}^{[3]}(t,x,u)+\ldots\\ \pi_T(x)&=& {1\over 2} x'P_Tx+\pi_T^{[3]}(x)+\pi_T^{[4]}(x)+\ldots\\ \pi(t,x)&=& {1\over 2} x'P(t)x+\pi^{[3]}(t,x)+\pi^{[4]}(t,x)+\ldots\\ \kappa(t,x)&=& K(t)x+\kappa^{[2]}(t,x)+\kappa^{[3]}(t,x)+\ldots \end{eqnarray*} where $^{[d]}$ indicates terms of homogeneous degree $d$ in $x,u$ with coefficients that are continuous functions of $t$. The key assumption is that $\gamma_k(t,0,0)=0$, for then (\ref{hjb1t}) has a regular singular point at $x=0$ and so is amenable to power series methods.
We plug these expansions into the simplified time dependent HJB equations and collect terms of lowest degree, that is, degree two in (\ref{hjb1t}), degree one in (\ref{hjb2t}) and degree two in (\ref{hjbT}): \begin{eqnarray*} 0&=& \dot{P}(t)+P(t)F(t)+F'(t)P(t)+Q(t)-K'(t)R(t)K(t)\\ && +\sum_k \left(C'_k(t)+K'(t)D'_k(t)\right)P(t)\left(C_k(t)+ D_k(t)K(t)\right) \\ K(t)&=& -\left(R(t)+\sum_{k=1}^rD'_k(t)P(t)D_k(t)\right)^{-1} \left(G'(t) P(t)+S'(t)\right)\\ P(T)&=& P_T \end{eqnarray*} We call these equations the stochastic differential Riccati equation (SDRE). Similar equations in more generality can be found in \cite{YZ99}, but since we are interested in nonlinear problems we require that $\gamma_k(t,x,u)=O(x,u)$ so that the stochastic HJB equations have a regular singular point at the origin. If the SDRE is solvable we may proceed to the next degrees, degree three in (\ref{hjb1t}), degree two in (\ref{hjb2t}) and degree three in (\ref{hjbT}): \begin{eqnarray*} 0&=&\frac{\partial \pi^{[3]}}{\partial t}(t,x)+ \frac{\partial \pi^{[3]}}{\partial x}(t,x) (F(t)+G(t)K(t))x\\ &&+x'P(t)f^{[2]}(t,x,K(t)x)+l^{[3]}(t,x,K(t)x)\\ && +{1\over 2}\sum_k x'\left(C'_k(t)+K'(t)D'_k(t)\right) \frac{\partial^2 \pi^{[3]}}{\partial x^2}(t,x) \left(C_k(t)+D_k(t)K(t)\right) x\\ && +\sum_k x'\left(C'_k(t)+K'(t)D'_k(t)\right)P(t)\gamma_k^{[2]}(t,x,K(t)x)\\ \\ 0&=& \frac{\partial \pi^{[3]}}{\partial x}(t,x) G(t) +x'P(t)\frac{\partial f^{[2]}}{\partial u}(t,x,K(t)x)+\frac{\partial l^{[3]}}{\partial u}(t,x,K(t)x) \\&& +\sum_k x'(C_k(t)+D_k(t)K(t))'\left(P(t)\frac{\partial \gamma^{[2]}_k}{\partial u}(t,x,K(t)x)+ \frac{\partial^2 \pi^{[3]}}{\partial x^2}(t,x) D_k(t)\right) \\&& + \sum_k (\gamma^{[2]}_k(t,x,K(t)x))'P(t)D_k(t) +(\kappa^{[2]}(t,x))'\left(R(t)+\sum_k D'_k(t)P(t)D_k(t)\right) \end{eqnarray*} Notice again that the unknown $\kappa^{[2]}(t,x)$ does not appear in the first equation, which is a linear ODE for $\pi^{[3]}(t,x)$ running backward in time from the terminal condition \begin{eqnarray*} \pi^{[3]}(T,x)&=& \pi^{[3]}_T(x) \end{eqnarray*} After we have solved it, the second equation for $\kappa^{[2]}(t,x)$ is easily solved, because $R(t)$ is positive definite and hence $R(t)+\sum_k D'_k(t)P(t)D_k(t)$ is invertible. The higher degree terms can be found in a similar fashion. \end{document}
\begin{document} \begin{abstract} Variable selection for structured covariates lying on an underlying known graph is a problem motivated by practical applications, and has been a topic of increasing interest. However, most of the existing methods may not be scalable to high dimensional settings involving tens of thousands of variables lying on known pathways, such as is the case in genomics studies. We propose an adaptive Bayesian shrinkage approach which incorporates prior network information by smoothing the shrinkage parameters for connected variables in the graph, so that the corresponding coefficients have a similar degree of shrinkage. We fit our model via a computationally efficient expectation maximization algorithm which is scalable to high dimensional settings ($p {\sim} 100{,}000$). Theoretical properties for fixed as well as increasing dimensions are established, even when the number of variables increases faster than the sample size. We demonstrate the advantages of our approach in terms of variable selection, prediction, and computational scalability via a simulation study, and apply the method to a cancer genomics study. \end{abstract} \footnote{{\noindent \em Corresponding Author}: Suprateek Kundu, Department of Biostatistics \& Bioinformatics, Emory University, 1518 Clifton Road, Atlanta, Georgia 30322, U.S.A. \\ {\noindent \em Email}: [email protected] } {\noindent Keywords:} adaptive Bayesian shrinkage; EM algorithm; oracle property; selection consistency; structured high-dimensional variable selection.\\ \section{Introduction} With the advent of modern technology such as microarray analysis and next generation sequencing in genomics, recent studies rely on increasingly large amounts of data containing tens of thousands of variables.
For example, in genomics studies, it is common to collect gene expressions from $p\sim 20{,}000$ genes, which is often considerably larger than the number of subjects in these studies, resulting in a classical small $n$, large $p$ problem. In addition, it is well-known that genes lie on a graph of pathways, where nodes represent genes and edges represent functional interactions between genes and gene products. Currently, there exist several biological databases which store gene network information from previous studies \citep{Stingo2011}, and these databases are constantly updated and augmented with newly emerging knowledge. In such cases, when genes are known to lie on an underlying graph, usual variable selection approaches such as the Lasso \citep{Tibshirani1996}, the adaptive Lasso \citep{Zou2006}, or spike and slab methods \citep{Mitchell1988} may run into difficulties, since they do not exploit the association structure between variables, which may give rise to correlated predictors. Moreover, there is increasing evidence that incorporating prior graph information, where applicable, can improve prediction and variable selection in the analysis of high dimensional data. \citet{Li2008} and \citet{Pan2010} proposed network-based penalties in linear regression, which induce sparsity of estimated effects while encouraging similar effects for connected variables. In a Bayesian framework, \citet{Li2010}, \citet{Stingo2011a}, and \citet{Stingo2011} used spike and slab type priors for variable selection and Markov random field (MRF) type priors on variable inclusion indicators to incorporate graph information. More recently, \citet{Rockova2014} proposed an expectation maximization (EM) algorithm for variable selection using spike and slab priors, known as EMVS, and extended EMVS to incorporate graph information via MRF priors, where a variational approximation was used in the computation.
\citet{Rockova2014a} proposed a normal-exponential-gamma shrinkage approach with incorporation of the pathway membership information and developed an EM algorithm for computation. To our knowledge, there is a scarcity of scalable Bayesian approaches for structured variable selection that possess desirable theoretical and numerical properties in high dimensions. The Bayesian approaches involving MRF type priors are implemented using Markov chain Monte Carlo and hence are not scalable to high dimensions involving tens of thousands of variables, such as in our cancer genomics application. While the EM approach by \citet{Rockova2014a} can incorporate pathway membership information, it is not equipped to incorporate edge information which is the focus of this article. Moreover, the theoretical properties and scalability of their method to the higher dimensions considered in this work ($p\sim 100{,}000$) are unclear. The variational approximation proposed by \citet{Rockova2014} may suffer from the loss of convexity properties and inferior estimates close to the transition points for tuning parameters, as indicated by the authors. The frequentist network-based regularization approaches are expected to be more scalable, but make a strong assumption of smoothness of covariate effects for connected variables in the graph, which may be restrictive in real-life applications. We propose a Bayesian shrinkage approach and an associated EM algorithm for structured covariates, which is scalable to high dimensional settings and possesses a desirable oracle property in variable selection and estimation for both fixed and increasing dimensions. The proposed approach assigns Laplace priors to the regression coefficients and incorporates the underlying graph information via a hyper-prior for the shrinkage parameters in the Laplace priors. 
Specifically, the shrinkage parameters are assigned a log-normal prior specifying the inverse covariance matrix as a graph Laplacian \citep{Chung1997,Ng2002}, which has a zero or positive partial correlation depending on whether the corresponding edge is absent or present. This enables smoothing of shrinkage parameters for connected variables in the graph and conditional independence between shrinkage parameters for disconnected variables. Thus, the resulting approach encourages connected variables to have a similar degree of shrinkage in the model without forcing their regression coefficients to be similar in magnitude. The operating characteristics of the approach can be controlled via tuning parameters with clearly defined roles. Although the proposed model can be implemented using Markov chain Monte Carlo, it is not scalable to high dimensional settings of our interest. As such, we implement an EM algorithm which treats the inverse covariance matrix for the shrinkage parameters as missing variables, and marginalizes over them to obtain the ``observed data'' posterior, which has a closed form. We incorporate recent computational developments such as the dynamic weighted lasso \citep{Chang2010} to obtain a computationally efficient approach which is scalable to high dimensional settings. We present the proposed methodology and the EM algorithm in Section 2, the theoretical results in Section 3, and the simulation results comparing our approach with several competitors in Section 4. We apply our method to a cancer genomics study in Section 5. \section{Methodology} \subsection{Model Specification} Let $\mathbf 0_m$ and $\mathbf 1_m$ denote the length-$m$ vectors with 0 entries and 1 entries, respectively, and $I_m$ the $m \times m$ identity matrix. The subscript $m$ may be omitted in the absence of ambiguity.
For any length-$m$ vector $\mathbf v$, we define $e^\mathbf v = \left(e^{v_1},\dots,e^{v_m}\right)'$, $\log \mathbf v = \left(\log v_1,\dots,\log v_m \right)'$, $|\mathbf v| = \left(|v_1|,\dots,|v_m|\right)'$, and $D_{\mathbf v} = \mathrm{diag}(\mathbf v)$. Suppose we have a random sample of $n$ observations $\{y_i, \mathbf x_i; i=1,\ldots,n\}$ where $y_i$ is the outcome variable and $\mathbf x_i$ is a vector of $p$ predictors. Let $\mathcal{G} = \langle V,E \rangle$ denote the known underlying graph for the $p$ predictors, where $V=\{1,\dots,p\}$ is the set of nodes and $E \subset \{(j,k): 1\le j < k \le p\}$ is the set of undirected edges. Let $G$ be the $p \times p$ adjacency matrix in which the $(j,k)$-th element $G_{jk}=1$ if there is an edge between predictors $j$ and $k$, and $G_{jk}=0$ otherwise. Consider the linear model \begin{eqnarray} \mathbf y = X \boldsymbol \beta + \boldsymbol \epsilon, \mbox{ } \boldsymbol \epsilon \sim \mathcal{N}(\mathbf 0,\sigma^2I_n), \label{eq:model} \end{eqnarray} where $\mathbf y=(y_1,\dots,y_n)'$, $X=(\mathbf x_1,\dots,\mathbf x_n)'$, $\boldsymbol \beta=(\beta_1,\dots,\beta_p)'$, $\boldsymbol \epsilon=(\epsilon_1,\dots,\epsilon_n)'$, and $\mathcal{N}(\cdot)$ denotes the Gaussian distribution. We assign the following priors to $\boldsymbol \beta$ and $\sigma^2$ \begin{align} \beta_j \sim& \mathcal{DE}(\lambda_j/\sigma), \mbox{ } \sigma^2 \sim \mathcal{IG}(a_\sigma,b_\sigma), \mbox{ } j=1,\dots,p, \label{eq:base} \end{align} where $\lambda_j$ is the shrinkage parameter for $\beta_j$, and $\mathcal{DE}(\cdot)$ and $\mathcal{IG}(\cdot)$ denote the double exponential (Laplace) and inverse gamma distributions, respectively. Prior specification \eqref{eq:base} differs from the Bayesian Lasso \citep{Park2008} in that the degree of shrinkage for the $j$-th coefficient is controlled by an individual $\lambda_j$ ($j=1,\ldots,p$) rather than a common $\lambda$, allowing for adaptive shrinkage guided by underlying graph knowledge.
We encode the graph information $\mathcal{G}$ in the model via an informative prior on the shrinkage parameters as follows. \begin{align} \boldsymbol \alpha = (\log(\lambda_1), \ldots, \log(\lambda_p))' \sim \mathcal{N} \left( \boldsymbol \mu,\nu \mathit \Omega^{-1} \right), \label{eq:shrinkage} \end{align} where \begin{align*} \mathit \Omega = \left[ \begin{array}{cccc} 1+\sum_{j\neq1} \omega_{1j} & -\omega_{12} & \cdots & -\omega_{1p}\\ -\omega_{21} & 1+\sum_{j\neq2} \omega_{2j} & \ddots & -\omega_{2p}\\ \vdots & \ddots & \ddots & \vdots\\ -\omega_{p1} & -\omega_{p2} & \cdots & 1+\sum_{j\neq p} \omega_{pj} \end{array} \right], \end{align*} and assign the following prior to $\boldsymbol \omega = \{ \omega_{jk}: j<k\}$ \begin{align} \label{prior_omega} \pi(\boldsymbol \omega) \propto |\mathit \Omega|^{-1/2} \prod_{G_{jk}=1} \omega_{jk}^{a_\omega-1} \exp ( -b_\omega\omega_{jk} ) 1 (\omega_{jk}>0) \prod_{G_{jk}=0} \delta_0(\omega_{jk}), \end{align} where $\delta_0$ is the Dirac delta function concentrated at 0 and $1(\cdot)$ is the indicator function. Since $\mathit \Omega$ is symmetric and strictly diagonally dominant with positive diagonal entries, it is guaranteed to be positive definite. It follows from prior \eqref{prior_omega} that $\omega_{jk}=0$ if $G_{jk}=0$ and $\omega_{jk}>0$ if $G_{jk}=1$. In other words, under our model formulation the shrinkage parameters $\lambda_j$ and $\lambda_k$ have a positive partial correlation if predictors $j$ and $k$ are connected and have a zero partial correlation otherwise. The magnitudes of the positive partial correlations are learned from the data, with a higher partial correlation leading to the smoothing of the corresponding shrinkage parameters. Our model formulation has several appealing features. First, a higher positive partial correlation between two connected predictors results in an increased probability of having both predictors selected or excluded simultaneously under an EM algorithm.
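As a concrete illustration, the precision matrix $\mathit \Omega$ above can be assembled directly from the adjacency matrix and the edge weights. The sketch below is our own illustration (not from the paper's software); the function name and the toy graph are hypothetical.

```python
import numpy as np

def graph_laplacian_precision(G, omega):
    """Build the precision matrix Omega of eq. (3) from a p x p adjacency
    matrix G and symmetric edge weights omega. Diagonal entries are
    1 + (row sums of weights on edges); off-diagonals are -omega_jk on
    edges and 0 elsewhere. Hypothetical helper for illustration only."""
    W = np.asarray(omega) * np.asarray(G)   # zero out weights on non-edges
    return np.diag(1.0 + W.sum(axis=1)) - W

# toy graph on p = 3 nodes with a single edge between nodes 1 and 2
G = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
omega = np.full((3, 3), 0.5)
Omega = graph_laplacian_precision(G, omega)
# strict diagonal dominance guarantees positive definiteness
assert np.all(np.linalg.eigvalsh(Omega) > 0)
```

For this toy graph, $\mathit \Omega$ has diagonal $(1.5, 1.5, 1)$ and a single off-diagonal entry $-0.5$ linking the connected pair, so the disconnected third variable is conditionally independent of the other two a priori.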
This makes intuitive sense when both variables are important or unimportant. Second, in the scenario where one of the connected predictors is important and the other one is not, the method can learn from the data and impose a weak partial correlation, thereby enabling the corresponding shrinkage parameters to act in a largely uncorrelated manner. Third, the selection of unconnected variables is guided by shrinkage parameters with zero partial correlation. Finally, our approach does not constrain the effect sizes for connected variables to be similar in magnitude. \begin{figure}[h!] \includegraphics[width=\textwidth]{prior_plot_density.pdf} \caption{Top two panels plot the marginal prior densities of $\beta$ for (a) different $\mu$ while $\nu$ and $\sigma$ are fixed and (b) different $\nu$ while $\mu$ and $\sigma$ are fixed. Bottom two panels (c) and (d) plot the corresponding negative log density functions. The standard normal prior and the horseshoe prior with $\tau=1$ are shown for contrast. The Laplacian prior with $\lambda=e^{0.3}$ is plotted as a comparison to the case with $\mu=0.3$ and $\nu=0.1$.} \label{fig1} \end{figure} The mean vector $\boldsymbol \mu$ in \eqref{eq:shrinkage} determines the locations of $\boldsymbol \alpha$, and can be interpreted as controlling the average sparsity of the model. In particular, one can choose $\boldsymbol \mu = \mu \mathbf 1$ for some $\mu \in \mathbb{R}$, where a greater value of $\mu$ implies a sparser model. Figure \ref{fig1}(a) plots the marginal density for the regression coefficients for different values of $\mu$ with $\lambda$ marginalized out (via Monte Carlo averaging), while $\nu$ and $\sigma$ are kept fixed. It is clear that larger $\mu$ values lead to sharper peaks at zero with lighter tails, thus encouraging greater shrinkage. On the other hand, $\nu$ specifies the prior confidence in the choice of $\boldsymbol \mu$ as the average sparsity parameter.
If $\nu = 0$, we have $\boldsymbol \alpha = \boldsymbol \mu$ so that the shrinkage parameters are fixed, resulting in a Lasso type shrinkage. This is evident from Figure \ref{fig1}(d), which plots the negative logarithm of the density for the marginal regression coefficients for different values of $\nu$ while $\mu$ and $\sigma$ are fixed. Figures \ref{fig1}(b) and \ref{fig1}(d) also show that larger values of $\nu$ result in higher-peaked and heavier-tailed densities, and the corresponding penalty becomes similar to non-convex penalties in the frequentist literature, e.g.\ SCAD in \citet{Fan2001}. Overall, changing the value of $\nu$ results in different types of penalty functions which can be convex or non-convex. We note that \eqref{prior_omega} looks similar to a product of gamma densities. However, it involves an additional term $|\mathit \Omega|^{-1/2}$, which is required to obtain a closed form full posterior, since this term cancels out between $\pi(\boldsymbol \alpha)$ and $\pi(\boldsymbol \omega)$. A similar trick was used for specifying the inverse covariance matrix for the regression coefficients in \citet{Liu2014}, which they denote as a graph Laplacian structure. However, our approach is distinct in that it specifies a graph Laplacian type structure for the inverse covariance matrix of the log-shrinkage parameters and incorporates prior graph knowledge. Moreover, their approach results in an OSCAR type penalty \citep{Bondell2008}, while $-\log(\pi(\beta))$ under our approach can lead to both convex and non-convex penalties depending on the value of $\nu$. Proposition \ref{pro} shows that the prior in \eqref{prior_omega} is proper. The proof is presented in the Appendix. \begin{pro} \label{pro} The prior $\pi(\boldsymbol \omega)$ of $\boldsymbol \omega$ in \eqref{prior_omega} is proper. \end{pro} \begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{prior_plot_alpha1.pdf} \includegraphics[width=0.5\textwidth]{prior_plot_alpha2.pdf} \includegraphics[width=0.5\textwidth]{prior_plot_alpha3.pdf} \includegraphics[width=0.5\textwidth]{prior_plot_alpha4.pdf} \caption{Contour plots of the marginal prior density of $\alpha_1$ and $\alpha_2$ for 4 different combinations of $a_\omega$ and $b_\omega$.} \label{fig2} \end{figure} The prior in \eqref{prior_omega} involves a shape parameter $a_\omega$ and a rate parameter $b_\omega$, which serve roles similar to those in the gamma distribution. In fact, they are directly involved in regulating the correlations between the elements of $\boldsymbol \alpha$. To see how they affect these correlations, consider $p=2$ and $G_{12}=1$. It follows that the joint prior density of $\alpha_1$ and $\alpha_2$ after marginalizing out $\omega_{12}$ is given (up to a constant) by \begin{align*} \pi(\alpha_1,\alpha_2) \propto f(\alpha_1,\alpha_2) = \exp \left( -\frac{(\alpha_1-\mu_1)^2+(\alpha_2-\mu_2)^2}{2\nu} \right) \left( b_\omega + \frac{(\alpha_1-\alpha_2)^2}{2\nu} \right)^{-a_\omega}. \end{align*} Figure \ref{fig2} draws the contour plots of $f(\alpha_1,\alpha_2)$ for 4 different combinations of $a_\omega$ and $b_\omega$; $(a_\omega,b_\omega) = (1,1),(1,4),(4,1),(4,4)$ with $\mu_1=\mu_2=1$ and $\nu=1$. As $a_\omega$ increases and/or $b_\omega$ decreases, $\alpha_1$ and $\alpha_2$ tend to have a stronger correlation, translating to a higher probability of having similar values. This is also evident in the E-step of the EM algorithm (see equation \eqref{eq:Estep}), where high values of $a_\omega/b_\omega$ tend to result in a high mean value for $\omega_{jk}$, which in turn tends to result in similar values for $\alpha_j-\mu_j$ and $\alpha_k - \mu_k$.
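To make the roles of $a_\omega$ and $b_\omega$ concrete, the unnormalized density $f(\alpha_1,\alpha_2)$ above is simple to evaluate numerically. The snippet below is our own illustration; the default parameter values mirror the settings used in Figure 2 rather than any recommended choice.

```python
import numpy as np

def f(alpha1, alpha2, mu1=1.0, mu2=1.0, nu=1.0, a_w=1.0, b_w=1.0):
    """Unnormalized joint prior density of (alpha_1, alpha_2) for p = 2
    with G_12 = 1 and omega_12 marginalized out (Section 2.1)."""
    gauss = np.exp(-((alpha1 - mu1) ** 2 + (alpha2 - mu2) ** 2) / (2 * nu))
    couple = (b_w + (alpha1 - alpha2) ** 2 / (2 * nu)) ** (-a_w)
    return gauss * couple

# the coupling factor penalizes disagreement between alpha_1 and alpha_2:
# for points with the same mean, a larger gap gives a smaller density
assert f(1.2, 0.8) > f(1.6, 0.4)
# and the penalty for disagreement is relatively stronger for larger a_w
assert f(1.6, 0.4, a_w=4) / f(1.2, 0.8, a_w=4) < f(1.6, 0.4) / f(1.2, 0.8)
```

The second assertion reflects the behaviour seen in Figure 2: increasing $a_\omega$ concentrates the prior mass more tightly around the line $\alpha_1=\alpha_2$.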
\subsection{EM Algorithm} The maximum a posteriori (MAP) estimator for the proposed model is obtained by maximizing the posterior density over $\boldsymbol \theta = (\boldsymbol \beta', \sigma^2, \boldsymbol \alpha')'$ with $\boldsymbol \omega$ marginalized out. Specifically, \begin{align} \widehat{\boldsymbol \theta} = \left(\widehat{\boldsymbol \beta},\widehat{\sigma}^2,\widehat{\boldsymbol \alpha}\right) = \operatorname*{argmax}_{\boldsymbol \theta} \int \pi(\boldsymbol \theta,\boldsymbol \omega|\mathbf y,X) d\boldsymbol \omega, \label{eq:argmax} \end{align} where the full posterior density is given by \begin{align*} \pi(\boldsymbol \theta,\boldsymbol \omega|\mathbf y,X) &\propto \pi(\mathbf y|\boldsymbol \beta,\sigma^2,X) \pi(\boldsymbol \beta|\sigma^2,\boldsymbol \alpha) \pi(\sigma^2) \times |\mathit \Omega|^{1/2} \exp \left( -\frac{(\boldsymbol \alpha-\boldsymbol \mu)' \mathit \Omega (\boldsymbol \alpha-\boldsymbol \mu)}{2\nu} \right)\\ & \qquad \times |\mathit \Omega|^{-1/2} \prod_{j<k,G_{jk}=1} \omega_{jk}^{a_\omega-1} \exp ( -b_\omega\omega_{jk} ) \prod_{j<k,G_{jk}=0} \delta_0(\omega_{jk} ). \end{align*} In the case of $\mathit \Omega = I_p$, where no graph information is used, we call the resulting estimator the \emph{EM} estimator for the Bayesian \emph{SH}rinkage approach, or EMSH in short. In the general case where prior graph information is used, we call the resulting estimator the EMSH with the \emph{S}tructural information incorporated, or EMSHS in short. We use $\boldsymbol \mu = \mu\mathbf 1$ where $\mu>0$ for simplicity. Note that the algorithm can be easily modified to accommodate heterogeneous sparsity parameters.
Since \begin{align*} (\boldsymbol \alpha-\boldsymbol \mu)' \mathit \Omega (\boldsymbol \alpha-\boldsymbol \mu) = \sum_{j=1}^p (\alpha_j-\mu)^2 + \sum_{j<k} \omega_{jk} (\alpha_j-\alpha_k)^2, \end{align*} we have \begin{align} \pi(\boldsymbol \theta,\boldsymbol \omega|\mathbf y,X) &\propto \pi(\mathbf y|\boldsymbol \beta,\sigma^2,X) \pi(\boldsymbol \beta|\sigma^2,\boldsymbol \alpha) \pi(\sigma^2) \times \exp \left( -\frac{(\boldsymbol \alpha-\boldsymbol \mu)' (\boldsymbol \alpha-\boldsymbol \mu)}{2\nu} \right) \nonumber \\ & \qquad \times \prod_{j<k,G_{jk}=1} \omega_{jk}^{a_\omega-1} \exp \left( -b_\omega \omega_{jk} - \frac{\omega_{jk}}{2\nu} (\alpha_j-\alpha_k)^2 \right) \prod_{j<k,G_{jk}=0} \delta_0(\omega_{jk} ). \label{eq:joint} \end{align} Therefore, the marginal posterior density for $\boldsymbol \theta$ is given by \begin{align} \pi(\boldsymbol \theta|\mathbf y,X) & \propto \pi(\mathbf y|\boldsymbol \beta,\sigma^2,X) \pi(\boldsymbol \beta|\sigma^2,\boldsymbol \alpha) \pi(\sigma^2) \times \exp \left( -\frac{(\boldsymbol \alpha-\boldsymbol \mu)' (\boldsymbol \alpha-\boldsymbol \mu)}{2\nu} \right) \nonumber\\ & \qquad \times \prod_{j<k,G_{jk}=1} \left( b_\omega + \frac{1}{2\nu} (\alpha_j-\alpha_k)^2 \right)^{-a_\omega}. \label{eq:marginal} \end{align} Since the marginal posterior density in \eqref{eq:marginal} is differentiable with respect to $\boldsymbol \theta$ and the set $\{\boldsymbol \theta: \pi(\boldsymbol \theta|\mathbf y,X) \ge \eta\}$ is bounded and closed for any $\eta>0$, its maximum is attainable and the MAP estimator always exists; see Theorem 2.28 in \citet{rudin1976principles}. Since the logarithm of the marginal posterior density may not be convex, the MAP estimator may have multiple (local) solutions. However, our numerical experiments suggest a stable performance under our method, and we show in Section 3 that the algorithm admits a unique solution asymptotically.
Although one can directly optimize \eqref{eq:marginal} to compute $\widehat{\boldsymbol \theta}$ in \eqref{eq:argmax}, we choose to use the EM algorithm to obtain the MAP estimate. This is because the solution surface for $\boldsymbol \alpha$ given $\boldsymbol \beta$ after marginalizing out $\boldsymbol \omega$ in \eqref{eq:marginal} is non-convex, leading to potential computational difficulties. We elaborate more on this when describing the M-step for $\boldsymbol \alpha$. In summary, we optimize $\pi(\boldsymbol \theta|\mathbf y,X)$ by proceeding iteratively with the ``complete data'' log-posterior $\pi(\boldsymbol \theta, \boldsymbol \omega | \mathbf y, X)$ in \eqref{eq:joint}, where $\mathit \Omega(\boldsymbol \omega)$ is considered ``missing data.'' At each EM iteration, we replace $\mathit \Omega$ by its conditional expectation in the E-step and then maximize the expected ``complete data'' log posterior with respect to $\boldsymbol \theta$ in the M-step. The objective function to be optimized at the $t$-th EM iteration is given by \begin{align} Q_t(\boldsymbol \theta) = & -\frac{n+p+2a_\sigma+2}{2} \log (\sigma^2) \nonumber \\ & -\frac{(\mathbf y-X\boldsymbol \beta)'(\mathbf y-X\boldsymbol \beta) + 2\sigma \sum_{j=1}^p e^{\alpha_j} |\beta_j| + 2b_\sigma}{2\sigma^2} \nonumber \\ & + \sum_{j=1}^p \alpha_j - \frac{(\boldsymbol \alpha-\boldsymbol \mu)' \mathit \Omega^{(t)} (\boldsymbol \alpha-\boldsymbol \mu)}{2\nu}, \label{eq:opt} \end{align} where $\mathit \Omega^{(t)} = \mathbb{E}\left( \mathit \Omega | \mathbf y, X, \boldsymbol \theta^{(t-1)} \right)$. \subsubsection{E-step} It follows from \eqref{eq:joint} that the posterior density of $\boldsymbol \omega$ given $\boldsymbol \theta$ is a product of gamma densities, where $\omega_{jk}$ follows the gamma distribution with parameters $a_\omega$ and $b_\omega+\frac{\left(\alpha_j-\alpha_k\right)^2}{2\nu}$ for $j < k, G_{jk}=1$.
Therefore, we have \begin{align} \omega_{jk}^{(t)} = \mathbb{E}(\omega_{jk}|\mathbf y,X,\boldsymbol \theta^{(t-1)}) &= \frac{2\nu a_\omega G_{jk}}{ 2\nu b_\omega + \left( \alpha_j^{(t-1)} -\alpha_k^{(t-1)} \right)^2}, \qquad j<k. \label{eq:Estep} \end{align} Since we only need to update as many $\omega_{jk}$ as the number of edges in $\mathcal{G}$, this step can be completed in $O(|E|)$ operations, which is computationally very inexpensive for sparse graphs. \subsubsection{M-step} For this step, we sequentially optimize the objective function with respect to $\boldsymbol \beta$, $\sigma^2$, and $\boldsymbol \alpha$. \begin{itemize} \item M-step for $\boldsymbol \beta$: With $\sigma = \sigma^{(t-1)}$ and $\boldsymbol \alpha = \boldsymbol \alpha^{(t-1)}$ fixed, $\boldsymbol \beta^{(t)}$ can be obtained as \begin{align*} \boldsymbol \beta^{(t)} = \operatorname*{argmin}_{\boldsymbol \beta} \, \frac{1}{2} (\mathbf y-X\boldsymbol \beta)'(\mathbf y-X\boldsymbol \beta)+ \sum_{j=1}^p \xi_j |\beta_j|, \end{align*} where $\xi_j = \sigma e^{\alpha_j}$. This is a weighted lasso problem, which can be solved by many algorithms such as those of \citet{Efron2004}, \citet{wu2008coordinate}, and \citet{Chang2010}. We use the dynamic weighted lasso (DWL) algorithm developed in \citet{Chang2010}, which is capable of rapidly computing the solution by borrowing information from previous iterations when the regularization parameters change across the EM iterations. Our experience suggests that these regularization parameters differ negligibly over EM iterations under our approach, especially as the solution approaches its limit. As such, the DWL results in substantial savings in computation, compared to alternative algorithms such as LARS which need to completely recompute the solution for each EM iteration.
Finding a lasso solution using the DWL algorithm requires $O(pq^2)$ operations where $q$ is the number of nonzero coefficients in the solution, provided that the sample correlations between the selected variables and all remaining variables are available. The latter requires an additional $O(npq)$ operations. Therefore, while the initial M-step for $\boldsymbol \beta$ takes $O(npq)$ operations, the DWL algorithm updates the solution in $O(pq)$ operations as the EM iterations continue and the solution stabilizes. Readers are referred to \citet{Chang2010} for further details regarding the DWL algorithm. We note that \citet{Park2008}, \citet{Armagan2013}, and several others used the normal mixture representation of the Laplace prior below to compute MAP estimates under an EM algorithm \begin{align*} \frac{\lambda}{2\sigma} e^{-\lambda|\beta|/\sigma} = \int_0^\infty \frac{1}{\sqrt{2\pi\tau\sigma^2}} e^{-\beta^2/(2\tau\sigma^2)} \frac{\lambda^2}{2} e^{-\lambda^2\tau/2} d\tau, \end{align*} where $\tau$ is the latent scale parameter that is imputed in the E-step. We choose to use the direct form of the Laplace prior instead of the above mixture representation due to several considerations. First, an M-step for $\boldsymbol \beta$ of the EM algorithm under the normal mixture representation takes $O(n^2p)$ operations, which is slower than the proposed approach. Second, as pointed out by \citet{Armagan2013}, the Laplace representation leads to faster convergence than the normal mixture representation. Third, the regression coefficients cannot attain exact zeros in the normal mixture representation, and additional post-processing steps are required for variable selection, which can be sensitive to cut-off values. Lastly, numerical difficulties may arise when $\beta$ approaches zero under the normal mixture representation because the conditional mean of $\tau^{-1}$ may explode to infinity.
\item M-step for $\sigma$: With $\boldsymbol \beta = \boldsymbol \beta^{(t)}$ and $\boldsymbol \alpha = \boldsymbol \alpha^{(t-1)}$ fixed, we have \begin{align*} \sigma^{(t)} = \operatorname*{argmin}_\sigma \frac{c_1}{\sigma^2} + \frac{c_2}{\sigma} + c_3 \log \sigma, \end{align*} where $c_1 = \frac{1}{2}(\mathbf y-X\boldsymbol \beta)'(\mathbf y-X\boldsymbol \beta) + b_\sigma$, $c_2 = \sum_{j=1}^p e^{\alpha_j} |\beta_j|$, and $c_3 = n+p+2a_\sigma+2$. The solution is then given by $\sigma^{(t)} = \frac{c_2 + \sqrt{c_2^2+8c_1c_3}}{2c_3}.$ \item M-step for $\boldsymbol \alpha$: Since there is no closed-form solution for $\boldsymbol \alpha$, we use the Newton method. With $\boldsymbol \beta = \boldsymbol \beta^{(t)}$, $\sigma = \sigma^{(t)}$, and $\mathit \Omega = \mathit \Omega^{(t)}$ fixed, the Newton search direction at $\boldsymbol \alpha$ is given by $\mathbf d_N(\boldsymbol \alpha) = -H^{-1} \mathbf g,$ where $H = \sigma \mathit \Omega + \nu D_{|\boldsymbol \beta|} D_{e^{\boldsymbol \alpha}}$ \mbox{ and } $\mathbf g = \sigma \mathit \Omega \left( \boldsymbol \alpha-\boldsymbol \mu \right) - \nu \sigma \mathbf 1 + \nu D_{|\boldsymbol \beta|} e^{\boldsymbol \alpha}$. As the Hessian matrix $H$ is always positive definite, $\mathbf d_N(\boldsymbol \alpha)$ is a valid Newton direction. Therefore, we can update $\boldsymbol \alpha$ as follows \begin{align}\label{eq:update_alpha} \boldsymbol \alpha^{(t)} &= \boldsymbol \alpha^{(t-1)} + s_t \mathbf d_N \left( \boldsymbol \alpha^{(t-1)} \right), \end{align} where $s_t$ is the step size. Since the usual Newton method involves the inversion of the $p \times p$ Hessian matrix $H$, it is only feasible when $p$ is moderate.
When $p$ is large, we suggest replacing the Hessian matrix by its diagonal \citep{becker1988improving} and $\mathbf d_N(\boldsymbol \alpha)$ by $\mathbf d(\boldsymbol \alpha) = - D_H^{-1} \mathbf g$, where $D_H = \mathrm{diag}(H) = \sigma \mathrm{diag}(\mathit \Omega) + \nu D_{|\boldsymbol \beta|} D_{e^{\boldsymbol \alpha}}$. Since $D_H$ is positive definite, $\mathbf d(\boldsymbol \alpha)$ is a valid descent direction, and the step size $s_t$ can be determined by the backtracking line search \citep{nocedal2006numerical}. Note that there are only $p+|E|$ unique nonzero elements in $\mathit \Omega$. Therefore, obtaining the $p$-dimensional direction vector takes only $O(p+|E|)$ operations. Since edges in network graphs are usually sparse, the overall computation is much faster than the Newton method, even though approximating the Hessian matrix may slightly increase the number of EM iterations. In addition, it is not necessary to repeat the Newton steps until convergence to obtain the optimal solution for $\boldsymbol \alpha$ within each M-step for $\boldsymbol \alpha$. It suffices that each M-step for $\boldsymbol \alpha$ ensures an increase in the value of the objective function, in order to guarantee the convergence of the EM algorithm. However, our experience indicates that repeating the Newton steps three to five times within each M-step for $\boldsymbol \alpha$ helps reduce the number of total EM iterations. As alluded to earlier, the advantage of the EM algorithm over directly optimizing the marginal posterior density $\pi(\boldsymbol \theta|\mathbf y,X)$ with respect to $\boldsymbol \theta$ lies in the fact that the Hessian matrix with respect to $\boldsymbol \alpha$ is guaranteed to be positive definite in the former case, while it is not in the latter.
Since the EM algorithm exploits part of the curvature information in optimizing with respect to $\boldsymbol \alpha$ at nearly no extra computational cost, it is expected to lead to a reduced number of total iterations and hence savings in computation \citep{nocedal2006numerical}. \end{itemize} The EM algorithm can be started from the E-step for $\boldsymbol \omega$ with suggested initial values $\boldsymbol \beta^{(0)} = \mathbf 0$, $\sigma^{(0)} = \sqrt{(\mathbf y'\mathbf y+2b_\sigma)/c_3}$, and $\alpha_j^{(0)} = \mu$ for all $j$. The number of operations in each EM iteration is $O(npq+|E|)$ initially and reduces to $O(pq+|E|)$ after a few iterations. We repeat the EM procedure until the relative improvement of the optimum value of the objective function falls below a certain threshold, say $\epsilon=e^{-5}$. \subsection{Role of Shrinkage Parameters $\boldsymbol \alpha=\log(\boldsymbol \lambda)$ } \label{interpretation} It is straightforward to show that the estimators satisfy \begin{align} \widehat{\boldsymbol \beta} = \operatorname*{argmin}_{\boldsymbol \beta} \, \frac{1}{2}(\mathbf y-X\boldsymbol \beta)'(\mathbf y-X\boldsymbol \beta) + \sum_{j=1}^p \widehat{\xi}_j |\beta_j|, \label{eq:betahat} \end{align} and \begin{align} \label{adaptive_penalty} \widehat{\boldsymbol \alpha} = \operatorname*{argmin}_{\boldsymbol \alpha} \, \frac{1}{2\nu} (\boldsymbol \alpha-\boldsymbol \mu)' \mathit \Omega^{(\infty)} (\boldsymbol \alpha-\boldsymbol \mu) - \mathbf 1' \boldsymbol \alpha + \frac{1}{\widehat{\sigma}} | \widehat{\boldsymbol \beta} |' e^{\boldsymbol \alpha}, \end{align} where $\widehat{\xi}_j = \widehat{\sigma} e^{\widehat{\alpha}_j}$ and $\mathit \Omega^{(\infty)}$ is the final value of $\mathit \Omega$ from the EM algorithm.
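For concreteness, one full iteration of the EM scheme in Section 2.2 can be sketched as follows. This is our own simplified illustration, not the paper's implementation: it substitutes a plain coordinate-descent weighted lasso for the DWL algorithm of \citet{Chang2010}, takes a single diagonal-Newton step for $\boldsymbol \alpha$ with unit step size, and omits the backtracking line search; the function name and arguments are hypothetical.

```python
import numpy as np

def em_iteration(y, X, beta, sigma, alpha, mu, nu, a_w, b_w, a_s, b_s, G):
    """One simplified EM iteration: exact E-step (eq. 8), weighted lasso
    M-step for beta via naive coordinate descent, closed-form M-step for
    sigma, and one diagonal-Newton step for alpha (unit step size)."""
    n, p = X.shape
    # E-step: posterior mean of omega_jk on the edges of G
    diff2 = (alpha[:, None] - alpha[None, :]) ** 2
    W = 2 * nu * a_w * G / (2 * nu * b_w + diff2)
    # M-step for beta: weighted lasso with weights xi_j = sigma * exp(alpha_j)
    xi = sigma * np.exp(alpha)
    for _ in range(200):                      # naive coordinate descent
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - xi[j], 0.0) / (X[:, j] @ X[:, j])
    # M-step for sigma: closed form in terms of c1, c2, c3
    resid = y - X @ beta
    c1 = 0.5 * resid @ resid + b_s
    c2 = np.exp(alpha) @ np.abs(beta)
    c3 = n + p + 2 * a_s + 2
    sigma = (c2 + np.sqrt(c2 ** 2 + 8 * c1 * c3)) / (2 * c3)
    # M-step for alpha: one diagonal-Newton step, d = -g / diag(H)
    Omega_diag = 1.0 + W.sum(axis=1)
    g = (sigma * (Omega_diag * (alpha - mu) - W @ (alpha - mu))
         - nu * sigma + nu * np.abs(beta) * np.exp(alpha))
    D_H = sigma * Omega_diag + nu * np.abs(beta) * np.exp(alpha)
    return beta, sigma, alpha - g / D_H
```

On a small simulated problem with one strong signal variable and no edges ($G=0$, the EMSH case), a handful of these iterations already drive the noise coefficients to or near zero while retaining the signal coefficient.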
When $\widehat{\sigma}$ and $\widehat{\boldsymbol \alpha}$ are fixed, the solution $\widehat{\boldsymbol \beta}$ in \eqref{eq:betahat} resembles an adaptive lasso solution with the regularization parameter $\widehat{\boldsymbol \xi}$. Instead of assuming fixed weights as in the adaptive lasso, the EMSHS uses the data and the underlying graph knowledge to learn the weights. Specifically, the estimate of $\alpha_j$ depends on the shrinkage parameters corresponding to variables connected to $x_j,j=1,\ldots,p$, and the corresponding partial correlations, as follows \begin{align} \big|\widehat{\beta}_j\big| = \frac{\widehat{\sigma}}{\nu} \left( \mu + \nu - \widehat{\alpha}_j + \sum_{k \sim j} \omega_{jk}^{(\infty)} (\widehat{\alpha}_k - \widehat{\alpha}_j) \right) e^{-\widehat{\alpha}_j}. \label{eq:adaptive_weights} \end{align} By estimating the weights in an adaptive manner guided by the prior graph knowledge, the proposed approach avoids having to specify an initial consistent estimator for the weights as in the adaptive lasso, which is expected to be of significant practical advantage in high dimensional settings. This is in fact our experience in numerical studies; see Section 4. Finally, we note that larger values of $\widehat{\alpha}_j$ translate to smaller values of $\big|\widehat{\beta}_j\big|,j=1,\ldots,p$, and vice versa, clearly demonstrating the role of the shrinkage parameters $\boldsymbol \alpha$. \section{Theoretical Properties} \label{sec:oracle} To fix ideas, let $p_n$ denote the number of candidate predictors, of which $q_n$ are the true important variables. Model~\eqref{eq:model} is reformulated as \begin{align*} \mathbf y_n = X_n \boldsymbol \beta_0 + \boldsymbol \epsilon_n, \end{align*} where $\mathbf y_n$ is the $n \times 1$ response vector, $X_n$ is the $n\times p_n$ design matrix, $\boldsymbol \beta_0$ is the $p_n \times 1$ true coefficient vector, and $\boldsymbol \epsilon_n$ is the $n \times 1$ error vector.
The errors are independent Gaussian with mean 0 and variance $\sigma_0^2$; $\boldsymbol \epsilon_n \sim \mathcal{N}(\mathbf 0,\sigma_0^2I_n)$, and the errors are also independent of the covariates. The covariates are stochastic and are dictated by an inverse covariance matrix depending on a true graph $\mathcal{G}_{0n}$. They are standardized such that \begin{align*} \mathbf 1' \mathbf x_{nj} = 0, \qquad \mathbf x_{nj}'\mathbf x_{nj} = n, \qquad j=1,\dots,p_n, \end{align*} where $\mathbf x_{nj}$ is the $j$-th column (variable) of $X_n$, and let $\mathit \Sigma_n = \frac{1}{n} X_n' X_n$ be the sample covariance matrix. Let $\widehat{\boldsymbol \theta}_n = (\widehat{\boldsymbol \beta}_n',\widehat{\sigma}_n^2,\widehat{\boldsymbol \alpha}_n')'$ be the EMSHS solution. Let $\mathcal{A}_n = \{j:\widehat{\beta}_{nj} \neq 0\}$ be the index set of the selected variables in $\widehat{\boldsymbol \beta}_n$, and $\mathcal{A}_0 = \{j:\beta_{0j} \neq 0\}$ be the index set of the true important variables, where $|\mathcal{A}_0|=q_n$. We assume $\|\boldsymbol \beta_0\|$ is bounded so that the variance of the response and the signal-to-noise ratio stay bounded. Without loss of generality, we assume $\|\boldsymbol \beta_0\| = 1$. For any index set $\mathcal{A}$, $\mathbf v_\mathcal{A}$ represents the subvector of a vector $\mathbf v$ with entries corresponding to $\mathcal{A}$. $E_{\mathcal{A}\mathcal{B}}$ is the submatrix of a matrix $E$ with rows and columns corresponding to $\mathcal{A}$ and $\mathcal{B}$, respectively. When a sequential index set $\mathcal{A}_n$ is used for a sequence of vectors or matrices indexed by $n$, the subscript $n$ may be omitted for conciseness if it does not cause confusion. For example, $\mathbf v_{n\mathcal{A}_n}$ can be written as $\mathbf v_{\mathcal{A}_n}$ or $\mathbf v_{n\mathcal{A}}$, and $E_{n\mathcal{A}_n\mathcal{B}_n}$ can be written as $E_{\mathcal{A}_n\mathcal{B}_n}$ or $E_{n\mathcal{A}\mathcal{B}}$.
Let $O( \cdot )$, $o( \cdot )$, $O_p( \cdot )$, and $o_p( \cdot )$ denote the standard big $O$, little $o$, big $O$ in probability, and little $o$ in probability, respectively. Further, $f(n) = \Theta(g(n))$ indicates that $f(n)$ and $g(n)$ satisfy $f(n) = O(g(n))$ and $g(n) = O(f(n))$; $f(n) = \Theta_p(g(n))$ is similarly defined. When these notations are used for vectors and matrices, they bound the $L_2$-norm $\| \cdot \|$ of the entities. For example, $\mathbf v = O(n)$ means that $\|\mathbf v\| = O(n)$. Every norm $\| \cdot \|$ in this article denotes the $L_2$ norm. Finally, $\rightarrow_p$ and $\rightarrow_d$ denote convergence in probability and in distribution, respectively. \subsection{Oracle Property for Fixed $p$} Consider the case with a fixed number of candidate predictors (i.e., $p_n=p$). Suppose the following conditions hold as $n \rightarrow \infty$. \begin{enumerate}[label=(A\arabic{enumi}),ref=(A\arabic{enumi})] \item \label{ass:fix:beta} $\|\boldsymbol \beta_0\| = 1$ and $\min_{j \in \mathcal{A}_0} |\beta_{0j}| \ge C_\beta$ for some constant $C_\beta>0$. \item \label{ass:fix:XX} $\mathit \Sigma_n \rightarrow_p \mathit \Sigma_0$ where $\mathit \Sigma_0$ is positive definite and depends on $\mathcal{G}_{0n}=\mathcal{G}_0$. \item \label{ass:fix:mu} $\mu_n = R\log n + o(\log n)$ where $1/2<R<1$. \item \label{ass:fix:nu} $\nu_n = \Theta( n^{-r} \log n )$ where $0 < r < R-1/2$. \item \label{ass:fix:omega} $a_{\omega n} b_{\omega n}^{-1} = o(1)$. \item \label{ass:fix:sigma} $a_{\sigma n} = a_{\sigma 1} n^z$ and $b_{\sigma n} = b_{\sigma 1} n^z$ for $a_{\sigma 1}>0$, $b_{\sigma 1} > 0$, and $0 \le z < 1$. \end{enumerate} Assumption \ref{ass:fix:beta} states that the nonzero coefficients stay away from zero, although their magnitudes are allowed to vary with $n$.
Assumption \ref{ass:fix:XX} is a fairly general regularity condition on the design matrix: it rules out collinearity between covariates and ensures that the important variables cannot be replaced by any of the remaining variables in the model. Readers are referred to Remark \ref{rem:fix:principle} for comments on \ref{ass:fix:mu} and \ref{ass:fix:nu}. Assumption \ref{ass:fix:omega} forces the precision matrix to assume a diagonal form as $n \to \infty$. Thus, as $n \to \infty$, we essentially do not need the prior graph knowledge $\mathcal{G}_0$ to establish the theoretical results in the fixed $p$ case. Hence our asymptotic results for fixed $p$ are agnostic to the structure of the prior graph, and therefore robust to its mis-specification. However, we note that for finite samples, incorporating true prior graph knowledge is of paramount importance for achieving improved numerical performance. According to \ref{ass:fix:sigma}, the prior on $\sigma^2$ is well-tightened, with $\sigma^2$ converging to a constant when $z>0$. \begin{thm} \label{oracle:fix} Assume conditions \ref{ass:fix:beta}-\ref{ass:fix:sigma}. The following statements hold for the EMSHS estimator $\widehat{\boldsymbol \theta}_n = (\widehat{\boldsymbol \beta}_n',\widehat{\sigma}_n^2,\widehat{\boldsymbol \alpha}_n')'$ as $n \rightarrow \infty$. \begin{enumerate}[label=(\alph*)] \item $P ( \mathcal{A}_n = \mathcal{A}_0 ) \rightarrow 1$. \item $n^{1/2} \left( \widehat{\boldsymbol \beta}_{n\mathcal{A}_0} - \boldsymbol \beta_{0\mathcal{A}_0} \right) \rightarrow_d \mathcal{N}\left(\mathbf 0, \sigma_0^2 \mathit \Sigma_{0\mathcal{A}_0\mathcal{A}_0}^{-1} \right)$. \item The solution is unique in probability. \end{enumerate} \end{thm} The proof of Theorem \ref{oracle:fix} is provided in the Appendix.
\begin{rem} Although we only consider Gaussian errors, the results also hold for error distributions with finite variance. \end{rem} \begin{rem} \label{rem:fix:principle} Somewhat informally, \ref{ass:fix:mu} ensures $\widehat{\xi}_{nj} = \widehat{\sigma}_n e^{\widehat{\alpha}_{nj}} = \Theta_p(n^R)$ for $j \in \mathcal{A}_n^c$, and \ref{ass:fix:nu} ensures $|\widehat{\beta}_{nj}| \widehat{\xi}_{nj} = \Theta_p(n^r)$ for $j \in \mathcal{A}_n$. Therefore, if $\widehat{\boldsymbol \beta}_n$ is $\sqrt{n}$-consistent, which is indeed the case, the important variables receive shrinkage of order $\widehat{\xi}_{nj} = \Theta_p(n^r)$ and the unimportant variables receive shrinkage of order \emph{at least} $\widehat{\xi}_{nj} = \Theta_p(n^{r+1/2})$. This is the key that leads to the oracle property. \end{rem} \begin{rem} \label{rem:fix:sigma} The true residual variance $\sigma_0^2$ is consistently estimated by $\widehat{\sigma}_n^2$; that is, $\widehat{\sigma}_n^2 \rightarrow_p \sigma_0^2$. \end{rem} \subsection{Oracle Property for Diverging $p$} When the number of candidate predictors is diverging, let $\mathcal{G}_n = \langle V_n, E_n \rangle$ be the working graph used to fit the model, where $V_n = \{1,\dots,p_n\}$ and $E_n$ is the set of edges. Let $G_n$ be the adjacency matrix of $\mathcal{G}_n$, $l_{nj} = \sum_{k=1}^{p_n} G_{n,jk}$ be the degree of vertex $j$ in $\mathcal{G}_n$, and $L_n = \max_{1 \le j \le p_n} l_{nj}$ be the maximum degree among all vertices. Suppose the following conditions hold as $n \rightarrow \infty$. \begin{enumerate}[label=(B\arabic{enumi}),ref=(B\arabic{enumi})] \item \label{ass:p} $p_n = O(\exp(n^U))$ where $0 \le U < 1$. \item \label{ass:q} $q_n = O(n^u)$ where $0 \le u < (1-U)/2$ and $q_n \le p_n$. \item \label{ass:beta} $\|\boldsymbol \beta_0\| = 1$ and $\min_{j\in \mathcal{A}_0} |\beta_{0j}| \ge C_\beta q_n^{-1/2}$ for some constant $C_\beta>0$.
\item \label{ass:XX} Assume that $\mathcal{G}_{0n}$ is such that the smallest eigenvalue of $\mathit \Sigma_{n\mathcal{A}\mathcal{A}}$ is greater than $\tau_1$ for any index set $\mathcal{A}$ with $|\mathcal{A}| \le n$, and that the largest eigenvalue of $\mathit \Sigma_n$ is less than $\tau_2$ almost surely, where $0<\tau_1<\tau_2<\infty$. \item \label{ass:rho} Assume that $\mathcal{G}_{0n}$ is such that the following partial orthogonality condition holds almost surely, where $\rho_n = O(n^{-1/2})$: \begin{align*} \|\mathit \Sigma_{n\mathcal{B}\mathcal{C}}\|^2 \le \rho_n^2 \|\mathit \Sigma_{n\mathcal{B}\mathcal{B}}\| \|\mathit \Sigma_{n\mathcal{C}\mathcal{C}}\|, \qquad \forall \mathcal{B} \subset \mathcal{A}_0, \forall \mathcal{C} \subset \mathcal{A}_0^c. \end{align*} \item \label{ass:mu} $\mu_n = R\log n + \frac{1}{2} \log (1+p_n/n) + o(\log n)$ where $(U+1)/2<R<1-u$. \item \label{ass:nu} $\nu_n = \Theta( (1+p_n/n)^{-1} n^{-r} \log n )$ where $0 < r < R-1/2< 1/2-u$. \item \label{ass:omega} $L_n a_{\omega n} b_{\omega n}^{-1} = o(1)$. \item \label{ass:sigma} $a_{\sigma n} = a_{\sigma 1} n^z$ and $b_{\sigma n} = b_{\sigma 1} n^z$ for $a_{\sigma 1}>0$, $b_{\sigma 1} > 0$, and $1-r<z<1$. \end{enumerate} Assumption~\ref{ass:p} allows the number of candidate predictors to increase at an exponential rate, and \ref{ass:q} allows the number of important variables to diverge as well.
Assumption \ref{ass:beta} states that the $L_2$ norm of the true regression coefficients is bounded, which in conjunction with diverging $q_n$ implies that some of the true nonzero coefficients may become quite small. However, \ref{ass:beta} ensures that they remain sufficiently far from zero. In order to accommodate increasing $p_n$ and $q_n$, the shrinkage parameters need to be carefully calibrated, which is ensured by conditions \ref{ass:mu} and \ref{ass:nu} on $\mu_n$ and $\nu_n$ and by the fact that $q_n$ increases at a moderate rate in \ref{ass:q}. The roles of $\mu_n$ and $\nu_n$ are further explained in Remark \ref{rem:rate}. Assumption \ref{ass:XX} is analogous to \ref{ass:fix:XX} for the fixed $p$ case. The partial orthogonality condition in \ref{ass:rho} assumes that the unimportant variables are asymptotically weakly correlated with the important variables; similar assumptions are widely used for the case of diverging $p$ in the literature \citep{Huang2008}. Since $p_n\to\infty$, the degree of a vertex in the graph $\mathcal{G}_n$ can diverge. In order to precisely regulate the smoothing effect between neighboring shrinkage parameters, condition \ref{ass:fix:omega} needs to be extended to \ref{ass:omega}, which incorporates information about the degrees of vertices in the working graph $\mathcal{G}_n$. Condition \ref{ass:sigma} is stronger than \ref{ass:fix:sigma} in order to prevent $\widehat{\sigma}^2_n$ from converging to zero much faster than desired; see Remark \ref{rem:sigma} for further comments on $\widehat{\sigma}^2_n$. \begin{thm} \label{thm:oracle} Assume conditions \ref{ass:p}-\ref{ass:sigma}. The following statements hold for the EMSHS estimator $\widehat{\boldsymbol \theta}_n = (\widehat{\boldsymbol \beta}_n',\widehat{\sigma}_n^2,\widehat{\boldsymbol \alpha}_n')'$ as $n \rightarrow \infty$. \begin{enumerate} \item $P( \mathcal{A}_n = \mathcal{A}_0 ) \rightarrow 1$.
\item Letting $s_n^2 = \boldsymbol \gamma_n' \mathit \Sigma_{n\mathcal{A}_0\mathcal{A}_0}^{-1} \boldsymbol \gamma_n$ for any sequence of $q_n \times 1$ nonzero vectors $\boldsymbol \gamma_n$, we have \begin{align*} n^{1/2} s_n^{-1} \boldsymbol \gamma_n' ( \widehat{\boldsymbol \beta}_{n\mathcal{A}_0} - \boldsymbol \beta_{0\mathcal{A}_0} ) \rightarrow_d \mathcal{N} (0,\sigma_0^2). \end{align*} \item The solution is unique in probability. \end{enumerate} \end{thm} We note that, in contrast to the fixed $p$ case, the oracle property for the diverging $p$ case requires assumptions on the true graph, as in conditions \ref{ass:XX} and \ref{ass:rho}, as well as knowledge about the working graph $\mathcal{G}_n$ in \ref{ass:omega}. The proof is provided in the Appendix. \begin{rem} Although we only consider Gaussian errors, the results can be readily generalized to moderately heavier-tailed errors. \end{rem} \begin{rem} \label{rem:rate} In parallel with Remark \ref{rem:fix:principle}, somewhat informally, \ref{ass:mu} ensures $\widehat{\xi}_{nj} = \Theta_p(n^R)$ for $j \in \mathcal{A}_n^c$, and \ref{ass:nu} ensures $|\widehat{\beta}_{nj}| \widehat{\xi}_{nj} = \Theta_p(n^r)$ for $j \in \mathcal{A}_n$. If $\widehat{\boldsymbol \beta}_n$ is $\sqrt{n}$-consistent, which is indeed the case, the important variables receive shrinkage of order \emph{at most} $\widehat{\xi}_{nj} = \Theta_p(n^r q_n^{1/2})$ due to \ref{ass:beta}, and the unimportant variables receive shrinkage of order \emph{at least} $\widehat{\xi}_{nj} = \Theta_p(n^{r+1/2})$. \end{rem} \begin{rem} \label{rem:sigma} Unlike in Remark \ref{rem:fix:sigma}, $\widehat{\sigma}_n^2$ may converge to 0. However, once rescaled, $\widehat{\sigma}_n^2$ consistently estimates the true residual variance $\sigma_0^2$; that is, $(n+p_n)\widehat{\sigma}_n^2/n \rightarrow_p \sigma_0^2$.
\end{rem} \section{Simulation Study} \label{simulation} We conduct simulations to evaluate the performance of the proposed approach in comparison with several existing methods. The competing methods include the lasso (Lasso), the adaptive lasso (ALasso) \citep{Zou2006}, the Bayesian variable selection approach using spike-and-slab priors and MRF priors by \citet{Stingo2011}, which we denote by BVS-MRF, and finally the EM approach for Bayesian variable selection (denoted EMVS) proposed by \citet{Rockova2014} and its extension incorporating structural information (denoted EMVSS). Of note, EMSHS, EMVSS, and BVS-MRF incorporate the graph information, whereas the other methods do not. For Lasso and ALasso, we use the glmnet R package, where the initial consistent estimator for ALasso is given by ridge regression. The Matlab code for the MCMC approach is provided with the original article by \citet{Stingo2011}. \citet{Rockova2014} provided us their unpublished R code for EMVS and EMVSS. \subsection{Simulation Set-up} The simulated data are generated from the following model: \begin{align*} y_i = \mathbf x_i' \boldsymbol \beta + \epsilon_i, \qquad 1 \le i \le n, \end{align*} where $\mathbf x_i \sim \mathcal{N}(\mathbf 0,\mathit \Sigma_X)$, $\epsilon_i \sim \mathcal{N}(0,\sigma_\epsilon^2)$, and $\boldsymbol \beta = (\underbrace{1,\dots,1}_{q},\underbrace{0,\dots,0}_{p-q})$. The first $q=5$ variables are the important variables and the last $p-q$ variables are unimportant variables. The sample size is fixed at $n=50$; the residual standard deviation is fixed at $\sigma_\epsilon=1$; and we consider $p=1{,}000$, $10{,}000$, and $100{,}000$. Let $G_0$ be the adjacency matrix of the true covariate graph, which determines $\mathit \Sigma_X$. That is, $G_{0,jk} = 1$ if there is an edge between predictors $j$ and $k$, and $G_{0,jk} = 0$ otherwise. $G_0$ is generated as follows.
\begin{enumerate}[label=(\arabic*)] \item We generate $g$ virtual pathways, depending on the total number of predictors: $g=50$ for $p=1{,}000$, and $g=300$ for $p=10{,}000$ and $p=100{,}000$. \item The first pathway is composed of the $q$ important variables only. \item The numbers of genes in the other pathways are negative binomial random variables with mean $\mu_{path} = 30$. \item The genes belonging to a pathway are chosen randomly and independently of the other pathways. Hence the pathways can overlap. \item Edges are randomly generated, ensuring that all genes in a pathway have at least one path to all the other genes in the pathway. This can be done by conducting the following procedure for each pathway. \begin{enumerate}[label=(\alph*)] \item Randomly choose two genes and insert an edge between the two. Mark the two genes as connected. Mark the others as unconnected. \item \label{stepX} Randomly choose a connected gene and an unconnected gene, and add an edge between them. Mark the unconnected gene as connected. \item Repeat step \ref{stepX} until all genes are connected. This forms a tree in which all genes have at least one path to all the other genes in the pathway. \item In order to add some extra edges, for each pair of genes that do not share an edge, add an edge between them with probability $p_1 = 0.05$; $p_1$ determines the overall density of edges. \end{enumerate} \end{enumerate} Given $G_0$, the covariance matrix $\mathit \Sigma_X$ is designed as follows. \begin{enumerate}[label=(\roman*)] \item Set $A = I_p$. \item Calculate the vertex degrees $D_j = \sum_{k=1}^p G_{0,jk}$. \item \label{step3} For each pair $j<k$ with $G_{0,jk} = 1$, set $A_{jk} = A_{kj} = -S_{jk}/(\max(D_j,D_k) \times 1.1 + 0.1)$ where \begin{align*} S_{jk} = \begin{cases} 1, & \textrm{if } 1 \le j,k \le q,\\ \mathrm{Ber}(1/2), & \textrm{otherwise}.
\end{cases} \end{align*} \item Set $\mathit \Sigma_X = A^{-1}$ and then rescale $\mathit \Sigma_X$ so that its diagonal elements become 1. \end{enumerate} Note that $A$ is diagonally dominant, so the resulting covariance matrix $\mathit \Sigma_X$ is positive definite, and $X_j$ and $X_k$ are partially correlated only if $G_{0,jk} = 1$. Also note that, since this procedure involves inverting a $p \times p$ matrix, we used it for the $p=1{,}000$ and $p=10{,}000$ cases only. For the $p=100{,}000$ case, the network structure of the first $10{,}000$ variables was generated by this procedure, and the remaining $90{,}000$ variables were then added independently of the first set. Let $G$ be the adjacency matrix of the pathway graph that is used to fit the model. We consider several scenarios in which the graph used to fit the model may be specified correctly or mis-specified, as follows. \begin{enumerate}[label=\arabic*)] \item $G_0$ is as described above and $G=G_0$. \item $G_0$ is as described above but allows no edges between important variables and unimportant variables, and $G=G_0$. \item $G_0$ is the same as in scenario (1), but $G$ is randomly generated with the same number of edges as $G_0$. \item $G_0$ is the same as in scenario (2), but $G$ is randomly generated with the same number of edges as $G_0$. \item $G_0$ is the same as in scenario (1), but $G$ includes only the subset of the edges in $G_0$ for which the corresponding partial correlations are greater than $0.5$. \end{enumerate} Scenarios (1) and (2) are cases where the true graph is completely known; scenario (2) allows no correlation between important and unimportant variables and hence is an ideal setting for our approach. Scenarios (3) and (4) are the worst-case scenarios, in which $G$ is completely mis-specified.
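The graph and covariance generation described above can be sketched as follows, collapsing it to a single pathway over a small number of genes for brevity; the dimensions, seed, and edge probability are placeholders, and a single simulated dataset is drawn at the end.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 30, 5                              # small placeholder dimensions

# Steps (a)-(d): a random spanning tree keeps the pathway connected,
# then extra edges are added with probability p1 = 0.05.
G0 = np.zeros((p, p), dtype=int)
order = rng.permutation(p)
for i in range(1, p):
    j, k = order[i], rng.choice(order[:i])
    G0[j, k] = G0[k, j] = 1
extra = np.triu(rng.random((p, p)) < 0.05, 1)
G0 = np.clip(G0 + extra + extra.T, 0, 1)

# Steps (i)-(iv): precision matrix A from the graph, then Sigma_X = A^{-1}
# rescaled to unit diagonal.  A is strictly diagonally dominant, hence
# positive definite, so the inversion is safe.
A = np.eye(p)
D = G0.sum(axis=1)
for j in range(p):
    for k in range(j + 1, p):
        if G0[j, k]:
            S_jk = 1 if (j < q and k < q) else rng.integers(0, 2)
            A[j, k] = A[k, j] = -S_jk / (max(D[j], D[k]) * 1.1 + 0.1)
Sigma_X = np.linalg.inv(A)
d = np.sqrt(np.diag(Sigma_X))
Sigma_X = Sigma_X / np.outer(d, d)

# Draw one simulated dataset from the model y = x'beta + eps.
n = 50
beta = np.concatenate([np.ones(q), np.zeros(p - q)])
X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)
y = X @ beta + rng.normal(size=n)

print(np.allclose(np.diag(Sigma_X), 1.0))       # unit diagonal
print(np.all(np.linalg.eigvalsh(Sigma_X) > 0))  # positive definite
```

Each row of the off-diagonal part of $A$ sums to less than 1 in absolute value, which is why the construction always yields a valid (positive definite) covariance matrix.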
Scenario (5) mimics the situation where only strong signals from $G_0$ are known to data analysts, which lies between the ideal and the worst-case scenarios. For the proposed approach, we choose an uninformative prior $\sigma^2 \sim \mathcal{IG}(1,1)$. Based on our numerical studies, which show that the performance of EMSH and EMSHS is not highly sensitive to $a_\omega$, $b_\omega$, and $\nu$, we recommend using $a_\omega=4$, $b_\omega=1$, and $\nu=1.2$. We generate 500 simulated datasets in total, each of which contains a training set, a validation set, and a test set of size $n=50$ each. We fit the model using the training data for a grid of values of the tuning parameter $\mu$ in the interval $(3.5,7.5)$, and then choose the value that minimizes the prediction error for the validation data. Variable selection performance is assessed in terms of the number of false positives (FP) and the number of false negatives (FN), and prediction performance is assessed in terms of the mean squared prediction error (MSPE) calculated using the test data. We also report the average computation time per tuning parameter value in seconds. \subsection{Results} The simulation results are summarized in Tables \ref{tbl1}, \ref{tbl2}, and \ref{tbl3}. BVS-MRF is omitted in the $p=10{,}000$ case, and both BVS-MRF and EMVSS are omitted in the $p=100{,}000$ case, because they are not scalable or report errors when applied to these settings. All methods achieve better performance in scenario 2 (or 4) compared to scenario 1 (or 3), indicating that the problem is more challenging in the presence of nonzero correlation between the important variables and the unimportant variables. Within each of scenarios 1, 2, and 5, where true or partially true graph knowledge is available, the structured variable selection methods EMVSS and EMSHS are superior to their counterparts that do not use graph information (i.e., EMVS and EMSH).
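The tuning-parameter selection described in the set-up (fit on the training set over a grid of $\mu$ values, keep the value minimizing the validation MSPE) can be sketched generically as follows; a ridge fit is a deliberately simple stand-in for EMSHS so the selection loop is runnable, and all names and dimensions are placeholders.

```python
import numpy as np

def fit(X, y, mu):
    # Placeholder fit: ridge regression stands in for EMSHS at tuning value mu.
    return np.linalg.solve(X.T @ X + mu * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(3)
n, p = 50, 20
beta = np.concatenate([np.ones(5), np.zeros(p - 5)])
X_tr, X_va = rng.normal(size=(n, p)), rng.normal(size=(n, p))
y_tr = X_tr @ beta + rng.normal(size=n)
y_va = X_va @ beta + rng.normal(size=n)

# Fit on the training set over a grid of tuning values and keep the one
# minimizing the validation MSPE, as in the simulation design.
grid = np.linspace(3.5, 7.5, 20)
mspe = [np.mean((y_va - X_va @ fit(X_tr, y_tr, mu)) ** 2) for mu in grid]
best_mu = grid[int(np.argmin(mspe))]

print(grid[0] <= best_mu <= grid[-1])
```

The same loop applies unchanged to any fitting routine that takes a single tuning parameter; only `fit` needs to be swapped out.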
Moreover, the performance of each structured variable selection method (namely, EMSHS, EMVSS, and BVS-MRF) improves from scenario 3 (or 4) to scenario 1 (or 2), further demonstrating the benefits of correctly specified graph information. Similarly, the partially correct graph information in scenario 5 also improves the performance of these methods compared to scenario 3. For prediction, EMSHS yields the best performance in all settings when the graph information is correctly or partially correctly specified (i.e., scenarios 1, 2, and 5). When the graph information is completely mis-specified (scenarios 3 and 4), EMSHS still yields the best or close to the best performance in all settings, demonstrating its robustness to mis-specified graph information. In addition, EMSH performs the best among the unstructured variable selection methods in all cases under $p=1{,}000$ and $p=10{,}000$, lending support to the advantage of using the data to learn adaptive shrinkage in EMSH, as discussed in Section \ref{interpretation}. For variable selection, EMSHS yields the best or close to the best performance in all settings when the graph information is correctly or partially correctly specified. Of note, while EMVS tends to have close to 0 false positives, it has high false negatives. In addition, the false positives under the proposed methods are significantly lower compared to Lasso and ALasso. Finally, EMSHS consistently yields lower false positives and false negatives than EMSH in scenarios 1, 2, and 5, which again demonstrates the advantage of incorporating true graph information. While the difference in performance between EMSHS and EMVSS in terms of prediction and variable selection is subtle for $p=1{,}000$ in scenarios 1 and 5, they behave somewhat differently for $p=10{,}000$: EMSHS tends to have relatively lower false negatives while admitting slightly higher false positives.
Including more of the important variables appears to have led to smaller prediction errors for EMSHS than for EMVSS. Although somewhat slower than Lasso and ALasso, the proposed structured variable selection approach is still computationally efficient and scales to $p=100{,}000$ and higher dimensions, which is substantially better than BVS-MRF and EMVSS. \begin{small} \begin{table} \caption{\label{tbl1} The mean squared prediction error (MSPE) for the test data, the number of false positives (FP), the number of false negatives (FN), and the average computation time per tuning parameter value in seconds are recorded for the $p=1{,}000$ case. In the parentheses are the corresponding standard errors.} \begin{tabular}{l|cccr} Method & MSPE & FP & FN & \multicolumn{1}{c}{Time}\\ \hline & \multicolumn{4}{l}{Scenario \#1: Reference case}\\ Lasso & 2.29 (0.04) & 24.11 (0.48) & 0.04 (0.01) & 0.00\\ ALasso & 2.12 (0.04) & 12.20 (0.35) & 0.14 (0.02) & 0.00\\ BVS-MRF & 3.41 (0.07) & 10.56 (0.72) & 1.23 (0.05) & 540.78\\ EMVS & 5.58 (0.11) & 0.00 (0.00) & 3.55 (0.06) & 0.50\\ EMVSS & 1.36 (0.02) & 1.39 (0.07) & 0.05 (0.01) & 1.30\\ EMSH & 1.76 (0.04) & 2.62 (0.14) & 0.27 (0.03) & 0.07\\ EMSHS & 1.31 (0.03) & 1.13 (0.09) & 0.06 (0.02) & 0.75\\ \hline & \multicolumn{4}{l}{Scenario \#2: Ideal case}\\ Lasso & 1.73 (0.02) & 18.11 (0.48) & 0.00 (0.00) & 0.00\\ ALasso & 1.48 (0.02) & 6.17 (0.22) & 0.01 (0.00) & 0.00\\ BVS-MRF & 2.02 (0.04) & 7.18 (0.58) & 0.28 (0.03) & 539.82\\ EMVS & 3.11 (0.10) & 0.00 (0.00) & 1.93 (0.06) & 0.47\\ EMVSS & 1.22 (0.01) & 0.76 (0.05) & 0.01 (0.00) & 1.25\\ EMSH & 1.28 (0.02) & 0.71 (0.06) & 0.09 (0.02) & 0.06\\ EMSHS & 1.14 (0.01) & 0.24 (0.05) & 0.00 (0.00) & 0.74\\ \hline & \multicolumn{4}{l}{Scenario \#3: Worst case (Reference)}\\ Lasso & 2.21 (0.04) & 23.61 (0.47) & 0.03 (0.01) & 0.00\\ ALasso & 2.04 (0.04) & 11.50 (0.34) & 0.12 (0.02) & 0.00\\ BVS-MRF & 3.39 (0.07) & 10.39 (0.73) & 1.23 (0.06) & 516.85\\ EMVS & 5.41 (0.11) & 0.01
(0.00) & 3.49 (0.06) & 0.51\\ EMVSS & 1.83 (0.05) & 2.62 (0.11) & 0.30 (0.03) & 1.23\\ EMSH & 1.66 (0.04) & 2.74 (0.14) & 0.19 (0.02) & 0.06\\ EMSHS & 1.73 (0.04) & 5.41 (0.31) & 0.22 (0.03) & 0.68\\ \hline & \multicolumn{4}{l}{Scenario \#4: Worst case (Ideal)}\\ Lasso & 1.72 (0.03) & 18.42 (0.48) & 0.01 (0.00) & 0.00\\ ALasso & 1.48 (0.02) & 6.04 (0.22) & 0.03 (0.01) & 0.00\\ BVS-MRF & 1.99 (0.04) & 7.35 (0.59) & 0.28 (0.03) & 408.52\\ EMVS & 3.13 (0.10) & 0.00 (0.00) & 1.91 (0.06) & 0.48\\ EMVSS & 1.35 (0.02) & 1.17 (0.07) & 0.09 (0.01) & 1.20\\ EMSH & 1.29 (0.02) & 0.76 (0.06) & 0.10 (0.02) & 0.06\\ EMSHS & 1.51 (0.03) & 4.32 (0.28) & 0.19 (0.02) & 0.69\\ \hline & \multicolumn{4}{l}{Scenario \#5: Intermediate case}\\ Lasso & 2.25 (0.04) & 22.91 (0.47) & 0.05 (0.01) & 0.00\\ ALasso & 2.06 (0.04) & 11.42 (0.31) & 0.13 (0.02) & 0.00\\ BVS-MRF & 3.36 (0.07) & 12.28 (0.79) & 1.21 (0.06) & 449.23\\ EMVS & 5.41 (0.11) & 0.01 (0.00) & 3.45 (0.06) & 0.47\\ EMVSS & 1.34 (0.03) & 1.27 (0.07) & 0.04 (0.01) & 1.96\\ EMSH & 1.66 (0.04) & 2.55 (0.15) & 0.21 (0.02) & 0.06\\ EMSHS & 1.31 (0.03) & 1.33 (0.12) & 0.05 (0.01) & 0.81\\ \hline \hline \end{tabular} \end{table} \end{small} \begin{small} \begin{table} \caption{\label{tbl2} The mean squared prediction error (MSPE) for the test data, the number of false positives (FP), the number of false negatives (FN), and the average computation time per tuning parameter value in seconds are recorded for the $p=10{,}000$ case.
In the parentheses are the corresponding standard errors.} \begin{tabular}{l|cccr} Method & MSPE & FP & FN & \multicolumn{1}{c}{Time}\\ \hline & \multicolumn{4}{l}{Scenario \#1: Reference case}\\ Lasso & 3.34 (0.07) & 29.00 (0.53) & 0.34 (0.03) & 0.03\\ ALasso & 3.21 (0.08) & 15.87 (0.46) & 0.54 (0.04) & 0.02\\ EMVS & 6.88 (0.12) & 0.00 (0.00) & 4.17 (0.04) & 33.70\\ EMVSS & 2.01 (0.07) & 2.23 (0.13) & 0.56 (0.04) & 83.52\\ EMSH & 2.98 (0.09) & 4.74 (0.26) & 1.04 (0.05) & 0.96\\ EMSHS & 1.94 (0.08) & 2.71 (0.25) & 0.39 (0.04) & 5.56\\ \hline & \multicolumn{4}{l}{Scenario \#2: Ideal case}\\ Lasso & 2.30 (0.04) & 23.68 (0.55) & 0.04 (0.01) & 0.03\\ ALasso & 2.05 (0.05) & 9.16 (0.30) & 0.16 (0.02) & 0.02\\ EMVS & 5.36 (0.16) & 0.00 (0.00) & 3.15 (0.06) & 36.64\\ EMVSS & 1.43 (0.03) & 1.02 (0.07) & 0.21 (0.02) & 80.68\\ EMSH & 1.79 (0.05) & 2.10 (0.15) & 0.43 (0.04) & 0.80\\ EMSHS & 1.16 (0.02) & 0.64 (0.11) & 0.01 (0.01) & 4.67\\ \hline & \multicolumn{4}{l}{Scenario \#3: Worst case (Reference)}\\ Lasso & 3.31 (0.07) & 28.10 (0.54) & 0.34 (0.03) & 0.03\\ ALasso & 3.17 (0.07) & 15.65 (0.47) & 0.57 (0.04) & 0.03\\ EMVS & 7.13 (0.13) & 0.00 (0.00) & 4.23 (0.04) & 28.85\\ EMVSS & 3.27 (0.08) & 3.97 (0.15) & 1.37 (0.05) & 56.38\\ EMSH & 2.91 (0.08) & 4.91 (0.26) & 1.03 (0.05) & 0.87\\ EMSHS & 3.04 (0.08) & 9.34 (0.45) & 1.04 (0.05) & 4.34\\ \hline & \multicolumn{4}{l}{Scenario \#4: Worst case (Ideal)}\\ Lasso & 2.24 (0.04) & 24.25 (0.55) & 0.04 (0.01) & 0.03\\ ALasso & 1.98 (0.04) & 8.45 (0.32) & 0.14 (0.02) & 0.03\\ EMVS & 5.14 (0.15) & 0.00 (0.00) & 3.01 (0.06) & 31.66\\ EMVSS & 1.84 (0.05) & 2.22 (0.10) & 0.48 (0.03) & 54.44\\ EMSH & 1.72 (0.04) & 2.13 (0.15) & 0.35 (0.03) & 0.72\\ EMSHS & 1.94 (0.05) & 7.55 (0.39) & 0.40 (0.03) & 4.59\\ \hline & \multicolumn{4}{l}{Scenario \#5: Intermediate case}\\ Lasso & 3.26 (0.07) & 27.68 (0.54) & 0.32 (0.03) & 0.03\\ ALasso & 3.11 (0.07) & 14.75 (0.46) & 0.54 (0.04) & 0.03\\ EMVS & 6.95 (0.12) & 0.00 (0.00) & 4.17 (0.04) &
24.49\\ EMVSS & 1.94 (0.06) & 2.27 (0.11) & 0.46 (0.03) & 63.24\\ EMSH & 2.81 (0.07) & 4.66 (0.25) & 0.95 (0.05) & 0.74\\ EMSHS & 1.72 (0.06) & 2.47 (0.22) & 0.26 (0.03) & 5.37\\ \hline \end{tabular} \end{table} \end{small} \begin{small} \begin{table} \caption{\label{tbl3} The mean squared prediction error (MSPE) for the test data, the number of false positives (FP), the number of false negatives (FN), and the average computation time per tuning parameter value in seconds are recorded for the $p=100{,}000$ case. In the parentheses are the corresponding standard errors.} \begin{tabular}{l|cccr} Method & MSPE & FP & FN & \multicolumn{1}{c}{Time}\\ \hline & \multicolumn{4}{l}{Scenario \#1: Reference case}\\ Lasso & 4.87 (0.09) & 30.31 (0.65) & 1.21 (0.05) & 0.12\\ ALasso & 4.77 (0.09) & 16.91 (0.57) & 1.56 (0.06) & 0.18\\ EMSH & 4.66 (0.11) & 4.83 (0.29) & 2.35 (0.06) & 8.66\\ EMSHS & 3.28 (0.12) & 3.67 (0.28) & 1.26 (0.07) & 18.57\\ \hline & \multicolumn{4}{l}{Scenario \#2: Ideal case}\\ Lasso & 3.23 (0.07) & 28.67 (0.61) & 0.29 (0.03) & 0.12\\ ALasso & 3.07 (0.08) & 11.68 (0.37) & 0.53 (0.04) & 0.13\\ EMSH & 2.93 (0.10) & 2.85 (0.19) & 1.26 (0.05) & 7.38\\ EMSHS & 1.43 (0.07) & 1.02 (0.12) & 0.10 (0.03) & 16.55\\ \hline & \multicolumn{4}{l}{Scenario \#3: Worst case (Reference)}\\ Lasso & 5.08 (0.09) & 29.82 (0.65) & 1.31 (0.06) & 0.11\\ ALasso & 4.99 (0.10) & 17.20 (0.61) & 1.69 (0.06) & 0.16\\ EMSH & 4.89 (0.10) & 5.14 (0.32) & 2.47 (0.06) & 8.26\\ EMSHS & 4.94 (0.10) & 6.12 (0.37) & 2.49 (0.06) & 13.82\\ \hline & \multicolumn{4}{l}{Scenario \#4: Worst case (Ideal)}\\ Lasso & 3.34 (0.07) & 28.09 (0.60) & 0.27 (0.03) & 0.11\\ ALasso & 3.18 (0.08) & 11.71 (0.39) & 0.57 (0.04) & 0.15\\ EMSH & 3.01 (0.09) & 2.92 (0.19) & 1.23 (0.06) & 7.01\\ EMSHS & 3.07 (0.09) & 4.99 (0.30) & 1.23 (0.06) & 12.46\\ \hline & \multicolumn{4}{l}{Scenario \#5: Intermediate case}\\ Lasso & 5.07 (0.09) & 29.67 (0.63) & 1.29 (0.06) & 0.11\\ ALasso & 4.99 (0.10) & 16.36 (0.57) &
1.66 (0.06) & 0.16\\ EMSH & 4.83 (0.11) & 4.87 (0.30) & 2.41 (0.07) & 8.11\\ EMSHS & 3.55 (0.13) & 3.85 (0.30) & 1.44 (0.08) & 15.62\\ \hline \end{tabular} \end{table} \end{small} \section{Data Application} We applied the proposed method to the analysis of a glioblastoma data set obtained from The Cancer Genome Atlas Network \citep{verhaak2010integrated}. The data set includes survival times ($T$) and gene expression data for $p=12{,}999$ genes ($X$) for 303 glioblastoma patients. As glioblastoma is known as one of the most aggressive cancers, only $12\%$ of the samples were censored. We removed the censored observations, resulting in a sample size of $n=267$ for analysis. We fit an accelerated failure time (AFT) model, \begin{align*} \log T_i = \beta_1 X_{i1} + \cdots + \beta_p X_{ip} + \epsilon_i, \qquad i=1,\dots,n, \end{align*} where the $\epsilon_i$'s are independent Gaussian random variables and all variables were standardized to have mean 0 and variance 1. The network information ($\mathcal{G}$) for $X$ was retrieved from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, including a total of 332 KEGG pathways and $31{,}700$ edges in these pathways. In addition to EMSHS and EMSH, we included several competing methods that are computationally feasible, namely, Lasso, ALasso, EMVS, and EMVSS. The optimal tuning parameters were chosen by minimizing the 5-fold cross-validated mean squared prediction error. The tuning parameter $\mu$ had 20 candidate values ranging from 5.5 to 6.5, ensuring that solutions with various levels of sparsity were considered. We used $a_\sigma=1$ and $b_\sigma=1$ for the prior on $\sigma^2$, which is uninformative. As shown in Table \ref{tbl4}, EMSHS achieves the best prediction performance, followed by EMSH, and both are substantially less expensive than EMVS and EMVSS in terms of computation.
Similar to our simulation results, EMSH again yields better prediction performance than ALasso, demonstrating the advantage of using the data to learn adaptive shrinkage in EMSH. \begin{small} \begin{table} \caption{\label{tbl4} Cross-validated mean squared prediction error (CV MSPE) and computation time in seconds per tuning parameter (Time) from the analysis of TCGA genomic data and KEGG pathway information.} \begin{tabular}{l|cr} Method & CV MSPE & \multicolumn{1}{c}{Time}\\ \hline Lasso & 0.986 & 0.2\\ ALasso & 0.996 & 0.4\\ EMVS & 0.996 & 1346.6\\ EMVSS & 0.982 & 8284.1\\ EMSH & 0.979 & 14.3\\ EMSHS & 0.975 & 17.0\\ \hline \end{tabular} \end{table} \end{small} To assess the variable selection performance of EMSHS and EMSH, we conducted a second analysis. We randomly divided the entire sample into two subsets: the first subset, with $187$ subjects (70\% of the whole sample), was used as the training data to fit the model, and the second subset, with the remaining 30\% of the subjects, was used as the validation data to select optimal tuning parameter values. We repeated this procedure 100 times, resulting in 100 EMSHS and 100 EMSH solutions. Across the 100 random splits, 28 genes were selected at least once by EMSH and 21 genes were selected at least once by EMSHS. Further examination reveals that the genes selected by EMSH but not by EMSHS belong to pathways in which most of the genes were not selected. This lends support to the notion that incorporating graph information may reduce false positives, which is consistent with the findings in our simulations, where EMSHS tends to yield lower false positives than EMSH in scenarios 1, 2, and 5. The 3 genes most frequently selected by EMSHS are TOM1L1, RANBP17, and BRD7. For this set of genes, the Wnt signaling pathway \citep{Kandasamy2010} was identified as an enriched pathway by the ToppGene Suite \citep{Chen2009}.
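The random-split stability analysis described above can be sketched generically as follows; marginal-correlation screening is a deliberately simple stand-in for EMSH/EMSHS, and the data, dimensions, and threshold are synthetic placeholders.

```python
import numpy as np

def select(X, y, thresh=0.4):
    # Placeholder selector: marginal-correlation screening stands in for
    # EMSH/EMSHS, which would be fit on each training split in practice.
    r = np.abs(X.T @ y) / (np.linalg.norm(X, axis=0) * np.linalg.norm(y))
    return np.flatnonzero(r > thresh)

rng = np.random.default_rng(4)
n, p = 100, 30
beta = np.concatenate([np.ones(3), np.zeros(p - 3)])
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

# Repeat random 70/30 splits and count how often each gene is selected on
# the 70% training portion, mirroring the stability analysis in the text.
counts = np.zeros(p, dtype=int)
for _ in range(100):
    idx = rng.permutation(n)[: int(0.7 * n)]
    counts[select(X[idx], y[idx])] += 1

print(counts[:3].sum() > counts[3:].sum())
```

Genes with high selection frequency across splits are the stable discoveries; here the three truly important genes dominate the counts.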
Abnormalities in the Wnt signaling pathway have been associated with human malignancies in the literature. For example, BRD7 has been shown to be correlated with enlarged lateral ventricles in mice and to be highly expressed in gliomas \citep{Tang2014}, and TOM1L1 depletion has been shown to decrease tumor growth in xenografted nude mice \citep{Sirvent2012}. EMSHS reported average estimated coefficients of $-0.146$, $0.181$, and $0.193$ for TOM1L1, RANBP17, and BRD7, respectively. The signs of these coefficients are consistent with existing knowledge about the roles of these genes in promoting or suppressing the development of cancer. Our data analyses demonstrate that EMSHS yields biologically meaningful results. \section{Discussion} This article introduces a scalable Bayesian regularized regression approach and an associated EM algorithm that can incorporate structural information between covariates in high-dimensional settings. The approach relies on specifying informative priors on the log-shrinkage parameters of the Laplace priors on the regression coefficients, which results in adaptive regularization. The method does not rely on initial estimates for weights as adaptive lasso approaches do, which provides computational advantages in higher dimensions, as demonstrated in our simulations. Appealing theoretical properties for both fixed and diverging dimensions are established under very general assumptions, even when the true graph is mis-specified. The method demonstrates encouraging numerical performance in terms of scalability, prediction, and variable selection, with significant gains when the prior graph is correctly specified and robust performance under prior graph mis-specification. Extending the current approach to more general types of outcomes, such as binary or categorical outcomes, should be possible \citep{mccullagh1989generalized}, although the complexity of the optimization problem may increase.
These issues can potentially be addressed using a variety of recent advances in the literature involving EM approaches via latent variables \citep{polson2013bayesian}, coordinate descent methods \citep{wu2008coordinate}, and other optimization algorithms \citep{nocedal2006numerical} which are readily available to facilitate computation. We leave this task as a future research question of interest.
\section*{Appendix}
\subsection*{Proof of Proposition \ref{pro}}
\begin{proof}[Proof of Proposition \ref{pro}]
Note that $\Omega = I + W D_{\boldsymbol \omega} W'$, where $W$ is a $p \times p(p-1)/2$ matrix whose column $\mathbf w_{jk}$ corresponding to the edge $(j,k)$ is $\mathbf e_j-\mathbf e_k$. Here $\mathbf e_j$ is the $p \times 1$ coordinate vector whose $j$-th element is 1 and all others are zero. Therefore, we have $|\Omega| \ge 1$ and
\begin{align*}
\int |\Omega|^{-1/2} \prod_{G_{jk}=1} \omega_{jk}^{a_\omega-1} \exp ( -b_\omega \omega_{jk} ) \mathbf 1 (\omega_{jk}>0) \prod_{G_{jk}=0} \delta_0(\omega_{jk}) d\boldsymbol \omega \le \Gamma(a_\omega)^{|E|} b_\omega^{-a_\omega|E|} < \infty.
\end{align*}
\end{proof}
\subsection*{Proof of Theorem \ref{thm:oracle}}
We first prove Theorem \ref{thm:oracle} as it is more general than Theorem \ref{oracle:fix}. We then prove Theorem \ref{oracle:fix} as a special case. Note from the M-step for $\boldsymbol \beta$ that
\begin{align}
\label{sol:lasso}
\widehat{\boldsymbol \beta}_n = \operatorname*{argmin}_{\boldsymbol \beta} \, \frac{1}{2} (\mathbf y_n - X_n \boldsymbol \beta)'(\mathbf y_n-X_n\boldsymbol \beta) + \sum_{j=1}^{p_n} \widehat{\xi}_{nj} |\beta_j|,
\end{align}
where $\widehat{\boldsymbol \xi}_n = \widehat{\sigma}_n \widehat{\boldsymbol \lambda}_n = \widehat{\sigma}_n e^{\widehat{\boldsymbol \alpha}_n}$.
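The determinant bound in the proof of Proposition \ref{pro} above follows because $W D_{\boldsymbol \omega} W'$ is positive semi-definite, so every eigenvalue of $\Omega$ is at least $1$. This is easy to confirm numerically; the small graph and edge weights below are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical 4-node graph with three edges; weights are arbitrary.
p = 4
edges = [(0, 1), (1, 2), (2, 3)]
W = np.zeros((p, len(edges)))
for c, (j, k) in enumerate(edges):
    W[j, c], W[k, c] = 1.0, -1.0       # column for edge (j, k) is e_j - e_k
omega = np.array([0.5, 2.0, 0.1])      # positive edge weights
Omega = np.eye(p) + W @ np.diag(omega) @ W.T
det = np.linalg.det(Omega)             # every eigenvalue of Omega is >= 1, so det >= 1
```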
Then, by the Karush–Kuhn–Tucker (KKT) conditions (see, for example, \citet{Chang2010}), the solution is given by
\begin{align}
\label{sol:beta1}
\widehat{\boldsymbol \beta}_{\mathcal{A}_n} &= (X_{\mathcal{A}_n} ' X_{\mathcal{A}_n})^{-1} ( X_{\mathcal{A}_n}' \mathbf y_n - S_{n\mathcal{A}\mathcal{A}} \widehat{\boldsymbol \xi}_{\mathcal{A}_n} ),\\
\label{sol:beta0}
\widehat{\boldsymbol \beta}_{\mathcal{A}_n^c} &= \mathbf 0,
\end{align}
and satisfies
\begin{align}
\label{sol:inactive}
| \mathbf x_{nj}' ( \mathbf y_n - X_n \widehat{\boldsymbol \beta}_n ) | \le \widehat{\xi}_{nj}, \qquad j \notin \mathcal{A}_n,
\end{align}
where
\begin{align}
\label{sol:sign}
S_n =\mathrm{diag}(\mathrm{sign}(\widehat{\beta}_{n1}),\dots,\mathrm{sign}(\widehat{\beta}_{np_n}))
\end{align}
is the sign matrix of $\widehat{\boldsymbol \beta}_n$. The M-step for $\sigma$ yields
\begin{align}
\label{sol:sigma}
\widehat{\sigma}_n = \frac{\widehat{c}_{2n} + \sqrt{\widehat{c}_{2n}^2+8\widehat{c}_{1n}c_{3n}}}{2c_{3n}},
\end{align}
where $\widehat{c}_{1n} = \frac{1}{2}(\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n)'(\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n) + b_{\sigma n}$, $\widehat{c}_{2n} = \sum_{j=1}^{p_n} e^{\widehat{\alpha}_{nj}} |\widehat{\beta}_{nj}|$, and $c_{3n} = n+p_n+2a_{\sigma n}+2$. In addition, from the M-step for $\boldsymbol \alpha$, the solution satisfies
\begin{align}
\label{sol:alpha}
|\widehat{\beta}_{nj}| e^{\widehat{\alpha}_{nj}} = \frac{\widehat{\sigma}_n}{\nu_n} \left( \mu_n+\nu_n - \widehat{\alpha}_{nj} + \sum_{k \sim j} \omega_{jk}^{(\infty)} (\widehat{\alpha}_{nk} - \widehat{\alpha}_{nj}) \right), \qquad j=1,\dots,p_n.
\end{align}
Let $\mathcal{B}_n = \mathcal{A}_n \cap \mathcal{A}_0$, $\mathcal{C}_n = \mathcal{A}_n - \mathcal{A}_0$, and $\mathcal{D}_n = \mathcal{A}_0 - \mathcal{A}_n$.
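The closed form \eqref{sol:sigma} is the positive root of the quadratic stationarity condition $c_{3n}\sigma^2 - \widehat{c}_{2n}\sigma - 2\widehat{c}_{1n} = 0$, which can be confirmed numerically; the constants below are arbitrary positive values chosen only for the check.

```python
import math

def sigma_hat(c1, c2, c3):
    # Closed-form M-step update for sigma: the positive root of
    # c3*s^2 - c2*s - 2*c1 = 0.
    return (c2 + math.sqrt(c2 ** 2 + 8.0 * c1 * c3)) / (2.0 * c3)

c1, c2, c3 = 3.7, 1.2, 105.0                  # arbitrary positive constants
s = sigma_hat(c1, c2, c3)
residual = c3 * s ** 2 - c2 * s - 2.0 * c1    # vanishes at the stationary point
```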
The residual vector and the SSE are given by
\begin{align}
\label{solution:residual:full}
\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n = ( I - H_{\mathcal{A}_n} ) ( X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + \boldsymbol \epsilon_n ) + X_{\mathcal{A}_n} \left( X_{\mathcal{A}_n}' X_{\mathcal{A}_n} \right)^{-1} S_{n\mathcal{A}\mathcal{A}} \widehat{\boldsymbol \xi}_{\mathcal{A}_n},
\end{align}
and
\begin{align}
\nonumber
\| \mathbf y_n - X_n \widehat{\boldsymbol \beta}_n \|^2 &= ( X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + \boldsymbol \epsilon_n ) '( I - H_{\mathcal{A}_n} ) ( X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + \boldsymbol \epsilon_n )\\
\label{solution:SSE:full}
& \qquad + \widehat{\boldsymbol \xi}_{\mathcal{A}_n}' S_{n\mathcal{A}\mathcal{A}} \left( X_{\mathcal{A}_n}' X_{\mathcal{A}_n} \right)^{-1} S_{n\mathcal{A}\mathcal{A}} \widehat{\boldsymbol \xi}_{\mathcal{A}_n},
\end{align}
where $H_{\mathcal{A}_n} = X_{\mathcal{A}_n} \left( X_{\mathcal{A}_n}' X_{\mathcal{A}_n} \right)^{-1} X_{\mathcal{A}_n}'$. Due to the partial orthogonality \ref{ass:rho}, the solution $\widehat{\boldsymbol \beta}_n$ can be rewritten as
\begin{align}
\label{solution:beta}
\left[ \begin{array}{c} \widehat{\boldsymbol \beta}_{\mathcal{B}_n}\\ \widehat{\boldsymbol \beta}_{\mathcal{C}_n} \end{array} \right]
= \left[ \begin{array}{c}
\boldsymbol \beta_{0\mathcal{B}_n} + (X_{\mathcal{B}_n}'X_{\mathcal{B}_n})^{-1} (X_{\mathcal{B}_n}' (X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + \boldsymbol \epsilon_n) - S_{n\mathcal{B}\mathcal{B}} \widehat{\boldsymbol \xi}_{\mathcal{B}_n})\\
(X_{\mathcal{C}_n}'X_{\mathcal{C}_n})^{-1} (X_{\mathcal{C}_n}'\boldsymbol \epsilon_n - S_{n\mathcal{C}\mathcal{C}} \widehat{\boldsymbol \xi}_{\mathcal{C}_n})
\end{array} \right] + O_p(\rho_n),
\end{align}
where the last term $O_p(\rho_n)$ exists only when $\mathcal{C}_n \neq \emptyset$.
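The decomposition \eqref{solution:residual:full} is exact linear algebra for any sign and penalty vectors used in \eqref{sol:beta1}, and can be checked directly on simulated data. In the sketch below, the active set and the true support are chosen arbitrarily so that $\mathcal{D}_n = \mathcal{A}_0 - \mathcal{A}_n$ is nonempty; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = rng.standard_normal((n, 5))
beta0 = np.array([1.0, -2.0, 0.0, 0.7, 0.0])   # true support A_0 = {0, 1, 3}
eps = rng.standard_normal(n)
y = X @ beta0 + eps

A, D = [0, 1, 2], [3]           # fitted active set A_n; D = A_0 - A_n
XA, XD = X[:, A], X[:, D]
G = np.linalg.inv(XA.T @ XA)
s = np.sign(G @ (XA.T @ y))     # diagonal of the sign matrix S
xi = 0.1 * np.ones(len(A))      # arbitrary positive penalties
beta_A = G @ (XA.T @ y - s * xi)             # active-set solution, as in (sol:beta1)

H = XA @ G @ XA.T                            # hat matrix on the active columns
lhs = y - XA @ beta_A                        # residual vector
rhs = (np.eye(n) - H) @ (XD @ beta0[D] + eps) + XA @ G @ (s * xi)
```

The two sides agree to floating-point precision: the residual splits into the projection of the omitted signal plus noise, and a penalty-induced term.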
The residual vector if $\mathcal{C}_n = \emptyset$ is given by
\begin{align}
\label{solution:residual}
\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n = ( I - H_{\mathcal{B}_n} ) ( X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + \boldsymbol \epsilon_n ) + X_{\mathcal{B}_n} \left( X_{\mathcal{B}_n}' X_{\mathcal{B}_n} \right)^{-1} S_{n\mathcal{B}\mathcal{B}} \widehat{\boldsymbol \xi}_{\mathcal{B}_n}.
\end{align}
\begin{lem}
\label{lemma:sigma}
The following statements are true.
\begin{enumerate}
\item $\widehat{\sigma}_n^2 = O_p((1+p_n/n)^{-1})$.
\item $\widehat{\sigma}_n^{-2} = O_p((1+p_n/n)n^{1-z})$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item Since $\widehat{\boldsymbol \beta}_n$ is the solution of \eqref{sol:lasso} and since $\boldsymbol \beta = \mathbf 0$ is a possible solution, we have
\begin{align*}
\widehat{c}_{1n} + \widehat{\sigma}_n \widehat{c}_{2n} = \frac{1}{2} \|\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n\|^2 + b_{\sigma n} + \widehat{\sigma}_n \sum_{j=1}^{p_n} e^{\widehat{\alpha}_{nj}} |\widehat{\beta}_{nj}| \le \frac{1}{2} \|\mathbf y_n\|^2 + b_{\sigma n}.
\end{align*}
Since
\begin{align*}
\|\mathbf y_n\|^2 = \|X_n \boldsymbol \beta_0\|^2 + 2\boldsymbol \beta_0'X_n'\boldsymbol \epsilon_n + \|\boldsymbol \epsilon_n\|^2 = \tau_2 n + \sigma_0^2n + O_p(n^{1/2}) = O_p(n),
\end{align*}
we have $\widehat{c}_{1n} = O_p(n)$ and $\widehat{\sigma}_n \widehat{c}_{2n} = O_p(n)$. Note that
\begin{align}
\label{sigma_bound}
\sqrt{\frac{2\widehat{c}_{1n}}{c_{3n}}} \le \widehat{\sigma}_n = \frac{\widehat{c}_{2n} + \sqrt{\widehat{c}_{2n}^2 + 8\widehat{c}_{1n}c_{3n}} }{2c_{3n}} \le \frac{\widehat{c}_{2n}}{c_{3n}} + \sqrt{\frac{2\widehat{c}_{1n}}{c_{3n}}}.
\end{align}
Since, if $0 \le b \le x \le a+b$, it follows that
\begin{align*}
x^2 \le ax+bx \le ax+b(a+b) \le 2ax + b^2,
\end{align*}
we have
\begin{align*}
\widehat{\sigma}_n^2 \le \frac{2 \widehat{\sigma}_n \widehat{c}_{2n}}{c_{3n}} + \frac{2\widehat{c}_{1n}}{c_{3n}} = O_p((1+p_n/n)^{-1}).
\end{align*}
\item Since $\widehat{c}_{1n} \ge b_{\sigma n}$, the result follows by the lower bound in \eqref{sigma_bound} and \ref{ass:sigma}.
\end{enumerate}
\end{proof}
\begin{lem}
\label{lemma:alpha}
The following statements are true.
\begin{enumerate}
\item $\|\widehat{\boldsymbol \beta}_n\| = O_p(1)$.
\item $\max_{1 \le j \le p_n} \widehat{\alpha}_{nj} \le \mu_n+\nu_n$.
\item $\min_{1 \le j \le p_n} \widehat{\alpha}_{nj} \ge \frac{1}{2} \log (1+p_n/n) + (r-1/2) \log n + o_p(\log n)$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item As $\widehat{c}_{1n} = O_p(n)$ in Lemma \ref{lemma:sigma}, note that
\begin{align*}
\|\mathbf y_n-X_n\widehat{\boldsymbol \beta}_n\|^2 = \|\boldsymbol \epsilon_n\|^2 - 2(\widehat{\boldsymbol \beta}_n-\boldsymbol \beta_0)'X_n'\boldsymbol \epsilon_n + \|X_n(\widehat{\boldsymbol \beta}_n-\boldsymbol \beta_0)\|^2 = O_p(n).
\end{align*}
This implies $\|X_n(\widehat{\boldsymbol \beta}_n-\boldsymbol \beta_0)\|^2 = O_p(n)$. Since $\|\boldsymbol \beta_0\|=1$ and
\begin{align*}
\|X_{\mathcal{A}_n} \widehat{\boldsymbol \beta}_{\mathcal{A}_n}\| \le \|X_n(\widehat{\boldsymbol \beta}_n-\boldsymbol \beta_0)\| + \|X_n \boldsymbol \beta_0\|,
\end{align*}
the result follows by \ref{ass:XX}.
\item Let $j_1 = \operatorname*{argmax}_j \widehat{\alpha}_{nj}$. Due to \eqref{sol:alpha}, note that
\begin{align*}
\widehat{\alpha}_{nj_1}-\mu_n-\nu_n \le \sum_{k\sim j_1} \omega_{j_1k}^{(\infty)} ( \widehat{\alpha}_{nk} - \widehat{\alpha}_{nj_1} ) \le 0.
\end{align*}
Therefore, we have $\widehat{\alpha}_{nj_1} \le \mu_n + \nu_n$.
\item Let $j_0 = \operatorname*{argmin}_j \widehat{\alpha}_{nj}$.
Due to \eqref{sol:alpha}, note that
\begin{align*}
|\widehat{\beta}_{nj_0}| e^{\widehat{\alpha}_{nj_0}} \ge \widehat{\sigma}_n \frac{\mu_n + \nu_n - \widehat{\alpha}_{nj_0}}{\nu_n}.
\end{align*}
By Lemma \ref{lemma:sigma}(b), Lemma \ref{lemma:alpha}(a), and \ref{ass:nu}, note that
\begin{align*}
\widehat{\alpha}_{nj_0} &\ge -\log |\widehat{\beta}_{nj_0}| + \log \widehat{\sigma}_n + \log (\mu_n+\nu_n-\widehat{\alpha}_{nj_0}) - \log \nu_n\\
&\ge (r-1/2) \log n + \frac{1}{2} \log (1+p_n/n) + \log (\mu_n+\nu_n-\widehat{\alpha}_{nj_0}) + o_p(\log n).
\end{align*}
By \ref{ass:mu}, we have
\begin{align*}
\mu_n - \widehat{\alpha}_{nj_0} + \log (\mu_n+\nu_n-\widehat{\alpha}_{nj_0}) \le (R-r+1/2)\log n + o(\log n).
\end{align*}
Note that $\nu_n = o(\log n)$ by \ref{ass:nu}. Since $R-r+1/2>0$, we have
\begin{align*}
\mu_n - \widehat{\alpha}_{nj_0} \le (R-r+1/2)\log n + o(\log n).
\end{align*}
Hence, the result follows.
\end{enumerate}
\end{proof}
\begin{lem}
\label{lemma:xi}
The following statements are true.
\begin{enumerate}
\item $\max_{1 \le j \le p_n} \widehat{\xi}_{nj} = o_p(\widehat{\sigma}_n (1+p_n/n)^{1/2} n^{1-u})$.
\item $M_n = \max_{1 \le j \le p_n} |m_{nj}| = o(\log n)$, where $m_{nj} = \sum_{k\sim j} \omega_{jk}^{(\infty)} ( \widehat{\alpha}_{nk} - \widehat{\alpha}_{nj} )$.
\item If $|\widehat{\beta}_{nj}| = 0$ for large $n$, then we have
\begin{align*}
\widehat{\xi}_{nj} > C_2 \widehat{\sigma}_n (1+p_n/n)^{1/2} n^{R-\zeta},
\end{align*}
for $\forall C_2>0$, $\forall \zeta>0$, and large $n$.
\item If $ |\widehat{\beta}_{nj}| \le C_1 n^{-c}$ for $\exists C_1>0$, $\forall c<R-r$, and large $n$, we have
\begin{align*}
\widehat{\xi}_{nj} > C_2 \widehat{\sigma}_n^2 (1+p_n/n) n^{r+c-\zeta},
\end{align*}
for $\forall C_2>0$, $\forall \zeta>0$, and large $n$, and we have
\begin{align*}
|\widehat{\beta}_{nj}| \widehat{\xi}_{nj} \le C_3 \widehat{\sigma}_n^2 (1+p_n/n) n^r,
\end{align*}
for $\exists C_3>0$ and large $n$.
\item If $ |\widehat{\beta}_{nj}| \ge C_1 n^{-c}$ for $\exists C_1>0$, $\forall c<R-r$, and large $n$, we have
\begin{align*}
\widehat{\xi}_{nj} < C_2 \widehat{\sigma}_n^2 (1+p_n/n) n^{r+c+\zeta},
\end{align*}
for $\forall C_2>0$, $\forall \zeta>0$, and large $n$, and we have
\begin{align*}
|\widehat{\beta}_{nj}| \widehat{\xi}_{nj} \ge C_3 \widehat{\sigma}_n^2 (1+p_n/n) n^r,
\end{align*}
for $\exists C_3>0$ and large $n$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item By Lemma \ref{lemma:alpha}(b), the claim follows due to \ref{ass:mu} and \ref{ass:nu}.
\item Note that, by Lemma \ref{lemma:alpha}, we have
\begin{align*}
|\widehat{\alpha}_{nk} - \widehat{\alpha}_{nj}| \le \max_j \widehat{\alpha}_{nj} - \min_j \widehat{\alpha}_{nj} = O_p(\log n).
\end{align*}
On the other hand, we have $\omega_{jk}^{(\infty)} \le a_{\omega n} b_{\omega n}^{-1}$ by \eqref{eq:Estep}. Then, by \ref{ass:omega}, we have
\begin{align*}
M_n \le \max_{1 \le j \le p_n} \sum_{k \sim j} \omega_{jk}^{(\infty)} | \widehat{\alpha}_{nk} - \widehat{\alpha}_{nj} | \le L_n a_{\omega n} b_{\omega n}^{-1} O(\log n) = o(\log n).
\end{align*}
\item If $|\widehat{\beta}_{nj}| = 0$, then we have $\widehat{\alpha}_{nj} = \mu_n + \nu_n + m_{nj}$. The claim follows by \ref{ass:mu}, \ref{ass:nu}, and part (b).
\item By \eqref{sol:alpha} and \ref{ass:nu}, note that
\begin{align*}
\widehat{\alpha}_{nj} &= -\log |\widehat{\beta}_{nj}| + \log \widehat{\sigma}_n + \log (\mu_n+\nu_n-\widehat{\alpha}_{nj} + m_{nj}) - \log \nu_n\\
&\ge (r+c)\log n + \log \widehat{\sigma}_n + \log (1+p_n/n) + \log (\mu_n+\nu_n-\widehat{\alpha}_{nj} + m_{nj}) + o(\log n).
\end{align*}
By \ref{ass:mu}, we have
\begin{align*}
\mu_n - \widehat{\alpha}_{nj} + & \log (\mu_n+\nu_n-\widehat{\alpha}_{nj} + m_{nj})\\
&\le (R-r-c)\log n - \log \widehat{\sigma}_n - \frac{1}{2} \log (1+p_n/n) + o(\log n).
\end{align*}
Note that $\nu_n + m_{nj} = o(\log n)$ by \ref{ass:nu} and part (b). Since $c<R-r$ and by Lemma \ref{lemma:sigma}, we have
\begin{align}
\label{core}
\mu_n - \widehat{\alpha}_{nj} \le (R-r-c)\log n - \log \widehat{\sigma}_n - \frac{1}{2} \log (1+p_n/n) + o(\log n),
\end{align}
and therefore
\begin{align*}
\widehat{\alpha}_{nj} \ge (r+c)\log n + \log \widehat{\sigma}_n + \log (1+p_n/n) + o(\log n).
\end{align*}
This implies
\begin{align*}
\widehat{\xi}_{nj} > C_2 \widehat{\sigma}_n^2 (1+p_n/n) n^{r+c-\zeta}.
\end{align*}
On the other hand, by \eqref{sol:alpha}, \eqref{core}, and \ref{ass:nu}, we have
\begin{align*}
|\widehat{\beta}_{nj}| \widehat{\xi}_{nj} = \widehat{\sigma}_n^2 \frac{\mu_n+\nu_n-\widehat{\alpha}_{nj} + m_{nj}}{\nu_n} \le C_3\widehat{\sigma}_n^2 (1+p_n/n) n^r.
\end{align*}
\item The arguments are in parallel to those in part (d).
\end{enumerate}
\end{proof}
\begin{lem}
\label{lemma:xibound}
Suppose $p_n \times 1$ vectors $\underline{\boldsymbol \alpha}_n$ and $\overline{\boldsymbol \alpha}_n$ satisfy, given $\widehat{\boldsymbol \beta}_n$ and $\widehat{\sigma}_n$,
\begin{align}
\label{def:loweralpha}
|\widehat{\beta}_{nj}| e^{\underline{\alpha}_{nj}} &= \widehat{\sigma}_n \frac{\mu_n+\nu_n-\underline{\alpha}_{nj}-M_n}{\nu_n}, \qquad 1 \le j \le p_n,\\
\label{def:upperalpha}
|\widehat{\beta}_{nj}| e^{\overline{\alpha}_{nj}} &= \widehat{\sigma}_n \frac{\mu_n+\nu_n-\overline{\alpha}_{nj}+M_n}{\nu_n}, \qquad 1 \le j \le p_n.
\end{align}
Let $\underline{\xi}_{nj} = \widehat{\sigma}_n e^{\underline{\alpha}_{nj}}$ and $\overline{\xi}_{nj} = \widehat{\sigma}_n e^{\overline{\alpha}_{nj}}$. Then, the following statements are true.
\begin{enumerate}
\item $\underline{\alpha}_{nj} \le \widehat{\alpha}_{nj} \le \overline{\alpha}_{nj}$ for all $1 \le j \le p_n$.
\item $\underline{\alpha}_{nj}$ is a decreasing function of $|\widehat{\beta}_{nj}|$ and $|\widehat{\beta}_{nj}| \underline{\xi}_{nj}$ is a decreasing function of $\underline{\alpha}_{nj}$. These hold for $\overline{\alpha}_{nj}$ and $\overline{\xi}_{nj}$ analogously.
\item Lemma \ref{lemma:xi}(c), \ref{lemma:xi}(d), and \ref{lemma:xi}(e) hold with $\widehat{\xi}_{nj}$ replaced by $\underline{\xi}_{nj}$ (or $\overline{\xi}_{nj}$) as well.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item Obvious from \eqref{def:loweralpha}, \eqref{def:upperalpha}, and the definition of $M_n$.
\item Obvious from the definitions \eqref{def:loweralpha} and \eqref{def:upperalpha}.
\item The same arguments in the proof of \ref{lemma:xi}(c), \ref{lemma:xi}(d), and \ref{lemma:xi}(e) are valid with $m_{nj}$ and $\widehat{\alpha}_{nj}$ replaced by $M_n$ and $\underline{\alpha}_{nj}$ (or $\overline{\alpha}_{nj}$), respectively.
\end{enumerate}
\end{proof}
\begin{lem}
\label{lemma:main1}
$P(\mathcal{A}_n \nsubseteq \mathcal{A}_0) = P(\mathcal{C}_n \neq \emptyset) \rightarrow 0$.
\end{lem}
\begin{proof}
Suppose $\mathcal{C}_n \neq \emptyset$. By \eqref{solution:beta} and \ref{ass:rho}, note that
\begin{align*}
\widehat{\sigma}_n \sum_{j \in \mathcal{C}_n} e^{\widehat{\alpha}_{nj}} |\widehat{\beta}_{nj}| = {\widehat{\boldsymbol \xi}_{\mathcal{C}_n}}' S_{n\mathcal{C}\mathcal{C}} \widehat{\boldsymbol \beta}_{\mathcal{C}_n} = O_p(h_n^{1/2}) - h_n,
\end{align*}
where $h_n = {\widehat{\boldsymbol \xi}_{\mathcal{C}_n}}' S_{n\mathcal{C}\mathcal{C}} (X_{\mathcal{C}_n}'X_{\mathcal{C}_n})^{-1} S_{n\mathcal{C}\mathcal{C}} {\widehat{\boldsymbol \xi}_{\mathcal{C}_n}}$. We claim that $h_n \rightarrow_p \infty$, so that the RHS $\rightarrow_p -\infty$ while the LHS stays positive, which yields $P(\mathcal{C}_n \neq \emptyset) \rightarrow 0$. Suppose $\max_{j \in \mathcal{C}_n} |\widehat{\beta}_{nj} | \le C_2 n^{-1/2+(r+z-1)/2}$ for $\exists C_2>0$ and large $n$. By Lemma \ref{lemma:sigma}, \ref{ass:sigma}, Lemma \ref{lemma:xibound}(b), and Lemma \ref{lemma:xibound}(c) with $\zeta = (r+z-1)/4$, we have
\begin{align*}
\min_{j \in \mathcal{C}_n} \underline{\xi}_{nj} > C_1 n^{1/2},
\end{align*}
for $\forall C_1>0$ and large $n$. By Lemma \ref{lemma:xibound}(a), we have
\begin{align*}
P( \| \widehat{\boldsymbol \xi}_{\mathcal{C}_n} \| \le C_1 n^{1/2} \; \& \; \max_{j \in \mathcal{C}_n} |\widehat{\beta}_{nj} | \le C_2 n^{-1/2+(r+z-1)/2} ) \rightarrow 0,
\end{align*}
for $\forall C_1>0$ and $\forall C_2>0$. On the other hand, suppose $\| \widehat{\boldsymbol \xi}_{\mathcal{C}_n} \| \le C_1 n^{1/2}$ for $\exists C_1>0$ and large $n$. By the fact that the errors are Gaussian and by \eqref{solution:beta}, we have $\max_{j \in \mathcal{C}_n} |\widehat{\beta}_{nj} | = o_p(n^{-1/2+\zeta})$ for $\forall \zeta>0$.
Therefore, we have
\begin{align*}
P( \| \widehat{\boldsymbol \xi}_{\mathcal{C}_n} \| \le C_1 n^{1/2} \; \& \; \max_{j \in \mathcal{C}_n} |\widehat{\beta}_{nj} | > C_2 n^{-1/2+(r+z-1)/2} ) \rightarrow 0.
\end{align*}
We have reached
\begin{align*}
P( \| \widehat{\boldsymbol \xi}_{\mathcal{C}_n} \| \le C_1 n^{1/2}) \rightarrow 0,
\end{align*}
for $\forall C_1>0$. Since $h_n \ge (\tau_2n)^{-1} \| \widehat{\boldsymbol \xi}_{\mathcal{C}_n} \|^2$, we have $h_n \rightarrow_p \infty$, as claimed.
\end{proof}
\begin{lem}
\label{lemma:main2}
$P(\mathcal{A}_n \subsetneq \mathcal{A}_0) = P(\mathcal{C}_n = \emptyset \, \& \, \mathcal{D}_n \neq \emptyset) \rightarrow 0$.
\end{lem}
\begin{proof}
Suppose $\mathcal{C}_n = \emptyset$ and $\mathcal{D}_n \neq \emptyset$. This implies that
\begin{align*}
|X_{\mathcal{D}_n}' (\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n)| < \widehat{\boldsymbol \xi}_{\mathcal{D}_n} = \widehat{\sigma}_n e^{\widehat{\boldsymbol \alpha}_{\mathcal{D}_n}}.
\end{align*}
We claim that this inequality is satisfied with probability tending to 0. Note that $\|\widehat{\boldsymbol \xi}_{\mathcal{D}_n}\| = o_p(n^{1-u/2})$ by Lemma \ref{lemma:sigma}, Lemma \ref{lemma:xi}(a), and \ref{ass:q}. On the other hand, note from \eqref{solution:residual} that
\begin{align*}
X_{\mathcal{D}_n}' (\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n) &= X_{\mathcal{D}_n}' ( I - H_{\mathcal{B}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + X_{\mathcal{D}_n}'( I - H_{\mathcal{B}_n} ) \boldsymbol \epsilon_n\\
& \qquad + X_{\mathcal{D}_n}' X_{\mathcal{B}_n} \left( X_{\mathcal{B}_n}' X_{\mathcal{B}_n} \right)^{-1} S_{n\mathcal{B}\mathcal{B}} \widehat{\boldsymbol \xi}_{\mathcal{B}_n}\\
&= X_{\mathcal{D}_n}' ( I - H_{\mathcal{B}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + O_p(n^{1/2}q_n^{1/2}) + o_p(n^R q_n^{1/2})\\
&= X_{\mathcal{D}_n}' ( I - H_{\mathcal{B}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + o_p(n^{1-u/2}).
\end{align*}
Since $\|X_{\mathcal{D}_n}' ( I - H_{\mathcal{B}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n}\| \ge C nq_n^{-1/2}$ for some constant $C$ by \ref{ass:beta} and \ref{ass:XX}, the claim follows.
\end{proof}
\begin{lem}
\label{lemma:sigma2}
$\widehat{\sigma}_n^2 = \Theta_p((1+p_n/n)^{-1})$.
\end{lem}
\begin{proof}
We already have $\widehat{\sigma}_n^2 = O_p((1+p_n/n)^{-1})$ by Lemma \ref{lemma:sigma}(a). Lemmas \ref{lemma:main1} and \ref{lemma:main2} show that $P(\mathcal{A}_n \neq \mathcal{A}_0) \rightarrow 0$. Assume $\mathcal{A}_n = \mathcal{A}_0$; then, by \eqref{solution:SSE:full}, we have
\begin{align*}
\widehat{c}_{1n} \ge \frac{1}{2} \boldsymbol \epsilon_n' (I - H_{\mathcal{A}_0}) \boldsymbol \epsilon_n = \frac{1}{2} (n-q_n) \sigma_0^2 + O_p(n^{1/2}).
\end{align*}
By the lower bound in \eqref{sigma_bound}, we have $\widehat{\sigma}_n^{-2} = O_p(1+p_n/n)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:oracle}]
By virtue of Lemmas \ref{lemma:main1} and \ref{lemma:main2}, we assume $\mathcal{A}_n = \mathcal{A}_0$, and note that, by \eqref{solution:beta}, we have
\begin{align}
\nonumber
\widehat{\boldsymbol \beta}_{n\mathcal{A}_0} - \boldsymbol \beta_{0\mathcal{A}_0} &= \left( X_{\mathcal{A}_0}' X_{\mathcal{A}_0} \right)^{-1} X_{\mathcal{A}_0}' \boldsymbol \epsilon_n - \left( X_{\mathcal{A}_0}' X_{\mathcal{A}_0} \right)^{-1} S_{n\mathcal{A}\mathcal{A}} \widehat{\boldsymbol \xi}_{\mathcal{A}_0}\\
\label{diffbeta0}
&= O_p(n^{(u-1)/2}) + o(n^{-u/2}) = o(n^{-u/2}),
\end{align}
by Lemma \ref{lemma:xi}(a) and \ref{ass:q}. By \ref{ass:beta}, this implies the sign consistency $P(\mathrm{sign}(\widehat{\beta}_{nj}) = \mathrm{sign}(\beta_{0j}), \forall j) \rightarrow 1$.
Equation \eqref{diffbeta0} also implies that $\max_{j \in \mathcal{A}_0} |\widehat{\beta}_{nj}|^{-1} = O_p(q_n^{1/2}) = O_p(n^{u/2})$ by \ref{ass:beta}, and that $\max_{j \in \mathcal{A}_0} \overline{\xi}_{nj} = O_p(n^{r+u/2})$ by Lemmas \ref{lemma:sigma}, \ref{lemma:xibound}(b), and \ref{lemma:xibound}(c). Then, by Lemma \ref{lemma:xibound}(a), we have $\max_{j \in \mathcal{A}_0} \widehat{\xi}_{nj} = O_p(n^{r+u/2})$. We have reached
\begin{align}
\label{diffbeta1}
\widehat{\boldsymbol \beta}_{n\mathcal{A}_0} - \boldsymbol \beta_{0\mathcal{A}_0} &= \left( X_{n\mathcal{A}_0}' X_{n\mathcal{A}_0} \right)^{-1} X_{n\mathcal{A}_0}' \boldsymbol \epsilon_n - o_p(n^{-1/2}).
\end{align}
This proves the asymptotic normality in part (b). For the existence of the solution, we verify the KKT condition \eqref{sol:inactive}:
\begin{align*}
|\mathbf x_{nj}' (\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n)| < \widehat{\xi}_{nj}, \qquad j \notin \mathcal{A}_0.
\end{align*}
Note that, by \eqref{sol:beta1} and \eqref{diffbeta1},
\begin{align*}
X_{n\mathcal{A}_0^c}' (\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n) &= X_{n\mathcal{A}_0^c}' ( I - H_{n\mathcal{A}_0} ) \boldsymbol \epsilon_n + o(n^{1/2} \rho_n).
\end{align*}
Since $\mathbf x_{nj}'\mathbf x_{nj} = n$ for all $j$ and the errors $e_{ni}$ are Gaussian, we have
\begin{align*}
\max_{j \notin \mathcal{A}_0} |\mathbf x_{nj}' ( I - H_{n\mathcal{A}_0} ) \boldsymbol \epsilon_n| = O_p(n^{1/2} (\log p_n)^{1/2}) = O_p(n^{(U+1)/2}).
\end{align*}
Therefore, we have
\begin{align*}
\max_{j \notin \mathcal{A}_0} |\mathbf x_{nj}' (\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n)| = O_p(n^{(U+1)/2}).
\end{align*}
On the other hand, note that $\max_{j \notin \mathcal{A}_0} |\widehat{\beta}_{nj}| = 0$. By Lemmas \ref{lemma:xibound}(b), \ref{lemma:xibound}(c), and \ref{lemma:sigma2}, we have $\max_{j \notin \mathcal{A}_0} \underline{\xi}_{nj}^{-1} = O_p(n^{-R+\zeta})$ for any $\zeta > 0$.
By Lemma \ref{lemma:xibound}(a), we have $\max_{j \notin \mathcal{A}_0} \widehat{\xi}_{nj}^{-1} = O_p(n^{-R+\zeta})$ for any $\zeta > 0$. By \ref{ass:mu}, choosing $\zeta = (2R-U-1)/4$, we have
\begin{align*}
\max_{j \notin \mathcal{A}_0} |\mathbf x_{nj}' (\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n)| < \min_{j \notin \mathcal{A}_0} |\widehat{\xi}_{nj}|
\end{align*}
with probability tending to 1. For uniqueness, note that from \eqref{eq:marginal}
\begin{align*}
-\frac{\partial^2 \log \pi(\boldsymbol \theta|\mathbf y_n,X_n)}{\partial \boldsymbol \alpha \partial \boldsymbol \alpha'} = \frac{1}{\nu_n} I + W D_{\boldsymbol \kappa} W' + \frac{1}{\sigma} D_{e^{\boldsymbol \alpha}} D_{|\boldsymbol \beta|},
\end{align*}
where $W$ is a $p$ by $p(p-1)/2$ matrix, whose column $\mathbf w_{jk}$ corresponding to the edge $(j,k)$ is $\mathbf e_j-\mathbf e_k$, and
\begin{align*}
\kappa_{jk} = G_{n,jk} a_{\omega n} \frac{4\nu_n b_{\omega n} - 2(\alpha_{nj}-\alpha_{nk})^2}{(2\nu_n b_{\omega n} + (\alpha_{nj} -\alpha_{nk})^2)^2}.
\end{align*}
Since $|\kappa_{jk}| \le \frac{a_{\omega n}}{\nu_n b_{\omega n}}$, we have $\left| \sum_{k \sim j} \kappa_{jk} \right| < \frac{L_n a_{\omega n}}{\nu_n b_{\omega n}}$. By \ref{ass:omega}, we have
\begin{align*}
-\frac{\partial^2 \log \pi(\boldsymbol \theta|\mathbf y_n,X_n)}{\partial \boldsymbol \alpha \partial \boldsymbol \alpha'} = \frac{1}{\nu_n} I + o(\frac{1}{\nu_n}) + \frac{1}{\sigma} D_{e^{\boldsymbol \alpha}} D_{|\boldsymbol \beta|}.
\end{align*}
The following table shows the Hessian matrix of $-\log \pi(\boldsymbol \theta|\mathbf y_n,X_n)$ with respect to $\sigma$, $\boldsymbol \beta_{\mathcal{A}_0}$, $\boldsymbol \alpha_{\mathcal{A}_0}$, and $\boldsymbol \alpha_{\mathcal{A}_0^c}$.
\begin{center}
\begin{tabular}{c|cccc}
w.r.t.
& $\sigma$ & $\boldsymbol \beta_{\mathcal{A}_0}$ & $\boldsymbol \alpha_{\mathcal{A}_0}$ & $\boldsymbol \alpha_{\mathcal{A}_0^c}$\\
\hline
$\sigma$ & $ \frac{6c_{1n}}{\sigma_n^4} + \frac{2c_{2n}}{\sigma_n^3} - \frac{c_{3n}}{\sigma_n^2}$ & \\
$\boldsymbol \beta_{\mathcal{A}_0}$ & $\frac{2X_{n\mathcal{A}_0}'(\mathbf y_n-X_n\boldsymbol \beta_n)}{\sigma_n^3} - \frac{ S_{\mathcal{A}_0} e^{\boldsymbol \alpha_{\mathcal{A}_0}} }{\sigma_n^2}$ & $\frac{X_{n\mathcal{A}_0}'X_{n\mathcal{A}_0}}{\sigma_n^2}$ \\
$\boldsymbol \alpha_{\mathcal{A}_0}$ & $-\frac{ D_{|\boldsymbol \beta_{\mathcal{A}_0}|} e^{\boldsymbol \alpha_{\mathcal{A}_0}} }{\sigma_n^2}$ & $\frac{D_{e^{\boldsymbol \alpha_{\mathcal{A}_0}}} S_{\mathcal{A}_0}}{\sigma_n}$ & $\frac{I}{\nu_n} + o(\frac{1}{\nu_n}) + \frac{D_{e^{\boldsymbol \alpha_{\mathcal{A}_0}}}D_{|\boldsymbol \beta_{\mathcal{A}_0}|}}{\sigma_n}$ \\
$\boldsymbol \alpha_{\mathcal{A}_0^c}$ & 0 & 0 & 0 & $\frac{I}{\nu_n} + o(\frac{1}{\nu_n})$\\
\end{tabular}
\end{center}
The Hessian evaluated at the solution is given by
\begin{center}
\begin{tabular}{c|cccc}
w.r.t.
& $\sigma$ & $\boldsymbol \beta_{\mathcal{A}_0}$ & $\boldsymbol \alpha_{\mathcal{A}_0}$ & $\boldsymbol \alpha_{\mathcal{A}_0^c}$\\
\hline
$\sigma$ & {\color{red} $ \frac{4\widehat{c}_{1n}}{\widehat{\sigma}_n^4}$} $+$ {\color{blue} $\frac{\widehat{c}_{2n}}{\widehat{\sigma}_n^3}$} & \\
$\boldsymbol \beta_{\mathcal{A}_0}$ & {\color{red} $\frac{X_{n\mathcal{A}_0}'(\mathbf y_n-X_n\widehat{\boldsymbol \beta}_n)}{\widehat{\sigma}_n^3}$} & {\color{red} $\frac{X_{n\mathcal{A}_0}'X_{n\mathcal{A}_0}}{2\widehat{\sigma}_n^2}$} $+$ {\color{green} $\frac{X_{n\mathcal{A}_0}'X_{n\mathcal{A}_0}}{2\widehat{\sigma}_n^2}$} \\
$\boldsymbol \alpha_{\mathcal{A}_0}$ & {\color{blue} $-\frac{ D_{|\widehat{\boldsymbol \beta}_{\mathcal{A}_0}|} e^{\widehat{\boldsymbol \alpha}_{\mathcal{A}_0}} }{\widehat{\sigma}_n^2}$} & {\color{green} $\frac{D_{e^{\widehat{\boldsymbol \alpha}_{\mathcal{A}_0}}} S_{n\mathcal{A}_0\mathcal{A}_0}}{\widehat{\sigma}_n}$} & {\color{green} $\frac{I}{\nu_n} + o(\frac{1}{\nu_n})$} $+$ {\color{blue} $\frac{D_{e^{\widehat{\boldsymbol \alpha}_{\mathcal{A}_0}}}D_{|\widehat{\boldsymbol \beta}_{\mathcal{A}_0}|}}{\widehat{\sigma}_n}$} \\
$\boldsymbol \alpha_{\mathcal{A}_0^c}$ & 0 & 0 & 0 & $\frac{I}{\nu_n} + o(\frac{1}{\nu_n})$
\end{tabular}
\end{center}
The red-colored submatrix is strictly positive definite. The blue-colored submatrix is positive semi-definite. We claim that the green-colored submatrix is asymptotically strictly positive definite. Note that the smallest eigenvalue of $X_{n\mathcal{A}_0}'X_{n\mathcal{A}_0}$ is greater than or equal to $n\tau_1$ by \ref{ass:XX}. The smallest eigenvalue of $I/\nu_n$ is greater than $(1+p_n/n) n^{r-\zeta}$ for any $\zeta>0$ and large $n$ by \ref{ass:nu}. On the other hand, the largest eigenvalue of $D_{e^{\boldsymbol \alpha_{\mathcal{A}_0}}} S_{\mathcal{A}_0}$ is $O_p((1+p_n/n)^{1/2} n^{r+u/2})$, as discussed above. The claim follows by \ref{ass:nu}.
We have proved that the objective function is strictly convex in the region where the solutions can reside. If there were two distinct solutions, the objective function would have to fail strict convexity at some point on the segment between them, which is impossible because the objective function is strictly convex in that region. Therefore, the solution is unique, and this completes the proof.
\end{proof}
\subsection*{Proof of Theorem \ref{oracle:fix}}
\begin{proof}[Proof of Theorem \ref{oracle:fix}]
Lemmas \ref{lemma:sigma}--\ref{lemma:xibound} hold with $U=0$, $u=0$, $1/2<R<1$, $0<r<R-1/2$, and $0 \le z < 1$. Since the partial orthogonality is not assumed in the fixed $p$ case, we take a different strategy to prove the rest. We first prove $P(\mathcal{D}_n \neq \emptyset) \rightarrow 0$, then $\widehat{\sigma}_n^{-2} = O_p(1)$, and then show $P(\mathcal{C}_n \neq \emptyset \, \& \, \mathcal{D}_n = \emptyset ) \rightarrow 0$. Suppose $\mathcal{D}_n \neq \emptyset$. Note from \eqref{solution:residual:full} that
\begin{align*}
X_{\mathcal{D}_n}' (\mathbf y_n - X_n \widehat{\boldsymbol \beta}_n) &= X_{\mathcal{D}_n}' ( I - H_{\mathcal{A}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + X_{\mathcal{D}_n}'( I - H_{\mathcal{A}_n} ) \boldsymbol \epsilon_n\\
& \qquad + X_{\mathcal{D}_n}' X_{\mathcal{A}_n} \left( X_{\mathcal{A}_n}' X_{\mathcal{A}_n} \right)^{-1} S_{n\mathcal{A}\mathcal{A}} \widehat{\boldsymbol \xi}_{\mathcal{A}_n}\\
&= X_{\mathcal{D}_n}' ( I - H_{\mathcal{A}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + O_p(n^{1/2}) + o_p(n^R)\\
&= X_{\mathcal{D}_n}' ( I - H_{\mathcal{A}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n} + o_p(n).
\end{align*}
Since $\|X_{\mathcal{D}_n}' ( I - H_{\mathcal{A}_n} ) X_{\mathcal{D}_n} \boldsymbol \beta_{0\mathcal{D}_n}\| \ge C n$ in probability for $\exists C>0$ by \ref{ass:fix:beta} and \ref{ass:fix:XX}, the KKT condition \eqref{sol:inactive} cannot be satisfied due to Lemma \ref{lemma:xi}(a). This implies $P(\mathcal{D}_n = \emptyset) \rightarrow 1$. Now assume $\mathcal{D}_n = \emptyset$; then, by \eqref{solution:SSE:full}, we have
\begin{align*}
\widehat{c}_{1n} \ge \frac{1}{2} \boldsymbol \epsilon_n' (I - H_{\mathcal{A}_n}) \boldsymbol \epsilon_n \ge \frac{1}{2} (n-p) \sigma_0^2 + O_p(n^{1/2}).
\end{align*}
By the lower bound in \eqref{sigma_bound}, we have $\widehat{\sigma}_n^{-2} = O_p(1)$. Suppose $\mathcal{C}_n \neq \emptyset$ and $\mathcal{D}_n = \emptyset$. By \eqref{sol:beta1}, we have
\begin{align*}
\widehat{\sigma}_n \sum_{j =1}^p e^{\widehat{\alpha}_{nj}} |\widehat{\beta}_{nj}| = {\widehat{\boldsymbol \xi}_{\mathcal{A}_n}}' S_{n\mathcal{A}\mathcal{A}} \widehat{\boldsymbol \beta}_{\mathcal{A}_n} = O_p(h_n^{1/2}) - h_n,
\end{align*}
where $h_n = {\widehat{\boldsymbol \xi}_{\mathcal{A}_n}}' S_{n\mathcal{A}\mathcal{A}} (X_{\mathcal{A}_n}'X_{\mathcal{A}_n})^{-1} S_{n\mathcal{A}\mathcal{A}} {\widehat{\boldsymbol \xi}_{\mathcal{A}_n}}$. By a similar argument to that in Lemma \ref{lemma:main1}, using the fact that $\widehat{\sigma}_n^{-2} = O_p(1)$, we can show that $h_n \rightarrow_p \infty$ and $P(\mathcal{C}_n \neq \emptyset \, \& \, \mathcal{D}_n = \emptyset ) \rightarrow 0$. Now we have all the results that are analogous to Lemmas \ref{lemma:sigma}--\ref{lemma:sigma2}. The rest of the proof is analogous to the proof of Theorem \ref{thm:oracle}.
\end{proof}
\end{document}
\begin{document} \title[Smooth covers on symplectic manifolds] {Smooth covers on symplectic manifolds} \author[Fran\c{c}ois Lalonde]{Fran\c{c}ois Lalonde} \author[Jordan Payette]{Jordan Payette} \address{D\'epartement de math\'ematiques et de statistique, Universit\'e de Montr\'eal; D\'{e}partement de math\'{e}matiques et de statistique, Universit\'{e} de Montr\'{e}al.} \email{[email protected]; [email protected]} \thanks{The first author is supported by a Canada Research Chair, an NSERC grant OGP 0092913 (Canada) and an FQRNT grant ER-1199 (Qu\'ebec); the second author is supported by a Graham Bell fellowship from NSERC (Canada).} \date{} \begin{abstract} In this article, we first introduce the notion of a {\it continuous cover} of a manifold parametrised by any compact manifold $T$ endowed with a mass 1 volume-form. We prove that any such cover admits a partition of unity where the sum is replaced by integrals. When the cover is smooth, we then generalize Polterovich's notion of Poisson non-commutativity to such a context in order to get a more natural definition of non-commutativity and to be in a position where one can compare various invariants of symplectic manifolds. The main theorem of this article states that the discrete Poisson bracket invariant of Polterovich is equal to our smooth version of it, as it does not depend on the nature or dimension of the parameter space $T$. As a consequence, the Poisson-bracket invariant of a symplectic manifold can be computed either in the discrete category or in the smooth one, that is to say either by summing or by integrating. The latter is in general more amenable to calculations, so that, in some sense, our result is in the spirit of the De Rham theorem equating simplicial cohomology and De Rham cohomology. We finally study the Poisson-bracket invariant associated to coverings by symplectic balls of capacity $c$, exhibiting some of its properties as the capacity $c$ varies. 
We end with some positive and negative speculations on the relation between uncertainty phase transitions and critical values of the Poisson bracket, which was the motivation behind this article. \end{abstract} \maketitle \noindent Subject classification: 53D35, 57R17, 55R20, 57S05. \section{Introduction} In mathematics, the notion of partition of unity is fundamental since it is the concept that distinguishes $C^{\infty}$ geometry from analytic geometry. In the first case, where partitions of unity apply, most objects can be decomposed into local parts, while in the second case, where partitions of unity do not apply, most objects are intrinsically global and indecomposable. It is therefore of some importance to push that notion as far as we can in order to make it more natural and applicable. Our first observation is that the right context in which one should consider partitions of unity is the continuous category (or possibly the measurable category, if one were able to make sense of that concept for families of open sets). So here continuous covers by open subsets of a given smooth manifold $M$ will be parametrised by any smooth compact manifold (possibly with boundary) $T$ endowed with a volume-form $dt$ of total mass $1$; for simplicity we shall refer to those pairs $(T, dt)$ as ``probability spaces''. We will first prove that for any compact manifold $M$, any such continuously parametrised cover of $M$ admits a partition of unity made of smooth functions. Concentrating on an arbitrary symplectic manifold $(M, \omega)$, the covers that we will consider will be made of smooth families of symplectically embedded balls of a given capacity $c= \pi r^2$ indexed by a measure space $(T, d\mu = \mathrm{d} t)$. 
Here is our first theorem: the level of Poisson non-commutativity, as defined by Polterovich in the discrete case of partitions of unity, can be generalised to the case of our families of covers and associated partitions of unity; moreover, the number that we get in this general case, which depends {\it a priori} on the probability space, actually does not, being equal to the number associated to the corresponding discrete setting. Our second theorem is that if one considers the function $f: [0, c_{max}] \to [0, \infty]$ that assigns to $c$ Polterovich's level of non-commutativity of the covers made of symplectically embedded balls of capacity $c$, as generalised by us in the smooth setting, then this function enjoys the following two properties: \noindent 1) $f$ is non-increasing, and \noindent 2) $f$ is upper semi-continuous and left-continuous. We end this paper with a question concerning the relation, for a given symplectic manifold $(M, \omega)$, between critical values of the Poisson-bracket invariant as the capacity $c$ of the ball varies, and the critical values (or ``phase transitions'') depending on $c$ of the topology of the infinite-dimensional space of symplectic embeddings of the standard ball of capacity $c$ into $(M, \omega)$. \noindent {\it Acknowledgements}. Both authors are deeply grateful to Lev Buhovsky for suggesting and proving that the $T$-parameter spaces of smooth covers can be reduced to one-dimensional families. Although we present a different proof here, one that now includes the discrete case, his idea has had a significant impact on the first version of this paper. The second author would like to thank Dominique Rathel-Fournier for inspiring discussions. 
\section{Continuous and smooth covers} Throughout this article, ``smooth'' means ``of class $C^r$'' for some arbitrary fixed $r \ge 1$, and $T$ is a compact smooth manifold of finite dimension endowed with a measure $\mu$ of total volume $1$ coming from a volume-form $\mathrm{d} t$. The following definition is far more restrictive than the one that we have in mind, but it will be enough for the purpose of this article. \begin{Definition} Let $M$ be a closed smooth manifold of dimension $n$. Let $U$ be a bounded open subset of Euclidean space $\mathbb{R}^{n}$ whose boundary is smooth, so that the closure of $U$ admits an open neighbourhood smoothly diffeomorphic to $U$. A {\it continuous cover} of $M$ of type $(T,U)$ is a continuous map $$ G: T \times U \to M $$ such that \begin{enumerate} \item for each $t \in T$, the map $G_t$ is a smooth embedding of $U$ into $M$ that can be extended to a smooth embedding of some ({\it a priori} $t$-dependent) collar neighbourhood of $U$ (and therefore to the closed set $\bar{U}$), and \item the images of $U$, as $t$ runs over the parameter space $T$, cover $M$. \end{enumerate} \end{Definition} Note that, in general, the topology of $U$ could change within the $T$-family. However, to simplify the presentation, we restrict ourselves to a fixed $U$ -- this is what we had in mind in the sentence preceding this definition. A {\it smooth cover} is defined in the same way, but now requiring that $G$ be a smooth map. 
\begin{Definition} A {\it partition of unity} $F$ subordinated to a continuous cover $G$ is a smooth function $$ \tilde{F}: T \times U \to [0, \infty) $$ such that \begin{enumerate} \item each $\tilde{F}_t : U \to {\bf R}$ is a smooth function with (compact) support in $U$, \item the closure of the union $\bigcup_{t \in T} \, \mathrm{supp} (\tilde{F}_t)$ is contained in $U$, and \item for every $x \in M$, $$ \int_T F_t(x) dt = 1, $$ \end{enumerate} \noindent where the smooth function $F_t : M \to {\bf R}$ is the pushforward of $\tilde{F}_t$ to $M$ using $G_t$, extended by zero outside the image of $G_t$; in other words, it is $F_t(x) = \tilde{F}(t, G^{-1}_t(x))$. \noindent The notation $F < G$ expresses that $F$ is a partition of unity subordinated to the cover $G$. \end{Definition} \begin{Remark} Condition (2) plays a role in the proofs of a few results of this paper by allowing us to deform $U$ a little while keeping a given $F$ fixed; we were not able to come up with arguments working without this condition. Note that we recover the usual notion of partition of unity by taking $T$ to be a finite set of points with the counting measure. \end{Remark} \begin{Theorem} Each continuous cover admits a partition of unity. \end{Theorem} \proof Let $G$ be a continuous cover of $M$ of type $(T,U)$. The general idea of the proof is to replace $G$ by a finite open cover $G'$ of $M$, to consider a partition of unity subordinated to the latter and to use it to construct a partition of unity subordinated to $G$. Cover $U$ by open balls whose closures are contained in $U$. Now, push forward this cover to $M$ using each $G_{t}$; the collection of all of these images as $t$ varies in $T$ forms an open cover of $M$ by sets diffeomorphic to the ball. Since $M$ is compact, there exists a finite subcover. Each open set in this subcover comes from some $G_t$, where $t$ is an element of a finite set $T' \subset T$. 
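Spelled out, the last sentence of the remark above reads as follows: when $T = \{1, \dots, N\}$ is a finite set of points with the counting measure, the integral condition (3) becomes
\[
\int_T F_t(x)\, dt \;=\; \sum_{i=1}^{N} F_i(x) \;=\; 1 \qquad \text{for all } x \in M,
\]
so that the functions $F_1, \dots, F_N$ form a partition of unity in the classical sense, each $F_i$ being supported in the corresponding open set $G_i(U)$ of the (now finite) cover.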
For each $t \in T'$, consider the (finite) collection $C_t$ of open balls inside $U$ whose image under $G_t$ belongs to the aforementioned subcover, so that the latter can be expressed as $G' := \{G_t(V) \, : \, t \in T', V \in C_t\}$. Since the closure of each ball $V \in C_t$ is contained in the open set $U$, by continuity of $G$ there is an open set $B_V \subset T$ centred at $t$ such that $G_t(\bar{V}) \subset G_{\tau}(U)$ for all $\tau \in B_V$, so that each $G_{\tau}^{-1} \circ G_t : \bar{V} \to U$ is defined and is a diffeomorphism onto its image. The intersection $B_t = \cap_{V \in C_t} B_V$ contains $t$ and is open since $C_t$ is finite. For each $t \in T'$, consider a smooth nonnegative bump function $\rho_t$ supported in $B_t$ whose integral over $T$ equals $1$. There exists a smooth partition of unity $\Phi = \{ \phi_V : M \to [0,1] \, | \, V \in \cup_{t \in T'} C_t \}$ on $M$ subordinated to the finite open cover $G'$. For $t \in T'$ and $V \in C_t$, the real-valued function $\tilde{F}_V(\tau,u) := \rho_t(\tau) \phi_V(G(\tau,u))$ defined on $T \times U$ is supported in $B_t \times U$, is smooth in both $u$ and $\tau$ and satisfies $\int_T \tilde{F}_V(\tau, G_{\tau}^{-1}(x))d\tau = \phi_V(x)$. It easily follows that the function \[ \tilde{F} : T \times U \to \mathbb{R} : (\tau, u) \mapsto \sum_{t \in T'} \sum_{V \in C_t} \tilde{F}_V(\tau, u) \] is a partition of unity subordinated to $G$. $\square$ \section{The $pb$ invariant and Poisson non-commutativity} Leonid Polterovich \cite{P1,P2,PR} recently introduced the notion of the level of Poisson non-com\-mutativity of a given classical (i.e.\ finite) covering of a symplectic manifold. Here is the definition: \begin{Definition} Let $(M, \omega)$ be a closed symplectic manifold and $\mathcal U$ a finite cover of $M$ by open subsets $U_1, \ldots, U_N$. 
For each partition of unity $F = (f_1, \ldots, f_N)$ subordinated to $\mathcal U$, take the supremum of $\| \{\sum_i a_i f_i, \sum_j b_j f_j\} \|$ when the $N$-tuples of coefficients $(a_i)$ and $(b_j)$ run through the $N$-cube $[-1,1]^N$, where the bracket is the Poisson bracket and the norm is the $C^0$-supremum norm. Then take the infimum over all partitions of unity subordinated to $\mathcal U$. This is by definition the $pb$ \textit{invariant} of $\mathcal U$. To summarize: \[ \pb(\mathcal U) := \underset{F < \mathcal U}{\mathrm{inf}} \; \underset{(a_i), (b_j) \in [-1,1]^N \, }{\mathrm{sup}} \, \left\| \left\{ \sum_i a_i f_i \, , \, \sum_j b_j f_j \right\} \right\| \; . \] \end{Definition} Roughly speaking, this number is a measure of the least amount of ``symplectic interaction'' that sets in a cover $\mathcal U$ can have. It is very plausible that such a number depends on the combinatorics of the cover, but also on the symplectic properties of the (intersections of the) open sets in the cover. To illustrate this point, observe that if $\mathcal U$ is an open cover made of only two open sets, then $\pb(\mathcal U) = 0$. A somewhat opposite result holds for covers constituted of displaceable open sets; let us recall that a subset $U$ of $M$ is {\it displaceable} if there is a Hamiltonian diffeomorphism $\phi$ such that $\phi(U) \cap U = \emptyset$. The main result of Polterovich in \cite{P1,PR} is that for such a cover, the number $\pb(\mathcal U)$ (multiplied by some finite number which measures the ``symplectic size'' of the sets in the cover) is bounded from below by $(2N^2)^{-1}$. In particular, $\pb(\mathcal U) > 0$ in such a case. This result heavily relies on techniques in quantum and Floer homologies and in the theory of quasi-morphisms and quasi-states. 
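The two-set claim can be verified in one line: since $f_1 + f_2 = 1$, every admissible combination is an affine function of $f_1$ alone,
\[
a_1 f_1 + a_2 f_2 = a_2 + (a_1 - a_2) f_1, \qquad b_1 f_1 + b_2 f_2 = b_2 + (b_1 - b_2) f_1,
\]
and since constants Poisson-commute with everything and $\{f_1, f_1\} = 0$, the bracket of any two such combinations vanishes identically; hence $\pb(\mathcal U) = 0$ for any cover by two open sets.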
Unfortunately, this lower bound depends on the cardinality $N$ of the open cover; as such, it does not show whether one could use the $\pb$-invariant in order to assign to a given symplectic manifold a (strictly positive) number that might be interpreted as its level of Poisson non-commutativity. Nevertheless, Polterovich conjectured in \cite{P2} and \cite{PR} that for covers made of displaceable open sets, there should be a strictly positive lower bound for $pb$ independent of the cardinality of the cover, an extremely hard conjecture. One way of solving this problem might come from the extension of the $\pb$-invariant from finite covers to continuous or smooth covers. Indeed, such covers are morally limits of finite covers as the cardinality $N$ goes to infinity, so we can expect some relation between the minimal value of $\pb$ on such covers and the level of Poisson non-commutativity of the symplectic manifold. This extension has the advantage that one may then compare the $\pb$ invariant for continuous/smooth covers to other quantities that also depend on continuous/smooth covers, such as the critical values at which families of symplectic balls undergo a ``phase transition''. We first need the following definition: \begin{Definition} Let $(M, \omega)$ be a closed symplectic manifold and $G$ a continuous cover of $M$ of type $(T,U)$ by open subsets $G_t(U)$. For each partition of unity $F$ subordinated to $G$, take the supremum of $\| \{\int_T a(t) F_t \, dt, \int_T b(t) F_t \, dt\} \|$ over all coefficients (or \textit{weights}) $a$ and $b$ that are measurable functions defined on $T$ with $dt$-almost everywhere values in $[-1,1]$. Then take the infimum over all partitions of unity subordinated to $G$. This is by definition the $pb$ \textit{invariant} of $G$. 
To summarize: \[ \pb(G) := \underset{F < G}{\mathrm{inf}} \; \underset{a, b : \, T \to [-1,1] \, \mbox{ \tiny{measurable}}}{\mathrm{sup}} \, \left\| \left\{ \int_T a(t) F_{t} dt \, , \, \int_T b(t) F_{t} dt \right\} \right\| \; . \] \end{Definition} Note that we recover Polterovich's definition by replacing $T$ by a finite set of points. The following result shows that this $\pb$-invariant is finite. \begin{lemma} Given a continuous cover $G$ of type $(T,U)$, there exists a partition of unity $F$ subordinated to $G$ whose $\pb$-invariant $\pb \, F$ is finite\footnote{The $\pb$-invariant $\pb \, F$ of a partition of unity $F$ is defined as above without the infimum over $F < G$. That is, $\pb \, G = \mathrm{inf}_{F < G} \, \pb \, F$.}. \end{lemma} \proof Consider the partition of unity $\tilde{F} : T \times U \to \mathbb{R}$ constructed in Theorem 4 above. Given a measurable function $a : T \to [-1,1]$, for any $x \in M$ we compute \[ \int_T a(\tau) F_{\tau}(x) d\tau = \sum_{t \in T'} \sum_{V \in C_t} \bar a_t \phi_V(x) \, , \] where $\bar{a}_t := \int_T a(\tau) \rho_t(\tau)d\tau$ is a number whose value is in $[-1,1]$. It follows that \[ \left| \left\{ \int_T a(\tau) F_{\tau} d\tau \, , \, \int_T b(\tau) F_{\tau} d\tau \right\}(x) \right| \le \sum_{V, W \in \cup_{t \in T'} C_t} |\{ \phi_V, \phi_W \}(x)| < \infty \] for any measurable functions $a, b : T \to [-1,1]$, which proves the claim. When the partition is also smooth with respect to $t$, the value is always finite. Indeed, since $F \in C^r(T \times M)$, it follows from Lebesgue's dominated convergence theorem that the function $a \cdot F := \int_T a(t) F_t dt \in C^r(M)$ is defined and also that $\{ a \cdot F, b \cdot F \} = \int_{T \times T} a(t) b(u) \{F_t, F_u \} \, dt \, du < \infty$ when $a$ and $b$ are weights. \QED \noindent We will from now on work only with smooth covers. 
Therefore all partitions of unity $F$ satisfy $\pb \, F < \infty.$ We recall a few facts taken from Polterovich's and Rosen's recent book \cite{PR}, since we will need them. Further information is available in this book and in the references therein. The setting is the following (\cite{PR}, chapter 4): \begin{itemize} \item $(M^{2n}, \omega)$ a compact symplectic manifold; \item $U \subset M$ an open set; \item $H(U)$ the image of $\widetilde{Ham}(U)$ in $H := \widetilde{Ham}(M)$ under the map induced by the inclusion $U \subset M$; \item $\phi$ : an element of $H(U)$; \item $c$ : a (subadditive) spectral invariant on $H(U)$ (see the definition below); \item $q(\phi) := c(\phi) + c(\phi^{-1})$, which is (almost) a norm on $H$; \item $w(U) := \mathrm{sup}_{\phi \in H(U)} \, q(\phi)$ the spectral width of $U$ (which may be infinite). \end{itemize} \begin{Definition}[\cite{PR}, 4.3.1] A function $c : H \to \mathbb{R}$ is called a \textit{subadditive spectral invariant} if it satisfies the following axioms: \begin{description} \item[ Conjugation invariance ] $c(\phi \psi \phi^{-1}) = c(\psi) \; \forall \phi, \psi \in H$; \item[ Subadditivity ] $c(\phi \psi) \le c(\phi) + c(\psi)$; \item[ Stability ] \[ \int_{0}^1 \mathrm{min}(f_{t} - g_{t}) dt \le c(\phi) - c(\psi) \le \int_{0}^1 \mathrm{max}(f_{t} - g_{t}) dt \, , \] \noindent provided $\phi, \psi \in H$ are generated by normalized Hamiltonians $f$ and $g$, respectively; \item[ Spectrality ] $c(\phi) \in \mathrm{spec}(\phi)$ for all nondegenerate elements $\phi \in H$. \end{description} \end{Definition} \begin{Remark} The first three properties of a spectral invariant are in practice the most important ones. 
However, from the spectrality axiom, one can show for instance that $w(U) < \infty$ whenever $U$ is displaceable; as such, the spectrality axiom is relevant in order to tie the spectral invariant with the symplectic topology of $M$. Let us mention that a spectral invariant exists on any closed symplectic manifold, as can be shown in the context of Hamiltonian Floer theory. \end{Remark} Given a Hamiltonian function $f \in C^{\infty}(M)$ generating the (autonomous) Hamiltonian diffeomorphism $\phi_{f} = \phi^1_{f}$ and a spectral invariant $c$, we can define the number \[ \zeta(f) := \sigma(\phi_{f}) + \langle f \rangle \in \mathbb{R} \] \noindent where $\sigma(\phi_{f}) := \lim_{n \to \infty} \frac{1}{n}c(\phi^n_{f})$ (with $\sigma$ the {\it homogenisation} of $c$) and $\langle f \rangle := V^{-1} \, \int_{M} f \omega^n$ is the mean-value of $f$, where $V = \int_M \omega^n$ is the volume of the symplectic manifold $M$. The function $\zeta : C^{\infty}(M) \to \mathbb{R}$ is called the (\textit{partial symplectic}) {\it quasi-state} associated to $c$. It has some very important properties, among which: \begin{description} \item[ Normalization ] $\zeta(a) = a$ for any constant $a$; \item[ Stability ] $\mathrm{min}_{M} (f-g) \le \zeta(f) - \zeta(g) \le \mathrm{max}_{M} (f-g)$; \item[ Monotonicity ] If $f \ge g$ on $M$, then $\zeta(f) \ge \zeta(g)$; \item[ Homogeneity ] If $s \in [0, \infty)$, then $\zeta(sf) = s \zeta(f)$; \item[ Vanishing ] If the support of $f$ is displaceable, then $\zeta(f) = 0$ (this is a consequence of the spectrality axiom for $c$); \item[ Quasi-subadditivity ] If $\{f, g \} = 0$, then $\zeta(f+g) \le \zeta(f) + \zeta(g)$. \end{description} For $f, g \in C^{\infty}(M)$, define $S(f, g) = \mathrm{min} \{ w(\mathrm{supp} \, f) \; , \; w(\mathrm{supp} \, g) \} \in [0, \infty]$. 
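We note in passing a direct consequence of the stability property: $\zeta$ is $1$-Lipschitz with respect to the $C^0$-norm. Indeed, both bounds in the stability property are dominated in absolute value by $\| f - g \|_{C^0}$, so that
\[
|\zeta(f) - \zeta(g)| \;\le\; \max\big( \, |\mathrm{min}_M (f-g)| \, , \, |\mathrm{max}_M (f-g)| \, \big) \;\le\; \| f - g \|_{C^0} \, .
\]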
It follows from Remark 8 that this number is finite whenever either $f$ or $g$ has displaceable support. \begin{Theorem}[\cite{EPZ}, 1.4 ; \cite{PR}, 4.6.1 ; the Poisson bracket inequality] For every pair of functions $f, g \in C^{\infty}(M)$ such that $S(f,g) < \infty$, \[ \Pi(f,g) := \left| \zeta(f+g) - \zeta(f) - \zeta(g) \right| \le \sqrt{2 S(f,g) \, \| \{ f, g \} \| } \; . \] \end{Theorem} \noindent We see that $\Pi(f,g)$ measures the defect of additivity of $\zeta$. In fact, this theorem implies: \begin{description} \item[ Partial quasi-linearity ] If $S(f,g) < \infty$ and if $\{f, g \} = 0$, then \[ \zeta(f+g) = \zeta(f) + \zeta(g) \; \mbox{ and } \zeta(s f) = s \zeta(f) \; \forall s \in \mathbb{R} \, . \] \end{description} It is known that some symplectic manifolds admit a spectral invariant $c$ for which $S$ takes values in $[0, \infty)$, in which case $\zeta$ is a genuine symplectic quasi-state: it is a normalized, monotone and quasi-linear functional on the Poisson algebra $(C^{\infty}(M), \{-, - \})$. \begin{Theorem}[\cite{P1}, 3.1 ; \cite{PR}, 9.2.2] Let $(M, \omega)$ be a symplectic manifold and consider a finite cover $U = \{ U_{1}, \dots, U_{N} \}$ of $M$ by displaceable open sets. Write $w(U) := \mathrm{max}_{i} \, w(U_{i}) < \infty$. Then \[ \pb(U) \, w(U) \; \ge \; \frac{1}{2N^2} \; . \] \end{Theorem} \textit{Proof} : Let $F$ be a partition of unity subordinated to $U$. Set \[ G_{1} = F_{1}, \, G_{2} = F_{1} + F_{2}, \, \dots , \, G_{N} = F_{1} + \dots + F_{N} \, . \] \noindent Using the Poisson bracket inequality and the vanishing property of $\zeta$, one obtains the following estimate: \begin{align} \notag \left| \zeta(G_{k+1}) - \zeta(G_{k}) \right| &= \left| \zeta(G_{k} + F_{k+1} \, ) - \zeta(G_{k}) - \zeta(F_{k+1}) \right| \\ \notag & \le \sqrt{2 \, \mathrm{min} ( w(\mathrm{supp} \, G_{k}) \, , \, w(\mathrm{supp} \, F_{k+1}) )} \, \sqrt{\| \{ G_{k} , F_{k+1} \} \|} \, . 
\end{align} \noindent Using the definitions of $\pb (F)$ and of $w(U)$, one gets: \begin{align} \notag \left| \zeta(G_{k+1}) - \zeta(G_{k}) \right| & \le \sqrt{2 \, w(U)} \, \sqrt{\pb (F) } \, . \end{align} \noindent This inequality holds for all $k$. Using the normalization and vanishing properties of $\zeta$ and applying the triangle inequality to a telescopic sum, one gets: \begin{align} \notag 1 & = \left| \zeta(1) - 0 \right| = \left| \zeta(G_{N}) - \zeta(G_{1}) \right| \le \sum_{k=1}^{N-1} \left| \zeta(G_{k+1}) - \zeta(G_{k}) \right| \\ \notag & \le \sum_{k=1}^{N-1} \sqrt{2 \, w(U) \, \pb (F) } \le N \sqrt{2 \, w(U) \, \pb (F) } \, . \end{align} \noindent Since this is true for any $F < U$, the result easily follows. \QED A similar result holds in the context of smooth covers. We say that a smooth cover $G : T \times U \to (M, \omega)$ is made of displaceable sets if each set $G_t(\bar{U}) = \overline{G_t(U)} \subset (M, \omega)$ is displaceable (recall that we assume that $G_t$ extends as a smooth embedding to the closure $T \times \bar{U}$). In other words, not only is each $G_t(U)$ displaceable, but so is a small neighbourhood of it. \begin{Theorem} For any smooth cover $G$ of type $(T,U)$ made of displaceable sets, there exists a constant $c = c(G) > 0$ such that \[ \pb(G) \ge \; c(G) \; . \] \end{Theorem} \textit{Proof} : The proof morally consists in a coarse-graining of the smooth cover to a finite cover. Let $W_{1}, \dots, W_{N}$ be any exhaustion of the compact manifold $T$ by nested open sets with the following property: the sets $V_{1} = W_{1}$, $V_{2} = W_{2}-W_{1}$, ..., $V_{N} = W_{N} - W_{N-1}$ are such that for every $j$ the open set $U_j := \cup_{t \in V_{j}} \, \mathrm{Im}(G_{t})$ in $M$ is displaceable. Assume for the time being that such sets $W_i$ exist. 
Notice that the sets $U_j$ cover $M$ and let $w(G) := \mathrm{sup}_j w(U_j) < \infty$. Now let $F$ be a partition of unity subordinated to $G$ and consider the functions $\int_{V_{1}} F_{t} dt$, ...\,, $\int_{V_{N}} F_{t} dt$ which form a partition of unity on $M$ subordinated to the $U_j$'s. As in the previous theorem, one estimates: \begin{align} \notag 1 &= \left| \zeta(1) - 0 \right| = \left| \zeta \left(\int_{W_{N}} F_{t}dt \right) - \zeta \left( \int_{W_{1}} F_{t}dt \right) \right| \\ \notag &\le \sum_{k=1}^{N-1} \left| \zeta \left(\int_{W_{k+1}} F_{t}dt \right) - \zeta \left( \int_{W_{k}} F_{t}dt \right) - \underset{0}{\underbrace{\zeta \left( \int_{V_{k+1}} F_{t}dt \right)}}\right| \\ \notag & \le \sum_{k=1}^{N-1} \sqrt{2 \, w(G) \, \pb(F)} \le N \sqrt{2 \, w(G) \, \pb(F)} \, . \end{align} \noindent Since this is true for all $F < G$, and since $N$ and $w(G)$ depend only on $G$ (through the choice of the $W_j$'s), the result follows with $c(G) := (2N^2 w(G))^{-1}$. The sets $W_{j}$ exist for the following reason. The closure of each $G_{t}(U)$ is a compact displaceable set, so that some open neighbourhood $O_t$ of this set is displaceable. By the continuity of the cover $G$, for any $t$ there exists an open set $t \in Y_t \subset T$ such that $G(Y_t \times U) \subset O_t$. Since $T$ is compact, a finite number of these $Y_t$ suffices to cover $T$, say $Y_1, \dots, Y_N$. Set $W_j = \cup_{k=1}^{j} Y_k$. Since $V_j \subset Y_j$, the sets $G(V_j \times U)$ are indeed displaceable. This concludes the proof. \QED It is natural to compare the $\pb$ invariant of different smooth covers of type $(T,U)$, especially if they are related to each other by a smooth family of smooth covers of the same type. This might help in understanding what is the ``optimal'' way to cover a symplectic manifold $(M, \omega)$ by copies of a set $U$. 
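In both proofs above, the final step left to the reader is the same elementary manipulation: from the telescopic estimate
\[
1 \;\le\; N \sqrt{2 \, w \, \pb(F)} \qquad \text{one gets, by squaring,} \qquad \pb(F) \;\ge\; \frac{1}{2 N^{2} w} \, ,
\]
and taking the infimum over all admissible partitions of unity $F$ yields $\pb(U)\, w(U) \ge (2N^2)^{-1}$ in the discrete case and $\pb(G) \ge c(G) = (2N^2 w(G))^{-1}$ in the smooth case.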
We are led to the following definition which lies at the heart of this article: \begin{Definition} A \textit{constraint} on smooth covers of $M$ of type $(T,U)$ is a set $C$ of such covers; the set of all smooth covers of type $(T,U)$ corresponds to the unconstrained case. Considering the $C^r$-Whitney topology on the space of smooth covers $G : T \times U \to M$, a \textit{constrained class of smooth covers of $M$ of type $(T,U)$} is defined as a connected component of the given constraint. We define the $\pb$ invariant of a (constrained) class $A$ as the infimum of $\pb(G)$ when $G$ runs over all smooth covers in $A$. \end{Definition} As an instance of a constraint, we shall consider later on the one given by asking for each embedding $G_t : (U^{2n}, \omega_0) \hookrightarrow (M^{2n}, \omega)$ to be symplectic. The obvious difficulty with this last notion of $\pb$ invariant is that it intertwines four extrema: the supremum in the definition of the $C^0$-norm, the supremum over coefficients, the infimum over partitions of unity and the infimum over the smooth covers in the class. As a consequence of this difficulty, it is not clear whether this number is strictly positive for every $M$, a problem which is related to Polterovich's conjecture; however, this number is now known to be positive for closed surfaces, as Polterovich's conjecture was recently proved valid in this context by Buhovsky, Logunov and Tanny \cite{BLT} and by the second author for genera $g \ge 1$ \cite{Pa}. \section{Equivalence of the smooth and discrete settings} This section is mainly devoted to the proof of Theorem 13 below, which can be summarized as follows: the $\pb$ invariant of any class of $T$-covers is equal to the $\pb$ invariant of an affiliated class of discrete covers. Fix a pair $(T,U)$. 
Any constraint $C$ of type $(T,U)$ determines the subset of \textit{constrained embeddings} \[ C^* := \{ G_t : U \hookrightarrow M \, | \, t \in T, \, G \in C \} \subset \mathrm{Emb}(U, M) \, . \] \noindent Any section of the natural map $T \to \pi_0(T)$ -- which associates to $t \in T$ the connected component to which it belongs -- induces a well-defined, \textit{i.e.} section-independent, map $p_T : \pi_0(C) \to [\pi_0(T), C^*] \simeq \pi_0(C^*)^{\pi_0(T)}$. An element $A \in \pi_0(C)$ is just a constrained class of covers, and the element $A^* = p_T(A) \in \mathrm{Im}(p_T)$ corresponds to the $|\pi_0(T)|$ (not necessarily distinct) connected components of $C^*$ out of whose open sets the smooth covers in $A$ are built. Denote by $B$ the subset of $\pi_0(C^*)$ which is the image of $A^*$. Thus $B$ comprises sufficiently many open sets to cover the whole of $M$. Let $\langle 1, n \rangle = [1,n] \cap \mathbb{N}$. Considering the natural map $q: C^* \to \pi_0(C^*)$, for $B \subset \pi_0(C^*)$ let $B' = q^{-1}(B) \subset C^*$. Assuming that $B'$ comprises enough open sets to cover $M$, define \[ \pb_{\mathrm{discrete}}(B) := \inf \, \{ \, \pb(G) \, | \, \exists n \in \mathbb{N}, \, G : \langle 1, n \rangle \to B' \mbox{ a cover of $M$ } \} \, . \] \noindent To simplify notation, we will, in the sequel, denote the set $B$ by the same symbol $A^*$. \begin{Theorem} [Equivalence smooth-discrete] Let $M$ be a symplectic manifold of dimension $2n$, $U$ an open subset of ${\bf R}^{2n}$ as mentioned above, and $T$ a compact manifold of strictly positive dimension endowed with a Lebesgue measure $\mu$ of total mass $1$. Consider a constraint $C$ on smooth covers of $M$ of type $(T,U)$, let $A \in \pi_0(C)$ be a constrained class of such covers and write $A^* = p_T(A) \subset \pi_0(C^*)$. 
Then \[ \pb(A) = \pb_{\mathrm{discrete}}(A^*) \; . \] \end{Theorem} \proof We first prove $\pb(A) \ge \pb_{\mathrm{discrete}}(A^*)$. Let $G$ be a smooth cover of type $(T,U)$ in the constrained class $A$ and consider a smooth partition of unity $F < G$. By property (2) in the definition of a partition of unity and by continuity of $G$, we deduce that for each $t \in T$ there is an open set $t \in B_t \subset T$ such that $\mathrm{supp}(G_t^*(F_s)) \subset U$ for all $s \in B_t$. Since $T$ is compact, there is a finite set $T' = \{t_1, \dots, t_n\} \subset T$ such that the collection $B = \{B_{t_1}, \dots, B_{t_n}\}$ covers $T$. Consider a partition of unity $\rho = \{\rho_1, \dots, \rho_n\}$ on $T$ subordinated to $B$ and for each $t_i$ define \[ F'_i : M \to [0, \infty) : x \mapsto F'_i(x) = \int_T \rho_i(t) F_t(x) \, dt \; . \] We observe that the collection $F' = \{F'_1, \dots, F'_n\}$ is a partition of unity on $M$ by smooth functions which is subordinated to the finite cover $G' := \left. G \right|_{T'}$ of $M$. We note that $\mathrm{Im}(G') \subset (A^*)'$, where we use a notation introduced just before the statement of the theorem. For $a' = (a'_1, \dots, a'_n) \in [-1,1]^n$, the quantity $a := \sum_{i=1}^n a'_i \rho_i : T \to [-1,1]$ is a $T$-weight. For $a',b' \in [-1,1]^n$ we easily compute \begin{align} \notag \left\{ \int_T a(t) F_t dt \, , \, \int_T b(u) F_u du \right\} &= \left\{ \sum_{i=1}^n a'_i F'_i \, , \, \sum_{j=1}^n b'_j F'_j \right\} \; . \end{align} Taking the suprema over weights thus yields $\pb(F) \ge \pb(F')$, while taking the infima over partitions of unity yields $\pb(G) \ge \pb(G')$. Taking the infima over covers in the classes $A$ and $A^*$ finally yields $\pb(A) \ge \pb_{\mathrm{discrete}}(A^*)$. 
We now prove $\pb(A) \le \pb_{\mathrm{discrete}}(A^*)$. Let $G' : \langle 1, n \rangle \to (A^*)'$ be a finite cover of $M$ and let $F' = \{F'_1, \dots, F'_n\}$ be a partition of unity subordinated to $G'$. Since $A^* = p_T(A)$, there exists a smooth cover $G''$ of $M$ of type $(T,U)$ in the constrained class $A \in \pi_0(C)$. Interpreting $A^* = \{A^*_1, \dots, A^*_m\}$ as a collection of connected components of $C^*$, to each connected component $A^*_i$ we can associate a point $t''_i \in T$ such that $G''_{t''_i} \in A^*_i$. From this association we can get an injective map $\langle 1, n \rangle \to T$ which associates to the integer $j$ a point $t'_j$ in the same connected component as the point $t''_i$, with $A^*_i \ni G'_j$. Call the image of this map $T' \subset T$. From these data we shall construct a smooth cover $G$ of type $(T,U)$ in the class $A$ which could act as a substitute for $G'$, in the sense that $\left. G \right|_{T'} = G'$. In fact, we shall define a smooth family $G_s$ of covers of type $(T,U)$ with $s \in [0,1]$ so that $G_0 = G''$ and $G_1 = G$, thereby showing that $G$ is indeed in the constrained class $A$. Fix a Riemannian metric on $T$. Observe that, smoothly deforming $G''$ within $A$ if necessary, we can assume that $G''$ is constant in an $\epsilon$-neighbourhood of $T'$. If some connected component of $T$ contains none of the points $t'_j$, just set $G_s = G''$ on that component. For any other connected component of $T$, say the one containing $t''_i$, pick a Riemannian metric on it and consider disjoint embedded closed geodesic $\epsilon$-balls centred at the points $t'_j$. Outside the union of these balls, set again $G_s = G''$, whereas on the ball containing $t'_j$ define $G_s$ as follows. First choose a smooth path $g_j : [0, \epsilon] \to A^*_i$ such that $g_j(0) = G'_j$ and $g_j(\epsilon) = G''(t'_j)$. 
Also pick a smooth function $\chi : [0, \epsilon] \to [0,1]$ such that $\chi(u) = 1$ if $u < \epsilon/3$ and $\chi(u) = 0$ if $u > 2\epsilon/3$. Denoting by $r(p)$ the radial distance in the $j$-th ball of a point $p$ from $t'_j$, set on that ball $G_s(p) = g_j([1 - (1- \chi(s \epsilon)) \chi(r(p))]\epsilon)$. This completely defines the family $G_s$ in the way we desired. We observe that $G$ is constant on an $(\epsilon/3)$-neighbourhood of each $t'_j$. For each $j$, pick a smooth positive function $\rho_j$ with support in the $(\epsilon/3)$-ball about $t'_j$ and which integrates to $1$. We define the smooth function $F : T \times M \to [0, \infty)$ by $F(t,x) = \sum_{j=1}^n \rho_j(t) F'_j(x)$. One easily verifies that this is a smooth partition of unity subordinated to $G$. For any $T$-weight $a : T \to [-1,1]$, define $a' = (a'_1, \dots, a'_n) \in [-1,1]^n$ via $a'_j = \int_T a(t) \rho_j(t) \, dt$. For $T$-weights $a$ and $b$ we then easily compute \begin{align} \notag \left\{ \sum_{i=1}^n a'_i F'_i \, , \, \sum_{j=1}^n b'_j F'_j \right\} &= \left\{ \int_T a(t) F_t \, dt \, , \, \int_T b(u) F_u \, du \right\} \; . \end{align} Taking the suprema over weights thus yields $\mathrm{pb}(F') \ge \mathrm{pb}(F)$, while taking the infima over partitions of unity yields $\mathrm{pb}(G') \ge \mathrm{pb}(G)$. Taking the infima over covers in the classes $A^*$ and $A$ finally yields $\mathrm{pb}_{\mathrm{discrete}}(A^*) \ge \mathrm{pb}(A)$. \QED \section{Independence of the probability space} The equivalence of the smooth and discrete settings suggests that the pb invariants might be independent of the underlying probability space $T$ parametrising the smooth covers. The purpose of this section is to make this idea precise.
\begin{proposition} Let $M$ be a symplectic manifold of dimension $2n$, $U$ an open subset of $\mathbb{R}^{2n}$ as above, and let $T_1$ and $T_2$ be compact manifolds of strictly positive dimension, each endowed with a smooth volume form of total mass $1$. Consider constraints $C_1$ and $C_2$ on smooth covers of $M$ of type $(T_1,U)$ and $(T_2, U)$, respectively. Let $A_i \in \pi_0(C_i)$, $i=1,2$, be constrained classes and assume that the corresponding sets of embeddings $(A_i^*)' \subset C^*_i \subset \mathrm{Emb}(U, M)$ coincide in the latter space. Then \[ \mathrm{pb}(A_1) = \mathrm{pb}(A_2) \; . \] \end{proposition} \proof It follows from Theorem 13 that $\mathrm{pb}(A_i) = \mathrm{pb}_{\mathrm{discrete}}(A_i^*)$, $i=1,2$. Looking at the definition, $\mathrm{pb}_{\mathrm{discrete}}(A_i^*)$ only depends on the set $(A_i^*)'$, which is itself assumed to be independent of $i$. \QED Next we discuss special sorts of constraints which not only appear frequently in practice, but for which the hypothesis of the previous proposition also follows from a somewhat less stringent assumption. \begin{Definition} A constraint $C$ on covers of type $(T,U)$ is \textit{prime} if there exists a set $C' \subset \mathrm{Emb}(U, M)$ such that $G \in C$ if and only if $G_t \in C'$ for every $t \in T$. In other words, $C$ is prime if it is the largest constraint such that $C^* \subset C'$ (equivalently, $C^* = C'$). \end{Definition} \noindent We point out that $C'$ then admits sufficiently many open sets to cover the whole of $M$.
Conversely, given a set $C' \subset \mathrm{Emb}(U, M)$ which admits sufficiently many open sets to cover $M$ and a probability space $T$, it is not guaranteed that there exists a constraint $C$ on covers of type $(T,U)$ (let alone a prime one) such that $C^* = C'$; this happens if $|\pi_0(C')| > |\pi_0(T)|$ and if no union of $|\pi_0(T)|$ connected components of $C'$ has sufficiently many open sets to cover $M$. By comparison, as long as $|\pi_0(C')|$ is finite, we can always find a discrete cover of $M$ made of open sets in $C'$. Note that this is however the only obstacle: given a set $C' \subset \mathrm{Emb}(U, M)$ such that there exists a smooth cover $G$ of $M$ of type $(T,U)$ with $G_* : \pi_0(T) \to \pi_0(C')$ well-defined and surjective, then $C' = C^*$ for some (prime) constraint $C$ on covers of type $(T,U)$. \begin{Definition} A prime constraint $C$ on covers of type $(T,U)$ with $C^* = C' \subset \mathrm{Emb}(U,M)$ is \textit{filled} if there is $G \in C$ such that the map $G_* : \pi_0(T) \to \pi_0(C')$ is surjective. By extension, we say that $C'$ is \textit{filled by $T$} if the associated prime constraint $C$ of type $(T,U)$ is filled. \end{Definition} \begin{corollary} Let $M$ be a symplectic manifold of dimension $2n$, $U$ an open subset of $\mathbb{R}^{2n}$ as above, and let $T_1$ and $T_2$ be compact manifolds of strictly positive dimension, each endowed with a smooth volume form of total mass $1$. Let $C' \subset \mathrm{Emb}(U,M)$ be filled by both $T_1$ and $T_2$, and consider the corresponding prime constraints $C_1$ and $C_2$ on smooth covers of $M$ of type $(T_1,U)$ and $(T_2,U)$, respectively.
Let $A_i \in \pi_0(C_i)$, $i=1,2$, be constrained classes and assume that the corresponding sets of embeddings $(A_i^*)' \subset C^*_i \subset \mathrm{Emb}(U, M)$ coincide in the latter space. Then \[ \mathrm{pb}(A_1) = \mathrm{pb}(A_2) \; . \] \end{corollary} \begin{Remark} \label{independence} For one application of this corollary, note that $C'$ is filled by any probability space $T$ of strictly positive dimension whenever $C'$ is connected and contains sufficiently many open sets to cover $M$. In that case $(A^*)'=C'$ for any $A \in \pi_0(C)$ (where $C$ is the prime and filled constraint associated with $C'$), since in fact $|\pi_0(C)|=1$. As a consequence, when $C'$ is not necessarily connected but each of its components contains sufficiently many embeddings to cover $M$, the restriction of $\mathrm{pb}$ to prime constrained classes of covers parametrised by \textit{connected} $T$ comes from a function on $\pi_0(C')$. \end{Remark} \section{The behaviour of $\mathrm{pb}$ on symplectic balls} For the rest of this article, we only\footnote{The results of this section can however easily be adapted to star-shaped domains $U \subset \mathbb{R}^{2n}$.} consider $U = U(c) = B^{2n}(c)$, that is, the standard symplectic ball of capacity $c = \pi r^2$ (where $r$ is the radius). We also only consider (smooth) \textit{symplectic} covers, that is, covers $G$ of type $(T,U)$ satisfying the symplectic prime constraint $C$ given as follows: $G \in C$ if and only if $G_t \in C' = \mathrm{Emb}_{\omega}(U, M)$ for every $t \in T$. We shall write $U(c)$, $C(c)$ and $C'(c)$ when we want to stress the dependence on $c$. Of special interest are the cases $T = S^n$ for some $n \ge 1$.
A constrained class $A$ of $C$ determines a connected component\footnote{It is still a conjecture, which we shall dub the \textit{symplectic camel conjecture}, that $C'(c)$ is connected (whenever nonempty) when $(M, \omega)$ is compact, for any $c$.} $A' = p_T(A) \subset C'$, and in fact determines an element of the $n$-th homotopy group $\pi_n(A')$. Conversely, since $M$ is compact and since the group $\mathrm{Symp}(M, \omega)$ is $k$-transitive for all $k \in \mathbb{N}$, any element in $\pi_n(C')$ can be represented by some class $A \in \pi_0(C)$. The pb-invariants of symplectically constrained classes hence allow one to probe the homotopic properties of $C'(c)$, properties which might change with $c$. Consequently, it appears important to better understand how the pb-invariants depend on the capacity $c$. This dependence of $\mathrm{pb}$ on $c$ is the main question raised in this paper. However, invoking \Cref{independence} and again the $k$-transitivity of $\mathrm{Symp}(M, \omega)$, we deduce that for any connected probability space $T$ there is a bijective correspondence between $\pi_0(C)$ and $\pi_0(C')$. We can thus interpret the $\mathrm{pb}$ functional on smooth covers of type $(T,U(c))$ parametrised by connected spaces $T$ simply as a map $\mathrm{pb} : \pi_0(C'(c)) \to [0, \infty)$, the latter being clearly independent of $T$. It therefore appears that the $\mathrm{pb}$-invariants can only probe the homotopy type of $C'(c)$ in a crude way. Let $c_{max}$ denote the largest capacity a symplectic (open) ball embedded in $M$ can have; by the Non-Squeezing Theorem, this can be much smaller than the bound implied by the volume constraint $\mathrm{Vol}(U(c)) \le \mathrm{Vol}(M, \omega)$.
For $0 < c' < c < c_{max}$ the obvious inclusion $U(c') \subset U(c)$ induces the restriction map $C'(c) \to C'(c')$ and hence also $r_{c,c'} : \pi_0(C'(c)) \to \pi_0(C'(c'))$. \begin{Definition} The \textit{tree of path-connected classes of symplectic embeddings of $U$ in $M$} is the set \[ \Psi(U,M) := \bigsqcup_{c \in (0, c_{max})} \{c\} \times \pi_0(C'(c)) \; . \] A \textit{(short) branch of $\Psi(U,M)$} is a continuous path $\beta : (0, c_{\beta}) \to \Psi(U,M) : c \mapsto (c, A^*_{\beta}(c))$ such that $r_{c,c'}(A^*_{\beta}(c)) = A^*_{\beta}(c')$ for all $0 < c' < c < c_{\beta}$. \end{Definition} We can therefore define a function $\mathrm{pb} : \Psi(U,M) \to [0, \infty)$ in the obvious way. Given a branch $\beta$ with domain $(0, c_{\beta})$, we can define a map $\mathrm{pb}_{\beta} = \mathrm{pb} \circ \beta : (0, c_{\beta}) \to [0, \infty)$. \begin{Theorem} Given any branch $\beta$, the function $\mathrm{pb}_{\beta}$ is non-increasing, upper semi-continuous and left-continuous. \end{Theorem} \proof (a) Let us first show that the function is non-increasing. Fix $0 < c' < c < c_{\beta}$ and let $\epsilon > 0$. From the work done above and with the interpretation of $A^*_{\beta}(c')$ as a connected component of $C'(c')$, there exists a discrete cover $G' : \langle 1, n \rangle \to A^*_{\beta}(c')$ of $M$ such that $\mathrm{pb}(G') < \mathrm{pb}_{\beta}(c') + \epsilon$. We claim that this cover refines a cover $G : \langle 1, n \rangle \to A^*_{\beta}(c)$ of $M$; assuming this for the moment, we then have \[ \mathrm{pb}_{\beta}(c) \le \mathrm{pb}(G) \le \mathrm{pb}(G') < \mathrm{pb}_{\beta}(c') + \epsilon \; . \] As this holds for any $\epsilon > 0$, we get $\mathrm{pb}_{\beta}(c) \le \mathrm{pb}_{\beta}(c')$, \textit{i.e.} $\mathrm{pb}_{\beta}$ is non-increasing.
To prove the existence of $G$, consider a symplectic embedding $B \in A^*_{\beta}(c)$. Since $\beta$ is a branch, the restriction $B'$ of $B$ to $U(c')$ is an embedding in $A^*_{\beta}(c')$; the latter space being a connected component of $C'(c')$ with respect to the Whitney $C^r$-topology, for each $j \in \langle 1, n \rangle$ there is a smooth path of symplectic embeddings of $U(c')$ into $M$ joining $B'$ to $G'(j)$. By the symplectic isotopy extension theorem, each of these paths extends to a global symplectic isotopy of $M$, which thus sends $B$ to an embedding $G(j)$ of $U(c)$ into $M$. Clearly $G$ is a discrete cover of $M$ refined by $G'$. (b) Now let us show that for every $c \in (0, c_{\beta})$, the function $\mathrm{pb}_{\beta}$ is upper semi-continuous at $c$, \textit{i.e.} $\limsup_{c' \to c} \mathrm{pb}_{\beta}(c') \le \mathrm{pb}_{\beta}(c)$. On the one hand, it follows from part (a) that $\mathrm{pb}_{\beta}(c)$ is greater than or equal to all limits of $\mathrm{pb}_{\beta}$ from the right. On the other hand, for any $\epsilon > 0$, there are a discrete cover $G$ representing $A^*_{\beta}(c)$ and a partition of unity $F < G$ such that $\mathrm{pb}(F) < \mathrm{pb}_{\beta}(c) + \epsilon$. In fact, by our definition of a partition of unity, there is a strictly smaller capacity $c' < c$ such that the support of $F$ is compact inside the open ball $U(c') \subset U(c)$. Restricting the pair $(G,F)$ to $U(c'')$ for any $c'' \in [c',c]$, one gets $$\mathrm{pb}_{\beta}(c'') \le \mathrm{pb}_{\beta}(c) + \epsilon.$$ \noindent Since the choice of $c'$ indirectly depends on $\epsilon > 0$ through $F$, and might get arbitrarily close to $c$ as $\epsilon$ approaches zero, we do not get $\mathrm{pb}_{\beta}(c'') \le \mathrm{pb}_{\beta}(c)$, but only that $\mathrm{pb}_{\beta}$ is upper semi-continuous from the left.
(c) We wish to prove that $\mathrm{pb}_{\beta}$ is in fact left-continuous, that is to say that $\mathrm{pb}_{\beta}(c)$ is equal to the limit of $\mathrm{pb}_{\beta}(c')$ as $c'$ tends to $c$ from the left. Consider a sequence of capacities $c_i < c$ converging to $c$ with largest possible limit $\lim \mathrm{pb}_{\beta}(c_i)$ (the value $\infty$ is not excluded). This limit cannot be smaller than $\mathrm{pb}_{\beta}(c)$, since that would contradict the non-increasing property. However, by upper semi-continuity, it cannot be greater than $\mathrm{pb}_{\beta}(c)$ either. Therefore, it has to be equal to $\mathrm{pb}_{\beta}(c)$. \QED As regards the continuity of the function $\mathrm{pb}_{\beta}$ associated to a branch $\beta$, it is not possible to be much more specific than the above theorem, at least not when $\mathrm{dim} \, M = 2$. Indeed, in that case $c_{max} = \mathrm{Area}(M, \omega)$ and Moser's argument allows one to prove that the space $C'(c) = \mathrm{Emb}_{\omega}(B^{2}(c), M)$ is connected whenever non-empty, so that there is only one maximal branch $\beta$. Polterovich's conjecture has recently been established in dimension two \cite{BLT}: in fact there is a universal constant $\gamma > 0$ such that $\mathrm{pb}_{\beta}(c)\,c > \gamma$ whenever $c \le c_{max}/2$. Using the invariance of the quantity $\mathrm{pb}_{\beta}(c)\,c$ under pullback of the data by any symplectic covering map, this inequality holds even for $c > c_{max}/2$ when $M$ has genus $g \ge 1$ (\textit{cf.} \cite{Pa}). However, for $M = S^2$, by enlarging two opposite hemispheres one gets $\mathrm{pb}_{\beta}(c) = 0$ when $c > c_{max}/2$. Consequently $\mathrm{pb}_{\beta}$ is discontinuous on $S^2$, yet might be continuous on higher genus surfaces.
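The following small worked computation, using only the facts just stated, makes the discontinuity on $S^2$ explicit. Writing $A = \mathrm{Area}(S^2, \omega)$, so that $c_{max} = A$, the inequality of \cite{BLT} yields
\[
\mathrm{pb}_{\beta}(c) \;>\; \frac{\gamma}{c} \;\ge\; \frac{2\gamma}{A} \qquad \text{for } 0 < c \le A/2 \; ,
\]
whereas the two-hemisphere covers give $\mathrm{pb}_{\beta}(c) = 0$ for every $c > A/2$. Thus $\mathrm{pb}_{\beta}$ is bounded below by $2\gamma/A$ on $(0, A/2]$ and vanishes on $(A/2, A)$: it jumps by at least $2\gamma/A$ at $c = A/2$, failing right-continuity there while remaining left-continuous, exactly as permitted by the preceding theorem.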
\section{Phase transitions and the $\mathrm{pb}$ function} We conclude this paper with a few speculations, since they disclose the main motivation behind this article. The first ``phase transition'' discovered in symplectic topology is the following one: \begin{Theorem} (Anjos-Lalonde-Pinsonnault) In any ruled symplectic 4-manifold $(M, \omega)$, there is a unique value $c_{crit}$ such that the infinite-dimensional space $\mathrm{Emb}(c,\omega)$ of all symplectic embeddings of the standard closed ball of capacity $c$ in $M$ has the following striking property: below $c_{crit}$, the space $\mathrm{Emb}(c,\omega)$ is homotopy equivalent to a finite-dimensional manifold, while above that value, $\mathrm{Emb}(c,\omega)$ does not retract onto any finite-dimensional manifold (or CW-complex), since it possesses non-trivial homology groups in arbitrarily high dimensions. Below and above that critical value, the homotopy type stays the same. \end{Theorem} \noindent The reason for the term {\it phase transition} is still debatable, but there are several physical reasons, from thermodynamics, to adopt that terminology. \begin{Definition} Given a closed symplectic manifold $(M, \omega)$, let us call an {\it uncertainty phase transition} any critical value $c$ at which the space of symplectic embeddings of balls of capacity $c$ into $(M, \omega)$ undergoes a change of its homotopy type. \end{Definition} This terminology reflects the fact that a symplectically embedded ball quantifies the uncertainty in the position and momentum of a particle (or collection of particles). The proof of the above theorem is quite indirect: one identifies all homology classes of symplectically embedded balls through the action of two groups on them: the full group of symplectic diffeomorphisms, and the subgroup of those that preserve a given standard ball, the latter being viewed as the group of symplectic diffeomorphisms of the blow-up.
Each of these groups is computed via its action on a stratification of the space of all compatible almost complex structures that realise holomorphically certain homology classes (essentially the homology classes that cut the symplectic manifold into simple parts). Everything boils down to the behaviour of some $J$-curves in the given symplectic manifold for each $J$, generic or not (the non-generic ones playing the fundamental role, since only the first stratum is generic). So, for instance, some homology class of symplectically embedded balls may disappear at some capacity $c_{crit}$ because the homology class of symplectic diffeomorphisms that preserve some standard ball of capacity $c$ in $M$ vanishes when $c$ crosses $c_{crit}$. It is conceivable that the class that vanished was supporting a cover that minimised the $\mathrm{pb}$ at that level of capacity. We know that the dimension of the homology class, i.e. the dimension of the parametrising space $T$, plays no role, by our theorem on smooth-discrete equivalence. However, it is possible that such a class, discretised or not, contained the optimal configuration of balls for a cover to minimise $\mathrm{pb}$. Therefore the main question that drove us to study the $\mathrm{pb}$ invariant in the smooth setting is: \noindent {\bf Question (Poisson-Uncertainty).} Is there a relation between the critical values of the Poisson bracket and the critical values (or phase transitions) of $\mathrm{Emb}_{\omega}(B(c), M)$ as $c$ varies? This is a natural question, since the latter probes the topological changes in configurations of balls, while the former looks for $\mathrm{pb}$-optimal configurations. We do not have in mind any direct sketch proving that there is a relation, so for the moment we must simply look at the facts. We have little material to work with, since the $\mathrm{pb}$ conjecture has been proved (very recently) only for real surfaces, while the topology of spaces of balls is known only in dimension $2$ and, in dimension $4$, for ruled symplectic $4$-manifolds.
Thus we may just examine the case of surfaces. In this case, there is no critical value for the phase transition, but there are critical values for the pb-invariant, showing that the answer to the above question is negative in dimension $2$. Small displaceable balls should not see the symplectic form; in fact, for ruled symplectic 4-manifolds, the space of (unparametrised) symplectic balls below the uncertainty critical value retracts onto the manifold itself. This refines the symplectic camel conjecture for small capacities and leads us to state the following conjecture: \begin{Conjecture} (The Topology conjecture). The limit of the function $c \, \mathrm{pb}(c)$, as $c$ tends to zero, is a finite number, and depends only on the differential topology of the symplectic manifold. \end{Conjecture} Now, while the Poisson-uncertainty question might have a positive answer in high dimensions, we show here that it has a negative answer in dimension $2$ for the sphere. To see this, let us consider the simple situation of $(M, \omega)$ being $S^2$ with its standard symplectic form, say of area $A$. As $M$ is a surface, it satisfies the symplectic camel conjecture, which is to say that the space $\mathrm{Emb}(c, \omega)$ is connected. The Poisson bracket function is then defined for any $c \in (0, A)$. There exists on any closed symplectic manifold a spectral invariant $c$ such that $c(\mathrm{Id}) = 0$; see Theorem 4.7.1 in \cite{PR}. It follows from this and the other properties of $c$ that the spectral width $w(U)$ of any subset $U \subset M$ satisfies $w(U) \le 4 e_H(U)$, where $e_H(U)$ is the Hofer displacement energy of $U$. For open sets in $S^2$, $e_H(U) = \mathrm{Area}(U)$ if this area is smaller than $A/2$, and $e_H(U) = \infty$ otherwise.
In this context, Polterovich's conjecture (now a theorem on surfaces \cite{BLT,Pa}) states that there is a constant $C > 0$ such that for any, continuous or discrete, cover $G$ of $S^2$ by displaceable open sets, the inequality \[ \mathrm{pb}(G)\,w(G) \ge C \] holds. \noindent This implies that $\mathrm{pb}(c)\, e_H(U(c)) \ge C$. Thus when $c < A/2$, we have $\mathrm{pb}(c) \ge 2C/A$. However, we observe that $\mathrm{pb}(c) = 0$ whenever $c > A/2$: two symplectic balls of capacity $c > A/2$ suffice to cover $S^2$, and the $\mathrm{pb}$-invariant of such a cover vanishes. Polterovich's conjecture hence goes against any claim that the Poisson bracket function $\mathrm{pb}(c)$ only has discontinuities when $\mathrm{Emb}(c, \omega)$ undergoes a transition in its homotopy type. As a concluding remark, we point out that our borrowings from thermodynamical and statistical mechanical terminology are explained by our insight that tools from these subjects might play a role in the understanding of the symplectic problems considered in this paper. The space of symplectically embedded balls can be understood as an infinite-dimensional (pre)symplectic manifold which is some sort of limit of finite-dimensional ones. In this paper, continuous covers have also been understood as limits of discrete ones. It is a recurrent theme in statistical mechanics that systems with a very large number of degrees of freedom tend to behave in universal and somewhat simpler ways. \begin{thebibliography}{1} \bibitem{ALP} S. Anjos, F. Lalonde and M. Pinsonnault, The homotopy type of the space of symplectic balls in rational ruled 4-manifolds, {\it Geometry and Topology} {\bf 13} (2009), 1177--1227. \bibitem{BLT} L. Buhovsky, A. Logunov and S. Tanny, Poisson brackets of partitions of unity on surfaces, preprint arXiv:1705.02513v2. \bibitem{EPZ} M. Entov, L. Polterovich and F. Zapolsky, Quasi-morphisms and the Poisson bracket, {\it Pure Appl. Math. Q.} {\bf 3} (2007), 1037--1055. \bibitem{Pa} J. Payette, The geometry of the Poisson bracket invariant on surfaces, preprint arXiv:1803.09741. \bibitem{P1} L. Polterovich, Quantum unsharpness and symplectic rigidity, {\it Lett. Math. Phys.} {\bf 102} (2012), 245--264. \bibitem{P2} L. Polterovich, Symplectic geometry of quantum noise, {\it Comm. Math. Phys.} {\bf 327} (2014), 481--519. \bibitem{PR} L. Polterovich and D. Rosen, {\it Function Theory on Symplectic Manifolds}, CRM Monograph Series, vol. 34, American Mathematical Society, 2014. \end{thebibliography} \end{document}
\begin{document} \title{Entropy squeezing and atomic inversion in the $k$-photon Jaynes-Cummings model in the presence of Stark shift and Kerr medium: full nonlinear approach} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \begin{abstract} In this paper the interaction between a two-level atom and a single-mode field in the $k$-photon Jaynes-Cummings model (JCM) in the presence of Stark shift and Kerr medium is studied. All terms in the corresponding Hamiltonian, namely the single-mode field, its interaction with the atom, and the contributions of the Stark shift and the Kerr medium, are considered to be $f$-deformed. In particular, the effect of the initial state of the radiation field on the dynamical evolution of physical properties such as atomic inversion and entropy squeezing is investigated by considering different initial field states. To this end, coherent, squeezed and thermal states are considered as initial field states.
\end{abstract} \section{Introduction}\label{sec-intro} The well-known Jaynes-Cummings model (JCM) is an important, simple and standard model that elegantly describes the interaction between an atom and a single-mode field in the dipole and rotating wave approximations (RWA) \cite{Jaynes}. Many interesting physical features have been studied with this model. Some examples are atomic inversion \cite{HU}, collapse and revival \cite{Scully,Setare}, entanglement \cite{Buzek12,Tan,Ouyang}, sub-Poissonian statistics \cite{Mandel,Mirzaee}, quadrature squeezing \cite{Rui} and entropy squeezing \cite{Fang,Jian}. Much of the research in this field is based on the linear interaction between atom and field, i.e. the atom-field coupling is assumed to be constant throughout the evolution of the whole system. Phoenix and Knight \cite{Phoenix} used the JCM and employed a diagonalised reduced density operator to calculate entropy, and thereby demonstrated the essential two-state nature of the field. Kayham investigated the entropy squeezing of a two-level atom interacting with a quantum field prepared initially in the Glauber-Lachs state within the standard JCM \cite{Kayham}. Liao {\it et al} considered a system of two two-level atoms interacting with a binomial field in an ideal cavity and investigated the time evolution of the single-atom entropy squeezing, atomic inversion and linear entropy of the system \cite{Liao}. Zhang {\it et al} discussed the entanglement and the evolution of some nonclassicality features of the atom-field system in a standard JCM with squeezed vacuum and coherent states as initial field states \cite{Zhang}. Mortezapour {\it et al} studied the entanglement of a dressed atom and its spontaneous emission in a three-level $\Lambda$-type closed-loop atomic system under the multi-photon resonance condition and beyond it \cite{Mortezapour}.
The entropy squeezing, atomic inversion and variance squeezing in the interaction between a two-level atom and a single-mode cavity field via a $k$-photon process have been investigated in \cite{Kang}. Ateto \cite{Ateto} extended the JCM to combine the influences of atomic motion, field-mode structure and a Kerr-like medium, and investigated their effects on the dynamics of the entropy of entanglement of the cavity and the atomic populations. However, in recent years, research has strongly focused on the nonlinear interaction between a two-level atom and a field in the deformed JCM. This model, first suggested by Buck and Sukumar \cite{Buck,Sukumar}, describes the dependence of the atom-field coupling on the light intensity. Bu\v{z}ek investigated physical quantities, particularly atomic population and squeezing, in the intensity-dependent coupling JCM \cite{Buz^ek}. The interaction between a $\Lambda$-type three-level atom and a single-mode cavity field with intensity-dependent coupling in a Kerr medium has been investigated by one of us \cite{Faghihi}. Sanchez and R\'{e}camier introduced a nonlinear JCM constructed from its standard structure by deforming all of the bosonic field operators \cite{Recamier}. Naderi {\it et al} replaced $\hat{a}$ and $\hat{a}^\dagger$ in the standard JCM by the $f$-deformed operators $\hat{A}$ and $\hat{A}^\dagger$ and introduced the two-photon $q$-deformed JCM \cite{Naderi}. Barzangeh {\it et al} investigated the effect of a classical gravitational field on the dynamical behaviour of the nonlinear atom-field interaction within the framework of the $f$-deformed JCM \cite{Barzanjeh}. Abdel-Aty {\it et al} studied the entropy squeezing of a two-level atom in a Kerr medium and examined the influence of the nonlinear interaction of the Kerr medium on the quantum information and entropy squeezing \cite{Abdel-Aty}.
Cordero and R\'{e}camier considered the Jaynes-Cummings Hamiltonian with deformed field operators and an additional Kerr term, introduced by means of a purely algebraic method \cite{Cordero}. Recently, Faghihi {\it et al} investigated the entanglement dynamics of the nonlinear interaction between a three-level atom (in a $\Lambda$ configuration) and a two-mode cavity field in the presence of a cross-Kerr medium and its deformed counterpart \cite{Honarasa}, intensity-dependent atom-field coupling and the detuning parameters \cite{faghihi2,faghihi3}. The same authors have also studied a three-level atom in motion which interacts with a single-mode field in an optical cavity in an intensity-dependent coupling regime \cite{faghihi4}. The effects of the mean photon number, detuning, a Kerr-like medium and various intensity-dependent coupling functionals on the degree of entanglement in the interaction between a $\Lambda$-type three-level atom and a two-mode field have been studied in \cite{Hanoura}. Abdalla {\it et al} considered the interaction of a two-level atom with a single-mode multi-photon field in a medium exhibiting the Stark shift and Kerr medium effects, with a coupling term which is assumed to be a function of time, but still linear in the intensity of light \cite{Abdalla M S}; it is (partially) nonlinear only due to the presence of the Kerr medium. They investigated atomic inversion and entropy squeezing and showed that the existence of the time-dependent coupling parameter leads to a time delay in the interaction which is twice the delay time of the time-independent case. The aim of the present paper is to nonlinearize the atom-field system considered in \cite{Abdalla M S}.
More precisely, the interaction between a two-level atom and a nonlinear single-mode field will occur in a time-dependent and, at the same time, nonlinear manner for $k$-photon transitions in the presence of the Stark shift effect and a Kerr medium, both of which are considered to be $f$-deformed. Clearly, in this way, all terms in the Hamiltonian will behave in a nonlinear regime through a nonlinearity function $f(n)$, which is generally a well-defined function of the intensity of light. Fortunately, this rather complicated system can be solved analytically, and we will therefore be able to evaluate the time evolution of physical properties such as atomic inversion and entropy squeezing. Since the exact solution depends on the initial atom-field state, we take the atom to be in its excited state, while the field is considered to be in a coherent state, a squeezed state or a thermal state. Although our proposal may work well for any arbitrary nonlinear physical system with known $f(n)$, the effect of the various parameters and different initial field states will be investigated in detail for a particular nonlinearity function. The paper is organized in the following way: in section 2 we introduce the interaction Hamiltonian of our system in the full nonlinear regime of the $k$-photon JCM and then, by solving the corresponding Schr\"{o}dinger equation, obtain the probability amplitudes at any time $t$ for the whole system with an arbitrary initial field state. In sections 3 and 4 we investigate the temporal evolution of atomic inversion and entropy squeezing, respectively. Section 5 presents our numerical results for the atomic inversion and entropy squeezing versus the scaled time for single- and two-photon transitions, where we discuss the effects of the Kerr medium, Stark shift, detuning and intensity-dependent coupling on the evolution of the mentioned properties.
Also, the results of the effects of three- and four-photon transitions on the time evolution of the atomic inversion and entropy squeezing are given in section 6. Finally, we give a summary and conclusion in section 7. \section{The $k$-photon JCM: full nonlinear regime} The Hamiltonian of a two-level atom interacting with a quantized field in the standard JCM, in the dipole and rotating wave approximations, can be simply written as ($\hbar=1$) $\hat{H}=\nu \hat{a}^\dagger \hat{a} +\frac{\omega}{2}\hat{\sigma}_{z}+\lambda ( \hat{a}^\dagger\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+})$, where $\hat{\sigma}_{z}$ and $\hat{\sigma}_{\pm}$ are the Pauli operators, $\hat{a}$ and $\hat{a}^\dagger$ are the bosonic annihilation and creation operators, $\nu$ is the frequency of the field, $\omega$ is the transition frequency between the excited and ground states of the atom and $\lambda$ is the constant coupling between atom and field. Generalizing the standard JCM in a few steps, the time-dependent single-mode $k$-photon JCM in the presence of the linear Stark shift and a Kerr medium with time-dependent coupling has been studied via the Hamiltonian \cite{Abdalla M S} \begin{eqnarray}\label{2} \hat{H}(t)&=&\nu \hat{a}^\dagger \hat{a}+\frac{\omega}{2}\hat{\sigma}_{z}+ \hat{a}^\dagger \hat{a}(\beta_{1}|g\rangle\langle g|+\beta_{2}|e\rangle\langle e|)\nonumber\\ &+& \chi \hat{a}^{\dagger 2} \hat{a}^{2}+\lambda(t)(\hat{a}^{\dagger k}\hat{\sigma}_{-}+\hat{a}^{k}\hat{\sigma}_{+}), \end{eqnarray} where $\beta_{1}$ and $\beta_{2}$ are the effective Stark coefficients, $\chi$ denotes the third-order susceptibility of the Kerr medium and $\lambda(t)$ is the time-dependent coupling parameter. The third term in the Hamiltonian (\ref{2}) corresponds to the linear (in $\hat{a}^\dagger \hat{a}$) Stark shift, which arises from the virtual transition to the intermediate level \cite{Puri,Ahmad,Obada} and exists for two-photon transitions, i.e., $k=2$ \cite{Puri}.
Hence, for instance, the authors of Refs. \cite{Abdalla M S,Liao2} used $\delta_{k,2}$ beside the Stark shift term in their Hamiltonian. In addition, it should be emphasized that in the Hamiltonian (\ref{2}), whenever $k\neq2$ one has to set $\beta_{1}=0=\beta_{2}$ \cite{Liao2}. It is, however, worth mentioning that a (nonlinear) Stark shift can also exist for the cases with $k>2$ \cite{Ahmad}. In this latter case, the Hamiltonian (\ref{2}) changes to the following form \begin{eqnarray}\label{2,1} \hat{H}(t)&=&\nu \hat{a}^\dagger \hat{a}+\frac{\omega}{2}\hat{\sigma}_{z}+ \hat{a}^{\dagger k} \hat{a}^{k}(\beta_{1}|g\rangle\langle g|+\beta_{2}|e\rangle\langle e|) \nonumber\\ &+& \chi \hat{a}^{\dagger 2} \hat{a}^{2}+\lambda(t)(\hat{a}^{\dagger 2k}\hat{\sigma}_{-}+\hat{a}^{2k}\hat{\sigma}_{+}). \end{eqnarray} The Hamiltonian (\ref{2,1}) for $k=1$ is equal to the Hamiltonian (\ref{2}) for $k=2$ (where the linear Stark shift can occur). In the remainder of the paper, following the path of \cite{Abdalla M S}, we generalize the Hamiltonian (\ref{2}), written for the linear Stark shift effect, so that our results can be compared with those presented in \cite{Abdalla M S}. By defining the detuning parameter $\Delta=\omega-k\nu$, the Hamiltonian (\ref{2}) can be rewritten in the form \begin{eqnarray}\label{3} \hat{H}(t)&=&\nu( \hat{a}^\dagger \hat{a}+\frac{k}{2}\hat{\sigma}_{z})+\frac{\Delta}{2}\hat{\sigma}_{z}+ \hat{a}^\dagger \hat{a}(\beta_{1}|g\rangle\langle g|+\beta_{2}|e\rangle\langle e|)\nonumber\\ &+& \chi \hat{a}^{\dagger 2} \hat{a}^{2}+\lambda(t)(\hat{a}^{\dagger k}\hat{\sigma}_{-}+\hat{a}^{k}\hat{\sigma}_{+}). \end{eqnarray} The aim of this paper is to generalize all terms of the Hamiltonian (\ref{3}) via the well-known nonlinear coherent state approach \cite{Manko,Vogel2}.
By the notion of nonlinearity we mean that the $f$-deformation function enters all possible terms, i.e., we replace $\hat{a}$ and $\hat{a}^\dagger$ respectively by $\hat{A}=\hat{a} f(\hat{n})$ and $\hat{A}^\dagger=f(\hat{n})\hat{a}^\dagger$, where $f(\hat{n})$ is a function of the number operator (intensity of light). By performing this procedure, the full nonlinear single-mode $k$-photon time-dependent JCM in the presence of the effective Stark shift and Kerr medium can be written as \begin{eqnarray}\label{5} \hat{H}(t)&=&\nu (\hat{A}^\dagger \hat{A}+\frac{k}{2}\hat{\sigma}_{z})+\frac{\Delta}{2}\hat{\sigma}_{z}+ \hat{A}^\dagger \hat{A}(\beta_{1}|g\rangle\langle g|+\beta_{2}|e\rangle\langle e|)\nonumber\\&+&\chi \hat{A}^{\dagger 2}\hat{A}^{2}+\lambda(t)( \hat{A}^{\dagger k}\hat{\sigma}_{-}+\hat{A}^{k}\hat{\sigma}_{+}). \end{eqnarray} In this respect, a few words seem necessary about the present work. Starting with the nonlinear Hamiltonian describing the interaction between a three-level atom and a single-mode $f$-deformed cavity field (without the Stark shift) and following the path of Refs. \cite{Puri,Ahmad}, the same equations of motion for the three levels of the atom are obtained. Therefore, one can conclude that replacing $\hat{a}, \hat{a}^\dag$ with $\hat{A}, \hat{A}^\dag$ does not change the final results of the above Refs. With these explanations, we emphasize that the Stark shift should also exist in the generalized Hamiltonian (\ref{5}). In other words, the Stark shift coefficients are now linear in terms of $\hat{A}^\dag \hat{A}$, i.e., the field part of the Hamiltonian (\ref{5}). So, in the Hamiltonian (\ref{5}) (as in (\ref{2}) and (\ref{3})), the linear (in terms of $\hat{A}^\dag \hat{A}$) Stark shift can exist for the case $k=2$, and whenever $k\neq2$ one should set $\beta_{1}=0=\beta_{2}$.
To see explicitly what has been done, note that \begin{eqnarray}\label{6} \hat{H}(t)&=&\nu( \hat{a}^\dagger \hat{a} f^{2}(\hat{n})+\frac{k}{2}\hat{\sigma}_{z})+\frac{\Delta}{2} \hat{\sigma}_{z}+\hat{a}^\dagger \hat{a} f^{2}(\hat{n})(\beta_{1}|g\rangle\langle g|+\beta_{2}|e\rangle\langle e|) \nonumber\\ &+&\chi f^{2} (\hat{n}) f^{2}(\hat{n}-1)\hat{a}^{\dagger2} \hat{a}^{2}+\lambda(t)\left(\frac{\left[ f(\hat{n})\right] !}{\left[ f(\hat{n}-k)\right] !} \hat{a}^{\dagger k}\hat{\sigma}_{-} +\hat{a}^{ k}\frac{\left[ f(\hat{n})\right] !}{\left[ f(\hat{n}-k)\right] !} \hat{\sigma}_{+}\right), \end{eqnarray} where $\hat{n}=\hat{a}^\dagger \hat{a}$ and $\left[ f(\hat{n})\right]! \doteq f(\hat{n})f(\hat{n}-1)\cdots f(1)$ with $\left[ f(0)\right] !\doteq1$. As is clear from (\ref{6}), in comparison with the Hamiltonian (\ref{3}) considered in \cite{Abdalla M S}, we have in fact made the transformations $ \nu \rightarrow \nu f^{2}(\hat n), \beta_{1(2)}\rightarrow\beta_{1(2)}f^{2}(\hat n), \chi\rightarrow\chi f^{2}(\hat{n})f^{2}(\hat{n}-1)$ and $\lambda(t) \rightarrow \lambda(t) \frac{[f(\hat{n})]!}{[f(\hat{n}-k)]!}.$ The field frequency, Stark shifts, third-order susceptibility and time-dependent coupling are thereby promoted from $c$-numbers to operator-valued functions (intensity-dependent parameters) \cite{Buz^ek,Faghihi,Singh,Manko,Honarasa}. The time-dependent $\lambda$-parameter makes the whole Hamiltonian time-dependent. Different forms may be chosen for $\lambda(t)$; in this paper we select $\lambda(t)=\gamma\cos(\mu t)$, where $\gamma$ and $\mu$ are arbitrary constants.
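As a minimal numerical sketch (not part of the derivation), the number-state matrix elements of the deformed operators that enter (\ref{6}) can be checked directly; we assume, for concreteness, the nonlinearity $f(n)=\sqrt{n}$ adopted later in the paper, and the function names below are illustrative only:

```python
import math

def f(n):
    # illustrative nonlinearity function: f(n) = sqrt(n), with f(0) = 0
    return math.sqrt(n) if n > 0 else 0.0

def ffact(n):
    # [f(n)]! = f(n) f(n-1) ... f(1), with [f(0)]! = 1, as defined in the text
    p = 1.0
    for m in range(1, n + 1):
        p *= f(m)
    return p

def A_pow_k_element(n, k):
    # <n-k| A^k |n> for A = a f(n):
    # sqrt(n!/(n-k)!) * [f(n)]!/[f(n-k)]!, the factor appearing in Eq. (6)
    if n < k:
        return 0.0
    return math.sqrt(math.factorial(n) / math.factorial(n - k)) * ffact(n) / ffact(n - k)

def AdagA_element(n):
    # <n| A^dag A |n> = n f(n)^2, the deformed number operator
    return n * f(n)**2

def Adag2A2_element(n):
    # <n| A^dag^2 A^2 |n> = n(n-1) f(n)^2 f(n-1)^2, the deformed Kerr term
    return n * (n - 1) * f(n)**2 * f(n - 1)**2
```

For $f(n)=\sqrt{n}$ these reduce to $n^{2}$ and $n^{2}(n-1)^{2}$ respectively, which makes the intensity-dependent character of the transformed coefficients explicit.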
Following the probability amplitude approach \cite{Scully}, we assume that the wave function of the atom-field system can be expressed as \cite{Wolfang P} \begin{equation}\label{7} \hspace{-1in}|\psi(t)\rangle=\sum_{n} \exp[-i\nu(\hat{n}f^{2}(\hat{n})+\frac{k}{2}\hat{\sigma}_{z})](c_{n,e}(t)|n,e\rangle+c_{n+k,g}(t)|n+k,g\rangle), \end{equation} where $|n,e\rangle$ and $|n+k,g\rangle$ are the states in which the atom is in the excited and ground state and the field has $n$ and $n+k$ photons, respectively. Inserting the wave function (\ref{7}) into the time-dependent Schr\"{o}dinger equation, $i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle=\hat{H}|\psi(t)\rangle$, we obtain the following coupled equations for $c_{n,e}(t)$ and $c_{n+k,g}(t)$: \begin{eqnarray}\label{9} i\frac{d c_{n,e}}{dt}&=&R_{1}c_{n,e}(t)+\alpha_{n} \cos(\mu t)c_{n+k,g}(t), \nonumber \\i\frac{d c_{n+k,g}}{dt}&=&R_{2}c_{n+k,g}(t)+\alpha_{n} \cos(\mu t)c_{n,e}(t), \end{eqnarray} where $R_{1}$, $R_{2}$ and $\alpha_{n}$ are defined as follows: \begin{eqnarray}\label{10} R_{1}&=&\frac{\Delta}{2}+n f^{2}(n)\beta_{2}+\chi n(n-1)f^{2}(n)f^{2}(n-1),\nonumber\\ R_{2}&=&- \frac{\Delta}{2}+(n+k) f^{2}(n+k)\beta_{1}\nonumber\\ &+&\chi (n+k)(n+k-1)f^{2}(n+k)f^{2}(n+k-1), \nonumber\\\alpha_{n}&=&\gamma\frac{\left[ f(n+k)\right] !}{\left[ f(n)\right] !}\sqrt{\frac{(n+k)!}{n!}}. \end{eqnarray} The fast frequency dependence of $c_{n,e}(t)$ and $c_{n+k,g}(t)$ can be removed by transforming them to the slowly varying functions $X(t)$ and $Y(t)$ as \begin{equation}\label{11} X(t)=c_{n,e}(t) \exp(iR_{1}t), \hspace{1.5cm} Y(t)=c_{n+k,g}(t) \exp(iR_{2}t).
\end{equation} Using (\ref{11}) in equation (\ref{9}) we obtain \begin{eqnarray}\label{12} \frac{d X}{dt}&=&-i\frac{\alpha_{n}}{2}(\ e^{i(\mu+R_{n})t}+\ e^{-i(\mu-R_{n})t})Y,\nonumber\\\frac{d Y}{dt}&=&-i\frac{\alpha_{n}}{2}(\ e^{i(\mu-R_{n})t}+\ e^{-i(\mu+R_{n})t})X, \end{eqnarray} where \begin{eqnarray}\label{13} R_{n} &=& R_{1}-R_{2}\nonumber\\ &=& \Delta +\chi[n(n-1)f^{2}(n)f^{2}(n-1)-(n+k)(n+k-1)f^{2}(n+k)f^{2}(n+k-1)]\nonumber\\ &+& [nf^{2}(n)\beta_{2}-(n+k)f^{2}(n+k)\beta_{1}]. \end{eqnarray} Each of the coupled differential equations in (\ref{12}) contains two terms: the term of the form $e^{\pm i(\mu - R_{n})t}$ describes an energy-conserving process, while $e^{\pm i(\mu+R_{n})t}$ describes an energy-nonconserving one. We therefore neglect the energy-nonconserving terms (the rotating wave approximation). Under this condition, equations (\ref{12}) reduce to \begin{eqnarray}\label{14} \frac{dX}{dt}&=&-i\frac{\alpha_{n}}{2}\ e^{-i(\mu-R_{n})t} Y, \nonumber\\\frac{d Y}{dt}&=&-i\frac{\alpha_{n}}{2}\ e^{i(\mu-R_{n})t}X. \end{eqnarray} Solving these coupled equations, we obtain \begin{eqnarray}\label{15} c_{n,e}(t)&=& \left \lbrace c_{n,e}(0)\left (\cos(\Omega_{n}t)-i(R_{n}-\mu)\frac{\sin( \Omega_{n}t)}{2 \Omega_{n}}\right) -i\frac{\alpha_{n}}{2\Omega_{n}}\sin(\Omega_{n}t)c_{n+k,g}(0)\right\rbrace \nonumber\\ &\times& \exp [-i(\varphi_{n}+\mu/2)t], \end{eqnarray} \begin{eqnarray}\label{16} c_{n+k,g}(t) &=& \left \lbrace c_{n+k,g}(0)\left (\cos(\Omega_{n}t)+i(R_{n}-\mu)\frac{\sin(\Omega_{n}t)}{2\Omega_{n}}\right)-i\frac{\alpha_{n}}{2\Omega_{n}}\sin(\Omega_{n}t)c_{n,e}(0)\right\rbrace \nonumber\\ &\times& \exp[-i(\varphi_{n}-\mu/2)t], \end{eqnarray} where \begin{eqnarray}\label{17} \varphi_{n}&=&\frac{\chi}{2}[n(n-1)f^{2}(n)f^{2}(n-1)+(n+k)(n+k-1)f^{2}(n+k)f^{2}(n+k-1)]\nonumber\\ &+&\frac{1}{2}[nf^{2}(n)\beta_{2}+(n+k)f^{2}(n+k)\beta_{1}].
\end{eqnarray} and $\Omega_{n}=\frac{1}{2}\sqrt{(R_{n}-\mu)^{2}+\alpha_{n}^{2}}$ is the generalized Rabi frequency (note that $\alpha_{n}$ and $R_{n}$ are defined in (\ref{10}) and (\ref{13}), respectively). It should be mentioned that in equations (\ref{10}), (\ref{13}) and (\ref{17}) one must set $\beta_{1}=0=\beta_{2}$ for the case $k\neq2$. In the above equations, $c_{n,e}(0)$ and $c_{n+k,g}(0)$ are determined by the initial states of the atom and field. In this work we suppose that the atom is initially in the excited state ($|e\rangle $), while the cavity field is considered to be initially in one of three states, a coherent state, a squeezed state or a thermal state, defined by their associated density operators as \begin{equation}\label{18} \rho_{CS}(0)= |\alpha\rangle \langle\alpha|=e^{-\langle n \rangle}\sum_{n,m=0}^{\infty}\frac{\alpha^{m}\alpha^{\ast n}}{\sqrt{m! n!}}|m\rangle\langle n|, \end{equation} \begin{equation}\label{19} \rho_{SS}(0)=|\xi\rangle \langle\xi|=\sum_{n,m=0}^{\infty}\frac{(\tanh r)^{n+m} \sqrt{(2m)!(2n)!}}{(2^{n}n!2^{m}m!)\cosh r} |2m\rangle\langle 2n|, \end{equation} \begin{equation}\label{20} \rho_{TS}(0)=\sum_{n=0}^{\infty}\frac{\langle n\rangle^{n}}{(1+\langle n\rangle)^{n+1}}|n\rangle\langle n|. \end{equation} Therefore, using each of the above initial field conditions we can find the explicit form of the solution of the time-dependent Schr\"{o}dinger equation. This enables one to analyze interesting properties such as the atomic inversion and entropy squeezing, which will be done in the following sections. It can clearly be seen that setting $f(n)=1$ in the relations (\ref{15}) and (\ref{16}) recovers the results of Ref. \cite{Abdalla M S}. As another important point, it is worth mentioning that choosing different nonlinearity functions leads to different Hamiltonian systems, and so different physical results may be achieved.
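As a hedged numerical sketch of the solution (\ref{15}) and (\ref{16}), assuming the nonlinearity $f(n)=\sqrt{n}$ adopted below, an atom initially excited ($c_{n,e}(0)=1$, $c_{n+k,g}(0)=0$) and illustrative parameter values, the following snippet evaluates $R_{n}$, $\alpha_{n}$, $\Omega_{n}$ and the amplitudes; since $\Omega_{n}=\frac{1}{2}\sqrt{(R_{n}-\mu)^{2}+\alpha_{n}^{2}}$, the total probability of each two-level branch is conserved:

```python
import math

def f(n):
    # nonlinearity function chosen in the text: f(n) = sqrt(n), with f(0) = 0
    return math.sqrt(n) if n > 0 else 0.0

def branch_params(n, k, gamma, mu, chi, delta, beta1, beta2):
    # R1, R2 and alpha_n from Eq. (10); R_n = R1 - R2 as in Eq. (13)
    R1 = delta / 2 + n * f(n)**2 * beta2 + chi * n * (n - 1) * f(n)**2 * f(n - 1)**2
    R2 = (-delta / 2 + (n + k) * f(n + k)**2 * beta1
          + chi * (n + k) * (n + k - 1) * f(n + k)**2 * f(n + k - 1)**2)
    prod = 1.0
    for m in range(n + 1, n + k + 1):
        prod *= f(m) * math.sqrt(m)      # [f(n+k)]!/[f(n)]! * sqrt((n+k)!/n!)
    alpha = gamma * prod
    Rn = R1 - R2
    Omega = 0.5 * math.hypot(Rn - mu, alpha)   # generalized Rabi frequency
    return Rn, alpha, Omega

def amplitudes(t, n, k=1, gamma=1.0, mu=0.1, chi=0.0, delta=0.0, beta1=0.0, beta2=0.0):
    # Eqs. (15)-(16) with c_{n,e}(0) = 1, c_{n+k,g}(0) = 0 (atom initially excited);
    # the overall phases exp[-i(phi_n +/- mu/2)t] drop out of the moduli.
    Rn, alpha, Omega = branch_params(n, k, gamma, mu, chi, delta, beta1, beta2)
    ce = complex(math.cos(Omega * t), -(Rn - mu) * math.sin(Omega * t) / (2 * Omega))
    cg = complex(0.0, -alpha * math.sin(Omega * t) / (2 * Omega))
    return ce, cg
```

For any $t$ and $n$ these obey $|c_{n,e}(t)|^{2}+|c_{n+k,g}(t)|^{2}=1$, which is a quick consistency check on the algebra above.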
In the remainder of this paper we particularly select the nonlinearity function $f(n)=\sqrt{n}$, corresponding to intensity-dependent coupling. This function is a popular choice among authors working in the nonlinear regime of atom-field interaction (see for instance \cite{Singh}, \cite{Huang}). In particular, Fink {\it et al} have shown how this nonlinearity function naturally appears in physical systems \cite{Fink}. \section{Atomic inversion} The atomic inversion measures the difference in the populations of the two levels of the atom and plays a fundamental role in laser theory \cite{Wolfang P}. After determining $c_{n,e}(t)$ and $c_{n+k,g}(t)$ for the initial field states in (\ref{18}), (\ref{19}) and (\ref{20}), we can investigate this quantity, which is given by \begin{equation} \label{21} W(t)=\sum_{n=0}^{\infty}(|c_{n,e}(t)|^{2}-|c_{n+k,g}(t)|^{2}). \end{equation} Inserting equations (\ref{15}) and (\ref{16}) into (\ref{21}) for an arbitrary initial field state, we obtain \begin{equation}\label{22} W(t)=\sum_{n=0}^{\infty}\rho_{nn}(0)\left(\cos(2\Omega_{n} t)+(R_{n}-\mu)^{2}\frac{\sin^{2}(\Omega_{n} t)}{2 \Omega_{n}^{2}}\right), \end{equation} where $\rho_{nn}(0)=|c_{n}(0)|^{2}$.
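A minimal numerical sketch of the sum in (\ref{22}) is given below; it assumes an initial coherent field, whose photon-number distribution is the Poissonian $e^{-\langle n\rangle}\langle n\rangle^{n}/n!$, the nonlinearity $f(n)=\sqrt{n}$, an atom initially excited, and illustrative parameter values (the truncation $n_{\max}$ is a numerical convenience, not part of the formula):

```python
import math

def f(n):
    # nonlinearity function f(n) = sqrt(n), with f(0) = 0
    return math.sqrt(n) if n > 0 else 0.0

def Rn_alpha_Omega(n, k, gamma, mu, chi, delta, beta1, beta2):
    # R1, R2 and alpha_n from Eq. (10); R_n from Eq. (13); Omega_n the Rabi frequency
    R1 = delta / 2 + n * f(n)**2 * beta2 + chi * n * (n - 1) * f(n)**2 * f(n - 1)**2
    R2 = (-delta / 2 + (n + k) * f(n + k)**2 * beta1
          + chi * (n + k) * (n + k - 1) * f(n + k)**2 * f(n + k - 1)**2)
    prod = 1.0
    for m in range(n + 1, n + k + 1):
        prod *= f(m) * math.sqrt(m)
    alpha = gamma * prod
    Rn = R1 - R2
    return Rn, alpha, 0.5 * math.hypot(Rn - mu, alpha)

def atomic_inversion(t, nbar=25.0, k=1, gamma=1.0, mu=0.1,
                     chi=0.0, delta=0.0, beta1=0.0, beta2=0.0, nmax=120):
    # Eq. (22) with the coherent-state (Poissonian) photon distribution
    W = 0.0
    for n in range(nmax):
        p = math.exp(-nbar) * nbar**n / math.factorial(n)
        Rn, alpha, Om = Rn_alpha_Omega(n, k, gamma, mu, chi, delta, beta1, beta2)
        W += p * (math.cos(2 * Om * t) + (Rn - mu)**2 * math.sin(Om * t)**2 / (2 * Om**2))
    return W
```

At $t=0$ the sum reduces to $\sum_{n}\rho_{nn}(0)=1$, i.e., the atom starts fully inverted; truncating at $n_{\max}=120$ is safe for $\langle n\rangle=25$.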
For the initial field states in (\ref{18}), (\ref{19}) and (\ref{20}) one has \begin{equation}\label{221} \rho^{CS}_{n,n}(0)= |c_n^{CS}(0)|^{2}= e^{-\langle n \rangle}\frac{\langle n \rangle ^{n}}{ n!}, \end{equation} \begin{equation}\label{222} \rho^{SS}_{2n,2n}(0)= |c_{2n}^{SS}(0)|^{2}=\frac{\langle n\rangle^{n} (2n)!}{(2^{n}n!)^2(1+\langle n\rangle)^{n+1/2}},\hspace{1cm} \rho^{SS}_{2n+1,2n+1}(0)= |c_{2n+1}^{SS}(0)|^{2}=0, \end{equation} \begin{equation}\label{223} \rho^{TS}_{n,n}(0)=\frac{\langle n\rangle^n }{{(1+\langle n\rangle)^{n+1}}}, \end{equation} where $\langle n\rangle$ for each of these states is given by \begin{equation}\label{224} \langle n\rangle_{CS}=|\alpha|^{2},\hspace{0.5cm} \langle n\rangle_{SS}=\sinh^{2}(r),\hspace{0.5cm}\langle n\rangle_{TS}=\frac{1}{e^{\hbar\nu/k_{B}T}-1}. \end{equation} From equation (\ref{22}) we can discuss the temporal evolution of the atomic inversion for different initial field situations. This will be presented in section 5 in detail. \section{Entropy squeezing } For a two-level atom, characterized by the Pauli operators $\sigma_{x}$, $\sigma_{y}$ and $\sigma_{z}$, the uncertainty relation for the information entropy is defined as follows \cite{Fang} \begin{equation}\label{23} \delta H(\sigma_{x})\delta H(\sigma_{y})\geq\frac{4}{\delta H(\sigma_{z})},\hspace{2cm}\delta H(\sigma_{\alpha})=\exp[H(\sigma_{\alpha})], \end{equation} where $ H(\sigma_{\alpha})$, the information entropy of the operator $ \sigma_{\alpha}$ $(\alpha=x,y,z)$, is given by \begin{equation}\label{24} H(\sigma_{\alpha})=-\sum_{i=1}^{2}P_{i}(\sigma_{\alpha})\ln P_{i}(\sigma_{\alpha}). \end{equation} Since the Pauli operators of a two-level atom have two eigenvalues, $P_{i}(\sigma_{\alpha})$ denotes the probability distribution of the two possible outcomes of a measurement of the operator $\sigma_{\alpha}$.
It is defined as \begin{equation}\label{25} P_i (\sigma_{\alpha})= \langle \psi_{\alpha_i}| \rho|\psi_{\alpha_i} \rangle , \end{equation} where $\rho$ is the density operator of the system and $|\psi_{\alpha_i} \rangle$ is an eigenstate of the corresponding Pauli operator, i.e., \begin{equation}\label{26} \sigma_{\alpha} | \psi_{\alpha_i} \rangle = \eta_{\alpha_i}| \psi_{\alpha_i} \rangle,\hspace{2cm}\alpha=x,y,z, \hspace{0.25cm} \hspace{.25cm} i=1,2. \end{equation} From equation (\ref{23}), the component $ \sigma_{\alpha}$ $(\alpha=x,y)$ is said to be squeezed if the information entropy $H(\sigma_{\alpha})$ satisfies the inequality \begin{equation}\label{27} E(\sigma_{\alpha})=\delta H(\sigma_{\alpha})-\frac{2}{\sqrt{\delta H(\sigma_{z})}}<0,\hspace{2cm}\alpha=x \hspace{.25cm} \mathrm{or}\hspace{.25cm} y. \end{equation} Using equations (\ref{24}) and (\ref{25}) for the information entropies of the atomic operators $\sigma_{x}$, $\sigma_{y}$ and $\sigma_{z}$, we finally arrive at \begin{eqnarray}\label{29} H(\sigma_{x})=&-&\left[ \frac{1}{2}+Re(\rho_{ge}(t))\right] \ln\left[ \frac{1}{2}+Re(\rho_{ge}(t))\right]\nonumber\\&-&\left[ \frac{1}{2}-Re(\rho_{ge}(t))\right] \ln\left[ \frac{1}{2}-Re(\rho_{ge}(t))\right], \end{eqnarray} \begin{eqnarray}\label{30} H(\sigma_{y})=&-&\left[ \frac{1}{2}+Im(\rho_{ge}(t))\right] \ln\left[ \frac{1}{2}+Im(\rho_{ge}(t))\right]\nonumber\\&-&\left[ \frac{1}{2}-Im(\rho_{ge}(t))\right] \ln\left[ \frac{1}{2}-Im(\rho_{ge}(t))\right], \end{eqnarray} \begin{eqnarray}\label{31} H(\sigma_{z})=-\rho_{ee}(t) \ln\rho_{ee}(t)-\rho_{gg}(t)\ln\rho_{gg}(t).
\end{eqnarray} Using the form of the wave function (\ref{7}), the density operator of the entire atom-field system at any time $t$ is given by \begin{eqnarray}\label{28} \hspace{-1cm}\rho_{\mathrm{atom-field}} =\sum_{n=0}^{\infty}\sum_{m=0}^{\infty} \lbrace c_{n,e}(t)c_{m,e}^{*}(t)|n,e\rangle \langle e,m|+ c_{n+k,g}(t)c_{m+k,g}^{*}(t)|n+k,g\rangle \langle g,m+k|\nonumber\\ \hspace*{-.41in} +c_{n,e}(t)c_{m+k,g}^{*}(t)|n,e\rangle \langle g,m+k|+ c_{n+k,g}(t)c_{m,e}^{*}(t)|n+k,g\rangle \langle e,m|\rbrace. \end{eqnarray} So, the necessary matrix elements of the reduced atomic density operator in (\ref{29})-(\ref{31}) are given by \begin{eqnarray}\label{32} \rho_{ee}(t)=\sum_{n=0}^{\infty}|c_{n,e}(t)|^{2}, \end{eqnarray} \begin{eqnarray}\label{33} \rho_{eg}(t)=\sum_{n=0}^{\infty}c_{n+k,e}(t)c_{n+k,g}^{*}(t)=\rho_{ge}^{*}(t), \end{eqnarray} \begin{eqnarray}\label{34} \rho_{gg}(t)=\sum_{n=0}^{\infty}|c_{n+k,g}(t)|^{2}. \end{eqnarray} Employing the above equations, we can study the temporal evolution of the entropy squeezing in terms of the variables $\sigma_{x}$ and $\sigma_{y}$, which will be done in the next section. \section{Numerical results and discussion} Using equation (\ref{22}) and replacing $\rho_{nn}(0)$ for the different initial field states (coherent, squeezed and thermal states from (\ref {221}), (\ref{222}) and (\ref{223})), we can investigate the effects of the initial field state on the variation of the atomic inversion. First of all it is necessary to select a particular nonlinearity function; as mentioned previously, in this paper we choose $f(n)=\sqrt{n}$. In all figures related to the atomic inversion $W(t)$, the left and right plots correspond to the linear and nonlinear function, respectively. All figures are drawn with the particular values $\mu=0.1$ and $\langle n \rangle=25$; other parameters are given in the corresponding figure captions.
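A hedged numerical sketch of the entropy-squeezing factors is given below; it assumes an initial coherent field with a real amplitude $\alpha=\sqrt{\langle n\rangle}$, the nonlinearity $f(n)=\sqrt{n}$, an atom initially excited, and illustrative parameter values. The code also exploits the fact that $\varphi_{n}$ of Eq. (\ref{17}) equals $(R_{1}+R_{2})/2$, since the $\Delta/2$ terms cancel in the sum:

```python
import math, cmath

def f(n):
    return math.sqrt(n) if n > 0 else 0.0   # nonlinearity f(n) = sqrt(n)

def branch(n, k, gamma, mu, chi, delta, b1, b2):
    # R1, R2 and alpha_n from Eq. (10); phi_n of Eq. (17) equals (R1 + R2)/2
    R1 = delta / 2 + n * f(n)**2 * b2 + chi * n * (n - 1) * f(n)**2 * f(n - 1)**2
    R2 = (-delta / 2 + (n + k) * f(n + k)**2 * b1
          + chi * (n + k) * (n + k - 1) * f(n + k)**2 * f(n + k - 1)**2)
    prod = 1.0
    for m in range(n + 1, n + k + 1):
        prod *= f(m) * math.sqrt(m)
    alpha, Rn, phi = gamma * prod, R1 - R2, (R1 + R2) / 2
    Om = 0.5 * math.hypot(Rn - mu, alpha)
    # Eqs. (15)-(16) with c_{n,e}(0) = 1, c_{n+k,g}(0) = 0 (atom initially excited)
    def amps(t):
        ce = ((math.cos(Om * t) - 1j * (Rn - mu) * math.sin(Om * t) / (2 * Om))
              * cmath.exp(-1j * (phi + mu / 2) * t))
        cg = (-1j * alpha * math.sin(Om * t) / (2 * Om)) * cmath.exp(-1j * (phi - mu / 2) * t)
        return ce, cg
    return amps

def xlnx(x):
    return x * math.log(x) if x > 0 else 0.0

def entropy_squeezing(t, nbar=1.0, k=1, gamma=1.0, mu=0.1,
                      chi=0.0, delta=0.0, b1=0.0, b2=0.0, nmax=60):
    pn = [math.exp(-nbar) * nbar**n / math.factorial(n) for n in range(nmax)]
    amp = [branch(n, k, gamma, mu, chi, delta, b1, b2)(t) for n in range(nmax)]
    rho_ee = sum(pn[n] * abs(amp[n][0])**2 for n in range(nmax))   # Eq. (32)
    rho_gg = sum(pn[n] * abs(amp[n][1])**2 for n in range(nmax))   # Eq. (34)
    # Eq. (33): rho_eg pairs the e- and g-branches at the same photon number n + k
    rho_eg = sum(math.sqrt(pn[n + k] * pn[n]) * amp[n + k][0] * amp[n][1].conjugate()
                 for n in range(nmax - k))
    rho_ge = rho_eg.conjugate()
    Hx = -xlnx(0.5 + rho_ge.real) - xlnx(0.5 - rho_ge.real)        # Eq. (29)
    Hy = -xlnx(0.5 + rho_ge.imag) - xlnx(0.5 - rho_ge.imag)        # Eq. (30)
    Hz = -xlnx(rho_ee) - xlnx(rho_gg)                              # Eq. (31)
    Ex = math.exp(Hx) - 2 / math.sqrt(math.exp(Hz))                # Eq. (27)
    Ey = math.exp(Hy) - 2 / math.sqrt(math.exp(Hz))
    return Ex, Ey
```

At $t=0$ both factors vanish ($H(\sigma_{x})=H(\sigma_{y})=\ln 2$ and $H(\sigma_{z})=0$ for the fully inverted atom), and at all times $-1\leq E(\sigma_{\alpha})\leq 2-\sqrt{2}$, since $1\leq\delta H\leq 2$ for a two-level system.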
Figure 1 shows the temporal evolution of the atomic inversion in terms of the scaled time, for the functions $f(n)=\sqrt{n}$ and $f(n)=1$, taking the ``coherent state'' in (\ref{18}) as the initial field state. Figures 1(a) and 1(b) show the variation of the atomic inversion without the Kerr and Stark effects. The collapse and revival phenomena exist in these figures, but the number of fluctuations increases, with regular behavior, in the deformed case. Also, the amplitude of the fluctuations in this case is increased relative to $f(n)=1$. In other words, while we have partial revivals in the linear case, nearly complete revivals occur in the nonlinear regime. To examine the effect of the Kerr medium on the behavior of the population inversion, figures 1(c) and 1(d) are plotted. Figure 1(d), which corresponds to $f(n)=\sqrt{n}$, shows a chaotic behavior of $W(t)$ around $0.99$, such that the amplitude of the fluctuations between the maxima and minima of $W(t)$ is very small. Figure 1(c) indicates that, in the presence of the Kerr effect for the case $f(n)=1$, the result is very similar to figure 1(a). If a larger value of $\chi$ (up to 0.03 \cite{Abdalla M S}) is used, however, the Kerr effect becomes visible. The effect of the Stark shift (in the presence of the Kerr medium) can be seen for the linear and nonlinear functions in figures 1(e) and 1(f). From figure 1(f), we observe that the Stark shift increases the amplitude of the fluctuations as compared with figure 1(d). This figure also shows a chaotic behavior of $W(t)$ in the nonlinear regime. Figure 1(e) shows the effect of the Stark shift (in the presence of the Kerr medium) on the time variation of $W(t)$ for $f(n)=1$. One can see that the temporal evolution of the atomic inversion reveals several revivals in the presence of both the Stark and Kerr effects.
Comparing figures 1(e) and 1(b) leads us to conclude that the effect of the considered nonlinearity function (without the Kerr and Stark effects) is nearly equivalent to that of the Kerr and Stark effects in the linear case. The effect of the detuning parameter $\Delta=\omega-k\nu$, in the presence of the Kerr and Stark effects, is shown in figures 1(g) and 1(h). Comparing figure 1(g) with figure 1(e) indicates that the extremes of $W(t)$ (in the revivals) decrease regularly for the linear system (figure 1(g)). However, $\Delta$ has a negligible effect in the presence of the nonlinearity function (figure 1(h)). Figure 2 is plotted taking the initial field to be the ``squeezed state'' of (\ref{19}). Figures 2(a) and 2(b) show the time evolution of the atomic inversion for the linear and nonlinear function, in the absence of both the Kerr and Stark effects. We can see from figure 2(a) that $W(t)$ oscillates rapidly for the case $f(n)=1$, while in the presence of nonlinearity the behaviour of $W(t)$ is periodic (figure 2(b)). Figures 2(c) and 2(d) demonstrate the effect of the Kerr medium on the variation of $W(t)$, and in figures 2(e) and 2(f) the Stark shift is added as well. Comparing figures 2(a) and 2(c), one finds that the behavior of $W(t)$ for $f(n)=1$, with and without the Kerr effect, is almost the same; however, for the linear case in the presence of the Stark effect (figure 2(e)) the behavior of $W(t)$ is irregular. We examined the time evolution of the atomic inversion for the nonlinear case with different parameters in the right plots of figure 2. We can see a regular behavior of $W(t)$ in the absence of the Kerr medium, Stark effect and detuning (figure 2(b)), but the behavior of $W(t)$ in figures 2(d), 2(f) and 2(h) is generally irregular. Figures 2(g) and 2(h) show the effect of the detuning parameter for the linear and nonlinear function.
We can see partial revivals in the presence of detuning. In figure 3, we assume that the initial field is the ``thermal state'' defined in (\ref{20}), and again the effects of the Kerr medium, Stark shift and detuning on the behavior of $W(t)$ are investigated. The evolution of the atomic inversion is shown for the nonlinear and linear regimes in the right and left plots of this figure, respectively. The effect of the Kerr medium for the linear function is shown in figure 3(c), where one can see that $W(t)$ is not very sensitive to the Kerr effect. While in figure 3(b) the presence of nonlinearity without both the Kerr and Stark effects allows the (partial) collapses and revivals to be observed, the Kerr medium and Stark shift destroy these phenomena. This seems to be in contrast to the linear case, where the presence of the Kerr and Stark shift effects makes the (partial) collapses and revivals apparent. The variation of $W(t)$ with $\Delta\neq0$ in the presence of the Kerr and Stark effects is shown for the linear and nonlinear function in figures 3(g) and 3(h), respectively. As for the previous states considered in this paper, apart from some changes in the numerical results, no qualitative change can be observed. Altogether, in all three states discussed above, the linear case is more sensitive to the detuning parameter than the nonlinear case.\\ In this part of the present section, we analyze the temporal evolution of the entropy squeezing for different initial field states using the analytical results of section 4. We deal only with the nonlinear case, with deformation function $f(n)=\sqrt{n}$. All figures are drawn with the particular value $\mu=0.1$; other parameters are given in the corresponding figure captions. Figures 4(a) and 4(b) display the time evolution of the entropy squeezing factors $E(\sigma_{x})$ and $E(\sigma_{y})$ when the initial field is the ``coherent state'' of (\ref {18}).
It is obvious from these figures that entropy squeezing in $\sigma_{x}$ and $\sigma_{y}$ exists at some intervals of time. In figures 4(c) and 4(d), we examine the influence of the Kerr effect on the evolution of the entropy squeezing for the variables $\sigma_{x}$ and $\sigma_{y}$ with the chosen parameters, respectively. It is clear from these figures that there is no squeezing in $\sigma _{x}$ and $\sigma_{y}$. Likewise, in figures 4(e) and 4(f), which are plotted in the presence of both the Kerr and Stark effects, no squeezing can be seen in $\sigma_{x}$ and $\sigma_{y}$. To study the effect of the initial mean photon number on the behavior of the entropy squeezing (with the Kerr effect), figures 4(g) and 4(h) are plotted. As is shown, by decreasing the mean value $ \langle n \rangle $ from 25 to 1, entropy squeezing for the variables $\sigma_{x}$ and $\sigma_{y}$ appears in certain time ranges. The time evolution of the squeezing parameters $E(\sigma_{x})$ and $E(\sigma_{y})$ is shown in figure 5 for the field initially in the ``squeezed state'' of (\ref{19}). Specifically, in figures 5(a) and 5(b), the behavior of $E(\sigma_{x})$ and $E(\sigma_{y})$ as functions of the scaled time in the absence of the Kerr and Stark effects is shown. We see from these figures that squeezing occurs in the variables $\sigma_{x}$ and $\sigma_{y}$ when $ \langle n \rangle =1$. It should be noticed that, according to our further calculations (not shown here), in this case (without the Kerr and Stark effects) squeezing may be seen in the components $\sigma_{x}$ and $\sigma_{y}$ for $ \langle n \rangle < 4$. To investigate the effect of the Kerr medium, we have depicted the entropy squeezing $E(\sigma_{x})$ and $E(\sigma_{y})$ in terms of the scaled time in figures 5(c) and 5(d).
$E(\sigma_{x})$ and $E(\sigma_{y})$ predict squeezing in the variables $\sigma_{x}$ and $\sigma_{y}$ over short, discontinuous time periods. A comparison of figures 5(a), 5(b), 5(c) and 5(d) with the corresponding figures for the coherent state (figures 4(a), 4(b), 4(c) and 4(d)) shows that, while for the second set of figures the Kerr effect destroys the entropy squeezing completely, this is not so for the first set. We discuss the effects of the Stark shift on the time evolution of the squeezing factors in figures 5(e) and 5(f). It is obvious from these figures that there is no squeezing in the presence of the Kerr and Stark effects. Figure 6 shows the time evolution of the entropy squeezing factors $E(\sigma_{x})$ and $E(\sigma_{y})$ for the case in which the field is initially prepared in the ``thermal state'' of (\ref{20}). Figures 6(a) and 6(b) represent the entropy squeezing $E(\sigma_{x})$ and $E(\sigma_{y})$ in the absence of the Kerr and Stark effects. As is clear, squeezing in the components $\sigma_{x}$ and $\sigma_{y}$ exists at some (clearly different) intervals of time. Also, the depth of the entropy squeezing for $\sigma_{x}$ is larger than for $\sigma_{y}$. We investigated the effect of the Kerr medium in figures 6(c) and 6(d). As is observed, squeezing may occur in the entropy squeezing factors $E(\sigma_{x})$ and $E(\sigma_{y})$ in a short range of time. Finally, we examined the effect of the Stark shifts (when the Kerr effect is also taken into account) in figures 6(e) and 6(f). In this case, there is no squeezing in $E(\sigma_{x})$ and $E(\sigma_{y})$. \section{A discussion on the effect of three- and four-photon transitions} We investigated the influence of one- and two-photon transitions on the temporal behaviour of the atomic inversion and entropy squeezing in the previous sections.
In this section, we discuss the effect of three- and four-photon processes on the time evolution of the mentioned physical quantities in a general manner. Presenting all of the numerical results and related figures for all quantities with $k=3, 4$ would make the paper dramatically large. Therefore, we present our results qualitatively and compare them with the previous results for $k=1,2$. Clearly, due to the numerous parameters involved in the calculations, one cannot reach a sharp result, so our discussion is restricted to the particular parameters used. According to our further calculations for $k=3$ and $k=4$ (not shown here), the following results have been extracted: \begin{itemize} \item The collapse and revival phenomena clearly exist for three- and four-photon transitions in the linear regime ($f(n)=1$) when the field is initially in the coherent state. As we observed, by increasing the number of photon transitions, the time interval between subsequent revivals decreases. In addition, the revival times become shorter as the number of photon transitions is increased. This result is consistent with the results of Ref. \cite{Kang}. Moreover, for $k=3,4$, no clear collapse-revival phenomenon is observed in the linear regime ($f(n)=1$) for atom-field states whose initial field states are the squeezed and thermal states. \item The temporal behaviour of the atomic inversion for $k=3$ and $k=4$ is chaotic in the nonlinear regime ($f(n)=\sqrt{n}$) in all cases. By contrast, when the initial field is a thermal or coherent state, for the case $k=1$ in the absence of the Kerr medium and detuning, full collapses and revivals are revealed in the evolution of the atomic inversion. \item Our results show that the Kerr medium has a negligible effect on the time variation of the atomic inversion for all cases with $k=3, 4$.
For the detuning parameter we observed that, for the linear case with $k=3$, it has no critical effect for the coherent and thermal initial field states, while for the squeezed initial state the negative values of the atomic inversion are considerably reduced, i.e., it takes positive values over the main part of the time range. The same statement holds, though more weakly, for the case $k=4$. \item In the nonlinear case ($f(n)=\sqrt{n}$), there is no (entropy) squeezing in $\sigma_{x}$ and $\sigma_{y}$ for $k=4$ with the different initial field states and also for $k=3$ with the squeezed and thermal states as the initial states of the field. For $k=3$ in the absence of the Kerr medium, however, squeezing exists in $\sigma_{y}$ over very short intervals of time; in this case, there is no squeezing in $\sigma_{x}$ either. \end{itemize} \section{Summary and conclusion} In this paper, we considered the full nonlinear interaction of a two-level atom with a nonlinear single-mode quantized field for $k$-photon transitions in the presence of a Kerr medium and the Stark shift effect. We also assumed that the coupling between the atom and field is time-dependent as well as intensity-dependent. To the best of our knowledge, this problem in such a general form has not been considered in the literature before. Fortunately, we could solve the dynamical problem and found the explicit form of the state vector of the whole atom-field system analytically. We considered the atom to be initially in the excited state and the field in three different possible states (coherent state, squeezed state and thermal state), and then the time variation of the atomic inversion and entropy squeezing was numerically studied and compared for these cases. Even though our formalism can be used for any nonlinearity function, we particularly considered the nonlinearity function $f(n)=\sqrt{n}$ for our numerical calculations. The obtained results are summarized as follows:\\ 1.
The temporal evolution of both the atomic inversion and the entropy squeezing is generally sensitive to the initial field state, but this sensitivity is more visible for the atomic inversion than for the entropy squeezing. \\ 2. The behavior of the atomic inversion in the presence of nonlinearity (the right plots in all figures) is chaotic, except in some cases, namely figures 1(b), 2(b) and 3(b), which are plotted for the cases in which the Kerr and Stark effects are absent and the initial field state is a coherent state, squeezed state and thermal state, respectively. As is observed, the collapse and revival phenomena are revealed in figures 1(b) and 3(b).\\ 3. The complete (partial) collapse and revival, as purely quantum mechanical features, are observed in the left plots of figure 1 (figure 3), corresponding to the atomic inversion for an initial coherent (thermal) state. \\ 4. The detuning parameter has no critical effect on the atomic inversion, apart from some minor changes in the extrema of the investigated quantities, whether the behavior is chaotic or of collapse-revival type. \\ 5. The variation of the atomic inversion for the different initial field states (coherent state, squeezed state and thermal state) shows that the time-dependent coupling leads to a time delay which is twice the delay time of the time-independent case. This result is similar to those reported in \cite{Abdalla M S}.\\ 6. Entropy squeezing in $\sigma_{x}$ and $\sigma_{y}$ is seen in some intervals of time under various conditions, obviously in different time intervals for each case, such that the uncertainty relation holds.\\ 7. The simultaneous presence of the Stark shift and the Kerr medium prevents entropy squeezing from occurring in all cases (for all the different initial field states).\\ 8. In the absence of the Kerr medium, Stark shift and detuning, and with constant coupling ($f(n)=1$), using the parameters of Ref.
\cite{Kang}, our results successfully recover the numerical results of that reference. In the absence of the mentioned effects, with intensity-dependent but time-independent coupling ($f(n) =\sqrt{n}$, $\mu = 0$), our results reduce to the ones reported in Ref. \cite{LiN}.\\ 9. As previously mentioned, we generalized to the nonlinear regime the atom-field system considered in \cite{Abdalla M S}. Consequently, as expected, in the linear case ($f(n)=1$) our results coincide with the results of that reference.\\ Finally, we would like to mention that our formalism can be applied to all well-known nonlinearity functions, such as those describing the centre-of-mass motion of a trapped ion \cite{Vogel}, photon-added coherent states \cite{Agarwal,Sivakumar}, deformed photon-added coherent states \cite{Safaeian}, harmonious states \cite{Manko,Sudarshan}, $q$-deformed coherent states \cite{Naderi,Macfalane,Biedenharn,Chaichian}, etc. We have not discussed the effect of the initial field photon number in detail, but the results may clearly be affected by this parameter, as well as by all the parameters discussed above. \end{document}
\begin{document} \title{A Leray spectral sequence for noncommutative differential fibrations} \author{Edwin Beggs\ \dag\ \ \&\ \ Ibtisam Masmali\ \ddag \\ \\ \dag\ College of Science, Swansea University, Wales \\ \ddag\ Jazan University, Saudi Arabia} \begin{abstract} This paper describes the Leray spectral sequence associated to a differential fibration. The differential fibration is described by base and total differential graded algebras. The cohomology used is noncommutative differential sheaf cohomology. For this purpose, a sheaf over an algebra is a left module with zero curvature covariant derivative. As a special case, we can recover the Serre spectral sequence for a noncommutative fibration. \end{abstract} \section{Introduction} This paper uses the idea of noncommutative sheaf theory introduced in \cite{three}. This is a differential definition, so the algebras involved have to have a differential structure. Essentially, having zero derivative is used to denote `locally constant', which is a term of uncertain meaning for an algebra. Working rather vaguely, one might think of the total space of a sheaf over a manifold as locally inheriting the differential structure of the manifold, via the homeomorphism between a neighbourhood of a point in the total space and an open set in the base space. This allows us to lift a vector at a point of the base space to a unique vector at every point of the preimage of that point in the total space. This lifting should allow us to give a covariant derivative on the functions on the total space. Further, the local homeomorphisms suggest that the resulting covariant derivative has zero curvature. In \cite{three} it is shown that a zero curvature covariant derivative on a module really does allow us to reproduce some of the main results of sheaf cohomology.
In this paper we shall consider another of the main results of sheaf cohomology, the Leray spectral sequence. Ideally it would be nice to have a definition which did not involve differential structures, but there are several comments to be made on this: When Connes calculated the cyclic cohomology of the noncommutative torus \cite{ConnesIHES}, he used a subalgebra of rapidly decreasing sequences, effectively placing differential methods at the heart of noncommutative cohomology. It is not obvious what a \textit{calculable} purely algebraic (probably read $C^*$ algebraic) sheaf cohomology theory would be -- though maybe the theory of quantales \cite{MulQuant} might give a clue. Secondly, even if there were a non-differential definition, it would likely be complementary to the differential definition. The relation between de Rham and topological cohomology theories is fundamental to a lot of mathematics; it would make no sense to delete either. Finally, in mathematics today, differential graded algebras arising from several constructions are considered interesting objects in their own right, and many applications to Physics are phrased in terms of differential forms or vector fields. There are four main motivations behind this paper. One is that the Leray spectral sequence seems a natural continuation from the sheaf theory and Serre spectral sequence in \cite{three}. Another is a step towards finding an analogue of the Borel-Weil-Bott theorem for representations of quantum groups (see \cite{BWquant93}). One motivation we should look at in more detail is contained in the papers \cite{25,26}. These papers are about noncommutative fibrations. The differences in approach can be summarised in two sentences: We require that the algebras have differential structures, and \cite{25,26} do not. The papers \cite{25,26} require that the base is commutative, and we do not.
One interesting point is that the method of \cite{26} makes use of the classical Leray spectral sequence of a fibration with base a simplicial complex. The fourth motivation is noncommutative algebraic topology, where we would define a fibration on a category whose objects were differential graded algebras. The interesting question is then whether there is a corresponding idea of cofibration in the sense of model categories \cite{quillModel}. The example of the noncommutative Hopf fibration in \cite{three} shows that a differential fibration need not have a commutative base. The example in Section \ref{se1} was made by taking a differential picture of a fibration given as an example in \cite{25} (the base is the functions on the circle), and so it can be considered a noncommutative fibration in both senses. It would be useful to consider whether higher dimensional constructions, such as the 4-dimensional orthogonal quantum sphere in \cite{48}, also give examples of differential fibrations. As differential calculi on finite groups are quite well understood (e.g.\ see \cite{17,Ma:rief}), it would be interesting to ask what a differential fibration corresponds to in this context. From the point of view of methods in mathematical Physics, the quantisation of twistor theory (see \cite{BraMa}) is likely to provide some examples. This paper is based on part of the content of the Ph.D.\ thesis \cite{MasThesis}. \section{Spectral sequences} This is standard material, and we use \cite{11} as a reference. We will give quite general definitions, but likely not the most general possible. \subsection{What is a spectral sequence?} A spectral sequence consists of a series of pages (indexed by $r$) of objects $\mathcal{E}^{p,q}_{r}$ (e.g.\ vector spaces), where $r,p,q$ are integers. We take $r\geq 1$ and $p,q \geq 0$, and set $\mathcal{E}^{p,q}_{r} = 0$ if $p < 0$ or $q < 0$.
There is a differential $$\mathrm{d}_{r} : \mathcal{E}^{p,q}_{r} \longrightarrow \mathcal{E}^{p+r,q+1-r}_{r}$$ such that $\mathrm{d}_{r}\mathrm{d}_{r} = 0$. As $\mathrm{d}_{r}\mathrm{d}_{r} = 0$, we can take a quotient (in our case, a quotient of vector spaces) $$\frac{ \ker \, \mathrm{d}_{r} : \mathcal{E}^{p,q}_{r} \rightarrow \mathcal{E}^{p+r,q+1-r}_{r}}{\mathrm{im} \, \mathrm{d}_{r} : \mathcal{E}^{p-r,q+r-1}_{r} \rightarrow \mathcal{E}^{p,q}_{r}} = H^{p,q}_{r}\ .$$ Then the rule for going from page $r$ to page $r+1$ is $\mathcal{E}^{p,q}_{r+1} = H^{p,q}_{r}$. The maps $\mathrm{d}_{r+1}$ are given by a detailed formula on $H^{p,q}_{r}$. The idea is that eventually the $\mathcal{E}^{p,q}_{r}$ will become fixed for $r$ large enough: indeed, once $r > \max(p,q+1)$, the differential out of $\mathcal{E}^{p,q}_{r}$ has target outside the first quadrant and the differential into it has source outside the first quadrant, so both vanish. The spectral sequence is said to converge to these limiting cases $\mathcal{E}^{p,q}_{\infty}$ as $r$ increases. \subsection{The spectral sequence of a filtration}\label{a53} A decreasing filtration of a vector space $V$ is a sequence of subspaces $F^m V$ ($m\in\mathbb{N}$) for which $F^{m+1}V \subset F^m V$. The reader should refer to \cite{11} for the details of the homological algebra used to construct the spectral sequence. We will merely quote the results. \begin{remark}\label{sprem} Start with a differential graded module $C^n$ (for $n\ge 0$) and $\mathrm{d} :C^n \to C^{n+1}$ with $\mathrm{d}^2=0$. Suppose that $C$ has a filtration $F^m C\subset C=\oplus_{n\ge 0}C^n$ for $m\ge 0$ so that:\\ (1)\quad $\mathrm{d} F^m C \subset F^m C$ for all $m\ge 0$ (i.e.\ the filtration is preserved by $\mathrm{d}$); \\ (2)\quad $F^{m+1} C\subset F^m C$ for all $m\ge 0$ (i.e.\ the filtration is decreasing); \\ (3)\quad $F^0 C=C$ and $F^m C^n=F^m C\cap C^n=\{0\}$ for all $m>n$ (a boundedness condition).
\\ Then there is a spectral sequence $(\mathcal{E}_r^{*,*}, \mathrm{d}_r)$ for $r\ge 1$ ($r$ counts the page of the spectral sequence) with $\mathrm{d}_r$ of bidegree $(r,1-r)$ and \begin{eqnarray}\label{b7} \mathcal{E}_1^{p,q} &=& H^{p+q}(F^pC/F^{p+1}C) \cr &=& \frac{{\rm ker}\, \mathrm{d}:F^pC^{p+q}/F^{p+1}C^{p+q}\to F^pC^{p+q+1}/F^{p+1}C^{p+q+1}}{{\rm im}\, \mathrm{d}:F^pC^{p+q-1}/F^{p+1}C^{p+q-1}\to F^pC^{p+q}/F^{p+1}C^{p+q}}\ . \end{eqnarray} In more detail, we define \begin{eqnarray*} Z_{r}^{p,q} &=& F^{p} C^{p+q} \cap \mathrm{d}^{-1}(F^{p+r} C^{p+q+1})\ ,\cr B_{r}^{p,q} &=& F^{p} C^{p+q} \cap \mathrm{d}(F^{p-r} C^{p+q-1})\ ,\cr \mathcal{E}_{r}^{p,q} &=& Z_r^{p,q}/(Z_{r-1}^{p+1,q-1}+B_{r-1}^{p,q})\ .\end{eqnarray*} The differential $\mathrm{d}_{r}:\mathcal{E}_{r}^{p,q} \to \mathcal{E}_{r}^{p+r,q-r+1}$ is the map induced on quotienting $\mathrm{d}:Z_{r}^{p,q} \to Z_{r}^{p+r,q-r+1}$. The diligent reader should remember an important point here, when reading the seemingly innumerable differentials in the pages to come. There is really only one differential $\mathrm{d}$ -- its domain or codomain may be different subspaces with different quotients applied, but the same $\mathrm{d}$ lies behind them all. The spectral sequence converges to $H^*(C, \mathrm{d})$ in the sense that \begin{eqnarray*}\mathcal{E}_\infty^{p,q} \cong \frac{F^p H^{p+q}(C, \mathrm{d})}{F^{p+1}H^{p+q}(C, \mathrm{d})}\ , \end{eqnarray*} where $F^p H^*(C, \mathrm{d})$ is the image of the map $H^*(F^p C, \mathrm{d})\to H^*(C, \mathrm{d})$ induced by the inclusion $F^p C\to C$. \end{remark} \subsection{The classical Leray spectral sequence} The statement of the general Leray spectral sequence can be found in \cite{28}.
We shall omit the supports and the subsets, as we are currently only interested in a noncommutative analogue of the spectral sequence. Then the statement reads that, given $f : X \rightarrow Y$ and a sheaf $\mathcal{S}$ on $X$, there is a spectral sequence $$E^{pq}_{2} = H^{p}(Y, H^{q}(f,f \vert \mathcal{S}))$$ converging to $H^{p+q}(X,\mathcal{S})$. Here $H^{q}(f,f \vert \mathcal{S})$ is a sheaf on $Y$ which is given by the presheaf assigning to an open $U \subset Y$ $$U\longmapsto H^{q}(f^{-1}U; \mathcal{S} \vert_{f^{-1}U}).$$ Here $f^{-1}U$ is an open set of $X$, and $\mathcal{S} \vert_{f^{-1}U}$ is the sheaf $\mathcal{S}$ restricted to this open set. We shall consider the special case of a differential fibration. This is the background to the Serre spectral sequence, but we consider a sheaf on the total space. The Leray spectral sequence of a fibration is a spectral sequence whose input is the cohomology of the base space $B$ with coefficients in the cohomology of the fiber $F$, and which converges to the cohomology of the total space $E$. Here $$\pi : E \rightarrow B$$ is a fibration with fiber $F$. The difference between this and the Serre spectral sequence is that the cohomology may have coefficients in a sheaf on $E$. \section{Noncommutative differential calculi and sheaf theory} Take a possibly noncommutative algebra $A$. Then a differential calculus $(\Omega^*A,\mathrm{d})$ on $A$ is given by the following.
\begin{defin}\label{anwar} A differential calculus $(\Omega^*A,\mathrm{d})$ on $A$ consists of vector spaces $\Omega^{n}A$ with operators $\wedge$ and $\mathrm{d}$ so that \\ 1) $\wedge : \Omega^r A \otimes \Omega^m A \longrightarrow \Omega^{r+m} A$ is associative (we do not assume any graded commutative property); \\ 2) $\Omega^0 A = A$; \\ 3) $\mathrm{d} : \Omega^n A \rightarrow \Omega^{n+1}A$ with $\mathrm{d}^2 =0$; \\ 4) $\mathrm{d}(\xi \wedge \eta ) = \mathrm{d}\xi \wedge \eta + (-1)^r \xi \wedge \mathrm{d}\eta$ for $\xi \in \Omega^r A$; \\ 5) $\Omega^1 A \wedge \Omega^n A = \Omega^{n+1} A$; \\ 6) $A.\mathrm{d} A = \Omega^{1}A$. \end{defin} Note that many differential graded algebras do not obey (5), but those in classical differential geometry do, and it will be true in all our examples. There is only one place where we require (5), and we will point it out at the time. A special case of $\wedge$ shows that each $\Omega^n A$ is an $A$-bimodule. We will often use $\vert \xi \vert$ for the degree of $\xi$: if $\xi \in \Omega^{n}A$, then $\vert \xi \vert = n$. In the differential graded algebra $(\Omega^{*} A,\wedge,\mathrm{d})$, we have $\mathrm{d}^{2} = 0$.
This means that $$\mathrm{im} \, \mathrm{d} : \Omega^{n-1} A \longrightarrow \Omega^{n} A \ \subset\ \ker \, \mathrm{d} : \Omega^{n} A \longrightarrow \Omega^{n+1} A\ .$$ Then we define the de Rham cohomology as $$H^{n}_{\mathrm{dR}}(A)\ =\ \frac{\ker \, \mathrm{d} : \Omega^{n} A \longrightarrow \Omega^{n+1} A}{\mathrm{im} \, \mathrm{d} : \Omega^{n-1} A \longrightarrow \Omega^{n} A}\ .$$ We give the usual idea of covariant derivatives on left $A$-modules by using the left Leibniz rule: \begin{defin}\label{de1} Given a left $A$-module $E$, a left $A$-covariant derivative is a map $\nabla : E \rightarrow \Omega^1 A \otimes_{A} E$ which obeys the condition $\nabla ( a.e) = \mathrm{d}a \otimes e + a . \nabla e$ for all $e \in E$ and $a \in A$. \end{defin} After the fashion of the de Rham complex, we can attempt to extend the covariant derivative to a complex as follows: \begin{defin}\cite{three} Given $(E,\nabla )$ a left $A$-module with covariant derivative, define $$ \nabla^{[n]} : \Omega^{n} A \otimes_{A} E \rightarrow \Omega^{n+1} A \otimes_{A} E , \quad \omega \otimes e \mapsto \mathrm{d}\omega \otimes e + (-1)^{n} \omega \wedge \nabla e .$$ Then the curvature is defined as $R = \nabla^{[1]} \nabla : E \rightarrow \Omega^{2} A \otimes_{A} E$, and is a left $A$-module map. The covariant derivative is called flat if the curvature is zero.
\end{defin} However, the curvature forms an obstruction to setting up a cohomology, as we now show: \begin{propos}\cite{three} For all $n \geq 0$, $\nabla^{[n+1]} \circ \nabla^{[n]} = \mathrm{id} \wedge R : \Omega^{n} A \otimes_{A} E \rightarrow \Omega^{n+2} A \otimes_{A} E .$ \end{propos} We can now use this in a definition of a noncommutative sheaf \cite{three}. \begin{defin}\label{key49}\cite{three} Given $(E,\nabla )$ a left $A$-module with covariant derivative and zero curvature, define $H^*(A ; E, \nabla )$ to be the cohomology of the cochain complex $$E \stackrel{\nabla} \longrightarrow \Omega^{1} A \otimes_{A}E \stackrel{\nabla^{[1]}} \longrightarrow \Omega^{2} A \otimes_{A} E \stackrel{\nabla^{[2]}}\longrightarrow \cdots$$ Note that $H^{0}(A; E , \nabla ) = \{ e \in E : \nabla e = 0 \}$, the flat sections of $E$. We will often write $H^* (A ;E)$ where there is no danger of confusion about the covariant derivative. \end{defin} We will take this opportunity to make a couple of well known statements about modules over algebras which we will use, as it may make the later reading easier for non-experts (see e.g.\ \cite{44}). \begin{defin} A right $A$-module $E$ is flat if every short exact sequence of left $A$-modules $$0 \longrightarrow L \longrightarrow M \longrightarrow N \longrightarrow 0$$ gives another short exact sequence $$0 \longrightarrow E\otimes_{A} L \longrightarrow E\otimes_{A} M \longrightarrow E\otimes_{A} N \longrightarrow 0.$$ Similarly, a left $A$-module $F$ is called flat if $-\mathop{\otimes}_A F$ preserves exactness of short exact sequences of right modules.
\end{defin} \begin{lemma}\label{raneem} Given two short exact sequences of modules (left or right), \begin{eqnarray*} && 0 \longrightarrow U \stackrel{t}\longrightarrow V \stackrel{f}\longrightarrow W \longrightarrow 0 \ ,\cr && 0 \longrightarrow U \stackrel{t}\longrightarrow V \stackrel{g}\longrightarrow X \longrightarrow 0\ , \end{eqnarray*} there is an isomorphism $h : W \longrightarrow X$ given by $h(w) = g(v)$, where $f(v) = w$. \end{lemma} \section{Differential fibrations and the Serre spectral sequence} \subsection{A simple differential fibration}\label{a52} The reader may take this section as a justification of why the definition of a noncommutative differential fibration which we will give in Definition \ref{b61} is reasonable. Take a trivial fibration $\pi:\mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ given by $$(x_{1} , \ldots ,x_{n},y_{1} , \ldots ,y_{m}) \longmapsto (x_{1} , \ldots ,x_{n})\ .$$ Here the base space is $B = \mathbb{R}^n$, the fiber is $\mathbb{R}^m$, and the total space is $E = \mathbb{R}^{n+m}$. We can write a basis for the differential forms on the total space, putting the $B$ terms (the $\mathrm{d} x_{i}$) first. A form of degree $p$ in the base and $q$ in the fiber (total degree $p+q$) is $$\mathrm{d} x_{\iota_{1}} \wedge \ldots \wedge \mathrm{d} x_{\iota_{p}} \wedge \mathrm{d} y_{j_{1}} \wedge \ldots \wedge \mathrm{d} y_{j_{q}}\ ,$$ for example $\mathrm{d} x_{2} \wedge \mathrm{d} x_{4} \wedge \mathrm{d} y_{1} \wedge \mathrm{d} y_{7} \wedge \mathrm{d} y_{9}$. If we have the projection map $\pi : E \longrightarrow B$, we can write our example form as $$\alpha \ =\ \pi^*(\mathrm{d} x_{2} \wedge \mathrm{d} x_{4}) \wedge (\mathrm{d} y_{1} \wedge \mathrm{d} y_{7} \wedge \mathrm{d} y_{9})$$ so we have a form in $\pi^* \Omega^2 B \wedge \Omega^3 E$. Another element of $\pi^* \Omega^2 B \wedge \Omega^3 E$ might be $$\beta\ =\ \pi^*(\mathrm{d} x_{2} \wedge \mathrm{d} x_{4}) \wedge (\mathrm{d} x_{3} \wedge \mathrm{d} y_{1} \wedge \mathrm{d} y_{7}).$$ Note that we now just look at $\Omega^3 E$, not the forms in the fiber direction, as in the noncommutative case we will not know (at least in the beginning) what the fiber is. We need to describe the forms on the fiber space more indirectly. Now look at the vector space quotient \begin{eqnarray}\label{vcfhgmk} \frac{\pi^* \Omega^2 B \wedge \Omega^3 E}{\pi^* \Omega^3 B \wedge \Omega^2 E}\ . \end{eqnarray} Here $\beta$ is also an element of the bottom line of (\ref{vcfhgmk}), as we could write $$\beta = \pi^*(\mathrm{d} x_{2} \wedge \mathrm{d} x_{4} \wedge \mathrm{d} x_{3}) \wedge (\mathrm{d} y_{1} \wedge \mathrm{d} y_{7})$$ so, denoting the quotient by square brackets, $[\beta] = 0$. On the other hand, $\alpha$ is not in the bottom line of (\ref{vcfhgmk}), so $[\alpha] \neq 0$. We can now use $$\frac{\pi^* \Omega^p B \wedge \Omega^q E}{\pi^* \Omega^{p+1} B \wedge \Omega^{q-1} E}$$ to denote the forms on the total space which are of degree $p$ in the base and degree $q$ in the fiber, without explicitly having any coordinates for the fiber. This is just the idea of a noncommutative differential fibration. \subsection{Noncommutative differential fibrations} In Subsection \ref{a52} we had a topological fibration $\pi:\mathbb{R}^{m+n}\to \mathbb{R}^n$.
For algebras, we will reverse the arrows, and look at $\iota : B \rightarrow A$, where $B$ is the `base algebra' and $A$ is the `total algebra'. Suppose that both $A$ and $B$ have differential calculi, and that the algebra map $\iota : B \rightarrow A$ is differentiable. This means that $\iota : B \rightarrow A$ extends to a map of differential graded algebras $\iota_* : \Omega^*B \rightarrow \Omega^*A$, and in particular that $\mathrm{d}\,\iota_*=\iota_*\,\mathrm{d}$ and $\iota_*\,\wedge=\wedge\,(\iota_*\mathop{\otimes} \iota_*)$. Now we set \begin{eqnarray} \label{cvhgsuv} D_{p,q} = \iota_{*} \Omega^{p} B \wedge \Omega^{q} A\quad\mathrm{and} \quad N_{p,q} = \frac{D_{p,q}}{D_{p+1,q-1}}\ ,\quad N_{p,0} \,=\, \iota_*\Omega^{p} B.A\ . \end{eqnarray} Now we can finally define a differential fibration, remembering that we use $[\ ]$ to denote the equivalence class in the quotient in (\ref{cvhgsuv}): \begin{defin}\label{b61} $\iota : B \longrightarrow A$ is a differential fibration if the map $$\xi \otimes [x] \longmapsto [\iota_{*} \xi \wedge x]$$ gives an isomorphism from $\Omega^{p}B \otimes_{B} N_{0,q}$ to $N_{p,q}$ for all $p,q\ge 0$. \end{defin} \begin{example}\label{f2} (See section 8.5 of \cite{three}.) Given the left covariant calculus on the quantum group $SU_{q}(2)$ given by Woronowicz \cite{worondiff}, the corresponding differential calculus on the quantum sphere $S^{2}_{q}$ gives a differential fibration $$\iota : S^{2}_{q} \longrightarrow SU_{q}(2)\ .$$ Here the algebra $S^{2}_{q}$ is the algebra of invariants of $SU_{q}(2)$ under a circle action, and $\iota$ is just the inclusion. \end{example} \noindent We will give another example in Section \ref{se1}. Now we have the following version of the Serre spectral sequence from \cite{three}.
\begin{theorem} Suppose that $\iota : B \rightarrow A$ is a differential fibration. Then there is a spectral sequence converging to $H^{*}_{\mathrm{dR}}(A)$ with $$E^{p,q}_{2} \cong H^{p}(B ; H^{q}(N_{0,*} ), \nabla )\ .$$ \end{theorem} Here $\nabla$ is a zero curvature covariant derivative on the left $B$-modules $N_{0,n}$, whose construction we will not go further into, as we are about to do something more general. \section{The noncommutative Leray spectral sequence} \subsection{A filtration of a cochain complex}\label{a54} We suppose that $E$ is a left $A$-module with a left covariant derivative $$\nabla : E \longrightarrow \Omega^1 A \otimes_{A}E$$ and that this covariant derivative is flat, i.e.\ that its curvature vanishes. Then $\nabla^{[n]} : \Omega^{n}A\otimes_{A}E \longrightarrow \Omega^{n+1}A \otimes_{A}E$ is a cochain complex (see Definition \ref{key49}). Suppose that $\iota_* : \Omega^*B \longrightarrow \Omega^*A$ is a map of differential graded algebras. We define a filtration of $\Omega^{n}A \otimes_{A} E$ by \begin{eqnarray}\label{rana} F^m(\Omega^{n}A \otimes_{A} E) = \left\{ \begin{array}{ll} \iota_{*} \Omega^m B \wedge \Omega^{n-m} A \otimes_{A} E & \mbox{ $0 \leq m \leq n$};\\ 0 & \mathrm{otherwise}.\end{array} \right. \end{eqnarray} \begin{propos} The filtration in (\ref{rana}) satisfies the conditions of Remark \ref{sprem}.\\ \textbf{Proof:} First, \quad $F^0 (\Omega^n A \otimes_{A} E) = \iota_{*}\Omega^0 B \wedge \Omega^n A \otimes_{A} E$, but $1 \in \iota_{*} \Omega^0 B = \iota_{*}B$, so $F^0 (\Omega^n A \otimes_{A} E) = \Omega^n A \otimes_{A} E$.
To show that it is decreasing (using condition (5) from Definition \ref{anwar}), \begin{eqnarray*} F^{m+1} (\Omega^n A \otimes_{A} E) &=& \iota_{*}\Omega^{m+1} B \wedge \Omega^{n-m-1} A \otimes_{A} E \cr &=& \iota_{*} \Omega^m B \wedge (\iota_{*} \Omega^{1} B \wedge \Omega^{n-m-1}A) \otimes_{A} E\cr &\subset & \iota_{*} \Omega^m B \wedge \Omega^{n-m}A \otimes_{A} E \cr &\subset & F^m (\Omega^n A \otimes_{A} E)\ . \end{eqnarray*} To show that the filtration is preserved by $\mathrm{d}$, take $\iota_{*} \xi \wedge \eta \otimes e \in F^m (\Omega^n A \otimes_{A} E)$ where $\xi \in \Omega^m B$ and $\eta \in \Omega^{n-m} A$. Then $$\mathrm{d}(\iota_{*} \xi \wedge \eta \otimes e) = \iota_{*} \mathrm{d}\xi \wedge \eta \otimes e + (-1)^m \iota_{*} \xi \wedge \mathrm{d}\eta \otimes e +(-1)^n \iota_{*} \xi \wedge \eta \wedge \nabla e\ .$$ This is in $F^m C$, as the first term is in $F^{m+1}C \subset F^m C$, and the other two are in $F^m C$.\quad $\square$ \end{propos} Now we have a spectral sequence which converges to $H^{*}(A ; E)$. All we have to do is to find the first and second pages of the spectral sequence, though this is quite lengthy.
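To make the filtration concrete, here is an unwinding of (\ref{rana}) in the lowest nontrivial case $n=2$ (an illustration only, not a new result); writing $F^m$ for $F^m(\Omega^{2}A \otimes_{A} E)$, the finite decreasing chain is
$$\Omega^{2} A \otimes_{A} E \,=\, F^0 \,\supset\, F^1 = \iota_{*}\Omega^{1} B \wedge \Omega^{1} A \otimes_{A} E \,\supset\, F^2 = \iota_{*}\Omega^{2} B . A \otimes_{A} E \,\supset\, F^3 = 0\ .$$
In the classical picture of Subsection \ref{a52}, $F^m$ consists of the $E$-valued forms which are of degree at least $m$ in the base direction.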
\subsection{The first page of the spectral sequence}\label{first} From Section \ref{a53}, to use the filtration in Section \ref{a54} we need to work with \begin{eqnarray} \label{cbdhsiouv} M_{p,q} = \frac{F^p C^{p+q}}{F^{p+1} C^{p+q}} = \frac{\iota_{*}\Omega^p B \wedge \Omega^q A \otimes_{A} E}{\iota_{*}\Omega^{p+1} B \wedge \Omega^{q-1} A \otimes_{A} E}\ . \end{eqnarray} Then we look, for $p$ fixed (following (\ref{b7})), at the sequence \begin{eqnarray}\label{b60} \cdots M_{p,q-1} \stackrel{\mathrm{d}} \longrightarrow M_{p,q} \stackrel{\mathrm{d}} \longrightarrow M_{p,q+1} \stackrel{\mathrm{d}} \longrightarrow \cdots \end{eqnarray} as the cohomology of this sequence gives the first page of the spectral sequence. Denote the quotient in $M_{p,q}$ by $[ \quad ]_{p,q}$, so if $x \in \iota_{*} \Omega^{p}B \wedge \Omega^{q} A \otimes_{A} E$, then $[x]_{p,q} \in M_{p,q}$. Then we have a map of left $B$-modules $$\Omega^{p}B \otimes_{B} M_{0,q} \longrightarrow M_{p,q}\ ,\quad \xi \otimes [y]_{0,q} \longmapsto [\iota_{*}\xi \wedge y]_{p,q}.$$ Here $y \in \Omega^{q} A \otimes_{A} E$ and the left action of $b \in B$ on $y$ is $\iota(b)y$.
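As a small check, which may reassure the reader that the map above is well defined on the tensor product over $B$: for $b \in B$ the two representatives $\xi\, b \otimes [y]_{0,q}$ and $\xi \otimes b.[y]_{0,q}$ have the same image, since $\iota_*$ restricted to degree zero is just the algebra map $\iota$, so $$[\iota_{*}(\xi\, b) \wedge y]_{p,q} \;=\; [\iota_{*}\xi \wedge \iota(b)\, y]_{p,q}\ .$$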
\begin{propos}\label{aa66} If $E$ is flat as a left $A$-module, then $N_{p,q} \otimes_{A} E \cong M_{p,q}$ with isomorphism $[z] \otimes e \longmapsto [z \otimes e]_{p,q}$.\end{propos} \textbf{Proof:} We have, by definition, a short exact sequence using the notation from (\ref{cvhgsuv}), where $\mathrm{inc}$ is the inclusion and $[\, \,]$ is the quotient map, $$0 \longrightarrow D_{p+1,q-1} \stackrel{\mathrm{inc}}\longrightarrow D_{p,q} \stackrel{[\, \,]}\longrightarrow N_{p,q} \longrightarrow 0.$$ As $E$ is flat, we get another short exact sequence, $$0 \longrightarrow D_{p+1,q-1} \otimes_{A} E \stackrel{\mathrm{inc} \otimes \mathrm{id}}\longrightarrow D_{p,q} \otimes_{A} E \stackrel{[\, \,] \otimes \mathrm{id}}\longrightarrow N_{p,q} \otimes_{A} E \longrightarrow 0$$ but by definition we also have $$0 \longrightarrow D_{p+1,q-1} \otimes_{A} E \stackrel{\mathrm{inc} \otimes \mathrm{id}}\longrightarrow D_{p,q} \otimes_{A} E \stackrel{[\, \,]_{p,q}}\longrightarrow M_{p,q} \longrightarrow 0,$$ and the result follows from Lemma \ref{raneem}.\quad $\square$ \begin{propos}\label{g2} If $E$ is a flat left $A$-module, and $\iota : B \longrightarrow A$ is a fibration in the sense of Definition \ref{b61}, then $$\Omega^{p} B \otimes_{B} N_{0,q} \otimes_{A} E \cong M_{p,q}$$ via the map $$ \xi \otimes [x] \otimes e \longmapsto [\iota_{*} \xi \wedge x \otimes e ]_{p,q}.$$
\textbf{Proof:} Definition \ref{b61} gives an isomorphism $$\Omega^{p} B \otimes_{B} N_{0,q} \longrightarrow N_{p,q}$$ by $\xi \otimes [x] \longmapsto [\iota_{*} \xi \wedge x]$. Now use Proposition \ref{aa66}. \quad $\square$ \end{propos} We now return to the problem of calculating the cohomology of the sequence (\ref{b60}). Take $\xi \otimes [x] \otimes e \in \Omega^{p} B \otimes_{B} N_{0,q} \otimes_{A}E$ (for $x \in \Omega^{q}A$) which maps to $[\iota_{*} \xi \wedge x \otimes e]_{p,q} \in M_{p,q}$, and apply the differential $\nabla^{[p+q]}$ to it to get \begin{eqnarray} &&\mathrm{d}(\iota_{*} \xi \wedge x) \otimes e + (-1)^{p+q}\ \iota_{*} \xi \wedge x \wedge \nabla e \cr &=& \iota_{*} \mathrm{d}\xi \wedge x \otimes e + (-1)^{p}\ \iota_{*} \xi \wedge \mathrm{d} x \otimes e + (-1)^{p+q}\ \iota_{*} \xi \wedge x \wedge \nabla e\ . \end{eqnarray} But $\mathrm{d}\xi \in \Omega^{p+1} B$, and $$M_{p,q+1} = \frac{\iota_{*} \Omega^{p}B \wedge \Omega^{q+1} A \otimes_{A}E}{\iota_{*} \Omega^{p+1}B \wedge \Omega^{q} A \otimes_{A}E}\ ,$$ so the first term vanishes on applying $[\quad]_{p,q+1}$.
Then \begin{eqnarray}\label{g1} \mathrm{d}[\iota_{*} \xi \wedge x \otimes e]_{p,q} = (-1)^{p}[\iota_{*} \xi \wedge (\mathrm{d} x \otimes e +(-1)^{q}x \wedge \nabla e)]_{p,q+1}\ . \end{eqnarray} Now, using Proposition \ref{g2}, we have an isomorphism \begin{eqnarray}\label{b1} \Omega^{p} B \otimes_{B} M_{0,q} \cong M_{p,q} \ ,\quad \xi \otimes [y]_{0,q} \longmapsto [\iota_{*} \xi \wedge y]_{p,q}\ , \end{eqnarray} and using this isomorphism, $\mathrm{d}$ on $M_{p,q}$ can be written (see (\ref{g1})) as \begin{eqnarray}\label{g4} \mathrm{d}( \xi \otimes [y]_{0,q} ) = (-1)^{p} \xi \otimes [\nabla^{[q]} y]_{0,q+1}\ , \end{eqnarray} where $y \in \Omega^{q} A \otimes_{A} E$. From (\ref{g4}) we see that we should study $[\nabla^{[q]}] : M_{0,q} \longrightarrow M_{0,q+1}$, defined by $[y]_{0,q} \longmapsto [\nabla^{[q]} y]_{0,q+1}$. \begin{propos} \label{cvauiuy} $[\nabla^{[q]} ] : M_{0,q} \longrightarrow M_{0,q+1}$ is a left $B$-module map. The module structure is $b . [\eta \otimes e] = [i(b) \eta \otimes e ]$, for $b \in B$ and $\eta \otimes e \in \Omega^{q} A \otimes_{A} E$.
\end{propos} \textbf{Proof:} First, \begin{eqnarray*} [\nabla^{[q]}](b .[\eta \otimes e]_{0,q}) &=& [\mathrm{d}(i(b)\eta ) \otimes e+(-1)^{q}i(b)\eta \wedge \nabla e]_{0,q+1}\cr &=& [\iota_{*}(\mathrm{d} b)\wedge \eta \otimes e+i(b).\mathrm{d}\eta \otimes e+(-1)^{q}i(b) \eta \wedge \nabla e]_{0,q+1}\ . \end{eqnarray*} Now $$\iota_{*}(\mathrm{d} b)\wedge \eta \otimes e \in \iota_{*} \Omega^{1} B \wedge \Omega^{q}A \otimes_{A}E,$$ so $[\iota_{*}(\mathrm{d} b)\wedge \eta \otimes e]_{0,q+1} = 0$ in $M_{0,q+1}$. Then \begin{eqnarray*} [\nabla^{[q]}](b . [\eta \otimes e]_{0,q})& =& [i(b).\mathrm{d}\eta \otimes e+(-1)^{q}i(b) \eta \wedge \nabla e]_{0,q+1} \cr &=&b .[\mathrm{d}\eta \otimes e + (-1)^{q} \eta \wedge\nabla e]_{0,q+1}. \quad \square \end{eqnarray*} \begin{propos}\label{bob2} If $\Omega^p B$ is flat as a right $B$-module, the cohomology of the cochain complex $$\cdots \longrightarrow M_{p,q-1} \stackrel{\mathrm{d}} \longrightarrow M_{p,q} \stackrel{\mathrm{d}} \longrightarrow M_{p,q+1} \stackrel{\mathrm{d}} \longrightarrow \cdots$$ is given by $\Omega^{p}B \otimes_{B}\hat{H}_{q}$, where $\hat{H}_{q}$ is defined as the cohomology of the cochain complex $$\cdots \stackrel{[\nabla^{[q-1]}]} \longrightarrow M_{0,q} \stackrel{[\nabla^{[q]}]} \longrightarrow M_{0,q+1} \stackrel{[\nabla^{[q+1]}]} \longrightarrow \cdots.$$ If we write $\left\langle \quad \right\rangle_{p,q}$ for the equivalence class in the cohomology of $M_{p,q}$, this isomorphism is given by, for
$\xi \in \Omega^{p} B$ and $x \in \Omega^{q} A \otimes_{A} E$, \begin{eqnarray}\label{u2} \left\langle \iota_{*} \xi \wedge x \right\rangle_{p,q} \longmapsto \xi \otimes \left\langle x \right\rangle_{0,q}\ . \end{eqnarray} \end{propos} \textbf{Proof:} To calculate the cohomology, we need to find ${Z}_{p,q} = \mathrm{im} \, \mathrm{d} : M_{p,q-1} \to M_{p,q}$ and ${K}_{p,q} = \ker \, \mathrm{d} : M_{p,q} \to M_{p,q+1}$. As we know from Proposition \ref{cvauiuy} that $\mathrm{d}=[\nabla^{[q]}]: M_{0,q} \longrightarrow M_{0,q+1}$ is a left $B$-module map, we have an exact sequence of left $B$-modules, where the first map is inclusion, \begin{eqnarray}\label{where} 0 \longrightarrow K_{0,q} \stackrel{\mathrm{inc}} \longrightarrow M_{0,q} \stackrel{\mathrm{d}} \longrightarrow Z_{0,q+1} \longrightarrow 0\ . \end{eqnarray} Since $\Omega^{p}B$ is flat as a right $B$-module, we have another exact sequence, \begin{eqnarray}\label{b2} 0 \longrightarrow \Omega^{p}B\otimes_{B} K_{0,q} \stackrel{\mathrm{id} \otimes \mathrm{inc}} \longrightarrow \Omega^{p}B\otimes_{B} M_{0,q} \stackrel{\mathrm{id} \otimes \mathrm{d}} \longrightarrow \Omega^{p}B\otimes_{B} Z_{0,q+1} \longrightarrow 0\ .
\end{eqnarray} Now refer to the isomorphism given in (\ref{b1}); then by (\ref{g4}) the last map $\mathrm{id} \otimes \mathrm{d}$ is $(-1)^{p} \mathrm{d}$ on $M_{p,q}$, so ${Z}_{p,q} = \Omega^{p} B \otimes_{B} Z_{0,q}$ and ${K}_{p,q} = \Omega^{p} B \otimes_{B} K_{0,q}$. From the definition of $\hat{H}_{q}$ we have another short exact sequence, \begin{eqnarray*} 0 \longrightarrow Z_{0,q} \stackrel{\mathrm{inc}}\longrightarrow K_{0,q} \longrightarrow \hat{H}_{q} \longrightarrow 0\ , \end{eqnarray*} and applying $\Omega^{p}B \otimes_{B}(-)$ gives, as $\Omega^{p}B$ is flat as a right $B$-module, \begin{eqnarray}\label{g6} 0 \longrightarrow \Omega^{p} B \otimes_{B} Z_{0,q} \stackrel{\mathrm{id} \otimes \mathrm{inc}}\longrightarrow \Omega^{p} B \otimes_{B} K_{0,q} \longrightarrow \Omega^{p} B \otimes_{B} \hat{H}_{q} \longrightarrow 0\ . \end{eqnarray} We deduce that the cohomology of $M_{p,q}$ is isomorphic to $\Omega^{p}B \otimes_{B} \hat{H}_{q}$. \quad $\square$ \subsection{The second page of the spectral sequence} Now we move to the second page of the spectral sequence, in which we take the cohomology of the previous cohomology, i.e.\ the cohomology of $$\mathrm{d} : \mathrm{cohomology} \, (M_{p,q} ) \longrightarrow \mathrm{cohomology} \, (M_{p+1,q}).$$ By the isomorphism discussed in Proposition \ref{bob2}, we can view this as \begin{eqnarray} \mathrm{d} : \Omega^{p}B \otimes_{B} \hat{H}_{q}\longrightarrow \Omega^{p+1}B \otimes_{B} \hat{H}_{q}\ .
\end{eqnarray} \begin{propos}\label{b6} The differential $\mathrm{d}$ gives a left covariant derivative $$\nabla_{q} : \hat{H}_{q} \longrightarrow \Omega^{1}B \otimes_{B} \hat{H}_{q}.$$ If $\left\langle \xi \otimes e\right\rangle_{0,q} \in \hat{H}_{q}$, this is given by using the isomorphism (\ref{u2}) as $$\left\langle \xi \otimes e\right\rangle_{0,q} \longmapsto \eta \otimes \left\langle \omega \otimes f \right\rangle_{0,q}\ ,$$ where $$\mathrm{d}\xi \otimes e + (-1)^{q}\xi \wedge \nabla e = \iota_{*} \eta \wedge \omega \otimes f \in \iota_*\Omega^1 B\wedge\Omega^q A\mathop{\otimes}_A E\ . $$ \end{propos} \textbf{Proof}: Take $\left\langle x\right\rangle_{0,q} \in \hat{H}_{q}$, where $x \in K_{0,q} = \ker \, \mathrm{d} : M_{0,q} \to M_{0,q+1}$, and suppose $x = \xi \otimes e$, where $\xi \in \Omega^{q}A$ and $e \in E$ (summation implicit).
As $x \in K_{0,q}$ we have $$[\mathrm{d} x]_{0,q+1} = [\mathrm{d}\xi \otimes e + (-1)^{q}\xi \wedge \nabla e]_{0,q+1} = 0$$ in $M_{0,q+1}$, so $$\mathrm{d}\xi \otimes e + (-1)^{q}\xi \wedge \nabla e \in \iota_{*} \Omega^{1}B \wedge \Omega^{q} A \otimes_{A} E.$$ We write (summation implicit), for $\eta \in \Omega^{1} B$, $\omega \in \Omega^{q} A$ and $f \in E$, \begin{eqnarray}\label{b3} \mathrm{d}\xi \otimes e +(-1)^{q} \xi \wedge \nabla e = \iota_{*}\eta \wedge \omega \otimes f\ . \end{eqnarray} Under the isomorphism (\ref{b1}), this corresponds to $\eta \otimes [\omega \otimes f]_{0,q} \in \Omega^{1}B \otimes_{B} M_{0,q}$. As the curvature of $E$ vanishes, we have from applying $\nabla^{[q+1]}$ to (\ref{b3}), \begin{eqnarray}\label{b4} \iota_{*} \mathrm{d}\eta \wedge \omega \otimes f - \iota_{*} \eta \wedge \mathrm{d}\omega \otimes f + (-1)^{q+1} \iota_{*} \eta \wedge \omega \wedge \nabla f = 0 . \end{eqnarray} We take this as an element of $M_{1,q+1}$, so we apply $[\quad ] _{1,q+1}$ to (\ref{b4}).
Then, as the denominator of $M_{1,q+1}$ is $\iota_{*} \Omega^{2}B \wedge \Omega^{q} A \otimes_{A} E$, we see that the first term of (\ref{b4}) vanishes on taking the quotient, giving $$- [ \iota_{*} \eta \wedge (\mathrm{d}\omega \otimes f +(-1)^{q} \omega \wedge \nabla f )] _{1,q+1} = 0.$$ Under the isomorphism (\ref{b1}) this corresponds to \begin{eqnarray}\label{b5} - \eta \otimes_{B} [\mathrm{d}\omega \otimes f + (-1)^{q} \omega \wedge \nabla f ]_{0,q+1} = 0. \end{eqnarray} This means that $$\eta \otimes [ \omega \otimes f ]_{0,q} \in \Omega^{1}B \otimes_{B} M_{0,q}$$ is in the kernel of the map $\mathrm{id} \otimes \mathrm{d}$ in (\ref{b2}), and as (\ref{b2}) is an exact sequence we have $$\eta \otimes [\omega \otimes f]_{0,q} \in \Omega^{1} B \otimes_{B} K_{0,q},$$ so we can take the cohomology class to get $$\eta \otimes \left\langle \omega \otimes f \right\rangle_{0,q} \in \Omega^{1}B \otimes_{B} \hat{H}_{q}.$$ This completes showing that $\nabla_{q}$ exists, but we also need to show that it is a left covariant derivative. For $b \in B$, we calculate $\nabla_{q}(b. \xi \otimes e )$ to get $$\mathrm{d}(b.\xi) \otimes e + (-1)^{q} b. \xi \wedge \nabla e = \mathrm{d} b \wedge \xi \otimes e + b.
(\mathrm{d}\xi \otimes e + (-1)^{q} \xi \wedge \nabla e),$$ so we get $$\nabla_{q}\left\langle b. \xi \otimes e\right\rangle _{0,q} = \mathrm{d} b \otimes \left\langle \xi \otimes e\right\rangle_{0,q} + b. \nabla_{q}\left\langle \xi \otimes e \right\rangle _{0,q}\ . \quad \square$$ \begin{propos}\label{b9} The curvature $R_q$ of the covariant derivative $\nabla_{q}$ in Proposition \ref{b6} is zero. \end{propos} \textbf{Proof:} Using the notation of Proposition \ref{b6} and equation (\ref{b3}), we have $$\nabla_{q}\left\langle \xi \otimes e\right\rangle_{0,q} = \eta \otimes \left\langle \omega \otimes f\right\rangle_{0,q}.$$ If we apply $\nabla_{q}^{[1]}$ (see Definition \ref{key49}) to this, we get \begin{eqnarray}\label{u4} R_{q}\left\langle \xi \otimes e\right\rangle _{0,q} = \mathrm{d}\eta \otimes \left\langle \omega \otimes f\right\rangle_{0,q} - \eta \wedge \nabla_{q} \left\langle \omega \otimes f\right\rangle _{0,q}.
\end{eqnarray} To find $\nabla_{q}\left\langle \omega \otimes f\right\rangle _{0,q}$, referring to the proof of Proposition \ref{b6}, formula (\ref{b5}), we have $$\eta \otimes_{B}(\mathrm{d}\omega \otimes f + (-1)^{q} \omega \wedge \nabla f ) \in \Omega^{1}B \otimes_{B} (\iota_{*} \Omega^{1}B \wedge \Omega^{q}A \otimes_{A}E).$$ This comes from tensoring the exact sequence $$0 \longrightarrow \iota_{*} \Omega^{1}B \wedge \Omega^{q}A \otimes_{A}E \longrightarrow \Omega^{q+1}A \otimes_{A} E \stackrel{[\, \,]_{0,q+1}}\longrightarrow M_{0,q+1}\longrightarrow 0$$ on the left by $\Omega^{1}B$, and using that $\Omega^{1}B$ is a flat right $B$-module. Now write (summation implicit), \begin{eqnarray}\label{u5} \eta \otimes (\mathrm{d}\omega \otimes f +(-1)^{q} \omega \wedge \nabla f) = \eta^{\prime} \otimes (\iota_{*} \kappa \wedge \zeta \otimes g) \end{eqnarray} for $\eta^{\prime}, \kappa \in \Omega^{1} B$, $\zeta \in \Omega^{q}A$ and $g \in E$.
Then, from Proposition \ref{b6}, $$\eta \wedge \nabla_{q}\left\langle \omega \otimes f\right\rangle _{0,q} = \eta^{\prime} \wedge \kappa \otimes \left\langle \zeta \otimes g\right\rangle _{0,q},$$ so from (\ref{u4}), \begin{eqnarray}\label{arwa} R_{q}\left\langle \xi \otimes e\right\rangle _{0,q} = \mathrm{d}\eta \otimes \left\langle \omega \otimes f\right\rangle_{0,q} - \eta^{\prime} \wedge \kappa \otimes \left\langle \zeta \otimes g\right\rangle_{0,q}\ .
\end{eqnarray} Now (\ref{u5}) implies that $$\iota_{*}\eta \wedge (\mathrm{d} \omega \otimes f + (-1)^{q} \omega \wedge \nabla f) = \iota_{*} \eta^{\prime} \wedge \iota_{*} \kappa \wedge \zeta \otimes g\ ,$$ and substituting this into (\ref{b4}) gives $$\iota_{*} \mathrm{d}\eta \wedge \omega \otimes f - \iota_{*} \eta^{\prime} \wedge \iota_{*} \kappa \wedge \zeta \otimes g = 0\ ,$$ so on taking equivalence classes in $M_{2,q}$ we find, using the isomorphism (\ref{b1}), $$\mathrm{d}\eta \otimes [\omega \otimes f]_{0,q} - \eta^{\prime} \wedge \kappa \otimes [\zeta \otimes g]_{0,q} = 0\ ,$$ and this shows that $R_q=0$ by (\ref{arwa}). $\square$ \begin{theorem} Given:\\ 1) a map $\iota : B \longrightarrow A$ which is a differential fibration (see Definition \ref{b61}),\\ 2) a flat left $A$-module $E$, with a zero-curvature left covariant derivative $\nabla_{E} : E \to \Omega^{1}A \otimes_{A} E$,\\ 3) each $\Omega^{p}B$ flat as a right $B$-module,\\ then there is a spectral sequence converging to $ H^{*}(A, E, \nabla_{E})$ with second page $ H^{*}(B, \hat{H}_{q}, \nabla_{q})$, where $\hat{H}_{q}$ is defined as the cohomology of the cochain complex $$\cdots \stackrel{\mathrm{d}} \longrightarrow M_{0,q} \stackrel{\mathrm{d}} \longrightarrow M_{0,q+1} \stackrel{\mathrm{d}} \longrightarrow \cdots,$$ where \begin{eqnarray*} M_{0,q} &=& \frac{\Omega^{q}A \otimes_{A} E}{\iota_{*} \Omega^{1} B \wedge \Omega^{q-1} A \otimes_{A}E}\ ,\cr \mathrm{d}[x \otimes e]_{0,q} &=& [\mathrm{d} x \otimes
e + (-1)^{q} x \wedge \nabla_{E} e]_{0,q+1}\ . \end{eqnarray*} The zero-curvature left covariant derivative $\nabla_{q}: \hat{H}_{q} \to \Omega^{1}B \otimes_{B} \hat{H}_{q}$ is as defined in Proposition \ref{b6}. \end{theorem} \textbf{Proof:} The first part of the proof is given in Proposition \ref{bob2}. Now we need to calculate the cohomology of $$\mathrm{d} : \Omega^{p}B \otimes_{B} \hat{H}_{q} \longrightarrow \Omega^{p+1}B \otimes_{B}\hat{H}_{q}\ .$$ This is given for $\xi \otimes \left\langle \eta \otimes e\right\rangle _{0,q}$ (for $\xi \in \Omega^{p}B$, $\eta \in \Omega^{q} A$ and $e \in E$) as follows: this element corresponds to $\iota_{*}\xi \wedge \eta \otimes e$, and applying $\mathrm{d}$ to this gives $$\iota_{*} \mathrm{d}\xi \wedge \eta \otimes e + (-1)^{p} \iota_{*} \xi \wedge \mathrm{d}\eta \otimes e +(-1)^{p+q} \iota_{*} \xi \wedge \eta \wedge \nabla e.$$ But we have calculated the effect of $\mathrm{d}$ on $\hat{H}_{q}$ in Proposition \ref{b6}, so we get $$\mathrm{d}(\xi \otimes \left\langle \eta \otimes e \right\rangle _{0,q} ) = \mathrm{d}\xi \otimes \left\langle \eta \otimes e \right\rangle _{0,q} +(-1)^{p}\xi \wedge \nabla_{q} \left\langle \eta \otimes e \right\rangle _{0,q}.$$ The covariant derivative $\nabla_{q} $ has zero curvature by Proposition \ref{b9}.
\quad $\square$ \section{Example: A fibration with fiber the noncommutative torus}\label{se1} As discussed in the Introduction, the idea for this example came from \cite{25,26}. \subsection{The Heisenberg group} \label{vcadgsh} The Heisenberg group $H$ is defined to be the following subgroup of $M_{3}(\mathbb{Z})$ under multiplication: \[ \Big\{ \left( \begin{array}{ccc} 1&n&k\\ 0&1&m\\ 0&0&1 \end{array} \right) : n , m , k \in \mathbb{Z} \Big\}. \] We can take generators $u, v , w$ for the group, where $w$ is central and there is one more relation $uv=wvu$. These generators correspond to the matrices \[ u= \left( \begin{array}{ccc} 1&1&0\\ 0&1&0\\ 0&0&1 \end{array} \right) ,\quad v= \left( \begin{array}{ccc} 1&0&0\\ 0&1&1\\ 0&0&1 \end{array}\right)\ ,\quad w= \left( \begin{array}{ccc} 1&0&1\\ 0&1&0\\ 0&0&1 \end{array}\right)\ . \] For every matrix \[\left( \begin{array}{cc} a&c\\ b&d \end{array}\right)\ \in SL(2,\mathbb{Z}) , \] there is an automorphism ${}\theta : H \longrightarrow H $ given by $\theta(u)=u^{a}\,v^{b}$, $\theta(v)=u^{c}\,v^{d}$, $\theta(w)=w$. The group algebra $\mathbb{C}H$ of $H$ can be made into a star algebra by setting $x^*=x^{-1}$ for all $x \in \{ u , v , w \}$. \subsection{A differential calculus on the Heisenberg group} There is a differential calculus on the group algebra $\mathbb{C}H$ of $H$. It is bicovariant, as set down by Woronowicz in \cite{worondiff}. For a generator $ x \in \{u , v , w\}$, we write $e^x = x^{-1}.\mathrm{d} x$, a left invariant element of $\Omega^{1}\mathbb{C}H$. We suppose that $\Omega^{1}\mathbb{C}H$ is free as a left $\mathbb{C}H$-module, with generators $\{ e^{u} , e^{v} , e^{w} \}$. This means that every element of $\Omega^{1}\mathbb{C}H$ can be written uniquely as $a^{u} . e^{u} + a^{v} . e^{v} + a^{w} .
e^{w}$, for $a^{u} , a^{v} , a^{w} \in \mathbb{C}H$. We have the following relations on $\Omega^{1}\mathbb{C}H$, for all $x \in \{ u , v , w \}$:\\ $x .e^x = e^x .x $ \\ $x .e^w = e^w . x $\\ $w .e^x = e^x .w $\\ $ u^{-n} .e^v. u^n = e^v - \frac{n}{2}\,e^w $ \\ $ v^{-n}. e^u. v^n = e^u + \frac{n}{2}\,e^w $\\ Further, the map $\theta$ in Subsection \ref{vcadgsh} extends to a map of 1-forms given by \\ $ \theta(e^{w}) = e^{w} $ \\ $ \theta(e^{u}) = a .e^{u} + b. e^{v} + \frac{ab}{2}. e^{w} $ \\ $ \theta(e^{v}) = c .e^{u} + d. e^{v} + \frac{cd}{2}. e^{w} $ \\ Checking the braiding given by Woronowicz shows that, for $x,y \in \{u , v , w\}$,\\ $\mathrm{d} e^x=0$ \\ $e^x\wedge e^y=-e^y\wedge e^x$\ .\\ The star operation extends to the differential calculus, with $(e^x)^*=-e^x$. \subsection{The differential fibration}\label{b10} If we take $z$ to be the identity function $z : S^{1} \rightarrow \mathbb{C}$, then the map sending $z^n$ to $w^n$ gives an algebra map $\iota:C(S^1)\to \mathbb{C}H$. It is also a star algebra map, with the usual star structure $z^*=z^{-1}$ on $C(S^1)$. The differential structure of the `fiber algebra' $F$ is \begin{eqnarray}\label{fib1} \Omega^n F = \frac{\Omega^n \mathbb{C}H}{\iota_*\Omega^1 C(S^1) \wedge \Omega^{n-1}\mathbb{C}H}\ , \end{eqnarray} i.e.\ we put $ \mathrm{d} w=0$ in $\Omega^n F$ (i.e.\ we put $e^{w} = 0$). This is because in (\ref{fib1}) we divide by everything of the form $e^{w} \wedge \xi $.
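As a quick consistency check (added here; it is immediate from the matrices in Subsection \ref{vcadgsh}), the defining relation $uv=wvu$ can be verified directly on the generators:
\[
uv=\left( \begin{array}{ccc} 1&1&1\\ 0&1&1\\ 0&0&1 \end{array} \right)
=\left( \begin{array}{ccc} 1&0&1\\ 0&1&0\\ 0&0&1 \end{array} \right)
\left( \begin{array}{ccc} 1&1&0\\ 0&1&1\\ 0&0&1 \end{array} \right)=w(vu)\ ,
\]
and multiplying out likewise shows that $w$ commutes with both $u$ and $v$, so $w$ is indeed central.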
To see that this gives a fibration, we note that a linear basis for the left invariant $n$-forms is as follows:\\ $\Omega^{1}A$: \quad $e^{u}, e^{v}, e^{w}$\\ $\Omega^{2}A$: \quad $e^{u} \wedge e^{v}$, $e^{w} \wedge e^{u}$ , $e^{w} \wedge e^{v}$\\ $\Omega^{3}A$: \quad $e^{v} \wedge e^{u} \wedge e^{w}$\\ Then the $N_{n,m}$ (see (\ref{cvhgsuv})) are as follows, where $\langle\ldots\rangle$ denotes the module generated by the listed elements, and all $N_{n,m}$ not listed are zero:\\ $N_{0,0} = \left\langle 1 \right\rangle$, \quad $N_{1,0} = \left\langle e^{w} \right\rangle $, \quad $N_{m,0} = 0$ for $m > 1$ \\ $N_{0,1} = \frac{\left\langle e^{u}, e^{v}, e^{w}\right\rangle }{\left\langle e^{w} \right\rangle } = \left\langle e^{u}, e^{v} \right\rangle $\\ $N_{0,2} = \frac{\left\langle e^{u} \wedge e^{v}, e^{w} \wedge e^{u}, e^{w} \wedge e^{v} \right\rangle }{\left\langle e^{w} \wedge e^{u}, e^{w} \wedge e^{v} \right\rangle } = \left\langle e^{u} \wedge e^{v} \right\rangle $\\ $N_{0,3} = \frac{\left\langle e^{w} \wedge e^{u} \wedge e^{v}\right\rangle }{\left\langle e^{w} \wedge e^{u} \wedge e^{v} \right\rangle } = 0$\\ $N_{0,n} = 0 $ \quad $n \geq 4$\\ $N_{1,1} = \frac{e^{w} \wedge \left\langle e^{w}, e^{u}, e^{v} \right\rangle }{\left\langle 0 \right\rangle }
= \left\langle e^{w} \wedge e^{u} , e^{w} \wedge e^{v} \right\rangle $\\ $N_{1,2} = \frac{e^{w} \wedge \left\langle e^{w} \wedge e^{u}, e^{w} \wedge e^{v}, e^{u} \wedge e^{v} \right\rangle }{\left\langle 0 \right\rangle } = \left\langle e^{w} \wedge e^{u} \wedge e^{v} \right\rangle$ \\ Then the following map is one-to-one and onto, $$\Omega^{1}C(S^{1}) \otimes_{C(S^{1})} N_{0,n} \longrightarrow N_{1,n},$$ giving a differential fibration in the sense of Definition \ref{b61}. As was done in \cite{25}, we note that this map does have a fiber in quite a classical sense. The algebra $C(S^1)$ is commutative, and if we take $q\in S^1$, the fiber algebra corresponding to $q$ is given by substituting $w\mapsto q$ in the algebra relations. We get unitary generators $u ,v$ and a relation $u\, v = q\, v\, u$ for a complex number $q$ of norm $1$. But this is exactly the noncommutative torus $\mathbb{T}^{2}_{q}$. The map $\theta$ on the total algebra $\mathbb{C}H$ is the identity on the base algebra $C(S^1)$, so it acts on each fiber. \end{document}
\begin{document} \draft \title{CHARGE~RENORMALIZATION~IN~A~NEW~KIND~OF~NON-LOCAL QUANTUM~ELECTRODYNAMICS} \author{S.~S.~Sannikov} \address{Physico-Technical Institute\\ 1 Academichna St., 310108 Kharkov, {\bf UKRAINE}} \author{A.~A.~Stanislavsky} \address{Institute of Radio Astronomy of the Ukrainian National Academy of Sciences\\ 4 Chervonopraporna St., 310002 Kharkov, {\bf UKRAINE}\\ E-mail: [email protected]} \date{\today} \maketitle \begin{abstract} The goal of this paper is to calculate radiative corrections to the Sommerfeld fine structure constant in the framework of a new QED in which particles are described by bilocal fields. The bare constant is 1/136, where 136 is the dimension of the dynamical group of the bi-Hamiltonian system underlying the suggested elementary particle theory. Our calculations in the second order of perturbation theory give the renormalized Sommerfeld constant 1/137.0345. We believe the difference (137.0359 $-$ 137.0345) between the corresponding experimental and theoretical values may be understood as corrections of the fourth order. \end{abstract} \pacs{11.10.Gh, 11.10.Lm} \section{Introduction} The aim of this paper is to show how to calculate the main radiative corrections in quantum electrodynamics improved on the basis of the general elementary particle theory suggested in \cite{1}. The keystone of the theory is the assumption that the true mechanism of production of elementary particles is not interactions between them (or between their hypothetical constituents), but a certain quantum-dynamical system determining the special physics at supersmall distances, where space-time is a discontinuum, i.e.\ a quite disconnected manifold.
The transitions in such a system lead to the creation of fundamental particle fields, which are bilocal wave functions $\psi(X,Y)$ in our theory (the Heisenberg-Schr$\rm\ddot o$dinger-Dirac theory postulates the existence of local fields $\psi(X)$, but in that theory there are ultraviolet divergences). The initial principles of our approach to the elementary particle problem have been stated in the Russian periodicals \cite{1,2,3,4,5}. The dynamical system mentioned above has been described in \cite{2} and named a relativistic bi-Hamiltonian system. Owing to the discontinuity of space at small distances, the quantum theory of the system is non-unitary; the non-standard (non-Fock) representations of the Heisenberg algebra $h^{(*)}_{16}$ described in \cite{4} (extraction of the square root of Dirac-Grassmann spinors leads to such algebras \cite{3}), together with the non-unitary (infinite-dimensional) representations of the rotation group $SO(3)$ and the Lorentz group $SO(3,1)$ induced by them and characterized by arbitrary complex spin, found earlier in \cite{5}, form the mathematical foundation of this theory. Thus these representations stand for a new physical reality. The elementary particle theory based on them is more like the atomic spectrum theory than any composite model. In the framework of this theory (quantum electrodynamics with bilocal fields) we consider here only one question --- the charge renormalization, which has not been solved until now.
\section{Bilocal fields and their interactions} The field bilocality $\psi(X,Y)$ is the direct consequence of the semispinor structure of the particle fields $\psi^\Sigma \sim\langle\dot f,O^\Sigma f\rangle$ (where $O^\Sigma$ are elements of the Heisenberg algebra $h^{(*)}_{16}$) discovered by means of extracting the square root of Grassmann spinors, see \cite{3} (this structure is quite analogous to the spinor structure of the current $j_\mu\sim\bar\psi\gamma_\mu\psi$, where $\gamma_\mu$ are elements of the Clifford algebra discovered by means of extracting the Dirac square root of vectors). The bilocal field $\psi^\Sigma(X,Y)$, defined by the transition amplitude $\langle\dot f(X-Y),O^\Sigma f(X+Y)\rangle$, where $O^\Sigma f(x)$ and $\dot f(\dot x)$ are the initial (excited) and final (ground) states of the relativistic bi-Hamiltonian system respectively (the explicit form of these states is found in \cite{1}), is written down as \begin{equation} \psi(X,Y)=\frac{1}{(2\pi)^{3/2}}\int e^{ipX+iqY}\theta(p_0+q_0)\, \theta(p_0-q_0)\,\delta(p^2+q^2)\,\delta(pq)\,\delta(p^2-m^2)\, \psi(p,q)\,d^4p\,\frac{d^4q}{2\pi}\ . \label{eq1} \end{equation} Here $X_\mu$ are coordinates in Minkowski space ($p_\mu$ is the 4-momentum of a particle) and $Y_\mu$ are internal coordinates (which are not fixed in the experiment, and therefore we call them hidden) describing the space-time structure of particles ($q_\mu$ is the 4-momentum of a tachyon; it is interesting to note that analogous objects have already been introduced by Yukawa \cite{6}). It follows from (\ref{eq1}) that if $\vert X\vert\gg\vert Y \vert$ then the bilocal field $\psi(X,Y)$ transforms into the usual local field $\psi(X)=\psi(X,0)$ (hence, in the suggested scheme the local fields appear as asymptotic fields; this is a principal point of a new correspondence principle).
It also follows from (\ref{eq1}) that $\psi(X,Y)$ may be represented in the form $\psi(X,Y)=F(Y,-i\frac{\partial}{\partial X})\psi(X)$, where $\psi(X)$ is a local field and $F$ is the so-called smearing operator, which in the case of massive particles ($p_\mu$ is the 4-momentum of such a particle) has the form \begin{equation} F(Y,p)=\frac{1}{2\pi}\int e^{iqY}\delta(p^2+q^2)\, \delta(pq)\,d^4q\ . \label{eq2} \end{equation} Another form of the smearing operator takes place in the case of massless particles (it follows from the explicit form of the leptonic transition amplitude; it is necessary to note that operator (\ref{eq2}) does not transform into (\ref{eq3}) when $p^2=0$; in that case we have the stochastic integral \begin{displaymath} \frac{1}{2}\int_{-1}^1 e^{i\alpha pY}\,d\alpha\,), \end{displaymath} namely: \begin{equation} F_0(Y,k)=e^{iYk}\ . \label{eq3} \end{equation} It is a translation ($k_\mu$ is the 4-momentum of such a particle). Interactions between bilocal fields are described by differential equations in Minkowski space. We are interested in the Dirac field $\psi(X,Y)$ interacting with the electromagnetic field $A_\mu(X,Y')$ (a general mechanism driving interactions is described in \cite{1}). In this case the equations are written in the form \begin{equation} \left(\gamma_\mu\frac{\partial}{\partial X_\mu}+m\right)\psi(X)= -iJ(X) \label{eq4} \end{equation} where $J(X)=e\gamma_\mu\int\psi(X,Y)\,A_\mu(X,Y')\,d\mu(Y,Y')$ is the interaction ``current''. It transforms into the usual local coupling between local fields, $e\gamma_\mu\psi(X)A_\mu(X)$, if $\vert X\vert\gg\vert Y\vert$ (this is the new correspondence principle, from which the explicit form of the measure follows: \begin{displaymath} d\mu(Y,Y')=\frac{\kappa^8}{(2\pi)^4}\,e^{iYY'\kappa^2}\,d^4Y\, d^4Y'\,. \end{displaymath} Here $\kappa$ is a new fundamental constant equal to $\kappa= 5\cdot 10^{13}\,{\rm cm}^{-1}$, see \cite{1}; it will be convenient for our further calculations to put $c=\hbar=\kappa=1$).
Proceeding from (\ref{eq4}) we may construct the $S$-matrix $S=T\exp\left(i\int\pounds_i(X)\,d^4X\right)$, where $\pounds_i(X) =\frac{1}{2}[\bar\psi(X)J(X)+\bar J(X)\psi(X)]$ is the interaction Lagrangian. In perturbation theory the interaction picture may be described by the well-known Feynman diagrams, in the vertices of which the electron-photon form factor arises: \begin{equation} \rho(p,k)=\int F(Y,p)\,F_0(Y',k)\,d\mu(Y,Y')= \frac{1}{2\pi}\int e^{iqk}\delta(p^2+q^2)\, \delta(pq)\,d^4q\ . \label{eq5} \end{equation} \section{The main formula}\label{kd} First of all we state the result of our calculations of radiative corrections to the Sommerfeld fine structure constant $\alpha=e^2/4\pi$. In the suggested theory the renormalized constant $\tilde\alpha$ is connected with the ``bare'' constant $\alpha$ by the formula \begin{equation} \tilde\alpha=\left(\frac{Z_1}{Z_2}\right)^{2}\frac{Z_4}{Z_3}\,\alpha \label{eq6} \end{equation} where $Z_1,Z_2,Z_3,Z_4$ are the renormalization constants of the fermion Green function, the vertex function, the Lagrangian of the classical electromagnetic field and the three-tail (three-point vertex), respectively. All these quantities are calculated here in the second order of perturbation theory. In the suggested theory the ``bare'' constant $\alpha$ is equal to 1/136 (the Eddington formula), where 136 is the dimension of the dynamical group (the group of automorphisms $Sp^{(*)}(8,{\bf C})$ of the Heisenberg algebra $h_{16}^{(*)}$) in our relativistic bi-Hamiltonian system, see \cite{2}.
We see that formula (\ref{eq6}) differs essentially from the local theory formula $\tilde\alpha=Z_3^{-1}\alpha$ \cite{7}, which is a consequence of the Ward identity $Z_1=Z_2$ (in the proof of this identity for the regularized constants, see \cite{7}, the regularized fermion self-energy operator $\Sigma(p)$ is assumed to be an analytic function at the point $p^2=0$; this does not take place in the suggested theory: it follows from \cite{8} that $\Sigma(p)\sim \ln p^2$ when $p^2\to 0$) and of the Furry theorem (which also fails in the suggested theory, due to the presence of hidden parameters $Y_\mu$ for bare particles and their absence for bare antiparticles, see further). \section{Calculation of $Z_1/Z_2$} In our theory the Ward identity \begin{displaymath} \frac{\partial\Sigma(p)}{\partial p_\mu}+\Lambda_\mu(p,0)=0 \end{displaymath} ($\Lambda_\mu$ is the vertex function) is replaced by the more general identity \begin{equation} \frac{\partial\Sigma(p)}{\partial p_\mu}+\Lambda_\mu(p,0)= \Sigma_\mu(p) \label{eq7} \end{equation} where $\Sigma_\mu(p)$ is the following operator \begin{displaymath} \Sigma_\mu(p)=\frac{e^2}{i(2\pi)^4}\int\frac{\gamma_\nu(\hat p- \hat k+m)\gamma_\nu}{[(p-k)^2-m^2]\,k^2}\,\left[\frac{\partial} {\partial p_\mu}\rho(p,k)\right]\,d^4k= \end{displaymath} \begin{displaymath} =\frac{ie^2}{(2\pi)^4} \int^1_0 dz\int_0^\infty\frac{d\sigma}{\sigma^2}\exp[i\frac{p^2} {2\sigma}-i\sigma(m^2z-p^2z(1-z))]\left[p_\mu(2m-\hat p(1-z))+ \frac{z}{3}(p_\mu\hat p-\gamma_\mu p^2)\right]. \end{displaymath} To express the quantity $(Z_1/Z_2-1)$ of interest to us in terms of $\Sigma_\mu(p)$, it is necessary to take this operator on the mass shell $\hat p=m$ by means of the formula \begin{displaymath} \left(\frac{Z_1}{Z_2}-1\right)\gamma_\mu=\Sigma_\mu(m)= -\gamma_\mu\frac{e^2}{4\pi^2}m^2\int^1_0 z\,(1+z)\,K_1(m^2z)\,dz \end{displaymath} where $K_1$ is the MacDonald function.
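As a quick numerical sanity check (not part of the original derivation; it assumes SciPy is available), one can evaluate the mass-shell integral $m^2\int_0^1 z(1+z)K_1(m^2z)\,dz$ directly. Since $K_1(x)\sim 1/x$ as $x\to 0$, the integral tends to $\int_0^1(1+z)\,dz=3/2$ for $m\ll 1$, which is exactly the coefficient behind the small-mass value $Z_1/Z_2=1-3\alpha/2\pi$ in (\ref{eq8}).

```python
# Numerical sanity check (not from the paper): the small-m limit of
# m^2 * int_0^1 z (1+z) K_1(m^2 z) dz, which controls Z1/Z2.
from scipy.integrate import quad
from scipy.special import kv  # kv(1, x) is the MacDonald function K_1

def mass_shell_integral(m):
    """m^2 * int_0^1 z (1+z) K_1(m^2 z) dz."""
    val, _ = quad(lambda z: m**2 * z * (1 + z) * kv(1, m**2 * z), 0, 1)
    return val

# K_1(x) ~ 1/x as x -> 0, so the integrand tends to (1+z) and the
# integral tends to 3/2, reproducing the 3*alpha/(2*pi) coefficient.
print(mass_shell_integral(0.01))   # close to 1.5
```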
From here we get \begin{equation} \frac{Z_1}{Z_2}=\cases{1-\frac{3\alpha}{2\pi}\ \ \ ,\ m\ll 1;\cr 1-\frac{\alpha}{2m^2}\ ,\ m\gg 1.\cr} \label{eq8} \end{equation} \section{Calculation of $Z_4/Z_3$} Similarly, another Ward identity \begin{displaymath} \frac{\partial\Pi_{\mu\nu}(k)}{\partial k_\sigma}+ \Delta_{\mu\nu\sigma}(k,0)=0 \end{displaymath} ($\Pi_{\mu\nu}$ is the polarization tensor), see \cite{9}, is replaced by the more general identity \begin{equation} \frac{\partial\Pi_{\mu\nu}(k)}{\partial k_\sigma}+ \Delta_{\mu\nu\sigma}(k,0)=\Pi_{\mu\nu\sigma}(k) \label{eq9} \end{equation} where $\Pi_{\mu\nu\sigma}(k)$ is the following expression \begin{displaymath} \Pi_{\mu\nu\sigma}^{(1/2)}(k)=\frac{ie^2}{(2\pi)^4}2\int \frac{2p_\mu p_\nu+2p_\mu k_\nu -\delta_{\mu\nu}(p^2+pk)} {p^2\,(p+k)^2}\,\left[\frac{\partial}{\partial k_\sigma} \rho\Bigl((p+k)^2\Bigr)\right]\,d^4p \end{displaymath} in the case of Weyl dissociation, and \begin{displaymath} \Pi_{\mu\nu\sigma}^{(0)}(k)=-\frac{ie^2}{(2\pi)^4}4\int \frac{p_\mu p_\nu+p_\mu k_\nu} {p^2\,(p+k)^2}\,\left[\frac{\partial}{\partial k_\sigma} \rho\Bigl((p+k)^2\Bigr)\right]\,d^4p \end{displaymath} in the case of Klein-Gordon dissociation. Speaking about the dissociation of an electromagnetic wave, we should explain two points. First, in calculating $\Pi_{\mu\nu}$ we use quite another form factor, not (\ref{eq5}) but \begin{equation} \rho(p^2)=\int F(Y,p)\,F(Y',p)\,d\mu(Y,Y')= \frac{\sin p^2}{p^2} \label{eq10} \end{equation} because the Lagrangian $\hat A_\mu\hat{\bar\psi}\gamma_\mu\hat\psi$, all fields of which are quantized, does not give any contribution to the charge renormalization, see \cite{8}. Another Lagrangian, namely $A_\mu\hat{\bar\psi}\gamma_\mu\hat\psi$ ($A_\mu$ is a classical field), does give such a contribution.
While the wave function of photons $A_\mu(X,Y)$ has the internal variables $Y_\mu$, the classical one (the Maxwell field $A_\mu(X)$), being essentially a mixture of an indefinite number of photons (a ``light molecule''), does not have such variables. Therefore in this case only the internal variables of intermediate particles (not antiparticles) are paired. This operation leads to the form factor (\ref{eq10}). It is important to note that the bare particles, as objects created in the transition $f\to\dot f$, have the additional variables $Y$. The bare antiparticles, arising as a consequence of interactions, do not have such variables ($T$-asymmetry of 100 per cent, or complete fermion-antifermion asymmetry of the theory, see \cite{1}). Under these circumstances the well-known Furry theorem is invalid. Second, the polarization tensor $\Pi_{\mu\nu}$, having the finite value $\Pi_{\mu\nu}(k)=(k_\mu k_\nu-\delta_{\mu\nu}k^2) \Pi(k^2)+\delta_{\mu\nu}d(k^2)$ where \begin{displaymath} d(k,m)=-\frac{e^2}{4\pi^2}\int^1_{-1}\frac{d\alpha}{2}\int^1_0 dz\int_0^\infty\frac{\sigma\,d\sigma}{(\sigma+\alpha)^2}\, \left[m^2-\frac{i}{\sigma+\alpha}-k^2\frac{\sigma z}{\sigma+ \alpha}\left(1-\frac{\sigma z}{\sigma+\alpha}\right)\right]\times \end{displaymath} \begin{displaymath} \times\exp\left[-i\sigma m^2+ik^2\sigma z\left(1- \frac{\sigma z}{\sigma+\alpha}\right)\right] \end{displaymath} for both the Dirac and the Kemmer-Duffin (or Klein-Gordon) polarizations (the expression for $\Pi$ is not given here), must be a gauge-invariant quantity. Therefore we require $d(k)=0$ at least in the region $k^2=0$. The last condition leads to the equation \begin{displaymath} \int_0^\infty\frac{\sin x}{x+m^2}\,dx+m^2\int_0^\infty \frac{\cos x}{x+m^2}\,dx=\frac{\pi}{2} \end{displaymath} which has the only solution $m=0$. Hence a classical electromagnetic wave may dissociate into massless particles only.
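The last equation can be probed numerically (an illustration, not part of the paper). Writing $a=m^2$ and using the standard closed forms $\int_0^\infty \frac{\sin x}{x+a}\,dx = \sin a\,{\rm Ci}(a)+\cos a\left(\frac{\pi}{2}-{\rm Si}(a)\right)$ and $\int_0^\infty \frac{\cos x}{x+a}\,dx = \sin a\left(\frac{\pi}{2}-{\rm Si}(a)\right)-\cos a\,{\rm Ci}(a)$, one sees that the left-hand side approaches $\pi/2$ only as $m\to 0$:

```python
# Numerical check (not from the paper) that
#   int_0^inf sin x/(x+m^2) dx + m^2 int_0^inf cos x/(x+m^2) dx = pi/2
# holds for m = 0 and fails for m > 0, using sine/cosine-integral
# closed forms of the two conditionally convergent integrals.
import numpy as np
from scipy.special import sici   # sici(a) returns (Si(a), Ci(a))

def lhs(m):
    a = m**2
    si, ci = sici(a)
    sin_int = np.sin(a) * ci + np.cos(a) * (np.pi/2 - si)  # int sin x/(x+a)
    cos_int = np.sin(a) * (np.pi/2 - si) - np.cos(a) * ci  # int cos x/(x+a)
    return sin_int + a * cos_int

print(lhs(1e-4) - np.pi/2)   # tends to 0 as m -> 0
print(lhs(1.0) - np.pi/2)    # clearly nonzero: m = 1 does not solve it
```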
Essentially, in the suggested theory there are two and only two charged particles with zero bare mass: the positron (in our scheme it is the fundamental fermion with spin 1/2; the electron is the antifermion) and the $\pi$-meson (the quantum of the degeneration fields, with spin 0). Therefore we consider only these two cases. Since $\Pi_{\mu\nu\sigma}$ brings the Lagrangian to the form $\Pi_{\mu\nu\sigma}(k)\,A_\mu(k)\,A_\nu(k)\,A_\sigma(0)$, and as a consequence of the Lorentz gauge $k_\mu A_\mu(k)=k_\nu A_\nu(k)=0$ we should keep only the term $\delta_{\mu\nu}k_\sigma$ in $\Pi_{\mu\nu\sigma}(k)$, we write $\Pi_{\mu\nu\sigma}(k) =\delta_{\mu\nu}k_\sigma I(k)$. Our calculations give \begin{displaymath} I^{(1/2)}(k)=\frac{e^2}{4\pi^2}\int^1_{-1}\frac{\alpha\,d\alpha} {2}\int^1_0 dz\int_0^\infty\frac{\sigma\,d\sigma}{(\sigma+ \alpha)^3}\,\left(\frac{1}{2}-\frac{2\sigma z}{\sigma+\alpha}\right)\, \exp\left[ik^2\sigma z\left(1-\frac{\sigma z}{\sigma+\alpha}\right) \right], \end{displaymath} \begin{displaymath} I^{(0)}(k)=-\frac{e^2}{4\pi^2}\int^1_{-1}\frac{\alpha\,d\alpha} {2}\int^1_0 z\,dz\int_0^\infty\frac{\sigma^2\,d\sigma}{(\sigma+ \alpha)^4}\,\exp\left[ik^2\sigma z\left(1-\frac{\sigma z} {\sigma+\alpha}\right)\right]. \end{displaymath} On the mass shell $k^2=0$ we get \begin{displaymath} I^{(1/2)}(0)=-\frac{e^2}{48\pi^2}\,,\qquad I^{(0)}(0)=-\frac{e^2}{24\pi^2}\,. \end{displaymath} The quantity ($Z_4/Z_3-1$) of interest to us is determined by the sum $I^{(1/2)}(0)+I^{(0)}(0)$, and we have \begin{equation} \frac{Z_4}{Z_3}=1-\frac{\alpha}{12\pi}-\frac{\alpha}{6\pi}= 1-\frac{\alpha}{4\pi}\ . \label{eq11} \end{equation} \section{The principal result} Expressions (\ref{eq8}) and (\ref{eq11}) together give \begin{displaymath} \left(\frac{Z_2}{Z_1}\right)^{2}\frac{Z_3}{Z_4}= \left(1+\frac{3\alpha}{\pi}\right)\left(1+\frac{\alpha}{4\pi}\right)= 1+\frac{13\alpha}{4\pi}\ . \end{displaymath} From (\ref{eq6}) it now follows that \begin{equation} \tilde\alpha^{-1}=\alpha^{-1}+\frac{13}{4\pi}\ .
\label{eq12} \end{equation} Since in the suggested theory $\alpha^{-1}=136$, the renormalized constant is $\tilde\alpha^{-1}=136+1.0345=137.0345$. The modern experimental value of this constant is 137.0359 \cite{10}. We believe the difference 0.0014 (indeed, only 0.00085) may be explained by the fourth-order radiative corrections. \section{Fermion anomalous magnetic moment} According to the suggested theory, calculation of the vertex operator in the third order of perturbation theory leads to the following formula for the fermion anomalous magnetic moment \begin{equation} \Delta\mu=\frac{\alpha}{\pi}\ m^2\int^1_0 z\,(1-z)\,K_1(m^2z)\,dz\ . \label{eq13} \end{equation} a) In the case $m\ll 1$ formula (\ref{eq13}) gives Schwinger's result $\frac{\alpha}{2\pi}$ with a correction \begin{displaymath} \Delta\mu\simeq\frac{\alpha}{2\pi}\left[1+\frac{m^4}{12} \left(C-\frac{13}{12}-\ln 2+\ln m^2\right)\right]\ . \end{displaymath} The electron has $m=\frac{m_e c}{\kappa\hbar}=5\cdot 10^{-4}$, and the correction $\frac{\alpha}{2\pi}\,\frac{m^4}{12}\left(C-\frac{13} {12}-\ln 2+\ln m^2\right)\simeq -9.8\cdot 10^{-17}$ is far beyond the experimental possibilities of today. The $\mu$-meson has $m_\mu= 0.1$ and the correction equals $-5.6\cdot 10^{-8}$, within the bounds of experimental possibility. The correction should be added to the factor $\left(\frac{g-2}{2}\right)_{\rm theory}=\frac{\alpha}{2\pi}+0.76 \left(\frac{\alpha}{\pi}\right)^2=0.0011655102$ calculated by means of the local theory. Its experimental value is $\left(\frac{g-2}{2} \right)_{\rm exper}=0.001165923$. The difference $\left(\frac{g-2} {2}\right)_{\rm exper}-\left(\frac{g-2}{2}\right)_{\rm theory}= 0.000000413$ (together with our correction the value equals 0.000000493) is usually accounted for by the influence of the strong interaction, the correct theory of which is still lacking (so that all such calculations are not strictly defined).
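A quick check of the arithmetic (not from the paper; it only re-evaluates the numbers quoted in the text) confirms both the value of $\tilde\alpha^{-1}$ in (\ref{eq12}) and the $\mu$-meson correction to (\ref{eq13}):

```python
# Numerical check (not from the paper) of the arithmetic quoted above.
import math

# Renormalized inverse coupling, eq. (12) with the bare alpha^{-1} = 136:
alpha_inv = 136 + 13 / (4 * math.pi)
print(alpha_inv)                  # approx 137.0345

# Correction to Schwinger's alpha/(2 pi) for the mu-meson, m = 0.1
# (C is Euler's constant, as in the small-m expansion above).
alpha = 1 / 137.0359
C = 0.5772156649
m = 0.1
corr = (alpha / (2 * math.pi)) * (m**4 / 12) * (
    C - 13/12 - math.log(2) + math.log(m**2))
print(corr)                       # approx -5.6e-8
```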
However there is a correction close to it in magnitude arising from nonlocality (of both the electromagnetic and the strong interactions), i.\,e.\ owing to the finite third fundamental constant $\kappa$. b) For $m\gg 1$ formula (\ref{eq13}) gives \begin{displaymath} \frac{g-2}{2}\simeq\frac{\alpha}{2m^2}\ . \end{displaymath} Let us apply it to the $\tau$-meson, which has $m_\tau=1.78$, and obtain $\frac{g-2}{2}=0.001151584$. \begin{references} \bibitem{1} S.~S.~Sannikov and I.~I.~Uvarov, Problems of Nuclear Physics and Space Rays, No.~31, 76 (Kharkov University Press, Kharkov, 1989); S.~S.~Sannikov, Kiev Report No.~ITP-91-72R, 1992. \bibitem{2} S.~S.~Sannikov and I.~I.~Uvarov, Izvestiya Vysshikh Uchebnykh Zavedenii, seriya Fizika, No.~10, 5 (1990) (translated in Soviet Physics Journal). \bibitem{3} S.~S.~Sannikov, Dokl.~Akad.~Nauk~SSSR {\bf 172}, 37 (1967). \bibitem{4} S.~S.~Sannikov and I.~I.~Uvarov, Problems of Nuclear Physics and Space Rays, No.~32, 31 (Kharkov University Press, Kharkov, 1989). \bibitem{5} S.~S.~Sannikov, J.~Nucl.~Phys. {\bf 2}, 570 (1965); Teor.~Math.~Phys. {\bf 34}, 34 (1978). \bibitem{6} H.~Yukawa, Phys.~Rev. {\bf 77}, 219 (1950). \bibitem{7} N.~N.~Bogolubov and D.~V.~Shirkov, {\it Introduction to the Theory of Quantized Fields}, 3rd ed. (Wiley, 1980). \bibitem{8} S.~S.~Sannikov and A.~A.~Stanislavsky, Izvestiya Vysshikh Uchebnykh Zavedenii, seriya Fizika, No.~6, 76 (1994) (translated in Russian Physics Journal). \bibitem{9} L.~H.~Ryder, {\it Quantum Field Theory} (Cambridge University Press, Cambridge, 1985). \bibitem{10} Review of Particle Properties, Phys.~Lett. {\bf B239} (1990). \end{references} \end{document}
\begin{document} \title{ On Some Problems \\ Related to a Simplex and a Ball} \author{Mikhail Nevskii\footnote{Department of Mathematics, P.G.~Demidov Yaroslavl State University, Sovetskaya str., 14, Yaroslavl, 150003, Russia, orcid.org/0000-0002-6392-7618, [email protected]} } \date{May 5, 2019} \maketitle \begin{abstract} Let $C$ be a convex body and let $S$ be a nondegenerate simplex in ${\mathbb R}^n$. Denote by $\xi(C;S)$ the minimal $\tau>0$ such that $C$ is a subset of the simplex $\tau S$. By $\alpha(C;S)$ we mean the minimal $\tau>0$ such that $C$ is contained in a translate of $\tau S$. Earlier the author proved the equalities $\xi(C;S)=(n+1)\max\limits_{1\leq j\leq n+1} \max\limits_{x\in C}(-\lambda_j(x))+1$ \ (if $C\not\subset S$), \ $\alpha(C;S)= \sum\limits_{j=1}^{n+1} \max\limits_{x\in C} (-\lambda_j(x))+1.$ Here $\lambda_j$ are linear functions called the basic Lagrange polynomials corresponding to $S$. In his previous papers, the author investigated these formulae for $C=[0,1]^n$. The present paper is related to the case when $C$ coincides with the unit Euclidean ball $B_n=\{x: \|x\|\leq 1\},$ where $\|x\|=\left(\sum\limits_{i=1}^n x_i^2 \right)^{1/2}.$ We establish various relations for $\xi(B_n;S)$ and $\alpha(B_n;S)$ and give their geometric interpretation. \noindent Keywords: $n$-dimensional simplex, $n$-dimensional ball, homothety, absorption index \end{abstract} \section{Preliminaries}\label{nev_s1} Everywhere below $n\in{\mathbb N}.$ An element $x\in{\mathbb R}^n$ is written in the form $x=(x_1,\ldots,x_n).$ By definition, $$\|x\|=\sqrt{(x,x)}=\left(\sum\limits_{i=1}^n x_i^2\right)^{1/2},$$ $$B\left(x^{(0)};\varrho\right):=\{x\in{\mathbb R}^n: \|x-x^{(0)}\|\leq \varrho \} \quad \left(x^{(0)}\in {\mathbb R}^n, \varrho>0\right),$$ \ $$B_n:=B(0;1), \quad Q_n:=[0,1]^n, \quad Q_n^\prime:=[-1,1]^n.$$ Let $C$ be a convex body in ${\mathbb R}^n$.
Denote by $\tau C$ the image of $C$ under the homothety with center at the center of gravity of $C$ and ratio $\tau.$ For an~$n$-dimensional nondegenerate simplex $S$, consider the value $\xi(C;S):=\min \{\sigma\geq 1: C\subset \sigma S\}.$ We call this number the {\it absorption index of $S$ with respect to $C$.} Define $\alpha(C;S)$ as the minimal $\tau>0$ such that the convex body $C$ is contained in a translate of the simplex $\tau S$. By ${\rm ver}(G)$ we mean the set of vertices of a convex polytope $G$. Let $x^{(j)}=\left(x_1^{(j)},\ldots,x_n^{(j)}\right),$ $1\leq j\leq n+1,$ be the vertices of the simplex $S$. The matrix $${\bf A} := \left( \begin{array}{cccc} x_1^{(1)}&\ldots&x_n^{(1)}&1\\ x_1^{(2)}&\ldots&x_n^{(2)}&1\\ \vdots&\vdots&\vdots&\vdots\\ x_1^{(n+1)}&\ldots&x_n^{(n+1)}&1\\ \end{array} \right)$$ is nondegenerate. By definition, put ${\bf A}^{-1}$ $=(l_{ij})$. The linear polynomials $\lambda_j(x)= l_{1j}x_1+\ldots+ l_{nj}x_n+l_{n+1,j}$ whose coefficients make up the columns of ${\bf A}^{-1}$ have the property $\lambda_j\left(x^{(k)}\right)$ $=$ $\delta_j^k$, where $\delta_j^k$ is the Kronecker delta. We call $\lambda_j$ the {\it basic Lagrange polynomials corresponding to $S$.} The numbers $\lambda_j(x)$ are the barycentric coordinates of a point $x\in{\mathbb R}^n$ with respect to $S$. The simplex $S$ is given by the system of linear inequalities $\lambda_j(x)\geq 0$. For more details about $\lambda_j$, see [3; Chapter\,1]. The equality $\xi(C;S)=1$ is equivalent to the inclusion $C\subset S.$ If $C\not\subset S$, then \begin{equation}\label{ksi_cs_equality} \xi(C;S)=(n+1)\max_{1\leq j\leq n+1} \max_{x\in C}(-\lambda_j(x))+1 \end{equation} (the proof was given in [2]; see also [3;\,\S\,1.3]).
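Formula (\ref{ksi_cs_equality}) is directly computable once the vertices of $S$ are given. The following sketch (an illustration, not from the paper; it assumes $C$ is a polytope described by its vertices, so the inner maximum may be taken over that finite set) builds the coefficients of the basic Lagrange polynomials as the columns of ${\bf A}^{-1}$:

```python
# Illustration (not from the paper): computing xi(C;S) by the formula
# xi = (n+1) max_j max_x (-lambda_j(x)) + 1, for C given by finitely
# many extreme points (enough for polytopes such as the cube Q_n).
import numpy as np

def lagrange_coeffs(vertices):
    """Columns of A^{-1}: coefficients (l_{1j},...,l_{nj},l_{n+1,j})
    of the basic Lagrange polynomials lambda_j of the simplex."""
    A = np.hstack([np.asarray(vertices, float),
                   np.ones((len(vertices), 1))])
    return np.linalg.inv(A)          # entry [i, j] is l_{ij}

def xi(points, vertices):
    """xi(C;S) for C = conv(points); clamped to 1 when C lies in S."""
    n = len(vertices) - 1
    L = lagrange_coeffs(vertices)
    P = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    lam = P @ L                      # lam[x, j] = lambda_j(x)
    return max(1.0, (n + 1) * (-lam).max() + 1)

# Standard triangle S = conv{(0,0),(1,0),(0,1)} versus the unit square:
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
tri = [(0, 0), (1, 0), (0, 1)]
print(xi(square, tri))               # 4.0: the vertex (1,1) forces tau >= 4
```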
The relation \begin{equation}\label{relation_cs} \max\limits_{x\in C} \left(-\lambda_1(x)\right)= \ldots= \max\limits_{x\in C} \left(-\lambda_{n+1}(x)\right) \end{equation} holds true if and only if the simplex $\xi(C;S)S$ is circumscribed around the convex body $C.$ In the case $C=Q_n$ equality (\ref{ksi_cs_equality}) can be reduced to the form $$\xi(Q_n;S)=(n+1)\max_{1\leq j\leq n+1} \max_{x\in {\rm ver}(Q_n)}(-\lambda_j(x))+1 $$ and (\ref{relation_cs}) is equivalent to the relation \begin{equation}\label{relation_qs} \max\limits_{x\in {\rm ver}(Q_n)} \left(-\lambda_1(x)\right)= \ldots= \max\limits_{x\in {\rm ver}(Q_n)} \left(-\lambda_{n+1}(x)\right). \end{equation} For any $C$ and $S$, we have $\xi(C;S)\geq\alpha(C;S)$. The equality $\xi(C;S)=\alpha(C;S)$ holds only in the case when the simplex $\xi(C;S)S$ is circumscribed around the convex body $C.$ This is equivalent to (\ref{relation_cs}), and also to (\ref{relation_qs}) when $C=Q_n$. It was proved in [4] (see also [3; \S\,1.4]) that \begin{equation}\label{alpha_cs_equality} \alpha(C;S)= \sum_{j=1}^{n+1} \max_{x\in C} (-\lambda_j(x))+1. \end{equation} If $C=Q_n$, then this formula can be written in a rather more geometric way: \begin{equation}\label{alpha_d_i_formula} \alpha(Q_n;S) =\sum_{i=1}^n\frac{1}{d_i(S)}. \end{equation} Here $d_i(S)$ is {\it the $i$th axial diameter of the simplex $S$,} i.\,e., the length of a longest segment in $S$ parallel to the $i$th coordinate axis. Equality (\ref{alpha_d_i_formula}) was obtained in [11]. When $S\subset Q_n,$ we have $d_i(S)\leq 1.$ Therefore, for these simplices, (\ref{alpha_d_i_formula}) gives \begin{equation}\label{ksi_alpha_n_ineq} \xi(Q_n;S)\geq\alpha(Q_n;S) =\sum_{i=1}^n\frac{1}{d_i(S)}\geq n. \end{equation} Earlier the author established the equality \begin{equation}\label{d_i_l_ij_formula} \frac{1}{d_i(S)}=\frac{1}{2}\sum_{j=1}^{n+1} |l_{ij}| \end{equation} (see [2]).
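Formulae (\ref{alpha_d_i_formula}) and (\ref{d_i_l_ij_formula}) are easy to implement; the sketch below (illustrative, not from the paper) computes the axial diameters and $\alpha(Q_n;S)$ from the entries of ${\bf A}^{-1}$:

```python
# Illustration (not from the paper): axial diameters d_i(S) via
# 1/d_i = (1/2) sum_j |l_{ij}|, and alpha(Q_n;S) = sum_i 1/d_i(S).
import numpy as np

def axial_diameters(vertices):
    """d_i(S) from the rows of A^{-1} restricted to the first n rows."""
    A = np.hstack([np.asarray(vertices, float),
                   np.ones((len(vertices), 1))])
    L = np.linalg.inv(A)                       # L[i, j] = l_{ij}
    n = len(vertices) - 1
    return 1.0 / (0.5 * np.abs(L[:n, :]).sum(axis=1))

def alpha_cube(vertices):
    """alpha(Q_n;S) as the sum of reciprocal axial diameters."""
    return (1.0 / axial_diameters(vertices)).sum()

# Standard simplex conv{0, e_1, e_2}: both axial diameters equal 1,
# so alpha(Q_2;S) attains the minimal possible value n = 2.
tri = [(0, 0), (1, 0), (0, 1)]
print(axial_diameters(tri))                    # both equal to 1
print(alpha_cube(tri))                         # 2.0
```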
Being combined together, (\ref{alpha_d_i_formula}) and (\ref{d_i_l_ij_formula}) yield \begin{equation}\label{alpha_qs_formula} \alpha(Q_n;S)=\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^{n+1} |l_{ij}|. \end{equation} Note that $\alpha(C;S)$ is invariant under parallel translation of the sets, and for $\tau>0$ we have $\alpha(\tau C;S)=\tau\alpha(C;S).$ Since $Q_n^\prime=[-1,1]^n$ is a translate of the cube $2Q_n$, after replacing $Q_n$ with $Q_n^\prime$ we obtain from (\ref{alpha_qs_formula}) an even simpler formula: \begin{equation}\label{alpha_q_prime_s_formula} \alpha(Q_n^\prime;S)=\sum_{i=1}^n\sum_{j=1}^{n+1} |l_{ij}|. \end{equation} Let us define the value $$\xi_n:=\min \{ \xi(Q_n;S): \, S \mbox{ is an $n$-dimensional simplex,} \, S\subset Q_n, \, {\rm vol}(S)\ne 0\}.$$ Various estimates of $\xi_n$ were obtained first by the author and then by the author and A.\,Yu.~Ukhalov (see, e.\,g., papers [1], [2], [5], [6], [7], [8], [12] and the book [3]). Always $n\leq \xi_n<n+1$. Nowadays the precise values of $\xi_n$ are known for $n=2,5,9$ and also for an infinite set of odd $n$, for each of which there exists a Hadamard matrix of order $n+1$. If $n\ne 2$, then every known value of $\xi_n$ is equal to $n$, whereas $\xi_2=1+\frac{3\sqrt{5}}{5}=2.34\ldots$ It still remains unknown whether there exists an even $n$ with the property $\xi_n=n$. There are some other open problems concerning the numbers $\xi_n$. In this article, we discuss the analogues of the above characteristics for a simplex and a Euclidean ball. Replacing a cube with a ball makes many questions much simpler. However, the geometric interpretation of the general results has a certain interest in this particular case as well. Besides, we note some new applications of the basic Lagrange polynomials. Numerical characteristics connecting simplices and subsets of ${\mathbb R}^n$ have applications in obtaining various estimates in polynomial interpolation of functions defined on multidimensional domains.
This approach and the corresponding analytic methods were described in detail in [3]. Lately these questions have also been studied by computer methods (see, e.\,g., [5], [6], [8], [12]). \section{The value $\alpha(B_n;S)$}\label{nev_s2} The {\it inradius of an $n$-dimensional simplex $S$} is the maximum of the radii of balls contained in $S$. The center of this unique maximal ball is called the {\it incenter of $S$.} The boundary of the maximal ball is a sphere that has a single common point with each $(n-1)$-dimensional face of $S$. By the {\it circumradius of $S$} we mean the minimum of the radii of balls containing $S$. The boundary of this unique minimal ball does not necessarily contain all the vertices of $S$; namely, this happens only when the center of the minimal ball lies inside the simplex. The inradius $r$ and the circumradius $R$ of a simplex $S$ satisfy the so-called {\it Euler inequality} \begin{equation}\label{euler_ineq} R\geq nr. \end{equation} Equality in (\ref{euler_ineq}) takes place if and only if $S$ is a regular simplex. Concerning the proofs of the Euler inequality, its history and generalizations, see, e.\,g., [10], [13], [14]. In connection with (\ref{euler_ineq}), let us note an analogue of the following property, which holds for parallelotopes (see [11], [3; \S\,1.8]). {\it Let $S$ be a nondegenerate simplex and let $D,$ $D^*$~be parallelotopes in ${\mathbb R}^n.$ Suppose $D^*$ is a homothetic copy of $D$ with ratio $\tau>1.$ If $D\subset S \subset D^*,$ then $\tau\geq n.$ } This proposition holds true also for balls. In fact, the Euler inequality is equivalent to the following statement. {\it Suppose $B$ is a ball with radius $r_1$ and $B^*$ is a ball with radius $r_2$.
If $B\subset S\subset B^*$, then $r_1\leq nr_2.$ Equality takes place if and only if $S$ is a regular simplex inscribed into $B^*$ and $B$ is the ball inscribed into $S$.} Another equivalent form of these propositions is given by Theorem 2 (see the note after the proof of this theorem). Let $x^{(1)},$ $\ldots,$ $x^{(n+1)}$ be the vertices and let $\lambda_1,$ $\ldots,$ $\lambda_{n+1}$ be the basic Lagrange polynomials of a nondegenerate simplex $S\subset {\mathbb R}^n$ (see Section 1). In what follows, $\Gamma_j$ is the $(n-1)$-dimensional hyperplane given by the equation $\lambda_j(x)=0$, by $\Sigma_j$ we mean the $(n-1)$-dimensional face of $S$ contained in $\Gamma_j$, the symbol $h_j$ denotes the height of $S$ dropped from the vertex $x^{(j)}$ onto~$\Gamma_j$, and $r$ denotes the inradius of $S$. Define $\sigma_j$ as the $(n-1)$-measure of $\Sigma_j$ and put $\sigma:=\sum\limits_{j=1}^{n+1} \sigma_j$. Consider the vector $a_j:=\{l_{1j},\ldots,l_{nj}\}$. This vector is orthogonal to $\Gamma_j$ and directed into the half-space containing $x^{(j)}$. Obviously, $$\lambda_j(x)= l_{1j}x_1+\ldots+ l_{nj}x_n+l_{n+1,j}=(a_j,x)+l_{n+1,j}=(a_j,x)+\lambda_j(0).$$ {\bf Theorem 1.} {\it The following equalities are true: \begin{equation}\label{alpha_bs_sum_l_ij_equality} \alpha(B_n;S)= \sum_{j=1}^{n+1}\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}, \end{equation} \begin{equation}\label{alpha_bs_h_j_equality} \alpha(B_n;S)=\sum_{j=1}^{n+1}\frac{1}{h_j}, \end{equation} \begin{equation}\label{alpha_bs_1_r_equality} \alpha(B_n;S)= \frac{1}{r}, \end{equation} \begin{equation}\label{alpha_bs_sigma_nV} \alpha(B_n;S)=\frac{\sigma}{n{\rm vol}(S)}. \end{equation} } {\it Proof.} Let us obtain these pairwise-equivalent equalities from top to bottom. First we note that \begin{equation}\label{alpha_bs_equality} \alpha(B_n;S)= \sum_{j=1}^{n+1} \max_{x\in B_n} (-\lambda_j(x))+1.
\end{equation} Formula (\ref{alpha_bs_equality}) is the particular case of (\ref{alpha_cs_equality}) for $C=B_n$. By the Cauchy inequality, \begin{equation}\label{cauchy_ineq} -\|a_j\|\|x\|\leq (a_j,x)\leq \|a_j\|\|x\|, \end{equation} $$-\|a_j\|\|x\|-\lambda_j(0)\leq -\lambda_j(x)\leq \|a_j\|\|x\|-\lambda_j(0).$$ Both the upper and the lower bounds in (\ref{cauchy_ineq}) are attained. This gives $$\max_{x\in B_n} (-\lambda_j(x))= \max_{\|x\|\leq 1} (-\lambda_j(x))= \|a_j\|-\lambda_j(0).$$ Therefore, $$\alpha(B_n;S)= \sum_{j=1}^{n+1} \max_{x\in B_n} (-\lambda_j(x))+1= \sum_{j=1}^{n+1}\|a_j\|-\sum_{j=1}^{n+1}\lambda_j(0)+1= \sum_{j=1}^{n+1}\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}. $$ We made use of the equality $\sum\limits_{j=1}^{n+1}\lambda_j(0)=1.$ Since $\lambda_j\left(x^{(j)}\right)=1$, we have $$h_j={\rm dist}\left(x^{(j)};\Gamma_j\right)= \frac{\left|\lambda_j\left(x^{(j)}\right)\right|}{\|a_j\|}= \frac{1}{\|a_j\|}=\frac{1}{\left(\sum\limits_{i=1}^n l_{ij}^2\right)^{1/2}}.$$ Consequently, $$\alpha(B_n;S)= \sum_{j=1}^{n+1}\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2} =\sum_{j=1}^{n+1}\frac{1}{h_j}.$$ We have obtained both (\ref{alpha_bs_sum_l_ij_equality}) and (\ref{alpha_bs_h_j_equality}). Let us prove (\ref{alpha_bs_1_r_equality}). The ball $B_n$ is a subset of a translate of the simplex $\alpha(B_n;S)S$. This means that a translate of the ball $\frac{1}{\alpha(B_n;S)}B_n$ is contained in $S$. Since the maximum of the radii of balls contained in $S$ is equal to $r$, we have $\frac{1}{\alpha(B_n;S)}\leq r,$ i.\,e., $\alpha(B_n;S)\geq \frac{1}{r}$. To obtain the reverse inequality, denote by $B^\prime$ a ball of radius $r$ inscribed into $S$. Then the ball $B_n=\frac{1}{r}B^\prime$ is a subset of some translate of $\frac{1}{r}S$. Using the definition of $\alpha(B_n;S)$, we can write $\alpha(B_n;S)\leq \frac{1}{r}$. So, we have $\alpha(B_n;S)=\frac{1}{r}$.
Finally, in order to establish (\ref{alpha_bs_sigma_nV}), it is sufficient to utilize (\ref{alpha_bs_1_r_equality}) and the formula ${\rm vol}(S)=\frac{1}{n}\sigma r$. The latter equality can be obtained from the ordinary formula for the volume of a simplex after subdividing $S$ into $n+1$ simplices in such a way that the $j$th of them has a vertex at the center of the inscribed ball and base $\Sigma_j$. $\Box$ {\bf Corollary 1.} {\it We have $$\frac{1}{r}=\sum_{j=1}^{n+1}\frac{1}{h_j}.$$ } For the proof, it is sufficient to apply (\ref{alpha_bs_h_j_equality}) and (\ref{alpha_bs_1_r_equality}). It seems interesting that this geometric relation (which evidently can also be obtained in a direct way) turns out to be equivalent to the general formula for $\alpha(C;S)$ in the particular case when the convex body $C$ coincides with the Euclidean unit ball. {\bf Corollary 2.} {\it The inradius $r$ and the incenter $z$ of a simplex $S$ can be calculated by the following formulae: \begin{equation}\label{r_formula} r=\frac{1}{ \sum\limits_{j=1}^{n+1}\left(\sum\limits_{i=1}^n l_{ij}^2\right)^{1/2}}, \end{equation} \begin{equation}\label{z_formula} z=\frac{1}{ \sum\limits_{j=1}^{n+1}\left(\sum\limits_{i=1}^n l_{ij}^2\right)^{1/2}} \sum\limits_{j=1}^{n+1}\left(\sum\limits_{i=1}^n l_{ij}^2\right)^{1/2} x^{(j)}. \end{equation} The tangent point of the ball $B(z;r)$ and the facet $\Sigma_k$ has the form \begin{equation}\label{y_k_formula} y^{(k)}=\frac{1}{ \sum\limits_{j=1}^{n+1}\left(\sum\limits_{i=1}^n l_{ij}^2\right)^{1/2}} \left[\sum\limits_{j=1}^{n+1}\left(\sum\limits_{i=1}^n l_{ij}^2\right)^{1/2} x^{(j)} -\frac{1}{\left(\sum\limits_{i=1}^n l_{ik}^2\right)^{1/2}} \left(l_{1k},\ldots,l_{nk}\right) \right]. \end{equation} } {\it Proof.} Equality (\ref{r_formula}) follows immediately from (\ref{alpha_bs_sum_l_ij_equality}) and (\ref{alpha_bs_1_r_equality}).
To obtain (\ref{z_formula}), note that $$r= {\rm dist}(z;\Gamma_j)= \frac{|\lambda_j(z)|}{\|a_j\|}.$$ Since $z$ lies inside $S$, each barycentric coordinate $\lambda_j(z)$ of this point is positive, i.\,e., $\lambda_j(z)=r\|a_j\|.$ Consequently, $$z=\sum_{j=1}^{n+1}\lambda_j(z)x^{(j)}= r\sum_{j=1}^{n+1} \|a_j\| x^{(j)}.$$ This coincides with (\ref{z_formula}). Finally, since the vector $a_k=\{l_{1k},\ldots,l_{nk}\}$ is orthogonal to $\Sigma_k$ and is directed from this facet inside the simplex, the unique common point of $B(z;r)$ and $\Sigma_k$ has the form $$y^{(k)}=z-\frac{r}{\|a_k\|}a_k=r\left( \sum_{j=1}^{n+1} \|a_j\| x^{(j)}-\frac{1}{\|a_k\|} a_k\right).$$ The latter is equivalent to (\ref{y_k_formula}). $\Box$ It is interesting to compare (\ref{alpha_bs_sum_l_ij_equality}) with formula (\ref{alpha_q_prime_s_formula}) for $\alpha(Q_n^\prime;S)$. Since $B_n$ is a subset of the cube $Q_n^\prime=[-1,1]^n$, we have $\alpha(B_n;S)\leq \alpha(Q_n^\prime;S)$. Analytically, this also follows from the estimate $$\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}\leq \sum_{i=1}^n |l_{ij}|.$$ For arbitrary $x^{(0)}$ and $\varrho>0$, the number $\alpha\left(B(x^{(0)};\varrho);S\right)$ can be calculated with the use of Theorem 1 and the equality $\alpha(B(x^{(0)};\varrho);S)$ $=$ $\varrho\alpha(B_n;S)$. If $S\subset Q_n$, then all the axial diameters $d_i(S)$ do not exceed $1$, and (\ref{alpha_d_i_formula}) immediately gives $\alpha(Q_n;S)\geq n$. Moreover, the equality $\alpha(Q_n;S)=n$ holds if and only if each $d_i(S)=1$. The following proposition expresses the analogues of these properties for simplices contained in a ball. {\bf Theorem 2.} {\it If $S\subset B_n$, then $\alpha(B_n;S)\geq n.$ The equality $\alpha(B_n;S)=n$ holds true if and only if $S$ is a regular simplex inscribed into $B_n$. } {\it Proof.} By the definition of $\alpha(B_n;S)$, the ball $B_n$ is contained in a translate of the simplex $\alpha(B_n;S)S$.
Hence, some translate $B^\prime$ of the ball $\frac{1}{\alpha(B_n;S)}B_n$ is a subset of $S$. So, we have the inclusions $B^\prime\subset S\subset B_n$. Since the radius of $B^\prime$ is equal to $\frac{1}{\alpha(B_n;S)}$, the inradius $r$ and the circumradius $R$ of $S$ satisfy the inequalities $\frac{1}{\alpha(B_n;S)}\leq r,$ $R\leq 1$. Making use of the Euler inequality $R\geq nr$, we can write \begin{equation}\label{theor2_ineqs} \frac{1}{\alpha(B_n;S)}\leq r\leq \frac{R}{n}\leq \frac{1}{n}. \end{equation} Therefore, $\alpha(B_n;S)\geq n.$ The equality $\alpha(B_n;S)=n$ means that the left-hand value in (\ref{theor2_ineqs}) coincides with the right-hand one. Thus, all the inequalities in this chain turn into equalities. We obtain $R=1,$ $r=\frac{1}{n}$. Since in this case the Euler inequality (\ref{euler_ineq}) also becomes an equality, $S$ is a regular simplex inscribed into $B_n$. Conversely, if $S$ is a regular simplex inscribed into $B_n$, then $r=\frac{1}{n}$, i.\,e., $\alpha(B_n;S)=\frac{1}{r}=n$. $\Box$ We see that Theorem 2 follows from the Euler inequality (\ref{euler_ineq}). In fact, these statements are equivalent. Indeed, suppose $S$ is an arbitrary $n$-dimensional simplex, $r$ is the inradius and $R$ is the circumradius of $S$. Let us denote by $B$ the ball containing $S$ and having radius $R$. Then some translate $S^\prime$ of the simplex $\frac{1}{R}S$ is contained in $B_n$. By Theorem~1, $\alpha(B_n;S^\prime)$ is the inverse of the inradius of $S^\prime$, i.\,e., is equal to $\frac{R}{r}.$ Now assume that Theorem~2 is true. Let us apply this theorem to the simplex $S^\prime\subset B_n$. This gives $\alpha(B_n;S^\prime)=\frac{R}{r}\geq n$, and we have (\ref{euler_ineq}). Finally, if $R=nr,$ then $\alpha(B_n;S^\prime)=n$. From Theorem 2 we obtain that both $S^\prime$ and $S$ are regular simplices. It follows from (\ref{ksi_alpha_n_ineq}) that the minimum value of $\alpha(Q_n;S)$ for $S\subset Q_n$ is also equal to $n$.
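Theorems 1 and 2 are easy to verify numerically. The sketch below (illustrative, not from the paper) evaluates $\alpha(B_n;S)=\sum_j\|a_j\|$ by (\ref{alpha_bs_sum_l_ij_equality}) and confirms the minimal value $n$ for regular simplices inscribed in $B_n$:

```python
# Numerical illustration (not from the paper) of Theorems 1 and 2:
# alpha(B_n;S) = sum_j ||a_j|| = 1/r, attaining n for a regular
# simplex inscribed into the unit ball.
import numpy as np

def alpha_ball(vertices):
    """alpha(B_n;S): sum over j of the norms of a_j = (l_{1j},...,l_{nj})."""
    A = np.hstack([np.asarray(vertices, float),
                   np.ones((len(vertices), 1))])
    L = np.linalg.inv(A)                       # columns hold the l_{ij}
    n = len(vertices) - 1
    return np.linalg.norm(L[:n, :], axis=0).sum()

# Regular tetrahedron inscribed in the unit sphere B_3 (inradius 1/3):
tet = np.array([(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]) / np.sqrt(3)
print(alpha_ball(tet))        # 3.0 = 1/r = n

# Equilateral triangle inscribed in the unit circle B_2 (inradius 1/2):
tri = [(np.cos(2*np.pi*k/3), np.sin(2*np.pi*k/3)) for k in range(3)]
print(alpha_ball(tri))        # 2.0
```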
This minimal value corresponds to those and only those $S\subset Q_n$ for which every axial diameter $d_i(S)$ is equal to $1$. This property holds for the maximum volume simplices in $Q_n$ (see [3]), but, for $n>2$, not only for these simplices. \section{The value $\xi(B_n;S)$}\label{nev_s3} In this section, we obtain a computational formula for the absorption index of a simplex $S$ with respect to a Euclidean ball. We keep the previous notation. {\bf Theorem 3.} {\it Suppose $S$ is a nondegenerate simplex in ${\mathbb R}^n$, $x^{(0)}\in {\mathbb R}^n$, $\varrho>0$. If $B\left(x^{(0)};\varrho\right) \not\subset S$, we have \begin{equation}\label{ksi_b_x0_ro_s_l_ij_equality} \xi\left(B\left(x^{(0)};\varrho\right);S\right)= (n+1)\max_{1\leq j\leq n+1} \left[\varrho\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}- \sum_{i=1}^n l_{ij}x_i^{(0)}-l_{n+1,j}\right]+1. \end{equation} In particular, if $B_n\not\subset S$, then \begin{equation}\label{ksi_bs_l_ij_equality} \xi(B_n;S)= (n+1)\max_{1\leq j\leq n+1}\left[\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2} -l_{n+1,j}\right]+1. \end{equation} } {\it Proof.} Let us apply the general formula (\ref{ksi_cs_equality}) in the case $C=B\left(x^{(0)};\varrho\right)$. The Cauchy inequality yields \begin{equation}\label{cauchy_for_ksi_ineq} -\|a_j\|\|x-x^{(0)}\|\leq (a_j,x-x^{(0)})\leq \|a_j\|\|x-x^{(0)}\|.
\end{equation} If $\|x-x^{(0)}\|\leq \varrho$, we see that $$ -\varrho\|a_j\|\leq (a_j,x)-(a_j,x^{(0)}) \leq \varrho\|a_j\|, $$ $$-\lambda_j(x)=-(a_j,x)-l_{n+1,j}\leq \varrho\|a_j\|-(a_j,x^{(0)})-l_{n+1,j}.$$ Since both the upper and the lower bounds in (\ref{cauchy_for_ksi_ineq}) are attained, $$\max_{\|x-x^{(0)}\|\leq \varrho} (-\lambda_j(x))= \varrho\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}- \sum_{i=1}^n l_{ij}x_i^{(0)}-l_{n+1,j}.$$ It follows that $$\xi\left(B\left(x^{(0)};\varrho\right);S\right)= (n+1)\max_{1\leq j\leq n+1, \|x-x^{(0)}\|\leq \varrho} (-\lambda_j(x))+1=$$ $$=(n+1)\max_{1\leq j\leq n+1} \left[\varrho\left(\sum_{i=1}^n l_{ij}^2\right)^{1/2}- \sum_{i=1}^n l_{ij}x_i^{(0)}-l_{n+1,j}\right]+1,$$ and we obtain (\ref{ksi_b_x0_ro_s_l_ij_equality}). Equality (\ref{ksi_bs_l_ij_equality}) follows from (\ref{ksi_b_x0_ro_s_l_ij_equality}) for $x^{(0)}=0, \varrho=1$. $\Box$ \section{The equality $\beta_n=n$. Commentaries}\label{nev_s4} {\bf Theorem 4.} {\it If $S\subset B_n$, then $\xi(B_n;S)\geq n.$ The equality $\xi(B_n;S)=n$ takes place if and only if $S$ is a regular simplex inscribed into $B_n$. } {\it Proof.} The statement immediately follows from Theorem 2 and the inequality $\xi(B_n;S)\geq \alpha(B_n;S)$. We also give here a direct proof which does not apply the Euler inequality that was used to obtain the estimate $\alpha(B_n;S)\geq n$. First let $S$ be a regular simplex inscribed into $ B_n $. Then $\alpha(B_n;S)=n$ and the inradius of $S$ is equal to $\frac{1}{n}$. Since the simplex $\xi(B_n;S)S$ is circumscribed around $B_n$, we have the equalities $\xi(B_n;S)=\alpha(B_n;S)=n$ and also relation (\ref{relation_cs}) with $C=B_n$. It follows from (\ref{ksi_cs_equality}) that for any $j=1,\ldots,n+1$ $$\max_{x\in B_n} (-\lambda_j(x))=\frac{n-1}{n+1},$$ where $\lambda_j$ are the basic Lagrange polynomials related to $S$. Now suppose that the simplex $S$ is contained in $B_n$ but is not regular or is not inscribed into the ball.
Denote the Lagrange polynomials of this simplex by $\mu_j$. There exists a regular simplex $S^*$ inscribed into $B_n$ and an integer $k$ such that $S$ is contained in the strip $0\leq\lambda_k(x)\leq 1$, the $k$th $(n-1)$-dimensional faces of $S$ and $S^*$ are parallel, and $S$ has no common points with at least one of the boundary hyperplanes of this strip. Here $\lambda_j$ are the basic Lagrange polynomials of $S^*$. The vertex $x^{(k)}$ of the simplex $S^*$ does not lie in its $k$th facet. Assume $u$ is a point of the boundary of $B_n$ most distant from $x^{(k)}$. Then $u$ is the maximum point of the polynomial $-\lambda_k(x)$, i.\,e., $-\lambda_k(u)=\frac{n-1}{n+1}$. Consider the straight line connecting $x^{(k)}$ and $u$. Denote by $y,z$ and $t$ the intersection points of this line with the pairwise parallel hyperplanes $\mu_k(x)=1,$ $\mu_k(x)=0$ and $\lambda_k(x)=0$ respectively. We have \begin{equation}\label{ineqs_one_strong} \|x^{(k)}-t\|\geq \|y-z\|, \quad \|t-u\|\leq \|z-u\|. \end{equation} At least one of these inequalities is fulfilled in the strict form. The linearity of the basic Lagrange polynomials means that $$\frac{\mu_k(z)-\mu_k(u)}{\mu_k(y)-\mu_k(z)}= \frac{\|z-u\|}{\|y-z\|}, \quad \frac{\lambda_k(t)-\lambda_k(u)}{\lambda_k\left(x^{(k)}\right)-\lambda_k(t)}= \frac{\|t-u\|}{\left\|x^{(k)}-t\right\|}.$$ Since $\mu_k(y)=1,$ $\mu_k(z)=0,$ $\lambda_k\left(x^{(k)}\right)=1,$ and $\lambda_k(t)=0$, we get $$-\mu_k(u)= \frac{\|z-u\|}{\|y-z\|} > \frac{\|t-u\|}{\left\|x^{(k)}-t\right\|}=-\lambda_k(u)=\frac{n-1}{n+1}.$$ Here we made use of (\ref{ineqs_one_strong}) and took into account that at least one of the inequalities is strict. The application of (\ref{ksi_cs_equality}) yields $$\xi(B_n;S)=(n+1)\max_{1\leq j\leq n+1} \max_{x\in B_n}(-\mu_j(x))+1\geq (n+1)(-\mu_k(u))+1>n.$$ Thus, if $S$ is not a regular simplex inscribed into $B_n$, then $\xi(B_n;S)>n$. We see that each simplex $S\subset B_n$ satisfies the estimate $\xi(B_n;S)\geq n$.
The equality takes place if and only if $S$ is a regular simplex inscribed into $B_n$. $\Box$ By analogy with the value $\xi_n=\min\{\xi(Q_n;S): S\subset Q_n\}$ defined through the unit cube, let us introduce the similar numerical characteristic given by the unit ball: $$\beta_n:=\min \{ \xi(B_n;S): \, S \mbox{ is an $n$-dimensional simplex,} \, S\subset B_n, \, {\rm vol}(S)\ne 0\}.$$ Many problems concerning $\xi_n$ have not yet been solved. For example, $\xi_2 = 1 + \frac{3\sqrt{5}}{5}$ still remains the only exact value of $\xi_n $ for even $n$; moreover, this value was discovered in a rather difficult way (see [3; Chapter\,2]). Compared with $\xi_n$, the problem on the numbers $\beta_n $ turns out to be trivial. {\bf Corollary 3.} {\it For any $n$, we have $\beta_n=n$. The simplices $S\subset B_n$ extremal with respect to $\beta_n$ are precisely the regular simplices inscribed into $B_n$.} {\it Proof.} It is sufficient to apply Theorem 4. $\Box$ The technique developed for a ball makes it possible to illustrate some results obtained earlier for a cube. Here we note a proof of the following known statement which differs from the proofs given in [3; \S\,3.2] and [12]. {\bf Corollary 4.} {\it If there exists a Hadamard matrix of order $n+1$, then $\xi_n=n.$ } {\it Proof.} It is known (see, e.\,g., [9]) that for these and only these~$n$ one can inscribe into $Q_n$ a regular simplex $S$ so that all the vertices of $S$ coincide with vertices of the cube. Let us denote by $B$ the ball of radius $\frac{\sqrt{n}}{2}$ centered at the center of the cube. Clearly, $Q_n$ is inscribed into $B$; therefore, the simplex is inscribed into the ball as well. Since $S$ is regular, by Theorem 4 and by similarity reasons, we have $\xi(B;S)=n.$ The inclusion $Q_n\subset B$ means that $\xi(Q_n;S)\leq \xi(B;S),$ i.\,e.~$\xi(Q_n;S)\leq n$. From (\ref{ksi_alpha_n_ineq}) it follows that the reverse inequality $\xi(Q_n;S)\geq n$ is also true. Hence, $\xi(Q_n;S)=n$.
Simultaneously (\ref{ksi_alpha_n_ineq}) gives $\xi_n=\xi(Q_n;S)=n$. $\Box$ This argument is based on the following fact: if $S$ is a regular simplex with vertices in vertices of $Q_n$, then the simplex $nS$ absorbs not only the cube $Q_n$ but also the ball $B$ circumscribed around the cube. The corresponding absorption index $n$ is the minimum possible both for the cube and the ball. In addition, we mention the following property. {\bf Corollary 5.} {\it Assume that $S\subset Q_n\subset nS$ and the simplex $S$ is not regular. Then $B\not\subset nS$. } {\it Proof.} The inclusion $B\subset nS$ implies that $\xi(B;S)=n$. Then $S$ is a regular simplex inscribed into the ball $B$. But since this is not so, $B$ is not a subset of $nS$. $\Box$ Simplices satisfying the condition of Corollary 5 exist at least for $n=3, 5,$ and $9$ (see [12]). The relations (\ref{ksi_alpha_n_ineq}) mean that always $\xi_n\geq n$. Since $\xi_2=1+\frac{3\sqrt{5}}{5}>2$, there exist $n$'s such that $\xi_n>n$. Besides the cases when $n+1$ is a Hadamard number, the equality $\xi_n=n$ is established for $n=5$ and $n=9$ (the extremal simplices in ${\mathbb R}^5$ and ${\mathbb R}^9$ are given in [12]). For all such dimensions we have $\xi_n=\beta_n$, i.\,e., with respect to the minimum absorption index of an internal simplex, both convex bodies, the $n$-dimensional cube and the $n$-dimensional ball, behave in the same way. The equality $\xi_n=n$ is equivalent to the existence of simplices satisfying the inclusions $S\subset Q_n\subset nS$. Some properties of such simplices (e.\,g., the fact that the center of gravity of $S$ coincides with the center of the cube; see [7]) are similar to the properties of regular simplices inscribed into the ball. However, the problem of describing the set of all dimensions in which such simplices exist seems to be very difficult and is currently far from being solved. \centerline{\bf\Large References} \begin{itemize} \item[1.]
Nevskij,~M.\,V., On a certain relation for the minimal norm of an interpolational projection, {\it Model. Anal. Inform. Sist.}, 2009, vol.~16, no.~1, pp.~24--43 (in~Russian). \item[2.] Nevskii,~M.\,V., On a property of $n$-dimensional simplices, {\it Math. Notes}, 2010, vol.~87, no.~4, pp.~543--555. \item[3.] Nevskii,~M.\,V., {\it Geometricheskie ocenki v polinomialnoi interpolyacii} (Geometric Estimates in Polynomial Interpolation), Yaroslavl': Yarosl. Gos. Univ., 2012 (in~Russian). \item[4.] Nevskii,~M.\,V., On the minimal positive homothetic image of a simplex containing a convex body, {\it Math. Notes}, 2013, vol.~93, no.~3--4, pp.~470--478. \item[5.] Nevskii,~M.\,V., and Ukhalov, A.\,Yu., On numerical characteristics of a simplex and their estimates, {\it Model. Anal. Inform. Sist.}, 2016, vol.~23, no.~5, pp.~603--619 (in~Russian). English transl.: {\it Aut. Control Comp. Sci.}, 2017, vol.~51, no.~7, pp.~757--769. \item[6.] Nevskii,~M.\,V., and Ukhalov, A.\,Yu., New estimates of numerical values related to a simplex, {\it Model. Anal. Inform. Sist.}, 2017, vol.~24, no.~1, pp.~94--110 (in~Russian). English transl.: {\it Aut. Control Comp. Sci.}, 2017, vol.~51, no.~7, pp.~770--782. \item[7.] Nevskii,~M.\,V., and Ukhalov, A.\,Yu., On $n$-dimensional simplices satisfying inclusions $S\subset [0,1]^n\subset nS$, {\it Model. Anal. Inform. Sist.}, 2017, vol.~24, no.~5, pp.~578--595 (in~Russian). English transl.: {\it Aut. Control Comp. Sci.}, 2018, vol.~52, no.~7, pp.~667--679. \item[8.] Nevskii,~M.\,V., and Ukhalov, A.\,Yu., On minimal absorption index for an $n$-dimensional simplex, {\it Model. Anal. Inform. Sist.}, 2018, vol.~25, no.~1, pp.~140--150 (in~Russian). English transl.: {\it Aut. Control Comp. Sci.}, 2018, vol.~52, no.~7, pp.~680--687. \item[9.] Hudelson,~M., Klee,~V., and Larman,~D., Largest $j$-simplices in $d$-cubes: some relatives of the Hadamard maximum determinant problem, {\it Linear Algebra Appl.}, 1996, vol.~241--243, pp.~519--598. \item[10.]
Klamkin,~M.\,S., and Tsintsifas,~G.\,A., Circumradius--inradius inequality for a simplex, {\it Mathematics Magazine}, 1979, vol.~52, no.~1, pp.~20--22. \item[11.] Nevskii,~M., Properties of axial diameters of a simplex, {\it Discr. Comput. Geom.}, 2011, vol.~46, no.~2, pp.~301--312. \item[12.] Nevskii,~M., and Ukhalov,~A., Perfect simplices in ${\mathbb R}^5$, {\it Beitr. Algebra Geom.}, 2018, vol.~59, no.~3, pp.~501--521. \item[13.] Yang,~S., and Wang,~J., Improvements of $n$-dimensional Euler inequality, {\it Journal of~Geometry}, 1994, vol.~51, pp.~190--195. \item[14.] Vince,~A., A simplex contained in a sphere, {\it Journal of~Geometry}, 2008, vol.~89, no.~1--2, pp.~169--178. \end{itemize} \end{document}
\begin{document} \begin{abstract} We study convergence of 3D lattice sums via expanding spheres. It is well-known that, in contrast to summation via expanding cubes, the expanding spheres method may lead to formally divergent series (this will be so e.g. for the classical NaCl-Madelung constant). In the present paper we prove that these series remain convergent in the Cesaro sense. For the case of second order Cesaro summation, we present an elementary proof of convergence, and the proof for first order Cesaro summation is more involved and is based on the Riemann localization for multi-dimensional Fourier series. \end{abstract} \subjclass[2010]{11L03, 42B08, 35B10, 35R11} \keywords{lattice sums, Madelung constants, Cesaro summation, Fourier series, Riemann localization} \thanks{The first author has been partially supported by the LMS URB grant 1920-04. The second author is partially supported by the EPSRC grant EP/P024920/1} \maketitle \tableofcontents \section{Introduction}\label{s0} Lattice sums of the form \begin{equation}\label{0.lattice} \sum_{(n,k,m)\in\mathbb Z^3}\frac{e^{i(nx_1+kx_2+mx_3)}}{(a^2+n^2+k^2+m^2)^s} \end{equation} and their various extensions naturally appear in many branches of modern analysis, including analytic number theory (e.g. for studying the number of lattice points in spheres or balls), analysis of PDEs (e.g. for constructing Green functions for various differential operators in periodic domains, finding best constants in interpolation inequalities, etc.), and harmonic analysis, as well as in applications, e.g. for computing the electrostatic potential of a single ion in a crystal (the so-called Madelung constants), see \cite{Flap,MFS,BDZ,Bor13,mar,mar2000,Ram21,ZI} and references therein.
For instance, the classical Madelung constant for the NaCl crystal is given by \begin{equation}\label{0.M} M=\sideset{}{'}\sum_{(i,j,k)\in \mathbb Z^3}\frac{(-1)^{i+j+k}}{(i^2+j^2+k^2)^{1/2}}, \end{equation} where the index ${'}$ means that the sum does not contain the term which corresponds to $(i,j,k)=0$. \par The common feature of series \eqref{0.lattice} and \eqref{0.M} is that the decay rate of the terms is not strong enough to provide absolute convergence, so they are often only conditionally convergent and their convergence/divergence strongly depends on the method of summation. The typical methods of summation are summation by expanding cubes/rectangles or summation by expanding spheres, see sections \S\ref{s1} and \S\ref{s2} for definitions and \cite{Bor13} for more details. For instance, when summation by expanding spheres is used, the formula for the Madelung constant takes an especially elegant form \begin{equation}\label{2.Ms} M=\sum_{n=1}^\infty (-1)^n\frac{r_3(n)}{\sqrt{n}}, \end{equation} where $r_3(n)$ is the number of integer points on the sphere of radius $\sqrt{n}$. Exactly this formula is commonly used in the physics literature, although it has been known for more than 70 years that series \eqref{2.Ms} is {\it divergent}, see \cite{Emer}. Thus, one should either switch from expanding spheres to expanding cubes/rectangles for the summation of \eqref{0.M} (as suggested e.g. in \cite{Bor13}; for this method the convergence problem does not appear) or use more advanced methods for the summation of \eqref{2.Ms}, for instance Abel or Cesaro summation. Surprisingly, the possibility of justifying \eqref{2.Ms} in such a way has not been properly studied (although there are detailed results concerning Cesaro summation for different methods, e.g. for the so-called summation by diamonds, see \cite{Bor13}), and the main aim of the present notes is to cover this gap.
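The function $r_3(n)$, whose irregular behavior is responsible for the divergence of \eqref{2.Ms}, is easy to explore numerically. The following Python sketch (ours, for illustration only; the function names are not from the text) computes $r_3(n)$ by brute force and checks that the terms $r_3(n)/\sqrt{n}$ do not tend to zero:

```python
# Brute-force computation of r_3(n) = #{(i,j,k) in Z^3 : i^2+j^2+k^2 = n},
# and an empirical check that the terms r_3(n)/sqrt(n) of the sphere-wise
# Madelung series stay bounded away from zero.
import math

def r3_table(N):
    """Return the list [r_3(0), r_3(1), ..., r_3(N)]."""
    m = int(math.isqrt(N))
    counts = [0] * (N + 1)
    for i in range(-m, m + 1):
        for j in range(-m, m + 1):
            for k in range(-m, m + 1):
                n = i * i + j * j + k * k
                if n <= N:
                    counts[n] += 1
    return counts

r3 = r3_table(400)
# known small values: 6 points (+-1,0,0), 12 points (+-1,+-1,0), etc.,
# and r_3(7) = 0 since 7 is not a sum of three squares
assert (r3[1], r3[2], r3[3], r3[4], r3[7]) == (6, 12, 8, 6, 0)
# the terms do not tend to zero: already on [100, 200] some n gives
# r_3(n)/sqrt(n) well above 1
assert max(r3[n] / math.sqrt(n) for n in range(100, 201)) > 1.0
```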
\par Namely, we will study the following generalized Madelung constants: \begin{equation}\label{2.Mg} M_{a,s}=\sideset{}{'}\sum_{(i,j,k)\in \mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}=\sum_{n=1}^\infty(-1)^n\frac{r_3(n)}{(a^2+n)^s}, \end{equation} where $a\in\R$ and $s>0$, and the sum in the RHS is understood in the sense of Cesaro (Cesaro-Riesz) summation of order $\kappa$, see Definition \ref{Def2.Cesaro} below. Our presentation of the main result consists of two parts. \par First, we present a very elementary proof of convergence for second order Cesaro summation which is based only on counting the number of lattice points in spherical layers by volume comparison arguments. This gives the following result. \begin{theorem}\label{Th0.c2} Let $a\in\R$ and $s>0$. Then \begin{equation} M_{a,s}=\lim_{N\to\infty}\sum_{n=1}^N(-1)^n\left(1-\frac nN\right)^2 \frac{r_3(n)}{(a^2+n)^s}. \end{equation} In particular, the limit in the RHS exists. \end{theorem} Second, we establish the convergence for the first order Cesaro summation. \begin{theorem}\label{Th0.c1} Let $a\in\R$ and $s>0$. Then \begin{equation} M_{a,s}=\lim_{N\to\infty}\sum_{n=1}^N(-1)^n\left(1-\frac nN\right) \frac{r_3(n)}{(a^2+n)^s}. \end{equation} In particular, the limit in the RHS exists. \end{theorem} In contrast to Theorem \ref{Th0.c2}, the proof of this result is more involved and is based on an interesting connection between the convergence of lattice sums and Riemann localization for multiple Fourier series, see section \S\ref{s22} for more details. Note that Theorem \ref{Th0.c2} is a formal corollary of Theorem \ref{Th0.c1}, but we prefer to keep both of them, not only because the proof of Theorem \ref{Th0.c2} is essentially simpler, but also because it possesses extensions to other methods of summation, see the discussion in section \S\ref{s3}.
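The Cesaro sums appearing in the two theorems above are straightforward to evaluate numerically. The following Python sketch (our illustration, using a brute-force $r_3$; it is not used in any proof) computes the smoothed partial sums for the classical case $a=0$, $s=\frac12$ and also checks the exact finite-cut-off identity behind the second equality in \eqref{2.Mg}:

```python
# Cesaro (Cesaro-Riesz) partial sums C^kappa_N(a,s) of the sphere-wise
# Madelung series.  We also check the exact identity: the lattice sum over
# the ball i^2+j^2+k^2 <= N of (-1)^{i+j+k}/(a^2+i^2+j^2+k^2)^s equals
# sum_{n<=N} (-1)^n r_3(n)/(a^2+n)^s, since (-1)^{i+j+k} = (-1)^{i^2+j^2+k^2}.
import math

def r3_table(N):
    m = int(math.isqrt(N))
    counts = [0] * (N + 1)
    for i in range(-m, m + 1):
        for j in range(-m, m + 1):
            for k in range(-m, m + 1):
                n = i * i + j * j + k * k
                if n <= N:
                    counts[n] += 1
    return counts

def cesaro_sum(N, kappa, a=0.0, s=0.5):
    r3 = r3_table(N)
    return sum((-1) ** n * (1 - n / N) ** kappa * r3[n] / (a * a + n) ** s
               for n in range(1, N + 1))

# exact identity for the finite cut-off N = 50, a = 0, s = 1/2
N = 50
direct = sum((-1) ** (i + j + k) / math.sqrt(i * i + j * j + k * k)
             for i in range(-7, 8) for j in range(-7, 8) for k in range(-7, 8)
             if 0 < i * i + j * j + k * k <= N)
via_r3 = cesaro_sum(N, kappa=0)  # kappa = 0 is the plain partial sum
assert abs(direct - via_r3) < 1e-9

# second order Cesaro sums: expected to stabilize near the Madelung
# constant -1.7476... (a loose bracket only, since convergence is slow)
C2 = cesaro_sum(1000, kappa=2)
assert -3.0 < C2 < -0.5
```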
Also note that the above convergence results have mainly theoretical interest since much more effective formulas for Madelung constants are available for practical computations, see \cite{Bor13} and references therein. \par The paper is organized as follows. Some preliminary results concerning lattice sums and summation by rectangles are collected in \S\ref{s1}. The proofs of Theorems \ref{Th0.c2} and \ref{Th0.c1} are given in sections \S\ref{s21} and \S\ref{s22} respectively. Some discussion around the obtained results, their possible generalizations and numerical simulations are presented in section \S\ref{s3}. \section{Preliminaries}\label{s1} In this section, we recall standard results about lattice sums and prepare some technical tools which will be used in the sequel. We start with a simple lemma which is, however, crucial for what follows. \begin{lemma}\label{Lem1.block} Let the function $f:\R^3\to\R$ be three times continuously differentiable in the cube $Q_{2I,2J,2K}$, where $Q_{I,J,K}:=[I,I+1]\times[J,J+1]\times[K,K+1]$. Then \begin{multline}\label{1.E} \min_{x\in Q_{2I,2J,2K}}\{-\partial_{x_1}\partial_{x_2}\partial_{x_3}f(x)\}\le\\\le E_{I,J,K}(f):=\sum_{i=2I}^{2I+1}\sum_{j=2J}^{2J+1} \sum_{k=2K}^{2K+1}(-1)^{i+j+k}f(i,j,k)\le\\\le \max_{x\in Q_{2I,2J,2K}}\{-\partial_{x_1}\partial_{x_2}\partial_{x_3}f(x)\}. \end{multline} \end{lemma} \begin{proof} Indeed, it is not difficult to check using the Newton--Leibniz formula that $$ E_{I,J,K}(f)=-\int_0^1\int_0^1\int_0^1 \partial_{x_1}\partial_{x_2}\partial_{x_3}f(2I+s_1,2J+s_2,2K+s_3)\,ds_1\,ds_2\,ds_3 $$ and this formula gives the desired result. \end{proof} \noindent A typical example of the function $f$ is the following one \begin{equation}\label{1.pol} f_{a,s}(x)=(a^2+|x|^2)^s,\ \ |x|^2=x_1^2+x_2^2+x_3^2.
\end{equation} In this case, $$ \partial_{x_1}\partial_{x_2}\partial_{x_3}f= 8s(s-1)(s-2)x_1x_2x_3(a^2+|x|^2)^{s-3} $$ and, therefore, \begin{equation}\label{1.bet} |E_{I,J,K}(f)|\le C(a^2+I^2+J^2+K^2)^{s-\frac32}. \end{equation} One more important property of the function \eqref{1.pol} is that the term $E_{I,J,K}$ is sign-definite in the octant $I,J,K\ge0$. \par At the next step, we state a straightforward extension of the integral comparison principle to the case of multi-dimensional series. We recall that, in the one-dimensional case, for a positive monotone decreasing function $f:[A,B]\to\R$, $A,B\in\mathbb Z$, $B>A$, we have $$ f(B)+\int_A^{B}f(x)\,dx\le \sum_{n=A}^Bf(n)\le f(A)+\int_{A}^{B}f(x)\,dx $$ which, in turn, is an immediate corollary of the estimate $$ f(n+1)\le\int_n^{n+1}f(x)\,dx\le f(n). $$ \begin{lemma}\label{Lem1.int} Let the continuous function $f:\R^3\setminus\{0\}\to\R_+$ be such that \begin{equation}\label{1.good} C_2\max_{x\in Q_{i,j,k}} f(x)\le \min_{x\in Q_{i,j,k}}f(x)\le C_1\max_{x\in Q_{i,j,k}} f(x), \end{equation} $(i,j,k)\in\mathbb Z^3$, where the constants $C_1$ and $C_2$ are positive and independent of $Q_{i,j,k}\not\owns 0$. Let also $\Omega\subset\R^3$ be a domain which does not contain $0$ and \begin{equation}\label{1.lat} \Omega_{lat}:=\{(i,j,k)\in\mathbb Z^3:\,\exists Q_{I,J,K}\subset\Omega,\ \ (i,j,k)\in Q_{I,J,K},\ 0\notin Q_{I,J,K}\}. \end{equation} Then, \begin{equation}\label{1.comp} \sum_{(i,j,k)\in\Omega_{lat}}f(i,j,k)\le C\int_\Omega f(x)\,dx, \end{equation} where the constant $C$ is independent of $\Omega$ and $f$. If assumption \eqref{1.good} is satisfied for all $(I,J,K)$, the conditions $0\notin\Omega$ and $0\notin Q_{I,J,K}$ can be removed.
\begin{comment} Then, for every $N,M,K\in\mathbb N$, we have \begin{multline} \sum_{(i,j,k)\in\Pi_{N,M,K}' }f(i,j,k)\le \int_{x\in\Pi_{M,N,K}'}f(x)\,dx+f(1,1,1)+\\+\int_{(x_1,x_2)\in\Pi_{N,M}'}f(x_1,x_2,0)\,dx_1\,dx_2+ \int_{(x_2,x_3)\in\Pi_{M,K}'}f(0,x_2,x_3)\,dx_2\,dx_3+\\+ \int_{(x_1,x_3)\in\Pi_{M,K}'}f(x_1,0,x_3)\,dx_1\,dx_3 +f(1,1,0)+f(0,1,1)+f(1,0,1)+\\+f(1,0,0)+f(0,0,1)+f(0,1,0)+\\+\int_1^Nf(x_1,0,0)\,dx_1+ \int_1^Mf(0,x_2,0)\,dx_2+\int_1^Kf(0,0,x_3)\,dx_3 \end{multline} \end{comment} \end{lemma} \begin{proof} Indeed, assumption \eqref{1.good} guarantees that \begin{equation}\label{1.mult} C_2\int_{Q_{I,J,K}}f(x)\,dx\le f(i,j,k)\le C_1\int_{Q_{I,J,K}}f(x)\,dx \end{equation} for all $Q_{I,J,K}$ which do not contain zero and all $(i,j,k)\in Q_{I,J,K}\cap\mathbb Z^3$. Since any point $(i,j,k)\in\mathbb Z^3$ can belong to no more than $8$ different cubes $Q_{I,J,K}$, \eqref{1.mult} implies \eqref{1.comp} (with the constant $C=8C_1$) and finishes the proof of the lemma. \end{proof} We will mainly use this lemma for the functions $f_{a,s}(x)$ defined by \eqref{1.pol}. It is not difficult to see that these functions satisfy assumption \eqref{1.good}. For instance, this follows from the obvious estimate $$ |\nabla f_{a,s}(x)|\le \frac{C_{s}}{\sqrt{a^2+|x|^2}}f_{a,s}(x) $$ and the mean value theorem. Moreover, if $a\ne0$, condition \eqref{1.good} holds for $Q_{i,j,k}\owns 0$ as well. As a corollary, we get the following estimate for summation ``by spheres'': \begin{multline}\label{1.as} \sideset{}{'}\sum_{(i,j,k)\in B_n\cap\mathbb Z^3} f_{a,s}(i,j,k)\le C_s\int_{x\in B_n\setminus B_1}(a^2+|x|^2)^{s}\,dx\le\\\le 4\pi C_s\int_1^{\sqrt n} R^2(a^2+R^2)^s\,dR\le 4\pi C_s\int_1^{\sqrt n} R(a^2+R^2)^{s+1/2}\,dR=\\=\frac {4\pi C_s}{2s+3}\left((a^2+n)^{s+3/2}-(a^2+1)^{s+3/2}\right), \end{multline} where $B_n:=\{x\in\R^3\,:\,|x|^2\le n\}$ and $\sum'$ means that $(i,j,k)=0$ is ex\-clu\-ded.
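The block lemma above and the integral representation of $E_{I,J,K}(f)$ from its proof are easy to check numerically. The following Python sketch (our illustration only; parameter values are arbitrary) compares the $8$-term alternating sum with a midpoint-rule approximation of the triple integral of $-\partial_{x_1}\partial_{x_2}\partial_{x_3}f$ for $f=f_{a,-s}$, using the closed-form third derivative:

```python
# Check of the identity
#   E_{I,J,K}(f) = -int_{[0,1]^3} d^3 f(2I+s1, 2J+s2, 2K+s3) ds1 ds2 ds3
# for f = f_{a,-s}(x) = (a^2+|x|^2)^{-s}, with
#   d^3 f = -8 s (s+1) (s+2) x1 x2 x3 (a^2+|x|^2)^{-s-3}.
import math

A, S = 1.0, 0.5          # parameters a and s (illustrative choice)
I, J, K = 1, 2, 3        # block indices: the cube is [2I,2I+1]x[2J,2J+1]x[2K,2K+1]

def f(x, y, z):
    return (A * A + x * x + y * y + z * z) ** (-S)

def d3f(x, y, z):
    u = A * A + x * x + y * y + z * z
    return -8 * S * (S + 1) * (S + 2) * x * y * z * u ** (-S - 3)

# 8-term alternating block sum E_{I,J,K}(f)
E = sum((-1) ** (i + j + k) * f(i, j, k)
        for i in (2 * I, 2 * I + 1)
        for j in (2 * J, 2 * J + 1)
        for k in (2 * K, 2 * K + 1))

# midpoint-rule approximation of the triple integral of d3f over the block
m = 40
h = 1.0 / m
integral = sum(d3f(2 * I + (p + 0.5) * h,
                   2 * J + (q + 0.5) * h,
                   2 * K + (r + 0.5) * h)
               for p in range(m) for q in range(m) for r in range(m)) * h ** 3

assert abs(E + integral) < 1e-6
```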
Of course, in the case $s=-\frac32$, the RHS of \eqref{1.as} reads as $2\pi C_s\ln\frac{a^2+n}{a^2+1}$. In particular, if $s>\frac32$, passing to the limit $n\to\infty$ in \eqref{1.as}, we see that \begin{equation}\label{1.simple} \sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}\frac1{(a^2+i^2+j^2+k^2)^s}= \sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}f_{a,-s}(i,j,k)\le \frac {C_s}{(a^2+1)^{s-\frac32}}. \end{equation} Thus, the series in the LHS is absolutely convergent if $s>\frac32$ and its sum tends to zero as $a\to\infty$. It is also well-known that the condition $s>\frac32$ is sharp and the series is divergent if $s\le \frac32$. \par We also mention that Lemmas \ref{Lem1.block} and \ref{Lem1.int} are stated for the 3-dimensional case just for simplicity. Obviously, their analogues hold in any dimension. We will use this observation later. \par We now turn to the alternating version of lattice sums \eqref{1.simple} \begin{equation}\label{1.main} M_{a,s}:=\sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s} \end{equation} which is the main object of study in these notes. We recall that, due to \eqref{1.simple}, this series is absolutely convergent for $s>\frac32$, so the sum is independent of the method of summation. In contrast to this, in the case $0<s\le\frac32$, the convergence is not absolute and depends strongly on the method of summation, see \cite{Bor13} and references therein for more details. Note also that $M_{a,s}$ is analytic in $s$ and, similarly to the classical Riemann zeta function, can be extended to a holomorphic function on $\mathbb C$ with a pole at $s=0$, but this is beyond the scope of our paper, see e.g. \cite{Bor13} for more details. Thus, we are assuming from now on that $0<s\le\frac32$. We start with the most studied case of summation by expanding rectangles/parallelograms.
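The dichotomy at $s=\frac32$ is easy to observe numerically. In the following Python sketch (our illustration only), for $s=2>\frac32$ the partial sums of the positive series \eqref{1.simple} over expanding balls stabilize (and decrease in $a$, in agreement with the bound $C_s/(a^2+1)^{s-3/2}$), while for $s=1\le\frac32$ they grow like $N^{1/2}$ in the cut-off $i^2+j^2+k^2\le N$:

```python
# Partial sums of sum' 1/(a^2+i^2+j^2+k^2)^s over the balls i^2+j^2+k^2 <= N:
# bounded for s = 2 > 3/2 (absolute convergence), growing ~ sqrt(N) for s = 1.
import math

def ball_sum(N, a, s):
    m = int(math.isqrt(N))
    total = 0.0
    for i in range(-m, m + 1):
        for j in range(-m, m + 1):
            for k in range(-m, m + 1):
                n = i * i + j * j + k * k
                if 0 < n <= N:
                    total += (a * a + n) ** (-s)
    return total

# s = 2: the sum decreases in a (termwise monotonicity)
assert ball_sum(400, 2.0, 2.0) < ball_sum(400, 1.0, 2.0) < ball_sum(400, 0.0, 2.0)
# s = 2: increments of the partial sums decay (tail ~ N^{-1/2})
assert (ball_sum(1600, 0.0, 2.0) - ball_sum(400, 0.0, 2.0)
        < ball_sum(400, 0.0, 2.0) - ball_sum(100, 0.0, 2.0))
# s = 1: the partial sums roughly double when N is multiplied by 4
assert ball_sum(1600, 0.0, 1.0) > 1.4 * ball_sum(400, 0.0, 1.0)
```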
\begin{definition} Let $\Pi_{I,J,K}:=[-I,I]\times[-J,J]\times[-K,K]$, $I,J,K\in\mathbb N$, and $$ S_{\Pi_{I,J,K}}(a,s):=\sideset{}{'}\sum_{(i,j,k)\in\Pi_{I,J,K}\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}. $$ We say that \eqref{1.main} is summable by expanding rectangles if the following triple limit exists and is finite $$ M_{a,s}=\lim_{(I,J,K)\to\infty} S_{\Pi_{I,J,K}}(a,s). $$ \end{definition} To study the sum \eqref{1.main}, we combine the terms belonging to the cubes $Q_{2i,2j,2k}$ and introduce the partial sums \begin{equation} E_{\Pi_{I,J,K}}(a,s):=\sideset{}{'}\sum_{(2i,2j,2k)\in\Pi_{I,J,K}\cap2\mathbb Z^3}E_{i,j,k}(a,s), \end{equation} where $E_{i,j,k}(a,s):=E_{i,j,k}(f_{a,-s})$ is defined in \eqref{1.E}. \begin{theorem} Let $0<s\le \frac32$. Then, \begin{equation}\label{1.equiv} \bigg|S_{\Pi_{I,J,K}}(a,s)-E_{\Pi_{I,J,K}}(a,s)\bigg|\le \frac {C_s}{\left(a^2+\min\{I^2,J^2,K^2\}\right)^{s}}, \end{equation} where the constant $C_s$ is independent of $a$ and $I,J,K$. \end{theorem} \begin{proof} We first mention that, according to Lemma \ref{Lem1.block} and estimate \eqref{1.simple}, \begin{equation}\label{1.e-conv} |E_{\Pi_{I,J,K}}(a,s)|\le \frac{C_s}{(a^2+1)^s} \end{equation} uniformly with respect to $(I,J,K)$. \par The difference between $S_{\Pi_{I,J,K}}$ and $E_{\Pi_{I,J,K}}$ consists of the alternating sum of $f_{a,-s}(i,j,k)$ where $(i,j,k)$ belong to the boundary of $\Pi_{I,J,K}$.
Let us write an explicit formula for the case when all $I,J,K$ are even (other cases are considered analogously): \begin{multline}\label{1.huge} S_{\Pi_{2I,2J,2K}}(a,s)-E_{\Pi_{2I,2J,2K}}(a,s)=\!\!\!\sideset{}{'}\sum_{\substack{-2J\le j\le 2J\\-2K\le k\le 2K}}(-1)^{j+k}f_{a,-s}(2I,j,k)+\\+\sideset{}{'}\sum_{\substack{-2I\le i\le 2I\\-2K\le k\le 2K}}(-1)^{i+k}f_{a,-s}(i,2J,k)+\sideset{}{'}\sum_{\substack{-2I\le i\le 2I\\-2J\le j\le 2J}}(-1)^{i+j}f_{a,-s}(i,j,2K)-\\- \sideset{}{'}\sum_{-2I\le i\le 2I}(-1)^{i}f_{a,-s}(i,2J,2K)-\sideset{}{'}\sum_{-2J\le j\le 2J}(-1)^{j}f_{a,-s}(2I,j,2K)-\\- \sideset{}{'}\sum_{-2K\le k\le 2K}(-1)^{k}f_{a,-s}(2I,2J,k)+f_{a,-s}(2I,2J,2K). \end{multline} In the RHS of this formula we see the analogues of lattice sum \eqref{1.main} in the lower dimensions one and two and, thus, it allows us to reduce the dimension. Indeed, assume that the analogues of estimate \eqref{1.equiv} are already established in dimensions one and two. Then, using the lower dimensional analogue of \eqref{1.e-conv} together with the fact that $$ f_{a,-s}(2I,j,k)=f_{\sqrt{a^2+4I^2},-s}(j,k), $$ where we have the 2D analogue of the function $f_{a,-s}$ in the RHS, we arrive at \begin{multline}\label{1.huge1} \bigg|S_{\Pi_{2I,2J,2K}}(a,s)-E_{\Pi_{2I,2J,2K}}(a,s)\bigg|\le\\\le \frac{C_s}{(a^2+4I^2+1)^s}+\frac{C_s}{(a^2+\min\{J^2,K^2\})^s}+ \frac{C_s}{(a^2+4J^2+1)^s}+\\+\frac{C_s}{(a^2+\min\{I^2,K^2\})^s}+ \frac{C_s}{(a^2+4K^2+1)^s}+\frac{C_s}{(a^2+\min\{I^2,J^2\})^s}+\\+ \frac{C_s}{(a^2+4I^2+4K^2)^s}+\frac{C_s}{(a^2+4I^2+4J^2)^s}+ \frac{C_s}{(a^2+4J^2+4K^2)^s}+\\+ \frac{C_s}{(a^2+4I^2+4J^2+4K^2)^s}\le \frac{C_s'}{(a^2+\min\{I^2,J^2,K^2\})^s}. \end{multline} Since in the 1D case the desired estimate is obvious, we complete the proof of the theorem by induction. \end{proof} \begin{corollary}\label{Cor1.main} Let $s>0$.
Then the series \eqref{1.main} is convergent by expanding rectangles and \begin{equation}\label{1.rep} M_{a,s}=\sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}E_{i,j,k}(a,s). \end{equation} In particular, the series in the RHS of \eqref{1.rep} is absolutely convergent, so the method of summation for it is not important. \end{corollary} Indeed, this fact is an immediate corollary of estimates \eqref{1.equiv}, \eqref{1.bet} and \eqref{1.simple}. \section{Summation by expanding spheres}\label{s2} We now turn to summation by expanding spheres. In other words, we want to write formula \eqref{1.main} in the form \begin{equation}\label{2.sphere} M_{a,s}=\lim_{N\to\infty}\sideset{}{'}\sum_{i^2+j^2+k^2\le N}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}. \end{equation} Moreover, since $(i+j+k)^2=i^2+j^2+k^2+2(ij+jk+ik)$, we have $(-1)^{i+j+k}=(-1)^{i^2+j^2+k^2}$, so formula \eqref{2.sphere} can be rewritten in the following elegant form \begin{equation}\label{2.sp} M_{a,s}=\sum_{n=1}^\infty (-1)^n\frac{r_3(n)}{(a^2+n)^s}, \end{equation} where $r_3(n)$ is the number of integer points on the sphere of radius $\sqrt{n}$ centered at zero, see e.g. \cite{Ram21} and references therein for more details about this function. However, the convergence of series \eqref{2.sp} is more delicate. In particular, it is well-known that this series is divergent for $s\le\frac12$, see \cite{Emer,Bor13}. For the convenience of the reader, we give the proof of this fact below. \begin{lemma}\label{Lem2.div} Let $c>0$ be small enough. Then, there are infinitely many values of $n\in\mathbb N$ such that \begin{equation}\label{2.bad} r_3(n)\ge c\sqrt{n} \end{equation} and, in particular, series \eqref{2.sp} is divergent for all $s\le\frac12$.
\end{lemma} \begin{proof} Indeed, by comparison of volumes, we see that the number $M_N$ of integer points in the spherical layer $N\le i^2+j^2+k^2\le 2N$ can be estimated from below as $$ M_N=\sum_{n=N}^{2N}r_3(n)\ge \frac43\pi\left((\sqrt{2N}-\sqrt3)^{3}-(\sqrt{N}+\sqrt3)^{3}\right)\ge cN^{3/2} $$ for sufficiently small $c>0$. Thus, for every sufficiently large $N\in\mathbb N$, there exists $n\in[N,2N]$ such that $r_3(n)\ge c\sqrt{n}$ and estimate \eqref{2.bad} is verified. The divergence of \eqref{2.sp} for $s\le \frac12$ is an immediate corollary of this estimate since the $n$th term $(-1)^n\frac{r_3(n)}{(a^2+n)^s}$ does not tend to zero under this condition, and the lemma is proved. \end{proof} \begin{remark} The condition that $c>0$ is small can be removed using more sophisticated methods. Moreover, it is known that the inequality $$ r_3(n)\ge c\sqrt{n}\ln\ln n $$ holds for infinitely many values of $n\in\mathbb N$ (for properly chosen $c>0$). On the other hand, for every $\varepsilon>0$, there exists $C_\varepsilon>0$ such that $$ r_3(n)\le C_\varepsilon n^{\frac12+\varepsilon}, $$ see \cite{Ram21} and references therein. Thus, we cannot establish divergence of \eqref{2.sp} via the $n$th term test if $s>\frac12$. Since this series is alternating, one may expect convergence for $s>\frac12$. However, the behavior of $r_3(n)$ as $n\to\infty$ is very irregular and, to the best of our knowledge, this convergence is still an open problem for $\frac12<s\le\frac{25}{34}$, see \cite{Bor13} for the convergence in the case $s>\frac{25}{34}$ and related results. \end{remark} Thus, one should use weaker concepts of convergence in order to justify equality \eqref{2.sp}. The main aim of these notes is to establish the convergence in the sense of Cesaro. \begin{definition}\label{Def2.Cesaro} Let $\kappa>0$.
We say the series \eqref{2.sp} is $\kappa$-Cesaro (Cesaro--Riesz) summable if the sequence
$$
C^\kappa_N(a,s):=\sum_{n=1}^N\left(1-\frac nN\right)^\kappa(-1)^n\frac{r_3(n)}{(a^2+n)^s}
$$
is convergent. Then we write
$$
(C,\kappa)-\sum_{n=1}^\infty (-1)^n\frac{r_3(n)}{(a^2+n)^s}:=\lim_{N\to\infty}C_N^\kappa(a,s).
$$
Obviously, $\kappa=0$ corresponds to the usual summation, and if a series is $\kappa$-Cesaro summable, then it is also $\kappa_1$-Cesaro summable for any $\kappa_1>\kappa$, see e.g.~\cite{Ha}.
\end{definition}
\subsection{Second order Cesaro summation}\label{s21}
The aim of this subsection is to present a very elementary proof of the fact that the series \eqref{2.sp} is second order Cesaro summable. Namely, the following theorem holds.
\begin{theorem}\label{Th2.2c}
Let $s>0$. Then the series \eqref{2.sp} is second order Cesaro summable and
\begin{equation}\label{2.2good}
M_{a,s}=(C,2)-\sum_{n=1}^\infty (-1)^n\frac{r_3(n)}{(a^2+n)^s},
\end{equation}
where $M_{a,s}$ is the same as in \eqref{1.main} and \eqref{1.rep}.
\end{theorem}
\begin{proof}
For every $N\in\mathbb N$, let us introduce the sets
\begin{equation*}
D_N:=\bigcup\limits_{\substack{(I,J,K)\in2\mathbb Z^3\\Q_{I,J,K}\subset B_N}}Q_{I,J,K},\ \ \ D_N':=B_N\setminus D_N
\end{equation*}
and split the sum $C^2_N(a,s)$ as follows:
\begin{multline}
C^2_N(a,s)=\sideset{}{'}\sum_{(i,j,k)\in B_N\cap\mathbb Z^3}\left(1-\frac{i^2+j^2+k^2}{N}\right)^2\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}=\\=\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\left(1-\frac{i^2+j^2+k^2}{N}\right)^2\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}+\\+ \!\!\!\sideset{}{'}\sum_{(i,j,k)\in D_N'\cap\mathbb Z^3}\!\!\!\left(1\!-\!\frac{i^2+j^2+k^2}{N}\right)^2\!\!\!\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}\!:= A_N(a,s)+R_N(a,s).
\end{multline}
Let us start with estimating the sum $R_N(a,s)$.
To this end, we use the elementary fact that
$$
\sqrt{N}-\sqrt3\le \sqrt{i^2+j^2+k^2}\le \sqrt{N}
$$
for all $(i,j,k)\in D_N'$ ($\sqrt{3}$ is the length of the diagonal of the cube $Q_{I,J,K}$). Therefore,
\begin{equation}\label{2.R}
|R_N(a,s)|\le \left(1-\frac{(\sqrt{N}-\sqrt3)^2}{N}\right)^2\frac{\#\left(D'_N\cap\mathbb Z^3\right)}{\left(a^2+(\sqrt {N}-\sqrt3)^2\right)^s}.
\end{equation}
Using again the fact that all integer points of $D'_N$ belong to the spherical layer $\sqrt{N}-\sqrt{3}\le |x|\le\sqrt{N}$, together with volume comparison arguments, we conclude that
$$
\#\left(D'_N\cap\mathbb Z^3\right)\le \frac43\pi\left((\sqrt{N}+\sqrt3)^{3}-(\sqrt{N}-2\sqrt3)^{3}\right)\le c_0 N
$$
for some positive $c_0$. Therefore,
\begin{equation}
|R_N(a,s)|\le \frac{C}{N}\frac{c_0N}{\left(a^2+(\sqrt {N}-\sqrt3)^2\right)^s}=\frac{C}{\left(a^2+(\sqrt {N}-\sqrt3)^2\right)^s}\to0
\end{equation}
as $N\to\infty$. Thus, the term $R_N$ is not essential and we only need to estimate the sum $A_N$. To this end, we rewrite it as follows:
\begin{multline}\label{2.huge2}
A_N(a,s)=\left(1+\frac{a^2}N\right)^2\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}-\\-\frac2N\left(1+\frac {a^2}N\right)\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^{s-1}}+\\+ \frac1{N^2}\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^{s-2}}=\\= \left(1+\frac{a^2}N\right)^2\sideset{}{'}\sum_{(i,j,k)\in \frac12D_{N}\cap\mathbb Z^3}E_{i,j,k}(a,s)-\\-\frac2N\left(1+\frac {a^2}N\right)\sideset{}{'}\sum_{(i,j,k)\in \frac12D_{N}\cap\mathbb Z^3}E_{i,j,k}(a,s-1)+\\+ \frac1{N^2}\sideset{}{'}\sum_{(i,j,k)\in \frac12D_{N}\cap\mathbb Z^3}E_{i,j,k}(a,s-2).
\end{multline}
From Corollary \ref{Cor1.main}, we know that the first sum in the RHS of \eqref{2.huge2} converges to $M_{a,s}$ as $N\to\infty$.
Using estimates \eqref{1.bet} and \eqref{1.as}, we also conclude that
\begin{equation}
\bigg|\sideset{}{'}\sum_{(i,j,k)\in \frac12D_N\cap\mathbb Z^3}E_{i,j,k}(a,s-1)\bigg|\le CN^{1-s}
\end{equation}
and
\begin{equation}
\bigg|\sideset{}{'}\sum_{(i,j,k)\in \frac12D_N\cap\mathbb Z^3}E_{i,j,k}(a,s-2)\bigg|\le CN^{2-s}.
\end{equation}
Thus, the two other terms in the RHS of \eqref{2.huge2} tend to zero as $N\to\infty$, and the theorem is proved.
\end{proof}
\subsection{First order Cesaro summation}\label{s22}
We may try to treat this case analogously to the proof of Theorem \ref{Th2.2c}. However, in this case, we will have the multiplier $(1-\frac{(\sqrt{N}-\sqrt3)^2}{N})$ without the extra square, and this leads to the extra technical assumption $s>\frac12$. In particular, this method does not allow us to establish the convergence for the case of the classical NaCl-Madelung constant ($a=0$, $s=\frac12$). In this subsection, we present an alternative method, based on the Riemann localization principle for multiple Fourier series, which allows us to remove the technical condition $s>\frac12$.
The key idea of our method is to introduce the function
\begin{equation}\label{2.F}
M_{a,s}(x):=\sideset{}{'}\sum_{(n,k,l)\in\mathbb Z^3}\frac{e^{i(nx_1+kx_2+lx_3)}}{(a^2+n^2+k^2+l^2)^s}.
\end{equation}
The series is clearly convergent, say, in $\mathcal D'(\mathbb T^3)$ and defines (up to a constant) a fundamental solution for the fractional Laplacian $(a^2-\Delta)^s$ on the torus $\mathbb T^3$, acting on functions with zero mean. Then, at least formally,
$$
M_{a,s}=M_{a,s}(\pi,\pi,\pi),
$$
and the justification of this identity is related to the convergence problem for multi-dimensional Fourier series.
\par
Let $G_{a,s}(x)$ be the fundamental solution for $(a^2-\Delta)^s$ in the whole space $\R^3$, i.e.
$$
G_{a,s}(x)=-\frac{1}{2^{\frac12+s}\pi^{\frac32}\Gamma(s)}\frac1{|x|^{3-2s}}\Psi(a|x|),\ \ \Psi(z):=z^{\frac32-s}K_{\frac32-s}(z),
$$
where $K_\nu(z)$ is the modified Bessel function of the second kind and $\Gamma(s)$ is the Euler gamma function, see e.g. \cite{SL,Watson}. In particular, passing to the limit $a\to0$ and using that $\Psi(0)=2^{\frac12-s}\Gamma(\frac32-s)$, we get the fundamental solution for the case $a=0$:
$$
G_{0,s}(x)=-\frac{\Gamma(\frac32-s)}{2^{2s}\pi^{\frac32}\Gamma(s)}\,\frac1{|x|^{3-2s}}.
$$
Then, as is known, the periodization of this function is the fundamental solution on the torus:
\begin{equation}\label{2.Poisson}
M_{a,s}(x)=C_0+\frac1{(2\pi)^3}\sum_{(n,k,l)\in\mathbb Z^3}G_{a,s}\left(x-2\pi(n,k,l)\right),
\end{equation}
where the constant $C_0$ is chosen in such a way that $M_{a,s}(x)$ has zero mean on the torus, see \cite{Flap,Trans} and references therein. Recall that, for $a>0$, the function $G_{a,s}(x)$ decays exponentially as $|x|\to\infty$, so the convergence of \eqref{2.Poisson} is immediate (and identity \eqref{2.Poisson} is nothing more than the Poisson Summation Formula applied to \eqref{2.F}). However, when $a=0$, the convergence of \eqref{2.Poisson} is more delicate since $G_{0,s}(x)\sim |x|^{2s-3}$ and this decay rate is not strong enough to give absolute convergence. Thus, some regularization should be done, and the method of summation also becomes important, see \cite{Bor13,CR,mar2000} and references therein. Recall also that we need to consider the case $s\le\frac12$ only (since for $s>\frac12$ we have convergence of the first order Cesaro sums by elementary methods).
\begin{lemma}\label{Lem2.Green}
Let $0<s<1$.
Then
\begin{multline}\label{2.Poisson0}
M_{0,s}(x)=C_0'+\frac1{(2\pi)^3} G_{0,s}(x)+\\+\frac1{(2\pi)^3}\sideset{}{'}\sum_{(n,k,l)\in\mathbb Z^3}\bigg(G_{0,s}\left(x-2\pi(n,k,l)\right)-G_{0,s}(2\pi(n,k,l))\bigg),
\end{multline}
where the convergence is understood in the sense of convergence by expanding rectangles and $C'_0$ is chosen in such a way that the mean value of the expression in the RHS is zero.
\end{lemma}
\begin{proof}[Sketch of the proof]
Although this result seems well known, we sketch below the proof of the convergence of the RHS (the equality with the LHS can be established after that in a standard way, e.g. by passing to the limit $a\to0$ in \eqref{2.Poisson}).
\par
To estimate the terms in the RHS, we use the following version of the mean value theorem for second differences:
\begin{multline}
f(p+x)+f(p-x)-2f(p)=[f(p+x)-f(p)]-[f(p)-f(p-x)]\\=x\int_0^1(f'(p+\kappa x)-f'(p-\kappa x))\,d\kappa=\\= 2x^2\int_0^1\int_0^1\kappa f''(p+\kappa(1-2\kappa_1)x)\,d\kappa_1\,d\kappa.
\end{multline}
Applying this formula to the function $G_{0,s}(x)$, we get
\begin{multline*}
\bigg|\sum_{\varepsilon_i=\pm1,\, i=1,2,3 }\bigg(G_{0,s}(2\pi n+\varepsilon_1x_1,2\pi k+\varepsilon_2 x_2,2\pi l+\varepsilon_3x_3)-G_{0,s}(2\pi(n,k,l))\bigg)\bigg|\\\le C\sum_{i=1}^3\|\partial^2_{x_i}G_{0,s}\|_{C(2\pi(n,k,l)+\mathbb T^3)}\le \frac{C_1}{(n^2+k^2+l^2)^{\frac52-s}}.
\end{multline*}
Thus, we see that, if we combine together in the RHS of \eqref{2.Poisson0} the terms corresponding to the 8 nodes $(\pm n,\pm k,\pm l)$ (for every fixed $(n,k,l)$), the obtained series becomes absolutely convergent (here we use the assumption $s<1$).
\par
It remains to note that the parallelepipeds $\Pi_{N,M,K}$ enjoy the following property: $(n,m,k)\in\Pi_{N,M,K}$ implies that all 8 points $(\pm n,\pm m,\pm k)$ belong to $\Pi_{N,M,K}$. This implies the convergence by expanding rectangles and finishes the proof of the lemma.
\end{proof}
\begin{corollary}\label{Cor2.Grsm}
Let either $0<s<\frac32$ and $a>0$, or $a=0$ and $0<s<1$. Then the function $M_{a,s}(x)$ is $C^\infty(\mathbb T^3\setminus\{0\})$ and $M_{a,s}(x)\sim \frac C{|x|^{3-2s}}$ near zero. In particular, $M_{a,s}\in L^{1+\varepsilon}(\mathbb T^3)$ for some positive $\varepsilon=\varepsilon(s)$.
\end{corollary}
\begin{proof}
Indeed, the infinite differentiability follows from \eqref{2.Poisson} and \eqref{2.Poisson0}, since differentiation of $G_{a,s}(x)$ in $x$ can only improve the rate of convergence. In addition, $M_{a,s}(x)-\frac1{(2\pi)^3}G_{a,s}(x)$ is smooth on the whole $\mathbb T^3$, so $M_{a,s}$ belongs to the same Lebesgue space $L^p$ as the function $|x|^{2s-3}$.
\end{proof}
\begin{remark}
The technical assumption $s<1$ can be removed using the fact that $(-\Delta)^{s_1}(-\Delta)^{s_2}=(-\Delta)^{s_1+s_2}$ and, therefore,
$$
G_{a,s_1+s_2}=G_{a,s_1}*G_{a,s_2},
$$
together with the elementary properties of convolutions. Note that the result of Corollary \ref{Cor2.Grsm} can be obtained in a straightforward way using the standard PDE technique, but we prefer to use the explicit formulas \eqref{2.Poisson} and \eqref{2.Poisson0}, which look a bit more transparent. In addition, using the Poisson Summation Formula in a more sophisticated way (e.g. in the spirit of \cite{mar}, see also references therein), one can obtain much better (exponentially convergent) series for $M_{0,s}(x)$.
\end{remark}
We are now ready to state and prove the main result of this section.
\begin{theorem}\label{Th2.1c}
Let $s>0$. Then
\begin{equation}\label{2.1cesaro}
M_{a,s}=M_{a,s}(\pi,\pi,\pi)=\lim_{N\to\infty}\sum_{n=1}^N\left(1-\frac nN\right)\frac{(-1)^nr_3(n)}{(a^2+n)^s}
\end{equation}
and, therefore, \eqref{2.sphere} is first order Cesaro summable by expanding spheres.
\end{theorem}
\begin{proof}
As already mentioned above, it is sufficient to consider the case $0<s<1$ only. We also recall that \eqref{2.F} is nothing more than the formal Fourier expansion of the function $M_{a,s}(x)$; therefore, to verify the second equality in \eqref{2.1cesaro}, we need to check the convergence of the Fourier expansion of $M_{a,s}(x)$ at $x=(\pi,\pi,\pi)$ in the sense of first order Cesaro summation by expanding spheres. To do this, we use the analogue of the Riemann localization property for multi-dimensional Fourier series. Namely, as proved in \cite{stein}, this localization holds for first order Cesaro summation by expanding spheres in the class of functions $f$ such that
$$
\int_{\mathbb T^3}|f(x)|\ln_+|f(x)|\,dx<\infty
$$
(this is exactly the critical case $\kappa=\frac{d-1}2=1$ for $d=3$). Thus, since this condition is satisfied for $M_{a,s}(x)$ due to Corollary \ref{Cor2.Grsm}, the Fourier series for $M_{a,s}(x)$ and $M_{a,s}(x)-\frac1{(2\pi)^3}G_{a,s}(x)$ are convergent or divergent at $x=(\pi,\pi,\pi)$ simultaneously. Since the second function is $C^\infty$ on the whole torus, we have the desired convergence, see also \cite{MFS} and references therein. Thus, the second equality in \eqref{2.1cesaro} is established. To verify the first equality, it is enough to mention that the series is second order Cesaro summable to $M_{a,s}$ due to Theorem \ref{Th2.2c}. This finishes the proof of the theorem.
\end{proof}
\section{Concluding remarks}\label{s3}
Note that formally Theorem \ref{Th2.1c} covers Theorem \ref{Th2.2c}. Nevertheless, we would like to present both methods. The one given in Subsection \ref{s21} is not only very elementary and transparent, but can also be easily extended to summation by general expanding domains $N\Omega$, where $\Omega$ is a sufficiently regular bounded domain in $\R^3$ containing zero. Also, the rate of convergence of the second order Cesaro sums can be easily controlled.
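As an aside (not part of the argument above), the second order Cesaro sums are straightforward to evaluate numerically; the following minimal sketch, with function names of our own choosing and a naive lattice enumeration, computes $r_3(n)$ and the sums $C^2_N(0,\frac12)$ from the definitions:

```python
import numpy as np

def lattice_r3(N):
    """r_3(n) for 0 <= n <= N, by direct enumeration of integer points."""
    R = int(np.sqrt(N)) + 1
    r = np.arange(-R, R + 1)
    i, j, k = np.meshgrid(r, r, r, indexing="ij")
    n = (i * i + j * j + k * k).ravel()
    return np.bincount(n[n <= N], minlength=N + 1)  # entry n equals r_3(n)

def cesaro2(N, a=0.0, s=0.5):
    """Second order Cesaro sum C^2_N(a,s) of the alternating series over spheres."""
    r3 = lattice_r3(N)[1:]                                   # primed sum: n = 0 excluded
    n = np.arange(1, N + 1, dtype=float)
    sign = np.where(np.arange(1, N + 1) % 2 == 0, 1.0, -1.0)  # (-1)^n
    return float(np.sum((1.0 - n / N) ** 2 * sign * r3 / (a * a + n) ** s))

print(lattice_r3(9)[1:])   # classical table: r_3(1)=6, r_3(2)=12, r_3(3)=8, ...
print(cesaro2(3000))       # slowly approaches the Madelung constant -1.74756...
```

The same routine with the weight $(1-n/N)$ in place of $(1-n/N)^2$ produces the first order Cesaro sums discussed next.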
Some numeric simulations for the case of the NaCl-Madelung constant ($a=0$, $s=\frac12$) are presented in the figure below
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{Cesaro2ndOrder.jpg}
\caption{The $N$th partial sums of \eqref{2.2good} with $a=0$ and $s=\frac12$ up to $N=5000$.}
\label{fig:coffee}
\end{figure}
\noindent and we clearly see the convergence to the Madelung constant
$$
M_{0,1/2}=-1.74756...
$$
The second method (used in the proof of Theorem \ref{Th2.1c}) is more delicate and is strongly based on the Riemann localization for multiple Fourier series and the classical results of \cite{stein}. This method is essentially restricted to expanding spheres, and the rate of convergence is not clear. A numeric simulation for the NaCl-Madelung constant is presented in the figure below
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Cesaro1stOrder.jpg}
\caption{The $N$th partial sums of \eqref{2.1cesaro} with $a=0$ and $s=\frac12$ up to $N=5000$.}
\label{fig:coffee1}
\end{figure}
\noindent and we see that the rate of convergence is essentially worse than in the case of second order Cesaro summation. As an advantage of this method, we mention the possibility to extend it to the more general class of exponential sums of the form \eqref{2.F}.
\par
Both methods are easily extendable to other dimensions $d\ne3$. Indeed, it is not difficult to see that the elementary method works for Cesaro summation of order $\kappa\ge d-2$, while the second one requires the weaker assumption $\kappa\ge\frac{d-1}2$. Using the fact that the function $M_{a,s}(x)$ is more regular (belongs to some Sobolev space $W^{\varepsilon,p}(\mathbb T^3)$), together with the fact that the Riemann localization holds for slightly subcritical values of $\kappa$ if this extra regularity is known (see e.g.
\cite{MFS}), one can prove convergence for some $\kappa=\kappa(s)<\frac{d-1}2$, although the sharp values of $\kappa(s)$ seem to be unknown.
\begin{thebibliography}{9}
\bibitem{Flap} N. Abatangelo and E. Valdinoci, {\it Getting Acquainted with the Fractional Laplacian}, in: Contemporary Research in Elliptic PDEs and Related Topics, Springer, (2019), 1--105.
\bibitem{MFS} Sh. Alimov, R. Ashurov and A. Pulatov, {\it Multiple Fourier Series and Fourier Integrals}, in: Commutative Harmonic Analysis IV, Springer, (1992), 1--95.
\bibitem{BDZ} M. Bartuccelli, J. Deane and S. Zelik, {\it Asymptotic expansions and extremals for the critical Sobolev and Gagliardo--Nirenberg inequalities on a torus}, Proc. R. Soc. Edinburgh, Vol. 143, No. 3, (2013), 445--482.
\bibitem{Bor13} J. Borwein, M. Glasser, R. McPhedran, J. Wan, and I. Zucker, {\it Lattice Sums Then and Now}, Encyclopedia of Mathematics and its Applications, Cambridge: Cambridge University Press, 2013.
\bibitem{CR} A. Chaba and R. Pathria, {\it Evaluation of lattice sums using Poisson's summation formula. II}, J. Phys. A: Math. Gen., Vol. 9, No. 9, (1976), 1411--1423.
\bibitem{Emer} O. Emersleben, {\it \"Uber die Konvergenz der Reihen Epsteinscher Zetafunktionen}, Math. Nachr., Vol. 4, No. 1-6, (1950), 468--480.
\bibitem{SL} D. Gurarie, {\it Symmetries and Laplacians: Introduction to Harmonic Analysis, Group Representations and Applications}, Vol. 174, North-Holland, 1992.
\bibitem{Ha} G. H. Hardy, {\it Divergent Series}, Clarendon Press, 1949.
\bibitem{mar} S. Marshall, {\it A rapidly convergent modified Green's function for Laplace's equation in a rectangular region}, Proc. R. Soc. Lond. A, Vol. 455, (1999), 1739--1766.
\bibitem{mar2000} S. Marshall, {\it A periodic Green function for calculation of coulombic lattice potentials}, Journal of Physics: Condensed Matter, Vol. 12, No. 21, (2000), 4575--4601.
\bibitem{Ram21} M. Ortiz Ramirez, {\it Lattice points in $d$-dimensional spherical segments}, Monatsh. Math., Vol.
194, (2021), 167--179.
\bibitem{Trans} L. Roncal and P. Stinga, {\it Transference of Fractional Laplacian Regularity}, in: Special Functions, Partial Differential Equations, and Harmonic Analysis, Springer, (2014), 203--212.
\bibitem{stein} E. Stein, {\it Localization and Summability of Multiple Fourier Series}, Acta Math., Vol. 100, No. 1-2, (1958), 93--146.
\bibitem{Watson} G. Watson, {\it A Treatise on the Theory of Bessel Functions}, 2nd ed., Cambridge, England: Cambridge University Press, 1966.
\bibitem{ZI} S. Zelik and A. Ilyin, {\it Green's function asymptotics and sharp interpolation inequalities}, Uspekhi Mat. Nauk, Vol. 69, No. 2(416), (2014), 23--76.
\end{thebibliography}
\end{document}
\begin{document}
\thispagestyle{empty}
\begin{center}
\section*{Structures and Numerical Ranges of Power Partial Isometries}
\vspace*{3mm}
\begin{tabular}{lcl}
\hspace*{1cm}{\bf Hwa-Long Gau}$^{*1}$\hspace*{1cm}&and & \hspace*{1cm}{\bf Pei Yuan Wu}$^2$ \vspace*{3mm}\\
Department of Mathematics & & Department of Applied Mathematics\\
National Central University&& National Chiao Tung University\\
Chung-Li 32001, Taiwan&& Hsinchu 30010, Taiwan\\
Republic of China&&Republic of China
\end{tabular}
\end{center}
\centerline{\bf Abstract}
We derive a matrix model, under unitary similarity, of an $n$-by-$n$ matrix $A$ such that $A, A^2, \ldots, A^k$ ($k\ge 1$) are all partial isometries, which generalizes the known fact that if $A$ is a partial isometry, then it is unitarily similar to a matrix of the form ${\scriptsize\left[\begin{array}{cc} 0 & B\\ 0 & C\end{array}\right]}$ with $B^*B+C^*C=I$. Using this model, we show that if $A$ has ascent $k$ and $A, A^2, \ldots, A^{k-1}$ are partial isometries, then the numerical range $W(A)$ of $A$ is a circular disc centered at the origin if and only if $A$ is unitarily similar to a direct sum of Jordan blocks whose largest size is $k$. As an application, this yields that, for any $S_n$-matrix $A$, $W(A)$ (resp., $W(A\otimes A)$) is a circular disc centered at the origin if and only if $A$ is unitarily similar to the Jordan block $J_n$. Finally, examples are given to show that the conditions that $W(A)$ and $W(A\otimes A)$ are circular discs centered at the origin are independent of each other for a general matrix $A$.
\noindent
\emph{AMS classification}: 15A99, 15A60\\
\emph{Keywords}: Power partial isometry, numerical range, $S_n$-matrix.
${}^*$Corresponding author. E-mail addresses: [email protected] (H.-L. Gau), [email protected] (P. Y. Wu)
${}^1$Research supported by the National Science Council of the Republic of China under NSC-102-2115-M-008-007.
${}^2$Research supported by the National Science Council of the Republic of China under NSC-102-2115-M-009-007 and by the MOE-ATU project.
\noindent
{\bf\large 1. Introduction}
An $n$-by-$n$ complex matrix $A$ is a \emph{partial isometry} if $\|Ax\|=\|x\|$ for any vector $x$ in the orthogonal complement $(\ker A)^{\perp}$ in $\mathbb{C}^n$ of the kernel of $A$, where $\|\cdot\|$ denotes the standard norm in $\mathbb{C}^n$. The study of such matrices or, more generally, such operators on a Hilbert space dates back to 1962 \cite{6}. Their general properties have since been summarized in \cite[Chapter 15]{5}. In this paper, we study matrices $A$ such that, for some $k\ge 1$, the powers $A, A^2, \ldots, A^k$ are all partial isometries. In Section 2 below, we derive matrix models, under unitary similarity, of such a matrix (Theorems 2.2 and 2.4). They are generalizations of the known fact that $A$ is a partial isometry if and only if it is unitarily similar to a matrix of the form ${\scriptsize\left[\begin{array}{cc} 0 & B\\ 0 & C\end{array}\right]}$ with $B^*B+C^*C=I$ (Lemma 2.1). Recall that the \emph{ascent} of a matrix $A$, denoted by $a(A)$, is the minimal integer $k\ge 0$ for which $\ker A^k=\ker A^{k+1}$. It is easily seen that $a(A)$ is equal to the size of the largest Jordan block associated with the eigenvalue 0 in the Jordan form of $A$. We denote the $n$-by-$n$ \emph{Jordan block}
$$\left[ \begin{array}{cccc} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{array} \right]$$
by $J_n$. The \emph{numerical range} $W(A)$ of $A$ is the subset $\{\langle Ax, x\rangle : x\in \mathbb{C}^n, \|x\|=1\}$ of the complex plane $\mathbb{C}$, where $\langle\cdot, \cdot\rangle$ is the standard inner product in $\mathbb{C}^n$. It is known that $W(A)$ is a nonempty compact convex subset of $\mathbb{C}$, and $W(J_n)=\{z\in\mathbb{C} : |z|\le\cos(\pi/(n+1))\}$ (cf. \cite[Proposition 1]{4}).
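As a quick numerical illustration of the last fact (our own sketch, assuming NumPy; not part of the paper): since $W(J_n)$ is a circular disc centered at the origin, its radius equals the largest eigenvalue of the Hermitian matrix $(J_n+J_n^T)/2$, and that eigenvalue is indeed $\cos(\pi/(n+1))$.

```python
import numpy as np

def jordan_block(n):
    """The n-by-n nilpotent Jordan block J_n (ones on the superdiagonal)."""
    return np.eye(n, k=1)

def numerical_radius_jordan(n):
    """Radius of W(J_n): top eigenvalue of the real part (J_n + J_n^T)/2."""
    H = (jordan_block(n) + jordan_block(n).T) / 2
    return np.linalg.eigvalsh(H)[-1]   # eigvalsh returns eigenvalues in ascending order

for n in (2, 3, 4):
    print(n, numerical_radius_jordan(n), np.cos(np.pi / (n + 1)))
```

The eigenvalues of this tridiagonal real part are $\cos(k\pi/(n+1))$, $1\le k\le n$, which is exactly why the computed radius matches $\cos(\pi/(n+1))$.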
For other properties of the numerical range, the readers may consult \cite[Chapter 22]{5} or \cite[Chapter 1]{9}. Using the matrix model for power partial isometries, we show that if $a(A)=k\ge 2$ and $A, A^2, \ldots, A^{k-1}$ are all partial isometries, then the following are equivalent: (a) $W(A)$ is a circular disc centered at the origin, (b) $A$ is unitarily similar to a direct sum $J_{k_1}\oplus J_{k_2}\oplus\cdots\oplus J_{k_{\ell}}$ with $k=k_1\ge k_2\ge\cdots\ge k_{\ell}\ge 1$, and (c) $A$ has no unitary part and $A^j$ is a partial isometry for all $j\ge 1$ (Theorem 2.6). An example is given, which shows that the number ``$k-1$'' in the above assumption is sharp (Example 2.7). In Section 3, we consider the class of $S_n$-matrices. Recall that an $n$-by-$n$ matrix $A$ is of {\em class} $S_n$ if $A$ is a contraction ($\|A\|\equiv\max\{\|Ax\|: x\in\mathbb{C}^n, \|x\|=1\}\le 1$), its eigenvalues are all in $\mathbb{D}$ ($\equiv\{z\in\mathbb{C} : |z|<1\}$), and it satisfies ${\rm rank}\,(I_n-A^*A)=1$. Such matrices are the finite-dimensional versions of the \emph{compression of the shift} $S(\phi)$, first studied by Sarason \cite{10}. They also feature prominently in the Sz.-Nagy--Foia\c{s} contraction theory \cite{11}. It turns out that a hitherto unnoticed property of such matrices is that if $A$ is of class $S_n$ and $k$ is its ascent, then $A, A^2, \ldots, A^k$ are all partial isometries. Thus the structure theorems in Section 2 are applicable to $A$ or even to $A\otimes A$, the tensor product of $A$ with itself. As a consequence, we obtain that, for an $S_n$-matrix $A$, the numerical range $W(A)$ (resp., $W(A\otimes A)$) is a circular disc centered at the origin if and only if $A$ is unitarily similar to the Jordan block $J_n$ (Theorem 3.3). The assertion concerning $W(A)$ was known before (cf. \cite[Lemma 5]{12}).
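The partial isometry condition is easy to test numerically: $A$ is a partial isometry exactly when $A^*A$ is an orthogonal projection, and all powers of a Jordan block pass this test. A small sketch of our own (assuming NumPy; the helper name is hypothetical):

```python
import numpy as np

def is_partial_isometry(A, tol=1e-10):
    """A is a partial isometry iff P = A^* A is an orthogonal projection."""
    P = A.conj().T @ A
    return bool(np.allclose(P @ P, P, atol=tol) and np.allclose(P, P.conj().T, atol=tol))

J5 = np.eye(5, k=1)   # the Jordan block J_5
checks = [is_partial_isometry(np.linalg.matrix_power(J5, j)) for j in range(1, 6)]
print(checks)   # every power of J_5 is a partial isometry
```

Here $(J_5^j)^*J_5^j$ is a 0-1 diagonal matrix, hence an orthogonal projection, for every $j$; a generic contraction such as ${\rm diag}(1/2,0)$ fails the same test.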
Finally, we give examples to show that if $A$ is a general matrix, then the conditions for the circularity (at the origin) of $W(A)$ and $W(A\otimes A)$ are independent of each other (Examples 3.5 and 3.6). We use $I_n$ and $0_n$ to denote the $n$-by-$n$ identity and zero matrices, respectively. An identity or zero matrix with unspecified size is simply denoted by $I$ or $0$. For an $n$-by-$n$ matrix $A$, ${\rm nullity\, } A$ is used for $\dim\ker A$, and ${\rm rank}\, A$ for its rank. The \emph{real part} of $A$ is ${\rm Re\, } A=(A+A^*)/2$. The \emph{geometric} and \emph{algebraic multiplicities} of an eigenvalue $\lambda$ of $A$ are ${\rm nullity\, }(A-\lambda I_n)$ and the multiplicity of the zero $\lambda$ in the characteristic polynomial $\det(zI_n-A)$ of $A$, respectively. An $n$-by-$n$ diagonal matrix with diagonal entries $a_1, \ldots, a_n$ is denoted by ${\rm diag\, }(a_1, \ldots, a_n)$.
\noindent
{\bf \large 2. Power Partial Isometries}
We start with the following characterizations of partial isometries.
{\bf Lemma 2.1.} \emph{The following conditions are equivalent for an $n$-by-$n$ matrix $A$}:
(a) \emph{$A$ is a partial isometry},
(b) \emph{$A^*A$ is an} (\emph{orthogonal}) \emph{projection}, \emph{and}
(c) \emph{$A$ is unitarily similar to a matrix of the form ${\scriptsize\left[\begin{array}{cc} 0 & B\\ 0 & C\end{array}\right]}$ with $B^*B+C^*C=I$}.
\noindent
\emph{In this case}, ${\scriptsize\left[\begin{array}{cc} 0 & B\\ 0 & C\end{array}\right]}$ \emph{acts on} $\mathbb{C}^n=\ker A\oplus(\ker A)^{\perp}$.
Its easy proof is left to the readers. The next theorem gives the matrix model, under unitary similarity, of a matrix $A$ with $A, A^2, \ldots, A^k$ ($1\le k\le a(A)$) partial isometries.
{\bf Theorem 2.2.} \emph{Let $A$ be an $n$-by-$n$ matrix}, $\ell\ge 1$, \emph{and} $k=\min\{\ell, a(A)\}$.
\emph{Then the following conditions are equivalent}:
(a) $A, A^2, \ldots, A^k$ \emph{are partial isometries},
(b) \emph{$A$ is unitarily similar to a matrix of the form}
$$A'\equiv\left[\begin{array}{ccccc} 0 & A_1 & & & \\ & 0 & \ddots & & \\ & & \ddots & A_{k-1} & \\ & & & 0 & B\\ & & & & C\end{array}\right] \ \text{on} \ \mathbb{C}^n=\mathbb{C}^{n_1}\oplus\cdots\oplus\mathbb{C}^{n_k}\oplus\mathbb{C}^{m},$$
\emph{where the} $A_j$'\emph{s satisfy $A_j^*A_j=I_{n_{j+1}}$ for $1\le j\le k-1$}, \emph{and $B$ and $C$ satisfy $B^*B+C^*C=I_m$}. \emph{In this case}, $n_j={\rm nullity\, } A$ \emph{if} $j=1$, ${\rm nullity\, } A^j-{\rm nullity\, } A^{j-1}$ \emph{if} $2\le j\le k$, \emph{and} $m={\rm rank}\, A^k$,
(c) \emph{$A$ is unitarily similar to a matrix of the form}
$$A''\equiv\left[\begin{array}{ccccc} 0 & I & & & \\ & 0 & \ddots & & \\ & & \ddots & I & \\ & & & 0 & B\\ & & & & C\end{array}\right]\oplus(J_{k-1}\oplus\cdots\oplus J_{k-1})\oplus\cdots\oplus(J_1\oplus\cdots\oplus J_1)$$
$$\text{on} \ \mathbb{C}^n=\underbrace{\mathbb{C}^{n_k}\oplus\cdots\oplus\mathbb{C}^{n_k}}_{k}\oplus\mathbb{C}^{m} \oplus\underbrace{\mathbb{C}^{k-1}\oplus\cdots\oplus\mathbb{C}^{k-1}}_{n_{k-1}-n_k}\oplus\cdots\oplus \underbrace{\mathbb{C}\oplus\cdots\oplus\mathbb{C}}_{n_1-n_2},$$
\emph{where the} $n_j$'\emph{s}, $1\le j\le k$, \emph{and $m$ are as in} (b), \emph{and $B$ and $C$ satisfy $B^*B+C^*C=I_m$}.
For the proof of Theorem 2.2, we need the following lemma.
{\bf Lemma 2.3.} \emph{Let $A=[A_{ij}]_{i,j=1}^n$ be a block matrix with $\|A\|\le 1$}, \emph{and let $\alpha$ be a nonempty subset of} $\{1, 2, \ldots, n\}$. \emph{If for some} $j_0$, $1\le j_0\le n$, \emph{we have} $\sum_{i\in\alpha}A_{i j_0}^*A_{i j_0}=I$, \emph{then $A_{i j_0}=0$ for all $i$ not in $\alpha$}.
{\em Proof}. Since $\|A\|\le 1$, we have $A^*A\le I$. Thus the same is true for the $(j_0, j_0)$-block of $A^*A$, that is, $\sum_{i=1}^n A_{i j_0}^*A_{i j_0}\le I$.
Together with our assumption that $\sum_{i\in\alpha}A_{i j_0}^*A_{i j_0}=I$, this yields $\sum_{i\not\in\alpha}A_{i j_0}^*A_{i j_0}\le 0$. It follows immediately that $A_{i j_0}=0$ for all $i$ not in $\alpha$. \hspace{2mm} $\blacksquare$
{\em Proof of Theorem $2.2$}. To prove (a) $\Rightarrow$ (b), let $H_1=\ker A$, $H_j=\ker A^j\ominus\ker A^{j-1}$ for $2\le j\le\ell$, and $H_{\ell+1}=\mathbb{C}^n\ominus\ker A^{\ell}$. Note that if $\ell>a(A)$, then at most $H_1, \ldots, H_{k+1}$ are present. Hence $A$ is unitarily similar to the block matrix $A'\equiv[A_{ij}]_{i,j=1}^{k+1}$ on $\mathbb{C}^n=H_1\oplus\cdots\oplus H_{k+1}$. It is easily seen that $A_{ij}=0$ for any $(i,j)\neq(k+1,k+1)$ with $1\le j\le i\le k+1$. For brevity of notation, let $A_j=A_{j, j+1}$, $1\le j\le k-1$, $B=A_{k, k+1}$, and $C=A_{k+1, k+1}$. We now check, by induction on $j$, that $A_j^*A_j=I_{n_{j+1}}$ for all $j$, and $A_{ij}=0$ for $1\le i\le j-2\le k-2$. For $j=1$, since $A$ is a partial isometry, $A^*A$ is an (orthogonal) projection by Lemma 2.1. We obviously have $A^*A=0$ on $H_1=\ker A$ and $A^*A=I$ on $H_1^{\perp}=H_2\oplus\cdots\oplus H_{k+1}$. Thus $A'^*A'=0\oplus I\oplus\cdots\oplus I$ on $\mathbb{C}^n=H_1\oplus H_2\oplus\cdots\oplus H_{k+1}$. Since $A'^*A'$ is of the form
$$\left[ \begin{array}{ccccc} 0 & 0 & 0 & \cdots & 0 \\ 0 & A_1^*A_1 & * & \cdots & * \\ 0 & * & * & \cdots & * \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & * & * & \cdots & * \end{array} \right],$$
we conclude that $A_1^*A_1=I$. Next assume that, for some $p$ ($2\le p<k$), $A_j^*A_j=I$ for all $j$, $1\le j\le p-1$, and all the blocks in $A'$ which are above $A_1, \ldots, A_{p-1}$ are zero. We now check that $A_{p}^*A_{p}=I$ and all blocks above $A_{p}$ are zero. Since $A^{p}$ is a partial isometry, ${A^p}^*A^{p}$ is an (orthogonal) projection with kernel equal to $H_1\oplus\cdots\oplus H_{p}$.
Thus ${A'^p}^*A'^{p}=\underbrace{0\oplus\cdots\oplus 0}_{p}\oplus \underbrace{I\oplus\cdots\oplus I}_{k-p+1}$. But from
$$A'=\left[ \begin{array}{ccccccccc} 0 & A_1 & 0 & \cdots & 0 & * & \cdots & * & * \\ & \ddots & \ddots & \ddots & \vdots & \vdots & & \vdots & \vdots \\ & & \ddots & \ddots & 0 & \vdots & & \vdots & \vdots \\ & & & \ddots & A_{p-1} & * & & \vdots & \vdots \\ & & & & 0 & A_{p} & \ddots & \vdots & \vdots \\ & & & & & 0 & \ddots & * & \vdots \\ & & & & & & \ddots & A_{k-1} & * \\ & & & & & & & 0 & B \\ & & & & & & & & C \\ \end{array} \right],$$
we have
$$A'^{p}=\begin{array}{ll} \ \ \ \overbrace{\ \hspace{15mm} \ }^{\displaystyle p} \ \ \ \overbrace{\ \hspace{71mm} \ }^{\displaystyle k-p+1} & \\ \left[ \begin{array}{cccccccc} 0 & \cdots & 0 & \prod_{j=1}^{p}A_j & * & \cdots & * & * \\ \cdot & & & 0 & \prod_{j=2}^{p+1}A_j & \ddots & \vdots & \vdots \\ \cdot & & & & \ddots & \ddots & * & \vdots \\ \cdot & & & & & \ddots & \prod_{j=k-p}^{k-1}A_j & * \\ \cdot & & & & & & 0 & * \\ \cdot & & & & & & \vdots & \vdots \\ \cdot & & & & & & 0 & BC^{p-1} \\ 0 & \cdot & \cdot & \cdot & \cdot & \cdot & 0 & C^{p} \\ \end{array} \right] & \hspace{-11mm}\begin{array}{l}\left.\begin{array}{l} {\ } \\ {\ } \\ {\ }\\ {\ }\end{array}\right\}k-p\\ \left.\begin{array}{l}{\ } \\ {\ } \\ {\ }\\ {\ }\end{array}\right\}p+1\end{array}\end{array}.$$
Thus the $(p+1, p+1)$-block of ${A'^p}^*A'^{p}$ is $(\prod_{j=1}^{p}A_j)^*(\prod_{j=1}^{p}A_j)=A_{p}^*A_{p}$, which is equal to $I$ from above. Lemma 2.3 then implies that all the blocks in $A'$ which are above $A_{p}$ are zero. Thus, by induction, the first $k$ block columns of $A'$ are of the asserted form. Finally, we check that $B^*B+C^*C=I_m$. If this is the case, then all the blocks in $A'$ above $B$ and $C$ are zero by Lemma 2.3 again and we will be done.
As above, $A'^{k-1}$ is of the form
$$\left[\begin{array}{ccccc} 0 & \cdots & 0 & \prod_{j=1}^{k-1}A_j & D_1\\ 0 & \cdots & 0 & 0 & D_2\\ \vdots & & \vdots & \vdots & \vdots\\ 0 & \cdots & 0 & 0 & D_k\\ 0 & \cdots & 0 & 0 & C^{k-1} \end{array}\right],$$
and the (orthogonal) projection ${A'^{k-1}}^*A'^{k-1}$ equals $\underbrace{0\oplus\cdots\oplus 0}_{k-1}\oplus I\oplus I$ on $\mathbb{C}^n=H_1\oplus\cdots\oplus H_{k-1}\oplus H_k\oplus H_{k+1}$. Hence the $(k+1, k+1)$-block of ${A'^{k-1}}^*A'^{k-1}$ is
\begin{equation}\label{e1}
(\sum_{j=1}^k D_j^*D_j)+{C^{k-1}}^*C^{k-1},
\end{equation}
which is equal to $I$. Similarly,
$$A'^k=A'^{k-1}A'=\left[\begin{array}{cccc} 0 & \cdots & 0 & (\prod_{j=1}^{k-1}A_j)B+D_1C\\ 0 & \cdots & 0 & D_2C\\ \vdots & & \vdots & \vdots\\ 0 & \cdots & 0 & D_kC\\ 0 & \cdots & 0 & C^{k} \end{array}\right]$$
and the $(k+1, k+1)$-block of ${A'^k}^*A'^k$,
\begin{equation}\label{e2}
B^*(\prod_{j=1}^{k-1}A_j)^*(\prod_{j=1}^{k-1}A_j)B+ B^*(\prod_{j=1}^{k-1}A_j)^*D_1C+C^*D_1^*(\prod_{j=1}^{k-1}A_j)B+(\sum_{j=1}^kC^*D_j^*D_jC)+{C^k}^*C^k,
\end{equation}
is also equal to $I_m$. We deduce from (\ref{e1}), (\ref{e2}) and $A_j^*A_j=I$ for $1\le j\le k-1$ that
\begin{equation}\label{e3}
B^*B+B^*(\prod_{j=1}^{k-1}A_j)^*D_1C+C^*D_1^*(\prod_{j=1}^{k-1}A_j)B+{C}^*C=I_m.
\end{equation}
To complete the proof, we need only show that $(\prod_{j=1}^{k-1}A_j)^*D_1=0$. Indeed, since $(\prod_{j=1}^{k-1}A_j)^*(\prod_{j=1}^{k-1}A_j)=I_{n_k}$, there is an $n_1$-by-$n_1$ unitary matrix $U$ such that $U^*(\prod_{j=1}^{k-1}A_j)={\scriptsize\left[\begin{array}{c} I_{n_k} \\ 0\end{array}\right]}$.
Then $V\equiv U\oplus \underbrace{I\oplus\cdots\oplus I}_k$ is unitary and $$V^*A'^{k-1}V=\left[\begin{array}{ccccc} 0 & \cdots & 0 & U^*(\prod_{j=1}^{k-1}A_j) & U^*D_1\\ 0 & \cdots & 0 & 0 & D_2\\ \vdots & & \vdots & \vdots & \vdots\\ 0 & \cdots & 0 & 0 & D_k\\ 0 & \cdots & 0 & 0 & C^{k-1} \end{array}\right]=\left[\begin{array}{ccccc} 0 & \cdots & 0 & \left[\begin{array}{c} I_{n_k} \\ 0\end{array}\right] & \left[\begin{array}{c} 0 \\ D'_1\end{array}\right]\\ 0 & \cdots & 0 & 0 & D_2\\ \vdots & & \vdots & \vdots & \vdots\\ 0 & \cdots & 0 & 0 & D_k\\ 0 & \cdots & 0 & 0 & C^{k-1} \end{array}\right].$$ Hence $$(\prod_{j=1}^{k-1}A_j)^*D_1=\left[I_{n_k} \ 0\right]U^*U\left[\begin{array}{c} 0 \\ D'_1\end{array}\right]=\left[I_{n_k} \ 0\right]\left[\begin{array}{c} 0 \\ D'_1\end{array}\right]=0$$ as asserted. We conclude from (\ref{e3}) that $B^*B+C^*C=I_m$. Moreover, the sizes of the blocks in $A'$ are as asserted from our construction. This proves (a) $\Rightarrow$ (b). Next we prove (b) $\Rightarrow$ (c). Let $A'$ be as in (b), and let $n_1, \ldots, n_k, m$ be the sizes of the diagonal blocks of $A'$. Since $A_j^*A_j=I_{n_{j+1}}$ for all $j$, $1\le j\le k-1$, we have $n_1\ge n_2\ge\cdots\ge n_k$. Also, from $A_{k-1}^*A_{k-1}=I_{n_k}$, we deduce that there is a unitary matrix $U_{k-1}$ of size $n_{k-1}$ such that $U_{k-1}^*A_{k-1}={\scriptsize\left[\begin{array}{c} I_{n_k} \\ 0\end{array}\right]}$. Similarly, since $(A_{k-2}U_{k-1})^*(A_{k-2}U_{k-1})=I_{n_{k-1}}$, there is a unitary $U_{k-2}$ of size $n_{k-2}$ such that $U_{k-2}^*(A_{k-2}U_{k-1})={\scriptsize\left[\begin{array}{c} I_{n_{k-1}} \\ 0\end{array}\right]}$. Proceeding inductively, we obtain a unitary $U_j$ of size $n_j$ satisfying $U^*_j(A_jU_{j+1})={\scriptsize\left[\begin{array}{c} I_{n_{j+1}} \\ 0\end{array}\right]}$ for each $j$, $1\le j\le k-3$.
If $U=U_1\oplus\cdots\oplus U_{k-1}\oplus I_{n_k}\oplus I_m$, then $$U^*A'U=\left[\begin{array}{cccccc} 0 & U_1^*A_1U_2 & & & & \\ & 0 & \ddots & & & \\ & & \ddots & U_{k-2}^*A_{k-2}U_{k-1} & & \\ & & & 0 & U_{k-1}^*A_{k-1} & \\ & & & & 0 & B\\ & & & & & C\end{array}\right]$$ $$=\left[\begin{array}{cccccc} 0 & \left[\begin{array}{c} I_{n_{2}} \\ 0\end{array}\right] & & & & \\ & 0 & \ddots & & & \\ & & \ddots & \left[\begin{array}{c} I_{n_{k-1}} \\ 0\end{array}\right] & & \\ & & & 0 & \left[\begin{array}{c} I_{n_{k}} \\ 0\end{array}\right] & \\ & & & & 0 & B\\ & & & & & C\end{array}\right].$$ Note that this last matrix is unitarily similar to the one asserted in (c). To prove (c) $\Rightarrow$ (a), we may assume that $$A''=\left[ \begin{array}{ccccc} 0 & I & & & \\ & 0 & \ddots & & \\ & & \ddots & I & \\ & & & 0 & B \\ & & & & C \end{array} \right]$$ with $B^*B+C^*C=I_m$. This is because powers of any Jordan block are all partial isometries and the direct sums of partial isometries are again partial isometries. Simple computations show that $$A''^j=\begin{array}{ll} \ \ \ \overbrace{\ \hspace{15mm} \ }^{\displaystyle j} \ \overbrace{\ \hspace{36mm} \ }^{\displaystyle k-j+1} & \\ \left[ \begin{array}{cccccccc} 0 & \cdots & 0 & I & 0 & \cdots & 0 & 0 \\ & \cdot & & 0 & \ddots & \ddots & \vdots & \vdots \\ & & \cdot & & \ddots & \ddots & 0 & \vdots \\ & & & \cdot & & \ddots & I & 0 \\ & & & & \cdot & & 0 & B \\ & & & & & \cdot & \vdots & \vdots \\ & & & & & & 0 & BC^{j-1} \\ & & & & & & & C^{j} \\ \end{array} \right] & \hspace{-11mm}\begin{array}{l}\left.\begin{array}{l} {\ } \\ {\ } \\ {\ }\\ {\ }\end{array}\right\}k-j\\ \left.\begin{array}{l}{\ } \\ {\ } \\ {\ }\\ {\ }\end{array}\right\}j+1\end{array}\end{array}$$ and ${A''^j}^*A''^j=\underbrace{0\oplus\cdots\oplus 0}_j\oplus \underbrace{I\oplus\cdots\oplus I}_{k-j}\oplus D$, where $D=(\sum_{s=0}^{j-1}{C^s}^*B^*BC^s)+{C^j}^*C^j$ for each $j$, $1\le j\le k$.
From $B^*B+C^*C=I_m$, we deduce that \begin{align*} D &= B^*B+(\sum_{s=1}^{j-2}{C^s}^*B^*BC^s)+{C^{j-1}}^*(B^*B+C^*C)C^{j-1}\\ &= B^*B+(\sum_{s=1}^{j-2}{C^s}^*B^*BC^s)+{C^{j-1}}^*C^{j-1}\\ &= B^*B+(\sum_{s=1}^{j-3}{C^s}^*B^*BC^s)+{C^{j-2}}^*(B^*B+C^*C)C^{j-2}\\ &= \cdots\\ &= B^*B+C^*C\\ &= I_m. \end{align*} Hence ${A''^j}^*A''^j=0\oplus I$, which implies that $A''^j$ is a partial isometry by Lemma 2.1 for all $j$, $1\le j\le k$. This proves (c) $\Rightarrow$ (a). \hspace{2mm} $\blacksquare$ A consequence of Theorem 2.2 is the following. {\bf Theorem 2.4.} \emph{Let $A$ be an $n$-by-$n$ matrix and $\ell>a(A)$}. \emph{Then the following conditions are equivalent}: (a) $A, A^2, \ldots, A^{\ell}$ \emph{are partial isometries}, (b) \emph{$A$ is unitarily similar to a matrix of the form} $U\oplus J_{k_1}\oplus\cdots\oplus J_{k_m}$, \emph{where $U$ is unitary and} $a(A)=k_1\ge \cdots\ge k_m\ge 1$, \emph{and} (c) \emph{$A^j$ is a partial isometry for all $j\ge 1$}. The equivalence of (b) and (c) here is the finite-dimensional version of a result of Halmos and Wallen \cite[Theorem]{7}. {\em Proof of Theorem $2.4$}. Since $\ell>k\equiv a(A)$, Theorem 2.2 (a) $\Rightarrow$ (b) says that $A$ is unitarily similar to a matrix of the form $$A'\equiv\left[\begin{array}{ccccc} 0_{n_1} & A_1 & & & \\ & 0_{n_2} & \ddots & & \\ & & \ddots & A_{k-1} & \\ & & & 0_{n_k} & B\\ & & & & C\end{array}\right] \ \ \ \mbox{on} \ \ \mathbb{C}^n=\mathbb{C}^{n_1}\oplus\cdots\oplus\mathbb{C}^{n_k}\oplus\mathbb{C}^{m}$$ with the $A_j$'s, $B$ and $C$ satisfying the properties asserted therein. As $k$ is the ascent of $A$, ${\rm nullity\, } A^k$ equals the algebraic multiplicity of eigenvalue 0 of $A$. Since ${\rm nullity\, } A^k={\rm nullity\, } A'^k=\sum_{j=1}^kn_j$, it is seen from the structure of $A'$ that the eigenvalue 0 appears fully in the diagonal $0_{n_j}$'s.
This shows that $0$ cannot be an eigenvalue of $C$; that is, $C$ is invertible. A simple computation yields that $$A'^{k+1}=\left[\begin{array}{cccc} 0 & \cdots & 0 & (\prod_{j=1}^{k-1}A_j)BC\\ 0 & \cdots & 0 & (\prod_{j=2}^{k-1}A_j)BC^2\\ \vdots & & \vdots & \vdots\\ 0 & \cdots & 0 & A_{k-1}BC^{k-1}\\ 0 & \cdots & 0 & BC^k\\ 0 & \cdots & 0 & C^{k+1} \end{array}\right]$$ and \begin{equation}\label{e4} {A'^{k+1}}^*A'^{k+1}=0_{n_1}\oplus\cdots\oplus 0_{n_k}\oplus D, \end{equation} where, after simplification by using $A_j^*A_j=I_{n_{j+1}}$ for $1\le j\le k-1$, $D=(\sum_{j=1}^k{C^j}^*B^*BC^j)+{C^{k+1}}^*C^{k+1}$. As $A'^{k+1}$ is a partial isometry, ${A'^{k+1}}^*A'^{k+1}$ is a projection by Lemma 2.1. Moreover, we also have $${\rm nullity\, } {A'^{k+1}}^*A'^{k+1}={\rm nullity\, } A'^{k+1}={\rm nullity\, } A'^k=\sum_{j=1}^kn_j,$$ where the second equality holds because of $k=a(A')$. Thus we obtain from (\ref{e4}) that $D=I_m$. Therefore, \begin{align*} I_m &= D = (\sum_{j=1}^k{C^j}^*B^*BC^j)+{C^{k+1}}^*C^{k+1}\\ &= (\sum_{j=1}^{k-1}{C^j}^*B^*BC^j)+{C^{k}}^*(B^*B+C^*C)C^{k}\\ &= (\sum_{j=1}^{k-1}{C^j}^*B^*BC^j)+{C^{k}}^*C^{k}\\ &= \cdots\\ &= C^*(B^*B+C^*C)C\\ &= C^*C. \end{align*} This shows that $C$ is unitary and hence $B=0$ (from $B^*B+C^*C=I_m$). Thus $A'$ is unitarily similar to the asserted form in (b). This completes the proof of (a) $\Rightarrow$ (b). The implications (b) $\Rightarrow$ (c) and (c) $\Rightarrow$ (a) are trivial. \hspace{2mm} $\blacksquare$ At this juncture, it seems appropriate to define the {\em power partial isometry index} $p(\cdot)$ for any matrix $A$: $$p(A)\equiv\sup\{k\ge 0: I, A, A^2, \ldots, A^k \ \mbox{are all partial isometries}\}.$$ An easy corollary of Theorem 2.4 is the following estimate for $p(A)$. {\bf Corollary 2.5.} \emph{If $A$ is an $n$-by-$n$ matrix}, \emph{then $0\le p(A)\le a(A)$ or $p(A)=\infty$}.
\emph{In particular}, \emph{we have} (a) $0\le p(A)\le n-1$ \emph{or} $p(A)=\infty$, \emph{and} (b) $p(A)=n-1$ \emph{if and only if $A$ is unitarily similar to a matrix of the form} \begin{equation}\label{e5} \left[\begin{array}{ccccc} 0 & 1 & & &\\ & 0 & \ddots & &\\ & & \ddots & 1 &\\ & & & 0 & a \\ & & & & b \end{array}\right]\end{equation} \emph{with $|a|^2+|b|^2=1$ and $a, b\neq 0$}. {\em Proof}. The first assertion follows from Theorem 2.4. If $p(A)=n$, then $a(A)=n$, which implies that the Jordan form of $A$ is $J_n$. Thus $p(A)=\infty$, a contradiction. This proves (a) of the second assertion. As for (b), if $p(A)=n-1$, then $a(A)=n$ would lead to a contradiction as above. Thus we must have $a(A)=n-1$. Theorem 2.2 implies that $A$ is unitarily similar to a matrix of the form (\ref{e5}) with $|a|^2+|b|^2=1$. Since either $a=0$ or $b=0$ would lead to the contradiction $p(A)=\infty$, we have thus proven one direction of (b). The converse follows easily from Theorem 2.2 and the arguments in the preceding paragraph. \hspace{2mm} $\blacksquare$ The next theorem gives conditions for which $p(A)\ge a(A)-1$ implies that $A$ is unitarily similar to a direct sum of Jordan blocks. {\bf Theorem 2.6.} \emph{Let $A$ be an $n$-by-$n$ matrix with $p(A)\ge a(A)-1$}. \emph{Then the following conditions are equivalent}: (a) \emph{$W(A)$ is a circular disc centered at the origin}, (b) \emph{$A$ is unitarily similar to a direct sum of Jordan blocks}, (c) \emph{$A$ has no unitary part and $A^j$ is a partial isometry for all $j\ge 1$}, \emph{and} (d) \emph{$A$ has no unitary part and $A, A^2, \ldots, A^{\ell}$ are partial isometries for some $\ell>a(A)$}. \noindent \emph{In this case}, $W(A)=\{z\in \mathbb{C} : |z|\le\cos(\pi/(a(A)+1))\}$ \emph{and} $p(A)=\infty$. Here a matrix is said to have {\em no unitary part} if it is not unitarily similar to one with a unitary summand.
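As a numerical sanity check of Corollary 2.5 (b), one can verify that a matrix of the form (\ref{e5}) with $a, b\neq 0$ and $|a|^2+|b|^2=1$ indeed has $p(A)=n-1$. The following sketch is ours, for illustration only; the choices $n=5$ and $a=b=1/\sqrt{2}$ are hypothetical and any admissible $a, b$ would do.

```python
import numpy as np

def is_partial_isometry(M, tol=1e-10):
    # M is a partial isometry iff M M* M = M (equivalently, M*M is a projection).
    return bool(np.allclose(M @ M.conj().T @ M, M, atol=tol))

n = 5
a = b = 1 / np.sqrt(2)           # |a|^2 + |b|^2 = 1 with a, b != 0
A = np.diag(np.ones(n - 1), 1)   # superdiagonal of ones
A[n - 2, n - 1] = a              # last superdiagonal entry replaced by a
A[n - 1, n - 1] = b              # corner entry b

powers = [is_partial_isometry(np.linalg.matrix_power(A, j)) for j in range(1, n + 1)]
print(powers)  # the first n-1 powers are partial isometries, A^n is not
```

Consistent with the corollary, the first $n-1$ entries of `powers` are true and the last is false, so $p(A)=n-1=4$ here.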
Note that, in the preceding theorem, the condition $p(A)\ge a(A)-1$ cannot be replaced by the weaker $p(A)\ge a(A)-2$. This is seen by the next example. {\bf Example 2.7.} If $A=J_3\oplus{\scriptsize\left[\begin{array}{cc} 0 & (1-|\lambda|^2)^{1/2}\\ 0 & \lambda\end{array}\right]}$, where $0<|\lambda|\le\sqrt{2}-1$, then $a(A)=3$ and $W(A)=\{z\in\mathbb{C} : |z|\le\sqrt{2}/2\}$. Since $A$ is a partial isometry while $A^2$ is not, we have $p(A)=1$. Note that $A$ has a nonzero eigenvalue. Hence it is not unitarily similar to any direct sum of Jordan blocks. The proof of Theorem 2.6 depends on the following series of lemmas, the first of which is a generalization of \cite[Theorem 1]{13}. {\bf Lemma 2.8.} \emph{Let} $$A=\left[\begin{array}{ccccc} 0 & A_1 & & & \\ & 0 & \ddots & & \\ & & \ddots & A_{k-1} & \\ & & & 0 & B\\ & & & & C\end{array}\right] \ on \ \mathbb{C}^n=\mathbb{C}^{n_1}\oplus\cdots\oplus\mathbb{C}^{n_k}\oplus\mathbb{C}^{m},$$ \emph{where the} $A_j$'\emph{s satisfy} $A_j^*A_j=I_{n_{j+1}}$, $1\le j\le k-1$. \emph{If $W(A)$ is a circular disc centered at the origin with radius $r$ larger than} $\cos(\pi/(k+1))$, \emph{then $C$ is not invertible}. {\em Proof}. Since $W(A)=\{z\in\mathbb{C}:|z|\le r\}$, $r$ is the maximum eigenvalue of ${\rm Re\, }(e^{i\theta}A)$ and hence $\det(rI_n-{\rm Re\, }(e^{i\theta}A))=0$ for all real $\theta$. 
We have \begin{align}\label{e6} & 0=\det\left[\begin{array}{ccccc} rI_{n_1} & -(e^{i\theta}/2)A_1 & & & \\ -(e^{-i\theta}/2)A_1^* & rI_{n_2} & \ddots & & \\ & \ddots & \ddots & -(e^{i\theta}/2)A_{k-1} & \\ & & -(e^{-i\theta}/2)A_{k-1}^* & rI_{n_k} & -(e^{i\theta}/2)B\\ & & & -(e^{-i\theta}/2)B^* & rI_m-{\rm Re\, }(e^{i\theta}C)\end{array}\right] \\ =& \det D_k(\theta)\cdot\det(E(\theta)-F(\theta)),\nonumber \end{align} where \begin{equation}\label{e7} D_k(\theta)=\left[\begin{array}{cccc} rI_{n_1} & -(e^{i\theta}/2)A_1 & & \\ -(e^{-i\theta}/2)A_1^* & rI_{n_2} & \ddots & \\ & \ddots & \ddots & -(e^{i\theta}/2)A_{k-1} \\ & & -(e^{-i\theta}/2)A_{k-1}^* & rI_{n_k} \end{array}\right], \ \ E(\theta)=rI_m-{\rm Re\, }(e^{i\theta}C), \end{equation} and \begin{equation}\label{e8} F(\theta)=\left[0 \ \ldots \ 0 \ -(e^{-i\theta}/2)B^*\right]D_k(\theta)^{-1}\left[\begin{array}{c} 0\\ \vdots \\ 0\\ -(e^{i\theta}/2)B\end{array}\right], \end{equation} by using the Schur complement of $D_k(\theta)$ in the matrix in (\ref{e6}) (cf. \cite[p. 22]{8}). Note that here the invertibility of $D_k(\theta)$ follows from the facts that $D_k(\theta)$ is unitarily similar to $rI-{\rm Re\, } J$, where $J=(\sum_{j=1}^{n_k}\oplus J_k)\oplus(\sum_{j=1}^{n_{k-1}-n_k}\oplus J_{k-1})\oplus\cdots\oplus(\sum_{j=1}^{n_1-n_2}\oplus J_1)$ (cf. the proof of Theorem 2.2 (b) $\Rightarrow$ (c)), and $r$ ($>\cos(\pi/(k+1))$) is not an eigenvalue of ${\rm Re\, } J$. Moreover, the $(k,k)$-block of $D_k(\theta)^{-1}$ is independent of the value of $\theta$. Thus the same is true for the entries of $F(\theta)$. Under a unitary similarity, we may assume that $C=[c_{ij}]_{i,j=1}^m$ is upper triangular with $c_{ij}=0$ for all $i>j$. Let $F(\theta)=[b_{ij}]_{i,j=1}^m$ and $E(\theta)-F(\theta)=[d_{ij}(\theta)]_{i,j=1}^m$.
Then $$d_{ij}(\theta)=\left\{\begin{array}{ll} r-{\rm Re\, }(e^{i\theta}c_{jj})-b_{jj} \ \ \ \ & \mbox{if} \ \ i=j,\\ -(e^{i\theta}/2)c_{ij}-b_{ij} & \mbox{if} \ \ i<j,\\ -(e^{-i\theta}/2)\overline{c}_{ji}-b_{ij} & \mbox{if} \ \ i>j.\end{array}\right.$$ Hence $p(\theta)\equiv\det(E(\theta)-F(\theta))$ is a trigonometric polynomial of degree at most $m$, say, $p(\theta)=\sum_{j=-m}^m a_je^{ij\theta}$. Since $\det(rI_n-{\rm Re\, }(e^{i\theta}A))=0$ and $\det D_k(\theta)\neq 0$, we obtain from (\ref{e6}) that $p(\theta)=0$ for all real $\theta$. This implies that $a_j=0$ for all $j$. In particular, $a_m=(-1)^m\prod_{j=1}^m(c_{jj}/2)=0$ from the above description of the $d_{ij}(\theta)$'s. This yields that $c_{jj}=0$ for some $j$; that is, $C$ is not invertible. \hspace{2mm} $\blacksquare$ The next lemma is to be used in the proof of Lemma 2.10. {\bf Lemma 2.9.} \emph{Let $A={\scriptsize\left[\begin{array}{cc} 0_p & B\\ 0 & C\end{array}\right]}$ be an $n$-by-$n$ matrix}, \emph{and let $B=[b_{ij}]_{i=1, j=1}^{p, n-p}$ and $C=[c_{ij}]_{i,j=1}^{n-p}$ with $c_{ij}=0$ for all $i>j$}. \emph{If the geometric and algebraic multiplicities of the eigenvalue $0$ of $A$ are equal to each other and $c_{11}=0$}, \emph{then $b_{i1}=0$ for all $i$}, $1\le i\le p$. {\em Proof}. Let $e_j$ denote the $j$th standard unit vector $[0 \ \ldots \ 0 \ \tb{1}{j \, \mbox{th}} \ 0 \ \ldots \ 0]^T$, $1\le j\le n$. Then $e_1, \ldots, e_p$ are all in $\ker A$. Since $c_{11}=0$, we have $Ae_{p+1}=b_{11}e_1+\cdots+b_{p 1}e_p$, which is also in $\ker A$. Thus $A^2e_{p+1}=0$ or $e_{p+1}\in\ker A^2$. Our assumption on the multiplicities of 0 implies that $\ker A=\ker A^2=\cdots$. Hence we obtain $e_{p+1}\in \ker A$ or $Ae_{p+1}=0$, which yields that $b_{i 1}=0$ for all $i$, $1\le i\le p$. \hspace{2mm} $\blacksquare$ The following lemma is the main tool in proving, under the condition of circular $W(A)$, that $p(A)\ge a(A)-1$ yields $p(A)\ge a(A)$.
{\bf Lemma 2.10.} \emph{Let} $$A=\left[\begin{array}{ccccc} 0 & A_1 & & & \\ & 0 & \ddots & & \\ & & \ddots & A_{k-2} & \\ & & & 0 & B\\ & & & & C\end{array}\right] \ \ \ \ on \ \ \mathbb{C}^n=\mathbb{C}^{n_1}\oplus\cdots\oplus\mathbb{C}^{n_{k-1}}\oplus\mathbb{C}^{m},$$ \emph{where} $k=a(A) \, (\ge 2)$, \emph{the} $A_j$'\emph{s satisfy} $A_j^*A_j=I_{n_{j+1}}$, $1\le j\le k-2$, \emph{and} $B={\scriptsize\left[\begin{array}{cc} I_p & 0\\ 0 & B_1\end{array}\right]}$ \emph{and} $C={\scriptsize\left[\begin{array}{cc} 0_p & C_1\\ 0 & C_2\end{array}\right]} \, (1\le p\le\min\{n_{k-1}, m\})$ \emph{satisfy} $B^*B+C^*C=I_m$. \emph{If $W(A)$ is a circular disc centered at the origin with radius $r$ larger than $\cos(\pi/(k+1))$}, \emph{then $A$ is unitarily similar to a matrix of the form} $$\left[\begin{array}{ccccc} 0 & A_1' & & & \\ & 0 & \ddots & & \\ & & \ddots & A_{k-1}' & \\ & & & 0 & B'\\ & & & & C'\end{array}\right] \ \ \ \ on \ \ \mathbb{C}^n=\mathbb{C}^{n_1}\oplus\cdots\oplus\mathbb{C}^{n_{k-1}}\oplus\mathbb{C}^{q}\oplus\mathbb{C}^{m-q},$$ \emph{where} $q=\min\{n_{k-1}, m\}$, \emph{the} $A'_j$'\emph{s satisfy} ${A'_j}^*A'_j=I_{n_{j+1}}$, $1\le j\le k-2$, ${A'_{k-1}}^*A'_{k-1}=I_{q}$, \emph{and} $B'$ \emph{and} $C'$ \emph{satisfy} ${B'}^*B'+{C'}^*C'=I_{m-q}$. {\em Proof}. Since $W(A)=\{z\in\mathbb{C} : |z|\le r\}$, we have $\det(rI_n-{\rm Re\, }(e^{i\theta}A))=0$ for all real $\theta$. As in the proof of Lemma 2.8, we have the factorization $\det(rI_n-{\rm Re\, }(e^{i\theta}A))=\det D_{k-1}(\theta)\cdot\det(E(\theta)-F(\theta))$, where $D_{k-1}(\theta)$, $E(\theta)$ and $F(\theta)$ are as in (\ref{e7}) and (\ref{e8}) with $D_{k}(\theta)^{-1}$ in the expression of $F(\theta)$ there replaced by $D_{k-1}(\theta)^{-1}$.
Since $D_{k-1}(\theta)$ is unitarily similar to $rI-{\rm Re\, } J$, where $J=(\sum_{j=1}^{n_{k-1}}\oplus J_{k-1})\oplus(\sum_{j=1}^{n_{k-2}-n_{k-1}}\oplus J_{k-2})\oplus\cdots\oplus(\sum_{j=1}^{n_1-n_{2}}\oplus J_{1})$ and the $(k-1, k-1)$-entry of $(rI_{k-1}-{\rm Re\, } J_{k-1})^{-1}$ is $a\equiv\det(rI_{k-2}-{\rm Re\, } J_{k-2})/\det(rI_{k-1}-{\rm Re\, } J_{k-1})$, the $(k-1, k-1)$-block of $D_{k-1}(\theta)^{-1}$ is given by $aI_{n_{k-1}}$. Hence we have $F(\theta)=(a/4)B^*B$. As before, from $\det D_{k-1}(\theta)\neq 0$, we obtain $\det(E(\theta)-F(\theta))=0$. Thus \begin{align}\label{e9} & \, 0 =\det(E(\theta)-F(\theta)) \nonumber\\ = & \, \det\left(rI_m-\left[\begin{array}{cc} 0_{p} & (e^{i\theta}/2)C_1\\ (e^{-i\theta}/2)C_1^* & {\rm Re\, }(e^{i\theta}C_2)\end{array}\right]-\frac{a}{4}\left[\begin{array}{cc} I_{p} & 0\\ 0 & B_1^*B_1\end{array}\right]\right) \nonumber\\ = & \, \det\left[\begin{array}{cc} (r-(a/4))I_p & -(e^{i\theta}/2)C_1\\ -(e^{-i\theta}/2)C_1^* & rI_{m-p}-{\rm Re\, }(e^{i\theta}C_2)-(a/4)B_1^*B_1\end{array}\right]. \end{align} We claim that $r\neq a/4$. Indeed, since $\det(rI_k-{\rm Re\, } J_k)=r\det(rI_{k-1}-{\rm Re\, } J_{k-1})-(1/4)\det(rI_{k-2}-{\rm Re\, } J_{k-2})$, we have $\det(rI_k-{\rm Re\, } J_k)/\det(rI_{k-1}-{\rm Re\, } J_{k-1})=r-(a/4)$. Therefore, $r=a/4$ if and only if $\det(rI_k-{\rm Re\, } J_k)=0$. The latter would imply $r\le\cos(\pi/(k+1))$, contradicting our assumption that $r>\cos(\pi/(k+1))$. Hence $r\neq a/4$ as asserted. Using the Schur complement, we infer from (\ref{e9}) that $$p(\theta)\equiv\det(rI_{m-p}-{\rm Re\, }(e^{i\theta}C_2)-\frac{a}{4}B_1^*B_1-\frac{1}{r-(a/4)}\cdot\frac{1}{4}C_1^*C_1)=0$$ for all real $\theta$. As $p(\theta)$ is a trigonometric polynomial of degree at most $m-p$, say, $p(\theta)=\sum_{j=-(m-p)}^{m-p}a_je^{ij\theta}$, this implies that $a_j=0$ for all $j$. After a unitary similarity, we may assume that $C_2=[c_{ij}]_{i,j=1}^{m-p}$ with $c_{ij}=0$ for all $i>j$.
Hence $a_{m-p}=(1/2^{m-p})c_{11}\cdots c_{m-p, m-p}=0$. Thus $c_{jj}=0$ for some $j$. We may assume that $c_{11}=0$. Note that $$A^k=\left[\begin{array}{cccc} 0 & \cdots & 0 & (\prod_{j=1}^{k-2}A_j)BC\\ 0 & \cdots & 0 & (\prod_{j=2}^{k-2}A_j)BC^2\\ \vdots & & \vdots & \vdots\\ 0 & \cdots & 0 & A_{k-2}BC^{k-2}\\ 0 & \cdots & 0 & BC^{k-1}\\ 0 & \cdots & 0 & C^{k} \end{array}\right],$$ $$BC^j=\left[\begin{array}{cc} I_p & 0\\ 0 & B_1\end{array}\right]\left[\begin{array}{cc} 0_p & C_1C_2^{j-1}\\ 0 & C_2^j\end{array}\right]=\left[\begin{array}{cc} 0_p & C_1C_2^{j-1}\\ 0 & B_1C_2^j\end{array}\right], \ \ 1\le j\le k-1,$$ and $$C^k=\left[\begin{array}{cc} 0_p & C_1C_2^{k-1}\\ 0 & C_2^k\end{array}\right].$$ Since the first column of $C_2$ is zero, the same is true for the $(p+1)$st columns of $(\prod_{j=t}^{k-2}A_j)BC^t$ ($2\le t\le k-2$), $BC^{k-1}$ and $C^k$. As for $(\prod_{j=1}^{k-2}A_j)BC$, we need Lemma 2.9. Because $k=a(A)$, the geometric and algebraic multiplicities of the eigenvalue 0 of $A^k$ coincide. Hence we may apply Lemma 2.9 to $A^k$ to infer that the $((\sum_{j=1}^{k-1}n_j)+p+1)$st column of $A^k$ is zero. In particular, since $\ker(\prod_{j=1}^{k-2}A_j)=\{0\}$, the $(p+1)$st column of $BC={\scriptsize\left[\begin{array}{cc} 0_p & C_1\\ 0 & B_1C_2\end{array}\right]}$ is zero and thus the first column of $C_1$ is zero. Together with the zero first column of $C_2$, this yields $C={\scriptsize\left[\begin{array}{cc} 0_{p+1} & C_1^{(1)}\\ 0 & C_2^{(1)}\end{array}\right]}$. 
As \begin{align*} & I_m=B^*B+C^*C=\left[\begin{array}{cc} I_p & 0\\ 0 & B_1^*\end{array}\right]\left[\begin{array}{cc} I_p & 0\\ 0 & B_1\end{array}\right]+\left[\begin{array}{cc} 0_{p+1} & 0\\ C^{(1)*}_1 & C^{(1)*}_2\end{array}\right]\left[\begin{array}{cc} 0_{p+1} & C^{(1)}_1\\ 0 & C^{(1)}_2\end{array}\right]\\ =& \left[\begin{array}{cc} I_p & 0\\ 0 & B_1^*B_1\end{array}\right]+\left[\begin{array}{cc} 0_{p+1} & 0\\ 0 & C^{(1)*}_1C^{(1)}_1+ C^{(1)*}_2C^{(1)}_2\end{array}\right], \end{align*} we infer that the first column of $B_1$ is a unit vector. After another unitary similarity, we may further assume that $$B_1=\left[\begin{array}{cc} 1 & 0\\ 0 & B_1^{(1)}\end{array}\right] \ \ \ \mbox{or} \ \ \ B=\left[\begin{array}{cc} I_{p+1} & 0\\ 0 & B^{(1)}_1\end{array}\right].$$ Applying the above arguments again, we have $$C=\left[\begin{array}{cc} 0_{p+2} & C_1^{(2)}\\ 0 & C_2^{(2)}\end{array}\right], \ B_1^{(1)}=\left[\begin{array}{cc} 1 & 0\\ 0 & B_1^{(2)}\end{array}\right] \ \ \mbox{and} \ \ B=\left[\begin{array}{cc} I_{p+2} & 0\\ 0 & B_1^{(2)}\end{array}\right].$$ Continuing this process, we obtain $$\mbox{(i)} \ \ C=\left[\begin{array}{cc} 0_{n_{k-1}} & C'_1\\ 0 & C'_2\end{array}\right] \ \ \mbox{and} \ \ B=\left[ I_{n_{k-1}} \ \ 0\right] \ \ \mbox{if} \ \ n_{k-1}<m,$$ and $$\hspace*{-25mm}\mbox{(ii)} \ \ C=0_m \ \ \mbox{and} \ \ B=\left[\begin{array}{c} I_m \\ 0\end{array}\right] \ \ \mbox{if} \ \ n_{k-1}\ge m.$$ Finally, let $A_j'=A_j$ for $1\le j\le k-2$. In case (i), let $A_{k-1}'=I_{n_{k-1}}$, $B'=C'_1$ and $C'=C'_2$. Since $$I_m=B^*B+C^*C=\left[\begin{array}{cc} I_{n_{k-1}} & 0\\ 0 & {C'_1}^*C'_1+{C'_2}^*C'_2\end{array}\right],$$ we have $B'^*B'+C'^*C'={C'_1}^*C'_1+{C'_2}^*C'_2=I_{m-n_{k-1}}$. On the other hand, for case (ii), let $A'_{k-1}={\scriptsize\left[\begin{array}{c} I_m \\ 0\end{array}\right]}$. In this case, $B'$ and $C'$ are absent. \hspace{2mm} $\blacksquare$ A consequence of the previous results is the following. 
{\bf Proposition 2.11.} \emph{If $A$ is an $n$-by-$n$ matrix with $W(A)$ a circular disc centered at the origin and $p(A)\ge a(A)-1$}, \emph{then $p(A)=a(A)$ or $\infty$}. {\em Proof}. Let $k=a(A)$. The assumption $p(A)\ge a(A)-1$ says that $A, A^2, \ldots, A^{k-1}$ are all partial isometries. In particular, we have $A^{k-1}=0$ or $\|A^{k-1}\|=1$. In the former case, $p(A)$ equals $\infty$. Hence we may assume that $\|A^{k-1}\|=1$ and thus also $\|A\|=1$. By \cite[Theorem 2.10]{1}, we have $w(A)\ge\cos(\pi/(k+1))$. Two cases are considered separately: (i) $w(A)=\cos(\pi/(k+1))$. In this case, \cite[Theorem 2.10]{1} yields that $A$ is unitarily similar to a matrix of the form $J_k\oplus A_1$ with $\|A_1\|\le 1$ and $w(A_1)\le\cos(\pi/(k+1))$. Since $A_1^{k-1}$ is also a partial isometry, we may assume as before that $\|A_1^{k-1}\|=1$ and thus also $\|A_1\|=1$. Now applying \cite[Theorem 2.10]{1} again to $A_1$ yields that $w(A_1)=\cos(\pi/(k+1))$ and $A_1$ is unitarily similar to $J_k\oplus A_2$ with $\|A_2\|\le 1$ and $w(A_2)\le\cos(\pi/(k+1))$. Continuing this process, we obtain that either $p(A)=\infty$ or $A$ is unitarily similar to a direct sum of copies of $J_k$. In the latter case, we again have $p(A)=\infty$. (ii) $w(A)>\cos(\pi/(k+1))$. Since $A, A^2, \ldots, A^{k-1}$ are partial isometries, Theorem 2.2 yields the unitary similarity of $A$ to a matrix of the form $$\left[\begin{array}{ccccc} 0 & A_1 & & & \\ & 0 & \ddots & & \\ & & \ddots & A_{k-2} & \\ & & & 0 & B\\ & & & & C\end{array}\right] \ \ \ \mbox{on} \ \ \mathbb{C}^n=\mathbb{C}^{n_1}\oplus\cdots\oplus\mathbb{C}^{n_{k-1}}\oplus\mathbb{C}^{m} $$ with $A_j^*A_j=I_{n_{j+1}}$, $1\le j\le k-2$, and $B^*B+C^*C=I_m$. By Lemma 2.8, $C$ is not invertible. 
We may assume, after a unitary similarity, that $B$ and $C$ are of the forms ${\scriptsize\left[\begin{array}{cc} 1 & 0\\ 0 & B_1 \end{array}\right]}$ and ${\scriptsize\left[\begin{array}{cc} 0 & C_1\\ 0 & C_2\end{array}\right]}$, where $B_1$, $C_1$ and $C_2$ are $(n_{k-1}-1)$-by-$(m-1)$, 1-by-$(m-1)$ and $(m-1)$-by-$(m-1)$ matrices, respectively. Using Lemma 2.10, we obtain the unitary similarity of $A$ to a matrix of the form in Theorem 2.2 (b). Thus, by Theorem 2.2 again, $A, A^2, \ldots, A^k$ are partial isometries. Hence $p(A)\ge k=a(A)$. Our assertion then follows from Corollary 2.5. \hspace{2mm} $\blacksquare$ Note that, in the preceding proposition, the number ``$a(A)-1$'' is sharp as was seen from Example 2.7. We are now ready to prove Theorem 2.6. {\em Proof of Theorem $2.6$}. The implications (b) $\Rightarrow$ (c) and (c) $\Rightarrow$ (d) are trivial. On the other hand, (d) $\Rightarrow$ (a) follows from Theorem 2.4. Hence we need only prove (a) $\Rightarrow$ (b). Let $k=a(A)$. By Proposition 2.11, $A, A^2, \ldots, A^k$ are partial isometries. Thus $A$ is unitarily similar to the matrix $A'$ in Theorem 2.2 (b). Since $k$ is the ascent of $A$, the geometric multiplicity of $A^k$, that is, ${\rm nullity\, } A^k$ is equal to the algebraic multiplicity of eigenvalue 0 of $A$. As proven in (a) $\Rightarrow$ (b) of Theorem 2.2, ${\rm nullity\, } A^k=\sum_{j=1}^kn_j$. We infer from the structure of $A'$ that 0 cannot be an eigenvalue of $C$. On the other hand, applying Lemma 2.8 to $A'$ yields the noninvertibility of $C$. This leads to a contradiction. Thus $B$ and $C$ do not appear in $A'$ and, therefore, $A'$, together with $A$, is unitarily similar to a direct sum of Jordan blocks by Theorem 2.2 (c). This proves (b). \hspace{2mm} $\blacksquare$ \noindent {\bf\large 3.
$S_n$-matrices} In this section, we apply the results in Section 2 to the class of $S_n$-matrices. We start with the following. {\bf Proposition 3.1.} \emph{Let $A$ be a noninvertible $S_n$-matrix}. \emph{Then} (a) \emph{$a(A)$ equals the algebraic multiplicity of the eigenvalue $0$ of $A$}, (b) $p(A)=a(A)$ \emph{or} $\infty$, (c) \emph{$p(A)=\infty$ if and only if $A$ is unitarily similar to $J_n$}, \emph{and} (d) ${\rm rank\, } A^j=n-j$ \emph{for} $1\le j\le a(A)$. {\em Proof}. Let $k= a(A)$. (a) It is known that, for any eigenvalue $\lambda$ of $A$, there is exactly one associated block, say, $\lambda I_{\ell}+J_{\ell}$ in the Jordan form of $A$. In particular, for $\lambda=0$, both $a(A)$ and the algebraic multiplicity of 0 are equal to the size $\ell$ of its associated Jordan block $J_{\ell}$. (b) By \cite[Corollary 1.3]{2}, $A$ is unitarily similar to a matrix of the form $A'\equiv{\scriptsize\left[\begin{array}{cc} J_k & B\\ 0 & C\end{array}\right]}$, where $B={\scriptsize\left[\begin{array}{c} 0\\ b\end{array}\right]}$ is a $k$-by-$(n-k)$ matrix with $b$ a row vector of $n-k$ components, and $C$ is an invertible $(n-k)$-by-$(n-k)$ upper-triangular matrix. Since ${\rm rank\, }(I_n-A^*A)=1$, we infer from $$I_n-A'^*A'=\left[\begin{array}{cc} I_k & 0\\ 0 & I_{n-k}\end{array}\right]-\left[\begin{array}{cc} J_k^* & 0\\ B^* & C^*\end{array}\right]\left[\begin{array}{cc} J_k & B\\ 0 & C\end{array}\right]=\left[\begin{array}{cc} {\scriptsize\left[\begin{array}{cccc} 1 & & &\\ & 0 & & \\ & & \ddots & \\ & & & 0\end{array}\right]} & 0\\ 0 & I_{n-k}-(B^*B+C^*C)\end{array}\right]$$ that $B^*B+C^*C=I_{n-k}$.
As $A'$ can also be expressed as $$\left[\begin{array}{ccccc} 0 & 1 & & & 0\\ & 0 & \ddots & & \vdots\\ & & \ddots & 1 & 0\\ & & & 0 & b \\ & & & & C \end{array}\right] \ \ \ \mbox{on} \ \ \mathbb{C}^n=\underbrace{\mathbb{C}\oplus\cdots\oplus \mathbb{C}}_k\oplus \mathbb{C}^{n-k}$$ with $b^*b+C^*C=I_{n-k}$, Theorem 2.2 can be invoked to conclude that $A, A^2, \ldots, A^k$ are partial isometries. Thus $p(A)\ge k$. It follows from Corollary 2.5 that $p(A)=k$ or $\infty$. (c) If $p(A)=\infty$, then the unitary similarity of $A$ and $J_n$ is an easy consequence of Theorem 2.4 and the fact that $A$ is irreducible (in the sense that it is not unitarily similar to the direct sum of two other matrices). The converse is trivial. (d) As in the proof of (b), $A$ is unitarily similar to $A'=\left[\begin{array}{cc} J_k & B\\ 0 & C\end{array}\right]$, where $B={\scriptsize\left[\begin{array}{c} 0\\ b\end{array}\right]}$ and $C$ is invertible. Then $A^j$ is unitarily similar to $$A'^j=\begin{array}{ll} \ \ \ \overbrace{\ \hspace{15mm} \ }^{\displaystyle j} \ \ \overbrace{\ \hspace{23mm} \ }^{\displaystyle k-j} & \\ \left[\begin{array}{c|c} \begin{array}{ccccccc} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ & \cdot & & 0 & \ddots & \ddots & \vdots \\ & & \cdot & & \ddots & \ddots & 0 \\ & & & \cdot & & \ddots & 1 \\ & & & & \cdot & & 0 \\ & & & & & \cdot & \vdots \\ & & & & & & 0 \end{array} & \begin{array}{c} \\ 0 \\ \\ \\ B_j\end{array}\\ \hline 0 & C^j\end{array} \right] & \hspace{-11mm}\begin{array}{l} \left.\begin{array}{l} {\ } \\ {\ } \\ {\ } \\ {\ }\end{array}\right\}k-j\\ \left.\begin{array}{l}{\ } \\ {\ } \\ \vspace*{-2mm}{\ }\end{array}\right\}j\end{array}\end{array}$$ for some $j$-by-$(n-k)$ matrix $B_j$. Since the first $k-j$ rows and the last $n-k$ rows of $A'^j$ are linearly independent, we infer that ${\rm rank\, } A^j={\rm rank\, } A'^j=(k-j)+(n-k)=n-j$ for $1\le j\le k$.
\hspace{2mm} $\blacksquare$ The next corollary complements Corollary 2.5: it shows that any allowable value for $p(A)$ can actually be attained by some matrix $A$. {\bf Corollary 3.2.} \emph{For any integers $n$ and $j$ satisfying $1\le j\le n-1$}, \emph{there is an $n$-by-$n$ matrix $A$ with $p(A)=j$}. {\em Proof}. Let $A$ be a noninvertible $S_n$-matrix with the algebraic multiplicity of its eigenvalue 0 equal to $j$ (cf. \cite[Corollary 1.3]{2}). Then $p(A)=a(A)=j$ by Proposition 3.1. \hspace{2mm} $\blacksquare$ For an $n$-by-$n$ matrix $A=[a_{ij}]_{i,j=1}^n$ and an $m$-by-$m$ matrix $B$, their {\em tensor product} (or {\em Kronecker product}) $A\otimes B$ is the $(nm)$-by-$(nm)$ matrix $$\left[ \begin{array}{ccc} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nn}B \end{array} \right].$$ Basic properties of tensor products can be found in \cite[Chapter 4]{9}. Our main concern here is when $W(A)$ and $W(A\otimes A)$ are circular discs (centered at the origin). Problems of this nature have also been considered in \cite{1}. The main result of this section is the following theorem. {\bf Theorem 3.3.} \emph{Let $A$ be an $S_n$-matrix}. \emph{Then the following conditions are equivalent}: (a) \emph{$W(A)$ is a circular disc centered at the origin}, (b) \emph{$W(A\otimes A)$ is a circular disc centered at the origin}, \emph{and} (c) \emph{$A$ is unitarily similar to $J_n$}. In preparation for its proof, we need the next lemma. {\bf Lemma 3.4.} \emph{Let $A$ and $B$ be nonzero $n$-by-$n$ and $m$-by-$m$ matrices}, \emph{respectively}. (a) $$a(A\otimes B)=\left\{\begin{array}{ll} \min\{a(A), a(B)\} \ \ \ & \mbox{\em if} \ \ a(A), a(B)\ge 1,\\ a(A) & \mbox{\em if} \ \ a(B)=0,\\ a(B) & \mbox{\em if} \ \ a(A)=0.\end{array}\right.$$ (b) \emph{If $A$ and $B$ are partial isometries}, \emph{then so is $A\otimes B$}. \emph{The converse is false}. (c) \emph{Assume that $A$ and $B$ are} (\emph{nonzero}) \emph{contractions}.
\emph{Then $A$ and $B$ are partial isometries if and only if $A\otimes B$ is a partial isometry}. (d) \emph{If $A$ and $B$ are} (\emph{nonzero}) \emph{contractions}, \emph{then} $p(A\otimes B)=\min\{p(A), p(B)\}$. (e) \emph{$A$ is a partial isometry if and only if $A\otimes A$ is}. \emph{Thus}, \emph{in particular}, $p(A\otimes A)=p(A)$. The proof makes use of the facts that (i) if $A$ (resp., $B$) is similar to $A'$ (resp., $B'$), then $A\otimes B$ is similar to $A'\otimes B'$, and (ii) if the eigenvalues of $A$ (resp., $B$) are $a_i$, $1\le i\le n$ (resp., $b_j$, $1\le j\le m$), then the eigenvalues of $A\otimes B$ are $a_ib_j$, $1\le i\le n$, $1\le j\le m$, counting algebraic multiplicities (cf. \cite[Theorem 4.2.12]{9}). {\em Proof of Lemma $3.4$}. (a) Let $k_1=a(A)$ and $k_2=a(B)$, and assume that $2\le k_1\le k_2$. Let $J_{k_1}$ (resp., $J_{k_2}$) be a Jordan block in the Jordan form of $A$ (resp., $B$). Since $$(J_{k_1}\otimes J_{k_2})^{k_1}=J_{k_1}^{k_1}\otimes J_{k_2}^{k_1}=0_{k_1}\otimes J_{k_2}^{k_1}=0_{k_1k_2}$$ and $$(J_{k_1}\otimes J_{k_2})^{k_1-1}=J_{k_1}^{k_1-1}\otimes J_{k_2}^{k_1-1}\neq 0_{k_1k_2},$$ the size of the largest Jordan block in the Jordan form of $A\otimes B$ is $k_1$. This shows that $a(A\otimes B)=k_1=\min\{a(A), a(B)\}$. The other cases can be proven even more easily. (b) This is a consequence of the equivalence of (a) and (b) in Lemma 2.1, as $A^*A$ and $B^*B$ are projections, which implies the same for $(A\otimes B)^*(A\otimes B)$. The converse is false as seen by the example of $A=[2]$ and $B=[1/2]$. (c) If $A\otimes B$ is a partial isometry, then $(A\otimes B)^*(A\otimes B)=(A^*A)\otimes(B^*B)$ is a projection by Lemma 2.1. Since the positive semidefinite $A^*A$ and $B^*B$ are both contractions, their eigenvalues $a_i$, $1\le i\le n$, and $b_j$, $1\le j\le m$, are such that $0\le a_i, b_j\le 1$ for all $i$ and $j$.
As the eigenvalues of $(A^*A)\otimes(B^*B)$, the products $a_ib_j$, $1\le i\le n$, $1\le j\le m$, can only be $0$ or $1$. Thus the same is true for the $a_i$'s and $b_j$'s. It follows that $A^*A$ and $B^*B$ are projections. Therefore, $A$ and $B$ are partial isometries. (d) This follows from (c) immediately. (e) If $A\otimes A$ is a partial isometry, then $(A\otimes A)^*(A\otimes A) = (A^*A)\otimes (A^*A)$ is a projection with eigenvalues 0 and 1. But its eigenvalues are also given by $a_i a_j$, $1\le i, j \le n$, where the $a_i$'s are the eigenvalues of $A^*A$. If any $a_i$ is nonzero and not equal to 1, then the same is true for $a_i^2$, which is a contradiction. Hence all the $a_i$'s are either 0 or 1. It follows that $A^*A$ is a projection and $A$ is a partial isometry. The converse was proven in (b). \hspace{2mm} $\blacksquare$ Finally, we are ready to prove Theorem 3.3. {\em Proof of Theorem $3.3$}. To prove (a) $\Rightarrow$ (c) (resp., (b) $\Rightarrow$ (c)), note that the center of the circular $W(A)$ (resp., $W(A\otimes A)$) must be an eigenvalue of $A$ (resp., $A\otimes A$) (cf. \cite[Theorem]{3}). In particular, this says that $A$ (resp., $A\otimes A$) is noninvertible. Since the eigenvalues of $A\otimes A$ are $a_ia_j$, $1\le i, j\le n$, where the $a_i$'s are the eigenvalues of $A$ (cf. \cite[Theorem 4.2.12]{9}), the noninvertibility of $A\otimes A$ also implies that of $A$. Hence $p(A)=a(A)$ or $\infty$ by Proposition 3.1 (b). If $p(A)=\infty$, then (c) already holds by Proposition 3.1 (c). Thus we may assume that $p(A)=a(A)$. In this case, we also have $$p(A\otimes A)=p(A)=a(A)=a(A\otimes A)$$ by Lemma 3.4 (d) (or (e)) and (a). Applying Theorem 2.6, we obtain the unitary similarity of $A$ (resp., $A\otimes A$) to a direct sum of Jordan blocks. It follows that the only eigenvalue of $A$ (resp., $A\otimes A$ and hence of $A$) is 0.
Hence $A$ is unitarily similar to $J_n$, that is, (c) holds. The implication (c) $\Rightarrow$ (a) is trivial since, under (c), we have $W(A)=\{z\in\mathbb{C} : |z|\le\cos(\pi/(n+1))\}$. For (c) $\Rightarrow$ (b), note that (c) implies that $A$ is unitarily similar to $e^{i\theta}A$ for all real $\theta$. Hence $A\otimes A$ is unitarily similar to $e^{i\theta}(A\otimes A)$ for real $\theta$. Thus $W(A\otimes A)$ is a circular disc centered at the origin. This also follows from \cite[Proposition 2.8]{1}. \hspace{2mm} $\blacksquare$ We remark that the equivalence of (a) and (c) in Theorem 3.3 was shown before in \cite[Lemma 5]{12} by a completely different proof. We end this section with two examples and one open question. The examples show that, in contrast to the case of $S_n$-matrices, the conditions of $W(A)$ and $W(A\otimes A)$ being circular discs centered at the origin are independent of each other for a general matrix $A$. {\bf Example 3.5.} Let $A=[\lambda]\oplus J_2$, where $1/2<|\lambda|\le 1/\sqrt{2}$. Then $$W(A\otimes A)=W([\lambda^2]\oplus\lambda J_2\oplus\lambda J_2\oplus\left[\begin{array}{cc} 0_2 & J_2\\ 0_2 & 0_2\end{array}\right])=\{z\in\mathbb{C} : |z|\le\frac{1}{2}\},$$ but $W(A)$, being the convex hull of $\{\lambda\}\cup\{z\in\mathbb{C} : |z|\le 1/2\}$, is obviously not a circular disc. {\bf Example 3.6.} Let $$A=\left[\begin{array}{ccc} 0 & -\sqrt{2} & 1\\ 0 & 0 & 1\\ 0 & 0 & \sqrt{2}/2\end{array}\right].$$ Then, for any real $\theta$, $${\rm Re\, }(e^{i\theta}A)=\frac{1}{2}\left[\begin{array}{ccc} 0 & -\sqrt{2}e^{i\theta} & e^{i\theta}\\ -\sqrt{2}e^{-i\theta} & 0 & e^{i\theta}\\ e^{-i\theta} & e^{-i\theta} & \sqrt{2}\cos\theta\end{array}\right],$$ whose maximum eigenvalue can be computed to be always equal to 1. Hence $W(A)=\overline{\mathbb{D}}$.
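The claim that the maximum eigenvalue of ${\rm Re\, }(e^{i\theta}A)$ equals $1$ for every real $\theta$ can be spot-checked numerically. The following sketch (an illustrative aside using {\tt numpy}; it is not part of the original argument) samples $\theta$ on a grid:

```python
import numpy as np

# The matrix A of Example 3.6.
A = np.array([[0.0, -np.sqrt(2.0), 1.0],
              [0.0,  0.0,          1.0],
              [0.0,  0.0,          np.sqrt(2.0) / 2.0]])

def max_eig_re(theta):
    # Largest eigenvalue of Re(e^{i*theta} A) = (e^{i*theta} A + e^{-i*theta} A^*) / 2.
    H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2.0
    return np.linalg.eigvalsh(H).max()

# The maximum eigenvalue stays at 1 for all sampled theta,
# consistent with W(A) being the closed unit disc.
for theta in np.linspace(0.0, 2.0 * np.pi, 41):
    assert abs(max_eig_re(theta) - 1.0) < 1e-8
```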
On the other hand, a long and tedious computation shows that the characteristic polynomial $p(z)\equiv\det(zI_9-2{\rm Re\, }(A\otimes A))$ of $2{\rm Re\, }(A\otimes A)$ can be factored as \begin{equation}\label{e10} z^2(z^2-3)(z^5-z^4-17z^3+17z^2+46z-48). \end{equation} Assume that $W(A\otimes A)=\{z\in\mathbb{C} : |z|\le \sqrt{r}/2\}$ for some $r>0$. Then the maximum and minimum eigenvalues of $2{\rm Re\, }(A\otimes A)$ are $\sqrt{r}$ and $-\sqrt{r}$, respectively. Note that $p(2)=-32<0$ and $p(\infty)=\infty$ imply that $p$ has a zero larger than 2. Since the largest zero of $p$ is the maximum eigenvalue $\sqrt{r}$, we obtain $\sqrt{r}>2$, so that neither $\sqrt{r}$ nor $-\sqrt{r}$ is a zero of the factor $z^2(z^2-3)$; in particular, $r\neq 3$. Since both $\sqrt{r}$ and $-\sqrt{r}$ are zeros of $p$, we also have \begin{align}\label{e11} & \ p(z)= z^2(z^2-3)(z^2-r)(z^3+az^2+bz+c) \nonumber\\ =& \ z^2(z^2-3)(z^5+az^4+(b-r)z^3+(c-ar)z^2-brz-cr) \end{align} for some real $a$, $b$ and $c$. Comparing the coefficients of the last factors in (\ref{e10}) and (\ref{e11}) yields that $a=-1$, $b-r=-17$, $c-ar=17$, $br=-46$ and $cr=48$. From these, we deduce that $c+r=17$ and hence $b=-c$. This leads to $-46=br=-cr$, which contradicts $cr=48$. Thus $W(A\otimes A)$ cannot be a circular disc centered at the origin. The matrix $A$ in the preceding example was also considered in \cite[Example 3.4]{1} for another purpose. {\bf Question 3.7.} Is it true that, for any integers $n$, $j$ and $k$ satisfying $1\le j\le k\le n-1$, there is an $n$-by-$n$ matrix $A$ with $p(A)=j$ and $a(A)=k$? This is a refinement of Corollary 3.2. It is true if $k<n/2$. Indeed, in this case, we have $j\le k\le n-k-1$. Let $A=J_k\oplus B$, where $B$ is a noninvertible $S_{n-k}$-matrix whose eigenvalue 0 has algebraic multiplicity $j$. Then $p(A)=p(B)=a(B)=j$ by Proposition 3.1. On the other hand, we obviously have $a(A)=k$. \end{document}
\begin{document} \pagestyle{myheadings} \title{Leader-Following Consensus of Multiple Linear Systems Under Switching Topologies: An Averaging Method} \author{Wei Ni, Xiaoli Wang and Chun Xiong} \contact{Wei}{Ni}{School of Science, Nanchang University, Nanchang 330031, P. R. China.}{[email protected]} \contact{Xiaoli}{Wang}{School of Information Science and Engineering, Harbin Institute of Technology at Weihai, Weihai 264209, P. R. China.}{[email protected]} \contact{Chun}{Xiong}{School of Science, Nanchang University, Nanchang 330031, P. R. China.}{[email protected]} \markboth{W. Ni, X. Wang and C. Xiong} {Leader-Following Consensus of Multiple Linear Systems} \maketitle \begin{abstract} The leader-following consensus of multiple linear time-invariant (LTI) systems under switching topology is considered. The leader-following consensus problem consists of designing for each agent a distributed protocol to make all agents track a leader vehicle, which has the same LTI dynamics as the agents. The interaction topology describing the information exchange among these agents is time-varying. An averaging method is proposed. Unlike the existing results in the literature, which assume the LTI agents to be neutrally stable, we relax this condition, assuming only that the LTI agents are stabilizable and detectable. Observer-based leader-following consensus is also considered. \end{abstract} \keywords{Consensus, multi-agent systems, averaging method} \classification{93C15, 93C35} \section{Introduction} Multi-agent systems are a hot topic in a variety of research communities, such as robotics, sensor networks, artificial intelligence, automatic control and biology. Of particular interest in this field is the consensus problem, since it lays the foundation for many consensus-related problems, including formation, flocking and swarming. We refer to the survey papers \cite{olfati2007,ren2007} and the references therein for details.
Integrator and double-integrator models are the simplest abstractions, upon which a large part of the results on consensus of multi-agent systems has been based (see \cite{ren2005,olfati2004,olfati2007,jadb2003,cheng2008,hong2007}). To deal with more complex models, a number of recent papers are devoted to the consensus of multiple LTI systems \cite{zhang2011,wang2008,ni2010,scardovi2009,seo2009,liu2009,khoo2009,Yoshioka2008,namerikawa2008,wang2009,wang2010,wang2011}. These results keep most of the concepts provided by earlier developments, and provide new design and analysis techniques, such as the LQR approach, the low gain approach, the $H_{\infty}$ approach, parametrization and geometric approaches, the output regulation approach, and homotopy based approaches. However, most of these results \cite{zhang2011,wang2008,ni2010,seo2009,liu2009,khoo2009,Yoshioka2008,namerikawa2008} mainly focus on fixed interaction topology, rather than time-varying topology. How do the switches of the interaction topology and the agent dynamics jointly affect the collective behavior of the multi-agent system? Attempts to understand this issue have been hampered by the lack of suitable analysis tools. The results of Scardovi et al. \cite{scardovi2009} and Ni et al. \cite{ni2010} are mentioned here because of their contributions to dealing with switching topology in the setting of high-order agent models. However, when dealing with switching topology, \cite{scardovi2009} and \cite{ni2010} assumed that the system of each agent is neutrally stable, and thus has no eigenvalues with positive real parts. This assumption is widely made in the literature, whether the interaction topology is fixed or switching. Unfortunately, when the agents are stabilizable and detectable rather than neutrally stable, and when the interaction topology is switching, no result has been reported in the literature on the consensus of such agents.
To deal with switching graph topology and to remove the neutral stability condition, we provide a modified averaging approach, which is motivated by \cite{aeyels1999,bellman1985,kosut1987}. The averaging approach was initially proposed by Krylov and Bogoliubov in celestial mechanics \cite{krylov1943}, and was further developed in the work of \cite{bogoliubov1961,krasnosel1955}; for more details refer to the recent book \cite{sanders2007}. Closely related to averaging theory is the stability of fast time-varying nonautonomous systems \cite{aeyels1999,kosut1987,bellman1985}, and more specifically of fast switching systems \cite{stilwell2006,teel2011}. The modified approach in this paper is motivated by the work of Stilwell et al. \cite{stilwell2006}, and also by the work of \cite{aeyels1999,kosut1987,bellman1985}. Although this work borrows the idea from \cite{stilwell2006}, the main differences of this work from \cite{stilwell2006} are as follows. The synchronization in \cite{stilwell2006} is achieved under a fast switching condition; that is, synchronization is realized under two time scales: a time scale $t$ for the agent dynamics and a time scale for the switching signal parameterized by $t/\varepsilon$ with $\varepsilon$ small enough. In our paper, we further establish that the two time scales can be made the same, and thus the consensus result in our paper is not limited to the fast switching case. Furthermore, we present an extended averaging approach for consensus: a sequence of averaged systems (rather than a single averaged system) serves as the indicator of consensus of the multi-agent system. This allows us to obtain more relaxed conditions for consensus. Finally, we further investigate how to make this sequence of averaged systems achieve consensus, and thus ensure the consensus of the original multi-agent system. This was not investigated in \cite{stilwell2006}.
The result in our paper shows that if there exists an infinite sequence of uniformly bounded and contiguous time intervals such that during each such interval the interaction graph is jointly connected, and if the dwell time for each subgraph is appropriately small, then consensus can be achieved. In summary, the contributions of this paper are as follows: \begin{itemize} \item An averaging method is applied to the leader-following consensus of multiple LTI systems. \item Results are obtained for the wider class of stabilizable and detectable agent dynamics, rather than only for the existing class of neutrally stable agent dynamics. \item The agent dynamics and the switching signal considered in this paper have the same time scale, rather than the different time scales considered in \cite{stilwell2006}; thus the results in our paper are not limited to the fast switching case. \end{itemize} The rest of this paper is organized as follows. Section 2 contains the problem formulation and some preliminary results. Section 3 provides the main result on leader-following consensus, and extensions are made in Section 4, which is devoted to observer-based protocol design and analysis. Two illustrative examples are presented in Section 5. Section 6 is a brief conclusion. \section{Problem Formulation and Preliminaries} This section presents the multi-agent system model, with each agent being a stabilizable LTI system, which includes the integrator and double integrator as special cases. The leader-following consensus problem is formulated using graph theory. Some supporting lemmas are also included here. Consider $N$ agents with the same dynamics \begin{eqnarray}\label{2.1} \dot x_i=Ax_i+Bu_i,\quad i=1, 2, \cdots, N, \end{eqnarray} where $x_i\in \mathbb{R}^n$ is agent $i$'s state, and $u_i\in \mathbb{R}^m$ is agent $i$'s input, through which the interaction or coupling between agent $i$ and the other agents is realized. The matrix $B$ is of full column rank.
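As a concrete instance (an illustrative aside; the specific matrices are ours, not from the paper), the double-integrator special case of (\ref{2.1}) can be checked to satisfy the stabilizability assumption made below via its controllability matrix:

```python
import numpy as np

# Double-integrator agent: x_i = (position, velocity), scalar force input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])  # full column rank

# The controllability matrix [B, AB] has full rank, so (A, B) is
# controllable and in particular stabilizable.
C_ctrb = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(C_ctrb) == A.shape[0]
```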
The state information is transmitted among these agents, and the agents together with the transmission channels form a network. We use a directed graph $\mathcal {G} =(\mathcal{V}, \mathcal {E})$ to describe the topology of this network, where $\mathcal{V}=\{1,2,\cdots,N\}$ is the set of nodes representing the $N$ agents and $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ is the set of ordered edges $(i, j)$, meaning that agent $i$ can send information to agent $j$. The leader, labeled as $i=0$, has the linear dynamics \begin{eqnarray}\label{2.2} \dot x_0=Ax_0, \end{eqnarray} where $x_0\in \mathbb{R}^n$ is its state. Referring to agents $i\in \{1, \cdots, N\}$ as follower agents, we note that the leader's dynamics is independent of the follower agents. More specifically, the leader just sends information to some follower agents, without receiving information from them. The interaction structure of the whole set of agents $\{0,1,\cdots,N\}$ is described by an extended directed graph $\bar{\mathcal{G}}=(\bar{\mathcal{V}}, \bar{\mathcal{E}})$, which consists of the graph $\mathcal{G}$, node $0$ and directed edges from node $0$ to the follower nodes to which the leader sends information. \begin{definition} {\bf (Definitions related to graphs)} Consider a graph $\bar{\mathcal{G}}=(\bar{ \mathcal{V}}, \bar{ \mathcal{E}})$ with $\bar{ \mathcal{V}}=\{0, 1, \cdots, N\}$. \begin{itemize} \item The set of neighbors of node $i$ relative to the subgraph $\mathcal{G}$ of $\bar{\mathcal{G}}$ is denoted by $\mathcal{N}_i=\{j\in \mathcal{V}: (j,i)\in \mathcal{E}, j\neq i\}$. \item A directed path is a sequence of edges $(i_1,i_2),(i_2,i_3),(i_3,i_4),\cdots$ in that graph. \item Node $i$ is reachable to node $j$ if there is a directed path from $i$ to $j$. \item The graph $\bar{\mathcal{G}}$ is called connected if node $0$ is reachable to any other node.
\end{itemize} \end{definition} \begin{definition}{\bf (Structure matrices of a graph)}\label{def2} \begin{itemize} \item For a directed graph $\mathcal{G}$ on the nodes $\{1,\cdots,N\}$, its structure is described by its adjacency matrix $\mathcal {A}\in \mathbb{R}^{N\times N}$, whose $ij$-th entry is 1 if $(j,i)$ is an edge of $\mathcal{G}$ and 0 if it is not; or by its Laplacian matrix $\mathcal{L}=-\mathcal {A}+ \Lambda$, where $\Lambda\in \mathbb{R}^{N\times N}$ is the in-degree matrix of $\mathcal{G}$, which is diagonal with $i$-th diagonal element $|\mathcal{N}_i|$, the cardinality of $\mathcal{N}_i$, which equals $\sum_{j\neq i}a_{ij}$. \item For a directed graph $\bar{\mathcal{G}}$ on the node set $\{0, 1, \cdots, N\}$, one uses the matrix $\mathcal{H}=\mathcal{L}+\mathcal{D}$ to describe its structure, where $\mathcal{L}$ is the Laplacian matrix of its subgraph $\mathcal{G}$ and $\mathcal{D}=diag(d_1, \cdots, d_N)$ with $d_i=1$ if $(0,i)$ is an edge of the graph $\bar{\mathcal{G}}$ and $d_i=0$ otherwise. Obviously, the structure of the graph $\bar{\mathcal{G}}$ can also be described by its Laplacian $\bar{\mathcal{L}}$. \end{itemize} \end{definition} It is noted that the graph describing the interaction topology of the nodes $\{0, 1, \cdots, N\}$ can vary with time. To account for this, we need to consider all possible graphs $\{\bar{\mathcal{G}}_p: p\in \mathcal {P}\}$, where $\mathcal {P}$ is an index set for all graphs defined on the nodes $\{0,1,\cdots,N\}$. Obviously, $\mathcal{P}$ is a finite set. We use $\{\mathcal{G}_p: p\in \mathcal {P}\}$ to denote the subgraphs defined on the vertices $\{1,\cdots,N\}$. The dependence of the graphs upon time can be characterized by a switching law $\sigma: [0, \infty)\rightarrow \mathcal {P}$, which is a piecewise-constant and right-continuous map; that is, at each time $t$, the underlying graph is $\bar{\mathcal{G}}_{\sigma(t)}$.
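The constructions in Definition \ref{def2} can be traced on a small example. The sketch below (an illustrative aside; the three-follower graph is hypothetical) builds $\mathcal{A}$, $\mathcal{L}$, $\mathcal{D}$ and $\mathcal{H}$, and checks that the rows of $\mathcal{L}$ sum to zero and that, for this connected $\bar{\mathcal{G}}$, the eigenvalues of $\mathcal{H}$ have positive real parts:

```python
import numpy as np

# Followers {1, 2, 3}; adjacency entry a_{ij} = 1 iff (j, i) is an edge,
# i.e. iff agent j sends information to agent i.  Edges: 1 -> 2, 2 -> 3.
Adj = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])

Lam = np.diag(Adj.sum(axis=1))  # in-degree matrix, |N_i| on the diagonal
L = -Adj + Lam                  # Laplacian  L = -A + Lambda
D = np.diag([1.0, 0.0, 0.0])    # leader 0 sends information to follower 1 only
H = L + D

assert np.allclose(L.sum(axis=1), 0.0)        # rows of the Laplacian sum to zero
assert np.linalg.eigvals(H).real.min() > 0.0  # connected graph: Re(lambda) > 0
```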
For each agent $i\in \{1, \cdots, N\}$, if agent $j$ is a neighbor of agent $i$, the relative information $x_j-x_i$ is fed back to agent $i$ through a gain matrix $K$ to be designed later. The leader-following consensus problem consists of designing for each agent $i\in \{1, \cdots, N\}$ a distributed protocol, which is a linear feedback or a dynamical feedback of \begin{eqnarray}\label{protocol} z_i=\sum_{j\in \mathcal{N}_i(t)}(x_j-x_i)+d_i(t)(x_0-x_i), \end{eqnarray} such that the closed-loop system (\ref{2.1})-(\ref{2.2}) achieves the following collective behavior: \begin{eqnarray}\label{leaderfollowing} \lim_{t\rightarrow \infty}\|x_i(t)-x_0(t)\|=0, \quad i=1, \cdots, N. \end{eqnarray} To solve the leader-following consensus problem, the following assumption is made throughout this paper. \begin{assumption}\label{stabilizable} The pair $(A,B)$ is stabilizable. \end{assumption} The following result presents an averaging method for the stability of fast time-varying linear systems. The general result for nonlinear systems can be found in \cite{aeyels1999,kosut1987,bellman1985}. For convenience of later use, we rewrite the result in the following form. \begin{lemma}\label{lemma2} Consider a linear time-varying system $\dot x(t)=A(t)x(t)$ with $A(\cdot): \mathbb{R}\rightarrow \mathbb{R}^{n\times n}$. If there exists an increasing sequence of times $t_k, k\in \mathbb{Z}$, with $t_k \rightarrow +\infty$ as $k\rightarrow +\infty$, $t_k \rightarrow -\infty$ as $k\rightarrow -\infty$, and $t_{k+1}-t_k\leq T$ for some $T>0$, such that the averaged systems \begin{eqnarray} \dot {\bar x}(t)=\bar A_k \bar x(t), \quad \bar A_k=\frac{\int_{t_k}^{t_{k+1}}A(t)dt}{t_{k+1}-t_k}, \quad k\in \mathbb{Z}, \end{eqnarray} are asymptotically stable, then there exists $\alpha^*>0$ such that the fast time-varying system \begin{eqnarray} \dot x(t)=A(\alpha t)x(t) \end{eqnarray} is asymptotically stable for all $\alpha> \alpha^*$.
\end{lemma} \begin{remark}\label{remark1} It has been shown in \cite[Remark 4]{aeyels1999} that the value $\alpha^*$ can be estimated from $T$ by solving the equation \begin{eqnarray}\label{alpha} e^{\frac{KT}{\alpha}}\frac{T}{\alpha}=\frac{1}{K}\left(-1+\sqrt{1+\frac{v}{K_vKT}}\right) \end{eqnarray} for $\alpha$, where $T>0$ is defined above and $K_v>0, K>0, v>0$ are parameters which can be determined from the system matrix; furthermore, for every $T>0$ and $K_v>0, K>0, v>0$, this equation has exactly one positive solution $\alpha$. Now, fixing $K_v>0, K>0, v>0$, we show that as $T\rightarrow 0$ the corresponding solution $\alpha=\alpha(T)\rightarrow 0$; indeed, $T\rightarrow 0$ makes the right-hand side of (\ref{alpha}) go to infinity, thus requiring $\frac{T}{\alpha}$ on the left-hand side of (\ref{alpha}) to go to infinity, which results in $\alpha \rightarrow 0$. Therefore, appropriately choosing a small $T>0$ gives a solution $\alpha=\alpha^*<1$. \end{remark} The following rank property of the Kronecker product will be used. The proof is straightforward and is thus omitted. \begin{lemma}\label{kron} For any matrices $P, Q_1, \cdots, Q_n$ of appropriate dimensions, the following property holds: \begin{eqnarray*} rank({P \otimes \left( \begin{array}{c} Q_1 \\ Q_2 \\ \vdots\\ Q_n \end{array} \right)}) =rank( \left( \begin{array}{c} P\otimes Q_1 \\ P\otimes Q_2 \\ \vdots\\ P\otimes Q_n \end{array} \right)) \end{eqnarray*} \end{lemma} The following result will also be used later. \begin{lemma}\label{lemma4} Consider the $n$-dimensional differential system $\dot x(t)=A_1x(t)+A_2y(t)$ with $A_1\in \mathbb{R}^{n \times n},A_2\in \mathbb{R}^{n \times m}$, and $y(t)\in \mathbb{R}^m$. If $A_1$ is Hurwitz and $\lim_{t\rightarrow \infty}y(t)=0$, then $\lim_{t\rightarrow \infty}x(t)=0$. \end{lemma} {\bf Proof:} Let $x(t,x_0,y(t))$ denote the solution of $\dot x(t)=A_1x(t)+A_2y(t)$ with initial state $x_0$ at $t=0$.
Since $A_1$ is Hurwitz, there exist positive numbers $\alpha$, $\gamma_1$ and $\gamma_2$ such that \begin{eqnarray*} \|x(t,x_0,y(t))\|\leq \gamma_1 \|x_0\|e^{-\alpha t}+\gamma_2 \|y(t)\|_{\infty}, \end{eqnarray*} where $\|y(t)\|_{\infty}={\rm ess\,sup}_{t\geq 0}\|y(t)\|$. Since $\lim_{t\rightarrow \infty}y(t)=0$, for any $\varepsilon>0$ there exists a $T>0$ such that $\gamma_2\sup_{t\geq T}\|y(t)\|<\varepsilon /2$. Applying the above estimate on $[T,\infty)$ with initial state $x(T)$, and noting that $\gamma_1 \|x(T)\|e^{-\alpha (t-T)}<\varepsilon /2$ for all sufficiently large $t$, we conclude that $\|x(t,x_0,y(t))\|<\varepsilon$ for all sufficiently large $t$. This completes the proof. $\blacksquare$ \section{Leader-Following Consensus of Multiple LTI Systems} This section presents the leader-following consensus of multiple stabilizable LTI systems under switching topology. Unlike most results in the literature, we do not impose the assumption that $A$ is neutrally stable. For completeness, we first review a result from \cite{ni2010} for the case when the graph is fixed and undirected. \begin{theorem} For the multi-agent system {\rm(\ref{2.1})}-{\rm(\ref{2.2})} associated with a connected graph $\bar{\mathcal{G}}$ under Assumption {\rm\ref{stabilizable}}, let $P>0$ be a solution to the Riccati inequality \begin{eqnarray}\label{riccati} PA+A^TP-2\delta PBB^TP+I_n<0, \end{eqnarray} where $\delta$ is the smallest eigenvalue of the structure matrix $\mathcal H$ of the graph $\bar{\mathcal{G}}$ (which is shown to be positive therein); then under the control law $u_i=Kz_i$ with $K=B^TP$ all the agents follow the leader from any initial conditions. \end{theorem} We now treat the leader-following consensus problem under switching topologies and in the directed graph case. Denoting the state error between agent $i$ and the leader by $\varepsilon_i=x_i-x_0$, the dynamics of $\varepsilon_i$ is \begin{eqnarray*} \dot \varepsilon_i &=& A\varepsilon_i+Bu_i\\ &=& A\varepsilon_i+BK\sum_{j\in \mathcal{N}_i(t)}(\varepsilon_j-\varepsilon_i)-BKd_i(t)\varepsilon_i, \quad i=1,\cdots,N.
\end{eqnarray*} By introducing $\varepsilon=(\varepsilon_1^T, \varepsilon_2^T, \cdots,\varepsilon_N^T)^T$, one has \begin{eqnarray}\label{error} \dot \varepsilon &=& (I_N \otimes A)\varepsilon-(I_N \otimes B) (\mathcal{L}_{\sigma(t)}\otimes I_m) (I_N \otimes K)\varepsilon-(I_N \otimes B)(\mathcal{D}_{\sigma(t)}\otimes I_m) (I_N \otimes K)\varepsilon \nonumber\\ &=& [I_N \otimes A-(\mathcal{L}_{\sigma(t)}+\mathcal{D}_{\sigma(t)})\otimes (BK)]\varepsilon \nonumber\\ &=& [I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (BK)]\varepsilon. \end{eqnarray} The remaining issue is to find conditions on the switching topologies (i.e., conditions on the switching law $\sigma$) under which one can synthesize a feedback gain matrix $K$ such that the zero solution of system (\ref{error}) is asymptotically stable. As treated in \cite{hong2007}, consider an infinite sequence of nonempty, bounded and contiguous time intervals $[t_k, t_{k+1}), k=0,1,\cdots,$ with $t_0=0$, $t_{k+1}-t_k\leq T$ for some constant $T>0$. Suppose that in each interval $[t_k, t_{k+1})$ there is a sequence of $m_k$ nonoverlapping subintervals \begin{eqnarray*} [t_k^1, t_k^2), \cdots, [t_k^j, t_k^{j+1}), \cdots, [t_k^{m_k}, t_k^{m_k+1}), \quad t_k=t_k^1, \quad t_{k+1}=t_k^{m_k+1}, \end{eqnarray*} satisfying $t_k^{j+1}-t_k^j\geq \tau, 1\leq j\leq m_k$, for a given constant $\tau >0$, such that during each of these subintervals the interconnection topology does not change. That is, during each time interval $[t_k^j, t_k^{j+1})$, the graph $\bar{\mathcal{G}}_{\sigma(t)}$ is fixed, and we denote it by $\bar{\mathcal{G}}_{k_j}$. The number $\tau$ is usually called the minimal dwell time of the graphs. The constant $\tau >0$ can be arbitrarily small, and the existence of such a number ensures that the Zeno phenomenon does not occur. During each time interval $[t_k, t_{k+1})$, some or all of $\bar{\mathcal{G}}_{k_j}, j=1,\cdots, m_k$, are permitted to be disconnected.
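The reduction to (\ref{error}) rests on the mixed-product property of the Kronecker product, $(I_N\otimes B)(\mathcal{H}_{\sigma(t)}\otimes I_m)(I_N\otimes K)=\mathcal{H}_{\sigma(t)}\otimes(BK)$. A quick numerical check (an illustrative aside with randomly generated matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 3, 2, 1                  # 3 agents, state dimension 2, input dimension 1
B = rng.standard_normal((n, m))
K = rng.standard_normal((m, n))
H = rng.standard_normal((N, N))    # stands in for H_sigma = L_sigma + D_sigma

# Mixed-product property: (I (x) B)(H (x) I)(I (x) K) = H (x) (BK).
lhs = np.kron(np.eye(N), B) @ np.kron(H, np.eye(m)) @ np.kron(np.eye(N), K)
rhs = np.kron(H, B @ K)
assert np.allclose(lhs, rhs)
```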
We only require the graphs to be jointly connected, which is defined as follows: \begin{definition}{\bf (Joint Connectivity)} \begin{itemize} \item The union of a collection of graphs is a graph whose vertex and edge sets are the unions of the vertex and edge sets of the graphs in the collection. \item The graphs are said to be jointly connected across the time interval $[t, t+T], T>0$, if the union of the graphs $\{\bar{\mathcal{G}}_{\sigma(s)}: s\in [t, t+T]\}$ is connected. \end{itemize} \end{definition} \begin{assumption}\label{jc} The graphs $\bar{\mathcal{G}}_{\sigma(t)}$ are jointly connected across each interval $[t_k, t_{k+1}), k=0,1,\cdots$, with their lengths being uniformly upper-bounded by a positive number $T$ and lower-bounded by a positive number $\tau$. \end{assumption} The following lemma gives a property of jointly connected graphs. When the graph is undirected, this result has been reported in \cite{hong2007,hong2007b}. We show that this result is still valid when the graph is directed; its proof is given in the appendix. \begin{lemma}\label{lemma1} Let the matrices $\mathcal{H}_{1},\cdots,\mathcal{H}_{m}$ be associated with the graphs $\bar{\mathcal{G}}_{1},\cdots, \bar{\mathcal{G}}_m$, respectively. If these graphs are jointly connected, then \\ (1) all the eigenvalues of $\sum_{i=1}^{m} \mathcal{H}_i$ have positive real parts;\\ (2) all the eigenvalues of $\sum_{i=1}^{m} \tau_i \mathcal{H}_i$ have positive real parts, where $\tau_i>0$ and $\sum_{i=1}^m \tau_i=1$.
\end{lemma} With this, the averaging method of Lemma \ref{lemma2} is applied to study the stability of system (\ref{error}), whose averaged system during each time interval $[t_k, t_{k+1}), k=0, 1, \cdots$, is \begin{eqnarray}\label{averagesystem} \dot {\bar x}=\bar A_k \bar x \end{eqnarray} with \begin{eqnarray*} \bar A_k &=&\frac{\int_{t_k}^{t_{k+1}}[I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (BK)]dt}{t_{k+1}-t_k}\\ &=& I_N \otimes A-\bar{\mathcal{H}}_{[t_k,t_{k+1})} \otimes (BK), \end{eqnarray*} where $\bar{\mathcal{H}}_{[t_k,t_{k+1})}=\sum_{j=1}^{m_k}\tau_{k_j}\mathcal{H}_{k_j}$ with $\tau_{k_j}=(t_k^{j+1}-t_k^j)/(t_{k+1}-t_k)$, $j=1, \cdots, m_k$. Denote by $Re\lambda_{min}(\cdot)$ the least real part of the eigenvalues of a matrix, and define \begin{eqnarray}\label{delta} \bar \delta=\min \big\{\inf_{\tiny{(\tau_{k_1}, \cdots, \tau_{k_{m_k}})\in \Gamma_k}}Re\lambda_{min} (\bar{\mathcal{H}}_{[t_k,t_{k+1})})|k=0,1,\cdots \big \}, \end{eqnarray} where \begin{eqnarray*} \Gamma_k=\{(\tau_{k_1}, \cdots, \tau_{k_{m_k}})\ |\ \sum_{j=1}^{m_k} \tau_{k_j}=1, \ \tau/T \leq \tau_{k_j} \leq 1, \ j=1,\cdots, m_k\}. \end{eqnarray*} Noting that $Re\lambda_{min}(\bar{\mathcal{H}}_{[t_k,t_{k+1})})$ depends continuously on $\tau_{k_1}, \cdots, \tau_{k_{m_k}}$ and that the set $\Gamma_k$ is compact, by referring to Lemma \ref{lemma1} one has \begin{eqnarray*} \inf_{(\tau_{k_1}, \cdots, \tau_{k_{m_k}})\in \Gamma_k}Re\lambda_{min} (\bar{\mathcal{H}}_{[t_k,t_{k+1})}) &=&Re\lambda_{min}(\tau_{k_1}^* \mathcal{H}_{k_1}+\cdots+\tau_{k_{m_k}}^*\mathcal{H}_{k_{m_k}} )>0, \end{eqnarray*} which, together with the fact that the set in (\ref{delta}) is finite due to the finiteness of the number of possible graphs, implies that $\bar \delta$ in (\ref{delta}) is a positive number. Then leader-following consensus can be achieved through the following theorem.
\begin{theorem}\label{theorem2} For the multi-agent system {\rm(\ref{2.1})}-{\rm(\ref{2.2})} under Assumption {\rm\ref{stabilizable}}, associated with the switched graphs $\bar{\mathcal{G}}_{\sigma(t)}$ under Assumption {\rm\ref{jc}} with $T$ small enough, let $P>0$ be a solution to the Riccati inequality \begin{eqnarray}\label{riccati2} PA+A^TP-2 \bar \delta PBB^TP+I_n<0; \end{eqnarray} then under the control law $u_i=Kz_i$ with $K=B^TP$ all the agents follow the leader from any initial conditions. \end{theorem} {\bf Proof:} We first prove that for each $k=0,1,\cdots$, the averaged system (\ref{averagesystem}) is asymptotically stable. To this end, let $T_k \in \mathbb{C}^{N\times N}$ be a unitary matrix such that $T_k \bar{\mathcal H}_{[t_k,t_{k+1})}T^*_k=\bar \Lambda_k$ is an upper triangular matrix with diagonal elements $\bar\lambda_1^k, \cdots, \bar\lambda_N^k$, the eigenvalues of $\bar{\mathcal H}_{[t_k,t_{k+1})}$, where $T^*_k$ denotes the Hermitian adjoint of the matrix $T_k$. Setting $\tilde{x}=(T_k\otimes I_n)\bar x$, (\ref{averagesystem}) becomes \begin{eqnarray}\label{transx} \dot{\tilde x}=(I_N \otimes A-\bar\Lambda_k \otimes BK) \tilde x. \end{eqnarray} The stability of (\ref{transx}) is equivalent to the stability of its diagonal system \begin{eqnarray}\label{diasystem} \dot{\tilde x}=[I_N \otimes A-diag(\bar\lambda_1^k,\cdots, \bar\lambda_N^k) \otimes BK] \tilde x, \end{eqnarray} or, equivalently, to the stability of the following $N$ systems \begin{eqnarray} \dot{\tilde x}_i=(A-\bar\lambda_i^kBB^TP)\tilde x_i, \quad i=1,\cdots, N.
\end{eqnarray} Writing $\bar\lambda_i^k=\bar\mu_i^k+\jmath \bar \nu_i^k$, where $\jmath^2=-1$, we have \begin{eqnarray*} &&P(A-\bar\lambda_i^kBB^TP)+(A-\bar\lambda_i^kBB^TP)^*P\\ &=&P[A-(\bar\mu_i^k+\jmath \bar\nu_i^k)BB^TP]+[A-(\bar\mu_i^k+\jmath \bar\nu_i^k)BB^TP]^*P\\ &=&PA+A^TP-2\bar\mu_i^k PBB^TP\\ &\leq &PA+A^TP-2 \bar\delta PBB^TP \qquad (\mbox{since } \bar\mu_i^k\geq \bar\delta)\\ &<& -I_n<0. \end{eqnarray*} Therefore system (\ref{averagesystem}) is globally asymptotically stable for each $k=0,1,\cdots$. Using Lemma \ref{lemma2}, we conclude that there exists a positive $\alpha^*$, depending on $T$, such that for all $\alpha > \alpha^*$ the switching system \begin{eqnarray} \label{scale} \dot \varepsilon(t) = [I_N \otimes A-\mathcal{H}_{\sigma(\alpha t)}\otimes (BK)]\varepsilon(t) \end{eqnarray} is asymptotically stable. According to Remark \ref{remark1}, $\alpha^*$ can be made smaller than one by choosing $T$ small enough; we may then pick $\alpha=1$. That is, system (\ref{error}) is asymptotically stable, which implies that leader-following consensus is achieved. $\blacksquare$ Although the exact value of $\bar \delta$ is hard to obtain, this difficulty can be removed as follows. Note that for two positive parameters $\bar{\delta}^* < \bar{\delta}$, if $P>0$ is a solution of (\ref{riccati2}) for the parameter $\bar{\delta}^*$, then this $P$ is also a solution of (\ref{riccati2}) for the parameter $\bar{\delta}$. Thus we can compute a positive definite matrix $P$ from a small enough parameter $\bar{\delta}^*$, which is independent of global information. This treatment has the extra advantage that it makes the consensus control law truly distributed, since the feedback gain $K=B^TP$ does not involve global information. \begin{remark} During each interval $[t_k,t_{k+1})$, the total dwell time of the $m_k$ graphs is upper bounded by a positive number $T$, which is required to be appropriately small to make $\alpha^*<1$.
This means that the dwell time of each graph cannot exceed a certain bound. However, in \cite{ni2010} the dwell time for each graph can be arbitrary, since $T$ is not constrained there and can be chosen arbitrarily large. \end{remark} \begin{remark} Note that in (\ref{scale}) the switching signal $\sigma(\alpha t)$ and the state $\varepsilon(t)$ have different time scales, while our result is obtained for system (\ref{error}), in which $\sigma(t)$ and $\varepsilon(t)$ have the same time scale; thus the result in our paper is not limited to the fast switching case. This distinguishes this work from \cite{stilwell2006}. \end{remark} \begin{remark} It can be seen that if $P>0$ is a solution to (\ref{riccati}), then $\kappa P$, $\kappa \geq 1$, is also a solution to (\ref{riccati}). Indeed, $\kappa PA+\kappa A^TP-2 \kappa ^2 \bar \delta PBB^TP+ I_n$ $= \kappa (PA+ A^TP-2 \kappa \bar \delta PBB^TP+\frac{1}{\kappa} I_n)$ $\leq \kappa (PA+ A^TP-2 \bar \delta PBB^TP+ I_n)<0$. Therefore, $\kappa K$ is also a stabilizing feedback matrix, and $\kappa$ can be understood as a coupling strength. \end{remark} \section{Observer-Based Leader-Following Consensus} This section extends the result of the last section to observer-based leader-following consensus. Consider a multi-agent system consisting of $N$ agents and a leader. The leader agent, labeled as $i=0$, has the linear dynamics \begin{eqnarray}\label{leader} \begin{array}{lllll} \dot x_0=Ax_0,\\ y_0=Cx_0 \end{array} \end{eqnarray} where $y_0\in \mathbb{R}^p$ is the output of the leader. The dynamics of each follower agent, labeled as $i\in \{1, \cdots, N\}$, is \begin{eqnarray}\label{follower} \begin{array}{llllll} \dot x_i=Ax_i+Bu_i,\\ y_i=Cx_i \end{array} \end{eqnarray} where $y_i\in \mathbb{R}^p$ is agent $i$'s measured output, and $u_i\in \mathbb{R}^m$ is agent $i$'s input, through which the interaction or coupling with other agents is realized. More specifically, $u_i$ is a dynamical feedback of $z_i$.
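Both the state-feedback design of the previous section and the observer design below require a positive definite solution of a Riccati inequality of the form (\ref{riccati2}). In practice, such a $P$ can be obtained from the associated algebraic Riccati equation; the following sketch (an illustrative aside: the use of {\tt scipy} and the particular $A$, $B$ and $\bar\delta$ are our own assumptions) does this for a double integrator:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data: a double-integrator agent and a lower bound delta_bar.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
delta_bar = 0.5

# Solving  PA + A^T P - 2*delta_bar*P B B^T P + 2 I = 0  gives a P > 0 that
# satisfies the strict inequality  PA + A^T P - 2*delta_bar*P B B^T P + I < 0.
P = solve_continuous_are(A, B, 2.0 * np.eye(2), np.eye(1) / (2.0 * delta_bar))
K = B.T @ P  # the feedback gain used in the consensus protocol

M = P @ A + A.T @ P - 2.0 * delta_bar * P @ B @ B.T @ P + np.eye(2)
assert np.linalg.eigvalsh(M).max() < 0.0  # the Riccati inequality holds
```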
In this section, we assume \begin{assumption}\label{sta_det} The pair $(A,B)$ is stabilizable, and the pair $(A,C)$ is detectable. \end{assumption} The observer-based feedback controller is represented as \begin{eqnarray}\label{observerfeedback} \begin{array}{llllll} {\dot{\hat{\varepsilon}}}_i=A \hat{\varepsilon}_i+K_o(\hat z_i-z_i)+Bu_i,\\ u_i=F\hat {\varepsilon}_i, \end{array} \end{eqnarray} where \begin{eqnarray}\label{hatzi} \hat z_i=\sum_{j\in \mathcal{N}_i}(C\hat{\varepsilon}_j-C\hat{\varepsilon}_i)+d_iC\hat{\varepsilon}_i, \end{eqnarray} and the matrices $K_o$ and $F$ are to be designed later. \begin{remark} The term $z_i$ in (\ref{observerfeedback}) indicates that the observer takes as input the output information received from the agent's neighbors, and the term $\hat z_i$ indicates that the observer exchanges its state with its neighboring observers. That is, each observer is implemented according to its local sensing resources. Since $z_i$ and $\hat z_i$ are local, the observer is essentially distributed, and thus feeding the state of each observer back to the corresponding agent is again a distributed control scheme. \end{remark} By introducing the stacked vectors $\hat\varepsilon=(\hat\varepsilon_1^T, \cdots, \hat\varepsilon_N^T)^T$, $\hat z=(\hat z_1^T, \cdots, \hat z_N^T)^T$, and by using the structure matrices of graph $ \bar{ \mathcal{G}}_{\sigma(t)}$, one has \begin{eqnarray}\label{hateps} \dot {\hat\varepsilon} &=& (I_N \otimes A)\hat\varepsilon-[\mathcal{L}_{\sigma(t)}\otimes (K_oC)] \hat\varepsilon-[\mathcal{D}_{\sigma(t)}\otimes (K_oC)] \hat\varepsilon + \nonumber\\ && \hspace{3cm} [\mathcal{L}_{\sigma(t)}\otimes (K_oC)]\varepsilon+ [\mathcal{D}_{\sigma(t)}\otimes (K_oC)] \varepsilon +(I_N \otimes B)u \nonumber\\ &=& [I_N \otimes (A+BF)-\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\hat\varepsilon +[\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\varepsilon.
\end{eqnarray} Then \begin{eqnarray}\label{close} \begin{array}{llll} \dot { \varepsilon}=(I_N\otimes A)\varepsilon+ [I_N \otimes (BF)]\hat{\varepsilon},\\ \dot{\hat{\varepsilon}}=[\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\varepsilon+[I_N\otimes A+I_N\otimes (BF)-\mathcal{H}_{\sigma(t)}\otimes (K_oC)]\hat{\varepsilon}. \end{array} \end{eqnarray} Let $e=\hat{\varepsilon}-\varepsilon$, that is \begin{eqnarray*} \left( \begin{array}{c} \varepsilon \\ e \\ \end{array} \right) = \left( \begin{array}{cc} I_{nN} & 0 \\ -I_{nN} & I_{nN} \\ \end{array} \right) \left( \begin{array}{c} \varepsilon \\ \hat{\varepsilon} \\ \end{array} \right). \end{eqnarray*} Under this coordinate transformation, system (\ref{close}) becomes \begin{eqnarray}\label{system_xe} \left( \begin{array}{c} \dot \varepsilon \\ \dot e \\ \end{array} \right) = \left( \begin{array}{cc} I_N \otimes (A+BF) & I_N \otimes BF \\ 0 & I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (K_oC)\\ \end{array} \right) \left( \begin{array}{c} \varepsilon \\ e \\ \end{array} \right). \end{eqnarray} Therefore, the observer-based leader-following consensus problem consists in designing, under the jointly connected graph condition, matrices $K_o$ and $F$ such that system (\ref{system_xe}) is asymptotically stable. By the separation principle and by Lemma \ref{lemma4}, system (\ref{system_xe}) can be made asymptotically stable through the following two-step design: \begin{itemize} \item Design the matrix $K_o$ such that the switched system $\dot e =[I_N \otimes A-\mathcal{H}_{\sigma(t)}\otimes (K_oC)]e$ is asymptotically stable; \item Design the matrix $F$ such that $\dot \varepsilon =[I_N \otimes (A+BF)]\varepsilon$ is asymptotically stable. \end{itemize} The first step can be realized by applying Theorem \ref{theorem2} with the pair $(A, B)$ replaced by $(A^T,C^T)$. The second step is a standard state feedback control problem. We summarize the above analysis in the following theorem.
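Before stating the theorem, we note numerically why the two-step design works: (\ref{system_xe}) is block upper triangular, so its spectrum is the union of the spectra of the two diagonal blocks. A small illustration with a frozen graph (all matrices below, including the structure matrix, are our own toy choices, not the paper's data):

```python
import numpy as np

# Illustrative data: N = 2 agents, state dimension n = 2.
A   = np.array([[0.0, 1.0], [-1.0, 0.3]])
BF  = np.array([[0.0, 0.0], [-2.0, -1.3]])   # a stabilizing state feedback
KoC = np.array([[2.0, 0.0], [0.0, 2.0]])     # an output-injection term K_o C
H   = np.array([[2.0, -1.0], [-1.0, 1.0]])   # structure matrix of one graph

I_N = np.eye(2)
top_left  = np.kron(I_N, A + BF)
top_right = np.kron(I_N, BF)
bot_right = np.kron(I_N, A) - np.kron(H, KoC)

# Assemble the closed-loop matrix acting on (eps, e) and compare spectra.
M = np.block([[top_left, top_right],
              [np.zeros((4, 4)), bot_right]])
spec_M      = np.sort_complex(np.linalg.eigvals(M))
spec_blocks = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(top_left), np.linalg.eigvals(bot_right)]))
assert np.allclose(spec_M, spec_blocks)
```

Hence stabilizing each diagonal block separately stabilizes the whole interconnection.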
The rest of its proof is essentially similar to that of Theorem \ref{theorem2}, and is thus omitted to save space. \begin{theorem}\label{theorem4} Consider the multi-agent systems (\ref{leader})--(\ref{follower}) associated with switching graphs $\bar{\mathcal{G}}_{\sigma(t)}$ under Assumptions \ref{jc} and \ref{sta_det} with $T$ small enough. Let $P>0$ be a solution to the Riccati inequality \begin{eqnarray}\label{riccati} PA^T+AP-2\bar \delta PC^TCP+I_n<0. \end{eqnarray} Then under the control law {\rm(\ref{observerfeedback})} with $K_o=PC^T$ and $F$ being such that $A+BF$ is Hurwitz, all the agents follow the leader from any initial conditions. \end{theorem} \section{Simulation Results} In this section, we give two examples to illustrate the validity of the results. Consider a multi-agent system consisting of a leader and four agents. Assume the system matrices are \begin{eqnarray*} A=\left( \begin{array}{ccc} 0.5548 & -0.5397 & -0.0757\\ 0.3279 & -0.0678 & -0.4495\\ -0.0956 & -0.6640 & 0.0130 \end{array} \right), B=\left( \begin{array}{cc} 3 & 5\\ 3 & -2\\ -8 & -8 \end{array} \right), C=\left( \begin{array}{ccc} 1 & -1 & 2\\ -4 & 2 & -3 \end{array} \right). \end{eqnarray*} We suppose that the possible interaction graphs are $\{\bar{\mathcal{G}}_1,\bar{\mathcal{G}}_2,\bar{\mathcal{G}}_3,\bar{\mathcal{G}}_4,\bar{\mathcal{G}}_5,\bar{\mathcal{G}}_6\}$, shown in Figure {\rm\ref{topology}}, and that the interaction graph switches as $\bar{\mathcal{G}}_1\rightarrow\bar{\mathcal{G}}_2\rightarrow\bar{\mathcal{G}}_3\rightarrow\bar{\mathcal{G}}_4 \rightarrow\bar{\mathcal{G}}_5\rightarrow\bar{\mathcal{G}}_6\rightarrow \bar{\mathcal{G}}_1\rightarrow \cdots$, with each graph active for $1/2$ second. Since the union graphs $\bar{\mathcal{G}}_1\cup \bar{\mathcal{G}}_2\cup \bar{\mathcal{G}}_3$ and $\bar{\mathcal{G}}_4\cup \bar{\mathcal{G}}_5\cup \bar{\mathcal{G}}_6$ are connected, we can choose $t_{k+1}=t_k+3/2$ with $t_0=0$, and $t_k^0=t_k$, $t_k^1=t_k+1/2$, $t_k^2=t_k+1$, $t_k^3=t_k+3/2$, $k=0,1,\cdots$. We choose the small parameter $\bar \delta=\frac{1}{3}\min(0.3820, 0.1732)\approx 0.0577$.
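The gain computations for this example can be reproduced along the following lines. This is only a sketch: the pole locations used for $F$ are our own choice, and different solvers return different (equally valid) gains, so the matrices need not coincide with those reported next.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.signal import place_poles

A = np.array([[ 0.5548, -0.5397, -0.0757],
              [ 0.3279, -0.0678, -0.4495],
              [-0.0956, -0.6640,  0.0130]])
B = np.array([[ 3.0,  5.0], [ 3.0, -2.0], [-8.0, -8.0]])
C = np.array([[ 1.0, -1.0,  2.0], [-4.0,  2.0, -3.0]])
delta_bar = (1 / 3) * min(0.3820, 0.1732)

# Observer gain: solve A P + P A^T - 2*delta_bar*P C^T C P + 2 I = 0, so the
# Riccati inequality of Theorem 4 holds strictly; then K_o = P C^T.
P = solve_continuous_are(A.T, C.T, 2 * np.eye(3), np.eye(2) / (2 * delta_bar))
Ko = P @ C.T
lhs = P @ A.T + A @ P - 2 * delta_bar * P @ C.T @ C @ P + np.eye(3)
assert np.linalg.eigvalsh(lhs).max() < 0

# State feedback: any F with A + B F Hurwitz (pole locations chosen arbitrarily).
F = -place_poles(A, B, [-1.0, -2.0, -3.0]).gain_matrix
assert np.linalg.eigvals(A + B @ F).real.max() < 0
```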
The matrix $K$ in Theorem \ref{theorem2} and the matrices $K_o$ and $F$ in Theorem \ref{theorem4} are calculated as \begin{eqnarray*} K=\left( \begin{array}{ccc} 0.7520 & 5.9852 & -2.7041\\ 12.6966 & -3.8441 & 1.6419 \end{array} \right) \end{eqnarray*} and \begin{eqnarray*} F=\left( \begin{array}{ccc} 0.6338 & -0.5087 & 0.3731\\ -0.9077 & 0.4509 & -0.1938 \end{array} \right), K_o=\left( \begin{array}{ccc} -6.7092 & -9.1532\\ -9.6111 & 4.1353\\ 7.7514 & -1.1756 \end{array} \right), \end{eqnarray*} respectively. With the same initial condition, the simulation results for Theorem \ref{theorem2} and Theorem \ref{theorem4} are shown in Figure \ref{figth2} and Figure \ref{figth4}, respectively. \section{Conclusions} This paper presents an averaging approach to the leader-following consensus problem of multi-agent systems with linear dynamics, without imposing a neutral stability condition on the agent dynamics. The interaction topology is switching. The proposed protocols force the follower agents to follow the independent leader trajectory. The result is extended to observer-based protocol design. Such a design separates into a two-step procedure: design asymptotically stable distributed observers and asymptotically stable observer-state-feedback protocols. \section*{Appendix} The appendix is devoted to the proof of Lemma \ref{lemma1}. To this end, we first cite the following result. \begin{lemma}(Ger\v{s}gorin)\cite{horn1985} For any matrix $G=[g_{ij}]\in \mathbb{R}^{N\times N}$, all the eigenvalues of $G$ are located in the union of $N$ Ger\v{s}gorin discs \begin{eqnarray*} Ger(G):=\cup_{i=1}^{N}\{z\in \mathbb{C}: |z-g_{ii}|\leq \sum_{j\neq i}|g_{ij}|\}. \end{eqnarray*} \end{lemma} The notion of a weighted graph will also be used in what follows. If we assign each edge $(i,j)$ of graph $\bar{\mathcal{G}}$ a weight $w_{ij}$, we obtain a weighted graph $\bar{\mathcal{G}}_{\mathcal W}=(\bar{\mathcal{V}}, \bar{\mathcal{E}},\bar{\mathcal W})$, where $\bar{\mathcal{W}}=[w_{ij}]$.
For a graph $\bar{\mathcal{G}}$ and any positive number $k>0$, the graph $k\bar{\mathcal{G}}$ is defined to be the weighted graph obtained from $\bar{\mathcal{G}}$ by assigning the weight $k$ to each existing edge of $\bar{\mathcal{G}}$. For two graphs $\bar{\mathcal{G}}_1$ and $\bar{\mathcal{G}}_2$, their union is a weighted graph in which the weight of edge $(i,j)$ is the sum of the weights of the two edges $(i,j)$ in the graphs $\bar{\mathcal{G}}_1$ and $\bar{\mathcal{G}}_2$, respectively. The weighted Laplacian of graph $\bar{\mathcal{G}}_{_{\mathcal{W}}}$ is defined as $\bar{\mathcal{L}}_{_{\mathcal{W}}}=-\bar{\mathcal{A}}_{_{\mathcal{W}}}+\bar{\Lambda}_{_{\mathcal{W}}}$, where $\bar{\mathcal{A}}_{_{\mathcal{W}}}=[w_{ij}a_{ij}]$ and $\bar{\Lambda}_{_{\mathcal{W}}}(i,i)=\sum_{j\neq i}w_{ij}a_{ij}$; the weighted structure matrix of graph $\bar{\mathcal{G}}_{_{\mathcal{W}}}$ is defined as $\mathcal{H}_{_{\mathcal{W}}}=\mathcal{L}_{_{\mathcal{W}}}+\mathcal{D}_{_{\mathcal{W}}}$, where $\mathcal{L}_{_{\mathcal{W}}}$ is the weighted Laplacian of the subgraph $\mathcal{G}_{_{\mathcal{W}}}$ of $\bar{\mathcal{G}}_{_{\mathcal{W}}}$, and $\mathcal{D}_{_{\mathcal{W}}}=\mathrm{diag}(w_{_{01}}d_1,\cdots, w_{_{0N}}d_N)$. {\bf Proof of Lemma \ref{lemma1}:} (1) Denote the Laplacian matrix and the structure matrix of graph $\bar{\mathcal{G}}_i$ by $\bar{\mathcal{L}}_i$ and $\mathcal{H}_i$ respectively, and denote the Laplacian matrix of graph $\mathcal{G}_i$ by $\mathcal{L}_i$. We first prove the case $m=1$. By the definitions, one can easily verify the relationship \begin{eqnarray*} \bar{\mathcal{L}}_1=\begin{pmat}({|..}) 0 & 0 & \cdots & 0 \cr\- -d_1 & & & \cr \vdots & & \mathcal{H}_1& \cr -d_N & & & \cr \end{pmat}. \end{eqnarray*} Since the graph $\bar{\mathcal{G}}_1$ is connected, $\mathrm{rank}(\bar{\mathcal{L}}_1)=N$ \cite{ren2005}. Thus the sub-matrix $\bar{\mathcal {M}}_1$ formed by the last $N$ rows of $\bar{\mathcal{L}}_1$ has rank $N$.
Note that, since $\mathcal{L}_1$ has zero row sums, \begin{eqnarray*} \left( \begin{array}{c} -d_1 \\ \vdots \\ -d_N \\ \end{array} \right) = -\mathcal {D}_1 \left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right) = -\mathcal {L}_1\left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right) - \mathcal {D}_1 \left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right) = -\mathcal {H}_1 \left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right), \end{eqnarray*} that is, the first column of the matrix $\bar{\mathcal {M}}_1$ is a linear combination of its last $N$ columns. Therefore, $\mathrm{rank}(\mathcal {H}_1)=N$. Furthermore, we claim that the eigenvalues of the matrix $\mathcal {H}_1$ are located in the closed right half-plane; indeed, by the Ger\v{s}gorin theorem, all the eigenvalues of $\mathcal{H}_1$ are located in \begin{eqnarray*} Ger(\mathcal{H}_1)=\cup_{i=1}^{N}\left\{z\in \mathbb{C}: |z-l_{ii}-d_i|\leq |N_i|\right\}, \end{eqnarray*} and these discs lie in the closed right half-plane since $l_{ii}=|N_i|$ and $d_i\geq 0$. Moreover, the only point of these discs on the imaginary axis is the origin, and since $\mathrm{rank}(\mathcal{H}_1)=N$, zero is not an eigenvalue of $\mathcal{H}_1$. We thus conclude that all the eigenvalues of $\mathcal {H}_1$ have positive real parts.
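The $m=1$ conclusion can be confirmed numerically on any connected leader-follower graph; in the sketch below, the graph (a four-agent path with the leader attached to agent 1) is our own illustrative choice.

```python
import numpy as np

# Illustrative graph: followers 1-2-3-4 in a path, leader attached to agent 1.
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj        # Laplacian of the follower subgraph
D = np.diag([1.0, 0.0, 0.0, 0.0])         # leader links: d_1 = 1, others 0
H = L + D                                  # structure matrix H_1 = L_1 + D_1

eigs = np.linalg.eigvals(H)
assert eigs.real.min() > 0                 # all eigenvalues in the open RHP

# Gershgorin check: each eigenvalue lies in some disc centred at H[i, i].
radii = np.abs(H).sum(axis=1) - np.abs(np.diag(H))
for lam in eigs:
    assert any(abs(lam - H[i, i]) <= radii[i] + 1e-12 for i in range(4))
```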
Obviously, for the union $\bar{\mathcal{G}}_{_U}$ of a group of weighted graphs $\{\bar{\mathcal{G}}_1, \cdots, \bar{\mathcal{G}}_m\}$, its weighted Laplacian matrix $\bar{\mathcal{L}}_{_U}$ is the sum of the Laplacian matrices $\{\bar{\mathcal{L}}_1, \cdots, \bar{\mathcal{L}}_m\}$ of the graphs $\{\bar{\mathcal{G}}_1, \cdots, \bar{\mathcal{G}}_m\}$, and \begin{eqnarray*} \bar{\mathcal{L}}_U=\begin{pmat}({|..}) 0 & 0 & \cdots & 0 \cr\- -d_1^U & & & \cr \vdots & \mathcal{H}_1+ & \cdots & +\mathcal{H}_m \cr -d_N^U & & & \cr \end{pmat}, \end{eqnarray*} where \begin{eqnarray*} \left( \begin{array}{c} -d_1^U \\ \vdots \\ -d_N^U \\ \end{array} \right)= \left( \begin{array}{c} -d_1^1 \\ \vdots \\ -d_N^1 \\ \end{array} \right)+\cdots+ \left( \begin{array}{c} -d_1^m \\ \vdots \\ -d_N^m \\ \end{array} \right) \end{eqnarray*} with $(d_1^j, d_2^j, \cdots, d_N^j)^T$ being the diagonal elements of the matrix $\mathcal {D}_j, j=1,\cdots, m$. When the graphs are jointly connected, that is, when $\bar{\mathcal{G}}_{_U}$ is connected, the matrix $\bar{\mathcal{L}}_{_U}$ has a simple zero eigenvalue. Arguing in a manner similar to the $m=1$ case, it can be shown that all the eigenvalues of the matrix $\mathcal{H}_1+ \cdots +\mathcal{H}_m$ have positive real parts. (2) A discussion similar to that in (1), applied to the weighted graphs $\tau_1 \bar{\mathcal{G}}_1, \cdots, \tau_N \bar{\mathcal{G}}_N$, yields the conclusion. \begin{figure} \caption{Six possible interaction topologies between the leader and the agents.} \label{topology} \end{figure} \begin{figure} \caption{Simulation results for Theorem \ref{theorem2}.} \label{figth2} \end{figure} \begin{figure} \caption{Simulation results for Theorem \ref{theorem4}.} \label{figth4} \end{figure} \makecontacts \end{document}
\begin{document} \title{Exploiting algebraic structure in global optimization and the Belgian chocolate problem} \begin{abstract} The Belgian chocolate problem involves maximizing a parameter $\delta$ over a non-convex region of polynomials. In this paper we detail a global optimization method for this problem that outperforms previous such methods by exploiting underlying algebraic structure. Previous work has focused on iterative methods that, due to the complicated non-convex feasible region, may require many iterations or result in non-optimal $\delta$. By contrast, our method locates the largest known value of $\delta$ in a non-iterative manner. We do this by using the algebraic structure to go directly to large limiting values, reducing the problem to a simpler combinatorial optimization problem. While these limiting values are not necessarily feasible, we give an explicit algorithm for arbitrarily approximating them by feasible $\delta$. Using this approach, we find the largest known value of $\delta$ to date, $\delta = 0.9808348$. We also demonstrate that in low degree settings, our method recovers previously known upper bounds on $\delta$ and that prior methods converge towards the $\delta$ we find. \end{abstract} \section{Introduction}\label{sec:intro} Global optimization problems of practical interest can often be cast as optimization programs over non-convex feasible regions. Unfortunately, iterative optimization over such regions may require large numbers of iterations and result in non-global maxima. Finding all or even many critical points of such programs is generally an arduous, computationally expensive task. In this paper we show that by exploiting the underlying algebraic structure, we can directly find the largest known values of the Belgian chocolate problem, a famous open problem bridging optimization and control theory. Moreover, this algebraic method does not require any iterative approach. 
Instead of relying on eventual convergence, our method algebraically identifies points that provide the largest value of the Belgian chocolate problem so far. While this approach may seem foreign to the reader, we will show that our algebraic optimization method outperforms prior global optimization methods for solving the Belgian chocolate problem. We will contrast our method with the optimization method of Chang and Sahinidis \cite{chang2007global} in particular. Their method used iterative branch-and-reduce techniques \cite{ryoo1996branch} to find what was the largest known value of $\delta$ until our new approach. Due to the complicated feasible region, their method may take huge numbers of iterations or converge to suboptimal points. Our method eliminates the need for these expensive iterative computations by locating and jumping directly to the larger values of $\delta$. This approach has two primary benefits over \cite{chang2007global}. First, it allows us to more efficiently find $\delta$ as we can bypass the expensive iterative computations. This also allows us to extend our approach to cases that were not computationally tractable for \cite{chang2007global}. Second, our approach allows us to produce larger values of $\delta$ by finding a finite set of structured limit points. In low-degree cases, this set provably contains the supremum of the problem, while in higher degree cases, the set contains larger values of $\delta$ than found in \cite{chang2007global}. The Belgian chocolate problem is a famous open problem in control theory proposed by Blondel in 1994. In the language of control theory, Blondel wanted to determine the largest value of a process parameter for which stabilization of an unstable plant could be achieved by a stable minimum-phase controller \cite{blondel1994simultaneous}. 
Blondel designed the plant to be a low-degree system that was resistant to known stabilization methods, in the hope that a solution would lead to the development of new stabilization techniques. Specifically, Blondel wanted to determine the largest value of $\delta > 0$ for which the transfer function $P(s) = (s^2-1)/(s^2-2\delta s+1)$ can be stabilized by a proper, bistable controller. For readers unfamiliar with control theory, this problem can be stated in simple algebraic terms. To do so, we will require the notion of a {\it stable} polynomial. A polynomial is stable if all its roots have negative real part. The Belgian chocolate problem is then as follows.\\ \\ \noindent{\bf Belgian chocolate problem:} Determine for which $\delta > 0$ there exist real, stable polynomials $x(s), y(s), z(s)$ with $\deg (x) \geq \deg (y)$ satisfying \begin{equation}\label{bcp} z(s) = (s^2-2\delta s+1)x(s)+(s^2-1)y(s).\end{equation} We call such $\delta$ {\it admissible}. In general, stability of $x,y,z$ becomes harder to achieve the larger $\delta$ is. Therefore, we are primarily interested in the supremum of all admissible $\delta$. If we fix a maximum degree $n$ for $x$ and $y$, then this gives us the following global optimization problem for each $n$.\\ \\ \noindent{\bf Belgian chocolate problem} (optimization version): \begin{equation}\label{bcp_opt} \begin{aligned} & \underset{\delta, x(s), y(s)}{\text{maximize}} & & \delta \\ & \text{subject to} & & x, y, z\text{ are stable},\\ & & & z(s) = (s^2-2\delta s+1)x(s)+(s^2-1)y(s),\\ & & & \deg(y) \leq \deg(x) \leq n. \end{aligned} \end{equation} Note that we can view a degree $n$ polynomial with real coefficients as an $(n+1)$-dimensional real vector of its coefficients. Under this viewpoint, the space of polynomials $x, y, z$ that are stable and satisfy (\ref{bcp}) is an extremely complicated non-convex space. As a result, it is difficult to apply global optimization methods directly to this problem.
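Checking a candidate triple $(\delta, x, y)$ against this formulation is mechanical. A minimal sketch (the helper names are ours; polynomials are represented by coefficient arrays, highest degree first):

```python
import numpy as np

def is_stable(p):
    """True iff every root of p lies in the open left half-plane."""
    return bool(np.all(np.roots(p).real < 0)) if len(p) > 1 else True

def chocolate_z(delta, x, y):
    """z = (s^2 - 2*delta*s + 1) x + (s^2 - 1) y, in coefficient form."""
    return np.polyadd(np.polymul([1, -2 * delta, 1], x),
                      np.polymul([1, 0, -1], y))

delta = 0.9
assert is_stable([1, 2 * delta, 1])        # s^2 + 2*delta*s + 1 is stable
assert not is_stable([1, -2 * delta, 1])   # s^2 - 2*delta*s + 1 is not
```

A candidate $(\delta, x, y)$ is then feasible for (\ref{bcp_opt}) exactly when `is_stable` holds for `x`, `y`, and `chocolate_z(delta, x, y)`.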
The formulation above does suggest an undercurrent of algebra in this problem. This will be exploited to transform the problem into a combinatorial optimization problem by finding points that are essentially local optima. Previous work has employed various optimization methods to find ever larger admissible $\delta$. Patel et al.~\cite{patel2002some} were the first to show that $\delta = 0.9$ is admissible by $x, y$ of degree at most 11, answering a long-standing question of Blondel. They further showed that $\delta = 0.93720712277$ is admissible. In 2005, Burke et al. \cite{burke2005analysis} showed that $\delta = 0.9$ is admissible with $x,y$ of degree at most 3. They also improved the record to $\delta = 0.94375$ using gradient sampling techniques. In 2007, Chang and Sahinidis used branch-and-reduce techniques to find admissible $\delta$ as large as $0.973974$ \cite{chang2007global}. In 2012, Boston used algebraic techniques to give examples of admissible $\delta$ up to 0.97646152 \cite{boston2012belgian}. Boston found polynomials that are almost stable and satisfy (\ref{bcp}), and then used ad hoc methods to perturb these into stable $x,y,z$ satisfying (\ref{bcp}). While effective, no systematic method for perturbing these polynomials to find stable ones was given. In this paper, we extend the approach used by Boston in 2012 \cite{boston2012belgian} to achieve the largest known value of $\delta$ so far. We will refer to this method as the method of {\it algebraic specification}. We show that these almost stable polynomials serve as limiting values of the optimization program. Empirically, these almost stable polynomials achieve the supremum over all feasible $\delta$. Furthermore, we give a theoretically rigorous method for perturbing the almost stable polynomials produced by algebraic specification to obtain stable polynomials. Our approach shows that all $\delta \leq 0.9808348$ are admissible.
This gives the largest known admissible value of $\delta$ to date. We further show that previous global optimization methods are tending towards the limiting values of $\delta$ found via our optimization method. We do not assume any familiarity on the reader's part with algebra or control theory and will introduce all relevant notions. While we focus on the Belgian chocolate problem throughout the paper, we emphasize that the general theme of this paper concerns the underlying optimization program. We aim to illustrate that by considering the algebraic structure contained within an optimization problem, we can develop better global optimization methods. \section{Motivation for our approach}\label{motivation}\label{sec:motivation} In order to explain our approach, we will discuss previous approaches to the Belgian chocolate problem in more detail. Such approaches typically perform iterative non-convex optimization in the space of stable controllers in order to maximize $\delta$. In \cite{chang2007global}, Chang and Sahinidis formulated, for each $n$, a non-convex optimization program that sought to maximize $\delta$ subject to the polynomials $x, y, (s^2-2\delta s +1)x+(s^2-1)y$ being stable and such that $n \geq \deg(x) \geq \deg(y)$. For notational convenience, we will always define $z = (s^2-2\delta s + 1)x+(s^2-1)y$. Chang and Sahinidis used branch-and-reduce techniques to attack this problem for $n$ up to 10. Upon examining the roots of the $x,y,z$ they found for $\deg(x) = 6,8,10$, a pattern emerges. Almost all the roots of these polynomials are close to the imaginary axis and are close to a few other roots. In fact, most of these roots have real part in the interval $(-0.01,0)$. In other words, the $x,y,z$ are approximated by polynomials with many repeated roots on the imaginary axis. It is also worth noting that the only roots of $x$ not close to the imaginary axis are very close to $-\delta \pm \sqrt{\delta^2-1}$.
This suggests that $x$ should have a factor close to $(s^2+2\delta s+1)$. This motivates the following approach. Instead of using non-convex optimization to iteratively push $x,y,z$ towards polynomials possessing repeated roots on the imaginary axis, we will algebraically construct polynomials with this property. This will allow us to immediately find large limit points of the optimization problem in (\ref{bcp_opt}). While the $x,y,z$ we construct are not stable, they are close to being stable. We will show later that we can perturb $x,y,z$ and thereby push their roots just to the left of the imaginary axis, causing them to be stable. This occurs at the expense of decreasing $\delta$ by an arbitrarily small amount. Our method only requires examining finitely many such limit points. Moreover, for reasonable degrees of $x$ and $y$, these limit points can be found relatively efficiently. By simply checking each of these limit points, we reduce to a combinatorial optimization problem. This combinatorial optimization problem provably achieves the supremal values of $\delta$ for $\deg(x) \leq 4$. For higher degree $x$, our method finds larger values of $\delta$ than any previous optimization method thus far. In the sections below we will further explain and motivate our approach, and show how this leads to the largest admissible $\delta$ found up to this point. \section{Main results}\label{sec:math_back} \subsection{Preliminaries} Given $t \in \mathbb{C}$, we let $\text{Re}(t)$ denote its real part. We will let $\mathbb{R}[s]$ denote the set of polynomials in $s$ with real coefficients. For $p(s) \in \mathbb{R}[s]$, we call $p(s)$ {\it stable} if every root $t$ of $p$ satisfies $\text{Re}(t) < 0$. We let $H$ denote the set of all stable polynomials in $\mathbb{R}[s]$. We call $p(s)$ {\it quasi-stable} if every root $t$ of $p$ satisfies $\text{Re}(t) \leq 0$. We let $\overline{H}$ denote the set of quasi-stable polynomials of $\mathbb{R}[s]$.
We let $H^m, \overline{H^m}$ denote the sets of stable and quasi-stable polynomials respectively of degree at most $m$. \begin{definition}We call $\delta$ {\it admissible} if there exist $x,y \in H$ such that $\deg(x) \geq \deg(y)$ and \begin{equation} (s^2-2\delta s+1)x(s) + (s^2-1)y(s) \in H.\end{equation}\end{definition} \begin{definition}We call $\delta$ {\it quasi-admissible} if there exist $x,y \in \overline{H}$ such that $\deg(x) \geq \deg(y)$ and \begin{equation} (s^2-2\delta s+1)x(s) + (s^2-1)y(s) \in \overline{H}.\end{equation}\end{definition} Note that since quasi-stability is weaker than stability, quasi-admissibility is weaker than admissibility. Our main theorem (Theorem \ref{main_thm} below) will show that if $\delta$ is quasi-admissible, then all smaller $\delta$ are admissible. Note that this implies that the Belgian chocolate problem is equivalent to finding the supremum of all quasi-admissible $\delta$. We will then find quasi-admissible $\delta$ in order to establish which $\delta$ are admissible. This is the core of our approach. These quasi-admissible $\delta$ are easily identified and are limit points of admissible $\delta$. In practice, one verifies stability by using the Routh-Hurwitz criterion. Suppose we have a polynomial $p(s) = a_0s^n + a_1s^{n-1} + \ldots + a_{n-1}s + a_n \in \mathbb{R}[s]$ such that $a_0 > 0$. Then we define the $n\times n$ {\it Hurwitz matrix} $A(p)$ as $$A(p) = \begin{pmatrix} a_1 & a_3 & a_5 & \ldots & \ldots & 0 & 0\\ a_0 & a_2 & a_4 & \ldots & \ldots & 0 & 0\\ 0 & a_1 & a_3 & \ldots & \ldots & 0 & 0\\ 0 & a_0 & a_2 & \ldots & \ldots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \ldots & \ldots & a_{n-2} & a_n\end{pmatrix}.$$ Adolf Hurwitz showed that a real polynomial $p$ with positive leading coefficient is stable if and only if all leading principal minors of $A(p)$ are positive.
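The criterion is straightforward to implement; the sketch below builds $A(p)$ exactly as displayed above (helper names are ours) and also replays, ahead of the discussion that follows, the quasi-stability counterexample $s^4 + 198s^2 + 101^2$.

```python
import numpy as np

def hurwitz_matrix(a):
    """Hurwitz matrix A(p) for p = a[0] s^n + ... + a[n], with a[0] > 0."""
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)      # entry H[i, j] holds a_k
            if 0 <= k <= n:
                H[i, j] = a[k]
    return H

def leading_minors(H):
    return [np.linalg.det(H[:m, :m]) for m in range(1, len(H) + 1)]

# (s + 1)^3 is stable: all leading principal minors are positive.
assert all(m > 0 for m in leading_minors(hurwitz_matrix([1, 3, 3, 1])))

# (s - 1)(s + 2)(s + 3) = s^3 + 4s^2 + s - 6 is unstable: the test fails.
assert not all(m > 0 for m in leading_minors(hurwitz_matrix([1, 4, 1, -6])))

# s^4 + 198 s^2 + 101^2 has nonnegative (here all zero) leading minors,
# yet it has roots with positive real part, so it is not quasi-stable.
p = [1, 0, 198, 0, 101**2]
assert all(m >= 0 for m in leading_minors(hurwitz_matrix(p)))
assert np.roots(p).real.max() > 0
```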
While it may seem natural to conjecture that $p$ is quasi-stable if and only if all leading principal minors are nonnegative, this only holds in one direction. \begin{lemma}Suppose $p$ is a real polynomial with positive leading coefficient. If $p$ is quasi-stable then all the leading principal minors of $A(p)$ are nonnegative.\end{lemma} \begin{proof}If $p(s)$ is quasi-stable, then for all $\epsilon > 0$, $p(s+\epsilon)$ is stable. Therefore, for all $\epsilon > 0$, the leading minors of $A(p(s+\epsilon))$ are all positive. Note that $$\lim_{\epsilon \to 0} A(p(s+\epsilon)) = A(p).$$ Since the minors of a matrix are expressible as polynomial functions of the entries of the matrix, the leading principal minors of $A(p)$ are limits of positive real numbers. They are therefore nonnegative.\end{proof} To see that the converse does not hold, consider $p(s) = s^4 + 198s^2 + 101^2$. Its Hurwitz matrix has nonnegative leading principal minors, but $p$ is not quasi-stable. This example, as well as a more complete characterization of quasi-stability given below, can be found in \cite{asner1970total}. In particular, it is shown in \cite{asner1970total} that a real polynomial $p$ with positive leading coefficient is quasi-stable if and only if for all $\epsilon > 0$, $A(p(s+\epsilon))$ has positive leading principal minors. \subsection{Quasi-admissible and admissible $\delta$}\label{delta_theory} We first present the following theorem concerning which $\delta$ are admissible. We will defer the proof until later, as it is a simple corollary of a stronger theorem about approximating polynomials in $\overline{H}$ by polynomials in $H$. \begin{theorem}\label{delta_prop}If $\delta$ is admissible then all $\hat{\delta} < \delta$ are also admissible.\end{theorem} For $\delta = 1$, note that the Belgian chocolate problem reduces to whether there are $x,y \in H$ with $\deg(x) \geq \deg(y)$ such that $(s-1)^2x + (s^2-1)y \in H$.
This cannot occur for non-zero $x,y$ since $(s-1)^2x + (s^2-1)y$ has a root at $s = 1$. Theorem \ref{delta_prop} then implies that any $\delta \geq 1$ is not admissible. In 2012, Bergweiler and Eremenko showed that any admissible $\delta$ must satisfy $\delta < 0.999579$ \cite{bergweiler2013gol}. On the other hand, if we fix $x,y$ then there is no single largest admissible $\delta$ associated to $x,y$. Standard results from control theory show that if $\delta$ is admissible by $x, y$ then for $\epsilon$ small enough, $\delta+\epsilon$ is admissible by the same polynomials. Therefore, the supremum $\delta^*$ over all admissible $\delta$ will not be attained by stable $x,y$. From an optimization point of view, the associated optimization program in (\ref{bcp_opt}) has an open feasible region. In particular, the set of admissible $\delta$ for (\ref{bcp_opt}) is of the form $(0,\delta_n^*)$ for some $\delta_n^*$ that is not admissible by $x,y$ of degree at most $n$. However, as we will later demonstrate, quasi-admissible $\delta$ lie on the boundary of this feasible region. Moreover, quasi-admissible $\delta$ naturally serve as analogues of local maxima. We will therefore find quasi-admissible $\delta$ and use these to find admissible $\delta$. In Section \ref{sec:approx} we will prove the following theorem relating admissible and quasi-admissible $\delta$. It is the main theorem of our work and demonstrates the utility of searching for quasi-admissible $\delta$. \begin{theorem}\label{main_thm}If $\delta$ is quasi-admissible, then all $\hat{\delta} < \delta$ are admissible. Moreover, if $\delta$ is quasi-admissible by quasi-stable $x,y$ of degree at most $n$, then any $\hat{\delta} < \delta$ is admissible by stable $\hat{x}, \hat{y}$ of degree at most $n$. \end{theorem} This theorem shows that to find admissible $\delta$, we need only find quasi-admissible $\delta$.
In fact, our theorem will show that if $\delta$ is quasi-admissible via $x,y$ of degree at most $n$, then all $\hat{\delta} < \delta$ are admissible via $x,y$ of degree at most $n$ as well. In short, quasi-admissible $\delta$ serve as upper limit points of admissible $\delta$. Also note that since admissibility implies quasi-admissibility, Theorem \ref{main_thm} implies Theorem \ref{delta_prop}. The proof of Theorem \ref{main_thm} will be deferred until Section \ref{sec:approx}. In fact, we will do more than just prove the theorem: we will give an explicit algorithm for approximating quasi-stable polynomials by stable ones within any desired tolerance. We will also be able to use the techniques in Section \ref{sec:approx} to prove the following theorem, which shows that admissible $\delta$ are always smaller than some quasi-admissible $\delta$. \begin{theorem}\label{rev_thm}If $\delta$ is admissible by $x,y$ of degree at most $n$ then there is some $\hat{\delta} > \delta$ that is quasi-admissible by $\hat{x}, \hat{y}$ of degree at most $n$. Moreover, this $\hat{\delta}$ is not admissible by these polynomials.\end{theorem} In other words, for any admissible $\delta$, there is a larger $\hat{\delta}$ that is quasi-admissible but not necessarily admissible. Therefore, we can restrict to looking at polynomials $x,y,z$ with at least one root on the imaginary axis. \section{Low degree examples}\label{low_degree_ex} In this section we demonstrate that in low-degree settings, the supremum of all admissible $\delta$ in (\ref{bcp_opt}) is actually a quasi-admissible $\delta$. By looking at quasi-stable polynomials that are not stable, we can greatly reduce our search space and directly find the supremum of the optimization program in (\ref{bcp_opt}). For small degrees of $x, y$, we will algebraically design quasi-stable polynomials that achieve previously known bounds on the Belgian chocolate problem in these degrees. Burke et al.
\cite{burke2005analysis} showed that for $x\in H^3, y \in H^0$, any admissible $\delta$ must satisfy $\delta < \sqrt{2+\sqrt{2}}/2$, and for $x \in H^4, y \in H^0$, $\delta$ must satisfy $\delta < \sqrt{10+2\sqrt{5}}/4$. He et al. \cite{guannan2007stabilization} later found $x \in H^4, y \in H^0$ admitting $\delta$ close to this bound. In fact, these upper bounds on admissible $\delta$ are actually quasi-admissible $\delta$ that can be obtained in a straightforward manner. For example, suppose we restrict to $x$ of degree 3, $y$ of degree 0. Then for some $A, B, C, k \in \mathbb{R}$, we have \begin{gather*} x(s) = s^3+ As^2 + Bs + C\\ y(s) = k\end{gather*} Instead of trying to find admissible $\delta$ using this $x$ and $y$, we will try to find quasi-admissible $\delta$. That is, we want $\delta$ such that $$z(s) = (s^2-2\delta s + 1)x(s) + (s^2-1)y(s) \in \overline{H}.$$ In other words, this $z(s)$ can be quasi-stable instead of just stable. Note that $z(s)$ must be of degree 5. We will specify a form for $z(s)$ that ensures it is quasi-stable. Consider the case $z(s) = s^5$. This is clearly quasi-stable as its only roots are at $s = 0$. To ensure that $z(s) = s^5$ and equation (\ref{bcp}) holds, we require \begin{gather*} (s^2-2\delta s + 1)(s^3+ As^2 + Bs + C)+(s^2-1)k = s^5\end{gather*} Equating coefficients gives us the following 5 equations in 5 unknowns: \begin{gather*} A-2\delta=0\\ -2A\delta + B + 1=0\\ A - 2B\delta + C + k=0\\ B-2C\delta=0\\ C-k=0\end{gather*} In fact, ensuring that we have as many equations as unknowns was part of the motivation for letting $z(s) = s^5$. Solving for $A,B,C,k,\delta$, we find \begin{gather*} 8\delta^4-8\delta^2+1=0\\ A = 2\delta\\ B = 4\delta^2-1\\ C = 4\delta^3-2\delta\\ k = 4\delta^3-2\delta\end{gather*} Taking the largest real root of $8\delta^4-8\delta^2+1$ gives $\delta = \sqrt{2+\sqrt{2}}/2$. Taking $A,B,C,k$ as above yields polynomials $x, y, z$ with real coefficients.
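This degree-3 construction is easy to verify numerically. The sketch below evaluates the closed-form solution at $\delta = \sqrt{2+\sqrt{2}}/2 \approx 0.92388$ and confirms both the identity $z = s^5$ and the stability of $x$:

```python
import numpy as np

# Largest real root of 8 d^4 - 8 d^2 + 1 = 0 is d = sqrt(2 + sqrt(2))/2.
delta = np.sqrt(2 + np.sqrt(2)) / 2
A = 2 * delta
B = 4 * delta**2 - 1
C = 4 * delta**3 - 2 * delta
k = C

x = [1.0, A, B, C]                           # s^3 + A s^2 + B s + C
z = np.polyadd(np.polymul([1, -2 * delta, 1], x),
               np.polymul([1, 0, -1], [k]))
assert np.allclose(z, [1, 0, 0, 0, 0, 0], atol=1e-12)   # z = s^5
assert np.roots(x).real.max() < 0            # x stable; y = k trivially stable
```

Replacing $x$ by $(s^2+2\delta s+1)(s^2+A)$ and $z$ by $s^6$, the same three checks verify the degree-4 configuration treated next.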
One can verify that $x$ is stable (via the Routh-Hurwitz test, for example), while $y$ is degree 0 and therefore stable. Note that since $z(s) = s^5$, $z$ is only quasi-stable. Therefore, there is $x \in H^3, y \in H^0$ for which $\sqrt{2+\sqrt{2}}/2$ is quasi-admissible. This immediately gives the limiting value for $x \in H^3, y \in H^0$ discovered by Burke et al.\ \cite{burke2005analysis}. Combining this with Theorem \ref{main_thm}, we have shown the following theorem. \begin{theorem}For $\deg(x) \leq 3$, $\delta = \frac{\sqrt{2+\sqrt{2}}}{2}$ is quasi-admissible and all $\delta < \frac{\sqrt{2+\sqrt{2}}}{2}$ are admissible.\end{theorem} Next, suppose that $x$ has degree 4 and $y$ has degree 0. For $A, k, \delta \in \mathbb{R}$, define \begin{gather*} x(s) = (s^2+2\delta s + 1)(s^2+A)\\ y(s) = k\end{gather*} Note that as long as $A \geq 0$, $x$ will be quasi-stable and $y$ will be stable for any $k$. As above, we want quasi-admissible $\delta$. We let $z(s) = s^6$, so that $z(s)$ is quasi-stable. Finding $A, \delta, k$ amounts to solving \begin{gather*} (s^2-2\delta s + 1)x(s) + (s^2-1)y(s) = z(s)\\ \Leftrightarrow (s^2-2\delta s + 1)(s^2+2\delta s+1)(s^2+A)+(s^2-1)k = s^6\\ \Leftrightarrow s^6 + (A - 4\delta^2 + 2)s^4 + (-4A\delta^2 + 2A + k + 1)s^2 + (A-k) = s^6 \end{gather*} Note that the $(s^2+2\delta s + 1)$ term in $x$ is used to ensure that the left-hand side will have zero coefficients in its odd degree terms. Since $(s^2+2\delta s + 1)$ is stable, it does not affect stability of $x$. Equating coefficients and manipulating, we get the following equations. \begin{gather*} 16\delta^4 -20\delta^2+5=0\\ A -4\delta^2+2=0\\ k -A=0\end{gather*} Taking the largest real root of $16\delta^4 -20\delta^2+5$ gives $\delta = \sqrt{10+2\sqrt{5}}/4$. For this $\delta$ one can easily see that $A = 4\delta^2 - 2 \geq 0$, so $x$ is quasi-stable, as are $y$ and $z$ by design. Once again, we were able to easily achieve the limiting value discovered by Burke et al.
\cite{burke2005analysis} discussed in Section \ref{low_degree_ex} by searching for quasi-admissible $\delta$. Combining this with Theorem \ref{main_thm}, we obtain the following theorem. \begin{theorem}For $\deg(x) \leq 4$, $\delta = \frac{\sqrt{10+2\sqrt{5}}}{4}$ is quasi-admissible and all $\delta < \frac{\sqrt{10+2\sqrt{5}}}{4}$ are admissible.\end{theorem} The examples above demonstrate how, by considering quasi-stable $x,y$ and $z$, we can find quasi-admissible $\delta$ that are limiting values of admissible $\delta$. Moreover, the quasi-admissible $\delta$ above were found by solving relatively simple algebraic equations instead of having to perform optimization over the space of stable $x$ and $y$. \section{Algebraic specification}\label{sec:alg_spec} The observations in Section \ref{sec:motivation} and Section \ref{sec:math_back} and the examples in Section \ref{low_degree_ex} suggest the following approach, which we refer to as {\it algebraic specification}. This method will be used to find the largest known values of $\delta$ for any given degree. We wish to construct quasi-stable $x(s), y(s), z(s)$ with repeated roots on the imaginary axis satisfying (\ref{bcp}). For example, we may wish to find polynomials of the following form: \begin{gather*} x(s) = (s^2+2\delta s+1)(s^2+A_1)^4(s^2+A_2)^2(s^2+A_3)^2(s^2+A_4)\\ y(s) = k(s^2+B_1)^3(s^2+B_2)^2\\ z(s) = s^{14}(s^2+C_1)^2(s^2+C_2)(s^2+C_3)\end{gather*} We refer to such an arrangement of $x,y,z$ as an {\it algebraic configuration}. As long as $\delta > 0$, the parameters $\{A_i\}_{i=1}^4$, $\{B_i\}_{i=1}^2$, and $\{C_i\}_{i=1}^3$ are all nonnegative, and $k$ is real, $x(s), y(s), z(s)$ will be real, quasi-stable polynomials.
We then wish to solve \begin{equation}\label{alg_eq} (s^2-2\delta s+1)x(s)+(s^2-1)y(s)=z(s)\end{equation} Recall that the $(s^2+2\delta s+1)$ factor in $x(s)$ is present to ensure that the left-hand side has only even degree terms, as the right-hand side clearly only has even degree terms. Expanding (\ref{alg_eq}) and equating coefficients, we get 11 equations in 11 unknowns. Using PHCPack \cite{verschelde1999algorithm} to solve these equations and selecting the solution with the largest $\delta$ such that the $A_i, B_i, C_i \geq 0$, we get the following solution, rounded to seven decimal places: \begin{gather*} \delta = 0.9808348\\ A_1 = 1.1856917\\ A_2 = 6.6228807\\ A_3 = 0.3090555\\ A_4 = 0.2292503\\ B_1 = 0.5430391\\ B_2 = 0.2458118\\ C_1 = 4.4038385\\ C_2 = 0.7163490\\ C_3 = 7.4637156\\ k = 196.1845537 \end{gather*} The actual solution has $\delta = 0.980834821202\ldots$. This is the largest $\delta$ we have found to date using this method. By Theorem \ref{main_thm}, we conclude the following theorem. \begin{theorem}All $\delta \leq 0.9808348$ are admissible.\end{theorem} In general, we can form an algebraic configuration for $x(s), y(s), z(s)$ as \begin{equation}\label{xconf} x(s) = (s^2+2\delta s +1) \prod_{i=1}^{m_1} (s^2+A_i)^{j_i}.\end{equation} \begin{equation}\label{yconf} y(s) = k\prod_{i=1}^{m_2}(s^2+B_i)^{k_i}.\end{equation} \begin{equation}\label{zconf} z(s) = s^c\prod_{i=1}^{m_3}(s^2+C_i)^{\ell_i}.\end{equation} For fixed degrees of $x, y$, note that there are only finitely many such configurations. Instead of performing optimization over the non-convex feasible region of the Belgian chocolate problem, we instead tackle the combinatorial optimization problem of maximizing $\delta$ among the possible configurations. Note that $c$ in (\ref{zconf}) is whatever exponent is needed to make $\deg(z) = \deg(x)+2$. We want $x,y,z$ to satisfy (\ref{bcp}).
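The reported record solution is easy to validate numerically. The sketch below (our own check, not part of the original text) rebuilds $x, y, z$ from the rounded values above by polynomial convolution; because the parameters are rounded to seven decimal places the residual is nonzero, but it is small relative to the coefficients of $z$:

```python
# Check that the rounded record solution nearly satisfies
# (s^2 - 2*delta*s + 1) x + (s^2 - 1) y = z.

def pmul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def ppow(p, n):
    out = [1.0]
    for _ in range(n):
        out = pmul(out, p)
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

delta = 0.9808348
A = [1.1856917, 6.6228807, 0.3090555, 0.2292503]
B = [0.5430391, 0.2458118]
C = [4.4038385, 0.7163490, 7.4637156]
k = 196.1845537

x = [1, 2 * delta, 1]
for a, j in zip(A, [4, 2, 2, 1]):
    x = pmul(x, ppow([1, 0, a], j))
y = [k]
for b, j in zip(B, [3, 2]):
    y = pmul(y, ppow([1, 0, b], j))
z = pmul([1] + [0] * 14, ppow([1, 0, C[0]], 2))   # s^14 (s^2 + C_1)^2
z = pmul(z, pmul([1, 0, C[1]], [1, 0, C[2]]))     # times (s^2 + C_2)(s^2 + C_3)

lhs = padd(pmul([1, -2 * delta, 1], x), pmul([1, 0, -1], y))
resid = max(abs(a - b) for a, b in zip(lhs, z))
scale = max(abs(c) for c in z)
assert resid / scale < 1e-3  # limited only by the seven-place rounding above
```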
Expanding and equating coefficients, we get equations in the undetermined variables above. As long as the number of unknown variables equals the number of equations, we can solve the system and look for real solutions with $\delta$ and all $A_i, B_i, C_i$ nonnegative. Not all quasi-stable polynomials can be formed via algebraic specification. In particular, algebraic specification forces all the roots of $y,z$ and all but two of the roots of $x$ to lie on the imaginary axis. However, more general quasi-stable $x,y,z$ could have some roots with negative real part and some with zero real part. This makes the possible search space infinite and, as discussed in Section \ref{low_degree_ex}, empirically does not result in larger $\delta$. Further evidence for this statement will be given in Section \ref{sec:opt}. While the method of algebraic specification has demonstrable effectiveness, it becomes computationally infeasible to solve these general equations for very large $n$. In particular, the space of possible algebraic configurations of $x,y,z$ grows almost exponentially with the degree of the polynomials. For large $n$, an exhaustive search over the space of possible configurations becomes infeasible, especially as the equations become more difficult to solve. We will describe an algebraic configuration via the shorthand \begin{equation}\label{alg_conf_shorthand} [j_1,\ldots,j_{m_1}],[k_1,\ldots, k_{m_2}],[\ell_1,\ldots, \ell_{m_3}].\end{equation} This represents the configuration described in $(\ref{xconf}),(\ref{yconf}),(\ref{zconf})$ above. In particular, if the second term of (\ref{alg_conf_shorthand}) is empty then $y = k$, while if the third term of (\ref{alg_conf_shorthand}) is empty then $z$ is a power of $s$.
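The equation-versus-unknown bookkeeping for this shorthand can be automated. The following helper is our own illustration (the function name is ours): it assumes, as in the text, that both sides are monic with only even-degree terms and that $c$ is chosen so that $\deg(z) = \deg(x) + 2$, so matching the remaining even coefficients gives $\deg(z)/2$ equations:

```python
def config_counts(js, ks, ls):
    """Count unknowns and matching-coefficient equations for a configuration
    [j_1,...],[k_1,...],[l_1,...] in the shorthand of the text."""
    deg_x = 2 + 2 * sum(js)        # (s^2 + 2*delta*s + 1) times the (s^2 + A_i)^{j_i}
    deg_z = deg_x + 2
    c = deg_z - 2 * sum(ls)        # exponent of the s^c factor in z
    unknowns = len(js) + len(ks) + len(ls) + 2   # the A_i, B_i, C_i, plus delta and k
    equations = deg_z // 2         # even coefficients below the matching leading term
    return unknowns, equations, c

# The configurations appearing in this paper all balance exactly:
for cfg in ([[1], [], []],                       # the degree-4 example
            [[3, 1], [2], [1]],                  # used in the optimality section
            [[4, 2, 2, 1], [3, 2], [2, 1, 1]]):  # the delta = 0.9808348 record
    u, e, c = config_counts(*cfg)
    assert u == e and c >= 0
```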
For example, the following configuration is given by $[3,1],[2],[1]$: \begin{gather*} x(s) = (s^2+2\delta s + 1)(s^2+A_1)^3(s^2+A_2)\\ y(s) = k(s^2+B_1)^2\\ z(s) = s^{10}(s^2+C_1)\end{gather*} A table containing the largest quasi-admissible $\delta$ we have found and the associated algebraic configurations for given degrees of $x$ is given below. Note that for each entry of the table, given $\deg(x) = n$ and quasi-admissible $\delta$, Theorem \ref{main_thm} implies that all $\hat{\delta} < \delta$ are admissible with $x,y$ of degree at most $n$. \begin{figure} \caption{The largest known quasi-admissible $\delta$ for $x,y,z$ designed algebraically, for varying degrees of $x$.} \end{figure} \section{Approximating quasi-admissible $\delta$ by admissible $\delta$}\label{sec:approx} In this section we will prove Theorem \ref{main_thm}. Our proof will be algorithmic in nature. We will describe an algorithm that, given $\delta$ that is quasi-admissible by quasi-stable polynomials $x, y$, will produce for any $\hat{\delta} < \delta$ stable polynomials $\hat{x}, \hat{y}$ admitting $\hat{\delta}$. Moreover, given $\deg(x) = n$, we will ensure that $\deg(\hat{x}) \leq n$. \begin{proof}[Proof of Theorem \ref{main_thm}]Suppose that for a given $\delta$ there are $x,y,z \in \overline{H}$ with $\deg(x) \geq \deg(y)$ satisfying (\ref{bcp}). Let $n = \deg(x)$. Define $$R(s) := \dfrac{(s^2-1)y(s)}{z(s)}.$$ Note that for any $s \in \mathbb{C}$, $R(s) = 0$ iff $(s^2-1)y(s) = 0$, $R(s) = 1$ iff $(s^2-2\delta s+1)x(s) = 0$, and $R(s)$ is infinite iff $z(s) = 0$. Since $x,y,z$ are quasi-stable, we know that for $\text{Re}(s) > 0$, $R(s) = 1$ iff $s = \delta \pm i\sqrt{1-\delta^2}$ and $R(s) = 0$ iff $s = 1$. All other points where $R(s)$ is 0, 1, or infinite satisfy $\text{Re}(s) \leq 0$.
Precomposing $R(s)$ with the fractional linear transformation $f(s) = (1+s)/(1-s)$, we get the complex function $$D(s) := R\bigg(\dfrac{1+s}{1-s}\bigg).$$ Note that this fractional linear transformation maps the unit circle $\{s | |s| = 1\}$ to the imaginary axis $\{ s | \text{Re}(s) = 0\}$. Also note that $f^{-1}(1) = 0, f^{-1}(\delta \pm i\sqrt{1-\delta^2}) = \pm it$ where $t = \sqrt{1-\delta}/\sqrt{1+\delta}$. Therefore, $D(s)$ satisfies the following properties: \begin{enumerate} \item For $|s| < 1$, $D(s) = 0$ iff $s = 0$. \item For $|s| < 1$, $D(s) = 1$ iff $s = \pm it$. \item $|D(s)| < \infty$ for $|s| < 1$. \end{enumerate} Note that the last holds by the quasi-stability of $z(s)$. Since $z(s) = 0$ implies $\text{Re}(s) \leq 0$, $D(s) = \infty$ implies $|s| \geq 1$. In particular, the roots of $x, y, z$ that have 0 real part now correspond to points $|s| = 1$ such that $D(s) = 1, 0, \infty$ respectively. For any $\epsilon > 0$, let $$D_\epsilon(s) := D\bigg(\frac{s}{1+\epsilon}\bigg).$$ $D_\epsilon(s)$ then satisfies \begin{enumerate} \item For $|s| \leq 1$, $D_\epsilon(s) = 0$ iff $s = 0$. \item For $|s| \leq 1$, $D_\epsilon(s) = 1$ iff $s = \pm i(1+\epsilon)t$. \item $|D_\epsilon(s)| < \infty$ for $|s| \leq 1$. \end{enumerate} Precomposing with the inverse fractional linear transformation $f^{-1}(s) = (s-1)/(s+1)$, we get $$R_\epsilon(s) := D_\epsilon\bigg(\dfrac{s-1}{s+1}\bigg).$$ By the properties of $D_\epsilon(s)$ above, we find that $R_\epsilon(s)$ satisfies \begin{enumerate} \item For $\text{Re}(s) \geq 0$, $R_\epsilon(s) = 0$ iff $s = 1$. \item For $\text{Re}(s) \geq 0$, $R_\epsilon(s) = 1$ iff $s = \delta_\epsilon\pm i\sqrt{1-\delta_\epsilon^2}$ where $$\delta_\epsilon = \dfrac{1-(1+\epsilon)^2t^2}{1+(1+\epsilon)^2t^2}.$$ \item For $\text{Re}(s) \geq 0$, $|R_\epsilon(s)| < \infty$. \end{enumerate} Moreover, $R_\epsilon(s) \neq 0, 1, \infty$ for any $s$ such that $\text{Re}(s) < 0$. We can rewrite $R_\epsilon(s)$ as $R_\epsilon(s) = p(s)/q(s)$.
Note that by the first property of $R_\epsilon$, the only root of $p(s)$ in $\{s | \text{Re}(s) \geq 0\}$ is at $s = 1$. By properties of $f(s), f^{-1}(s)$, one can show that $p(-1) = 0$. This follows from the fact that $R(-1) = 0$, which implies that $\lim_{s\to \infty} D(s) = \lim_{s\to\infty}D_{\epsilon}(s) = 0$, and therefore $R_\epsilon(-1) = 0$. Therefore, $p(s) = (s^2-1)y_\epsilon(s)$ where $y_\epsilon(s)$ has no roots in $\{s | \text{Re}(s) \geq 0\}$. By the second property of $R_\epsilon$, the only roots of $q-p$ in $\{s | \text{Re}(s) \geq 0\}$ are at $\delta_\epsilon \pm i\sqrt{1-\delta_\epsilon^2}$. Therefore, $q-p = (s^2-2\delta_\epsilon s+1)x_\epsilon(s)$ where $x_\epsilon(s)$ has no roots in $\{s | \text{Re}(s) \geq 0\}$. Finally, by the third property of $R_\epsilon$ we find that $z_\epsilon(s) = (s^2-2\delta_\epsilon s+1)x_\epsilon(s)+(s^2-1)y_\epsilon(s)$ is stable. Moreover, basic properties of fractional linear transformations show that if $\deg (x) = n \geq \deg(y) = m$, then $x_\epsilon, y_\epsilon$ are both of degree $n$. Therefore, $x_\epsilon, y_\epsilon, z_\epsilon$ are stable polynomials satisfying (\ref{bcp}) for $\delta_\epsilon$. For any $\hat{\delta} < \delta$, we can take $\epsilon$ such that $\delta_\epsilon = \hat{\delta}$, proving the desired result.\end{proof} Note that if we start with $\delta$ admissible by stable $x,y,z$ of degree at most $n$, then we can do the reverse of this procedure to perturb $x,y,z$ to quasi-stable $\hat{x}, \hat{y}, \hat{z}$. By the reverse of the arguments above, $\hat{x}, \hat{y}, \hat{z}$ will be quasi-stable but at least one of these polynomials will not be stable. These polynomials will be associated to some quasi-admissible $\hat{\delta} > \delta$. This gives the proof of Theorem \ref{rev_thm}.
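The scaling step at the heart of this proof can be checked numerically. In the sketch below (our own check, using the degree-4 example from Section \ref{low_degree_ex} with $\epsilon = 0.01$), scaling by $1/(1+\epsilon)$ inside the disk sends the new critical points $\delta_\epsilon \pm i\sqrt{1-\delta_\epsilon^2}$ back to the original $\delta \pm i\sqrt{1-\delta^2}$:

```python
import math

# Degree-4 example: delta is the largest real root of 16 d^4 - 20 d^2 + 5.
eps = 0.01
delta = math.sqrt(10 + 2 * math.sqrt(5)) / 4
t = math.sqrt((1 - delta) / (1 + delta))   # f^{-1}(delta + i*sqrt(1 - delta^2)) = i*t
tp = (1 + eps) * t
delta_eps = (1 - tp**2) / (1 + tp**2)      # the perturbed admissible value

def f(s):     # the fractional linear transformation
    return (1 + s) / (1 - s)

def finv(s):  # its inverse
    return (s - 1) / (s + 1)

# f((1/(1+eps)) * finv(s)) sends the new critical point back to the old one.
s0 = complex(delta_eps, math.sqrt(1 - delta_eps**2))
u0 = f(finv(s0) / (1 + eps))
assert abs(u0 - complex(delta, math.sqrt(1 - delta**2))) < 1e-12
print(round(delta_eps, 6))  # 0.950097
```

The printed value $\delta_\epsilon = 0.950097$ reappears in the detailed worked example later in this section.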
The proof above describes the following algorithm for perturbing quasi-stable $x,y,z$ satisfying (\ref{bcp}) to obtain stable $\hat{x}, \hat{y},\hat{z}$ satisfying (\ref{bcp}).\\ \\ \noindent{\bf Input:} Real numbers $\delta, \epsilon > 0$ and real polynomials $x,y,z \in \overline{H}$ satisfying (\ref{bcp}).\\ \noindent{\bf Output:} $\hat{\delta}$ and real polynomials $\hat{x},\hat{y},\hat{z} \in H$ satisfying (\ref{bcp}). \begin{enumerate} \item Let $R(s) = (s^2-1)y(s)/z(s)$. For $\epsilon > 0$, compute $$R_\epsilon(s) = R\bigg(\dfrac{(2+\epsilon)s + \epsilon}{\epsilon s + (2+\epsilon)} \bigg).$$ \item Reduce $R_\epsilon(s)$ to lowest terms. Suppose that in lowest terms $R_\epsilon(s) = p(s)/q(s)$. \item Factor $p(s)$ as $(s^2-1)\hat{y}(s)$ and factor $q(s)-p(s)$ as $(s^2-2\hat{\delta}s+1)\hat{x}(s)$. Let $\hat{z}(s) = q(s)$. \end{enumerate} To further illustrate the method of algebraic specification and this algorithm for perturbing quasi-stable polynomials into stable ones, we give the following detailed example. \begin{example}Say we are interested in $x$ of degree 4. We may then give the following algebraic specification of $x, y, z$ discussed in Section \ref{low_degree_ex}. In the shorthand of (\ref{alg_conf_shorthand}), this is the configuration $[1],[],[]$. \begin{gather*} x(s) = (s^2+2\delta s+1)(s^2+A)\\ y(s) = k\\ z(s) = s^6\end{gather*} As in Section \ref{low_degree_ex}, we solve $(s^2-2\delta s+1)x(s) + (s^2-1)y(s) = z(s)$. This implies that $\delta, A, k$ satisfy $16\delta^4-20\delta^2+5 = 0$, $A = 4\delta^2-2$, $k = 4\delta^2 - 2$. Taking the largest root of $16\delta^4-20\delta^2+5$ gives $\delta = \sqrt{10+2\sqrt{5}}/4$, $A = k = (\sqrt{5}+1)/2$. Given numerically to six decimal places, $\delta = 0.951057$.
Computing $R(s)$ using exact arithmetic, we get \begin{gather*} R(s) = \dfrac{(s^2-1)y(s)}{z(s)} = \dfrac{(s^2-1)(\sqrt{5}+1)}{2s^6}\end{gather*} We then use a fractional linear transformation $s \mapsto (1+s)/(1-s)$ to get: \begin{align*} D(s) &= R((1+s)/(1-s))\\ &= \dfrac{2s(\sqrt{5}+1)(s-1)^4}{s^6+6s^5+15s^4+20s^3+15s^2+6s+1}\end{align*} One can verify that $D(s)$ equals 1 at points on the unit circle, so we push these points away from the boundary (with $\epsilon = 0.01$) by defining \begin{align*} D_\epsilon(s) &=D\big(\frac{s}{1+0.01}\big)\\ &=\dfrac{6.40805(0.99010s-1)^4s}{0.942045s^6+\ldots+5.94054s}\end{align*} While we gave an approximate decimal form above for brevity, this computation can and should be done with exact arithmetic. We let $R_\epsilon(s) = D_\epsilon((s-1)/(s+1))$. Writing $R_\epsilon(s)$ as $p(s)/q(s)$ in lowest terms, we get: \begin{gather*} p(s) = 64080.55401(0.990990s+199.00990)^4(s^2-1)\\ q(s) = 0.62122\times 10^{14}s^6 + \ldots +0.94204\end{gather*} As proved above, $p(s)$ will equal $(s^2-1)\hat{y}(s)$. Dividing $p(s)$ by the $s^2-1$ factor, we get a polynomial $\hat{y}(s)$ whose only root is at $s = -201$. Therefore $\hat{y}(s)$ is stable. The denominator $\hat{z}(s) = q(s)$ is easily verified to have only roots with negative real part. Finally, the polynomial $q(s) - p(s)$ will equal $(s^2-2\hat{\delta}s+1)\hat{x}(s)$. Finding its roots, one can show that $q(s)-p(s)$ only has roots with negative real part, except for roots at $s = 0.950097 \pm 0.311954i$. These roots are of the form $\hat{\delta} \pm i\sqrt{1-\hat{\delta}^2}$ for $\hat{\delta} = 0.950097$. Therefore $\hat{\delta} = 0.950097$ is admissible via the stable polynomials $\hat{x},\hat{y},\hat{z}$. While we have decreased $\delta$ slightly, we have achieved stability in the process. By decreasing $\epsilon$, we can get arbitrarily close to our original $\delta$.
\end{example} \section{Optimality of algebraic specification}\label{sec:opt} Not only does our method of algebraic specification find larger $\delta$ than have been found before, but one can also view previous approaches to the Belgian chocolate problem as approximating algebraic specification. In particular, previously discovered admissible $\delta$ can be seen as approximating some quasi-admissible $\delta'$ that can be found via algebraic specification. For example, in \cite{chang2007global}, Chang and Sahinidis found that $\delta = 0.9739744$ is admissible by \begin{align*} x(s) &=s^{10} + 1.97351109136261s^9\\ &+5.49402092964662s^8 + 8.78344232801755s^7\\ &+ 11.67256448604672s^6 + 13.95449016040116s^5\\ &+11.89912895529042s^4 + 9.19112429409894s^3\\ &+5.75248874640322s^2+2.03055901420484s\\ &+1.03326203778346,\\ y(s)&=0.00066128189295s^5+3.611364710425s^4\\ &+0.03394722108511s^3+3.86358782861648s^2\\ &+0.0178174691792s+1.03326203778319.\\ \end{align*} The roots of $x,y,z$ were discussed in Section \ref{sec:motivation}. As previously noted, $x,y,z$ are close to polynomials with repeated roots on the imaginary axis. Examining the roots of $x,y,z$, one can see that $x,y,z$ are tending towards quasi-stable polynomials $x', y', z'$ that have the same root structure as the algebraic configuration $[3,1],[2],[1]$. In other words, we will consider the following quasi-stable polynomials: \begin{gather*} x'(s) = (s^2+2\delta' s + 1)(s^2+A_1)^3(s^2+A_2)\\ y'(s) = k(s^2+B)^2\\ z'(s) = s^{10}(s^2+C) \end{gather*} Solving for the free parameters and finding the largest real $\delta'$ such that $A_1, A_2, B, C \geq 0$, we obtain the following values, given to seven decimal places. \begin{gather*} \delta' = 0.9744993\\ A_1 = 1.3010813\\ A_2 = 0.4475424\\ B = 0.5345301\\ C = 2.5521908\\ k = 3.4498736.\end{gather*} One can easily verify that taking these values of the parameters, the roots of $x, y, z$ are close to the roots of $x',y',z'$.
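These seven-place values can also be validated directly against the defining identity. The sketch below (our own check) rebuilds $x', y', z'$ by convolution and confirms that the residual of $(s^2-2\delta' s+1)x' + (s^2-1)y' = z'$ is small, limited only by the rounding of the printed parameters:

```python
# Residual check for the [3,1],[2],[1] configuration with the rounded values above.

def pmul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

d, A1, A2 = 0.9744993, 1.3010813, 0.4475424
B, C, k = 0.5345301, 2.5521908, 3.4498736

# x' = (s^2 + 2d s + 1)(s^2 + A1)^3 (s^2 + A2),  y' = k (s^2 + B)^2,  z' = s^10 (s^2 + C)
x = pmul([1, 2 * d, 1], pmul(pmul([1, 0, A1], pmul([1, 0, A1], [1, 0, A1])), [1, 0, A2]))
y = pmul([k], pmul([1, 0, B], [1, 0, B]))
z = pmul([1] + [0] * 10, [1, 0, C])

lhs = pmul([1, -2 * d, 1], x)
rhs_y = pmul([1, 0, -1], y)
lhs = [a + b for a, b in zip(lhs, [0.0] * (len(lhs) - len(rhs_y)) + rhs_y)]

resid = max(abs(a - b) for a, b in zip(lhs, z))
assert resid < 1e-3 * max(abs(c) for c in z)
```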
These algebraically designed $x', y', z'$ possess the root structure that $x,y,z$ are tending towards. Moreover, the $x', y', z'$ show that $\delta'$ is quasi-admissible and their associated $\delta'$ gives an upper bound for the $\delta$ found by Chang and Sahinidis. This demonstrates that the stable polynomials found by Chang and Sahinidis are tending towards the quasi-stable ones listed above. Moreover, by Theorem \ref{main_thm} all $\delta < 0.9744993$ are admissible. In fact, many examples of admissible $\delta$ given in previous work are approximating quasi-admissible $\delta$ found via algebraic specification. This includes the previously mentioned examples in \cite{burke2005analysis} and all admissible values of $\delta$ given by Chang and Sahinidis in \cite{chang2007global}. We further conjecture that for all admissible $\delta$, there is a quasi-admissible $\delta' > \delta$ that can be achieved by algebraically specified $x,y,z$. More formally, if we fix $x, y$ to be of degree at most $n$, let $\delta_n^*$ denote the supremum of the optimization problem in (\ref{bcp_opt}). Note that as discussed in Section \ref{delta_theory}, $\delta_n^*$ is not admissible by $x,y$ of degree at most $n$. The empirical evidence given in this section and in Sections \ref{sec:motivation} and \ref{low_degree_ex} suggests that this $\delta_n^*$ is quasi-admissible and can be obtained through algebraic specification. This leads to the following conjecture. \begin{conjecture}For all $n$, $\delta_n^*$ is quasi-admissible by some $x,y,z$ that are formed via algebraic specification.\end{conjecture} \section{Conclusion} The Belgian chocolate problem has remained resilient to direct global optimization techniques for over a decade. Most prior work attempts to maximize $\delta$ subject to the stability constraints by applying iterative methods to complicated non-convex regions.
By contrast, we find the largest known value of $\delta$ in a more direct fashion. We do this by reducing our problem to combinatorial optimization over a finite set of algebraically constructed limit points. Our key algebraic insight is that quasi-admissible $\delta$ are limiting values of the admissible $\delta$. In fact, previous methods actually find admissible $\delta$ that approach quasi-admissible $\delta$. We give the method of algebraic specification to design quasi-stable polynomials and directly find these quasi-admissible $\delta$ by solving a system of equations. We then show that we can perturb these quasi-stable polynomials to obtain stable polynomials with admissible $\delta$ that are arbitrarily close to the quasi-admissible $\delta$. We show that this method recovers the largest admissible $\delta$ known to date and gives a much better understanding of the underlying landscape of admissible and quasi-admissible $\delta$. We conjecture that for all $n$, the supremum of all $\delta$ admissible by $x,y$ of degree at most $n$ is a quasi-admissible $\delta$ that can be found through our method of algebraic specification. \section*{Acknowledgments} The authors would like to thank Bob Barmish for his valuable feedback, discussions, and advice. The first author was partially supported by the National Science Foundation grant DMS-1502553. The second author was partially supported by the Simons Foundation grant MSN179747. \end{document}
\begin{document} \title{Sensor-assisted fault mitigation in quantum computation} \author{John L.\ Orrell} \email[Corresponding author: ]{[email protected]} \affiliation{Pacific Northwest National Laboratory, Richland, WA 99352, USA} \author{Ben Loer} \affiliation{Pacific Northwest National Laboratory, Richland, WA 99352, USA} \date{\today} \begin{abstract} We propose a method to assist fault mitigation in quantum computation through the use of sensors co-located near physical qubits. Specifically, we consider using transition edge sensors co-located on silicon substrates hosting superconducting qubits to monitor for energy injection from ionizing radiation, which has been demonstrated to increase decoherence in transmon qubits. We generalize from these two physical device concepts and explore the potential advantages of co-located sensors to assist fault mitigation in quantum computation. In the simplest scheme, co-located sensors beneficially assist rejection of calculations potentially affected by environmental disturbances. Investigating the potential computational advantage further required development of an extension to the standard formulation of quantum error correction. In a specific case of the standard three-qubit, bit-flip quantum error correction code, we show that given a 20\% overall error probability per qubit, approximately 90\% of repeated calculation attempts are correctable. However, when \emph{sensor-detectable} errors account for 45\% of overall error probability, the use of co-located sensors uniquely associated with independent qubits boosts the fraction of correct final-state calculations to 96\%, at the cost of rejecting 7\% of repeated calculation attempts. \end{abstract} \maketitle \section{\label{sec:intro}Introduction} Many mechanisms may lead to state decoherence in the physical implementation of quantum computing systems. 
Recent reports \cite{PhysRevLett.121.117001,Oliver2020,Cardani2020} show deleterious effects in superconducting kinetic inductance devices and superconducting transmon qubits correlated with ionizing radiation levels, identifying yet another mechanism causing decoherence. As others have~\cite{Cardani2019}, we postulate these observed phenomena stem from the same underlying process: the instantaneous injection of energy into the superconducting device and the device's substrate as a result of impinging ionizing radiation. It is possible to reduce the rate of ionizing radiation energy injections by shielding against naturally occurring radiation sources in the laboratory and by placing systems underground to shield against cosmic rays. These techniques are commonly employed for rare event searches in nuclear and particle physics research, including searches operating at mK temperatures~\cite{PhysRevD.95.082002,Armengaud2017,PhysRevD.100.102002,ISI:000386879300001,ALDUINO20199,ISI:000475616600001}. However, the history of such physics research experiments demonstrates that it is difficult to entirely shield against the ionizing radiation present in any instrumentation laboratory. Thus, we contemplate superconducting qubit operation in a regime of low, but non-zero, rates of ionizing-radiation-induced energy injections. From there we draw an inference to a superconducting qubit device concept employing co-located sensors that can signal when an ionizing radiation energy injection has occurred, signifying probable error in the quantum computation. We employ the terminology \emph{fault mitigation in quantum computation} to distinguish from purely \emph{quantum} computational means for achieving fault tolerance or error correction~\cite{7167244}.
In the simplest application of our device concepts, we show co-located sensors can provide modest fault mitigation through selective result-acceptance in redundant (``many shot'') computation schemes, where the same quantum calculation is repeated multiple times. Speculatively, as this will require advances in superconducting qubit interconnection techniques, we explore how co-located sensors can identify uncorrectable errors within the framework of quantum error correction codes. \section{\label{sec:TES-assisted_qubit_concept}TES-assisted qubit device concept} \begin{figure*} \caption{Photograph of a CDMS II ZIP detector contained within its hexagonal copper housing~\cite{CDMS-iZIP-photo}.} \label{fig:cdms-ii-zip} \end{figure*} This section presents a notional concept for the physical implementation of devices combining ionizing radiation transition edge sensors (TES) and superconducting qubits that share a common silicon substrate. \subsection{\label{sec:TES}Transition edge sensor devices} In a TES~\cite{ISI:000231009400003}, the material's effective temperature is set such that the material resides on the ``transition edge'' between the superconducting and normal conducting states. Any additional energy added to the material will increase the temperature and push the TES toward the normal conducting phase, dramatically raising the electrical resistance of the material. Sensing this change in resistance in a circuit makes the TES useful for detecting small amounts of absorbed energy. A key step in the development~\cite{doi:10.1063/1.1770037,Irwin2005} of TES devices as practical sensors was the use of direct current (DC) voltage bias to provide negative electrothermal feedback (ETF) to stabilize the readout circuit~\cite{doi:10.1063/1.113674}.
As diagrammed in the cited seminal reference, superconducting quantum interference devices (SQUIDs) are typically used to monitor the resistance-dependent current in the ETF TES circuit through a current-induced magnetic field. While the TES may reside at tens of mK temperatures in the ``mixing chamber'' stage of a refrigerator, the SQUIDs monitoring the circuit are typically located at a warmer stage, often at a $\simeq600$~mK ``still'' stage~\cite{AKERIB2008476}. This provides for physical separation and magnetic shielding between the TES devices and the SQUIDs. \begin{figure*} \caption{Micrograph of IBM 5-qubit device.} \label{fig:ibmqx2_yorktown_microgrpah} \caption{IBM 5-qubit scheme with 3 QETs.} \label{fig:ibmqx2_yorktown_schematic} \caption{Gate connections with sensor patch.} \label{fig:ibmqx2_yorktown_connections} \caption{Labeled micrograph of the IBM 5-qubit \texttt{ibmqx2} device.} \label{fig:ibmqx2_yorktown} \end{figure*} TES sensors developed by the SuperCDMS collaboration~\cite{PhysRevD.95.082002} employ QET devices, defined as Quasiparticle-trap-assisted Electrothermal feedback Transition-edge sensor devices~\cite{doi:10.1063/1.1146105}. In these QET devices, superconducting aluminum films are deposited on Ge or Si crystals, in contact with the tungsten-based ETF TES devices. Phonon energy present in the crystal substrate breaks Cooper pairs in the superconducting Al films. The resultant quasiparticles diffuse through the Al film to the W-based ETF TES, ultimately resulting in a TES transition event used for event detection. Typically, multiple QET devices are operated in parallel in a circuit to provide increased phonon energy collection coverage with a single sensor channel. Figure~\ref{fig:cdms-ii-zip} shows a CDMS ZIP (Z-dependent Ionization- and Phonon-mediated) detector~\cite{CDMS-iZIP-photo}. We added the schematics~\cite{PhysRevD.72.052009}, scale overlays, and highlighting lines.
Detailed descriptions of lithographic fabrication techniques for similar devices are available~\cite{JASTRAM201514}. It is worth noting the QET devices used in these detector applications are essentially ``classical'' signal sensors. That is, the TES circuit operates through a process of Joule heating of a material in response to a thermalizing population of quasiparticles, produced by a population of thermal and athermal substrate phonons. \subsection{\label{sec:superconducting_qubit}Superconducting qubit devices} There are many modalities for the physical implementation of qubits. These modalities include trapped ions, superconducting circuits, photon systems manipulated either with linear optics or quantum dots, neutral atoms, semiconductor devices typified by either optically active nitrogen vacancies in diamond or electronically manipulated electron spins, and most recently topological qubits that are based in collective properties of solid state systems~\cite{NAP25196}. In all cases, the goal is to isolate a physical two-level quantum system that can be manipulated for quantum computation. In this report we focus on superconducting qubit devices. In this work we consider transmon qubits~\cite{PhysRevA.76.042319} based on our experience with them in studies of the effect of ionizing radiation on their coherence time~\cite{Oliver2020}. Furthermore, the IBM Q Experience~\cite{IBMQ} provides access to transmon-based multi-qubit devices~\cite{ISI:000399429500002,ISI:000542630400002} for cloud-based quantum computing. We use these resources as a reference for exploring sensor-assisted fault mitigation in quantum computation. Figure~\ref{fig:ibmqx2_yorktown_schematic} shows, in schematic representation, the combination of three QET devices with the qubit chip layout, capturing our proposed hybrid sensor and qubit device concept.
A notional connectivity diagram (Fig.~\ref{fig:ibmqx2_yorktown_connections}) further abstracts the generalized idea of a co-located sensor for detection of environmental disturbances. \subsection{\label{sec:TES-assisted_qubit_devices}TES-assisted qubit devices} A hybrid device as suggested by Figure~\ref{fig:ibmqx2_yorktown_schematic} is producible with today's fabrication techniques. Furthermore, we do not foresee any inherent incompatibility in the simultaneous operation of the DC voltage-biased QET devices and the microwave frequency controls of the qubits. Specifically, QET devices on a silicon chip are operated using a DC voltage bias across the TES of approximately 40~mV. From the TES-SQUID circuit's quiescent state, ionizing radiation induced events appear as $\simeq$5~$\mu$s rising-edge current excursions of $\simeq$100~nA amplitudes and $\simeq$100~$\mu$s pulse decay times. These representative operational details are derived from the SuperCDMS HVeV chip-scale dark matter search devices~\cite{PhysRevLett.121.051301,doi:10.1063/1.5010699}. The above described QET operating characteristics are in contrast to transmon qubit operation following the theory of circuit quantum electrodynamics (cQED)~\cite{Schuster2007}. Qubits are typically controlled via radiofrequency pulses applied through co-planar waveguide microwave transmission lines, typically in the $\simeq$5~GHz range. Specifically, qubits are coupled to the transmission line via superconducting Al circuit meander resonators designed to have unique resonance frequencies in the same $\simeq$5~GHz range, resulting from the details of their physical shape. Each qubit's resonance frequency is designed to lie off-resonance (detuned) from the paired resonator's resonance frequency to allow dispersive readout from the qubit via the resonator~\cite{doi:10.1063/1.5089550}.
Multiple such qubit-resonator pairs can exist on the same silicon chip and even connect to the same transmission line~\cite{Jerger_2011}, so long as all resonance frequencies are fully offset. The $\simeq$5~GHz RF control pulses are typically $\simeq$10s of nanoseconds in duration and have millivolt-scale amplitudes at the readout resonator, resulting in $\simeq$100s of nanoamperes of current in the qubit circuit. The hybrid devices we envision, having the above-described characteristics, would consist of QET and transmon qubit devices simultaneously operated at roughly 30--50~mK. There are two obvious possible ``cross-talk'' scenarios between the QETs and the qubits. The first is through near-resonance coupling of RF qubit control pulses into the QET. We believe the QET physical layout can be optimized to reduce the potential for this coupling. It is not obvious that current excursions in the QET devices would have any coupling to the qubit circuits. The second ``cross-talk'' mechanism is through quasiparticle generation via power input from either device type. There is ample evidence from the operation of arrays of QET and superconducting qubit devices that each device type can be operated without substantial injection of thermal energy into the substrate, which would result in elevated quasiparticle levels in the superconducting circuits of either device type. We are not aware of any conceptually similar device created to date that experimentally tests the veracity of these claims. In the next section, we assess the potential value of such a co-located sensor in contributing to fault mitigation in quantum computations. The initial evaluation considers plausible devices we believe can be fabricated today. Such devices would likely employ co-located sensors in a ``veto'' role to reject computations suspected of excessive error-inducing environmental disturbances.
Taking the assessment a step further, we speculate on the error correction performance of independent qubit systems, where each qubit is uniquely associated with an individual co-located sensor. In the case of superconducting qubits, this idealization would manifest as QET-qubit pairs each residing on separate silicon substrate chips, potentially interconnected through superconducting air-bridges or capacitive coupling across gaps between chips. We note the choice of the class of TES/QET devices~\cite{Ullom_2015} for the co-located sensor is potentially interchangeable with microwave kinetic inductance detectors (MKIDs)~\cite{Day_2003} or superconducting nanowire detectors~\cite{Natarajan_2012}. \section{\label{sec:error_estimation}Quantum error mitigation} Pedagogical development of qubit-based quantum error correction considers two complementary forms of error: bit-flip error and sign-flip error. Within the Bloch sphere picture of a qubit, these errors correspond to state error and phase error. These two flip-type quantum errors are highly idealized \emph{binary symmetric channel} representations of the otherwise continuous error experienced by real qubits~\cite{Devitt_2013}. We note ionizing-radiation-induced error in superconducting transmon qubits is almost certainly a continuous noise source best represented by arbitrary three-angle unitary transformations (or much worse). However, for our goal of developing an intuition for the relative utility of sensor-assisted error mitigation in quantum computation, we will focus solely on bit-flip errors, to the exclusion of all others. This assumption and other assumptions we make in the following developments are assessed in the Discussion section. Our goal is to determine how information gained from a co-located sensor---\emph{without performing any measurement on the quantum computation qubit(s)}---can assist in the implementation of error mitigation in quantum computation.
We begin with the hybrid device concept presented in Section~\ref{sec:TES-assisted_qubit_devices}, Figure~\ref{fig:ibmqx2_yorktown_schematic}. For illustrative purposes, we make use of the IBM Quantum Experience~\cite{ibmqx2_yorktown} as a source of some realistic scenarios, specifically working with the Yorktown (\texttt{ibmqx2}) 5-qubit backend~\cite{PhysRevLett.109.240504}. We will refer to this simply as the ``Yorktown backend'' for brevity. We conclude by investigating a fully abstracted hypothetical case when co-located sensors are uniquely assigned to individual, independent qubits. \begin{figure} \caption{A simple balanced Deutsch-Jozsa calculation used as a test case for investigating the role of co-located sensors in calculations performed by devices such as the IBM 5-qubit \texttt{ibmqx2}.} \label{fig:balanced-dj-circuit} \end{figure} \begin{figure} \caption{Results from three implementations of a balanced Deutsch-Jozsa calculation (see Fig.~\ref{fig:balanced-dj-circuit}).} \label{fig:balanced-dj} \end{figure} \subsection{\label{sec:example_error}Example calculation: Repetition and error} Quantum error correction is often presented as an approach toward the correction of errors in an idealized, \emph{single-pass} quantum computation. In practice, quantum computation routinely uses computational repetition (repeating the same calculation many times) to achieve averaged results that approach the idealized, single-pass result for large numbers of repetitions. Furthermore, a single-pass quantum calculation is only able to return the ``correct'' answer in cases where the result is uniquely identifiable with a single eigenvector of the measurement basis. More generally, in cases analogous to quantum phase estimation and/or quantum state tomography, the relative weight of the measurement basis eigenvectors---determined through computational repetition---is key to determining the underlying quantum state.
Thus, in quantum calculations, computational repetition is used advantageously in \emph{both} statistical averaging for error mitigation \emph{and} quantum state estimation as part of the underlying calculation method. In addition, and in entire generality, if erroneous final states are identifiable within this repetition process, then either better accuracy is obtained for a fixed number of repetitions or the same accuracy is achievable with fewer repetitions. Figure~\ref{fig:balanced-dj}(a) shows the results from 81,920~repetitions of a simple balanced Deutsch-Jozsa calculation (see Fig.~\ref{fig:balanced-dj-circuit}) implemented on the Yorktown backend. The ``correct'' result is equal weight in each of the four states $|001\rangle$, $|011\rangle$, $|101\rangle$, and $|111\rangle$ (i.e., 25\% of the 81,920~trials in each of the four states), with statistical fluctuations from the finite sample size. However, the data report \emph{at minimum} 5,734 trials of the repeated quantum calculation were in error, reporting measurements of the states $|000\rangle$, $|010\rangle$, $|100\rangle$, and $|110\rangle$. We contemplate the possibility that \emph{some} of the error states are the result of ionizing radiation striking the Yorktown backend during the computational repetitions. Our prior work~\cite{Oliver2020} suggests the actual fraction of ionizing radiation disturbances is small for devices such as the Yorktown backend. However, for the sake of intellectual exploration, we wish to consider when some significant percentage of the induced error states are due to ionizing radiation or some other environmental disturbance detectable by a co-located sensor. We are thus implicitly assuming some error-inducing phenomena are also \emph{not} detectable by the co-located sensor, as is normally assumed in quantum error correction schemes.
For concreteness, we consider a case where 60\% of the errors are \underline{\emph{not}} due to ionizing radiation (or some other environmental disturbance detectable by a co-located sensor on the qubit chip). We have no method for assessing the true error cases for any particular computational repetition of the Yorktown backend, so we must create a model of the noise. The Qiskit programming language provides a mechanism for simulating the noise of a specific backend device, based on measured gate error rates and coherence times. Fig.~\ref{fig:balanced-dj} shows the results of many such calculations performed during the week of 12 October 2020. Unfortunately, we are not aware of a way to use the Qiskit-modeled noise to determine, for a single repetition of the calculation, when an error may have been induced (modeled) for a qubit. Thus, we created a simple bit-flip-based noise model simulation designed to \emph{mimic} the statistical properties of the Yorktown backend performing the balanced Deutsch-Jozsa calculation. We assign a single bit-flip (\textbf{\texttt{X}}-gate) to follow each of the eleven operations on the qubits in the circuit diagram of Figure~\ref{fig:balanced-dj-circuit}, including the control qubit on \textit{qubit}$_1$. We find setting the bit-flip error probability to 7\% in this highly over-simplified model simulation roughly reproduces the balanced Deutsch-Jozsa calculation's statistical distribution of results seen on the actual Yorktown backend device. Thus, we now have a method for determining, within a single repetition of the calculation, when an error was induced within the quantum circuit by any one (or more) of the bit-flip errors. We simulated 81,920 single-shot calculations, where each time a balanced Deutsch-Jozsa circuit was created with a randomly generated set of bit-flip errors contained within the circuit, based upon the 7\% gate error probability mentioned above. The results are shown in Fig.~\ref{fig:balanced-dj}(c).
Recall, rather than assuming 100\% of the induced errors are due to an environmental disturbance that can be detected by the co-located sensor, we instead assume 60\% of the errors are \underline{\emph{not}} detectable by the co-located sensor. Fig.~\ref{fig:balanced-dj}(d) shows the results when the co-located sensor would provide information to reject a number of the calculations (20,282 shots in this case) that are expected to potentially be in error. This improves the performance of the quantum calculation, showing a reduction of the fraction of calculation repetitions reporting states $|000\rangle$, $|010\rangle$, $|100\rangle$, and $|110\rangle$ compared to that shown in Fig.~\ref{fig:balanced-dj}(c). Our first substantial conclusion is that this improvement comes at the cost of rejecting outright a number of the calculations from consideration. We repeat: the calculation improves because those repetitions with the potential for being environmentally disturbed are preferentially rejected from consideration in calculating the final results after all repetitions are complete. The Appendix to this report further investigates the statistical properties of the results shown in Fig.~\ref{fig:balanced-dj}. The form of error mitigation described above is of the simplest variety. The co-located sensor provides a case-by-case capacity to reject or ``veto'' individual, ``single-shot'' calculations. At the expense of throwing away the so-flagged calculation trials, it is possible to improve the numerical accuracy of quantum calculations employing repetition for purposes of result averaging or quantum state determination via the measurement eigenvector weightings. While these improvements are modest, we believe devices such as that described by Figure~\ref{fig:ibmqx2_yorktown_schematic} can be fabricated today and take advantage of sensors to selectively reject calculations where environmental phenomena have potentially disturbed the quantum computational system.
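The veto bookkeeping described above can be illustrated with a deliberately simplified Monte Carlo sketch in plain Python (this is not the paper's circuit-level simulation): each shot errs with a probability near the observed $\simeq$7\% error fraction, 40\% of errors arise from a sensor-detectable disturbance, and flagged shots are discarded. As a further idealization, the sensor here fires only when a disturbance actually corrupts the output; a real sensor would also veto disturbed-but-correct shots. All parameter values beyond the 7\% error rate and the 60/40 split are illustrative.

```python
import random

def run_veto_toy(n_shots=81_920, p_err=0.07, frac_undetectable=0.60, seed=1):
    """Toy sensor-veto model: each shot errs with probability p_err; only
    40% of errors stem from a disturbance the co-located sensor can flag."""
    rng = random.Random(seed)
    kept = errors_kept = total_errors = vetoed = 0
    for _ in range(n_shots):
        is_error = rng.random() < p_err
        # The error's cause is sensor-detectable with probability 0.40.
        detectable = is_error and rng.random() >= frac_undetectable
        total_errors += is_error
        if detectable:
            vetoed += 1  # sensor fired: discard ("veto") this shot
        else:
            kept += 1
            errors_kept += is_error
    return total_errors / n_shots, errors_kept / kept, vetoed

overall, after_veto, vetoed = run_veto_toy()
print(f"error fraction {overall:.4f} -> {after_veto:.4f}, {vetoed} shots vetoed")
```

The kept-shot error fraction drops from roughly 7\% toward the undetectable-cause floor, at the cost of discarding the vetoed shots, mirroring the trade-off noted above.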
\subsection{\label{sec:two_error_types}Error types: Environmental and entangling} We now propose to distinguish more clearly between two classes of phenomena resulting in quantum decoherence of qubit systems. In this discussion, we have in mind superconducting qubit devices, but we believe these definitions are sufficiently general as to apply to other physical implementations of qubits. We suggest framing two types of qubit error generation mechanisms that can appear in physical qubit systems: (1) environmental disturbances and (2) effects involving quantum entanglement. These two types are not mutually exclusive, but they should be exhaustive. As such, we warily adopt a substantively \emph{different} meaning for the terms ``environment'' and ``environmental,'' due to a lack of better terminology. We acknowledge our use of these terms may seem counter to the sense used by other authors. For this report we consider environmental error-inducing disturbances to be those phenomena that are \emph{independent} of the presence or absence of a qubit state. In a superconducting qubit device, we have in mind phenomena such as energy injection from ionizing radiation, leakage of UV, optical, or IR photons into the system, thermal heat transients, fluctuating externally-generated magnetic fields, and fluctuating externally-generated electric fields (e.g., RF). In these cases, the phenomena impinge on the qubit system \emph{and the immediate vicinity}, independent of the presence or absence of a qubit holding a quantum state. In these cases, we propose an appropriate sensor can potentially detect the error-inducing environmental disturbance without \emph{any} explicit or implicit influence on the state of a qubit in the vicinity of the disturbance. We henceforth refer to these error-inducing disturbances as ``environment''- or ``environmental''-type error sources. These errors are entirely incoherent errors within a computation.
A second class of error-inducing effects must also exist. This second class distinguishes itself through the quantum state entanglement produced as a result of the interaction between the error-inducing phenomenon and the presence of a qubit state. In a superconducting qubit device, we have in mind phenomena such as coupling to two-level systems (TLS) and off-resonance coupling to other device elements. In these cases, a co-located measurement of the entangled error-inducing effect has the potential to produce back-action on the qubit's quantum state. Thus, we refer to these types of errors as ``entangling'' error sources. These ``entangling'' error types can result in both incoherent and coherent error within computations. We expect both types of errors described above are present in physical implementations of quantum computing systems. Throughout this study we have assumed the entangling error is 60\% of the overall error probability.\footnote{Assuming 100\% of errors are of the entangling type is equivalent to the typical, pedagogical assumption in quantum error correction. Assuming 0\% of the errors are of the entangling type means \emph{all} errors are potentially identifiable by a co-located sensor, which we consider an unlikely and uninteresting, limiting case.} \begin{figure*} \caption{Quantum circuit for sensor-assisted error correction when the error channel contains independent environmental- and entangling-type bit-flip errors.} \label{fig:QC-S111-E111-SingleCircuit} \end{figure*} \subsection{\label{sec:middle_case}Sensor-assist in quantum error correction} We now evaluate a more speculative scenario abstracted and generalized from the preceding sections. We assume all qubits experience \emph{entirely} independent errors and a co-located sensor is associated with each qubit. Furthermore, we assume a typical set of quantum computational gates is available and all errors in the error channel are bit-flip errors.
Furthermore, we assume that circuit gates do not introduce errors outside of the error channel and that ancilla qubits are reliable for their purpose of extracting a syndrome measurement. A number of such assumptions are made throughout the following development, and these assumptions are explored in the Discussion (Sec.~\ref{sec:discussion}). Figure~\ref{fig:QC-S111-E111-SingleCircuit} shows a quantum circuit for performing error correction when the error channel (columns 5~\&~6) is composed of independent environmental- and entangling-error types, as described above. In describing this quantum circuit, we focus on the key differences from a standard three-qubit, bit-flip error correction code. Columns~1--4 initialize three qubits, set a quantum state $|\Psi\rangle$ to preserve, and then encode the quantum state in the expanded three-qubit computational basis space. Column~5 includes a single bit-flip error (pink ``\textbf{\texttt{X}}?''-gates) on each of the three computational qubits, representing the potential environmental disturbance that can be detected by a co-located sensor. Column~6 represents the possibility of entangling-type errors on any of the three qubits, shown as purple ``\textbf{\texttt{X}}?''-gates. Columns~7--9 represent the three co-located sensor readouts that are uniquely identified with each of the three physical qubits used for the state preservation.\footnote{Co-located sensors might also be associated with the ancilla qubits for further protection.} Note the diagram suggests the co-located sensors are near, but do not interact with, the qubits. Pulses measured by the co-located sensors are recorded in the sensor's classical bit register, along the bottom of the diagram.
As the error correction portion of the circuit (columns 10--18) can only correct a single qubit error, at this point it is already possible to reject a single shot of the calculation if the co-located sensors measure two or more potential environmental disturbances to the qubits. When the sensor classical register reports 0x3, 0x5, 0x6, or 0x7, the Sensor REJECT flag is set for vetoing the calculation's output, as shown in the quantum circuit at column~10. Only in cases where a single (or no) co-located sensor has an event does the quantum computation fruitfully proceed to the error correction stage in columns~10--18. Assuming the calculation proceeds into the error correction stage in columns~10--18, the preserved quantum state is then decoded and measured in columns~19--24. To understand the impact of the co-located sensor capacity to detect potential error-inducing environmental disturbances, we must evaluate the truth table of the circuit. There are eight possible combinations of errors for each of the environmental- and entangling-type errors (columns 5 and 6, respectively) on the three computational qubits, resulting in sixty-four possible error cases for the complete truth table (i.e., $2^3\times2^3=64$ error combinations). Note we are not yet invoking the assumption that the single error probability is ``small,'' though we will ultimately evaluate specific cases under that assumption. To compare the sensor-assisted circuit shown in Figure~\ref{fig:QC-S111-E111-SingleCircuit} to the standard, three-qubit, bit-flip error, quantum error correction code, recognize that removal of columns 5, 7, 8, and 9 produces the standard three-qubit, bit-flip error correction circuit. Thus, we can tabulate the truth table for both circuits together for direct comparison. As stated above, there are 64 possible error combinations. The full 64-element truth table is provided in the Appendix. \renewcommand{\arraystretch}{1.1} \begin{table}[ht!]
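The classical accept/reject logic described above (Sensor REJECT on two or more sensor hits, Parity Test REJECT on a sensor-syndrome mismatch) can be sketched as follows. This is a reconstruction, not the authors' code; in particular, the mapping from syndrome value to qubit (0\texttt{x}3 $\to$ qubit~0, 0\texttt{x}1 $\to$ qubit~1, 0\texttt{x}2 $\to$ qubit~2) is inferred from the syndrome column of Table~\ref{tab:001_cases} and should be treated as illustrative.

```python
# Reconstruction of the classical veto logic.  `sensor` and `flips` are
# 3-bit masks (bit i corresponds to qubit i).  `flips` is the residual
# bit-flip pattern entering the syndrome measurement.
SYNDROME_TO_MASK = {0x3: 0b001, 0x1: 0b010, 0x2: 0b100, 0x0: 0b000}

def syndrome(flips):
    """Ancilla parity checks Z0Z1 and Z0Z2 on the residual flip mask."""
    q0, q1, q2 = flips & 1, (flips >> 1) & 1, (flips >> 2) & 1
    return (q0 ^ q1) | ((q0 ^ q2) << 1)

def decide(sensor, flips):
    """Return 'REJECT_S', 'REJECT_PT', or the post-correction flip mask."""
    if bin(sensor).count("1") >= 2:
        return "REJECT_S"  # sensor register 0x3, 0x5, 0x6, or 0x7
    s = syndrome(flips)
    if sensor and s and SYNDROME_TO_MASK[s] != sensor:
        return "REJECT_PT"  # syndrome points away from the flagged qubit
    return flips ^ SYNDROME_TO_MASK[s]  # 0 means the state was recovered

# Examples mirroring Table 1: [001](000) is corrected; [001](010) is
# rejected by the parity test; [001](110) slips through as faulty.
assert decide(0b001, 0b001) == 0
assert decide(0b001, 0b011) == "REJECT_PT"
assert decide(0b001, 0b111) == 0b111
```

Note that a sensor hit paired with a zero syndrome is accepted, since it is consistent with the cancellation cases (e.g., [001]~(001)) discussed below.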
\small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\ \hline \hline ~[001] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot 
\bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (110) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \caption{Truth table for the case when a single environmental-type error occurs on qubit~0 (i.e., error mask: $[001]$), with any combination of entangling-type errors (i.e., error masks: $(000)$--$(111)$). Outcome notation: C = Correct, CC = Correct via cancellation, F = Faulty, and R$_{\mathrm{PT}}$ = REJECT based on syndrome parity test. See text for complete table description.} \label{tab:001_cases} \end{table} \renewcommand{\arraystretch}{1.25} We focus on the interesting case when there is a \emph{single} qubit affected by an environmental-type disturbance phenomenon (in column 5), detectable by a co-located sensor. We assume 100\% of environmental-type phenomena are detected by the co-located sensors, though this is not required for gaining utility from a sensor-assist method. The \textbf{Outcome} column of Table~\ref{tab:001_cases} presents the eight outcome cases when a single environmental disturbance occurs on \textit{qubit}$_0$, with any possible combination of entangling errors on the three qubits.
An error bit-mask notation is used to uniquely identify each possible error case. For example, in our bit-mask notation [001]~(011) means an environmental disturbance has caused a bit-flip error on \textit{qubit}$_0$ and two entangling-type bit-flip errors have occurred on \textit{qubit}$_0$ and \textit{qubit}$_1$, the \textit{qubit} designations referring again to the quantum circuit in Figure~\ref{fig:QC-S111-E111-SingleCircuit}. This bit-mask notation is given in the \textbf{Errors} column of Table~\ref{tab:001_cases}. Note we are \emph{not} assuming that only a single error occurs in the error channel. We take for granted that if the error probabilities are ``small,'' then the probability of multiple errors occurring will diminish greatly. For additional clarity, in addition to the error bit-mask identifiers, the \textbf{Gates} column in Table~\ref{tab:001_cases} presents the quantum gates for both types of errors, represented in columns 5~\&~6 in the quantum circuit (Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). The assumption used in this report that all errors are bit-flip errors has an unintended consequence that two bit-flip errors can cancel if they both appear on the same qubit. Thus, the resultant gate for each of the three qubits is presented in Table~\ref{tab:001_cases}, where \textbf{\texttt{X}} is a bit-flip error gate and \textbf{\texttt{I}} is the identity gate (i.e., no error). This cancellation effect is an artifact of the unrealistic model of pure bit-flip errors. For each error combination, the \textbf{Synd.} column in Table~\ref{tab:001_cases} provides the syndrome measurement (columns 10--15 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}) recorded in the ancilla classical register. 
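Under the pure bit-flip model, this cancellation bookkeeping reduces to an XOR of the two error masks; a minimal sketch:

```python
# Error-channel bookkeeping: the environmental mask e and entangling
# mask t combine by XOR, since two X-gates on the same qubit cancel.
# Bits are ordered qubit2-qubit1-qubit0, matching the [---](---)
# notation in the text.
def resultant(e, t):
    return e ^ t

# [001](011): the flips on qubit 0 cancel, leaving one flip on qubit 1.
assert resultant(0b001, 0b011) == 0b010
# [001](001): perfect cancellation, the "CC" outcome of Table 1.
assert resultant(0b001, 0b001) == 0b000
```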
In the lower right of Figure~\ref{fig:QC-S111-E111-SingleCircuit}, classical logic is used to assess if the combination of the co-located sensor and the parity tests performed in the syndrome measurement is consistent with a single error on the qubit associated with the co-located sensor reporting an environmental disturbance. Each unique error combination results in a specific outcome from the quantum circuit. If no errors of any kind occur, then the circuit returns the correct (C) quantum state. Likewise, if only a single error occurs of either type (environmental or entangling), again the circuit returns the correct outcome (C). In some cases, as we have mentioned, the bit-flip error induced by the environmental disturbance is canceled by an entangling error on the same qubit. In these cases, such as [001]~(001), the quantum circuit returns the correct outcome quantum state, but via a fortuitous cancellation, a ``correct via cancellation'' (CC) outcome state. As the number of error occurrences in the error channel increases, the standard error correction code and the sensor-assisted code return different outcomes. This is the first notable conclusion: the sensor-assisted code only has an impact for cases when the quantum state has an uncorrectable error. In this way one intuits correctly that the classical information provided by a co-located sensor cannot increase the number of correctly returned quantum states. However, the sensor-assist method can identify when an uncorrectable error has likely occurred, giving the user the opportunity to remove the calculation from further consideration in a computational effort. To quantify these statements, we define several error-probability terms, used in part in the probability (\textbf{Prob.}) column in Table~\ref{tab:001_cases}. In this column, $o$ is the probability of an environmentally-induced error and $p$ is the probability of an entangling-type error.
The non-error complements are $\bar{o}=1-o$ and $\bar{p}=1-p$. We also define $\hat{P}=o+p-op$, the probability that at least one error occurred in the error channel (i.e., the combination of columns~5~\&~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). Note $\hat{P}$ is \emph{not} the probability that a qubit is in an error state after the error channel gates have been applied (i.e., the combined action of columns~5~\&~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). That is, $\hat{P}$ does not correspond to what one would measure as a qubit error rate except when the error is only either 100\% entangling-type or 100\% environmental-type. See the Appendix for a full derivation and definition of the terms $\hat{P}$, $o$, $p$, $\bar{o}$, and $\bar{p}$. Looking again at Table~\ref{tab:001_cases}, if two or more \textbf{\texttt{X}}-gates appear in the resultant gate column, the standard bit-flip quantum error correction code will run to completion, but the returned quantum state will be faulty (F). The sensor-assisted method, however, is able to set a Parity Test REJECT flag (R$_{\mathrm{PT}}$) in half of the faulty cases.\footnote{Note the probability of these multiple error cases occurring in a real set of calculations is not half. That is, there are 64 possible error cases, but there is not equal probability weight of arriving at each of the 64 error cases for a set of calculations.} The computational advantage of the sensor-assist method comes from the fact that the Parity Test REJECT flag is set for cases when the number of ``small'' probability errors is low. To see this, consider the probability (\textbf{Prob.}) column in Table~\ref{tab:001_cases}, which shows as an exponent the number of errors occurring. The sum of the exponents of the $o$ and $p$ terms reveals the \emph{order} of ``small'' probability errors. 
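For a given qubit, the relation between these quantities follows directly from the independence of the two error draws; this merely restates the Appendix definitions, and the second form is the inversion used when parameterizing results by ($\hat{P}$, $p$):
\[
\hat{P} \;=\; 1 - \bar{o}\,\bar{p} \;=\; 1 - (1-o)(1-p) \;=\; o + p - op,
\qquad\text{so}\qquad
o \;=\; \frac{\hat{P}-p}{1-p}.
\]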
By examining Table~\ref{tab:001_cases}, it is possible to see that whereas the standard error correction code permits faulty computations to pass through at an error-order of $2$ and higher, the sensor-assist code will only allow faulty computations to pass through at an error-order of $3$ or higher. This computational benefit does, however, come at the expense of also rejecting correct computations at an error-order of $3$ that are arrived at through fortuitous cancellations (CC). As a reminder, the fortuitous cancellation (CC) cases are artifacts of the simplistic model of treating all errors as single bit-flips. \begin{table*}[ht!] \centering \begin{tabular}{l|r|r|r|r|r|r|r|r|} \multicolumn{1}{l}{} & \multicolumn{8}{c}{\textbf{Outcome fractions, $\mathcal{F}$, for various error probabilities, $\hat{P}$ and $p$}} \\ \hline & \multicolumn{2}{c|}{$\hat{P}=0.20$, $p=0.20$} & \multicolumn{2}{c|}{$\hat{P}=0.20$, $p=0.12$} & \multicolumn{2}{c|}{$\hat{P}=0.05$, $p=0.03$} & \multicolumn{2}{c|}{$\hat{P}=0.05$, $p=0.01$} \\ Outcome case & Standard & Assisted & Standard & Assisted & Standard & Assisted & Standard & Assisted \\ \hline Correct (C) &\ 0.8960\ &\ 0.8960\ &\ 0.8751\ &\ 0.8751\ &\ 0.9911\ &\ 0.9911\ &\ 0.9917\ &\ 0.9917\ \\ Correct via cancellation (CC) &\ 0.0000\ &\ 0.0000\ &\ 0.0312\ &\ 0.0209\ &\ 0.0018\ &\ 0.0017\ &\ 0.0012\ &\ 0.0011\ \\ Faulty (F) &\ 0.1040\ &\ 0.1040\ &\ 0.0937\ &\ 0.0331\ &\ 0.0071\ &\ 0.0025\ &\ 0.0071\ &\ 0.0003\ \\ Parity Test REJECT (R$_{\mathrm{PT}}$) &\ -\ &\ 0.0000\ &\ -\ &\ 0.0476\ &\ -\ &\ 0.0034\ &\ -\ &\ 0.0022\ \\ Sensor REJECT (R$_{\mathrm{S}}$) &\ -\ &\ 0.0000\ &\ -\ &\ 0.0233\ &\ -\ &\ 0.0013\ &\ -\ &\ 0.0047\ \\ \hline Effective correct outcome $\rightarrow$ &\ 0.8960\ &\ 0.8960\ &\ 0.9063\ &\ 0.9644\ &\ 0.9929\ &\ 0.9974\ &\ 0.9929\ &\ 0.9997\ \\ \hline \end{tabular} \caption{Standard bit-flip quantum error correction outcome fractions compared to those from the sensor-assisted quantum circuit.
Here $\hat{P}=o+p-op$ is the probability for any error to occur in the error channel (i.e., the combined columns~5~\&~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}). The entangling-type error (i.e., column~6 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}) has probability $p$ and the sensor-detectable, environmental-disturbance-induced error (i.e., column~5 in Fig.~\ref{fig:QC-S111-E111-SingleCircuit}) has probability $o$. See the text body for further details and the Appendix for the derivation of the relationship between $\hat{P}$, $o$, and $p$.} \label{tab:probabilities} \end{table*} Finally, Table~\ref{tab:probabilities} presents numerical values for several specific choices of error probability, parameterized by $\hat{P}$ and the entangling-type error probability $p$. The values of the error probabilities are merely illustrative. There are four examples, and for each example, the standard bit-flip quantum error correction code is compared to the sensor-assisted code. We present the fractional weights of specific outcomes from the quantum circuit in Figure~\ref{fig:QC-S111-E111-SingleCircuit}, as described above in the explanation of Table~\ref{tab:001_cases}, with the addition of the Sensor REJECT (R$_{\mathrm{S}}$) cases (which appear in the full 64-combination tables in the Appendix). From Table~\ref{tab:probabilities} we see several features. First, the fractional weight for the correct outcome (C) is always the same for the standard code and the sensor-assisted code, consistent with the ``intuition'' mentioned above. Second, when $\hat{P}=0.20$ and $p=0.20$, the environmental disturbance error probability is zero ($o=(\hat{P}-p)/(1-p)=0$), so the two codes perform the same. Third, the key metric for determining the computational advantage is the effective correct outcome fraction, calculated as $(C+CC)/(C+CC+F)$. As a portion of the cases are removed from consideration by the logic of the sensor-assisted method, the denominator is lower than for the standard quantum error correction code.
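The outcome fractions in Table~\ref{tab:probabilities} can be checked by brute-force enumeration of the 64 error combinations. The sketch below encodes the rules described in the text; it is a reconstruction under the stated assumptions (pure bit-flip errors, ideal sensors), not the authors' code, and the syndrome-to-qubit convention is inferred from Table~\ref{tab:001_cases}. The printed values reproduce the $\hat{P}=0.20$, $p=0.12$ column.

```python
from itertools import product

# Syndrome convention inferred from Table 1: 0x3 -> qubit 0,
# 0x1 -> qubit 1, 0x2 -> qubit 2, 0x0 -> no correction.
SYNDROME_TO_MASK = {0x3: 0b001, 0x1: 0b010, 0x2: 0b100, 0x0: 0b000}

def popcount(x):
    return bin(x).count("1")

def syndrome(r):
    """Ancilla parity checks Z0Z1 and Z0Z2 on the residual flip mask r."""
    q0, q1, q2 = r & 1, (r >> 1) & 1, (r >> 2) & 1
    return (q0 ^ q1) | ((q0 ^ q2) << 1)

def fractions(P_hat, p, assisted=True):
    """Outcome fractions over all 64 (environmental, entangling) masks."""
    o = (P_hat - p) / (1 - p)  # invert P_hat = o + p - o*p
    out = {"C": 0.0, "CC": 0.0, "F": 0.0, "R_PT": 0.0, "R_S": 0.0}
    for e, t in product(range(8), repeat=2):
        prob = (o ** popcount(e) * (1 - o) ** (3 - popcount(e))
                * p ** popcount(t) * (1 - p) ** (3 - popcount(t)))
        r = e ^ t  # residual flips after same-qubit cancellations
        if assisted and popcount(e) >= 2:
            out["R_S"] += prob  # two or more sensors fired
            continue
        s = syndrome(r)
        if assisted and e and s and SYNDROME_TO_MASK[s] != e:
            out["R_PT"] += prob  # syndrome inconsistent with sensor
            continue
        if r ^ SYNDROME_TO_MASK[s]:  # flips remain after correction
            out["F"] += prob
        else:
            out["CC" if e & t else "C"] += prob
    return out

f = fractions(P_hat=0.20, p=0.12)
eff = (f["C"] + f["CC"]) / (f["C"] + f["CC"] + f["F"])
print({k: round(v, 4) for k, v in f.items()}, round(eff, 4))
# -> {'C': 0.8751, 'CC': 0.0209, 'F': 0.0331, 'R_PT': 0.0476, 'R_S': 0.0233} 0.9644
```

Calling `fractions(0.20, 0.12, assisted=False)` likewise reproduces the standard-code column (CC = 0.0312, F = 0.0937), so the enumeration matches both halves of the table.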
The case $\hat{P}=0.20$ and $p=0.12$ is the one quoted in the abstract of this report. Fourth, as the overall scale of the errors' fractional weighting decreases, the utility of the sensor-assist method also decreases, as one would intuitively expect. \section{\label{sec:discussion}Discussion} A number of assumptions were made in the foregoing analysis. It is valuable to explore the limitations these assumptions may impose on the results of this work. First and foremost, we assumed all quantum computation error types are of the bit-flip variety. In the case of using a co-located sensor simply to ``veto'' selected calculations in response to the detection of an environmental disturbance (Sec.~\ref{sec:example_error}), this choice of error type is of no substantive consequence, since any error type is still subject to the same ``veto'' of the entire computation. However, one might argue that the errors present in the actual Yorktown backend calculation are not even discrete in nature. That is, our assumption of a bit-flip type error effectively assumes the co-located sensor is responding to discrete events, like the interaction of an ionizing $\gamma$-ray in the chip substrate. If the environmental disturbances are of a continuous nature, it may be difficult to know when the co-located sensor is reporting a disturbance warranting rejection of the calculation instance. This could be assessed through empirical correlation studies to determine at what level of co-located sensor response it becomes beneficial to reject a specific calculation. Perhaps more pointedly, even the standard textbook three-qubit, bit-flip quantum error correction scheme \emph{presumes knowledge} of the error type. In other words, our assumption of a bit-flip error type is entirely analogous to pedagogical presentations~\cite{10.5555/1972505} of a purely quantum method of error correction.
We believe a key point is that if a co-located sensor's response is \emph{preferentially correlated} with a specific type of correctable error in the quantum calculation, then a sensor-assisted mitigation code implementation is likely fruitful. Furthermore, while not shown in this report, the quantum circuit developed here for use with co-located sensors also works for phase-flip errors when Hadamard gates are inserted on each computational qubit at what would be columns 4.5 and 9.5 in Figure~\ref{fig:QC-S111-E111-SingleCircuit}, and the error types in columns 5 and 6 are changed to \textbf{\texttt{Z}}-gates. Related to the exclusive use of bit-flip type errors in this report's analysis are the, as we have called them, ``fortuitous cancellations'' that arise as a natural (logical) consequence of the introduction of two independent error types within the error channel. We readily agree that it seems highly unlikely that two such errors, of presumably very different phenomenological cause, would perfectly cancel each other on a single qubit. Any specific case of concern would need evaluation in the framework of Table~\ref{tab:001_cases}. In reality, the type of ``errors'' induced by ionizing radiation interactions in superconducting qubit devices is not entirely unknown. Our prior work~\cite{Oliver2020} has shown that elevated levels of ionizing radiation result in increased quasiparticle density in the qubits' superconducting circuits. As quasiparticles tunnel through the Josephson junctions of, for example, transmon qubits, the parity of the quantum state flips. Thus, the appearance of parity transitions in transmon qubits due to tunneling of quasiparticles~\cite{ISI:000320589900109,PhysRevApplied.12.014052,PhysRevB.84.064517,PhysRevLett.121.157701} is a signature of energy injections due to ionizing radiation.
The transition rates of qubit relaxation and dephasing due to quasiparticle tunneling through Josephson junctions were previously investigated~\cite{ISI:000320589900109}. In this report, we have presented simple methods of sensor-assisted fault mitigation in quantum computation. We anticipate that sensor-assisted fault mitigation is possible within the frameworks of surface and stabilizer codes, though we have not explored those possibilities in any detail. Surface codes are potentially of particular interest, as it is easy to envision a physical surface array of single-qubit transmon chips, each containing a QET sensor. A high-quality chip-to-chip communication method would need development, but it is perhaps achievable through air-bridges or capacitive coupling elements in the circuits. In this report, we have consistently had in mind ionizing radiation as representative of a class of environmental disturbing effects on superconducting transmon qubit systems. We proposed a specific sensor type---the QET---as a means for detecting these ionizing radiation specific environmental disturbances. At the present time, ionizing radiation is a minor contributor to quantum computational error. However, we note that plans for future quantum computing systems, such as the ``Goldeneye'' million-qubit capable cryostat IBM is building~\cite{ScienceNews-Goldeneye}, are reaching the same physical scale as deep underground cryogenic research instruments~\cite{ALDUINO20199}, which work actively, passively, and in analysis against ionizing radiation as a background to their experimental detection goals. The likelihood of ionizing radiation interactions occurring increases roughly linearly with the mass of the instrument, which for transmon qubits is the total silicon chip substrate mass.
Once the extraneous silicon chip substrate mass is minimized, the interaction likelihood of ionizing radiation within a single computational cycle will scale directly with the number of qubits (and duration of the computation). In this regime of large-scale qubit systems (and long duration computations) we believe the utility of sensor-assisted fault mitigation is likely to grow. A key question is whether these considerations extend beyond ionizing radiation to other, more general, environmental disturbances. We believe the QET co-located sensor approach described in this report is applicable to most silicon chip-based superconducting Josephson junction qubit devices (e.g., flux, charge, and phase qubit varieties). However, the broader objective of the analysis presented in this report was to show the potential computational value achievable \emph{if} quantum computational error types are preferentially correlated with sensor-detectable environmental disturbances. For superconducting transmon qubit systems, other case types may include IR photon leakage sensing, vibration-induced energy coupling, and stray electric- or magnetic-field fluctuations. We are not in a position to speculate on analogous environmental disturbance error types and sensor combinations in other qubit modalities. We look to experts in the relevant disciplines to consider if the ideas presented in this report are transferable to other quantum computing systems. During the final preparation of this report for submission, we were made aware of an article by J.M.~Martinis~\cite{martinis2020saving} which presented a model of ionizing radiation induced errors in superconducting qubits. Of interest to our own report, the Martinis article touches on error correction in the face of disturbances from ionizing radiation. 
In particular, Martinis states, ``if errors are large or correlated, bunching together either in time or across the chip in space, then error decoding fails.'' We concur with this assessment as it relates to disturbances from ionizing radiation and find the design solutions suggested by Martinis to be compelling. Our own suggested design solution, described in this report, is to place uniquely paired sets of a qubit and a sensor together on a shared chip substrate. Communication via air-bridges, capacitive coupling, or other novel means of qubit-to-qubit interconnection is required to create a network of qubits for computation. In this way, we see no inconsistencies between the concepts presented in this report and those presented by Martinis. \section{\label{sec:summary}Summary} In this report, we proposed hybrid superconducting device concepts for quantum computation. The inclusion of a co-located sensor on qubit substrates provides the potential to detect environmental disturbances causing errors in a quantum computation. In the simplest form, such co-located sensors provide a means to selectively ``veto'' and reject just those calculations where an environmental disturbance is likely to result in an incorrect calculation result. We showed the computational advantage of such a scheme and proposed device concepts that could implement such error mitigating techniques using proven device fabrication designs and methods. We abstracted the co-located sensor concept to a scenario where every qubit has a uniquely assigned co-located sensor. We developed a formulation of the three-qubit, bit-flip quantum error correction code to take advantage of the co-located sensor's ability to detect environmental disturbances. The results demonstrated an enhanced effective quantum computational performance at the cost of the rejection of some calculation repetitions.
In both fault mitigation concepts considered in this report, the computational enhancements are numerically modest. Nevertheless, we believe these results recommend the development and investigation of a new class of superconducting quantum computation devices that include co-located sensors for the detection of environmental disturbances. We believe such devices are a potential new tool in the broad category of hybrid quantum-classical algorithm development and approaches to quantum error mitigation~\cite{endo2020hybrid}. \begin{acknowledgments} The concepts presented in this work stem from efforts underway at Pacific Northwest National Laboratory (PNNL). PNNL is a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy under Contract No. DE-AC05-76RL01830. The research results presented in this work were developed outside of any supporting grant or internal R\&D support mechanism and are associated with U.S. provisional patent application number~63/123,050 (9 December 2020). The authors thank Alexander Melville and Kyle Serniak (both at MIT Lincoln Laboratory) for answering questions regarding how superconducting qubits located on separate chip substrates might inter-communicate through superconducting air-bridges or capacitive coupling across gaps between chips, making some of the speculative device concepts we propose seem more plausible to the authors. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. The authors thank Mark V. Raugas and Tobias J. Hagge (both at PNNL) for constructive comments on an early draft of this report. PNNL Information Release PNNL-SA-158581. \end{acknowledgments} \input{main.bbl} \appendix \begin{figure} \caption{The possibilities for combining two independent random event types with individual probabilities of occurrence, $o$ and $p$. 
The values $(1-o)$ and $(1-p)$ are the individual probabilities for each random event type to \emph{not} occur.} \label{fig:outcome_table} \end{figure} \section{Other sensor-assisted qubit concepts} \begin{figure*} \caption{Qubit and QET sensor concept.} \label{fig:3-qubit-QET-scheme} \caption{Scheme for 3 qubits and QET.} \label{fig:3-qubit-QET-connections} \caption{Three chips of 3 qubits each.} \label{fig:3-chip-3-qubit} \caption{Concepts for 3-qubit devices utilizing co-located QET sensors. The star symbol, $\star$, represents a chip-to-chip inter-communication point.} \label{fig:multi-qubit-TES-scheme} \end{figure*} \begin{figure*} \caption{The 9-qubit Shor code, left, with qubit groupings and alpha labels on the computational steps. To the right, qubit groupings with a physical arrangement similar to that shown in Figure~\ref{fig:3-chip-3-qubit}.} \label{fig:3x3-physical} \end{figure*} \begin{figure*} \caption{Similar to Fig.~\ref{fig:3x3-physical}, but with the physical qubits distributed across the Shor code.} \label{fig:3x3-logical} \end{figure*} Early in the development of the sensor-assisted quantum error correction concepts presented in this report, we explored integrating co-located sensors into the 9-qubit Shor code. We envisioned groups of qubits on shared substrates, monitored by co-located QET sensors; see Figure~\ref{fig:multi-qubit-TES-scheme}. We considered 3-qubit chips, in a grouping of three chips, to provide the 9~physical qubits needed for the Shor code. The concept would assume a standard Toffoli gate (CCNOT gate) implementation is available and that there is a means for interconnecting the three chips. Concepts for physical implementation in superconducting Josephson multi-qubit devices were recently explored in the literature~\cite{PhysRevA.101.022308}, and we believe chip-to-chip air bridges or capacitive coupling are future possibilities. Use of ancilla qubits is also likely in a practical implementation, though that was not considered in these initial concepts.
The 9-qubit Shor code contains several 3-fold symmetries we believed would prove advantageous for using co-located sensors to provide informed error correction coding. Figures~\ref{fig:3x3-physical} and~\ref{fig:3x3-logical} present these ideas. In each of the figures, the computational gates are assigned a designating letter (a--j) and are grouped within colored boxes. The qubits residing on the same chip share the same color (blue, green, or orange). The qubits can be grouped in sets of three so that the chip-to-chip communication is minimized (Fig.~\ref{fig:3x3-physical}). However, one sees that the signals from a co-located sensor will flag an entire sub-group of the qubits as potentially error prone, making the Shor code fail, in general. An alternative is to distribute the physical qubits across the Shor code (see Fig.~\ref{fig:3x3-logical}), but at the expense of having the majority of the multiple-qubit gates require chip-to-chip communication. Worse, one sees once again that a single co-located sensor event flags three qubits across the Shor code as potentially being in error. As the Shor code, in general, can only protect against two qubit errors, we realized this approach was likely not fruitful. From this analysis, we abandoned further development of error correction where a co-located sensor is assigned to more than one single (independent) qubit. However, we expect that for specific computation implementations, there may yet be utility in considering symmetries within the computation to determine how to efficiently arrange co-located sensors whilst minimizing error-prone qubit-to-qubit inter-communications.
One hundred trials of 81,920 shots were conducted to determine the variation of the sample. Figure~\ref{fig:balancedDJ} presents the distributions of these one hundred trials of 81,920 shots. The Yorktown backend shows greater variability (Fig.~\ref{fig:balancedDJ}(a)~\&~(e)), suggesting error-inducing effects beyond simple Poisson statistical variation.\footnote{This is also likely a result of the IBM Q Experience's transpilation step for implementation of a quantum circuit on a specific backend, as well as errors introduced solely in the measurement stage.} The noise model for the Yorktown backend (Fig.~\ref{fig:balancedDJ}(b)~\&~(f)), however, shows Poissonian statistical variation, as expected for a fixed, deterministic simulation process. It is interesting to note the modeled noise for the Yorktown backend does not appear to closely match the results of the actual device, and in fact produced ``incorrect'' state outcomes in a larger fraction of calculations (i.e., Fig.~\ref{fig:balancedDJ}(f)). As expected, the bit-flip-based error models (Fig.~\ref{fig:balancedDJ}(c,d)~\&~(g,h)) show only Poissonian statistical variation, as the errors are discrete in nature and follow a strict fixed probability of being introduced into the quantum circuit by construction. It is clear from these results that the improvement provided by the use of the co-located sensor to ``veto'' some calculations does produce a statistically significant enhancement of the computational result when comparing the two bit-flip-based modeled error cases. That is, comparing Figure~\ref{fig:balancedDJ}(d)~to~(c) shows a greater fraction of correct outcome states, while comparing Figure~\ref{fig:balancedDJ}(h)~to~(g) shows a lower fraction of incorrect outcome states.
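As a rough scale for the Poissonian (binomial) variation referenced above: assuming independent shots, the standard deviation of an outcome fraction $f$ over $N$ shots is $\sqrt{f(1-f)/N}$. A brief sketch, where the fraction $f=0.90$ is illustrative rather than read from the figure:

```python
import math

def shot_noise_sigma(f, n):
    """Binomial standard deviation of an outcome fraction f over n shots."""
    return math.sqrt(f * (1.0 - f) / n)

N = 81920            # shots per trial, as in the text
f = 0.90             # illustrative correct-outcome fraction (assumed)
sigma = shot_noise_sigma(f, N)
# Trial-to-trial spreads much wider than this scale (as observed for the
# Yorktown hardware) indicate variation beyond simple shot statistics.
```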
\begin{figure*} \caption{Results from three implementations of a balanced Deutsch-Jozsa calculation (see Fig.~\ref{fig:balanced-dj-circuit}).} \label{fig:balancedDJ} \end{figure*} \section{Probabilities in two error systems} In this section we analyze, in an entirely generic way, the probability outcomes for two independent, random, bi-modal processes (see Figure~\ref{fig:outcome_table}) on three independent channels. Consider two independent random event processes, each having fixed probabilities, $o$ and $p$, of occurring in a given time period. We refer to these as Type-$o$ and Type-$p$ events in the context of this report. Initially, we make no assumptions about what these events represent. We are interested in detailing all possible ways these two independent events can occur in the given time period. None of the following discussion relies on any quantum mechanical assumptions whatsoever, or on any knowledge of the event type. There are only four possible cases, as presented in Figure~\ref{fig:outcome_table}. Since Figure~\ref{fig:outcome_table} is complete and exhaustive of all possibilities, we can write two probability equations to represent the probability of at least one error occurring and the probability of no error occurring, respectively: \begin{eqnarray} P_{\mathrm{error}} & = & o \cdot p + o \cdot (1-p) + (1-o) \cdot p \\ & = & o + p - o p \equiv \hat{P} \end{eqnarray} \begin{eqnarray} P_{\mathrm{no~error}} & = & (1-o) \cdot (1-p) \\ & = & 1 - p - o + o p \\ & = & 1 - ( o + p - o p ) = 1 - \hat{P} \end{eqnarray} Thus, for two independently drawn random bi-modal errors, the combined probability of an event is $\hat{P}$, while the probability of no event is $(1-\hat{P})$. In this work we will identify the Type-$o$ and Type-$p$ events with different sorts of errors induced on a qubit. We further assume the probability $\hat{P}$ satisfies the quantum error correction requirement of being ``small'' (i.e., less than $0.5$).
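The identity $\hat{P}=o+p-op$ and its complementary no-error probability can be checked numerically for arbitrary $o$ and $p$; a minimal sketch:

```python
import random

def p_error(o, p):
    """Probability that at least one of two independent events occurs."""
    return o * p + o * (1.0 - p) + (1.0 - o) * p   # = o + p - o*p

random.seed(1)
for _ in range(1000):
    o, p = random.random(), random.random()
    # The four cases of the outcome table are exhaustive, so the error
    # and no-error probabilities must sum to one:
    assert abs(p_error(o, p) + (1.0 - o) * (1.0 - p) - 1.0) < 1e-12
    assert abs(p_error(o, p) - (o + p - o * p)) < 1e-12
```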
It should be noted that if $\hat{P}$ is small, then $o$ and $p$ must also both individually be small and therefore also less than $0.5$. Now consider the outcome equation for three qubits, $\mathrm{q0}$, $\mathrm{q1}$, $\mathrm{q2}$, in one time period when an error may occur on any combination of qubits. We write this as \begin{eqnarray} \mathbf{1}_{\mathrm{q0,q1,q2}} & = & \mathbf{1}_{\mathrm{q0}} \times \mathbf{1}_{\mathrm{q1}} \times \mathbf{1}_{\mathrm{q2}} \\ & = & \big( (1 - \hat{P}) + \hat{P} \big)_{\mathrm{q0}} \\ & & \times \big( (1 - \hat{P}) + \hat{P} \big)_{\mathrm{q1}} \\ & & \times \big( (1 - \hat{P}) + \hat{P} \big)_{\mathrm{q2}} \end{eqnarray} which exhausts all possible outcomes for the three qubits. We further expand this outcome equation to highlight the individual Type-$o$ and Type-$p$ errors, adopting the compact notation $\bar{o}=(1-o)$ and $\bar{p}=(1-p)$: \begin{eqnarray} \mathbf{1}_{\mathrm{q0,q1,q2}} & = & \big( \bar{o} \bar{p} + o p + o \bar{p} + \bar{o} p \big)_{\mathrm{q0}} \\ & & \times \big( \bar{o} \bar{p} + o p + o \bar{p} + \bar{o} p \big)_{\mathrm{q1}} \\ & & \times \big( \bar{o} \bar{p} + o p + o \bar{p} + \bar{o} p \big)_{\mathrm{q2}} \end{eqnarray} Given the assumption in this report that all error types are bit-flip errors, the terms $o p$ have special significance in that the two errors on a single qubit will cancel out.
Thus, we add a notation, $\bar{c}$, representing when errors cancel out: \begin{eqnarray} \mathbf{1}_{\mathrm{q0,q1,q2}} & = & \big( \bar{o} \bar{p} + \bar{c} + o \bar{p} + \bar{o} p \big)_{\mathrm{q0}} \\ & & \times \big( \bar{o} \bar{p} + \bar{c} + o \bar{p} + \bar{o} p \big)_{\mathrm{q1}} \\ & & \times \big( \bar{o} \bar{p} + \bar{c} + o \bar{p} + \bar{o} p \big)_{\mathrm{q2}} \end{eqnarray} At this point we assume the Type-$o$ and Type-$p$ errors, while independent between the three qubits, come from the same physical source types and have the same probability values (i.e., $o = o_{\mathrm{q0}}=o_{\mathrm{q1}}=o_{\mathrm{q2}}$ and $p = p_{\mathrm{q0}}=p_{\mathrm{q1}}=p_{\mathrm{q2}}$). Thus, multiplying through the outcome equation and regrouping terms by the order of the $\bar{o}$-factors, we arrive at: \begin{eqnarray} \mathbf{1}_{\mathrm{q0,q1,q2}} & = & \bar{o}^3 \left(3 p \bar{p}^2+\bar{p}^3\right) \\ & & +\ \bar{o}^3 \left(p^3+3 p^2 \bar{p}\right) \\ & & +\ \bar{o}^2 \left(6 p \bar{c} \bar{p}+3 \bar{c} \bar{p}^2+3 o \bar{p}^3\right) \\ & & +\ \bar{o}^2 \left(3 p^2 \bar{c}+3 o p^2 \bar{p}+6 o p \bar{p}^2\right) \\ & & +\ \bar{o} \left(3 p \bar{c}^2+3 \bar{c}^2 \bar{p}+6 o \bar{c} \bar{p}^2\right) \\ & & +\ \bar{o} \left(6 o p \bar{c} \bar{p}+3 o^2 p \bar{p}^2+3 o^2 \bar{p}^3\right) \\ & & +\ \bar{c}^3+3 o \bar{c}^2 \bar{p} \\ & & +\ 3 o^2 \bar{c} \bar{p}^2+o^3 \bar{p}^3 \end{eqnarray} We identify Type-$o$ errors with environmental, sensor-detectable errors and Type-$p$ errors with entanglement-type errors, which cannot be detected. We regroup the terms according to whether the outcome is correct (C), faulty (F), or correct via cancellation (CC), as well as whether the sensor-assist either outright REJECTs the calculation (R$_{\mathrm{S}}$) or sets the REJECT flag based on the syndrome parity test (R$_{\mathrm{PT}}$). These amount to the fractions, $\mathcal{F}$, of cases of each kind.
\begin{eqnarray} \mathcal{F}_{\mathrm{C\textit{vs.}C}} & = & \bar{o}^3 \left(3 p \bar{p}^2+\bar{p}^3\right) + \bar{o}^2 \left(3 o \bar{p}^3\right) \\ \mathcal{F}_{\mathrm{CC\textit{vs.}CC}} & = & \bar{o}^2 \left(3 \bar{c} \bar{p}^2\right) \\ \mathcal{F}_{\mathrm{F\textit{vs.}F}} & = & \bar{o}^3 \left(p^3+3 p^2 \bar{p}\right) \\ & & +\ \bar{o}^2 \left(3 p^2 \bar{c}+3 o p^2 \bar{p}\right) \\ \mathcal{F}_{\mathrm{CC\textit{vs.}R}_{\mathrm{PT}}} & = & \bar{o}^2 \left(6 p \bar{c} \bar{p}\right) \\ \mathcal{F}_{\mathrm{F\textit{vs.}R}_{\mathrm{PT}}} & = & \bar{o}^2 \left(6 o p \bar{p}^2\right) \\ \mathcal{F}_{\mathrm{CC\textit{vs.}R}_{\mathrm{S}}} & = & \bar{o} \left(3 p \bar{c}^2+3 \bar{c}^2 \bar{p}+6 o \bar{c} \bar{p}^2\right) \\ & & +\ \bar{c}^3+3 o \bar{c}^2 \bar{p} \\ \mathcal{F}_{\mathrm{F\textit{vs.}R}_{\mathrm{S}}} & = & \bar{o} \left(6 o p \bar{c} \bar{p}+3 o^2 p \bar{p}^2+3 o^2 \bar{p}^3\right) \\ & & +\ 3 o^2 \bar{c} \bar{p}^2+o^3 \bar{p}^3 \end{eqnarray} An alternative means for presenting the outcomes is through the truth table of all 64 possible error combinations. This is presented in Tables~\ref{tab:all_64_cases_I}~\&~\ref{tab:all_64_cases_II}. For specific numerical cases, see Figure~\ref{fig:eff_fault}, which shows the effective fault rate (fraction of faults in unrejected calculations, i.e., $ \mathcal{F}_{\mathrm{F\textit{vs.}F}} / ( \mathcal{F}_{\mathrm{F\textit{vs.}F}} + \mathcal{F}_{\mathrm{C\textit{vs.}C}} + \mathcal{F}_{\mathrm{CC\textit{vs.}CC}} ) $ calculated from equations C24-C27). 
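These expressions can be evaluated numerically. For the case $\hat{P}=0.20$, $p=0.12$ highlighted in the main text, a short Python sketch reproduces the sensor-assisted entries of Table~\ref{tab:probabilities}:

```python
# Cross-check of the outcome fractions F for P_hat = 0.20, p = 0.12.
# From P_hat = o + p - o*p, solve for the environmental error rate o.
P_hat, p = 0.20, 0.12
o = (P_hat - p) / (1.0 - p)        # = 1/11
ob, pb = 1.0 - o, 1.0 - p          # \bar{o}, \bar{p}
c = o * p                          # \bar{c}: both error types hit one qubit and cancel

F_C  = ob**3 * (3*p*pb**2 + pb**3) + ob**2 * (3*o*pb**3)
F_CC = ob**2 * (3*c*pb**2)
F_F  = ob**3 * (p**3 + 3*p**2*pb) + ob**2 * (3*p**2*c + 3*o*p**2*pb)
# Parity-test REJECT: CC-vs-R_PT plus F-vs-R_PT contributions.
F_R_PT = ob**2 * (6*p*c*pb) + ob**2 * (6*o*p*pb**2)
# Sensor REJECT: CC-vs-R_S plus F-vs-R_S contributions.
F_R_S  = (ob * (3*p*c**2 + 3*c**2*pb + 6*o*c*pb**2) + c**3 + 3*o*c**2*pb
          + ob * (6*o*p*c*pb + 3*o**2*p*pb**2 + 3*o**2*pb**3)
          + 3*o**2*c*pb**2 + o**3*pb**3)

total = F_C + F_CC + F_F + F_R_PT + F_R_S   # exhaustive, so must sum to 1
eff = (F_C + F_CC) / (F_C + F_CC + F_F)     # effective correct outcome
```

To rounding, this reproduces the sensor-assisted column ($0.8751$, $0.0209$, $0.0331$, $0.0476$, $0.0233$) and the effective correct outcome $0.9644$ of Table~\ref{tab:probabilities}.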
\begin{figure} \caption{Fraction of calculations containing faults that are not rejected by combined information of the co-sensors and parity registers, as a function of the total per-qubit error rate $\hat{P}$.} \label{fig:eff_fault} \end{figure} \def\arraystretch{1.1} \begin{table*}[h] \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\ \hline \hline ~[000] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{0}$ & \phantom{AC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{0} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{IX}} & &
\textbf{\texttt{X}} & & $o^{0} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{3} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{0} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{1}$ & \\ \hline & (110) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{0} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{0} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{3} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. 
& Non-error & \textit{vs.} Assisted \\ \hline \hline ~[001] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (110) & ~\textbf{\texttt{XI}} & & 
\textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. & Non-error & \textit{vs.} Assisted \\ \hline \hline ~[010] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{II}} & & 
\textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (110) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. 
& Non-error & \textit{vs.} Assisted \\ \hline \hline ~[100] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & C \textit{vs.} C \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} CC \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (110) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{1} 
\cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{PT}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{1} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} F \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{2} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \caption{See main report and Table~\ref{tab:001_cases} for description of tables.} \label{tab:all_64_cases_I} \end{table*} \begin{table*}[h] \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. 
& Non-error & \textit{vs.} Assisted \\ \hline \hline ~[011] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (110) & 
~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. 
& Non-error & \textit{vs.} Assisted \\ \hline \hline ~[101] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{II}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (110) & 
~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{IX}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. 
& Non-error & \textit{vs.} Assisted \\ \hline \hline ~[110] & (000) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (110) & 
~\textbf{\texttt{II}} & & \textbf{\texttt{I}} & & $o^{2} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{IX}} & & \textbf{\texttt{X}} & & $o^{2} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{1} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \small \centering \begin{tabular}{|cc|ccc|c|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Errors}} & \multicolumn{3}{c|}{\textbf{Gates}} & \textbf{Synd.} & \textbf{Prob.} & \textbf{Outcome} \\ \multicolumn{2}{|c|}{[Enviro.]} & \multicolumn{3}{c|}{Col. 5~\&~6} & Ancilla & Error $\times$ & Standard \\ \multicolumn{2}{|c|}{(Entangle)} & \multicolumn{3}{c|}{$\Rightarrow$ Result} & c-reg. 
& Non-error & \textit{vs.} Assisted \\ \hline \hline ~[111] & (000) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{0}$ & \phantom{CC \textit{vs.} R$_{\mathrm{PT}}$} \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}0 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{3}$ & \\ \hline & (001) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}3 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{2}$ & \\ \hline & (010) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}1 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{2}$ & \\ \hline & (100) & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{1}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}2 & $\times$ & F \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{2}$ & \\ \hline & (011) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}2 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $\bar{o}^{0} \cdot \bar{p}^{1}$ & \\ \hline & (101) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XI}} & ~$\Rightarrow$ & \textbf{\texttt{X}} & 0\texttt{x}1 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{1}$ & \\ \hline & (110) & 
~\textbf{\texttt{XI}} & & \textbf{\texttt{X}} & & $o^{3} \cdot p^{2}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}3 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{1}$ & \\ \hline & (111) & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $o^{3} \cdot p^{3}$ & \\ & & ~\textbf{\texttt{XX}} & ~$\Rightarrow$ & \textbf{\texttt{I}} & 0\texttt{x}0 & $\times$ & CC \textit{vs.} R$_{\mathrm{S}}$ \\ & & ~\textbf{\texttt{XX}} & & \textbf{\texttt{I}} & & $\bar{o}^{0} \cdot \bar{p}^{0}$ & \\ \hline \end{tabular} \caption{See main report and Table~\ref{tab:001_cases} for description of tables. Here R$_{\mathrm{S}}$ = REJECT based on sensors.} \label{tab:all_64_cases_II} \end{table*} \end{document}
\begin{document} \title [Survival of dominated strategies] {Survival of dominated strategies under imitation dynamics} \author [P.~Mertikopoulos] {Panayotis Mertikopoulos$^{\ast}$} \address{$^{\ast}$\, Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, 38000 Grenoble, France} \EMAIL{[email protected]} \author [Y.~Viossat] {Yannick Viossat$^{\diamond,\sharp}$} \address{$\diamond$\, CEREMADE, Université Paris Dauphine-PSL, Place du Maréchal de Lattre de Tassigny, F-75775 Paris, France } \address{$\sharp$\, Corresponding author} \EMAIL{[email protected]} \subjclass[2020]{Primary: 91A22, 91A26.} \keywords{ Evolutionary game theory; evolutionary game dynamics; imitation; dominated strategies; survival; rationality.} \thanks{This article is dedicated to the memory of Bill Sandholm, who, had he lived, would have been a co-author of this work. We thank him, Vianney Perchet, Jorge Pe\~{n}a, seminar audiences, and two anonymous reviewers for helpful comments.} \begin{abstract} The literature on evolutionary game theory suggests that pure strategies that are strictly dominated by other pure strategies always become extinct under imitative game dynamics, but they can survive under innovative dynamics. As we explain, this is because innovative dynamics favour rare strategies while standard imitative dynamics do not. However, as we also show, there are reasonable imitation protocols that favour rare or frequent strategies, thus allowing strictly dominated strategies to survive in large classes of imitation dynamics. Dominated strategies can persist at nontrivial frequencies even when the level of domination is not small. \end{abstract} \maketitle \allowdisplaybreaks \section{Introduction} \label{sec:intro} Many economic models assume that the agents they consider are rational. This may be defended as a reference case or for tractability. 
A more interesting justification is that, at least in tasks that they perform routinely, and for which they have enough time to experiment, even weakly rational agents should come to learn which strategies do well, and behave eventually \emph{as if} they were rational. The same intuition applies to other evolutionary processes, such as natural selection or imitation of successful agents. But does evolution really wipe out irrational behaviors? A simple way to tackle this question in a game-theoretic context is to study whether evolutionary game dynamics wipe out dominated strategies, in the sense that the frequency of these strategies goes to zero as time goes to infinity. This may be interpreted in several ways, depending on whether domination means weak or strict domination, whether the strategies considered are pure or mixed, and whether the dynamics are deterministic or stochastic (see Viossat, 2015 \cite{11}, for a partial survey). We focus here on what we see as the most basic question: \emph{do pure strategies that are strictly dominated by other pure strategies become extinct under deterministic dynamics in continuous time?} The literature's answer is mixed. Roughly speaking, evolutionary game dynamics may be classified as imitative or innovative. In imitative dynamics, which model imitation processes or pure selection (without mutation), strategies that are initially absent from the population never appear. The leading example is the replicator dynamics. In innovative dynamics, strategies initially absent from the population may appear. Examples include the best-reply dynamics (and smoothed versions of it), the Brown-von Neumann-Nash dynamics, the Smith dynamics, the projection dynamics, and others.
The literature shows that imitative dynamics (in the sense of Sandholm, 2010 \cite{10}) always eliminate pure strategies strictly dominated by other pure strategies (Akin, 1980 \cite{1}; Nachbar, 1990 \cite{7}), while innovative dynamics need not do so, with the notable exception of the best-reply dynamics. Indeed, building on Berger and Hofbauer (2006) \cite{2}, Hofbauer and Sandholm (2011) \cite{5} show that for all dynamics satisfying four natural conditions called Innovation, Continuity, Nash Stationarity and Positive Correlation, there are games in which pure strategies strictly dominated by other pure strategies survive in relatively high proportion. Moreover, their simulations show that, at least for some well-known dynamics, dominated strategies may survive at non-negligible frequencies even when the payoff difference between the dominated and dominating strategies is relatively large. Thus, with respect to elimination of dominated strategies, there seems to be a sharp contrast between imitative and innovative processes. This paper argues that this is not the case. As we shall explain, the intuitive reason why innovative dynamics allow for survival of dominated strategies is that they give an edge to rare strategies. Indeed, the \emph{Innovation} property of Hofbauer and Sandholm stipulates that if a strategy is an unplayed best-response to the current population state, then it should appear in the population: technically, the derivative of its frequency should be positive. The per-capita growth rate of its frequency is then infinite. Moreover, the \emph{Continuity} property requires that the dynamics depend smoothly on the payoffs of the game and the population state. Taken together, these two properties imply that rare strategies that are almost-best replies to the current population state have a huge per-capita growth rate, potentially higher than strategies that have a slightly better payoff, but are more frequent.
In this sense, Hofbauer and Sandholm's dynamics favour rare strategies. When a dominated strategy becomes rare, this advantage to rarity may compensate for the fact of being dominated and allow it to survive. By contrast, in imitative dynamics, the per-capita growth rates of pure strategies are always ordered as their payoffs, irrespective of their frequencies in the population. But we feel that this is, in some sense, an artifact, a legacy of the history of evolutionary game theory. Indeed, imitative dynamics arose as variants of the replicator dynamics, which originated as a natural selection model and was only a posteriori reinterpreted as an imitation model. Ironically, their rationality properties come from their biological interpretation. But if we consider a priori which dynamics could arise from an imitation protocol, then we arrive quite naturally at dynamics that provide an evolutionary advantage to rare strategies (or frequent strategies) in a sense that we will make clear. As in innovative dynamics, this advantage to rarity (or commonness) may offset the fact of being dominated, hence allowing dominated strategies to survive. More precisely, imitative dynamics may be derived through a two-step imitation protocol. In the first step, an agent (henceforth, the \emph{revising agent}) meets another individual (the \emph{mentor}) uniformly at random. In an infinite population, the probability that the mentor plays a given strategy is thus equal to the frequency of this strategy. In the second step, the revising agent decides to adopt the mentor's strategy or to keep his own. The adoption rule depends on the dynamics but satisfies a monotonicity condition. Roughly, the probability of switching is larger if the revising agent's payoff is low, the mentor's payoff is large, or both.
This leads to dynamics that coincide with Nachbar's (1990) \cite{7} monotone dynamics: if strategy $i$ has a larger current payoff than strategy $j$, then its frequency has a larger per-capita growth rate. We thus suggest calling them \emph{monotone imitative dynamics}.\footnote{We thank an anonymous reviewer for suggesting this name.} To motivate more general, non-monotone imitative dynamics, we consider revision protocols where the second step satisfies the standard monotonicity condition, but the first step is modified. Instead of always meeting a single other individual, a revising agent sometimes meets several. There are then many reasonable ways of choosing a mentor (or, depending on the interpretation, a strategy to be potentially imitated). The probability of considering a switch to a given strategy may then be lower or higher than the frequency of this strategy, in a way that may systematically favour rare or frequent strategies. This leads to dynamics that are no longer monotone in the sense of Nachbar (1990) \cite{7}, and under which dominated strategies may survive. Jorge Pe\~na brought to our attention that similar phenomena have been studied in the literature on the evolution of cooperation. In particular, a conformist bias may allow cooperation to survive in the prisoner's dilemma (e.g., Boyd and Richerson, 1988 \cite{3}; Henrich and Boyd, 2001 \cite{4}; Pe\~na et al., 2009 \cite{8}; and references therein). We first illustrate these ideas on dynamics derived from imitation protocols based on adoption of successful strategies or departure from less successful ones, but not on direct comparison between the payoff of an agent's current strategy and that of the strategy he considers adopting. With such protocols, agents keep switching from one strategy to another even when all strategies earn the same payoff. For this reason, an advantage to rare or frequent strategies always bites, and dominated strategies may survive even in games with only two strategies.
The argument is simple: if the two strategies are twins, that is, always earn the same payoffs, then in the case of an advantage to rare strategies, the shares of both strategies tend to become equal. Technically, the population state where both strategies are played with probability 1/2 is globally asymptotically stable. If we penalize one of the strategies sufficiently little, so as to make it dominated, most solutions still converge to one or several rest points in the neighborhood of this population state, in which the dominated strategy is played with positive probability. Of course, these rest points cannot be Nash equilibria. This reveals that the dynamics we just mentioned do not satisfy the evolutionary folk theorem (see, e.g., Weibull, 1995 \cite{12}). Nor do they satisfy the Positive Correlation condition, which stipulates that there is a positive correlation between the growth rates of strategies and their payoffs (or, equivalently, that against a constant environment, the average payoff in the population increases). Our main result is to show that survival of dominated strategies also occurs under dynamics that are derived from imitation protocols based on payoff comparison, and that satisfy both the evolutionary folk theorem and an appropriate version of Positive Correlation. We show that this is the case as soon as they also satisfy the Continuity condition of Hofbauer and Sandholm and two additional conditions: Imitation, and Advantage to Rarity. The former requires that, except at Nash equilibria, a strategy which is currently played must be abandoned by some agents or imitated by others (or both). The latter assumes that if two strategies are twins, then the rarer one has a per-capita growth rate that is no lower than the per-capita growth rate of the more frequent one, and strictly higher in some precise circumstances. The Advantage to Rarity condition may be replaced by a similar Advantage to Frequency.
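To make the twin argument above concrete, here is a hypothetical one-dimensional caricature (our own illustration, not a dynamic from the paper): take two twin strategies, let $x$ be the share of the first, and posit a rarity-favouring drift $\dot x = x(1-x)\bigl((1-2x) - \epsilon\bigr)$, where $\epsilon \ge 0$ is a payoff penalty on strategy 1 that makes it dominated. For $\epsilon = 0$ the state $x = 1/2$ attracts all interior solutions; for small $\epsilon > 0$ the attracting rest point merely shifts to $(1-\epsilon)/2$, so the dominated strategy keeps a positive share.

```python
# Hypothetical toy dynamic (not from the paper): x is the share of the first
# of two twin strategies.  The factor (1 - 2x) gives an advantage to the
# rarer twin; eps >= 0 penalizes strategy 1, making it dominated.
def toy_field(x, eps):
    return x * (1.0 - x) * ((1.0 - 2.0 * x) - eps)

def integrate(x0, eps, dt=0.1, steps=5000):
    """Forward-Euler integration of x' = toy_field(x, eps)."""
    x = x0
    for _ in range(steps):
        x += dt * toy_field(x, eps)
    return x

# Without a penalty, the twins equalize: x = 1/2 attracts all of (0, 1).
assert abs(integrate(0.9, eps=0.0) - 0.5) < 1e-6
# With a small penalty, the dominated twin survives at the interior rest
# point x = (1 - eps)/2.
assert abs(integrate(0.9, eps=0.2) - 0.4) < 1e-6
```

As the paper notes, such interior rest points cannot be Nash equilibria, which is why dynamics of this kind violate the evolutionary folk theorem.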
We provide a number of imitation protocols leading to dynamics satisfying these assumptions. Under these dynamics, if a solution converges to a rest point, this point must be a Nash equilibrium, hence puts zero weight on all strictly dominated strategies. Therefore, to prove that dominated strategies may survive, we need to consider games where solutions cycle. We consider the same game as Hofbauer and Sandholm, the hypnodisk game with a feeble twin, and use similar arguments, with some twists. We check via simulations that dominated strategies can also survive in more standard games, such as a Rock-Paper-Scissors game augmented by a feeble twin of Scissors, as also considered by Hofbauer and Sandholm. Finally, we show that simpler examples of survival of dominated strategies can be given if we depart from single-population dynamics and consider a population of agents facing an environment which oscillates for exogenous reasons. The remainder of this article is organized as follows. Evolutionary dynamics are introduced in \cref{sec:EvolDyn}. \cref{sec:ImProc} describes imitation processes favouring rare strategies or frequent strategies. \cref{sec:simple} gives a simple example of survival of dominated strategies under dynamics based on protocols known as the imitation of success, or imitation driven by dissatisfaction. \cref{sec:paycomp} states our main results: that survival of dominated strategies also occurs for imitation dynamics based on payoff comparison, and for any imitation dynamics satisfying some natural conditions, on top of favouring rare or frequent strategies. The result is proved in \cref{sec:proof}. \cref{sec:disc} concludes. \cref{app:proofs} gathers some proofs. \cref{app:moreprot} discusses more general imitation protocols than those described in the main text.
Finally, \cref{app:unilateral} gives simple examples of survival of dominated strategies under dynamics based on payoff comparison in a population playing against an ad-hoc environment. \section{Evolutionary dynamics} \label{sec:EvolDyn} With the exception of \cref{app:unilateral}, we focus on single-population dynamics. There is a single, unit mass population of agents. These agents may choose any pure strategy in the set $I = \{1, \dotsc, N\}$. The frequency of strategy $i$ at time $t$ is denoted by $x_i(t)$. The vector $x(t) = (x_i(t))_{i \in I}$ of these frequencies is called the population state at time $t$. It belongs to the simplex $X = \{x \in \mathbb{R}^N_+, \sum_{i \in I} x_i = 1\}$. The payoff for an agent playing strategy $i$ when the population state is $x$ is denoted by $F_i(x)$. The vector $F(\cdot)=(F_1(\cdot),\dotsc,F_N(\cdot)) : X \to \mathbb{R}^N$ is called the game's payoff function. We frequently identify a (symmetric two-player) game and its payoff function. We are interested in evolutionary dynamics of the form $\dot{x}= V^F(x)$, with $V^F$ Lipschitz continuous in $x$, to ensure existence and uniqueness of solutions through a given initial condition. Thus, the population state evolves as a function of the current state and the payoffs of the game. The vector field $V^F$ is assumed to depend continuously on the game's payoff function $F$.\footnote{To fix ideas, we use the sup norm on the space of payoff functions: $||F|| = \sup_{x \in X, i \in I} |F_i(x)|$, and again the sup norm $||(F,x)|| = \max(||F||, ||x||)$ to define joint continuity in $(F, x)$. This is not essential.} A well-known example is the replicator dynamics: \begin{equation} \label{eq:rep} \dot x_i(t) = x_i(t) \left[F_i(x(t)) - \bar{F}(x(t))\right] \end{equation} where $\bar{F}(x(t))= \sum_{i \in I} x_i(t) F_i(x(t))$ is the average payoff in the population. We often omit to specify that the payoffs depend on the state, which depends on time. 
Thus, instead of \eqref{eq:rep}, we write: $\dot x_i = x_i (F_i - \bar{F})$. Pure strategy $i$ is strictly dominated by pure strategy $j$ if for all $x$ in $X$, $F_i(x) < F_j(x)$. Pure strategy $i$ goes extinct, along a given solution of given dynamics, if $x_i(t) \to 0$ as $t \to +\infty$. We want to understand under which dynamics pure strategies strictly dominated by other pure strategies always go extinct, at least for initial conditions in which all strategies are initially present, that is, in the relative interior of the simplex $X$. Before introducing imitative and innovative dynamics, let us explain a standard way to derive dynamics from micro-foundations. The idea is that from time to time agents revise their strategies. Due to this revision process, agents playing strategy $i$ switch to strategy $j$ at a certain rate, which depends on the population state and on the payoffs of the game. We denote this rate by $\rho_{ij}(x, F)$, or simply $\rho_{ij}$ to keep formulas light. Thus, between time $t$ and $t + dt$, a mass $x_i \rho_{ij} dt$ of agents switch from $i$ to $j$, and a mass $x_j \rho_{ji} dt$ switch from $j$ to $i$. This leads to the ``mother equation'': \begin{equation} \label{eq:mother} \dot x_i = \sum_{j \neq i} x_j \rho_{ji} - x_i \sum_{j \neq i} \rho_{ij} \end{equation} where the first term is an inflow term (agents starting to play strategy $i$) and the second term an outflow term (agents abandoning strategy $i$).\footnote{As the terms $i=j$ cancel, Eq. \eqref{eq:mother} may also be written as follows: \[\dot x_i = \sum_{j \in I} x_j \rho_{ji} - x_i \sum_{j \in I} \rho_{ij} \]} A specification of the rates $\rho_{ij}$ for all $(i, j)$ in $I \times I$ is called a revision protocol and defines dynamics. The replicator dynamics for instance may be derived from at least three different protocols.
\begin{itemize} \item (imitation of success) $\rho_{ij} = x_j (K + F_j(x))$, where $K$ is a constant large enough to ensure that $K + F_j(x)$ is positive for all strategies $j$ in $I$ and all states $x$ in $X$. \item (imitation driven by dissatisfaction) $\rho_{ij} = x_j (K - F_i(x))$, with $K > F_i(x)$ for all $i$ in $I$ and all $x$ in $X$. \item (proportional pairwise imitation rule) $\rho_{ij} = x_j [F_j - F_i]_+$, where for any real number $a$, $[a]_+ = \max(a, 0)$. \end{itemize} These three protocols model two-step processes: first, a revising agent meets another agent uniformly at random, hence playing $j$ with probability $x_j$; second, he imitates her with a probability that depends on the payoff of this agent's strategy, his own, or a comparison of both.\footnote{We use ``He'' for the revising agent, and ``She'' for the agent being imitated.} \emph{Imitative dynamics.} More generally, Sandholm (2010) \cite{10} calls dynamics imitative if they may be derived from a revision protocol of the form \[\rho_{ij} (F,x) = x_j r_{ij}(F, x)\] where, for all $x$ in $X$ and all strategies $i, j, k$ in $I$: \begin{equation} \label{eq:monotonicity} F_i(x) < F_j(x) \Leftrightarrow r_{kj}(F,x) - r_{jk}(F,x) > r_{ki}(F, x) - r_{ik}(F,x) \end{equation} Like the replicator dynamics, these dynamics may be seen as modeling a two-step process where, in step 1, a revising agent meets another agent from the population at random, and in step 2, decides to imitate her or not. Condition \eqref{eq:monotonicity} is a monotonicity condition. It means that in step 2, the difference between the conditional imitation rates from $k$ to $i$ and from $i$ to $k$ increases with the payoff of strategy $i$. In particular, if strategy $j$ earns more than strategy $i$, then in step 2, an agent playing strategy $i$ is more likely to adopt $j$ than an agent playing $j$ is to adopt $i$.
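As a quick numerical sanity check (a sketch of ours, not part of the paper; NumPy is assumed), one can verify that each of the three protocols for the replicator dynamics listed above, plugged into the mother equation \eqref{eq:mother}, produces the replicator field \eqref{eq:rep} at a random state:

```python
import numpy as np

def mother_equation(x, rho):
    """Right-hand side of the mother equation: inflow minus outflow
    (the i = j terms cancel, so we may sum over all j)."""
    inflow = x @ rho                 # (x @ rho)_i = sum_j x_j rho_{ji}
    outflow = x * rho.sum(axis=1)    # x_i sum_j rho_{ij}
    return inflow - outflow

def replicator(x, F):
    """Replicator field: x_i (F_i - Fbar)."""
    return x * (F - x @ F)

rng = np.random.default_rng(0)
x = rng.random(4); x /= x.sum()      # a random population state
F = rng.random(4)                    # payoffs at this state
K = 10.0                             # large enough that K + F_j > 0

# rho[i, j] = rate at which i-strategists switch to j, for each protocol
rho_success = np.tile(x * (K + F), (4, 1))                    # x_j (K + F_j)
rho_dissatisfaction = np.outer(K - F, x)                      # x_j (K - F_i)
rho_pairwise = x * np.maximum(F[None, :] - F[:, None], 0.0)   # x_j [F_j - F_i]_+

for rho in (rho_success, rho_dissatisfaction, rho_pairwise):
    assert np.allclose(mother_equation(x, rho), replicator(x, F))
```

The common factor $x_j$ in all three rates is the fingerprint of the first (random-meeting) step; the imitation protocols considered later in the paper modify exactly this factor.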
It is easy to see that imitative dynamics coincide with a class of dynamics known as monotone dynamics (Viossat, 2015 \cite{11}, footnote 6). These are dynamics of the form \[\dot{x}_i=x_i g_i(x)\] with $g_i$ Lipschitz continuous and, for all $x \in X$, and all $(i, j)$ in $I \times I$, \begin{equation*} g_{i}(x) < g_{j}(x) \Leftrightarrow F_{i}(x) < F_{j}(x). \end{equation*} It follows that in imitative dynamics, per-capita growth rates of pure strategies are ordered as their payoffs. As a result, pure strategies strictly dominated by other pure strategies are always eliminated (Akin, 1980 \cite{1}; Nachbar, 1990 \cite{7}; Samuelson and Zhang, 1992 \cite{9}; Hofbauer and Weibull, 1996 \cite{5}). To distinguish them from more general imitation processes that we will consider, we refer to these dynamics as \emph{monotone imitative dynamics}. This monotone character does not derive only from the monotonicity condition \eqref{eq:monotonicity}, but also from the assumption that the probability of envisioning to adopt a given strategy is equal to the frequency of this strategy. \emph{Innovative dynamics.} By contrast with imitative dynamics, in innovative dynamics, strategies that are not initially played may appear. A leading example is the Smith dynamic: \begin{equation} \label{eq:Smith} \dot{x}_i = \sum_{j \in I} x_j [F_i(x) - F_j(x)]_+ - x_i \sum_{j \in I} [F_j(x) - F_i(x)]_+ \end{equation} It may be derived by assuming that, first, revising $i$-strategists\footnote{An $i$-strategist is an agent currently using strategy $i$.} pick a strategy $j$ uniformly at random in the list of possible strategies, and second, adopt it with probability proportional to $[F_j - F_i]_+$. This leads to $\rho_{ij} = \frac{1}{N} [F_j - F_i]_+$.
This is similar to the proportional pairwise imitation rule defining the replicator dynamics, except that in the first step, strategy $j$ is selected as a candidate new strategy with probability $1/N$ instead of $x_j$.\footnote{In Eq.\eqref{eq:Smith}, as standard, we omitted the factor $1/N$, which only affects the time-scale.} Other well-known innovative dynamics are the Brown-von Neumann-Nash dynamics, or BNN: \begin{equation*} \dot x_i = \left[F_i(x) - \bar{F}(x)\right]_+ - x_i \sum_{k \in I} [F_k(x) - \bar{F}(x)]_+ \end{equation*} They model a two-step process where, in step 1, revising $i$-strategists pick a strategy $j$ uniformly at random in the list of possible strategies, and, in step 2, adopt it with probability proportional to $[F_j - \bar{F}]_+$, where $\bar{F}(x)=\sum_i x_i F_i(x)$ is the average payoff in the population. \emph{Innovative dynamics favour rare strategies; monotone imitative dynamics do not.} Building on Berger and Hofbauer (2006) \cite{2}, Hofbauer and Sandholm (2011) \cite{5} showed that for the Smith and BNN dynamics, and many others, there are games in which a pure strategy strictly dominated by another pure strategy survives, for most initial conditions. This holds for any dynamics satisfying four natural requirements, called \emph{Innovation}, \emph{Continuity}, \emph{Positive Correlation} and \emph{Nash Stationarity}. As explained in the introduction, the intuition is that, taken together, Innovation and Continuity favour rare strategies, in the sense that a rare strategy can have a higher per-capita growth rate than a better but more frequent strategy. By contrast, monotone imitative dynamics favour neither rare nor frequent strategies: they are neutral. Under monotone imitative dynamics, if the payoff of strategy $i$ is less than the payoff of strategy $j$, then its per-capita growth rate is less than that of strategy $j$. This is true whatever the frequencies of strategies $i$ and $j$.
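For concreteness, the Smith and BNN vector fields can be implemented in a few lines. The sketch below (Python; the Rock--Paper--Scissors payoff matrix is an illustrative choice) checks that both vanish at the interior equilibrium and, away from it, satisfy $\sum_i \dot{x}_i F_i > 0$, i.e.\ would increase the average payoff in a fixed environment:

```python
import numpy as np

A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])      # illustrative game: Rock-Paper-Scissors
F = lambda x: A @ x

def smith(x):
    """Smith dynamic: dx_i = sum_j x_j [F_i - F_j]_+ - x_i sum_j [F_j - F_i]_+."""
    f = F(x)
    gain = np.maximum(f[:, None] - f[None, :], 0.0)   # gain[i, j] = [F_i - F_j]_+
    return gain @ x - x * gain.sum(axis=0)

def bnn(x):
    """Brown-von Neumann-Nash dynamic."""
    f = F(x)
    excess = np.maximum(f - x @ f, 0.0)               # [F_i - Fbar]_+
    return excess - x * excess.sum()

eq = np.full(3, 1/3)                  # interior equilibrium of RPS
x = np.array([0.5, 0.3, 0.2])         # an arbitrary non-equilibrium state
assert np.allclose(smith(eq), 0.0) and np.allclose(bnn(eq), 0.0)
# Away from equilibrium, the average payoff increases in a fixed environment:
assert smith(x) @ F(x) > 0 and bnn(x) @ F(x) > 0
```

Note that the inflow terms carry no factor $x_i$: a strategy absent from the current state can reappear, which is what makes these dynamics innovative.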
The reason is not that this property is completely natural. Indeed, it does not hold for innovative dynamics. Rather, this is because the imitation processes modeled by monotone imitative dynamics are of a particular kind, inspired by the replicator dynamics. It is actually easy to imagine dynamics modeling imitation processes but advantaging rare strategies, or frequent ones.\footnote{Of course, such dynamics, though modeling imitation processes, do not satisfy Sandholm's definition of imitative dynamics. This is the key-point: this definition of imitative dynamics does not encompass all reasonable imitation processes.} For these dynamics, as for innovative dynamics, the advantage given to rare (or frequent) strategies should be able to offset the fact of being strictly dominated, allowing for survival of dominated strategies. This is what we show. We begin by providing examples of imitation dynamics favouring rare or frequent strategies. They are all based on the idea that instead of deciding to change his strategy or not upon meeting only one other agent, a revising agent might meet several other agents before taking his decision. \section{Imitation processes advantaging rare or frequent strategies} \label{sec:ImProc} \subsection{Examples} Loosely speaking, dynamics favour rare strategies if, when strategies $i$ and $j$ earn the same payoff but strategy $i$ is rarer, strategy $i$ has a higher per-capita growth rate than strategy $j$. To see how this could arise in an imitation process, consider revision protocols of the form: \begin{equation} \label{eq:gen2step} \rho_{ij} (F, x)= p_{ij} (F, x) r_{ij} (F, x), \text{ with } p_{ij}(F, x) = \lambda_{ij}(F, x) x_j \end{equation} for some positive functions $\lambda_{ij}$. This models a two-step process: in step 1, a revising $i$-strategist gets interested in strategy $j$ with a probability $p_{ij}$ that we call a \emph{selection rate}. 
We allow it to depend on both payoffs and frequencies, but in our main examples, it depends only on strategy frequencies; in step 2, he adopts strategy $j$ with a probability proportional to a quantity $r_{ij}$ that depends on payoff considerations, and that we call an \emph{adoption rate}.\footnote{We allow these adoption rates to depend on both payoffs and frequencies as we want to allow for protocols comparing one's current payoff to, e.g., the average payoff in the population, which the vector $F(x)$ alone does not allow one to compute; nevertheless, we have in mind a payoff-based second step.} The assumption $p_{ij}(F, x) = \lambda_{ij}(F, x) x_j$ just means that the probability $p_{ij}$ to consider switching to strategy $j$ is zero whenever $x_j=0$, since we are modeling an imitation process. Our adoption rates $r_{ij}$ will typically be monotonic, in the sense of Eq. \eqref{eq:monotonicity}. Thus, the difference with monotone imitative dynamics is that the probability with which a revising agent gets interested in strategy $j$ need not be exactly $x_j$; that is, the $\lambda_{ij}$ need not be all constant and equal to $1$. Here are some examples. \begin{example} \label{ex:list} Meeting several agents and making a list of their strategies: a protocol advantaging rare strategies. \end{example} Assume that, in step 1, a revising agent does not meet one but $m$ other agents uniformly at random, where $m$ is a bounded random variable independent of the strategy played by the agent. He then makes a list of the strategies they play, and selects at random a strategy in this list, as a candidate. He might then learn more about this strategy's payoff, by talking to the agent he met, by experimenting with this strategy for a short, un-modeled period of time, or by some thought experiment. He then decides to adopt it or not according to a standard adoption rate $r_{ij}$.
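This selection step is straightforward to simulate. The sketch below (Python; the population state and the fixed number of meetings $m = 3$ are illustrative assumptions, whereas the text allows $m$ to be a bounded random variable) estimates the per-capita selection rates $\lambda_j = p_j / x_j$ by Monte Carlo and checks that they decrease with the frequency $x_j$:

```python
import random

def select_from_list(x, m):
    """Step 1 of the protocol above: meet m agents drawn from state x,
    list the distinct strategies observed, pick one uniformly at random."""
    met = random.choices(range(len(x)), weights=x, k=m)
    return random.choice(sorted(set(met)))

def estimate_lambda(x, m, trials=200_000):
    """Monte Carlo estimate of lambda_j = p_j / x_j."""
    hits = [0] * len(x)
    for _ in range(trials):
        hits[select_from_list(x, m)] += 1
    return [hits[j] / (trials * x[j]) for j in range(len(x))]

random.seed(0)
x = [0.1, 0.3, 0.6]              # illustrative state: strategy 0 is rare
lam = estimate_lambda(x, m=3)    # m = 3 meetings per revision
# Per capita, rarer strategies are selected more often: lambda decreases
# along x_0 < x_1 < x_2.
assert lam[0] > lam[1] > lam[2]
```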
As a concrete example, assume that the revising agent meets one agent playing strategy 1, two playing strategy 2 and two playing strategy 3. He would then make a list of the strategies met: $\{1, 2, 3\}$, and pick each of them with the same probability, hence with probability $1/3$.\footnote{Picking up a strategy with a probability proportional to the number of agents met playing them (so here probabilities $1/5$, $2/5$, $2/5$) boils down to selecting a candidate uniformly at random, just breaking the selection process in two. So this would lead to a neutral step 1. For similar reasons, if $m=1$ or $m=2$, the above process leads to a neutral step 1. This is why we need $m\geq 3$ with positive probability.} This is similar to protocols generating Smith or Brown-von Neumann-Nash dynamics, except that, instead of having a list of all possible strategies, an agent becomes aware of other possible strategies by meeting agents using them. Provided that the number $m$ of agents met is equal to 3 or more with positive probability, the above step 1 advantages rare strategies compared to the reference case $p_{ij}(x)= x_j$, in the sense that the lower $x_j$, the higher the multiplicative factor $\lambda_{ij}$ in \eqref{eq:gen2step}. In other words, in proportion to their frequencies, rare strategies are more often selected at step 1 than frequent strategies. Another interpretation is as follows. Assume that after deciding which strategy to investigate, the revising agent obtains information about its payoffs by talking to a randomly selected mentor: one of the agents playing this strategy among those he met. Then if Alice plays a rarer strategy than Bob, she is (ex-ante) more likely to serve as a mentor. \begin{proposition}\label{prop:ex1} Assume $m \geq 3$ with positive probability. 
Then in the first step of \cref{ex:list}, $p_{ij}(x) = x_j \lambda_j(x)$ where the functions $\lambda_j$ satisfy \[\forall x \in X, \forall (j,k) \in I \times I, x_j < x_k \Rightarrow \lambda_j(x) > \lambda_k(x)\] \end{proposition} \begin{proof} See \cref{app:proofs}. \end{proof} We do not need step 1 to be exactly as described above. Any protocol whose first step is a combination of the above one and a standard one ($p_{ij} =x_j$) would favour rare strategies in a similar sense. Our results also apply to protocols that cannot be separated in two steps in the sense of Eq. \eqref{eq:gen2step}, but still favour rare strategies. This is discussed in \cref{app:moreprot}. \begin{example} \label{ex:maj} Following the majority: a protocol advantaging frequent strategies. \end{example} As in the previous example, assume that a revising agent first meets $m$ other agents, where $m$ is a bounded random variable independent of the strategy played by the agent. But now, he selects as a candidate the strategy played by the highest number of these agents, if there is only one. If there are several such strategies, he selects one of these strategies uniformly at random. Thus, if he meets one agent playing strategy 1, two playing strategy 2 and two playing strategy 3, then with probability 1/2 he selects strategy 2, and with probability 1/2, he selects strategy 3. This step 1 advantages frequent strategies in the sense that the higher $x_j$, the higher the multiplicative factor $\lambda_{ij}$ (which here is independent of $i$). In this sense, frequent strategies are imitated more often, or more precisely, more often selected at step 1. \begin{proposition}\label{prop:ex2} Assume that $m \geq 3$ with positive probability. 
Then in the first step of \cref{ex:maj}, $p_{ij}(x) = x_j \lambda_j(x)$ where the functions $\lambda_j$ satisfy \[\forall x \in X, \forall (j, k) \in I \times I, x_j < x_k \Rightarrow \lambda_j(x) < \lambda_k(x)\] \end{proposition} \begin{proof} See \cref{app:proofs}. \end{proof} As for \cref{ex:list}, a number of variants could be considered that cannot easily be put in the form \eqref{eq:gen2step}, but still favour frequent strategies, and to which our results would apply. Note also that other forms of conformity biases have been studied in the literature on the evolution of cooperation, and shown to allow for the survival of cooperation in the prisoner's dilemma (Boyd and Richerson, 1988 \cite{3}; see also Eq. (1) in Henrich and Boyd, 2001 \cite{4}, or in Pe\~na et al., 2009 \cite{8}). \begin{example} \label{ex:other} Trying to meet agents playing other strategies than one's own: a protocol disadvantaging frequent strategies. \end{example} Assume that in step $1$, a revising agent of type $i$ meets somebody uniformly at random in the population. If this person is of a type $j \neq i$, then the revising agent considers switching to $j$. If this person is also of type $i$, then the revising agent tries again. If after trying $m$ times, he did not manage to meet an agent of another type, he stops and keeps using strategy $i$. The maximal number of trials $m$ could be a random variable. We only assume that the law of this maximal number is the same for all strategies, that it is almost surely finite, and that with positive probability, it is equal to $2$ or more. The motivation for such a behavior is that an agent currently playing strategy $i$ already knows that this is a possible behavior and already has a pretty good idea of how good this strategy is. So talking with an agent of the same type is not very informative.
Upon meeting an agent of the same type, a revising agent might thus be willing to try to meet somebody else.\footnote{If the payoff of a strategy is not deterministic, talking with other agents playing the same strategy is useful, but likely less so than talking to an agent with a different behaviour.} For any $j \neq i$, the probability that a revising agent of type $i$ meets an agent playing another strategy for the first time at the $k^{th}$ trial, and that this agent is of type $j$, is $x_i^{k-1} x_j$. So the probability $p_{ij}$ that a revising agent of type $i$ considers switching to strategy $j$ is: \[p_{ij} = x_j \lambda (x_i), \text{ with } \lambda(x_i) = 1 + x_i + \dotsm + x_i^{m-1}\] The function $\lambda$ is strictly increasing. In this sense frequent strategies imitate more often than rare ones (or rather, are proportionally more likely to select another type at step 1). This is because agents from frequent types try on average more times to meet another type than agents from rare types. This favours rare types but not in the same way as in \cref{ex:list}. Indeed, the fact that a strategy is rare will not increase its chance to be considered for imitation, in the sense that if $j$ and $k$ are two strategies different from $i$, $p_{ij}/x_j = p_{ik}/x_k = \lambda(x_i)$, irrespective of the relative frequencies of strategies $j$ and $k$. So $j$ and $k$ have the same ``extra-probability" of being selected by $i$. In terms of the mother-equation \eqref{eq:mother}, the advantage of rare strategies is a higher inflow in \cref{ex:list} and a lower outflow in \cref{ex:other}. The first step of \cref{ex:other} may also be interpreted as follows: the revising agent meets $m$ agents, keeps the same strategy if they all play as he does, and otherwise disregards all agents playing his strategy; he then picks up one of the remaining agents uniformly at random, and chooses her strategy as a candidate. 
Thus, if he plays strategy 3 and meets one agent playing strategy 1, two playing strategy 2 and two playing strategy 3, he ends up choosing strategy 1 with probability 1/3 and strategy 2 with probability 2/3. \begin{example} \label{ex:confirmation} Confirmation bias: a protocol favouring frequent strategies. \end{example} Assume that a revising agent meets $m$ other agents and that his main purpose is to be reassured that his strategy is not completely foolish. More precisely, if at least one of the agents met plays the same strategy as he does, then he keeps it; otherwise, he selects uniformly at random one of the agents met and envisions to imitate her. This leads to $$p_{ij} = (1-x_i)^m \frac{x_j}{1-x_i}= (1- x_i)^{m-1} x_j$$ for any $i \neq j$. Thus, $\lambda_{ij}(x)= (1- x_i)^{m-1}$. If $m \geq 2$, this expression is strictly decreasing in $x_i$, hence this protocol favours frequent strategies. This is an example of frequent strategies imitating less often than rare strategies (or rather, being proportionally less likely to select another strategy as a candidate at step 1). \subsection{A definition of favouring rare or frequent strategies} Consider a two-step revision protocol of the form \eqref{eq:gen2step}.\footnote{Our results go through if all definitions in this section are restricted to the case where $i$ and $j$ are twin strategies, in that they have the same payoff function: $F_i = F_j$. This is because the strategy of the proof is to first use the advantage to rare or frequent strategies in a game with twin strategies, and then penalize one of them to make it dominated.} \begin{definition} The first step is \emph{fair} if $\lambda_{ij}= 1$ for all $i \neq j$.
\end{definition} \begin{definition}[being selected more often] Per capita, rare strategies are more often selected at step 1 than frequent ones if for all $(F, x)$ and all strategies $i, j$ such that $x_i < x_j$, $\lambda_{ji}(F, x) \geq \lambda_{ij}(F,x)$, and $\lambda_{ki}(F,x) \geq \lambda_{kj}(F, x)$ for all strategies $k \notin\{i,j\}$. They are selected strictly more often if these conditions hold with strict inequalities. Frequent strategies are selected more often (in a weak or strict sense) if the same conditions hold when $x_i > x_j$. \end{definition} \begin{definition}[selecting other strategies less often] Per capita, rare strategies select other strategies less often if for all $(F, x)$ and all strategies $i, j$ such that $x_i < x_j$, $\lambda_{ij} \leq \lambda_{ji}$ and for all strategies $k \notin\{i,j\}$, $\lambda_{ik} \leq \lambda_{jk}$. They select other strategies strictly less often if these conditions hold with strict inequalities. Frequent strategies select other strategies less often (in a weak or strict sense) if the same conditions hold when $x_i > x_j$. \end{definition} \begin{definition}[favouring rare or frequent strategies] \label{def:adv} Step 1 favours rare strategies if rare strategies are more often selected and select other strategies less often than frequent ones, and at least one of these properties holds strictly. It favours frequent strategies if frequent strategies are more often selected and select other strategies less often, and at least one of these properties holds strictly. \end{definition} With this vocabulary, the protocols of Examples 1 and 3 both favour rare strategies, but not for the same reason. In \cref{ex:list}, rare strategies are selected strictly more often than frequent ones, while in \cref{ex:other}, they select other strategies strictly less often. The protocols of Examples 2 and 4 favour frequent strategies. 
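The closed forms obtained in Examples 3 and 4 make these definitions easy to check numerically. The sketch below (Python; the state, the value of $m$, and the revising agent's strategy are illustrative choices) verifies the monotonicity of $\lambda$ in both examples, and confirms Example 3's formula $p_{ij} = x_j \lambda(x_i)$ by direct simulation of the retry process:

```python
import random

def lam_retry(xi, m):
    """Example 3 closed form: lambda(x_i) = 1 + x_i + ... + x_i^(m-1)."""
    return sum(xi**k for k in range(m))

def lam_confirm(xi, m):
    """Example 4 closed form: lambda_ij(x) = (1 - x_i)^(m-1)."""
    return (1 - xi)**(m - 1)

# Monotonicity on a grid: Example 3's lambda increases with the revising
# strategy's own frequency (rare strategies select others less often, an
# advantage to rarity); Example 4's decreases (an advantage to frequency).
grid = [k / 10 for k in range(10)]
m = 4
assert all(lam_retry(a, m) < lam_retry(b, m) for a, b in zip(grid, grid[1:]))
assert all(lam_confirm(a, m) > lam_confirm(b, m) for a, b in zip(grid, grid[1:]))

# Monte Carlo check of Example 3's formula: a revising i-strategist retries
# up to m times to meet an agent of another type; p_ij = x_j * lambda(x_i).
random.seed(1)
x = [0.5, 0.3, 0.2]          # hypothetical state; revising agent plays strategy 0
trials, hits_1 = 200_000, 0
for _ in range(trials):
    for _ in range(m):       # up to m meetings
        j = random.choices(range(3), weights=x)[0]
        if j != 0:
            hits_1 += (j == 1)
            break            # first non-0 agent met is the candidate
assert abs(hits_1 / trials - x[1] * lam_retry(x[0], m)) < 0.01
```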
\section{A very simple example of survival of dominated strategies} \label{sec:simple} In this section, we consider two-step revision protocols \eqref{eq:gen2step} where in the second step, the adoption rates $r_{ij}$ are always positive. This is the case in the imitation of success, in imitation driven by dissatisfaction, and in any generalization of the form $r_{ij} = f(F_i) g(F_j)$ with $f$ and $g$ positive.\footnote{It would be natural to assume $f$ decreasing, $g$ increasing, but this is not needed.} For such protocols, as soon as the first step is not fair, survival of dominated strategies occurs in the simplest of games. \begin{proposition} \label{prop:simple} Consider dynamics generated by protocols such that the functions $\lambda_{ij}$ and $r_{ij}$ are jointly continuous in $(F, x)$, the adoption rates $r_{ij}$ are strictly positive, and $r_{ij}(F, x) = r_{ji}(F, x)$ whenever $F_i(x)= F_j(x)$. Consider the $2 \times 2$ game $\Gamma^{\varepsilon}$ with payoff function $F^{\varepsilon} = (F_1^{\varepsilon}, F_2^{\varepsilon})$ given by $F^{\varepsilon}_1(x)= 1$ and $F^{\varepsilon}_2(x)= 1- \varepsilon$, for all $x$ in $X$. \begin{enumerate} \item If the first step favours rare strategies, then for any $\alpha > 0$, there exists $\bar{\varepsilon}>0$ such that, for any $\varepsilon \in [0 , \bar{\varepsilon}]$ and for any initial condition $x(0)$ in $\mathrm{int}(X)$, $\liminf x_2(t) \geq 1/2 - \alpha$ as $t \to +\infty$. \item If the first step favours frequent strategies, then for any $\alpha > 0$, there exists $\bar{\varepsilon}>0$ such that, for any $\varepsilon \in [0 , \bar{\varepsilon}]$ and for any initial condition $x(0)$ such that $x_2(0) \geq 1/2 + \alpha$, $x_2(t) \to 1$ as $t \to +\infty$. 
\item If there exists $\hat{x} \in \mathrm{int}(X)$ such that $\lambda_{12}(F^0, \hat{x}) > \lambda_{21}(F^0, \hat{x})$, then there exists $\bar{\varepsilon}>0$ such that, for any $\varepsilon \in [0 , \bar{\varepsilon}]$, for any initial condition such that $x_2(0) >\hat{x}_2$, $\liminf x_2(t) \geq \hat{x}_2$. \end{enumerate} \end{proposition} \begin{proof} 1) With only two strategies, the mother-equation \eqref{eq:mother} boils down to \[\dot x_1 = x_1(1-x_1) h(F, x) \text{ with } h(F, x)= \lambda_{21} r_{21} - \lambda_{12} r_{12}.\] Our assumptions ensure that $h$ is jointly continuous. In game $\Gamma^{0}$, $r_{21}= r_{12}$ for all $x$, hence $h(F^0, x)= (\lambda_{21} - \lambda_{12}) r_{12}$. Since we assume $r_{12}>0$, $h(F^0, x)$ has the sign of $\lambda_{21} - \lambda_{12}$. Thus, if step 1 favours rare strategies, $h(F^0, x) > 0$ if $0 \leq x_1 < 1/2$ and $h(F^0, x)< 0$ if $1/2 < x_1 \leq 1$. Thus, in game $\Gamma^0$, $x_1(t) \to 1/2$ as $t \to +\infty$ for any interior initial condition. Now let $\alpha \in (0, 1/2)$. Since the sets $[0, 1/2 - \alpha]$ and $[1/2+ \alpha, 1]$ are compact, and $h$ is jointly continuous, it follows that for any $\varepsilon >0$ small enough, in game $\Gamma^{\varepsilon}$, we still have $h(F^{\varepsilon}, x) > 0$ on $[0, 1/2 - \alpha]$ and $h(F^{\varepsilon}, x) < 0$ on $[1/2 + \alpha, 1]$. Therefore, in $\Gamma^{\varepsilon}$, for any interior initial condition, \[\frac{1}{2} - \alpha \leq \liminf_{t \to + \infty} x_1(t) \leq \limsup_{t \to + \infty} x_1(t) \leq \frac{1}{2} + \alpha.\] 2) Similar arguments show that, if step 1 favours frequent strategies, then $x_2(t) \to 1$ for any initial condition such that $x_2(0) >1/2$ in game $\Gamma^0$, and for any initial condition such that $x_2(0) \geq 1/2+ \alpha$ in $\Gamma^{\varepsilon}$, provided that $\varepsilon$ is small enough. 3) The assumption essentially amounts to assuming that the first step is not fair.
By assumption, there exists $\hat{x}$ in $\mathrm{int}(X)$ such that $\lambda_{12}(F^0, \hat{x}) > \lambda_{21}(F^0, \hat{x})$. Hence $h(F^0, \hat{x}) <0$, and for any $\varepsilon>0$ small enough, $h(F^{\varepsilon}, \hat{x}) < 0$. It follows that at $\hat{x}$, $\dot x_2 >0$. Since the state space is a segment, the result follows. \end{proof} \emph{How dominated can surviving strategies be?} Like the results of Hofbauer and Sandholm, the proof of \cref{prop:simple} relies on arbitrarily small domination levels. It does not say whether strategies that are substantially dominated can survive. To tackle this question, consider a game with only two strategies, $1$ and $2$, with constant payoffs: $F_1(x) = u_1$ and $F_2(x) = u_2 < u_1$ for all $x$ in $X$. For a protocol of type \eqref{eq:gen2step}, there are at least as many transitions from strategy 1 to strategy 2 as from strategy 2 to strategy 1 (hence the frequency of strategy 2 does not decrease) if and only if $\lambda_{12} r_{12} \geq \lambda_{21} r_{21}$, or equivalently \begin{equation*} \frac{r_{21}}{r_{12}} \leq \frac{\lambda_{12}}{\lambda_{21}} \end{equation*} The LHS may be seen as the ``payoff effect" and the RHS as the ``frequency effect". This inequality takes a simple form if we assume \begin{itemize} \item $r_{ij} = u_j \geq 0$, as in the imitation of success. \item $p_{ij}= x_j \lambda(x_i)$ with $\lambda(x_i) = 1 + x_i + \dotsm + x_i^{m-1}$, as in \cref{ex:other} from \cref{sec:ImProc}, where a revising agent tries to meet an agent playing another strategy up to $m$ times before giving up. \end{itemize} It is then easy to see that the strictly dominated strategy $2$ survives whenever $u_2 > u_1/m$.
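Under the two assumptions above, the mother equation reduces to the one-dimensional dynamic $\dot x_2 = x_1 x_2 (\lambda(x_1) u_2 - \lambda(x_2) u_1)$, and the threshold can be checked by direct integration. A minimal Euler sketch (Python; the step size, horizon, and parameter values are arbitrary choices):

```python
def simulate(u1, u2, m, x2=0.5, dt=0.01, steps=200_000):
    """Euler integration of dx2/dt = x1 * x2 * (lam(x1) * u2 - lam(x2) * u1),
    with lam(x) = 1 + x + ... + x^(m-1), as in the two bullet points above."""
    lam = lambda x: sum(x**k for k in range(m))
    for _ in range(steps):
        x1 = 1.0 - x2
        x2 += dt * x1 * x2 * (lam(x1) * u2 - lam(x2) * u1)
        x2 = min(max(x2, 0.0), 1.0)   # keep the state in [0, 1]
    return x2

# For m = 2, the interior rest point solves (1 + x1) u2 = (1 + x2) u1, i.e.
# u2/u1 = (1 + x2)/(2 - x2); with u2/u1 = 0.8 this gives x2* = 1/3.
assert abs(simulate(1.0, 0.8, m=2) - 1/3) < 1e-3
# Below the threshold u2 = u1/m, the dominated strategy goes extinct.
assert simulate(1.0, 0.4, m=2) < 1e-3
```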
Moreover, in that case, $x_2(t) \to x_2^{\ast}$ where $x_2^{\ast}$ is the solution of \[ u_2/u_1 = \frac{x_2(1- x_2^m)}{x_1 (1-x_1^m)} \text{ with } x_1 = 1-x_2.\] \cref{FigA} plots the asymptotic frequency $x_2^{\ast}$ of the dominated strategy as a function of the ratio $u_2/u_1$, for various values of $m$. For instance, if $m=2$, the dominated strategy survives if its payoff is more than half the payoff of the dominant strategy $(u_2/u_1 > 1/2)$, its asymptotic frequency is at least 0.2 if $u_2/u_1 \geq 2/3$, and at least $1/3$ if $u_2/u_1 \geq 0.8$. Larger values of $m$ lead to even larger frequencies of the dominated strategy. Thus, at least for this protocol, relatively large differences in payoffs still allow for survival of strictly dominated strategies at significant frequencies. \begin{figure} \caption{\textbf{Asymptotic frequency of the dominated strategy as a function of the payoff ratio $u_2/u_1$ for various values of $m$.}} \label{FigA} \end{figure} \section{Imitation through comparison of payoffs} \label{sec:paycomp} In the imitation protocols considered in the previous section, adoption rates are always positive, and rest points correspond to an equilibrium between inflow and outflow, rather than an absence of strategy changes. Though these adoption rates are standard, they have the debatable property that revising agents do not compare the payoff of their current strategy to the payoff of the strategy they envision to adopt (or the average payoff in the population). As a result, agents may switch to a strategy with currently lower payoffs than their own (or lower than average). In this section, we show that survival of dominated strategies also occurs for adoption rates based on payoff comparison, such as $r_{ij} = [F_j - F_i]_+$, $r_{ij}= [F_j - \bar{F}]_+$, or generalizations thereof.\footnote{The examples we give cannot be of simple $2 \times 2$ games, as in the previous section.
Indeed, in a game with only two strategies, such adoption rates prevent agents playing the dominant strategy from adopting the dominated one, so the dominated strategy goes extinct. This is also the case for any dynamics satisfying Positive Correlation (defined below).} To do so, we first need to show that, under mild additional assumptions, these protocols lead to dynamics satisfying the version of Positive Correlation for imitation processes: \begin{equation} \label{eq:PC} \tag{PC$'$} \sum_{i} \dot{x}_i F_i > 0 \end{equation} whenever $x$ is not a population equilibrium, that is, a population state at which all strategies with a positive frequency get the same payoff (or in other words, a rest point of the replicator dynamics). An interpretation of \eqref{eq:PC} is that, in a fixed environment, the average payoff in the population would increase, unless it is already maximal.\footnote{On top of replacing Nash equilibrium with population equilibrium, condition \eqref{eq:PC} somehow combines the Positive Correlation condition of Hofbauer and Sandholm ($\sum_{i} \dot{x}_i F_i > 0$ whenever $\dot{x} \neq 0$) and their Nash Stationarity condition ($\dot{x} \neq 0$ whenever $x$ is not a Nash equilibrium).} \subsection{Protocols leading to Positive Correlation} Define the sign function by, for any real number $a$: $\mathrm{sgn}(a) = 1$ if $a>0$, $\mathrm{sgn}(a) = -1$ if $a <0$, and $\mathrm{sgn}(0)=0$. \begin{proposition} \label{prop:PC} Consider dynamics arising from protocols of type \eqref{eq:gen2step}. Condition \eqref{eq:PC} is satisfied if at least one of the following properties holds:\footnote{The equalities below are between functions: $F_i$, $F_j$ may depend on $x$, and $r_{ij}$, $r_i$, $r_j$, $p_{ij}$, $\lambda_i$, $\lambda_j$ may depend on $(F, x)$.} \begin{description} \item[a)] (pairwise comparison) $\mathrm{sgn}(r_{ij}) = \mathrm{sgn}([F_j - F_i]_+)$.
\item[b)] (imitation of greater than average success)\footnote{If $f$ is constant, the second step is purely imitation of greater than average success. If $f$ is decreasing, it combines imitation of greater than average success with imitation driven by dissatisfaction.}\\ $p_{ij} = \lambda_j x_j$ with $\lambda_j$ positive; $r_{ij}= f(F_i) r_j$ with $f$ positive, nonincreasing, and $\mathrm{sgn}(r_j) = \mathrm{sgn}([F_j - \bar{F}]_+)$. \item[c)] (imitation driven by less than average success)\footnote{If $g$ is constant, the second step is purely imitation driven by less than average success. If $g$ is decreasing, it combines imitation driven by less than average success with imitation of success.}\\ $p_{ij} = \lambda_i x_j $ with $\lambda_i$ positive; $r_{ij}= g(F_j) r_i$ with $g$ positive, nondecreasing, and $\mathrm{sgn}(r_i) = \mathrm{sgn}([\bar{F}- F_i]_+)$. \end{description} \end{proposition} The intuition for this result is as follows: in case a), agents always switch to strategies with better payoff than their own; in case b), agents only switch to strategies $j$ earning more than $\bar{F}$, and for any such $j$, the average former payoff of agents switching to $j$ is no more than $\bar{F}$; in case c), agents only quit strategies $i$ earning less than $\bar{F}$, and for any such strategy $i$, on average, the new strategy of agents quitting $i$ earns at least $\bar{F}$. It follows that in all three cases, in a fixed environment, the average population payoff would increase, which is one of the interpretations of condition \eqref{eq:PC}. A formal proof of \cref{prop:PC} is given below. \begin{proof} We let the reader check that \[\sum_i \dot{x}_i F_i = \sum_{i, j} x_i \rho_{ij} (F_j - F_i)\] (intuitively, both sides represent the rate at which the average population payoff evolves in a fixed environment). 
\vspace*{4pt}\noindent\textbf{Case a).} $\sum_i \dot{x}_i F_i = \sum_{i, j} x_i p_{ij} r_{ij} (F_j - F_i)$ with $\mathrm{sgn}(r_{ij}) = \mathrm{sgn}([F_j - F_i]_+)$, so that $\mathrm{sgn}(r_{ij} (F_j - F_i)) = \mathrm{sgn}([F_j - F_i]_+)$. It follows that the sum is zero if $F_i=F_j$ for any strategies $i$, $j$ such that $x_i>0$, $x_j>0$ (that is, at a population equilibrium) and positive otherwise. \vspace*{4pt}\noindent\textbf{Case b).} Let $p_j = \lambda_j x_j$, with $\lambda_j >0$; let $\bar f= \sum_k x_k f(F_k)$ and let $y_i = x_i f(F_i) / \bar{f}$. Note that $\sum_i y_i=1$. We have: \begin{equation*} \begin{split} \sum_i \dot{x}_i F_i = \sum_{i, j} x_i f(F_i) \lambda_j x_j r_j (F_j - F_i) & = \bar{f} \sum_{i, j} y_i \lambda_j x_j r_j (F_j - F_i)\\ & = \bar{f} \sum_j \lambda_j x_j r_j (F_j- \sum y_i F_i). \end{split} \end{equation*} Since $f$ is nonincreasing, the $y_i$ (which may be thought of as distorted frequencies) give more weight to strategies with low payoffs than the true frequencies $x_i$, and it may be shown that $\sum y_i F_i \leq \sum x_i F_i = \bar{F}$. Since $\mathrm{sgn}(r_j) = \mathrm{sgn}([F_j - \bar{F}]_+)$, it follows that we also have $\mathrm{sgn}(r_j (F_j- \sum y_i F_i))= \mathrm{sgn}([F_j - \bar{F}]_+)$. Thus, the whole sum is zero at population equilibria and positive otherwise. \vspace*{4pt}\noindent\textbf{Case c).} Similarly, let $\bar{g} = \sum_k x_k g(F_k)$ and $y_i = x_i g(F_i) / \bar{g}$. We get: \begin{equation*} \begin{split} \sum_i \dot{x}_i F_i = \sum_{i, j} x_i \lambda_i r_i x_j g(F_j) (F_j - F_i) & = \bar{g} \sum_{i, j} x_i \lambda_i r_i y_j (F_j - F_i)\\ & = \bar{g} \sum_{i} x_i \lambda_i r_i \left(\left[\sum_j y_j F_j\right] - F_i\right). \end{split} \end{equation*} Since $g$ is nondecreasing, $\sum_j y_j F_j \geq \bar{F}$. Moreover, $r_i$ has the sign of $[\bar{F} - F_i]_+$. Therefore, $r_i ([\sum_j y_j F_j] - F_i)$ has the sign of $[\bar{F} - F_i]_+$. It follows that the whole sum is zero at population equilibria and positive otherwise.
\end{proof} \subsection{Survival result} Our results on survival of dominated strategies also hold for revision protocols that are not of the two-step form \eqref{eq:gen2step}. To emphasize this fact, we first state a theorem with assumptions directly on the vector field $V^F$ and the switching rates $\rho_{ij}$. We then provide sufficient conditions for these assumptions to be satisfied by two-step revision protocols of form \eqref{eq:gen2step}. We begin with a list of definitions and assumptions. \begin{definition} Strategies $i$ and $j$ are twins if for all $x$ in $X$, $F_i(x)=F_j(x)$. \end{definition} \begin{definition} At a given population state of a given game: strategy $i$ imitates other strategies if there exists $j \neq i$ such that $\rho_{ij} >0$; it is imitated by other strategies if there exists $j \neq i$ such that $\rho_{ji} >0$. \end{definition} On top of condition \eqref{eq:PC}, we will need the following assumptions: \emph{Continuity (C)}: the vector field $V^F$ is Lipschitz continuous in $x$ and continuous in $u$ (implying joint continuity); the functions $x \to \rho_{ij}(F, x)$ are continuous in $x$. \emph{Imitation (Im)}: at any interior population state that is not a Nash equilibrium, each strategy $i$ imitates other strategies or is imitated by other strategies (or both). We also need either Advantage to Rarity or Advantage to Frequency, as defined below: \emph{Advantage to Rarity (AR)}: in the interior of the simplex, if strategies $i$ and $j$ are twins, then $\frac{\dot{x}_i}{x_i} \geq \frac{\dot{x}_j}{x_j}$ whenever $x_i < x_j$. Moreover, at least one of the following additional properties holds: \\ (AR1) The inequality is strict whenever at least one of the strategies $i$ and $j$ imitates other strategies.\\ (AR2) The inequality is strict whenever at least one of the strategies $i$ and $j$ is imitated by other strategies. \emph{Advantage to Frequency (AF)}: idem but when $x_i > x_j$ instead of $x_i < x_j$.
\begin{theorem} \label{th:hypno} Fix $\eta >0$. Assume that conditions \eqref{eq:PC}, (Im), and (C) are satisfied. If (AR) is satisfied (respectively, (AF)), then there exist 4-strategy games in which pure strategy $3$ strictly dominates pure strategy $4$ but $\liminf x_4(t) > \frac{1}{2} - \eta$ (respectively, $1- \eta$) for a large, open set of initial conditions.\footnote{By a ``large set'', we mean the whole simplex (for an advantage to rarity) or the half-simplex defined by $x_4 \geq x_3$ (for an advantage to frequency), except an arbitrarily small neighborhood of its boundary and of a line segment.} \end{theorem} The proof, based on ideas of Hofbauer and Sandholm, is given in the next section. We first provide sufficient conditions for the assumptions of \cref{th:hypno} to hold. Consider a two-step revision protocol $\rho_{ij}(F, x) = x_j \lambda_{ij}(F, x) r_{ij}(F, x)$. \begin{definition} Step 2 treats twins identically if for any twin strategies $i$ and $j$, $r_{ij}= r_{ji}$ and for any $k \notin \{i, j\}$, $r_{ik} = r_{jk}$ and $r_{ki}= r_{kj}$. \end{definition} \begin{proposition} Consider dynamics generated by a two-step protocol of form \eqref{eq:gen2step} satisfying the assumptions of \cref{prop:PC}. Then \cref{th:hypno} applies provided that both of the following conditions hold:\\ a) the functions $\lambda_{ij}$ and $r_{ij}$ are continuous, and Lipschitz continuous in $x$;\\ b) the selection rates $\lambda_{ij}$ are strictly positive, step 1 favours rare (respectively, frequent) strategies, and step 2 treats twins identically. \end{proposition} \begin{proof} The conditions of \cref{prop:PC} imply \eqref{eq:PC} and (Im), as would any protocol based on adoption rates $r_{ij}$ with the same sign as $[F_j - F_i]_+$, or $[F_j - \bar{F}]_+$. Assumption a) implies (C). It remains to show that b) implies (AR) (respectively, (AF)). Let $i$ and $j$ be twin strategies.
Since step 2 treats twins identically, a direct computation (left to the reader) gives: \begin{equation*} \frac{\dot{x}_i }{x_i} - \frac{\dot{x}_j }{x_j}= \sum_{k \notin \{i, j\}} x_k r_{ki} (\lambda_{ki} - \lambda_{kj}) + \sum_{k \notin \{i, j\}} x_k r_{ik} (\lambda_{jk}- \lambda_{ik}) + r_{ij} (x_j + x_i) [\lambda_{ji} - \lambda_{ij} ]. \end{equation*} Moreover, again because step 2 treats twins identically, the assumption in (AR1) that at least one of the strategies $i$ and $j$ imitates (or, in (AR2), is imitated by) other strategies boils down to the fact that this holds for strategy $i$. Now assume that $x_i < x_j$ and that step 1 favours rare strategies. Then all three terms in the RHS are nonnegative. There are two cases. \vspace*{4pt}\noindent\textbf{Case 1.} Suppose that rare strategies are more often selected at step 1. Then $\lambda_{ki} > \lambda_{kj}$ for all $k \notin \{ i, j\}$, and $\lambda_{ji} > \lambda_{ij}$. Provided that strategy $i$ is imitated by other strategies, it follows that the first or the third term, hence the whole RHS, is positive. Therefore (AR2) holds, hence (AR) holds. \vspace*{4pt}\noindent\textbf{Case 2.} Otherwise, rare strategies select other strategies less often. The second or the third term in the RHS is then positive, provided that strategy $i$ imitates other strategies. Therefore (AR1) holds, hence (AR) holds as well. Similarly, if step 1 favours frequent strategies, condition (AF) is satisfied. This concludes the proof. \end{proof} \section{Proof of \cref{th:hypno}} \label{sec:proof} The proof combines ideas from the proofs of Hofbauer and Sandholm's (2011) Theorems 1 and 2. As in their Theorem 2, the game used is the hypnodisk game with a feeble twin. As in their Theorem 1, in the case of an advantage to rarity, the shares of strategies that always earn the same payoff tend to become equal. \subsection{The game} We first briefly recall the construction of the hypnodisk game with a feeble twin (see also Figures 5, 6, 7 in Hofbauer and Sandholm).
The construction has three steps. Below, $X$ may denote the simplex of a game with three or four strategies, depending on the context. \vspace*{4pt}\noindent\textbf{Step 1. The hypnodisk game.} The hypnodisk game is a 3-strategy game with nonlinear payoffs: it is not the mixed extension of a finite game. It may be seen as a generalization of Rock-Paper-Scissors, in that it generates cyclic dynamics for any dynamics satisfying Positive Correlation. Its payoff function will be denoted by $H$. We refer to Hofbauer and Sandholm for a precise definition and analysis of this game. The important properties are the following: a) there is a unique Nash equilibrium $p = (1/3, 1/3, 1/3)$. b) there exist two real numbers $r$ and $R$ with $0 < r < R < 1/\sqrt{6}$ such that: within the disk of center $p$ and radius $r$, the payoffs are as in a coordination game: $H_i(x) = x_i$; outside of the disk of center $p$ and radius $R$, the payoffs are as in an anti-coordination game: $H_i(x) = -x_i$. These disks will be denoted by $D_r = \{x \in X, || x - p||_2 < r\}$ and $D_R = \{x \in X, || x - p||_2 \leq R\}$.\footnote{We define $D_r$ as an open disk so that the annular region $D_R \backslash D_r$ is closed.} c) In the annular region with radii $r$ and $R$, the payoffs are defined in a way that preserves the regularity of the payoff function. d) The radii $r$ and $R$ may be chosen arbitrarily small if needed. The payoff function $H$ is a map from $X \subset \mathbb{R}^3$ to $\mathbb{R}^3$ and may be seen as a vector field.
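A minimal numerical sketch of properties a) and b), under our own assumptions: the radii and the linear interpolation of the rotation angle across the annular region are illustrative choices, following the rotation idea of Hofbauer and Sandholm for property c); only the projection of the payoff field on the tangent space of the simplex is computed, which is all that matters for the dynamics.

```python
import math

p = (1/3, 1/3, 1/3)   # the unique Nash equilibrium
r, R = 0.1, 0.2       # illustrative radii with 0 < r < R < 1/sqrt(6)

# Orthonormal basis of the tangent space {v : v1 + v2 + v3 = 0}
E1 = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
E2 = (1/math.sqrt(6), 1/math.sqrt(6), -2/math.sqrt(6))

def projected_field(x):
    """Projection on the simplex's tangent space of a hypnodisk-like payoff field."""
    d = math.dist(x, p)
    # projected coordination field H_i(x) = x_i: subtract the mean
    u = tuple(xi - 1/3 for xi in x)
    if d < r:
        theta = 0.0      # coordination region: points away from p
    elif d > R:
        theta = math.pi  # anti-coordination region: points towards p
    else:
        theta = math.pi * (d - r) / (R - r)  # continuous 180-degree rotation
    a = sum(ui * e for ui, e in zip(u, E1))
    b = sum(ui * e for ui, e in zip(u, E2))
    a, b = (a * math.cos(theta) - b * math.sin(theta),
            a * math.sin(theta) + b * math.cos(theta))
    return tuple(a * e1 + b * e2 for e1, e2 in zip(E1, E2))

outward = projected_field((1/3 + 0.05, 1/3 - 0.05, 1/3))  # inside D_r: away from p
inward = projected_field((1/3 + 0.2, 1/3 - 0.2, 1/3))     # outside D_R: towards p
```

Inside $D_r$ the projected field equals $x - p$ and outside $D_R$ it equals $p - x$, matching the projections of the coordination and anti-coordination payoffs.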
Property b) implies that the projection of this payoff vector field on the affine span of the simplex points towards the equilibrium outside of the larger disk $D_R$, and away from the equilibrium within the smaller disk $D_r$ (except precisely at the equilibrium).\footnote{The idea behind property c) is to rotate (the projection of) the payoff vector field continuously, by 180 degrees in total across the annular region, see Hofbauer and Sandholm.} Moreover, the geometric interpretation of condition \eqref{eq:PC} is that, except at population equilibria, the payoff vector field, or equivalently its projection on the affine span of the simplex, makes an acute angle with the dynamics' vector field $V^F$. It follows that in the hypnodisk game, for any dynamics satisfying \eqref{eq:PC} and any interior initial condition different from the Nash equilibrium, the solution eventually enters the annular region with radii $r$ and $R$ and never leaves it (Hofbauer and Sandholm, Lemma 3). A similar construction could be made with the unique equilibrium at any desired location in the interior of the simplex instead of at the barycenter.\footnote{\label{ft20} The disks $D_r$ and $D_R$ would then surround the equilibrium and the projected payoff vector field would point towards the equilibrium outside of the larger disk $D_R$, and away from it inside of the smaller disk $D_r$. This is the case for instance if $H_i(x) = p_i - x_i$ outside $D_R$ and $H_i(x)=x_i - p_i$ inside $D_r$, where $p$ is the equilibrium.} \vspace*{4pt}\noindent\textbf{Step 2. Adding a twin.} Let us now add a fourth strategy that is a twin of the third. This leads to a 4-strategy game, which is called the hypnodisk game with a twin. Its payoff function $F$ satisfies: for any $x$ in $X$, $F_i(x) = H_i(x_1, x_2, x_3 + x_4)$ for $i=1,2,3$ and $F_4(x)= F_3(x)$.
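Step 2 (together with the handicap $\varepsilon$ subtracted in Step 3 below) is straightforward to encode. In the sketch below, the stand-in payoff function $H$ is only a placeholder for the actual hypnodisk payoffs: we use the coordination payoffs $H_i(y)=y_i$, which is what the hypnodisk payoffs equal inside $D_r$.

```python
def twin_game(H, eps=0.0):
    """Lift a 3-strategy payoff function H to the 4-strategy game where
    strategy 4 is a twin of strategy 3, handicapped by eps (Step 3)."""
    def F(x):
        x1, x2, x3, x4 = x
        h = H((x1, x2, x3 + x4))
        return (h[0], h[1], h[2], h[2] - eps)
    return F

# Placeholder payoffs standing in for the hypnodisk function H
H = lambda y: y

F0 = twin_game(H)          # exact twin: F_4 = F_3 everywhere
Feps = twin_game(H, 0.01)  # feeble twin: strategy 3 strictly dominates strategy 4
x = (0.2, 0.3, 0.1, 0.4)
assert F0(x)[2] == F0(x)[3]
assert Feps(x)[3] == Feps(x)[2] - 0.01
```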
There is now a segment of Nash equilibria: $$\mathrm{NE}=\{x \in X, (x_1, x_2, x_3 + x_4) = (1/3, 1/3, 1/3)\}.$$ The disks $D_r$ and $D_R$ become intersections of cylinders and of the simplex, which are denoted by $I$ and $O$ (for Inner and Outer cylinders): $$I = \{x \in X, (x_1, x_2, x_3 + x_4) \in D_r\}; \quad O = \{x \in X, (x_1, x_2, x_3 + x_4) \in D_R\}.$$ The annular area with radii $r$ and $R$ becomes the intercylinder region \[D= O\backslash I = \{x \in X, r^2 \leq (x_1- 1/3)^2 + (x_2- 1/3)^2 + (x_3 + x_4 - 1/3)^2 \leq R^2\}.\] For any dynamics satisfying (C) and \eqref{eq:PC} and any interior initial condition not in $\mathrm{NE}$, the solution eventually enters this intercylinder zone, and then never leaves it (Hofbauer and Sandholm, Lemma 4): $\exists T, \forall t \geq T, x(t) \in D$. \vspace*{4pt}\noindent\textbf{Step 3. The feeble twin.} We now subtract $\varepsilon>0$ from the payoffs of strategy $4$, so that it is now dominated by strategy $3$. This leads to the hypnodisk game with a feeble twin, which we denote by $\Gamma_{\varepsilon}$. \subsection{Sketch of proof of \cref{th:hypno}} Before providing a formal proof, we describe its logic. Consider first the hypnodisk game with an exact twin $\Gamma_0$. In the case of an advantage to rare strategies, the shares of strategies $3$ and $4$ tend to become equal. As a result, for any interior initial condition, solutions converge to an attractor $A$ which is contained in the intersection of the intercylinder region $D$ and the plane $x_3=x_4$. On this attractor, $\liminf x_4 \geq \frac{1}{6}- \frac{R}{\sqrt{6}}$. Because the vector field of the dynamics is jointly continuous in $(F, x)$, for $\varepsilon> 0$ small enough, there is an attractor $A^{\varepsilon}$ included in an arbitrarily small neighborhood of $A$, and whose basin of attraction is at least the old basin of attraction minus an arbitrarily small neighborhood of the union of the segment of $\mathrm{NE}$ and of the boundary of the simplex.
It follows that for most initial conditions, $\liminf x_4 \geq \frac{1}{6}- \frac{R}{\sqrt{6}} - \delta(\varepsilon)$, with $\delta(\varepsilon) \to 0$ as $\varepsilon \to 0$. Thus, if we fix any $\eta >0$, for $R$ and $\varepsilon$ small enough, $\liminf x_4 \geq \frac{1}{6}- \eta$. We can get an even larger value of $\liminf x_4$ with the same construction and proof, just replacing the standard hypnodisk game by a variant with unique equilibrium $(\beta, \beta, 1- 2\beta)$, see footnote \ref{ft20}. We then get, for $\beta$, $R$ and $\varepsilon$ small enough, $\liminf x_4 \geq \frac{1}{2}- \eta$.\footnote{We thank Vianney Perchet for pointing this out to us.} The case of an advantage to frequent strategies is similar, with some twists. Now in $\Gamma_0$, for any interior initial condition with $x_4 > x_3$, the solution converges to an attractor $A'$ included in the intersection of the intercylinder region $D$ and of the plane $x_3=0$. In $\Gamma_{\varepsilon}$, for $\varepsilon$ small enough, there is an attractor included in an arbitrarily small neighborhood of $A'$, and whose basin of attraction is at least the basin of attraction of $A'$ minus a zone of arbitrarily small Lebesgue measure. This allows us to show that, for any $\eta>0$, we may find a game such that for many initial conditions (including all initial conditions such that $x_4 > x_3 + \eta$ and $x$ is not in the $\eta$-neighborhood of the union of the segment of Nash equilibria and of the boundary of the simplex), for $\varepsilon$ and $R$ small enough, $\liminf x_4 \geq 1/3 - \eta$. By changing the equilibrium of the initial hypnodisk game, we get $\liminf x_4 \geq 1 - \eta$. \subsection{Formal proof of \cref{th:hypno}} We now provide a formal proof. To fix ideas, let us assume that (AR) holds, and that the advantage to rarity is strict when at least one of the twin strategies imitates other strategies (condition (AR1)). Other cases are similar.
Consider game $\Gamma_0$ and fix an interior initial condition $x(0) \notin \mathrm{NE}$. As in Hofbauer and Sandholm, Lemma 4, we first obtain: \begin{claim} \label{cl:IC} There exists a time $T$ such that for all $t \geq T$, $x(t)$ is in the intercylinder region $D$. \end{claim} \begin{proof} Since Hofbauer and Sandholm do not provide a formal proof, we do it here. Due to condition (PC'), the vector field $V^F(x)$ at the boundary of region $D$ points inwards; it follows that once solutions enter region $D$, they cannot leave it. By contradiction, assume that the solution never enters $D$, that is, it remains in the compact set $K = X\backslash \mathrm{int}(D)$, where $\mathrm{int}(D)$ denotes the relative interior of $D$. It follows that the solution has accumulation points in $K$, which cannot be on $\mathrm{NE} \cup \mathrm{Bd}(X)$. Moreover, the Euclidean distance $W(x)$ to the segment of Nash equilibria evolves monotonically (it increases within the inner cylinder $I$ and decreases outside the outer cylinder $O$). By a standard result on Lyapunov functions, all such accumulation points $x^{\ast}$ satisfy $\nabla W(x^*) \cdot V^F(x^*)=0$ (thus, if at time $t$, $x(t) = x^*$, then $dW(x(t))/dt=0$). But by construction, there are no such points in $K \backslash (\mathrm{NE} \cup \mathrm{Bd}(X))$, a contradiction. \end{proof} Moreover, as in Theorem 1 of Hofbauer and Sandholm: \begin{claim} \label{cl:equal} $x_4(t)/x_3(t) \to 1$ as $t \to+\infty$. \end{claim} \begin{proof} Let $V(x) = x_4/x_3$ and let $\dot V(x) = \nabla V(x) \cdot V^F(x)$, so that $\frac{d}{dt} V(x(t)) = \dot{V}(x(t))$. Due to condition (AR), $V(x(t))$ evolves (weakly) monotonically in the direction of $1$. Thus, assuming to fix ideas that $x_4(0) < x_3(0)$, $V(x(t))$ is nondecreasing and less than $1$, hence has a limit $l$ such that $V(x(0)) \leq l \leq 1$. Assume by contradiction that $l < 1$.
Let $K_i= \{x \in X \, | \, \rho_{ik}=0, \forall k \neq i\}$ be the set of population states at which strategy $i$ does not imitate any other strategy. Let $$K = K_3 \cap K_4 \cap D \cap \{x \in X, x_4 = l x_3\}.$$ Note that $K$ is compact (by Continuity) and contained in the interior of the simplex (since in $D$, $x_1>0$, $x_2>0$, $x_3+ x_4>0$, and $l \neq 0$). We want to show that the solution cannot stay in $K$ forever. At any population state in $K$, strategies $3$ and $4$ do not imitate other strategies. Moreover, such a state is not a Nash equilibrium. So by Imitation, strategies $3$ and $4$ are imitated. Therefore, $\dot{x}_3 + \dot{x}_4 > 0$. By Continuity and compactness of $K$, there exist $\varepsilon >0$ and an open neighborhood $U$ of $K$ such that, whenever $x(t) \in U \cap X$, $\dot{x}_3+ \dot{x}_4 > \varepsilon$. It follows that $x(t)$ cannot stay forever in $U$, hence must have accumulation points in $X \backslash K$. We now prove that this is impossible. Indeed, let $x^{\ast} \in X \backslash K$ be an accumulation point of $x(t)$. Necessarily, $x^{\ast} \in D \cap \{x \in X \, | \, x_4 = l x_3\} \subset \mathrm{int}(X)$. Moreover, by standard results on Lyapunov functions, $\dot{V}(x^{\ast})=0$. Since $x^{\ast} \in \mathrm{int}(X)$, it follows from (AR1) that $x^{\ast} \in K_3 \cap K_4$, so that $x^{\ast} \in K$. We thus get a contradiction. This concludes the proof. \end{proof} Let $K_{\alpha}$ denote the compact set $X \backslash N_\alpha(\mathrm{NE} \cup \mathrm{Bd}(X))$, where $N_{\alpha}$ refers to the open $\alpha$-neighborhood for the Euclidean norm. Let $\varepsilon \in (0, 1)$ and let $$U_{\varepsilon} = \{x \in N_{\varepsilon}(D), |x_4/x_3 - 1| < \varepsilon\}.$$ Let $\Phi_t$ denote the time-$t$ map of the flow; that is, $\Phi_t(x_0)$ is the value at time $t$ of the solution such that $x(0) = x_0$. \begin{claim} \label{cl:flow} There exists $T$ such that for all $t \geq T$, $\Phi_t(K_{\alpha}) \subset U_{\varepsilon}$.
\end{claim} \begin{proof} Since solutions cannot leave $U_{\varepsilon}$ in forward time, it suffices to show that there exists $T$ such that $\Phi_T(K_{\alpha}) \subset U_{\varepsilon}$. Assume that this is not the case. Then we may find an increasing sequence of times $t_n \to +\infty$ and a sequence of positions $x_n \in K_{\alpha}$ such that $\Phi_{t_n}(x_n) \notin U_{\varepsilon}$. By compactness of $K_{\alpha}$, up to considering a subsequence, we may assume that $x_n$ converges towards some $x_{\lim}$ in $K_{\alpha}$. But by the previous claims, there exists a time $\tau$ such that $\Phi_{\tau} (x_{\lim}) \in U_{\varepsilon/2}$. By continuity of the flow, there exists a neighborhood $\Omega$ of $x_{\lim}$ such that $\Phi_{\tau}(\Omega) \subset U_{\varepsilon}$, hence $\Phi_{t}(\Omega) \subset U_{\varepsilon}$ for all $t \geq \tau$, since solutions cannot leave $U_{\varepsilon}$ in forward time. But for $n$ large enough, $t_n \geq \tau$ and $x_n \in \Omega$, yet $\Phi_{t_n}(x_n) \notin U_{\varepsilon}$, a contradiction.\end{proof} We now need to define $\omega$-limits, attractors and basins of attraction. \begin{definition}[$\omega$-limit] The \emph{$\omega$-limit} of a set $U \subset X$ is defined as $\omega(U) = \bigcap_{t > 0} \mathrm{cl}(\Phi^{ [t, \infty) } (U))$, where for $T \subset \mathbb{R}$, we let $\Phi^T(U) = \cup_{t \in T} \Phi_t (U)$. If $x \in X$, we write $\omega(x)$ instead of $\omega(\{x\})$. \end{definition} \begin{definition}[attractor and basin of attraction] A set $A \subset X$ is an \emph{attractor} if there is a neighborhood $U$ of $A$ such that $\omega(U) = A$. Its \emph{basin of attraction} is then defined as $B(A) = \{x : \omega(x) \subseteq A\}$. \end{definition} \begin{claim} \label{cl:attractor} Fix $\alpha>0$ small enough. Then $A= \omega(K_{\alpha})$ is an attractor; it is included in the intersection of the intercylinder zone $D$ and the plane $x_3 = x_4$, and its basin of attraction is $B(A)= \mathrm{int}(X)\backslash \mathrm{NE}$.
\end{claim} \begin{proof} By \cref{cl:flow}, there exists a time $t >0$ such that $\Phi_t(K_{\alpha}) \subset \mathrm{int}(K_{\alpha})$. It follows (see Appendix A in Hofbauer and Sandholm) that $A$ is an attractor. By letting $\varepsilon$ go to zero in \cref{cl:flow}, we obtain that $$A \subset \cap_{\varepsilon >0} U_{\varepsilon} = U_0 = D \cap \{x \in X : x_3 = x_4\}.$$ Finally, by \cref{cl:IC,cl:equal}, for all $x$ in $\mathrm{int}(X)\backslash \mathrm{NE}$, the solution starting at $x$ enters $K_{\alpha}$. Therefore $\omega(x) \subset \omega(K_{\alpha}) =A$, hence $\mathrm{int}(X)\backslash \mathrm{NE} \subset B(A)$. The reverse inclusion is obvious. Note that $\omega(K_{\alpha})$ does not depend on $\alpha$ (as long as $\alpha$ is small enough). \end{proof} \begin{claim} Call $\Gamma_{\varepsilon}$ the hypnodisk game with an $\varepsilon$-feeble twin. Let $\eta> 0$. For all $\varepsilon>0$ small enough, in $\Gamma_{\varepsilon}$, there is an attractor $A_{\varepsilon} \subset N_{\eta}(A)$ whose basin of attraction includes $B(A) \backslash N_{\eta} (\mathrm{NE} \cup \mathrm{Bd}(X)) = X\backslash N_{\eta} (\mathrm{NE} \cup \mathrm{Bd}(X))$. \end{claim} \begin{proof} This follows from \cref{cl:attractor} and Continuity, as in Hofbauer and Sandholm (2011) \cite{5}. \end{proof} We now conclude: for $\varepsilon$ small enough, from most initial conditions, solutions converge to an attractor along which $x_4$ is bounded away from zero. The minimum of $x_4$ along this attractor may be made higher than $1/6- R/\sqrt{6} - \eta$, where $R$ is the radius of the outer cylinder, which may be chosen arbitrarily small.
By taking as base game a hypnodisk game with an equilibrium such that $x_3$ is sufficiently close to $1$ (see footnote \ref{ft20}), we may transform $1/6$ into any number strictly smaller than $1/2$, and obtain $\liminf x_4 \geq 1/2 - \delta$ for any $\delta>0$ fixed beforehand.\footnote{For an advantage to frequent strategies, we get initially $\liminf x_4 \geq 1/3- R - \eta$ and then $\liminf x_4 \geq 1- \delta$.} \section{Discussion} \label{sec:disc} \emph{The hypnodisk game. } The hypnodisk game with a feeble twin is easy to analyze, and allows us to prove survival results for large classes of dynamics. However, numerical simulations show that pure strategies strictly dominated by other pure strategies also survive in more standard games. \cref{FigC} illustrates imitation dynamics in a Rock-Paper-Scissors-Feeble Twin game for two different domination margins (the game is the same as in the numerical explorations of Hofbauer and Sandholm, Section 5.2): \begin{equation} \label{eq:RPST} \begin{array}{c} R \\ P \\ S \\ FT \end{array} \left(\begin{array}{cccc} 0 & -2 & 1 & 1 \\ 1 & 0 & -2 & -2 \\ -2 & 1 & 0 & 0 \\ -2 - d & 1 - d & - d & -d \end{array}\right) \end{equation} The dynamics are derived from a two-step protocol of form \eqref{eq:gen2step}, with a first step as in \cref{ex:other} (trying to meet an agent playing another strategy), with $m= 4$, and a second step based on payoff comparison: $r_{ij} = [F_j - F_i]_+$. \begin{figure} \caption{\textbf{Imitation dynamics in Game \eqref{eq:RPST}}} \label{FigC} \end{figure} \emph{Monotone dynamics. } Monotone dynamics (or imitative dynamics, in the sense of Sandholm) have long been known to eliminate pure strategies strictly dominated by other pure strategies.
In our vocabulary, this may be formulated as follows: in a two-step protocol of form \eqref{eq:gen2step}, if step 1 is fair ($p_{ij} = x_j$) and step 2 is monotonic (in the sense of Eq.~\eqref{eq:monotonicity}), then pure strategies strictly dominated by other pure strategies go extinct. Obviously, if step 1 is fair but step 2 is not monotonic, there is no reason to expect dominated strategies to go extinct. What we showed is that, similarly, when step 2 is monotonic but step 1 is not fair, dominated strategies may survive. \emph{Elimination results are not robust. } For imitative dynamics, the elimination of strictly dominated pure strategies in all games relies on the fact that two strategies with the same payoff have the same per capita growth rate. This condition is an equality, and contrary to strict inequalities, equalities are not robust to small perturbations. In a sense, Hofbauer and Sandholm show that the elimination result is not robust to introducing the possibility of innovation. We show that it is also not robust to perturbations of the imitation protocol (here, perturbations of the first step), even if the dynamics still model pure imitation. See also Section 5.3 in Hofbauer and Sandholm. \emph{Inflow towards a dominated strategy. } At all times, some of the agents quit playing the dominated strategy for the dominating one, or for some currently even better strategy. So for the dominated strategy to survive, other strategies must keep imitating it to compensate. This can occur in two ways: \begin{enumerate} \item Solutions converge to a rest-point, but there is nonetheless a perpetual flow between strategies. That is, rest-points correspond to a macroscopic equilibrium between inflow and outflow, not to an absence of strategy changes at the micro level (\cref{sec:simple}). This is not the case for protocols based on standard payoff comparison. \item Solutions do not converge to a rest-point.
This requires cycling dynamics. This is why the survival examples in \cref{sec:paycomp} are more elaborate than the perhaps surprisingly simple examples of \cref{sec:simple}. Simpler examples of survival of dominated strategies under imitation dynamics based on payoff comparison may be given if we consider a population of players playing against an opponent with an exogenously cycling behavior: see \cref{app:unilateral}. \end{enumerate} \emph{From the replicator dynamics to the Smith dynamics. } Consider again the protocol of \cref{ex:list} (making a list of strategies met), with a second step based on the proportional pairwise comparison rule, $r_{ij} = [F_j - F_i]_+$. This revision protocol builds a bridge between the replicator dynamics and the Smith dynamics: the replicator dynamics are obtained for $m=1$, and the Smith dynamics (in the interior of the simplex) in the limit $m \to +\infty$. This suggests that, at least for this protocol and small values of $m$, survival of dominated strategies will be more modest than with the Smith dynamics (lower domination level allowed, lower share of the dominated strategy for a given domination level). This is what our preliminary numerical investigations also suggest. A systematic investigation of these issues is left for future research. \emph{Favouring frequent strategies. } On the other hand, imitation protocols favouring frequent strategies allow for survival of dominated strategies at very high frequencies, much higher than with the Smith dynamics or other standard innovative dynamics. Conceptually, an advantage to frequent strategies could be given in innovative dynamics (i.e., dynamics in which strategies initially not played may appear), by assuming a form of risk aversion: agents would only be willing to adopt rare or unused strategies if the payoff of these rare strategies seems substantially higher than the payoff of better known strategies.
For a risk-averse agent, this can be a rational attitude if information on the payoff of other strategies is noisy, with a greater variance for rare strategies, on which less information is available. Note also that there is a certain degree of similarity between modifying a fair imitation protocol into one that benefits frequent strategies and adding to the payoffs of the game those of a pure coordination game.\footnote{In both cases, assume we start with twin strategies in the base game (before adding the coordination component), and most of the population playing the second strategy, and then add an increasingly high bonus to the first strategy, making the second one dominated. Initially, agents keep playing the second strategy due to either the advantage to frequent strategies or the added coordination component, but when the bonus becomes large enough, they switch to the first strategy. If the bonus for the first strategy is then reduced, and even made slightly negative, agents will keep playing the first strategy \textendash\ a hysteresis effect.} \numberwithin{lemma}{section} \numberwithin{corollary}{section} \numberwithin{proposition}{section} \numberwithin{equation}{section} \appendix \section{Proofs of propositions on advantage to rare or frequent strategies} \label{app:proofs} In this section, the probability that a revising agent selects strategy $j$ at step 1 is independent of the revising agent's strategy, so we denote it by $p_j$ instead of $p_{ij}$. \subsection{Meeting $m$ agents: Proof of \cref{prop:ex1}} \begin{claim} \label{cl:Bayes} It suffices to show that when $m$ is deterministic, the first step is fair ($p_i=x_i$ for all $i$) for $m=1$ or $m=2$, and advantages rare strategies for any $m \geq 3$. \end{claim} This is a simple computation, which is left to the reader. \begin{claim} The first step is fair for $m=1$ or $m=2$. \end{claim} \begin{proof} This is obvious for $m=1$.
For $m=2$, this is because the selection step boils down to selecting an agent uniformly at random, just breaking the process into two stages: first select two agents uniformly at random, then, among these two, select one of them, again uniformly. \end{proof} \begin{claim} For any fixed $m \geq 3$, the first step advantages rare strategies. \end{claim} \begin{proof} We divide the proof into four steps. \vspace*{4pt}\noindent\textbf{Step 1.} Fix $m \geq 3$. Let $0 \leq q \leq l \leq m$. Let $E_{l, q}$ denote the event: among the $m$ agents met, $l$ play strategies other than $i$ or $j$ (so $\tilde{m}=m-l$ play $i$ or $j$) and these $l$ agents play $q$ different strategies.\footnote{Example: if $m= 5$, $i=1$, $j=4$, and the agents drawn are: one of type 1, two of type 2, two of type 3, then $l=4$ and $q=2$.} Then \[\frac{p_i(x)}{x_i}= \sum_{ (q,l): 0 \leq q \leq l \leq m} P(E_{l,q}) \frac{P(i | E_{l, q})}{x_i}.\] \vspace*{4pt}\noindent\textbf{Step 2.} Now let $y_i=\frac{x_i}{x_i + x_j}$ and $y_j= 1-y_i$. Condition on the event $E_{l,q}$. If $l=m$, that is, if all $m$ agents met play strategies other than $i$ or $j$, then $P(i | E_{l,q})=0$. Otherwise, each of the $\tilde{m}=m-l$ players playing $i$ or $j$ is of type $i$ with probability $y_i$, and the draws are independent. So: a) with probability $y_i^{\tilde{m}}$, all of these $\tilde{m}$ players are of type $i$; so there are exactly $q+1$ strategies encountered, including $i$ but excluding $j$. Thus, $i$ is selected with probability $1/(q+1)$, and $j$ with probability $0$. b) symmetrically, with probability $y_j^{\tilde{m}}$, all of the $\tilde{m}$ players are of type $j$, hence $i$ is selected with probability $0$ and $j$ with probability $1/(q+1)$. c) finally, with the remaining probability $1- y_i^{\tilde{m}} - y_j^{\tilde{m}}$, there are both players of type $i$ and players of type $j$ among these $\tilde{m}$ players, and each of the two strategies is selected with probability $1/(q+2)$.
Summing up, if $l < m$, then \begin{equation} \label{eq:Elq} P(i | E_{l, q})= \frac{1}{q+1} y_i^{\tilde{m}} + \frac{1}{q+2} \left(1- y_i^{\tilde{m}} - y_j^{\tilde{m}} \right). \end{equation} \vspace*{4pt}\noindent\textbf{Step 3.} Assume $m \geq 3$, $l \leq m-2$ (so $\tilde{m} \geq 2$), and $0 < x_i < x_j$. Then \[\frac{P(i | E_{l,q})}{x_i} > \frac{P(j | E_{l, q})}{x_j}.\] Let $A_i= (q+1)(q+2)P(i | E_{l,q}) / y_i$ and define $A_j$ similarly. It suffices to show that $A_i > A_j$. By \eqref{eq:Elq}: \[y_i A_i= (q+2)y_i^{\tilde{m}} + (q+1) (1- y_i^{\tilde{m}} - y_j^{\tilde{m}} )= y_i^{\tilde{m}} + (q+1) (1- y_j^{\tilde{m}}).\] Noting that $\displaystyle 1- y_j^{\tilde{m}}= (1-y_j) \sum_{r=0}^{\tilde{m}-1} y_j^r = y_i \sum_{r=0}^{\tilde{m}-1} y_j^r $ and dividing by $y_i$ we obtain: \[A_i= y_i^{\tilde{m}-1} + (q+1) \sum_{r=0}^{\tilde{m}-1} y_j^r= y_i^{\tilde{m}-1} + (q+1) y_j^{\tilde{m}- 1} + (q+1) \sum_{r=0}^{\tilde{m}-2} y_j^r \] and similarly for $A_j$. It follows that $A_i - A_j = T_1 + T_2$ with \[T_1= q (y_j^{\tilde{m}-1} - y_i^{\tilde{m}- 1}) \mbox{ and } T_2 = (q+1) \sum_{r=0}^{\tilde{m}-2} (y_j^r - y_i^r).\] The term $T_1$ is always nonnegative, and it is positive if $q \geq 1$, that is, if $l \geq 1$. This is the case in particular if $l=m-2$, since $m \geq 3$. The term $T_2$ is always nonnegative, and it is positive if $\tilde{m} \geq 3$, that is, if $l \leq m-3$. Since we assumed $l \leq m-2$, at least one of the terms $T_1$ and $T_2$ is positive. Therefore, $T_1+ T_2 > 0$ and $A_i > A_j$. \vspace*{4pt}\noindent\textbf{Step 4.} Assume $m \geq 3$ and $0 < x_i < x_j$. Then $p_i/x_i > p_j/x_j$. Indeed, it is easily seen that if $l=m$ or $l=m-1$, then $P(i | E_{l, q})/x_i= P(j | E_{l, q})/x_j$ (equal to $0$ if $l=m$, and to $1/ [(x_i + x_j)(q+1)]$ if $l=m-1$). Moreover, we just saw that if $l \leq m-2$, which happens with positive probability, then $P(i | E_{l, q})/x_i > P(j | E_{l, q})/x_j$.
Since \[\frac{p_i}{x_i} = \sum_{0 \leq q \leq l \leq m} P(E_{l, q}) \frac{P(i | E_{l, q})}{x_i}, \] the result follows. \end{proof} \subsection{The majoritarian choice: Proof of \cref{prop:ex2}} It suffices to show that if $m$ is deterministic, then step 1 is fair for $m=1$ or $m=2$, and advantages frequent strategies for any $m \geq 3$. The proof that step 1 is fair for $m=1$ or $m=2$ is as in \cref{prop:ex1}. We now prove that for $m \geq 3$, the first step favours frequent strategies. Assume $x_i > x_j >0$ and let $y_i= x_i/(x_i + x_j)$ and $y_j =1-y_i$. Consider a revising agent meeting $m\geq 3$ other agents. \vspace*{4pt}\noindent\textbf{Case 1.} Condition first on the event that only agents playing strategies $i$ and $j$ are met (in a slight abuse of notation, we keep writing $p_i$ for the probability that $i$ is selected, leaving this conditioning implicit in the notation). \vspace*{4pt}\noindent\textbf{Subcase 1.1 (m odd, $m \geq 3$).} If $m= 2m'+1$, the probability that $i$ is selected satisfies: \[\frac{p_i}{y_i} = \frac{1}{y_i} \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i^k y_j^{m-k}= \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i^{k-1} y_j^{m-k}.\] Similarly, \[\frac{p_j}{y_j} = \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_j^{k-1} y_i^{m-k}.\] Since for any $k \geq m'+1$, we have $k-1 \geq m' \geq m - (m'+1) \geq m-k$, it follows that the first expression is term by term greater than the second one, and strictly greater for all terms $k > m'+1$. Such terms exist because $m= 2m'+1 \geq 3$ implies $m > m'+1$. It follows that $p_i/y_i > p_j/y_j$. \vspace*{4pt}\noindent\textbf{Subcase 1.2 (m even, $m\geq 4$).} If $m=2m'$, then there may be a tie, if both strategies are met $m'$ times, in which case each is selected with probability $1/2$.
Thus we get: \begin{equation} \label{eq:app1prot2} \frac{p_i}{y_i} = \frac{1}{2} \left(\begin{array}{c} m \\ m' \end{array}\right) y_i^{m'-1} y_j^{m'} + \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i^{k-1} y_j^{m-k} \end{equation} Note that if $k \geq m'+1$, then \[y_i^{k-1}y_j^{m-k} \geq y_i^{m'} y_j^{m- (m'+1)}= y_i^{m'} y_j^{m'-1}.\] Moreover, the inequality is strict for any $k \geq m'+2$, in particular for $k=m$, since we assumed $m=2m' \geq 4$. Thus, factoring out $y_i^{m'-1}y_j^{m'-1}$, we obtain: \[\frac{p_i}{y_i} > y_i^{m'-1}y_j^{m'-1} \left[ \frac{1}{2} \left(\begin{array}{c} m \\ m' \end{array}\right) y_j + \sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) y_i \right] \] A similar (but reversed) inequality holds for $p_j/y_j$. Using both inequalities, we obtain: \[\frac{p_i}{y_i} - \frac{p_j}{y_j} > y_i^{m'-1}y_j^{m'-1} (y_i - y_j) \left[\sum_{k= m'+1}^{m} \left(\begin{array}{c} m \\ k \end{array}\right) - \frac{1}{2} \left(\begin{array}{c} m \\ m' \end{array}\right) \right]\] We let the reader check that the first term in the summation suffices to show that the bracket is nonnegative, so that $p_i/y_i > p_j/y_j$. \vspace*{4pt}\noindent\textbf{Case 2.} Now consider the general case. Out of the $m$ players met, let $m_k$ denote the number of players playing strategy $k$. Let $E(l, b, q)$ denote the event: out of the $m$ players met, $l= \sum_{k \notin \{i, j\}} m_k$ play strategies different from $i$ and $j$, $b= \max_{k \notin \{i,j\}} m_k$ is the highest number of occurrences of a strategy different from $i$ and $j$, and there are $q$ strategies $k \notin \{i, j\}$ such that $m_k=b$. Condition on this event. Again, we write $p_i$ instead of $P(i |E(l, b, q))$. We dealt with the case $l=0$ in Case 1, so we may assume $l \geq 1$, hence $b \geq 1$ and $q \geq 1$. Let $\tilde{m} = m - l$ be the number of agents met playing $i$ or $j$. \noindent\textbf{Subcase 2.1.
$b > \tilde{m}$.} Then $i$ and $j$ cannot be selected, hence $p_i=p_j=0$. \noindent\textbf{Subcase 2.2. $\tilde{m} \geq 2b+1$.} Then one of the strategies $i$ and $j$ will win for sure. Moreover, $\tilde{m} \geq 3$, and the proof is as in Case 1, replacing $m$ with $\tilde{m}$. \vspace*{4pt}\noindent\textbf{{Subcase 2.3. $\tilde{m}= 2b$.}} This is similar to Subcase 1.2, replacing $m$ with $\tilde{m}$, with the twist that if $m_i=m_j=b$, each of the strategies $i$ and $j$ is selected with probability $1/(q+2)$ rather than $1/2$. The factor $1/2$ in Eq. \eqref{eq:app1prot2} thus becomes $1/(q+2)$. Since $q \geq 1$, it is then easy to check that $p_i/y_i > p_j/y_j$ even if $\tilde{m}=2$ (while we had to require $m \geq 4$ in Subcase 1.2). \vspace*{4pt}\noindent\textbf{{Subcase 2.4. $b \leq \tilde{m} \leq 2b - 1$.}} This case is similar to Subcase 1.1. We get: \[\frac{p_i}{y_i} = \frac{1}{q+1} \left(\begin{array}{c} \tilde{m} \\ b \end{array}\right) y_i^{b-1} y_j^{\tilde{m} - b} + \sum_{k= b+1}^{\tilde{m}} \left(\begin{array}{c} \tilde{m} \\ k \end{array}\right) y_i^{k-1} y_j^{\tilde{m}-k}\] and a symmetric expression for $p_j/y_j$. Since $\tilde{m} \leq 2b - 1$ implies $b-1 \geq \tilde{m} - b$, it follows that the expression for $p_i/y_i$ is term by term greater than the expression for $p_j/y_j$, with a strict inequality for the term $k=\tilde{m}$, unless $\tilde{m}=1$. It follows that if $\tilde{m}=1$, $p_i/y_i= p_j/y_j$, and if $\tilde{m} > 1$, then $p_i/y_i > p_j/y_j$. \emph{To conclude:} for any $l$, $b$, $q$, $P(i | E(l,b,q))/y_i \geq P(j | E(l, b, q))/y_j$, with a strict inequality in some cases occurring with positive probability. Since $$p_i/y_i= \sum_{l, b, q} P(E(l, b, q)) P(i | E(l,b,q))/y_i,$$ it follows that $p_i/y_i > p_j/y_j$. \section{Imitation protocols not of the form \eqref{eq:gen2step}} \label{app:moreprot} We note here that our results would also apply to protocols that cannot be neatly separated into two steps in the sense of Eq. \eqref{eq:gen2step}.
Reconsider \cref{ex:list} from \cref{sec:ImProc}, where a revising agent meets several other agents and makes a list of the strategies they play. We assumed then that he would investigate just one of these strategies. Instead, the revising agent could obtain information on the payoffs of all those strategies. This makes sense if getting information on the payoffs of the strategies met is cheap. In our concrete example, after meeting strategies $1$, $2$, $3$, the revising agent would obtain information on the payoffs $F_1$, $F_2$, $F_3$, and adopt one of these strategies with a probability that depends on all these payoffs, and possibly his own. For instance, he could adopt strategy $j \in \{1, 2, 3\}$ with probability $f(F_j)/ (1 + \sum_{k=1, 2, 3} f(F_k))$ with $f$ positive increasing, or with probability $[F_j - F_i]_+/(1 + \sum_{k=1, 2, 3} [F_k - F_i]_+)$. Such protocols cannot easily be put in the form \eqref{eq:gen2step}. Nevertheless, the resulting dynamics still favour rare strategies in the sense that when two strategies have the same payoff, the rarer one has a higher per-capita growth rate; thus, as long as the switching rates $\rho_{ij}$ are regular enough in $(F, x)$, versions of our results would apply. However, our results do not apply to \emph{discontinuous} imitative variants of the best-reply dynamics, such as imitating a best-reply to the current population state among the strategies met. \section{Unilateral approach: Simple examples for comparison based imitation processes} \label{app:unilateral} In this section, we adopt a unilateral approach, in the spirit of Viossat (2015) \cite{11}. That is, we study the evolution of behavior in a large population of players (the focal population, player 1) facing an unknown opponent (the environment, player 2), whose behavior we freely choose. This allows us to provide simple examples of survival of dominated strategies even for dynamics based on payoff comparison.
Specifically, let us denote by $G_{\varepsilon}$ a $3 \times 2$ game where the payoffs in the focal population are as follows: \begin{equation} \label{eq:GameUni} \begin{array}{cc} & \begin{array}{cc} L \hspace{0.2 cm} & \hspace{0.2 cm} R \\ \end{array} \\ \begin{array}{c} 1 \\ 2 \\ 3 \\ \end{array} & \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ -\varepsilon & 1- \varepsilon \\ \end{array}\right)\\ \end{array} \end{equation} As before, $x_i(t)$ denotes the frequency of strategy $i \in \{1, 2,3 \}$ in the focal population. We make the following assumptions: (A1) For $i=1, 2, 3$, when the opponent plays $Y \in \{L, R\}$, then $$\dot x_i = x_i g^Y_i(x)$$ for some growth-rate function $g_i^Y : X \to \mathbb{R}$ that is Lipschitz continuous in $x$ and depends continuously on the parameter $\varepsilon$ (here, $X$ denotes the simplex of possible population states for the focal population). (A2) When $\varepsilon=0$, if $x_1 \notin \{0, 1\}$, then $g_1^L(x) >0$ and $g_1^R(x) < 0$.\\ We also assume that at least one of the conditions (A3), (A3') below holds: (A3) When $\varepsilon=0$, if $x_3 < x_2$, then $g_3^L (x) \geq g_2^L(x)$ and $g_3^R (x) > g_2^R(x)$\\ or (A3') When $\varepsilon=0$, if $x_3 < x_2$, then $g_3^L (x) > g_2^L(x)$ and $g_3^R (x) \geq g_2^R(x)$\\ Assumption (A1) is a regularity assumption. Assumption (A2) is weaker than Positive Correlation. Assumption (A3) or (A3') is a form of advantage to rare strategies. These assumptions are satisfied, for instance, by any dynamics arising from a revision protocol of form \eqref{eq:gen2step} with $\lambda_{ij}$, $r_{ij}$ Lipschitz continuous in $x$ and continuous in $F$, $r_{ij}$ with the sign of $[F_j - F_i]_+$, and favouring rare strategies in the sense of \cref{def:adv}. \begin{proposition} \label{prop:app3} Fix $\eta > 0$. Let $\delta$, $x_{\min}$, $x_{\max}$ be real numbers such that $0 < \delta < x_{\min} < x_{\max} < 1- \delta$. Let $K_{\delta}= \{x \in X | \min(x_1, 1-x_1) \geq \delta\}$. 
Assume that the opponent plays $L$ until the first time $\tau$ such that $x_1(\tau) \geq x_{\max}$, then plays $R$ for $t > \tau$ until $x_1 = x_{\min}$, then plays $L$ again until $x_1= x_{\max}$, etc.\footnote{The fact that the opponent plays a discontinuous strategy simplifies the exposition but could be replaced by a similar behavior with smooth transitions. Due to this discontinuity, the frequencies $x_i(t)$ are only piecewise $C^1$, but it may be shown that this creates no technical difficulty.} Then there exists $\bar{\varepsilon}>0$ such that for any $\varepsilon \in [0, \bar{\varepsilon}]$ and any initial condition $x(0) \in K_{\delta} \cap \mathrm{int}(X)$, $\liminf x_3(t) > (1- x_{\max}) \left(\frac{1}{2} - \eta \right)$. \end{proposition} \begin{proof} The intuition is that when $\varepsilon=0$, the shares of strategies 2 and 3 tend to become equal. Thus, $\liminf x_3(t) = (1-\limsup x_1)/2= (1-x_{\max})/2$. We then need to show that for a sufficiently small perturbation of payoffs, $\liminf x_3$ remains close to $(1-x_{\max})/2$. By contrast with \cref{th:hypno}, we do not deal with an autonomous system of differential equations, but with a controlled system. This is why the proof below does not rely on continuity of attractors but on a direct analysis. To fix ideas, assume that $(A3)$ holds. The proof when $(A3')$ holds is similar. Throughout, we assume that $x(0) \in K_{\delta} \cap \mathrm{int}(X)$. By (A1), (A2) and compactness of $K_{\delta}$, there exist positive real numbers $\bar{\varepsilon}$, $\alpha_1$, $\alpha_2$ such that, for any $\varepsilon$ in $[0, \bar{\varepsilon}]$ and any $x \in K_{\delta}$, $\alpha_1 \leq \dot{x}_1 \leq \alpha_2$ when the opponent plays $L$ and $- \alpha_2 \leq \dot{x}_1 \leq - \alpha_1$ when she plays $R$. It follows that $x(t)$ eventually enters the compact set $$K = \{x \in X, x_{\min} \leq x_1 \leq x_{\max}\},$$ and never leaves, oscillating between $x_{\min}$ and $x_{\max}$. 
Moreover, the time to travel from the hyperplane $x_1 = x_{\min}$ to the hyperplane $x_1= x_{\max}$ (or back) is always between $$T_{\min}= \frac{x_{\max}- x_{\min}}{\alpha_2} \text{ and } T_{\max}= \frac{x_{\max} - x_{\min}}{\alpha_1}.$$ Note that $\liminf(x_2 + x_3) = 1 - x_{\max}$. Thus it suffices to show that, possibly up to lowering $\bar{\varepsilon}$, $$\liminf \frac{x_3}{x_2 + x_3} \geq \frac{1}{2} - \eta.$$ We first show that $\limsup \frac{x_3}{x_2 + x_3} \geq \frac{1-\eta}{2}$. Assume by contradiction that this is not the case. Then from some time $T$ on, $$x(t) \in \tilde{K} = \left\{ x \in K, \frac{x_3}{x_2 + x_3} \leq \frac{1-\eta}{2} \right\}.$$ By (A1), (A3) and compactness of $\tilde{K}$, and up to lowering $\bar{\varepsilon}$, we may assume that there exist positive real numbers $\beta_1$ and $\beta_2(\varepsilon)$ such that for any $x \in \tilde{K}$ and any $\varepsilon \in [0, \bar{\varepsilon}]$, \begin{equation} \label{eq:compgrowth} g_3^R(x) - g_2^R(x) \geq \beta_1 \text{ and } g_3^L(x) - g_2^L(x) \geq - \beta_2(\varepsilon) \end{equation} with $\beta_1$ independent of $\varepsilon$ and $\beta_2(\varepsilon) \to 0$ as $ \varepsilon \to 0$. Up to lowering $\bar{\varepsilon}$ again, we may assume that $$C:= \beta_1 T_{\min} - \beta_2(\varepsilon) T_{\max} >0.$$ Now let $t_{2k}$ and $t_{2k+1}$ be the $k^{th}$ time greater than $T$ such that $x_1 = x_{\min}$ and $x_1 = x_{\max}$, respectively. Note that $\frac{d}{dt} \ln(x_3/x_2) = g_3^Y (x) - g_2^Y(x)$ when the opponent plays $Y$. Integrating between $t_{2k}$ and $t_{2k+2}$ and using \eqref{eq:compgrowth} we obtain that between $t_{2k}$ and $t_{2k+2}$, $\ln(x_3/x_2)$ increases by at least $C$. Since $C>0$, this implies that $x_3/x_2 \to +\infty$, a contradiction.
Therefore, $$\limsup_{t \to +\infty} \frac{x_3}{x_2 + x_3}(t) \geq \frac{1-\eta}{2}.$$ Moreover, since $\beta_2(\varepsilon) \to 0$ as $\varepsilon \to 0$, up to lowering $\bar{\varepsilon}$ again, we may assume that between $t_{2k}$ and $t_{2k+1}$, $x_3/(x_2+ x_3)$ does not decrease by more than $\eta/2$. It may be shown that this ensures that $\liminf \frac{x_3}{x_2 + x_3} \geq \frac{1-\eta}{2} - \frac{\eta}{2} = \frac{1}{2} - \eta$. This concludes the proof. \end{proof} Note that for $x_{\max}$ and $\eta$ small enough, $\liminf x_3$ may be made arbitrarily close to $1/2$. If we replace Assumptions (A3), (A3') by the same assumptions but when $x_3 > x_2$, thus giving an advantage to frequent strategies, then we obtain that for $\varepsilon$ small enough and an open set of initial conditions, $\liminf x_3$ may be made arbitrarily close to $1$. \begin{figure} \caption{\textbf{Imitation dynamics favouring rare strategies in Game \eqref{eq:GameUni}.}} \label{FigB} \end{figure} \cref{FigB} depicts imitation dynamics with payoffs in the focal population described by the payoff matrix \eqref{eq:GameUni} and a periodic behavior of Player 2 that smoothly approximates playing $L$ on time-intervals of the form $[2k, 2k+1)$ and $R$ on time-intervals of the form $[2k+1, 2k+2)$, where $k$ is an integer (at time $t$, Player 2 puts probability $y(t) = \frac{1+ \sin^{1/9} (\pi t)}{2}$ on strategy L). As in \cref{FigC}, the dynamics of the focal population are derived from a two-step protocol of form \eqref{eq:gen2step}, with a first step as in \cref{ex:other} (trying to meet an agent playing another strategy), with $m= 4$, and a second step based on payoff comparison $r_{ij} = [F_j - F_i]_+$. \cref{FigB} illustrates that survival of dominated strategies can also occur if the behavior of the opponent is smooth and independent of the current population state in the focal population.
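The experiment behind \cref{FigB} can be reproduced qualitatively with a short mean-field simulation. The sketch below is our reading of the protocol, not the authors' code: we assume the first step (trying to meet an agent playing another strategy among $m=4$ sampled agents) contributes a factor $x_j(1 + x_i + x_i^2 + x_i^3)$ to the switching rate from $i$ to $j$, and the second step contributes $[F_j - F_i]_+$.

```python
import numpy as np

# Sketch of the mean-field dynamics behind Fig. B (assumptions flagged above):
# payoff matrix of Game (eq:GameUni) with eps = 0.05, opponent plays the
# smooth periodic strategy y(t) = (1 + sin^{1/9}(pi t)) / 2 on L.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-0.05, 0.95]])
m = 4  # number of agents sampled in the first step

def opponent_prob_L(t):
    s = np.sin(np.pi * t)
    return 0.5 * (1.0 + np.sign(s) * np.abs(s) ** (1.0 / 9.0))

def xdot(x, t):
    p = opponent_prob_L(t)
    F = A @ np.array([p, 1.0 - p])                 # current payoffs F_1, F_2, F_3
    geom = sum(x ** k for k in range(m))           # 1 + x_i + ... + x_i^{m-1}
    # rho[i, j] = geom_i * x_j * [F_j - F_i]_+ : switching rate from i to j
    rho = np.outer(geom, x) * np.maximum(F[None, :] - F[:, None], 0.0)
    np.fill_diagonal(rho, 0.0)
    return rho.T @ x - x * rho.sum(axis=1)         # inflow minus outflow

x = np.array([0.4, 0.3, 0.3])
dt, T = 0.005, 60.0
traj = [x.copy()]
for step in range(int(T / dt)):                    # explicit Euler integration
    x = x + dt * xdot(x, step * dt)
    traj.append(x.copy())
traj = np.array(traj)
```

The vector field sums to zero by construction, so the simplex is preserved up to floating-point error, and one can track the time-average share of the dominated strategy 3 along the trajectory.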
The average frequency of the dominated strategy is around $20\%$ with a domination margin of $\varepsilon = 0.05$, and around $10\%$ with a domination margin of $\varepsilon = 0.1$. For an advantage to frequent strategies, survival of the dominated strategy in this example seems less robust: if the behavior of the opponent oscillates in a way that is independent of the population state in the focal population, what happens in most simulations is that initially either strategy 1 or strategy 3 takes over, as deviations from an approximately equal share of these strategies get amplified by the advantage to frequent strategies. In the first case, the solution converges to the population state putting probability $1$ on the first strategy. In the second case, strategy 1 goes extinct, and then, since the second step of the protocol is based on payoff comparison, strategy 2 drives strategy 3 extinct. \section*{Acknowledgments} \begingroup \small The first author is grateful for financial support by the French National Research Agency (ANR) in the framework of the ``Investissements d'avenir'' program (ANR-15-IDEX-02), the LabEx PERSYVAL (ANR-11-LABX-0025-01), MIAI@Grenoble Alpes (ANR-19-P3IA-0003), and the bilateral ANR-NRF grant ALIAS (ANR-19-CE48-0018-01). \endgroup \end{document}
\begin{document} \title[Generalized coinvariant algebras for wreath products] {Generalized coinvariant algebras for wreath products} \author{Kin Tung Jonathan Chan} \address {Department of Mathematics \newline \indent University of California, San Diego \newline \indent La Jolla, CA, 92093-0112, USA} \email{[email protected], [email protected]} \author{Brendon Rhoades} \begin{abstract} Let $r$ be a positive integer, let $G_n$ be the reflection group of $n \times n$ monomial matrices whose nonzero entries are $r^{th}$ complex roots of unity, and let $k \leq n$ be a nonnegative integer. We define and study two new graded quotients $R_{n,k}$ and $S_{n,k}$ of the polynomial ring ${\mathbb {C}}[x_1, \dots, x_n]$ in $n$ variables. When $k = n$, both of these quotients coincide with the classical coinvariant algebra attached to $G_n$. The algebraic properties of our quotients are governed by the combinatorial properties of $k$-dimensional faces in the Coxeter complex attached to $G_n$ (in the case of $R_{n,k}$) and $r$-colored ordered set partitions of $\{1, 2, \dots, n\}$ with $k$ blocks (in the case of $S_{n,k}$). Our work generalizes a construction of Haglund, Rhoades, and Shimozono from the symmetric group ${\mathfrak{S}}_n$ to the more general wreath products $G_n$. \end{abstract} \keywords{Coxeter complex, coinvariant algebra, wreath product} \subjclass{Primary 05E18, Secondary 05E05} \maketitle \section{Introduction} \label{Introduction} The coinvariant algebra of the symmetric group ${\mathfrak{S}}_n$ is among the most important ${\mathfrak{S}}_n$-modules in combinatorics. It is a graded version of the regular representation of ${\mathfrak{S}}_n$, has structural properties deeply tied to the combinatorics of permutations, and gives a combinatorially accessible model for the action of ${\mathfrak{S}}_n$ on the cohomology ring $H^{\bullet}(G/B)$ of the flag manifold $G/B$.
Haglund, Rhoades, and Shimozono \cite{HRS} recently defined a generalization of the ${\mathfrak{S}}_n$-coinvariant algebra which depends on an integer parameter $k \leq n$. The structure of their graded ${\mathfrak{S}}_n$-module is governed by the combinatorics of ordered set partitions of $[n] := \{1, 2, \dots, n \}$ with $k$ blocks. The graded Frobenius image of this module is (up to a minor twist) either of the combinatorial expressions ${\mathrm {Rise}}_{n,k}({\mathbf {x}};q,t)$ or ${\mathrm {Val}}_{n,k}({\mathbf {x}};q,t)$ appearing in the {\em Delta Conjecture} of Haglund, Remmel, and Wilson \cite{HRW} upon setting $t = 0$. The Delta Conjecture is a generalization of the Shuffle Conjecture in the field of Macdonald polynomials; this gives the first example of a `naturally constructed' module with Frobenius image related to the Delta Conjecture. A linear transformation $t \in GL_n({\mathbb {C}})$ is a {\em reflection} if the fixed space of $t$ has codimension $1$ in ${\mathbb {C}}^n$ and $t$ has finite order. A finite subgroup $W \subseteq GL_n({\mathbb {C}})$ is called a {\em reflection group} if $W$ is generated by reflections. Given any complex reflection group $W$, there is a coinvariant algebra $R_W$ attached to $W$. The algebra $R_W$ is a graded $W$-module with structural properties closely related to the combinatorics of $W$. In this paper we provide a Haglund-Rhoades-Shimozono style generalization of $R_W$ in the case where $W$ belongs to the family of reflection groups $G(r,1,n) = {\mathbb {Z}}_r \wr {\mathfrak{S}}_n$. The general linear group $GL_n({\mathbb {C}})$ acts on the polynomial ring ${\mathbb {C}}[{\mathbf {x}}_n] := {\mathbb {C}}[x_1, \dots, x_n]$ by linear substitutions.
If $W \subset GL_n({\mathbb {C}})$ is any finite subgroup, let \begin{equation*} {\mathbb {C}}[{\mathbf {x}}_n]^W := \{ f({\mathbf {x}}_n) \in {\mathbb {C}}[{\mathbf {x}}_n] \,:\, w.f({\mathbf {x}}_n) = f({\mathbf {x}}_n) \text{ for all $w \in W$} \} \end{equation*} denote the associated subspace of {\em invariant polynomials} and let ${\mathbb {C}}[{\mathbf {x}}_n]^W_+ \subset {\mathbb {C}}[{\mathbf {x}}_n]^W$ denote the collection of invariant polynomials with vanishing constant term. The {\em invariant ideal} $I_W \subset {\mathbb {C}}[{\mathbf {x}}_n]$ is the ideal $I_W := \langle {\mathbb {C}}[{\mathbf {x}}_n]^W_+ \rangle$ generated by ${\mathbb {C}}[{\mathbf {x}}_n]^W_+$ and the {\em coinvariant algebra} is $R_W := {\mathbb {C}}[{\mathbf {x}}_n]/I_W$. The quotient $R_W$ is a graded $W$-module. A celebrated result of Chevalley \cite{C} states that if $W$ is a complex reflection group, then $R_W$ is isomorphic to the regular representation ${\mathbb {C}}[W]$ as a $W$-module. \begin{quote} {\bf Notation.} {\em Throughout this paper $r$ will denote a positive integer. Unless otherwise stated, we assume $r \geq 2$. Let $\zeta := e^{\frac{2 \pi i}{r}} \in {\mathbb {C}}$ and let $G := \langle \zeta \rangle$ be the multiplicative group of $r^{th}$ roots of unity in ${\mathbb {C}}^{\times}$.} \end{quote} Let us introduce the family of reflection groups we will focus on. A matrix is {\em monomial } if it has a unique nonzero entry in every row and column. Let $G_n$ be the group of $n \times n$ monomial matrices whose nonzero entries lie in $G$. For example, if $r = 3$ we have \begin{equation*} g = \begin{pmatrix} 0 & 0 & \zeta & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & \zeta^2 \\ \zeta & 0 & 0 & 0 \end{pmatrix} \in G_4. 
\end{equation*} Matrices in $G_n$ may be thought of combinatorially as {\em $r$-colored permutations} $\pi_1^{c_1} \dots \pi_n^{c_n}$, where $\pi_1 \dots \pi_n$ is a permutation in ${\mathfrak{S}}_n$ and $c_1 \dots c_n$ is a sequence of `colors' in the set $\{0, 1, \dots, r-1\}$ representing powers of $\zeta$. For example, the above element of $G_4$ may be represented combinatorially as $g = 4^1 2^0 1^1 3^2$. In the usual classification of complex reflection groups we have $G_n = G(r,1,n)$. The group $G_n$ is isomorphic to the wreath product ${\mathbb {Z}}_r \wr {\mathfrak{S}}_n = ({\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r) \rtimes {\mathfrak{S}}_n$, where the symmetric group ${\mathfrak{S}}_n$ acts on the $n$-fold direct product of cyclic groups ${\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r$ by coordinate permutation. For the sake of legibility, we suppress reference to $r$ in our notation for $G_n$ and related objects. Let $I_n \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the invariant ideal associated to $G_n$. We have $I_n = \langle e_1({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r) \rangle$, where \begin{equation*} e_d({\mathbf {x}}_n^r) = e_d(x_1^r, \dots, x_n^r) := \sum_{1 \leq i_1 < \cdots < i_d \leq n} x_{i_1}^r \cdots x_{i_d}^r \end{equation*} is the $d^{th}$ elementary symmetric function in the variable powers $x_1^r, \dots, x_n^r$. Let $R_n := {\mathbb {C}}[{\mathbf {x}}_n]/I_n$ denote the coinvariant ring attached to $G_n$. The algebraic properties of the quotient $R_n$ are governed by the combinatorial properties of $r$-colored permutations in $G_n$. Chevalley's result \cite{C} implies that $R_n \cong {\mathbb {C}}[G_n]$ as ungraded $G_n$-modules. 
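The matrix $\leftrightarrow$ colored-word correspondence can be made concrete. The following sketch is our code, with the convention, inferred from the displayed example, that the colored permutation $\pi_1^{c_1} \dots \pi_n^{c_n}$ places the entry $\zeta^{c_i}$ in row $\pi_i$ of column $i$; it rebuilds the matrix $g \in G_4$ from the colored word $4^1 2^0 1^1 3^2$:

```python
import cmath

# Sketch of the correspondence for G_n with r = 3, n = 4.
r, n = 3, 4
zeta = cmath.exp(2j * cmath.pi / r)  # primitive r-th root of unity

def monomial_matrix(word):
    """word = list of (value, color) pairs pi_i^{c_i}; returns the n x n
    monomial matrix with entry zeta^{c_i} in row pi_i, column i."""
    M = [[0j] * n for _ in range(n)]
    for col, (val, c) in enumerate(word):
        M[val - 1][col] = zeta ** c
    return M

g = [(4, 1), (2, 0), (1, 1), (3, 2)]   # g = 4^1 2^0 1^1 3^2
M = monomial_matrix(g)
```

The four nonzero entries of `M` match the displayed matrix: $\zeta$ in positions $(1,3)$ and $(4,1)$, $1$ in position $(2,2)$, and $\zeta^2$ in position $(3,4)$.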
The fact that $e_1({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r)$ is a regular sequence in ${\mathbb {C}}[{\mathbf {x}}_n]$ gives the following expression for the Hilbert series of $R_n$: \begin{equation} {\mathrm {Hilb}}(R_n; q) = \prod_{i = 1}^n \frac{1-q^{ri}}{1-q} = \sum_{g \in G_n} q^{{\mathrm {maj}}(g)}, \end{equation} where ${\mathrm {maj}}$ is the {\em major index} statistic on $G_n$ (also known as the {\em flag-major index}; see \cite{HLR}). Bagno and Biagioli \cite{BB} described a {\em descent monomial basis} $\{b_g \,:\, g \in G_n\}$ of $R_n$ whose elements satisfy $\deg(b_g) = {\mathrm {maj}}(g)$. Stembridge \cite[Thm. 6.6]{Stembridge} described the graded $G_n$-module structure of $R_n$ using (the $r \geq 1$ generalization of) standard Young tableaux. When $r = 1$ and $G_n = {\mathfrak{S}}_n$ is the symmetric group, Haglund, Rhoades, and Shimozono \cite[Defn. 1.1]{HRS} introduced and studied a generalization of the coinvariant algebra $R_n$ depending on a positive integer $k \leq n$. In this paper we extend \cite[Defn. 1.1]{HRS} to $r \geq 2$ by introducing the following {\em two} families of ideals $I_{n,k}, J_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$. \begin{defn} \label{main-definition} Let $n, k,$ and $r$ be nonnegative integers which satisfy $n \geq k, n \geq 1$, and $r \geq 2$. We define two quotients of the polynomial ring ${\mathbb {C}}[{\mathbf {x}}_n]$ as follows. \begin{enumerate} \item Let $I_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the ideal \begin{equation*} I_{n,k} := \langle x_1^{kr+1}, x_2^{kr+1}, \dots, x_n^{kr+1}, e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r) \rangle \end{equation*} and let $R_{n,k}$ be the corresponding quotient: \begin{equation*} R_{n,k} := {\mathbb {C}}[{\mathbf {x}}_n]/I_{n,k}.
\end{equation*} \item Let $J_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the ideal \begin{equation*} J_{n,k} := \langle x_1^{kr}, x_2^{kr}, \dots, x_n^{kr}, e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r) \rangle \end{equation*} and let $S_{n,k}$ be the corresponding quotient: \begin{equation*} S_{n,k} := {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k}. \end{equation*} \end{enumerate} \end{defn} Both of the ideals $I_{n,k}$ and $J_{n,k}$ are homogeneous and stable under the action of $G_n$ on ${\mathbb {C}}[{\mathbf {x}}_n]$. It follows that the quotients $R_{n,k}$ and $S_{n,k}$ are graded $G_n$-modules. The ring introduced in \cite[Defn. 1.1]{HRS} is the quotient $S_{n,k}$ with $r = 1$. When $k = n$, it can be shown \footnote{By \cite[Sec. 7.2]{Bergeron} under the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$ we have $x_n^{nr} \in I_n$, and the ideal $I_n$ is stable under ${\mathfrak{S}}_n$.} that for any $1 \leq i \leq n$, the variable power $x_i^{nr}$ lies in the invariant ideal $I_n$, so that $I_{n,n} = J_{n,n} = I_n$, and $R_{n,n}$ and $S_{n,n}$ are both equal to the classical coinvariant algebra $R_n$ for $G_n$. At the other extreme, we have $R_{n,0} \cong {\mathbb {C}}$ (the trivial representation in degree $0$) and $S_{n,0} = 0$. The reader may wonder why we are presenting two generalizations of the ring of \cite{HRS} rather than one. The combinatorial reason for this is the presence of {\em zero blocks} in the $G_n$-analog of ordered set partitions. These zero blocks do not appear in the case of \cite{HRS} when $r = 1$ (or in the case of the classical coinvariant algebra when $k = n$). Roughly speaking, the ring $S_{n,k}$ will be a `zero block free' version of $R_{n,k}$. These rings will be related in a nice way (see Proposition~\ref{r-to-s-reduction}), and $S_{n,k}$ will be easier to analyze directly.
The generators of the ideal $I_{n,k}$ defining the quotient $R_{n,k}$ come in two flavors: \begin{itemize} \item high degree invariant polynomials $e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r)$, and \item a collection of polynomials $x_1^{kr+1}, \dots, x_n^{kr + 1}$ whose linear span $\mathrm{span} \{x_1^{kr+1}, \dots, x_n^{kr+1} \}$ is stable under the action of $G_n$ and carries the dual of the defining action of $G_n$ on ${\mathbb {C}}^n$. \end{itemize} This extends the two flavors of generators for the ideal of \cite{HRS}. In the context of the 0-Hecke algebra $H_n(0)$ attached to the symmetric group, Huang and Rhoades \cite{HuangRhoades} defined another ideal (denoted in \cite{HuangRhoades} by $J_{n,k} \subseteq \mathbb{F}[{\mathbf {x}}_n]$, where $\mathbb{F}$ is any field) with analogous types of generators: high degree $H_n(0)$-invariants together with a copy of the defining representation of $H_n(0)$ sitting in homogeneous degree $k$. It would be interesting to see if the favorable properties of the corresponding quotients could be derived from this choice of generator selection in a more conceptual way. In this paper we will prove that the structures of the rings $R_{n,k}$ and $S_{n,k}$ are controlled by $G_n$-generalizations of ordered set partitions. We will use the usual $q$-analog notation \begin{align*} [n]_q := 1 + q + \cdots + q^{n-1} & &[n]!_q := [n]_q [n-1]_q \cdots [1]_q \\ {n \brack a_1, \dots , a_r}_q := \frac{[n]!_q}{[a_1]!_q \cdots [a_r]!_q} & &{n \brack a}_q := \frac{[n]!_q}{[a]!_q [n-a]!_q}. \end{align*} We also let ${\mathrm {rev}}_q$ be the operator which reverses the coefficient sequences in polynomials in the variable $q$ (over any ground ring). For example, we have \begin{equation*} {\mathrm {rev}}_q(8q^2 + 7q + 6) = 6q^2 + 7q + 8. 
\end{equation*} Let ${\mathrm {Stir}}(n,k)$ be the (signless) Stirling number of the second kind counting set partitions of $[n]$ into $k$ blocks and let ${\mathrm {Stir}}_q(n,k)$ denote the {\em $q$-Stirling number} defined by the recursion \begin{equation*} {\mathrm {Stir}}_q(n,k) = [k]_q \cdot {\mathrm {Stir}}_q(n-1,k) + {\mathrm {Stir}}_q(n-1,k-1) \end{equation*} for $n, k \geq 1$ and the initial condition ${\mathrm {Stir}}_q(0,k) = \delta_{0,k}$. Deferring various definitions to Section~\ref{Background}, we state our main results. \begin{itemize} \item As {\em ungraded} $G_n$-modules we have \begin{center} $R_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}]$ and $S_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}]$, \end{center} where ${\mathcal{F}}_{n,k}$ is the set of $k$-dimensional faces in the Coxeter complex attached to $G_n$ and ${\mathcal{OP}}_{n,k}$ is the set of $r$-colored ordered set partitions of $[n]$ with $k$ blocks (Corollary~\ref{ungraded-isomorphism-type}). In particular, we have \begin{align*} \dim(R_{n,k}) &= \sum_{z = 0}^{n-k} {n \choose z} \cdot r^{n-z} \cdot k! \cdot {\mathrm {Stir}}(n-z,k), \\ \dim(S_{n,k}) &= r^n \cdot k! \cdot {\mathrm {Stir}}(n,k). \end{align*} \item The Hilbert series ${\mathrm {Hilb}}(R_{n,k}; q)$ and ${\mathrm {Hilb}}(S_{n,k};q)$ are given by (Corollary~\ref{hilbert-series-corollary}) \begin{align*} {\mathrm {Hilb}}(R_{n,k}; q) &= \sum_{z = 0}^{n-k} {n \choose z} \cdot q^{krz} \cdot {\mathrm {rev}}_q( [r]_q^{n-z} \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n-z,k)), \\ {\mathrm {Hilb}}(S_{n,k}; q) &= {\mathrm {rev}}_q( [r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n,k)). \end{align*} \item Endow monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ with the lexicographic term order. 
The standard monomial basis of $R_{n,k}$ is the collection of monomials $m = x_1^{a_1} \cdots x_n^{a_n}$ whose exponent sequences $(a_1, \dots, a_n)$ are componentwise $\leq$ some shuffle of the sequences $(r-1, 2r-1, \dots, kr-1)$ and $(\underbrace{kr, \dots, kr}_{n-k})$. The standard monomial basis of $S_{n,k}$ is the collection of monomials $m = x_1^{b_1} \cdots x_n^{b_n}$ whose exponent sequences $(b_1, \dots, b_n)$ are componentwise $\leq$ some shuffle of the sequences $(r-1, 2r-1, \dots, kr-1)$ and $(\underbrace{kr-1, \dots, kr-1}_{n-k})$ (Theorem~\ref{artin-basis}). \item There is a generalization of Bagno and Biagioli's descent monomial basis of $R_n$ to the rings $R_{n,k}$ and $S_{n,k}$ (Theorems~\ref{s-gs-basis-theorem} and \ref{r-gs-basis-theorem}). \item We have an explicit description of the {\em graded} isomorphism type of the $G_n$-modules $R_{n,k}$ and $S_{n,k}$ in terms of standard Young tableaux (Theorem~\ref{graded-isomorphism-type}). \end{itemize} Although the properties of the rings $R_{n,k}$ (and $S_{n,k}$) shown above give natural extensions of the corresponding properties of $R_n$, the proofs of these results will be quite different. Since the classical invariant ideal $I_n$ is cut out by a regular sequence $e_1({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r)$, standard tools from commutative algebra (the {\em Koszul complex}) can be used to derive the graded isomorphism type of $R_n$. Since neither the dimension $\dim(R_{n,k}) = \sum_{z = 0}^{n-k} {n \choose z} \cdot r^{n-z} \cdot k! \cdot {\mathrm {Stir}}(n-z,k)$ nor $\dim(S_{n,k}) = r^n \cdot k! \cdot {\mathrm {Stir}}(n,k)$ has a nice product formula, we cannot hope to apply this technology to our situation. Replacing the commutative algebra machinery used to analyze $R_n$ will be {\em combinatorial} commutative algebra machinery (Gr\"obner theory and straightening laws), which will determine the structure of $R_{n,k}$.
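Two of the statements above are easy to sanity-check computationally. The sketch below is our code, not the authors'; it represents polynomials in $q$ as coefficient lists and verifies that $\mathrm{Hilb}(S_{n,k};q)$ evaluated at $q=1$ gives $r^n \cdot k! \cdot \mathrm{Stir}(n,k)$, and that for $k = n$ the stated formula collapses to $\mathrm{Hilb}(R_n;q) = \prod_{i=1}^n [ri]_q$.

```python
from math import comb, factorial

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def qint(k, step=1):
    # [k]_{q^step} = 1 + q^step + ... + q^{step(k-1)} as a coefficient list
    out = [0] * (step * (k - 1) + 1)
    for t in range(k):
        out[step * t] = 1
    return out

def qstirling(n, k, step=1):
    # Stir_{q^step}(n, k) via the recursion stated in the text
    if n == 0:
        return [1] if k == 0 else [0]
    if k == 0:
        return [0]
    a = polymul(qint(k, step), qstirling(n - 1, k, step))
    b = qstirling(n - 1, k - 1, step)
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return trim(out)

def stirling(n, k):
    # (signless) Stirling number of the second kind, by inclusion-exclusion
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def hilb_S(n, k, r):
    # rev_q( [r]_q^n * [k]!_{q^r} * Stir_{q^r}(n, k) )
    prod = [1]
    for _ in range(n):
        prod = polymul(prod, qint(r))
    for j in range(1, k + 1):
        prod = polymul(prod, qint(j, step=r))
    prod = trim(polymul(prod, qstirling(n, k, step=r)))
    return prod[::-1]
```

For example, with $r = 2$, $n = 3$, $k = 2$, the coefficients of `hilb_S(3, 2, 2)` sum to $2^3 \cdot 2! \cdot \mathrm{Stir}(3,2) = 48$, and `hilb_S(3, 3, 2)` agrees with the product $[2]_q [4]_q [6]_q$.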
Although some portions of our analysis will follow from the arguments of \cite{HRS} after making the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$, other arguments will have to be significantly adapted to account for the possible presence of zero blocks. The rest of the paper is organized as follows. In {\bf Section~\ref{Background}} we give background material related to $r$-colored ordered set partitions, the Coxeter complex of $G_n$, symmetric functions, the representation theory of $G_n$, and Gr\"obner theory. In {\bf Section~\ref{Polynomial}} we prove some polynomial and symmetric function identities that will be helpful in later sections. In {\bf Section~\ref{Hilbert}} we calculate the standard monomial bases of $R_{n,k}$ and $S_{n,k}$ with respect to the lexicographic term order and compute the Hilbert series of these quotients. In {\bf Section~\ref{Descent}} we present our generalizations of the Bagno-Biagioli descent monomial basis of $R_n$ to obtain descent monomial-type bases for $R_{n,k}$ and $S_{n,k}$. In {\bf Section~\ref{Frobenius}} we derive the graded isomorphism type of the $G_n$-modules $R_{n,k}$ and $S_{n,k}$. We close in {\bf Section~\ref{Conclusion}} with some open questions. \section{Background} \label{Background} \subsection{$r$-colored ordered set partitions} We will make use of two orders on the alphabet \begin{equation*} {\mathcal{A}}_r := \{i^c \,:\, i \in {\mathbb {Z}}_{> 0} \text{ and } 0 \leq c \leq r-1 \} \end{equation*} of $r$-colored positive integers. The first order $<$ weights colors more heavily than letter values, with higher colors being smaller: \begin{equation*} 1^{r-1} < 2^{r-1} < \cdots < 1^{r-2} < 2^{r-2} < \cdots < 1^0 < 2^0 < \cdots. \end{equation*} The second order $\prec$ weights letter values more heavily than colors: \begin{equation*} 1^{r-1} \prec 1^{r-2} \prec \cdots \prec 1^0 \prec 2^{r-1} \prec 2^{r-2} \prec \cdots \prec 2^0 \prec \cdots.
\end{equation*} Let $w = w_1^{c_1} \dots w_n^{c_n}$ be any word in the alphabet ${\mathcal{A}}_r$. The {\em descent set} and {\em ascent set} of $w$ are defined using the order $<$: \begin{equation} {\mathrm {Des}}(w) := \{1 \leq i \leq n-1 \,:\, w_i^{c_i} > w_{i+1}^{c_{i+1}} \}, \hspace{0.2in} {\mathrm {Asc}}(w) := \{1 \leq i \leq n-1 \,:\, w_i^{c_i} < w_{i+1}^{c_{i+1}} \}. \end{equation} We write ${\mathrm {des}}(w) := |{\mathrm {Des}}(w)|$ and ${\mathrm {asc}}(w) := |{\mathrm {Asc}}(w)|$ for the number of descents and ascents in $w$. The {\em major index} ${\mathrm {maj}}(w)$ is given by the formula \begin{equation} {\mathrm {maj}}(w) := c(w) + r \cdot \sum_{i \in {\mathrm {Des}}(w)} i, \end{equation} where $c(w)$ denotes the sum of the colors of the letters in $w$. This version of the major index was defined by Haglund, Loehr, and Remmel in \cite{HLR} (where it was termed `flag-major index'). Since we may view elements of $G_n$ as $r$-colored permutations, the objects defined in the above paragraph make sense for $g \in G_n$. For example, if $r = 3$ and $g = 3^0 4^1 6^2 2^0 5^2 1^2 \in G_6$, we have ${\mathrm {Des}}(g) = \{1,2,4,5\}, {\mathrm {Asc}}(g) = \{3\}, {\mathrm {des}}(g) = 4, {\mathrm {asc}}(g) = 1,$ and \begin{equation*} {\mathrm {maj}}(g) = (0 + 1 + 2 + 0 + 2 + 2) + 3 \cdot (1 + 2 + 4 + 5) = 43. \end{equation*} An {\em ordered set partition} is a set partition equipped with a total order on its blocks. An {\em $r$-colored ordered set partition of size $n$} is an ordered set partition $\sigma$ of $[n]$ in which every letter is assigned a color in the set $\{0, 1, \dots, r-1\}$. For example, \begin{equation*} \sigma = \{3^0,4^1\} \prec \{6^2\} \prec \{1^2,2^0,5^2\} \end{equation*} is a $3$-colored ordered set partition of size $6$ with $3$ blocks. We let ${\mathcal{OP}}_{n,k}$ be the collection of $r$-colored ordered set partitions of size $n$ with $k$ blocks. We have \begin{equation} |{\mathcal{OP}}_{n,k}| = r^n \cdot k! \cdot {\mathrm {Stir}}(n,k).
\end{equation} We will often use bars to represent colored ordered set partitions more succinctly. Here we write block elements in increasing order with respect to $\prec$. Our example ordered set partition becomes \begin{equation*} \sigma = (3^0 4^1 \mid 6^2 \mid 1^2 2^0 5^2 ). \end{equation*} We also have a descent starred notation for colored ordered set partitions, where we order elements within blocks in a decreasing fashion with respect to $<$. Our example ordered set partition becomes \begin{equation*} \sigma = 3^0_* 4^1 \, \, 6^2 \, \, 2^0_* 5^2_* 1^2. \end{equation*} Notice that we use the order $\prec$ for the bar notation, but the order $<$ for the star notation. The star notation represents $\sigma \in {\mathcal{OP}}_{n,k}$ as a pair $\sigma = ( g, S)$, where $g \in G_n$, $|S| = n-k$ and $S \subseteq {\mathrm {Des}}(g)$. Our example ordered set partition becomes \begin{equation*} \sigma = (3^0 4^1 6^2 2^0 5^2 1^2, \{1,4,5\}). \end{equation*} Let $\sigma \in {\mathcal{OP}}_{n,k}$ and let $(g,S)$ be the descent starred representation of $\sigma$. The {\em major index} of $\sigma = (g, S)$ is \begin{equation} {\mathrm {maj}}(\sigma) = {\mathrm {maj}}(g, S) = c(\sigma) + r \cdot \left[ \sum_{i \in {\mathrm {Des}}(g)} i - \sum_{i \in S} |{\mathrm {Des}}(g) \cap \{i, i+1, \dots, n\}| \right], \end{equation} where $c(\sigma)$ denotes the sum of the colors in $\sigma$. In the example above, we have \begin{equation*} {\mathrm {maj}}(3^0_* 4^1 \, \, 6^2 \, \, 2^0_* 5^2_* 1^2) = (0 + 1 + 2 + 0 + 2 + 2) + 3 \cdot [ (1 + 2 + 4 + 5) - (4 + 2 + 1) ] = 22. \end{equation*} Whereas the definition of ${\mathrm {maj}}$ for colored ordered set partitions used the order $<$ to compare elements, the definition of ${\mathrm {coinv}}$ uses the order $\prec$. In particular, let $\sigma$ be a colored ordered set partition. 
A {\em coinversion pair} in $\sigma$ is a pair of colored letters $i^c \preceq j^d$ appearing in $\sigma$ such that \begin{equation*} \begin{cases} \text{at least one of $i^c$ and $j^d$ is $\prec$-minimal in its block in $\sigma$,} \\ \text{$i^c$ and $j^d$ belong to different blocks of $\sigma$, and} \\ \text{if $i^c$'s block is to the right of $j^d$'s block, then only $j^d$ is $\prec$-minimal in its block.} \end{cases} \end{equation*} In our example $\sigma = (3^0 4^1 \mid 6^2 \mid 1^2 2^0 5^2 )$, the coinversion pairs are $3^0 6^2, 2^0 3^0, 3^0 5^2, 2^0 6^2, 4^1 6^2,$ and $5^2 6^2$. The statistic ${\mathrm {coinv}}(\sigma)$ is defined by \begin{equation} {\mathrm {coinv}}(\sigma) = [n\cdot(r-1) - c(\sigma)] + r \cdot (\text{number of coinversion pairs in $\sigma$}). \end{equation} In our example we have \begin{equation*} {\mathrm {coinv}}(3^0 4^1 \mid 6^2 \mid 1^2 2^0 5^2 ) = [6 \cdot 2 - (0 + 1 + 2 + 2 + 0 + 2)] + 3 \cdot 6 = 23. \end{equation*} In particular, whereas the statistic ${\mathrm {maj}}$ involves a sum over colors, the statistic ${\mathrm {coinv}}$ involves a sum over {\em complements} of colors. The statistic ${\mathrm {coinv}}$ on $r$-colored $k$-block ordered set partitions of $[n]$ is complementary to the statistic ${\mathrm {inv}}$ defined in \cite[Sec. 4]{Rhoades}. We need an extension of colored set partitions involving repeated letters. An {\em $r$-colored ordered multiset partition} $\mu$ is a sequence of finite nonempty sets $\mu = (M_1, \dots, M_k)$ of elements from the alphabet ${\mathcal{A}}_r$. The {\em size} of $\mu$ is $|M_1| + \cdots + |M_k|$ and we say that $\mu$ has {\em $k$ blocks}. For example, we have that $\mu = (2^1 2^0 3^1 \mid 1^2 3^1 \mid 2^0 4^2 )$ is a $3$-colored ordered multiset partition of size $7$ with $3$ blocks. We emphasize that the blocks of ordered multiset partitions are {\em sets}; there are no repeated letters within blocks (although the same letter can occur with different colors within a single block).
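The statistics defined above are easy to mechanize, which is a convenient way to check the running examples. The following Python sketch is our own illustration (the encoding of a colored letter as a `(letter, color)` pair is an implementation choice, not notation from the paper); it recomputes ${\mathrm {Des}}(g)$, ${\mathrm {maj}}(g)$, ${\mathrm {maj}}(\sigma)$, and ${\mathrm {coinv}}(\sigma)$ for the examples with $r = 3$.

```python
R = 3  # number of colors

def lt_key(letter):
    """Key realizing the order <: higher colors are smaller, then by letter."""
    i, c = letter
    return (-c, i)

def prec_key(letter):
    """Key realizing the order ≺: letter values first, higher colors smaller."""
    i, c = letter
    return (i, -c)

def descents(word):
    """Des(w), computed with respect to the order <."""
    return [i + 1 for i in range(len(word) - 1)
            if lt_key(word[i]) > lt_key(word[i + 1])]

def maj(word):
    """Flag-major index: color sum plus R times the descent sum."""
    return sum(c for _, c in word) + R * sum(descents(word))

def maj_osp(g, S):
    """maj of a colored ordered set partition in descent-starred form (g, S)."""
    D = descents(g)
    correction = sum(sum(1 for d in D if d >= i) for i in S)
    return sum(c for _, c in g) + R * (sum(D) - correction)

def coinv(blocks):
    """coinv of a colored ordered set partition given as a list of blocks."""
    mins = [min(B, key=prec_key) for B in blocks]
    letters = [(x, b) for b, B in enumerate(blocks) for x in B]
    pairs = 0
    for x, bx in letters:
        for y, by in letters:
            if prec_key(x) < prec_key(y) and bx != by:
                x_min, y_min = x == mins[bx], y == mins[by]
                # The third defining condition only constrains pairs whose
                # ≺-smaller letter sits in a block to the right of the other.
                ok = (y_min and not x_min) if bx > by else (x_min or y_min)
                pairs += ok
    n = len(letters)
    colors = sum(c for (_, c), _ in letters)
    return (n * (R - 1) - colors) + R * pairs

g = [(3, 0), (4, 1), (6, 2), (2, 0), (5, 2), (1, 2)]
sigma = [[(3, 0), (4, 1)], [(6, 2)], [(1, 2), (2, 0), (5, 2)]]
print(descents(g))            # [1, 2, 4, 5]
print(maj(g))                 # 43
print(maj_osp(g, {1, 4, 5}))  # 22
print(coinv(sigma))           # 23
```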
If $\mu$ is an ordered multiset partition, the statistics ${\mathrm {coinv}}(\mu)$ and ${\mathrm {maj}}(\mu)$ have the same definitions as in the case of no repeated letters. \subsection{$G_n$-faces} To describe the combinatorics of the rings $R_{n,k}$, we introduce the following concept of a $G_n$-face. In the following definition we require $r \geq 2$. \begin{defn} \label{g-face} A {\em $G_n$-face} is an ordered set partition $\sigma = (B_1 \mid B_2 \mid \cdots \mid B_m)$ of $[n]$ such that the letters in every block of $\sigma$, with the possible exception of the first block $B_1$, are decorated by the colors $\{0, 1, \dots, r-1\}$. \end{defn} Let $\sigma = (B_1 \mid B_2 \mid \cdots \mid B_m)$ be a $G_n$-face. If the letters in $B_1$ are uncolored, then $B_1$ is called the {\em zero block} of $\sigma$. The {\em dimension} of $\sigma$ is the number of nonzero blocks in $\sigma$. Let ${\mathcal{F}}_{n,k}$ denote the set of $G_n$-faces of dimension $k$. For example, if $r = 3$ we have \begin{align*} ( 2 5 \mid 1^1 3^2 6^2 \mid 4^1 ) &\in {\mathcal{F}}_{6,2} \text{ and } \\ ( 2^2 5^1 \mid 1^1 3^2 6^2 \mid 4^1) &\in {\mathcal{F}}_{6,3}, \end{align*} where the lack of colors on the letters of the first block $\{2,5\}$ of the top face indicates that $\{2,5\}$ is a zero block. When $k = n$, we have ${\mathcal{F}}_{n,n} = {\mathcal{OP}}_{n,n} = G_n$ as there cannot be a zero block. The term {\em face} in Definition~\ref{g-face} comes from the identification of the $k$-dimensional $G_n$-faces with the $k$-dimensional faces in the Coxeter complex of $G_n$. The set ${\mathcal{F}}_{n,k}$ may also be identified with the collection of rank $k$ elements in the Dowling lattice $Q_n(\Gamma)$ associated to a group $\Gamma$ of size $r$ (see \cite{Dowling}). By considering the possible sizes of zero blocks, we see that the number of faces in ${\mathcal{F}}_{n,k}$ is \begin{equation} |{\mathcal{F}}_{n,k}| = \sum_{z = 0}^{n-k} {n \choose z} \cdot r^{n-z} \cdot k!
\cdot {\mathrm {Stir}}(n-z,k). \end{equation} We will consider an action of the group $G_n$ on ${\mathcal{F}}_{n,k}$. To describe this action it suffices to describe the action of permutation matrices ${\mathfrak{S}}_n \subseteq G_n$ and the diagonal subgroup ${\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r \subseteq G_n$. If $\pi = \pi_1 \dots \pi_n \in {\mathfrak{S}}_n$, then $\pi$ acts on a $G_n$-face by replacing each letter $i$ with $\pi_i$ while preserving colors. For example, if $\pi = 614253 \in {\mathfrak{S}}_6$, then \begin{equation*} \pi. (25 \mid 1^1 3^2 6^2 \mid 4^1) = (15 \mid 6^1 4^2 3^2 \mid 2^1) = (15 \mid 3^2 4^2 6^1 \mid 2^1). \end{equation*} A diagonal matrix $g = \mathrm{diag}(\zeta^{c_1}, \dots, \zeta^{c_n})$ acts by increasing the color of the letter $i$ by $c_i$ (mod $r$), while leaving elements in the zero block uncolored. For example, if $r = 3$, an example action of the diagonal matrix $g = \mathrm{diag}(\zeta, \zeta^2, \zeta^2, \zeta, \zeta^2, \zeta) \in G_6$ is \begin{equation*} g. (25 \mid 1^1 3^2 6^2 \mid 4^1) = (25 \mid 1^2 3^1 6^0 \mid 4^2). \end{equation*} It is clear that the action of $G_n$ on ${\mathcal{F}}_{n,k}$ preserves the subset ${\mathcal{OP}}_{n,k}$ of $r$-colored ordered set partitions. We extend the definition of ${\mathrm {coinv}}$ to $G_n$-faces as follows. There is a natural map \begin{equation} \pi: {\mathcal{F}}_{n,k} \rightarrow \bigcup_{z = 0}^{n-k} {\mathcal{OP}}_{n-z,k} \end{equation} which removes the zero block $Z$ of a $G_n$-face (if present), and then maps the letters in $[n] - Z$ onto $\{1, 2, \dots, n - |Z| \}$ via an order-preserving bijection while preserving colors. For example, we have \begin{equation*} \pi: (2 5 \mid 1^1 3^2 6^2 \mid 4^1) \mapsto (1^1 2^2 4^2 \mid 3^1). \end{equation*} If $\sigma$ is a $G_n$-face whose zero block has size $z$, we define ${\mathrm {coinv}}(\sigma)$ by \begin{equation} {\mathrm {coinv}}(\sigma) := krz + {\mathrm {coinv}}(\pi(\sigma)).
\end{equation} In the $r = 3$ example above, we have \begin{equation*} {\mathrm {coinv}}(2 5 \mid 1^1 3^2 6^2 \mid 4^1) = 2 \cdot 3 \cdot 2 + {\mathrm {coinv}}(1^1 2^2 4^2 \mid 3^1) = 12 + 8 = 20. \end{equation*} \subsection{Symmetric functions} For $n \geq 0$, a {\em (weak) composition of $n$} is a sequence $\alpha = (\alpha_1, \dots, \alpha_k)$ of nonnegative integers with $\alpha_1 + \cdots + \alpha_k = n$. We write $\alpha \models n$ or $|\alpha| = n$ to indicate that $\alpha$ is a composition of $n$. A {\em partition of $n$} is a composition $\lambda$ of $n$ whose parts are positive and weakly decreasing. We write $\lambda \vdash n$ to indicate that $\lambda$ is a partition of $n$. If $\lambda$ and $\mu$ are partitions (of any size) we say that $\lambda$ {\em dominates} $\mu$ and write $\lambda \geq_{dom} \mu$ if $\lambda_1 + \cdots + \lambda_i \geq \mu_1 + \cdots + \mu_i$ for all $i \geq 1$. The {\em Ferrers diagram} of a partition $\lambda$ (in English notation) consists of $\lambda_i$ left-justified boxes in row $i$. The Ferrers diagram of $(4,2,2) \vdash 8$ is shown below. The {\em conjugate} $\lambda'$ of a partition $\lambda$ is obtained by reflecting the Ferrers diagram across its main diagonal. For example, we have $(4,2,2)' = (3,3,1,1)$. \begin{small} \begin{center} \begin{Young} & & & \cr & \cr & \cr \end{Young} \end{center} \end{small} For an infinite sequence of variables ${\mathbf {y}} = (y_1, y_2, \dots )$, let $\Lambda({\mathbf {y}})$ denote the ring of symmetric functions in the variable set ${\mathbf {y}}$ with coefficients in the field ${\mathbb {Q}}(q)$. The ring $\Lambda({\mathbf {y}}) = \bigoplus_{n \geq 0} \Lambda({\mathbf {y}})_n$ is graded by degree. The degree $n$ piece $\Lambda({\mathbf {y}})_n$ has vector space dimension equal to the number of partitions of $n$. 
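The conjugation operation described above is simple enough to check mechanically: the $i^{th}$ part of $\lambda'$ counts the parts of $\lambda$ exceeding $i - 1$, i.e.\ the length of the $i^{th}$ column of the Ferrers diagram. A small Python illustration of ours (not from the paper):

```python
def conjugate(lam):
    """Conjugate partition: column lengths of the Ferrers diagram of lam."""
    if not lam:
        return []
    # The i-th column (0-indexed) has one box for each part larger than i.
    return [sum(1 for part in lam if part > i) for i in range(lam[0])]

print(conjugate([4, 2, 2]))  # [3, 3, 1, 1]
```

Conjugation is an involution, so applying `conjugate` twice recovers the original partition.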
For a partition $\lambda$, let \begin{center} $\begin{array}{ccccc} m_{\lambda}({\mathbf {y}}), & e_{\lambda}({\mathbf {y}}), & h_{\lambda}({\mathbf {y}}), & s_{\lambda}({\mathbf {y}}) \end{array}$ \end{center} be the corresponding {\em monomial, elementary, (complete) homogeneous,} and {\em Schur} symmetric functions. As $\lambda$ varies over the collection of all partitions, these symmetric functions give four different bases for $\Lambda({\mathbf {y}})$. Given any composition $\beta$ whose nonincreasing rearrangement is the partition $\lambda$, we extend this notation by setting $e_{\beta}({\mathbf {y}}) := e_{\lambda}({\mathbf {y}})$ and $h_{\beta}({\mathbf {y}}) := h_{\lambda}({\mathbf {y}})$. Let $\omega: \Lambda({\mathbf {y}}) \rightarrow \Lambda({\mathbf {y}})$ be the linear map which sends $s_{\lambda}({\mathbf {y}})$ to $s_{\lambda'}({\mathbf {y}})$ for all partitions $\lambda$. The map $\omega$ is an involution and a ring automorphism. For any partition $\lambda$, we have $\omega(e_{\lambda}({\mathbf {y}})) = h_{\lambda}({\mathbf {y}})$ and $\omega(h_{\lambda}({\mathbf {y}})) = e_{\lambda}({\mathbf {y}})$. We let $\langle \cdot, \cdot \rangle$ denote the {\em Hall inner product} on $\Lambda({\mathbf {y}})$. This can be defined by either of the rules $\langle s_{\lambda}({\mathbf {y}}), s_{\mu}({\mathbf {y}}) \rangle = \delta_{\lambda,\mu}$ or $\langle h_{\lambda}({\mathbf {y}}), m_{\mu}({\mathbf {y}}) \rangle = \delta_{\lambda, \mu}$ for all partitions $\lambda, \mu$. If $F({\mathbf {y}}) \in \Lambda({\mathbf {y}})$ is any symmetric function, let $F({\mathbf {y}})^{\perp}$ be the linear operator on $\Lambda({\mathbf {y}})$ which is adjoint to the operation of multiplication by $F({\mathbf {y}})$. 
That is, we have \begin{equation} \langle F({\mathbf {y}})^{\perp} G({\mathbf {y}}), H({\mathbf {y}}) \rangle = \langle G({\mathbf {y}}), F({\mathbf {y}}) H({\mathbf {y}}) \rangle \end{equation} for all symmetric functions $G({\mathbf {y}}), H({\mathbf {y}}) \in \Lambda({\mathbf {y}})$. The representation theory of $G_n$ is analogous to that of ${\mathfrak{S}}_n$, but involves $r$-tuples of objects. Given any $r$-tuple $\bm{o} = (o^{(1)}, o^{(2)}, \dots, o^{(r-1)}, o^{(r)})$ of objects, we define the {\em dual} $\bm{o^*}$ to be the $r$-tuple \begin{equation} \bm{o^*} := (o^{(r-1)}, \dots, o^{(2)}, o^{(1)}, o^{(r)}) \end{equation} obtained by reversing the first $r-1$ terms in the sequence $\bm{o}$. At the algebraic level, the operator $\bm{o} \mapsto \bm{o^*}$ corresponds to the entrywise action of complex conjugation on matrices in $G_n$ (which is trivial when $r = 1$ or $r = 2$). If $1 \leq i \leq r$, we define the {\em dual} $i^*$ of $i$ by the rule \begin{equation} i^* = \begin{cases} r-i & 1 \leq i \leq r-1 \\ r & i = r. \end{cases} \end{equation} We therefore have \begin{equation} \bm{o^*} = (o^{(1^*)}, \dots, o^{(r^*)}) \text{ if } \bm{o} = (o^{(1)}, \dots, o^{(r)}). \end{equation} For a positive integer $n$, an {\em $r$-composition} $\bm{\alpha}$ of $n$ is an $r$-tuple of compositions $\bm{\alpha} = (\alpha^{(1)}, \dots, \alpha^{(r)})$ which satisfies $|\bm{\alpha}| := |\alpha^{(1)}| + \cdots + |\alpha^{(r)}| = n$. We write $\bm{\alpha} \models_r n$ to indicate that $\bm{\alpha}$ is an $r$-composition of $n$. Similarly, an {\em $r$-partition} ${ \bm{\lambda} } = (\lambda^{(1)}, \dots, \lambda^{(r)})$ of $n$ is an $r$-tuple of partitions with $|{ \bm{\lambda} }| := |\lambda^{(1)}| + \cdots + |\lambda^{(r)}| = n$. We write ${ \bm{\lambda} } \vdash_r n$ to mean that ${ \bm{\lambda} }$ is an $r$-partition of $n$. 
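Since the irreducible representations of $G_n$ will later be indexed by $r$-partitions, it is worth noting that these objects are easy to enumerate. The following Python sketch (our own illustration, assuming nothing beyond the definitions above) generates all $r$-partitions of $n$:

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def r_partitions(n, r):
    """Generate all r-tuples of partitions with total size n."""
    if r == 1:
        yield from ((lam,) for lam in partitions(n))
        return
    for m in range(n + 1):
        for lam in partitions(m):
            for rest in r_partitions(n - m, r - 1):
                yield (lam,) + rest

# There are 10 2-partitions of 3, matching the 10 irreducible
# representations of the hyperoctahedral group G_3 (r = 2).
print(len(list(r_partitions(3, 2))))  # 10
```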
The {\em conjugate} of an $r$-partition ${ \bm{\lambda} } = (\lambda^{(1)}, \dots, \lambda^{(r)})$ is defined componentwise; $\bm{\lambda'} := (\lambda^{(1)'}, \dots, \lambda^{(r)'})$. The {\em Ferrers diagram} of an $r$-partition $\bm{\lambda} = (\lambda^{(1)}, \dots, \lambda^{(r)})$ is the $r$-tuple of Ferrers diagrams of its constituent partitions. The Ferrers diagram of the $3$-partition $((3,2), \varnothing, (2,2)) \vdash_3 9$ is shown below. \begin{center} \begin{small} \begin{Young} & & \cr & \cr \end{Young}, \, \, \begin{large}$\varnothing$\end{large}, \, \, \begin{Young} & \cr & \cr \end{Young} \end{small} \end{center} Let ${ \bm{\lambda} } = (\lambda^{(1)}, \dots, \lambda^{(r)}) \vdash_r n$ be an $r$-partition of $n$. A {\em semistandard tableau ${ \bm{T}}$ of shape ${ \bm{\lambda} }$} is a tuple ${ \bm{T}} = (T^{(1)}, \dots, T^{(r)})$, where $T^{(i)}$ is a filling of the boxes of $\lambda^{(i)}$ with positive integers which increase weakly across rows and strictly down columns. A semistandard tableau ${ \bm{T}}$ of shape ${ \bm{\lambda} }$ is {\em standard} if the entries $1, 2, \dots, n$ all appear precisely once in ${ \bm{T}}$. Let ${\mathrm {SYT}}^r(n)$ denote the collection of all possible standard tableaux with $r$ components and $n$ boxes. For example, let ${ \bm{\lambda} }= ((3,2), \varnothing, (2,2)) \vdash_3 9$. A semistandard tableau ${ \bm{T}} = (T^{(1)}, T^{(2)}, T^{(3)})$ of shape ${ \bm{\lambda} }$ is \begin{center} \begin{small} \begin{Young} 1 & 3 & 3\cr 3 & 4 \cr \end{Young}, \, \, \begin{large}$\varnothing$\end{large}, \, \, \begin{Young} 1 & 3 \cr 4 & 4 \cr \end{Young} \end{small}. \end{center} A standard tableau of shape ${ \bm{\lambda} }$ is \begin{center} \begin{small} \begin{Young} 3 & 6 & 9\cr 5 & 7 \cr \end{Young}, \, \, \begin{large}$\varnothing$\end{large}, \, \, \begin{Young} 1 & 4 \cr 2 &8 \cr \end{Young} \end{small}. 
\end{center} Let $\bm{T} = (T^{(1)}, \dots, T^{(r)}) \in {\mathrm {SYT}}^r(n)$ be a standard tableau with $n$ boxes. A letter $1 \leq i \leq n-1$ is called a {\em descent} of $\bm{T}$ if \begin{itemize} \item the letters $i$ and $i+1$ appear in the same component $T^{(j)}$ of $\bm{T}$, and $i+1$ appears in a row below $i$ in $T^{(j)}$, or \item the letter $i+1$ appears in a component of $\bm{T} = (T^{(1)}, \dots, T^{(r)})$ strictly to the right of the component containing $i$. \end{itemize} We let ${\mathrm {Des}}(\bm{T}) := \{ 1 \leq i \leq n-1 \,:\, \text{$i$ is a descent of $\bm{T}$} \}$ denote the collection of all descents of $\bm{T}$ and let ${\mathrm {des}}(\bm{T}) := | {\mathrm {Des}}(\bm{T}) |$ denote the number of descents of $\bm{T}$. The {\em major index} of $\bm{T}$ is \begin{equation} {\mathrm {maj}}(\bm{T}) := r \cdot \sum_{i \in {\mathrm {Des}}(\bm{T})} i + \sum_{j = 1}^r (j-1) \cdot |T^{(j)}|, \end{equation} where $|T^{(j)}|$ is the number of boxes in the component $T^{(j)}$. For example, if $\bm{T} = (T^{(1)}, T^{(2)}, T^{(3)})$ is the standard tableau above, then ${\mathrm {Des}}(\bm{T}) = \{1,3,6,7\}, {\mathrm {des}}(\bm{T}) = 4$, and \begin{equation*} {\mathrm {maj}}(\bm{T}) = 3 \cdot (1 + 3 + 6 + 7) + (0 \cdot 5 + 1 \cdot 0 + 2 \cdot 4) = 59. \end{equation*} For $1 \leq i \leq r$, let ${\mathbf {x}}^{(i)} = (x_1^{(i)}, x_2^{(i)}, \dots )$ be an infinite list of variables and let $\Lambda({\mathbf {x}}^{(i)})$ be the ring of symmetric functions in the variables ${\mathbf {x}}^{(i)}$ with coefficients in ${\mathbb {Q}}(q)$. We use ${\mathbf {x}}$ to denote the union of the $r$ variable sets ${\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)}$. Let $\Lambda^r({\mathbf {x}})$ be the tensor product \begin{equation*} \Lambda^r({\mathbf {x}}) = \Lambda({\mathbf {x}}^{(1)}) \otimes \cdots \otimes \Lambda({\mathbf {x}}^{(r)}).
\end{equation*} We can think of $\Lambda^r({\mathbf {x}})$ as the ring of formal power series in ${\mathbb {Q}}(q)[[{\mathbf {x}}]]$ which are symmetric in the variable sets ${\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)}$ separately. The algebra $\Lambda^r({\mathbf {x}})$ is spanned by generating tensors of the form \begin{equation*} F_1({\mathbf {x}}^{(1)}) \cdot \ldots \cdot F_r({\mathbf {x}}^{(r)}) := F_1({\mathbf {x}}^{(1)}) \otimes \cdots \otimes F_r({\mathbf {x}}^{(r)}), \end{equation*} where $F_i({\mathbf {x}}^{(i)}) \in \Lambda({\mathbf {x}}^{(i)})$ is a symmetric function in the variables ${\mathbf {x}}^{(i)}$. The algebra $\Lambda^r({\mathbf {x}})$ is graded via \begin{equation*} \deg(F_1({\mathbf {x}}^{(1)}) \cdot \ldots \cdot F_{r}({\mathbf {x}}^{(r)})) := \deg(F_1({\mathbf {x}}^{(1)})) + \cdots + \deg(F_{r}({\mathbf {x}}^{(r)})), \end{equation*} where the $F_i({\mathbf {x}}^{(i)})$ are homogeneous. The standard bases of $\Lambda^r({\mathbf {x}})$ are obtained from those of $\Lambda({\mathbf {x}}^{(1)}), \dots, \Lambda({\mathbf {x}}^{(r)})$ by multiplication. More precisely, let $\bm{\lambda} = (\lambda^{(1)}, \dots, \lambda^{(r)})$ be an $r$-partition. We define elements \begin{equation*} \bm{m_{\lambda}}({\mathbf {x}}), \bm{e_{\lambda}}({\mathbf {x}}), \bm{h_{\lambda}}({\mathbf {x}}), \bm{s_{\lambda}}({\mathbf {x}}) \in \Lambda^r({\mathbf {x}}) \end{equation*} by \begin{center} $\begin{array}{cc} \bm{m_{\lambda}}({\mathbf {x}}) := m_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots m_{\lambda^{(r)}}({\mathbf {x}}^{(r)}), & \bm{e_{\lambda}}({\mathbf {x}}) := e_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots e_{\lambda^{(r)}}({\mathbf {x}}^{(r)}), \\ \bm{h_{\lambda}}({\mathbf {x}}) := h_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots h_{\lambda^{(r)}}({\mathbf {x}}^{(r)}), & \bm{s_{\lambda}}({\mathbf {x}}) := s_{\lambda^{(1)}}({\mathbf {x}}^{(1)}) \cdots s_{\lambda^{(r)}}({\mathbf {x}}^{(r)}).
\end{array}$ \end{center} As ${ \bm{\lambda} }$ varies over the collection of all $r$-partitions, any of the sets $\{ \bm{m_{\lambda}}({\mathbf {x}}) \}, \{ \bm{e_{\lambda}}({\mathbf {x}}) \}, \{ \bm{h_{\lambda}}({\mathbf {x}}) \},$ or $\{ \bm{s_{\lambda}}({\mathbf {x}}) \}$ forms a basis for $\Lambda^r({\mathbf {x}})$. If ${ \bm{\beta} } = (\beta^{(1)}, \dots, \beta^{(r)})$ is an $r$-composition, we extend this notation by setting \begin{center} $\begin{array}{cc} \bm{e_{\beta}}({\mathbf {x}}) := e_{\beta^{(1)}}({\mathbf {x}}^{(1)}) \cdots e_{\beta^{(r)}}({\mathbf {x}}^{(r)}), & \bm{h_{\beta}}({\mathbf {x}}) := h_{\beta^{(1)}}({\mathbf {x}}^{(1)}) \cdots h_{\beta^{(r)}}({\mathbf {x}}^{(r)}). \end{array}$ \end{center} The Schur functions $\bm{s_{\lambda}}({\mathbf {x}})$ admit the following combinatorial description. If ${ \bm{T}} = (T^{(1)}, \dots, T^{(r)})$ is a semistandard tableau with $r$ components, let ${\mathbf {x}}^{{ \bm{T}}}$ be the monomial in the variable set ${\mathbf {x}}$ where the exponent of $x^{(i)}_j$ equals the multiplicity of $j$ in the tableau $T^{(i)}$. For example, if $r = 3$ and ${ \bm{T}} = (T^{(1)}, T^{(2)}, T^{(3)})$ is as above, we have \begin{equation*} {\mathbf {x}}^{{ \bm{T}}} = (x^{(1)}_1)^1 (x^{(1)}_3)^3 (x^{(1)}_4)^1 (x^{(3)}_1)^1 (x^{(3)}_3)^1 (x^{(3)}_4)^2. \end{equation*} Similarly, if $w$ is any word in the $r$-colored positive integers ${\mathcal{A}}_r$, let ${\mathbf {x}}^w$ be the monomial in ${\mathbf {x}}$ where the exponent of $x^{(i)}_j$ equals the multiplicity of $j^{i-1}$ in the word $w$.
Also, if ${ \bm{\beta} } = (\beta^{(1)}, \dots, \beta^{(r)})$ is an $r$-composition, define the monomial ${\mathbf {x}}^{{ \bm{\beta} }}$ by \begin{equation} {\mathbf {x}}^{{ \bm{\beta} }} := (x^{(1)}_1)^{\beta^{(1)}_1} (x^{(1)}_2)^{\beta^{(1)}_2} \cdots (x^{(2)}_1)^{\beta^{(2)}_1} (x^{(2)}_2)^{\beta^{(2)}_2} \cdots \end{equation} Given an $r$-partition ${ \bm{\lambda} } \vdash_r n$, we have \begin{equation} \bm{s_{\lambda}}({\mathbf {x}}) = \sum_{{ \bm{T}}} {\mathbf {x}}^{{ \bm{T}}}, \end{equation} where the sum is over all semistandard tableaux ${ \bm{T}}$ of shape ${ \bm{\lambda} }$. The Hall inner product $\langle \cdot, \cdot \rangle$ extends to $\Lambda^r({\mathbf {x}})$ by the rule \begin{equation} \langle \bm{s_{\lambda}}({\mathbf {x}}), \bm{s_{\mu^*}}({\mathbf {x}}) \rangle = \langle \bm{h_{\lambda}}({\mathbf {x}}), \bm{m_{\mu^*}}({\mathbf {x}}) \rangle = \delta_{{ \bm{\lambda} }, \bm{\mu}} \end{equation} for all $r$-partitions ${ \bm{\lambda} }$ and $\bm{\mu}$. The presence of duals in this definition comes from the nontriviality of complex conjugation on $G_n$ for $r > 2$. The involution $\omega$ is defined on $\Lambda^r({\mathbf {x}}) = \Lambda({\mathbf {x}}^{(1)}) \otimes \cdots \otimes \Lambda({\mathbf {x}}^{(r)})$ by applying $\omega$ in each component separately. The map $\omega$ is an isometry of the inner product $\langle \cdot, \cdot \rangle$. If $\bm{F(x)} \in \Lambda^r({\mathbf {x}})$, we let $\bm{F(x)}^{\perp}$ be the operator on $\Lambda^r({\mathbf {x}})$ which is adjoint to multiplication by $\bm{F(x)}$ under the inner product $\langle \cdot, \cdot \rangle$. In particular, if $j \geq 1$ and if $1 \leq i \leq r$, we have $h_j({\mathbf {x}}^{(i)}), e_j({\mathbf {x}}^{(i)}) \in \Lambda^r({\mathbf {x}})$, so that $h_j({\mathbf {x}}^{(i)})^{\perp}$ and $e_j({\mathbf {x}}^{(i)})^{\perp}$ make sense as linear operators on $\Lambda^r({\mathbf {x}})$. 
These operators (and their `dual' versions $h_j({\mathbf {x}}^{(i^*)})^{\perp}$ and $e_j({\mathbf {x}}^{(i^*)})^{\perp}$) will play a key role in this paper. \subsection{Representations of $G_n$} In his thesis, Specht \cite{Specht} described the irreducible representations of $G_n$. We recall his construction. Given a matrix $g \in G_n$, define numbers $\chi(g)$ and ${\mathrm {sign}}(g)$ by \begin{align} \chi(g) &:= \text{product of the nonzero entries in $g$}, \\ {\mathrm {sign}}(g) &:= \text{determinant of the permutation matrix underlying $g$}. \end{align} In particular, the number $\chi(g)$ is an $r^{th}$ root of unity and ${\mathrm {sign}}(g) = \pm 1$. Both of the functions $\chi$ and ${\mathrm {sign}}$ are linear characters of $G_n$. In other words, we have $\chi(gh) = \chi(g) \chi(h)$ and ${\mathrm {sign}}(g h) = {\mathrm {sign}}(g) {\mathrm {sign}}(h)$ for all $g, h \in G_n$. It is well known that the irreducible complex representations of the symmetric group ${\mathfrak{S}}_n$ are indexed by partitions $\lambda \vdash n$. Given $\lambda \vdash n$, let $S^{\lambda}$ be the corresponding irreducible ${\mathfrak{S}}_n$-module. For example, we have that $S^{(n)}$ is the trivial representation of ${\mathfrak{S}}_n$ and $S^{(1^n)}$ is the sign representation of ${\mathfrak{S}}_n$. Let $V$ be a module over the cyclic group $G := {\mathbb {Z}}_r = \langle \zeta \rangle$ and let $U$ be an ${\mathfrak{S}}_n$-module. We build a $G_n$-module $V \wr U$ by letting $V \wr U = V^{\otimes n} \otimes U$ as a vector space and defining the action of $G_n$ by \begin{equation} \mathrm{diag}(g_1, \dots, g_n).(v_1 \otimes \cdots \otimes v_n \otimes u) := (g_1.v_1) \otimes \cdots \otimes (g_n.v_n) \otimes u, \end{equation} for all diagonal matrices $\mathrm{diag}(g_1, \dots, g_n) \in G_n$, and \begin{equation} \pi.(v_1 \otimes \cdots \otimes v_n \otimes u) := v_{\pi^{-1}_1} \otimes \cdots \otimes v_{\pi^{-1}_n} \otimes (\pi.u), \end{equation} for all $\pi \in {\mathfrak{S}}_n \subseteq G_n$.
If $V$ is an irreducible $G$-module and $U$ is an irreducible ${\mathfrak{S}}_n$-module, then $V \wr U$ is an irreducible $G_n$-module, but not all of the irreducible $G_n$-modules arise in this way. For any composition $\alpha = (\alpha_1, \dots , \alpha_r) \models n$ with $r$ parts, the parabolic subgroup of block diagonal matrices in $G_n$ with block sizes $\alpha_1, \dots, \alpha_r$ gives an inclusion \begin{equation} G_{\alpha} := G_{\alpha_1} \times \cdots \times G_{\alpha_r} \subseteq G_n. \end{equation} If $W_i$ is a $G_{\alpha_i}$-module for $1 \leq i \leq r$, the tensor product $W_1 \otimes \cdots \otimes W_r$ is a $G_{\alpha}$-module and the induction ${\mathrm {Ind}}_{G_{\alpha}}^{G_n}(W_1 \otimes \cdots \otimes W_r)$ is a $G_n$-module. We index the irreducible representations of the cyclic group $G = {\mathbb {Z}}_r = \langle \zeta \rangle$ in the following slightly nonstandard way. For $1 \leq i \leq r$, let $\rho_i: G \rightarrow GL_1({\mathbb {C}}) = {\mathbb {C}}^{\times}$ be the homomorphism \begin{equation} \rho_i: \zeta \mapsto \zeta^{-i} \end{equation} and let $V_i$ be the vector space ${\mathbb {C}}$ with $G$-module structure given by $\rho_i$. In particular, we have that $V_r$ is the trivial representation of $G$ and $V_1, V_2, \dots, V_{r-1}$ are the nontrivial irreducible representations of $G$. The irreducible modules for $G_n$ are indexed by $r$-partitions of $n$. If $\bm{\lambda} = (\lambda^{(1)}, \dots, \lambda^{(r)}) \vdash_r n$ is an $r$-partition of $n$, let $\alpha = (\alpha_1, \dots, \alpha_r) \models n$ be the composition whose parts are $\alpha_i := |\lambda^{(i)}|$. Define $\bm{S^{\lambda}}$ to be the $G_n$-module given by \begin{equation} \bm{S^{\lambda}} := {\mathrm {Ind}}_{G_{\alpha}}^{G_n} ((V_1 \wr S^{\lambda^{(1)}}) \otimes \cdots \otimes (V_r \wr S^{\lambda^{(r)}})).
\end{equation} Specht proved that the set $\{ \bm{S^{\lambda}} \,:\, \bm{\lambda} \vdash_r n \}$ forms a complete set of nonisomorphic irreducible representations of $G_n$. \begin{example} For any $1 \leq i \leq r$, both of the functions \begin{equation} \begin{cases} \chi^i: g \mapsto (\chi(g))^i \\ {\mathrm {sign}} \cdot \chi^i: g \mapsto {\mathrm {sign}}(g) \cdot (\chi(g))^i \end{cases} \end{equation} on $G_n$ are linear characters. We leave it for the reader to check that under the above classification we have \begin{center} $\begin{array}{ccc} \chi^1 \leftrightarrow ((n), \varnothing, \dots, \varnothing), & & {\mathrm {sign}} \cdot \chi^1 \leftrightarrow ((1^n), \varnothing, \dots, \varnothing), \\ \chi^2 \leftrightarrow (\varnothing, (n), \dots, \varnothing), & & {\mathrm {sign}} \cdot \chi^2 \leftrightarrow (\varnothing, (1^n), \dots, \varnothing), \\ \vdots & & \vdots \\ \chi^r \leftrightarrow (\varnothing, \varnothing, \dots, (n)), & & {\mathrm {sign}} \cdot \chi^r \leftrightarrow (\varnothing, \varnothing, \dots, (1^n)). \end{array}$ \end{center} Since $\chi^r$ is the trivial character of $G_n$, the trivial representation corresponds to the $r$-partition $(\varnothing, \dots, \varnothing, (n))$. \end{example} Let $V$ be a finite-dimensional $G_n$-module. There exist unique integers $m_{\bm{\lambda}}$ such that \begin{equation*} V \cong \bigoplus_{\bm{\lambda} \vdash_r n} (\bm{S^{\lambda}})^{m_{\bm{\lambda}}}. \end{equation*} The {\em Frobenius character} ${\mathrm {Frob}}(V) \in \Lambda^r({\mathbf {x}})$ of $V$ is given by \begin{equation} {\mathrm {Frob}}(V) := \sum_{\bm{\lambda} \vdash_r n} m_{\bm{\lambda}} \bm{s_{\lambda}}({\mathbf {x}}). \end{equation} In particular, the multiplicity $m_{\bm{\lambda}}$ of $\bm{S^{\lambda}}$ in $V$ is $\langle {\mathrm {Frob}}(V), \bm{s_{\lambda^*}}({\mathbf {x}}) \rangle$.
More generally, if $V = \oplus_{d \geq 0} V_d$ is a graded $G_n$-module with each $V_d$ finite-dimensional, the {\em graded Frobenius character} ${\mathrm {grFrob}}(V;q) \in \Lambda^r({\mathbf {x}})[[q]]$ of $V$ is \begin{equation} {\mathrm {grFrob}}(V;q) := \sum_{d \geq 0} {\mathrm {Frob}}(V_d) \cdot q^d. \end{equation} Also recall that the {\em Hilbert series} ${\mathrm {Hilb}}(V;q)$ of $V$ is \begin{equation} {\mathrm {Hilb}}(V;q) := \sum_{d \geq 0} \dim(V_d) \cdot q^d. \end{equation} The Frobenius character is compatible with induction product in the following way. Let $V$ be a $G_n$-module and let $W$ be a $G_m$-module. The tensor product $V \otimes W$ is a $G_{(n,m)}$-module, so that ${\mathrm {Ind}}_{G_{(n,m)}}^{G_{n+m}} (V \otimes W)$ is a $G_{n+m}$-module. We have \begin{equation} {\mathrm {Frob}}({\mathrm {Ind}}_{G_{(n,m)}}^{G_{n+m}} (V \otimes W)) = {\mathrm {Frob}}(V) \cdot {\mathrm {Frob}}(W), \end{equation} where the multiplication on the right-hand side takes place within $\Lambda^r({\mathbf {x}})$. \subsection{Gr\"obner theory} A total order $<$ on the monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ is called a {\em monomial order} if \begin{itemize} \item $1 \leq m$ for every monomial $m \in {\mathbb {C}}[{\mathbf {x}}_n]$, and \item $m \leq m'$ implies $m \cdot m'' \leq m' \cdot m''$ for all monomials $m, m', m'' \in {\mathbb {C}}[{\mathbf {x}}_n]$. \end{itemize} In this paper we will only use the {\em lexicographic} monomial order defined by $x_1^{a_1} \cdots x_n^{a_n} < x_1^{b_1} \cdots x_n^{b_n}$ if there exists $1 \leq i \leq n$ such that $a_1 = b_1, \dots, a_{i-1} = b_{i-1}$, and $a_i < b_i$. If $f \in {\mathbb {C}}[{\mathbf {x}}_n]$ is a nonzero polynomial and $<$ is a monomial order, let ${\mathrm {in}}_<(f)$ be the leading term of $f$ with respect to the order $<$.
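For intuition, the lexicographic comparison of monomials defined above coincides with Python's built-in tuple comparison on exponent vectors, so leading terms are easy to extract. The dictionary representation of a polynomial below is a hypothetical toy encoding of ours, not notation from the paper:

```python
def in_lex(poly):
    """Leading exponent vector of a nonzero polynomial under the
    lexicographic order with x_1 > x_2 > ... > x_n."""
    # Python compares tuples position by position, which is exactly the
    # lexicographic comparison of exponent sequences described above.
    return max(m for m, coeff in poly.items() if coeff != 0)

# f = x1^2*x2 + 5*x1*x2^3 - x1^2*x3^2 in C[x1, x2, x3],
# encoded as {exponent tuple: coefficient}.
f = {(2, 1, 0): 1, (1, 3, 0): 5, (2, 0, 2): -1}
print(in_lex(f))  # (2, 1, 0), i.e. the term x1^2*x2
```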
If $I \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ is an ideal, the corresponding {\em initial ideal} ${\mathrm {in}}_<(I) \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ is the monomial ideal in ${\mathbb {C}}[{\mathbf {x}}_n]$ generated by the leading terms of every nonzero polynomial in $I$: \begin{equation} {\mathrm {in}}_<(I) := \langle {\mathrm {in}}_<(f) \,:\, f \in I - \{0\} \rangle. \end{equation} The collection of monomials $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ which are not contained in ${\mathrm {in}}_<(I)$, namely \begin{equation} \{ \text{monomials $m \in {\mathbb {C}}[{\mathbf {x}}_n]$} \,:\, {\mathrm {in}}_<(f) \nmid m \text{ for all $f \in I - \{0\}$} \} \end{equation} descends to a vector space basis for the quotient ${\mathbb {C}}[{\mathbf {x}}_n]/I$. This is called the {\em standard monomial basis}. A finite subset $B = \{g_1, \dots, g_m\} \subseteq I$ of nonzero polynomials in $I$ is called a {\em Gr\"obner basis} of $I$ if ${\mathrm {in}}_<(I) = \langle {\mathrm {in}}_<(g_1), \dots, {\mathrm {in}}_<(g_m) \rangle$. A Gr\"obner basis $B$ is called {\em reduced} if \begin{itemize} \item the leading coefficient of $g_i$ is $1$ for all $1 \leq i \leq m$, and \item for $i \neq j$, the monomial ${\mathrm {in}}_<(g_i)$ does not divide any of the terms appearing in $g_j$. \end{itemize} After fixing a monomial order, every ideal $I \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ has a unique reduced Gr\"obner basis. \section{Polynomial identities} \label{Polynomial} In this section we prove a family of polynomial and symmetric function identities which will be useful in our analysis of the rings $R_{n,k}$ and $S_{n,k}$. The first of these identities is the $G_n$-analog of \cite[Lem. 3.1]{HRS}.
\begin{lemma} \label{alternating-sum-lemma} Let $k \leq n$, let $\alpha_1, \dots, \alpha_k \in {\mathbb {C}}$ be distinct complex numbers, and let $\beta_1, \dots, \beta_n \in {\mathbb {C}}$ be complex numbers with the property that $\{\alpha_1, \dots, \alpha_k\} \subseteq \{\beta_1^r, \dots, \beta_n^r \}$. For any $n-k+1 \leq s \leq n$ we have \begin{equation} \sum_{j = 0}^{s} (-1)^{j} e_{s-j}(\beta_1^r, \dots, \beta_n^r) h_j(\alpha_1, \dots, \alpha_k) = 0. \end{equation} \end{lemma} \begin{proof} The left-hand side is the coefficient of $t^s$ in the power series \begin{equation} \frac{\prod_{i = 1}^n (1 + t \beta_i^r)}{\prod_{i = 1}^k (1 + t \alpha_i)}. \end{equation} By assumption, every term in the denominator cancels with a distinct term in the numerator, so that this expression is a polynomial in $t$ of degree $n-k$. Since $s > n-k$, the coefficient of $t^s$ in this polynomial is $0$. \end{proof} In practice, our applications of Lemma~\ref{alternating-sum-lemma} will always involve one of the two situations $\{\beta_1^r, \dots, \beta_n^r\} = \{\alpha_1, \dots, \alpha_k\}$ or $\{\beta_1^r, \dots, \beta_n^r\} = \{\alpha_1, \dots, \alpha_k, 0 \}$. Let $\gamma = (\gamma_1, \dots, \gamma_n) \models n$ be a composition with $n$ parts. The {\em Demazure character} $\kappa_{\gamma}({\mathbf {x}}_n) \in {\mathbb {C}}[{\mathbf {x}}_n]$ is defined recursively as follows. If $\gamma_1 \geq \cdots \geq \gamma_n$, we let $\kappa_{\gamma}({\mathbf {x}}_n)$ be the monomial \begin{equation} \kappa_{\gamma}({\mathbf {x}}_n) = x_1^{\gamma_1} \cdots x_n^{\gamma_n}. 
\end{equation} In general, if $\gamma_i < \gamma_{i+1}$, we let \begin{equation} \kappa_{\gamma}({\mathbf {x}}_n) = \frac{ x_i (\kappa_{\gamma'}({\mathbf {x}}_n)) - x_{i+1} (s_i \cdot \kappa_{\gamma'}({\mathbf {x}}_n))}{x_i - x_{i+1}}, \end{equation} where $\gamma' = (\gamma_1, \dots, \gamma_{i+1}, \gamma_i, \dots, \gamma_n)$ is the composition obtained by interchanging the $i^{th}$ and $(i+1)^{st}$ parts of $\gamma$ and $s_i \cdot \kappa_{\gamma'}({\mathbf {x}}_n)$ is the polynomial $\kappa_{\gamma'}({\mathbf {x}}_n)$ with $x_i$ and $x_{i+1}$ interchanged. It can be shown that this recursion gives a well defined collection of polynomials $\{ \kappa_{\gamma}({\mathbf {x}}_n) \}$ indexed by compositions $\gamma$ with $n$ parts. This set forms a basis for the polynomial ring ${\mathbb {C}}[{\mathbf {x}}_n]$. Demazure characters played a key role in \cite{HRS}; they will be equally important here. In order to state the $G_n$-analogs of the lemmata from \cite{HRS} that we will need, we must introduce some notation. \begin{defn} Let $S = \{s_1 < s_2 < \cdots < s_m\} \subseteq [n]$. The {\em skip monomial} ${\mathbf {x}}(S)$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ is \begin{equation*} {\mathbf {x}}(S) := x_{s_1}^{s_1} x_{s_2}^{s_2 - 1} \cdots x_{s_m}^{s_m - m + 1}. \end{equation*} The {\em skip composition} $\gamma(S) = (\gamma_1, \dots, \gamma_n)$ is the length $n$ composition defined by \begin{equation*} \gamma_i = \begin{cases} 0 & i \notin S \\ s_j - j + 1 & i = s_j \in S. \end{cases} \end{equation*} We also let $\overline{\gamma(S)} := (\gamma_n, \dots, \gamma_1)$ be the reverse of the skip composition $\gamma(S)$. \end{defn} For example, if $n = 8$ and $S = \{2,3,5,8\}$, then $\gamma(S) = (0,2,2,0,3,0,0,5)$ and ${\mathbf {x}}(S) = x_2^2 x_3^2 x_5^3 x_8^5$. In general, we have that $\gamma(S)$ is the exponent vector of ${\mathbf {x}}(S)$. We will be interested in the $r^{th}$ powers ${\mathbf {x}}(S)^r$ of skip monomials in this paper. 
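For a minimal worked instance of the Demazure recursion above, tied to the skip compositions just defined (a small check of ours), take $n = 2$, $r = 1$ and $S = \{1\}$, so that $\gamma(S) = (1,0)$, $\overline{\gamma(S)} = (0,1)$, and ${\mathbf {x}}(S) = x_1$.

```latex
% gamma' = (1,0) is weakly decreasing, so kappa_{(1,0)} = x_1, and the
% recursion applied at i = 1 gives
\[
\kappa_{(0,1)}({\mathbf {x}}_2)
= \frac{x_1 \cdot \kappa_{(1,0)}({\mathbf {x}}_2)
       - x_2 \cdot (s_1 \cdot \kappa_{(1,0)}({\mathbf {x}}_2))}{x_1 - x_2}
= \frac{x_1^{2} - x_2^{2}}{x_1 - x_2}
= x_1 + x_2,
\]
% whose lexicographic leading term is x_1 = x(S).
```

The appearance of ${\mathbf {x}}(S)$ as the leading term here is not an accident; it is made systematic below.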
Skip monomials are related to Demazure characters as follows. For any polynomial $f({\mathbf {x}}_n) = f(x_1, \dots, x_n) \in {\mathbb {C}}[{\mathbf {x}}_n]$, let $f({\mathbf {x}}_n^r) = f(x_1^r, \dots, x_n^r)$ and $\overline{f({\mathbf {x}}_n^r)} = f(x_n^r, \dots, x_1^r)$. The following result is immediate from \cite[Lem. 3.5]{HRS} after the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$. \begin{lemma} \label{demazure-initial-term} Let $n \geq k$ and let $S \subseteq [n]$ satisfy $|S| = n-k+1$. Let $<$ be lexicographic order. We have \begin{equation} {\mathrm {in}}_<(\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}) = {\mathbf {x}}(S)^r. \end{equation} Moreover, for any $1 \leq i \leq n$ we have \begin{equation} x_i^{r \cdot (\max(S)-n+k+1)} \nmid m \end{equation} for any monomial $m$ appearing in $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}$. Finally, if $T \subseteq [n]$ satisfies $|T| = n-k+1$ and $T \neq S$, then ${\mathbf {x}}(S)^r \nmid m$ for any monomial $m$ appearing in $\overline{\kappa_{\overline{\gamma(T)}}({\mathbf {x}}_n^r)}$. \end{lemma} We also record the fact, which follows immediately from \cite{HRS}, that the polynomials $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}$ appearing in Lemma~\ref{demazure-initial-term} are contained in the ideals $I_{n,k}$ and $J_{n,k}$. The following result follows from \cite[Eqn. 3.4]{HRS} after the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$. \begin{lemma} \label{demazures-in-ideal} Let $n \geq k$ and let $S \subseteq [n]$ satisfy $|S| = n-k+1$. The polynomial $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)}$ is contained in the ideal \begin{equation} \langle e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-k+1}({\mathbf {x}}_n^r) \rangle \subseteq {\mathbb {C}}[{\mathbf {x}}_n].
\end{equation} In particular, we have $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^r)} \in I_{n,k}$ and $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})} \in J_{n,k}$. \end{lemma} We define two formal power series in the infinite variable set ${\mathbf {x}} = ({\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)})$ using the ${\mathrm {maj}}$ and ${\mathrm {coinv}}$ statistics on $r$-colored ordered multiset partitions. If $\mu$ is an $r$-colored ordered multiset partition, let ${\mathbf {x}}^{\mu}$ be the monomial in the variable set ${\mathbf {x}}$ where the exponent of $x_j^{(i)}$ is the number of occurrences of $j^{i-1}$ in $\mu$. \begin{defn} \label{m-and-i} Let $r \geq 1$ and let $k \leq n$ be positive integers. Define two formal power series in the variable set ${\mathbf {x}} = ({\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)})$ by \begin{align} \bm{M_{n,k}}({\mathbf {x}};q) &:= \sum_{\mu} q^{{\mathrm {maj}}(\mu)} {\mathbf {x}}^{\mu}, \\ \bm{I_{n,k}}({\mathbf {x}};q) &:= \sum_{\mu} q^{{\mathrm {coinv}}(\mu)} {\mathbf {x}}^{\mu}, \end{align} where the sum is over all $r$-colored ordered multiset partitions $\mu$ of size $n$ with $k$ blocks. \end{defn} The next result establishes that the formal power series $\bm{M_{n,k}}({\mathbf {x}};q), \bm{I_{n,k}}({\mathbf {x}};q)$ in Definition~\ref{m-and-i} are both contained in the ring $\Lambda^r({\mathbf {x}})$ and are related to each other by $q$-reversal. \begin{lemma} \label{m-equals-i} Both of the formal power series $\bm{M_{n,k}}({\mathbf {x}};q)$ and $\bm{I_{n,k}}({\mathbf {x}};q)$ lie in the ring $\Lambda^r({\mathbf {x}})$. Moreover, we have $\bm{M_{n,k}}({\mathbf {x}};q) = {\mathrm {rev}}_q (\bm{I_{n,k}}({\mathbf {x}};q))$. \end{lemma} \begin{proof} The truth of this statement for $r = 1$ (when $\Lambda^r({\mathbf {x}})$ is the usual ring of symmetric functions) follows from the work of Wilson \cite{WMultiset}.
To deduce this statement for general $r \geq 1$, consider a new countably infinite set of variables \begin{equation} {\mathbf {z}} = \{z_{i,j} \,:\, j \in {\mathbb {Z}}_{> 0}, 1 \leq i \leq r \}. \end{equation} The association $z_{i,j} \leftrightarrow x_j^{(i)}$ gives a bijection with our collection of variables ${\mathbf {x}} = ({\mathbf {x}}^{(1)}, \dots, {\mathbf {x}}^{(r)})$. The idea is to reinterpret $\bm{M_{n,k}}({\mathbf {x}};q)$ and $\bm{I_{n,k}}({\mathbf {x}};q)$ in terms of the new variable set ${\mathbf {z}}$, and then apply the equality and symmetry known in the case $r = 1$. To achieve the program of the preceding paragraph, we introduce the following notation. Let $\bm{M_{n,k}^1}({\mathbf {z}};q^r)$ be the formal power series \begin{equation} \bm{M_{n,k}^1}({\mathbf {z}};q^r) := \sum_{\mu} q^{r \cdot {\mathrm {maj}}(\mu)} {\mathbf {z}}^{\mu}, \end{equation} where the sum is over all ordered multiset partitions $\mu$ of size $n$ with $k$ blocks on the countably infinite alphabet \begin{equation*} 1^{r-1} < 2^{r-1} < \cdots < 1^{r-2} < 2^{r-2} < \cdots < 1^0 < 2^0 < \cdots \end{equation*} and we compute ${\mathrm {maj}}(\mu)$ as in the $r = 1$ case (i.e., ignoring contributions to ${\mathrm {maj}}$ coming from colors, and not multiplying descents by $r$). Similarly, let $\bm{I^1_{n,k}}({\mathbf {z}};q^r)$ be the formal power series \begin{equation} \bm{I^1_{n,k}}({\mathbf {z}};q^r) := \sum_{\mu} q^{r \cdot {\mathrm {coinv}}(\mu)} {\mathbf {z}}^{\mu}, \end{equation} where the sum is over all ordered multiset partitions $\mu$ of size $n$ with $k$ blocks on the countably infinite alphabet \begin{equation*} 1^{r-1} \prec \cdots \prec 1^0 \prec 2^{r-1} \prec \cdots \prec 2^0 \prec \cdots \end{equation*} and we define ${\mathrm {coinv}}(\mu)$ as in the $r = 1$ case (i.e., ignoring the contribution to ${\mathrm {coinv}}$ coming from colors, and not multiplying the number of coinversion pairs by $r$).
It follows from the definition of $\bm{M_{n,k}}({\mathbf {x}};q)$ that \begin{equation} \label{maj-relation} \bm{M_{n,k}}({\mathbf {x}};q) = \bm{M_{n,k}^1}({\mathbf {z}};q^r) |_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}}. \end{equation} This expression for $\bm{M_{n,k}}({\mathbf {x}};q)$, together with the fact that $\bm{M_{n,k}^1}({\mathbf {z}};q^r)$ is symmetric in the ${\mathbf {z}}$ variables, proves that $\bm{M_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$. Similarly, we have \begin{equation} \label{inv-relation} \bm{I_{n,k}}({\mathbf {x}};q) = \bm{I_{n,k}^1}({\mathbf {z}};q^r) |_{z_{i,j} = q^{r-i} \cdot x_j^{(i)}}, \end{equation} so that $\bm{I_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$. Applying the lemma in the case $r = 1$, we have \begin{align} \bm{M_{n,k}}({\mathbf {x}};q) &= \bm{M_{n,k}^1}(z_{1,r}, z_{2,r}, \dots, z_{1,r-1}, z_{2,r-1}, \dots , z_{1,1}, z_{2,1}, \dots ;q^r) |_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}} \\ &= \bm{M_{n,k}^1}(z_{1,r}, \dots, z_{1,1}, z_{2,r}, \dots, z_{2,1}, \dots; q^r) |_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}} \\ &= {\mathrm {rev}}_q \left[ \bm{I_{n,k}^1}(z_{1,r}, \dots, z_{1,1}, z_{2,r}, \dots, z_{2,1}, \dots; q^r) \right]|_{z_{i,j} = q^{i-1} \cdot x_j^{(i)}} \\ &= {\mathrm {rev}}_q \left[ \bm{I_{n,k}^1}(z_{1,r}, \dots, z_{1,1}, z_{2,r}, \dots, z_{2,1}, \dots; q^r)|_{z_{i,j} = q^{r-i} \cdot x_j^{(i)}} \right] \\ &= {\mathrm {rev}}_q(\bm{I_{n,k}}({\mathbf {x}};q)). \end{align} The first equality is Equation~\ref{maj-relation}, the second equality uses the fact that $\bm{M_{n,k}^1}({\mathbf {z}};q)$ is symmetric in the ${\mathbf {z}}$ variables, the third equality uses the fact that $\bm{M_{n,k}^1}({\mathbf {z}};q) = {\mathrm {rev}}_q(\bm{I_{n,k}^1}({\mathbf {z}};q))$, the fourth equality interchanges evaluation and $q$-reversal, and the final equality is Equation~\ref{inv-relation}. 
\end{proof} The power series in Lemma~\ref{m-equals-i} will be (up to minor transformations) the graded Frobenius character of the ring $S_{n,k}$. We give this character-to-be a name. \begin{defn} Let $r \geq 1$ and let $k \leq n$ be positive integers. Let $\bm{D_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$ be the common ring element \begin{equation} \bm{D_{n,k}}({\mathbf {x}};q) := ({\mathrm {rev}}_q \circ \omega) \bm{M_{n,k}}({\mathbf {x}};q) = \omega \bm{I_{n,k}}({\mathbf {x}};q). \end{equation} \end{defn} As a Frobenius character, the ring element $\bm{D_{n,k}}({\mathbf {x}};q) \in \Lambda^r({\mathbf {x}})$ must expand positively in the Schur basis $\{ \bm{s_{\lambda}}({\mathbf {x}}) \,:\, { \bm{\lambda} } \vdash_r n \}$. The ${\mathrm {maj}}$ formulation of $\bm{D_{n,k}}({\mathbf {x}};q)$ is well suited to proving this fact directly, as well as giving the Schur expansion of $\bm{D_{n,k}}({\mathbf {x}};q)$. The following proposition is a colored version of a result of Wilson \cite[Thm. 5.0.1]{WMultiset}. \begin{proposition} \label{d-schur-expansion} Let $r \geq 1$ and let $k \leq n$ be positive integers. We have the Schur expansion \begin{equation} \bm{D_{n,k}}({\mathbf {x}};q) = {\mathrm {rev}}_q \left[\sum_{{ \bm{T}} \in {\mathrm {SYT}}^r(n)} q^{{\mathrm {maj}}({ \bm{T}}) + r {n-k \choose 2} - r (n-k) {\mathrm {des}}({ \bm{T}})} {{\mathrm {des}}({ \bm{T}}) \brack n-k}_{q^r} \bm{s_{{\mathrm {shape}}({ \bm{T}})'}}({\mathbf {x}}) \right]. \end{equation} \end{proposition} \begin{proof} Consider the collection ${\mathcal{W}}_n$ of all length $n$ words $w = w_1 \dots w_n$ in the alphabet of $r$-colored positive integers. For any word $w \in {\mathcal{W}}_n$, the (colored version of the) {\em RSK correspondence} gives a pair of $r$-tableaux $(\bm{U}, { \bm{T}})$ of the same shape, with $\bm{U}$ semistandard and ${ \bm{T}}$ standard. 
For example, if $r = 3$ and $w = 2^0 1^1 4^1 2^2 1^0 2^0 2^1 1^2 \in {\mathcal{W}}_8$ then $w \mapsto (\bm{U}, { \bm{T}})$ where \begin{small} \begin{equation*} \bm{U} = \, \begin{Young} 1 & 2 \\ 2 \end{Young}, \hspace{0.1in} \begin{Young} 1 & 2 \\ 4 \end{Young}, \hspace{0.1in} \begin{Young} 1 \\ 2 \end{Young} \hspace{0.3in} { \bm{T}} = \, \begin{Young} 1 & 6 \\ 5 \end{Young}, \hspace{0.1in} \begin{Young} 2 & 3 \\ 7 \end{Young}, \hspace{0.1in} \begin{Young} 4 \\ 8 \end{Young} \, . \end{equation*} \end{small} The RSK map gives a bijection \begin{equation} {\mathcal{W}}_n \xrightarrow{\sim} \left\{ (\bm{U}, { \bm{T}}) \,:\, \begin{array}{c} \text{$\bm{U}$ a semistandard $r$-tableau with $n$ boxes,} \\ \text{${ \bm{T}}$ a standard $r$-tableau with $n$ boxes,} \\ \text{${\mathrm {shape}}(\bm{U}) = {\mathrm {shape}}({ \bm{T}})$} \end{array} \right\}. \end{equation} If $w \mapsto (\bm{U}, { \bm{T}})$, then ${\mathrm {Des}}(w) = {\mathrm {Des}}({ \bm{T}})$ so that ${\mathrm {maj}}(w) = {\mathrm {maj}}({ \bm{T}})$. For any word $w \in {\mathcal{W}}_n$, we can generate a collection of ${{\mathrm {des}}(w) \choose n-k}$ $r$-colored ordered multiset partitions $\mu$ as follows. Among the ${\mathrm {des}}(w)$ descents of $w$, choose $n-k$ of them to star, yielding a pair $(w, S)$ where $S \subseteq {\mathrm {Des}}(w)$ satisfies $|S| = n-k$. We may identify $(w, S)$ with an $r$-colored ordered multiset partition $\mu$. The above paragraph implies that \begin{equation} \label{first-m-equation} \bm{M_{n,k}}({\mathbf {x}};q) = \sum_{w \in {\mathcal{W}}_n} q^{{\mathrm {maj}}(w) + r {n-k \choose 2} - r(n-k){\mathrm {des}}(w)} {{\mathrm {des}}(w) \brack n-k}_{q^r} {\mathbf {x}}^w, \end{equation} where the factor $q^{r {n-k \choose 2} - r(n-k){\mathrm {des}}(w)} {{\mathrm {des}}(w) \brack n-k}_{q^r}$ is generated by the ways in which $n-k$ stars can be placed among the ${\mathrm {des}}(w)$ descents of $w$.
Applying RSK to the right-hand side of Equation~\ref{first-m-equation}, we deduce that \begin{equation} \bm{M_{n,k}}({\mathbf {x}};q) = \sum_{{ \bm{T}} \in {\mathrm {SYT}}^r(n)} q^{{\mathrm {maj}}({ \bm{T}}) + r {n-k \choose 2} - r(n-k){\mathrm {des}}({ \bm{T}})} { {\mathrm {des}}({ \bm{T}}) \brack n-k}_{q^{r}} \bm{s_{{\mathrm {shape}}({ \bm{T}})}}({\mathbf {x}}). \end{equation} Since $\bm{D_{n,k}}({\mathbf {x}};q) = ({\mathrm {rev}}_q \circ \omega) \bm{M_{n,k}}({\mathbf {x}};q)$, we are done. \end{proof} Our basic tool for proving that $\bm{D_{n,k}}({\mathbf {x}};q) = {\mathrm {grFrob}}(S_{n,k};q)$ will be the following lemma, which is a colored version of \cite[Lem. 3.6]{HRS}. \begin{lemma} \label{e-perp-lemma} Let $\bm{F({\mathbf {x}})}, \bm{G({\mathbf {x}})} \in \Lambda^r({\mathbf {x}})$ have equal constant terms. Then $\bm{F({\mathbf {x}})} = \bm{G({\mathbf {x}})}$ if and only if $e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{F({\mathbf {x}})} = e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{G({\mathbf {x}})}$ for all $j \geq 1$ and $1 \leq i \leq r$. \end{lemma} \begin{proof} The forward direction is obvious. For the reverse direction, let ${ \bm{\lambda} }$ be any $r$-partition, let $j \geq 1$, and let $1 \leq i \leq r$. We have \begin{align} \langle \bm{F({\mathbf {x}})}, e_j({\mathbf {x}}^{(i^*)}) \bm{e_{{ \bm{\lambda} }}}({\mathbf {x}}) \rangle &= \langle e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{F({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}({\mathbf {x}})} \rangle \\ &= \langle e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{G({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}({\mathbf {x}})} \rangle \\ &= \langle \bm{G({\mathbf {x}})}, e_j({\mathbf {x}}^{(i^*)}) \bm{e_{{ \bm{\lambda} }}}({\mathbf {x}}) \rangle.
\end{align} Since $\langle \bm{F({\mathbf {x}})}, \bm{e_{\bm{\varnothing}}}({\mathbf {x}}) \rangle = \langle \bm{G({\mathbf {x}})}, \bm{e_{\bm{\varnothing}}}({\mathbf {x}}) \rangle$ by assumption (where $\bm{\varnothing} = (\varnothing, \dots, \varnothing)$ is the empty $r$-partition), this chain of equalities implies that $\langle \bm{F({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}}({\mathbf {x}}) \rangle = \langle \bm{G({\mathbf {x}})}, \bm{e_{{ \bm{\lambda} }}({\mathbf {x}})} \rangle$ for any $r$-partition ${ \bm{\lambda} }$. We conclude that $\bm{F({\mathbf {x}})} = \bm{G({\mathbf {x}})}$. \end{proof} We will show that $\bm{D_{n,k}}({\mathbf {x}};q)$ and ${\mathrm {grFrob}}(S_{n,k};q)$ satisfy the conditions of Lemma~\ref{e-perp-lemma} by showing that their images under $e_j({\mathbf {x}}^{(i^*)})^{\perp}$ satisfy the same recursion. The ${\mathrm {coinv}}$ formulation of $\bm{D_{n,k}}({\mathbf {x}};q)$ is best suited to calculating $e_j({\mathbf {x}}^{(i^*)})^{\perp}$. The following lemma is a colored version of \cite[Lem. 3.7]{HRS}. \begin{lemma} \label{d-under-e-perp} Let $r \geq 1$ and let $k \leq n$ be positive integers. Let $1 \leq i \leq r$ and let $j \geq 1$. We have \begin{equation} e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{D_{n,k}}({\mathbf {x}};q) = q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r} \sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} \bm{D_{n-j,m}}({\mathbf {x}};q). \end{equation} \end{lemma} \begin{proof} Applying $\omega$ to both sides of the purported identity, it suffices to prove \begin{equation} \label{h-equation} h_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{I_{n,k}}({\mathbf {x}};q) = q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r} \sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} \bm{I_{n-j,m}}({\mathbf {x}};q). 
\end{equation} Since the bases $\{ \bm{h_{{ \bm{\lambda} }}}({\mathbf {x}}) \}$ and $\{ \bm{m_{{ \bm{\lambda} }^*}}({\mathbf {x}}) \}$ are dual bases for $\Lambda^r({\mathbf {x}})$ under the Hall inner product, for any $\bm{F({\mathbf {x}})} \in \Lambda^r({\mathbf {x}})$ and any $r$-composition ${ \bm{\beta} }$, we have \begin{equation} \label{coefficient-extraction} \langle \bm{F({\mathbf {x}})}, \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \rangle = \text{coefficient of ${\mathbf {x}}^{{ \bm{\beta} }}$ in $\bm{F}({\mathbf {x}})$}. \end{equation} Equation~\ref{coefficient-extraction} is our tool for proving Equation~\ref{h-equation}. Let ${ \bm{\beta} } = (\beta^{(1)}, \dots, \beta^{(r)})$ be an $r$-composition and consider the inner product \begin{equation} \label{h-inner-product} \langle h_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{I_{n,k}}({\mathbf {x}};q), \bm{h_{{ \bm{\beta} }^{*}}}({\mathbf {x}}) \rangle = \langle \bm{I_{n,k}}({\mathbf {x}};q), h_j({\mathbf {x}}^{(i^*)}) \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \rangle. \end{equation} We may write $h_j({\mathbf {x}}^{(i^*)}) \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) = \bm{h_{\bm{\widehat{\beta}}^*}}({\mathbf {x}})$, where \begin{itemize} \item $\bm{\widehat{\beta}} = (\beta^{(1)}, \dots, \widehat{\beta}^{(i)}, \dots, \beta^{(r)})$ is an $r$-composition which agrees with ${ \bm{\beta} }$ in every component except for $i$, and \item $\widehat{\beta}^{(i)} = (\beta^{(i)}_1, \beta^{(i)}_2, \dots, 0 ,\dots, 0, j)$, where the composition $\widehat{\beta}^{(i)}$ has $N$ parts for some positive integer $N$ larger than the number of parts in any of $\beta^{(1)}, \dots, \beta^{(r)}$. \end{itemize} By Equation~\ref{coefficient-extraction}, we can interpret $\langle \bm{I_{n,k}}({\mathbf {x}}), h_j({\mathbf {x}}^{(i^*)}) \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \rangle = \langle \bm{I_{n,k}}({\mathbf {x}}), \bm{h_{\bm{\widehat{\beta}}^*}}({\mathbf {x}}) \rangle$ combinatorially. 
For any $r$-composition $\bm{\alpha} = (\alpha^{(1)}, \dots, \alpha^{(r)})$, let ${\mathcal{OP}}_{\bm{\alpha},k}$ be the collection of $r$-colored ordered multiset partitions with $k$ blocks which contain $\alpha^{(i)}_j$ copies of the letter $j^{i-1}$. Equation~\ref{coefficient-extraction} implies \begin{equation} \label{combinatorial-coefficient-extraction} \langle \bm{I_{n,k}}({\mathbf {x}}), \bm{h_{\bm{\widehat{\beta}}^*}}({\mathbf {x}}) \rangle = \sum_{\mu \in {\mathcal{OP}}_{\bm{\widehat{\beta}},k}} q^{{\mathrm {coinv}}(\mu)}. \end{equation} Let us analyze the right-hand side of Equation~\ref{combinatorial-coefficient-extraction}. A typical element $\mu \in {\mathcal{OP}}_{\bm{\widehat{\beta}},k}$ contains $j$ copies of the {\em big letter} $N^{i-1}$, together with various other {\em small letters}. Recall that the statistic ${\mathrm {coinv}}$ is defined using the order $\prec$, which prioritizes letter value over color. Our choice of $N$ guarantees that every small letter is $\prec N^{i-1}$. We have a map \begin{equation} \varphi: {\mathcal{OP}}_{\bm{\widehat{\beta}},k} \rightarrow \bigcup_{m = \max(1,k-j)}^{\min(k,n-j)} {\mathcal{OP}}_{{ \bm{\beta} },m}, \end{equation} where $\varphi(\mu)$ is the $r$-colored ordered multiset partition obtained by erasing all $j$ of the big letters $N^{i-1}$ in $\mu$ (together with any singleton blocks $\{N^{i-1}\}$). Let us analyze the effect of $\varphi$ on ${\mathrm {coinv}}$. Fix $m$ in the range $\max(1,k-j) \leq m \leq \min(k,n-j)$ and let $\mu \in {\mathcal{OP}}_{{ \bm{\beta} },m}$. Then any $\mu' \in \varphi^{-1}(\mu)$ is obtained by adding $j$ copies of the big letter $N^{i-1}$ to $\mu$, precisely $k-m$ of which must be added in singleton blocks. We calculate $\sum_{\mu' \in \varphi^{-1}(\mu)} q^{{\mathrm {coinv}}(\mu')}$ in terms of ${\mathrm {coinv}}(\mu)$ as follows. Following the notation of the proof of \cite[Lem. 
3.7]{HRS}, let us call a big letter $N^{i-1}$ {\em minb} if it is $\prec$-minimal in its block and {\em nminb} if it is not $\prec$-minimal in its block. Similarly, let us call a small letter {\em mins} or {\em nmins} depending on whether it is minimal in its block. The contributions to $\sum_{\mu' \in \varphi^{-1}(\mu)} q^{{\mathrm {coinv}}(\mu')}$ coming from big letters are as follows. \begin{itemize} \item The $j$ big letters $N^{i-1}$ give a complementary color contribution of $j \cdot (r-i)$ to ${\mathrm {coinv}}$. \item Each of the $minb$ letters forms a coinversion pair with every $nmins$ letter. Since there are $k-m$ $minb$ letters and $n-j-m$ $nmins$ letters, this contributes $r(k-m)(n-j-m)$ to ${\mathrm {coinv}}$. \item Each of the $minb$ letters forms a coinversion pair with every $nminb$ letter (for a total of $(k-m)(j-k+m)$ coinversion pairs) as well as with each $minb$ letter to its left (for a total of ${k-m \choose 2}$ coinversion pairs). This contributes $r \cdot [ (k-m)(j-k+m) + {k-m \choose 2} ]$ to ${\mathrm {coinv}}$. \item Each $minb$ letter forms a coinversion pair with each $mins$ letter to its left. If we sum over the ${k \choose k-m}$ ways of interleaving the singleton blocks $\{N^{i-1}\}$ within the blocks of $\mu$, this gives rise to a factor of ${k \brack k-m}_{q^r}$. \item Each $nminb$ letter forms a coinversion pair with each $mins$ letter to its left. If we consider the ${m \choose j-k+m}$ ways to augment the $m$ blocks of $\mu$ with a $nminb$ letter, this gives rise to a factor of $q^{r {j-k+m \choose 2}} {m \brack j-k+m}_{q^r}$.
\end{itemize} Applying the identity \begin{equation} r \cdot \left[(k-m)(j-k+m) + {k-m \choose 2} + {j-k+m \choose 2} \right] = r \cdot {j \choose 2}, \end{equation} we see that \begin{align} \sum_{\mu' \in \varphi^{-1}(\mu)} q^{{\mathrm {coinv}}(\mu')} &= q^{j \cdot (r-i) + r \cdot {j \choose 2} + r \cdot (k-m)(n-j-m)} {k \brack k-m}_{q^r} {m \brack j-k+m}_{q^r} q^{{\mathrm {coinv}}(\mu)} \\ &= q^{j \cdot (r-i) + r \cdot {j \choose 2} + r \cdot (k-m)(n-j-m)} {k \brack j}_{q^r} {j \brack k-m}_{q^r} q^{{\mathrm {coinv}}(\mu)}. \end{align} If we sum this expression over all $\mu \in {\mathcal{OP}}_{{ \bm{\beta} },m}$, and then sum over $m$, we get \begin{equation} \label{big-expression-h} q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r} \sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m)(n-j-m)} {j \brack k-m}_{q^r} \sum_{\mu \in {\mathcal{OP}}_{{ \bm{\beta} },m}} q^{{\mathrm {coinv}}(\mu)}. \end{equation} However, thanks to Equation~\ref{coefficient-extraction} and the definition of the $\bm{I}$-functions, the expression (\ref{big-expression-h}) is also equal to \begin{equation} \left\langle q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r} \sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} \bm{I_{n-j,m}}({\mathbf {x}};q), \bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}}) \right\rangle. \end{equation} Since both sides of the equation in the statement of the lemma have the same pairing under $\langle \cdot, \cdot \rangle$ with $\bm{h_{{ \bm{\beta} }^*}}({\mathbf {x}})$ for any $r$-composition ${ \bm{\beta} }$, we are done. \end{proof} \section{Hilbert series and standard monomial basis} \label{Hilbert} \subsection{The point sets $Y_{n,k}^r$ and $Z_{n,k}^r$} In this section we derive the Hilbert series of $R_{n,k}$ and $S_{n,k}$. We also prove that, as ungraded $G_n$-modules, we have $R_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}]$ and $S_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}]$.
To do this, we will use a general method dating back to Garsia and Procesi \cite{GP} in the context of the Tanisaki ideal. We recall the method, and then apply it to our situation. For any finite point set $Y \subset {\mathbb {C}}^n$, let ${\mathbf {I}}(Y) \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ be the ideal of polynomials which vanish on $Y$. That is, we have \begin{equation} {\mathbf {I}}(Y) := \{ f \in {\mathbb {C}}[{\mathbf {x}}_n] \,:\, f({\mathbf {y}}) = 0 \text{ for all ${\mathbf {y}} \in Y$} \}. \end{equation} We can identify the quotient ${\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y)$ with the ${\mathbb {C}}$-vector space of functions $Y \rightarrow {\mathbb {C}}$. In particular \begin{equation} \dim ({\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y)) = |Y|. \end{equation} If $W \subseteq GL_n({\mathbb {C}})$ is a finite subgroup and $Y$ is stable under the action of $W$, we have \begin{equation} {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y) \cong_W {\mathbb {C}}[Y] \end{equation} as $W$-modules, where we used the fact that the permutation module $Y$ is self-dual. The ideal ${\mathbf {I}}(Y)$ is almost never homogeneous. To get a homogeneous ideal, we proceed as follows. If $f \in {\mathbb {C}}[{\mathbf {x}}_n]$ is any nonzero polynomial of degree $d$, write $f = f_d + f_{d-1} + \cdots + f_0$, where $f_i$ is homogeneous of degree $i$. Define $\tau(f) := f_d$ and define a homogeneous ideal ${\mathbf {T}}(Y) \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ by \begin{equation} {\mathbf {T}}(Y) := \langle \tau(f) \,:\, f \in {\mathbf {I}}(Y) - \{0\} \rangle. \end{equation} The passage from ${\mathbf {I}}(Y)$ to ${\mathbf {T}}(Y)$ does not affect the $W$-module structure (or vector space dimension) of the quotient: \begin{equation} {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Y) \cong_W{\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {I}}(Y) \cong_W {\mathbb {C}}[Y]. \end{equation} Our strategy, whose $r = 1$ avatar was accomplished in \cite{HRS}, is as follows. 
\begin{enumerate} \item Find finite point sets $Y_{n,k}, Z_{n,k} \subset {\mathbb {C}}^n$ which are stable under the action of $G_n$ such that there are equivariant bijections $Y_{n,k} \cong {\mathcal{F}}_{n,k}$ and $Z_{n,k} \cong {\mathcal{OP}}_{n,k}$. \item Prove that $I_{n,k} \subseteq {\mathbf {T}}(Y_{n,k})$ and $J_{n,k} \subseteq {\mathbf {T}}(Z_{n,k})$ by showing that the generators of the ideals $I_{n,k}, J_{n,k}$ arise as top degree components of polynomials vanishing on $Y_{n,k}, Z_{n,k}$ (respectively). \item Use Gr\"obner theory to prove \begin{equation*} \dim(R_{n,k}) = \dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/I_{n,k} \right) \leq | {\mathcal{F}}_{n,k} | = \dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Y_{n,k}) \right) \end{equation*} and \begin{equation*} \dim(S_{n,k}) = \dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k} \right) \leq | {\mathcal{OP}}_{n,k} | = \dim \left( {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Z_{n,k}) \right). \end{equation*} Step 2 then implies $I_{n,k} = {\mathbf {T}}(Y_{n,k})$ and $J_{n,k} = {\mathbf {T}}(Z_{n,k})$. \end{enumerate} To accomplish Step 1 of this program, we introduce the following point sets. \begin{defn} Fix $k$ distinct positive real numbers $0 < \alpha_1 < \cdots < \alpha_k$. Let $Y_{n,k} \subset {\mathbb {C}}^n$ be the set of points $(y_1, \dots, y_n)$ such that \begin{itemize} \item we have $y_i = 0$ or $y_i \in \{ \zeta^c \alpha_j \,:\, 0 \leq c \leq r-1, 1 \leq j \leq k \}$ for all $i$, and \item we have $\{\alpha_1, \dots, \alpha_k\} \subseteq \{ |y_1|, \dots, |y_n| \}$. \end{itemize} Let $Z_{n,k} \subseteq {\mathbb {C}}^n$ be the set of points in $Y_{n,k}$ whose coordinates do not vanish: \begin{equation*} Z_{n,k} := \{ (y_1, \dots, y_n) \in Y_{n,k} \,:\, y_i \neq 0 \text{ for all $i$.} \}. \end{equation*} \end{defn} There is a bijection $\varphi: {\mathcal{F}}_{n,k} \rightarrow Y_{n,k}$ given as follows. 
Let $\sigma = (Z \mid B_1 \mid \cdots \mid B_k) \in {\mathcal{F}}_{n,k}$ be a $G_n$-face of dimension $k$, whose zero block $Z$ may be empty. The point $\varphi(\sigma) = (y_1, \dots, y_n)$ has coordinates given by \begin{equation} y_i = \begin{cases} 0 & \text{if $i \in Z$,} \\ \zeta^c \alpha_j & \text{if $i \in B_j$ and $i$ has color $c$.} \end{cases} \end{equation} For example, if $r = 3$, then \begin{equation*} \varphi: ( 25 \mid 3^0 \mid 1^0 4^2 6^2) \mapsto (\zeta^0 \alpha_2, 0, \zeta^0 \alpha_1, \zeta^2 \alpha_2, 0, \zeta^2 \alpha_2). \end{equation*} The set $Y_{n,k}$ is closed under the action of $G_n$ and the map $\varphi$ commutes with the action of $G_n$. It follows that $Y_{n,k} \cong {\mathcal{F}}_{n,k}$ as $G_n$-sets. Moreover, the bijection $\varphi$ restricts to show that $Z_{n,k} \cong {\mathcal{OP}}_{n,k}$ as $G_n$-sets. This accomplishes Step 1 of our program. Step 2 of our program is accomplished by appropriate modifications of \cite[Sec. 4]{HRS}. \begin{lemma} \label{i-contained-in-t} We have $I_{n,k} \subseteq {\mathbf {T}}(Y_{n,k})$ and $J_{n,k} \subseteq {\mathbf {T}}(Z_{n,k})$. \end{lemma} \begin{proof} We will show that every generator of $I_{n,k}$ (resp. $J_{n,k}$) is the top degree component of some polynomial in ${\mathbf {I}}(Y_{n,k})$ (resp. ${\mathbf {I}}(Z_{n,k})$). Let $1 \leq i \leq n$. It is clear that $x_i (x_i^r - \alpha_1^r) \cdots (x_i^r - \alpha_k^r) \in {\mathbf {I}}(Y_{n,k})$. Taking the highest component, we have $x_i^{kr+1} \in {\mathbf {T}}(Y_{n,k})$. Similarly, the polynomial $(x_i^r - \alpha_1^r) \cdots (x_i^r - \alpha_k^r)$ vanishes on $Z_{n,k}$, so that $x_i^{kr} \in {\mathbf {T}}(Z_{n,k})$. Lemma~\ref{alternating-sum-lemma} applies to show $e_{n-k+1}({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r) \in {\mathbf {T}}(Y_{n,k})$ and $e_{n-k+1}({\mathbf {x}}_n^r), \dots, e_n({\mathbf {x}}_n^r) \in {\mathbf {T}}(Z_{n,k})$. \end{proof} \subsection{Skip monomials and initial terms} Step 3 of our program takes more work.
We begin by isolating certain monomials in the initial ideals of $I_{n,k}$ and $J_{n,k}$. \begin{lemma} \label{skip-leading-terms} Let $<$ be the lexicographic order on monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$. \begin{itemize} \item For any $1 \leq i \leq n$ we have $x_i^{kr+1} \in {\mathrm {in}}_<(I_{n,k})$ and $x_i^{kr} \in {\mathrm {in}}_<(J_{n,k})$. \item If $S \subseteq [n]$ satisfies $|S| = n-k+1$, we also have ${\mathbf {x}}(S)^r \in {\mathrm {in}}_<(I_{n,k})$ and ${\mathbf {x}}(S)^r \in {\mathrm {in}}_<(J_{n,k})$. \end{itemize} \end{lemma} \begin{proof} The first claim follows from the fact that $x_i^{kr+1}$ is a generator of $I_{n,k}$ and $x_i^{kr}$ is a generator of $J_{n,k}$. The second claim is a consequence of Lemma~\ref{demazure-initial-term} and Lemma~\ref{demazures-in-ideal}. \end{proof} It will turn out that the monomials given in Lemma~\ref{skip-leading-terms} suffice to generate ${\mathrm {in}}_<(I_{n,k})$ and ${\mathrm {in}}_<(J_{n,k})$. The next definition gives a name to the family of monomials which are not divisible by any of the monomials in Lemma~\ref{skip-leading-terms}. \begin{defn} A monomial $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ is {\em $(n,k)$-nonskip} if \begin{itemize} \item $x_i^{kr+1} \nmid m$ for $1 \leq i \leq n$, and \item ${\mathbf {x}}(S)^r \nmid m$ for all $S \subseteq [n]$ with $|S| = n-k+1$. \end{itemize} Let ${\mathcal{M}}_{n,k}$ denote the collection of all $(n,k)$-nonskip monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$. An $(n,k)$-nonskip monomial $m \in {\mathcal{M}}_{n,k}$ is called {\em strongly $(n,k)$-nonskip} if we have $x_i^{kr} \nmid m$ for all $1 \leq i \leq n$. Let ${\mathcal{N}}_{n,k}$ denote the collection of strongly $(n,k)$-nonskip monomials. \end{defn} We will describe a bijection $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ which restricts to a bijection ${\mathcal{OP}}_{n,k} \rightarrow {\mathcal{N}}_{n,k}$.
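Before constructing the bijection, here is a small cardinality check of ours in the case $r = 1$, $n = 2$, $k = 1$. The only skip set with $|S| = n-k+1 = 2$ is $S = \{1,2\}$, with ${\mathbf {x}}(S) = x_1 x_2$.

```latex
% (2,1)-nonskip monomials: x_i^{kr+1} = x_i^2 and x_1 x_2 are forbidden, so
\[
{\mathcal{M}}_{2,1} = \{\, 1,\ x_1,\ x_2 \,\},
\]
% matching |F_{2,1}| = |{ (\varnothing \mid 12),\ (1 \mid 2),\ (2 \mid 1) }| = 3,
% while the strongly (2,1)-nonskip monomials (x_i^{kr} = x_i also forbidden)
% form the set {1}, matching |OP_{2,1}| = 1.
```
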
The bijection $\Psi$ will be constructed recursively, so that $\Psi(\sigma)$ will be determined by $\Psi(\overline{\sigma})$, where $\overline{\sigma}$ is the $G_{n-1}$-face obtained from $\sigma$ by deleting the largest letter $n$. The recursive procedure which produces $\Psi(\sigma)$ from $\Psi(\overline{\sigma})$ will rely on the following lemmata involving skip monomials. The first of these is an extension of \cite[Lem. 4.5]{HRS}. \begin{lemma} \label{skip-monomial-union} Let $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial and let $S, T \subseteq [n]$ be subsets. If ${\mathbf {x}}(S)^r \mid m$ and ${\mathbf {x}}(T)^r \mid m$, then ${\mathbf {x}}(S \cup T)^r \mid m$. \end{lemma} \begin{proof} Given $i \in S$, it follows from the definition of skip monomials that the exponent of $x_i$ in ${\mathbf {x}}(S \cup T)^r$ is $\leq$ the exponent of $x_i$ in ${\mathbf {x}}(S)^r$. A similar observation holds for $i \in T$. The claimed divisibility follows. \end{proof} The following result is an immediate consequence of Lemma~\ref{skip-monomial-union}; it extends \cite[Lem. 4.6]{HRS}. \begin{lemma} \label{skip-monomial-unique} Let $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial and let $\ell$ be the largest integer such that there exists a subset $S \subseteq [n]$ with $|S| = \ell$ and ${\mathbf {x}}(S)^r \mid m$. Then there exists {\em a unique} subset $S \subseteq [n]$ with $|S| = \ell$ and ${\mathbf {x}}(S)^r \mid m$. \end{lemma} \begin{proof} If there were two such sets $S, S'$ then by Lemma~\ref{skip-monomial-union} we would have ${\mathbf {x}}(S \cup S')^r \mid m$, contradicting the definition of $\ell$. \end{proof} Given any subset $S \subseteq [n]$, let ${\mathbf {m}}(S) := \prod_{i \in S} x_i$ be the corresponding squarefree monomial. For example, we have ${\mathbf {m}}(245) = x_2 x_4 x_5$. We have the following lemma involving the $r^{th}$ power ${\mathbf {m}}(S)^r$ of ${\mathbf {m}}(S)$. This is the extension of \cite[Lem. 4.7]{HRS}.
\begin{lemma} \label{skip-monomial-multiply} Let $m \in {\mathcal{M}}_{n,k}$ be an $(n,k)$-nonskip monomial. There exists a unique set $S \subseteq [n]$ with $|S| = n-k$ such that \begin{enumerate} \item ${\mathbf {x}}(S)^r \mid ( {\mathbf {m}}(S)^r \cdot m)$, and \item ${\mathbf {x}}(U)^r \nmid ( {\mathbf {m}}(S)^r \cdot m)$ for all $U \subseteq [n]$ with $|U| = n-k+1$. \end{enumerate} \end{lemma} \begin{proof} We begin with uniqueness. Suppose $S = \{s_1 < \cdots < s_{n-k} \}$ and $T = \{t_1 < \cdots < t_{n-k} \}$ were two such sets. Let $\ell$ be such that $s_1 = t_1, \dots, s_{\ell-1} = t_{\ell-1}$, and $s_{\ell} \neq t_{\ell}$; without loss of generality we have $s_{\ell} < t_{\ell}$. Define a new set $U$ by $U := \{s_1 < \cdots < s_{\ell} < t_{\ell} < t_{\ell + 1} < \cdots < t_{n-k} \}$, so that $|U| = n-k+1$. Since ${\mathbf {x}}(S)^r \mid ({\mathbf {m}}(S)^r \cdot m)$ and ${\mathbf {x}}(T)^r \mid ({\mathbf {m}}(T)^r \cdot m)$, we have ${\mathbf {x}}(U)^r \mid ({\mathbf {m}}(S)^r \cdot m)$, which is a contradiction. To prove existence, consider the following collection ${\mathcal{C}}$ of subsets of $[n]$: \begin{equation} {\mathcal{C}} := \{ S \subseteq [n] \,:\, |S| = n-k \text{ and } {\mathbf {x}}(S)^r \mid ({\mathbf {m}}(S)^r \cdot m) \}. \end{equation} The collection ${\mathcal{C}}$ is nonempty; indeed, we have $\{1, 2, \dots, n-k\} \in {\mathcal{C}}$. Let $S_0 \in {\mathcal{C}}$ be the lexicographically {\em final} set in ${\mathcal{C}}$; we argue that ${\mathbf {m}}(S_0)^r \cdot m$ satisfies Condition 2 of the statement of the lemma, thus finishing the proof. Let $U \subseteq [n]$ have size $|U| = n-k+1$ and suppose ${\mathbf {x}}(U)^r \mid ({\mathbf {m}}(S_0)^r \cdot m)$. If there were an element $u \in U$ with $u < \min(S_0)$, then we would have ${\mathbf {x}}(S_0 \cup \{u\})^r \mid m$, which contradicts the assumption $m \in {\mathcal{M}}_{n,k}$. Since $|U| > |S_0|$, there exists an element $u_0 \in U - S_0$ with $u_0 > \min(S_0)$.
Write the union $S_0 \cup \{u_0\}$ as \begin{equation} S_0 \cup \{u_0\} = \{s_1 < \cdots < s_j < u_0 < s_{j+1} < \cdots < s_{n-k} \}, \end{equation} where $j \geq 1$. Define a new set $S_0'$ by \begin{equation} S_0' := \{s_1 < \cdots < s_{j-1} < u_0 < s_{j+1} < \cdots < s_{n-k} \}. \end{equation} Then $S_0'$ comes after $S_0$ in lexicographic order but we have $S_0' \in {\mathcal{C}}$, contradicting our choice of $S_0$. \end{proof} To see how Lemma~\ref{skip-monomial-multiply} works, consider the case $(n,k,r) = (5,2,3)$ and $m = x_1^2 x_2^6 x_3^3 x_4^3 x_5^6 \in {\mathcal{M}}_{5,2}$. The collection ${\mathcal{C}}$ of sets \begin{equation*} {\mathcal{C}} = \{ S \subseteq [5] \,:\, |S| = 3 \text{ and } {\mathbf {x}}(S)^3 \mid ({\mathbf {m}}(S)^3 \cdot m) \} \end{equation*} is given by \begin{equation*} {\mathcal{C}} = \{123, 124, 125, 134, 135, 234, 235 \}. \end{equation*} However, we have \begin{align*} &{\mathbf {x}}(1234)^3 \mid ({\mathbf {m}}(123)^3 \cdot m), &{\mathbf {x}}(1234)^3 \mid ({\mathbf {m}}(124)^3 \cdot m), \\ &{\mathbf {x}}(1235)^3 \mid ({\mathbf {m}}(125)^3 \cdot m), &{\mathbf {x}}(1234)^3 \mid ({\mathbf {m}}(134)^3 \cdot m), \\ &{\mathbf {x}}(1235)^3 \mid ({\mathbf {m}}(135)^3 \cdot m), &{\mathbf {x}}(2345)^3 \mid ({\mathbf {m}}(234)^3 \cdot m). \end{align*} On the other hand, if $S \subseteq [5]$ and $|S| = 4$, then ${\mathbf {x}}(S)^3 \nmid ({\mathbf {m}}(235)^3 \cdot m)$. Observe that $235$ is the lexicographically final set in ${\mathcal{C}}$. \subsection{The bijection $\Psi$} We describe a bijection $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ which restricts to a bijection ${\mathcal{OP}}_{n,k} \rightarrow {\mathcal{N}}_{n,k}$ with the property that ${\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma))$ for any $G_n$-face $\sigma \in {\mathcal{F}}_{n,k}$. The construction of $\Psi$ will be recursive in the parameter $n$. If $n = 1$ and $k = 1$, the relation ${\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma))$ determines the bijection $\Psi$ uniquely.
Explicitly, the map $\Psi: {\mathcal{F}}_{1,1} \rightarrow {\mathcal{M}}_{1,1}$ is defined by \begin{equation} \Psi: (1^c) \mapsto x_1^{r-c-1}, \end{equation} for any color $0 \leq c \leq r-1$. If $n = 1$ and $k = 0$ then ${\mathcal{F}}_{1,0}$ consists of the sole face $(1)$. On the other hand, the collection ${\mathcal{M}}_{1,0}$ of nonskip monomials consists of the sole monomial $1$. We are forced to define \begin{equation} \Psi: (1) \mapsto 1. \end{equation} The combinatorial recursion on which $\Psi$ is based is as follows. Let $\sigma = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n,k}$ be a $G_n$-face of dimension $k$, so that $\ell = k+1$ or $\ell = k$ according to whether $\sigma$ has a zero block. Suppose we wish to build a larger face by inserting $n+1$ into $\sigma$. There are three ways in which this can be done. \begin{enumerate} \item We could perform a {\em star insertion} by inserting $n+1$ into one of the nonzero blocks $B_{\ell - j}$ of $\sigma$ for $0 \leq j \leq k-1$, also assigning a color $c$ to $n+1$. The resulting $G_n$-face would be $(B_1 \mid \cdots \mid B_{\ell - j} \cup \{(n+1)^c\} \mid \cdots \mid B_{\ell})$. This leaves the dimension $k$ unchanged and increases ${\mathrm {coinv}}$ by $r \cdot (k - j - 1) + (r - c - 1)$. For example, if $r = 2$ and $\sigma = (3 \mid 2^1 4^0 \mid 1^1) \in {\mathcal{F}}_{4,2}$, the possible star insertions of $5$ and their effects on ${\mathrm {coinv}}$ are \begin{center} $\begin{array}{cccc} (3 \mid 2^1 4^0 5^1 \mid 1^1) & (3 \mid 2^1 4^0 5^0 \mid 1^1 ) & (3 \mid 2^1 4^0 \mid 1^1 5^1) & (3 \mid 2^1 4^0 \mid 1^1 5^0) \\ {\mathrm {coinv}} + 0 & {\mathrm {coinv}} + 1 & {\mathrm {coinv}} + 2 & {\mathrm {coinv}} + 3. \end{array}$ \end{center} \item We could perform a {\em zero insertion} by inserting $n+1$ into the zero block of $\sigma$ (or by creating a new zero block whose sole element is $n+1$). This leaves the dimension $k$ unchanged and increases ${\mathrm {coinv}}$ by $kr$.
For example, if $r = 2$ and $\sigma = (3 \mid 2^1 4^0 \mid 1^1) \in {\mathcal{F}}_{4,2}$, the zero insertion of $5$ would yield $(35 \mid 2^1 4^0 \mid 1^1)$, adding $4$ to ${\mathrm {coinv}}$. \item We could perform a {\em bar insertion} by inserting $n+1$ into a new singleton nonzero block of $\sigma$ just after the block $B_{\ell - j}$ for some $0 \leq j \leq k$, also assigning a color $c$ to $n+1$. The resulting $G_n$-face would be $(B_1 \mid \cdots \mid B_{\ell - j} \mid (n+1)^c \mid B_{\ell - j + 1} \mid \cdots \mid B_{\ell})$. This increases the dimension $k$ by one and increases ${\mathrm {coinv}}$ by $r \cdot (n-k) + r \cdot (k-j) + (r-c-1)$. For example, if $r = 2$ and $\sigma = (3 \mid 2^1 4^0 \mid 1^1) \in {\mathcal{F}}_{4,2}$, the possible bar insertions of $5$ and their effects on ${\mathrm {coinv}}$ are \begin{center} $\begin{array}{ccc} (3 \mid 5^1 \mid 2^1 4^0 \mid 1^1) & (3 \mid 5^0 \mid 2^1 4^0 \mid 1^1) & (3 \mid 2^1 4^0 \mid 5^1 \mid 1^1) \\ {\mathrm {coinv}} + 4 & {\mathrm {coinv}} + 5 & {\mathrm {coinv}} + 6 \\ \\ (3 \mid 2^1 4^0 \mid 5^0 \mid 1^1) & (3 \mid 2^1 4^0 \mid 1^1 \mid 5^1) & (3 \mid 2^1 4^0 \mid 1^1 \mid 5^0) \\ {\mathrm {coinv}} + 7 & {\mathrm {coinv}} + 8 & {\mathrm {coinv}} + 9. \end{array}$ \end{center} \end{enumerate} The names of these three kinds of insertions come from our combinatorial models for $G_n$-faces; a star insertion adds a star to the star model of $\sigma$, a zero insertion adds an element to the zero block of $\sigma$, and a bar insertion adds a bar to the bar model of $\sigma$. Let $\sigma = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n,k}$ be a $G_n$-face of dimension $k$ and let $\overline{\sigma}$ be the $G_{n-1}$-face obtained by deleting $n$ from $\sigma$. Then $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$ if $\sigma$ arises from $\overline{\sigma}$ by a star or zero insertion and $\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$ if $\sigma$ arises from $\overline{\sigma}$ by a bar insertion.
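As a sanity check, the ${\mathrm {coinv}}$ increments of all possible insertions can be enumerated directly. The following Python sketch (the bare-parameter encoding is an assumption for illustration) reproduces the increments shown in the displays above for $r = 2$ and $\sigma = (3 \mid 2^1 4^0 \mid 1^1)$.

```python
r, n, k = 2, 4, 2   # inserting the letter 5 into sigma = (3 | 2^1 4^0 | 1^1)

# star insertion into the nonzero block B_{l-j}, 0 <= j <= k-1, with color c
star = sorted(r * (k - j - 1) + (r - c - 1)
              for j in range(k) for c in range(r))

# zero insertion into the zero block
zero = k * r

# bar insertion of a new singleton block just after B_{l-j}, 0 <= j <= k
bar = sorted(r * (n - k) + r * (k - j) + (r - c - 1)
             for j in range(k + 1) for c in range(r))

print(star, zero, bar)  # [0, 1, 2, 3] 4 [4, 5, 6, 7, 8, 9]
```

Note that the star and bar increments together with the single zero increment exhaust the degrees $0$ through $kr + r(n-k) + r - 1$, as the bijection $\Psi$ requires.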
Assume inductively that the monomial $\Psi(\overline{\sigma})$ has been defined, and that this monomial lies in ${\mathcal{M}}_{n-1,k}$ or ${\mathcal{M}}_{n-1,k-1}$ according to whether $\overline{\sigma}$ lies in ${\mathcal{F}}_{n-1,k}$ or ${\mathcal{F}}_{n-1,k-1}$. We define $\Psi(\sigma)$ by the rule \begin{equation} \Psi(\sigma) := \begin{cases} \Psi(\overline{\sigma}) \cdot x_n^{r \cdot (k-j-1) + (r-c-1)} & \text{if $n^c \in B_{\ell - j}$ and $B_{\ell - j}$ is a nonzero nonsingleton,} \\ \Psi(\overline{\sigma}) \cdot x_n^{kr} & \text{if $n$ lies in the zero block of $\sigma$,} \\ \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r \cdot x_n^{r \cdot (k-j-1) + (r-c-1)} & \text{if $B_{\ell - j} = \{n^c\}$ is a nonzero singleton,} \end{cases} \end{equation} where in the third branch $S \subseteq [n-1]$ is the unique subset of size $|S| = n-k$ guaranteed by Lemma~\ref{skip-monomial-multiply} applied to $m = \Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k-1}$. \begin{example} Let $(n,k,r) = (8,3,3)$ and consider the face $\sigma = (2 5 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2) \in {\mathcal{F}}_{8,3}$. In order to calculate $\Psi(\sigma) \in {\mathcal{M}}_{8,3}$, we refer to the following table. Here `type' refers to the type of insertion (star, zero, or bar) of $n$ at each stage. 
\begin{center} \begin{tabular}{l | l | l | l | l | l} $\sigma$ & $n$ & $k$ & type & $S$ & $\Psi(\sigma)$ \\ \hline $(1^0)$ & $1$ & $1$ & & & $x_1^2$ \\ $(2 \mid 1^0)$ & $2$ & $1$ & zero & & $x_1^2 x_2^3$ \\ $(2 \mid 1^0 \mid 3^2)$ & $3$ & $2$ & bar & $2$ & $x_1^2 x_2^3 \cdot {\mathbf {m}}(2)^3 \cdot x_3^3 = x_1^2 x_2^6 x_3^3$ \\ $(2 \mid 1^0 \mid 3^2 4^2)$ & $4$ & $2$ & star & & $x_1^2 x_2^6 x_3^3 x_4^3$ \\ $(25 \mid 1^0 \mid 3^2 4^2)$ & $5$ & $2$ & zero & & $x_1^2 x_2^6 x_3^3 x_4^3 x_5^6$ \\ $(25 \mid 1^0 \mid 6^1 \mid 3^2 4^2)$ & $6$ & $3$ & bar & $235$ & $x_1^2 x_2^6 x_3^3 x_4^3 x_5^6 \cdot {\mathbf {m}}(235)^3 \cdot x_6^4 = x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4$ \\ $(25 \mid 1^0 7^0 \mid 6^1 \mid 3^2 4^2)$ & $7$ & $3$ & star & & $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2$ \\ $(25 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2)$ & $8$ & $3$ & star & & $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1$ \\ \end{tabular} \end{center} We conclude that \begin{equation*} \Psi(\sigma) = \Psi(2 5 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2) = x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1 \in {\mathcal{M}}_{8,3}. \end{equation*} Observe that the zero block of $\sigma$ is $\{2,5\}$, and that $x_2$ and $x_5$ are the variables in $\Psi(\sigma)$ with exponent $k r = 3 \cdot 3 = 9$. \end{example} The next result is the extension of \cite[Thm. 4.9]{HRS} to $r \geq 2$. The proof has the same basic structure, but one must account for the presence of zero blocks. \begin{proposition} \label{psi-is-bijection} The map $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ is a bijection which restricts to a bijection ${\mathcal{OP}}_{n,k} \rightarrow {\mathcal{N}}_{n,k}$. Moreover, for any $\sigma \in {\mathcal{F}}_{n,k}$ we have \begin{equation} {\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma)). \end{equation} Finally, if $\sigma \in {\mathcal{F}}_{n,k}$ has a zero block $Z$, then \begin{equation} Z = \{1 \leq i \leq n \,:\, \text{the exponent of $x_i$ in $\Psi(\sigma)$ is $kr$} \}.
\end{equation} \end{proposition} \begin{proof} We need to show that $\Psi$ is a well-defined function ${\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$. To do this, we induct on $n$ (with the base case $n = 1$ being clear). Let $\sigma = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n,k}$ and let $\overline{\sigma}$ be the $G_{n-1}$-face obtained by removing $n$ from $\sigma$. Then $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$ (if the insertion type of $n$ was star or zero) or $\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$ (if the insertion type of $n$ was bar). We inductively assume that $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k}$ or $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k-1}$ accordingly. Suppose first that the insertion type of $n$ was star or zero, so that $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k}$. Then we have \begin{equation} \Psi(\sigma) = \begin{cases} \Psi(\overline{\sigma}) \cdot x_n^{r \cdot (k-j-1) + (r-c-1)} & \text{if $n^c \in B_{\ell - j}$ and $B_{\ell - j}$ is a nonzero nonsingleton,} \\ \Psi(\overline{\sigma}) \cdot x_n^{kr} & \text{if $n$ lies in the zero block of $\sigma$.} \end{cases} \end{equation} By induction and the inequalities $0 \leq j \leq k-1$ and $0 \leq c \leq r-1$, we know that none of the variable powers $x_1^{kr+1}, \dots, x_n^{kr+1}$ divide $\Psi(\sigma)$. Let $S \subseteq [n]$ be a subset of size $|S| = n-k+1$. Since $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k}$, we know that ${\mathbf {x}}(S - \{\max(S)\})^r \nmid \Psi(\overline{\sigma})$. This implies that ${\mathbf {x}}(S)^r \nmid \Psi(\sigma)$. We conclude that $\Psi(\sigma) \in {\mathcal{M}}_{n,k}$. Now suppose that the insertion type of $n$ was bar, so that $\Psi(\overline{\sigma}) \in {\mathcal{M}}_{n-1,k-1}$.
We have \begin{equation} \Psi(\sigma) = \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r \cdot x_n^{r \cdot (k-j-1) + (r - c- 1)}, \end{equation} where $B_{\ell - j} = \{n^c\}$ and $S \subseteq [n-1]$ is the unique subset of size $|S| = n-k$ guaranteed by Lemma~\ref{skip-monomial-multiply} applied to the monomial $m = \Psi(\overline{\sigma})$. Since none of the variable powers $x_1^{(k-1)\cdot r + 1}, \dots, x_{n-1}^{(k-1) \cdot r + 1}$ divide $\Psi(\overline{\sigma})$, we conclude that none of the variable powers $x_1^{kr+1}, \dots, x_n^{kr+1}$ divide $\Psi(\sigma)$. Let $T \subseteq [n]$ satisfy $|T| = n-k+1$. If $n \notin T$, Lemma~\ref{skip-monomial-multiply} and induction guarantee that ${\mathbf {x}}(T)^r \nmid \Psi(\sigma)$. If $n \in T$, then the power of $x_n$ in the monomial ${\mathbf {x}}(T)^r$ is $kr$, so that ${\mathbf {x}}(T)^r \nmid \Psi(\sigma)$. We conclude that $\Psi(\sigma) \in {\mathcal{M}}_{n,k}$. This finishes the proof that $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ is well-defined. The relationship ${\mathrm {coinv}}(\sigma) = \deg(\Psi(\sigma))$ is clear from the inductive definition of $\Psi$ and the previously described effect of insertion on the ${\mathrm {coinv}}$ statistic. Let $\sigma \in {\mathcal{F}}_{n,k}$ be a $G_n$-face with zero block $Z$ (where $Z$ could be empty). We aim to show that $Z = \{ 1 \leq i \leq n \,:\, \text{the exponent of $x_i$ in $\Psi(\sigma)$ is $kr$} \}$. To do this, we proceed by induction on $n$ (the case $n = 1$ being clear). As before, let $\overline{\sigma}$ be the face obtained by erasing $n$ from $\sigma$ and let $\overline{Z}$ be the zero block of $\overline{\sigma}$.
We inductively assume that \begin{equation} \overline{Z} = \begin{cases} \{1 \leq i \leq n-1 \,:\, \text{the exponent of $x_i$ in $\Psi(\overline{\sigma})$ is $kr$} \} & \text{if $\overline{\sigma} \in {\mathcal{F}}_{n-1, k}$}, \\ \{1 \leq i \leq n-1 \,:\, \text{the exponent of $x_i$ in $\Psi(\overline{\sigma})$ is $(k-1) \cdot r$} \} & \text{if $\overline{\sigma} \in {\mathcal{F}}_{n-1, k-1}$}. \end{cases} \end{equation} Suppose first that $\sigma$ was obtained from $\overline{\sigma}$ by a star insertion, so that $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$ and $Z = \overline{Z}$. Since the exponent of $x_n$ in $\Psi(\sigma)$ is $< kr$, the desired equality of sets holds in this case. Next, suppose that $\sigma$ was obtained from $\overline{\sigma}$ by a zero insertion, so that $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$ and $Z = \overline{Z} \cup \{n\}$. Since the exponent of $x_n$ in $\Psi(\sigma)$ is $kr$, the desired equality of sets holds in this case. Finally, suppose that $\sigma$ was obtained from $\overline{\sigma}$ by a bar insertion, so that $\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$ and $Z = \overline{Z}$. Since the exponent of $x_n$ in $\Psi(\sigma)$ is $< kr$, by induction we need only argue that $Z \subseteq S$, where $S \subseteq [n-1]$ is the unique subset of size $|S| = n-k$ guaranteed by Lemma~\ref{skip-monomial-multiply} applied to the monomial $m = \Psi(\overline{\sigma})$. If the containment $Z \subseteq S$ failed to hold, let $z \in Z - S$ be arbitrary. By induction, the exponent of $x_z$ in $\Psi(\overline{\sigma})$ is $(k-1) \cdot r$. Also, we have the divisibility ${\mathbf {x}}(S)^r \mid \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r$. Since $z \leq n-1$, we have the divisibility ${\mathbf {x}}(S \cup \{z\})^r \mid {\mathbf {x}}(S)^r \cdot x_z^{(k-1) \cdot r}$, so that ${\mathbf {x}}(S \cup \{z\})^r \mid \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r$, which contradicts Lemma~\ref{skip-monomial-multiply}.
We conclude that $Z \subseteq S$. This proves the last sentence of the proposition. We now turn our attention to proving that $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ is a bijection. To do so, we will construct its inverse $\Phi: {\mathcal{M}}_{n,k} \rightarrow {\mathcal{F}}_{n,k}$. The map $\Phi$ will be defined by reversing the recursion used to define $\Psi$. When $(n,k) = (1,0)$, there is only one choice for $\Phi$; we must define $\Phi: {\mathcal{M}}_{1,0} \rightarrow {\mathcal{F}}_{1,0}$ by \begin{equation} \Phi: 1 \mapsto (1). \end{equation} When $(n,k) = (1,1)$, since $\Phi$ is supposed to invert the function $\Psi$, we are forced to define $\Phi: {\mathcal{M}}_{1,1} \rightarrow {\mathcal{F}}_{1,1}$ by \begin{equation} \Phi: x_1^c \mapsto (1^{r-c-1}), \end{equation} for $0 \leq c \leq r-1$. In general, fix $k \leq n$ and assume inductively that the functions \begin{equation*} \begin{cases} \Phi: {\mathcal{M}}_{n-1,k} \rightarrow {\mathcal{F}}_{n-1,k}, \\ \Phi: {\mathcal{M}}_{n-1,k-1} \rightarrow {\mathcal{F}}_{n-1,k-1} \end{cases} \end{equation*} have already been defined. We aim to define the function $\Phi: {\mathcal{M}}_{n,k} \rightarrow {\mathcal{F}}_{n,k}$. To this end, let $m = x_1^{a_1} \cdots x_{n-1}^{a_{n-1}} x_n^{a_n} \in {\mathcal{M}}_{n,k}$ be a monomial. Define a new monomial $m' := x_1^{a_1} \cdots x_{n-1}^{a_{n-1}}$ by setting $x_n = 1$ in $m$. Either $m' \in {\mathcal{M}}_{n-1,k}$ or $m' \notin {\mathcal{M}}_{n-1,k}$. If $m' \in {\mathcal{M}}_{n-1,k}$, then $\Phi(m') = (B_1 \mid \cdots \mid B_{\ell}) \in {\mathcal{F}}_{n-1,k}$ is a previously defined $G_{n-1}$-face. Our definition of $\Phi(m)$ depends on the exponent $a_n$ of $x_n$ in $m$. \begin{itemize} \item If $m' \in {\mathcal{M}}_{n-1,k}$ and $a_n < kr$, write $a_n = j \cdot r + (r-c-1)$ for a nonnegative integer $j$ and $0 \leq c \leq r-1$.
Let $\Phi(m)$ be obtained from $\Phi(m')$ by star inserting $n^c$ into the $j^{th}$ nonzero block of $\Phi(m')$, counted from the left starting at $j = 0$. \item If $m' \in {\mathcal{M}}_{n-1,k}$ and $a_n = kr$, let $\Phi(m)$ be obtained from $\Phi(m')$ by adding $n$ to the zero block of $\Phi(m')$ (creating a zero block if necessary). \end{itemize} If $m' \notin {\mathcal{M}}_{n-1,k}$, there exists a subset $S \subseteq [n-1]$ such that $|S| = n-k$ and ${\mathbf {x}}(S)^r \mid m'$. Lemma~\ref{skip-monomial-unique} guarantees that the set $S$ is unique. {\bf Claim:} We have $\frac{m'}{{\mathbf {m}}(S)^r} \in {\mathcal{M}}_{n-1,k-1}$. Since $m \in {\mathcal{M}}_{n,k}$, we know that ${\mathbf {x}}(T)^r \nmid \frac{m'}{{\mathbf {m}}(S)^r}$ for all $T \subseteq [n-1]$ with $|T| = n-k+1$. Let $1 \leq j \leq n-1$. We need to show $x_j^{(k-1) \cdot r + 1} \nmid \frac{m'}{{\mathbf {m}}(S)^r}$. If $j \in S$ this is immediate from the fact that $x_j^{kr + 1} \nmid m'$. If $j \notin S$ and $x_j^{(k-1) \cdot r + 1} \mid \frac{m'}{{\mathbf {m}}(S)^r}$, then $x_j^{(k-1) \cdot r + 1} \mid m'$ and ${\mathbf {x}}(S \cup \{j\})^r \mid m'$, a contradiction to the assumption $m = m' \cdot x_n^{a_n} \in {\mathcal{M}}_{n,k}$. This finishes the proof of the Claim. By the Claim, we recursively have a $G_{n-1}$-face $\Phi \left( \frac{m'}{{\mathbf {m}}(S)^r} \right) \in {\mathcal{F}}_{n-1,k-1}$. Moreover, we have $a_n < kr$ (because otherwise ${\mathbf {x}}(S \cup \{n\})^r \mid m$, contradicting $m \in {\mathcal{M}}_{n,k}$). Write $a_n = j \cdot r + (r-c-1)$ for some nonnegative integer $j$ and $0 \leq c \leq r-1$. Form $\Phi(m)$ from $\Phi \left( \frac{m'}{{\mathbf {m}}(S)^r} \right)$ by bar inserting the singleton block $\{n^c\}$ to the left of the $j^{th}$ nonzero block of $\Phi \left( \frac{m'}{{\mathbf {m}}(S)^r} \right)$, again counted from the left starting at $j = 0$ (when $j$ equals the number of nonzero blocks, the singleton is inserted at the far right). For an example of the map $\Phi$, let $(n,k,r) = (8,3,3)$ and let $m = x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1 \in {\mathcal{M}}_{8,3}$. The following table computes $\Phi(m) = (25 \mid 1^0 7^0 8^1 \mid 6^1 \mid 3^2 4^2)$.
Throughout this calculation, the nonzero blocks will successively become frozen (i.e., written in bold). \begin{small} \begin{center} \begin{tabular}{l | l | l | l | l | l | l | l} $m$ & $m'$ & $(n,k)$ & type & $S$ & $\frac{m'}{{\mathbf {m}}(S)^r}$ & $(j,c)$ & $\Phi(m)$ \\ \hline $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2 x_8^1$ & $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2$ & $(8,3)$ & star & & & $(0,1)$ & $(8^1 \mid \cdot \mid \cdot)$ \\ $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4 x_7^2$ & $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4$ & $(7,3)$ & star & & & $(0,0)$ & $(7^0 8^1 \mid \cdot \mid \cdot)$ \\ $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9 x_6^4$ & $x_1^2 x_2^9 x_3^6 x_4^3 x_5^9$ & $(6,3)$ & bar & $235$ & $x_1^2 x_2^6 x_3^3 x_4^3 x_5^6$ & $(1,1)$ & $(7^0 8^1 \mid {\bf 6^1} \mid \cdot)$ \\ $x_1^2 x_2^6 x_3^3 x_4^3 x_5^6$ & $x_1^2 x_2^6 x_3^3 x_4^3$ & $(5,2)$ & zero & & & & $(5 \mid 7^0 8^1 \mid {\bf 6^1} \mid \cdot)$ \\ $x_1^2 x_2^6 x_3^3 x_4^3$ & $x_1^2 x_2^6 x_3^3$ & $(4,2)$ & star & & & $(1,2)$ & $(5 \mid 7^0 8^1 \mid {\bf 6^1} \mid 4^2 )$ \\ $x_1^2 x_2^6 x_3^3$ & $x_1^2 x_2^6$ & $(3,2)$ & bar & 2 & $x_1^2 x_2^3$ & $(1,2)$ & $(5 \mid 7^0 8^1 \mid {\bf 6^1} \mid {\bf 3^2 4^2} )$ \\ $x_1^2 x_2^3$ & $x_1^2$ & $(2,1)$ & zero & & & & $(25 \mid 7^0 8^1 \mid {\bf 6^1} \mid {\bf 3^2 4^2})$ \\ $x_1^2$ & 1 & $(1,1)$ & bar & $\varnothing$ & 1 & $(0,0)$ & $(25 \mid {\bf 1^0 7^0 8^1} \mid {\bf 6^1} \mid {\bf 3^2 4^2})$ \end{tabular} \end{center} \end{small} To proceed from one row of the table to the next, we use the following procedure. \begin{itemize} \item Define $m$ to be the monomial $m'$ from the above row (if the insertion type in the above row was star or zero) or the monomial $\frac{m'}{{\mathbf {m}}(S)^r}$ from the above row (if the insertion type in the above row was bar). 
\item Define $(n,k)$ in the current row to be $(n-1,k)$ from the above row (if the insertion type in the above row was star or zero) or $(n-1,k-1)$ from the above row (if the insertion type in the above row was bar). \item Using the $(n,k)$ in the current row, define $m'$ from $m$ using the relation $m = m' \cdot x_n^{a_n}$. \item If $a_n = kr$, define the insertion type of the current row to be zero, let $\Phi(m)$ be obtained from the above row by adjoining $n$ to its zero block (creating a new zero block if necessary), and move on to the next row. \item If $a_n < kr$, define $(j,c)$ by the relation $a_n = j \cdot r + (r-c-1)$, where $j$ is nonnegative and $0 \leq c \leq r-1$. \item If $a_n < kr$ and $m' \in {\mathcal{M}}_{n-1,k}$, define the insertion type of the current row to be star. Let $\Phi(m)$ be obtained from the above row by inserting $n^c$ into the $j^{th}$ nonzero nonfrozen block from the left (counted from $j = 0$), and move on to the next row. \item If $a_n < kr$ and $m' \notin {\mathcal{M}}_{n-1,k}$, define the insertion type of the current row to be bar. Let $S \subseteq [n-1]$ be the set defined by Lemma~\ref{skip-monomial-unique} as above. Calculate $\frac{m'}{{\mathbf {m}}(S)^r}$. Let $\Phi(m)$ be obtained from the above row by inserting $n^c$ into the $j^{th}$ nonzero nonfrozen block from the left (counted from $j = 0$) and freezing that block. Move on to the next row. \end{itemize} We leave it to the reader to check that the procedure defined above reverses the recursive definition of $\Psi$, so that $\Phi$ and $\Psi$ are mutually inverse maps. The fact that $\Psi$ restricts to give a bijection ${\mathcal{OP}}_{n,k} \rightarrow {\mathcal{N}}_{n,k}$ follows from the assertion about zero blocks. \end{proof} We are ready to identify the standard monomial bases of our quotient rings $R_{n,k}$ and $S_{n,k}$. The proof of the following result is analogous to the proof of \cite[Thm. 4.10]{HRS}.
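Before turning to the standard monomial bases, the recursion defining $\Psi$ can be sketched computationally; the sketch below reproduces the worked example. It assumes the HRS-style skip-monomial convention ${\mathbf {x}}(S) = \prod_i x_{s_i}^{s_i - i + 1}$, a face encoding as (zero block, list of nonzero blocks), and brute-force search for the set $S$ of Lemma~\ref{skip-monomial-multiply}; these encodings and names are illustrative assumptions, not the paper's notation.

```python
from itertools import combinations

r = 3  # number of colors in the worked example above

def skip_divides(S, expo):
    # Assumed convention: x(S) = prod_i x_{s_i}^{s_i - i + 1}, S = {s_1 < ... < s_m}.
    return all(r * (s - i) <= expo.get(s, 0) for i, s in enumerate(sorted(S)))

def find_S(expo, n, k):
    # Brute-force search for the unique S of the lemma: |S| = n - k,
    # x(S)^r | m(S)^r * m, and no x(U)^r | m(S)^r * m for |U| = n - k + 1.
    for S in combinations(range(1, n + 1), n - k):
        bumped = dict(expo)
        for s in S:
            bumped[s] = bumped.get(s, 0) + r
        if skip_divides(S, bumped) and not any(
                skip_divides(U, bumped)
                for U in combinations(range(1, n + 1), n - k + 1)):
            return S
    raise ValueError("no valid S found")

def psi(zero, blocks):
    # A face is (zero block as a set, nonzero blocks left to right, each a
    # list of (letter, color) pairs).  Returns the exponent dict of Psi(face).
    n = len(zero) + sum(len(b) for b in blocks)
    k = len(blocks)
    if n == 0:
        return {}
    if n in zero:                       # n arose by a zero insertion
        expo = psi(zero - {n}, blocks)
        expo[n] = k * r
        return expo
    b = next(i for i, blk in enumerate(blocks) if any(l == n for l, _ in blk))
    c = next(col for l, col in blocks[b] if l == n)
    j = k - 1 - b                       # n lies in the block B_{l-j}
    if len(blocks[b]) > 1:              # star insertion
        smaller = [blk if i != b else [p for p in blk if p[0] != n]
                   for i, blk in enumerate(blocks)]
        expo = psi(zero, smaller)
    else:                               # bar insertion: also multiply by m(S)^r
        expo = psi(zero, [blk for i, blk in enumerate(blocks) if i != b])
        for s in find_S(expo, n - 1, k - 1):
            expo[s] = expo.get(s, 0) + r
    expo[n] = r * (k - j - 1) + (r - c - 1)
    return expo

# sigma = (25 | 1^0 7^0 8^1 | 6^1 | 3^2 4^2) from the worked example
sigma = ({2, 5}, [[(1, 0), (7, 0), (8, 1)], [(6, 1)], [(3, 2), (4, 2)]])
print(psi(*sigma))
```

Running this recovers the exponent vector $(2,9,6,3,9,4,2,1)$ of $\Psi(\sigma)$ from the table, and the brute-force search recovers the sets $S = \{2\}$ and $S = \{2,3,5\}$ appearing there.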
\begin{theorem} \label{m-is-basis} Let $n \geq k$ be positive integers and endow monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ with the lexicographic term order $<$. \begin{itemize} \item The collection ${\mathcal{M}}_{n,k}$ of $(n,k)$-nonskip monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ is the standard monomial basis of $R_{n,k}$. \item The collection ${\mathcal{N}}_{n,k}$ of strongly $(n,k)$-nonskip monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ is the standard monomial basis of $S_{n,k}$. \end{itemize} \end{theorem} \begin{proof} Let us begin with the case of $R_{n,k}$. Recall the point set $Y_{n,k} \subseteq {\mathbb {C}}^n$. Let ${\mathcal{B}}_{n,k}$ be the standard monomial basis of the quotient ring ${\mathbb {C}}[{\mathbf {x}}_n] / {\mathbf {T}}(Y_{n,k})$. Since $\dim({\mathbb {C}}[{\mathbf {x}}_n] / {\mathbf {T}}(Y_{n,k})) = | Y_{n,k} | = | {\mathcal{F}}_{n,k} |$, we have \begin{equation} |{\mathcal{B}}_{n,k}| = | {\mathcal{F}}_{n,k} |. \end{equation} On the other hand, Lemma~\ref{i-contained-in-t} says that $I_{n,k} \subseteq {\mathbf {T}}(Y_{n,k})$. This leads to the containment of initial ideals \begin{equation} {\mathrm {in}}_<(I_{n,k}) \subseteq {\mathrm {in}}_<({\mathbf {T}}(Y_{n,k})). \end{equation} If ${\mathcal{C}}_{n,k}$ is the standard monomial basis for $R_{n,k} = {\mathbb {C}}[{\mathbf {x}}_n] / I_{n,k}$, this implies \begin{equation} {\mathcal{B}}_{n,k} \subseteq {\mathcal{C}}_{n,k}. \end{equation} However, Lemma~\ref{skip-leading-terms} and the definition of $(n,k)$-nonskip monomials imply \begin{equation} {\mathcal{C}}_{n,k} \subseteq {\mathcal{M}}_{n,k}. \end{equation} Proposition~\ref{psi-is-bijection} shows that $|{\mathcal{M}}_{n,k}| = |{\mathcal{F}}_{n,k}|$. Since we already know ${\mathcal{B}}_{n,k} \subseteq {\mathcal{C}}_{n,k} \subseteq {\mathcal{M}}_{n,k}$ and $|{\mathcal{B}}_{n,k}| = |{\mathcal{F}}_{n,k}| = |{\mathcal{M}}_{n,k}|$, we conclude that \begin{equation} {\mathcal{B}}_{n,k} = {\mathcal{C}}_{n,k} = {\mathcal{M}}_{n,k}, \end{equation} which proves the first assertion of the theorem.
The case of $S_{n,k}$ is similar. An identical chain of reasoning, this time involving $Z_{n,k}$ instead of $Y_{n,k}$, shows that ${\mathcal{N}}_{n,k}$ contains the standard monomial basis for $S_{n,k}$. Proposition~\ref{psi-is-bijection} implies that both $|{\mathcal{N}}_{n,k}|$ and $\dim(S_{n,k})$ equal $|{\mathcal{OP}}_{n,k}|$, so this containment is an equality. \end{proof} Theorem~\ref{m-is-basis} makes it easy to compute the Hilbert series of $R_{n,k}$ and $S_{n,k}$. \begin{corollary} \label{hilbert-series-corollary} The graded vector spaces $R_{n,k}$ and $S_{n,k}$ have the following Hilbert series. \begin{align} {\mathrm {Hilb}}(R_{n,k}; q) &= \sum_{z = 0}^n {n \choose z} q^{krz} \cdot {\mathrm {rev}}_q( [r]_q^{n-z} \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n-z,k)) \\ &= \sum_{z = 0}^n {n \choose z} q^{krz} \cdot [r]_q^{n-z} \cdot [k]!_{q^r} \cdot {\mathrm {rev}}_q({\mathrm {Stir}}_{q^r}(n-z,k)). \\ {\mathrm {Hilb}}(S_{n,k}; q) &= {\mathrm {rev}}_q ([r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n,k) ) \\ &= [r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {rev}}_q ({\mathrm {Stir}}_{q^r}(n,k)). \end{align} \end{corollary} \begin{proof} By Theorem~\ref{m-is-basis} and Proposition~\ref{psi-is-bijection}, we have \begin{align} {\mathrm {Hilb}}(R_{n,k}; q) &= \sum_{\sigma \in {\mathcal{F}}_{n,k}} q^{{\mathrm {coinv}}(\sigma)}, \\ {\mathrm {Hilb}}(S_{n,k}; q) &= \sum_{\sigma \in {\mathcal{OP}}_{n,k}} q^{{\mathrm {coinv}}(\sigma)}, \end{align} so that the proof of the corollary reduces to calculating the generating function of ${\mathrm {coinv}}$ on ${\mathcal{F}}_{n,k}$ and ${\mathcal{OP}}_{n,k}$. It follows from the work of Steingr\'imsson \cite{Stein} that the generating function of ${\mathrm {coinv}}$ on ${\mathcal{OP}}_{n,k}$ is \begin{equation} \sum_{\sigma \in {\mathcal{OP}}_{n,k}} q^{{\mathrm {coinv}}(\sigma)} = {\mathrm {rev}}_q ([r]_q^n \cdot [k]!_{q^r} \cdot {\mathrm {Stir}}_{q^r}(n,k)), \end{equation} proving the desired expression for ${\mathrm {Hilb}}(S_{n,k}; q)$.
For the derivation of ${\mathrm {Hilb}}(R_{n,k}; q)$, simply note that a zero block $Z$ of a $G_n$-face $\sigma \in {\mathcal{F}}_{n,k}$ contributes $kr \cdot |Z|$ to ${\mathrm {coinv}}(\sigma)$. \end{proof} The proof of Theorem~\ref{m-is-basis} also gives the {\em ungraded} isomorphism type of the $G_n$-modules $R_{n,k}$ and $S_{n,k}$. \begin{corollary} \label{ungraded-isomorphism-type} As {\em ungraded} $G_n$-modules we have $R_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}]$ and $S_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}]$. \end{corollary} \begin{proof} We have the following isomorphisms of ungraded $G_n$-modules: \begin{equation} {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Y_{n,k}) \cong {\mathbb {C}}[{\mathbf {x}}_n]/I_{n,k} \cong {\mathbb {C}}[{\mathcal{F}}_{n,k}] \end{equation} and \begin{equation} {\mathbb {C}}[{\mathbf {x}}_n]/{\mathbf {T}}(Z_{n,k}) \cong {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k}]. \end{equation} The proof of Theorem~\ref{m-is-basis} shows that ${\mathbf {T}}(Y_{n,k}) = I_{n,k}$ and ${\mathbf {T}}(Z_{n,k}) = J_{n,k}$. \end{proof} Theorem~\ref{m-is-basis} identifies the standard monomial bases ${\mathcal{M}}_{n,k}$ and ${\mathcal{N}}_{n,k}$ for the quotient rings $R_{n,k}$ and $S_{n,k}$ with respect to the lexicographic term order. However, checking whether a monomial $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ is (strongly) $(n,k)$-nonskip involves checking whether ${\mathbf {x}}(S)^r \mid m$ for all possible subsets $S \subseteq [n]$ with $|S| = n-k+1$. The next result gives a more direct characterization of the monomials of ${\mathcal{M}}_{n,k}$ and ${\mathcal{N}}_{n,k}$. A {\em shuffle} of a pair of sequences $(a_1, \dots, a_p)$ and $(b_1, \dots, b_q)$ is an interleaving $(c_1, \dots, c_{p+q})$ of these sequences which preserves the relative order of the $a$'s and $b$'s. The following result is an extension of \cite[Thm. 4.13]{HRS} to $r \geq 2$.
\begin{theorem} \label{artin-basis} We have \begin{equation} {\mathcal{M}}_{n,k} = \left\{ x_1^{a_1} \cdots x_n^{a_n} \,:\, \begin{array}{c} \text{$(a_1, \dots, a_n)$ is componentwise $\leq$ some shuffle of} \\ \text{$(r-1, 2r-1, \dots, kr-1)$ and $(kr, \dots, kr)$} \end{array} \right\}, \end{equation} where there are $n-k$ copies of $kr$. Moreover, we have \begin{equation} {\mathcal{N}}_{n,k} = \left\{ x_1^{a_1} \cdots x_n^{a_n} \,:\, \begin{array}{c} \text{$(a_1, \dots, a_n)$ is componentwise $\leq$ some shuffle of} \\ \text{$(r-1, 2r-1, \dots, kr-1)$ and $(kr-1, \dots, kr-1)$} \end{array} \right\}, \end{equation} where there are $n-k$ copies of $kr-1$. \end{theorem} \begin{proof} Let ${\mathcal{A}}_{n,k}$ and ${\mathcal{B}}_{n,k}$ denote the sets of monomials on the right-hand sides of the first and second asserted equalities, respectively. A direct check shows that any shuffle of $(r-1, 2r-1, \dots, kr-1)$ and $(kr, \dots, kr)$ is $(n,k)$-nonskip and that any shuffle of $(r-1, 2r-1, \dots, kr-1)$ and $(kr-1, \dots, kr-1)$ is $(n,k)$-strongly nonskip. This implies that ${\mathcal{A}}_{n,k} \subseteq {\mathcal{M}}_{n,k}$ and ${\mathcal{B}}_{n,k} \subseteq {\mathcal{N}}_{n,k}$. To verify the reverse containment, consider the bijection $\Psi: {\mathcal{F}}_{n,k} \rightarrow {\mathcal{M}}_{n,k}$ of Proposition~\ref{psi-is-bijection}. We argue that $\Psi({\mathcal{F}}_{n,k}) \subseteq {\mathcal{A}}_{n,k}$. Let $\sigma \in {\mathcal{F}}_{n,k}$ be a $G_n$-face and let $\overline{\sigma}$ be the $G_{n-1}$-face obtained by removing $n$ from $\sigma$. {\bf Case 1:} {\em $n$ is not contained in a nonzero singleton block of $\sigma$.} In this case we have $\overline{\sigma} \in {\mathcal{F}}_{n-1,k}$. We inductively assume $\Psi(\overline{\sigma}) \in {\mathcal{A}}_{n-1,k}$.
This means that there is some shuffle $(a_1, \dots, a_{n-1})$ of the sequences $(r-1, 2r-1, \dots, kr-1)$ and $(kr, \dots, kr)$ such that $\Psi(\overline{\sigma}) \mid x_1^{a_1} \cdots x_{n-1}^{a_{n-1}}$ (where there are $n-k-1$ copies of $kr$). By the definition of $\Psi$ we have $\Psi(\sigma) \mid x_1^{a_1} \cdots x_{n-1}^{a_{n-1}} x_n^{kr}$, and $(a_1, \dots, a_{n-1}, kr)$ is a shuffle of $(r-1, 2r-1, \dots, kr-1)$ and $(kr, kr, \dots, kr)$, where there are $n-k$ copies of $kr$. We conclude that $\Psi(\sigma) \in {\mathcal{A}}_{n,k}$. {\bf Case 2:} {\em $n$ is contained in a nonzero singleton block of $\sigma$.} In this case we have $\overline{\sigma} \in {\mathcal{F}}_{n-1,k-1}$. We inductively assume $\Psi(\overline{\sigma}) \in {\mathcal{A}}_{n-1,k-1}$. We have $\Psi(\sigma) = \Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r \cdot x_n^i$ for some $0 \leq i \leq kr-1$, where $S \subseteq [n-1], |S| = n-k,$ and ${\mathbf {x}}(S)^r \mid (\Psi(\overline{\sigma}) \cdot {\mathbf {m}}(S)^r)$. Consider the shuffle $(a_1, \dots, a_n)$ of $(r-1, 2r-1, \dots, kr-1)$ and $(kr, kr, \dots, kr)$ determined by $a_j = kr$ if and only if $j \in S$. We claim $\Psi(\sigma) \mid x_1^{a_1} \cdots x_n^{a_n}$, so that $\Psi(\sigma) \in {\mathcal{A}}_{n,k}$. To see this, write $\Psi(\sigma) = x_1^{b_1} \cdots x_n^{b_n}$. Since $\Psi(\sigma) \in {\mathcal{M}}_{n,k}$ we know that $0 \leq b_j \leq kr$ for all $1 \leq j \leq n$. If $\Psi(\sigma) \nmid x_1^{a_1} \cdots x_n^{a_n}$, choose $1 \leq j \leq n$ with $a_j < b_j$; by the last sentence we know $j \notin S$. A direct check shows that ${\mathbf {x}}(S \cup \{j\})^r \mid \Psi(\sigma)$, which contradicts $\Psi(\sigma) \in {\mathcal{M}}_{n,k}$. We conclude that $\Psi(\sigma) \in {\mathcal{A}}_{n,k}$. This completes the proof that $\Psi({\mathcal{F}}_{n,k}) \subseteq {\mathcal{A}}_{n,k}$. To prove the second assertion of the theorem, one verifies $\Psi({\mathcal{OP}}_{n,k}) \subseteq {\mathcal{B}}_{n,k}$.
The argument follows a similar inductive pattern and is left to the reader. \end{proof} For example, consider the case $(n,k,r) = (5,3,2)$. The shuffles of $(1,3,5)$ and $(6,6)$ are the ten sequences \begin{center} $\begin{array}{ccccc} (1,3,5,6,6) & (1,3,6,5,6) & (1,6,3,5,6) & (6,1,3,5,6) & (1,3,6,6,5) \\ (1,6,3,6,5) & (6,1,3,6,5) & (1,6,6,3,5) & (6,1,6,3,5) & (6,6,1,3,5), \end{array}$ \end{center} so that the standard monomial basis ${\mathcal{M}}_{5,3}$ of $R_{5,3}$ with respect to the lexicographic term order consists of those monomials $x_1^{a_1} \cdots x_5^{a_5}$ whose exponent sequence $(a_1, \dots, a_5)$ is componentwise $\leq$ at least one of these ten sequences. On the other hand, the shuffles of $(1,3,5)$ and $(5,5)$ are the six sequences \begin{center} $\begin{array}{cccccc} (1,3,5,5,5) & (1,5,3,5,5) & (5,1,3,5,5) & (1,5,5,3,5) & (5,1,5,3,5) & (5,5,1,3,5), \end{array}$ \end{center} so that the standard monomial basis ${\mathcal{N}}_{5,3}$ of $S_{5,3}$ consists of those monomials $x_1^{a_1} \cdots x_5^{a_5}$ where $(a_1, \dots, a_5)$ is componentwise $\leq$ at least one of these six sequences. The next result gives the reduced Gr\"obner bases of the ideals $I_{n,k}$ and $J_{n,k}$. It is the extension of \cite[Thm. 4.14]{HRS} to $r \geq 2$. \begin{theorem} \label{groebner-basis} Endow monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ with the lexicographic term order. \begin{itemize} \item The variable powers $x_1^{kr+1}, \dots, x_n^{kr+1}$, together with the polynomials \begin{equation*} \overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})} \end{equation*} for $S \subseteq [n]$ with $|S| = n-k+1$, form a Gr\"obner basis for the ideal $I_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$. If $n > k > 0$, this Gr\"obner basis is reduced.
\item The variable powers $x_1^{kr}, \dots, x_n^{kr}$, together with the polynomials \begin{equation*} \overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})} \end{equation*} for $S \subseteq [n-1]$ with $|S| = n-k+1$, form a Gr\"obner basis for the ideal $J_{n,k} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$. If $n > k > 0$, this Gr\"obner basis is reduced. \end{itemize} \end{theorem} \begin{proof} By Lemma~\ref{demazures-in-ideal}, the relevant polynomials $\overline{\kappa_{\overline{\gamma(S)}}({\mathbf {x}}_n^{r})}$ lie in the ideals $I_{n,k}$ and $J_{n,k}$; the given variable powers are generators of these ideals. By Theorem~\ref{m-is-basis}, the number of monomials which are not divisible by any of the initial terms of the given polynomials equals the dimension of the corresponding quotient ring in either case. It follows that the given sets of polynomials are Gr\"obner bases for $I_{n,k}$ and $J_{n,k}$. Suppose $n > k > 0$. By Lemma~\ref{demazure-initial-term}, for any distinct polynomials $f, g$ listed in either bullet point, the leading monomial of $f$ has coefficient $1$ and does not divide any monomial in $g$. This implies the claim about reducedness. \end{proof} \section{Generalized descent monomial basis} \label{Descent} \subsection{A straightening algorithm} For an $r$-colored permutation $g = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$, let $d(g) = (d_1(g), \dots, d_n(g))$ be the sequence of nonnegative integers given by \begin{equation} \label{d-sequence-definition} d_i(g) := | \{ j \in {\mathrm {Des}}(\pi_1^{c_1} \dots \pi_n^{c_n}) \,:\, j \geq i \} |. \end{equation} We have $d_1(g) = {\mathrm {des}}(g)$ and $d_1(g) \geq \cdots \geq d_n(g)$. Following Bagno and Biagioli \cite{BB}, we define the {\em descent monomial} $b_g \in {\mathbb {C}}[{\mathbf {x}}_n]$ by the equation \begin{equation} \label{gs-monomial-equation} b_g := \prod_{i = 1}^n x_{\pi_i}^{r d_i(g) + c_i}.
\end{equation} When $r = 1$, the monomials $b_g$ were introduced by Garsia \cite{Garsia} and further studied by Garsia and Stanton \cite{GS}. Garsia \cite{Garsia} proved that the collection of monomials $\{b_g \,:\, g \in {\mathfrak{S}}_n\}$ descends to a basis for the coinvariant algebra attached to ${\mathfrak{S}}_n$. When $r = 2$, a slightly different family of monomials was introduced by Adin, Brenti, and Roichman \cite{ABR}; they proved that their monomials descend to a basis for the coinvariant algebra attached to the hyperoctahedral group. Bagno and Biagioli \cite{BB} introduced the collection of monomials above; they proved that these monomials descend to a basis for the coinvariant algebra attached to $G_n$ (and, more generally, that an appropriate subset of them descends to a basis of the coinvariant algebra for the $G(r,p,n)$ family of complex reflection groups). We will find it convenient to extend the definition of $b_g$ somewhat to `partial colored permutations' $g = \pi_1^{c_1} \dots \pi_m^{c_m}$, where $\pi_1, \dots, \pi_m$ are distinct integers in $[n]$ and $0 \leq c_1, \dots, c_m \leq r-1$ are colors. The formulae (\ref{d-sequence-definition}) and (\ref{gs-monomial-equation}) still make sense in this case and define a monomial $b_g \in {\mathbb {C}}[{\mathbf {x}}_n]$. As an example of descent monomials, consider the case $(n,r) = (8,3)$ and $g = \pi_1^{c_1} \dots \pi_8^{c_8} = 3^2 7^0 1^1 6^1 8^1 2^0 4^2 5^1 \in G_8$. We calculate ${\mathrm {Des}}(g) = \{2,6\}$, so that $d(g) = (2,2,1,1,1,1,0,0)$. The monomial $b_g \in {\mathbb {C}}[{\mathbf {x}}_8]$ is given by \begin{equation*} b_g = x_3^8 x_7^6 x_1^4 x_6^4 x_8^4 x_2^3 x_4^2 x_5^1. \end{equation*} Let $\overline{g} = 6^1 8^1 2^0 4^2 5^1$ be the sequence obtained by erasing the first three letters of $g$. We leave it for the reader to check that \begin{equation*} b_{\overline{g}} = x_6^4 x_8^4 x_2^3 x_4^2 x_5^1, \end{equation*} so that $b_{\overline{g}}$ is obtained by truncating $b_g$.
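The example above can be checked mechanically. The following Python sketch recomputes $b_g$ and $b_{\overline{g}}$ from the formulae (\ref{d-sequence-definition}) and (\ref{gs-monomial-equation}); the helper names are ours, and the colored descent rule coded below (position $i$ is a descent when $c_i < c_{i+1}$, or when $c_i = c_{i+1}$ and $\pi_i > \pi_{i+1}$) is the convention inferred from the worked examples of this section, not a definition quoted from the text.

```python
def descents(pi, c):
    # Colored descent rule inferred from the paper's examples (an assumption):
    # i is a descent when c_i < c_{i+1}, or c_i == c_{i+1} and pi_i > pi_{i+1}.
    return [i for i in range(1, len(pi))
            if c[i - 1] < c[i] or (c[i - 1] == c[i] and pi[i - 1] > pi[i])]

def descent_monomial(pi, c, r, n):
    # b_g = prod_i x_{pi_i}^{r d_i(g) + c_i}, returned as an exponent vector
    # indexed by the variables x_1, ..., x_n (absent letters get exponent 0).
    des = descents(pi, c)
    d = [sum(1 for j in des if j >= i) for i in range(1, len(pi) + 1)]
    expo = {j: 0 for j in range(1, n + 1)}
    for i, letter in enumerate(pi):
        expo[letter] = r * d[i] + c[i]
    return expo

r, n = 3, 8
pi, c = [3, 7, 1, 6, 8, 2, 4, 5], [2, 0, 1, 1, 1, 0, 2, 1]
assert descents(pi, c) == [2, 6]          # Des(g) = {2, 6}
b_g = descent_monomial(pi, c, r, n)
assert b_g == {3: 8, 7: 6, 1: 4, 6: 4, 8: 4, 2: 3, 4: 2, 5: 1}
# erasing the first three letters truncates b_g on the remaining letters
b_gbar = descent_monomial(pi[3:], c[3:], r, n)
assert all(b_gbar[j] == b_g[j] for j in [6, 8, 2, 4, 5])
```

The final assertion illustrates the truncation behaviour of $b_{\overline{g}}$ on this example.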
We formalize this as an observation. \begin{observation} \label{truncation-observation} Let $g = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$ and let $\overline{g} = \pi_m^{c_m} \dots \pi_n^{c_n}$ for some $1 \leq m \leq n$. If $b_g = x_{\pi_1}^{a_1} \cdots x_{\pi_n}^{a_n}$, then $b_{\overline{g}} = x_{\pi_m}^{a_m} \cdots x_{\pi_n}^{a_n}$. \end{observation} The most important property of the $b_g$ monomials for our purposes is a related {\em Straightening Lemma} of Bagno and Biagioli \cite{BB} (see also \cite{ABR}). This lemma uses a certain partial order on monomials. In order to define this partial order, we will attach colored permutations to monomials as follows. \begin{defn} \label{group-element-definition} Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. Let \begin{equation*} g(m) = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n \end{equation*} be the $r$-colored permutation determined uniquely by the following conditions: \begin{itemize} \item $a_{\pi_i} \geq a_{\pi_{i+1}}$ for all $1 \leq i < n$, \item if $a_{\pi_i} = a_{\pi_{i+1}}$ then $\pi_i < \pi_{i+1}$, and \item $a_{\pi_i} \equiv c_i$ (mod $r$) for all $1 \leq i \leq n$. \end{itemize} \end{defn} If $m = x_1^{a_1} \cdots x_n^{a_n}$ is a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$, let $\lambda(m) = (\lambda(m)_1 \geq \cdots \geq \lambda(m)_n)$ be the nonincreasing rearrangement of the exponent sequence $(a_1, \dots, a_n)$. The following partial order on monomials was introduced in \cite[Sec. 3.3]{ABR}. \begin{defn} \label{partial-order-definition} Let $m, m' \in {\mathbb {C}}[{\mathbf {x}}_n]$ be monomials and let $g(m) = \pi_1^{c_1} \dots \pi_n^{c_n}$ and $g(m') = \sigma_1^{e_1} \dots \sigma_n^{e_n}$ be the elements of $G_n$ determined by Definition~\ref{group-element-definition}. We write $m \prec m'$ if $\deg(m) = \deg(m')$ and one of the following conditions holds: \begin{itemize} \item $\lambda(m) <_{dom} \lambda(m')$, or \item $\lambda(m) = \lambda(m')$ and ${\mathrm {inv}}(\pi) > {\mathrm {inv}}(\sigma)$.
\end{itemize} \end{defn} Observe that the numbers ${\mathrm {inv}}(\pi)$ and ${\mathrm {inv}}(\sigma)$ appearing in the second bullet refer to the inversion numbers of the {\em uncolored} permutations $\pi, \sigma \in {\mathfrak{S}}_n$. In order to state the Straightening Lemma, we will need to attach a length $n$ sequence $\mu(m) = (\mu(m)_1 \geq \cdots \geq \mu(m)_n)$ of nonnegative integers to any monomial $m$. The basic tool for doing this is as follows; its proof is similar to that of \cite[Claim 5.1]{ABR}. \begin{lemma} \label{mu-lemma} Let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial, let $g(m) = \pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$ be the associated group element, and let $d(m) := d(g(m)) = (d_1 \geq \cdots \geq d_n)$. The sequence \begin{equation} a_{\pi_1} - r d_1 - c_1, \dots, a_{\pi_n} - r d_n - c_n \end{equation} of exponents of $\frac{m}{b_{g(m)}}$ is a weakly decreasing sequence of nonnegative multiples of $r$. \end{lemma} Lemma~\ref{mu-lemma} justifies the following definition. \begin{defn} \label{mu-definition} Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial and let $(a_{\pi_1} - r d_1 - c_1 \geq \dots \geq a_{\pi_n} - r d_n - c_n)$ be the weakly decreasing sequence of nonnegative multiples of $r$ guaranteed by Lemma~\ref{mu-lemma}. Let $\mu(m) = (\mu(m)_1, \dots, \mu(m)_n)$ be the partition {\em conjugate to} the partition \begin{equation*} \left( \frac{a_{\pi_1} - r d_1 - c_1}{r} , \dots, \frac{a_{\pi_n} - r d_n - c_n}{r} \right). \end{equation*} \end{defn} As an example, consider $(n,r) = (8,3)$ and $m = x_1^7 x_2^3 x_3^{14} x_4^2 x_5^1 x_6^7 x_7^{12} x_8^7$. We have $\lambda(m) = (14,12,7,7,7,3,2,1)$. We calculate $g(m) \in G_8$ to be $g(m) = 3^2 7^0 1^1 6^1 8^1 2^0 4^2 5^1$. From this it follows that $d(m) = (2,2,1,1,1,1,0,0)$.
The sequence $\mu(m)$ is determined by the equation \begin{equation*} 3 \cdot \mu(m)' = \lambda(m) - 3 \cdot d(m) - (2,0,1,1,1,0,2,1), \end{equation*} from which it follows that $\mu(m)' = (2,2,1,1,1,0,0,0)$ and $\mu(m) = (5,2,0,0,0,0,0,0)$. The Straightening Lemma of Bagno and Biagioli \cite{BB} for monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ is as follows. \begin{lemma} \label{straightening-lemma} (Bagno-Biagioli \cite{BB}) Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. We have \begin{equation} m = e_{\mu(m)}({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma, \end{equation} where $\Sigma$ is a linear combination of monomials $m' \in {\mathbb {C}}[{\mathbf {x}}_n]$ which satisfy $m' \prec m$. \end{lemma} \subsection{The rings $S_{n,k}$} We are ready to introduce our descent-type monomials for the rings $S_{n,k}$. This is an extension to $r \geq 1$ of the $(n,k)$-Garsia-Stanton monomials of \cite[Sec. 5]{HRS}. \begin{defn} \label{gs-monomial-definition} Let $n \geq k$. The collection ${\mathcal{D}}_{n,k}$ of {\em $(n,k)$-descent monomials} consists of all monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ of the form \begin{equation} b_g \cdot x_{\pi_1}^{r i_1} \cdots x_{\pi_{n-k}}^{r i_{n-k}}, \end{equation} where $g \in G_n$ satisfies ${\mathrm {des}}(g) < k$ and the integer sequence $(i_1, \dots, i_{n-k})$ satisfies \begin{equation*} k - {\mathrm {des}}(g) > i_1 \geq \cdots \geq i_{n-k} \geq 0. \end{equation*} \end{defn} As an example, consider $(n,k,r) = (7,5,2)$ and let $g = 2^1 5^0 6^1 1^0 3^1 4^0 7^0 \in G_7$. It follows that ${\mathrm {Des}}(g) = \{2,4\}$ so that ${\mathrm {des}}(g) = 2$ and $k - {\mathrm {des}}(g) = 3$.
We have \begin{equation*} b_g = x_2^5 x_5^4 x_6^3 x_1^2 x_3^1, \end{equation*} so that Definition~\ref{gs-monomial-definition} gives rise to the following monomials in ${\mathcal{D}}_{7,5}$: \begin{center} $\begin{array}{ccc} x_2^5 x_5^4 x_6^3 x_1^2 x_3^1, & x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^2, & x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^4, \\ \\ x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^2 x_5^2, & x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^4 x_5^2, & x_2^5 x_5^4 x_6^3 x_1^2 x_3^1 \cdot x_2^4 x_5^4. \end{array}$ \end{center} By considering the possibilities for the sequence $(i_1 \geq \cdots \geq i_{n-k})$, we see that \begin{equation} |{\mathcal{D}}_{n,k}| \leq \sum_{g \in G_n} {n-{\mathrm {des}}(g)-1 \choose n-k} = \sum_{g \in G_n} {{\mathrm {asc}}(g) \choose n-k} \end{equation} (where we have an inequality because {\em a priori} two monomials produced by Definition~\ref{gs-monomial-definition} for different choices of $g$ could coincide). If we consider an `ascent-starred' model for elements of ${\mathcal{OP}}_{n,k}$, e.g. \begin{equation*} 2^1_* 5^1_* 1^0 \, \, 6^3 \, \, 4^2_* 3^1 \in {\mathcal{OP}}_{6,3}, \end{equation*} we see that \begin{equation} \label{s-dimension-inequality} |{\mathcal{D}}_{n,k}| \leq |{\mathcal{OP}}_{n,k}| = \dim(S_{n,k}). \end{equation} Our next theorem implies $|{\mathcal{D}}_{n,k}| = \dim(S_{n,k})$. \begin{theorem} \label{s-gs-basis-theorem} The collection ${\mathcal{D}}_{n,k}$ of $(n,k)$-descent monomials descends to a basis of the quotient ring $S_{n,k}$. \end{theorem} \begin{proof} By the inequality~(\ref{s-dimension-inequality}), we need only show that ${\mathcal{D}}_{n,k}$ descends to a spanning set of the quotient ring $S_{n,k}$. To this end, let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be a monomial. We will show that the coset $m + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$ by induction on the partial order $\prec$. Suppose $m$ is minimal with respect to the partial order $\prec$.
Let us consider the exponent sequence $(a_1, \dots, a_n)$ of $m$. By $\prec$-minimality, we have \begin{equation*} (a_1, \dots, a_n) = (\underbrace{a, \dots, a}_p, \underbrace{a+1, \dots, a+1}_{n-p}) \end{equation*} for some integers $a \geq 0$ and $0 < p \leq n$. Our analysis breaks into cases depending on the values of $a$ and $p$. \begin{itemize} \item If $a \geq r$ then $e_n({\mathbf {x}}_n^r) \mid m$, so that $m \equiv 0$ in the quotient $S_{n,k}$. \item If $0 \leq a < r$ and $p = n$, then $m = b_g$ where \begin{equation*} g = 1^a 2^a \dots n^a \in G_n. \end{equation*} \item If $0 \leq a < r-1$ and $p < n$, then $m = b_g$ where \begin{equation*} g = (p+1)^{a+1} (p+2)^{a+1} \dots n^{a+1} 1^a 2^a \dots p^a \in G_n. \end{equation*} \item If $a = r-1$ and $0 < p < n$, then $m = b_g$ where \begin{equation*} g = (p+1)^0 (p+2)^0 \dots n^0 1^{r-1} 2^{r-1} \dots p^{r-1} \in G_n. \end{equation*} \end{itemize} We conclude that $m + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$. Now let $m = x_1^{a_1} \cdots x_n^{a_n}$ be an arbitrary monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. We inductively assume that for any monomial $m'$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ which satisfies $m' \prec m$, the coset $m' + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$. We apply the Straightening Lemma~\ref{straightening-lemma} to $m$, which yields \begin{equation*} m = e_{\mu(m)}({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma, \end{equation*} where $\Sigma$ is a linear combination of monomials $m' \prec m$; by induction, the ring element $\Sigma + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$. Write $d(m) = (d_1, \dots, d_n)$ and $g(m) = \pi_1^{c_1} \dots \pi_n^{c_n}$. Since $d_1 = {\mathrm {des}}(g(m))$, if ${\mathrm {des}}(g(m)) \geq k$, we would have $x_{\pi_1}^{kr} \mid b_{g(m)}$, so that $m \equiv \Sigma$ modulo $J_{n,k}$ and $m$ lies in the span of ${\mathcal{D}}_{n,k}$.
Similarly, if $\mu(m)_1 \geq n-k+1$, then $e_{\mu(m)_1}({\mathbf {x}}_n^r) \mid (e_{\mu(m)}({\mathbf {x}}_n^r) \cdot b_{g(m)})$, so that again $m \equiv \Sigma$ modulo $J_{n,k}$ and $m$ lies in the span of ${\mathcal{D}}_{n,k}$. By the last paragraph, we may assume that \begin{center} ${\mathrm {des}}(g(m)) < k$ and $\mu(m)_1 \leq n-k$. \end{center} We have the identity \begin{equation} m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_n}^{r \cdot \mu(m)'_n}, \end{equation} where $\mu(m)'$ is the partition conjugate to $\mu(m)$. Since $\mu(m)_1 \leq n-k$, we may rewrite this identity as \begin{equation} m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}, \end{equation} where the sequence $\mu(m)'_1, \dots, \mu(m)'_{n-k}$ is weakly decreasing. If $\mu(m)'_1 < k - {\mathrm {des}}(g(m))$, we have $m \in {\mathcal{D}}_{n,k}$. If $\mu(m)'_1 \geq k - {\mathrm {des}}(g(m))$, since $r \cdot {\mathrm {des}}(g(m))$ is $\leq$ the power of $x_{\pi_1}$ in $b_{g(m)}$, we have $x_{\pi_1}^{kr} \mid m$, so that $m \equiv \Sigma$ modulo $J_{n,k}$. In either case, we have that $m + J_{n,k}$ lies in the span of ${\mathcal{D}}_{n,k}$. \end{proof} \subsection{The rings $R_{n,k}$} Our aim is to expand our set of monomials ${\mathcal{D}}_{n,k}$ to a larger set of monomials ${\mathcal{ED}}_{n,k}$ (the `extended' descent monomials) which will descend to a basis for the rings $R_{n,k}$.
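Before carrying out this extension, we note that Theorem~\ref{s-gs-basis-theorem} can be spot-checked by brute force for small parameters: the number of distinct $(n,k)$-descent monomials should equal $\dim(S_{n,k}) = |{\mathcal{OP}}_{n,k}|$, which by Corollary~\ref{hilbert-series-corollary} at $q = 1$ is $r^n \cdot k! \cdot {\mathrm {Stir}}(n,k)$. The Python sketch below assumes the colored descent convention suggested by the worked examples of this section ($i$ is a descent when $c_i < c_{i+1}$, or $c_i = c_{i+1}$ and $\pi_i > \pi_{i+1}$); the helper names are ours.

```python
from itertools import permutations, product
from math import factorial

def descents(pi, c):
    # colored descent rule inferred from the examples in this section (assumption)
    return [i for i in range(1, len(pi))
            if c[i - 1] < c[i] or (c[i - 1] == c[i] and pi[i - 1] > pi[i])]

def stirling2(n, k):
    # Stirling numbers of the second kind via the standard recursion
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def descent_monomials(n, k, r):
    # enumerate the (n,k)-descent monomials as exponent tuples (a_1, ..., a_n)
    mons = set()
    for pi in permutations(range(1, n + 1)):
        for c in product(range(r), repeat=n):
            des = descents(pi, c)
            if len(des) >= k:          # require des(g) < k
                continue
            d = [sum(1 for j in des if j >= i) for i in range(1, n + 1)]
            base = [0] * (n + 1)
            for i, letter in enumerate(pi):
                base[letter] = r * d[i] + c[i]
            # weakly decreasing (i_1, ..., i_{n-k}) with k - des(g) > i_1
            for seq in product(range(k - len(des)), repeat=n - k):
                if any(seq[t] < seq[t + 1] for t in range(n - k - 1)):
                    continue
                expo = base[:]
                for t in range(n - k):
                    expo[pi[t]] += r * seq[t]
                mons.add(tuple(expo[1:]))
    return mons

for n, k, r in [(3, 2, 2), (4, 2, 2), (4, 3, 2)]:
    assert len(descent_monomials(n, k, r)) == r ** n * factorial(k) * stirling2(n, k)
```

For instance, for $(n,k,r) = (3,2,2)$ the check compares the brute-force count against $2^3 \cdot 2! \cdot {\mathrm {Stir}}(3,2) = 48$.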
\begin{defn} \label{extended-gs-definition} Let the {\em extended $(n,k)$-descent monomials} ${\mathcal{ED}}_{n,k}$ be the set of monomials of the form \begin{equation} \label{extended-gs-equation} \left( \prod_{j = 1}^z x_{\pi_j}^{kr} \right) \cdot b_{\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}} \cdot \left( x_{\pi_{z+1}}^{r \cdot i_{z+1}} x_{\pi_{z+2}}^{r \cdot i_{z+2}} \cdots x_{\pi_{n-k}}^{r \cdot i_{n-k}} \right), \end{equation} where \begin{itemize} \item we have $0 \leq z \leq n-k$, \item $\pi_1^{c_1} \dots \pi_n^{c_n} \in G_n$ is a colored permutation whose length $n-z$ suffix $\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}$ satisfies ${\mathrm {des}}(\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}) < k$, and \item we have \begin{equation*} k - {\mathrm {des}}(\pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}) > i_{z+1} \geq i_{z+2} \geq \cdots \geq i_{n-k} \geq 0. \end{equation*} \end{itemize} We also set ${\mathcal{ED}}_{n,0} := \{1\}$. \end{defn} As an example of Definition~\ref{extended-gs-definition}, let $(n,k,r) = (7,3,2)$, let $z = 2$, and consider the group element $5^1 1^1 2^0 6^0 7^0 4^1 3^0 \in G_7$. We have ${\mathrm {des}}(2^0 6^0 7^0 4^1 3^0) = 1$, so that $k - {\mathrm {des}}(2^0 6^0 7^0 4^1 3^0) = 2$. Moreover, we have \begin{equation*} b_{2^0 6^0 7^0 4^1 3^0} = x_2^2 x_6^2 x_7^2 x_4^1, \end{equation*} so that we get the following monomials in ${\mathcal{ED}}_{7,3}$: \begin{center} $\begin{array}{ccc} (x_5^6 x_1^6) \cdot (x_2^2 x_6^2 x_7^2 x_4^1), & (x_5^6 x_1^6) \cdot (x_2^2 x_6^2 x_7^2 x_4^1) \cdot (x_2^2), & (x_5^6 x_1^6) \cdot (x_2^2 x_6^2 x_7^2 x_4^1) \cdot (x_2^2 x_6^2). \end{array}$ \end{center} Observe that the monomial defined in (\ref{extended-gs-equation}) depends only on the set of letters $\{\pi_1, \dots, \pi_z\}$ contained in the length $z$ prefix $\pi_1^{c_1} \dots \pi_z^{c_z}$ of $\pi_1^{c_1} \dots \pi_n^{c_n}$.
We can therefore form a typical monomial in ${\mathcal{ED}}_{n,k}$ by choosing $0 \leq z \leq n-k$, then choosing a set $Z \subseteq [n]$ with $|Z| = z$, then forming a typical element of ${\mathcal{D}}_{n-z,k}$ on the variable set $\{x_j \,:\, j \in [n] - Z\}$, and finally multiplying by the product $\prod_{j \in Z} x_j^{kr}$. By Theorem~\ref{s-gs-basis-theorem}, there are $|{\mathcal{OP}}_{n-z,k}|$ monomials in ${\mathcal{D}}_{n-z,k}$, and all of the exponents in these monomials are $< kr$. It follows that \begin{equation} |{\mathcal{ED}}_{n,k}| = \sum_{z = 0}^{n-k} {n \choose z} |{\mathcal{D}}_{n-z,k}| = \sum_{z = 0}^{n-k} {n \choose z} |{\mathcal{OP}}_{n-z,k}| = |{\mathcal{F}}_{n,k}| = \dim(R_{n,k}). \end{equation} We will show ${\mathcal{ED}}_{n,k}$ descends to a spanning set of $R_{n,k}$, and hence descends to a basis of $R_{n,k}$. \begin{theorem} \label{r-gs-basis-theorem} The set ${\mathcal{ED}}_{n,k}$ of extended $(n,k)$-descent monomials descends to a basis of $R_{n,k}$. \end{theorem} \begin{proof} Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial in ${\mathbb {C}}[{\mathbf {x}}_n]$. We argue that the coset $m + I_{n,k} \in R_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$. Suppose first that $m$ is minimal with respect to $\prec$. The exponent sequence $(a_1, \dots, a_n)$ has the form \begin{equation*} (a_1, \dots, a_n) = (\underbrace{a, \dots, a}_p, \underbrace{a+1, \dots, a+1}_{n-p}) \end{equation*} for some $a \geq 0$ and $0 < p \leq n$. The same analysis as in the proof of Theorem~\ref{s-gs-basis-theorem} implies that $m \equiv 0$ (mod $I_{n,k})$ or $m \in {\mathcal{D}}_{n,k} \subseteq {\mathcal{ED}}_{n,k}$. Now let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be an arbitrary monomial and form the sequence $d(m) = (d_1, \dots, d_n)$ and the colored permutation $g(m) = \pi_1^{c_1} \dots \pi_n^{c_n}$. 
Apply the Straightening Lemma~\ref{straightening-lemma} to write \begin{equation} m = e_{\mu(m)} ({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma, \end{equation} where $\Sigma$ is a linear combination of monomials $m' \in {\mathbb {C}}[{\mathbf {x}}_n]$ with $m' \prec m$. We inductively assume that the ring element $\Sigma + I_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$. If $\mu(m)_1 \geq n-k+1$, then $m \equiv \Sigma$ (mod $I_{n,k}$), so that $m + I_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$. If ${\mathrm {des}}(g(m)) \geq k+1$, then $x_{\pi_1}^{(k+1)r} \mid b_{g(m)}$, so that again $m \equiv \Sigma$ (mod $I_{n,k}$) and $m + I_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$. By the last paragraph, we may assume \begin{center} $\mu(m)_1 \leq n-k$ and ${\mathrm {des}}(g(m)) \leq k$. \end{center} Our analysis breaks up into two cases depending on whether ${\mathrm {des}}(g(m)) < k$ or ${\mathrm {des}}(g(m)) = k$. {\bf Case 1:} {\em $\mu(m)_1 \leq n-k$ and ${\mathrm {des}}(g(m)) < k$.} If any element in the exponent sequence $(a_1, \dots, a_n)$ of $m$ is $> kr$, then $m \equiv 0$ (mod $I_{n,k}$). We may therefore assume $a_j \leq kr$ for all $j$. Since we have $\mu(m)_1 \leq n-k$, we have the identity \begin{equation} m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}. \end{equation} If $\mu(m)'_1 < k - {\mathrm {des}}(g(m))$, we have $m \in {\mathcal{D}}_{n,k} \subseteq {\mathcal{ED}}_{n,k}$. If $\mu(m)'_1 > k - {\mathrm {des}}(g(m))$, we have $x_{\pi_1}^{(k+1) \cdot r} \mid m$, which contradicts $a_{\pi_1} \leq kr$. By the last paragraph, we may assume $\mu(m)'_1 = k - {\mathrm {des}}(g(m))$. Since every term in the weakly decreasing sequence $(a_{\pi_1}, \dots, a_{\pi_n})$ is $\leq kr$, there exists an index $1 \leq z \leq n$ such that $(a_{\pi_1}, \dots, a_{\pi_n}) = (kr, \dots, kr, a_{\pi_{z+1}}, \dots, a_{\pi_n})$, where $a_{\pi_{z+1}} < kr$.
Since every exponent in $b_{g(m)}$ is $< kr$, we in fact have $1 \leq z \leq n-k$. Let $\overline{g}$ be the partial colored permutation $\overline{g} := \pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}$. Applying Observation~\ref{truncation-observation}, we have \begin{align} m &= b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}} \\ &= \left( \prod_{j = 1}^z x_{\pi_j}^{kr} \right) \cdot b_{\overline{g}} \cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}, \end{align} for $1 \leq z \leq n-k$. The monomial $b_{\overline{g}} \cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}$ only involves the variables $x_{\pi_{z+1}}, \dots, x_{\pi_n}$, and every exponent in this product is $< kr$. If $\mu(m)'_{z+1} \geq k - {\mathrm {des}}(\overline{g})$, we would have the divisibility $x_{\pi_{z+1}}^{kr} \mid b_{\overline{g}} \cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}$, which is a contradiction. It follows that $\mu(m)'_{z+1} < k - {\mathrm {des}}(\overline{g})$, which implies that $m \in {\mathcal{ED}}_{n,k}$. We conclude that the coset $m + I_{n,k}$ lies in the span of ${\mathcal{ED}}_{n,k}$, which completes this case. {\bf Case 2:} {\em $\mu(m)_1 \leq n-k$ and ${\mathrm {des}}(g(m)) = k$.} As in the previous case, we may assume that every exponent appearing in the monomial $m$ is $\leq kr$. We again write \begin{equation} m = b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}} \end{equation} and have $(a_{\pi_1} \geq \cdots \geq a_{\pi_n}) = (kr, \dots, kr, a_{\pi_{z+1}}, \dots, a_{\pi_n})$ for some $1 \leq z \leq n-k$. Define the partial colored permutation $\overline{g} := \pi_{z+1}^{c_{z+1}} \dots \pi_n^{c_n}$. Since the exponent of $x_{\pi_{z+1}}$ in $m$ is $\geq r \cdot {\mathrm {des}}(\overline{g})$, we have ${\mathrm {des}}(\overline{g}) < k$. 
If $\mu(m)'_{z+1} \geq k - {\mathrm {des}}(\overline{g})$, the exponent of $x_{\pi_{z+1}}$ in $m$ would be $\geq kr$, so we must have $\mu(m)'_{z+1} < k - {\mathrm {des}}(\overline{g})$. Using Observation~\ref{truncation-observation} to write \begin{align} m &= b_{g(m)} \cdot x_{\pi_1}^{r \cdot \mu(m)'_1} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}} \\ &= \left( \prod_{j = 1}^z x_{\pi_j}^{kr} \right) \cdot b_{\overline{g}} \cdot x_{\pi_{z+1}}^{r \cdot \mu(m)'_{z+1}} \cdots x_{\pi_{n-k}}^{r \cdot \mu(m)'_{n-k}}, \end{align} we see that $m \in {\mathcal{ED}}_{n,k}$. \end{proof} The following lemma involving expansions of monomials $m$ into the ${\mathcal{ED}}_{n,k}$ basis of $R_{n,k}$ will be useful in the next section. For $0 \leq z \leq n-k$, let ${\mathcal{ED}}_{n,k}(z)$ be the subset of monomials in ${\mathcal{ED}}_{n,k}$ which contain exactly $z$ variables with power $kr$. We get a stratification \begin{equation} {\mathcal{ED}}_{n,k} = {\mathcal{ED}}_{n,k}(0) \uplus {\mathcal{ED}}_{n,k}(1) \uplus \cdots \uplus {\mathcal{ED}}_{n,k}(n-k). \end{equation} For convenience, we set ${\mathcal{ED}}_{n,k}(z) = \varnothing$ for $z > n-k$. \begin{lemma} \label{zero-stability-lemma} Let $(a_1, \dots, a_n)$ satisfy $0 \leq a_i \leq kr$ for all $i$, let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb {C}}[{\mathbf {x}}_n]$ be the corresponding monomial, and let $z := | \{1 \leq i \leq n \,:\, a_i = kr \} |$. The expansion of $m + I_{n,k}$ in the basis ${\mathcal{ED}}_{n,k}$ of $R_{n,k}$ only involves terms in ${\mathcal{ED}}_{n,k}(0) \uplus {\mathcal{ED}}_{n,k}(1) \uplus \cdots \uplus {\mathcal{ED}}_{n,k}(z)$. \end{lemma} \begin{proof} Applying the Straightening Lemma~\ref{straightening-lemma} to $m$, we get \begin{equation} m = e_{\mu(m)}({\mathbf {x}}_n^r) \cdot b_{g(m)} + \Sigma, \end{equation} where $\Sigma$ is a linear combination of monomials $m'$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ which satisfy $m' \prec m$.
The proof of Theorem~\ref{r-gs-basis-theorem} shows that either \begin{itemize} \item the monomial $m$ is an element of ${\mathcal{ED}}_{n,k}$, and hence an element of ${\mathcal{ED}}_{n,k}(z)$, or \item we have $m \equiv \Sigma$ (mod $I_{n,k}$). \end{itemize} If the first bullet holds, we are done. We may therefore assume that $m \equiv \Sigma$ (mod $I_{n,k}$). Let $m' = x_1^{a'_1} \cdots x_n^{a'_n}$ be a monomial appearing in $\Sigma$. The dominance relation $\lambda(m') \leq_{dom} \lambda(m)$ implies $| \{ 1 \leq i \leq n \,:\, a'_i = kr \} | \leq z$. We may therefore apply the logic of the last paragraph to each such monomial $m'$, and iterate. \end{proof} \section{Frobenius series} \label{Frobenius} In this section we will determine the graded isomorphism types of the rings $R_{n,k}$ and $S_{n,k}$. When $r = 1$, this was carried out for the rings $S_{n,k}$ in \cite[Sec. 6]{HRS}. It turns out that the methods developed in \cite[Sec. 6]{HRS} generalize fairly readily to the $S$ rings, but not the $R$ rings. Our approach will be to describe the $R$ rings in terms of the $S$ rings, and then describe the isomorphism type of the $S$ rings. \subsection{Relating $R$ and $S$} In this subsection, we describe the graded isomorphism type of $R_{n,k}$ in terms of the rings $S_{n,k}$. The result here is as follows. \begin{proposition} \label{r-to-s-reduction} We have an isomorphism of graded $G_n$-modules \begin{equation} R_{n,k} \cong \bigoplus_{z = 0}^{n-k} {\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k}^r \otimes {\mathbb {C}}_{krz}). \end{equation} Here ${\mathbb {C}}_{krz}$ is a copy of the trivial $1$-dimensional representation of $G_z$ sitting in degree $krz$. Equivalently, we have the identity \begin{equation} {\mathrm {grFrob}}(R_{n,k}; q) = \sum_{z = 0}^{n-k} q^{krz} \bm{s}_{(\varnothing, \dots, \varnothing, (z))}({\mathbf {x}}) \cdot {\mathrm {grFrob}}(S_{n-z,k}^r; q).
\end{equation} \end{proposition} \begin{proof} For $0 \leq z \leq n-k$, let $R_{n,k}(z)$ be the subspace of $R_{n,k}$ given by \begin{equation} R_{n,k}(z) := \mathrm{span}_{{\mathbb {C}}} \{ x_1^{a_1} \cdots x_n^{a_n} + I_{n,k} \,:\, \text{$0 \leq a_i \leq kr$ and at most $z$ of $a_1, \dots, a_n$ equal $kr$} \}. \end{equation} It is clear that $R_{n,k}(z)$ is graded and stable under the action of $G_n$. We also have a filtration \begin{equation} R_{n,k}(0) \subseteq R_{n,k}(1) \subseteq \cdots \subseteq R_{n,k}(n-k) = R_{n,k}. \end{equation} It follows that there is an isomorphism of graded $G_n$-modules \begin{equation} R_{n,k} \cong Q_{n,k}^r(0) \oplus Q_{n,k}^r(1) \oplus \cdots \oplus Q_{n,k}^r(n-k), \end{equation} where $Q_{n,k}^r(z) := R_{n,k}(z)/R_{n,k}(z-1)$ (with the convention $R_{n,k}(-1) := 0$). Consider the stratification ${\mathcal{ED}}_{n,k} = {\mathcal{ED}}_{n,k}(0) \uplus {\mathcal{ED}}_{n,k}(1) \uplus \cdots \uplus {\mathcal{ED}}_{n,k}(n-k)$ of the basis ${\mathcal{ED}}_{n,k}$ of $R_{n,k}$. The containment ${\mathcal{ED}}_{n,k}(z') \subseteq R_{n,k}(z)$ for $z' \leq z$ implies \begin{equation} \dim(R_{n,k}(z)) \geq | {\mathcal{ED}}_{n,k}(0)| + |{\mathcal{ED}}_{n,k}(1)| + \cdots + |{\mathcal{ED}}_{n,k}(z)|. \end{equation} On the other hand, Lemma~\ref{zero-stability-lemma} implies that $R_{n,k}(z)$ is spanned by (the image of the monomials in) $\biguplus_{z' = 0}^z {\mathcal{ED}}_{n,k}(z')$. It follows that \begin{equation} \dim(R_{n,k}(z)) = | {\mathcal{ED}}_{n,k}(0)| + |{\mathcal{ED}}_{n,k}(1)| + \cdots + |{\mathcal{ED}}_{n,k}(z)| \end{equation} and $\biguplus_{z' = 0}^z {\mathcal{ED}}_{n,k}(z')$ descends to a basis of $R_{n,k}(z)$. Consequently, the set ${\mathcal{ED}}_{n,k}(z)$ descends to a basis for $Q_{n,k}^r(z)$. Fix $0 \leq z \leq n-k$.
It follows from the definition of ${\mathcal{ED}}_{n,k}(z)$ that \begin{equation} \dim(Q_{n,k}^r(z)) = |{\mathcal{ED}}_{n,k}(z)| = {n \choose z} \cdot |{\mathcal{OP}}_{n-z,k}| = {n \choose z} \cdot \dim(S_{n-z,k}), \end{equation} which coincides with the dimension of ${\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k}^r \otimes {\mathbb {C}}_{krz})$. We claim that we have an isomorphism of graded $G_n$-modules \begin{equation} \label{main-module-isomorphism} Q_{n,k}^r(z) \cong {\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k}^r \otimes {\mathbb {C}}_{krz}). \end{equation} In order to prove the isomorphism (\ref{main-module-isomorphism}), for any $T \subseteq [n]$, let $G_{[n] - T}$ be the group of $r$-colored permutations on the index set $[n] - T$ and let $S_{n-z,k}(T)$ be the module $S_{n-z,k}$ in the variable set $\{x_j \,:\, j \in T\}$. Any group element $g \in G_{[n] - T}$ acts trivially on the product $\prod_{j \notin T} x_j^{kr}$. We may therefore interpret the induction on the right-hand side of (\ref{main-module-isomorphism}) as \begin{equation} {\mathrm {Ind}}_{G_{(n-z,z)}}^{G_n}(S_{n-z,k} \otimes {\mathbb {C}}_{krz}) \cong \bigoplus_{|T| = n-z} S_{n-z,k}(T) \otimes \mathrm{span} \left\{ \prod_{j \notin T} x_j^{kr} \right\}, \end{equation} which reduces our task to proving \begin{equation} \label{modified-module-isomorphism} Q_{n,k}^r(z) \cong \bigoplus_{|T| = n-z} S_{n-z,k}(T) \otimes \mathrm{span} \left\{ \prod_{j \notin T} x_j^{kr} \right\}. \end{equation} The set of monomials ${\mathcal{ED}}_{n,k}(z)$ in ${\mathbb {C}}[{\mathbf {x}}_n]$ descends to a vector space basis of the graded modules appearing on either side of (\ref{modified-module-isomorphism}); the corresponding identification of cosets gives rise to an isomorphism \begin{equation} \varphi: Q_{n,k}^r(z) \rightarrow \bigoplus_{|T| = n-z} S_{n-z,k}(T) \otimes \mathrm{span} \left\{ \prod_{j \notin T} x_j^{kr} \right\} \end{equation} of graded vector spaces.
It is clear that $\varphi$ commutes with the action of the diagonal subgroup ${\mathbb {Z}}_r \times \cdots \times {\mathbb {Z}}_r \subseteq G_n$; we need only show that $\varphi$ commutes with the action of ${\mathfrak{S}}_n$, which we do using straightening. Let $m = x_1^{a_1} \cdots x_n^{a_n} \in {\mathcal{ED}}_{n,k}(z)$ be a typical basis element and let $\pi.m = x_{\pi_1}^{a_1} \cdots x_{\pi_n}^{a_n}$ be the image of $m$ under a typical permutation $\pi \in {\mathfrak{S}}_n$. If $\pi.m \in {\mathcal{ED}}_{n,k}(z)$, the definition of $\varphi$ yields $\varphi(\pi.m) = \pi.\varphi(m)$. If $\pi.m \notin {\mathcal{ED}}_{n,k}(z)$, by Lemma~\ref{straightening-lemma} we can write $\pi.m = e_{\mu(\pi.m)}({\mathbf {x}}_n^r) \cdot b_{g(\pi.m)} + \Sigma$, where $\Sigma$ is a linear combination of monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ which are $\prec \pi.m$. As in the proof of Lemma~\ref{zero-stability-lemma}, since $m \in {\mathcal{ED}}_{n,k}(z)$ but $\pi.m \notin {\mathcal{ED}}_{n,k}(z)$, we know that $\pi.m \equiv \Sigma$ in the modules on either side of (\ref{modified-module-isomorphism}). Iterating this procedure, we see that $\pi.m$ has the same expansion into the bases induced from ${\mathcal{ED}}_{n,k}(z)$ on either side of (\ref{modified-module-isomorphism}). This proves that the map $\varphi$ is ${\mathfrak{S}}_n$-equivariant, so that $\varphi$ is an isomorphism of graded $G_n$-modules. \end{proof} \subsection{The rings $S_{n,k,s}$} By Proposition~\ref{r-to-s-reduction}, the graded isomorphism type of $R_{n,k}$ is determined by the graded isomorphism type of $S_{n,k}$. The remainder of this section will focus on the rings $S_{n,k}$. As in \cite[Sec. 6]{HRS}, to determine the graded isomorphism type of $S_{n,k}$ we will introduce a more general class of quotients. \begin{defn} Let $n, k, s$ be positive integers with $n \geq k \geq s$.
Define $J_{n,k,s} \subseteq {\mathbb {C}}[{\mathbf {x}}_n]$ to be the ideal \begin{equation*} J_{n,k,s} := \langle x_1^{kr}, \dots , x_n^{kr}, e_n({\mathbf {x}}_n^r), e_{n-1}({\mathbf {x}}_n^r), \dots, e_{n-s+1}({\mathbf {x}}_n^r) \rangle. \end{equation*} Let $S_{n,k,s} := {\mathbb {C}}[{\mathbf {x}}_n]/J_{n,k,s}$ be the corresponding quotient ring. \end{defn} When $s = k$ we have $J_{n,k,k} = J_{n,k}$, so that $S_{n,k,k} = S_{n,k}$. Our aim for the remainder of this section is to build a combinatorial model for the quotient $S_{n,k,s}$ using the point orbit technique of Section~\ref{Hilbert}. To this end, for $n \geq k \geq s$ let ${\mathcal{OP}}_{n,k,s}$ denote the collection of $r$-colored $k$-block ordered set partitions $\sigma = (B_1 \mid \cdots \mid B_k)$ of $[n + (k-s)]$ such that, for $1 \leq i \leq k-s$, we have $n+i \in B_{s+i}$ and $n+i$ has color $0$. For example, we have \begin{equation*} ( 2^0 3^2 \mid 1^2 6^0 \mid {\bf 7^0} \mid 5^1 {\bf 8^0} \mid 4^1 {\bf 9^0} ) \in {\mathcal{OP}}^3_{6,5,2}. \end{equation*} Given $\sigma \in {\mathcal{OP}}_{n,k,s}$, we will refer to the letters $n+1, n+2, \dots, n+(k-s)$ as {\em big}; the remaining letters will be called {\em small}. The group $G_n$ acts on ${\mathcal{OP}}_{n,k,s}$ by acting on the small letters. We model this action with a point set as follows. \begin{defn} Fix positive real numbers $0 < \alpha_1 < \cdots < \alpha_k$. Let $Z_{n,k,s} \subseteq {\mathbb {C}}^{n+(k-s)}$ be the collection of points $(z_1, \dots, z_n, z_{n+1}, \dots, z_{n+k-s})$ such that \begin{itemize} \item we have $z_i \in \{ \zeta^c \alpha_j \,:\, 0 \leq c \leq r-1, \, \, 1 \leq j \leq k\}$ for all $1 \leq i \leq n + (k-s)$, \item we have $\{\alpha_1, \dots, \alpha_k\} = \{|z_1|, \dots, |z_{n+(k-s)}| \}$, and \item we have $z_{n+i} = \alpha_{s+i}$ for all $1 \leq i \leq k-s$.
\end{itemize} \end{defn} It is evident that the point set $Z_{n,k,s}$ is stable under the action of $G_n$ on the first $n$ coordinates of ${\mathbb {C}}^{n + (k-s)}$ and that, as a $G_n$-set, $Z_{n,k,s}$ is isomorphic to ${\mathcal{OP}}_{n,k,s}$. Let ${\mathbf {I}}(Z_{n,k,s}) \subseteq {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]$ be the ideal of polynomials which vanish on $Z_{n,k,s}$ and let ${\mathbf {T}}(Z_{n,k,s}) \subseteq {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]$ be the corresponding top component ideal. Since $x_{n+i} - \alpha_{s+i} \in {\mathbf {I}}(Z_{n,k,s})$ for all $1 \leq i \leq k-s$, we have $x_{n+i} \in {\mathbf {T}}(Z_{n,k,s})$. Let $\varepsilon: {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}] \twoheadrightarrow {\mathbb {C}}[{\mathbf {x}}_n]$ be the map which evaluates $x_{n+i} = 0$ for all $1 \leq i \leq k-s$ and let $T_{n,k,s} := \varepsilon({\mathbf {T}}(Z_{n,k,s}))$ be the image of ${\mathbf {T}}(Z_{n,k,s})$ under $\varepsilon$. Then $T_{n,k,s}$ is an ideal in ${\mathbb {C}}[{\mathbf {x}}_n]$ and we have an identification of $G_n$-modules \begin{equation*} {\mathbb {C}}[{\mathcal{OP}}_{n,k,s}] \cong {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]/{\mathbf {I}}(Z_{n,k,s}) \cong {\mathbb {C}}[{\mathbf {x}}_{n+(k-s)}]/{\mathbf {T}}(Z_{n,k,s}) \cong {\mathbb {C}}[{\mathbf {x}}_n]/T_{n,k,s}. \end{equation*} It will develop that $J_{n,k,s} = T_{n,k,s}$. We can generalize Lemma~\ref{i-contained-in-t} to prove one containment right away. \begin{lemma} \label{j-contained-in-t-generalized} We have $J_{n,k,s} \subseteq T_{n,k,s}$. \end{lemma} \begin{proof} We show that every generator of $J_{n,k,s}$ is contained in $T_{n,k,s}$. For $1 \leq i \leq n$ we have $\prod_{j = 1}^{k} \prod_{c = 0}^{r-1} (x_i - \zeta^c \alpha_j) \in {\mathbf {I}}(Z_{n,k,s})$, so that $x_i^{kr} \in T_{n,k,s}$. The proof of Lemma~\ref{i-contained-in-t} shows that $e_j({\mathbf {x}}_{n+(k-s)}^r) \in {\mathbf {T}}(Z_{n,k,s})$ for all $j \geq n-s+1$.
Applying the evaluation map $\varepsilon$ gives $\varepsilon: e_j({\mathbf {x}}_{n+(k-s)}^r) \mapsto e_j({\mathbf {x}}_n^r) \in T_{n,k,s}$. \end{proof} Proving the equality $J_{n,k,s} = T_{n,k,s}$ will involve a dimension count. To facilitate this, let us identify some terms in the initial ideal of $J_{n,k,s}$. The following is a generalization of Lemma~\ref{skip-leading-terms}; its proof is left to the reader. \begin{lemma} \label{skip-leading-terms-generalized} Let $<$ be the lexicographic term order on monomials in ${\mathbb {C}}[{\mathbf {x}}_n]$ and let ${\mathrm {in}}_<(J_{n,k,s})$ be the initial ideal of $J_{n,k,s}$. We have \begin{itemize} \item $x_i^{kr} \in {\mathrm {in}}_<(J_{n,k,s})$ for $1 \leq i \leq n$, and \item ${\mathbf {x}}(S)^r \in {\mathrm {in}}_<(J_{n,k,s})$ for all $S \subseteq [n]$ with $|S| = n-s+1$. \end{itemize} \end{lemma} Lemma~\ref{skip-leading-terms-generalized} motivates the following generalization of strongly $(n,k)$-nonskip monomials. \begin{defn} Let ${\mathbb{N}}N_{n,k,s}$ be the collection of monomials $m \in {\mathbb {C}}[{\mathbf {x}}_n]$ such that \begin{itemize} \item $x_i^{kr} \nmid m$ for all $1 \leq i \leq n$, and \item ${\mathbf {x}}(S)^r \nmid m$ for all $S \subseteq [n]$ with $|S| = n-s+1$. \end{itemize} \end{defn} By Lemma~\ref{skip-leading-terms-generalized}, the set ${\mathbb{N}}N_{n,k,s}$ contains the standard monomial basis of $S_{n,k,s}$; we will prove that these two sets of monomials coincide. Let us first observe a relationship between the monomials in ${\mathbb{N}}N_{n,k,s}$ and those in ${\mathbb{N}}N_{n+(k-s),k}$. \begin{lemma} \label{nonskip-monomial-factor} If $x_1^{a_1} \cdots x_n^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathbb{N}}N_{n+(k-s),k}$, then $x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb{N}}N_{n,k,s}$.
Conversely, if $x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb{N}}N_{n,k,s}$ and $0 \leq a_{n+1} < a_{n+2} < \cdots < a_{n+(k-s)} < kr$ satisfy \begin{equation*} a_{n+1} \equiv a_{n+2} \equiv \cdots \equiv a_{n+(k-s)} \equiv i \text{ (mod $r$)} \end{equation*} for some $0 \leq i \leq r-1$, then $x_1^{a_1} \cdots x_n^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathbb{N}}N_{n+(k-s),k}$. \end{lemma} \begin{proof} The first statement is clear from the definitions of ${\mathbb{N}}N_{n+(k-s),k}$ and ${\mathbb{N}}N_{n,k,s}$. For the second statement, let $m' := x_1^{a_1} \cdots x_n^{a_n} \in {\mathbb{N}}N_{n,k,s}$ and let $0 \leq a_{n+1} < a_{n+2} < \cdots < a_{n+(k-s)} < kr$ be as in the statement of the lemma. We argue that $m := x_1^{a_1} \cdots x_n^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathbb{N}}N_{n+(k-s),k}$. Since $m' \in {\mathbb{N}}N_{n,k,s}$ and $a_{n+1}, \dots, a_{n+(k-s)} < kr$, we know that $x_i^{kr} \nmid m$ for $1 \leq i \leq n + (k-s)$. Let $S \subseteq [n + (k-s)]$ satisfy $|S| = n-s+1$. We need to show ${\mathbf {x}}(S)^r \nmid m$. If $S \subseteq [n]$, then ${\mathbf {x}}(S)^r \nmid m$ because ${\mathbf {x}}(S)^r \nmid m'$. On the other hand, if $n + i \in S$ for some $1 \leq i \leq k-s$, the power $p_{n+i}$ of $x_{n+i}$ in ${\mathbf {x}}(S)^r$ is $\geq r \cdot (s+i)$. However, our assumptions on $(a_{n+1}, a_{n+2}, \dots, a_{n+(k-s)})$ force $a_{n+i} < kr - r \cdot (k-s-i) = r \cdot (s+i)$, which implies ${\mathbf {x}}(S)^r \nmid m$. \end{proof} We use the map $\Psi$ from Section~\ref{Hilbert} to count ${\mathbb{N}}N_{n,k,s}$. \begin{lemma} \label{size-of-n} We have $|{\mathbb{N}}N_{n,k,s}| = |{\mathcal{OP}}_{n,k,s}|$. \end{lemma} \begin{proof} Consider the bijection $\Psi: {\mathcal{OP}}_{n+(k-s),k} \rightarrow {\mathbb{N}}N_{n+(k-s),k}$ from Section~\ref{Hilbert}. We have ${\mathcal{OP}}_{n,k,s} \subseteq {\mathcal{OP}}_{n+(k-s),k}$.
We leave it for the reader to check that \begin{equation*} \Psi({\mathcal{OP}}_{n,k,s}) = {\mathbb{N}}N'_{n,k,s}, \end{equation*} where ${\mathbb{N}}N'_{n,k,s}$ consists of those monomials $x_1^{a_1} \cdots x_{n}^{a_n} x_{n+1}^{a_{n+1}} \cdots x_{n+(k-s)}^{a_{n+(k-s)}} \in {\mathbb{N}}N_{n+(k-s),k}$ which satisfy \begin{equation*} (a_{n+1}, a_{n+2}, \dots, a_{n+(k-s)}) = (rs + (r-1), r(s+1) + (r-1), \dots, r(k-1) + (r-1)). \end{equation*} (The $+(r-1)$ terms come from the fact that the letters $n+1, \dots, n+(k-s)$ all have color $0$ and $\Psi$ involves a {\em complementary} color contribution.) Lemma~\ref{nonskip-monomial-factor} applies to show $|{\mathbb{N}}N'_{n,k,s}| = |{\mathbb{N}}N_{n,k,s}|$. \end{proof} We are ready to determine the ungraded isomorphism type of the $G_n$-module $S_{n,k,s}$. \begin{lemma} \label{s-dimension-lemma-generalized} We have $S_{n,k,s} \cong {\mathbb {C}}[{\mathcal{OP}}_{n,k,s}]$. In particular, we have $\dim(S_{n,k,s}) = |{\mathcal{OP}}_{n,k,s}|$. \end{lemma} \begin{proof} By Lemma~\ref{j-contained-in-t-generalized} we have $\dim(S_{n,k,s}) \geq |{\mathcal{OP}}_{n,k,s}|$. Lemma~\ref{skip-leading-terms-generalized} and Lemma~\ref{size-of-n} imply that the standard monomial basis of $S_{n,k,s}$ with respect to the lexicographic term order has size $\leq |{\mathbb{N}}N_{n,k,s}| = |{\mathcal{OP}}_{n,k,s}|$, so that $\dim(S_{n,k,s}) = |{\mathcal{OP}}_{n,k,s}|$. Lemma~\ref{j-contained-in-t-generalized} gives a $G_n$-module surjection $S_{n,k,s} \twoheadrightarrow {\mathbb {C}}[{\mathcal{OP}}_{n,k,s}]$; dimension counting shows that this surjection is an isomorphism. \end{proof} \subsection{Idempotents and $e_j({\mathbf {x}}^{(i^*)})^{\perp}$} For $1 \leq j \leq n$ and $1 \leq i \leq r$, we want to develop a module-theoretic analog of acting by the operator $e_j({\mathbf {x}}^{(i^*)})^{\perp}$ on Frobenius images.
If $V$ is a $G_n$-module, acting by $e_j({\mathbf {x}}^{(i^*)})^{\perp}$ on ${\mathrm {Frob}}(V)$ will correspond to taking the image of $V$ under a certain group algebra idempotent $\epsilon_{i,j} \in {\mathbb {C}}[G_n]$. Let $1 \leq j \leq n$ and consider the corresponding parabolic subgroup $G_{(n-j,j)} = G_{n-j} \times G_j$ of $G_n$. The factor $G_j$ acts on the {\em last} $j$ letters $n-j+1, \dots, n-1, n$ of $\{1, 2, \dots, n\}$. For $1 \leq j \leq n$ and $1 \leq i \leq r$, let $\epsilon_{i,j}$ be the idempotent in the group algebra of $G_n$ given by \begin{equation} \epsilon_{i,j} := \frac{1}{r^j \cdot j!} \sum_{g \in {\mathbb {Z}}_r \wr {\mathfrak{S}}_j} {\mathrm {sign}}(g) \cdot \overline{\chi(g)^i} \cdot g \in {\mathbb {C}}[G_n]. \end{equation} (Recall that $\chi(g)$ is the product of the nonzero entries in the $j \times j$ monomial matrix $g$.) The idempotent $\epsilon_{i,j}$ commutes with the action of $G_{n-j}$. In particular, if $V$ is a $G_n$-module, then $\epsilon_{i,j} V$ is a $G_{n-j}$-module. The relationship between ${\mathrm {Frob}}(V)$ and ${\mathrm {Frob}}(\epsilon_{i,j}V)$ is as follows. \begin{lemma} \label{e-perp-on-v} Let $V$ be a $G_n$-module, let $1 \leq j \leq n$, and let $1 \leq i \leq r$. We have \begin{equation} {\mathrm {Frob}}(\epsilon_{i,j} V) = e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {Frob}}(V). \end{equation} In particular, if $V$ is graded, we have \begin{equation} {\mathrm {grFrob}}(\epsilon_{i,j} V; q) = e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {grFrob}}(V; q). \end{equation} \end{lemma} \begin{proof} The proof is a standard application of Frobenius reciprocity and symmetric function theory (and can be found in \cite{GP} in the case $r = 1$). It suffices to prove this lemma when $V$ is irreducible, so let $V = \bm{S^{\lambda}}$ for some $r$-partition ${ \bm{\lambda} } \vdash_r n$. Consider the parabolic subgroup $G_{(n-j,j)} \subseteq G_n$. 
Irreducible representations of $G_{(n-j,j)}$ have the form $\bm{S^{\mu}} \otimes \bm{S^{\nu}}$ for $\bm{\mu} \vdash_r n-j$ and $\bm{\nu} \vdash_r j$. By Frobenius reciprocity, we have \begin{align*} \text{(multiplicity of $\bm{S^{\mu}} \otimes \bm{S^{\nu}}$ in $\mathrm{Res}^{G_n}_{G_{(n-j,j)}} \bm{S^{\lambda}}$)} &= \text{(multiplicity of $\bm{S^{\lambda}}$ in $\mathrm{Ind}^{G_n}_{G_{(n-j,j)}} \bm{S^{\mu}} \otimes \bm{S^{\nu}}$)} \\ &= \text{(coefficient of $\bm{s_{\lambda}(x)}$ in $\bm{s_{\mu}(x)} \cdot \bm{s_{\nu}(x)}$)}. \end{align*} The coefficient of $\bm{s_{\lambda}(x)}$ in the Schur expansion of $\bm{s_{\mu}(x)} \cdot \bm{s_{\nu}(x)}$ is \begin{equation*} \bm{c^{\lambda}_{\mu,\nu}} := c_{\mu^{(1)}, \nu^{(1)}}^{\lambda^{(1)}} \cdots c_{\mu^{(r)}, \nu^{(r)}}^{\lambda^{(r)}}, \end{equation*} where the numbers $c_{\mu^{(1)}, \nu^{(1)}}^{\lambda^{(1)}}, \dots, c_{\mu^{(r)}, \nu^{(r)}}^{\lambda^{(r)}}$ are Littlewood-Richardson coefficients. By the last paragraph, we have the isomorphism of $G_{(n-j,j)}$-modules \begin{equation} \mathrm{Res}^{G_n}_{G_{(n-j,j)}} \bm{S^{\lambda}} \cong \bigoplus_{\substack{ \bm{\mu} \vdash_r n-j \\ \bm{\nu} \vdash_r j}} \bm{c_{\mu,\nu}^{\lambda}} (\bm{S^{\mu}} \otimes \bm{S^{\nu}}), \end{equation} which implies the isomorphism of $G_{n-j}$-modules \begin{equation} \epsilon_{i,j} \bm{S^{\lambda}} \cong \bigoplus_{\substack{ \bm{\mu} \vdash_r n-j \\ \bm{\nu} \vdash_r j}} \bm{c_{\mu,\nu}^{\lambda}} (\bm{S^{\mu}} \otimes \epsilon_{i,j} \bm{S^{\nu}}). \end{equation} However, since the idempotent $\epsilon_{i,j}$ projects onto the $\bm{\nu_0} := (\varnothing, \dots, (1^j), \dots, \varnothing)$-isotypic component of any $G_j$-module (where the nonempty partition is in position $i$), we have \begin{equation} \epsilon_{i,j} \bm{S^{\nu}} = \begin{cases} \bm{S^{\nu_0}} & \bm{\nu} = \bm{\nu_0} \\ 0 & \bm{\nu} \neq \bm{\nu_0}. 
\end{cases} \end{equation} Since $\bm{S^{\nu_0}}$ is 1-dimensional, we deduce \begin{equation} \epsilon_{i,j} \bm{S^{\lambda}} \cong \bigoplus_{\bm{\mu} \vdash_r n-j} \bm{c_{\mu, \nu_0}^{\lambda}} \bm{S^{\mu}}, \end{equation} or \begin{equation} {\mathrm {Frob}}(\epsilon_{i,j} \bm{S^{\lambda}}) = \sum_{\bm{\mu} \vdash_r n-j} \bm{c_{\mu, \nu_0}^{\lambda}} \bm{s_{\mu}}({\mathbf {x}}). \end{equation} To complete the proof, observe that ${\mathrm {Frob}}(S^{\bm{\nu_0}}) = e_j({\mathbf {x}}^{(i)})$ and apply the definition of adjoint operators (together with the dualizing operation $i \mapsto i^*$ in the relevant inner product $\langle \cdot, \cdot \rangle$). \end{proof} We will need to consider the action of the idempotent $\epsilon_{i,j}$ on polynomials in ${\mathbb {C}}[{\mathbf {x}}_n]$. Our basic tool is the following lemma describing the action of $\epsilon_{i,j}$ on monomials in the variables $x_{n-j+1}, \dots, x_n$. \begin{lemma} \label{last-variable-lemma} Let $(a_{n-j+1}, \dots, a_n)$ be a length $j$ sequence of nonnegative integers and consider the corresponding monomial $x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}$. Unless the numbers $a_{n-j+1}, \dots, a_n$ are distinct and all congruent to $-i$ modulo $r$, we have \begin{equation} \epsilon_{i,j} \cdot (x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}) = 0. \end{equation} Furthermore, if $(a'_{n-j+1}, \dots, a'_n)$ is a rearrangement of $(a_{n-j+1}, \dots, a_n)$, we have \begin{equation} \epsilon_{i,j} \cdot (x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}) = \pm \epsilon_{i,j} \cdot (x_{n-j+1}^{a'_{n-j+1}} \cdots x_n^{a'_n}). \end{equation} \end{lemma} \begin{proof} Recall that $G_n$ acts on ${\mathbb {C}}[{\mathbf {x}}_n]$ by linear substitutions. In particular, if $1 \leq \ell \leq n$ and $\pi \in {\mathfrak{S}}_n \subseteq G_n$, we have $\pi.x_{\ell} = x_{\pi_{\ell}}$. Moreover, if $g = \mathrm{diag}(g_1, \dots, g_n) \in G_n$ is a diagonal matrix, we have $g.x_{\ell} = g_{\ell}^{-1} x_{\ell}$. 
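These substitution rules can be checked by machine on small examples. The following sketch is illustrative only (the helper names are ours): we take $\zeta = e^{2\pi \sqrt{-1}/r}$, read ${\mathrm {sign}}(g)$ as the sign of the underlying permutation, and read $\chi(g)$ as the product of the nonzero entries of the monomial matrix $g$, as above. It applies $\epsilon_{i,j}$ to monomials in the last $j$ variables for $r = j = 2$ and confirms both claims of the lemma in this case.

```python
import cmath
from itertools import permutations, product
from math import factorial

r, j = 2, 2                              # small test parameters
zeta = cmath.exp(2 * cmath.pi * 1j / r)  # primitive r-th root of unity

def perm_sign(perm):
    """Sign of a permutation given in one-line form (tuple of 0..j-1)."""
    s = 1
    for a in range(len(perm)):
        for b in range(a + 1, len(perm)):
            if perm[a] > perm[b]:
                s = -s
    return s

def eps(i, exps):
    """Apply eps_{i,j} to the monomial with exponent vector exps in the last
    j variables; return {exponent vector: coefficient}, dropping numerically
    zero coefficients."""
    out = {}
    for perm in permutations(range(j)):
        for colors in product(range(r), repeat=j):
            chi = zeta ** sum(colors)  # product of nonzero entries of g
            coef = perm_sign(perm) * (chi ** i).conjugate() / (r ** j * factorial(j))
            # diagonal part: g.x_l = g_l^{-1} x_l scales x_l^{a_l} by zeta^{-c_l a_l}
            for l in range(j):
                coef *= zeta ** (-colors[l] * exps[l])
            # permutation part: pi.x_l = x_{pi(l)} moves the exponents around
            new = [0] * j
            for l in range(j):
                new[perm[l]] = exps[l]
            out[tuple(new)] = out.get(tuple(new), 0) + coef
    return {e: c for e, c in out.items() if abs(c) > 1e-9}

# repeated exponents, or exponents not congruent to -i (mod r), are killed
assert eps(1, (1, 1)) == {} and eps(1, (0, 1)) == {} and eps(2, (1, 3)) == {}
# rearranging the exponents changes the result only by a sign
d, d_swapped = eps(1, (1, 3)), eps(1, (3, 1))
assert set(d) == set(d_swapped) and all(abs(d[e] + d_swapped[e]) < 1e-9 for e in d)
```

The two assertion lines mirror the two displayed claims of the lemma for these parameters.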
Using these rules, the lemma is a routine computation. \end{proof} The group $G_j$ acts on the quotient ring $V_{n,k,j} := {\mathbb {C}}[x_{n-j+1}, \dots, x_n] / \langle x_{n-j+1}^{kr}, \dots, x_n^{kr} \rangle$. For any $1 \leq i \leq r$, let $\epsilon_{i,j} V_{n,k,j}$ be the image of $V_{n,k,j}$ under $\epsilon_{i,j}$. Then $\epsilon_{i,j} V_{n,k,j}$ is a graded vector space on which the idempotent $\epsilon_{i,j}$ acts as the identity operator. As a consequence of Lemma~\ref{last-variable-lemma}, the set of polynomials \begin{equation} \{ \epsilon_{i,j} \cdot (x_{n-j+1}^{a_{n-j+1}} \cdots x_n^{a_n}) \,:\, 0 \leq a_{n-j+1} < \cdots < a_n < kr, \text{ $a_{\ell} \equiv -i$ (mod $r$) for all $\ell$} \} \end{equation} descends to a basis for $\epsilon_{i,j} V_{n,k,j}$. Counting the degrees of the monomials appearing in the above set, we have the Hilbert series \begin{equation} {\mathrm {Hilb}}(\epsilon_{i,j} V_{n,k,j}; q) = q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r}. \end{equation} The following generalization of \cite[Lem. 6.8]{HRS} uses the spaces $\epsilon_{i,j} V_{n,k,j}$ to relate the modules $\epsilon_{i,j} S_{n,k}$ and $S_{n-j,k,k-j}$. \begin{lemma} \label{tensor-isomorphism} As graded $G_j$-modules we have $\epsilon_{i,j} S_{n,k} \cong S_{n-j,k,k-j} \otimes \epsilon_{i,j} V_{n,k,j}$. \end{lemma} \begin{proof} Write ${\mathbf {y}}_{n-j} = (y_1, \dots, y_{n-j}) = (x_1, \dots, x_{n-j})$ and ${\mathbf {z}}_j = (z_1, \dots, z_j) = (x_{n-j+1}, \dots, x_n)$, so that ${\mathbb {C}}[{\mathbf {x}}_n] = {\mathbb {C}}[{\mathbf {y}}_{n-j}, {\mathbf {z}}_j]$. The operator $\epsilon_{i,j} \in {\mathbb {C}}[G_j]$ acts on the ${\mathbf {z}}$ variables and commutes with the ${\mathbf {y}}$ variables. 
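The Hilbert series of $\epsilon_{i,j} V_{n,k,j}$ recorded above can be confirmed by brute force. The sketch below uses hypothetical helper names and represents polynomials in $q$ as dictionaries mapping exponents to coefficients; it enumerates the exponent sequences $0 \leq a_{n-j+1} < \cdots < a_n < kr$ with all $a_\ell \equiv -i$ (mod $r$) and compares their degree generating function with $q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r}$.

```python
from itertools import combinations

def q_binomial(k, j):
    """Gaussian binomial [k choose j]_q as {power: coeff}, via the Pascal
    recurrence [k,j]_q = [k-1,j-1]_q + q^j [k-1,j]_q."""
    if j < 0 or j > k:
        return {}
    if j == 0 or j == k:
        return {0: 1}
    out = dict(q_binomial(k - 1, j - 1))
    for p, c in q_binomial(k - 1, j).items():
        out[p + j] = out.get(p + j, 0) + c
    return out

def hilbert_series(k, j, i, r):
    """Degree generating function of sequences 0 <= a_1 < ... < a_j < kr
    with every a_l congruent to -i (mod r)."""
    admissible = [a for a in range(k * r) if a % r == (-i) % r]
    series = {}
    for degs in combinations(admissible, j):
        series[sum(degs)] = series.get(sum(degs), 0) + 1
    return series

def closed_form(k, j, i, r):
    """q^{j(r-i) + r*binom(j,2)} [k choose j]_{q^r}."""
    shift = j * (r - i) + r * (j * (j - 1)) // 2
    return {r * p + shift: c for p, c in q_binomial(k, j).items()}

# brute force agrees with the closed form on a range of small parameters
for r in (2, 3):
    for k in (1, 2, 3, 4):
        for j in range(0, k + 1):
            for i in range(1, r + 1):
                assert hilbert_series(k, j, i, r) == closed_form(k, j, i, r)
```

The substitution $a_\ell = r b_\ell + (r - i)$ with $0 \leq b_1 < \cdots < b_j \leq k-1$ is exactly what the closed form encodes.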
There is a natural multiplication map \begin{equation} \widetilde{\mu}: {\mathbb {C}}[{\mathbf {y}}_{n-j}] \otimes \epsilon_{i,j} V_{n,k,j} \rightarrow \epsilon_{i,j} {\mathbb {C}}[{\mathbf {x}}_n] / \epsilon_{i,j} J_{n,k} \cong \epsilon_{i,j} S_{n,k} \end{equation} coming from the assignment $f({\mathbf {y}}_{n-j}) \otimes g({\mathbf {z}}_j) \mapsto f({\mathbf {y}}_{n-j}) g({\mathbf {z}}_j)$. The map $\widetilde{\mu}$ commutes with the action of $G_{n-j}$ on the ${\mathbf {y}}$ variables. We show that $\widetilde{\mu}$ descends to the desired isomorphism. We calculate \begin{equation} \epsilon_{i,j}(e_d({\mathbf {y}}_{n-j}^r, {\mathbf {z}}_j^r)) = \sum_{a + b = d} e_a({\mathbf {y}}_{n-j}^r) \epsilon_{i,j}(e_b({\mathbf {z}}_j^r)) = e_d({\mathbf {y}}_{n-j}^r) \end{equation} for any $d > 0$. It follows that $e_d({\mathbf {y}}_{n-j}^r) \in \epsilon_{i,j} J_{n,k}$ for all $d > n-k$. For any $f({\mathbf {z}}_j) \in \epsilon_{i,j} V_{n,k,j}$ we have \begin{equation} \widetilde{\mu}(y_{\ell}^{kr} \otimes f({\mathbf {z}}_j)) = y_{\ell}^{kr} f({\mathbf {z}}_j) = y_{\ell}^{kr} \epsilon_{i,j} (f({\mathbf {z}}_j)) = \epsilon_{i,j} (y_{\ell}^{kr} f({\mathbf {z}}_j)) \in \epsilon_{i,j} J_{n,k}, \end{equation} where we used the fact that $\epsilon_{i,j}$ acts as the identity operator on $\epsilon_{i,j} V_{n,k,j}$. By the last paragraph, we have $J_{n-j,k,k-j} \otimes \epsilon_{i,j} V_{n,k,j} \subseteq \mathrm{Ker}(\widetilde{\mu})$. The map $\widetilde{\mu}$ therefore induces a map \begin{equation} \mu: S_{n-j,k,k-j} \otimes \epsilon_{i,j} V_{n,k,j} \rightarrow \epsilon_{i,j} {\mathbb {C}}[{\mathbf {x}}_n]/\epsilon_{i,j} J_{n,k} \cong \epsilon_{i,j} S_{n,k}. \end{equation} To determine the dimension of the target of $\mu$, consider the action of $\epsilon_{i,j}$ on ${\mathbb {C}}[{\mathcal{OP}}_{n,k}]$. Given $\sigma \in {\mathcal{OP}}_{n,k}$, we have $\epsilon_{i,j}.\sigma = 0$ if and only if two of the big letters $n-j+1, \dots, n-1, n$ lie in the same block of $\sigma$. 
Moreover, if $\sigma'$ is obtained from $\sigma$ by rearranging the letters $n-j+1, \dots, n-1, n$ and/or changing their colors, then $\epsilon_{i,j}.\sigma'$ is a scalar multiple of $\epsilon_{i,j}.\sigma$. By Theorem~\ref{ungraded-isomorphism-type}, the dimension of the target of $\mu$ is \begin{equation} \label{mu-dimension} \dim(\epsilon_{i,j} S_{n,k}) = \dim(\epsilon_{i,j} {\mathbb {C}}[{\mathcal{OP}}_{n,k}]) = {k \choose j} \cdot |{\mathcal{OP}}_{n-j,k,k-j}|, \end{equation} where the binomial coefficient ${k \choose j}$ comes from deciding which of the $k$ blocks of $\sigma$ receive the $j$ big letters. On the other hand, Lemma~\ref{s-dimension-lemma-generalized} and the discussion after Lemma~\ref{last-variable-lemma} imply that the domain of $\mu$ also has dimension given by (\ref{mu-dimension}). To prove that $\mu$ gives the desired isomorphism, it is therefore enough to show that $\mu$ is surjective. To see that $\mu$ is surjective, let ${\mathbb {C}}C_{n,k,j}$ be the set of polynomials of the form $\epsilon_{i,j} m({\mathbf {x}}_n)$, where $m({\mathbf {x}}_n) = m({\mathbf {y}}_{n-j}) \cdot m({\mathbf {z}}_j) \in {\mathbb{N}}N_{n,k}$ has the property that $m({\mathbf {z}}_j) = z_1^{a_1} \cdots z_j^{a_j}$ with $a_1 < \cdots < a_j$ and $a_{\ell} \equiv -i$ (mod $r$) for all $\ell$. We claim that ${\mathbb {C}}C_{n,k,j}$ descends to a basis of $\epsilon_{i,j} S_{n,k}$. Since ${\mathbb{N}}N_{n,k}^r$ is a basis of $S_{n,k}$, the set $\{ \epsilon_{i,j} m({\mathbf {x}}_n) \,:\, m({\mathbf {x}}_n) \in {\mathbb{N}}N_{n,k} \}$ spans $\epsilon_{i,j} S_{n,k}$. Let $m({\mathbf {x}}_n) = m({\mathbf {y}}_{n-j}) \cdot m({\mathbf {z}}_j) \in {\mathbb{N}}N_{n,k}$. By Lemma~\ref{last-variable-lemma}, we have $\epsilon_{i,j} m({\mathbf {x}}_n) = 0$ unless $m({\mathbf {z}}_j) = z_1^{a_1} \cdots z_j^{a_j}$ with $(a_1, \dots, a_j)$ distinct and $a_{\ell} \equiv -i$ (mod $r$) for all $\ell$. 
Also, if $m({\mathbf {z}}_j)' = z_1^{a_1'} \cdots z_j^{a_j'}$ for any permutation $(a_1', \dots, a_j')$ of $(a_1, \dots, a_j)$, then $\epsilon_{i,j} m({\mathbf {x}}_n) = \pm \epsilon_{i,j} (m({\mathbf {y}}_{n-j}) \cdot m({\mathbf {z}}_j)')$. It follows that ${\mathbb {C}}C_{n,k,j}$ descends to a spanning set of $\epsilon_{i,j} S_{n,k}$. Lemmas~\ref{nonskip-monomial-factor}, \ref{size-of-n}, and \ref{s-dimension-lemma-generalized} imply \begin{equation} |{\mathbb {C}}C_{n,k,j}| = {k \choose j} \cdot |{\mathcal{OP}}_{n-j,k,k-j}| = \dim(\epsilon_{i,j} S_{n,k}). \end{equation} It follows that ${\mathbb {C}}C_{n,k,j}$ descends to a basis of $\epsilon_{i,j} S_{n,k}$. Consider a typical element $\epsilon_{i,j} m({\mathbf {x}}_n) = m({\mathbf {y}}_{n-j}) \cdot \epsilon_{i,j} m({\mathbf {z}}_j) \in {\mathbb {C}}C_{n,k,j}$. We have \begin{equation} \mu(m({\mathbf {y}}_{n-j}) \otimes \epsilon_{i,j} m({\mathbf {z}}_j)) = m({\mathbf {y}}_{n-j}) \cdot \epsilon_{i,j} m({\mathbf {z}}_j) = \epsilon_{i,j} m({\mathbf {x}}_n), \end{equation} so that $\epsilon_{i,j} m({\mathbf {x}}_n)$ lies in the image of $\mu$. It follows that $\mu$ is surjective. \end{proof} By Lemma~\ref{tensor-isomorphism}, we have \begin{align} e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {grFrob}}(S_{n,k}; q) &= {\mathrm {Hilb}}(\epsilon_{i,j} V_{n,k,j}; q) \cdot {\mathrm {grFrob}}(S_{n-j,k,k-j}; q) \\ &= q^{j \cdot (r-i) + r \cdot {j \choose 2}} {k \brack j}_{q^r} \cdot {\mathrm {grFrob}}(S_{n-j,k,k-j}; q). \end{align} If we want ${\mathrm {grFrob}}(S_{n,k}; q)$ to satisfy the same recursion that $\bm{D_{n,k}}({\mathbf {x}};q)$ satisfies from Lemma~\ref{d-under-e-perp}, our goal is therefore the following. \begin{lemma} \label{target-lemma} \begin{equation} \label{target-equation} {\mathrm {grFrob}}(S_{n-j,k,k-j};q) = \sum_{m = \max(1,k-j)}^{\min(k,n-j)} q^{r \cdot (k-m) \cdot (n-j-m)} {j \brack k-m}_{q^r} {\mathrm {grFrob}}(S_{n-j,m}; q).
\end{equation} \end{lemma} \begin{proof} This is proven using the same reasoning as in the proofs of \cite[Lem. 6.9, Lem. 6.10]{HRS}; one just makes the change of variables $(x_1, \dots, x_n) \mapsto (x_1^r, \dots, x_n^r)$ and $q \mapsto q^r$. \end{proof} We are ready to describe the graded isomorphism types of $S_{n,k}$ and $R_{n,k}$. \begin{theorem} \label{graded-isomorphism-type} Let $n, k,$ and $r$ be positive integers with $n \geq k$ and $r \geq 2$. We have \begin{equation} {\mathrm {grFrob}}(S_{n,k}; q) = \bm{D_{n,k}}({\mathbf {x}}; q) \end{equation} and \begin{equation} {\mathrm {grFrob}}(R_{n,k}; q) = \sum_{z = 0}^{n-k} q^{krz} \cdot \bm{s}_{\varnothing, \dots, \varnothing, (z)}({\mathbf {x}}) \cdot \bm{D_{n-z,k}}({\mathbf {x}}; q). \end{equation} \end{theorem} When $k = n$, the graded Frobenius image of $R_{n,n} = S_{n,n}$ was calculated by Stembridge \cite{Stembridge}. \begin{proof} By Lemma~\ref{target-lemma} (and the discussion preceding it), Lemma~\ref{d-under-e-perp}, and induction, we see that \begin{equation} e_j({\mathbf {x}}^{(i^*)})^{\perp} {\mathrm {grFrob}}(S_{n,k}; q) = e_j({\mathbf {x}}^{(i^*)})^{\perp} \bm{D_{n,k}}({\mathbf {x}}; q) \end{equation} for all $j \geq 1$ and $1 \leq i \leq r$. Lemma~\ref{e-perp-lemma} therefore gives the first statement. The second statement is a consequence of Proposition~\ref{r-to-s-reduction}. \end{proof} \begin{example} Theorem~\ref{graded-isomorphism-type} may be verified directly in the case $n = k = 1$. We have $S_{1,1} = R_{1,1} = {\mathbb {C}}[x_1]/\langle x_1^r \rangle$. The group $G_1 \cong G = \langle \zeta \rangle$ acts on $S_{1,1}$ by $\zeta.x_1^i = \zeta^{-i} x_1^i$ for $0 \leq i < r$. Recalling our convention for the characters of the cyclic group $G$, we have \begin{equation} {\mathrm {grFrob}}(S_{1,1}; q) = \bm{s}_{\varnothing, \dots, \varnothing, (1)} \cdot q^0 + \cdots + \bm{s}_{\varnothing, (1), \dots, \varnothing} \cdot q^{r-2} + \bm{s}_{(1), \varnothing, \dots, \varnothing} \cdot q^{r-1}. 
\end{equation} On the other hand, the elements of ${\mathrm {SYT}}^r(1)$ are the tableaux \begin{equation*} (\varnothing, \varnothing, \dots, \, \, \begin{Young} 1 \end{Young} \,), \, \, \dots, \, \, (\varnothing, \begin{Young} 1 \end{Young} \, , \dots \, \varnothing), (\begin{Young} 1 \end{Young} \, , \varnothing, \dots, \varnothing). \end{equation*} The major indices of these tableaux are (from left to right) $r-1, \dots, 1, 0$. By Proposition~\ref{d-schur-expansion} we have \begin{equation} \bm{D_{1,1}}({\mathbf {x}};q) = {\mathrm {rev}}_q \left[ \bm{s}_{\varnothing, \dots, \varnothing, (1)} \cdot q^{r-1} + \cdots + \bm{s}_{\varnothing, (1), \dots, \varnothing} \cdot q^{1} + \bm{s}_{(1), \varnothing, \dots, \varnothing} \cdot q^{0} \right], \end{equation} which agrees with Theorem~\ref{graded-isomorphism-type}. \end{example} \begin{example} Let us consider Theorem~\ref{graded-isomorphism-type} in the case $(n,k,r) = (3,2,2)$. By Proposition~\ref{d-schur-expansion}, the only elements of ${\mathrm {SYT}}^2(3)$ which contribute to $\bm{D_{3,2}}({\mathbf {x}};q)$ are those with $\geq 1$ descent. 
\begin{small} \begin{equation*} \begin{Young} 1 \\ 2 \\ 3 \\ \end{Young} \, , \, \varnothing \hspace{0.3in} \begin{Young} 1 & 2 \\ 3 \end{Young} \, , \, \varnothing \hspace{0.3in} \begin{Young} 1 & 3 \\ 2 \end{Young} \, , \, \varnothing \hspace{0.3in} \begin{Young} 1 \\ 2 \end{Young} \, , \, \begin{Young} 3 \end{Young} \hspace{0.3in} \begin{Young} 1 & 2 \end{Young} \, , \, \begin{Young} 3 \end{Young} \hspace{0.3in} \begin{Young} 1 \\ 3 \end{Young} \, , \, \begin{Young} 2 \end{Young} \hspace{0.3in} \begin{Young} 1 & 3 \end{Young} \, , \, \begin{Young} 2 \end{Young} \hspace{0.3in} \begin{Young} 2 \\ 3 \end{Young} \, , \, \begin{Young} 1 \end{Young} \end{equation*} \begin{equation*} \begin{Young} 1 \end{Young} \, , \, \begin{Young} 2 \\ 3 \end{Young} \hspace{0.3in} \begin{Young} 1 \end{Young} \, , \, \begin{Young} 2 & 3 \end{Young} \hspace{0.3in} \begin{Young} 2 \end{Young} \, , \, \begin{Young} 1 \\ 3 \end{Young} \hspace{0.3in} \begin{Young} 2 \end{Young} \, , \, \begin{Young} 1 & 3 \end{Young} \hspace{0.3in} \begin{Young} 3 \end{Young} \, , \, \begin{Young} 1 \\ 2 \end{Young} \hspace{0.3in} \varnothing \, , \, \begin{Young} 1 & 2 \\ 3 \end{Young} \hspace{0.3in} \varnothing \, , \, \begin{Young} 1 & 3 \\ 2 \end{Young} \hspace{0.3in} \varnothing \, , \, \begin{Young} 1 \\ 2 \\ 3 \end{Young} \end{equation*} \end{small} The major indices of these tableaux are (in matrix format) $\begin{pmatrix} 6 & 4 & 2 & 7 & 5 & 3 & 3 & 5 \\ 8 & 4 & 6 & 6 & 4 & 7 & 5 & 9 \end{pmatrix}$ while the descent numbers are $\begin{pmatrix} 2 & 1 & 1 & 2 & 1 & 1 & 1 & 1 \\ 2 & 1 & 1 & 1 & 1 & 1 & 1 & 2 \end{pmatrix}$. The statistic ${\mathrm {maj}}({ \bm{T}}) + r {n-k \choose 2} - r(n-k) {\mathrm {des}}({ \bm{T}})$ appearing in the exponent in Proposition~\ref{d-schur-expansion} is therefore $ \begin{pmatrix} 2 & 2 & 0 & 3 & 3 & 1 & 1 & 3 \\ 4 & 2 & 4 & 4 & 2 & 5 & 3 & 5 \end{pmatrix}. 
$ If we apply $\omega$ and multiply by ${{\mathrm {des}}({ \bm{T}}) \brack n-k}_{q^r} = [{\mathrm {des}}({ \bm{T}})]_{q^2}$, we see that $\bm{D_{3,2}}({\mathbf {x}};q)$ is the $q$-reversal of \begin{multline} \bm{s}_{(3), \varnothing} \cdot (q^2 + q^4) + \bm{s}_{(2,1), \varnothing} \cdot q^2 + \bm{s}_{(2,1), \varnothing} \cdot q^0 + \bm{s}_{(2), (1)} \cdot (q^3 + q^5) \\ + \bm{s}_{(1,1), (1)} \cdot q^3 + \bm{s}_{(2), (1)} \cdot q^1 + \bm{s}_{(1,1), (1)} \cdot q^1 + \bm{s}_{(2), (1)} \cdot q^3 \\ + \bm{s}_{(1), (2)} \cdot (q^4 + q^6) + \bm{s}_{(1), (1,1)} \cdot q^2 + \bm{s}_{(1), (2)} \cdot q^4 + \bm{s}_{(1), (1,1)} \cdot q^4 \\ + \bm{s}_{(1), (2)} \cdot q^2 + \bm{s}_{\varnothing, (2,1)} \cdot q^5 + \bm{s}_{\varnothing, (2,1)} \cdot q^3 + \bm{s}_{\varnothing, (3)} \cdot (q^5 + q^7). \end{multline} Collecting powers of $q$ and applying ${\mathrm {rev}}_q$, the graded Frobenius image ${\mathrm {grFrob}}(S_{3,2}; q)$ is \begin{multline} \label{small-expression} \bm{s}_{\varnothing, (3)} \cdot q^0 + \bm{s}_{(1), (2)} \cdot q^1 + (\bm{s}_{(2), (1)} + \bm{s}_{\varnothing, (2,1)} + \bm{s}_{\varnothing, (3)}) \cdot q^2 \\ + (\bm{s}_{(3), \varnothing} + 2 \bm{s}_{(1), (2)} + \bm{s}_{(1), (1,1)}) \cdot q^3 + (2 \bm{s}_{(2), (1)} + \bm{s}_{(1,1), (1)} + \bm{s}_{\varnothing, (2,1)}) \cdot q^4 \\ + (\bm{s}_{(3), \varnothing} + \bm{s}_{(2,1), \varnothing} + \bm{s}_{(1), (1,1)} + \bm{s}_{(1), (2)}) \cdot q^5 + (\bm{s}_{(2), (1)} + \bm{s}_{(1,1), (1)}) \cdot q^6 + \bm{s}_{(2,1), \varnothing} \cdot q^7. \end{multline} Let us calculate ${\mathrm {grFrob}}(R_{3,2}; q)$. A shorter calculation (left to the reader) shows that $\bm{D_{2,2}}({\mathbf {x}}; q)$ is given by \begin{equation} \label{new-expression} \bm{s}_{\varnothing, (2)} \cdot q^0 + \bm{s}_{(1), (1)} \cdot q^1 + (\bm{s}_{(2), \varnothing} + \bm{s}_{\varnothing, (1,1)}) \cdot q^2 + \bm{s}_{(1), (1)} \cdot q^3 + \bm{s}_{(1,1), \varnothing} \cdot q^4. 
\end{equation}
By Theorem~\ref{graded-isomorphism-type}, the Frobenius image ${\mathrm {grFrob}}(R_{3,2}; q)$ is given by adding the product of (\ref{new-expression}) and $\bm{s}_{\varnothing, (1)}({\mathbf {x}}) \cdot q^4$ to (\ref{small-expression}). Applying the Pieri rule, we see that the Schur expansion of ${\mathrm {grFrob}}(R_{3,2}; q)$ is
\begin{multline}
\text{(expression in (\ref{small-expression}))} \, + (\bm{s}_{\varnothing, (3)} + \bm{s}_{\varnothing, (2,1)}) \cdot q^4 + (\bm{s}_{(1), (2)} + \bm{s}_{(1), (1,1)}) \cdot q^5 \\
+ (\bm{s}_{(2), (1)} + \bm{s}_{\varnothing, (2,1)} + \bm{s}_{\varnothing, (1,1,1)}) \cdot q^6 + (\bm{s}_{(1), (2)} + \bm{s}_{(1), (1,1)}) \cdot q^7 + \bm{s}_{(1,1), (1)} \cdot q^8.
\end{multline}
\end{example}

\section{Conclusion}
\label{Conclusion}

In this paper we introduced a quotient $R_{n,k}$ of the polynomial ring ${\mathbb {C}}[{\mathbf {x}}_n]$ whose structure is governed by the combinatorics of the set of $k$-dimensional faces ${\mathcal{F}}_{n,k}$ in the Coxeter complex attached to $G_n$, where $G_n = {\mathbb {Z}}_r \wr {\mathfrak{S}}_n$ is a wreath product.

\begin{problem}
\label{reflection-group-generalization}
Let $W \subset GL_n({\mathbb {C}})$ be a complex reflection group and let $0 \leq k \leq n$. Find a graded $W$-module $R_{W,k}$ which generalizes $R_{n,k}$.
\end{problem}

The quotient $R_{W,k}$ in Problem~\ref{reflection-group-generalization} should have combinatorics governed by the $k$-dimensional faces ${\mathcal{F}}_{W,k}$ of some Coxeter complex-like object attached to $W$. A natural collection of groups $W$ to look at is the $G(r,p,n)$ family of reflection groups. Recall that, for positive integers $r, p, n$ with $p \mid r$, the group $G(r,p,n)$ is defined by
\begin{equation}
G(r,p,n) := \{ g \in G_n \,:\, \text{the product of the nonzero entries in $g$ is an $(r/p)^{th}$ root of unity} \}.
\end{equation}
It is well known that the $G(r,p,n)$-invariant polynomials ${\mathbb {C}}[{\mathbf {x}}_n]^{G(r,p,n)}$ have algebraically independent generators $e_1({\mathbf {x}}_n^r), e_2({\mathbf {x}}_n^r), \dots, e_{n-1}({\mathbf {x}}_n^r),$ and $(x_1 \cdots x_n)^{r/p}$. However, even in the case of $G(2,2,n)$, which is isomorphic to the real reflection group of type $D_n$, the authors have been unable to construct a quotient of ${\mathbb {C}}[{\mathbf {x}}_n]$ which carries an action of $G(2,2,n)$ and whose dimension is given by the number of $k$-dimensional faces in the $D_n$-Coxeter complex.

If $W$ is any {\em real} reflection group and $\mathbb{F}$ is any field, there is an $\mathbb{F}$-algebra $H_W(0)$ of dimension $|W|$ called the {\em 0-Hecke algebra} attached to $W$. When $W$ is the symmetric group ${\mathfrak{S}}_n$, there is an action of $H_W(0)$ on the polynomial ring $\mathbb{F}[{\mathbf {x}}_n]$ given by the isobaric Demazure operators (see \cite{HuangRhoades}). When $W = {\mathfrak{S}}_n$, Huang and Rhoades proved that the ideal
\begin{equation}
\langle h_k(x_1), h_k(x_1, x_2), \dots, h_k(x_1, x_2, \dots, x_n), e_n({\mathbf {x}}_n), e_{n-1}({\mathbf {x}}_n), \dots, e_{n-k+1}({\mathbf {x}}_n) \rangle \subseteq \mathbb{F}[{\mathbf {x}}_n]
\end{equation}
is stable under this action, and that the corresponding quotient of $\mathbb{F}[{\mathbf {x}}_n]$ gives a graded version of a natural action of $H_{{\mathfrak{S}}_n}(0)$ on $k$-block ordered set partitions of $[n]$. This suggests the following problem.

\begin{problem}
\label{zero-hecke-problem}
Let $W$ be a real reflection group of rank $n$, let $H_W(0)$ be the 0-Hecke algebra attached to $W$, and let $0 \leq k \leq n$. Describe a natural action of $H_W(0)$ on the set of $k$-dimensional faces in the Coxeter complex of $W$. Give a graded version of this action as an $H_W(0)$-stable quotient of $\mathbb{F}[{\mathbf {x}}_n]$.
\end{problem}

Another possible direction for future research is motivated by the Delta Conjecture and the {\em Parking Conjecture} of Armstrong, Reiner, and Rhoades \cite{ARR}. Let $W$ be an irreducible real reflection group with reflection representation $V$ and Coxeter number $h$, and consider a homogeneous system of parameters $\theta_1, \dots, \theta_n \in {\mathbb {C}}[V]_{h+1}$ of degree $h+1$ carrying the dual $V^*$ of the reflection representation. Armstrong et al.\ introduce an inhomogeneous deformation $(\Theta - {\mathbf {x}})$ of the ideal $(\Theta) = (\theta_1, \dots, \theta_n) \subseteq {\mathbb {C}}[V]$ generated by the $\theta_i$ and conjecture a relationship between the quotient ${\mathbb {C}}[V]/(\Theta - {\mathbf {x}})$ and the $(W \times {\mathbb {Z}}_h)$-set $\mathsf{Park}^{NC}_W$ of `$W$-noncrossing parking functions' defined via Coxeter-Catalan theory. When $W = {\mathfrak{S}}_n$ is the symmetric group, the `classical' h.s.o.p.\ quotient ${\mathbb {C}}[V]/(\Theta)$ is known to have graded Frobenius image given by (the image under $\omega$ of, after a $q$-shift) the expression in the Delta Conjecture in the case $k = n$ at the specialization $t = 1/q$. In \cite[Prob.~7.8]{HRS} the problem was posed of finding a `$k \leq n$' extension of the Parking Conjecture for any real reflection group $W$. The authors are hopeful that the quotients studied in this paper will be helpful in this endeavor.

\end{document}
\begin{document}
\maketitle
\thispagestyle{empty}
\pagestyle{empty}

\begin{abstract}
Deep CCA is a recently proposed deep neural network extension to the traditional canonical correlation analysis (CCA), and has been successful for multi-view representation learning in several domains. However, stochastic optimization of the deep CCA objective is not straightforward, because it does not decouple over training examples. Previous optimizers for deep CCA are either batch-based algorithms or stochastic optimization using large minibatches, which can have high memory consumption. In this paper, we tackle the problem of stochastic optimization for deep CCA with small minibatches, based on an iterative solution to the CCA objective, and show that we can achieve performance as good as that of previous optimizers while alleviating the memory requirement.
\end{abstract}

\section{Introduction}
\label{s:intro}

Stochastic gradient descent (SGD) is a fundamental and popular optimization method for machine learning problems~\cite{Bottou91a,Lecun_98b,Bottou04a,Zhang04b,Bertsek11a}. SGD is particularly well-suited for large-scale machine learning problems because it is extremely simple and easy to implement, it often achieves better generalization (test) performance (which is the focus of machine learning research) than sophisticated batch algorithms, and it usually achieves a large error reduction very quickly within a small number of passes over the training set~\cite{BottouBousquet08a}.
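As an illustrative aside (ours, not part of the original text), the SGD recipe just described can be written in a few lines of NumPy for a simple least-squares problem; all names and hyperparameters below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(200)

def loss(w):
    # full-batch mean squared error
    return 0.5 * np.mean((X @ w - y) ** 2)

w = np.zeros(5)
lr, batch = 0.1, 10
initial = loss(w)
for step in range(500):
    idx = rng.integers(0, 200, size=batch)        # sample a minibatch
    g = X[idx].T @ (X[idx] @ w - y[idx]) / batch  # unbiased gradient estimate
    w -= lr * g                                   # SGD step
assert loss(w) < 1e-2 * initial                   # large error reduction, quickly
```

Note that each step touches only `batch` examples, which is exactly why SGD scales to large training sets.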
One intuitive explanation for the empirical success of stochastic gradient descent on large data is that it makes better use of data redundancy, with an extreme example given by \cite{Lecun_98b}: if the training set consists of $10$ copies of the same set of examples, then computing an estimate of the gradient over one single copy is $10$ times more efficient than computing the full gradient over the entire training set, while achieving the same optimization progress in the following gradient descent step.

At the same time, ``multi-view'' data are becoming increasingly available, and methods based on canonical correlation analysis (CCA)~\cite{Hotell36a} that use such data to learn representations (features) form an active research area. The views can be multiple measurement modalities, such as simultaneously recorded audio + video~\cite{Kidron_05a,Chaudh_09a}, audio + articulation~\cite{AroraLivesc13a}, images + text~\cite{Hardoon_04a,SocherLi10a,Hodosh_13a}, or parallel text in two languages~\cite{Vinokour_03a,Haghig_08a,Chandar_14a,FaruquiDyer14a,Lu_15a}, but may also be different information extracted from the same source, such as words + context~\cite{Pennin_14a} or document text + text of inbound hyperlinks~\cite{BickelScheff04a}. The presence of multiple information sources presents an opportunity to learn better representations (features) by analyzing multiple views simultaneously.
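The redundancy argument can be checked directly: for a loss that decouples over examples, such as mean squared error, the gradient computed on a single copy of a duplicated training set coincides with the full-batch gradient over all copies. A minimal NumPy sketch (our illustration, with made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))   # one copy of the data
y = rng.standard_normal(50)
w = rng.standard_normal(5)

def grad(Xb, yb, w):
    # gradient of the mean squared error (1/2n) * ||Xb w - yb||^2
    return Xb.T @ (Xb @ w - yb) / len(yb)

X10, y10 = np.tile(X, (10, 1)), np.tile(y, 10)  # training set = 10 copies
# The gradient over a single copy equals the full gradient over all ten
# copies, at one tenth of the cost.
assert np.allclose(grad(X, y, w), grad(X10, y10, w))
```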
Among various multi-view learning approaches, the recently proposed deep canonical correlation analysis~\cite{Andrew_13a}, which extends traditional CCA with deep neural networks (DNNs), has been shown to be advantageous over previous methods in several domains~\cite{Wang_15a,Wang_15b,YanMikolaj15a}, and scales to large data better than its nonparametric counterpart, kernel CCA~\cite{LaiFyfe00a,BachJordan02a,Hardoon_04a}. In contrast with most DNN-based methods, the objective of deep CCA couples together all of the training examples through its whitening constraint, making stochastic optimization challenging. Previous optimizers for this model are either batch-based, e.g., limited-memory BFGS (L-BFGS)~\cite{Nocedal80a} as in \cite{Andrew_13a}, or use stochastic optimization with large minibatches~\cite{Wang_15a}, because it is difficult to obtain an accurate estimate of the gradient from a small subset of the training examples (again due to the whitening constraint). As a result, these approaches have high memory complexity and may not be practical for large DNN models with hundreds of millions of weight parameters (common with web-scale data~\cite{Dean_12a}), or when one would like to run the training procedure on GPUs, which are equipped with faster but smaller (more expensive) memory than CPUs. In such cases there is not enough memory to store all intermediate hidden activations of the batch or large minibatch used in error backpropagation.

In this paper, we tackle this problem with two key ideas. First, we reformulate the CCA solution in terms of orthogonal iterations, and embed the DNN parameter training in the orthogonal iterations via a nonlinear least squares regression objective, which naturally decouples over training examples.
Second, we use adaptive estimates of the covariances appearing in the CCA whitening constraints and carry out whitening \emph{only} for the minibatch used at each step to obtain training signals for the DNNs. This results in a stochastic optimization algorithm that can operate on small minibatches and thus consumes little memory. Empirically, the new stochastic optimization algorithm performs as well as previous optimizers in terms of convergence speed, even when using small minibatches with which the previous stochastic approach makes no training progress.

In the following sections, we briefly introduce deep CCA and discuss the difficulties in training it (Section~\ref{s:dcca}); motivate and propose our new algorithm (Section~\ref{s:algorithm}); describe related work (Section~\ref{s:related}); and present experimental results comparing different optimizers (Section~\ref{s:experiments}).

\section{Deep CCA}
\label{s:dcca}

\noindent\textbf{Notation.} In the multi-view feature learning setting, we have access to paired observations from two views, denoted $\{(\mathbf{x}_1,\mathbf{y}_1),\dots,(\mathbf{x}_N,\mathbf{y}_N)\}$, where $N$ is the training set size, $\mathbf{x}_i\in \mathbb{R}^{D_x}$ and $\mathbf{y}_i\in \mathbb{R}^{D_y}$ for $i=1,\dots,N$. We also denote the data matrices for View 1 and View 2 by $\mathbf{X}=[\mathbf{x}_1,\dots,\mathbf{x}_N]$ and $\mathbf{Y}=[\mathbf{y}_1,\dots,\mathbf{y}_N]$, respectively.
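One standard way to realize such adaptive covariance estimates is an exponential moving average over minibatch covariances. The sketch below is purely illustrative; the function name `update_cov` and the particular update rule are our assumptions, not necessarily the exact update used by the algorithm in this paper:

```python
import numpy as np

def update_cov(Sigma, F_batch, rho=0.9):
    """Exponential-moving-average estimate of an auto-covariance matrix.

    Sigma:   current running estimate, shape (d, d)
    F_batch: centered minibatch features, shape (d, n)
    rho:     forgetting factor; rho = 0 reduces to the plain minibatch covariance
    """
    batch_cov = F_batch @ F_batch.T / F_batch.shape[1]
    return rho * Sigma + (1.0 - rho) * batch_cov

rng = np.random.default_rng(1)
F = rng.standard_normal((4, 32))   # one minibatch of d = 4 features
Sigma = np.eye(4)                  # running estimate
Sigma = update_cov(Sigma, F)
# the estimate stays a symmetric d x d matrix
assert Sigma.shape == (4, 4) and np.allclose(Sigma, Sigma.T)
# with rho = 0 the estimate is exactly the minibatch covariance
assert np.allclose(update_cov(np.eye(4), F, rho=0.0), F @ F.T / 32)
```

Keeping such running estimates lets each step whiten using information accumulated over many past minibatches, rather than only the current small one.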
We use bold-face letters, e.g.~$\mathbf{f}$, to denote mappings implemented by DNNs, with a corresponding set of learnable parameters, denoted, e.g., $\mathbf{W}_\mathbf{f}$. The dimensionality of the learned features is denoted $L$.

\begin{figure}
\centering
\psfrag{x}[][]{$\mathbf{x}$}
\psfrag{y}[][]{$\mathbf{y}$}
\psfrag{v1}[][][0.8]{View 1}
\psfrag{v2}[][][0.8][90]{View 2}
\psfrag{U}[][]{$\mathbf{U}$}
\psfrag{V}[][]{$\mathbf{V}$}
\psfrag{f}[][]{$\mathbf{f}$}
\psfrag{g}[][]{$\mathbf{g}$}
\includegraphics[width=0.55\linewidth]{dcca.eps}
\caption{Schematic diagram of deep canonical correlation analysis.}
\label{f:dcca}
\end{figure}

Deep CCA (DCCA)~\cite{Andrew_13a} extends (linear) CCA~\cite{Hotell36a} by extracting $d_x$- and $d_y$-dimensional nonlinear features with two DNNs $\mathbf{f}$ and $\mathbf{g}$ for views 1 and 2 respectively, such that the canonical correlation (measured by CCA) between the DNN outputs is maximized, as illustrated in Fig.~\ref{f:dcca}.
The goal of the final CCA step is to find $L \le \min(d_x,d_y)$ pairs of linear projection vectors, collected in $\mathbf{U} \in \mathbb{R}^{d_x \times L}$ and $\mathbf{V} \in \mathbb{R}^{d_y \times L}$, such that the projections of each view (a.k.a.\ canonical variables~\cite{Hotell36a}) are maximally correlated with their counterparts in the other view, subject to the constraint that the dimensions of the representation are uncorrelated with each other. Formally, the DCCA objective can be written as\footnote{In this paper, we use the scaled covariance matrices (scaled by $N$) so that the dimensions of the projection are orthonormal and comply with the custom of orthogonal iterations.}
\begin{gather}
\label{e:dcca}
\max_{\mathbf{W}_\mathbf{f}, \mathbf{W}_\mathbf{g}, \mathbf{U}, \mathbf{V}} \quad \operatorname{tr}\left( \mathbf{U}^\top \mathbf{F} \mathbf{G}^\top \mathbf{V} \right) \\
\text{s.t.} \quad \mathbf{U}^\top \mathbf{F} \mathbf{F}^\top \mathbf{U} = \mathbf{V}^\top \mathbf{G} \mathbf{G}^\top \mathbf{V} = \mathbf{I}, \nonumber
\end{gather}
where $\mathbf{F}=\mathbf{f}(\mathbf{X})=[\mathbf{f}(\mathbf{x}_1),\dots,\mathbf{f}(\mathbf{x}_N)] \in \mathbb{R}^{d_x \times N}$ and $\mathbf{G}=\mathbf{g}(\mathbf{Y})=[\mathbf{g}(\mathbf{y}_1),\dots,\mathbf{g}(\mathbf{y}_N)] \in \mathbb{R}^{d_y \times N}$.
We assume that $\mathbf{F}$ and $\mathbf{G}$ are centered at the origin for notational simplicity; if they are not, we can center them as a pre-processing step. Notice that if we use the original input data without further feature extraction, i.e.~$\mathbf{F}=\mathbf{X}$ and $\mathbf{G}=\mathbf{Y}$, then we recover the CCA objective. In DCCA, the final features (projections) are
\begin{gather}
\label{e:concat}
\tilde{\mathbf{f}}(\mathbf{x})=\mathbf{U}^\top \mathbf{f}(\mathbf{x}) \qquad \text{and} \qquad \tilde{\mathbf{g}}(\mathbf{y})=\mathbf{V}^\top \mathbf{g}(\mathbf{y}).
\end{gather}
We observe that the last CCA step, with linear projection mappings $\mathbf{U}$ and $\mathbf{V}$, can be regarded as adding a linear layer on top of the feature extraction networks $\mathbf{f}$ and $\mathbf{g}$ respectively.
In the following, we sometimes refer to the concatenated networks $\tilde{\mathbf{f}}$ and $\tilde{\mathbf{g}}$ as defined in \eqref{e:concat}, with parameters $\mathbf{W}_{\tilde{\mathbf{f}}}=\{\mathbf{W}_\mathbf{f},\mathbf{U}\}$ and $\mathbf{W}_{\tilde{\mathbf{g}}}=\{\mathbf{W}_\mathbf{g},\mathbf{V}\}$.\footnote{In principle there is no need for the final linear layer; we could define DCCA such that the correlation objective and constraints are imposed on the final nonlinear layer. However, the linearity of the final layer is crucial for algorithmic implementations such as ours.}

Let $\boldsymbol{\Sigma}_{fg}= \mathbf{F} \mathbf{G}^\top$, $\boldsymbol{\Sigma}_{ff}=\mathbf{F} \mathbf{F}^\top$ and $\boldsymbol{\Sigma}_{gg}=\mathbf{G} \mathbf{G}^\top$ be the (scaled) cross- and auto-covariance matrices of the feature-mapped data in the two views. It is well known that, when $\mathbf{f}$ and $\mathbf{g}$ are fixed, the last CCA step in \eqref{e:dcca} has a closed-form solution, as follows. Define $\mathbf{T}=\boldsymbol{\Sigma}_{ff}^{-\frac{1}{2}} \boldsymbol{\Sigma}_{fg} \boldsymbol{\Sigma}_{gg}^{-\frac{1}{2}}$, and let $\mathbf{T}=\tilde{\mathbf{U}} \Lambda \tilde{\mathbf{V}}^\top$ be its rank-$L$ singular value decomposition (SVD), where $\Lambda$ contains the singular values $\sigma_1 \ge \dots \ge \sigma_L \ge 0$ on its diagonal.
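The closed-form solution just described is straightforward to implement. The following NumPy sketch is our illustration (the small ridge term added to the covariances is a numerical-stability device, not part of the mathematical derivation, and covariances are scaled by $1/N$ here, which differs from the paper's scaling only by a constant factor):

```python
import numpy as np

def linear_cca(F, G, L):
    """Closed-form linear CCA for centered data matrices F (d_x x N), G (d_y x N)."""
    N = F.shape[1]
    Sff = F @ F.T / N + 1e-8 * np.eye(F.shape[0])  # small ridge for stability
    Sgg = G @ G.T / N + 1e-8 * np.eye(G.shape[0])
    Sfg = F @ G.T / N

    def inv_sqrt(S):
        # inverse matrix square root via eigendecomposition
        w, Q = np.linalg.eigh(S)
        return Q @ np.diag(w ** -0.5) @ Q.T

    Sff_i, Sgg_i = inv_sqrt(Sff), inv_sqrt(Sgg)
    T = Sff_i @ Sfg @ Sgg_i
    Ut, s, Vh = np.linalg.svd(T)
    U = Sff_i @ Ut[:, :L]      # U = Sigma_ff^{-1/2} U-tilde
    V = Sgg_i @ Vh.T[:, :L]    # V = Sigma_gg^{-1/2} V-tilde
    return U, V, s[:L]         # projections and canonical correlations

rng = np.random.default_rng(2)
Z = rng.standard_normal((3, 500))                        # shared latent signal
F = np.vstack([Z, rng.standard_normal((2, 500))])        # view 1 features
G = np.vstack([Z + 0.1 * rng.standard_normal((3, 500)),
               rng.standard_normal((1, 500))])           # view 2 features
F = F - F.mean(axis=1, keepdims=True)
G = G - G.mean(axis=1, keepdims=True)
U, V, corrs = linear_cca(F, G, L=3)
# whitening constraints hold: projected views have identity covariance
assert np.allclose(U.T @ (F @ F.T / 500) @ U, np.eye(3), atol=1e-4)
assert np.allclose(V.T @ (G @ G.T / 500) @ V, np.eye(3), atol=1e-4)
# canonical correlations lie in [0, 1], and the shared signal is recovered
assert np.all(corrs >= 0) and np.all(corrs <= 1 + 1e-8) and corrs[0] > 0.9
```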
Then the optimum of \eqref{e:dcca} is achieved by $(\mathbf{U},\mathbf{V})=(\boldsymbol{\Sigma}_{ff}^{-\frac{1}{2}} \tilde{\mathbf{U}}, \boldsymbol{\Sigma}_{gg}^{-\frac{1}{2}} \tilde{\mathbf{V}})$, and the optimal objective value (the total canonical correlation) is $\sum_{j=1}^L \sigma_j$.

By replacing $\max(\cdot)$ with $-\min(-(\cdot))$, and adding $1/2$ times the constraints, it is straightforward to show that \eqref{e:dcca} is equivalent to the following:
\begin{gather}
\label{e:dcca2}
\min_{\mathbf{W}_\mathbf{f}, \mathbf{W}_\mathbf{g}, \mathbf{U}, \mathbf{V}} \quad \frac{1}{2} \left\| \mathbf{U}^\top \mathbf{F} - \mathbf{V}^\top \mathbf{G} \right\|^2_F \\
\text{s.t.} \quad (\mathbf{U}^\top \mathbf{F}) (\mathbf{U}^\top \mathbf{F})^\top = (\mathbf{V}^\top \mathbf{G}) (\mathbf{V}^\top \mathbf{G})^\top = \mathbf{I}. \nonumber
\end{gather}
In other words, CCA minimizes the squared difference between the projections of the two views, subject to the whitening constraints. This alternative formulation of CCA will also shed light on our proposed algorithm for DCCA.

The DCCA objective \eqref{e:dcca} differs from typical DNN regression or classification training objectives. Typically, such objectives are unconstrained and can be written as the expectation (or sum) of error functions (e.g., squared loss or cross entropy) incurred at each training example. This property naturally suggests stochastic gradient descent (SGD) for optimization, where one iteratively generates random unbiased estimates of the gradient based on one or a few training examples (a minibatch) and takes a small step in the opposite direction.
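As a supplementary remark (ours), the equivalence between \eqref{e:dcca} and \eqref{e:dcca2} follows from expanding the Frobenius norm; under the whitening constraints each quadratic term contributes $\frac{1}{2}\operatorname{tr}(\mathbf{I}) = \frac{L}{2}$:

```latex
\begin{align*}
\frac{1}{2} \left\| \mathbf{U}^\top \mathbf{F} - \mathbf{V}^\top \mathbf{G} \right\|_F^2
&= \frac{1}{2} \operatorname{tr}\left( \mathbf{U}^\top \mathbf{F} \mathbf{F}^\top \mathbf{U} \right)
 + \frac{1}{2} \operatorname{tr}\left( \mathbf{V}^\top \mathbf{G} \mathbf{G}^\top \mathbf{V} \right)
 - \operatorname{tr}\left( \mathbf{U}^\top \mathbf{F} \mathbf{G}^\top \mathbf{V} \right) \\
&= L - \operatorname{tr}\left( \mathbf{U}^\top \mathbf{F} \mathbf{G}^\top \mathbf{V} \right),
\end{align*}
```

so, on the constraint set, minimizing the squared difference between the projections is exactly maximizing the total canonical correlation $\operatorname{tr}( \mathbf{U}^\top \mathbf{F} \mathbf{G}^\top \mathbf{V})$.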
However, the objective in \eqref{e:dcca} cannot be written as an unconstrained sum of errors. The difficulty lies in the fact that the training examples are coupled through the auto-covariance matrices (in the constraints), which cannot be reliably estimated from only a small amount of data.

When introducing deep CCA, \cite{Andrew_13a} used the L-BFGS algorithm for optimization. To compute the gradients of the objective with respect to $(\mathbf{W}_\mathbf{f},\mathbf{W}_\mathbf{g})$, one first computes the gradients\footnote{Technically we are computing subgradients, as the ``sum of singular values'' (trace norm) is not a differentiable function of the matrix.} with respect to $(\mathbf{F},\mathbf{G})$ as
\begin{align}
\label{e:gradient}
\frac{\partial \sum_{j=1}^L \sigma_j} {\partial \mathbf{F}} &= 2\Delta_{ff} \mathbf{F} + \Delta_{fg} \mathbf{G}, \\
\text{with}\qquad \Delta_{ff} &= -\frac{1}{2} \boldsymbol{\Sigma}_{ff}^{-1/2} \tilde{\mathbf{U}} \Lambda \tilde{\mathbf{U}}^\top \boldsymbol{\Sigma}_{ff}^{-1/2}, \nonumber \\
\Delta_{fg} &= \boldsymbol{\Sigma}_{ff}^{-1/2} \tilde{\mathbf{U}} \tilde{\mathbf{V}}^\top \boldsymbol{\Sigma}_{gg}^{-1/2}, \nonumber
\end{align}
where $\mathbf{T}=\tilde{\mathbf{U}}\Lambda\tilde{\mathbf{V}}^\top$ is the SVD of $\mathbf{T}$ as in the closed-form solution to CCA, and $\partial \sum_{j=1}^L \sigma_j / \partial \mathbf{G}$ has an analogous expression.
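The gradient formula \eqref{e:gradient} can be verified against a finite difference of the total canonical correlation. The sketch below is our illustration (a tiny ridge is added to the covariances for numerical stability; since the ridge is constant in $\mathbf{F}$, the formula remains exact):

```python
import numpy as np

rng = np.random.default_rng(3)
dx, dy, N, L = 4, 3, 20, 3
F = rng.standard_normal((dx, N))
G = rng.standard_normal((dy, N))

def inv_sqrt(S):
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(w ** -0.5) @ Q.T

def covs(F, G):
    # scaled covariances (F F^T, G G^T) plus a tiny ridge for stability
    return (F @ F.T + 1e-6 * np.eye(dx), G @ G.T + 1e-6 * np.eye(dy))

def total_corr(F, G):
    # sum of the top-L singular values of T = Sff^{-1/2} Sfg Sgg^{-1/2}
    Sff, Sgg = covs(F, G)
    T = inv_sqrt(Sff) @ (F @ G.T) @ inv_sqrt(Sgg)
    return np.linalg.svd(T, compute_uv=False)[:L].sum()

def grad_F(F, G):
    # analytical gradient: 2 * Delta_ff @ F + Delta_fg @ G
    Sff, Sgg = covs(F, G)
    Sff_i, Sgg_i = inv_sqrt(Sff), inv_sqrt(Sgg)
    Uf, s, Vh = np.linalg.svd(Sff_i @ (F @ G.T) @ Sgg_i)
    Ut, Vt, Lam = Uf[:, :L], Vh.T[:, :L], np.diag(s[:L])
    D_ff = -0.5 * Sff_i @ Ut @ Lam @ Ut.T @ Sff_i   # Delta_ff
    D_fg = Sff_i @ Ut @ Vt.T @ Sgg_i                # Delta_fg
    return 2 * D_ff @ F + D_fg @ G

an = grad_F(F, G)
eps = 1e-5
E = np.zeros_like(F); E[1, 2] = eps
fd = (total_corr(F + E, G) - total_corr(F - E, G)) / (2 * eps)
assert abs(an[1, 2] - fd) < 1e-4   # matches a central finite difference
```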
One can then compute the gradients with respect to $\mathbf{W}_\mathbf{f}$ and $\mathbf{W}_\mathbf{g}$ via the standard backpropagation procedure~\cite{Rumelh_86c}. From the gradient formulas, it is clear that the key to optimizing DCCA is the SVD of $\mathbf{T}$; various nonlinear optimization techniques can be used once the gradient is computed. In practice, however, batch optimization is undesirable for applications with large training sets or large DNN architectures, as each gradient step computed on the entire training set can be expensive in both memory and time.

Later, it was observed by \cite{Wang_15a} that stochastic optimization still works well even for the DCCA objective, as long as larger minibatches are used to estimate the covariances and $\mathbf{T}$ when computing the gradient with \eqref{e:gradient}. More precisely, the authors found that learning plateaus at a poor objective value if the minibatch is too small, but that fast convergence and better generalization than batch algorithms can be obtained once the minibatch size exceeds some threshold, presumably because a large minibatch contains enough information to estimate the covariances, and therefore the gradient, sufficiently accurately (the threshold minibatch size varies across datasets because they have different levels of data redundancy). Theoretically, the necessity of using large minibatches in this approach can also be established.
Let $\hat{\bSigma}_{fg}^{(n)}$ denote the empirical estimate of $\Tfg$ computed from a minibatch of $n$ samples. It can be shown that the expectation of $\hat{\bSigma}_{fg}^{(n)}$ does not equal the true $\Tfg$ computed on the entire dataset, mainly due to the nonlinearities of the matrix inversions and multiplications used to form $\Tfg$, and the nonlinearity of the ``sum of singular values'' (trace norm) of $\Tfg$; moreover, the spectral norm of the error $\norm{\hat{\bSigma}_{fg}^{(n)} - \Tfg}$ decays slowly, as $\frac{1}{\sqrt{n}}$. Consequently, the gradient estimated on a minibatch using \eqref{e:gradient} does not equal the true gradient of the objective in expectation, so the stochastic approach of \cite{Wang_15a} does not qualify as a stochastic gradient descent method for the DCCA objective.
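The $1/\sqrt{n}$ decay is easy to observe numerically. The following sketch (ours, on illustrative synthetic Gaussian views; the helper names `T_hat` and `err` are hypothetical, not from the paper) estimates $\Tfg$ on minibatches of increasing size and measures the spectral-norm error against the full-data estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 20000, 4

# Two correlated synthetic views (rows = features, columns = samples).
Z = rng.standard_normal((d, N))
F = Z + 0.5 * rng.standard_normal((d, N))
G = Z + 0.5 * rng.standard_normal((d, N))

def inv_sqrt(M):
    # Inverse square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(M)
    return (V / np.sqrt(w)) @ V.T

def T_hat(F, G):
    # T = Sigma_ff^{-1/2} Sigma_fg Sigma_gg^{-1/2} estimated on the given sample.
    n = F.shape[1]
    return inv_sqrt(F @ F.T / n) @ (F @ G.T / n) @ inv_sqrt(G @ G.T / n)

T_full = T_hat(F, G)  # treat the full-data estimate as the "true" T

def err(n, reps=50):
    # Mean spectral-norm error of the size-n minibatch estimate.
    total = 0.0
    for _ in range(reps):
        idx = rng.choice(N, size=n, replace=False)
        total += np.linalg.norm(T_hat(F[:, idx], G[:, idx]) - T_full, 2)
    return total / reps

e100, e400, e1600 = err(100), err(400), err(1600)
# Quadrupling the minibatch size roughly halves the error (1/sqrt(n) decay).
print(e100, e400, e1600)
```

Quadrupling $n$ roughly halves the observed error, consistent with the $1/\sqrt{n}$ rate.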
\section{Our algorithm}
\label{s:algorithm}

\subsection{An iterative solution to linear CCA}

\begin{algorithm}[t]
\caption{CCA projections via alternating least squares.}
\label{alg:cca-iterative}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrices $\mathbf{F}\in \ensuremath{\mathbb{R}}^{d_x \times N}$, $\mathbf{G}\in \ensuremath{\mathbb{R}}^{d_y \times N}$. Initialization $\tilde{\U}_0\in \ensuremath{\mathbb{R}}^{d_x\times L}$ s.t.\ $\tilde{\U}_0^\top \tilde{\U}_0=\mathbf{I}$.
\STATE $\mathbf{A}_0 \leftarrow \tilde{\U}_0^\top \bSigma_{ff}^{-\frac{1}{2}} \mathbf{F}$
\FOR{$t=1,2,\dots,T$}
\STATE $\mathbf{B}_t \leftarrow \mathbf{A}_{t-1} \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G}$
\STATE $\mathbf{B}_{t} \leftarrow \left(\mathbf{B}_{t}\mathbf{B}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{B}_t$
\STATE $\mathbf{A}_t \leftarrow \mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F}$
\STATE $\mathbf{A}_{t} \leftarrow \left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$
\ENDFOR
\ENSURE $\mathbf{A}_{T}$/$\mathbf{B}_{T}$ are the CCA projections of view 1/2.
\end{algorithmic}
\end{algorithm}

Our solution to \eqref{e:dcca} is inspired by the iterative solution for finding the linear CCA projections $(\U^\top \mathbf{F}, \V^\top \mathbf{G})$ for inputs $(\mathbf{F}, \mathbf{G})$, shown in Algorithm~\ref{alg:cca-iterative}. This algorithm computes the top-$L$ singular vectors $(\tilde{\U},\tilde{\V})$ of $\Tfg$ via orthogonal iterations~\cite{GolubLoan96a}. An essentially identical algorithm (named \emph{alternating least squares} for reasons that will soon become evident) appears in \cite[Algorithm 5.2]{GolubZha95a}, and according to those authors the idea goes back to J.~von Neumann.
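To make the algorithm concrete, here is a direct NumPy transcription (a sketch of ours, not the authors' code; the synthetic data and the name `cca_als` are illustrative). After enough iterations, the row space of $\mathbf{A}_T$ matches that of the closed-form projection $\tilde{\U}^\top\bSigma_{ff}^{-1/2}\mathbf{F}$:

```python
import numpy as np

def inv_sqrt(M):
    # Inverse square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(M)
    return (V / np.sqrt(w)) @ V.T

def cca_als(F, G, L, T=100, seed=0):
    # Algorithm 1: alternating least squares / orthogonal iterations for CCA.
    dx, N = F.shape
    rng = np.random.default_rng(seed)
    U0, _ = np.linalg.qr(rng.standard_normal((dx, L)))  # orthonormal init
    A = U0.T @ inv_sqrt(F @ F.T / N) @ F                # A_0
    for _ in range(T):
        B = A @ G.T @ np.linalg.solve(G @ G.T, G)       # regress onto view 2
        B = inv_sqrt(B @ B.T) @ B                       # whiten rows of B
        A = B @ F.T @ np.linalg.solve(F @ F.T, F)       # regress onto view 1
        A = inv_sqrt(A @ A.T) @ A                       # whiten rows of A
    return A, B

# Synthetic two-view data sharing an L-dimensional latent signal.
rng = np.random.default_rng(1)
N, d, L = 5000, 6, 2
Z = rng.standard_normal((L, N))
F = rng.standard_normal((d, L)) @ Z + 0.3 * rng.standard_normal((d, N))
G = rng.standard_normal((d, L)) @ Z + 0.3 * rng.standard_normal((d, N))

A, B = cca_als(F, G, L)

# Closed-form reference: top-L left singular vectors of Tfg.
Sff, Sgg, Sfg = F @ F.T / N, G @ G.T / N, F @ G.T / N
U_tilde = np.linalg.svd(inv_sqrt(Sff) @ Sfg @ inv_sqrt(Sgg))[0][:, :L]
P = U_tilde.T @ inv_sqrt(Sff) @ F
P = inv_sqrt(P @ P.T) @ P                               # same row normalization

# Cosines of the principal angles between the two row spaces are ~1.
cos = np.linalg.svd(A @ P.T, compute_uv=False)
print(cos)
```

Since both row bases are orthonormal, the singular values of $\mathbf{A}\mathbf{P}^\top$ are the cosines of the principal angles between the two subspaces; values near one confirm convergence.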
A similar algorithm was recently used by \cite[Algorithm~1]{LuFoster14a} for large-scale linear CCA with high-dimensional sparse inputs, although their algorithm either omits the whitening operations $\mathbf{A}_{t} \leftarrow \left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$ and $\mathbf{B}_{t} \leftarrow \left(\mathbf{B}_{t}\mathbf{B}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{B}_t$ or replaces them with a QR decomposition. The convergence of Algorithm~\ref{alg:cca-iterative} is characterized by the following theorem, which parallels \cite[Theorem~1]{LuFoster14a}.
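The symmetric whitening $(\mathbf{B}_t\mathbf{B}_t^\top)^{-1/2}\mathbf{B}_t$ and a QR-based orthonormalization yield different bases for the same row space, which is why either choice is admissible in the iteration. A quick numerical check (ours; the matrix `B` is an arbitrary stand-in for $\mathbf{B}_t$):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 50))   # stand-in for B_t, generically full row rank

# Symmetric whitening: (B B^T)^{-1/2} B.
w, V = np.linalg.eigh(B @ B.T)
B_sym = (V / np.sqrt(w)) @ V.T @ B

# QR-based alternative: orthonormalize the rows of B via QR of B^T.
Q, _ = np.linalg.qr(B.T)
B_qr = Q.T

# Both have orthonormal rows spanning the same row space: all principal-angle
# cosines between the two row spaces equal 1.
cos = np.linalg.svd(B_sym @ B_qr.T, compute_uv=False)
print(np.round(cos, 6))  # → [1. 1. 1.]
```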
\begin{theorem}
Let the singular values of $\Tfg$ be
\begin{gather*}
\sigma_1 \ge \dots \ge \sigma_L > \sigma_{L+1} \ge \dots \ge \sigma_{\min(d_x,d_y)}
\end{gather*}
and suppose $\tilde{\U}_0^\top \tilde{\U}$ is nonsingular. Then the output $(\mathbf{A}_T,\mathbf{B}_T)$ of Algorithm~\ref{alg:cca-iterative} converges to the CCA projections as $T\rightarrow \infty$.
\end{theorem}
\begin{proof}
We focus on showing that $\mathbf{A}_T$ converges to the view 1 projection; the proof for $\mathbf{B}_T$ is similar.
First recall that $\Tfg=\tilde{\U} \Lambda \tilde{\V}^\top$ is the rank-$L$ SVD of $\bSigma_{ff}^{-\frac{1}{2}} \bSigma_{fg} \bSigma_{gg}^{-\frac{1}{2}}$, and thus $\tilde{\U}$ contains the top-$L$ eigenvectors of $\Tfg \Tfg^\top = \tilde{\U} \Lambda^2 \tilde{\U}^\top$. Since the operation $\left(\mathbf{A} \mathbf{A}^\top\right)^{-\frac{1}{2}}\mathbf{A}$ extracts an orthonormal basis of the row space of $\mathbf{A}$, at iteration $t$ we can write
\begin{align*}
\mathbf{A}_{t-1} \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G} & = \mathbf{P}_t \mathbf{B}_t\\
\mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F} & = \mathbf{Q}_t \mathbf{A}_t
\end{align*}
where $\mathbf{P}_t \in \ensuremath{\mathbb{R}}^{L\times L}$ and $\mathbf{Q}_t \in \ensuremath{\mathbb{R}}^{L\times L}$ are coefficient matrices representing the left-hand side matrices in their row-space bases (nonsingular by the assumption that $\tilde{\U}_0^\top \tilde{\U}$ is nonsingular).
Combining the above two equations gives the following recursion at iteration $t$:
\begin{gather*}
\mathbf{A}_{t-1} \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F} = \mathbf{P}_t \mathbf{Q}_t \mathbf{A}_t.
\end{gather*}
By induction, it can be shown that by the end of iteration $t$ we have
\begin{multline*}
\mathbf{A}_0 \left( \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F} \right)^t = \mathbf{O}_t \mathbf{A}_t,
\end{multline*}
where $\mathbf{O}_t=\mathbf{P}_1 \mathbf{Q}_1 \dots \mathbf{P}_t \mathbf{Q}_t \in \ensuremath{\mathbb{R}}^{L\times L}$ is nonsingular.
Plugging in the definition of $\mathbf{A}_0$, this equation reduces to
\begin{gather}
\tilde{\U}_0^\top \left(\Tfg \Tfg^\top \right)^t \bSigma_{ff}^{-\frac{1}{2}} \mathbf{F} = \mathbf{O}_t \mathbf{A}_t.
\end{gather}
It is then clear that $\mathbf{A}_t$ can be written as
\begin{gather*}
\mathbf{A}_t = \tilde{\U}_t^\top \bSigma_{ff}^{-\frac{1}{2}} \mathbf{F}
\end{gather*}
with
\begin{gather*}
\tilde{\U}_t = \left(\Tfg \Tfg^\top \right)^t \tilde{\U}_0 \mathbf{O}_t^{-1} \; \in \ensuremath{\mathbb{R}}^{d_x\times L}.
\end{gather*}
Since $\mathbf{A}_t$ has orthonormal rows, we have
\begin{gather*}
\mathbf{I}=\mathbf{A}_t \mathbf{A}_t^\top = \tilde{\U}_t^\top \bSigma_{ff}^{-\frac{1}{2}} (\mathbf{F} \mathbf{F}^\top) \bSigma_{ff}^{-\frac{1}{2}} \tilde{\U}_t = \tilde{\U}_t^\top \tilde{\U}_t,
\end{gather*}
indicating that $\tilde{\U}_t$ has orthonormal columns. As a result, we may view the algorithm as working implicitly in the space of $\{ \tilde{\U}_t\in \ensuremath{\mathbb{R}}^{d_x\times L},\ t=0,\dots,T\}$, with
\begin{gather}
\label{e:orth-iteration}
(\Tfg \Tfg^\top)^T \tilde{\U}_0 = \mathbf{O}_T \tilde{\U}_T.
\end{gather}
Following the argument of \cite[Theorem~8.2.2]{GolubLoan96a} for orthogonal iterations, under the assumptions of our theorem, the column space of $\tilde{\U}_T$ converges to that of $\tilde{\U}$, the top-$L$ eigenvectors of $\Tfg \Tfg^\top$, at a linear rate governed by the ratio $\sigma_{L+1}/\sigma_L$. In view of the relationship between $\tilde{\U}_T$ and $\mathbf{A}_T$, we conclude that $\mathbf{A}_T$ converges to the view 1 CCA projection as $T\rightarrow \infty$.
\end{proof}

It is interesting to note that, besides the whitening operations $\left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$, the other basic operations in each iteration of Algorithm~\ref{alg:cca-iterative} are of the form
\begin{gather}\label{e:lsq}
\mathbf{A}_t \leftarrow \mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F},
\end{gather}
which solves a linear least squares (regression) problem with input $\mathbf{F}$ and target output $\mathbf{B}_{t}$ satisfying $\mathbf{B}_{t}\mathbf{B}_{t}^\top=\mathbf{I}$, i.e.,
\begin{gather*}
\min_{\mathbf{U}_t} \quad \norm{\mathbf{U}_t^\top \mathbf{F} - \mathbf{B}_t}_F^2.
\end{gather*}
Setting the gradient of this unconstrained objective to zero gives $\mathbf{U}_t=(\mathbf{F}\mathbf{F}^\top)^{-1} \mathbf{F} \mathbf{B}_t^\top$, so the optimal projection $\mathbf{U}_t^\top \mathbf{F}$ coincides with the update \eqref{e:lsq}. For \cite{LuFoster14a}, the advantage of the alternating least squares formulation over the exact solution to CCA is that it does not need to form the high-dimensional (non-sparse) matrix $\Tfg$; instead, it operates directly on the projections, which are much smaller, and the least squares problems can be solved with iterative algorithms that require only sparse matrix--vector multiplications.

\subsection{Extension to DCCA}

Our intuition for adapting Algorithm~\ref{alg:cca-iterative} to DCCA is as follows.
During DCCA optimization, the DNN weights $(\mathbf{W}_\mathbf{f},\mathbf{W}_\mathbf{g})$ are updated frequently, so the outputs $\left( \mathbf{f}(\mathbf{X}),\mathbf{g}(\mathbf{Y}) \right)$, which are the inputs to the final CCA step, change upon each weight update. The final CCA step therefore needs to adapt to a quickly evolving input distribution. On the other hand, if we update the CCA weights $(\mathbf{U},\mathbf{V})$ based on a small minibatch of data (as happens in stochastic optimization), it is intuitively wasteful to solve for $(\mathbf{U},\mathbf{V})$ to optimality rather than make a simple update based on the minibatch. Moreover, the objective of this ``simple update'' can be used to derive a gradient estimate for $(\mathbf{W}_\mathbf{f}, \mathbf{W}_\mathbf{g})$.
In view of Algorithm~\ref{alg:cca-iterative}, it is natural to embed the optimization of $(\mathbf{f}, \mathbf{g})$ into the iterative solution to linear CCA. Instead of solving the regression problem $\mathbf{F} \rightarrow \mathbf{B}_{t}$ exactly with $\mathbf{A}_t \leftarrow \mathbf{B}_{t} \mathbf{F}^\top \left( \mathbf{F} \mathbf{F}^\top \right)^{-1} \mathbf{F}$, we approximately solve the problem $\mathbf{X} \rightarrow \mathbf{B}_{t}$ on a minibatch with a gradient descent step on $(\mathbf{W}_\mathbf{f}, \mathbf{U})$ jointly (recall that $\mathbf{F}=\mathbf{f}(\mathbf{X})$ is a function of $\mathbf{W}_\mathbf{f}$).
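As an illustration of this inexact regression step (ours, with a linear map `W` standing in for the DNN $\mathbf{f}$; all names are hypothetical), one joint gradient step on the stand-ins for $(\mathbf{W}_\mathbf{f}, \mathbf{U})$ decreases the minibatch objective $\norm{\mathbf{U}^\top \mathbf{f}(\mathbf{X}_b) - \mathbf{B}_b}_F^2/\abs{b}$:

```python
import numpy as np

rng = np.random.default_rng(0)
Dx, d, L, nb = 8, 4, 2, 64
Xb = rng.standard_normal((Dx, nb))      # minibatch of view-1 inputs
Bb = rng.standard_normal((L, nb))       # stand-in for the current targets B_t
W = 0.1 * rng.standard_normal((d, Dx))  # linear stand-in for the DNN f
U = 0.1 * rng.standard_normal((d, L))   # CCA projection weights for view 1

def obj(W, U):
    # Minibatch regression objective ||U^T f(X_b) - B_b||_F^2 / |b|.
    return np.sum((U.T @ (W @ Xb) - Bb) ** 2) / nb

# Gradients via the chain rule (one-layer backpropagation).
R = U.T @ (W @ Xb) - Bb                 # residual, L x nb
gU = 2 * (W @ Xb) @ R.T / nb            # gradient w.r.t. U
gW = 2 * U @ R @ Xb.T / nb              # gradient w.r.t. W

eta = 0.01
before = obj(W, U)
after = obj(W - eta * gW, U - eta * gU)
print(before > after)                   # the joint step decreases the loss
```

With a deep network, `gW` would simply be replaced by the backpropagated gradient through $\mathbf{f}$; the structure of the update is unchanged.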
Notice that this regression objective is unconstrained and decouples over training samples, so an unbiased gradient estimate for it is easily obtained through standard backpropagation on minibatches (this gradient estimate, however, may not be unbiased for the original DCCA objective; see the discussion in Section~\ref{s:related}). The less trivial part of Algorithm~\ref{alg:cca-iterative} to implement in DCCA is the whitening operation $\left(\mathbf{A}_{t}\mathbf{A}_{t}^\top \right)^{-\frac{1}{2}} \mathbf{A}_t$, which requires $\mathbf{A}_t\in\ensuremath{\mathbb{R}}^{L\times N}$, the projections of all training samples. We would like to avoid computing $\mathbf{A}_t$ exactly, as doing so requires feeding forward the entire training set $\mathbf{X}$ with the updated $\mathbf{W}_{\tilde{\f}}$, and the computational cost of this operation is as high as (half of) the cost of evaluating the batch gradient (the latter requires both the forward and backward passes).
We bypass this difficulty by noting that the only portion of $\mathbf{A}_{t}$ we need is the updated projection of the minibatch used in the subsequent view 2 regression problem $\mathbf{Y} \rightarrow \mathbf{A}_{t}$ (corresponding to the step $\mathbf{B}_{t+1} \leftarrow \mathbf{A}_t \mathbf{G}^\top \left( \mathbf{G} \mathbf{G}^\top \right)^{-1} \mathbf{G}$ in Algorithm~\ref{alg:cca-iterative}). Therefore, if we can estimate the covariance $\bSigma_{\tf\tf}^t:=\mathbf{A}_{t}\mathbf{A}_{t}^\top$ without feeding forward the entire training set, we can estimate the updated projection for this minibatch alone.
Specifically, we estimate this quantity by\footnote{We add a small value $\epsilon>0$ to the diagonal of the covariance estimates in our implementation for numerical stability.}
\begin{gather}\label{e:memory}
\bSigma_{\tf\tf}^{t} \leftarrow \rho\bSigma_{\tf\tf}^{t-1} + (1-\rho) \frac{N}{\abs{b}} \tilde{\f}(\mathbf{X}_b)\tilde{\f}(\mathbf{X}_b)^\top,
\end{gather}
where $\rho\in[0,1]$, $\mathbf{X}_b$ denotes a minibatch of data with index set $b$, and $\abs{b}$ denotes the size (number of samples) of this minibatch. The time constant $\rho$ controls how much of the previous covariance estimate is kept in the update; a larger $\rho$ means the ``memory'' is forgotten more slowly. Assuming that the parameters do not change much from time $t-1$ to $t$, $\bSigma_{\tf\tf}^{t-1}$ will be close to $\bSigma_{\tf\tf}^{t}$, and incorporating it helps to reduce the variance of the term $\tilde{\f}(\mathbf{X}_b)\tilde{\f}(\mathbf{X}_b)^\top$ when $\abs{b}\ll N$.
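For concreteness, one step of \eqref{e:memory} can be written in a few lines of NumPy (an illustrative sketch with our own variable names, not the authors' code), and a short simulation confirms that the running estimate tracks the full-data covariance even with small minibatches:

```python
import numpy as np

def update_cov(Sigma_prev, F_batch, N, rho):
    """One exponential-moving-average covariance update:
    Sigma_t = rho * Sigma_{t-1} + (1 - rho) * (N/|b|) * F_b F_b^T,
    where F_batch (L x |b|) holds the projections of the current minibatch."""
    L, b = F_batch.shape
    return rho * Sigma_prev + (1.0 - rho) * (N / b) * (F_batch @ F_batch.T)

rng = np.random.default_rng(0)
L, N, b, rho = 4, 10000, 50, 0.99
X = rng.standard_normal((L, N))            # stand-in for the projections
Sigma = np.zeros((L, L))
for _ in range(2000):
    idx = rng.choice(N, size=b, replace=False)
    Sigma = update_cov(Sigma, X[:, idx], N, rho)
# After many updates the estimate tracks the full covariance X X^T.
rel_err = np.linalg.norm(Sigma - X @ X.T) / np.linalg.norm(X @ X.T)
print(rel_err)  # small, on the order of a few percent
```

Here the projections are held fixed for simplicity; in DCCA they drift as the weights are updated, which is exactly why $\rho$ must not be too close to $1$.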
The update in \eqref{e:memory} has a form similar to that of the widely used momentum technique in the optimization~\cite{Polyak64a} and neural network literature~\cite{Sutskev_13a,Schaul_13a}, and is also used by \cite{Brand06a,SantosMilidiu10a,Yger_12a} for online subspace tracking and anomaly detection. We note that the memory cost of $\bSigma_{\tf\tf}^{t} \in \mathbb{R}^{L\times L}$ is small, as we look for low-dimensional projections (small $L$) in practice. These advantages validate our choice of whitening operations over the more common QR decomposition used by \cite{LuFoster14a}.
\begin{algorithm}[t]
\caption{Nonlinear orthogonal iterations (NOI) for DCCA.}
\label{alg:dcca}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrices $\mathbf{X}\in \mathbb{R}^{D_x \times N}$, $\mathbf{Y}\in \mathbb{R}^{D_y \times N}$. Initialization $(\mathbf{W}_{\tilde{\f}}, \mathbf{W}_{\tilde{\g}})$, time constant $\rho$, learning rate $\eta$.
\STATE Randomly choose a minibatch $(\mathbf{X}_{b_0},\mathbf{Y}_{b_0})$
\STATE $\bSigma_{\tf\tf} \leftarrow \frac{N}{\abs{b_0}}\sum_{i\in b_0} \tilde{\f}(\mathbf{x}_i)\tilde{\f}(\mathbf{x}_i)^\top$
\STATE $\bSigma_{\tg\tg} \leftarrow \frac{N}{\abs{b_0}}\sum_{i\in b_0} \tilde{\g}(\mathbf{y}_i)\tilde{\g}(\mathbf{y}_i)^\top$
\FOR{$t=1,2,\dots,T$}
\STATE Randomly choose a minibatch $(\mathbf{X}_{b_t},\mathbf{Y}_{b_t})$
\STATE $\bSigma_{\tf\tf} \leftarrow \rho \bSigma_{\tf\tf} + (1-\rho) \frac{N}{\abs{b_t}}\sum_{i\in b_t} \tilde{\f}(\mathbf{x}_i)\tilde{\f}(\mathbf{x}_i)^\top$
\STATE $\bSigma_{\tg\tg} \leftarrow \rho \bSigma_{\tg\tg} + (1-\rho) \frac{N}{\abs{b_t}}\sum_{i\in b_t} \tilde{\g}(\mathbf{y}_i)\tilde{\g}(\mathbf{y}_i)^\top$
\STATE Compute the gradient $\partial \mathbf{W}_{\tilde{\f}}$ of the objective
\begin{gather*}
\min_{\mathbf{W}_{\tilde{\f}}}\; \frac{1}{\abs{b_t}} \sum_{i\in b_t} \norm{\tilde{\f}(\mathbf{x}_i) - \bSigma_{\tg\tg}^{-\frac{1}{2}}\tilde{\g}(\mathbf{y}_i) }^2
\end{gather*}
\STATE Compute the gradient $\partial \mathbf{W}_{\tilde{\g}}$ of the objective
\begin{gather*}
\min_{\mathbf{W}_{\tilde{\g}}}\; \frac{1}{\abs{b_t}} \sum_{i\in b_t} \norm{\tilde{\g}(\mathbf{y}_i) - \bSigma_{\tf\tf}^{-\frac{1}{2}}\tilde{\f}(\mathbf{x}_i) }^2
\end{gather*}
\STATE $\mathbf{W}_{\tilde{\f}} \leftarrow \mathbf{W}_{\tilde{\f}} - \eta \partial \mathbf{W}_{\tilde{\f}}$, $\mathbf{W}_{\tilde{\g}} \leftarrow \mathbf{W}_{\tilde{\g}} - \eta \partial \mathbf{W}_{\tilde{\g}}$
\ENDFOR
\ENSURE The updated $(\mathbf{W}_{\tilde{\f}}, \mathbf{W}_{\tilde{\g}})$.
\end{algorithmic}
\end{algorithm}
We give the resulting nonlinear orthogonal iterations (NOI) procedure for DCCA in Algorithm~\ref{alg:dcca}. Adaptive whitening is now used to obtain suitable target outputs for the regression problems when computing the derivatives $(\partial \mathbf{W}_{\tilde{\f}}, \partial \mathbf{W}_{\tilde{\g}})$, and we no longer maintain the whitened projections of the entire training set at each iteration.
Therefore, by the end of the algorithm, $(\tilde{\f}(\mathbf{X}),\tilde{\g}(\mathbf{Y}))$ may not satisfy the whitening constraints of \eqref{e:dcca}. If desired, one may run an additional CCA step on $(\tilde{\f}(\mathbf{X}),\tilde{\g}(\mathbf{Y}))$ to obtain a feasible solution of the original problem; this amounts to linear transforms in $\mathbb{R}^L$, which do not change the canonical correlations between the projections for either the training or the test set. In practice, we adaptively estimate the means of $\tilde{\f}(\mathbf{X})$ and $\tilde{\g}(\mathbf{Y})$ with an update formula similar to \eqref{e:memory} and center the samples accordingly before estimating the covariances and computing the target outputs. We also use momentum in the stochastic gradient steps for the nonlinear least squares problems, as is common in the deep learning community~\cite{Sutskev_13a}. Overall, Algorithm~\ref{alg:dcca} is intuitively quite simple: it alternates between adaptive covariance estimation/whitening and stochastic gradient steps on (a stochastic version of) the least squares objectives, without any involved gradient computation.
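To make the overall loop concrete, the following sketch instantiates NOI for the linear special case (single-layer maps, plain SGD without the momentum and mean-centering used in practice). All names, step sizes, and the eigendecomposition-based whitening are our own illustrative choices, not the implementation used in the experiments:

```python
import numpy as np

def noi_linear(X, Y, L, T=2000, b=20, rho=0.99, eta=1e-3, eps=1e-6, seed=0):
    """Sketch of NOI for the linear case f(x) = Wf x, g(y) = Wg y:
    alternate the EMA covariance update with one SGD step on each of
    the two whitened least squares objectives."""
    rng = np.random.default_rng(seed)
    (Dx, N), Dy = X.shape, Y.shape[0]
    Wf = 0.1 * rng.standard_normal((L, Dx))
    Wg = 0.1 * rng.standard_normal((L, Dy))

    def inv_sqrt(S):  # S^{-1/2} via symmetric eigendecomposition
        w, V = np.linalg.eigh(S + eps * np.eye(L))
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    idx = rng.choice(N, b, replace=False)
    F, G = Wf @ X[:, idx], Wg @ Y[:, idx]
    Sf, Sg = (N / b) * F @ F.T, (N / b) * G @ G.T   # initial estimates
    for _ in range(T):
        idx = rng.choice(N, b, replace=False)
        Xb, Yb = X[:, idx], Y[:, idx]
        F, G = Wf @ Xb, Wg @ Yb
        Sf = rho * Sf + (1 - rho) * (N / b) * F @ F.T   # adaptive covariances
        Sg = rho * Sg + (1 - rho) * (N / b) * G @ G.T
        # One SGD step per view, with the whitened other view as target.
        Wf -= eta * (2 / b) * (F - inv_sqrt(Sg) @ G) @ Xb.T
        Wg -= eta * (2 / b) * (G - inv_sqrt(Sf) @ F) @ Yb.T
    return Wf, Wg

# Tiny synthetic two-view problem sharing a latent signal.
rng = np.random.default_rng(1)
N = 500
z = rng.standard_normal((2, N))
X = np.vstack([z + 0.3 * rng.standard_normal((2, N)),
               rng.standard_normal((3, N))])
Y = np.vstack([z + 0.3 * rng.standard_normal((2, N)),
               rng.standard_normal((3, N))])
Wf, Wg = noi_linear(X, Y, L=2)
print(Wf.shape, Wg.shape)  # (2, 5) (2, 5)
```

Note that each iteration touches only a minibatch and $L\times L$ matrices, which is the point of the adaptive estimate.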
\section{Related Work}
\label{s:related}
Stochastic (and online) optimization techniques for fundamental problems such as principal component analysis and partial least squares are of continuing research interest~\cite{Krasul69a,OjaKarhun85a,WarmutKuzmin08a,Arora_12a,Arora_13a,Mitliag_13a,Balsub_13a,Shamir15a}. However, as pointed out by \cite{Arora_12a}, the CCA objective is more challenging due to the whitening constraints. Recently, \cite{Yger_12a} proposed an adaptive CCA algorithm with efficient online updates based on matrix manifolds defined by the whitening constraints; however, the goal of their algorithm is anomaly detection rather than optimizing the canonical correlation objective for a given dataset. Based on the alternating least squares formulation of CCA (Algorithm~\ref{alg:cca-iterative}), \cite{LuFoster14a} propose an iterative solution of CCA for very high-dimensional and sparse input features; the key idea is to solve the high-dimensional least squares problems with randomized PCA and (batch) gradient descent.
\begin{algorithm}[t]
\caption{CCA via gradient descent over least squares.}
\label{alg:cca-gd}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrices $\mathbf{F}\in \mathbb{R}^{d_x \times N}$, $\mathbf{G}\in \mathbb{R}^{d_y \times N}$. Initialization $\mathbf{u}_0 \in \mathbb{R}^{d_x}$, $\mathbf{v}_0 \in \mathbb{R}^{d_y}$. Learning rate $\eta$.
\FOR{$t=1,2,\dots,T$}
\STATE $\mathbf{u}_t \leftarrow \mathbf{u}_{t-1} - \eta \mathbf{F} \left(\mathbf{F}^\top \mathbf{u}_{t-1} - \frac{1}{\norm{\mathbf{v}_{t-1}^\top \mathbf{G}}} \mathbf{G}^\top \mathbf{v}_{t-1}\right)$
\STATE $\mathbf{v}_t \leftarrow \mathbf{v}_{t-1} - \eta \mathbf{G} \left(\mathbf{G}^\top \mathbf{v}_{t-1} - \frac{1}{\norm{\mathbf{u}_{t-1}^\top \mathbf{F}}} \mathbf{F}^\top \mathbf{u}_{t-1}\right)$
\ENDFOR
\STATE $\mathbf{u} \leftarrow \frac{\mathbf{u}_T}{\norm{\mathbf{u}_T^\top \mathbf{F}}}$,\quad $\mathbf{v} \leftarrow \frac{\mathbf{v}_T}{\norm{\mathbf{v}_T^\top \mathbf{G}}}$
\ENSURE $\mathbf{u}$/$\mathbf{v}$ are the CCA directions of view 1/2.
\end{algorithmic}
\end{algorithm}
Upon the submission of this paper, we became aware of the very recent publication of \cite{Ma_15b}, which extends \cite{LuFoster14a} by solving the linear least squares problems with (stochastic) gradient descent. We note that a special case of our algorithm ($\rho=0$) is equivalent to theirs for linear CCA. To see this, we give the linear CCA version of our algorithm (for a one-dimensional projection, to be consistent with the notation of \cite{Ma_15b}) in Algorithm~\ref{alg:cca-gd}, where we take a batch gradient descent step over the least squares objectives in each iteration.
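For reference, Algorithm~\ref{alg:cca-gd} can be prototyped in a few lines of NumPy. The synthetic data, random initialization, and step-size heuristic below are our own illustrative choices; on well-conditioned data the iterates approach the top canonical pair:

```python
import numpy as np

def cca_gd(F, G, T=2000, seed=0):
    """Sketch of CCA via gradient descent over least squares: interleave one
    batch gradient step on each least squares problem, then normalize.
    F (d_x x N) and G (d_y x N) are assumed centered."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(F.shape[0])
    v = rng.standard_normal(G.shape[0])
    # Step-size heuristic (ours): inverse of the larger Lipschitz constant.
    eta = 1.0 / max(np.linalg.norm(F @ F.T, 2), np.linalg.norm(G @ G.T, 2))
    for _ in range(T):
        u_new = u - eta * F @ (F.T @ u - (G.T @ v) / np.linalg.norm(v @ G))
        v_new = v - eta * G @ (G.T @ v - (F.T @ u) / np.linalg.norm(u @ F))
        u, v = u_new, v_new
    return u / np.linalg.norm(u @ F), v / np.linalg.norm(v @ G)

# Two views sharing one strongly correlated direction.
rng = np.random.default_rng(1)
N = 500
z = rng.standard_normal(N)
F = np.vstack([z + 0.3 * rng.standard_normal(N), rng.standard_normal((2, N))])
G = np.vstack([z + 0.3 * rng.standard_normal(N), rng.standard_normal((2, N))])
F -= F.mean(axis=1, keepdims=True)
G -= G.mean(axis=1, keepdims=True)
u, v = cca_gd(F, G)
corr = (u @ F) @ (v @ G)  # both projections have unit norm
print(corr)  # close to the top canonical correlation for this data
```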
This algorithm is equivalent to Algorithm~3 of \cite{Ma_15b}.\footnote{Although Algorithm~3 of \cite{Ma_15b} maintains two copies---the normalized and the unnormalized versions---of the weight parameters, we observe that the sole purpose of the normalized version in the intermediate iterations is to provide whitened target outputs for the least squares problems; our version of the algorithm eliminates this copy, and the normalized version can be retrieved by a whitening step at the end.} Though intuitively very simple, this algorithm is challenging to analyze. In~\cite{Ma_15b} it is shown that the solution to the CCA objective is a fixed point of this algorithm, but no global convergence guarantee is given. We also note that the gradients used in this algorithm are derived from the alternating least squares problems
\begin{gather*}
\min_{\mathbf{u}}\; \norm{ \mathbf{u}^\top \mathbf{F} - \frac{\mathbf{v}^\top \mathbf{G}}{\norm{\mathbf{v}^\top \mathbf{G}}} }_F^2 \text{\ and \ } \min_{\mathbf{v}}\; \norm{ \mathbf{v}^\top \mathbf{G} - \frac{\mathbf{u}^\top \mathbf{F}}{\norm{\mathbf{u}^\top \mathbf{F}}} }_F^2,
\end{gather*}
while the true CCA objective can be written as
\begin{gather*}
\min_{\mathbf{u},\mathbf{v}}\; \norm{ \frac{\mathbf{u}^\top \mathbf{F}}{\norm{\mathbf{u}^\top \mathbf{F}}} - \frac{\mathbf{v}^\top \mathbf{G}}{\norm{\mathbf{v}^\top \mathbf{G}}}}_F^2.
\end{gather*}
This shows that Algorithm~3 is \emph{not} implementing gradient descent over the CCA objective. When extending Algorithm~3 to stochastic optimization, we observe the following key differences between their algorithm and ours.
Due to the evolving $(\mathbf{W}_\f,\mathbf{W}_\g)$, the last CCA step in the DCCA model deals with different $(\f(\mathbf{X}),\g(\mathbf{Y}))$ and covariance structures at different iterates, even though the original inputs $(\mathbf{X},\mathbf{Y})$ are the same; this motivates the adaptive estimate of the covariances in \eqref{e:memory}. In the whitening steps of \cite{Ma_15b}, however, the covariances are estimated using \emph{only} the current minibatch at each iterate, without consideration of the remaining training samples or previous estimates, which corresponds to $\rho\rightarrow 0$ in our estimate. \cite{Ma_15b} also suggest using a minibatch size of order $\mathcal{O}(L)$, the dimensionality of the covariance matrices to be estimated, in order to obtain a high-accuracy estimate for whitening. As we show in the experiments, in both CCA and DCCA it is important to incorporate the previous covariance estimates ($\rho\rightarrow 1$) at each step to reduce the variance, especially when small minibatches are used.
Based on the above analysis for batch gradient descent, solving the least squares problems with stochastic gradient descent is \emph{not} implementing stochastic gradient descent over the CCA objective. Nonetheless, as shown in the experiments, this stochastic approach works remarkably well and can match the performance of batch optimization, for both linear and nonlinear CCA, and thus deserves careful analysis. Finally, we remark that other approaches for solving \eqref{e:dcca} exist. Since the difficulty lies in the whitening constraints, one can relax the constraints and repeatedly solve the Lagrangian formulation with updated Lagrange multipliers, as done by \cite{LaiFyfe00a}; or one can introduce auxiliary variables and apply the quadratic penalty method~\cite{NocedalWright06a}, as done by \cite{CarreirWang14b}. The advantage of such approaches is that there is no coupling of all training samples when optimizing over the primal variables (the DNN weight parameters), so one can easily apply SGD there; however, one also needs to handle the Lagrange multipliers or to set a schedule for the quadratic penalty parameter (which is non-trivial), and to alternately optimize over the two sets of variables repeatedly in order to obtain a solution of the original constrained problem.
\begin{table}[t]
\centering
\caption{Statistics of the two real-world datasets.}
\label{t:datasets}
\begin{tabular}{|c||c|c|c|}
\hline
dataset & training/tuning/test & $L$ & DNN architectures \\
\hline
JW11 & 30K/11K/9K & 112 & \caja{c}{c}{273-1800-1800-112\\112-1200-1200-112} \\
\hline
MNIST & 50K/10K/10K & 50 & \caja{c}{c}{392-800-800-50\\392-800-800-50} \\
\hline
\end{tabular}
\end{table}
\section{Experiments}
\label{s:experiments}
\subsection{Experimental setup}
We now demonstrate the NOI algorithm on the two real-world datasets used by \cite{Andrew_13a} when introducing DCCA. The first dataset is a subset of the University of Wisconsin X-Ray Microbeam corpus~\cite{Westbur94a}, which consists of simultaneously recorded acoustic and articulatory measurements during speech. Following \cite{Andrew_13a,Wang_15a}, the acoustic view inputs are 39D Mel-frequency cepstral coefficients and the articulatory view inputs are the horizontal/vertical displacements of 8 pellets attached to different parts of the vocal tract, each then concatenated over a 7-frame context window, for speaker `JW11'. The second dataset consists of the left/right halves of the images in the MNIST dataset~\cite{Lecun_98a}, so the input of each view consists of $28\times 14$ grayscale images. We do not tune the neural network architectures, as that is outside the scope of this paper.
Instead, we use DNN architectures similar to those used by \cite{Andrew_13a} with ReLU activations~\cite{NairHinton10a}, and we achieve better generalization performance with these architectures, mainly due to better optimization. The statistics of each dataset and the chosen DNN architectures (widths of input layer--hidden layers--output layer) are given in Table~\ref{t:datasets}. The projection dimensionality $L$ is set to 112/50 for JW11/MNIST respectively, as in \cite{Andrew_13a}; these are also the maximum possible total canonical correlations for the two datasets. We compare three optimization approaches: full batch optimization by L-BFGS~\cite{Andrew_13a}, using the implementation of \cite{Schmid12a}, which includes a good line-search procedure; stochastic optimization with large minibatches~\cite{Wang_15a}, denoted STOL; and our algorithm, denoted NOI. We create training/tuning/test splits for each dataset and measure the total canonical correlation on the test sets (measured by linear CCA on the projections) for the different optimization methods. The hyperparameters of each algorithm, including $\rho$ for NOI, the minibatch size $n=\abs{b_1}=\abs{b_2}=\dots$, and the learning rate $\eta$ and momentum $\mu$ for both STOL and NOI, are chosen by grid search on the tuning set. All methods use the same random initialization for the DNN weight parameters. We set the maximum number of iterations to $300$ for L-BFGS and the number of epochs (passes over the training set) to $50$ for STOL and NOI.
\begin{table*}[!t]
\centering
\caption{Total test set canonical correlation obtained by the different algorithms.}
\label{t:corr}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{dataset} & \multirow{2}{*}{L-BFGS} & \multicolumn{2}{|c}{STOL} & \multicolumn{4}{|c|}{NOI} \\
\cline{3-8}
&& $n=100$ & $n=500$ & $n=10$ & $n=20$ & $n=50$ & $n=100$ \\
\hline
JW11 & 78.7 & 33.0 & 86.7 & 83.6 & 86.9 & 87.9 & 89.1 \\
\hline
MNIST & 47.0 & 26.1 & 47.0 & 45.9 & 46.4 & 46.4 & 46.4 \\
\hline
\end{tabular}
\end{table*}
\begin{figure}[t]
\centering
\begin{tabular}{@{}c@{\hspace{0.03\linewidth}}c@{}}
JW11 & MNIST \\[.5ex]
\psfrag{corr}[][]{Canon.\ Corr.}
\psfrag{iteration}[t][]{epoch}
\psfrag{LBFGS n=N}[l][l][0.52]{L-BFGS $n\!=\!N$}
\psfrag{STOL n=100}[l][l][0.55]{STOL $n\!=\!100$}
\psfrag{STOL n=500}[l][l][0.55]{STOL $n\!=\!500$}
\psfrag{NOI n=10}[l][l][0.55]{NOI $n\!=\!10$}
\psfrag{NOI n=20}[l][l][0.55]{NOI $n\!=\!20$}
\psfrag{NOI n=50}[l][l][0.55]{NOI $n\!=\!50$}
\psfrag{NOI n=100}[l][l][0.55]{NOI $n\!=\!100$}
\includegraphics[width=0.50\linewidth]{JW11_varyb.eps} &
\psfrag{iteration}[t][]{epoch}
\includegraphics[width=0.47\linewidth]{MNIST_varyb.eps}
\end{tabular}
\caption{Learning curves of the different algorithms on the tuning sets, for different minibatch sizes $n$.}
\label{f:varyn}
\end{figure}
\subsection{Effect of minibatch size $n$}
In the first set of experiments, we vary the minibatch size $n$ of NOI over $\{10,20,50,100\}$, while tuning $\rho$, $\eta$ and $\mu$. Learning curves (objective value vs.~number of epochs) on the tuning set for each $n$, with the corresponding optimal hyperparameters, are shown in Fig.~\ref{f:varyn}. For comparison, we also show the learning curves of STOL with $n=100$ and $n=500$, where $\eta$ and $\mu$ are also tuned by grid search. We observe that STOL performs very well at $n=500$ (with the performance on MNIST being somewhat better due to higher data redundancy), but it cannot achieve much progress in the objective over the random initialization with $n=100$, for the reasons described earlier.
In contrast, NOI achieves very competitive performance with various small minibatch sizes, with fast improvement in the objective during the first few iterations, although larger $n$ tends to achieve slightly higher correlation on the tuning/test sets eventually. The total canonical correlations on the test sets are given in Table~\ref{t:corr}, showing that we achieve better results than \cite{Andrew_13a} with similar DNN architectures.
\subsection{Effect of time constant $\rho$}
\begin{figure}[t]
\centering
\psfrag{0}[][][.47]{$0$}
\psfrag{0.2}[][][.47]{$0.2$}
\psfrag{0.4}[][][.47]{$0.4$}
\psfrag{0.6}[][][.47]{$0.6$}
\psfrag{0.8}[][][.47]{$0.8$}
\psfrag{0.9}[][][.47]{$0.9$}
\psfrag{0.99}[][][.47]{$0.99$}
\psfrag{0.999}[][][.47]{$\, 0.999$}
\psfrag{0.9999}[][][.47]{$\quad 0.9999$}
\psfrag{1}[][][.47]{$\; 1$}
\begin{tabular}{@{}c@{\hspace{0.05\linewidth}}c@{}}
JW11 & MNIST \\[1ex]
\psfrag{corr}[][]{Canon.\ Corr.}
\psfrag{rho}[t][]{$\rho$}
\psfrag{n=10}[l][l][0.55]{$n\!=\!10$}
\psfrag{n=20}[l][l][0.55]{$n\!=\!20$}
\psfrag{n=50}[l][l][0.55]{$n\!=\!50$}
\psfrag{n=100}[l][l][0.55]{$n\!=\!100$}
\includegraphics[width=0.49\linewidth]{JW11_varyr.eps} &
\psfrag{rho}[t][]{$\rho$}
\includegraphics[width=0.46\linewidth]{MNIST_varyr.eps}
\end{tabular}
\caption{Total correlation achieved by NOI on the tuning sets for different $\rho$.}
\label{f:varyr}
\end{figure}
In the second set of experiments, we demonstrate the importance of $\rho$ in NOI for different minibatch sizes. The total canonical correlations achieved by NOI on the tuning set for $\rho\in\{0,\, 0.2,\, 0.4,\, 0.6,\, 0.8,\, 0.9,\, 0.99,\, 0.999,\, 0.9999\}$ are shown in Fig.~\ref{f:varyr}, with the other hyperparameters set to their optimal values. We confirm that for relatively large $n$, NOI works reasonably well with $\rho=0$ (i.e., using the same covariance estimate/whitening as \cite{Ma_15b}). But, as expected, when $n$ is small it is beneficial to incorporate the previous estimate of the covariance, because the covariance information contained in each small minibatch is noisy. Also, as $\rho$ gets too close to $1$, the covariance estimates are no longer adapted to the DNN outputs and the performance of NOI degrades. Moreover, we observe that the optimal value of $\rho$ appears to differ for each $n$.
\begin{figure}[t]
  \centering
  \includegraphics[width=0.8\linewidth]{MNISTCCA.eps}
  \caption{Pure stochastic optimization of linear CCA using NOI. We show the total correlation achieved by NOI with $n=1$ on the MNIST training set at different $\rho$, by the random initialization used by NOI, by the exact solution, and by STOL with $n=500$.}
  \label{f:cca-noi}
\end{figure}

\subsection{Pure stochastic optimization for CCA}

Finally, we carry out pure stochastic optimization ($n=1$) for linear CCA on the MNIST dataset. Notice that linear CCA is a special case of DCCA with $(\tilde{\f},\tilde{\g})$ both being single-layer linear networks (although we have used small weight-decay terms for the weights, leading to a slightly different objective than that of CCA). Total canonical correlations achieved by STOL with $n=500$ and by NOI (50 training epochs) on the training set with different $\rho$ values are shown in Fig.~\ref{f:cca-noi}.
The objectives of the random initialization and of the closed-form solution (by SVD) are also shown for comparison. NOI could not improve over the random initialization without memory ($\rho=0$, corresponding to the algorithm of \cite{Ma_15b}), but it gets very close to the optimal solution and matches the objective obtained by the previous large-minibatch approach as $\rho\rightarrow 1$. This result demonstrates the importance of our adaptive estimate \eqref{e:memory} also for CCA.

\section{Conclusions}
\label{s:conclusion}

In this paper, we have proposed a stochastic optimization algorithm, NOI, for training DCCA, which updates the DNN weights based on small minibatches and performs competitively with previous optimizers. One direction for future work is to better understand the convergence properties of NOI, which presents several difficulties. First, we note that convergence of the alternating least squares formulation of CCA (Algorithm~\ref{alg:cca-iterative}, or rather orthogonal iterations) is usually stated as the angle between the estimated subspace and the ground-truth subspace converging to zero. In the stochastic optimization setting, we need to relate this measure of progress (or some other measure) to the nonlinear least squares problems we are trying to solve in the NOI iterations. As discussed in Section~\ref{s:related}, even the convergence of the linear CCA version of NOI with batch gradient descent is not well understood~\cite{Ma_15b}. Second, the use of memory in estimating covariances \eqref{e:memory} complicates the analysis, and ideally we would like to come up with principled ways of determining the time constant $\rho$.
We have also tried using the same form of adaptive covariance estimates in both views for the STOL approach for computing the gradients \eqref{e:gradient}, but its performance with small minibatches is much worse than that of NOI. Presumably this is because the gradient computation of STOL suffers from noise in both views, which is further combined through various nonlinear operations, whereas the noise in the gradient computation of NOI comes only from the output target (due to inexact whitening); as a result, NOI is more tolerant of the noise resulting from using small minibatches. This deserves further analysis as well.

\bibliographystyle{IEEEtran}
\bibliography{allerton15a}
\end{document}
\begin{document} \title{Infinitely many roots of unity are zeros of some Jones polynomials} \author{Maciej Mroczkowski} \address{Institute of Mathematics\\ Faculty of Mathematics, Physics and Informatics\\ University of Gdansk, 80-308 Gdansk, Poland\\ e-mail: [email protected]} \begin{abstract} Let $N=2n^2-1$ or $N=n^2+n-1$, for any $n\ge 2$. Let $M=\frac{N-1}{2}$. We construct families of prime knots with Jones polynomials $(-1)^M\sum_{k=-M}^{M} (-1)^kt^k$. Such polynomials have Mahler measure equal to $1$. If $N$ is prime, these are cyclotomic polynomials $\Phi_{2N}(t)$, up to some shift in the powers of $t$. Otherwise, they are products of such polynomials, including $\Phi_{2N}(t)$. In particular, all roots of unity $\zeta_{2N}$ occur as roots of Jones polynomials. We also show that some roots of unity cannot be zeros of Jones polynomials. \end{abstract} \maketitle \let\thefootnote\relax\footnotetext{Mathematics Subject Classification 2020: 57K10, 57K14} \section{Introduction} We study knots $K$ with Jones polynomials that have Mahler measure equal to $1$, or $M(V_{K}(t))=1$. Such knots were considered in~\cite{CK,CK2}. A Laurent polynomial $P$, with $M(P)=1$, has the form: $t^a$ times a product of cyclotomic polynomials $\Phi_n$, $a\in\mathbb{Z}$. Following~\cite{CK2}, we call such $P$ {\it cyclotomic}. The motivation for studying such knots comes from some observed connections between the Mahler measure of the Jones polynomial of a knot and its hyperbolic volume~\cite{CK,CK2}. The Mahler measure of Jones polynomials has also been studied in~\cite{T1, T2}. In a more general context, not much is known about the question: what polynomials are Jones polynomials? This is in contrast to the Alexander polynomial: there are simple conditions on a polynomial which are sufficient and necessary for it to be the Alexander polynomial of a knot. Studying the locus of the zeros of Jones polynomials is part of the general question. 
It is shown in \cite{JZDT} that this locus is dense in $\mathbb{C}$. Our result implies that the intersection of this locus with the unit circle is dense in the unit circle. Near the end of~\cite{CK}, after listing all knots up to $16$ crossings with cyclotomic Jones polynomials (there are only $17$ such knots), the following problem is posed: ``An interesting open question is how to construct more knots with $M(V_K(t))=1$.'' In this paper, we construct four infinite families of knots with cyclotomic Jones polynomials of a particularly simple form. They include $4_1$ and $9_{42}$, with $V_{4_1}(t)=t^{-2}-t^{-1}+1-t+t^2$ and $V_{9_{42}}(t)=t^{-3}-t^{-2}+t^{-1}-1+t-t^2+t^3$. The Jones polynomials of the other knots extend these two examples. Their coefficients are finite sequences of alternating $1$'s and $-1$'s, starting and ending with $1$. In particular, there is no bound on the span of such Jones polynomials. In fact, polynomials with alternating $1$'s and $-1$'s as coefficients, starting and ending with $1$, are obtained as follows. Let $m$ be odd. It is well known that, for any $a\in\mathbb{N}$, $t^a-1=\prod_{k|a}\Phi_k(t)$. Hence, \[t^{2m}-1=\prod_{k|2m}\Phi_k(t)=\prod_{k|m}\Phi_k(t)\prod_{k|m}\Phi_{2k}(t)=(t^m-1)\prod_{k|m}\Phi_{2k}(t)\] It follows that $\prod_{k|m}\Phi_{2k}(t)=t^m+1=(t+1)(t^{m-1}-t^{m-2}+t^{m-3}-\ldots-t+1)$. Since $\Phi_2(t)=t+1$, we get: \[\prod_{k|m,k>1}\Phi_{2k}(t)=t^{m-1}-t^{m-2}+t^{m-3}-\ldots-t+1\] For $m$ odd, we introduce the following notation: \[\widetilde{\Phi}_{2m}(t):=t^{-\frac{m-1}{2}}\prod_{k|m,k>1}\Phi_{2k}(t)=t^{-\frac{m-1}{2}}-t^{-\frac{m-1}{2}+1}+\ldots- t^{\frac{m-1}{2}-1}+t^{\frac{m-1}{2}}\] We allow $m=1$, with $\widetilde{\Phi}_2(t)=1$. We will construct knots with cyclotomic Jones polynomials equal to $\widetilde{\Phi}_{2m}$ for infinitely many odd $m$. Notice that $\frac{d}{dt}[t^nP(t)]_{t=1}=P'(1)+n$ for any $n\in\mathbb{Z}$, if $P$ is a Laurent polynomial satisfying $P(1)=1$.
If $P$ is the Jones polynomial of a knot, it satisfies $P(1)=1$ and $P'(1)=0$ (see \cite{J}); hence $t^nP$ cannot be a Jones polynomial for $n\neq 0$. We say that a Laurent polynomial $P$ is {\it palindromic} if $P(t^{-1})=t^nP(t)$ for some $n\in \mathbb{Z}$; in particular, if $n=0$, we say that it is {\it symmetric}. One checks that $P'(1)=0$ if $P$ is symmetric. Hence, a palindromic Jones polynomial of a knot must be symmetric. For $n\ge 3$, cyclotomic polynomials $\Phi_n$ are palindromic of even degree, but not symmetric (since they are not Laurent polynomials). In order to make them symmetric, we multiply $\Phi_n$ by $t^{-\frac{\varphi(n)}{2}}$ (where $\varphi$ is the Euler totient function and $\varphi(n)$ is the degree of $\Phi_n$). For $n\ge 3$, we use the notation: \[\Phi^{sym}_n(t)=t^{-\frac{\varphi(n)}{2}}\Phi_n(t)\] For example, $\Phi^{sym}_{10}(t)=t^{-2}-t^{-1}+1-t+t^2$ and $\Phi^{sym}_{14}(t)=t^{-3}-t^{-2}+t^{-1}-1+t-t^2+t^3$. Notice that $\Phi^{sym}_n$ and $\Phi_n$ have the same roots. One checks that the formula for $\widetilde{\Phi}_{2m}(t)$ given above, $m$ odd, simplifies to: \[\widetilde{\Phi}_{2m}(t)=\prod_{k|m,k>1}\Phi^{sym}_{2k}(t)\] The paper is organized as follows: in section~\ref{sec:main} the main theorems are stated, while the notion of arrow diagrams and some proofs are postponed to the two subsequent sections~\ref{sec:arrows} and~\ref{sec:proofs}.

\section{Main results}\label{sec:main}

Let $W_{n,k}$, $n,k\in\mathbb{Z}$, $k\ge 0$, be the knot shown for $n=2$ and $k=3$ in Figure~\ref{fig:wnk}, in the form of an {\it arrow diagram}. In general, there are $n$ arrows on the left kink, and $k$ arrows arranged on $k$ strands, generalizing in an obvious way the case $k=3$ shown in this figure. When $n<0$, there are $|n|$ clockwise arrows on the left kink. In short, the arrows correspond to fibers in the Hopf fibration of $S^3$. A detailed explanation of arrow diagrams is postponed to section~\ref{sec:arrows}.
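Before proceeding, we note that the product formula for $\widetilde{\Phi}_{2m}$ derived in the introduction is easy to confirm with a computer algebra system. A minimal sketch in Python with sympy (our choice of tooling, not used in the paper):

```python
from sympy import symbols, cyclotomic_poly, divisors, expand, prod

t = symbols('t')

for m in (3, 9, 15, 21):  # odd values of m
    # left-hand side: product of Phi_{2k}(t) over the divisors k > 1 of m
    lhs = expand(prod(cyclotomic_poly(2 * k, t) for k in divisors(m) if k > 1))
    # right-hand side: t^{m-1} - t^{m-2} + ... - t + 1
    rhs = expand(sum((-1) ** i * t ** i for i in range(m)))
    assert lhs == rhs
```

Multiplying by $t^{-\frac{m-1}{2}}$ then gives $\widetilde{\Phi}_{2m}$ exactly as defined above.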
In section~\ref{sec:proofs}, we compute the Jones polynomials of the knots $W_{n,k}$, denoted $V_{W_{n,k}}$: \begin{figure} \caption{$W_{2,3}$} \label{fig:wnk} \end{figure} \begin{theorem}\label{thm:jonesWnk} Let $n,k\in\mathbb{Z}$, $k\ge 0$. Then, \begin{align*} V_{W_{n,k}}=&\frac{t^{\frac{n(n-1)}{2}+k(k-1)-2nk}}{t^2-1}\left(-t^{(k+2)n+1}+t^{(k+1)(n+1)}(t^{k+1}+1)\right.\\ &\left.-t^{k(n+3)+1}+t-1\right) \end{align*} \end{theorem} Denote by $D_{n,k}$ the expression in the big parentheses in the formula for $V_{W_{n,k}}$ above. We want to check for which $n,k$ the polynomial $V_{W_{n,k}}$ is symmetric, i.e.\ $V_{W_{n,k}}(t^{-1})=V_{W_{n,k}}(t)$. It is easy to see that a necessary condition is that $D_{n,k}(t^{-1})=-t^aD_{n,k}(t)$ for some $a\in\mathbb{Z}$. We say that such a $D_{n,k}$ is {\it antipalindromic}. For $k=0$, $W_{n,0}$ is an oval with $n\in\mathbb{Z}$ arrows on it. Such knots are torus knots, trivial if and only if $n\in\{1,0,-1,-2\}$, see~\cite{M1}. Thus, when $W_{n,0}$ is non-trivial, its Jones polynomial is not symmetric. \begin{theorem}\label{thm:cyclojones} Suppose that $k>0$. The polynomial $V_{W_{n,k}}(t)$ is symmetric if and only if $n=k-1$, $k$, $2k$ or $2k+1$. Furthermore, let $f(k)=k^2+k-1$ and $g(k)=2k^2-1$. Then, for $k>0$, \[V_{W_{k-1,k}}(t)=\widetilde{\Phi}_{2f(k)}(t)\] \[V_{W_{k,k}}(t)=\widetilde{\Phi}_{2f(k+1)}(t)\] \[V_{W_{2k,k}}(t)=V_{W_{2k+1,k}}(t)=\widetilde{\Phi}_{2g(k+1)}(t)\] \begin{proof} We consider $D_{n,k}$. First we check when the term $t$ or $-1$ cancels with some other term. The term $t$ cancels with a term $-t^a$ if $a=1$. This occurs when $n=0$ or $n=-3$. The term $-1$ cancels if $n=-1$ or $n=-2$. One checks that $D_{n,k}$ is not antipalindromic in all these cases, except for $(n,k)=(0,1)$. That $W_{0,1}$ is trivial is very easy to check, see section~\ref{sec:arrows}. Notice that $\widetilde{\Phi}_{2f(1)}=\widetilde{\Phi}_2=1$. Suppose now that $n$ and $k$ are such that neither $t$ nor $-1$ cancels, hence $n\ge 1$ or $n\le -4$.
Let $n_1=(k+2)n+1$, $n_2=k(n+3)+1$, $p_1=(k+1)(n+1)$ and $p_2=(k+1)(n+2)$ be the exponents of the four remaining terms in $D_{n,k}$ (two negative and two positive ones). Suppose that $n\le -4$. The four exponents are negative, with the exception $n_2=0$ for $(n,k)=(-4,1)$. The highest terms are $t-1$ (or $t-2$ in this exceptional case), the lowest is $-t^{n_1}$, and the gap between $n_1$ and any other exponent is at least $5$. Hence, $D_{n,k}$ is not antipalindromic. Suppose that $n\ge 1$. The four exponents are greater than or equal to $4$. The four terms cannot all cancel out, since otherwise $V_{W_{n,k}}$ would not be a Laurent polynomial (one may also check case by case that, if a pair of terms cancels, another pair does not cancel). Since the exponents of the terms $t$ and $-1$ differ by $1$, in order for $D_{n,k}$ to be antipalindromic, it should contain another pair of terms $t^a-t^{a-1}$ for some $a\ge 5$. One has: \[p_1-n_1=k-n,\quad p_2-n_2=n-k+1,\quad p_2-n_1=2k-n+1,\quad p_1-n_2=n-2k\] One of these four differences has to be equal to $1$, which gives four cases: \begin{itemize} \item $p_1-n_1=k-n=1$. Then, $p_2=n_2$ and these $2$ terms cancel out. Now: \[D_{k-1,k}=t^{n_1}(t-1)+t-1=(t^{k^2+k-1}+1)(t-1)\] Let $m=k^2+k-1$. One checks that: \[V_{W_{k-1,k}}=\frac{t^{-\frac{m-1}{2}}}{t^2-1}(t^m+1)(t-1)=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2f(k)}\] \item $p_2-n_2=n-k+1=1$. Then, $p_1=n_1$ and: \[D_{k,k}=t^{n_2}(t-1)+t-1=(t^{k^2+3k+1}+1)(t-1)\] Let $m=k^2+3k+1$. Again, one checks that: \[V_{W_{k,k}}=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2f(k+1)}\] \item $p_2-n_1=2k-n+1=1$. Then $p_1=n_2$ and: \[D_{2k,k}=t^{n_1}(t-1)+t-1=(t^{2k^2+4k+1}+1)(t-1)\] Let $m=2k^2+4k+1$. One checks that: \[V_{W_{2k,k}}=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2g(k+1)}\] \item $p_1-n_2=n-2k=1$. Then, $p_2=n_1$ and: \[D_{2k+1,k}=t^{n_2}(t-1)+t-1=(t^{2k^2+4k+1}+1)(t-1)\] Let $m=2k^2+4k+1$.
One checks that: \[V_{W_{2k+1,k}}=\widetilde{\Phi}_{2m}=\widetilde{\Phi}_{2g(k+1)}\] \end{itemize} \end{proof} \end{theorem} As an immediate consequence, we get: \begin{theorem} There are infinitely many roots of unity that are zeros of Jones polynomials. Such roots are dense in the unit circle. \begin{proof} Since $\Phi_{2m}|\widetilde{\Phi}_{2m}$ for any odd $m$, one has: for any $k>0$, $\zeta_{2f(k)}$, $\zeta_{2f(k+1)}$ and $\zeta_{2g(k+1)}$ are zeros of some Jones polynomials. Since $t^{2m}-1=(t^m-1)(t+1)\widetilde{\Phi}_{2m}t^{\frac{m-1}{2}}$, the roots of $\widetilde{\Phi}_{2m}$ are: \[\zeta_{2m}, \zeta_{2m}^3,\ldots,\zeta_{2m}^{m-2},\zeta_{2m}^{m+2},\ldots, \zeta_{2m}^{2m-1}\] It is clear that the roots of those $\widetilde{\Phi}_{2m}$'s that are Jones polynomials are dense in the unit circle, since there are infinitely many such $\widetilde{\Phi}_{2m}$'s. \end{proof} \end{theorem} The knots appearing in Theorem~\ref{thm:cyclojones} come in quadruplets for $k=1,2,3,\ldots$ Table~\ref{tab:k4} shows the first four quadruplets, their Jones polynomials and their crossing numbers (together with identifications for knots up to $15$ crossings; also, the knot $W_{3,1}$ is $16n_{207543}$).
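The closed formula of Theorem~\ref{thm:jonesWnk} lends itself to a direct symbolic spot-check. The sketch below (Python/sympy, our tooling choice) evaluates the formula and compares the smallest cases against the Jones polynomials of $4_1$ and $9_{42}$ quoted in the introduction:

```python
from sympy import symbols, cancel, Rational

t = symbols('t')

def jones_W(n, k):
    """V_{W_{n,k}} from the closed formula of Theorem thm:jonesWnk, k >= 0."""
    D = (-t ** ((k + 2) * n + 1)
         + t ** ((k + 1) * (n + 1)) * (t ** (k + 1) + 1)
         - t ** (k * (n + 3) + 1) + t - 1)
    shift = Rational(n * (n - 1), 2) + k * (k - 1) - 2 * n * k
    return cancel(t ** shift * D / (t ** 2 - 1))

phi_sym_10 = t**-2 - t**-1 + 1 - t + t**2                  # V(4_1)
phi_sym_14 = t**-3 - t**-2 + t**-1 - 1 + t - t**2 + t**3   # V(9_42)

assert cancel(jones_W(1, 1) - phi_sym_10) == 0
assert cancel(jones_W(2, 1) - phi_sym_14) == 0
assert cancel(jones_W(3, 1) - phi_sym_14) == 0  # W_{3,1}: same Jones polynomial
```

The three assertions reproduce the first row of Table~\ref{tab:k4}.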
\begin{table}[h] \begin{center} \caption{$W_{n,k}$ with cyclotomic Jones polynomials up to $k=4$} \label{tab:k4} \begin{tabular}{c|c|c||c|c|c||c|c|c||c|c|c} $K$&$V_K$&$c(K)$&$K$&$V_K$&$c(K)$&$K$&$V_K$&$c(K)$&$K$&$V_K$&$c(K)$\\ \hline $W_{0,1}$ & $1$& $0_1$ & $W_{1,1}$ & $\Phi^{sym}_{10}$ & $4_1$ & $W_{2,1}$ & $\Phi^{sym}_{14}$ & $9_{42}$ & $W_{3,1}$ & $\Phi^{sym}_{14}$ & $16$\\ \hline $W_{1,2}$ & $\Phi^{sym}_{10}$ &$11n_{19}$ & $W_{2,2}$ & $\Phi^{sym}_{22}$ & $\le 18$ & $W_{4,2}$ & $\Phi^{sym}_{34}$ & $\le 38$ & $W_{5,2}$ & $\Phi^{sym}_{34}$& $\le 52$\\ \hline $W_{2,3}$ & $\Phi^{sym}_{22}$ &$\le 31$& $W_{3,3}$ & $\Phi^{sym}_{38}$ &$\le 43$& $W_{6,3}$ & $\Phi^{sym}_{62}$ &$\le 89$& $W_{7,3}$ & $\Phi^{sym}_{62}$ &$\le 108$\\ \hline &&&&&&&&&&&\\[-10pt] $W_{3,4}$ & $\Phi^{sym}_{38}$ &$\le 64$& $W_{4,4}$ & $\Phi^{sym}_{58}$ &$\le 79$& $W_{8,4}$ & $\widetilde{\Phi}_{98}$ &$\le 159$& $W_{9,4}$ & $\widetilde{\Phi}_{98}$ &$\le 184$ \end{tabular} \end{center} \end{table} Notice that $98$ is the first index that is not twice a prime, hence $\widetilde{\Phi}_{98}\neq\Phi^{sym}_{98}$. The knots with $k>1$, except $W_{1,2}$, have more than $16$ crossings (since their Jones polynomials do not appear up to $16$ crossings), and their crossing numbers seem to increase rapidly. From Lemma~\ref{lem:upperc} below, $c(W_{n,k})\le k^2+(n+k)^2-1$. Using Knotscape~\cite{HT} (after removing the arrows in the diagrams, see section~\ref{sec:arrows}), the number of crossings can sometimes be reduced by $1$ or $2$. Knotscape handles diagrams up to $49$ crossings and allows one to check that the Alexander polynomial distinguishes $W_{2,2}$ from $W_{2,3}$. Since $W_{5,2}$ has a diagram with $52$ crossings, its Alexander polynomial cannot be computed with Knotscape (in order to check whether it differs from that of $W_{4,2}$). Though it seems unlikely, it is possible that some $W_{2k,k}$ is the same knot as $W_{2k+1,k}$ and/or some $W_{k,k}$ is the same knot as $W_{k,k+1}$.
As an example, running Knotscape on a diagram of $W_{4,2}$ with $39$ crossings yields a reduction to $38$ crossings with the following DT code: \texttt{38 1}\;\;\;\;\;\;\;\;\texttt{6 -14 16 -28 26 -42 68 40 -50 -60 -34 36 -44 -54 52 62 -20 46 -58 -4 2 56 64 -22 70 -76 8 -10 -66 -48 72 -74 24 12 38 18 -32 30 (best available reduction)} We turn now to some properties of the knots $W_{n,k}$. \begin{proposition}\label{prop:11} The knots $W_{n,k}$ are $(1,1)$ knots. In particular, they have tunnel number $1$, hence they are prime. \begin{proof} We postpone the proof that the $W_{n,k}$ are $(1,1)$ knots to section~\ref{sec:arrows}. Now, $(1,1)$ knots have tunnel number $1$ (see~\cite{D}), hence they are prime (see~\cite{N, S}). \end{proof} \end{proposition} \begin{proposition} The knots $W_{n,k}$ with cyclotomic Jones polynomials are non-alternating, except for $0_1$ and $4_1$. There is no bound on the twist number of such knots. \begin{proof} Suppose that $W_{n,k}$ is non-trivial and alternating with Jones polynomial equal to some $\widetilde{\Phi}_{2m}$. From~\cite{DL}, its twist number equals $2$. Such a knot is either a connected sum of torus knots of type $(2,m)$ and $(2,n)$ or a 2-bridge knot. From Proposition~\ref{prop:11}, $W_{n,k}$ is prime, so it has to be a 2-bridge knot. Using an explicit formula for Jones polynomials of 2-bridge knots with twist number $2$ in \cite{QYA}, one checks easily that all such knots, except $4_1$, have Jones polynomials that are not equal to $\widetilde{\Phi}_{2m}$ for any $m$. For the second part, it is shown in~\cite{CK2} that, for a family of links with cyclotomic Jones polynomials of unbounded span, there is no bound on the twist numbers of these links. \end{proof} \end{proposition} We turn now to some obstructions for roots of unity being zeros of Jones polynomials. It is well known that the Jones polynomial has special values at $1$, $\zeta_3$, $i$ and $\zeta_6$, see~\cite{J, LM}.
For a knot $K$, $V_{K}(1)=V_{K}(\zeta_3)=1$, $V_K(\zeta_4)=\pm 1$ and $V_K(\zeta_6)=\pm(i\sqrt{3})^n$, $n\in\mathbb{N}_0$. This allows us to exclude some roots of unity as zeros of Jones polynomials: \begin{theorem}\label{thm:excluded} For $k\in\mathbb{N}_0$, let $N=p^k$, $3p^k$, $4p^k$, with $p$ prime; or $N=6p^k$ with $p\neq 3$ prime. Then $\Phi_N$ cannot divide any Jones polynomial. \begin{proof} We check the values of cyclotomic polynomials at $1$, $\zeta_3$, $i$ and $\zeta_6$. From \cite{BHM}, we have for $p$ prime: \[\Phi_{p^k}(1)=p \text{ for } k>0 \quad (\text{and } \Phi_1(1)=0)\] \[|\Phi_{3p^k}(\zeta_3)|=\Phi_{4p^k}(i)=|\Phi_{6p^k}(\zeta_6)|=p\] We see that none of these polynomials can divide a Jones polynomial $V_K$, since $V_K(1)=V_K(\zeta_3)=|V_K(i)|=1$ and $|V_K(\zeta_6)|=\sqrt{3}^n$, except if $p=3$ in the case of $\Phi_{6p^k}$, which we excluded in our assumptions. \end{proof} \end{theorem} We can also easily exclude some $\widetilde{\Phi}_{2k}$ as divisors of Jones polynomials: \begin{proposition} Let $k$ be odd. If $\widetilde{\Phi}_{2k}$ divides a Jones polynomial, then $k$ is not divisible by $3$. \begin{proof} If $3|k$, then $\Phi_6|\widetilde{\Phi}_{2k}$, hence $\widetilde{\Phi}_{2k}(\zeta_6)=0$, which is impossible for a divisor of a Jones polynomial. \end{proof} \end{proposition} Notice that all roots of unity appearing as zeros of Jones polynomials in Theorem~\ref{thm:cyclojones} are of the form $\zeta_{2k}$, with $k$ odd, $3\nmid k$. It is natural to ask what other roots of unity $\zeta_N$ can be zeros of Jones polynomials of knots. Using Theorem~\ref{thm:excluded}, the smallest possible $N$ for such roots are $18,26,35,40,45,46,50,54,55,56,60$. Let us sum this up: \begin{question} Is there a knot with Jones polynomial having a zero at $\zeta_N$ such that: $4|N$; $N$ is odd; $3|N$; or $N=2k$, $k$ odd, $3\nmid k$, but $N$ not coming from Theorem~\ref{thm:cyclojones}?
\end{question} One may also ask whether there are infinitely many primes $p$ such that $\Phi^{sym}_{2p}$ is the Jones polynomial of some knot. A positive answer would follow if there were infinitely many primes in the image of $f$ or $g$ from Theorem~\ref{thm:cyclojones} (two special cases of the Bunyakovsky conjecture). Since a Mersenne prime $2^p-1$, with $p>2$, satisfies $2^p-1=2(2^\frac{p-1}{2})^2-1=g(2^\frac{p-1}{2})$, we get: \begin{corollary} Let $N=2^p-1$, $p>2$, be a Mersenne prime. Then $\Phi^{sym}_{2N}$ is the Jones polynomial of a knot. \end{corollary} \section{Arrow diagrams}\label{sec:arrows} Arrow diagrams were introduced in~\cite{MD} for links in $F\times S^1$, where $F$ is an orientable surface. They were subsequently extended to links in Seifert manifolds (see~\cite{GM, MM1,MM2}). In~\cite{M1}, they were applied to links in $S^3$: it was shown there that projections of links under the Hopf fibration from $S^3$ to $S^2$ can be encoded with arrow diagrams in a disk: such a diagram is like a usual diagram of a link, except that it is in a disk and there may be some arrows on it, outside crossings. Two arrow diagrams represent the same link if and only if one diagram can be transformed into the other with a series of six Reidemeister moves, see Figure~\ref{fig:reid}. For the $\Omega_\infty$ move in this figure, the boundary of the disk is drawn thick. For simplicity, we can also omit this boundary when picturing arrow diagrams (as we have done in Figure~\ref{fig:wnk}). \begin{figure} \caption{Reidemeister moves} \label{fig:reid} \end{figure} A detailed interpretation of the arrow diagrams and Reidemeister moves can be found in~\cite{M1}. One can easily picture a link $L$ from its arrow diagram $D$ in the following way: pick a solid torus $T=C\times S^1$, where $C$ is a disk, consisting of some oriented fibers $\{p\}\times S^1$, $p\in C$, in the Hopf fibration of $S^3$. Let $S^1=I\cup I'$ consist of two intervals glued along their endpoints.
Then $T=B\cup B'$, where $B=C\times I$ and $B'=C\times I'$ are two balls. If there are no arrows in $D$, $L$ lies entirely in $B$. Otherwise it lies in $B$ except for some neighborhoods of the arrows, where it goes through $B'$ along an oriented fiber and the orientation of the arrow agrees with the orientation of the fiber. We turn now to the proof of Proposition~\ref{prop:11}. We want to show that the knots $W_{n,k}$ are $(1,1)$ knots. Recall from \cite{D} that a link $L$ admits a $(g,b)$ decomposition if there is a genus $g$ Heegaard splitting $(V_0,V_1)$ of $S^3$ such that $V_i$ intersects $L$ in $b$ trivial arcs, for $i\in\{0,1\}$. To show that $W_{n,k}$ is a $(1,1)$ knot, we need to show that it intersects each $T_i$ in a trivial arc, for a Heegaard splitting of $S^3$ into two solid tori $T_i$, $i\in\{0,1\}$. We say that an arrow diagram of a knot is {\it annulus monotonic} if there is an annulus $A=S^1\times I$ containing the diagram and such that the curve of the diagram has exactly one minimum and one maximum w.r.t.\ $I$. Applying $\Omega_\infty$ to the left kink of $W_{n,k}$ (see Figure~\ref{fig:wnk}), we obtain a diagram consisting of a spiral with some arrows on it. Such a diagram is clearly annulus monotonic, see Figure~\ref{fig:annulus}. Proposition~\ref{prop:11} now follows directly from the following: \begin{lemma} Suppose that a knot $K$ has an annulus monotonic arrow diagram $D$. Then $K$ is a $(1,1)$-knot. \begin{proof} Let $A=S^1\times I$ be an annulus containing $D$ and such that $D$ has exactly one minimum and one maximum w.r.t.\ $I$. The closure of $S^3\setminus (A\times S^1)$ consists of two solid tori $T_0$ and $T_1$, chosen so that $T_i\cap (A\times S^1)=(S^1\times \{i\})\times S^1$, $i\in\{0,1\}$. Cut $I$ into $I_0=[0,a]$ and $I_1=[a,1]$ for some $a\in (0,1)$, so that the intersection $(S^1\times I_0)\cap D$ is a small trivial arc. $A$ decomposes into two annuli $A_i=S^1\times I_i$, $i\in\{0,1\}$.
Let $T'_i=T_i\cup (A_i\times S^1)$ be two solid tori, $i\in\{0,1\}$, so that $S^3=T'_0\cup T'_1$. Then $T'_0\cap K$ is clearly a trivial arc in $T'_0$. We claim that $T'_1\cap K$ is also a trivial arc in $T'_1$. Let $I_2=[b,1]$, $b>a$, be such that $(S^1\times I_2)\cap D$ is a small trivial arc. Let $A_2=S^1\times I_2$. Since $D$ is annulus monotonic, the pair $(A_1\times S^1,K\cap (A_1\times S^1))$ can be isotoped to $(A_2\times S^1,K\cap (A_2\times S^1))$, by removing the tori $(S^1\times \{c\})\times S^1$ for $c$ from $a$ to $b$. Such an isotopy clearly extends to $(T'_1,K\cap T'_1)$, so $K\cap T'_1$ is a trivial arc in $T'_1$. Thus $K$ is a $(1,1)$ knot. \end{proof} \end{lemma} \begin{figure} \caption{An annulus monotonic diagram, drawn inside the annulus} \label{fig:annulus} \end{figure} We remark here that, for some $n$'s, hypothetical knots with Jones polynomials $\Phi^{sym}_n$ would only admit a $(g,b)$ decomposition with large $g+b$. Indeed, in \cite{BHM} the values of cyclotomic polynomials at $\zeta_5$ are computed. It is shown there that $|\Phi_n(\zeta_5)|$ can be arbitrarily large for some $n$'s. For example, it grows very fast with the number of primes in the decomposition of $n$, when $n$ is a product of an odd number of distinct primes congruent to $2$ or $3$ modulo $5$. For instance, for $n=2\cdot 3\cdot 7\cdot 13\cdot 17$, one checks that this modulus is approximately $2207$. On the other hand, it follows from \cite{MSY} that, if a knot $K$ admits a $(g,b)$ decomposition, its Jones polynomial $V_K$ satisfies $|V_K(\zeta_5)|\le \alpha^g\beta^{b-1}$, where $\alpha>1$ and $\beta>1$ can be explicitly computed. Hence, a large modulus implies large $g+b$. It was shown in~\cite{M1} that the usual blackboard framing for links obtained from their diagrams extends to arrow diagrams and that such framing is invariant under all Reidemeister moves except $\Omega_1$.
In particular, to compute the writhe of a framed link represented by an arrow diagram, one may eliminate all arrows without using $\Omega_1$, then sum the signs of all crossings in the arrowless diagram. We present now a formula for the writhe of any arrow diagram of a knot. This formula also holds for oriented links. Let $D$ be an oriented arrow diagram. Let $r$ be an arrow in $D$. The {\it sign} of $r$, denoted $\epsilon(r)$, is defined as follows: $\epsilon(r)=1$ (resp.\ $\epsilon(r)=-1$) if $r$ points in the same (resp.\ opposite) direction as the orientation of the diagram. We also say that $r$ is {\it positive} (resp.\ {\it negative}). The winding number of $r$, denoted $ind(r)$, is by definition the winding number $ind_D(P)$, where $D$ is the diagram considered as an oriented curve and $P$ is a point close to $r$, to the right of $D$ according to the orientation of $D$. For example, consider $W_{2,3}$ in Figure~\ref{fig:wnk}. Orient it so that the left kink is oriented clockwise. Then the $3$ arrows on the right are positive and the two arrows on the left are negative. Also, the winding numbers of the arrows on the right are $0$, $1$ and $2$, whereas the $2$ arrows on the left have winding number $-1$. Denote by $w(D)$ the writhe of the framed knot represented by the arrow diagram $D$. Denote by $\bar{w}(D)$ the writhe when all arrows in $D$ are ignored (it is the sum of the signs of the crossings in $D$). We have the following formula for the writhe: \begin{lemma}\label{lem:wr_formula} Let $D$ be an oriented arrow diagram. Let $n=\displaystyle\sum_{r}\epsilon(r)$, the sum taken over all arrows of $D$. Then: \[w(D)=\bar{w}(D)+\displaystyle\sum_r 2\epsilon(r)ind(r)+n(n+1)\] \begin{proof} We remove all arrows in $D$ with Reidemeister moves, keeping track of the signs of the crossings that appear. We do not use $\Omega_1$, thus the writhe is unchanged. Consider an arrow $r$ in $D$.
We push it next to the boundary of the diagram in such a way that the orientation of the arc next to the arrow agrees with the counterclockwise orientation of the boundary of the diagram (see Figure~\ref{fig:pn_arrows} (left), where $3$ arrows have been pushed and the arcs are oriented as desired). \begin{figure} \caption{Two positive, one negative arrow (left); pushing an arrow through the arc next to it (right)} \label{fig:pn_arrows} \end{figure} To achieve this, we use $\Omega_2$ and $\Omega_5$ moves repeatedly. When $r$ crosses an arc, two positive or two negative crossings appear. One checks that the total contribution, when $r$ is next to the boundary, is $2\epsilon(r)ind(r)$. Notice that when $r$ is next to the boundary, but the orientation of the arc is not the desired one, then $ind(r)=-1$ and $r$ has to be pushed once through a piece of arc next to it, see Figure~\ref{fig:pn_arrows} (right). After all the arrows have been pushed, so that they are as in Figure~\ref{fig:pn_arrows} (left), the sum of the signs of all crossings is $\bar{w}(D)+\displaystyle\sum_r 2\epsilon(r)ind(r)$. Suppose now that there are $a$ positive arrows and $b$ negative ones, so that $n=a-b$. Push every positive arrow through an arc next to it as in Figure~\ref{fig:pn_arrows} (right). This adds $2a$ positive crossings. Now any arrow $r$ can be eliminated with $\Omega_\infty$ followed by $\Omega_4$. We push the remaining arrows through the arc created by $\Omega_\infty$. One checks that if an arrow $r'$ is pushed through the arc coming from $r$, this adds two positive (resp.\ negative) crossings if $\epsilon(r)=\epsilon(r')$ (resp.\ $\epsilon(r)=-\epsilon(r')$). Then one repeats the process with the arrow $r'$ (eliminating it and pushing all other arrows through it). Hence, in the end, any pair of arrows $r$ and $r'$ contributes $2\epsilon(r)\epsilon(r')$ to the writhe.
The total contribution to the writhe of this second part is thus: \[2a+a(a-1)+b(b-1)-2ab=(a-b)(a-b+1)=n(n+1)\] Combined with the first part, this gives the required formula. \end{proof} \end{lemma} Applying Lemma~\ref{lem:wr_formula} to $W_{n,k}$, we get: \begin{lemma}\label{lem:writheWL} Let $W_{n,k}$ stand for the diagram in Figure~\ref{fig:wnk}, as well as for the framed knot represented by this diagram. Then: \[w(W_{n,k})=n^2+n+2k^2+k-2nk\] \begin{proof} Orient $W_{n,k}$ so that the $k$ arrows are positive. If $n>0$, the $n$ arrows are negative. If $n<0$, the $|n|$ arrows are positive. The $k$ arrows have winding numbers $0, 1, 2,\ldots,k-1$. The $n$ arrows all have winding number $-1$. Also, $\bar{w}(W_{n,k})=k$. Hence: \[w(W_{n,k})=k+2(-n)(-1)+2(1+2+\ldots+k-1)+(k-n)(k-n+1)\] \[=k+2n+k(k-1)+(k-n)(k-n+1)=n^2+n+2k^2+k-2nk\] \end{proof} \end{lemma} For an upper estimate of the number of crossings, $c(W_{n,k})$, we use Lemma~1 from~\cite{M1}. It states that, if a diagram $D$ has $m$ crossings and all its arrows are next to the boundary, with $a$ of them removable (i.e.\ one can remove them with $\Omega_\infty$ followed by $\Omega_4$) and $b>0$ of them non-removable, then $c(K)\le m+b-1+(a+b)(a+b-1)$. We get: \begin{lemma}\label{lem:upperc} For $n\ge 0$, $k\ge 0$ and $k+n>0$, one has: \[c(W_{n,k})\le k^2+(n+k)^2-1\] \begin{proof} Starting with the diagram of $W_{n,k}$ shown in Figure~\ref{fig:wnk}, we push $k-1$ arrows so that they are next to the boundary. We get a diagram with $k+2(1+2+\ldots+k-1)=k+k(k-1)=k^2$ crossings. Then, we can apply Lemma~1 from~\cite{M1}. Since $n\ge 0$, all arrows will be non-removable. Since $n+k>0$, there is at least one non-removable arrow. Thus, $c(W_{n,k})\le k^2+(n+k)-1+(n+k)(n+k-1)=k^2+(n+k)^2-1$. \end{proof} \end{lemma} We end this section with a visualization of any knot $W_{n,k}$. Such a knot is obtained by a small modification from a pair of torus knots lying on the boundary of a thickened Hopf link.
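Before describing this visualization, we note that the algebra behind Lemma~\ref{lem:wr_formula} and Lemma~\ref{lem:writheWL} reduces to polynomial identities that can be checked mechanically; a small sketch in Python/sympy (our tooling choice, not used in the paper):

```python
from sympy import symbols, expand

n, k, a, b = symbols('n k a b')

# Identity from the proof of Lemma lem:wr_formula:
assert expand(2*a + a*(a-1) + b*(b-1) - 2*a*b - (a-b)*(a-b+1)) == 0

# Lemma lem:writheWL via the formula of Lemma lem:wr_formula (case n > 0):
w_bar = k                     # signed crossings with all arrows ignored
arrow_sum = 2*n + k*(k - 1)   # sum of 2*eps(r)*ind(r): the n arrows have
                              # eps = ind = -1, the k arrows have ind = 0..k-1
net = k - n                   # sum of the arrow signs
w = w_bar + arrow_sum + net*(net + 1)
assert expand(w - (n**2 + n + 2*k**2 + k - 2*n*k)) == 0
```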
It was shown in~\cite{M1} how to get some simple arrow diagrams of torus knots: one checks that $W_{n,0}$ is the torus knot $T(n,n+1)$ and $W_{0,k}$ is the torus knot $T(k,2k+1)$. Consider the diagram of $W_{n,k}$ in Figure~\ref{fig:wnk}. Let $W^s_{n,k}$ be the diagram of a $2$-component link, obtained from $W_{n,k}$ by smoothing vertically the crossing next to the $n$ arrows. The components of $W^s_{n,k}$ are torus knots $T(n,n+1)$ and $T(k,2k+1)$. Let $D$ and $D'$ be two disjoint disks, such that $D$ contains the $n$ arrows, $D'$ contains the $k$ arrows and $W^s_{n,k}$ is contained in $D\cup D'$. Let $T$, resp. $T'$, be the two solid tori consisting of fibers intersecting $D$, resp. $D'$, in the Hopf fibration of $S^3$. Then $T$ and $T'$ form a thickened Hopf link. The $T(n,n+1)$ component of $W^s_{n,k}$ can be pushed onto $\partial T$ and the $T(k,2k+1)$ component can be pushed onto $\partial T'$. Then $W_{n,k}$ is obtained from these two linked torus knots by reversing the smoothing back to the crossing. \section{Jones polynomials of the knots $W_{n,k}$}\label{sec:proofs} Let $G_n$, $G'_n$ and $G'_{a,b}$, $n,a,b\in\mathbb{Z}$, be the arrow diagrams shown in Figure~\ref{fig:gn}. In the box is an arrow tangle $G$ (a tangle with, possibly, some arrows on it). Let $g_n$, $g'_n$ and $g'_{a,b}$ be the Kauffman brackets of, respectively, $G_n$, $G'_n$ and $G'_{a,b}$. We want to express $g'_n$ with some $g_k$'s. In order to do this, we will use the $g'_{a,b}$'s. \begin{figure} \caption{$G_n$, $G'_n$ and $G'_{a,b}$} \label{fig:gn} \end{figure} It is useful to define for $n\ge0$ the sum: \[S_n=A^{n}g_{-n}+A^{n-2}g_{-n+2}+\ldots+A^{-n+2}g_{n-2}+A^{-n}g_n=\displaystyle\sum_{i=0}^n A^{n-2i}g_{-n+2i}\] Extend $S_n$ for negative $n$, by defining $S_{-1}=0$ and, for $n<-1$: \[S_n=-S_{|n|-2}\] \begin{lemma}\label{lem:kink} For $n\in\mathbb{Z}$: \[g'_n=(A^{-1}-A^3)A^nS_n-A^{2n-1}g_{-n}\] \begin{proof} One checks easily that the formula holds for $n=0$ and $n=-1$.
From the defining relations of the Kauffman bracket, we get: \[ \raisebox{-7pt}{\includegraphics{Lplus}}=A^2 \raisebox{-7pt}{\includegraphics{Lminus}} +(A^{-1}-A^{3}) \raisebox{-7pt}{\includegraphics{Lzero}}\] Using this relation and $\Omega_5$ and $\Omega_4$ moves, we get: \[g'_{a,b}=A^2g'_{a-1,b-1}+(A^{-1}-A^3)g_{a+b}\label{eq:ttt}\tag{*}\] Suppose that $n\ge 1$. Iterating equation~(\ref{eq:ttt}) until $g'_{0,-n}=-A^3g_{-n}$, we get: \begin{align*} g'_{n,0}=&\quad A^2g'_{n-1,-1}+(A^{-1}-A^3)g_{n}\\ =&\quad A^4g'_{n-2,-2}+A^2(A^{-1}-A^3)g_{n-2}+(A^{-1}-A^3)g_{n}\\ =&\quad \ldots\\ = &\quad -A^3A^{2n}g_{-n}+(A^{-1}-A^3)\left(g_n+A^2g_{n-2}+\ldots+A^{2n-2}g_{-n+2}\right)\\ =&\quad -A^{2n-1}g_{-n}+(A^{-1}-A^3)\left(g_n+A^2g_{n-2}+\ldots+A^{2n}g_{-n}\right)\\ =&\quad (A^{-1}-A^3) A^n S_n -A^{2n-1}g_{-n} \end{align*} Suppose now that $n\le -2$. Rewriting equation~(\ref{eq:ttt}) and replacing $a$ by $a+1$ and $b$ by $b+1$, one gets: \[g'_{a,b}=A^{-2}g'_{a+1,b+1}-A^{-2}(A^{-1}-A^3)g_{a+b+2}\] Iterating until $g'_{n+|n|,|n|}=g'_{0,-n}=-A^3g_{-n}$, we get: \begin{align*} g'_{n,0}=&\quad A^{-2}g'_{n+1,1}-A^{-2}(A^{-1}-A^3)g_{n+2}\\ =&\quad A^{-4}g'_{n+2,2}-A^{-4}(A^{-1}-A^3)g_{n+4}-A^{-2}(A^{-1}-A^3)g_{n+2}\\ =&\quad \ldots\\ = &\quad -A^3A^{2n}g_{-n}-A^{-2}(A^{-1}-A^3)\left(g_{n+2}+A^{-2}g_{n+4}+\ldots+A^{2n+2}g_{-n}\right)\\ = &\quad -A^{2n-1}g_{-n}-A^{-2}(A^{-1}-A^3)\left(g_{n+2}+A^{-2}g_{n+4}+\ldots+A^{2n+4}g_{-n-2}\right)\\ = &\quad -A^{2n-1}g_{-n}-A^{-2}(A^{-1}-A^3)A^{n+2}\left(A^{-n-2}g_{n+2}+\ldots+A^{n+2}g_{-n-2}\right)\\ = &\quad -A^{2n-1}g_{-n}-(A^{-1}-A^3)A^{n}S_{|n|-2}\\ = &\quad (A^{-1}-A^3)A^{n}S_{n}-A^{2n-1}g_{-n} \end{align*} \end{proof} \end{lemma} We now prove Theorem~\ref{thm:jonesWnk} by induction on $k$. $W_{n,0}$ is an oval with $n\in\mathbb{Z}$ arrows on it. This is the torus knot $T(n,n+1)$ if $n\ge 0$ and, for $n<0$, $W_{n,0}=W_{-1-n,0}$ (use $\Omega_\infty$ and $\Omega_4$ moves).
One checks that for $k=0$ the formula in Theorem~\ref{thm:jonesWnk} is the correct formula for such torus knots (see also~\cite{M1}). We restate Theorem~\ref{thm:jonesWnk} in terms of the Kauffman bracket using the formula $V_K(t)=(-A)^{-3w(K)}<K>$, where $w(K)$ is the writhe of $K$ and $t=A^{-4}$. From Lemma~\ref{lem:writheWL}, we have $w(W_{n,k})=n^2+n+2k^2+k-2nk$, hence $(-1)^{3w(W_{n,k})}=(-1)^k$ and: \[<W_{n,k}>=(-1)^kA^{3(n+1)n+3k(2k+1-2n)}V_{W_{n,k}}\] One checks that Theorem~\ref{thm:jonesWnk} can be restated as: \begin{proposition} \[<W_{n,k}>(A^{-8}-1)=(-1)^kA^{n^2-2kn+2k^2+n-k-8}\] \[\left((1-A^4)A^{4kn+4n+8k+4}-A^{4n-4k+4}+A^{4k+4}-A^{8k-4n+4}+1\right)\] \begin{proof} For $k=0$ the formula is correct, since it is correct for the Jones polynomial and we just restate it with the Kauffman bracket using the writhe. Let $k\ge 0$. Assume that the formula holds for $<W_{n,k}>$, $n\in\mathbb{Z}$. We use Lemma~\ref{lem:kink}, with $W_{n,k+1}=G'_n$. One has to identify $G_a$ (in that lemma), for any $a\in\mathbb{Z}$: it has a diagram shown in Figure~\ref{fig:WGn} (for $k=2$ as an example). Using a single $\Omega_\infty$ move on the strand with the $a+1$ arrows, one gets $W_{-a-2,k}$ (one extra arrow comes from the move). Since this move does not change the writhe, $<G_a>=<W_{-a-2,k}>$. \begin{figure} \caption{Identifying $G_a$} \label{fig:WGn} \end{figure} Recall the notations used in Lemma~\ref{lem:kink}: \[g'_n=<G'_n>,\quad g_a=<G_a>,\quad S_n=\displaystyle\sum_{i=0}^n A^{n-2i}g_{-n+2i}\] Also, by definition: $S_{-1}=0$ and $S_n=-S_{|n|-2}$ for $n<-1$. Let: \begin{align*} S'_{n}=&(-1)^kA^{n^2-2kn-6n+2k^2-k-10}\left(-A^{4kn+12n+12k+16}\right.\\ &\left.+A^{4kn+8n+4k}+A^{4n+8k+8}-A^{8n}\right) \end{align*} One checks that $S'_{-1}=0$ and $S'_n=-S'_{-n-2}$ for any $n\in\mathbb{Z}$.
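As an illustrative aside (not a substitute for the algebraic verification), both stated properties of $S'_n$ can be spot-checked by evaluating the Laurent polynomial at exact rational values of $A$; the following Python sketch (ours) does so:

```python
from fractions import Fraction

# Spot-check of S'_{-1} = 0 and S'_n = -S'_{-n-2} at exact rational points A.
# This tests the Laurent polynomial identity numerically, not symbolically.

def S_prime(n, k, A):
    sign = (-1) ** k
    pref = A ** (n * n - 2 * k * n - 6 * n + 2 * k * k - k - 10)
    inner = (-A ** (4 * k * n + 12 * n + 12 * k + 16)
             + A ** (4 * k * n + 8 * n + 4 * k)
             + A ** (4 * n + 8 * k + 8)
             - A ** (8 * n))
    return sign * pref * inner

for A in (Fraction(3, 2), Fraction(7, 5), Fraction(-2, 3)):
    for k in range(0, 4):
        assert S_prime(-1, k, A) == 0
        for n in range(-5, 6):
            assert S_prime(n, k, A) == -S_prime(-n - 2, k, A)
```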
We claim that for any $n\in\mathbb{Z}$: \[S_{n}(A^{-8}-1)=S'_{n} \tag{**}\label{ssp}\] Because of the skew-symmetry of both $S_n$ and $S'_n$ around $-1$, it is sufficient to prove~(\ref{ssp}) for $n\ge -1$. It is true for $n=-1$. Now $S_{0}=g_0=<W_{-2,k}>$. By induction on $k$: \[<W_{-2,k}>(A^{-8}-1)=(-1)^k A^{2k^2-k-10}(-A^{12k+16}+A^{8k+8}+A^{4k}-1)=S'_{0}\] One has obviously: \begin{align*} S_{n+2}&=S_n+A^{n+2}g_{-n-2}+A^{-n-2}g_{n+2}\\ &=S_n+A^{n+2}<W_{n,k}>+A^{-n-2}<W_{-n-4,k}> \end{align*} Thus, to prove~(\ref{ssp}), we need to show that: \[S'_{n+2}=S'_n+(A^{-8}-1)\left(A^{n+2}<W_{n,k}>+A^{-n-2}<W_{-n-4,k}>\right)\] One checks: \[S'_{n+2}=(-1)^kA^{n^2-2kn+2k^2-2n-5k-18}\left(-A^{4kn+12n+20k+40}\right.\] \[\left.+A^{4kn+8n+12k+16}-A^{8n+16}+A^{4n+8k+16}\right)\] By induction on $k$: \begin{align*} &S'_n+(A^{-8}-1)\left(A^{n+2}<W_{n,k}>+A^{-n-2}<W_{-n-4,k}>\right)=\\ &(-1)^kA^{n^2-2kn-6n+2k^2-k-10}\left(-A^{4kn+12n+12k+16}\right.\\ &\left.+A^{4kn+8n+4k}+A^{4n+8k+8}-A^{8n}\right)+(-1)^kA^{n^2-2kn+2k^2+2n-k-6}\\ &\left((1-A^4)A^{4kn+4n+8k+4}-A^{4n-4k+4}+A^{4k+4}-A^{8k-4n+4}+1\right)\\ &+(-1)^k A^{n^2+2kn+2k^2+6n+7k+2}\left((1-A^4)A^{-4kn-4n-8k-12}-A^{-4n-4k-12}\right.\\ &\left.+A^{4k+4}-A^{8k+4n+20}+1\right)=(-1)^kA^{n^2-2kn+2k^2-2n-5k-18}\\ &\left(-A^{4kn+8n+16k+24}+A^{4kn+4n+8k+8}+A^{12k+16}-A^{4n+4k+8}\right.\\ &+(1-A^4)A^{4kn+8n+12k+16}-A^{8n+16}+A^{4n+8k+16}-A^{12k+16}+A^{4n+4k+12}\\ &+(1-A^4)A^{4n+4k+8}-A^{4kn+4n+8k+8}+A^{4kn+8n+16k+24}\\ &\left.-A^{4kn+12n+20k+40}+A^{4kn+8n+12k+20}\right)=S'_{n+2} \end{align*} Since $g_{-n}=<W_{n-2,k}>$, from Lemma~\ref{lem:kink} we get: \[<W_{n,k+1}>=(A^{-1}-A^3)A^{n}S_{n}-A^{2n-1}<W_{n-2,k}>\] Hence: \begin{align*} &<W_{n,k+1}>(A^{-8}-1)=(A^{-1}-A^3)A^{n}S'_{n}-A^{2n-1}<W_{n-2,k}>(A^{-8}-1)\\ &=(-1)^k(A^{-1}-A^3)A^{n^2-2kn-5n+2k^2-k-10}\left(-A^{4kn+12n+12k+16}+A^{4kn+8n+4k}\right.\\ &\left.+A^{4n+8k+8}-A^{8n}\right)-(-1)^kA^{n^2-2kn+2k^2-n+3k-7}\left((1-A^4)A^{4kn+4n-4}\right.\\ 
&\left.-A^{4n-4k-4}+A^{4k+4}-A^{8k-4n+12}+1\right)=(-1)^kA^{n^2-2kn+2k^2-n+3k-7}\\ &\left(-(1-A^4)A^{4kn+8n+8k+12}+(1-A^4)A^{4kn+4n-4}+(1-A^4)A^{4k+4}\right.\\ &-(1-A^4)A^{4n-4k-4}-(1-A^4)A^{4kn+4n-4}+A^{4n-4k-4}-A^{4k+4}\\ &\left.+A^{8k-4n+12}-1\right)=(-1)^{k+1}A^{n^2-2kn+2k^2-n+3k-7}\\ &\left((1-A^4)A^{4kn+8n+8k+12}-A^{4n-4k}+A^{4k+8}-A^{8k-4n+12}+1\right)\\ &=(-1)^{k+1}A^{n^2-2(k+1)n+2(k+1)^2+n-(k+1)-8}\left((1-A^4)A^{4(k+1)n+4n+8(k+1)+4}\right.\\ &\left.-A^{4n-4(k+1)+4}+A^{4(k+1)+4}-A^{8(k+1)-4n+4}+1\right) \end{align*} Thus the formula holds for $<W_{n,k+1}>$ and we are done. \end{proof} \end{proposition} \end{document}
\begin{document} \title{Recursive formulation of the multiconfigurational time-dependent Hartree method for fermions, bosons and mixtures thereof in terms of one-body density operators} \author{Ofir E. Alon$^{1\ast}$\footnote[0]{$^{\ast}$ [email protected]}, Alexej I. Streltsov$^{2\dag}$\footnote[0]{$^{\dag}$ [email protected]}, Kaspar Sakmann$^{2\ddag}$\footnote[0]{$^{\ddag}$ [email protected]},\break Axel U. J. Lode$^{2\S}$\footnote[0]{$^{\S}$ [email protected]}, Julian Grond$^{2\P}$\footnote[0]{$^{\P}$ [email protected]}, and Lorenz S. Cederbaum$^{2\parallel}$\footnote[0]{$^{\parallel}$ [email protected]}} \affiliation{$^{1}$ Department of Physics, University of Haifa at Oranim, Tivon 36006, Israel.} \affiliation{$^{2}$ Theoretische Chemie, Physikalisch-Chemisches Institut, Universit\"at Heidelberg,\\ Im Neuenheimer Feld 229, D-69120 Heidelberg, Germany.} \begin{abstract} The multiconfigurational time-dependent Hartree method (MCTDH) [H.-D. Meyer, U. Manthe, and L. S. Cederbaum, Chem. Phys. Lett. {\bf 165}, 73 (1990); U. Manthe, H.-D. Meyer, and L. S. Cederbaum, J. Chem. Phys. {\bf 97}, 3199 (1992)] is nowadays celebrating the entry into its third decade of tackling, numerically exactly, a broad range of correlated multi-dimensional non-equilibrium quantum dynamical systems. Taking particles' statistics explicitly into account in recent years, within the MCTDH for fermions (MCTDHF) and for bosons (MCTDHB), has opened up further opportunities to treat larger systems of interacting identical particles, primarily in laser-atom and cold-atom physics. With the increase of experimental capabilities to simultaneously trap mixtures of two, three, and possibly even multiple kinds of interacting composite identical particles together, we set the stage in the present work and specify the MCTDH method for such cases.
Explicitly, the MCTDH method for systems with three kinds of identical particles interacting via all combinations of two- and three-body forces is presented, and the resulting equations-of-motion are briefly discussed. All four possible mixtures (Fermi-Fermi-Fermi, Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose) are presented in a unified manner. Particular attention is paid to representing the coefficients' part of the equations-of-motion in a compact recursive form in terms of one-body density operators only. The recursion utilizes the recently proposed Combinadic-based mapping for fermionic and bosonic operators in Fock space [A. I. Streltsov, O. E. Alon, and L. S. Cederbaum, Phys. Rev. A {\bf 81}, 022124 (2010)], which has been successfully applied and implemented within MCTDHB. Our work sheds new light on the representation of the coefficients' part in MCTDHF and MCTDHB without resorting to the matrix elements of the many-body Hamiltonian with respect to the time-dependent configurations. It suggests a recipe for efficient implementation of the schemes derived here for mixtures which is suitable for parallelization. \end{abstract} \pacs{31.15.xv, 67.60.-g, 05.30.Fk, 05.30.Jp, 03.65.-w} \maketitle \section{Introduction}\label{SEC1} Quantum non-equilibrium dynamics is important to many branches of physics and chemistry \cite{Book_dynamics1,Book_dynamics2,Nuclear_book,Book_dynamics3,Pit_Stri_book,Book_dynamics4} and often requires the solution of the time-dependent many-particle Schr\"odinger equation. A particularly efficient method to solve the time-dependent many-particle Schr\"odinger equation is the multiconfigurational time-dependent Hartree (MCTDH) algorithm and approach \cite{cpl,jcp,review,book}.
MCTDH, which is considered at present the most efficient wave-packet propagation tool, has amply been employed for multi-dimensional dynamical systems of distinguishable degrees-of-freedom, typically molecular vibrations, see, e.g., Refs.~\cite{JCP_24a,JCP_24b,Manthe_review,Lenz_CI,relaxation2,vib_new1,vib_new2,irene}. We mention that recent developments on multi-layer formulation of MCTDH have opened up further possibilities to treat larger systems of distinguishable degrees-of-freedom \cite{ML_1,ML_2,ML_3}. MCTDH has recently been applied with much success to various systems with a few identical particles in the field of cold-atom physics, see, e.g., Refs.~\cite{ZO_st1,ZO_st2,ZO_dy2,Sascha_mix,axel,Sascha_dip}. In recent years, taking the quantum statistics between identical particles {\it a priori} into account, the MCTDH method has been specified for systems of identical particles, which opened up interesting possibilities to treat larger systems. First MCTDHF -- the fermionic version of MCTDH -- was developed by three independent groups \cite{MCTDHF1,MCTDHF2,MCTDHF3}. Shortly after, MCTDHB -- the bosonic version of MCTDH -- was developed in \cite{MCTDHB0,MCTDHB1}. For applications of MCTDHF to laser-matter interaction and other few-fermion problems see, e.g., Refs.~\cite{applF1,applF2,applF3,applF4,applF5,applF6,applF7,applF7m5,applF8,applF9}, where the last work combines optimal control theory with MCTDHF. For applications of MCTDHB to Bose-Einstein condensates see, e.g., Refs.~\cite{applB1,applB2,applB3,applB4,applB5}, where the last two works combine optimal control theory with MCTDHB. Since the seminal paper of L\"owdin \cite{Lowdin}, reduced density matrices and particularly reduced two-body density matrices have been a lively field of research, see, e.g., Refs.~\cite{Slava,MAZZ1,MAZZ2,MAZZ3,MAZZ4,MAZZ5,MAZZ6}. Reduced one-body density matrices are an inherent part of the MCTDH \cite{cpl,jcp,review,book}. 
In the present context, reduced one- and two-body density matrices were first used to derive the static self-consistent theory for bosons, the multiconfigurational Hartree for bosons (MCHB), in \cite{MCHB}. Thereafter, MCTDHB and MCTDHF were formulated in a unified manner by employing reduced one-, two- \cite{unified} and three-body \cite{book} density matrices. Further specification of MCTDH to mixtures of two kinds of identical particles (MCTDH-FF for Fermi-Fermi mixtures; MCTDH-BF for Bose-Fermi mixtures; and MCTDH-BB for Bose-Bose mixtures) was put forward in \cite{MCTDHX}. All the above developments made use of the fact that the mean-field operators in the traditional MCTDH can be factorized into products of reduced density matrices times one-body operators. Finally, we mention that MCTDH has been extended to systems with particle conversion (termed MCTDH-{\it conversion}), where particles of one kind can convert to another kind \cite{conversion}. A breakthrough in the formulation \cite{mapping,3well} and implementation \cite{package} of MCTDHB has stemmed from a general Combinadic-based mapping of bosonic (and fermionic) operators in Fock space. In this formulation, the direct calculation of the matrix representation of the Hamiltonian in the (huge) multiconfigurational space is abandoned, and is replaced by the action of one-body and two-body density operators on the multiconfigurational wave-function. The operation of the various density operators can be performed in parallel \cite{package}, which further accelerates the performance of the algorithm. This brings us closer to the topic and contents of the present work. Two-body interaction is the most basic interaction in an interacting (quantum) system. When the particles comprising the quantum system have internal structure, higher-order interactions (forces) may come into play.
For instance, in nuclear physics it has long been accepted that three-body interactions are necessary to fully understand the structure of nuclei, see, e.g., \cite{nuc3b,nuc3a}. Much more recently, and in the context of another field, the proposition to utilize cold polar molecules to engineer (condensed-matter) systems with three-body interactions has been made \cite{cold3}. So, the motivation to study the non-equilibrium dynamics of systems with up to three-body forces is clear. But why study the quantum dynamics of a mixture of three kinds of identical particles? Are such systems present in nature? In the cold-atom world, the plurality of atoms is one of the most important ingredients experimentalists (and theorists) have at their disposal. For instance, the element Yb has seven stable isotopes (5 bosonic and 2 fermionic isotopes). Yb has been envisaged to play an instrumental role in realizing various interesting ultra-cold mixtures (see Ref.~\cite{Yb} for a realization of a Bose-Einstein condensate with $^{170}$Yb atoms and the discussion therein). More recently, a quantum degenerate Fermi-Fermi mixture of $^6$Li-$^{40}$K atoms coexisting with a Bose-Einstein condensate of $^{87}$Rb atoms was realized \cite{TH_2008}, as well as a triply quantum-degenerate mixture of bosonic $^{41}$K atoms and two fermionic species, $^{40}$K and $^6$Li \cite{MZ_2011}. Hence, mixtures of three kinds of identical particles have been created in the lab. All the above dictate the purposes and contents of the present work. The MCTDH method for systems with three kinds of identical particles interacting via all combinations of two- and three-body forces is derived, and the resulting equations-of-motion are briefly discussed. All four possible mixtures (Fermi-Fermi-Fermi, Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose) are presented in a unified manner.
Particular attention is paid to representing the coefficients' part of the equations-of-motion in a compact recursive form in terms of one-body density operators only. The recursion utilizes the recently proposed Combinadic-based mapping \cite{mapping} which has already been successfully applied and implemented within MCTDHB \cite{package}. Our work sheds new light on the representation of the coefficients' part in MCTDHF and MCTDHB without resorting to the matrix elements of the many-body Hamiltonian with respect to the time-dependent configurations, and suggests a recipe for efficient implementation of the theory derived here for mixtures which is suitable for parallelization. The structure of the paper is as follows. In Sec.~\ref{SEC2} we present the building bricks of the theory by reconstructing MCTDHF and MCTDHB. In Sec.~\ref{SEC3} we assemble from these ingredients the multiconfigurational time-dependent Hartree method for mixtures of three kinds of identical particles interacting via up to three-body forces. A brief summary and outlook are given in Sec.~\ref{SEC4}. Finally, we collect in Appendixes \ref{appendix_A}-\ref{appendix_C}, for completeness and ease of presentation of the main text, various quantities appearing and needed in the derivation. The paper and the Appendixes are detailed and intended also to serve as a guide for the implementation of the equations-of-motion. The reconstruction of MCTDHF and MCTDHB is given in sufficient detail. This allows us to defer to the Appendixes many of the lengthy formulas used later on for the mixtures.
\section{Building bricks: Reconstructing MCTDHF and MCTDHB}\label{SEC2} \subsection{From basic ingredients to mapping}\label{SEC2.1} Our starting point is the many-body Hamiltonian of $N_A$ interacting identical particles of type $A$: \begin{eqnarray}\label{ham} & & \hat H^{(A)} = \hat h^{(A)} + \hat W^{(A)} + \hat U^{(A)} = \int d{\bf x}\bigg\{ \hat{\mathbf \Psi}^\dag_A(\x) \hat h^{(A)}(\x) \hat{\mathbf \Psi}_A(\x) \nonumber \\ &+& \frac{1}{2} \int d\x' \bigg[ \hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_A(\x') \hat W^{(A)}(\x,\x') \hat{\mathbf \Psi}_A(\x') \hat{\mathbf \Psi}_A(\x) \\ &+& \frac{1}{3} \int d\x'' \hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_A(\x') \hat{\mathbf \Psi}^\dag_A(\x'') \hat U^{(A)}(\x,\x',\x'') \hat{\mathbf \Psi}_A(\x'') \hat{\mathbf \Psi}_A(\x') \hat{\mathbf \Psi}_A(\x) \bigg] \bigg\}, \nonumber \end{eqnarray} where $\hat h^{(A)}$ is the one-body part, $\hat W^{(A)}$ the two-body part and $\hat U^{(A)}$ the three-body part. The operators $\hat h^{(A)}$, $\hat W^{(A)}$ and $\hat U^{(A)}$ can generally be time-dependent. We use the time-independent field operator expanded by time-dependent orbitals: \begin{equation}\label{field} \hat{\mathbf \Psi}_A(\x) = \sum_k \hat a_k(t)\phi_k(\x,t), \end{equation} where the annihilation and creation operators obey the usual fermionic/bosonic anti/commutation relations, $\hat a_q(t) \hat a_k^\dag(t) \pm \hat a_k^\dag(t) \hat a_q(t) = \delta_{kq}$. Correspondingly, the field operator obeys the anti/commutation relations, $\hat{\mathbf \Psi}_A(\x) \left\{\hat{\mathbf \Psi}_A(\x')\right\}^\dag \pm \left\{\hat{\mathbf \Psi}_A(\x')\right\}^\dag \hat{\mathbf \Psi}_A(\x) = \delta(\x-\x')$. Here and hereafter the upper sign refers to fermions and the lower to bosons. The coordinate ${\bf x}\equiv \{\r, \sigma\}$ stands for spatial degrees of freedom and spin, if present.
Thus, the shorthand notations $\delta(\x-\x')=\delta(\r-\r')\delta_{\sigma,\sigma'}$ and $\int d{\bf x}\equiv \int d{\bf r}\sum_\sigma$ are implied throughout this work. Furthermore, we do not denote explicitly the dependence of quantities on time when unambiguous. Plugging the expansion (\ref{field}) into the many-body Hamiltonian (\ref{ham}) one gets: \begin{equation}\label{ham2nd} \hat H^{(A)} = \sum_{k,q} h^{(A)}_{kq} \hat \rho^{(A)}_{kq} + \frac{1}{2} \sum_{k,s,q,l} W^{(A)}_{ksql} \hat \rho^{(A)}_{kslq} + \frac{1}{6}\sum_{k,s,p,r,l,q} U^{(A)}_{kspqlr} \hat \rho^{(A)}_{ksprlq}, \end{equation} where the matrix elements with respect to the orbitals $\left\{\phi_k(\x,t)\right\}$ are given by: \begin{eqnarray}\label{matrix_elements} h^{(A)}_{kq} &=& \int \phi_k^\ast(\x,t) \hat h^{(A)}(\x) \phi_q(\x,t) d\x, \nonumber \\ W^{(A)}_{ksql} &=& \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \hat W^{(A)}(\x,\x') \phi_q(\x,t) \phi_l(\x',t) d{\bf x}d\x', \nonumber \\ U^{(A)}_{kspqlr} &=& \int \!\! \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \phi_p^\ast(\x'',t) \hat U^{(A)}(\x,\x',\x'') \times \nonumber \\ &\times& \phi_q(\x,t) \phi_l(\x',t) \phi_r(\x'',t) d{\bf x}d\x' d\x''. \end{eqnarray} In (\ref{ham2nd}), we introduce the one-body density operators \begin{equation}\label{density_oper_1B} \hat \rho^{(A)}_{kq} = \hat a_k^\dag \hat a_q, \end{equation} as well as the two- and three-body density operators \begin{eqnarray}\label{density_oper_2B_3B} & & \hat \rho^{(A)}_{kslq} = \hat a_k^\dag \hat a_s^\dag \hat a_l \hat a_q = \pm \hat \rho^{(A)}_{kq} \delta_{sl} \mp \hat \rho^{(A)}_{kl} \hat \rho^{(A)}_{sq}, \nonumber \\ & & \hat \rho^{(A)}_{ksprlq} = \hat a_k^\dag \hat a_s^\dag \hat a_p^\dag \hat a_r \hat a_l \hat a_q = \pm \hat \rho^{(A)}_{kslq} \delta_{pr} - \hat \rho^{(A)}_{ksrq} \delta_{pl} + \hat \rho^{(A)}_{ksrl} \hat \rho^{(A)}_{pq}. \end{eqnarray} The reason for this choice of notation with density operators in (\ref{ham2nd}) will become clear below.
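As an illustrative aside, the factorization of the two-body density operators in Eq.~(\ref{density_oper_2B_3B}) can be checked mechanically on a small Fock space. The following Python sketch (ours; the particle and orbital numbers are arbitrary small choices) builds the operators as matrices in a fixed-particle-number sector and verifies the identity for both statistics:

```python
import math
from itertools import product as iproduct

# Check of rho_{kslq} = a_k^+ a_s^+ a_l a_q = +/- rho_{kq} delta_{sl} -/+ rho_{kl} rho_{sq},
# upper signs for fermions, lower signs for bosons, as matrices in the sector
# of N particles in M orbitals.

def apply_ops(ops, occ, fermions):
    """Apply ('a'|'c', orbital) operators right-to-left to |occ>; return (amplitude, occ) or None."""
    occ, amp = list(occ), 1.0
    for kind, p in reversed(ops):
        if fermions:
            if (kind == 'a' and occ[p] == 0) or (kind == 'c' and occ[p] == 1):
                return None
            amp *= (-1) ** sum(occ[:p])      # fermionic sign from orbitals before p
            occ[p] = 1 - occ[p]
        else:
            if kind == 'a':
                if occ[p] == 0:
                    return None
                amp *= math.sqrt(occ[p])
                occ[p] -= 1
            else:
                amp *= math.sqrt(occ[p] + 1)
                occ[p] += 1
    return amp, tuple(occ)

def matrix(ops, basis, fermions):
    """Matrix of a product of ladder operators in the given configuration basis."""
    index = {occ: i for i, occ in enumerate(basis)}
    mat = [[0.0] * len(basis) for _ in basis]
    for j, occ in enumerate(basis):
        res = apply_ops(ops, occ, fermions)
        if res is not None:
            mat[index[res[1]]][j] += res[0]
    return mat

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)] for i in range(n)]

N, M = 2, 3
for fermions in (True, False):
    occ_range = range(2) if fermions else range(N + 1)
    basis = [occ for occ in iproduct(occ_range, repeat=M) if sum(occ) == N]
    rho = lambda a, b: matrix([('c', a), ('a', b)], basis, fermions)
    pm = 1 if fermions else -1
    for k, s, l, q in iproduct(range(M), repeat=4):
        lhs = matrix([('c', k), ('c', s), ('a', l), ('a', q)], basis, fermions)
        rkq, prod = rho(k, q), matmul(rho(k, l), rho(s, q))
        for i in range(len(basis)):
            for j in range(len(basis)):
                assert abs(lhs[i][j] - (pm * (s == l) * rkq[i][j] - pm * prod[i][j])) < 1e-9
```

Working in a fixed-number sector avoids truncation artifacts: all density operators conserve the particle number, so every intermediate state stays inside the sector.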
We see that the two-body density operators $\left\{\hat \rho^{(A)}_{kslq}\right\}$ can be written as products of the one-body density operators, and that the three-body density operators $\left\{\hat \rho^{(A)}_{ksprlq}\right\}$ can be written as products of the two- and one-body density operators, and so on, recursively. Hence, the one-body density operators $\left\{\hat \rho^{(A)}_{kq}\right\}$ in (\ref{density_oper_1B}) are our basic building bricks. The many-body wave-function is expanded by time-dependent configurations (determinants $\left|\i;t\right>$ for fermions, permanents $\left|\n;t\right>$ for bosons) assembled by distributing the $N_A$ particles over the $M_A$ time-dependent orbitals introduced in the expansion (\ref{field}). For fermions we write \cite{mapping}: \begin{equation}\label{MCTDHF_ansatz} \left|\Psi^{(A)}(t)\right> = \sum_{\{\i\}} C_{\i}(t) \left|\i;t\right> \equiv \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right>, \end{equation} where the address $J_A$ is defined as follows: \begin{equation}\label{I_numbering} J_A \equiv J_A(\i)= 1 + \sum_{j=1}^{M_A-N_A}\binom{M_A-i_j}{M_A-N_A+1-j}, \end{equation} whereas for bosons we write \cite{mapping}: \begin{equation}\label{MCTDHB_ansatz} \left|\Psi^{(A)}(t)\right> = \sum_{\{\n\}} C_{\n}(t) \left|\n;t\right> \equiv \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right>, \end{equation} where the address $J_A$ is defined as follows: \begin{equation}\label{J_numbering} J_A \equiv J_A(\n) = 1 + \sum_{k=1}^{M_A-1} \binom{N_A+M_A-1-k-\sum_{l=1}^{k}n_l}{M_A-k}. \end{equation} The notation used in (\ref{MCTDHF_ansatz}-\ref{J_numbering}) follows the Combinadic-based addressing scheme of configurations introduced in \cite{mapping}. 
For fermions we enumerate configurations by holes, ${\bf i}= (i_1,i_2,\ldots,i_j=q,\ldots,i_{M_A-N_A})$ and $\i^{kq} = (i_1,i_2,\ldots,i_l=k,\ldots,i_{M_A-N_A})$, whereas for bosons we enumerate configurations by particles, ${\bf n}= (n_1,\ldots,n_k,\ldots,n_q,\ldots,n_{M_A})$ and $\n^{kq} = (n_1,\ldots,n_k-1,\ldots,n_q+1,\ldots,n_{M_A})$. The index $J_A$ is termed ``address'' because it is an integer uniquely identifying a configuration which is described by the positions of the holes $\i$ (for fermions) or the occupation numbers $\n$ (for bosons). For more details of the Combinadic-based mapping and particularly the connection between the bosonic occupation numbers and the positions of the fermionic holes see \cite{mapping}. For our requirements, we will need the result of the operation of the basic building bricks onto the state vector, namely, the operation of the one-body density operators $\left\{\hat \rho^{(A)}_{kq}\right\}$ onto $\left|\Psi^{(A)}(t)\right>$. Thus we have: \begin{equation}\label{O_den} \hat \rho^{(A)}_{kq} \left|\Psi^{(A)}(t)\right> = \hat \rho^{(A)}_{kq} \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right> \equiv \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \left|J_A;t\right>. \end{equation} For fermions we have the following relations \cite{mapping}: \begin{eqnarray}\label{basic_mapping_F} \!\!\!\!\!\!\!\! C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \equiv C^{\hat \rho^{(A)}_{kq}}_{J_A(\i)}(t) &=& \left \{ \begin{matrix} C_{J_A(\i^{kq})}(t) \times (-1)^{d(\i^{kq})}; & \ k \ne q, \ k \in \i^{kq}, \ q \not\in \i^{kq}\\ C_{J_A(\i)}(t); & \ k = q, \ k \not\in \i\\ 0; & \ {\mathrm{otherwise}} \\ \end{matrix} \right., \end{eqnarray} where the distance between the $i_j$-th hole of $\i$ at orbital $q$ and the $i_l$-th hole of $\i^{kq}$ at orbital $k$ is given by $d(\i^{kq}) = |k-q| - |j-l| - 1$ [equivalently, $d(\i^{kq}) = \sum_{p \in (k,q)} n_p$ simply enumerates how many fermions there are between the $k$-th and $q$-th orbitals]. For bosons we have the following relations \cite{mapping}: \begin{eqnarray}\label{basic_mapping_B} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \equiv C^{\hat \rho^{(A)}_{kq}}_{J_A(\n)}(t) &=& \left \{ \begin{matrix} C_{J_A(\n^{kq})}(t) \times \sqrt{n_k} \sqrt{n_q +1}; & \ k \ne q \\ C_{J_A(\n)}(t) \times n_k; & \ k = q \\ \end{matrix} \right., \end{eqnarray} which concludes our exposition of the Combinadic-based mapping and assembly of the operations of the basic building bricks $\left\{\hat \rho^{(A)}_{kq}\right\}$ on the many-body wave-function $\left|\Psi^{(A)}(t)\right>$. From Eqs.~(\ref{density_oper_1B},\ref{density_oper_2B_3B}) we see how to use the one-body (basic) building bricks $\left\{\hat \rho^{(A)}_{kq}\right\}$ to assemble higher-body operators. In particular we find: \begin{eqnarray}\label{basic_mapping_2B_3B} & & C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) = \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t), \nonumber \\ & & C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t) = \pm \delta_{pr} C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) - \delta_{pl} C^{\hat \rho^{(A)}_{ksrq}}_{J_A}(t) + {C^{\hat \rho^{(A)}_{pq}}_{J_A}}^{\hat \rho^{(A)}_{ksrl}}\!(t). \end{eqnarray} The meaning of the two levels of density operators in the superscripts of the coefficients $C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)$ and $C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t)$, resulting from higher-body operators in (\ref{basic_mapping_2B_3B}), is that the lower-level density operator acts on the many-body wave-function first, and the upper-level density operator then acts on the result. The key ingredient in the utilization of the Lagrangian formulation \cite{MCTDHB1,LF1,LF2} of the (Dirac-Frenkel \cite{DF1,DF2}) time-dependent variational principle to derive the equations-of-motion is the evaluation of matrix elements with respect to the multiconfigurational wave-function $\left|\Psi^{(A)}(t)\right>$. This will be utilized in the next subsection \ref{SEC2.2}. For the moment, we would like to prescribe how such matrix elements with respect to $\left|\Psi^{(A)}(t)\right>$ are to be evaluated. Consider the operator $\hat O^{(A)}$, which can be a one-body operator, two-body operator, three-body operator, etc. Then, we express and compute the expectation value of $\hat O^{(A)}$ with respect to $\left|\Psi^{(A)}(t)\right>$ as follows \cite{mapping}: \begin{equation}\label{expectation} \left<\Psi^{(A)}(t)\left| \hat O^{(A)} \right|\Psi^{(A)}(t)\right> = \left<\Psi^{(A)}(t)\left| \left\{ \hat O^{(A)} \right|\Psi^{(A)}(t)\right> \right\} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat O^{(A)}}_{J_A}(t), \end{equation} where \begin{equation}\label{O_Psi} \hat O^{(A)} \left|\Psi^{(A)}(t)\right> = \hat O^{(A)} \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right> \equiv \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^{\hat O^{(A)}}_{J_A}(t) \left|J_A;t\right>.
\end{equation} In particular, for a one-body operator, $\hat O^{(A)} = \sum_{k,q} O^{(A)}_{kq} \hat \rho^{(A)}_{kq}$, we get: \begin{equation}\label{C_one} C^{\hat O^{(A)}}_{J_A}(t) = \sum_{k,q}^{M_A} O^{(A)}_{kq} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t), \end{equation} for a two-body operator, $\hat O^{(A)} = \frac{1}{2} \sum_{k,s,q,l} O^{(A)}_{ksql} \hat \rho^{(A)}_{kslq}$, we get from (\ref{basic_mapping_2B_3B}): \begin{eqnarray}\label{C_two} C^{\hat O^{(A)}}_{J_A}(t) &=& \frac{1}{2} \sum_{k,s,q,l}^{M_A} O^{(A)}_{ksql} C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) \nonumber \\ &=& \frac{1}{2} \sum_{k,s,q,l}^{M_A} O^{(A)}_{ksql} \left[ \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t) \right], \end{eqnarray} and for a three-body operator, $\hat O^{(A)} = \frac{1}{6} \sum_{k,s,p,r,l,q} O^{(A)}_{kspqlr} \hat \rho^{(A)}_{ksprlq}$, we get from (\ref{basic_mapping_2B_3B}): \begin{eqnarray}\label{C_three} & & C^{\hat O^{(A)}}_{J_A}(t) = \frac{1}{6} \sum_{k,s,p,r,l,q}^{M_A} O^{(A)}_{kspqlr} C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t) \\ &=& \frac{1}{6} \sum_{k,s,p,r,l,q}^{M_A} O^{(A)}_{kspqlr} \left[ \pm \delta_{pr} C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) - \delta_{pl} C^{\hat \rho^{(A)}_{ksrq}}_{J_A}(t) + {C^{\hat \rho^{(A)}_{pq}}_{J_A}}^{\hat \rho^{(A)}_{ksrl}}\!(t) \right]. \nonumber \end{eqnarray} Finally and generally, the result of a sum of (operations of) operators, e.g., $\hat O_1^{(A)} + \hat O_2^{(A)}$, on $\left|\Psi^{(A)}(t)\right>$ translates to the sum of the respective coefficients \cite{mapping}: \begin{equation}\label{operators_sum} C^{\hat O_1^{(A)} + \hat O_2^{(A)}}_{J_A}(t) = C^{\hat O_1^{(A)}}_{J_A}(t) + C^{\hat O_2^{(A)}}_{J_A}(t).
\end{equation} These compact relations resting on one-body density operators only [the two-body density operators in (\ref{C_three}) are assembled from one-body density operators according to (\ref{density_oper_1B},\ref{density_oper_2B_3B})] will be used to reformulate MCTDHF and MCTDHB in a recursive manner in the following subsection \ref{SEC2.2}. \subsection{Equations-of-motion utilizing one-body density operators and Combinadic-based mapping}\label{SEC2.2} We can derive (reconstruct) the MCTDHF and MCTDHB equations-of-motion, taking into account {\it a-priori} that matrix elements of the form of (\ref{expectation}) enter the variational formulation. Within the Lagrangian formulation \cite{MCTDHB1,LF1,LF2} of the (Dirac-Frenkel \cite{DF1,DF2}) time-dependent variational principle, the action functional of the time-dependent many-particle Schr\"odinger equation takes on the following form: \begin{eqnarray}\label{func_basic} & & S\left[\left\{C_{J_A}(t)\right\},\left\{\phi_k(\x,t)\right\}\right] = \int dt \Bigg\{\left< \Psi^{(A)}(t) \left| \hat H^{(A)} - i\frac{\partial}{\partial t}\right| \Psi^{(A)}(t)\right> \nonumber \\ & & \qquad - \sum_{k,j}^{M_A} \mu_{kj}^{(A)}(t) \left[\left<\phi_k \left|\right.\phi_j\right> - \delta_{kj}\right] - \varepsilon^{(A)}(t) \left[\sum_{J_A=1}^{N^{(A)}_{\mathit{conf}}} \left|C_{J_A}(t)\right|^2 - 1 \right]\Bigg\}, \end{eqnarray} where the time-dependent Lagrange multipliers $\left\{\mu_{kj}^{(A)}(t)\right\}$ are introduced to guarantee the orthonormality of the orbitals at all times. Furthermore, they enable one to first evaluate the expectation value of $\hat H^{(A)} - i\frac{\partial}{\partial t}$ with respect to $\left|\Psi^{(A)}(t)\right>$ and then to perform the variation, which is precisely what is needed in order to exploit the Combinadic-based mapping \cite{mapping} {\it a-priori} in the derivation of the equations-of-motion.
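Before performing the variation, we pause to illustrate the computational kernel just described. The following Python sketch (ours; it orders the configurations arbitrarily rather than by their Combinadic addresses, and $k,q$ are $0$-based) applies the bosonic one-body mapping of Eq.~(\ref{basic_mapping_B}) to a coefficient vector and checks that the diagonal mappings sum to the particle number:

```python
from itertools import product as iproduct
from math import sqrt

# Action of a bosonic one-body density operator rho_kq on the coefficient
# vector: a relabeling of coefficients with square-root occupation factors.

def configurations(N, M):
    """All bosonic occupation tuples n with N particles in M orbitals."""
    return [n for n in iproduct(range(N + 1), repeat=M) if sum(n) == N]

def apply_rho_kq(C, k, q, confs):
    index = {n: i for i, n in enumerate(confs)}
    out = [0.0] * len(confs)
    for i, n in enumerate(confs):
        if k == q:
            out[i] = C[i] * n[k]
        elif n[k] > 0:
            nkq = list(n)
            nkq[k] -= 1                      # n^{kq}: one particle moved from k ...
            nkq[q] += 1                      # ... to q, relative to n
            out[i] = C[index[tuple(nkq)]] * sqrt(n[k]) * sqrt(n[q] + 1)
    return out

# Consistency check: the diagonal mappings sum to the number operator,
# i.e. sum_k rho_kk |Psi> = N |Psi>.
N, M = 3, 4
confs = configurations(N, M)
C = [1.0 / (j + 1) for j in range(len(confs))]   # arbitrary coefficients
total = [0.0] * len(confs)
for k in range(M):
    total = [t + r for t, r in zip(total, apply_rho_kq(C, k, k, confs))]
assert all(abs(t - N * c) < 1e-12 for t, c in zip(total, C))
```

Higher-body coefficients such as $C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)$ are then obtained by composing such calls, exactly as prescribed by Eq.~(\ref{basic_mapping_2B_3B}).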
The (here redundant) time-dependent Lagrange multiplier $\varepsilon^{(A)}(t)$ ensures normalization of the expansion coefficients at all times, and would resurface in the static theory in the case of the imaginary-time-propagation formulation. To perform the variation of the action functional with respect to the coefficients, we express the expectation value $\left< \Psi^{(A)}(t) \left| \hat H^{(A)} - i\frac{\partial}{\partial t}\right| \Psi^{(A)}(t)\right>$ following the Combinadic-based mapping \cite{mapping} and the compact expression in Eq.~(\ref{expectation}): \begin{equation}\label{expectation_H_C} \left<\Psi^{(A)}(t)\left| \hat H^{(A)} - i\frac{\partial}{\partial t} \right|\Psi^{(A)}(t)\right> = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) \left[ C^{\hat H^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) - i \dot C_{J_A}(t) \right]. \end{equation} Representation (\ref{expectation_H_C}) makes it clear what the variation with respect to the coefficients $\left\{C^\ast_{J_A}(t)\right\}$ would lead to. When this variation is performed explicitly, one immediately finds: \begin{equation}\label{C_gen} C^{\hat H^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) = i \dot C_{J_A}(t), \qquad \forall J_A. \end{equation} The meaning of $i\frac{\partial}{\partial t}^{(A)}$ is that the time-derivative is a one-body operator in the $A$-species Fock (and orbital) space. According to the rules of the previous subsection \ref{SEC2.1}, the left-hand-side of Eq.~(\ref{C_gen}) is given by the sum of its one-, two- and three-body constituents: \begin{equation}\label{SE_C_gen} C^{\hat H^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) = C^{\hat h^{(A)} - i\frac{\partial}{\partial t}^{(A)}}_{J_A}\!(t) + C^{\hat W^{(A)}}_{J_A}(t) + C^{\hat U^{(A)}}_{J_A}(t). 
\end{equation} The invariance of $\left|\Psi^{(A)}(t)\right>$ to unitary transformations of the orbitals, compensated by the `reverse' transformations of the coefficients, is well-known \cite{cpl,jcp,MCTDHB1} and can be represented as follows: $\left|\Psi^{(A)}(t)\right> = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C_{J_A}(t) \left|J_A;t\right> =$\break $\sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} \overline{C}_{J_A}(t) \overline{\left|J_A;t\right>}$, with obvious notation. This invariance can be utilized to bring the equations-of-motion into a simpler form (see, in particular, the discussion below on the orbitals' part). Primarily, the differential conditions first introduced by the MCTDH founders \cite{cpl,jcp}: \begin{equation}\label{diff_con_A} \left\{i\frac{\partial}{\partial t}^{(A)}\right\}_{kq} \equiv i\left<\phi_k \left|\dot\phi_q\right>\right. = 0, \ \ k,q=1,\ldots,M_A, \end{equation} come out explicitly from such a unitary transformation \cite{MCTDHB1,conversion} and straightforwardly lead in the case of the equations-of-motion for the coefficients to: \begin{eqnarray}\label{C_gen_phi_phidot} & & C^{\hat H^{(A)}}_{J_A}(t) = i \dot C_{J_A}(t), \qquad \forall J_A, \nonumber \\ & & C^{\hat H^{(A)}}_{J_A}(t) = C^{\hat h^{(A)}}_{J_A}(t) + C^{\hat W^{(A)}}_{J_A}(t) + C^{\hat U^{(A)}}_{J_A}(t). \end{eqnarray} For the general form of the differential conditions, Eq.~(\ref{diff_con_A}), see the literature \cite{review,book}. We remark that a particularly interesting representation (put forward and utilized so far for distinguishable degrees-of-freedom only) of the differential conditions can be made in order to propagate the systems' natural orbitals \cite{Uwe_nat1,Uwe_nat2}. In MCTDHF and MCTDHB the integration of the coefficients' part in time is performed (for unitary time-evolution) by the short iterative Lanczos (SIL) algorithm \cite{SIL}. We remark on the numerical implementation of Eq.~(\ref{C_gen_phi_phidot}) within SIL propagation \cite{package}.
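As a minimal stand-alone illustration of such a propagation step, consider the following sketch on an explicit Hermitian matrix. In the actual implementation the Hamiltonian matrix is never formed; the routine `apply_H` below is a hypothetical stand-in for the matrix-free, Combinadic-based evaluation of $C^{\hat H^{(A)}}_{J_A}(t)$, and all names and array layouts are ours:

```python
import numpy as np

def sil_step(apply_H, C, dt, K):
    """One short-iterative-Lanczos step, C(t+dt) ~ exp(-i H dt) C(t),
    built from the K-dimensional Krylov space {C, H C, ..., H^{K-1} C}."""
    V = np.zeros((K, C.size), dtype=complex)   # Lanczos vectors
    alpha, beta = np.zeros(K), np.zeros(K - 1)
    V[0] = C / np.linalg.norm(C)
    w = apply_H(V[0])
    alpha[0] = np.real(np.vdot(V[0], w))
    w = w - alpha[0] * V[0]
    for j in range(1, K):
        beta[j - 1] = np.linalg.norm(w)
        V[j] = w / beta[j - 1]
        w = apply_H(V[j])
        alpha[j] = np.real(np.vdot(V[j], w))
        w = w - alpha[j] * V[j] - beta[j - 1] * V[j - 1]
    # projected (tridiagonal) Hamiltonian in the Krylov space
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e, U = np.linalg.eigh(T)
    y = U @ (np.exp(-1j * e * dt) * U[0])      # exp(-i T dt) acting on e_1
    return np.linalg.norm(C) * (V.T @ y)       # back to the full space
```

For small time-steps a modest $K$ suffices, which is what makes the scheme attractive when each application of $\hat H^{(A)}$ is the expensive operation.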
For the SIL one needs to operate with powers of $\hat H$ onto the many-particle wave-function and construct the $K$-dimensional Krylov subspace: $\left\{\left|\Psi^{(A)}(t)\right>, \hat H^{(A)}\left|\Psi^{(A)}(t)\right>,\ldots, \hat{H}^{(A)}\strut^{K-1} \left|\Psi^{(A)}(t)\right> \right\}$. In the language of the Combinadic-based mapping of coefficients and utilizing the recipe of how to operate with operators on the many-particle wave-function discussed above \cite{mapping}, this construction translates to: $\left\{C_{J_A}(t), C^{\hat H^{(A)}}_{J_A}(t),{C^{\hat H^{(A)}}_{J_A}}^{\hat H^{(A)}}\!(t),\ldots\right\}$. Let us now move to the equations-of-motion for the orbitals $\left\{\phi_k(\x,t)\right\}$. For this, the expectation value of the many-body Hamiltonian $\hat H^{(A)}$ with respect to $\left|\Psi^{(A)}(t)\right>$ has to be expressed in a form which allows for variation with respect to the orbitals, namely as an explicit function of the quantities (integrals) $h^{(A)}_{kq}$, $W^{(A)}_{ksql}$ and $U^{(A)}_{kspqlr}$ in (\ref{matrix_elements}). The result reads: \begin{eqnarray}\label{expectation_H3_phi} \!\!\!\!\!\!\!\! & & \left<\Psi\left|\hat H^{(A)} - i\frac{\partial}{\partial t} \right|\Psi\right> = \sum_{k,q=1}^{M_A} \rho^{(A)}_{kq} \left[ h^{(A)}_{kq} - \left\{i\frac{\partial}{\partial t}^{(A)}\right\}_{kq} \right] + \\ & & + \frac{1}{2}\sum_{k,s,l,q=1}^{M_A} \rho^{(A)}_{kslq} W^{(A)}_{ksql} + \frac{1}{6}\sum_{k,s,p,r,l,q=1}^{M_A} \rho^{(A)}_{ksprlq} U^{(A)}_{kspqlr} - \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} i C^\ast_{J_A}(t) \dot C_{J_A}(t). \nonumber \end{eqnarray} The expectation values of the density operators $\hat \rho^{(A)}_{kq}$, $\hat \rho^{(A)}_{kslq}$ and $\hat \rho^{(A)}_{ksprlq}$ with respect to $\left|\Psi^{(A)}(t)\right>$ (resulting from the expectation value of the Hamiltonian with respect to the many-particle wave-function) are computed following Eq.~(\ref{expectation}): \begin{eqnarray}\label{denisty_matrx_element} & & \rho^{(A)}_{kq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{kq}}_{J_A}(t), \qquad \rho^{(A)}_{kslq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t), \nonumber \\ & & \qquad \qquad \rho^{(A)}_{ksprlq} = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A}(t) C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t), \end{eqnarray} where the coefficients $C^{\hat \rho^{(A)}_{kq}}_{J_A}(t)$, $C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t)$ and $C^{\hat \rho^{(A)}_{ksprlq}}_{J_A}(t)$ are given in Eqs.~(\ref{basic_mapping_F},\ref{basic_mapping_B}) and (\ref{basic_mapping_2B_3B}), respectively. We collect the expectation values of the one-body density operators as the matrix $\brho^{(A)}(t)=\left\{\rho^{(A)}_{kq}(t)\right\}$. One should remember that the expectation values of two- and three-body density operators can generally not be factorized into products of expectation values of one-body density operators. For instance (and in the language of the Combinadic-based mapping of coefficients), $C^{\hat \rho^{(A)}_{kslq}}_{J_A}(t) = \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \mp {C^{\hat \rho^{(A)}_{sq}}_{J_A}}^{\hat \rho^{(A)}_{kl}}\!\!(t) \ne \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}}_{J_A}(t) \mp C^{\hat \rho^{(A)}_{kl}}_{J_A}(t) C^{\hat \rho^{(A)}_{sq}}_{J_A}(t)$. This is unlike the operation of the density operators themselves on the many-particle wave-function utilized above. We can now perform the variation of $S\left[\left\{C_{J_A}(t)\right\},\left\{\phi_k(\x,t)\right\}\right]$ with respect to the orbitals.
This variation has been detailed in the literature, see \cite{MCTDHB1,unified}, and we give here the main steps in the derivation of the equations-of-motion as far as they are needed later on. Making use of the orthonormality relations of the time-dependent orbitals $\left\{\phi_k(\x,t)\right\}$, we can solve for the Lagrange multipliers, $k,j = 1,\ldots,M_A$: \begin{eqnarray}\label{MCTDHX_H3_mu} & & \!\!\!\!\!\!\!\! \mu_{kj}^{(A)}(t) = \\ & & \!\!\!\!\!\!\!\! = \left<\phi_j\left| \sum^{M_A}_{q=1} \left( \rho^{(A)}_{kq} \left[ \hat h^{(A)} - i\frac{\partial}{\partial t}^{(A)} \right] + \sum^{M_A}_{s,l=1}\rho^{(A)}_{kslq} \hat W^{(A)}_{sl} + \frac{1}{2}\sum_{s,p,r,l=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr} \right) \right|\phi_q\right>. \nonumber \end{eqnarray} The Lagrange multipliers $\left\{\mu_{kj}^{(A)}(t)\right\}$ can be eliminated from the equations-of-motion, which is achieved by introducing the projection operator: \begin{equation}\label{project_A} \hat {\mathbf P}^{(A)} = 1 - \sum_{u=1}^{M_A} \left|\phi_{u}\right>\left<\phi_{u}\right|. \end{equation} When this is done, we find the following equations-of-motion for the orbitals $\left\{\phi_j(\x,t)\right\}$, $j=1,\ldots,M_A$: \begin{eqnarray}\label{MCTDHX_P_P_H3_eom} & & \hat {\mathbf P}^{(A)} i\left|\dot\phi_j\right> = \hat {\mathbf P}^{(A)} \Bigg[\hat h^{(A)} \left|\phi_j\right> + \\ & & + \sum^{M_A}_{k,q=1} \left\{\brho^{(A)}(t)\right\}^{-1}_{jk} \sum^{M_A}_{s,l=1} \left(\rho^{(A)}_{kslq} \hat{W}^{(A)}_{sl} +\frac{1}{2}\sum_{p,r=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr} \right) \left|\phi_q\right> \Bigg], \nonumber \end{eqnarray} where \begin{eqnarray}\label{TD_1B_2_3_potentials} & & \hat W^{(A)}_{sl}(\x,t)=\int\phi_s^\ast(\x',t) \hat W^{(A)}(\x,\x') \phi_l(\x',t) d\x', \\ & & \hat U^{(A)}_{splr}(\x,t) = \int \!\! \int \phi_s^\ast(\x',t) \phi_p^\ast(\x'',t) \hat U^{(A)}(\x,\x',\x'') \phi_l(\x',t) \phi_r(\x'',t) d\x' d\x'', \nonumber \end{eqnarray} are local (for spin-independent interactions), time-dependent one-body potentials, and $\dot \phi_j \equiv \frac{\partial\phi_j}{\partial t}$. Utilizing the differential conditions (\ref{diff_con_A}) we can eliminate the projection operator $\hat {\mathbf P}^{(A)}$ appearing on the left-hand-side of Eq.~(\ref{MCTDHX_P_P_H3_eom}) and arrive at the final result for the equations-of-motion of the orbitals in MCTDHF and MCTDHB (see \cite{book,unified}), $j=1,\ldots,M_A$: \begin{eqnarray}\label{MCTDHX_P_H3_eom} & & i\left|\dot\phi_j\right> = \hat {\mathbf P}^{(A)} \Bigg[\hat h^{(A)} \left|\phi_j\right> + \\ & & + \sum^{M_A}_{k,q=1} \left\{\brho^{(A)}(t)\right\}^{-1}_{jk} \sum^{M_A}_{s,l=1} \left(\rho^{(A)}_{kslq} \hat{W}^{(A)}_{sl} +\frac{1}{2}\sum_{p,r=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr} \right) \left|\phi_q\right> \Bigg]. \nonumber \end{eqnarray} Summarizing, the coupled sets of equations-of-motion (\ref{C_gen_phi_phidot}) for the expansion coefficients and (\ref{MCTDHX_P_H3_eom}) for the orbitals constitute the MCTDHF and MCTDHB methods, where the one-body density operators (\ref{density_oper_1B},\ref{density_oper_2B_3B}) are employed as the basic building bricks in their construction and implementation. We can also propagate the MCTDHF and MCTDHB equations-of-motion (\ref{C_gen_phi_phidot},\ref{MCTDHX_P_H3_eom}) in imaginary time and arrive for time-independent Hamiltonians at the corresponding self-consistent static theories, MCHF \cite{gen_MCHF1,gen_MCHF2} and MCHB \cite{MCHB}.
Thus, setting $t \to -it$ into the coupled sets (\ref{C_gen},\ref{MCTDHX_P_P_H3_eom}) or into (\ref{C_gen_phi_phidot},\ref{MCTDHX_P_H3_eom}), and translating back from the projection operator $\hat {\mathbf P}^{(A)}$ to the Lagrange multipliers $\left\{\mu_{kj}^{(A)}\right\}$, the final result reads, $k=1,\ldots,M_A$: \begin{eqnarray}\label{MCTDH_H3_stationary} & & \!\!\!\!\!\!\!\! \sum_{q=1}^{M_A} \left[ \rho^{(A)}_{kq} \hat h^{(A)} + \sum^{M_A}_{s,l=1} \left(\rho^{(A)}_{kslq} \hat{W}^{(A)}_{sl} + \frac{1}{2}\sum_{p,r=1}^{M_A} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr}\right) \right] \left|\phi_q\right> = \sum_{j=1}^{M_A} \mu_{kj}^{(A)} \left|\phi_j\right>, \nonumber \\ & & \qquad \qquad C^{\hat H^{(A)}}_{J_A} = \varepsilon^{(A)} C_{J_A}, \qquad \forall J_A, \end{eqnarray} where, making use of the normalization of the many-particle wave-function, $\varepsilon^{(A)}= \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} C^\ast_{J_A} C^{\hat H^{(A)}}_{J_A}$ is the eigen-energy of the system. Making use of the fact that the matrix of Lagrange multipliers $\{\mu_{kj}^{(A)}\}$ is Hermitian (for stationary states) and of the invariance property of the multiconfigurational wave-function (to unitary transformations of the orbitals compensated by the `reverse' transformations of the coefficients), one can transform Eq.~(\ref{MCTDH_H3_stationary}) to a representation where $\{\mu_{kj}^{(A)}\}$ is a diagonal matrix. All in all, we have formulated in the present section the MCTDHF and MCTDHB equations-of-motion, as well as their static variants MCHF and MCHB, by (i) utilizing in a recursive manner one-body density operators only, and by (ii) employing {\it a priori} the Combinadic-based mapping formulation of Ref.~\cite{mapping} to evaluate matrix elements. This sets up the tools to put forward the MCTDH theory for mixtures of three kinds of identical particles in the following Sec.~\ref{SEC3}, and to briefly discuss its structure and properties, and how to implement it.
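The imaginary-time route to the static theories can be illustrated on the coefficients' part with a minimal sketch on an explicit Hermitian matrix (names and interface are ours; the actual implementation operates matrix-free through the Combinadic-based mapping). Repeated application of $e^{-\hat H \, dt}$ followed by renormalization relaxes an initial vector to the lowest eigenstate, and the energy is recovered as $\varepsilon = \sum_J C^\ast_J C^{\hat H}_J$:

```python
import numpy as np

def relax(H, C0, dt, steps):
    """Imaginary-time propagation (t -> -i t) of the coefficients' part:
    apply exp(-H dt) and renormalize, converging to the ground state."""
    e, U = np.linalg.eigh(H)
    propagator = U @ np.diag(np.exp(-e * dt)) @ U.conj().T  # exp(-H dt)
    C = C0.astype(complex)
    for _ in range(steps):
        C = propagator @ C
        C = C / np.linalg.norm(C)          # keep sum_J |C_J|^2 = 1
    energy = np.real(np.vdot(C, H @ C))    # epsilon = sum_J C*_J C^{H}_J
    return C, energy
```

Excited-state components are damped by factors $e^{-(\varepsilon_n-\varepsilon_0)dt}$ per step, so convergence is governed by the spectral gap.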
\section{Three kinds of identical particles: MCTDH-FFF, MCTDH-BFF, MCTDH-BBF and MCTDH-BBB}\label{SEC3} In the present section we specify the MCTDH theory for mixtures of three kinds of identical particles, interacting with up to three-body forces. Before we get into the details of the derivation and the flood of equations, we would like to lay out a general scheme or flowchart that one can follow to handle similar or even more complex mixtures. Specifically, we need to assign a different set of time-dependent orthonormal orbitals to each and every species in the mixture. These orbitals are then used to assemble the time-dependent configurations (with determinants' parts for fermions and permanents' parts for bosons). The many-particle wave-function is thereafter assembled as a linear combination of all time-dependent configurations with time-dependent expansion coefficients. The many-particle Hamiltonian contains intra-species terms and inter-species terms, the latter consisting of two-body, three-body and higher interactions. The main point in the representation of the Hamiltonian is the utilization of one-body density operators. In turn, all intra-species and inter-species interactions can be represented utilizing (products of) one-body density operators only. The key step in the derivation of the equations-of-motion is the utilization of the Lagrangian formulation \cite{MCTDHB1,LF1,LF2} of the (Dirac-Frenkel \cite{DF1,DF2}) time-dependent variational principle with Lagrange multipliers for each species' orbitals, ensuring thereby the orthonormality of the orbitals for all times. In such a way, matrix-elements appear within the formulation explicitly, before the variation with respect to either the expansion coefficients or the orbitals is performed.
The equations-of-motion for the expansion coefficients of the multiconfigurational wave-function are obtained by taking the variation of the action functional when it is expressed explicitly in terms of the expansion coefficients. The Combinadic-based mapping \cite{mapping} lifts the necessity to work with the huge matrix representation of the Hamiltonian with respect to the configurations, and allows one to efficiently perform operations on the vector of expansion coefficients directly. The equations-of-motion for the orbitals are obtained by taking the variation of the action functional when it is expressed explicitly in terms of the (integrals of the) orbitals. When this is performed, expectation values of the various density operators in the Hamiltonian (with respect to the many-particle wave-function) emerge which can be efficiently computed utilizing the Combinadic-based mapping \cite{mapping}. \subsection{Additional ingredients for mixtures}\label{SEC3.1} For a mixture of three kinds of identical particles, $N_A$ particles of type $A$, $N_B$ particles of type $B$ and $N_C$ particles of type $C$, we now need two additional field operators, expanded in different complete sets of time-dependent orbitals: \begin{equation}\label{field_3Mix} \hat{\mathbf \Psi}_B(\y) = \sum_{k'} \hat b_{k'}(t) \psi_{k'}(\y,t), \qquad \hat{\mathbf \Psi}_C(\z) = \sum_{k''} \hat c_{k''}(t)\chi_{k''}(\z,t), \end{equation} where the field operator for the $A$-species particles $\hat{\mathbf \Psi}_A(\x)$ was first introduced and expanded in (\ref{field}). Note that each species can have a different spin, hence the three explicitly distinct coordinates $\x$, $\y$ and $\z$. Field operators of distinct particles (can be chosen to) commute.
Our starting point is the many-body Hamiltonian of the most general 3-species mixture with up to 3-body interactions: \begin{eqnarray}\label{ham_3mix_general} & & \hat H^{(ABC)} = \hat H^{(A)} + \hat H^{(B)} + \hat H^{(C)} + \hat W^{(AB)} + \hat W^{(AC)} + \hat W^{(BC)} + \\ & & + \hat U^{(AAB)} + \hat U^{(ABB)} + \hat U^{(AAC)} + \hat U^{(ACC)} + \hat U^{(BBC)} + \hat U^{(BCC)} + \hat U^{(ABC)}. \nonumber \end{eqnarray} Here, $\hat H^{(A)}$, $\hat H^{(B)}$ and $\hat H^{(C)}$ are the single-species Hamiltonians that can be read off (\ref{ham}). The inter-species two-body interaction parts are given by: \begin{eqnarray}\label{2_body_forces} & & \hat W^{(AB)} = \int \!\! \int d{\bf x} d{\bf y}\hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_B(\y) \hat W^{(AB)}(\x,\y) \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x), \nonumber \\ & & \hat W^{(AC)} = \int \!\! \int d{\bf x}d{\bf z}\hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_C(\z) \hat W^{(AC)}(\x,\z) \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_A(\x), \nonumber \\ & & \hat W^{(BC)} = \int \!\! \int d{\bf y}d{\bf z}\hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_C(\z) \hat W^{(BC)}(\y,\z) \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y). \end{eqnarray} The inter-species three-body interaction parts, resulting from the force between two identical particles and a third distinct particle, are given by: \begin{eqnarray}\label{binary_3_body_forces} \hat U^{(AAB)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d\x' d{\bf y}\hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_A(\x') \hat{\mathbf \Psi}^\dag_B(\y) \hat U^{(AAB)}(\x,\x',\y) \times \nonumber \\ & & \times \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x') \hat{\mathbf \Psi}_A(\x), \nonumber \\ \hat U^{(ABB)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d{\bf y}d\y' \hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_B(\y') \hat U^{(ABB)}(\x,\y,\y') \times \nonumber \\ & & \times \hat{\mathbf \Psi}_B(\y') \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x), \nonumber \\ \hat U^{(AAC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d\x' d{\bf z}\hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_A(\x') \hat{\mathbf \Psi}^\dag_C(\z) \hat U^{(AAC)}(\x,\x',\z) \times \nonumber \\ & & \times \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_A(\x') \hat{\mathbf \Psi}_A(\x), \nonumber \\ \hat U^{(ACC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf x}d{\bf z}d\z' \hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_C(\z) \hat{\mathbf \Psi}^\dag_C(\z') \hat U^{(ACC)}(\x,\z,\z') \times \nonumber \\ & & \times \hat{\mathbf \Psi}_C(\z') \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_A(\x), \nonumber \\ \hat U^{(BBC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf y}d\y' d{\bf z}\hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_B(\y') \hat{\mathbf \Psi}^\dag_C(\z) \hat U^{(BBC)}(\y,\y',\z) \times \nonumber \\ & & \times \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y') \hat{\mathbf \Psi}_B(\y), \nonumber \\ \hat U^{(BCC)} &=& \frac{1}{2} \int \!\! \int \!\! \int \!\! d{\bf y}d{\bf z}d\z' \hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_C(\z) \hat{\mathbf \Psi}^\dag_C(\z') \hat U^{(BCC)}(\y,\z,\z') \times \nonumber \\ & & \times \hat{\mathbf \Psi}_C(\z') \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y). \end{eqnarray} Finally, the inter-species three-body interaction part, resulting from the force between three different particles, is given by: \begin{eqnarray}\label{3_body_forces} \hat U^{(ABC)} &=& \int \!\! \int \!\! \int \!\! d{\bf x}d{\bf y}d{\bf z}\hat{\mathbf \Psi}^\dag_A(\x) \hat{\mathbf \Psi}^\dag_B(\y) \hat{\mathbf \Psi}^\dag_C(\z) \hat U^{(ABC)}(\x,\y,\z) \times \nonumber \\ & & \times \hat{\mathbf \Psi}_C(\z) \hat{\mathbf \Psi}_B(\y) \hat{\mathbf \Psi}_A(\x). \end{eqnarray} When all the above are combined, i.e., the field operators $\hat{\mathbf \Psi}_A(\x)$, $\hat{\mathbf \Psi}_B(\y)$ and $\hat{\mathbf \Psi}_C(\z)$ are substituted into the various interaction terms, we find the following second-quantized expression for the mixture's Hamiltonian: \begin{eqnarray}\label{ham_mix_2nd} & & \hat H^{(ABC)} = \sum_{k,q} h^{(A)}_{kq} \hat \rho^{(A)}_{kq} + \frac{1}{2} \sum_{k,s,q,l} W^{(A)}_{ksql} \hat \rho^{(A)}_{kslq} + \frac{1}{6}\sum_{k,s,p,r,l,q} U^{(A)}_{kspqlr} \hat \rho^{(A)}_{ksprlq} + \nonumber \\ & & + \sum_{k',q'} h^{(B)}_{k'q'} \hat \rho^{(B)}_{k'q'} + \frac{1}{2} \sum_{k',s',q',l'} W^{(B)}_{k's'q'l'} \hat \rho^{(B)}_{k's'l'q'} + \frac{1}{6}\sum_{k',s',p',r',l',q'} U^{(B)}_{k's'p'q'l'r'} \hat \rho^{(B)}_{k's'p'r'l'q'} + \nonumber \\ & & + \sum_{k'',q''} h^{(C)}_{k''q''} \hat \rho^{(C)}_{k''q''} + \frac{1}{2} \sum_{k'',s'',q'',l''} W^{(C)}_{k''s''q''l''} \hat \rho^{(C)}_{k''s''l''q''} + \nonumber \\ & & + \frac{1}{6}\sum_{k'',s'',p'',r'',l'',q''} U^{(C)}_{k''s''p''q''l''r''} \hat \rho^{(C)}_{k''s''p''r''l''q''} + \nonumber \\ & & + \sum_{k,k',q,q'} W^{(AB)}_{kk'qq'} \hat\rho^{(A)}_{kq} \hat\rho^{(B)}_{k'q'} + \sum_{k,k'',q,q''} W^{(AC)}_{kk''qq''} \hat\rho^{(A)}_{kq} \hat\rho^{(C)}_{k''q''} + \sum_{k',k'',q',q''} W^{(BC)}_{k'k''q'q''} \hat\rho^{(B)}_{k'q'} \hat\rho^{(C)}_{k''q''} + \nonumber \\ & & + \frac{1}{2} \sum_{k,k',s,q,q',l} U^{(AAB)}_{kk'sqq'l} \hat\rho^{(A)}_{kslq} \hat\rho^{(B)}_{k'q'} + \frac{1}{2} \sum_{k,k',s',q,q',l'} U^{(ABB)}_{kk's'qq'l'} \hat\rho^{(A)}_{kq} \hat\rho^{(B)}_{k's'l'q'} + \nonumber \\ & & + \frac{1}{2} \sum_{k,k'',s,q,q'',l} U^{(AAC)}_{kk''sqq''l} \hat\rho^{(A)}_{kslq} \hat\rho^{(C)}_{k''q''} + \frac{1}{2} \sum_{k,k'',s'',q,q'',l''} U^{(ACC)}_{kk''s''qq''l''} \hat\rho^{(A)}_{kq} \hat\rho^{(C)}_{k''s''l''q''} + \nonumber \\ & & + \frac{1}{2} \sum_{k',k'',s',q',q'',l'} U^{(BBC)}_{k'k''s'q'q''l'} \hat\rho^{(B)}_{k's'l'q'} \hat\rho^{(C)}_{k''q''} + \frac{1}{2} \sum_{k',k'',s'',q',q'',l''} U^{(BCC)}_{k'k''s''q'q''l''} \hat\rho^{(B)}_{k'q'} \hat\rho^{(C)}_{k''s''l''q''} + \nonumber \\ & & + \sum_{k,k',k'',q,q',q''} U^{(ABC)}_{kk'k''qq'q''} \hat\rho^{(A)}_{kq} \hat\rho^{(B)}_{k'q'} \hat\rho^{(C)}_{k''q''}. \end{eqnarray} $\hat H^{(ABC)}$ governs the non-equilibrium dynamics (and statics) of the mixture, and the most efficient way to treat this dynamics is by specifying the MCTDH method for the mixture, making use of the building bricks of the previous section \ref{SEC2}. We see in (\ref{ham_mix_2nd}) two kinds of ingredients. First, there are matrix elements (integrals) of the various interaction terms with respect to the orbitals. For the flow of exposition and for completeness, we list them in Appendix \ref{appendix_C}. Second, there are various density operators in $\hat H^{(ABC)}$. The $B$ and $C$ intra-species density operators can be read directly from Eqs.~(\ref{density_oper_1B},\ref{density_oper_2B_3B}), when replacing therein the $A$-species quantities by their $B$- or $C$-species counterparts. The inter-species density operators in (\ref{ham_mix_2nd}) can all be represented as appropriate products of the one-body density operators: $\left\{\hat \rho^{(A)}_{kq}\right\}$, $\left\{\hat \rho^{(B)}_{k'q'}\right\}$ and $\left\{\hat \rho^{(C)}_{k''q''}\right\}$. These are the (basic) building bricks of our theory for mixtures. But how to operate with them on many-particle wave-functions of mixtures?
The multiconfigurational ansatz for a mixture of three kinds of identical particles now takes on the form: \begin{equation}\label{3Mix_ansatz} \left|\Psi^{(ABC)}(t)\right> = \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} \sum^{N^{(B)}_{\mathit{conf}}}_{J_B=1} \sum^{N^{(C)}_{\mathit{conf}}}_{J_C=1} C_{J_A,J_B,J_C}(t) \left|J_A,J_B,J_C;t\right>, \end{equation} where we denote hereafter $\vec J = (J_A, J_B, J_C)$ for brevity, such that $C_{\vec J}(t) \equiv C_{J_A,J_B,J_C}(t)$, $\left|\vec J;t\right> \equiv \left|J_A,J_B,J_C;t\right>$ and $\sum_{\{\vec J\}} \equiv \sum^{N^{(A)}_{\mathit{conf}}}_{J_A=1} \sum^{N^{(B)}_{\mathit{conf}}}_{J_B=1} \sum^{N^{(C)}_{\mathit{conf}}}_{J_C=1}$. To prescribe the action of operators on the multiconfigurational wave-function of the mixture (\ref{3Mix_ansatz}), all we need to know is how the density operators operate on $\left|\Psi^{(ABC)}(t)\right>$. The operation of the basic, one-body density operators, whether $\hat \rho^{(A)}_{kq}$, $\hat \rho^{(B)}_{k'q'}$ or $\hat \rho^{(C)}_{k''q''}$, can be read off directly from Eqs.~(\ref{O_den}-\ref{C_three}) and we will not repeat them here (one just needs to replace therein $J_A$ by $\vec J$ in the overall notation, and $M_A$ by $M_B$ or $M_C$, when appropriate; also see \cite{mapping}). For the inter-species two-body density operators we have: \begin{equation}\label{2B_mix_dens_oper} C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t), \qquad C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \qquad C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t).
\end{equation} The notation in (\ref{2B_mix_dens_oper}) is to be understood as follows: the two one-body density operators (in each case) are written as superscripts on the same level, signifying that they commute with one another; the operation of the two one-body density operators on $\left|\Psi^{(ABC)}(t)\right>$ is to be performed sequentially, i.e., the first operates on $\left|\Psi^{(ABC)}(t)\right>$ and the second operates on the outcome. Finally, for the inter-species three-body density operators we have: \begin{eqnarray}\label{3B_mix_dens_oper} & & C^{\hat \rho^{(A)}_{kslq} \hat \rho^{(B)}_{k'q'}}_{\vec J}(t), \qquad C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k's'l'q'}}_{\vec J}(t), \qquad C^{\hat \rho^{(A)}_{kslq} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \nonumber \\ & & C^{\hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t), \qquad C^{\hat \rho^{(B)}_{k's'l'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \qquad C^{\hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t), \end{eqnarray} where the operation of the two-body density operators appearing in the superscripts is further decomposed to operations of one-body density operators on $\left|\Psi^{(ABC)}(t)\right>$ analogously to Eq.~(\ref{basic_mapping_2B_3B}) [see Appendix \ref{appendix_A}], and \begin{equation}\label{3B_mix_dens_oper_3} C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t). \end{equation} Now we are in a position to write the action of operators on the multiconfigurational wave-function of the mixture (\ref{3Mix_ansatz}). This is collected for ease of reading and for completeness in Appendix \ref{appendix_A}. We have gathered most of the ingredients for the derivation of the equations-of-motion, which is written down in the subsequent subsection \ref{SEC3.2}.
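The sequential, commuting action of species operators can be sketched on the coefficient tensor $C_{J_A,J_B,J_C}(t)$ directly. In the following illustration the explicit matrices $R$ are hypothetical stand-ins for the matrix-free Combinadic-based evaluation; since each species' operator touches only its own configurational index, the order of application is immaterial:

```python
import numpy as np

def act_A(R, C):
    """Act with an A-species operator on the coefficient tensor
    C[J_A, J_B, J_C]; R is a (hypothetical) matrix representation of the
    operator on the A-species configurations."""
    return np.einsum('IJ,Jbc->Ibc', R, C)

def act_B(R, C):
    """Same for a B-species operator (acts on the second index only)."""
    return np.einsum('IJ,aJc->aIc', R, C)

def act_C(R, C):
    """Same for a C-species operator (acts on the third index only)."""
    return np.einsum('IJ,abJ->abI', R, C)
```

Composing these functions realizes products such as $\hat\rho^{(A)}_{kq}\hat\rho^{(B)}_{k'q'}\hat\rho^{(C)}_{k''q''}$ acting on the coefficient tensor, one species index at a time.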
There are four possible mixtures (Fermi-Fermi-Fermi, Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose), and the resulting MCTDH-FFF, MCTDH-BFF, MCTDH-BBF and MCTDH-BBB are to be derived and presented in a unified manner, in the spirit in which it has been done for the single-species case \cite{book,unified} (and the previous section \ref{SEC2}) and for mixtures of two kinds of identical particles \cite{book,MCTDHX}. \subsection{Equations-of-motion utilizing one-body density operators and Combinadic-based mapping for mixtures}\label{SEC3.2} The action functional of the time-dependent many-particle Schr\"odinger equation takes on the form: \begin{eqnarray}\label{func_ABC} & & S\left[\left\{C_{\vec J}(t)\right\},\left\{\phi_k(\x,t)\right\}, \left\{\psi_{k'}(\y,t)\right\},\left\{\chi_{k''}(\z,t)\right\}\right] = \\ & & \int dt \Bigg\{\left< \Psi^{(ABC)}(t) \left| \hat H^{(ABC)} - i\frac{\partial}{\partial t}\right| \Psi^{(ABC)}(t)\right> - \nonumber \\ & & - \sum_{k,j}^{M_A} \mu_{kj}^{(A)}(t) \left[\left<\phi_k \left|\right.\phi_j\right> - \delta_{kj}\right] - \sum_{k',j'}^{M_B} \mu_{k'j'}^{(B)}(t) \left[\left<\psi_{k'} \left|\right.\psi_{j'}\right> - \delta_{k'j'}\right] - \nonumber \\ & & - \sum_{k'',j''}^{M_C} \mu_{k''j''}^{(C)}(t) \left[\left<\chi_{k''} \left|\right.\chi_{j''}\right> - \delta_{k''j''}\right] - \varepsilon^{(ABC)}(t) \left[\sum_{\{\vec J\}} \left|C_{\vec J}(t)\right|^2 - 1 \right]\Bigg\}, \nonumber \end{eqnarray} where the time-dependent Lagrange multipliers $\left\{\mu_{kj}^{(A)}(t)\right\}$, $\left\{\mu_{k'j'}^{(B)}(t)\right\}$ and\break $\left\{\mu_{k''j''}^{(C)}(t)\right\}$ are introduced, respectively, to ensure the orthonormality of the $A$-, $B$- and $C$-species orbitals at all times. Note that orbitals of distinct particles need not be orthogonal to each other. As for the single-species theory, the Lagrange multiplier $\varepsilon^{(ABC)}(t)$ is redundant in the time-dependent case and will resurface in the static theory.
In what follows we present the main steps of the derivation. More details and various quantities needed for the derivation, and in particular for the implementation of the equations-of-motion, are deferred to Appendix \ref{appendix_B} and Appendix \ref{appendix_C}. To perform the variation of the action functional (\ref{func_ABC}) with respect to the coefficients, we write the expectation value of $\hat H^{(ABC)}$ with respect to $\left|\Psi^{(ABC)}(t)\right>$ in a form which is explicit with respect to the coefficients: \begin{eqnarray}\label{expectation_H_ABC_C} & & \left<\Psi^{(ABC)}(t)\left| \hat H^{(ABC)} - i\frac{\partial}{\partial t} \right|\Psi^{(ABC)}(t)\right> = \nonumber \\ & & \qquad = \sum_{\{\vec J\}} C^\ast_{\vec J}(t) \left[ C^{\hat H^{(ABC)} - i\frac{\partial}{\partial t}^{(A)} - i\frac{\partial}{\partial t}^{(B)} - i\frac{\partial}{\partial t}^{(C)}}_{\vec J}\!\!(t) - i \dot C_{\vec J}(t) \right]. \end{eqnarray} The three time-derivative operators $i\frac{\partial}{\partial t}^{(A)}$, $i\frac{\partial}{\partial t}^{(B)}$ and $i\frac{\partial}{\partial t}^{(C)}$ make it clear that to each species there is associated a different one-body operator representing the time-derivative of the orbitals. Performing the variation of $ S\left[\left\{C_{\vec J}(t)\right\},\left\{\phi_k(\x,t)\right\}, \left\{\psi_{k'}(\y,t)\right\},\left\{\chi_{k''}(\z,t)\right\}\right]$ with respect to the expansion coefficients $\left\{ C^\ast_{\vec J}(t)\right\}$, we then make use of the differential conditions for the orbitals of each species, \begin{eqnarray}\label{diff_con_BC} & & \left\{i\frac{\partial}{\partial t}^{(B)}\right\}_{k'q'} \equiv i\left<\psi_{k'} \left|\dot\psi_{q'}\right>\right. = 0, \ \ k',q'=1,\ldots,M_B, \nonumber \\ & & \left\{i\frac{\partial}{\partial t}^{(C)}\right\}_{k''q''} \equiv i\left<\chi_{k''} \left|\dot\chi_{q''}\right>\right. = 0, \ \ k'',q''=1,\ldots,M_C, \end{eqnarray} where the differential conditions with respect to the $A$-species orbitals have been introduced in (\ref{diff_con_A}). This leads to the final result for the equations-of-motion for the expansion coefficients: \begin{eqnarray}\label{C_MIX_gen_phi_phidot} & & C^{\hat H^{(ABC)}}_{\vec J}(t) = i \dot C_{\vec J}(t), \qquad \forall \vec J, \nonumber \\ & & C^{\hat H^{(ABC)}}_{\vec J}(t) = C^{\hat H^{(A)}}_{\vec J}(t) + C^{\hat H^{(B)}}_{\vec J}(t) + C^{\hat H^{(C)}}_{\vec J}(t) + \nonumber \\ & & + C^{\hat W^{(AB)}}_{\vec J}(t) + C^{\hat W^{(AC)}}_{\vec J}(t) + C^{\hat W^{(BC)}}_{\vec J}(t) + \nonumber \\ & & + C^{\hat U^{(AAB)}}_{\vec J}(t) + C^{\hat U^{(ABB)}}_{\vec J}(t) + C^{\hat U^{(AAC)}}_{\vec J}(t) + C^{\hat U^{(ACC)}}_{\vec J}(t) + \nonumber \\ & & + C^{\hat U^{(BBC)}}_{\vec J}(t) + C^{\hat U^{(BCC)}}_{\vec J}(t) + C^{\hat U^{(ABC)}}_{\vec J}(t). \end{eqnarray} We remark that other forms of the differential conditions (\ref{diff_con_A},\ref{diff_con_BC}) can be used; in particular, each species can have a different form depending on the physical problem at hand and on numerical needs. Let us move to the equations-of-motion for the orbitals $\left\{\phi_k(\x,t)\right\}$, $\left\{\psi_{k'}(\y,t)\right\}$ and $\left\{\chi_{k''}(\z,t)\right\}$. For this, we express the expectation value\break $\left<\Psi^{(ABC)}\left|\hat H^{(ABC)} - i\frac{\partial}{\partial t}\right|\Psi^{(ABC)}\right>$ in a form which explicitly depends on the various integrals with respect to the orbitals. The result is lengthy and is given in Appendix \ref{appendix_C}. In particular, the expectation values of the various density operators in $\hat H^{(ABC)}$ [Eq.~(\ref{ham_mix_2nd})] emerge as matrix elements of the different intra-species and inter-species reduced density matrices.
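Numerically, every such matrix element is again an inner product of the coefficient vector with a vector produced by the Combinadic-based mapping, exactly as in the single-species relations of Sec.~\ref{SEC2.1}. A minimal sketch (array layout is ours; the toy two-mode bosonic example in the accompanying check is purely illustrative) is:

```python
import numpy as np

def one_body_density_matrix(C, C_rho):
    """rho_{kq} = sum_J C*_J(t) C^{rho_kq}_J(t).

    C     : (N_conf,) coefficient vector of the many-particle state.
    C_rho : (M, M, N_conf) array; C_rho[k, q] holds the density operator
            rho_kq applied to the state, assumed precomputed via the
            Combinadic-based mapping (hypothetical layout, for illustration)."""
    return np.einsum('J,kqJ->kq', C.conj(), C_rho)
```

When the input vectors are consistent, the resulting matrix is Hermitian and its trace equals the particle number of the species, which provides a cheap sanity check in an implementation.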
For ease of reading and for completeness, we collect in Appendix \ref{appendix_B} all reduced density matrices and their respective matrix elements needed in the theory and its numerical implementation. We can now proceed and perform the variation of the action functional (\ref{func_ABC}) with respect to the orbitals. Performing the variation with respect to $\left\{\phi^\ast_k(\x,t)\right\}$, $\left\{\psi^\ast_{k'}(\y,t)\right\}$ and $\left\{\chi^\ast_{k''}(\z,t)\right\}$, making use of the orthonormality relations of each species' orbitals, we solve for the Lagrange multipliers, $k,j=1,\ldots,M_A$, $k',j'=1,\ldots,M_B$ and $k'',j''=1,\ldots,M_C$: \begin{equation}n\label{LM_A_B_C} & & \mu_{kj}^{(A)}(t) = \left<\phi_j\left| \sum^{M_A}_{q=1} \left( \rho^{(A)}_{kq} \left[ \hat h^{(A)} - i\frac{\partial}{\partial t}^{(A)} \right] + \{\rho_2 \hat W\}^{(A)}_{kq} + \{\rho_3 \hat U\}^{(A)}_{kq} \right) \right|\phi_q\right>, \ \\ & & \mu_{k'j'}^{(B)}(t) = \left<\psi_{j'}\left| \sum^{M_B}_{q'=1} \left( \rho^{(B)}_{k'q'} \left[ \hat h^{(B)} - i\frac{\partial}{\partial t}^{(B)} \right] + \{\rho_2 \hat W\}^{(B)}_{k'q'} + \{\rho_3 \hat U\}^{(B)}_{k'q'} \right) \right|\psi_{q'}\right>, \nonumber \\ & & \mu_{k''j''}^{(C)}(t) = \left<\chi_{j''}\left| \sum^{M_C}_{q''=1} \left( \rho^{(C)}_{k''q''} \left[ \hat h^{(C)} - i\frac{\partial}{\partial t}^{(C)} \right] + \{\rho_2 \hat W\}^{(C)}_{k''q''} + \{\rho_3 \hat U\}^{(C)}_{k''q''} \right) \right|\chi_{q''}\right>. \nonumber \ \end{equation}n The terms appearing in the Lagrange multipliers are all defined in Appendix \ref{appendix_C}. We discuss them below, after we arrive at the final form of the equations-of-motion for the orbitals. 
To proceed we introduce the projection operators for the mixture: \begin{equation}\label{project_BC} \hat {\mathbf P}^{(B)} = 1 - \sum_{u'=1}^{M_B} \left|\psi_{u'}\right>\left<\psi_{u'}\right|, \qquad \hat {\mathbf P}^{(C)} = 1 - \sum_{u''=1}^{M_C} \left|\chi_{u''}\right>\left<\chi_{u''}\right|, \end{equation} where the projection operator for the $A$-species orbitals $\hat {\mathbf P}^{(A)}$ was defined in (\ref{project_A}). Now, eliminating the Lagrange multipliers (\ref{LM_A_B_C}) and making use of the differential conditions for each species (\ref{diff_con_A},\ref{diff_con_BC}), we obtain the final form of the equations-of-motion of the orbitals of the mixture, $j=1,\ldots,M_A$, $j'=1,\ldots,M_B$ and $j''=1,\ldots,M_C$: \begin{equation}n\label{EOM_final_orbitals_3mix} & & \!\!\!\!\!\!\!\! i\left|\dot\phi_j\right> = \hat {\mathbf P}^{(A)} \left[\hat h^{(A)} \left|\phi_j\right> + \sum^{M_A}_{k,q=1} \left\{\brho^{(A)}(t)\right\}^{-1}_{jk} \bigg( \{\rho_2 \hat W\}^{(A)}_{kq} + \{\rho_3 \hat U\}^{(A)}_{kq} \bigg) \left|\phi_q\right> \right], \\ & & \!\!\!\!\!\!\!\! i\left|\dot\psi_{j'}\right> = \hat {\mathbf P}^{(B)} \left[\hat h^{(B)} \left|\psi_{j'}\right> + \sum^{M_B}_{k',q'=1} \left\{\brho^{(B)}(t)\right\}^{-1}_{j'k'} \bigg( \{\rho_2 \hat W\}^{(B)}_{k'q'} + \{\rho_3 \hat U\}^{(B)}_{k'q'} \bigg) \left|\psi_{q'}\right> \right], \nonumber \\ & & \!\!\!\!\!\!\!\! i\left|\dot\chi_{j''}\right> = \hat {\mathbf P}^{(C)} \left[\hat h^{(C)} \left|\chi_{j''}\right> + \sum^{M_C}_{k'',q''=1} \left\{\brho^{(C)}(t)\right\}^{-1}_{j''k''} \bigg( \{\rho_2 \hat W\}^{(C)}_{k''q''} + \{\rho_3 \hat U\}^{(C)}_{k''q''} \bigg) \left|\chi_{q''}\right> \right]. \nonumber \ \end{equation}n We see the appealing structure of the equations-of-motion for the orbitals. 
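Numerically, applying the projectors (\ref{project_A},\ref{project_BC}) amounts to subtracting from a grid function its components along the occupied orbitals of the respective species. A self-contained sketch, with illustrative array shapes and names:

```python
import numpy as np

def apply_projector(orbitals, w, f):
    """Apply P = 1 - sum_u |psi_u><psi_u| to a grid function f, where
    orbitals[u] holds psi_u on the grid and w are quadrature weights,
    i.e. <g|f> = sum_i w_i g_i^* f_i."""
    overlaps = orbitals.conj() @ (w * f)     # <psi_u|f> for every u
    return f - orbitals.T @ overlaps

# two orthonormal "orbitals" on a 4-point grid with unit weights
w = np.ones(4)
orbitals = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=complex)
f = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)
g = apply_projector(orbitals, w, f)
print(g.real)   # [0. 0. 3. 4.] -- the occupied components are removed
```

In the equations-of-motion this projection is what keeps $i\left|\dot\phi_j\right>$, $i\left|\dot\psi_{j'}\right>$ and $i\left|\dot\chi_{j''}\right>$ orthogonal to the occupied orbital spaces of their species.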
The various one-body operators, which assemble the contributions from the different orders of the interactions corresponding to the one-, two- and three-body parts of the many-particle Hamiltonian $\hat H^{(ABC)}$ (\ref{ham_3mix_general}), are separated. Moreover, it is seen that each one-body operator is comprised of products of reduced density matrices of increasing order with one-body potentials resulting from interactions of the same order (see Appendix \ref{appendix_C} for the explicit terms). This separation, first put forward in this context for the single-species static theory for bosons, MCHB \cite{MCHB}, is not only theoretically appealing, but is also expected to make the implementation of the theory more efficient in the case of higher-body forces. The equations-of-motion (\ref{EOM_final_orbitals_3mix}) for the orbitals, together with (\ref{C_MIX_gen_phi_phidot}) for the expansion coefficients, constitute the propagation theory for mixtures of three kinds of identical particles interacting with all possible interactions up to three-body forces. All four possible mixtures (Fermi-Fermi-Fermi, Bose-Fermi-Fermi, Bose-Bose-Fermi and Bose-Bose-Bose) are treated in a unified manner; the respective theories are denoted MCTDH-FFF, MCTDH-BFF, MCTDH-BBF and MCTDH-BBB. To conclude our work, we note that, for time-independent Hamiltonians, one can compute self-consistent ground and excited states of 3-species mixtures by propagation in imaginary time.
Substituting $t \to -it$ into the equations-of-motion for the coefficients and orbitals, Eqs.~(\ref{C_MIX_gen_phi_phidot},\ref{EOM_final_orbitals_3mix}), the final time-independent (static) theory reads, $k=1,\ldots,M_A$, $k'=1,\ldots,M_B$ and $k''=1,\ldots,M_C$: \begin{equation}n\label{statical_3mix} & & \sum_{q=1}^{M_A} \left[ \rho^{(A)}_{kq} \hat h^{(A)} + \{\rho_2 \hat W\}^{(A)}_{kq} + \{\rho_3 \hat U\}^{(A)}_{kq} \right] \left|\phi_q\right> = \sum_{j=1}^{M_A} \mu_{kj}^{(A)} \left|\phi_j\right>, \nonumber \\ & & \sum_{q'=1}^{M_B} \left[ \rho^{(B)}_{k'q'} \hat h^{(B)} + \{\rho_2 \hat W\}^{(B)}_{k'q'} + \{\rho_3 \hat U\}^{(B)}_{k'q'} \right] \left|\psi_{q'}\right> = \sum_{j'=1}^{M_B} \mu_{k'j'}^{(B)} \left|\psi_{j'}\right>, \nonumber \\ & & \sum_{q''=1}^{M_C} \left[ \rho^{(C)}_{k''q''} \hat h^{(C)} + \{\rho_2 \hat W\}^{(C)}_{k''q''} + \{\rho_3 \hat U\}^{(C)}_{k''q''} \right] \left|\chi_{q''}\right> = \sum_{j''=1}^{M_C} \mu_{k''j''}^{(C)} \left|\chi_{j''}\right>, \nonumber \\ & & \qquad \qquad C^{\hat H^{(ABC)}}_{\vec J} = \varepsilon^{(ABC)} C_{\vec J}, \qquad \forall \vec J, \ \end{equation}n where, making use of the normalization of the static many-particle wave-function $\left|\Psi^{(ABC)}\right>$, $\varepsilon^{(ABC)}= \sum_{\vec J} C^\ast_{\vec J} C^{\hat H^{(ABC)}}_{\vec J}$ is the eigen-energy of the system. Finally, utilizing the fact that the matrices of Lagrange multipliers $\{\mu_{kj}^{(A)}\}$, $\{\mu_{k'j'}^{(B)}\}$ and $\{\mu_{k''j''}^{(C)}\}$ are Hermitian (for stationary states) and of the invariance property of the multiconfigurational wave-function (to unitary transformations of each species' orbitals compensated by the `reverse' transformations of the coefficients), one can transform Eq.~(\ref{statical_3mix}) to a representation where $\{\mu_{kj}^{(A)}\}$, $\{\mu_{k'j'}^{(B)}\}$ and $\{\mu_{k''j''}^{(C)}\}$ are diagonal matrices. This concludes our derivations. 
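As a schematic illustration of the imaginary-time filtering behind the static theory, consider the coefficients' part alone with a dense stand-in Hamiltonian matrix (an assumption for illustration only; in the full scheme the orbital equations are relaxed simultaneously and the Hamiltonian action is matrix-free):

```python
import numpy as np

def relax(H, C, dt=0.2, steps=400):
    """Imaginary-time relaxation C <- exp(-H dt) C with renormalization;
    repeated application filters out the lowest eigenvector of H."""
    w, V = np.linalg.eigh(H)
    for _ in range(steps):
        C = V @ (np.exp(-w * dt) * (V.conj().T @ C))
        C /= np.linalg.norm(C)
    return C, float(np.real(C.conj() @ H @ C))

# stand-in "Hamiltonian": a discrete 1D Laplacian on 8 grid points
H = 2.0 * np.eye(8) - np.eye(8, k=1) - np.eye(8, k=-1)
C0 = np.ones(8, dtype=complex) / np.sqrt(8.0)
C, E = relax(H, C0)
print(abs(E - np.linalg.eigvalsh(H).min()) < 1e-9)   # True
```

The convergence rate is set by the gap between the two lowest eigenvalues; excited states can be targeted by projecting out previously converged vectors at each step.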
\section{Brief summary and outlook}\label{SEC4} In the present work we have specified the MCTDH method for a new and intricate system of relevance. We have considered mixtures of three kinds of identical particles interacting via all combinations of two- and three-body forces. We have derived the equations-of-motion for the expansion coefficients, $\left\{C_{\vec J}(t)\right\}$, and the orbitals, $\left\{\phi_k(\x,t)\right\}$, $\left\{\psi_{k'}(\y,t)\right\}$ and $\left\{\chi_{k''}(\z,t)\right\}$, see Eqs.~(\ref{C_MIX_gen_phi_phidot},\ref{EOM_final_orbitals_3mix}). The self-consistent static theory has been derived as well, see Eq.~(\ref{statical_3mix}). All quantities needed for the implementation of the theory have been prescribed in detail. On the methodological level, we have represented the coefficients' part of the equations-of-motion in a compact recursive form in terms of one-body density operators only, $\left\{\hat \rho^{(A)}_{kq}\right\}$, $\left\{\hat \rho^{(B)}_{k'q'}\right\}$ and $\left\{\hat \rho^{(C)}_{k''q''}\right\}$. The recursion utilizes the recently proposed Combinadic-based mapping for fermionic and bosonic operators in Fock space \cite{mapping}, which has been successfully applied and implemented within the MCTDHB package \cite{package}. Our derivation sheds new light on the representation of the coefficients' part in MCTDHF and MCTDHB without resorting to the matrix elements of the many-body Hamiltonian with respect to the time-dependent configurations, and suggests a recipe for an efficient implementation of MCTDH-FFF, MCTDH-BFF, MCTDH-BBF and MCTDH-BBB which is well-suited to parallelization. As an outlook of the present theory, let us imagine the possibility of conversion between the distinct particles, say the conversion of the $A$ and $B$ species to the $C$ species, which can be written symbolically as the following ``reaction'': $$ A + B \leftrightharpoons C.
$$ Such a process would be a model, e.g., for the resonant association of hetero-nuclear ultra-cold molecules. The derivation of an efficient MCTDH-{\it conversion} theory in this case would require the extension of the Combinadic-based mapping \cite{mapping} to systems with particle conversion, and the assembly of more building bricks than just the one-body density operators used in the present theory, $\left\{\hat \rho^{(A)}_{kq}\right\}$, $\left\{\hat \rho^{(B)}_{k'q'}\right\}$ and $\left\{\hat \rho^{(C)}_{k''q''}\right\}$. \section*{Acknowledgments} The paper is dedicated to Professor Debashis Mukherjee, a dear colleague and friend, on the occasion of his 65{\it th} birthday. We are grateful to Hans-Dieter Meyer for multiple and continuous discussions on MCTDH, and acknowledge financial support by the DFG. \appendix \section{Calculating expectation values of operators in mixtures of three kinds of identical particles}\label{appendix_A} Following \cite{mapping}, we write the general expectation value of an operator $\hat O^{(3mix)}$ in a 3-species mixture as follows: \begin{equation}n\label{expectation_3} & & \left<\Psi^{(ABC)}(t)\left| \hat O^{(3mix)} \right|\Psi^{(ABC)}(t)\right> = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat O^{(3mix)} \right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum_{{\vec J}} C^\ast_{\vec J}(t) C^{\hat O^{(3mix)}}_{\vec J}(t), \end{equation}n where \begin{equation}\label{O_Psi_3} \hat O^{(3mix)} \left|\Psi^{(ABC)}(t)\right> = \hat O^{(3mix)} \sum_{{\vec J}} C_{\vec J}(t) \left|\vec J;t\right> \equiv \sum_{{\vec J}} C^{\hat O^{(3mix)}}_{\vec J}(t) \left|\vec J;t\right>. \end{equation} $\hat O^{(3mix)}$ can be a one-, two- or three-body operator or any combination thereof. 
The operation of single-species operators, whether $\hat O^{(A)}$, $\hat O^{(B)}$ or $\hat O^{(C)}$, can be read off directly from Eqs.~(\ref{O_den}-\ref{C_three}) and we will not repeat them here (one needs just to replace therein $J_A$ by $\vec J$ in the overall notation, and $M_A$ by $M_B$ or $M_C$, when appropriate; also see \cite{mapping}). For the inter-species two-body operators we prescribe the compact result for completeness. For the two-body operators $\hat O^{(AB)} = \sum_{k,k',q,q'} O^{(AB)}_{kk'qq'} \hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'}$,\break $\hat O^{(AC)} = \sum_{k,k'',q,q''} O^{(AC)}_{kk''qq''} \hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{k''q''}$ and $\hat O^{(BC)} = \sum_{k',k'',q',q''} O^{(BC)}_{k'k''q'q''} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}$ we find: \begin{eqnarray}\label{C_3mix_2B} C^{\hat O^{(AB)}}_{\vec J}(t) &=& \sum_{k,k',q,q'=1}^{M_A,M_B} O^{(AB)}_{kk'qq'} C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t), \nonumber \\ C^{\hat O^{(AC)}}_{\vec J}(t) &=& \sum_{k,k'',q,q''=1}^{M_A,M_C} O^{(AC)}_{kk''qq''} C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \nonumber \\ C^{\hat O^{(BC)}}_{\vec J}(t) &=& \sum_{k',k'',q',q''=1}^{M_B,M_C} O^{(BC)}_{k'k''q'q''} C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t). \ \end{eqnarray} Note the factorization of the one-body (basic) density operators for the inter-species operators, which simplifies the evaluation of the coefficients' vector. For the inter-species three-body operators resulting from the force between two identical particles and a third distinct one we list the final result for completeness.
For the three-body operators \begin{eqnarray}\label{3B_operators} \hat O^{(AAB)} &=& \frac{1}{2} \sum_{k,k',s,q,q',l} O^{(AAB)}_{kk'sqq'l} \hat \rho^{(A)}_{kslq} \hat \rho^{(B)}_{k'q'}, \nonumber \\ \hat O^{(ABB)} &=& \frac{1}{2} \sum_{k,k',s',q,q',l'} O^{(ABB)}_{kk's'qq'l'} \hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k's'l'q'}, \nonumber \\ \hat O^{(AAC)} &=& \frac{1}{2} \sum_{k,k'',s,q,q'',l} O^{(AAC)}_{kk''sqq''l} \hat \rho^{(A)}_{kslq} \hat \rho^{(C)}_{k''q''}, \nonumber \\ \hat O^{(ACC)} &=& \frac{1}{2} \sum_{k,k'',s'',q,q'',l''} O^{(ACC)}_{kk''s''qq''l''} \hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{k''s''l''q''}, \nonumber \\ \hat O^{(BBC)} &=& \frac{1}{2} \sum_{k',k'',s',q',q'',l'} O^{(BBC)}_{k'k''s'q'q''l'} \hat \rho^{(B)}_{k's'l'q'} \hat \rho^{(C)}_{k''q''}, \nonumber \\ \hat O^{(BCC)} &=& \frac{1}{2} \sum_{k',k'',s'',q',q'',l''} O^{(BCC)}_{k'k''s''q'q''l''} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''s''l''q''}, \ \end{eqnarray} we find: \begin{eqnarray}\label{C_3mix_binary_3B} & & C^{\hat O^{(AAB)}}_{\vec J}(t) = \nonumber \\ &=& \frac{1}{2} \sum_{k,k',s,q,q',l=1}^{M_A,M_B} O^{(AAB)}_{kk'sqq'l} \left[ \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t) \mp {C^{\hat \rho^{(A)}_{sq}\hat \rho^{(B)}_{k'q'}}_{\vec J}}^{\hat \rho^{(A)}_{kl}}\!(t) \right], \nonumber \\ & & C^{\hat O^{(ABB)}}_{\vec J}(t) = \nonumber \\ &=& \frac{1}{2} \sum_{k,k',s',q,q',l'=1}^{M_A,M_B} O^{(ABB)}_{kk's'qq'l'} \left[ \pm \delta_{s'l'} C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'}}_{\vec J}(t) \mp {C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{s'q'}}_{\vec J}}^{\hat \rho^{(B)}_{k'l'}}\!(t) \right], \nonumber \\ & & C^{\hat O^{(AAC)}}_{\vec J}(t) = \nonumber \\ &=& \frac{1}{2} \sum_{k,k'',s,q,q'',l=1}^{M_A,M_C} O^{(AAC)}_{kk''sqq''l} \left[ \pm \delta_{sl} C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t) \mp {C^{\hat \rho^{(A)}_{sq}\hat \rho^{(C)}_{k''q''}}_{\vec J}}^{\hat \rho^{(A)}_{kl}}\!(t) \right], \ \\ & & C^{\hat O^{(ACC)}}_{\vec J}(t) = \nonumber \\ &=& \frac{1}{2} \sum_{k,k'',s'',q,q'',l''=1}^{M_A,M_C} O^{(ACC)}_{kk''s''qq''l''} \left[ \pm \delta_{s''l''} C^{\hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t) \mp {C^{\hat \rho^{(A)}_{kq} \hat \rho^{(C)}_{s''q''}}_{\vec J}}^{\hat \rho^{(C)}_{k''l''}}\!(t) \right], \nonumber \\ & & C^{\hat O^{(BBC)}}_{\vec J}(t) = \nonumber \\ &=& \frac{1}{2} \sum_{k',k'',s',q',q'',l'=1}^{M_B,M_C} O^{(BBC)}_{k'k''s'q'q''l'} \left[ \pm \delta_{s'l'} C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t) \mp {C^{\hat \rho^{(B)}_{s'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}}^{\hat \rho^{(B)}_{k'l'}}\!(t) \right], \nonumber \\ & & C^{\hat O^{(BCC)}}_{\vec J}(t) = \nonumber \\ &=& \frac{1}{2} \sum_{k',k'',s'',q',q'',l''=1}^{M_B,M_C} O^{(BCC)}_{k'k''s''q'q''l''} \left[ \pm \delta_{s''l''} C^{\hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t) \mp {C^{\hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{s''q''}}_{\vec J}}^{\hat \rho^{(C)}_{k''l''}}\!(t) \right]. \nonumber \end{eqnarray} We recall that the appearance of the one-body (basic) density operators on two levels means that the lower-level multiplication has to be performed first, and the upper-level one second. Finally, for the inter-species three-body operator we give the closed-form result for completeness. For the three-body operator\break $\hat O^{(ABC)} = \sum_{k,k',k'',q,q',q''} O^{(ABC)}_{kk'k''qq'q''} \hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}$ we find: \begin{equation}\label{C_3mix_3B} C^{\hat O^{(ABC)}}_{\vec J}(t) = \sum_{k,k',k'',q,q',q''=1}^{M_A,M_B,M_C} O^{(ABC)}_{kk'k''qq'q''} C^{\hat \rho^{(A)}_{kq} \hat \rho^{(B)}_{k'q'} \hat \rho^{(C)}_{k''q''}}_{\vec J}(t), \end{equation} which concludes our Combinadic-based \cite{mapping} representation of the equations-of-motion for the coefficients in MCTDH for mixtures of 3 kinds of identical particles interacting with up to 3-body forces, and the calculation of all relevant matrix elements with respect to $\left|\Psi^{(ABC)}(t)\right>$.
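To illustrate the factorized actions above, e.g.\ the two-body case of Eq.~(\ref{C_3mix_2B}), one may represent the basic one-body density operators as explicit matrices on a small coefficient space. This is purely illustrative: in the implementation their action is evaluated on the fly through the Combinadic-based mapping \cite{mapping}, and no such matrices are ever stored.

```python
import numpy as np

def apply_O_AB(O, rhoA, rhoB, C):
    """C^{O^(AB)} = sum_{k,k',q,q'} O[k,k',q,q'] rhoA[k][q] rhoB[k'][q'] C.

    rhoA[k][q] and rhoB[k'][q'] stand for the basic one-body density
    operators, here (illustratively) dense matrices on the coefficient
    space; they act on different species and therefore commute.
    """
    out = np.zeros_like(C)
    for k in range(len(rhoA)):
        for q in range(len(rhoA)):
            tmp = rhoA[k][q] @ C                 # apply the A-factor once ...
            for kp in range(len(rhoB)):
                for qp in range(len(rhoB)):      # ... and reuse it here
                    out = out + O[k, kp, q, qp] * (rhoB[kp][qp] @ tmp)
    return out

# tiny check: a single nonzero O-element reduces to one operator product
M, n = 2, 3
rng = np.random.default_rng(2)
rhoA = [[rng.normal(size=(n, n)) for _ in range(M)] for _ in range(M)]
rhoB = [[rng.normal(size=(n, n)) for _ in range(M)] for _ in range(M)]
C = rng.normal(size=n)
O = np.zeros((M, M, M, M)); O[0, 1, 1, 0] = 2.0
out = apply_O_AB(O, rhoA, rhoB, C)
print(np.allclose(out, 2.0 * rhoB[1][0] @ rhoA[0][1] @ C))   # True
```

Reusing the intermediate vector `tmp` across all $(k',q')$ pairs is the computational payoff of the factorization noted after Eq.~(\ref{C_3mix_2B}).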
\section{Reduced density matrices for mixtures of three kinds of identical particles interacting with up to three-body forces}\label{appendix_B} \subsection*{Intra-species reduced density matrices} The reduced one-body density matrix of the single-species multiconfigurational wave-function $\left|\Psi^{(A)}(t)\right>$ is given by: \begin{eqnarray}\label{DNS_A_1} & & \rho^{(A)}(\x_1|\x'_1;t) = N_A \int d\x_2 d\x_3 \cdots d\x_{N_A} \times \\ & & \times {\Psi^{(A)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A};t) \Psi^{(A)}(\x_1,\x_2,\ldots,\x_{N_A};t) = \nonumber \\ & & = \left<\Psi^{(A)}(t)\left|\left\{\hat{\mathbf \Psi}_A^\dag(\x'_1)\hat{\mathbf \Psi}_A(\x_1) \right|\Psi^{(A)}(t)\right>\right\} = \sum^M_{k,q=1} \rho^{(A)}_{kq}(t) \phi^\ast_k(\x'_1,t)\phi_q(\x_1,t), \nonumber \ \end{eqnarray} where its matrix elements in the orbital basis $\rho^{(A)}_{kq}(t)$ are given in Eq.~(\ref{denisty_matrx_element}) of the main text. Then, the reduced two-body density matrix of the single-species multiconfigurational wave-function $\left|\Psi^{(A)}(t)\right>$ is given by: \begin{eqnarray}\label{DNS_A_2} & & \rho^{(A)}(\x_1,\x_2|\x'_1,\x'_2;t) = N_A(N_A-1) \int d\x_3 \cdots d\x_{N_A} \times \\ & & \times {\Psi^{(A)}}^\ast(\x'_1,\x'_2,\x_3,\ldots,\x_{N_A};t) \Psi^{(A)}(\x_1,\x_2,\x_3,\ldots,\x_{N_A};t) = \nonumber \\ & & = \left<\Psi^{(A)}(t)\left|\left\{\hat{\mathbf \Psi}_A^\dag(\x'_1)\hat{\mathbf \Psi}_A^\dag(\x'_2) \hat{\mathbf \Psi}_A(\x_2)\hat{\mathbf \Psi}_A(\x_1)\right|\Psi^{(A)}(t)\right> \right\} = \nonumber \\ & & = \sum^M_{k,s,l,q=1} \rho^{(A)}_{kslq}(t) \phi^\ast_k(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi_l(\x_2,t) \phi_q(\x_1,t), \nonumber \ \end{eqnarray} where its matrix elements in the orbital basis $\rho^{(A)}_{kslq}(t)$ are given in Eq.~(\ref{denisty_matrx_element}).
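Once the matrix elements $\rho^{(A)}_{kq}(t)$ and the orbitals on a grid are available, assembling the discretized one-body density matrix of Eq.~(\ref{DNS_A_1}) is a single contraction. A sketch with illustrative array shapes:

```python
import numpy as np

def one_body_density(rho_kq, orbitals):
    """rho(x_i | x'_j) = sum_{k,q} rho_kq phi_k^*(x'_j) phi_q(x_i).

    rho_kq: (M, M) matrix elements; orbitals: (M, n) orbitals on a grid.
    Returns the (n, n) discretized reduced one-body density matrix.
    """
    return np.einsum('kq,kj,qi->ij', rho_kq, orbitals.conj(), orbitals)

# two orthonormal discrete orbitals; the trace recovers tr(rho) = N_A
orbitals = np.array([[1, 0, 0], [0, 1, 0]], dtype=complex)
rho_kq = np.diag([2.0, 1.0])          # e.g. occupations summing to N_A = 3
dens = one_body_density(rho_kq, orbitals)
print(np.real(np.trace(dens)))        # 3.0
```

The two- and three-body densities (\ref{DNS_A_2},\ref{DNS_A_3}) are assembled by the analogous higher-rank contractions.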
Finally in the single-species case, the reduced three-body density matrix of $\left|\Psi^{(A)}(t)\right>$ is given by: \begin{equation}n\label{DNS_A_3} & & \rho^{(A)}(\x_1,\x_2,\x_3|\x'_1,\x'_2,\x'_3;t) = N_A(N_A-1)(N_A-2) \int d\x_4 \cdots d\x_{N_A} \times \nonumber \\ & & \times {\Psi^{(A)}}^\ast(\x'_1,\x'_2,\x'_3,\x_4,\ldots,\x_{N_A};t) \Psi^{(A)} (\x_1,\x_2,\x_3,\x_4,\ldots,\x_{N_A};t) = \\ & & = \left<\Psi^{(A)}(t)\left|\left\{\hat{\mathbf \Psi}_A^\dag(\x'_1)\hat{\mathbf \Psi}_A^\dag(\x'_2) \hat{\mathbf \Psi}_A^\dag(\x'_3) \hat{\mathbf \Psi}_A(\x_3)\hat{\mathbf \Psi}_A(\x_2) \hat{\mathbf \Psi}_A(\x_1)\right|\Psi^{(A)}(t)\right>\right\} = \nonumber \\ & & = \sum^M_{k,s,p,r,l,q=1} \rho^{(A)}_{ksprlq}(t) \phi^\ast_k(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi^\ast_p(\x'_3,t) \phi_r(\x_3,t) \phi_l(\x_2,t) \phi_q(\x_1,t), \nonumber \ \end{equation}n where its matrix elements in the orbital basis $\rho^{(A)}_{ksprlq}(t)$ are given in Eq.~(\ref{denisty_matrx_element}). The reduced density matrices of the $B$ and $C$ species are defined in an analogous manner, where $B$ and $C$ quantities are to replace the $A$ quantities in Eqs.~(\ref{DNS_A_1}-\ref{DNS_A_3}). \subsection*{Inter-species reduced two-body density matrices} For completeness, we give all inter-species reduced density matrices that occur in a mixture of three kinds of identical particles interacting with up to three-body forces, where each species may have a different spin. There are three such reduced density matrices which are associated with the two-body interactions of two distinct particles. 
\begin{eqnarray}\label{DNS_AB} & & \rho^{(AB)}(\x_1,\y_1|\x'_1,\y'_1;t) = N_A N_B \int \, d\x_2 \cdots d\x_{N_A} d\y_2 \cdots d\y_{N_B} d\z_1 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x'_1,\ldots,\x_{N_A},\y'_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_A^\dag(\x'_1) \hat{\mathbf \Psi}_A(\x_1) \hat{\mathbf \Psi}_B^\dag(\y'_1) \hat{\mathbf \Psi}_B(\y_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_A,M_B}_{k,k',q,q'=1} \rho^{(AB)}_{kk'qq'}(t) \phi^\ast_{k}(\x'_1,t) \phi_{q}(\x_1,t) \psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t), \ \end{eqnarray} where its matrix elements in the orbital basis are given by: \begin{equation} \rho^{(AB)}_{kk'qq'}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t). \end{equation} \begin{eqnarray}\label{DNS_AC} & & \rho^{(AC)}(\x_1,\z_1|\x'_1,\z'_1;t) = N_A N_C \int d\x_2 \cdots d\x_{N_A} d\y_1 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x'_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z'_1,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_A^\dag(\x'_1) \hat{\mathbf \Psi}_A(\x_1) \hat{\mathbf \Psi}_C^\dag(\z'_1) \hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_A,M_C}_{k,k'',q,q''=1} \rho^{(AC)}_{kk''qq''}(t) \phi^\ast_{k}(\x'_1,t) \phi_{q}(\x_1,t) \chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \ \end{eqnarray} where its matrix elements in the orbital basis are given by: \begin{equation} \rho^{(AC)}_{kk''qq''}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t). \end{equation} \begin{eqnarray}\label{DNS_BC} & & \rho^{(BC)}(\y_1,\z_1|\y'_1,\z'_1;t) = N_B N_C \int d\x_1 \cdots d\x_{N_A} d\y_2 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x_1,\ldots,\x_{N_A},\y'_1,\ldots,\y_{N_B},\z'_1,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\ldots,\x_{N_A},\y_1,\ldots,\y_{N_B},\z_1,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_B^\dag(\y'_1) \hat{\mathbf \Psi}_B(\y_1) \hat{\mathbf \Psi}_C^\dag(\z'_1) \hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_B,M_C}_{k',k'',q',q''=1} \rho^{(BC)}_{k'k''q'q''}(t) \psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t) \chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \ \end{eqnarray} where its matrix elements in the orbital basis are given by: \begin{equation} \rho^{(BC)}_{k'k''q'q''}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t). \end{equation} \subsection*{Inter-species reduced three-body density matrices} There are six reduced three-body density matrices which are associated with the three-body interactions of two identical particles with a third distinct one.
\begin{equation}n\label{DNS_AAB} & & \rho^{(AAB)}(\x_1,\x_2,\y_1|\x'_1,\x'_2,\y'_1;t) = \\ & & = N_A(N_A-1) N_B \int \, d\x_3 \cdots d\x_{N_A} d\y_2 \cdots d\y_{N_B} d\z_1 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x'_2,\ldots,\x_{N_A},\y'_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_A^\dag(\x'_1) \hat{\mathbf \Psi}_A^\dag(\x'_2) \hat{\mathbf \Psi}_A(\x_2) \hat{\mathbf \Psi}_A(\x_1) \hat{\mathbf \Psi}_B^\dag(\y'_1) \hat{\mathbf \Psi}_B(\y_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_A,M_B}_{k,k',s,l,q,q'=1} \rho^{(AAB)}_{kk'slqq'}(t) \phi^\ast_{k}(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi_l(\x_2,t) \phi_{q}(\x_1,t) \psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t), \nonumber \ \end{equation}n where \begin{equation} \rho^{(AAB)}_{kk'slqq'}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kslq}\hat \rho^{(B)}_{k'q'}}_{\vec J}(t) \end{equation} are its matrix elements in the orbital basis. 
\begin{equation}n\label{DNS_ABB} & & \rho^{(ABB)}(\x_1,\y_1,\y_2|\x'_1,\y'_1,\y'_2;t) = \\ & & = N_A N_B (N_B-1) \int \, d\x_2 \cdots d\x_{N_A} d\y_3 \cdots d\y_{N_B} d\z_1 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A},\y'_1,\y'_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_A^\dag(\x'_1) \hat{\mathbf \Psi}_A(\x_1) \hat{\mathbf \Psi}_B^\dag(\y'_1) \hat{\mathbf \Psi}_B^\dag(\y'_2) \hat{\mathbf \Psi}_B(\y_2) \hat{\mathbf \Psi}_B(\y_1) \right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_A,M_B}_{k,k',s',l',q,q'=1} \rho^{(ABB)}_{kk's'l'qq'}(t) \phi^\ast_{k}(\x'_1,t) \phi_{q}(\x_1,t) \psi^\ast_{k'}(\y'_1,t) \psi^\ast_{s'}(\y'_2,t) \psi_{l'}(\y_2,t) \psi_{q'}(\y_1,t), \nonumber \ \end{equation}n where \begin{equation} \rho^{(ABB)}_{kk's'l'qq'}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k's'l'q'}}_{\vec J}(t) \end{equation} are its matrix elements in the orbital basis. 
\begin{equation}n\label{DNS_AAC} & & \rho^{(AAC)}(\x_1,\x_2,\z_1|\x'_1,\x'_2,\z'_1;t) = \\ & & = N_A(N_A-1) N_C \int \, d\x_3 \cdots d\x_{N_A} d\y_1 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x'_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z'_1,\z_2,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_A^\dag(\x'_1) \hat{\mathbf \Psi}_A^\dag(\x'_2) \hat{\mathbf \Psi}_A(\x_2) \hat{\mathbf \Psi}_A(\x_1) \hat{\mathbf \Psi}_C^\dag(\z'_1) \hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_A,M_C}_{k,k'',s,l,q,q''=1} \rho^{(AAC)}_{kk''slqq''}(t) \phi^\ast_{k}(\x'_1,t) \phi^\ast_s(\x'_2,t) \phi_l(\x_2,t) \phi_{q}(\x_1,t) \chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \nonumber \ \end{equation}n where \begin{equation} \rho^{(AAC)}_{kk''slqq''}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kslq}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t) \end{equation} are its matrix elements in the orbital basis. 
\begin{equation}n\label{DNS_ACC} & & \rho^{(ACC)}(\x_1,\z_1,\z_2|\x'_1,\z'_1,\z'_2;t) = \\ & & = N_A N_C (N_C-1) \int \, d\x_2 \cdots d\x_{N_A} d\y_1 \cdots d\y_{N_B} d\z_3 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z'_1,\z'_2,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_A^\dag(\x'_1) \hat{\mathbf \Psi}_A(\x_1) \hat{\mathbf \Psi}_C^\dag(\z'_1) \hat{\mathbf \Psi}_C^\dag(\z'_2) \hat{\mathbf \Psi}_C(\z_2) \hat{\mathbf \Psi}_C(\z_1) \right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_A,M_C}_{k,k'',s'',l'',q,q''=1} \rho^{(ACC)}_{kk''s''l''qq''}(t) \phi^\ast_{k}(\x'_1,t) \phi_{q}(\x_1,t) \chi^\ast_{k''}(\z'_1,t) \chi^\ast_{s''}(\z'_2,t) \chi_{l''}(\z_2,t) \chi_{q''}(\z_1,t), \nonumber \ \end{equation}n where \begin{equation} \rho^{(ACC)}_{kk''s''l''qq''}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t) \end{equation} are its matrix elements in the orbital basis. 
\begin{eqnarray}\label{DNS_BBC} & & \rho^{(BBC)}(\y_1,\y_2,\z_1|\y'_1,\y'_2,\z'_1;t) = \\ & & = N_B(N_B-1) N_C \int \, d\x_1 \cdots d\x_{N_A} d\y_3 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x_1,\x_2,\ldots,\x_{N_A},\y'_1,\y'_2,\ldots,\y_{N_B},\z'_1,\z_2,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_B^\dag(\y'_1) \hat{\mathbf \Psi}_B^\dag(\y'_2) \hat{\mathbf \Psi}_B(\y_2) \hat{\mathbf \Psi}_B(\y_1) \hat{\mathbf \Psi}_C^\dag(\z'_1) \hat{\mathbf \Psi}_C(\z_1)\right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_B,M_C}_{k',k'',s',l',q',q''=1} \rho^{(BBC)}_{k'k''s'l'q'q''}(t) \psi^\ast_{k'}(\y'_1,t) \psi^\ast_{s'}(\y'_2,t) \psi_{l'}(\y_2,t) \psi_{q'}(\y_1,t) \chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \nonumber \ \end{eqnarray} where \begin{equation} \rho^{(BBC)}_{k'k''s'l'q'q''}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(B)}_{k's'l'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t) \end{equation} are its matrix elements in the orbital basis.
\begin{equation}n\label{DNS_BCC} & & \rho^{(BCC)}(\y_1,\z_1,\z_2|\y'_1,\z'_1,\z'_2;t) = \\ & & = N_B N_C (N_C-1) \int \, d\x_1 \cdots d\x_{N_A} d\y_2 \cdots d\y_{N_B} d\z_3 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x_1,\x_2,\ldots,\x_{N_A},\y'_1,\y_2,\ldots,\y_{N_B},\z'_1,\z'_2,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_B^\dag(\y'_1) \hat{\mathbf \Psi}_B(\y_1) \hat{\mathbf \Psi}_C^\dag(\z'_1) \hat{\mathbf \Psi}_C^\dag(\z'_2) \hat{\mathbf \Psi}_C(\z_2) \hat{\mathbf \Psi}_C(\z_1) \right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_B,M_C}_{k',k'',s'',l'',q',q''=1} \rho^{(BCC)}_{k'k''s''l''q'q''}(t) \psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t) \chi^\ast_{k''}(\z'_1,t) \chi^\ast_{s''}(\z'_2,t) \chi_{l''}(\z_2,t) \chi_{q''}(\z_1,t), \nonumber \ \end{equation}n where \begin{equation} \rho^{(BCC)}_{k'k''s''l''q'q''}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''s''l''q''}}_{\vec J}(t) \end{equation} are its matrix elements in the orbital basis. Finally, there is a single reduced three-body density matrix which is associated with the three-body interaction of three distinct particles. 
\begin{eqnarray}\label{DNS_ABC} & & \rho^{(ABC)}(\x_1,\y_1,\z_1|\x'_1,\y'_1,\z'_1;t) = \\ & & = N_A N_B N_C \int \, d\x_2 \cdots d\x_{N_A} d\y_2 \cdots d\y_{N_B} d\z_2 \cdots d\z_{N_C} \times \nonumber \\ & & \times {\Psi^{(ABC)}}^\ast(\x'_1,\x_2,\ldots,\x_{N_A},\y'_1,\y_2,\ldots,\y_{N_B},\z'_1,\z_2,\ldots,\z_{N_C};t) \times \nonumber \\ & & \times \Psi^{(ABC)}(\x_1,\x_2,\ldots,\x_{N_A},\y_1,\y_2,\ldots,\y_{N_B},\z_1,\z_2,\ldots,\z_{N_C};t) = \nonumber \\ & & = \left<\Psi^{(ABC)}(t)\left| \left\{ \hat{\mathbf \Psi}_A^\dag(\x'_1) \hat{\mathbf \Psi}_A(\x_1) \hat{\mathbf \Psi}_B^\dag(\y'_1) \hat{\mathbf \Psi}_B(\y_1) \hat{\mathbf \Psi}_C^\dag(\z'_1) \hat{\mathbf \Psi}_C(\z_1) \right|\Psi^{(ABC)}(t)\right> \right\} = \nonumber \\ & & = \sum^{M_A,M_B,M_C}_{k,k',k'',q,q',q''=1} \rho^{(ABC)}_{kk'k''qq'q''}(t) \phi^\ast_{k}(\x'_1,t) \phi_{q}(\x_1,t) \psi^\ast_{k'}(\y'_1,t) \psi_{q'}(\y_1,t) \chi^\ast_{k''}(\z'_1,t) \chi_{q''}(\z_1,t), \nonumber \ \end{eqnarray} where \begin{equation} \rho^{(ABC)}_{kk'k''qq'q''}(t)= \sum_{\vec J} C^\ast_{\vec J}(t) C^{\hat \rho^{(A)}_{kq}\hat \rho^{(B)}_{k'q'}\hat \rho^{(C)}_{k''q''}}_{\vec J}(t) \end{equation} are its matrix elements in the orbital basis. \section{Further details of the derivation of the equations-of-motion for mixtures of three kinds of identical particles}\label{appendix_C} The derivation of the equations-of-motion for the orbitals (\ref{EOM_final_orbitals_3mix}) starts from expressing the expectation value of $\hat H^{(ABC)}$ with respect to the many-particle wave-function $\left|\Psi^{(ABC)}\right>$ in a form which depends explicitly on the various integrals with respect to the orbitals.
Thus we have: \begin{eqnarray}\label{expectation_ALL_orbitals} & &\left<\Psi^{(ABC)}\left|\hat H^{(ABC)} - i\frac{\partial}{\partial t}\right|\Psi^{(ABC)}\right> = \sum_{k,q=1}^{M_A} \rho^{(A)}_{kq} \left[ h^{(A)}_{kq} - \left\{i\frac{\partial}{\partial t}^{(A)}\right\}_{kq} \right] + \nonumber \\ && + \frac{1}{2}\sum_{k,s,l,q=1}^{M_A} \rho^{(A)}_{kslq} W^{(A)}_{ksql} + \frac{1}{6}\sum_{k,s,p,r,l,q=1}^{M_A} \rho^{(A)}_{ksprlq} U^{(A)}_{kspqlr} + \nonumber \\ && + \sum_{k',q'=1}^{M_B} \rho^{(B)}_{k'q'} \left[ h^{(B)}_{k'q'} - \left\{i\frac{\partial}{\partial t}^{(B)}\right\}_{k'q'} \right] + \nonumber \\ && + \frac{1}{2}\sum_{k',s',l',q'=1}^{M_B} \rho^{(B)}_{k's'l'q'} W^{(B)}_{k's'q'l'} + \frac{1}{6}\sum_{k',s',p',r',l',q'=1}^{M_B} \rho^{(B)}_{k's'p'r'l'q'} U^{(B)}_{k's'p'q'l'r'} + \nonumber \\ && + \sum_{k'',q''=1}^{M_C} \rho^{(C)}_{k''q''} \left[ h^{(C)}_{k''q''} - \left\{i\frac{\partial}{\partial t}^{(C)}\right\}_{k''q''} \right] + \nonumber \\ && + \frac{1}{2}\sum_{k'',s'',l'',q''=1}^{M_C} \rho^{(C)}_{k''s''l''q''} W^{(C)}_{k''s''q''l''} + \nonumber \\ & & + \frac{1}{6}\sum_{k'',s'',p'',r'',l'',q''=1}^{M_C} \rho^{(C)}_{k''s''p''r''l''q''} U^{(C)}_{k''s''p''q''l''r''} + \\ & & + \sum_{k,k',q,q'=1}^{M_A,M_B} \rho^{(AB)}_{kk'qq'} W^{(AB)}_{kk'qq'} + \sum_{k,k'',q,q''=1}^{M_A,M_C} \rho^{(AC)}_{kk''qq''} W^{(AC)}_{kk''qq''} + \nonumber \\ & & + \sum_{k',k'',q',q''=1}^{M_B,M_C} \rho^{(BC)}_{k'k''q'q''} W^{(BC)}_{k'k''q'q''} + \nonumber \\ & & + \frac{1}{2} \sum_{k,k',s,q,q',l=1}^{M_A,M_B} \rho^{(AAB)}_{kk'slqq'} U^{(AAB)}_{kk'sqq'l} + \frac{1}{2} \sum_{k,k',s',q,q',l'=1}^{M_A,M_B} \rho^{(ABB)}_{kk's'l'qq'} U^{(ABB)}_{kk's'qq'l'} + \nonumber \\ & & + \frac{1}{2} \sum_{k,k'',s,q,q'',l=1}^{M_A,M_C} \rho^{(AAC)}_{kk''slqq''} U^{(AAC)}_{kk''sqq''l} + \frac{1}{2} \sum_{k,k'',s'',q,q'',l''=1}^{M_A,M_C} \rho^{(ACC)}_{kk''s''l''qq''} U^{(ACC)}_{kk''s''qq''l''} + \nonumber \\ & & + \frac{1}{2} \sum_{k',k'',s',q',q'',l'=1}^{M_B,M_C} \rho^{(BBC)}_{k'k''s'l'q'q''} U^{(BBC)}_{k'k''s'q'q''l'} + \frac{1}{2} \sum_{k',k'',s'',q',q'',l''=1}^{M_B,M_C} \rho^{(BCC)}_{k'k''s''l''q'q''} U^{(BCC)}_{k'k''s''q'q''l''} + \nonumber \\ & & + \sum_{k,k',k'',q,q',q''=1}^{M_A,M_B,M_C} \rho^{(ABC)}_{kk'k''qq'q''} U^{(ABC)}_{kk'k''qq'q''} - \sum_{\{\vec J\}} i C^\ast_{\vec J}(t) \dot C_{\vec J}(t). \nonumber \end{eqnarray} The expectation values of the various density operators appearing in (\ref{expectation_ALL_orbitals}) are given in Appendix \ref{appendix_B}. The matrix elements in (\ref{expectation_ALL_orbitals}) of the $A$ and correspondingly of the $B$ and $C$ single-species terms with respect to the orbitals have been discussed in Section \ref{SEC2.1}, see Eq.~(\ref{matrix_elements}). The matrix elements arising from two-body inter-species interactions are listed for completeness below: \begin{eqnarray}\label{MIX_matrix_elements_2B} & & W^{(AB)}_{kk'qq'} = \int \!\! \int \phi_k^\ast(\x,t) \psi_{k'}^\ast(\y,t) \hat W^{(AB)}(\x,\y) \phi_q(\x,t) \psi_{q'}(\y,t) d\x d\y, \nonumber \\ & & W^{(AC)}_{kk''qq''} = \int \!\! \int \phi_k^\ast(\x,t) \chi_{k''}^\ast(\z,t) \hat W^{(AC)}(\x,\z) \phi_q(\x,t) \chi_{q''}(\z,t) d\x d\z, \nonumber \\ & & W^{(BC)}_{k'k''q'q''} = \int \!\! \int \psi_{k'}^\ast(\y,t) \chi_{k''}^\ast(\z,t) \hat W^{(BC)}(\y,\z) \psi_{q'}(\y,t) \chi_{q''}(\z,t) d\y d\z, \end{eqnarray} and the matrix elements arising from three-body inter-species interactions read as follows: \begin{eqnarray}\label{MIX_matrix_elements_3B} & & U^{(AAB)}_{kk'sqq'l} = \nonumber \\ & & = \int \!\! \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \psi_{k'}^\ast(\y,t) \hat U^{(AAB)}(\x,\x',\y) \phi_q(\x,t) \phi_l(\x',t) \psi_{q'}(\y,t) d\x d\x' d\y, \nonumber \\ & & U^{(ABB)}_{kk's'qq'l'} = \nonumber \\ & & = \int \!\! \int \!\! \int \phi_k^\ast(\x,t) \psi_{k'}^\ast(\y,t) \psi_{s'}^\ast(\y',t) \hat U^{(ABB)}(\x,\y,\y') \phi_q(\x,t) \psi_{q'}(\y,t) \psi_{l'}(\y',t) d\x d\y d\y', \nonumber \\ & & U^{(AAC)}_{kk''sqq''l} = \nonumber \\ & & = \int \!\! \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \chi_{k''}^\ast(\z,t) \hat U^{(AAC)}(\x,\x',\z) \phi_q(\x,t) \phi_l(\x',t) \chi_{q''}(\z,t) d\x d\x' d\z, \nonumber \\ & & U^{(ACC)}_{kk''s''qq''l''} = \nonumber \\ & & = \int \!\! \int \!\! \int \phi_k^\ast(\x,t) \chi_{k''}^\ast(\z,t) \chi_{s''}^\ast(\z',t) \hat U^{(ACC)}(\x,\z,\z') \phi_q(\x,t) \chi_{q''}(\z,t) \chi_{l''}(\z',t) d\x d\z d\z', \nonumber \\ & & U^{(BBC)}_{k'k''s'q'q''l'} = \nonumber \\ & & = \int \!\! \int \!\! \int \psi_{k'}^\ast(\y,t) \psi_{s'}^\ast(\y',t) \chi_{k''}^\ast(\z,t) \hat U^{(BBC)}(\y,\y',\z) \psi_{q'}(\y,t) \psi_{l'}(\y',t) \chi_{q''}(\z,t) d\y d\y' d\z, \nonumber \\ & & U^{(BCC)}_{k'k''s''q'q''l''} = \nonumber \\ & & = \int \!\! \int \!\! \int \psi_{k'}^\ast(\y,t) \chi_{k''}^\ast(\z,t) \chi_{s''}^\ast(\z',t) \hat U^{(BCC)}(\y,\z,\z') \psi_{q'}(\y,t) \chi_{q''}(\z,t) \chi_{l''}(\z',t) d\y d\z d\z', \nonumber \\ & & U^{(ABC)}_{kk'k''qq'q''} = \nonumber \\ & & = \int \!\! \int \!\! \int \phi_{k}^\ast(\x,t) \psi_{k'}^\ast(\y,t) \chi_{k''}^\ast(\z,t) \hat U^{(ABC)}(\x,\y,\z) \phi_{q}(\x,t) \psi_{q'}(\y,t) \chi_{q''}(\z,t) d\x d\y d\z. \nonumber \end{eqnarray} Performing the variation of the integrals (\ref{MIX_matrix_elements_2B}) with respect to the orbitals $\left\{\phi_k(\x,t)\right\}$, $\left\{\psi_{k'}(\y,t)\right\}$ and $\left\{\chi_{k''}(\z,t)\right\}$, we find six types of inter-species one-body potentials emerging from two-body interactions: \begin{eqnarray}\label{all_local_2B_potentials} & & \hat W^{(AB)}_{k'q'}(\x,t) = \int \psi_{k'}^\ast(\y,t) \hat W^{(AB)}(\x,\y) \psi_{q'}(\y,t) d\y, \nonumber \\ & & \hat W^{(BA)}_{kq}(\y,t) = \int \phi_{k}^\ast(\x,t) \hat W^{(AB)}(\x,\y) \phi_{q}(\x,t) d\x, \nonumber \\ & & \hat W^{(AC)}_{k''q''}(\x,t) = \int \chi_{k''}^\ast(\z,t) \hat W^{(AC)}(\x,\z) \chi_{q''}(\z,t) d\z, \nonumber \\ & & \hat W^{(CA)}_{kq}(\z,t) = \int \phi_{k}^\ast(\x,t) \hat W^{(AC)}(\x,\z) \phi_{q}(\x,t) d\x, \nonumber \\ & & \hat W^{(BC)}_{k''q''}(\y,t) = \int \chi_{k''}^\ast(\z,t) \hat W^{(BC)}(\y,\z) \chi_{q''}(\z,t) d\z, \nonumber \\ & & \hat W^{(CB)}_{k'q'}(\z,t) = \int \psi_{k'}^\ast(\y,t) \hat W^{(BC)}(\y,\z) \psi_{q'}(\y,t) d\y. \end{eqnarray} Making the variation of the integrals (\ref{MIX_matrix_elements_3B}) with respect to the orbitals, we arrive at fifteen types of inter-species one-body potentials resulting from three-body interactions: \begin{eqnarray}\label{all_local_3B_potentials} & & \hat U^{(AAB)}_{sk'lq'}(\x,t) = \int \!\! \int \phi_s^\ast(\x',t) \psi_{k'}^\ast(\y,t) \hat U^{(AAB)}(\x,\x',\y) \phi_l(\x',t) \psi_{q'}(\y,t) d\x' d\y, \nonumber \\ & & \hat U^{(BAA)}_{ksql}(\y,t) = \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \hat U^{(AAB)}(\x,\x',\y) \phi_q(\x,t) \phi_l(\x',t) d\x d\x', \nonumber \\ & & \hat U^{(ABB)}_{k's'q'l'}(\x,t) = \int \!\! \int \psi_{k'}^\ast(\y,t) \psi_{s'}^\ast(\y',t) \hat U^{(ABB)}(\x,\y,\y') \psi_{q'}(\y,t) \psi_{l'}(\y',t) d\y d\y', \nonumber \\ & & \hat U^{(BAB)}_{ks'ql'}(\y,t) = \int \!\! \int \phi_k^\ast(\x,t) \psi_{s'}^\ast(\y',t) \hat U^{(ABB)}(\x,\y,\y') \phi_q(\x,t) \psi_{l'}(\y',t) d\x d\y', \nonumber \\ & & \hat U^{(AAC)}_{sk''lq''}(\x,t) = \int \!\! \int \phi_s^\ast(\x',t) \chi_{k''}^\ast(\z,t) \hat U^{(AAC)}(\x,\x',\z) \phi_l(\x',t) \chi_{q''}(\z,t) d\x' d\z, \nonumber \\ & & \hat U^{(CAA)}_{ksql}(\z,t) = \int \!\! \int \phi_k^\ast(\x,t) \phi_s^\ast(\x',t) \hat U^{(AAC)}(\x,\x',\z) \phi_q(\x,t) \phi_l(\x',t) d\x d\x', \nonumber \\ & & \hat U^{(ACC)}_{k''s''q''l''}(\x,t) = \int \!\! \int \chi_{k''}^\ast(\z,t) \chi_{s''}^\ast(\z',t) \hat U^{(ACC)}(\x,\z,\z') \chi_{q''}(\z,t) \chi_{l''}(\z',t) d\z d\z', \nonumber \\ & & \hat U^{(CAC)}_{ks''ql''}(\z,t) = \int \!\! \int \phi_k^\ast(\x,t) \chi_{s''}^\ast(\z',t) \hat U^{(ACC)}(\x,\z,\z') \phi_q(\x,t) \chi_{l''}(\z',t) d\x d\z', \nonumber \\ & & \hat U^{(BBC)}_{s'k''l'q''}(\y,t) = \int \!\! \int \psi_{s'}^\ast(\y',t) \chi_{k''}^\ast(\z,t) \hat U^{(BBC)}(\y,\y',\z) \psi_{l'}(\y',t) \chi_{q''}(\z,t) d\y' d\z, \nonumber \\ & & \hat U^{(CBB)}_{k's'q'l'}(\z,t) = \int \!\! \int \psi_{k'}^\ast(\y,t) \psi_{s'}^\ast(\y',t) \hat U^{(BBC)}(\y,\y',\z) \psi_{q'}(\y,t) \psi_{l'}(\y',t) d\y d\y', \nonumber \\ & & \hat U^{(BCC)}_{k''s''q''l''}(\y,t) = \int \!\! \int \chi_{k''}^\ast(\z,t) \chi_{s''}^\ast(\z',t) \hat U^{(BCC)}(\y,\z,\z') \chi_{q''}(\z,t) \chi_{l''}(\z',t) d\z d\z', \nonumber \\ & & \hat U^{(CBC)}_{k's''q'l''}(\z,t) = \int \!\! \int \psi_{k'}^\ast(\y,t) \chi_{s''}^\ast(\z',t) \hat U^{(BCC)}(\y,\z,\z') \psi_{q'}(\y,t) \chi_{l''}(\z',t) d\y d\z', \nonumber \\ & & \hat U^{(ABC)}_{k'k''q'q''}(\x,t) = \int \!\! \int \psi_{k'}^\ast(\y,t) \chi_{k''}^\ast(\z,t) \hat U^{(ABC)}(\x,\y,\z) \psi_{q'}(\y,t) \chi_{q''}(\z,t) d\y d\z, \nonumber \\ & & \hat U^{(BAC)}_{kk''qq''}(\y,t) = \int \!\! \int \phi_{k}^\ast(\x,t) \chi_{k''}^\ast(\z,t) \hat U^{(ABC)}(\x,\y,\z) \phi_{q}(\x,t) \chi_{q''}(\z,t) d\x d\z, \nonumber \\ & & \hat U^{(CAB)}_{kk'qq'}(\z,t) = \int \!\! \int \phi_{k}^\ast(\x,t) \psi_{k'}^\ast(\y,t) \hat U^{(ABC)}(\x,\y,\z) \phi_{q}(\x,t) \psi_{q'}(\y,t) d\x d\y. \end{eqnarray} All one-body potentials in (\ref{all_local_2B_potentials}) and (\ref{all_local_3B_potentials}) are local (for spin-independent interactions), time-dependent potentials. To arrive at the final form of the equations-of-motion (\ref{EOM_final_orbitals_3mix}), we define the auxiliary one-body operators for the $A$-species' particles: \begin{eqnarray}\label{1B_oper_A} & & \{\rho_2 \hat W\}^{(A)}_{kq} \equiv \sum^{M_A}_{s,l=1} \rho^{(A)}_{kslq} \hat W^{(A)}_{sl} + \sum_{k',q'=1}^{M_B} \rho^{(AB)}_{kk'qq'} \hat W^{(AB)}_{k'q'} + \sum_{k'',q''=1}^{M_C} \rho^{(AC)}_{kk''qq''} \hat W^{(AC)}_{k''q''}, \nonumber \\ & & \{\rho_3 \hat U\}^{(A)}_{kq} \equiv \frac{1}{2} \sum^{M_A}_{s,p,l,r=1} \rho^{(A)}_{ksprlq} \hat U^{(A)}_{splr} + \sum_{k',s,q',l=1}^{M_A,M_B} \rho^{(AAB)}_{kk'slqq'} \hat U^{(AAB)}_{sk'lq'} + \nonumber \\ & & + \sum_{k',s',q',l'=1}^{M_B} \rho^{(ABB)}_{kk's'l'qq'} \hat U^{(ABB)}_{k's'q'l'} + \sum_{k'',s,q'',l=1}^{M_A,M_C} \rho^{(AAC)}_{kk''slqq''} \hat U^{(AAC)}_{sk''lq''} + \nonumber \\ & & + \sum_{k'',s'',q'',l''=1}^{M_C} \rho^{(ACC)}_{kk''s''l''qq''} \hat U^{(ACC)}_{k''s''q''l''} + \sum_{k',k'',q',q''=1}^{M_B,M_C} \rho^{(ABC)}_{kk'k''qq'q''} \hat U^{(ABC)}_{k'k''q'q''}, \end{eqnarray} for the $B$-species' particles: \begin{eqnarray}\label{1B_oper_B} & & \{\rho_2 \hat W\}^{(B)}_{k'q'} \equiv \sum^{M_B}_{s',l'=1} \rho^{(B)}_{k's'l'q'} \hat W^{(B)}_{s'l'} + \sum_{k,q=1}^{M_A} \rho^{(AB)}_{kk'qq'} \hat W^{(BA)}_{kq} + \sum_{k'',q''=1}^{M_C} \rho^{(BC)}_{k'k''q'q''} \hat W^{(BC)}_{k''q''}, \nonumber \\ & & \{\rho_3 \hat U\}^{(B)}_{k'q'} \equiv \frac{1}{2}\sum_{s',p',r',l'=1}^{M_B} \rho^{(B)}_{k's'p'r'l'q'} \hat U^{(B)}_{s'p'l'r'} + \sum_{k,s,q,l=1}^{M_A} \rho^{(AAB)}_{kk'slqq'} \hat U^{(BAA)}_{ksql} + \nonumber \\ & & + \sum_{k,s',q,l'=1}^{M_A,M_B} \rho^{(ABB)}_{kk's'l'qq'} \hat U^{(BAB)}_{ks'ql'} + \sum_{k'',s',q'',l'=1}^{M_B,M_C} \rho^{(BBC)}_{k'k''s'l'q'q''} \hat U^{(BBC)}_{s'k''l'q''} + \nonumber \\ & & + \sum_{k'',s'',q'',l''=1}^{M_C} \rho^{(BCC)}_{k'k''s''l''q'q''} \hat U^{(BCC)}_{k''s''q''l''} + \sum_{k,k'',q,q''=1}^{M_A,M_C} \rho^{(ABC)}_{kk'k''qq'q''} \hat U^{(BAC)}_{kk''qq''}, \end{eqnarray} and for the $C$-species' particles: \begin{eqnarray}\label{1B_oper_C} & & \{\rho_2 \hat W\}^{(C)}_{k''q''} \equiv \sum^{M_C}_{s'',l''=1} \rho^{(C)}_{k''s''l''q''} \hat W^{(C)}_{s''l''} + \sum_{k,q=1}^{M_A} \rho^{(AC)}_{kk''qq''} \hat W^{(CA)}_{kq} + \sum_{k',q'=1}^{M_B} \rho^{(BC)}_{k'k''q'q''} \hat W^{(CB)}_{k'q'}, \nonumber \\ & & \{\rho_3 \hat U\}^{(C)}_{k''q''} \equiv \frac{1}{2}\sum_{s'',p'',r'',l''=1}^{M_C} \rho^{(C)}_{k''s''p''r''l''q''} \hat U^{(C)}_{s''p''l''r''} + \sum_{k,s,q,l=1}^{M_A} \rho^{(AAC)}_{kk''slqq''} \hat U^{(CAA)}_{ksql} + \nonumber \\ & & + \sum_{k,s'',q,l''=1}^{M_A,M_C} \rho^{(ACC)}_{kk''s''l''qq''} \hat U^{(CAC)}_{ks''ql''} + \sum_{k',s',q',l'=1}^{M_B} \rho^{(BBC)}_{k'k''s'l'q'q''} \hat U^{(CBB)}_{k's'q'l'} + \nonumber \\ & & + \sum_{k',s'',q',l''=1}^{M_B,M_C} \rho^{(BCC)}_{k'k''s''l''q'q''} \hat U^{(CBC)}_{k's''q'l''} + \sum_{k,k',q,q'=1}^{M_A,M_B} \rho^{(ABC)}_{kk'k''qq'q''} \hat U^{(CAB)}_{kk'qq'}. \end{eqnarray} These auxiliary one-body operators are constructed from products of matrix elements of reduced density matrices of increasing order (see Appendix \ref{appendix_B}) times the one-body potentials resulting from interactions of the same order, see Eqs.~(\ref{all_local_2B_potentials}) and (\ref{all_local_3B_potentials}). The derivation of the equations-of-motion (\ref{EOM_final_orbitals_3mix}) is now complete. \end{document}
\begin{document} \title{Spectral asymptotics for Dirichlet to Neumann operator in the domains with edges\thanks{\emph{2010 Mathematics Subject Classification}.}} \begin{abstract} We consider eigenvalues of the Dirichlet-to-Neumann operator for the Laplacian in a domain (or manifold) with edges and establish the asymptotics of the eigenvalue counting function \begin{equation*} \mathsf{N}(\lambda)= \kappa_0\lambda^d +O(\lambda^{d-1})\qquad \text{as\ \ } \lambda\to+\infty, \end{equation*} where $d$ is the dimension of the boundary. Further, in certain cases we establish the two-term asymptotics \begin{equation*} \mathsf{N}(\lambda)= \kappa_0\lambda^d +\kappa_1\lambda^{d-1}+o(\lambda^{d-1})\qquad \text{as\ \ } \lambda\to+\infty. \end{equation*} We also establish improved asymptotics for Riesz means. \end{abstract} \enlargethispage{2.5\baselineskip} \chapter{Introduction} \label{sect-1} Let $X$ be a compact connected $(d+1)$-dimensional Riemannian manifold with the boundary $Y$, regular enough to properly define the operators $J$ and $\Lambda$ below\footnote{\label{foot-1} Manifolds with edges are of this type.}. Consider the \emph{Steklov problem\/} \begin{align} &\Delta w=0 &&\text{in}\ \ X,\label{eqn-1-1}\\ &(\partial_\nu +\lambda) w|_Y =0, \label{eqn-1-2} \end{align} where $\Delta$ is the positive Laplace-Beltrami operator\footnote{\label{foot-2} Defined via quadratic forms.}, acting on functions on $X$, and $\nu$ is the unit inner normal to $Y$. In other words, we consider eigenvalues of the Dirichlet-to-Neumann operator. For $v$, which is a restriction to $Y$ of a $\sC^2$ function, we define $Jv= w$, where $\Delta w=0$ in $X$, $w|_Y=v$, \underline{and} $\Lambda v=-\partial_\nu Jv|_Y$. \begin{definition}\label{def-1-1} $\Lambda$ is called the \emph{Dirichlet-to-Neumann operator\/}. \end{definition} The purpose of this paper is to consider a manifold with boundary which has edges: i.e.
each point $y\in Y$ has a neighbourhood $U$ in $\bar{X}\coloneqq X\cup Y$, which is diffeomorphic either to $\mathbb{R}^+ \times \mathbb{R}^d$ (then $y$ is a \emph{regular point\/}), or to $\mathbb{R}^{+\,2} \times \mathbb{R}^{d-1}$ (then $y$ is an \emph{inner edge point\/}) or to $(\mathbb{R}^2 \setminus \mathbb{R}^{-\,2}) \times \mathbb{R}^{d-1}$ (then $y$ is an \emph{outer edge point\/}). Let $Z_\mathsf{inn}$ and $Z_\mathsf{out}$ be the sets of the inner and outer edge points respectively, and $Z=Z_\mathsf{inn}\cup Z_\mathsf{out}$. One can prove easily the following proposition: \begin{proposition}\label{prop-1-2} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{propo-1-2-i} $\Lambda$ is a non-negative essentially self-adjoint operator in $\sL^2(Y)$; $\operatorname{Ker}(\Lambda)$ consists of constant functions. \item\label{propo-1-2-ii} $\Lambda$ has a discrete spectrum accumulating at infinity; its eigenvalues $0=\lambda_0<\lambda_1\le \ldots$ can be obtained recurrently from the following variational problem: \begin{multline} \int _X |\nabla w|^2\,dx\mapsto \min (=\lambda_n)\\ \text{as\ \ } \int_Y |w|^2\,dx'=1, \qquad \int_Y w w^\dag_k\,dx'=0\quad\text{for\ \ } k=0,\ldots, n-1. \label{eqn-1-3} \end{multline} \end{enumerate} \end{proposition} \enlargethispage{\baselineskip} \begin{corollary}\label{cor-1-3} The number of eigenvalues of $\Lambda$, which are less than $\lambda$, equals the maximal dimension of the linear space of $\sC^2$-functions, on which the quadratic form \begin{equation} \int _X |\nabla w|^2\,dx-\lambda \int_Y |w|^2\,dx' \label{eqn-1-4} \end{equation} is negative definite. \end{corollary} \begin{proposition}\label{prop-1-4} Operator $\Lambda$ has the domain $\sH^1(Y)$ and \begin{equation} \|\Lambda u\|_{Y}+\|u\|_{Y}\asymp \|u\|_{\sH^1(Y)}, \label{eqn-1-5} \end{equation} where $(.,.)$ and $\|.\|$ denote the $\sL^2$ inner product and norm.
\end{proposition} \begin{proof} Let $L=\ell\cdot \nabla$, where $\ell$ is a vector field which makes an acute angle with the inner normal (at $Z$--with both inner normals). Consider \begin{multline} 0=-(\Delta w, L w)_X= (\nabla w, \nabla L w)_X + (\partial_\nu w, L w)_Y=\\ \int Q(\nabla w)\,dy + O(\|w\|^2_{\sH^1(X)}), \label{eqn-1-6} \end{multline} where \begin{equation} Q(\nabla w)= (\nu\cdot \nabla w)(\ell\cdot\nabla w )-\frac{1}{2} \nu\cdot\ell|\nabla w|^2. \label{eqn-1-7} \end{equation} This quadratic form has one positive and $d$ negative eigenvalues. Further, on the subspace orthogonal to $\ell$, all eigenvalues are negative. Then \begin{equation} \|\partial _\nu w\|^2 + C\|w\|_{\sH^1(X)}^2 \asymp \|w\|_{\sH^1(Y)}^2. \label{eqn-1-8} \end{equation} Combined with the estimate $\|w\|_{\sH^1(X)}^2\le C\|w\|_{\sH^{\frac{1}{2}}(Y)}^2$, it implies the statement. \end{proof} \begin{remark}\label{rem-1-5} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-1-5-i} If $Y$ is infinitely smooth, then $\Lambda$ is a first-order pseudodifferential operator on $Y$ with the principal symbol $(g_Y(x,\xi))^{1/2}$, where $g_Y$ is the restriction of the metrics to $Y$. Then the standard results hold: \begin{equation} \mathsf{N}(\lambda)=\kappa_0 \lambda^d + O(\lambda^{d-1})\qquad\text{as\ \ }\lambda \to +\infty \label{eqn-1-9} \end{equation} with the standard coefficient $\kappa_0 =(2\pi)^{-d}\omega_d \operatorname{mes} (Y)$, where $\operatorname{mes}(Y)$ means the $d$-dimensional volume of $Y$ and $\omega_d$ is the volume of the unit ball in $\mathbb{R}^d$. We also can get two-term asymptotics with the same remainder estimate for $\mathsf{N}(\lambda)*\lambda_+^{r-1}$, $0<r \le 1$. \item\label{rem-1-5-ii} Moreover, if the set of all periodic geodesics of $Y$ has measure $0$, then \begin{equation} \mathsf{N}(\lambda)=\kappa_0 \lambda^d + \kappa_1 \lambda^{d-1}+o(\lambda^{d-1})\qquad\text{as\ \ }\lambda \to +\infty.
\label{eqn-1-10} \end{equation} We also can get two-term asymptotics (three-term for $r=1$) with the same remainder estimate for $\mathsf{N}(\lambda)*\lambda_+^{r-1}$, $0<r \le 1$. The same asymptotics, albeit with a larger number of terms, hold for $r>1$. \item\label{rem-1-5-iii} ``Regular'' singularities of dimension $<(d-1)$ (like conical points in $3\mathsf{D}$) do not cause any problems for the asymptotics of $\mathsf{N}(\lambda)$---we can use a rescaling technique to cover them; moreover, in the framework of this paper they would not matter even combined with edges (like vertices in $3\mathsf{D}$). \end{enumerate} \end{remark} \chapter{Dirichlet-to-Neumann operator} \label{sect-2} \section{Toy-model: dihedral angle} \label{sect-2-1} Let $Z=\mathbb{R}^{d-1}$ with the Euclidean metrics, $X= \mathcal{X} \times Z $, $Y= \mathcal{Y}\times Z$, where $\mathcal{X}$ is a planar angle of solution $\alpha$, $0<\alpha\le 2\pi$, $\mathcal{Y}=\mathcal{Y}_1\cup \mathcal{Y}_2$, $\mathcal{Y}_j$ are rays (see Figure~\ref{fig-2}). Then one can identify $Y$ with $\mathbb{R}^d$ with coordinates $(s,z)$, where $z\in Z$ and \begin{itemize}[label=-] \item $s=\operatorname{dist} (y,Z)$ for a point $y\in Y_1= \mathcal{Y}_1\times Z$, \item $s=-\operatorname{dist} (y,Z)$ for a point $y\in Y_2= \mathcal{Y}_2\times Z$. \end{itemize} Then we have a Euclidean metrics and a corresponding positive Laplacian $\Delta_Y$ on $Y$. \begin{remark}\label{rem-2-1} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-2-1-i} We can consider any angle $\alpha >0$, including $\alpha >2\pi$ (in which case $X$ could be defined in polar coordinates, but then we need to address some issues with the domain of the operator). \item\label{rem-2-1-ii} If $\alpha=\pi$, then $\Lambda=\Delta_Y^{1/2}$. \item\label{rem-2-1-iii} We say that $X$ is a \emph{proper angle\/} if $\alpha\in (0,\pi)$ and that $X$ is an \emph{improper angle\/} if $\alpha\in (\pi,2\pi)$.
We are not very concerned about $\alpha=\pi,\,2\pi$ since these cases will be forbidden in the general case. \end{enumerate} \end{remark} For this toy-model we can make a partial Fourier transform $F_{z\to \zeta}$ and then study the equation in the planar angle: \begin{equation} \Delta_2 w+w=0, \label{eqn-2-1} \end{equation} where $\Delta_2$ is a positive $2\mathsf{D}$-Laplacian and we also made a change of variables $x''\mapsto |\zeta|\cdot x''$, $x''=(x_1,x_2)$. Denote by $\bar{J}$ and $\bar{\Lambda}$ the operators $J$ and $\Lambda$ for (\ref{eqn-2-1}). This problem is extensively studied in Appendix~\ref{sect-A}. Then we can use the separation of variables. Singularities at the vertex of solutions to (\ref{eqn-2-1}) with $w|_Y=0$ are the same as for $\Delta_2 w=0$, $w|_Y=0$, and they are combinations of $r^{\pi n/\alpha}\sin (\pi n\theta/\alpha)$ with $n=1,2,\ldots$, where $(r,\theta)\in \mathbb{R}^+ \times (0,\alpha)$ are polar coordinates. This shows the role of $\alpha$: if $\alpha \in (0,\pi)$ those functions are in $\sH^{\sigma}_\mathsf{loc}(\mathcal{X})$ with $\sigma< 1+\pi n/\alpha$, and $\partial_\nu w|_Y$ belong to $\sH^{\sigma-3/2}_\mathsf{loc}(\mathcal{Y})$. One can prove easily the following Propositions~\ref{prop-2-2} and \ref{prop-2-3} below: \begin{proposition}\label{prop-2-2} The following are bounded operators \begin{align} & \Delta_\mathsf{D}^{-1}:\sH^{\sigma} (X)\to \sH^{\sigma+2}(X), \label{eqn-2-2}\\ &J:\sH^{\sigma+\frac{3}{2}}(Y)\to \sH^{\sigma+2}(X), \label{eqn-2-3}\\ &\Lambda :\sH^{\sigma+\frac{3}{2}}(Y)\to \sH^{\sigma+\frac{1}{2}}(Y), \label{eqn-2-4} \end{align} where $\Delta_\mathsf{D}$ is the operator $\Delta$ with zero Dirichlet boundary conditions on $Y$ and \begin{itemize}[label=-] \item $\sigma\in [-\frac{1}{2},0]$, if $\alpha \in (0,\pi)$, and \item $\sigma\in [-\frac{1}{2},\bar{\sigma})$ with $\bar{\sigma}=\pi /\alpha-1$ otherwise.
\end{itemize} \end{proposition} \begin{proposition}\label{prop-2-3} For equation \textup{(\ref{eqn-2-1})} in $\mathcal{X}$ \begin{equation} \bar{\Lambda}- (D_s^2+1)^{1/2} = \sum_{j+k\le 1} D_s^j \bar{K}_{jk}D_s^k\,, \label{eqn-2-5} \end{equation} where the operators $\bar{K}_{jk}$ have Schwartz kernels $\bar{K}_{jk}(s,s')$ such that \begin{multline} |D_s^pD_{s'}^q \bar{K}_{jk}(s,s') |\le \\ C_{pqm }|s|^{-(\bar{\sigma}-p)_-}|s'|^{-(\bar{\sigma}-q)_-}(|s|+|s'|)^{-p-q +(\bar{\sigma}-p)_- + (\bar{\sigma}-q)_-} (|s|+|s'|+1)^m, \label{eqn-2-6} \end{multline} where $l_\pm \coloneqq \max (\pm l,0)$ and $m$ is arbitrarily large. \end{proposition} Then \begin{corollary}\label{cor-2-4} For the toy-model in $X$ \begin{equation} \Lambda- \Delta_Y^{1/2} = \sum_{j+k\le 1} D_s^j K_{jk}D_s^k\,, \label{eqn-2-7} \end{equation} where the operators $K_{jk}$ have Schwartz kernels \begin{equation} K_{j,k}(x',s; y', s')= (2\pi)^{1-d} \iint |\xi'|^{2-j-k} \bar{K}_{jk}(s|\xi'|,\, s'|\xi'|) e^{-i\langle x'-y',\xi'\rangle}\,d\xi'. \label{eqn-2-8} \end{equation} \end{corollary} \section{General case} \label{sect-2-2} Consider now the general case. In this case we can again introduce the coordinate $s$ on $Y$ and consider $Y$ as a Riemannian manifold, but with the metrics which is only $\sC^{0,1}$ (Lipschitz class); more precisely, it is $\sC^\infty$ on both $Y_1$ and $Y_2$, but the first derivative with respect to $s$ may have a jump on $Z$. It does not, however, prevent us from introducing $\Delta_Y$ and therefore $\Delta_Y^{\frac{1}{2}}$, but the latter would not necessarily be a classical pseudodifferential operator. We want to exclude the degenerate cases of the angles $\pi$ and $2\pi$. So, let us assume that \begin{claim}\label{eqn-2-9} $Z=\{x\colon x_1=x_2=0\}$ and $X=Z\times \mathcal{X}$ with a planar angle $\mathcal{X}\ni (x_1,x_2)$, disjoint from the half-plane and the plane with a cut.
\end{claim} \begin{definition}\label{def-2-5} For $z\in Z$ let $\alpha(z)$ be the internal angle between the two leaves of $Y$ at the point $z$ (calculated in the corresponding metrics). Due to our assumption either $\alpha(z) \in (0,\pi)$ or $\alpha (z)\in (\pi,2\pi)$. Let $Z_j$ be a connected component of $Z$. \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{def-2-5-i} $Z_j$ is an \emph{inner edge} if $\alpha(z)\in (0,\pi)$ on $Z_j$, and \item\label{def-2-5-ii} $Z_j$ is an \emph{outer edge} if $\alpha(z)\in (\pi,2\pi)$ on $Z_j$. \end{enumerate} \end{definition} One can prove easily \begin{proposition}\label{prop-2-6} The following are bounded operators \begin{align} & \Delta_\mathsf{D}^{-1}:\sH^{\sigma} (X)\to \sH^{\sigma+2}(X), \label{eqn-2-10}\\ &J:\sH^{\sigma+\frac{3}{2}}(Y)\to \sH^{\sigma+2}(X), \label{eqn-2-11}\\ &\Lambda :\sH^{\sigma+\frac{3}{2}}(Y)\to \sH^{\sigma+\frac{1}{2}}(Y), \label{eqn-2-12} \end{align} where $\Delta_\mathsf{D}$ is the operator $\Delta$ with zero Dirichlet boundary conditions on $Y$ and \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{prop-2-6-i} $\sigma\in [-\frac{1}{2},0]$, if $\alpha(z)\in (0,\pi)\ \forall z\in Z$, and \item\label{prop-2-6-ii} $\sigma\in [-\frac{1}{2},\bar{\sigma})$ with $\bar{\sigma}=\pi /\bar{\alpha}-1$, $\bar{\alpha}=\max_{z\in Z} \alpha(z)$ otherwise.
\end{enumerate} \end{proposition} One can also prove easily \begin{proposition}\label{prop-2-7} In the general case, assuming that $Z=\{x\colon x_1=x_2=0\}$ and $X=Z\times \mathcal{X}$ with a planar angle $\mathcal{X}\ni (x_1,x_2)$ of solution $\in (0,\pi)\cup(\pi,2\pi)$, \begin{equation} \Lambda- \Delta_Y^{1/2} = b+ \sum_{j+k\le 1} D_s^j K_{jk}D_s^k\,, \label{eqn-2-13} \end{equation} where $b$ is a bounded operator and the operators $K_{jk}$ have Schwartz kernels \begin{multline} K_{j,k}(x',s; y', s')=\\ (2\pi)^{1-d} \iint |\xi'|^{2-j-k} \bar{K}_{jk}\bigl(\frac{1}{2}(x'+y'), s|\xi'|,\, s'|\xi'|\bigr)\, e^{-i\langle x'-y',\xi'\rangle}\,d\xi'. \label{eqn-2-14} \end{multline} \end{proposition} \begin{remark}\label{rem-2-8} At distances $\gtrsim 1$ from $Z$, $b$ is a classical $0$-order pseudodifferential operator; at distances $\gtrsim |\xi'|^{-1+\delta}$ it is a rough $0$-order pseudodifferential operator\footnote{\label{foot-3} I.e.\ with symbols such that $|D_x^\alpha D_\xi^\beta b|\le C_{\alpha\beta} \rho^{-|\beta|}\gamma^{-|\alpha|}$ with $\rho\gamma \ge h^{1-\delta}$. Here $\rho=1$, $\gamma= |x|$.}. \end{remark} \chapter{Microlocal analysis} \label{sect-3} \section{Propagation of singularities near edge} \label{sect-3-1} We are going to consider microlocal analysis near a point $(\bar{x},\bar{\xi}'')\in T^*Z$ under assumption (\ref{eqn-2-9}). In our approach we use the definition of the operator $\Lambda$ rather than its description from the previous Section~\ref{sect-2}. So, let $x=(x'';x')\in \mathbb{R}^2\times \mathbb{R}^{d-1}$. \begin{proposition}\label{prop-3-1} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{prop-3-1-i} Let $q_j=q_j(\xi')$ ($j=1,2$) be two symbols, constant for $|\xi'|\ge C$. Assume that $\operatorname{dist} (\supp (q_1),\supp(q_2))\ge \epsilon$. Consider $h$-pseudodifferential operators $Q_j=q_j^\mathsf{w}(h^{-1}D')$, $j=1,2$.
Then the operator norms of \begin{phantomequation}\label{eqn-3-1}\end{phantomequation} \begin{align} &Q_1 \Delta_\mathsf{D}^{-1}Q_2:\sL^2(X)\to \sH^2(X),&&Q_1 JQ_2:\sH^{\frac{1}{2}}(Y)\to \sH^2(X), \tag*{$\textup{(\ref*{eqn-3-1})}_{1,2}$}\label{eqn-3-1-1}\\ &Q_1\Lambda Q_2 :\sH^{1}(Y)\to \sL^2(Y) \tag*{$\textup{(\ref*{eqn-3-1})}_{3}$}\label{eqn-3-1-3} \end{align} do not exceed $C'h^s$ with arbitrarily large $s$, where $\Delta_\mathsf{D}$ is the operator $\Delta$ with zero Dirichlet boundary conditions on $Y$. \item\label{prop-3-1-ii} Let $Q_j(x')$ ($j=1,2$) be two functions. Then the operators $\textup{(\ref{eqn-3-1})}_{1-3}$ are infinitely smoothing with respect to $x'$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{pf-3-1-i} Without loss of generality one can assume that $q_j$ are constant also in the vicinity of $0$. Then the operator norms of $[Q_j,\Delta]\Delta_\mathsf{D}^{-1}$ in $\sL^2(X)$ do not exceed $Ch$; replacing $Q_j$ by $Q_j^{(n)}$ with $Q_j^{(0)}=Q_j$ and $Q_j^{(n)}\coloneqq [Q_j^{(n-1)},\Delta]\Delta_\mathsf{D}^{-1}$ for $n=1,2,\ldots$, we prove by induction that the operator norms of $Q_j^{(n)}$ in $\sL^2(X)$ do not exceed $Ch^n$. Then one can easily prove by induction that the operator norm of $\textup{(\ref*{eqn-3-1})}_{1}$ does not exceed $Ch^s$. Then one can easily prove that the operator norms of $\textup{(\ref{eqn-3-1})}_{2,3}$ do not exceed $Ch^s$ as well. It concludes the proof of Statement~\ref{prop-3-1-i}. \item\label{pf-3-1-ii} Statement~\ref{prop-3-1-ii} is proven in the same way. \end{enumerate} \vskip-\baselineskip\ \end{proof} Let $u(x,y,t)$ be the Schwartz kernel of $e^{it\Lambda}$, $x,y\in Y$. \begin{proposition}\label{prop-3-2} Consider an $h$-pseudodifferential operator $Q=q^\mathsf{w}(x',h^{-1}D')$ where $q$ vanishes on $\{|\xi'|\le c_0\}$. Let $\chi\in \sC_0^\infty (\mathbb{R})$, $T\ge h^{1-\delta}$.
Then the operator norms of $F_{t\to \tau} \chi_T(t) Q_x u$ and $F_{t\to \tau} \chi_T(t) u\,^t\!Q_y$ do not exceed $C'_Th^s$ for $\tau\le c$ with $c_0=c_0(c)$. \end{proposition} \begin{proof} One needs to consider $v=e^{it\Lambda}f$, $f\in \sH^1 (Y)$, $\|f\|_{\sL^2(Y)}=1$, and observe that it satisfies $(D_t-\Lambda )v=0$. Using (\ref{eqn-1-5}) we see that the operator $(D_t-\Lambda)$ is elliptic in $\{|\xi'|\ge c_0, \tau\le c\}$, while Proposition~\ref{prop-3-1} ensures its locality. \end{proof} Therefore, in what follows \begin{remark}\label{rem-3-3} Studying energy levels $\tau\le c$ we can always cut off to the domain $\{|\xi'|\ge c_0\}$. \end{remark} Now we can study the propagation of singularities. Let us prove that the propagation speed with respect to $x$ and $\xi'$ does not exceed $C_0$. For this and our further analysis we need the following Proposition~\ref{prop-3-4}: \begin{proposition}\label{prop-3-4} For an $h$-pseudodifferential operator $Q=q^\mathsf{w}(x,hD')$ the following formula connecting the commutators $[\Delta,Q]$ and $[\Lambda +\partial_\nu, Q]$ holds: \begin{equation} -\operatorname{Re} i([\Delta,Q]Jv,Jv)_X= \operatorname{Re} i(([\Lambda,Q]+[\partial_\nu, Q])v,v)_Y. \label{eqn-3-2} \end{equation} \end{proposition} \begin{proof} First, consider a real-valued symbol $q=q(x,\xi')$ and $Q=q^\mathsf{w}(x,hD')$ its Weyl quantization. Let $v$ denote any function on $Y$ and $V$ its continuation as a harmonic function.
Then for $w=Jv$ \begin{multline*} 0=(Q\Delta w,w)_X= (\Delta Qw,w) -([\Delta ,Q]w,w)_X=\\ -([\Delta ,Q]w,w)_X+ (Qw,\Delta w)_X - (\partial_\nu Qw, w)_Y + (Qw, \partial _\nu w)_Y= \\ -([\Delta ,Q]w,w)_X - ( Q\partial_\nu w, w)_Y -( [\partial_\nu,Q] w, w)_Y + (Qw, \partial _\nu w)_Y=\\ -([\Delta ,Q]w,w)_X + ( Q \Lambda v, v)_Y -( [\partial_\nu,Q] v, v)_Y - (v, Q\Lambda v)_Y=\\ -([\Delta ,Q]w,w)_X - (\Lambda Q v,v)_Y-( [\partial_\nu,Q] v, v)_Y, \end{multline*} which implies (\ref{eqn-3-2}). \end{proof} Now we can prove that at energy levels $\tau\le c$ the propagation speed with respect to $x$ and $\xi'$ does not exceed $C_0=C_0(c)$. \begin{proposition-foot}\footnotetext{\label{foot-4} Cf. Theorem~\ref{monsterbook-thm-8-5-6}\ref{monsterbook-thm-8-5-6-i} of \cite{monsterbook}.}\label{prop-3-5} Let $Q_j=q_j^\mathsf{w} (x,hD')$ and $\operatorname{dist} (\supp (q_1),\supp (q_2))\ge C_0 T$ with fixed $T>0$. Let $\chi\in \sC_0^\infty ([-1,1])$. Then for $\tau \le c$ \begin{equation} |F_{t\to h^{-1}\tau} \bigl( \chi_T(t) Q_{1x} u \,^t\!Q_{2y} \bigr)|\le Ch^m, \label{eqn-3-3} \end{equation} where here and below $m$ is an arbitrarily large exponent and $C=C_m$. \end{proposition-foot} \begin{proof} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{pf-3-5-i} The proof is the standard one for propagation with respect to $(x',\xi')$: we consider $\phi (x',\xi',t)$ and prove that under the microhyperbolicity condition \begin{gather} \phi_t -\{ |\xi'|,\phi \}\ge \epsilon_0, \label{eqn-3-4}\\ \shortintertext{which is equivalent to} 2\phi_t - |\xi'|^{-1}\{|\xi'|^2,\phi \}\ge 2\epsilon_0|\xi'|, \label{eqn-3-5} \end{gather} our standard propagation theorem (see Theorem~\ref{monsterbook-thm-2-1-2} of \cite{monsterbook}) holds, just repeating the arguments of its proof, using equality (\ref{eqn-3-2}) and the fact that $\|Jv\|_{\sH^{1/2}(X)} \asymp \|v\|_{\sL^2(Y)}$.
Then we plug $\phi (x',\xi',t)=\psi(x',\xi') -t$ with $|\nabla_{x',\xi'} \psi |\le \epsilon_0$, and prove (\ref{eqn-3-3}) for $q_j=q_j(x',\xi')$. \item\label{pf-3-5-ii} We also need to prove that the propagation speed with respect to $(x_1,x_2)$\,\footnote{\label{foot-5} Or, equivalently, with respect to $s$.} does not exceed $C_0$, but it is easy since for $|s|\ge \epsilon$, $\Lambda$ is a first-order pseudodifferential operator with the symbol $|\xi|$. \end{enumerate} \vskip-\baselineskip\ \end{proof} \begin{remark}\label{rem-3-6} In fact, it follows from the proof that the propagation speed with respect to $x'$ does not exceed $C_0$, and the propagation speed with respect to $\xi'$ does not exceed $C_0|\xi'|$ with $C_0$, which does not depend on the restriction $\tau\le c$. Meanwhile, the propagation speed with respect to $(x_1,x_2)$ does not exceed $1$. \end{remark} Next we prove that at energy levels $\tau=1$ the propagation speed with respect to $x'$ in the vicinity of $(0,\bar{\xi}')$ with $|\bar{\xi}'|\ge\epsilon_0$ is at least $\epsilon_1=\epsilon_1(\epsilon_0)$. \begin{proposition-foot}\footnotetext{\label{foot-6} Cf. Theorem~\ref{monsterbook-thm-8-5-6}\ref{monsterbook-thm-8-5-6-ii} of \cite{monsterbook}.}\label{prop-3-7} Let $Q_j=q_j^\mathsf{w} (x,hD')$ and \begin{equation*} \operatorname{dist}_{x'} (\supp (q_1),\supp (q_2))\le \epsilon_1T \end{equation*} with fixed $T>0$. Let $\chi\in \sC_0^\infty ([-1,-\frac{1}{2}]\cup [\frac{1}{2},1])$. Then for $|\tau-1|\le \epsilon_0$ \textup{(\ref{eqn-3-3})} holds. \end{proposition-foot} \begin{proof} Once the propagation theorem mentioned in the proof of Proposition~\ref{prop-3-5} is proven, we just plug $\phi (x',\xi',t)=\psi (x',\xi')-\epsilon t$ with $\xi'\cdot \nabla_{x'} \psi \ge 1$. \end{proof} \begin{corollary-foot}\footnotetext{\label{foot-7} Cf.
Corollary~\ref{monsterbook-cor-8-5-7}\ref{monsterbook-cor-8-5-7-ii} of \cite{monsterbook}.}\label{cor-3-8} In the framework of Proposition~\ref{prop-3-5} consider $|\tau -1|\le \epsilon$. Then \begin{align} &|F_{t\to h^{-1}\tau} \Gamma_{x} \chi_T(t) u \,^t\!Q_y|\le Ch^{1-d+m}T^{-m} \label{eqn-3-6}\\ \shortintertext{and} & |F_{t\to h^{-1}\tau} \Gamma_{x} \bigl(\bar{\chi}_{T'}(t)- \bar{\chi}_{T}(t)\bigr) u \,^t\!Q_y| \le Ch^{1-d+m}T^{-m}, \label{eqn-3-7} \end{align} provided $\chi\in \sC_0^\infty ([-1,-\frac{1}{2}]\cup[\frac{1}{2},1])$, $\bar{\chi}\in \sC_0^\infty ([-1,1])$, $\bar{\chi}=1$ on $[-\frac{1}{2},\frac{1}{2}]$, $h\le T\le T'\le T_0$ with a small constant $T_0$. \end{corollary-foot} \begin{proof} For a small constant $T$, (\ref{eqn-3-6}) follows directly from Proposition~\ref{prop-3-7}, once one checks that $D_{x''}$, $D_{y''}$ can be inserted into the corresponding estimate, which is easy. For $h\le T\le T_0$ we just use rescaling, as in the proof of Theorem~\ref{monsterbook-thm-2-1-19} of \cite{monsterbook}. Finally, (\ref{eqn-3-7}) is obtained by summation over a partition of unity with respect to $t$. \end{proof} This immediately implies \begin{corollary}\label{cor-3-9} $\mathsf{N}_h (\tau)$ and $\mathsf{N}_h (\tau) *\tau_+^{\sigma-1}$ are approximated by the corresponding Tauberian expressions with $T\asymp h^{1-\delta}$ with errors $O(h^{1-d})$ and $O(h^{1-d+\sigma})$ respectively (as $\tau= 1$ and $h\to +0$). \end{corollary} \section{Reflection of singularities from the edge} \label{sect-3-2} The results of the previous subsection are sufficient to prove \emph{sharp spectral asymptotics\/} (with the remainder estimate $O(\lambda^{d-1})$), which do not require conditions of a global nature, but they are insufficient to prove sharper spectral asymptotics (with the remainder estimate $o(\lambda^{d-1})$), which do require such conditions.
For this more ambitious purpose we need to prove that the singularities propagate along geodesic billiards on the boundary $Y$, reflecting and refracting at the edge $Z$ (so the billiards will be branching), and that the typical singularity (with $|\xi'|<\tau$) does not stick to $Z$. To do this we will follow the arguments of Subsection~\ref{monsterbook-sect-8-5-4} of \cite{monsterbook}. Assuming (\ref{eqn-2-9}), consider the operator $Q=x_1D_1+x_2D_2-i/2$, which acts along $Y$. As an operator in $\sL^2(Y)$ it is self-adjoint; as an operator in $\sL^2(X)$ it is not, but it differs from a self-adjoint operator $Q=x_1D_1+x_2D_2-i$ by $i/2$, which does not affect commutators. As a result, repeating the proof of Proposition~\ref{prop-3-4}, we arrive at \begin{claim}\label{eqn-3-8} Under assumption (\ref{eqn-2-9}) equality (\ref{eqn-3-2}) also holds for the operator $Q=x_1D_1+x_2D_2-i$. \end{claim} To apply the arguments of the proofs of Propositions~\ref{monsterbook-prop-8-5-9} and then \ref{monsterbook-prop-8-5-10} of \cite{monsterbook}, we need to check whether the operator $i[\Lambda, Q]$ is positive definite, which by virtue of (\ref{eqn-3-2}) is equivalent to the same property of the form on the left: \begin{equation} \operatorname{Re} (i[\Delta, Q]w,w) -\operatorname{Re} (i[\partial_\nu ,Q]w,w)_Y \ge \epsilon \|\nabla w\|^2 \qquad \text{for\ \ } w\colon \Delta w=0. \label{eqn-3-9} \end{equation} For the toy-model $i[\Delta, Q]= 2(D_1^2+D_2^2)$, $i[\partial_\nu ,Q]= -\partial_\nu$, and the form on the left coincides with $\|\nabla w\|^2 - 2\|\nabla 'w\|^2$ on $w$ in question; therefore, after the Fourier transform $F_{x'\to\zeta}$ and a change of variables $x_{1,2}$, it boils down to the inequality \begin{equation} \|\nabla w\|^2 -\|w\|^2 \ge \epsilon (\|\nabla w\|^2+\|w\|^2)\qquad \text{for\ \ } w\colon \Delta_2 w+w=0 \label{eqn-3-10} \end{equation} for the two-dimensional $\Delta_2$, norms and scalar products.
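The toy-model identity $i[\Delta, Q]= 2(D_1^2+D_2^2)$ used above is a routine computation: with $D_j=-i\partial_j$, so that $[D_j,x_j]=-i$,
\begin{equation*}
i[\Delta, Q]= i\sum_{j=1,2} [D_j^2, x_jD_j] = i\sum_{j=1,2} \bigl(D_j[D_j,x_j]+[D_j,x_j]D_j\bigr)D_j = 2(D_1^2+D_2^2),
\end{equation*}
since the derivatives along the edge and the constant term of $Q$ commute with all the operators involved.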
This inequality is explored in Appendix~\ref{sect-A}: by virtue of Proposition~\ref{cor-A-11}, (\ref{eqn-3-10}) holds for $\alpha \in (\pi,2\pi)$, while due to Proposition~\ref{prop-A-16} it fails for $\alpha \in (0,\pi)$. Therefore we arrive at \begin{proposition-foot}\footnotetext{\label{foot-8} Cf. Proposition~\ref{monsterbook-prop-8-5-9} of \cite{monsterbook}.}\label{prop-3-10} Consider the two-dimensional toy-model (planar angle) with $\alpha \in (\pi, 2\pi)$. Let $\psi \in \mathscr{C}^\infty _0 ([-1,1])$, $\psi_\gamma (x)=\psi (x/\gamma)$ and $\phi \in \mathscr{C}_0^\infty([-1,1])$, $\tau \ge 1+\epsilon_0$. Then as $\gamma\ge h^{1-\delta}$, $T\ge C_0\gamma$, $h^\delta\ge \eta\ge h^{1-\delta}T^{-1}$ \begin{equation} \| \phi (\eta^{-1} (hD_t-\tau)) \psi_\gamma e^{it\Lambda}\psi_\gamma|_{t=T} \| \le CT^{-1}\gamma + Ch^{\delta'} . \label{eqn-3-11} \end{equation} \end{proposition-foot} \begin{proof} The proof follows that of Proposition~\ref{monsterbook-prop-8-5-9} of \cite{monsterbook} with $m=1$, and uses equality (\ref{eqn-3-2}) to reduce the calculation of the commutator $[\Lambda,Q]$ to that of the commutator $[\Delta,Q]$. \end{proof} \begin{proposition-foot}\footnotetext{\label{foot-9} Cf. Proposition~\ref{monsterbook-prop-8-5-10} of \cite{monsterbook}.}\label{prop-3-11} Consider the $(d+1)$-dimensional toy-model (dihedral angle) with $\alpha \in (\pi, 2\pi)$. Let $\psi \in \mathscr{C}^\infty _0 ([-1,1])$, $\psi_\gamma (x)=\psi (x_1/\gamma)$, $\phi \in \mathscr{C}_0^\infty([-1,1])$, $\varphi \in \mathscr{C}_0^\infty(\mathbb{R}^{d-1})$ supported in $\{|\xi'|\le 1-\epsilon\}$ with $\epsilon>0$. Finally, let $\gamma\ge h^{1-\delta}$, $T\ge C h^{-\delta}\gamma$, $h^\delta\ge \eta\ge h^{1-\delta}T^{-1}$. Then \begin{equation} \|\phi (\eta^{-1} (hD_t -1)) \varphi (hD') \psi_\gamma (x_1) e^{it\Lambda}\psi_\gamma(x_1)|_{t=T}\| = O(h^m) \label{eqn-3-12} \end{equation} with arbitrarily large $m$.
\end{proposition-foot} \begin{proof} The proof follows that of Proposition~\ref{monsterbook-prop-8-5-10} of \cite{monsterbook} with $m=1$. \end{proof} Now we can consider the general case. Consider a point $\bar{z}=(\bar{x},\bar{\xi}')\in T^*Z$, $|\bar{\xi}'|<1$. We can lift it to points $\bar{z}^\pm=(\bar{x},\bar{\xi}^{\pm})\in T^*Y_{1,2}|_Z$ with $|\bar{\xi}^{\pm} |=1$ and $\iota \bar{z}^\pm =\bar{z}$, where $\iota (x,\xi)= (x,\xi')\in T^*Z$ for $(x,\xi)\in T^*Y|_Z$. Consider geodesic trajectories $\Psi_t ( \bar{z}^{\pm} )$, going from $\bar{z}^{\pm}$ into $T^*Y_{1,2}$ for $t<0$, $|t|<\epsilon$; this distinguishes these two points. We can also consider geodesic trajectories $\Psi_t (\bar{z}^{\pm})$, going from $\bar{z}^{\mp}$ into $T^*Y_{1,2}$ for $t>0$, $|t|<\epsilon$. Let $\iota^{-1}\bar{z}=\{\bar{z}^+,\bar{z}^-\}$ and let $\Psi_t (\iota^{-1}\bar{z})$ be obtained as a corresponding union as well\footnote{\label{foot-10} So we actually restrict $\iota$ to $S^*Y|_Z$ and $\iota^{-1}$ to $B^*Z$.}. So, for such a point $\bar{z}$, \ $\Psi_t (\iota^{-1}\bar{z})$ with $t<0$ consists of two incoming geodesic trajectories, while $\Psi_t (\iota^{-1}\bar{z})$ with $t>0$ consists of two outgoing geodesic trajectories. Similarly, for $z\in T^*(Y\setminus Z)$ we can introduce $\Psi_t (z)$: when the trajectory hits $Z$, it branches. \begin{theorem-foot}\footnotetext{\label{foot-11} Cf. Theorem~\ref{monsterbook-thm-8-5-11} of \cite{monsterbook}.} \label{thm-3-12} Consider a point $z=(x,\xi)\in T^*Y$, $|\xi|=1$. Consider a (branching) geodesic trajectory $\Psi_t (z)$ with $\pm t\in [0, T]$ (one sign only) with $T\ge \epsilon_0$ and assume that for each $t$ indicated it meets $Z$ transversally, i.e. \begin{multline} \operatorname{dist} (\uppi_x \Psi_t (x,\xi),Z) \le \epsilon \implies\\ |\frac{d\ }{dt}\operatorname{dist} (\uppi_x \Psi_t (x,\xi),Z)|\ge \epsilon \qquad \forall t: \pm t\in [0, mT].
\label{eqn-3-13} \end{multline} Also assume that \begin{equation} \operatorname{dist} (\uppi_x \Psi_t (x,\xi),Z)\ge \epsilon_0 \qquad \text{as\ \ } t=0, \ \pm t=T. \label{eqn-3-14} \end{equation} Let $\epsilon >0$ be a small enough constant, let $Q$ be supported in an $\epsilon$-vicinity of $(x,\xi)$, and let $Q_1\equiv 1$ in a $C_0\epsilon$-vicinity of $\Psi_t(x,\xi)$ as $t=\pm T$. Then the operator $(I-Q_1) e^{-it\Lambda }Q$ is negligible as $t=\pm mT$. \end{theorem-foot} \begin{proof} The proof follows that of Theorem~\ref{monsterbook-thm-8-5-11} of \cite{monsterbook} with $m=1$. \end{proof} Adapting the construction of the ``dependence set'' to our case, we arrive at the following \begin{definition}\label{def-3-13} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{def-3-13-i} The curve $z(t)$ in $T^*Y$ is called a \emph{generalized geodesic billiard\/} if a.e. \begin{equation} \frac{dz}{dt} \in K(z), \label{eqn-3-15} \end{equation} where \begin{enumerate}[label=(\alph*), wide, labelindent=20pt] \item \label{def-3-13-ia} $K(z)= \{H_{g}(z)\}$, where $g$ is the metric form, if $z\in T^*(Y\setminus Z)$, \item\label{def-3-13-ib} $K(z) =\{H_{g}(z')\colon z'\in \iota^{-1}\iota z\}$, if $z\in T^*Y|_Z$. \end{enumerate} \item\label{def-3-13-ii} Let $\Psi_t(z)$ for $t\gtrless 0$ be the set of points $z'\in T^*Y$ such that there exists a generalized geodesic billiard $z(t')$ with $0 \lessgtr t' \lessgtr t$, such that $z(0)=z$ and $z(t)=z'$. The map $(z,t)\mapsto \Psi_t(z)$ is called a \emph{generalized (branching) billiard flow\/}. \item\label{def-3-13-iii} A point $z\in T^*Y$ is \emph{partially periodic\/} (with respect to $\Psi$) if $z\in \Psi_t(z)$ for some $t\ne 0$. A point $z\in T^*Y$ is \emph{completely periodic\/} (with respect to $\Psi$) if $\{z\}= \Psi_t(z)$ for some $t\ne 0$. \end{enumerate} \end{definition} Then we immediately arrive at \begin{corollary}\label{cor-3-14} Assume that $Z$ consists of outer edges only.
Also assume that the set of all partially periodic points has measure zero. Then $\mathsf{N}_h (\tau)$ and $\mathsf{N}_h (\tau) *\tau_+^{r-1}$ are approximated by the corresponding Tauberian expressions with $T\asymp h^{1-\delta}$ with errors $o(h^{1-d})$ and $o(h^{1-d+r})$ respectively (as $\tau= 1$ and $h\to +0$). \end{corollary} \begin{proof} Easy details are left to the reader. \end{proof} \chapter{Main results} \label{sect-4} \section{From Tauberian to Weyl asymptotics} \label{sect-4-1} Now we can apply the method of successive approximations as described in Section~\ref{monsterbook-sect-7-2} of \cite{monsterbook}, considering as an unperturbed operator \begin{enumerate}[label=(\alph*), wide, labelindent=20pt] \item the one in $\mathbb{R}^{d}$, with the metric frozen at the point $y$, if $\operatorname{dist}(y,Z)\ge h^{1-\delta}$; \item the one in the dihedral edge, with the metric frozen at the point $(y',0)$, if $y=(y';s)$ with $|s|=\operatorname{dist}(y,Z)\le h^{1-\delta}$, \end{enumerate} with the following modification: we calculate $\Lambda$ in the same way, applying successive approximations both to $\Delta $, when we solve $\Delta w=0$, $w|_Y=v$, and to $\partial_\nu $, when we calculate $\partial_\nu w|_Y$.
Then we prove that for the operator $h\Lambda$ the Tauberian expression $\mathsf{N}_h^\mathsf{T}(1)$ for $\mathsf{N}_h^-(1)$ with $T=h^{1-\delta}$ coincides modulo $O(h^m)$ with the (generalized) Weyl expression \begin{equation} \mathsf{N}^\mathsf{W}_h \sim \kappa_0 h^{-d} + \kappa_1 h^{1-d}+ \ldots, \label{eqn-4-1} \end{equation} with the standard coefficient $\kappa_0$ and with $\kappa_1=\kappa_{1,Y\setminus Z}+\kappa_{1,Z}$, where $\kappa_{1,Y\setminus Z}$ is calculated in the standard way, for the smooth boundary, and \begin{gather} \kappa_{1,Z}=(2\pi)^{1-d}\omega_{d-1}\int _Z \varkappa (\alpha (y))\,dy\,, \label{eqn-4-2}\\ \varkappa (\alpha) = \int_1^\infty \int_{-\infty}^\infty \lambda ^{-d} \Bigl( \mathbf{e}_\alpha (s,s,\lambda) - \pi ^{-1}(\lambda-1) \Bigr)\,dsd \lambda\,, \label{eqn-4-3} \end{gather} where $\mathbf{e}_\alpha (s,s',\lambda)$ is the Schwartz kernel of the spectral projector of $\hat{\Lambda}$ in the planar angle of opening $\alpha$ and $\pi ^{-1}(\lambda-1)$ is the corresponding Weyl approximation. \section{Main theorems} \label{sect-4-2} Thus we arrive at the corresponding asymptotics for $\mathsf{N}^-_h(1)$ and from them, obviously, at asymptotics for $\mathsf{N} (\tau)$: \begin{theorem}\label{thm-4-1} Let $Y$ be a compact manifold with edges. Then the following asymptotics hold as $\tau\to+\infty$: \begin{align} &\mathsf{N}(\tau) = \kappa_0 \tau^{d} + O(\tau^{d-1}) \label{eqn-4-4} \shortintertext{and for $r>0$} &\mathsf{N}(\tau)*\tau_+^{r-1} = \Bigl(\sum_{m<r} \kappa_m \tau^{d-m}\Bigr)*\tau_+^{r-1} + O(\tau^{d-1}). \label{eqn-4-5} \end{align} \end{theorem} \begin{theorem}\label{thm-4-2} Let $Y$ be a compact manifold with edges. Assume that $Z$ consists only of outer edges and that the set of all points which are partially periodic with respect to the generalized billiard flow has measure~$0$.
Then the following asymptotics hold as $\tau\to+\infty$: \begin{align} &\mathsf{N}(\tau) = \kappa_0 \tau^{d} + \kappa_1 \tau^{d-1}+o(\tau^{d-1}) \label{eqn-4-6} \shortintertext{and for $r>0$} &\mathsf{N}(\tau)*\tau_+^{r-1} = \Bigl(\sum_{m \le r} \kappa_m \tau^{d-m}\Bigr)*\tau_+^{r-1} + o(\tau^{d-1}). \label{eqn-4-7} \end{align} \end{theorem} \section{Discussion} \label{sect-4-3} \begin{remark}\label{rem-4-3} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-4-3-i} Even for standard ordinary non-branching billiards, the billiard flow $\Psi_t$ could be multivalued. However, if through a point $z\in T^*(Y\setminus Z)$ (where now $Y$ is a manifold and $Z$ is its boundary) there passes a billiard trajectory, infinitely long in the positive (or negative) time direction, which always meets $Z$ transversally and has only finitely many reflections on each finite time interval, then $\Psi_t(z)$ for $\pm t>0$ is single-valued. Points which do not have this property are called \emph{dead-end points\/}. For ordinary billiards the set of dead-end points has measure zero. \item\label{rem-4-3-ii} For branching billiards (with velocities $c_1,c_2$) we can introduce the notion of a dead-end point as well: $z$ is a dead-end point if at least one of the branches either meets $Z$ non-transversally, or makes an infinite number of reflections on some finite time interval. As was shown by Yu.~Safarov and D.~Vassiliev \cite{safarov:vassiliev}, if $c_1$ and $c_2$ are not disjoint (our case!), the set of dead-end billiards could have positive measure. \end{enumerate} \end{remark} \begin{remark}\label{rem-4-4} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-4-4-i} Checking the non-periodicity assumption is difficult, but in some domains it is doable. For example, assume that $Y=Y_1\cup Y_2$ globally is a domain of revolution, so that $Z$ is a $(d-1)$-dimensional sphere. Then the measure of the set of dead-end billiards is $0$.
\item\label{rem-4-4-ii} Assume that neither $Y_1$ nor $Y_2$ contains closed geodesics. Let $\varphi_j(\beta)$ be the length of a geodesic segment in $Y_j$ with both ends on $Z$, where $\beta$ is the reflection angle: \begin{figure} \caption{Trajectories on the manifold of revolution } \label{fig-1} \end{figure} Assume that $\varphi_j(\beta)$ are analytic and $\varphi_j(\beta)\to 0$ as $\beta\to +0$. Then the measure of the set of partially periodic billiards is $0$. \end{enumerate} \end{remark} \begin{remark}\label{rem-4-5} Our arguments hold not only for compact $X$ but also for $X\subset \mathbb{R}^{d+1}$ with compact complement and with the metric stabilizing to the Euclidean one at infinity. \end{remark} \begin{remark}\label{rem-4-6} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-4-6-i} In the next version of this paper we want to prove sharper asymptotics for domains with inner edges. To do this we need to understand how singularities propagate near inner edges. One can prove that there are plenty of singularities, concentrated in $Z\times \mathbb{R}\ni (x,t)$ and $\{|\xi'|< \tau\}$. This is similar to Rayleigh waves, and we hope that, exactly like Rayleigh waves, these singularities do not prevent us from obtaining the sharper asymptotics. What we need to prove is that the singularities in $\{|\xi'|<\tau\}$, coming from $Y\setminus Z$ transversally to $Z$, reflect and refract but leave $Z$ instantly; in other words, that these two kinds of waves are completely separate. This is what I am trying to prove now. \item\label{rem-4-6-ii} Let $\mathcal{K}$ be the linear span of the corresponding eigenfunctions. We need to prove that $\|\nabla w\|\ge \|w\|$ holds for $w=\hat{J}v$ with $v\in \mathcal{K}^\perp$. One can easily prove that $\|\nabla w\|= \|w\|$ for $w=\hat{J}v$ with an eigenfunction $v$ (Proposition~\ref{prop-A-16}).
\end{enumerate} \end{remark} \begin{appendices} \chapter{Planar toy-model} \label{sect-A} \section{Preparatory results} \label{sect-A-1} Here, in contrast to the whole article, $X= \{x\in\mathbb{R}^2, x_1\ge |x_2|\cot (\alpha/2)\}$ is a planar angle of opening $\alpha \in (0,2\pi]$ with the boundary $Y=Y_1\cup Y_2$, $Y_{1,2}= \{x\colon x_1=|x_2|\cot(\alpha/2),\ \pm x_2<0\}$ and the bisector $Y_0 =\{x\colon x_2=0, x_1>0\}$, and $\Delta =-\partial_1^2-\partial_2^2$ is the positive Laplacian (so, for simplicity, we do not write the ``hat''). \begin{figure} \caption{Proper and improper angles} \label{fig-2} \end{figure} \begin{remark}\label{rem-A-1} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-A-1-i} For $\alpha=\pi$ we have a regular half-plane $\{x\colon x_1>0\}$, and for $\alpha =2\pi$ we have a plane with the cut $\{x\colon x_1\le 0,\ x_2=0\}$. \item One can consider $\alpha >2\pi$ on the covering of $\mathbb{R}^2$. \end{enumerate} \end{remark} Consider real-valued\footnote{\label{foot-A-1} For complex-valued solutions the main inequalities, with $w^2$ replaced by $|w|^2$, then follow automatically.} solutions of \begin{equation} L w \coloneqq (\Delta +1)w=0 \label{A-1} \end{equation} and the operators $J$, $\Lambda$: $w=Jv$ solves (\ref{A-1}) and $w|_Y=v$; $\Lambda v=-\partial_\nu w|_Y$ with $w=Jv$. Recall that $\nu$ is the inner normal to $Y$. Observe that for any angle\footnote{\label{foot-A-2} Not necessarily symmetric with respect to the $x_1$-axis.} \begin{equation} 2\iint _X Lw\cdot w_{x_1}\,dx_1dx_2= \int_{\partial X} \Bigl((w_{x_1}^2 -w_{x_2}^2-w^2) \nu_1 + 2 w_{x_1}w_{x_2} \nu_2 \Bigr)\,dr, \label{A-2} \end{equation} where $dr$ is the Euclidean measure on $Y$; a similar formula holds with $x_1$ and $x_2$ permuted. Then, for a solution of (\ref{A-1}), combining these formulae with constant coefficients $\ell_1$, $\ell_2$ (i.e.\ multiplying $Lw$ by $\ell_1 w_{x_1}+\ell_2 w_{x_2}$) yields \begin{equation} \int_{\partial X} \Bigl((w_{x_1}^2 -w_{x_2}^2) (\nu_1 \ell_1 -\nu_2\ell_2) + 2 w_{x_1}w_{x_2} (\nu_2 \ell_1 +\nu _1\ell_2)-w^2(\nu_1\ell_1+\nu_2\ell_2)\Bigr)\,dr=0.
\label{A-3} \end{equation} If on $\Gamma\subset \partial X$ we have $\ell_1=\nu_1$, $\ell_2=\nu_2$, then we can calculate invariantly, as if $\ell_1=\nu_1=0$, $\ell_2=\nu_2=1$: \begin{multline} (w_{x_1}^2 -w_{x_2}^2) (\nu_1 \ell_1 -\nu_2\ell_2) + 2 w_{x_1}w_{x_2} (\nu_2 \ell_1 +\nu _1\ell_2)-w^2(\nu_1\ell_1+\nu_2\ell_2)= \\ w_\nu^2-w_r^2 -w^2, \label{A-4} \end{multline} where $w_r=\partial_r w$ and $w_\nu=\partial_\nu w$. All these formulae hold not only for the original angle, but also for a smaller angle. Let us then consider as $X$ the upper half of the symmetric angle, $\partial X= Y_2\cup Y_0$; on $Y_0$ the integrand is \begin{equation} \mathcal{I}\coloneqq (w_{x_2}^2 -w_{x_1}^2 -w^2)\ell_2 + 2 w_{x_1}w_{x_2} \ell_1 \label{A-5} \end{equation} with $\ell_1=\sin (\alpha/2)$, $\ell_2=-\cos(\alpha/2)$. Consider different cases: \emph{Antisymmetric case:\/} $w|_{Y_0}=0$; then $\mathcal{I}= -w_{x_2}^2\cos(\alpha/2)$ and \begin{equation} \int _{Y_2} \bigl(w_\nu^2-w_r^2 -w^2\bigr)\,dr - \cos(\alpha/2) \int_{Y_0} w_{x_2}^2 \,dx_1 =0. \label{A-6} \end{equation} \emph{Symmetric case:\/} $w_{x_2}|_{Y_0}=0$; then $\mathcal{I}= (w_{x_1}^2+w^2)\cos(\alpha/2)$ and \begin{equation} \int _{Y_2} \bigl(w_\nu^2-w_r^2-w^2\bigr)\,dr + \cos(\alpha/2)\int_{Y_0} (w_{x_1}^2+w^2)\,dx_1 =0. \label{A-7} \end{equation} \begin{proposition}\label{prop-A-2} Let $w$ satisfy \textup{(\ref{A-1})}. Let \underline{either} $\alpha \in (0,\pi]$ and $w$ be antisymmetric, \underline{or} $\alpha \in [\pi,2\pi)$ and $w$ be symmetric. Then \begin{equation} \|\nabla w\|^2 \ge \|w\|^2. \label{A-8} \end{equation} \end{proposition} \begin{proof} In both cases $\int_{Y_2} (|\nabla w|^2-w^2)\,dr \ge \int_{Y_2} (w_\nu^2\, -w^2)\,dr \ge 0$. Applying this inequality to the angle shifted by $t$ along $x_1$, and integrating over $t\in (0,\infty)$, we obtain the double integral (divided by $\sin (\alpha/2)$). Moreover, one can easily see that this inequality is strict unless $w=0$.
\end{proof} Similarly, if instead of multiplying by $(\nu_1 w_{x_1}+\nu_2 w_{x_2})$ we multiply by $(x_2 w_{x_1} - x_1w_{x_2})$, then the extra terms in the double integral will be $\pm w_{x_1} w_{x_2}$ and they cancel one another. However, on $Y$ we get $x_2=\nu_1 r$, $x_1=-\nu_2 r$, and therefore the contribution of $Y_2$ will be as above with an extra factor $r$: \begin{equation} \int _{Y_2} \bigl(w_\nu^2-w_r^2-w^2\bigr)\,rdr . \label{A-9} \end{equation} On $Y_0$ we get the extra factor $x_1=r$ instead of $\nu_2=-\cos(\alpha/2)$, and we arrive at \emph{Antisymmetric case:\/} $w|_{Y_0}=0$; then $\mathcal{I}= w_{x_2}^2x_1$ and \begin{equation} \int _{Y_2} \bigl(w_\nu^2-w_r^2-w^2\bigr)\,rdr + \int_{Y_0} w_{x_2}^2 \,x_1dx_1 =0. \label{A-10} \end{equation} \emph{Symmetric case}: $w_{x_2}|_{Y_0}=0$; then $\mathcal{I}= (-w_{x_1}^2-w^2)x_1$ and \begin{equation} \int _{Y_2} \bigl(w_\nu^2-w_r^2-w^2\bigr)\,rdr - \int_{Y_0} (w_{x_1}^2+w^2)\,x_1dx_1 =0. \label{A-11} \end{equation} Let us explore the dependence of $\Lambda=\Lambda(\alpha)$ on $\alpha$. Observe first that \begin{gather} \iint \bigl(\nabla w\cdot \nabla w'+ww'\bigr)\,dx_1dx_2 = \iint Lw\cdot w'\,dx_1dx_2 - \int_{\partial X} \partial_\nu w\cdot w'\,dr, \label{A-12}\\ \intertext{where $(r,\theta)$ are polar coordinates and therefore $dr$ is the Euclidean measure on $Y$. It implies} (\Lambda v,v')_Y = \iint \bigl( \nabla w\cdot \nabla w' + ww'\bigr)\,dx_1dx_2, \label{A-13} \end{gather} for $w=Jv$, $w'=Jv'$. Therefore \begin{claim}\label{A-14} $\Lambda $ is a symmetric and nonnegative operator in $\sL^2(Y)$. \end{claim} Consider $X=X(\alpha)$, $Y=Y(\alpha)$, $\Lambda=\Lambda(\alpha)$ and keep $w$ independent of $\alpha$. Let us replace $\alpha$ by $\alpha+\updelta\alpha$ etc.
Then for a symmetric $X$ we have $\updelta v = -\frac{1}{2}r(\partial_\nu w)\, \updelta\alpha= \frac{1}{2}r(\Lambda v)\, \updelta\alpha$ and it follows from (\ref{A-13}) that \begin{gather} ((\updelta \Lambda)v,v)_Y + 2(\Lambda v, \updelta v)_Y = \frac{1}{2}\int_Y \bigl( |\nabla w|^2 + |w|^2\bigr)\,rdr\times \updelta\alpha \notag \shortintertext{and therefore} ((\updelta \Lambda)v,v)_Y = -\frac{1}{2}\int_Y \bigl( w_\nu^2 -w_r^2 - |w|^2\bigr)\,rdr \times \updelta\alpha. \label{A-15} \end{gather} Combining with (\ref{A-10}) and (\ref{A-11}) we arrive at \begin{proposition}\label{prop-A-3} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{prop-A-3-i} On symmetric functions $\Lambda(\alpha)$ is a monotone increasing function of $\alpha$. \item\label{prop-A-3-ii} On antisymmetric functions $\Lambda(\alpha)$ is a monotone decreasing function of $\alpha$. \end{enumerate} \end{proposition} Let us identify $Y$ with $\mathbb{R}\ni s$, $s=\mp r$ on $Y_{1,2}$ respectively. \begin{proposition}\label{prop-A-4} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{prop-A-4-i} On symmetric functions $\Lambda(\pi)=(D_s^2+I)^{\frac{1}{2}}$. \item\label{prop-A-4-ii} On antisymmetric functions $\Lambda(2\pi)\ge (D_s^2+I)^{\frac{1}{2}}$. \end{enumerate} \end{proposition} \begin{proof} Statement \ref{prop-A-4-i} is obvious. Statement \ref{prop-A-4-ii} follows from the fact that on an antisymmetric function $v$, $\Lambda(2\pi)v $ coincides with $\Lambda (\pi)v^0$ restricted to $\{x_1<0\}$, where $v^0$ is $v$ extended by $0$ to $\{x_1>0\}$. \end{proof} Therefore, combining Propositions~\ref{prop-A-3} and~\ref{prop-A-4}, we conclude that \begin{corollary}\label{cor-A-5} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{cor-A-5-i} On symmetric functions $\Lambda(\alpha )\ge (D_s^2+I)^{\frac{1}{2}}$ for $\alpha \in [\pi,2\pi]$. \item\label{prop-A-5-ii} On antisymmetric functions $\Lambda(\alpha)\ge (D_s^2+I)^{\frac{1}{2}}$ for $\alpha \in (0,2\pi]$.
\end{enumerate} \end{corollary} \begin{remark}\label{rem-A-6} One can easily prove that these inequalities are strict for $\alpha \in (\pi,2\pi]$ and $\alpha \in (0,2\pi]$ respectively. \end{remark} Now we want to finish the general arguments and to prove inequality (\ref{A-8}) for antisymmetric $w$ and $\alpha \in (\pi,2\pi]$. It will be more convenient to use polar coordinates $(r,\theta)$ and the notations $\mathcal{Y}_\beta=\{(r,\theta)\colon \theta=\beta\}$, $\mathcal{X}_{\beta_1,\beta_2}=\{(r,\theta)\colon \beta_1\le \theta\le \beta_2\}$. Here and below $\beta_*\in [-\alpha/2,\alpha/2]$. Recall that \begin{equation} L=-\partial_r^2-r^{-1}\partial_r -r^{-2}\partial_\theta^2 +1. \label{A-16} \end{equation} \begin{proposition}\label{prop-A-7} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{prop-A-7-i} Let $w$ satisfy equation \textup{(\ref{A-1})} in $X$. Then \begin{equation} \mathcal{I}(\beta )\coloneqq \int _{\mathcal{Y}_{\beta}} \Bigl[ r^{-2} w_\theta ^2-w_r^2-w^2\Bigr]\,rdr \label{A-17} \end{equation} does not depend on $\beta$. \item\label{prop-A-7-ii} Consequently, \begin{equation} \mathcal{J}(\beta_1,\beta_2)\coloneqq \iint _{\mathcal{X}_{\beta_1,\beta_2}} \Bigl[r^{-2} w_\theta ^2-w_r^2-w^2\Bigr]\,rdr d\theta \label{A-18} \end{equation} depends only on $\beta_2-\beta_1$ and therefore is proportional to it. \end{enumerate} \end{proposition} \begin{proof} One proves \ref{prop-A-7-i} by analyzing $-\iint _{\mathcal{X}_{\beta_1,\beta_2}} Lw\cdot\partial_\theta w\, dxdy$ (which was actually done before, since $w_\theta = -x_2w_{x_1}+ x_1w_{x_2}$). To prove \ref{prop-A-7-ii} observe that $\partial_\beta \mathcal{J}(\beta_1,\beta)=\mathcal{I}(\beta)$.
\end{proof} \begin{proposition}\label{prop-A-8} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{prop-A-8-i} The function \begin{equation} \mathcal{J}(\beta_1,\beta_2)\coloneqq \iint _{\mathcal{X}_{\beta_1,\beta_2}} w^2 r^{-1}\,drd\theta \label{A-19} \end{equation} with fixed $\beta_{1,2}=\beta\mp \sigma$ is convex with respect to $\beta$ (if $\sigma>0$). \item\label{prop-A-8-ii} Further, if $w$ is either symmetric or antisymmetric, then it attains its minimum at $\beta=0$ (i.e.\ when $\mathcal{X}_{\beta_1,\beta_2}$ is symmetric with respect to $Y_0$). \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{pf-A-8-i} Consider \begin{multline} 0= \iint_{\mathcal{X}_{\beta_1,\beta_2}} Lw\cdot w \, r drd\theta = \\ \iint_{\mathcal{X}_{\beta_1,\beta_2}} (w_r^2 +r^{-2}w_\theta^2 + w^2)\, r drd\theta +\mathcal{I}'(\beta_1)-\mathcal{I}'(\beta_2) \label{A-20} \end{multline} with \begin{equation} \mathcal{I}'(\beta)=\int_{\mathcal{Y}_{\beta}} ww_\theta \,r^{-1} dr=\frac{1}{2}\partial_\beta \mathcal{I}(\beta),\qquad \mathcal{I}(\beta)\coloneqq \int_{\mathcal{Y}_{\beta}} w^2 \,r^{-1} dr. \label{A-21} \end{equation} Observe that the first term is positive. Then $\mathcal{I}'(\beta_2) -\mathcal{I}'(\beta_1)>0 $; on the other hand, it equals half of the second derivative of $\mathcal{J}(\beta_1,\beta_2)$ with respect to $\beta$. \item\label{pf-A-8-ii} Moreover, for both symmetric and antisymmetric $w$ we have $\mathcal{I}(-\beta')=\mathcal{I}(\beta')$, so the difference $\mathcal{I}(\beta_2) -\mathcal{I}(\beta_1)$, which equals $\partial_\beta \mathcal{J}(\beta_1,\beta_2)$, vanishes for $\beta=0$; together with convexity this yields the minimum at $\beta=0$. \end{enumerate} \end{proof} \begin{corollary}\label{cor-A-9} Since $w_\theta $ satisfies the same equation and is antisymmetric (symmetric) respectively, the same conclusions \ref{prop-A-8-i}, \ref{prop-A-8-ii} hold for $\mathcal{J}\coloneqq \iint _{\mathcal{X}_{\beta_1,\beta_2}} w_\theta^2\, r^{-1}drd\theta$.
Then, by virtue of Proposition~\ref{prop-A-7}\ref{prop-A-7-ii}, the same conclusions \ref{prop-A-8-i}, \ref{prop-A-8-ii} hold for $\mathcal{J}\coloneqq \iint _{\mathcal{X}_{\beta_1,\beta_2}} (w_r^2 + w^2)\,rdrd\theta$. \end{corollary} Next, observe that $ L r\partial_r w = 2\Delta w = -2 w$, and if we use the same arguments as in the proof of Proposition~\ref{prop-A-8} for $r\partial_r w $, then instead of the first term in (\ref{A-20}) we get \begin{gather} \iint_{\mathcal{X}_{\beta_1,\beta_2}} \bigl( (rw_r)_r^2 +w_{r\theta}^2 + (rw_r)^2 -w^2\bigr)\, r drd\theta, \label{A-22}\\ \intertext{where the additional last term appears as} \iint_{\mathcal{X}_{\beta_1,\beta_2}} 2w \partial_r w\cdot r^2drd\theta= -\iint_{\mathcal{X}_{\beta_1,\beta_2}} w^2\,rdrd\theta. \notag \end{gather} Consider the last two terms and skip the integration in $d\theta$; plugging $w=r^{-3/2}u$ with $u(0)=0$, we arrive at \begin{multline*} \int \bigl((u_r - \frac{3}{2}r^{-1}u)^2 - r^{-2}u^2\bigr)\,dr= \int \bigl(u_r^2 -3r^{-1}u_ru +\frac{5}{4} r^{-2}u^2\bigr)\,dr = \int \bigl(u_r^2 - \frac{1}{4}r^{-2}u^2\bigr)\,dr, \end{multline*} which is again nonnegative, by the one-dimensional Hardy inequality. Thus we arrive at \begin{corollary}\label{cor-A-10} The same conclusions \ref{prop-A-8-i} and \ref{prop-A-8-ii} of Proposition~\ref{prop-A-8} hold for $\mathcal{J}(\beta_1,\beta_2)$ with $w$ replaced by $rw_r$, i.e.\ $\mathcal{J}(\beta_1,\beta_2)\coloneqq \iint_{\mathcal{X}_{\beta_1,\beta_2}}w_r^2\,rdrd\theta$. Then, by virtue of Proposition~\ref{prop-A-7}\ref{prop-A-7-ii}, the same conclusions \ref{prop-A-8-i}, \ref{prop-A-8-ii} hold for \begin{multline} 2\iint_{\mathcal{X}_{\beta_1,\beta_2}} |\partial_rw|^2\,rdrd\theta + \\ \iint_{\mathcal{X}_{\beta_1,\beta_2}} (r^{-2}|\partial_\theta w|^2 - |\partial_rw|^2 -|w|^2)\,rdrd\theta= \iint_{\mathcal{X}_{\beta_1,\beta_2}} (|\nabla w|^2-|w|^2)\,dxdy. \end{multline} \end{corollary} Now we can prove \begin{proposition}\label{cor-A-11} Let $\alpha \in (\pi,2\pi]$.
Then for both symmetric and antisymmetric $w$ \textup{(\ref{A-8})} holds. \end{proposition} \begin{proof} Indeed, assume that it is not the case: $\iint_X (|\nabla w|^2-w^2)\,dxdy <0$ for some $w$. Then, due to Corollary~\ref{cor-A-10}, the same is true for $X$ replaced by $\mathcal{X}_{\beta_1,\beta_2}$ with $\beta_{1,2}=\mp (2\pi -\alpha)/2$ and the same $w$. Then it is true for the sum of these two expressions (with $X=\mathcal{X}_{-\alpha/2,\alpha/2}$ and $\mathcal{X}_{\beta_1,\beta_2}$), which is the sum of the same expressions for the half-planes $\mathcal{X}_{\alpha/2-\pi,\alpha/2}$ and $\mathcal{X}_{-\alpha/2,\pi-\alpha/2}$. However, for half-planes (\ref{A-8}) holds, a contradiction. \end{proof} Let $P_\tau=\uptheta(\tau-\Lambda)$. \begin{proposition}\label{prop-A-12} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{prop-A-12-i} Let $\alpha \in [\pi,2\pi]$. Then for any $\tau>1$ for $w=Jv$, $v\in \operatorname{Ran} (I-P_\tau)$, \begin{equation} \|\nabla w\|^2\ge (1+\delta)\|w\|^2 \label{A-24} \end{equation} with $\delta=\delta(\tau)>0$. \item\label{prop-A-12-ii} Let $\alpha \in (0,\pi]$. Then for any $\tau>1$ for antisymmetric $w=Jv$, $v\in \operatorname{Ran} (I-P_\tau)$, \textup{(\ref{A-24})} holds. \end{enumerate} \end{proposition} \begin{proof} Observe first that \begin{claim}\label{A-25} $\{v\in \operatorname{Ran}(I-P_\tau) \colon \|v\|_Y=1,\ \|\nabla Jv\|^2\le (1+\delta')\|Jv\|^2 \}$ is a compact set in $\sL^2(Y)$ for $\delta'=\delta'(\tau)>0$. \end{claim} Indeed, in the zone $\{x\colon |x|\ge R\}$ we can apply semiclassical arguments with $h\coloneqq R^{-1}$ after the scaling $x\mapsto R^{-1}x$. Since in both cases (\ref{A-8}) holds with a strict inequality for $w\ne 0$, we arrive at both Statements~\ref{prop-A-12-i} and \ref{prop-A-12-ii}. \end{proof} \section{Spectrum} \label{sect-A-2} The above results are sufficient for our needs for $\alpha \in (\pi,2\pi]$.
However, we would like to explore the case of $\alpha \in (0,\pi)$, and even $\alpha \in (\pi,2\pi]$, in more depth. \begin{corollary}\label{cor-A-13} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{cor-A-13-i} Let $\alpha \in [\pi,2\pi]$. Then $\operatorname{Spec}(\Lambda)=[1,\infty)$ and it is continuous. \item\label{cor-A-13-ii} Let $\alpha \in (0,\pi]$. Then $\operatorname{Spec}(\Lambda_\mathsf{asym})=[1,\infty)$ and it is continuous, where $\Lambda_\mathsf{sym}$ and $\Lambda_\mathsf{asym}$ denote the restrictions of $\Lambda$ to the spaces of symmetric and antisymmetric functions, respectively. \end{enumerate} \end{corollary} \begin{proof} We already know that the essential spectrum of $\Lambda $ is $[1,\infty)$. We also know that in the case~\ref{cor-A-13-i} $\Lambda > I$ and in the case~\ref{cor-A-13-ii} $\Lambda_\mathsf{asym} > I$. Therefore $1$ is not an eigenvalue. Continuity of the spectrum follows from $(i[\Lambda,Q]v,v) \ge \delta \|v\|^2$ for $v\in \operatorname{Ran} (I-P_\tau)$, with $v$ antisymmetric in the case \ref{cor-A-13-ii}, which is due to Statements~\ref{prop-A-12-i} and \ref{prop-A-12-ii} of Proposition~\ref{prop-A-12}. \end{proof} \begin{remark}\label{rem-A-14} Paper~\cite{KP} deals mainly with the eigenvalues of $\Delta_2$ in the planar sector under the Robin boundary condition $(\partial_\nu +\gamma)w|_Y=0$, $\gamma>0$\,\footnote{In that paper $\alpha $ is a half-angle, and $\nu$ is a unit external normal. Below we refer to this paper using our notations.}. Then the eigenvalues $\tau$ of $\Lambda$ and the eigenvalues $\mu $ of that problem are related through the Birman--Schwinger principle and scaling: $\mu_k = -\tau_k^{-2}\gamma^2$. Some of the results: \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-A-14-i} Theorem~3.1 states that $(-\infty, -\gamma^2)$ contains only the discrete spectrum of this operator, and the latter is finite.
\item\label{rem-A-14-ii} Theorem~2.3 states that for $\alpha \in (0,\pi)$ the bottom eigenvalue $-\gamma^2/\sin^2(\alpha/2)$ is simple and the corresponding eigenfunction is $\exp (-\gamma x_1/\sin(\alpha/2))$. \item\label{rem-A-14-iii} Theorem~3.6 states that for $\alpha \in [\frac{\pi}{3},\pi)$ there are no other eigenvalues in $(0,1)$, while Theorem 4.1 implies that the number of such eigenvalues is $\asymp \alpha^{-2}$ as $\alpha\to 0$\,\footnote{\label{foot-yz} In fact, the complete asymptotic expansion of the eigenvalues is derived in Theorem~4.16 of \cite{KP}.}. \end{enumerate} \end{remark} Then we conclude that \begin{corollary}\label{cor-A-15} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{cor-A-15-i} The interval $(0,1)$ contains only the discrete spectrum of $\Lambda_\mathsf{sym}$, which is finite. \item\label{cor-A-15-ii} For $\alpha \in (0, \pi)$ the bottom eigenvalue is $\sin(\alpha/2)$ and the corresponding eigenfunction is $\exp (-x_1)$. \end{enumerate} \end{corollary} The discrete spectrum would not prevent us from extending our main results to $\alpha\in (0,\pi)$. Even a (possible) eigenvalue $1$ on the edge of the essential spectrum would not be an obstacle. However, eigenvalues embedded into $(1,\infty)$ are an obstacle (see Proposition~\ref{prop-A-16}). \begin{proposition}\label{prop-A-16} If $w_p=Jv_p$ where $v_p$ are eigenfunctions of $\Lambda$, corresponding to eigenvalues $\tau_p$, and $\tau_j=\tau_k$, then \begin{equation} (\nabla w_j,\nabla w_k) -(w_j,w_k)=0. \label{A-26} \end{equation} In particular, \begin{equation} \|\nabla w_j\|^2 -\|w_j\|^2=0. \label{A-27} \end{equation} \end{proposition} \begin{proof} It follows from equality (\ref{eqn-3-2}) for $Q=x_1D_1+x_2D_2 +i/2$ and $([Q,\Lambda]v_j,v_k)= (\Lambda v_j,Qv_k)-(Qv_j,\Lambda v_k)=0$ for eigenfunctions $v_j$, $v_k$ provided $\tau_j=\tau_k$.
\end{proof} To extend the main sharp spectral asymptotics to operators in domains with inner edges, one first needs to prove the following \begin{conjecture}\label{conj-A-17} For any $\tau>1$ and for any $w=Jv$ with symmetric $v\in \operatorname{Ran} (I-P_\tau)$ estimate \textup{(\ref{A-24})} holds. \end{conjecture} \begin{remark}\label{rem-A-18} \begin{enumerate}[label=(\roman*), wide, labelindent=0pt] \item\label{rem-A-18-i} Recall that this is true for $\alpha \in [\pi,2\pi]$ and also for $\alpha\in (0,\pi)$ and antisymmetric $v$. So, only the case of $\alpha\in (0,\pi)$ and symmetric $v$ needs to be covered. \item\label{rem-A-18-ii} So far it is unknown whether, in the case of $\alpha\in (0,\pi)$, $\Lambda_\mathsf{sym}$ has eigenvalues embedded into the continuous spectrum $(1,\infty)$ or on its edge. \item\label{rem-A-18-iii} It is also unknown, in any case, whether the continuous spectrum is absolutely continuous (i.e.\ whether the singular continuous spectrum is empty). \end{enumerate} \end{remark} \end{appendices} \end{document}
\begin{document} \theoremstyle{definition} \makeatletter \xpatchcmd{\@thm}{.}{}{}{} \makeatother \makeatletter \newcommand\blfootnote[1]{ \begingroup \renewcommand\thefootnote{}\footnote{#1} \addtocounter{footnote}{-1} \endgroup} \makeatother \newtheorem{theorem}{Theorem} \newtheorem{definition}[theorem]{Definition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \title{Measure-theoretic bounds on the spectral radius of graphs from walks} \blfootnote{\textit{Email addresses}: [email protected], [email protected], [email protected], \mbox{[email protected]}} \begin{abstract} Let $\mathcal{G}$ be an undirected graph with adjacency matrix $A$ and spectral radius $\rho$. Let $w_k, \phi_k$ and $\phi_k^{(i)}$ be, respectively, the number of walks of length $k$, of closed walks of length $k$, and of closed walks starting and ending at vertex $i$ after $k$ steps. In this paper, we propose a measure-theoretic framework which allows us to relate walks in a graph with its spectral properties. In particular, we show that $w_k, \phi_k$ and $\phi_k^{(i)}$ can be interpreted as the moments of three different measures, all of them supported on the spectrum of $A$. Building on this interpretation, we leverage results from the classical moment problem to formulate a hierarchy of new lower and upper bounds on $\rho$, as well as provide alternative proofs of several well-known bounds in the literature. \\ \textbf{Keywords:} \textit{walks on graphs, spectral radius, moment problem} \end{abstract} \section{Introduction} Given an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with vertex set $\mathcal{V} = \{1, \dots, n \}$ and edge set $\mathcal{E}\subseteq \mathcal{V} \times \mathcal{V}$, we define a \textit{walk of length $k$}, or \textit{$k$-walk}, as a sequence of vertices $(i_0, i_1, \dots, i_{k})$ such that $(i_s, i_{s+1}) \in \mathcal{E}$ for $s \in \{0, \dots , k-1\}$.
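As a small, self-contained illustration (our own sketch, not part of the original text), the quantities $w_k$, $\phi_k$ and $\phi_k^{(i)}$ introduced in the abstract can be computed directly from powers of the adjacency matrix, since $(A^k)_{ij}$ counts the $k$-walks from $i$ to $j$; the path graph on three vertices used below is a hypothetical example.

```python
# Illustrative sketch (hypothetical example, not from the paper): compute the
# walk counts w_k, phi_k and phi_k^(i) of a small graph from powers of its
# adjacency matrix A, using the fact that (A^k)_{ij} counts k-walks from i to j.

def mat_mult(X, Y):
    n = len(X)
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # identity matrix
    for _ in range(k):
        P = mat_mult(P, A)
    return P

def walk_counts(A, k):
    """Return (w_k, phi_k, [phi_k^(i) for each vertex i])."""
    Ak = mat_pow(A, k)
    n = len(A)
    w_k = sum(Ak[i][j] for i in range(n) for j in range(n))  # all k-walks
    phi_k_i = [Ak[i][i] for i in range(n)]                   # closed, per vertex
    return w_k, sum(phi_k_i), phi_k_i

# Path graph 0 -- 1 -- 2 (adjacency matrix)
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

w2, phi2, phi2_i = walk_counts(A, 2)
print(w2, phi2, phi2_i)  # 6 4 [1, 2, 1]
```

For this graph, $\phi_2 = 4$ equals twice the number of edges and $\phi_2^{(i)}$ equals the degree of vertex $i$, as expected.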
A walk of length $k$ is called \textit{closed} if $i_0 = i_{k}$; furthermore, we will refer to such walks as \textit{closed walks from vertex} $i_0$ when we need to distinguish them from the set of all closed $k$-walks. We denote the number of walks, closed walks, and closed walks from vertex $i$ of length $k$ by $w_k$, $\phi_k$, and $\phi_k^{(i)}$, respectively. Denote by $A$ the adjacency matrix of graph $\mathcal{G}$ and its eigenvalues by $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$. The set containing these eigenvalues will be referred to as the \textit{spectrum} of $\mathcal{G}$. From the Perron-Frobenius theorem~\cite{horn2012matrix}, we have that the spectral radius of $A$, defined by $\rho:= \max\limits_{\scriptscriptstyle 1\le i\le n} \vert \lambda_i\vert$, is equal to $\lambda_1$. \\ In the literature, we find several lower bounds on $\rho$ formulated in terms of walks in the graph. Bounds in terms of closed walks are rarer (see, e.g., \cite{preciado2013moment}). Many of these bounds come from dexterous applications of the Rayleigh principle \cite{horn2012matrix} or the Cauchy-Schwarz inequality. For example, making use of these tools, Collatz and Sinogowitz~\cite{collatz1957spektren}, Hofmeister~\cite{hofmeister1988spectral}, Yu et al.~\cite{yu2004spectral}, and Hong and Zhang~\cite{hong2005sharp} derived, respectively, the following lower bounds: \begin{equation} \rho \ge \dfrac{w_1}{w_0}, \qquad \rho \ge \sqrt{\dfrac{w_2}{w_0}}, \qquad \rho \ge \sqrt{\dfrac{w_4}{w_2}}, \qquad \rho \ge \sqrt{\dfrac{w_6}{w_4}}.
\label{eq:first-bounds} \end{equation} Nikiforov\footnote{Nikiforov's notation in \cite{nikiforov2006walks} indexes $w_k$ in terms of the number of nodes visited by the walks instead of the number of steps, as used in our manuscript.} \cite{nikiforov2006walks} generalized these results by expressing the number of walks $w_k$ in terms of the eigenvalues, to obtain bounds of the form \begin{align} \rho^r \ge \dfrac{w_{2s+r}}{w_{2s}}, \label{niki-bound} \end{align} for $s,r \in \mathbb{N}_0$. Cioab\u{a} and Gregory \cite{cioabua2007large} provide the following improvement to the first bound in \eqref{eq:first-bounds}: \begin{align} \rho \ge \dfrac{w_1}{w_0} + \dfrac{1}{w_0(\Delta + 2)}, \end{align} \noindent where $\Delta$ is the maximum of the vertex degrees in $\mathcal{G}$. Nikiforov \cite{nikiforov2007bounds} showed that \begin{align} \rho > \frac{w_1}{w_0} + \frac{1}{2w_0 + w_1}. \end{align} \noindent Favaron et al.~\cite{favaron1993some} used the fact that there is a $K_{1,{\scriptscriptstyle \Delta}}$ subgraph in $\mathcal{G}$ to obtain: \begin{align} \rho \ge \sqrt{\Delta}. \label{favaron} \end{align} There are a number of upper bounds on $\rho$ in terms of graph invariants like the domination number \cite{stevanovic2008spectral}, chromatic number \cite{cvetkovic1972chromatic, edwards1983lower}, and clique number \cite{edwards1983lower}. Nikiforov \cite{nikiforov2006walks} provides a whole hierarchy of bounds in terms of the clique number $\omega(\mathcal{G})$, which for $k \in \mathbb{N}_0$ are given by \begin{align} \rho^{k+1} \le \left(1 - \dfrac{1}{\omega(\mathcal{G})}\right) w_{k}. \label{nikiforov-bound} \end{align} \noindent We also find in the literature several bounds in terms of the \textit{fundamental weight} of $\mathcal{G}$, defined as $\sum_{j=1}^n u_{1j}$, where $u_{1j}$ is the $j$-th entry of the \textit{leading eigenvector}\footnote{The leading eigenvector of $A$ is the eigenvector associated with the largest eigenvalue $\lambda_1$.
We assume eigenvectors to be normalized to be of unit Euclidean norm.} of $A$, denoted by $\mathbf{u}_1$. For example, Wilf \cite{wilf1986spectral} proved the following upper bound: \begin{align} \rho \leq \frac{\omega(\mathcal{G})-1}{\omega(\mathcal{G})}\left(\sum_{j=1}^{n} u_{1j}\right)^{2}. \label{wilf} \end{align} Cioab\u{a} and Gregory \cite{cioaba2007principal} showed that, for $k \in \mathbb{N}_0$, \begin{align} \rho^k \le \sqrt{w_{2k}}\max_{1\le j \le n} u_{1j}. \label{ineq:vanmieghem} \end{align} Moreover, Van Mieghem \cite{van2014graph} proved the bound \begin{align} \rho^k \le \dfrac{w_k}{\sum_{i=1}^n u_{1i}}\max_{1\le j \le n}u_{1j}. \end{align} In this paper, we provide upper and lower bounds on $\rho$ by interpreting the sequences $\{w_k\}_{k=0}^\infty, \{\phi_k\}_{k=0}^\infty$ and $\{\phi_k^{(i)}\}_{k=0}^\infty$ as moments of three measures supported on the spectrum of $\mathcal{G}$. Building on this interpretation, we will use classical results from probability theory relating the moments of a measure to its support. Following this approach, we will derive a hierarchy of new bounds on the spectral radius, as well as provide alternative proofs of several existing bounds in the literature. The rest of the paper is organized as follows. Section~\ref{sec:background} outlines the tools we will use to analyze walks on graphs using measures and moment sequences. Section~\ref{sec:lower} presents multiple lower bounds on the spectral radius derived from the moment problem, while Section~\ref{sec:upper} introduces several upper bounds. \section{Background and Preliminaries} \label{sec:background} Throughout this paper, we use standard graph theory notation, as in \cite{west1996introduction}. We will use upper-case letters for matrices, calligraphic upper-case letters for sets, and bold lower-case letters for vectors. For a vector $\mathbf{v}$ or a matrix $M$, we denote by $\mathbf{v}^\intercal$ and $M^\intercal$ their respective transposes.
The $(i,j)$-th entry of a matrix $M$ is denoted by $M_{ij}$. For an $n\times n$ matrix $M$ and a set $\mathcal{J} \subseteq \{1, \dots, n\}$, the matrix $M_{\mathcal{J}}$ is defined to be the submatrix of $M$ where columns and rows with indices not in $\mathcal{J}$ have been removed; $M_{\mathcal{J}}$ is also called a \textit{principal submatrix} of $M$ and, if $\mathcal{J} = \{1, \dots, k\}$, $M_{\mathcal{J}}$ is called a \textit{leading principal submatrix}. Finally, we say that a symmetric matrix $M \in \mathbb{R}^{n \times n}$ is positive semidefinite (resp. positive definite) if for every non-zero vector $\mathbf{v} \in \mathbb{R}^n$ we have $\mathbf{v}^\intercal M \mathbf{v} \geq 0$ (resp. $\mathbf{v}^\intercal M \mathbf{v} > 0 )$, and we denote this as $M \succeq 0$ (resp. $M \succ 0$). \subsection{Spectral measures and walks} We can relate walks and closed walks on a graph $\mathcal{G}$ to its spectrum using measures, as we describe in detail below. We begin by stating the following well-known result from algebraic graph theory \cite{biggs1993algebraic}: \begin{lemma} \label{lema:powers} For any integer $k \ge 0$, the $(i,j)$-th entry of the matrix $A^k$ is equal to the number of $k$-walks from vertex $i$ to vertex $j$ on $\mathcal{G}$. \end{lemma} \noindent Since $\mathcal{G}$ is undirected, $A$ is symmetric and admits an orthonormal diagonalization. In particular, let $\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n\}$ be a complete set of orthonormal eigenvectors of $A$. Hence, we have that $ A^k = U \operatorname{diag}\left(\lambda_{1}^k, \ldots, \lambda_{n}^k\right) U^{\intercal},$ \noindent for every $k \ge 0$, where $U := [\mathbf{u}_1 | \mathbf{u}_2 | \ldots | \mathbf{u}_n]$. We denote the $i$-th entry of the $l$-th eigenvector by $u_{il}$. From this factorization, we can obtain identities which will be used in the following sections. \begin{lemma} \label{lema:moms} Define $c_l^{(i)} := u_{il}^2$ and $c_l := \left(\sum_{i=1}^n u_{il}\right)^2$.
Then, for every $k \ge 0$, we have \begin{align*} \phi_k = \sum_{l=1}^n \lambda_l^k, \qquad \phi_k^{(i)} = \sum_{l=1}^n c_l^{(i)} \lambda_l^k, \qquad w_k = \sum_{l=1}^n c_l \lambda_l^k. \end{align*} \end{lemma} \begin{proof} Using Lemma~\ref{lema:powers} we have that $\phi_k = \sum_{i=1}^n (A^k)_{ii}$, $\phi_k^{(i)} = (A^k)_{ii}$, and $w_k = \sum_{i, j} (A^k)_{ij}$. Furthermore, we have that $\left(A^{k}\right)_{i j} =\sum_{l=1}^{n} u_{i l} \lambda_{l}^{k} u_{j l},$ directly from the diagonalization of $A$. Combining these results with the facts that $\sum_{i=1}^n c_l^{(i)} = 1$ and $\sum_{i,j} u_{il} u_{jl} = c_l$, the result follows. \end{proof} Next, we introduce three atomic measures supported on the spectrum of $A$. \begin{definition}[Spectral measures] \label{def:spectral-measures} Let $\delta(\cdot)$ be the Dirac delta measure. For a simple graph $\mathcal{G}$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$, define the \textit{closed-walks measure} as \begin{align*} \mu_{\mathcal{G}}(x) \coloneqq \sum_{l=1}^{n} \delta\left(x-\lambda_{l}\right). \end{align*} We also define the \textit{closed-walks measure for vertex $i$} as \begin{align*} \mu_{\mathcal{G}}^{(i)}(x) \coloneqq \sum_{l=1}^{n} c_l^{(i)} \delta\left(x-\lambda_{l}\right), \end{align*} \noindent and the \textit{walks measure} as \begin{align*} \nu_{\mathcal{G}}(x) \coloneqq \sum_{l=1}^{n} c_l \delta\left(x-\lambda_{l}\right). \end{align*} \end{definition} \begin{lemma} \label{lemma:moms-walks} For a real measure $\zeta (x)$, define its \textit{$k$-th moment} as $m_k\left(\zeta\right) = \int_{\mathbb{R}} x^k \mathrm{d} \zeta(x)$. Then, the measures in Definition~\ref{def:spectral-measures} satisfy \begin{align*} m_k\left(\mu_{\mathcal{G}}\right) = \phi_k, \qquad m_k (\mu_{\mathcal{G}}^{(i)}) = \phi_k^{(i)}, \qquad m_k\left(\nu_{\mathcal{G}}\right) = w_k.
\end{align*} \end{lemma} \begin{proof} For the case of $\mu_{\mathcal{G}}$, we evaluate its $k$-th moment as follows: \begin{align*} m_k\left(\mu_{\mathcal{G}}\right) = \int_{\mathbb{R}} x^k \mathrm{d} \mu_{\mathcal{G}}(x) = \int_{\mathbb{R}}x^k \sum_{l=1}^n \delta(x - \lambda_l) \mathrm{d}x =\sum_{l=1}^n \lambda_l^k = \phi_k. \end{align*} The other two cases have analogous proofs. \end{proof} \subsection{The moment problem} In order to derive bounds on the spectral radius $\rho$, we will make use of results from the \textit{moment problem} \cite{schmudgen2017moment}. This problem is concerned with finding necessary and sufficient conditions for a sequence of real numbers to be the \textit{moment sequence} of a measure supported on a set $\mathcal{K} \subseteq \mathbb{R}$. This is formalized below. \begin{definition}[$\mathcal{K}$-moment sequence] \label{def:moment-seq} The infinite sequence of real numbers $\mathbf{m}=\left(m_{0}, m_{1}, m_{2},\ldots\right)$ is called a \emph{$\mathcal{K}$-moment sequence} if there exists a Borel measure $\zeta$ supported on $\mathcal{K} \subseteq \mathbb{R}$ such that \begin{align*} m_k = \int_{\mathcal{K}} x^k d\zeta(x), \hspace{12pt} \text{ for all } k \in \mathbb{N}_0 . \end{align*} \end{definition} The following result, known as \textit{Hamburger's theorem} \cite{schmudgen2017moment}, will be used in Sections \ref{sec:lower} and \ref{sec:upper}. \begin{theorem}[Hamburger's Theorem~\cite{schmudgen2017moment}]\label{thm:hamburger} Let $\mathbf{m} = \left(m_{0},m_{1},m_{2},\ldots\right)$ be an infinite sequence of real numbers. For $n \in \mathbb{N}_0$, define the \textit{Hankel matrix of moments} as \begin{align} H_{n}(\mathbf{m}) \coloneqq \left[\begin{array}{cccc} {m_{0}} & {m_{1}} & {\dots} & {m_{n}} \\ {m_{1}} & {m_{2}} & {\dots} & {m_{n+1}} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ {m_{n}} & {m_{n+1}} & {\dots} & {m_{2 n}} \end{array}\right] \in \mathbb{R}^{(n+1)\times (n+1)}.
\label{def:hankel-moms} \end{align} The sequence $\mathbf{m}$ is an $\mathbb{R}$-moment sequence, if and only if, for every $n \in \mathbb{N}_0$, $H_n(\mathbf{m}) \succeq 0$. \end{theorem} The characterizations of moment sequences supported on half-lines and on bounded intervals are known as the Stieltjes and Hausdorff moment problems, respectively. A proof of the following theorem, known as \textit{Stieltjes' theorem}, can be found in \cite{schmudgen2017moment} for the case where $u=0$, and it can be easily adapted to any $ u \in \mathbb{R}$ through a simple change of variables. \begin{theorem}[Stieltjes' theorem] \label{thm:stieltjes} Let $\mathbf{m} = \left(m_{0},m_{1},m_{2},\ldots\right)$ be an infinite sequence of real numbers. For $n \in \mathbb{N}_0$, define the \textit{shifted Hankel matrix of moments} $S_n(\mathbf{m})$ as \begin{align} S_{n}(\mathbf{m}) \coloneqq \left[\begin{array}{cccc} {m_{1}} & {m_{2}} & {\dots} & {m_{n+1}} \\ {m_{2}} & {m_{3}} & {\dots} & {m_{n+2}} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ {m_{n+1}} & {m_{n+2}} & {\dots} & {m_{2 n + 1}} \end{array}\right]\in \mathbb{R}^{(n+1)\times (n+1)}. \end{align} The sequence $\mathbf{m}$ is a $(-\infty, u]$-moment sequence, if and only if, for every $n \in \mathbb{N}_0$, \begin{align} H_n(\mathbf{m}) \succeq 0, \quad \text{ and } \quad u H_n(\mathbf{m}) - S_n(\mathbf{m}) \succeq 0. \label{mom-minus} \end{align} Similarly, the sequence $\mathbf{m}$ is a $[-u, \infty)$-moment sequence, if and only if, for every $n \in \mathbb{N}_0$, \begin{align} H_n(\mathbf{m}) \succeq 0, \quad \text{ and } \quad u H_n(\mathbf{m}) + S_n(\mathbf{m}) \succeq 0 \label{mom-plus}. \end{align} \end{theorem} The positive (semi)definiteness of a symmetric matrix can be certified using \textit{Sylvester's criterion}. \begin{theorem}[Sylvester's criterion \cite{meyer2000matrix}] A matrix $M$ is positive semidefinite, if and only if, the determinant of every principal submatrix is non-negative.
Moreover, $M$ is positive definite, if and only if, the determinant of every leading principal submatrix is positive. \end{theorem} \section{Lower Bounds on the Spectral Radius} \label{sec:lower} The supports of the spectral measures in Definition~\ref{def:spectral-measures} are contained in the interval $[-\rho, \rho]$ and their moments can be written in terms of walks in $\mathcal{G}$. Since the moments of a measure impose constraints on its support, the number of walks in $\mathcal{G}$ imposes constraints on $\rho$, as stated below. \begin{lemma} \label{lem:sdp} For a graph $\mathcal{G}$, let $\mathbf{m}$ be the sequence of moments of any measure supported on the spectrum of $\mathcal{G}$. Then, for any finite set $\mathcal{J} \subset \mathbb{N} _0$, \begin{align} \rho H_{\mathcal{J}}(\mathbf{m}) - S_{\mathcal{J}}(\mathbf{m}) &\succeq 0, \label{mom-minus-J}\\ \rho H_{\mathcal{J}}(\mathbf{m}) + S_{\mathcal{J}}(\mathbf{m}) &\succeq 0 \label{mom-plus-J} , \end{align} \noindent where $H_{\mathcal{J}}(\mathbf{m})$ and $S_{\mathcal{J}}(\mathbf{m})$ are the principal submatrices, with rows and columns indexed by $\mathcal{J}$, of the matrices $H_n(\mathbf{m})$ and $S_n(\mathbf{m})$ defined in Theorems~\ref{thm:hamburger} and~\ref{thm:stieltjes}, respectively. \end{lemma} \begin{proof} Since $\mathbf{m}$ corresponds to the sequence of moments of a measure whose support is contained in $[-\rho, \rho]$, it follows that $\rho$ must satisfy the necessary conditions \eqref{mom-minus} and \eqref{mom-plus}. Furthermore, every principal submatrix of a positive semidefinite matrix is itself positive semidefinite; hence, the matrix inequalities \eqref{mom-minus-J} and \eqref{mom-plus-J} follow. \end{proof} As stated in Lemma~\ref{lemma:moms-walks}, the moments of all three measures in Definition \ref{def:spectral-measures} can be written in terms of walks in the graph.
Since the supports of these three measures are equal to the eigenvalue spectrum of $\mathcal{G}$, we can apply Lemma~\ref{lem:sdp} to the moment sequences obtained by counting different types of walks in the graph. Using the above lemma, we can use a truncated sequence of moments to find a lower bound on $\rho$ by solving a \textit{semidefinite program} \cite{boyd2004convex}, as stated below: \begin{theorem} \label{thm:numerical} The solution to the following semidefinite program is a lower bound on the spectral radius of $\mathcal{G}$: \begin{align*} \begin{array}{cl} {\displaystyle \min_{u}} & u \\ {\text { s.t. }} & {uH_{n}(\mathbf{m}) - S_{n}(\mathbf{m}) \succeq 0,} \\ {} & {uH_{n}(\mathbf{m}) + S_{n}(\mathbf{m}) \succeq 0,} \end{array} \end{align*} where $\mathbf{m} = (m_0, m_1, \dots, m_{2n + 1})$ is a truncated sequence of moments of any measure supported on the eigenvalue spectrum of $\mathcal{G}$. \end{theorem} The above theorem can be used to compute numerical bounds on the spectral radius by setting the moments to be one of $\phi_k, \phi_k^{(i)}$, or $w_k$. Moreover, we can use Lemma~\ref{lem:sdp} to obtain closed-form bounds on $\rho$ involving a small number of moments, for which the semidefinite program in Theorem~\ref{thm:numerical} can be solved analytically. The following corollary analyzes the case where $|\mathcal{J}| = 1$. \begin{corollary} \label{basic-niki-bound} For an undirected graph $\mathcal{G}$, let $\mathbf{m}$ be the sequence of moments of a measure supported on the spectrum of $\mathcal{G}$. Then, for every $k, s \in \mathbb{N}_0$, \begin{align} \rho^k \ge \dfrac{m_{2s+k}}{m_{2s}}. \label{eq:basic-niki-bound} \end{align} \end{corollary} \begin{proof} Let $\zeta(x)$ be an atomic measure supported on $\{\lambda_1, \lambda_2, \dots, \lambda_n \}$, defined as $\zeta(x) \coloneqq \sum_{i=1}^n z_i \delta(x - \lambda_i)$, and let $\{m_0, m_1, \dots \}$ be its moment sequence.
For every $k \in \mathbb{N}_0$ and even $q$, we construct the following measure based on $\zeta(x)$: \begin{align} \zeta_{q,k}(x) \coloneqq \sum_{i=1}^n z_i \lambda_i^q \delta(x - \lambda_i^k). \label{eq:new-measure} \end{align} We see that $\zeta_{q,k}(x)$ is supported on $\{\lambda_1^k, \lambda_2^k, \dots, \lambda_n^k \}$, and its moments are given by the sequence $\{m_q, m_{q+k}, m_{q+2k}, \dots \}$. We note that, for even $q$, the support of the measure $\zeta_{q,k}(x)$ is contained in $[-\rho^k, \rho^k]$; thus, setting $\mathcal{J} = \{1\}$, we use Lemma~\ref{lem:sdp} to obtain $\rho^k m_{q} - m_{q+k} \ge 0$, which, with $q = 2s$, implies \eqref{eq:basic-niki-bound}. \end{proof} If we set $\mathbf{m} = \{w_s\}_{s=0}^{\infty}$, this corollary gives an alternative proof of the lower bounds in \eqref{niki-bound}, proven by Nikiforov \cite{nikiforov2006walks}. It also generalizes these results to closed walks by using $\phi_k$ or $\phi_k^{(i)}$ as the sequence of moments. Another interesting result comes from applying Lemma~\ref{lem:sdp} to the case where $|\mathcal{J}| = 2$. Corollary \ref{cor:largest-root} below provides a new lower bound in terms of the largest root of a quadratic polynomial. Its proof relies on the following lemma. \begin{lemma} \label{lemma:dets} Let $\mathbf{m}$ be the sequence of moments of a measure supported on the spectrum of $\mathcal{G}$. For $s, k \ge 0$, define the following matrices: \begin{align} H^{(2s,k)}\coloneqq \left[\begin{array}{ll}{m_{2s}} & {m_{2s+k}} \\ {m_{2s+k}} & {m_{2s+2k}}\end{array}\right], \quad \text{and} \quad S^{(2s,k)}\coloneqq\left[\begin{array}{ll}{m_{2s+k}} & {m_{2s+2k}} \\ {m_{2s+2k}} & {m_{2 s + 3k}}\end{array}\right] . \label{eq:HS-matrices} \end{align} Whenever $\det\left(H^{(2s,k)}\right) \neq 0$, we have \begin{align} \rho^{2k} \ge \dfrac{ \det(S^{(2s,k)})}{\det(H^{(2s,k)})}. \label{H-S} \end{align} \end{lemma} \begin{proof} Let $\zeta_{2s,k}(x)$ be the measure defined in \eqref{eq:new-measure}.
This measure is supported on $[-\rho^k, \rho^k]$ and has moment sequence $(m_{2s}, m_{2s+k}, m_{2s+2k}, \dots )$. Applying Lemma~\ref{lem:sdp} with $\mathcal{J} = \{1, 2\}$ we obtain \begin{align} \rho^{k} H^{(2s,k)} \pm S^{(2s,k)} \succeq 0 \implies \rho^k \ge \dfrac{ \mathbf{x}^{\intercal}S^{(2s,k)} \mathbf{x} }{\mathbf{x}^{\intercal} H^{(2s,k)} \mathbf{x}}, \label{H-S posdef} \end{align} \noindent for every non-zero $\mathbf{x} \in \mathbb{R}^2$. From Theorem~\ref{thm:hamburger}, we know that $H^{(2s, k)} \succeq 0$; hence, its eigenvalues $\xi_1$ and $\xi_2$ satisfy $\xi_1 \ge \xi_2 \ge 0$. By the Rayleigh principle, we have that \begin{align} \dfrac{ \mathbf{x}^\intercal H^{(2s,k)} \mathbf{x} }{\mathbf{x}^\intercal \mathbf{x}} \le \xi_1, \qquad \dfrac{ \mathbf{w}^\intercal H^{(2s,k)} \mathbf{w} }{\mathbf{w}^\intercal \mathbf{w}} = \xi_2, \label{eq:rayleigh1} \end{align} \noindent for every non-zero $\mathbf{x}$ and for $\mathbf{w}$ being the eigenvector corresponding to the second eigenvalue of $H^{(2s,k)}$. Similarly, let $\gamma_1 \ge \gamma_2$ be the eigenvalues of $S^{(2s,k)}$. Since the entries of $S^{(2s,k)}$ are non-negative for the walk-based moment sequences considered here, the Perron-Frobenius theorem gives $\gamma_1 \ge 0$. If $\gamma_2 < 0$, then $\det(S^{(2s,k)}) < 0$ and the inequality \eqref{H-S} holds trivially. If instead $\gamma_2\ge 0$, then \begin{align} \dfrac{ \mathbf{x}^\intercal S^{(2s,k)} \mathbf{x} }{\mathbf{x}^\intercal \mathbf{x}} \ge \gamma_2 , \qquad \dfrac{ \mathbf{v}^\intercal S^{(2s,k)} \mathbf{v} }{\mathbf{v}^\intercal \mathbf{v}} = \gamma_1, \label{eq:rayleigh2} \end{align} \noindent for any non-zero $\mathbf{x}$ and for $\mathbf{v}$ equal to the leading eigenvector of $S^{(2s,k)}$.
We plug vectors $\mathbf{v}$ and $\mathbf{w}$ into \eqref{H-S posdef} to obtain \begin{align*} \rho^k \ge \hspace{3pt} \dfrac{ \mathbf{\mathbf{v}}^{\intercal}S^{(2s,k)} \mathbf{v} }{\mathbf{v}^{\intercal} H^{(2s,k)} \mathbf{v}} \ge \dfrac{ \gamma_1 }{ \xi_1},\\ \rho^k \ge \dfrac{ \mathbf{\mathbf{w}}^{\intercal}S^{(2s,k)} \mathbf{w} }{\mathbf{w}^{\intercal} H^{(2s,k)} \mathbf{w}} \ge \dfrac{\gamma_2}{\xi_2}, \end{align*} \noindent where the last inequalities come from \eqref{eq:rayleigh1} and \eqref{eq:rayleigh2}. Multiplying both inequalities we obtain \begin{align*} \rho^{2k} \ge \dfrac{\gamma_1 \gamma_2 }{\xi_1 \xi_2} = \dfrac{ \det(S^{(2s,k)})}{\det(H^{(2s,k)})}. \end{align*} \end{proof} We are now ready to prove the following corollary. \begin{corollary} \label{cor:largest-root} Let $\mathbf{m}$ be the sequence of moments of a measure supported on the spectrum of $\mathcal{G}$. For $s, k \in \mathbb{N}_0$, let $H^{(2s,k)}$ and $S^{(2s,k)}$ be defined as in \eqref{eq:HS-matrices} and define the following matrix: \begin{align*} F^{(2s,k)}\coloneqq \left[\begin{array}{ll}{m_{2s+k}} & {m_{2s+3k}} \\ {m_{2s}} & {m_{2s+2k}}\end{array}\right]. \end{align*} Then, whenever $\det\left(H^{(2s,k)}\right) \neq 0$, we have \begin{align} \rho \ge \left( \dfrac{ \left| \det\left( F^{(2s, k)}\right)\right| + \sqrt{\det\left( F^{(2s, k)}\right)^2 - 4 \det\left(H^{(2s,k)} S^{(2s,k)}\right)}}{2 \det\left(H^{(2s,k)}\right)}\right)^{1/k}. \end{align} \end{corollary} \begin{proof} The inequality \eqref{H-S posdef} implies that $\det \left(\rho^k H^{(2s,k)} + S^{(2s,k)}\right) \ge 0$. This can be expanded to \begin{align*} \det\left(\left[\begin{array}{ll}{\rho^k m_{2s} + m_{2s+k}} & {\rho^k m_{2s+k} + m_{2s+2k}} \\ {\rho^k m_{2s+k} + m_{2s+2k}} & {\rho^k m_{2s+2k} + m_{2s+3k}}\end{array}\right]\right) \ge 0, \end{align*} \noindent which simplifies to \begin{align} \det\left(H^{(2s,k)}\right)\rho^{2k} - \det\left(F^{(2s,k)}\right) \rho^k + \det\left(S^{(2s,k)}\right) \ge 0. 
\label{quadratic rho-1} \end{align} Similarly, \eqref{H-S posdef} implies that $\det \left(\rho^k H^{(2s,k)} - S^{(2s,k)}\right) \ge 0$, which implies \begin{align} \det\left(H^{(2s,k)}\right)\rho^{2k} + \det\left(F^{(2s,k)}\right) \rho^k + \det\left(S^{(2s,k)}\right) \ge 0. \label{quadratic rho-2} \end{align} Inequalities \eqref{quadratic rho-1} and \eqref{quadratic rho-2} are satisfied simultaneously if and only if \begin{align} \det\left(H^{(2s,k)}\right)\rho^{2k} - \left|\det\left(F^{(2s,k)}\right)\right| \rho^k + \det\left(S^{(2s,k)}\right) \ge 0. \label{eq:quadratic-rho} \end{align} By Theorem~\ref{thm:hamburger} and the assumption $\det\left(H^{(2s,k)}\right) \neq 0$, we have that $\det\left(H^{(2s,k)}\right) > 0$. Using Lemma~\ref{lemma:dets}, we know that $\det\left(S^{(2s,k)}\right) \le \det\left(H^{(2s,k)}\right) \rho^{2k} $, which we substitute into \eqref{eq:quadratic-rho} to yield \begin{align*} 2\det\left(H^{(2s,k)}\right)\rho^{2k} - \left| \det\left(F^{(2s,k)}\right)\right| \rho^k \ge 0. \end{align*} Since $\rho \ge 0$, we conclude that \begin{align} \rho^{k} \ge \dfrac{\left| \det\left(F^{(2s,k)}\right)\right|}{2\det\left(H^{(2s,k)}\right)}. \label{eq:larger-than-smallest-root} \end{align} Next, define the quadratic polynomial \begin{align*} P(r) = \det\left(H^{(2s,k)}\right)r^2 - \left|\det\left(F^{(2s,k)}\right)\right| r + \det\left(S^{(2s,k)}\right), \end{align*} which has a positive leading coefficient. From \eqref{eq:quadratic-rho} we know that $P(\rho^k) \ge 0$, and from \eqref{eq:larger-than-smallest-root} we know that $\rho^k$ is at least the average of the two roots of $P$. Together, these imply that $\rho^k$ is at least the largest root of $P$, and the result follows. \end{proof} We can apply Corollary \ref{cor:largest-root} with $s=0$ and $ k=1$ to the closed-walks measure of a graph $\mathcal{G}$, leveraging the fact that $\phi_0 = n$, $\phi_1 = 0$, $\phi_2$ is twice the number of edges, and $\phi_3$ is six times the number of triangles in $\mathcal{G}$, as follows.
\begin{corollary} \label{cor:T-e} For a graph $\mathcal{G}$ with $n$ vertices, $e$ edges and $T$ triangles, we have that \begin{align*} \rho \ge \dfrac{3T}{2e} + \sqrt{\left(\dfrac{3T}{2e}\right)^2 + \dfrac{2e}{n}}. \end{align*} \end{corollary} Similarly, because $\phi_0^{(i)} = 1$, $\phi_1^{(i)} = 0$, $\phi_2^{(i)}$ is the degree of vertex $i$, and $\phi_3^{(i)}$ is twice the number of triangles touching vertex $i$, we can apply Corollary \ref{cor:largest-root} to the closed-walks measure for vertex $i$. \begin{corollary} \label{cor:ti-di} Denoting by $d_i$ the degree of vertex $i$ and by $T_i$ the number of triangles touching vertex $i$, we have \begin{align*} \rho \ge \max_{1 \le i \le n} \dfrac{T_i + \sqrt{T_i^2 + d_i^3}}{d_i}. \end{align*} \end{corollary} Notice how Corollary \ref{cor:ti-di} implies \begin{align*} \rho \ge \dfrac{T_{\scriptscriptstyle \Delta} + \sqrt{T_{\scriptscriptstyle \Delta}^2 + \Delta^3}}{\Delta} \ge \sqrt{\Delta}, \end{align*} where $\Delta$ is the maximum of the vertex degrees in $\mathcal{G}$ and $T_{\scriptscriptstyle \Delta}$ is the maximum triangle count amongst vertices with degree $\Delta$, improving the bound in \eqref{favaron}.\\ \begin{table}[h!]
\centering \begin{adjustbox}{center} \def3.0{3.0} \begin{tabular}{|c|c|c|c|} \hline General bound & Special cases & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ Moment sequence $\vphantom{\sum^k}$ \end{minipage}& Reference\\ \hline & $\rho \ge \left(\dfrac{w_{2s+k}}{w_{2s}}\right)^{1/k \hspace{5pt} }$ \cite{nikiforov2006walks} & \begin{minipage}{0.78in}\centering walks \end{minipage} & \\[1ex] \cline{2-3} $\rho \ge \left(\dfrac{m_{2s+k}}{m_{2s}}\right)^{1/k} $ & $\rho \ge \left(\dfrac{\phi_{2s+k}}{\phi_{2s}}\right)^{1/k} $ & closed walks& \begin{minipage}{0.6in} Corollary \ref{basic-niki-bound} \centering \end{minipage}\\[1ex] \cline{2-3} & $\rho \ge \left(\dfrac{\phi_{2s+k}^{(i)}}{\phi_{2s}^{(i)}}\right)^{1/k} $ & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ closed walks from node $i$ \end{minipage}& \\[1ex] \hline & $ \rho \ge \dfrac{3T}{2e} + \sqrt{\left(\dfrac{3T}{2e}\right)^2 + \dfrac{2e}{n}}$ & closed walks & \begin{minipage}{0.6in} Corollary \ref{cor:T-e} \centering \end{minipage} \\[1ex] \cline{2-4} $ \rho \ge \left( \dfrac{ \left| \det\left( F^{\scriptscriptstyle(2s, k)}\right)\right| + \sqrt{\det\left( F^{\scriptscriptstyle (2s, k)}\right)^2 - 4 \det\left(H^{\scriptscriptstyle (2s,k)} S^{\scriptscriptstyle (2s,k)}\right)}}{2 \det\left(H^{\scriptscriptstyle (2s,k)}\right)}\right)^{1/k}$ & $\displaystyle \rho \ge \max_{i \in 1, \dots, n} \dfrac{T_i + \sqrt{T_i^2 + d_i^3}}{d_i}$ & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ closed walks from node $i$ \end{minipage} & \multirow{2}{*}{\begin{minipage}{0.6in} \centering Corollary \ref{cor:ti-di} \end{minipage} }\\ \cline{2-3} & $ \hspace{15pt} \rho \ge \sqrt{\Delta}^{\hspace{15pt}}$ \cite{favaron1993some} & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ closed walks from node $i$ \end{minipage}& \\ \hline \end{tabular} \end{adjustbox} \caption{Summary of lower bounds on the spectral radius $\rho$, obtained as corollaries of Lemma~\ref{lem:sdp}. 
The number of $k$-walks, closed $k$-walks and closed $k$-walks from node $i$, in $\mathcal{G}$, are denoted by $w_k$, $\phi_k$ and $\phi_k^{(i)}$, respectively. We write $n$, $e$ and $T$ to denote the number of nodes, edges and triangles in $\mathcal{G}$. We write $d_i$ and $T_i$ to denote the degree of node $i$ and the number of triangles touching node $i$, respectively. The largest node degree is denoted by $\Delta$.} \label{tab:lower-bounds} \end{table} \section{Upper bounds on the spectral radius} \label{sec:upper} In this section, we make use of Theorems \ref{thm:hamburger} and \ref{thm:stieltjes} to derive upper bounds on $\rho$. These bounds are based on the analysis of three new measures similar to the ones in \eqref{def:spectral-measures}. In particular, these new measures, denoted by $\tilde{\mu}_{\mathcal{G}}$, $\tilde{\mu}_{\mathcal{G}}^{(i)}$, and $\tilde{\nu}_{\mathcal{G}}$, are the result of excluding the summand corresponding to the Dirac delta centered at $\lambda_1 = \rho$ from the definitions of $\mu_{\mathcal{G}}$, $\mu_{\mathcal{G}}^{(i)}$ and $\nu_{\mathcal{G}}$, respectively. Therefore, these three new measures are supported on the set $\{\lambda_2, \lambda_3, \dots, \lambda_n\} \subset [-\rho, \rho]$, and their moments are, respectively, \begin{align} m_k(\tilde{\mu}_{\mathcal{G}}) = \sum_{l=2}^n \lambda_l^k &= \phi_{k} - \rho^k, \label{moms-bulk-cw}\\ m_k(\tilde{\mu}_{\mathcal{G}}^{(i)}) = \sum_{l=2}^n c_l^{(i)}\lambda_l^k &= \phi_{k}^{(i)} - c_1^{(i)}\rho^k, \label{moms-bulk-cw-i}\\ m_k(\tilde{\nu}_{\mathcal{G}}) = \sum_{l=2}^n c_l \lambda_l^k &= w_k - c_1 \rho^k.
\label{moms-bulk-w} \end{align} Applying Hamburger's Theorem to these measures, we obtain the following result: \begin{lemma} \label{lem:upper-bounds-phi} Let $\mathbf{m}$ be the sequence of moments of an atomic measure $\mu(x) = \sum_{i=1}^n \alpha_i \delta(x- \lambda_i)$ supported on the spectrum of $\mathcal{G}$, and define the infinite-dimensional Hankel matrix $P$ given by: \begin{align*} P :=\left[\begin{array}{cccc} 1 & \rho & \rho^2 & {\dots} \\ \rho & \rho^2 & \rho^3 & {\dots} \\ {\rho^{2}} & {\rho^{3}} & {\rho^{4}} & {\dots} \\ {\vdots} & {\vdots} & {\vdots} & {\ddots} \end{array}\right] . \end{align*} Then, for any finite $\mathcal{J} \subset \mathbb{N}_0$, \begin{align} H_{\mathcal{J}}(\mathbf{m}) - \alpha_1 P_{\mathcal{J}} \succeq 0, \label{non-sdp} \end{align} \noindent where $H_{\mathcal{J}}(\mathbf{m})$ is a submatrix of the Hankel matrix of moments $H_{\max(\mathcal{J})}$ defined in \eqref{def:hankel-moms}. \end{lemma} \begin{proof} We simply note that the matrix $H_{\mathcal{J}}(\mathbf{m}) - \alpha_1 P_{\mathcal{J}}$ is the Hankel matrix containing the moments of the measure resulting from removing the term corresponding to $\lambda_1$ from the measure $\mu$ supported on the spectrum of $\mathcal{G}$. The result follows directly from Theorem~\ref{thm:hamburger} and Sylvester's criterion. \end{proof} \begin{corollary} \label{trivial-bounds} Let $\mathbf{m}$ be the sequence of moments of an atomic measure $\mu(x) = \sum_{i=1}^n \alpha_i \delta(x- \lambda_i)$ supported on the spectrum of $\mathcal{G}$. Then \begin{align} \rho &\le \left(\dfrac{m_{2k}}{\alpha_1}\right)^{1/2k} . \end{align} \end{corollary} \begin{proof} For $\mathcal{J} = \{k+1\}$, Lemma~\ref{lem:upper-bounds-phi} implies \begin{align*} m_{2k}- \alpha_1 \rho^{2k} \ge 0. \end{align*} This finishes the proof. \end{proof} Applying this corollary to the measures $\mu_{\mathcal{G}}, \mu_{\mathcal{G}}^{(i)}$ and $\nu_{\mathcal{G}}$, we obtain three different hierarchies of bounds.
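Before specializing, the corollary can be sanity-checked numerically. The short Python sketch below is not part of the paper's development: it applies Corollary \ref{trivial-bounds} to the closed-walks measure, where $\alpha_1 = 1$ and $m_k = \phi_k = \operatorname{tr}(A^k)$, on an arbitrarily chosen example graph.

```python
import numpy as np

# Hedged sanity check of Corollary "trivial-bounds" for the closed-walks
# measure: alpha_1 = 1 and m_k = phi_k = tr(A^k), so rho <= (tr A^{2k})^{1/(2k)}.
# The example graph (a 5-cycle with one chord) is an illustrative assumption.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
A = np.zeros((5, 5))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

rho = max(np.linalg.eigvalsh(A))  # spectral radius (A symmetric, connected graph)

for k in range(1, 6):
    phi_2k = np.trace(np.linalg.matrix_power(A, 2 * k))  # closed 2k-walks
    assert rho <= phi_2k ** (1.0 / (2 * k)) + 1e-9
```

Since the largest eigenvalue dominates $\operatorname{tr}(A^{2k})$ as $k$ grows, the bound $\phi_{2k}^{1/2k}$ approaches $\rho$ from above.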
For example, applying Corollary \ref{trivial-bounds} to the measure $\nu_{\mathcal{G}}$, which has moments $\mathbf{m} = \{w_s\}_{s=1}^\infty$, we obtain the bound $\rho \le (w_{2k}/c_1)^{1/2k}$, where $c_1 = \left(\sum_{i=1}^n u_{i1}\right)^2$ is the fundamental weight. One can prove that this bound is tighter than the bound \eqref{nikiforov-bound} proved by Nikiforov \cite{nikiforov2006walks} (albeit only for even exponents). In particular, rearranging Wilf's inequality \eqref{wilf}, we obtain \begin{align} \left(1 - \frac{1}{\omega(\mathcal{G})}\right) &\ge \frac{\rho}{c_1}. \label{wilf-rearranged} \end{align} Moreover, the upper bound \eqref{nikiforov-bound} can be expressed as \begin{align*} \left( \left(1 - \frac{1}{\omega(\mathcal{G})}\right) w_{2k}\right)^{\frac{1}{2k + 1}}. \end{align*} By substituting \eqref{wilf-rearranged} into this upper bound, we obtain \begin{align*} \left( \left(1 - \frac{1}{\omega(\mathcal{G})}\right) w_{2k}\right)^{\frac{1}{2k + 1}} \ge \left( \rho \frac{w_{2k}}{c_1}\right)^{\frac{1}{2k + 1}} \ge \left( \left(\frac{w_{2k}}{c_1}\right)^{\frac{1}{2k}} \frac{w_{2k}}{c_1}\right)^{\frac{1}{2k + 1}} = \left(\frac{w_{2k}}{c_1}\right)^{\frac{1}{2k}}, \end{align*} where the last quantity is the upper bound from Corollary \ref{trivial-bounds}, which is therefore less than or equal to the bound in \eqref{nikiforov-bound}.\\ Using Lemma~\ref{lem:upper-bounds-phi} with larger principal submatrices, we can improve these upper bounds further. The following upper bound is obtained by analyzing the case $\mathcal{J} = \{1, k+1\}$. \begin{corollary} \label{2x2-bounds} Let $\mathbf{m}$ be the sequence of moments of an atomic measure $\mu(x) = \sum_{i=1}^n \alpha_i \delta(x- \lambda_i)$ supported on the spectrum of $\mathcal{G}$. Then, for any $k \in \mathbb{N}$, \begin{align} \rho \le \left(\dfrac{m_k + \sqrt{\left(\frac{m_0}{\alpha_1} - 1\right)\left(m_0m_{2k} - m_k^2\right) }}{ \vphantom{\sqrt{\frac{m_0}{\alpha_1}}} m_0}\right)^{1/k} .
\label{largest-root} \end{align} Furthermore, this bound is tighter than the one in Corollary \ref{trivial-bounds}. \end{corollary} \begin{proof} Applying Lemma~\ref{lem:upper-bounds-phi} with $\mathcal{J} = \{1, k+1\}$, we conclude that $\det (H_\mathcal{J}(\mathbf{m}) - \alpha_1 P_\mathcal{J}) \ge 0$, which simplifies to the following expression: \begin{align*} -m_0\rho^{2k} + 2m_k \rho^k + \frac{1}{\alpha_1}\left((m_0 - \alpha_1 )m_{2k} - m_k^2\right) \ge 0. \end{align*} Making the substitution $y = \rho^k$, we obtain the following quadratic inequality \begin{align*} -m_0y^2 + 2m_k y + \frac{1}{\alpha_1}\left((m_0 - \alpha_1)m_{2k} - m_k^2\right) \ge 0. \end{align*} The quadratic on the left-hand side has a negative leading coefficient, which implies it is negative whenever $y$ is larger than its largest root, which is given by the right-hand side of \eqref{largest-root}. After substituting back $y = \rho^k$, the result follows. To see that this bound improves the one in Corollary \ref{trivial-bounds}, note that Corollaries \ref{trivial-bounds} and \ref{basic-niki-bound} imply \begin{align*} m_{2k} \ge \alpha_1 \rho^{2k} \ge \alpha_1 \dfrac{m_{4k}}{m_{2k}} \hspace{8pt} \implies \hspace{8pt} m_{4k} \le \dfrac{1}{\alpha_1} m_{2k}^2 \hspace{8pt} \implies \hspace{8pt} m_0m_{4k} - m_{2k}^2 \le \left(\dfrac{m_0}{\alpha_1} - 1\right)m_{2k}^2. \end{align*} Hence, we can use inequality \eqref{largest-root} with $k$ replaced by $2k$ to obtain \begin{align*} \rho^{2k} \le \dfrac{m_{2k} + \sqrt{\left(\dfrac{m_0}{\alpha_1} - 1\right)\left(m_0m_{4k} - m_{2k}^2\right) }}{m_0} \le \dfrac{m_{2k} + \sqrt{\left(\dfrac{m_0}{\alpha_1} - 1\right)\left(\dfrac{m_0}{\alpha_1} - 1\right)m_{2k}^2 }}{m_0} = \dfrac{m_{2k}}{\alpha_1}. \end{align*} \end{proof} As with previous results, we can obtain concrete bounds from this result by substituting $\alpha_1$ and $m_k$ by either $(i)$ 1 and $\phi_k$, $(ii)$ $c_1^{(i)}$ and $\phi_k^{(i)}$, or $(iii)$ $c_1$ and $w_k$, respectively.
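For instance $(i)$, the refined bound can be compared numerically against Corollary \ref{trivial-bounds}. The sketch below is illustrative only; the test graph is an arbitrary assumption, not an example from the paper.

```python
import numpy as np

# Illustrative comparison (example graph is an assumption): Corollary
# "2x2-bounds" with the closed-walks moments alpha_1 = 1, m_0 = n, m_k = phi_k,
# versus the simpler bound rho <= phi_{2k}^{1/(2k)} of Corollary "trivial-bounds".
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]  # star K_{1,4} plus one edge
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

rho = max(np.linalg.eigvalsh(A))
phi = lambda k: np.trace(np.linalg.matrix_power(A, k))  # closed k-walks

for k in (2, 4, 6):  # even k keeps phi_k > 0
    refined = ((phi(k) + np.sqrt((n - 1) * (n * phi(2 * k) - phi(k) ** 2))) / n) ** (1 / k)
    simple = phi(2 * k) ** (1 / (2 * k))
    assert rho <= refined + 1e-9 <= simple + 2e-9
```

Here $m_0 = n$, and the refined bound is at most $\phi_{2k}^{1/2k}$, in line with the corollary, while both quantities remain upper bounds on $\rho$.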
For example, we can apply Corollary \ref{2x2-bounds} to the closed-walks measure for node $i$, $\mu_{\mathcal{G}}^{(i)}$, using $\mathcal{J} = \{1, 2\}$. Since $\phi_0^{(i)} = 1$ and $\phi_1^{(i)} = 0$, we obtain the upper bound in the following Corollary. \begin{corollary} \label{not-better-phi-j} For a graph $\mathcal{G}$ \begin{align} \rho^2 \le \left( \dfrac{1}{c_1^{(i)}} - 1\right) \phi_2^{(i)} = \left(\dfrac{1}{x_i^2} - 1\right) d_i, \hspace{12pt} \text{for all } i \in \{1, \dots, n\}, \label{bound on xj} \end{align} where $d_i$ is the degree of vertex $i$ and $x_i$ is the $i$-th component of the leading eigenvector of $A$. \end{corollary} Notice that inequality \eqref{bound on xj} can be written as $$ x_i \le \dfrac{1}{\sqrt{1 + \frac{\rho^2}{d_i}}}\hspace{3pt},$$ which was first proven by Cioabă and Gregory in \cite{cioaba2007principal}. Our method provides an alternative proof. Furthermore, we can refine Corollary \ref{not-better-phi-j}, as follows. \begin{corollary} \label{maybe-better-bipartite} For a bipartite graph $\mathcal{G}$ and $k \in \mathbb{N}$, we have \begin{align} \rho^{2k} &\le \dfrac{\phi_{2k}}{2}, \label{half-bound}\\ \rho^{2k} &\le \dfrac{\phi_{2k}^{(i)}}{2c_1^{(i)}}. \label{half-bound-i} \end{align} \end{corollary} \begin{proof} To prove our result, we define a new measure $\mu_{\mathcal{G}}^+ = \sum_{i=1}^{\lceil n/2 \rceil} 2\delta\left(x-\lambda_{i}^2\right)$. Notice that, since the eigenvalue spectrum of a bipartite graph is symmetric, we have that $m_k\left(\mu_{\mathcal{G}}^+\right) = \sum_{i=1}^{\lceil n/2 \rceil} 2\lambda_i^{2k} = m_{2k}\left(\mu_{\mathcal{G}}\right) = \phi_{2k}$. The upper bound given by \eqref{half-bound} then follows by adapting the proof of Corollary \ref{trivial-bounds} to the measure $\mu_{\mathcal{G}}^+$. A similar construction can be done for the closed-walks measure $\mu_{\mathcal{G}}^{(i)}$ from node $i$.
It is easy to see that \begin{align} \phi_{2k}^{(i)} &= \sum_{j=1}^{\lceil n/2 \rceil} \left(c_j^{(i)} + c_{n+1-j}^{(i)}\right)\lambda_j^{2k}. \label{half-bound-i-moments} \end{align} In what follows, we prove that $c_j^{(i)} = c_{n+1-j}^{(i)}$ using the \textit{eigenvector-eigenvalue identity} \cite{denton2019eigenvectors}. We first prove that $u_{i,j}^2 = u_{n+1-i, j}^2$ for the case of odd $n$, and note that for even $n$ there is an analogous proof. Let $M_{\{j\}}$ be the principal submatrix of $A$ obtained by deleting row $j$ and column $j$, and let $\gamma_1 \ge \gamma_2 \ge \dots \ge \gamma_{n-1}$ be the eigenvalues of $M_{\{j\}}$. Because $M_{\{j\}}$ is the adjacency matrix of the graph obtained by deleting node $j$, which is also bipartite, its spectrum is also symmetric. From the \textit{eigenvector-eigenvalue identity} we have: \begin{align} u_{i, j}^{2} \prod_{l=1 ; l \neq i}^{n}\left(\lambda_{i} -\lambda_{l} \right) = \prod_{l=1}^{n-1}\left(\lambda_{i} - \gamma_{l}\right). \label{eq:eigenvalue-eigenvector} \end{align} Using the symmetry of the spectrum of $M_{\{j\}}$, the term on the right hand side can be rewritten as \begin{align} \prod_{l=1}^{n-1}\left(\lambda_{i} - \gamma_{l}\right) = \prod_{l=1}^{(n-1)/2} \left(\lambda_{i}^2 - \gamma_{l}^2\right). \label{right-term-ev-ew} \end{align} Similarly, the symmetry of the spectrum of $A$ implies $\lambda_{n+1-i} = -\lambda_i$ and $\lambda_{\lceil n/2 \rceil} = 0$. Thus, assuming without loss of generality that $i \le (n-1)/2$ (the case $\lambda_i = 0$ is trivial), the term accompanying $u_{i,j}^2$ in the left hand side can be rewritten as \begin{align} \prod_{l=1 ; l \neq i}^{n}\left(\lambda_{i} -\lambda_{l} \right) = \left(2 \lambda_i\right) \lambda_i \prod_{\substack{l=1 \\ l \neq i}}^{(n-1)/2} \left(\lambda_i^2 - \lambda_l^2\right), \label{left-term-ev-ew} \end{align} \noindent where the first factor on the right corresponds to the term $\left( \lambda_i - \lambda_{n+1-i}\right)$, the second factor corresponds to $\left(\lambda_i - \lambda_{\lceil n/2 \rceil} \right)$, and the third factor corresponds to the pairs of remaining eigenvalues. After these substitutions, it becomes clear that solving for $u_{i,j}^2$ in \eqref{eq:eigenvalue-eigenvector} yields the same result as solving for $u_{n+1-i,j}^2$. We conclude that $c_j^{(i)} = c_{n+1-j}^{(i)}$. Hence, we can conclude from \eqref{half-bound-i-moments} that \begin{align*} \phi_{2k}^{(i)} = \sum_{j=1}^{\lceil n/2 \rceil} 2c_j^{(i)} \lambda_j^{2k}. \end{align*} Thus, if we define the measure \begin{align*} \mu_{\mathcal{G}}^{(+,j)}(x)&= \sum_{i=1}^{\lceil n/2 \rceil} 2c_i^{(j)}\delta\left(x-\lambda_{i}^2\right), \end{align*} \noindent then $m_k (\mu_{\mathcal{G}}^{(+,j)}) = \phi_{2k}^{(j)}$ and the upper bound \eqref{half-bound-i} follows from adapting the proof of Corollary \ref{trivial-bounds} to this measure. \end{proof} A more general version of Corollary \ref{2x2-bounds} is given in the following theorem. \begin{theorem} \label{conditions for bounds} Let $\mathbf{m}$ be the sequence of moments of an atomic measure $\mu(x) = \sum_{i=1}^n \alpha_i \delta(x- \lambda_i)$ supported on the spectrum of $\mathcal{G}$. Define the infinite dimensional Hankel matrix $R$ given by \begin{align*} R :=\left[\begin{array}{cccc} 1 & r & r^2 & {\dots} \\ r & r^2 & r^3 & {\dots} \\ {r^{2}} & {r^{3}} & {r^{4}} & {\dots} \\ {\vdots} & {\vdots} & {\vdots} & {\ddots} \end{array}\right] .
\end{align*} Let $\mathcal{J} = \{j_1, j_2, \dots, j_s\}$ and $\mathcal{J}^{\prime} = \{j_1, j_2, \dots, j_{s-1}\}$ for $j_1, \dots, j_s \in \mathbb{N}_0$ such that $H_{\mathcal{J}^\prime}(\mathbf{m}) \succ 0$. Then, the largest root $r^*$ of the polynomial: \begin{align*} Q(r) \coloneqq \det \left(H_{\mathcal{J}}(\mathbf{m}) - \alpha_1 R_{\mathcal{J}}\right), \end{align*} is an upper bound on the spectral radius. \end{theorem} \begin{proof} We will prove that $Q(r)$ has a negative leading coefficient equal to $- \alpha_1 \det\left(H(\mathbf{m})_{\mathcal{J}'}\right) $. It is well known (see for example \cite{goberstein1988evaluating}) that if $A \in \mathbb{R}^{n\times n}$ and $B := \mathbf{uv}^{\intercal}$ is a rank-1 matrix in $\mathbb{R}^{n \times n}$ then \begin{align} \det(A + B) = \det(A) + \mathbf{v}^{\intercal} \operatorname{adj}(A) \mathbf{u},\label{nice-trick} \end{align} where $\operatorname{adj}(A)$ is the adjugate (the transpose of the cofactor matrix) of $A$. Let $\mathbf{r} \coloneqq \left(r^{j_1 - 1}, \dots, r^{j_s - 1}\right)$. We note that \begin{align*} -\alpha_1 R_\mathcal{J} = \left(\alpha_1\left(r^{j_1 - 1}, \dots, r^{j_s - 1}\right)\right)^{\intercal} \left(-\left(r^{j_1 - 1}, \dots, r^{j_s - 1}\right)\right) = (\alpha_1 \mathbf{r})(-\mathbf{r})^{\intercal}. \end{align*} Using \eqref{nice-trick}, we obtain \begin{align} \label{incredible determinants} \det(H(\mathbf{m})_\mathcal{J} - \alpha_1 R_\mathcal{J}) = \det(H(\mathbf{m})_\mathcal{J}) - \alpha_1 \mathbf{r}^{\intercal} \operatorname{adj}(H(\mathbf{m})_\mathcal{J}) \mathbf{r}. \end{align} It follows that the leading term of $Q(r)$ is $-\alpha_1 C_{s,s}r^{2(j_{s}-1)} = -\alpha_1 \det(H_{\mathcal{J}'})r^{2(j_{s}-1)} < 0$, where $C_{s,s}$ denotes the $(s,s)$ entry of $\operatorname{adj}(H(\mathbf{m})_\mathcal{J})$. By Lemma~\ref{lem:upper-bounds-phi} we have $Q(\rho) \ge 0$ and, therefore, $\rho \le r^*$. \end{proof} Lemma~\ref{lem:upper-bounds-phi} was proved by applying Hamburger's Theorem to the moment sequence $\{m_s - \alpha_1 \rho^s\}_{s=0}^{\infty}$.
We can also apply Stieltjes' Theorem to the same moment sequence to obtain a different hierarchy of upper bounds. This is stated in the following theorem. \begin{theorem} \label{thm:upper-bounds-stieljtes} Let $\mathbf{m}$ be the sequence of moments of an atomic measure $\mu(x) = \sum_{i=1}^n \alpha_i \delta(x- \lambda_i)$ supported on the spectrum of $\mathcal{G}$. Then, for any finite $\mathcal{J} \subset \mathbb{N}_0$, \begin{align} \rho \left(H(\mathbf{m})_{\mathcal{J}} - \alpha_1 P_{\mathcal{J}}\right) + \left(S(\mathbf{m})_{\mathcal{J}}- \alpha_1 \rho P_{\mathcal{J}}\right) \succeq 0, \label{non-sdp-stieltjes} \end{align} \noindent where $H(\mathbf{m})$ and $S(\mathbf{m})$ are the Hankel matrices of moments defined in Theorems \ref{thm:hamburger} and \ref{thm:stieltjes}, respectively. \end{theorem} \begin{proof} Recall from Lemma~\ref{lem:upper-bounds-phi} that the sequence $\{m_s - \alpha_1 \rho^s \}_{s=0}^\infty$ corresponds to the moment sequence of a measure whose support is contained in $[-\rho, \rho]$ and, therefore, the result follows from Theorem~\ref{thm:stieltjes}. \end{proof} Theorem~\ref{thm:upper-bounds-stieljtes} can be used to obtain bounds that improve on those of Corollary \ref{trivial-bounds}, as shown below. \begin{corollary} \label{polynomial} Let $\mathbf{m}$ be the sequence of moments of an atomic measure $\mu(x) = \sum_{i=1}^n \alpha_i \delta(x- \lambda_i)$ supported on the spectrum of $\mathcal{G}$. Then, the largest root $r^*$ of the following polynomial: \begin{align*} Q(r) \coloneqq m_{2k} r + m_{2k+1} - 2\alpha_1 r^{2k+1}, \end{align*} \noindent is an upper bound on the spectral radius. Furthermore, this bound is tighter than the bound in Corollary \ref{trivial-bounds}.
\end{corollary} \begin{proof} Applying Theorem~\ref{thm:upper-bounds-stieljtes} with $\mathcal{J} = \{k+1\}$, we obtain \begin{align*} \rho \left[m_{2k}\right] + \left[m_{2k+1}\right] -2 \alpha_1 \rho \left[\rho^{2k}\right] \succeq 0 \implies \rho m_{2k} + m_{2k+1} -2 \alpha_1 \rho^{2k+1} \ge 0, \end{align*} \noindent and, thus, $Q(\rho) \ge 0$. Since the leading coefficient of $Q(r)$ is negative, it follows that $\rho \le r^*$. To prove that $r^* \le (m_{2k}/\alpha_1)^{1/2k}$, we first prove that $r^*$ is the unique root of $Q(r)$ in the region defined by \begin{align} r \ge \left(\dfrac{m_{2k}}{2\alpha_1 (2k+1)}\right)^{1/2k}. \label{interval} \end{align} This is indeed the case, because the derivative of $Q(r)$, given by $Q^\prime(r) = m_{2k} - 2\,(2k+1)\,\alpha_1\, r^{2k}$, is negative in the region defined in \eqref{interval}. Also, note that \begin{align*} \left(\dfrac{m_{2k}}{\alpha_1} \right)^{1/2k} \ge \left(\dfrac{m_{2k}}{2\alpha_1 (2k+1)}\right)^{1/2k}. \end{align*} Therefore, it suffices to show that \begin{align*} Q\left(\left(\dfrac{ m_{2k} }{\alpha_1} \right)^{1/2k}\right) \le 0. \end{align*} To this end, we evaluate and obtain \begin{align*} &Q\left(\left(\dfrac{1}{\alpha_1} m_{2k}\right)^{1/2k}\right) \le 0\\ &\iff \left(\dfrac{1}{\alpha_1} m_{2k}\right)^{1/2k} m_{2k} + m_{2k+1} -2 \alpha_1 \left(\left(\dfrac{1}{\alpha_1} m_{2k}\right)^{1/2k}\right)^{2k+1}\le 0\\ &\iff \left(\dfrac{1}{\alpha_1} m_{2k}\right)^{1/2k} m_{2k} + m_{2k+1} - 2 m_{2k} \left(\dfrac{1}{\alpha_1} m_{2k}\right)^{1/2k} \le 0\\ &\iff \dfrac{m_{2k+1}}{m_{2k}} \le \left(\dfrac{1}{\alpha_1} m_{2k}\right)^{1/2k}, \end{align*} where the last inequality is true since the left-hand side is a lower bound of $\rho$ by Corollary \ref{basic-niki-bound}, and the right hand side is an upper bound of $\rho$ by Corollary \ref{trivial-bounds}. This finishes the proof.
\end{proof} The implicit bound in Corollary \ref{polynomial}, when applied to the moment sequence $\mathbf{m} = \{w_s\}_{s=1}^\infty$, provides an improvement on the bound given in Corollary \ref{trivial-bounds} and, consequently, on the bound \eqref{niki-bound}. Using inequality \eqref{wilf}, we can also obtain a bound in terms of the clique number instead of the fundamental weight, which also improves on \eqref{niki-bound}, as we show below. \begin{corollary} \label{polynomial-clique} The largest root $r^*$ of the following polynomial: \begin{align*} Q(r) \coloneqq w_{2k}r + w_{2k+1} - 2\dfrac{\omega(\mathcal{G})}{\omega(\mathcal{G}) - 1} r^{2k+2}, \end{align*} \noindent is an upper bound on the spectral radius. Furthermore, this bound is an improvement on \eqref{nikiforov-bound}. \end{corollary} \begin{proof} The proof is very similar to that of Corollary \ref{polynomial}. Using \eqref{wilf}, we have that $c_1 \ge \rho\, \omega(\mathcal{G})/(\omega(\mathcal{G})-1)$, and hence \begin{align*} \rho w_{2k} + w_{2k+1} - 2\dfrac{\omega(\mathcal{G})}{\omega(\mathcal{G}) - 1}\rho^{2k+2} \ge \rho w_{2k} + w_{2k+1} - 2c_1 \rho^{2k+1} \ge 0. \end{align*} We omit the details to avoid repetitions. \end{proof} \begin{table}[h!]
\centering \begin{adjustbox}{center} \renewcommand{\arraystretch}{3.0} \begin{tabular}{|c|c|c|c|} \hline General bound & Special cases & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ Moment sequence $\vphantom{\sum^k}$ \end{minipage}& Reference\\ \hline & $\rho \le \left(\phi_{2k}\right)^{1/2k}$ & \begin{minipage}{0.78in}\centering closed walks \end{minipage} & \\[1ex] \cline{2-3} &$\rho \le \left(\dfrac{w_{2k}}{c_1}\right)^{1/2k}$ & walks & \begin{minipage}{0.65in}\centering Corollary \ref{trivial-bounds} \end{minipage} \\[1ex] \cline{2-3} $\rho \le \left(\dfrac{m_{2k}}{\alpha_1}\right)^{1/2k}$& $\qquad \rho \le \left(\left(1 - \dfrac{1}{\omega(\mathcal{G})}\right)w_{2k}\right)^{1/(2k+1)}$\hspace{3pt} \cite{nikiforov2006walks} & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ walks \end{minipage}& \\[1ex] \cline{2-4} & $\rho \le \left(\dfrac{\phi_{2k}}{2}\right)^{1/2k}$& \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ closed walks (bipartite $\mathcal{G}$) \end{minipage} & \multirow{2}{*}{\begin{minipage}{0.65in}\centering Corollary \ref{maybe-better-bipartite} \end{minipage}}\\ \cline{2-3} & $\rho \le \left(\dfrac{\phi_{2k}^{(i)}}{2c_1^{(i)}}\right)^{1/2k}$& \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ closed walks from node $i$ (bipartite $\mathcal{G}$) \end{minipage} &\\[1ex] \hline $\rho \le \left(\dfrac{m_k + \sqrt{\left(\dfrac{m_0}{\alpha_1} - 1\right)\left(m_0m_{2k} - m_k^2\right) }}{m_0}\right)^{1/k} $ & \hspace{42pt} $\rho \le \sqrt{\left(\dfrac{1}{x_i^2}-1\right)d_i}$\hspace{3pt} \cite{cioaba2007principal} & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ closed walks from node $i$ \end{minipage} & \begin{minipage}{0.65in}\centering Corollary \ref{not-better-phi-j} \end{minipage} \\[2.5ex] \hline \multirow{2}{*}{ $\begin{array}{ccl} \multirow{2}{*}{$\rho \le $ } & { \max\limits_{r}} & {r} \\[-5ex] & {\text{s.t}} & { m_{2k}r + m_{2k+1} - 2\alpha_1 r^{2k+1} = 0} \end{array}$ } & $\begin{array}{ccl} \multirow{2}{*}{$\rho \le $ } & {
\max\limits_{r}} & {r} \\[-5ex] & {\text{s.t}} & { w_{2k}r + w_{2k+1} - 2c_1 r^{2k+1} = 0} \end{array}$ & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ walks \end{minipage} & \begin{minipage}{0.65in}\centering Corollary \ref{polynomial} \end{minipage}\\[1ex] \cline{2-4} & $\begin{array}{ccl} \multirow{2}{*}{$\rho \le $ } & { \max\limits_{r}} & {r} \\[-5ex] & {\text{s.t}} & { w_{2k}r + w_{2k+1} - 2\dfrac{\omega(\mathcal{G})}{\omega(\mathcal{G}) - 1} r^{2k+2} = 0} \end{array}$ & \begin{minipage}{0.78in} \centering $\vphantom{\sum^k}$ walks \end{minipage} & \begin{minipage}{0.65in}\centering Corollary \ref{polynomial-clique} \end{minipage}\\[3ex]\hline \end{tabular} \end{adjustbox} \caption{Upper bounds on the spectral radius $\rho$, where $ m_k $ is the $k$-th moment of an atomic measure $\mu(x) = \sum_{i=1}^n \alpha_i \delta(x- \lambda_i)$ supported on the spectrum of $\mathcal{G}$. The number of $k$-walks, closed $k$-walks and closed $k$-walks from node $i$ are denoted by $w_k$, $\phi_k$ and $\phi_k^{(i)}$, respectively. We write $\omega(\mathcal{G})$, $c_1$ and $x_i$ for the clique number of $\mathcal{G}$, the fundamental weight of $A$, and the $i$-th component of the leading eigenvector of $A$, respectively.} \label{tab:upper-bounds} \end{table} \FloatBarrier \end{document}
\begin{document} \begin{bottomstuff} Michael J. Neely$\dagger$ and Longbo Huang$\star$ are with the Electrical Engineering Department at the University of Southern California, Los Angeles, CA ($\dagger$web: http://www-rcf.usc.edu/$\sim$mjneely; $\star$web: http://www-scf.usc.edu/$\sim$longbohu). This material is supported in part by one or more of the following: the DARPA IT-MANET program grant W911NF-07-0028, the NSF Career grant CCF-0747525. \end{bottomstuff} \title{Dynamic Product Assembly and Inventory Control for Maximum Profit} \section{Introduction} This paper considers the problem of maximizing time average profit at a product assembly plant. The plant manages the purchasing, assembly, and pricing of $M$ types of raw materials and $K$ types of products. Specifically, the plant maintains a storage buffer for each of the $M$ materials, and can assemble each product from some specific combination of materials. The system operates in slotted time with normalized slots $t \in\{0, 1, 2, \ldots\}$. Every slot, the plant makes decisions about purchasing new raw materials and pricing the $K$ products for sale to the consumer. This is done in reaction to material costs and consumer demand functions that are known on each slot but can change randomly from slot to slot according to a stationary process with a possibly unknown probability distribution. It is well known that the problem of maximizing time average profit in such a system can be treated using dynamic programming and Markov decision theory. A textbook example of this approach for a single product (single queue) problem is given in \cite{bertsekas-dp}, where inventory storage costs are also considered. However, such approaches may be prohibitively complex for problems with large dimension, as the state space grows exponentially with the number of queues. Further, these techniques require knowledge of the probabilities that govern purchasing costs and consumer demand functions.
Case studies of multi-dimensional inventory control are treated in \cite{ndp-inventory} using a lower complexity neuro-dynamic programming framework, which approximates the optimal value function used in traditional dynamic programming. Such algorithms fine-tune the parameters of the approximation by either offline simulations or online feedback (see also \cite{bertsekas-neural}\cite{approx-dp}). In this paper, we consider a different approach that does not attempt to approximate dynamic programming. Our algorithm reacts to the current system state and does not require knowledge of the probabilities that affect future states. Under mild ergodicity assumptions on the material supply and consumer demand processes, we show that the algorithm can push time average profit to within $\epsilon$ of optimality, for any arbitrarily small value $\epsilon>0$. This can be achieved by finite storage buffers of size $cT_{\epsilon}/\epsilon$, where $c$ is a coefficient that is polynomial in $K$ and $M$, and $T_{\epsilon}$ is a constant that depends on the ``mixing time'' of the processes. In the special case when these processes are i.i.d. over slots, we have $T_{\epsilon} = 1$ for all $\epsilon>0$, and so the buffers are size $O(1/\epsilon)$.\footnote{If the material supply and consumer demand processes are modulated by finite state ergodic Markov chains, then $T_{\epsilon} = O(\log(1/\epsilon))$ and so the buffers are size $O((1/\epsilon)\log(1/\epsilon))$.} The algorithm can be implemented in real time even for problems with large dimension (i.e., large $K$ and $M$). Thus, our framework circumvents the ``curse of dimensionality'' problems associated with dynamic programming. 
This is because we are not asking the same question that could be asked by dynamic programming approaches: Rather than attempting to maximize profit subject to finite storage buffers, we attempt to reach the more difficult target of pushing profit arbitrarily close to the maximum that can be achieved in systems with \emph{infinite buffer space}. We can approach this optimality with finite buffers of size $O(1/\epsilon)$, although this may not be the optimal buffer size tradeoff (see \cite{neely-energy-delay-it}\cite{neely-utility-delay-jsac} for tradeoff-optimal algorithms in a communication network). A dynamic program might be able to achieve the same profit with smaller buffers, but would contend with curse of dimensionality issues. Prior work on inventory control with system models similar to our own is found in \cite{pricing-short-life-cycles} \cite{assemble-to-order-dp} \cite{assemble-to-order-plambeck06} and references therein. Work in \cite{pricing-short-life-cycles} considers a single-dimensional inventory problem where a fixed number of products are sold over a finite horizon with a constant but unknown customer arrival rate. A set of coupled differential equations are derived for the optimal policy using Markov decision theory. Work in \cite{assemble-to-order-dp} provides structural results for multi-dimensional inventory problems with product assembly, again using Markov decision theory, and obtains numerical results for a two-dimensional system. A multi-dimensional product assembly problem is treated in \cite{assemble-to-order-plambeck06} for stochastic customer arrivals with fixed and known rates. The complexity issue is treated by considering a large volume limit and using results of heavy traffic theory. The work in \cite{assemble-to-order-plambeck06} also considers joint optimal price decisions, but chooses all prices at time zero and holds them constant for all time thereafter. 
Our analysis uses the ``drift-plus-penalty'' framework of stochastic network optimization developed for queueing networks in \cite{now}\cite{neely-fairness-infocom05}\cite{neely-energy-it}. Our problem is most similar to the work in \cite{jiang-processing}, which uses this framework to address \emph{processing networks} that queue components that must be combined with other components. The work in \cite{jiang-processing} treats multi-hop networks and maximizes throughput and throughput-utility in these systems using a \emph{deficit max-weight} algorithm that uses ``deficit queues'' to keep track of the deficit created when a component cannot be processed due to a missing part. Our paper does not consider a multi-hop network, but has similar challenges when we do not have enough inventory to build a desired product. Rather than using deficit queues, we use a different type of Lyapunov function that avoids deficits entirely. Our formulation also considers the purchasing and pricing aspects of the problem, particularly for a manufacturing plant, and considers arbitrary (possibly non-ergodic) material supply and consumer demand processes. Previous work in \cite{two-prices-allerton07} uses the drift-plus-penalty framework in a related revenue maximization problem for a wireless service provider. In that context, a two-price result demonstrates that dynamic pricing must be used to maximize time average profit (a single price is often not enough, although two prices are sufficient). The problem in this paper can be viewed as the ``inverse'' of the service provider problem, and has an extra constraint that requires the plant to maintain enough inventory for a sale to take place. However, a similar two-price structure applies here, so that time-varying prices are generally required for optimality, even if material costs and consumer demands do not change with time. 
This is a simple phenomenon that often arises when maximizing the expectation of a non-concave profit function subject to a limited supply of raw materials. In the real world, product providers often use a regular price that applies most of the time, with reduced ``sale'' prices that are offered less frequently. While the incentives for two-price behavior in the real world are complex and are often related to product expiration dates (which is not part of our mathematical model), two-price (or multi-price) behavior can arise even in markets with non-perishable goods. Time varying prices also arise in other contexts, such as in the work \cite{pricing-short-life-cycles} which treats the sale of a fixed amount of items over a finite time horizon. It is important to note that the term ``dynamic pricing'' is often associated with the practice of price discrimination between consumers with different demand functions. It is well known that charging different consumers different prices is tantalizingly profitable (but often illegal). Our model does not use such price discrimination, as it offers the same price to all consumers. However, the revenue earned from our time-varying strategy may be indirectly reaping benefits that are similar to those achievable by price discrimination, without the inherent unfairness. This is because the aggregate demand function is composed of individual demands from consumers with different preferences, which can partially be exploited with a time-varying price that operates on two different price regions. The outline of this paper is as follows: In the next section we specify the system model. The optimal time average profit is characterized in Section \ref{section:optimal-profit}, where the two-price behavior is also noted. Our dynamic control policy is developed in Section \ref{section:dynamic} for an i.i.d. model of material cost and consumer demand states. 
Section \ref{section:ergodic} treats a more general ergodic model, and arbitrary (possibly non-ergodic) processes are treated in Section \ref{section:non-ergodic}. \section{System Model} \label{section:model} There are $M$ types of raw materials, and each is stored in a different storage buffer at the plant. Define $Q_m(t)$ as the (integer) number of type $m$ materials in the plant on slot $t$. We temporarily assume all storage buffers have infinite space, and later we show that our solution can be implemented with finite buffers of size $O(1/\epsilon)$, where the $\epsilon$ parameter determines a profit-buffer tradeoff. Let $\bv{Q}(t) = (Q_1(t), \ldots, Q_M(t))$ be the vector of queue sizes, also called the \emph{inventory vector}. From these materials, the plant can manufacture $K$ types of products. Define $\beta_{mk}$ as the (integer) number of type $m$ materials required for creation of a single item of product $k$ (for $m \in \{1, \ldots, M\}$ and $k \in \{1, \ldots, K\}$). We assume that products are assembled quickly, so that a product requested during slot $t$ can be assembled on the same slot, provided that there are enough raw materials.\footnote{Algorithms that yield similar performance but require products to be assembled one slot before they are delivered can be designed based on simple modifications, briefly discussed in Section \ref{subsection:assembly-delay}.} Thus, the plant must have $Q_m(t) \geq \beta_{mk}$ for all $m \in \{1, \ldots, M\}$ in order to sell one product of type $k$ on slot $t$, and must have twice this amount of materials in order to sell two type $k$ products, etc. The simplest example is when each raw material itself represents a finished product, which corresponds to the case $K=M$, $\beta_{mm} = 1$ for all $m$, $\beta_{mk} = 0$ for $m \neq k$. However, our model allows for more complex assembly structures, possibly with different products requiring some overlapping materials. 
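As a concrete numerical illustration of the material requirements (not part of the model itself), the following sketch checks whether a given inventory can support assembling a desired number of each product. The $\beta$ matrix and all quantities below are invented for this example.

```python
# Illustrative check of the material requirement Q_m >= beta[m][k] per unit of
# product k.  The beta matrix is invented: product 0 needs one unit of each
# material, while product 1 needs two units of material 1 only.
beta = [[1, 0],
        [1, 2]]

def can_assemble(Q, beta, counts):
    """True if inventory Q supports assembling counts[k] units of each product k."""
    M, K = len(Q), len(counts)
    need = [sum(beta[m][k] * counts[k] for k in range(K)) for m in range(M)]
    return all(Q[m] >= need[m] for m in range(M))

# With Q = [2, 3]: one unit of each product needs [1+0, 1+2] = [1, 3] materials.
ok = can_assemble([2, 3], beta, [1, 1])     # True
twice = can_assemble([2, 3], beta, [2, 2])  # needs [2, 6] of materials -> False
```

This also illustrates the remark above: selling twice as many products requires twice the materials, so an inventory that supports one unit of each product need not support two.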
Every slot $t$, the plant must decide how many new raw materials to purchase and what price it should charge for its products. Let $\bv{A}(t) = (A_1(t), \ldots, A_M(t))$ represent the vector of the (integer) number of new raw materials purchased on slot $t$. Let $\tilde{\bv{D}}(t) = (\tilde{D}_1(t), \ldots, \tilde{D}_K(t))$ be the vector of the (integer) number of products sold on slot $t$. The queueing dynamics for $m \in \{1, \ldots, M\}$ are thus: \begin{equation} \label{eq:dynamics} Q_m(t+1) = \max\left[Q_m(t) - \sum_{k=1}^K \beta_{mk}\tilde{D}_k(t), 0\right] + A_m(t) \end{equation} Below we describe the pricing decision model that affects product sales $\tilde{\bv{D}}(t)$, and the cost model associated with purchasing decisions $\bv{A}(t)$. \subsection{Product Pricing and the Consumer Demand Functions} For each slot $t$ and each commodity $k$, the plant must decide if it desires to offer commodity $k$ for sale, and, if so, what price it should charge. Let $Z_k(t)$ represent a binary variable that is $1$ if commodity $k$ is offered and is $0$ else. Let $P_k(t)$ represent the per-unit price for product $k$ on slot $t$. We assume that prices $P_k(t)$ are chosen within a compact set $\script{P}_k$ of price options. Thus: \begin{equation} \label{eq:p-k-constraint} P_k(t) \in \script{P}_k \: \mbox{ for all products $k \in \{1, \ldots, K\}$ and all slots $t$} \end{equation} The sets $\script{P}_k$ include only non-negative prices and have a finite maximum price $P_{k, max}$. For example, the set $\script{P}_k$ might represent the interval $0 \leq p \leq P_{k,max}$, or might represent a discrete set of prices separated by some minimum price unit. Let $\bv{Z}(t) = (Z_1(t), \ldots, Z_K(t))$ and $\bv{P}(t) = (P_1(t), \ldots, P_K(t))$ be vectors of these decision variables. Let $Y(t)$ represent the \emph{consumer demand state} for slot $t$, which represents any factors that affect the expected purchasing decisions of consumers on slot $t$. 
Let $\bv{D}(t) = (D_1(t), \ldots, D_K(t))$ be the resulting \emph{demand vector}, where $D_k(t)$ represents the (integer) amount of type $k$ products that consumers want to buy in reaction to the current price $P_k(t)$ and under the current demand state $Y(t)$. Specifically, we assume that $D_k(t)$ is a random variable that depends on $P_k(t)$ and $Y(t)$, is conditionally i.i.d. over all slots with the same $P_k(t)$ and $Y(t)$ values, and satisfies: \begin{equation} \label{eq:demand-function} F_k(p, y) = \expect{D_k(t) \left|\right. P_k(t) = p, Y(t) = y} \: \: \forall p \in \script{P}_k, y \in \script{Y} \end{equation} The $F_k(p,y)$ function is assumed to be continuous in $p\in \script{P}_k$ for each $y \in \script{Y}$.\footnote{This ``continuity'' is automatically satisfied in the case when $\script{P}_k$ is a finite set of points. Continuity of $F_k(p,y)$ and compactness of $\script{P}_k$ ensure that linear functionals of $F_k(p,y)$ have well defined maximizers $p \in \script{P}_k$.} We assume that the current demand state $Y(t)$ is known to the plant at the beginning of slot $t$, and that the demand function $F_k(p, y)$ is also known to the plant. The process $Y(t)$ takes values in a finite or countably infinite set $\script{Y}$, and is assumed to be stationary and ergodic with steady state probabilities $\pi(y)$, so that: \[ \pi(y) = Pr[Y(t) = y] \: \: \forall y \in \script{Y}, \forall t \] The probabilities $\pi(y)$ are not necessarily known to the plant. We assume that the maximum demand for each product $k\in \{1, \ldots, K\}$ is deterministically bounded by a finite integer $D_{k, max}$, so that regardless of price $\bv{P}(t)$ or the demand state $Y(t)$, we have: \[ 0 \leq D_k(t) \leq D_{k, max} \: \: \mbox{ for all slots $t$ and all products $k$} \] This boundedness assumption is useful for analysis. Such a finite bound is natural in cases when the maximum number of customers is limited on any given slot.
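The demand model can be illustrated with a simple sketch. The linear form of $F_k(p,y)$ and the binomial sampling below are invented for illustration; the model only requires that realized demand is integer, bounded by $D_{k,max}$, and has conditional mean $F_k(p,y)$.

```python
import random

# Illustrative demand model (not from the paper): the expected demand F(p, y)
# decreases linearly in the price p, and the realized integer demand is a
# binomial sample with D_MAX trials, so that 0 <= D <= D_MAX and the
# conditional mean equals F(p, y), as the demand-function definition requires.

D_MAX = 10

def F(p, y):
    """Expected demand: y scales market size, demand vanishes at p = 10."""
    return max(0.0, min(D_MAX, y * (10.0 - p)))

def sample_demand(p, y, rng):
    prob = F(p, y) / D_MAX                 # per-customer purchase probability
    return sum(1 for _ in range(D_MAX) if rng.random() < prob)

rng = random.Random(0)
samples = [sample_demand(3.0, 1.0, rng) for _ in range(2000)]
avg = sum(samples) / len(samples)          # empirical mean, near F(3, 1) = 7
```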
The bound might also be artificially enforced by the plant due to physical constraints that limit the number of orders that can be fulfilled on one slot. Define $\mu_{m,max}$ as the resulting maximum demand for raw materials of type $m$ on a given slot: \begin{equation} \label{eq:mu-max} \mu_{m, max} \defequiv \sum_{k=1}^K\beta_{mk} D_{k,max} \end{equation} If there is a sufficient amount of raw materials to fulfill all demands in the vector $\bv{D}(t)$, and if $Z_k(t) = 1$ for all $k$ such that $D_k(t)>0$ (so that product $k$ is offered for sale), then the number of products sold is equal to the demand vector: $\tilde{\bv{D}}(t) = \bv{D}(t)$. We are guaranteed to have enough inventory to meet the demands on slot $t$ if $Q_m(t) \geq \mu_{m,max}$ for all $m \in \{1, \ldots, M\}$. However, there may not always be enough inventory to fulfill all demands, in which case we require a \emph{scheduling decision} that decides how many units of each product will be assembled to meet a subset of the demands. The value of $\tilde{\bv{D}}(t) = (\tilde{D}_1(t), \ldots, \tilde{D}_K(t))$ must be chosen as an integer vector that satisfies the following \emph{scheduling constraints}: \begin{eqnarray} 0 \leq \tilde{D}_k(t) \leq Z_k(t)D_k(t) & \forall k \in \{1, \ldots, K\} \label{eq:scheduling-constraints1} \\ Q_m(t) \geq \sum_{k=1}^K\beta_{mk}\tilde{D}_k(t) & \forall m\in\{1, \ldots, M\} \label{eq:scheduling-constraints2} \end{eqnarray} \subsection{Raw Material Purchasing Costs} \label{subsection:cost-state} Let $X(t)$ represent the \emph{raw material supply state} on slot $t$, which contains components that affect the purchase price of new raw materials. Specifically, we assume that $X(t)$ has the form: \[ X(t) = [(x_1(t), \ldots, x_M(t)); (s_1(t), \ldots, s_M(t))] \] where $x_m(t)$ is the per-unit price of raw material $m$ on slot $t$, and $s_m(t)$ is the maximum amount of raw material $m$ available for sale on slot $t$. 
We assume that $X(t)$ takes values on some finite or countably infinite set $\script{X}$, and that $X(t)$ is stationary and ergodic with probabilities: \[ \pi(x) = Pr[X(t) = x] \: \: \forall x \in \script{X}, \forall t \] The $\pi(x)$ probabilities are not necessarily known to the plant. Let $c(\bv{A}(t), X(t))$ be the total cost incurred by the plant for purchasing a vector $\bv{A}(t)$ of new materials under the supply state $X(t)$: \begin{equation} \label{eq:example-cost} c(\bv{A}(t), X(t)) = \sum_{m=1}^M x_m(t) A_m(t) \end{equation} We assume that $\bv{A}(t)$ is limited by the constraint $\bv{A}(t) \in \script{A}(X(t))$, where $\script{A}(X(t))$ is the set of all vectors $\bv{A}(t) = (A_1(t), \ldots, A_M(t))$ such that for all $t$: \begin{eqnarray} 0 \leq A_m(t) \leq \min[A_{m,max}, s_m(t)] & \forall m \in \{1, \ldots, M\} \label{eq:Amax} \\ A_m(t) \mbox{ is an integer } & \forall m \in \{1, \ldots, M\} \label{eq:integer-a} \\ c(\bv{A}(t), X(t)) \leq c_{max} & \label{eq:cmax} \end{eqnarray} where $A_{m, max}$ and $c_{max}$ are finite bounds on the total amount of each raw material that can be purchased, and the total cost of these purchases on one slot, respectively. These finite bounds might arise from the limited supply of raw materials, or might be artificially imposed by the plant in order to limit the risk associated with investing in new raw materials on any given slot. A simple special case is when there is a finite maximum price $x_{m, max}$ for raw material $m$ at any time, and when $c_{max} = \sum_{m=1}^M x_{m, max} A_{m,max}$. In this case, the constraint (\ref{eq:cmax}) is redundant. 
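A feasibility check for the purchase constraint set $\script{A}(X(t))$ can be sketched directly from the constraints above, using the linear cost function; the prices, availabilities, and caps below are invented for illustration.

```python
# Feasibility check for a candidate purchase vector A under the constraints
# on A_m(t): integrality, 0 <= A_m <= min(A_{m,max}, s_m), and a per-slot
# budget c(A, X) <= c_max with the linear cost c(A, X) = sum_m x_m * A_m.
# All numeric state below is hypothetical.

def cost(A, x):
    return sum(x_m * a_m for x_m, a_m in zip(x, A))

def is_feasible(A, x, s, A_max, c_max):
    return (all(isinstance(a, int) and 0 <= a <= min(cap, avail)
                for a, cap, avail in zip(A, A_max, s))
            and cost(A, x) <= c_max)

x = [2.0, 5.0]        # per-unit prices x_m(t)
s = [4, 1]            # availabilities s_m(t)
A_max = [3, 3]        # per-slot purchase caps A_{m,max}
c_max = 10.0          # per-slot budget c_max

ok = is_feasible([3, 0], x, s, A_max, c_max)          # cost 6.0, within limits
too_costly = is_feasible([3, 1], x, s, A_max, c_max)  # cost 11.0 > c_max
```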
\subsection{The Maximum Profit Objective} \label{subsection:queueing-dynamics} Every slot $t$, the plant observes the current queue vector $\bv{Q}(t)$, the current demand state $Y(t)$, and the current supply state $X(t)$, and chooses a purchase vector $\bv{A}(t) \in \script{A}(X(t))$ and pricing vectors $\bv{Z}(t)$, $\bv{P}(t)$ (with $Z_k(t) \in \{0, 1\}$ and $P_k(t) \in \script{P}_k$ for all $k \in \{1, \ldots, K\}$). The consumers then react by generating a random demand vector $\bv{D}(t)$ with expectations given by (\ref{eq:demand-function}). The actual number of products filled is scheduled by choosing the $\tilde{\bv{D}}(t)$ vector according to the scheduling constraints (\ref{eq:scheduling-constraints1})-(\ref{eq:scheduling-constraints2}), and the resulting queueing update is given by (\ref{eq:dynamics}). For each $k \in \{1, \ldots, K\}$, define $\alpha_k$ as a fixed (non-negative) cost associated with assembling one product of type $k$. Define a process $\phi(t)$ as follows: \begin{equation} \label{eq:phi} \phi(t) \defequiv - c(\bv{A}(t), X(t)) + \sum_{k=1}^K Z_k(t)D_k(t)(P_k(t)-\alpha_k) \end{equation} The value of $\phi(t)$ represents the total instantaneous profit due to material purchasing and product sales on slot $t$, under the assumption that all demands are fulfilled (so that $\tilde{D}_k(t) = D_k(t)$ for all $k$). Define $\phi_{actual}(t)$ as the \emph{actual} instantaneous profit, defined by replacing the $D_k(t)$ values in the right hand side of (\ref{eq:phi}) with $\tilde{D}_k(t)$ values. Note that $\phi(t)$ can be either positive, negative, or zero, as can $\phi_{actual}(t)$. 
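The instantaneous profit $\phi(t)$ can be computed directly from its definition; the sketch below uses the linear purchase cost and invented numbers.

```python
# Instantaneous profit for one slot: phi = -c(A, X) + sum_k Z_k D_k (P_k - alpha_k),
# with the linear purchase cost.  All numeric values are illustrative.

def phi(A, x, Z, D, P, alpha):
    purchase_cost = sum(xm * am for xm, am in zip(x, A))
    revenue = sum(zk * dk * (pk - ak) for zk, dk, pk, ak in zip(Z, D, P, alpha))
    return -purchase_cost + revenue

# One slot with K = 2 products and M = 2 materials:
value = phi(A=[1, 2], x=[1.0, 2.0],        # spend 1*1 + 2*2 = 5 on materials
            Z=[1, 0], D=[3, 4],            # product 1 is not offered (Z_1 = 0)
            P=[4.0, 6.0], alpha=[1.0, 1.0])
# revenue = 1*3*(4-1) = 9, so phi = -5 + 9 = 4
```

As noted above, the result can be negative when material purchases outweigh sales revenue on a slot.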
Define time average expectations $\overline{\phi}$ and $\overline{\phi}_{actual}$ as follows: \[ \overline{\phi} \defequiv \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\phi(\tau)} \: \: , \: \: \overline{\phi}_{actual} \defequiv \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\phi_{actual}(\tau)} \] The goal of the plant is to maximize the time average expected profit $\overline{\phi}_{actual}$. For convenience, a table of notation is given in Table \ref{table:notation}.
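The model components above can be tied together in a sketch of a single slot: fulfillment of demands subject to the scheduling constraints, the purchase cost, and the inventory update. The greedy fulfillment order and all numeric inputs are invented for illustration (the model does not fix a particular scheduling rule).

```python
# End-to-end sketch of one slot: offer decisions Z, realized demand D, greedy
# scheduling of fulfilled demands D_tilde, the linear purchase cost, and the
# inventory update.  All numeric inputs (beta, prices, demands) are invented.

def one_slot(Q, beta, Z, D, P, alpha, A, x):
    M, K = len(Q), len(D)
    Q_rem, D_tilde = list(Q), [0] * K
    for k in range(K):
        # fulfill demand k one unit at a time while materials remain
        while Z[k] and D_tilde[k] < D[k] and \
                all(Q_rem[m] >= beta[m][k] for m in range(M)):
            for m in range(M):
                Q_rem[m] -= beta[m][k]
            D_tilde[k] += 1
    profit = (-sum(xm * am for xm, am in zip(x, A))
              + sum(D_tilde[k] * (P[k] - alpha[k]) for k in range(K)))
    Q_next = [Q_rem[m] + A[m] for m in range(M)]
    return Q_next, D_tilde, profit

beta = [[1, 0],
        [1, 2]]
Q_next, D_tilde, profit = one_slot(Q=[3, 5], beta=beta, Z=[1, 1], D=[2, 2],
                                   P=[4.0, 6.0], alpha=[1.0, 1.0],
                                   A=[1, 1], x=[1.0, 2.0])
# Materials run out before the second type-1 sale, so D_tilde = [2, 1].
```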
\begin{table}[ht] \caption{Table of Notation} \centering \begin{tabular}{l l} \hline Notation & Definition \\ \hline $X(t)$ & Supply state, $\pi(x) = Pr[X(t) = x]$ for $x \in \script{X}$\\ $\bv{A}(t) = (A_1(t), \ldots, A_M(t))$ & Raw material purchase vector for slot $t$ \\ $c(\bv{A}(t), X(t))$ & Raw material cost function \\ $\script{A}(X(t))$ & Constraint set for decision variables $\bv{A}(t)$ \\ $Y(t)$ & Consumer demand state, $\pi(y)=Pr[Y(t)=y]$ for $y \in \script{Y}$\\ $\bv{Z}(t) = (Z_1(t), \ldots, Z_K(t))$ & 0/1 sale vector \\ $\bv{P}(t)=(P_1(t), \ldots, P_K(t))$ & Price vector, $P_k(t) \in \script{P}_k$\\ $\bv{D}(t)= (D_1(t), \ldots, D_K(t))$ & Random demand vector (in reaction to $\bv{P}(t)$) \\ $F_k(p,y)$ & Demand function, $F_k(p, y) = \expect{D_k(t)\left|\right.P_k(t) = p, Y(t) = y}$\\ $\bv{Q}(t) = (Q_1(t), \ldots, Q_M(t))$ & Queue vector of raw materials in inventory \\ $Q_{m,max}$ & Maximum buffer size of queue $m$ \\ $\alpha_k$ & Cost incurred by assembly of one product of type $k$ \\ $\beta_{mk}$ & Number of type $m$ raw materials needed to assemble one product of type $k$ \\ $\phi(t)$ & Instantaneous profit variable for slot $t$ (given by (\ref{eq:phi})) \\ $\bv{\mu}(t) = (\mu_1(t), \ldots, \mu_M(t))$ & Departure vector for raw materials, $\mu_m(t) = \sum_{k=1}^K \beta_{mk}Z_k(t)D_k(t)$ \\ $\tilde{\bv{D}}(t), \tilde{\bv{\mu}}(t)$ & Actual fulfilled demands and raw materials used for slot $t$ \\ $\phi_{actual}(t)$ & Actual instantaneous profit for slot $t$ \\ [1ex] \hline \end{tabular} \label{table:notation} \end{table} \section{Characterizing Maximum Time Average Profit} \label{section:optimal-profit} Assume infinite buffer capacity (so that $Q_{m,max} = \infty$ for all $m \in \{1, \ldots, M\}$). Consider any control algorithm that makes decisions for $\bv{Z}(t)$, $\bv{P}(t)$, $\bv{A}(t)$, and also makes scheduling decisions for $\tilde{\bv{D}}(t)$, according to the system structure as described in the previous section.
Define $\phi^{opt}$ as the \emph{maximum time average profit} over all such algorithms, so that all algorithms must satisfy $\overline{\phi}_{actual} \leq \phi^{opt}$, but there exist algorithms that can yield profit arbitrarily close to $\phi^{opt}$. The value of $\phi^{opt}$ is determined by the steady state distributions $\pi(x)$ and $\pi(y)$, the cost function $c(\bv{A}(t), X(t))$, and the demand functions $F_k(p,y)$ according to the following theorem. \begin{thm} \label{thm:max-profit} (Maximum Time Average Profit) Suppose the initial queue states satisfy $\expect{Q_m(0)} < \infty$ for all $m \in \{1, \ldots, M\}$. Then under any control algorithm, the time average achieved profit satisfies: \[ \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\phi_{actual}(\tau)} \leq \phi^{opt} \] where $\phi^{opt}$ is the maximum value of the objective function in the following optimization problem, defined in terms of auxiliary variables $\hat{c}$, $\hat{r}$, $\theta(\bv{a}, x)$, $\hat{a}_m, \hat{\mu}_m$ (for all $x \in \script{X}, \bv{a} \in \script{A}(x)$, $m \in \{1, \ldots, M\}$): \begin{eqnarray*} \mbox{Maximize:} && \phi \\ \mbox{Subject to:} && \phi = -\hat{c} + \hat{r} \: \: \: , \: \: \: \hat{a}_m \geq \hat{\mu}_m \: \: \forall m \\ \hat{c} &=& \sum_{x\in\script{X}} \pi(x)\sum_{\bv{a}\in\script{A}(x)} \theta(\bv{a}, x)c(\bv{a}, x) \\ \hat{r} &=& \sum_{y \in \script{Y}}\pi(y)\sum_{k=1}^K \expect{Z_k(t)(P_k(t)-\alpha_k)F_k(P_k(t), y)\left|\right. 
Y(t) = y} \\ \hat{a}_m &=& \sum_{x \in \script{X}}\pi(x)\sum_{\bv{a}\in\script{A}(x)}\theta(\bv{a}, x) a_m \: \: \forall m \\ \hat{\mu}_m &=& \sum_{y \in \script{Y}}\pi(y)\sum_{k=1}^K \beta_{mk} \expect{Z_k(t)F_k(P_k(t), y)\left|\right.Y(t) = y} \: \: \forall m \\ && \hspace{-.3in} 0 \leq \theta(\bv{a}, x) \leq 1 \: \: \: \forall x \in \script{X} , \bv{a} \in \script{A}(x) \\ && \hspace{-.3in} \sum_{\bv{a}\in\script{A}(x)}\theta(\bv{a}, x) = 1 \: \: \: \forall x \in \script{X} \\ && \hspace{-.3in} P_k(t) \in \script{P}_k \: \: , \: \: Z_k(t) \in \{0, 1\} \: \: \: \forall k , t \end{eqnarray*} where $\bv{P}(t) = (P_1(t), \ldots, P_K(t))$ and $\bv{Z}(t) = (Z_1(t), \ldots, Z_K(t))$ are vectors chosen randomly according to any conditional distribution that depends only on the observed value of $Y(t) = y$. The expectations in the above problem are with respect to the chosen conditional distributions for these decisions. \end{thm} \begin{proof} See Appendix A. \end{proof} In Section \ref{section:dynamic} we show that algorithms can be designed to achieve a time average profit $\overline{\phi}$ that is within $\epsilon$ of the value $\phi^{opt}$ defined in Theorem \ref{thm:max-profit}, for any arbitrarily small $\epsilon>0$. Thus, $\phi^{opt}$ represents the optimal time average profit over all possible algorithms. The variables in Theorem \ref{thm:max-profit} can be interpreted as follows: The variable $\theta(\bv{a}, x)$ represents a conditional probability of choosing $\bv{A}(t) = \bv{a}$ given that the plant observes supply state $X(t) = x$. The variable $\hat{c}$ thus represents the time average cost of purchasing raw materials under this stationary randomized policy, and the variable $\hat{r}$ represents the time average revenue for selling products. The variables $\hat{a}_m$ and $\hat{\mu}_m$ represent the time average arrival and departure rates for queue $m$, respectively.
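To make these auxiliary variables concrete, the following toy computation (with a single material and two invented supply states) evaluates $\hat{c}$ and $\hat{a}_m$ for one particular stationary randomized purchasing policy $\theta(\bv{a}, x)$.

```python
# Toy illustration for M = 1 material: a stationary randomized purchasing
# policy theta(a, x) induces the averages
#   hat_c = sum_x pi(x) sum_a theta(a, x) * c(a, x)
#   hat_a = sum_x pi(x) sum_a theta(a, x) * a
# The two supply states and all numbers below are invented for illustration.

pi = {'cheap': 0.5, 'expensive': 0.5}         # pi(x)
unit_price = {'cheap': 1.0, 'expensive': 3.0}  # per-unit material cost in state x
# theta(a, x): buy 2 units in the cheap state, nothing in the expensive state.
theta = {'cheap': {2: 1.0}, 'expensive': {0: 1.0}}

hat_c = sum(pi[s] * sum(prob * unit_price[s] * a for a, prob in theta[s].items())
            for s in pi)
hat_a = sum(pi[s] * sum(prob * a for a, prob in theta[s].items())
            for s in pi)
# hat_c = 0.5 * (1.0 * 2) = 1.0 and hat_a = 0.5 * 2 = 1.0
```

Only buying in the cheap state keeps the average cost $\hat{c}$ low for the same average arrival rate $\hat{a}_m$, which is the kind of opportunistic purchasing the theorem's stationary randomized policies capture.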
The above theorem thus characterizes $\phi^{opt}$ in terms of all possible \emph{stationary randomized control algorithms}, that is, all algorithms that make randomized choices for $\bv{A}(t), \bv{Z}(t), \bv{P}(t)$ according to fixed conditional distributions given the supply state $X(t)$ and demand state $Y(t)$. Note that Theorem \ref{thm:max-profit} contains no variables for the scheduling decisions for $\tilde{\bv{D}}(t)$, made subject to (\ref{eq:scheduling-constraints1})-(\ref{eq:scheduling-constraints2}). Such scheduling decisions allow choosing $\tilde{\bv{D}}(t)$ in reaction to the demands $\bv{D}(t)$, and hence allow more flexibility beyond the choice of the $\bv{Z}(t)$ and $\bv{P}(t)$ variables alone (which must be chosen before the demands $\bv{D}(t)$ are observed). That such additional scheduling options cannot be exploited to increase time average profit is a consequence of our proof of Theorem \ref{thm:max-profit}. We say that a policy is \emph{$(X,Y)$-only} if it chooses $\bv{P}(t)$, $\bv{Z}(t)$, $\bv{A}(t)$ values as a stationary and randomized function only of the current observed $X(t)$ and $Y(t)$ states. Because the sets $\script{P}_k$ are compact and the functions $F_k(p,y)$ are continuous in $p \in \script{P}_k$ for each $y \in \script{Y}$, it can be shown that the value of $\phi^{opt}$ in Theorem \ref{thm:max-profit} can be \emph{achieved} by a particular $(X,Y)$-only policy, as shown in the following corollary. \begin{cor} \label{cor:1} There exists an $(X,Y)$-only policy $\bv{P}^*(t)$, $\bv{Z}^*(t)$, $\bv{A}^*(t)$ such that: \footnote{Note that in (\ref{eq:a-stat1}) we have changed the ``$\geq$'' into ``$=$''.
It is easy to show that doing so in Theorem \ref{thm:max-profit} does not result in any loss of optimality.} \begin{eqnarray} \expect{\phi^*(t)} &=& \phi^{opt} \label{eq:phi-stat1} \\ \expect{A_m^*(t)} &=& \expect{\mu_m^*(t)} \: \: \forall m \in \{1, \ldots, M\} \label{eq:a-stat1} \end{eqnarray} where $\phi^{opt}$ is the optimal time average profit defined in Theorem \ref{thm:max-profit}, and where $\expect{\phi^*(t)}$ and $\expect{\mu_m^*(t)}$ are given by: \begin{eqnarray*} \expect{\phi^*(t)} &=& -\expect{c(\bv{A}^*(t), X(t))} + \sum_{k=1}^K\expect{Z_k^*(t)(P_k^*(t)-\alpha_k)F_k(P_k^*(t), Y(t))} \\ \expect{\mu_m^*(t)} &=& \sum_{k=1}^K \beta_{mk} \expect{Z_k^*(t)F_k(P_k^*(t), Y(t))} \: \: \forall m \in \{1,\ldots, M\} \end{eqnarray*} where the expectations are with respect to the stationary probability distributions $\pi(x)$ and $\pi(y)$ for $X(t)$ and $Y(t)$, and the (potentially randomized) decisions for $\bv{A}^*(t), \bv{Z}^*(t), \bv{P}^*(t)$ that depend on $X(t)$ and $Y(t)$. \end{cor} \subsection{On the Sufficiency of Two Prices} It can be shown that the $(X,Y)$-only policy of Corollary \ref{cor:1} can be used to achieve time average profit arbitrarily close to optimal as follows: Define a parameter $\rho$ such that $0 < \rho < 1$. Use the $(X,Y)$-only decisions for $\bv{P}^*(t)$ and $\bv{A}^*(t)$ every slot $t$, but use new decisions $\tilde{Z}_k(t) = Z_k^*(t)b_k(t)$, where $b_k(t)$ is an i.i.d. Bernoulli process with $Pr[b_k(t) =1] = \rho$. It follows that the equality (\ref{eq:a-stat1}) becomes: \[ \expect{A_m^*(t)} = \expect{\mu_m^*(t)} = (1/\rho)\expect{\tilde{\mu}_m(t)} \] where $\tilde{\mu}_m(t)$ corresponds to the new decisions $\tilde{Z}_k(t)$. It follows that all queues with non-zero arrival rates $\expect{A_m^*(t)}$ have these rates \emph{strictly greater} than the expected service rates $\expect{\tilde{\mu}_m(t)}$, and so these queues grow to infinity with probability 1.
It follows that we always have enough material to meet the consumer demands, so that $\tilde{\bv{D}}(t) = \bv{D}(t)$ and the scheduling decisions (\ref{eq:scheduling-constraints1})-(\ref{eq:scheduling-constraints2}) become irrelevant. This reduces profit only by a factor $O(1-\rho)$, which can be made arbitrarily small as $\rho \rightarrow 1$. Here we show that the $(X,Y)$-only policy of Corollary \ref{cor:1} can be changed into an $(X,Y)$-only policy that randomly chooses between at most \emph{two} prices for each unique product $k \in \{1, \ldots, K\}$ and each unique demand state $Y(t) \in \script{Y}$, while still satisfying (\ref{eq:phi-stat1})-(\ref{eq:a-stat1}). This result is based on a similar two-price theorem derived in \cite{two-prices-allerton07} for the case of a service provider with a single queue. We extend the result here to the case of a product provider with multiple queues. \begin{thm} \label{thm:two-price} Suppose there exists an $(X,Y)$-only algorithm that allocates $\bv{Z}(t)$ and $\bv{P}(t)$ to yield (for some given values $\hat{r}$ and $\hat{\mu}_m$ for $m \in \{1,\ldots, M\}$): \begin{eqnarray} &\sum_{k=1}^K \expect{Z_k(t)(P_k(t)-\alpha_k)F_k(P_k(t), Y(t))} \geq \hat{r}& \label{eq:two-price1} \\ &\sum_{k=1}^K \beta_{mk} \expect{Z_k(t) F_k(P_k(t), Y(t))} \leq \hat{\mu}_m \: \: \mbox{ for all $m \in \{1, \ldots, M\}$}& \label{eq:two-price2} \end{eqnarray} Then the same inequality constraints can be achieved by a new stationary randomized policy $\bv{Z}^*(t)$, $\bv{P}^*(t)$ that uses at most two prices for each unique product $k \in \{1, \ldots, K\}$ and each unique demand state $Y(t) \in \script{Y}$. \end{thm} \begin{proof} The proof is given in Appendix B. 
\end{proof} The expectation on the left hand side of (\ref{eq:two-price1}) represents the expected revenue generated from sales under the original $(X,Y)$-only policy, and the expectation on the left hand side of (\ref{eq:two-price2}) represents the expected departures from queue $Q_m(t)$ under this policy. The theorem says that the pricing part of the $(X,Y)$-only algorithm, which potentially uses many different price options, can be changed to a 2-price algorithm without decreasing revenue or increasing demand for materials. Simple examples can be given to show that two prices are often \emph{necessary} to achieve maximum time average profit, even when user demand functions are the same for all slots (see \cite{two-prices-allerton07} for a simple example for the related service-provider problem). We emphasize that the $(X,Y)$-only policy of Corollary \ref{cor:1} is not necessarily practical, as implementation would require full knowledge of the $\pi(x)$ and $\pi(y)$ distributions, and it would require a solution to the (very complex) optimization problem of Theorem \ref{thm:max-profit} even if the $\pi(x)$ and $\pi(y)$ distributions were known. Further, it relies on having an infinite buffer capacity (so that $Q_{m,max} = \infty$ for all $m \in\{1, \ldots, M\}$). A more practical algorithm is developed in the next section that overcomes these difficulties. \section{A Dynamic Pricing and Purchasing Algorithm} \label{section:dynamic} Here we construct a dynamic algorithm that makes purchasing and pricing decisions in reaction to the current queue sizes and the observed $X(t)$ and $Y(t)$ states, without knowledge of the $\pi(x)$ and $\pi(y)$ probabilities that govern the evolution of these states. We begin with the assumption that $X(t)$ is i.i.d. over slots with probabilities $\pi(x) = Pr[X(t) = x]$, and $Y(t)$ is i.i.d. over slots with $\pi(y) = Pr[Y(t)=y]$. This assumption is extended to more general non-i.i.d. 
processes in Sections \ref{section:ergodic} and \ref{section:non-ergodic}. Define $1_k(t)$ as an indicator variable that is $1$ if and only if $Q_m(t)< \mu_{m, max}$ for some queue $m$ such that $\beta_{mk}>0$ (so that type $m$ raw material is used to create product $k$): \begin{equation} \label{eq:edge} 1_k(t) = \left\{ \begin{array}{ll} 1 &\mbox{ if $Q_m(t) < \mu_{m, max}$ for some $m$ such that $\beta_{mk}>0$} \\ 0 & \mbox{ otherwise} \end{array} \right. \end{equation} To begin, let us choose an algorithm from the restricted class of algorithms that choose $\bv{Z}(t)$ values to satisfy the following \emph{edge constraint} every slot $t$: \begin{eqnarray} \mbox{For all $k \in\{1, \ldots, K\}$, we have: } Z_k(t) = 0 \mbox{ whenever } 1_k(t) = 1 \label{eq:edge-constraint} \end{eqnarray} That is, the edge constraint (\ref{eq:edge-constraint}) ensures that no type $k$ product is sold unless inventory in all of its corresponding raw material queues $m$ is at least $\mu_{m, max}$. Under this restriction, we always have enough raw material for any generated demand vector, and so $\tilde{D}_k(t) = D_k(t)$ for all products $k$ and all slots $t$. Thus, from (\ref{eq:phi}) we have $\phi_{actual}(t) = \phi(t)$. Define $\mu_m(t)$ as the number of material queue $m$ departures on slot $t$: \begin{equation} \label{eq:mu} \mu_m(t) \defequiv \sum_{k=1}^K \beta_{mk}Z_k(t)D_k(t) \end{equation} The queueing dynamics of (\ref{eq:dynamics}) thus become: \begin{equation} \label{eq:dynamics1} Q_m(t+1) = Q_m(t) - \mu_m(t) + A_m(t) \end{equation} The above equation continues to assume we have infinite buffer space, but we soon show that we need only a finite buffer to implement our solution. 
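The edge indicator and edge constraint can be sketched directly; the $\beta$ matrix, maximum demands, and inventory below are invented for illustration.

```python
# The edge indicator and edge constraint: product k may be offered (Z_k = 1)
# only if every raw material it uses has inventory at least mu_{m,max}.
# The beta matrix, maximum demands, and inventory vector are invented.

beta = [[1, 0],      # product 0 uses material 0 only
        [0, 2]]      # product 1 uses two units of material 1
D_max = [3, 2]       # maximum demands D_{k,max}
# mu_{m,max} = sum_k beta[m][k] * D_{k,max}:
mu_max = [sum(beta[m][k] * D_max[k] for k in range(2)) for m in range(2)]

def edge_indicator(k, Q):
    """1_k(t): 1 iff some material m with beta[m][k] > 0 has Q[m] < mu_max[m]."""
    return int(any(beta[m][k] > 0 and Q[m] < mu_max[m] for m in range(len(Q))))

Q = [5, 2]                                        # material 1 is below mu_{1,max} = 4
flags = [edge_indicator(k, Q) for k in range(2)]  # [0, 1]
Z = [0 if f else 1 for f in flags]                # edge constraint: Z_k = 0 when 1_k = 1
```

Here product 0 can still be offered, but product 1 is withheld because its material buffer is below the threshold that guarantees any realized demand can be met.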
\subsection{Lyapunov Drift} For a given set of non-negative parameters $\{\theta_m\}$ for $m \in \{1, \ldots, M\}$, define the non-negative \emph{Lyapunov function} $L(\bv{Q}(t))$ as follows: \begin{eqnarray} L(\bv{Q}(t)) \defequiv \frac{1}{2}\sum_{m=1}^M (Q_m(t)-\theta_m)^2 \label{eq:lyap-function} \end{eqnarray} This Lyapunov function is similar to that used for stock trading problems in \cite{neely-stock-arxiv}, and has the flavor of keeping queue backlog near a non-zero value $\theta_m$, as in \cite{neely-energy-delay-it}. Define the conditional Lyapunov drift $\Delta(\bv{Q}(t))$ as follows:\footnote{Strictly speaking, we should use the notation $\Delta(\bv{Q}(t), t)$, as the drift may be non-stationary. However, we use the simpler notation $\Delta(\bv{Q}(t))$ as a formal representation of the right hand side of (\ref{eq:lyap}).} \begin{equation} \label{eq:lyap} \Delta(\bv{Q}(t)) \defequiv \expect{L(\bv{Q}(t+1)) - L(\bv{Q}(t)) \left|\right.\bv{Q}(t)} \end{equation} Define a constant $V>0$, to be used to affect the revenue-storage tradeoff. Using the stochastic optimization technique of \cite{now}, our approach is to design a strategy that, every slot $t$, observes current system conditions $\bv{Q}(t)$, $X(t)$, $Y(t)$ and makes pricing and purchasing decisions to minimize a bound on the following ``drift-plus-penalty'' expression: \[ \Delta(\bv{Q}(t)) - V\expect{\phi(t)\left|\right.\bv{Q}(t)} \] where $\phi(t)$ is the instantaneous profit function defined in (\ref{eq:phi}). \subsection{Computing the Drift} We have the following lemma. 
\begin{lem} \label{lem:drift-comp} (Drift Computation) Under any algorithm that satisfies the edge constraint (\ref{eq:edge-constraint}), and for any constants $V\geq0$, $\theta_m \geq 0$ for $m \in\{1, \ldots, M\}$, the Lyapunov drift $\Delta(\bv{Q}(t))$ satisfies: \begin{eqnarray} \Delta(\bv{Q}(t)) - V\expect{\phi(t)|\bv{Q}(t)} &\leq& B - V\expect{\phi(t)|\bv{Q}(t)} \nonumber \\ && + \sum_{m=1}^M(Q_m(t) - \theta_m)\expect{A_m(t) - \mu_m(t) \left|\right.\bv{Q}(t)} \label{eq:drift-q} \end{eqnarray} where the constant $B$ is defined: \begin{equation} \label{eq:B} B \defequiv \frac{1}{2}\sum_{m=1}^M\max[A_{m,max}^2, \mu_{m,max}^2] \end{equation} \end{lem} \begin{proof} The edge constraint (\ref{eq:edge-constraint}) ensures that the dynamics (\ref{eq:dynamics1}) hold for all $t$. By squaring (\ref{eq:dynamics1}) we have: \[ (Q_m(t+1) - \theta_m)^2 = (Q_m(t) - \theta_m)^2 + (A_m(t)-\mu_m(t))^2 + 2(Q_m(t)-\theta_m)(A_m(t) - \mu_m(t)) \] Dividing by $2$, summing over $m \in \{1, \ldots, M\}$, and taking conditional expectations yields: \[ \Delta(\bv{Q}(t)) = \expect{B(t)|\bv{Q}(t)} + \sum_{m=1}^M(Q_m(t) - \theta_m)\expect{A_m(t) - \mu_m(t)|\bv{Q}(t)} \] where $B(t)$ is defined: \begin{equation} \label{eq:Bt-appendix} B(t) \defequiv \frac{1}{2}\sum_{m=1}^M(A_m(t) - \mu_m(t))^2 \end{equation} By the finite bounds on $A_m(t)$ and $\mu_m(t)$, we clearly have $B(t) \leq B$ for all $t$. \end{proof} Now note from the definition of $\mu_m(t)$ in (\ref{eq:mu}) that: \begin{eqnarray} \expect{\mu_m(t) \left|\right.\bv{Q}(t)} &=& \expect{ \sum_{k=1}^K \beta_{mk}Z_k(t)D_k(t) \left|\right.\bv{Q}(t)} \nonumber \\ &=& \sum_{k=1}^K \beta_{mk}\expect{\expect{Z_k(t)D_k(t)\left|\right.\bv{Q}(t), P_k(t), Y(t)}\left|\right.\bv{Q}(t)} \nonumber \\ &=& \sum_{k=1}^K\beta_{mk}\expect{Z_k(t)F_k(P_k(t), Y(t))\left|\right.\bv{Q}(t)} \label{eq:foo} \end{eqnarray} where we have used the law of iterated expectations in the final equality. 
Similarly, we have from the definition of $\phi(t)$ in (\ref{eq:phi}): \begin{eqnarray} \expect{\phi(t) \left|\right.\bv{Q}(t)} &=& -\expect{c(\bv{A}(t), X(t))\left|\right.\bv{Q}(t)} \nonumber \\ && + \sum_{k=1}^K \expect{Z_k(t)(P_k(t)-\alpha_k)F_k(P_k(t), Y(t))\left|\right.\bv{Q}(t)} \label{eq:foo2} \end{eqnarray} Plugging (\ref{eq:foo}) and (\ref{eq:foo2}) into (\ref{eq:drift-q}) yields: \begin{eqnarray} \Delta(\bv{Q}(t)) - V\expect{\phi(t)\left|\right.\bv{Q}(t)} &\leq& B \nonumber \\ && \hspace{-1.8in} +\sum_{m=1}^M (Q_m(t) - \theta_m)\expect{A_m(t) - \sum_{k=1}^K\beta_{mk}Z_k(t)F_k(P_k(t), Y(t)) \left|\right.\bv{Q}(t)} \nonumber \\ && \hspace{-1.8in} + V\expect{c(\bv{A}(t), X(t))\left|\right.\bv{Q}(t)} \nonumber \\ && \hspace{-1.8in} - V \expect{ \sum_{k=1}^K Z_k(t)(P_k(t)-\alpha_k) F_k(P_k(t), Y(t)) \left|\right.\bv{Q}(t)} \label{eq:big-drift2} \end{eqnarray} In particular, the right hand side of (\ref{eq:big-drift2}) is equal to the right hand side of (\ref{eq:drift-q}). \subsection{The Dynamic Purchasing and Pricing Algorithm} Minimizing the right hand side of (\ref{eq:big-drift2}) over all feasible choices of $\bv{A}(t)$, $\bv{Z}(t)$, $\bv{P}(t)$ (given the observed $X(t)$ and $Y(t)$ states and the observed queue values $\bv{Q}(t)$) yields the following algorithm: \emph{\underline{Joint Purchasing and Pricing Algorithm (JPP)}:} Every slot $t$, perform the following actions: \begin{enumerate} \item \emph{Purchasing:} Observe $\bv{Q}(t)$ and $X(t)$, and choose $\bv{A}(t)=(A_1(t), \ldots, A_M(t))$ as the solution of the following optimization problem defined for slot $t$: \begin{eqnarray} \mbox{Minimize:} & Vc(\bv{A}(t), X(t)) + \sum_{m=1}^M A_m(t)(Q_m(t)-\theta_m) \label{eq:purchase1} \\ \mbox{Subject to:} & \bv{A}(t) \in \script{A}(X(t)) \label{eq:purchase2} \end{eqnarray} where $\script{A}(X(t))$ is defined by constraints (\ref{eq:Amax})-(\ref{eq:cmax}). \item \emph{Pricing:} Observe $\bv{Q}(t)$ and $Y(t)$.
For each product $k \in \{1, \ldots, K\}$, if $1_k(t) = 1$, choose $Z_k(t) = 0$ and do not offer product $k$ for sale. If $1_k(t) = 0$, choose $P_k(t)$ as the solution to the following problem: \begin{eqnarray} \mbox{Maximize:} && V(P_k(t)-\alpha_k)F_k(P_k(t), Y(t)) \nonumber \\ && + F_k(P_k(t), Y(t))\sum_{m=1}^M\beta_{mk}(Q_m(t)-\theta_m) \label{eq:pricing1} \\ \mbox{Subject to:} && P_k(t) \in \script{P}_k \label{eq:pricing2} \end{eqnarray} If the resulting maximum value is positive, set $Z_k(t) = 1$ and use the maximizing price $P_k(t)$. Else, set $Z_k(t) = 0$ and do not offer product $k$ for sale. \item \emph{Queue Update:} Fulfill all demands $D_k(t)$, and update the queues $Q_m(t)$ according to (\ref{eq:dynamics1}) (noting by construction of this algorithm that $\tilde{\bv{D}}(t) = \bv{D}(t)$ for all $t$, so that the dynamics (\ref{eq:dynamics1}) are equivalent to (\ref{eq:dynamics})). \end{enumerate} The above JPP algorithm does not require knowledge of probability distributions $\pi(x)$ or $\pi(y)$, and is decoupled into separate policies for pricing and purchasing. The pricing policy is quite simple and involves maximizing a (possibly non-concave) function of one variable $P_k(t)$ over the 1-dimensional set $\script{P}_k$. For example, if $\script{P}_k$ is a discrete set of 1000 price options, this involves evaluating the function over each option and choosing the maximizing price. The purchasing policy is more complex, as $\bv{A}(t)$ is an integer vector that must satisfy (\ref{eq:Amax})-(\ref{eq:cmax}). This is a \emph{knapsack problem} due to the constraint (\ref{eq:cmax}). However, the decision is trivial when $c_{max} = \sum_{m=1}^Mx_{m,max}A_{m,max}$, since the constraint (\ref{eq:cmax}) is then redundant. \subsection{Deterministic Queue Bounds} We have the following simple lemma that shows the above policy can be implemented on a finite buffer system.
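Before stating the lemma, the pricing step (\ref{eq:pricing1})-(\ref{eq:pricing2}) for a single product over a finite price grid can be sketched in code. All numbers and the linear demand curve below are hypothetical placeholders, and the observed demand state $Y(t)$ is folded into the one-variable demand function:

```python
# Sketch of the JPP pricing step for a single product k over a finite price
# grid. All parameters (V, alpha_k, beta, queue levels, theta, the demand
# curve) are hypothetical placeholders; the observed demand state Y(t) is
# absorbed into the one-variable demand function F_k.

def jpp_price(V, alpha_k, beta_k, Q, theta, F_k, price_grid):
    """Return (Z_k, P_k) maximizing
       V*(p - alpha_k)*F_k(p) + F_k(p) * sum_m beta_k[m]*(Q[m] - theta[m])
    over p in price_grid; Z_k = 0 (do not offer) if the maximum is not positive.
    """
    inventory_weight = sum(b * (q - th) for b, q, th in zip(beta_k, Q, theta))
    best_p, best_val = None, 0.0
    for p in price_grid:
        val = F_k(p) * (V * (p - alpha_k) + inventory_weight)
        if val > best_val:
            best_p, best_val = p, val
    if best_p is None:
        return 0, None   # maximization not positive: do not offer product k
    return 1, best_p     # offer product k at the maximizing price

# Example call with a hypothetical linear demand curve over 100 price options:
Z, P = jpp_price(V=10.0, alpha_k=2.0, beta_k=[1.0, 2.0],
                 Q=[5.0, 8.0], theta=[4.0, 6.0],
                 F_k=lambda p: max(0.0, 1.0 - p / 10.0),
                 price_grid=[i / 10 for i in range(1, 101)])
```

Note the single pass over the grid: the inventory term $\sum_m \beta_{mk}(Q_m(t)-\theta_m)$ is computed once, so the per-product cost is linear in the number of price options, matching the discussion above.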
\begin{lem} \label{lem:finite-buffer} (Finite Buffer Implementation) If initial inventory satisfies $Q_m(0) \leq Q_{m, max}$ for all $m \in \{1, \ldots, M\}$, where $Q_{m, max} \defequiv \theta_m + A_{m,max}$, then JPP yields $Q_m(t) \leq Q_{m, max}$ for all slots $t \geq 0$ and all queues $m\in\{1, \ldots, M\}$. \end{lem} \begin{proof} Fix a queue $m$, and suppose that $Q_m(t) \leq Q_{m, max}$ for some slot $t$ (this holds by assumption at $t=0$). We show that $Q_m(t+1) \leq Q_{m, max}$. To see this, note that because the function $c(\bv{A}(t), X(t))$ in (\ref{eq:example-cost}) is non-decreasing in every entry of $\bv{A}(t)$ for each $X(t)\in \script{X}$, the purchasing policy (\ref{eq:purchase1})-(\ref{eq:purchase2}) yields $A_m(t) = 0$ for any queue $m$ that satisfies $Q_m(t) > \theta_m$. It follows that $Q_m(t)$ cannot increase if it is greater than $\theta_m$, and so $Q_m(t+1) \leq \theta_m + A_{m, max}$ (because $A_{m,max}$ is the maximum amount of increase for queue $m$ on any slot). \end{proof} The following important related lemma shows that queue sizes $Q_m(t)$ never fall below $\mu_{m, max}$, provided that they start with at least this much raw material and that the $\theta_m$ values are chosen to be sufficiently large. Specifically, define $\theta_m$ as follows: \begin{equation} \label{eq:good-theta} \theta_m \defequiv \max_{\{k \in\{1, \ldots, K\} | \beta_{mk}>0\}} \left[\frac{V(P_{k,max} - \alpha_k)}{\beta_{mk}} + \sum_{i\in\{1, \ldots, M\} , i \neq m} \frac{\beta_{ik}A_{i,max}}{\beta_{mk}} + 2\mu_{m,max} \right] \end{equation} \begin{lem} \label{lem:lower-bound} Suppose that $\theta_m$ is defined by (\ref{eq:good-theta}), and that $Q_m(0) \geq \mu_{m,max}$ for all $m \in \{1, \ldots, M\}$. Then for all slots $t\geq0$ we have: \[ Q_m(t) \geq \mu_{m,max} \: \: \forall m \in\{1, \ldots, M\} \] \end{lem} \begin{proof} Fix $m \in \{1, \ldots, M\}$, and suppose that $Q_m(t) \geq \mu_{m,max}$ for some slot $t$ (this holds by assumption for $t=0$).
We prove it also holds for slot $t+1$. If $Q_m(t) \geq 2\mu_{m,max}$, then $Q_m(t+1) \geq \mu_{m,max}$ (because at most $\mu_{m,max}$ units can depart queue $m$ on any slot), and hence we are done. Suppose now that $\mu_{m,max} \leq Q_m(t) < 2\mu_{m,max}$. In this case the pricing functional in (\ref{eq:pricing1}) satisfies the following for any product $k$ such that $\beta_{mk} >0$: \begin{eqnarray} &&\hspace{-.3in}V(P_k(t) - \alpha_k)F_k(P_k(t), Y(t)) + F_k(P_k(t), Y(t))\sum_{i=1}^M\beta_{ik}(Q_i(t)-\theta_i) \nonumber \\ &\leq& F_k(P_k(t),Y(t)) \times \nonumber \\ && \left[V(P_{k,max} - \alpha_k) + \sum_{i\in\{1, \ldots, M\} ,i \neq m}\beta_{ik}A_{i,max} + \beta_{mk}(2\mu_{m,max}-\theta_m)\right] \label{eq:edge2} \\ &\leq& 0 \label{eq:edge3} \end{eqnarray} where in (\ref{eq:edge2}) we have used the fact that $(Q_i(t) - \theta_i) \leq A_{i,max}$ for all queues $i$ (and in particular for all $i \neq m$) by Lemma \ref{lem:finite-buffer}. In (\ref{eq:edge3}) we have used the definition of $\theta_m$ in (\ref{eq:good-theta}). It follows that the pricing rule (\ref{eq:pricing1})-(\ref{eq:pricing2}) sets $Z_k(t) = 0$ for all products $k$ that use raw material $m$, and so no departures can take place from $Q_m(t)$ on the current slot. Thus: $\mu_{m,max} \leq Q_m(t) \leq Q_m(t+1)$, and we are done. \end{proof} \subsection{Performance Analysis of JPP} \begin{thm} \label{thm:performance1} (Performance of JPP) Suppose that $\theta_m$ is defined by (\ref{eq:good-theta}) for all $m \in \{1, \ldots, M\}$, and that $\mu_{m,max} \leq Q_m(0) \leq \theta_m + A_{m,max}$ for all $m \in \{1, \ldots, M\}$. Suppose $X(t)$ and $Y(t)$ are i.i.d. over slots. Then under the JPP algorithm implemented with any parameter $V>0$, we have: (a) For all $m \in \{1, \ldots, M\}$ and all slots $t \geq 0$: \[ \mu_{m,max} \leq Q_m(t) \leq Q_{m, max} \defequiv \theta_{m} + A_{m,max} \] where $Q_{m,max} = O(V)$.
(b) For all slots $t>0$ we have: \begin{equation} \frac{1}{t}\sum_{\tau=0}^{t-1}\expect{\phi_{actual}(\tau)} \geq \phi^{opt} - \frac{B}{V} - \frac{\expect{L(\bv{Q}(0))}}{Vt} \end{equation} where the constant $B$ is defined in (\ref{eq:B}) and is independent of $V$, and where $\phi^{opt}$ is the optimal time average profit defined in Theorem \ref{thm:max-profit}. (c) The time average profit converges with probability 1, and satisfies: \begin{equation} \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \phi_{actual}(\tau) \geq \phi^{opt} - \frac{B}{V} \: \: \: \: \mbox{(with probability 1)} \end{equation} \end{thm} Thus, the time average profit is within $O(1/V)$ of optimal, and hence can be pushed arbitrarily close to optimal by increasing $V$, with a tradeoff in the maximum buffer size that is $O(V)$. Defining $\epsilon = B/V$ yields the desired $[O(\epsilon), O(1/\epsilon)]$ profit-buffer size tradeoff. \begin{proof} (Theorem \ref{thm:performance1} parts (a) and (b)) Part (a) follows immediately from Lemmas \ref{lem:finite-buffer} and \ref{lem:lower-bound}. To prove part (b), note that JPP observes $\bv{Q}(t)$ and makes control decisions for $Z_k(t)$, $\bv{A}(t)$, $P_k(t)$ that minimize the right hand side of (\ref{eq:drift-q}) over all feasible alternative choices. Thus: \begin{eqnarray} \Delta(\bv{Q}(t)) - V\expect{\phi(t)\left|\right.\bv{Q}(t)} &\leq& B - V\expect{\phi^*(t)|\bv{Q}(t)} \nonumber \\ && + \sum_{m=1}^M(Q_m(t)-\theta_m)\expect{A_m^*(t)-\mu_m^*(t)|\bv{Q}(t)} \label{eq:big-drift3} \end{eqnarray} where $\expect{\phi^*(t)|\bv{Q}(t)}$ and $\expect{\mu_m^*(t)|\bv{Q}(t)}$ correspond to any alternative choices for the decision variables $Z_k^*(t)$, $P_k^*(t)$, $A_m^*(t)$ subject to the same constraints, namely that $P_k^*(t) \in \script{P}_k$, $A_m^*(t)$ satisfies (\ref{eq:Amax})-(\ref{eq:cmax}), and $Z_k^*(t) \in \{0, 1\}$, and $Z_k^*(t) = 0$ whenever $1_k(t) = 1$.
Because $Q_m(t) \geq \mu_{m,max}$ for all $m$, we have $1_k(t)=0$ for all $k \in \{1, \ldots, K\}$ (where $1_k(t)$ is defined in (\ref{eq:edge})). Thus, the $(X,Y)$-only policy of Corollary \ref{cor:1} satisfies the desired constraints. Further, this policy makes decisions based only on $(X(t), Y(t))$, which are i.i.d. over slots and hence independent of the current queue state $\bv{Q}(t)$. Thus, from (\ref{eq:phi-stat1})-(\ref{eq:a-stat1}) we have: \begin{eqnarray*} \expect{\phi^*(t)|\bv{Q}(t)} &=& \expect{\phi^*(t)} = \phi^{opt} \\ \expect{A_m^*(t) - \mu_m^*(t)|\bv{Q}(t)} &=& \expect{A_m^*(t) - \mu_m^*(t)} = 0 \: \: \forall m \in \{1, \ldots, M\} \end{eqnarray*} Plugging the above two identities into the right hand side of (\ref{eq:big-drift3}) yields: \begin{eqnarray} \Delta(\bv{Q}(t)) - V\expect{\phi(t)\left|\right.\bv{Q}(t)} &\leq& B -V\phi^{opt} \label{eq:big-drift4} \end{eqnarray} Taking expectations of the above and using the definition of $\Delta(\bv{Q}(t))$ yields: \[ \expect{L(\bv{Q}(t+1))} - \expect{L(\bv{Q}(t))} - V\expect{\phi(t)} \leq B - V\phi^{opt} \] The above holds for all slots $t\geq0$. Summing over $\tau\in \{0, 1, \ldots, t-1\}$ for some integer $t>0$ yields: \begin{equation*} \expect{L(\bv{Q}(t))} - \expect{L(\bv{Q}(0))} - V\sum_{\tau=0}^{t-1}\expect{\phi(\tau)} \leq Bt- Vt\phi^{opt} \end{equation*} Dividing by $tV$, rearranging terms, and using non-negativity of $L(\bv{Q}(t))$ yields: \begin{equation} \label{eq:here} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\phi(\tau)} \geq \phi^{opt} - \frac{B}{V} - \frac{\expect{L(\bv{Q}(0))}}{Vt} \end{equation} Because $1_k(t) = 0$ for all $k$ and all $\tau$, we have $\phi(\tau) = \phi_{actual}(\tau)$ for all $\tau$, and we are done. 
\end{proof} \begin{proof} (Theorem \ref{thm:performance1} part (c)) Taking the liminf of (\ref{eq:here}) as $t\rightarrow\infty$ proves that: \[ \liminf_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\phi(\tau)} \geq \phi^{opt} - B/V \] The above result made no assumption on the initial distribution of $\bv{Q}(0)$. Thus, letting $\bv{Q}(0)$ be any particular initial state, we have: \begin{equation} \label{eq:recurrent} \liminf_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\expect{\phi(\tau)|\bv{Q}(0)} \geq \phi^{opt} - B/V \end{equation} However, under the JPP algorithm the process $\bv{Q}(t)$ is a discrete time Markov chain with finite state space (because it is an integer valued vector with finite bounds given in part (a)). Suppose now the initial condition $\bv{Q}(0)$ is a recurrent state. It follows that the time average of $\phi(t)$ must converge to a well-defined constant $\overline{\phi}(\bv{Q}(0))$ with probability 1 (where the constant may depend on the initial recurrent state $\bv{Q}(0)$ that is chosen). Further, because $\phi(\tau)$ is bounded above and below by finite constants for all $\tau$, by the Lebesgue dominated convergence theorem we have: \[ \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\expect{\phi(\tau)|\bv{Q}(0)} = \overline{\phi}(\bv{Q}(0)) \] Using this in (\ref{eq:recurrent}) yields: \[ \overline{\phi}(\bv{Q}(0)) \geq \phi^{opt} - B/V \] This is true for all initial states that are recurrent. If $\bv{Q}(0)$ is a transient state, then $\bv{Q}(t)$ eventually reaches a recurrent state and hence achieves a time average that is, with probability 1, greater than or equal to $\phi^{opt} - B/V$. \end{proof} \subsection{Place-Holder Values} Theorem \ref{thm:performance1} seems to require the initial queue values to satisfy $Q_m(0) \geq \mu_{m,max}$. This suggests that we need to purchase that many raw materials before start-up.
Here we use the \emph{place holder backlog} technique of \cite{neely-asilomar08} to show that we can achieve the same performance without this initial start-up cost and with no loss of optimality. The technique also allows us to reduce our maximum buffer size requirement $Q_{m,max}$ by an amount $\mu_{m,max}$ for all $m \in \{1, \ldots, M\}$, with no loss in performance. To do this, we start the system off with exactly $\mu_{m,max}$ units of \emph{fake raw material} in each queue $m \in \{1, \ldots, M\}$. Let $Q_m(t)$ be the total raw material in queue $m$, including both the actual and fake material. Let $Q^{actual}_m(t)$ be the amount of actual raw material in queue $m$. Then for slot $0$ we have: \[ Q_m(0) = Q_m^{actual}(0) + \mu_{m,max} \: \: \: \forall m \in \{1, \ldots, M\} \] Assume that $\mu_{m,max} \leq Q_m(0) \leq \theta_m + A_{m,max}$ for all $m \in \{1, \ldots, M\}$, as needed for Theorem \ref{thm:performance1}. Thus, $0 \leq Q_m^{actual}(0) \leq \theta_m + A_{m,max} - \mu_{m,max}$ (so that the actual initial condition can be 0). We run the JPP algorithm as before, using the $\bv{Q}(t)$ values (not the actual queue values). However, if we are ever asked to assemble a product, we use \emph{actual} raw materials whenever possible. The only difficulty arises if we are asked to assemble a product for which not enough actual raw materials are available. Fortunately, we know from Theorem \ref{thm:performance1} that the queue value $Q_m(t)$ never decreases below $\mu_{m,max}$ for any $m\in\{1, \ldots, M\}$. It follows that we are \emph{never} asked to use any of our fake raw material.
Therefore, the fake raw material stays untouched in each queue for all time, and we have: \begin{equation} \label{eq:fake-backlog} Q_m(t) = Q_m^{actual}(t) + \mu_{m,max} \: \: \forall m \in \{1, \ldots, M\}, \forall t\geq 0 \end{equation} The sample path of the system is equivalent to a sample path that does not use fake raw material, but starts the system in the non-zero initial condition $\bv{Q}(0)$. Hence, the resulting profit achieved is the same. Because the limiting time average profit does not depend on the initial condition, the time average profit is still at least $\phi^{opt} - B/V$. However, by (\ref{eq:fake-backlog}) the actual amount of raw material held is reduced by exactly $\mu_{m,max}$ on each slot, which also reduces the maximum buffer size $Q_{m,max}$ by exactly this amount. \subsection{Demand-Blind Pricing} As in \cite{two-prices-allerton07} for the service provider problem, here we consider the special case when the demand function $F_k(P_k(t), Y(t))$ has the form: \begin{equation} \label{eq:demand-blind} F_k(P_k(t), Y(t)) = h_k(Y(t))\hat{F}_k(P_k(t)) \end{equation} for some non-negative functions $h_k(Y(t))$ and $\hat{F}_k(P_k(t))$. This holds, for example, when $Y(t)$ represents the integer number of customers at time $t$, and $\hat{F}_k(p)$ is the expected demand at price $p$ for each customer, so that $F_k(P_k(t),Y(t)) = Y(t)\hat{F}_k(P_k(t))$. Under the structure (\ref{eq:demand-blind}), the JPP pricing algorithm (\ref{eq:pricing1}) reduces to choosing $P_k(t) \in \script{P}_k$ to maximize: \[ V(P_k(t) - \alpha_k)\hat{F}_k(P_k(t)) + \hat{F}_k(P_k(t))\sum_{m=1}^M\beta_{mk}(Q_m(t)-\theta_m) \] Thus, the pricing can be done without knowledge of the demand state $Y(t)$. \subsection{Extension to 1-slot Assembly Delay} \label{subsection:assembly-delay} Consider now the modified situation where products require one slot for assembly, but where consumers still require a product to be provided on the same slot in which it is purchased.
This can easily be accommodated by maintaining an additional set of $K$ \emph{product queues} for storing finished products. Specifically, each product queue $k$ is initialized with $D_{k,max}$ units of finished products. The plant also keeps the same material queues $\bv{Q}(t)$ as before, and makes all control decisions exactly as before (ignoring the product queues), so that every sample path of queue levels and control variables is the same as before. However, when new products are purchased, the consumers do not wait for assembly, but take the corresponding amount of products out of the product queues. This exact amount is replenished when the new products complete their assembly at the end of the slot. Thus, at the beginning of every slot there are always exactly $D_{k,max}$ units of type $k$ products in the product queues. The total profit is then the same as before, with the exception that the plant incurs a fixed startup cost associated with initializing all product queues with a full amount of finished products. \subsection{Extension to Price-Vector Based Demands} Suppose that the demand function $F_k(P_k(t), Y(t))$ associated with product $k$ is changed to a function $F_k(\bv{P}(t), \bv{Z}(t), Y(t))$ that depends on the full price vector $\bv{P}(t)$ and the product offering vector $\bv{Z}(t)$. This does not significantly change the analysis of the Maximum Profit Theorem (Theorem \ref{thm:max-profit}) or of the dynamic control policy of this section. Specifically, the maximum time average profit $\phi^{opt}$ is still characterized by randomized $(X,Y)$-only algorithms, with the exception that the new price-vector based demand function is used in the optimization of Theorem \ref{thm:max-profit}. Similarly, the dynamic control policy of this section is only changed by replacing the original demand function with the new demand function. However, the 2-price result of Theorem \ref{thm:two-price} would no longer apply in this setting.
This is because Theorem \ref{thm:two-price} uses a strategy of independent pricing that only applies if the demand function for product $k$ depends only on $P_k(t)$ and $Y(t)$. In the case when demands are affected by the full price vector, a modified analysis can show that $\phi^{opt}$ can be achieved over stationary randomized algorithms that use at most $\min[K, M]+1$ price vectors $(\bv{Z}^{(i)}, \bv{P}^{(i)})$ for each demand state $Y(t) \in \script{Y}$. \section{Ergodic Models} \label{section:ergodic} In this section, we consider the performance of the Joint Purchasing and Pricing Algorithm (JPP) under a more general class of non-i.i.d. consumer demand state $Y(t)$ and material supply state $X(t)$ processes that possess the \emph{decaying memory property} (defined below). In this case, the deterministic queue bounds in part (a) of Theorem \ref{thm:performance1} still hold. This is because part (a) is a \emph{sample path statement}, which holds under any arbitrary $Y(t)$ and $X(t)$ processes. Hence we only have to look at the profit performance. \subsection{The Decaying Memory Property} In this case, we first assume that the stochastic processes $Y(t)$ and $X(t)$ both have well-defined time averages. Specifically, we assume that: \begin{eqnarray} \lim_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}1\{Y(\tau)=y\}=\pi(y)\,\,\text{with probability 1}\label{eq:ergodicity_y}\\ \lim_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}1\{X(\tau)=x\}=\pi(x)\,\,\text{with probability 1},\label{eq:ergodicity_x} \end{eqnarray} where $\pi(y)$ and $\pi(x)$ are the same as in the i.i.d. case for all $x$ and $y$. Now we consider implementing the $(X, Y)$-only policy in Corollary \ref{cor:1}. Because this policy makes decisions every slot purely as a function of the current states $X(t)$ and $Y(t)$, and because the limiting fractions of time of being in states $x, y$ are the same as in the i.i.d.
case, we see that Corollary \ref{cor:1} still holds if we take the limit as $t$ goes to infinity, i.e.: \begin{eqnarray} \phi^{opt} &=&\lim_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\expect{\phi^*(\tau)}\label{eq:ergodic_profit}\\ 0&=&\lim_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\expect{A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))}\quad\forall m \label{eq:ergodic_rates} \end{eqnarray} where $\expect{\phi^*(\tau)}$, $\expect{A^*_m(\tau)}$ and $\expect{\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))}$ are defined as in Corollary \ref{cor:1}, with expectations taken over the distributions of $X(t)$ and $Y(t)$ at time $t$ and the possible randomness of the policy. We now define $H(t)$ to be the system history up to time slot $t$ as follows: \begin{eqnarray} H(t)\triangleq \{X(\tau), Y(\tau)\}_{\tau=0}^{t-1}\cup \{[Q_m(\tau)]_{m=1}^M\}_{\tau=0}^t.\label{eq:history} \end{eqnarray} We say that the state processes $X(t)$ and $Y(t)$ have the \emph{decaying memory property} if for any small $\epsilon>0$, there exists a positive integer $T=T_{\epsilon}$, i.e., $T$ is a function of $\epsilon$, such that for any $t_0\in\{0, 1, 2, ...\}$ and any $H(t_0)$, the following holds under the $(X, Y)$-only policy in Corollary \ref{cor:1}: \begin{eqnarray} \bigg|\phi^{opt}-\frac{1}{T}\sum_{\tau=t_0}^{t_0+T-1}\expect{ \phi^*(\tau)\left.|\right. H(t_0)} \bigg|\leq\epsilon.\label{eq:decaying_profit}\\ \bigg| \frac{1}{T}\sum_{\tau=t_0}^{t_0+T-1}\expect{A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau)) \left.|\right. H(t_0)} \bigg|\leq\epsilon\label{eq:decaying_rate} \end{eqnarray} It is easy to see that if $X(t)$ and $Y(t)$ both evolve according to a finite state ergodic Markov chain, then the above will be satisfied. If $X(t)$ and $Y(t)$ are i.i.d. over slots, then $T_{\epsilon} = 1$ for all $\epsilon\geq 0$. 
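To make the decaying memory property concrete, the following sketch simulates a hypothetical two-state ergodic Markov chain for $Y(t)$ and checks that $T$-slot window averages started from a fixed state approach the stationary fraction as $T$ grows, so that the achievable $\epsilon$ shrinks with $T$; all transition probabilities are illustrative:

```python
import random

# Hypothetical 2-state ergodic Markov chain for Y(t) in {0, 1}, illustrating
# the decaying memory property: the T-slot empirical average of 1{Y=1},
# started from any fixed state, approaches the stationary fraction pi1 as T
# grows, so the achievable epsilon in the property shrinks with T.

P1 = {0: 0.3, 1: 0.6}                  # P1[y]: probability of moving y -> 1
pi1 = P1[0] / (P1[0] + (1 - P1[1]))    # stationary P(Y = 1) = 0.3/0.7

def window_average(y0, T, runs=20000, seed=0):
    """Monte Carlo estimate of the expected fraction of slots with Y = 1
    over a T-slot window, starting from the fixed state y0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        y, ones = y0, 0
        for _ in range(T):
            y = 1 if rng.random() < P1[y] else 0
            ones += y
        total += ones / T
    return total / runs

# The gap to pi1 (a proxy for the best epsilon at window length T) shrinks:
gaps = {T: abs(window_average(0, T) - pi1) for T in (1, 16, 64)}
```

For this chain the gap decays geometrically in the window length, consistent with the $T_{\epsilon} = O(\log(1/\epsilon))$ behavior of finite state ergodic Markov chains mentioned below.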
\subsection{Performance of JPP under the Ergodic Model} We now present the performance result of JPP under this decaying memory property. \begin{thm}\label{theorem:JPP_ergodic} Suppose the Joint Purchasing and Pricing Algorithm (JPP) is implemented, with $\theta_m$ satisfying condition (\ref{eq:good-theta}), and that $\mu_{m, max}\leq Q_m(0)\leq\theta_m+A_{m, max}$ for all $m\in\{1, 2, ..., M\}$. Then the queue backlog values $Q_m(t)$ for all $m\in\{1, \ldots, M\}$ satisfy part (a) of Theorem \ref{thm:performance1}. Further, for any $\epsilon>0$ and $T$ such that (\ref{eq:decaying_profit}) and (\ref{eq:decaying_rate}) hold, we have that: \begin{eqnarray} \liminf_{t\rightarrow\infty}\frac{1}{t} \sum_{\tau=0}^{t-1}\expect{\phi_{actual}(\tau) }\geq \phi^{opt} - \frac{TB}{V}- \epsilon \left(1+\sum_{m=1}^M \frac{\max[\theta_m,A_{m,max}]}{V}\right), \label{eq:exp_Tdrift_final} \end{eqnarray} where $B$ is defined in (\ref{eq:B}). \end{thm} To understand this result, note that the coefficient multiplying the $\epsilon$ term in the right hand side of (\ref{eq:exp_Tdrift_final}) is $O(1)$ (recall that $\theta_m$ in (\ref{eq:good-theta}) is linear in $V$). Thus, for a given $\epsilon>0$, the final term is $O(\epsilon)$. Let $T = T_{\epsilon}$ represent the required mixing time to achieve (\ref{eq:decaying_profit})-(\ref{eq:decaying_rate}) for the given $\epsilon$, and choose $V = T_{\epsilon}/\epsilon$. Then by (\ref{eq:exp_Tdrift_final}), we are within $\epsilon B + O(\epsilon) = O(\epsilon)$ of the optimal time average profit $\phi^{opt}$, with buffer size $O(V) = O(T_{\epsilon}/\epsilon)$. For i.i.d. processes $X(t)$, $Y(t)$, we have $T_{\epsilon}=1$ for all $\epsilon\geq 0$, and so the buffer size is $O(1/\epsilon)$. 
For processes $X(t)$, $Y(t)$ that are modulated by finite state ergodic Markov chains, it can be shown that $T_{\epsilon} = O(\log(1/\epsilon))$, and so the buffer size requirement is $O((1/\epsilon)\log(1/\epsilon))$ \cite{rahul-cognitive-tmc}\cite{two-prices-allerton07}. To prove the theorem, it is useful to define the following notions. Using the same Lyapunov function in (\ref{eq:lyap-function}) and a positive integer $T$, we define the $T$-slot Lyapunov drift as follows: \begin{eqnarray} \Delta_T(H(t))\triangleq \expect{L(\bv{Q}(t+T))-L(\bv{Q}(t))\left.|\right. H(t)},\label{eq:Tslotdrift} \end{eqnarray} where $H(t)$ is defined in (\ref{eq:history}) as the past history up to time $t$. It is also useful to define the following notion: \begin{eqnarray} \hat{\Delta}_T(t)\triangleq \expect{L(\bv{Q}(t+T))-L(\bv{Q}(t))\left.|\right. \hat{H}_T(t)},\label{eq:Tslotdrift_sp} \end{eqnarray} where: \begin{eqnarray} \hat{H}_T(t)= \{H(t)\} \cup \{X(t), Y(t), ..., X(t+T-1), Y(t+T-1)\}\cup\{\bv{Q}(t)\}\label{eq:Tslot_realization_def} \end{eqnarray} The value $\hat{H}_T(t)$ represents all history $H(t)$, and additionally includes the sequence of realizations of the supply and demand states in the interval $\{t, t+1, ..., t+T-1\}$. It also includes the backlog vector $\bv{Q}(t)$ (this is already included in $H(t)$, but we explicitly include it again in (\ref{eq:Tslot_realization_def}) for convenience). Given these values, the expectation in (\ref{eq:Tslotdrift_sp}) is with respect to the random demand outcomes $D_k(t)$ and the possibly randomized control actions. We have the following lemma: \begin{lem}\label{lemma:sample_path_Tdrift} Suppose the JPP algorithm is implemented with the $\theta_m$ values satisfying (\ref{eq:good-theta}) and $\mu_{m, max}\leq Q_m(0)\leq\theta_m+A_{m, max}$ for all $m\in\{1, 2, ..., M\}$. 
Then for any $t_0\in\{0, 1, 2, ...\}$, any integer $T$, any $\hat{H}_T(t_0)$, and any $\bv{Q}(t_0)$ value, we have: \begin{eqnarray} \hspace{-.2in}&&\hat{\Delta}_T(t_0)-V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)\left.|\right. \hat{H}_T(t_0)}\leq T^2B -V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi^*(\tau)\left.|\right. \hat{H}_T(t_0)}\label{eq:Tdrift_samplepath_lemma}\\ \hspace{-.2in}&& +\sum_{m=1}^M\big(Q_m(t_0)-\theta_m\big)\sum_{\tau=t_0}^{t_0+T-1}\expect{A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\left.|\right. \hat{H}_T(t_0)},\nonumber \end{eqnarray} where $B$ is defined in (\ref{eq:B}), and $\phi^*(\tau)$, $A^*_m(\tau)$, $Z_k^*(\tau)$, $P^*_k(\tau)$ are variables generated by any other policy that can be implemented over the $T$ slot interval (including those that know the future $X(\tau)$, $Y(\tau)$ states in this interval). \end{lem} \begin{proof} See Appendix C. \end{proof} We now prove Theorem \ref{theorem:JPP_ergodic}. \begin{proof} (Theorem \ref{theorem:JPP_ergodic}) Fix any $t_0\geq0$. Taking expectations on both sides of (\ref{eq:Tdrift_samplepath_lemma}) (conditioning on the information $H(t_0)$ that is already included in $\hat{H}_T(t_0)$) yields: \begin{eqnarray*} \hspace{-.2in}&&\Delta_T(H(t_0)) - V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)\left.|\right. H(t_0)}\leq T^2B- V\sum_{\tau=t_0}^{t_0+T-1}\expect{\phi^*(\tau)\left.|\right. H(t_0)} \\ \hspace{-.2in}&&+\sum_{m=1}^M(Q_m(t_0)-\theta_m)\sum_{\tau=t_0}^{t_0+T-1}\expect{A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\left.|\right. H(t_0)}. \end{eqnarray*} Now plugging in the policy in Corollary \ref{cor:1} and using the $\epsilon$ and $T$ that yield (\ref{eq:decaying_profit}) and (\ref{eq:decaying_rate}) in the above, and using the fact that $|Q_m(t_0)-\theta_m|\leq \max[\theta_m, A_{m,max}]$, we have: \begin{eqnarray} \Delta_T(H(t_0)) - V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)\left.|\right.
H(t_0)}\leq \nonumber \\ T^2B- VT\phi^{opt} + VT\epsilon +T\sum_{m=1}^M \max[\theta_m, A_{m,max}]\epsilon.\label{eq:exp_Tdrift_0} \end{eqnarray} We can now take expectations of (\ref{eq:exp_Tdrift_0}) over $H(t_0)$ to obtain: \begin{eqnarray} \expect{L(\bv{Q}(t_0+T)) - L(\bv{Q}(t_0))} - V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau) }\leq \nonumber \\ T^2B- VT\phi^{opt} + VT\epsilon +T\sum_{m=1}^M \max[\theta_m, A_{m,max}]\epsilon. \nonumber \end{eqnarray} Summing the above over $t_0= 0, T, 2T, ..., (J-1)T$ for some positive integer $J$ and dividing both sides by $VTJ$, we get: \begin{eqnarray} \frac{\expect{L(\bv{Q}(JT)) - L(\bv{Q}(0))}}{VTJ} - \frac{1}{JT} \sum_{\tau=0}^{JT-1}\expect{\phi(\tau) }\leq \nonumber \\ \frac{TB}{V}- \phi^{opt} + \epsilon +\sum_{m=1}^M \frac{\max[\theta_m, A_{m,max}]\epsilon}{V}.\label{eq:exp_Tdrift_2} \end{eqnarray} By rearranging terms and using the fact that $L(\bv{Q}(t))\geq0$ for all $t$, we obtain: \begin{eqnarray*} \frac{1}{JT} \sum_{\tau=0}^{JT-1}\expect{\phi(\tau) }\geq \phi^{opt} - \frac{TB}{V}- \epsilon \left(1+\sum_{m=1}^M \frac{\max[\theta_m,A_{m,max}]}{V}\right) - \frac{\expect{ L(\bv{Q}(0))}}{VTJ}. \end{eqnarray*} Taking the liminf as $J\rightarrow\infty$, we have: \begin{eqnarray} \liminf_{t\rightarrow\infty}\frac{1}{t} \sum_{\tau=0}^{t-1}\expect{\phi(\tau) }\geq \phi^{opt} - \frac{TB}{V}- \epsilon \left(1+\sum_{m=1}^M \frac{\max[\theta_m,A_{m,max}]}{V}\right). \end{eqnarray} This completes the proof of Theorem \ref{theorem:JPP_ergodic}. \end{proof} \section{Arbitrary Supply and Demand Processes} \label{section:non-ergodic} In this section, we further relax our assumption about the supply and demand processes to allow for \emph{arbitrary} $X(t)$ and $Y(t)$ processes, and we look at the performance of the JPP algorithm. Note that in this case, the notion of ``optimal'' time average profit may no longer be applicable.
Thus, we instead compare the performance of JPP with the optimal value that one can achieve over a time interval of length $T$. This optimal value is defined to be the supremum of the achievable average profit over any policy, including those which know the entire realizations of $X(t)$ and $Y(t)$ over the $T$ slots at the very beginning of the interval. We will call such a policy a \emph{$T$-slot Lookahead policy} in the following. We will show that in this case, JPP's performance is close to that under an optimal $T$-slot Lookahead policy. This $T$-slot lookahead metric is similar to the one used in \cite{neely-stock-arxiv}\cite{neely-universal-scheduling}, with the exception that here we compare to policies that know only the $X(t)$ and $Y(t)$ realizations and not the demand $D_k(t)$ realizations. \subsection{The $T$-slot Lookahead Performance} Let $T$ be a positive integer and let $t_0\geq0$. Define $\phi_T(t_0)$ as the optimal expected profit achievable over the interval $\{t_0, t_0+1, ..., t_0+T-1\}$ by any policy that has complete knowledge of the $X(t)$ and $Y(t)$ realizations over this interval and that ensures that the quantity of the raw materials purchased over the interval is equal to the expected amount consumed. Note here that although the future $X(t)$, $Y(t)$ values are assumed to be known, the random demands $D_k(t)$ are still unknown. Mathematically, $\phi_T(t_0)$ can be defined as the solution to the following optimization problem: \begin{eqnarray} \hspace{-.3in}(\textbf{P1}) &&\nonumber\\ \hspace{-.3in} \max: &&\phi_T(t_0)=\sum_{\tau=t_0}^{t_0+T-1} \expect{\phi(\tau)\left.|\right. \hat{H}_T(t_0)}\label{eq:universal_obj}\\ \hspace{-.3in}s.t. && \sum_{\tau=t_0}^{t_0+T-1} \expect{A_m(\tau)-\sum_{k=1}^K\beta_{mk}Z_k(\tau)F_k(P_k(\tau), Y(\tau))\left.|\right.
\hat{H}_T(t_0)} = 0, \,\,\,\forall m \label{eq:universal_cond}\\ \hspace{-.3in}&& \text{Constraints}\,\, (\ref{eq:p-k-constraint}), (\ref{eq:Amax}), (\ref{eq:integer-a}), (\ref{eq:cmax}).\label{eq:universal_cond2} \end{eqnarray} Here $\hat{H}_T(t_0)$ is defined in (\ref{eq:Tslot_realization_def}) and includes the sequence of realizations of $X(t)$ and $Y(t)$ during the interval $\{t_0, ..., t_0+T-1\}$; $\phi(\tau)$ is defined in (\ref{eq:phi}) as the instantaneous profit obtained at time $\tau$; $A_m(\tau)$ and $\sum_{k=1}^K\beta_{mk}Z_k(\tau)F_k(P_k(\tau), Y(\tau))$ are the number of newly ordered parts and the expected number of consumed parts at time $\tau$, respectively; and the expectation is taken over the randomness of the demands $D_k(\tau)$. We note that in Constraint (\ref{eq:universal_cond}), we do not actually require the raw material queues to hold enough units for production at every intermediate step. This means that a $T$-slot Lookahead policy can make products out of its future raw materials, provided that they are purchased before the interval ends. Since purchasing no materials and selling no products over the entire interval is a valid policy, we see that the value $\phi_T(t_0)\geq0$ for all $t_0$ and all $T$. In the following, we will look at the performance of JPP over the interval from $0$ to $JT-1$, which is divided into a total of $J$ frames with length $T$ each. We show that for any $J>0$, the JPP algorithm yields an average profit over $\{0, 1, ..., JT-1\}$ that is close to the profit obtained with an optimal $T$-slot Lookahead policy implemented on each $T$-slot frame. \subsection{Performance of JPP under arbitrary supply and demand} The following theorem summarizes the results.
\begin{thm}\label{theorem:JPP_arbitrary} Suppose the Joint Purchasing and Pricing Algorithm (JPP) is implemented, with $\theta_m$ satisfying condition (\ref{eq:good-theta}) and that $\mu_{m, max}\leq Q_m(0)\leq\theta_m+A_{m, max}$ for all $m\in\{1, 2, ..., M\}$. Then for any arbitrary $X(t)$ and $Y(t)$ processes, the queue backlog values satisfy part (a) of Theorem \ref{thm:performance1}. Moreover, for any positive integers $J$ and $T$, and any $\hat{H}_{JT}(0)$ (which specifies the initial queue vector $\bv{Q}(0)$ according to the above bounds, and specifies all $X(\tau)$, $Y(\tau)$ values for $\tau \in\{0, 1, \ldots, JT-1\}$), the time average profit over the interval $\{0, 1, ..., JT-1\}$ satisfies: \begin{eqnarray*} \frac{1}{JT} \sum_{\tau=0}^{JT-1}\expect{\phi(\tau)\left.|\right. \hat{H}_{JT}(0)}\geq\frac{1}{JT}\sum_{j=0}^{J-1}\phi_T(jT) - \frac{BT}{V} -\frac{L(\bv{Q}(0))}{ VJT}. \end{eqnarray*} where $\phi_T(jT)$ is defined to be the optimal value of the problem (\textbf{P1}) over the interval $\{jT, ..., (j+1)T-1\}$. The constant $B$ is defined in (\ref{eq:B}). \end{thm} \begin{proof} (Theorem \ref{theorem:JPP_arbitrary}) Fix any $t_0\geq0$. We denote the optimal solution to the problem (\textbf{P1}) over the interval $\{t_0, t_0+1, ..., t_0+T-1\}$ as: \[\{\phi^*(\tau), A^*_m(\tau), Z^*_k(\tau), P^*_k(\tau)\}_{\tau=t_0, ..., t_0+T-1}^{m=1, ..., M}.\] Now recall (\ref{eq:Tdrift_samplepath_lemma}) as follows: \begin{eqnarray} \hspace{-.2in}&&\hat{\Delta}_T(t_0)-V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)\left.|\right. \hat{H}_T(t_0)}\leq T^2B-V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi^*(\tau)\left.|\right. \hat{H}_T(t_0)}\label{eq:Tdrift_samplepath_lemma_reuse}\\ \hspace{-.2in}&& +\sum_{m=1}^M\big(Q_m(t_0)-\theta_m\big)\sum_{\tau=t_0}^{t_0+T-1}\expect{A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\left.|\right.
\hat{H}_T(t_0)}.\nonumber \end{eqnarray} Now note that the actions $\phi^*(\tau)$, $A^*_m(\tau)$, $Z_k^*(\tau)$, and $P^*_k(\tau)$ satisfy (\ref{eq:universal_obj})-(\ref{eq:universal_cond}), and so: \begin{eqnarray} \hat{\Delta}_T(t_0)-V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)\left.|\right. \hat{H}_T(t_0)}\leq T^2B-V \phi_T(t_0). \label{eq:almost-done} \end{eqnarray} Now note that since JPP makes decisions based only on the current queue backlog and $X(\tau)$, $Y(\tau)$ states, we have: \begin{eqnarray*} \hat{\Delta}_T(t_0) &=& \expect{L(\bv{Q}(t_0+T))-L(\bv{Q}(t_0))|\hat{H}_{JT}(0), \bv{Q}(t_0)} \\ \sum_{\tau=t_0}^{t_0+T-1} \expect{\phi(\tau)|\hat{H}_T(t_0)} &=& \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)|\hat{H}_{JT}(0), \bv{Q}(t_0)} \end{eqnarray*} That is, conditioning on the additional $X(\tau)$, $Y(\tau)$ states for $\tau$ outside of the $T$-slot interval does not change the expectations. Using these in (\ref{eq:almost-done}) yields: \begin{eqnarray*} \expect{L(\bv{Q}(t_0+T))-L(\bv{Q}(t_0))|\hat{H}_{JT}(0), \bv{Q}(t_0)} -V\sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)|\hat{H}_{JT}(0), \bv{Q}(t_0)} \leq \\ T^2B-V \phi_T(t_0). \end{eqnarray*} Taking expectations of the above with respect to the random $\bv{Q}(t_0)$ states (given $\hat{H}_{JT}(0)$) yields: \begin{eqnarray*} \expect{L(\bv{Q}(t_0+T))-L(\bv{Q}(t_0))|\hat{H}_{JT}(0)} -V\sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)|\hat{H}_{JT}(0)} \leq \\ T^2B-V \phi_T(t_0). \end{eqnarray*} Letting $t_0=jT$ and summing over $j=0, 1, \ldots, J-1$ yields: \begin{eqnarray*} \expect{L(\bv{Q}(JT))-L(\bv{Q}(0))\left.|\right. \hat{H}_{JT}(0)} - V \sum_{\tau=0}^{JT-1}\expect{\phi(\tau)\left.|\right. \hat{H}_{JT}(0)}\\ \leq JT^2B-V \sum_{j=0}^{J-1}\phi_T(jT). \end{eqnarray*} Rearranging the terms, dividing both sides by $VJT$, and using the fact that $L(\bv{Q}(JT))\geq0$, we get: \begin{eqnarray*} \frac{1}{JT} \sum_{\tau=0}^{JT-1}\expect{\phi(\tau)\left.|\right.
\hat{H}_{JT}(0)}\geq\frac{1}{JT}\sum_{j=0}^{J-1}\phi_T(jT) - \frac{BT}{V} -\frac{\expect{L(\bv{Q}(0))\left.|\right. \hat{H}_{JT}(0)}}{ VJT}. \end{eqnarray*} Because $\bv{Q}(0)$ is included in the $\hat{H}_{JT}(0)$ information, we have $\expect{L(\bv{Q}(0))|\hat{H}_{JT}(0)} = L(\bv{Q}(0))$. This proves Theorem \ref{theorem:JPP_arbitrary}. \end{proof} \section{Conclusions} We have developed a dynamic pricing and purchasing strategy that achieves time average profit that is arbitrarily close to optimal, with a corresponding tradeoff in the maximum buffer size required for the raw material queues. When the supply and demand states $X(t)$ and $Y(t)$ are i.i.d. over slots, we showed that the profit is within $O(1/V)$ of optimality, with a worst-case buffer requirement of $O(V)$, where $V$ is a parameter that can be chosen as desired to affect the tradeoff. Similar performance was shown for ergodic $X(t)$ and $Y(t)$ processes with a mild decaying memory property, where the deviation from optimality also depends on a ``mixing time'' parameter. Finally, we showed that the same algorithm provides efficient performance for \emph{arbitrary} (possibly non-ergodic) $X(t)$ and $Y(t)$ processes. In this case, efficiency is measured against an ideal $T$-slot lookahead policy with knowledge of the future $X(t)$ and $Y(t)$ values up to $T$ slots. Our Joint Purchasing and Pricing (JPP) algorithm reacts to the observed system state on every slot, and does not require knowledge of the probabilities associated with future states. Our analysis technique is based on Lyapunov optimization, and uses a Lyapunov function that ensures enough inventory is available to take advantage of emerging favorable demand states. This analysis approach can be applied to very large systems, without the curse of dimensionality issues seen by other approaches such as dynamic programming. 
\section*{Appendix A --- Proof of Necessity for Theorem \ref{thm:max-profit}} \begin{proof} (Necessity portion of Theorem \ref{thm:max-profit}) For simplicity, we assume the system is initially empty. Because $X(t)$ and $Y(t)$ are stationary, we have $Pr[X(t) = x] = \pi(x)$ and $Pr[Y(t) = y] = \pi(y)$ for all $t$ and all $x \in \script{X}$, $y\in\script{Y}$. Consider any algorithm that makes decisions for $\bv{A}(t), \bv{Z}(t), \bv{P}(t)$ over time, and also makes decisions for $\tilde{\bv{D}}(t)$ according to the scheduling constraints (\ref{eq:scheduling-constraints1})-(\ref{eq:scheduling-constraints2}). Let $\phi_{actual}(t)$ represent the actual instantaneous profit associated with this algorithm. Define $\overline{\phi}_{actual}$ as the $\limsup$ time average expectation of $\phi_{actual}(t)$, and let $\{\tilde{t}_i\}$ represent the subsequence of times over which the $\limsup$ is achieved, so that: \begin{equation} \label{eq:liminf-appa} \lim_{i \rightarrow \infty} \frac{1}{\tilde{t}_i} \sum_{\tau=0}^{\tilde{t}_i-1} \expect{\phi_{actual}(\tau)} = \overline{\phi}_{actual} \end{equation} Let $\overline{c}(t), \overline{r}(t)$, $\overline{a}_m(t), \overline{\mu}_m(t)$ represent the following time averages up to slot $t$: \begin{eqnarray} \overline{c}(t) &\defequiv& \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{c(\bv{A}(\tau), X(\tau))} \label{eq:appa1} \\ \overline{r}(t) &\defequiv& \frac{1}{t}\sum_{\tau=0}^{t-1} \sum_{k=1}^K\expect{Z_k(\tau)\tilde{D}_k(\tau)(P_k(\tau)-\alpha_k)} \label{eq:appa2} \\ \overline{a}_m(t) &\defequiv& \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{A_m(\tau)} \label{eq:appa3} \\ \overline{\mu}_m(t) &\defequiv& \frac{1}{t}\sum_{\tau=0}^{t-1} \sum_{k=1}^K \beta_{mk}\expect{Z_k(\tau)\tilde{D}_k(\tau)} \label{eq:appa4} \end{eqnarray} Because the system is initially empty, we cannot use more raw materials of type $m$ up to time $t$ than we have purchased, and hence: \begin{equation} \label{eq:supply-constraint-appa} \overline{a}_m(t) \geq 
\overline{\mu}_m(t) \: \: \mbox{ for all $t$ and all $m \in \{1, \ldots, M\}$} \end{equation} Further, note that the sum profit up to time $t$ is given by: \begin{equation} \label{eq:sum-profit-appa} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\phi_{actual}(\tau)} = -\overline{c}(t) + \overline{r}(t) \end{equation} We now have the following claim, proven at the end of this section. \emph{Claim 1:} For each slot $t$ and each $x\in\script{X}$, $y\in\script{Y}$, $\bv{a} \in \script{A}(x)$, $k \in \{1, \ldots, K\}$, there are functions $\theta(\bv{a}, x, t)$, $\nu_k(y,t)$, and $d_k(y,t)$ such that: \begin{eqnarray} \overline{c}(t) &=& \sum_{x \in \script{X}} \pi(x)\sum_{\bv{a} \in \script{A}(x)} \theta(\bv{a}, x, t)c(\bv{a}, x) \label{eq:c1-appa} \\ \overline{a}_m(t) &=& \sum_{x \in \script{X}} \pi(x) \sum_{\bv{a}\in\script{A}(x)} \theta(\bv{a}, x, t)a_m \label{eq:c2-appa} \\ \overline{r}(t) &=& \sum_{y \in \script{Y}}\pi(y)\sum_{k=1}^K \nu_k(y,t) \label{eq:c3-appa}\\ \overline{\mu}_m(t) &=& \sum_{y\in\script{Y}} \pi(y)\sum_{k=1}^K \beta_{mk} d_k(y,t) \label{eq:c4-appa} \end{eqnarray} and such that: \begin{eqnarray} 0 \leq \theta(\bv{a}, x, t) \leq 1 \: \: , \: \: \sum_{\bv{a} \in \script{A}(x)} \theta(\bv{a}, x, t) = 1 \: \: \forall x\in\script{X}, \bv{a} \in \script{A}(x) \label{eq:compact1} \end{eqnarray} and such that for each $y \in \script{Y}$, $k \in \{1, \ldots, K\}$, the vector $(\nu_k(y,t); d_k(y,t))$ is in the convex hull of the following two-dimensional compact set: \begin{equation} \label{eq:compact2} \Omega_k(y) \defequiv \{ (\nu, \mu) \left|\right. \nu = (p -\alpha_k)zF_k(p, y) \: , \: \mu = zF_k(p, y) \: , \: p \in \script{P} \: , \: z \in \{0, 1\} \}. 
\end{equation} The values $[\theta(\bv{a}, x, t); (\nu_k(y,t); d_k(y,t))]_{x \in \script{X}, y\in\script{Y}, \bv{a} \in\script{A}(x)}$ of Claim 1 can be viewed as a finite or countably infinite dimensional vector sequence indexed by time $t$ that is contained in the compact set defined by (\ref{eq:compact1}) and the convex hull of (\ref{eq:compact2}). Hence, by a classical diagonalization procedure, every infinite sequence contains a convergent subsequence that is contained in the set \cite{billingsley}. Consider the infinite sequence of times $\tilde{t}_i$ (for which (\ref{eq:liminf-appa}) holds), and let $t_i$ represent the infinite subsequence for which $\theta(\bv{a}, x, t_i)$, $\nu_k(y, t_i)$, and $d_k(y, t_i)$ converge. Let $\theta(\bv{a}, x)$, $\nu_k(y)$, and $d_k(y)$ represent the limiting values. Define $\hat{c}$, $\hat{a}_m$, $\hat{r}$, and $\hat{\mu}_m$ as the corresponding limiting values of (\ref{eq:c1-appa})-(\ref{eq:c4-appa}). \begin{eqnarray*} \hat{c} &=& \sum_{x \in \script{X}} \pi(x)\sum_{\bv{a} \in \script{A}(x)} \theta(\bv{a}, x)c(\bv{a}, x) \\ \hat{a}_m &=& \sum_{x \in \script{X}} \pi(x) \sum_{\bv{a}\in\script{A}(x)} \theta(\bv{a}, x)a_m \\ \hat{r} &=& \sum_{y \in \script{Y}}\pi(y)\sum_{k=1}^K \nu_k(y) \\ \hat{\mu}_m &=& \sum_{y\in\script{Y}} \pi(y)\sum_{k=1}^K \beta_{mk} d_k(y) \end{eqnarray*} Further, the limiting values of $\theta(\bv{a}, x)$ retain the properties (\ref{eq:compact1}) and hence can be viewed as probabilities. 
Furthermore, taking limits as $t_i\rightarrow\infty$ in (\ref{eq:supply-constraint-appa}) and (\ref{eq:sum-profit-appa}) yields: \begin{eqnarray*} & \hat{a}_m \geq \hat{\mu}_m \: \: \mbox{ for all $m \in \{1, \ldots, M\}$}& \\ &\overline{\phi}_{actual} = -\hat{c} + \hat{r}& \end{eqnarray*} Finally, note that for each $k \in \{1, \ldots, K\}$ and each $y \in \script{Y}$, the vector $(\nu_k(y), d_k(y))$ is in the convex hull of the set (\ref{eq:compact2}), and hence can be achieved by an $(X,Y)$-only policy that chooses $\bv{Z}(t)$ and $\bv{P}(t)$ as a random function of the observed value of $Y(t)$, such that $Z_k(t) \in \{0, 1\}$ and $P_k(t) \in\script{P}_k$ for all $k$, and: \begin{eqnarray*} \nu_k(y) &=& \expect{(P_k(t)-\alpha_k)Z_k(t)F_k(P_k(t), y)\left|\right. Y(t) = y} \\ d_k(y) &=& \expect{Z_k(t)F_k(P_k(t), y) \left|\right. Y(t) = y } \end{eqnarray*} It follows that $\overline{\phi}_{actual}$ is an achievable value of $\phi$ for which there are appropriate auxiliary variables that satisfy the constraints of the optimization problem of Theorem \ref{thm:max-profit}. However, $\phi^{opt}$ is defined as the supremum over all such $\phi$ values, and hence we must have $\overline{\phi}_{actual} \leq \phi^{opt}$. \end{proof} It remains only to prove Claim 1. \begin{proof} (Claim 1) We can re-write the expression for $\overline{c}(t)$ in (\ref{eq:appa1}) as follows: \begin{eqnarray*} \overline{c}(t) &=& \frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{x \in \script{X}} \sum_{\bv{a}}c(\bv{a}, x)\pi(x)Pr[\bv{A}(\tau)=\bv{a}\left|\right.X(\tau)=x] \\ &=& \sum_{x \in \script{X}}\pi(x)\sum_{\bv{a}} \theta(\bv{a}, x, t)c(\bv{a}, x) \end{eqnarray*} where $\theta(\bv{a}, x, t)$ is defined: \[ \theta(\bv{a}, x, t) \defequiv \frac{1}{t}\sum_{\tau=0}^{t-1} Pr[\bv{A}(\tau) = \bv{a}\left|\right. X(\tau) = x] \] and satisfies: \[ 0 \leq \theta(\bv{a}, x, t) \leq 1 \: \: , \: \: \sum_{\bv{a}}\theta(\bv{a}, x, t) = 1 \: \: \forall t, x \in \script{X} \] This proves (\ref{eq:c1-appa}). 
Likewise, we can rewrite the expression for $\overline{a}_m(t)$ in (\ref{eq:appa3}) as follows: \begin{eqnarray*} \overline{a}_m(t) &=& \sum_{x \in \script{X}} \pi(x) \sum_{\bv{a}} \theta(\bv{a}, x, t)a_m \end{eqnarray*} This proves (\ref{eq:c2-appa}). To prove (\ref{eq:c3-appa})-(\ref{eq:c4-appa}), note that we can rewrite the expression for $\overline{r}(t)$ in (\ref{eq:appa2}) as follows: \begin{eqnarray} \overline{r}(t) &=& \frac{1}{t}\sum_{\tau=0}^{t-1} \sum_{y \in \script{Y}}\sum_{k=1}^K \pi(y)\expect{(P_k(\tau)-\alpha_k)Z_k(\tau)\tilde{D}_k(\tau)\left|\right. Y(\tau) = y} \label{eq:baz0-appa} \end{eqnarray} Now for all $y \in \script{Y}$, all vectors $\bv{z} \in \{0,1\}^K$, $\bv{p} \in \script{P}^K$, and all slots $t$, define $\gamma_k(y, \bv{z}, \bv{p}, t)$ as follows: \begin{eqnarray*} \gamma_k(y, \bv{z}, \bv{p}, t) \defequiv \left\{ \begin{array}{ll} \frac{\expect{z_k\tilde{D}_k(t)\left|\right. Y(t) = y, \bv{Z}(t) = \bv{z}, \bv{P}(t) = \bv{p}}}{\expect{z_kD_k(t)\left|\right. Y(t) = y, \bv{Z}(t) = \bv{z}, \bv{P}(t) = \bv{p}}} & \mbox{ if the denominator is non-zero} \\ 0 & \mbox{ otherwise} \end{array} \right. \end{eqnarray*} Note that $\tilde{D}_k(t) \leq D_k(t)$, and so $0 \leq \gamma_k(y, \bv{z}, \bv{p}, t) \leq 1$. It follows by definition that: \[ \expect{z_k\tilde{D}_k(t)\left|\right.Y(t) = y, \bv{Z}(t) = \bv{z}, \bv{P}(t) = \bv{p}} = \gamma_k(y, \bv{z}, \bv{p}, t)F_k(p_k, y) \] Using the above equality with iterated expectations in (\ref{eq:baz0-appa}) yields: \begin{eqnarray} \hspace{-.3in}\overline{r}(t) &=& \sum_{y \in \script{Y}} \sum_{k=1}^K \pi(y) \times \nonumber \\ && \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{(P_k(\tau)-\alpha_k) \gamma_k(y, \bv{Z}(\tau), \bv{P}(\tau), \tau)F_k(P_k(\tau), y) \left|\right. 
Y(\tau) = y} \label{eq:r-appa} \end{eqnarray} By a similar analysis, we can rewrite the expression for $\overline{\mu}_m(t)$ in (\ref{eq:appa4}) as follows: \begin{eqnarray} \hspace{-.3in} \overline{\mu}_m(t) &=& \sum_{y\in\script{Y}} \sum_{k=1}^K\beta_{mk}\pi(y) \times \nonumber \\ && \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{ \gamma_k(y, \bv{Z}(\tau), \bv{P}(\tau), \tau)F_k(P_k(\tau), y) \left|\right. Y(\tau) = y} \label{eq:d-appa} \end{eqnarray} Now define $\nu_k(y,t)$ and $d_k(y,t)$ as the corresponding time average expectations inside the summation terms of (\ref{eq:r-appa}) and (\ref{eq:d-appa}), respectively, so that: \begin{eqnarray*} \overline{r}(t) &=& \sum_{y\in\script{Y}}\pi(y)\sum_{k=1}^K \nu_k(y, t) \\ \overline{\mu}_m(t) &=& \sum_{y\in\script{Y}}\pi(y)\sum_{k=1}^K \beta_{mk}d_k(y,t) \end{eqnarray*} Note that the time average expectation over $t$ slots used in the definitions of $\nu_k(y,t)$ and $d_k(y,t)$ can be viewed as an operator that produces a convex combination. Specifically, the two-dimensional vector $(\nu_k(y,t); d_k(y,t))$ can be viewed as an element of the convex hull of the following set $\hat{\Omega}_k(y)$: \[ \hat{\Omega}_k(y) \defequiv \{ (\nu, d) \left|\right. \nu = (p-\alpha_k)\gamma F_k(p, y) \: , \: d = \gamma F_k(p, y) \: , \: p \in \script{P} \: , \: 0 \leq \gamma \leq 1 \} \] However, it is not difficult to show that the convex hull of the set $\hat{\Omega}_k(y)$ is the same as the convex hull of the set $\Omega_k(y)$ defined in (\ref{eq:compact2}).\footnote{This can be seen by noting that $\Omega_k(y) \subset \hat{\Omega}_k(y) \subset Conv(\Omega_k(y))$ and then taking convex hulls of this inclusion relation.} This proves (\ref{eq:c3-appa}) and (\ref{eq:c4-appa}). \end{proof} \section*{Appendix B --- Proof of the 2-Price Theorem (Theorem \ref{thm:two-price})} \begin{proof} (Theorem \ref{thm:two-price}) The proof follows the work of \cite{two-prices-allerton07}.
For each product $k \in \{1, \ldots, K\}$ and each possible demand state $y \in \script{Y}$, define constants $\hat{r}_k(y)$ and $\hat{d}_{k}(y)$ as follows: \begin{eqnarray*} \hat{r}_k(y) &\defequiv& \expect{Z_k(t)(P_k(t)-\alpha_k)F_k(P_k(t), y)\left|\right. Y(t) = y} \\ \hat{d}_{k}(y) &\defequiv& \expect{Z_k(t)F_k(P_k(t), y)\left|\right. Y(t) = y} \end{eqnarray*} where $\bv{Z}(t) = (Z_1(t), \ldots, Z_K(t))$ and $\bv{P}(t) = (P_1(t), \ldots, P_K(t))$ are the stationary randomized decisions given in the statement of Theorem \ref{thm:two-price}. Thus, by (\ref{eq:two-price1}) and (\ref{eq:two-price2}): \begin{eqnarray} &\sum_{k=1}^K \sum_{y\in\script{Y}} \pi(y)\hat{r}_k(y) \geq \hat{r}& \label{eq:baz1} \\ &\sum_{k=1}^K \beta_{mk} \sum_{y \in \script{Y}} \pi(y) \hat{d}_{k}(y) \leq \hat{\mu}_m \: \: \mbox{for all $m \in \{1, \ldots, M\}$}& \label{eq:baz2} \end{eqnarray} Now consider a particular $k, y$, and define the two-dimensional set $\Omega(k, y)$ as follows: \[ \Omega(k, y) = \{ (r; d) \left|\right. r = z(p-\alpha_k)F_k(p, y), d = zF_k(p, y), p \in \script{P}, z \in \{0, 1\}\} \] We now use the fact that if $\bv{C}$ is any general random vector that takes values in a general set $\script{C}$, then $\expect{\bv{C}}$ is in the convex hull of $\script{C}$ \cite{now}. Note that for any random choice of $Z_k(t) \in \{0, 1\}, P_k(t) \in \script{P}_k$, we have: \[ (Z_k(t)(P_k(t) - \alpha_k)F_k(P_k(t), y); Z_k(t) F_k(P_k(t), y)) \in \Omega(k,y) \] Hence, the conditional expectation of this random vector given $Y(t) = y$, given by $(\hat{r}_k(y); \hat{d}_{k}(y))$, is in the convex hull of $\Omega(k, y)$. Because $\Omega(k,y)$ is a two-dimensional set, any element of its convex hull can be expressed as a convex combination that uses at most three elements of $\Omega(k,y)$ (by Carath\'eodory's Theorem \cite{bertsekas-convex}).
Moreover, because the set $\script{P}$ is compact and $F_k(p, y)$ is a continuous function of $p \in \script{P}$ for each $y \in \script{Y}$, the set $\Omega(k, y)$ is compact and hence any point on the \emph{boundary} of its convex hull can be described by a convex combination of at most \emph{two} elements of $\Omega(k,y)$ (see, for example, \cite{two-prices-allerton07}). Let $(\hat{r}_k^*(y); \hat{d}_{k}(y))$ be the boundary point with the largest first entry among all points of the convex hull whose second entry is $\hat{d}_{k}(y)$. We thus have $\hat{r}_k^*(y) \geq \hat{r}_k(y)$, and writing the convex combination with two elements we have: \begin{eqnarray*} (\hat{r}_k^*(y); \hat{d}_{k}(y)) &=& \eta^{(1)}\left(z^{(1)}(p^{(1)}-\alpha_k)F_k(p^{(1)}, y) ; z^{(1)}F_k(p^{(1)}, y)\right) \\ && + \eta^{(2)}\left(z^{(2)}(p^{(2)}-\alpha_k)F_k(p^{(2)}, y) ; z^{(2)}F_k(p^{(2)}, y)\right) \end{eqnarray*} for some set of decisions $(z^{(1)}, p^{(1)})$ and $(z^{(2)}, p^{(2)})$ (with $z^{(i)} \in \{0, 1\}, p^{(i)} \in \script{P}$) and probabilities $\eta^{(1)}$ and $\eta^{(2)}$ such that $\eta^{(1)} + \eta^{(2)} = 1$. Note that these $z^{(i)}, p^{(i)}, \eta^{(i)}$ values are determined for a particular $(k,y)$, and hence we can relabel them as $z_k^{(i)}(y)$, $p_k^{(i)}(y)$, and $\eta_k^{(i)}(y)$ for $i \in \{1, 2\}$. Now define the following stationary randomized policy: For each product $k\in\{1, \ldots, K\}$, if $Y(t) = y$, independently choose $Z_k^*(t) = z_k^{(1)}(y)$ and $P_k^*(t) = p_k^{(1)}(y)$ with probability $\eta_k^{(1)}(y)$, and otherwise choose $Z_k^*(t) = z_k^{(2)}(y)$ and $P_k^*(t) = p_k^{(2)}(y)$. It follows that for a given value of $y$, this policy uses at most two different prices for each product.\footnote{Further, given the observed value of $Y(t) = y$, this policy makes pricing decisions independently for each product $k$.} Further, we have: \begin{eqnarray*} \hat{r}_k(y) &\leq& \expect{Z_k^*(t)(P_k^*(t)-\alpha_k)F_k(P_k^*(t), y)\left|\right.
Y(t) = y} \\ \hat{d}_{k}(y) &=& \expect{Z_k^*(t)F_k(P_k^*(t), y)\left|\right. Y(t) = y} \end{eqnarray*} Summing these conditional expectations over $k \in \{1, \ldots, K\}$ and $y \in \script{Y}$ and using (\ref{eq:baz1})-(\ref{eq:baz2}) yields the result. \end{proof} \section*{Appendix C --- Proof of Lemma \ref{lemma:sample_path_Tdrift}} \begin{proof} (Lemma \ref{lemma:sample_path_Tdrift}) Using the queueing dynamic equation (\ref{eq:dynamics1}) (which holds because $Q_m(t) \geq \mu_{m,max}$ for all $t$), it is easy to show: \begin{eqnarray*} \frac{1}{2}\big(Q_m(\tau+1)-\theta_m\big)^2-\frac{1}{2}\big(Q_m(\tau)-\theta_m\big)^2 \\ = \frac{1}{2}(A_m(\tau) - \mu_m(\tau))^2 +(Q_m(\tau)-\theta_m)\big[A_m(\tau)-\mu_m(\tau)\big]. \end{eqnarray*} Summing over $m\in\{1, \ldots, M\}$, bounding $\frac{1}{2}\sum_{m=1}^M(A_m(\tau)-\mu_m(\tau))^2$ by the constant $B$ defined in (\ref{eq:B}), and adding the term $-V\phi(\tau)$ to both sides, we have: \begin{eqnarray*} \tilde{\Delta}_1(\tau) -V \phi(\tau)\leq B-V \phi(\tau) + \sum_{m=1}^M(Q_m(\tau)-\theta_m)\big[A_m(\tau)-\mu_m(\tau)\big], \end{eqnarray*} where $\tilde{\Delta}_1(\tau)\defequiv \frac{1}{2}\sum_{m=1}^M\big[\big(Q_m(\tau+1)-\theta_m\big)^2- \big(Q_m(\tau)-\theta_m\big)^2\big]$ is the $1$-step sample path drift of the Lyapunov function at time $\tau$.
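The one-slot expansion above is a purely algebraic identity following from the update $Q_m(\tau+1)=Q_m(\tau)+A_m(\tau)-\mu_m(\tau)$. As a sanity check, the following sketch verifies it numerically; the queue, threshold, arrival, and service values below are arbitrary test numbers, not model data.

```python
# Check the one-slot drift identity: with Q(tau+1) = Q(tau) + A - mu,
#   (1/2)(Q(tau+1)-theta)^2 - (1/2)(Q(tau)-theta)^2
#     = (1/2)(A-mu)^2 + (Q(tau)-theta)(A-mu).
import random

random.seed(0)
for _ in range(1000):
    Q, theta = random.uniform(0, 50), random.uniform(0, 50)
    A, mu = random.uniform(0, 5), random.uniform(0, 5)
    Q_next = Q + A - mu                                  # queue update
    lhs = 0.5 * (Q_next - theta) ** 2 - 0.5 * (Q - theta) ** 2
    rhs = 0.5 * (A - mu) ** 2 + (Q - theta) * (A - mu)
    assert abs(lhs - rhs) < 1e-9
print("drift identity verified")
```

Bounding the quadratic term $\frac{1}{2}(A_m(\tau)-\mu_m(\tau))^2$ by $\frac{1}{2}\max[A_{m,max}^2,\mu_{m,max}^2]$ and summing over $m$ is what produces the constant $B$.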
Now for any $t_0\leq\tau\leq t_0+T-1$, we can take expectations over the above equation conditioning on $\hat{H}_T(t_0)$ to get: \begin{eqnarray} \hspace{-.1in}&&\expect{\tilde{\Delta}_1(\tau)|\hat{H}_T(t_0)} -V \expect{\phi(\tau)\left.|\right.\hat{H}_T(t_0)}\leq B-V \expect{\phi(\tau)\left.|\right.\hat{H}_T(t_0)} \label{eq:1step_drift_ep}\\ \hspace{-.1in}&&\qquad\qquad\qquad\qquad\qquad+ \sum_{m=1}^M\expect{(Q_m(\tau)-\theta_m)\big[A_m(\tau)-\mu_m(\tau)\big]\left.|\right.\hat{H}_T(t_0)}.\nonumber \end{eqnarray} However, using iterated expectations in the last term as in (\ref{eq:foo}), we see that: \begin{eqnarray*} \hspace{-.1in}&&\quad\sum_{m=1}^M\expect{(Q_m(\tau)-\theta_m)\big[A_m(\tau)-\mu_m(\tau)\big]\left.|\right.\hat{H}_T(t_0)} \\ \hspace{-.1in}&&= \sum_{m=1}^M\expect{(Q_m(\tau)-\theta_m)\expect{A_m(\tau)-\mu_m(\tau) \left.|\right.\bv{Q}(\tau), \bv{P}(\tau), \bv{Z}(\tau), \hat{H}_T(t_0)}\left.|\right.\hat{H}_T(t_0)}\\ \hspace{-.1in}&&= \sum_{m=1}^M\expect{(Q_m(\tau)-\theta_m)\big[A_m(\tau)- \sum_{k=1}^K\beta_{mk}Z_k(\tau)F_k(P_k(\tau), Y(\tau))\big] \left.|\right.\hat{H}_T(t_0)} \end{eqnarray*} Plugging this back into (\ref{eq:1step_drift_ep}), we get: \begin{eqnarray} \hspace{-.1in}&&\expect{\tilde{\Delta}_1(\tau)|\hat{H}_T(t_0)} -V \expect{\phi(\tau)\left.|\right.\hat{H}_T(t_0)}\leq B-V \expect{\phi(\tau)\left.|\right.\hat{H}_T(t_0)} \label{eq:1step_drift_ep2}\\ \hspace{-.1in}&& + \sum_{m=1}^M\expect{(Q_m(\tau)-\theta_m)\big[A_m(\tau)-\sum_{k=1}^K\beta_{mk}Z_k(\tau)F_k(P_k(\tau), Y(\tau))\big]\left.|\right.\hat{H}_T(t_0)}.\nonumber \end{eqnarray} Now since, given the $\bv{Q}(\tau)$ values on slot $\tau$, the JPP algorithm minimizes the right hand side of the above equation at time $\tau$, we indeed have: \begin{eqnarray} \hspace{-.1in}&&\expect{\tilde{\Delta}_1(\tau)|\hat{H}_T(t_0)} -V \expect{\phi(\tau)\left.|\right.\hat{H}_T(t_0)}\leq B-V \expect{\phi^*(\tau)\left.|\right.\hat{H}_T(t_0)} \label{eq:1step_drift_ep3}\\ \hspace{-.1in}&& + 
\sum_{m=1}^M\expect{(Q_m(\tau)-\theta_m)\big[A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\big]\left.|\right.\hat{H}_T(t_0)},\nonumber \end{eqnarray} where $\phi^*(\tau)$, $A^*_m(\tau)$, $Z_k^*(\tau)$, and $P^*_k(\tau)$ are the variables generated by any alternative policy. Summing (\ref{eq:1step_drift_ep3}) from $\tau=t_0$ to $\tau=t_0+T-1$, we thus have: \begin{eqnarray} \hspace{-.1in}&&\hat{\Delta}_T(t_0) -\sum_{\tau=t_0}^{t_0+T-1}V \expect{\phi(\tau)\left.|\right.\hat{H}_T(t_0)}\leq TB-V\sum_{\tau=t_0}^{t_0+T-1} \expect{\phi^*(\tau)\left.|\right.\hat{H}_T(t_0)} \label{eq:1step_drift_ep4}\\ \hspace{-.1in}&& + \sum_{m=1}^M\sum_{\tau=t_0}^{t_0+T-1}\expect{(Q_m(\tau)-\theta_m)\big[A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\big]\left.|\right.\hat{H}_T(t_0)}.\nonumber \end{eqnarray} Now using the fact that for any $t$, $|Q_m(t+\tau)-Q_m(t)|\leq \tau \max[A_{m, max}, \mu_{m,max}]$, we get: \begin{eqnarray*} \hspace{-.1in}&&\sum_{m=1}^M\sum_{\tau=t_0}^{t_0+T-1}(Q_m(\tau)-\theta_m)\big[A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\big]\\ \hspace{-.1in}&&\leq B' +\sum_{m=1}^M\sum_{\tau=t_0}^{t_0+T-1}(Q_m(t_0)-\theta_m)\big[A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\big] \end{eqnarray*} where \[ B' \defequiv \frac{T(T-1)}{2}\sum_{m=1}^M\max[A_{m,max}^2, \mu_{m,max}^2] = T(T-1)B\] and $B$ is defined in (\ref{eq:B}). Plugging this into (\ref{eq:1step_drift_ep4}) and using the fact that, conditioned on $\hat{H}_T(t_0)$, $\bv{Q}(t_0)$ is a constant, we get: \begin{eqnarray*} \hspace{-.2in}&&\hat{\Delta}_T(t_0)-V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi(\tau)\left.|\right. \hat{H}_T(t_0)}\leq TB + B' -V \sum_{\tau=t_0}^{t_0+T-1}\expect{\phi^*(\tau)\left.|\right. \hat{H}_T(t_0)} \\ \hspace{-.2in}&& +\sum_{m=1}^M\big(Q_m(t_0)-\theta_m\big)\sum_{\tau=t_0}^{t_0+T-1}\expect{A^*_m(\tau)-\sum_{k=1}^K\beta_{mk}Z^*_k(\tau)F_k(P^*_k(\tau), Y(\tau))\left.|\right.
\hat{H}_T(t_0)}.\nonumber \end{eqnarray*} Noting that $TB + B' = TB + T(T-1)B = T^2B$ proves Lemma \ref{lemma:sample_path_Tdrift}. \end{proof} \end{document}
\begin{document} \title{ A non-differentiable essential irrational invariant curve for a $C^1$ symplectic twist map } \author{M.-C. ARNAUD \thanks{ANR KAM faible ANR-07-BLAN-0361} \thanks{ANR DynNonHyp ANR BLAN08-2-313375} \thanks{Universit\'e d'Avignon et des Pays de Vaucluse, Laboratoire d'Analyse non lin\' eaire et G\' eom\' etrie (EA 2151), F-84018 Avignon, France. e-mail: [email protected]} } \maketitle \abstract{ We construct a $C^1$ symplectic twist map $f$ of the annulus that has an essential invariant curve $\Gamma$ such that: \begin{enumerate} \item[$\bullet$] $\Gamma$ is not differentiable; \item[$\bullet$] the dynamics of $f_{|\Gamma}$ is conjugate to that of a Denjoy counter-example. \end{enumerate} } \section{Introduction} The exact symplectic twist maps of the two-dimensional annulus\footnote{all these notions will be precisely defined in subsection \ref{sub21}} were studied for a long time because they represent (via a symplectic change of coordinates) the dynamics of generic symplectic diffeomorphisms of surfaces near their elliptic periodic points (see \cite{Ch1}). One motivating example of such a map was introduced by Poincar\'e for the study of the restricted 3-Body problem.\\ The study of such maps was initiated by G.D.~Birkhoff in the 20's (see \cite{Bir1}). Among other beautiful results, he proved the following one (see \cite{He2} too)~: \begin{theo} {\bf (G.D.~Birkhoff)} Let $f$ be a symplectic twist map of the two-dimensional annulus. Then any essential curve that is invariant by $f$ is the graph of a Lipschitz map. \end{theo} In this statement, an {\em essential curve} is a simple loop that is not homotopic to a point. \\ Later, in the 50's, the K.A.M.~theorems provided the existence of some invariant curves for sufficiently regular symplectic diffeomorphisms of surfaces near their elliptic fixed points (see \cite{Ko}, \cite{Arno}, \cite{Mo} and \cite{Ru}).
These theorems also provide some essential invariant curves for the symplectic twist maps that are close to the completely integrable ones. These K.A.M. curves are all very regular (at least $C^3$, see \cite{He2}). \\ But general invariant curves for general symplectic twist maps have no reason to be so regular. The example of the simple pendulum (see \cite{Ch2}) shows us that an invariant curve can be non-differentiable at one point: the separatrix of the simple pendulum has an angle at the hyperbolic fixed point. In \cite{He2} and \cite{Arna1}, some other examples are given of symplectic twist maps that have a non-differentiable essential invariant curve that contains some periodic points.\\ In all these examples, the non-differentiability appears at the periodic points. A natural question is then:\\ {\bf Question.} {\em Does a symplectic twist map exist that has an essential invariant curve that is non-differentiable at a non-periodic point?}\\ A related question is the following one, due to J.~Mather in \cite{EKMY}:\\ {\bf Question. (J.~Mather)} {\em Does there exist an example of a symplectic $C^r$ twist map with an essential invariant curve that is not $C^1$ and that contains no periodic point (separate question for each $r\in [1, \infty]\cup\{ \omega\}$)?}\\ Let us point out that such an invariant essential curve cannot be too irregular~: \begin{enumerate} \item[$\bullet$] firstly, Birkhoff's theorem implies that this curve has to be the graph of a Lipschitz map; hence, by Rademacher's theorem, it has to be differentiable above a set that has full Lebesgue measure; \item[$\bullet$] secondly, I proved in \cite{Arna1} that this curve has to be $C^1$ above a $G_\delta$ subset of $\mathbb {T}$ that has full Lebesgue measure\footnote{The precise definition of $C^1$ in this context will be given in subsection \ref{sub21}}.
\end{enumerate} We will prove: \begin{thm}\label{T1} There exists a symplectic $C^1$ twist map $f$ of the annulus that has an essential invariant curve $\Gamma$ such that: \begin{enumerate} \item[$\bullet$] $\Gamma$ contains no periodic points; \item [$\bullet$] the restriction $f_{|\Gamma}$ is $C^0$-conjugate to a Denjoy counter-example; \item[$\bullet$] if $\gamma~: \mathbb {T}\rightarrow \mathbb {R}$ is the map whose graph is $\Gamma$, then $\gamma$ is $C^1$ at every point except along the projection of one orbit, along which $\gamma$ has distinct right and left derivatives. \end{enumerate} \end{thm} This leaves open Mather's question for $r\geq 2$, as well as the following question:\\ {\bf Question.} {\em Does a symplectic twist map exist that has an essential invariant curve that is non-differentiable and such that the dynamics restricted to this curve is minimal?}\\ Before giving the guideline of the proof, let us comment on some related results. \\ Finding invariant curves with no periodic points that are less regular than the considered symplectic twist map has been a challenging problem for a long time. \\ First of all, as an application of K.A.M. theorems, for a fixed diophantine rotation number and a 1-parameter smooth family (for example the standard family) such that the invariant curve disappears, it is classical that the ``last invariant curve'' is not $C^\infty$, even if the dynamics is $C^\infty$~: if it were, K.A.M. theorems would prevent the curve from disappearing\dots\\ Secondly, M.~Herman built in \cite{He2} some $C^2$ symplectic twist maps that have a $C^1$-invariant curve on which the dynamics is conjugate to that of a Denjoy counter-example; such a curve cannot be $C^2$. In \cite{EKMY}, J.~Mather asks if such a $C^3$ counter-example exists.\\ To build our counter-example, we will use a family of symplectic twist maps that was introduced by M.~Herman in \cite{He2}.
These maps are defined by~: $$f_\varphi: \mathbb {T}\times\mathbb {R}\rightarrow \mathbb {T}\times\mathbb {R}; (\theta, r)\mapsto (\theta+r, r+\varphi(\theta +r)),$$ where $\varphi~: \mathbb {T}\rightarrow \mathbb {R}$ is a $C^1$ map such that $\int_{\mathbb {T}}\varphi(\theta)d\theta=0$.\\ As noticed by M.~Herman, the main advantage of this map is the following one. We denote a lift of $g:\mathbb {T} \rightarrow \mathbb {T}$ by $\tilde g~:\mathbb {R}\rightarrow\mathbb {R}$. Then the graph of $\psi~: \mathbb {T}\rightarrow \mathbb {R}$ is invariant by $f_\varphi$ if and only if we have: \begin{enumerate} \item[$\bullet$] $g=Id_{\mathbb {T}}+\psi$ is an orientation preserving homeomorphism of $\mathbb {T}$; \item[$\bullet$] $Id_\mathbb {R} +\frac{1}{2} \varphi=\frac{1}{2}\left( \tilde g+ \tilde g ^{-1}\right)$. \end{enumerate} Hence, in order to answer Mather's question, we just have to find $g=Id_\mathbb {T}+\psi: \mathbb {T}\rightarrow \mathbb {T}$ that is an increasing non-differentiable homeomorphism of $\mathbb {T}$ with no periodic points such that $\tilde g +\tilde g^{-1}$ is $C^1$.\\ We begin by using a $C^1$ Denjoy counter-example (see \cite{He1} and \cite{He2} for precise constructions) $g=Id_\mathbb {T}+\psi~:\mathbb {T}\rightarrow \mathbb {T}$. The non-wandering set of $g$ is then a Cantor subset $K$ of $\mathbb {T}$ such that $g_{|K}$ is minimal. We then consider a point $x_0\in \mathbb {T}\backslash K$ and its orbit $(x_k)_{k\in\mathbb {Z}}=(g^k(x_0))_{k\in\mathbb {Z}}$. Then we modify $g$ in a neighborhood of this orbit in such a way that the new homeomorphism $h~: \mathbb {T}\rightarrow\mathbb {T}$ coincides with $g$ along the orbit of $x_0$ and is $C^1$ at every point but the orbit of $x_0$.
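The invariance criterion for Herman's family can be checked mechanically: if $\varphi=\tilde g+\tilde g^{-1}-2\,Id$, then $f_\varphi(\theta,\psi(\theta))=(g(\theta),\psi(g(\theta)))$. The sketch below verifies this numerically for an arbitrary smooth circle diffeomorphism (it is only a test example, not a Denjoy map), inverting the lift by bisection.

```python
# Numerical check:  with psi = g - Id and phi = g + g^{-1} - 2 Id (on lifts),
# the graph r = psi(theta) is invariant under
#   f_phi(theta, r) = (theta + r, r + phi(theta + r)).
# The lift g(x) = x + 0.5 + 0.1 sin(2 pi x) is an arbitrary smooth example.
import math

def g(x):                       # lift of an increasing circle diffeomorphism
    return x + 0.5 + 0.1 * math.sin(2 * math.pi * x)

def g_inv(y, lo=-2.0, hi=3.0):  # invert the increasing lift by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def phi(x):                     # Id + phi/2 = (g + g^{-1})/2
    return g(x) + g_inv(x) - 2 * x

def psi(x):                     # candidate invariant graph
    return g(x) - x

for i in range(20):
    theta = i / 20.0
    r = psi(theta)
    theta2, r2 = theta + r, r + phi(theta + r)   # one step of f_phi on lifts
    assert abs(theta2 - g(theta)) < 1e-9         # image abscissa is g(theta)
    assert abs(r2 - psi(theta2)) < 1e-6          # image lies back on the graph
print("graph of psi is invariant under f_phi")
```

The cancellation $r_2=\psi(\theta)+\varphi(g(\theta))=\psi(g(\theta))$ is exact; only the bisection inversion introduces (negligible) numerical error.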
At every point $x_k$ of the orbit of $x_0$, we assume that $h$ has some left and right derivatives, denoted by $\beta_k^\ell$ and $\beta_k^r$, such that: $\forall k\in\mathbb {Z}, \beta_k^r+\frac{1}{\beta_{k-1}^r}= \beta_k^\ell+\frac{1}{\beta_{k-1}^\ell}$.\\ If now we define $\varphi~: \mathbb {T}\rightarrow \mathbb {R}$ by~: $Id_\mathbb {R} +\frac{1}{2} \varphi=\frac{1}{2}\left( \tilde h+ \tilde h ^{-1}\right)$, then $\varphi$ is $C^1$ at every point of $\mathbb {T}$ but the orbit of $x_0$, has a right and left derivative along the orbit of $x_0$ and satisfies (we denote by $\varphi_r'$ and $\varphi_\ell'$ the right and left derivatives of $\varphi$): $\forall k\in \mathbb {Z}, \varphi'_r(x_k)= \beta_k^r+\frac{1}{\beta_{k-1}^r}-2= \beta_k^\ell+\frac{1}{\beta_{k-1}^\ell}-2=\varphi'_\ell(x_k)$. Hence $\varphi$ is differentiable. Roughly speaking, the left and right derivatives of $h$ at $x_k$ and $x_{k-1}$ are balanced in the formula that gives $\varphi$. This idea that the irregularities of $h$ and $h^{-1}$ are balanced in the formula that gives $\varphi$ was the one that M.~Herman used in \cite{He2} to construct his Denjoy counter-example for a $C^2$ symplectic twist map. If we choose $h$ carefully, we will see that $\varphi$ is in fact $C^1$. Let us point out that things are not as simple as they seem to be, and the choice of the sequences $(\beta_k^\ell)$ and $(\beta_k^r)$ is a delicate process, as we will explain in the next section. \section{Proof of Theorem \ref{T1}} \subsection{Generalities about twist maps and other topics} \label{sub21} \begin{nota} \noindent $\bullet$ $\mathbb {T}=\mathbb {R}/\mathbb {Z}$ is the circle. \noindent $\bullet$ $\mathbb {A}=\mathbb {T}\times \mathbb {R}$ is the annulus and an element of $\mathbb {A}$ is denoted by $(\theta, r)$. \noindent $\bullet$ $\mathbb {A}$ is endowed with its usual symplectic form, $\omega=d\theta\wedge dr$, and its usual Riemannian metric.
\noindent $\bullet$ $\pi: \mathbb {T} \times \mathbb {R} \rightarrow\mathbb {T}$ is the first projection and $\tilde\pi: \mathbb {R}^2\rightarrow \mathbb {R}$ its lift. \noindent $\bullet$ if $\alpha\in\mathbb {T}$, $R_\alpha: \mathbb {T}\rightarrow \mathbb {T}$ is the rotation defined by $R_\alpha (\theta)=\theta+\alpha$. \end{nota} \begin{defin} A $C^1$ diffeomorphism $f: \mathbb {A}\rightarrow \mathbb {A}$ of the annulus that is isotopic to the identity is a {\em positive twist map} (resp. {\em negative twist map}) if, for any given lift $\tilde f: \mathbb {R}^2\rightarrow \mathbb {R}^2$ and for every $\tilde\theta\in\mathbb {R}$, the map $r\mapsto \tilde\pi\circ \tilde f(\tilde\theta,r)$ is an increasing (resp.\ decreasing) diffeomorphism. A {\em twist map} may be positive or negative. \end{defin} Then the maps $f_\varphi$ that we defined at the end of the introduction are positive symplectic twist maps. \begin{defin} Let $\gamma:\mathbb {T}\rightarrow\mathbb {R}$ be a continuous map. We say that $\gamma$ is $C^1$ at $\theta\in \mathbb {T}$ if there exists a number $\gamma'(\theta)\in\mathbb {R}$ such that, for all sequences $(\theta^1_n)$ and $(\theta_n^2)$ of points of $\mathbb {T}$ that converge to $\theta$ with $\theta_n^1\not=\theta_n^2$, we have: $$\lim_{n\rightarrow \infty} \frac{\gamma(\theta_n^1)-\gamma(\theta_n^2)}{\theta^1_n-\theta^2_n}=\gamma'(\theta), $$ where we denote by $\theta_n^1-\theta_n^2$ the unique representative of $\theta_n^1-\theta_n^2$ that belongs to $]-\frac{1}{2}, \frac{1}{2}]$. \end{defin} If $\gamma$ is differentiable at every point of $\mathbb {T}$, then this notion of $C^1$ coincides with the usual one (the derivative is continuous at the considered point). \subsection{Denjoy counter-example} Following \cite{He2} p.~94, we define a Denjoy counter-example in the following way.\\ We assume that $\alpha\notin\mathbb {Q}/\mathbb {Z}$, $\delta>0$ and $C\gg 1$.
Then we introduce: $$\ell_k=\frac{a_C }{(|k|+C)(\log(|k|+C))^{1+\delta}}$$ where $a_C $ is chosen such that $\displaystyle{ \sum_{k\in\mathbb {Z}}\ell_k=1 }$. We use a $C^\infty$ function $\eta:\mathbb {R}\rightarrow \mathbb {R}$ such that $\eta\geq 0$, ${\rm support}(\eta)\subset [\frac{1}{4}, \frac{3}{4}]$ and $\int_0^1\eta(t)dt=1$. We define $\eta_k$ by $\eta_k(t)=\eta\left( \frac{t}{\ell_k}\right)$. Then we have: $\int_0^1\eta_k(t)dt=\ell_k$. Moreover, there exist two constants $C_1$, $C_2$, depending only on $\eta$, such that: $$C_1\leq \|\eta_k\|_\infty\leq C_2;\quad \frac{C_1}{\ell_k}\leq \|\eta_k'\|_\infty\leq \frac{C_2}{\ell_k}.$$ We now assume that $C\gg 1$ is large enough so that: $$\forall k\in\mathbb {Z}, \left| \frac{\ell_{k+1}}{\ell_k}-1\right|C_2<1.$$ Then the map $h_k: [0, \ell_k]\rightarrow [0, \ell_{k+1}]$ defined by $h_k(x)=\int_0^x \left(1+\left(\frac{\ell_{k+1}}{\ell_{k}}-1\right) \eta_k(t)\right)dt$ is a $C^\infty$ diffeomorphism such that $h_k(\ell_k)=\ell_{k+1}$. There exists a Cantor subset $K\subset \mathbb {T}$ that has zero Lebesgue measure and is such that the connected components of $\mathbb {T}\backslash K$, denoted by $(I_k)_{k\in\mathbb {Z}}$, are in the same order on $\mathbb {T}$ as the sequence $(k\alpha)$ and such that ${\rm length}(I_k)=\ell_k$.\\ Let us recall the definition of the semi-conjugation $j:\mathbb {T}\rightarrow \mathbb {T}$ of the Denjoy counter-example to the rotation $R_\alpha$. If $x\notin \{ k\alpha; k\in\mathbb {Z}\}$, then we define: $j^{-1}(x)=\int_0^xd\mu(t)$ where $\mu$ is the probability measure $\displaystyle{\mu =\sum_{k\in \mathbb {Z}}\ell_k\delta_{k\alpha}}$, $\delta_{k\alpha}$ being the Dirac mass at $k\alpha$.
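As a quick numerical sanity check of this normalisation, one can compute a truncated version of the lengths $\ell_k$; the cutoff $K$ and the values of $C$ and $\delta$ below are invented for illustration:

```python
import math

def denjoy_lengths(C=100.0, delta=1.0, K=100_000):
    """Truncated version of the lengths l_k of Herman's construction.

    l_k = a_C / ((|k| + C) * log(|k| + C)^(1 + delta)), with a_C chosen so
    that the (here: truncated) sum over k equals 1.  The cutoff K and the
    values of C and delta are invented for illustration.
    """
    ks = range(-K, K + 1)
    raw = {k: 1.0 / ((abs(k) + C) * math.log(abs(k) + C) ** (1 + delta))
           for k in ks}
    a_C = 1.0 / sum(raw.values())            # normalisation constant
    return {k: a_C * v for k, v in raw.items()}

l = denjoy_lengths()
assert abs(sum(l.values()) - 1.0) < 1e-9     # total length 1 by construction
# the ratios l_{k+1}/l_k are uniformly close to 1 when C is large, which is
# what makes the smallness condition |l_{k+1}/l_k - 1| * C_2 < 1 achievable
worst = max(abs(l[k + 1] / l[k] - 1.0) for k in range(-1000, 1000))
assert worst < 0.05
```

Taking $C$ larger makes the worst ratio defect as small as desired, which is how the condition $\left|\frac{\ell_{k+1}}{\ell_k}-1\right|C_2<1$ is arranged.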
Then $j:\mathbb {T}\rightarrow\mathbb {T}$ is a continuous map of degree 1 that preserves the order on $\mathbb {T}$ and is such that $j(I_k)=\{k\alpha\}$.\\ Then there is a $C^1$ diffeomorphism $g:\mathbb {T}\rightarrow \mathbb {T}$ that leaves $K$ invariant, is such that $K$ is the unique minimal subset for $g$, has rotation number $\rho(g)=\alpha$ and verifies $j\circ g=R_\alpha\circ j$. If $k\in\mathbb {Z}$, we introduce the notation $g_{|I_k}=g_k$; then we have: $g_k(I_k)=I_{k+1}$. Following \cite{He2} again, we can assume that $g_k'=g'_{|I_k}=\left(1+\left(\frac{\ell_{k+1}}{\ell_{k}}-1\right) \eta_k\right)\circ R_{-\lambda_k}$, where $R_{\lambda_k}(I_k)=[0, \ell_k]$, and that $g_k: I_k\rightarrow I_{k+1}$ is given by $g_k=R_{\lambda_{k+1}}\circ h_k\circ R_{-\lambda_k}$.\\ Let us point out two facts that will be useful: $\displaystyle{\lim_{|k|\rightarrow \infty}\| g'_k-1\|_\infty=0 }$ and: \\ $\forall \theta\in K, g'(\theta)=1$. \subsection{Modification of the Denjoy counter-example $g$} We choose $x_0\in I_0$ and we consider its orbit $(x_k)_{k\in\mathbb {Z}}=(g^k(x_0))_{k\in\mathbb {Z}}$. \\ We will build a perturbation $h$ of $g$ such that $h_{|K}=g_{|K}$ and $\forall k\in\mathbb {Z}, h(x_k)=g(x_k)=x_{k+1}$. \begin{nota} \begin{enumerate} \item[$\bullet$] for all $k\in \mathbb {Z}$, we set: $I_k=]a_k, b_k[$, $L_k=]a_k, x_k]$ and $R_k=[x_k, b_k[$; \item[$\bullet$] $\chi: \mathbb {R}\rightarrow \mathbb {R}$ is defined by: $\chi=\tilde g +\tilde g^{-1}-2Id_\mathbb {R}$; \item[$\bullet$] for all $k\in\mathbb {Z}$, we denote: $\alpha_k=g'(x_k)$ and $m_k=2+\chi'(x_k)$. \end{enumerate} \end{nota} Because of the definition of $\chi$, we then have: $\forall k\in\mathbb {Z}, \alpha_k+\frac{1}{\alpha_{k-1}}=m_k$. \begin{nota} For every parameter $m\in\mathbb {R}$, let $\Phi_m: ]0, +\infty[\rightarrow ]-\infty, m[$ be defined by $\Phi_m(t)=m-\frac{1}{t}$. \end{nota} Let us notice that every function $\Phi_m$ is an increasing diffeomorphism.
Moreover: \begin{enumerate} \item[$\bullet$] if $m<2$, then $\Phi_m$ has no fixed point and: $\forall t, \Phi_m(t)<t$ and $\displaystyle{\lim_{n\rightarrow +\infty}\Phi_m^n(t)=-\infty}$; \item[$\bullet$] if $m=2$, $1$ is the only fixed point of $\Phi_m$. Moreover: if $t>1$, then $1<\Phi_m(t)<t$ and $\displaystyle{\lim_{n\rightarrow +\infty}\Phi_m^n(t)=1}$; if $t<1$, then $\Phi_m(t)<t$ and $\displaystyle{\lim_{n\rightarrow +\infty}\Phi_m^n(t)=-\infty}$; \item[$\bullet$] if $m>2$, $\Phi_m$ has two fixed points, $p_-<p_+$; if $t>p_+$, then $p_+<\Phi_m(t)<t$ and $\displaystyle{\lim_{n\rightarrow +\infty} \Phi_m^n(t)=p_+}$; if $p_-<t<p_+$, then $p_-<t<\Phi_m(t)<p_+$ and $\displaystyle{\lim_{n\rightarrow +\infty} \Phi_m^n(t)=p_+}$; if $t<p_-$, then $\Phi_m(t)<t$ and $\displaystyle{\lim_{n\rightarrow +\infty}\Phi_m^n(t)=-\infty}$. \end{enumerate} We have: $\forall k\in \mathbb {Z}, \alpha_k=\Phi_{m_k}(\alpha_{k-1})$. \\ Let us now choose $\beta_0^L>\alpha_0$ and $\beta_0^R>\alpha_0$ (each of them is then denoted by $\beta_0$). As every $\Phi_m$ is increasing and defined on $]0,+ \infty[$, we can define $(\beta_n)_{n\geq 0}$ in the following way: $\beta_{n+1}=\Phi_{m_{n+1}}(\beta_n)$. Then $\forall n\geq 0, \beta_n>\alpha_n>0$. \begin{lemma}\label{L1} We have: $\displaystyle{\lim_{n\rightarrow +\infty} \beta_n=1}$. \end{lemma} \demo Let us recall that $\displaystyle{\lim_{n\rightarrow +\infty} \alpha_n=1}$. We deduce that $\displaystyle{\liminf_{n\rightarrow+\infty} \beta_n\geq 1}$ and that $\displaystyle{\lim_{n\rightarrow +\infty} m_n=2}$.\\ Let us fix $\varepsilon>0$; then there exists $N>0$ such that: $\forall n\geq N, m_n\leq 2+\varepsilon$. Then, for all $n\geq N$, we have: $\beta_{n+1}=\Phi_{m_{n+1}}(\beta_n)=\Phi_{2+\varepsilon}(\beta_n)-(2+\varepsilon-m_{n+1})\leq\Phi_{2+\varepsilon}(\beta_n)$. Using the fact that $\Phi_{2+\varepsilon}$ is increasing, we easily deduce: $\forall n\geq 0, \beta_{N+n}\leq \Phi_{2+\varepsilon}^n(\beta_N)$.
We know that $(\Phi_{2+\varepsilon}^n(\beta_N))_{n\in\mathbb {N}}$ has a limit, and because $\displaystyle{\liminf_{n\rightarrow+\infty} \beta_n\geq 1}$ this limit cannot be smaller than $1$. Hence $\displaystyle{\lim_{n\rightarrow +\infty} \Phi_{2+\varepsilon}^n(\beta_N)=p_+(\varepsilon)}$, where we denote the greatest fixed point of $\Phi_{2+\varepsilon}$ by $p_+(\varepsilon)$. We deduce that $\displaystyle{\limsup_{n\rightarrow +\infty}\beta_n\leq p_+(\varepsilon)}$. We have: $p_+(\varepsilon)=\frac{2+\varepsilon+\sqrt{\varepsilon(4+\varepsilon)}}{2}$, hence: $\displaystyle{\lim_{\varepsilon\rightarrow 0^+}p_+(\varepsilon)=1}$.\\ Finally, we have proved that $\displaystyle{\limsup_{n\rightarrow +\infty}\beta_n\leq 1}$. As $\displaystyle{\liminf_{n\rightarrow +\infty}\beta_n\geq 1}$, we deduce the lemma. \qed\goodbreak\vskip10pt In a similar way, we choose $0<\beta_{-1}^L<\alpha_{-1}$ and $0<\beta_{-1}^R<\alpha_{-1}$ and we denote each of them by $\beta_{-1}$. As every $\Phi_m^{-1}$ is increasing and defined on $]-\infty, m[$, we can define $(\beta_k)_{k\leq -1}$ by $\beta_{k-1}=\left(\Phi_{m_k}\right)^{-1}(\beta_k)$. Then: $\forall k\leq -1, \beta_k<\alpha_k$. \begin{lemma} We have: $\displaystyle{\lim_{k\rightarrow -\infty} \beta_k=1}$. \end{lemma} The proof of this lemma is similar to that of lemma \ref{L1}. Moreover, we can choose the initial values so that: \begin{enumerate} \item[$\bullet$] $\beta_0^R\not=\beta_0^L$; \item[$\bullet$] $\beta_0^L+\frac{1}{\beta_{-1}^L}=\beta_0^R+\frac{1}{\beta_{-1}^R}$. \end{enumerate} We then denote this last quantity by $\tilde m_{0}=\beta_0^L+\frac{1}{\beta_{-1}^L}=\beta_0^R+\frac{1}{\beta_{-1}^R}$. Because $\beta_0>\alpha_0$ and $\beta_{-1}<\alpha_{-1}$, we necessarily have: $\tilde m_0>m_0$. Now $(\beta_k^L)$ (resp. $(\beta_k^R)$) is a good candidate to be the left (resp. right) derivative of $h$ along the orbit $(x_k)$.
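The behaviour of the maps $\Phi_m$ and the mechanism of the lemma can be checked numerically; the sequence $(m_n)$ below, with $m_n>2$ summably close to $2$, is invented for illustration:

```python
import math

def phi(m, t):
    """Phi_m(t) = m - 1/t, an increasing diffeomorphism ]0,+oo[ -> ]-oo,m[."""
    return m - 1.0 / t

def p_plus(m):
    """Greatest fixed point of Phi_m for m > 2: the largest root of t^2 - m*t + 1."""
    return (m + math.sqrt(m * m - 4.0)) / 2.0

assert abs(phi(2.5, p_plus(2.5)) - p_plus(2.5)) < 1e-12   # p_+ is indeed fixed

# Mechanism of the lemma: if m_n -> 2 from above and beta_0 > 1, the sequence
# beta_{n+1} = Phi_{m_{n+1}}(beta_n) tends to 1.
beta = 1.5
for n in range(100_000):
    beta = phi(2.0 + 1.0 / (n + 2) ** 2, beta)
    assert beta > 1.0     # here beta_n stays above the limit, since m_n > 2
assert abs(beta - 1.0) < 1e-2
```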
\begin{prop}\label{P1} There exists an orientation preserving homeomorphism $h:\mathbb {T}\rightarrow \mathbb {T}$ such that: \begin{enumerate} \item[$\bullet$] $h_{|K}=g_{|K}$ and $\forall k\in\mathbb {Z}, h(x_k)=g(x_k)$; \item[$\bullet$] $h$ and $h^{-1}$ are $C^1$ at every point of $\mathbb {T}$ but the orbit of $x_0$; \item[$\bullet$] $h$ and $h^{-1}$ have right and left derivatives at every point $x_k$ of the orbit of $x_0$, and $h'_R(x_k)=\beta_k^R$, $h'_L(x_k)=\beta_k^L$. Moreover, $h_{|R_k}$ and $h_{|L_k}$ are $C^1$. \end{enumerate} \end{prop} \begin{remk} If proposition \ref{P1} is true, then theorem \ref{T1} is proved: if $h=Id_\mathbb {T}+\psi$ and $\varphi=\tilde h+\tilde h^{-1}-2Id_\mathbb {R}$, then the graph of $\psi$ is invariant by $f_\varphi$ and the dynamics of $f_\varphi$ restricted to this graph is that of a Denjoy counter-example with rotation number $\alpha$. Moreover, $\psi$ is non-differentiable along the orbit of $x_0$ but $\varphi$ is $C^1$. Indeed, as $h$ and $h^{-1}$ are, $\varphi$ is $C^1$ at every point of $\mathbb {T}$ but the orbit of $x_0$. Moreover, the restriction of $\varphi$ to each interval $L_k=]a_k, x_k]$ or $R_k=[x_k, b_k[$ is $C^1$. To prove that $\varphi$ is $C^1$, we then just have to prove that the right and left derivatives are equal along the orbit of $x_0$. We have: \begin{enumerate} \item[$\bullet$] if $k\not=0$, $\varphi'_L(x_k)=\beta_k^L+\frac{1}{\beta_{k-1}^L}-2=m_k-2=\beta_k^R+\frac{1}{\beta_{k-1}^R}-2=\varphi'_R(x_k)$; \item[$\bullet$] if $k=0$, $\varphi'_L(x_0)=\beta_0^L+\frac{1}{\beta_{-1}^L}-2=\tilde m_0-2=\beta_0^R+\frac{1}{\beta_{-1}^R}-2=\varphi'_R(x_0)$. \end{enumerate} Hence $\varphi$ is $C^1$. \end{remk} Let us now prove proposition \ref{P1}. We modify $g$, or rather its derivative, in each interval $L_k$ and $R_k$ in the following way.
Let us notice that: $\displaystyle{\lim_{|k|\rightarrow +\infty} \|g'_{|[a_k, b_k]}-1\|_\infty=0}$; $\displaystyle{\lim_{|k|\rightarrow +\infty}\beta_k^L=\lim_{|k|\rightarrow +\infty}\beta_k^R=1}$; $g'_{|K}=1$.\\ Then on each interval $L_k=]a_k, x_k]$, we replace $g'$ by a continuous function $\delta_k:]a_k, x_k]\rightarrow \mathbb {R}_+^*$ such that: \begin{enumerate} \item\label{pt1} $\delta_k(x_k)=\beta_k^L$; \item\label{pt2} $\delta_k$ coincides with $g'$ in a neighborhood of $a_k$; \item\label{pt3} $\int_{a_k}^{x_k}\delta_k=\int_{a_k}^{x_k}g'=\tilde g(x_k)-\tilde g(a_k)$; \item\label{pt4} $\forall t\in L_k, |\delta_k(t)-1|\leq \max\{ \sup_{L_k}|g'-1|, |\beta_k^L-1|\}+\frac{1}{1+|k|}$. \end{enumerate} To build $\delta_k$, we just have to replace $g'$ between $x_k-\varepsilon_k$ and $x_k$ by some affine function and then to modify slightly $g'$ elsewhere in $L_k$ to rectify the value of the integral. If $\varepsilon_k$ is small enough, then the change in the integral is very small and we obtain the last inequality (but of course the slope of the affine function can be very large, so the perturbation of $g$ that we build is not small in the $C^2$ topology).\\ We then define $h_{|L_k}$ by: $$\forall t\in [a_k, x_k], h(t)= g(a_k)+\int_{a_k}^t\delta_k (s)ds.$$ We proceed in a similar way to define $h_{|R_k}$ and we obtain similar properties. Moreover, we require: $h_{|K}=g_{|K}$. Then $h$ is continuous. Indeed, by construction, its restriction to every interval $I_k$ is continuous. Moreover, we have $\displaystyle{\lim_{|k|\rightarrow +\infty} \sup_{I_k}|h-g(a_k)|=0}$ (because of point \ref{pt4} and the fact that $\displaystyle{\lim_{|k|\rightarrow \infty}{\rm length}(I_k)=0}$). We deduce that $h$ is continuous at every point of $K$. Moreover, $h$ is orientation preserving and injective by construction. Hence $h$ is an orientation preserving homeomorphism of $\mathbb {T}$. Moreover, $h_{|K}=g_{|K}$ by construction and $\forall k\in\mathbb {Z}, h(x_k)=g(x_k)$ by point \ref{pt3}.
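The construction of $\delta_k$ can be illustrated numerically on a model interval $[0,1]$; the test derivative, the target slope $\beta$, the bump function used to rectify the integral and all numerical parameters below are invented for illustration:

```python
import math

def build_delta(dprime, beta, eps=1e-3, n=100_000):
    """Sketch of the construction of delta_k on a model interval [0, 1].

    dprime : the original derivative g' (positive, continuous on [0, 1]);
    beta   : the prescribed value at the right endpoint (beta_k^L).
    """
    h = 1.0 / n
    ts = [i * h for i in range(n + 1)]

    def affine_part(t):
        # affine bridge from dprime(1 - eps) to beta on [1 - eps, 1]
        s = (t - (1.0 - eps)) / eps
        return (1.0 - s) * dprime(1.0 - eps) + s * beta

    def bump(t):
        # smooth bump supported in [1/4, 3/4], far from both endpoints
        return math.exp(-1.0 / (t - 0.25) / (0.75 - t)) if 0.25 < t < 0.75 else 0.0

    integ = lambda vals: h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
    raw = [affine_part(t) if t > 1.0 - eps else dprime(t) for t in ts]
    # rectify the integral with a multiple of the bump, as in the text
    c = (integ([dprime(t) for t in ts]) - integ(raw)) / integ([bump(t) for t in ts])
    return ts, [r + c * bump(t) for t, r in zip(ts, raw)]

gprime = lambda t: 1.0 + 0.1 * math.sin(2 * math.pi * t)   # a stand-in for g'
ts, delta = build_delta(gprime, beta=1.3)
h = ts[1] - ts[0]
trap = lambda v: h * (sum(v) - 0.5 * (v[0] + v[-1]))
orig = trap([gprime(t) for t in ts])
assert abs(delta[-1] - 1.3) < 1e-6            # point 1: prescribed endpoint value
assert abs(delta[0] - gprime(0.0)) < 1e-12    # point 2: untouched near the left end
assert abs(trap(delta) - orig) < 1e-8         # point 3: the integral is preserved
assert min(delta) > 0.0                        # delta stays positive
```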
By construction, $h_{|R_k}$ and $h_{|L_k}$ are $C^1$ (and then the same is true for $h^{-1}$), $h'_R(x_k)=\beta_k^R$ and $h'_L(x_k)=\beta_k^L$. Let us now prove that $h$ is differentiable at every point of $K$ and that $h'_{|K}=1$. We consider $y\in K$ and a sequence $(y_n)$ that converges to $y$ and is such that: $\forall n, y_n\not=y$. We want to prove that $\displaystyle{\lim_{n\rightarrow +\infty} \frac{h(y_n)-h(y)}{y_n-y}=1}$. Considering the different cases separately, we can assume that $(y_n)$ tends to $y$ from above. Then there are two cases: \begin{enumerate} \item[$\bullet$] either $y=a_k$ for some $k$; then we have the conclusion by point \ref{pt2}; \item[$\bullet$] or $y$ is accumulated from above by a sequence $(a_{k_n})_{n\in\mathbb {N}}$ of left endpoints of intervals $I_{k_n}$. \end{enumerate} Because $g$ is $C^1$ and $g'_{|K}=1$, for every $\varepsilon>0$ there exists $\eta>0$ such that for every $z\in [y, y+\eta[$, we have $|g'(z)-1|<\varepsilon$. Let us assume that $n$ is big enough so that $y_n\in[y, y+\eta[$. There are three cases: \begin{enumerate} \item[$\bullet$] $y_n\in K$. Then there exists $z\in [y, y+\eta[$ such that: $$ \left| \frac{h(y_n)-h(y)}{y_n-y}-1\right|= \left|\frac{g(y_n)-g(y)}{y_n-y}-1\right|=\left|g'(z)-1\right|<\varepsilon;$$ \item[$\bullet$] $y_n\in L_k$ for some $k$. Then there exist $z, z'\in [y, y+\eta[$ such that: $$ \frac{h(y_n)-h(y)}{y_n-y} = \frac{h(y_n)-h(a_k)+g(a_k)-g(y)}{y_n-y} =\frac{y_n-a_k}{y_n-y}h'(z)+\frac{a_k-y}{y_n-y}g'(z');$$ \item[$\bullet$] $y_n\in R_k$ for some $k$. Then there exist $z, z'\in [y, y+\eta[$ such that: $$ \frac{h(y_n)-h(y)}{y_n-y} = \frac{h(y_n)-h(x_k)+g(x_k)-g(y)}{y_n-y} =\frac{y_n-x_k}{y_n-y}h'(z)+\frac{x_k-y}{y_n-y}g'(z').$$ In the last two cases, because $g$ is $C^1$ and because of point \ref{pt4}, if $\eta$ is small enough, then $h'(z)$ and $g'(z')$ are close enough to $1$, and their barycentre is close to $1$ too. \end{enumerate} Hence we have proved that $h$ is differentiable at every point of $K$ and that $h'_{|K}=1$.
Because $g$ is $C^1$ with $g'_{|K}=1$ and because of point \ref{pt4}, $h$ is in fact $C^1$ at every point of $K$, with $h'_{|K}=1$. Then $h$ satisfies all the conclusions of proposition \ref{P1}. \end{document}
\begin{document} \title{On algebraic and topological semantics of the modal logic of common knowledge $\mathsf{S4}^{C}_I$} \begin{abstract} We investigate algebraic and topological semantics of the modal logic $\mathsf{S4}^{C}_I$ and obtain strong completeness of the given system in the case of local semantic consequence relations. In addition, we consider an extension of the logic $\mathsf{S4}^{C}_I$ with certain infinitary derivations and establish strong completeness results for the obtained system in the case of global semantic consequence relations. Furthermore, we identify the class of completable $\mathsf{S4}^{C}_I$-algebras and obtain for them a Stone-type representation theorem.\\\\ \textit{Keywords:} common knowledge, algebraic semantics, topological semantics, local and global consequence relations, infinitary derivations, fixed-point algebras, completions \end{abstract} \section{Introduction} \label{s1} The modal logic $\mathsf{S4}^{C}_I$ \cite{Fag+95, MH95} is an epistemic propositional logic whose language contains modal connectives $\Box_i$ for each element $i$ of a finite non-empty set $I$ and an additional modal connective $\C$. This language has the following intended interpretation: elements of the set $I$ are understood as epistemic agents; an expression $\Box_i \varphi$ is interpreted as 'agent $i$ knows that $\varphi$ is true'; the intended interpretation of $\C \varphi$ is '$\varphi$ is common knowledge among the agents from $I$'. Also, there is an abbreviation $\E \varphi:= \bigwedge_{i\in I} \Box_i\varphi$, which expresses mutual knowledge of $\varphi$: 'all agents know that $\varphi$ is true'. The concept of common knowledge is captured in $\mathsf{S4}^{C}_I$ according to the so-called fixed-point account (see Subsection 2.4 from \cite{Van13}). In terms of algebraic semantics, it means that $\C a$ is equal to $\nu z.
\: (\E a \wedge \E z)$ in any $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$, i.e., in any $\mathsf{S4}^{C}_I$-algebra, an element $\C a$ is the greatest fixed-point of a mapping $z \mapsto \E a \wedge \E z$. Thus the logic $\mathsf{S4}^{C}_I$ belongs to the family of modal fixed-point logics and, like other logics from this family, is not valid in its canonical Kripke frame and is not strongly complete with respect to its Kripke semantics. In the given article, we obtain strong completeness of the logic $\mathsf{S4}^{C}_I$ for its topological interpretation. Our results concern not only the so-called local semantic consequence relation, but also the global one. Recall that, over topological $\mathsf{S4}^{C}_I$-models, a formula $\varphi$ is a local semantic consequence of $\Gamma$ if for any topological $\mathsf{S4}^{C}_I$-model $\mathcal{M}$ and any point $x$ of $\mathcal{M}$ \[(\forall \psi\in \Gamma\;\; \mathcal{M},x \vDash \psi )\Longrightarrow \mathcal{M},x \vDash \varphi.\] A formula $\varphi$ is a global semantic consequence of $\Gamma$ if for any topological $\mathsf{S4}^{C}_I$-model $\mathcal{M}$ \[(\forall \psi\in \Gamma\;\; \mathcal{M} \vDash \psi )\Longrightarrow \mathcal{M} \vDash \varphi.\] We prove that the local semantic consequence relation corresponds to a derivability relation obtained from ordinary derivations from assumptions in $\mathsf{S4}^{C}_I$, whereas the global one corresponds to a derivability relation defined by certain infinitary derivations. In order to obtain these completeness results, we focus on algebraic semantics of the logic $\mathsf{S4}^{C}_I$. Although very little is known in general about completions of fixed-point Boolean algebras, we manage to identify the class of completable $\mathsf{S4}^{C}_I$-algebras and obtain for them a Stone-type representation theorem. 
As a corollary, we establish algebraic and topological completeness of the logic $\mathsf{S4}^{C}_I$ so that the global algebraic and topological consequence relations correspond to the derivability relation defined by the aforementioned infinitary derivations. Finally, we shall note that the given article is inspired by \cite{Sham20}, where similar results are obtained for the provability logics $\mathsf{GL}$ and $\mathsf{GLP}$. \section{Ordinary and infinitary derivations} \label{s2} In this section we recall a Hilbert calculus for the modal logic of common knowledge $\mathsf{S4}^{C}_I$ and consider ordinary and infinitary derivations in the given system. Throughout this article, we fix a finite non-empty set $I$ of agents. \textit{Formulas of} $\mathsf{S4}^{C}_I$ are built from the countable set of propositional variables $\mathit{Var}= \{p_0, p_1, \dotsc\}$ and the constant $\bot$ using the connectives $\to$, $\Box_i$, for each $i\in I$, and $\C$. We treat other Boolean connectives and the modal connective $\E$ as abbreviations: \begin{gather*} \neg \varphi := \varphi\to \bot,\qquad\top := \neg \bot,\qquad \varphi\wedge \psi := \neg (\varphi\to \neg \psi), \qquad \varphi\vee \psi := \neg \varphi\to \psi,\\ \varphi \leftrightarrow \psi:=(\varphi\to \psi)\wedge (\psi \to \varphi),\qquad \quad \E \varphi := \bigwedge\limits_{i\in I} \Box_i \varphi. \end{gather*} We denote the set of formulas of $\mathsf{S4}^{C}_I$ by $\mathit{Fm}_I$. The logic $\mathsf{S4}^{C}_I$ is defined by the following Hilbert calculus.
\textit{Axiom schemas:} \begin{itemize} \item[(i)] the tautologies of classical propositional logic; \item[(ii)] $\Box_i (\varphi\to \psi)\to (\Box_i \varphi\to \Box_i \psi)$; \item[(iii)] $\Box_i \varphi \to \Box_i \Box_i \varphi$; \item[(iv)] $\Box_i \varphi \to \varphi$; \item[(v)] $\C (\varphi\to \psi)\to (\C \varphi\to \C \psi) $; \item[(vi)] $\C \varphi \rightarrow \E\varphi \wedge \E \C \varphi$; \item[(vii)] $ \E\varphi \wedge \C(\varphi \rightarrow \E \varphi) \rightarrow \C \varphi$. \end{itemize} \textit{Inference rules:} \[ \AXC{$\varphi$} \AXC{$\varphi\to \psi$} \LeftLabel{$\mathsf{mp}$} \RightLabel{ ,} \BIC{$\psi$} \DisplayProof\qquad \AXC{$\varphi$} \LeftLabel{$\mathsf{nec}$} \RightLabel{ .} \UIC{$\C \varphi$} \DisplayProof \] We note that reflexivity and transitivity of the modal connective $\C$ are provable in $\mathsf{S4}^{C}_I$, i.e. $\mathsf{S4}^{C}_I \vdash \C \varphi \to \varphi$ and $\mathsf{S4}^{C}_I \vdash \C \varphi \to \C\C \varphi$ for any formula $\varphi$. Now we introduce three derivability relations $\vdash_l$, $\vdash_g$ and $\vdash$ with the following interpretation: \begin{itemize} \item from an assumption that all statements from $\Gamma$ are true, it follows that $\varphi$ is true ($\Gamma \vdash_l \varphi$); \item from an assumption that all statements from $\Sigma$ are common knowledge, it follows that $\varphi$ is common knowledge ($\Sigma \vdash_g \varphi$); \item from an assumption that all statements from $\Sigma$ are common knowledge and all statements from $\Gamma$ are true, it follows that $\varphi$ is true ($\Sigma ; \Gamma \vdash \varphi$). \end{itemize} For a set of formulas $\Gamma$ and a formula $\varphi$, we put $\Gamma \vdash_l \varphi$ if there is a finite subset $\Gamma_0$ of $\Gamma$ such that $\mathsf{S4}^{C}_I \vdash \bigwedge \Gamma_0 \to \varphi$.
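The reflexivity of $\C$ noted above can be sketched as follows (an outline using only the listed axioms and propositional reasoning):

```latex
% Sketch: S4^C_I proves C(phi) -> phi.
\begin{align*}
&\C \varphi \to \E\varphi \wedge \E \C\varphi      && \text{axiom (vi)}\\
&\E\varphi \to \Box_i \varphi \quad (i \in I)       && \text{definition of } \E,\ I \neq \emptyset\\
&\Box_i \varphi \to \varphi                          && \text{axiom (iv)}\\
&\C \varphi \to \varphi                              && \text{chaining the implications.}
\end{align*}
```

For transitivity, one combines $\C\varphi\to\E\C\varphi$ (from axiom (vi)), the rule ($\mathsf{nec}$) and the induction axiom $\E\varphi \wedge \C(\varphi \rightarrow \E \varphi) \rightarrow \C \varphi$ applied to $\C\varphi$ in place of $\varphi$.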
In order to define the derivability relation $\vdash_g$, we extend the Hilbert calculus for the logic $\mathsf{S4}^{C}_I$ with the following infinitary derivations. An \emph{$\omega$-derivation} is a well-founded tree whose nodes are marked by formulas of $\mathsf{S4}^{C}_I$ and that is constructed according to the rules ($\mathsf{mp}$), ($\mathsf{nec}$) and the following inference rule: \[ \AXC{$\varphi_0 \rightarrow \E\psi \wedge \E\varphi_1$} \AXC{$\varphi_1 \rightarrow \E\psi \wedge \E \varphi_2$} \AXC{$\varphi_2 \rightarrow \E\psi \wedge \E \varphi_3 \qquad \ldots$} \LeftLabel{$\omega$} \RightLabel{ .} \TIC{$\varphi_0 \rightarrow \C \psi$} \DisplayProof \] An \emph{assumption leaf} of an $\omega$-derivation is a leaf that is not marked by an axiom of $\mathsf{S4}^{C}_I$. For a set of formulas $\Sigma$ and a formula $\varphi$, we set $\Sigma \vdash_g \varphi$ if there is an $\omega$-derivation with the root marked by $\varphi$ in which all assumption leaves are marked by some elements of $\Sigma$. We note that $\Sigma \vdash_g \C \varphi$ and $\Sigma \vdash_g \Box_i \varphi$ for each $i\in I$ whenever $\Sigma \vdash_g \varphi$. \begin{proposition}\label{basic property} For any formula $\varphi$, we have \[\mathsf{S4}^{C}_I \vdash \varphi \Longleftrightarrow \emptyset \vdash_g \varphi .\] \end{proposition} \begin{proof} Our proof is based on Kripke semantics of $\mathsf{S4}^{C}_I$. Recall that a Kripke $\mathsf{S4}^C_I$-frame $(W, (R_i)_{i\in I}, S)$ is a set $W$ together with binary relations $R_i$, for $i \in I$, and $S$ on $W$ such that \begin{itemize} \item $R_i$ is reflexive and transitive for each $i\in I$; \item $S$ is the transitive closure of the relation $\bigcup\limits_{i\in I} R_i$. \end{itemize} The notion of a Kripke $\mathsf{S4}^C_I$-model is defined in the standard way. We now recall that the logic $\mathsf{S4}^{C}_I$ is sound and complete with respect to its class of Kripke frames (see \cite{Fag+95, MH95, StShZo20}).
The left-to-right implication trivially holds. For the converse, it is sufficient to show that the inference rule ($\omega$) is admissible in $\mathsf{S4}^{C}_I$, i.e. $\mathsf{S4}^{C}_I \vdash \varphi_0 \to \C \psi$ whenever there exists a sequence $(\varphi_j)_{j\in \mathbb{N}}$ such that $\mathsf{S4}^{C}_I \vdash \varphi_j \to \E \psi \wedge \E \varphi_{j+1}$ for each $j\in \mathbb{N}$. This claim is established by \emph{reductio ad absurdum}. Assume $\mathsf{S4}^{C}_I \nvdash \varphi_0 \to \C \psi$ and $\mathsf{S4}^{C}_I \vdash \varphi_j \to \E \psi \wedge \E \varphi_{j+1}$ for $j\in \mathbb{N}$. Then there exist a Kripke $\mathsf{S4}^{C}_I$-model $\mathcal{K}$ and its world $w$ such that $\mathcal{K}, w\vDash \varphi_0$ and $\mathcal{K}, w\nvDash \C\psi$. Consequently, there is a world $w^\prime$ such that $(w, w^\prime)\in S $ and $\mathcal{K}, w^\prime\nvDash \psi$. Since $S$ is the transitive closure of $\bigcup_{i\in I} R_i$, there is a finite sequence of worlds $w_0, w_1, \dotsc, w_k$, where $w_0 =w$, $w_k =w^\prime$, $k>0$ and $(w_l, w_{l+1}) \in \bigcup_{i\in I} R_i$ for each $l<k$. From the assertions $\mathcal{K}, w\vDash \varphi_0$ and $\mathsf{S4}^{C}_I \vdash \varphi_j \to \E \psi \wedge \E \varphi_{j+1}$, we obtain that $\mathcal{K}, w_l\vDash \varphi_l$ for $l<k$. Since $\mathcal{K}, w_{k-1}\vDash \varphi_{k-1}$, we see that $\mathcal{K}, w_{k-1}\vDash \E \psi$ and $\mathcal{K}, w_{k}\vDash \psi$. This contradiction with the assertion $\mathcal{K}, w^\prime\nvDash \psi$ concludes the proof. \end{proof} Finally, we define the third derivability relation $\vdash$. We put $\Sigma ; \Gamma \vdash \varphi$ if there is a finite subset $\Gamma_0$ of $\Gamma$ such that $\Sigma \vdash_g \bigwedge \Gamma_0 \to \varphi $.
Note that the relation $\vdash $ is a generalization of $\vdash_l$ and $\vdash_g$ since $\Gamma \vdash_l \varphi \Leftrightarrow \emptyset ; \Gamma \vdash \varphi$ and $\Sigma \vdash_g \varphi \Leftrightarrow \Sigma ; \emptyset \vdash \varphi$. We give a proof of the first equivalence. \begin{proposition}\label{basic property2} For any set of formulas $\Gamma$ and any formula $\varphi$, we have \[\Gamma \vdash_l \varphi \Longleftrightarrow \emptyset ; \Gamma \vdash \varphi.\] \end{proposition} \begin{proof} If $\Gamma \vdash_l \varphi $, then there is a finite subset $\Gamma_0$ of $\Gamma$ such that $\mathsf{S4}^{C}_I\vdash \bigwedge \Gamma_0 \to\varphi$. From Proposition \ref{basic property}, we obtain $\emptyset \vdash_g \bigwedge \Gamma_0 \to\varphi$ and $\emptyset ; \Gamma \vdash \varphi$. Now if $\emptyset ; \Gamma \vdash \varphi$, then there is a finite subset $\Gamma_0$ of $\Gamma$ such that $\emptyset\vdash_g \bigwedge \Gamma_0 \to\varphi$. Applying Proposition \ref{basic property}, we have $\mathsf{S4}^{C}_I \vdash \bigwedge \Gamma_0 \to\varphi$. Consequently $ \Gamma \vdash_l \varphi$. \end{proof} \section{Algebraic and topological semantics} \label{s4} In this section we consider algebraic and topological consequence relations that correspond to the derivability relations $\vdash_l$, $\vdash_g$ and $\vdash$ from the previous section. The completeness results connecting semantic and previously introduced syntactic consequence relations are proved in this and the next sections. An \emph{interior algebra} $\mathcal{A}= ( A, \wedge, \vee, \to, 0, 1, \Box )$ is a Boolean algebra $( A, \wedge, \vee, \to, 0, 1)$ together with a unary mapping $\Box \colon A \to A$ satisfying the conditions: \[ \Box 1=1 , \qquad \Box (x \wedge y) = \Box x \wedge \Box y, \qquad \Box x = \Box \Box x, \qquad \Box x \leqslant x .\] For any interior algebra $\mathcal{A}$, the mapping $\Box$ is monotone with respect to the order (of the Boolean part) of $\mathcal{A}$. 
Indeed, if $a \leqslant b$, then $a \wedge b =a$, $\Box a \wedge \Box b =\Box (a \wedge b) = \Box a$, and $\Box a \leqslant \Box b$. Recall that the powerset algebra of any topological space gives us an example of an interior algebra. Moreover, we have the following tight connection between topological spaces and interior algebras. \begin{proposition}[K.~Kuratowski \cite{Kur22}]\label{Kur} \begin{enumerate} \item If $(X, \tau)$ is a topological space, then the powerset Boolean algebra of $X$ expanded with the interior mapping $\I_\tau$ is an interior algebra. \item If the powerset Boolean algebra of $X$ expanded with a mapping $\Box\colon \mathcal{P}(X) \to \mathcal{P}(X)$ forms an interior algebra, then there is a unique topology $\tau$ on $X$ such that $\Box=\I_\tau$. \end{enumerate} \end{proposition} A Boolean algebra $( A, \wedge, \vee, \to, 0, 1)$ expanded with unary mappings $\Box_i$, for each $i\in I$, and $\C$ is an \emph{$\mathsf{S4}^{C}_I$-algebra} if \begin{itemize} \item $( A, \wedge, \vee, \to, 0, 1, \Box_i)$ is an interior algebra for $i\in I$, \item $ ( A, \wedge, \vee, \to, 0, 1, \C)$ is an interior algebra, \item $\C a \leqslant \E a \wedge \E \C a$ for any $a\in A$, \item $ \E a \wedge \C (a \to \E a) \leqslant \C a$ for any $a\in A$, \end{itemize} \vspace*{-0.1cm} where $\E a := \bigwedge_{i\in I} \Box_i a$. Note that the mapping $\E$ is monotone and distributes over $\wedge$ since all mappings $\Box_i $ are monotone and distribute over $\wedge$ in any $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$. Now we define a class of algebras that corresponds to the logic $\mathsf{S4}^{C}_I$ extended with $\omega$-derivations. We call an $\mathsf{S4}^{C}_I$-algebra \emph{standard} if, for any element $d$ and any sequence of elements $( a_j)_{j\in \mathbb{N}}$ such that $a_{j}\leqslant \E d \wedge \E a_{j+1}$ for all $j \in \mathbb{N}$, we have $a_0\leqslant \C d$. We have the following series of examples of standard $\mathsf{S4}^{C}_I$-algebras. 
An $\mathsf{S4}^{C}_I$-algebra is \emph{($\sigma$-)complete} if each of its (countable) subsets $S$ has the least upper bound $\bigvee S$. \begin{proposition}\label{Sigma-complete algebras are standard} Any $\sigma$-complete $\mathsf{S4}^{C}_I$-algebra is standard. \end{proposition} \begin{proof} Assume we have a $\sigma$-complete $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$, its element $d$ and a sequence of elements $( a_j)_{j\in \mathbb{N}}$ such that $a_{j}\leqslant \E d \wedge \E a_{j+1}$ for all $j\in \mathbb{N}$. We prove that $a_0\leqslant \C d$. Put $b= \bigvee\limits_{j \in \mathbb{N}} a_j$. For any $j\in \mathbb{N}$, we have $a_{j}\leqslant \E d \wedge \E a_{j+1}\leqslant \E d \wedge \E b $. Hence, \[b \leqslant \E d\wedge \E b =\E (d \wedge b), \qquad d\wedge b \leqslant \E ( d \wedge b).\] Therefore, \begin{gather*} d\wedge b \to \E ( d \wedge b) =1, \qquad \C ( d\wedge b \to \E ( d \wedge b)) = \C 1 =1, \\ b \leqslant \E (d \wedge b) \leqslant \E(d \wedge b) \wedge \C (d \wedge b \to \E ( d \wedge b)) \leqslant \C (d \wedge b) \leqslant \C d. \end{gather*} Since $a_0 \leqslant b$, we conclude that $a_0 \leqslant \C d$. \end{proof} \begin{remark} Let us note without going into details that the Lindenbaum-Tarski algebra of $\mathsf{S4}^{C}_I$ is standard because, by Proposition \ref{basic property}, the set of theorems of $\mathsf{S4}^{C}_I$ is closed under the inference rule ($\omega$). \end{remark} Now we define algebraic consequence relations that correspond to the derivability relations $\vdash_l$, $\vdash_g$ and $\vdash$. 
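Before doing so, here is a small executable illustration of Proposition \ref{Sigma-complete algebras are standard}: in the finite (hence $\sigma$-complete, hence standard) powerset algebra of a Kripke $\mathsf{S4}^C_I$-frame, $\C a$ can be computed as the greatest fixed point of $z \mapsto \E a \wedge \E z$ by a decreasing iteration. The frame, the relations and the agent set below are invented for illustration:

```python
from itertools import chain, combinations

# A small two-agent Kripke S4-frame (invented for illustration).
W = range(4)
R = {  # one reflexive and transitive relation per agent
    'a': {(0, 0), (1, 1), (2, 2), (3, 3), (0, 1)},
    'b': {(0, 0), (1, 1), (2, 2), (3, 3), (1, 2)},
}

def box(i, s):   # Box_i s = {w : every R_i-successor of w lies in s}
    return frozenset(w for w in W
                     if all(v in s for (u, v) in R[i] if u == w))

def E(s):        # E s = intersection of the Box_i s over all agents
    out = frozenset(W)
    for i in R:
        out &= box(i, s)
    return out

def C(s):
    """Greatest fixed point of z -> E(s) & E(z), by decreasing iteration from W.

    In a finite powerset algebra the iteration stabilises after finitely
    many steps, giving the interpretation of the connective C."""
    z = frozenset(W)
    while True:
        z_next = E(s) & E(z)
        if z_next == z:
            return z
        z = z_next

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Check the two C-axioms of S4^C_I-algebras on every element of the algebra.
for t in subsets(W):
    a = frozenset(t)
    c = C(a)
    assert c <= E(a) & E(c)              # C a <= E a /\ E C a
    impl = (frozenset(W) - a) | E(a)     # the element a -> E a
    assert E(a) & C(impl) <= c           # induction: E a /\ C(a -> E a) <= C a
```

In this finite example the computed $\C a$ coincides with the box modality of the transitive closure of $\bigcup_{i\in I} R_i$, in accordance with the Kripke semantics recalled in Section \ref{s2}.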
A \emph{valuation in an $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}=( A, \wedge, \vee, \to, 0, 1, (\Box_i)_{i\in I}, \C)$} is a function $v \colon \mathit{Fm}_I \to A$ such that \[v (\bot) = 0,\;\; v (\varphi \to \psi) = v (\varphi) \to v(\psi),\;\; v (\Box_i \varphi) = \Box_i v (\varphi),\;\; v (\C \varphi) = \C v (\varphi).\] For a subset $S$ of an $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$, the filter of (the Boolean part of) $\mathcal{A}$ generated by $S$ is denoted by $\langle S \rangle$. Given a set of formulas $\Gamma$ and a formula $\varphi$, we put $\Gamma \VDash_l \varphi$ if for any standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ and any valuation $v$ in $\mathcal{A}$ \[v(\varphi)\in\langle \{v(\psi) \mid \psi \in \Gamma\}\rangle.\] We also put $\Sigma \VDash_g \varphi$ if for any standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ and any valuation $v$ in $\mathcal{A}$ \[ (\forall \xi \in \Sigma\;\; v(\xi)=1) \Longrightarrow v(\varphi)=1 . \] Further, we set $ \Sigma; \Gamma\VDash \varphi$ if for any standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ and any valuation $v$ in $\mathcal{A}$ \[(\forall\xi\in \Sigma\;\; v(\xi)=1) \Longrightarrow v(\varphi)\in\langle \{v(\psi) \mid \psi \in \Gamma\}\rangle.\] Notice that the relation $\VDash$ is a generalization of $\VDash_l$ and $\VDash_g$ since $ \Gamma \VDash_l \varphi \Leftrightarrow \emptyset ; \Gamma \VDash \varphi$ and $\Sigma \VDash_g \varphi\Leftrightarrow \Sigma ;\emptyset \VDash \varphi $. \begin{lemma} For any set of formulas $\Sigma$ and any formula $\varphi$, we have \[\Sigma \vdash_g \varphi \Longrightarrow \Sigma\VDash_g \varphi.\] \end{lemma} \begin{proof} Assume $\pi$ is an $\omega$-derivation with the root marked by $\varphi$ in which all assumption leaves are marked by some elements of $\Sigma$. Assume also that we have a standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ and a valuation $v$ in $\mathcal{A}$ such that $v(\xi)=1$ for any $\xi \in \Sigma$. 
We prove that $v(\varphi)=1$ by \emph{reductio ad absurdum}. We see that, for any axiom $\rho$ of $\mathsf{S4}^{C}_I$, its value $v(\rho)$ equals $1$. Further, for any application of the inference rules ($\mathsf{mp}$), ($\mathsf{nec}$) or ($\omega$), the value of its conclusion equals $1$ whenever the values of all premises are equal to $1$. Thus, if $v(\varphi)\neq 1$, then there is a branch in the $\omega$-derivation $\pi$ such that the values of all formulas on the branch do not equal $1$. Since $\pi$ is well-founded, the branch connects the root with some leaf of $\pi$. This leaf is marked by an axiom of $\mathsf{S4}^{C}_I$ or a formula from $\Sigma$. In both cases, the value of the formula from the leaf equals $1$. This contradiction concludes the proof. \end{proof} \begin{theorem}[Algebraic completeness] \label{algebraic completeness} For any sets of formulas $\Sigma$ and $\Gamma$, and for any formula $\varphi$, we have \[\Sigma ;\Gamma \vdash \varphi \Longleftrightarrow \Sigma ;\Gamma \VDash \varphi.\] \end{theorem} \begin{proof} First, we prove the left-to-right implication. Assume $\Sigma ;\Gamma \vdash \varphi$. In addition, assume that we have a standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ and a valuation $v$ in $\mathcal{A}$ such that $v(\xi)=1$ for any $\xi \in \Sigma$. We shall prove that $v(\varphi)\in\langle \{v(\psi) \mid \psi \in \Gamma\}\rangle$. By the definition of $\vdash$, there is a finite subset $\Gamma_0$ of $\Gamma$ such that $\Sigma \vdash_g \bigwedge \Gamma_0 \to\varphi$. From the previous lemma, we obtain $\Sigma \VDash_g \bigwedge \Gamma_0 \to\varphi$. Thus $v(\bigwedge \Gamma_0 \to\varphi)=1$ and $\bigwedge \{v(\psi)\mid \psi \in \Gamma_0\} \leqslant v(\varphi)$. Consequently $v(\varphi)\in\langle \{v(\psi) \mid \psi \in \Gamma\}\rangle$. Now we prove the converse. Assume $\Sigma ;\Gamma \VDash \varphi$. Consider the theory $T=\{ \theta \in \mathit{Fm}_I \mid \Sigma \vdash_g \theta\}$.
We see that $T$ contains all axioms of $\mathsf{S4}^C_I$ and is closed under the rules ($\mathsf{mp}$), ($\mathsf{nec}$) and ($\omega$). We define an equivalence relation $\sim_T$ on the set of formulas $\mathit{Fm}_I$ by putting $\mu \sim_T \rho$ if and only if $(\mu \leftrightarrow \rho) \in T$. We denote the equivalence class of $\mu$ by $[\mu]_T$. Note that $(\Box_i \mu \leftrightarrow \Box_i \rho) \in T $ and $(\C \mu \leftrightarrow \C\rho) \in T$ whenever $(\mu \leftrightarrow \rho) \in T$. Applying the Lindenbaum-Tarski construction, we obtain an $\mathsf{S4}^C_I$-algebra $\mathcal{L}_T$ on the set of equivalence classes of formulas, where $[\mu]_T\wedge [\rho]_T = [\mu\wedge \rho]_T$, $[\mu]_T \vee [\rho]_T = [\mu\vee \rho]_T$, $[\mu]_T\to [\rho]_T = [\mu\to \rho]_T$, $0 = [\bot]_T$, $1= [\top]_T$, $ \Box_i [\mu]_T=[\Box_i \mu]_T$ and $\C [\mu]_T = [\C \mu]_T$. We claim that the algebra $\mathcal{L}_T$ is standard. Suppose, for some formula $\alpha$, we have a sequence of formulas $(\mu_j)_{j\in \mathbb{N}}$ such that $[\mu_j]_T\leqslant \E [\alpha]_T \wedge \E [\mu_{j+1}]_T$ for all $j\in \mathbb{N}$. Thus $[\mu_j \to \E \alpha \wedge \E \mu_{j+1}]_T=1$ and $(\mu_j \to \E \alpha \wedge \E \mu_{j+1}) \in T$. For every $j\in \mathbb{N}$, there exists an $\omega$-derivation $\pi_j$ for the formula $\mu_j \to \E \alpha \wedge \E \mu_{j+1}$ such that all assumption leaves of $\pi_j$ are marked by some elements of $\Sigma$. We obtain the following $\omega$-derivation for the formula $\mu_0\to \C \alpha$: \begin{gather*} \AXC{$\pi_0$} \noLine \UIC{$\vdots$} \noLine \UIC{$\mu_0 \to \E\alpha \wedge \E \mu_{1}$} \AXC{$\pi_1$} \noLine \UIC{$\vdots$} \noLine \UIC{$\mu_1 \to \E\alpha \wedge \E \mu_{2}$} \AXC{$\ldots$} \LeftLabel{$\omega$} \RightLabel{ ,} \TIC{$\mu_0 \rightarrow \C \alpha$} \DisplayProof \end{gather*} where all assumption leaves are marked by some elements of $\Sigma$. Hence, $(\mu_0 \rightarrow \C \alpha )\in T$ and $[\mu_0]_T \leqslant \C[\alpha]_T$.
We conclude that the $\mathsf{S4}^C_I$-algebra $\mathcal{L}_T$ is standard. Consider the valuation $v \colon \theta \mapsto [\theta]_T$ in the standard $\mathsf{S4}^C_I$-algebra $\mathcal{L}_T$. Since $\Sigma \subset T$, we have $v(\xi)=1$ for any $\xi \in \Sigma$. From the assumption $ \Sigma ;\Gamma \VDash \varphi$, we obtain $v(\varphi) \in \langle \{v(\psi) \mid \psi \in \Gamma\}\rangle$. Consequently there is a finite subset $\Gamma_0$ of $\Gamma$ such that $\bigwedge \{v(\psi) \mid \psi \in \Gamma_0\} \leqslant v(\varphi)$. Hence $\bigwedge \{[\psi]_T \mid \psi \in \Gamma_0\} \leqslant [\varphi]_T$, $[\bigwedge \Gamma_0 \to \varphi ]_T =1$ and $(\bigwedge \Gamma_0 \to \varphi)\in T$. It follows that $\Sigma \vdash_g \bigwedge \Gamma_0 \to \varphi$ and $\Sigma ;\Gamma \vdash \varphi$. \end{proof} Let us consider topological semantics of the logic $\mathsf{S4}^{C}_I$. We have the following connection between multitopological spaces and $\mathsf{S4}^{C}_I$-algebras. \begin{proposition}\label{topological property} $ $ \begin{enumerate} \item If $(\tau_i)_{i\in I}$ is a family of topologies on a set $X$, then the powerset Boolean algebra of $X$ expanded with the interior mappings $\I_{\tau_i}$, for $i\in I$, and $\I_\tau$, for $\tau = \bigcap\limits_{i\in I} \tau_i $, is an $\mathsf{S4}^{C}_I$-algebra. \item If the powerset Boolean algebra of $X$ expanded with mappings $\Box_i$, for $i\in I$, and $\C$ forms an $\mathsf{S4}^{C}_I$-algebra, then there exists a unique family of topologies $(\tau_i)_{i\in I}$ on $X$ such that $\Box_i=\I_{\tau_i}$ for each $i\in I$. Moreover, we have $\C = \I_\tau$ for $\tau = \bigcap\limits_{i\in I} \tau_i $. \end{enumerate} \end{proposition} \begin{proof} (1) Assume $(\tau_i)_{i\in I}$ is a family of topologies on a set $X$. For each $i\in I$, by Proposition \ref{Kur}, the powerset Boolean algebra of $X$ expanded with the interior mapping $\I_{\tau_i}$ forms an interior algebra. 
Analogously, the powerset Boolean algebra of $X$ expanded with $\I_{\tau}$ is an interior algebra for $\tau = \bigcap_{i\in I} \tau_i$. Now we claim that \[\I_\tau (Y) \subset \bigcap\limits_{i\in I} \I_{\tau_i} (Y) \cap \bigcap\limits_{i\in I} \I_{\tau_i} (\I_\tau (Y)) \] for any $Y\subset X$. Since $\I_\tau (Y)\in \tau \subset \tau_i$ for each $i\in I$ and $\I_\tau (Y) \subset Y$, we have $\I_\tau (Y) \subset \I_{\tau_i} (Y)$ and $\I_\tau (Y) = \I_{\tau_i} (\I_\tau (Y))$ for each $i\in I$. Consequently, \[\I_\tau (Y) \subset \bigcap\limits_{i\in I} \I_{\tau_i} (Y) \cap \bigcap\limits_{i\in I} \I_{\tau_i} (\I_\tau (Y)).\] It remains to show that \[\bigcap\limits_{i\in I} \I_{\tau_i} (Y) \cap \I_\tau ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y)) \subset \I_\tau (Y)\] for any $Y\subset X$. We see that, for each $i\in I$, \begin{align*} \bigcap\limits_{i\in I} \I_{\tau_i} (Y) \cap \I_\tau ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y)) &\subset \I_{\tau_i} (Y) \cap \I_{\tau_i} ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y))\\ & \subset \I_{\tau_i} (Y \cap ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y)))\\ & \subset \I_{\tau_i} (Y \cap \bigcap\limits_{i\in I} \I_{\tau_i} (Y))\\ & \subset \I_{\tau_i} ( \bigcap\limits_{i\in I} \I_{\tau_i} (Y)). \end{align*} In addition, we have \begin{align*} \bigcap\limits_{i\in I} \I_{\tau_i} (Y) \cap \I_\tau ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y)) & \subset \I_\tau ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y)) \\ &\subset \I_{\tau_i}(\I_\tau ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y))). \end{align*} Consequently, \begin{multline*} \bigcap\limits_{i\in I} \I_{\tau_i} (Y) \cap \I_\tau ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y)) \subset \\\I_{\tau_i} (\bigcap\limits_{i\in I} \I_{\tau_i} (Y) \cap \I_\tau ((X \setminus Y)\cup \bigcap\limits_{i\in I} \I_{\tau_i} (Y))). 
\end{multline*} Notice that the set on the left-hand side of this inclusion is a subset of $Y$. Further, we see that this set is $\tau_i$-open for each $i\in I$. Therefore it belongs to the topology $\tau$ and is included in $\I_\tau (Y)$. We obtain that the powerset Boolean algebra of $X$ expanded with the interior mappings $\I_{\tau_i}$, for $i\in I$, and $\I_\tau$, for $\tau = \bigcap\limits_{i\in I} \tau_i $, is an $\mathsf{S4}^{C}_I$-algebra. (2) Assume the powerset Boolean algebra of a set $X$ expanded with mappings $\Box_i$, for $i\in I$, and $\C$ forms an $\mathsf{S4}^{C}_I$-algebra. By Proposition \ref{Kur}, there exists a unique family of topologies $(\tau_i)_{i\in I}$ on $X$ such that $\Box_i=\I_{\tau_i}$ for each $i\in I$. It remains to show that $\C = \I_\tau$ for $\tau = \bigcap_{i\in I} \tau_i $. First, let us check that $\C (Y) \subset \I_\tau (Y)$ for any $Y \subset X$. We have $\C (Y) \subset \E (Y) \subset Y$ and $\C (Y) \subset \E \C (Y) \subset \I_{\tau_i} (\C (Y))$ for each $i\in I$. Consequently the set $\C (Y)$ is $\tau_i$-open for each $i\in I$ and is $\tau$-open. Since $\C (Y)$ is a $\tau$-open subset of $Y$, it is included in $\I_\tau (Y)$. Now we prove the converse inclusion. We have $\I_\tau (Y) \subset \E \I_\tau (Y)$ because $\I_\tau (Y)$ is $\tau_i$-open for each $i\in I$. Thus we obtain \[(X \setminus \I_\tau (Y)) \cup \E \I_\tau (Y) =X, \qquad \C ((X \setminus \I_\tau (Y)) \cup \E \I_\tau (Y) ) =X. \] Consequently, \[\I_\tau (Y) \subset \E \I_\tau (Y) \cap \C ((X \setminus \I_\tau (Y)) \cup \E \I_\tau (Y) ) \subset \C (\I_\tau Y) \subset \C (Y),\] which concludes the proof. \end{proof} From Proposition \ref{topological property}, we see that powersets of $I$-topological spaces $(X, (\tau_i)_{i\in I})$ can be considered as $\mathsf{S4}^{C}_I$-algebras. Now we define topological consequence relations that correspond to the derivability relations $\vdash_l$, $\vdash_g$ and $\vdash$.
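Before doing so, we remark that the two inclusions established in the proof of Proposition \ref{topological property}(1) can be verified exhaustively on a small finite carrier. The following Python sketch is an informal illustration only, not part of the formal development: the four-element carrier and the two topologies are hypothetical choices, $E$ abbreviates the intersection of the two interior operators, and $C$ is the interior with respect to the intersection topology.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2, 3})

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def interior(topology, Y):
    """Union of all open subsets of Y."""
    return frozenset(chain.from_iterable(U for U in topology if U <= Y))

# Two (hypothetical) topologies on X, and their intersection topology.
tau1 = {frozenset(), frozenset({0}), frozenset({0, 1}), X}
tau2 = {frozenset(), frozenset({1}), frozenset({0, 1}), X}
tau = tau1 & tau2  # = {emptyset, {0,1}, X}

def E(Y):  # intersection of the two interior operators
    return interior(tau1, Y) & interior(tau2, Y)

def C(Y):  # interior with respect to the intersection topology
    return interior(tau, Y)

# The two inclusions from the proof of part (1), checked for every subset Y:
for Y in subsets(X):
    assert C(Y) <= E(Y) & E(C(Y))            # C(Y) included in E(Y) and E(C(Y))
    assert E(Y) & C((X - Y) | E(Y)) <= C(Y)  # induction-style inclusion
```

Since the proposition holds for arbitrary topologies, both assertions succeed for every choice of `tau1` and `tau2`; the particular pair above merely makes the check concrete.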
A \emph{topological $\mathsf{S4}^{C}_I$-model} is a pair $\mathcal{M}=(\mathcal{X},v)$, where $\mathcal{X}$ is an $I$-topological space $(X, (\tau_i)_{i\in I})$ and $v$ is a valuation in the powerset $\mathsf{S4}^{C}_I$-algebra of $\mathcal{X}$, i.e. $v$ is a mapping $v \colon \mathit{Fm}_I \to \mathcal{P}(X)$ such that $v (\bot) = \emptyset$, $v (\varphi \to \psi) = v(\psi) \cup (X \setminus v (\varphi))$, $v (\Box_i \varphi) = \I_{\tau_i} (v(\varphi))$, $ v (\C \varphi) = \I_\tau ( v (\varphi))$, where $\tau = \bigcap_{i\in I} \tau_i $. A formula $\varphi$ is \emph{true at a point $x$ of a model $\mathcal{M}$}, written as $\mathcal{M},x \vDash \varphi$, if $x\in v(\varphi)$. A formula $\varphi$ is called \emph{true in $\mathcal{M}$} if $\varphi$ is true at all points of $\mathcal{M}$. In this case we write $\mathcal{M}\vDash \varphi$. Given a set of formulas $\Gamma$ and a formula $\varphi$, we set $\Gamma \vDash_l \varphi$ if for any $\mathsf{S4}^{C}_I$-model $\mathcal{M}$ and any point $x$ of $\mathcal{M}$ \[(\forall \psi\in \Gamma\;\; \mathcal{M},x \vDash \psi )\Longrightarrow \mathcal{M},x \vDash \varphi.\] We also set $\Sigma \vDash_g \varphi$ if for any $\mathsf{S4}^{C}_I$-model $\mathcal{M}$ \[ (\forall \xi \in \Sigma\;\; \mathcal{M} \vDash \xi) \Longrightarrow \mathcal{M} \vDash \varphi. \] In addition, we set $ \Sigma; \Gamma\vDash \varphi$ if for any $\mathsf{S4}^{C}_I$-model $\mathcal{M}$ and any point $x$ of $\mathcal{M}$ \[((\forall\xi\in \Sigma\;\; \mathcal{M} \vDash \xi) \wedge (\forall \psi\in \Gamma\;\; \mathcal{M},x \vDash \psi )) \Longrightarrow \mathcal{M},x \vDash \varphi.\] Clearly, the relation $\vDash$ is a generalization of $\vDash_l$ and $\vDash_g$ since $ \Gamma \vDash_l \varphi \Leftrightarrow \emptyset ; \Gamma \vDash \varphi$ and $\Gamma \vDash_g \varphi\Leftrightarrow \Gamma ;\emptyset \vDash \varphi $. 
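Truth in a finite topological $\mathsf{S4}^{C}_I$-model can be computed directly from these clauses. The Python sketch below is an informal illustration: the two-agent carrier, the topologies and the valuation of the variable `p` are hypothetical, and formulas are encoded as nested tuples.

```python
from itertools import chain

# Formulas as tuples: ('var', p), ('bot',), ('->', phi, psi),
# ('box', i, phi), ('C', phi).
X = frozenset({0, 1, 2, 3})
topologies = {
    1: {frozenset(), frozenset({0}), frozenset({0, 1}), X},
    2: {frozenset(), frozenset({1}), frozenset({0, 1}), X},
}
tau = topologies[1] & topologies[2]  # the intersection topology

def interior(topology, Y):
    """Union of all open subsets of Y."""
    return frozenset(chain.from_iterable(U for U in topology if U <= Y))

def value(v, phi):
    """The set of points of the model at which phi is true."""
    op = phi[0]
    if op == 'var':
        return v[phi[1]]
    if op == 'bot':
        return frozenset()
    if op == '->':
        return value(v, phi[2]) | (X - value(v, phi[1]))
    if op == 'box':
        return interior(topologies[phi[1]], value(v, phi[2]))
    if op == 'C':
        return interior(tau, value(v, phi[1]))
    raise ValueError(f'unknown connective {op!r}')

v = {'p': frozenset({0, 1, 2})}  # hypothetical valuation of the variable p
# M, x |= C p exactly when x lies in the tau-interior of v(p):
assert value(v, ('C', ('var', 'p'))) == frozenset({0, 1})
```

A formula is true in the model precisely when `value` returns the whole carrier `X`, matching the definition of $\mathcal{M}\vDash \varphi$ above.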
\begin{proposition}\label{topological soundness} For any sets of formulas $\Sigma$ and $\Gamma$, and for any formula $\varphi$, we have \[\Sigma ;\Gamma \VDash \varphi \Longrightarrow \Sigma ;\Gamma \vDash \varphi.\] \end{proposition} \begin{proof} Assume $\Sigma ;\Gamma \VDash \varphi$. In addition, assume we have a topological $\mathsf{S4}^{C}_I$-model $\mathcal{M} = (\mathcal{X}, v)$ and a point $x$ of $\mathcal{M}$ such that \[(\forall\xi\in \Sigma\;\; \mathcal{M} \vDash \xi) \wedge (\forall \psi\in \Gamma\;\; \mathcal{M},x \vDash \psi ).\] We shall prove that $\mathcal{M},x\vDash\varphi$. We see that the powerset $\mathsf{S4}^{C}_I$-algebra of $\mathcal{X}$ is $\sigma$-complete. Hence, by Proposition \ref{Sigma-complete algebras are standard}, it is standard. Since $\Sigma ;\Gamma \VDash \varphi$, we obtain that $v(\varphi) \in \langle \{ v(\psi) \mid \psi \in \Gamma\}\rangle$. Consequently, there is a finite subset $\Gamma_0$ of $\Gamma$ such that $\bigcap \{v(\psi) \mid \psi \in \Gamma_0\} \subset v(\varphi)$. We see that \[x \in \bigcap \{v(\psi) \mid \psi \in \Gamma\}\subset \bigcap \{v(\psi) \mid \psi \in \Gamma_0\} \subset v(\varphi).\] Thus $\mathcal{M},x\vDash\varphi$. \end{proof} \section{Topological completeness and completable algebras} \label{s6} In this section we prove the topological completeness results that connect the consequence relations $\VDash_l$, $\VDash_g$ and $\VDash$ with the derivability relations introduced in Section \ref{s2}. We also give a representation theorem for standard $\mathsf{S4}^{C}_I$-algebras and prove that the class of completable $\mathsf{S4}^{C}_I$-algebras precisely consists of standard ones. First, we recall the McKinsey-Tarski representation of interior algebras from \cite{McTar44}. Let $\mathit{Ult}\: \mathcal{A}$ be the set of all ultrafilters of (the Boolean part of) an interior algebra $\mathcal{A}=(A, \wedge, \vee, \to, 0, 1, \Box)$. 
Put $\widehat{a} = \{u \in \mathit{Ult}\: \mathcal{A} \mid a \in u\}$ for $a\in A$. We recall that the mapping $\:\widehat{\cdot}\;\colon a \mapsto \widehat{a}\:$ is an embedding of the Boolean algebra $(A, \wedge, \vee, \to, 0, 1)$ into the powerset Boolean algebra $\mathcal{P}(\mathit{Ult}\: \mathcal{A})$ by the Stone representation theorem. \begin{proposition}[Representation of interior algebras]\label{McKinTar} For any interior algebra $\mathcal{A}=(A, \wedge, \vee, \to, 0, 1, \Box)$, there exists a topology $\tau$ on $\mathit{Ult}\: \mathcal{A}$ such that $\widehat{\Box a} =\I_\tau (\widehat{a})$ for any element $a$ of $\mathcal{A}$. Moreover, the topology generated by $\{\widehat{\Box b}\mid b\in A\}$ provides an example of such a topology. \end{proposition} In order to obtain a generalization of this result for the case of standard $\mathsf{S4}^{C}_I$-algebras, we recall some basic properties of directed graphs. A \emph{directed graph} is a pair $\mathcal{S} =( S, \prec )$, where $\prec$ is a binary relation on $S$. An element $a$ of $\mathcal{S}$ is called \emph{accessible} if there is no infinite sequence $(a_j)_{j\in \mathbb{N}}$ such that $a=a_0$ and $a_{j+1} \prec a_j$ for each $j\in \mathbb{N}$. All accessible elements of $\mathcal{S}$ form the \emph{accessible part of $\mathcal{S}$}, which is denoted by $\mathit{Acc}\:(\mathcal{S})$. The restriction of $\prec$ to $\mathit{Acc}\:(\mathcal{S})$ is well-founded. For an element $a$ of $\mathcal{S}$, we define its ordinal height $\mathit{ht}_\mathcal{S}(a)$ with respect to $\prec$ as follows. For $a\in \mathit{Acc}\:(\mathcal{S})$, the ordinal height is defined by transfinite recursion on $\prec$: \[\mathit{ht}_\mathcal{S} (a)= \sup \{ \mathit{ht}_\mathcal{S} (b)+1 \mid b \prec a \}.\] If $a\nin \mathit{Acc}\:(\mathcal{S})$, then $\mathit{ht}_\mathcal{S} (a):=\infty $.
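For a finite directed graph, the accessible part and the ordinal heights can be computed by a simple fixed-point iteration, since every height of an accessible element is then a natural number. The following Python sketch is an informal illustration (the particular graph in the example is hypothetical); it represents $\infty$ by `float('inf')`.

```python
def heights(nodes, prec):
    """Ordinal heights in a finite directed graph (nodes, prec).

    `prec` is a set of pairs (b, a) meaning b is prec-below a.  Elements
    of the accessible part receive natural-number heights; the remaining
    elements receive float('inf').
    """
    preds = {a: {b for (b, c) in prec if c == a} for a in nodes}
    ht = {}
    changed = True
    while changed:
        changed = False
        for a in nodes:
            # a receives a height once all its predecessors have one
            if a not in ht and all(b in ht for b in preds[a]):
                ht[a] = max((ht[b] + 1 for b in preds[a]), default=0)
                changed = True
    # nodes never assigned a finite height lie outside the accessible part
    return {a: ht.get(a, float('inf')) for a in nodes}

# 2 below 1 below 0, a loop at 3, and 3 below 4: the accessible part is {0, 1, 2}.
h = heights({0, 1, 2, 3, 4}, {(1, 0), (2, 1), (3, 3), (3, 4)})
assert h == {0: 2, 1: 1, 2: 0, 3: float('inf'), 4: float('inf')}
```

The node `3` lies on a cycle, so an infinite descending sequence starts at it; `4` has `3` below it and inherits inaccessibility, exactly as in the definition above.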
A \emph{homomorphism} from $\mathcal{S}_1=( S_1, \prec_1 )$ to $\mathcal{S}_2=( S_2, \prec_2 )$ is a function $f \colon S_1 \to S_2$ such that $f(b)\prec_2 f(c)$ whenever $b \prec_1 c$. \begin{proposition}\label{prop1} Suppose $f\colon \mathcal{S}_1 \to \mathcal{S}_2$ is a homomorphism of directed graphs and $a$ is an element of $\mathcal{S}_1$. Then $\mathit{ht}_{\mathcal{S}_1} (a) \leqslant\mathit{ht}_{\mathcal{S}_2} (f(a))$. \end{proposition} For directed graphs $\mathcal{S}_1 =( S_1, \prec_1 )$ and $\mathcal{S}_2 =( S_2, \prec_2 )$, their product $\mathcal{S}_1\times \mathcal{S}_2$ is defined as the set $S_1 \times S_2$ together with the following relation \[( b_1 , b_2 ) \prec ( c_1 , c_2 ) \Longleftrightarrow b_1 \prec_1 c_1 \text{ and } b_2 \prec_2 c_2. \] \begin{proposition}\label{prop2} Suppose $a$ and $b$ are elements of directed graphs $\mathcal{S}_1$ and $\mathcal{S}_2$ respectively. Then $\mathit{ht}_{\mathcal{S}_1 \times \mathcal{S}_2} (( a, b)) = \min \{ \mathit{ht}_{\mathcal{S}_1} (a), \mathit{ht}_{\mathcal{S}_2} (b) \}$. \end{proposition} Now, for an $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}=( A, \wedge, \vee, \to, 0, 1, (\Box_i)_{i\in I}, \C )$ and its element $d$, we define the binary relation $\prec_{d}$ on $A$: \[a\prec_{d} b : \Longleftrightarrow b \leqslant \E d \wedge \E a.\] We denote the ordinal height of $a$ in $(A,\prec_d)$ by $\mathit{ht}_{d} (a)$. Further, for any ordinal $\gamma$, we put $\mathit{M}_d(\gamma):= \{ a \in A \mid \gamma \leqslant\mathit{ht}_d (a)\} $. Notice that $\mathit{M}_d(0)= A$ and $\mathit{M}_d(\delta) \supset \mathit{M}_d(\gamma)$ whenever $\delta \leqslant \gamma$. \begin{proposition}\label{char} An $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}=( A, \wedge, \vee, \to, 0, 1, (\Box_i)_{i\in I}, \C )$ is standard if and only if, for any element $d$ of $\mathcal{A}$, the accessible part of $(A,\prec_{d})$ is equal to $\{a\in A \mid a \nleqslant \C d\}$. 
\end{proposition} \begin{proof} In any $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}=( A, \wedge, \vee, \to, 0, 1, (\Box_i)_{i\in I}, \C )$, if $b \leqslant \C d$, then \[b \succ_d \C d \succ_d \C d \succ_d \ldots,\] because $b\leqslant \C d \leqslant \E d \wedge \E \C d$. In this case, we have $b \nin \mathit{Acc}\:(A, \prec_d)$. Consequently $\mathit{Acc}\:(A, \prec_d) \subset \{a\in A \mid a \nleqslant \C d\}$. Thus it is sufficient to show that the algebra $\mathcal{A}$ is standard if and only if the binary relation $\prec_d$ is well-founded on $\{a\in A \mid a \nleqslant \C d\}$. First, assume the algebra $\mathcal{A}$ is standard. We obtain the required statement by \emph{reductio ad absurdum}. If there is a descending sequence $a_0 \succ_d a_1 \succ_d \ldots$ of elements of $\{a\in A \mid a \nleqslant \C d\}$, then we have a sequence of elements of $\mathcal{A}$ such that $a_j \leqslant \E d\wedge\E a_{j+1}$ for all $j\in \mathbb{N}$. Since the $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ is standard, we have $a_0 \leqslant \C d$. This contradicts the assumption that $a_j \nleqslant \C d$ for all $j\in \mathbb{N}$. Now assume that, for any element $d$ of $\mathcal{A}$, the binary relation $\prec_d$ is well-founded on $\{a\in A \mid a \nleqslant \C d\}$. We consider any sequence of elements of $\mathcal{A}$ such that $a_j \leqslant \E d\wedge\E a_{j+1}$ for all $j\in \mathbb{N}$ and prove $a_0 \leqslant \C d$. If all elements of the sequence belong to $\{a\in A \mid a \nleqslant \C d\}$, then there is a $\prec_d$-descending sequence of elements of $\{a\in A \mid a \nleqslant \C d\}$, which is a contradiction. Consequently there exists a natural number $k$ such that $a_k \leqslant \C d$. We claim that $a_0 \leqslant \C d$ and prove this claim by induction on $k$. If $k=0$, then we have $a_0 =a_k \leqslant \C d$. If $k=l +1$, then $a_l \leqslant \E d\wedge\E a_k \leqslant \E a_k \leqslant a_k \leqslant \C d$.
We obtain $a_0 \leqslant \C d$ by the induction hypothesis for $l$. \end{proof} \begin{lemma}\label{basic} Suppose $c$, $e$ and $d$ are elements of a standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$. Then $\mathit{ht}_d (c\vee e) = \min \{\mathit{ht}_d (c), \mathit{ht}_d (e) \} $ and $\mathit{ht}_d (c) +1 \leqslant \mathit{ht}_d (\E d \wedge \E c)$, where we define $\infty+1 :=\infty$. \end{lemma} \begin{proof} Assume we have a standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}=( A, \wedge, \vee, \to, 0, 1, (\Box_i)_{i\in I}, \C )$ and three elements $c$, $e$ and $d$ of $\mathcal{A}$. First, we prove that $\mathit{ht}_d (c\vee e) = \min \{\mathit{ht}_d (c), \mathit{ht}_d (e) \} $. Put $\mathcal{S} =(A, \prec_d)$. We see that the mapping \[f \colon (c,e) \mapsto c\vee e\] is a homomorphism from $\mathcal{S} \times \mathcal{S}$ to $\mathcal{S}$. Hence, by Proposition \ref{prop2} and Proposition \ref{prop1}, we obtain \[\min \{\mathit{ht}_\mathcal{S} (c), \mathit{ht}_\mathcal{S} (e) \}= \mathit{ht}_{\mathcal{S}\times \mathcal{S}} ((c,e)) \leqslant \mathit{ht}_\mathcal{S} (c\vee e).\] Consequently, \[\min \{\mathit{ht}_d (c), \mathit{ht}_d (e) \} \leqslant \mathit{ht}_d (c\vee e).\] On the other hand, $\mathit{ht}_d (c \vee e) \leqslant \mathit{ht}_d (c)$ since \[\{ g \in A \mid g \prec_d (c\vee e)\} \subset \{ g \in A \mid g \prec_d c\}.\] Analogously, we have $\mathit{ht}_d (c \vee e) \leqslant \mathit{ht}_d (e)$. It follows that \[\mathit{ht}_d (c \vee e) = \min \{\mathit{ht}_d (c), \mathit{ht}_d (e) \} .\] Now we prove $\mathit{ht}_d (c) +1 \leqslant \mathit{ht}_d (\E d \wedge \E c)$. Notice that $c \prec_d (\E d \wedge \E c)$. If $(\E d \wedge \E c) \leqslant \C d$, then the element $ (\E d \wedge \E c)$ is not accessible in $(A, \prec_d)$ by Proposition \ref{char}, $\mathit{ht}_d (\E d \wedge \E c) =\infty$ and the required inequality immediately holds.
Otherwise, the inequality follows from the recursive definition of $\mathit{ht}_d$ on the accessible part of $(A,\prec_d)$. \end{proof} \begin{lemma}\label{basic2} For any standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$, any element $d$ of $\mathcal{A}$ and any ordinal $\gamma$, the set $\mathit{M}_d(\gamma) $ is an ideal in $\mathcal{A}$. \end{lemma} \begin{proof} Note that $0 \leqslant \C d$, the element $0$ is not accessible in $(A, \prec_d)$ and $\mathit{ht}_d (0)=\infty$. Consequently $0$ belongs to $ \mathit{M}_d(\gamma)$. Suppose $c$ and $e$ belong to $\mathit{M}_d(\gamma) $. Then $\gamma \leqslant \mathit{ht}_d (c)$ and $\gamma \leqslant \mathit{ht}_d (e)$. We have $\gamma \leqslant \min \{\mathit{ht}_d (e), \mathit{ht}_d (c) \} = \mathit{ht}_d (c \vee e) $ by Lemma \ref{basic}. Consequently $c \vee e$ belongs to $\mathit{M}_d(\gamma) $. Finally, suppose $c$ belongs to $\mathit{M}_d(\gamma) $ and $e\leqslant c$. We shall show that $e \in\mathit{M}_d(\gamma) $. We have $\gamma \leqslant \mathit{ht}_d (c) = \mathit{ht}_d (c \vee e) = \min \{\mathit{ht}_d (c), \mathit{ht}_d (e) \} \leqslant \mathit{ht}_d (e) $ by Lemma \ref{basic}. Hence $e \in\mathit{M}_d(\gamma) $. \end{proof} Now we consider a representation of standard $\mathsf{S4}^{C}_I$-algebras. We denote the set of ultrafilters of an $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}=(A, \wedge, \vee, \to, 0, 1, (\Box_i)_{i\in I}, \C)$ by $\mathit{Ult}\: \mathcal{A}$ analogously to the case of interior algebras. For $a\in A$, we recall that $\widehat{a} = \{u \in \mathit{Ult}\: \mathcal{A} \mid a \in u\}$ and the mapping $\:\widehat{\cdot}\;\colon a \mapsto \widehat{a}\:$ is an embedding of the Boolean algebra $(A, \wedge, \vee, \to, 0, 1)$ into the powerset Boolean algebra $\mathcal{P}(\mathit{Ult}\: \mathcal{A})$.
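In the finite case the Stone construction can be carried out explicitly: every ultrafilter of a finite powerset Boolean algebra is principal, generated by an atom. The brute-force Python sketch below is an informal illustration (the three-element carrier is a hypothetical choice); it enumerates all ultrafilters of the powerset algebra of a three-element set by checking properness, upward closure, meet closure and maximality directly.

```python
from itertools import chain, combinations

S = frozenset({0, 1, 2})
# The powerset Boolean algebra of S, listed as frozensets.
algebra = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def is_ultrafilter(u):
    """Check that u is a proper, upward-closed, meet-closed, maximal filter."""
    u = set(u)
    if not u or frozenset() in u:
        return False  # empty or improper
    for a in u:
        if any(a <= b and b not in u for b in algebra):
            return False  # not upward closed
        if any(a & b not in u for b in u):
            return False  # not closed under meets
    # maximality: u contains each element or its complement
    return all(a in u or (S - a) in u for a in algebra)

candidates = chain.from_iterable(
    combinations(algebra, r) for r in range(len(algebra) + 1))
ultrafilters = [set(u) for u in candidates if is_ultrafilter(u)]

# Exactly one principal ultrafilter per atom of the algebra:
assert len(ultrafilters) == 3
for x in S:
    assert {a for a in algebra if x in a} in ultrafilters
```

Under the identification $u_x=\{a \mid x\in a\}$, the Stone map $a\mapsto\widehat{a}$ sends each $a$ to $\{u_x \mid x\in a\}$, so for finite algebras it is simply a relabelling of the carrier.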
\begin{theorem}[Representation of $\mathsf{S4}^{C}_I$-algebras]\label{representation theorem} For any standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$, there exists a family of topologies $(\tau_i)_{i\in I}$ on $\mathit{Ult}\: \mathcal{A}$ such that $\widehat{\Box_i a} = \I_{\tau_i} (\widehat{a})$ for any element $a$ of $\mathcal{A}$ and any $i\in I$. Moreover, we have $\widehat{\C a} = \I_\tau (\widehat{a})$ for $\tau = \bigcap_{i\in I} \tau_i $ and any element $a$ of $\mathcal{A}$. \end{theorem} \begin{proof} Assume we have a standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}=(A, \wedge, \vee, \to, 0, 1, (\Box_i)_{i\in I}, \C)$. For $i\in I$, let $\tau_i$ be the topology on $\mathit{Ult}\: \mathcal{A}$ generated by $\{\widehat{\Box_i b}\mid b\in A\}$. Note that the set $\{\widehat{\Box_i b}\mid b\in A\}$ contains $\mathit{Ult}\: \mathcal{A}$ and is closed under finite intersections since $\Box_i 1 =1$ and $\Box_i (b^\prime\wedge b^{\prime\prime}) =\Box_i b^\prime \wedge\Box_i b^{\prime\prime} $. Thus, for each $i\in I$, the set $\{\widehat{\Box_i b}\mid b\in A\}$ is a basis of $\tau_i$. Now we put $\tau = \bigcap_{i\in I} \tau_i$. By Proposition \ref{McKinTar}, $\widehat{\Box_i a} = \I_{\tau_i} (\widehat{a})$ for any element $a$ of $\mathcal{A}$ and any $i\in I$. It remains to show that $\widehat{\C a} = \I_\tau (\widehat{a})$ for any element $a$ of $\mathcal{A}$. First, we check that $\widehat{\C a} \subset \I_\tau (\widehat{a})$. Recall that $\C a \leqslant\E a \wedge \E \C a \leqslant a \wedge \E \C a$. Therefore $\widehat{\C a} \subset \widehat{a}$ and $\widehat{\C a} \subset \bigcap_{i\in I} \I_{\tau_i}(\widehat{\C a}) \subset \I_{\tau_i}(\widehat{\C a})$ for any $i\in I$. We see that $\widehat{\C a}$ is $\tau_i$-open for each $i\in I$. Hence the set $\widehat{\C a}$ is a $\tau$-open subset of $\widehat{a}$. Consequently $\widehat{\C a} \subset \I_\tau (\widehat{a})$. 
Now, for an element $d\in A$, let $\mathit{ht}_d(\mathcal{A}) := \sup \{ \mathit{ht}_d (b)+1 \mid \text{$b \in A$ and $b \nleqslant \C d$}\}$. Recall that the accessible part of $(A, \prec_d)$ is equal to $\{b\in A \mid b \nleqslant \C d\}$. Hence we see $\mathit{M}_d(\mathit{ht}_d(\mathcal{A}))= \{b\in A \mid b \leqslant \C d\}$. For an ultrafilter $u$ of $\mathcal{A}$, set \[\mathit{rk}_d(u):= \begin{cases} \min \{ \gamma \leqslant \mathit{ht}_d(\mathcal{A}) \mid u \cap \mathit{M}_d(\gamma) =\emptyset\} &\text{if } \C d \nin u ;\\ \infty &\text{otherwise}. \end{cases}\] From this definition, we see that, for any ordinal $\gamma$, \begin{gather} \gamma < \mathit{rk}_d(u) \Longleftrightarrow u \cap \mathit{M}_d(\gamma) \neq\emptyset.\label{formula1} \end{gather} Put $J^\gamma_d:=\{u\in \mathit{Ult}\: \mathcal{A} \mid \gamma\leqslant \mathit{rk}_d(u)\}$. We claim that, for any element $d$ of $\mathcal{A}$ and any ordinal $\gamma$, \begin{gather} \bigcap_{i\in I} \I_{\tau_i}(\widehat{d}) \cap \bigcap_{i\in I} \I_{\tau_i} (J^\gamma_d ) \subset J^{\gamma+1}_d .\label{formula2} \end{gather} This claim is established by \emph{reductio ad absurdum}. Suppose there is an ultrafilter $w$ such that \[w\in \bigcap_{i\in I} \I_{\tau_i}(\widehat{d}) \cap \bigcap_{i\in I} \I_{\tau_i} (J^\gamma_d )= \widehat{ \E d} \cap \bigcap_{i\in I} \I_{\tau_i} ( J^\gamma_d)\quad \text{and} \quad w\nin J^{\gamma+1}_d. \] We see that $\E d \in w$ and, for each $i\in I$, there is an element $b_i$ of $\mathcal{A}$ such that $\Box_i b_i \in w$ and $\widehat{\Box_i b_i} \subset J^\gamma_d$. From the assertion $w\nin J^{\gamma+1}_d$, we also have $\gamma +1 \nleqslant \mathit{rk}_d(w)$. Consequently $\mathit{rk}_d(w)\leqslant \gamma$. In addition, we see $w \cap \mathit{M}_d(\gamma) = \emptyset$ from (\ref{formula1}). Let us consider the element $s:=\bigvee_{i\in I} \Box_i b_i$.
Since $\Box_i b_i= \Box_i \Box_i b_i \leqslant \Box_i s$ and $\Box_i b_i \in w$, we have $\Box_i \Box_i b_i\in w$ and $\Box_i s \in w$ for each $i\in I$. Also, we have $\E s \in w$ and $(\E d\wedge \E s) \in w$. From the assertions $w \cap \mathit{M}_d(\gamma) = \emptyset$ and $(\E d\wedge \E s) \in w$, we see $(\E d \wedge \E s) \nin \mathit{M}_d(\gamma)$ and $\mathit{ht}_d(\E d \wedge \E s ) < \gamma$. By Lemma \ref{basic}, we obtain that $\mathit{ht}_d (s)+1 < \gamma$ and the element $s$ belongs to the accessible part of $(A, \prec_d)$. Recall that $M_d(\mathit{ht}_d (s)+1)$ is an ideal of $\mathcal{A}$ by Lemma \ref{basic2}. Trivially, $s \nin M_d(\mathit{ht}_d (s)+1)$. Hence, from the Boolean ultrafilter theorem, there exists an ultrafilter $w^\prime$ such that $s \in w^\prime$ and $w^\prime \cap M_d(\mathit{ht}_d (s)+1)=\emptyset$. We obtain that $w^\prime \in \widehat{s} =\bigcup_{i\in I} \widehat{\Box_i b_i} \subset J^\gamma_d$ and $\mathit{rk}_d(w^\prime)\leqslant \mathit{ht}_d (s)+1 < \gamma$ from (\ref{formula1}). This contradicts the definition of $J^\gamma_d$. The claim is established. It remains to check that $ \I_\tau (\widehat{a}) \subset \widehat{\C a}$ for any element $a$ of $\mathcal{A}$. Notice that $\widehat{\C a} = J^\infty_a =\bigcap_\gamma J^{\gamma+1}_a$. We show that $\I_\tau (\widehat{a}) \subset J^{\gamma+1}_a$ for any ordinal $\gamma $ by transfinite induction on $\gamma$. Suppose $\I_\tau (\widehat{a}) \subset J^{\gamma_0+1}_a$ for any $\gamma_0 <\gamma$.
Hence $\I_\tau (\widehat{a}) \subset J^\gamma_a$ and \[ \I_\tau (\widehat{a}) \subset \bigcap_{i\in I} \I_{\tau_i} (\widehat{a}) \cap \bigcap_{i\in I} \I_{\tau_i} (\I_\tau (\widehat{a})) \subset \bigcap_{i\in I} \I_{\tau_i} (\widehat{a}) \cap \bigcap_{i\in I} \I_{\tau_i} (J^\gamma_a) \subset J^{\gamma+1}_a,\] where the rightmost inclusion follows from (\ref{formula2}) and the leftmost inclusion holds because the powerset algebra of $\mathit{Ult}\: \mathcal{A}$ expanded with the mappings $\I_{\tau_i}$, for $i\in I$, and $\I_\tau$ is an $\mathsf{S4}^{C}_I$-algebra by Proposition \ref{topological property}. Finally, we obtain that \[\I_\tau (\widehat{a}) \subset \bigcap_\gamma J^{\gamma+1}_a = J^\infty_a = \widehat{\C a},\] which completes the proof. \end{proof} \begin{remark} Note that the given representation theorem applied for the Lindenbaum-Tarski algebra of $\mathsf{S4}^{C}_I$ provides a structure, which is called a canonical neighbourhood frame for $\mathsf{S4}^{C}_I$ in terms of neighbourhood semantics. \end{remark} \begin{corollary} An $\mathsf{S4}^{C}_I$-algebra is embeddable into a complete $\mathsf{S4}^{C}_I$-algebra if and only if it is standard. \end{corollary} \begin{proof} (if) Suppose an $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ is standard. By Theorem \ref{representation theorem}, there exists a family of topologies $(\tau_i)_{i\in I}$ on $\mathit{Ult}\: \mathcal{A}$ such that, for any element $a$ of $\mathcal{A}$, $\widehat{\Box_i a} = \I_{\tau_i} (\widehat{a})$, for each $i\in I$, and $\widehat{\C a} = \I_\tau (\widehat{a})$ for $\tau = \bigcap_{i\in I} \tau_i $. By Proposition \ref{topological property}, the powerset Boolean algebra of $\mathit{Ult}\: \mathcal{A}$ expanded with the mappings $\I_{\tau_i}$, for $i\in I$, and $\I_\tau$ is an $\mathsf{S4}^{C}_I$-algebra.
We see that the mapping $\:\widehat{\cdot}\;\colon a \mapsto \widehat{a}\:$ is an injective homomorphism from $\mathcal{A}$ to the powerset $\mathsf{S4}^{C}_I$-algebra of $\mathit{Ult}\: \mathcal{A}$. Therefore the algebra $\mathcal{A}$ is embeddable into a complete $\mathsf{S4}^{C}_I$-algebra. (only if) Suppose an $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ is isomorphic to a subalgebra of a complete $\mathsf{S4}^{C}_I$-algebra $\mathcal{B}$. Then the $\mathsf{S4}^{C}_I$-algebra $\mathcal{B}$ is $\sigma$-complete. By Proposition \ref{Sigma-complete algebras are standard}, it is standard. Since any subalgebra of a standard algebra is standard, the algebra $\mathcal{A}$ is standard. \end{proof} \begin{remark} An embedding of an interior algebra $\mathcal{A}$ into a complete interior algebra of subsets of $\mathit{Ult}\: \mathcal{A}$ provided by Proposition \ref{McKinTar} is called the \emph{topo-canonical completion of $\mathcal{A}$} in \cite{BezhMiMo08}. Our result can be considered as a generalization of this notion for the case of $\mathsf{S4}^{C}_I$-algebras. \end{remark} Now we obtain completeness results that connect the consequence relations $\VDash_l$, $\VDash_g$ and $\VDash$ with algebraic and syntactic consequence relations from the previous sections. \begin{theorem}[Algebraic and topological completeness] For any sets of formulas $\Sigma$ and $\Gamma$, and for any formula $\varphi$, we have \[\Sigma ;\Gamma \vdash \varphi \Longleftrightarrow \Sigma ;\Gamma \VDash \varphi \Longleftrightarrow \Sigma ;\Gamma \vDash \varphi.\] \end{theorem} \begin{proof} From Theorem \ref{algebraic completeness} and Proposition \ref{topological soundness}, it remains to show that $\Sigma ;\Gamma \VDash \varphi$ whenever $\Sigma ; \Gamma \vDash \varphi$. We prove this implication by \emph{reductio ad absurdum}. Suppose $\Sigma ; \Gamma \vDash \varphi$ and $\Sigma ; \Gamma \nVDash \varphi$.
Then there exist a standard $\mathsf{S4}^{C}_I$-algebra $\mathcal{A}$ and a valuation $v$ in $\mathcal{A}$ such that $v(\xi)=1$ for any $\xi \in \Sigma$ and $v(\varphi)\notin\langle \{v(\psi) \mid \psi \in \Gamma\}\rangle$. By the Boolean ultrafilter theorem, there is an ultrafilter $u$ of $\mathcal{A}$ such that $v(\varphi) \notin u$ and $ \{v(\psi) \mid \psi \in \Gamma\} \subset u$. From Theorem \ref{representation theorem} and Proposition \ref{topological property}, there exists a family of topologies $(\tau_i)_{i\in I}$ on $\mathit{Ult}\: \mathcal{A}$ such that the mapping $\:\widehat{\cdot}\;\colon a \mapsto \widehat{a}\:$ is an injective homomorphism from $\mathcal{A}$ to the powerset $\mathsf{S4}^{C}_I$-algebra of $\mathit{Ult}\: \mathcal{A}$. Let us denote the valuation $\beta \mapsto \widehat{v(\beta)}$ in the powerset $\mathsf{S4}^{C}_I$-algebra of $\mathit{Ult}\: \mathcal{A}$ by $w$. Put $\mathcal{M} =((\mathit{Ult}\: \mathcal{A}, (\tau_i)_{i\in I}),w)$. Note that $\bigcap \{ w(\xi) \mid \xi \in \Sigma\} = \mathit{Ult}\: \mathcal{A}$, $u \in \bigcap \{w(\psi) \mid \psi \in \Gamma\}$ and $u \notin w(\varphi)$. Hence we obtain $\mathcal{M} \vDash \xi $ for any $\xi \in \Sigma$, $\mathcal{M}, u \vDash \psi$ for any $\psi \in \Gamma$ and $\mathcal{M}, u \nvDash \varphi$, which contradicts the assumption $\Sigma ; \Gamma \vDash \varphi$. \end{proof} From this result, we immediately obtain the following corollary. \begin{corollary} For any sets of formulas $\Sigma$ and $\Gamma$, and for any formula $\varphi$, we have \begin{gather*} \Gamma \vdash_l \varphi \Longleftrightarrow \Gamma \VDash_l \varphi \Longleftrightarrow \Gamma \vDash_l \varphi,\qquad\;\: \Sigma \vdash_g \varphi \Longleftrightarrow \Sigma \VDash_g \varphi \Longleftrightarrow \Sigma \vDash_g \varphi. \end{gather*} \end{corollary} \subsubsection*{Funding.} This work is supported by the Russian Science Foundation under grant 21-11-00318.
\subsubsection*{Acknowledgements.} The main ideas of the article arose when I was visiting my parents-in-law Anatoly Filatov and Larisa Filatova in 2020. I heartily thank them for their hospitality. SDG \end{document}
\begin{document} \title{\textbf{Criterion for the coincidence of strong and weak Orlicz spaces}} \footnotesize\date{} \author{Maria Rosaria Formica ${}^{1}$, Eugeny Ostrovsky ${}^2$} \maketitle \begin{center} ${}^{1}$ Parthenope University of Naples, via Generale Parisi 13,\\ Palazzo Pacanowsky, 80132, Napoli, Italy. \\ e-mail: [email protected] \\ ${}^2$ Department of Mathematics and Statistics, Bar-Ilan University, \\ 59200, Ramat Gan, Israel. \\ e-mail: [email protected]\\ \end{center} \begin{abstract} We provide necessary and sufficient conditions for the coincidence, up to equivalence of the norms, between strong and weak Orlicz spaces. Roughly speaking, this coincidence holds true only for the so-called {\it exponential} spaces.\par We also find the exact value of the embedding constant which appears in the corresponding norm inequality. \end{abstract} \noindent {\footnotesize {\it Key words and phrases}: Measure, Orlicz space, Young-Orlicz function, norm equivalence, tail function and tail norm, expectation, Lorentz spaces, Orlicz-Luxemburg strong and weak norms, embedding constant, Markov-Tchebychev's inequality.\\ \noindent {\it 2010 Mathematics Subject Classification}: 46E30, 60B05. \section{Notations. Definitions. Statement of the problem.} Let $ (X = \{x\}, \cal{F}, \mu)$ be a measurable space with atomless sigma-finite non-zero measure $ \mu.$ Let $N = N(u), \ u \in \mathbb{R} $, be a non-negative numerical-valued Young-Orlicz function. This means that $N(u)$ is even, continuous, convex, strictly increasing for $u \geq 0$, tending to infinity as $u\to \infty$, and such that \begin{eqnarray*} \lim_{u \to 0} \frac {N(u)}{u} = 0 \ , \ \ \ \lim_{u \to \infty} \frac {N(u)}{u} = +\infty. \end{eqnarray*} In particular, $$ N(u) = 0 \ \Leftrightarrow \ u = 0. $$ Denote by $M_0 = M_0(X,\mu) $ the set of all numerical-valued measurable functions $f: X \to \mathbb{R}$, finite almost everywhere.
The Orlicz space $ L(N) = L(N; X, \mu)$ consists of all functions $ f: X \to \mathbb{R} $ from the set $ M_0(X,\mu)$ for which the classical Luxemburg norm $ \ ||f||_{L(N)}$ (equivalent to the Orlicz norm) or, in more detail, the {\it strong } Luxemburg norm $ ||f||_{sL(N)}$ defined by \begin{equation}\label{Luxembur norm} ||f||_{L(N)} = ||f||_{sL(N)} := \inf \left\{ k > 0 \, : \, \int_X N(|f(x)|/k) \ d\mu(x) \le 1 \ \right\} \end{equation} is finite.\par Furthermore, if $0<||f||_{L(N)}<\infty$, then \begin{equation}\label{integral less-equal 1} \int_X N \left( \ \frac{|f(x)|}{||f||_{L(N)}} \ \right) \ d\mu(x)\leq 1. \end{equation} Note that the equality sign occurs in \eqref{integral less-equal 1} if in addition the Young - Orlicz function $N(\cdot)$ satisfies the well known $\Delta_2$-condition. Moreover, if there exists $k_0 >0$ such that $\displaystyle \int_X N \left( \ \frac{|f(x)|}{k_0} \ \right) \ d\mu(x) = 1$, then $f \in L(N)$ and $k_0=||f||_{L(N)}$ (see \cite{Kras Rut1961}, Chapter 2, Section 9). The Orlicz spaces have been extensively investigated by M. M. Rao and Z. D. Ren in \cite{RaoRenTheory 1991,RaoRenAppl1997}; see also \cite{Bennett Sharpley1988,Kras Rut1961,Masta2016,OsMono1999,OsSirHIAT2007}, etc. Recently in \cite{Fiorenza2018a} (see also \cite{Fiorenza2018b}) the authors studied the Gagliardo-Nirenberg inequality in rearrangement invariant Banach function spaces, in particular in Orlicz spaces. \par Note that the so-called {\it exponential} Orlicz spaces are isomorphic to suitable Grand Lebesgue Spaces, see \cite{Buld Mush OsPuch1992,KozOs1985,KozOsSir2017,OsMono1999}. For some properties, variants and applications of the classical Grand Lebesgue Spaces see for example \cite{caponeformicagiovanonlanal2013,fioforgogakoparakoNA,anatrielloformicagiovajmaa2017}. 
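As an illustrative numerical sketch (added here, not part of the original argument): the Luxemburg norm \eqref{Luxembur norm} can be computed by bisection in $k$, since the modular $k \mapsto \int_X N(|f(x)|/k)\,d\mu(x)$ is non-increasing in $k$. We take the hypothetical test case $N(u)=u^2$ and $f(x)=x$ on $X=[0,1]$ with Lebesgue measure, for which the Luxemburg norm is just the $L_2$ norm $1/\sqrt{3}$.

```python
import math

def luxemburg_norm(N, f, n=4000, lo=1e-9, hi=1e9, iters=80):
    """Luxemburg norm of f on [0,1] with Lebesgue measure:
    inf{k > 0 : int_0^1 N(|f(x)|/k) dx <= 1}, found by bisection in k.
    The modular integral is discretized by the midpoint rule on n cells."""
    xs = [(i + 0.5) / n for i in range(n)]

    def modular(k):
        return sum(N(abs(f(x)) / k) for x in xs) / n

    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # bisect on a logarithmic scale
        if modular(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return hi

# Test case: N(u) = u^2 and f(x) = x, so the norm is ||f||_2 = 1/sqrt(3).
norm = luxemburg_norm(lambda u: u * u, lambda x: x)
print(norm)
```

The logarithmic bisection handles many orders of magnitude of $k$; the midpoint rule suffices for this smooth integrand.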
Recall that the Orlicz space $L(N)$ is said to be {\it exponential} if there exists $\delta>0$ such that the generating Young-Orlicz function $N = N(u)$ verifies \begin{eqnarray*} \lim_{ u \to \infty } \frac{\ln N(u)}{ [\ln( 2 + u)]^{1+\delta}} = \infty. \end{eqnarray*} For instance, this condition is satisfied when \begin{eqnarray*} N(u) = N^{(m)}(u) = \exp \left( |u|^m/m \right) - 1, \ m = \rm const > 0, \end{eqnarray*} as well as for an arbitrary Young-Orlicz function which is equivalent to $N^{(m)}(u)$ or when $$ N(u) = N_{(\Delta)}(u) \stackrel{def}{=} \exp \left( \ [\ln (1 + |u|)]^{\Delta} \ \right) - 1, \ \ \Delta = {\rm const} > 1. $$ Denote, as usual, for an arbitrary measurable function $f: X \to \mathbb R$ its Lebesgue-Riesz norm $$ ||f||_p := \left[ \int_X |f(x)|^p \ d\mu(x) \right]^{1/p}, \ p \in [1, \infty). $$ Suppose that the measure $\mu$ is probabilistic (or, more generally, bounded): $\mu(X) = 1$. It is known, see e.g. \cite{Ostrovsky Bide Mart}, that the measurable function $f$ (random variable, \ r.v.) belongs to the space $L(N^{(m)}), \ m = {\rm const} > 0$ iff $$ \sup_{p \ge 1} \left[ \ ||f||_p \ p^{-1/m} \ \right] < \infty. $$ Further, the non-zero function $ f: X \to \mathbb R$ belongs to the Orlicz space $L(N_{(\Delta)})$ iff, for some non-trivial constant $ C \in (0, \, \infty)$, $$ \sup_{p \ge 1} \left[ \ ||f||_p \ \exp \left( - C \ p^{\Delta} \right) \ \right] < \infty. $$ Define, as usual, for a function $f: X \to \mathbb{R} $ from the set $M_0(X,\mu)$ its {\it tail function } \begin{equation}\label{tail function} T[f](t) \stackrel{def}{=} \mu \{ \ x: \ |f(x)| \ge t \ \}, \ t \ge 0.
\end{equation} The function defined in \eqref{tail function} is also known as \lq\lq distribution function\rq\rq, but we prefer the first name since the notion \lq\lq distribution function\rq\rq\ is widely used in a different sense in probability theory.\par An arbitrary tail function is left continuous, monotonically non-increasing, takes values in the interval $[0, \mu(X)]$ if $0 < \mu(X) < \infty$ and in the semi-open interval $[0,\mu(X))$ if $\mu(X) = \infty$. Besides, $$ \lim_{t \to \infty} T[f](t) = 0. $$ The inverse conclusion is also true: any such function is the tail function of a suitable measurable, finite a.e. map $ \ f: X \to \mathbb{R}, \ $ {\it defined on a sufficiently rich measurable space.} \par The set of all tail functions will be denoted by $ \ W: \ $ \begin{equation} W =\{ \ T[f](\cdot), \ f \in M_0(X,\mu) \ \}. \end{equation} There are many rearrangement invariant function spaces in which the norm (or quasi-norm) of the function $f(\cdot)$ may be expressed by means of its tail function $T[f](\cdot)$, for example, the well-known Lorentz spaces. For a detailed investigation of the Lorentz spaces we refer the reader, e.g., to \cite{Bennett Sharpley1988, Lorentz1 1950, Lorentz2 1951, SoriaLor 1998, SteinWeiss1975}. \par We introduce here a modification of these spaces. Let $\theta = \theta(t), \ t \ge 0$, be an arbitrary tail function: $ \theta \in W. $ The so-called {\it tail quasi-norm} (or for brevity {\it tail norm})\, $||f||_{\rm Tail[\theta]}$ \! of a function $f \in M_0(X,\mu)$, with respect to the corresponding tail function $ \theta(\cdot),$ is defined by \begin{equation}\label{tail quasi norm} ||f||_{\rm Tail[\theta]} \stackrel{def}{=} \ \inf \{K > 0 \, : \, \forall t > 0 \ \Rightarrow \ T[f](t) \le \theta(t/K) \ \}.
\end{equation} It is easily seen that this functional satisfies the following properties: $$ ||f||_{\rm Tail[\theta]} \ge 0; \ \ \ ||f||_{\rm Tail[\theta]} = 0 \ \Longleftrightarrow \ f = 0 ; $$ $$ || c \ f||_{\rm Tail[\theta]} = |c| \, ||f||_{\rm Tail[\theta]}, \ \ c = {\rm const} \in \mathbb{R}. $$ Correspondingly, the set of all the functions $f$ belonging to the set $ M_0(X,\mu)$ and having finite value $ ||f||_{\rm Tail[\theta]} $ is said to be the {\it tail space} $\rm Tail[\theta].$ \par The following question is formulated in \cite{Cwikel Kaminska Maligranda2004} by M. Cwikel, A. Kaminska, L. Maligranda and L. Pick: {\it \lq\lq Are the generalized Lorentz spaces really spaces?\rq\rq}, i.e., can these spaces be normed such that they are (complete) Banach functional rearrangement invariant spaces? A particular {\it positive} answer to this question, i.e., under appropriate simple conditions, may be found in \cite{OsSirLorNorm2012}. See also \cite[chapter 1, sections 1,2]{OsMono1999}. \par We denote $$ I(f) = \int f(x) \ d\mu(x) = \int_X f(x) \ d\mu(x); $$ if $ \mu$ is a probability measure, we have $ \mu(X) = 1$ and we replace $(X = \{x\}, \cal{F}, \mu)$ with the standard triplet $(\Omega = \{ \omega \}, \cal{F}, {\bf P}) $ and, for any numerical-valued measurable function, i.e., in other words, random variable $\xi = \xi(\omega)$, we have $$ {\bf E} \xi := I(\xi) = \int_{\Omega} \xi(\omega) \ {\bf P}(d \, \omega); \ \ \ T[\xi](t) = {\bf P} ( |\xi| \ge t ), \ t \ge 0. $$ Define now, for an arbitrary Young-Orlicz function $N = N(u)$, the following tail function from the set $W$ \begin{equation} V[N](t) = V_N(t) \stackrel{def}{=} \min \left( \ \mu(X), \ \frac{1}{N(t)} \ \right). \end{equation} Of course, $ \min(c,\infty) = c, \ c \in (0,\infty)$. \par Suppose $0 \ne f \in L(N)$; then there exists a finite positive constant $ C $ such that $ I (N( |f(\cdot)|/C)) \le 1$; one can take for instance $C = ||f||_{L(N)}$.
\par From the classical Markov-Tchebychev inequality it follows that \begin{equation} T[f](t) \le V[N](t/C), \ t \ge 0. \end{equation} In particular, \begin{equation} T[f](t) \le V[N] \left( \ \frac{t}{||f||_{sL(N)}} \ \right), \ t \ge 0. \end{equation} In other words, if $0\ne f \in L(N)$, then the function $f(\cdot)$, as well as its normed version $\tilde{f} = f/||f||_{L(N)}$, belongs to the corresponding tail space: \begin{equation} ||f||_{{\rm Tail}[V[N]]} \le ||f||_{L(N)} = ||f||_{sL(N)}. \end{equation} \begin{definition}\label{def weak Orlicz} {\rm Let $N$ be a Young-Orlicz function and $f\in M_0(X,\mu)$. We say that $f$ belongs to the {\it weak} Orlicz space $wL(N)$ and we write $f(\cdot) \in wL(N)$ iff the following condition is satisfied \begin{equation}\label{weak orlicz} ||f||_{{\rm Tail}[V[N]]} < \infty \ \Longleftrightarrow \ f \in {\rm Tail}[V[N]]. \end{equation} For brevity we will also write $$ ||f||_{wL(N)} \stackrel{def}{=} ||f||_{{\rm Tail}[V[N]]}. $$ Obviously \begin{equation} \label{low1} ||f||_{wL(N)} \le ||f||_{sL(N)} \end{equation} } and $$sL(N) \subset wL(N).$$ \end{definition} \begin{remark} {\rm Let us emphasize the difference between the general {\it tail space} ${\rm Tail}[\theta]$ and the concrete weak Orlicz space $wL(N)$. In the first case the \lq\lq parameter\rq\rq $\theta$ is an arbitrary element of the tail set $W$, while for the description of the weak Orlicz space in Definition \ref{def weak Orlicz} the function $N(\cdot)$ belongs to the narrow class of Young-Orlicz functions.} \end{remark} A complete review of the theory of these spaces is contained in \cite{Liu MaoFa2013}; see also \cite{Liu Ye2010,Liu Hou MaoFa2017} and the recent paper \cite{Kawasumi2018}.
It is proved therein, in particular, that these spaces are $F$-spaces and may be normed under appropriate conditions, wherein the norm in the corresponding $F$-space or Banach space is linear and equivalent to the weak Orlicz norm.\par {\it A natural question arises: under what conditions imposed on the function $N = N(u)$ can the inequality \eqref{low1} be reversed, of course, up to a multiplicative constant? } \par {\bf In detail, our aim is to find necessary and sufficient conditions, imposed on the Young-Orlicz function $ N(\cdot)$, under which} \begin{equation} \label{defY(N)} Y(N) \stackrel{def}{=} \sup_{0 \ne f \in \ wL(N) } \left\{ \ \frac{||f||_{sL(N)}}{||f||_{wL(N)}} \ \right\} < \infty. \end{equation} {\bf It is also interesting, in our opinion, to calculate the exact value of the parameter $ \ Y(N) \ $ in the case of its finiteness; we will make this computation in Section 3.} \par \begin{remark} { \rm The {\it lower bound} in the last relation, namely, $$ \underline{Y}(N) \stackrel{def}{=} \inf_{0 \ne f \in \ wL(N) } \left\{ \ \frac{||f||_{sL(N)}}{||f||_{wL(N)}} \ \right\}, $$ is known and $\underline{Y}(N) = 1$. In detail, it follows from \eqref{low1} that $\underline{Y}(N) \ge 1; $ on the other hand, both norms coincide for any indicator function of a measurable set $A$ having a non-trivial measure $0 < \mu(A) < \infty$, whence $\underline{Y}(N) \le 1$ (see \cite{Liu MaoFa2013}). \par } \end{remark} Comparison theorems between weak as well as between ordinary (strong) Orlicz spaces and other spaces have been obtained, in particular, in \cite{Bennett Sharpley1988,Buld Mush OsPuch1992,Iaffei1996,KozOsSir2017,Masta2016,SoriaLor 1998,SteinWeiss1975}, etc.\par In both of the following examples the space $(X = \{x\}, \cal{F}, \ {\bf P})$ is a probability space; one can as well assume that $X = [0,1]$, equipped with the ordinary Lebesgue measure $d\mu(x) = dx$.
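Before turning to the examples, the basic inequality \eqref{low1} can be illustrated numerically. The following sketch (with an assumed test function, not taken from the text) checks the Markov-Tchebychev tail bound $T[f](t) \le V[N]\big(t/||f||_{sL(N)}\big)$ for $f(x)=x$ on $[0,1]$ with $N(u)=u^2$, where $||f||_{sL(N)}=1/\sqrt 3$ and $T[f](t)=\max(0,1-t)$.

```python
# Numerical check of the Markov-Tchebychev tail bound behind \eqref{low1}:
# T[f](t) <= V[N](t / ||f||_{sL(N)}) for the assumed test case
# f(x) = x on [0,1] with N(u) = u^2, where ||f||_{sL(N)} = 1/sqrt(3)
# and the tail function is T[f](t) = max(0, 1 - t).
C = 3 ** -0.5                              # strong Luxemburg norm of f

def T(t):                                  # tail function of f(x) = x
    return max(0.0, 1.0 - t)

def V(s):                                  # V[N](s) = min(1, 1/N(s)), N(u) = u^2
    return 1.0 if s == 0.0 else min(1.0, 1.0 / (s * s))

ok = all(T(t) <= V(t / C) + 1e-12 for t in (i / 1000.0 for i in range(3001)))
print(ok)  # True: the weak norm of f is at most its strong norm
```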
\par \begin{example} {\bf A negative case.}\par {\rm Let $ N(u) = N_p(u) = |u|^p,\ p = \rm const > 1$; in other words, the Orlicz space $ L(N_p)$ coincides with the classical Lebesgue-Riesz space $ L_p$: $$ |\xi|_p = \left[ \ {\bf E}|\xi|^p \ \right]^{1/p}. $$ The corresponding tail function has the form $$ V[N_p](t) = \min(1, \ t^{-p}), \ t > 0. $$ On the other hand, let us introduce the r.v. $\eta $ such that $$ T[\eta](t) := V[N_p](t), \ \ t > 0; $$ then the r.v. $\eta$ has unit norm in the corresponding weak Orlicz space $wL(N_p)$ but $$ {\bf E}|\eta|^p = {\bf E}\eta^p = p \int_0^{\infty} t^{p-1} \, T[\eta](t) \ dt \ge p \int_1^{\infty} t^{p-1} \ t^{-p} \ dt = p \int_1^{\infty} t^{-1} \ dt = \infty, $$ $ ||\eta||_p = \infty. \ $ In other words $ Y(N_p) = \infty$. } \end{example} \begin{example} {\bf A positive case.}\par {\rm Let now $$ N(u) = N^{(2)}(u) = \exp \left(u^2/2 \right) - 1, \ u \in \mathbb{R}, $$ the so-called subgaussian case. It is well-known that the non-zero r.v. $\zeta$ belongs to the Orlicz space $L(N^{(2)})$ if and only if there exists $C = {\rm const} > 0$ such that $$ T[\zeta](t) \le \exp (- C \ t^2), \ t \ge 0, $$ or equivalently $$ \sup_{p \ge 1} \left[ \ ||\zeta||_p/\sqrt{p} \ \right] < \infty. $$ Thus, in this case, $Y(N^{(2)}) < \infty$. \par The same conclusion also holds for the more general so-called exponential Orlicz spaces, which are in turn equivalent to the Grand Lebesgue Spaces, see \cite{KozOs1985,KozOsSir2017,OsSirHIAT2007}, \cite[Chapter 1, Section 1.2]{OsMono1999}.
For instance, this condition is satisfied when \begin{eqnarray*} N(u) = N^{(m)}(u) = \exp \left( |u|^m \right) - 1, \ m = {\rm const} > 0, \end{eqnarray*} as well as for an arbitrary Young-Orlicz function which is equivalent to $N^{(m)}(u)$; or when $$ N(u) = N_{(\Delta)}(u) \stackrel{def}{=} \exp \left( \ [\ln (1 + |u|)]^{\Delta} \ \right) - 1, \ \Delta = \rm const > 1. $$ } \end{example} \section{Main result.} Let $ (X = \{x\}, \cal{F}, \mu)$ be a measurable space with atomless sigma-finite non-zero measure $ \mu$ and let $N$ be a Young-Orlicz function. Define a unique value $ t_0= t_0(\mu(X)) \in [0, \infty)$ by $$ N(t_0) = \frac{1}{\mu(X)}; $$ in particular, if $ \ \mu(X) = \infty, \ $ then $ \ t_0 = 0. \ $ \par Denote also \begin{equation}\label{defJ} \begin{split} J(N) &\stackrel{def}{=} \inf_{C > 0} \int_0^{\infty} N(C \ t) \ \left| \ d V[N](t) \ \right| =\\ &-\sup_{C > 0} \int_0^{\infty} N(C \ t) \ d V[N](t) = -\sup_{C > 0} \int_0^{\infty} N(C \ t) \ V'[N](t)dt. \end{split} \end{equation} \noindent Note that the function $t \to V[N](t) $ is monotonically non-increasing, therefore $|d V[N](t)|=-d V[N](t)$. \par Evidently, when $ t_0 > 0$ we have $$ \int_0^{\infty} N(C \ t) \ \left| \ d V[N](t) \ \right| =-\int_{t_0}^{\infty} N(C \ t )\, d \left[ \frac{1}{N(t)} \right]. $$ \begin{theorem}\label{th equivalence strong-weak} Let $Y(N)$ and $J(N)$ be defined respectively by \eqref{defY(N)} and \eqref{defJ}. The necessary and sufficient condition for the equivalence of the strong and weak Luxemburg-Orlicz's norms, i.e. $Y(N) < \infty$, is the following: \begin{equation}\label{condition J(N) finite} J(N) < \infty, \end{equation} or equivalently \begin{equation} \label{def CN} \exists C = C[N] \in (0,\infty) \,: \, \int_0^{\infty} N(C \ t) \ \left| \ d V[N](t) \ \right| < \infty.
\end{equation} \end{theorem} \begin{remark} \label{even less} {\rm Evidently, if $ C[N] \in (0,\infty)$, then $$ \forall C_1 \in (0, C[N]) \ \Rightarrow \int_0^{\infty} N(C_1 \ t) \ \left| \ d V[N](t) \ \right| < \infty. $$ } \end{remark} \ {\bf Proof.} \ {\bf A.} First of all, note that \begin{equation}\label{note A} \int_X N(|f(x)|) \ d\mu(x) \ = - \int_0^{\infty} N(t) \ d\, T[f](t) \ = \int_0^{\infty} N(t) \ |d\, T[f](t)| . \end{equation} \ {\bf B. An auxiliary tool.} \par \begin{lemma}\label{auxiliary lemma} Let $ \xi, \ \eta $ be non-negative numerical-valued r.v.'s such that $T[\xi](t) \le T[\eta] (t), \ t \ge 0$. Let also $N(u)$ be a non-negative increasing function, $u \ge 0$. Then \begin{equation} {\bf E}N(\xi) \le {\bf E}N(\eta). \end{equation} \end{lemma} \noindent{\bf Proof of Lemma \ref{auxiliary lemma}}.\par We can assume as before, without loss of generality, $X = [0,1]$ with Lebesgue measure. One can also assume that $$ \xi(x) = [1 - T[\xi]]^{-1}(x), \ \ \ \eta(x) = [1 - T[\eta]]^{-1}(x), $$ where $ \ G^{-1} \ $ denotes a left inverse of the function $ \ G(\cdot). \ $ Then $ \ \xi(x) \le \eta(x) \ $ and hence $ N(\xi)\le N(\eta), \ $ and a fortiori $ \ {\bf E} N(\xi) \le {\bf E} N(\eta). \ $ \par \begin{remark} {\rm Of course, Lemma \ref{auxiliary lemma} remains true also for a non-finite measure $ \ \mu, \ $ as long as it is sigma-finite.} \par \end{remark} \ {\bf C. \ Necessity.} \noindent Let us introduce the following non-negative numerical-valued measurable function $g = g(x), \ x \in X$, for which \begin{equation} T[g](t) = V[N](t), \ t > 0; \end{equation} then $ \ g(\cdot) \in wL(N) \ $ with unit norm in this space. \par \noindent By the condition $Y(N)<\infty$, the function $g$ also belongs to the space $sL(N)$, therefore $$ \exists C_0 \in (0, \infty) \ \, : \, \gamma(N) = \gamma_{C_0}(N) \stackrel{def}{=} \int_X N(C_0 \ |g(x)|) \ d\mu(x) <\infty.
$$ We deduce, by virtue of \eqref{note A}, $$ \int _0^{\infty} N(C_0 \ t) \ |d V[N](t)| = \gamma(N) < \infty, $$ \begin{equation} J(N) = \inf_{ C > 0} \int_0^{\infty} N(C \ t) \ |d V[N](t)| \le \int_0^{\infty} N(C_0 \ t) \ |d V[N](t)| = \gamma(N) < \infty. \end{equation} \ {\bf D. \ Sufficiency.}\par \noindent Assume that the condition $ J(N) < \infty$ is satisfied. Suppose that the measurable function $ f: X \to \mathbb{R} $ belongs to the weak Orlicz space $wL(N)$: \begin{equation} T[f](t) \le V[N](t/C_2), \ t \ge 0, \end{equation} for some finite positive value $C_2$. Let $C_3 = {\rm const} \in (0,\infty)$; its exact value will be clarified below. By using Lemma \ref{auxiliary lemma} we get \begin{eqnarray*} & & \int_X N(C_3 \, |f(x)| ) \ d\mu(x) = \int_0^\infty N(C_3\,t)\,|dT[f](t)| \\ &\leq & \int_0^{\infty} N(C_3\, t) \ |dV[N](t/C_2)| = \int_0^{\infty} N(C_2 \, C_3 \, t) \ |dV[N](t)| \\ &=& \int_0^{\infty} N(C_4\, t) \ |dV[N](t)| < \infty, \end{eqnarray*} if the (positive) value $C_4 := C_2 \ C_3$ is sufficiently small, for instance $ C_4 \le C[N].$ \par Thus, the function $ f (\cdot)$ belongs to the strong Orlicz space $sL(N)$. $\Box$ \par \begin{remark} {\rm The condition of Theorem \ref{th equivalence strong-weak} is satisfied for the exponential Orlicz space of the form $ \ L(N^{(m)}), \ m > 0, \ $ and is not satisfied for the Orlicz space $ \ L(N_{(\Delta)}), \ \Delta >1, \ $ which is also an exponential space.} \end{remark} \section{Quantitative estimates.} It is of interest, in our opinion, to obtain a {\it quantitative} estimate of the constant which appears in the norm inequality for the embedding $wL(N) \subset sL(N)$; namely, our aim is to compute the exact value for $Y(N)$, defined in \eqref{defY(N)}. \par In detail, let $f: X \to \mathbb{R}$ be some function from the space $ wL(N)$; one can suppose, without loss of generality, \begin{equation} T[f](t) \le V[N](t), \ t \ge 0 \ \Longleftrightarrow \ ||f||_{wL(N)} \le 1.
\end{equation} Assume also that the condition \eqref{condition J(N) finite} is satisfied, namely $J(N) < \infty$; we want to find the upper estimate for the value $||f||_{sL(N)}$. \par Let us introduce the variable \begin{equation}\label{y0} y_0 = y_0(N, \ \mu(X)) := N^{-1}(1/\mu(X)), \end{equation} so that $ \ y_0(N, \infty) = 0, \ \ y_0(N, 1) = N^{-1}(1)$ and define the function \begin{equation}\label{Q} Q(k)= Q[N](k):=\int_{y_0}^{\infty} N(y/k) \ \left| d \frac{1}{N(y)} \right|, \ \ k \in (1, \ \infty], \end{equation} or equivalently $$ Q(k) = \int_1^{\infty} N \left( \frac{N^{-1}(w)}{k} \ \right) \ \frac{dw}{w^2}. $$ Of course $ \ Q(1^+) = \infty, \ Q(\infty) = 0$. \par Denote also \begin{equation}\label{def K0} k_0[N] := Q^{-1}(1) \in (1, \infty). \end{equation} Notice that the finiteness of the value $ \ k_0[N] \ $ is equivalent to the condition $J(N)<\infty$ of Theorem \ref{th equivalence strong-weak}. \par \begin{theorem} Assume that the condition $J(N)<\infty$ is satisfied. Let $k_0[N]$ be defined by \eqref{def K0}. Then \begin{equation}\label{exactvalue} ||f||_{sL(N)} \le k_0[N] \ ||f||_{wL(N)}, \end{equation} and the coefficient $k_0[N]$ is here the best possible. Namely, \begin{equation} Y(N)=\sup_{0 \ne f \in wL(N)} \left\{ \ \frac{||f||_{sL(N)}}{||f||_{wL(N)}} \ \right\} = k_0[N]. \end{equation} \end{theorem} In other words, $k_0[N]$ is the exact value (attainable) of the embedding constant in the inclusion $wL(N) \subset sL(N)$. Moreover, there exists a measurable function $f_0: X \to \mathbb{R} $, with $||f_0||_{wL(N)}=1$, for which the equality in \eqref{exactvalue} holds true: \begin{equation} \label{attain} ||f_0||_{sL(N)} = k_0[N] \ ||f_0||_{wL(N)}. \end{equation} Obviously $k_0[N]=+\infty$ when $J(N)=\infty$.
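The last claim can be illustrated numerically with the power function $N_p(u)=|u|^p$ of the negative example above, for which $J(N_p)=\infty$: the truncated moment of the extremal function $\eta$ with tail $T[\eta]=V[N_p]$ equals $1+p\ln M$ and diverges as the truncation level $M$ grows. (An added numerical sketch, using the midpoint rule for the integral.)

```python
import math

# Truncated p-th moment of the extremal r.v. eta with tail
# T[eta](t) = V[N_p](t) = min(1, t^{-p}), via p * int_0^M t^{p-1} T(t) dt.
# Analytically this equals 1 + p*ln(M), so it diverges as M -> infinity:
# eta lies in the weak space wL(N_p) but not in the strong one.
def truncated_moment(p, M, n=200000):
    h = M / n
    return sum(p * ((i + 0.5) * h) ** (p - 1) * min(1.0, ((i + 0.5) * h) ** (-p))
               for i in range(n)) * h

m1 = truncated_moment(2.0, 100.0)
m2 = truncated_moment(2.0, 10000.0)
print(m1, m2)  # near 1 + 2 ln 100 and 1 + 2 ln 10000
```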
{\bf Proof.} First of all, note that the function $k \to Q(k), \ k \in (1, \ \infty)$ is continuous, strictly monotonically decreasing, and moreover $$ Q(\infty) = \lim_{k \to \infty} Q(k) = 0, $$ by virtue of the dominated convergence theorem; as well as $$ Q(1+) \stackrel{def}{=} \lim_{k \to 1+} Q(k) = \int_{y_0}^{\infty} N(y) \ \left| d \frac{1}{N(y)} \right| = \int_{1/\mu(X)}^{\infty} z \ \left| d \frac{ 1}{z} \right| = \infty, $$ and the case when $\mu(X) = \infty $ is not excluded.\par Thus the value $k_0[N]$ exists, is unique, and finite: \ $ \ k_0[N] \ \in (1, \infty)$. Further, assume that the non-zero measurable function $ \ f: \ X \to \mathbb{R} \ $ belongs to the weak Orlicz space $ \ wL(N); \ $ one can suppose, without loss of generality, $ ||f||_{wL(N)} = 1$: \begin{equation} T[f](t) \le \min \left( \mu(X), \frac{1}{N(t)} \right) =: T[g](t), \end{equation} where $T[g](t)=V[N](t)$. \par We deduce, from the definition of the value $k_0[N]$ and using once again Lemma \ref{auxiliary lemma}, $$ \int_X N \left( \ \frac{f(x)}{k_0[N]} \ \right) \ d\mu(x) \le \int_X N \left( \ \frac{g(x)}{k_0[N]} \right) \ d\mu(x) = $$ \begin{equation} \int_{y_0}^{\infty} N \left( \ \frac{y}{k_0[N]} \ \right) \left| \ d \frac{1}{N(y)} \ \right| = Q(k_0[N]) =1, \end{equation} therefore \begin{equation} ||f||_{sL(N)} \le k_0[N] = k_0[N] \ ||f||_{wL(N)}. \end{equation} Thus we have proved the {\it upper} estimate; its {\it sharpness} follows immediately from the relation \begin{equation} ||g||_{sL(N)} = k_0[N] = k_0[N] \ ||g||_{wL(N)}. \end{equation} In detail: \begin{equation}\label{eq integral} \int_X N \left( \frac{g(x)}{k_0[N]} \right) \ d\mu(x) = \int_X N \left( \ \frac{y}{k_0[N]} \ \right) \ \left| \ d V[N](y) \ \right| = 1 \end{equation} in accordance with the choice of the magnitude $k_0[N]$. Therefore $$ ||g||_{sL(N)} = k_0[N] $$ and simultaneously $||g||_{wL(N)} = 1$. So, in \eqref{attain} one can choose $f_0(x):= g(x)$ (attainability).
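As a numerical illustration of the theorem (an added sketch, not part of the original proof): for the subgaussian function $N^{(2)}(u)=e^{u^2/2}-1$ on a probability space, the function $Q(k)$ of \eqref{Q} can be evaluated through its second representation $Q(k)=\int_1^\infty N(N^{-1}(w)/k)\,w^{-2}\,dw$; the substitution $w=e^s-1$ yields a rapidly decaying integrand, and $k_0=Q^{-1}(1)$ is then located by bisection.

```python
import math

# Numerical evaluation of Q(k) for the subgaussian function
# N(u) = exp(u^2/2) - 1 on a probability space, using
# Q(k) = int_1^oo N(N^{-1}(w)/k) dw / w^2 with N^{-1}(w) = sqrt(2 ln(1+w));
# the substitution w = e^s - 1 gives
# Q(k) = int_{ln 2}^oo (exp(s/k^2) - 1) e^s / (e^s - 1)^2 ds.
def Q(k, S=60.0, n=20000):
    a = math.log(2.0)
    h = (S - a) / n
    b = 1.0 / (k * k)
    total = 0.0
    for i in range(n):
        s = a + (i + 0.5) * h
        es = math.exp(s)
        total += (math.exp(b * s) - 1.0) * es / (es - 1.0) ** 2
    return total * h

# Q is strictly decreasing on (1, oo); locate k_0 = Q^{-1}(1) by bisection.
lo, hi = 1.01, 10.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if Q(mid) > 1.0:
        lo = mid
    else:
        hi = mid
k0 = hi
print(k0)  # about 1.52 in the subgaussian case
```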
\par \begin{example} {\rm Let $(X = \{x\}, \cal{F}, \mu)$ be a {\it probability} space with an atomless measure $\mu$, \ $\mu(X)=1$. We define the following Young-Orlicz function, more precisely, the following family of Young-Orlicz functions \begin{equation*} N(u) = N^{(m)}(u) \stackrel{def}{=} \exp \left( \ |u|^m /m\ \right) - 1, \ \ m = {\rm const} >1. \end{equation*} The case $m = 2$ is known as the subgaussian case. The corresponding tail behavior for a non-zero r.v. $ \xi$, having finite weak Orlicz norm in the space $\left (X, L \left(N^{(m)} \right) \right)$, has the form $$ T[\xi](t) \le \exp(- C(m) \ t^{m}/m), \ \ t \ge 0. $$ Let us introduce the following modification of the incomplete beta-function $$ B_{\gamma}(a,b) \stackrel{def}{=} \int_{\gamma}^1 t^{a -1} \ (1 - t)^{b-1} \ dt, \ \ \gamma \in (0,1), \ \ a,b = {\rm const} \in \mathbb{R}, \ b > 0, $$ and define the variables $ \theta = \theta(k,m) := k^{-m}, \ k > 1$, and the function \begin{equation*} \begin{split} G(\alpha) \stackrel{def}{=} B_{1/2}(-1, 1 - \alpha) &= \int_{1/2}^1 t^{-2} \ (1-t)^{-\alpha} \,dt\\ &=\int_0^{1/2}(1 - z)^{-2}\ z^{-\alpha} \,dz,\ \ \ \alpha < 1, \end{split} \end{equation*} where the last equality follows from the change of variable $t = 1 - z$. Using the Taylor series expansion $$(1-z)^{-2}=\sum_{n=0}^\infty(n+1)z^n, \ \ \ z\in (-1,1)$$ which converges uniformly at least in the closed interval $[0,1/2]$, we get \begin{equation*} G(\alpha)= \sum_{n=0}^\infty(n+1)\int_0^{1/2} z^{n-\alpha}\,dz, \end{equation*} which gives \begin{equation}\label{G series} G(\alpha) = \sum_{n=0}^{\infty} (n+1)\frac{2^{-n - 1 + \alpha }}{n+1 - \alpha}.
\end{equation} By \eqref{y0} we obtain $$ y_0=y_0^{(m)}= N^{-1}(1)=(m \ln 2)^{1/m} $$ and by \eqref{Q} \begin{equation*} \begin{split} Q_m(k) & = Q[N^{(m)}](k) = \int_{y_0^{(m)}}^{\infty} \left( e^{y^m \ k^{-m}/m} - 1 \right) \ \left| \ d_y \frac{1}{e^{y^m/m} - 1} \ \right|\\ \\ &=\int_{(m \ln 2)^{1/m}}^{\infty} \left( e^{y^m \ k^{-m}/m} - 1 \right) \frac{e^{y^m/m}\,y^{m-1}}{(e^{y^m/m}-1)^2}\, dy.\\ \end{split} \end{equation*} Now we put $x=e^{y^m/m}$, so $dx=e^{y^m/m}\,y^{m-1}\, dy$, \, $x\in(2,\infty)$; then \begin{equation*} \begin{split} Q_m(k) & =\int_2^\infty(x^{k^{-m}}-1)(x-1)^{-2} \, dx. \end{split} \end{equation*} We make another change of variable $t=1-1/x \ \ \Rightarrow \ \ x=1/(1-t) \ \ \Rightarrow \ \ dx =\frac{dt}{(1-t)^2}$, which yields \begin{equation*} \begin{split} Q_m(k) & =\int_{1/2}^1\frac{1-(1-t)^{k^{-m}}}{(1-t)^{k^{-m}}} \, \left(\frac{t}{1-t}\right)^{-2}\frac{dt}{(1-t)^2}\\ \\ &=\int_{1/2}^1 t^{-2}\ (1-t)^{-k^{-m}}\, dt-\int_{1/2}^1t^{-2}\,dt=\int_0^{1/2} (1 - z)^{-2} \ z^{-k^{-m}} \, dz - 1\\ \\ &=G(k^{-m})-1=G(\theta(k,m))-1. \end{split} \end{equation*} Therefore, the value $k_0=k_0 \left[N^{(m)} \right]=Q^{-1}(1)$ defined in \eqref{def K0} may be found as follows. Define an absolute constant $ \beta_0$ by means of the relation \begin{equation} \int_0^{1/2} (1 - z)^{-2} \ z^{-\beta_0} \, dz= G(\beta_0) = 2; \end{equation} then $$ \beta_0 \approx 0.431870 $$ and \begin{equation} k_0=k_0 \left[N^{(m)} \right] = \left[\beta_0 \right]^{-1/m}, \end{equation} or equivalently \begin{equation} G(k_0^{-m})= 2. \end{equation} Note in addition that $ G(0) = 1 $, $G(1^-) = \infty$ and $G$ is strictly increasing in $(0,1)$; therefore the value $\beta_0$ exists and is unique. \par Note that $$ G(\alpha) > \int_{1/2}^1 (1 - t)^{-\alpha} \ dt = \frac{2^{\alpha - 1}}{1 - \alpha}, \ \alpha \in (0,1), $$ and, when $ \alpha \to 1^-$, $$ G(\alpha) \sim \frac{1}{1 - \alpha}.
$$ If $ \alpha \to 0^+ $, by Taylor series expansion we have $$ G(\alpha) \sim 1 + C_5 \alpha, $$ where $$ C_5 \stackrel{def}{=} \int_0^{1/2} \frac{|\ln z|}{(1 - z)^2} \ dz = 2 \ \ln 2 \approx 1.38629. $$ Indeed, we put $$ C_6(\varepsilon) := \int_{\varepsilon}^{1/2} \frac{\ln z}{(1 - z)^2} \ dz, $$ so that $$ C_5 = - \lim_{\varepsilon \to 0+} C_6(\varepsilon), \ \ \varepsilon \in (0, 1/2). $$ By means of integration by parts we get $$ C_6(\varepsilon) = \int_{\varepsilon}^{1/2} \frac{\ln z}{(1-z)^2} \ dz = \int_{\varepsilon}^{1/2} \ln z \ d\frac{1}{1-z}= $$ $$ \frac{\ln(1/2)}{1/2} - \frac{\ln \varepsilon}{(1 - \varepsilon)} - C_7, $$ where $$ C_7 = \int_{\varepsilon}^{1/2}\frac{1}{z(1 - z)} \ dz= \int_{\varepsilon}^{1/2}\frac{dz}{z} + \int_{\varepsilon}^{1/2} \frac{dz}{(1 - z)} = $$ $$ \ln(1/2) - \ln \varepsilon - \ln(1/2) + \ln(1 - \varepsilon) = \ln(1 - \varepsilon) - \ln \varepsilon. $$ Therefore $$ C_6(\varepsilon) = - 2 \ln 2 - \frac{\ln \varepsilon}{1 - \varepsilon} - \ln(1 - \varepsilon) + \ln \varepsilon \,\to\, - 2 \ln 2, $$ as $ \ \varepsilon \to 0^+. \ $ Thus, $ \ C_5 = 2 \ln 2. $ Note that $\displaystyle\lim_{m \to \infty} k_0 \left[N^{(m)} \right] = 1$.\par To summarize: denote \begin{equation}\label{inf k0} \underline{k_0} := \inf_N k_0[N], \end{equation} where the infimum in \eqref{inf k0} is calculated over all the Young-Orlicz functions $N(\cdot)$. We actually proved that \begin{equation} \underline{k_0} = 1. \end{equation} In detail, it follows from \eqref{low1} that \begin{equation} \underline{k_0} \ge 1. \end{equation} On the other hand, $$ \underline{k_0} \le \lim_{m \to \infty} k_0 \left[N^{(m)} \right] = 1. $$ Evidently, $$ \overline{k_0} := \sup_N k_0[N] = \infty.
$$ } \end{example} \emph{Acknowledgement.} {\footnotesize The first author has been partially supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM) and by Universit\`a degli Studi di Napoli Parthenope through the project \lq\lq sostegno alla Ricerca individuale\rq\rq }.\par \ The second author is grateful to M. Sgibnev (Novosibirsk, Russia) for sending his interesting articles. \par \end{document}
\begin{document} \baselineskip=16pt \title{\sf Construction of 2nd stage shape invariant potentials} \begin{abstract} We introduce the concept of next generation shape invariance and show that the process of shape invariant extension can be continued indefinitely. \end{abstract} \subsubsection*{Introduction} Suppose we have a shape invariant potential \(V(x)\) with superpotential \(W(x)\) and prepotential \(\Omega(x)\); the partner potentials \(V_\pm(x)\), defined by \begin{equation}\label{EQ01} V_\pm = W^2 \mp W^\prime(x), \end{equation} differ from \(V(x)\) by a constant. Shape invariance means that \(V_-(x)\) and \(V_+(x)\) have the same form and coincide if parameters appearing in \(W(x)\) are redefined. In earlier papers we have demonstrated that all the shape invariant potentials could be constructed by assuming an ansatz \(W(x) = \lambda F(x)\) and solving the constraint implied by shape invariance [1,2]. In addition, a process for arriving at the shape invariant potentials related to the exceptional orthogonal polynomials was given within the framework of quantum Hamilton Jacobi (QHJ) formalism. This naturally raises the question if the process can be repeated to arrive at potentials related to the next generation of exceptional polynomials. This question has been addressed in a recent article by Sree Ranjani within the framework of QHJ [3]. She has explicitly constructed the second generation potentials and the corresponding polynomials. Here we give a slightly different approach to the above mentioned problem. It will be seen that this approach is much simpler and leads to a set of new results on shape invariant potentials. If we introduce a function \(f(x)\) defined by \[f(x) =\exp\Big(-\int W(x) dx\Big),\] the Hamiltonians corresponding to the potentials \(V_\pm(x)\) in \eqref{EQ01} can be written as \begin{equation}\label{EQ02} H_+ = - \frac{1}{f}\dd{x} f^2 \dd{x} \frac{1}{f}, \qquad H_- = - f\dd{x}\frac{1}{f^2}\dd{x} f.
\end{equation} {\it We will assume that the partners \(H_\pm\) have the shape invariance property and that the solutions of the corresponding eigenvalue equations are known.} \subsubsection*{Generalizing Shape Invariance} Given a differential operator of the form \(H_\pm\) in \eqref{EQ02}, we introduce the differential operators \begin{eqnarray} L_1 = - \frac{1}{g}\dd{x} g \dd{x} ; \qquad L_2 = - g\dd{x} \frac{1}{g}\dd{x}. \end{eqnarray} and define \begin{equation}\label{EQ04} L \equiv L_1 + \widetilde{V}(x) = - \frac{1}{g}\dd{x} g \dd{x} + \widetilde{V}(x). \end{equation} It can easily be seen that, for the choice \(g=f^2\), the differential operators \(L_1,L_2\) are related to \(H_\pm\): \begin{equation} L_1={f}^{-1} H_+ f ,\qquad\qquad L_2 = f H_- {f}^{-1}. \end{equation} The additional term \(\widetilde{V}(x)\) in \eqref{EQ04} will be called a next generation term in the potential \(V(x)\). One of the aims of this paper is to use a generalized shape invariance requirement and to determine restrictions on the next generation term \(\widetilde{V}(x)\). We define a concept of {\bf gen-next shape invariance} for \(\widetilde{V}(x)\) in a manner close to the shape invariance property as it is understood in the current literature. For this purpose we follow our paper-I. Given a Schr\"{o}dinger differential operator \(L\) of the form \eqref{EQ04}, we will define partner potentials and gen-next shape invariance as follows. We first set up the eigenvalue equation \begin{equation} L \psi = E \psi. \end{equation} We write \(\psi(x) = \exp(\Omega(x))\), where \(\Omega(x)\) obeys the QHJ equation for \(L\): \begin{equation} \Big(\dd[\Omega]{x}\Big)^2 + \frac{1}{g} \dd{x}\Big(g\dd[\Omega(x)]{x}\Big) - E = 0. \end{equation} The above form suggests that we introduce next-gen partner potentials \(\widetilde{V}_\pm(x)\) by \begin{equation} \widetilde{V}_\pm(x) = W^2 \pm \frac{1}{g}\dd{x}(gW), \end{equation} where \(W(x)= \dd[\Omega]{x}\).
In general the function \(W(x)\), to be called the next-gen superpotential, will depend on some parameters. In the simplest case of translational shape invariance, there is only one such parameter. Anticipating the results, we assume that the next-gen superpotential \(W(x)\) can be chosen to depend on one parameter. We use \(\lambda, \mu\) etc. to denote different values of this parameter. The shape invariance requirement, \( V_+(x,\lambda) = V_-(x,\mu) + \text{const}\), then takes the form \begin{equation} W^2(x,\lambda) + \frac{1}{g(x)} \dd{x}\Big( g(x) W(x,\lambda) \Big) = W^2(x,\mu) - \frac{1}{g(x)} \dd{x}\Big( g(x) W(x,\mu) \Big) + \text{constant} \end{equation} As before, following our paper [1], we make an ansatz \begin{equation} W(x) = \lambda F(x). \end{equation} and rewrite the resulting equation in the form \begin{equation} F^2 + \frac{1}{(\lambda-\mu)}\frac{1}{g} \dd[(gF)]{x}= \text{constant}. \end{equation} Next, we change the variable from \(x\) to \(\xi =\alpha x\), where \(\alpha = \lambda-\mu\). Using the notation \(\bar{g}(\xi)= g(\xi/\alpha)\) we get \begin{equation}\label{EQ11} F^2(\xi) + \frac{1}{\bar{g}}\dd{\xi}\Big(\bar{g} F \Big) = K, \end{equation} where \(K\) is a constant. Next, substituting \(F= \psi^\prime/\psi\), the function \(\psi\) satisfies the equation \begin{equation} \DD[\psi]{\xi} + \frac{1}{\bar{g}}\dd[\bar{g}]{\xi} \, \dd[\psi]{\xi} - K \psi = 0. \end{equation} This equation can be rearranged as \begin{equation} - \Big(\frac{1}{\bar{g}}\dd{\xi} \bar{g} \dd{\xi} - K \Big) \psi=0. \end{equation} Substituting \(\bar{g}= f^2,\ \psi= \frac{1}{f}\phi\), we get \begin{equation} \Big( \frac{1}{f}\dd{\xi} f^2 \dd{\xi} \frac{1}{f} - K \Big) \phi(\xi) =0 \end{equation} The solutions of this equation are known and can be used to construct solutions for the function \(F(x)\), and hence the next generation of shape invariant potentials can be determined. It is obvious that the process outlined here can be carried out indefinitely.
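For completeness, we note that the substitution \(F=\psi^\prime/\psi\) can be checked directly: since \(\dd[F]{\xi} = \DD[\psi]{\xi}\big/\psi - F^2\), one has \[ F^2 + \frac{1}{\bar{g}}\dd{\xi}\Big(\bar{g} F \Big) = F^2 + \dd[F]{\xi} + \frac{1}{\bar{g}}\dd[\bar{g}]{\xi}\, F = \frac{1}{\psi}\Big( \DD[\psi]{\xi} + \frac{1}{\bar{g}}\dd[\bar{g}]{\xi}\,\dd[\psi]{\xi}\Big), \] so that \eqref{EQ11} is equivalent to the linear equation for \(\psi\) displayed above.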
\subsubsection*{Remarks} \begin{itemize} \item It should be noted that in the second stage we will get a two-parameter shape invariant potential. In general we will have translational invariance in \(2^n\) parameters in the \(n^\text{th}\) stage. \item Instead of starting from \(L_1\) we could have started with \(L_2\); see \eqref{EQ04}. This will lead to two other potentials. Thus we will have ``four partners'' in the second stage and \(2^n\) partners after the \(n^\text{th}\) stage. The multiparameter shape invariant potentials have been found for the first time in this work. \item The potentials related to exceptional orthogonal polynomials (EOPs) discussed in the literature correspond to special values of the parameter \(\lambda\) appearing in the higher generation potentials. \item {\it Iso-spectral deformation route:} Suppose we have potentials \(V_\pm\) specified by a superpotential \(W(x)\) and that we wish to carry out a shape invariant extension, following our earlier paper. We would then write \(\overline{W}(x) = W(x)+\chi(x)\) and demand that \(V_+\) (or \(V_-\)) computed from \(W(x)\) and from \(\overline{W}(x)\) be equal up to a constant; we would then get \begin{equation} W^2(x) + \dd[W]{x} = \overline{W}^2(x) + \dd[\overline{W}]{x} + \text{constant}. \end{equation} This route to obtain a new shape invariant potential leads to the following constraint on \(\chi\) \begin{eqnarray} \chi^2 \pm 2W(x) \chi + \chi^\prime +K=0 \\ \chi^2 \mp 2\chi\Big(\frac{1}{f} \dd{x}f(x)\Big) + \chi^\prime +K=0 \end{eqnarray} since \(f^\prime/f = -W\). One of these equations coincides with \eqref{EQ11} if we choose the sign, define the argument suitably, and remember that \(\bar{g} = f^2\). Thus the process of going to the next generation of shape invariance described here contains the isospectral deformation process used in our paper [1].
\end{itemize} \subsubsection*{Concluding remarks} In this paper we have demonstrated that, for any shape invariant potential for which the energy eigenfunctions and eigenvalues are known, we can construct the next generation shape invariant potential, and that the process can be repeated indefinitely. Physically, only potentials which are free of singularities are of interest. Deciding whether the next generation potentials so obtained are free of singularities requires a separate study. \subsubsection*{References} \begin{itemize} \item[{[1]}] S. Sree Ranjani, R. Sandhya and A. K. Kapoor, ``Shape invariant rational extensions and potentials related to exceptional polynomials'',\\ International Journal of Modern Physics A {\bf 30} (2015) 1550146. \item[{[2]}] R. Sandhya, S. Sree Ranjani and A. K. Kapoor, ``Shape invariant potentials in higher dimensions'',\\ Annals of Physics {\bf 359} (2015) 125--135. \item[{[3]}] S. S. Ranjani, ``QHJ route to multi-indexed exceptional Laguerre polynomials and corresponding rational potentials'', arXiv:1705.07682. \end{itemize} \end{document}
\begin{document} \title{Generalized Disparate Impact for Configurable Fairness Solutions in ML} \maketitle \def\thefootnote{*}\footnotetext{Equal contribution.\\ Correspondence to: Luca Giuliani [email protected], Eleonora Misino [email protected]}\def\thefootnote{\arabic{footnote}} \begin{abstract} We make two contributions in the field of AI fairness over continuous protected attributes. First, we show that the Hirschfeld-Gebelein-Renyi (HGR) indicator (the only one currently available for such a case) is valuable but subject to a few crucial limitations regarding semantics, interpretability, and robustness. Second, we introduce a family of indicators that are: 1) complementary to HGR in terms of semantics; 2) fully interpretable and transparent; 3) robust over finite samples; 4) configurable to suit specific applications. Our approach also allows us to define fine-grained constraints to permit certain types of dependence and forbid others selectively. By expanding the available options for continuous protected attributes, our approach represents a significant contribution to the area of fair artificial intelligence. \end{abstract} \section{Introduction} In recent years, the social impact of data-driven AI systems and their ethical implications have become widely recognized. For example, models may discriminate over population groups \cite{propublica2016,10.1001/jamainternmed.2018.3763}, spurring extensive research on AI fairness. Typical approaches in this area involve quantitative indicators defined over a ``protected attribute'', which can be used to measure discrimination or enforce fairness constraints. On the one hand, such metrics are arguably the most viable solution for mitigating fairness issues; on the other hand, the nuances of ethics can hardly be reduced to simple rules. From this point of view, the availability of multiple and diverse metrics is a significant asset since it enables choosing the best indicator depending on the application.
Regarding available solutions, the case of categorical protected attributes is well covered by multiple indicators (see \Cref{sec:background}). Conversely, a single approach works with continuous protected attributes at the moment of writing; this is the \emph{Hirschfeld-Gebelein-Renyi} (HGR) correlation coefficient \cite{Rnyi1959OnMO}, which has two viable implementations for Machine Learning (ML) systems \cite{pmlr-v97-mary19a,ijcai2020p313}. We view the lack of diverse techniques for continuous protected groups as a major issue. We contribute to this area by 1) identifying a few critical limitations in the HGR approach, and 2) introducing a family of indicators that complement HGR semantics and have technical advantages. In terms of limitations, we highlight how the theoretical HGR formulation is prone to pathological behavior for finite samples, leading to the oversized importance of implementation details and limited interpretability. Moreover, the generality of the HGR formulation makes the indicator unsuitable for exclusively measuring the functional dependency between the protected attribute and the target. Finally, the HGR indicator cannot account for scale effects on fairness since it is based on the scale-invariant Pearson's correlation coefficient. We introduce the Generalized Disparate Impact (GeDI), a family of indicators inspired by the HGR approach and by the Disparate Impact Discrimination Index \cite{DBLP:conf/aaai/AghaeiAV19}. GeDI{} indicators measure the dependency based on how well a user-specified function of the protected attribute can approximate the target variable. Our indicators support both discrete and continuous protected attributes and 1) complement the HGR semantics, 2) are fully interpretable, 3) are robust for finite samples, and 4) can be extensively configured. Indicators in the family share a core technical structure that allows for a uniform interpretation and unified techniques for stating fairness constraints. 
Moreover, the GeDI{} formulation supports fine-grained constraints in order to permit only certain types of dependency while ruling out others. By introducing GeDI{}, we aim to improve metrics diversity while limiting complexity. \section{Background and Motivation} \label{sec:background} State-of-the-art research on algorithmic fairness focuses on measuring discrimination and enforcing fairness constraints. Some techniques \cite{DBLP:journals/kais/KamiranC11,NIPS2017_9a49a25d} operate by transforming the data before training the model to mitigate discrimination and are referred to as \emph{pre-processing approaches}. Conversely, \emph{post-processing approaches} calibrate the predictive model once trained \cite{Calders2010,NIPS2016_9d268236}. Finally, \emph{in-processing approaches} focus on removing discrimination at learning time. To this end, \cite{pmlr-v81-menon18a} modify the class-probability estimates during training, while \cite{10.1007/978-3-642-33486-3_3,Zafar_2017,10.5555/3327144.3327203,ijcai2020p315,pmlr-v80-komiyama18a} embed fairness in the learning procedure through constraints in the objective. All the approaches mentioned above rely on metrics restricted to discrete protected attributes. Two recent works \cite{pmlr-v97-mary19a,ijcai2020p313} extend the method by \cite{6137441} to continuous variables by minimizing the Hirschfeld-Gebelein-Renyi (HGR) correlation coefficient \cite{Rnyi1959OnMO}. A more extensive review of fairness approaches can be found in \cite{10.1145/3457607}. \paragraph{Disparate Impact} One of the most widely used notions of fairness is based on the legal concept of \textit{disparate impact} \cite{disparate_impact}, which occurs whenever a neutral practice negatively impacts a protected group. The principle of disparate impact can be extended to ML models by considering their output with respect to protected attributes \cite{DBLP:conf/kdd/FeldmanFMSV15}. 
To quantify the disparate impact in regression and classification, \cite{DBLP:conf/aaai/AghaeiAV19} propose a fairness indicator deemed \textit{Disparate Impact Discrimination Index} (DIDI). The higher the DIDI, the more disproportionate the model output is with respect to protected attributes, and the more it suffers from disparate impact. The simplicity of the DIDI formulation makes it highly interpretable. Given a sample $\{x_i, y_i\}_{i=1}^n$ including values for a protected attribute $x$ and a continuous target value $y$, the \textit{Disparate Impact Discrimination Index} is referred to as $\mathrm{DIDI}_{\operatorname{r}}(x, y)$ and defined as: \begin{equation} \label{eqn:didi_r} \sum_{v \in \mathcal{X}}\left|\frac{\sum_{i =1}^n y_i I(x_i=v)}{\sum_{i=1}^n I(x_i=v)}-\frac{1}{n} \sum_{i =1}^n y_i\right| \end{equation} where $\mathcal{X}$ is the domain of $x$ and $I(\phi)$ is the indicator function for the logical formula $\phi$. The DIDI represents the sum of discrepancies between the average target for each protected group and for the whole dataset. The indicator has a specialized formulation for classification tasks, i.e., when $y$ is discrete. In this case, $\mathrm{DIDI}_{\operatorname{c}}(x, y)$ is defined as: \begin{equation*} \label{eqn:didi_c} \sum_{u \in \mathcal{Y}} \sum_{v \in \mathcal{X}} \left|\frac{\sum_{i=1}^n I(y_i=u \land x_i=v)}{\sum_{i=1}^n I(x_i=v)} - \frac{1}{n} \sum_{i=1}^n I(y_i=u)\right| \end{equation*} \paragraph{The HGR Indicator} To the best of our knowledge, the HGR indicator is the only fairness metric that applies to continuous protected attributes. It is based on the \textit{Hirschfeld-Gebelein-Renyi} (HGR) correlation coefficient \cite{Rnyi1959OnMO}, which is a normalized measure of the relationship between two random variables, $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$. When the coefficient is zero, the two variables are independent, while they are strictly dependent when it is equal to $1$. 
Formally, the HGR correlation coefficient is defined as: \begin{equation} \operatorname{HGR}(X, Y)=\sup _{f, g} \rho(f(X), g(Y)) \label{eq:hgr_def} \end{equation} where $\rho$ is Pearson's correlation coefficient and $f, g$ are two measurable functions (referred to as copula transformations) with finite and strictly positive variance. \begin{figure*} \caption{(a) The HGR indicator may overfit data points due to the unrestricted transformations; this leads to overly large correlation estimates in finite datasets, even if $y$ is independent of $x$, as in the example. (b) Using unrestricted copula transformations allows HGR to capture non-functional dependencies, like the increasing divergence in target values. However, in some scenarios, discrimination is linked only to the functional dependency $\mathbb{E}[y \mid x]$. (c) An affine transformation of $y$ may reduce discrimination, but HGR cannot capture this scale effect since Pearson's correlation is scale-invariant.} \label{fig:HGR_finite_sample} \label{fig:HGR_increasing_divergence} \label{fig:HGR_absolute_effectsB} \label{fig:HGR_limits} \end{figure*} \paragraph{Limitations of the HGR Approach} The HGR indicator has several noteworthy properties, including the ability to measure very general forms of dependency. Despite its strengths, however, it is not devoid of limitations. As a first contribution, we identify three of them. First, when applied to finite samples, the theoretical HGR formulation is prone to pathological behavior. In finite datasets, the Pearson's correlation in \Cref{eq:hgr_def} is replaced with its sample version, leading to: \begin{equation}\label{eqn:hgr_sample} \operatorname{hgr}(x, y) = \max_{f, g} r(f_x, g_y) = \max_{f, g} \frac{cov(f_x, g_y)}{\sigma(f_x)\sigma(g_y)} \end{equation} where $f_x, g_y$ are short notations for $f(x)$ and $g(y)$, respectively; $\sigma(\cdot)$ is the sample standard deviation, and $cov(\cdot, \cdot)$ is the sample covariance. Since the copula transformations are unrestricted, \Cref{eqn:hgr_sample} might be ill-behaved when the protected attribute takes many values.
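A minimal numpy sketch (ours, purely illustrative) makes the finite-sample pathology concrete: if every $x_i$ is distinct, an unrestricted $f$ can simply memorize the data, driving the sample correlation to $1$ even though $x$ and $y$ are independent.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.permutation(100).astype(float)  # all-distinct protected attribute
y = rng.normal(size=100)                # target drawn independently of x

# An unrestricted copula transformation can memorize the sample:
# map each distinct x_i to the corresponding y_i (and take g = identity).
f = dict(zip(x, y))
fx = np.array([f[xi] for xi in x])      # f(x) reproduces y exactly

r = np.corrcoef(fx, y)[0, 1]            # sample Pearson correlation = 1
```

Restricting $f$ to a finite-capacity model, as both existing implementations do, is precisely what rules out this degenerate optimum.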
As an extreme case, with all-distinct $x$ values, choosing $f(x_i) = y_i$ ensures maximum Pearson's correlation, leading to overly large HGR values (see \Cref{fig:HGR_finite_sample}). The existing implementations mitigate this issue by using models with finite variance for $f$ and $g$ -- discretized KDE in \cite{pmlr-v97-mary19a}, and Neural Networks in \cite{ijcai2020p313}. However, these solutions link the indicator semantics to low-level, sub-symbolic details that are difficult to interpret and to control. Second, using unrestricted copula transformations allows HGR to measure very general forms of dependency between $x$ and $y$, including scenarios such as the diverging target values in \Cref{fig:HGR_increasing_divergence}. However, there are situations where discrimination arises only when the \emph{expected} value of the target is affected, i.e., it is linked to the strength of the functional dependency $\mathbb{E}[y \mid x]$. For example, in \Cref{fig:HGR_increasing_divergence}, a third confounder attribute, correlated with $x$, might motivate the divergence. In this case, the confounder might not raise any ethical concern, but the HGR indicator would not be able to exclusively measure the functional dependency. Third, the HGR indicator satisfies all the Renyi properties \cite{Rnyi1959OnMO} by relying on Pearson's correlation coefficient, but this makes the approach unable to account for scale effects on fairness. For example, let us assume the target values $y$ are linearly correlated with the continuous protected attribute. In some practical cases, applying an affine transformation on $y$ may reduce the discrimination (see \Cref{fig:HGR_absolute_effectsB}), but the HGR indicator cannot capture this effect since Pearson's correlation is scale-invariant. \section{Generalized Disparate Impact} \label{sec:approach} In this section, we derive the GeDI{} family of indicators and its semantics.
The process is guided by two design goals: 1) complementing the HGR approach to provide more options for continuous protected attributes; 2) improving over the technical limitations we identified in \Cref{sec:background}. \paragraph{HGR Computation as Optimization} We start by observing that the sample Pearson correlation can be restated as the optimal solution of a Least Squares problem. Formally, $r(f_x, g_y)$ from \Cref{eqn:hgr_sample} is given by: \begin{equation}\label{eqn:pearson_as_optmization} \argmin_{r} \frac{1}{n} \left\| r \frac{f_x - \mu(f_x)}{\sigma(f_x)} - \frac{g_y - \mu(g_y)}{\sigma(g_y)} \right\|_2^2 \end{equation} where $\mu(\cdot)$ is the sample average operator; this is a well-known statistical result, whose proof we report in Appendix~\ref{app:least_square_to_pearson}. Using \Cref{eqn:pearson_as_optmization} may seem counter-intuitive since it casts the whole HGR computation process as a bilevel optimization problem. In particular, we have: \begin{equation} \max_{f, g} \argmin_{r} \frac{1}{n} \left\| r \frac{f_x - \mu(f_x)}{\sigma(f_x)} - \frac{g_y - \mu(g_y)}{\sigma(g_y)} \right\|_2^2 \end{equation} However, the two optimization objectives are aligned since larger $r$ values correspond to lower squared residuals; this alignment can be exploited to obtain an alternative single-level formulation for the HGR indicator. The process is covered in detail in Appendix~\ref{app:bilevel_opt} and leads to: \begin{equation}\label{eqn:hgr_alternative} \operatorname{hgr}(x, y) = |r^*| \end{equation} where $r^*$ is obtained by solving: \begin{equation}\label{eqn:hgr_as_optimization} \argmin_{f, g, r} \frac{1}{n} \left\| r \frac{f_x - \mu(f_x)}{\sigma(f_x)} - \frac{g_y - \mu(g_y)}{\sigma(g_y)} \right\| \end{equation} The equivalence holds if $\sigma(g_y),\sigma(f_x) > 0$, which is also needed for the Pearson correlation to be well-defined. 
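The restatement of Pearson's correlation as a Least Squares solution is easy to verify numerically; the sketch below (our code, standardizing with the population standard deviation) recovers the sample correlation as the optimal scale factor $r^*$.

```python
import numpy as np

def pearson_via_lstsq(f, g):
    """Sample Pearson correlation as the least-squares scale factor r*
    between standardized variables (population std, i.e. ddof=0)."""
    zf = (f - f.mean()) / f.std()
    zg = (g - g.mean()) / g.std()
    # argmin_r ||r * zf - zg||^2 has the closed form r* = <zf, zg> / <zf, zf>
    return float(zf @ zg / (zf @ zf))

rng = np.random.default_rng(1)
f_x = rng.normal(size=200)              # plays the role of f(x)
g_y = 0.5 * f_x + rng.normal(size=200)  # plays the role of g(y)
```

The closed-form slope coincides with `np.corrcoef(f_x, g_y)[0, 1]` up to floating-point error.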
\paragraph{The GeDI{} Indicator Family} The main insight from \Cref{eqn:hgr_alternative} and~\eqref{eqn:hgr_as_optimization} is that the HGR computation can be understood as a Least Squares fitting. We derive the GeDI{} family by building on the same observation, with a few key differences. In particular, we define a GeDI{} indicator as a measure of how well a user-selected, interpretable function of the protected attribute $x$ can approximate the target variable $y$. Formally, we have that: \begin{equation} \label{eqn:indicator_abs_d} \operatorname{GeDI}{}(x, y; F) = |d^*| \end{equation} where $d^*$ is defined via the optimization problem: \begin{equation}\label{eqn:indicator_definition} \begin{split} \argmin_{d, \alpha}\ & \frac{1}{n} \left\| d (F \alpha - \mu(F \alpha)) - (y - \mu(y)) \right\|_2^2 \\ \text{s.t. } & \|\alpha\|_1 = 1 \end{split} \end{equation} where $\alpha \in \mathbb{R}^k$ is a vector of coefficients, $d \in \mathbb{R}$ is a scale factor whose absolute value corresponds to the indicator value, and $\|\cdot\|_1$ is the L1 norm that we introduce to obtain one of the equivalent optimal solutions of the optimization problem. Finally, $F$ is an $n \times k$ \emph{kernel matrix} whose columns $F_j$ represent the evaluations of the basis functions $f_j(x)$ on the protected attribute $x$, namely: \begin{equation} F \alpha = \sum_{j=1}^k F_j \alpha_j = \sum_{j=1}^k f_j(x) \alpha_j \end{equation} Any kernel can be chosen, provided that the resulting matrix is full-rank. The kernel and its order $k$ are part of the specification of a GeDI{} indicator and appear in its notation. \begin{table}[bt] \center \caption{Types of dependency measured by indicators for continuous protected attributes. Double check-mark ($\checkmark \checkmark$) specifies that the indicator \textit{exclusively} measures the corresponding dependency.
} \begin{tabular}{lccc} \toprule \textbf{\small Measurable Dep.} & \textbf{\small GeDI{}} & \textbf{\small HGR-kde} & \textbf{\small HGR-nn} \\\midrule \small Functional & \small $\checkmark\checkmark$ & $\checkmark$ & $\checkmark$ \\ \small Non-functional & $\times$ & $\checkmark$ & $\checkmark$ \\ \small Scale-independent & $\checkmark$ & \small $\checkmark\checkmark$ & \small $\checkmark\checkmark$ \\ \small Scale-dependent & $\checkmark$ & $\times$ & $\times$ \\ \small User-configurable & $\checkmark$ & $\times$ & $\times$ \\\bottomrule \end{tabular} \label{tab:dep_types} \end{table} \paragraph{Rationale and Interpretation} Our formulation differs from \Cref{eqn:hgr_as_optimization} in three regards. First, it lacks the copula transformation on the $y$ variable (the $g$ function). The DIDI inspires this property: it allows our indicators to exclusively measure the strength of the functional dependency $\mathbb{E}[y \mid x]$ but also makes them incapable of measuring other forms of dependency. This semantics complements the HGR one, thus increasing the available options. Second, the standardization terms have been replaced with the normalization constraint $\|\alpha\|_1 = 1$. This constraint allows us to obtain one of the equivalent optimal solutions, while keeping it viable for linear optimization approaches. The DIDI also inspires this choice as it makes our indicators sensitive to scale changes, complementing the HGR semantics. As expected, this prevents the satisfaction of some Renyi properties that exclusively apply to scale-invariant metrics. Third, we restrict the copula transformation on $x$ (the $f$ function) to be linear over a possibly non-linear kernel. As a result, our indicator is fully interpretable. In particular, the indicator measures the overall functional dependency, while the coefficients identify which functional dependencies have the most significant effect, as determined by the kernel components. 
\begin{figure*} \caption{Example of how GeDI{} indicators obtained from polynomial kernels of increasing order are sensitive to different types of dependence: target values are adjusted with respect to a reference dataset so that each indicator evaluates to $0$.} \label{fig:running_no} \label{fig:running_1} \label{fig:running_2} \label{fig:running_3} \label{fig:running_example} \end{figure*} \paragraph{Choice of Kernel} Individual indicators from the GeDI{} family are obtained via the specification of a kernel, which allows users to define which types of functional dependency are relevant for the considered use case. This is the main criterion for the kernel choice and provides a level of configurability that is absent in existing indicators. \Cref{tab:dep_types} summarizes the types of dependency that can be measured by indicators with support for continuous protected attributes. A double check-mark specifies that the indicator \emph{exclusively} measures the corresponding dependency. Specifying a kernel might seem cumbersome, but it should be kept in mind that any HGR implementation also needs a mechanism to avoid ill behavior on finite samples. In \cite{pmlr-v97-mary19a} this is done via a finite discretization and a KDE bandwidth, while in \cite{ijcai2020p313} via a neural network architecture and stochastic gradient descent. In both cases, the mechanism is strongly linked to the implementation details, thus reducing transparency and control. Integrating the kernel choice into the indicator definition makes the bias-variance trade-off explicit and controllable. The kernel choice can be simplified by selecting a parametric function family and then adjusting the order $k$ to prevent overfitting and numerical issues. If the chosen function family ensures the $F$ matrix is full-rank, we can asymptotically recover the unrestricted copula transformation $f$. For example, by choosing a polynomial kernel, we have: \begin{align} f_j(x) &= x^j \end{align} In our notation, $\operatorname{GeDI}{}(\cdot, \cdot, V^k)$ denotes the use of a polynomial kernel of order $k$ ($V$ refers to the Vandermonde matrix).
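Concretely, the polynomial kernel matrix can be assembled with a Vandermonde construction; the snippet below (our sketch) drops the constant column, since only $f_j(x) = x^j$ for $j = 1, \dots, k$ enters the definition.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])  # toy protected attribute
k = 3                          # kernel order

# Columns [x^1, x^2, ..., x^k]; the constant column x^0 is omitted.
F = np.vander(x, N=k + 1, increasing=True)[:, 1:]
```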
Another suitable option is the Fourier kernel, which satisfies the full-rank property. Both choices have a clear interpretation regarding either shape (for polynomials) or spectra (for Fourier kernels). \Cref{fig:running_example} shows how increasing the order of a polynomial kernel yields indicators sensitive to different types of dependence, thus requiring various adjustments with respect to a reference dataset to achieve an indicator value of 0. For example, $\operatorname{GeDI}{}(x, y; V^2)$ is not sensitive to cubic dependence, which is measured by $\operatorname{GeDI}{}(x, y; V^3)$. Overall, we recommend the following process for selecting a kernel: 1) choose a family of functions based on relevance to the considered application and ease of interpretation; 2) tune the order to prevent overfitting and numerical issues. \paragraph{GeDI{} and DIDI} Our approach is designed so that, under specific circumstances, its behavior is analogous to that of the DIDI \cite{DBLP:conf/aaai/AghaeiAV19}. Let us start by differentiating the objective of \Cref{eqn:indicator_definition} with respect to $d$, and stating the optimality condition (i.e., null derivative). This yields: \begin{equation}\label{eqn:indicator_covariance} \operatorname{GeDI}{}(x, y; F) = \left|\frac{cov(F \alpha^*, y)}{var(F \alpha^*)}\right| \end{equation} whose full proof is in Appendix~\ref{app:gedi_computation}. By assuming a polynomial kernel with $k = 1$, we have: \begin{equation}\label{eqn:indicator_didi} \operatorname{GeDI}{}(x, y; V^1) = \left|\frac{cov(x, y)}{var(x)}\right| \end{equation} where $\alpha^* = 1$, since there is a single coefficient and $\|\alpha^*\|_1 = 1$. In Appendix~\ref{app:gedi_vs_didi}, we prove that \Cref{eqn:indicator_didi} is equivalent to the DIDI if the protected attribute is binary.
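The equivalence is also easy to check numerically; in this sketch (our code), the closed form $|cov(x, y)/var(x)|$ matches the regression DIDI on a synthetic dataset with a binary $0/1$ protected attribute.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=500).astype(float)  # binary protected attribute
y = 2.0 * x + rng.normal(size=500)              # target that depends on x

# GeDI(x, y; V^1) = |cov(x, y) / var(x)|, using population statistics (ddof=0)
gedi_v1 = abs(np.cov(x, y, ddof=0)[0, 1] / np.var(x))

# Regression DIDI: sum over groups of |group mean of y - overall mean of y|
didi = sum(abs(y[x == v].mean() - y.mean()) for v in (0.0, 1.0))
```

For a binary $0/1$ attribute both quantities reduce to the absolute difference between the two group means, so they agree up to floating-point error.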
As a result, it is possible to consider the $\operatorname{GeDI}{}(\cdot, \cdot; V^1)$ indicator as a generalization of the (binary) DIDI to continuous protected attributes, thus strengthening the link between our indicators and the already established metrics. \section{GeDI{} Computation and Constraints} \label{sec:computation} Next, we discuss how GeDI{} indicators can be computed and used to enforce fairness constraints. \paragraph{Computation} Indicators in the GeDI{} family are defined via \Cref{eqn:indicator_definition}, which is a constrained optimization problem. However, it is possible to obtain an unconstrained formulation by applying the following substitutions: \begin{align} \tilde{F}_j &= F_j - \mu(F_j) & \tilde{y} &= y - \mu(y) & \tilde{\alpha} &= |d| \alpha \label{eqn:tricky_substitution} \end{align} i.e., by centering $y$ and the columns $F_j$, and by combining $d$ and $\alpha$ in a single vector of variables. The substitutions involve no loss of generality since \Cref{eqn:tricky_substitution} admits a solution for every $\tilde{\alpha} \in \mathbb{R}^k$. As a result, the problem is reduced to a classical minimal Least Squares formulation: \begin{equation}\label{eqn:indicator_unconstrained} \argmin_{\tilde{\alpha}} \frac{1}{n} \| \tilde{F} \tilde{\alpha} - \tilde{y} \|_2^2 \end{equation} The solution can be computed via any Linear Regression approach, which typically involves solving the linear system: \begin{equation}\label{eqn:unconstrained_solution_system} \tilde{F}^T \tilde{F} \tilde{\alpha}^* = \tilde{F}^T \tilde{y} \end{equation} From here, the absolute value $|d^*|$ can be recovered as: \begin{equation}\label{eqn:convenient_definition} |d^*| = \|\tilde{\alpha}^*\|_1 \end{equation} which is implied by \Cref{eqn:tricky_substitution} and the constraint $\|\alpha\|_1 = 1$. Overall, the process is simple and well-understood in terms of numerical stability. 
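The computation route just described condenses into a few lines of numpy (our sketch; `gedi` is a hypothetical helper name): center the kernel columns and the target, solve the least-squares system, and take the L1 norm of the solution.

```python
import numpy as np

def gedi(x, y, k):
    """GeDI with a polynomial (Vandermonde) kernel of order k, computed via
    the unconstrained least-squares reformulation: |d*| = ||alpha~*||_1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    F = np.column_stack([x ** j for j in range(1, k + 1)])
    F_c = F - F.mean(axis=0)   # centered kernel columns
    y_c = y - y.mean()         # centered target
    alpha, *_ = np.linalg.lstsq(F_c, y_c, rcond=None)
    return float(np.abs(alpha).sum())
```

For $k = 1$ this reduces to $|cov(x, y)/var(x)|$, matching the closed form given in the previous section.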
Additionally, since \Cref{eqn:indicator_unconstrained} can be solved to optimality in polynomial time, GeDI{} indicators are entirely determined by the kernel choice, with benefits for reproducibility. This property is less clearly satisfied by the existing HGR-based indicators: the approach from \cite{pmlr-v97-mary19a} is deterministic but ambiguous unless several implementation details are provided; the method from \cite{ijcai2020p313} is inherently not deterministic, since it relies on stochastic gradient descent. In \Cref{tab:properties}, we summarize the characteristics of the GeDI{} indicators compared to the HGR approaches. \begin{table}[t] \center \caption{Characteristics of the GeDI{} indicators when compared to HGR-kde \cite{pmlr-v97-mary19a} and HGR-nn \cite{ijcai2020p313} approach.} \begin{tabular}{lccc} \toprule \textbf{\small Characteristic} & \textbf{\small GeDI{}} & \textbf{\small HGR-kde} & \textbf{\small HGR-nn} \\\midrule \small Interpretability & \small full & \small partial & \small partial\\[6pt] \makecell[l]{ \small Bias/variance trade-off} & \small transparent & \small opaque & \small opaque \\[6pt] \makecell[l]{\small Theoretically unrestricted \\ transformations} & $f$ & $f, g$ & $f, g$ \\[10pt] \small Renyi properties & \small partial & $\checkmark$ & $\checkmark$ \\[6pt] \small Deterministic & $\checkmark$ & $\checkmark$ & $\times$ \\[6pt] \makecell[l]{\small Information required \\ for full specification} & \small kernel & \makecell{\small discretization, \\ KDE bandwidth} & \small full NN + SGD params \\[10pt] \small Constraint enforcing & \small declarative \textbf{or} penalizer & \small penalizer & \small penalizer \\[6pt] \small Fine-grain constraints & $\checkmark$ & $\times$ & $\times$ \\\bottomrule \end{tabular} \label{tab:properties} \end{table} \paragraph{Fairness Constraints} GeDI{} indicators can be used to formulate fairness constraints. 
In this setting, the target $y$ can be changed, either because it represents the output of a ML model or because preprocessing/postprocessing techniques are employed. A constraint in the form: \begin{equation} \operatorname{GeDI}{}(x, y; F) \leq q, \quad q \in \mathbb{R}^{+} \end{equation} can be translated into a set of piecewise linear relations: \begin{align} & \tilde{F}^T \tilde{F} \tilde{\alpha}^* = \tilde{F}^T \tilde{y} \label{eqn:cst_set_1}\\ & \|\tilde{\alpha}^*\|_1 \leq q \label{eqn:cst_set_2} \end{align} where $\tilde{\alpha}^*$ is not directly computed to avoid matrix inversion. Specific projection-based approaches for constrained ML, such as the one from \cite{Detassis_2021}, can rely on mathematical programming to deal with constraints in this form. Indeed, \Cref{fig:running_example} is obtained by adjusting target values to satisfy Equations \eqref{eqn:cst_set_1}-\eqref{eqn:cst_set_2} and minimizing the Mean Squared Error. As an alternative, it is also possible to obtain a Lagrangian term (i.e., a penalizer) in the form: \begin{equation} \label{eqn:gedi_vector_penalizer} \lambda \max(0, \|\tilde{\alpha}^*\|_1 - q), \quad \lambda \in \mathbb{R}^+ \end{equation} where $\lambda$ is the penalizer weight. In order to avoid matrix inversion in automatic differentiation engines, we can rely on differentiable least-squares operators, such as \texttt{\small tf.linalg.lstsq} or \texttt{\small torch.linalg.lstsq}. \paragraph{Fine-grained Constraints} The choice of the kernel allows a user to specify which types of dependency should be measured. In addition, our formulation allows us to restrict specific terms of the copula transformation by enforcing constraints on individual coefficients. 
Formally, it is possible to replace the constraint from \eqref{eqn:cst_set_2} with: \begin{align} & |\tilde{\alpha}^*| \leq q, \quad q \in \mathbb{R}^{+k} \label{eqn:cst_set_2_fg} \end{align} where the single constraint on $\|\alpha^*\|_1$ is replaced by individual constraints on the absolute value of the coefficients. By algebraic manipulation and taking advantage of the fact that $\tilde{F}^T \tilde{F}$ is positive definite, we obtain a single set of inequalities that avoids any matrix inversion: \begin{align}\label{eqn:cst_set_3_fg} |\tilde{F}^T \tilde{y}| \leq q \odot \tilde{F}^T \tilde{F} \end{align} where $\odot$ is the Hadamard (i.e., element-wise) product. \Cref{eqn:cst_set_3_fg} can be processed directly by projection approaches or turned into a (vector) penalizer similar to \Cref{eqn:gedi_vector_penalizer}. An interesting setup is to allow a single, easily explainable form of dependency (e.g., linear) while explicitly forbidding others; when applied to a ML model, this setup allows us to obtain a modified sample $\{x_i, y_i^\prime\}$, where $y_i^\prime$ represents the model output after the constraint has been enforced. In this situation, we have: \begin{equation} \operatorname{GeDI}{}(x, y^\prime; V^k) = \operatorname{GeDI}{}(x, y^\prime; V^1) \end{equation} The equality holds since all polynomial degree dependencies from 2 to $k$ are explicitly removed, implying that we can measure the discrimination for the constrained model in terms of the DIDI analogy presented in \Cref{sec:approach}. \section{Related Work} Ensuring fairness in machine learning is crucial in domains where automated decisions have an impact on people's lives, such as health, credit, and justice systems. Machine learning models may learn biases from historical data and encode discriminatory behaviors that are endemic in society (e.g., biased recidivism risk \cite{propublica2016}).
In many scenarios, designing the model to ignore all the protected attributes is not sufficient to achieve fairness, since other features might act as proxies for the protected attributes \cite{Pedreschi2008}. Thus, as the use of machine learning increases in sensitive domains, there is now a growing interest in defining models that satisfy fairness requirements while preserving high performance. In the following, we describe several approaches with no intention of offering a comprehensive list. A more extensive review can be found in \cite{10.1145/3457607}.\\ \textbf{Pre-processing and post-processing techniques.} Pre-processing techniques focus on transforming the data in order to mitigate discrimination before training the model. In particular, \cite{DBLP:journals/kais/KamiranC11} investigates different techniques to pre-process the data, such as suppressing the sensitive attribute, changing class labels, and re-sampling the data to remove discrimination. Calmon et al. \cite{NIPS2017_9a49a25d} propose to reduce discrimination while preserving utility by performing a data transformation learned through convex optimization. A different class of techniques removes discrimination by calibrating the predictive model once trained. For example, \cite{Calders2010} proposes to adjust the probability learned by the classifier, while \cite{NIPS2016_9d268236} focuses on post-calibrating the classification threshold through a loss minimization. However, all these approaches are deeply linked to the discrete nature of the protected attributes, while GeDI{} also applies to continuous variables.\\ \textbf{In-processing techniques.} The third type of approaches, which most closely relates to our work, focuses on removing discrimination at learning time.
To this end, \cite{pmlr-v81-menon18a} modify the class-probability estimates during training, while \cite{10.1007/978-3-642-33486-3_3,Zafar_2017,10.5555/3327144.3327203,ijcai2020p315,pmlr-v80-komiyama18a} embed fairness in the learning procedure through a constraint in the objective. Two recent works \cite{pmlr-v97-mary19a,ijcai2020p313} extend the method proposed by \cite{6137441} to continuous variables by minimizing the Hirschfeld-Gebelein-Renyi (HGR) correlation coefficient \cite{Rnyi1959OnMO}. \section{Empirical Evaluation} \label{sec:experiments} In this section, we discuss an empirical evaluation performed with three objectives: 1) testing how the kernel choice and fine-grained constraints affect the GeDI{} semantics; 2) studying the relation with other metrics, in particular the DIDI and the HGR indicators; 3) investigating how the use of different ML models and constraint enforcement algorithms impacts effectiveness. Here we report the main details and experiments; additional information is provided in \Cref{app:section_5}. The source code and the datasets are available at \url{https://github.com/giuluck/GeneralizedDisparateImpact} under MIT license. \begin{table}[bt] \center \caption{The four discrimination-aware learning tasks used in our experiments.
Type B stands for binary, C for continuous.} \small \begin{tabular}{m{5em} m{4.75em} >{\centering\arraybackslash}m{1.75em} m{4.75em} >{\centering\arraybackslash}m{1.75em}} \toprule & \multicolumn{2}{c}{\textbf{Target}} & \multicolumn{2}{c}{\textbf{Protected Att.}} \\ \textbf{Dataset} & \textbf{Name} & \textbf{Type} & \textbf{Name} & \textbf{Type} \\ \midrule \multirow{2}{*}{Adult} & \multirow{2}{*}{income} & \multirow{2}{*}{B} & sex & B \\ & & & age & C \\ \noalign{\vskip 5pt} \multirow{2}{5em}{Communities\\\& Crimes} & \multirow{2}{*}{violentPerPop} & \multirow{2}{*}{C} & race & B \\ & & & pctBlack & C \\ \bottomrule \end{tabular} \label{tab:experiments} \end{table} \paragraph{Experimental Setup} \begin{figure} \caption{Results from preprocessing tests presented from the perspective of the binned DIDI metrics with different bin numbers both for the coarse-grain (a) and the fine-grain (b) constraint formulation.} \label{fig:coarse_didi} \label{fig:fine_didi} \label{fig:gedi_didi} \end{figure} We rely on two common benchmark datasets in the field of fair AI: \textit{Communities \& Crimes}\footnote{ \scriptsize{\url{https://archive.ics.uci.edu/ml/datasets/Communities+and+Crime}}} with a continuous target, and \textit{Adult}\footnote{\scriptsize{\url{https://archive.ics.uci.edu/ml/datasets/Adult}}} with a binary target. For both, we define two discrimination-aware learning tasks by considering either a binary or a continuous protected attribute. The resulting tasks are summarized in \Cref{tab:experiments}. We restrict our analysis to polynomial kernels due to their ease of interpretation; moreover, their known issues with numerical stability at higher orders make them a computationally interesting benchmark. We consider two setups for our method. In the first, referred to as GeDI{}-Coarse, we derive an indicator via a $V^k$ kernel and constrain its value as in \Cref{eqn:cst_set_2}.
In the second, referred to as GeDI{}-Fine, we use the same type of indicator, but the constraint is enforced on individual coefficients; specifically, we allow some degree of linear dependency while completely forbidding higher orders (up to $k$). For binary protected attributes, we only use $k = 1$, as a higher order would not guarantee the full-rank requirement; hence the two setups coincide. As a representative of the HGR approach, we use the indicator from \cite{pmlr-v97-mary19a} with default parameters. For the DIDI, we use the original formulation from \cite{DBLP:conf/aaai/AghaeiAV19}. When adopting the DIDI for continuous protected attributes, we apply a quantile-based discretization in $n$ bins before computing the indicator; we refer to the resulting metrics as $\operatorname{DIDI}$-n. This technique does not require specialized indicators but is strongly sensitive to the chosen discretization and prone to overfitting with higher $n$ values. When presenting results for experiments with fairness constraints, we report \textit{percentage values} for all metrics, i.e., values normalized over those of the original datasets. \paragraph{Experiments Description} We consider two types of tests. The first consists of \emph{preprocessing experiments}, where we directly change the target values to: 1) satisfy coarse- or fine-grained constraints on $\operatorname{GeDI}{}(x, y; V^k)$ and 2) minimize a loss function appropriate to the task -- Mean Squared Error for regression and Hamming Distance for classification. No machine learning model is trained in this process. We rely on an exact mathematical programming solver that allows for an efficient and stable formulation for fine-grained constraints (more details in \Cref{app:fine_grain_formulation}). We use different kernel orders $k$, but a fixed threshold for each dataset, corresponding to 20\% of the value of $\operatorname{GeDI}{}(x, y; V^1)$ on the unaltered data.
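To make the threshold computation concrete, here is a minimal sketch (helper names are our own; the closed form $|\operatorname{cov}(x, y) / \operatorname{var}(x)|$ of $\operatorname{GeDI}{}(x, y; V^1)$ is derived in \Cref{app:gedi_vs_didi}):

```python
import numpy as np

def gedi_v1(x, y):
    """Closed form of GeDI(x, y; V^1) = |cov(x, y) / var(x)|."""
    return abs(np.cov(x, y, bias=True)[0, 1] / np.var(x))

def gedi_threshold(x, y, fraction=0.2):
    """Constraint threshold: a fraction (20% in our experiments) of the
    GeDI value measured on the unaltered data."""
    return fraction * gedi_v1(x, y)
```

Reported percentage values are then ratios between an indicator measured after constraint enforcement and its value on the original data.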
With this design, any difference in behavior is entirely due to changes in the semantics determined by the chosen kernel. Second, we report results for several \emph{performance tests}, where training problems with fairness constraints are solved using multiple machine learning models and optimization algorithms. These experiments aim to test how these factors affect constraint satisfaction, model accuracy, and generalization. In particular, we train an unconstrained version of a Random Forest (RF), a Gradient Boosting (GB), and a Neural Network (NN) model. Then, the same models are trained using the Moving Targets (MT) algorithm from \cite{Detassis_2021}, which allows us to deal with constraints in a declarative fashion. Finally, to investigate the impact of different constraint enforcement methods, we train a version of the neural network model where a penalizer (or Semantics-Based Regularizer, SBR) is used during gradient descent. The penalizer weight is dynamically controlled at training time using the approach from \cite{Fioretto2021}. All these experiments are performed using a 5-fold cross-validation procedure on the entire dataset. See \Cref{app:experimental_setup} for a complete list of the technical settings, and \Cref{app:constrained_models} for a broader description of the constrained models. As in the preprocessing tests, we use a fixed threshold for all constraints, corresponding to 20\% of the value of $\operatorname{GeDI}{}(x, y; V^1)$ on the original datasets. \input{tables/categorical.tex} \input{tables/continuous.tex} \paragraph{Preprocessing experiments} \Cref{fig:gedi_didi} shows the outcome of our preprocessing tests for the $\operatorname{GeDI}{}(x, y; V^k)$, both for coarse-grained (a) and fine-grained (b) approaches. In particular, we report the results for multiple kernel sizes (colors) and different bin numbers ($x$ axis) from the perspective of the binned DIDI metrics ($y$ axis).
All GeDI{} constraints are satisfied by construction due to the use of an exact solver. Since increasing the bin number in DIDI-n enables measuring more complex dependencies, and both indicators focus on functional dependencies, the figures allow us to observe how the GeDI{} semantics depends on the kernel order. We notice that increasing the kernel order with the GeDI{}-Fine approach tends to improve the discretized DIDI across all bin numbers (\Cref{fig:fine_didi}); in the few cases where the DIDI-3 and DIDI-5 worsen when GeDI{} constraints are added, we expect that the discrepancy is due to how the bin boundaries ``cut'' the shapes induced by the polynomial kernels, i.e., it is due to discretization noise. The same behavior is observed for GeDI{}-Coarse (\Cref{fig:coarse_didi}), although with a lower degree of consistency. Such a trend confirms that increasing the kernel order makes our indicators capable of measuring more complex dependencies. In the GeDI{}-Fine case, all higher-order dependencies are canceled, while the GeDI{}-Coarse approach is more flexible regarding which types of dependency are allowed; this explains why enforcing a 20\% threshold with the GeDI{}-Coarse approach leads to DIDI-n values similar to those obtained with GeDI{}-Fine, since the semantics of the two approaches are similar. Despite this, we underline that the equivalence from \Cref{sec:computation} does not strictly hold in the analyzed datasets, since the protected attribute is not natively categorical. \paragraph{Performance experiments} Table \ref{table:categorical} shows the results for the performance experiments in tasks with binary protected attributes. We report the average values on the \emph{train} and \emph{validation} splits for: 1) an accuracy metric, i.e., \textit{R2} for regression tasks and \textit{Accuracy} for classification ones, and 2) our fairness indicator $\operatorname{GeDI}(\cdot, \cdot; V^1)$, which in this case is equivalent to the DIDI.
Additionally, we report a single column with training times. Rows related to unconstrained models are italicized, and the best results for the constrained models are highlighted in bold font. Table \ref{table:continuous} reports similar data for tasks with continuous protected attributes. We rely on $\operatorname{GeDI}{}(x, y; V^5)$ as a fairness indicator, and we enforce both GeDI{}-Fine and GeDI{}-Coarse since they differ in formulation for higher-order kernels. The results obtained using different models and algorithms in the four tasks analyzed are rather diverse regarding accuracy, degree of constraint satisfaction, and training time. This stresses the advantage of having access to multiple learning models and constraint enforcing methods, which our approach provides. The behavior is reasonably consistent in terms of both accuracy and constraint satisfaction, with low standard deviations. The only exception is the classification task (\emph{Adult}) with continuous protected attribute: in this scenario and, in general, when dealing with continuous protected attributes, the unconstrained models tend to introduce additional discrimination on top of the existing one. As a consequence, our thresholds become particularly demanding, since they are based on GeDI{} values for the original dataset. Stringent thresholds require significant alterations to the input/output relation that the models would naturally learn, thus exacerbating generalization issues. \begin{figure} \caption{Percentage values of GeDI{} and HGR for all the performance tests.} \label{fig:gedi_hgr} \end{figure} Finally, in \Cref{fig:gedi_hgr}, we analyze the link between GeDI{} and the HGR indicator by reporting their values for all of our performance tests. The displayed results come from unconstrained and constrained models and different tasks.
According to this figure, we highlight that: (1) GeDI is a valid option among fairness metrics, since it correlates with an already established metric (namely HGR), but (2) it is not redundant, as its semantics differs from that of HGR. Indeed, as expected, we can see that the two metrics are correlated (suggesting they capture related concepts of fairness), but with a high variance (highlighting the difference in their semantics). For this reason, GeDI offers a valid alternative to the set of existing metrics, enriching the range of options from which practitioners can select the indicator that best fits their application. \section{Conclusions} In this work, we introduce the GeDI{} family of indicators to extend the available options for measuring and enforcing fairness with continuous protected attributes. Our indicators complement the existing HGR-based solutions regarding semantics and provide a more interpretable, transparent, and controllable fairness metric. GeDI{} allows the user to specify which types of dependency are relevant and how they should be restricted. While some of the design choices in our approach are incompatible with the HGR formulation, others could be applied in principle. The resulting configurable HGR-based solution would have technical properties similar to the GeDI{} indicators. Preliminary work in this direction corroborates this conjecture, making us regard this topic as a promising area for future research. \FloatBarrier \section{Proofs of Section 3} \subsection{Least Squares Problem and Pearson's Correlation} \label{app:least_square_to_pearson} Consider two random samples $x \in \mathbb{R}^{n}$ and $y \in \mathbb{R}^{n}$ and the following least squares problem: \begin{equation} \argmin_{r} \frac{1}{n} \left\| r \frac{x - \mu_x}{\sigma_x} - \frac{y - \mu_y}{\sigma_y} \right\|_2^2 \end{equation} whose optimal solution $r$ is the sample Pearson correlation coefficient. This is a basic Linear Regression problem over standardized variables.
Since the problem is convex, an optimal solution can be found by requiring the loss function gradient to be null. By differentiating over $r$ we get: \begin{equation} \frac{1}{n} \left(r \frac{x - \mu_x}{\sigma_x} - \frac{y - \mu_y}{\sigma_y}\right)^T \frac{x - \mu_x}{\sigma_x} = 0 \end{equation} By algebraic manipulations we get: \begin{equation} r \frac{1}{n} \frac{(x - \mu_x)^T (x - \mu_x)}{\sigma_x^2} = \frac{1}{n} \frac{(x - \mu_x)^T (y - \mu_y)}{\sigma_x \sigma_y} \end{equation} By applying the definition of variance and standard deviation, we have that $\frac{1}{n}(x - \mu_x)^T (x - \mu_x) = \sigma_x^2$, thus leading us to: \begin{equation}\label{eqn:sample_rho} r = \frac{1}{n} \frac{(x - \mu_x)^T (y - \mu_y)}{\sigma_x \sigma_y} \end{equation} This is the value of the sample Pearson correlation coefficient, which is therefore equivalent to the optimal parameter for a properly defined Linear Regression problem. \subsection{Bilevel Optimization Problem Simplification} \label{app:bilevel_opt} We prove that the objectives of the outer and inner optimization problems are in this case aligned, thus making it possible to simplify the formulation. Specifically, let us consider two pairs of copula transformations $f^{\prime}_x, g^{\prime}_y$ and $f^{\prime\prime}_x, g^{\prime\prime}_y$, and let $r^{\prime}$ and $r^{\prime\prime}$ be the corresponding values for the sample Pearson correlation coefficient.
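As an aside, the result of \Cref{app:least_square_to_pearson} is easy to verify numerically; a minimal sketch with synthetic data (the helper name is our own):

```python
import numpy as np

def pearson_via_lstsq(x, y):
    """Optimal r of argmin_r (1/n)||r*xs - ys||^2 over standardized samples."""
    xs = (x - x.mean()) / x.std()      # standardized x (1/n convention)
    ys = (y - y.mean()) / y.std()      # standardized y
    return (xs @ ys) / (xs @ xs)       # normal-equation solution
```

The returned coefficient agrees with `np.corrcoef` up to floating-point error.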
Let us assume that one pair of transformations leads to a smaller MSE, i.e.: \begin{equation}\label{eqn:hgr_equiv_1} \frac{1}{n} \left\| r^{\prime} f^{\prime}_x - g^{\prime}_y \right\|_2^2 < \frac{1}{n} \left\| r^{\prime\prime} f^{\prime\prime}_x - g^{\prime\prime}_y \right\|_2^2 \end{equation} The two terms can be expanded as: \begin{equation} r^{\prime 2} \frac{f^{\prime T}_xf^{\prime}_x}{n} - 2 r^{\prime} \frac{f^{\prime T}_x g^{\prime}_y}{n} + \frac{g^{\prime T}_y g^{\prime}_y}{n} < r^{\prime \prime 2} \frac{f^{\prime\prime T}_x f^{\prime\prime}_x}{n} - 2 r^{\prime\prime} \frac{f^{\prime\prime T}_x g^{\prime\prime}_y}{n} + \frac{g^{\prime\prime T}_y g^{\prime\prime}_y}{n} \end{equation} Due to our zero-mean assumption, all quadratic terms in the form $f^{\prime T}_xf^{\prime}_x / n$, etc., correspond to sample variances. Again due to our assumptions, variances have unit value. Therefore, we have: \begin{equation} r^{\prime 2} - 2 r^{\prime} \frac{f^{\prime T}_x g^{\prime}_y}{n} + 1 < r^{\prime\prime 2} - 2 r^{\prime\prime} \frac{f^{\prime\prime T}_x g^{\prime\prime}_y}{n} + 1 \end{equation} Now, as established in Equation~\eqref{eqn:sample_rho}, we have that $r^{\prime} = f^{\prime T}_x g^{\prime}_y / n$ and $r^{\prime\prime} = f^{\prime\prime T}_x g^{\prime\prime}_y / n$. Therefore, we have: \begin{equation}\label{eqn:hgr_equiv_2} r^{\prime 2} - 2 r^{\prime 2} + 1 < r^{\prime\prime 2} - 2 r^{\prime\prime 2} + 1 \end{equation} Since all steps leading from Equation~\eqref{eqn:hgr_equiv_1} to Equation~\eqref{eqn:hgr_equiv_2} are invertible, we have that: \begin{equation}\label{eqn:hgr_equiv} r^{\prime 2} > r^{\prime\prime 2} \quad \Leftrightarrow \quad \frac{1}{n} \left\| r^{\prime} f^{\prime}_x - g^{\prime}_y \right\|_2^2 < \frac{1}{n} \left\| r^{\prime\prime} f^{\prime\prime}_x - g^{\prime\prime}_y \right\|_2^2 \end{equation} In other words, maximizing the square of the sample HGR is equivalent to minimizing the Mean Squared Error.
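This equivalence can be checked numerically. In the sketch below (entirely our own), the transformed samples are standardized so that, as in the derivation above, the MSE at the optimal $r$ reduces to $1 - r^2$:

```python
import numpy as np

def standardize(v):
    """Zero-mean, unit-variance sample (1/n convention)."""
    return (v - v.mean()) / v.std()

def r_and_mse(fx, gy):
    """Sample correlation r of the standardized transforms, and the MSE
    of the regression r*fx - gy evaluated at that optimal r."""
    fx, gy = standardize(fx), standardize(gy)
    r = (fx @ gy) / len(fx)            # sample Pearson coefficient
    mse = np.mean((r * fx - gy) ** 2)  # equals 1 - r^2 by the derivation
    return r, mse
```

For any two candidate transformation pairs, the pair with the larger $r^2$ is exactly the pair with the smaller MSE.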
Now, maximizing $r^2$ corresponds to either maximizing or minimizing $r$. Since the copula transformations are generic, we can always change the sign of $r$ by changing the sign of either $f$ or $g$. This means that maximizing $r$ is in fact equivalent to maximizing $r^2$ in our context. Overall, this suggests an approach for computing the sample HGR that does not rely on bilevel optimization. Namely, first we determine $f$ and $g$ by solving the Least Squares Problem: \begin{equation} \argmin_{f, g, r} \frac{1}{n} \left\| r f_x - g_y \right\|_2^2 \end{equation} Then we compute the sample HGR as the absolute value of the sample Pearson correlation, i.e.: \begin{equation} \operatorname{hgr}(x, y) = |r(f_x, g_y)| \end{equation} Computing the absolute value is equivalent to performing the sign swap on one of the two transformations, as previously described. \subsection{Closed-form Computation of Generalized Disparate Impact} \label{app:gedi_computation} We start from the definition of the GeDI family of indicators as in \Cref{eqn:indicator_definition}, i.e.: \begin{align} \argmin_{d, \alpha}\ \frac{1}{n} \left\| d (F \alpha - \mu(F \alpha)) - (y - \mu(y)) \right\|_2^2 & & \text{s.t. } \|\alpha\|_1 = 1 \end{align} We can embed the constraint into the objective function $C(d, \alpha, \lambda)$ using a Lagrangian multiplier $\lambda$, from which we obtain: \begin{align} \argmin_{d, \alpha, \lambda}\ C(d, \alpha, \lambda) & & \text{s.t. } C(d, \alpha, \lambda) = \frac{1}{n} \left\| d (F \alpha - \mu(F \alpha)) - (y - \mu(y)) \right\|_2^2 + \lambda (\|\alpha\|_1 - 1) \end{align} The optimal solution of the objective function can be found by requiring its gradient to be null. This implies that having a null derivative in $d$ is a necessary condition for optimality, which we can exploit to retrieve the value of $d^*$ as follows.
First, we compute the derivative of $C(d, \alpha, \lambda)$ with respect to $d$: \begin{equation} \label{eqn:gedi_derivative} \frac{\partial C(d, \alpha, \lambda)}{\partial d} = \frac{2}{n} (F \alpha - \mu(F \alpha))^T ((F \alpha - \mu(F \alpha))d - (y - \mu(y))) \end{equation} Then, by requiring \Cref{eqn:gedi_derivative} to be null, we get the following equivalence: \begin{equation} \frac{1}{n} (F \alpha^* - \mu(F \alpha^*))^T (F \alpha^* - \mu(F \alpha^*)) d^* = \frac{1}{n} (F \alpha^* - \mu(F \alpha^*))^T (y - \mu(y)) \end{equation} The scalar values $\frac{1}{n} (F \alpha^* - \mu(F \alpha^*))^T (F \alpha^* - \mu(F \alpha^*))$ and $\frac{1}{n} (F \alpha^* - \mu(F \alpha^*))^T (y - \mu(y))$ represent the variance of the vector $F \alpha^*$ and the covariance between $F \alpha^*$ and $y$, respectively. Hence, we get the closed-form value of $d^*$ as: \begin{equation} d^* = \frac{\operatorname{cov}(F \alpha^*, y)}{\operatorname{var}(F \alpha^*)} \end{equation} Finally, since \Cref{eqn:indicator_abs_d} ties the value of the GeDI{} indicator to the absolute value of $d^*$, we get: \begin{equation} \operatorname{GeDI}{}(x, y; F) = \left| \frac{\operatorname{cov}(F \alpha^*, y)}{\operatorname{var}(F \alpha^*)} \right| \end{equation} \subsection{Equivalence Between GeDI and DIDI} \label{app:gedi_vs_didi} We consider the case of continuous target $y$ and binary protected attribute $x$. 
The $\operatorname{DIDI}$ can be computed as in \Cref{eqn:didi_r}: \begin{equation} \operatorname{DIDI}_r(x, y) = \sum_{v \in \mathcal{X}}\left|\frac{\sum_{i =1}^n y_i I(x_i=v)}{\sum_{i=1}^n I(x_i=v)} - \frac{1}{n} \sum_{i =1}^n y_i\right| \end{equation} Since $x$ is a binary attribute ($\mathcal{X} = \{0, 1\}$), we can replace the indicator function $I(x_i = v)$ with either $1 - x_i$ or $x_i$ depending on the value $v$, obtaining: \begin{equation} \label{eqn:didi_special_regression} \operatorname{DIDI}_r(x, y) = \left| \frac{\sum_{i =1}^n (1 - x_i) y_i}{\sum_{i=1}^n (1 - x_i)} - \frac{1}{n} \sum_{i =1}^n y_i \right| + \left| \frac{\sum_{i =1}^n x_i y_i}{\sum_{i=1}^n x_i} - \frac{1}{n} \sum_{i =1}^n y_i \right| \end{equation} By algebraic manipulations within the summations, and by dividing both the numerator and the denominator of every fractional term by the constant factor $n$, we can rewrite \Cref{eqn:didi_special_regression} as: \begin{equation} \label{eqn:didi_special_means_1} \operatorname{DIDI}_r(x, y) = \left| \frac{\mu_y - \mu_{xy}}{1 - \mu_x} - \mu_y \right| + \left| \frac{\mu_{xy}}{\mu_x} - \mu_y \right| \end{equation} where $\mu_x$ and $\mu_y$ represent the average value of vectors $x$ and $y$ respectively, and $\mu_{xy}$ is the average value of vector $x \odot y$. We can further manipulate this equation to obtain: \begin{equation} \label{eqn:didi_special_means_2} \operatorname{DIDI}_r(x, y) = \left| \frac{\mu_x\mu_y - \mu_{xy}}{1 - \mu_x}\right| + \left| \frac{\mu_{xy} - \mu_x\mu_y}{\mu_x} \right| \end{equation} The numerators of both terms are equal except for the sign, hence we can join the two absolute values by simply swapping the sign of one of them. Moreover, we can notice that $\mu_{xy} - \mu_x \mu_y$ represents the covariance between $x$ and $y$.
Therefore: \begin{equation} \label{eqn:didi_special_covariance} \operatorname{DIDI}_r(x, y) = \left| \frac{\operatorname{cov}(x, y)}{1 - \mu_x} + \frac{\operatorname{cov}(x, y)}{\mu_x} \right| = \left| \frac{\operatorname{cov}(x, y)}{\mu_x - \mu_x^2} \right| \end{equation} Since $x$ is a binary vector, it is invariant to the power operator. Thus, $\mu_x = \mu_{x^2}$ and, subsequently, the denominator of \Cref{eqn:didi_special_covariance} reduces to the variance of $x$. Hence: \begin{equation} \operatorname{DIDI}_r(x, y) = \left| \frac{\operatorname{cov}(x, y)}{\operatorname{var}(x)} \right| = \operatorname{GeDI}{}(x, y; V^1) \end{equation} The same reasoning can be applied when both $x$ and $y$ are binary vectors. Again, the indicator function $I(x_i = v)$ can be replaced with either $1 - x_i$ or $x_i$ depending on the value $v$. Similarly, since in this case we are computing the $\operatorname{DIDI}_c$, we replace $I(y_i = u)$ with $1 - y_i$ and $y_i$, and $I(y_i = u \land x_i = v)$ with the corresponding product between the previous terms.
Eventually, we obtain: \begin{equation} \begin{split} \label{eqn:didi_special_classification} \operatorname{DIDI}_c(x, y) = \left| \frac{\sum_{i =1}^n (1 - x_i) (1 - y_i)}{\sum_{i=1}^n (1 - x_i)} - \frac{1}{n} \sum_{i =1}^n (1 - y_i) \right| + \left| \frac{\sum_{i =1}^n x_i (1 - y_i)}{\sum_{i=1}^n x_i} - \frac{1}{n} \sum_{i =1}^n (1 - y_i) \right| + \\ \left| \frac{\sum_{i =1}^n (1 - x_i) y_i}{\sum_{i=1}^n (1 - x_i)} - \frac{1}{n} \sum_{i =1}^n y_i \right| + \left| \frac{\sum_{i =1}^n x_i y_i}{\sum_{i=1}^n x_i} - \frac{1}{n} \sum_{i =1}^n y_i \right| \end{split} \end{equation} By using the same notation as in \Cref{eqn:didi_special_means_1} and applying analogous mathematical manipulations, we get: \begin{equation} \operatorname{DIDI}_c(x, y) = \left| \frac{\mu_x\mu_y - \mu_{xy}}{1 - \mu_x}\right| + \left| \frac{\mu_{xy} - \mu_x\mu_y}{\mu_x} \right| + \left| \frac{\mu_x\mu_y - \mu_{xy}}{1 - \mu_x}\right| + \left| \frac{\mu_{xy} - \mu_x\mu_y}{\mu_x} \right| \end{equation} This value is twice that in \Cref{eqn:didi_special_means_2}, meaning that the $\operatorname{DIDI}_c$ is twice our indicator $\operatorname{GeDI}{}(x, y; V^1)$. This is not a problem, since the constant scaling factor can simply be discarded. Moreover, when we constrain the DIDI indicator up to a relative threshold that depends on the original level of discrimination, the scaling factor cancels out naturally. \section{Appendix of Section 5} \label{app:section_5} \subsection{Optimized Fine-grained Formulation} \label{app:fine_grain_formulation} We want to solve the following optimization model: \begin{equation} \begin{split} \label{eqn:fine_grained_problem} & \argmin_z \left\| z - y \right\|_2^2 \\ \text{s.t.
} \operatorname{GeDI}{}&(x, z; V^k) = \operatorname{GeDI}{}(x, z; V^1), \\ \operatorname{GeDI}{}&(x, z; V^1) \leq q \end{split} \end{equation} where the GeDI{} indicator is defined as in \Cref{eqn:indicator_abs_d} and computed according to Equations \eqref{eqn:unconstrained_solution_system}-\eqref{eqn:convenient_definition}. Since we impose $\operatorname{GeDI}{}(x, z; V^k) = \operatorname{GeDI}{}(x, z; V^1)$, the optimal vector $\tilde\alpha^*$ which solves \Cref{eqn:unconstrained_solution_system} for $F = V^k$ is such that all the elements are null apart from the first one. It follows that: \begin{equation} \label{eqn:fine_grained_system} \tilde{F}^T \tilde{F}_1 \tilde\alpha_1^* = \tilde{F}^T \tilde{y} \end{equation} where $\tilde{F}_1 = x - \mu(x) = \tilde{x}$ is the first column of the kernel matrix, namely the only one paired with a non-null $\tilde\alpha$ coefficient. \Cref{eqn:fine_grained_system} is a system of $k$ equations with a single variable. From the first equation we can retrieve the value of $\tilde\alpha_1^*$ in the following way: \begin{equation} \tilde{x}^T \tilde{x} \tilde\alpha_1^* = \tilde{x}^T \tilde{y} \end{equation} whose solution is $\tilde\alpha_1^* = \frac{\operatorname{cov}(x, y)}{\operatorname{var}(x)}$. According to \Cref{eqn:convenient_definition}, the GeDI{} indicator can be retrieved as the absolute value of $\tilde\alpha_1^*$ since all the remaining items of $\tilde\alpha^*$ are null. This result is in fact equivalent to $\operatorname{GeDI}{}(x, y; V^1)$. In addition, the remaining $k - 1$ equalities from \Cref{eqn:fine_grained_system} must be satisfied. This can be achieved by operating on the projections. Indeed, the optimization problem defined in \Cref{eqn:fine_grained_problem} has $n$ free variables -- i.e., the vector $z$ -- with $n \gg k$ in almost all practical cases.
These equalities are in the form: \begin{align} \label{eqn:fine_grained_constraints} \tilde{F}_1^T \tilde{F}_j \tilde\alpha_1^* = \tilde{F}_j^T \tilde{y} & & \forall j \in \{2, \hdots, k\} \end{align} where $\tilde{F}_j = x^j - \mu(x^j) = \tilde{x}^j$. Since $\tilde\alpha_1^*$ can be computed in closed-form, we can plug it into \Cref{eqn:fine_grained_constraints}, eventually obtaining the following set of constraints: \begin{align} \operatorname{cov}(x^j, x) \operatorname{cov}(x, y) = \operatorname{cov}(x^j, y) \operatorname{var}(x) & & \forall j \in \{2, \hdots, k\} \end{align} which can be used to solve \Cref{eqn:fine_grained_problem} without the need to set up the respective Least Squares Problems. \subsection{Experimental Setup for Reproducibility} \label{app:experimental_setup} All the models are trained on a machine with an Intel Core i9-10920X CPU at 3.5\,GHz and 64GB of RAM. The Random Forest and Gradient Boosting models are based on their available implementations in \texttt{scikit-learn 1.0.2} with default parameters, while the Neural Network and the Semantics-based Regularization models are implemented using \texttt{torch 1.13.1}. Specifically, the hyper-parameters of neural-based models are obtained via a grid search analysis with train-test splitting on the two unconstrained tasks, aimed at maximizing test accuracy. In particular, the neural models are trained for 200 epochs with batch size 128 and two layers of 256 units for the \textit{Communities \& Crimes} tasks and three layers of 32 units for the \textit{Adult} tasks. The only exception is the Semantics-based Regularization model, which runs for 500 epochs to compensate for the fact that it is trained full-batch in order to better deal with group constraints. Additionally, all the neurons have ReLU activation function except for the output one, which has either linear or sigmoid activation depending on whether the task is regression or classification, respectively.
Accordingly, the loss function is either mean squared error or binary cross-entropy, but in both cases the training is performed using the Adam optimizer with default hyperparameters. As regards the Moving Targets optimization routine, we leverage the Python APIs offered by \texttt{gurobipy 10.0} to solve it within our \texttt{Python 3.7} environment. The backend solver is \texttt{Gurobi 10.0}, for which we use the default parameters except for $\mbox{WorkLimit} = 60$. \subsection{Constrained Approaches Descriptions} \label{app:constrained_models} Here we briefly present the two approaches that we use to enforce our constraints in several ML models. Both approaches are employed to solve the following constrained optimization problem: \begin{align} \label{eqn:constrained_problem} \argmin_{\theta} \mathcal{L}(f(x; \theta), y) & & \text{s.t. } f(x; \theta) \in \mathcal{C} \end{align} where $f(x; \theta)$ represents the predictions of the ML model $f$ with learned parameters $\theta$, $\mathcal{C}$ is the feasible region, and $\mathcal{L}$ is a task-specific loss function. For both approaches, we refer the reader to the original papers for more details. \paragraph{Moving Targets} Moving Targets (MT)~\cite{Detassis_2021} is a framework for constrained ML based on bilevel decomposition. It works by iteratively alternating between a \emph{learner step}, which is in charge of training the ML model, and a \emph{master step}, which computes feasible target values that minimize the distance from both the model's predictions and the original targets. In practice, MT solves the problem described in \Cref{eqn:constrained_problem} by alternating between the two following sub-problems: \begin{align} \label{eqn:mt_master_step} z_{(i)} & = \argmin_z \mathcal{L}(z, f(x; \theta_{(i - 1)})) + \alpha_{(i)} \cdot \mathcal{L}(z, y) & \text{s.t.
} z \in \mathcal{C} \\ \label{eqn:mt_learner_step} \theta_{(i)} & = \argmin_{\theta} \mathcal{L}(f(x; \theta), z_{(i)}) & \end{align} where the subscript ${(i)}$ indicates values obtained at the $i^{th}$ iteration, $\alpha_{(i)}$ is a factor used to balance the distance from the original targets against the distance from the predictions during the \emph{master step}, and the first value $\theta_{(0)}$ is obtained by pre-training the ML model. The algorithm is perfectly suited to our purpose for three main reasons: 1) it is model-agnostic, thus allowing us to test the behaviour of our constraint for different models, each of which has its own specific biases and limitations; 2) it is conceived to deal with declarative group constraints like the GeDI{} one, since it allows training the ML model using mini-batches if needed; and 3) it can naturally deal with classification tasks without the need to relax the class targets to class probabilities. As regards our experiments, the \textit{learner step} is performed as a plain ML task leveraging either \texttt{scikit-learn} or \texttt{torch} depending on the chosen model. Instead, the \textit{master step} is formulated as a Mixed-Integer Program (MIP) and delegated to the \texttt{Gurobi} solver. More specifically, for the coarse-grained constraint formulation we define $k$ free variables for the coefficients $\tilde{\alpha}$ and retrieve their values by imposing equality constraints according to \Cref{eqn:cst_set_1}; the overall constraint on the GeDI{} value is then imposed according to \Cref{eqn:cst_set_2}. As for the fine-grained formulation, we force the constraint $-q \leq \operatorname{cov}(x, z) \leq q$ in order not to exceed the defined threshold and, additionally, we impose the satisfaction of the $k - 1$ equality constraints defined in \Cref{eqn:fine_grained_constraints}, as motivated by the optimized formulation shown in \Cref{app:fine_grain_formulation}.
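For concreteness, the learner/master alternation can be sketched as follows. This is a minimal illustration with hypothetical helper names: \texttt{project} stands in for the MIP-based master step (delegated to \texttt{Gurobi} in our experiments), and \texttt{model} is any scikit-learn-style estimator.

```python
# Minimal sketch of the Moving Targets alternation. Helper names are
# illustrative: `project` stands in for the MIP-based master step.
def moving_targets(model, x, y, project, n_iters=10):
    """project(preds, y, alpha) must return feasible targets z (approximately)
    minimizing L(z, preds) + alpha * L(z, y) subject to z in C."""
    model.fit(x, y)                      # pre-training yields theta_(0)
    for i in range(1, n_iters + 1):
        alpha = 1.0 / i                  # harmonic schedule alpha_(i) = 1/i
        preds = model.predict(x)
        z = project(preds, y, alpha)     # master step (a MIP in practice)
        model.fit(x, z)                  # learner step: plain supervised fit
    return model
```

Any constraint set $\mathcal{C}$ is supported as long as \texttt{project} can solve the master step; the loop itself is model-agnostic.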
Finally, in our setup we define each value $\alpha_{(i)}$ as the $i^{th}$ item of the harmonic series, namely $\alpha_{(i)} = i^{-1}$, and the loss function $\mathcal{L}$ is either MSE or Hamming distance depending on whether the task is regression or classification. Compared with the Semantic-based Regularization approach, a key advantage of MT lies in the fact that MIP models can naturally deal with discrete variables, thus requiring no relaxation of the problem to the continuous domain. \paragraph{Semantic-based Regularization} The Lagrangian Dual framework for Semantic-based Regularization~\cite{Fioretto2021} extends the concept of loss penalizers by allowing for an automated calibration of the Lagrangian multipliers. Specifically, let us consider the case in which we have a penalty vector $\mathcal{P}(y, f(x; \theta)) \in \mathbb{R}_+^{k}$ which represents the violations of $k$ different constraints. We can embed these violations in the loss function $\mathcal{L}(y, f(x; \theta)) \in \mathbb{R}^+$ of our neural model by multiplying each violation by its respective multiplier $\lambda_i$. The overall loss will be $$\mathcal{L}(y, f(x; \theta)) + \lambda^T \mathcal{P}(y, f(x; \theta)).$$ The main pitfall of this approach is that it requires fine-tuning the multiplier vector $\lambda$ for each task. The Lagrangian Dual framework solves this problem by proposing a bilevel optimization schema where: 1) the loss function is minimized via gradient descent with fixed multipliers, and 2) the loss function is maximized via gradient ascent over the multipliers with the network parameters fixed.
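As a minimal sketch of this min/max schema, consider a toy scalar model $f(x)=wx$ with a single constraint $\operatorname{mean}(f(x))\leq c$. Hand-written gradients stand in for \texttt{torch} autograd here, and all names are illustrative rather than taken from \cite{Fioretto2021}.

```python
# Toy Lagrangian-dual alternation: descent on the model parameter w with the
# multiplier fixed, then ascent on the multiplier with w fixed. Illustrative
# code only; the paper's models are torch networks trained with autograd.
def lagrangian_dual(xs, ys, c, lr=0.05, epochs=200):
    w, lam = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        preds = [w * x for x in xs]
        mse_grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / n
        mean_pred = sum(preds) / n
        viol = max(0.0, mean_pred - c)            # penalty P >= 0
        pen_grad = (sum(xs) / n) if viol > 0 else 0.0
        w -= lr * (mse_grad + lam * pen_grad)     # descent step on w
        lam = max(0.0, lam + lr * viol)           # ascent step on lambda
    return w, lam
```

The multiplier keeps growing while the constraint is violated, so the penalty is driven towards zero without hand-tuning $\lambda$.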
In practice, this is equivalent to performing the following steps in sequence: \begin{align} \theta_{(i)} & = \argmin_{\theta} \left\{\mathcal{L}(y, f(x; \theta)) + \lambda_{(i-1)}^T \mathcal{P}(y, f(x; \theta)) \right\} \\ \lambda_{(i)} &= \operatorname*{argmax}_{\lambda} \left\{ \mathcal{L}(y, f(x; \theta_{(i)})) + \lambda^T \mathcal{P}(y, f(x; \theta_{(i)})) \right\} \end{align} where the subscript ${(i)}$ indicates the value at the $i^{th}$ iteration -- performed once per batch -- and $\lambda_{(0)}$ is a null vector. As regards our experiments, the $\tilde{\alpha}$ coefficients are computed via the \texttt{torch.linalg.lstsq} differentiable operator. In the coarse-grained formulation, the penalizer vector $\mathcal{P}(y, f(x; \theta))$ consists of a single term, which is computed according to \Cref{eqn:gedi_vector_penalizer}. Instead, in the fine-grained formulation the vector has $k$ different terms -- one for each $\tilde{\alpha}_i$ -- of which all but the first exhibit a violation proportional to their absolute value. A major pitfall of this approach is the incompatibility of the \textit{round} operator with gradient-based learning algorithms, since its gradient is null for every $x$. This makes it necessary to relax the formulation of the GeDI{} indicator (only) in classification tasks, by adopting predicted probabilities rather than predicted class targets. \end{document}
\begin{document}

\title{On a Greedy 2-Matching Algorithm and Hamilton Cycles in Random Graphs with Minimum Degree at Least Three}
\author{Alan Frieze\thanks{Research supported in part by NSF Grant CCF1013110},\\Department of Mathematical Sciences,\\ Carnegie Mellon University,\\Pittsburgh PA15217.}
\maketitle

\begin{abstract}
We describe and analyse a simple greedy algorithm {\sc 2greedy}\ that finds a good 2-matching $M$ in the random graph $G=G_{n,cn}^{\delta\geq 3}$ when $c\geq 15$. A 2-matching is a spanning subgraph of maximum degree two and $G$ is drawn uniformly from graphs with vertex set $[n]$, $cn$ edges and minimum degree at least three. By good we mean that $M$ has $O(\log n)$ components. We then use this 2-matching to build a Hamilton cycle in $O(n^{1.5+o(1)})$ time \text{w.h.p.}
\end{abstract}

\section{Introduction}

There have been many papers written on the existence of Hamilton cycles in random graphs. Koml\'os and Szemer\'edi \cite{KoSz}, Bollob\'as \cite{B1}, and Ajtai, Koml\'os and Szemer\'edi \cite{AKS} showed that the question is intimately related to the minimum degree. Loosely speaking, if we are considering random graphs with $n$ vertices and minimum degree at least two, then we need $\Omega(n\log n)$ edges in order that they are likely to be Hamiltonian. For sparse random graphs with $O(n)$ random edges, one needs to have minimum degree at least three. This is to avoid having three vertices of degree two sharing a common neighbour. There are several models of a random graph in which minimum degree three is satisfied: random regular graphs of degree at least three, Robinson and Wormald \cite{RW1}, \cite{RW2}, or the random graph $G_{3-out}$, Bohman and Frieze \cite{BF}.
Bollob\'as, Cooper, Fenner and Frieze \cite{BCFF} considered the classical random graph $G_{n,m}$ with conditioning on the minimum degree $k$, i.e.\ each graph with vertex set $[n]$ and $m$ edges and minimum degree at least $k$ is considered to be equally likely. Denote this model of a random graph by $G_{n,m}^{\delta\geq k}$. They showed that for every $k\geq 3$ there is a $c_k\leq (k+1)^3$ such that if $c\geq c_k$ then \text{w.h.p.}\ $G_{n,cn}^{\delta\geq k}$ has $(k-1)/2$ edge-disjoint Hamilton cycles, where a perfect matching constitutes half a Hamilton cycle in the case where $k$ is even. It is reasonable to conjecture that $c_k=k/2$. The results of this paper and a companion \cite{FP} reduce the known value of $c_3$ from 64 to below 15. It can be argued that replacing one incorrect upper bound by a smaller incorrect upper bound does not constitute significant progress. However, the main contribution of this paper is to introduce a new greedy algorithm for finding a large 2-matching in a random graph, to give a (partial) analysis of its performance and, of course, to apply it to the Hamilton cycle problem. One is interested in the time taken to construct a Hamilton cycle in a random graph. Angluin and Valiant \cite{AV} and Bollob\'as, Fenner and Frieze \cite{BFF} give polynomial time algorithms. The algorithm in \cite{AV} is very fast, $O(n\log^2n)$ time, but requires $Kn\log n$ random edges for sufficiently large $K>0$. The algorithm in \cite{BFF} is of order $n^{3+o(1)}$ but works \text{w.h.p.}\ at the exact threshold for connectivity. Frieze \cite{F1} gave an $O(n^{3+o(1)})$ time algorithm for finding large cycles in sparse random graphs and this could be adapted to find Hamilton cycles in $G_{n,cn}^{\delta\geq 3}$ in this time for sufficiently large $c$. Another aim of \cite{FP} and this paper is to reduce this running time.
The results of this paper and its companion \cite{FP} will reduce this to $n^{1.5+o(1)}$ for sufficiently large $c$, and perhaps in a later paper, we will further reduce the running time by borrowing ideas from a linear expected time algorithm for matchings due to Chebolu, Frieze and Melsted \cite{CFM}. The idea of \cite{CFM} is to begin the process of constructing a perfect matching by using the Karp-Sipser algorithm \cite{KS} to find a good matching and then build this up to a perfect matching by alternating paths. The natural extension of this idea is to find a good 2-matching and then use extension-rotation arguments to transform it into a Hamilton cycle. A 2-matching $M$ of $G$ is a spanning subgraph of maximum degree 2. Each component of $M$ is a cycle or a path (possibly an isolated vertex) and we let $\kappa(M)$ denote the number of components of $M$. The time taken to transform $M$ into a Hamilton cycle depends heavily on $\kappa(M)$. The aim is to find a 2-matching $M$ for which $\kappa(M)$ is small. The main result of this paper is the following:
\begin{Theorem}\label{th1}
There is an absolute constant $c_0>0$ such that if $c\geq c_0$ then \text{w.h.p.}\ {\sc 2greedy}\ finds a 2-matching $M$ with $\kappa(M)=O(\log n)$. (This paper gives an analytic proof that $c_0\leq 15$. We have a numerical proof that $c_0\leq 2.5$.)
\end{Theorem}
Given this theorem, we will show how we can use this and the result of \cite{FP} to show
\begin{Theorem}\label{th2}
If $c\geq c_0$ then \text{w.h.p.}\ a Hamilton cycle can be found in $O(n^{1.5+o(1)})$ time.
\end{Theorem}

{\bf Acknowledgement:} I would like to thank my colleague Boris Pittel for his help with this paper. He ought to be a co-author, but he has declined to do so.

\section{Outline of the paper}

As already indicated, the idea is to use a greedy algorithm to find a good 2-matching and then transform it into a Hamilton cycle.
We will first give an overview of our greedy algorithm. As we proceed, we select edges to add to our 2-matching $M$. Thus $M$ consists of paths and cycles (and isolated vertices). Vertices of the cycles and vertices interior to the paths get deleted from the current graph, which we denote by $\Gamma$. No more edges can be added incident to these interior vertices. Thus the paths can usefully be thought of as being contracted to the set of edges of a matching $M^*$ on the remaining vertices of $\Gamma$. This matching is not part of $\Gamma$. We keep track of the vertices covered by $M^*$ by using a 0/1 vector $b$ so that for vertex $v$, $b(v)$ is the indicator that $v$ is covered by $M^*$. Thus when $v$ is still included in $\Gamma$ and $b(v)=1$, it will be the end-point of a path in the current 2-matching $M$. The greedy algorithm first tries to cover vertices of degree at most two that are not covered by $M$ or vertices of degree one that are covered by $M$. These choices are forced. When there are no such vertices, we choose an edge at random. We make sure that one of the end-points $u,v$ of the chosen edge has $b$-value zero. The aim here is to try to quickly ensure that $b(v)=1$ for all vertices of $\Gamma$. This will essentially reduce the problem to that of finding another (near) perfect matching in $\Gamma$. The first phase of the algorithm finishes when all of the vertices that remain have $b$-value one. This necessarily means that the contracted paths form a matching of the graph $\Gamma$ that remains at this stage.
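Schematically, the first phase just described has the following control flow. This is a sketch only: the state object and its methods are hypothetical, and all path/matching bookkeeping (the matching $M^*$, the $b$-vector updates, deletions from $\Gamma$) is elided.

```python
# Schematic control flow of the first phase of the greedy algorithm
# (illustrative names; all bookkeeping of paths, M* and b is elided).
def first_phase(state):
    # Loop until every remaining vertex has b-value one.
    while not state.all_covered():
        if state.uncovered_deg1:                       # degree 1, b(v) = 0
            state.add_forced_edge(state.uncovered_deg1.pop())
        elif state.uncovered_deg2:                     # degree 2, b(v) = 0
            state.add_forced_edge(state.uncovered_deg2.pop())
        elif state.covered_deg1:                       # degree 1, b(v) = 1
            state.add_forced_edge(state.covered_deg1.pop())
        else:                            # no forced move: random edge, one of
            state.random_edge_step()     # whose end-points has b-value zero
    return state.M
```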
Furthermore, we will see that $\Gamma$ is distributed as $G_{\nu,\mu}^{\delta\geq 2}$ for some $\nu,\mu$ and then we construct another (near) perfect matching $M^{**}$ of $\Gamma$ by using the linear expected time algorithm of \cite{CFM}. We put $M$ and $M^{**}$ together to create a 2-matching along with the cycles that have been deleted. Note that some vertices may have become isolated during the construction of $M$ and these will form single components of our 2-matching. The union of two random (near) perfect matchings is likely to have $O(\log n)$ components. Full details of this algorithm are given in Section~\ref{alg}. Once we have described the algorithm, we can begin its analysis. We first describe the random graph model that we will use. We call it the {\em Random Sequence model}. It was first used in Bollob\'as and Frieze \cite{BoFrM} and independently in Chv\'atal \cite{Ch}. We used it in \cite{AFP} for our analysis of the Karp-Sipser algorithm. We prove the truncated Poisson nature of the degree sequence of the graph $\Gamma$ that remains at each stage in Section \ref{mod}. We then, in Section \ref{alg}, give a detailed description of {\sc 2greedy}. In Section \ref{uni} we show that the distribution of the evolving graph $\Gamma$ can be succinctly described by a 6-component vector ${\bf v}=(y_1,y_2,z_1,y,z,\mu)$ that evolves as a Markov chain. Here $y_j,j=1,2$ denotes the number of vertices of degree $j$ that are not incident with $M$ and $z_1$ denotes the number of vertices of degree one that are incident with $M$.
$y$ denotes the number of vertices of degree at least three that are not incident with $M$ and $z$ denotes the number of vertices of degree at least two that are incident with $M$. $\mu$ denotes the number of edges. It is important to keep $\zeta=y_1+y_2+z_1$ small and {\sc 2greedy}\ will attempt to handle such vertices when $\zeta>0$. In this way we keep $\zeta$ small \text{w.h.p.}\ throughout the algorithm and this will mean that the final 2-matching produced will have few components. Section \ref{cec} first describes the (approximate) transition probabilities of this chain. There are four types of step in {\sc 2greedy}\ that depend on which, if any, of $y_1,y_2,z_1$ are positive. Thus there are four sets of transition probabilities. Given the expected changes in ${\bf v}$, we first show that in all cases the expected change in $\zeta$ is negative when $\zeta$ is positive. This indicates that $\zeta$ will not get large and a high probability polylog bound is proven. We are using the differential equation method and Section \ref{diff} describes the sets of differential equations that can be used to track the progress of the algorithm \text{w.h.p.} The parameters for these equations will be $\hat{{\bf v}}=(\hat{y}_1,\hat{y}_2,\hat{z}_1,\hat{y},\hat{z},\hat{\mu})$.
There are four sets of equations corresponding to the four types of step in {\sc 2greedy}. It is important to know the proportion of each type of step over a small interval. We thus consider a {\em sliding trajectory}, i.e.\ a weighted sum of these four sets of equations. The weights are chosen so that in the weighted set of equations we have $\hat{y}_1'=\hat{y}_2'=\hat{z}_1'=0$. This is in line with the fact that $\hat{y},\hat{z},\hat{\mu}\gg\zeta$ for most of the algorithm. We verify that the expressions for the weights are non-negative. We then verify that \text{w.h.p.}\ the sliding trajectory and the process parameters remain close. Our next aim is to show that \text{w.h.p.}\ there is a time $T$ such that $y(T)=0,z(T)=\Omega(n)$. It would therefore be most natural to show that for the sliding trajectory, there is a time $\hat{T}$ such that $\hat{y}(\hat{T})=0,\hat{z}(\hat{T})=\Omega(n)$. The equations for the sliding trajectory are complicated and we have not been able to do this directly. Instead, we have set up an approximate system of equations (in parameters $\tilde{y},\tilde{z},\tilde{\mu}$) that are close when $c\geq 15$. We can prove that these parameters stay close to $\hat{y},\hat{z},\hat{\mu}$ and that there is a time $\tilde{T}$ such that $\tilde{y}(\tilde{T})=0,\tilde{z}(\tilde{T})=\Omega(n)$. The existence of $\hat{T}$ is deduced from this and then we can deduce the existence of $T$. We then, in Section \ref{noc}, show that \text{w.h.p.}\ {\sc 2greedy}\ creates a 2-matching with $O(\log n)$ components, completing the proof of Theorem \ref{th1}.
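To illustrate the weight computation behind the sliding trajectory: given the four derivative vectors of $(y_1,y_2,z_1)$, one per step type (the numbers below are made up for illustration, not the paper's actual expressions), the weights are the solution of a small linear system.

```python
# Illustrative computation: find weights p_1,...,p_4 >= 0 with sum 1 that
# zero the weighted derivatives of (y1, y2, z1). The derivative vectors here
# are invented for illustration only.
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * x for a, x in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def sliding_weights(derivs):
    """derivs: four (dy1, dy2, dz1) vectors, one per step type. Returns the
    weights p with sum(p) = 1 zeroing the weighted dy1, dy2, dz1."""
    A = [[derivs[j][i] for j in range(4)] for i in range(3)] + [[1.0] * 4]
    return solve(A, [0.0, 0.0, 0.0, 1.0])
```

In the paper the non-negativity of the resulting weights has to be verified separately; here it simply holds for the chosen numbers.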
Section \ref{posa} shows how to use an extension-rotation procedure on our graph $G$ to find a Hamilton cycle within the claimed time bounds. This procedure works by extending paths one edge at a time and using an operation called a rotation to increase the number of chances of extending a path. It is not guaranteed to extend a path, even if it is possible some other way. There is the notion of a {\em booster}. This is a non-edge whose addition will allow progress in the extension-rotation algorithm. The companion paper \cite{FP} shows that for $c\geq 2.67$ there will \text{w.h.p.}\ always be many boosters. To get the non-edges we first randomly choose $s=n^{1/2}\log^{-2}n$ random edges $X$ of $G$, none of which are incident with a vertex of degree three. We then write $G=G'+X$ and argue in Section \ref{remove} that the pair $(G',X)$ can be replaced by $(H,Y)$ where $H=G_{n,cn-s}^{\delta\geq 3}$ and $Y$ is a random set of edges disjoint from $E(H)$. We then argue in Section \ref{exro} that \text{w.h.p.}\ $Y$ contains enough boosters to create a Hamilton cycle within the claimed time bound. Section \ref{final} contains some concluding remarks.

\section{Random Sequence Model}\label{mod}

A small change of model will simplify the analysis. Given a sequence ${\bf x} = x_1,x_2,\ldots,x_{2M}\in [N]^{2M}$ of $2M$ integers between 1 and $N$, we can define a (multi)graph $G_{{\bf x}}=G_{\bf x}(N,M)$ with vertex set $[N]$ and edge set $\{(x_{2i-1},x_{2i}):1\leq i\leq M\}$. The degree $d_{\bf x}(v)$ of $v\in [N]$ is given by $$d_{\bf x}(v)=|\{j\in [2M]:x_j=v\}|.$$ If ${\bf x}$ is chosen randomly from $[N]^{2M}$ then $G_{{\bf x}}$ is close in distribution to $G_{N,M}$. Indeed, conditional on being simple, $G_{{\bf x}}$ is distributed as $G_{N,M}$.
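The construction of $G_{\bf x}$ from a random sequence is straightforward to simulate (illustrative code, not from the paper; conditioning on simplicity amounts to rejection sampling with the \texttt{is\_simple} check below):

```python
# Random-sequence model: draw x uniformly from [N]^{2M} and read off the
# multigraph G_x and its degree sequence (illustrative code).
import random
from collections import Counter

def random_sequence_graph(N, M, rng=random):
    x = [rng.randrange(1, N + 1) for _ in range(2 * M)]
    edges = [(x[2 * i], x[2 * i + 1]) for i in range(M)]
    deg = Counter(x)                  # d_x(v) = number of occurrences of v
    return edges, deg

def is_simple(edges):
    undirected = [tuple(sorted(e)) for e in edges]
    no_loops = all(u != v for u, v in undirected)
    no_multi = len(set(undirected)) == len(undirected)
    return no_loops and no_multi
```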
To see this, note that if $G_{{\bf x}}$ is simple then it has vertex set $[N]$ and $M$ edges. Also, there are $M!2^M$ distinct equally likely values of ${\bf x}$ which yield the same graph. Our situation is complicated by there being lower bounds of $2,3$ respectively on the minimum degree in two disjoint sets $J_2,J_3\subseteq [N]$. The vertices in $J_0=[N]\setminus (J_2\cup J_3)$ are of fixed, bounded degree and the sum of their degrees is $D=o(N)$. So we let $$[N]^{2M}_{J_2,J_3;D}= \{{\bf x}\in [N]^{2M}:d_{\bf x}(j)\geq i\text{ for }j\in J_i,\,i=2,3\text{ and } \sum_{j\in J_0}d_{\bf x}(j)=D\}.$$ Let $G=G(N,M,J_2,J_3;D)$ be the multi-graph $G_{\bf x}$ for ${\bf x}$ chosen uniformly from $[N]^{2M}_{J_2,J_3;D}$. It is clear then that, conditional on being simple, $G(n,m,\emptyset,[n];0)$ has the same distribution as $G_{n,m}^{\delta\geq 3}$. It is important therefore to estimate the probability that this graph is simple. For this and other reasons, we need to have an understanding of the degree sequence $d_{\bf x}$ when ${\bf x}$ is drawn uniformly from $[N]^{2M}_{J_2,J_3;D}$. Let $$f_k(\lambda)=e^\lambda-\sum_{i=0}^{k-1}\frac{\lambda^i}{i!}$$ for $k\geq 0$.
\begin{lemma}\label{lem3}
Let ${\bf x}$ be chosen randomly from $[N]^{2M}_{J_2,J_3;D}$. For $i=2,3$ let $Z_j\,(j\in J_i)$ be independent copies of a {\em truncated Poisson} random variable ${\cal P}_i$, where $$\mathbb{P}({\cal P}_i=t)=\frac{\lambda^t}{t!f_i(\lambda)},\hspace{1in}t=i,i+1,\ldots\ .$$ Here $\lambda$ satisfies \begin{equation}\label{2} \sum_{i=2}^3\frac{\lambda f_{i-1}(\lambda)}{f_i(\lambda)}|J_i|=2M-D. \end{equation} For $j\in J_0$, $Z_j=d_j$ is a constant and $\sum_{j\in J_0}d_j=D$. Then $\{d_{\bf x}(j)\}_{j\in [N]}$ is distributed as $\{Z_j\}_{j\in [N]}$ conditional on $Z=\sum_{j\in [N]}Z_j=2M$.
\end{lemma}
{\bf Proof\hspace{2em}} Note first that the value of $\lambda$ in (\ref{2}) is chosen so that $$\mathbb{E}(Z)=2M.$$ Fix $J_0,J_2,J_3$ and $\boldsymbol{\xi}=(\xi_1,\xi_2,\ldots,\xi_N)$ such that $\xi_j=d_j$ for $j\in J_0$ and $\xi_j\geq i$ for $j\in J_i$, $i=2,3$. Then, $$\mathbb{P}(d_{{\bf x}}= \boldsymbol{\xi}) = \left( \frac{(2M)!}{\xi_1! \xi_2! \ldots \xi_N! }\right) \bigg/ \left( \sum_{{\bf x}\in [N]^{2M}_{J_2,J_3;D}} \frac{(2M)!}{x_1! x_2! \ldots x_N!} \right).$$ On the other hand,
\begin{align*}
&\mathbb{P} \left((Z_1,Z_2,\ldots,Z_N)= \boldsymbol{\xi} \bigg|\; \sum_{j\in[N]} Z_j = 2M\right)=\\
&\left( (2M)!\prod_{j\in J_0}\frac{1}{d_j!}\prod_{i=2}^3\prod_{j\in J_i} \frac{\lambda^{\xi_j}}{f_i(\lambda) \xi_j!} \right) \bigg/ \left( \sum_{{\bf x}\in [N]^{2M}_{J_2,J_3;D}} (2M)!\prod_{j\in J_0}\frac{1}{d_j!} \prod_{i=2}^3\prod_{j\in J_i} \frac{\lambda^{x_j}}{f_i(\lambda) x_j!} \right)\\
&=\left( \frac{ \prod_{i=2}^3 f_i(\lambda)^{-|J_i|} \lambda^{2M}}{\xi_1! \xi_2! \ldots \xi_N!} \right) \bigg/ \left( \sum_{{\bf x}\in [N]^{2M}_{J_2,J_3;D}} \frac{\prod_{i=2}^3 f_i(\lambda)^{-|J_i|} \lambda^{2M}}{x_1! x_2! \ldots x_N!} \right)\\
&=\mathbb{P}(d_{\bf x}= \boldsymbol{\xi}).
\end{align*}
\hspace*{\fill}\mbox{$\Box$}

To use Lemma \ref{lem3} for the approximation of vertex degree distributions we need to have sharp estimates of the probability that $Z$ is close to its mean $2M$. In particular we need sharp estimates of $\mathbb{P}(Z=2M)$ and $\mathbb{P}(Z-Z_1=2M-k)$, for $k=o(N)$. These estimates are possible precisely because $\mathbb{E}(Z)=2M$. Using the special properties of $Z$, we can refine a standard argument to show (Appendix 1) that, where $N_\ell=|J_\ell|$, $N^*=N_2+N_3$ and the variances are given by
\begin{multline}\label{30}
\sigma_\ell^2=\frac{f_\ell(\lambda)(\lambda^2f_{\ell-2}(\lambda)+\lambda f_{\ell-1}(\lambda))-\lambda^2f_{\ell-1}(\lambda)^2}{f_\ell(\lambda)^2}=\lambda\frac{d}{d\lambda}\left(\frac{\lambda f_{\ell-1}(\lambda)}{f_\ell(\lambda)}\right) \\
\text{ and }\ \sigma^2=\frac{1}{N^*}\sum_{\ell=2}^3 N_\ell\sigma_\ell^2,
\end{multline}
if $N^*\sigma^2\rightarrow \infty$ and $k=O(\sqrt{N^*}\sigma)$ then
\begin{equation}\label{ll1}
\mathbb{P}\left(Z=2M-k\right)=\frac{1}{\sigma\sqrt{2\pi N^*}}\left(1+ O\left(\frac{k^2+1}{N^*\sigma^{2}}\right)\right).
\end{equation}
A proof for $J_2=[N]$ was given in the appendix of \cite{AFP}. We need to modify the proof in a trivial way. Given \eqref{ll1} and $$\sigma_\ell^2=O(\lambda),\qquad\ell=2,3,$$ we obtain
\begin{lemma}\label{lem4}
Let ${\bf x}$ be chosen randomly from $[N]^{2M}_{J_2,J_3;D}$.
\begin{description}
\item[(a)] Assume that $\log N^*=O((N^* \lambda)^{1/2})$.
For every $j\in J_\ell$ and $\ell\leq k\leq \log N^*$,
\begin{equation}\label{f1}
\mathbb{P}(d_{\bf x}(j)=k)=\frac{\lambda^k}{k!f_\ell(\lambda)} \left(1+O\left(\frac{k^2+1}{N^* \lambda}\right)\right).
\end{equation}
Furthermore, for all $\ell_1,\ell_2\in\{2,3\}$ and $j_1\in J_{\ell_1},j_2\in J_{\ell_2},\,j_1\neq j_2$, and $\ell_i\leq k_i\leq \log N^*$,
\begin{equation}\label{f2}
\mathbb{P}(d_{\bf x}(j_1)=k_1,d_{\bf x}(j_2)=k_2)=\frac{\lambda^{k_1}}{k_1!f_{\ell_1}(\lambda)}\frac{\lambda^{k_2}}{k_2! f_{\ell_2}(\lambda)}\left(1+O\left(\frac{\log^2 N^*}{N^* \lambda}\right)\right).
\end{equation}
\item[(b)]
\begin{equation}\label{maxdegree}
d_{\bf x}(j)\leq\frac{\log N}{(\log\log N)^{1/2}} \quad\text{q.s.}\footnote{An event ${\cal E}={\cal E}(N^*)$ occurs quite surely (\text{q.s.}\ for short) if $\mathbb{P}({\cal E})=1-O(N^{-a})$ for any constant $a>0$.}
\end{equation}
for all $j\in J_2\cup J_3$.
\end{description}
\end{lemma}
{\bf Proof\hspace{2em}} Assume that $j=1\notin J_0$. Then
\begin{eqnarray*}
\mathbb{P}(d_{\bf x}(1)=k)&=&\frac{\mathbb{P}\left(Z_1=k\mbox{ and }\sum_{i=1}^NZ_i=2M\right)}{\mathbb{P}\left(\sum_{i=1}^NZ_i=2M\right)}\\
&=&\frac{\lambda^k}{k!f_\ell(\lambda)}\frac{\mathbb{P}\left(\sum_{i=2}^NZ_i=2M-k\right)}{\mathbb{P}\left(\sum_{i=1}^NZ_i=2M\right)}.
\end{eqnarray*}
Likewise, with $j_1=1,j_2=2$,
$$\mathbb{P}(d_{\bf x}(1)=k_1,d_{\bf x}(2)=k_2)=\frac{\lambda^{k_1}}{k_1!f_{\ell_1}(\lambda)}\frac{\lambda^{k_2}}{k_2!f_{\ell_2}(\lambda)} \frac{\mathbb{P}\left(\sum_{i=3}^NZ_i=2M-k_1-k_2\right)}{\mathbb{P}\left(\sum_{i=1}^NZ_i=2M\right)}.$$
Statement (a) follows immediately from \eqref{ll1} and (b) follows from simple estimations. \hspace*{\fill}\mbox{$\Box$}

Let $\nu_{\bf x}^\ell(s)$ denote the number of vertices in $J_\ell,\ell=2,3$ of degree $s$ in $G_{\bf x}$. Equation (\ref{ll1}) and a standard tail estimate for the binomial distribution show
\begin{lemma}\label{lem4x}
Suppose that $\log N^*=O((N^* \lambda)^{1/2})$ and $N_\ell\to\infty$ with $N$. Let ${\bf x}$ be chosen randomly from $[N]^{2M}_{J_2,J_3;D}$. Then \text{q.s.},
\begin{equation}\label{degconc}
{\cal D}({\bf x})= \left\{\left|\nu_{\bf x}^\ell(j)-\frac{N_\ell \lambda^j}{j!f_\ell(\lambda)}\right| \leq \left(1+\left(\frac{N_\ell \lambda^j}{j!f_\ell(\lambda)}\right)^{1/2}\right)\log^2 N,\ \ell\leq j\leq \log N\right\}.
\end{equation}
\end{lemma}
\hspace*{\fill}\mbox{$\Box$}

We can now show that $G_{\bf x}$, ${\bf x}\in [n]^{2m}_{\emptyset,[n];0}$, is a good model for $G_{n,m}^{\delta\geq 3}$. For this we only need to show that
\begin{equation}\label{simpx}
\mathbb{P}(G_{\bf x}\text{ is simple})=\Omega(1).
\end{equation}
For this we can use a result of McKay \cite{McK}. If we fix the degree sequence of ${\bf x}$ then ${\bf x}$ itself is just a random permutation of the multiset in which each $j\in [n]$ appears $d_{\bf x}(j)$ times. This in fact is another way of looking at the Configuration model of Bollob\'as \cite{B2}.
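This permutation view is easy to realize in code (an illustrative sketch, not from the paper: each vertex $j$ contributes $d(j)$ copies of itself, a uniformly random permutation of the resulting multiset is drawn, and consecutive pairs are read off as edges):

```python
# Configuration-model view of x: a uniformly random permutation of the
# multiset in which each vertex appears d(v) times, read off in pairs.
import random

def configuration_sequence(deg, rng=random):
    half_edges = [v for v, d in deg.items() for _ in range(d)]
    rng.shuffle(half_edges)
    return [(half_edges[2 * i], half_edges[2 * i + 1])
            for i in range(len(half_edges) // 2)]
```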
The reference \cite{McK} shows that the probability that $G_{\bf x}$ is simple is asymptotically equal to $e^{-(1+o(1))\rho(\rho+1)}$ where $\rho=m_2/m$ and $m_2=\sum_{j\in [n]}d_{{\bf x}}(j)(d_{{\bf x}}(j)-1)$. One consequence of the exponential tails in Lemma \ref{lem4x} is that $m_2=O(m)$. This implies that $\rho=O(1)$ and hence that \eqref{simpx} holds. We can thus use the Random Sequence Model to prove the occurrence of high probability events in $G_{n,m}^{\delta\geq 3}$. With this in hand, we can now proceed to describe our 2-matching algorithm.

\section{Greedy Algorithm}\label{alg}

Our algorithm will be applied to the random graph $G=G_{n,m}^{\delta\geq 3}$ and analysed in the context of $G_{\bf x}$. As the algorithm progresses, it makes changes to $G$ and we let $\Gamma$ denote the current state of $G$. The algorithm grows a 2-matching $M$ and for $v\in [n]$ we let $b(v)$ be the 0/1 indicator for vertex $v$ being incident to an edge of $M$.
We let
\begin{itemize}
\item $\mu$ be the number of edges in $\Gamma$,
\item $V_{0,j}=\{v\in [n]:d_\Gamma(v)=0,\,b(v)=j\}$, $j=0,1$,
\item $Y_k=\{v\in [n]:d_\Gamma(v)=k\text{ and }b(v)=0\}$, $k=1,2$,
\item $Z_1=\{v\in [n]:d_\Gamma(v)=1\text{ and }b(v)=1\}$,
\item $Y=\{v\in [n]:d_\Gamma(v)\geq 3\text{ and }b(v)=0\}$. \quad This is $J_3$ of Section \ref{mod}.
\item $Z=\{v\in [n]:d_\Gamma(v)\geq 2\text{ and }b(v)=1\}$. \quad This is $J_2$ of Section \ref{mod}.
\item $M$ is the set of edges in the current 2-matching.
\item $M^*$ is the matching induced by the path components of $M$, i.e.\ if $P\subseteq M$ is a path from $x$ to $y$ then $(x,y)$ will be an edge of $M^*$ and the internal edges of $P$ will have been deleted from $\Gamma$.
\end{itemize}
Observe that the sequence ${\bf b}=(b(v))$ is determined by $V_{0,0},V_{0,1},Y_1,Y_2,Z_1,Y,Z$. If $Y_1\neq \emptyset$ then we choose $v\in Y_1$ and add the edge incident to $v$ to $M$, because doing so is not a mistake, i.e.\ there is a maximum size 2-matching of $\Gamma$ that contains this edge. If $Y_1=\emptyset$ and $Y_2\neq \emptyset$ then we choose $v\in Y_2$ and add the two edges incident to $v$ to $M$, because doing so is also not a mistake, i.e.\ there is a maximum size 2-matching of $\Gamma$ that contains these edges.
Similarly, if $Y_2=\emptyset$ and $Z_1\neq \emptyset$ we choose $v\in Z_1$ and add the unique edge of $\Gamma$ incident to $v$ to $M$. When we add an edge to $M$ it can cause vertices of $\Gamma$ to become internal vertices of paths of $M$ and be deleted from $\Gamma$. In particular, this happens to $v\in Z_1$ in the case just described. When $Y_1=Y_2=Z_1=\emptyset\neq Y$ we choose a random edge incident to a vertex of $Y$. In this way we hope to end up in a situation where $Y_2=Z_1=Y=\emptyset$ and $|Z|=\Omega(n)$. This has advantages that will be explained in Section \ref{noc}; however, we have only managed to prove that this happens \text{w.h.p.}\ when $c\geq 15$. When $Y_1=Y_2=Z_1=Y=\emptyset$ we are looking for a maximum matching in the graph $\Gamma$ that remains and we can use the results of \cite{CFM}.

We now give details of the steps of

\noindent {\bf Algorithm {\sc 2greedy}:}
\begin{description}
\item[Step 1(a): $Y_1\neq\emptyset$]\ \\
Choose a random vertex $v$ from $Y_1$. Suppose that its neighbour in $\Gamma$ is $w$. We add $(v,w)$ to $M$ and move $v$ to $V_{0,1}$.
\begin{enumerate}[(i)]
\item If $b(w)=0$ then we add $(v,w)$ to $M^*$. If $w$ is currently in $Y$ then move it to $Z$. If it is currently in $Y_1$ then move it to $V_{0,1}$. If it is currently in $Y_2$ then move it to $Z_1$. Call this {\em re-assigning} $w$.
\item If $b(w)=1$ let $u$ be the other end point of the path $P$ of $M$ that contains $w$. We remove $(w,u)$ from $M^*$ and replace it with $(v,u)$. We move $w$ to $V_{0,1}$ and make the requisite changes due to the loss of other edges incident with $w$. Call this {\em tidying up}.
\end{enumerate}
\item[Step 1(b): $Y_1=\emptyset$ and $Y_2\neq\emptyset$]\ \\
Choose a random vertex $v$ from $Y_2$. Suppose that its neighbours in $\Gamma$ are $w_1,w_2$. If $w_1=w_2=v$ then we simply delete $v$ from $\Gamma$. (We are dealing with loops because we are analysing the algorithm within the context of $G_{\bf x}$. This case is of course unnecessary when the input is simple, i.e.\ for $G_{n,m}^{\delta\geq k}$.) Continuing with the most likely case, we move $v$ to $V_{0,1}$. We delete the edges $(v,w_1),(v,w_2)$ from $\Gamma$ and place them into $M$. In addition,
\begin{enumerate}[(i)]
\item If $b(w_1)=b(w_2)=0$ then we add $(w_1,w_2)$ to $M^*$ and put $b(w_1)=b(w_2)=1$. Re-assign $w_1,w_2$.
\item If $b(w_1)=b(w_2)=1$ let $u_i$, $i=1,2$, be the other end points of the paths $P_1,P_2$ of $M$ that contain $w_1,w_2$ respectively. There are now two possibilities:
\begin{enumerate}[(1)]
\item $u_1=w_2$ and $u_2=w_1$. In this case, adding the two edges creates a cycle $C=(v,w_1,P_1,w_2,v)$ and we delete the edge $(w_1,w_2)$ from $M^*$. Vertices $w_1,w_2$ are deleted from $\Gamma$. The rest of $C$ has already been deleted. Tidy up.
\item $u_1\neq w_2$ and $u_2\neq w_1$. Adding the two edges adds a path $(u_1,P_1',w_1,v,w_2,P_2,u_2)$ to $M$, where $P_1'$ is the reversal of $P_1$. We delete the edges $(w_1,u_1),(w_2,u_2)$ from $M^*$ and add $(u_1,u_2)$ in their place. Vertices $w_1,w_2$ are deleted from $\Gamma$. Tidy up.
\end{enumerate}
\item If $b(w_1)=0$ and $b(w_2)=1$ let $u_2$ be the other end point of the path $P_2$ of $M$ that contains $w_2$. We delete $(w_2,u_2)$ from $M^*$ and replace it with $(w_1,u_2)$.
We put $b(w_1)=1$ and re-assign it, and delete vertex $w_2$ from $\Gamma$. Tidy up.
\end{enumerate}
\item[Step 1(c): $Y_2=\emptyset$ and $Z_1\neq\emptyset$]\ \\
Choose a random vertex $v$ from $Z_1$. Let $u$ be the other endpoint of the path $P$ of $M$ that contains $v$. Let $w$ be the unique neighbour of $v$ in $\Gamma$. We delete $v$ from $\Gamma$ and add the edge $(v,w)$ to $M$. In addition there are two cases.
\begin{enumerate}[(1)]
\item If $b(w)=0$ then we delete $(v,u)$ from $M^*$ and replace it with $(w,u)$, put $b(w)=1$ and re-assign $w$.
\item If $b(w)=1$ then let $u'$ be the other end-point of the path containing $w$ in $M$. If $u'\neq v$ then we delete vertex $w$ and the edge $(u',w)$ from $M^*$ and replace it with $(u',v)$. Tidy up. If $u'=v$ then we have created a cycle $C$ and we delete it from $\Gamma$ as in Step 1(b)(ii)(1).
\end{enumerate}
\item[Step 2: $Y_1=Y_2=Z_1=\emptyset$ and $Y\neq\emptyset$]\ \\
Choose a random edge $(v,w)$ incident with a vertex $v\in Y$. We delete the edge $(v,w)$ from $\Gamma$ and add it to $M$. We put $b(v)=1$ and move it from $Y$ to $Z$. There are two cases.
\begin{enumerate}[(i)]
\item If $b(w)=0$ then put $b(w)=1$ and move it from $Y$ to $Z$. We add the edge $(v,w)$ to $M^*$.
\item If $b(w)=1$ let $u$ be the other end point of the path in $M$ containing $w$. We delete vertex $w$ and the edge $(u,w)$ from $M^*$ and replace it with $(u,v)$. Tidy up.
\end{enumerate}
\item[Step 3: $Y_1=Y_2=Z_1=Y=\emptyset$]\ \\
At this point $\Gamma$ will be seen to be distributed as $G_{\nu,\mu}^{\delta\geq 2}$ for some $\nu,\mu$ where $\mu=O(\nu)$.
As such, it contains a (near) perfect matching $M^{**}$ \cite{FP} and it can be found in $O(\nu)$ expected time \cite{CFM}.
\end{description}
The output of {\sc 2greedy}\ is the set of edges in $M\cup M^{**}$.

No explicit mention has been made of vertices contributing to $V_{0,0}$. When we tidy up after removing a vertex $w$, any vertex whose sole neighbour is $w$ will be placed in $V_{0,0}$.

\section{Uniformity}\label{uni}
In the previous section, we described the action of the algorithm as applied to $\Gamma$. In order to prove a uniformity property, it is as well to consider the changes induced by the algorithm in terms of {\bf x}. When an edge is removed we will replace it in ${\bf x}$ by a pair of $\star$'s. This goes for all of the edges removed at an iteration, not just the edges of the 2-matching $M$. Thus at the end of this and subsequent iterations we will have a sequence in $\Lambda=([n]\cup \{\star\})^{2m}$ where for all $i$, $x_{2i-1}=\star$ if and only if $x_{2i}=\star$. We call such sequences {\em proper}. We use the same notation as in Section \ref{mod}. Let $S=S({\bf x})=\{i:x_{2i-1}=x_{2i}=\star\}$.
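The star-replacement bookkeeping can be sketched in a few lines of Python (an illustration of ours, with $\star$ represented by \texttt{None}); note that the number of edges of $G_{\bf x}$ is then $m-|S({\bf x})|$:

```python
# Illustrative sketch (ours): a proper sequence x in ([n] ∪ {star})^{2m},
# stored as a list with star = None. Deleted edges occupy the pairs
# (x_{2i-1}, x_{2i}), i.e. positions (2i-2, 2i-1) in 0-based indexing.
STAR = None

def is_proper(x):
    # stars must occur in aligned pairs
    return all((x[2 * i] is STAR) == (x[2 * i + 1] is STAR)
               for i in range(len(x) // 2))

def S(x):
    # the (1-based) indices i of pairs that have been replaced by stars
    return {i + 1 for i in range(len(x) // 2)
            if x[2 * i] is STAR and x[2 * i + 1] is STAR}

x = [1, 2, STAR, STAR, 3, 1]       # m = 3 pairs; pair 2 has been deleted
assert is_proper(x) and S(x) == {2}
assert len(x) // 2 - len(S(x)) == 2   # remaining edge count: m - |S|
```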
Note that the number of edges $\mu$ in $G_{\bf x}$ is given by
$$\mu=m-|S|.$$
For a tuple ${\bf v}=(V_{0,0},V_{0,1},Y_1,Y_2,Z_1,Y,Z,S)$ we let $\Lambda_{{\bf v}}$ denote the set of pairs $({\bf x},{\bf b})$ where ${\bf x}\in \Lambda$ is proper and
\begin{itemize}
\item $V_{0,j}=\set{v\in [n]:d_{\bf x}(v)=0,\,b(v)=j}$, $j=0,1$,
\item $Y_k=\set{v\in [n]:d_{\bf x}(v)=k\text{ and }b(v)=0}$, $k=1,2$,
\item $Z_1=\set{v\in [n]:d_{\bf x}(v)=1\text{ and }b(v)=1}$,
\item $Y=\set{v\in [n]:d_{\bf x}(v)\geq 3\text{ and }b(v)=0}$,
\item $Z=\set{v\in [n]:d_{\bf x}(v)\geq2\text{ and }b(v)=1}$,
\item $S=S({\bf x})$.
\end{itemize}
(Recall that {\bf b}\ is determined by {\bf v}.) For vectors ${\bf x},{\bf b}$ we define ${\bf v}({\bf x},{\bf b})$ by $({\bf x},{\bf b})\in \Lambda_{{\bf v}({\bf x},{\bf b})}$. We also use the notation ${\bf x}\in \Lambda_{{\bf v}({\bf x})}$ when the second component {\bf b}\ is assumed. Given two sequences ${\bf x},{\bf x}'\in \Lambda$, we say that ${\bf x}'\subseteq {\bf x}$ if $x_j=\star$ implies $x_j'=\star$. In this case we define ${\bf y}={\bf x}-{\bf x}'$ by
$$y_j=\begin{cases} x_j&\text{if }x_j\neq \star=x_j'\\ \star&\text{otherwise.} \end{cases}$$
Thus ${\bf y}$ records the changes in going from {\bf x}\ to ${\bf x}'$. Given two sequences ${\bf x},{\bf x}'\in \Lambda$ we say that ${\bf x},{\bf x}'$ are {\em disjoint} if $x_j\neq\star$ implies that $x_j'=\star$.
In this case we define ${\bf y}={\bf x}+{\bf x}'$ by
$$y_j=\begin{cases} x_j&\text{if }x_j\neq \star\\ x_j'&\text{if }x_j'\neq \star\\ \star&\text{otherwise.} \end{cases}$$
Thus,
\beq{1-1}
\mbox{if ${\bf x}'\subseteq {\bf x}$ then ${\bf x}'$ and ${\bf x}-{\bf x}'$ are disjoint and ${\bf x}={\bf x}'+({\bf x}-{\bf x}')$.}
\eeq
Suppose now that $({\bf x}(0),{\bf b}(0)),({\bf x}(1),{\bf b}(1)),\ldots,({\bf x}(t),{\bf b}(t))$ is the sequence of pairs representing the graphs constructed by the algorithm {\sc 2greedy}. Here ${\bf x}(i-1)\supseteq {\bf x}(i)$ for $i\geq 1$ and so we can define ${\bf y}(i)={\bf x}(i-1)-{\bf x}(i)$. Suppose that ${\bf v}(i)={\bf v}({\bf x}(i))$ for $1\leq i\leq t$ where ${\bf v}(0)=(\emptyset,\emptyset,\emptyset,\emptyset,\emptyset,[n],\emptyset,\emptyset)$ and ${\bf b}(0)={\bf 0}$. Let
$$\Lambda_{{\bf v}\mid {\bf b}}=\set{{\bf x}:({\bf x},{\bf b})\in \Lambda_{\bf v}}.$$
\begin{lemma}\label{lem2}
Suppose that ${\bf x}(0)$ is a random member of $\Lambda_{{\bf v}(0)\mid{\bf b}(0)}$. Then given ${\bf v}(0),{\bf v}(1),\ldots,{\bf v}(t)$, the vector ${\bf x}(t)$ is a random member of $\Lambda_{{\bf v}(t)\mid{\bf b}(t)}$ for all $t\geq 0$; that is, the distribution of ${\bf x}(t)$ is uniform, conditional on the edges deleted in the first $t$ steps. (Note that ${\bf b}(t)$ is fixed by ${\bf v}(t)$ here.)
\end{lemma}
{\bf Proof\hspace{2em}} We prove this by induction on $t$. It is trivially true for $t=0$. Fix $t\geq 0,{\bf x}(t),{\bf b}(t),{\bf x}(t+1),{\bf b}(t+1)$.
We define a sequence ${\bf x}(t)={\bf z}_1,{\bf z}_2,\ldots,{\bf z}_s={\bf x}(t+1)$ where ${\bf z}_{i+1}$ is obtained from ${\bf z}_i$ by a {\em basic step}.

\noindent {\bf Basic Step}: Given ${\bf x},{\bf b}$ and ${\bf v}={\bf v}({\bf x},{\bf b})$ we create new sequences ${\bf x}'=A_j({\bf x}),{\bf b}'=B_j({\bf b})$ and ${\bf v}'={\bf v}({\bf x}',{\bf b}')$. Let ${\bf w}={\bf x}-{\bf x}'$. A basic step corresponds to replacing the edge $(w_{2j-1},w_{2j})$ by an edge of the matching $M$, for some index $j$. Let $u=w_{2j-1},v=w_{2j}$.
\begin{description}
\item[Case 1:] Here we assume $b(u)=b(v)=0$.\\
Replace $x_{2j-1},x_{2j}$ by $\star$'s and put $b(u)=b(v)=1$.
\item[Case 2:] Here we assume $b(u)=0,\,b(v)=1$.\\
Replace $x_{2k-1},x_{2k}$ by $\star$'s for every $k$ such that $v\in \set{x_{2k-1},x_{2k}}$ and put $b(u)=1$.
\item[Case 3:] Here we assume $b(u)=b(v)=1$.\\
Replace $x_{2k-1},x_{2k}$ by $\star$'s for every $k$ such that $\set{u,v}\cap \set{x_{2k-1},x_{2k}} \neq\emptyset$.
\end{description}
\begin{claim}\label{cl11}
Suppose that ${\bf x}'=A_j({\bf x})$ and ${\bf y}={\bf x}-{\bf x}'$ and ${\bf b}'=B_j({\bf b})$. Then the map $\phi:{\bf z}\in \Lambda_{{\bf v}({\bf x},{\bf b})}^{{\bf y}}\to({\bf z}-{\bf y},{\bf b}')$ is 1-1 and each $({\bf z}',{\bf b}')\in \Lambda_{{\bf v}({\bf x}',{\bf b}')}$ is the image under $\phi$ of a unique member of $\Lambda_{{\bf v}({\bf x},{\bf b})}^{\bf y}$, where $\Lambda_{{\bf v}({\bf x},{\bf b})}^{\bf y}=\set{({\bf z},{\bf b})\in \Lambda_{{\bf v}({\bf x},{\bf b})}:\;{\bf z}\supseteq{\bf y}}$.
\end{claim}
{\bf Proof of Claim \ref{cl11}}. Equation \eqref{1-1} implies that $\phi$ is 1-1. Let ${\bf v}={\bf v}({\bf x},{\bf b})$ and ${\bf v}'={\bf v}({\bf x}',{\bf b}')$.
Choose $({\bf w},{\bf b}')\in \Lambda_{{\bf v}'}$. Because $S'$ is determined by ${\bf v}'$, we see that {\bf y}\ and {\bf w}\ are necessarily disjoint and we simply have to check that if ${\bf x}^*={\bf w}+{\bf y}$ then $({\bf x}^*,{\bf b})\in \Lambda_{{\bf v}}$. But in all cases, ${\bf v}({\bf x}^*,{\bf b})$ is determined by ${\bf v}'$ and ${\bf y}$, and this implies that ${\bf v}({\bf x}^*,{\bf b})={\bf v}({\bf x},{\bf b})$. This statement is the crux of the proof and we should perhaps justify it a little more. Suppose then that we are given ${\bf v}'$ (and hence ${\bf b}'$) and ${\bf y}$ and {\bf b}. Observe that this determines $d_{{\bf x}^*}(v)$ for all $v\in V_{0,0}'\cup V_{0,1}'\cup Y_1'\cup Y_2'\cup Z_1'$. Together with $b(v)$ this determines the place of $v$ in the partition defined by {\bf v}. Now $Y'\subseteq Y$ and it only remains to deal with $v\in Z'$. If $d_{\bf y}(v)>0$ then $v\in Y\cup Z$ and $b(v)$ determines which of the sets $v$ is in. If $d_{\bf y}(v)=0$ and $b(v)=1$ then $v\in Z$. If $d_{\bf y}(v)=0$ and $b(v)=0$ then $v\in Y$. This is because $b(v)=0$ and $b'(v)=1$ implies that we have put one of the edges incident with $v$ into $M$.\\
{\bf End of proof of Claim \ref{cl11}}

The claim implies (inductively) that if ${\bf x}$ is a uniform random member of $\Lambda_{{\bf v}\mid{\bf b}}$ and we do a sequence of basic steps involving the ``deletion'' of ${\bf y}_1,{\bf y}_2,\ldots,{\bf y}_s$ where ${\bf y}_{i+1}\subseteq {\bf x}-{\bf y}_1-\cdots-{\bf y}_i$, then ${\bf x}'={\bf x}-{\bf y}_1-\cdots-{\bf y}_s$ is a uniform random member of $\Lambda_{{\bf v}'\mid{\bf b}'}$, where ${\bf v}'={\bf v}({\bf x}',{\bf b}')$ for some ${\bf b}'$. This will imply Lemma \ref{lem2} once we check that a step of {\sc 2greedy}\ can be broken into basic steps. First consider Step 1(a).
First we choose a vertex $x\in Y_1$. Then we apply Case 1 or 2 with probabilities determined by {\bf v}. Now consider Step 1(b). First we choose a vertex $x\in Y_2$. We can then replace one of the edges incident with $x$ by a matching edge. We apply Case 1 or Case 2 with probabilities determined by {\bf v}. After this we apply Case 2 or Case 3 with probabilities determined by {\bf v}. For Step 1(c) we apply one of Case 2 or Case 3 with probabilities determined by {\bf v}. For Step 2, we apply one of Case 1 or Case 2 with probabilities determined by {\bf v}. This completes the proof of Lemma \ref{lem2}. \hspace*{\fill}\mbox{$\Box$}

As a consequence we have:
\begin{lemma}\label{lem3a}
The random sequence ${\bf v}(t),\,t=0,1,2,\ldots,$ is a Markov chain.
\end{lemma}
{\bf Proof\hspace{2em}} Slightly abusing notation,
\begin{align*}
\mathbb{P}({\bf v}(t+1)\mid {\bf v}(0),\ldots,{\bf v}(t))
&=\sum_{{\bf w}'\in \Lambda_{{\bf v}(t+1)}}\mathbb{P}({\bf w}'\mid {\bf v}(0),\ldots,{\bf v}(t))\\
&=\sum_{{\bf w}'\in \Lambda_{{\bf v}(t+1)}}\sum_{{\bf w}\in \Lambda_{{\bf v}(t)}}\mathbb{P}({\bf w}',{\bf w}\mid {\bf v}(0),\ldots,{\bf v}(t))\\
&=\sum_{{\bf w}'\in \Lambda_{{\bf v}(t+1)}}\sum_{{\bf w}\in \Lambda_{{\bf v}(t)}} \mathbb{P}({\bf w}'\mid {\bf v}(0),\ldots,{\bf v}(t-1),{\bf w}) \mathbb{P}({\bf w}\mid{\bf v}(0),\ldots,{\bf v}(t))\\
&=\sum_{{\bf w}'\in \Lambda_{{\bf v}(t+1)}}\sum_{{\bf w}\in \Lambda_{{\bf v}(t)}}\mathbb{P}({\bf w}'\mid {\bf w})|\Lambda_{{\bf v}(t)}|^{-1},\qquad\text{using Lemma \ref{lem2},}
\end{align*}
which depends only on ${\bf v}(t),{\bf v}(t+1)$.
\hspace*{\fill}\mbox{$\Box$}

We now let
$$|{\bf v}|=(|V_{0,0}|,|V_{0,1}|,|Y_1|,|Y_2|,|Z_1|,|Y|,|Z|,|S|).$$
Then we let $\Lambda_{|{\bf v}|}$ denote the set of $({\bf x},{\bf b})\in \Lambda$ with $|{\bf v}({\bf x},{\bf b})|=|{\bf v}|$ and we let $\Lambda_{|{\bf v}|\,\mid{\bf b}}=\set{{\bf x}:({\bf x},{\bf b})\in \Lambda_{|{\bf v}|}}$. It then follows from Lemma \ref{lem3a}, by symmetry, that:
\begin{lemma}\label{|lem2|}
The random sequence $|{\bf v}(t)|,\,t=0,1,2,\ldots,$ is a Markov chain.
\end{lemma}
A component of a graph is {\em trivial} if it consists of a single isolated vertex.
\begin{lemma}\label{few}
{\bf Whp} the number of non-trivial components of the graph induced by $M\cup M^{**}$ is $O(\log n)$.
\end{lemma}
{\bf Proof\hspace{2em}} Lemma 3 of Frieze and {\L}uczak \cite{FL} proves that \text{w.h.p.}\ the union of two random (near) perfect matchings of $[n]$ has at most $3\log n$ components. Lemma \ref{lem2} implies that at the end of Phase 1, $\Gamma$ is a copy of $G_{\nu,\mu}^{\delta\geq 2}$, independent of $M^*$. In this case the (near) perfect matching of $\Gamma$ is independent of $M^*$ and we can apply \cite{FL}. \hspace*{\fill}\mbox{$\Box$}

\section{Conditional expected changes}\label{cec}
We now set up a system of differential equations that closely describe the path taken by the parameters of Algorithm {\sc 2greedy}, as applied to $G_{\bf x}$ where ${\bf x}$ is chosen randomly from $[n]^{2m}_{\emptyset,[n];0}$. We introduce the following notation: at some point in the algorithm, the state of $\Gamma$ is described by ${\bf x}\in [n]^{2M}_{J_2,J_3;D}$, together with an indicator vector {\bf b}.
We let $y_i=\card{\set{v:d_{\bf x}(v)=i\text{ and }b(v)=0}}$ and $z_i=\card{\set{v:d_{\bf x}(v)=i\text{ and }b(v)=1}}$ for $i\ge0$. We let $y=\sum_{i\ge 3}y_i$ and $z=\sum_{i\ge 2}z_i$ and let $2\mu=\sum_{i\geq 0}i(y_i+z_i)$ be the total degree. Thus in the notation of Section \ref{alg} we have $y_i=|Y_i|$, $i=1,2$, $J_3=Y$, $N_3=y$, $z_1=|Z_1|$, $J_2=Z$, $N_2=z$, $D=y_1+2y_2+z_1$, $M=\mu$. Then it follows from Lemma \ref{lem4x} that, as long as $(y+z)\lambda=\Omega(\log^2n)$, we have \text{q.s.},
\begin{equation}\label{1}
y_k \approx \frac{y\lambda^k}{k!f_3(\lambda)},\,\, (k\ge 3);\quad z_k\approx\,\frac{z\lambda^k}{k!f_{2}(\lambda)},\,\,(k\ge 2).
\end{equation}
Here $\lambda$ is the root of
\begin{equation}\label{3}
y\,\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+z\,\frac{\lambda f_1(\lambda)}{f_2(\lambda)}=2\mu-y_1-2y_2-z_1.
\end{equation}

{\bf Notational Convention:} There are a large number of parameters that change as {\sc 2greedy}\ progresses. Our convention will be that if we write a parameter $\xi$ then by default it means $\xi(t)$, the value of $\xi$ after $t$ steps of the algorithm. Thus the initial value of $\xi$ will be $\xi(0)$. When $\xi$ is evaluated at a different point, we make this explicit.

We now keep track of the expected changes in ${\bf v}=(y_1,y_2,y,z_1,z,\mu)$ due to one step of {\sc 2greedy}. These expectations are conditional on the current values of ${\bf b}$ and the degree sequence {\bf d}. We let $N=y+z$, which is a small departure from the notation of Section \ref{mod}.
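Since the left side of \eqref{3} is increasing in $\lambda$, the root can be found by bisection. The following Python sketch (our own illustration, with hypothetical function names; $f_j(\lambda)=\sum_{k\ge j}\lambda^k/k!$) does this:

```python
# Illustrative sketch (ours): solving equation (3) for lambda by bisection.
# f(j, lam) = sum_{k >= j} lam^k / k!, the truncated exponential series.
import math

def f(j, lam):
    return math.exp(lam) - sum(lam ** k / math.factorial(k) for k in range(j))

def solve_lambda(y, z, rhs, lo=0.0, hi=50.0, iters=100):
    """Root of y*lam*f2/f3 + z*lam*f1/f2 = rhs; the LHS increases in lam
    (it is y + z times conditioned-Poisson means), so bisection applies.
    As lam -> 0 the LHS tends to 3y + 2z, so we need rhs >= 3y + 2z."""
    def g(lam):
        return y * lam * f(2, lam) / f(3, lam) + z * lam * f(1, lam) / f(2, lam) - rhs
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For instance, with $y=1$, $z=0$ and right-hand side $f_2(1)/f_3(1)$, the solver recovers $\lambda=1$.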
In the following sequence of equations, $\xi'=\xi(t+1)$ represents the value of parameter $\xi$ after the corresponding step of {\sc 2greedy}.
\begin{lemma}\label{lemchanges}
The following are the expected one step changes in the parameters $(y_1,y_2,y,z_1,z,\mu)$. We compute them conditional on the degree sequence {\bf d}\ and on $|{\bf v}|$. We give both, because the first are more transparent and the second are what is needed. The error terms $\varepsilon_{?}$ are a consequence of multi-edges and we will argue that they are small. We take
$$N=y+z.$$
{\bf Step 1.\/} $y_1+y_2+z_1>0$.

{\bf Step 1(a)\/}. $y_1>0$.
\begin{eqnarray}
\mathbb{E}[y_1'-y_1\mid {\bf b},{\bf d}]&=&-1-\brac{\frac{y_1}{2\mu}+ \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{y_1}{2\mu}} +\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2y_2}{2\mu}+\varepsilon_{\ref{04x}}.\label{04x}\\
\mathbb{E}[y_1'-y_1\mid |{\bf v}|]&=&-1-\frac{y_1}{2\mu}-\frac{y_1z}{4\mu^2}\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +\frac{y_2z}{2\mu^2}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)}+O\bfrac{\log^2N}{\lambda N}.\label{04xq}\\
\nonumber\\
\mathbb{E}[y_2'-y_2\mid {\bf b},{\bf d}]&=&-\brac{\frac{2y_2}{2\mu}+ \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2y_2}{2\mu}} +\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu}+\varepsilon_{\ref{04}}.\label{04}\\
\mathbb{E}[y_2'-y_2\mid |{\bf v}|]&=&-\frac{y_2}{\mu}- \frac{y_2z}{2\mu^2}\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +\frac{yz}{8\mu^2}\frac{\lambda^3}{f_3(\lambda)}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\lambda N}.\label{04q}\\
\nonumber\\
\mathbb{E}[z_1'-z_1\mid {\bf b},{\bf d}]&=&-\brac{\frac{z_1}{2\mu}+ \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{z_1}{2\mu}} +\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu}+\varepsilon_{\ref{05}}.\label{05}\\
\mathbb{E}[z_1'-z_1\mid |{\bf v}|]&=&-\frac{z_1}{2\mu} -\frac{z_1z}{4\mu^2}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +\frac{z^2}{4\mu^2}\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2}+O\bfrac{\log^2N}{\lambda N}.\label{05q}\\
\nonumber\\
\mathbb{E}[y'-y\mid {\bf b},{\bf d}]&=&-\brac{\sum_{k\ge 3}\frac{ky_k}{2\mu}+ \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu}}+\varepsilon_{\ref{06}}.\label{06}\\
\mathbb{E}[y'-y\mid |{\bf v}|]&=&-\frac{y}{2\mu}\,\frac{\lambda f_2(\lambda)}{f_3(\lambda)}-\frac{yz}{8\mu^2}\,\frac{\lambda^3}{f_3(\lambda)}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)}+O\bfrac{\log^2N}{\lambda N}.\label{06q}\\
\nonumber\\
\mathbb{E}[z'-z\mid {\bf b},{\bf d}]&=&\sum_{k\geq 3}\frac{ky_k}{2\mu}-\sum_{k\ge 2} \frac{kz_k}{2\mu}-\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu} +\varepsilon_{\ref{07}}.\label{07}\\
\mathbb{E}[z'-z\mid |{\bf v}|]&=&\frac{y}{2\mu} \frac{\lambda f_2(\lambda)}{f_3(\lambda)}-\frac{z}{2\mu}\frac{\lambda f_1(\lambda)}{f_2(\lambda)}-\frac{z^2}{4\mu^2}\,\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2} +O\bfrac{\log^2N}{\lambda N}.\label{07q}\\
\nonumber\\
\mathbb{E}[\mu'-\mu\mid {\bf b},{\bf d}]&=&-1-\sum_{k\ge 2}\frac{kz_k}{2\mu}(k-1)+\varepsilon_{\ref{08a}}.\label{08a}\\
\mathbb{E}[\mu'-\mu\mid|{\bf v}|]&=&-1-\frac{z}{2\mu}\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\lambda N}.\label{08q}
\end{eqnarray}
{\bf Step 1(b)\/}. $y_1=0,y_2>0$.
\begin{eqnarray}
\mathbb{E}[y_1'-y_1\mid {\bf b},{\bf d}]&=& 2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2y_2}{2\mu}+\varepsilon_{\ref{4x}}.\label{4x}\\
\mathbb{E}[y_1'-y_1\mid |{\bf v}|]&=&\frac{y_2z}{\mu^2}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\lambda N}.\label{4xq}\\
\nonumber\\
\mathbb{E}[y_2'-y_2\mid {\bf b},{\bf d}]&=&-1-2\brac{\frac{2y_2}{2\mu}+ \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2y_2}{2\mu}} +2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu}+\varepsilon_{\ref{4}}.\label{4}\\
\mathbb{E}[y_2'-y_2\mid |{\bf v}|]&=&-1-\frac{2y_2}{\mu}-\frac{y_2z}{\mu^2}\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +\frac{yz}{4\mu^2}\frac{\lambda^3}{f_3(\lambda)}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\lambda N}.\label{4q}\\
\nonumber\\
\mathbb{E}[z_1'-z_1\mid {\bf b},{\bf d}]&=&-2\brac{\frac{z_1}{2\mu}+ \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{z_1}{2\mu}} +2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu}+\varepsilon_{\ref{5}}.\label{5}\\
\mathbb{E}[z_1'-z_1\mid |{\bf v}|]&=&-\frac{z_1}{\mu}-\frac{z_1z}{2\mu^2}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +\frac{z^2}{2\mu^2}\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2} +O\bfrac{\log^2N}{\lambda N}.\label{5q}\\
\nonumber\\
\mathbb{E}[y'-y\mid {\bf b},{\bf d}]&=&-2\brac{\sum_{k\ge 3}\frac{ky_k}{2\mu}+ \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu}}+\varepsilon_{\ref{6}}.\label{6}\\
\mathbb{E}[y'-y\mid |{\bf v}|]&=&-\frac{y}{\mu}\,\frac{\lambda f_2(\lambda)}{f_3(\lambda)}-\frac{yz}{4\mu^2}\,\frac{\lambda^3}{f_3(\lambda)}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\lambda N}.\label{6q}\\
\nonumber\\
\mathbb{E}[z'-z\mid {\bf b},{\bf d}]&=&2\brac{\sum_{k\geq 3}\frac{ky_k}{2\mu}- \sum_{k\ge 2}\frac{kz_k}{2\mu}- \sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu}}+\varepsilon_{\ref{7}}.\label{7}\\
\mathbb{E}[z'-z\mid |{\bf v}|]&=&\frac{y}{\mu} \frac{\lambda f_2(\lambda)}{f_3(\lambda)}-\frac{z}{\mu}\frac{\lambda f_1(\lambda)}{f_2(\lambda)}-\frac{z^2}{2\mu^2}\,\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2} +O\bfrac{\log^2N}{\lambda N}.\label{7q}\\
\nonumber\\
\mathbb{E}[\mu'-\mu\mid {\bf b},{\bf d}]&=&-2-2\sum_{k\ge 2}\frac{kz_k}{2\mu}(k-1)+\varepsilon_{\ref{8a}}.\label{8a}\\
\mathbb{E}[\mu'-\mu\mid |{\bf v}|]&=&-2-\frac{z}{\mu}\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\lambda N}.\label{8aq}
\end{eqnarray}
{\bf Step 1(c).\/} $y_1=y_2=0,z_1>0$.
\betagin{eqnarray} \mathbb{E\/}[y_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_1\mid {\bf b},{\bf d}]&=&O\bfrac{1}{N}.\lambdabel{9x}\\ \mathbb{E\/}[y_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_1\mid |{\bf v}|]&=&O\bfrac{1}{N}.\lambdabel{9xq}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[y_2^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_2\mid {\bf b},{\bf d}]&=&\par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu} +\varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{9}}.\lambdabel{9}\\ \mathbb{E\/}[y_2^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_2\mid |{\bf v}|]&=&{\frac{yz}{8\mu^2}\,\frac{\lambda^3}{f_3(\lambda)}\, \frac{\lambda^2f_0(\lambda)}{f_2(\lambda)}} +O\bfrac{\log^2N}{\l N}.\lambdabel{9q}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[z_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-z_1\mid {\bf b},{\bf d}]&=&-1-\frac{z_1}{2\mu}- \par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{z_1}{2\mu} +\par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu}+\varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{10}}.\lambdabel{10}\\ \mathbb{E\/}[z_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-z_1\mid |{\bf v}|]&=&-1-\frac{z_1}{2\mu}- \frac{z_1z}{4\mu^2}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +\frac{z^2}{4\mu^2}\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2} +O\bfrac{\log^2N}{\l N}.\lambdabel{10q}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[y^\tau}\def\hT{\hat{T}ext{ P\/}ime-y\mid {\bf b},{\bf d}]&=&-\par \nuoindentgmaum_{k\ge 3}\frac{ky_k}{2\mu}- \par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu}+\varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{6a}}.\lambdabel{6a}\\ \mathbb{E\/}[y^\tau}\def\hT{\hat{T}ext{ P\/}ime-y\mid |{\bf v}|]&=&-\frac{y}{2\mu}\, \frac{\lambda f_2(\lambda)}{f_3(\lambda)}-\frac{yz}{8\mu^2}\,\frac{\lambda^3}{f_3(\lambda)}\, \,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\l N}.\lambdabel{6aq}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[z^\tau}\def\hT{\hat{T}ext{ P\/}ime-z\mid {\bf b},{\bf d}]&=&\par 
\nuoindentgmaum_{k\geq 3}\frac{ky_k}{2\mu}-\par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}- \par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu}+\varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{7a}}.\lambdabel{7a}\\ \mathbb{E\/}[z^\tau}\def\hT{\hat{T}ext{ P\/}ime-z\mid |{\bf v}|]&=&\frac{y}{2\mu}\,\frac{\lambda f_2(\lambda)}{f_3(\lambda)} -\frac{z}{2\mu}\frac{\lambda f_1(\lambda)}{f_2(\lambda)} -\frac{z^2}{4\mu^2}\,\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2} +O\bfrac{\log^2N}{\l N}.\lambdabel{7aq}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[\mu^\tau}\def\hT{\hat{T}ext{ P\/}ime-\mu\mid {\bf b},{\bf d}]&=&-1-\par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}(k-1)+\varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{8aa}}.\lambdabel{8aa}\\ \mathbb{E\/}[\mu^\tau}\def\hT{\hat{T}ext{ P\/}ime-\mu\mid |{\bf v}|]&=&-1-\frac{z}{2\mu}\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)} +O\bfrac{\log^2N}{\l N}.\lambdabel{8aaq} \varepsilonnd{eqnarray} {\bf Step 2.\/} $y_1=y_2=z_1=0$. \betagin{eqnarray} \mathbb{E\/}[y_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_1\mid {\bf b},{\bf d}]&=&O\bfrac{1}{N}.\lambdabel{11x}\\ \mathbb{E\/}[y_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_1\mid |{\bf v}|]&=&O\bfrac{1}{N}.\lambdabel{11xq}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[y_2^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_2\mid {\bf b},{\bf d}]&=&\par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu}+ \varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{11}}.\lambdabel{11}\\ \mathbb{E\/}[y_2^\tau}\def\hT{\hat{T}ext{ P\/}ime-y_2\mid |{\bf v}|]&=&\frac{yz}{8\mu^2}\frac{\lambda^3}{f_3(\lambda)} \frac{\lambda^2f_0(\lambda)}{f_2(\lambda)}+O\bfrac{\log^2N}{\l N}. 
\lambdabel{11q}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[z_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-z_1\mid {\bf b},{\bf d}]&=&\par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu}+ \varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{12}}.\lambdabel{12}\\ \mathbb{E\/}[z_1^\tau}\def\hT{\hat{T}ext{ P\/}ime-z_1\mid |{\bf v}|]&=&\frac{z^2}{4\mu^2}\,\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2} +O\bfrac{\log^2N}{\l N}. \lambdabel{12q}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[y^\tau}\def\hT{\hat{T}ext{ P\/}ime-y\mid {\bf b},{\bf d}]&=&-1-\par \nuoindentgmaum_{k\ge 3}\frac{ky_k}{2\mu}- \par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{3y_3}{2\mu}+\varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{13}}.\lambdabel{13}\\ \mathbb{E\/}[y^\tau}\def\hT{\hat{T}ext{ P\/}ime-y\mid |{\bf v}|]&=&-1-\frac{y}{2\mu}\,\frac{\lambda f_2(\lambda)}{f_3(\lambda)}-\frac{yz}{8\mu^2}\,\frac{\lambda^3}{f_3(\lambda)}\, \,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)}+O\bfrac{\log^2N}{\l N}. \lambdabel{13q}\\ \nuonumber\\ \nuonumber\\ \mathbb{E\/}[z^\tau}\def\hT{\hat{T}ext{ P\/}ime-z\mid {\bf b},{\bf d}]&=&1-\par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}- \par \nuoindentgmaum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)\frac{2z_2}{2\mu} +\par \nuoindentgmaum_{k\ge 3}\frac{ky_k}{2\mu}+\varepsilon_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigmaef{14}}.\lambdabel{14}\\ \mathbb{E\/}[z^\tau}\def\hT{\hat{T}ext{ P\/}ime-z\mid |{\bf v}|]&=&1-\frac{z}{2\mu}\frac{\lambda f_1(\lambda)}{f_2(\lambda)}-\frac{z^2}{4\mu^2}\,\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2}+\frac{y}{2\mu} \,\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+O\bfrac{\log^2N}{\l N}. 
\label{14q}\\
\nonumber\\
\nonumber\\
\mathbb{E}[\mu'-\mu\mid {\bf b},{\bf d}]&=&-1-\sum_{k\ge 2}\frac{kz_k}{2\mu}(k-1)+\varepsilon_{\ref{15a}}.\label{15a}\\
\mathbb{E}[\mu'-\mu\mid |{\bf v}|]&=&-1-\frac{z}{2\mu}\frac{\lambda^2 f_0(\lambda)}{f_2(\lambda)}+O\bfrac{\log^2N}{\lambda N}.\label{15aq}
\end{eqnarray}
\end{lemma}
{\bf Proof\hspace{2em}} The verification of \eqref{04x} -- \eqref{15a} is long but straightforward. We verify \eqref{04x} and \eqref{04xq}, add a few comments, and trust that the reader is willing to accept or check the remainder.

Suppose without loss of generality that ${\bf x}$ is such that $x_1=v=1\in Y_1$. The remainder of ${\bf x}$ is a random permutation of $2m-2\mu$ $\star$'s and $2\mu-1$ values from $[n]$, where the number of times $j$ occurs is ${\bf d}_{\bf x}(j)$ for $j\in [n]$. The term $-1$ accounts for the deletion of $v$ from $\Gamma$. There is a probability $\frac{y_1}{2\mu-1}=\frac{y_1}{2\mu}+O\bfrac{1}{\mu}$ that $x_2\in Y_1$, and this accounts for the second term in \eqref{04x}. Observe next that there is a probability $\frac{kz_k}{2\mu-1}$ that $x_2\in Z_k$, $k\geq 2$, in which case another $k-1$ edges will be deleted. In expectation, the number of vertices in $Y_1$ lost by the deletion of one such edge is $\frac{y_1-1}{2\mu-3}$, and this accounts for the third term. On the other hand, each such edge has a $\frac{2y_2}{2\mu-3}$ probability of being incident with a vertex in $Y_2$. The deletion of such an edge will create a vertex in $Y_1$, and this explains the fourth term. We collect the errors from replacing $\mu$ by $\mu-1$ etc.\ into the last term. This gives a contribution of order $1/N$. The above analysis ignored the extra contributions due to multiple edges.
We can bound this by
\begin{equation}\label{eta}
\eta_{\ref{04x}}=\sum_{k\geq 3}\frac{kz_k}{2\mu}\sum_{\ell\geq 3}\frac{\ell y_\ell}{2\mu-1}\binom{k-1}{\ell-1}\bfrac{\ell}{2\mu-k}^{\ell-2}.
\end{equation}
To explain this, we assume $x_2\in Z_k$, which is accounted for by the first sum over $k$. Now, to create a vertex in $Y_1$, the removal of $x_2$ must delete $\ell-1$ of the edges incident with some vertex $y$ in $Y_\ell$. The term $\frac{\ell y_\ell}{2\mu-1}$ is the probability that the first of the chosen $\ell-1$ edges is incident with $y\in Y_\ell$, and the factor $\bfrac{\ell}{2\mu-k}^{\ell-2}$ bounds the probability that the remaining $\ell-2$ edges are incident with $y$.

To go from conditioning on ${\bf b},{\bf d}$ to conditioning on $|{\bf v}|$ we need to use the expected values of $y_k,z_\ell$ etc., conditional on ${\bf v}$. For this we use \eqref{f1} and \eqref{f2}.
We have, up to an error term $O\bfrac{\log^2N}{\lambda N}$,
\begin{eqnarray}
\mathbb{E}\left[\sum_{k\ge 3}ky_k\biggr||{\bf v}|\right]&=&\sum_{k\ge 3}k\,y\,\frac{\lambda^k}{k!f_3(\lambda)}=\frac{y\lambda}{f_3(\lambda)}\sum_{j\ge 2}\frac{\lambda^j}{j!}=y\,\frac{\lambda f_2(\lambda)}{f_3(\lambda)},\label{16}\\
\mathbb{E}\left[\sum_{k\ge 2}kz_k\biggr||{\bf v}|\right]&=&\sum_{k\ge 2}k\,z\,\frac{\lambda^k}{k!f_2(\lambda)}=\frac{z\lambda}{f_2(\lambda)}\sum_{j\ge 1}\frac{\lambda^j}{j!}=z\,\frac{\lambda f_1(\lambda)}{f_2(\lambda)},\label{16a}\\
\mathbb{E}\left[\sum_{k\ge 3}k(k-1)y_k\biggr||{\bf v}|\right]&=&\sum_{k\ge 3}k(k-1)\,y\,\frac{\lambda^k}{k!f_3(\lambda)}=\frac{y\lambda^2}{f_3(\lambda)}\sum_{j\ge 1}\frac{\lambda^j}{j!}=y\,\frac{\lambda^2f_1(\lambda)}{f_3(\lambda)},\label{17a}\\
\mathbb{E}\left[\sum_{k\ge 2}k(k-1)z_k\biggr||{\bf v}|\right]&=&\sum_{k\ge 2}k(k-1)\,z\,\frac{\lambda^k}{k!f_2(\lambda)}=\frac{z\lambda^2}{f_2(\lambda)}\sum_{j\ge 0}\frac{\lambda^j}{j!}=z\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)}.\label{17}
\end{eqnarray}
In particular, using \eqref{17} in \eqref{04x} we get \eqref{04xq}. The other terms are obtained in a similar fashion. We remark that we need to use \eqref{f2} when we deal with the products $z_ky_\ell$, $k\geq2$ and $\ell\ge3$. Since $k,\ell\leq\log n$ in \eqref{eta}, we see with the aid of \eqref{16} -- \eqref{17} that $\mathbb{E}[\eta_{\ref{04x}}\mid{\bf v}]=O(1/N)$.
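As an independent sanity check (not part of the proof), the closed forms in \eqref{16} and \eqref{17} can be compared against truncated sums; the normalisation $y=z=1$ and the truncation point $K=60$ below are arbitrary illustrative choices.

```python
import math

def f(k, lam):
    # f_k(lambda) = sum_{j >= k} lambda^j / j!, the truncated exponential series
    return math.exp(lam) - sum(lam**j / math.factorial(j) for j in range(k))

def sum16(lam, K=60):
    # left-hand side of (16) with y = 1: sum_{k>=3} k lambda^k / (k! f_3(lambda))
    return sum(k * lam**k / math.factorial(k) for k in range(3, K)) / f(3, lam)

def sum17(lam, K=60):
    # left-hand side of (17) with z = 1: sum_{k>=2} k(k-1) lambda^k / (k! f_2(lambda))
    return sum(k * (k - 1) * lam**k / math.factorial(k) for k in range(2, K)) / f(2, lam)

for lam in (0.5, 1.0, 3.0):
    assert abs(sum16(lam) - lam * f(2, lam) / f(3, lam)) < 1e-9   # matches (16)
    assert abs(sum17(lam) - lam**2 * f(0, lam) / f(2, lam)) < 1e-9  # matches (17)
```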
This bound holds for all the other error terms $\varepsilon$.
\hspace*{\fill}\mbox{$\Box$}

\subsection{Negative drift for $y_1,y_2,z_1$}
Algorithm {\sc 2greedy}\ tries to keep $y_1,y_2,z_1$ small by its selection in Step 1. We now verify that there is a negative drift in
$$\zeta=\zeta(t)=y_1+2y_2+z_1$$
in all cases of Step 1. This will enable us to show that \text{w.h.p.}\ $\zeta$ remains small throughout the execution of {\sc 2greedy}. Let
\begin{equation}\label{eqQ}
Q=Q({\bf v})=\frac{yz}{4\mu^2}\,\frac{\lambda^3}{f_3(\lambda)}\,\frac{\lambda^2f_0(\lambda)}{f_2(\lambda)}+\frac{z^2}{4\mu^2}\,\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2}.
\end{equation}
Then simple algebra gives
\begin{align}
&\mathbb{E}[\zeta'-\zeta\mid|{\bf v}|]=-(1-Q)-\brac{\zeta+y_2}\brac{\frac{1}{2\mu}+\frac{z\lambda^2f_0(\lambda)}{4\mu^2f_2(\lambda)}}+O\bfrac{\log^2N}{\lambda N}&Case\ 1(a)\label{C1a}\\
&\mathbb{E}[\zeta'-\zeta\mid|{\bf v}|]=-2(1-Q)-\zeta\brac{\frac{1}{\mu}+\frac{z\lambda^2f_0(\lambda)}{2\mu^2f_2(\lambda)}}+O\bfrac{\log^2N}{\lambda N}&Case\ 1(b)\label{C1b}\\
&\mathbb{E}[\zeta'-\zeta\mid|{\bf v}|]=-(1-Q)-\zeta\brac{\frac{1}{2\mu}+\frac{z\lambda^2f_0(\lambda)}{4\mu^2f_2(\lambda)}}+O\bfrac{\log^2N}{\lambda N}&Case\ 1(c)\label{C1c}
\end{align}
We will show
\begin{lemma}[Pittel]\label{alem}
\begin{equation}\label{1-Q}
\lambda>0\text{ implies }Q<1
\end{equation}
and
\begin{equation}\label{Qless}
Q=\begin{cases}O(\lambda^{-1}),&\lambda\to\infty,\\ 1-\Theta(\lambda^2),&\lambda\to 0.\end{cases}
\end{equation}
\end{lemma}
{\bf Proof\hspace{2em}} Now, by \eqref{3}, $Q<1$ is equivalent to
$$
yz\frac{\lambda^5f_0(\lambda)}{f_2(\lambda)f_3(\lambda)}+z^2\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2}<\left(y\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+z\frac{\lambda f_1(\lambda)}{f_2(\lambda)}\right)^2,
$$
or, introducing $x=y/z$,
\begin{equation}\label{Fxla}
F(x,\lambda):=\frac{x\frac{\lambda^5f_0(\lambda)}{f_2(\lambda)f_3(\lambda)}+\frac{\lambda^4 f_0(\lambda)}{f_2(\lambda)^2}}{\left(x\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+\frac{\lambda f_1(\lambda)}{f_2(\lambda)}\right)^2}<1,\quad\forall\,\lambda>0,\ x\ge 0.
\end{equation}
In particular, $F(\infty,\lambda)=0$.
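Before maximising over $x$, the claimed inequality \eqref{Fxla} can be probed numerically; the grid below is an ad hoc illustration, not part of the argument.

```python
import math

def f(k, lam):
    # f_k(lambda) = sum_{j >= k} lambda^j / j!
    return math.exp(lam) - sum(lam**j / math.factorial(j) for j in range(k))

def F(x, lam):
    # F(x, lambda) as defined in (Fxla), with x = y/z
    num = (x * lam**5 * f(0, lam) / (f(2, lam) * f(3, lam))
           + lam**4 * f(0, lam) / f(2, lam)**2)
    den = (x * lam * f(2, lam) / f(3, lam) + lam * f(1, lam) / f(2, lam))**2
    return num / den

lams = [i / 10 for i in range(1, 101)]   # lambda in (0, 10]
xs = [0.0, 0.1, 1.0, 10.0, 100.0]        # x = y/z >= 0
assert all(F(x, lam) < 1 for lam in lams for x in xs)
```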
Now
$$
F_x(x,\lambda)=\left(x\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+\frac{\lambda f_1(\lambda)}{f_2(\lambda)}\right)^{-4}G(x,\lambda),
$$
where
\begin{multline}\label{Gxla}
G(x,\lambda)=\frac{\lambda^5f_0(\lambda)}{f_2(\lambda)f_3(\lambda)}\left(x\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+\frac{\lambda f_1(\lambda)}{f_2(\lambda)}\right)^2\\
-2\left(x\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+\frac{\lambda f_1(\lambda)}{f_2(\lambda)}\right)\frac{\lambda f_2(\lambda)}{f_3(\lambda)}\left(x\frac{\lambda^5f_0(\lambda)}{f_2(\lambda)f_3(\lambda)}+\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2}\right).
\end{multline}
Notice that
$$
G(0,\lambda)=\lambda^6f_0(\lambda)f_1(\lambda)f_2(\lambda)^{-3}f_3(\lambda)^{-1}\bigl(\lambda f_1(\lambda)-2f_2(\lambda)\bigr)>0,
$$
as $\lambda f_1(\lambda)-2f_2(\lambda)>0$. Whence $F_x(0,\lambda)>0$, and as a function of $x$, $F(x,\lambda)$ attains its maximum at the root of $G(x,\lambda)=0$, which is
\begin{equation}\label{bars}
\bar x=\frac{f_3(\lambda)\bigl(\lambda f_1(\lambda)-2f_2(\lambda)\bigr)}{\lambda f_2(\lambda)^2}.
\end{equation}
Now, \eqref{Gxla} implies that $\bar{x}$ satisfies
\begin{equation}\label{gf1}
\bar{x}\frac{\lambda^5f_0(\lambda)}{f_2(\lambda)f_3(\lambda)}+\frac{\lambda^4f_0(\lambda)}{f_2(\lambda)^2}=\frac{\lambda^5f_0(\lambda)}{f_2(\lambda)f_3(\lambda)}\left(\bar{x}\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+\frac{\lambda f_1(\lambda)}{f_2(\lambda)}\right)\times\frac{f_3(\lambda)}{2\lambda f_2(\lambda)}
\end{equation}
and \eqref{bars} implies that
\begin{equation}\label{gf2}
\bar{x}\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+\frac{\lambda f_1(\lambda)}{f_2(\lambda)}=\frac{2(\lambda f_1(\lambda)-f_2(\lambda))}{f_2(\lambda)}.
\end{equation}
Substituting \eqref{gf1} and \eqref{gf2} into \eqref{Fxla}, we see that
$$F(\bar x,\lambda)=\frac{\frac{\lambda^5f_0(\lambda)}{f_2(\lambda)f_3(\lambda)}\,\frac{f_3(\lambda)}{2\lambda f_2(\lambda)}}{\left(\bar x\frac{\lambda f_2(\lambda)}{f_3(\lambda)}+\frac{\lambda f_1(\lambda)}{f_2(\lambda)}\right)}=\frac{\lambda^4f_0(\lambda)}{4f_2(\lambda)\bigl(\lambda f_1(\lambda)-f_2(\lambda)\bigr)}.$$
Thus,
\begin{equation}\label{1-F}
1-F(\bar x,\lambda)=\frac{D(\lambda)}{4f_2(\lambda)\bigl(\lambda f_1(\lambda)-f_2(\lambda)\bigr)},
\end{equation}
where
\begin{align*}
D(\lambda)=&\,4f_2(\lambda)\bigl(\lambda f_1(\lambda)-f_2(\lambda)\bigr)-\lambda^4f_0(\lambda)\\
=&\,-4-4\lambda-(\lambda^4+4\lambda^2-8)e^{\lambda}+(4\lambda-4)e^{2\lambda}.
\end{align*}
In particular,
\begin{equation}\label{1-F,inf}
1-F(\bar x,\lambda)=1-O(\lambda^{-1}),\quad\lambda\to\infty.
\end{equation}
Expanding $e^{\lambda}$ and $e^{2\lambda}$, we obtain after collecting like terms that
$$D(\lambda)=\sum_{j\ge 6}\frac{d_j}{j!}\,\lambda^j,$$
where
$$d_j=2^{j+1}(j-2)-(j)_4-4(j)_2+8.$$
Here $d_j=0$ for $0\leq j\leq 5$, $d_6=40$, $d_7=280$, $d_8=1176$, $d_9=3864$, $d_{10}=10992$, and $d_j>0$ for $j\ge 11$ is clear. Therefore $D(\lambda)$ is positive for all $\lambda>0$. Since $D(\lambda)\sim \frac{d_6}{6!}\lambda^6$ and $4f_2(\lambda)\bigl(\lambda f_1(\lambda)-f_2(\lambda)\bigr)\sim \lambda^4$ as $\lambda\to 0$, we see that
\begin{equation}\label{1-F,zero}
1-F(\bar x,\lambda)\sim \frac{d_6}{6!}\,\lambda^2,\quad \lambda\to 0.
\end{equation}
This completes the proof of Lemma \ref{alem}.
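The vanishing of $d_0,\dots,d_5$ and the quoted values of $d_6,\dots,d_{10}$ can be checked with exact rational arithmetic; the helper names below are ours.

```python
from fractions import Fraction
from math import factorial

def falling(j, k):
    # falling factorial (j)_k = j(j-1)...(j-k+1)
    p = 1
    for i in range(k):
        p *= j - i
    return p

def d(j):
    # claimed closed form d_j = 2^{j+1}(j-2) - (j)_4 - 4(j)_2 + 8
    return 2**(j + 1) * (j - 2) - falling(j, 4) - 4 * falling(j, 2) + 8

def D_coeff(j):
    # exact coefficient of lambda^j in D = -4 - 4l - (l^4 + 4l^2 - 8)e^l + (4l - 4)e^{2l}
    c = Fraction(0)
    if j == 0: c -= 4
    if j == 1: c -= 4
    if j >= 4: c -= Fraction(1, factorial(j - 4))               # from -l^4 e^l
    if j >= 2: c -= Fraction(4, factorial(j - 2))               # from -4 l^2 e^l
    c += Fraction(8, factorial(j))                              # from +8 e^l
    if j >= 1: c += Fraction(4 * 2**(j - 1), factorial(j - 1))  # from +4 l e^{2l}
    c -= Fraction(4 * 2**j, factorial(j))                       # from -4 e^{2l}
    return c

assert all(D_coeff(j) == 0 for j in range(6))
assert all(D_coeff(j) == Fraction(d(j), factorial(j)) for j in range(6, 40))
assert [d(j) for j in range(6, 11)] == [40, 280, 1176, 3864, 10992]
```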
\hspace*{\fill}\mbox{$\Box$}

It follows from \eqref{C1a}, \eqref{C1b}, \eqref{C1c} and Lemma \ref{alem} that, regardless of case,
\begin{equation}\label{eqx1}
\zeta>0\text{ implies }\mathbb{E}[\zeta'-\zeta\mid|{\bf v}|]\leq -c_1(1\wedge\lambda)^2+O\bfrac{\log^2N}{\lambda N}
\end{equation}
for some absolute constant $c_1>0$, where $1\wedge\lambda=\min\{1,\lambda\}$. To avoid dealing with the error term in \eqref{eqx1} we introduce the stopping time
$$T_{er}=\min\left\{t:\lambda^2\leq\frac{\log^3n}{\lambda N}\right\}.$$
(This is well defined, since eventually $N=0$.) The following stopping time is also used:
$$T_0=\min\{t:\lambda\leq1\text{ or } N\leq n/2\}<T_{er}.$$
So we can replace \eqref{eqx1} by
\begin{equation}\label{eqx2}
\zeta>0\text{ implies }\mathbb{E}[\zeta'-\zeta\mid|{\bf v}|]\leq -c_1/2,\qquad 0\leq t\leq T_0,
\end{equation}
which holds for $n$ sufficiently large. There are several places where we need a bound on $\lambda$:
\begin{lemma}\label{lambda}
{\bf Whp} $\lambda\leq 3ce$ for $t\leq T_0$.
\end{lemma}
{\bf Proof\hspace{2em}} We will show that w.h.p.\ $y_1+2y_2+z_1=o(n)$ throughout.
It follows from \eqref{3} and the inequalities in Section \ref{simpineq} that if $\Lambda$ is sufficiently large and if $\lambda(t)\geq \Lambda$, then $Y\cup Z$ contains $y+z$ vertices and at least $\Lambda(y+z)/2$ edges, and hence has total degree at least $\Lambda(y+z)$. We argue that \text{w.h.p.}\ $G$ does not contain such a sub-graph. We will work in the random sequence model. We can assume that $|Y\cup Z|\geq n/2$. Now fix a set $S\subseteq [n]$ where $s=|S|\geq n/3$. Let $D$ denote the total degree of vertices in $S$. Then
\begin{multline}\label{boundd}
\mathbb{P}(D=d)\leq O(n^{1/2})\sum_{\substack{d_1+\cdots+d_s=d\\d_j\geq 3}}\prod_{i=1}^s\frac{\lambda^{d_i}}{f_3(\lambda)d_i!}\leq O(n^{1/2})\frac{\lambda^d}{d!f_3(\lambda)^s}\sum_{\substack{d_1+\cdots+d_s=d\\d_j\geq0}}\frac{d!}{d_1!\cdots d_s!}\\
=O(n^{1/2})\frac{\lambda^ds^d}{d!f_3(\lambda)^s}.
\end{multline}
Here $\lambda=\lambda(0)$ and we are using Lemma \ref{lem3}. The factor $O(n^{1/2})$ accounts for the conditioning that the total degree is $2cn$. Now $\lambda(0)\leq 2c$ and $f_3(\lambda(0))\geq 1$. It follows that
$$
\mathbb{P}(\exists S:d\geq \Lambda s)\leq O(n^{1/2})\sum_{s\geq n/3}\sum_{d\geq \Lambda s}\binom{n}{s}\frac{(2c)^ds^d}{d!}\leq O(n^{1/2})\sum_{s\geq n/3}\sum_{d\geq \Lambda s}\bfrac{ne}{s}^s\frac{(2c)^ds^d}{d!}.
$$
The terms involving $d$ in the second sum are $u_d=\frac{(2cs)^d}{d!}$, and for $d/s$ large we have $u_{d+1}/u_d=O(s/d)$, so we can put $d=\Lambda s$ in the second expression. After substituting $d!\geq (d/e)^d$ this gives
$$\mathbb{P}(\exists S:d\geq \Lambda s)\leq O(n^{1/2})\sum_{s\geq n/2}\bfrac{3e(2ce)^{\Lambda}}{\Lambda^{\Lambda}}^s=o(1)$$
if $\Lambda\geq 3ce$.
\hspace*{\fill}\mbox{$\Box$}

Our aim now is to give a high probability bound on the maximum value that $\zeta$ will take during the process. We first prove a simple lemma involving the functions $\varphi_j(x)=\frac{xf_{j-1}(x)}{f_j(x)}$, $j=2,3$.
\begin{lemma}\label{fuk}
\begin{equation}\label{A1}
\varphi_j(x)\text{ is convex and increasing, and } j\leq \varphi_j(x)\text{ and }\frac{1}{j+1}\leq \varphi_j'(x)\leq 1\text{ for }j=2,3.
\end{equation}
\end{lemma}
{\bf Proof\hspace{2em}} Now, if $H(x)=\frac{xF(x)}{G(x)}$ then
$$H'(x)=\frac{G(x)(xF'(x)+F(x))-xF(x)G'(x)}{G(x)^2}$$
and
\begin{multline*}
H''(x)=\\
\frac{2xF(x)G'(x)^2+G(x)^2(2F'(x)+xF''(x))-G(x)(2xF'(x)G'(x)+F(x)(2G'(x)+xG''(x)))}{G(x)^3}.
\end{multline*}
{\bf Case $j=2$:}
\begin{equation}\label{deeriv}
\varphi_2'(x)=\frac{e^{2x}-(x^2+2)e^x+1}{(e^x-1-x)^2}.
\end{equation}
But
$$e^{2x}-(x^2+2)e^x+1=\sum_{j\geq 4}\frac{2^j-j(j-1)-2}{j!}x^j$$
and so $\varphi_2'(x)>0$ for $x>0$. Also,
\begin{equation}\label{deeeriv}
\varphi_2''(x)=\frac{e^{2x}(x^2-4x+2)+e^x(x^3+x^2+4x-4)+2}{(e^x-1-x)^3}.
\end{equation}
But
\begin{multline*}
e^{2x}(x^2-4x+2)+e^x(x^3+x^2+4x-4)+2=\\
\sum_{j\geq 6}\frac{2^{j-2}(j(j-1)-8j+8)+j(j-1)(j-2)+j(j-1)+4j-4}{j!}x^j
\end{multline*}
and so $\varphi_2''(x)>0$ for $x>0$.

{\bf Case $j=3$:}
\begin{equation}\label{deeriv0}
\varphi_3'(x)=\frac{2e^{2x}-e^x\brac{x^3-x^2+4x+4}+x^2+4x+2}{2(e^x-1-x-\frac{x^2}{2})^2}.
\end{equation}
But
$$2e^{2x}-e^x\brac{x^3-x^2+4x+4}+x^2+4x+2=\sum_{j\geq 6}\frac{2^{j+1}-j(j-1)(j-2)+j(j-1)-4j-4}{j!}x^j$$
and so $\varphi_3'(x)>0$ for $x>0$. Also,
\begin{equation}\label{deeeriv1}
\varphi_3''(x)=\frac{x(e^{2x}(2x^2-12x+12)+e^x(x^4+8x^2-24)+2x^2+12x+12)}{4(e^x-1-x-\frac{x^2}{2})^3}.
\end{equation}
But
\begin{multline*}
e^{2x}(2x^2-12x+12)+e^x(x^4+8x^2-24)+2x^2+12x+12=\\
\sum_{j\geq 8}\frac{2^{j-1}(j(j-1)-12j+24)+j(j-1)(j-2)(j-3)+8j(j-1)-24}{j!}x^j
\end{multline*}
and so $\varphi_3''(x)>0$ for $x>0$. So $\varphi_2,\varphi_3$ are convex, and so we only need to check that $\varphi_2(0)=2$, $\varphi_2'(0)=1/3$, $\varphi_3(0)=3$, $\varphi_3'(0)=1/4$ and $\varphi_2'(\infty)=\varphi_3'(\infty)=1$.
\hspace*{\fill}\mbox{$\Box$}

Consider $\lambda$ as a function of ${\bf v}$, defined by
\begin{equation}\label{2x}
y\varphi_3(\lambda)+z\varphi_2(\lambda)=\Pi
\end{equation}
where $\Pi=2\mu-y_1-2y_2-z_1$. We now prove a lemma bounding the change in $\lambda$ as we change ${\bf v}$.
\begin{lemma}\label{onestep}
$$|\lambda({\bf v}_1)-\lambda({\bf v}_2)|= O\bfrac{||{\bf v}_1-{\bf v}_2||_1}{N},\qquad\text{ for }t<T_{er}.$$
\end{lemma}
{\bf Proof\hspace{2em}} We write ${\bf v}_1=(y_1,y_2,z_1,y,z,\mu)\geq 0$ and ${\bf v}_2=(y_1+\delta_{y_1},y_2+\delta_{y_2},z_1+\delta_{z_1},y+\delta_y,z+\delta_z,\mu+\delta_{\mu})\geq 0$, and $\Pi,\Pi+\delta_\Pi$ for the two values of $\Pi$. Then
\begin{equation}\label{3x}
(y+\delta_y)\varphi_3(\lambda+\delta_\lambda)-y\varphi_3(\lambda)+(z+\delta_z)\varphi_2(\lambda+\delta_\lambda)-z\varphi_2(\lambda)=\delta_\Pi.
\end{equation}
Convexity and the bound $\varphi_j'\leq 1$ imply that
$$\varphi_j(\lambda)\geq \varphi_j(\lambda+\delta_\lambda)-\delta_\lambda\varphi_j'(\lambda+\delta_\lambda)\geq \varphi_j(\lambda+\delta_\lambda)-\delta_\lambda.$$
So from \eqref{3x} we have
$$(y+\delta_y)(\varphi_3(\lambda)+\delta_\lambda)-y\varphi_3(\lambda)+(z+\delta_z)(\varphi_2(\lambda)+\delta_\lambda)-z\varphi_2(\lambda)\geq\delta_\Pi.$$
This implies that
$$\delta_\lambda\geq \frac{\delta_\Pi-\delta_y\varphi_3(\lambda)-\delta_z\varphi_2(\lambda)}{y+\delta_y+z+\delta_z}.$$
So,
$$\delta_\lambda\leq 0\text{ implies }|\delta_\lambda|=O\bfrac{||{\bf v}_1-{\bf v}_2||_1}{N}.$$
Note that we use Lemma \ref{lambda} to argue that $\varphi_j(\lambda)$, $j=2,3$, are bounded within our range of interest.
To deal with $\delta_\lambda\geq 0$ we observe that convexity implies
$$\varphi_j(\lambda+\delta_\lambda)\geq \varphi_j(\lambda)+\delta_\lambda\varphi_j'(\lambda).$$
So from \eqref{3x} we have
$$(y+\delta_y)(\varphi_3(\lambda)+\delta_\lambda\varphi_3'(\lambda))-y\varphi_3(\lambda)+(z+\delta_z)(\varphi_2(\lambda)+\delta_\lambda\varphi_2'(\lambda))-z\varphi_2(\lambda)\leq\delta_\Pi.$$
This implies that
$$\delta_\lambda\leq \frac{\delta_\Pi-\delta_y\varphi_3(\lambda)-\delta_z\varphi_2(\lambda)}{(y+\delta_y)\varphi_3'(\lambda)+(z+\delta_z)\varphi_2'(\lambda)}.$$
So,
$$\delta_\lambda\geq 0\text{ implies }|\delta_\lambda|= O\bfrac{||{\bf v}_1-{\bf v}_2||_1}{N}.$$
\hspace*{\fill}\mbox{$\Box$}

\begin{lemma}\label{Klem}
If $c\geq 15$ then \text{q.s.}
$$\not\exists\, 1\leq t\leq T_0:\;\zeta(t)> \log^2 n.$$
\end{lemma}
{\bf Proof\hspace{2em}} Define a sequence
$$X_i=\begin{cases}\min\{\zeta(i+1)-\zeta(i),\log n\} &0\leq i\leq T_0,\\-c_1/2&T_0<i\leq n.\end{cases}$$
The variables $X_1,X_2,\ldots,X_n$ are not independent. On the other hand, conditional on an event that occurs \text{q.s.}, we see that
$$X_{s+1}+\cdots+X_t=\zeta(t)-\zeta(s)\text{ for }0\leq s<t\leq T_0$$
and
$$\mathbb{E}[X_t\mid X_1,\ldots,X_{t-1}]\leq -c_1/2\text{ for }t\leq n.$$
Next, for $0\leq s\leq t\leq T_0$ let
$$\bar{\lambda}(s,t)=\sum_{\tau=s+1}^t\lambda(\tau)^2.$$
Note that
\begin{equation}\label{ign}
\bar{\lambda}(s,t)\geq t-s.
\end{equation}
We argue as in the proof of the Azuma-Hoeffding inequality that for any $1\leq s<t\leq n$ and $u\geq 0$,
\begin{equation}\label{x4}
\mathbb{P}(X_{s+1}+\cdots+X_t\geq u-c_1\bar{\lambda}(s,t)/2)\leq \exp\left\{-\frac{2u^2}{(t-s)\log^2n}\right\}.
\end{equation}
We deduce from this that
\begin{multline}\label{eqx4}
\mathbb{P}(\exists\, 1\leq s<t\leq T_0:\zeta(s)=0<\zeta(\tau),\ s<\tau\leq t)\leq\\
n^2\exp\left\{-\frac{2\max\{0,c_1\bar{\lambda}(s,t)/2-\log n\}^2}{(t-s)\log^2n}\right\}.
\end{multline}
Putting $t-s=L_1=\log^2n$ we see from \eqref{eqx4} that \text{q.s.}
\begin{equation}\label{eqx5}
\not\exists\, 1\leq s<t-L_1\leq T_0-L_1:\zeta(s)=0<\zeta(\tau),\ s<\tau\leq t.
\end{equation}
Suppose now that there exists $\tau\leq T_0$ such that $\zeta(\tau)\geq L_1$. Then q.s.\ there exists $t_1\leq \tau\leq t_1+L_1$ such that $\zeta(t_1)=0$.
But then, given $t_1$,
$$\mathbb{P}(\exists\, t_1\leq \tau\leq t_1+L_1:\zeta(\tau)\geq L_1)\leq \exp\left\{-\frac{2(c_1L_1/2-\log n)^2}{L_1\log^2n}\right\}.$$
Here we are using the generalisation of Hoeffding-Azuma that deals with $\max_{i\leq L_1}X_1+\cdots+X_i$. And then we get that \text{q.s.}
\begin{equation}\label{eqx6}
\not\exists\, t\leq T_0:\zeta(t)\geq L_1.
\end{equation}
We do this in two stages because of the condition $\zeta>0$ in \eqref{eqx2}. Remember here that $\zeta(0)=0$, and \eqref{eqx5} says that $\zeta$ cannot stay positive for very long.
\hspace*{\fill}\mbox{$\Box$}

\section{Associated Equations}\label{diff}
The expected changes conditional on ${\bf v}$ lead us to consider the following collection of differential equations. Note that we do not use any scaling. We will put hats on variables, i.e.\ $\hat{y}_1$ etc.\ will be the deterministic counterparts of $y_1$ etc. Also, as expected, the hatted equivalent of \eqref{2x} holds:
\begin{equation}\label{10xxx}
\frac{\hat{y}\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}+\frac{\hat{z}\hat{\lambda} f_1(\hat{\lambda})}{f_2(\hat{\lambda})}=2\hat{\mu}-\hat{y}_1-2\hat{y}_2-\hat{z}_1.
\end{equation}
\noindent {\bf Step 1(a).\/} $\hat{y}_1>0$.
\begin{align}
\frac{d\hat{y}_1}{dt}=&\,-1-\frac{\hat{y}_1}{2\hat{\mu}}-\frac{\hat{y}_1\hat{z}}{4\hat{\mu}^2}\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}+\frac{\hat{y}_2\hat{z}}{2\hat{\mu}^2}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{18x}\\
\frac{d\hat{y}_2}{dt}=&\,-\frac{\hat{y}_2}{\hat{\mu}}-\frac{\hat{y}_2\hat{z}}{2\hat{\mu}^2}\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}+\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{18}\\
\frac{d\hat{z}_1}{dt}=&\,-\frac{\hat{z}_1}{2\hat{\mu}}-\frac{\hat{z}_1\hat{z}}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}+\frac{\hat{z}^2}{4\hat{\mu}^2}\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2},\label{19}\\
\frac{d\hat{y}}{dt}=&\,-\frac{\hat{y}}{2\hat{\mu}}\,\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}-\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{20}\\
\frac{d\hat{z}}{dt}=&\,\frac{\hat{y}}{2\hat{\mu}}\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}-\frac{\hat{z}}{2\hat{\mu}}\frac{\hat{\lambda} f_1(\hat{\lambda})}{f_2(\hat{\lambda})}-\frac{\hat{z}^2}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2},\label{21}\\
\frac{d\hat{\mu}}{dt}=&\,-1-\frac{\hat{z}}{2\hat{\mu}}\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}.\label{22}
\end{align}
{\bf Step 1(b).\/} $\hat{y}_1=0,\hat{y}_2>0$.
\begin{align}
\frac{d\hat{y}_1}{dt}=&\,\frac{\hat{y}_2\hat{z}}{\hat{\mu}^2}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{18xx}\\
\frac{d\hat{y}_2}{dt}=&\,-1-\frac{2\hat{y}_2}{\hat{\mu}}-\frac{\hat{y}_2\hat{z}}{\hat{\mu}^2}\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}+\frac{\hat{y}\hat{z}}{4\hat{\mu}^2}\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{18mm}\\
\frac{d\hat{z}_1}{dt}=&\,-\frac{\hat{z}_1}{\hat{\mu}}-\frac{\hat{z}_1\hat{z}}{2\hat{\mu}^2}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}+\frac{\hat{z}^2}{2\hat{\mu}^2}\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2},\label{19mm}\\
\frac{d\hat{y}}{dt}=&\,-\frac{\hat{y}}{\hat{\mu}}\,\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}-\frac{\hat{y}\hat{z}}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{20mm}\\
\frac{d\hat{z}}{dt}=&\,\frac{\hat{y}}{\hat{\mu}}\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}-\frac{\hat{z}}{\hat{\mu}}\frac{\hat{\lambda} f_1(\hat{\lambda})}{f_2(\hat{\lambda})}-\frac{\hat{z}^2}{2\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2},\label{21mm}\\
\frac{d\hat{\mu}}{dt}=&\,-2-\frac{\hat{z}}{\hat{\mu}}\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}.\label{22mm}
\end{align}
{\bf Step 1(c).\/} $\hat{y}_1=\hat{y}_2=0,\hat{z}_1>0$.
\begin{align}
\frac{d\hat{y}_1}{dt}=&\,0,\label{23x}\\
\frac{d\hat{y}_2}{dt}=&\,\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{23}\\
\frac{d\hat{z}_1}{dt}=&\,-1-\frac{\hat{z}_1}{2\hat{\mu}}-\frac{\hat{z}_1\hat{z}}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}+\frac{\hat{z}^2}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2},\label{24}\\
\frac{d\hat{y}}{dt}=&\,-\frac{\hat{y}}{2\hat{\mu}}\,\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}-\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{20a}\\
\frac{d\hat{z}}{dt}=&\,\frac{\hat{y}}{2\hat{\mu}}\,\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}-\frac{\hat{z}}{2\hat{\mu}}\frac{\hat{\lambda} f_1(\hat{\lambda})}{f_2(\hat{\lambda})}-\frac{\hat{z}^2}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2},\label{21a}\\
\frac{d\hat{\mu}}{dt}=&\,-1-\frac{\hat{z}}{2\hat{\mu}}\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}.\label{25a}
\end{align}
{\bf Step 2.\/} $\hat{y}_1=\hat{y}_2=\hat{z}_1=0$.
\begin{align}
\frac{d\hat{y}_1}{dt}=&\,0,\label{18ax}\\
\frac{d\hat{y}_2}{dt}=&\,\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{18a}\\
\frac{d\hat{z}_1}{dt}=&\,\frac{\hat{z}^2}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2},\label{19a}\\
\frac{d\hat{y}}{dt}=&\,-1-\frac{\hat{y}}{2\hat{\mu}}\,\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})}-\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})},\label{20aa}\\
\frac{d\hat{z}}{dt}=&\,1-\frac{\hat{z}}{2\hat{\mu}}\frac{\hat{\lambda} f_1(\hat{\lambda})}{f_2(\hat{\lambda})}-\frac{\hat{z}^2}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2}+\frac{\hat{y}}{2\hat{\mu}}\,\frac{\hat{\lambda} f_2(\hat{\lambda})}{f_3(\hat{\lambda})},\label{21aa}\\
\frac{d\hat{\mu}}{dt}=&\,-1-\frac{\hat{z}}{2\hat{\mu}}\frac{\hat{\lambda}^2 f_0(\hat{\lambda})}{f_2(\hat{\lambda})}.\label{22aa}
\end{align}
We will show that \text{w.h.p.}\ the process defined by {\sc 2greedy}\ can be closely modeled by a suitable weighted sum of the above four sets of equations. Let these weights be $\theta_a,\theta_b,\theta_c$ and $1-\theta_a-\theta_b-\theta_c$ respectively. It has been determined that $y_1,y_2,z_1$ are all $O(\log^2 n)$ \text{w.h.p.} We will only need to analyse our process up to the time $y=0$, and we will show that at this time $z=\Omega(n)$ \text{w.h.p.} Thus $y_1,y_2,z_1$ are ``negligible'' throughout, in which case $\hat{y}_1,\hat{y}_2,\hat{z}_1$ should also be negligible. It makes sense therefore to choose $\theta_a=0$. The remaining weights should be chosen so that the weighted derivatives of $\hat{y}_1,\hat{y}_2,\hat{z}_1$ are zero. This has all been somewhat heuristic, and its validity will be verified in Section \ref{section-close}.
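To illustrate the Step 2 dynamics, the system \eqref{18ax} -- \eqref{22aa} can be integrated with a crude Euler scheme, recovering $\hat{\lambda}$ at each step from the hatted constraint (with $\hat{y}_1=\hat{y}_2=\hat{z}_1=0$) by bisection. The normalised initial state $(\hat{y},\hat{z},\hat{\mu})=(1,0,2.5)$, the step size and the horizon are illustrative choices only, not taken from the analysis.

```python
import math

def f(k, lam):
    # f_k(lambda) = sum_{j >= k} lambda^j / j!
    return math.exp(lam) - sum(lam**j / math.factorial(j) for j in range(k))

def solve_lam(y, z, mu, lo=1e-9, hi=60.0):
    # recover hat-lambda from y*lam*f2/f3 + z*lam*f1/f2 = 2*mu (monotone in lam)
    def g(lam):
        return y * lam * f(2, lam) / f(3, lam) + z * lam * f(1, lam) / f(2, lam) - 2 * mu
    for _ in range(100):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def derivs(y, z, mu):
    lam = solve_lam(y, z, mu)
    A = y * z * lam**5 * f(0, lam) / (8 * mu**2 * f(2, lam) * f(3, lam))
    B = z**2 * lam**4 * f(0, lam) / (4 * mu**2 * f(2, lam)**2)
    C = y * lam * f(2, lam) / (2 * mu * f(3, lam))
    D = z * lam**2 * f(0, lam) / (2 * mu * f(2, lam))
    dz = 1 - z * lam * f(1, lam) / (2 * mu * f(2, lam)) - B + C   # (21aa)
    return -1 - C - A, dz, -1 - D                                  # (20aa), (22aa)

y, z, mu, dt = 1.0, 0.0, 2.5, 1e-3
for _ in range(200):
    dy, dz, dmu = derivs(y, z, mu)
    y, z, mu = y + dt * dy, z + dt * dz, mu + dt * dmu

assert 0 < y < 1 and z > 0 and mu < 2.5   # y drains, z builds up, edges are consumed
assert 2 * mu > 3 * y + 2 * z             # constraint remains solvable for hat-lambda
```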
\subsection{Sliding trajectory}
Conjecturally we need to mix Steps 1(a), 1(b), 1(c) and 2 with nonnegative weights $\theta_a=0$, $\theta_b$, $\theta_c$, $\theta_2=1-\theta_b-\theta_c$ respectively, chosen such that the resulting system of differential equations admits a solution with $\hat{y}_2(t)\equiv 0$ and $\hat{z}_1(t)\equiv 0$. We will write the multipliers in terms of
\begin{equation}\label{ABCD}
\hat{A}=\frac{\hat{y}\hat{z}\hat{\lambda}^5f_0(\hat{\lambda})}{8\hat{\mu}^2f_2(\hat{\lambda})f_3(\hat{\lambda})},\quad \hat{B}=\frac{\hat{z}^2\hat{\lambda}^4f_0(\hat{\lambda})}{4\hat{\mu}^2f_2(\hat{\lambda})^2},\quad \hat{C}=\frac{\hat{y}\hat{\lambda} f_2(\hat{\lambda})}{2\hat{\mu} f_3(\hat{\lambda})},\quad \hat{D}=\frac{\hat{z}\hat{\lambda}^2f_0(\hat{\lambda})}{2\hat{\mu} f_2(\hat{\lambda})}.
\end{equation}
Using \eqref{18x}, \eqref{18xx}, \eqref{23x} and \eqref{18ax} we see that $\hat{y}_1(t)\equiv 0$ implies that
$$0\equiv\frac{d\hat{y}_1}{dt}=-\theta_a.$$
Equivalently,
\begin{equation}\label{thetaaa}
\theta_a=0.
\end{equation}
Using \eqref{18}, \eqref{23} and \eqref{18a}, we see that $\hat{y}_2(t)\equiv 0$ implies that
\begin{align}
&0\equiv\frac{d\hat{y}_2}{dt}\nonumber\\
&=\theta_b\left[-1+\frac{\hat{y}\hat{z}}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}\right]+\theta_c\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}+(1-\theta_b-\theta_c)\frac{\hat{y}\hat{z}}{8\hat{\mu}^2}\,\frac{\hat{\lambda}^3}{f_3(\hat{\lambda})}\,\frac{\hat{\lambda}^2f_0(\hat{\lambda})}{f_2(\hat{\lambda})}\nonumber\\
&=-(1-\hat{A})\theta_b+\hat{A}.\label{dy2=0}
\end{align}
Equivalently,
\begin{equation}\label{thetaa}
\theta_b=\frac{\hat{A}}{1-\hat{A}}.
\end{equation}
Likewise, using \eqref{19}, \eqref{24} and \eqref{19a}, $\hat{z}_1(t)\equiv 0$ implies
\begin{align}
&0\equiv\frac{d\hat{z}_1}{dt}\nonumber\\
&=\theta_b\frac{\hat{z}^2}{2\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2}+\theta_c\left[-1+\frac{\hat{z}^2}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2}\right]+(1-\theta_b-\theta_c)\frac{\hat{z}^2}{4\hat{\mu}^2}\,\frac{\hat{\lambda}^4f_0(\hat{\lambda})}{f_2(\hat{\lambda})^2}\nonumber\\
&=\hat{B}\theta_b-\theta_c+\hat{B}.\label{dz1=0}
\end{align}
Equivalently,
\begin{equation}\label{thetab}
\theta_c=(1+\theta_b)\hat{B}=\frac{\hat{B}}{1-\hat{A}}.
\end{equation}
From \eqref{thetaa} it follows that $\theta_b\ge 0$ iff
$$\hat{A}\leq 1,$$
in which case, by \eqref{thetab}, $\theta_c\ge 0$ as well. From \eqref{dz1=0} and \eqref{thetab} it follows that $1-\theta_b-\theta_c\ge 0$ iff
\begin{equation}\label{2A}
2\hat{A}+\hat{B}\leq 1.
\end{equation}
We conclude that $\theta_b,\theta_c,1-\theta_b-\theta_c\in [0,1]$ iff $Q\leq 1$; see \eqref{eqQ}. But this is implied by Lemma \ref{alem}. It may be of some use to picture the equations defining $\theta_a,\theta_b,\theta_c,\theta_2$:
\begin{equation}\label{equations}
\begin{array}{rrrrc}
-\theta_a&&&&=0\\
&(1-\hat{A})\theta_b&&&=\hat{A}\\
&-\hat{B}\theta_b&+\theta_c&&=\hat{B}\\
\theta_a&+\theta_b&+\theta_c&+\theta_2&=1.
\end{array}
\eeq
If in the notation of Lemma \ref{Klem} we let $\Omega_1=\set{{\bf v}:\lambda\leq L_1}$, then we may restrict our attention to ${\bf v}$ in \eqref{04x} -- \eqref{15a} such that ${\bf v}\in\Omega_1$. In that case the terms involving $y_1,y_2,z_1$ can be absorbed into the error term for $t\leq T_0$. The relevant equations then become the following, where
$$A=\frac{yz\lambda^5f_0(\lambda)}{8\mu^2f_2(\lambda)f_3(\lambda)},\quad B=\frac{z^2\lambda^4f_0(\lambda)}{4\mu^2f_2(\lambda)^2},\quad C=\frac{y\lambda f_2(\lambda)}{2\mu f_3(\lambda)},\quad D=\frac{z\lambda^2f_0(\lambda)}{2\mu f_2(\lambda)}.$$
{\bf Step 1(a)\/}. $y_1>0$.
\begin{eqnarray}
\mathbb{E}[y_1'-y_1\mid {\bf v}]&=&-1+O\bfrac{\log^2N}{\lambda N}\label{04xq1mmm}\\
\mathbb{E}[y_2'-y_2\mid {\bf v}]&=&A+O\bfrac{\log^2N}{\lambda N}\label{04q1mmm}\\
\mathbb{E}[z_1'-z_1\mid {\bf v}]&=&B+O\bfrac{\log^2N}{\lambda N}\label{05q1mm}\\
\mathbb{E}[y'-y\mid {\bf v}]&=&-C-A+O\bfrac{\log^2N}{\lambda N}\label{06q1}\\
\mathbb{E}[z'-z\mid {\bf v}]&=&C-(1-C)-B+O\bfrac{\log^2N}{\lambda N}\label{07q1}\\
\mathbb{E}[\mu'-\mu\mid {\bf v}]&=&-1-D+O\bfrac{\log^2N}{\lambda N}\label{08q1}
\end{eqnarray}
{\bf Step 1(b)\/}. $y_1=0,y_2>0$.
\begin{eqnarray}
\mathbb{E}[y_1'-y_1\mid {\bf v}]&=&O\bfrac{\log^2N}{\lambda N}\label{4xq1mm}\\
\mathbb{E}[y_2'-y_2\mid {\bf v}]&=&-1+2A+O\bfrac{\log^2N}{\lambda N}\label{4q1mm}\\
\mathbb{E}[z_1'-z_1\mid {\bf v}]&=&2B+O\bfrac{\log^2N}{\lambda N}\label{5q1}\\
\mathbb{E}[y'-y\mid {\bf v}]&=&-2C-2A+O\bfrac{\log^2N}{\lambda N}\label{6q1}\\
\mathbb{E}[z'-z\mid {\bf v}]&=&2C-2(1-C)-2B+O\bfrac{\log^2N}{\lambda N}\label{7q1}\\
\mathbb{E}[\mu'-\mu\mid {\bf v}]&=&-2-2D+O\bfrac{\log^2N}{\lambda N}\label{8aq1}
\end{eqnarray}
{\bf Step 1(c)\/}. $y_1=y_2=0,z_1>0$.
\begin{eqnarray}
\mathbb{E}[y_1'-y_1\mid {\bf v}]&=&O\bfrac{1}{N}\label{9xq1mmm}\\
\mathbb{E}[y_2'-y_2\mid {\bf v}]&=&A+O\bfrac{\log^2N}{\lambda N}\label{9q1mmm}\\
\mathbb{E}[z_1'-z_1\mid {\bf v}]&=&-1+B+O\bfrac{\log^2N}{\lambda N}\label{10q1mmm}\\
\mathbb{E}[y'-y\mid {\bf v}]&=&-C-A+O\bfrac{\log^2N}{\lambda N}\label{6aq1}\\
\mathbb{E}[z'-z\mid {\bf v}]&=&C-(1-C)-B+O\bfrac{\log^2N}{\lambda N}\label{7aq1}\\
\mathbb{E}[\mu'-\mu\mid {\bf v}]&=&-1-D+O\bfrac{\log^2N}{\lambda N}\label{8aaq1}
\end{eqnarray}
{\bf Step 2\/}. $y_1=y_2=z_1=0$.
\begin{eqnarray}
\mathbb{E}[y_1'-y_1\mid {\bf v}]&=&O\bfrac{1}{N}\label{11xq1}\\
\mathbb{E}[y_2'-y_2\mid {\bf v}]&=&A+O\bfrac{\log^2N}{\lambda N}\label{11q1}\\
\mathbb{E}[z_1'-z_1\mid {\bf v}]&=&B+O\bfrac{\log^2N}{\lambda N}
\label{12q1}\\
\mathbb{E}[y'-y\mid {\bf v}]&=&-1-C-A+O\bfrac{\log^2N}{\lambda N}\label{13q1}\\
\mathbb{E}[z'-z\mid {\bf v}]&=&1+C-(1-C)-B+O\bfrac{\log^2N}{\lambda N}\label{14q1}\\
\mathbb{E}[\mu'-\mu\mid {\bf v}]&=&-1-D+O\bfrac{\log^2N}{\lambda N}\label{15aq1}
\end{eqnarray}
\subsection{Closeness of the process and the differential equations}\label{section-close}
We already know that $y_1,y_2,z_1$ are small \text{w.h.p.}\ up to time $T_0$. We now show that \text{w.h.p.}\ $y,z,\mu$ are closely approximated by $\hat{y},\hat{z},\hat{\mu}$, the solutions to the weighted sum of the sets of equations labelled Step 1(b), Step 1(c) and Step 2. These equations are simplified by putting $y_1=y_2=z_1=0$. First some notation: we will use $\psi_{\eta,\xi}$ to denote the expression we have obtained for the derivative of $\xi$ in Step 1($\eta$) for $\eta=a,b,c$, or in Step 2 when $\eta=2$.
We are then led to consider the equations:

{\bf Sliding Trajectory:}
\begin{align*}
\frac{d\hat{y}}{dt}&=\theta_b\psi_{b,y}(\hat{y},\hat{z},\hat{\mu})+\theta_c\psi_{c,y}(\hat{y},\hat{z},\hat{\mu})+(1-\theta_b-\theta_c)\psi_{2,y}(\hat{y},\hat{z},\hat{\mu})\\
&=\theta_b(-2(\hat{C}+\hat{A}))+\theta_c(-(\hat{C}+\hat{A}))+(1-\theta_b-\theta_c)(-(1+\hat{C}+\hat{A}))\\
&=-(\hat{C}+\hat{A})(2\theta_b+\theta_c+1-\theta_b-\theta_c)-(1-\theta_b-\theta_c)\\
&=\frac{\hat{B}-\hat{C}}{1-\hat{A}}-1.\\
\frac{d\hat{z}}{dt}&=\theta_b\psi_{b,z}(\hat{y},\hat{z},\hat{\mu})+\theta_c\psi_{c,z}(\hat{y},\hat{z},\hat{\mu})+(1-\theta_b-\theta_c)\psi_{2,z}(\hat{y},\hat{z},\hat{\mu})\\
&=\theta_b(2(\hat{C}-(1-\hat{C})-\hat{B}))+\theta_c(\hat{C}-(1-\hat{C})-\hat{B})+(1-\theta_b-\theta_c)(1+\hat{C}-(1-\hat{C})-\hat{B})\\
&=(2\hat{C}-\hat{B})(\theta_b+1)-2\theta_b-\theta_c\\
&=\frac{2\hat{C}-2\hat{A}-2\hat{B}}{1-\hat{A}}.\\
\frac{d\hat{\mu}}{dt}&=\theta_b\psi_{b,\mu}(\hat{y},\hat{z},\hat{\mu})+\theta_c\psi_{c,\mu}(\hat{y},\hat{z},\hat{\mu})+(1-\theta_b-\theta_c)\psi_{2,\mu}(\hat{y},\hat{z},\hat{\mu})\\
&=\theta_b(-2(1+\hat{D}))+\theta_c(-(1+\hat{D}))+(1-\theta_b-\theta_c)(-(1+\hat{D}))\\
&=-(1+\hat{D})(2\theta_b+\theta_c+1-\theta_b-\theta_c)\\
&=-\frac{1+\hat{D}}{1-\hat{A}}.
\end{align*}
The starting conditions are
\beq{hinitial}
\hat{y}(0)=n,\quad\hat{z}(0)=0,\quad\hat{\mu}(0)=cn.
\eeq
Summarising:
\beq{slide}
\frac{d\hat{y}}{dt}=\frac{\hat{B}-\hat{C}}{1-\hat{A}}-1;\quad
\frac{d\hat{z}}{dt}=\frac{2\hat{C}-2\hat{A}-2\hat{B}}{1-\hat{A}};\quad
\frac{d\hat{\mu}}{dt}=-\frac{1+\hat{D}}{1-\hat{A}};
\eeq
and
\beq{slide1}
\frac{\hat{y}\hat{\lambda}f_2(\hat{\lambda})}{f_3(\hat{\lambda})}+\frac{\hat{z}\hat{\lambda}f_1(\hat{\lambda})}{f_2(\hat{\lambda})}=2\hat{\mu}.
\eeq
We remark for future reference that \eqref{slide} implies that
\beq{future}
\hat{\mu}\text{ is decreasing with $t$ as long as }\hat{\lambda}>0,
\eeq
and \eqref{slide1} implies that
\beq{slide2}
\hat{y}+\hat{z}\leq\frac{2\hat{\mu}}{\hat{\lambda}}.
\eeq
Let ${\bf u}={\bf u}(t)$ denote $(y(t),z(t),\mu(t))$ and let $\hat{\bf u}=\hat{\bf u}(t)$ denote $(\hat{y}(t),\hat{z}(t),\hat{\mu}(t))$. We now show that ${\bf u}$ and $\hat{\bf u}$ remain close:
\begin{lemma}\label{close}
$$||{\bf u}(t)-\hat{\bf u}(t)||_1\leq n^{8/9},\qquad\text{ for }1\leq t\leq T_0,\ \text{w.h.p.}$$
\end{lemma}
{\bf Proof\hspace{2em}} Let $\delta_\eta({\bf v})$, $\eta=a,b,c,2$, be the 0/1 indicator for the process {\sc 2greedy}\ applying Step 1($\eta$) for $\eta=a,b,c$, or Step 2 if $\eta=2$, when the current state is ${\bf v}$. For times $t_1<t_2$ we use the notation
$$\Delta_\eta({\bf v}(t_1,t_2))=\sum_{t=t_1}^{t_2}\delta_\eta({\bf v}(t)).$$
Now let $\rho=n^\alpha$ where $\alpha=1/4$. It follows from Lemma \ref{onestep} that for $t\leq T_0-\rho$,
\beq{pp0}
|\lambda(t)-\lambda(t+\rho)|\leq\frac{\rho\log n}{N(t+\rho)}.
\eeq
Because $\lambda$ changes very little, simple estimates then give:
\begin{claim}\label{lem++}
\begin{align}
&|A(t)-A(t+\rho)|=O\bfrac{\rho\log n}{N(t+\rho)} &|B(t)-B(t+\rho)|=O\bfrac{\rho\log n}{N(t+\rho)}\label{pp1}\\
&|C(t)-C(t+\rho)|=O\bfrac{\rho\log n}{N(t+\rho)} &|D(t)-D(t+\rho)|=O\bfrac{\rho\log n}{N(t+\rho)}\label{pp2}
\end{align}
If $||{\bf u}(t)-\hat{\bf u}(t)||_1\leq n^{8/9}$ then
\begin{align}
&|A(t)-\hat{A}(t)|=O\bfrac{||{\bf u}(t)-\hat{\bf u}(t)||_1}{N(t)} &|B(t)-\hat{B}(t)|=O\bfrac{||{\bf u}(t)-\hat{\bf u}(t)||_1}{N(t)}\label{pp1a}\\
&|C(t)-\hat{C}(t)|=O\bfrac{||{\bf u}(t)-\hat{\bf u}(t)||_1}{N(t)} &|D(t)-\hat{D}(t)|=O\bfrac{||{\bf u}(t)-\hat{\bf u}(t)||_1}{N(t)}\label{pp2a}
\end{align}
\end{claim}
{\bf Proof\hspace{2em}} The first pair of estimates, \eqref{pp1} and \eqref{pp2}, are easy to deal with, as the functions $f_j$ are smooth and $\lambda$ is bounded throughout; see Lemma \ref{lambda}. Thus each $f_j$ changes by $O(\rho\log n/N)$, while $y,z,\mu$ change by $O(\rho\log n)$ and $\mu=\Omega(N)$.
For \eqref{pp1a} and \eqref{pp2a} we use Lemma \ref{onestep} to argue that
$$|\lambda(t)-\hat{\lambda}(t)|=O\bfrac{||{\bf u}(t)-\hat{\bf u}(t)||_1}{N(t+\rho)}.$$
Our assumption $t\leq T_0$ implies that $\mu(t)=\Omega(n)$; then $\mu(t)\sim\hat{\mu}(t)$ and $N(t)\sim\hat{N}(t)$, and we can argue as for \eqref{pp1} and \eqref{pp2}.\\
{\bf End of proof of Claim \ref{lem++}}

Now fix $t$ and define, for $\xi=y_1,y_2,z_1$,
$$X_i(\xi)=\begin{cases}\xi(t+i+1)-\xi(t+i)&t+i<T_0\\ \mathbb{E}[\xi(t+1)-\xi(t)\mid{\bf v}(t)]&t+i\geq T_0\end{cases}$$
Then,
\beq{pp3}
\log n\geq\mathbb{E}[X_i(\xi)\mid{\bf v}(t+i)]=\sum_{\eta\in\set{a,b,c,2}}\delta_\eta(t+i)\psi_{\eta,\xi}({\bf u}(t+i))+O\bfrac{\log^2N(t+i)}{\lambda(t+i)N(t+i)}.
\eeq
It follows from \eqref{pp0} -- \eqref{pp2} that for all $\eta,\xi$ and $i\leq\rho$,
$$\psi_{\eta,\xi}({\bf u}(t+i))=\psi_{\eta,\xi}({\bf u}(t))+O\bfrac{\log n}{n^{1-\alpha}}.$$
It then follows from \eqref{pp3} that \text{q.s.}
\beq{pp4}
\log n\geq\mathbb{E}[\xi(t+\rho)-\xi(t)\mid{\bf u}(t)]=\sum_{\eta\in\set{a,b,c,2}}\Delta_\eta({\bf u}(t,t+\rho))\psi_{\eta,\xi}({\bf u}(t))+O\bfrac{\log n}{n^{1-\alpha}}.
\eeq
This can be written as follows: we let $\Delta_a=\Delta_a({\bf u}(t,t+\rho))/\rho$ etc., and $A=A(t)$, $B=B(t)$.
\beq{equations1}
\begin{array}{rrrrl}
-\Delta_a&&&&=O(\rho^{-1}\log n)\\
&(1-A)\Delta_b&&&=A+O(\rho^{-1}\log n)\\
&-B\Delta_b&+\Delta_c&&=B+O(\rho^{-1}\log n)\\
\Delta_a&+\Delta_b&+\Delta_c&+\Delta_2&=1.
\end{array}
\eeq
In comparison with \eqref{equations} we see, using \eqref{pp1a} and \eqref{pp2a}, that
\beq{xi}
|\rho\theta_\xi(\hat{\bf u}(t))-\Delta_\xi|=O\brac{\log n+\frac{\rho||{\bf u}(t)-\hat{\bf u}(t)||_1}{N}}\text{ for }\xi=a,b,c,2.
\eeq
Note that $A,\hat{A}\leq 1/2$; see \eqref{2A}. This will be useful in dealing with $\theta_b$ and $\Delta_b$.

We now consider the difference between $\hat{\bf u}$ and ${\bf u}$ at times $\rho,2\rho,\ldots$. We write
\beq{z1}
\xi(i\rho)-\hat{\xi}(i\rho)=\xi((i-1)\rho)-\hat{\xi}((i-1)\rho)+\sum_{t=(i-1)\rho+1}^{i\rho}\brac{[\xi(t)-\xi(t-1)]-[\hat{\xi}(t)-\hat{\xi}(t-1)]},
\eeq
where $\xi=y,z,\mu$ and $\hat{\xi}=\hat{y},\hat{z},\hat{\mu}$ in turn.
Then we write
\beq{z2}
\xi(t)-\xi(t-1)=\alpha_t+\beta_t\text{ and }\hat{\xi}(t)-\hat{\xi}(t-1)=\hat{\alpha}_t+\hat{\beta}_t,
\eeq
where
$$\alpha_t=\sum_{\eta\in\set{a,b,c,2}}\delta_\eta({\bf u}(t-1))\psi_{\eta,\xi}({\bf u}(t-1))\text{ and }\beta_t=\xi(t)-\xi(t-1)-\alpha_t,$$
and
$$\hat{\alpha}_t=\sum_{\eta\in\set{a,b,c,2}}\theta_\eta(\hat{\bf u}(t-1))\psi_{\eta,\xi}(\hat{\bf u}(t-1))\text{ and }\hat{\beta}_t=\hat{\xi}(t)-\hat{\xi}(t-1)-\hat{\alpha}_t.$$
It follows from \eqref{06q1}, \eqref{07q1} etc.\ that
$$\mathbb{E}[\beta_t\mid{\bf u}(t-1)]=O\bfrac{\log^2N(t)}{\lambda(t)N(t)}.$$
An easy bound, which is a consequence of the Azuma--Hoeffding inequality, is that
\beq{z3}
\mathbb{P}\brac{\sum_{t=(i-1)\rho+1}^{i\rho}\beta_t\ge\rho^{1/2}\log^2n}\leq e^{-\Omega(\log^2n)}.
\eeq
We see furthermore that
\begin{align}
\sum_{t=(i-1)\rho+1}^{i\rho}\hat{\beta}_t&=\sum_{t=(i-1)\rho+1}^{i\rho}\brac{\hat{\xi}'(\hat{\bf u}(t-1+\varsigma_t))-\sum_{\eta\in\set{a,b,c,2}}\theta_\eta(\hat{\bf u}(t-1))\hat{\xi}_\eta'(\hat{\bf u}(t-1))}\nonumber\\
&=\sum_{t=(i-1)\rho+1}^{i\rho}\brac{\hat{\xi}'(\hat{\bf u}(t-1))+O\bfrac{\log n}{N}-\sum_{\eta\in\set{a,b,c,2}}\theta_\eta(\hat{\bf u}(t-1))\hat{\xi}_\eta'(\hat{\bf u}(t-1))}\nonumber\\
&=O\bfrac{\rho\log n}{N}=o(\rho^{1/2}\log^2n),\label{z4}
\end{align}
where $0\leq\varsigma_t\leq1$ and $\hat{\xi}_\eta'$ is the derivative of $\hat{\xi}$ in Case $\eta$. In this and the following claims we take $N=N(i\rho)$, the number of vertices at time $i\rho$.
Now write
\begin{multline}\label{z5}
\sum_{t=(i-1)\rho+1}^{i\rho}\alpha_t=\sum_{t=(i-1)\rho+1}^{i\rho}\sum_{\eta\in\set{a,b,c,2}}\delta_\eta({\bf u}(t))\brac{\psi_{\eta,\xi}({\bf u}((i-1)\rho))+O\bfrac{\rho\log n}{N}}\\
=\sum_{\eta\in\set{a,b,c,2}}\Delta_\eta({\bf u}((i-1)\rho+1,i\rho))\psi_{\eta,\xi}({\bf u}((i-1)\rho))+O\bfrac{\rho^2\log n}{N}
\end{multline}
and
\begin{multline}\label{z6}
\sum_{t=(i-1)\rho+1}^{i\rho}\hat{\alpha}_t=\\
\sum_{t=(i-1)\rho+1}^{i\rho}\sum_{\eta\in\set{a,b,c,2}}\brac{\theta_\eta(\hat{\bf u}((i-1)\rho))+O\bfrac{\rho}{N}}\brac{\psi_{\eta,\xi}(\hat{\bf u}((i-1)\rho))+O\bfrac{\rho}{N}}\\
=\sum_{\eta\in\set{a,b,c,2}}\rho\,\theta_\eta(\hat{\bf u}((i-1)\rho))\psi_{\eta,\xi}(\hat{\bf u}((i-1)\rho))+O\bfrac{\rho^2}{N}.
\end{multline}
It follows that
\beq{z7}
\sum_{t=(i-1)\rho+1}^{i\rho}(\hat{\alpha}_t-\alpha_t)=A_1+A_2+o\brac{\rho^{1/2}\log^2 n},
\eeq
where
\begin{align}
A_1&=\sum_{\eta\in\set{a,b,c,2}}\brac{\Delta_\eta({\bf u}((i-1)\rho+1,i\rho))-\rho\theta_\eta(\hat{\bf u}((i-1)\rho))}\psi_{\eta,\xi}({\bf u}((i-1)\rho))\nonumber\\
&=O\brac{\log n+\frac{\rho||{\bf u}((i-1)\rho)-\hat{\bf u}((i-1)\rho)||_1}{N}},\label{z8}\\
A_2&=\rho\sum_{\eta\in\set{a,b,c,2}}\theta_\eta(\hat{\bf u}((i-1)\rho))\brac{\psi_{\eta,\xi}({\bf u}((i-1)\rho))-\psi_{\eta,\xi}(\hat{\bf u}((i-1)\rho))}\nonumber\\
&=O\brac{\frac{\rho||{\bf u}((i-1)\rho)-\hat{\bf u}((i-1)\rho)||_1}{N}}.\label{z9}
\end{align}
It follows from \eqref{z1} to \eqref{z9} that \text{w.h.p.}, if $i\rho\leq T_0$ then, with
\beq{ai=}
a_i=||{\bf u}(i\rho)-\hat{\bf u}(i\rho)||_1,
\eeq
we have, for some $C_1>0$,
$$a_i\leq a_{i-1}\brac{1+\frac{C_1\rho}{N_i}}+2\rho^{1/2}\log^2n,$$
where $N_i=N(i\rho)\geq n/2$. Putting
$$\Pi_i=\prod_{j=0}^i\brac{1+\frac{C_1\rho}{N_j}}\leq e^{2C_1i\rho/n},$$
we see by induction that
\beq{ai}
a_i\leq 2\rho^{1/2}\log^2n\sum_{j=0}^i\frac{\Pi_i}{\Pi_j}\leq 2\rho^{1/2}\log^2n\,(i+1)e^{2C_1i\rho/n}.
\eeq
Since $i\leq n/\rho$ we have
$$||{\bf u}(i\rho)-\hat{\bf u}(i\rho)||_1=O(n\rho^{-1/2}\log^2n).$$
Going from $\rho\rdown{T_0/\rho}$ to $T_0$ adds at most $\rho\log n$ to the gap, and the lemma follows. \hspace*{\fill}\mbox{$\Box$}
\section{Approximate equations}\label{approxeq}
The equations \eqref{slide} are rather complicated and we have not made much progress in solving them exactly. Nevertheless, we can obtain information about them from a simpler set of equations that closely approximates them when $c$ is sufficiently large.
The important observation is that when $\hat{\lambda}$ is large,
\beq{approx}
\hat{A}\ll 1;\quad\hat{B}\ll 1;\quad\hat{C}\approx\frac{\hat{y}\hat{\lambda}}{2\hat{\mu}};\quad\hat{D}\approx\frac{\hat{z}\hat{\lambda}^2}{2\hat{\mu}};\quad\hat{\lambda}\approx\frac{2\hat{\mu}}{\hat{y}+\hat{z}}.
\eeq
We will therefore approximate equations \eqref{slide} by the following equations in variables $\tilde{y},\tilde{z},\tilde{\mu},\tilde{\lambda}$:
\begin{align}
&\tilde{y}'=-\frac{\tilde{y}}{\tilde{y}+\tilde{z}}-1\label{hydash}\\
&\tilde{z}'=\frac{2\tilde{y}}{\tilde{y}+\tilde{z}}\label{hzdash}\\
&\tilde{\mu}'=-1-\frac{2\tilde{z}\tilde{\mu}}{(\tilde{y}+\tilde{z})^2}\label{hmdash}\\
&\tilde{\lambda}=\frac{2\tilde{\mu}}{\tilde{y}+\tilde{z}}.\label{hldash}
\end{align}
The initial conditions for $\tilde{y},\tilde{z},\tilde{\mu},\tilde{\lambda}$ are that they start out equal to $\hat{y},\hat{z},\hat{\mu},\hat{\lambda}$ at time $t=0$, i.e.
\beq{initial}
\tilde{y}(0)=n;\quad\tilde{z}(0)=0;\quad\tilde{\mu}(0)=cn;\quad\tilde{\lambda}(0)=2c.
\eeq
\subsubsection{Analysis of the approximate equations}
The first two approximate equations imply $(\tilde{y}+\tilde{z}/2)'=-1$, so that
$$\tilde{y}+\frac{\tilde{z}}{2}=n-t.
$$
Using the second approximate equation and $\tilde{y}=n-t-\tilde{z}/2$, we obtain
$$\tilde{z}'=\frac{2(n-t-\tilde{z}/2)}{n-t+\tilde{z}/2},$$
or, introducing $\tau=n-t$ and
$$X=\frac{\tilde{z}}{2(n-t)}=\frac{\tilde{z}}{2\tau},$$
we get
\begin{equation}\label{dXdtau}
\frac{X+1}{X^2+1}\,dX=-\frac{1}{\tau}\,d\tau.
\end{equation}
Integrating,
$$\frac{1}{2}\ln(X^2+1)+\arctan X=-\ln\tau+C.$$
Now, at $t=0$ we have $\tau=n$ and $X=0$, so $C=\ln n$, i.e.
$$\frac{1}{2}\ln(X^2+1)+\arctan X=-\ln(\tau/n).$$
Let $\tilde{T}$ satisfy $\tilde{y}(\tilde{T})=0$. At $t=\tilde{T}$ we have $X=1$, so
$$\ln n-\ln(n-\tilde{T})=\frac{1}{2}\ln 2+\frac{\pi}{4},$$
which implies
\beq{TT}
\tilde{T}=\brac{1-\frac{1}{2^{1/2}}e^{-\pi/4}}n\approx 0.677603n.
\eeq
Note that
\begin{align*}
\tilde{\lambda}'&=\frac{2\tilde{\mu}'}{\tilde{y}+\tilde{z}}-\frac{2\tilde{\mu}(\tilde{y}'+\tilde{z}')}{(\tilde{y}+\tilde{z})^2}\\
&=-\frac{2}{\tilde{y}+\tilde{z}}-\frac{4\tilde{z}\tilde{\mu}}{(\tilde{y}+\tilde{z})^3}-\frac{2\tilde{\mu}}{(\tilde{y}+\tilde{z})^2}\brac{\frac{\tilde{y}}{\tilde{y}+\tilde{z}}-1}\\
&=-\frac{2}{\tilde{y}+\tilde{z}}-\frac{2\tilde{z}\tilde{\mu}}{(\tilde{y}+\tilde{z})^3}\\
&=-\frac{2}{\tilde{y}+\tilde{z}}-\frac{\tilde{z}\tilde{\lambda}}{(\tilde{y}+\tilde{z})^2},
\end{align*}
\beq{ldash}
\text{which implies that $\tilde{\lambda}$ is decreasing with $t$, at least as long as $\tilde{y},\tilde{z},\tilde{\lambda}>0$.}
\eeq
Here
\begin{align*}
\frac{\tilde{z}}{(\tilde{y}+\tilde{z})^2}&=\frac{\tilde{z}}{(n-t+\tilde{z}/2)^2}\\
&=\frac{\tilde{z}}{(n-t)^2(1+X)^2}\\
&=\frac{2X}{(n-t)(1+X)^2}.
\end{align*}
Likewise
$$-\frac{2}{\tilde{y}+\tilde{z}}=-\frac{2}{(n-t)(1+X)}.$$
So $\tilde{\lambda}$ satisfies
$$\tilde{\lambda}'=-\frac{2}{(n-t)(1+X)}-\frac{2X}{(n-t)(1+X)^2}\,\tilde{\lambda},\quad\tilde{\lambda}(0)=2c.
$$
Using \eqref{dXdtau}, we obtain
$$\frac{d\tilde{\lambda}}{dX}=-\frac{2}{1+X^2}-\frac{2X}{(1+X)(1+X^2)}\,\tilde{\lambda},\quad\left.\tilde{\lambda}(X)\right|_{X=0}=2c.$$
Integrating this first-order linear ODE, we obtain
\beq{comp}
\tilde{\lambda}(X)=\frac{(1+X)e^{-\arctan X}}{\sqrt{1+X^2}}\left[2c-\int_0^X\frac{2e^{\arctan x}}{(1+x)\sqrt{1+x^2}}\,dx\right].
\eeq
In particular,
\beq{iwc}
\tilde{\lambda}(\tilde{T})\approx 1.53c-1.418.
\eeq
\subsubsection{Simple Inequalities}\label{simpineq}
We will use the following to quantify \eqref{approx}:
$$1\leq\frac{f_2(\hat{\lambda})}{f_3(\hat{\lambda})}=1+\varepsilon_1,\qquad 1\le\frac{f_0(\hat{\lambda})}{f_2(\hat{\lambda})}=1+\varepsilon_2,\qquad 1\leq\frac{f_0(\hat{\lambda})}{f_3(\hat{\lambda})}=1+\varepsilon_3,$$
where
$$\varepsilon_1=\frac{\hat{\lambda}^2}{2f_3(\hat{\lambda})},\qquad\varepsilon_2=\frac{1+\hat{\lambda}}{f_2(\hat{\lambda})},\qquad\varepsilon_3=\frac{\hat{\lambda}^2+2\hat{\lambda}+2}{2f_3(\hat{\lambda})}.$$
We use the above to verify the following sequence of inequalities for $\hat{y},\hat{z},\hat{\mu},\hat{\lambda}$:
\begin{align}
&\frac{2\hat{\mu}}{\hat{y}+\hat{z}}(1-\varepsilon_4)\leq\hat{\lambda}\leq\frac{2\hat{\mu}}{\hat{y}+\hat{z}},\label{first}\\
&0\leq\hat{A}\leq\varepsilon_5,\nonumber\\
&0\leq\hat{B}\leq\varepsilon_6,\nonumber\\
&\frac{\hat{y}\hat{\lambda}}{2\hat{\mu}}\leq\hat{C}=\frac{\hat{y}\hat{\lambda}}{2\hat{\mu}}(1+\varepsilon_1),\nonumber\\
&\frac{\hat{z}\hat{\lambda}^2}{2\hat{\mu}}\leq\hat{D}=\frac{\hat{z}\hat{\lambda}^2}{2\hat{\mu}}(1+\varepsilon_2),\nonumber
\end{align}
where
$$\varepsilon_4=\frac{\varepsilon_1}{1+\varepsilon_1},\qquad\varepsilon_5=\frac{(1+\varepsilon_2)(1+\varepsilon_3)\hat{\lambda}^3}{8f_0(\hat{\lambda})},\qquad\varepsilon_6=\frac{\hat{\lambda}^2(1+\varepsilon_2)^2}{f_0(\hat{\lambda})}.$$
(We use \eqref{slide2} to get
$\hat{y}\hat{z}\leq\hat{\mu}^2/\hat{\lambda}^2$ for use in defining $\varepsilon_5$.) For \eqref{first} we use
$$\frac{\hat{\lambda}(\hat{y}+\hat{z})}{2\hat{\mu}}\geq\frac{1}{\max\set{\frac{f_2(\hat{\lambda})}{f_3(\hat{\lambda})},\frac{f_1(\hat{\lambda})}{f_2(\hat{\lambda})}}}=\frac{f_3(\hat{\lambda})}{f_2(\hat{\lambda})}=\frac{1}{1+\varepsilon_1}.$$
It follows from \eqref{first} that the initial value $\hat{\lambda}_0$ of $\hat{\lambda}$ satisfies
$$2c\geq\hat{\lambda}_0\geq 2c(1-\varepsilon_4).$$
Now $\varepsilon_4\leq .0001$ for $\hat{\lambda}\geq 15$, and so
\beq{lh0}
2c(1-.0001)\leq\hat{\lambda}_0\leq 2c.
\eeq
\subsubsection{Main Goal}\label{maingoal}
Lemma \ref{near} (below), in conjunction with Lemma \ref{close}, will enable us to argue that \text{w.h.p.}\ in the process {\sc 2greedy}, at some time $T\leq T_0$ we will have
\beq{Texists}
y(T)=0,\quad z(T)=\Omega(n)\quad\text{ and }\quad\lambda(t)=\Omega(1)\text{ for }t\leq T.
\eeq
Define
$$T_+=\min\set{t>0:\hat{y}(t)\leq0\text{ or }\hat{z}(t)\leq0\text{ or }\tilde{y}(t)\leq0\text{ or }\tilde{z}(t)\leq0}.$$
We can bound this from below by a small constant as follows: initially $\hat{A},\hat{B}$ are small and $\hat{C}$ is close to one for $c\geq 15$, and so \eqref{slide} implies that $\hat{z}$ is strictly increasing at the beginning. Also, $\hat{y},\tilde{y}$ start out large ($=n$) and so remain positive initially.
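The behaviour of $\tilde{y},\tilde{z}$ can also be tracked numerically. The following sketch (not from the paper; a forward-Euler discretisation with $n$ normalised to $1$) integrates \eqref{hydash}--\eqref{hzdash}, checks the conservation law $\tilde{y}+\tilde{z}/2=n-t$ along the way, and locates the first zero of $\tilde{y}$, which matches the constant in \eqref{TT}.

```python
import math

# Forward-Euler integration of the approximate equations
#   y' = -y/(y+z) - 1,   z' = 2y/(y+z),
# with n normalised to 1, so y(0) = 1, z(0) = 0.
h = 1e-5                      # step size
y, z, t = 1.0, 0.0, 0.0
while y > 0.0:
    s = y + z
    y, z, t = y + h * (-y / s - 1.0), z + h * (2.0 * y / s), t + h
    # the conservation law y + z/2 = 1 - t is exact for this scheme
    assert abs(y + z / 2.0 - (1.0 - t)) < 1e-9

# first zero of y: T/n = 1 - e^{-pi/4}/sqrt(2) ~ 0.677603
T_exact = 1.0 - math.exp(-math.pi / 4.0) / math.sqrt(2.0)
assert abs(t - T_exact) < 1e-3
# at that moment X = z/(2(n-t)) should be close to 1
assert abs(z / (2.0 * (1.0 - t)) - 1.0) < 1e-2
```

The conservation check is exact (up to floating-point roundoff) because the Euler updates of $\tilde{y}$ and $\tilde{z}/2$ cancel except for the constant drift $-1$.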
Next define
\beq{T1def}
T_1=\min\set{T_+,\max\set{t:\hat{\lambda}(\tau)\geq\lambda^*\text{ and }\min\set{\tilde{y}(\tau)+\tilde{z}(\tau),\hat{y}(\tau)+\hat{z}(\tau)}\geq\beta n\text{ for }\tau\leq t}},
\eeq
where
$$\beta=-.01+\tilde{z}(\tilde{T})/n=-.01+2(n-\tilde{T})/n\approx .63$$
and
$$\lambda^*=\tilde{\lambda}(\tilde{T})-5.$$
Comparing \eqref{iwc} and \eqref{lh0} we see that $\hat{\lambda}_0>\lambda^*$. Note that
\beq{T0TT}
T_1\leq\tilde{T}.
\eeq
This is because $\tilde{y}(\tilde{T})=0$ and $\tilde{y}'(\tilde{T})=-1$.
\begin{lemma}\label{near}
For large enough $c$,
\beq{hT}
\hat{y}(T_1)=0<\hat{z}(T_1)=\Omega(n)\text{ and }\hat{\lambda}(T_1)=\Omega(1).
\eeq
\end{lemma}
{\bf Proof\hspace{2em}} It follows from \eqref{slide} and Section \ref{simpineq} that
\begin{align}
\hat{y}'&\leq\frac{\varepsilon_6-\frac{\hat{y}\hat{\lambda}}{2\hat{\mu}}}{1-\varepsilon_5}-1\leq-\frac{\hat{y}}{\hat{y}+\hat{z}}-1+\varepsilon_7,\nonumber\\
\hat{y}'&\geq-\frac{\frac{\hat{y}\hat{\lambda}}{2\hat{\mu}}(1+\varepsilon_1)}{1-\varepsilon_5}-1\geq-\frac{\hat{y}}{\hat{y}+\hat{z}}-1-\varepsilon_8,\label{lowerydash}\\
\nonumber\\
\hat{z}'&\leq\frac{2\hat{y}\hat{\lambda}}{2\hat{\mu}}\cdot\frac{1+\varepsilon_1}{1-\varepsilon_5}\leq\frac{2\hat{y}}{\hat{y}+\hat{z}}+2\varepsilon_8,\nonumber\\
\hat{z}'&\geq\frac{2\hat{y}}{\hat{y}+\hat{z}}-2\varepsilon_7,\label{lowerzdash}\\
\nonumber\\
\hat{\lambda}&\leq\frac{2\hat{\mu}}{\hat{y}+\hat{z}},\label{lowerldash}\\
\hat{\lambda}&\geq\frac{2\hat{\mu}}{\hat{y}+\hat{z}}(1-\varepsilon_4)\geq\frac{2\hat{\mu}}{\hat{y}+\hat{z}}-\varepsilon_9,\label{upperldash}\\
\nonumber\\
\hat{\mu}'&\leq-1-\frac{\hat{z}\hat{\lambda}^2}{2\hat{\mu}}\leq-1-\frac{2\hat{z}\hat{\mu}}{(\hat{y}+\hat{z})^2}(1-\varepsilon_4)^2\leq-1-\frac{2\hat{z}\hat{\mu}}{(\hat{y}+\hat{z})^2}+\varepsilon_{10},\nonumber\\
\hat{\mu}'&\geq-\frac{1+\frac{\hat{z}\hat{\lambda}^2}{2\hat{\mu}}(1+\varepsilon_2)}{1-\varepsilon_5}\geq-1-\frac{2\hat{z}\hat{\mu}}{(\hat{y}+\hat{z})^2}-\varepsilon_{11},\label{lowermdash}
\end{align}
where
\begin{align*}
&\varepsilon_7=\frac{\varepsilon_4+\varepsilon_5+\varepsilon_6}{1-\varepsilon_5},&&\varepsilon_8=\frac{\varepsilon_1+\varepsilon_5}{1-\varepsilon_5},&&\varepsilon_9=\hat{\lambda}\varepsilon_4,\\
&\varepsilon_{10}=\frac{2\hat{\lambda}\varepsilon_4}{1-\varepsilon_4},&&\varepsilon_{11}=\frac{\hat{\lambda}(\varepsilon_2+\varepsilon_5)+\varepsilon_5}{(1-\varepsilon_4)(1-\varepsilon_5)}.
\end{align*}
Since at $t=0$ we have $\hat{y}=n,\hat{z}=0,\hat{\mu}=cn$, with $\hat{\lambda}$ satisfying \eqref{lh0}, we see that $T_1>0$ for $c\geq 15$.
We can write $\hat{y}(0)=n,\hat{z}(0)=0,\hat{\mu}(0)=cn$ and
\begin{align}
&\hat{y}'=-\frac{\hat{y}}{\hat{y}+\hat{z}}-1+\theta_1\qquad&\text{ where }|\theta_1|\leq \delta^*,\label{delta1}\\
&\hat{z}'=\frac{2\hat{y}}{\hat{y}+\hat{z}}+\theta_2\qquad&\text{ where }|\theta_2|\leq 2\delta^*,\label{delta2}\\
&\hat{\mu}'=-1-\frac{2\hat{z}\hat{\mu}}{(\hat{y}+\hat{z})^2}+\theta_3&\qquad\text{ where }-\varepsilon_{11}\le\theta_3\le\varepsilon_{10},\label{delta3}\\
&\hat{\lambda}=\frac{2\hat{\mu}}{\hat{y}+\hat{z}}+\theta_4&\qquad\text{ where } -\varepsilon_9\leq \theta_4\leq 0,\label{delta4}
\end{align}
where
$$\delta^*=\max\left\{\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_8\right\}.$$
It can easily be checked that the functions $\varepsilon_1,\ldots,\varepsilon_{11}$ are all monotone decreasing for $\hat{\lambda}\geq \lambda^*$ ($\lambda^*(15)\approx 16.549$). Furthermore, $\delta^*(16)<.00011$ and our error estimates will mostly be $\delta^*$ times a constant of moderate size. The only exceptions contain a factor $c$, but if $c$ is large then $\delta^*$ decreases to compensate. It follows from \eqref{hmdash} and \eqref{delta3} that
\begin{equation}\label{mudown}
\hat{\mu},\tilde{\mu}\text{ both decrease for }t\leq T_1,\text{ since }\theta_3<1\text{ for }\hat{\lambda}\geq \lambda^*.
\end{equation}
The ensuing calculations involve many constants, and the expressions \eqref{yzok} and \eqref{very} claim some inequalities that are tedious to justify. It is unrealistic to expect the reader to check these calculations by hand. Instead, we have provided Mathematica output in an appendix that justifies our claims.
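As an editorial aside (not part of the paper's argument), the scaled version of the system above can be integrated numerically to confirm the stated constants. Assuming the scaled initial conditions $y(0)=1$, $z(0)=0$, $m(0)=c$ and the drift terms with the $\theta_i$ set to zero, a simple Euler sketch reads:

```python
# Numerical sanity check (editorial, not from the paper): integrate the scaled
# noiseless system  y' = -y/(y+z) - 1,  z' = 2y/(y+z),  m' = -1 - 2zm/(y+z)^2
# with y(0)=1, z(0)=0, m(0)=c, and stop when y hits 0 (this point plays the
# role of T~/n).  The step size h and the choice c=15 are illustrative.
def integrate(c, h=1e-5):
    y, z, m, x = 1.0, 0.0, float(c), 0.0
    while y > 0:
        s = y + z
        # simultaneous Euler update (right-hand sides use the old values)
        y, z, m = y + h * (-y / s - 1), z + h * (2 * y / s), m + h * (-1 - 2 * z * m / s**2)
        x += h
    return x, z, m  # x ~ T~/n,  z ~ z~(T~)/n,  m ~ mu~(T~)/n

xT, zT, mT = integrate(c=15)
beta = zT - 0.01  # beta = -.01 + z~(T~)/n, asserted to be about .63 in the text
```

One can check that the identity $y+z/2=1-x$ is preserved exactly by this scheme, which is the reason behind $\tilde{z}(\tilde{T})=2(n-\tilde{T})$.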
The reader will notice the similarity between these equations and the approximations \eqref{hydash} -- \eqref{hldash}. From now on we refer to \eqref{delta1} -- \eqref{delta4} as the {\em true} equations and to \eqref{hydash} -- \eqref{hldash} as the {\em approximate} equations.
\subsubsection{$y,z$ and $\hat{y},\hat{z}$ are close}
We claim next that
\begin{equation}\label{half}
\max\left\{|\hat{y}(t)-\tilde{y}(t)|,|\hat{z}(t)-\tilde{z}(t)|\right\}\leq\delta^* F_1(t/n)n \text{ for }0\leq t\leq T_1,
\end{equation}
where
\begin{equation}\label{fig}
F_a(x)=\beta(e^{2ax/\beta}-1)\text{ for }x\leq \frac{\tilde{T}}{n}
\end{equation}
and $a>0$. Note that
$$F_a'(t)=2(aF_a(t)/\beta+1).$$
In the proof of \eqref{half}, think of $n$ as fixed and of $h$ as a parameter that tends to zero. Think of $\varepsilon$ as small, but fixed until the end of the proof. In the display beginning with equation \eqref{dis1}, only $h$ is a quantity going to zero. Let
$$\hat{u}_i=\hat{y}(ih),\ \hat{v}_i=\hat{z}(ih),\ \tilde{u}_i=\tilde{y}(ih),\ \tilde{v}_i=\tilde{z}(ih)\text{ for }0\leq i\leq n/h.$$
Assume inductively that for $i<i_0=T_1/h$,
\begin{equation}\label{ui}
|\hat{u}_i-\tilde{u}_i|,|\hat{v}_i-\tilde{v}_i|\leq \delta^* F_{1+\varepsilon}(ih/n)n.
\end{equation}
This is true for $i=0$. By the mean value theorem,
$$F_{1+\varepsilon}((i+1)h/n)=F_{1+\varepsilon}(ih/n)+\frac{h}{n}F_{1+\varepsilon}'((i+\theta)h/n)$$
for some $0\leq \theta\leq 1$.
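As an editorial aside (not from the paper), the differential identity quoted for $F_a$ can be verified symbolically in the case $a=1$, which is the case the induction actually uses through $F_1'$:

```python
# Symbolic sanity check (editorial, not from the paper): for a = 1 the function
# F_1(x) = beta*(exp(2x/beta) - 1) satisfies F_1' = 2*(F_1/beta + 1),
# the differential identity invoked in the induction below.
import sympy as sp

x, beta = sp.symbols('x beta', positive=True)
F1 = beta * (sp.exp(2 * x / beta) - 1)
assert sp.simplify(sp.diff(F1, x) - 2 * (F1 / beta + 1)) == 0
```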
Then by the inductive assumption, the Taylor expansion and the uniform boundedness of second derivatives,
\begin{align}
\hat{v}_{i+1}-\tilde{v}_{i+1}&=\hat{v}_i-\tilde{v}_i+h\brac{\frac{2\hat{u}_i}{\hat{u}_i+\hat{v}_i}-\frac{2\tilde{u}_i}{\tilde{u}_i+\tilde{v}_i}+\theta_2(ih)}+O(h^2)\label{dis1}\\
&=\hat{v}_i-\tilde{v}_i+h\brac{\frac{2\hat{u}_i(\tilde{v}_i-\hat{v}_i)-2\hat{v}_i(\tilde{u}_i-\hat{u}_i)}{(\hat{u}_i+\hat{v}_i)(\tilde{u}_i+\tilde{v}_i)}+\theta_2(ih)}+O(h^2)\nonumber\\
&\leq\hat{v}_i-\tilde{v}_i+h\brac{\frac{2(\tilde{u}_i+\tilde{v}_i)\max\left\{|\hat{v}_i-\tilde{v}_i|,|\hat{u}_i-\tilde{u}_i|\right\}}{(\hat{u}_i+\hat{v}_i)(\tilde{u}_i+\tilde{v}_i)}+\theta_2(ih)}+O(h^2)\nonumber\\
&\leq \delta^* F_{1+\varepsilon}(ih/n)n+2\delta^* h\brac{F_1(ih/n)/\beta+1}+O(h^2)\nonumber\\
&=\delta^*(F_{1+\varepsilon}(ih/n)n+hF_1'(ih/n))+O(h^2)\nonumber\\
&=\delta^*(F_{1+\varepsilon}((i+1)h/n)n+h(F_1'(ih/n)-F_{1+\varepsilon}'((i+\theta)h/n)))+O(h^2)\nonumber\\
&\leq \delta^* F_{1+\varepsilon}((i+1)h/n)n-\Omega(\varepsilon h),\nonumber
\end{align}
completing this case of the induction. The remaining three cases are proved similarly. This completes the inductive proof of \eqref{ui}. Letting $\varepsilon\to 0$ we see for example that $\hat{y}(t)-\tilde{y}(t)\leq \delta^* F_1(t/n)n$ for $t\leq T_1$. This completes the proof of \eqref{half}. Let
$$\alpha_0=F_1(\tilde{T}/n).$$
Observe next that
\begin{equation}\label{hhalf}
(\tilde{y}+\tilde{z})'=\frac{\tilde{y}}{\tilde{y}+\tilde{z}}-1\leq 0.
\end{equation}
So for $t\leq T_1$ we have
\begin{equation}\label{644}
\tilde{y}+\tilde{z}\geq \tilde{y}(\tilde{T})+\tilde{z}(\tilde{T})=\tilde{z}(\tilde{T})=2(n-\tilde{T})=(\beta+.01)n.
\end{equation}
Furthermore, putting $X=1$ and going back to \eqref{comp},
\begin{equation}\label{l1}
\tilde{\lambda}(\tilde{T})=\frac{(1+\tilde{T})e^{-\arctan \tilde{T}}}{\sqrt{1+\tilde{T}^2}}\brac{2c-\int_{x=0}^1\frac{2e^{\arctan x}}{(1+x)\sqrt{1+x^2}}\,dx}=\alpha_1c-\alpha_2.
\end{equation}
\subsubsection{Lower bounding $\hat{\lambda}$}
We now show that $\tilde{\lambda}-\hat{\lambda}$ is small. We use \eqref{delta3} and \eqref{half} to write, for $t\leq T_1$,
\begin{align*}
&|\tilde{\mu}'-\hat{\mu}'|\\
&\leq |\theta_3|+\card{\frac{2\hat{\mu}\hat{z}((\hat{y}+\hat{z})^2+4\delta^* F_1(t/n)(\hat{y}+\hat{z})n+4{\delta^*}^2 F_1(t/n)^2n^2)-2\tilde{\mu}(\hat{y}+\hat{z})^2(\hat{z}-\delta^* F_1(t/n)n)}{(\hat{y}+\hat{z})^2(\tilde{y}+\tilde{z})^2}}\\
&=|\theta_3|+\card{\frac{2\hat{z}(\hat{y}+\hat{z})^2(\hat{\mu}-\tilde{\mu})+2\delta^* F_1(t/n)n(\hat{y}+\hat{z})(4\hat{\mu}\hat{z}+\tilde{\mu}(\hat{y}+\hat{z}))+8\hat{\mu}\hat{z}{\delta^*}^2F_1(t/n)^2n^2}{(\hat{y}+\hat{z})^2(\tilde{y}+\tilde{z})^2}}.
\end{align*}
Now, using \eqref{mudown},
$$\frac{4\hat{\mu}\hat{z}+\tilde{\mu}(\hat{y}+\hat{z})}{(\hat{y}+\hat{z})(\tilde{y}+\tilde{z})^2}\leq \frac{4\hat{\mu}+\tilde{\mu}}{(\tilde{y}+\tilde{z})^2}\leq \frac{5c}{\beta^2n}$$
and
$$\frac{8\hat{\mu}\hat{z}}{(\hat{y}+\hat{z})^2(\tilde{y}+\tilde{z})^2}\leq \frac{8c}{\beta^3n^2}.$$
So,
$$|\tilde{\mu}'-\hat{\mu}'|\leq \frac{2|\hat{\mu}-\tilde{\mu}|}{\beta n}+\alpha_3\delta^*$$
where
$$\alpha_3=\frac{10\alpha_0c}{\beta^2}+\frac{8c\alpha_0^2\delta^*}{\beta^3}+\frac{(2c+1)\delta^*}{\beta(1-\delta^*)^2},$$
the third term being an upper bound for $\varepsilon_{10},\varepsilon_{11}$; its validity rests on \eqref{lowerldash} and \eqref{mudown}, with which we bound $\hat{\lambda}\leq 2c/\beta$. Integrating, we get that if $\zeta=|\hat{\mu}-\tilde{\mu}|$ then
$$\zeta'-\frac{2\zeta}{\beta n}\leq \alpha_3\delta^*$$
and so
$$|\tilde{\mu}-\hat{\mu}|\leq \alpha_3\delta^* e^{2t/\beta n}\int_{\tau=0}^t e^{-2\tau/\beta n}\,d\tau=\alpha_3\delta^* \frac{\beta n}{2}(e^{2t/\beta n}-1)\leq \alpha_4\delta^* n$$
for $t\leq T_1$, where
$$\alpha_4=\alpha_0\alpha_3/2.$$
It then follows that as long as $t\leq T_1$,
\begin{align*}
\tilde{\lambda}-\hat{\lambda}&=-\theta_4+\frac{2\tilde{\mu}(\hat{y}+\hat{z})-2\hat{\mu}(\tilde{y}+\tilde{z})}{(\hat{y}+\hat{z})(\tilde{y}+\tilde{z})}\\
&\leq \varepsilon_9+\frac{2(\hat{\mu}+\alpha_4\delta^* n)(\hat{y}+\hat{z})-2\hat{\mu}(\hat{y}+\hat{z}-2\alpha_0\delta^* n)}{(\hat{y}+\hat{z})(\tilde{y}+\tilde{z})}\\
&\leq \varepsilon_9+\frac{2\alpha_4\delta^*}{\beta}+\frac{4c\alpha_0\delta^*}{\beta^2}.
\end{align*}
It follows from \eqref{ldash} that for $t\leq T_1$ we have
\begin{equation}\label{lf}
\hat{\lambda}(t)\geq \tilde{\lambda}(T_1)-\alpha_5\delta^*
\end{equation}
where
$$\alpha_5=\frac{2c}{\beta}+\frac{2\alpha_4}{\beta}+\frac{4c\alpha_0}{\beta^2}.$$
We now argue that $\hat{y}(T_1)=0$ and $\hat{\lambda}(T_1)\geq \lambda^*$. This proves Lemma \ref{near}, since $\hat{y}(T_1)+\hat{z}(T_1)\geq \beta n$. Suppose then, for a contradiction, that $\hat{y}(T_1)>0$. Recall that $T_1\leq \tilde{T}$ (see \eqref{T0TT}) and suppose first that $T_1<\tilde{T}$. Now let
$$T_2=\min\left\{T_1+\varepsilon n,(T_1+\tilde{T})/2\right\}$$
where $0<\varepsilon<10^{-10}$ is such that
\begin{equation}\label{defeps}
\varepsilon\max\left\{|\hat{\lambda}'(\tau)|,|\hat{y}'(\tau)|,|\hat{z}'(\tau)|\right\}\leq 10^{-10}\text{ for all }\tau\in [T_1,T_2].
\end{equation}
The existence of such an $\varepsilon$ follows from elementary real analysis.
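As an editorial aside (not from the paper), the Gr\"onwall-type integration step above can be confirmed symbolically: the extremal solution of the differential inequality for $\zeta$ is exactly the claimed bound.

```python
# Symbolic sanity check (editorial, not from the paper): the extremal solution
# of  zeta' = 2*zeta/(b*n) + A  with  zeta(0) = 0  (A stands for alpha_3*delta^*)
# is  A*(b*n/2)*(exp(2*t/(b*n)) - 1), matching the integrated bound in the text.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
A, b, n = sp.symbols('A b n', positive=True)
zeta = sp.Function('zeta')

sol = sp.dsolve(sp.Eq(zeta(t).diff(t), 2 * zeta(t) / (b * n) + A),
                zeta(t), ics={zeta(0): 0}).rhs
claimed = A * (b * n / 2) * (sp.exp(2 * t / (b * n)) - 1)
assert sp.simplify(sol - claimed) == 0
```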
We will argue that $\tau\in [T_1,T_2]$ implies
$$\hat{\lambda}(\tau)\geq \lambda^*\text{ and }\min\left\{\hat{y}(\tau)+\hat{z}(\tau),\tilde{y}(\tau)+\tilde{z}(\tau)\right\}\geq \beta n\text{ and }\min\left\{\hat{y}(\tau),\hat{z}(\tau),\tilde{y}(\tau),\tilde{z}(\tau)\right\}>0,$$
which contradicts the definition of $T_1$. Fix $\tau\in [T_1,T_2]$. Now $\tau<\tilde{T}$ implies that $\tilde{y}(\tau)>0$. Together with \eqref{hzdash} this shows that $\tilde{z}$ increases for $t\leq \tilde{T}$ and hence $\tilde{z}(\tau)>0$. We have $\tilde{y}'(t)\geq -2$ (see \eqref{hydash}) and $\tilde{z}'(t)\geq 0$ for $t\leq T_2$ (see \eqref{hzdash}) and so for some $\tau_1,\tau_2\in [T_1,T_2]$,
\begin{align*}
\tilde{y}(\tau)&=\tilde{y}(T_1)+(\tau-T_1)\tilde{y}'(\tau_1)\geq \tilde{y}(T_1)-2\varepsilon n,\\
\tilde{z}(\tau)&=\tilde{z}(T_1)+(\tau-T_1)\tilde{z}'(\tau_2)\geq \tilde{z}(T_1).
\end{align*}
It follows (using \eqref{hhalf}) that
\begin{equation}\label{zz1}
\tilde{y}(\tau)+\tilde{z}(\tau)\geq \tilde{y}(T_1)+\tilde{z}(T_1)-2\varepsilon n\geq \tilde{y}(\tilde{T})+\tilde{z}(\tilde{T})-2\varepsilon n>\beta n.
\end{equation}
We have, for some $\tau_3,\tau_4\in [T_1,T_2]$,
\begin{align}
\hat{y}(\tau)+\hat{z}(\tau)&=\hat{y}(T_1)+\hat{z}(T_1)+(\tau-T_1)(\hat{y}'(\tau_3)+\hat{z}'(\tau_4))\nonumber\\
&\geq \tilde{y}(T_1)+\tilde{z}(T_1)-2F_1(\tilde{T}/n)\delta^* n-2\times 10^{-10}n\nonumber\\
&\geq \brac{\beta+.01-2\brac{\alpha_0\delta^*+10^{-10}}}n\nonumber\\
&\geq \beta n.\label{yzok}
\end{align}
We now argue that $\hat{z}(\tau)>0$. Equation \eqref{hzdash} shows that $\tilde{z}$ is strictly increasing initially. Also, if $\hat{\lambda}\geq \lambda^*$ then $\theta_3\leq 1/8$. From \eqref{delta2} we see that $\hat{z}$ is strictly increasing at least until a time $\tau_0$ at which $\hat{y}(\tau_0)\leq \beta\delta^* n$. On the other hand, we see from \eqref{yzok} that if $\hat{y}(\tau)\leq \beta\delta^* n$ then $\hat{z}(\tau)>0$. So,
\begin{equation}\label{fcv}
\min\left\{\hat{y}(\tau),\hat{z}(\tau),\tilde{y}(\tau),\tilde{z}(\tau)\right\}>0.
\end{equation}
Now we write
\begin{align}
\hat{\lambda}(\tau)&=\hat{\lambda}(T_1)+(\tau-T_1)\hat{\lambda}'(\tau_3)\nonumber\\
&\geq \tilde{\lambda}(T_1)-(\tilde{\lambda}(T_1)-\hat{\lambda}(T_1))-10^{-10},\qquad\text{ using \eqref{defeps}},\nonumber\\
&\geq \alpha_1c-\alpha_2-\alpha_5\delta^*-10^{-10}\nonumber\\
&>\lambda^*.\label{very}
\end{align}
We must now deal with the case where $T_1=\tilde{T}$.
Here we can just use \eqref{half} to argue that $\hat{z}(T_1)>\tilde{z}(T_1)-\alpha_0\delta^* n>0$ and $\hat{y}(T_1)+\hat{z}(T_1)>\tilde{y}(T_1)+\tilde{z}(T_1)-\alpha_0\delta^* n>(\beta+.01-\alpha_0\delta^*)n>\beta n$ and $\hat{\lambda}(\tilde{T})\geq \tilde{\lambda}(\tilde{T})-\alpha_5\delta^*>\lambda^*$. This completes the proof of Lemma \ref{near}. \hspace*{\fill}\mbox{$\Box$}

It follows from Lemma \ref{close} that \text{w.h.p.}\ $y(T_1)\leq n^{8/9}$, $z(T_1)\geq \beta n-n^{8/9}$ and $\lambda(T_1)\geq \lambda^*$. We claim that \text{q.s.}\ $y$ becomes zero within the next $\nu=n^{9/10}$ steps of {\sc 2greedy}. Suppose not. It follows from Lemma \ref{onestep} that $\lambda$ changes by $o(1)$, and from \eqref{maxdegree} that $z$ changes by $o(n)$, during these $\nu$ steps. Thus $T_1+\nu\leq T_0$. It follows from \eqref{eqx5} that \text{q.s.}\ at least $\nu\log^{-2}n$ of these steps are of type Step 2. But each such step reduces $y$ by at least one, a contradiction. This verifies \eqref{Texists}.

\section{The number of components in the output of the algorithm}\label{noc}
We will tighten our bound on $\zeta$ from Lemma \ref{Klem}.
\begin{lemma}\label{Klemm}
If $c\geq 15$ then for every positive constant $K$ there exists a constant $c_2=c_2(K)$ such that
$$\mathbb{P}\brac{\exists\, 1\leq t\leq T_1:\;\zeta(t)> c_2\log n}\leq n^{-K}.$$
\end{lemma}
{\bf Proof}\hspace{2em}
We need a sharper inequality than \eqref{x4} in order to replace $L_1$ by what is claimed in the statement of the lemma. This sharper inequality uses higher moments of the $X_t$'s, and we can estimate them now that we have the estimate of the maximum of $\zeta(t)$ given in \eqref{x4}. So, we now have to estimate terms of the form
$$\Psi_j(\xi\mid \boldsymbol{\eta})=\mathbb{E}\left[\,|(\xi'-\xi)-\mathbb{E}[(\xi'-\xi)\mid \boldsymbol{\eta}]|^j\,\mid \boldsymbol{\eta}\right]$$
for $\xi=y_1,y_2,z_2$, $2\leq j\le \log n$ and $\boldsymbol{\eta}={\bf v}$ or ${\bf b},{\bf d}$. We use the inequality
$$(a+b+c+d)^j\leq 4^j(|a|^j+|b|^j+|c|^j+|d|^j)$$
for $j\geq 1$. We will also need to estimate, for $2\leq j\leq \log n$,
\begin{multline*}
\sum_{k\geq 2}\frac{k(k-1)^j\lambda^k}{k!}=\lambda^2\sum_{k\geq 0}\frac{(k+1)^{j-1}\lambda^{k}}{k!}< 2^j\lambda^2\sum_{k\geq 0}\frac{k^j\lambda^k}{k!}=2^j\lambda^2\sum_{k\geq 0}\sum_{\ell=0}^j\genfrac{\{}{\}}{0in}{}{j}{\ell}\frac{(k)_\ell\lambda^k}{k!}\\
=2^j\lambda^2\sum_{\ell=0}^j\genfrac{\{}{\}}{0in}{}{j}{\ell}\lambda^\ell\sum_{k\geq \ell}\frac{\lambda^{k-\ell}}{(k-\ell)!}\leq 2^j\lambda^{j+2}e^\lambda \sum_{\ell=0}^j\genfrac{\{}{\}}{0in}{}{j}{\ell}\leq 2^jj!\lambda^{j+2}e^\lambda.
\end{multline*}
Here $\genfrac{\{}{\}}{0in}{}{j}{\ell}$ is a Stirling number of the second kind, and it is easy to verify by induction on $j$ that the Bell number $\sum_{\ell=0}^j\genfrac{\{}{\}}{0in}{}{j}{\ell}\leq j!$.

{\bf Step 1.\/} $y_1+y_2+z_1>0$.

{\bf Step 1(a).\/} $y_1>0$.
\begin{eqnarray}
\Psi_j(y_1\mid{\bf b},{\bf d})&\leq&4^j\brac{\frac{y_1}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{y_1}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{2y_2}{2\mu}+\varepsilon_{\ref{04x1}}}.\label{04x1}\\
\Psi_j(y_1\mid{\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{04xq1}\\
\Psi_j(y_2\mid {\bf b},{\bf d})&\leq&4^j\brac{\frac{2y_2}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{2y_2}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{3y_3}{2\mu}+\varepsilon_{\ref{041}}}.\label{041}\\
\Psi_j(y_2\mid {\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\lambda^3+\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{04q1}\\
\Psi_j(z_1\mid {\bf b},{\bf d})&\le&4^j\brac{\frac{z_1}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{z_1}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{2z_2}{2\mu}+\varepsilon_{\ref{051}}}.\label{051}\\
\Psi_j(z_1\mid {\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\lambda^2+\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{05q1}
\end{eqnarray}

{\bf Step 1(b).\/} $y_1=0,y_2>0$.
\begin{eqnarray}
\Psi_j(y_1\mid {\bf b},{\bf d})&\leq&4^j\brac{2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{2y_2}{2\mu}+\varepsilon_{\ref{4x1}}}.\label{4x1}\\
\Psi_j(y_1\mid {\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{4xq1}\\
\Psi_j(y_2\mid {\bf b},{\bf d})&\leq&4^j\brac{\frac{2y_2}{2\mu}+2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{2y_2}{2\mu}+2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{3y_3}{2\mu}+\varepsilon_{\ref{41}}}.\label{41}\\
\Psi_j(y_2\mid {\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\lambda^3+\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{4q1}\\
\Psi_j(z_1\mid {\bf b},{\bf d})&\leq&4^j\brac{\frac{z_1}{\mu}+2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{z_1}{2\mu}+2\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{2z_2}{2\mu}+\varepsilon_{\ref{51}}}.\label{51}\\
\Psi_j(z_1\mid {\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\lambda^2+\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{5q1mmm}
\end{eqnarray}

{\bf Step 1(c).\/} $y_1=y_2=0,z_1>0$.
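As an editorial aside (not from the paper), the key moment estimate above, together with the Bell-number bound, can be checked numerically for a range of parameters:

```python
# Numerical sanity check (editorial, not from the paper) of the moment bound
#   sum_{k>=2} k*(k-1)^j * lam^k / k!  <=  2^j * j! * lam^(j+2) * e^lam
# and of the fact that the j-th Bell number is at most j!.
from math import exp, factorial

def lhs(j, lam, kmax=170):
    # truncated series; terms beyond kmax are negligible for these lam
    return sum(k * (k - 1)**j * lam**k / factorial(k) for k in range(2, kmax))

def bell(j):
    # Bell numbers via the Bell triangle; row[0] of the j-th row is B_j
    row = [1]
    for _ in range(j):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

for j in range(2, 11):
    assert bell(j) <= factorial(j)
    for lam in (0.5, 1.0, 3.0, 10.0):
        assert lhs(j, lam) <= 2**j * factorial(j) * lam**(j + 2) * exp(lam)
```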
\begin{eqnarray}
\Psi_j(y_1\mid {\bf b},{\bf d})&=&\varepsilon_{\ref{9x1}}.\label{9x1}\\
\Psi_j(y_1\mid {\bf v})&=&\varepsilon_{\ref{9xq1}}.\label{9xq1}\\
\Psi_j(y_2\mid {\bf b},{\bf d})&\leq&2^j\brac{\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{3y_3}{2\mu}+\varepsilon_{\ref{91}}}.\label{91}\\
\Psi_j(y_2\mid {\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\lambda^3+\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{9q1}\\
\Psi_j(z_1\mid {\bf b},{\bf d})&\leq&4^j\brac{\frac{z_1}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{z_1}{2\mu}+\sum_{k\ge 2}\frac{kz_k}{2\mu}\,(k-1)^j\frac{2z_2}{2\mu}+\varepsilon_{\ref{101}}}.\label{101}\\
\Psi_j(z_1\mid {\bf v})&=&O\brac{2^{3j}\lambda^je^\lambda j!\brac{\lambda^2+\frac{\zeta}{N}+\frac{\log^2N}{\lambda N}}}.\label{10q1}
\end{eqnarray}
Now let ${\cal E}_t=\left\{\zeta(\tau)\leq \log^2n\text{ for }1\leq\tau\leq t\right\}$.
Then let
$$Y_i=\begin{cases}(\zeta(i+1)-\zeta(i))1({\cal E}_i)&0\leq i\leq T_1\\-c_1/2&T_1<i\leq n\end{cases}$$
Then, \text{q.s.}
$$Y_{s+1}+\cdots+Y_t=\zeta(t)-\zeta(s)\text{ for }0\leq s<t\leq T_1.$$
For some absolute constant $c_2$, and with $\theta=\frac{c_1}{100ce^{3ce+1}(3ce)^3}$ and $i\leq L_1$,
\begin{multline*}
\mathbb{E}[e^{\theta Y_{s+i}}\mid Y_{s+1},\ldots,Y_{s+i-1}]=\sum_{k=0}^\infty \theta^k\mathbb{E}\left[\frac{Y_{s+i}^k}{k!}\bigg| Y_{s+1},\ldots,Y_{s+i-1}\right]\\
\leq 1-\theta c_1/2+c_2\sum_{k=2}^\infty \theta^k2^{3k}\lambda(i)^{k+3}e^{\lambda(i)}\leq e^{-\theta c_1/3},
\end{multline*}
where we have used \eqref{eqx2} and Lemma \ref{lambda} to bound $\lambda(i)$. It follows that for $t-s\leq L_1$ and real $u>0$,
$$\mathbb{P}(Y_{s+1}+\cdots+Y_t\geq u)\leq e^{-\theta(u+c_1(t-s)/3)}.$$
Suppose now that there exists $\tau\leq T_0$ such that $\zeta(\tau)\geq L_2$. Then q.s.\ there exists $t_1\leq \tau\leq t_1+L_1$ such that $\zeta(t_1)=0$.
But then, putting $u=-\log n$ and $L_2=\frac{6K\log n}{c_1}$, we see that given $t_1$,
$$\mathbb{P}(\exists\, t_1\leq \tau\leq t_1+L_1:\zeta(\tau)\geq L_2)\leq \mathbb{P}\brac{\neg\bigcup_t{\cal E}_t}+e^{-\theta(c_1L_2/3-\log n)}\leq n^{-K}.$$
\hspace*{\fill}\mbox{$\Box$}

We get a new path for every increase in $V_{0,j},\,j\leq 1$. If we look at equations \eqref{04x} etc., we see that the expected number added to $V_{0,j}$ at step $t$ is $O(\zeta(t)/\mu(t))$. So if $Z_P(t)$ is the number of increases at time $t$ and $Z_P=\sum_{t=0}^{T_3}Z_P(t)$, where $T_3$ is the time at the beginning of Step 3, then
\begin{equation}\label{ZP}
\mathbb{E}[Z_P]=O\brac{\mathbb{E}\brac{\log n\sum_{t=0}^{T_3}\frac{1}{\mu(t)}}}=O\brac{\log n\ \mathbb{E}\left[\log\bfrac{\mu(0)}{\mu(T_3)}\right]}.
\end{equation}
Now in our case $\mu(T_3)=\Omega(n)$ with probability $1-o(n^{-2})$, in which case $\mathbb{E}[Z_P]=O(\log n)$. We will apply the Chebyshev inequality to show concentration around the mean. We condition on $||{\bf u}(t)-\hat{\bf u}(t)||_1\leq n^{8/9}$ for $t\leq T_1$ (see Lemma \ref{close}). With this conditioning, the expected value of $Z_P(t)$ is determined up to a factor $1-O(n^{-1/9}\log^2n)$ by the value of $\hat{\bf u}(t)$. In this case, $\mathbb{E}[Z_P(t)\mid Z_P(s)]=(1+o(1))\mathbb{E}[Z_P(t)]$ and we can apply the Chebyshev inequality to show that \text{w.h.p.}\ $Z_P=O(\log n)$. We combine this with Lemma \ref{few} to obtain Theorem \ref{th1}.
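As an editorial aside (not from the paper), the estimate behind \eqref{ZP} is the standard fact that when $\mu$ decreases by about one per step, the sum $\sum_t 1/\mu(t)$ telescopes like a harmonic sum, i.e.\ it is $\log(\mu(0)/\mu(T_3))+O(1/\mu(T_3))$. The illustrative endpoints below are arbitrary:

```python
# Numerical sanity check (editorial, not from the paper): a harmonic sum over
# the range [muT, mu0] agrees with log(mu0/muT) up to O(1/muT).
# The values mu0, muT below are arbitrary illustrative choices.
from math import log

mu0, muT = 10**6, 10**4
s = sum(1.0 / m for m in range(muT, mu0 + 1))
err = abs(s - log(mu0 / muT))  # Euler-Maclaurin: err ~ 1/(2*muT)
```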
\section{Hamilton cycles}\label{posa}
We now show how Theorem \ref{th1}(a) can be used to prove the existence of Hamilton cycles, and to construct them. We first need to remove a few random edges $X$ from $G=G_{n,cn}^{\delta\geq 3}$ in such a way that the pair $(G-X,X)$ is distributed very close to $(H=G_{n,cn-|X|}^{\delta\geq 3},Y)$, where $Y$ is a random set of edges disjoint from $E(H)$. In that case we can apply Theorem \ref{th1} to $H$ and then use the edges of $Y$ to close cycles in the extension-rotation procedure.
\subsection{Removing a random set of edges}\label{remove}
Let
$$s=n^{1/2}\log^{-2}n$$
and let
$$\Omega=\left\{(H,Y):H\in {\cal G}_{n,cn-s}^{\delta\geq 3},\ Y\subseteq\binom{[n]}{2},\ |Y|=s\text{ and }E(H)\cap Y=\emptyset\right\}$$
where ${\cal G}_{n,m}^{\delta\geq 3}$ is the set of graphs from which $G_{n,m}^{\delta\geq 3}$ is drawn. We consider two ways of randomly choosing an element of $\Omega$.
\begin{enumerate}[{\bf (a)}]
\item First choose $G$ uniformly from ${\cal G}_{n,cn}^{\delta\geq 3}$ and then choose an $s$-set $X$ uniformly from $E(G)\setminus E_3(G)$, where $E_3(G)$ is the set of edges of $G$ that are incident with a vertex of degree 3. This produces a pair $(G-X,X)$. We let $\mathbb{P}_a$ denote the induced probability measure on $\Omega$.
\item Choose $H$ uniformly from ${\cal G}_{n,cn-s}^{\delta\geq 3}$ and then choose an $s$-set $Y$ uniformly from $\binom{[n]}{2}\setminus E(H)$. This produces a pair $(H,Y)$. We let $\mathbb{P}_b$ denote the induced probability measure on $\Omega$.
\end{enumerate}
The following lemma implies that, as far as properties that hold \text{w.h.p.}\ in $G$ are concerned, we can use Method (b) just as well as Method (a) to generate our pair $(H,Y)$.
\begin{lemma}\label{contig}
There exists $\Omega_1\subseteq \Omega$ such that
\begin{enumerate}[{\bf (i)}]
\item $\mathbb{P}_a(\Omega_1)=1-o(1)$.
\item $\omega=(H,Y)\in \Omega_1$ implies that $\mathbb{P}_a(\omega)=(1+o(1))\mathbb{P}_b(\omega)$.
\end{enumerate}
\end{lemma}
{\bf Proof}\hspace{2em}
We first compute the expectation of the number $\mu_3=\mu_3(G)$ of edges incident to a vertex of degree 3 in $G$ chosen uniformly from ${\cal G}_{n,cn}^{\delta\geq 3}$. We use the random sequence model of Section \ref{mod}. We will show that $\mu_3$ is highly concentrated in this model; the result then transfers to our graph model. Observe first that if $\nu_3$ is the number of vertices of degree 3 in $G_{\bf x}$ then Lemma \ref{lem4x} implies that
$$\card{\nu_3-\frac{\lambda^3}{3!f_3(\lambda)}n}=O(n^{1/2}\log n),\qquad\text{q.s.}$$
Here $\lambda$ is the solution to $\lambda f_2(\lambda)/f_3(\lambda)=2c$. To see how many edges are incident with these $\nu_3$ vertices, we consider the following experiment: condition on $\nu_3=\rho n$, where $\rho$ will be taken to be close to $\rho_3=\frac{\lambda^3}{3!f_3(\lambda)}$. We take a random permutation $\pi$ of $[2cn]$ and compute the number $Z$ of $i\leq cn$ such that $\left\{\pi(2i-1),\pi(2i)\right\}\cap [3\nu_3]\neq \emptyset$. This gives us the number of edges in $G_{\bf x}$ that are incident with a vertex of degree 3.
Now
$$\mathbb{E}[Z]=cn\brac{1-\frac{2cn-\rho n}{2cn}\cdot\frac{2cn-\rho n-2}{2cn-2}}=cn(2\rho-\rho^2+O(1/n)).$$
Interchanging two positions in $\pi$ changes $Z$ by at most one, and so applying the Azuma-Hoeffding inequality for permutations (see for example Lemma 11 of Frieze and Pittel \cite{FP1} or Section 3.2 of McDiarmid \cite{McD}) we see that $\mathbb{P}(|Z-\mathbb{E}[Z]|\geq u)\leq e^{-u^2/(cn)}$ for any $u\geq 0$. Putting this all together, we see that
$$\mathbb{P}(|\mu_3(G)-\rho_3(2-\rho_3)cn|\geq u)\leq e^{-u^2/cn}.$$
Now let
$$\widehat{{\cal G}}_{n,cn}^{\delta\geq 3}=\left\{G\in{\cal G}_{n,cn}^{\delta\geq 3}:|\mu_3(G)-\rho_3(2-\rho_3)cn|\leq n^{1/2}\log n\right\}$$
and
$$\Omega_1=\left\{(H,Y)\in\Omega:H+Y\in \widehat{{\cal G}}_{n,cn}^{\delta\geq 3}\right\}.$$
This satisfies requirement (i) of the lemma. Suppose next that $\omega\in \Omega_1$. Then
\begin{align}
&\mathbb{P}_a(\omega)=\frac{1}{|{\cal G}_{n,cn}^{\delta\geq 3}|}\cdot\frac{1}{\binom{cn(1-\rho_3)^2\pm n^{1/2}\log n}{s}}=\frac{1+O(\log^{-1} n)}{|{\cal G}_{n,cn}^{\delta\geq 3}|\cdot\binom{cn(1-\rho_3)^2}{s}}\label{obo1}\\
&\mathbb{P}_b(\omega)=\frac{1}{|{\cal G}_{n,cn-s}^{\delta\geq 3}|}\cdot\frac{1}{\binom{\binom{n}{2}-cn}{s}}\label{obo2}
\end{align}
One sees from this that it remains to estimate the ratio $|{\cal G}_{n,cn}^{\delta\geq 3}|/|{\cal G}_{n,cn-s}^{\delta\geq 3}|$.
For this we estimate
$$M=|\set{(G_1,G_2)\in{\cal G}_{n,cn}^{\delta\geq 3}\times {\cal G}_{n,cn-s}^{\delta\geq 3}:E(G_1)\supseteq E(G_2)}|.$$
We have the following inequalities:
\begin{align}
&|\widehat{{\cal G}}_{n,cn}^{\delta\geq 3}|\binom{cn(1-\rho_3)^2-n^{1/2}\log n}{s}\leq M\leq|\widehat{{\cal G}}_{n,cn}^{\delta\geq 3}|\binom{cn(1-\rho_3)^2+n^{1/2}\log n}{s}+\nonumber\\
&\hspace{3em}|{\cal G}_{n,cn}^{\delta\geq 3}|\sum_{|u|\geq n^{1/2}\log n}\binom{cn(1-\rho_3)^2+u}{s}e^{-u^2/cn},\label{obo3}\\
&M=|{\cal G}_{n,cn-s}^{\delta\geq 3}|\binom{\binom{n}{2}-cn}{s}.\label{obo4}
\end{align}
We get \eqref{obo3} by summing $\mu_3(G_1)$ over $G_1\in{\cal G}_{n,cn}^{\delta\geq 3}$ and bounding $\mu_3(G_1)$ according to whether or not $G_1$ is in $\widehat{{\cal G}}_{n,cn}^{\delta\geq 3}$. Equation \eqref{obo4} is obtained by summing, over $G_2\in {\cal G}_{n,cn-s}^{\delta\geq 3}$, the number of ways of adding $s$ edges to $G_2$. Now
\begin{multline*}
\sum_{|u|\geq n^{1/2}\log n}\binom{cn(1-\rho_3)^2+u}{s}e^{-u^2/cn}\leq 2\sum_{u\geq n^{1/2}\log n}\binom{cn(1-\rho_3)^2}{s}e^{O(us/n)}e^{-u^2/cn}\\
\leq 2\binom{cn(1-\rho_3)^2}{s}\sum_{u\geq n^{1/2}\log n}e^{-u^2/2cn}=O\brac{\binom{cn(1-\rho_3)^2}{s}e^{-\Omega(\log^2n)}}.
\end{multline*}
It follows from this and \eqref{obo3} that
$$M=|{\cal G}_{n,cn}^{\delta\geq 3}|\binom{cn(1-\rho_3)^2}{s}\brac{1+O(\log^{-1}n)}.$$
Comparing with \eqref{obo4} we see that
$$\frac{|{\cal G}_{n,cn}^{\delta\geq 3}|}{|{\cal G}_{n,cn-s}^{\delta\geq 3}|}=(1+o(1))\frac{\binom{\binom{n}{2}-cn}{s}}{\binom{cn(1-\rho_3)^2}{s}}.$$
The lemma follows by using this in conjunction with \eqref{obo1} and \eqref{obo2}.
\hspace*{\fill}\mbox{$\Box$}

\subsection{Connectivity of $G_{n,cn}^{\delta\geq 3}$}\label{conn}
\begin{lemma}\label{conn1}
$G_{n,cn}^{\delta\geq 3}$ is connected, \text{w.h.p.}
\end{lemma}
{\bf Proof\hspace{2em}}
It follows from Lemma \ref{contig} that we can replace $G_{n,cn}^{\delta\geq 3}$ by $G_{n,cn-s}^{\delta\geq 3}$ plus $s$ random edges. We use the random sequence model to deal with $G_{n,cn-s}^{\delta\geq 3}$. Fix $4\leq k\leq n/\log^{20}n$. For $K\subseteq [n]$, let $e(K)$ denote the number of edges of $G_{\bf x}$ contained in $K$. Let $\ell_0=\log n/(\log\log n)^{1/2}$. Then, with $\lambda$ the solution to $\lambda f_2(\lambda)/f_3(\lambda)=2c$,
\begin{align}
\mathbb{P}(\exists K\subseteq [n],\,|K|=k:e(K)\geq 5k/4)&\leq o(1)+\delta_k\binom{n}{k}\sum_{d=3k/2}^{\ell_0k}\frac{\lambda^dk^d}{d!f_3(\lambda)^k}\binom{cn}{5k/4}\bfrac{d}{cn}^{5k/2}\label{mm1}\\
&\leq \delta_k\sum_{d=3k/2}^{\ell_0k}\bfrac{\lambda ek}{d}^d\bfrac{e^{9/4}\ell_0^{5/2}k^{1/4}}{(5c/4)^{5/4}f_3(\lambda)n^{1/4}}^k\leq \delta_k\ell_0k\bfrac{e^{9/4}\ell_0^{5/2}k^{1/4}e^\lambda}{(5c/4)^{5/4}f_3(\lambda)n^{1/4}}^k.\label{mm2}
\end{align}
{\bf Explanation of \eqref{mm1}:} Here $\delta_k=1+o(1)$ for $k\leq \log^2n$ and $\delta_k=O(n^{1/2})$ for larger $k$.
The term $\frac{\lambda^dk^d}{d!f_3(\lambda)^k}$ bounds the probability that the total degree of $K$ is $d$; see \eqref{boundd}. Given the degree sequence, we take a random permutation $\pi$ of the multi-set $\set{d_{\bf x}(j)\times j:j\in[n]}$ and bound the probability that there is a set of $5k/4$ indices $i$ such that $\pi(2i-1),\pi(2i)\in K$. This expression assumes that the vertex degrees are independent random variables; we can always inflate the estimate by $O(n^{1/2})$ to account for the degree sum being fixed, and this is what $\delta_k$ does for $k\geq \log^2n$. For smaller $k$ we use \eqref{ll1}. The bound $d\leq \ell_0k$ arises from Lemma \ref{lem4}(b).

Let $\sigma_{k}$ denote the RHS of \eqref{mm2}. Then $\sum_{k=4}^{n/\log^{20}n}\sigma_k=o(1)$. But if $G$ has minimum degree at least 3 and $K$ contains at most $5|K|/4$ edges, then there must be edges with exactly one end in $K$, so $K$ cannot be a union of components. So we see that \text{w.h.p.}\ the minimum component size in $G$ is at least $n/\log^{20}n$.

We now use the result of Section \ref{remove}. If we take $H=G_{n,cn-s}^{\delta\geq 3}$ and $s=n^{1/2}\log^{-2}n$ then we know by the above that \text{w.h.p.}\ $H$ only has components of size at least $n/\log^{20}n$. Now add $s$ random edges $Y$. Then
$$\mathbb{P}(H+Y\text{ is not connected})=o(1)+\log^{40}n\brac{1-\frac{1}{\log^{40}n}}^s=o(1).$$
Now apply Lemma \ref{contig}.
\hspace*{\fill}\mbox{$\Box$}

\subsection{Extension-Rotation Argument}\label{exro}
As in Section \ref{conn}, we replace $G_{n,cn}^{\delta\geq 3}$ by $G_{n,cn-s}^{\delta\geq 3}$ plus $s$ random edges $Y$.
Having run {\sc 2greedy}\ we will \text{w.h.p.}\ have a 2-matching $M_0$, say, such that $M_0$ has $O(\log n)$ components. The main idea now is that of a {\em rotation}. Given a path $P=(u_1,u_2,\ldots,u_k)$ and an edge $(u_k,u_i)$ where $i\leq k-2$, we say that the path $P'=(u_1,\ldots,u_i,u_k,u_{k-1},\ldots,u_{i+1})$ is obtained from $P$ by a rotation; $u_1$ is the {\em fixed} endpoint of this rotation. We now describe an algorithm, {\sc extend-rotate}, that \text{w.h.p.}\ converts $M_0$ into a Hamilton cycle in $O(n^{1.5+o(1)})$ time. Given a path $P$ with endpoints $a,b$, we define a {\em restricted rotation search} $RRS(\nu)$ as follows. We start by doing a sequence of rotations with $a$ as the fixed endpoint. Furthermore,
\begin{enumerate}[R1]
\item We only do a rotation if the endpoint of the path created is not an endpoint of one of the paths created so far.
\item We stop this process when we have either (i) created $\nu$ endpoints or (ii) found a path $Q$ with an endpoint that has a neighbour outside $Q$. In case (ii) we say that we have found an {\em extension}.
\end{enumerate}
Let $END(a)$ be the set of endpoints, other than $a$, produced by this procedure. The main result of \cite{FP} is that \text{w.h.p.}, regardless of our choice of path $P$, either (i) we find an extension or (ii) we are able to generate $n^{1-o(1)}$ endpoints. We will run this procedure with $\nu=n^{3/4}\log^2n$. Assuming that we did not find an extension and having constructed $END(a)$, we take each $x\in END(a)$ in turn and, starting with the path $P_x$ that we have found from $a$ to $x$, carry out R1, R2 above with $x$ as the fixed endpoint; we either find an extension or create a set of $\nu$ paths with $x$ as one endpoint, the other endpoints comprising a set $END(x)$ of size $\nu$.
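The rotation operation defined above is a simple list manipulation. Below is a minimal sketch of our own (the path is stored as a Python list, with the chord index $i$ 1-indexed as in the text):

```python
def rotate(path, i):
    """Posa rotation: given P = (u_1,...,u_k) as a list and a chord
    (u_k, u_i) with i <= k-2 (1-indexed, as in the text), return
    P' = (u_1,...,u_i,u_k,u_{k-1},...,u_{i+1}).  The endpoint u_1 is
    fixed; the new free endpoint is u_{i+1}."""
    k = len(path)
    assert 1 <= i <= k - 2, "chord must skip the last edge of the path"
    return path[:i] + path[i:][::-1]

P = [1, 2, 3, 4, 5]        # a path u_1,...,u_5
P2 = rotate(P, 2)          # rotate along the chord (u_5, u_2)
print(P2)                  # [1, 2, 5, 4, 3]; new free endpoint is u_3
```

Note that a rotation preserves the vertex set and all edges of the path except one, which is exactly why repeated rotations generate many candidate endpoints cheaply.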
It follows from \cite{AV} that the above construction $RRS(\nu)$ can be carried out in $O(\nu^2\log n)$ time.

Algorithm {\sc extend-rotate}:
\begin{enumerate}[Step 1]
\item Choose a path component $P$ of the current 2-matching $M$, with endpoints $a,b$.\\
If there are no such components and $M$ is not a Hamilton cycle, choose a cycle $C$ of $M$ and delete an edge to create $P$.
\item Carry out $RRS(\nu)$ until either an extension is found or we have constructed $\nu+1$ endpoint sets.
\begin{description}
\item[Case a:] We find an extension. Suppose that we construct a path $Q$ with endpoints $x,y$ such that $y$ has a neighbour $z\notin Q$.
\begin{enumerate}[(i)]
\item If $z$ lies in a cycle $C$ then let $R$ be a path obtained from $C$ by deleting one of the edges of $C$ incident with $z$. Now let $P=x,Q,y,z,R$ and go to Step 1.
\item If $z=u_j$ lies on a path $R=(u_1,u_2,\ldots,u_k)$, where the numbering is chosen so that $j\geq k/2$, then we let $P=x,Q,y,z,u_{j-1},\ldots,u_1$ and go to Step 1.
\end{enumerate}
\item[Case b:] If there is no extension then we search for an edge $e=(p,q)\in Y$ such that $p\in END(a)$ and $q\in END(p)$. If there is no such edge then the algorithm fails. If there is such an edge, let $P'$ be the path we have found with endpoints $p,q$ and consider the cycle $C=P'+e$. Now either $C$ is a Hamilton cycle and we are done, or else there is a vertex $u\in C$ and a vertex $v\notin C$ such that $(u,v)$ is an edge of $H$; here we use the fact that $H$ is connected, see Lemma \ref{conn1}. We now delete one of the edges, $(u,w)$ say, of $C$ incident with $u$ to create a path $Q$ from $w$ to $u$ and treat $(u,v)$ as an extension of this path. We can now proceed as in Case a.
\end{description}
\end{enumerate}
\subsubsection{Analysis of {\sc extend-rotate}}
We first bound the number of executions of $RRS(\nu)$. Suppose that $M_0$ has $\kappa\leq K_1\log n$ components for some $K_1>0$.
Each time we execute Step 2, we either reduce the number of components by one or we halve the size of one of the components not on the current path. So if the component sizes of $M_0$ are $n_1,n_2,\ldots,n_\kappa$ then the number of executions of Step 2 can be bounded by
$$\kappa+\sum_{i=1}^\kappa \log_2n_i\leq \kappa+\kappa\log_2(n/\kappa)=O(\log n\log\log n).$$
(To see the first inequality, recall that $\log(ab)\leq 2\log((a+b)/2)$ for $a,b>0$; iterating this AM--GM bound gives $\sum_i\log_2 n_i\leq\kappa\log_2((n_1+\cdots+n_\kappa)/\kappa)\leq\kappa\log_2(n/\kappa)$.) An execution of Step 2 takes $O(\nu^2\log n)$ time and so we are within the time bound claimed by Theorem \ref{th2}.

We now argue that {\sc extend-rotate}\ succeeds \text{w.h.p.} Suppose that the edges of $Y$ are $e_1,e_2,\ldots,e_s$. We can allow the algorithm to access these edges in order, never going back to a previously examined edge. The probability that an $e_i$ can be used in Case b is always at least $\frac{\binom{\nu}{2}-s}{\binom{n}{2}}\geq \frac{\log^4n}{2n^{1/2}}$ (we have subtracted $s$ because some of the useful edges might have been seen before the current edge in the order). So the probability of failure is bounded by the probability that the binomial $Bin\brac{s,\frac{\log^4n}{2n^{1/2}}}$ is less than $K_2\log n\log\log n$ for some $K_2>0$, and this tends to zero. This completes the proof of Theorem \ref{th2}.
\section{Concluding remarks}\label{final}
The main open question concerns what happens when $c<15$. Is it true that \eqref{hT} holds all the way down to $c>3/2$?
We have carried out some numerical experiments; here are some of the results:
$$\begin{array}{ccccc}
c&y_{final}&z_{final}&\mu_{final}&\lambda_{final}\\
3.0&0.000008&0.283721&0.398527&1.822428\\
2.9&0.000009&0.242563&0.326139&1.602749\\
2.8&0.000010&0.197461&0.253645&1.370798\\
2.7&0.000010&0.148901&0.182327&1.123928\\
2.6&0.000010&0.098344&0.114494&0.858355\\
2.5&0.000010&0.048976&0.054010&0.565840
\end{array}$$
These are the results of running Euler's method with step length $10^{-5}$ on the sliding trajectory \eqref{slide}. They indicate that \eqref{hT} holds down to somewhere close to $c=2.5$, which suggests some sort of phase transition in the performance of {\sc 2greedy}\ at around this point. There is such a transition for the Karp--Sipser matching algorithm and so we are led to conjecture that there is one here too.

Can we prove anything for $c<15$? At the moment we cannot even show that at the completion of {\sc 2greedy}\ the 2-matching $M$ has $o(n)$ components. This will be the subject of further research. Finally, we mention once again the possible use of the ideas of \cite{CFM} to reduce the running time of our Hamilton cycle algorithm to $O(n^{1+o(1)})$.

Our list of problems/conjectures arising from this research can thus be summarised:
\begin{enumerate}[{\bf (a)}]
\item Find a threshold $c_1$ such that {\sc 2greedy}\ produces a 2-matching in $G_{n,cn}^{\delta\geq 3}$ with $O(\log n)$ components \text{w.h.p.}\ iff $c>c_1$.
\item If $c_1>3/2$ then show that when $c\in (3/2,c_1)$, the number of components in the 2-matching produced is $O(n^\alpha)$ for some constant $\alpha<1$.
\item Analyse the performance of {\sc 2greedy}\ on the random graph $G_{n,cn}$, i.e.\ do not condition on minimum degree at least three.
Is there a threshold $c_2$ such that if $c\leq c_2$ then \text{w.h.p.}\ only Steps 1a, 1b, 1c are needed, making the matching produced optimal?
\item Can {\sc 2greedy}\ be used to find a Hamilton cycle \text{w.h.p.}\ in $O(n^{1+o(1)})$ time when applied to $G_{n,cn}^{\delta\geq 3}$ with $c$ sufficiently large?
\item How much of this can be extended to finding edge-disjoint Hamilton cycles in $G_{n,cn}^{\delta\geq k}$ for $k\geq 4$?
\end{enumerate}
\begin{thebibliography}{99}
\bibitem{AKS} M. Ajtai, J. Koml\'{o}s and E. Szemer\'{e}di, The first occurrence of Hamilton cycles in random graphs, {\em Annals of Discrete Mathematics} 27 (1985) 173--178.
\bibitem{AV} D. Angluin and L.G. Valiant, Fast probabilistic algorithms for Hamiltonian circuits and matchings, {\em Journal of Computer and System Sciences} 18 (1979) 155--193.
\bibitem{AFP} J. Aronson, A.M. Frieze and B.G. Pittel, Maximum matchings in sparse random graphs: Karp--Sipser revisited, {\em Random Structures and Algorithms} 12 (1998) 111--178.
\bibitem{BF} T. Bohman and A.M. Frieze, Hamilton cycles in 3-out, {\em Random Structures and Algorithms} 35 (2009) 393--417.
\bibitem{B1} B. Bollob\'as, The evolution of sparse graphs, in {\em Graph Theory and Combinatorics, Proc.\ Cambridge Combin.\ Conf.\ in honour of Paul Erd\H{o}s} (B. Bollob\'as, ed.), Academic Press, 1984, pp.\ 35--57.
\bibitem{B2} B. Bollob\'as, A probabilistic proof of an asymptotic formula for the number of labelled regular graphs, {\em European Journal of Combinatorics} 1 (1980) 311--316.
\bibitem{BCFF} B. Bollob\'as, C. Cooper, T.I. Fenner and A.M. Frieze, On Hamilton cycles in sparse random graphs with minimum degree at least $k$, {\em Journal of Graph Theory} 34 (2000) 42--59.
\bibitem{BFF} B. Bollob\'as, T.I. Fenner and A.M. Frieze, An algorithm for finding Hamilton paths and cycles in random graphs, {\em Combinatorica} 7 (1987) 327--341.
\bibitem{BoFrM} B. Bollob\'as and A.M. Frieze, On matchings and Hamiltonian cycles in random graphs, {\em Annals of Discrete Mathematics} 28 (1985) 23--46.
\bibitem{CFM} P. Chebolu, A.M. Frieze and P. Melsted, Finding a maximum matching in a sparse random graph in $O(n)$ expected time, {\em Journal of the ACM} 57 (2010).
\bibitem{Ch} V. Chv\'atal, Almost all graphs with $1.44n$ edges are 3-colourable, {\em Random Structures and Algorithms} 2 (1991) 11--28.
\bibitem{F1} A.M. Frieze, Finding Hamilton cycles in sparse random graphs, {\em Journal of Combinatorial Theory, Series B} 44 (1988) 230--250.
\bibitem{FL} A.M. Frieze and T. {\L}uczak, Hamiltonian cycles in a class of random graphs: one step further, in {\em Proceedings of Random Graphs '87} (M. Karonski, J. Jaworski and A. Rucinski, eds.), John Wiley and Sons, 53--59.
\bibitem{FP} A.M. Frieze and B. Pittel, On a sparse random graph with minimum degree three: Likely P\'osa sets are large.
\bibitem{FP1} A.M. Frieze and B. Pittel, Perfect matchings in random graphs with prescribed minimal degree, {\em Trends in Mathematics}, Birkh\"auser Verlag, Basel (2004) 95--132.
\bibitem{KS} R.M. Karp and M. Sipser, Maximum matchings in sparse random graphs, in {\em Proceedings of the 22nd Annual IEEE Symposium on Foundations of Computer Science} (1981) 364--375.
\bibitem{KoSz} J. Koml\'os and E. Szemer\'edi, Limit distributions for the existence of Hamilton circuits in a random graph, {\em Discrete Mathematics} 43 (1983) 55--63.
\bibitem{McD} C. McDiarmid, Concentration, in {\em Probabilistic Methods for Algorithmic Discrete Mathematics} (M. Habib, C. McDiarmid, J. Ramirez-Alfonsin and B. Reed, eds.), Springer, Berlin (1998) 1--46.
\bibitem{McK} B. McKay, Asymptotics for 0-1 matrices with prescribed line sums, in {\em Enumeration and Design}, Academic Press (1984) 225--238.
\bibitem{Pi} B. Pittel, On tree census and the giant component in sparse random graphs, {\em Random Structures and Algorithms} 1 (1990) 311--342.
\bibitem{RW1} R.W. Robinson and N.C. Wormald, Almost all cubic graphs are Hamiltonian, {\em Random Structures and Algorithms} 3 (1992) 117--125.
\bibitem{RW2} R.W. Robinson and N.C. Wormald, Almost all regular graphs are Hamiltonian, {\em Random Structures and Algorithms} 5 (1994) 363--374.
\end{thebibliography}
\appendix
\section{Proof of \eqref{ll1}}
To find a sharp estimate for the probabilities in \eqref{ll1} we have to refine the proof of the local limit theorem slightly, since in our case the variances of the $Z_j$ are not always bounded away from zero. However, it is enough to consider the case where $N\sigma^2\to\infty$. There is little loss of generality in assuming that $D=0$ here. As usual, we start with the inversion formula
\begin{eqnarray}
\mathbb{P}\left(\sum_{j=1}^NZ_j=\tau\right)&=&\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-i\tau x}\,\mathbb{E}\left(e^{ix\sum_{j=1}^NZ_j}\right)\,dx\nonumber\\
&=&\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-i\tau x}\prod_{\ell=2}^3\left[\mathbb{E}(e^{ix{\cal P}_\ell})\right]^{N_\ell}\,dx,\label{53}
\end{eqnarray}
where $\tau=2M-k$. Consider first $|x|\ge (N\lambda)^{-5/12}$.
Using the inequality (see Pittel \cite{Pi})
$$|f_\ell(\eta)|\le e^{({\rm Re}\,\eta-|\eta|)/(\ell+1)}f_\ell(|\eta|),$$
we estimate
\begin{align}
&\frac{1}{2\pi}\int_{|x|\ge (N\lambda)^{-5/12}}\left|e^{-i\tau x}\prod_{\ell=2}^3\left(\frac{f_\ell(e^{ix}\lambda)}{f_\ell(\lambda)}\right)^{N_\ell}\right|\,dx\nonumber\\
&\leq\frac{1}{2\pi}\int_{|x|\ge (N\lambda)^{-5/12}}e^{N\lambda(\cos x-1)/4}\,dx\nonumber\\
&\leq e^{N\lambda[\cos((N\lambda)^{-5/12})-1]/4}\nonumber\\
&\leq e^{-(N\lambda)^{1/6}/9}.\label{54}
\end{align}
For $|x|\le (N\lambda)^{-5/12}$, putting $\eta=\lambda e^{ix}$ and using
$$\sum_{\ell=2}^3\frac{N_\ell\lambda f_\ell'(\lambda)}{f_\ell(\lambda)}=2M\qquad\text{and}\qquad d/dx=i\eta\,d/d\eta,$$
we expand $\sum_{\ell=2}^3N_\ell\log\brac{\frac{f_\ell(\eta)}{f_\ell(\lambda)}}$ as a Taylor series around $x=0$ to obtain
\begin{eqnarray}
-i\tau x+\sum_{\ell=2}^3N_\ell\log\left(\frac{f_\ell(e^{ix}\lambda)}{f_\ell(\lambda)}\right)&=&ikx-\frac{x^2}{2}\left.{\cal D}\brac{\sum_{\ell=2}^3N_\ell\frac{\eta f_\ell'(\eta)}{f_\ell(\eta)}}\right|_{\eta=\lambda}\nonumber\\
&&-\frac{ix^3}{3!}\left.{\cal D}^2\brac{\sum_{\ell=2}^3N_\ell\frac{\eta f_\ell'(\eta)}{f_\ell(\eta)}}\right|_{\eta=\lambda}\nonumber\\
&&+O\brac{x^4\left.{\cal D}^3\brac{\sum_{\ell=2}^3N_\ell\frac{\eta f_\ell'(\eta)}{f_\ell(\eta)}}\right|_{\eta=\tilde\eta}}.\label{55}
\end{eqnarray}
Here $\tilde\eta=\lambda e^{i\tilde x}$, with $\tilde x$ between $0$ and $x$, and ${\cal D}=\eta(d/d\eta)$. Now the coefficients of $x^2/2$, $x^3/3!$ and $x^4$ are $N\sigma^2$, $O(N\sigma^2)$ and $O(N\sigma^2)$ respectively, and $\sigma^2$ is of order $\lambda$. (Use \eqref{30} and consider the effect of ${\cal D}$ on a power of $\eta$.) So the $x^3$ and $x^4$ terms in \eqref{55} are $o(1)$ uniformly for $|x|\le (N\lambda)^{-5/12}$.
Therefore
\begin{equation}
\frac{1}{2\pi}\int_{|x|\le (N\lambda)^{-5/12}}=\int_1+\int_2+\int_3,\label{55a}
\end{equation}
where
\begin{eqnarray}
\int_1&=&\frac{1}{2\pi}\int_{|x|\le (N\lambda)^{-5/12}}e^{ikx-N\sigma^2x^2/2}\,dx\nonumber\\
&=&\frac{1}{\sqrt{2\pi N\sigma^2}}+O\left(\frac{k^2+1}{(\lambda N)^{3/2}}\right),\label{56}\\
\int_2&=&O\left({\cal D}^2\brac{\sum_{\ell=2}^3N_\ell\frac{\lambda f_\ell'(\lambda)}{f_\ell(\lambda)}}\int_{|x|\le (N\lambda)^{-5/12}}x^3e^{-N\sigma^2x^2/2}\,dx\right)\nonumber\\
&=&O\left(N\lambda\int_{|x|\le (N\lambda)^{-5/12}}|x|^3e^{-N\sigma^2x^2/2}\,dx\right)\nonumber\\
&=&O(e^{-\alpha(N\lambda)^{1/6}}),\label{57}\\
\noalign{\noindent ($\alpha>0$ is an absolute constant), and}
\int_3&=&O\brac{N\lambda\int_{|x|\le (N\lambda)^{-5/12}}x^4e^{-N\sigma^2x^2/2}\,dx}\nonumber\\
&=&o\left(\int_2\right).\label{58}
\end{eqnarray}
Using \eqref{53}--\eqref{58}, we arrive at
$$\mathbb{P}\left(\sum_{\ell}Z_{\ell}=\tau\right)=\frac{1}{\sqrt{2\pi N\sigma^2}}\times\brac{1+O\left(\frac{k^2+1}{N\lambda}\right)}.
$$
\section{Mathematica Output}
In the computations below, $\varepsilon_p(\hl)$ is represented by $ep[x]$, $\alpha_p$ is represented by $Ap$ and $\beta$ is represented by $B$. The computation $C_1$ is the justification for \eqref{very}.

\noindent\(\pmb{\text{f0}[\text{x$\_$}]\text{:=}\text{Exp}[x]}\)

\noindent\(\pmb{\text{f1}[\text{x$\_$}]\text{:=}\text{f0}[x]-1}\)

\noindent\(\pmb{\text{f2}[\text{x$\_$}]\text{:=}\text{f0}[x]-1-x}\)

\noindent\(\pmb{\text{f3}[\text{x$\_$}]\text{:=}\text{f0}[x]-1-x-\frac{x^2}{2}}\)

\noindent\(\pmb{\text{e1}[\text{x$\_$}]\text{:=}\frac{\text{f2}[x]}{\text{f3}[x]}-1}\)

\noindent\(\pmb{\text{e2}[\text{x$\_$}]\text{:=}\frac{\text{f0}[x]}{\text{f2}[x]}-1}\)

\noindent\(\pmb{\text{e3}[\text{x$\_$}]\text{:=}\frac{\text{f0}[x]}{\text{f3}[x]}-1}\)

\noindent\(\pmb{\text{e4}[\text{x$\_$}]\text{:=}\frac{\text{e1}[x]}{1+\text{e1}[x]}}\)

\noindent\(\pmb{\text{e5}[\text{x$\_$}]\text{:=}\frac{(1+\text{e2}[x])(1+\text{e3}[x])x^3}{8\text{f0}[x]}}\)

\noindent\(\pmb{\text{e6}[\text{x$\_$}]\text{:=}\frac{x^2(1+\text{e2}[x])^2}{\text{f0}[x]}}\)
\noindent\(\pmb{\text{e7}[\text{x$\_$}]\text{:=}\frac{\text{e4}[x]+\text{e5}[x]+\text{e6}[x]}{1-\text{e5}[x]}}\)

\noindent\(\pmb{\text{e8}[\text{x$\_$}]\text{:=}\frac{\text{e1}[x]+\text{e5}[x]}{1-\text{e5}[x]}}\)

\noindent\(\pmb{\text{e9}[\text{x$\_$}]\text{:=}x\,\text{e4}[x]}\)

\noindent\(\pmb{\text{e10}[\text{x$\_$}]\text{:=}\frac{2x\,\text{e4}[x]}{1-\text{e4}[x]}}\)

\noindent\(\pmb{\text{e11}[\text{x$\_$}]\text{:=}\frac{x(\text{e2}[x]+\text{e5}[x])+\text{e5}[x]}{(1-\text{e4}[x])(1-\text{e5}[x])}}\)

\noindent\(\pmb{d[\text{x$\_$}]\text{:=}\text{Max}[\text{e1}[x],\text{e2}[x],\text{e3}[x],\text{e4}[x],\text{e5}[x],\text{e6}[x],\text{e7}[x],\text{e8}[x]]}\)

\noindent\(\pmb{N[d[16]]}\)

\noindent\(0.000102752\)

\noindent\(\pmb{T=1-\frac{1}{2^{1/2}\text{Exp}[\text{Pi}/4]}}\)

\noindent\(1-\frac{e^{-\pi /4}}{\sqrt{2}}\)

\noindent\(\pmb{B=-.01+2(1-T)}\)

\noindent\(0.634794\)

\noindent\(\pmb{\text{A0}=B(\text{Exp}[2T/B]-1)}\)

\noindent\(4.73302\)

\noindent\(\pmb{\text{A1}=N\left[\frac{2(1+T)\text{Exp}[-\text{ArcTan}[T]]}{\left(1+T^2\right)^{1/2}}\right]}\)

\noindent\(1.5312\)

\noindent\(\pmb{\text{A2}=N\left[\frac{(1+T)\text{Exp}[-\text{ArcTan}[T]]}{\left(1+T^2\right)^{1/2}}\text{Integrate}\left[\frac{2\text{Exp}[\text{ArcTan}[x]]}{(1+x)\left(1+x^2\right)^{1/2}},\{x,0,1\}\right]\right]}\)

\noindent\(1.41846\)

\noindent\(\pmb{\text{A3}[\text{c$\_$},\text{x$\_$}]\text{:=}\frac{10\,\text{A0}\,c}{B^2}+\frac{8\,\text{A0}^2 c\,d[x]}{B^3}+\frac{(c+1)d[x]}{B(1-d[x])^2}}\)

\noindent\(\pmb{\text{A4}[\text{c$\_$},\text{x$\_$}]\text{:=}\text{A0}\,\text{A3}[c,x]/2}\)

\noindent\(\pmb{\text{A5}[\text{c$\_$},\text{x$\_$}]\text{:=}\frac{2c}{B}+\frac{2\,\text{A4}[c,x]}{B}+\frac{4c\,\text{A0}}{B^2}}\)

\noindent\(\pmb{\text{C1}[\text{c$\_$},\text{x$\_$}]\text{:=}\text{A1}\,c-\text{A2}-\text{A5}[c,x]\,d[x]}\)

\noindent\(\pmb{N[\text{C1}[15,16]]}\)

\noindent\(20.1217\)

\end{document}
\begin{document}
\title[Power numerical radius inequalities]{Power numerical radius inequalities from an extension of Buzano's inequality}
\author[P. Bhunia]{Pintu Bhunia}
\address{Department of Mathematics, Indian Institute of Science, Bengaluru 560012, Karnataka, India}
\email{[email protected]; [email protected]}
\thanks{Dr. Pintu Bhunia would like to thank SERB, Govt. of India for the financial support in the form of National Post Doctoral Fellowship (N-PDF, File No. PDF/2022/000325) under the mentorship of Professor Apoorva Khare}
\subjclass[2020]{47A12, 47A30, 15A60}
\keywords{Numerical radius, Operator norm, Bounded linear operator, Inequality}
\date{\today}
\maketitle
\begin{abstract}
Several numerical radius inequalities are obtained by developing an extension of Buzano's inequality. It is shown that if $T$ is a bounded linear operator on a complex Hilbert space, then
\begin{eqnarray*}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k},
\end{eqnarray*}
for every positive integer $n\geq 2.$ This is a non-trivial improvement of the classical inequality $w(T)\leq \|T\|.$ The above inequality gives an estimate for the numerical radius of nilpotent operators: if $n\geq 2$ is the least positive integer such that $T^n=0$, then
\begin{eqnarray*}
w(T) &\leq& \left(\sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}\right)^{1/n} \leq \left( 1- \frac{1}{2^{n-1}}\right)^{1/n} \|T\|.
\end{eqnarray*}
Also, we deduce a reverse inequality for the numerical radius power inequality $w(T^n)\leq w^n(T)$. We show that if $\|T\|\leq 1$, then
\begin{eqnarray*}
w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}},
\end{eqnarray*}
for every positive integer $n\geq 2.$ This inequality is sharp.
\end{abstract}
\section{\textbf{Introduction}}
\noindent Let $\mathcal{B}(\mathcal{H})$ denote the $C^*$-algebra of all bounded linear operators on a complex Hilbert space $\mathcal{H}$ with usual inner product $\langle \cdot,\cdot \rangle$ and the corresponding norm $\|\cdot\|.$ Let $T\in \mathcal{B}(\mathcal{H})$ and let $|T|=(T^*T)^{1/2}$, where $T^*$ denotes the adjoint of $T.$ The numerical radius and the usual operator norm of $T$ are denoted by $w(T)$ and $\|T\|,$ respectively. The numerical radius of $T$ is defined as
$$ w(T)=\sup \left\{|\langle Tx,x\rangle| : x\in \mathcal{H}, \|x\|=1 \right\}.$$
It is well known that the numerical radius defines a norm on $\mathcal{B}(\mathcal{H})$ and is equivalent to the usual operator norm. Moreover, for every $T\in \mathcal{B}(\mathcal{H})$, the following inequalities hold:
\begin{eqnarray}\label{eqv}
\frac12 \|T\| \leq w(T) \leq \|T\|.
\end{eqnarray}
These inequalities are sharp: $w(T)=\frac12 \|T\|$ if $T^2=0$, and $w(T)=\|T\|$ if $T$ is normal. Like the usual operator norm, the numerical radius satisfies the power inequality
\begin{eqnarray}\label{power}
w(T^n) \leq w^n(T)
\end{eqnarray}
for every positive integer $n.$ Another basic property of the numerical radius is weak unitary invariance, i.e., $w(T)=w(U^*TU)$ for every unitary operator $U\in \mathcal{B}(\mathcal{H}).$ For further properties of the numerical radius we refer to \cite{book,book2}.

\noindent The numerical radius has various applications in many branches of science, in particular to perturbation problems, convergence problems, approximation problems and iterative methods, as well as to the recently developed theory of quantum information systems. Due to the importance of the numerical radius, many mathematicians have studied numerical radius inequalities that improve the inequalities \eqref{eqv}. Various inner product inequalities play an important role in the study of numerical radius inequalities.
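The sharpness of the lower bound in the pair of inequalities above can be checked numerically via the standard identity $w(T)=\max_{\theta}\lambda_{\max}\big((e^{i\theta}T+e^{-i\theta}T^{*})/2\big)$. The sketch below is our own illustration (the function name and the test matrix are not from the paper): it evaluates this maximum on a grid for the nilpotent matrix $T=\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)$, for which $T^2=0$, $\|T\|=1$ and $w(T)=\frac12\|T\|$.

```python
import math, cmath

def numerical_radius_2x2(t11, t12, t21, t22, steps=3600):
    """Grid evaluation of w(T) = max_theta lambda_max(Re(e^{i theta} T))
    for a 2x2 matrix T, where Re(A) = (A + A^*)/2; the largest eigenvalue
    of a 2x2 Hermitian matrix is written out explicitly."""
    w = 0.0
    for s in range(steps):
        z = cmath.exp(2j * math.pi * s / steps)
        a11, a12, a21, a22 = z * t11, z * t12, z * t21, z * t22
        h11 = a11.real                      # diagonal of (A + A^*)/2 is real
        h22 = a22.real
        h12 = (a12 + a21.conjugate()) / 2   # off-diagonal of (A + A^*)/2
        lam = (h11 + h22) / 2 + math.sqrt(((h11 - h22) / 2) ** 2 + abs(h12) ** 2)
        w = max(w, lam)
    return w

# T = [[0,1],[0,0]] satisfies T^2 = 0, ||T|| = 1, and w(T) = 1/2 = ||T||/2.
print(round(numerical_radius_2x2(0, 1, 0, 0), 6))   # 0.5
```

For this particular $T$ every angle $\theta$ gives the eigenvalue $1/2$, so the grid computation is exact here.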
The Cauchy-Schwarz inequality is one of the most useful inequalities, which states that for every \text{$x,y \in \mathcal{H}$,} \begin{eqnarray}\label{cauchy} |\langle x,y\rangle| &\leq& \|x\| \|y\|. \end{eqnarray} A generalization of the Cauchy-Schwarz inequality is Buzano's inequality \cite{Buzano}, which states that for $x,y,e \in \mathcal{H}$ with $\|e\|=1,$ \begin{eqnarray}\label{buzinq} |\langle x,e\rangle \langle e,y\rangle| &\leq& \frac{ |\langle x,y\rangle| +\|x\| \|y\|}{2}. \end{eqnarray} Another generalization of the Cauchy-Schwarz inequality is the mixed Schwarz inequality \cite[pp. 75--76]{Halmos}, which states that for every $x,y\in \mathcal{H}$ and $T\in \mathcal{B}(\mathcal{H}),$ \begin{eqnarray}\label{mixed} |\langle Tx,y\rangle|^2 &\leq& \langle |T|x,x\rangle \langle |T^*|y,y\rangle. \end{eqnarray} Using the above inner product inequalities, various authors have developed numerical radius inequalities that improve the inequalities \eqref{eqv}, see \cite{Bhunia_ASM_2022, Bhunia_LAMA_2022,Bhunia_RIM_2021,Bhunia_BSM_2021,Bhunia_ADM_2021,D08,Kittaneh_2003}. Numerical radius inequalities have also been obtained by other techniques, see \cite{Abu_RMJM_2015,Bag_MIA_2020,Bhunia_LAA_2021,Bhunia_LAA_2019,Kittaneh_LAMA_2023, Kittaneh_STD_2005, Yamazaki}. Haagerup and de la Harpe \cite{Haagerup} established the following estimate for the numerical radius of a nilpotent operator: if $T^n=0$ for some positive integer $n\geq 2$, then \begin{eqnarray}\label{haag} w(T) &\leq& \cos \left(\frac{\pi}{n+1} \right) \|T\|. \end{eqnarray} \noindent In this paper, we obtain a generalization of Buzano's inequality \eqref{buzinq}, and using this generalization we develop new numerical radius inequalities that improve the existing ones. From the numerical radius inequalities obtained here, we deduce several results. We deduce an estimate for nilpotent operators analogous to \eqref{haag}. 
Further, we deduce a reverse inequality for the numerical radius power inequality \eqref{power}. \section{\textbf{Numerical radius inequalities}} We begin our study by proving the following lemma, which is a generalization of Buzano's inequality \eqref{buzinq}. \begin{lemma}\label{buz-extension} Let $x_1,x_2,\ldots,x_n,e \in \mathcal{H}$, where $\|e\|=1$. Then \begin{eqnarray*} \left| \mathop{\Pi}\limits_{k=1}^n \langle x_k,e\rangle \right| &\leq& \frac{ \left| \langle x_1,x_2\rangle \mathop{\Pi}\limits_{k=3}^n \langle x_k,e\rangle\right| + \mathop{\Pi}\limits_{k=1}^n \|x_k\|}{2}. \end{eqnarray*} \end{lemma} \begin{proof} Following the inequality \eqref{buzinq}, we have $$ |\langle x_1, e\rangle \langle x_2,e\rangle| \leq \frac{|\langle x_1, x_2\rangle |+ \|x_1\|\|x_2\|}{2}.$$ By replacing $x_2$ by $ \mathop{\Pi}\limits_{k=3}^n \langle x_k,e\rangle x_2$ and using $\left| \mathop{\Pi}\limits_{k=3}^n \langle x_k,e\rangle\right| \leq \mathop{\Pi}\limits_{k=3}^n \| x_k \|$, we obtain the desired inequality. \end{proof} Now using Lemma \ref{buz-extension} (for $n=3$), we prove the following numerical radius inequality. \begin{theorem}\label{th1} Let $T\in \mathcal{B}(\mathcal{H})$. Then \begin{eqnarray*} w(T) &\leq& \sqrt[3]{\frac14 w(T^3) + \frac14\left( \|T^2\|+ \|T^*T+TT^*\|\right) \|T\|}. 
\end{eqnarray*} \end{theorem} \begin{proof} Let $x\in \mathcal{H}$ with $\|x\|=1.$ From Lemma \ref{buz-extension} (for $n=3$), we have \begin{eqnarray*} |\langle Tx,x\rangle |^3 &=& |\langle Tx,x\rangle\langle T^*x,x\rangle\langle T^*x,x\rangle|\\ &\leq& \frac{|\langle Tx,T^*x\rangle \langle T^*x,x\rangle|+ \|Tx\| \|T^*x\|^2}{2}\\ &\leq& \frac{|\langle T^2x,x\rangle \langle T^*x,x\rangle|}{2}+ \frac{(\|Tx\|^2+ \|T^*x\|^2)\|T^*x\|}{4} \,\, (\text{by the AM-GM inequality}). \end{eqnarray*} Also, by Buzano's inequality \eqref{buzinq}, we have \begin{eqnarray*} |\langle T^2x,x\rangle \langle T^*x,x\rangle| &\leq& \frac{|\langle T^2x,T^*x\rangle|+ \|T^2x\| \|T^*x\| }{2}\\ &=& \frac{|\langle T^3x,x\rangle|+ \|T^2x\| \|T^*x\| }{2}. \end{eqnarray*} Therefore, \begin{eqnarray*} |\langle Tx,x\rangle |^3 &\leq& \frac{|\langle T^3x,x\rangle|+ \|T^2x\| \|T^*x\|}{4} + \frac{(\|Tx\|^2+ \|T^*x\|^2)\|T^*x\|}{4}\\ &\leq& \frac14 w(T^3)+ \frac14 \|T^2\| \|T\|+\frac14 \|T^*T+TT^* \| \|T \|. \end{eqnarray*} Taking the supremum over all unit vectors $x$, we get the desired inequality. \end{proof} The inequality in Theorem \ref{th1} is an improvement of the second inequality in \eqref{eqv}, since $w(T^3)\leq \|T^3\|\leq \|T\|^3$, $\|T^2\|\leq \|T\|^2$ and $\|T^*T+TT^*\|\leq 2\|T\|^2.$ To show that the improvement is non-trivial, consider the matrix $T=\begin{bmatrix} 0&2&0\\ 0&0&1\\ 0&0&0 \end{bmatrix}.$ Then $w(T^3)=0$ and so $$ \sqrt[3]{ \frac14 w(T^3) + \frac14\left( \|T^2\|+ \|T^*T+TT^*\|\right) \|T\| } < \|T\|.$$ Next, using Buzano's inequality \eqref{buzinq}, we obtain the following numerical radius inequality. \begin{theorem}\label{th2} Let $T\in \mathcal{B}(\mathcal{H}).$ Then \begin{eqnarray*} w(T) &\leq& \sqrt[3]{\frac12 w(TT^*T)+\frac14 \|T^*T+TT^*\| \|T\|}. 
\end{eqnarray*} \end{theorem} \begin{proof} Let $x\in \mathcal{H}$ with $\|x\|=1.$ By the Cauchy-Schwarz inequality \eqref{cauchy}, we have \begin{eqnarray*} |\langle Tx,x\rangle |^3 &\leq& \|Tx\|^2|\langle T^*x,x\rangle| = |\langle |T|^2x,x\rangle \langle T^*x,x\rangle|. \end{eqnarray*} From Buzano's inequality \eqref{buzinq}, we have \begin{eqnarray*} |\langle Tx,x\rangle |^3 &\leq& \frac{|\langle |T|^2x,T^*x\rangle|+ \||T|^2x\| \|T^*x\|}{2}\\ &=& \frac{|\langle T|T|^2x,x\rangle |+ \||T|^2x\| \|T^*x\|}{2}\\ &\leq& \frac{|\langle T|T|^2x,x\rangle |+ \|T\| \|Tx\| \|T^*x\|}{2}\\ &\leq& \frac{|\langle T|T|^2x,x\rangle |}{2}+\frac{ (\|Tx\|^2+ \|T^*x\|^2)\|T\|}{4}\\ &\leq& \frac12 w(T|T|^2)+ \frac14 \|T^*T+TT^*\| \|T\|. \end{eqnarray*} Taking the supremum over all unit vectors $x$, we get the desired inequality. \end{proof} Clearly, for every $T\in \mathcal{B}(\mathcal{H})$, $$\sqrt[3]{\frac12 w(TT^*T)+\frac14 \|T^*T+TT^*\| \|T\|}\leq \sqrt[3]{\frac12 w(TT^*T)+\frac12 \|T\|^3} \leq \|T\|.$$ Using a similar technique to that of Theorem \ref{th2}, we can also prove the following numerical radius inequality. \begin{eqnarray}\label{eqn5} w(T)&\leq& \sqrt[3]{\frac12 w(T^*T^2)+ \frac12 \|T\|^3}. \end{eqnarray} Replacing $T$ by $T^*$ in \eqref{eqn5}, we get \begin{eqnarray}\label{eqn6} w(T)&\leq& \sqrt[3]{\frac12 w(T^2T^*)+ \frac12 \|T\|^3}. \end{eqnarray} Consider the matrix $T=\begin{bmatrix} 0&2&0\\ 0&0&1\\ 0&0&0 \end{bmatrix}$. Then we see that $$ w(T^2 T^*)=1< w(T^*T^2)=2< w(TT^*T)=\sqrt{65}/2 .$$ Therefore, combining Theorem \ref{th2} and the inequalities \eqref{eqn5} and \eqref{eqn6}, we obtain the following corollary. \begin{cor}\label{cor2} Let $T\in \mathcal{B}(\mathcal{H})$. Then \begin{eqnarray*} w(T) &\leq& \sqrt[3]{\frac12 \min \Big(w(TT^*T), w(T^2T^*), w(T^*T^2) \Big) + \frac12 \|T\|^3}. 
\end{eqnarray*} \end{cor} Since $w(TT^*T)\leq \|T\|^3, \ w(T^2T^*)\leq \|T^2\| \|T\|\leq \|T\|^3,\ w(T^*T^2)\leq \|T^2\| \|T\|\leq \|T\|^3,$ the inequality in Corollary \ref{cor2} is an improvement of the second inequality in \eqref{eqv}. Now, Theorem \ref{th2} together with the inequalities \eqref{eqn5} and \eqref{eqn6} implies the following result. \begin{cor}\label{coor2} Let $T\in \mathcal{B}(\mathcal{H})$. If $w(T)=\|T\|,$ then \begin{eqnarray*} w(TT^*T)= w(T^2T^*)= w(T^*T^2)= \|T\|^3. \end{eqnarray*} \end{cor} Next, by using Lemma \ref{buz-extension} (for $n=3$), Buzano's inequality \eqref{buzinq} and the mixed Schwarz inequality \eqref{mixed}, we obtain the following result. \begin{theorem}\label{th3} Let $T\in \mathcal{B}(\mathcal{H}).$ Then \begin{eqnarray*} w(T) &\leq& \sqrt[3]{\frac14 w(|T|T|T^*|)+ \frac14 \Big ( \|T^2\|+ \|T^*T+TT^*\| \Big)\|T\|}. \end{eqnarray*} \end{theorem} \begin{proof} Let $x\in \mathcal{H}$ with $\|x\|=1.$ From the mixed Schwarz inequality \eqref{mixed}, we have \begin{eqnarray*} |\langle Tx,x\rangle|^3 &\leq & \langle |T^*|x,x\rangle |\langle T^*x,x\rangle| \langle |T|x,x\rangle. \end{eqnarray*} Using Lemma \ref{buz-extension} (for $n=3$), we have \begin{eqnarray*} |\langle Tx,x\rangle|^3 &\leq & \frac{|\langle |T^*|x,T^*x\rangle \langle |T|x,x\rangle | + \||T^*|x\| \|T^*x\| \||T|x\| } {2}\\ &= & \frac{|\langle T |T^*|x,x\rangle | \langle |T|x,x\rangle + \||T^*|x\| \|T^*x\| \||T|x\| } {2}. \end{eqnarray*} By Buzano's inequality \eqref{buzinq}, we have \begin{eqnarray*} |\langle T |T^*|x,x\rangle | \langle |T|x,x\rangle &\leq& \frac{ | \langle T |T^*|x,|T|x\rangle |+ \|T |T^*|x\| \||T|x\| }{2}\\ &=& \frac{| \langle |T| T |T^*|x, x\rangle |+ \|T |T^*|x\| \||T|x\| }{2}. \end{eqnarray*} Also, by the AM-GM inequality, we have \begin{eqnarray*} \||T^*|x\| \|T^*x\| \||T|x\| &\leq& \frac12(\||T^*|x\|^2 + \||T|x\|^2) \|T^*x\|\\ &=& \frac12 \langle (|T|^2+|T^*|^2)x,x\rangle \|T^*x\|. 
\end{eqnarray*} Therefore, \begin{eqnarray*} |\langle Tx,x\rangle|^3 &\leq & \frac{ |\langle |T| T |T^*|x, x\rangle |+ \|T |T^*|x\| \||T|x\| }{4} + \frac14 \langle (|T|^2+|T^*|^2)x,x\rangle \|T^*x\|\\ &\leq& \frac14 w(|T|T|T^*|)+ \frac14 \Big ( \|T |T^*|\|+ \|T^*T+TT^*\| \Big)\|T\|. \end{eqnarray*} From the polar decomposition $T^*=U|T^*|$, it is easy to verify that $\|T |T^*|\|=\|T^2\|.$ So, \begin{eqnarray*} |\langle Tx,x\rangle|^3 &\leq& \frac14 w(|T|T|T^*|)+ \frac14 \Big ( \|T^2 \|+ \|T^*T+TT^*\| \Big)\|T\|. \end{eqnarray*} Taking the supremum over all unit vectors $x$, we get the desired result. \end{proof} Now, combining Theorem \ref{th3} and Theorem \ref{th1}, we obtain the following corollary. \begin{cor}\label{cor3} Let $T\in \mathcal{B}(\mathcal{H})$. Then \begin{eqnarray*} w(T) \leq \sqrt[3]{\frac14 \min \Big( w(T^3), w(|T|T|T^*|) \Big)+ \frac14 \Big ( \|T^2\|+ \|T^*T+TT^*\| \Big)\|T\|}. \end{eqnarray*} \end{cor} Using a similar technique to that of Theorem \ref{th3}, we can also prove the following inequality. \begin{eqnarray}\label{eqn7} w(T) &\leq& \sqrt[3]{\frac14 w(|T^*|T|T|)+ \frac38 \|T^*T+TT^*\| \|T\|}. \end{eqnarray} Clearly, the inequalities in Corollary \ref{cor3} and \eqref{eqn7} are stronger than the second inequality in \eqref{eqv}. Moreover, if $w(T)=\|T\|,$ then $$ w(|T|T|T^*|)= w(|T^*|T|T|) =w(T^3)=\|T\|^3.$$ The next theorem reads as follows: \begin{theorem}\label{th4} Let $T\in \mathcal{B}(\mathcal{H}).$ Then \begin{eqnarray*} w(T) &\leq& \sqrt[4]{ \Big( \frac12 w(T|T|) +\frac14 \|T^*T+TT^*\| \Big) \Big ( \frac12 w(T^*|T^*|)+ \frac14\|T^*T+TT^*\| \Big)}. 
\end{eqnarray*} \end{theorem} \begin{proof} Let $x\in \mathcal{H}$ with $\|x\|=1.$ From the mixed Schwarz inequality \eqref{mixed}, we have $$ |\langle Tx,x\rangle|^2 \leq \langle |T|x,x\rangle \langle |T^*|x,x\rangle.$$ Therefore, $$ |\langle Tx,x\rangle|^4 \leq \langle |T|x,x\rangle \langle |T^*|x,x\rangle |\langle Tx,x\rangle \langle T^*x,x\rangle|.$$ From Buzano's inequality \eqref{buzinq}, we have \begin{eqnarray*} |\langle |T|x,x \rangle \langle T^*x,x\rangle | &\leq& \frac{|\langle |T|x, T^*x \rangle|+ \||T|x\| \|T^*x\|}{2}\\ &\leq& \frac12 |\langle T|T|x, x \rangle|+ \frac14 (\||T|x\|^2+ \|T^*x\|^2)\\ &\leq& \frac12 w(T|T|)+ \frac14 \|T^*T+TT^*\|. \end{eqnarray*} Similarly, \begin{eqnarray*} |\langle |T^*|x,x \rangle \langle Tx,x\rangle | &\leq& \frac{|\langle |T^*|x, Tx \rangle|+ \||T^*|x\| \|Tx\|}{2}\\ &\leq& \frac12 |\langle T^*|T^*|x, x \rangle|+ \frac14 (\||T^*|x\|^2+ \|Tx\|^2)\\ &\leq& \frac12 w(T^*|T^*|)+ \frac14 \|T^*T+TT^*\|. \end{eqnarray*} Therefore, $$|\langle Tx,x\rangle|^4 \leq \left(\frac12 w(T|T|)+ \frac14 \|T^*T+TT^*\|\right) \left( \frac12 w(T^*|T^*|)+ \frac14 \|T^*T+TT^*\|\right).$$ Taking the supremum over all unit vectors $x$, we get $$w^4(T) \leq \left(\frac12 w(T|T|)+ \frac14 \|T^*T+TT^*\|\right) \left( \frac12 w(T^*|T^*|)+ \frac14 \|T^*T+TT^*\|\right),$$ as desired. \end{proof} Using a similar technique to that of Theorem \ref{th4}, we can prove the following result. \begin{theorem}\label{th5} Let $T\in \mathcal{B}(\mathcal{H}).$ Then \begin{eqnarray*} w(T) &\leq& \sqrt[4]{ \Big( \frac12 w(T|T^*|) +\frac12 \|T\|^2 \Big) \Big ( \frac12 w(T^*|T|)+ \frac12\|T\|^2 \Big)}. \end{eqnarray*} \end{theorem} Clearly, the inequalities in Theorem \ref{th4} and Theorem \ref{th5} are improvements of the second inequality in \eqref{eqv}. These inequalities imply that if $w(T)=\|T\|$, then $$w(T|T|)=w(T^*|T^*|) = w(T|T^*|)= w(T^*|T|)=\|T\|^2.$$ Now we consider the following example to compare the inequalities in Theorem \ref{th4} and Theorem \ref{th5}. 
\begin{example} Consider the matrix $T=\begin{bmatrix} 0&1&0\\ 0&0&1\\ 0&0&0 \end{bmatrix}.$ Then Theorem \ref{th4} gives $w(T)\leq \sqrt{ \frac{1}{2\sqrt{2}}+ \frac12 }$, whereas Theorem \ref{th5} gives $w(T)\leq \sqrt{ \frac{1}{4}+ \frac12}.$ Again, consider $T=\begin{bmatrix} 0&1&0\\ 0&0&2\\ 0&0&0 \end{bmatrix}.$ Then Theorem \ref{th4} gives $w(T)\leq \sqrt{ \frac{\sqrt{17}+5}{4} }$, whereas Theorem \ref{th5} gives $w(T)\leq \sqrt{ \frac{5+5}{4} }.$ Therefore, the inequalities obtained in Theorem \ref{th4} and Theorem \ref{th5} are not comparable in general. \end{example} \section{\textbf{Numerical radius inequalities involving general powers}} We develop a numerical radius inequality relating the general powers $w^n(T)$ and $w(T^n)$ for every positive integer $n\geq 2$, from which we derive results on nilpotent operators and a reverse power inequality for the numerical radius. First we prove the following theorem. \begin{theorem}\label{th7} If $T\in \mathcal{B}(\mathcal{H}),$ then \begin{eqnarray*} |\langle Tx, x\rangle|^n &\leq& \frac{1}{2^{n-1}} \left|\langle T^nx, x\rangle \right|+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^kx \right\| \left\|T^*x \right\|^{n-k}, \end{eqnarray*} for all $x\in \mathcal{H}$ with $\|x\|=1$ and for every positive integer $n\geq 2.$ \end{theorem} \begin{proof} We have \begin{eqnarray*} && |\langle Tx,x \rangle|^n \\ &=&|\langle Tx,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-2} |\\ &\leq& \frac{\left|\langle Tx,T^*x \rangle \langle T^*x,x \rangle^{n-2} \right|+ \|Tx\| \|T^*x\|^{n-1} }{2} \,\, (\text{by Lemma \ref{buz-extension}})\\ &=& \frac{\left|\langle T^2x,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-3} \right|+ \|Tx\| \|T^*x\|^{n-1} }{2}\\ &\leq & \frac{ \frac{\left|\langle T^2x,T^*x \rangle \langle T^*x,x \rangle^{n-3} \right|+ \|T^2x\| \|T^*x\|^{n-2} }{2}+ \|Tx\| \|T^*x\|^{n-1} }{2} \,\, (\text{by Lemma \ref{buz-extension}})\\ &= & \frac{\left|\langle 
T^3x,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-4} \right|+ \|T^2x\| \|T^*x\|^{n-2} }{2^2}+\frac{ \|Tx\| \|T^*x\|^{n-1} }{2} \\ &\leq & \frac{\frac{\left|\langle T^3x,T^*x \rangle \langle T^*x,x \rangle^{n-4}\right|+ \|T^3x\| \|T^*x\|^{n-3} }{2} + \|T^2x\| \|T^*x\|^{n-2} }{2^2} +\frac{ \|Tx\| \|T^*x\|^{n-1} }{2} \\ && \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (\text{by Lemma \ref{buz-extension}})\\ &= & \frac{\left|\langle T^4x,x \rangle \langle T^*x,x \rangle \langle T^*x,x \rangle^{n-5}\right|+ \|T^3x\| \|T^*x\|^{n-3} }{2^3} + \frac{ \|T^2x\| \|T^*x\|^{n-2} }{2^2} \\ && \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +\frac{ \|Tx\| \|T^*x\|^{n-1} }{2}. \end{eqnarray*} Repeating this argument, using Lemma \ref{buz-extension} a total of $(n-1)$ times, we obtain \begin{eqnarray*} |\langle Tx, x\rangle|^n &\leq& \frac{1}{2^{n-1}} \left|\langle T^nx, x\rangle \right|+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^kx \right\| \left\|T^*x \right\|^{n-k}, \end{eqnarray*} as desired. \end{proof} The following generalized numerical radius inequality is a simple consequence of Theorem \ref{th7}. 
\begin{cor}\label{cor5} If $T\in \mathcal{B}(\mathcal{H}),$ then \begin{eqnarray}\label{pp} w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}, \end{eqnarray} for every positive integer $n\geq 2.$ \end{cor} \begin{remark} (i) For every $T\in \mathcal{B}(\mathcal{H})$ and for every positive integer $n\geq 2$, \begin{eqnarray*} w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k} \\ &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n} \\ &\leq& \frac{1}{2^{n-1}} \|T^n\|+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n} \,\, \text{(by \eqref{eqv})}\\ &\leq& \frac{1}{2^{n-1}} \|T\|^n+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n} = \|T\|^n. \end{eqnarray*} Therefore, the inequality \eqref{pp} is an improvement of the second inequality in \eqref{eqv}. \noindent (ii) From the above inequalities, it follows that if $w(T)=\|T\|$, then $$ w^n(T)=w(T^n)=\|T^n\|=\|T\|^n,$$ for every positive integer $n\geq 2.$ Consequently, if $w(T)=\|T\|$, then $$w(T)= \lim\limits_{n\to \infty} \|T^n\|^{1/n}= r(T),$$ where $r(T)$ denotes the spectral radius of $T.$ The second equality holds for every operator $T\in \mathcal{B}(\mathcal{H})$ and is known as the Gelfand formula for the spectral radius. \noindent (iii) Taking $n=2$ in Corollary \ref{cor5}, we get $$ w^2(T) \leq \frac12 w(T^2)+ \frac12\|T\|^2,$$ which was proved by Dragomir \cite{D08}. \noindent (iv) Taking $n=2$ in Theorem \ref{th7}, we deduce that $$ w^2(T) \leq \frac12 w(T^2)+ \frac14 \|T^*T+TT^*\|,$$ which was proved by Abu-Omar and Kittaneh \cite{Abu_RMJM_2015}. \end{remark} From Corollary \ref{cor5}, we obtain an estimate for nilpotent operators. 
\begin{cor}\label{nilpotent} Let $T\in \mathcal{B}(\mathcal{H}).$ If $n\geq 2$ is the least positive integer such that $T^n=0$, then \begin{eqnarray*} w(T) &\leq& \left(\sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}\right)^{1/n} \leq \left( 1- \frac{1}{2^{n-1}}\right)^{1/n} \|T\|. \end{eqnarray*} \end{cor} Consider the matrix $T=\begin{bmatrix} 0&1&2\\ 0&0&3\\ 0&0&0 \end{bmatrix}.$ Then we see that Corollary \ref{nilpotent} gives $w(T)\leq \alpha \approx 3.0021$ and Theorem \ref{th1} gives $w(T) \leq \beta \approx 2.5546,$ whereas the inequality \eqref{haag} gives $w(T)\leq \gamma \approx 2.5811.$ Therefore, for nilpotent operators the inequality \eqref{haag} (due to Haagerup and de la Harpe) is not always better than the inequalities discussed here, and vice versa. Finally, by using Corollary \ref{cor5} we obtain a reverse power inequality for the numerical radius. \begin{cor} Let $T\in \mathcal{B}(\mathcal{H})$. If $\|T\|\leq 1$, then \begin{eqnarray*} w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}}, \end{eqnarray*} for every positive integer $n\geq 2.$ This inequality is sharp. \end{cor} \begin{proof} From Corollary \ref{cor5}, we have \begin{eqnarray*} w^n(T) &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T^k \right\| \left\|T \right\|^{n-k}\\ &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \left\|T \right\|^{n}\\ &\leq& \frac{1}{2^{n-1}} w(T^n)+ \sum_{k=1}^{n-1} \frac{1}{2^{k}} \\ &=& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}}. \end{eqnarray*} If $T^*T=TT^*$ and $\|T\|=1,$ then \begin{eqnarray*} w^n(T) &=& \frac{1}{2^{n-1}} w(T^n)+ 1- \frac{1}{2^{n-1}}=1. 
\end{eqnarray*} This shows that the inequality is sharp. \end{proof} \noindent {\bf{Data availability statements.}}\\ Data sharing not applicable to this article as no datasets were generated or analysed during the current study.\\ \noindent {\bf{Declarations.}}\\ \noindent {\bf{Conflict of Interest.}} The author declares that there is no conflict of interest. \end{document}
\begin{document} \begin{abstract} Let $\pi: X \to S$ be a family of smooth projective curves, and let $L$ and $M$ be a pair of line bundles on $X$. We show that Deligne's line bundle $\<{L,M}\>$ can be obtained from the $\mathcal{K}_2$-gerbe $G_{L,M}$ constructed in \cite{ER} via an integration-along-the-fiber map for gerbes that categorifies the well-known one arising from the Leray spectral sequence of $\pi$. Our construction provides a full account of the biadditivity properties of $\<{L,M}\>$. Our main application is to the categorification of correspondences on the self-product of a curve. The functorial description of the low-degree maps in the Leray spectral sequence for $\pi$ that we develop is of independent interest, and, along the way, we provide an example of their application to the Brauer group. \end{abstract} \title{Fiber integration of gerbes and Deligne line bundles} \section{Introduction} Let $S$ be a smooth variety over a field $F$, and let $\pi:X \to S$ be a smooth projective morphism of relative dimension one. Deligne \cite{SGA4, MR902592} has constructed a bi-additive functor of Picard categories \[ \Psi_{X/S} \colon \tors_X(\mathbb{G}_m) \times \tors_X(\mathbb{G}_m) \to \tors_S(\mathbb{G}_m), \quad \Psi_{X/S}(L,M) = \<{L,M}\>\,, \] where the bi-additivity means that there are natural isomorphisms \[ \<{L+L', M}\> \congto \<{L,M}\> + \<{L',M}\>\,,\quad \<{L,M}\> \congto \<{M,L}\>\,. \] Even though there are multiple approaches to $\Psi_{X/S}$ (see \S \ref{deligne-as-det}), the proof of bi-additivity is non-trivial in each one of them. Let $\mathcal{K}_{j}$ be the usual Zariski sheaf attached to the presheaf $U \mapsto K_j(U)$ on $X$. A basic result of Bloch-Quillen is that $H^j(X, \mathcal{K}_j)$ is isomorphic to the Chow group $CH^j(X)$ of codimension-$j$ cycles on $X$. 
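As a low-degree sanity check (our remark, not part of the original argument), the Bloch--Quillen isomorphism recovers a familiar identification:

```latex
% For j = 1 the sheaf \mathcal{K}_1 is the sheaf of units \mathcal{O}_X^{\times}, so Bloch--Quillen
% reduces to the classical identification
\[
H^1(X,\mathcal{K}_1) = H^1(X,\mathcal{O}_X^{\times}) \cong \operatorname{Pic}(X) \cong CH^1(X),
\]
% while for j = 2 it identifies H^2(X,\mathcal{K}_2) with CH^2(X), the group in which the
% classes of the \mathcal{K}_2-gerbes considered in this paper live.
```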
Our main result is the following:\footnote{Theorem \ref{Main} was conjectured by M.~Patnaik \cite[Remark 21.3.2]{Patnaik}.} \begin{theorem} \label{Main} The functor $\Psi_{X/S}$ factorizes as a composition of a bi-additive functor $\cup$ and an additive functor $\int_{\pi}$: \[ \tors_X(\mathbb{G}_m) \times \tors_X(\mathbb{G}_m) \overset{\cup}{\longrightarrow} \gerb_X(\mathcal{K}_2) \xrightarrow{\int_{\pi}} \tors_S(\mathbb{G}_m)\,. \] \end{theorem} In the statement, $\gerb_X(\mathcal{K}_{2})$ denotes the Picard 2-category of gerbes on $X$ with band $\mathcal{K}_{2}$; the bi-additivity of $\Psi_{X/S}$ is a consequence of the bi-additivity of the cup-product. The functor $\int_{\pi}$ categorifies the pushforward map \[ \pi_*: \mathsf{CH}^2(X) \to \mathsf{CH}^1(S)\,, \] and it represents \cite[XVIII \S 1.3]{SGA4} the integration of a gerbe along the fibers of $\pi$. Thus, $\Psi_{X/S}$ is actually a categorification of the pairing \[ \mathsf{CH}^1(X) \times \mathsf{CH}^1(X) \overset{\cup}{\longrightarrow} \mathsf{CH}^2(X) \overset{\pi_*}{\longrightarrow} \mathsf{CH}^1(S)\,. \] The proof of Theorem \ref{Main} is obtained by combining the following Theorems \ref{prop1}, \ref{prop2}, and \ref{prop3}: \begin{theorem} \label{prop1} On any smooth variety $Y$ over $F$, there exists a natural bi-additive functor \[ \tors_Y(\mathbb{G}_m) \times \tors_Y(\mathbb{G}_m) \overset{\cup}{\longrightarrow} \gerb_Y(\mathcal{K}_2). \] \end{theorem} This is essentially proved in \cite{ER}, except for the biadditivity, which we address below. Biadditivity (or additivity) is conceptually straightforward, but we must contend with the fact that some of the entities involved are higher categories or stacks. Let $G_{L,M}$ be the $\mathcal{K}_2$-gerbe corresponding to the cup-product of line bundles $L$ and $M$ on $X$ (viewed as $\mathbb{G}_m$-torsors). 
\begin{theorem} \label{prop2} For $\pi\colon X \to S$ as above of relative dimension one, there exists a natural additive functor \[ \int_\pi \colon \gerb_X(\mathcal{K}_2) \longrightarrow \tors_S(\mathbb{G}_m) \,. \] \end{theorem} The proof of Theorem~\ref{prop2} consists in writing the maps in the low-degree part of the Leray spectral sequence for $\pi\colon X\to S$ directly in terms of the (higher) stacks they classify. While this can be traced back in some implicit form to \cite[\S V.3.1-2]{Giraud}, we reprise it here as we need in particular an explicit description of the functors involved. In particular, the integration map is given by taking the sheaf of connected components of the pushforward gerbe from $X$ to $S$. We describe the integration map in greater generality, by working with a general site morphism. Finally, we have \begin{theorem}\label{prop3} One has a natural isomorphism \[\int_{\pi} G_{L,M} \cong \<{L,M}\>.\] \end{theorem} Let $Y$ be a smooth proper variety over $F$, and write $\mathsf{CH}^*(Y) = \oplus_j\mathsf{CH}^j(Y)$ for the total Chow group of $Y$. Recall that one has a homomorphism \cite[Example 16.1.2(c)]{MR1644323} \begin{equation}\label{chow-end} \mathsf{CH}^{\dim Y}(Y \times Y) \to \End(\mathsf{CH}^*(Y)) \end{equation} of rings; the ring structure on the former is given by the composition of correspondences. The categorification of (\ref{chow-end}) is of great interest. We use Theorem \ref{Main} to provide a categorification (Theorem \ref{nuovo}) of (\ref{chow-end}) when $\dim Y =1$. It should be remarked that the problem of categorification of (\ref{chow-end}) seems to be formidable when $\dim Y > 1$: for a surface $Y$, one needs to endow the Picard $2$-category $\cGerb_{Y\times Y}(\mathcal{K}_2)$ with a ring structure. Let $\mathbf{CH}^1(Y)$ denote the Picard category of line bundles on $Y$. 
If $C$ is a smooth projective curve over an algebraically closed field, then $\mathbf{CH}^1(C \times C)$ can be naturally enhanced to a ring category and the natural functor $\mathbf{CH}^1(C \times C) \to \stEnd(\mathbf{CH}^1(C))$ is a functor of ring categories (Theorem \ref{nuovo}). While there are several generalizations \cite{MR2562455, MR991974, MR962493, MR1005159, MR1085257, MR1078860, MR772054, Eriksson} of Deligne's construction, they are all restricted to line bundles or codimension one. However, Theorem \ref{Main} suggests new generalizations \cite{ER2} of Deligne's construction: if $f:Y\to S$ is smooth proper of relative dimension two, there exists a natural bi-additive functor \[ \Psi^2_{Y/S}: \gerb_Y(\mathcal{K}_2) \times \gerb_Y(\mathcal{K}_2) \longrightarrow \gerb_S(\mathcal{K}_2), \] which is a categorification of the pairing \[ \mathsf{CH}^2(Y) \times \mathsf{CH}^2(Y) \longrightarrow \mathsf{CH}^4(Y) \xrightarrow{f_*} \mathsf{CH}^2(S)\,. \] \subsection*{Organization} In section~\ref{Leray} we analyze in some detail the low-degree terms exact sequence of the Leray spectral sequence for $\pi\colon X \to S$. While this is all well known from \cite[\S V.3.1-2]{Giraud}, we expand on it, as several details were famously left as an exercise (\cite[Exercice 3.1.9.2]{Giraud}). Since we describe the maps in the sequence fairly explicitly, we use them to illustrate an application to the Brauer group, which is of independent interest. In section~\ref{cat-int-div} we prove Theorems~\ref{prop1} and~\ref{prop2}, and, finally, we prove Theorem \ref{prop3}, the comparison with Deligne's construction, in section~\ref{comp-deligne}. We end with a proof of the main result (Theorem \ref{nuovo}) about the categorification of correspondences in \S \ref{sec:categ-corr}. The requisite results from the theory of Picard categories are in \S \ref{sec:picard-stacks-endom}. 
\subsection*{Notations} For any sheaf $A$ of abelian groups on a site we denote by $\tors (A)$ the Picard category of $A$-torsors and by $\stTors (A)$ the corresponding stack. Similarly, one categorical level up, for $\gerb (A)$ and $\cGerb (A)$, which denote the 2-Picard category of $A$-gerbes and the corresponding 2-stack. For any stack $\cF$, we denote by $\pi_0(\cF)$ its sheaf of connected components and by $\mathsf{F}$ the category of its global sections, that is $\mathrm{HOM}(\mathit{pt},\cF)$, where $\mathit{pt}$ is the terminal sheaf. \subsection*{Acknowledgements.} Thanks to Gerard Freixas i Montplet for alerting us to \cite{MR962493}, and to Sasha Beilinson for his help with understanding~\cite{MR962493} and with the proof of Proposition~\ref{beilinson-lemma}. This work was partly supported by a Travel Award Grant from the Florida State University College of Arts and Sciences, which we gratefully acknowledge. \section{Norms and Deligne's construction} We recall some properties of Deligne's functor $\Psi_{X/S}$ and the line bundle $\<{L,M}\>$ on $S$. \subsection{Norms and finite maps} Let $g\colon V\to W$ be a finite and flat morphism of varieties. Given a line bundle $L$ on $V$, its norm (relative to $g$) is a line bundle $N_{V/W}(L)$ on $W$. One has an additive functor of Picard categories \cite[\S 7.1]{MR902592} \[ N_{V/W}\colon \tors_V(\mathbb{G}_m) \longrightarrow \tors_W(\mathbb{G}_m)\,. \] \subsection{Characterization of Deligne's functor $\Psi_{X/S}$}\label{deligne-as-det} Let $D\subset X$ be an effective relative Cartier divisor \cite[Tag 056P]{stacks-project} of $\pi:X \longrightarrow S$. Namely, $D$ is an effective Cartier divisor on $X$ and the induced morphism $\pi:D\to S$ is finite and flat. For any line bundle $M$ on $X$, the norm $N_{D/S}(M)$ is a line bundle on $S$. 
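A basic special case may help fix ideas (standard, though not spelled out in the text): when the divisor splits into sections, the norm is simply a tensor product of pullbacks.

```latex
% If the finite flat map D \to S has degree n and D decomposes as a disjoint union of
% images of sections s_1, \dots, s_n \colon S \to D \subset X, then
\[
N_{D/S}(M) \;\cong\; \bigotimes_{i=1}^{n} s_i^{*}M,
\]
% so that, via the characterization \eqref{norm-finite} below, the fiber of \<{L,M}\> at a
% point of S is the tensor product of the fibers of M at the zeros of the chosen section of L.
```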
Deligne's construction $\Psi_{X/S}$ is characterized by \cite[XVIII 1.3.16]{SGA4}: (i) functoriality, and (ii) for any section of $L$ with zero set an effective Cartier divisor $D$ on $X$, a canonical isomorphism \begin{equation}\label{norm-finite} N_{D/S} \big(M~\big|_D~\big) \cong \<{L,M} \>\,. \end{equation} Another approach to $\Psi_{X/S}$ from \cite[XVIII 1.3.17.2]{SGA4} is the following: if $D$ and $E$ are effective relative Cartier divisors on $X$, then \begin{equation} \<{\mathcal{O} (D), \mathcal{O} (E)}\> \cong \det \mathrm{R}\pi_*(\mathcal{O} (D) \overset{\mathbf{L}}{\otimes} \mathcal{O} (E)). \end{equation} \section{Fiber Integration of gerbes and the Leray spectral sequence} \label{Leray} Let $A$ be an abelian sheaf on $X$. The spectral sequence \begin{equation} \label{leray} E^{i,j}_2 = \mathrm{H}^i (S, \mathrm{R}^j\pi_*A) \Rightarrow \mathrm{H}^{i+j}(X, A) \end{equation} has as its low-degree exact sequence \cite[Appendix II, page 309]{MilneEC} \begin{equation}\label{lowterm} 0 \longrightarrow E^{1,0}_2 \longrightarrow E^1 \longrightarrow E^{0,1}_2 \longrightarrow E^{2,0}_2 \longrightarrow E^2_1 \longrightarrow E^{1,1}_2\,, \end{equation} where \[ E^1 = \mathrm{H}^1(X,A)\,, \quad E^2_1 = \Ker( \mathrm{H}^2(X, A) \longrightarrow \mathrm{H}^0(S, \mathrm{R}^2\pi_*A))\,, \] and, of course, $\mathrm{H}^0(S, \mathrm{R}^2\pi_*A) = E^{0,2}_2$. The maps above arise from functors between categories of torsors and gerbes, as shown in~\cite[pp.~324--327]{Giraud}. For our own purposes, and also to rephrase the arguments in loc.~cit.\ in a more transparent way, we turn to an explicit description of these functors. Our arguments below (in the Zariski topology) are easily seen to be also valid in the \'etale or analytic topology. In fact, they are initially valid for any morphism between sites whose underlying functor is assumed, for simplicity, to preserve finite limits, and we shall begin our discussion in such generality. 
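Before turning to general site morphisms, it may help to record the motivating example (implicit in the text above; we write $\mathrm{Ouv}(-)$ for the site of open subsets):

```latex
% For the Zariski sites, a morphism of schemes \pi\colon X \to S induces a site morphism
% whose underlying functor on open sets is
\[
u = \pi^{-1}\colon \mathrm{Ouv}(S) \longrightarrow \mathrm{Ouv}(X), \qquad U \longmapsto \pi^{-1}(U).
\]
% Here u preserves coverings, and it preserves the finite limits of \mathrm{Ouv}(S), since
% \pi^{-1}(U \cap V) = \pi^{-1}(U) \cap \pi^{-1}(V) and \pi^{-1}(S) = X; the induced
% push-forward of (pre)sheaves is the usual direct image \pi_*.
```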
\subsection{Site morphisms, push-forwards and pullbacks of stacks } \label{sec:push-forw-pullb} Let $\pi\colon \mathsf{D}\to \mathsf{C}$ be a morphism of (small) sites. We let $u=\pi^{-1}\colon \mathsf{C} \to \mathsf{D}$ denote the underlying functor. Thus, $\pi$ is a morphism of sites if composition along $u$ preserves sheaves, and this operation has a left adjoint that is exact \cite{SGA4}; this is implied by the requirement that $u$ preserve coverings and, if both $\mathsf{C}$ and $\mathsf{D}$ have finite limits, that $u$ preserve them \cite{JardineLHT}, \cite[\href{https://stacks.math.columbia.edu/tag/00X0}{Tag 00X0}]{stacks-project}. Let $\cG$ be a category over $\mathsf{D}$. Its \emph{push-forward} $\pi_*\cG$ along $\pi$ is defined by \begin{equation*} \pi_*\cG = \mathsf{C} \times_{\mathsf{D}} \cG \end{equation*} as a category over $\mathsf{C}$ via the first projection. It is fibered (resp.\ a stack) if $\cG$ is. On the other hand, let $p\colon \cF\to \mathsf{C}$ be a stack. The inverse image $\pi^*\cF$ is a pair $(\cF',\phi)$, where $\cF'$ is a stack over $\mathsf{D}$, and $\phi\colon \cF\to \pi_*\cF'$ a stack morphism such that, for any stack $\cG$ over $\mathsf{D}$, the following composite functor \begin{equation*} \mathrm{HOM}_\mathsf{D}(\cF',\cG) \longrightarrow \mathrm{HOM}_\mathsf{C}(\pi_*\cF',\pi_*\cG) \longrightarrow \mathrm{HOM}_\mathsf{C}(\cF,\pi_*\cG)\,, \end{equation*} is an equivalence of categories \cite[Déf.\ 3.2.1]{Giraud}. Here $\mathrm{HOM}$ denotes the category of stack morphisms. Thus, the inverse image is truly only defined up to equivalence. While specific formulas to compute a model of $\pi^*\cF$ do exist \cite[\href{https://stacks.math.columbia.edu/tag/04WJ}{Tag 04WJ}]{stacks-project}, the universal property is sufficient to characterize its connected components. 
Recall that $\pi_0(\cF)$ is the sheaf corresponding to the presheaf of connected components: to any object $U\in \mathsf{C}$ it assigns the set of connected components $\pi_0(\cF_U)$ of the fiber category $\cF_U$ \cite[Chap.\ 7]{BreenAst}. (In ref.\ \cite[n.\ 2.1.3.3]{Giraud} this is the ``sheaf of maximal sub-gerbes of $\cF$.'') We have \cite[Prop.\ 2.1.5.5 (iii)]{Giraud} an isomorphism of sheaves over $\mathsf{D}$: \begin{equation*} \pi_0(\pi^*\cF) \congto \pi^*(\pi_0(\cF)) \,. \end{equation*} This follows from the fact that if $x,y$ are any two objects of $\cF$ over $U\in \mathsf{C}$, then there is a sheaf isomorphism \begin{equation*} \pi^*\mathrm{Hom}_{\cF}(x,y) \congto \mathrm{Hom}_{\cF'}(x',y')\,, \end{equation*} where the objects $x',y'$ of $\cF'_{\pi^{-1}(U)}$ are constructed via the above universal property (ibid.). As a consequence, since a gerbe is locally connected, the inverse image of a gerbe is again a gerbe \cite[Cor.\ 2.1.5.6]{Giraud}. In fact, if, as we shall assume in later sections, $\cF$ has band $A$ for an abelian sheaf $A$ over $\mathsf{C}$, then $\pi^*\cF$ has band $\pi^*A$. On the other hand, even if $\cG\to \mathsf{D}$ is a gerbe, its push-forward need not be one. In other words, $\pi_0(\pi_*(\cG))$ may turn out to be a nontrivial sheaf over $\mathsf{C}$. More precisely, we have the following statement. \begin{lemma}[\protect{\cite[Exercice 3.1.9.2]{Giraud}}] \label{lem:exercise} Let $\pi\colon \mathsf{D}\to \mathsf{C}$ be a site morphism as above. Let $\cG$ be an $A$-gerbe on $\mathsf{D}$, where $A$ is an abelian sheaf. Then $\pi_0(\pi_*(\cG))$ is a pseudo $\mathrm{R}^1\pi_*A$-torsor. It is a torsor if and only if the class $[\cG]\in \mathrm{H}^2(\mathsf{D},A)$ lies in the kernel of the map $\mathrm{H}^2(\mathsf{D},A)\to \mathrm{H}^0(\mathsf{C},\mathrm{R}^2\pi_*A)$, and hence in the term denoted $E^2_1$ above. \end{lemma} An $A$-gerbe $\cG$ is {\it{horizontal}} if its class $[\cG]$ lies in $E^2_1$.
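In the notation of the proof given in the next subsection, the pseudo-torsor structure of the lemma is induced by twisting: on sections over $U\in \mathsf{C}$, the action is
\begin{equation*}
\pi_0(\pi_*\cG) \times \mathrm{R}^1\pi_*A \longrightarrow \pi_0(\pi_*\cG)\,, \qquad ([x],[P]) \longmapsto [P\wedge^A x]\,,
\end{equation*}
where $x\in \cG_{u(U)}$ and $P$ is an $A\vert_{u(U)}$-torsor representing the given sections, and $P\wedge^A x$ denotes the twist of $x$ by $P$.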
If $\cG$ is horizontal, then $\pi_0(\pi_*(\cG))$ is an $\mathrm{R}^1\pi_*A$-torsor. By $\mathrm{H}^i(\mathsf{C},-)$ (and similarly for $\mathsf{D}$) we denote the cohomology of the terminal sheaf $\mathit{pt}$. In the concrete case of the Zariski sites, where $\mathit{pt}$ is represented by the site's terminal object, this reduces to the groups considered at the beginning of this section. \begin{definition}\label{horizontal} Let $\cGerb_X(\mathcal{K}_2)'$ be the full sub(2-)category of $\cGerb_X(\mathcal{K}_2)$ consisting of horizontal gerbes. The functor \[ \Theta_{\pi}\colon \gerb_X(\mathcal{K}_2)'\to \tors_S(\mathrm{R}^1\pi_*\mathcal{K}_2) \] sends a gerbe $\cG$ to $\pi_0(\pi_*(\cG))$. \end{definition} We write $\theta\colon E^2_1\to E^{1,1}_2$ for the induced map. (In the previous statement, as well as in several that follow, the relevant (2-)categories can be upgraded to the corresponding (2-)stacks.) \subsection{Proof of Lemma~\protect\ref{lem:exercise}} \label{sec:médaille_de_chocolat} This section is devoted to a complete proof of Lemma~\ref{lem:exercise}. Several points of the proof will be explicitly needed in sections~\ref{cat-int-div} and~\ref{comp-deligne} below.\footnote{As the details are famously unavailable in the original reference and elsewhere in the literature, we felt compelled to include them here.} It is convenient to express the sites' topologies in terms of local epimorphisms \cite{SGA4,KS}, and take hypercovers of those, in particular Čech nerves. For simplicial objects we use the ``opposite index convention'' \cite{Duskin2001} (and reverse the order of the maps for cosimplicial ones) when pulling back by simplicial maps: $d^*_i(-) = (-)_{[n]\setminus i}$, where $[n]$ is the ordinal $[n]=\{0 < 1 < \dots < n\}$. \subsubsection{Objects with operators} \label{sec:objects-with-oper} Let $\cF$ be a stack over a site $\mathsf{C}$, and let $G$ be a sheaf of groups over $\mathsf{C}$.
The stack $\stOp(G,\cF)$ has objects the pairs $(x,\eta)$, where $x\in \cF_U$, and $\eta\colon G\vert_U\to \Aut_U(x)$. Morphisms from $(x,\eta)$ to $(y,\theta)$ are arrows $\alpha\colon x\to y$ in $\cF_U$ compatible with the structure: $\alpha \circ \eta (g) = \theta(g)\circ\alpha$, for all sections $g\in G\vert_U$. \begin{lemma}[\protect{\cite[III № 2.3]{Giraud}}] \label{lem:twist} There is a stack morphism $t\colon \stTors(G)\times_\mathsf{C} \stOp(G,\cF)\to \cF$. \end{lemma} This ``twisting'' morphism assigns to each pair $(P,(x,\eta))$ over $U\in\mathsf{C}$ an object of $\cF_U$, variously denoted as $\prescript{P}{}x$ or $P\wedge^G x$. \begin{proof} If $P=G$, the trivial $G$-torsor, we set $\prescript{G}{}x = x$. To a morphism $(g,\alpha)\colon (G,(x,\eta)) \to (G,(x',\eta'))$ (here $g\in G$ is identified with an automorphism of the trivial torsor) we assign the morphism $\prescript{G}{}x \to \prescript{G}{}x'$ given by $\alpha\circ \eta(g) = \eta'(g)\circ \alpha$. In general, we regard $P\in \stTors(G)\vert_U$ and $x\in \cF_U$ as defined by descent data relative to an acyclic fibration $\epsilon \colon V_\bullet\to U$ covering $U$. The pullbacks $\epsilon^*x$ and $\epsilon^*P \cong G\vert_{V_0} $ to $V_0$ are glued over $V_1$ by isomorphisms \begin{equation*} \alpha \colon x_1 \to x_0\,, \qquad g \colon G \to G\,, \end{equation*} where $g\in G(V_1)$ is an isomorphism between trivial $G\vert_{V_1}$-torsors, satisfying the cocycle conditions $\alpha_{02} = \alpha_{01}\circ \alpha_{12}$ and $g_{01}g_{12} = g_{02}$ over $V_2$. That $(x,\eta)$ is an object of $\stOp(G,\cF)$ is expressed by the condition $\alpha \circ \eta_1(-) = \eta_0(-) \circ \alpha$ over $V_1$.
Pullbacks to $V_2$ along the face maps $d_0,d_1,d_2$ yield morphisms $(g_{ij},\alpha_{ij})\colon (G,(x_j,\eta_j)) \to (G,(x_i,\eta_i))$, $0\leq i < j \leq 2$, in $\stOp(G,\cF)_{V_2}$ such that $\alpha_{ij}\circ \eta_j(g_{ij}) = \eta_i(g_{ij})\circ \alpha_{ij}$, in addition to the other cocycle conditions. Then we have \begin{equation*} \alpha_{02} \circ \eta_2(g_{02}) = \alpha_{01}\circ \alpha_{12} \circ \eta_2(g_{01})\circ \eta_2(g_{12}) = (\alpha_{01}\circ \eta_1(g_{01})) \circ (\alpha_{12}\circ \eta_2(g_{12}))\,, \end{equation*} showing that the gluing data $\alpha \circ \eta_1(g) = \eta_0(g) \circ \alpha \colon x_1\to x_0$ satisfy the cocycle identity and therefore define an object $\prescript{P}{}x$ of $\cF_U$. \end{proof} The most important properties of the twisted objects are listed in the following lemma, implicit in \cite{Giraud}, where for any two objects $x,y\in \cF_U$ we denote by $\Isom_U(x,y)$ the sheaf of isomorphisms from $x$ to $y$. Note that $\Isom_U(x,y)$ is a right $\Aut_U(x)$-torsor. (In fact it is an $(\Aut_U(y),\Aut_U(x))$-bitorsor, but we shall not need this fact.) \begin{lemma}[\cite{Emsalem2017,MR2362847}] \label{lem:tors} Let $\cF$ be a stack and $G$ a sheaf of groups on $\mathsf{C}$. Let $\stOp(G,\cF)$ be the stack of objects with $G$-action. The twisting morphism $t$ has the following properties: \begin{enumerate} \item\label{item:1} If $P\in \stTors(G)_U$, and $x\in \cF_U$, then \begin{math} \Isom_U(x, P\wedge^G x) \cong P\wedge^G \Aut_U(x)\,; \end{math} \item\label{item:2} If $P=\Isom_U(y,x)$, then there is a canonical isomorphism $P\wedge^{\Aut_U(y)} y\congto x$, where the twisting arises from the stack $\stOp(\Aut_U(y),\cF\vert_U)$ over $\mathsf{C}/U$. \end{enumerate} \end{lemma} \begin{proof} Let $(y,\theta)$ be another object of $\stOp(G,\cF)$ over $U$.
Using the same notation as in Lemma~\ref{lem:twist} for descent data relative to $V_\bullet \to U$, a morphism $Q\wedge^G y \to P\wedge^G x$ of twisted objects over $U$ corresponds to a morphism $\lambda \colon \epsilon^*y \to \epsilon^*x$ over $V_0$ such that the diagram over $V_1$ \begin{equation} \label{eq:1} \begin{tikzcd} y_1 \ar[r,"\lambda_1"] \ar[d,"\beta \circ\theta_1 (h)"'] & x_1 \ar[d,"\alpha\circ\eta_1(g)"] \\ y_0 \ar[r,"\lambda_0"'] & x_0 \end{tikzcd} \end{equation} commutes. Here $\beta$ and $h$ represent the descent data and cocycle for $y$ and the $G$-torsor $Q$, respectively. In particular, if $y=x$ and $Q$ is the trivial torsor, we get the simpler relation \begin{equation*} \alpha \circ \eta_1(g) \circ \lambda_1 = \lambda_0\circ \alpha\,. \end{equation*} Rewriting it in the more suggestive way \begin{equation*} \eta_1(g) \circ \lambda_1 = \alpha^{-1}\circ \lambda_0\circ \alpha \end{equation*} shows that $\lambda$ defines a section of $P\wedge^G \Aut_U(x)$, proving the first point. If $P=\Isom_U(y,x)$, then $\lambda \colon \epsilon^*y \to \epsilon^*x$ provides a section of $P$ over $V_0$. As $\lambda$ does not necessarily descend to $U$, the two pullbacks $\lambda_0$ and $\lambda_1$ to $V_1$ are related by a diagram of the form \begin{equation*} \begin{tikzcd} y_1 \ar[r,"\lambda_1"] \ar[d,"\bar h"'] & x_1 \ar[dd,"\alpha"] \\ y_1 \ar[d,"\beta"'] \\ y_0 \ar[r,"\lambda_0"'] & x_0 \end{tikzcd} \end{equation*} for an appropriate $\bar h\in \Aut(y_1) = \Aut_U(y)(V_1)$. Comparing with~\eqref{eq:1} (taking $\theta=\id$) shows these data descend to an isomorphism $P\wedge^{\Aut_U(y)} y\congto x$, as wanted.
\end{proof} The stack $\cF$ is an abelian gerbe with band $A$ if and only if there is a \emph{canonical morphism} $\cF \to \stOp(A;\cF)$, because, in that case, the correspondence \begin{equation*} x\in \cF_U \rightsquigarrow \eta_x\colon A\vert_U\congto \Aut_U(x) \end{equation*} is functorial: the diagram \begin{equation*} \begin{tikzcd} & A\vert_U \ar[dl,"\eta_x"'] \ar[dr,"\eta_y"] \\ \Aut_U(x) \ar[rr,"\alpha_*"'] && \Aut_U(y) \end{tikzcd} \end{equation*} commutes whenever $\alpha\colon x\to y$ \cite[Def.\ 2.9]{BreenAst}. (Note that the above diagram embodies a morphism in $\stOp(A;\cF)$.) Therefore there is a \emph{canonical twisting action} $(P,x) \rightsquigarrow {}^Px$ resulting from the composite morphism \begin{equation*} \stTors(A) \times_\mathsf{C} \cF \longrightarrow \stTors(A)\times_\mathsf{C} \stOp(A; \cF) \longrightarrow \cF\,. \end{equation*} \subsubsection{The pushforward} \label{sec:pushforward} Let us return to the situation of the site morphism $\pi\colon \mathsf{D}\to \mathsf{C}$. Recall that $u\colon \mathsf{C}\to \mathsf{D}$ is the underlying functor of $\pi$. Let $A$ be an abelian sheaf and $\cG$ be an $A$-gerbe over $\mathsf{D}$. It is convenient to identify the band with the automorphism sheaves. In this way, Lemma~\ref{lem:tors}, statement~(\ref{item:1}), simply becomes $\Isom_U(x,P\wedge^A x)\cong P$. The action $\pi_0(\pi_*\cG) \times \mathrm{R}^1\pi_*A \to \pi_0(\pi_*\cG)$ is induced by the twisting action of $\stTors(A)$ on $\cG$ on $\mathsf{D}$: if $x\in \cG_{u(U)}$ represents a section of $\pi_0(\pi_*\cG)$, and $P\in \stTors(A)_{u(U)}$ represents a class of $\mathrm{R}^1\pi_*A\,(U)$, we let the result of the action be the connected component of the object $P\wedge^A x\in \cG_{u(U)} \cong \pi_*(\cG)_U$. This action is free, because if $P\wedge^Ax \cong x$, then by Lemma~\ref{lem:tors}~(\ref{item:1}) $\Isom_U(x,P\wedge^A x) \cong P$ has a global section, hence $P\cong A\vert_{u(U)}$ is trivial.
The action is also transitive. Indeed, if the objects $x,y\in \cG_{u(U)} \cong \pi_*(\cG)_U$ represent two sections of $\pi_0(\pi_*\cG)$, by Lemma~\ref{lem:tors}~(\ref{item:2}) we have $y \cong P \wedge^A x$, where $P=\Isom_{u(U)}(x,y)$. Therefore the section of $\pi_0(\pi_*\cG)$ over $U$ defined by $y$ is obtained from that defined by $x$ via the action of the section of $\mathrm{R}^1\pi_*A\, (U)$ determined by $P$, as wanted. Thus, $\pi_0(\pi_*\cG)$ is a pseudo-torsor. Let $U$ be an object of $\mathsf{C}$, and denote by $U'=u(U)$ the corresponding object of $\mathsf{D}$. As a gerbe, $\cG$ is locally nonempty, hence there is a local epimorphism $V'\to U'$ covering $U'$ with an object $x\in \cG_{V'}$. The object $x$ should be seen as a trivialization of the restriction of $\cG$ along $V'\to U'$; the restriction $\cG\vert_{U'}$ has a characteristic class in $\mathrm{H}^2(U',A\vert_{U'})$, whose image is a section of $\mathrm{R}^2\pi_*A \,(U)$. (In effect, this class can be calculated by computing the $2$-cocycle determined by $x$ via a hypercovering $V'_\bullet\to U'$ in the usual way \cite{BreenAst}.) If this class is zero, then $\cG_{U'}$ has a global object (that is, $x$ descends to an object over $U'$), which then provides a section of $\pi_0(\pi_*\cG)$ over $U\in \mathsf{C}$. Clearly, if the image of $[\cG]$ in $\mathrm{H}^0(\mathsf{C},\mathrm{R}^2\pi_*A)$ vanishes, this argument, applied after refining $U$, shows that $\pi_0(\pi_*\cG)$ is locally nonempty. On the other hand, if $\pi_0(\pi_*\cG)$ is locally nonempty, for every object $U$ we can find a local epimorphism $V\to U$ such that $\cG_{u(V)}$ has an object, and therefore the restriction of $[\cG]$ to $\mathrm{H}^2(u(V),A\vert_{u(V)})$ vanishes. Now, since $\mathrm{R}^2\pi_*A$ is the sheaf associated with the presheaf $U \rightsquigarrow \mathrm{H}^2(u(U),A\vert_{u(U)})$, the image of $[\cG]$ in $\mathrm{H}^0(\mathsf{C},\mathrm{R}^2\pi_*A)$ vanishes, i.e.\ $\cG$ is horizontal. This finishes the proof of Lemma~\ref{lem:exercise}. \subsection{Maximal subgerbes and pullbacks} \label{sec:maxim-subg-pullb} The following extra facts (``tautologies'' in \cite[V~№ 3.1.8]{Giraud}) are going to be helpful.
Recall that we have a site morphism $\pi\colon \mathsf{D}\to \mathsf{C}$. Following Giraud, let us say that a gerbe $\cG$ over $\mathsf{D}$ \emph{comes from} a gerbe on $\mathsf{C}$, if there is a gerbe $\cF$ on $\mathsf{C}$ and a morphism of gerbes $m\colon \pi^*\cF\to \cG$ over $\mathsf{D}$. \begin{lemma} \label{lem:tautology} The gerbe $\cG$ on $\mathsf{D}$ comes from a gerbe on $\mathsf{C}$ if and only if the sheaf $\pi_0(\pi_*\cG)$ admits a section. \end{lemma} \begin{proof} If $\cG$ comes from a gerbe on $\mathsf{C}$, let $\cF$ be such a gerbe and $m\colon \pi^*\cF\to \cG$ the corresponding morphism. By adjunction (cf.\ the universal property that defines the operation $\pi^*$, sect.~\ref{sec:push-forw-pullb}) we obtain a morphism $n\colon \cF \to \pi_*\cG$. Since $\cF$ is a gerbe, for the sheaf of connected components we get $\pi_0(n)\colon \mathit{pt}\to \pi_0(\pi_*\cG)$, hence a section of $\pi_0(\pi_*\cG)$. Conversely, if $\pi_0(\pi_*\cG)$ has a section, say $\xi\colon \mathit{pt} \to \pi_0(\pi_*\cG)$, define $\cF = \mathit{pt}\times_{\pi_0(\pi_*\cG)} \pi_*(\cG)$, which is a gerbe on $\mathsf{C}$, and $n\colon \cF\to \pi_*(\cG)$ as the second projection. The latter is by construction fully faithful, hence, again by adjunction, we have the morphism $m\colon \pi^*(\cF) \to \cG$, and so $\cG$ comes from a gerbe on $\mathsf{C}$. \end{proof} \begin{remark} In the previous proof we have used the well known fact (but, again, ultimately due to Giraud \cite[III Prop.\ 2.1.5.3]{Giraud}) that for any stack $\cS$, the projection $\cS\to \pi_0(\cS)$ makes it a gerbe over the sheaf of its connected components.
For a section $\xi\in \pi_0(\cS)$, the pullback $\xi^*(\cS)$ is the corresponding \emph{maximal sub-gerbe.} \end{remark} \begin{remark} Using Lemma~\ref{lem:exercise}, we see that Lemma~\ref{lem:tautology} is equivalent to the exactness at $\mathrm{H}^2(\mathsf{C},\pi_*A)$ of the low term sequence \begin{equation*} \cdots \longrightarrow \mathrm{H}^0(\mathsf{C},\mathrm{R}^1\pi_*A) \longrightarrow \mathrm{H}^2(\mathsf{C},\pi_*A) \longrightarrow \mathrm{H}^2(\mathsf{D},A)' \longrightarrow \cdots \end{equation*} arising from the Leray spectral sequence we recalled above, where we set $\mathrm{H}^2(\mathsf{D},A)' = E^2_1$. \end{remark} \subsection{Interpretation of the maps} \label{sec:interpretation-maps} \subsubsection{The map $E^{1,0}_2 \to E^1$}\label{sec:map-e1-0_2} This is the obvious pull-back map $\mathrm{H}^1(S, \pi_*A) \to \mathrm{H}^1(X,\pi^*\pi_*A) \to \mathrm{H}^1(X, A)$ using the natural adjunction $\pi^*\pi_* A \to A$ of sheaves on $X$. This is just the composite functor \[ \stTors_S(\pi_*(A)) \overset{\pi^*}{\longrightarrow} \stTors_X(\pi^*\pi_*(A)) \longrightarrow \stTors_X(A) \] for the corresponding torsors. \subsubsection{The map $E^1\to E^{0,1}_2$} \label{sec:map-e0-1_2} Using that $\mathrm{R}^1\pi_*A$ is the sheaf associated to $U \rightsquigarrow \mathrm{H}^1(\pi^{-1}(U),A)$, the map is obtained by sending the class of an object $P$ of $\stTors_X(A)$ to that of $\pi_*P$. \subsubsection{The map $E^{0,1}_2 \to E^{2,0}_2$} This is the transgression map relative to the standard sequence arising from an injective resolution $0\to A \to I^\bullet$ on $X$. Then one has the standard exact sequence \begin{equation*} 0 \longrightarrow \pi_*A \longrightarrow \pi_*I^0 \longrightarrow Z^1(\pi_*I^\bullet) \longrightarrow \mathrm{R}^1\pi_*A \longrightarrow 0\,.
\end{equation*} Viewing it as the splicing of two short exact sequences \begin{equation*} 0 \longrightarrow \pi_*A \longrightarrow \pi_*I^0 \longrightarrow C \longrightarrow 0\,,\qquad 0\longrightarrow C \longrightarrow Z^1(\pi_*I^\bullet) \longrightarrow \mathrm{R}^1\pi_*A \longrightarrow 0\,, \end{equation*} the transgression map is the composite \begin{equation*} \mathrm{H}^0(S,\mathrm{R}^1\pi_*A) \longrightarrow \mathrm{H}^1(S,C) \longrightarrow \mathrm{H}^2(S,\pi_*A)\,. \end{equation*} The latter is obtained by taking the global objects of the composite 2-functor: \begin{equation*} \mathrm{R}^1\pi_*A \longrightarrow \stTors_S(C) \longrightarrow \cGerb_S(\pi_*A)\,. \end{equation*} The map on the right is the well known classifying map of the extension of $\mathrm{R}^1\pi_*A$ by $C$ above \cite[V~№ 3.2]{Giraud} (see also \cite{ER}). \subsubsection{The map $E^{2,0}_2 \to E^{2}_1$} Analogously to \ref{sec:map-e1-0_2}, we have the composite 2-functor \begin{equation*} \cGerb_S(\pi_*A) \longrightarrow \cGerb_X(\pi^*\pi_*A) \longrightarrow \cGerb_X(A)\,, \end{equation*} where the arrow on the right is the ``change of band'' functor along $\pi^*\pi_* A \to A$. Taking isomorphism classes in the global fibers gives the composite $\mathrm{H}^2(S,\pi_*A) \to \mathrm{H}^2(X, \pi^*\pi_*A) \to \mathrm{H}^2(X,A)$. Now, thanks to Lemma~\ref{lem:tautology}, the image actually lies in $E^2_1$. \subsubsection{The map $\theta\colon E^2_1 \to E^{1,1}_2$} Let $\cG$ be a stack on $X$ and consider the correspondence $\cG \leadsto \pi_0(\pi_*\cG)$. This correspondence is easily seen to define a functor from the homotopy category of stacks on $X$ (in which naturally isomorphic stack morphisms are identified) to the category of sheaves on $S$.
Therefore, by Lemma~\ref{lem:exercise}, and the subsequent sections, it reduces to a functor $\mathrm{H}o (\gerb_X(A)') \to \tors_S(\mathrm{R}^1\pi_*(A))$, where $\gerb_X(A)'$ denotes the subcategory of those $A$-gerbes whose fiber categories over opens of the form $\pi^{-1}(U)$, for all sufficiently small open neighborhoods $U\subset S$ of each point of $S$, are nonempty. By taking classes, we get the map. \subsection{Application: Brauer groups}\label{Brauer} The map $\theta\colon E^2_1 \to E^{1,1}_2$ above plays a very important role in many arithmetical applications \cite{Kai, Lichtenbaum, Skorobogatov}. To recall this, let $F$ be a perfect field and consider the \'etale sheaf $\mathbb{G}_m$ on $T = \Spec F$. Fix an algebraic closure $\bar{F}$ of $F$ and let $\bar{T} =\Spec\bar{F}$; write $\Gamma$ for the Galois group of $\bar{F}$ over $F$. Let $g\colon Y\to T$ be a smooth proper map and $\bar{Y} = Y\times_T \bar{T}$. The group $\mathrm{H}^2_{\mathit{\acute{e}t}}(Y, \mathbb{G}_m)$ is the Brauer group $\mathrm{Br}(Y)$. The group $E^2_1$ is the relative Brauer group $\mathrm{Br}(\bar{Y}/{Y})$, namely, the kernel of the map $\mathrm{Br}(Y) \to \mathrm{Br}(\bar{Y})$. The map $E^2_1 \to E^{1,1}_2$ then becomes the map \[ \theta \colon \mathrm{Br}(\bar{Y}/Y) \longrightarrow \mathrm{H}^1_{\mathit{\acute{e}t}}(T, \mathrm{R}^1g_* \mathbb{G}_m)\,, \] arising in several contexts. For instance, for any elliptic curve $E$ over $F= \mathbb{Q}$, the map $\theta$ gives the well known isomorphism \[ \mathrm{Br}(\bar{E}/E) \simeq \mathrm{H}^1(\mathbb{Q}, E(\bar{\mathbb{Q}})) = \mathrm{H}^1_{\mathit{\acute{e}t}}(\Spec\mathbb{Q}, E)\,. \] The following explicit description of $\theta$ seems to be missing in the literature: given an element $\alpha$ of $\mathrm{Br}(\bar{Y}/Y)$, pick a $\mathbb{G}_m$-gerbe $G$ on $Y$ representing $\alpha$. By definition, the base change $\bar{G}$ on $\bar{Y}$ is trivial. Fix an equivalence $f\colon \bar{G} \simeq \tors_{\bar{Y}}(\mathbb{G}_m)$.
Then, given any $\sigma \in \Gamma$, the gerbe $\sigma^*\bar{G}$ is equivalent to $\bar{G}$, as $G$ comes from $Y$. Write $f_{\sigma}$ for the resulting self-equivalence of $\tors_{\bar{Y}}(\mathbb{G}_m)$: \[ f_{\sigma}\colon \tors_{\bar{Y}}(\mathbb{G}_m) \xrightarrow{\;f^{-1}\;} \bar{G} \simeq \sigma^*\bar{G} \xrightarrow{\;\sigma^*f\;} \sigma^* \tors_{\bar{Y}}(\mathbb{G}_m) = \tors_{\bar{Y}}(\mathbb{G}_m)\,, \] where $f^{-1}$ denotes a chosen quasi-inverse of $f$. Any self-equivalence \cite[\S5.1]{Milne} \[ \tors_{\bar{Y}}(\mathbb{G}_m) \simeq \tors_{\bar{Y}}(\mathbb{G}_m) \] is a translation by a fixed $\mathbb{G}_m$-torsor $L$, namely the self-equivalence is of the form $(-) \mapsto (-) + L$. Therefore, if $L_{\sigma}$ is the $\mathbb{G}_m$-torsor on $\bar{Y}$ corresponding to $f_{\sigma}$, then the map $\sigma \mapsto L_{\sigma}$ represents the element $\theta(\alpha)$ of $\mathrm{H}^1_{\mathit{\acute{e}t}}(T, \mathrm{R}^1g_*\mathbb{G}_m)$. \section{Gerbes and categorical intersection of divisors} \label{cat-int-div} In this section, we prove Theorems \ref{prop1} and \ref{prop2}. Let $Y$ be a smooth variety over $F$. Let $\mathcal{K}_i$ denote the Zariski sheaf associated with the presheaf $U \mapsto K_i(U)$. \subsection{Heisenberg groups} For any pair of abelian sheaves $A$ and $B$ on $Y$, we have constructed~\cite{ER} a Heisenberg sheaf $H_{A,B}$ (of nilpotent groups) which fits into an exact sequence \begin{equation} \label{eq:heisenberg} 0 \longrightarrow A\otimes B \longrightarrow H_{A,B} \longrightarrow A\times B \longrightarrow 0\,, \end{equation} providing a categorification of the cup-product \begin{equation} \label{cupproduct} \mathrm{H}^1(Y,A) \times \mathrm{H}^1(Y,B) \longrightarrow \mathrm{H}^2(Y, A\otimes B) \end{equation} in the following manner. Given an $A$-torsor $P$ and a $B$-torsor $Q$, the $A\times B$-torsor $P\times Q$ can be lifted locally to an $H_{A,B}$-torsor in several ways. These local lifts assemble into an $A\otimes B$-gerbe $G_{P,Q}$.
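To make the construction concrete, here is a Čech-level sketch (the cover and cocycle notation is ours, not used elsewhere in the text). If $(a_{ij})$ and $(b_{ij})$ are $1$-cocycles for $P$ and $Q$ relative to a cover $(U_i)$, the obvious local lifts $h_{ij} = (a_{ij},b_{ij},0)$ to $H_{A,B}$ fail to form a cocycle by, using the Heisenberg group law recalled below,
\begin{equation*}
h_{ij}\, h_{jk}\, h_{ik}^{-1} = \big(1,\,1,\; a_{ij}\otimes b_{jk}\big)\,,
\end{equation*}
so the class of the gerbe $G_{P,Q}$ of local liftings is represented by the $2$-cocycle $a_{ij}\otimes b_{jk}$, the classical Čech representative of the cup-product $[P]\cup [Q]$.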
Much like the bulk of section~\ref{Leray}, we can formulate the result we need in much greater generality. As in \cite[\S 3]{ER}, we assume $A$ and $B$ are abelian objects of a topos $\mathsf{T}$. (In the applications, $\mathsf{T}$ is the topos of sheaves on the Zariski, or other relevant, site of the scheme.) Recall from {loc.\,cit.}\xspace that the Heisenberg group $H_{A,B}$ is defined by the group law: \begin{equation*} (a,b,t)\, (a',b',t') = (aa',bb',t + t' + a\otimes b')\,, \end{equation*} where $a,a'$ are sections of $A$, $b,b'$ of $B$, and $t,t'$ of $A\otimes B$. The extension~\eqref{eq:heisenberg} is set-theoretically split, i.e.\xspace there is a section of the underlying map of sheaves of sets. The map \begin{equation} \label{cocycle} f \colon (A\times B)\times (A\times B) \longrightarrow A\otimes B, \quad f(a,b,a',b') = a\otimes b', \end{equation} is a cocycle representing the class of the extension in $\mathrm{H}^2(\mathrm{B}_{A\times B}, A\otimes B) \cong \mathbf{H}^2(K(A \times B,1), A\otimes B)$, where on the left we have the cohomology of the classifying topos \cite{Giraud}, and on the right that of the corresponding Eilenberg-Mac~Lane simplicial object of $\mathsf{T}$. In fact these cohomologies are in turn isomorphic to $[ K(A \times B,1) , K(A \otimes B,2) ]$, the hom-set in the homotopy category \cite{MR516914,MR0491680}, and the cocycle $f$ coincides with the only non-trivial component of the characteristic map \cite[Prop.\ 3.4]{ER}. \begin{proposition} \label{prop:biadditive} The functor \[ c_{A,B} \colon \stTors (A) \times \stTors (B) \longrightarrow \cGerb(A\otimes B)\,,\quad P\times Q \longmapsto G_{P,Q} \] is bi-additive. On $\pi_0$, it induces the cup-product map \eqref{cupproduct}, upon choosing $\mathsf{T} = Y_{\mathrm{Zar}}\sptilde$. \end{proposition} \begin{remark}[On bi-additivity] \label{rem:biadditivity} Note that for any (abelian) band $L$, $\cGerb (L)$ is really a 2-stack.
Hence the notion of bi-additivity should be supplemented with appropriate 2-coherence data from higher algebra. This is both outside the scope of this note and inconsequential in the case at hand. Alternatively, we can mod out by the 2-morphisms and consider $\cGerb (L)$ as a Picard 1-stack of $\mathsf{T}$. Thus, bi-additivity consists of the data of functorial equivalences \begin{align*} c_{A,B} (P_1 + P_2, Q) &\congto c_{A,B} (P_1, Q) +c_{A,B}(P_2, Q) \\ c_{A,B} (P, Q_1 + Q_2) &\congto c_{A,B} (P, Q_1) +c_{A,B}(P, Q_2) \end{align*} subject to the condition that decomposing $c_{A,B} (P_1 + P_2, Q_1 + Q_2)$ according to the two possible ways determined by the above morphisms gives rise to a commutative (or commutative up to coherent 2-isomorphism) diagram. This would be exactly the kind of diagram familiar from the theory of biextensions \cite{MR0354656-VII,MR823233} (see also \cite{MR1114212}). \end{remark} \begin{proof}[Proof of Proposition~\ref{prop:biadditive}] Bi-additivity is essentially already implied by the fact that the cocycle representing the class of the extension is the tensor product, which is bilinear. It is best to look at this in the universal case, namely over $K(A \times B, 1)$ (the rest follows by pullback), where the bilinearity of the tensor product has the following interpretation. By functoriality, from the group operation ($A$ is an abelian object) $+_A\colon A \times A \to A$ we get the map $+_A\colon K(A \times A \times B, 1) \to K(A \times B, 1)$, induced by $+_A\times \mathrm{id}_B$ and corresponding to the Baer sum of torsors.
Its composition with the characteristic map $c \colon K(A\times B,1)\to K(A\otimes B,2)$ equals in the homotopy category the sum $c_1 + c_2$, where $c_i$, $i=1,2$, is the composition \begin{equation*} K(A \times A \times B,1) \overset{p_i}{\longrightarrow} K(A \times B,1) \overset{c}{\longrightarrow} K(A \otimes B, 2)\,; \end{equation*} the first map is induced by the projection onto the first (resp.\ second) factor, as follows from~\eqref{cocycle} and the form of $c$ computed in \cite[\S 3.4]{ER}. In turn, the map $c \circ (+_A)$ classifies the extension $(+_A)^* H_{A,B}$, whereas $c_1+c_2$ classifies the extension $p_1^*H_{A,B} + p_2^*H_{A,B}$ (the sum being the Baer sum in this case), so that we obtain the isomorphism \begin{equation} \label{iso-extensions} (+_A)^* H_{A,B} \cong p_1^*H_{A,B} + p_2^*H_{A,B} \end{equation} of central extensions of $A \times A \times B$ by $A\otimes B$. Similarly for the ``variable'' $B$. Furthermore, the commutativity of the diagram alluded to in Remark~\ref{rem:biadditivity} is immediately implied by further pulling back the isomorphism~\eqref{iso-extensions} along $+_B\colon B\times B\to B$, pulling back its counterpart for $B$ along $+_A$, and again using~\eqref{cocycle}. \end{proof} \subsection{Proof of Theorem \ref{prop1}} Consider the map $\mu \colon \mathbb{G}_m \times \mathbb{G}_m \to \mathcal{K}_2$ obtained using the identification $\mathbb{G}_m\simeq \mathcal{K}_1$ and the multiplication $\mathcal{K}_1 \times \mathcal{K}_1 \to \mathcal{K}_2$. The functor $\mathcal{C}up$, defined as the composite \[ \tors_Y(\mathbb{G}_m) \times \tors_Y(\mathbb{G}_m) \xrightarrow{c_{\mathbb{G}_m, \mathbb{G}_m}} \gerb_Y(\mathbb{G}_m \otimes \mathbb{G}_m) \xrightarrow{\mu_*} \gerb_Y(\mathcal{K}_2)\,, \] is the required bi-additive functor.
\qed The functor $\mathcal{C}up$ is so named because it categorifies the cup-product, which can be identified with the intersection product: \[ \mathrm{H}^1(Y, \mathbb{G}_m) \times \mathrm{H}^1(Y, \mathbb{G}_m) \longrightarrow \mathrm{H}^2(Y,\mathcal{K}_2) \simeq \mathrm{CH}^2(Y) \longleftarrow \mathrm{CH}^1(Y) \times \mathrm{CH}^1(Y)\,. \] \begin{remark} \label{rem:biext} The bi-additivity property of the map $c_{A,B}$ of Proposition~\ref{prop:biadditive} has the following conjectural formal interpretation. The maps $+_A$ and $+_B$, together with the commutative diagram in Remark~\ref{rem:biadditivity} and the proof of Proposition~\ref{prop:biadditive}, comprise a structure that can be described as the categorification of a biextension, namely a $\stTors(A\otimes B)$-torsor (hence an $A\otimes B$-gerbe) \begin{equation*} \cH \longrightarrow \stTors(A) \times \stTors(B) \end{equation*} equipped with partial addition laws $+_A$ (resp.\ $+_B$) giving it the structure of an extension of $\stTors (A)$ (resp.\ $\stTors (B)$) by $\stTors(A \otimes B)$. \end{remark} \subsection{Proof of Theorem \ref{prop2}} Our proof will use the results of \S \ref{Leray} on (\ref{lowterm}) for $\pi\colon X \to S$ with $A=\mathcal{K}_2$ on $X$. Proposition \ref{beilinson-lemma} shows that all $\mathcal{K}_2$-gerbes are horizontal (Definition \ref{horizontal}). The functor $\int_{\pi}$ is then defined as the composition \[ \cGerb_X(\mathcal{K}_2) \xrightarrow{\Theta} \stTors_S(\mathrm{R}^1\pi_*\mathcal{K}_2) \xrightarrow{\mathrm{Norm}} \stTors_S(\mathcal{K}_1)\,. \] Our first step is to show that $\cGerb_X(\mathcal{K}_2)'$ is all of $\cGerb_X(\mathcal{K}_2)$, in other words, that every $\mathcal{K}_2$-gerbe on $X$ is horizontal. This is proved by showing $\mathrm{R}^2\pi_*\mathcal{K}_2=0$, which provides the isomorphism \[ E^2_1 \congto \mathrm{H}^2(X, \mathcal{K}_2)\, . \] We start with the following result, implicit in \cite[A5.1 (iv)]{MR962493}, essentially due to Beilinson-Schechtman.
\begin{proposition}[Beilinson-Schechtman]\label{beilinson-lemma} The sheaf $\mathrm{R}^2\pi_*\mathcal{K}_2$ is zero. \end{proposition} This gives a map \begin{equation}\label{map-bson} \theta \colon \mathrm{H}^2(X, \mathcal{K}_2) \longrightarrow \mathrm{H}^1(S, \mathrm{R}^1\pi_*\mathcal{K}_2) \end{equation} using \[ \mathrm{H}^2(X, \mathcal{K}_2)\leftiso E^2_1 \longrightarrow \mathrm{H}^1(S, \mathrm{R}^1\pi_*\mathcal{K}_2)\,. \] \begin{proof} Let $N$ be the dimension of $S$, so that $X$ has dimension $N+1$. For any $s\in S$, we have to show that the stalk of $\mathrm{R}^2\pi_*\mathcal{K}_2$ at $s$ is zero. By definition, this is the direct limit \[ \varinjlim_{s\in U}\mathrm{R}^2\pi_*\mathcal{K}_2(U) = \varinjlim_{s\in U}\mathrm{H}^2(\pi^{-1}(U), \mathcal{K}_2) = \varinjlim_{s\in U}\mathrm{CH}^2(\pi^{-1}(U))\,, \] where the last equality comes from the Bloch-Quillen isomorphism (valid for any smooth variety $V$) \[ \mathrm{H}^2(V, \mathcal{K}_2) \congto \mathrm{CH}^2(V)\,. \] So, we have to show that for any $s \in S$, any open set $U$ containing $s$, and any codimension two cycle $Z$ in $\pi^{-1}(U) \subset X$, there exists an open subset $U' \subset U$ such that the class of $Z$ goes to zero under the map \[ \mathrm{CH}^2(\pi^{-1}(U)) \longrightarrow \mathrm{CH}^2(\pi^{-1}(U'))\,. \] This is clear when $s$ is the generic point $\Spec F(S)$ of $S$: in this case, we take $U'$ to be the complement of $\pi(\abs{Z})$ in $U$. Here we have written $\abs{Z}$ for the support of $Z$. The next case is when $s$ is a point of codimension $i>0$, corresponding to a codimension $i$ subvariety $V$ of $S$. Let us write $Y\subset X$ for $\pi^{-1}(V)$; then $Y$ is a closed subset of $X$ of codimension $i$. For any open $U \subset S$, the condition $s\in U$ means $U \cap V$ is non-empty. Let $U$ be such an open set.
There are two cases to consider: \begin{description} \item[Case 1] If $\abs{Z}$ is disjoint from $Y$, then we can proceed as before, as $\pi(\abs{Z})$ is disjoint from $V$, so we take $U'$ to be the complement of $\pi(\abs{Z})$ in $U$. Since $U' \cap V = U \cap V$, we see that $U'\cap V$ is non-empty. Since $Z$ lies in the kernel of the restriction map in the localization sequence for Chow groups \[ \mathrm{CH}^2(\pi^{-1}(U)) \to \mathrm{CH}^2(\pi^{-1}(U) - \abs{Z}) \to 0\,, \] it is also in the kernel of the composite map \[ \mathrm{CH}^2(\pi^{-1}(U)) \to \mathrm{CH}^2(\pi^{-1}(U) - \abs{Z}) \to \mathrm{CH}^2(\pi^{-1}(U'))\,. \] This finishes the proof in this case. \item[Case 2] If $\abs{Z}$ is not disjoint from $Y$, we can find a codimension two cycle $Z'$ in $\pi^{-1}(U)$ with $[Z] = [Z']\in \mathrm{CH}^2(\pi^{-1}(U))$ which intersects $Y$ transversally. The codimension of the cycle $Z'.Y$ is $i+2$, because its dimension (= maximum of the dimensions of the irreducible components) is $N+1 - i-2 = N-1-i$. Hence the dimension of the image $\pi(Z'.Y)$ is at most $N-1-i$, and so its support $\abs{\pi(Z'.Y)}$ is a proper closed subset of $V = \pi(Y)$. If $U''$ is the complement of $\abs{\pi(Z'.Y)}$ in $U$, then the intersection of $U''$ and $V$ is non-empty. By construction, the cycle $Z' \cap \pi^{-1}(U'')$ is disjoint from $Y$. This means that the image of $Z'$ (= image of $Z$) under the map \[ \mathrm{CH}^2(\pi^{-1}(U)) \longrightarrow \mathrm{CH}^2(\pi^{-1}(U'')) \] is a cycle disjoint from $Y$. By Case 1, we can shrink $U''$ further to $U'$ such that $Z'$ (and hence $Z$ also) is in the kernel of the map \[ \mathrm{CH}^2(\pi^{-1}(U'')) \longrightarrow \mathrm{CH}^2(\pi^{-1}(U'))\,, \] as required.\end{description} \end{proof} This gives the functor $\Theta$ appearing in the definition of \[ \int_{\pi} \colon \cGerb_X(\mathcal{K}_2) \xrightarrow{\Theta} \stTors_S(\mathrm{R}^1\pi_*\mathcal{K}_2) \xrightarrow{\mathrm{Norm}} \stTors_S(\mathcal{K}_1).
\] Our next step is the definition of the map $\mathrm{R}^1\pi_*\mathcal{K}_2 \longrightarrow \mathcal{O}_S^*$. \begin{remark} The same proof shows that if $f\colon Y \to T$ is a smooth proper map of relative dimension $n$ with $Y$ and $T$ smooth, then $\mathrm{R}^jf_*\mathcal{K}_j =0$ for all $j>n$. This says that the relative Chow sheaves $\mathsf{C}H^j(Y/T)$ vanish for all $j>n$. \end{remark} \subsection{The norm map $\mathrm{R}^1\pi_*\mathcal{K}_2 \longrightarrow \mathcal{O}_S^*$}\label{Norm} This well-known map \cite[3.4]{Rost}, \cite[pp.~262-264]{Gillet} arises from the covariant functoriality of Rost's cycle modules (Chow groups in our case) for proper maps. We provide the details for the convenience of the reader. Our description proceeds via the Gersten sequence (a flasque resolution of the Zariski sheaf $\mathcal{K}_2$ on $X$) \begin{equation}\label{gersten} 0\longrightarrow \mathcal{K}_2 \longrightarrow \eta_*\mathcal{K}_{2,\eta} \longrightarrow \bigoplus_{x \in X^{(1)}} i_*K_1(k(x)) \longrightarrow \bigoplus_{y\in X^{(2)}} i_*K_0(k(y)) \to 0\,; \end{equation} here $\eta\colon \Spec F(X) \to X$ is the generic point of $X$ and $X^{(i)}$ denotes the set of points of codimension $i$ of $X$. For any $U$ open in $S$, the norm map \begin{equation*} \mathrm{H}^1(\pi^{-1}(U), \mathcal{K}_2) \longrightarrow \mathcal O^*_S(U) \end{equation*} is obtained as follows. Since the first group is the homology in degree one of (\ref{gersten}), we proceed by constructing a map \[ \bigoplus_{x \in \pi^{-1}(U)^{(1)}} i_*K_1(k(x)) \to \mathcal O^*_S(U)\,. \] For each such $x \in \pi^{-1}(U)$ of codimension one, the induced map $\overline{\{x\}} \to \overline{\{\pi(x)\}}$ is either finite or not, and the component of our map at $x$ is defined to be zero in the latter case. In the former case, there is a norm map \[ k(x)^* \to k(\pi(x))^*\,; \] since $x$ has codimension one in $X$, its image $\pi(x)$ is the generic point of $S$ and hence the above norm map is a map \[ k(x)^* \to F(S)^*\,.
\] An element of $\mathrm{H}^1(\pi^{-1}(U), \mathcal{K}_2)$ arises from a finite collection of functions $f_x\in k(x)^*$ (for $x\in \pi^{-1}(U)$ of codimension one which is finite onto its image) which is in the kernel of the map \[ \bigoplus_{x \in \pi^{-1}(U)^{(1)}} i_*K_1(k(x)) \to \bigoplus_{y \in \pi^{-1}(U)^{(2)}} i_*K_0(k(y))\,. \] On each component, this is the ord or valuation map. One checks that this means that the (finite) product of the norms of the $f_x$ is an element of $F(S)^*$ with neither zeros nor poles on $U$ and hence defines an element of $\mathcal O_S^*(U)$. This gives the required functor \[ \stTors_S(\mathrm{R}^1\pi_*\mathcal{K}_2) \xrightarrow{\mathrm{Norm}} \stTors_S(\mathcal{K}_1)\,, \] completing the definition of the functor $\int_{\pi}$ of Theorem \ref{prop2}. \section{Comparison with Deligne's construction} \label{comp-deligne} Given line bundles $L$ and $M$ (viewed as $\mathbb{G}_m$-torsors) on $X$, consider the $\mathcal{K}_2$-gerbe $G_{L,M}$ on $X$. By Proposition \ref{beilinson-lemma}, the element $[G_{L,M}]$ of $\mathrm{H}^2(X, \mathcal{K}_2)$ actually lives in $E_1^2$ and hence $G_{L,M}$ is horizontal. By Lemma \ref{lem:exercise}, $\Theta(G_{L,M})$ is an $\mathrm{R}^1\pi_*\mathcal{K}_2$-torsor. By definition, $\int_{\pi}G_{L,M}$ is its pushforward along the norm map of \S \ref{Norm}, \[ \mathrm{Norm} \colon \mathrm{R}^1\pi_*\mathcal{K}_2 \longrightarrow \mathcal{O}_S^*\,, \] which gives a line bundle $(L,M)$ on $S$. In this section, we show that this gives Deligne's line bundle $\< {L,M}\>$. Since $(L,M)$ is bi-additive and its construction is functorial, this reduces to showing the identity in Theorem \ref{prop3}: \[ \< {\mathcal{O}(D), \mathcal{O}(E)}\> \cong ( {\mathcal{O}(D), \mathcal{O}(E)})\,, \] for any relative Cartier divisors $D$ and $E$ on $X$ with $D$ effective.
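The bi-additivity reduction just invoked can be spelled out as follows (our gloss, not written out at this point in the text; we assume, as one may after twisting, that the bundles involved are differences of effective relative Cartier divisors):

```latex
% Write L and M as differences of effective relative Cartier divisors:
%   L \cong \mathcal{O}(D_1) \otimes \mathcal{O}(D_2)^{-1}, \quad
%   M \cong \mathcal{O}(E_1) \otimes \mathcal{O}(E_2)^{-1}.
% Bi-additivity of both pairings then gives
\[
(L, M) \cong \bigotimes_{r,s \in \{1,2\}} \bigl(\mathcal{O}(D_r), \mathcal{O}(E_s)\bigr)^{(-1)^{r+s}},
\qquad
\< {L, M}\> \cong \bigotimes_{r,s \in \{1,2\}} \< {\mathcal{O}(D_r), \mathcal{O}(E_s)}\>^{(-1)^{r+s}},
\]
% so an isomorphism for the pairs (O(D_r), O(E_s)), with each D_r effective,
% yields one for the pair (L, M).
```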
\subsection{Comparison} To show that $\< {\mathcal{O}(D), \mathcal{O}(E)}\>$ is isomorphic to $( {\mathcal{O}(D), \mathcal{O}(E)})$, one just has to show that they are equal in $\mathrm{H}^1(S, \mathcal{O}^*)$. This amounts to showing that the diagram below is commutative: \begin{equation}\label{diagram} \begin{tikzcd} & \mathrm{H}^1(X, \mathcal{O}^*) \ar[dl,"\eta"'] \ar[dr,"\cup"] \\ \mathrm{H}^1(D, \mathcal{O}^*) \ar[rr,"\lambda"] \ar[d,"N_{D/S}"'] && \mathrm{H}^2(X, \mathcal{K}_2) \ar[d,"\theta"] \\ \mathrm{H}^1(S, \mathcal{O}^*) && \mathrm{H}^1(S, \mathrm{R}^1\pi_*\mathcal{K}_2) \ar[ll,"\mathrm{Norm}"'] \end{tikzcd} \end{equation} The map $\eta$ is the restriction of a line bundle to $D$; it sends $\mathcal{O}(E)$ to $\mathcal{O}(E)\vert_D$. The map $\cup$ sends $\mathcal{O}(E)$ to its cup-product with $\mathcal{O}(D)$. The map $\lambda$ is the boundary map in the localization sequence \[0 \to \mathcal{K}_{2,X} \longrightarrow j_*\mathcal{K}_{2,U} \longrightarrow i_*\mathcal{K}_{1,D} \to 0 \] for $X$, $U= X - D$, and $D$. The map $\theta$ is the map (\ref{map-bson}) \[ \mathrm{H}^2(X, \mathcal{K}_2) \longrightarrow \mathrm{H}^1(S, \mathrm{R}^1\pi_*\mathcal{K}_2)\,. \] The commutativity of (\ref{diagram}) is an implicit consequence of the axiomatics of Rost \cite{Rost}, but we provide a direct proof. \subsubsection{The top triangle of (\ref{diagram})} We first prove the commutativity of the top triangle of (\ref{diagram}). Let $\{U_i\}$ be a Zariski open cover of $X$ such that $D$ and $E$ are principal divisors on $U_i$. Let $\{f_i\}$ be defining equations for $D$ and $\{g_i\}$ be defining equations for $E$. Then, \[ \{a_{ij}:= \frac{f_i}{f_j} \in \mathcal O^*(U_i \cap U_j)\}, \quad \{b_{ij}:= \frac{g_i}{g_j} \in \mathcal O^*(U_i \cap U_j)\} \] are cocycle representatives for $\mathcal O(D)$ and $\mathcal O(E)$.
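As a minimal illustration of these cocycles (our example, not needed for the argument): take $X = \mathbb{P}^1_S$ with affine coordinate $x$, let $D$ be the zero section and $E$ the infinity section, and use the standard two-chart cover.

```latex
\[
U_0 = \{x \neq \infty\}, \quad U_1 = \{x \neq 0\}, \qquad
f_0 = x,\ f_1 = 1, \qquad g_0 = 1,\ g_1 = x^{-1},
\]
\[
a_{01} = \frac{f_0}{f_1} = x, \qquad
b_{01} = \frac{g_0}{g_1} = x, \qquad
a_{01},\, b_{01} \in \mathcal{O}^*(U_0 \cap U_1),
\]
% recovering the familiar fact that both O(D) and O(E) have transition
% function x on U_0 \cap U_1, i.e. both are isomorphic to O(1).
```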
By the explicit description \cite[(1-18)]{MR2362847} of the cup-product map in \v{C}ech cohomology, the map $\cup$ sends $\{b_{ij}\}$ to the $2$-cocycle \begin{equation} \label{cup-product} \{(a_{ij}, b_{jk})\}\in K_2(U_i \cap U_j \cap U_k)\,. \end{equation} Given a cocycle $s_{ij}\in \mathcal O^*(U_i \cap U_j \cap D)$ relative to the cover $\{U_i \cap D\}$ of $D$, one computes its image under $\lambda$ as follows. Pick $\tilde{s}_{ij} \in K_2(U_i \cap U_j \cap U)$ whose tame symbol along $D$ is $s_{ij}$; then check that its \v{C}ech boundary $\partial( \tilde{s}_{ij})$ (a $2$-cochain with values in $\mathcal{K}_{2,U}$) is zero when viewed as a cochain with values in $i_*\mathcal{K}_{1,D}$. This means that $\partial (\tilde{s}_{ij})$ is a $2$-cocycle with values in $\mathcal{K}_{2,X}$; this is defined to be the image of $s_{ij}$ under $\lambda$. Let us apply this to compute the image of $\mathcal O(E)\vert_D$ under $\lambda$. Let $\bar{b}_{ij}$ be the image of $b_{ij}$ under the map \[ \mathcal O^*(U_i \cap U_j) \to \mathcal O^*(U_i \cap U_j \cap D)\,. \] The cocycle $\{\bar{b}_{ij}\}$ represents $\mathcal O(E)\vert_D$. To compute its image under $\lambda$, consider the element (symbol) \[ t_{ij} = (f_i,b_{ij}) \in K_2(U \cap U_i \cap U_j). \] We know that $b_{ij}$ is a unit on $U_i \cap U_j$ and so defines an element of $K_1(U_i \cap U_j)$; we know $f_i$ is the defining equation of $D$ on $U_i$ and so it is a unit on $U \cap U_i$ and thus $f_i$ defines an element of $K_1(U \cap U_i)$. So $t_{ij}$ is a well-defined element of $K_2(U \cap U_i \cap U_j)$. If $v$ denotes the valuation \[ F(X)^* \longrightarrow \mathbb{Z} \] defined by the divisor $D$, the tame symbol map is the map \[ K_2(U) \to K_1(D), \quad (a,b) \mapsto (-1)^{v(a)v(b)}.~ \overline{\bigg(\frac{a^{v(b)}}{b^{v(a)}}\bigg)}\,.
\] Since $v(f_i) =1$ and $v(b_{ij}) =0$, we see that $t_{ij}$ maps to the element \[ (-1)^{1 \times 0}.~\overline{\bigg(\frac{f_i^0}{b_{ij}^1}\bigg)} = \bar{b}_{ij}^{-1}\,. \] So the cochain $\{t_{ij}\}$ lifts the inverse of the cocycle $\{\bar{b}_{ij}\}$. Its \v{C}ech boundary (which represents the image under $\lambda$ of the inverse of $\{\bar{b}_{ij}\}$) \[ t_{ij} - t_{ik} + t_{jk} = (f_i,b_{ij}) - (f_i,b_{ik}) + (f_j,b_{jk}) \] is a $2$-cocycle with values in $\mathcal{K}_2$. Since \[ \biggl\{b_{ij} = \frac{g_i}{g_j}\biggr\} \] is a cocycle, the relation \[ b_{ik} = b_{ij}\, b_{jk} \] holds; in terms of symbols, $(f_i, b_{ik}) = (f_i, b_{ij}) + (f_i, b_{jk})$. Using this, the image of the inverse of $\{\bar{b}_{ij}\}$ under $\lambda$ is given by the negative of the element in (\ref{cup-product}): \[ (f_i, b_{ij}) - (f_i, b_{ij}) -(f_i, b_{jk}) +(f_j, b_{jk}) = (\frac{f_j}{f_i}, b_{jk}) = -(a_{ij}, b_{jk})\,. \] This says that $\lambda$ maps $\{\bar{b}_{ij}\}$ to the class of the cup product of $\mathcal O(D)$ and $\mathcal O(E)$ in $\mathrm{H}^2(X ,\mathcal{K}_2)$, thus completing the proof of the commutativity of the top triangle in (\ref{diagram}). \subsubsection{The bottom square of (\ref{diagram})} We begin with an explicit description of the map \[ \theta \colon \mathrm{H}^2(X, \mathcal{K}_2) \to \mathrm{H}^1(S, \mathrm{R}^1 \pi_*\mathcal{K}_2) \] in (\ref{map-bson}). Let $G$ be a $\mathcal{K}_2$-gerbe on $X$. As $\mathsf{C}H^2(X)= \mathrm{H}^2(X,\mathcal{K}_2)$ (Bloch-Quillen), we can pick a codimension-two cycle $c$ representing $[G]$ on $X$. As $G$ is horizontal, there exists an open cover $\{V_{\alpha}\}$ of $S$ such that $[G] =0\in \mathrm{H}^2(W_{\alpha}, \mathcal{K}_2)$, with $W_{\alpha} = \pi^{-1}(V_{\alpha})$; note $\{W_{\alpha}\}$ is an open cover of $X$.
In terms of the Gersten complex \[ 0\longrightarrow \mathcal{K}_2 \longrightarrow \eta_*\mathcal{K}_{2,\eta} \longrightarrow \bigoplus_{x \in W_{\alpha}^{(1)}} K_1(k(x)) \xrightarrow{\mathrm{ord}} \bigoplus_{y\in W_{\alpha}^{(2)}} K_0(k(y)) \to 0\,, \] which computes the cohomology of $\mathcal{K}_2$ on $W_{\alpha}$, the restriction $c_{\alpha}$ to $W_{\alpha}$ of the codimension-two cycle $c$ representing $[G]$ vanishes in $\mathrm{H}^2(W_{\alpha}, \mathcal{K}_2)$. Hence there exists an element $h_{\alpha} \in \bigoplus_{x \in W_{\alpha}^{(1)}}~K_1(k(x))$ such that $\textrm{ord}(h_{\alpha}) =c_{\alpha}$ in the sequence on $W_{\alpha}$. So $h_{\alpha}$ is a collection of functions on codimension-one subvarieties of $W_{\alpha}$ which together cut out the codimension-two cycle $c$. Since $\textrm{ord}(h_{\alpha}) = \textrm{ord}(h_{\alpha'})$ on $W_{\alpha} \cap W_{\alpha'}$, we see that the element $r_{\alpha,\alpha'}\coloneq h_{\alpha} -h_{\alpha'}$ on $W_{\alpha} \cap W_{\alpha'}$ defines an element of $\mathrm{H}^1(W_{\alpha} \cap W_{\alpha'}, \mathcal{K}_2)$. The cocycle condition is a formal consequence: \[ r_{\alpha,\alpha'} + r_{\alpha',\alpha''} + r_{\alpha'',\alpha} =0\,. \] Namely, $\{r_{\alpha,\alpha'}\}$ defines a Čech $1$-cocycle on $S$ with values in $\mathrm{R}^1 \pi_*\mathcal{K}_2$; this is the element $\theta(G)$. Taking norms down to $S$ gives a Čech $1$-cocycle \[ \tilde{r}_{\alpha,\alpha'} \coloneq \mathrm{Norm}\bigl( r_{\alpha,\alpha'} \bigr) \] with values in $\mathbb G_m$ on $S$. This completes the description of the maps in the bottom square of (\ref{diagram}). With all this in place, it is now easy to show that the bottom square of (\ref{diagram}) commutes. Recall the defining equations $g_i$ of $E$ relative to the open cover $\{U_i\}$ of $X$.
Restricting $\mathcal O(E)$ to $D$ and applying $\lambda$ gives the gerbe $G= G_{\mathcal O(D), \mathcal O(E)}$, by the commutativity of the top triangle of (\ref{diagram}). We use the above description to compute the image of the gerbe under $\theta$; we see that $h_{\alpha}$ can be taken to be the collection of functions $\bar{g}_{i, \alpha} = \bar{g}_i\lvert_{D_{\alpha, i}}$ on $D_{\alpha, i}= D \cap W_{\alpha}\cap U_i$ (where $\bar{g}_i$ denotes the restriction of $g_i$ to $D \cap U_i$), which cuts out the codimension-two cycle corresponding to the intersection of $D$ and $E$. The norm down to $S$ of the corresponding $r_{\alpha, \alpha'}$ gives a Čech cocycle with values in $\mathcal{K}_1$ on $S$; this is the image of $\mathcal O(E)$ along one part of the bottom square in (\ref{diagram}). On the other hand, consider the image of $\mathcal O(E)$ under the left vertical map of (\ref{diagram}). Let \[ \bar{g}_{i, \alpha} = \bar{g}_i\lvert_{D_{\alpha, i}}, \quad e_{\alpha}= \prod_{i}N_{D_{\alpha, i}/{S}} \bigl(\bar{g}_{i, \alpha}\bigr) \,. \] The image of $\mathcal O(E)$ under the map $N_{D/S}$ is given by the cocycle \[ c_{\alpha, \alpha'}\coloneq \frac{e_{\alpha}}{e_{\alpha'}}\in \mathcal{O}^*(V_{\alpha}\cap V_{\alpha'})\,. \] It is clear that $c_{\alpha, \alpha'}$ is equal to $\mathrm{Norm}(r_{\alpha, \alpha'})$. This shows the commutativity of the diagram (\ref{diagram}), since the image of $\mathcal{O}(E)$ along the vertical left map of (\ref{diagram}) gives Deligne's line bundle $\<{\mathcal{O}(D), \mathcal{O}(E)}\>$, and the image along the other side of (\ref{diagram}) is \[( \mathcal O(D), \mathcal O(E)) = \mathrm{Norm}\circ \Theta\circ\cup (\mathcal{O}(D), \mathcal{O}(E)) = \mathrm{Norm}\circ \Theta (G_{\mathcal{O}(D), \mathcal{O}(E)}) = \int_{\pi} G_{\mathcal{O}(D), \mathcal{O}(E)}. \] This proves Theorem \ref{prop3} and therefore Theorem \ref{Main}.
\section{Picard stacks and their endomorphisms} \label{sec:picard-stacks-endom} Here and elsewhere in this paper ``Picard stack,'' or ``Picard category,'' means ``strictly commutative Picard'' in the sense of Deligne \cite{10.1007/BFb0070724}. Namely, if we denote the monoidal operation of a Picard stack $\cP$ simply by $\mathnormal{+} \colon \cP \times \cP \to \cP$, then the symmetry condition given by the natural isomorphisms $\sigma_{x,y} \colon x + y \to y + x$ must satisfy the additional condition that $\sigma_{x,x} = \id_{x+x}$. Such stacks have the pleasant property that there exists a two-term complex of abelian sheaves $d\colon A^{-1} \to A^0$ such that $\cP$ is equivalent, as a Picard stack, to the one associated to the action groupoid formed from the complex. We denote this situation by \begin{equation*} \cP \simeq \bigl[ A^{-1} \stackrel{d}{\longrightarrow} A^0 \bigr]\sptilde\,. \end{equation*} A classical example arises from the well-known divisor exact sequence of Zariski sheaves \begin{equation}\label{divisorsequence} 0 \to \mathbb{G}_m \to \eta_*F(Y)^{\times} \to \bigoplus_{y\in Y^1} (i_y)_*\mathbb Z \to 0\,, \end{equation} where $Y$ is a smooth scheme over a field $F$ and the sum is over the set $Y^1$ of points of codimension one in $Y$. We get the equivalence: \begin{equation*} \stTors_Y(\mathbb{G}_m) \simeq \bigl[ \eta_*F(Y)^{\times} \to \bigoplus_{y\in Y^1} (i_y)_*\mathbb Z \bigr]\sptilde \end{equation*} In the sequel we shall denote by $\mathcal{CH}^1_Y$ the Picard stack on the right hand side of the above relation and by $\mathbf{CH}^1(Y)$ the Picard category of its global sections. Therefore we have $\tors_Y(\mathbb{G}_m) \simeq \mathbf{CH}^1(Y)$. Still from \cite{10.1007/BFb0070724}, we have that morphisms and natural transformations form a Picard stack $\cHom(\cP,\cQ)$, where the additive structure is defined pointwise: if $F,G$ are two objects, then $(F+G)(x) \coloneq F(x) +_{\cQ} G(x)$, for any object $x$ of $\cP$.
It is immediate to verify that this is symmetric, and in fact strictly commutative if $+_{\cQ}$ is. \subsection{Ring structures} \label{sec:ring-structures} We set $\stEnd (\cP) = \cHom (\cP,\cP)$. By the above considerations, it is a Picard stack, but the composition of morphisms gives it an additional unital monoidal structure, with respect to which $\stEnd (\cP)$ acquires the structure of a stack of ring groupoids (also known as categorical rings) of the sort described in \cite{rings-tac2015} (see also \cite{drinfeld2021notion} for a résumé). Note that the ``multiplication'' monoidal structure, being given by composition of functors, is strictly associative. \begin{remark} \label{rem:butterflies} If $\cP \simeq [A^{-1}\to A^0]\sptilde$, then by \cite{ButterfliesI} $\stEnd(\cP)$ is equivalent, as a stack of ring groupoids, to $\stCorr(A^\bullet,A^\bullet)$, the stack whose objects are butterfly diagrams: an object is given by an extension $0 \to A^{-1} \to E \to A^0\to 0$ such that its pullback to $A^{-1}$ via $d \colon A^{-1}\to A^0$ is trivial. Morphisms are morphisms of extensions. $\stCorr(A^\bullet,A^\bullet)$ is a stack of ring groupoids: the ``$+$'' is given by the Baer sum of extensions; the ``$\times$'' is given by the concatenation of butterflies described in \emph{loc.\,cit.} This structure is associative, but not strictly so. \end{remark} \subsection{Quotients and colimits} \label{sec:quotients-colimits} Let $F\colon \cP\to \cQ$ be a morphism of Picard stacks.
Its cokernel $\mathsf{C}oker F$ is the stack associated to the following construction, the details of which can be found in the literature (see, e.g.\xspace \cite{VITALE2002383,kv2000}).\footnote{This is valid for Picard categories and stacks that are not necessarily strictly commutative.} Let us assume $F \colon \mathbf{P}\to \mathbf{Q}$ is a morphism of Picard \emph{categories.} The cokernel $\mathsf{C}oker F$ is a Picard category defined as follows: \begin{enumerate} \item its class of objects is the same as that of $\mathbf{Q}$; \item a morphism $[f , a] \colon x \to y$ is an equivalence class of pairs $(f, a)$, where $f$ is a morphism $f \colon x \to y + F(a)$ in $\mathbf{Q}$, $a \in \mathrm{Obj}\,\mathbf{P}$, and two pairs $(f,a)$ and $(g,b)$ are equivalent if there exists an arrow $u \colon a\to b$ in $\mathbf{P}$ such that the diagram \begin{equation*} \begin{tikzcd}[sep=small,cramped] & x \ar[dl,"f"'] \ar[dr,"g"] \\ y + F(a) \ar[rr,"y+F(u)"'] && y + F(b) \end{tikzcd} \end{equation*} commutes. \item The monoidal structure is defined to be that of $\mathbf{Q}$ on objects and, if $[f,a]\colon x\to x'$ and $[g,b]\colon y \to y'$, by the class of the composite arrow \begin{equation*} \begin{split} x + y \longrightarrow (x' + F(a)) + (y' + F(b)) &\congto (x' + y') + (F(a) + F(b)) \\ &\congto (x' + y') + F(a + b)\,. \end{split} \end{equation*} \end{enumerate} There is a canonical functor $p_F\colon \mathbf{Q}\to \mathsf{C}oker F$ which is the identity on objects and sends an arrow $f\colon x\to y$ in $\mathbf{Q}$ to the class of the composite: \begin{equation*} \begin{tikzcd}[sep=small,cramped] x \ar[r,"f"] & y \ar[r,"\simeq"] & y + 0_{\mathbf{Q}} \ar[r,"\simeq"] & y + F (0_{\mathbf{P}}) \end{tikzcd}\,. \end{equation*} There is an isomorphism $\pi_F\colon p_F\circ F \Rightarrow 0\colon \mathbf{P} \to \mathsf{C}oker F$ given by \begin{math} \pi_{F,a} \colon \begin{tikzcd}[cramped,sep=small] F(a) \ar[r,"\simeq"] & 0_{\mathbf{Q}} + F(a) \end{tikzcd}\,.
\end{math} It follows that we have an abelian group isomorphism \begin{equation*} \pi_0(\mathsf{C}oker F) \cong \mathsf{C}oker \bigl(\pi_0(F) \colon \pi_0(\mathbf{P}) \to \pi_0(\mathbf{Q})\bigr)\,. \end{equation*} As mentioned, if $F\colon \cP\to \cQ$ is a morphism of Picard \emph{stacks,} we define $\mathsf{C}oker F$ to be the Picard stack associated to the pseudo-functor \begin{equation*} U \rightsquigarrow \mathsf{C}oker \bigl(F_U \colon \cP (U) \to \cQ(U) \bigr) \end{equation*} where $U$ is in the base site. In the following section, we apply this construction to the diagram \begin{equation*} \begin{tikzcd}[cramped] \cP_1 \ar[r,"F_1"] & \cQ \ar[r,leftarrow,"F_2"] & \cP_2 \end{tikzcd} \end{equation*} and the resulting morphism $F_1+F_2\colon \cP_1\times \cP_2\to \cQ$, defined on objects by $(F_1 + F_2) (x_1 , x_2) = F_1(x_1) + F_2(x_2)$. We shorten our notation and simply write $\mathsf{C}oker (F_1+F_2)$ as $\cQ/(\cP_1+\cP_2)$. By the above recollection we have \begin{equation*} \pi_0\bigl( \cQ/(\cP_1+\cP_2) \bigr) \cong \pi_0(\cQ) / (\pi_0(\cP_1) + \pi_0(\cP_2))\,. \end{equation*} \section{Categorification of correspondences} \label{sec:categ-corr} In this section, $S=\Spec~F$ and a curve $C$ is a smooth projective connected one-dimensional scheme over $S$. For simplicity, we assume (just in this section) that $F$ is algebraically closed. The main result (Theorem \ref{nuovo}) of this section is an application of Theorem \ref{Main} using \S \ref{sec:picard-stacks-endom} to the categorification of the well-known identities (\ref{ek}) and (\ref{caar}) about correspondences on the self-product of a curve. We will work with the category $\mathcal{V}$ whose objects are curves and whose morphisms are correspondences, $\mathrm{H}om_{\mathcal{V}}(D, C) = \mathsf{D}iv(C \times D)$, with composition defined by the product of correspondences. In the following we state our results for Picard categories, but there are parallel statements for the corresponding Picard stacks.
\subsection{Categorifying $\mathsf{C}H^1(Y)$} For any smooth scheme $Y$ over $S$, it follows from (\ref{divisorsequence}) that the Chow group $\mathsf{C}H^1(Y)$ is isomorphic to the Picard group $\Pic(Y) = \mathrm{H}^1(Y, \mathbb{G}_m)$ of $Y$; the Picard category $\mathbf{CH}^1(Y)$ is canonically equivalent to the Picard category of $\mathbb G_m$-torsors or line bundles on $Y$. The Picard category $\mathbf{CH}^1(Y)$ categorifies the Chow group $\mathsf{C}H^1(Y)$ of divisors: \begin{enumerate} \item $\mathsf{C}H^1(Y) = \Pic(Y) =\mathrm{H}^1(Y, \mathbb{G}_m) = \pi_0(\mathbf{CH}^1(Y))$; \item Any map $f \colon Y \to Y'$ of smooth schemes defines an additive functor of Picard categories \begin{equation*} f^*: \mathbf{CH}^1(Y') \to \mathbf{CH}^1(Y), \qquad L \mapsto f^*L\,; \end{equation*} the induced map on $\pi_0$ is the pullback of divisors $f^*: \mathsf{C}H^1(Y') \to \mathsf{C}H^1(Y)$. \end{enumerate} For a curve $C$, let $\Pic^0(C)$ be the kernel of the degree map $\Pic(C) \to \mathbb Z$. If $\mathbf{CH}^1(C)^0$ is the sub-Picard category of $\mathbf{CH}^1(C)$ consisting of line bundles of degree zero, then $\pi_0(\mathbf{CH}^1(C)^0) = \Pic^0(C)$. \subsection{Correspondences} We refer to \cite[Chapter 16]{MR1644323} for details. Let $C$ and $D$ be curves and let $\pi_C$ and $\pi_D$ be the two projections on $C\times D$. A correspondence $\alpha: D \vdash C$ from $D$ to $C$ is a divisor $\alpha$ on $C \times D$. It defines a line bundle $\mathcal{O}(\alpha)$ on $C \times D$. The correspondence $\alpha$ acts on divisors: it induces a map \begin{equation*} \alpha^* \colon \Pic(D) \to \Pic(C) \quad m \mapsto (\pi_C)_*(\pi_D^*m.\alpha) \end{equation*} which sends a divisor $m$ on $D$ to the pushforward along $\pi_C$ of the intersection of $\alpha$ and $\pi_D^*m$ on $C\times D$. It restricts to a map $\textrm{Pic}^0(D) \to \textrm{Pic}^0(C)$: if $m$ has degree zero, then so does $\alpha^*(m)$. 
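As a sanity check on the variance conventions (a standard computation, not carried out in the text): the diagonal $\Delta \subset C \times C$, viewed as a correspondence $\Delta \colon C \vdash C$, acts as the identity. Writing $\pi_1, \pi_2$ for the two projections $C \times C \to C$:

```latex
\[
\Delta^*(m) = (\pi_1)_*\bigl(\pi_2^* m \,.\, \Delta\bigr)
            = (\pi_1)_*\Bigl(\sum_j m_j\,(y_j, y_j)\Bigr)
            = \sum_j m_j\, y_j = m
\qquad \text{for a divisor } m = \sum_j m_j\, y_j \text{ on } C,
\]
% since \pi_2^* m is the divisor \sum_j m_j (C \times y_j), which meets the
% diagonal transversally in the points (y_j, y_j), and \pi_1 sends each
% (y_j, y_j) back to y_j.
```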
We get a homomorphism \begin{equation} \label{ek} T\colon \mathsf{C}H^1(C \times D) \to \mathrm{H}om(\Pic(D), \Pic(C)) \to \mathrm{H}om(\Pic^0(D), \Pic^0(C))\,, \quad \alpha \mapsto \alpha^* \end{equation} as $\alpha^*$ depends only on the class of $\alpha$ in $\mathsf{C}H^1(C \times D)$. Degenerate correspondences \cite[Example 16.1.2]{MR1644323} constitute the subgroup $I(D,C) =\pi_C^*(CH^1(C)) + \pi_D^*(CH^1(D))$ of $\mathsf{C}H^1(C \times D)$. The map $T$ induces an isomorphism \cite[Proposition 3.3, Theorem 3.9]{MR1265529} \begin{equation} \label{doh} T\colon \frac{\mathsf{C}H^1(C \times D)}{I(D,C)} \to \mathrm{H}om(\Pic^0(D), \Pic^0(C)); \end{equation} see \cite[Chapter 11, Theorem 5.1]{MR2062673} for another proof when $F=\mathbb C$. Over a non-algebraically closed field, the isomorphism holds if $C$ and $D$ have rational points. Composition of correspondences induces a ring structure on $\mathsf{C}H^1(C \times C)$ with $I(C,C)$ as an ideal \cite[Example 16.1.2]{MR1644323}. It is known that \begin{itemize} \item \cite[Corollary 16.1.2]{MR1644323} the map \begin{equation}\label{teen} T: \mathsf{C}H^1(C \times C) \to \End(\mathsf{C}H^1(C))\,, \quad \alpha \mapsto \alpha^* \end{equation} is a homomorphism of rings. \item $T$ induces a ring isomorphism \cite[Example 16.1.2(c)]{MR1644323} \begin{equation} \label{caar} T\colon \frac{CH^1(C \times C)}{I(C,C)} \to \End(\Pic^0(C)), \end{equation} as $F$ is algebraically closed; see \cite[Chapter 11, Theorem 5.1]{MR2062673} for a proof when $F=\mathbb C$. \end{itemize} The following result provides a categorification of the above statements. \begin{theorem} \label{nuovo} There is an additive functor of Picard categories \begin{equation} \tilde{T}\colon \mathbf{CH}^1(C \times D) \to \cHom(\mathbf{CH}^1(D), \mathbf{CH}^1(C)) \to \cHom(\mathbf{CH}^1(D)^0, \mathbf{CH}^1(C)^0) \end{equation} which induces (\ref{ek}) on $\pi_0$. 
$\tilde{T}$ has the following properties: \begin{enumerate}[label=(\roman*)] \item Let $M(D,C)= \mathsf{C}oker(\pi_C^*+ \pi_D^*)$ be the cokernel of the additive functors \begin{equation} \pi_C^*\colon \mathbf{CH}^1(C) \to \mathbf{CH}^1(C \times D) \leftarrow \mathbf{CH}^1(D) \reflectbox{$\colon$} \pi_D^*. \end{equation} Then $\tilde{T}$ induces an additive functor \begin{equation} \tilde{T}_{D,C} \colon M(D,C) \to \cHom(\mathbf{CH}^1(D)^0, \mathbf{CH}^1(C)^0) \end{equation} which, on $\pi_0$, is (\ref{doh}). \item $\mathbf{CH}^1(C \times C)$ comes naturally equipped with the structure of a categorical ring (\S \ref{sec:ring-structures}) which, on $\pi_0$, is the composition of correspondences. \item The functors \begin{equation} \tilde{T} \colon \mathbf{CH}^1(C \times C)\to \stEnd(\mathbf{CH}^1(C)), \quad \tilde{T}_{C,C} \colon M(C,C) \to \stEnd(\mathbf{CH}^1(C)^0) \end{equation} are functors of categorical rings (\S \ref{sec:ring-structures}). These induce (\ref{teen}), (\ref{caar}) on $\pi_0$. \end{enumerate} \end{theorem} \begin{remark} The assignment $C \mapsto \mathbf{CH}^1(C)$ and $g\in \mathsf{D}iv (C\times D) \mapsto \tilde{T}(\mathcal{O} (g))$ comprise a pseudo-functor from $\mathcal{V}$ (the category of curves and correspondences) to the 2-category of Picard categories (or stacks). \end{remark} \subsection{Proof of Theorem \ref{nuovo}} The existence of $\tilde{T}$ is provided by the following lemma. \begin{lemma} For any correspondence $\alpha:D \vdash C$, the map $\alpha^*: \mathsf{C}H^1(D) \to \mathsf{C}H^1(C)$ is induced by an additive functor $\tilde{\alpha}^* \colon \mathbf{CH}^1(D) \to \mathbf{CH}^1(C)$. This functor restricts to a functor $\mathbf{CH}^1(D)^0 \to \mathbf{CH}^1(C)^0$. Further, if $\beta$ is another correspondence, then $\widetilde{(\alpha+\beta)}^* = \tilde{\alpha}^*+ \tilde{\beta}^*$ as additive functors. \end{lemma} \begin{proof} This is a simple application of Theorem \ref{Main}.
Given a line bundle $M$ on $D$, consider the pair $\pi_D^*M$ and $\mathcal{O} (\alpha)$ of line bundles on $C \times D$; \cite{ER} constructs a $\mathcal{K}_2$-gerbe $G_{(\mathcal{O} (\alpha), \pi_D^*M)}$ on $C \times D$. As this gerbe is horizontal by Proposition \ref{beilinson-lemma}, one can integrate it along $\pi_C:C \times D \to C$ to get a line bundle on $C$: \begin{equation*} \tilde{\alpha}^*M = \int_{\pi_C}~G_{(\mathcal O(\alpha), \pi_D^*M)} \,. \end{equation*} Both the additivity of $\tilde{\alpha}^*$ and the property $\widetilde{(\alpha+\beta)}^* = \tilde{\alpha}^*+ \tilde{\beta}^*$ follow from the bi-additivity (Theorem \ref{prop2}) of $G$. If the line bundle $M$ on $D$ has degree zero, then so does the line bundle $ \tilde{\alpha}^*M$ on $C$, as its class in $CH^1(C)$ is $\alpha^*(M)$, which has degree zero. \end{proof} This gives us a bi-additive functor of Picard categories \[\mathbf{CH}^1(C \times D) \times \mathbf{CH}^1(D) \to \mathbf{CH}^1(C), \qquad (\alpha,M) \mapsto \tilde{\alpha}^*M,\] and an additive functor \[\tilde{T}: \mathbf{CH}^1(C \times D) \to \cHom(\mathbf{CH}^1(D), \mathbf{CH}^1(C)) \to \cHom(\mathbf{CH}^1(D)^0, \mathbf{CH}^1(C)^0)\] where, for any pair $P$, $P'$ of Picard categories, $\cHom(P,P')$ is the Picard category of additive functors from $P$ to $P'$. Statement (i) of Theorem \ref{nuovo} concerns the factorization of $\tilde{T}$ as \begin{equation}\label{eq2} \tilde{T}:M(D,C) \to \cHom(\mathbf{CH}^1(D)^0, \mathbf{CH}^1(C)^0)\,. \end{equation} This, in turn, follows from the triviality of $\tilde{T}$ on $\pi_C^*\mathbf{CH}^1(C)$ and $\pi_D^*\mathbf{CH}^1(D)$: \begin{itemize} \item {\bf $\tilde{T}$ restricted to $\pi_D^*\mathbf{CH}^1(D)$.} If $g:D \vdash C$ is the pullback $\pi_D^*x$ of a divisor $x$ on $D$, then $\tilde{T}(g)$ applied to a line bundle $L$ on $D$ is defined as $\int_{\pi_C}G_{(\mathcal O(g), \pi_D^*L)}$.
As the construction of the $\mathcal{K}_2$-gerbe is functorial, we have \[G_{(\pi_D^*x, \pi_D^*L)}= \pi_D^*G_{(x,L)};\] as $H^2(D, \mathcal{K}_2) =0$, the $\mathcal{K}_2$-gerbe $G_{(x,L)}$ on $D$ is trivializable. Since $\int_{\pi_C}$ is an additive functor, $\tilde{T}(g)(L)$ is trivializable. It follows that \[\tilde{T}(g): \mathbf{CH}^1(D)^0 \to \mathbf{CH}^1(C)^0\] is the trivial functor. \item {\bf $\tilde{T}$ restricted to $\pi_C^*\mathbf{CH}^1(C)$.} If $g:D \vdash C$ is the pullback $\pi^*_Cx$ of a divisor $x$ on $C$ and $m=\sum m_j y_j$ is a divisor on $D$, then $\tilde{T}(g)(m)$ corresponds to the $\mathrm{deg}\,m$-th power of the line bundle $\mathcal O(x)$ and hence is trivial when $m$ has degree zero. This can be seen as follows: $\tilde{T}(g)(m)$ is the object corresponding to the line bundle \[\<{\pi_D^*m,\pi^*_C x}\> = \otimes_j \<{\pi^*_D y_j, \pi_C^* x}\>^{\otimes m_j}.\] Since $\pi_C: C \times y_j \hookrightarrow C \times D \to C$ is an isomorphism for any closed point $y_j$ of $D$, one has $\<{\pi^*_D y_j, \pi_C^* x}\> = \mathcal O(x)$ by (\ref{norm-finite}). By bi-additivity, $$\<{\pi_D^*m,\pi^*_C x}\>= (\mathcal O(x))^{\mathrm{deg}~m}.$$ If $m$ has degree zero, then $\tilde{T}(g)(m)$ is trivializable. So the functor \[\tilde{g}^*~=~\tilde{T}(g): \mathbf{CH}^1(D)^0 \to \mathbf{CH}^1(C)^0\] is trivial. \end{itemize} This completes the proof of (i) of Theorem \ref{nuovo}. \subsection{Composition} We show that $\tilde{T}$ is compatible with composition of correspondences. Let $X =C_1 \times C_2 \times C_3$ be the product of three curves $C_1, C_2, C_3$ and let $\pi_{ij}:X\to C_i \times C_j$ be the projections.
If $g:C_2\vdash C_1$ is a correspondence on $C_1 \times C_2$ and $h:C_3\vdash C_2$ on $C_2 \times C_3$, one can compose $g$ and $h$ to get a correspondence $g\circ h:C_3\vdash C_1$ on $C_1 \times C_3$: \[g\circ h = (\pi_{13}) _* (\pi_{23}^*h~.~\pi_{12}^*g),\] by pulling back $g$ and $h$ to $X$, intersecting them, and pushing forward via $\pi_{13}$ to $C_1 \times C_3$. This gives a bi-additive map \[\circ: \mathsf{C}H^1(C_1 \times C_2) \times \mathsf{C}H^1(C_2 \times C_3) \to \mathsf{C}H^1(C_1 \times C_3).\] \begin{lemma}\label{l1} The above bi-additive map is induced by a bi-additive functor \[\tilde{\circ}: \mathbf{CH}^1(C_1 \times C_2) \times \mathbf{CH}^1(C_2 \times C_3) \to \mathbf{CH}^1(C_1 \times C_3).\] \end{lemma} \begin{proof} The functor is defined as follows: the pair $\pi_{12}^*\mathcal{O}(g)$ and $\pi_{23}^* \mathcal{O}(h)$ of line bundles on $X$ give rise to a $\mathcal{K}_2$-gerbe $G_{(\pi_{12}^*\mathcal{O}(g),\pi_{23}^* \mathcal{O}(h))}$ on $X$. Since it is horizontal (Proposition \ref{beilinson-lemma}) for the relative curve $\pi_{13}:X \to C_1 \times C_3$, we can integrate it along $\pi_{13}$ to obtain a line bundle $\<{\pi_{12}^*\mathcal{O}(g),\pi_{23}^* \mathcal{O}(h)}\>$ on $C_1 \times C_3$. The functor $\tilde{\circ}$, in the notation of Theorem \ref{Main}, is \[\tilde{\circ}:(g,h) \mapsto \int_{\pi_{13}}G_{(\pi_{12}^*\mathcal{O}(g),\pi_{23}^* \mathcal{O}(h))} = \<{\pi_{12}^*\mathcal{O}(g),\pi_{23}^* \mathcal{O}(h)}\>.\] It follows from Theorem \ref{Main} that $\tilde{\circ}$ induces $\circ$ on $\pi_0$. \end{proof} Taking $C_1=C_2=C_3=C$ proves (ii) of Theorem \ref{nuovo}.
\begin{lemma}\label{compos-eh?}The functor $\tilde{T}$ is compatible with composition: namely, the diagram \begin{equation} \label{eq:100} \begin{tikzcd} \mathbf{CH}^1(C_1 \times C_2) \times \mathbf{CH}^1(C_2 \times C_3) \ar[r,"\tilde{\circ}"] \ar[d,"\tilde{T}\times \tilde{T}"'] & \mathbf{CH}^1(C_1 \times C_3) \ar[d,"\tilde{T}"] \\ \cHom(\mathbf{CH}^1(C_2), \mathbf{CH}^1(C_1)) \times \cHom(\mathbf{CH}^1(C_3), \mathbf{CH}^1(C_2)) \ar[r,""'] & \cHom(\mathbf{CH}^1(C_3), \mathbf{CH}^1(C_1)) \end{tikzcd}, \end{equation} commutes up to a natural isomorphism $\tilde{T}(g\,\tilde{\circ}\,h) \cong \tilde{T}(g)\circ\tilde{T}(h)$. \end{lemma} \begin{proof} For any smooth projective morphism $f: Y\to B$ of relative dimension one and line bundles $L_1, L_2$ on $Y$, let $\<{L_1, L_2}\>_{f} = \int_fG_{(L_1, L_2)}$ denote the Deligne line bundle on $B$. Our task is to prove the existence of a natural isomorphism for any $L \in \mathbf{CH}^1(C_3)$: \begin{equation}\label{eq:101} \<{\<{\pi_{12}^*\mathcal{O}(g), \pi_{23}^*\mathcal{O}(h)}\>_{\pi_{13}}, \alpha^*_3L}\>_{\alpha_1} \cong \<{\mathcal{O}(g), \gamma^*_{2}\<{\mathcal{O}(h), \beta_3^*L}\>_{\beta_2}}\>_{\gamma_1} \end{equation} where the maps are \[\begin{tikzcd} C_1\times C_3 \ar[r, "\alpha_3"] \ar[d, "\alpha_1"] & C_3, & C_2\times C_3 \ar[r, "\beta_3"] \ar[d, "\beta_2"] &C_3, & C_1\times C_2 \ar[r, "\gamma_2"] \ar[d, "\gamma_1"] & C_2.\\ C_1 &&C_2 &&C_1& \end{tikzcd} \] By additivity in $L$, it suffices to consider the case $L = \mathcal{O}(x)$ for a closed point $x = \Spec~F$ of $C_3$. We put \begin{gather*} \iota_1\colon C_1 \cong D =C_1 \times x \hookrightarrow C_1 \times C_3, \quad \iota_2\colon C_2 \cong E = C_2 \times x \hookrightarrow C_2 \times C_3, \\ \iota_{12} \colon C_1 \times C_2 \cong C_1 \times C_2 \times x \hookrightarrow C_1 \times C_2 \times C_3 = X.
\end{gather*} By (\ref{norm-finite}), the left-hand-side of (\ref{eq:101}) is \[ \<{\<{\pi_{12}^*\mathcal{O}(g), \pi_{23}^*\mathcal{O}(h)}_{\pi_{13}}, \alpha^*_3L}_{\alpha_1} \cong N_{D/C_1}(\<{\pi_{12}^*\mathcal{O}(g), \pi_{23}^*\mathcal{O}(h)}_{\pi_{13}}\big |_D) \cong \iota_1^*\bigg( \<{\pi_{12}^*\mathcal{O}(g), \pi_{23}^*\mathcal{O}(h)}_{\pi_{13}}\bigg).\] On the other hand, by (\ref{norm-finite}), the right-hand-side of (\ref{eq:101}) is \[ \<{\mathcal{O}(g), \gamma^*_{2}\<{\mathcal{O}(h), \beta_3^*L}_{\beta_2}}_{\gamma_1} \cong \<{\mathcal{O}(g), \gamma^*_{2}N_{{E}/C_2}(\mathcal{O}(h)\big |_E)}_{\gamma_1} \cong \<{\mathcal{O}(g), \gamma^*_{2}\iota_2^*\mathcal{O}(h) }_{\gamma_1} \cong\<{\mathcal{O}(g), \iota_{12}^*\pi_{23}^*\mathcal{O}(h)}_{\gamma_1},\] using $C_2 \cong E$ for the second isomorphism and the following diagram for the last isomorphism: \[\begin{tikzcd} C_1\times C_2 \ar[d, "\gamma_2"] \ar[r, "\cong"] &C_1 \times C_2 \times x \ar[r] &C_1 \times C_2 \times C_3 =X \ar[d, "\pi_{23}"]\\ C_2 \ar[r, "\cong"] & E \ar[r]& C_2 \times C_3\,, \end{tikzcd} \] where the top row is $\iota_{12}$ and the bottom one is $\iota_2$. As $\pi_{12}\circ \iota_{12}$ is the identity map on $C_1 \times C_2$, we have \[ \<{\mathcal{O}(g), \iota_{12}^*\pi_{23}^*\mathcal{O}(h)}_{\gamma_1} \cong \<{\iota_{12}^* \pi_{12}^*\mathcal{O}(g), \iota_{12}^*\pi_{23}^*\mathcal{O}(h)}_{\gamma_1}. 
\] The required natural isomorphism in (\ref{eq:101}), namely, \[\<{\iota_{12}^*\pi_{12}^*\mathcal{O}(g), \iota_{12}^*\pi_{23}^*\mathcal{O}(h)}_{\gamma_1} \cong \iota_1^*\bigg( \<{\pi_{12}^*\mathcal{O}(g), \pi_{23}^*\mathcal{O}(h)}_{\pi_{13}}\bigg) \] follows from functoriality: use the map of relative curves \[\begin{tikzcd} C_1\times C_2 \ar[d, "\gamma_1"] \ar[r, "\cong"] &C_1 \times C_2 \times x \ar[r] &C_1 \times C_2 \times C_3 =X \ar[d, "\pi_{13}"]\\ C_1 \ar[r, "\cong"] & D \ar[r]& C_1 \times C_3\,, \end{tikzcd} \] where the top row is still $\iota_{12}$ and the bottom one is now $\iota_1$. This proves Lemma \ref{compos-eh?}. \end{proof} Taking $C=C_1 = C_2=C_3$ in the above lemma, we obtain that \[ \tilde{\circ}: \mathbf{CH}^1(C\times C) \times \mathbf{CH}^1(C \times C) \to \mathbf{CH}^1(C \times C)\] is a monoidal functor of Picard categories and that \[\tilde{T}: \mathbf{CH}^1(C \times C) \to \stEnd(\mathbf{CH}^1(C))\] is a functor of ring categories proving (iii). This finishes the proof of Theorem~\ref{nuovo}. \end{document}
\begin{document} \title{Reply to ``Comment on `Witnessed entanglement and the geometric measure of quantum discord' "} \author{Tiago Debarba} \email{[email protected]} \author{Thiago O. Maciel} \author{Reinaldo O. Vianna} \affiliation{Departamento de F\'{\i}sica - ICEx - Universidade Federal de Minas Gerais, Av. Pres. Ant\^onio Carlos 6627 - Belo Horizonte - MG - Brazil - 31270-901.} \date{\today} \begin{abstract} We show that the mistakes pointed out by Rana and Parashar [Phys. Rev. A {\bf 87}, 016301 (2013)] do not invalidate the main conclusion of our work [Phys. Rev. A {\bf 86}, 024302 (2012)]. We show that the errors affected only a particular application of our general results, and present the correction. \end{abstract} \pacs{03.67.Mn, 03.65.Aa} \maketitle Rana and Parashar \cite{comment} claim that our bounds between geometrical discord and entanglement \cite{debarbapra} are incorrect. They give examples of violations of our bounds and suggest that this has to do with non-monotonicity of geometrical discord in the Hilbert-Schmidt norm. The authors start their comment by revising our definition of geometrical discord and pointing out a typographical error in the definition of negativity. We defined negativity as the sum of the negative eigenvalues of the partial transpose of the state, Eq.16 of our work, while some authors further normalize this quantity. Their critique about the normalization of the geometrical discord in the Hilbert-Schmidt norm is also irrelevant, for the normalized geometrical discord is greater than ours. The first counterexample, which would violate our results, is the maximally entangled state of two qubits ($\phi_{+}$). They consider the negativity as $1$, while the 2-norm geometrical discord is $1/2$. But this is not correct. Consider Eq.20, \begin{equation} D_{(2)}(\phi_{+})\geq \frac{E_{w}^2}{Tr(W_{\phi_{+}}^2)}.
\end{equation} We have $D_{(2)}(\phi_{+})=1/2$, and $E_{w}=Tr(W_{\phi_{+}}\phi_{+})=Tr(P_{-}\phi_{+}^{T_1})=1/2$, where $P_{-}$ is the projector associated with the negative eigenvalue of the partial transpose of $\phi_{+}$. $Tr(W_{\phi_{+}}^2)$ is the number of negative eigenvalues of the partial transpose, which is $1$. Thus $D_{(2)} = 1/2 \geq E_{w}^2/Tr(W_{\phi_{+}}^2)=1/4$. The next counterexample is the $2\otimes 32$-dimensional state. For this example we have geometrical discord $D_{(2)}(\rho)=0.01$ and $E_{w}^2/Tr[W_{\rho}^2]=0.0032$, where $E_{w}$ is the negativity, and Eq.20 is satisfied. However, the equation used in the comment was Eq.21, and via that relation we get $\mathcal{N}^2/(d-1)^2=0.0316$, which violates the bound. The point is that we had mistakenly written that $Tr[W_{\rho}^2]\leq d-1$, for a system with dimension $d\otimes d'$ and $d\leq d'$. In the counterexample we have $Tr[W_{\rho}^2]=10$, i.e. the partial transpose of the state has $10$ negative eigenvalues and not $d-1=1$, and this is the reason for the apparent violation of Eq.21. In the comment, the authors conclude that the violation comes from the fact that $D_{(2)}(\rho)$ is not a monotonic distance, but monotonicity does not play any role in our bounds. Finally, the authors claim that Eq.27 is not valid. Equation 27 is a particular case of Eq.22, where we get a linear relation between geometrical discord calculated via trace norm and witnessed entanglement. This bound is valid only for entanglement measures whose optimal entanglement witnesses live in the domain $-\mathbb{I} \leq W \leq \mathbb{I}$, and the entanglement witness for the negativity is not in this domain, which explains the problem with the bound in Eq.27. An example of an entanglement measure for which this bound is valid is the random robustness of entanglement, Eq.28.
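As an illustrative numerical check of the $\phi_{+}$ discussion above (a sketch using numpy; the value $D_{(2)}(\phi_{+})=1/2$ is quoted from the text rather than computed from its defining optimization, and the helper names are ours):

```python
import numpy as np

# Maximally entangled two-qubit state |phi+> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Partial transpose on the first qubit: swap the row index of subsystem 1
# with its column index in the (2,2,2,2)-reshaped density matrix.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

evals = np.linalg.eigvalsh(rho_pt)
neg = evals[evals < -1e-12]
E_w = -neg.sum()      # magnitude of the sum of negative eigenvalues = 1/2
n_neg = len(neg)      # Tr(W^2): number of negative eigenvalues = 1

D2 = 0.5              # 2-norm geometrical discord of phi+, quoted from the text
print(E_w, n_neg, D2 >= E_w**2 / n_neg)
```

The printed check confirms Eq.20 for this state: $1/2 \geq (1/2)^2/1$.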
Equation 27 can be easily corrected by means of an inequality more general than Eq.22, namely: \begin{equation} D_{(1)}\geq \frac{E_w}{|| W_{\rho}||_{\infty}}, \end{equation} where $|| W_{\rho}||_{\infty}$ is the greatest eigenvalue, in absolute value, of the optimal entanglement witness of the state $\rho$ \footnote{Take the well-known inequality for operators $A$ and $B$, $||A||_{q}||B||_{p}\geq |Tr[AB^{\dagger}]|$, for $1/q+1/p = 1$. Set $A=\rho-{\xi}$, where $\xi$ is $\rho$'s nearest non-discordant state, and set $B = W_{\rho}$, where $W_{\rho}$ is the optimal entanglement witness of $\rho$; then it follows straightforwardly that $D_{(p)} \geq E_{w}/||W_{\rho}||_{q}$. For $p=1$, we have $q=\infty$.}. Note that this bound is valid for every witnessed entanglement. In conclusion, the main results of our work are Eq.20 and Eq.22, which are rigorously correct. They were calculated from {\it first principles}, via well-known inequalities for operators and properties of entanglement witnesses. We made two mistakes when specializing to the negativity, as discussed and clarified above. The conjecture proposed by D. Girolami and G. Adesso \cite{girolamiadesso} about the interplay between geometrical quantum discord and entanglement is implicit in Eq.20 and Eq.22. \end{document}
\begin{document} \title[Generalized skew Pieri rules]{Quasisymmetric and noncommutative skew Pieri rules} \author{V. Tewari} \address{Department of Mathematics, University of Washington, Seattle, WA 98105, USA} \email{\href{mailto:[email protected]}{[email protected]}} \author{S. van Willigenburg} \address{Department of Mathematics, University of British Columbia, Vancouver, BC V6T 1Z2, Canada} \email{\href{mailto:[email protected]}{[email protected]}} \thanks{ The authors were supported in part by the National Sciences and Engineering Research Council of Canada.} \subjclass[2010]{Primary 05E05, 16T05, 16W55; Secondary 05A05, 05E10} \keywords{composition, composition poset, composition tableau, noncommutative symmetric function, quasisymmetric function, skew Pieri rule} \begin{abstract} In this note we derive skew Pieri rules in the spirit of Assaf-McNamara for skew quasisymmetric Schur functions using the Hopf algebraic techniques of Lam-Lauve-Sottile, and recover the original rules of Assaf-McNamara as a special case. We then apply these techniques a second time to obtain skew Pieri rules for skew noncommutative Schur functions. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} The Hopf algebra of quasisymmetric functions, $\ensuremath{\operatorname{QSym}}$, was first defined explicitly in \cite{gessel}. It is a nonsymmetric generalization of the Hopf algebra of symmetric functions, and arises in many areas such as the representation theory of the 0-Hecke algebra \cite{BBSSZ, DKLT, konig, 0-Hecke}, probability \cite{hersh-hsiao, stanley-riffle}, and is the terminal object in the category of combinatorial Hopf algebras \cite{aguiar-bergeron-sottile}. Recently a basis of $\ensuremath{\operatorname{QSym}}$, known as the basis of quasisymmetric Schur functions, was discovered \cite{QS}, which is a nonsymmetric generalization of the symmetric function basis of Schur functions. 
These quasisymmetric Schur functions arose from the combinatorics of Macdonald polynomials \cite{HHL}, have been used to resolve the conjecture that $\ensuremath{\operatorname{QSym}}$ over the symmetric functions has a stable basis \cite{lauve-mason}, and have initiated the dynamic research area of discovering other quasisymmetric Schur-like bases such as row-strict quasisymmetric Schur functions \cite{ferreira, mason-remmel}, Young quasisymmetric Schur functions \cite{LMvW}, dual immaculate quasisymmetric functions \cite{BBSSZ}, type $B$ quasisymmetric Schur functions \cite{jingli, oguz}, quasi-key polynomials \cite{assafsearles, searles} and quasisymmetric Grothendieck polynomials \cite{monical}. Their name was aptly chosen since these functions not only naturally refine Schur functions, but also generalize many classical Schur function properties, such as the Littlewood-Richardson rule from the classical \cite{littlewood-richardson} to the generalized \cite[Theorem 3.5]{BLvW}, the Pieri rules from the classical \cite{pieri} to the generalized \cite[Theorem 6.3]{QS} and the RSK algorithm from the classical \cite{knuth, robinson, schensted} to the generalized \cite[Procedure 3.3]{mason}. Dual to $\ensuremath{\operatorname{QSym}}$ is the Hopf algebra of noncommutative symmetric functions, $\ensuremath{\operatorname{NSym}}$ \cite{GKLLRT}, whose basis dual to that of quasisymmetric Schur functions is the basis of noncommutative Schur functions \cite{BLvW}. By duality this basis again has a Littlewood-Richardson rule and RSK algorithm, and, due to noncommutativity, two sets of Pieri rules, one arising from multiplication on the right \cite[Theorem 9.3]{tewari} and one arising from multiplication on the left \cite[Corollary 3.8]{BLvW}. Therefore in both $\ensuremath{\operatorname{QSym}}$ and $\ensuremath{\operatorname{NSym}}$ a key question in this realm remains: Are there \emph{skew} Pieri rules for quasisymmetric and noncommutative Schur functions? 
In this note we give such rules that are analogous to that of their namesake Schur functions. More precisely, the note is structured as follows. In Section~\ref{sec:comps} we review necessary notions on compositions and define operators on them. In Section~\ref{sec:QSYMNSYM} we recall $\ensuremath{\operatorname{QSym}}$ and $\ensuremath{\operatorname{NSym}}$, the bases of quasisymmetric Schur functions and noncommutative Schur functions, and their respective Pieri rules. In Section~\ref{sec:skew} we give skew Pieri rules for quasisymmetric Schur functions in Theorem~\ref{the:QSskewPieri} and recover the Pieri rules for skew shapes of Assaf and McNamara in Corollary~\ref{cor:AM}. We close with skew Pieri rules for noncommutative Schur functions in Theorem~\ref{the:NCskewPieri}. \section{Compositions and diagrams}\label{sec:comps} A finite list of integers $\alpha = (\alpha _1, \ldots , \alpha _\ell)$ is called a \emph{weak composition} if $\alpha _1, \ldots , \alpha _\ell$ are nonnegative, is called a \emph{composition} if $\alpha _1, \ldots , \alpha _\ell$ are positive, and is called a \emph{partition} if $\alpha _1\geq \cdots \geq\alpha _\ell >0$. Note that every weak composition has an underlying composition, obtained by removing every zero, and in turn every composition has an underlying partition, obtained by reordering the list of integers into weakly decreasing order. Given $\alpha = (\alpha _1, \ldots , \alpha _\ell)$ we call the $\alpha _i$ the \emph{parts} of $\alpha$, also $\ell$ the \emph{length} of $\alpha$ denoted by $\ell(\alpha)$, and the sum of the parts of $\alpha$ the \emph{size} of $\alpha$ denoted by $|\alpha |$. The empty composition of length and size zero is denoted by $\emptyset$. If there exists $\alpha _{k+1} = \cdots = \alpha _{k+j} = i$ then we often abbreviate this to $i^j$. 
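As a computational aside (a minimal Python sketch with hypothetical helper names, not part of the original development), the passage from a weak composition to its underlying composition and underlying partition can be expressed as:

```python
def underlying_composition(alpha):
    """Underlying composition of a weak composition: remove every zero part."""
    return tuple(a for a in alpha if a > 0)

def underlying_partition(alpha):
    """Underlying partition: reorder the parts into weakly decreasing order."""
    return tuple(sorted(underlying_composition(alpha), reverse=True))

alpha = (2, 0, 4, 3, 6)                   # a weak composition of length 5
print(underlying_composition(alpha))      # (2, 4, 3, 6)
print(underlying_partition(alpha))        # (6, 4, 3, 2)
print(sum(alpha))                         # common size: 15
```

This reproduces the example that follows in the text.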
Also, given weak compositions $\alpha= (\alpha_1,\ldots ,\alpha_{\ell})$ and $\beta=(\beta_1,\ldots,\beta_m)$, we define the \emph{concatenation} of $\alpha$ and $\beta$, denoted by $\alpha \beta$, to be the weak composition $(\alpha_1,\ldots,\alpha_\ell,\beta_1,\ldots,\beta_m)$. We define the \emph{near-concatenation} of $\alpha$ and $\beta$, denoted by $\alpha \odot \beta$, to be the weak composition $(\alpha_1,\ldots,\alpha_\ell + \beta_1,\ldots,\beta_m)$. For example, if $\alpha=(2,1,0,3)$ and $\beta=(1,4,1)$, then $\alpha\beta=(2,1,0,3,1,4,1)$ and $\alpha\odot \beta=(2,1,0,4,4,1)$. The \emph{composition diagram} of a weak composition $\alpha$, also denoted by $\alpha$, is the array of left-justified boxes with $\alpha _i$ boxes in row $i$ from the \emph{top}, that is, following English notation for Young diagrams of partitions. We will often think of $\alpha$ as both a weak composition and as a composition diagram simultaneously, and hence future computations such as adding/subtracting 1 from the rightmost/leftmost part equalling $i$ (as a weak composition) are synonymous with adding/removing a box from the bottommost/topmost row of length $i$ (as a composition diagram). \begin{example}\label{ex:comps} The composition diagram of the weak composition of length 5, $\alpha=(2,0,4,3,6)$, is shown below. $$\tableau{\ &\ \\ \\ \ &\ &\ &\ \\ \ &\ &\ \\\ &\ &\ &\ &\ &\ }$$ The composition of length 4 underlying $\alpha$ is $(2,4,3,6)$, and the partition of length 4 underlying it is $(6,4,3,2)$. They all have size 15. \end{example} \subsection{Operators on compositions}\label{sec:ops} In this subsection we will recall four families of operators, each of which are dependent on a positive integer parameter. These families have already contributed to the theory of quasisymmetric and noncommutative Schur functions, and will continue to cement their central role as we shall see later. 
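Returning briefly to concatenation and near-concatenation defined above, a short Python sketch (hypothetical helper names, not part of the original text) reproduces the example in the text:

```python
def concatenation(alpha, beta):
    """alpha beta: juxtapose the two weak compositions."""
    return tuple(alpha) + tuple(beta)

def near_concatenation(alpha, beta):
    """alpha (.) beta: fuse the last part of alpha with the first part of beta."""
    alpha, beta = tuple(alpha), tuple(beta)
    return alpha[:-1] + (alpha[-1] + beta[0],) + beta[1:]

alpha, beta = (2, 1, 0, 3), (1, 4, 1)
print(concatenation(alpha, beta))        # (2, 1, 0, 3, 1, 4, 1)
print(near_concatenation(alpha, beta))   # (2, 1, 0, 4, 4, 1)
```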
Although originally defined on compositions, we will define them in the natural way on weak compositions to facilitate easier proofs. The first of these operators is the box removing operator $\mathfrak{d}$, which first appeared in the Pieri rules for quasisymmetric Schur functions \cite{QS}. The second of these is the appending operator $a$. These combine to define our third operator, the jeu de taquin or jdt operator $\mathfrak{u}$. This operator is pivotal in describing jeu de taquin slides on tableaux known as semistandard reverse composition tableaux and in describing the right Pieri rules for noncommutative Schur functions \cite{tewari}. Our fourth and final operator is the box adding operator $\mathfrak{t}$ \cite{BLvW, MNtewari}, which plays the same role in the left Pieri rules for noncommutative Schur functions \cite{BLvW} as $\mathfrak{u}$ does in the aforementioned right Pieri rules. Each of these operators is defined on weak compositions for every integer $i\geq 0$. We note that $$\mathfrak{d} _0 = a_0 = \mathfrak{u} _0 = \mathfrak{t} _ 0 = {Id}$$namely the identity map, which fixes the weak composition it is acting on. With this in mind we now define the remaining operators for $i\geq 1$. The first \emph{box removing operator} on weak compositions, $\mathfrak{d} _i$ for $i\geq 1$, is defined as follows. Let $\alpha$ be a weak composition. Then $$\mathfrak{d} _i (\alpha) = \alpha '$$where $\alpha '$ is the weak composition obtained by subtracting 1 from the rightmost part equalling $i$ in $\alpha$. If there is no such part then we define $\mathfrak{d} _i(\alpha) = 0$. \begin{example}\label{ex:down} Let $\alpha=(2,1,2)$. Then $\mathfrak{d}_1(\alpha)=(2,0,2)$ and $\mathfrak{d}_2(\alpha)=(2,1,1)$. \end{example} Now we will discuss two notions that will help us state our theorems in a concise way later, as well as connect our results to those in the classical theory of symmetric functions. 
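A Python sketch of the box removing operator (the function name `d` and the use of `None` in place of $0$ are our conventions) reproducing the example above:

```python
def d(i, alpha):
    """Box removing operator d_i: subtract 1 from the rightmost part equal to i.

    Returns None (playing the role of 0) if no part of alpha equals i.
    """
    if i == 0:
        return tuple(alpha)                      # d_0 is the identity
    alpha = list(alpha)
    for j in reversed(range(len(alpha))):
        if alpha[j] == i:
            alpha[j] -= 1
            return tuple(alpha)
    return None

print(d(1, (2, 1, 2)))    # (2, 0, 2)
print(d(2, (2, 1, 2)))    # (2, 1, 1)
print(d(3, (2, 1, 2)))    # None

# Composing operators, rightmost applied first: d_1 d_2 d_4 d_5 acting
# on (2, 5, 1, 3, 1) removes four boxes.
alpha = (2, 5, 1, 3, 1)
for i in (5, 4, 2, 1):
    alpha = d(i, alpha)
print(alpha)              # (1, 3, 1, 3, 0)
```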
Let $i_1 < \cdots < i_k$ be a sequence of positive integers, and let $\alpha$ be a weak composition. Consider the operator $\mathfrak{d}_{i_1}\cdots \mathfrak{d}_{i_k}$ acting on the weak composition $\alpha$, and assume that the result is a valid weak composition. Then the boxes that are removed from $\alpha$ are said to form a \emph{$k$-horizontal strip}, and we can think of the operator $\mathfrak{d}_{i_1}\cdots \mathfrak{d}_{i_k}$ as removing a $k$-horizontal strip. Similarly, given a sequence of positive integers $i_1\geq \cdots \geq i_k$, consider the operator $\mathfrak{d}_{i_1}\cdots \mathfrak{d}_{i_k}$ acting on $\alpha$ and suppose that the result is a valid weak composition. Then the boxes that are removed from $\alpha$ are said to form a \emph{$k$-vertical strip}. As before, we can think of the operator $\mathfrak{d}_{i_1}\cdots \mathfrak{d}_{i_k}$ as removing a $k$-vertical strip. \begin{example}\label{ex:horizontal and vertical strip} Consider $\alpha=(2,5,1,3,1)$. When we compute $\mathfrak{d}_{1}\mathfrak{d}_2\mathfrak{d}_4\mathfrak{d}_5(\alpha)$, the operator $\mathfrak{d}_{1}\mathfrak{d}_2\mathfrak{d}_4\mathfrak{d}_5$ removes the $4$-horizontal strip shaded in red from $\alpha$. $$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white) & *(red!80)\\ *(white) &*(white) &*(white) &*(red!80) &*(red!80) \\ *(white) \\ *(white) &*(white) &*(white) \\ *(red!80)\\ \end{ytableau} $$ When we compute $\mathfrak{d}_3\mathfrak{d}_2\mathfrak{d}_1\mathfrak{d}_1(\alpha)$, the operator $\mathfrak{d}_3\mathfrak{d}_2\mathfrak{d}_1\mathfrak{d}_1$ removes the $4$-vertical strip shaded in red from $\alpha$.
$$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white) & *(red!80)\\ *(white) &*(white) &*(white) &*(white) &*(white) \\ *(red!80) \\ *(white) &*(white) &*(red!80) \\ *(red!80)\\ \end{ytableau} $$ \end{example} \begin{remark}\label{rem:horizontal and vertical strip} If we consider partitions as Young diagrams in English notation, then the above notions of horizontal and vertical strips coincide with their classical counterparts. For example, consider the operator $\mathfrak{d}_1\mathfrak{d}_2\mathfrak{d}_4\mathfrak{d}_5$ acting on the partition $(5,3,2,1,1)$, in contrast to acting on the composition $(2,5,1,3,1)$ as in Example \ref{ex:horizontal and vertical strip}. Then the $4$-horizontal strip shaded in red is removed. $$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white) &*(white) &*(white) &*(red!80) &*(red!80) \\ *(white) &*(white) &*(white) \\ *(white) & *(red!80)\\ *(white) \\ *(red!80) \end{ytableau} $$ \end{remark} We now define the second \emph{appending operator} on weak compositions, $a_i$ for $i\geq 1$, as follows. Let $\alpha = (\alpha _1, \ldots , \alpha _{\ell(\alpha)})$ be a weak composition. Then $$a _i (\alpha) = (\alpha _1, \ldots , \alpha _{\ell(\alpha)}, i)$$namely, the weak composition obtained by appending a part $i$ to the end of $\alpha$. \begin{example}\label{ex:append} Let $\alpha = (2,1,3)$. Then $a_2 ((2,1,3))= (2,1,3,2)$. Meanwhile, $a_j \mathfrak{d} _2 ((3,5,1)) = 0$ for all $j\geq 0$ since $\mathfrak{d} _2 ((3,5,1)) = 0$. \end{example} With the definitions of $a_i$ and $\mathfrak{d} _i$ we define the third \emph{jeu de taquin} or \emph{jdt operator} on weak compositions, $\mathfrak{u} _i$ for $i\geq 1$, as $$\mathfrak{u} _i = a_i \mathfrak{d}_1\mathfrak{d}_2\mathfrak{d}_3 \cdots \mathfrak{d} _{i-1}.$$ \begin{example}\label{ex:jdt} We will compute $\mathfrak{u}_4(\alpha)$ where $\alpha = (3,5,2,4,1,2)$. This corresponds to computing $a_4\mathfrak{d}_1\mathfrak{d}_2\mathfrak{d}_3(\alpha)$. 
Now \begin{eqnarray*} \mathfrak{d}_1\mathfrak{d}_2\mathfrak{d}_3(\alpha)&=&\mathfrak{d}_1\mathfrak{d}_2\mathfrak{d}_3((3,5,2,4,1,2))\\&=&\mathfrak{d}_1\mathfrak{d}_2((2,5,2,4,1,2))\\&=& \mathfrak{d}_1((2,5,2,4,1,1))\\&=& (2,5,2,4,1,0). \end{eqnarray*} Hence $\mathfrak{u}_4(\alpha)=(2,5,2,4,1,0,4)$. \end{example} Let $i_1 < \cdots < i_k$ be a sequence of positive integers, and let $\alpha$ be a weak composition. Consider the operator $\mathfrak{u}_{i_k}\cdots \mathfrak{u}_{i_1}$ acting on the weak composition $\alpha$, and assume that the result is a valid weak composition. Then the boxes that are added to $\alpha$ are said to form a \emph{$k$-right horizontal strip}, and we can think of the operator $\mathfrak{u}_{i_k}\cdots \mathfrak{u}_{i_1}$ as adding a $k$-right horizontal strip. Similarly, given a sequence of positive integers $i_1\geq \cdots \geq i_k$, consider the operator $\mathfrak{u}_{i_k}\cdots \mathfrak{u}_{i_1}$ acting on $\alpha$ and suppose that the result is a valid weak composition. Then the boxes that are added to $\alpha$ are said to form a \emph{$k$-right vertical strip}. As before, we can think of the operator $\mathfrak{u}_{i_k}\cdots \mathfrak{u}_{i_1}$ as adding a $k$-right vertical strip. Lastly, we define the fourth \emph{box adding operator} on weak compositions, $\mathfrak{t} _i$ for $i\geq 1$, as follows. Let $\alpha = (\alpha _1, \ldots , \alpha _{\ell(\alpha)})$ be a weak composition. Then $$\mathfrak{t} _1 (\alpha) = (1, \alpha _1, \ldots , \alpha _{\ell(\alpha)})$$and for $i\geq 2$ $$\mathfrak{t} _i (\alpha) = (\alpha _1, \ldots , \alpha _j + 1, \ldots ,\alpha _{\ell(\alpha)})$$where $\alpha _j$ is the leftmost part equalling $i-1$ in $\alpha$. If there is no such part, then we define $\mathfrak{t} _i (\alpha) = 0$. \begin{example}\label{ex:boxadd} Consider the composition $\alpha=(3,2,3,1,2)$. 
Then $\mathfrak{t}_1(\alpha)=(1,3,2,3,1,2)$, $\mathfrak{t}_2(\alpha)=(3,2,3,2,2)$, $\mathfrak{t}_3(\alpha)=(3,3,3,1,2)$, $\mathfrak{t}_4(\alpha)=(4,2,3,1,2)$ and $\mathfrak{t}_i(\alpha)=0$ for all $i\geq 5$. \end{example} As with the jdt operators, let $i_1 < \cdots < i_k$ be a sequence of positive integers, and let $\alpha$ be a weak composition. Consider the operator $\mathfrak{t}_{i_k}\cdots \mathfrak{t}_{i_1}$ acting on the weak composition $\alpha$, and assume that the result is a valid weak composition. Then the boxes that are added to $\alpha$ are said to form a \emph{$k$-left horizontal strip}, and we can think of the operator $\mathfrak{t}_{i_k}\cdots \mathfrak{t}_{i_1}$ as adding a $k$-left horizontal strip. Likewise, given a sequence of positive integers $i_1\geq \cdots \geq i_k$, consider the operator $\mathfrak{t}_{i_k}\cdots \mathfrak{t}_{i_1}$ acting on $\alpha$ and suppose that the result is a valid weak composition. Then the boxes that are added to $\alpha$ are said to form a \emph{$k$-left vertical strip}, and we can think of the operator $\mathfrak{t}_{i_k}\cdots \mathfrak{t}_{i_1}$ as adding a $k$-left vertical strip. The box adding operator is also needed to define the composition poset \cite[Definition 2.3]{BLvW}, which in turn will be needed to define skew quasisymmetric Schur functions in the next section. \begin{definition}\label{def:RcLc} The \emph{composition poset}, denoted by $\mathcal{L}_{c}$, is the poset consisting of the set of all compositions equipped with cover relation $\lessdot _c$ such that for compositions $\alpha, \beta$ $$\beta \lessdot _c \alpha \mbox{ if and only if } \alpha = \mathfrak{t} _i (\beta)$$for some $i\geq1$. \end{definition} The order relation $< _{c}$ in $\mathcal{L}_{c}$ is obtained by taking the transitive closure of the cover relation $\lessdot _c$. \begin{example}\label{ex:boxaddLc} We have that $(3,2,3,1,2) \lessdot _c (4,2,3,1,2)$ by Example~\ref{ex:boxadd}.
\end{example} \section{Quasisymmetric and noncommutative symmetric functions}\label{sec:QSYMNSYM} We now recall the basics of graded Hopf algebras before focussing on the graded Hopf algebra of quasisymmetric functions \cite{gessel} and its dual, the graded Hopf algebra of noncommutative symmetric functions \cite{GKLLRT}. We say that $\mathcal{H}$ and $\mathcal{H}^*$ form a pair of dual graded Hopf algebras, each over a field $K$, if there exists a duality pairing $\langle \ ,\ \rangle : \mathcal{H}\otimes \mathcal{H}^{*} \longrightarrow K$ under which the structure of $\mathcal{H}^*$ is dual to that of $\mathcal{H}$ in a way that respects the grading, and vice versa. More precisely, the duality pairing pairs the elements of any basis $\{B_i\}_{i\in I}$ of the graded piece $\mathcal{H}^N$, for some index set $I$, with the elements of its dual basis $\{D_i\}_{i\in I}$ of the graded piece $(\mathcal{H}^N)^*$, so that $\langle B_i, D_j\rangle = \delta_{ij}$, where the \emph{Kronecker delta}\index{Kronecker delta} $\delta_{ij} = 1$ if $i=j$ and 0 otherwise. Duality is exhibited in that the product coefficients of one basis are the coproduct coefficients of its dual basis and vice versa, that is, \begin{eqnarray*} B_i \cdot B_j = \sum_k b^k_{i,j} B_k &\qquad \Longleftrightarrow\qquad & \Delta D_k = \sum_{i,j} b^k_{i,j} D_i \otimes D_j \\ D_i \cdot D_j = \sum_k d^k_{i,j} D_k &\qquad \Longleftrightarrow\qquad & \Delta B_k = \sum_{i,j} d^k_{i,j} B_i \otimes B_j \end{eqnarray*}where $\cdot$ denotes \emph{product} and $\Delta$ denotes \emph{coproduct}. Graded Hopf algebras also have an \emph{antipode} $S: \mathcal{H}\longrightarrow\mathcal{H}$, whose general definition we will not need. Instead we will state the specific antipodes, as needed, later. Lastly, before we define our specific graded Hopf algebras, we recall one Hopf algebraic lemma, which will play a key role later. For $h\in \mathcal{H}$ and $a\in \mathcal{H}^{*}$, let the following be the respective coproducts in Sweedler notation.
\begin{eqnarray}\label{eq:coproductH} \Delta (h)&=& \displaystyle\sum_{h} h_{(1)}\otimes h_{(2)} \end{eqnarray} \begin{eqnarray}\label{eq:coproductHdual} \Delta (a)&=&\displaystyle\sum_{a} a_{(1)}\otimes a_{(2)} \end{eqnarray} Now define left actions of $\mathcal{H}^{*}$ on $\mathcal{H}$ and $\mathcal{H}$ on $\mathcal{H}^{*}$, both denoted by $\rightharpoonup$, as \begin{eqnarray}\label{eq:HdualactingonH} a \rightharpoonup h &=& \displaystyle\sum_{h}\langle h_{(2)},a\rangle h_{(1)}, \end{eqnarray} \begin{eqnarray}\label{eq:HactingonHdual} h\rightharpoonup a &=& \displaystyle\sum_{a} \langle h,a_{(2)}\rangle a_{(1)}, \end{eqnarray} where $a\in \mathcal{H}^{*}$, $h\in \mathcal{H}$. Then we have the following. \begin{lemma}\cite{lam-lauve-sottile}\label{lem:magiclemma} For all $g,h \in \mathcal{H}$ and $a\in \mathcal{H}^{*}$, we have that \begin{eqnarray*} (a\rightharpoonup g)\cdot h &= & \displaystyle\sum_{h} \left( S(h_{(2)})\rightharpoonup a \right)\rightharpoonup \left(g\cdot h_{(1)}\right) \end{eqnarray*} where $S:\mathcal{H}\longrightarrow \mathcal{H}$ is the antipode. \end{lemma} The graded Hopf algebra of quasisymmetric functions, $\ensuremath{\operatorname{QSym}}$ \cite{gessel}, is a subalgebra of $\mathbb{C} [[x_1, x_2, \ldots]]$ with a basis given by the following functions, which in turn are reliant on the natural bijection between compositions and sets, for which we first need to recall that $[i]$ for $i\geq 1$ denotes the set $\{1,2,\ldots , i\}$. Now we can state the bijection. 
Given a composition $\alpha = ( \alpha _1 , \ldots , \alpha _{\ell(\alpha)})$, there is a natural subset of $[|\alpha|-1]$ corresponding to it, namely, $$\mathrm{set} (\alpha) = \{ \alpha _1 , \alpha _1 + \alpha _2, \ldots , \alpha _1+\alpha _2 + \cdots + \alpha _{\ell(\alpha)-1}\} \mbox{ and } \mathrm{set}((|\alpha|))=\emptyset.$$Conversely, given a subset $S = \{ s_1< \cdots < s_{|S|}\}\subseteq [N-1]$, there is a natural composition of size $N$ corresponding to it, namely, $$\mathrm{comp} (S) = (s_1, s_2 - s_1, \ldots , N-s_{|S|}) \mbox{ and } \mathrm{comp}(\emptyset)=(N).$$ \begin{definition}\label{def:Fbasis} Let $\alpha = (\alpha _1, \ldots , \alpha _{\ell(\alpha)})$ be a composition. Then the \emph{fundamental quasisymmetric function} $F_\alpha$ is defined to be $F_\emptyset = 1$ and $$F_\alpha = \sum x_{i_1} \cdots x_{i_{|\alpha|}}$$where the sum is over all $|\alpha|$-tuples $(i_1, \ldots , i_{|\alpha|})$ of indices satisfying $$i_1\leq \cdots \leq i_{|\alpha|} \mbox{ and } i_j<i_{j+1} \mbox{ if } j \in \mathrm{set}(\alpha).$$ \end{definition} \begin{example}\label{ex:Fbasis} $F_{(1,2)} = x_1x_2^2 + x_1x_3^2 + \cdots + x_1x_2x_3 + x_1x_2x_4 + \cdots.$ \end{example} Then $\ensuremath{\operatorname{QSym}}$ is a graded Hopf algebra $$\ensuremath{\operatorname{QSym}} = \bigoplus _{N\geq 0} \ensuremath{\operatorname{QSym}} ^N$$where $$\ensuremath{\operatorname{QSym}} ^N = \operatorname{span} \{ F_\alpha \;|\; |\alpha| = N \}.$$The product for this basis is inherited from the product of monomials and Definition~\ref{def:Fbasis}. 
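The correspondence $\alpha \mapsto \mathrm{set}(\alpha)$ and $S \mapsto \mathrm{comp}(S)$ recalled above is easy to mechanize; a short Python sketch (hypothetical helper names, not part of the original text):

```python
def set_of(alpha):
    """set(alpha): the partial sums alpha_1, alpha_1 + alpha_2, ...,
    omitting the total |alpha|."""
    partial, out = 0, set()
    for a in alpha[:-1]:
        partial += a
        out.add(partial)
    return out

def comp_of(S, N):
    """comp(S): the composition of size N corresponding to S, a subset of [N-1]."""
    s = sorted(S)
    return tuple(b - a for a, b in zip([0] + s, s + [N]))

print(set_of((1, 2)))                   # {1}
print(comp_of({1}, 3))                  # (1, 2)
print(comp_of(set(), 3))                # (3,)
print(comp_of(set_of((3, 1, 2)), 6))    # round trip: (3, 1, 2)
```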
The coproduct \cite{gessel} is given by $\Delta(1)=1\otimes1$ and \begin{equation}\label{eq:Fcoproduct} \Delta(F_\alpha)= \sum _{\beta\gamma = \alpha \atop \mbox{ or }\beta\odot\gamma = \alpha}F_\beta \otimes F_\gamma \end{equation}and the antipode, which was discovered independently in \cite{ehrenborg-1, malvenuto-reutenauer}, is given by $S(1)=1$ and \begin{equation}\label{eq:antipode} S(F_\alpha)= (-1)^{|\alpha|}F_{\mathrm{comp}(\mathrm{set}(\alpha)^c)} \end{equation}where $\mathrm{set}(\alpha)^c$ is the complement of $\mathrm{set}(\alpha)$ in the set $[|\alpha| -1]$. \begin{example}\label{ex:Fcoprodantipode} $$\Delta (F_{(1,2)})=F_{(1,2)}\otimes 1 + F_{(1,1)}\otimes F_{(1)} + F_{(1)}\otimes F_{(2)} + 1\otimes F_{(1,2)}$$and $S(F_{(1,2)})=(-1)^3 F_{(2,1)}$. \end{example} However, this is not the only basis of $\ensuremath{\operatorname{QSym}}$ that will be useful to us. For the second basis we will need to define skew composition diagrams and then standard skew composition tableaux. For the first of these, let $\alpha, \beta$ be two compositions such that $\beta < _{c} \alpha$. Then we define the \emph{skew composition diagram} $\alpha {/\!\!/} \beta $ to be the array of boxes that are contained in $\alpha$ but not in $\beta$. That is, the boxes that arise in the saturated chain $\beta \lessdot _c \cdots \lessdot _c \alpha$. We say the \emph{size} of $\alpha {/\!\!/} \beta$ is $|\alpha {/\!\!/} \beta | = |\alpha| - |\beta|$. Note that if $\beta=\emptyset$, then we recover the composition diagram $\alpha$. \begin{example}\label{ex:skewshape} The skew composition diagram $(2,1,3){/\!\!/} (1)$ is drawn below with $\beta$ denoted by $\bullet$. $$\tableau{\ &\ \\ \ \\ \bullet&\ &\ \\}$$ \end{example} We can now define standard skew composition tableaux.
Given a saturated chain, $C$, in $\mathcal{L}_{c}$ $$\beta = \alpha ^0 \lessdot _c \alpha ^1 \lessdot _c \cdots \lessdot _c \alpha ^{|\alpha {/\!\!/} \beta|} = \alpha$$we define the \emph{standard skew composition tableau} ${\tau} _C$ of \emph{shape} $\alpha{/\!\!/} \beta$ to be the skew composition diagram $\alpha {/\!\!/} \beta$ whose boxes are filled with integers such that the number $|\alpha {/\!\!/} \beta| -i +1$ appears in the box in ${\tau} _C$ that exists in $\alpha ^i$ but not $\alpha ^{i-1}$ for $1\leq i\leq |\alpha {/\!\!/} \beta|$. If $\beta = \emptyset$, then we say that we have a \emph{standard composition tableau}. Given a standard skew composition tableau, ${\tau}$, whose shape has size $N$, we say that the \emph{descent set} of ${\tau}$ is $$\mathrm{Des} ({\tau}) = \{ i \;|\; i+1 \mbox{ appears weakly right of } i \} \subseteq [N-1]$$and the corresponding \emph{descent composition} of ${\tau}$ is $\mathrm{comp}({\tau})= \mathrm{comp}(\mathrm{Des}({\tau}))$. \begin{example}\label{ex:skewCT} The saturated chain $$(1)\lessdot _c (2) \lessdot _c (1,2) \lessdot _c (1,1,2) \lessdot _c (1,1,3) \lessdot _c (2,1,3)$$gives rise to the standard skew composition tableau ${\tau}$ of shape $(2,1,3){/\!\!/} (1)$ below. $$\tableau{3 &1\\ 4 \\ \bullet&5 &2 \\}$$Note that $\mathrm{Des}({\tau}) = \{1,3, 4\}$ and hence $\mathrm{comp}({\tau}) = (1,2,1,1)$. \end{example} With this in mind we can now define skew quasisymmetric Schur functions \cite[Proposition 3.1]{BLvW}. \begin{definition}\label{def:QSbasis} Let $\alpha {/\!\!/} \beta$ be a skew composition diagram. Then the \emph{skew quasisymmetric Schur function} ${\mathcal{S}} _{\alpha{/\!\!/} \beta}$ is defined to be $${\mathcal{S}} _{\alpha {/\!\!/} \beta} = \sum F_{\mathrm{comp} ({\tau})}$$where the sum is over all standard skew composition tableaux ${\tau}$ of shape $\alpha{/\!\!/} \beta$. When $\beta = \emptyset$ we call ${\mathcal{S}} _\alpha$ a \emph{quasisymmetric Schur function}.
\end{definition} \begin{example}\label{ex:QSbasis} We can see that ${\mathcal{S}} _{(n)} = F_{(n)}$ and ${\mathcal{S}} _{(1^n)} = F_{(1^n)}$ and $${\mathcal{S}} _{(2,1,3){/\!\!/} (1)} = F_{(2,1,2)}+ F_{(2,2,1)} + F_{(1,2,1,1)}$$from the standard skew composition tableaux below. $$\tableau{2 &1\\ 3 \\ \bullet&5 &4 \\}\qquad \tableau{2 &1\\ 4 \\ \bullet&5 &3 \\}\qquad \tableau{3 &1\\ 4 \\ \bullet&5 &2 \\}$$ \end{example} Moreover, the set of all quasisymmetric Schur functions forms another basis for $\ensuremath{\operatorname{QSym}}$ such that $$\ensuremath{\operatorname{QSym}} ^N = \operatorname{span} \{ {\mathcal{S}} _\alpha \;|\; |\alpha| = N\}$$and while explicit formulas for their product and antipode are still unknown, their coproduct \cite[Definition 2.19]{BLvW} is given by \begin{eqnarray}\label{eq:coproductquasischur} \Delta({\mathcal{S}}_{\alpha})=\displaystyle\sum_{\gamma} {\mathcal{S}}_{\alpha{/\!\!/}\gamma}\otimes {\mathcal{S}}_{\gamma} \end{eqnarray}where the sum is over all compositions $\gamma$. As discussed in the introduction, quasisymmetric Schur functions have many interesting algebraic and combinatorial properties, one of the first of which to be discovered was the exhibition of Pieri rules that utilise our box removing operators \cite[Theorem 6.3]{QS}. \begin{theorem}\emph{(Pieri rules for quasisymmetric Schur functions)}\label{the:QSPieri} Let $\alpha $ be a composition and $n$ be a positive integer. Then \begin{align*} {\mathcal{S}}_{\alpha}\cdot {\mathcal{S}}_{(n)}=\sum {\mathcal{S}}_{\alpha^+} \end{align*} where $\alpha^+$ is a composition such that $\alpha$ can be obtained by removing an $n$-horizontal strip from it. Similarly, \begin{align*} {\mathcal{S}}_{\alpha}\cdot {\mathcal{S}}_{(1^n)}=\sum {\mathcal{S}}_{\alpha^+ } \end{align*} where $\alpha^+$ is a composition such that $\alpha$ can be obtained by removing an $n$-vertical strip from it. 
\end{theorem} Dual to $\ensuremath{\operatorname{QSym}}$ is the graded Hopf algebra of noncommutative symmetric functions, $\ensuremath{\operatorname{NSym}}$, itself a subalgebra of $\mathbb{C} \langle\langle x_1, x_2, \ldots \rangle\rangle$ with many interesting bases \cite{GKLLRT}. The one of particular interest to us is the following \cite[Section 2]{BLvW}. \begin{definition}\label{def:NCbasis} Let $\alpha$ be a composition. Then the \emph{noncommutative Schur function} ${\mathbf{s}} _\alpha$ is the function under the duality pairing $\langle \ ,\ \rangle :\ensuremath{\operatorname{QSym}} \otimes \ensuremath{\operatorname{NSym}} \rightarrow \mathbb{C}$ that satisfies $$\langle {\mathcal{S}} _\alpha , {\mathbf{s}} _\beta \rangle = \delta _{\alpha\beta}$$where $\delta _{\alpha\beta} = 1$ if $\alpha = \beta$ and $0$ otherwise.\end{definition} Noncommutative Schur functions also have rich and varied algebraic and combinatorial properties, including Pieri rules, although due to the noncommutative nature of $\ensuremath{\operatorname{NSym}}$ there are now Pieri rules arising both from multiplication on the right \cite[Theorem 9.3]{tewari}, and from multiplication on the left \cite[Corollary 3.8]{BLvW}. We include them both here for completeness, and for use later. \begin{theorem}\emph{(Right Pieri rules for noncommutative Schur functions)}\label{the:RightPieri} Let $\alpha $ be a composition and $n$ be a positive integer. Then \begin{align*} {\mathbf{s}}_{\alpha }\cdot {\mathbf{s}}_{(n)}=\sum {\mathbf{s}}_{\alpha^+} \end{align*} where $\alpha^+$ is a composition such that it can be obtained by adding an $n$-right horizontal strip to $\alpha$. Similarly, \begin{align*} {\mathbf{s}}_{\alpha }\cdot {\mathbf{s}}_{(1^n)}=\sum {\mathbf{s}}_{\alpha^+} \end{align*} where $\alpha^+$ is a composition such that it can be obtained by adding an $n$-right vertical strip to $\alpha$.
\end{theorem} \begin{theorem}\emph{(Left Pieri rules for noncommutative Schur functions)}\label{the:LeftPieri} Let $\alpha $ be a composition and $n$ be a positive integer. Then \begin{align*} {\mathbf{s}}_{(n)} \cdot {\mathbf{s}}_{\alpha } =\sum {\mathbf{s}}_{\alpha^+} \end{align*} where $\alpha^+$ is a composition such that it can be obtained by adding an $n$-left horizontal strip to $\alpha$. Similarly, \begin{align*} {\mathbf{s}}_{(1^n)} \cdot {\mathbf{s}}_{\alpha } =\sum {\mathbf{s}}_{\alpha^+ } \end{align*} where $\alpha^+$ is a composition such that it can be obtained by adding an $n$-left vertical strip to $\alpha$.\end{theorem} Note that since quasisymmetric and noncommutative Schur functions are indexed by compositions, \emph{if any parts of size 0 arise during computation, then they are ignored}. \section{Generalized skew Pieri rules}\label{sec:skew} \subsection{Quasisymmetric skew Pieri rules}\label{subsec:QSymskewPieri} We now turn our attention to proving skew Pieri rules for skew quasisymmetric Schur functions. The statement of the rules is in the spirit of the Pieri rules for skew shapes of Assaf and McNamara \cite{assaf-mcnamara}, and this is no coincidence as we recover their rules as a special case in Corollary~\ref{cor:AM}. However, first we prove a crucial proposition. \begin{proposition}\label{ob:skewingisharpooning} Let $\alpha, \beta$ be compositions. Then ${\mathbf{s}}_{\beta} \rightharpoonup {\mathcal{S}}_{\alpha} = {\mathcal{S}}_{\alpha{/\!\!/}\beta}$. \end{proposition} \begin{proof} Recall Equation~\eqref{eq:coproductquasischur} states that $$ \Delta({\mathcal{S}}_{\alpha})=\displaystyle\sum_{\gamma} {\mathcal{S}}_{\alpha{/\!\!/}\gamma}\otimes {\mathcal{S}}_{\gamma} $$ where the sum is over all compositions $\gamma$.
Thus using Equations \eqref{eq:HdualactingonH} and \eqref{eq:coproductquasischur} we obtain \begin{eqnarray}\label{eq:harpoons} {\mathbf{s}}_{\beta} \rightharpoonup {\mathcal{S}}_{\alpha}= \displaystyle\sum_{\gamma} \langle {\mathcal{S}}_{\gamma},{\mathbf{s}}_{\beta}\rangle {\mathcal{S}}_{\alpha{/\!\!/}\gamma} \end{eqnarray} where the sum is over all compositions $\gamma$. Since by Definition~\ref{def:NCbasis}, $\langle {\mathcal{S}}_{\gamma},{\mathbf{s}}_{\beta}\rangle$ equals $1$ if $\beta= \gamma$ and $0$ otherwise, the claim follows. \end{proof} \begin{remark}\label{rem:terms that equal 0 for poset reasons} The proposition above does not tell us when ${\mathbf{s}}_{\beta} \rightharpoonup {\mathcal{S}}_{\alpha} = {\mathcal{S}}_{\alpha{/\!\!/}\beta}$ equals $0$. However, by the definition of $\alpha {/\!\!/} \beta$ this is precisely when $\alpha$ and $\beta$ satisfy $\beta \not< _{c} \alpha$. Consequently, in the theorem below the \emph{nonzero} contribution will only be from those $\alpha^+$ and $\beta^-$ that satisfy $\beta^- < _{c} \alpha^+$. As always, if any parts of size 0 arise during computation, then they are ignored. \end{remark} \begin{theorem}\label{the:QSskewPieri} Let $\alpha, \beta$ be compositions and $n$ be a positive integer. Then \begin{align*} {\mathcal{S}}_{\alpha{/\!\!/}\beta}\cdot {\mathcal{S}}_{(n)}=\sum_{i+j=n}(-1)^j{\mathcal{S}}_{\alpha^+{/\!\!/}\beta^-} \end{align*} where $\alpha^+$ is a composition such that $\alpha$ can be obtained by removing an $i$-horizontal strip from it, and $\beta^{-}$ is a composition such that it can be obtained by removing a $j$-vertical strip from $\beta$. Similarly, \begin{align*} {\mathcal{S}}_{\alpha{/\!\!/}\beta}\cdot {\mathcal{S}}_{(1^n)}=\sum_{i+j=n}(-1)^j{\mathcal{S}}_{\alpha^+{/\!\!/}\beta^-} \end{align*} where $\alpha^+$ is a composition such that $\alpha$ can be obtained by removing an $i$-vertical strip from it, and $\beta^{-}$ is a composition such that it can be obtained by removing a $j$-horizontal strip from $\beta$.
\end{theorem} \begin{proof} For the first part of the theorem, our aim is to calculate ${\mathcal{S}}_{\alpha{/\!\!/}\beta}\cdot {\mathcal{S}}_{(n)}$, which in light of Proposition \ref{ob:skewingisharpooning}, is the same as calculating $({\mathbf{s}}_{\beta} \rightharpoonup {\mathcal{S}}_{\alpha})\cdot {\mathcal{S}}_{(n)}$. Taking $a={\mathbf{s}}_{\beta}$, $g={\mathcal{S}}_{\alpha}$ and $h={\mathcal{S}}_{(n)}$ in Lemma \ref{lem:magiclemma} gives the LHS as $({\mathbf{s}}_{\beta} \rightharpoonup {\mathcal{S}}_{\alpha})\cdot {\mathcal{S}}_{(n)}$. For the RHS observe that, by Definition~\ref{def:QSbasis}, ${\mathcal{S}} _{(n)} = {F} _{(n)}$ and by Equation~\eqref{eq:Fcoproduct} we have that \begin{eqnarray}\label{eq:coproductF} \Delta({F}_{(n)})&=& \sum_{i+j=n}{F}_{(i)}\otimes {F}_{(j)}. \end{eqnarray} Substituting these in yields \begin{eqnarray}\label{eq:firststeprhs} \displaystyle\sum_{i+j=n}(S({F}_{(j)})\rightharpoonup {\mathbf{s}}_{\beta})\rightharpoonup({\mathcal{S}}_{\alpha}\cdot {F}_{(i)}). \end{eqnarray} Now, by Equation~\eqref{eq:antipode}, we have that $S({F}_{(j)})=(-1)^j{F}_{(1^j)}$. This reduces \eqref{eq:firststeprhs} to \begin{eqnarray}\label{eq:secondsteprhs} \displaystyle\sum_{i+j=n}((-1)^j{F}_{(1^j)}\rightharpoonup {\mathbf{s}}_{\beta})\rightharpoonup({\mathcal{S}}_{\alpha}\cdot {F}_{(i)}). \end{eqnarray} We will first deal with the task of evaluating ${F}_{(1^j)}\rightharpoonup {\mathbf{s}}_{\beta}$. We need to invoke Equation \eqref{eq:HactingonHdual} and thus we need $\Delta({\mathbf{s}}_{\beta})$. Assume that \begin{eqnarray} \Delta({\mathbf{s}}_{\beta})=\sum_{\gamma,\delta}b_{\gamma,\delta}^{\beta}{\mathbf{s}}_{\gamma}\otimes {\mathbf{s}}_{\delta} \end{eqnarray} where the sum is over all compositions $\gamma,\delta$. Thus Equation~\eqref{eq:HactingonHdual} yields \begin{eqnarray} {F}_{(1^j)}\rightharpoonup {\mathbf{s}}_{\beta}&=& \displaystyle\sum_{\gamma,\delta} b_{\gamma,\delta}^{\beta}\langle {F}_{(1^j)},{\mathbf{s}}_{\delta}\rangle{\mathbf{s}}_{\gamma}.
\end{eqnarray} Observing that, by Definition~\ref{def:QSbasis}, ${F}_{(1^j)}={\mathcal{S}}_{(1^j)}$ and that, by Definition~\ref{def:NCbasis}, $\langle {\mathcal{S}}_{(1^j)},{\mathbf{s}}_{\delta}\rangle$ equals $1$ if $\delta = (1^j)$ and equals $0$ otherwise, we obtain \begin{eqnarray}\label{eq:eharpooningncsstep1} {F}_{(1^j)}\rightharpoonup {\mathbf{s}}_{\beta} &=& \displaystyle\sum_{\gamma} b_{\gamma,(1^j)}^{\beta}{\mathbf{s}}_{\gamma}. \end{eqnarray} Since by Definition~\ref{def:NCbasis} and the duality pairing we have that $\langle {\mathcal{S}}_{\gamma}\otimes {\mathcal{S}}_{\delta},\Delta({\mathbf{s}}_{\beta})\rangle=\langle {\mathcal{S}}_{\gamma}\cdot{\mathcal{S}}_{\delta},{\mathbf{s}}_{\beta}\rangle=b_{\gamma,\delta}^{\beta}$, we get that \begin{eqnarray} \langle {\mathcal{S}}_{\gamma}\cdot{\mathcal{S}}_{(1^j)},{\mathbf{s}}_{\beta}\rangle&=& b_{\gamma,(1^j)}^{\beta}. \end{eqnarray} The Pieri rules for quasisymmetric Schur functions in Theorem~\ref{the:QSPieri} state that $b_{\gamma,(1^j)}^{\beta}$ is $1$ if there exists a weakly decreasing sequence $\ell_1\geq \ell_2\geq\cdots \geq \ell_j$ such that $\mathfrak{d}_{\ell_1}\cdots \mathfrak{d}_{\ell_j}(\beta)=\gamma$, and is $0$ otherwise. Thus this reduces Equation~\eqref{eq:eharpooningncsstep1} to \begin{eqnarray}\label{eq:eharpooningncsstep2} {F}_{(1^j)}\rightharpoonup {\mathbf{s}}_{\beta} &=& \displaystyle\sum_{\substack{\mathfrak{d}_{\ell_1}\cdots \mathfrak{d}_{\ell_j}(\beta)=\gamma\\\ell_1\geq\cdots \geq \ell_j}}{\mathbf{s}}_{\gamma}. \end{eqnarray} Since ${\mathcal{S}}_{(i)}={F}_{(i)}$, by Definition~\ref{def:QSbasis}, the Pieri rules in Theorem~\ref{the:QSPieri} also imply that \begin{eqnarray}\label{eq:rowpieriruleqschur} {\mathcal{S}}_{\alpha}\cdot {F}_{(i)}&=& \displaystyle \sum_{\substack{\mathfrak{d}_{r_1}\cdots \mathfrak{d}_{r_i}(\varepsilon)=\alpha\\r_1<\cdots < r_i}}{\mathcal{S}}_{\varepsilon}.
\end{eqnarray} Using Equations~\eqref{eq:eharpooningncsstep2} and \eqref{eq:rowpieriruleqschur} in \eqref{eq:secondsteprhs}, we get \begin{eqnarray} \displaystyle\sum_{i+j=n}((-1)^j{F}_{(1^j)}\rightharpoonup {\mathbf{s}}_{\beta})\rightharpoonup({\mathcal{S}}_{\alpha}\cdot {F}_{(i)})&=& \displaystyle\sum_{i+j=n}\left((-1)^j\displaystyle\sum_{\substack{\mathfrak{d}_{\ell_1}\cdots \mathfrak{d}_{\ell_j}(\beta)=\gamma\\\ell_1\geq\cdots \geq \ell_j}}{\mathbf{s}}_{\gamma}\right)\rightharpoonup \left( \sum_{\substack{\mathfrak{d}_{r_1}\cdots \mathfrak{d}_{r_i}(\varepsilon)=\alpha\\r_1<\cdots < r_i}}{\mathcal{S}}_{\varepsilon} \right). \nonumber\\ \end{eqnarray} Using Proposition \ref{ob:skewingisharpooning}, we obtain that \begin{eqnarray}\label{eq:penultimateskewpieri} \displaystyle\sum_{i+j=n}((-1)^j{F}_{(1^j)}\rightharpoonup {\mathbf{s}}_{\beta})\rightharpoonup({\mathcal{S}}_{\alpha}\cdot {F}_{(i)})&=&\displaystyle\sum_{i+j=n}\left(\sum_{\substack{\mathfrak{d}_{\ell_1}\cdots \mathfrak{d}_{\ell_j}(\beta)=\gamma\\\ell_1\geq\cdots \geq \ell_j\\\mathfrak{d}_{r_1}\cdots \mathfrak{d}_{r_i}(\varepsilon)=\alpha\\r_1<\cdots < r_i}}(-1)^j{\mathcal{S}}_{\varepsilon{/\!\!/}\gamma}\right). \end{eqnarray} Thus \begin{eqnarray}\label{eq:skewpierirule-row} {\mathcal{S}}_{\alpha{/\!\!/}\beta}\cdot {\mathcal{S}}_{(n)}&=&\displaystyle\sum_{i+j=n}\left(\sum_{\substack{\mathfrak{d}_{\ell_1}\cdots \mathfrak{d}_{\ell_j}(\beta)=\gamma\\\ell_1\geq\cdots \geq \ell_j\\\mathfrak{d}_{r_1}\cdots \mathfrak{d}_{r_i}(\varepsilon)=\alpha\\r_1<\cdots < r_i}}(-1)^j{\mathcal{S}}_{\varepsilon{/\!\!/}\gamma}\right). \end{eqnarray} The first part of the theorem now follows from the definitions of $i$-horizontal strip and $j$-vertical strip.
For the second part of the theorem we use the same method as the first part, but this time calculate $$({\mathbf{s}}_{\beta} \rightharpoonup {\mathcal{S}}_{\alpha})\cdot {\mathcal{S}}_{(1^n)}.$$ \end{proof} \begin{remark}\label{rem:noomega} Notice that as opposed to the classical case where one can apply the $\omega$ involution to obtain the corresponding Pieri rule, we cannot do this here. This is because the image of the skew quasisymmetric Schur functions under the $\omega$ involution is not yet known explicitly. Notice that the $\omega$ map applied to quasisymmetric Schur functions results in the row-strict quasisymmetric Schur functions of Mason and Remmel \cite{mason-remmel}. \end{remark} \begin{example}\label{ex:skewQSPierirow} Let us compute ${\mathcal{S}}_{(1,3,2){/\!\!/} (2,1)}\cdot {\mathcal{S}}_{(2)}$. We first need to compute all compositions $\gamma$ that can be obtained by removing a vertical strip of size at most 2 from $\beta=(2,1)$. These compositions correspond to the white boxes in the diagrams below, while the boxes in the darker shade of red correspond to the vertical strips that are removed from $\beta$. $$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white) & *(white)\\ *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) & *(white)\\ *(red!80) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) & *(red!80)\\ *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) & *(red!80)\\ *(red!80) \end{ytableau} $$ Next we need to compute all compositions $\varepsilon$ such that a horizontal strip of size at most $2$ can be removed from it so as to obtain $\alpha$. We list these $\varepsilon$s below with the boxes in the lighter shade of green corresponding to horizontal strips that need to be removed to obtain $\alpha$.
$$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(white) & *(white)\\ *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(green!70)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(green!70)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) &*(green!70)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) \\ *(green!70)\\ *(white) & *(white) & *(green!70) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) & *(green!70)\\ *(green!70)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(green!70)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(green!70)\\ *(white) & *(white) & *(white) & *(green!70)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) \\ *(white) & *(white) \\ *(green!70) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) \\ *(white) & *(white) & *(green!70)\\ *(green!70) \\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) & *(green!70) \\ *(white) & *(white) \\ *(green!70) \\ \end{ytableau} \hspace{5mm} $$ $$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) \\ *(white) & *(white) & *(green!70) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) & 
*(green!70) \\ *(white) & *(white) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) & *(green!70) & *(green!70)\\ *(white) & *(white) \\ \end{ytableau} $$ Now to compute ${\mathcal{S}}_{(1,3,2){/\!\!/} (2,1)}\cdot {\mathcal{S}}_{(2)}$, our result tells us that for every pair of compositions in the above lists $(\varepsilon,\gamma)$ such that (the number of green boxes in $\varepsilon$)+(the number of red boxes in $\gamma$)=2, and $\gamma < _{c} \varepsilon$ we have a term ${\mathcal{S}}_{\varepsilon {/\!\!/} \gamma}$ with a sign $(-1)^{\text{number of red boxes}}$. Hence we have the following expansion, suppressing commas and parentheses in compositions for ease of comprehension. \begin{align*} {\mathcal{S}}_{132{/\!\!/} 21}\cdot {\mathcal{S}}_{2}=&{\mathcal{S}}_{132{/\!\!/} 1}-{\mathcal{S}}_{1321{/\!\!/} 11}-{\mathcal{S}}_{1312{/\!\!/} 2}-{\mathcal{S}}_{1132{/\!\!/}2}-{\mathcal{S}}_{1132{/\!\!/} 11}-{\mathcal{S}}_{133{/\!\!/} 2}-{\mathcal{S}}_{133{/\!\!/} 11}\\&-{\mathcal{S}}_{142{/\!\!/} 2}-{\mathcal{S}}_{142{/\!\!/} 11}+{\mathcal{S}}_{1133{/\!\!/} 21}+{\mathcal{S}}_{1142{/\!\!/} 21}+{\mathcal{S}}_{1322{/\!\!/} 21}+{\mathcal{S}}_{1331{/\!\!/} 21}\\&+{\mathcal{S}}_{1421{/\!\!/} 21}+{\mathcal{S}}_{143{/\!\!/} 21}+{\mathcal{S}}_{152{/\!\!/} 21} \end{align*} \end{example} \begin{example}\label{ex:skewQSPiericol} Let us compute ${\mathcal{S}}_{(1,3,2){/\!\!/} (2,1)}\cdot {\mathcal{S}}_{(1,1)}$. We first need to compute all compositions $\gamma$ that can be obtained by removing a horizontal strip of size at most 2 from $\beta=(2,1)$. These compositions correspond to the white boxes in the diagrams below, while the boxes in the darker shade of red correspond to the horizontal strips that are removed from $\beta$. 
$$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white) & *(white)\\ *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) & *(white)\\ *(red!80) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) & *(red!80)\\ *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) & *(red!80)\\ *(red!80) \end{ytableau} $$ Next we need to compute all compositions $\varepsilon$ such that a vertical strip of size at most $2$ can be removed from it so as to obtain $\alpha$. We list these $\varepsilon$s below with the boxes in the lighter shade of green corresponding to vertical strips that need to be removed to obtain $\alpha$. $$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(white) & *(white)\\ *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(green!70)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(green!70)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) &*(green!70)\\ *(white) & *(white) \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) \\ *(green!70)\\ *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(green!70)\\ *(white) & *(white) \\ *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(green!70)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) \\ *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white)\\ *(green!70)\\ *(green!70)\\ *(white) & *(white) \\ \end{ytableau} \hspace{5mm} 
\begin{ytableau} *(white)\\ *(green!70)\\ *(white) & *(white) & *(white)\\ *(green!70)\\ *(white) & *(white) \\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(green!70)\\ *(green!70)\\ *(white) & *(white) & *(white)\\ *(white) & *(white) \\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) \\ *(white) & *(white) & *(green!70) \\ *(green!70) \\ \end{ytableau} \hspace{5mm} $$ $$ \ytableausetup{smalltableaux,boxsize=0.5em} \begin{ytableau} *(white)\\ *(white) & *(white) & *(white) \\ *(green!70)\\ *(white) & *(white) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white)\\ *(green!70)\\ *(white) & *(white) & *(white) \\ *(white) & *(white) & *(green!70) \\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) & *(green!70)\\ *(white) & *(white) & *(white) \\ *(white) & *(white) & *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) \\ *(white) & *(white) & *(white) & *(green!70)\\ *(white) & *(white) \\ *(green!70)\\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) \\ *(white) & *(white) & *(white) & *(green!70)\\ *(green!70)\\ *(white) & *(white) \\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) \\ *(green!70)\\ *(white) & *(white) & *(white) & *(green!70)\\ *(white) & *(white) \\ \end{ytableau} \hspace{5mm} \begin{ytableau} *(white) \\ *(white) & *(white) & *(white) & *(green!70) \\ *(white) & *(white) & *(green!70)\\ \end{ytableau} $$ Now to compute ${\mathcal{S}}_{(1,3,2){/\!\!/} (2,1)}\cdot {\mathcal{S}}_{(1,1)}$, our result tells us that for every pair of compositions in the above lists $(\varepsilon,\gamma)$ such that (the number of green boxes in $\varepsilon$)+(the number of red boxes in $\gamma$)=2 and $\gamma < _{c} \varepsilon$, we have a term ${\mathcal{S}}_{\varepsilon{/\!\!/} \gamma}$ with a sign $(-1)^{\text{number of red boxes}}$. Hence we have the following expansion, suppressing commas and parentheses in compositions for ease of comprehension.
\begin{align*} {\mathcal{S}}_{132{/\!\!/} 21}\cdot {\mathcal{S}}_{11}=&{\mathcal{S}}_{132{/\!\!/} 1}-{\mathcal{S}}_{1321{/\!\!/} 11}-{\mathcal{S}}_{1312{/\!\!/} 2}-{\mathcal{S}}_{1132{/\!\!/}2}-{\mathcal{S}}_{1132{/\!\!/} 11}-{\mathcal{S}}_{133{/\!\!/} 2}-{\mathcal{S}}_{133{/\!\!/} 11}\\&-{\mathcal{S}}_{142{/\!\!/} 2}-{\mathcal{S}}_{142{/\!\!/} 11}+{\mathcal{S}}_{13121{/\!\!/} 21}+{\mathcal{S}}_{11321{/\!\!/} 21}+{\mathcal{S}}_{11132{/\!\!/} 21}+{\mathcal{S}}_{1331{/\!\!/} 21}\\&+{\mathcal{S}}_{1133{/\!\!/} 21}+{\mathcal{S}}_{233{/\!\!/} 21}+{\mathcal{S}}_{1421{/\!\!/} 21}+{\mathcal{S}}_{1142{/\!\!/} 21}+{\mathcal{S}}_{143{/\!\!/} 21} \end{align*} \end{example} We now turn our attention to skew Schur functions, which we will define in the next paragraph, after we first discuss some motivation for our attention. Skew Schur functions can be written as a sum of skew quasisymmetric Schur functions \cite[Lemma 2.23]{SSQSS}, so one might ask whether we can recover the Pieri rules for skew shapes of Assaf and McNamara by expanding a skew Schur function as a sum of skew quasisymmetric Schur functions, applying our quasisymmetric skew Pieri rules and then collecting suitable terms. However, a much simpler proof exists. A skew Schur function $s _{\lambda/\mu}$ for partitions $\lambda, \mu$ where $\ell(\lambda)\geq\ell(\mu)$, can be defined, given an $M>\ell(\lambda)$, by \cite[Section 5.1]{BLvW} \begin{equation}\label{eq:skewSchur}s _{\lambda/\mu}={\mathcal{S}} _{\lambda + 1^{M} {/\!\!/} \mu + 1^{M}}\end{equation}where $\lambda + 1^{M} = (\lambda _1 + 1, \ldots, \lambda_{\ell(\lambda)} +1, 1^{M-\ell(\lambda)})$, and $\mu + 1^{M} = (\mu _1 + 1, \ldots, \mu_{\ell(\mu)} +1, 1^{M -\ell(\mu)})$. It follows immediately that $s_{(n)}=s_{(n)/\emptyset}={\mathcal{S}} _{(n)}$ and $s_{(1^n)}=s_{(1^n)/\emptyset}={\mathcal{S}} _{(1^n)}$ by Equation~\eqref{eq:skewSchur}. Then as a corollary of our skew Pieri rules we recover the skew Pieri rules of Assaf and McNamara as follows. 
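For instance, Equation~\eqref{eq:skewSchur} with $\lambda = (2,1)$, $\mu = (1)$ and $M=3$ gives $\lambda + 1^{3} = (3,2,1)$ and $\mu + 1^{3} = (2,1,1)$, so that $$s_{(2,1)/(1)} = {\mathcal{S}} _{(3,2,1){/\!\!/} (2,1,1)}.$$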
\begin{corollary}\cite[Theorem 3.2]{assaf-mcnamara}\label{cor:AM} Let $\lambda, \mu$ be partitions where $\ell(\lambda)\geq\ell(\mu)$ and $n$ be a positive integer. Then $$s_{\lambda/\mu}\cdot s_{(n)} = \sum _{i+j = n} (-1) ^j s _{\lambda^+/\mu ^-}$$ where $\lambda^+$ is a partition such that the boxes of $\lambda ^+$ not in $\lambda$ are $i$ boxes such that no two lie in the same column, and $\mu ^-$ is a partition such that the boxes of $\mu$ not in $\mu ^-$ are $j$ boxes such that no two lie in the same row. Similarly, $$s_{\lambda/\mu}\cdot s_{(1^n)} = \sum _{i+j = n} (-1) ^j s _{\lambda^+/\mu ^-}$$ where $\lambda^+$ is a partition such that the boxes of $\lambda ^+$ not in $\lambda$ are $i$ boxes such that no two lie in the same row, and $\mu ^-$ is a partition such that the boxes of $\mu$ not in $\mu ^-$ are $j$ boxes such that no two lie in the same column. \end{corollary} \begin{proof} Let $N>\ell(\lambda)+n+1$. Then consider the product ${\mathcal{S}} _{\lambda + 1^N {/\!\!/} \mu + 1^N} \cdot {\mathcal{S}} _{(n)}$ (respectively, ${\mathcal{S}} _{\lambda + 1^N {/\!\!/} \mu + 1^N} \cdot {\mathcal{S}} _{(1^n)}$) where $\lambda, \mu$ are partitions and $\ell(\lambda)\geq\ell(\mu)$. By the paragraph preceding the corollary, this is equivalent to what we are trying to compute. 
To begin, we claim that if $$\mathfrak{d} _1\mathfrak{d} _{r_2} \cdots \mathfrak{d} _{r_i} (\alpha ')= \lambda + 1^N$$where $1<r_2<\cdots<r_i$ (respectively, $\mathfrak{d} _{r _1} \cdots \mathfrak{d} _{r _{i-q}} \mathfrak{d} _1^q (\alpha ')= \lambda + 1^N$ where $q\geq 0$ and $r _1 \geq \cdots \geq r _{i-q} >1$) and $$\mathfrak{d} _{\ell _1} \cdots \mathfrak{d} _{\ell _{j-p}} \mathfrak{d} _1^p (\mu+ 1^N) = \beta '$$ where $p\geq 0$ and $\ell _1 \geq \cdots \geq \ell _{j-p} >1$ (respectively, $\mathfrak{d} _1\mathfrak{d} _{\ell_2} \cdots \mathfrak{d} _{\ell_j} (\mu+ 1^N) = \beta '$ where $1<\ell_2<\cdots< \ell_j$) and $$\mathfrak{d} _{r_2} \cdots \mathfrak{d} _{r_i} (\alpha '')= \lambda + 1^N$$(respectively, $\mathfrak{d} _{r _1} \cdots \mathfrak{d} _{r _{i-q}} \mathfrak{d} _1^{q+1}(\alpha '')= \lambda + 1^N$) and $$\mathfrak{d} _{\ell _1} \cdots \mathfrak{d} _{\ell _{j-p}} \mathfrak{d} _1^{p+1} (\mu+ 1^N) = \beta ''$$(respectively, $\mathfrak{d} _{\ell_2} \cdots \mathfrak{d} _{\ell_j}(\mu+ 1^N) = \beta ''$) then $\beta ' < _{c} \alpha ' $ if and only if $\beta '' < _{c} \alpha ''$. This follows from three facts. Firstly $\ell(\lambda) \geq \ell(\mu)$, secondly $\alpha ' $ and $\alpha ''$ only differ by $\mathfrak{d} _1$, and thirdly $\beta '$ and $\beta ''$ only differ by $\mathfrak{d} _1$. Moreover, ${\alpha ' {/\!\!/} \beta '}= {\alpha '' {/\!\!/} \beta ''}$. Furthermore, by our skew Pieri rules in Theorem~\ref{the:QSskewPieri}, the summands ${\mathcal{S}} _{\alpha ' {/\!\!/} \beta '}$ and ${\mathcal{S}}_{\alpha '' {/\!\!/} \beta ''}$ will be of opposite sign, and thus will cancel since ${\alpha ' {/\!\!/} \beta '}= {\alpha '' {/\!\!/} \beta ''}$. 
Consequently, any nonzero summand appearing in the product ${\mathcal{S}} _{\lambda + 1^N {/\!\!/} \mu + 1^N} \cdot {\mathcal{S}} _{(n)}$ (respectively, ${\mathcal{S}} _{\lambda + 1^N {/\!\!/} \mu + 1^N} \cdot {\mathcal{S}} _{(1^n)}$) is such that no box can be removed from the first column of $(\lambda + 1^N)^+$ to obtain $\lambda + 1^N$, nor from the first column of $\mu+ 1^N$ to obtain $(\mu+ 1^N)^-$. Next observe that we can obtain $\lambda + 1^N$ by removing an $i$-horizontal (respectively, $i$-vertical) strip not containing a box in the first column from $(\lambda + 1^N)^+$ if and only if $(\lambda + 1^N)^+ = \lambda ^+ + 1^N$ where $\lambda ^+$ is a partition such that the boxes of $\lambda ^+$ not in $\lambda$ are $i$ boxes such that no two lie in the same column (respectively, row). Similarly, we can obtain $(\mu+ 1^N)^-$ by removing a $j$-vertical (respectively, $j$-horizontal) strip not containing a box in the first column from $\mu+ 1^N$ if and only if $(\mu+ 1^N)^- = \mu^-+ 1^N$ where $\mu ^-$ is a partition such that the boxes of $\mu$ not in $\mu ^-$ are $j$ boxes such that no two lie in the same row (respectively, column). \end{proof} \subsection{Noncommutative skew Pieri rules}\label{subsec:NSymskewPieri} It is also natural to ask whether skew Pieri rules exist for the dual counterparts to skew quasisymmetric Schur functions and whether our methods are applicable in order to prove them. To answer this we first need to define these dual counterparts, namely skew noncommutative Schur functions. \begin{definition}\label{def:ncsskew} Given compositions $\alpha, \beta$, the \emph{skew noncommutative Schur function} ${\mathbf{s}}_{\alpha/\beta}$ is defined implicitly via the equation \begin{eqnarray*} \Delta({\mathbf{s}}_{\alpha})&=&\displaystyle\sum_{\beta}{\mathbf{s}}_{\alpha/\beta}\otimes {\mathbf{s}}_{\beta} \end{eqnarray*} where the sum ranges over all compositions $\beta$.
\end{definition} With this definition and using Equation~\eqref{eq:HactingonHdual} we can deduce that $${\mathcal{S}}_{\beta} \rightharpoonup {\mathbf{s}}_{\alpha}= {\mathbf{s}}_{\alpha/\beta}$$via a proof almost identical to that of Proposition~\ref{ob:skewingisharpooning}. We know from Definition~\ref{def:QSbasis} that ${\mathcal{S}} _{(n)} = F_{(n)}$ and ${\mathcal{S}} _{(1^n)} = F_{(1^n)}$. Combined with the product for fundamental quasisymmetric functions using Definition~\ref{def:Fbasis}, Definition~\ref{def:NCbasis}, and the duality pairing, it is straightforward to deduce that for $n\geq 1$ the coproduct on ${\mathbf{s}} _{(n)}$ and ${\mathbf{s}} _{(1^n)}$ is given by $$\Delta({\mathbf{s}} _{(n)}) = \sum _{i+j=n} {\mathbf{s}} _{(i)}\otimes {\mathbf{s}} _{(j)} \qquad\Delta({\mathbf{s}} _{(1^n)}) = \sum _{i+j=n} {\mathbf{s}} _{(1^i)}\otimes {\mathbf{s}} _{(1^j)}.$$Also the action of the antipode $S$ on ${\mathbf{s}} _{(n)}$ and ${\mathbf{s}} _{(1^n)}$ is given by $$S({\mathbf{s}} _{(j)})= (-1)^j {\mathbf{s}} _{(1^j)} \qquad S({\mathbf{s}} _{(1^j)})= (-1)^j {\mathbf{s}} _{(j)}.$$Using all the above in conjunction with the right Pieri rules for noncommutative Schur functions in Theorem~\ref{the:RightPieri} yields our concluding theorem, whose proof is analogous to the proof of Theorem~\ref{the:QSskewPieri}, and hence is omitted. As always, if any parts of size 0 arise during computation, then they are ignored. \begin{theorem}\label{the:NCskewPieri} Let $\alpha, \beta$ be compositions and $n$ be a positive integer. Then \begin{align*} {\mathbf{s}}_{\alpha/\beta}\cdot {\mathbf{s}}_{(n)}=\sum_{i+j=n}(-1)^j{\mathbf{s}}_{\alpha^+/\beta^-} \end{align*} where $\alpha^+$ is a composition such that it can be obtained by adding an $i$-right horizontal strip to $\alpha$, and $\beta^{-}$ is a composition such that $\beta$ can be obtained by adding a $j$-right vertical strip to it.
Similarly, \begin{align*} {\mathbf{s}}_{\alpha/\beta}\cdot {\mathbf{s}}_{(1^n)}=\sum_{i+j=n}(-1)^j{\mathbf{s}}_{\alpha^+/\beta^-} \end{align*} where $\alpha^+$ is a composition such that it can be obtained by adding an $i$-right vertical strip to $\alpha$, and $\beta^{-}$ is a composition such that $\beta$ can be obtained by adding a $j$-right horizontal strip to it. \end{theorem} \end{document}
\begin{document} \title{Fields of character values for finite special unitary groups} \begin{abstract} Turull has described the fields of values for characters of $SL_n(q)$ in terms of the parametrization of the characters of $GL_n(q)$. In this article, we extend these results to the case of $SU_n(q)$.\\ \\ \noindent 2010 {\em AMS Subject Classification}: 20C15, 20C33 \end{abstract} \section{Introduction} It is a problem of general interest to understand the fields of values of the complex characters of finite groups, as these fields often reflect important or subtle properties of the group itself. Turull \cite[Section 4]{turull01} computed the fields of character values of the finite special linear groups $SL_n(q)$ by using properties of degenerate Gelfand-Graev characters of $GL_n(q)$. In this paper, we extend these methods to compute the fields of character values for the finite special unitary groups $SU_n(q)$. In particular, we use properties of generalized Gelfand-Graev characters of $SU_n(q)$ and the full unitary group $GU_n(q)$ to get this information. Further, we frame these methods so that we obtain many results for both $SL_n(q)$ and $SU_n(q)$ simultaneously. Turull also computes the Schur indices of the characters of $SL_n(q)$. This appears to be a much more difficult problem for $SU_n(q)$. For example, it is helpful in the $SL_n(q)$ case that the Schur index for every character of $GL_n(q)$ is 1. However, the Schur indices of the characters of $GU_n(q)$ are not all explicitly known, but are known to take values other than 1. This paper is organized as follows. In Section \ref{sec:Chars}, we establish the necessary results from character theory that are needed for the main arguments. 
In Sections \ref{sec:LusztigInd} and \ref{sec:Param}, we give some tools from Deligne-Lusztig theory and the parameterization of the characters of $GL^{\epsilon}_n(q)$, respectively, and we use these to describe the characters of $SL^{\epsilon}_n(q)$ in Section \ref{sec:Restriction}. We introduce generalized Gelfand-Graev characters in Section \ref{sec:GGGR}. In Section \ref{sec:Initial}, we obtain some preliminary results on fields of character values which follow quickly from the material in Section \ref{sec:Chars}. To deal with the harder cases, we need some explicit information on unipotent elements obtained in Section \ref{sec:Unipotent}, and we apply this information to generalized Gelfand-Graev characters in Section \ref{sec:MoreGGGR}. Finally, in Section \ref{sec:Main} we prove our main results in Theorem \ref{thm:turullext1} and Corollary \ref{cor:turullext}, which give explicitly the fields of values of any character of $SU_n(q)$ and a description of the real-valued characters of $SU_n(q)$, as well as recover the corresponding results for $SL_n(q)$ originally found in \cite[Section 4]{turull01}. \subsection*{Notation} We will often use the notations found in \cite{turull01}, for clarity of analogous statements. For example, the natural action of a Galois automorphism $\sigma\in\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ on a character $\chi$ of a group will be denoted $\sigma\chi$. Here for a group element $g$, the value of $\sigma\chi$ is given by $\sigma\chi(g)=\sigma(\chi(g))$. We write $\mathbb{Q}(\chi)$ for the field obtained from $\mathbb{Q}$ by adjoining all values of the character $\chi$. For an integer $n$, we will write $n=n_2n_{2'}$, where $n_2$ is a $2$-power and $n_{2'}$ is odd. Further, for an element $x$ of a finite group $Y$, we write $x=x_2x_{2'}$, where $x_2$ has $2$-power order and $x_{2'}$ has odd order.
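To illustrate this notation with a small example: if $n=24$, then $n_2=8$ and $n_{2'}=3$; likewise, if $x$ has order $24$, then $x_2=x^9$ and $x_{2'}=x^{16}$ are the unique commuting elements of orders $8$ and $3$, respectively, with $x=x_2x_{2'}$ (both are powers of $x$).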
We denote by $|x|$ the order of the element $x$ (we also use this notation for cardinality and size of partitions, which will be clear from context). We write $\mathrm{Irr}(Y)$ for the set of all irreducible complex characters of the group $Y$. Given two elements $g, x$ in $Y$, we write $g^x = x^{-1}gx$, and for $\chi \in \mathrm{Irr}(Y)$, we define $\chi^x$ by $\chi^x(g) = \chi(xgx^{-1})$. For a subgroup $X\leq Y$, we write $\mathrm{Ind}_X^Y(\varphi)$ for the character of $Y$ induced from a character $\varphi$ of $X$, and we write $\res^Y_X(\chi)$ for the character of $X$ restricted from a character $\chi$ of $Y$. We will further use $\mathrm{Irr}(Y|\varphi)$ and $\mathrm{Irr}(X|\chi)$ to denote the set of irreducible constituents of $\mathrm{Ind}_X^Y(\varphi)$ and $\res^Y_X(\chi)$, respectively. Throughout the article, let $q$ be a power of a prime $p$ and let $G=SL^\epsilon_n(q)$ and $\wt{G}=GL^\epsilon_n(q)$, where $\epsilon\in\{\pm1\}$. Here when $\epsilon = 1$, we mean $\wt{G}=GL_n(q)$ and $G=SL_n(q)$, and when $\epsilon=-1$, we mean $\wt{G}=GU_n(q)$ and $G=SU_n(q)$. We also write $\bg{G}=SL_n(\bar{\mathbb{F}}_q)$ and $\wt{\bg{G}}=GL_n(\bar{\mathbb{F}}_q)$ for the corresponding algebraic groups, so that $\wt{G}=\wt{\bg{G}}^{F_\epsilon}$ and $G=\bg{G}^{F_\epsilon}$ for an appropriate Frobenius morphism $F_{\epsilon}$. \section{Characters} \label{sec:Chars} \subsection{Lusztig Induction} \label{sec:LusztigInd} For this section, we let $\mathbf{H}$ be any connected reductive group over $\bar{\mathbb{F}}_q$ with Frobenius map $F$, and write $H = \mathbf{H}^F$. For any $F$-stable Levi subgroup $\mathbf{L}$ of $\mathbf{H}$, contained in a parabolic subgroup $\mathbf{P}$, we write $L = \mathbf{L}^F$ and denote by $R_L^H = R_{\mathbf{L} \subset \mathbf{P}}^{\mathbf{H}}$ the Lusztig (or twisted) induction functor.
When $\mathbf{P}$ may be chosen to be an $F$-stable parabolic, then $R_L^H$ becomes Harish-Chandra induction. When $\mathbf{L} = \mathbf{T}$ is chosen to be a maximal torus and $\theta$ is a character of $T = \mathbf{T}^F$, then $R_T^H(\theta)$ is the corresponding Deligne-Lusztig (virtual) character. We need the following basic result regarding actions on characters of finite reductive groups obtained through twisted induction. \begin{lemma} \label{DLlemma} Let $\mathbf{H}$ and $H = \mathbf{H}^F$ be as above. Let $\mathbf{L}$ be an $F$-stable Levi subgroup of $\mathbf{H}$, and write $L = \mathbf{L}^F$. Let $\chi$ be a character of $L$, $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$, and $\alpha$ a linear character of $H$ which is trivial on unipotent elements. Then $$ \sigma R_{L}^H(\chi) = R_L^H(\sigma \chi) \quad \text{ and } \quad \alpha R_L^H(\chi) = R_L^H(\alpha \chi).$$ In particular, when $\mathbf{L} = \mathbf{T}$ is a maximal torus and $\chi = \theta$ is a character of $T = \mathbf{T}^F$, then we have $$ \sigma R_{T}^H(\theta) = R_T^H(\sigma \theta) \quad \text{ and } \quad \alpha R_T^H(\theta) = R_T^H(\alpha \theta).$$ \end{lemma} \begin{proof} From \cite[Proposition 11.2]{dignemichel}, for any $g \in H$ we have $$R_L^H(\chi)(g) = \frac{1}{|L|}\sum_{l \in L} \mathrm{Tr}((g,l^{-1})|X) \chi(l),$$ where $\mathrm{Tr}((g,l^{-1})|X)$ is the Lefschetz number corresponding to the $H \times L$-action on the $\ell$-adic cohomology $X$ of the relevant Deligne-Lusztig variety. In particular, these numbers are rational integers (by \cite[Corollary 10.6]{dignemichel}, for example). Thus, $$\sigma R_L^H(\chi)(g) = \frac{1}{|L|}\sum_{l \in L} \mathrm{Tr}((g,l^{-1})|X) \sigma\chi(l) = R_L^H(\sigma \chi)(g).$$ Now let $g \in H$ have Jordan decomposition $g=su$, so that $s \in H$ is semisimple and $u \in H$ is unipotent, and we have $\alpha(g)=\alpha(s)$.
From \cite[Proposition 12.2]{dignemichel} we have \begin{equation} \label{DLGreen} R_L^H(\chi)(g) = \frac{1}{|L||C_{\mathbf{H}}^{\circ}(s)^F|} \sum_{\{ h \in H \, \mid \, s \in h \mathbf{L} h^{-1} \}} |C_{h \mathbf{L}h^{-1}}^{\circ} (s)^F| \sum_{ v \in C_{h \mathbf{L}h^{-1}}^{\circ} (s)^F_{\mathrm{u}}} Q_{C_{h \mathbf{L}h^{-1}}^{\circ} (s)}^{C_{\mathbf{H}}^{\circ}(s)}(u, v^{-1}) \chi^h(sv), \end{equation} where $C_{\mathbf{H}}^{\circ}(s)$ denotes the connected component of the centralizer, and $Q_{C_{h\mathbf{L}h^{-1}}^{\circ}(s)}^{C_{\mathbf{H}}^{\circ}(s)}$ denotes the Green function. Note that for any $h \in H$ from the first sum of \eqref{DLGreen}, and any unipotent $v$ from the second sum of \eqref{DLGreen}, we have $$\alpha(g) = \alpha(s) = \alpha(sv) = \alpha(h sv h^{-1}) = \alpha^h(sv).$$ So, if we multiply \eqref{DLGreen} by $\alpha(g)$, we may pass this factor through the sums to obtain $$ \alpha(g) \chi^h(sv) = \alpha^h(sv) \chi^h(sv) = (\alpha \chi)^h(sv).$$ It follows that we have $\alpha(g) R_L^H(\chi)(g) = R_L^H(\alpha \chi)(g)$, as claimed. \end{proof} \subsection{Parametrization of Characters of $GL^\epsilon_n(q)$} \label{sec:Param} We identify $GL_1(\bar{\mathbb{F}}_q)$ with $\bar{\mathbb{F}}_q^{\times}$, and so $F_{\epsilon}$ acts on $\bar{\mathbb{F}}_q^{\times}$ via $F_{\epsilon}(a)=a^{\epsilon q}$. For any integer $k \geq 1$, we define $T_k$ to be the multiplicative subgroup of $\bar{\mathbb{F}}_q^{\times}$ fixed by $F_{\epsilon}^k$, that is $$T_k = (\bar{\mathbb{F}}_q^{\times})^{F_{\epsilon}^k}.$$ We denote by $\wh{T}_k$ the multiplicative group of complex-valued linear characters of $T_k$. Whenever $d|k$, we have the natural norm map $\mathrm{Nm}_{k,d}=\mathrm{Nm}$ from $T_k$ to $T_d$, and the transpose map $\wh{\mathrm{Nm}}$ gives a norm map from $\wh{T}_d$ to $\wh{T}_k$, where $\wh{\mathrm{Nm}}(\xi) = \xi \circ \mathrm{Nm}$.
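For orientation, we record the standard explicit descriptions of these objects. Since $F_{\epsilon}^k(a)=a^{(\epsilon q)^k}$, the group $T_k$ consists of the elements of $\bar{\mathbb{F}}_q^{\times}$ satisfying $a^{(\epsilon q)^k-1}=1$, and so $T_k$ is cyclic of order $q^k-\epsilon^k$. In particular, $T_1=\mathbb{F}_q^{\times}$ when $\epsilon=1$, while for $\epsilon=-1$, $T_1$ is the group of $(q+1)$-roots of unity in $\mathbb{F}_{q^2}^{\times}$. The norm map may be written as the product of the $F_{\epsilon}^d$-conjugates,
$$\mathrm{Nm}_{k,d}(a) = \prod_{i=0}^{k/d-1} F_{\epsilon}^{di}(a) = a^{((\epsilon q)^k - 1)/((\epsilon q)^d - 1)}.$$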
We consider the direct limit of the character groups $\wh{T}_k$ with respect to these norm maps, $\displaystyle \lim_{\longrightarrow} \wh{T_k}$, on which $F_{\epsilon}$ acts through its natural action on the groups $T_k$. Moreover, the fixed points of $\displaystyle \lim_{\longrightarrow} \wh{T_k}$ under $F_{\epsilon}^d$ can be identified with $\wh{T_d}$. We let $\Theta$ denote the set of $F_{\epsilon}$-orbits of $\displaystyle \lim_{\longrightarrow} \wh{T_k}$. The elements of $\Theta$ are sometimes called {\em simplices} (in \cite{Green, turull01} for example). They are naturally dual objects to polynomials with roots given by an $F_{\epsilon}$-orbit of $\bar{\mathbb{F}}_q^{\times}$. For any orbit $\phi \in \Theta$, let $|\phi|$ denote the size of the orbit. Let $\mathcal{P}$ denote the set of all partitions of non-negative integers, where we write $|\nu| = n$ if $\nu$ is a partition of $n$, and let $\mathcal{P}_n$ denote the set of all partitions of $n$. The irreducible characters of $\wt{G}=GL^{\epsilon}_n(q)$ are parameterized by partition-valued functions on $\Theta$. Specifically, given a function $\lambda: \Theta \rightarrow \mathcal{P}$, define $|\lambda|$ by $$|\lambda| = \sum_{\phi \in \Theta} |\phi| |\lambda(\phi)|,$$ and define $\mathcal{F}_n$ by $$ \mathcal{F}_n = \{ \lambda: \Theta \rightarrow \mathcal{P} \, \mid \, |\lambda| = n\}.$$ Then $\mathcal{F}_n$ gives a parametrization of the irreducible complex characters of $\wt{G}$. Given $\lambda \in \mathcal{F}_n$, we let $\wt{\chi}_{\lambda}$ denote the irreducible character corresponding to it. We need several details regarding the structure of the character $\wt{\chi}_{\lambda}$. In the case $\epsilon =1$, these facts all follow from the original work of Green \cite{Green}, and also appear from a slightly different point of view in the book of Macdonald \cite[Chapter IV]{MacBook}.
For the case $\epsilon=-1$, the facts we need appear in \cite{TV07}, which contains relevant results from \cite{dignemichelunitary, LuszSrin}. First consider some $\lambda \in \mathcal{F}_n$ such that $\lambda(\phi)$ is a nonempty partition for exactly one $\phi \in \Theta$, and write $\wt{\chi}_{\lambda}= \wt{\chi}_{\lambda(\phi)}$. Suppose that $|\phi|=d$, so that $|\lambda(\phi)| = n/d$. Then let $\omega^{\lambda(\phi)}$ be the irreducible character of the symmetric group $S_{n/d}$ parameterized by $\lambda(\phi) \in \mathcal{P}_{n/d}$. We fix this parametrization so that the partition $(1, 1, \ldots, 1)$ corresponds to the trivial character. For any $\gamma =(\gamma_1, \gamma_2, \ldots, \gamma_{\ell}) \in \mathcal{P}_{n/d}$, let $\omega^{\lambda(\phi)}(\gamma)$ denote the character $\omega^{\lambda(\phi)}$ evaluated at the conjugacy class parameterized by $\gamma$ (where $(1, 1, \ldots, 1)$ corresponds to the identity), and let $z_{\gamma}$ be the size of the centralizer in $S_{n/d}$ of the class corresponding to $\gamma$. Let $T_{\gamma}$ be the torus $$ T_{\gamma} = T_{d\gamma_1} \times T_{d \gamma_2} \times \cdots \times T_{d \gamma_{\ell}},$$ and let $\theta \in \phi$. Then we have \begin{equation} \label{DLLinComb} \wt{\chi}_{\lambda(\phi)} = \pm \sum_{\gamma \in \mathcal{P}_{n/d}} \frac{ \omega^{\lambda(\phi)}(\gamma)}{z_{\gamma}} R_{T_{\gamma}}^{\wt{G}}(\theta), \end{equation} where the sign can be determined explicitly (see the Remark after \cite[Theorem 4.3]{TV07}, for example), but the sign will not have any impact for us. Note that from \eqref{DLLinComb}, it follows from our parametrization of characters of the symmetric group and \cite[Proposition 12.13]{dignemichel} that the trivial character of $\wt{G}$ corresponds to $\lambda({\bf 1})=(1, 1, \ldots, 1)$.
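As a small illustration of the parametrization, take $n=2$. The condition $|\lambda|=2$ allows exactly three shapes: $\lambda(\phi)\in\{(2),(1,1)\}$ for a single orbit $\phi$ with $|\phi|=1$; $\lambda(\phi_1)=\lambda(\phi_2)=(1)$ for two distinct orbits of size $1$; or $\lambda(\phi)=(1)$ for a single orbit with $|\phi|=2$. In the first and third cases the corresponding irreducible character is given directly by \eqref{DLLinComb} (with $d=1$ and $d=2$, respectively), while in the second case it is obtained by Lusztig induction from the Levi subgroup $GL^{\epsilon}_1(q)\times GL^{\epsilon}_1(q)$, via \eqref{Lusztigprod} below.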
For an arbitrary $\lambda \in \mathcal{F}_n$, let $\phi_1, \phi_2, \ldots, \phi_r$ be precisely those elements in $\Theta$ such that $\lambda(\phi_i)$ is a nonempty partition, and let $d_i = |\phi_i|$. Let $n_i = d_i |\lambda(\phi_i)|$, and define $L$ to be the Levi subgroup $L = GL_{n_1}^{\epsilon}(q) \times \cdots \times GL_{n_r}^{\epsilon}(q)$. The character $\wt{\chi}_{\lambda}$ is then given by \begin{equation} \label{Lusztigprod} \wt{\chi}_{\lambda} = \pm R_{L}^{\wt{G}}\left(\wt{\chi}_{\lambda(\phi_1)} \times \cdots \times \wt{\chi}_{\lambda(\phi_r)}\right). \end{equation} The sign only appears in the $\epsilon = -1$ case, and again can be determined explicitly. Note that \eqref{Lusztigprod} is Harish-Chandra induction in the case $\epsilon = 1$. \subsection{Restriction to $SL^\epsilon_n(q)$ and Actions on the Parametrization} \label{sec:Restriction} We now turn to the parametrization of the characters of $G = SL^{\epsilon}_n(q)$ in terms of the characters of $\wt{G}$ described in the previous section. This is done for the case $\epsilon=1$ in \cite{karkargreen,Lehrer}, and we adapt the methods there to handle the more general case of $\epsilon = \pm 1$. Consider any Galois automorphism $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$. Through the natural action of $\sigma$ on the character values of any $\theta \in \wh{T}_d$, we have an action of $\sigma$ on the orbits $\phi \in \Theta$. Given any $\lambda \in \mathcal{F}_n$, we define $\sigma \lambda$ by $$ \sigma \lambda(\phi) = \lambda(\sigma \phi).$$ For $\alpha\in\wh{T}_1$, define $\alpha\wt{\chi}$ and $\alpha\theta$ by the usual product of characters in $\mathrm{Irr}(\wt{G})$ and $\wh{T}_d$, where we compose $\alpha$ with the determinant and norm maps, respectively.
Then $\alpha$ acts on the orbits $\phi \in \Theta$ as well, and we get an action of $\alpha$ on $\mathcal{F}_n$ by defining $\alpha\lambda$ as $$\alpha\lambda(\phi)=\lambda(\alpha\phi).$$ We will need the following statements regarding these actions on the characters of $\wt{G}$. \begin{lemma}\label{lem:actions} Let $\lambda\in\mathcal{F}_n$. For any $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ and any $\alpha \in \wh{T}_1$, we have $$ \sigma \wt{\chi}_{\lambda} = \wt{\chi}_{\sigma \lambda} \quad \text{ and } \quad \alpha \wt{\chi}_{\lambda} = \wt{\chi}_{\alpha \lambda}.$$ \end{lemma} \begin{proof} We proceed in a manner similar to the proof of \cite[Proposition of Section 3]{karkargreen}, which proves this statement in the $\epsilon=1$ case with the $\wh{T}_1$ action. We begin by considering $\lambda \in \mathcal{F}_n$ such that $\lambda(\phi)$ is a nonempty partition for precisely one $\phi \in \Theta$, and so $\wt{\chi}_{\lambda}=\wt{\chi}_{\lambda(\phi)}$ is given by \eqref{DLLinComb}. Given $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$, since each $\omega^{\lambda(\phi)}(\gamma)$ and each $z_{\gamma}$ is a rational integer, we have $$\sigma \wt{\chi}_{\lambda(\phi)} = \pm \sum_{\gamma \in \mathcal{P}_{n/d}} \frac{ \omega^{\lambda(\phi)}(\gamma)}{z_{\gamma}} \sigma R_{T_{\gamma}}^{\wt{G}}(\theta) = \pm \sum_{\gamma \in \mathcal{P}_{n/d}} \frac{ \omega^{\lambda(\phi)}(\gamma)}{z_{\gamma}} R_{T_{\gamma}}^{\wt{G}}(\sigma\theta),$$ by Lemma \ref{DLlemma}. Since $\sigma \theta \in \sigma \phi$, we have $\sigma \wt{\chi}_{\lambda(\phi)} = \wt{\chi}_{\sigma \lambda(\phi)}$.
Similarly, if $\alpha \in \wh{T}_1$, we have by Lemma \ref{DLlemma} $$\alpha \wt{\chi}_{\lambda(\phi)} = \pm \sum_{\gamma \in \mathcal{P}_{n/d}} \frac{ \omega^{\lambda(\phi)}(\gamma)}{z_{\gamma}} \alpha R_{T_{\gamma}}^{\wt{G}}(\theta) = \pm \sum_{\gamma \in \mathcal{P}_{n/d}} \frac{ \omega^{\lambda(\phi)}(\gamma)}{z_{\gamma}} R_{T_{\gamma}}^{\wt{G}}(\alpha\theta),$$ and since $\alpha \theta \in \alpha \phi$, we have $\alpha \wt{\chi}_{\lambda(\phi)} = \wt{\chi}_{\alpha \lambda(\phi)}$. Now consider an arbitrary $\lambda \in \mathcal{F}_n$, with $\wt{\chi}_{\lambda}$ given by \eqref{Lusztigprod}. By applying Lemma \ref{DLlemma}, along with the first case just proved, we have \begin{align*} \sigma \wt{\chi}_{\lambda} & = \pm R_L^{\wt{G}}\left(\sigma(\wt{\chi}_{\lambda(\phi_1)} \times \cdots \times \wt{\chi}_{\lambda(\phi_r)})\right) \\ & = \pm R_L^{\wt{G}}\left(\sigma\wt{\chi}_{\lambda(\phi_1)} \times \cdots \times \sigma\wt{\chi}_{\lambda(\phi_r)}\right) \\ & = \pm R_L^{\wt{G}}\left(\wt{\chi}_{\sigma\lambda(\phi_1)} \times \cdots \times \wt{\chi}_{\sigma\lambda(\phi_r)}\right) \\ & = \wt{\chi}_{\sigma \lambda}. \end{align*} Similarly, if we replace $\sigma$ with $\alpha \in \wh{T}_1$, we have $\alpha \wt{\chi}_{\lambda} = \wt{\chi}_{\alpha \lambda}$ as claimed. \end{proof} Note that we may identify $\wt{G}/G$ with $T_1$, and directly from Clifford theory we know every character $\chi$ of $G$ appears in some multiplicity-free restriction of a character $\wt{\chi}_{\lambda}$ of $\wt{G}$. The restrictions of two different irreducible characters of $\wt{G}$ are either equal, or have no irreducible constituents of $\mathrm{Irr}(G)$ in common. With this, the next result is all that is needed to parameterize $\mathrm{Irr}(G)$.
\begin{lemma}\label{lem:irrparam} Let $\lambda, \mu \in \mathcal{F}_n$ with $\wt{\chi}_{\lambda}, \wt{\chi}_{\mu} \in \mathrm{Irr}(\wt{G})$ the corresponding characters. Then $$\res^{\wt{G}}_G (\wt{\chi}_{\lambda}) = \res^{\wt{G}}_G (\wt{\chi}_{\mu})$$ if and only if there exists some $\alpha \in \wh{T}_1$ such that $\lambda = \alpha \mu$. \end{lemma} \begin{proof}This follows directly from \cite[Theorem 1(i)]{karkargreen} and Lemma \ref{lem:actions}. \end{proof} Consider any irreducible character $\chi$ of $G$, so $\chi$ is a constituent of $\res^{\wt{G}}_G(\wt{\chi}_{\lambda})$ for some $\lambda \in \mathcal{F}_n$. That is, $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$. The other constituents of this restriction are $\wt{G}$-conjugates of $\chi$. Note that the field of values of $\chi$ is invariant under conjugation by $\wt{G}$, and so in studying this field of character values it is not important which constituent we choose. Given $\lambda \in \mathcal{F}_n$, we define the group $\mathcal{I}(\lambda)$ as $$\mathcal{I}(\lambda)=\bigcap\{\ker\alpha \, \mid \, \alpha\in\wh{T}_1 \hbox{ such that } \alpha\lambda=\lambda\}.$$ We collect some basic properties of $\mathcal{I}(\lambda)$ in the following. \begin{proposition}\label{prop:indstabdivides} Let $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$. Then: \begin{itemize} \item The stabilizer in $\wt{G}$ of $\chi$ is the set of elements with determinant in $\mathcal{I}(\lambda)$. \item The stabilizer of $\lambda$ in $\wh{T}_1$ is the set of elements whose kernel contains $\mathcal{I}(\lambda)$. \item The index $[T_1: \mathcal{I}(\lambda)]$ divides $\gcd(q-\epsilon,n)$. \end{itemize} \end{proposition} \begin{proof} The proof is exactly as in \cite[Propositions 4.2 and 4.3 and Corollary 4.4]{turull01}, using Clifford theory and \prettyref{lem:actions}.
\end{proof} \subsection{Remarks on Generalized Gelfand-Graev Characters}\label{sec:GGGR} We recall here some subgroups described in \cite[Section 2]{Geck04} used in the construction of the characters of generalized Gelfand-Graev representations (GGGRs). We introduce only the essentials for our purposes, and refer the reader to \cite{Geck04, kawanaka1985, Taylor16}, for example, for more details. First, let $\bg{T}\leq \bg{B}=\bg{T}\bg{U}$ be an $F_\epsilon$-stable maximal torus and Borel subgroup, respectively, of $\bg{G}$, with unipotent radical $\bg{U}$. Let $\Phi$ be the root system of $\bg{G}$ with respect to $\bg{T}$ and $\Phi^+\subset \Phi$ the set of positive roots determined by $\bg{B}$. To each unipotent class $\mathcal{C}$ in $\bg{G}$ (or, equivalently, in $\wt{\bg{G}}$), there is associated a weighted Dynkin diagram $d\colon \Phi\rightarrow\mathbb{Z}$ and $F_\epsilon$-stable groups \[\bg{U}_{d,i}:=\langle X_{\alpha}\mid\alpha\in\Phi^+, d(\alpha)\geq i\rangle \leq \bg{U},\] where $X_\alpha$ denotes the root subgroup corresponding to $\alpha$. In particular, $\bg{P}_d:=N_{\wt{\bg{G}}}(\bg{U}_{d,1})$ is an $F_\epsilon$-stable parabolic subgroup of $\wt{\bg{G}}$ and $\bg{U}_{d,i}\lhd \bg{P}_d$ for each $i=1,2,3,\ldots$. We will further write $U_{d,i}:=\bg{U}_{d,i}^{F_\epsilon}$ and $P_d:=\bg{P}_d^{F_\epsilon}$. Given $u\in\mathcal{C}\cap U_{d,2}$, the characters of GGGRs (which we will also refer to as GGGRs) of $\wt{G}$, respectively $G$, are constructed by inducing certain linear characters $\varphi_u\colon U_{d,2}\rightarrow\mathbb{C}^\times$ to $\wt{G}$, resp. ${G}$. In particular, the values of $\varphi_u$ are all $p$th roots of unity.
Strictly speaking, the GGGRs are actually rational multiples of the induced character: \[\wt{\Gamma}_u=[U_{d,1}:U_{d,2}]^{-1/2}\mathrm{Ind}_{U_{d,2}}^{\wt{G}}(\varphi_u)\quad\hbox{ and }\quad \Gamma_u=[U_{d,1}:U_{d,2}]^{-1/2}\mathrm{Ind}_{U_{d,2}}^{G}(\varphi_u).\] The following is \cite[Proposition 10.11]{SFTaylorTypeA}, which is a consequence of \cite[Theorem 1.8, Lemma 2.6, and Theorem 10.10]{tiepzalesski04}. \begin{proposition}\label{prop:GGGRvalues} Let $\Gamma_u$ be a GGGR of $G$. Then the following hold. \begin{enumerate} \item If $q$ is a square, $n$ is odd, or $n/(n,q-\epsilon)$ is even, then the values of $\Gamma_u$ are integers. \item Otherwise, the values of $\Gamma_u$ lie in $\mathbb{Q}(\sqrt{\eta p})$, where $\eta\in\{\pm1\}$ is such that $p\equiv\eta\mod 4$. \end{enumerate} \end{proposition} \section{Initial Results on Fields of Values} \label{sec:Initial} Keep the notation from above, so that $G=SL^\epsilon_n(q)$, $\wt{G}=GL^\epsilon_n(q)$, and the characters of $\wt{G}$ are denoted by $\wt{\chi}_\lambda$ for $\lambda\in\mathcal{F}_n$. For $\lambda\in\mathcal{F}_n$, let $\mathbb{Q}(\lambda)$ denote the field obtained from $\mathbb{Q}$ by adjoining the values of the characters in the orbits $\phi\in\Theta$ such that $\lambda(\phi)$ is nonempty. We define $\mathrm{Galg}(\lambda)$ and $\mathrm{Galr}(\lambda)$ as in \cite{turull01}. That is, $\mathrm{Galg}(\lambda)$ is the stabilizer of $\lambda$ in $\mathrm{Gal}(\mathbb{Q}(\lambda)/\mathbb{Q})$ and \[\mathrm{Galr}(\lambda)=\{\sigma\in\mathrm{Gal}(\mathbb{Q}(\lambda)/\mathbb{Q})\, \mid \, \sigma\lambda=\alpha\lambda \hbox{ for some $\alpha\in\wh{T}_1$}\}.\] \begin{theorem} Let $\lambda\in\mathcal{F}_n$.
Then $\mathbb{Q}(\wt{\chi}_\lambda)=\mathbb{Q}(\lambda)^{\mathrm{Galg}(\lambda)}$ and $ \mathbb{Q}(\res^{\wt{G}}_G(\wt{\chi}_\lambda))=\mathbb{Q}(\lambda)^{\mathrm{Galr}(\lambda)}.$ That is, the fields of values of $\wt{\chi}_\lambda$ and of its restriction to $G$ are the fixed fields of $\mathrm{Galg}(\lambda)$ and $\mathrm{Galr}(\lambda)$, respectively. \end{theorem} \begin{proof} Given \prettyref{lem:actions}, the proof is exactly the same as that of \cite[Propositions 2.8, 3.4]{turull01}. \end{proof} Note that since the members of $\phi\in\Theta$ are characters of $T_d$ for some $d$, it follows that $\mathbb{Q}(\lambda)=\mathbb{Q}(\zeta_m)$ is the field obtained from $\mathbb{Q}$ by adjoining some primitive $m$th root of unity $\zeta_m$, where $\gcd(m,p)=1$. \begin{remark}\label{rem:sigmainv} We further remark that, as in the proof of \cite[Proposition 6.2]{turull01}, the Galois automorphism $\sigma_{-1}\colon\mathbb{Q}(\zeta_m)\rightarrow\mathbb{Q}(\zeta_m)$ satisfying $\sigma_{-1}(\zeta_m)=\zeta_m^{-1}$ induces complex conjugation on $\mathbb{Q}(\lambda)$. Hence $\wt{\chi}_\lambda$, respectively $\res^{\wt{G}}_G(\wt{\chi}_\lambda)$, is real-valued if and only if $\sigma_{-1}\in \mathrm{Galg}(\lambda)$, respectively $\sigma_{-1}\in\mathrm{Galr}(\lambda)$. \end{remark} For the remainder of the article, we let $\mathbb{F}_\lambda$ denote the field of values of $\res^{\wt{G}}_G(\wt{\chi}_\lambda)$. That is, $\mathbb{F}_\lambda$ is the fixed field of $\mathrm{Galr}(\lambda)$. Since $\mathbb{F}_\lambda\subseteq\mathbb{Q}(\lambda)=\mathbb{Q}(\zeta_m)$ and $\gcd(m,p)=1$, we have $\mathbb{F}_\lambda\cap \mathbb{Q}(\zeta_p)=\mathbb{Q}$ for any primitive $p$th root of unity $\zeta_p$. \begin{proposition}\label{prop:initialcase} Let $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$. Keep the notation above.
Then \begin{enumerate} \item If $q$ is a square, $n$ is odd, or $n/(n,q-\epsilon)$ is even, then $\mathbb{Q}(\chi)=\mathbb{F}_\lambda$. \item Otherwise, $\mathbb{F}_\lambda\subseteq\mathbb{Q}(\chi)\subseteq \mathbb{F}_\lambda(\sqrt{\eta p})$, where $\eta\in\{\pm1\}$ is such that $p\equiv\eta\mod 4$. \end{enumerate} In particular, $\chi$ is real-valued if and only if $\res^{\wt{G}}_G(\wt{\chi}_\lambda)$ is, except possibly when $q\equiv 3\mod4$ and $2\leq n_2\leq (q-\epsilon)_2$. \end{proposition} \begin{proof} Write $\mathbb{F}:=\mathbb{F}_\lambda$ and $\wt{\chi}:=\wt{\chi}_\lambda$. First, we remark that certainly $\mathbb{F}\subseteq \mathbb{Q}(\chi)$, by its definition, since $\res_G^{\wt{G}}(\wt{\chi})$ is the sum of $\wt{G}$-conjugates of $\chi$. Let $\wt{\Gamma}$ be a GGGR of $\wt{G}$ such that $\langle \wt{\Gamma}, \wt{\chi}\rangle_{\wt{G}} = 1$, which exists by a well-known result of Kawanaka (see \cite[3.2.18]{kawanaka1985} or \cite[15.7]{Taylor16}). Further, there exists a GGGR, $\Gamma$, of $G$ such that $\wt{\Gamma} = \mathrm{Ind}_G^{\wt{G}}(\Gamma)$. Then Frobenius reciprocity yields that there is a unique irreducible constituent $\chi_0\in\mathrm{Irr}(G|\wt{\chi})$ satisfying $\langle \Gamma, \chi_0 \rangle_G = 1$. Without loss, we may assume $\chi$ is this $\chi_0$, as the field of values is invariant under $\wt{G}$-conjugation. Write $\mathbb{K}=\mathbb{F}(\sqrt{\eta p})$. Let $\sigma\in\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{F})$ in case (1), and let $\sigma\in\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{K})$ in case (2). Then by \prettyref{prop:GGGRvalues}, $\sigma\chi$ is also a constituent of $\Gamma = \sigma\Gamma$ occurring with multiplicity 1.
However, as $\res_G^{\wt{G}}(\wt{\chi})$ is invariant under $\sigma$, we have that $\sigma\chi$ is also a constituent of the restriction $\res_G^{\wt{G}}(\wt{\chi})$. Hence we see that $\sigma\chi=\chi$, by uniqueness, and so $\mathbb{Q}(\chi)\subseteq \mathbb{F}$ in case (1) and $\mathbb{Q}(\chi)\subseteq \mathbb{K}$ in case (2). \end{proof} In our main results below, characters of $T_1$ of $2$-power order will play an important role. In particular, we denote by $\mathrm{sgn}$ the unique member of $\wh{T}_1$ of order $2$. \begin{lemma}\label{lem:orbiteven} Let $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$ and write $I:=\mathrm{stab}_{\wt{G}}(\chi)$. Then $[\wt{G}:I]$ is even if and only if $\mathrm{sgn}\lambda=\lambda$. \end{lemma} \begin{proof} Note that $2$ divides $[\wt{G}:I]$ if and only if $[I:G]_2\leq \frac{1}{2}(q-\epsilon)_2$, if and only if $\mathcal{I}(\lambda)$ is contained in the unique subgroup of $\wt{G}/G$ of order $\frac{1}{2}(q-\epsilon)$. But notice that this is exactly the kernel of $\mathrm{sgn}$ as an element of $\wh{T_1}$. \end{proof} \begin{lemma}\label{lem:sgn} Let $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$. If $\mathrm{sgn}\lambda\neq\lambda$, then $\mathbb{F}_\lambda=\mathbb{Q}(\chi)$. \end{lemma} \begin{proof} Write $\mathbb{F}:=\mathbb{F}_\lambda$ and recall that $\mathbb{F}\subseteq\mathbb{Q}(\chi)\subseteq \mathbb{F}(\sqrt{\eta p})$. Let $\mathbb{K}=\mathbb{F}(\sqrt{\eta p})$, so that $\mathbb{K}$ is a quadratic extension of $\mathbb{F}$. Let $\tau$ be the generator of $\mathrm{Gal}(\mathbb{K}/\mathbb{F})$ and write $I$ for the stabilizer of $\chi$ under $\wt{G}$.
Then note that $\tau^2$ necessarily fixes $\chi$, and by definition $\tau$ fixes $\res^{\wt{G}}_G(\wt{\chi})$, which by Clifford theory is the sum of the $[\wt{G}:I]$ conjugates of $\chi$ under the action of $\wt{G}$. We prove the contrapositive. Suppose $\mathbb{F}\neq\mathbb{Q}(\chi)$, so that $\tau$ does not fix $\chi$. Then since the field of values is invariant under $\wt{G}$-conjugation, it follows that the orbit of $\chi$ under $\wt{G}$ can be partitioned into pairs conjugate to $\{\chi, \tau\chi\}$. Hence the size of the orbit, $[\wt{G}:I]$, must be even, so $\mathrm{sgn}\lambda=\lambda$ by \prettyref{lem:orbiteven}. \end{proof} \section{Unipotent Elements} \label{sec:Unipotent} To deal with the remaining cases (in particular, when $q\equiv \eta\pmod 4$ is nonsquare, $\eta\in\{\pm1\}$, $\epsilon=-1$, and $2\leq n_2\leq (q+1)_2$), we will continue to employ GGGRs. For this, we will need to analyze certain aspects of conjugacy of unipotent elements. Here the authors' observations in \cite{SFVinroot} on this subject will be useful. In particular, if a unipotent element of $\wt{G}$ has $m_k$ Jordan blocks of size $k$ (that is, $m_k$ elementary divisors of the form $(t-1)^k$), then we may find a conjugate in $\wt{G}$ of the form $\bigoplus_k \wt{J}_k^{m_k}$, where the sum is over only those $k$ such that $m_k \neq 0$ and each $\wt{J}_k\in GL^\epsilon_k(q)$. The following lemma, which is \cite[Lemma 3.2]{SFVinroot}, will be useful throughout the section. \begin{lemma}\label{lem:SFV3.2} Let $u$ be a unipotent element in $\wt{G}$ with $m_k$ Jordan blocks of size $k$ for each $1\leq k\leq n$. For each $k$ such that $m_k\neq 0$, let $\delta_k\in T_1$ be arbitrary. Then there exists some $g \in C_{\wt{G}}(u)$ such that $\mathrm{det}(g) = \prod_k \delta_k^k$. \end{lemma} Let $\zeta_p$ be a primitive $p$th root of unity in $\mathbb{C}$.
In what follows, we let $b$ be a fixed integer such that $\mathrm{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q})$ is generated by the map $\tau\colon \zeta_p\mapsto \zeta_p^b$. Note that $(b,p)=1$ and $b$ has multiplicative order $p-1$ modulo $p$. Further, note that $\tau$ also induces the map $\sqrt{\eta p}\mapsto-\sqrt{\eta p}$ generating $\mathrm{Gal}(\mathbb{Q}(\sqrt{\eta p})/\mathbb{Q})$. Let $\bar{b}$ denote the image of $b$ under a fixed isomorphism $(\mathbb{Z}/p\mathbb{Z})^\times \rightarrow \mathbb{F}_p^\times$, so that $\bar{b}$ generates $\mathbb{F}_{p}^\times$. Note that by \cite[Theorem 1.9]{tiepzalesski04}, every unipotent element $u$ of $GU_n(q)$ is conjugate to $u^b$ in $C_{\wt{G}}(s)$, where $s$ is a semisimple element in $C_{\wt{G}}(u)$. We are interested in making precise statements about such a conjugating element. To begin, let $u$ be a regular unipotent element of $GU_n(q)$, identified as in \cite[Lemma 5.1]{SFVinroot}. Arguing as there, we see that an element conjugating $u$ to $u^b$ must have diagonal \[(\bar{b}^{n-1}\beta, \bar{b}^{n-2}\beta,\ldots,\bar{b}\beta,\beta),\] where $\beta\in\mathbb{F}_{q^2}^\times$ and $\bar{b}^{n-1}\beta^{q+1}=1$. Note that the determinant of such an element is $\bar{b}^{\binom{n-1}{2}}\beta^n$ and that the condition that $\bar{b}^{n-1}\beta^{q+1}=1$ yields that $\beta^{q+1}$ is a $(p-1)$-root of unity. \begin{lemma}\label{lem:regunip}\label{lem:beta2} Let $q\equiv \eta\mod 4$ be nonsquare with $\eta\in\{\pm1\}$ and let $u$ be a regular unipotent element of $GU_n(q)$. Keep the notation above. Then \begin{enumerate}[label=(\alph*)] \item If $n$ is even, then $\beta_2$ is a primitive $(q^2-1)_2$-root of unity in $\mathbb{F}_{q^2}^\times$. \item There is an element $x$ in $GU_n(q)$ such that $u^x=u^b$ and $|\det(x)|$ is a $2$-power. \item If $n\not\equiv 0 \pmod 4$, there is an element $x$ in $GU_n(q)$ such that $u^x=u^b$ and $|\det(x)| = (q+1)_2$.
\end{enumerate} \end{lemma} \begin{proof} For part (a), note that $|\bar{b}^{n-1}|$ has the same $2$-part as $|\bar{b}|$ since $n$ is even, so $|\beta_2^{q+1}|=(p-1)_2=(q-1)_2$ since $q$ is nonsquare. Hence the multiplicative order of $\beta_2$ is $2(q-\eta)_2=(q^2-1)_2$. To prove the rest, we begin by showing that if $n\not\equiv 0\pmod4$, then we may find an $x$ in $GU_n(q)$ such that $u^x=u^b$ and $|\det(x)|_2=(q+1)_2$. If $n\equiv 2\pmod 4$, then $\beta_2$ is a primitive $(q^2-1)_2$-root of unity by (a), and hence $\beta_2^n$ is a $(q-\eta)_2$-root of unity. If $\eta=-1$, then since $|\bar{b}|_2=2$, we see that $\bar{b}^{\binom{n-1}{2}}$ has odd order. Then any $x\in GU_n(q)$ satisfying $u^x=u^b$ must satisfy $|\det(x)|_2 = (q+1)_2$ in this case. If $\eta=1$, note that $(q+1)_2=2$, so we need only show that $\det(x)$ has even order. Here $|\bar{b}|_2=(q-1)_2$, and $\bar{b}_2^{\binom{n-1}{2}}$ has order strictly smaller than that of $\bar{b}_2$. Hence $\bar{b}^{\binom{n-1}{2}}\cdot\beta^n$ has even order, so $|\det(x)|_2 = (q+1)_2$ again in this case. Now assume $n$ is odd and let $\wt{x}$ be an element in $GU_n(q)$ satisfying $u^{\wt{x}}=u^b$. Then certainly $\det(\wt{x})\in {T}_1$, so we may use \prettyref{lem:SFV3.2} to replace $\wt{x}$ with some $x\in GU_n(q)$ satisfying $\det(x)=\det(\wt{x})\cdot \delta^n$ for any $\delta\in {T}_1$. In particular, note that $|\delta^n|=|\delta|$ for any $(q+1)_2$-root of unity $\delta$, since $n$ is odd. Then we may choose $\delta$ so that $\det(\wt{x})_2\delta^n$ is a primitive $(q+1)_2$-root of unity, yielding $|\det(x)|_2 = (q+1)_2$. It remains to show that in all cases, $x$ can be chosen such that $|\det(x)|_{2'}=1$. Since $\beta^{q+1}$ is a $(p-1)$-root of unity, we may decompose the determinant of $x$ into $\beta_2^n\cdot \beta_{(q+1)_{2'}}^n\cdot y$, where $y$ is a $(p-1)$-root of unity. However, we also know that the determinant is a $(q+1)$-root of unity, and an odd prime cannot divide both $p-1$ and $q+1$.
Hence $y$ must be a $2$-power root of unity, and we may replace $x$ with an element of determinant $\beta_2^n\cdot y$, using \prettyref{lem:SFV3.2}. \end{proof} We remark that, arguing similarly, there is in fact no $x$ satisfying the conclusion of \prettyref{lem:regunip}(c) when $u$ is a regular unipotent element and $4$ divides $n$. We can, however, generalize to the following statement about more general unipotent elements when $4\nmid n$. \begin{corollary}\label{cor:unipconj} Let $q\equiv \eta\mod 4$ be nonsquare with $\eta\in\{\pm1\}$. If $u$ is a unipotent element of $GU_n(q)$ satisfying at least one of the following: \begin{enumerate} \item $u$ has an odd number of elementary divisors of the form $(t-1)^k$ with $k\equiv 2\mod 4$; \item $u$ has an elementary divisor of the form $(t-1)^k$ with $k$ odd, \end{enumerate} then $u$ is conjugate to $u^b$ by an element $x$ satisfying $|\det(x)| = (q+1)_2$. In particular, if $n$ is not divisible by $4$, any unipotent element $u$ is conjugate to $u^b$ by an element $x$ satisfying $|\det(x)| = (q+1)_2$. \end{corollary} \begin{proof} Indeed, viewing $u$ as $\bigoplus_k \wt{J}_k^{m_k}$ as in \cite[Section 3.2]{SFVinroot}, we may find elements $x_k$ for each $1\leq k\leq n$ as in \prettyref{lem:regunip} conjugating each $\wt{J}_k$ to $\wt{J}_k^b$. In case (1), we see that the element $\bigoplus_k{x_k}^{m_k}$ will satisfy the statement, after possibly again using \prettyref{lem:SFV3.2} to replace $x_k$ for any odd $k$ with an element satisfying $|\det(x_k)|=1$. If (2) holds, but (1) does not hold, $y=\bigoplus_{2|k} {x_k}^{m_k}$ will satisfy $|\det(y)|=|\det(y)|_2 < (q+1)_2$. We may use \prettyref{lem:regunip} to obtain $x_k$ for some odd $k$ such that $|\det(x_k)| = (q+1)_2$, and replace the remaining $x_k$ for odd $k$ with elements satisfying $|\det(x_k)|=1$. The resulting $\bigoplus_k{x_k}^{m_k}$ will satisfy the statement.
The last statement follows, since if $n$ is odd, we must be in case (2), and if $n\equiv2\pmod4$, we must be in case (1) or (2). \end{proof} \begin{remark}\label{rem:condsnmod4} We remark that at least one of conditions 1 and 2 of \prettyref{cor:unipconj} must occur if $n\equiv 2\pmod 4$, and that condition 1 implies condition 2 if $n\equiv 0\mod 4$. Further, when $\eta=1$, the condition $n_2\leq (q+1)_2$ induced from \prettyref{prop:initialcase} yields that $n\equiv2\pmod 4$. \end{remark} We now address the case that $4$ divides $n$, $q\equiv 3\pmod 4$, and that neither of the conditions in \prettyref{cor:unipconj} occurs. \begin{lemma}\label{lem:unipconj0mod4} Let $q\equiv 3\mod 4$ and let $n\equiv 0\pmod 4$ with $n_2\leq (q+1)_2$. Let $u$ be a unipotent element of $GU_n(q)$ with no elementary divisors $(t-1)^k$ with $k$ odd. Then $u$ is conjugate to $u^b$ by an element $x$ satisfying $|\det(x)| = \frac{(q^2-1)_2}{n_2}$. \end{lemma} \begin{proof} As in the proof of \prettyref{cor:unipconj}, let $\wt{x}=\bigoplus_k{x_k}^{m_k}$, where for each $k$ such that $m_k\neq 0$, $x_k$ is an element of $GU_k(q)$ conjugating $\wt{J}_k$ to $\wt{J}_k^b$ as in \prettyref{lem:regunip}. Now, each $x_k$ has determinant $\pm{(\beta_k)_2}^k$, where ${(\beta_k)_2}$ is a $(q^2-1)_2$-root of unity in $\mathbb{F}_{q^2}^\times$, by \prettyref{lem:beta2}, since the $y$ found there has multiplicative order $(p-1)_2=2$. Then taking $\delta_k\in\mathbb{F}_{q^2}^\times$ to be the primitive $(q+1)_2$-root of unity $\delta_k=(\beta_k)_2^2$, we may use \prettyref{lem:SFV3.2} to replace $x_k$ with an element whose determinant is $\pm(\beta_k)_2^k\delta_k^{rk}=\pm(\beta_k)_2^{k(2r+1)}$ for any odd $r$, yielding that we may replace each $x_k$ with an element whose determinant is $\pm\beta_2^k$ for a fixed $(q^2-1)_2$-root of unity $\beta_2$. Hence the resulting $x$ satisfies $\det(x)=\pm\beta_2^n$, which has the stated order.
\end{proof} \section{Application to GGGRs} \label{sec:MoreGGGR} Here we keep the notation of \prettyref{sec:GGGR} and return to the more general case that $\wt{G}=GL_n^\epsilon(q)$ and $G=SL_n^\epsilon(q)$ for $\epsilon\in\{\pm1\}$. \begin{lemma}\label{lem:varphiuconj} Let $u\in \mathcal{C}\cap U_{d,2}$ and suppose that $x$ is an element normalizing $U_{d,2}$ and conjugating $u$ to $u^b$. Then $\varphi_u^x=\varphi_u^b$. \end{lemma} \begin{proof} This follows from the construction of $\varphi_u$ in \cite[Section 5]{Taylor16} or \cite[Section 2]{Geck04}. Indeed, for each $g$ in $U_{d,2}$, we have $\varphi_u^x(g)=\varphi_u(xgx^{-1})=\varphi_{x^{-1}ux}(g)=\varphi_{u^x}(g)=\varphi_{u^b}(g)=\varphi_u(g)^b$, where the second equality is noted in \cite[Remark 2.2]{Geck04}. \end{proof} \begin{lemma}\label{lem:conjinP} Let $u\in \mathcal{C}\cap U_{d,2}$ and $\epsilon=-1$. Then the elements $x$ found in \prettyref{cor:unipconj} and \prettyref{lem:unipconj0mod4} are members of $P_{d}$, and hence normalize $U_{d,2}$. \end{lemma} \begin{proof} First, note that $C_{\wt{\bg{G}}}(u)\leq \bg{P}_{d}$. Indeed, this is noted in \cite[Theorem 2.1.1]{Kawanaka86} for simply connected groups, and here we have $\wt{\bg{G}}=\bg{G}Z(\wt{\bg{G}})$. Further, $u$ is conjugate to $u^b$ in $\bg{P}_{d}$ by \cite[Lemma 4.6]{SFTaylorTypeA}. So $u^x=u^b=u^y$ for some $y\in \bg{P}_{d}$, which yields that $xy^{-1}\in C_{\wt{\bg{G}}}(u)$, and hence $x\in \wt{G}\cap \bg{P}_{d}$. This shows that $x$ is contained in $P_{d}$, which contains $U_{d,2}$ as a normal subgroup. \end{proof} \begin{lemma}\label{lem:GGGRarg} Let $G=SL^\epsilon_n(q)$ and $\wt{G}=GL^\epsilon_n(q)$, with $\epsilon\in\{\pm1\}$. Let $\wt{\chi}:=\wt{\chi}_\lambda\in\mathrm{Irr}(\wt{G})$ and let $\wt{\Gamma}_u=[U_{d,1}:U_{d,2}]^{-1/2}\mathrm{Ind}_{U_{d,2}}^{\wt{G}}(\varphi_u)$ be a generalized Gelfand-Graev character of $\wt{G}$ such that $\langle \wt{\Gamma}_u, \wt{\chi}\rangle_{\wt{G}}=1$.
Let $\sigma\in\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{F}_\lambda)$ and let $x\in\wt{G}$ be an element normalizing $U_{d,2}$ such that $\sigma\varphi_u=\varphi_u^x$. Then for $\chi\in\mathrm{Irr}(G|\wt{\chi})$, there is some conjugate $\chi_0$ of $\chi$ such that $\sigma\chi_0=\chi_0^x$. \end{lemma} \begin{proof} Let $\Gamma_u$ be such that $\wt{\Gamma}_u=\mathrm{Ind}_G^{\wt{G}}\Gamma_u$ and $\Gamma_u=r\cdot\mathrm{Ind}_{U_{d,2}}^{{G}}(\varphi_u)$ where $r=[U_{d,1}:U_{d,2}]^{-1/2}$. Then by Clifford theory and Frobenius reciprocity, there is a unique conjugate, $\chi_0$, of $\chi$ such that $\chi_0\in\mathrm{Irr}(G|\wt{\chi})$ and $\langle \Gamma_u, \chi_0\rangle_G=1$. Since $\res_G^{\wt{G}}(\wt{\chi})$ is fixed by $\sigma$, we also see $\sigma\chi_0$ is the unique member of $\mathrm{Irr}({G}|\wt{\chi})$ satisfying $\langle \sigma\Gamma_u, \sigma\chi_0\rangle_G=1.$ But note that \[\sigma\Gamma_u=r\cdot\mathrm{Ind}_{U_{d,2}}^{G}(\sigma\varphi_u)=r\cdot\mathrm{Ind}_{U_{d,2}}^{G}(\varphi_u^x)=\Gamma_u^x.\] Then $\langle \sigma\Gamma_u, \chi_0^x\rangle_{G}=\langle \Gamma_u^x, \chi_0^x\rangle_G=1,$ forcing $\chi_0^x=\sigma\chi_0$ by uniqueness, since $\chi_0^x\in\mathrm{Irr}(G|\wt{\chi})$. \end{proof} \section{Main Results} \label{sec:Main} We begin by stating our main results. The first is an extension of \cite[Theorem 4.8]{turull01} to the case of $SU_n(q)$, describing the field of values $\mathbb{Q}(\chi)$ for each $\chi\in\mathrm{Irr}(G)$. \begin{theorem}\label{thm:turullext1} Let $G=SL^\epsilon_n(q)$ and $\wt{G}=GL^\epsilon_n(q)$, with $\epsilon\in\{\pm1\}$. Let $\lambda\in \mathcal{F}_n$ and let $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$.
Then $\mathbb{Q}(\chi)=\mathbb{F}_\lambda$ unless all of the following hold: \begin{itemize} \item $p$ is odd, \item $q$ is not a square, \item $2\leq n_2\leq (q-\epsilon)_2$, and \item $\alpha\lambda=\lambda$ for any element $\alpha\in\widehat{T}_1$ of order $n_2$. \end{itemize} In the latter case, $\mathbb{Q}(\chi)=\mathbb{F}_\lambda(\sqrt{\eta p})$, where $\eta\in\{\pm1\}$ and $p\equiv\eta\pmod4$. \end{theorem} Taking into consideration \prettyref{rem:sigmainv}, \prettyref{thm:turullext1} immediately yields the following extension of \cite[Proposition 6.2]{turull01}. \begin{corollary}\label{cor:turullext} Let $G=SL^\epsilon_n(q)$ and $\wt{G}=GL^\epsilon_n(q)$, with $\epsilon\in\{\pm1\}$. Let $\lambda\in \mathcal{F}_n$ and let $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$. Then the following are equivalent: \begin{itemize} \item $\chi$ is real-valued. \item There exists some $\alpha' \in\widehat{T}_1$ such that $\sigma_{-1}\lambda=\alpha'\lambda$, and if $p$ is odd, $q$ is not a square, $2\leq n_2\leq (q-\epsilon)_2$, and $\alpha\lambda=\lambda$ for any element $\alpha\in\widehat{T}_1$ of order $n_2$, then $p\equiv1\pmod4$. \end{itemize} \end{corollary} The remainder of this section will be devoted to proving \prettyref{thm:turullext1}. We begin with an observation restricting the situation of \prettyref{cor:unipconj}. \begin{proposition}\label{prop:oddelemoddindex} Let $G=SL^\epsilon_n(q)$ and $\wt{G}=GL^\epsilon_n(q)$, with $\epsilon\in\{\pm1\}$. Let $\wt{\chi}\in\mathrm{Irr}(\wt{G})$ and let $\wt{\Gamma}_u$ be a GGGR of $\wt{G}$ such that $\langle \wt{\Gamma}_u, \wt{\chi}\rangle_{\wt{G}}=1$. Further, assume that $u$ has an elementary divisor of the form $(t-1)^k$ with $k$ odd. Then $[\wt{G}:I]$ is odd, where $I=\mathrm{stab}_{\wt{G}}(\chi)$ for any $\chi\in\mathrm{Irr}({G}|\wt{\chi})$.
In particular, in this case, $\mathbb{F}_\lambda=\mathbb{Q}(\chi)$ by \prettyref{lem:sgn}. \end{proposition} \begin{proof} Write $\wt{\Gamma}_u=[U_{d,1}:U_{d,2}]^{-1/2}\mathrm{Ind}_{U_{d,2}}^{\wt{G}}(\varphi_u)$. By \prettyref{lem:SFV3.2}, there is some $x\in C_{\wt{G}}(u)$ with determinant $\delta^k$, where $\delta$ is a $(q+1)_2$-root of unity in $\mathbb{F}_{q^2}^\times$. In particular, $|\det(x)|=(q+1)_2$ since $k$ is odd, and $\varphi_u^x=\varphi_u$ since $x$ normalizes $U_{d,2}$ as in the proof of \prettyref{lem:varphiuconj}. Then applying \prettyref{lem:GGGRarg} with $\sigma$ trivial yields that some conjugate $\chi_0$ of $\chi$ satisfies $\chi_0^x=\chi_0$. This implies $[I:G]$ is divisible by $(q+1)_2$, so that $[\wt{G}:I]$ must be odd. \end{proof} For the remainder of this section, we will consider the case $\epsilon=-1$, so that $\wt{G}=GU_n(q)$ and $G=SU_n(q)$. In particular, \prettyref{prop:oddelemoddindex} yields that if $[\wt{G}:I]$ is even, then neither condition in \prettyref{cor:unipconj} holds if $n$ is divisible by $4$, and condition 1 holds if $n\equiv 2\pmod 4$, taking into account \prettyref{rem:condsnmod4}. \begin{proposition}\label{prop:n2mod4} Let $\epsilon=-1$ and suppose that $q\equiv \eta\mod 4$ is nonsquare with $\eta\in\{\pm1\}$ and that $n\equiv 2\pmod 4$. Then the converse of \prettyref{lem:sgn} holds. That is, for $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$, $\mathbb{F}_\lambda=\mathbb{Q}(\chi)$ if and only if $\mathrm{sgn} \lambda\neq\lambda$. Alternatively, $\mathbb{F}_\lambda(\sqrt{\eta p})=\mathbb{Q}(\chi)$ if and only if $\mathrm{sgn} \lambda = \lambda$. \end{proposition} \begin{proof} We must show that if $\mathrm{sgn}\lambda= \lambda$, then $\mathbb{F}_\lambda\neq \mathbb{Q}(\chi)$.
First, recall that this condition on $\lambda$ is equivalent to the condition that $[\wt{G}:I]=[T_1:\mathcal{I}(\lambda)]$ is even, by \prettyref{lem:orbiteven}. Since $n_2=2$, \prettyref{prop:indstabdivides} yields that $[\wt{G}:I]_2=2$. This means that no $\wt{G}$-conjugate of $\chi$ can be fixed by any $\wt{g}\in \wt{G}$ whose determinant satisfies $|\det(\wt{g})|_2=(q+1)_2$. By an abuse of notation, we let $\tau$ also denote the unique element of $\mathrm{Gal}(\mathbb{F}_\lambda(\zeta_p)/\mathbb{F}_\lambda)$ that restricts to our fixed generator $\tau$ of $\mathrm{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q})$. In the notation of \prettyref{lem:GGGRarg}, we have $\varphi_u^x=\varphi_u^b=\tau\varphi_u$ for some $x\in P_{d}$ satisfying $|\det(x)|=(q+1)_2$, by \prettyref{cor:unipconj} and Lemmas \ref{lem:varphiuconj} and \ref{lem:conjinP}. Then by \prettyref{lem:GGGRarg}, there is a conjugate $\chi_0$ of $\chi$ such that $\chi_0^x=\tau\chi_0$. In particular, note that the condition on the determinant yields that $\chi_0^x\neq \chi_0$, so $\tau\chi_0\neq\chi_0$. Since $\chi$ and $\chi_0$ have the same field of values, we see $\tau\chi\neq \chi$, and we have $\mathbb{F}_\lambda\neq \mathbb{Q}(\chi)$. \end{proof} \begin{proposition}\label{prop:n0mod4} Let $\epsilon=-1$ and suppose that $q\equiv 3\pmod 4$ and $4\leq n_2\leq (q+1)_2$, and let $\chi\in\mathrm{Irr}(G|\wt{\chi}_\lambda)$ and $I:=\mathrm{stab}_{\wt{G}}(\chi)$. Then $\mathbb{F}_\lambda=\mathbb{Q}(\chi)$ if and only if $[\wt{G}:I]_2<n_2$. \end{proposition} \begin{proof} First, note that $[\wt{G}:I]_2\leq n_2$ by \prettyref{prop:indstabdivides}.
Note that by \prettyref{lem:sgn}, we may assume that $[\wt{G}:I]$ is even and therefore that $\chi\in\mathrm{Irr}(G|\wt{\Gamma}_u)$ where $u$ has no odd-power elementary divisor, by \prettyref{prop:oddelemoddindex}. By Lemmas \ref{lem:unipconj0mod4}, \ref{lem:varphiuconj}, and \ref{lem:conjinP}, there is some $x\in \wt{G}$ such that $\varphi_u^x=\varphi_u^b=\tau\varphi_u$ and $|\det(x)|=\frac{2(q+1)_2}{n_2}$, which is divisible by $2$ since $n_2\leq(q+1)_2$. By \prettyref{lem:GGGRarg}, there is a conjugate $\chi_0$ of $\chi$ such that $\chi_0^x=\tau\chi_0$. Suppose first that $[\wt{G}:I]_2=n_2$, so that $x$ cannot stabilize $\chi_0$, since $[I:G]_2=\frac{(q+1)_2}{n_2}$ (and the same is true for the stabilizer of $\chi_0$). This yields that $\chi_0\neq \tau\chi_0$, so the same holds for $\chi$. Hence if $[\wt{G}:I]_2=n_2$, then $\mathbb{F}_\lambda\neq\mathbb{Q}(\chi)$. Now suppose $[\wt{G}:I]_2<n_2$. That is, $[\wt{G}:I]_2\leq \frac{n_2}{2}$. Then the stabilizer of $\chi_0$ must contain $x$, since $I/G\cong \mathcal{I}(\lambda)$ is cyclic and contains the unique subgroup of $\wt{G}/G$ of size $\frac{2(q+1)_2}{n_2}$. Then $\chi_0=\tau\chi_0$, and the same is true for $\chi$, so $\mathbb{F}_\lambda=\mathbb{Q}(\chi)$. \end{proof} \begin{proof}[Proof of \prettyref{thm:turullext1}] Note that the case $[\wt{G}:I]_2=n_2$, for any $n$ even, is equivalent to having $\alpha\lambda=\lambda$ for any $\alpha\in \wh{T}_1$ of order $n_2$. In case $n\equiv 2\pmod 4$, we remark that this $\alpha$ is $\mathrm{sgn}$. Hence Propositions \ref{prop:initialcase}, \ref{prop:n2mod4}, and \ref{prop:n0mod4} combine to yield the statement. \end{proof} \end{document}
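The $2$-adic bookkeeping in the lemmas above rests on two elementary facts: that $(q^2-1)_2=2(q-\eta)_2$ for odd nonsquare $q\equiv\eta\pmod 4$ with $\eta\in\{\pm1\}$, and that an element of $2$-power order $N$ raised to the $n$-th power has order $N/n_2$ whenever $n_2\leq N$. A minimal numerical sanity check (the helper \texttt{two\_part} and the sample values of $q$ and $n$ are our own, not from the text):

```python
from math import gcd

def two_part(m: int) -> int:
    """Largest power of 2 dividing m."""
    return m & -m

# (q^2 - 1)_2 = 2 (q - eta)_2 for odd nonsquare q with q = eta mod 4,
# the order computation behind part (a) of the lemma:
checks_a = all(
    two_part(q * q - 1) == 2 * two_part(q - (1 if q % 4 == 1 else -1))
    for q in [3, 5, 7, 11, 19, 23, 27, 31]
)

# If beta_2 has 2-power order N = (q^2 - 1)_2 and n_2 <= (q + 1)_2, then
# beta_2^n has order N / gcd(N, n) = N / n_2, as used for |det(x)|:
checks_b = all(
    two_part(q * q - 1) // gcd(two_part(q * q - 1), n)
    == two_part(q * q - 1) // two_part(n)
    for q in [3, 7, 11, 19, 23]
    for n in [4, 8, 12, 20, 28]
    if two_part(n) <= two_part(q + 1)
)
```

Both boolean flags come out true for every sampled pair, consistent with the orders asserted in the proofs.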
\begin{document}
\firstpage
\newtheorem{odstavec}[Theorem]{\hskip -2mm}
\begin{abstract}This paper is concerned with energy properties of the wave equation associated to the Dunkl-Cherednik Laplacian. We establish the conservation of the total energy, the strict equipartition of energy under suitable assumptions and the asymptotic equipartition in the general case.\\
{\sl Mathematics Subject Classification 2000:} {Primary 35L05\,; Secondary 22E30, 33C67, 35L65, 58J45}\\
{\sl Keywords and phrases:} {Dunkl-Cherednik operator, wave equation, energy, \linebreak equipartition}
\end{abstract}
\vbadness=100000
\section{Introduction}
\noindent We use \cite{O2} as a reference for the Dunkl-Cherednik theory. Let $\mathfrak{a}$ be a Euclidean vector space of dimension $d$ equipped with an inner product $\langle \cdot,\cdot\rangle.$ Let $\mathcal{R}$ be a crystallographic root system in $\mathfrak{a},$ $\mathcal{R}^+$ a positive subsystem and $W$ the Weyl group generated by the reflections \,$r_\alpha(x)\!=\!x\!-\!2\frac{\langle\alpha,x\rangle}{\|\alpha\|^2}\,\alpha$ \,along the roots $\alpha\!\in\!\mathcal{R}.$ We let $k:\mathcal{R}\to[\,0,+\infty)$ denote a multiplicity function on the root system $\mathcal R,$ and $\rho=\frac12\sum_{\alpha\in\mathcal{R}^+}k_\alpha\alpha$. We note that $k$ is $W$-invariant. The Dunkl-Cherednik operators are the following differential-difference operators, which are deformations of partial derivatives and still commute pairwise\,:
\begin{equation*}
T_\xi f(x) =\partial_{\xi}f(x)-\langle\rho,\xi\rangle f(x) +\sum_{\alpha\in\mathcal{R}^{+}}k_\alpha\, \frac{\langle\alpha,\xi\rangle}{1\,-\,e^{-\langle\alpha,x\rangle}}\, \{f(x)\!-\!f(r_{\alpha}x)\}\,.
\end{equation*}
Given an orthonormal basis $\{\xi_1,\dots,\xi_d\}$ of $\mathfrak{a}$, the Dunkl-Cherednik Laplacian is defined by
\begin{equation*}
Lf(x)=\sum_{j=1}^{d}T_{\xi_j}^{2}f(x)\,.
\end{equation*}
More explicit formulas for $L$ exist but they will not be used in this paper. The Laplacian $L$ is selfadjoint with respect to the measure $\mu(x)\,dx$ where
\begin{equation*}
\mu(x)=\prod_{\alpha\in\mathcal{R}^+} \bigl|\,2\sinh\frac{\langle\alpha,x\rangle}2\bigr|^{2k_\alpha}\,.
\end{equation*}
Consider the wave equation
\begin{equation}
\label{WaveEquation}
\begin{cases}
\,\partial_t^2u(t,x)=L_xu(t,x),\\
\,u(0,x)=f(x), \,\partial_t\big|_{t=0}u(t,x)=g(x),
\end{cases}
\end{equation}
with smooth and compactly supported initial data $(f,g).$ Let us introduce:
\begin{itemize}
\item the \textit{kinetic energy} \,$\mathcal{K}[u](t) =\frac12{\displaystyle\int_{\mathfrak{a}}}\, \big|\partial_tu(x,t)\big|^{2}\mu(x)\,dx$,
\item the \textit{potential energy} \,$\mathcal{P}[u](t) =-\,\frac12{\displaystyle\int_{\mathfrak{a}}}\, Lu(x,t)\,\overline{u(x,t)}\,\mu(x)\,dx$,
\item the \textit{total energy} \,$\mathcal{E}[u](t) =\mathcal{K}[u](t) +\mathcal{P}[u](t)$.
\end{itemize}
In this paper we prove
\begin{itemize}
\item the conservation of the total energy:
\begin{equation}
\label{Conservation}
\textstyle \mathcal{E}[u](t)=\text{constant},
\end{equation}
\item the strict equipartition of energy, under the assumptions that the dimension $d$ is odd and that all the multiplicities $k_\alpha$ are integers:
\begin{equation}
\label{StrictEquipartition}
\textstyle \mathcal{K}[u](t)=\mathcal{P}[u](t)=\frac12\,\mathcal{E}[u] \text{ \,for }|t|\text{ large}\,,
\end{equation}
\item the asymptotic equipartition of energy, for arbitrary $d$ and $\R^+$-valued multiplicity function $k$:
\begin{equation}
\label{AsymptoticEquipartition}
\textstyle \mathcal{K}[u](t)\to\frac12\,\mathcal{E}[u] \text{ \,and \,} \mathcal{P}[u](t)\to\frac12\,\mathcal{E}[u] \text{ \,as \,} |t| \text{ goes to } \infty.
\end{equation}
\end{itemize}
The proofs follow \cite{BOS} and use the Fourier transform in the Dunkl-Cherednik setting, which we will recall in the next section. We mention that during the past twenty years, several works were devoted to Huygens' principle and equipartition of energy for wave equations on symmetric spaces and related settings. See for instance \cite{BO}, \cite[ch.\,V, \S\;5]{H}, \cite{BOS}, \cite{KY}, \cite{SO}, \cite{BOP1}, \cite{BOP2}, \cite{S}.
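In the degenerate case $k\equiv 0$, $d=1$, the operator $L$ is the ordinary Laplacian and the solution of the wave equation is given mode-by-mode by the classical formula $\widehat u(t,\xi)=\cos(t|\xi|)\,\widehat f(\xi)+\frac{\sin(t|\xi|)}{|\xi|}\,\widehat g(\xi)$. The conservation and asymptotic equipartition statements above can then be sanity-checked numerically; the sketch below works on a periodic grid rather than on $\mathfrak{a}=\mathbb{R}$, with arbitrarily chosen smooth initial data, and is illustrative only:

```python
import numpy as np

# Ordinary 1-d wave equation u_tt = u_xx on a 2*pi-periodic grid, solved
# exactly mode-by-mode: u^(t, xi) = cos(t|xi|) f^ + sin(t|xi|)/|xi| g^.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
xi = np.abs(np.fft.fftfreq(N, d=1.0 / N))   # integer wavenumbers |xi|
f = np.exp(np.cos(x)) - 1.0                 # smooth initial position (arbitrary)
g = np.sin(2.0 * x)                         # smooth mean-zero initial velocity
fh, gh = np.fft.fft(f), np.fft.fft(g)

def energies(t):
    """Kinetic and potential energy at time t, computed via Parseval."""
    ut_h = -xi * np.sin(t * xi) * fh + np.cos(t * xi) * gh  # transform of u_t
    du_h = xi * np.cos(t * xi) * fh + np.sin(t * xi) * gh   # |xi| u^, i.e. u_x up to phase
    K = 0.5 * np.sum(np.abs(ut_h) ** 2) / N
    P = 0.5 * np.sum(np.abs(du_h) ** 2) / N
    return K, P

ts = np.linspace(0.0, 200.0, 2001)
KP = np.array([energies(t) for t in ts])
E = KP.sum(axis=1)   # total energy: constant in t, by the identity sin^2 + cos^2 = 1
```

Conservation holds to machine precision, and the time average of the kinetic energy approaches $\mathcal{E}/2$; strict equipartition at finite times is not visible here, since the periodic data are not compactly supported on the line.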
\section{Generalized hypergeometric functions and Dunkl-Cherednik transform}
\noindent Opdam \cite{O1} introduced the following special functions, which are deformations of exponential functions \,$e^{\langle\lambda,x\rangle}$, \,and the associated Fourier transform.
\begin{theorem}
There exist a neighborhood \,$U$ of \,$0$ in $\mathfrak{a}$ and a unique holomorphic function \,$(\lambda,x)\mapsto G_\lambda(x)$ \,on \,$\mathfrak{a}_{\mathbb{C}}\!\times\!(\mathfrak{a}\!+\!iU)$ such that
\begin{equation*}
\begin{cases}
\;T_\xi G_\lambda(x)=\langle\lambda,\xi\rangle\,G_\lambda(x) \quad\forall\;\xi\!\in\!\mathfrak{a}\,,\\
\;G_\lambda(0)=1\,.
\end{cases}
\end{equation*}
Moreover, the following estimate holds on \,$\mathfrak{a}_{\mathbb{C}}\!\times\!\mathfrak{a}$\,:
\begin{equation*}
|G_\lambda(x)|\le|W|^{\frac12}\,e^{\|{\rm Re}\,\lambda\|\,\|x\|}\,.
\end{equation*}
\end{theorem}
\begin{Definition}
Let $f$ be a nice function on $\mathfrak{a}$, say $f$ belongs to the space $\mathcal{C}_c^\infty(\mathfrak{a})$ of smooth functions on $\mathfrak{a}$ with compact support. Its Dunkl-Cherednik transform is defined by
\begin{equation*}
\mathcal{F}\!f(\lambda) =\int_{\mathfrak{a}}f(x)\,G_{-iw_0\lambda}(w_0x)\,\mu(x)\,dx.
\end{equation*}
Here $w_0$ denotes the longest element in the Weyl group $W.$
\end{Definition}
The involvement of $w_0$ in the definition of $\mathcal F$ is related to the following skew-adjointness property of the Dunkl-Cherednik operators with respect to the inner product
\begin{equation*}
\langle f,g\rangle=\int_{\mathfrak{a}}f(x)\,\overline{g(x)}\,\mu(x)\,dx, \qquad f,g\in\mathcal{C}_c^\infty(\mathfrak{a}).
\end{equation*}
\begin{lemma}
The adjoint of \,$T_\xi$ is $-w_0T_{w_0\xi}w_0$\,:
\begin{equation*}
\langle T_\xi f,g\rangle=\langle f,-w_0T_{w_0\xi}w_0g\rangle\,.
\end{equation*}
\end{lemma}
As an immediate consequence, we obtain:
\begin{corollary}
For every \,$\xi,\lambda\!\in\!\mathfrak{a}$ and $f\!\in\!\mathcal{C}_c^\infty(\mathfrak{a})$, we have
\begin{equation*}
\mathcal{F}(T_\xi f)(\lambda) =i\langle\lambda,\xi\rangle\,\mathcal{F}\!f(\lambda),
\end{equation*}
and therefore
\begin{equation*}
\mathcal{F}(Lf)(\lambda)=-\|\lambda\|^{2}\,\mathcal{F}\!f(\lambda)\,.
\end{equation*}
\end{corollary}
Next we will recall from \cite{O1} three main results about the Dunkl-Cherednik transform (see also \cite{O2}). For $R>0,$ let $\mathcal{C}_R^\infty(\mathfrak{a})$ be the space of smooth functions on $\mathfrak{a}$ \,vanishing outside the ball \,$B_R\!=\!\{x\!\in\!\mathfrak{a}\,|\,\|x\|\!\le\!R\}.$ We let $\mathcal{H}_{R}(\mathfrak{a}_{\mathbb{C}})$ denote the space of holomorphic functions $h$ on the complexification $\mathfrak{a}_{\mathbb{C}}$ of $\mathfrak{a}$ such that, for every integer $N\!>\!0$,
\begin{equation*}
\textstyle \sup_{\lambda\in\mathfrak{a}_{\mathbb{C}}} (1\!+\!\|\lambda\|)^N\, e^{-R\,\|{\rm Im}\,\lambda\|}\,|h(\lambda)| <+\infty\,.
\end{equation*}
\begin{theorem}
\label{PaleyWiener}
{\rm\bf (Paley-Wiener)} The transformation $\mathcal{F}$ is an isomorphism of $\mathcal{C}_R^\infty(\mathfrak{a})$ onto $\mathcal{H}_{R}(\mathfrak{a}_{\mathbb{C}}),$ for every $R\!>\!0$.
\end{theorem}
The Plancherel formula and the inversion formula of $\mathcal F$ involve the complex measure \,$\nu(\lambda)\,d\lambda$ \,with density
\begin{equation*}
\textstyle
\nu(\lambda)\,
=\hspace{-1mm}\prod\limits_{\alpha\in\mathcal{R}_0^+}\hspace{-1mm}
\frac{\Gamma\bigl(i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha\bigr)\vphantom{\big|}}
     {\Gamma\bigl(i\langle\lambda,\check\alpha\rangle\bigr)\vphantom{\big|}}\,
\frac{\Gamma\bigl(\frac{i\langle\lambda,\check\alpha\rangle\,+\,k_{\alpha}}2+\,k_{2\alpha}\bigr)\vphantom{\big|}}
     {\Gamma\bigl(\frac{i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha}2\bigr)\vphantom{\big|}}\,
\frac{\Gamma\bigl(-\,i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha\bigr)\vphantom{\big|}}
     {\Gamma\bigl(-\,i\langle\lambda,\check\alpha\rangle\,+\,1\bigr)\vphantom{\big|}}\,
\frac{\Gamma\bigl(\frac{-i\langle\lambda,\check\alpha\rangle\,+\,k_{\alpha}}2+\,k_{2\alpha}+\,1\bigr)\vphantom{\big|}}
     {\Gamma\bigl(\frac{-i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha}2\bigr)\vphantom{\big|}}\,,
\end{equation*}
where \,$\mathcal{R}_0^+\! =\hspace{-.25mm}\{\alpha\!\in\mathcal{R}^+\,|\,\frac\alpha2\!\notin\!\mathcal{R}\,\}$ \,is the set of positive indivisible roots, \,$\check\alpha\!=\!2\|\alpha\|^{-2}\alpha$ \,the coroot corresponding to $\alpha$, and \,$k_{2\alpha}\!=\!0$ \,if \,$2\alpha\!\notin\!\mathcal{R}$. Notice that $\nu$ is an analytic function on $\mathfrak{a},$ with polynomial growth, which extends meromorphically to~$\mathfrak{a}_{\mathbb{C}}$. It is actually a polynomial if the multiplicity function $k$ is integer-valued and it has poles otherwise.
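The polynomiality of $\nu$ for integer-valued $k$ can be made concrete in rank one: for a single positive root $\alpha$ with $k_\alpha=k\in\mathbb{Z}_{>0}$ and $k_{2\alpha}=0$, writing $z=\langle\lambda,\check\alpha\rangle$, the second Gamma factor equals $1$ and repeated use of $\Gamma(w+1)=w\,\Gamma(w)$ collapses the product to $\frac{-iz+k}{2}\prod_{j=0}^{k-1}(iz+j)\prod_{j=1}^{k-1}(-iz+j)$. The sketch below checks this reading against a direct evaluation of the Gamma ratios, using a standard Lanczos approximation of the complex Gamma function so that no external libraries are assumed:

```python
import cmath, math

# Standard Lanczos approximation (g = 7, 9 coefficients) for complex Gamma.
_g = 7
_c = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z: complex) -> complex:
    z = complex(z)
    if z.real < 0.5:                       # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1.0 - z))
    z -= 1.0
    s = _c[0]
    for i in range(1, _g + 2):
        s += _c[i] / (z + i)
    t = z + _g + 0.5
    return math.sqrt(2.0 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * s

def nu_rank1(z: float, k: int) -> complex:
    """nu(lambda) for one positive root, k_alpha = k, k_{2alpha} = 0,
    with z = <lambda, alpha-check>; the second Gamma factor is then 1."""
    iz = 1j * z
    return (cgamma(iz + k) / cgamma(iz)
            * cgamma(-iz + k) / cgamma(-iz + 1)
            * (-iz + k) / 2)               # Gamma(w+1)/Gamma(w) = w

def nu_poly(z: float, k: int) -> complex:
    """Closed polynomial form predicted for integer k."""
    iz = 1j * z
    val = (-iz + k) / 2
    for j in range(k):
        val *= iz + j
    for j in range(1, k):
        val *= -iz + j
    return val

max_err = max(abs(nu_rank1(z, k) - nu_poly(z, k))
              for k in (1, 2, 3) for z in (0.7, 1.9, -2.3))
```

The two evaluations agree to roughly machine precision at the sampled points, consistent with the remark that $\nu$ is a polynomial for integer multiplicities.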
\begin{theorem}
{\rm\bf (Inversion formula)} There is a constant \,$c_0\!>\!0$ such that, for every $f\!\in\!\mathcal{C}_c^\infty(\mathfrak{a})$,
\begin{equation*}
f(x)\,=\,c_0\int_{\mathfrak{a}}\,
\mathcal{F}\!f(\lambda)\,G_{i\lambda}(x)\,\nu(\lambda)\,d\lambda\,.
\end{equation*}
\end{theorem}
\begin{theorem}\label{Plancherel}
{\rm\bf (Plancherel formula)} For every $f,g\!\in\!\mathcal{C}_c^\infty(\mathfrak{a})$,
\begin{equation*}
\int_{\mathfrak{a}}\,f(x)\,\overline{g(x)}\,\mu(x)\,dx\,
=\,c_0\int_{\mathfrak{a}}\,\mathcal{F}\!f(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\,
\nu(\lambda)\,d\lambda\,,
\end{equation*}
where \,$\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)
:=\overline{\mathcal{F}(w_0g)(w_0\lambda)}
=\!{\displaystyle\int_{\mathfrak{a}}}\;
\overline{g(x)}\,G_{i\lambda}(x)\,\mu(x)\,dx$\,.
\end{theorem}

\section{Conservation of energy}

\noindent
This section is devoted to the proof of \eqref{Conservation}.
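In the degenerate case $k\equiv0$, the kernel $G_{i\lambda}(x)$ reduces to $e^{\,i\langle\lambda,x\rangle}$ and the densities $\mu$, $\nu$ are constant, so the Plancherel formula above reduces to the classical Parseval identity. The following discrete sketch (our own illustration, using the DFT, for which the constant $c_0$ becomes $1/n$) checks this degenerate instance numerically.

```python
import numpy as np

# Degenerate case k = 0: G_{i lambda}(x) = e^{i<lambda, x>} and mu, nu are
# constant, so the Plancherel formula reduces to classical Parseval.
# Discrete analogue with the DFT; the constant c_0 becomes 1/n.
rng = np.random.default_rng(0)
n = 1024
f = rng.standard_normal(n)
g = rng.standard_normal(n)

lhs = np.sum(f * np.conj(g))                                  # int f conj(g)
rhs = np.sum(np.fft.fft(f) * np.conj(np.fft.fft(g))) / n      # c_0 int Ff conj(Fg)
assert np.isclose(lhs, rhs)
```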
Via the Dunkl-Cherednik transform, the wave equation \eqref{WaveEquation} becomes
\begin{equation*}
\begin{cases}
\,\partial_t^2\mathcal{F}\hspace{-.25mm}u(t,\lambda)
=-\|\lambda\|^2\mathcal{F}\hspace{-.25mm}u(t,\lambda),\\
\,\mathcal{F}\hspace{-.25mm}u(0,\lambda)=\mathcal{F}\!f(\lambda),
\quad\partial_t\big|_{t=0}\mathcal{F}\hspace{-.25mm}u(t,\lambda)=\mathcal{F}\!g(\lambda),
\end{cases}
\end{equation*}
and its solution satisfies
\begin{equation}\label{SolutionFourier}
\textstyle
\mathcal{F}\hspace{-.25mm}u(t,\lambda)
=(\cos t\|\lambda\|)\,\mathcal{F}\!f(\lambda)
+\frac{\sin t\|\lambda\|}{\|\lambda\|}\,\mathcal{F}\!g(\lambda)\,.
\end{equation}
By means of the Paley-Wiener Theorem 2.5, the author proves in \cite[pp.~52--53]{S} the following finite speed of propagation property:
\begin{quote}
Assume that the initial data $f$ and $g$ belong to $\mathcal{C}_R^\infty(\mathfrak{a})$. Then the solution $u(t,x)$ belongs to $\mathcal{C}_{R+|t|}^{\infty}(\mathfrak{a})$ as a function of $x$.
\end{quote}
Let us express the potential and kinetic energies defined in the introduction via the Dunkl-Cherednik transform. Using the Plancherel formula and Corollary 2.4, we have
\begin{equation}\label{PotentialFourier1}
\mathcal{P}[u](t)
=\,{\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|^2\mathcal{F}\hspace{-.25mm}u(t,\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}u(t,\lambda)\,
\nu(\lambda)\,d\lambda\,.
\end{equation}
Moreover, since the Dunkl-Cherednik Laplacian is $W$-invariant, it follows that $(w_0u)(t,x)\!=\!u(t,w_0x)$ is the solution to the wave equation \eqref{WaveEquation} with the initial data \,$w_0f$ and $w_0g$. Thus
\begin{equation}\label{SolutionFourierBis}
\textstyle
\widetilde{\mathcal{F}}\hspace{-.25mm}u(t,\lambda)
=(\cos t\|\lambda\|)\,\widetilde{\mathcal{F}}\!f(\lambda)
+\frac{\sin t\|\lambda\|}{\|\lambda\|}\,\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\,.
\end{equation}
Now, by substituting \eqref{SolutionFourier} and \eqref{SolutionFourierBis} in \eqref{PotentialFourier1}, we get
\begin{equation}\label{PotentialFourier2}
\begin{aligned}
\mathcal{P}[u](t)
&={\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|^2\,(\cos t\|\lambda\|)^2\,
\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)\,
\nu(\lambda)\,d\lambda\\
&+{\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
(\sin t\|\lambda\|)^2\,
\mathcal{F}\hspace{-.25mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\,
\nu(\lambda)\,d\lambda\\
&+{\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|\,(\sin 2t\|\lambda\|)\,
\bigl\{\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)
+\mathcal{F}\hspace{-.25mm}g(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)\,\bigr\}\,
\nu(\lambda)\,d\lambda\,.
\end{aligned}
\end{equation}
Similarly to $\mathcal{P}[u]$, we can rewrite the kinetic energy as
\begin{equation*}
\mathcal{K}[u](t)
=\,{\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
\partial_{t}\mathcal{F}\hspace{-.25mm}u(t,\lambda)\,
\partial_{t}\widetilde{\mathcal{F}}\hspace{-.25mm}u(t,\lambda)\,
\nu(\lambda)\,d\lambda\,.
\end{equation*}
Using the facts
\begin{equation*}
\begin{cases}
\;\partial_{t}\mathcal{F}\hspace{-.25mm}u(t,\lambda)
=-\,\|\lambda\|\,(\sin t\|\lambda\|)\,\mathcal{F}\!f(\lambda)
+(\cos t\|\lambda\|)\,\mathcal{F}\hspace{-.25mm}g(\lambda)\,,\\
\;\partial_{t}\widetilde{\mathcal{F}}\hspace{-.25mm}u(t,\lambda)
=-\,\|\lambda\|\,(\sin t\|\lambda\|)\,\widetilde{\mathcal{F}}\!f(\lambda)
+(\cos t\|\lambda\|)\,\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\,,
\end{cases}
\end{equation*}
we deduce that
\begin{equation}\label{KineticFourier1}
\begin{aligned}
\mathcal{K}[u](t)
&={\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|^2\,(\sin t\|\lambda\|)^2\,
\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)\,
\nu(\lambda)\,d\lambda\\
&+{\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
(\cos t\|\lambda\|)^2\,
\mathcal{F}\hspace{-.25mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\,
\nu(\lambda)\,d\lambda\\
&-{\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|\,(\sin 2t\|\lambda\|)\,
\bigl\{\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)
+\mathcal{F}\hspace{-.25mm}g(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)\,\bigr\}\,
\nu(\lambda)\,d\lambda\,.
\end{aligned}
\end{equation}
By summing up \eqref{PotentialFourier2} and \eqref{KineticFourier1}, we obtain the conservation of the total energy:
\begin{equation*}
\mathcal{E}[u](t)
=\,{\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
\bigl\{\|\lambda\|^2\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)
+\mathcal{F}\hspace{-.4mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\bigr\}\,
\nu(\lambda)\,d\lambda
=\,\mathcal{E}[u](0)\,.
\end{equation*}
That is, $\mathcal{E}[u](t)$ is independent of $t$.

\section{Equipartition of energy}

\noindent
This section is devoted to the proof of \eqref{StrictEquipartition} and \eqref{AsymptoticEquipartition}.
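The cancellation behind this conservation law can be sanity-checked numerically. The sketch below (our own illustration, for the classical one-dimensional case $k=0$, where $\mathcal{F}$ is the ordinary Fourier transform and $\nu$ is constant) evaluates $\mathcal{P}[u](t)$ and $\mathcal{K}[u](t)$ directly on the transform side, using Gaussian stand-ins for $\mathcal{F}\!f$ and $\mathcal{F}\!g$, and verifies that their sum does not depend on $t$.

```python
import numpy as np

lam = np.linspace(-20.0, 20.0, 40001)   # frequency variable lambda
dl = lam[1] - lam[0]
Ff = np.exp(-lam**2)                    # stand-in for F f(lambda)
Fg = lam * np.exp(-lam**2)              # stand-in for F g(lambda)

def energies(t):
    a = np.abs(lam)
    # sin(t|lambda|)/|lambda| written via np.sinc to handle lambda = 0
    Fu = np.cos(t * a) * Ff + t * np.sinc(t * a / np.pi) * Fg
    Fut = -a * np.sin(t * a) * Ff + np.cos(t * a) * Fg
    P = 0.5 * np.sum(lam**2 * Fu**2) * dl   # potential energy
    K = 0.5 * np.sum(Fut**2) * dl           # kinetic energy
    return P, K

E0 = sum(energies(0.0))
for t in (0.3, 1.0, 4.0, 20.0):
    P, K = energies(t)
    assert abs(P + K - E0) < 1e-9 * E0      # E[u](t) = E[u](0)
```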
Using the classical trigonometric identities for double angles, we can rewrite \eqref{PotentialFourier2} and \eqref{KineticFourier1} respectively as
\begin{equation*}
\begin{aligned}
\mathcal{P}[u](t)
&={\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\,
\bigl\{\|\lambda\|^2\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)
+\mathcal{F}\hspace{-.4mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\bigr\}\,
\nu(\lambda)\,d\lambda\\
&+{\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\;
(\cos2t\|\lambda\|)\,
\bigl\{\|\lambda\|^2\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)
-\mathcal{F}\hspace{-.4mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\bigr\}\,
\nu(\lambda)\,d\lambda\\
&+{\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|\,(\sin 2t\|\lambda\|)\,
\bigl\{\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)
+\mathcal{F}\hspace{-.25mm}g(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)\,\bigr\}\,
\nu(\lambda)\,d\lambda
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\mathcal{K}[u](t)
&={\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\,
\bigl\{\|\lambda\|^2\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)
+\mathcal{F}\hspace{-.4mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\bigr\}\,
\nu(\lambda)\,d\lambda\\
&-{\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\;
(\cos2t\|\lambda\|)\,
\bigl\{\|\lambda\|^2\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)
-\mathcal{F}\hspace{-.4mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\bigr\}\,
\nu(\lambda)\,d\lambda\\
&-{\textstyle\frac{c_0}4}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|\,(\sin 2t\|\lambda\|)\,
\bigl\{\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)
+\mathcal{F}\hspace{-.25mm}g(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)\,\bigr\}\,
\nu(\lambda)\,d\lambda\,.
\end{aligned}
\end{equation*}
Hence
\begin{equation}\label{Difference1}
\begin{aligned}
\mathcal{P}[u](t)-\mathcal{K}[u](t)
&={\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\,
(\cos2t\|\lambda\|)\,
\bigl\{\|\lambda\|^2\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)
-\mathcal{F}\hspace{-.4mm}g(\lambda)\,
\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)\bigr\}\,
\nu(\lambda)\,d\lambda\\
&+\,{\textstyle\frac{c_0}2}{\displaystyle\int_{\mathfrak{a}}}\;
\|\lambda\|\,(\sin 2t\|\lambda\|)\,
\bigl\{\,\mathcal{F}\!f(\lambda)\,\widetilde{\mathcal{F}}\hspace{-.25mm}g(\lambda)
+\mathcal{F}\hspace{-.25mm}g(\lambda)\,\widetilde{\mathcal{F}}\!f(\lambda)\,\bigr\}\,
\nu(\lambda)\,d\lambda\,.
\end{aligned}
\end{equation}
Introducing polar coordinates in $\mathfrak{a}$, \eqref{Difference1} becomes
\begin{equation}\label{Difference2}
\mathcal{P}[u](t)-\mathcal{K}[u](t)
={\textstyle\frac{c_0}2}{\displaystyle\int_{\,0}^{+\infty}}\hspace{-1mm}
\bigl\{\cos(2tr)\,\Phi(r)+\sin(2tr)\,r\,\Psi(r)\bigr\}\,r^{d-1}\,dr\,,
\end{equation}
where
\begin{equation*}
\begin{aligned}
\Phi(r)\,&=\int_{S(\mathfrak{a})}\textstyle
\bigl\{\,r^2\mathcal{F}\!f(r\sigma)\,\widetilde{\mathcal{F}}\!f(r\sigma)
-\mathcal{F}\hspace{-.4mm}g(r\sigma)\,
\widetilde{\mathcal{F}}\hspace{-.4mm}g(r\sigma)\bigr\}\,
\nu(r\sigma)\,d\sigma\,,\\
\Psi(r)\,&=\int_{S(\mathfrak{a})}\textstyle
\bigl\{\mathcal{F}\!f(r\sigma)\,\widetilde{\mathcal{F}}\hspace{-.4mm}g(r\sigma)
+\mathcal{F}\hspace{-.4mm}g(r\sigma)\,\widetilde{\mathcal{F}}\!f(r\sigma)\bigr\}\,
\nu(r\sigma)\,d\sigma\,,
\end{aligned}
\end{equation*}
and $d\sigma$ denotes the surface measure on the unit sphere $S(\mathfrak{a})$ in $\mathfrak{a}$. Let \,$\gamma_0\!\in\!(0,+\infty]$ \,be the width of the largest horizontal strip \,$|{\rm Im}\,z|\!<\!\gamma_0$ \,in which \,$z\mapsto\nu(z\sigma)$ \,is holomorphic for all directions \,$\sigma\!\in\!S(\mathfrak{a})$.
\begin{lemma}\label{EstimatePhiPsi}
{\rm (i)} $\Phi(z)$ and $\Psi(z)$ extend to even holomorphic functions in the strip \,$|{\rm Im}\,z|\!<\!\gamma_0$.
{\rm (ii)} If $\gamma_0\!<\!+\infty$, the following estimate holds in every substrip \,$|{\rm Im}\,z|\!\le\!\gamma$ \,with \,$\gamma\!<\!\gamma_0$: for every $N\!>\!0$, there is a constant \,$C\!>\!0$ $($depending on $f,g\!\in\!\mathcal{C}_R^\infty(\mathfrak{a})$, $N$ \!and $\gamma)$ such that
\begin{equation*}
|\Phi(z)|+|\Psi(z)|\le C\,|z|^{|\mathcal{R}_0^+|}\,(1\!+\!|z|)^{-N}\,e^{\,2R\,|{\rm Im}\,z|}\,.
\end{equation*}
{\rm (iii)} If $\gamma_0=+\infty$, the previous estimate holds uniformly in $\mathbb{C}$.
\end{lemma}
\begin{proof}
(i) follows from the definitions of \,$\Phi$ and $\Psi$. Let us turn to the estimates (ii) and (iii). On the one hand, according to the Paley-Wiener Theorem (Theorem \ref{PaleyWiener}), all the transforms \,$\mathcal{F}\!f(z\sigma)$, $\widetilde{\mathcal{F}}\!f(z\sigma)$, $\mathcal{F}\hspace{-.25mm}g(z\sigma)$, $\widetilde{\mathcal{F}}\hspace{-.25mm}g(z\sigma)$ \,are \,$\text{O}\bigl(\{1\!+\!|z|\}^{-N}e^{\,R\,|{\rm Im}\,z|}\bigr)$. On the other hand, let us discuss the behavior of the Plancherel measure. Consider first the case where all multiplicities are integers.
Without loss of generality, we may assume that $k_\alpha\!\in\!\mathbb{N}^*$ and $k_{2\alpha}\!\in\!\mathbb{N}$ \;for every indivisible root $\alpha$. Then
\begin{equation*}
\begin{aligned}
\nu(\lambda)\,
&\textstyle=\,{\rm const.}\;\prod_{\alpha\in\mathcal{R}_0^+}
\langle\lambda,\check\alpha\rangle\,
\bigl\{\langle\lambda,\check\alpha\rangle\!
+\!i(k_{\alpha}\hspace{-.75mm}+\!2k_{2\alpha})\bigr\}\\
&\textstyle\times\;\prod_{0<j<k_\alpha}
\bigl\{\langle\lambda,\check\alpha\rangle^2\hspace{-.75mm}+\!j^2\bigr\}
\;\prod_{0\le\widetilde{j}<k_{2\alpha}}
\bigl\{\langle\lambda,\check\alpha\rangle^2\hspace{-.75mm}
+\!(k_{\alpha}\hspace{-.75mm}+\!2\widetilde{j})^2\bigr\}
\end{aligned}
\end{equation*}
is a polynomial of degree \,$2\,|k|=2\sum_{\alpha\in\mathcal{R}^+}\!k_\alpha$. In general,
\begin{equation*}
\nu(\lambda)={\rm const.}\,\pi(\lambda)\,\widetilde\nu(\lambda)\,,
\end{equation*}
where
\begin{equation*}
\textstyle
\pi(\lambda)=\,\prod_{\alpha\in\mathcal{R}_0^+}\langle\lambda,\check\alpha\rangle
\end{equation*}
is a homogeneous polynomial of degree $|\mathcal{R}_0^+|$ and
\begin{equation*}
\textstyle
\widetilde\nu(\lambda)\,
=\hspace{-1mm}\prod\limits_{\alpha\in\mathcal{R}_0^+}\hspace{-1mm}
\frac{\Gamma\bigl(i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha\bigr)\vphantom{\big|}}
{\Gamma\bigl(i\langle\lambda,\check\alpha\rangle\,+\,1\bigr)\vphantom{\big|}}\,
\frac{\Gamma\bigl(\frac{i\langle\lambda,\check\alpha\rangle\,+\,k_{\alpha}}2+\,k_{2\alpha}\bigr)\vphantom{\big|}}
{\Gamma\bigl(\frac{i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha}2\bigr)\vphantom{\big|}}\,
\frac{\Gamma\bigl(-\,i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha\bigr)\vphantom{\big|}}
{\Gamma\bigl(-\,i\langle\lambda,\check\alpha\rangle\,+\,1\bigr)\vphantom{\big|}}\,
\frac{\Gamma\bigl(\frac{-i\langle\lambda,\check\alpha\rangle\,+\,k_{\alpha}}2+\,k_{2\alpha}+\,1\bigr)\vphantom{\big|}}
{\Gamma\bigl(\frac{-i\langle\lambda,\check\alpha\rangle\,+\,k_\alpha}2\bigr)\vphantom{\big|}}
\end{equation*}
is an analytic function which never vanishes on $\mathfrak{a}$. Notice that \,$z\mapsto\nu(z\sigma)$ or $\widetilde\nu(z\sigma)$ \,has poles for generic directions $\sigma\!\in\!S(\mathfrak{a})$ as soon as some multiplicities are not integers.
Using Stirling's formula
\begin{equation*}
\Gamma(\xi)\sim\sqrt{2\pi}\,\xi^{\xi-\frac12}\,e^{-\xi}
\quad\text{as \,}|\xi|\to+\infty
\text{ \,with \,}|\arg\xi|\!<\!\pi\!-\!\varepsilon\,,
\end{equation*}
we get the following estimate for the Plancherel density, in each strip \,$|{\rm Im}\,z|\!<\!\gamma$ \,with \,$0\!<\!\gamma\!<\!\gamma_0$:
\begin{equation*}
|\nu(z\sigma)|\le\,C\,|z|^{|\mathcal{R}_0^+|}\,(1\!+\!|z|)^{2|k|-|\mathcal{R}_0^+|}\,.
\end{equation*}
The estimates (ii) and (iii) follow easily from these considerations.
\end{proof}
\begin{proposition}\label{DifferenceEstimate1}
Assume that the dimension $d$ is odd and that all multiplicities are integers. Then there exists a constant \,$C\!>\!0$ $($depending on the initial data \linebreak $f,g\!\in\!\mathcal{C}_R^\infty(\mathfrak{a}))$ such that, for every \,$\gamma\!\ge\!0$ and \,$t\!\in\!\mathbb{R}$,
\begin{equation*}
|\mathcal{P}[u](t)-\mathcal{K}[u](t)|\le C\,e^{\,2\gamma(R-|t|)}\,.
\end{equation*} \end{proposition} \begin{proof} Evenness allows us to rewrite \eqref{Difference2} as follows\,: \begin{equation*} \mathcal{P}[u](t)\hspace{-.3mm}-\hspace{-.2mm}\mathcal{K}[u](t) ={\textstyle\frac{c_0}4} {\displaystyle\int_{-\infty}^{+\infty}}\hspace{-1mm}e^{\,i2tr} \bigl\{\Phi(r)\!-\!ir\Psi(r)\bigr\}\,r^{d-1}\,dr\,. \end{equation*} Let us shift the contour of integration from \,$\mathbb{R}$ \,to \,$\mathbb{R}\hspace{-.75mm}\pm\!i\gamma$, according to the sign of~$t$, and estimate the resulting integral, using Lemma \ref{EstimatePhiPsi}.iii. As a result, the difference of energy $$\mathcal{P}[u](t)-\mathcal{K}[u](t)= \frac{c_0}4\,e^{-2\gamma|t|} \int_{-\infty}^{+\infty} e^{\,i2tr}\,\bigl\{\Phi(r\!\pm\!i\gamma)\! -i(r\!\pm\!i\gamma)\,\Psi(r\!\pm\!i\gamma)\bigr\}\, (r\!\pm\!i\gamma)^{d-1}\,dr$$ is \,$\text{O}\bigl((1\!+\!\gamma)^{-N}e^{\,2\gamma(R-|t|)}\bigr)$.
\end{proof} As an immediate consequence of the above statement and in view of the fact that $\gamma_0=\infty$ when $k$ is integer-valued, we deduce the strict equipartition of energy \eqref{StrictEquipartition} for \,$|t|\!\ge\!R$, by letting $\gamma\!\to\!\infty$. Henceforth, we will drop the above assumption on $k$. By resuming the proof of Proposition \ref{DifferenceEstimate1} and using Lemma \ref{EstimatePhiPsi}.ii instead of Lemma \ref{EstimatePhiPsi}.iii, we obtain the following result. \begin{proposition} \label{EstimateDifference2} Assume that the dimension $d$ is odd. Then, for every \,$0\!<\!\gamma\!<\!\gamma_0$, there is a constant \,$C\!>\!0$ $(\text{depending on the initial data $f,g\!\in\!\mathcal{C}_R^\infty(\mathfrak{a})$})$ such that \begin{equation*} |\mathcal{P}[u](t)\hspace{-.3mm}-\hspace{-.2mm}\mathcal{K}[u](t)| \le C\,e^{-2\gamma|t|} \quad\forall\;t\!\in\!\mathbb{R}\,. \end{equation*} \end{proposition} As a corollary, we obtain the asymptotic equipartition of energy \eqref{AsymptoticEquipartition} in the odd dimensional case, with an exponential rate of decay. In the even dimensional case, the expression \eqref{Difference2} cannot be handled by complex analysis and we proceed differently. \begin{proposition} \label{EstimateDifference2} Assume that the dimension $d$ is even.
Then there is a constant \,$C\!>\!0$ $(\text{depending on the initial data $f,g\!\in\!\mathcal{C}_R^\infty(\mathfrak{a})$})$ such that \begin{equation*} |\mathcal{P}[u](t)\hspace{-.3mm}-\hspace{-.2mm}\mathcal{K}[u](t)| \le C\,(1\!+\!|t|)^{-d-|\mathcal{R}_0^+|} \quad\forall\;t\!\in\!\mathbb{R}\,. \end{equation*} \end{proposition} \begin{proof} The problem lies in the decay at infinity. According to Lemma \ref{EstimatePhiPsi}, $\Phi(r)$ and $\Psi(r)$ are divisible by $r^{D}$, where $D\!=\!|\mathcal{R}_0^+|$. Let us integrate \eqref{Difference2} \,$d\!+\!D$ times by parts. This way \begin{equation*} \int_{\,0}^{+\infty}\hspace{-1mm} \cos(2tr)\,r^{d-1} \underbrace{\Bigl\{ \int_{S(\mathfrak{a})} \mathcal{F}\hspace{-.4mm}g(r\sigma)\, \widetilde{\mathcal{F}}\hspace{-.4mm}g(r\sigma)\, \nu(r\sigma)\,d\sigma \Bigr\}}_{\widetilde\Phi(r)} dr \end{equation*} becomes \begin{equation*} \pm\;{\textstyle\frac{1\text{ \,or \,}0}{(2t)^{d+D}}}\, {\textstyle\frac{(d\,+D)\,!}{(D+1)\,!}}\, \bigl({\textstyle\frac\partial{\partial r}}\bigr)^{D+1} \widetilde\Phi(r)\big|_{r=0}\, \pm\int_{\,0}^{+\infty}\hspace{-1mm} {\textstyle\frac{\cos(2tr)\text{ or }\sin(2tr)}{(2t)^{d+D}}}\, \bigl({\textstyle\frac\partial{\partial r}}\bigr)^{d+D} \bigl\{r^{d-1}\,\widetilde\Phi(r)\bigr\}\,dr \end{equation*} which is \,$\text{O}\bigl(|t|^{-d-D}\bigr)$. Similarly \begin{equation*} \int_{\,0}^{+\infty}\hspace{-1mm} \cos(2tr)\,r^{d+1}\, \Bigl\{\int_{S(\mathfrak{a})}\! \mathcal{F}\!f(r\sigma)\,\widetilde{\mathcal{F}}\!f(r\sigma)\, \nu(r\sigma)\,d\sigma\Bigr\}\,dr =\,\text{O}\bigl(|t|^{-d-D-2}\bigr) \end{equation*} and \begin{equation*} \int_{\,0}^{+\infty}\hspace{-1mm} \cos(2tr)\,r^{d}\,\Psi(r)\,dr =\,\text{O}\bigl(|t|^{-d-D-1}\bigr). \end{equation*} This concludes the proof of Proposition \ref{EstimateDifference2}. \end{proof} As a corollary, we obtain the asymptotic equipartition of energy \eqref{AsymptoticEquipartition} in the even dimensional case, with a polynomial rate of decay. \begin{remark} Our result may not be optimal.
In the $W$\hspace{-1mm}-invariant case, one indeed obtains the rate of decay \,$\text{\rm O}\bigl(\{1\!+\!|t|\}^{-d-2|\mathcal{R}_0^+\!|}\bigr)$ as in \cite{BOS}. \end{remark} \references \nextref{S}{Ben Said, S.}{\em Huygens' principle for the wave equation associated with the trigonometric Dunkl-Cherednik operators}{Math. Research Letters {\bf 13} (2006), no. 1, 43--58} \nextref{SO}{Ben Said, S. and {\O}rsted, B.}{\em The wave equation for Dunkl operators}{Indag. Math. (N.S.) {\bf 16} (2005), no. 3--4, 351--391} \nextref{BO}{Branson, T. and \'Olafsson, G.}{\em Equipartition of energy for waves in symmetric space}{J. Funct. Anal. {\bf 97} (1991), no. 2, 403--416} \nextref{BOP1}{Branson, T., \'Olafsson, G. and Pasquale, A.}{\em The Paley--Wiener theorem and the local Huygens' principle for compact symmetric spaces}{Indag. Math. (N.S.) {\bf 16} (2005), no. 3--4, 393--428} \nextref{BOP2}{Branson, T., \'Olafsson, G. and Pasquale, A.}{\em The Paley--Wiener theorem for the Jacobi transform and the local Huygens' principle for root systems with even multiplicities}{Indag. Math. (N.S.) {\bf 16} (2005), no. 3--4, 429--442} \nextref{BOS}{Branson, T., \'Olafsson, G. and Schlichtkrull, H.}{\em Huygens' principle in Riemannian symmetric spaces}{Math. Ann. {\bf 301} (1995), no. 3, 445--462} \nextref{KY}{El Kamel, J. and Yacoub, C.}{\em Huygens' principle and equipartition of energy for the modified wave equation associated to a generalized radial Laplacian}{Ann. Math. Blaise Pascal {\bf 12} (2005), no. 1, 147--160} \nextref{H}{Helgason, S.}{\em Geometric analysis on symmetric spaces}{Math. Surveys and Monographs {\bf 39}, Amer. Math. Soc. (1994)} \nextref{O1}{Opdam, E. M.}{\em Harmonic analysis for certain representations of graded Hecke algebras}{Acta Math. {\bf 175} (1995), 75--121} \nextref{O2}{Opdam, E. M.}{\em Lecture notes on Dunkl operators for real and complex reflection groups}{Math. Soc. Japan Mem. {\bf 8} (2000)} \lastpage \end{document}
\begin{document} \title{Sequence entropy tuples and mean sensitive tuples} \author[J. Li]{Jie Li} \address[Jie Li]{School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou, Jiangsu, 221116, P.R. China} \email{[email protected]} \author[C. Liu]{Chunlin Liu} \address[Chunlin Liu]{CAS Wu Wen-Tsun Key Laboratory of Mathematics, School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui, 230026, P.R. China} \email{[email protected]} \author[S. Tu]{Siming Tu} \address[Siming Tu]{School of Mathematics (Zhuhai), Sun Yat-sen University, Zhuhai, Guangdong 519082, P.R. China} \email{[email protected]} \author[T. Yu]{Tao Yu} \address[Tao Yu]{Department of Mathematics, Shantou University, Shantou 515063, P.R. China} \email{[email protected]} \begin{abstract} Using the idea of local entropy theory, we characterize the sequence entropy tuple via mean forms of the sensitive tuple in both topological and measure-theoretical senses. For the measure-theoretical sense, we show that for an ergodic measure-preserving system, the $\mu$-sequence entropy tuples, the $\mu$-mean sensitive tuples and the $\mu$-sensitive in the mean tuples coincide, and give an example to show that the ergodicity condition is necessary. For the topological sense, we show that for a certain class of minimal systems, every mean sensitive tuple is a sequence entropy tuple. \end{abstract} \date{\today} \subjclass[2020]{37A35, 37B05} \keywords{Sequence entropy tuples; mean sensitive tuples; sensitive in the mean tuples} \maketitle \section{Introduction} By a {\it topological dynamical system} ({\it t.d.s.} for short) we mean a pair $(X,T)$, where $X$ is a compact metric space with a metric $d$ and $T$ is a homeomorphism from $X$ to itself. A point $x\in X$ is called a \textit{transitive point} if $\mathrm{Orb}(x,T)=\{x,Tx,\ldots\}$ is dense in $X$. A t.d.s. $(X,T)$ is called \textit{minimal} if all points in $X$ are transitive points.
Denote by $\B_X$ the collection of all Borel measurable subsets of $X$. A Borel (probability) measure $\mu$ on $X$ is called $T$-\textit{invariant} if $\mu(T^{-1}A)=\mu(A)$ for any $A\in \mathcal{B}_X$. A $T$-invariant measure $\mu$ on $X$ is called \textit{ergodic} if $B\in \mathcal{B}_X$ with $T^{-1}B=B$ implies $\mu(B)=0$ or $\mu(B)=1$. Denote by $M(X, T)$ (resp. $M^e(X, T)$) the collection of all $T$-invariant measures (resp. all ergodic measures) on $X$. For $\mu \in M(X,T)$, the \textit{support} of $\mu$ is defined by $\supp(\mu )=\{x\in X\colon \mu (U)>0\text{ for any neighbourhood }U\text{ of }x\}$. Each measure $\mu\in M(X,T)$ induces a {\it measure-preserving system} ({\it m.p.s.} for short) $(X,\B_X,\mu, T)$. It is well known that the entropy can be used to measure the local complexity of the structure of orbits in a given system. One may naturally ask how to characterize the entropy in a local way. The related research started from the series of pioneering papers of Blanchard et al.\ \cite{B1992, B1993, B1997, B1995}, in which the notions of entropy pairs and entropy pairs for a measure were introduced. Since then, entropy pairs have been intensively studied by many researchers. Huang and Ye \cite{HY06} extended the notions from pairs to finite tuples, and showed that if the entropy of a given system is positive, then there are entropy $n$-tuples for any $n\in \mathbb{N}$ in both topological and measurable settings. The sequence entropy was introduced by Ku\v shnirenko \cite{Kus} to establish the relation between spectral theory and entropy theory. As in classical local entropy theory, the sequence entropy can also be localized. In \cite{HLSY03, HMY04} the authors investigated sequence entropy pairs, sequence entropy tuples and sequence entropy tuples for a measure, respectively. Using tools from combinatorics, Kerr and Li \cite{KL07, KL09} studied (sequence) entropy tuples, (sequence) entropy tuples for a measure and IT-tuples via independence sets.
Huang and Ye \cite{HY09} showed that a system has a sequence entropy $n$-tuple if and only if its maximal pattern entropy is no less than $\log n$, in both topological and measurable settings. For more introductions and applications of the local entropy theory, we refer to the survey \cite{GY09}. In addition to the entropy, the sensitivity is another candidate to describe the complexity of a system, which was first used by Ruelle \cite{Ruelle1977}. In \cite{X05}, Xiong introduced a multi-variate version of the sensitivity, called the $n$-sensitivity. \begin{comment} According to Auslander and Yorke \cite{AY80} a t.d.s. $(X,T)$ is called \emph{sensitive} if there exists $\delta>0$ such that for every opene (open and non-empty) subset $U$, there exist $x_1,x_2\in U$ and $m\in\mathbb{N}$ with $d(T^mx_1,T^mx_2)>\delta$. In \cite{X05}, Xiong introduced a multi-variate version of sensitivity, called $n$-sensitivity. \end{comment} Motivated by the local entropy theory, Ye and Zhang \cite{YZ08} introduced the notion of sensitive tuples. In particular, they showed that a transitive t.d.s. is $n$-sensitive if and only if it has a sensitive $n$-tuple, and that a sequence entropy $n$-tuple of a minimal t.d.s. is a sensitive $n$-tuple. For the converse, Maass and Shao \cite{MS07} showed that in a minimal t.d.s., if a sensitive $n$-tuple is a minimal point of the $n$-fold product t.d.s. then it is a sequence entropy $n$-tuple. \begin{comment} They introduced the notions of $n$-sensitivity for a measure $\mu$ and sensitive $n$-tuple for $\mu$ and showed that a t.d.s. with an ergodic measure $\mu$ is $n$-sensitive for $\mu$ if and only if it has a sensitive $n$-tuple for $\mu$; and for a t.d.s. with an ergodic measure $\mu$, sequence entropy $n$-tuple for $\mu$ is a sensitive $n$-tuple for $\mu$. \end{comment} Recently, Li, Tu and Ye \cite{LTY15} studied the sensitivity in the mean form.
Li, Ye and Yu \cite{LY21,LYY22} further studied the multi-variate version of mean sensitivity and its local representation, namely, the mean $n$-sensitivity and the mean $n$-sensitive tuple. One naturally wonders if there is still a characterization of sequence entropy tuples via mean sensitive tuples. By the results of \cite{FGJO, GJY21, KL07, LYY22} one can see that a sequence entropy tuple is not always a mean sensitive tuple, even in a minimal t.d.s. Nonetheless, the works of \cite{DG16,Huang06,LTY15} yield that every minimal mean sensitive t.d.s. (i.e. one having a mean sensitive pair, by \cite{LYY22}) is not tame (i.e. there exists an IT pair, by \cite{KL07}). So generally, we conjecture that for any minimal t.d.s., a mean sensitive $n$-tuple is an IT $n$-tuple and hence a sequence entropy $n$-tuple by \cite[Theorem 5.9]{KL07}. Now we can answer this question under an additional condition. Namely, \begin{thm}\label{thm:ms=>it} Let $(X,T)$ be a minimal t.d.s. and $\pi: (X,T)\rightarrow (X_{eq},T_{eq})$ be the factor map to its maximal equicontinuous factor which is almost one-to-one. Then for $2\le n\in\mathbb{N}$, $$MS_n(X,T)\subset IT_n(X,T),$$ where $MS_n(X,T)$ denotes all the mean sensitive $n$-tuples and $IT_n(X,T)$ denotes all the IT $n$-tuples. \end{thm} In the parallel measure-theoretical setting, Huang, Lu and Ye \cite{HLY11} studied measurable sensitivity and its local representation. The notion of $\mu$-mean sensitivity for an invariant measure $\mu$ on a t.d.s. was studied by Garc\'{\i}a-Ramos \cite{G17}. Li \cite{L16} introduced the notion of the $\mu$-mean $n$-sensitivity, and showed that an ergodic m.p.s. is $\mu$-mean $n$-sensitive if and only if its maximal pattern entropy is no less than $\log n$.
The authors in \cite{LYY22} introduced the notion of the $\mu$-$n$-sensitivity in the mean, which was \begin{comment} if there is $\delta>0$ such that for any Borel subset $A$ of $X$ with $\mu(A)>0$ there are $m\in \mathbb{N}$ and $n$ pairwise distinct points $x_1^m,x_2^m,\dots,x_n^m\in A$ such that $$ \frac{1}{m}\sum_{k=0}^{m-1}\min_{1\le i\neq j\le n} d(T^k x_i^m, T^k x_j^m)>\delta. $$ By definitions $\mu$-sensitivity in the mean tuple seems weaker than $\mu$-mean sensitivity tuple, however, they are \end{comment} proved to be equivalent to the $\mu$-mean $n$-sensitivity in the ergodic case. Using the idea of localization, the authors of \cite{LY21} introduced the notion of the $\mu$-mean sensitive tuple and showed that every $\mu$-entropy tuple of an ergodic m.p.s. is a $\mu$-mean sensitive tuple. A natural question was left open in \cite{LY21}: \begin{ques} Is there a characterization of $\mu$-sequence entropy tuples via $\mu$-mean sensitive tuples? \end{ques} The authors in \cite{LT20} introduced a weaker notion named the density-sensitive tuple and showed that every $\mu$-sequence entropy tuple of an ergodic m.p.s. is a $\mu$-density-sensitive tuple. In this paper, we give a positive answer to this question. Namely, \begin{thm}\label{cor:se=sm} Let $(X,T)$ be a t.d.s., $\mu\in M^e(X,T)$ and $2\le n\in \mathbb{N}$. Then the $\mu$-sequence entropy $n$-tuples, the $\mu$-mean sensitive $n$-tuples and the $\mu$-$n$-sensitive in the mean tuples coincide. \end{thm} By the definitions, it is easy to see that a $\mu$-mean sensitive $n$-tuple must be a $\mu$-$n$-sensitive in the mean tuple. Thus, Theorem \ref{cor:se=sm} is a direct corollary of the following two theorems. \begin{thm}\label{thm:sm=>se} Let $(X,T)$ be a t.d.s., $\mu\in M(X,T)$ and $2\le n\in \mathbb{N}$. Then each $\mu$-$n$-sensitive in the mean tuple is a $\mu$-sequence entropy $n$-tuple.
\end{thm} \begin{thm}\label{thm:se=>ms} Let $(X,T)$ be a t.d.s., $\mu\in M^e(X,T)$ and $2\le n\in \mathbb{N}$. Then each $\mu$-sequence entropy $n$-tuple is a $\mu$-mean sensitive $n$-tuple. \end{thm} In fact, Theorem \ref{thm:sm=>se} shows a bit more than Theorem \ref{cor:se=sm}: for a $T$-invariant measure $\mu$ which is not ergodic, every $\mu$-$n$-sensitive in the mean tuple is still a $\mu$-sequence entropy $n$-tuple. However, the following result shows that the ergodicity of $\mu$ in Theorem \ref{thm:se=>ms} is necessary. \begin{thm}\label{thm:sm=/=se} For every $2\le n\in \mathbb{N}$, there exist a t.d.s. $(X,T)$ and $\mu\in M(X,T)$ such that there is a $\mu$-sequence entropy $n$-tuple which is not a $\mu$-$n$-sensitive in the mean tuple. \end{thm} It is fair to note that Garc{\'i}a-Ramos told us that, at the same time, he and Mu{\~n}oz-L{\'o}pez had obtained a completely independent proof of the equivalence of the sequence entropy pair and the mean sensitive pair in the ergodic case \cite{GM22}. Their proof relies on the deep equivalent characterization of measurable sequence entropy pairs developed by Kerr and Li \cite{KL09} using the combinatorial notion of independence. Our results provide more information in the general case, and the proofs work with the classical definition of sequence entropy pairs introduced in \cite{HMY04}. It is worth noting that the proofs depend on a new interesting ergodic measure decomposition result (Lemma \ref{0726}), which was applied to prove the profound conjectures of Erd\H{o}s in number theory by Kra, Moreira, Richter and Robertson \cite{KMRR,KMRR1}. This decomposition may have further applications because it carries hybrid topological and Borel structures. The outline of the paper is as follows. In Sec. \ref{sec2}, we recall some basic notions that we will use in the paper. In Sec. \ref{sec3}, we prove Theorem \ref{thm:sm=>se}. In Sec.
\ref{sect:proof of thm se=>ms}, we show Theorem \ref{thm:se=>ms} and Theorem \ref{thm:sm=/=se}. In Sec. \ref{sec5}, we study the mean sensitive tuple and the sequence entropy in the topological sense and show Theorem \ref{thm:ms=>it}. \section{Preliminaries}\label{sec2} Throughout the paper, denote by $\mathbb{N}$ and ${\mathbb{Z}}_{+}$ the collections of natural numbers $\{1,2,\dots\}$ and non-negative integers $\{0,1,2,\dots\}$, respectively. For $F\subset \mathbb{Z}_+$, denote by $\#\{F\}$ (or simply write $\#F$ when it is clear from the context) the cardinality of $F$. The \emph{upper density} $\overline{D}(F)$ of $F$ is defined by $$ \overline{D}(F)=\limsup_{n\to\infty} \frac{\#\{F\cap[0,n-1]\}}{n}. $$ Similarly, the \emph{lower density} $\underline{D}(F)$ of $F$ is given by $$ \underline{D}(F)=\liminf_{n\to\infty} \frac{\#\{F\cap[0,n-1]\}}{n}. $$ If $\overline{D}(F)=\underline{D}(F)$, we say that the \textit{density} of $F$ exists and equals the common value, which is written as $D(F)$. Given a t.d.s. $(X,T)$ and $n\in \mathbb{N}$, denote by $X^{(n)}$ the $n$-fold product of $X$. Let $\Delta_n(X)=\{(x,x,\dots, x)\in X^{(n)}\colon x\in X\}$ be the diagonal of $X^{(n)}$ and $\Delta_n^\prime(X)=\{(x_1,x_2,\dots,x_n)\in X^{(n)}\colon x_i=x_j \text{ for some } 1\le i\neq j\le n \}$. If a closed subset $Y\subset X$ is $T$-invariant in the sense that $TY= Y$, then the restriction $(Y, T|_Y)$ (or simply $(Y,T)$ when it is clear from the context) is also a t.d.s., which is called a \textit{subsystem} of $(X,T)$. Let $(X,T)$ be a t.d.s., $x\in X$ and $U,V\subset X$. Denote $$ N(x,U)=\{n\in\mathbb{Z}_+ \colon T^n x\in U\} \ \text{ and }\ N(U,V)=\{n\in\mathbb{Z}_+\colon U\cap T^{-n}V\neq\emptyset\}. $$ A t.d.s. $(X,T)$ is called \textit{transitive} if $N(U,V)\neq\emptyset$ for all non-empty open subsets $U,V$ of $X$. It is well known that the set of all transitive points in a transitive t.d.s. forms a dense $G_\delta$ subset of $X$. Given two t.d.s.
$(X, T)$ and $(Y,S)$, a map $\pi\colon X\to Y$ is called a \textit{factor map} if $\pi$ is surjective and continuous such that $\pi\circ T=S\circ\pi$, in which case $(Y,S)$ is referred to as a \textit{factor} of $(X, T)$. Furthermore, if $\pi$ is a homeomorphism, we say that $(X,T)$ is \textit{conjugate} to $(Y,S)$. A t.d.s. $(X,T)$ is called \textit{equicontinuous} (resp. \textit{mean equicontinuous}) if for any $\epsilon>0$ there is $\delta>0$ such that if $x,y\in X$ with $d(x,y)<\delta$ then $\max_{k\in\mathbb{Z}_+}d(T^kx,T^ky)<\epsilon$ (resp. $\limsup_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}d(T^kx,T^ky)<\epsilon$). Every t.d.s. $(X, T)$ is known to have a maximal equicontinuous factor (and a maximal mean equicontinuous factor \cite{LTY15}). For more on mean equicontinuous systems, see the recent survey \cite{LYY}. In the remainder of this section, we fix a t.d.s. $(X,T)$ with a measure $\mu\in M(X,T)$. The {\it entropy of a finite measurable partition $\alpha=\left\{A_1, A_2, \ldots, A_k\right\}$ of $X$} is defined by $ H_\mu(\alpha)=-\sum_{i=1}^k \mu\left(A_i\right) \log \mu\left(A_i\right), $ where $0 \log 0$ is defined to be $0$. Moreover, we define the {\it sequence entropy of $T$ with respect to $\alpha$ along an increasing sequence $S=\left\{s_i\right\}_{i=1}^{\infty}$ of $\mathbb{Z}_+$} by $$ h_\mu^{S}(T, \alpha)=\limsup _{n\rightarrow \infty} \frac{1}{n} H_\mu\left(\bigvee_{i=1}^n T^{-s_i} \alpha\right). $$ The {\it sequence entropy of $T$ along the sequence $S$} is $$ h_\mu^{S}(T)=\sup _{\alpha} h_\mu^{S}(T, \alpha), $$ where the supremum is taken over all finite measurable partitions.
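As a quick numerical illustration of the partition entropy $H_\mu(\alpha)$ defined above (an illustrative sketch only, not part of the paper's argument; the function name is ours):

```python
import math

def partition_entropy(masses):
    # H_mu(alpha) = -sum_i mu(A_i) log mu(A_i), with the convention 0 log 0 = 0.
    return -sum(m * math.log(m) for m in masses if m > 0)

# For Lebesgue measure on [0,1) and the partition into k equal subintervals,
# the entropy is log k, the largest value a k-cell partition can attain.
print(partition_entropy([0.25, 0.25, 0.25, 0.25]))  # log 4 ~ 1.3863
print(partition_entropy([0.5, 0.5]))                # log 2 ~ 0.6931
```

Refining a partition (here, four equal cells versus two) can only increase the entropy, consistent with the monotonicity $H_\mu(\alpha)\ge H_\mu(\beta)$ when $\alpha\succ\beta$.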
Correspondingly, the {\it topological sequence entropy of $T$ with respect to $S$ and a finite open cover $\mathcal{U}$} is $$ h^{S}(T, \mathcal{U})=\limsup _{n \rightarrow\infty} \frac{1}{n} \log N\left(\bigvee_{i=1}^n T^{-s_i} \mathcal{U}\right), $$ where $N\left(\bigvee_{i=1}^n T^{-s_i} \mathcal{U}\right)$ is the minimum among the cardinalities of all subfamilies of $\bigvee_{i=1}^n T^{-s_i} \mathcal{U}$ covering $X$. The {\it topological sequence entropy of $T$ with respect to $S$} is defined by $$h^{S}(T)=\sup _{\mathcal{U}} h^{S}(T, \mathcal{U}),$$ where the supremum is taken over all finite open covers. Let $(x_i)_{i=1}^n\in X^{(n)}$. A finite cover $\mathcal{U}=\{U_1,U_2,\ldots,U_k\}$ of $X$ is said to be an {\it admissible cover} with respect to $(x_i)_{i=1}^n$ if for each $1\leq j\leq k$ there exists $1\leq i_j\leq n$ such that $x_{i_j}\notin\overline{U_j}$. Analogously, we define admissible partitions with respect to $(x_i)_{i=1}^n$. \begin{defn}[\cite{HMY04},\cite{MS07}] An $n$-tuple $(x_i)_{i=1}^n\in X^{(n)}\setminus \Delta_n(X)$, $n\geq 2$, is called \begin{itemize} \item a sequence entropy $n$-tuple for $\mu$ if for any admissible finite Borel measurable partition $\alpha$ with respect to $(x_i)_{i=1}^n$, there exists a sequence $S=\{m_i\}_{i=1}^{\infty}$ of $\mathbb{Z}_+$ such that $h^{S}_{\mu}(T,\alpha)>0$. Denote by $SE_n^{\mu}(X,T)$ the set of all sequence entropy $n$-tuples for $\mu$. \item a sequence entropy $n$-tuple if for any admissible finite open cover $\mathcal{U}$ with respect to $(x_i)_{i=1}^n$, there exists a sequence $S=\{m_i\}_{i=1}^{\infty}$ of $\mathbb{Z}_+$ such that $h^{S}(T,\mathcal{U})>0$. Denote by $SE_n(X,T)$ the set of all sequence entropy $n$-tuples. \end{itemize} \end{defn} We say that $f\in L^2(X,\B_X,\mu)$ is {\it almost periodic} if $\{f\circ T^n \colon n\in \mathbb{Z}_+\}$ is precompact in $L^2(X,\B_X,\mu)$.
The set of all almost periodic functions is denoted by $H_c$, and there exists a $T$-invariant $\sigma$-algebra $\mathcal{K}_\mu \subset \B_X$ such that $H_c= L^2(X,\mathcal{K}_\mu,\mu)$; $\mathcal{K}_\mu$ is called the Kronecker algebra of $(X, \B_X,\mu, T)$. The product $\sigma$-algebra of $X^{(n)}$ is denoted by $\mathcal{B}_X^{(n)}$. Define the measure $\lambda_n(\mu)$ on $\mathcal{B}_X^{(n)}$ by letting $$\lambda_n(\mu)(\prod_{i=1}^nA_i)=\int_{X}\prod_{i=1}^n\mathbb{E}(1_{A_i}|\mathcal{K}_\mu)d\mu.$$ Note that $SE_n^{\mu}(X,T)=\supp(\lambda_n(\mu))\setminus \Delta_n(X)$ \cite[Theorem 3.4]{HMY04}. \section{Proof of Theorem \ref{thm:sm=>se}}\label{sec3} \begin{defn}[\cite{LY21}]\label{defn:mu mean n-sensitive tuple} For $2\le n\in \mathbb{N}$ and a t.d.s. $(X,T)$ with $\mu\in M(X,T)$, we say that the $n$-tuple $(x_1,x_2,\dotsc,x_n)\in X^{(n)}\setminus \Delta_n(X)$ is \begin{enumerate} \item a \textit{$\mu$-mean $n$-sensitive tuple} if for any open neighbourhoods $U_i$ of $x_i$ with $i=1,2,\dotsc,n$, there is $\delta> 0$ such that for any $A\in \B_X$ with $\mu(A)>0$ there are $y_1,y_2,\dotsc,y_n\in A$ and a subset $F$ of $\mathbb{Z}_+$ with $\overline{D}(F)>\delta$ such that $T^k y_i \in U_i$ for all $i=1,2,\dots,n$ and $k\in F$. \item a \textit{$\mu$-$n$-sensitive in the mean tuple} if for any $\tau>0$, there is $\delta=\delta(\tau)> 0$ such that for any $A\in\B_X$ with $\mu(A)>0$ there are $m\in \mathbb{N}$ and $y_1^m,y_2^m,\dotsc,y_n^m\in A$ such that $$ \frac{\#\{0\le k\le m-1\colon T^ky_i^m\in B(x_i,\tau), i=1,2,\ldots,n\}}{m}>\delta. $$ \end{enumerate} \end{defn} We denote the set of all $\mu$-mean $n$-sensitive tuples (resp. $\mu$-$n$-sensitive in the mean tuples) by $MS_n^\mu(X,T)$ (resp. $SM_n^\mu(X,T)$). We call an $n$-tuple $(x_1,x_2,\dotsc,x_n)\in X^{(n)}$ \textit{essential} if $x_i\neq x_j$ for each $1\le i<j\le n$, and in this case we write the collection of all essential $n$-tuples in $MS_n^\mu(X,T)$ (resp.
$SM_n^\mu(X,T)$) as $MS_n^{\mu,e}(X,T)$ (resp. $SM_n^{\mu,e}(X,T)$). \begin{comment} \begin{defn}[\cite{LYY22}]\label{defn:mu-n-sensitive in the mean} For $2\le n\in \mathbb{N}$ and a t.d.s. $(X,T)$ with $\mu\in M(X,T)$, we say that $(X,T)$ is \textit{$\mu$-$n$-sensitive in the mean} if there is $\delta>0$ such that for any Borel subset $A$ of $X$ with $\mu(A)>0$ there are $m\in \mathbb{N}$ and $n$ pairwise distinct points $x_1^m,x_2^m,\dots,x_n^m\in A$ such that $$ \frac{1}{m}\sum_{k=0}^{m-1}\min_{1\le i\neq j\le n} d(T^k x_i^m, T^k x_j^m)>\delta. $$ \end{defn} \end{comment} \begin{proof}[Proof of Theorem \ref{thm:sm=>se}] It suffices to prove $SM_n^{\mu,e}(X,T)\subset SE_n^{\mu,e}(X,T)$. Let $(x_1,\ldots,x_n)\in SM_n^{\mu,e}(X,T)$. Take an admissible partition $\alpha=\{A_1,\ldots,A_l\}$ with respect to $(x_1,\ldots,x_n)$. Then for each $1\le k\le l$, there is $i_k\in \{1,\ldots,n\}$ such that $x_{i_k}\notin \overline{A_k}$. Put $E_i=\{1\le k\le l\colon x_i\not\in \overline{A_k}\}$ for $1\le i\le n$. Obviously, $\cup_{i=1}^n E_i=\{1,\ldots,l\}$. Set $$B_1=\cup_{k\in E_1}A_k, B_2=\cup_{k\in E_2\setminus E_1}A_k, \ldots, B_n=\cup_{k\in E_n\setminus(\cup_{j=1}^{n-1}E_j)}A_k. $$ Then $\beta=\{B_1,\ldots,B_n\}$ is also an admissible partition with respect to $(x_1,\ldots,x_n)$ such that $x_i\notin \overline{B_i}$ for all $1\le i\le n$. Without loss of generality, we assume $B_i\neq \emptyset$ for $1\le i\le n$. Since $\alpha\succ\beta$, it suffices to show that there exists a sequence $S=\{m_i\}_{i=1}^{\infty}$ of $\mathbb{Z}_+$ such that $h^{S}_{\mu}(T,\beta)>0$. Let $$h^*_\mu(T,\beta)=\sup \{h^{S}_{\mu}(T,\beta)\colon S \ \text{is a sequence of } \mathbb{Z}_+ \}.$$ By \cite[Lemma 2.2 and Theorem 2.3]{HMY04}, we have $h^*_\mu(T,\beta)=H_\mu(\beta|\mathcal{K}_\mu)$, where $\mathcal{K}_\mu$ is the Kronecker algebra of $(X,\B_X,\mu,T)$. So it suffices to show $\beta\nsubseteq \mathcal{K}_\mu$. Suppose, to the contrary, that $\beta\subseteq \mathcal{K}_\mu$.
Then for each $i=1,\ldots,n$, $1_{B_i}$ is an almost periodic function. By \cite[Theorems 4.7 and 5.2]{Y19}, $1_{B_i}$ is a $\mu$-equicontinuous in the mean function. That is, for each $1\le i\le n$ and any $\tau>0$, there is a compact $K\subset X$ with $\mu(K)>1-\tau$ such that for any $\epsilon'>0$, there is $\delta'>0$ such that for all $m\in\mathbb{N}$, whenever $x,y\in K$ with $d(x,y)<\delta'$, \begin{equation}\label{3} \frac{1}{m}\sum_{t=0}^{m-1}|1_{B_i}(T^tx)- 1_{B_i}(T^ty)|<\epsilon'. \end{equation} On the other hand, take $\epsilon>0$ such that $B_\epsilon(x_i)\cap B_i=\emptyset$ for $i=1,\ldots,n$. Since $(x_1,\ldots,x_n)\in SM_n^{\mu,e}(X,T)$, there is $\delta:=\delta(\epsilon)>0$ such that for any $A\in \B_X$ with $\mu(A)>0$ there are $m\in\mathbb{N}$ and $y_1^m,\ldots,y_n^m\in A$ such that if we denote $C_m=\{0\le t\le m-1\colon T^ty_i^m\in B_\epsilon(x_i)\text{ for all }i=1,2,\ldots,n\}$ then $\#C_m \ge m\delta$. Since $B_\epsilon(x_1)\cap B_1=\emptyset$, we have $B_\epsilon(x_1)\subset \cup_{i=2}^nB_i$. This implies that there is $i_0\in \{2,\ldots,n\}$ such that $$ \# \{t\in C_m\colon T^ty_1^m\in B_{i_0} \}\ge \frac{\#C_m}{n-1}. $$ For any $t\in C_m$, we have $T^ty_{i_0}^m\in B_\epsilon(x_{i_0})$, and then $T^ty_{i_0}^m\notin B_{i_0}$, as $B_\epsilon(x_{i_0})\cap B_{i_0}=\emptyset$. This implies that \begin{equation}\label{e1} \frac{1}{m}\sum_{t=0}^{m-1}|1_{B_{i_0}}(T^ty_1^m)-1_{B_{i_0}}(T^ty_{i_0}^m)|\ge\frac{\#C_m}{m(n-1)}\ge \frac{\delta}{n-1}. \end{equation} Now take $\epsilon'=\frac{\delta}{2(n-1)}$, and choose a measurable subset $A\subset K$ such that $\mu(A)>0$ and $\diam(A)=\sup\{d(x,y)\colon x,y\in A\}<\delta'$. Then by \eqref{3}, for any $m\in\mathbb{N}$ and $x,y\in A$, $$ \frac{1}{m}\sum_{t=0}^{m-1}|1_{B_{i_0}}(T^tx)- 1_{B_{i_0}}(T^ty)|<\frac{\delta}{2(n-1)}, $$ which contradicts \eqref{e1}. Thus, $SM_n^{\mu,e}(X,T)\subset SE_n^{\mu,e}(X,T)$.
\end{proof}

\section{Proof of Theorem \ref{thm:se=>ms}}\label{sect:proof of thm se=>ms}
In Section 4.1, we first reduce Theorem \ref{thm:se=>ms} to the case of an ergodic m.p.s. admitting a continuous factor map to its Kronecker factor, and then we finish the proof of Theorem \ref{thm:se=>ms} under this assumption. In Section 4.2, we show that the condition that $\mu$ is ergodic is necessary.

\subsection{Ergodic case}
Throughout this section, we will use the following two types of factor maps between two m.p.s. $(X, \B_X,\mu, T)$ and $(Z, \B_Z,\nu, S)$.
\begin{enumerate}
\item \emph{Measurable factor maps:} a measurable map $\pi: X \rightarrow Z$ such that $\mu\circ\pi^{-1}=\nu$ and $\pi \circ T=S \circ \pi$ $\mu$-a.e.;
\item \emph{Continuous factor maps:} a topological factor map $\pi: X \rightarrow Z$ such that $\mu\circ\pi^{-1}=\nu$.
\end{enumerate}
If a continuous factor map $\pi$ satisfies $\pi^{-1}(\B_Z)=\mathcal{K}_\mu$, then $\pi$ is called a continuous factor map to its Kronecker factor. The following result is a weaker version of \cite[Proposition 3.20]{KMRR}.
\begin{lem}\label{lem3}
Let $(X, \mathcal{B}_X,\mu, T)$ be an ergodic m.p.s. Then there exist an ergodic m.p.s. $(\tilde{X},\tilde{B}, \tilde{\mu}, \tilde{T})$ and a continuous factor map $\tilde{\pi}: \tilde{X} \rightarrow X$ such that $(\tilde{X},\tilde{B}, \tilde{\mu}, \tilde{T})$ has a continuous factor map to its Kronecker factor.
\end{lem}
The following result shows that we only need to prove $SE_n^{\mu}(X,T)\subset MS_n^{\mu}(X,T)$ for ergodic m.p.s. with a continuous factor map to its Kronecker factor.
\begin{lem}\label{lem5}
If $SE_n^{\tilde{\mu}}(\tilde{X},\tilde T)\subset MS_n^{\tilde{\mu}}(\tilde{X},\tilde T)$ for all ergodic m.p.s. $(\tilde{X},\tilde{B}, \tilde{\mu}, \tilde{T})$ with a continuous factor map to its Kronecker factor, then $SE_n^{\mu}(X,T)\subset MS_n^{\mu}(X,T)$ for all ergodic m.p.s. $(X, \mathcal{B}_X,\mu, T)$.
\end{lem}
\begin{proof}
By Lemma \ref{lem3}, there exist an ergodic m.p.s. $(\tilde{X},\tilde{B}, \tilde{\mu}, \tilde{T})$ and a continuous factor map $\tilde{\pi}: \tilde{X} \rightarrow X$ such that $(\tilde{X},\tilde{B}, \tilde{\mu}, \tilde{T})$ has a continuous factor map to its Kronecker factor. Thus $SE_n^{\tilde\mu}(\tilde{X},\tilde T)\subset MS_n^{\tilde\mu}(\tilde{X},\tilde T)$ by the assumption. For any $(x_1,\dotsc,x_n)\in SE_n^{\mu}(X,T)\setminus \Delta_n'(X)$, by \cite[Theorem 3.7]{HMY04} there exists an $n$-tuple $(\tilde{x_1},\dots,\tilde{x_n})\in SE_n^{\tilde\mu}(\tilde{X},\tilde T)\setminus \Delta_n'(\tilde{X})$ such that $\tilde\pi(\tilde{x_i})=x_i$. For any open neighborhood $U_1\times \dots \times U_n$ of $(x_1,\dotsc,x_n)$ with $U_i\cap U_j=\emptyset$ for $i\neq j$, the set $\tilde\pi^{-1}(U_1)\times \dots \times \tilde\pi^{-1}(U_n)$ is an open neighborhood of $(\tilde{x_1},\dots,\tilde{x_n})$. Since $(\tilde{x_1},\dots,\tilde{x_n})\in SE_n^{\tilde\mu}(\tilde{X},\tilde T)\setminus \Delta_n'(\tilde{X})\subset MS_n^{\tilde\mu}(\tilde{X},\tilde T)\setminus \Delta_n'(\tilde{X})$, there exists $\delta>0$ such that for any $A\in\mathcal{B}_X$ with $\tilde{\mu}(\tilde\pi^{-1}(A))=\mu(A)>0$, there exist $F\subset \mathbb{N}$ with $\overline{D}(F)\ge \delta$ and $\tilde{y_1},\dots,\tilde{y_n}\in \tilde\pi^{-1}(A)$ such that for any $m\in F$,
$$ (\tilde T^m\tilde{y_1},\dots,\tilde T^m\tilde{y_n}) \in \tilde\pi^{-1}(U_1)\times \dots \times \tilde\pi^{-1}(U_n)$$
and hence $(T^m\tilde\pi(\tilde{y_1}),\dots,T^m\tilde\pi(\tilde{y_n}))\in U_1\times \dots \times U_n$. Note that $\tilde\pi(\tilde{y_i})\in A$ for each $i=1,2,\ldots,n$. Thus we have $(x_1,\dotsc,x_n)\in MS_n^{\mu}(X,T)$.
\end{proof}
By the above lemma, in the rest of this section we fix an ergodic m.p.s. with a continuous factor map $\pi:(X,\mathcal{B}_X, \mu, T)\rightarrow (Z,\mathcal{B}_Z, \nu, R)$ to its Kronecker factor.
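Before proceeding, two standard extreme cases may help the reader fix ideas (a side remark, not used in what follows; here $\eta_z$ denotes the disintegration of $\mu$ over $\pi$ fixed just below):

```latex
% (1) If (X,\mu,T) is itself an ergodic rotation, e.g. an irrational rotation
%     on Z = \mathbb{T} with Haar measure, then \mathcal{K}_\mu = \mathcal{B}_X,
%     the identity is a continuous factor map to the Kronecker factor, and the
%     fibers of \pi are points.
% (2) If (X,\mu,T) is weakly mixing, then \mathcal{K}_\mu is trivial, Z is the
%     one-point group, and the single fiber carries all of \mu.
\[
\text{(1)}\quad \eta_z=\delta_z ,
\qquad\qquad
\text{(2)}\quad \eta_z=\mu .
\]
```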
Moreover, we fix a measure disintegration $z \to \eta_{z}$ of $\mu$ over $\pi$, i.e. $\mu = \int_Z \eta_{z} d\nu(z)$. The following lemma plays a crucial role in our proof. In \cite[Proposition 3.11]{KMRR}, the authors proved it for $n=2$; the general case is similar in idea. For readability, we move the complicated proof to Appendix \ref{APPENDIX}.
\begin{lem}\label{0726}
Let $\pi:(X,\mathcal{B}_X, \mu, T)\rightarrow (Z,\mathcal{B}_Z, \nu, R)$ be a continuous factor map to its Kronecker factor. Then for each $n\in\mathbb{N}$, there exists a continuous map $\textbf{x}\mapsto \lambda_{\textbf{x}}^n$ from $X^{(n)}$ to $M(X^{(n)})$ such that the map $\textbf{x} \mapsto \lambda_{\textbf{x}}^n$ is an ergodic decomposition of $\mu^{(n)}$, where $\mu^{(n)}$ is the $n$-fold product of $\mu$ and
$$ \lambda^n_\textbf{x} = \int_Z \eta_{z + \pi(x_1)} \times\dots\times \eta_{z+\pi(x_n)} d\nu(z), \text{ for }\textbf{x}=(x_1,x_2,\ldots,x_n).$$
\end{lem}
The following two lemmas can be viewed as generalizations of Lemma 3.3 and Theorem 3.4 in \cite{HMY04}, respectively.
\begin{lem}\label{lem1}
Let $\pi:(X,\mathcal{B}_X, \mu, T)\rightarrow (Z,\mathcal{B}_Z, \nu, R)$ be a continuous factor map to its Kronecker factor. Assume that $\mathcal{U}=\{U_1, U_2, \dots, U_n\}$ is a measurable cover of $X$. Then for any measurable partition $\alpha$ finer than $\mathcal{U}$ as a cover, there exists an increasing sequence $S\subset\mathbb{Z}_+$ with $h_{\mu}^{S}(T,\alpha)>0$ if and only if $\lambda_\textbf{x}^n (U_1^c\times\dots\times U_n^c)>0$ for all $\textbf{x}=(x_1,\dotsc, x_n)\in X^{(n)}$.
\end{lem}
\begin{proof}
$(\Rightarrow)$ Suppose to the contrary that $\lambda_\textbf{x}^n(U_1^c\times\dots\times U_n^c)=0$ for some $\textbf{x}=(x_1,\dotsc, x_n)\in X^{(n)}$. Let $C_i=\{z\in Z: \eta_{z+\pi(x_i)}(U_i^c)>0\}$ for $i=1,\dotsc,n$.
Then
$$\mu(U_i^c\setminus \pi^{-1}(C_i))=\int_{Z}\eta_{z+\pi(x_i)}(U_i^c\cap \pi^{-1}(C_i^c))d \nu(z)=0.$$
Put $D_i=\pi^{-1}(C_i)\cup (U_i^c\setminus \pi^{-1}(C_i))$. Then $D_i\in \pi^{-1}(\mathcal{B}_Z)= \mathcal{K}_\mu$ and $D_i^c\subset U_i$, where $\mathcal{K}_\mu$ is the Kronecker factor of $X$. For any $\textbf{s}=(s(1),\dotsc,s(n))\in \{0,1\}^n$, let $D_{\textbf{s}}=\cap_{i=1}^nD_i\left(s(i)\right)$, where $D_i(0)=D_i$ and $D_i(1)=D_i^c$. Set $E_1=\left(\cap_{i=1}^nD_i\right)\cap U_1 $ and $E_j=\left(\cap_{i=1}^nD_i\right)\cap( U_j\setminus \bigcup_{i=1}^{j-1}U_i)$ for $j=2,\dotsc,n$. Consider the measurable partition
$$\alpha=\left\{D_\textbf{s}:\textbf{s}\in\{0,1\}^n\setminus\{(0,\dotsc,0)\}\right\}\cup\{E_1, \dotsc, E_n\}.$$
For any $\textbf{s}\in \{0,1\}^n\setminus\{(0,\dotsc,0)\}$, we have $s(i)=1$ for some $i=1,\dotsc,n$, and then $D_\textbf{s}\subset D_i^c\subset U_i$. It is straightforward that $E_j\subset U_j$ for all $1\leq j\leq n$. Thus $\alpha$ is finer than $\mathcal{U}$, and by hypothesis there exists an increasing sequence $S$ of $\mathbb{Z}_+$ with $h_{\mu}^{S}(T,\alpha)>0$. On the other hand, since $\lambda_{\textbf{x}}^n(U_1^c\times\dots\times U_n^c)=0$, we deduce $\nu\left(\cap_{i=1}^nC_i\right)=0$ and hence $\mu\left(\cap_{i=1}^nD_i\right)=0$. Thus we have $E_1,\dotsc, E_n\in \mathcal{K}_\mu$. It is also clear that $D_\textbf{s}\in \mathcal{K}_\mu$ for all $\textbf{s}\in\{0,1\}^n\setminus\{(0,\dotsc,0)\}$, as $D_1,\dotsc,D_n\in \mathcal{K}_\mu.$ Therefore each element of $\alpha$ is $\mathcal{K}_\mu$-measurable, so by \cite[Lemma 2.2]{HMY04},
$$h^{S}_{\mu}(T,\alpha)\leq H_{\mu}(\alpha|\mathcal{K}_\mu)=0,$$
a contradiction.

$(\Leftarrow)$ Assume $\lambda_\textbf{x}^n(U_1^c\times\dots\times U_n^c)>0$ for any $\textbf{x}\in X^{(n)}$. In particular, we take $\textbf{x}=(x,\ldots,x)$ such that $\pi(x)$ is the identity element of the group $Z$.
Without loss of generality, we may assume that any finite measurable partition $\alpha$ which is finer than $\mathcal{U}$ as a cover is of the type $\alpha=\left\{A_1, A_2, \ldots, A_n\right\}$ with $A_i \subset U_i$, for $1 \leqslant i \leqslant n$. Let $\alpha$ be such a partition. We observe that
\begin{equation*}
\begin{split}
\int_Z \eta_{z}({A_1^c}) \dots\eta_{z}(A_n^c) d\nu(z) \ge \int_Z \eta_{z }({U_1^c}) \dots\eta_{z}(U_n^c) d\nu(z)=\lambda_\textbf{x}^n(U_1^c\times\dots\times U_n^c)>0.
\end{split}
\end{equation*}
Therefore, $A_j \notin \mathcal{K}_\mu$ for some $1 \leqslant j \leqslant n$. It follows from \cite[Theorem 2.3]{HMY04} that there exists a sequence $S \subset \mathbb{Z}_+$ such that $h_\mu^{S}(T, \alpha)=H_\mu\left(\alpha \mid \mathcal{K}_\mu\right)>0$. This finishes the proof.
\end{proof}
\begin{lem}\label{lem2}
For any $\textbf{x}=(x_1,\dotsc,x_n)\in X^{(n)}$,
\[SE_{n}^\mu(X,T)= \operatorname{supp}\lambda_{\textbf{x}}^n\setminus \Delta_n(X).\]
\end{lem}
\begin{proof}
On the one hand, let $\textbf{y}=(y_1,\dotsc,y_n)\in SE_n^{\mu}(X,T)$. We show that $\textbf{y}\in\operatorname{supp}\lambda_{\textbf{x}}^n\setminus \Delta_n(X)$. It suffices to prove that for any measurable neighborhood $U_1\times \dots \times U_n$ of $\textbf{y}$,
$$\lambda_{\textbf{x}}^n\left(U_1\times U_2\times \dots \times U_n\right)> 0.$$
Without loss of generality, we assume that $U_i\cap U_j=\emptyset$ if $y_i\not= y_j$. Then $\mathcal{U}=\{U_1^c, U_2^c, \dots, U_n^c\}$ is a finite cover of $X$. It is clear that any finite measurable partition $\alpha$ finer than $\mathcal{U}$ as a cover is an admissible partition with respect to $\textbf{y}$. Therefore, there exists an increasing sequence $S\subset\mathbb{Z}_+$ with $h_{\mu}^{S}(T,\alpha)>0$.
By Lemma \ref{lem1}, we obtain that
$$\lambda_\textbf{x}^n\left(U_1\times U_2\times \dots \times U_n\right)> 0,$$
which implies that $\textbf{y}\in \operatorname{supp}\lambda_{\textbf{x}}^n$. Since $\textbf{y}\notin \Delta_n(X)$, $\textbf{y}\in \operatorname{supp}\lambda_{\textbf{x}}^n\setminus \Delta_n(X)$.

On the other hand, let $\textbf{y}=(y_1,\ldots,y_n) \in \operatorname{supp}\lambda_\textbf{x}^n\setminus \Delta_n(X)$. We show that for any admissible partition $\alpha=\left\{A_1, A_2, \ldots, A_k\right\}$ with respect to $\textbf{y}$ there exists an increasing sequence $S \subset \mathbb{Z}_+$ such that $h_\mu^{S}(T, \alpha)>0$. Since $\alpha$ is an admissible partition with respect to $\textbf{y}$, there exist closed neighborhoods $U_i$ of $y_i$, $1 \leqslant i \leqslant n$, such that for each $j \in\{1,2, \ldots, k\}$ we find $i_j \in\{1,2, \ldots, n\}$ with $A_j \subset U_{i_j}^c$. That is, $\alpha$ is finer than $\mathcal{U}=\left\{U_1^c, U_2^c, \ldots, U_n^c\right\}$ as a cover. Since
$$\lambda_\textbf{x}^n\left(U_1\times U_2\times \dots \times U_n\right)>0,$$
by Lemma \ref{lem1}, there exists an increasing sequence $S \subset \mathbb{Z}_+$ such that $h_\mu^{S}(T, \alpha)>0$.
\end{proof}
Now we are ready to give the proof of Theorem \ref{thm:se=>ms}.
\begin{proof}[Proof of Theorem \ref{thm:se=>ms}]
We only need to prove that $SE_n^{\mu,e}(X,T)\subset MS_n^{\mu,e}(X,T)$. Let $\pi:(X,\mathcal{B}_X, \mu, T)\rightarrow (Z,\mathcal{B}_Z, \nu, R)$ be a continuous factor map to its Kronecker factor. For any $\textbf{y}=(y_1,\ldots,y_n)\in SE_n^{\mu,e}(X,T)$, let $U_1\times U_2\times \dots \times U_n$ be an open neighborhood of $\textbf{y}$ such that $U_i\cap U_j=\emptyset$ for $1\le i\not=j \le n$. By Lemma \ref{lem2}, one has $\lambda_\textbf{x}^n\left(U_1\times U_2\times \dots \times U_n\right)> 0$ for any $\textbf{x}=(x_1,\dotsc,x_n)\in X^{(n)}$.
Since the map $\textbf{x} \mapsto \lambda_\textbf{x}^n$ is continuous, $X$ is compact and $U_1, U_2, \dotsc, U_n$ are open sets, it follows that there exists $\delta>0$ such that for any $\textbf{x}\in X^{(n)}$, $\lambda_\textbf{x}^n\left(U_1\times U_2\times \dots \times U_n\right)\ge \delta$. As the map $\textbf{x} \mapsto \lambda_\textbf{x}^n$ is an ergodic decomposition of $\mu^{(n)}$, there exists $B\subset X^{(n)}$ with $\mu^{(n)}(B)=1$ such that $\lambda_\textbf{x}^n$ is ergodic on $X^{(n)}$ for any $\textbf{x}\in B$. For any $A\in\mathcal{B}_X$ with $\mu (A)>0$, there exists a subset $C$ of $X^{(n)}$ with $\mu^{(n)}(C)>0$ such that for any $\textbf{x}\in C$,
\[\lambda_\textbf{x}^n(A^n)>0.\]
Take $\textbf{x}\in B\cap C$; by the Birkhoff pointwise ergodic theorem, for $\lambda_\textbf{x}^n$-a.e. $(x_1',\dotsc,x_n')\in X^{(n)}$,
\[\lim_{N\to \infty}\frac{1}{N}\sum_{m=0}^{N-1}1_{U_1\times U_2\times\dots\times U_n}(T^mx_1',\dotsc,T^mx_n')=\lambda_\textbf{x}^n\left(U_1\times U_2\times\dots\times U_n\right)\ge \delta.\]
Since $\lambda_\textbf{x}^n\left(A^n\right)>0$, there exists $(x_1'',\dotsc,x_n'')\in A^n$ such that
\begin{equation*}
\begin{split}
&\lim_{N\to \infty}\frac{1}{N}\#\{m\in[0,N-1]:(T^mx_1'',\dotsc,T^mx_n'')\in U_1\times U_2\times\dots\times U_n\}\\
&=\lim_{N\to \infty}\frac{1}{N}\sum_{m=0}^{N-1}1_{U_1\times U_2\times\dots\times U_n}(T^mx_1'',\dotsc,T^mx_n'')\\
&=\lambda_\textbf{x}^n\left(U_1\times U_2\times\dots\times U_n\right)\ge \delta.
\end{split}
\end{equation*}
Let $F=\{m\in\mathbb{Z}_+:(T^mx_1'',\dotsc,T^mx_n'')\in U_1\times U_2\times\dots\times U_n\}$. Then $D(F)\ge \delta$ and hence $\textbf{y}\in MS_n^{\mu,e}(X,T).$ This finishes the proof.
\end{proof}

\subsection{Non-ergodic case}
\begin{lem}\label{lem4}
Let $(X,T)$ be a t.d.s.
For any $\mu\in M(X,T)$ of the form $\mu=\sum_{i=1}^{m}\lambda_i\nu_i$, where $\nu_i\in M^e(X,T)$, $\sum_{i=1}^m\lambda_i=1$ and $\lambda_i>0$, one has
\begin{equation}\label{1}
\bigcup_{i=1}^mSE_n^{\nu_i}(X,T)\subset SE_n^{\mu}(X,T)
\end{equation}
and
\begin{equation}\label{2}
\bigcap_{i=1}^mSM_n^{\nu_i}(X,T)= SM_n^{\mu}(X,T).
\end{equation}
\end{lem}
\begin{proof}
We first prove \eqref{1}. For any $\textbf{x}=(x_1,\dotsc,x_n)\in\bigcup_{i=1}^mSE_n^{\nu_i}(X,T)$, there exists $i\in\{1,2,\ldots,m\}$ such that $\textbf{x}\in SE_n^{\nu_i}(X,T)$, and then for any admissible partition $\alpha$ with respect to $\textbf{x}$, there exists $S=\{s_j\}_{j=1}^\infty$ such that $h_{\nu_i}^S(T,\alpha)>0.$ By the definition of sequence entropy,
\[h_{\mu}^S(T,\alpha)=\limsup_{N\to \infty}\sum_{i=1}^m\lambda_i\frac{1}{N}H_{\nu_i}(\bigvee_{j=0}^{N-1}T^{-s_j}\alpha)\ge \lambda_ih_{\nu_i}^S(T,\alpha)>0.\]
So $\textbf{x}\in SE_n^{\mu}(X,T)$, which finishes the proof of \eqref{1}.

Next, we show \eqref{2}. For this, we only need to note that for any $A\in\mathcal{B}_X$, $\mu(A)>0$ if and only if $\nu_j(A)>0$ for some $j\in\{1,2,\ldots, m\}.$
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:sm=/=se}]
We first claim that there is a t.d.s. $(X,T)$ with $\mu_1,\mu_2\in M^e(X,T)$ such that $SE_n^{\mu_1}(X,T)\neq SE_n^{\mu_2}(X,T)$. For example, consider the full shift on two symbols with the Bernoulli measure given by the probability vector $(1/2,1/2)$. It has completely positive entropy, and the measure has full support. Thus every non-diagonal $n$-tuple is a sequence entropy $n$-tuple for this measure. In particular, we consider two such full shifts $(X_1,T_1,\mu_1)=\left(\{0,1\}^{\mathbb{Z}},\sigma_1,\mu_1\right)$ and $(X_2,T_2,\mu_2)=\left(\{2,3\}^{\mathbb{Z}},\sigma_2,\mu_2\right)$ and define a new system $(X,T)$ as $X=X_1\bigsqcup X_2$, $T|_{X_i}=T_i$, $i=1,2$.
Then $\mu_1,\mu_2\in M^e(X,T)$ and $SE_n^{\mu_1}(X,T)=X_1^{(n)}\setminus\Delta_n(X_1)\neq X_2^{(n)}\setminus\Delta_n(X_2)=SE_n^{\mu_2}(X,T).$ Let $\mu=\frac{1}{2}\mu_1+\frac{1}{2}\mu_2\in M(X,T)$. By Lemma \ref{lem4}, if $SE_n^\mu(X,T)=SM_n^\mu(X,T)$ then we have
\[\cup_{i=1}^2SE_n^{\mu_i}(X,T)\subset SE_n^{\mu}(X,T)=SM_n^\mu(X,T)=\cap_{i=1}^2SM_n^{\mu_i}(X,T).\]
However, applying Theorem \ref{cor:se=sm} to each $\mu_i\in M^e(X,T)$, one has
\[SE_n^{\mu_i}(X,T)=SM_n^{\mu_i}(X,T), \text{ for }i=1,2.\]
So $SE_n^{\mu_1}(X,T)= SE_n^{\mu_2}(X,T)$, contradicting our assumption.
\end{proof}

\section{Topological sequence entropy and mean sensitive tuples}\label{sec5}
This section is devoted to providing some partial evidence for the conjecture that in a minimal system every mean sensitive tuple is a topological sequence entropy tuple. It is known that topological sequence entropy tuples have the lifting property \cite{MS07}. We show that, under the minimality condition, mean sensitive tuples also have the lifting property. Let us begin with some notions.

For $2\le n\in\mathbb{N}$, we say that $(x_1,x_2,\dotsc,x_n)\in X^{(n)}\setminus \Delta_n(X)$ (resp. $(x_1,x_2,\dotsc,x_n)\in X^{(n)}\setminus \Delta'_n(X)$) is a \textit{mean $n$-sensitive tuple} (resp. an \textit{essential mean $n$-sensitive tuple}) if for any $\tau>0$, there is $\delta=\delta(\tau)> 0$ such that for any nonempty open set $U\subset X$ there exist $y_1,y_2,\dotsc,y_n\in U$ such that if we denote $F=\{k\in\mathbb{Z}_+\colon T^ky_i\in B(x_i,\tau),i=1,2,\ldots,n\}$ then $\overline{D}(F)>\delta$. Denote the set of all mean $n$-sensitive tuples (resp. essential mean $n$-sensitive tuples) by $MS_n(X,T)$ (resp. $MS^e_n(X,T)$).
\begin{thm}\label{lem:MSn-factor}
Let $\pi: (X,T)\rightarrow (Y,S)$ be a factor map between two t.d.s.
Then
\begin{enumerate}
\item $\pi^{(n)} ( MS_n(X,T))\subset MS_n(Y,S)\cup \Delta_n(Y)$ for every $n\geq 2$;
\item $\pi^{(n)}\left(MS_n(X, T) \cup \Delta_n(X)\right)= MS_n(Y,S)\cup \Delta_n(Y)$ for every $n\geq 2$, provided that $(X,T)$ is minimal.
\end{enumerate}
\end{thm}
\begin{proof}
(1) follows directly from the definition. We only prove (2). Suppose that $(y_1,y_2,\cdots,y_n)\in MS_n(Y,S)$; we will show that there exists $(z_1,z_2,\cdots,z_n)\in MS_n(X,T)$ such that $\pi(z_i)=y_i$. Fix $x\in X$ and let $U_m=B(x,\frac{1}{m})$. Since $(X,T)$ is minimal, $\operatorname{int}(\pi(U_m))\not= \emptyset$, where $\operatorname{int}(\pi(U_m))$ is the interior of $\pi(U_m)$. Since $(y_1,y_2,\cdots,y_n)\in MS_n(Y,S)$, there exist $\delta>0$ and $y_m^1, \cdots, y_m^n\in \operatorname{int}(\pi(U_m))$ such that
$$\overline{D}(\{k\in \mathbb{Z}_+: S^ky_m^i \in \overline{B(y_i, 1)} \text{ for }i=1,\ldots,n\})\ge \delta.$$
Then there exist $x_m^1, \cdots, x_m^n\in U_m$ with $\pi(x_m^i)=y_m^i$ such that for any $m\in \mathbb{N}$,
$$\overline{D}(\{k\in \mathbb{Z}_+: T^kx_m^i \in \pi^{-1}(\overline{B(y_i, 1)})\text{ for }i=1,\ldots,n\})\ge \delta.$$
Put
$$ A=\prod_{i=1}^n \pi^{-1}(\overline{B(y_i, 1)}). $$
It is clear that $A$ is a compact subset of $X^{(n)}$. We can cover $A$ with finitely many nonempty open sets of diameter less than $1$, i.e., $A \subset \cup_{i=1}^{N_1}A_1^i$ and $\diam(A_1^i)<1$. Then for each $m\in \mathbb{N}$ there is $1\leq N_1^m\leq N_1$ such that
$$\overline{D}(\{k\in \mathbb{Z}_+: (T^kx_m^1,\ldots, T^kx_m^n)\in \overline{A_1^{N_1^m}}\cap A \})\ge \delta/N_1.$$
Without loss of generality, we assume $N_1^m=1$ for all $m\in \mathbb{N}$. Namely,
$$ \overline{D}(\{k\in \mathbb{Z}_+: (T^kx_m^1,\ldots, T^kx_m^n)\in \overline{A_1^{1}}\cap A\}) \ge \delta/N_1 \text{ for all }m\in\mathbb{N}.
$$
Repeating the above procedure, for $l\ge 1$ we can cover $\overline{A_l^{1}}\cap A$ with finitely many nonempty open sets of diameter less than $\frac{1}{l+1}$, i.e., $\overline{A_l^{1}}\cap A \subset \cup_{i=1}^{N_{l+1}}A_{l+1}^i$ and $\diam(A_{l+1}^i)<\frac{1}{l+1}$. Then for each $m\in \mathbb{N}$ there is $1\leq N_{l+1}^m\leq N_{l+1}$ such that
$$ \overline{D}(\{k\in \mathbb{Z}_+: (T^kx_m^1,\ldots, T^kx_m^n)\in \overline{A_{l+1}^{N_{l+1}^m} }\cap A \}) \ge \frac{\delta}{N_1N_2\cdots N_{l+1}}. $$
Without loss of generality we assume $N_{l+1}^m=1$ for all $m\in \mathbb{N}$. Namely,
$$ \overline{D}(\{k\in \mathbb{Z}_+: (T^kx_m^1,\ldots, T^kx_m^n)\in \overline{A_{l+1}^{1}}\cap A \}) \ge\frac{\delta}{N_1N_2\cdots N_{l+1}} \text{ for all }m\in\mathbb{N}. $$
It is clear that there is a unique point $(z_1^1,\ldots,z_n^1)\in \bigcap_{l=1}^{\infty} \overline{A_l^{1}}\cap A $. We claim that $(z_1^1,\ldots,z_n^1)\in MS_n(X, T)$. In fact, for any $\tau>0$, there is $l\in \mathbb{N}$ such that $\overline{A_{l}^{1}}\cap A \subset V_{1}\times\cdots \times V_{n}$, where $V_i=B(z_i^1,\tau)$ for $i=1,\ldots,n$. By the construction, for any $m\in\mathbb{N}$ there are $x_m^1,\ldots, x_m^n\in U_m$ such that
$$ \overline{D}(\{k\in \mathbb{Z}_+: (T^kx_m^1,\ldots, T^kx_m^n)\in \overline{A_{l}^{1}}\cap A \}) \ge\frac{\delta}{N_1N_2\cdots N_{l}} $$
and so
$$ \overline{D}(\{k\in \mathbb{Z}_+: (T^kx_m^1,\ldots, T^kx_m^n)\in V_{1}\times\cdots \times V_{n} \}) \ge \frac{\delta}{N_1N_2\cdots N_{l}} $$
for all $m\in \mathbb{N}$. For any nonempty open set $U\subset X$, since $x$ is a transitive point, there is $s\in \mathbb{Z}_+$ such that $T^sx\in U$. We can choose $m\in \mathbb{N}$ such that $T^sU_{m}\subset U$. This implies that $T^sx_{m}^1,\ldots, T^sx_{m}^n\in U$ and
$$ \overline{D}(\{k\in \mathbb{Z}_+: (T^k(T^sx_{m}^1),\ldots, T^k(T^sx_{m}^n))\in V_{1}\times\cdots \times V_{n}\} ) \ge \frac{\delta}{N_1N_2\cdots N_{l}}. $$
So we have $(z_1^1,\ldots,z_n^1)\in MS_n(X, T)$.
Similarly, for each $p\in\mathbb{N}$, there exists $(z_1^p,\ldots,z_n^p)\in MS_n(X, T)\cap \prod_{i=1}^n \pi^{-1}(\overline{B(y_i, \frac{1}{p})})$. Passing to a subsequence, we may assume that $z_i^p\rightarrow z_i$ as $p\rightarrow \infty$ for each $i$. Then $(z_1,\ldots,z_n)\in MS_n(X, T)\cup \Delta_n(X)$ and $\pi(z_i)=y_i$.
\end{proof}
Denote by $\mathcal{A}(MS_2(X, T))$ the smallest closed $T\times T$-invariant equivalence relation containing $MS_2(X, T)$.
\begin{cor}\label{cor:max-me-factor}
Let $(X,T)$ be a minimal t.d.s. Then $X/\mathcal{A}(MS_2(X, T))$ is the maximal mean equicontinuous factor of $(X,T)$.
\end{cor}
\begin{proof}
Let $Y=X/\mathcal{A}(MS_2(X, T))$ and let $\pi:(X,T)\to (Y,S)$ be the corresponding factor map. We show that $(Y,S)$ is mean equicontinuous. Assume that $(Y,S)$ is not mean equicontinuous; then by \cite[Corollary 5.5]{LTY15}, $(Y,S)$ is mean sensitive, and by \cite[Theorem 4.4]{LYY22}, $MS_2(Y,S)\not=\emptyset$. By Theorem \ref{lem:MSn-factor}, there exists $(x_1,x_2)\in MS_2(X, T)$ such that $(\pi(x_1),\pi(x_2))\in MS_2(Y,S)$. Then $(x_1,x_2)\not \in R_\pi:=\{(x,x')\in X\times X:\pi(x)=\pi(x')\}$, contradicting $R_\pi=\mathcal{A}(MS_2(X, T))$.

Let $(Z,W)$ be a mean equicontinuous t.d.s. and $\theta: (X,T)\to (Z,W)$ be a factor map. Since $(X,T)$ is minimal, so is $(Z,W)$. Then by \cite[Corollary 5.5]{LTY15} and \cite[Theorem 4.4]{LYY22}, $MS_2(Z,W)=\emptyset$. By Theorem \ref{lem:MSn-factor}, $MS_2(X,T)\subset R_\theta$, where $R_\theta$ is the corresponding equivalence relation with respect to $\theta$. This implies that $(Z,W)$ is a factor of $(Y,S)$ and so $(Y,S)$ is the maximal mean equicontinuous factor of $(X,T)$.
\end{proof}
In the following we show Theorem \ref{thm:ms=>it}. Let us begin with some preparations.
\begin{defn}[\cite{KL07}]Let $(X,T)$ be a t.d.s.
\begin{itemize}
\item For a tuple $(A_1,A_2,\ldots, A_n)$ of subsets of $X$, we say that a set $J\subseteq \mathbb{Z}_+$ is an {\em independence set} for the tuple if for every nonempty finite subset $I\subseteq J$ and function $\sigma: I\rightarrow \{1,2,\ldots, n\}$ we have $\bigcap_{k\in I} T^{-k} A_{\sigma(k)}\neq \emptyset.$
\item For $n\ge2$, we call a tuple $\textbf{x}=(x_1,\ldots,x_n)\in X^{(n)}$ an {\em IT-tuple} if for any product neighborhood $U_1\times U_2\times \ldots \times U_n$ of $\textbf{x}$ in $X^{(n)}$ the tuple $(U_1,U_2,\ldots, U_n)$ has an infinite independence set. We denote the set of IT-tuples of length $n$ by ${\rm IT}_n (X, T)$.
\item For $n\ge2$, we call an IT-tuple $\textbf{x}=(x_1,\ldots,x_n)\in X^{(n)}$ an {\em essential IT-tuple} if $x_i\neq x_j$ for any $i\neq j$. We denote the set of all essential IT-tuples of length $n$ by ${\rm IT}^e_n (X, T)$.
\end{itemize}
\end{defn}
\begin{prop}\cite[Proposition 3.2]{HLSY}\label{independent sets}
Let $X$ be a compact metric topological group with the left Haar measure $\mu$, and let $n\in \mathbb{N}$ with $n\ge 2$. Suppose that $V_{1},\ldots,V_{n}\subset X$ are compact subsets satisfying
\begin{enumerate}
\item[(i)] $\overline{\operatorname{int} V_i}=V_i$ for $i=1,2,\cdots,n$,
\item[(ii)] $\operatorname{int}(V_{i})\cap \operatorname{int}(V_{j})=\emptyset$ for all $1\le i\neq j\le n$,
\item[(iii)] $\mu(\bigcap_{1\leq i\leq n}V_{i})>0$.
\end{enumerate}
Further, assume that $T: X\rightarrow X$ is a minimal rotation and $\mathcal{G}\subset X$ is a residual set. Then there exists an infinite set $I\subset \mathbb{Z}_+$ such that for all $a\in\{1,2,\ldots,n\}^{I}$ there exists $x \in\mathcal{G}$ with the property that
\begin{equation}\label{eq: in the int}
x\in \bigcap_{k\in I} T^{-k} {\rm int}(V_{a(k)}),\quad {\rm i.e.}\ T^kx\in \operatorname{int}(V_{a(k)}) \ \text{ for any }k\in I.
\end{equation}
\end{prop}
A subset $Z\subset X$ is called {\it proper} if $Z$ is a compact subset with $\overline{\operatorname{int}(Z)} = Z$. The following lemma helps us complete the proof of Theorem \ref{thm:ms=>it}.
\begin{lem}\label{lem:proper one to one}
Let $(X,T)$ and $(Y,S)$ be two t.d.s., and let $\pi:(X,T)\to (Y,S)$ be a factor map. Suppose that $(X,T)$ is minimal. Then the image of a proper subset of $X$ under $\pi$ is a proper subset of $Y$.
\end{lem}
\begin{proof}
Given a proper subset $Z$ of $X$, we will show that $\pi(Z)$ is also proper. It is clear that $\pi(Z)$ is compact, as $\pi$ is continuous. Now we prove $\overline{\operatorname{int}(\pi(Z))} = \pi(Z)$. It follows from the closedness of $\pi(Z)$ that $\overline{\operatorname{int}(\pi(Z))} \subset \pi(Z)$. On the other hand, for any $y\in \pi(Z)$, take $x\in \pi^{-1}(y)\cap Z$. Since $\pi^{-1}(y)\cap Z=\pi^{-1}(y)\cap\overline{\operatorname{int}(Z)}$, there exists a sequence $\{x_n\}_{n\in\mathbb{N}}$ such that $x_n\in \operatorname{int}(Z)$ and $\lim_{n\to \infty}x_n=x$. Let $\{r_n\}_{n\in\mathbb{N}}$ be a sequence of positive reals satisfying
$$\lim_{n\to\infty}r_n=0\text{ and }B(x_n,r_n)\subset \operatorname{int}(Z).$$
By the minimality of $(X,T)$, the map $\pi$ is semi-open, and hence $\operatorname{int}(\pi(B(x_n,r_n)))\neq \emptyset$. Thus, there exists $x_n'\in B(x_n,r_n)$ such that $\pi(x_n')\in \operatorname{int}(\pi(B(x_n,r_n)))\subset\operatorname{int}(\pi(Z))$. Since $x_n'\in B(x_n,r_n)$ and $\lim_{n\to \infty}x_n=x$, one has $\lim_{n\to \infty}x_n'=x$, and hence $\lim_{n\to \infty}\pi(x_n')=\pi(x)=y.$ This implies that $y\in \overline{\operatorname{int}(\pi(Z))}$, which finishes the proof.
\end{proof}
Inspired by \cite[Proposition 3.7]{HLSY}, we can give the proof of Theorem \ref{thm:ms=>it}.
\begin{proof} [Proof of Theorem \ref{thm:ms=>it}]
It suffices to prove $MS^e_n(X,T)\subset IT_n^e(X,T)$.
Given $\textbf{x}=(x_1,\ldots,x_n)\in MS^e_n(X,T)$, we will show that $\textbf{x}\in IT^e_n(X,T).$ Since the minimal t.d.s. $(X_{eq},T_{eq})$ is the maximal equicontinuous factor of $(X,T)$, the space $X_{eq}$ can be viewed as a compact metric group with a $T_{eq}$-invariant metric $d_{eq}$; let $\pi:(X,T)\to (X_{eq},T_{eq})$ denote the factor map. Let $\mu$ be the left Haar probability measure of $X_{eq}$, which is also the unique $T_{eq}$-invariant probability measure of $(X_{eq},T_{eq})$. Let
$$X_1=\{x\in X: \#\{\pi^{-1}(\pi(x))\}=1\}, \quad Y_1=\pi(X_1).$$
Then $Y_1$ is a dense $G_\delta$-set, as $\pi$ is almost one to one. Let $\epsilon=\frac 14 \min_{1\le i\neq j\le n}d(x_i,x_j)$ and $U_i=\overline{B_\epsilon(x_i)}$ for $1\le i\le n$. Then $U_i$ is proper for each $1\le i\le n$. We will show that the tuple $(U_1,U_2,\ldots,U_n)$ has an infinite independence set, i.e. there is some infinite set $I\subseteq \mathbb{Z}_+$ such that
$$\bigcap_{k\in I}T^{-k}U_{a(k)}\neq \emptyset, \ \text{for all} \ a\in \{1,2,\ldots,n\}^I.$$
Let $V_i=\pi(U_i)$ for $1\le i\le n$. By Lemma \ref{lem:proper one to one}, $V_i$ is proper for each $i\in \{1,2,\ldots,n\}$. We claim that ${\rm int }(V_i)\cap {\rm int}(V_j)=\emptyset$ for all $1\le i\neq j\le n$. In fact, if there is some $1\le i\neq j\le n$ such that ${\rm int }(V_i)\cap {\rm int}(V_j)\not=\emptyset$, then
$${\rm int }(V_i)\cap {\rm int}(V_j)\cap Y_1\not=\emptyset,$$
as $Y_1$ is a dense $G_\delta$-set. Let $y\in {\rm int }(V_i)\cap {\rm int}(V_j)\cap Y_1$. Then there are $u\in U_i$ and $v\in U_j$ such that $y=\pi(u)=\pi(v)$; since $U_i\cap U_j=\emptyset$, this contradicts $y\in Y_1$. Choose a nonempty open set $W_m\subset X$ with $\operatorname{diam}(\pi(W_m))<\frac{1}{m}$ for each $m\in \mathbb{N}$.
Since $\textbf{x}\in MS^e_n(X,T)$, there exist $\delta>0$ and $\textbf{x}^m=(x_1^m, x_2^m,\cdots, x_n^m)\in W_m\times \dots \times W_m$ such that $\overline{D}(N(\textbf{x}^m, U_1\times U_2\times \cdots \times U_n))\ge \delta.$ Let $\textbf{y}^m=(y_1^m,y_2^m,\cdots,y_n^m)=\pi^{(n)} (\textbf{x}^m)$. Then
$$\overline{D}(N(\textbf{y}^m, V_1\times V_2\times \cdots \times V_n))\ge \delta.$$
For $p\in N(\textbf{y}^m, V_1\times V_2\times \cdots \times V_n)$, we have $T_{eq}^py_i^m\in V_i$ for $1\le i\le n$. As $\operatorname{diam}(\pi(W_m))<\frac{1}{m}$, $d_{eq}(y_1^m,y_i^m)<\frac{1}{m}$ for $1\le i\le n$. Note that
$$d_{eq}(T_{eq}^py_1^m,T_{eq}^py_i^m)=d_{eq}(y_1^m,y_i^m)<\frac{1}{m}\quad\text{ for }1\le i\le n.$$
Let $V_i^m=B_{\frac{1}{m}}(V_i)=\{y\in X_{eq}:d_{eq}(y,V_i)<\frac{1}{m}\}$. Then $T_{eq}^py_1^m\in \cap_{i=1}^n V_i^m$ for every such $p$, and
$$\overline{D}(N(y_1^m, \cap_{i=1}^n V_i^m))\ge \delta.$$
Since $(X_{eq},T_{eq})$ is uniquely ergodic with respect to the measure $\mu$, $\mu(\overline{\cap_{i=1}^n V_i^m})\ge \delta$. Letting $m\to \infty$, one has $\mu(\cap_{i=1}^n V_i)\ge \delta>0.$ By Proposition \ref{independent sets}, there is an infinite $I\subseteq \mathbb{Z}_+$ such that for all $a\in\{1,2,\ldots,n\}^{I}$ there exists $y_0\in Y_1$ with the property that
\begin{equation*}
y_0\in \bigcap_{k\in I} T_{eq}^{-k} {\rm int}(V_{a(k)}).
\end{equation*}
Set $\pi^{-1}(y_0)=\{x_0\}$. Then
\begin{equation*}
x_0\in \bigcap_{k\in I} T^{-k} U_{a(k)},
\end{equation*}
which implies that $(x_1,x_2,\cdots,x_n)\in IT_n(X,T)$; since the points $x_i$ are pairwise distinct, $\textbf{x}\in IT_n^e(X,T)$.
\end{proof}

\section*{Acknowledgments}
We thank the referee for a very careful reading and many useful comments, which helped us to improve the paper. Research of Jie Li is supported by NNSF of China (Grant No. 12031019); Chunlin Liu is partially supported by NNSF of China (Grant No. 12090012); Siming Tu is supported by NNSF of China (Grant No. 11801584 and No. 12171175); and Tao Yu is supported by NNSF of China (Grant No.
12001354) and STU Scientific Research Foundation for Talents (Grant No. NTF19047).
\begin{appendix}
\section{Proof of Lemma \ref{0726}}\label{APPENDIX}
In this section, we give the proof of Lemma \ref{0726}.
\begin{lem}\label{0724}
Let $(X,\mathcal{B}_X,\mu,T)$ be an m.p.s. with Kronecker factor $\mathcal{K}_\mu$. Then for $n\in\mathbb{N}$ and $f_i\in L^\infty(X,\mu)$, $i=1,\dotsc,n$, we have
\[ \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M \prod_{i=1}^{n} f_i( T^m x_i) = \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M \prod_{i=1}^{n}\mathbb{E}(f_i | \mathcal{K}_\mu)(T^m x_i) \]
for $\mu^{(n)}$-a.e. $(x_1,\dotsc,x_n)\in X^{(n)}$.
\end{lem}
\begin{proof}
On the one hand, by the Birkhoff ergodic theorem, with $F(\textbf{x})=F(x_1,\dots,x_n)=\prod_{i=1}^{n} f_i(x_i)$ for $\textbf{x}=(x_1,\dotsc,x_n)\in X^{(n)}$, we have, for $\mu^{(n)}$-a.e. $\textbf{x}$,
\[\lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M \prod_{i=1}^{n} f_i( T^m x_i) =\lim_{M\to\infty}\dfrac{1}{M} \sum_{m=1}^{M} F\left(\left(T^{(n)}\right)^m\textbf{x}\right)= \mathbb{E}_{\mu^{(n)}}(\prod_{i=1}^{n}f_i|I_{\mu^{(n)}})(\textbf{x}),\]
where $I_{\mu^{(n)}}=\{A\in \mathcal{B}^{(n)}_X: T^{(n)}A=A\}$. On the other hand, following \cite[Lemma 4.4]{HMY04}, we have $(\mathcal{K}_\mu)^{\bigotimes n}=\mathcal{K}_{\mu^{(n)}}$.
Then for $\mu^{(n)}$-a.e. $\textbf{x}=(x_1,\dotsc,x_n)\in X^{(n)}$,
\[\prod_{i=1}^{n}\mathbb{E}_{\mu}(f_i|\mathcal{K}_\mu)(x_i) =\mathbb{E}_{\mu^{(n)}}(\prod_{i=1}^{n}f_i|(\mathcal{K}_\mu)^{\bigotimes n})(\textbf{x}) =\mathbb{E}_{\mu^{(n)}}(\prod_{i=1}^{n}f_i|\mathcal{K}_{\mu^{(n)}})(\textbf{x}).\]
This implies that
\begin{align*}
\lim_{M\to\infty}\dfrac{1}{M} \sum_{m=1}^{M} \prod_{i=1}^{n} \mathbb{E}_{\mu}(f_i|\mathcal{K}_\mu)(T^mx_i) = & \mathbb{E}_{\mu^{(n)}}(\prod_{i=1}^{n}\mathbb{E}_{\mu}(f_i|\mathcal{K}_\mu)|I_{\mu^{(n)}})(\textbf{x})\\
= &\mathbb{E}_{\mu^{(n)}}(\mathbb{E}_{\mu^{(n)}}(\prod_{i=1}^{n}f_i|\mathcal{K}_{\mu^{(n)}})|I_{\mu^{(n)}})(\textbf{x})\\
= &\mathbb{E}_{\mu^{(n)}}(\prod_{i=1}^{n}f_i|I_{\mu^{(n)}})(\textbf{x}),
\end{align*}
where the last equality follows from the fact that $I_{\mu^{(n)}}\subset\mathcal{K}_{\mu^{(n)}}.$
\end{proof}
\begin{lem}\label{0725}
Let $(Z,\B_Z,\nu,R)$ be a minimal rotation on a compact abelian group. Then for any $n\in\mathbb{N}$ and $\phi_i\in L^\infty(Z,\nu)$, $i=1,\dotsc,n$,
\[\lim_{M\to\infty}\dfrac{1}{M} \sum_{m=1}^{M} \prod_{i=1}^{n} \phi_i (R^mz_i) = \int_Z \prod_{i=1}^{n} \phi_i (z_i+z)d \nu(z) \quad\text{ for }\nu^{(n)}\text{-a.e. }(z_1,\ldots, z_n). \]
\end{lem}
\begin{proof}
Since $(Z,\B_Z,\nu,R)$ is a minimal rotation on a compact abelian group, there exists $a\in Z$ such that $R^mz=z+ma$ for any $z\in Z$. Let $F(z)=\prod_{i=1}^{n} \phi_i (z_i+z)$. Then $F(R^me_Z)=F(ma)$, where $e_Z$ is the identity element of $Z$. Since $(Z,R)$ is minimal and equicontinuous, $(Z,\B_Z,\nu,R)$ is uniquely ergodic. By an approximation argument, we have, for $\nu^{(n)}$-a.e. $(z_1,\ldots, z_n)$,
\begin{align*}
\lim_{M\to\infty}\dfrac{1}{M} \sum_{m=1}^{M}\prod_{i=1}^{n} \phi_i(R^mz_i) =&\lim_{M\to\infty}\dfrac{1}{M} \sum_{m=1}^{M}\prod_{i=1}^{n} \phi_i (z_i+ma)\\
=&\lim_{M\to\infty}\dfrac{1}{M} \sum_{m=1}^{M} F(ma) =\lim_{M\to\infty}\dfrac{1}{M} \sum_{m=1}^{M} F(R^me_Z)\\
=&\int_Z F(z) d \nu(z) = \int_Z \prod_{i=1}^{n} \phi_i (z_i+z) d \nu(z).
\end{align*} This completes the proof. \end{proof} \begin{proof}[Proof of Lemma \ref{0726}.] Let $z \mapsto \eta_z$ be the disintegration of $\mu$ over the continuous factor map $\pi$ from $(X,\B_X,\mu,T)$ to its Kronecker factor $(Z,\B_Z,\nu,R)$. For $n\in\mathbb{N}$, define \begin{equation*} \label{eqn:lambda_2_dim_definition_for_section_2_is_this_unqiue_yet} \lambda^n_{\textbf{x}} = \int_Z \eta_{z + \pi(x_1)} \times\dots\times \eta_{z+\pi(x_n)} d\nu(z) \end{equation*} for every $\textbf{x}=(x_1,\dotsc,x_n) \in X^{(n)}$. We first note that for each $\textbf{x} \in X^{(n)}$ the measures $\eta_{z + \pi(x_i)}$ are defined for $\nu$-a.e. $z \in Z$, and therefore $\lambda^n_{\textbf{x}}$ is well-defined. To prove that $\textbf{x} \mapsto \lambda^n_\textbf{x}$ is continuous, first note that, by uniform continuity, the map \[ (u_1,\dotsc,u_n) \mapsto \int_Z \prod_{i=1}^{n}f_i(z + u_i) d\nu(z) \] from $Z^{(n)}$ to $\mathbb{C}$ is continuous whenever $f_i \colon Z \to \mathbb{C}$ are continuous. An approximation argument then gives continuity for every $f_i \in L^\infty(Z,\nu)$. In particular, \[ \textbf{x} \mapsto \int_Z \prod_{i=1}^{n}\mathbb{E}(f_i \mid \B_Z)(z + \pi(x_i)) d\nu(z) \] from $X^{(n)}$ to $\mathbb{C}$ is continuous whenever $f_i \in L^\infty(X,\mu)$, which in turn implies continuity of $\textbf{x} \mapsto \lambda_{\textbf{x}}^n$. To prove that $\textbf{x}\mapsto \lambda_{\textbf{x}}^n$ is an ergodic decomposition we first calculate \begin{equation*} \int_{X^{(n)}} \int_Z \prod_{i=1}^{n}\eta_{z + \pi(x_i)}d \nu(z) d \mu^{(n)}(\textbf{x}) =\int_Z \prod_{i=1}^{n}\int_X \eta_{z + \pi(x_i)} d \mu(x_i) d \nu(z), \end{equation*} which is equal to $\mu^{(n)}$ because all inner integrals are equal to $\mu$.
We conclude that \begin{equation*} \label{eq_continuousergodicdecompositionofmu1} \mu^{(n)} = \int_{X^{(n)}}\lambda^n_\textbf{x} d \mu^{(n)}(\textbf{x}), \end{equation*} which shows $\textbf{x} \mapsto \lambda^n_\textbf{x}$ is a disintegration of $\mu^{(n)}$. We are left with verifying that \[ \int_{X^{(n)}} F d \lambda^n_\textbf{x} = \mathbb{E}_{\mu^{(n)}}(F \mid I_{\mu^{(n)}})(\textbf{x}) \] for $\mu^{(n)}$-a.e. $\textbf{x}\in X^{(n)}$ whenever $F \colon X^{(n)} \to \mathbb{C}$ is measurable and bounded. Recall that $I_{\mu^{(n)}}$ denotes the $\sigma$-algebra of $T^{(n)}$-invariant sets. Fix such an $F$. It follows from the pointwise ergodic theorem that \[ \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M F( T^m x_1,\dotsc, T^m x_n) = \mathbb{E}_{\mu^{(n)}}(F \mid I_{\mu^{(n)}})(\textbf{x}) \] for $\mu^{(n)}$-a.e. $\textbf{x}\in X^{(n)}$. We therefore wish to prove that \[ \int_{X^{(n)}} F d \lambda^n_\textbf{x} = \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M F( T^m x_1,\dotsc, T^m x_n) \] holds for $\mu^{(n)}$-a.e. $\textbf{x} \in X^{(n)}$. By an approximation argument it suffices to verify that \begin{equation*} \label{eqn:proving_ergodic_kk} \int_{X^{(n)}} f_1 \otimes\dots\otimes f_n d \lambda^n_\textbf{x} = \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M \prod_{i=1}^{n} f_i( T^m x_i) \end{equation*} holds for $\mu^{(n)}$-a.e. $\textbf{x} \in X^{(n)}$ whenever $f_i$ belongs to $L^\infty(X,\mu)$ for $i=1,\dotsc,n$. By Lemma \ref{0724}, \[ \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M \prod_{i=1}^{n} f_i( T^m x_i) = \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M \prod_{i=1}^{n}\mathbb{E}(f_i \mid \B_Z)(T^m x_i) \] for $\mu^{(n)}$-a.e. $\textbf{x}\in X^{(n)}$. By Lemma \ref{0725}, for every $\phi_i$ in $L^\infty(Z,\nu)$, \[\lim_{M \to \infty}\dfrac{1}{M} \sum_{m=1}^{M} \prod_{i=1}^{n} \phi_i (R^mz_i) = \int_Z \prod_{i=1}^{n} \phi_i (z_i+z)d \nu(z)\] for $\nu^{(n)}$-a.e. $\textbf{z}\in Z^{(n)}$.
Taking $\phi_i = \mathbb{E}(f_i \mid \B_Z)$ gives \[ \lim_{M \to \infty} \dfrac{1}{M} \sum_{m=1}^M \prod_{i=1}^{n}\mathbb{E}(f_i \mid \B_Z)(T^m x_i) = \int_{X^{(n)}} f_1 \otimes \dots \otimes f_n d \lambda^n_{\textbf{x}} \] for $\mu^{(n)}$-a.e. $\textbf{x}\in X^{(n)}$. \end{proof} \end{appendix} \end{document}
\begin{document} \begin{frontmatter} \title{Hyperbolicity in the corona and join of graphs} \author[a]{Walter Carballosa\corref{x}} \address[a]{Consejo Nacional de Ciencia y Tecnolog\'ia (CONACYT) $\&$ Universidad Aut\'onoma de Zacatecas, Paseo la Bufa, int. Calzada Solidaridad, 98060 Zacatecas, ZAC, M\'exico} \ead{[email protected]} \cortext[x]{Corresponding author.} \author[b]{Jos\'e M. Rodr{\'\i}guez} \address[b]{Department of Mathematics, Universidad Carlos III de Madrid, Av. de la Universidad 30, 28911 Legan\'es, Madrid, Spain.} \ead{[email protected]} \author[d]{Jos\'e M. Sigarreta} \address[d]{Facultad de Matem\'aticas, Universidad Aut\'onoma de Guerrero, Carlos E. Adame 5, Col. La Garita, Acapulco, Guerrero, Mexico} \ead{[email protected]} \begin{abstract} If $X$ is a geodesic metric space and $x_1,x_2,x_3\in X$, a {\it geodesic triangle} $T=\{x_1,x_2,x_3\}$ is the union of the three geodesics $[x_1x_2]$, $[x_2x_3]$ and $[x_3x_1]$ in $X$. The space $X$ is $\delta$-\emph{hyperbolic} $($in the Gromov sense$)$ if any side of $T$ is contained in a $\delta$-neighborhood of the union of the two other sides, for every geodesic triangle $T$ in $X$. If $X$ is hyperbolic, we denote by $\delta(X)$ the sharp hyperbolicity constant of $X$, i.e. $\delta(X)=\inf\{\delta\ge 0: \, X \, \text{ is $\delta$-hyperbolic}\,\}\,.$ Some previous works characterize the hyperbolic product graphs (for the Cartesian product, strong product and lexicographic product) in terms of properties of the factor graphs. In this paper we characterize the hyperbolic product graphs for the graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$: $G_1\uplus G_2$ is always hyperbolic, and $G_1\diamond G_2$ is hyperbolic if and only if $G_1$ is hyperbolic. Furthermore, we obtain simple formulae for the hyperbolicity constant of the graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$.
\end{abstract} \begin{keyword} Graph join \sep Corona graph \sep Gromov hyperbolicity \sep Infinite graph \MSC[2010] 05C69 \sep 05A20 \sep 05C50. \end{keyword} \end{frontmatter} \section{Introduction} Hyperbolic spaces play an important role in geometric group theory and in the geometry of negatively curved spaces (see \cite{ABCD,GH,G1}). The concept of Gromov hyperbolicity grasps the essence of negatively curved spaces like the classical hyperbolic space, Riemannian manifolds of negative sectional curvature bounded away from $0$, and of discrete spaces like trees and the Cayley graphs of many finitely generated groups. It is remarkable that such a simple concept leads to such a rich general theory (see \cite{ABCD,GH,G1}). The first works on Gromov hyperbolic spaces deal with finitely generated groups (see \cite{G1}). Initially, Gromov spaces were applied to the study of automatic groups in the science of computation (see, \emph{e.g.}, \cite{O}); indeed, hyperbolic groups are strongly geodesically automatic, \emph{i.e.}, there is an automatic structure on the group \cite{Cha}. The concept of hyperbolicity also appears in discrete mathematics, algorithms and networking. For example, it has been shown empirically in \cite{ShTa} that the internet topology embeds with better accuracy into a hyperbolic space than into a Euclidean space of comparable dimension; the same holds for many complex networks, see \cite{KPKVB}. A few algorithmic problems in hyperbolic spaces and hyperbolic graphs have been considered in recent papers (see \cite{ChEs,Epp,GaLy,Kra}). Another important application of these spaces is the study of the spread of viruses through the internet (see \cite{K21,K22}). Furthermore, hyperbolic spaces are useful for the secure transmission of information on the network (see \cite{K27,K21,K22,NS}).
The study of Gromov hyperbolic graphs is a subject of increasing interest; see, \emph{e.g.}, \cite{BRS,BRSV2,BRST,BPK,BHB1,CDR,CPRS,CRS,CRSV,CDEHV,K50,K27,K21,K22,K23,K24,K56,KPKVB,MRSV,MRSV2,NS,PeRSV,PRST,PRSV,PT,R,RSVV,S,S2,T,WZ} and the references therein. We say that a curve $\gamma$ in a metric space $X$ is a \emph{geodesic} if we have $L(\gamma|_{[t,s]})=d(\gamma(t),\gamma(s))=|t-s|$ for every $s,t\in [a,b]$ (then $\gamma$ is equipped with an arc-length parametrization). The metric space $X$ is said to be \emph{geodesic} if for every pair of points in $X$ there exists a geodesic joining them; we denote by $[xy]$ any geodesic joining $x$ and $y$; this notation is ambiguous, since in general we do not have uniqueness of geodesics, but it is very convenient. Consequently, any geodesic metric space is connected. If the metric space $X$ is a graph, then the edge joining the vertices $u$ and $v$ will be denoted by $[u,v]$. Throughout the paper we only consider graphs in which every edge has length $1$. In order to consider a graph $G$ as a geodesic metric space, identify (by an isometry) any edge $[u,v]\in E(G)$ with the interval $[0,1]$ in the real line; then the edge $[u,v]$ (considered as a graph with just one edge) is isometric to the interval $[0,1]$. Thus, the points in $G$ are the vertices and, also, the points in the interior of any edge of $G$. In this way, any connected graph $G$ has a natural distance defined on its points, induced by taking shortest paths in $G$, and we can see $G$ as a metric graph. If $x,y$ are in different connected components of $G$, we define $d_G(x,y)=\infty$. Throughout this paper, $G=(V,E)$ denotes a simple graph (not necessarily connected) such that every edge has length $1$ and $V\neq \emptyset$. These properties guarantee that any connected graph is a geodesic metric space.
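For readers who wish to experiment, the shortest-path distance just described (restricted to vertices, with $d_G(x,y)=\infty$ across connected components) can be sketched in a few lines; the function and graph names below are illustrative choices of ours, not objects from the paper.

```python
from collections import deque
import math

def graph_distance(adj, x, y):
    """Hop-count distance between vertices x and y of a simple graph with
    unit-length edges, given as an adjacency dict; returns math.inf when
    x and y lie in different connected components."""
    if x == y:
        return 0
    dist = {x: 0}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == y:
                    return dist[v]
                queue.append(v)
    return math.inf

# A path a-b-c-d together with an isolated vertex e:
# d(a,d) = 3, while d(a,e) is infinite.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}, 'e': set()}
```

Interior points of edges can be handled similarly, but the vertex metric already determines the metric graph.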
Note that excluding multiple edges and loops is not an important loss of generality, since \cite[Theorems 8 and 10]{BRSV2} reduce the problem of computing the hyperbolicity constant of graphs with multiple edges and/or loops to the study of simple graphs. For a nonempty set $X\subseteq V$ and a vertex $v\in V$, $N_X(v)$ denotes the set of neighbors $v$ has in $X$: $N_X(v):=\{u\in X: [u,v]\in E\},$ and the degree of $v$ in $X$ will be denoted by $\deg_{X}(v)=|N_{X}(v)|$. We denote the degree of a vertex $v\in V$ in $G$ by $\deg(v)\le\infty$, and the maximum degree of $G$ by $\Delta_{G}:=\sup_{v\in V}\deg(v)$. Consider a polygon $J=\{J_1,J_2,\dots,J_n\}$ with sides $J_j\subseteq X$ in a geodesic metric space $X$. We say that $J$ is $\delta$-{\it thin} if for every $x\in J_i$ we have that $d(x,\cup_{j\neq i}J_{j})\le \delta$. Let us denote by $\delta(J)$ the sharp thin constant of $J$, \emph{i.e.}, $\delta(J):=\inf\{\delta\ge 0: \, J \, \text{ is $\delta$-thin}\,\}\,. $ If $x_1,x_2,x_3$ are three points in $X$, a {\it geodesic triangle} $T=\{x_1,x_2,x_3\}$ is the union of the three geodesics $[x_1x_2]$, $[x_2x_3]$ and $[x_3x_1]$ in $X$. We say that $X$ is $\delta$-\emph{hyperbolic} if every geodesic triangle in $X$ is $\delta$-thin, and we denote by $\delta(X)$ the sharp hyperbolicity constant of $X$, \emph{i.e.}, $\delta(X):=\sup\{\delta(T): \, T \, \text{ is a geodesic triangle in }\,X\,\}.$ We say that $X$ is \emph{hyperbolic} if $X$ is $\delta$-hyperbolic for some $\delta \ge 0$; then $X$ is hyperbolic if and only if $ \delta(X)<\infty.$ If $X$ has connected components $\{X_i\}_{i\in I}$, then we define $\delta(X):=\sup_{i\in I} \delta(X_i)$, and we say that $X$ is hyperbolic if $\delta(X)<\infty$.
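The constant $\delta(X)$ above is defined via thin triangles; an equivalent formulation (up to a bounded change of constant) is Gromov's four-point condition $(x|y)_w \ge \min\{(x|z)_w,(y|z)_w\}-\delta$, where $(x|y)_w=\frac12(d(x,w)+d(y,w)-d(x,y))$ is the Gromov product. As a rough computational illustration only — restricted to vertices, so it does not compute the paper's $\delta(G)$ — the sketch below (helper names are ours) brute-forces the four-point constant of a small connected graph; a tree yields $0$, matching the fact that $0$-hyperbolic spaces are precisely the metric trees.

```python
from collections import deque
from itertools import product

def distances(adj):
    """All-pairs hop distances of a connected graph via repeated BFS."""
    d = {}
    for s in adj:
        d[s] = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d[s]:
                    d[s][v] = d[s][u] + 1
                    q.append(v)
    return d

def four_point_delta(adj):
    """Smallest delta such that, for all vertices x, y, z, w,
    (x|y)_w >= min((x|z)_w, (y|z)_w) - delta, where
    (x|y)_w = (d(x,w) + d(y,w) - d(x,y)) / 2 is the Gromov product."""
    d = distances(adj)
    gp = lambda x, y, w: (d[x][w] + d[y][w] - d[x][y]) / 2
    return max(min(gp(x, z, w), gp(y, z, w)) - gp(x, y, w)
               for x, y, z, w in product(adj, repeat=4))

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}               # a tree
cycle4 = {i: {(i - 1) % 4, (i + 1) % 4} for i in range(4)}  # the cycle C_4
```

The $O(n^4)$ enumeration is only feasible for very small graphs, but it suffices to contrast a tree with a short cycle.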
Several different definitions of Gromov hyperbolicity appear in the classical references on this subject (see, \emph{e.g.}, \cite{BHB,GH}); they are equivalent in the sense that if $X$ is $\delta$-hyperbolic with respect to one definition, then it is $\delta'$-hyperbolic with respect to another definition (for some $\delta'$ related to $\delta$). The definition that we have chosen has a deep geometric meaning (see, \emph{e.g.}, \cite{GH}). Trivially, any bounded metric space $X$ is $((\diam X)/2)$-hyperbolic. A normed linear space is hyperbolic if and only if it has dimension one. A geodesic space is $0$-hyperbolic if and only if it is a metric tree. If a complete Riemannian manifold is simply connected and its sectional curvatures satisfy $K\leq c$ for some negative constant $c$, then it is hyperbolic. See the classical references \cite{ABCD,GH} for further results. We want to remark that the main examples of hyperbolic graphs are the trees. In fact, the hyperbolicity constant of a geodesic metric space can be viewed as a measure of how ``tree-like'' the space is, since those spaces $X$ with $\delta(X) = 0$ are precisely the metric trees. This is an interesting subject since, in many applications, one finds that the borderline between tractable and intractable cases may be the tree-like degree of the structure to be dealt with (see, \emph{e.g.}, \cite{CYY}). Given a Cayley graph (of a presentation with solvable word problem) there is an algorithm which allows one to decide whether it is hyperbolic. However, for a general graph or a general geodesic metric space, deciding whether or not a space is hyperbolic is usually very difficult. Therefore, it is interesting to study the hyperbolicity of particular classes of graphs.
The papers \cite{BRST,BHB1,CCCR,CDR,CRSV,MRSV2,PeRSV,PRSV,R,Si} study the hyperbolicity of, respectively, complements of graphs, chordal graphs, strong product graphs, lexicographic product graphs, line graphs, Cartesian product graphs, cubic graphs, tessellation graphs, short graphs and median graphs. In \cite{CCCR,CDR,MRSV2} the authors characterize the hyperbolic product graphs (for the strong product, lexicographic product and Cartesian product) in terms of properties of the factor graphs. In this paper we characterize the hyperbolic product graphs for the graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$: $G_1\uplus G_2$ is always hyperbolic, and $G_1\diamond G_2$ is hyperbolic if and only if $G_1$ is hyperbolic (see Corollaries \ref{cor:SP} and \ref{cor:sup}). Furthermore, we obtain simple formulae for the hyperbolicity constant of the graph join $G_1\uplus G_2$ and the corona $G_1\diamond G_2$ (see Theorems \ref{th:hypJoin} and \ref{th:corona}). In particular, Theorem \ref{th:corona} states that $\delta(G_1\diamond G_2)=\max\{\delta(G_1),\delta(G_2\uplus E_1)\}$, where $E_1$ is a graph with just one vertex. We want to remark that it is not usual at all to obtain explicit formulae for the hyperbolicity constant of large classes of graphs. \section{Distance in graph join} In order to estimate the hyperbolicity constant of the graph join $G_1\uplus G_2$ of $G_1$ and $G_2$, we will need an explicit formula for the distance between two arbitrary points. We will use the definition given by Harary in \cite{H}. \begin{definition}\label{def:join} Let $G_1=(V(G_1),E(G_1))$ and $G_2=(V(G_2),E(G_2))$ be two graphs with $V(G_1)\cap V(G_2)=\varnothing$. The \emph{graph join} $G_1\uplus G_2$ of $G_1$ and $G_2$ has $V(G_1\uplus G_2)=V(G_1) \cup V(G_2)$, and two different vertices $u$ and $v$ of $G_1\uplus G_2$ are adjacent if $u\in V(G_1)$ and $v\in V(G_2)$, or $[u,v]\in E(G_1)$ or $[u,v]\in E(G_2)$.
\end{definition} From the definition, it follows that the graph join of two graphs is commutative. Figure \ref{fig:join} shows the graph join of two graphs. \begin{figure}[h] \centering \scalebox{.9} {\begin{pspicture}(-1.2,-1.2)(7.7,1.2) \pscircle[linewidth=.5pt](0,0){1} \cnode*[](-1,0){0.05}{A} \cnode*[](0.5,0.866025){0.05}{B} \cnode*[](0.5,-0.866025){0.05}{C} \cnode*[](2.5,1){0.05}{E} \cnode*[](2.5,0){0.05}{F} \cnode*[](2.5,-1){0.05}{G} \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(2.5,1)(2.5,-1) \pscircle[linewidth=.5pt](5,0){1} \cnode*[](4,0){0.05}{A'} \cnode*[](5.5,0.866025){0.05}{B'} \cnode*[](5.5,-0.866025){0.05}{C'} \cnode*[](7.5,1){0.05}{E'} \cnode*[](7.5,0){0.05}{F'} \cnode*[](7.5,-1){0.05}{G'} \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(7.5,1)(7.5,-1) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(4,0)(7.5,1) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(4,0)(7.5,0) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(4,0)(7.5,-1) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.5,0.866025)(7.5,1) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.5,0.866025)(7.5,0) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.5,0.866025)(7.5,-1) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.5,-0.866025)(7.5,1) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.5,-0.866025)(7.5,0) \psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.5,-0.866025)(7.5,-1) \uput[0](1.4,0){$\biguplus$} \uput[0](3,0){\large{=}} \end{pspicture}} \caption{Graph join of two graphs $C_3 \uplus P_3$.} \label{fig:join} \end{figure} \begin{remark}\label{r:K_nm} For any two graphs $G_1,G_2$, the graph join $G_1\uplus G_2$ is a connected graph with a subgraph isomorphic to a complete bipartite graph with $V(G_1)$ and $V(G_2)$ as its parts.
\end{remark} Note that, from a geometric viewpoint, the graph join $G_1\uplus G_2$ is obtained as a union of the graphs $G_1$, $G_2$ and the complete bipartite graph $K(G_1,G_2)$ linking the vertices of $V(G_1)$ and $V(G_2)$. The following result allows one to compute the distance between any two points in $G_1\uplus G_2$. Furthermore, this result provides information about the geodesics in the graph join. \begin{proposition}\label{prop:JoinDist} For any two graphs $G_1, G_2$ we have: \begin{itemize} \item[(a)] If $x,y \in G_i$ ($i\in\{1,2\}$), then \[d_{G_1\uplus G_2}(x,y) = \min\left\{ d_{G_i}(x,y) , d_{G_i}\big(x,V(G_i)\big)+2+d_{G_i}\big(V(G_i),y\big)\right\}.\] \item[(b)] If $x \in G_i$ and $y \in G_j$ with $i\neq j$, then \[d_{G_1\uplus G_2}(x,y) = d_{G_i}\big(x,V(G_i)\big)+1+d_{G_j}\big(V(G_j),y\big).\] \item[(c)] If $x \in G_i$ and $y \in K(G_1,G_2)$, then \[d_{G_1\uplus G_2}(x,y) = \min\left\{ d_{G_i}(x,Y_i)+d_{G_1\uplus G_2}(Y_i,y) , d_{G_i}\big(x,V(G_i)\big)+1+d_{G_1\uplus G_2}(Y_j,y)\right\},\] where $y\in [Y_1,Y_2]$ with $Y_i\in V(G_i)$ and $Y_j\in V(G_j)$. \item[(d)] If $x,y \in K(G_1,G_2)$, then \[d_{G_1\uplus G_2}(x,y) = \min\{ d_{K(G_1,G_2)}(x,y), M\},\] where $x\in [X_1,X_2]$, $y\in [Y_1,Y_2]$ with $X_1,Y_1\in V(G_1)$ and $X_2,Y_2\in V(G_2)$, and $M=\min_{i\in\{1,2\}}\{d_{G_1\uplus G_2}(x,X_i)+d_{G_i}(X_i,Y_i)+d_{G_1\uplus G_2}(Y_i,y)\}$. \end{itemize} \end{proposition} \begin{proof} We will prove each item separately. In item (a), with $j\neq i$, we consider the two shortest possible paths from $x$ to $y$: one contained in $G_i$, and one intersecting $G_j$ (and the latter intersects $G_j$ in just a single vertex). In item (b), since any path in $G_1\uplus G_2$ joining $x$ and $y$ contains at least one edge in $K(G_1,G_2)$, we have a geodesic when the path contains an edge joining a closest vertex to $x$ in $V(G_i)$ and a closest vertex to $y$ in $V(G_j)$.
In item (c) we consider the two shortest possible paths from $x$ to $y$ containing either $Y_1$ or $Y_2$. Finally, in item (d) we may consider the three shortest possible paths from $x$ to $y$: one contained in $K(G_1,G_2)$, one containing at least one edge in $E(G_1)$, and one containing at least one edge in $E(G_2)$. \end{proof} We say that a subgraph $\Gamma$ of $G$ is \emph{isometric} if $d_{\Gamma}(x,y)=d_{G}(x,y)$ for every $x,y\in \Gamma$. Proposition \ref{prop:JoinDist} gives the following result. \begin{proposition}\label{prop:IsomJoin} Let $G_1,G_2$ be two graphs and let $\Gamma_1,\Gamma_2$ be isometric subgraphs of $G_1$ and $G_2$, respectively. Then $\Gamma_1\uplus\Gamma_2$ is an isometric subgraph of $G_1\uplus G_2$. \end{proposition} The following result allows one to compute the diameter of the set of vertices in a graph join. \begin{proposition}\label{prop:vert} For any two graphs $G_1,G_2$ we have $1\le\diam V(G_1\uplus G_2)\le 2$. Furthermore, $\diam V(G_1\uplus G_2)=1$ if and only if $G_1$ and $G_2$ are complete graphs. \end{proposition} \begin{proof} Since $V(G_1),V(G_2)\neq\emptyset$, we have $\diam V(G_1\uplus G_2)\ge 1$. Besides, if $u,v\in V(G_1\uplus G_2)$, we have $d_{G_1\uplus G_2}(u,v)\le d_{K(G_1,G_2)}(u,v)\le 2$. In order to finish the proof note that, on the one hand, if $G_1$ and $G_2$ are complete graphs, then $G_1\uplus G_2$ is a complete graph with at least $2$ vertices and $\diam V(G_1\uplus G_2)=1$. On the other hand, if $\diam V(G_1\uplus G_2)=1$, then for every two vertices $u,v \in V(G_1)$ we have $[u,v]\in E(G_1)$; by symmetry, we have the same result for every $u,v \in V(G_2)$. \end{proof} Since $\diam V(G) \le \diam G \le \diam V(G) + 1$ for every graph $G$, the previous proposition has the following consequence.
\begin{corollary}\label{c:diam} For any two graphs $G_1,G_2$ we have $1\le\diam (G_1\uplus G_2)\le 3$. \end{corollary} Proposition \ref{prop:JoinDist} and Corollary \ref{c:diam} give the following results. Given a graph $G$, we say that $x\in G$ is a midpoint (of an edge) if $d_{G}(x,V(G))=1/2$. \begin{corollary}\label{cor:midpoint} Let $G_1,G_2$ be two graphs. If $d_{G_1\uplus G_2}(x,y) = 3$, then $x,y$ are two midpoints in $G_i$ with $d_{G_i}(x,y)\ge3$ for some $i\in \{1,2\}$. \end{corollary} \begin{corollary}\label{r:diam3} Let $G_1,G_2$ be two graphs. Then $\diam (G_1\uplus G_2) = 3$ if and only if there are two midpoints $x,y$ in $G_i$ with $d_{G_i}(x,y)\ge3$ for some $i\in \{1,2\}$. \end{corollary} \section{Hyperbolicity constant of the graph join of two graphs} In this section we obtain some bounds for the hyperbolicity constant of the graph join of two graphs. These bounds allow us to prove that joins of graphs are always hyperbolic, with a small hyperbolicity constant. The next well-known result will be useful. \begin{theorem}\cite[Theorem 8]{RSVV}\label{t:diameter1} In any graph $G$ the inequality $\delta(G)\le \diam G / 2$ holds, and it is sharp. \end{theorem} We have the following consequence of Corollary \ref{c:diam} and Theorem \ref{t:diameter1}. \begin{corollary}\label{cor:SP} For any two graphs $G_1,G_2$, the graph join $G_1\uplus G_2$ is hyperbolic with $\delta(G_1\uplus G_2)\leq 3/2$, and the inequality is sharp. \end{corollary} Theorem \ref{th:hyp3/2} characterizes the graph joins for which the equality in the previous corollary is attained. The following result in \cite[Lemma 5]{RSVV} will be useful. \begin{lemma}\label{l:subgraph} If $\Gamma$ is an isometric subgraph of $G$, then $\delta(\Gamma) \le \delta(G)$.
\end{lemma} \begin{theorem}\label{th:HypIsomJoin} For any two graphs $G_1,G_2$, we have $$\delta(G_1\uplus G_2)=\max\{ \delta(\Gamma_1\uplus \Gamma_2) : \Gamma_i \text{ is an isometric subgraph of } G_i \text{ for } i=1,2 \}.$$ \end{theorem} \begin{proof} By Proposition \ref{prop:IsomJoin} and Lemma \ref{l:subgraph} we have $\delta(G_1\uplus G_2)\ge \delta(\Gamma_1\uplus \Gamma_2)$ for any isometric subgraphs $\Gamma_i$ of $G_i$, $i=1,2$. Besides, since any graph is an isometric subgraph of itself, we obtain the equality by taking $\Gamma_1=G_1$ and $\Gamma_2=G_2$. \end{proof} Denote by $J(G)$ the set of vertices and midpoints of edges in $G$. As usual, by \emph{cycle} we mean a simple closed curve, i.e., a path whose vertices are all different, except for the last one, which is equal to the first vertex. First, we collect some previous results of \cite{BRS} which will be useful. \begin{theorem}\cite[Theorem 2.6]{BRS} \label{t:multk/4} For every hyperbolic graph $G$, $\delta(G)$ is a multiple of $1/4$. \end{theorem} \begin{theorem}\cite[Theorem 2.7]{BRS} \label{t:TrianVMp} For any hyperbolic graph $G$, there exists a geodesic triangle $T = \{x, y, z\}$ that is a cycle with $x, y, z \in J(G)$ and $\delta(T) = \delta(G)$. \end{theorem} The following result characterizes the hyperbolic graphs with a small hyperbolicity constant, see \cite[Theorem 11]{MRSV}. Let us define the \emph{circumference} $c(G)$ of a graph $G$ which is not a tree as the supremum of the lengths of its cycles; if $G$ is a tree we define $c(G)=0$. \begin{theorem}\label{th:delt<1} Let $G$ be any graph.
\begin{itemize} \item[(a)] {$\delta(G) = 0$ if and only if $G$ is a tree.} \item[(b)] {No graph $G$ satisfies $\delta(G) = 1/4$ or $\delta(G) = 1/2$.} \item[(c)] {$\delta(G) = 3/4$ if and only if $\ c(G)=3$.} \end{itemize} \end{theorem} We have the following consequence for the hyperbolicity constant of joins of graphs. \begin{proposition}\label{r:discretJoin} For any two graphs $G_1,G_2$, the graph join $G_1\uplus G_2$ is hyperbolic with hyperbolicity constant $\delta(G_1\uplus G_2)$ in $\{0, 3/4, 1, 5/4, 3/2\}$. \end{proposition} If $G_1$ and $G_2$ are \emph{isomorphic}, then we write $G_1 \simeq G_2$. It is clear that if $G_1\simeq G_2$, then $\delta(G_1)=\delta(G_2)$. The $n$-vertex edgeless graph ($n\ge1$), or \emph{empty graph}, is the graph with $n$ vertices and no edges; it is commonly denoted by $E_n$. The following result allows us to characterize the joins of graphs with hyperbolicity constant less than one in terms of their factor graphs. Recall that $\Delta_G$ denotes the maximum degree of the vertices in $G$. \begin{theorem}\label{th:deltJoin<1} Let $G_1,G_2$ be two graphs. \begin{itemize} \item[(1)] {$\delta(G_1\uplus G_2)=0$ if and only if $G_1$ and $G_2$ are empty graphs and one of them is isomorphic to $E_1$.} \item[(2)] {$\delta(G_1\uplus G_2)=3/4$ if and only if $G_1\simeq E_1$ and $\Delta_{G_2}=1$, or $G_2\simeq E_1$ and $\Delta_{G_1}=1$.} \end{itemize} \end{theorem} \begin{proof}$ $ \begin{itemize} \item[(1)] {By Theorem \ref{th:delt<1} it suffices to characterize the joins of graphs which are trees. If $G_1$ and $G_2$ are empty graphs and one of them is isomorphic to $E_1$, then it is clear that $G_1\uplus G_2$ is a tree. Assume now that $G_1\uplus G_2$ is a tree. If $G_1$ and $G_2$ both have at least two vertices, then $G_1\uplus G_2$ has a cycle with length four. Thus, $G_1$ or $G_2$ is isomorphic to $E_1$.
Without loss of generality we can assume that $G_1\simeq E_1$. Note that if $G_2$ has at least one edge then $G_1\uplus G_2$ has a cycle with length three. Then, $G_2\simeq E_n$ for some $n\in \mathbb{N}$.} \item[(2)] {By Theorem \ref{th:delt<1} it suffices to characterize the joins of graphs with circumference three. If $G_1\simeq E_1$ and $\Delta_{G_2}=1$, or $G_2\simeq E_1$ and $\Delta_{G_1}=1$, then it is clear that $c(G_1\uplus G_2)=3$. Assume now that $c(G_1\uplus G_2)=3$. If $G_1,G_2$ both have at least two vertices then $G_1\uplus G_2$ contains a cycle with length four, and so $c(G_1\uplus G_2)\ge4$. Therefore, $G_1$ or $G_2$ is isomorphic to $E_1$. Without loss of generality we can assume that $G_1\simeq E_1$. Note that if $\Delta_{G_2}\ge2$ then there is a subgraph isomorphic to $E_1\uplus P_3$ in $G_1\uplus G_2$; thus, $G_1\uplus G_2$ contains a cycle with length four. So, we have $\Delta_{G_2}\le1$. Besides, since $G_2$ is a non-empty graph by (1), we have $\Delta_{G_2}\ge1$.} \end{itemize} \end{proof} The following result will be useful, see \cite[Theorem 11]{RSVV}. The graph join of a cycle $C_{n-1}$ and a single vertex $E_1$ is referred to as a \emph{wheel} with $n$ vertices and denoted by $W_n$. Notice that the complete bipartite graph $K_{n,m}$ is isomorphic to the graph join of two empty graphs $E_n,E_m$, i.e., $K_{n,m}\simeq E_n\uplus E_m$. \begin{example}\label{examples} The following graphs have these hyperbolicity constants: \begin{itemize} \item The wheel graph with $n$ vertices $W_n$ verifies $\delta(W_4)=\delta(W_5)=1$, $\delta(W_n)=3/2$ for every $7\le n\le 10$, and $\delta(W_n)=5/4$ for $n=6$ and for every $n\ge 11$. \item The complete bipartite graphs verify $\delta(K_{1,n}) = 0$ for every $n\ge1$, and $\delta(K_{m,n}) = 1$ for every $m,n \ge2$.
\end{itemize} \end{example} Theorem \ref{th:deltJoin<1} and Example \ref{examples} show that the family of graphs $E_1\uplus G$, as $G$ ranges over all graphs, is a representative collection of joins of graphs, since their hyperbolicity constants take all possible values. The following results characterize the graphs with hyperbolicity constant one and greater than one, respectively. If $G_0$ is a subgraph of $G$ and $w\in V(G_0)$, we denote by $\deg_{G_0}(w)$ the degree of $w$ in the subgraph induced by $V(G_0)$. \begin{theorem}\cite[Theorem 3.10]{BRS2}\label{th:delt=1} Let $G$ be any graph. Then $\delta(G) = 1$ if and only if the following conditions hold: \begin{itemize} \item[(1)] {There exists a cycle isomorphic to $C_4$.} \item[(2)] {For every cycle $\sigma$ with $L(\sigma) \ge 5$ and for every vertex $w \in \sigma$, we have $\deg_\sigma(w) \ge3$.} \end{itemize} \end{theorem} \begin{theorem}\cite[Theorem 3.2]{BRS2}\label{th:delt>=5/4} Let $G$ be any graph. Then $\delta(G) \ge 5/4$ if and only if there exist a cycle $\sigma$ in $G$ with length $L(\sigma) \ge 5$ and a vertex $w \in V(\sigma)$ such that $\deg_\sigma(w) = 2$. \end{theorem} Theorem \ref{th:delt>=5/4} has the following consequence for joins of graphs. \begin{lemma}\label{l:Fact_Delt>1} Let $G_1,G_2$ be two graphs. If $\delta(G_1)>1$, then $\delta(G_1\uplus G_2)>1$. \end{lemma} \begin{proof} By Theorem \ref{th:delt>=5/4}, there exist a cycle $\sigma$ in $G_1\uplus G_2$ (contained in $G_1$) with length $L(\sigma) \ge 5$ and a vertex $w \in\sigma$ such that $\deg_\sigma(w) = 2$. Thus, Theorem \ref{th:delt>=5/4} gives $\delta(G_1\uplus G_2)>1$. \end{proof} Note that the converse of Lemma \ref{l:Fact_Delt>1} does not hold, since $\delta(E_1)=\delta(P_4)=0$ and we can check that $\delta(E_1\uplus P_4)=5/4$.
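The lower bound $\delta(E_1\uplus P_4)\ge 5/4$ in the preceding remark can be checked mechanically against the criterion of Theorem \ref{th:delt>=5/4}: the join contains a cycle of length at least $5$ on whose vertex set some vertex has induced degree $2$. The brute-force sketch below (helper names are our own) only certifies this lower bound, not the exact value $5/4$.

```python
from itertools import permutations

def join(adj1, adj2):
    """Graph join of two vertex-disjoint graphs given as adjacency dicts:
    disjoint union plus all cross edges."""
    adj = {u: set(nb) for u, nb in adj1.items()}
    adj.update({u: set(nb) for u, nb in adj2.items()})
    for u in adj1:
        for v in adj2:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def witness_cycle(adj, min_len=5):
    """Search for a cycle sigma with L(sigma) >= min_len together with a
    vertex whose degree in the subgraph induced by V(sigma) equals 2."""
    V = sorted(adj)
    for k in range(min_len, len(V) + 1):
        for perm in permutations(V, k):
            if perm[0] != min(perm):
                continue  # fix the starting vertex to skip rotations
            closed = (all(perm[i + 1] in adj[perm[i]] for i in range(k - 1))
                      and perm[0] in adj[perm[-1]])
            if closed:
                S = set(perm)
                for w in S:
                    if len(adj[w] & S) == 2:
                        return perm, w
    return None

E1 = {'v': set()}
P4 = {'p1': {'p2'}, 'p2': {'p1', 'p3'}, 'p3': {'p2', 'p4'}, 'p4': {'p3'}}
found = witness_cycle(join(E1, P4))  # a 5-cycle with an induced-degree-2 vertex
```

The exhaustive search over vertex sequences is exponential, which is acceptable here since the join has only five vertices.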
\begin{corollary}\label{c:FactDelt>1} Let $G_1,G_2$ be two graphs. Then $$\delta(G_1\uplus G_2)\ge \min\big\{5/4,\max\{\delta(G_1),\delta(G_2)\}\big\}.$$ \end{corollary} \begin{proof} By symmetry, it suffices to show $\delta(G_1\uplus G_2)\ge \min\{5/4,\delta(G_1)\}$. If $\delta(G_1)>1$, then the inequality holds by Lemma \ref{l:Fact_Delt>1}. If $\delta(G_1)=1$, then there exists a cycle isomorphic to $C_4$ in $G_1\subset G_1\uplus G_2$; hence, $\delta(G_1\uplus G_2)\ge1$. If $\delta(G_1)=3/4$, then there exists a cycle isomorphic to $C_3$ in $G_1\subset G_1\uplus G_2$; hence, $\delta(G_1\uplus G_2)\ge3/4$. The inequality is direct if $\delta(G_1)=0$. \end{proof} The following results allow us to characterize the joins of graphs with hyperbolicity constant one in terms of $G_1$ and $G_2$. \begin{lemma}\label{l:EmptyJoin} Let $G$ be any graph. Then $\delta(E_1\uplus G)\le1$ if and only if every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)\ge2$ for every vertex $w \in V(\eta)$. \end{lemma} Note that if every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)\ge2$ for every vertex $w \in V(\eta)$, then the same result holds for $L(\eta)\ge3$ instead of $L(\eta)=3$. \begin{proof} Let $v$ be the vertex in $E_1$. Assume first that $\delta(E_1\uplus G)\le1$. Seeking a contradiction, assume that there is a path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ and one vertex $w' \in V(\eta)$ with $\deg_\eta(w')=1$. Consider now the cycle $\sigma$ obtained by joining the endpoints of $\eta$ with $v$. Note that $w'\in \sigma$ and $\deg_\sigma(w')=2$; therefore, Theorem \ref{th:delt>=5/4} gives $\delta(E_1\uplus G)>1$, which is a contradiction.
Assume now that every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)\ge2$ for every vertex $w \in V(\eta)$. If $G$ has no path isomorphic to $P_4$, then there is no cycle in $E_1\uplus G$ with length greater than $4$, and so $\delta(E_1\uplus G)\le1$. We now prove that for every cycle $\sigma$ in $E_1\uplus G$ with $L(\sigma) \ge 5$ we have $\deg_{\sigma}(w)\ge3$ for every vertex $w \in V(\sigma)$. Let $\sigma$ be any cycle in $E_1\uplus G$ with $L(\sigma) \ge 5$. If $v\in \sigma$, then $\sigma\cap G$ is a subgraph of $G$ isomorphic to $P_{n}$ with $n=L(\sigma)-1$, and $\deg_\sigma(v)=n\ge4$. Since $L(\sigma\cap G)\ge3$, the hypothesis gives $\deg_{\sigma\cap G}(w)\ge2$ for every $w\in V(\sigma\cap G)$, and we conclude $\deg_\sigma(w)\ge3$ for every $w\in V(\sigma)\setminus\{v\}$. If $v\notin \sigma$, let $w$ be any vertex in $\sigma$ and let $P(w)$ be a path of length $3$ contained in $\sigma$ such that $w$ is an endpoint of $P(w)$. By hypothesis, $\deg_{P(w)}(w)\ge2$; since $w$ has a neighbor $w'\in V(\sigma\setminus P(w))$, we obtain $\deg_{\sigma}(w)\ge3$ for every $w\in V(\sigma)$. Then Theorem \ref{th:delt>=5/4} gives the result.
\end{proof}

Note that if a graph $G$ satisfies $\diam G\le2$, then every path $\eta$ joining two vertices of $G$ with $L(\eta) = 3$ satisfies $\deg_\eta(w)\ge2$ for every vertex $w \in V(\eta)$. The converse does not hold, since in the disjoint union $C_3\cup C_3$ of two cycles $C_3$ any path of length $3$ is a cycle and $\diam (C_3\cup C_3) = \infty$. However, these two conditions are equivalent if $G$ is connected. If $G$ is a graph with connected components $\{G_j\}$, we define
$$
\diam^* G :=\sup_{j} \, \diam G_j.
$$
Note that $\diam^* G = \diam G$ if $G$ is connected; otherwise, $\diam G=\infty$. Also, $\diam^* G>1$ is equivalent to $\Delta_{G}\ge2$. We also have the following result:

\begin{lemma}\label{lemaX}
Let $G$ be any graph. Then $\diam^* G\le2$ if and only if every path $\eta$ joining two vertices of $G$ with $L(\eta)=3$ satisfies $\deg_\eta(w)\ge2$ for every $w\in V(\eta)$.
\end{lemma}

\begin{lemma}\label{l:Fact2Vert}
Let $G_1$ and $G_2$ be two graphs with at least two vertices. Then $\delta(G_1\uplus G_2)=1$ if and only if, for $i=1,2$, either $\diam G_i\le2$ or $G_i$ is an empty graph.
\end{lemma}

\begin{proof}
Assume that $\delta(G_1\uplus G_2)=1$. Seeking a contradiction, suppose that $\diam G_1\ge5/2$ and $G_1$ is a non-empty graph, or that $\diam G_2 \ge 5/2$ and $G_2$ is a non-empty graph. By symmetry, without loss of generality we can assume that $\diam G_1\ge5/2$ and $G_1$ is a non-empty graph; hence, there are a vertex $v\in V(G_1)$ and a midpoint $p\in [w_1,w_2]$ with $d_{G_1}(v,p)\ge5/2$. Consider a cycle $\sigma$ in $G_1\uplus G_2$ with $L(\sigma)=5$ containing the vertex $v$, the edge $[w_1,w_2]$ and two vertices of $G_2$. We have $\deg_{\sigma}(v)=2$. Thus, Theorem \ref{th:delt>=5/4} gives $\delta(G_1\uplus G_2)>1$. This contradicts our assumption, and so we obtain $\diam G_1\le2$.

Assume now that, for $i=1,2$, either $\diam G_i\le2$ or $G_i$ is an empty graph. Since $G_1$ and $G_2$ have at least two vertices, there exists a cycle isomorphic to $C_4$ in $G_1\uplus G_2$. If both $G_1$ and $G_2$ are empty graphs, then Example \ref{examples} gives $\delta(G_1\uplus G_2)=1$. Hence, without loss of generality we can assume that $G_1$ is a non-empty graph, and then $\diam G_1\le2$. Assume first that $G_2$ is an empty graph. Let $\sigma$ be any cycle in $G_1\uplus G_2$ with $L(\sigma)\ge5$.
Since $\sigma$ contains at least three vertices of $G_1$, we have $\deg_{\sigma}(v)=|V(G_1)\cap\sigma|\ge3$ for every $v\in V(G_2)\cap\sigma$. Besides, if $|V(G_2)\cap\sigma|\ge3$, then $\deg_{\sigma}(w)\ge|V(G_2)\cap\sigma|\ge3$ for every $w\in V(G_1)\cap\sigma$. If $|V(G_2)\cap\sigma|=1$, then $\eta:=\sigma\cap G_1$ is a path in $G_1$ with $L(\eta)\ge3$, and so $\deg_\eta(w)\ge2$ and $\deg_\sigma(w)\ge3$ for every $w \in V(\eta)$. If $|V(G_2)\cap\sigma|=2$, then $\sigma\cap G_1$ is the union of two paths and $|V(G_1)\cap\sigma|\ge3$; since $\diam G_1\le2$, we have $\deg_{G_1\cap\sigma}(w)\ge1$ for every $w\in V(G_1)\cap\sigma$ (otherwise there would be a vertex $w\in V(G_1)\cap\sigma$ and a midpoint $p\in G_1\cap \sigma$ with $d_{G_1}(w,p)>2$). Then $\deg_{\sigma}(v)\ge3$ for every $v\in V(\sigma)$, and so we obtain $\delta(G_1\uplus G_2)=1$ by Theorem \ref{th:delt=1}.

\smallskip

Finally, assume that $\diam G_2\le2$. By Theorem \ref{t:TrianVMp} it suffices to consider geodesic triangles $T=\{x,y,z\}$ in $G_1\uplus G_2$ that are cycles with $x,y,z\in J(G_1\uplus G_2)$. Since $\diam G_1,\diam G_2\le2$, Proposition \ref{prop:JoinDist} gives $L([xy]),L([yz]),L([zx]) \le 2$; thus, for every $\alpha\in[xy]$, $d_{G_1\uplus G_2}(\alpha,[yz]\cup[zx])\le d_{G_1\uplus G_2}(\alpha,\{x,y\})\le L([xy])/2$. Hence, $\delta(T)\le \max\{L([xy]),L([yz]),L([zx])\}/2\le1$, and so $\delta(G_1\uplus G_2)\le1$. Since $G_1$ and $G_2$ have at least two vertices, Theorem \ref{th:delt<1} gives $\delta(G_1\uplus G_2)\ge1$, and we conclude $\delta(G_1\uplus G_2)=1$.
\end{proof}

The following result characterizes the joins of graphs with hyperbolicity constant one.

\begin{theorem}\label{th:hypJoin1}
Let $G_1,G_2$ be any two graphs.
Then the following statements hold:
\begin{itemize}
\item {Assume that $G_1\simeq E_1$. Then $\delta(G_1\uplus G_2)=1$ if and only if $1<\diam^* G_2\le2$.}
\item {Assume that $G_1$ and $G_2$ have at least two vertices. Then $\delta(G_1\uplus G_2)=1$ if and only if, for $i=1,2$, either $\diam G_i\le2$ or $G_i$ is an empty graph.}
\end{itemize}
\end{theorem}

\begin{proof}
The first statement follows from Theorem \ref{th:deltJoin<1} and Lemmas \ref{l:EmptyJoin} and \ref{lemaX}. The second statement is just Lemma \ref{l:Fact2Vert}.
\end{proof}

In order to compute the hyperbolicity constant of any graph join, we now characterize the joins of graphs with hyperbolicity constant $3/2$.

\begin{lemma}\label{l:hyp3/2Fact}
Let $G_1,G_2$ be any two graphs. If $\delta(G_1\uplus G_2)=3/2$, then each geodesic triangle $T=\{x,y,z\}$ in $G_1\uplus G_2$ that is a cycle with $x,y,z \in J(G_1\uplus G_2)$ and $\delta(T)=3/2$ is contained in either $G_1$ or $G_2$.
\end{lemma}

\begin{proof}
Seeking a contradiction, assume that there is a geodesic triangle $T=\{x,y,z\}$ in $G_1\uplus G_2$ that is a cycle with $x,y,z \in J(G_1\uplus G_2)$ and $\delta(T)=3/2$, and which contains vertices in both factors $G_1$ and $G_2$. Without loss of generality we can assume that there is $p\in[xy]$ with $d_{G_1\uplus G_2}(p, [yz]\cup[zx]) = 3/2$, and so $L([xy])\ge3$. Hence, $d_{G_1\uplus G_2}(x,y)=3$ by Corollary \ref{c:diam}, and Corollary \ref{cor:midpoint} gives that $x,y$ are midpoints of edges either in $G_1$ or in $G_2$, and so $p$ is a vertex of $G_1\uplus G_2$. Without loss of generality we can assume that $x,y\in G_1$. Let $V_x$ be the vertex of $[xz]\cup[zy]$ closest to $x$. If $p\in V(G_2)$, then $d_{G_1\uplus G_2}(p,[yz]\cup[zx]) \le d_{G_1\uplus G_2}(p,V_x) =1$. This contradicts our assumption.
If $p\in V(G_1)$, then, since $T$ contains vertices in both factors, we have $d_{G_1\uplus G_2}(p,[yz]\cup[zx]) \le d_{G_1\uplus G_2}\big(p,V(G_2)\cap([yz]\cup[zx])\big) =1$. This also contradicts our assumption, and so we have the result.
\end{proof}

\begin{corollary}\label{c:3/2}
Let $G_1,G_2$ be any two graphs. If $\delta(G_1\uplus G_2)=3/2$, then
$$\max\{\delta(G_1),\delta(G_2)\}\ge3/2.$$
\end{corollary}

\smallskip

The following families of graphs allow us to characterize the joins of graphs with hyperbolicity constant $3/2$. Denote by $C_n$ the cycle graph with $n\ge3$ vertices and by $V(C_n):=\{v_1^{(n)},\ldots,v_n^{(n)}\}$ its set of vertices, where $[v_n^{(n)},v_1^{(n)}]\in E(C_n)$ and $[v_i^{(n)},v_{i+1}^{(n)}]\in E(C_n)$ for $1\le i\le n-1$.

Let $\mathcal{C}_6^{(1)}$ be the set of graphs obtained from $C_6$ by adding a (proper or not) subset of the set of edges $\{[v_2^{(6)},v_6^{(6)}]$, $[v_4^{(6)},v_6^{(6)}]\}$. Define the set of graphs
$$\mathcal{F}_6:=\{ G \ \text{\small containing, as an induced subgraph, a graph isomorphic to some element of } \mathcal{C}_6^{(1)}\}.$$
Let $\mathcal{C}_7^{(1)}$ be the set of graphs obtained from $C_7$ by adding a (proper or not) subset of the set of edges $\{[v_2^{(7)},v_6^{(7)}]$, $[v_2^{(7)},v_7^{(7)}]$, $[v_4^{(7)},v_6^{(7)}]$, $[v_4^{(7)},v_7^{(7)}]\}$. Define
$$\mathcal{F}_7:=\{ G \ \text{\small containing, as an induced subgraph, a graph isomorphic to some element of } \mathcal{C}_7^{(1)}\}.$$
Let $\mathcal{C}_8^{(1)}$ be the set of graphs obtained from $C_8$ by adding a (proper or not) subset of the set $\{[v_2^{(8)},v_6^{(8)}]$, $[v_2^{(8)},v_8^{(8)}]$, $[v_4^{(8)},v_6^{(8)}]$, $[v_4^{(8)},v_8^{(8)}]\}$.
Also, let $\mathcal{C}_8^{(2)}$ be the set of graphs obtained from $C_8$ by adding a (proper or not) subset of $\{[v_2^{(8)},v_8^{(8)}]$, $[v_4^{(8)},v_6^{(8)}]$, $[v_4^{(8)},v_7^{(8)}]$, $[v_4^{(8)},v_8^{(8)}]\}$. Define
$$\mathcal{F}_8:=\{ G \ \text{\small containing, as an induced subgraph, a graph isomorphic to some element of } \mathcal{C}_8^{(1)}\cup \mathcal{C}_8^{(2)}\}.$$
Let $\mathcal{C}_9^{(1)}$ be the set of graphs obtained from $C_9$ by adding a (proper or not) subset of the set of edges $\{[v_2^{(9)},v_6^{(9)}]$, $[v_2^{(9)},v_9^{(9)}]$, $[v_4^{(9)},v_6^{(9)}]$, $[v_4^{(9)},v_9^{(9)}]\}$. Define
$$\mathcal{F}_9:=\{ G \ \text{\small containing, as an induced subgraph, a graph isomorphic to some element of } \mathcal{C}_9^{(1)}\}.$$
Finally, define the set $\mathcal{F}$ by
$$\mathcal{F}:=\mathcal{F}_6\cup\mathcal{F}_7\cup\mathcal{F}_8\cup\mathcal{F}_9.$$
Note that $\mathcal{F}_6$, $\mathcal{F}_7$, $\mathcal{F}_8$ and $\mathcal{F}_9$ are not disjoint sets of graphs. The following theorem characterizes the joins of graphs $G_1$ and $G_2$ with $\delta(G_1\uplus G_2)=3/2$. For any non-empty set $S\subset V(G)$, the subgraph induced by $S$ will be denoted by $\langle S\rangle$.

\begin{theorem}\label{th:hyp3/2}
Let $G_1,G_2$ be any two graphs. Then $\delta(G_1\uplus G_2)=3/2$ if and only if $G_1\in \mathcal{F}$ or $G_2\in\mathcal{F}$.
\end{theorem}

\begin{proof}
Assume first that $\delta(G_1\uplus G_2)=3/2$. By Theorem \ref{t:TrianVMp} there is a geodesic triangle $T=\{x,y,z\}$ in $G_1\uplus G_2$ that is a cycle with $x,y,z \in J(G_1\uplus G_2)$ and $\delta(T)=3/2$. By Lemma \ref{l:hyp3/2Fact}, $T$ is contained either in $G_1$ or in $G_2$; without loss of generality we can assume that $T$ is contained in $G_1$. Also without loss of generality, we can assume that there is $p\in[xy]$ with $d_{G_1\uplus G_2}(p, [yz]\cup[zx]) = 3/2$; by Corollary \ref{c:diam}, $L([xy])=3$.
Hence, by Corollary \ref{cor:midpoint}, $x,y$ are midpoints of edges in $G_1$, and so $p\in V(G_1)$. Since $L([yz])\le3$, $L([zx])\le3$ and $L([yz])+L([zx])\ge L([xy])$, we have $6\le L(T)\le9$.

\smallskip

Assume that $L(T)=6$. Denote by $\{v_1,\ldots,v_6\}$ the vertices of $T$, so that $T=\bigcup_{i=1}^{6}[v_i,v_{i+1}]$ with $v_7:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$. Since $d_{G_1\uplus G_2}(x,y)=3$, the induced subgraph $\langle\{v_1,\ldots,v_6\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$, it contains neither $[v_3,v_1]$, $[v_3,v_5]$ nor $[v_3,v_6]$. Note that $[v_2,v_6]$ and $[v_4,v_6]$ may be contained in $\langle\{v_1,\ldots,v_6\}\rangle$. Therefore, $G_1\in \mathcal{F}_6$.

Assume that $L(T)=7$ and $G_1\notin \mathcal{F}_6$. Denote by $\{v_1,\ldots,v_7\}$ the vertices of $T$, so that $T=\bigcup_{i=1}^{7}[v_i,v_{i+1}]$ with $v_8:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$. Since $d_{G_1\uplus G_2}(x,y)=3$, the induced subgraph $\langle\{v_1,\ldots,v_7\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$, it contains neither $[v_3,v_1]$, $[v_3,v_5]$, $[v_3,v_6]$ nor $[v_3,v_7]$. Since $G_1\notin \mathcal{F}_6$, neither $[v_1,v_6]$ nor $[v_5,v_7]$ is contained in $\langle\{v_1,\ldots,v_7\}\rangle$. Note that $[v_2,v_6]$, $[v_2,v_7]$, $[v_4,v_6]$ and $[v_4,v_7]$ may be contained in $\langle\{v_1,\ldots,v_7\}\rangle$. Hence, $G_1\in \mathcal{F}_7$.

Assume that $L(T)=8$ and $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7$.
Denote by $\{v_1,\ldots,v_8\}$ the vertices of $T$, so that $T=\bigcup_{i=1}^{8}[v_i,v_{i+1}]$ with $v_9:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$. Since $d_{G_1\uplus G_2}(x,y)=3$, the induced subgraph $\langle\{v_1,\ldots,v_8\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$, it contains neither $[v_3,v_1]$, $[v_3,v_5]$, $[v_3,v_6]$, $[v_3,v_7]$ nor $[v_3,v_8]$. Since $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7$, none of $[v_1,v_6]$, $[v_1,v_7]$, $[v_5,v_7]$, $[v_5,v_8]$ and $[v_6,v_8]$ is contained in $\langle\{v_1,\ldots,v_8\}\rangle$. Since $T$ is a geodesic triangle, we have $z\in\{v_{6,7},v_7,v_{7,8}\}$, where $v_{6,7}$ and $v_{7,8}$ are the midpoints of $[v_6,v_7]$ and $[v_7,v_8]$, respectively. If $z=v_7$, then $\langle\{v_1,\ldots,v_8\}\rangle$ contains neither $[v_2,v_7]$ nor $[v_4,v_7]$; note that $[v_2,v_6]$, $[v_2,v_8]$, $[v_4,v_6]$ and $[v_4,v_8]$ may be contained in $\langle\{v_1,\ldots,v_8\}\rangle$. If $z=v_{6,7}$, then $\langle\{v_1,\ldots,v_8\}\rangle$ contains neither $[v_2,v_6]$ nor $[v_2,v_7]$; note that $[v_2,v_8]$, $[v_4,v_6]$, $[v_4,v_7]$ and $[v_4,v_8]$ may be contained in $\langle\{v_1,\ldots,v_8\}\rangle$. By symmetry, we obtain the analogous conclusion for $z=v_{7,8}$. Therefore, $G_1\in \mathcal{F}_8$.

Assume that $L(T)=9$ and $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7\cup\mathcal{F}_8$. Denote by $\{v_1,\ldots,v_9\}$ the vertices of $T$, so that $T=\bigcup_{i=1}^{9}[v_i,v_{i+1}]$ with $v_{10}:=v_1$. Without loss of generality we can assume that $x\in[v_1,v_2]$, $y\in[v_4,v_5]$ and $p=v_3$.
Since $d_{G_1\uplus G_2}(x,y)=3$, the induced subgraph $\langle\{v_1,\ldots,v_9\}\rangle$ contains neither $[v_1,v_4]$, $[v_1,v_5]$, $[v_2,v_4]$ nor $[v_2,v_5]$; besides, since $d_{G_1\uplus G_2}(p,[yz]\cup[zx])>1$, it contains neither $[v_3,v_1]$, $[v_3,v_5]$, $[v_3,v_6]$, $[v_3,v_7]$, $[v_3,v_8]$ nor $[v_3,v_9]$. Since $T$ is a geodesic triangle, $z$ is the midpoint of $[v_7,v_8]$. Since $d_{G_1\uplus G_2}(y,z)=d_{G_1\uplus G_2}(z,x)=3$, the induced subgraph $\langle\{v_1,\ldots,v_9\}\rangle$ contains neither $[v_1,v_7]$, $[v_1,v_8]$, $[v_2,v_7]$, $[v_2,v_8]$, $[v_4,v_7]$, $[v_4,v_8]$, $[v_5,v_7]$ nor $[v_5,v_8]$. Since $G_1\notin \mathcal{F}_6\cup\mathcal{F}_7\cup\mathcal{F}_8$, none of $[v_1,v_6]$, $[v_5,v_9]$, $[v_6,v_8]$, $[v_6,v_9]$ and $[v_7,v_9]$ is contained in $\langle\{v_1,\ldots,v_9\}\rangle$. Note that $[v_2,v_6]$, $[v_2,v_9]$, $[v_4,v_6]$ and $[v_4,v_9]$ may be contained in $\langle\{v_1,\ldots,v_9\}\rangle$. Hence, $G_1\in \mathcal{F}_9$.

\smallskip

Finally, one can check that if $G_1\in \mathcal{F}$ or $G_2\in \mathcal{F}$, then $\delta(G_1\uplus G_2)=3/2$, by following the previous arguments.
\end{proof}

These results allow us to compute, in a simple way, the hyperbolicity constant of every graph join:

\begin{theorem}\label{th:hypJoin}
Let $G_1,G_2$ be any two graphs. Then
\[
\delta(G_1\uplus G_2)=\left\{
\begin{array}{ll}
0,&\text{if } G_i \simeq E_1 \text{ and } G_j \simeq E_n \text{ for } i\neq j \text{ and } n\in \mathbb{N},\\
3/4,&\text{if } G_i\simeq E_1 \text{ and } \Delta_{G_j}=1 \text{ for } i\neq j,\\
1,&\text{if } G_i\simeq E_1 \text{ and } 1< \diam^* G_j\le2 \text{ for } i\neq j;\text{ or}\\
& n_i\ge2 \text{ and } \diam G_i\le2 \text{ or}\\
& G_i \text{ is an empty graph for } i=1,2;\\
3/2,&\text{if } G_1\in \mathcal{F} \text{ or } G_2\in\mathcal{F},\\
5/4,&\text{otherwise}.
\end{array}
\right.
\]
\end{theorem}

\begin{corollary}\label{c:emptyUplus}
Let $G$ be any graph. Then
\[
\delta(E_1\uplus G)=\left\{
\begin{array}{ll}
0,\quad &\mbox{if } \diam^* G=0,\\
3/4,\quad &\mbox{if } \diam^* G=1,\\
1,\quad &\mbox{if } 1< \diam^* G\le2,\\
5/4,\quad &\mbox{if } \diam^* G>2 \text{ and } G\notin \mathcal{F},\\
3/2,\quad &\mbox{if } G\in \mathcal{F}.
\end{array}
\right.
\]
\end{corollary}

\section{Hyperbolicity of the corona of two graphs}\label{Sect4}

In this section we study the hyperbolicity of the corona of two graphs, defined by Frucht and Harary in 1970; see \cite{FH}.

\begin{definition}\label{def:crown}
Let $G_1$ and $G_2$ be two graphs with $V(G_1)\cap V(G_2)=\emptyset$. The \emph{corona} of $G_1$ and $G_2$, denoted by $G_1\diamond G_2$, is defined as the graph obtained by taking one copy of $G_1$ and one copy of $G_2$ for each vertex $v\in V(G_1)$, and then joining each vertex $v\in V(G_1)$ to every vertex in the $v$-th copy of $G_2$.
\end{definition}

From the definition it clearly follows that the corona product of two graphs is a non-commutative and non-associative operation. Figure \ref{fig:crown} shows the corona of two graphs.
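To make the definition concrete, the following small sketch (our illustration, not part of the paper; plain Python, with vertex labels of our choosing) builds the corona as a vertex and edge list and counts vertices and edges of $C_4\diamond C_3$, the graph of Figure \ref{fig:crown}.

```python
def corona(V1, E1, V2, E2):
    """Corona G1 ◇ G2: one copy of G1 and, for each v in V1, a fresh copy of G2
    whose vertices are tagged (v, w) and are all joined to v."""
    V = list(V1) + [(v, w) for v in V1 for w in V2]
    E = list(E1)
    for v in V1:
        E += [((v, a), (v, b)) for (a, b) in E2]   # edges inside the v-th copy of G2
        E += [(v, (v, w)) for w in V2]             # join v to every vertex of its copy
    return V, E

def cycle(n):
    # C_n on vertices 0, ..., n-1.
    return list(range(n)), [(i, (i + 1) % n) for i in range(n)]

V, E = corona(*cycle(4), *cycle(3))
print(len(V), len(E))  # 16 28: 4 + 4*3 vertices, 4 + 4*(3 + 3) edges
```

Swapping the factors gives $C_3\diamond C_4$ with $3+3\cdot4=15$ vertices, illustrating that the corona product is not commutative.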
\begin{figure}[h]
\centering
\scalebox{1.2}
{\begin{pspicture}(-.75,-1.9)(6.5,1.9)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(-.5,-.5)(.5,-.5)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(.5,-.5)(.5,.5)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(.5,.5)(-.5,.5)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(-.5,.5)(-.5,-.5)
\uput[0](1,0){$\diamond$}
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(1.5,-.5)(2.65,-.5)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(2.65,-.5)(2.075,.5)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(2.075,.5)(1.5,-.5)
\uput[0](2.8,0){\large{=}}
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(3.25,-.75)(4.2,-.7)(3.75,-1.61)(3.25,-.75)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.8,-.7)(6.75,-.75)(6.25,-1.61)(5.8,-.7)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(5.8,.7)(6.75,.75)(6.25,1.61)(5.8,.7)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(3.25,.75)(4.2,.7)(3.75,1.61)(3.25,.75)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-}(4.5,-.5)(5.5,-.5)(5.5,.5)(4.5,.5)(4.5,-.5)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{*-*}(4.5,-.5)(3.25,-.75)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(4.5,-.5)(4.2,-.7)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(4.5,-.5)(3.75,-1.61)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{*-*}(5.5,-.5)(5.8,-.7)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(5.5,-.5)(6.75,-.75)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(5.5,-.5)(6.25,-1.61)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{*-*}(5.5,.5)(5.8,.7)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(5.5,.5)(6.75,.75)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(5.5,.5)(6.25,1.61)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{*-*}(4.5,.5)(3.25,.75)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(4.5,.5)(4.2,.7)
\psline[linewidth=0.01cm,dotsize=0.07055555cm 2.5]{-*}(4.5,.5)(3.75,1.61)
\end{pspicture}}
\caption{Corona of two graphs: $C_4 \diamond C_3$.}
\label{fig:crown}
\end{figure}

Many authors deal only with the corona of finite graphs; however, our results hold for both finite and infinite graphs. If $G$ is a connected graph, we say that $v\in V(G)$ is a \emph{connection vertex} if $G \setminus \{v\}$ is not connected. Given a connected graph $G$, a family of subgraphs $\{G_n\}_{n\in \Lambda}$ of $G$ is a \emph{T-decomposition} of $G$ if $\cup_n G_n=G$ and $G_n\cap G_m$ is either a connection vertex or the empty set for each $n\neq m$. We will need the following result (see \cite[Theorem 5]{BRSV2}), which allows us to obtain global information about the hyperbolicity of a graph from local information.

\begin{theorem}\label{t:treedec}
Let $G$ be any connected graph and let $\{G_n\}_n$ be any T-decomposition of $G$. Then $\delta(G)=\sup_n \delta(G_n)$.
\end{theorem}

We remark that the corona $G_1\diamond G_2$ of two graphs is connected if and only if $G_1$ is connected. The following result characterizes the hyperbolicity of the corona of two graphs and provides the precise value of its hyperbolicity constant.

\begin{theorem}\label{th:corona}
Let $G_1,G_2$ be any two graphs. Then $\delta(G_1\diamond G_2)=\max\{\delta(G_1),\delta(E_1\uplus G_2)\}$.
\end{theorem}

\begin{proof}
Assume first that $G_1$ is connected. The formula follows from Theorem \ref{t:treedec}, since $\{G_1\}\cup \big\{\{v\}\uplus G_2\big\}_{v\in V(G_1)}$ is a T-decomposition of $G_1\diamond G_2$. Finally, note that if $G_1$ is a non-connected graph, then we can apply the previous argument to each connected component.
\end{proof}

Note that Corollary \ref{c:emptyUplus} provides the precise value of $\delta(E_1\uplus G_2)$.

\begin{corollary}\label{cor:sup}
Let $G_1,G_2$ be any two graphs. Then $G_1\diamond G_2$ is hyperbolic if and only if $G_1$ is hyperbolic.
\end{corollary}

\begin{proof}
By Theorem \ref{th:corona} we have $\delta(G_1\diamond G_2)=\max\{\delta(G_1),\delta(E_1\uplus G_2)\}$. Then, by Corollary \ref{cor:SP}, we have $\delta(G_1) \le \delta(G_1\diamond G_2) \le \max\{\delta(G_1),3/2\}$.
\end{proof}

\begin{thebibliography}{99}

\bibitem{ABCD} Alonso, J., Brady, T., Cooper, D., Delzant, T., Ferlini, V., Lustig, M., Mihalik, M., Shapiro, M. and Short, H., Notes on word hyperbolic groups, in: E. Ghys, A. Haefliger, A. Verjovsky (Eds.), Group Theory from a Geometrical Viewpoint, World Scientific, Singapore, 1992.

\bibitem{BRS2} Bermudo, S., Rodr\'{\i}guez, J. M. and Sigarreta, J. M., Small values of the hyperbolicity constant in graphs. Submitted.

\bibitem{BRS} Bermudo, S., Rodr\'{\i}guez, J. M. and Sigarreta, J. M., Computing the hyperbolicity constant, {\it Comput. Math. Appl.} {\bf 62} (2011), 4592-4595.

\bibitem{BRSV2} Bermudo, S., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Vilaire, J.-M., Gromov hyperbolic graphs, {\it Discr. Math.} {\bf 313} (2013), 1575-1585.

\bibitem{BRST} Bermudo, S., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Tour{\'\i}s, E., Hyperbolicity and complement of graphs, {\it Appl. Math. Letters} {\bf 24} (2011), 1882-1887.

\bibitem{BPK} Boguna, M., Papadopoulos, F. and Krioukov, D., Sustaining the Internet with hyperbolic mapping, {\it Nature Commun.} {\bf 1}(62) (2010), 18 p.

\bibitem{BHB} Bowditch, B. H., Notes on Gromov's hyperbolicity criterion for path-metric spaces, in: Group Theory from a Geometrical Viewpoint, Trieste, 1990 (ed. E. Ghys, A. Haefliger and A. Verjovsky; World Scientific, River Edge, NJ, 1991), 64-167.
\bibitem{BHB1} Brinkmann, G., Koolen, J. and Moulton, V., On the hyperbolicity of chordal graphs, {\it Ann. Comb.} {\bf 5} (2001), 61-69.

\bibitem{CCCR} Carballosa, W., Casablanca, R. M., de la Cruz, A. and Rodr\'{\i}guez, J. M., Gromov hyperbolicity in strong product graphs, {\it Electr. J. Comb.} {\bf 20}(3) (2013), P2.

\bibitem{CDR} Carballosa, W., de la Cruz, A. and Rodr\'{\i}guez, J. M., Gromov hyperbolicity in lexicographic product graphs. Submitted.

\bibitem{CPRS} Carballosa, W., Pestana, D., Rodr\'{\i}guez, J. M. and Sigarreta, J. M., Distortion of the hyperbolicity constant of a graph, {\it Electr. J. Comb.} {\bf 19} (2012), P67.

\bibitem{CRS} Carballosa, W., Rodr\'{\i}guez, J. M. and Sigarreta, J. M., New inequalities on the hyperbolicity constant of line graphs, to appear in {\it Ars Combinatoria}. Preprint at http://gama.uc3m.es/index.php/jomaro.html

\bibitem{CRSV} Carballosa, W., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Villeta, M., Gromov hyperbolicity of line graphs, {\it Electr. J. Comb.} {\bf 18} (2011), P210.

\bibitem{Cha} Charney, R., Artin groups of finite type are biautomatic, {\it Math. Ann.} {\bf 292} (1992), 671-683.

\bibitem{CYY} Chen, B., Yau, S.-T. and Yeh, Y.-N., Graph homotopy and Graham homotopy, {\it Discrete Math.} {\bf 241} (2001), 153-170.

\bibitem{CDEHV} Chepoi, V., Dragan, F. F., Estellon, B., Habib, M. and Vaxes, Y., Notes on diameters, centers, and approximating trees of $\delta$-hyperbolic geodesic spaces and graphs, {\it Electr. Notes Discr. Math.} {\bf 31} (2008), 231-234.

\bibitem{ChEs} Chepoi, V. and Estellon, B., Packing and covering $\delta$-hyperbolic spaces by balls, APPROX-RANDOM 2007, pp. 59-73.

\bibitem{Epp} Eppstein, D., Squarepants in a tree: sum of subtree clustering and hyperbolic pants decomposition, SODA 2007, Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pp. 29-38.

\bibitem{K50} Frigerio, R.
and Sisto, A., Characterizing hyperbolic spaces and real trees, {\it Geom. Dedicata} {\bf 142} (2009), 139-149.

\bibitem{FH} Frucht, R. and Harary, F., On the corona of two graphs, {\it Aequationes Math.} {\bf 4}(3) (1970), 322-324.

\bibitem{GaLy} Gavoille, C. and Ly, O., Distance labeling in hyperbolic graphs, in: ISAAC 2005, pp. 171-179.

\bibitem{GH} Ghys, E. and de la Harpe, P., Sur les Groupes Hyperboliques d'apr\`es Mikhael Gromov. Progress in Mathematics 83, Birkh\"auser Boston Inc., Boston, MA, 1990.

\bibitem{G1} Gromov, M., Hyperbolic groups, in: ``Essays in Group Theory'', ed. S. M. Gersten, M.S.R.I. Publ. {\bf 8}, Springer, 1987, 75-263.

\bibitem{H} Harary, F., Graph Theory. Reading, MA: Addison-Wesley, 1994.

\bibitem{K27} Jonckheere, E. and Lohsoonthorn, P., A hyperbolic geometry approach to multipath routing, Proceedings of the 10th Mediterranean Conference on Control and Automation (MED 2002), Lisbon, Portugal, July 2002, FA5-1.

\bibitem{K21} Jonckheere, E. A., Contr\^ole du traffic sur les r\'eseaux \`a g\'eom\'etrie hyperbolique--Vers une th\'eorie g\'eom\'etrique de la s\'ecurit\'e de l'acheminement de l'information, {\it J. Europ. Syst. Autom.} {\bf 8} (2002), 45-60.

\bibitem{K22} Jonckheere, E. A. and Lohsoonthorn, P., Geometry of network security, {\it Amer. Control Conf.} {\bf ACC} (2004), 111-151.

\bibitem{K23} Jonckheere, E. A., Lohsoonthorn, P. and Ariaesi, F., Upper bound on scaled Gromov-hyperbolic delta, {\it Appl. Math. Comp.} {\bf 192} (2007), 191-204.

\bibitem{K24} Jonckheere, E. A., Lohsoonthorn, P. and Bonahon, F., Scaled Gromov hyperbolic graphs, {\it J. Graph Theory} {\bf 2} (2007), 157-180.

\bibitem{K56} Koolen, J. H. and Moulton, V., Hyperbolic bridged graphs, {\it Europ. J. Comb.} {\bf 23} (2002), 683-699.

\bibitem{Kra} Krauthgamer, R. and Lee, J. R., Algorithms on negatively curved spaces, FOCS 2006.
\bibitem{KPKVB} Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A. and Bogu\~n\'a, M., Hyperbolic geometry of complex networks, {\it Physical Review E} {\bf 82}, 036106 (2010).

\bibitem{MRSV} Michel, J., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Villeta, M., Hyperbolicity and parameters of graphs, {\it Ars Comb.} {\bf 100} (2011), 43-63.

\bibitem{MRSV2} Michel, J., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Villeta, M., Gromov hyperbolicity in Cartesian product graphs, {\it Proc. Indian Acad. Sci. Math. Sci.} {\bf 120} (2010), 1-17.

\bibitem{NS} Narayan, O. and Saniee, I., Large-scale curvature of networks, {\it Physical Review E} {\bf 84}, 066108 (2011).

\bibitem{O} Ohshika, K., Discrete Groups, AMS Bookstore, 2002.

\bibitem{PeRSV} Pestana, D., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Villeta, M., Gromov hyperbolic cubic graphs, {\it Central Europ. J. Math.} {\bf 10}(3) (2012), 1141-1151.

\bibitem{PRST} Portilla, A., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Tour{\'\i}s, E., Gromov hyperbolic directed graphs, to appear in {\it Acta Math. Appl. Sinica}.

\bibitem{PRSV} Portilla, A., Rodr\'{\i}guez, J. M., Sigarreta, J. M. and Vilaire, J.-M., Gromov hyperbolic tessellation graphs, to appear in {\it Utilitas Math}. Preprint at http://gama.uc3m.es/index.php/jomaro.html

\bibitem{PT} Portilla, A. and Tour{\'\i}s, E., A characterization of Gromov hyperbolicity of surfaces with variable negative curvature, {\it Publ. Mat.} {\bf 53} (2009), 83-110.

\bibitem{R} Rodr\'{\i}guez, J. M., Characterization of Gromov hyperbolic short graphs, to appear in {\it Acta Mathematica Sinica}. Preprint at http://gama.uc3m.es/index.php/jomaro.html

\bibitem{RSVV} Rodr\'{\i}guez, J. M., Sigarreta, J. M., Vilaire, J.-M. and Villeta, M., On the hyperbolicity constant in graphs, {\it Discr. Math.} {\bf 311} (2011), 211-219.

\bibitem{S} Shang, Y., Lack of Gromov-hyperbolicity in small-world networks, {\it Cent. Eur. J.
Math.} {\bf 10} (2012), 1152-1158.

\bibitem{S2} Shang, Y., Non-hyperbolicity of random graphs with given expected degrees, {\it Stoch. Models} {\bf 29} (2013), 451-462.

\bibitem{ShTa} Shavitt, Y. and Tankel, T., On internet embedding in hyperbolic spaces for overlay construction and distance estimation, INFOCOM 2004.

\bibitem{Si} Sigarreta, J. M., Hyperbolicity in median graphs, {\it Proc. Indian Acad. Sci. Math. Sci.} (2013). In press.

\bibitem{T} Tour{\'\i}s, E., Graphs and Gromov hyperbolicity of non-constant negatively curved surfaces, {\it J. Math. Anal. Appl.} {\bf 380} (2011), 865-881.

\bibitem{WZ} Wu, Y. and Zhang, C., Chordality and hyperbolicity of a graph, {\it Electr. J. Comb.} {\bf 18} (2011), P43.

\end{thebibliography}

\end{document}
\begin{document}

\title[Equiaffine Characterization of Lagrangian Surfaces]{Equiaffine Characterization of Lagrangian \newline Surfaces in $\mathbb{R}^4$}

\author[M. Craizer]{Marcos Craizer}

\address{Catholic University \br Rio de Janeiro \br Brazil}

\email{[email protected]}

\thanks{The author wants to thank CNPq for financial support during the preparation of this manuscript.}

\keywords{Shape Operators, Cubic Forms, Affine Normal Plane Bundle}

\subjclass{53A15, 53D12}

\date{December 23, 2014}

\begin{abstract}
For non-degenerate surfaces in $\mathbb{R}^4$, a distinguished transversal bundle called the affine normal plane bundle was proposed in \cite{Nomizu93}. Lagrangian surfaces have remarkable properties with respect to this bundle; for example, the normal bundle itself is Lagrangian. In this paper we characterize those surfaces which are Lagrangian with respect to some parallel symplectic form in $\mathbb{R}^4$.
\end{abstract}

\maketitle

\section{Introduction}

We consider non-degenerate surfaces $M^2\subset\mathbb{R}^4$. For such surfaces there are many possible choices of the transversal bundle, and we consider here the {\it affine normal plane bundle} proposed in \cite{Nomizu93}. For the affine mean curvature, umbilical surfaces and some other properties of this bundle we refer to \cite{Dillen}, \cite{Magid}, \cite{Verstraelen} and \cite{Vrancken}. In this paper, considering the affine normal plane bundle, we give an equiaffine characterization of Lagrangian surfaces. The results can be compared with \cite{Morvan87}, where a characterization of Lagrangian surfaces is given in terms of Euclidean invariants of the surface.

Consider the affine $4$-space $\mathbb{R}^4$ with the standard connection $D$ and a parallel volume form $[\cdot,\cdot,\cdot,\cdot]$. Let $M\subset\mathbb{R}^4$ be a surface with a non-degenerate {\it Burstin-Mayer metric} $g$ (\cite{Burstin27}).
For a definite metric $g$ we write $\epsilon=1$, while for an indefinite metric $g$ we write $\epsilon=-1$. For a given transversal plane bundle $\sigma$ and tangent vector fields $X,Y$, write
\begin{equation}
D_XY=\nabla_XY+h(X,Y),
\end{equation}
where $\nabla_XY$ is tangent to $M$ and $h(X,Y)\in\sigma$. Then $\nabla$ is a torsion-free affine connection and $h$ is a symmetric bilinear form. For local vector fields $\{\xi_1,\xi_2\}$ defining a basis of $\sigma$, define the symmetric bilinear forms $h^1$ and $h^2$ by
\begin{equation}
h(X,Y)=h^1(X,Y)\xi_1+h^2(X,Y)\xi_2.
\end{equation}
Let $\{X_1,X_2\}$ be a local $g$-orthonormal tangent frame, i.e., $g(X_1,X_1)=\epsilon$, $g(X_1,X_2)=0$, $g(X_2,X_2)=1$. For an arbitrary transversal plane bundle $\sigma$, it is proved in \cite{Nomizu93} that there exists a unique local basis $\{\xi_1,\xi_2\}$ of $\sigma$ such that $[X_1,X_2,\xi_1,\xi_2]=1$ and
\begin{equation}\label{eq:Normalxi}
\begin{array}{c}
h^1(X_1,X_1)=1,\\
h^1(X_1,X_2)=0,\\
h^1(X_2,X_2)=-\epsilon,
\end{array}
\ \ \ \
\begin{array}{c}
h^2(X_1,X_1)=0,\\
h^2(X_1,X_2)=1,\\
h^2(X_2,X_2)=0.
\end{array}
\end{equation}
There are some transversal plane bundles $\sigma$ with distinguished properties, and we shall consider here the {\it affine normal plane bundle} proposed in \cite{Nomizu93}. Assuming that $M$ is Lagrangian with respect to a parallel symplectic form $\Omega$, we shall verify the following remarkable facts:
(1) the affine normal plane bundle is $\Omega$-Lagrangian;
(2) $\Omega\wedge\Omega=c[\cdot,\cdot,\cdot,\cdot]$, for some constant $c$;
(3) $\Omega(X_1,\xi_2)-\Omega(X_2,\xi_1)=0$ and $\Omega(X_1,\xi_1)+\epsilon\Omega(X_2,\xi_2)=0$.
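Fact (2) admits a one-line justification, which we sketch here for the reader's convenience (an added remark, implicit in the discussion below):

```latex
% Both \Omega and the volume form are parallel with respect to the flat
% connection D, so the 4-form \Omega\wedge\Omega is parallel as well; on
% \mathbb{R}^4 every parallel 4-form is a constant multiple of the
% parallel volume form:
\[
  D\Omega=0 \ \Longrightarrow\ D(\Omega\wedge\Omega)=0
  \ \Longrightarrow\ \Omega\wedge\Omega=c\,[\cdot,\cdot,\cdot,\cdot],
\]
% with c constant, and c\neq 0 precisely because \Omega is non-degenerate.
```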
Based on these facts, we shall describe the equiaffine conditions for a surface to be Lagrangian with respect to a parallel symplectic form.

Given a transversal bundle $\sigma$ and a local basis $\{\xi_1,\xi_2\}$, define the $1$-forms $\tau_i^j$, $i=1,2$, $j=1,2$, and the shape operators $S_i$ by
\begin{equation}\label{eq:shape}
D_X\xi_i=-S_iX+\tau_i^1(X)\xi_1+\tau_i^2(X)\xi_2,
\end{equation}
where $S_iX$ is in the tangent space. Writing
\begin{equation}\label{eq:definelambda}
S_iX_j=\lambda_{ij}^1X_1+\lambda_{ij}^2X_2,
\end{equation}
define
\begin{equation}\label{eq:defineL}
\begin{array}{c}
L_{11}=\lambda_{11}^1-\lambda_{21}^2;\ \ L_{12}=-\epsilon\lambda_{11}^2-\lambda_{21}^1\\
L_{21}=\lambda_{12}^1-\lambda_{22}^2;\ \ L_{22}=-\epsilon\lambda_{12}^2-\lambda_{22}^1,
\end{array}
\end{equation}
and the $2\times 2$ matrix
\begin{equation*}
L=\left[
\begin{array}{cc}
L_{11} & L_{12}\\
L_{21} & L_{22}
\end{array}
\right].
\end{equation*}
We shall verify that the rank of $L$ is independent of the choice of the $g$-orthonormal local frame $\{X_1,X_2\}$.

Consider the cubic forms $C^i$, $i=1,2$, given by
\begin{equation}\label{eq:defineCubic}
C^i(X,Y,Z)=\nabla_Xh^{i}(Y,Z)+\tau_1^{i}(X)h^1(Y,Z)+\tau_2^{i}(X)h^2(Y,Z),
\end{equation}
and define
\begin{equation}\label{eq:defineF}
\begin{array}{c}
F_{11}=3C^1(X_1,X_1,X_2)-\epsilon C^1(X_2,X_2,X_2)\\
F_{12}=\epsilon C^1(X_1,X_1,X_1)-3C^1(X_1,X_2,X_2)\\
F_{21}=3C^2(X_1,X_1,X_2)-\epsilon C^2(X_2,X_2,X_2)\\
F_{22}=\epsilon C^2(X_1,X_1,X_1)-3C^2(X_1,X_2,X_2).
\end{array}
\end{equation}
We shall verify that the rank of the matrix
\begin{equation*}
F=\left[
\begin{array}{cc}
F_{11} & F_{12}\\
F_{21} & F_{22}
\end{array}
\right]
\end{equation*}
is also independent of the choice of the local $g$-orthonormal tangent frame $\{X_1,X_2\}$. In fact, we shall prove that the rank of the $2\times 4$ matrix
$$
H=\left[\ L\ | \ F\ \right]
$$
is independent of the choice of the local frame.

In case $rank(H)=1$, denote by $[A,B]^t$ a column-vector in the kernel of $H$ and let $\eta=\tan^{-1}(B/A)$, if $\epsilon=1$, and $\eta=\tanh^{-1}(B/A)$, if $\epsilon=-1$. Define
\begin{equation}\label{eq:defineG}
\begin{array}{c}
G_1=\Gamma_{22}^2-\epsilon\Gamma_{11}^2+\tau_1^1(X_2)-\epsilon\tau_1^2(X_1)\\
G_2=\Gamma_{11}^1-\epsilon\Gamma_{22}^1-\tau_2^2(X_1)+\tau_1^2(X_2),
\end{array}
\end{equation}
where
\begin{equation}\label{eq:defineChristoffel}
\nabla_{X_i}X_j=\Gamma_{ij}^1X_1+\Gamma_{ij}^2X_2.
\end{equation}
We shall verify that, for the affine normal plane bundle, the conditions
\begin{equation}\label{eq:DerivEta}
\begin{array}{c}
d\eta(X_1)+\epsilon G_1=0\\
d\eta(X_2)-G_2=0
\end{array}
\end{equation}
are independent of the choice of the local frame. Our main theorem is the following:

\begin{thm}\label{thm1}
Given a surface $M\subset\mathbb{R}^4$, consider a local tangent frame $\{X_1,X_2\}$ and a local basis $\{\xi_1,\xi_2\}$ of the affine normal plane bundle $\sigma$ satisfying equations \eqref{eq:Normalxi}.
\begin{enumerate}
\item Assume that there exists a parallel symplectic form $\Omega$ such that $M$ is $\Omega$-Lagrangian.
Then the affine normal plane bundle is $\Omega$-Lagrangian, $\Omega\wedge\Omega=c[\cdot,\cdot,\cdot,\cdot]$ for some constant $c$, and $[A,B]^t$ belongs to the kernel of $H$, where $A=\Omega(X_1,\xi_2)=\Omega(X_2,\xi_1)$ and $B=\Omega(X_1,\xi_1)=-\epsilon\Omega(X_2,\xi_2)$. Moreover, $\eta$ satisfies equations \eqref{eq:DerivEta}.
\item If $rank(H)=1$ and $ker(H)$ satisfies equations \eqref{eq:DerivEta}, then there exists a parallel symplectic form $\Omega$ such that $M$ is $\Omega$-Lagrangian.
\end{enumerate}
\end{thm}

In order to complete the picture, it remains to consider what occurs under the hypothesis $H=0$. It is proved in \cite{Nomizu93} that, under the weaker hypothesis $F=0$, $M$ must be a complex curve, if the metric $g$ is definite, or a product of planar curves, if $g$ is indefinite. In either case, it is well known that there are two linearly independent parallel symplectic forms with respect to which $M$ is Lagrangian (see \cite{Craizer14}, \cite{Martinez05}). Thus we can write the following:

\begin{corollary}
A surface $M^2\subset\mathbb{R}^4$ is Lagrangian with respect to a parallel symplectic form if and only if either $rank(H)=1$ and equations \eqref{eq:DerivEta} hold, or $rank(H)=0$.
\end{corollary}

The paper is organized as follows: in section 2 we describe the equiaffine invariants of a surface in $\mathbb{R}^4$, showing that $rank(H)$ is independent of the choice of the local frame. In section 3, we give a characterization of the affine normal bundle in terms of the cubic forms and show that equations \eqref{eq:DerivEta} are independent of the choice of the local frame. In section 4 we prove the main theorem.

\section{Shape Operators and Cubic Forms}

\subsection{The affine metric and local frames}

We begin by recalling the definition of the affine metric $g$ of a surface $M\subset\mathbb{R}^4$ (\cite{Burstin27},\cite{Nomizu93}).
For a local frame $u=\{X_1,X_2\}$ of the tangent plane, let
$$
G_u(Y,Z)=\frac{1}{2}\left( [X_1,X_2,D_YX_1,D_ZX_2]+ [X_1,X_2,D_ZX_1,D_YX_2] \right).
$$
Denoting
$$
\Delta(u)=G_u(X_1,X_1)G_u(X_2,X_2)-G_u(X_1,X_2)^2,
$$
one can verify that the condition $\Delta(u)\neq 0$ is independent of the choice of the basis $u$. When this condition holds, we say that the surface is {\it non-degenerate}. Throughout this paper, we shall always assume that the surface $M$ is non-degenerate.

For a non-degenerate surface, define
$$
g(Y,Z)=\frac{1}{\Delta(u)^{1/3}}G_u(Y,Z).
$$
Then $g$ is independent of $u$ and is called the {\it affine metric} of the surface.

Consider a $g$-orthonormal local frame $\{X_1,X_2\}$ of $M$. Any other $g$-orthonormal local frame $\{Y_1,Y_2\}$ is related to $\{X_1,X_2\}$ by
\begin{equation}\label{eq:ChangeFrame1}
\begin{array}{c}
Y_1=\cos(\theta)X_1+\sin(\theta)X_2\\
Y_2=-\sin(\theta)X_1+\cos(\theta)X_2,
\end{array}
\end{equation}
for $\epsilon=1$, and
\begin{equation}\label{eq:ChangeFrame2}
\begin{array}{c}
Y_1=\cosh(\theta)X_1+\sinh(\theta)X_2\\
Y_2=\sinh(\theta)X_1+\cosh(\theta)X_2,
\end{array}
\end{equation}
for $\epsilon=-1$, for some $\theta$.
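As a quick consistency check (added here for the reader), one verifies directly that \eqref{eq:ChangeFrame2} preserves $g$-orthonormality in the indefinite case $\epsilon=-1$:

```latex
% Using g(X_1,X_1)=\epsilon=-1, g(X_1,X_2)=0, g(X_2,X_2)=1:
\[
  g(Y_1,Y_1)=-\cosh^2(\theta)+\sinh^2(\theta)=-1=\epsilon, \qquad
  g(Y_1,Y_2)=-\cosh(\theta)\sinh(\theta)+\sinh(\theta)\cosh(\theta)=0,
\]
\[
  g(Y_2,Y_2)=-\sinh^2(\theta)+\cosh^2(\theta)=1,
\]
% and similarly the rotation \eqref{eq:ChangeFrame1} preserves
% orthonormality in the definite case \epsilon=1.
```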
It is verified in \cite{Nomizu93}, lemmas 4.1 and 4.2, that the corresponding local frame $\{\overline{\xi}_1,\overline{\xi}_2\}$ of $\sigma$ satisfying \eqref{eq:Normalxi} is given by
\begin{equation}\label{eq:Changexi1}
\begin{array}{c}
\overline{\xi}_1=\cos(2\theta)\xi_1+\sin(2\theta)\xi_2\\
\overline{\xi}_2=-\sin(2\theta)\xi_1+\cos(2\theta)\xi_2,
\end{array}
\end{equation}
for $\epsilon=1$, and
\begin{equation}\label{eq:Changexi2}
\begin{array}{c}
\overline{\xi}_1=\cosh(2\theta)\xi_1+\sinh(2\theta)\xi_2\\
\overline{\xi}_2=\sinh(2\theta)\xi_1+\cosh(2\theta)\xi_2,
\end{array}
\end{equation}
for $\epsilon=-1$.

\subsection{Shape operators}

The shape operators $S_1$ and $S_2$ are defined by equation \eqref{eq:shape} and their components $\lambda_{ij}^k$ by \eqref{eq:definelambda}. In this section we show how the matrix $L$ defined by \eqref{eq:defineL} changes under a change of the $g$-orthonormal local frame $\{X_1,X_2\}$. In order to have a more compact notation, consider the matrices $R_{\epsilon}$, $\epsilon=\pm 1$, given by
$$
R_{1}(\theta)=
\left[
\begin{array}{cc}
\cos(\theta) & \sin(\theta)\\
-\sin(\theta) & \cos(\theta)
\end{array}
\right];\ \
R_{-1}(\theta)=
\left[
\begin{array}{cc}
\cosh(\theta) & \sinh(\theta)\\
\sinh(\theta) & \cosh(\theta)
\end{array}
\right].
$$

\begin{lemma}\label{lemma:ChangeL}
Denote by $\overline{L}$ the matrix $L$ associated with the local frame $\{Y_1,Y_2\}$ defined by \eqref{eq:ChangeFrame1} and \eqref{eq:ChangeFrame2}.
Then
\begin{equation}\label{eq:ChangeL}
\overline{L}=R_{\epsilon}(\theta)LR_{\epsilon}(3\epsilon\theta).
\end{equation}
\end{lemma}

\begin{proof}
The proof consists of long but straightforward calculations. For example, in case $\epsilon=-1$, we can calculate the first row of $\overline{L}$ as follows. From equation \eqref{eq:Changexi2} we have that
\begin{equation*}
\begin{array}{c}
\overline{S}_1(Y_1)=\cosh(\theta)\cosh(2\theta)S_1(X_1)+\cosh(\theta)\sinh(2\theta)S_2(X_1)+\\
+\sinh(\theta)\cosh(2\theta)S_1(X_2)+\sinh(\theta)\sinh(2\theta)S_2(X_2)
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{c}
\overline{S}_2(Y_1)=\cosh(\theta)\sinh(2\theta)S_1(X_1)+\cosh(\theta)\cosh(2\theta)S_2(X_1)+\\
+\sinh(\theta)\sinh(2\theta)S_1(X_2)+\sinh(\theta)\cosh(2\theta)S_2(X_2).
\end{array}
\end{equation*}
Now using again equation \eqref{eq:ChangeFrame2} and comparing the coefficients, we obtain after some calculations
\begin{equation*}
\begin{array}{c}
\overline{L}_{11}=\cosh(\theta)\cosh(3\theta)L_{11}-\cosh(\theta)\sinh(3\theta)L_{12}+\\
+\sinh(\theta)\cosh(3\theta)L_{21}-\sinh(\theta)\sinh(3\theta)L_{22}
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{c}
\overline{L}_{12}=-\cosh(\theta)\sinh(3\theta)L_{11}+\cosh(\theta)\cosh(3\theta)L_{12}-\\
-\sinh(\theta)\sinh(3\theta)L_{21}+\sinh(\theta)\cosh(3\theta)L_{22},
\end{array}
\end{equation*}
which agree with equation \eqref{eq:ChangeL}.
\end{proof}

\subsection{Cubic forms}

Consider the cubic forms $C^1$ and $C^2$ defined by equation \eqref{eq:defineCubic} and the matrix $F$ whose entries are defined by equations \eqref{eq:defineF}.

\begin{lemma}\label{lemma:ChangeF}
Denote by $\overline{F}$ the matrix $F$ associated with the local frame $\{Y_1,Y_2\}$ defined by \eqref{eq:ChangeFrame1} and \eqref{eq:ChangeFrame2}. Then
\begin{equation}\label{eq:MudF}
\overline{F}=R_{\epsilon}(2\epsilon\theta)FR_{\epsilon}(3\epsilon\theta).
\end{equation}
\end{lemma}

\begin{proof}
We give a proof in case $\epsilon=1$, the case $\epsilon=-1$ being similar. Using complex notation, observe that
$$
C^1(X_1+iX_2, X_1+iX_2,X_1+iX_2)=F_{12}+iF_{11};
$$
$$
C^2(X_1+iX_2, X_1+iX_2,X_1+iX_2)=F_{22}+iF_{21}.
$$
By lemma 6.2 of \cite{Nomizu93},
$$
e^{3i\theta}\overline{C}^1(Y_1+iY_2,Y_1+iY_2,Y_1+iY_2)=
$$
$$
\cos(2\theta)C^1(X_1+iX_2, X_1+iX_2,X_1+iX_2)+\sin(2\theta)C^2(X_1+iX_2, X_1+iX_2,X_1+iX_2),
$$
$$
e^{3i\theta}\overline{C}^2(Y_1+iY_2,Y_1+iY_2,Y_1+iY_2)=
$$
$$
-\sin(2\theta)C^1(X_1+iX_2, X_1+iX_2,X_1+iX_2)+\cos(2\theta)C^2(X_1+iX_2, X_1+iX_2,X_1+iX_2).
$$
Thus
$$
\begin{array}{c}
\overline{F}_{12}+i\overline{F}_{11}=e^{-3i\theta}
\left[ \cos(2\theta)(F_{12}+iF_{11})+\sin(2\theta)(F_{22}+iF_{21}) \right]\\
\overline{F}_{22}+i\overline{F}_{21}=e^{-3i\theta}
\left[ -\sin(2\theta)(F_{12}+iF_{11})+\cos(2\theta)(F_{22}+iF_{21}) \right],
\end{array}
$$
which can be written as in equation \eqref{eq:MudF}.
\end{proof}

Now we can prove the following lemma:

\begin{lemma}\label{lemma:rankH}
The rank of $H$ is independent of the choice of the local frame $\{X_1,X_2\}$. Moreover, if $rank(H)=1$, then $\overline{\eta}=\eta+3\theta$.
\end{lemma}

\begin{proof}
By lemmas \ref{lemma:ChangeL} and \ref{lemma:ChangeF}, the column-vector $[\overline{A},\overline{B}]^t$ belongs to the kernel of $\overline{H}$ if and only if $[A,B]^t$ belongs to the kernel of $H$, where $[\overline{A},\overline{B}]^t=R_{\epsilon}(-3\epsilon\theta)[A,B]^t$, which implies the invariance of $rank(H)$. In case $\epsilon=1$, we have that
\begin{equation*}
\tan(\eta+3\theta)=\frac{\sin(\eta)\cos(3\theta)+\cos(\eta)\sin(3\theta)}{\cos(\eta)\cos(3\theta)-\sin(\eta)\sin(3\theta)}=\frac{B\cos(3\theta)+A\sin(3\theta)}{A\cos(3\theta)-B\sin(3\theta)}=\frac{\overline{B}}{\overline{A}},
\end{equation*}
thus proving that $\overline{\eta}=\eta+3\theta$. Similarly, in case $\epsilon=-1$,
\begin{equation*}
\tanh(\eta+3\theta)=\frac{B\cosh(3\theta)+A\sinh(3\theta)}{A\cosh(3\theta)+B\sinh(3\theta)}=\frac{\overline{B}}{\overline{A}},
\end{equation*}
again proving that $\overline{\eta}=\eta+3\theta$.
\end{proof}

\subsection{Some formulas}

For further reference, we write some formulas that hold for any transversal bundle $\sigma$.
The symmetry conditions on the cubic forms imply that
\begin{equation}\label{eq:SymmetryC1}
\begin{array}{c}
2\Gamma_{22}^2+\tau_1^1(X_2)=-\Gamma_{12}^1+\epsilon\Gamma_{11}^2+\tau_2^1(X_1)\\
-2\epsilon\Gamma_{11}^1-\epsilon\tau_1^1(X_1)=\epsilon\Gamma_{21}^2-\Gamma_{22}^1+\tau_2^1(X_2)
\end{array}
\end{equation}
and
\begin{equation}\label{eq:SymmetryC2}
\begin{array}{c}
-2\Gamma_{12}^1-\epsilon\tau_1^2(X_1)=\tau_2^2(X_2)\\
-2\Gamma_{21}^2+\tau_1^2(X_2)=\tau_2^2(X_1).
\end{array}
\end{equation}
On the other hand, the condition $[X_1,X_2,\xi_1,\xi_2]=1$ implies that
\begin{equation}\label{eq:Det1}
\begin{array}{c}
\Gamma_{11}^1+\Gamma_{12}^2+\tau_1^1(X_1)+\tau_2^2(X_1)=0\\
\Gamma_{21}^1+\Gamma_{22}^2+\tau_1^1(X_2)+\tau_2^2(X_2)=0
\end{array}
\end{equation}
(see \cite{Nomizu93}).

\section{The affine normal plane bundle}

\subsection{Definition and some relations}

Consider a $g$-orthonormal local frame $\{X_1,X_2\}$ of the tangent bundle.
We say that a transversal bundle $\sigma$ is equiaffine if
\begin{equation*}
\begin{array}{c}
\epsilon\nabla(g)(X_1,X_1,X_1)+\nabla(g)(X_1,X_2,X_2)=0\\
\epsilon\nabla(g)(X_2,X_1,X_1)+\nabla(g)(X_2,X_2,X_2)=0.
\end{array}
\end{equation*}
The affine normal plane bundle is an equiaffine bundle $\sigma$ satisfying
\begin{equation*}
\begin{array}{c}
\nabla(g)(X_2,X_1,X_1)+\nabla(g)(X_1,X_2,X_1)=0\\
\nabla(g)(X_1,X_2,X_2)+\nabla(g)(X_2,X_1,X_2)=0.
\end{array}
\end{equation*}
Lemma 7.3 of \cite{Nomizu93} says that the affine normal plane bundle is characterized by the conditions
\begin{equation}\label{eq:AffineBundle1}
\Gamma_{12}^2=-\Gamma_{11}^1;\ \Gamma_{21}^1=-\Gamma_{22}^2
\end{equation}
and
\begin{equation}\label{eq:AffineBundle2}
2\Gamma_{11}^1=\Gamma_{21}^2+\epsilon\Gamma_{22}^1;\ 2\Gamma_{22}^2=\Gamma_{12}^1+\epsilon\Gamma_{11}^2.
\end{equation}
As a consequence of equations \eqref{eq:Det1} and \eqref{eq:AffineBundle1} we obtain
\begin{equation}\label{eq:SumTau}
\tau_1^1+\tau_2^2=0.
\end{equation}
It is proved in \cite{Nomizu93} that a non-degenerate immersion admits a unique affine normal bundle.

\subsection{Characterization of the affine normal bundle in terms of the cubic forms}

Define
\begin{equation*}\label{eq:E1E2}
\begin{array}{c}
E_1=\epsilon C^1(X_1,X_1,X_1)+C^1(X_1,X_2,X_2)-\epsilon C^2(X_1,X_1,X_2)-C^2(X_2,X_2,X_2)\\
E_2=\epsilon C^1(X_1,X_1,X_2)+C^1(X_2,X_2,X_2)+\epsilon C^2(X_1,X_2,X_2)+C^2(X_1,X_1,X_1)\\
E_3=3C^1(X_1,X_1,X_1)-\epsilon C^1(X_1,X_2,X_2)+3C^2(X_1,X_1,X_2)-\epsilon C^2(X_2,X_2,X_2)\\
E_4=C^1(X_1,X_1,X_2)-3\epsilon C^1(X_2,X_2,X_2)+3C^2(X_1,X_2,X_2)-\epsilon C^2(X_1,X_1,X_1).
\end{array}
\end{equation*}

\begin{Proposition}\label{prop:NormalCubic}
$\sigma$ is the affine normal plane bundle if and only if $E_1=E_2=E_3=E_4=0$.
\end{Proposition}

\begin{proof}
For a general transversal bundle $\sigma$, the components of the cubic forms are given by
\begin{equation}\label{eq:CubicFormulas}
\begin{array}{c}
C^1(X_1,X_1,X_1)=-2\Gamma_{11}^1+\tau_1^1(X_1),\\
C^1(X_1,X_1,X_2)=-2\Gamma_{21}^1+\tau_1^1(X_2),\\
C^1(X_1,X_2,X_2)=2\epsilon\Gamma_{12}^2-\epsilon\tau_1^1(X_1),\\
C^1(X_2,X_2,X_2)=2\epsilon\Gamma_{22}^2-\epsilon\tau_1^1(X_2),
\end{array}
\begin{array}{c}
C^2(X_1,X_1,X_1)=-2\Gamma_{11}^2+\tau_1^2(X_1),\\
C^2(X_2,X_1,X_1)=-2\Gamma_{21}^2+\tau_1^2(X_2),\\
C^2(X_1,X_2,X_2)=-2\Gamma_{12}^1-\epsilon\tau_1^2(X_1),\\
C^2(X_2,X_2,X_2)=-2\Gamma_{22}^1-\epsilon\tau_1^2(X_2).
\end{array}
\end{equation}
Assuming that $\sigma$ is the affine normal plane bundle, equations \eqref{eq:AffineBundle1} and \eqref{eq:AffineBundle2} easily imply that $E_1=E_2=0$. Moreover, it is not difficult to verify that equations \eqref{eq:AffineBundle1} and \eqref{eq:AffineBundle2}, together with equations \eqref{eq:SymmetryC1}, \eqref{eq:SymmetryC2} and \eqref{eq:Det1}, imply that $E_3=E_4=0$.

Assume now that $E_1=E_2=E_3=E_4=0$. Then we can write
\begin{equation*}
\begin{array}{c}
-\Gamma_{11}^1+\Gamma_{12}^2+\Gamma_{21}^2+\epsilon\Gamma_{22}^1=0\\
-3\Gamma_{11}^1-\Gamma_{12}^2-3\Gamma_{21}^2+\epsilon\Gamma_{22}^1=-2(\tau_1^1(X_1)+\tau_1^2(X_2))
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{c}
-\Gamma_{21}^1+\Gamma_{22}^2-\epsilon\Gamma_{11}^2-\Gamma_{12}^1=0\\
\epsilon\Gamma_{11}^2-3\Gamma_{12}^1-\Gamma_{21}^1-3\Gamma_{22}^2=-2(\tau_1^1(X_2)-\epsilon\tau_1^2(X_1)).
\end{array}
\end{equation*}
By using equations \eqref{eq:SymmetryC2} we obtain
\begin{equation*}
\begin{array}{c}
-3\Gamma_{11}^1-\Gamma_{12}^2+\Gamma_{21}^2+\epsilon\Gamma_{22}^1=-2(\tau_1^1+\tau_2^2)(X_1)\\
\Gamma_{11}^1+\Gamma_{12}^2=(\tau_1^1+\tau_2^2)(X_1),
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{c}
-3\Gamma_{22}^2-\Gamma_{21}^1+\Gamma_{12}^1+\epsilon\Gamma_{11}^2=-2(\tau_1^1+\tau_2^2)(X_2)\\
\Gamma_{22}^2+\Gamma_{21}^1=(\tau_1^1+\tau_2^2)(X_2).
\end{array}
\end{equation*}
Now we use equations \eqref{eq:Det1} to conclude that equations \eqref{eq:AffineBundle1} and \eqref{eq:AffineBundle2} hold, which proves that $\sigma$ is the affine normal plane bundle.
\end{proof}

\begin{remark}
There is another choice of the transversal bundle $\sigma$, introduced by Klingenberg (\cite{Klingenberg51}), that is characterized by four conditions involving the cubic forms $C^1$ and $C^2$ (see lemma 6.1 of \cite{Nomizu93}). Two of these conditions are $E_1=E_2=0$.
\end{remark}

When we choose the affine normal bundle as the transversal bundle $\sigma$, the entries $F_{ij}$ of the matrix $F$ assume a remarkably simple form.
\begin{Proposition}
For the affine normal plane bundle,
\begin{equation}\label{eq:FNormal}
F=\left[
\begin{array}{cc}
F_{11}& F_{12}\\
F_{21}& F_{22}
\end{array}
\right]=4\left[
\begin{array}{cc}
\Gamma_{22}^2+\tau_1^1(X_2) & \epsilon\Gamma_{11}^1+\epsilon\tau_1^1(X_1) \\
\Gamma_{11}^1-\tau_1^1(X_1) & -\Gamma_{22}^2+\tau_1^1(X_2)
\end{array}
\right].
\end{equation}
\end{Proposition}

\begin{proof}
We shall check these formulas for $F_{12}$, the other cases being similar. From equations \eqref{eq:CubicFormulas}, we have
$$
F_{12}=\epsilon C^1(X_1,X_1,X_1)-3C^1(X_1,X_2,X_2)=
$$
$$
=\epsilon \left[-2\Gamma_{11}^1+\tau_1^1(X_1) \right]-3\epsilon\left[ 2\Gamma_{12}^2 -\tau_1^1(X_1) \right]=4\epsilon\left[ \Gamma_{11}^1+\tau_1^1(X_1) \right],
$$
where in the last equality we have used equations \eqref{eq:AffineBundle1}.
\end{proof}

\subsection{Invariance of equations \eqref{eq:DerivEta} under the choice of the local frame}

Consider $G_1$ and $G_2$ defined by equations \eqref{eq:defineG}.

\begin{lemma}
When $\sigma$ is the affine normal plane bundle, we can write
\begin{equation}
\begin{array}{c}
G_1=5\Gamma_{22}^2-3\epsilon\Gamma_{11}^2\\
G_2=5\Gamma_{11}^1-3\epsilon\Gamma_{22}^1.
\end{array}
\end{equation}
\end{lemma}

\begin{proof}
We shall prove the above formula for $G_1$; the proof for $G_2$ is similar. We have
$$
G_1=\Gamma_{22}^2-\epsilon\Gamma_{11}^2+\tau_1^1(X_2)-\epsilon\tau_1^2(X_1)=\Gamma_{22}^2-\epsilon\Gamma_{11}^2+2\Gamma_{12}^1,
$$
where we have used formulas \eqref{eq:SymmetryC2} and \eqref{eq:SumTau}. Now using equations \eqref{eq:AffineBundle2}, we obtain the desired formula.
\end{proof}

\begin{lemma}
We have that
\begin{equation}
\left[
\begin{array}{c}
\overline{G}_1\\
\overline{G}_2
\end{array}
\right]
=R_{\epsilon}(-\epsilon\theta)
\left[
\begin{array}{c}
G_1\\
G_2
\end{array}
\right]
+3
\left[
\begin{array}{cc}
-\epsilon & 0 \\
0 & 1
\end{array}
\right]
R_{\epsilon}(\theta)
\left[
\begin{array}{c}
d\theta(X_1)\\
d\theta(X_2)
\end{array}
\right].
\end{equation}
\end{lemma}

\begin{proof}
We consider the case $\epsilon=1$, the case $\epsilon=-1$ being similar. From equations \eqref{eq:ChangeFrame1} we obtain
\begin{equation*}
\begin{array}{c}
\nabla_{Y_1}Y_1=\cos^2(\theta)\nabla_{X_1}X_1+\sin(\theta)\cos(\theta)(\nabla_{X_1}X_2+\nabla_{X_2}X_1)+\sin^2(\theta)\nabla_{X_2}X_2\\
+\left[ d\theta(X_1)\cos(\theta)+d\theta(X_2)\sin(\theta) \right]Y_2
\end{array}
\end{equation*}
and
\begin{equation*}
\begin{array}{c}
\nabla_{Y_2}Y_2=\sin^2(\theta)\nabla_{X_1}X_1-\sin(\theta)\cos(\theta)(\nabla_{X_1}X_2+\nabla_{X_2}X_1)+\cos^2(\theta)\nabla_{X_2}X_2\\
-\left[ -d\theta(X_1)\sin(\theta)+d\theta(X_2)\cos(\theta) \right]Y_1.
\end{array}
\end{equation*}
Using again equations \eqref{eq:ChangeFrame1} we obtain
$$
\overline{G}_1=-3\left(\cos(\theta)d\theta(X_1)+\sin(\theta)d\theta(X_2)\right)+\cos^3(\theta)G_1-\sin^3(\theta)G_2+
$$
$$
+5\left[\sin^2(\theta)\cos(\theta)(\Gamma_{11}^2+\Gamma_{12}^1+\Gamma_{21}^1)-\sin(\theta)\cos^2(\theta)(\Gamma_{22}^1+\Gamma_{12}^2+\Gamma_{21}^2) \right]
$$
$$
-3\left[\sin^2(\theta)\cos(\theta)(\Gamma_{22}^2-\Gamma_{12}^1-\Gamma_{21}^1)+\sin(\theta)\cos^2(\theta)(\Gamma_{21}^2+\Gamma_{12}^2-\Gamma_{11}^1) \right].
$$
Using now equations \eqref{eq:AffineBundle1} and \eqref{eq:AffineBundle2} we obtain
\begin{equation*}
\overline{G}_1=\cos(\theta)G_1-\sin(\theta)G_2-3(\cos(\theta)d\theta(X_1)+\sin(\theta)d\theta(X_2)).
\end{equation*}
Similar calculations lead to
\begin{equation*}
\overline{G}_2=\sin(\theta)G_1+\cos(\theta)G_2+3(-\sin(\theta)d\theta(X_1)+\cos(\theta)d\theta(X_2)),
\end{equation*}
thus proving the lemma.
\end{proof}

\begin{corollary}
We have that
\begin{equation}
\left[
\begin{array}{c}
d\overline\eta(Y_1)+\epsilon \overline{G}_1\\
d\overline\eta(Y_2)-\overline{G}_2
\end{array}
\right]
=R_{\epsilon}(\theta)\cdot\left[
\begin{array}{c}
d\eta(X_1) +\epsilon G_1 \\
d\eta(X_2) -G_2
\end{array}
\right].
\end{equation}
\end{corollary}

\begin{proof}
By lemma \ref{lemma:rankH}, $\overline{\eta}=\eta+3\theta$.
Thus, if $\epsilon=1$,
\begin{equation*}
\begin{array}{c}
d\overline{\eta}(Y_1)=\cos(\theta)d\eta(X_1)+\sin(\theta)d\eta(X_2)+3(\cos(\theta)d\theta(X_1)+\sin(\theta)d\theta(X_2))\\
d\overline{\eta}(Y_2)=-\sin(\theta)d\eta(X_1)+\cos(\theta)d\eta(X_2)+3(-\sin(\theta)d\theta(X_1)+\cos(\theta)d\theta(X_2)),
\end{array}
\end{equation*}
which implies that
\begin{equation*}
\begin{array}{c}
d\overline{\eta}(Y_1)+\overline{G}_1=\cos(\theta)(d\eta(X_1)+G_1)+\sin(\theta)(d\eta(X_2)-G_2)\\
d\overline{\eta}(Y_2)-\overline{G}_2=-\sin(\theta)(d\eta(X_1)+G_1)+\cos(\theta)(d\eta(X_2)-G_2).
\end{array}
\end{equation*}
If $\epsilon=-1$,
\begin{equation*}
\begin{array}{c}
d\overline{\eta}(Y_1)=\cosh(\theta)d\eta(X_1)+\sinh(\theta)d\eta(X_2)+3(\cosh(\theta)d\theta(X_1)+\sinh(\theta)d\theta(X_2))\\
d\overline{\eta}(Y_2)=\sinh(\theta)d\eta(X_1)+\cosh(\theta)d\eta(X_2)+3(\sinh(\theta)d\theta(X_1)+\cosh(\theta)d\theta(X_2)),
\end{array}
\end{equation*}
implying that
\begin{equation*}
\begin{array}{c}
d\overline{\eta}(Y_1)-\overline{G}_1=\cosh(\theta)(d\eta(X_1)-G_1)+\sinh(\theta)(d\eta(X_2)-G_2)\\
d\overline{\eta}(Y_2)-\overline{G}_2=\sinh(\theta)(d\eta(X_1)-G_1)+\cosh(\theta)(d\eta(X_2)-G_2),
\end{array}
\end{equation*}
thus proving the corollary.
\end{proof}

\section{Proof of the Main Theorem}

We begin with the following lemma:

\begin{lemma}\label{lemma:Proof}
The system of equations
\begin{equation}\label{eq:Equiv1}
dA(X_1)=G_1B;\ dA(X_2)=-\epsilon G_2B;\ dB(X_1)=-\epsilon G_1A;\ dB(X_2)=G_2A
\end{equation}
is equivalent to
\begin{equation}\label{eq:Equiv2}
A^2+\epsilon B^2=c;\ d\eta(X_1)=-\epsilon G_1;\ d\eta(X_2)=G_2,
\end{equation}
for some constant $c$, where $\tan(\eta)=\frac{B}{A}$, if $\epsilon=1$, and $\tanh(\eta)=\frac{B}{A}$, if $\epsilon=-1$.
\end{lemma}

\begin{proof}
If we assume that equations \eqref{eq:Equiv1} hold, then
\begin{equation*}
AdA(X_1)+\epsilon BdB(X_1)=0; \ \ AdA(X_2)+\epsilon BdB(X_2)=0,
\end{equation*}
which implies $A^2+\epsilon B^2=c$, for some constant $c\neq 0$, and
\begin{equation*}
d\eta(X_1)=-\epsilon G_1;\ \ d\eta(X_2)=G_2.
\end{equation*}
On the other hand, if equations \eqref{eq:Equiv2} hold, then we can define
\begin{equation*}
\tilde{G}_1=\frac{dA(X_1)}{B}=-\epsilon\frac{dB(X_1)}{A}
\end{equation*}
to obtain
\begin{equation}
d\eta(X_1)=-\epsilon \tilde{G}_1
\end{equation}
and conclude that $\tilde{G}_1=G_1$. In a similar way we show that $dA(X_2)=-\epsilon G_2B$ and $dB(X_2)=G_2A$, which completes the proof of the lemma.
\end{proof}

\paragraph{Proof of theorem \ref{thm1}, part 1:} Assume that $\Omega$ is a parallel symplectic form such that $M$ is $\Omega$-Lagrangian.
Differentiating \begin{equation} \Omega(X_1,X_2)=0 \mathbf{e}nd{equation} with respect to $X_1$ and $X_2$ we obtain \begin{equation*} \begin{array}{c} \Omega(D_{X_1}X_1,X_2)+\Omega(X_1,D_{X_1}X_2)=0\\ \Omega(D_{X_2}X_1,X_2)+\Omega(X_1,D_{X_2}X_2)=0, \mathbf{e}nd{array} \mathbf{e}nd{equation*} which is equivalent to \begin{equation*} \begin{array}{c} \Omega(\mathbf{x}i_1,X_2)+\Omega(X_1,\mathbf{x}i_2)=0\\ \Omega(\mathbf{x}i_2,X_2)+\Omega(X_1,-\mathbf{e}psilon\mathbf{x}i_1)=0, \mathbf{e}nd{array} \mathbf{e}nd{equation*} Write then \begin{equation} \Omega(X_1,\mathbf{x}i_2)=A,\ \Omega(X_2,\mathbf{x}i_1)=A; \ \ \Omega(X_1,\mathbf{x}i_1)=B,\ \Omega(X_2,\mathbf{x}i_2)=-\mathbf{e}psilon B; \mathbf{e}nd{equation} for some functions $A$ and $B$. Differentiating $A$ with respect to $X_1$ in the first two equations we obtain \begin{equation}\label{eq:DA1} \begin{array}{c} dA(X_1)=\left( \Gamma_{11}^1+\mathbf{t}au_2^2(X_1) \right) A+ \left( -\mathbf{e}psilon\Gamma_{11}^2 +\mathbf{t}au_2^1(X_1) \right) B+\Omega(\mathbf{x}i_1,\mathbf{x}i_2); \\ dA(X_1)= \left( \Gamma_{12}^2+\mathbf{t}au_1^1(X_1) \right)A +\left( \Gamma_{12}^1-\mathbf{e}psilon \mathbf{t}au_1^2(X_1) \right)B-\Omega(\mathbf{x}i_1,\mathbf{x}i_2). \mathbf{e}nd{array} \mathbf{e}nd{equation} or equivalently \begin{equation*} \begin{array}{c} (\Gamma_{11}^1+\mathbf{t}au_2^2(X_1)-\Gamma_{12}^2-\mathbf{t}au_1^1(X_1)) A +2\Omega(\mathbf{x}i_1,\mathbf{x}i_2)= (\Gamma_{12}^1-\mathbf{e}psilon\mathbf{t}au_1^2(X_1)+\mathbf{e}psilon\Gamma_{11}^2-\mathbf{t}au_2^1(X_1)) B\\ 2dA(X_1)=(\Gamma_{11}^1+\Gamma_{12}^2+\mathbf{t}au_1^1(X_1)+\mathbf{t}au_2^2(X_1))A+(\Gamma_{12}^1-\mathbf{e}psilon\Gamma_{11}^2+\mathbf{t}au_2^1(X_1)-\mathbf{e}psilon\mathbf{t}au_1^2(X_1))B. 
\end{array} \end{equation*} By using equations \eqref{eq:SymmetryC1}, \eqref{eq:SymmetryC2}, \eqref{eq:Det1}, \eqref{eq:AffineBundle1} and \eqref{eq:AffineBundle2}, we verify that these equations are equivalent to \begin{equation}\label{eq:DA11} F_{21}A+F_{22}B+4\Omega(\xi_1,\xi_2)=0 \end{equation} and \begin{equation}\label{eq:DA12} dA(X_1)=G_1 B. \end{equation} Differentiating $A$ with respect to $X_2$ we obtain \begin{equation}\label{eq:DA2} \begin{array}{c} dA(X_2)=\left( \Gamma_{21}^1+\tau_2^2(X_2) \right) A+ \left( -\epsilon\Gamma_{21}^2 +\tau_2^1(X_2) \right) B; \\ dA(X_2)= \left( \Gamma_{22}^2+\tau_1^1(X_2) \right)A +\left( \Gamma_{22}^1-\epsilon \tau_1^2(X_2) \right)B, \end{array} \end{equation} which are equivalent to \begin{equation}\label{eq:DA21} F_{11} A +F_{12} B=0 \end{equation} and \begin{equation}\label{eq:DA22} dA(X_2)=-\epsilon G_2 B. \end{equation} Now differentiate $B$ with respect to $X_1$ to obtain \begin{equation}\label{eq:DB1} \begin{array}{c} dB(X_1)=\left( \Gamma_{11}^2+\tau_1^2(X_1) \right) A+ \left( \Gamma_{11}^1 +\tau_1^1(X_1) \right) B; \\ -\epsilon dB(X_1)= \left( \Gamma_{12}^1+\tau_2^1(X_1) \right)A -\epsilon\left( \Gamma_{12}^2+ \tau_2^2(X_1) \right)B. \end{array} \end{equation} We can verify that these equations are equivalent to \begin{equation}\label{eq:DB11} F_{11} A + F_{12} B=0 \end{equation} and \begin{equation}\label{eq:DB12} dB(X_1)=-\epsilon G_1 A.
\end{equation} Differentiating $B$ with respect to $X_2$ we get \begin{equation}\label{eq:DB2} \begin{array}{c} dB(X_2)=\left( \Gamma_{21}^2+\tau_1^2(X_2) \right) A+ \left( \Gamma_{21}^1 +\tau_1^1(X_2) \right) B-\Omega(\xi_1,\xi_2); \\ -\epsilon dB(X_2)= \left( \Gamma_{22}^1+\tau_2^1(X_2) \right)A -\epsilon\left( \Gamma_{22}^2+ \tau_2^2(X_2) \right)B-\epsilon\Omega(\xi_1,\xi_2), \end{array} \end{equation} which are equivalent to \begin{equation}\label{eq:DB21} F_{21} A +F_{22} B-4\Omega(\xi_1,\xi_2)=0 \end{equation} and \begin{equation}\label{eq:DB22} dB(X_2)=G_2 A. \end{equation} From equations \eqref{eq:DA11} and \eqref{eq:DB21} we conclude that $\Omega(\xi_1,\xi_2)=0$. It follows that equations \eqref{eq:DA11}, \eqref{eq:DA21}, \eqref{eq:DB11} and \eqref{eq:DB21} reduce to \begin{equation*}\label{eq:KernelF} \begin{array}{c} F_{11}A+F_{12}B=0\\ F_{21}A+F_{22}B=0, \end{array} \end{equation*} which says that $[A,B]^t$ belongs to the kernel of $F$. Differentiating $\Omega(\xi_1,\xi_2)=0$ we obtain \begin{equation} \begin{array}{c} \lambda_{11}^1A-\lambda_{11}^2\epsilon B-\lambda_{21}^1B-\lambda_{21}^2A=0\\ \lambda_{12}^1A-\lambda_{12}^2\epsilon B-\lambda_{22}^1B-\lambda_{22}^2A=0, \end{array} \end{equation} which can be written as \begin{equation*}\label{eq:KernelL} \begin{array}{c} L_{11}A+L_{12}B=0\\ L_{21}A+L_{22}B=0. \end{array} \end{equation*} We conclude that $[A,B]^t$ belongs to the kernel of $L$ and hence $rank(H)<2$.
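To spell out this last step (a short verification; we read $H$ as the matrix obtained by stacking $F$ and $L$, and assume, as elsewhere in this section, that $\{X_1,X_2,\xi_1,\xi_2\}$ is a frame along $S$): since $\Omega$ is nondegenerate and skew-symmetric, the functions $A$ and $B$ cannot vanish simultaneously, for otherwise $\Omega(X_1,\cdot)$ would vanish on the whole frame. Hence
\begin{equation*}
F\left[\begin{array}{c} A\\ B \end{array}\right]=0,\qquad L\left[\begin{array}{c} A\\ B \end{array}\right]=0,\qquad [A,B]^t\neq[0,0]^t,
\end{equation*}
so the two columns of $H$ satisfy a nontrivial linear relation, and therefore $rank(H)\leq 1<2$.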
Finally, by lemma \ref{lemma:Proof}, equations \mathbf{e}qref{eq:DA12}, \mathbf{e}qref{eq:DA22}, \mathbf{e}qref{eq:DB12} and \mathbf{e}qref{eq:DB22} are equivalent to $A^2+\mathbf{e}psilon B^2=c$, for some constant $c$ and to equations \mathbf{e}qref{eq:DerivEta}. Equation $A^2+\mathbf{e}psilon B^2=c$ implies $\Omega\mathbf{w}edge\Omega=c[\cdot,\cdot,\cdot,\cdot]$. \paragraph{Proof of theorem \ref{thm1}, part 2:} Assume that $rank(H)=1$ and equations \mathbf{e}qref{eq:DerivEta} hold. Denote by $[A,B]^t$ a column-vector in $Ker(H)$ satisfying $A^2+\mathbf{e}psilon B^2=c$, for some constant $c\mathbf{n}eq 0$. Define the symplectic form $\Omega$ by the conditions \begin{equation*} \begin{array}{c} \Omega(X_1,X_2)=\Omega(\mathbf{x}i_1,\mathbf{x}i_2)=0\\ \Omega(X_1,\mathbf{x}i_2)=\Omega(X_2,\mathbf{x}i_1)=A\\ \Omega(X_1,\mathbf{x}i_1)=-\mathbf{e}psilon \Omega(X_2,\mathbf{x}i_2)=B \mathbf{e}nd{array} \mathbf{e}nd{equation*} We shall prove that the symplectic form $\Omega$ is parallel. Observe first that \begin{equation*} \begin{array}{c} D_{X_1}\Omega(X_1,X_2)=-A+A=0\\ D_{X_2}\Omega(X_1,X_2)=-B+ B=0. \mathbf{e}nd{array} \mathbf{e}nd{equation*} Moreover \begin{equation*} \begin{array}{c} D_{X_1}\Omega(\mathbf{x}i_1,\mathbf{x}i_2)=L_{11}A+L_{12}B=0\\ D_{X_2}\Omega(\mathbf{x}i_1,\mathbf{x}i_2)=L_{21}A+L_{22}B=0. \mathbf{e}nd{array} \mathbf{e}nd{equation*} We must prove now that $(D_{X_k}\Omega)(X_i,\mathbf{x}i_j)=0$, for any $i,j,k=1,2$. We shall prove for $(i,j,k)=(1,2,1)$ and $(i,j,k)=(2,1,1)$, the other cases being similar. We have \begin{equation*} \begin{array}{c} dA(X_1)-\Gamma_{11}^1A-\Gamma_{11}^2(-\mathbf{e}psilon B)-\mathbf{t}au_2^1(X_1)B-\mathbf{t}au_2^2(X_1)A=0\\ dA(X_1)-\Gamma_{12}^1B-\Gamma_{12}^2A-\mathbf{t}au_1^1(X_1)A-\mathbf{t}au_1^2(X_1)(-\mathbf{e}psilon B)=0. 
\end{array} \end{equation*} But, as we have seen above, this pair of equations is equivalent to \begin{equation*} \begin{array}{c} F_{21}A+F_{22}B=0\\ dA(X_1)-G_1 B=0, \end{array} \end{equation*} which holds by lemma \ref{lemma:Proof}. \begin{thebibliography}{99} \bibitem{Burstin27} Burstin, C. and Mayer, W.: \textit{Die Geometrie zweifach ausgedehnter Mannigfaltigkeiten $F_2$ im affinen Raum $\mathbb{R}_4$}, Math. Z., 27, 373-407, 1927. \bibitem{Morvan87} Chen, B.Y. and Morvan, J.M.: \textit{G\'eom\'etrie des surfaces Lagrangiennes de $\mathbb{C}^2$}, J. Math. Pures Appl., 66, 321-335, 1987. \bibitem{Craizer14} Craizer, M., Domitrz, W. and Rios, P. de M.: \textit{Even dimensional improper affine spheres}, J. of Mathematical Analysis and Applications, 421, 1803-1826, 2015. \bibitem{Dillen} Dillen, F., Mys, G., Verstraelen, L. and Vrancken, L.: \textit{The affine mean curvature vector for surfaces in $\mathbb{R}^4$}, Math. Nachr., 166, 155-165, 1994. \bibitem{Klingenberg51} Klingenberg, W.: \textit{Zur affinen Differentialgeometrie, Teil II: \"Uber 2-dimensionale Fl\"achen im 4-dimensionalen Raum}, Math. Z., 54, 184-216, 1951. \bibitem{Magid} Magid, M., Scharlach, C. and Vrancken, L.: \textit{Affine umbilical surfaces in $\mathbb{R}^4$}, Manuscripta Math., 88, 275-289, 1995. \bibitem{Martinez05} Martinez, A.: \textit{Improper affine maps}, Math. Z., 249, 755-766, 2005. \bibitem{Nomizu93} Nomizu, K. and Vrancken, L.: \textit{A new equiaffine theory for surfaces in $\mathbb{R}^4$}, Int. J. of Mathematics, 4(1), 127-165, 1993. \bibitem{Verstraelen} Verstraelen, L., Vrancken, L. and Witowicz, P.: \textit{Indefinite affine umbilical surfaces in $\mathbb{R}^4$}, Geom. Dedicata, 79, 109-119, 2000. \bibitem{Vrancken} Vrancken, L.: \textit{Affine surfaces whose geodesics are planar curves}, Proc. Amer. Math. Soc., 123(12), 3851-3854, 1995.
\end{thebibliography} \end{document}
\begin{document} \title{Sparsity of integral points on moduli spaces of varieties} \begin{abstract} Let $X$ be a quasi-projective variety over a number field, admitting (after passage to $\mathbb{C}$) a geometric variation of Hodge structure whose period mapping has zero-dimensional fibers. Then the integral points of $X$ are {\em sparse}: the number of such points of height $\leq B$ grows more slowly than any positive power of $B$. For example, homogeneous integral polynomials in a fixed number of variables and degree, with discriminant divisible only by a fixed set of primes, are sparse when considered up to integral linear substitutions. \end{abstract} \author{Jordan S. Ellenberg, Brian Lawrence, and Akshay Venkatesh} \maketitle \begin{comment} Following Koll\'{a}r~\cite[4.1]{koll:shaf}, we make the following definitions. \begin{defn} If $X$ is a normal variety, a {\em normal cycle} on $X$ is a morphism $w:W \to X$ such that $W$ is an irreducible normal variety and $w$ is finite and birational onto its image. \end{defn} \begin{defn} Let $X$ be a normal projective variety over an algebraically closed field, and let $Z$ be a proper closed subvariety of $X$. We say $X$ has {\em large algebraic fundamental group away from $Z$} if, for every positive-dimensional normal cycle $W \to X$, the induced map $\pi_1(W \backslash (W \cap Z)) \to \pi_1(X \backslash Z)$ has infinite image. \end{defn} If $X/k$ is a variety over an arbitrary field $k$ of characteristic $0$, we say $X$ has large algebraic fundamental group away from $Z$ if the criterion above holds for $X_{\overline{k}}$. Note that if $X$ has large algebraic fundamental group, the same is true for $X \times_k K$ where $K$ is any uncountable algebraically closed field of characteristic $0$. {\bf This needs further justification, it is not just Lefschetz principle but I think it is standard.} The main theorem of this paper is that varieties with large fundamental group away from $Z$ have few $Z$-integral points.
\end{comment} \section{Introduction} Let $K \subset \field{C}$ be a finite field extension of the rational numbers, and $S$ a finite set of primes of $K$. We will consider $S$-integral points on quasi-projective $K$-varieties $X^{\circ} \subset \mathbb{P}^m$. More precisely: in this situation we will write $X$ for the Zariski closure of $X^{\circ}$ in $\mathbb{P}^m$, $\mathcal{L} = \mathcal{O}(1)$ for the associated hyperplane bundle, and $Z$ for $X \backslash X^{\circ}$. After choosing a good integral model of $X$ and $Z$ (see \S \ref{integralmodel} for details) we obtain a notion of ``$S$-integral point of $X^{\circ}$,'' and the projective embedding allows us to refer to the ``height'' of such a point. (By ``height'' we always mean the \emph{multiplicative} height.) We will denote by $X_{\field{C}}$ the complex variety obtained by base extension via the fixed embedding $K \hookrightarrow \field{C}$, by $X(\field{C})$ its complex points, and by $X^{{\text{an}}}$ its complex analytification. \begin{thm} \label{thm1} Let $X^{\circ} \subset \mathbb{P}^m$ be a quasi-projective variety over $K$ such that $X^{\circ}_{\field{C}}$ admits a geometric variation of Hodge structure, whose associated period map (see \cite{Griffiths} for definitions) is locally finite-to-one, i.e. has zero-dimensional fibers. Then integral points on $X^{\circ}$ are sparse, in the sense that \begin{equation} \label{th:main}\# \{x \text{ an $S$-integral point of $X^{\circ}$, of (multiplicative) height at most $B$} \} =O_{\epsilon}(B^{\epsilon}). \end{equation} \end{thm} By ``geometric variation of Hodge structure'' we mean a direct summand of a VHS arising from a smooth projective family over $X^{\circ}_{\field{C}}$; in particular the Hodge structures arising here are pure.
In the situation of the Theorem, the ``period map'' is understood to be the complex-analytic map $\widetilde{X^{\circ {\text{an}}}} \rightarrow D$ classifying the VHS, where $\widetilde{X^{\circ {\text{an}}}}$ is the universal cover of $X^{\circ {\text{an}}}$ and $D$ is the period domain associated to the varying Hodge structures. We note that the condition we are imposing on $X^{\circ}$ depends only on the complex algebraic variety $X^{\circ}_{\field{C}}$ and not on its rational form over $K$. The notation $O_{\epsilon}(B^{\epsilon})$ is that of analytic number theory: for each $\epsilon > 0$ there exists $c_{\epsilon}$ such that the left-hand side is at most $c_{\epsilon} B^{\epsilon}$. Theorem \ref{thm1} will be deduced from the following more general theorem. \begin{thm} \label{thm2} Let $\pi: \mathfrak{X} \rightarrow X^{\circ}$ be a projective smooth morphism of $K$-varieties, and (for some $i \geq 0$) let $\Phi$ be the period map associated to the variation of Hodge structure $\mathsf{V} = R^i \pi_* \field{Q}$ on $(X^{\circ})^{{\text{an}}}$. Then the $S$-integral points of $X^{\circ}$ with height at most $B$ are covered by $O_{\epsilon}(B^{\epsilon})$ geometrically irreducible $K$-varieties, each lying in a single fiber of $\Phi$. \end{thm} We have abused language in the statement: for an irreducible analytic subvariety $V \subset X^{\circ {\text{an}}}$ we shall say that $V$ ``lies in a fiber of $\Phi$'' if some component (hence every component) of the preimage of $V$ in the universal cover lies in such a fiber. We will use similar language elsewhere in the paper -- if a property of the period map is local, we will phrase it on $X^{\circ}$ rather than the universal cover. The bound \eqref{th:main} applies to many natural moduli spaces of varieties.
For instance, one may take $X$ to be the projective space parametrizing hypersurfaces of a given degree and dimension, and $Z$ the locus of singular hypersurfaces, and obtain the following corollary, which we will derive from Theorem~\ref{thm1} in \S \ref{Corproof}. \begin{cor} \label{Corhyp} Fix $n \geq 2$ and $d \geq 3$ and a finite set $S$ of rational primes. Then the $S$-integral homogeneous degree $d$ polynomials $P= \sum a_{i_1\dots i_n} x_1^{i_1} \dots x_n^{i_n}$ in $n$ variables with $S$-integer discriminant and $\max_{i, v \in S} |a_i|_v \leq B$ lie in at most $O_{S,d,n,\epsilon}(B^{\epsilon})$ orbits of the group of integral linear substitutions $\GL_n(\field{Z}[\frac{1}{S}])$. Similarly, the set of such integral polynomials (i.e. with $\field{Z}$ coefficients), with discriminant exactly equal to a nonzero integer $N \in \mathbb{Z}$, and $\max|a_i| \leq B$, lies in at most $O_{N,d,n,\epsilon}(B^{\epsilon})$ orbits of the group of unimodular integral linear substitutions $\SL_n(\field{Z})$. \end{cor} Recall that the {\em discriminant} of a homogeneous polynomial is a certain homogeneous polynomial in the coefficients of $P$ which vanishes precisely when the hypersurface defined by $P=0$ is singular; see \cite{PS} for more discussion of its significance. In the paper \cite{LV18} of the second- and third-named authors, it is proved by a much more complicated argument that for ``large enough'' $n,d$ the set of polynomials with $S$-integer discriminant is not Zariski dense in the ambient affine hypersurface (which of course neither implies nor is implied by what we prove here).
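As a concrete illustration of the discriminant (our own example; normalizations may differ from those of \cite{PS} by a nonzero scalar), for a binary quadratic form $P=ax_1^2+bx_1x_2+cx_2^2$ one has
\begin{equation*}
\mathrm{disc}(P)=b^2-4ac,
\end{equation*}
which vanishes exactly when $P$ is proportional to the square of a linear form, i.e.\ when the subscheme $\{P=0\}\subset\mathbb{P}^1$ is a double point and hence singular. Likewise, for a binary cubic $P=ax_1^3+bx_1^2x_2+cx_1x_2^2+dx_2^3$,
\begin{equation*}
\mathrm{disc}(P)=18abcd-4b^3d+b^2c^2-4ac^3-27a^2d^2,
\end{equation*}
which vanishes exactly when $P$ has a repeated linear factor; the case $(n,d)=(2,3)$ is the smallest one covered by Corollary \ref{Corhyp}.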
It is expected (\cite[Conj.\ 1.4]{JL}) that this set is {\em finite} as part of a ``Shafarevich-type'' statement about hypersurfaces with good reduction; in fact, if one assumes the Lang-Vojta conjecture, then results on the hyperbolicity of period domains imply finiteness of $S$-integral points in the general context of Theorem \ref{thm2}; see \cite{Zuo} and \cite[Thm.\ 1.5]{JL}.\footnote{ The theorem in loc.\ cit.\ is stated only for complete intersections, but the proof applies in the generality of our Theorem \ref{thm2}.} We note that the Shafarevich assertion can hold even in cases where there is no Torelli theorem; for instance, it holds for the case $(n,d) = (4,3)$ of cubic surfaces, by a result of Scholl~\cite{scholl:delpezzo}. In addition to cubic surfaces, the Shafarevich assertion is also known for cubic and quartic threefolds \cite{JL}. \subsection{Discussion of the proof} The reduction of Theorem \ref{thm1} to Theorem \ref{thm2} is relatively formal, and so we will focus on the latter Theorem here. The idea of the proof goes back to ideas initiated in the work of Bombieri-Pila \cite{BP}, Heath-Brown \cite{HBR} (and, independently and in a different context, Coppersmith \cite{Coppersmith}): One can show that an excess of rational (or integral) points on any $X$ as above must lie on the zero-loci of some auxiliary functions, i.e., must be covered by a certain collection of divisors. Then we iterate the process, replacing $X$ by these divisors, and covering the points by codimension-$2$ subvarieties, etc.; since we have almost no control over what these subvarieties look like, it is crucial to have, as in Bombieri-Pila and Heath-Brown, that the basic bounds are {\em uniform} in the ambient variety. See Remark \ref{Brobremark} for a quick review of this method. Unfortunately, this process is quite lossy when executed in dimensions larger than $1$. 
The problem is the usual one in studying rational points on higher dimensional varieties: Even if a variety has high degree, it may contain subvarieties of much lower degree, such as hyperplanes. For this reason, there are very few situations where one can achieve $B^{\epsilon}$-type bounds (see e.g.\ \cite{PW} for an example in a different context). The key point of our strategy here is to use the fact that the bounds of \cite{Broberg} improve under {\'e}tale covers, which are plentiful under the conditions on $\mathfrak{X} \rightarrow X$ we have imposed; these covers can be used to raise the degree of intermediate subvarieties. To deploy this in a way suitable for an iterative argument, we use the global invariant cycle theorem of Hodge theory to construct a cover $\widetilde{X} \rightarrow X$ which remains nontrivial when restricted to any subvariety of $X$. In some sense, this strategy generalizes the approach of \cite{EV:2d}, which used \'etale covers to slightly improve on Heath-Brown's bounds in case $X$ is a non-rational curve (i.e. a curve with interesting \'etale covers); in the case $\dim X = 1$, of course, the issue of restriction of covers to subvarieties becomes trivial. In the end, the core part of the proof is quite short -- the main idea is the induction on dimension laid out in \S \ref{conclude}, with the key induction step being provided by Lemma \ref{lem:induction}. The reader may want to skip directly to these, and refer back as necessary. One of the reasons for the length of the text is the technicalities involved in making various results uniform over all subvarieties of $X$. One way of thinking about the strategy executed here is in terms of ``profinite repulsion'' between low-height integral points. For simplicity of exposition we take $K=\field{Q}$ for this paragraph.
For any prime $p$, the embedding of $X(\field{Z}[1/S])$ into $X(\field{Q}_p)$ induces a topology on the former set, and the methods of Bombieri-Pila and Heath-Brown rely on an argument that low-height points tend to repel each other in this topology. In the cases considered in this paper, there is an alternate profinite topology on $X^{\circ}(\field{Z}[1/S])$ afforded by the map \begin{displaymath} X^{\circ}(\field{Z}[1/S]) \to H^1(G_\field{Q}, \pi_1(X^{\circ}_{\overline{\field{Q}}})) \end{displaymath} and our approach can be thought of as exploiting the fact that in this topology, too, the low-height $S$-integral points repel each other. The two profinite repulsion phenomena, working in tandem, are what allow us to get a better upper bound on the number of low-height points. To be more precise: to say that two points $P_1$ and $P_2$ of $X^{\circ}(\field{Z}[1/S])$ are close together in the alternate profinite topology is to say that there is some high-degree cover $\widetilde{X}^\circ \to X^\circ$, with $\widetilde{X}^\circ$ irreducible, such that $P_1$ and $P_2$ both lie in the image of $\widetilde{X}^\circ(\field{Q})$. Then the method of Heath-Brown is used to control the low-height points of $\widetilde{X}^\circ(\field{Q})$, which is aided by the fact that this new variety has very high degree. In the course of writing this paper we learned of the paper \cite{Br} of Brunebarbe, which uses the same basic idea of increasing the positivity of line bundles by passing to {\'e}tale covers, in the context of a VHS with finite period map, in order to prove uniform algebro-geometric assertions about covers of varieties carrying such a VHS. Indeed Brunebarbe gives a more analytic (and therefore effective) proof of a result (\cite[Theorem 1.13]{Br}) of the type of Lemma \ref{lem31}, whereas our argument uses general finiteness statements of algebraic geometry. As in our paper, a crucial input to \cite{Br} is the invariant cycle theorem of Hodge theory.
Finally, let us mention some potential refinements of the result. \begin{itemize} \item The reader will observe that the {\em geometricity} of the variation of Hodge structure is, in fact, barely used at all in our proof. It permits us to phrase the argument in quite an explicit way, but it should be possible to avoid it entirely. \item Indeed, the whole apparatus of variations of Hodge structure is used simply to provide examples of varieties $X^\circ$ with the property that the image of $\pi_1(W^{\circ {\text{an}}}) \to \pi_1(X^{\circ {\text{an}}})$ has infinite order for every subvariety $W^\circ$ of $X^\circ$. We note that this is essentially the same as Koll\'{a}r's condition of {\em large fundamental group} from \cite{kollar}. For an $X^{\circ}$ which possessed this property for some reason unrelated to Hodge structures, the arguments here should work just as well to control its rational points. For example, abelian varieties do not admit variations of Hodge structure but they do have large fundamental group, so the arguments of this paper should in principle imply sparseness of rational points; but for abelian varieties, we know this already, by the Mordell-Weil theorem. \item One might ask what happens for varieties $X^\circ$ carrying a variation of {\em mixed} Hodge structures; it seems plausible that a theorem like the one proved here will still hold. \item It would be interesting to refine our statement $O_{\epsilon}(B^{\epsilon})$ to a more precise upper bound. To do this would, at the very least, require that the inexplicit dependence on degree in Broberg's bounds be made explicit. Questions in this vein have been the topic of much recent progress; see, e.g., the recent result of Castryck, Cluckers, Dittmann, and Nguyen~\cite[Theorem 2]{CCDN}, which gives bounds with explicit polynomial dependence on degree.
The quantitative approach of \cite{Br} will also likely be needed in order to get effective lower bounds for the degrees of covers that arise in the argument. \end{itemize} \subsection{Notation and integral models} \label{integralmodel} As above, $K$ is a number field embedded in $\field{C}$; let $\overline{K}$ be the algebraic closure of $K$ in $\field{C}$ (so $\overline{K}$ is just another name for $\overline{\mathbb{Q}}$). By a variety $V$ (over $K$), or $K$-variety for short, we mean, as usual, an integral, separated, finite-type scheme over $K$. In this situation, $V$ will always denote the scheme over $K$, and $V_{\field{C}}$ the base extension of $V$ to a complex scheme via $K \hookrightarrow \field{C}$. (Note that a $K$-variety is not required to be geometrically irreducible; thus, $V_{\field{C}}$ may have multiple components.) We will sometimes use notation such as $Y_{\field{C}}$ even when $Y$ is not a base-change from $K$, simply to emphasize that $Y_{\field{C}}$ is a complex variety. We will write $V^{{\text{an}}}$ for the (complex) analytification of $V_{\field{C}}$. As above, let $S$ be a finite set of primes of $K$. Let $\mathfrak{o}_S$ be the ring of $S$-integers of $K$, i.e.\ the set of those elements of $K$ that are integral outside $S$. \begin{defn} \label{goodmodel} Let $X^{\circ} \subset \mathbb{P}^m$ be a locally closed subvariety, and $X$ its Zariski closure. Let $Z$ be the complement $X \backslash X^{\circ}$, a Zariski-closed subset of $\mathbb{P}^m$. We will call a \emph{good integral model} for $(X, Z)$ a choice of a projective flat $\mathfrak{o}_S$-scheme $X_S \subset \mathbb{P}^N_S$ extending $X$ on the generic fiber. Given a good integral model, we define $Z_S$ to be the Zariski closure of $Z$ inside $X_S$, and take $X^{\circ}_S = X_S - Z_S$.
If we are given a projective smooth morphism $f: \mathfrak{X} \rightarrow X^{\circ}$ and we have chosen a good integral model for $(X,Z)$, a good integral model for $f$ will be a projective smooth morphism $f_S: \mathfrak{X}_{S} \rightarrow X^{\circ}_S$. \end{defn} Good integral models exist after possibly increasing $S$, by standard spreading arguments (a nice reference here is Theorem 3.2.1 and Appendix C, Table 1 of \cite{PoonenRP}). A $K$-point of $X$ will be said to be {\em $S$-integral} if it extends to a morphism $\mathrm{Spec} (\mathfrak{o}_S) \rightarrow X_S-Z_S$. Explicitly, assuming $\mathfrak{o}_S$ to have class number one for simplicity, any $K$-point, represented by a point with relatively prime homogeneous coordinates $\mathbf{x} = [x_0: \dots: x_N] \in \mathfrak{o}_S^{N+1}$, is \emph{integral} at a prime $p \notin S$ if there exists a homogeneous function $f \in \mathfrak{o}_S[x_0, \dots, x_N]$ in the ideal of $Z_S$ such that $f(\mathbf{x})$ is nonzero mod $p$; it is $S$-integral if it is integral at all primes $p \notin S$. If we make two different choices of good integral model as above, there exists a finite set $T$ such that any $S$-integral point for the first model is $S \coprod T$-integral for the second model, so the choice of good integral model is irrelevant to the statement of the theorem. Finally, we fix some other notation to be used: For any field $F$ and any $F$-variety $Y$ mapping to $X$, we denote by $Y^{\circ}$ the preimage of $X^{\circ}$ inside $Y$. If the map $f:Y \rightarrow X$ is finite, the ``degree'' of $Y$ will be the degree of the pullback $f^* \mathcal{L}$ of the hyperplane bundle $\mathcal{L}$, characterized by the asymptotic formula \begin{equation} \label{growth} \dim_F \Gamma(Y, f^* \mathcal{L}^{\otimes k}) \sim \frac{ \mathrm{deg} Y}{(\dim Y)!} k^{\dim Y} \end{equation} for large $k$. \section{Some results used in the proof} \label{res} We collect results we will use in the proof.
These are a few lemmas from algebraic geometry, and a crucial bound on rational points due to Broberg. We emphasize again that, for $F$ a field, a variety over $F$ is assumed reduced and irreducible, but not geometrically irreducible. \begin{lem} \label{KS} Suppose that $\pi: \mathfrak{X} \rightarrow X$ is a projective smooth morphism of algebraic varieties over a subfield $\kappa \subset \field{C}$, with $X$ smooth, and let $\mathcal{T}$ be the tangent bundle of $X$. Let $\Phi$ be the period map associated to the variation of Hodge structure $\mathsf{V} = R^i \pi_* \field{Q}$ on $X^{{\text{an}}}$, for some $i>0$. Then there is a morphism \begin{equation} \label{gdef} g \colon \mathcal{H}_1 \otimes \mathcal{T} \rightarrow \mathcal{H}_2\end{equation} of vector bundles over $X$ (everything defined over $\kappa$) such that the derivative of the period map along a tangent vector $t$ at $x$ vanishes if and only if $g_x( -, t)$ is the zero map from $\mathcal{H}_1$ to $\mathcal{H}_2$. \end{lem} \proof This is a consequence of standard facts in Hodge theory, in particular the algebraicity of the Gauss--Manin connection (see \cite{Katz_Oda} and \cite[\S 1]{Katz_nilp}). The $i$th relative algebraic de Rham cohomology of $\mathfrak{X}/X$ gives a variation of Hodge structure $H_{\text{dR}}$ on $X$. As a variation of Hodge structure, $H_{\text{dR}}$ is a vector bundle, equipped with a flat connection and a filtration by algebraic subbundles $F^p H_{\text{dR}}$. These data determine the period map $\widetilde{X^{\circ {\text{an}}}} \rightarrow D$, where $D$ is a flag variety classifying filtrations of a fixed vector space of dimension $\operatorname{rank} H_{\text{dR}}$ by subspaces of dimensions $\operatorname{rank} F^p H_{\text{dR}}$.
For each step $F^p H_{\text{dR}}$ of the filtration, the connection on $H_{\text{dR}}$ defines an $\mathcal{O}_X$-linear map \[ g_p \colon F^p H_{\text{dR}} \otimes \mathcal{T} \rightarrow H_{\text{dR}} / F^p H_{\text{dR}}, \] which describes how the subspace $F^p H_{\text{dR}, x}$ varies in $x \in X$. Combining the maps $g_p$ over all $p$, we obtain \[ g \colon \bigoplus_p F^p H_{\text{dR}} \otimes \mathcal{T} \rightarrow \bigoplus_p H_{\text{dR}} / F^p H_{\text{dR}}. \] This map $g$ determines the differential of the period map. More precisely, at every point, the tangent space to $D$ is identified with a subspace of \[ \operatorname{Hom} \left ( \bigoplus_p F^p H_{\text{dR}}, \bigoplus_p H_{\text{dR}} / F^p H_{\text{dR}} \right ), \] and with this identification, $g$ gives the differential of the period map. Finally, we note that $g$ is defined over $\kappa$ by \cite{Katz_Oda} and \cite[\S 1]{Katz_nilp}. \qed \begin{lem} \label{degree} Let $F$ be a field of characteristic zero. Suppose that $Y$ is a proper $F$-variety equipped with an ample line bundle $\mathcal{L}$, and let $g: \widetilde{Y} \rightarrow Y$ be finite, with $\dim(\widetilde{Y}) = \dim(Y)$. Writing $\deg g$ for the degree of $g$ at the generic point, we have \begin{equation} \label{degfinite} \deg_{g^* \mathcal{L}}(\widetilde{Y}) = (\deg g) \deg_{\mathcal{L}}(Y).\end{equation} \end{lem} We will apply this only when $\widetilde{Y}$ is a variety, but the argument does not require that, taking \eqref{growth} as the definition of degree. \proof By the projection formula we have $$\Gamma(\widetilde{Y}, g^* \mathcal{L}^{\otimes k}) = \Gamma(Y, g_* \mathcal{O} \otimes \mathcal{L}^{\otimes k}).$$ Now $g_* \mathcal{O}$ is isomorphic to $\mathcal{O}^{\oplus (\mathrm{deg} g)}$ away from a set of positive codimension on $Y$.
Therefore $\Gamma(\widetilde{Y}, g^* \mathcal{L}^{\otimes k})$ coincides with $(\mathrm{deg} \ g) \frac{ \mathrm{deg}_{\mathcal{L}} Y}{(\dim Y)!} k^{\dim Y}$ for large $k$, up to terms $O(k^{\dim Y-1})$. We conclude using $\dim \widetilde{Y} = \dim Y$. \qed The next Lemma is \cite[Expos{\'e} I, Corollaire 10.8]{SGA1}. \begin{lem} \label{disjointness} Suppose that $X$ is an irreducible normal noetherian scheme and $f: Y \rightarrow X$ a finite {\'e}tale cover. Then the irreducible components of $Y$ are disjoint. \end{lem} The following result asserts, essentially, ``boundedness'' of the set of irreducible varieties of bounded degree in a fixed projective space. Results of this type are also stated and used in work of Salberger \cite[Lemma 1.4, Thm.\ 3.2]{Salberger} in a similar context; in the interest of self-containedness we give a proof of precisely what we use. \begin{lem} \label{Kleiman} Let $F$ be a field of characteristic zero. \begin{itemize} \item[(a)] Suppose that $V \subset \mathbb{P}^m_F$ is a closed subvariety (irreducible, reduced closed subscheme) of degree $d$. Then there are bounds, depending only on $m,d$ (not on $F$), for each coefficient of the Hilbert polynomial of $V$, and in particular the homogeneous ideal $I(V)$ of $V$ is generated in degree $O_{m,d}(1)$. \item[(b)] Suppose we are given integers $m, n, d, R$. Then there exist bounds $D$ and $N$ with the following property: Suppose $V_1, \ldots, V_r$ (with $r \leq R$) is a collection of closed subvarieties of $\mathbb{P}^m_F$, with each $V_i$ of dimension $\leq n$ and degree $\leq d$. Let $Z$ be the intersection \begin{equation} \label{Zdef} Z = \bigcap_{i=1}^{r} V_i. \end{equation} Then the number of irreducible components of $Z$ is at most $N$, and the degree of each such component (endowed with the reduced scheme structure) is at most $D$. Furthermore, the bounds $D$ and $N$ are independent of the field $F$.
\item[(c)] Suppose that $V \subset \mathbb{P}^m_F$ is a closed variety of dimension $n$ and degree $d$ which is {\em not} geometrically irreducible. Then there exist finitely many subvarieties $V_1, \ldots, V_N \subseteq V$, defined over $F$, such that: \begin{itemize} \item Each $V_i$ is irreducible of dimension $\leq n-1$, \item $V(F) = \bigcup V_i(F)$, and \item the number $N$ of $V_i$'s and the degree of each $V_i$ can be bounded in terms of $n, m, d$ (but independently of the field $F$ and the variety $V$). \end{itemize} \item[(d)] Suppose that $V \subset \mathbb{P}^m_F$ is a closed variety of dimension $n$ and degree $d$. Then the set of points in $V(F)$ which are singular on $V$ can again be covered by varieties $V_1, \dots, V_N$ with the same properties as in (c). \end{itemize} \end{lem} \begin{comment} \begin{rem} In statement (c) we understand $\bigcap V_i$ to be endowed with its scheme structure as a fiber product; it need not be reduced, and neither need its irreducible components be, which are considered as schemes via the schematic closures of their generic points. However, the statement immediately implies the same result if we endow $Z$ instead with its reduced scheme structure. \end{rem} \end{comment} \begin{exmp} As an example showing the necessity of the hypotheses in part (a): take a $d$-dimensional variety, and adjoin to it a large set of disjoint points; this modification does not affect the degree, and shows the need for irreducibility or at least equidimensionality. Similarly, consideration of embedded points shows that ``reduced'' is also important. As an example of the situation in part (c), consider the plane curve defined by $x^2 + y^2 = 0$. This is irreducible over $\mathbb{Q}$ but not geometrically irreducible; it only has one rational point, $(0, 0)$, which is contained in a zero-dimensional, geometrically irreducible subvariety.
More generally, suppose $F$ is a number field, and consider ``the affine line over $F$, with the origin reduced to a $\mathbb{Q}$-point'' -- that is, $\operatorname{Spec} ( \mathbb{Q} + T F[T])$. If $F \neq \mathbb{Q}$, this scheme is irreducible, but geometrically reducible, and its only $\mathbb{Q}$-point is the origin. (Taking $F = \mathbb{Q}(i)$ recovers the original example.) \end{exmp} \proof The first assertion of (a) follows from \cite[Expos\'e XIII, Corollary 6.11(a)]{SGA6}. That assertion applies as formulated to ``special positive cycles'' over an algebraically closed field; in our situation $V_{\overline{F}} \subset \mathbb{P}^m_{\overline{F}}$ is reduced equidimensional and its decomposition into irreducible components gives the closed subschemes appearing in {\em loc. cit.} D{\'e}finition 6.9. The consequence on bounded generation of $I(V)$ follows, because such a bound on generation of the defining ideal is valid in any finite-type subscheme of the Hilbert scheme. For (b), we may as well consider one fixed $r$. Let $\mathcal{P}$ be the finite set of polynomials arising from (a), and let $\operatorname{Hilb}$ be the Hilbert scheme parametrizing closed subschemes of $\mathbb{P}^m$ with Hilbert polynomial in $\mathcal{P}$. This is a finite-type $\mathbb{Q}$-scheme by (a). Now tuples $(V_1, \ldots, V_r)$ are classified by suitable $F$-points of $\operatorname{Hilb}^r$, which is again of finite type; and the result now follows from standard results on families over a finite-type base. Specifically, we have the universal schemes $\mathcal{V}_1, \ldots, \mathcal{V}_r$ over $\operatorname{Hilb}^r$; let $\mathcal{Z} \subseteq \mathbb{P}^m \times \operatorname{Hilb}^r$ be their fiber product over $\mathbb{P}^m$, so fiberwise $\mathcal{Z}$ gives the intersection of the $\mathcal{V}_i$ (although with a possibly non-reduced scheme structure). We will work by Noetherian induction.
Let $\eta$ be the generic point of a closed irreducible subscheme $H \subseteq \operatorname{Hilb}^r$. We will show that there exists a relatively open subset $U \subseteq H$ (that is, open in $H$, but not necessarily in $\operatorname{Hilb}^r$) such that the number, dimensions, and degrees of the irreducible components of fibers $\mathcal{Z}_h$, for $h \in U$, are bounded. Here, as in the statement, ``degree'' is taken with reference to the reduced scheme structure. The number of irreducible components of any geometric fiber of $\mathcal{Z}$ is bounded by \cite[9.7.9]{EGAIV3}. This bounds the number of geometric components, and so also the number of irreducible components, of any $Z$ as in \eqref{Zdef}. Now we turn to the degree. For each $j$ with $0 \leq j \leq n$, let $Z_j$ be the closure in $\mathcal{Z}$ of the union of $j$-dimensional components of $\mathcal{Z}_{\eta}$ (thought of, for now, merely as a closed subset of $\mathcal{Z}$ in the Zariski topology; we will revisit the issue of scheme structure shortly). In particular, the fiber over $\eta$ of each $Z_j$ is equidimensional of dimension $j$. By \cite[9.5.1, 9.5.5]{EGAIV3}, we can restrict to an open $U \subseteq H$, on which: \begin{itemize} \item[(i)] For each $s \in U$, the fiber $(Z_j)_s$ is equidimensional of dimension $j$, \item[(ii)] For each $s \in U$, the fiber $\mathcal{Z}_s$ is set-theoretically covered by the various $(Z_j)_s$, and \item[(iii)] For each $s \in U$, and for all $j' < j$, the intersection $(Z_j)_s \cap (Z_{j'})_s$ has all components of dimension strictly less than $j'$. \end{itemize} Take $s \in U$ and let $K$ be any irreducible component of $\mathcal{Z}_s$, say of dimension $q$; by (ii) it is contained in some irreducible component of some $(Z_j)_s$.
Since $K$ is maximal among irreducible subsets, we must have equality here, i.e.\ $K$ coincides with an irreducible component of this $(Z_j)_s$, and since $(Z_j)_s$ is equidimensional of dimension $j$ we must have $j=q$. Now let us endow $Z_j$ (so far merely a closed set) with its reduced scheme structure. By generic flatness (\cite[6.9.1]{EGAIV2}), we may further shrink $U$ so that each $Z_j$ is flat over $U$. Then for each $j$, the degree of each $(Z_j)_s$ is independent of $s$. Now, the scheme structure on $(Z_j)_s$ need not be reduced, but nonetheless there is an inequality of degrees $\mathrm{deg} \ (Z_j)_s \geq \mathrm{deg} \ (Z_j)_s^{\mathrm{red}}$ (where $(Z_j)_s^{\mathrm{red}}$ denotes the fiber taken with the reduced scheme structure) and the degree of $(Z_q)_s^{\mathrm{red}}$ bounds from above the degree of $K$ taken with its reduced scheme structure. Now we turn to (c). Consider the geometrically irreducible components $W_1, \dots, W_h$ inside the base change $V_{\overline{F}}$ of $V$ to an algebraic closure. Note that $h$ is bounded by the degree of $V$, which we have assumed bounded, and similarly the degree of each $W_i$ is bounded by the degree of $V$. The intersection $\bigcap_{i=1}^h W_i$ (with its reduced structure) is a Galois-stable closed subscheme of $V_{\overline{F}}$ and thereby descends to a reduced $F$-subscheme $W \subset V$. Since $V$ is not geometrically irreducible, we know that $\operatorname{dim} W \leq \operatorname{dim} V - 1$. Also (b), applied with $F$ replaced by $\overline{F}$, implies that $W_{\bar{F}}$ has a bounded number of irreducible components, each of bounded degree. The same is then true for the $F$-scheme $W$. We claim, further, that $W(F) = V(F)$. To see this, note that any $F$-point of $V$ must be Galois-invariant; since Galois permutes the geometric components of $V$ transitively, the associated element of $V(\overline{F})$ belongs to all $W_i$. (See \cite[Lemma 0G69]{Stacks} for a similar argument.)
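To illustrate the construction in (c) on the example given earlier: for the plane conic $V: x^2 + y^2 = 0$ over $F = \field{Q}$, the geometric components of $V_{\overline{F}}$ are the two lines $$ W_1 = \{x = iy\}, \qquad W_2 = \{x = -iy\}, $$ interchanged by complex conjugation; their intersection descends to $W = \{(0,0)\}$, and indeed $V(\field{Q}) = W(\field{Q}) = \{(0,0)\}$, as the claim predicts.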
The argument for (d) is similar to that for (b). Again we can parameterize all such $V$ by a suitable finite type Hilbert scheme $\mathrm{Hilb}$ and, writing $\pi: \mathcal{H} \rightarrow \mathrm{Hilb}$ for the universal subscheme, the smooth locus of $\pi$ coincides with the set of points of $\mathcal{H}$ that are smooth in their fiber over $\mathrm{Hilb}$ (see e.g.\ \cite[17.5.1]{EGAIV4}). Let $\mathcal{Z}$ be the complement of this smooth locus, endowed with the reduced structure. Then proceed as in (b). \qed Finally, the following theorem of Broberg \cite{Broberg} builds on fundamental ideas of Heath-Brown \cite{HBR} and Bombieri--Pila \cite{BP}: \begin{thm}[Broberg, 2004] \label{Brobs} Let $V \subset \mathbb{P}^M_{K}$ be an irreducible closed subvariety of dimension $n$ and degree $d$. Then the points of $V(K)$ whose naive height is at most $H$ are contained in a set of $K$-rational divisors of cardinality $\ll_{\epsilon, M} H^{\frac{n+1 +\epsilon}{d^{1/n}} }$, each of which has degree $O_{\epsilon, M}(1)$. \end{thm} We note that, as stated in \cite{Broberg}, Broberg requires a bound on the generation of the ideal of $V$. This bound is however automatic from Lemma \ref{Kleiman} part (a). Also, ``divisor'' in the statement means ``effective Cartier divisor.'' \begin{rem} \label{Brobremark} Because Theorem \ref{Brobs} is so crucial, particularly its dependence on degree, we briefly outline where it comes from, taking $K=\field{Q}$ to simplify notation; this is not needed in the remainder of the paper. One chooses a large integer $k$ and embeds $V \hookrightarrow \mathbb{P}^{e-1}$ via a basis of sections of $\Gamma(V, \mathcal{O}(1)^{\otimes k})$. In fact, we can and do choose from $\Gamma(\field{P}^M, \mathcal{O}(1)^{\otimes k})$ a set of monomials $f_1, \ldots, f_e$ of degree $k$ in the $M+1$ variables which freely span $\Gamma(V, \mathcal{O}(1)^{\otimes k})$.
Choose a ``good'' prime $p$ and examine a collection of $e$ points $P_i \in V(\field{Q})$ which all reduce to the same point of $\field{P}^M$ modulo $p$, and whose height is at most $H$. Expressing each $P_i$ in coprime integer coordinates, we can speak of the evaluation $f_i(P_j) \in \field{Z}$. Consider $$ \Delta := \det \left[ f_i(P_j) \right]_{1 \leq i,j \leq e} \in \mathbb{Z}$$ which measures the volume of the $e$-simplex in $\mathbb{Z}^e$ spanned by the images of the $P_j$ and the origin. On the one hand, $\Delta$ is bounded by a constant multiple of $H^{ke}$. On the other hand, $\Delta$ is highly divisible by $p$, because the values of the $f_j$ modulo powers of $p$ are highly constrained, and therefore there are many relations (modulo powers of $p$) between the rows of the matrix. To see this more formally, fix $r \geq 1$ and denote by $V(\field{Z}_p)_0$ the subset of $V(\field{Z}_p)$ consisting of points with a given reduction modulo $p$. Set $$M_r := \{ \mbox{functions: $V(\field{Z}_p)_0 \rightarrow \field{Z}/p^r$}\}.$$ Each $f_j$ gives an element of $M_r$, by evaluation and reduction modulo $p^r$. For $r=1$, all these functions (for varying $j$) lie in a $\field{Z}/p$-module of rank one: the constant functions. For $r=2$, these functions depend only on the ``constant term'' and ``derivative'' of $f_j$, and thereby lie in a $\mathbb{Z}/p^2$-submodule of $M_2$ of rank $n+1$. For $r=3$ we get a $\mathbb{Z}/p^3$-submodule of $M_3$ of rank $(n+1)(n+2)/2$, where the number comes from counting possible Taylor expansions of $f_j$ up to degree two. Each such statement gives linear constraints on the rows of the matrix $[f_i(P_j)]$, and therefore leads to divisibility of $\Delta$ by powers of $p$. Computing with this we find $$v_p(\Delta) \gtrsim k e \cdot \frac{ d^{1/n} }{1+1/n},$$ where $d$ arises on the right-hand side eventually through the asymptotic behaviour of $e = \dim \Gamma(V, \mathcal{O}(k))$, cf.\ \eqref{growth}.
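The shape of this estimate can be recovered by a heuristic count (a sketch only, not needed in what follows). At level $r$ the $f_j$ lie in a $\field{Z}/p^r$-submodule of $M_r$ of rank $\binom{n+r-1}{n}$, the number of Taylor coefficients of total degree less than $r$ in $n$ variables; this recovers the ranks $1$, $n+1$ and $(n+1)(n+2)/2$ above. If one assumes that the $j$-th row of the matrix contributes $p$-valuation roughly the least $r$ for which $\binom{n+r-1}{n} \geq j$, i.e.\ $r(j) \approx (n! \, j)^{1/n}$, then $$ v_p(\Delta) \gtrsim \sum_{j=1}^{e} r(j) \approx \int_0^{e} (n! \, t)^{1/n} \, dt = \frac{(n!)^{1/n} \, e^{1+1/n}}{1+1/n} \approx k e \cdot \frac{d^{1/n}}{1+1/n}, $$ the last step using $e \sim \frac{d}{n!} k^{n}$ as in \eqref{growth}.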
Choosing $p$ so that $p^{v_p(\Delta)}$ is larger than the size bound $H^{ke}$ then forces $\Delta=0$; so the points $P_i$ lie on a hyperplane of $\mathbb{P}^{e-1}$, i.e.\ all the points of $V$ with a fixed mod $p$ reduction lie on a divisor. So we produce $\approx p^{n}$ divisors covering the points of height $\leq H$. The argument above is so flexible -- in particular, using the freedom to choose $p$ -- that one can achieve bounds that are uniform in $V$. Notice the crucial point: as the degree of $\mathcal{O}(1)$ on $V$ increases, $\mathcal{O}(1)^{\otimes k}$ has more sections for a fixed $k$, giving stronger divisibility for $\Delta$ and thus stronger bounds. \end{rem} \section{Reduction to Theorem \ref{thm2}} We describe how Corollary \ref{Corhyp} is reduced to Theorem \ref{thm2}. \label{Main1} \subsection*{Deduction of Corollary \ref{Corhyp} from Theorem \ref{thm1}} \label{Corproof} We will focus on the first statement of the Corollary, with $S$-integral points, and remark at the end of the proof on the only modification needed to handle the statement about fixed discriminant. For $n=2$ (binary forms of degree three and above) one in fact knows finiteness (Birch--Merriman \cite{BM}). The same is true for the case $n=4, d=3$ of cubic surfaces (Scholl \cite{scholl:delpezzo}). We may now restrict to the remaining cases $n \geq 3, d \geq 3$ and $(n,d) \neq (4,3)$. We will apply Theorem \ref{thm2} taking $X=\mathbb{P}^M$ the projective space parameterizing polynomials of degree $d$ in $n$ variables up to scaling, with $Z$ the zero-locus of the discriminant, and taking the geometric variation of Hodge structure to arise from the middle cohomology of the universal family of hypersurfaces over $X^{\circ}$.
Here the infinitesimal Torelli theorem is known, see \cite[Thm.\ 9.8(b)]{Griffiths}; that is, every fiber of the period map is locally contained in an orbit of $\mathrm{PGL}_n(\field{C})$; so, noting that the Weil height of $[a_0: \dots : a_M]$ is given by $\prod_{v} \max_{i} |a_{i}|_v$ and is thereby bounded by a power of $B$ in the situation of the Corollary, Theorem \ref{thm2} shows that the integral points in question are covered by $O_{\epsilon}(B^{\epsilon})$ orbits of $\mathrm{PGL}_n(\field{C})$ or equivalently $\GL_n(\field{C})$. We must replace $\GL_n(\field{C})$ by $\GL_n(\field{Z}[S^{-1}])$. This is not difficult but one must take care because a hypersurface could have automorphisms in characteristic $p$ that do not lift to characteristic zero. Fix $P_0$ as in the Corollary. We will show that the number of $\GL_n(\field{Z}[S^{-1}])$-orbits on $S$-integral polynomials $P \in \GL_n(\field{C}) P_0$ with $S$-integral discriminant is bounded in terms of $n,d,S$. Let $h$ be the degree of the discriminant polynomial. For any $S$-integral $P \in \GL_n(\field{C}) P_0$ with $S$-integral discriminant, there exists a rescaling of $P$ by an $S$-unit whose discriminant has $p$-valuation in $\{0, 1, \ldots, h-1\}$ for each $p \in S$. It suffices, then, to show that for any integer $N = \prod_{p \in S} p^{a_p}$ (with $0\leq a_p < h$) the set of $S$-integral polynomials in $\GL_n(\field{C}) P_0$ with discriminant $N$ lies in a union of $O_{n,d,S}(1)$ orbits of $\SL_n(\field{Z}[S^{-1}])$. Now, write $Y$ for the affine hypersurface defined by $\mathrm{disc}(P)=N$, which we can regard as an affine scheme over $\field{Q}$ (and even over $\field{Z}$). It is equipped with an action of the $\field{Q}$-algebraic group $G=\mathrm{SL}_n$. We must show that the intersection of $Y(\field{Z}[S^{-1}])$ with any $G(\field{C})$-orbit is covered by $O_{n,d,S}(1)$ orbits of $G(\field{Z}[S^{-1}])$.
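The rescaling step above uses only the fact that the discriminant is homogeneous of degree $h$ in the coefficients: for a scalar $u$, $$ \mathrm{disc}(u P) = u^{h} \, \mathrm{disc}(P), \qquad \text{so} \qquad v_p(\mathrm{disc}(u P)) = v_p(\mathrm{disc}(P)) + h \, v_p(u), $$ and choosing the $S$-unit $u$ with suitable valuations $v_p(u)$ for $p \in S$ brings each such valuation into $\{0, 1, \ldots, h-1\}$.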
We will need: \begin{quote} {\em Claim 1:} The action morphism $G \times Y \rightarrow Y \times Y$ is a {\em finite} morphism of algebraic varieties over $\field{Q}$. \end{quote} This follows essentially from the theorem of Matsumura and Monsky \cite{MM} that each stabilizer $G_y$ for $y \in Y(\field{C})$ is finite. The deduction can be carried out using results of geometric invariant theory to show that the stack of smooth hypersurfaces is separated; see \cite{JLmoduli}. We give a self-contained argument using similar ideas. \proof (of {\em Claim 1.}) It is sufficient to prove this over $\field{C}$, and, since the morphism is quasi-finite, it is sufficient to prove that it is proper. Using the singular value decomposition one reduces to checking that the action of the diagonal subgroup $T = \{ (t_1, \dots, t_n) \text{ with } t_i \in \field{R}_+ \}$ on $Y(\field{C})$ is proper, which can be checked in the analytic topology \cite[XII, Prop 3.2]{SGA1}. In other words, one must show that for any compact regions $\Omega_1$ and $\Omega_2$ in $Y(\field{C})$, the set $\{g \in T: g\Omega_1 \cap \Omega_2 \mbox{ nonempty} \}$ is bounded. It suffices to show that, if $P = \sum a_{i_1 \dots i_n} x_1^{i_1} \cdots x_n^{i_n}$ and $Q = \sum b_{i_1 \dots i_n} x_1^{i_1} \cdots x_n^{i_n}$ and $(t_1, \ldots, t_n) \cdot P = Q$, then the absolute values of the $t_j$ can be bounded in terms of the coefficients of $P$ and $Q$. Write $\Sigma$ for the set of $I = (i_1, \ldots, i_n)$ such that $a_I \neq 0$. For each $t = (t_1, \ldots, t_n)$, write $t^I$ for $\prod_j t_j^{i_j} = \exp(\sum i_j \log t_j)$. Then the maximal absolute value of a coefficient of $(t_1, \ldots, t_n) \cdot P$ is $\max_{I \in \Sigma} |a_I| t^I$, so the condition that $(t_1, \ldots, t_n) \cdot P = Q$ provides upper bounds on $|a_I| t^I$ for all $I \in \Sigma$.
An upper bound on $|a_I| t^I$ restricts $(\log t_1, \ldots, \log t_n)$ to a half-space; we are done once we show that the intersection of these half-spaces over all $I \in \Sigma$ is a compact region in $T$. This is the case exactly when the region \begin{equation} \label{badset} \{ t \in T: \sum i_j \log t_j > 0 \mbox{ for all $I \in \Sigma$}\}\end{equation} is {\em empty}. Suppose otherwise; then there exists $t$ in this region of the form $(t_0^{m_1}, \ldots, t_0^{m_n})$ for some (whence any) $t_0 \in \field{R}_+$ and with $m_j \in \field{Z}$. By assumption on $t$, the limit in the analytic topology $P_0 := \lim_{t_0 \rightarrow 0} (t_0^{m_1}, \ldots, t_0^{m_n}) \cdot P$ exists; explicitly, $P_0 = \sum_{I \in \Sigma_0} a_I x^I$ (writing $x^I = x_1^{i_1} \cdots x_n^{i_n}$), where $\Sigma_0$ is the subset of $\Sigma$ consisting of those $I$ with $\sum i_j m_j = 0$. Then $\mathrm{disc}(P_0) = \lim_{t_0 \to 0} \mathrm{disc} (t \cdot P) = \mathrm{disc}(P)$ is nonzero, so $P_0$ cuts out a smooth affine hypersurface. But the identity $ \sum m_j X_j \frac{\partial P_0}{\partial X_j} = 0$ means that the $n$ conditions $\frac{\partial P_0}{\partial X_j} = 0$ cutting out the nonsmooth locus are dependent, contradicting smoothness of the hypersurface $P_0=0$ (cf.\ the proof in \cite{MM}). Thus the region \eqref{badset} is empty, so the action of $T$ is proper, and so the action of $G$ is proper as well. This concludes the proof of {\em Claim 1.} \qed Take $y_1, y_2 \in Y(\field{Z}[S^{-1}])$. We will now prove \begin{quote} {\em Claim 2:} The action of $\Gal(\overline{\field{Q}}/\field{Q})$ on the stabilizers $G_{y_i}(\overline{\field{Q}})$ and also on the set $G_{12} := \{g \in G(\overline{\field{Q}}): g y_1 = y_2\}$ is unramified outside a set of primes $\mathcal{P}$ depending only on $n,d,S$. \end{quote} \proof (of {\em Claim 2}). It is enough to prove the claim for $G_{12}$; take $y_1=y_2$ to get the claim about stabilizers.
From finiteness of the action map we see that the matrix entries of $g^{\pm 1}$ for $g \in G_{12}$ satisfy monic polynomials whose coefficients are rational polynomials in the coordinates of $y_1$ and $y_2$. Take $A_1(y_1,y_2), \ldots, A_M(y_1,y_2)$ to be the finite collection of all coefficients arising in this way. Now, for any extension of the $p$-valuation on $\field{Q}$ to $\overline{\field{Q}}$, and for every matrix entry $g^{\pm 1}_{ij}$ of $g^{\pm 1}$, we have \begin{equation} \label{cb} |g^{\pm 1}_{ij}|_p \leq \max_k |A_k(y_1, y_2)|_p. \end{equation} Take $P$ larger than any prime in $S$ and such that the coefficients of each $A_i$ are $p$-integral for all $p > P$; it now follows from \eqref{cb} that, for all $p > P$, any element $g$ which lies in $G_{12}$ for any $y_1,y_2 \in Y(\field{Z}[S^{-1}])$ has entries which are roots of monic polynomials with $p$-integral coefficients; in other words, it is $p$-integral. Thus, for all such $p$ there is an induced map $$ G_{12} \rightarrow \SL_n(\overline{\field{Z}}_p) \rightarrow \SL_n(\overline{\mathbb{F}}_p)$$ which is necessarily injective if we also require $p$ to be larger than the order of $G_{12}$, since any torsion element of the kernel of $\SL_n(\overline{\field{Z}}_p) \rightarrow \SL_n(\overline{\mathbb{F}}_p)$ must have $p$-power order. For general constructibility reasons (similar to Lemma \ref{Kleiman}) the order of $G_{12}$ is bounded above; choose $P$ to be larger than this bound. Now for any $\sigma$ belonging to the inertia group at $p > P$, and any $g \in G_{12}$, $g^{\sigma}$ and $g$ have the same image in $\SL_n(\overline{\mathbb{F}}_p)$, so must coincide. This is precisely to say that the Galois action on $G_{12}$ is unramified at $p$, so we have proved {\em Claim 2} with $\mathcal{P}$ the set of primes less than $P$.
\qed Now, with $y \in Y(\field{Z}[S^{-1}])$, there is an injection $$ \mbox{$G(\field{Q})$-orbits on $Gy \cap Y(\field{Q})$} \hookrightarrow \mbox{$G_{y}$-torsors over $\field{Q}$}$$ sending $y' \in Y(\field{Q})$ to the right $G_y$-torsor $\{g: gy=y'\}$. {\em Claim 2} means that, under this map, elements of $Y(\field{Z}[S^{-1}])$ are sent to unramified-away-from-$\mathcal{P}$ torsors for the unramified-away-from-$\mathcal{P}$ group $G_y$, whose order is moreover bounded. Hermite--Minkowski gives an upper bound on the size of unramified Galois $H^1$ in this setting, and we conclude that the {\em $S$-integral} points lying in $Gy \cap Y(\field{Q})$ lie in a collection of $G(\field{Q})$-orbits whose cardinality is bounded in terms of $d,n,S$. Finally, we pass to $G(\field{Z}[S^{-1}])$-orbits using \eqref{cb}. This proves the first statement of the Corollary. In the last sentence of the argument we used \eqref{cb} only at $p \notin S$; but using it also for $p \in S$ allows one to conclude similarly that \begin{equation} \label{wwp2} \{y \in Y(\mathbb{Z}): \mathrm{disc}(y) =N, \mbox{ht.}(y) \leq B\}\end{equation} is covered by $O_{\epsilon}(B^{\epsilon})$ orbits of $\SL_n(\field{Z})$. Here $\mathrm{ht}(y)$ refers to the largest absolute value of a coefficient of $y$, rather than a Weil height. This is the second statement of the Corollary. \subsection*{Reduction of Theorem \ref{thm1} to Theorem \ref{thm2}} Suppose that $X^{\circ}$ is quasi-projective and $X^{\circ}_{\field{C}}$ admits a geometric variation of Hodge structure, i.e.\ there is a morphism $$ \pi_{\field{C}}: \mathfrak{X}_{\field{C}} \rightarrow X_{\field{C}}^{\circ}$$ such that the variation of Hodge structure in the theorem statement is a direct summand of $R^i \pi_{\field{C}*} \field{C}$, for some $i \geq 0$. (That such $\mathfrak{X}$ exists is understood to be the content of the word ``geometric.'') Theorem \ref{thm1} assumes that the period morphism associated to $\mathsf{V}$ is locally finite-to-one.
In this case, the period morphism associated to $R^i \pi_{\field{C}*} \field{C}$ has the same property. The issue to be handled is that $\mathfrak{X}_{\field{C}}$ is defined only over $\field{C}$ whereas Theorem \ref{thm2} requires a $K$-morphism as input. To verify sparsity it will suffice, by descending Noetherian induction and extending the base field, to produce a proper Zariski-closed subset $E \subset X_{\overline{K}}$ with the property that integral points on $X^{\circ}$ that do not lie inside $E$ are sparse. (Warning: this is not the same as studying ``integral points on the quasi-projective variety $X^{\circ} \backslash E$.'') By a standard spreading-out technique we may extend $\pi_{\field{C}}: \mathfrak{X}_{\field{C}} \rightarrow X_{\field{C}}$ over the spectrum of a subring $R \supset K$ of $\field{C}$, finitely generated over $K$: $$ \pi: \mathfrak{X}_R \rightarrow X_R = X \times_{\mathrm{Spec} \ K} S,$$ where $S$ is the spectrum of $R$. This recovers $\pi_{\field{C}}$ upon taking the pullback via the map $s : \Spec \field{C} \rightarrow S$ associated to $R \rightarrow \field{C}$. The image of $s$ is the generic point $\eta_S$ of $S$. Since $S$ is the spectrum of the finitely generated integral domain $R$ over the characteristic-zero field $K$, $S \rightarrow \operatorname{Spec} K$ is smooth at $\eta_S$. Then, deleting the nonsmooth locus, we may suppose that $S$ is a smooth $K$-variety. Consider the morphism of vector bundles supplied by Lemma \ref{KS}, which, applied to the morphism $\mathfrak{X}_R \rightarrow X \times S$, gives a morphism of locally free sheaves \begin{equation} \label{g2} g: T_{X \times S} \otimes \mathcal{H}_1 \rightarrow \mathcal{H}_2\end{equation} over $X \times S$. Let $T_{X} \subset T_{X \times S}$ be the sub-bundle defined by the pullback of the tangent bundle of $X$.
Since $\pi_{\field{C}}$ has finite-to-one period map, the specialization $g_{x,s}|_{T_X}$ is injective for each $x \in X(\field{C})$; the same is then true for a Zariski-open neighbourhood $U$ of $X \times \{\eta_S\}$ inside $X \times S$. Choose $s' \in S(\overline{K})$ such that $X(\overline{K}) \times \{s'\}$ meets $U(\overline{K})$. The fiber of $\mathfrak{X}_R \rightarrow X_R$ over $s'$ gives a proper smooth morphism of $\overline{K}$-varieties $\mathfrak{X}_{s'} \rightarrow X_{\overline{K}}$. The associated period map has generically injective derivative; let $E \subset X_{\overline{K}}$ be the locus where its derivative has a nontrivial kernel, i.e., where $g_{x,s'}|_{T_X}$ fails to be injective. $E$ is a proper Zariski-closed subset of $X$ defined over some finite extension $K_1 \supset K$, and Theorem \ref{thm2} (applied after passage to $K_1$) implies that integral points on $X^{\circ}$ that lie in the complement of $E$ are sparse. This establishes the inductive step, and therefore concludes the reduction of Theorem \ref{thm1} to Theorem \ref{thm2}. \section{Proof of Theorem \ref{thm2}} The remainder of the paper is devoted to the proof of Theorem \ref{thm2}. We use notation as in the statement. We will fix throughout a good integral model for both $(X,Z)$ and the morphism $\mathfrak{X} \rightarrow X^{\circ}$, as in Definition \ref{goodmodel}. As we have noted, the proof involves showing that integral points of $X$ lie on various collections of subvarieties, whose dimension will be steadily reduced until they are either points or fibers of the period map. The key inductive statement used to reduce the dimension of the subvarieties is Lemma \ref{lem:induction}. It may be helpful to note, in advance, that we will not need to keep track of any {\em integral} structure on our subvarieties. The notion of ``integral point on a subvariety'' will simply mean a $K$-rational point of the subvariety that is integral as a rational point on $X$.
\subsection{Large fundamental group.} \label{large} Enlarging $S$ if necessary, we fix a prime $p \in S$ for which the integral cohomology of complex fibers of $\mathfrak{X}$ over $X^{\circ}$ is $p$-torsion-free, say of rank $r$ over $\field{Z}$. Fixing $x \in X^{\circ}(\field{C})$ we get a monodromy representation of the topological fundamental group \begin{equation} \label{Gmdef} \pi_1(X^{\circ, {\text{an}}}, x) \longrightarrow G_n := \mathrm{Aut}(H^i(\mathfrak{X}_x, \field{Z}/p^n)) \simeq \GL_r(\field{Z}/p^n \field{Z}).\end{equation} In this setting, the {\em global invariant cycle theorem} (\cite[Corollaire 4.1.2 and 4.1.3.3]{Deligne_Hodge2}) implies the following statement: For any complex irreducible subvariety $\iota: V \hookrightarrow X_{\field{C}}$, not contained in $Z_{\field{C}}$ and with $V^{\circ}$ not contained in a fiber of the period map, the image of the monodromy representation \begin{equation} \label{i-inf} \pi_1(V^{\circ,{\text{an}}}) \rightarrow G_n \end{equation} (topological $\pi_1$, taken for an arbitrary choice of basepoint) has size that grows without bound as $n \rightarrow \infty$. Indeed \cite[Corollaire 4.1.2 and 4.1.3.3]{Deligne_Hodge2} applies, after passing from $V$ to the smooth part $V'$ of its intersection with $X^{\circ}$, to show that the image of $$ \pi_1({V'}^{{\text{an}}}) \longrightarrow \mathrm{Aut} \ H^i(\mathfrak{X}_x, \field{Q})$$ is infinite. In particular, the image of the monodromy representation of $\pi_1((V^{\circ})^{{\text{an}}})$ on $H^i(\mathfrak{X}_x, \field{Z})$ is infinite, so the size of the image of monodromy on $H^i(\mathfrak{X}_x, \field{Z})/p^n$ grows without bound as $n \rightarrow \infty$. These results transpose, as usual, to the {\'e}tale topology.
Indeed, $R^i \pi^{\mathrm{et}}_*(\field{Z}/p^n \field{Z})$ defines a locally constant {\'e}tale sheaf of $\field{Z}/p^n$-modules on $X^{\circ}$, which, by the local constancy of direct images for a smooth proper morphism (\cite[Theorem 5.3.1]{SGA4.5}), extends to a locally constant {\'e}tale sheaf on the good integral model $X^{\circ}_S$ over $\mathfrak{o}_S$. For $V$ as above, standard comparison theorems show that the homomorphism in \eqref{i-inf} factors as \[ \pi_1(V^{\circ,{\text{an}}}) \rightarrow \pi_1^{\mathrm{et}}(V^{\circ}) \rightarrow G_n, \] so the image of {\'e}tale $\pi_1$ \begin{equation} \label{fff} \pi_1^{\mathrm{et}}(V^{\circ}) \rightarrow G_n\end{equation} (again, with an arbitrary geometric basepoint in $V^{\circ}$) has size that grows without bound as $n \rightarrow \infty$. Next, let $E \supset K$ be an arbitrary algebraically closed field, with base change $X_E := X \times_{K} E$; let $\iota: V \hookrightarrow X_E$ be a closed $E$-subvariety not contained in $Z_E$ or a fiber of the period map. (Note that we can make sense of the latter condition without reference to $\field{C}$ by using Lemma \ref{KS}: by ``$V$ is contained in a fiber of the period map'' we mean that the associated morphism of vector bundles is zero on the smooth locus of $V$.) We get by base change $\pi_E: \mathfrak{X}_E \rightarrow X_E$ and an {\'e}tale local system $R^i \pi^{\mathrm{et}}_{E*} (\field{Z}/p^n \field{Z})$ on $X_E^{\circ}$; and the same conclusion as above holds, i.e.\ the monodromy representation \eqref{fff} for $V$ on $R^i \pi^{\mathrm{et}}_{E*} (\field{Z}/p^n \field{Z})$ has ``large image'' in the sense specified above. We will use this only in the case when $E$ is the algebraic closure of a finitely generated field; we may then choose a $K$-embedding $\sigma: E \rightarrow \field{C}$, and thus also $V_{\field{C}} \subset X_{\field{C}}$ compatibly with $V \subset X_E$.
The local system $R^i \pi^{\mathrm{et}}_{E*} (\field{Z}/p^n \field{Z})$ on $V_E$ pulls back to the similarly defined system on $V_{\field{C}}$, so ``large monodromy'' for $V$ follows from the same statement for $V_{\field{C}}$. \subsection*{Construction of a suitable cover of $X$} \label{bigcover} Our proof will involve an induction over subvarieties of $X$ of higher and higher codimension, about which we know almost nothing apart from their degree. It is thus crucial to have at hand covers of $X$ whose monodromy is uniformly bounded below on restriction to {\em every} subvariety of bounded dimension and degree. \begin{lem}\label{lem31} Fix $d, n, D \geq 1$, and let $H$ be any (locally closed, finite-type) complex subvariety of the Hilbert scheme of subschemes of $X_{\field{C}}$ of degree $\leq d$ and dimension $n$. There are a finite group $G$ and a finite morphism $f: \widetilde{X} \to X$ of $K$-varieties, equipped with an injection $G \hookrightarrow \mathrm{Aut}(\widetilde{X} / X)$, such that: \begin{itemize} \item[(a)] $f|_{X^{\circ}}$ is finite {\'e}tale Galois with deck group $G$, and, moreover, extends to a finite {\'e}tale cover of the good integral model $X^{\circ}_S$. \item[(b)] Let $U \subset X$ be any $n$-dimensional irreducible closed complex subvariety of degree $\leq d$. Suppose that: \begin{itemize} \item The point of the Hilbert scheme classifying $U$ lies in $H(\field{C})$, and \item $U$ is not contained in $Z$ and $U^{\circ,{\text{an}}}$ is not contained in a single fiber of the period map $\Phi$. \end{itemize} Let $Q$ be any irreducible component of $f^{-1} U$, endowed with the reduced structure, and such that the induced finite map $f:Q \rightarrow U$ is dominant (note that it is automatically \'etale over $U^{\circ}$). Then the degree of $f:Q \rightarrow U$ at the generic point is $\geq D$, i.e., the induced extension of function fields has degree $\geq D$.
\end{itemize} \label{lem:bigcover} \end{lem} \begin{proof} Through the rest of this proof, $U$ will denote a single subvariety of $X$, classified by a point of the Hilbert scheme, and we will use $\mathcal{U}$ for the universal family. We are going to find a cover $f: \widetilde{X} \to X$ and a proper Zariski-closed subset $H_1 \subseteq H$ such that the conclusion of (b) holds for any $U=U_h$ satisfying the assumptions of (b), with $h \in (H-H_1)(\field{C})$. The result will follow by Noetherian induction. In particular, removing the singular locus of $H$ at the start, we may suppose that $H$ is smooth. Take a geometric generic point $\eta \rightarrow H$. Let $U_{\eta} \subset X_{\eta}$ be the corresponding generic subscheme. We may assume without loss of generality that: \begin{quote} {\em Situation:} \begin{itemize} \item[(a)] Every geometric fiber of $\mathcal{U} \rightarrow H$ is integral; \item[(b)] Every fiber of $\mathcal{U} \rightarrow H$ meets $X^{\circ}$; \item[(c)] On each fiber $\mathcal{U}_h$ for $h \in H(\field{C})$ the period map is not locally constant. \end{itemize} \end{quote} For (a), note that the locus of points of the base with geometrically integral fiber is open by \cite[12.2.1(x)]{EGAIV3}, so if there exists one $h \in H(\field{C})$ for which the fiber $U_h$ is integral, then (after shrinking $H$ to a suitable nonempty open neighbourhood) we can suppose it is true for all $h$. If there is no such $h$, then the conclusion of the theorem holds for $H$ vacuously. For (b), note that the set of $h$ for which $U_h$ meets $X^{\circ}$ is constructible. To see this, first note that $U_h$ is reduced for every $h$. Now note that $U_h$ meets $X^{\circ}$ if and only if the inclusion $U_h \cap Z \rightarrow U_h$ is not surjective, and apply \cite[9.6.1(i)]{EGAIV3}. Thus, restricting to an open subset of $H$, we can assume that $U_h$ meets $X^{\circ}$ either for no $h$ or for all $h$.
In the former case, the statement is vacuously true; so we can assume that (b) holds for all $h$. For (c), let $\mathcal{U}'$ be the smooth locus of the morphism $\mathcal{U} \rightarrow H$, which, by flatness of the morphism, coincides with the locus of points which are smooth points of their fibers (\cite[17.5.1]{EGAIV4}). Note that: \begin{itemize} \item $\mathcal{U}'$ is itself smooth over $\mathrm{Spec} \ K$, since it is smooth over $H$ and $H$ was assumed smooth. \item $\mathcal{U}'$ contains an open dense subset of every fiber, since these fibers are all integral. \item $\mathcal{U}'$ is a $K$-variety: this follows from the previous conditions. It is reduced by smoothness, and since $\mathcal{U}' \rightarrow H$ is flat (\cite[Tag 01VF]{Stacks}), $H$ is irreducible, and the fibers are irreducible, it readily follows (\cite[Tag 004Z]{Stacks}) that $\mathcal{U}'$ is itself irreducible. \end{itemize} The tangent bundle $T_{\mathcal{U}'}$ has a sub-bundle $T_{\mathcal{U}'/H}$ made up of ``vertical'' vector fields. Restricting the morphism of Lemma \ref{KS} to this sub-bundle we get $$ g: \mathcal{H}_1 \otimes T_{\mathcal{U}'/H} \rightarrow \mathcal{H}_2.$$ Now, we may certainly assume there is some $h \in H(\field{C})$ such that $\mathcal{U}_h$ satisfies the conditions of (b) in the statement of the Lemma, or else the Lemma once again holds vacuously. In particular, there exists a point $u \in \mathcal{U}_h(\field{C})$, smooth in the fiber $\mathcal{U}_h$, such that $g_u$ is nonzero. It follows that $g$ is nonzero on a nonempty Zariski-open subset of $\mathcal{U}'$; the image of this Zariski-open set under the dominant morphism $\mathcal{U}' \rightarrow H$ contains a nonempty open subset of $H$, and we replace $H$ by this open set to obtain part (c) of the {\em Situation}. So we proceed assuming ourselves to be in the {\em Situation} above.
We continue to write $\mathcal{U}'$ for the smooth locus of $\mathcal{U}/H$ and $\mathcal{U}'^{\circ}$ for the preimage of $X^{\circ}$ in $\mathcal{U}'$. Recall that our assumptions guarantee that $\mathcal{U}'^{\circ}$ is fiberwise dense in $\mathcal{U}'$. By \S \ref{large} we can find an $m$ for which the image of the {\em geometric} monodromy representation of $\pi_1(U_{\eta}^{'\circ})$ in $G_m$ has size at least $D$ (same notation as in \S \ref{large}). This choice of $m$ determines a finite \'etale Galois cover of $X^{\circ}$ with Galois group $G_m$, which extends to a finite {\'e}tale Galois cover of the $\mathfrak{o}_S$-model $X^{\circ}_S$. Let $f: \widetilde{X} \rightarrow X$ be the normalization of $X$ in this $G_m$-cover; then $\widetilde{X}$ is a normal $K$-variety and the morphism $f$ is finite (although not necessarily flat). The action of $G_m$ by deck transformations above $X^{\circ}$ extends uniquely to a $G_m$-action on the morphism $f$. The morphism $f$ gives of course a morphism $f: \widetilde{X}_{\eta} \rightarrow X_{\eta}$ after base-change from $\operatorname{Spec} \ K$ to $\eta$. The restricted map \begin{equation} \label{fU0} f_{\eta}^{-1} \mathcal{U}_{\eta}'^{\circ} \rightarrow U_{\eta}'^{\circ} \end{equation} is finite {\'e}tale and has degree $\geq D$ restricted to each geometric component of the source, by choice of $f$. Now this morphism is the geometric generic fiber of a finite {\'e}tale morphism of smooth $H$-schemes: \begin{equation} \label{fU} f^{-1} \mathcal{U}'^{\circ} \rightarrow \mathcal{U}'^{\circ}\end{equation} and we want to draw the same conclusion about degrees for the fibers of \eqref{fU} over a nonempty open subset of $H$. This will imply the desired conclusion, for -- with $Q$ as in the statement -- the assumed dominance implies that $Q \cap f^{-1} U'^{\circ}$ is an open nonempty subset of $Q$.
We now use \cite[Lemma 055A]{Stacks} (see also \cite[Prop.\ 9.7.8]{EGAIV3} and references therein), which guarantees the existence of a morphism $g: H' \rightarrow H$ (which in fact factors as a finite {\'e}tale surjection followed by an open immersion) such that, after base change of $f^{-1} \mathcal{U}'^{\circ} \rightarrow \mathcal{U}'^{\circ}$ by $g$ -- i.e.\ replacing $\mathcal{U}'^{\circ}$ by $\mathcal{U}'^{\circ} \times_{H} H'$ and similarly for $f^{-1} \mathcal{U}'^{\circ}$ -- the following assertions hold: \begin{itemize} \item[(a)] Each irreducible component of the generic fiber $(f^{-1} \mathcal{U}'^{\circ})_{\eta'}$ (with $\eta'$ the generic point of $H'$ -- not a geometric generic point here) is in fact a geometrically irreducible component of that generic fiber. \item[(b)] Let $\overline{Z_1}, \dots, \overline{Z_r}$ be the Zariski closures of these generic irreducible components $Z_1, \dots, Z_r$ inside $f^{-1} \mathcal{U}'^{\circ}$. These $\overline{Z_i}$ give, upon intersection with the fiber $(f^{-1} \mathcal{U}'^{\circ})_{h'}$ above any $h' \in H'$, the decomposition of that fiber into irreducible components, and indeed each of these irreducible components is geometrically irreducible. \end{itemize} In the decomposition of (b) $$ (f^{-1} \mathcal{U}'^{\circ}) = \coprod \overline{Z_{\alpha}}$$ the sets $\overline{Z_{\alpha}}$ are disjoint. Indeed, upon restriction to each fiber, this decomposition recovers the decomposition of $f^{-1} \mathcal{U}'^{\circ}_{h'}$; however, this is finite {\'e}tale over $\mathcal{U}'^{\circ}_{h'}$ and Lemma \ref{disjointness} implies the disjointness. In particular, the $\overline{Z_{\alpha}}$ are both closed and open, and in particular inherit a scheme structure as open sets in $f^{-1} \mathcal{U}'^{\circ}$. The restriction of the map $f$ to each $\overline{Z_{\alpha}}$ is then a finite {\'e}tale map $f_{\alpha}:\overline{Z_{\alpha}} \rightarrow \mathcal{U}'^{\circ}$.
The degree of such a map is locally constant on the base, and here, by assumption, that degree is $\geq D$ everywhere on the generic fiber $\mathcal{U}'^{\circ}_{\eta'}$. That generic fiber is dense because it contains every generic point of $\mathcal{U}'^{\circ}$, and consequently the degree of $f_{\alpha}$ is everywhere $\geq D$. Restricting to a single fiber $\mathcal{U}'^{\circ}_{h'}$ for $h' \in H'$ gives the desired conclusion -- that is, the bound stated in (b) of the Lemma holds for the fibers $U_h$ for all $h$ in a nonempty open subset of $H$, explicitly, the image of $H' \rightarrow H$. Finally, we conclude by Noetherian induction. We have shown that, given $H$, there is a Zariski-closed $H_1 \subseteq H$ and an $m > 0$, such that the image of geometric monodromy in $G_m$ gives an \'etale cover of $X$ that satisfies the required properties for any $h \in (H - H_1)(\mathbb{C})$. There is no harm in replacing $G_m$ by $G_{m'}$ for $m' \geq m$. Thus we can apply Noetherian induction to find one $m$ that works for all $h \in H(\mathbb{C})$. \end{proof} \subsection*{Bounding rational points on a subvariety} \label{ss:induction} The following result is the key inductive step. We note that {\em all constants appearing in this discussion are permitted to depend on the variety $X$, and indeed on the integral model chosen in \S \ref{integralmodel}, without explicit mention.} \begin{lem} \label{lem:induction} Let $V$ be a geometrically irreducible closed subvariety of $X$ defined over $K$, of dimension $n$ and degree $d$, such that $V_{\field{C}}$ is not contained in $Z$ and $V^{\circ {\text{an}}}$ is not contained in a fiber of $\Phi$. Then all integral points of $X^\circ$ of height $\leq B$ that lie on $V$ can be covered by $O_{d,\epsilon} (B^{\epsilon})$ irreducible (but not necessarily geometrically irreducible) subvarieties, all defined over $K$, with dimension $\leq n-1$ and degree $O_{d, \epsilon}(1)$.
\end{lem} \begin{proof} Choose $D$ so that $\frac{n+1}{D^{1/n}} < \epsilon$. Let $\mathcal{P}$ be the (finite, by Lemma \ref{Kleiman}) set of Hilbert polynomials that arise from irreducible subvarieties of $X$ of dimension $n$ and degree $d$, and let $H_{\mathcal{P}}$ be the associated Hilbert scheme. Write $H^{red}_{\mathcal{P}}$ for the reduced induced closed subscheme of $H_{\mathcal{P}}$. For this choice of $n, d, D$ and $H=H^{red}_{\mathcal{P}}$, take a finite group $G$ and a finite morphism $f: \widetilde{X} \rightarrow X$ as provided by Lemma~\ref{lem:bigcover}. We note that $H_{\mathcal{P}}(\field{C}) = H^{red}_{\mathcal{P}}(\field{C})$, so the passage to the reduced subscheme structure is irrelevant for the statements on complex subvarieties proved in Lemma~\ref{lem:bigcover}. For every $x \in X^{\circ}(K)$, the action of $G$ on the fiber of $\widetilde{X}$ over $x$ defines a class in the Galois cohomology group $H^1(\Gal(\overline{K}/K), G)$. Concretely, since the Galois action on $G$ is trivial, $H^1(\Gal(\overline{K}/K), G)$ classifies homomorphisms $\rho: \Gal(\overline{K}/K) \rightarrow G$ up to conjugacy. Choosing a point $\widetilde{x}$ above $x$, we define a homomorphism $\rho$ by the rule \begin{equation} \label{tww} \sigma(\widetilde{x}) = \rho(\sigma) \cdot \widetilde{x}\end{equation} for $\sigma$ in the Galois group. If $x$ is $S$-integral, this homomorphism $\rho$ is in fact unramified outside $S$. Such a $\rho$ can also be used to twist $\widetilde{X} \rightarrow X$, namely, one modifies the Galois action on $\widetilde{X}$ through $\rho$; and then \eqref{tww} means precisely that $x$ will lift to a $K$-rational point on the twist of $\widetilde{X}$ indexed by $\rho$. (See $\S$ 4.5 and Thm.\ 8.4.1 of \cite{PoonenRP} for further discussion.) There are only finitely many homomorphisms $\Gal(\overline{K}/K) \rightarrow G$ unramified outside $S$; call them $\rho_1, \rho_2, \dots, \rho_R$. This list does not depend on $B$.
Each such $\rho_j$ can be used to twist $f$ to a map $f_j:\widetilde{X}_j \rightarrow X$. Our previous discussion now shows that any integral point of $X^\circ$ lifts along some $f_j$ to a point of $\widetilde{X}_j(K)$. For a sufficiently large integer $e$ the pullback $(f_j^* \mathcal{L})^{\otimes e}$ is very ample and defines, after fixing a basis of sections, a projective embedding $\widetilde{X}_j \hookrightarrow \mathbb{P}^{M_j}$. Now the data of the diagram of $K$-varieties and line bundles \begin{equation} \label{diag} (X, \mathcal{L}^{\otimes e}) \stackrel{f_j}{\longleftarrow}(\widetilde{X}_j, (f_j^* \mathcal{L})^{\otimes e}) \hookrightarrow (\mathbb{P}^{M_j}, \mathcal{O}(1))\end{equation} depends on various choices, but these choices can (and will) be made once and for all depending only on $d,\epsilon$. Then for $P \in \widetilde{X}_j(K)$ we get \begin{equation} \label{imp} H_\mathcal{L}(f_j(P))^e \asymp H_{f_j^* \mathcal{L}}(P)^e \asymp H_{\mathbb{P}^{M_j}}(P) \end{equation} where the symbol $\asymp$ means that the ratio is bounded above and below by constants that may depend on $f_j$. Since there are only finitely many $f_j$, and their coefficients are bounded in terms of $d$ and $\epsilon$ (and, as always, $X$ and $S$) but do not depend on $B$, these constants depend only on $d$ and $\epsilon$. Therefore, we have shown that the integral points of $X^\circ$ with height $\leq B$ belonging to $V$ all have the form $f_j(P)$, where $P$ is a $K$-rational point of $f_j^{-1}(V)$ with $H_{\mathbb{P}^{M_j}}(P) \leq c_{d,\epsilon}B^e$. It will suffice to prove the conclusion for those $P$ for which $f_j(P)$ is a {\em smooth} point of $V$, simply by including each irreducible component of the singular locus of $V$ in the list of subvarieties (see Lemma \ref{Kleiman} part (d) for the necessary bounds). Let $V' \subset V$ be the (open) smooth locus.
Consider those geometric components $Q^{\circ} \subset (f_j^{-1} V')^{\circ}$ that have a $K$-rational point. Because $(f_j^{-1} V')^{\circ}$ is a finite {\'e}tale cover of the geometrically irreducible smooth $K$-variety $V'^{\circ}$, its geometric components are pairwise disjoint (Lemma \ref{disjointness}) and permuted by the Galois group; so any such $Q^{\circ}$ is defined over $K$, and the number of such $Q^{\circ}$ is bounded by the order of the group $G$. The Zariski closure $Q$ of any $Q^{\circ}$ is again geometrically irreducible and defined over $K$; we understand it to be endowed with its reduced scheme structure. The map $f_j: \widetilde{X}_j \rightarrow X$ induces a compatible map $f_j: Q \rightarrow V$, which is dominant since, by construction of $Q$, the image contains a nonempty open set of $V'^{\circ}$. Indeed, $f_j: Q \rightarrow V$ is {\'e}tale over $V^{\circ}$, with degree between $D$ and $\# G$; the lower bound comes from (b) of Lemma \ref{lem31}, using also the fact that $f_j$ is a twist of $f$. The degree of $V$ with respect to $\mathcal{L}^{\otimes e}$ is $d e^n$, and therefore, by Lemma \ref{degree}, the degree of $Q$, considered as a closed subvariety of $\mathbb{P}^{M_j}$ via \eqref{diag}, satisfies $$ D d e^n \leq \mathrm{deg}\, Q \leq (\# G) d e^n.$$ We apply Theorem \ref{Brobs} to each $Q$ that arises in the above fashion, i.e.\ to the Zariski closure of any irreducible geometric component of $(f_j^{-1} V')^{\circ}$ that has a $K$-point. Theorem \ref{Brobs} and our choice of $D$ imply that the set of rational points of $Q$ of height $\leq c B^e$ is supported on a set of proper closed subvarieties of $Q$ of degree $O_{d,\epsilon}(1)$ with cardinality $\ll_{d,\epsilon} B^{2\epsilon}$. These subvarieties are defined over $K$ and need not be geometrically irreducible.
For any such $Q$ and any such proper subvariety $Y \subset Q$, the scheme-theoretic image $f_j(Y)$ under the finite map $f_j$ is a proper subvariety $f_j(Y) \subset V$, in particular of dimension $\leq n-1$. Moreover, $f_j$ restricts to a finite map $Y \rightarrow f_j(Y)$. By Lemma \ref{degree} the $\mathcal{L}$-degree of $f_j(Y)$ is no larger than the $f_j^* \mathcal{L}$-degree of $Y$, in particular $O_{d,\epsilon}(1)$. The number of maps $f_j$ depends only on $d,\epsilon$, and the number of $Q$ arising is then at most the number of $f_j$ multiplied by the order of $G$, which is again $O_{d,\epsilon}(1)$. Consequently, the number of $Y$ arising as in the prior paragraph is $O_{d,\epsilon}( B^{2\epsilon})$, concluding the proof (after the obvious scaling $\epsilon \leftarrow \epsilon/2$). \end{proof} \subsection{Conclusion of the proof of Theorem \ref{thm2}} \label{conclude} \begin{proof} Fix $\epsilon > 0$. We use descending induction via Lemma \ref{lem:induction}. The inductive statement is the following: \begin{quote} $(\star)_n$: For every $n$ with $0 \leq n \leq \dim X$, there exists an integer $d_n$ with the following property: for all $B > 0$, the $S$-integral points of $X^\circ$ of height at most $B$ are covered by a collection of \[ O_{\epsilon}(B^{(\dim X - n) \epsilon}) \] irreducible subvarieties of $X$, all defined over $K$, each of which is either \begin{itemize} \item[--]$\textrm{(a)}_n$: a subvariety of dimension $\leq n$ and degree $\leq d_n$, or \item[--]$\textrm{(b)}$: a geometrically irreducible subvariety that is contained in a single fiber of the period map. \end{itemize} \end{quote} The base case is given by $n = \dim X$, in which case, of course, the single subvariety $X \subseteq X$ suffices. The implication $(\star)_n \implies (\star)_{n-1}$ follows from Lemma~\ref{lem:induction}: Let $\mathcal{V}_n$ be the collection of $n$-dimensional varieties in the statement of $(\star)_n$.
For each $V \in \mathcal{V}_n$, we will construct a set of varieties covering all the integral points of $X^{\circ}$ lying on $V$. We subdivide into cases: \begin{itemize} \item $V$ is not geometrically irreducible. In this case, we take the set $\{V_i\}$ of subvarieties given by part (c) of Lemma \ref{Kleiman}. These varieties number at most $O_{n,d_n}(1)$ and they have dimension $\leq n-1$ and degree $O_{n,d_n}(1)$. \item $V$ is geometrically irreducible but $V^{\circ {\text{an}}}$ is contained in a fiber of $\Phi$: then we take the singleton set $\{V\}$. \item $V$ is contained in $Z$; in this case we can take the empty set $\emptyset$. \item $V$ is geometrically irreducible and not contained in $Z$, and $V^{\circ{\text{an}}}$ is not contained in a fiber of $\Phi$; then we may apply Lemma \ref{lem:induction} to show that integral points of height $\leq B$ on $V$ are covered by $O_{d_n,\epsilon}(B^{\epsilon})$ irreducible $K$-varieties of dimension $\leq n-1$ and degree $O_{d_n,n,\epsilon}(1)$. \end{itemize} We take $d_{n-1}$ to be the largest of the implicit constants $O_{n,d_n}(1)$ and $O_{n, d_{n}, \epsilon}(1)$ appearing in the above proof. Then, to sum up, by $(\star)_n$ we know that the $S$-integral points of $X^\circ$ of height at most $B$ are covered by $O_{\epsilon}(B^{(\dim X - n) \epsilon})$ subvarieties $V$ satisfying either $\textrm{(a)}_n$ or (b), and we know that for each of those $V$, the subset of those points lying on $V$ is covered by $O_{\epsilon}(B^{\epsilon})$ subvarieties satisfying either $\textrm{(a)}_{n-1}$ or (b); together, these facts yield $(\star)_{n-1}$. We emphasize that this is the point in the argument where the uniformity in Broberg's result is crucial.
We have no control over the heights of the varieties making up the collection $\mathcal{V}_n$, and indeed these heights will grow with $B$; but since the implicit constants in Lemma~\ref{lem:induction} depend only on $X$ and $\epsilon$, not on $V$, this lack of control does not present a problem. The case $n = 0$ gives the Theorem. \end{proof} \section{Author affiliations}

Jordan S.\ Ellenberg, University of Wisconsin

Brian Lawrence, University of California, Los Angeles; \url{[email protected]}

Akshay Venkatesh, Institute for Advanced Study \end{document}
\begin{document} \title{Geometric phase gates in dissipative quantum dynamics} \author{Kai Müller} \author{Kimmo Luoma} \email{[email protected]} \author{Walter T. Strunz} \affiliation{Institut f{\"u}r Theoretische Physik, Technische Universit{\"a}t Dresden, D-01062 Dresden, Germany} \date{\today} \begin{abstract} Trapped ions are among the most promising candidates for performing quantum information processing tasks. Recently, it was demonstrated how the properties of geometric phases can be used to implement an entangling two-qubit phase gate with significantly reduced operation time while having a built-in resistance against certain types of errors {(Palmero et al., Phys. Rev. A {\bf 95}, 022328 (2017))}. {In this article, we investigate the influence of both quantum and thermal fluctuations on the geometric phase in the Markov regime.} We show that additional environmentally induced phases as well as a loss of coherence result from the non-unitary evolution, {even at zero temperature}. We connect these effects to the associated dynamical and geometrical phases. This suggests a strategy to compensate the detrimental environmental influences and restore some of the properties of the ideal implementation. Our main result is a strategy, at zero temperature, to construct forces for the geometric phase gate which compensate the dissipative effects and leave the produced phase as well as the final motional state identical to the isolated case. We show that the same strategy also helps at finite temperatures. Furthermore, we examine the effects of dissipation on the fidelity and the robustness of a two-qubit phase gate against certain error types. \end{abstract} \maketitle \section{Introduction} During the last decades there has been an increasing effort to develop reliable, large-scale quantum information processors.
Since such a device could utilize quantum properties like superposition and entanglement, its computing power could potentially surpass every conceivable classical device for certain problems \cite{Feynman1982,Nielson&Chuang}, with potential applications in various fields of science and technology. At the moment there are several physical realizations being developed in parallel, each with its own benefits and drawbacks. One of the most advanced platforms for quantum information processing is based on {trapped ions} \cite{Linke3305}, where many elementary operations have already been experimentally demonstrated with high precision \cite{PhysRevLett.113.220501,PhysRevLett.117.060504,PhysRevLett.117.060505}. To date, however, there are still various difficulties to overcome \cite{Wineland1998}. One of the most severe issues is dissipation and decoherence resulting from the interaction of the quantum system with the environment, leading to detrimental effects on the quantum resources and to quantum gate errors. Although there exist quantum error correction schemes that can compensate small errors of the quantum gates, these only allow error rates of roughly 1\% and come at the cost of a high computational overhead \cite{errorrates}. This means that in order to construct an efficient quantum information processor it is necessary to reduce the error rates of the individual quantum gates as much as possible. It is therefore important to have a good understanding of the environmental effects and how to compensate for them. In this work we specifically focus on two-qubit phase gates, which perform the following operation \begin{align} \label{eq:phaseGateOperation} &\ket{00} \to \ket{00},\hspace{20pt} \ket{11} \to \ket{11},\notag\\ &\ket{01} \to e^{i\Phi}\ket{01}, \hspace{6pt}\ket{10} \to e^{i\Phi} \ket{10}.
\end{align} Two-qubit phase gates are important since they can be used to convert the separable state $ 1/2(\ket{11} + \ket{10} + \ket{01} + \ket{00} ) $ into a maximally entangled state $ 1/2(\ket{11} + i\ket{10} + i\ket{01} + \ket{00} ) $. First experimental implementations were realized over a decade ago \cite{Sackett2000,experimentalDemonstration}, based on theoretical proposals in \cite{PhysRevLett.82.1971,PhysRevA.59.R2539,MSJ}. However, due to recent efforts in theory \cite{PhysRevA.95.022328,Steane_2014,PhysRevA.71.062309,PhysRevLett.91.157901} and experimental techniques \cite{Schafer2018}, the operation times and error rates have been significantly reduced. These realizations leverage the idea of geometric phases, first introduced by Berry \cite{Berry45,PhysRevLett.58.1593}, whereby the cyclic evolution of a quantum state results in the acquisition of a phase. {The aim of this article is to investigate the effects of quantum and thermal fluctuations on the geometric phase gate given by Eq.~(\ref{eq:phaseGateOperation}). We show how to extend the ideal (fluctuation-free) implementation of the gate given in~\cite{PhysRevA.95.022328} in order to compensate the detrimental environmental effects.} {The outline of the remainder of the article is the following.} In Sec.~\ref{sec:model} we first review the ideal isolated case, introduce our notation and then present our open system model in the context of a single trapped ion. In Sec.~\ref{sec:geometricPhases} we then show how dissipation leads to additional phases and in which way they can be connected to the conventional geometrical and dynamical phases. Furthermore we show which conditions the experimental protocol must satisfy in order to implement a phase gate and how the sensitivity of the gate against small experimental errors is altered compared to the case where the system is perfectly isolated from its environment.
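As a quick numerical aside (not part of the original analysis), the entangling action of the gate in Eq.~(\ref{eq:phaseGateOperation}) with $\Phi = \pi/2$ can be checked directly: the Schmidt coefficients of the output state both equal $1/\sqrt{2}$, the signature of a maximally entangled two-qubit state. The sketch below uses NumPy; all variable names are our own.

```python
import numpy as np

# Two-qubit phase gate of Eq. (eq:phaseGateOperation): diagonal in the
# computational basis |00>, |01>, |10>, |11>, phase Phi on |01> and |10>.
phi = np.pi / 2
gate = np.diag([1, np.exp(1j * phi), np.exp(1j * phi), 1])

# Separable input state (|00> + |01> + |10> + |11>)/2
psi_in = 0.5 * np.ones(4, dtype=complex)
psi_out = gate @ psi_in

# Schmidt coefficients: singular values of the 2x2 coefficient matrix.
# Two equal coefficients 1/sqrt(2) indicate maximal entanglement.
schmidt = np.linalg.svd(psi_out.reshape(2, 2), compute_uv=False)
print(np.round(schmidt, 6))  # prints [0.707107 0.707107]
```

The same check with $\Phi = 0$ yields Schmidt coefficients $(1, 0)$, i.e.\ a product state, which isolates the role of the acquired phase.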
In Sec.~\ref{sec:twoQubit} we then apply our results to the two-qubit phase gate protocol proposed in~\cite{PhysRevA.95.022328} and examine the impact on the fidelity. {In Sec.~\ref{sec:finiteTemperatureEffects} we use the results of the previous sections to draw conclusions for the finite temperature case.} We close the {main part of the article} with a summary and an outlook. {Lastly, some of the analytic computations are presented in the Appendix.} \section{Model} \label{sec:model} In this section we first introduce a model {for} an isolated phase gate. Then we expand the model to account for {detrimental environmental effects.} \subsection{Isolated system} We consider the ion trap as a quantum harmonic oscillator with mass $m$ and frequency $\omega$ which is driven by an external force, leading to the Hamiltonian \begin{equation} \label{eq:modelIsolated} H_{{\mathrm{isol}}}(t) = \hbar\omega a^{\dagger}a + V(t). \end{equation} Here $a$ $(a^\dagger)$ is the annihilation (creation) operator for the vibrational mode satisfying the bosonic commutation relation $[a,a^\dagger]=1$. The potential $ V(t) $ arises from the externally applied force $ F(t) $. Since we want to implement the operation described in Eq.~\eqref{eq:phaseGateOperation} we need to introduce state-dependent forces $ F_{1} $ and $ F_{0} $ that depend on an internal (e.g.\ spin) state of the ion in order to distinguish these states. An ion in the internal state $ \ket{1} $ will only experience $ F_{1} $ and vice versa. In the following we will use the notation $ F_{j} $ with $j = 0,1$ labeling the internal state of the ion. Furthermore, the external forces $ F_{j}(t) $ are assumed to be homogeneous over the extent of the motional state. This can be assumed, for example, for forces realized by lasers if the wavelength of the laser is much greater than the amplitude of oscillation.
Under these circumstances the Hamiltonian can be written in the following form \begin{align} \label{eq:Hschroedinger} H(t) &= \hbar\omega(a^{\dagger}a) + \kb{0}{0}\otimes V_{0}(t) + \kb{1}{1}\otimes V_{1}(t), \\ V_{j}(t) &= F_{j}(t) x = f_j(t)(a+a^{\dagger}), \end{align} with $ f_j(t) = \sqrt{\dfrac{\hbar}{2m\omega}}\,F_j(t) $. Before we determine the evolution of a quantum state in this model, we simplify the equations further by switching to an interaction picture with respect to $\hbar\omega a^\dagger a$, leading to the simpler Hamiltonian \begin{align} \label{eq:coupledPotential} \widetilde{H}(t) &= \kb{0}{0}\otimes \widetilde{V}_{0}(t) + \kb{1}{1}\otimes \widetilde{V}_{1}(t),\\ \widetilde{V}_{j}(t) &= \widetilde{f}_j^{*}(t)a+\widetilde{f}_j(t)a^{\dagger}, \notag \end{align} where $\widetilde{f}_{j} = e^{i\omega t} f_{j}$. The equation of motion in the interaction picture for a quantum state represented by the density operator $ \rho $ is the von Neumann equation \begin{equation} \label{eq:VNisolated} \dot{\rho} = -\frac{i}{\hbar} [\widetilde{H}(t),\rho]. \end{equation} This equation can be solved by inserting {the ansatz $\rho=\kb{\Psi_t}{\Psi_t}$}, where \begin{align} \label{eq:ansatzisol} {\ket{\Psi_t}} =& {\sum_{j=0}^1a_je^{i\varphi_j(t)}\ket{j,z_j(t)}}, \end{align} where $ j $ represents the internal state and {$\ket{z_j(t)}=e^{-\abs{z_j(t)}^2/2}\sum_{n=0}^\infty ((z_j(t))^n/\sqrt{n!})\ket{n}$ is a coherent state for the motional degree of freedom of the ion when the internal state is $\ket{j}$ ($j=0,1$), and $a_j$ is a constant determined from the initial conditions.} Inserting this ansatz into Eq.~(\ref{eq:VNisolated}) leads to the following equation for the coherent state label $z_j(t)$ \begin{equation} \label{eq:z(t)isol} \dot{z}_{j} = \dfrac{1}{i\hbar}\widetilde{f}_{j}(t).
\end{equation} {In the context of implementing a phase gate, we want $ f_j(t) $ to be part of some protocol which is switched on at a certain time and is completed some time $ T $ later. Therefore $ f_j $ shall only be non-zero in the interval $ \left[0,T\right] $, and it shall be such that the motional state undergoes a cyclic evolution $ z_{j}(0) = z_{j}(T) $ whereas the internal degrees of freedom acquire a phase according to Eq.~(\ref{eq:phaseGateOperation}).} It is known that such a cyclic quantum evolution leads to the acquisition of a phase $\Phi_j = \Phi_{\mathrm{g},j}+\Phi_{\mathrm{d},j}$, where the dynamical and geometrical phases satisfy $\dot\Phi_{\mathrm{d},j}=-(1/\hbar)\bra{j,z_j(t)}H(t)\ket{j,z_j(t)}$ and $\dot\Phi_{\mathrm{g},j}=i\bra{j,z_j(t)}\partial_t\ket{j,z_j(t)}$, respectively. The total phase acquired is equal to twice the area enclosed by the trajectory $ z_j(t) $ in the interaction picture (see Eq.~(\ref{eq:phiA}))~\cite{experimentalDemonstration,PhysRevA.95.022328}. In the following we will expand this model to include {quantum and thermal fluctuations}. \subsection{Open system} \label{ssec:dissipativeCase} Effects of the ion coupling to some external environment are modeled phenomenologically by a Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) {master} equation \cite{Lindblad1976,doi:10.1063/1.522979} of the following form {\begin{align} \label{eq:model} \dot{\rho} &= -\frac{i}{\hbar} [\widetilde{H}(t),\rho] + \gamma(\bar{n} +1) (2a\rho a^{\dagger} - a^{\dagger}a\rho - \rho a^{\dagger}a)\notag\\ & + \gamma\bar{n} (2a^{\dagger}\rho a - aa^{\dagger}\rho - \rho aa^{\dagger}), \end{align}}{in the interaction picture}. {The model (\ref{eq:model}) describes dissipation and thermal excitation of the motional state of the ion with rates $\gamma(\bar{n} +1)$ and $\gamma\bar{n}$, respectively.
$\bar{n}$ models the average occupation number of the bosonic heat bath modes at a relevant system frequency at finite temperature. At zero temperature, $\bar{n}=0$ and only quantum fluctuations and damping with rate $\gamma$ are present. } {{Motional} coherence times $(\gamma\bar{n})^{-1}$ for trapped ion systems are of the order of {1 - 100 milliseconds~\cite{Wineland1998, lucas2007longlived}}. The typical frequency for the harmonic motion of the ion around the minimum of the trap is in the MHz range, whereas the operation time for the two-qubit phase gate investigated later in the article is in the $\mu$s range~\cite{PhysRevA.95.022328}.} { As shown in \cite{PhysRevA.74.022102,Strunz2005}, finite temperature effects can be incorporated in the zero temperature model by adding a fluctuating force $\sqrt{2\gamma\bar{n}}\hbar\xi(t)$ to the Hamiltonian. Here $\xi(t)$ is a Gaussian white noise process with $\langle\xi(t)\rangle = \langle \xi(t)\xi(t')\rangle = 0 $ and $\langle\xi(t)\xi^*(t')\rangle = \delta(t-t') $. Thus the Hamiltonian reads \begin{align} \label{eq:H_xi} \widetilde{H_{\xi}}(t) &= \kb{0}{0}\otimes \widetilde{V}_{0,\xi}(t) + \kb{1}{1}\otimes \widetilde{V}_{1,\xi}(t),\\ \widetilde{V}_{j,\xi}(t) &= (\widetilde{f}_j^{*}(t) + \sqrt{2\gamma\bar{n}}\hbar\xi^*(t))a \notag\\ &+(\widetilde{f}_j(t)+ \sqrt{2\gamma\bar{n}}\hbar\xi(t))a^{\dagger}. \notag \end{align} Note that the noise does not depend on the internal state $j$. The finite temperature model~(\ref{eq:model}) is thus equivalent to an ensemble of zero temperature models with stochastic Hamiltonian \begin{align} \label{eq:rho_xi} \dot{\rho}_{\xi} = -\frac{i}{\hbar} [\widetilde{H_{\xi}}(t),\rho_{\xi}] + \gamma (2a\rho_{\xi} a^{\dagger} - a^{\dagger}a\rho_{\xi} - \rho_{\xi} a^{\dagger}a).
\end{align} The evolution of the density operator can be recovered by taking an average over $\xi(t)$ \begin{equation} \label{eq:rhoAverage} \rho(t) = \langle\rho_{\xi}(t)\rangle, \end{equation} and the average state $\rho(t)$ satisfies Eq.~(\ref{eq:model}).} {Remarkably, Eq.~(\ref{eq:rho_xi}) can still be solved by a coherent state ansatz, such as Eq.~(\ref{eq:ansatzisol}), although it contains the effects of thermal and quantum fluctuations. Inserting the ansatz from Eq.~\eqref{eq:ansatzisol} for a particle in the internal state $\ket{j}$ into Eq.~(\ref{eq:rho_xi}) leads to the following equation for the coherent state label $z_j(t)$ \begin{equation} \label{eq:z(t)} \dot{z}_{j} + \gamma z_{j} = \dfrac{1}{i\hbar}\left(\widetilde{f}_{j}(t) + \sqrt{2\gamma\bar{n}}\hbar\xi(t)\right). \end{equation}} {For current ion traps the motional coherence time $1/(\gamma\bar{n})$ is much longer than the operation time~\cite{Wineland1998, lucas2007longlived},} so that the motion of the ion is dominated by the deterministic force.\\ In the following sections we will first investigate the zero temperature case for an arbitrary force. In Sec.~\ref{sec:finiteTemperatureEffects} we can then apply these results to a noisy force and average over the noise. \section{Consequences of quantum fluctuations} \label{sec:geometricPhases} In this section we investigate the consequences of coupling the trapped ion to a zero temperature bath. The effect of thermal fluctuations will be considered in Sec.~\ref{sec:finiteTemperatureEffects}. \subsection{Consequences for the phase} \label{ssec:consequncesPhase} In the following we want to investigate how the Lindblad terms in the time evolution equation affect the phase. We therefore consider a model which is in principle identical to (\ref{eq:model}) {at zero temperature} but slightly more general. The result can then be applied to the phase gate model.
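Before turning to the general model, a brief numerical aside illustrates the damping term in Eq.~(\ref{eq:z(t)}) at zero temperature ($\bar{n}=0$, so the noise drops out). The drive below is a hypothetical choice (with $\hbar=1$) designed so that the isolated trajectory is a closed circle; with $\gamma>0$ the loop no longer closes. This sketch is our own illustration, not taken from the protocol discussed in the text.

```python
import numpy as np

# Zero-temperature coherent-state label equation (hbar = 1):
#   dz/dt + gamma*z = f_tilde(t) / (1j)
# Hypothetical drive making the isolated (gamma = 0) trajectory the
# closed circle z(t) = r * exp(1j * Omega * t), traversed over [0, T].
Omega, r, T = 1.0, 1.0, 2 * np.pi

def f_tilde(t):
    return -Omega * r * np.exp(1j * Omega * t)

def trajectory_end(gamma, n_steps=20000):
    """RK4 integration of the damped label equation over one period."""
    dt = T / n_steps
    rhs = lambda t, z: f_tilde(t) / 1j - gamma * z
    z = r + 0j                       # start on the circle
    for k in range(n_steps):
        t = k * dt
        k1 = rhs(t, z)
        k2 = rhs(t + dt / 2, z + dt * k1 / 2)
        k3 = rhs(t + dt / 2, z + dt * k2 / 2)
        k4 = rhs(t + dt, z + dt * k3)
        z += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return z

print(abs(trajectory_end(0.0) - r))  # ~0: isolated loop closes
print(abs(trajectory_end(0.3) - r))  # O(0.1): damped loop stays open
```

The failure of the damped trajectory to close is precisely what forces the modified protocols discussed later, since a non-cyclic motional state spoils the phase gate condition $z_j(0)=z_j(T)$.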
The Hamiltonian shall be of the form \begin{equation*} H(t) = \kb{0}{0}\otimes H_0(t) + \kb{1}{1}\otimes H_1(t), \end{equation*} where $ \ket{0} $, $ \ket{1} $ represent the internal states of the ion and $ H_0(t) $ and $ H_1(t) $ act on the motional degree of freedom. We assume that the dissipation and decoherence are well described by a general GKSL master equation and thus arrive at the following model \begin{align} \label{eq:time evolution} \dot{\rho} &= \frac{-i}{\hbar}[H(t),\rho] + \mathcal{L}\left[\rho\right],\\ \mathcal{L}[\rho] &= \sum_{l=1}^{N} L_l\rho L_l^{\dagger} - \dfrac{1}{2}\left(L_l^{\dagger}L_l\rho + \rho L_l^{\dagger}L_l\right).\notag \end{align} Furthermore we assume that this model has a pure state solution {$\kb{\Phi_t}{\Phi_t}$, where $\ket{\Phi_t}=\sum_j\ket{j,\Psi_j(t)}$ and the} internal state $ j $ remains unchanged during the evolution. Although these are very limiting assumptions, we will see that the results can nevertheless be applied to the phase gate scenario mentioned before.\\ Under these assumptions, we can repeat the argument proposed in~\cite{PhysRevLett.58.1593} for a cyclic quantum evolution governed by a time-dependent Schrödinger equation to determine the relative phase that arises if the initial state is a superposition of internal states. In our case, however, the cyclic quantum evolution of a pure state is modified by a damping term. The details of the computation can be found in Appendix~\ref{sec:Anhangphidissipative}.
We then get a new complex valued term $\xi$ in addition to the dynamical and geometrical phase \begin{flalign} \label{eq:phi} \Phi &= \Phi_{\mathrm{g}} + \Phi_{\mathrm{d}} + \xi,\\ \xi &= \Phi_{\mathrm{L}} + i\eta, \end{flalign} where the individual terms are defined as follows: \begin{align} \dot{\Phi}_{\mathrm{g}} &= i\left(\bra{\Psi_0}\partial_t\ket{\Psi_0} - \bra{\Psi_1}\partial_t\ket{\Psi_1}\right), \nonumber\\ \dot{\Phi}_{\mathrm{d}} &= -\frac{1}{\hbar}(\langle H_0(t)\rangle - \langle H_1(t)\rangle),\\ \dot{\xi} &= -i \sum_{l=1}^{N}\Big(\bra{\Psi_0}L_l\ket{\Psi_0}\bra{\Psi_1}L_l^{\dagger}\ket{\Psi_1} \nonumber\\ &\qquad - \dfrac{1}{2}\left(\bra{\Psi_0}L_l^{\dagger}L_l\ket{\Psi_0} + \bra{\Psi_1}L_l^{\dagger}L_l\ket{\Psi_1}\right)\Big). \nonumber \end{align} The first two terms are identical to the unitary case mentioned before and also found in~\cite{PhysRevLett.58.1593}. Therefore, they correspond to dynamical and geometrical phases which arise during a cyclic evolution of a quantum system. Since we have constructed relative phases they are expressed as the difference between the dynamical/geometrical phases of particles in the internal states $0$ and $1$. The last sum cannot be expressed in such a way and contains dissipative effects. In general it leads to real terms in the exponent which result in a loss of coherence. We can apply this equation to the damped harmonic oscillator if we set $ L = \sqrt{2\gamma}a $ and $ \ket{\Psi_j} = \ket{j,z_j} $. Furthermore we can identify $H_j(t)$, the Hamiltonian seen by a particle in the internal state $\ket{j}$, as $\hbar\omega a^{\dagger}a + V_j(t)$ (see Eq.~(\ref{eq:Hschroedinger})).
This means we can calculate the dynamical phase for a particle in the internal state $j$ with $\ket{j,z_j}$ in the interaction picture by using Eqs.~(\ref{eq:z(t)}) and~\eqref{eq:coupledPotential} as \begin{align} \Phi_{\mathrm{d},j} &= -\frac{1}{\hbar}\int\limits_{0}^{T}\langle H_j(t)\rangle {\rm d} t \nonumber\\ &= \frac{-1}{\hbar}\int\limits_{0}^{T}\bra{j,z_j(t)} e^{-i\omega t a^{\dagger}a} H_j(t) e^{i\omega t a^{\dagger}a}\ket{j,z_j(t)} {\rm d} t\nonumber\\ &= \frac{-1}{\hbar}\int\limits_{0}^{T}\bra{j,z_j(t)}\hbar\omega a^{\dagger}a + \widetilde{V}_j(t)\ket{j,z_j(t)} {\rm d} t\nonumber\\ &= \int\limits_{0}^{T}\left(2\operatorname{Im}\left(\dot{z}_j(t) z_j^{*}(t)\right) - \omega\abs{z_j(t)}^2\right) {\rm d}{t}. \end{align} For the geometric phase we arrive at \begin{flalign} \Phi_{\mathrm{g},j} &= i\int\limits_{0}^{T}\bra{z_j(t)}e^{-i\omega t a^{\dagger}a}\partial_te^{i\omega t a^{\dagger}a}\ket{z_j(t)} {\rm d} t \nonumber\\ &= \int\limits_{0}^{T}\left(-\operatorname{Im}(\dot{z}_j(t)z_j^{*}(t)) + \omega\abs{z_j(t)}^2\right) {\rm d}{t}, \end{flalign} again with $\ket{j, z_j}$ in the interaction picture. As we can see, the dynamical and geometrical phases are remarkably similar for the harmonic oscillator. Furthermore we can combine these two phases into the total phase $\Phi_{\mathrm{isol}}$ of the isolated ($\gamma = 0$) case: \begin{flalign} \Phi_{\mathrm{isol}} &= (\Phi_{\mathrm{d},0} - \Phi_{\mathrm{d},1}) + (\Phi_{\mathrm{g},0} - \Phi_{\mathrm{g},1}) \nonumber\\ \label{eq:phaseAsIm} &= \int\limits_{0}^{T}\operatorname{Im}\left(\dot{z}_0(t)z_0^{*}(t) - \dot{z}_1(t)z_1^{*}(t)\right) {\rm d}{t}. \end{flalign} For a cyclic evolution this reduces to the known result \cite{PhysRevA.95.022328,experimentalDemonstration} \begin{equation} \label{eq:phiA} \Phi_{\mathrm{isol}} = 2 (A_0 - A_1), \end{equation} where $A_j$ is the area enclosed by the cyclic evolution of $z_j$. This is shown in Fig.~\ref{fig:phasespacearea}.
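The area relation is easy to check numerically. The sketch below (illustrative; the circular path is an assumption chosen only because its enclosed area is known in closed form) discretizes $\int_0^T \operatorname{Im}(\dot{z}z^{*})\,{\rm d}t$ and compares it with twice the enclosed area:

```python
import math

def phase_from_path(z_of_t, T, steps=100_000):
    """Discretize  Phi = int_0^T Im(zdot * conj(z)) dt  with midpoint samples."""
    dt = T / steps
    phi = 0.0
    for n in range(steps):
        z0 = z_of_t(n * dt)
        z1 = z_of_t((n + 1) * dt)
        zdot = (z1 - z0) / dt           # finite-difference velocity
        zmid = 0.5 * (z0 + z1)          # midpoint value of the path
        phi += dt * (zdot * zmid.conjugate()).imag
    return phi

# One full circle of radius r: the phase equals twice the enclosed area pi*r^2.
r, nu = 1.5, 2.0
phi = phase_from_path(lambda t: r * complex(math.cos(nu * t), math.sin(nu * t)),
                      T=2 * math.pi / nu)
assert abs(phi - 2 * math.pi * r**2) < 1e-3
```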
From now on, we do not always write the time dependence of the coherent state labels explicitly in order to shorten the notation. \begin{figure} \caption{(color online) In the isolated case the relative phase between two cyclic evolutions is proportional to the difference of the swept phase space area in the interaction picture.} \label{fig:phasespacearea} \end{figure} The influence of the dissipation is {contained} in the term $ \xi $ with { \begin{align*} i\dot{\xi} &= \gamma\left(2z_0z_1^{*} - \left(\abs{z_0}^2 + \abs{z_1}^2\right)\right)\\ &= -\gamma \abs{z_{1}-z_{0}}^2 + i \gamma\abs{z_{1}}\abs{z_{0}}\sin(\theta_{1} - \theta_{0}), \end{align*} }where the phases $\theta_j$ are defined by $z_j=\abs{z_j}e^{i\theta_j}$. {Note that $ \xi $ consists of a real as well as of an imaginary part.\\ In summary, an initial state which is in a superposition of spin states {\begin{equation*} \rho(0) = \begin{pmatrix} \abs{a}^{2} & ab^{*} \\ a^{*}b & \abs{b}^{2} \end{pmatrix} \otimes \kb{z(0)}{z(0)}, \end{equation*}} will be transformed into the following state after the cyclic evolution: {\begin{equation*} \rho(T) = \begin{pmatrix} \abs{a}^{2} & ab^{*} e^{i\Phi_{\mathrm{isol}} + i\xi} \\ a^{*}b e^{-i\Phi_{\mathrm{isol}} - i\xi^{*}} & \abs{b}^{2} \end{pmatrix} \otimes \kb{z(0)}{z(0)}. \end{equation*}} This shows that for $ \gamma \neq 0 $ the damping results in an additional real term $ \eta = \gamma\int_{0}^{T} \abs{z_{1}-z_{0}}^2 {\rm d} t $ in the exponent which depends only on the damping strength $ \gamma $ and the difference of the amplitudes of the paths. This real term leads to a dephasing of the spin state by diminishing the off-diagonal elements of the density matrix. We will therefore refer to it as the dephasing term from now on.
} We can also see a new phase term $\Phi_{\mathrm{L}}= \gamma\int\limits_{0}^{T}\abs{z_{1}}\abs{z_{0}}\sin(\theta_{1} - \theta_{0}) {\rm d} t$ which depends on the absolute values of $ z_0 $ and $ z_1 $. The integral over this term can vanish for sufficiently symmetrical $ z_j(t) $ (e.g.\ if $ z_j(t) = z_j(T-t) $) with $ j\in\{0,1\} $, or if $ z_{1} $ or $ z_{0} $ is in the ground state during the entire operation. In Sec.~\ref{sec:twoQubit} we will see that the 2-qubit phase gates proposed in \cite{PhysRevA.95.022328,PhysRevA.71.062309} {and realized in \cite{experimentalDemonstration}} do indeed have the latter property, which means that even with damping the phases produced by those phase gates are still determined only by the respective areas. The dephasing term can, however, only vanish if $ f_{1} = f_{0} $, which implies that there is no relative phase either. We can also conclude that the dephasing is stronger for higher energies of the particle, which means it is especially relevant for short operation times, as we will see in Sec.~\ref{ssec:fidelity}. \subsection{Consequences for the path} \label{ssec:force} We have seen in the previous section how the damping results in additional phase terms. However, from Eq.~(\ref{eq:z(t)}) it is clear that the damping alters the path as well. Therefore, paths which are closed in the isolated case are no longer closed in the damped case. It is natural to ask which forces $ \widetilde{f}_j(t) = f_j(t)e^{i\omega t} $ can be used to achieve a cyclic evolution ($z_j(0) = z_j(T) $) in the damped case, and whether some of those forces should be preferred because they minimize the dephasing term. First we note that it is not possible to completely compensate the effects of the damping by applying some sophisticated force $f_{\mathrm{c}}$.
This can be seen from Eq.~(\ref{eq:z(t)}): since the isolated dynamics of a coherent state is described by $\dot{z}_j = 0$, such a force would need to satisfy $\widetilde{f} = e^{i\omega t}f_{\mathrm{c}}(t) = \mathrm{const} $, which is impossible for real $f_{\mathrm{c}}(t)$. To determine the effects of a force $ f_j(t) $ on the path $ z_j(t) $ we have to solve Eq.~(\ref{eq:z(t)}). {This leads to the solution} \begin{flalign} z_j(t) &= z_{j,\mathrm{hom}} + z_{j,\mathrm{inhom}} \nonumber\\ \label{eq:alpha(t)} &= z_j(0) e^{-\gamma t} + \int_{0}^{t} \dfrac{-i}{\hbar} \widetilde{f}_j(\tau) e^{-\gamma (t-\tau)} {\rm d}{\tau}. \end{flalign} We can therefore conclude that in order to achieve the cyclic dynamics $ z_j(0) = z_j(T) $ we need the forces to satisfy \begin{equation} \label{eq:conditionf} z_j(0) \left(e^{\gamma T} - 1\right) = \dfrac{-i}{\hbar}\int_{0}^{T}f_j(\tau)e^{i\omega\tau}e^{\gamma\tau} {\rm d}{\tau}. \end{equation} The equation shows explicitly that for $ \gamma \neq 0 $ the condition depends on the initial state $ z_j(0) $. This means that in contrast to the undamped case, where $ f_j $ would always lead to closed trajectories, it now only works for a specific initial condition $z_j(0)$. The fault tolerance of a quantum phase gate towards the initial motional state is therefore lost in the damped case. \\ An interesting observation at this point is that if we consider $z_j(0) = 0$, we can derive forces $ f_{\mathrm{d}} $ which return $ z_j $ to the ground state after time $ T $ in the damped case from the forces $ f_{\mathrm{nd}} $ which accomplish this in the undamped case by using the formula \begin{equation} \label{eq:relationFDamping} f_{\mathrm{d}} = f_{\mathrm{nd}}\cdot e^{-\gamma t}. \end{equation} We will use this link between the damped and the undamped scenarios in the next section to generalize an existing protocol for a 2-qubit phase gate to account for dissipative effects.
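The rescaling relation $f_{\mathrm{d}} = f_{\mathrm{nd}}\,e^{-\gamma t}$ can be verified numerically. The sketch below (illustrative parameters; $\hbar = 1$ is an assumption) integrates the label equation for a force that closes the undamped path and checks that the rescaled force still closes the damped path when $z_j(0) = 0$:

```python
import math, cmath

def final_label(force, gamma, omega, T, steps=100_000):
    """z(T) from dz/dt = -gamma*z - 1j*force(t)*exp(1j*omega*t), z(0) = 0."""
    dt = T / steps
    z = 0.0 + 0.0j
    for n in range(steps):
        t = (n + 0.5) * dt  # midpoint sampling of the force
        z += dt * (-gamma * z - 1j * force(t) * cmath.exp(1j * omega * t))
    return z

Omega = 1.0
omega, T, gamma = 2 * Omega, 2 * math.pi / Omega, 0.3
f_nd = lambda t: math.sin(Omega * t)            # closes the undamped path
f_d = lambda t: f_nd(t) * math.exp(-gamma * t)  # damped-case counterpart

assert abs(final_label(f_nd, 0.0, omega, T)) < 1e-3   # undamped path closed
assert abs(final_label(f_d, gamma, omega, T)) < 1e-3  # damped path also closed
```

The second assertion reflects the analytic fact that $z_{\mathrm{d}}(T) = e^{-\gamma T} z_{\mathrm{nd}}(T)$ for this choice of force.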
For an experimental realization it is desirable to minimize the dephasing term for a given relative phase. To examine how this can be done we consider the case $f_{0}(t) = 0 $ and $ \ket{z_j(0)} = \ket{0} $ for the sake of simplicity. This means that $ \ket{z_0} $ is the ground state at all times and we only need to discuss the dynamics of $ \ket{z_1} $. These simplifications are well justified because in an experimental setup the ground state can be prepared initially, and an additional force $ f_{0} $ brings no benefit but just makes the computations more complex. We can then derive simple expressions for the phase and dephasing terms after time $ T $ from Eq.~\eqref{eq:phaseAsIm} if we write $ z_1 = r(t) e^{i\theta(t)} $ {(in the following equations we have omitted the time dependence of $ r $ and $ \theta $)} \begin{flalign*} \int_{0}^{T} z_1^{*}\dot{z_1} {\rm d}{t} &= \int_{0}^{T} re^{-i\theta}\left(\dot{r} e^{i\theta} + ir\dot{\theta} e^{i\theta}\right){\rm d} t \\ &= \int_{0}^{T} r\dot{r} {\rm d}{t} + i\int_{0}^{T} \dot{\theta}r^2 {\rm d}{t}. \end{flalign*} Since $ z_1(0) = z_1(T) $ the first integral vanishes, and we are left with the following expression for the phase: \begin{equation} \label{eq:comparisonPhase} \Phi_{\mathrm{isol}}= \int_{0}^{T} \dot{\theta}r^2 {\rm d}{t},\quad \Phi_{\mathrm{L}}=0. \end{equation} In this case, $\Phi_{\mathrm{L}}=0$ because $z_0=0$ at all times. For the dephasing term we find \begin{equation} \label{eq:comparisonDephasing} \eta=\gamma \int_{0}^{T} r^2 {\rm d}{t}. \end{equation} We can see that the easiest way to minimize the dephasing for a given relative phase is to make $ \dot{\theta} $ as large as possible. This result also makes intuitive sense: if the path goes around the origin multiple times (large $ \dot{\theta} $) it needs a smaller amplitude (which results in less dephasing) in order to sweep over the same area.
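This trade-off can be made concrete with a toy family of paths (an assumption for illustration, not a case treated here): circles of constant radius traversed $k$ times in time $T$. Then $\Phi = 2\pi k r^2$ and $\eta = \gamma T r^2$, so for a fixed target phase the dephasing scales as $1/k$:

```python
import math

def dephasing_for_phase(phi_target, windings, gamma, T):
    """Circle traversed `windings` times: Phi = 2*pi*k*r^2, eta = gamma*T*r^2."""
    r_squared = phi_target / (2 * math.pi * windings)  # radius fixed by the target phase
    return gamma * T * r_squared

eta_1 = dephasing_for_phase(math.pi / 2, windings=1, gamma=0.1, T=1.0)
eta_4 = dephasing_for_phase(math.pi / 2, windings=4, gamma=0.1, T=1.0)
assert abs(eta_1 / eta_4 - 4.0) < 1e-12  # four windings: four times less dephasing
```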
Since $\theta$ corresponds to the interaction picture it is affected by the frequency $\omega$ of the harmonic oscillator. Fig.~\ref{fig:paths1} displays the paths $z_1(t)$ which are generated by forces of the form $f_1 = e^{-\gamma t} \sin(\Omega t)$ and $f_{0} =0$ (see Eq.~(\ref{eq:z(t)})). \begin{figure} \caption{(color online) Path of $ z_1 $ in the complex plane when using the force $ f(t) = e^{-\gamma t}\sin(\Omega t) $.} \label{fig:paths1} \end{figure} The paths (a) and (b) correspond to the isolated case with $\gamma = 0$ whereas $\gamma > 0$ for the paths (c) and (d). {The exact parameters used in this numerical simulation were $ \Omega = 2\,\mathrm{MHz}$ for the frequency of the driving force in all trajectories, $ \omega = 2\Omega $ for the frequency of the harmonic trap in trajectories (a) and (c) and $ \omega = 4\Omega $ for (b) and (d). The damping constant for paths (c) and (d) was set to $\gamma/\Omega = 0.2$.} We can see that the upper two paths are symmetrical with respect to the imaginary axis. As shown in~\cite{PhysRevA.95.022328} this implies that the phase does not change (in first order) if the force is subjected to a homogeneous, small constant offset $ f \mapsto f + \delta f $ in the $ \gamma = 0 $ case. In contrast, the paths for $ \gamma \neq 0 $ are no longer symmetrical. Whether the path is symmetric or not, however, depends on the force. In the next section we therefore investigate how the robustness can be maintained in the damped case by constructing forces differently from Eq.~\eqref{eq:relationFDamping}. \subsection{Consequences for the robustness} \label{sec:consequenceRobustness} First we want to show how the condition for robustness against small constant offsets of the force $ f \mapsto f+\delta f $ reads in the dissipative case studied here.
According to Eq.~(\ref{eq:phaseAsIm}) with $ z_0 = 0 $, \begin{flalign*} \Phi_{\mathrm{isol}} &= \int_{0}^{T}\operatorname{Im}(\dot{z}_1 z_1^{*}) {\rm d} t \nonumber\\ &= \dfrac{-1}{\hbar}\int_{0}^{T}\operatorname{Re}(z_{1}^{S} f_1) {\rm d} t, \end{flalign*} where we used Eq.~\eqref{eq:z(t)} and $ z_{1}^{S} = e^{-i\omega t} z_1 $ is the path in the Schrödinger picture. Therefore the offset to the phase in first order becomes \begin{equation*} \delta \Phi_{\mathrm{isol}} = \dfrac{-\delta f}{\hbar} \int_{0}^{T}\operatorname{Re}(z_{1}^{S}) {\rm d} t \overset{!}{=} 0. \end{equation*} This result is identical to the one in the isolated case found in \cite{PhysRevA.95.022328}. By inserting Eq.~\eqref{eq:alpha(t)}, assuming $ z_{1}(0) = 0 $ and integrating by parts we can express this as a condition for the force \begin{equation} \label{eq:forceResisanceCondition} 0 = \int_{0}^{T}f(t){\rm d} t. \end{equation} Together with the condition for a cyclic evolution, Eq.~\eqref{eq:conditionf}, we therefore have a set of conditions that for $ z_j(0) = 0 $ may be seen as orthogonality conditions for $ f(t) $ \begin{equation} \label{eq:orthogonal} f(t) \perp \{e^{\gamma t} \sin(\omega t), e^{\gamma t} \cos(\omega t), 1\} =: \mathcal{C}. \end{equation} This means that we can construct forces to suit our needs by a Gram-Schmidt procedure. It is useful to first orthogonalize the set $ \mathcal{C}$ and then perform one more orthogonalization step on an arbitrary function $g$, which thereby becomes orthogonal to the set $ \mathcal{C} $. This method makes it possible to construct a plethora of forces which leave the phase unchanged under a small constant offset $ \delta f $ and produce a cyclic evolution. By superposing many such forces one can then ensure that further demands are met, e.g.\ $ f(0) = f(T) = 0 $.
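The Gram-Schmidt construction can be sketched numerically with a discretized inner product $\langle u,v\rangle = \int_0^T u(t)v(t)\,{\rm d}t$. The grid, parameters, and seed function below are illustrative assumptions:

```python
import math

N, T, gamma, omega = 4000, 1.0, 0.3, 8 * math.pi
ts = [(n + 0.5) * T / N for n in range(N)]
dt = T / N

def inner(u, v):
    """Discretized L2 inner product on [0, T]."""
    return sum(a * b for a, b in zip(u, v)) * dt

# The orthogonality set C, sampled on the grid.
basis = [[math.exp(gamma * t) * math.sin(omega * t) for t in ts],
         [math.exp(gamma * t) * math.cos(omega * t) for t in ts],
         [1.0] * N]

def project_out(g, vectors):
    """Subtract the projections of g onto each (mutually orthogonal) vector."""
    for v in vectors:
        c = inner(g, v) / inner(v, v)
        g = [gi - c * vi for gi, vi in zip(g, v)]
    return g

ortho = []                      # Gram-Schmidt on C itself first
for v in basis:
    ortho.append(project_out(v, ortho))

seed = [math.sin(2 * math.pi * t / T) ** 2 for t in ts]  # arbitrary seed g(t)
f = project_out(seed, ortho)    # a force satisfying all orthogonality conditions

for v in basis:
    assert abs(inner(f, v)) < 1e-8
```

Superpositions of several such $f$ can then be used to meet additional constraints on the endpoints of the force.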
We can conclude that it is possible to maintain the robustness of the phase gate against small constant offsets of the force $ f \mapsto f +\delta f $ in the damped case. However, as we have seen in the previous section (Eq.~\eqref{eq:conditionf}), the gate loses its resistance against fluctuations in the initial motional state. \section{Application to 2-qubit phase gates} \label{sec:twoQubit} In this section we want to show how the relations found in the previous sections apply to two-qubit phase gates such as those realized in \cite{experimentalDemonstration,Schafer2018}. These two-qubit gates consist of two ions in a harmonic trap potential which experience a force that depends on the internal state ($ \ket{\uparrow} $ or $ \ket{\downarrow} $) of each ion. As shown in \cite{PhysRevA.95.022328}, the Hamiltonian of such a system can be written as \begin{align} H_{\mathrm{tot}} &= H_{+} + H_{-},\nonumber\\ \label{eq:H2Qubit} H_{\pm} &= \dfrac{p_{\pm}^2}{2} + \dfrac{1}{2}\Omega_{\pm}^2 x_{\pm}^2 + f_{\pm} x_{\pm}. \end{align} Here, $ H_{+} $ describes an oscillation of the stretch mode, where the displacements of the two ions from their equilibrium positions are equal but in opposite directions, and $ H_{-} $ describes an oscillation of the center-of-mass mode, where the displacements of the ions are identical. {$ x_{\pm} $ are mass-weighted normal mode coordinates which for ions of equal mass take the form \begin{equation*} x_{\pm} = \sqrt{m}((x_{1}-x_{1}^{(0)}) \mp (x_{2}-x_{2}^{(0)})). \end{equation*} Here, $ x_{i} $ is the position operator for ion $ i $, $ x_{i}^{(0)} $ is the equilibrium position of ion $i$ and the canonically conjugate momentum operators are $p_{\pm}=-i\hbar\partial/\partial x_{\pm} $.} Note that here we ignored a term in $ H_{\mathrm{tot}} $ which is proportional to the difference of the forces experienced by the two ions. This (purely time dependent) term will lead to additional phases for certain configurations.
Similar to Eq.~\eqref{eq:orthogonal}, we will later (see Eq.~(\ref{eq:orthogonality2qubit})) present a way to construct forces which satisfy $ \int_{0}^{T} f {\rm d} t = 0 $ so that this phase vanishes. A more detailed derivation of the Hamiltonian and discussion of the purely time dependent term can be found in \cite{PhysRevA.95.022328}. {If the forces on the two ions take the form $ F_j = F(t) \sigma_{j}^{z} $ one can derive the following values for the force in the interaction picture $ \widetilde{f}_{\pm} $ \begin{flalign} \widetilde{f}_{+}(P) &= \widetilde{f}_{-}(A) = 0, \nonumber\\ \widetilde{f}_{-}(\uparrow\uparrow) &= -2F/\sqrt{2m}e^{i\Omega_- t},\nonumber\\ \widetilde{f}_{+}(\uparrow\downarrow) &= -2F/\sqrt{2m}e^{i\Omega_+ t},\nonumber\\ \widetilde{f}_{-}(\downarrow\downarrow) &= 2F/\sqrt{2m} e^{i\Omega_- t}, \nonumber\\ \label{eq:2Qubitf} \widetilde{f}_{+}(\downarrow\uparrow) &= 2F/\sqrt{2m} e^{i\Omega_+ t}, \end{flalign} where $ P \in \{\uparrow\uparrow, \downarrow\downarrow\} $ denotes parallel and $ A\in \{\uparrow\downarrow, \downarrow\uparrow\} $ anti-parallel spin combinations. This type of force can be realized by off-resonant lasers in the Lamb-Dicke regime~\cite{schleich2011quantum}. For the optimization of the functional form of the force see \cite{PhysRevA.95.022328}. The frequencies $\Omega_\pm$ are defined as $\Omega_+ = \sqrt{3}\omega$ and $\Omega_-=\omega$~\cite{PhysRevA.95.022328}.} We can bring the Hamiltonian into the same form as in the previous section by introducing creation and annihilation operators $ a_{\pm}, a_{\pm}^{\dagger} $ for the stretch and center-of-mass modes and switching to an interaction picture \begin{flalign} \ket{\Psi_{I}} &= e^{-iH_{0}t/\hbar}\ket{\Psi}, \nonumber\\ \label{eq:2QubitInteractionPicture} H_{0} &= \hbar\Omega_{+}(a_{+}^{\dagger}a_{+} + \dfrac{1}{2}) + \hbar\Omega_{-}(a_{-}^{\dagger}a_{-} + \dfrac{1}{2}).
\end{flalign} The Hamiltonian then reduces to \begin{flalign} \label{eq:2QubitInteractionHamiltonian} \widetilde{V} &= \widetilde{f}_{+}^{*}a_{+} + \widetilde{f}_{+}a_{+}^{\dagger} + \widetilde{f}_{-}^{*}a_{-} + \widetilde{f}_{-}a_{-}^{\dagger} \nonumber\\ &= \widetilde{V}_{+} + \widetilde{V}_{-}. \end{flalign} To model the damping we can introduce two Lindblad terms similar to Eq.~\eqref{eq:model}. We assume identical damping rates $\gamma$ since all degrees of freedom couple to the same bath, which we assume to have a flat spectral density on the relevant frequency scale. We then find in the interaction picture \begin{align} \label{eq:2QubitTimeEvolution} \dot{\rho} &= \mathcal{L}_{+}[\rho] + \mathcal{L}_{-}[\rho] ,\\ \mathcal{L}_{\pm}[\rho] &:= \frac{-i}{\hbar}[\widetilde{V}_{\pm},{\rho}] + \gamma\left(2a_{\pm}\rho a_{\pm}^{\dagger} - a_{\pm}^{\dagger}a_{\pm}\rho - \rho a_{\pm}^{\dagger}a_{\pm}\right). \nonumber \end{align} We can see that the $ \mathcal{L}_{\pm} $ each act on only one of the two modes and are of the same form as the right hand side of Eq.~(\ref{eq:model}). Therefore the two modes $+,-$ are not coupled by Eq.~\eqref{eq:2QubitTimeEvolution} and we can reuse the results from the previous Sec.~\ref{sec:geometricPhases} to determine the evolution of an initial state that is in a superposition of spin states and in a coherent motional state \begin{equation*} \rho(0) = \begin{pmatrix} \abs{a}^{2} & ab^{*} \\ a^{*}b & \abs{b}^{2} \end{pmatrix} \otimes \kb{z_{+}(0)}{z_+(0)}\otimes\kb{z_{-}(0)}{z_{-}(0)}. \end{equation*} The coherent motional state evolves according to \begin{equation} \label{eq:2Qubitalphat} \dot{z}_{\pm}^{j}+\gamma z_{\pm}^{j} = \dfrac{1}{i\hbar}\widetilde{f}_{\pm}(j), \end{equation} where the internal degrees of freedom are given by $j$ and oscillation in the stretch and center-of-mass modes is described by $z_{+}^{j}$ and $z_{-}^{j}$, respectively. The forces $\widetilde{f}_{\pm}(j)$ are given in Eq.~\eqref{eq:2Qubitf}.
Furthermore, we can observe that Eq.~\eqref{eq:2QubitTimeEvolution} is of the form of Eq.~\eqref{eq:time evolution}, which means that we can apply the results from Sec.~\ref{ssec:consequncesPhase} to calculate the phase and dephasing which arise during a cyclic evolution: \begin{equation} \label{eq:2QubitA} \Phi(T) = \Phi_{\mathrm{isol}} + \xi, \end{equation} where \begin{align} \Phi_{\mathrm{isol}} &= 2(A_{+}^{1} + A_{-}^{1} - A_{+}^{0} - A_{-}^{0}),\nonumber\\ \xi &= \gamma \int_{0}^{T} d_{+}(\tau) + d_{-}(\tau) {\rm d}{\tau}, \nonumber\\ d_{\pm} &= i\abs{z_{\pm}^{1}-z_{\pm}^{0}}^2 + \abs{z_{\pm}^0}\abs{z_{\pm}^1}\sin(\theta_{\pm}^1 - \theta_{\pm}^0).\nonumber \end{align} We see that $\Phi_{\mathrm{isol}} = \Phi_{\mathrm{g}} + \Phi_{\mathrm{d}}$ is proportional to the areas $A_{+}$ and $A_{-}$ swept in the stretch and center-of-mass modes of oscillation, respectively, and that it is identical to the phase of the isolated evolution ($\gamma =0$). Because of the symmetries of the forces described in Eq.~\eqref{eq:2Qubitf}, $\Phi_{\mathrm{isol}}$ is only nonzero if $ 0 $ is a parallel and $ 1 $ an anti-parallel spin combination or vice versa. The $\xi$ term originates from the Lindblad operators, and since $\operatorname{Im}(\xi) \neq 0$ it results in dephasing analogous to the one-ion case discussed in Sec.~\ref{ssec:consequncesPhase}. However, if the evolution starts in the ground state, either $z_{\pm}^{0}$ or $z_{\pm}^{1}$ will remain in the ground state for the entire operation because of Eq.~\eqref{eq:2Qubitf}. Therefore $\operatorname{Re}(\xi)$ will always be zero, which means that the phase depends only on the difference of the swept phase space areas, as in the undamped case studied in \cite{PhysRevA.95.022328}. Since the equations \eqref{eq:2Qubitalphat} for the evolution of $ z $ are identical to the one-ion case, Eq.
\eqref{eq:relationFDamping} still holds true and we can easily generalize the force $ F_{\mathrm{nd}}$ constructed in \cite{PhysRevA.95.022328} to the damped oscillator: \begin{align} \label{eq:f_damped} F(t) &= \kappa e^{-\gamma t}\cdot F_{\mathrm{nd}}. \end{align} According to Eq.~(\ref{eq:alpha(t)}) this means for the damped path \begin{equation*} {z_{\mathrm{d}}=\kappa e^{-\gamma t}z_{\mathrm{nd}}.} \end{equation*} Here we introduced two correction factors $ \kappa $ and ${e^{-\gamma t}}$ to the original force for the undamped case. $ \kappa $ is a constant that compensates for the smaller area due to the damping; it therefore ensures that the phase (which corresponds to the area) stays the same. The exponential factor ensures that $ z $ returns to the ground state after time $ T $. It would also be possible to construct forces via the Gram-Schmidt process described in Sec.~\ref{sec:consequenceRobustness} to maintain the resistance against small constant offsets of the force. Since these forces now have to produce closed paths in both modes there are more orthogonality conditions \begin{align} \label{eq:orthogonality2qubit} f \perp& \{e^{\gamma t} \sin(\Omega_+ t), e^{\gamma t} \cos(\Omega_+ t), \nonumber\\ &\quad e^{\gamma t} \sin(\Omega_{-} t), e^{\gamma t} \cos(\Omega_{-} t), 1\}, \end{align} but the method of constructing $ f $ stays the same. For the next section we will nevertheless stick to Eq.~(\ref{eq:f_damped}) since the resulting forces are less complex and sufficient for discussing the impact on the fidelity. \\ The paths of the resulting $ z_{\pm} $ with and without damping and under the influence of different forces are shown in Fig.~\ref{fig:2QubitAlphaPath}. The plots (a), (b) and (c) are in the interaction picture whereas plot (d) is in the Schrödinger picture.
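The value of $\kappa$ follows from matching the swept areas: with $z_{\mathrm{d}} = \kappa e^{-\gamma t} z_{\mathrm{nd}}$ the area integrand $\operatorname{Im}(\dot{z}z^{*})$ acquires a factor $\kappa^2 e^{-2\gamma t}$. For the special case of a constant area integrand (an assumption made here only for illustration, e.g. a circular undamped path) $\kappa$ has the closed form sketched below:

```python
import math

def kappa(gamma, T):
    """From kappa^2 * int_0^T exp(-2*gamma*t) dt = T  (constant area integrand)."""
    return math.sqrt(2 * gamma * T / (1 - math.exp(-2 * gamma * T)))

assert abs(kappa(1e-9, 1.0) - 1.0) < 1e-6  # vanishing damping: no correction
assert kappa(0.2, 1.0) > 1.0               # damping: the force must be amplified
```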
The parameters for trajectory (a) were chosen identically to \cite{PhysRevA.95.022328}: $ T = 0.8\,\mu$s, $\omega/2\pi = 2\,$MHz and $ \gamma/\omega = 0 $. Trajectory (b) corresponds to the same force and parameters as trajectory (a) but now with a damping $ \gamma/\omega = 0.1 $. We can see that the path is no longer closed and that the area has decreased as well. In plot (c) we used the same damping and parameters as in (b) but with the adjusted force \eqref{eq:f_damped}. Now the paths are closed again and the area difference is identical to (a). Plot (d) shows the trajectory for a shorter operation time $ T = 0.3\,\mu$s in the Schrödinger picture. It is important to note that the axis scales are different for this plot because the trajectory has a much greater amplitude. The intuitive reason for this is that the particle needs a large momentum to complete the loop in a shorter time; the trajectory is therefore stretched out in the $ p \propto \operatorname{Im}(z^{S}) $ direction. This illustrates that shorter operation times come with the trade-off of more dephasing (see Sec.~\ref{ssec:fidelity} and Fig.~\ref{fig:Infidelity}). \begin{figure} \caption{(color online) Figures (a), (b) and (c) show paths of $ z_{+} $ in the interaction picture; (d) shows a trajectory in the Schrödinger picture.} \label{fig:2QubitAlphaPath} \end{figure} \subsection{Influence of damping on the fidelity} \label{ssec:fidelity} The fidelity measures the overlap of the final state $ \rho_{\mathrm{f}} $ with the desired state $ \ket{\Psi_{\mathrm{d}}} $ \begin{equation*} \mathcal{F} = \bra{\Psi_{\mathrm{d}}}\rho_{\mathrm{f}}\ket{\Psi_{\mathrm{d}}}. \end{equation*} {Since we want to implement a} two-qubit phase gate, our desired state $ \ket{\Psi_{\mathrm{d}}} $ is \begin{equation*} \ket{\Psi_{\mathrm{d}}} = ae^{i\Phi_{\mathrm{isol}}}\ket{P} + b\ket{A}, \end{equation*} with $ \abs{a}^2 + \abs{b}^2 = 1 $. $P$ and $A$ denote an arbitrary parallel and anti-parallel spin combination (e.g.
$ P =\uparrow\uparrow $ and $ A=\uparrow\downarrow $). The final (spin-) state (in the basis $ \{ \ket{P}, \ket{A}\} $) after the cyclic evolution is \begin{equation*} \rho_{\mathrm{f}} = \begin{pmatrix} \abs{a}^2 & ab^{*} e^{i\Phi_{\mathrm{isol}} - \Gamma} \\ a^{*}b e^{-i\Phi_{\mathrm{isol}} -\Gamma} & \abs{b}^2 \end{pmatrix}, \end{equation*} where \begin{equation} \label{eq:Gamma} \Gamma = \gamma\int_{0}^{T} \abs{z_{+}(A)}^2 + \abs{z_{-}(P)}^2 {\rm d}{t}. \end{equation} We can calculate the fidelity as \begin{equation} \label{eq:fidelity} \mathcal{F} \geq \dfrac{1 + e^{-\Gamma}}{2}. \end{equation} The lower bound of the inequality above is reached if the prefactors satisfy $a=b=1/\sqrt{2}$.\\ Figure \ref{fig:Infidelity} shows the maximal infidelity $ 1 - \mathcal{F} $ and the phase difference $ \Delta\Phi = 2(\Phi(A) - \Phi(P)) $ for different $ \gamma $ and $ T $. In (a) and (b) the operation time was held constant at $ T=0.8\,\mu$s while $ \gamma/\omega $ varied, and in (c) and (d) we chose a constant $ \gamma/\omega = 10^{-4} $ for varying $ T $. The plots (b) and (d) show the phase difference for the force given in Eq.~\eqref{eq:f_damped}, which accounts for the damping (solid blue line), and for the original force (dashed red line). We can see that the force we constructed (blue line) correctly compensates for the phase, whereas the lack of compensation would lead to a drastic phase deviation for larger damping strengths. Different operation times do not affect the phase significantly for either force because for $ \gamma/\omega = 10^{-4} $ the lost area is still marginal. For large $ \gamma $ the curve in (a) approaches $ 1/2 $; apart from that, the infidelity is roughly of the same order of magnitude as $ \gamma/\omega $.
Plot (c) demonstrates that short operation times $ T $ lead to a higher infidelity, because the forces needed to achieve the desired phase result in a higher amplitude $ \abs{z} $ and therefore in a larger $ \Gamma $ (see Fig.~\ref{fig:2QubitAlphaPath}~(d)). The bump in plot (c) comes from our choice of the path and has no further physical meaning. We can compare these infidelities, which are roughly of the order of $ \gamma/\omega $, to infidelities from other sources studied in \cite{PhysRevA.95.022328}. The infidelity caused by the anharmonic Coulomb repulsion is below $ 10^{-4} $, whereas the infidelity caused by considering the correct sinusoidal form of the force $ F(x,t) = F(t)\cdot\sin(kx) $ is between $ 10^{-5} $ and $ 0.1 $ depending on the operation time. \begin{figure} \caption{(color online) The upper two plots show the maximum infidelity (a) and the phase difference $ \Delta\Phi $ (b).} \label{fig:Infidelity} \end{figure} \section{Finite temperature effects} \label{sec:finiteTemperatureEffects} In Sec.~\ref{ssec:dissipativeCase} we outlined how the effects of finite temperature can be taken into account by using a noisy force equal to $ \widetilde{f}_{j}(t) + \sqrt{2\gamma\bar{n}}\hbar \chi(t) $, where $ \chi(t) $ is a complex valued Gaussian white noise process. In this section we show how the previous results can be extended to include finite temperature effects and what impact these effects have on the fidelity of a two-qubit phase gate.\\ We begin with a stochastic GKSL-type master equation that describes the influence of a finite temperature heat bath coupled to the trapped ions implementing the two-qubit phase gate.
We model the effect of the heat bath by equal strength but independent damping of both vibration modes ($a_\pm$) and by independent thermal noise processes affecting both modes ($\chi_\pm(t)$) \begin{align} \label{eq:2QubitTimeEvolutionFiniteT} \dot{\rho}_{\chi} =& \mathcal{L}_{\chi,+}[\rho_\chi] + \mathcal{L}_{\chi,-}[\rho_\chi] ,\\ \mathcal{L}_{\chi,\pm}[\rho] =& \frac{-i}{\hbar}[\widetilde{V}_{\chi,\pm},{\rho}] + \gamma\left(2a_{\pm}\rho a_{\pm}^{\dagger} - a_{\pm}^{\dagger}a_{\pm}\rho - \rho a_{\pm}^{\dagger}a_{\pm}\right), \nonumber\\ \widetilde{V}_{\chi,\pm} =& (\widetilde{f}_{\pm}^{*}+ \sqrt{2\gamma\bar{n}}\hbar \chi_\pm^{*}(t))a_{\pm} + (\widetilde{f}_{\pm}+ \sqrt{2\gamma\bar{n}}\hbar \chi_\pm(t))a_{\pm}^{\dagger}, \nonumber\\ \rho =& \langle\rho_{\chi}\rangle. \nonumber \end{align} In this form the equation is almost identical to Eq.~\eqref{eq:2QubitTimeEvolution}, with the small difference that the thermal noise has been added to the deterministic driving force. The solution $\rho_{\chi}$ is similar to the zero temperature case, with the addition of the random thermal process. It can be expressed in terms of coherent states whose labels satisfy \begin{equation} \label{eq:noisyTrajectory} \dot{z}_{\pm}^{j}+\gamma z_{\pm}^{j} = \dfrac{1}{i\hbar}\left(\widetilde{f}_{\pm}(j) + \sqrt{2\gamma\bar{n}}\hbar\chi_{\pm}\right). \end{equation} However, an important difference is that due to the noise the path can no longer be closed reliably by choosing an appropriate force. As can be seen in Fig.~\ref{fig:thermal_traj}, the thermal noise causes fluctuations around the paths driven by the deterministic force ($z_+^1,\, z_-^0$) and the non-driven paths ($z_+^0,\, z_-^1$) are no longer stationary but fluctuate around the ground state.
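A single noisy trajectory of this kind can be sampled with an Euler-Maruyama scheme. The sketch below (with $\hbar = 1$ and the noise normalization $\langle\chi^{*}(t)\chi(t')\rangle = \delta(t-t')$, both assumptions made only for illustration) checks the heating of an undriven mode against the corresponding analytic mean occupation $\bar{n}(1 - e^{-2\gamma T})$:

```python
import math, random

def thermal_final_label(gamma, nbar, T, steps=2_000, rng=None):
    """Euler-Maruyama sample of dz = -gamma*z dt - 1j*sqrt(2*gamma*nbar) dW."""
    rng = rng or random.Random()
    dt = T / steps
    z = 0.0 + 0.0j
    for _ in range(steps):
        # complex Wiener increment with E[|dW|^2] = dt
        dW = complex(rng.gauss(0, 1), rng.gauss(0, 1)) * math.sqrt(dt / 2)
        z += -gamma * z * dt - 1j * math.sqrt(2 * gamma * nbar) * dW
    return z

gamma, nbar, T = 0.05, 2.0, 10.0
rng = random.Random(1)
samples = [abs(thermal_final_label(gamma, nbar, T, rng=rng))**2 for _ in range(200)]
mean_n = sum(samples) / len(samples)
expected = nbar * (1 - math.exp(-2 * gamma * T))  # analytic heating of the mode
assert abs(mean_n - expected) < 0.4               # generous statistical tolerance
```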
\begin{figure} \caption{\label{fig:thermal_traj} Phase space paths $z_{\pm}^{j}$ in the presence of thermal noise: the driven paths ($z_+^1,\, z_-^0$) fluctuate around their deterministic trajectories, while the non-driven paths ($z_+^0,\, z_-^1$) fluctuate around the ground state.} \end{figure} \subsection{Fidelity in the finite temperature case} \label{ssec:fidelityFiniteTemperature} {One important question is how the fidelity of the phase gate is affected by the thermal noise and whether the compensation strategy suggested in Sec.~\ref{ssec:force} still improves this fidelity. In this section we seek to answer this question.\\ As in section \ref{ssec:fidelity} the fidelity is defined as \begin{equation*} \mathcal{F} = \bra{\Psi_{\mathrm{d}}}\rho(T)\ket{\Psi_{\mathrm{d}}}. \end{equation*} Since in the finite temperature case the density operator is given as an average $ \rho = \langle\rho_{\xi}\rangle$, the fidelity can be obtained by averaging as well, \begin{equation*} \mathcal{F} = \langle\bra{\Psi_{\mathrm{d}}}\rho_{\xi}(T)\ket{\Psi_{\mathrm{d}}}\rangle. \end{equation*} We will proceed to first evaluate the fidelity for a general $\rho_{\xi}(T)$ and then take the average over the stochastic processes $\xi_\pm(t) $.\\ We again consider the overlap with the target state $\ket{\Psi_{\mathrm{d}}} = ae^{i\Phi_{{\mathrm{isol}}}}\ket{P} + b\ket{A}$ and choose $a=b=1/\sqrt{2}$. Other choices for the prefactors will lead to higher fidelity, as discussed in Sec.~\ref{ssec:fidelity}.
The fidelity of any $\rho_{\xi}$ after the phase gate operation is given as
\begin{flalign}
\label{eq:fidelityFull}
\mathcal{F} = &\dfrac{1}{4}( \exp(-(\abs{z_{-}^{1}}^2+\abs{z_{+}^{1}}^2)) + \exp(-(\abs{z_{-}^{0}}^2 + \abs{z_{+}^{0}}^2))\nonumber\\
& +2\operatorname{Re}(e^{-i\Delta\Phi - \Gamma -\dfrac{1}{2}\left(\abs{z_{-}^{1}}^2 + \abs{z_{+}^{1}}^2 + \abs{z_{-}^{0}}^2 + \abs{z_{+}^{0}}^2 \right)})), \\
\Gamma = &\gamma\int_{0}^{T} \abs{z_{+}^{1}(\tau) - z_{+}^{0}(\tau)}^{2} + \abs{z_{-}^{1}(\tau) - z_{-}^{0}(\tau)}^{2} {\rm d}{\tau}, \nonumber\\
\Delta\Phi = &\Phi(T) - \Phi_{\mathrm{isol}}.\nonumber
\end{flalign}
All quantities in the expression for $\mathcal{F}$ are evaluated at time $T$, that is, at the end of the phase gate operation.\\
In this expression one can identify the three possible causes of fidelity loss:
\begin{enumerate}
\item The terms $ \propto \exp(-\abs{z_\pm^j(T)}^2) $ arise when the path is not closed, which means that $ z(T) \neq 0 $.
\item The term $ \exp(-i\Delta\Phi) $ contributes when $ \Phi(T) \neq \Phi_{\mathrm{isol}} $,
\item and the term $ \exp(-\Gamma) $ describes decoherence, which is induced by the damping and cannot be prevented.
\end{enumerate}
In the zero temperature case we could close the path and restore the phase of the isolated case. Therefore only the decoherence contributed to the fidelity loss and the formula reduced to Eq.~\eqref{eq:fidelity}. For finite temperature this is no longer the case, and we therefore have to work with the full expression for $ \mathcal{F} $. We have plotted the average infidelity as a function of $\gamma\bar{n}T$ over 5000 realizations in Fig.~\ref{fig:thermal_fid} (solid line). We checked that our results coincide with directly solving the master equation \eqref{eq:model} (not shown in the figure).
The other parameters are as in Fig.~\ref{fig:2QubitAlphaPath}(c), except $\gamma/\omega=0.2$. We see that the compensation strategy improves the fidelity even at finite temperatures. {The reason is that it still increases the average overlap of the final state and the ground state. }{We note, however, that the importance of this effect decreases as $\bar{n}/(\gamma T)$ increases}. When $\gamma\bar{n}T\approx 0.1$ (dot), $\mathcal{F}\approx 0.61$, and given the values $T=0.3\,\mu$s, $\gamma/\omega = 0.2$ and $\omega/2\pi=2$\,MHz, the average thermal photon number is $\bar{n}\approx 2.7$. This corresponds to a temperature of about $0.3$\,mK (we evaluate $\bar{n}$ at $\omega$). {These temperatures are within the range of current experimental capabilities, see for example~\cite{Feldker2020}}.\\ In appendix \ref{sec:AnhangFidelity} we evaluate the $\xi$ dependence of the fidelity. We use a cumulant expansion to find an approximate analytical expression for the average fidelity. This approximation is plotted in Fig.~\ref{fig:thermal_fid} (dashed line)\footnote{For the plot we used the results \eqref{eq:avg1stTerm} and \eqref{eq:avg2ndTerm} instead of the final linearized expression \eqref{eq:avgFidelity}, since those provided better agreement in the rather strongly damped regime.}. The inset shows that our approximation gives the first order correction in $\gamma\bar{n} T$ to the zero temperature result. This approximation is motivated by the fact that for current ion traps the operation time $T$ is much shorter than the coherence time $1/(\bar{n}\gamma)$ \cite{Schafer2018, doi:10.1063/1.5088164}. Our scenario is in the range $\bar{n}\gamma T\sim 10^{-1}$ and, as we can see, {the approximation nicely captures the finite temperature effects on the fidelity in this regime.
The rather complicated approximate expression can be found in the appendix.}}\\ \begin{figure} \caption{\label{fig:thermal_fid} Average infidelity as a function of $\gamma\bar{n}T$; the solid line shows the numerical average and the dashed line the cumulant-expansion approximation.} \end{figure} \section{Summary and Outlook} \label{sec:summary} We examined how phase gates based on the geometric phases of driven trapped ions behave under dissipation. We used forces which depend on an internal state of the trapped ion in order to construct relative phases which show up in the density operator. We then showed that in the special case of a GKSL-type evolution with closed phase space paths admitting pure state solutions, the total phase always has a third contribution, beyond the dynamical and geometric phases, due to dissipation:
\begin{flalign*}
\Phi &= \Phi_{\mathrm{d}} + \Phi_{\mathrm{g}} + \xi,\\
\xi &= \Phi_{\mathrm{L}} + i\eta.
\end{flalign*}
This third contribution will in general be {complex} and can be directly related to the Lindblad operators. Applied to the harmonic oscillator this means that the damping results in a new additional phase, which can however vanish in certain special cases. More severely, dephasing occurs which cannot be avoided and which depends on the amplitude of the oscillation and the damping strength. We applied our results obtained for a single trapped ion to a two-qubit phase gate proposed in \cite{PhysRevA.95.022328}, which is based on two trapped ions. We found that in the presence of damping the phase produced by the gate depends on the area swept in the interaction picture alone, if the ion is in the ground state at the beginning of the operation. However, due to the dephasing the fidelity of the gate is reduced. This loss of fidelity is especially noticeable for large damping strengths or short operation times. Furthermore, our calculations show how, due to the damping, the phase gate no longer operates independently of the initial motional state.
On the other hand, it is possible to maintain the robustness against small constant offsets of the force $ f \mapsto f + \delta f $ in the damped case by constructing the force using a Gram-Schmidt procedure that also ensures that the force produces a closed phase space path. {We also considered finite temperature effects in order to assess the feasibility of this scheme. We conclude that this scheme could soon be within reach of current experimental techniques by reducing the operation time of the gate or by using colder ion traps, thus preventing the onset of thermal fluctuations.} {To give a fair assessment of the relevance of our results, we point out that at the moment two-qubit gates are typically implemented experimentally under conditions where the motional state of the ion is heated rather than damped~\cite{PhysRevLett.117.060504,PhysRevLett.117.060505}. However, our results predict an improvement in the performance of the geometric phase gate if it were implemented in a buffer gas cooling scenario~\cite{Feldker2020}, where damping can be the main source of gate errors.} {This work could be extended to more than two ions if the block structure of the Hamiltonian is kept and only independent decay processes are considered. In future work we will model the system from first principles in order to determine the validity of the phenomenological model presented and analyzed in this article.} \acknowledgments The authors would like to thank Valentin Link for fruitful discussions {and the anonymous Referees for valuable comments.} \appendix \section{Analyzing the phase of a dissipative time evolution} \label{sec:Anhangphidissipative} Since we assumed that the evolution can be described by a pure state, we can construct two solutions of Eq.~\eqref{eq:time evolution}:
\begin{align*}
\rho_{00}(t) &= \kb{0}{0}\otimes\kb{\Psi_0(t)}{\Psi_0(t)} \\
\rho_{11}(t) &= \kb{1}{1}\otimes\kb{\Psi_1(t)}{\Psi_1(t)}.
\end{align*}
In order to determine the relative phase between those two states, we examine the evolution of their superposition, which means that we have a density operator of the form
\begin{flalign}
\rho &= \rho_{00} + \rho _{01} + \left(\rho_{01}\right)^{\dagger} + \rho_{11} \nonumber\\
\label{eq:rho01A}
\rho_{01} &= e^{i\Phi(t)} \kb{0}{1}\otimes\kb{\Psi_0}{\Psi_1}.
\end{flalign}
We already know that $ \rho_{00} $ and $ \rho_{11} $ solve the equation, so after inserting $ \rho $ into Eq.~\eqref{eq:time evolution} we are left with
\begin{align}
\label{eq:Anhangrho01dot}
\dot{\rho}_{01} &= \frac{-i}{\hbar}[{H},{\rho_{01}}] + \mathcal{L}[\rho_{01}]\\
\mathcal{L}[\rho_{01}] &= \sum_{l=1}^{N} L_l\rho_{01}L_l^{\dagger} - \dfrac{1}{2}\left(L_l^{\dagger}L_l\rho_{01} + \rho_{01} L_l^{\dagger}L_l\right).\nonumber
\end{align}
In analogy to \cite{PhysRevLett.58.1593} we can use \eqref{eq:rho01A} and calculate
\begin{align}
\dot{{\rho}}_{01} &= i\dot{\Phi}\,{\rho}_{01} + e^{i\Phi(t)} \frac{{\rm d}}{{\rm d} t} \left(\kb{0}{1}\otimes\kb{\Psi_0}{\Psi_1}\right)\\
-\dot{\Phi} &= -i\bra{\Psi_0} \left(\frac{{\rm d}}{{\rm d} t} \kb{\Psi_0}{\Psi_1}\right) \ket{\Psi_1} + ie^{-i\Phi}\bra{\Psi_0}\dot{\rho}_{01}\ket{\Psi_1}.
\end{align}
Since we want to determine $ \dot{\Phi} $, this leaves us with two terms to evaluate:
\begin{equation*}
\bra{{\Psi}_0}\left(\frac{{\rm d}}{{\rm d} t} \kb{\Psi_0}{\Psi_1}\right)\ket{{\Psi}_1} = \langle{{\Psi}_0}\vert{\dot{{\Psi}}_0}\rangle + \langle{\dot{{\Psi}}_1}\vert{{\Psi}_1}\rangle,
\end{equation*}
and
\begin{align*}
e^{-i\Phi(t)}\bra{{\Psi}_0}\dot{\rho}_{01}\ket{{\Psi}_1} &= e^{-i\Phi}\bra{\Psi_0}\frac{-i}{\hbar}[{H},{\rho_{01}}] + \mathcal{L}\left[\rho_{01}\right]\ket{\Psi_1} \\
&= \dfrac{-i}{\hbar}\left(\bra{\Psi_0}H\ket{\Psi_0} - \bra{\Psi_1}H\ket{\Psi_1}\right) \\
&+ \sum_{l=1}^{N}\Big[\bra{\Psi_0}L_l\ket{\Psi_0}\bra{\Psi_1}L_l^{\dagger}\ket{\Psi_1} \\
&- \dfrac{1}{2}\left(\bra{\Psi_0}L_l^{\dagger}L_l\ket{\Psi_0} + \bra{\Psi_1}L_l^{\dagger}L_l\ket{\Psi_1}\right)\Big].
\end{align*}
This leads to the final result
\begin{align*}
-\frac{{\rm d}{\Phi}}{{\rm d} t} =& -i\left(\bra{\Psi_0}\partial_t\ket{\Psi_0} - \bra{\Psi_1}\partial_t\ket{\Psi_1}\right) +\frac{1}{\hbar}(\langle H_0\rangle - \langle H_1\rangle) \\
&+ i\sum_{l=1}^{N}\Big[\bra{\Psi_0}L_l\ket{\Psi_0}\bra{\Psi_1}L_l^{\dagger}\ket{\Psi_1} \\
&- \dfrac{1}{2}\left(\bra{\Psi_0}L_l^{\dagger}L_l\ket{\Psi_0} + \bra{\Psi_1}L_l^{\dagger}L_l\ket{\Psi_1}\right)\Big].
\end{align*}
Here we also used that $ \langle{\Psi}\vert\dot{\Psi}\rangle $ is purely imaginary and therefore $ \langle{\Psi}\vert{\dot{\Psi}}\rangle = -\langle{\dot{\Psi}}\vert{\Psi}\rangle $.
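As a consistency check (added here, not part of the original derivation), apply this formula to a single damped mode with $L=\sqrt{2\gamma}\,a$ and coherent states $\ket{\Psi_j}=\ket{z_j}$. Using $\bra{z_j}a\ket{z_j}=z_j$ and $\bra{z_j}a^{\dagger}a\ket{z_j}=\abs{z_j}^2$, the Lindblad contribution to $-\dot{\Phi}$ becomes
\begin{equation*}
2i\gamma\left(z_0 z_1^{*} - \tfrac{1}{2}\abs{z_0}^2 - \tfrac{1}{2}\abs{z_1}^2\right)
= -2\gamma\operatorname{Im}(z_0 z_1^{*}) - i\gamma\abs{z_0 - z_1}^{2},
\end{equation*}
so the imaginary part of $\Phi$ grows at the rate $\gamma\abs{z_0-z_1}^{2}$, which is precisely the integrand of the decoherence exponent $\Gamma$ appearing in Eq.~\eqref{eq:fidelityFull}.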
{\section{Fidelity at finite temperature}} \label{sec:AnhangFidelity} {In Sec.~\ref{sec:finiteTemperatureEffects} we derived an expression for the fidelity of the gate at finite temperatures. Since this expression depends on the random noise $ \xi $, we have to take the average of $\mathcal{F}$ over $\xi$ in order to make meaningful predictions about the fidelity of the phase gate. \\ We will first take the averages of the terms $\propto\exp(-\abs{z_{\pm}^{j}(T)}^2)$ and later evaluate the average of the term $ \exp(-i\Delta\Phi - \Gamma -\dfrac{1}{2}\left(\abs{z_{-}^{1}}^2 + \abs{z_{+}^{1}}^2 + \abs{z_{-}^{0}}^2 + \abs{z_{+}^{0}}^2 \right)) $.\\ In the finite temperature case the presence of noise leads to the following equation for $ z(t) $:
\begin{equation}
\label{eq:z(t)FiniteT}
z_{\pm}^{j}(t) = \dfrac{1}{i\hbar}\int_{0}^{t}(f_{\pm}^{j}+ \sqrt{2\gamma\bar{n}}\hbar\xi_{\pm})e^{i\Omega_{\pm}\tau - \gamma (t-\tau)}{\rm d}{\tau}.
\end{equation}
Note that there are two different uncorrelated noises $ \xi_{\pm} $ for the two different modes of oscillation (stretch and center of mass mode), which do, however, not depend on the internal states. Since the noise is simply added to the force, we can split $ z $ into two parts, where one part $ z_{\pm,f}^{j}(t) $ is under full control of the force (the zero temperature path) and the second part $ z_{\pm,\xi}^{j}(t) $ is a fluctuation due to the noise.
\begin{flalign}
\label{eq:splitZ}
z_{\pm}^{j}(t) &= z_{\pm,f}^{j}(t)+z_{\pm,\xi}^{j}(t),\\
z_{\pm,f}^{j}(t) &= \dfrac{1}{i\hbar}\int_{0}^{t}f_{\pm}^{j}e^{i\Omega_{\pm}\tau - \gamma (t-\tau)}{\rm d}{\tau}, \nonumber\\
z_{\pm,\xi}^{j}(t) &= \dfrac{1}{i\hbar}\int_{0}^{t} \sqrt{2\gamma\bar{n}}\hbar\xi_{\pm}e^{i\Omega_{\pm}\tau - \gamma (t-\tau)}{\rm d}{\tau}.\nonumber
\end{flalign}
Since $z_{\pm,f}^{j}$ does not depend on $\xi$, we arrive at
\begin{flalign}
\label{eq:fidelityAbsTerms}
&\langle\exp(-\abs{z_{\pm}^{j}(T)}^2)\rangle = e^{-\abs{z_{\pm,f}^{j}(T)}^2} \nonumber\\
&\times \langle\exp\Big[-z_{\pm,f}^{j*}(T)z_{\pm,\xi}^{j}(T) - z_{\pm,f}^{j}(T)z_{\pm,\xi}^{j*}(T) \nonumber\\
&\hspace{20pt}- \abs{z_{\pm,\xi}^{j}(T)}^2\Big]\rangle.
\end{flalign}
If we define
\begin{flalign}
\mathcal{V}_{\pm}^{j}(\tau) &= -z_{\pm,f}^{j*}(T)\dfrac{1}{i\hbar}\sqrt{2\gamma\bar{n}}\hbar\xi_{\pm}e^{i\Omega_{\pm}\tau - \gamma(T-\tau)} \nonumber\\
&- z_{\pm,f}^{j}(T)\dfrac{1}{-i\hbar}\sqrt{2\gamma\bar{n}}\hbar\xi_{\pm}^{*}e^{-i\Omega_{\pm}\tau - \gamma(T-\tau)}\nonumber\\
&- \int_{0}^{T} 2\gamma\bar{n}\xi_\pm(\tau)\xi^{*}_\pm(s) e^{i\Omega_\pm(\tau-s)-\gamma(T-\tau + T -s)}{\rm d}{s},
\end{flalign}
we can rewrite Eq.~\eqref{eq:fidelityAbsTerms} as
\begin{flalign}
\label{eq:fidelityV}
\langle e^{-\abs{z_{\pm}^{j}(T)}^2}\rangle &= e^{-\abs{z_{\pm,f}^{j}(T)}^2} \langle e^{\int_{0}^{T} \mathcal{V}_{\pm}^{j}(\tau) {\rm d}{\tau}}\rangle .
\end{flalign}
We now evaluate the average by taking the cumulant expansion to second order:
\begin{flalign*}
\langle e^{\int_{0}^{T} \mathcal{V}_{\pm}^{j}(t) {\rm d}{t}}\rangle = e^{\int_{0}^{T} \langle\mathcal{V}_{\pm}^{j}(t)\rangle {\rm d}{t} + \dfrac{1}{2} \int_{0}^{T}\int_{0}^{T} \langle\mathcal{V}_{\pm}^{j}(t_1)\mathcal{V}_{\pm}^{j}(t_2)\rangle{\rm d}{t_1} {\rm d}{t_2} + \dots}
\end{flalign*}
Using the Gaussian nature of the processes $\xi_\pm$ we get
\begin{flalign}
\langle e^{-\abs{z_{\pm}^{j}(T)}^2}\rangle\approx& \exp\Big[-\bar{n}\left(1-e^{-2\gamma T}\right) \nonumber\\
&-\abs{z_{\pm,f}^{j}}^{2}(1-\bar{n}\left(1-e^{-2\gamma T}\right)) \nonumber\\
\label{eq:avg1stTerm}
&+ \dfrac{3}{2} \left(\bar{n}\left(1-e^{-2\gamma T}\right)\right)^{2}\Big] .
\end{flalign}
From this result we can see that the finite temperature leads to additional decoherence, which increases even further if $ z_{\pm,f}^{j}(T) \neq 0 $. Note that $ z_{\pm,f}^{j}(T) = 0 $ when the path is closed in the zero temperature case. The third term in the expansion above is of the order $ (\gamma\bar{n}T)^{2} = (T/\tau_{\mathrm{d}})^{2} $. Since the coherence time $ \tau_{\mathrm{d}} $ is much larger than the operation time for current ion traps \cite{Schafer2018, lucas2007longlived}, we will only keep terms of first order in $ \gamma\bar{n}T $ from here on.\\
We now take a look at the average of the term
\begin{equation}
\label{eq:2ndTerm}
\langle e^{-i\Delta\Phi - \Gamma -\dfrac{1}{2}\left(\abs{z_{-}^{1}}^2 + \abs{z_{+}^{1}}^2 + \abs{z_{-}^{0}}^2 + \abs{z_{+}^{0}}^2 \right)}\rangle.
\end{equation}
At first one finds that $ \Gamma $ is actually independent of $ \xi $: it is defined as the difference between two paths of different internal states (see Eq.~\eqref{eq:fidelityFull}), and since the noise is independent of the internal state, it cancels out.
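For orientation, the leading term in the exponent of Eq.~\eqref{eq:avg1stTerm} can be checked directly: with the white-noise correlator $\langle\xi_{\pm}(\tau)\xi_{\pm}^{*}(s)\rangle=\delta(\tau-s)$ (our assumed normalization), the second moment of the fluctuating part of the path is
\begin{equation*}
\langle\abs{z_{\pm,\xi}^{j}(T)}^{2}\rangle
= 2\gamma\bar{n}\int_{0}^{T}\!\!\int_{0}^{T}\langle\xi_{\pm}(\tau)\xi_{\pm}^{*}(s)\rangle\, e^{i\Omega_{\pm}(\tau-s)}e^{-\gamma(2T-\tau-s)}\,{\rm d}{\tau}\,{\rm d}{s}
= \bar{n}\left(1-e^{-2\gamma T}\right),
\end{equation*}
which reproduces the first term of the exponent at Gaussian order, $\langle e^{-\abs{z_{\pm,\xi}^{j}}^{2}}\rangle\approx e^{-\langle\abs{z_{\pm,\xi}^{j}}^{2}\rangle}$.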
We then follow the same strategy as before: we introduce a $\mathcal{V}'(t)$ and approximate the exponential with a cumulant expansion,
\begin{equation*}
\mathcal{V}'(\tau) = -i\dot{\Phi}_{\xi}(\tau) +\dfrac{1}{2}\left(\mathcal{V}_{-}^{1}(\tau) + \mathcal{V}_{+}^{1}(\tau)+ \mathcal{V}_{-}^{0}(\tau)+\mathcal{V}_{+}^{0}(\tau)\right) .
\end{equation*}
As for the trajectory before, we also split the phase into a part that is determined by the force alone, $ \Phi_{f} $, and a part that depends on the noise, $ \Phi_{\xi} $: $ \Phi = \Phi_f + \Phi_{\xi} $. With this definition we can write Eq.~\eqref{eq:2ndTerm} as
\begin{flalign*}
e^{-i(\Phi_f - \Phi_{\mathrm{isol}}) - \Gamma -\dfrac{1}{2}\left(\abs{z_{f,-}^{1}}^{2} +\abs{z_{f,+}^{1}}^{2}+\abs{z_{f,-}^{0}}^{2}+\abs{z_{f,+}^{0}}^{2} \right)} \langle e^{\int_{0}^{T} \mathcal{V}'(\tau) {\rm d}{\tau}}\rangle .
\end{flalign*}
We again take the cumulant expansion up to second order and only keep terms up to first order in $ \gamma\bar{n}T $ to arrive at:
\begin{widetext}
\begin{flalign}
\label{eq:avg2ndTerm}
&\langle \exp({-i\Delta\Phi - \Gamma -\frac{\abs{z_{-}^{1}}^2 + \abs{z_{+}^{1}}^2 + \abs{z_{-}^{0}}^2 + \abs{z_{+}^{0}}^2}{2}})\rangle \approxeq \exp\Bigg[ -i(\Phi_f(T) - \Phi_{\mathrm{isol}}) - \Gamma -2\bar{n}\left({1-e^{-2\gamma T}}\right) \nonumber\\
&\hspace{10pt}+ \dfrac{\gamma\bar{n}}{\hbar}\operatorname{Im}\left(\int_{0}^{T}{\rm d}{t_1}\int_{0}^{t_1}{\rm d}{t_2} \left(z_{-,f}^{1*}(t_2)\widetilde{f}_{-}^{1}(t_1) + z_{+,f}^{0*}(t_2)\widetilde{f}_{+}^{0}(t_1) \right)e^{-\gamma(T+t_1 - 2t_2)} \right) \nonumber\\
&\hspace{10pt}-\dfrac{\bar{n}}{\hbar^2}\operatorname{Re}\left(\int_{0}^{T}{\rm d}{t_1}\int_{0}^{t_1}{\rm d}{t_2} \left(\widetilde{f}_{-}^{1}(t_1)\widetilde{f}_{-}^{1*}(t_2) + \widetilde{f}_{+}^{0}(t_1)\widetilde{f}_{+}^{0*}(t_2)\right)e^{-\gamma
t_1}\sinh(\gamma t_2)\right)\nonumber\\
&\hspace{10pt}+ \dfrac{\gamma\bar{n}}{2}\int_{0}^{T}{\rm d}{t_1}\abs{z_{-,f}^{1}(t_1)}^{2} + \abs{z_{+,f}^{0}(t_1)}^{2} + \dfrac{i\bar{n}}{\hbar}\operatorname{Re}\left(\int_{0}^{T}{\rm d}{t_1}\left(z_{-,f}^{1}(T)\widetilde{f}_{-}^{1}(t_1) - z_{+,f}^{0}(T)\widetilde{f}_{+}^{0}(t_1)\right)e^{-\gamma T}\sinh(\gamma t_1)\right)\nonumber\\
&\hspace{10pt}-\dfrac{\abs{z_{-,f}^{1}(T)}^{2}+ \abs{z_{+,f}^{0}(T)}^{2}}{2}{\left(1-\bar{n}\left(1-e^{-2\gamma T}\right)\right)}\Bigg] .
\end{flalign}
With Eqs.~\eqref{eq:avg1stTerm} and \eqref{eq:avg2ndTerm} we can now express the average fidelity to first order in $ \bar{n}\gamma T $, where we have also expanded the exponentials of $\gamma T$:
\begin{flalign}\label{eq:avgFidelity}
\langle\mathcal{F}\rangle \approxeq& \dfrac{\exp(-4\bar{n}\gamma T - \abs{z_{-,f}^{1}(T)}^{2}(1-2\bar{n}\gamma T))+\exp(-4\bar{n}\gamma T - \abs{z_{+,f}^{0}(T)}^{2}(1-2\bar{n}\gamma T))}{4} \nonumber\\
&+ \frac{1}{2}\operatorname{Re}\Bigg[ \exp\Bigg\{ -i(\Phi_f(T) - \Phi_{\mathrm{isol}}) - \Gamma -4\bar{n}\gamma T - \dfrac{\gamma\bar{n}}{2}\int_{0}^{T}{\rm d}{t_1}\abs{z_{-,f}^{1}(t_1)}^{2} + \abs{z_{+,f}^{0}(t_1)}^{2} \nonumber\\
&+ \dfrac{\gamma\bar{n}}{\hbar}\operatorname{Im}\left(\int_{0}^{T}{\rm d}{t_1}\int_{0}^{t_1}{\rm d}{t_2}\, z_{+,f}^{0*}(t_2)\widetilde{f}_{+}^{0}(t_1)+z_{-,f}^{1*}(t_2)\widetilde{f}_{-}^{1}(t_1)\right) \nonumber\\
&+ \dfrac{i{\gamma}\bar{n}}{\hbar}\operatorname{Re}\left(\int_{0}^{T}{\rm d}{t_1}\left(z_{-,f}^{1}(T)\widetilde{f}_{-}^{1}(t_1) - z_{+,f}^{0}(T)\widetilde{f}_{+}^{0}(t_1)\right){t_1}\right)\nonumber\\
&{-}\dfrac{{\gamma}\bar{n}}{\hbar^2}{\operatorname{Re}}\left(\int_{0}^{T}{\rm d}{t_1}\int_{0}^{t_1}{\rm d}{t_2} \left(\widetilde{f}_{-}^{1}(t_1)\widetilde{f}_{-}^{1*}(t_2) + \widetilde{f}_{+}^{0}(t_1)\widetilde{f}_{+}^{0*}(t_2)\right){t_2}\right)\nonumber\\
&-\dfrac{\abs{z_{-,f}^{1}(T)}^{2}+
\abs{z_{+,f}^{0}(T)}^{2}}{2}\left(1-{2}\bar{n}\gamma T\right)\Bigg\}\Bigg] .
\end{flalign}
{Note that this expression still contains the end positions of the zero temperature paths $\abs{z_{\pm,f}^{j}(T)}^{2}$. Since these vanish if the compensation strategy is employed, this suggests that the strategy improves the fidelity even at finite temperatures. Furthermore, although all of the terms in the exponential are of first order in $\gamma\bar{n}T$, we found that for our set of forces the dominating temperature dependent contribution is the $-4\bar{n}\gamma T$ terms. They arise because the finite temperature paths can no longer be closed reliably.}
\end{widetext}}
\end{document}
\begin{document} \pagestyle{myheadings} \markboth{ Twisted Alexander polynomials associated to metacyclic representations } { \ \ M. Hirasawa \& K. Murasugi} \title{Twisted Alexander polynomials of $2$-bridge knots associated to metacyclic representations} \author{Mikami Hirasawa} \address{Department of Mathematics, Nagoya Institute of Technology\\ Nagoya Aichi 466-8555 Japan\\ {\it E-mail: [email protected]} } \author{Kunio Murasugi} \address{Department of Mathematics, University of Toronto\\ Toronto, ON M5S2E4 Canada\\ {\it E-mail: [email protected]} } \maketitle \begin{abstract} Let $p=2n+1$ be a prime and $D_p$ a dihedral group of order $2p$. Let $\widehat{\rho} : G(K) \rightarrow D_p \rightarrow GL(p,\ZZ)$ be a non-abelian representation of the knot group $G(K)$ of a knot $K$ in the 3-sphere. Let $\widetilde{\Delta}_{\widehat{\rho},K} (t)$ be the twisted Alexander polynomial of $K$ associated to $\widehat{\rho}$. Then we prove that for any 2-bridge knot $K(r)$ in $H(p)$, $\widetilde{\Delta}_{\widehat{\rho},K}(t)$ is of the form $\left\{\dfrac{{\Delta}_{K(r)} (t)}{1-t}\right\} f(t) f(-t)$ for some integer polynomial $f(t)$, where $H(p)$ is the set of $2$-bridge knots $K(r), 0<r<1$, such that $G(K(r))$ is mapped onto a non-trivial free product $\ZZ/2 * \ZZ/p$. Further, it is proved that $f(t) \equiv \left\{\dfrac{{\Delta}_{K} (t)} {1+t}\right\}^n$ (mod $p$), where ${\Delta}_{K} (t)$ is the Alexander polynomial of $K$. Later we discuss the twisted Alexander polynomial associated to the general metacyclic representation. \end{abstract} \keywords{$2$-bridge knot, twisted Alexander polynomial, dihedral representation, metacyclic representation.} \ccode{Mathematics Subject Classification 2000: 57M25, 57M27} \section{Introduction} In the previous paper \cite{HM}, we studied the parabolic representation of the group of a $2$-bridge knot and showed some properties of its twisted Alexander polynomial. In this paper, we consider metacyclic representations of the knot group.
Let $G(m,p|k)$ be a (non-abelian) semi-direct product of two cyclic groups $\ZZ/m$ and $\ZZ/p$, $p$ an odd prime, with the following presentation: \begin{equation} G(m,p|k)=\langle s,a|s^m=a^p=1,sas^{-1}=a^k\rangle, \end{equation} where $k$ is a primitive $m$-th root of 1 (mod $p$), i.e. $k^m \equiv 1$ (mod $p$), but $k^q \not\equiv 1$ (mod $p$) for any $q, 0<q<m$ and $k \ne 0, 1$. If $k=-1$, then $m=2$ and hence $G(2,p|-1)$ is a dihedral group $D_p$. Since $k$ is a primitive $m$-th root of 1 (mod $p$), $G(m,p|k)$ is imbedded in the symmetric group $S_p$ and hence in $GL(p,\ZZ)$ via permutation matrices. Now suppose that the knot group $G(K)$ of a knot $K$ is mapped onto $G(m,p|k)$ for some $m,p$ and $k$. Then, we have a representation $f:\ G(K) \rightarrow G(m,p|k) \rightarrow GL(p,\ZZ)$ and the twisted Alexander polynomial $\widetilde{\Delta}_{f,K}(t)$ associated to $f$ is defined \cite{L} \cite{W} \cite{KL}. One of our objectives is to characterize these twisted Alexander polynomials. In fact, we propose the following conjecture. \noindent{\bf Conjecture A}. {\it $\widetilde{\Delta}_{f,K}(t)= \left\{\dfrac{\Delta_{K} (t)}{1-t}\right\} F(t)$, where $\Delta_{K}(t)$ is the Alexander polynomial of $K$ and $F(t)$ is an integer polynomial in $t^m$. } First we study the case $k=-1$, dihedral representations of the knot group. Let $D_p$ be a dihedral group of order $2p$, where $p=2n+1$ and $p$ is a prime. Then the knot group $G(K)$ of a knot $K$ is mapped onto $D_p$ if and only if ${\Delta}_{K} (-1) \equiv 0$ (mod $p$) \cite{Fox62}, \cite{Ha}. Therefore, if ${\Delta}_{K} (-1) \ne \pm 1$, $G(K)$ has at least one representation on a certain dihedral group $D_p$. For these cases, we can make Conjecture A slightly sharper: \noindent{\bf Conjecture B}. 
{\it Let $\widehat{\rho}:\ G(K) \rightarrow D_p \rightarrow GL(p, \ZZ)$ be a non-abelian representation of the knot group $G(K)$ of a knot $K$ and let $\widetilde{\Delta}_{\widehat{\rho}, K} (t)$ be the twisted Alexander polynomial of $K$ associated to $\widehat{\rho}$. Then \begin{equation} \widetilde{\Delta}_{\widehat{\rho}, K} (t) = \left\{\frac{{\Delta}_{K}(t)}{1-t}\right\}f(t) f(-t), \end{equation} where $f(t)$ is an integer polynomial and further, \begin{equation} f(t) \equiv \left\{\frac{{\Delta}_{K} (t)}{1+t}\right\}^{n} ({\rm mod\ } p) \end{equation} } We should note that $(1+t)^2$ divides $\Delta_K(t)$ (mod $p$) if and only if $\Delta_K(-1)\equiv 0$ (mod $p$). The main purpose of this paper is to prove (1.2) for a $2$-bridge knot $K(r)$ in $H(p)$, $p$ a prime, and (1.3) for a $2$-bridge knot with ${\Delta}_{K}(-1) \equiv 0$ (mod $p$). (See Theorem 2.2.) Here $H(p)$ is the set of $2$-bridge knots $K(r), 0<r<1,$ such that $G(K(r))$ is mapped onto a free product $\ZZ/2 * \ZZ/p$. We note that knots in $H(p)$ have been studied extensively in \cite{GR} and \cite{ORS}. A proof of the main theorem (Theorem 2.2) is given in Section 2 through Section 7. Since this paper is a sequel of \cite{HM}, we occasionally skip some details if the argument used in \cite{HM} also works in this paper. In Section 8, we consider another type of metacyclic groups, denoted by $N(q,p)$. $N(q,p)$ is a semi-direct product of two cyclic groups, $\ZZ/2q$ and $\ZZ/p$ defined by \begin{equation} N(q,p)=\langle s, a| s^{2q}=a^p=1, sas^{-1}=a^{-1}\rangle, \end{equation} where $q \geq 1$ and $p$ is an odd prime and $\gcd(q,p)=1$. We note that $N(1,p) =D_p$ and $N(2,p)$ is called a binary dihedral group. Let $\widetilde{\nu}:\ G(K) \longrightarrow N(q,p) \longrightarrow GL(2pq,\ZZ)$ be a representation of $G(K)$. (For details, see Section 8.) 
Then we show that for a $2$-bridge knot $K(r)$, the twisted Alexander polynomial $\widetilde{\Delta}_{\widetilde{\nu},K(r)}(t)$ associated to $\widetilde{\nu}$ is completely determined by the Alexander polynomial $\Delta_{K(r)}(t)$ and the twisted Alexander polynomial $\widetilde{\Delta}_{\widehat{\rho},K(r)}(t)$ associated to $\widehat{\rho}$. (Proposition \ref{prop:8.5}) In Section 9, we give examples that illustrate our main theorem and Proposition \ref{prop:8.5}. It is interesting to observe that $\widetilde{\Delta}_{\widetilde{\nu},K(r)}(t)$ is an integer polynomial in $t^{2q}$. In Section 10, we briefly discuss general $G(m,p|k)$-representations of the knot group and give several examples, one of which is not a $2$-bridge knot, that support Conjecture A. In Section 11, we prove Proposition 2.1 and Lemma 5.2, which play a key role in our proof of the main theorem. Finally, for convenience, we draw a diagram below consisting of homomorphisms that connect various groups and rings. \begin{center} $ \begin{array}{ cccc ccc} & & GL(p,\ZZ) & & GL(2n,\ZZ)& & \\ & & \mbox{\large $\pi$}\uparrow& \nearrow& \hspace*{-11mm} \mbox{\large $\pi_0$} \ \ \ \ \ \uparrow \mbox{\large $\gamma$}& & \\ G(K)&\underset{\mbox{\large $\rho$}}{\longrightarrow}&D_p& \underset{\mbox{\large $\xi$}}{\longrightarrow}&GL(2,\CC)& & \\ \downarrow& & \downarrow& & & & \\ \ZZ G(K)&\longrightarrow&\ZZ D_p& \underset{\mbox{\large $\zeta$}}{\longrightarrow}&\widetilde{A}(\omega)& & M_{2n,2n}(\ZZ[t^{\pm 1}])\\ & \mbox{\large $\rho^{*}$}\hspace{-1mm}\searrow&\downarrow& &\downarrow&&\uparrow \mbox{\large $\gamma^{*}$}\\ & & \ZZ D_p[t^{\pm1}]&\underset{\mbox{\large $\zeta^{*}$}}{\longrightarrow}& \widetilde{A}(\omega)[t^{\pm 1}]& \underset{\mbox{\large $\xi^{*}$}}{\longrightarrow}& M_{2,2}\bigl((\ZZ [\omega])[t^{\pm 1}]\bigr) \end{array} $ \end{center} Here, $\tau=\rho\circ\xi, \widehat\rho=\rho\circ\pi, \rho_0=\rho\circ\pi_0, \eta=\xi\circ\gamma, \Phi^*=\rho^*\circ\zeta^*\circ\xi^*$ and
$\nu=\rho\circ\xi\circ\gamma$. Unmarked arrows indicate natural extensions of homomorphisms. \section{ Dihedral representations and statement of the main theorem} We begin with a precise formulation of representations. Let $p=2n+1$ and $D_p$ be a dihedral group of order $2p$ with a presentation: $D_p = \langle x,y| x^2 = y^2 = (xy)^p = 1\rangle$. As is well known, $D_p$ can be faithfully represented in $GL(p,\ZZ)$ by the map $\pi$ defined by: \begin{center}$ x\mapsto \left[ \begin{array}{ccccccc} 1&0&0&\cdots&0&0&0\\ 0&0&0&\cdots&0&0&1\\ 0&0&0&\cdots&0&1&0\\ \vdots & \vdots& \vdots& &\ddots & &\vdots \\ \vdots &\vdots & &\ddots& & \vdots& \vdots\\ \vdots & & \ddots & & \vdots&\vdots & \vdots\\ 0&1&0&\cdots&0&0&0 \end{array} \right] \ y\mapsto \left[ \begin{array}{ccccccc} 0&1&0&\cdots&0&0&0\\ 1&0&0&\cdots&0&0&0\\ 0&0&0&\cdots&0&0&1\\ 0&0&0&\cdots&0&1&0\\ \vdots & \vdots& \vdots& & \ddots& & \vdots\\ \vdots & \vdots& &\ddots & \vdots& \vdots&\vdots \\ 0&0&1&\cdots&0&0&0 \end{array} \right] $ \end{center} However, $\pi$ is reducible. In fact, $\pi$ is equivalent to $id \ast{\pi}_0$, where \begin{equation} \pi_0: x\mapsto \left[ \begin{array}{cccccc} 0&0&\cdots&0&0&1\\ 0&0&\cdots&0&1&0\\ \vdots & \vdots& &\ddots & & \vdots\\ \vdots& &\ddots & & \vdots& \vdots\\ 0&1&\cdots&0&0&0\\ 1&0&\cdots&0&0&0 \end{array} \right] y\mapsto \left[ \begin{array}{ccccccc} -1&0&0&\cdots&0&0&0\\ -1&0&0&\cdots&0&0&1\\ -1&0&0&\cdots&0&1&0\\ \vdots & \vdots& \vdots& & \ddots&& \vdots\\ \vdots & \vdots& & \ddots & & \vdots& \vdots\\ -1&0&1&\cdots&0&0&0\\ -1&1&0&\cdots&0&0&0 \end{array} \right] \end{equation} For convenience, ${\pi}_0$ is called the {\it irreducible representation} of $D_p$ (of degree $p-1=2n$).
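To illustrate (this small check is ours), take the smallest case $p=3$, $n=1$. Then $\pi_0$ reduces to
\begin{equation*}
\pi_0(x)=\begin{bmatrix} 0&1\\ 1&0 \end{bmatrix},\qquad
\pi_0(y)=\begin{bmatrix} -1&0\\ -1&1 \end{bmatrix},
\end{equation*}
and one verifies directly that $\pi_0(x)^2=\pi_0(y)^2=\bigl(\pi_0(x)\pi_0(y)\bigr)^3$ is the identity matrix, so the defining relations of $D_3$ hold.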
Now let $K(r), 0<r<1, r= \frac{\beta}{\alpha}$ and $\gcd(\alpha,\beta)=1$, be a $2$-bridge knot and consider a Wirtinger presentation of the group $G(K(r))$: \begin{align} &G(K(r)) = \langle x,y| R \rangle,\ {\rm where}\nonumber\\ &R = WxW ^{-1} y^{-1}, W = x^{\epsilon_1} y^{\epsilon_2} \cdots x^{\epsilon_{\alpha-2}} y^{\epsilon_{\alpha-1}}\ {\rm and}\nonumber\\ &\epsilon_j = \pm 1\ {\rm for}\ 1\leq j \leq \alpha -1. \end{align} Suppose that $p$ is a prime. If $\alpha \equiv 0$ (mod $p$), then a mapping \begin{equation} \rho: x \mapsto x\ {\rm and}\ y \mapsto y \end{equation} defines a surjection from $G(K(r))$ to $D_p$. Therefore $\rho_0 = \rho \circ \pi_0$ defines a representation of $G(K(r))$ into $GL(2n,\ZZ)$ and we can define the twisted Alexander polynomial $\widetilde{\Delta}_{\rho_0, K(r)} (t)$ associated to $\rho_0$. Since $\pi = id \ast \pi_0$, the twisted Alexander polynomial associated to $\widehat{\rho} = \rho \circ \pi $ is given by $\left[\dfrac{\Delta_{K(r)}(t)}{1-t}\right] \widetilde{\Delta}_{\rho_0, K(r)}(t)$ and hence (1.2) becomes \begin{equation} \widetilde{\Delta}_{\rho_0, K(r)} (t) = f(t) f(-t). \end{equation} Now there is another representation of $D_p$ in $GL(2,\CC)$. To be more precise, consider $\xi: D_p \rightarrow GL(2,\CC)$ given by \begin{equation} \xi (x) =\mtx{-1}{1}{0}{1}\ {\rm and}\ \xi (y) =\mtx{-1}{0}{\omega}{1}, \end{equation} where $\omega \in \CC$ is determined as follows. First we set $\xi (x) = \mtx{-1}{1}{0}{1}$ and $\xi (y) =\mtx{-1}{0}{z}{1}$, and write $\xi((xy)^k) = \mtx{a_k(z)}{b_k(z)}{c_k(z)}{d_k(z)}$. Since $\xi(xy) = \mtx{1+z}{1}{z}{1}$, we see that $a_k ,b_k ,c_k$ and $d_k$ are exactly the same polynomials found in \cite[(4.1)]{HM}. Further, as is mentioned in \cite{HM}, $a_n(z)$ and $b_n(z)$ are given as follows \cite[Propositions 10.2 and 2.4]{HM}: \begin{equation} a_n(z) = \sum_{k=0}^{n} \binom{n+k}{2k}z^k\ {\rm and}\ b_n(z)=\sum_{k=0}^{n-1} \binom{n+k}{2k+1} z^k.
\end{equation} Since $(xy)^{2n+1} =1$, we have $(xy)^n x= y (xy)^n$ and hence, a simple calculation shows that $\xi ((xy)^n x)=\xi (y (xy)^n)$ yields $a_n(z) + 2 b_n(z)= 0$. Therefore, the number $\omega$ we are looking for is a root of $\theta_n (z) = a_n (z) + 2 b_n (z)$. Write $\theta_n (z)={c_0}^{(n)} + {c_1}^{(n)} z + \cdots + {c_{n-1}}^{(n)} z^{n-1} + {c_n}^{(n)} z^n$. Then we see \begin{equation} {\displaystyle {c_k}^{(n)}=\binom{n+k}{2k}+ 2\binom{n+k}{2k+1} =\frac{2n+1}{2k+1}\binom{n+k}{n-k}}. \end{equation} If $p = 2n+1$ is prime, then, for $0 \leq k \leq n-1$, ${c_k}^{(n)} \equiv 0$ (mod $p$), while ${c_0}^{(n)} = p$ and ${c_n}^{(n)} = 1$. Therefore, by Eisenstein's criterion, $\theta_n (z)$ is irreducible and it is the minimal polynomial of $\omega$. Let $C_n$ be the companion matrix of $\theta_n (z)$. By substituting $C_n$ for $\omega$, we have a homomorphism $\gamma : GL(2,\CC) \rightarrow GL(2n,\ZZ)$, namely, $\gamma (1) = E_n$ and $\gamma (\omega)= C_n$, where $E_n$ is the identity matrix, and hence we obtain another representation $\eta=\xi \circ \gamma : D_p \rightarrow GL(2n,\ZZ)$. The following proposition is likely known, but since we are unable to find a reference, we prove it in Section 11. \begin{prop}\label{prop:2.1} The two representations $\pi_0$ and $\eta$ are equivalent. In other words, there is a matrix $U_n \in GL(2n,\ZZ)$ such that \begin{equation} U_n \pi_0 (x) {U_n}^{-1}= \eta (x)\ {\rm and}\ U_n \pi_0 (y) {U_n}^{-1} = \eta (y). \end{equation} \end{prop} Let $K(r)$ be a $2$-bridge knot in $H(p)$. Then $\tau = \rho \circ \xi : G(K(r)) \rightarrow D_p \rightarrow GL(2,\CC)$ defines a representation of $G(K(r))$ and let $\widetilde{\Delta}_{\tau, K(r)} (t)$ be the twisted Alexander polynomial associated to $\tau$. Sometimes, we use the notation $\widetilde{\Delta}_{\tau, K(r)} (t|\omega)$ to emphasize that the polynomial involves $\omega$. Let $\omega_1, \omega_2, \cdots, \omega_n$ be all the roots of $\theta_n (z)$.
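The coefficient formula (2.7) and the Eisenstein divisibility can be checked by a short computation; the sketch below (function names are ours) compares $\theta_n(z)=a_n(z)+2b_n(z)$ with the closed form for small $n$:

```python
from math import comb
from fractions import Fraction

def theta(n):
    # coefficients of theta_n(z) = a_n(z) + 2 b_n(z), in degrees 0, ..., n
    a = [comb(n + k, 2 * k) for k in range(n + 1)]
    b = [comb(n + k, 2 * k + 1) for k in range(n)] + [0]
    return [ak + 2 * bk for ak, bk in zip(a, b)]

def c_closed(n, k):
    # closed form (2.7): c_k^(n) = (2n+1)/(2k+1) * C(n+k, n-k)
    v = Fraction(2 * n + 1, 2 * k + 1) * comb(n + k, n - k)
    assert v.denominator == 1
    return int(v)

for n in range(1, 8):
    t = theta(n)
    assert t == [c_closed(n, k) for k in range(n + 1)]
    assert t[0] == 2 * n + 1 and t[-1] == 1   # constant term p, monic

# Eisenstein at p = 2n+1 when p is prime:
# p divides c_k for k < n, and p^2 does not divide c_0 = p
for n in (1, 2, 3, 5):            # p = 3, 5, 7, 11
    p, t = 2 * n + 1, theta(n)
    assert all(t[k] % p == 0 for k in range(n))
    assert t[0] % (p * p) != 0
```

For instance, $\theta_2(z)=z^2+5z+5$ and $\theta_3(z)=z^3+7z^2+14z+7$, both visibly Eisenstein at $p$.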
Since $\theta_n (z)$ is irreducible, the total $\tau$-twisted Alexander polynomial $D_{\tau, K(r)} (t)$ defined in \cite{SW} is given by \begin{equation} D_{\tau, K(r)} (t)=\prod_{j=1}^n \widetilde{\Delta}_{\tau, K(r)} (t|\omega_j). \end{equation} It is known that the polynomial $D_{\tau, K(r)} (t)$ can be rewritten as \begin{equation} D_{\tau, K(r)} (t)=\det[ \widetilde{\Delta}_{\tau, K(r)} (t|\omega)]^{\gamma}. \end{equation} By (2.5), we see that $D_{\tau, K(r)} (t)$ is exactly the twisted Alexander polynomial of $K(r)$ associated to $\nu = \rho \circ \eta :G(K) \rightarrow GL(2n,\ZZ)$. Since, by Proposition \ref{prop:2.1}, $\pi_0$ and $\eta$ are equivalent, $\rho_0$ and $\nu$ are equivalent, and hence $\widetilde{\Delta}_{\rho_0, K(r)} (t) = D_{\tau, K(r)}(t)$. Under our assumptions, Conjecture A now becomes the following theorem, which will be proven in Sections 5--7. \begin{thm}\label{thm:2.2} If a 2-bridge knot $K(r)$ is in $H(p)$, then \begin{equation} D_{\tau, K(r)} (t) = f(t) f(-t) \end{equation} for some integer polynomial $f(t)$, and further, for any $2$-bridge knot $K(r)$ with $\Delta_{K(r)}(-1) \equiv 0$ (mod $p$), \begin{align} &(1)\ D_{\tau, K(r)} (t) \equiv f(t) f(-t)\ {\rm (mod}\ p)\ {\rm and}\nonumber\\ &(2)\ f(t) \equiv \Bigl\{\dfrac{\Delta_{K(r)}(t)}{1+t}\Bigr\}^n\ {\rm (mod}\ p), \end{align} where $\Delta_{K(r)}(t)$ is the Alexander polynomial of $K(r)$. \end{thm} We note that $\Delta_{K(r)}(t)$ is divisible by $1+t$ in $(\ZZ/p)[t^{\pm 1}]$. \begin{rem} If $n=1$, i.e., $p=3$, then $\theta_1(z)=z+3$, and hence $\omega=-3$. Therefore, $\gamma$ is the identity homomorphism and $\widetilde\Delta_{\rho_0,K(r)}(t)=D_{\tau,K(r)}(t)$. \end{rem} \section{Basic formulas} In this section, we list various formulas involving $a_k, b_k, c_k$ and $d_k$ which will be used throughout this paper. Most of this material is collected from Section 4 of \cite{HM}.
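Most of the identities below are polynomial identities in $\omega$ (only those relying on $(xy)^p=1$ need $\theta_n(\omega)=0$), so they can be verified mechanically by computing powers of $\xi(xy)=\mtx{1+z}{1}{z}{1}$ with $z$ an indeterminate. A short Python sketch (all helper names are ours), representing polynomials by coefficient lists:

```python
# Entries a_k, b_k, c_k, d_k of (XY)^k as polynomials in z.

def padd(p, q):
    r = [0] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += c
    return r

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1 if p and q else 0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def trim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def xy_power(k):
    # (XY)^k for XY = [[1+z, 1], [z, 1]]
    m = [[[1], []], [[], [1]]]            # identity matrix of polynomials
    xy = [[[1, 1], [1]], [[0, 1], [1]]]   # [[1+z, 1], [z, 1]]
    for _ in range(k):
        m = [[trim(padd(pmul(m[i][0], xy[0][j]), pmul(m[i][1], xy[1][j])))
              for j in range(2)] for i in range(2)]
    return m

z = [0, 1]
for k in range(1, 7):
    (a, b), (c, d) = xy_power(k)
    (a1, b1), (c1, d1) = xy_power(k - 1)
    assert trim(padd(a, pmul([-1], a1))) == trim(pmul(z, b))  # z*b_k = a_k - a_{k-1}
    assert c == trim(pmul(z, b))                              # c_k = z*b_k
    assert d == a1                                            # d_k = a_{k-1}
    assert b == trim(padd(b1, a1))                            # b_k = b_{k-1} + a_{k-1}
```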
For simplicity, let $\xi (x) =X= \mtx{-1}{1}{0}{1}$ and $\xi (y)=Y = \mtx{-1}{0}{\omega}{1}$, where $\omega$ is a root of $\theta_n (z)$. First we list several formulas which are similar to \cite[Proposition 4.2]{HM}. \begin{prop}\label{prop:3.1} As before, write $(XY)^k = \mtx{a_k}{b_k}{c_k}{d_k}$. \begin{align} &(I)\ a_0 = d_0 = 1\ {\it and}\ b_0 = c_0 = 0.\nonumber\\ &(II)\ a_1 = 1+ \omega, b_1 =1, c_1 = \omega\ {\it and}\ d_1 = 1.\nonumber\\ &(III)\ (i)\ {\it For}\ k \geq 2,\nonumber\\ &\ \ (1)\ a_k = (2+ \omega) a_{k-1} - a_{k-2},\nonumber\\ &\ \ (2)\ \omega b_k = (1+ \omega) a_{k-1} - a_{k-2},\nonumber\\ &\ \ \ \ \ \ \ (ii)\ {\it For}\ k \geq 1,\nonumber\\ &\ \ (3)\ \omega b_k = a_k - a_{k-1},\nonumber\\ &\ \ (4)\ \omega b_k = c_k,\nonumber\\ &\ \ (5)\ a_k = \omega b_k + d_k,\nonumber\\ &\ \ (6)\ d_k = a_{k-1},\nonumber\\ &\ \ (7)\ b_k = b_{k-1} + a_{k-1},\nonumber\\ &\ \ (8)\ c_k + d_k = a_k,\nonumber\\ &\ \ (9)\ a_0 + a_1 + \cdots + a_{k-1} = b_k. \end{align} \end{prop} Since a proof of Proposition \ref{prop:3.1} is exactly the same as that of Proposition 4.2 in \cite{HM}, we omit the details. The next three propositions differ from the corresponding result \cite[Proposition 4.4]{HM}, since they depend on the defining relations of $D_p$. \begin{prop}\label{prop:3.2} Let $p=2n+1$. \begin{align} &(1)\ {\it For}\ 0 \leq k \leq 2n, a_k = a_{2n-k}\ {\it and}\ a_{2n+1} = a_0.\nonumber\\ &(2)\ {\it For}\ 0 \leq k \leq 2n, b_k = -b_{p-k}\ {\it and}\ b_p =0. \end{align} \end{prop} {\it Proof.} Since $(XY)^k = (YX)^{p-k} = Y(XY)^{p-k}Y$, we have\\ $\mtx{a_k}{b_k}{c_k}{d_k}=\mtx{a_{p-k}-\omega b_{p-k}}{-b_{p-k}} {-\omega a_{p-k}-c_{p-k}+\omega^2 b_{p-k}+\omega d_{p-k}} {\omega b_{p-k}+d_{p-k}}$ and hence $a_k = a_{p-k} - \omega b_{p-k}$ and $b_k = - b_{p-k}$, which proves (2). Further, $a_k = a_{p-k} - \omega b_{p-k} = a_{p-k} + \omega b_k$ and thus, $a_{p-k} = a_{k-1}$ by (3.1)(III)(3). This proves (1). Finally, it is obvious that $a_p = a_0$.
\fbox{} \begin{prop}\label{prop:3.3} Let $p=2n+1$. Then we have the following. \begin{align} &(1)\ a_0 + a_1 + \cdots + a_{2n} = 0,\nonumber\\ &(2)\ b_1 + b_2 + \cdots + b_{2n} = 0,\nonumber\\ &(3)\ d_0 + d_1 + \cdots + d_{2n} = 0,\nonumber\\ &(4)\ a_n + 2b_n =0,\nonumber\\ &(5)\ {\it If}\ k \equiv \ell\ {\rm (mod}\ p), \ {\it then}\ a_k = a_{\ell}, b_k = b_{\ell}, c_k = c_{\ell}\ {\it and}\ d_k = d_{\ell}. \end{align} \end{prop} {\it Proof.} First, we see that $(XY)^n X= Y(XY)^n$ implies\\ $\mtx{-a_n}{a_n+b_n}{-c_n}{c_n+d_n}= \mtx{-a_n}{-b_n}{\omega a_n+c_n}{\omega b_n+d_n}$, and hence $a_n + b_n = - b_n$, which proves (4). (5) is immediate, since $(xy)^p=1$. (1) follows from (3.1)(III)(9), since $a_0 + a_1 + \cdots + a_{2n}=b_{2n+1}=0$. To show (2), use (3.1)(III)(3). Since $b_0 = 0$, we see $\omega (b_1 + b_2 + \cdots + b_{2n}) =(a_1 - a_0) + (a_2 - a_1) + \cdots + (a_{2n-1} - a_{2n-2}) + (a_{2n} - a_{2n-1}) =a_{2n} - a_0 = 0$, by (3.2)(1). (3) follows from (3.1)(III)(6), since $d_0 = 1 = a_0 = a_{2n}$ and $d_0 + d_1 + \cdots + d_{2n} =1 + a_0 + a_1 + \cdots + a_{2n-1} = a_0 + a_1 + \cdots + a_{2n-1} + a_{2n} = 0$. \fbox{} Now we define an algebra $\widetilde {A}(\omega)$ using the group ring $\ZZ D_p$. Consider the linear extension $\widehat {\xi}: \ZZ D_p \rightarrow M_{2,2}(\ZZ[\omega])$ of $\xi$, given by $\widehat {\xi} (x)=X$ and $\widehat {\xi} (y)=Y$, where $M_{k,k} (R)$ denotes the ring of $k \times k$ matrices over a commutative ring $R$. Let ${\widehat {\xi}}^{-1} (0)$ be the kernel of $\widehat {\xi}$. Then $\widetilde {A}(\omega)= \ZZ D_p / {\widehat {\xi}}^{-1} (0)$ is a non-commutative $\ZZ[\omega]$-algebra. Some elements of ${\widehat {\xi}}^{-1} (0)$ can be found in Proposition \ref{prop:3.4} below. We define $\zeta: \ZZ D_p \rightarrow \widetilde {A}(\omega)$ to be the natural projection. \begin{prop}\label{prop:3.4} In $\widetilde{A}(\omega)$, the following formulas hold, where $1$ denotes the identity of $\widetilde{A}(\omega)$.
\begin{equation} {\it For}\ 1 \leq k \leq n, (xy)^k + (yx)^k = ( a_{k-1} + a_k) 1. \end{equation} \begin{align} &(1)\ {\it For}\ 1 \leq k \leq n-1, (xy)^k x + y(xy)^k = a_k (x+y),\nonumber\\ &(2)\ (xy)^n x = y(xy)^n=\frac{a_n}{2} (x+y) = -b_n (x+y). \end{align} \end{prop} {\it Proof.} To prove (3.4), it suffices to show that $(XY)^k + (YX)^k =(a_{k-1}+ a_k) E_2$. In fact, for $1 \leq k \leq n$, \begin{equation*} (XY)^k + (YX)^k =(XY)^k + (XY)^{p-k} = \mtx{a_k+a_{p-k}}{b_k+b_{p-k}}{c_k+c_{p-k}}{d_k+d_{p-k}}. \end{equation*} Since $a_k + a_{p-k} = a_k + a_{k-1}$ by (3.2)(1), $b_k + b_{p-k}=0$ by (3.2)(2), $c_k + c_{p-k} = \omega (b_k + b_{p-k}) =0$ and $d_k +d_{p-k} = a_{k-1} + a_{2n-k} = a_{k-1} + a_k$ by (3.1)(III)(6) and (3.2)(1), (3.4) follows immediately. Next, for $1 \leq k \leq n-1$, $(XY)^kX + Y(XY)^k= \mtx{-2a_k}{a_k}{\omega a_k}{2(c_k+d_k)}=a_k(X+Y)$, which proves (3.5)(1). Finally, (3.5)(2) follows, since $(xy)^n x =y(xy)^n$ and $a_n = -2b_n$. \fbox{} \section{Polynomials over $\widetilde{A}(\omega)$} In this section, as the first step toward a proof of Theorem \ref{thm:2.2}, we introduce one of the key concepts of this paper. \begin{dfn} Let $\varphi (t)$ be a polynomial in $t^{\pm 1}$ with coefficients in the non-commutative algebra $\widetilde{A}(\omega)$. We say $\varphi (t)$ is {\it split} if $\varphi (t)$ is of the form:\\ $\varphi (t)=\sum_j \alpha_j t^{2j} + \sum_k \beta_k (x+y) t^{2k+1}$, where $\alpha_j, \beta_k \in \ZZ[\omega]$. The set of split polynomials is denoted by $S(t)$. For example, $1+t^2$ and $(x+y)t$ are split. \end{dfn} First we show that $S(t)$ is a commutative ring. \begin{prop}\label{prop:4.2} If $\varphi (t)$ and ${\varphi}^{\prime}(t)$ are split, so are $\varphi (t) + {\varphi}^{\prime}(t)$ and $\varphi (t) {\varphi}^{\prime}(t)$.
\end{prop} {\it Proof.} Let $\varphi (t)=\sum_j \alpha_j t^{2j} + \sum_k \beta_k (x+y) t^{2k+1}$ and ${\varphi}^{\prime}(t)=\sum_{\ell} {\alpha_{\ell}}^{\prime}t^{2\ell} + \sum_m {\beta_m}^{\prime}(x+y) t^{2m+1}$. Then obviously $\varphi (t)+ {\varphi}^{\prime}(t)$ is split. Further, \begin{align*} \varphi (t) {\varphi}^{\prime}(t)& = \sum_{j, \ell} \alpha_j {\alpha_{\ell}}^{\prime}t^{2j+2\ell} + \sum_{j,m} \alpha_j {\beta_m}^{\prime}(x+y) t^{2j+2m+1}\\ &\ \ + \sum_{k,\ell} \beta_k {\alpha_\ell}^{\prime} (x+y) t^{2k+2\ell+1} + \sum_{k,m} \beta_k {\beta_m}^{\prime}(x+y) (x+y) t^{2k+2m+2}. \end{align*} Since $(x+y)(x+y)=2+xy+yx=(2+b_2)1$ by (3.4) and (3.1)(III)(9), it follows that $\varphi (t) {\varphi}^{\prime}(t)$ is split. \fbox{} Next, to obtain the proposition corresponding to Lemma 4.5 in \cite{HM}, we define the following polynomials over $\widetilde{A}(\omega)$. Let $Q_k (t)=1 + (yx)t^2 + (yx)^2 t^4 + \cdots + (yx)^k t^{2k}$ and\\ $P_k (t)=1 + (xy)t^2 + (xy)^2 t^4 + \cdots + (xy)^k t^{2k}$. Note $Q_k (t) = y P_k (t) y$. The following proposition is a slight modification of Lemma 4.5 in \cite{HM}. \begin{prop}\label{prop:4.3} Let $p=2n+1$.\\ (1) $(y^{-1} t^{-1}) (1-yt) Q_{2n} (t) yt(1-xt) \in S(t)$.\\ (2) $(y^{-1} t^{-1}) \{(1-yt) Q_n (t) yt + (yx)^{n+1}t^{2n+2}\} (1-xt) \in S(t)$.\\ (3) $(y^{-1} t^{-1}) \{(1-yt) Q_{3n+1}(t) yt + (yx)^{3n+2}t^{6n+4}\} (1-xt) \in S(t)$.\\ (4) $(y^{-1}t^{-1}) (1-yt) Q_{4n}(t) yt (1-xt) \in S(t)$. \end{prop} {\it Proof.} First we prove (2). Since \begin{align*} (1-yt) Q_n(t) yt + (yx)^{n+1}t^{2n+2} &=(1-yt) yP_n(t) t + (yx)^{n+1} t^{2n+2}\\ &=yt (1-yt) P_n (t) + yt (xy)^n x t^{2n+1}\\ &=yt\{(1-yt)P_n (t) + (xy)^n x t^{2n+1}\}, \end{align*} it suffices to show \begin{equation} \{(1-yt) P_n (t) + (xy)^n x t^{2n+1}\}(1-xt) \in S(t).
\end{equation} Now a simple computation shows that \begin{align*} &\{(1-yt) P_n (t) + (xy)^n x t^{2n+1}\}(1-xt)\\ &=\left\{\susum{k=0}{n} (xy)^k t^{2k} - \susum{k=0}{n-1} y(xy)^k t^{2k+1}\right\} (1-xt)\\ &=1 + \susum{k=1}{n} \left\{(xy)^k + (yx)^k\right\} t^{2k} - \susum{k=0}{n-1} \left\{y(xy)^k + (xy)^k x\right\} t^{2k+1}\\ &=1+ \susum{k=1}{n} (a_{k-1} + a_k ) t^{2k} - \susum{k=0}{n-1} (x+y) a_k t^{2k+1} \in S(t), \end{align*} by (3.4) and (3.5). This proves (4.1). {\it Proof of (1).} Since \begin{align*}(1-yt) Q_{2n}(t) yt (1-xt) &=(1-yt) yP_{2n}(t) t (1-xt)\\ &=yt (1-yt) P_{2n} (t) (1-xt), \end{align*} it suffices to show \begin{equation} (1-yt) P_{2n} (t) (1-xt) \in S(t). \end{equation} However, the following straightforward calculation proves (4.2): \begin{align*} &(1-yt) P_{2n}(t) (1-xt)\\ &= {\textstyle\sum\limits_{k=0}^{2n}}(xy)^k t^{2k} - {\textstyle\sum\limits_{k=0}^{2n}} y(xy)^k t^{2k+1} - {\textstyle\sum\limits_{k=0}^{2n}} (xy)^k x t^{2k+1} + {\textstyle\sum\limits_{k=0}^{2n}} (yx)^{k+1} t^{2k+2}\\ &=1 + {\textstyle\sum\limits_{k=1}^{2n}} \left\{ (xy)^k + (yx)^k\right\} t^{2k} + (yx)^p t^{2p} - {\textstyle\sum\limits_{k=0}^{2n}}\left\{ y(xy)^k + (xy)^k x\right\} t^{2k+1}\\ &=1+ {\textstyle\sum\limits_{k=1}^{2n}} (a_{k-1} + a_k) t^{2k} + t^{2p} - {\textstyle\sum\limits_{k=0}^{2n}} a_k (x+y) t^{2k+1} \in S(t). \end{align*} {\it Proof of (3).} Since \begin{align*} &\left\{(1-yt) Q_{3n+1}(t) yt + (yx)^{3n+2} t^{6n+4} \right\}(1-xt)\\ &=yt \{(1-yt) P_{3n+1}(t) + (xy)^{3n+1} x t^{6n+3}\}(1-xt), \end{align*} it suffices to show\\ \begin{equation} \{(1-yt) P_{3n+1}(t) + (xy)^{3n+1}x t^{6n+3}\} (1-xt) \in S(t). \end{equation} Since $P_{3n+1}(t)=P_{2n}(t) + t^{4n+2} P_n (t)$ and $(xy)^{3n+1} x = (xy)^n x$, we must show\\ $\Bigl\{(1-yt) \{P_{2n}(t) + P_n (t) t^{4n+2}\} + (xy)^n x t^{6n+3}\Bigr\}(1-xt) \in S(t)$. 
However, since $(1-yt) P_{2n}(t) (1-xt) \in S(t)$ by (4.2), it suffices to show that \begin{equation} \{(1-yt) P_n (t) t^{4n+2} + (xy)^n x t^{6n+3}\}(1-xt) \in S(t). \end{equation} Now, (4.4) follows from (4.1), since $t^{4n+2}$ is split. {\it Proof of (4).} Since $(yx)^{2n+1} = 1$, we have \begin{equation*} Q_{4n}(t)=\susum{k=0}{2n} (yx)^k t^{2k} + \susum{k=2n+1}{4n} (yx)^k t^{2k} =(1+t^{2p})Q_{2n}(t). \end{equation*} Since $(1+t^{2p})$ is split, it follows that \begin{equation*} (y^{-1}t^{-1}) (1-yt) Q_{4n}(t) yt (1-xt) =(1+t^{2p})(y^{-1}t^{-1}) (1-yt) Q_{2n}(t) yt (1-xt) \end{equation*} is split by (1). \fbox{} \section{Proof of Theorem 2.2.(I)} In this section we prove Theorem \ref{thm:2.2} (2.11) for the torus knot $K(1/p)$, where $p =2n+1$ is a prime. First we define various homomorphisms among group rings.\\ Let $g=x^{m_1} y^{m_2} x^{m_3}y^{m_4} \cdots x^{m_{k-1}} y^{m_k}$, where the $m_j$ are integers, let $m = \susum{j=1}{k} m_j$, and let $\ell$ be an arbitrary integer. Then we have: \begin{align} &(1)\ {\rho}^\ast : \ZZ G(K) \rightarrow \ZZ D_p [t^{\pm 1}]\ {\rm is\ defined\ by}\ {\rho}^\ast (g)=\rho (g) t^m,\nonumber\\ &(2)\ {\zeta}^\ast:\ZZ D_p [t^{\pm 1}] \rightarrow \widetilde{A}(\omega)[t^{\pm 1}]\ {\rm is\ defined\ by}\ {\zeta}^\ast (gt^{\ell})=\zeta (g) t^{\ell},\nonumber\\ &(3)\ {\xi}^\ast: \widetilde{A}(\omega) [t^{\pm 1}] \rightarrow M_{2,2} (\ZZ[\omega][t^{\pm 1}])\ {\rm is\ defined\ by}\ {\xi}^\ast (gt^{\ell})=\xi (g) t^{\ell},\nonumber\\ &(4)\ {\gamma}^\ast: M_{2,2}(\ZZ[\omega] [t^{\pm 1}]) \rightarrow M_{2n,2n} (\ZZ[t^{\pm 1}])\ {\rm is\ defined\ by} \nonumber\\ &\ \ \ {\gamma}^\ast \mtx{\sum_j p_j t^j}{\sum_j q_j t^j} {\sum_j r_j t^j}{\sum_j s_j t^j} =\mtx{\sum_j \gamma(p_j) t^j}{\sum_j \gamma(q_j) t^j} {\sum_j \gamma(r_j) t^j}{\sum_j \gamma(s_j) t^j}. \end{align} Now we show the following proposition. \begin{prop}\label{prop:5.1} Let $p=2n+1$ be a prime. Then $D_{\tau,K(1/p)}(t)$ is of the form $q(t) q(-t)$ for some integer polynomial $q(t)$.
\end{prop} {\it Proof.} We write $G(K(1/p))= \langle x,y| R_0 = W_0 x {W_0}^{-1} y^{-1} =1\rangle$, where $W_0 = (xy)^n$. Consider the free derivative of $R_0$ with respect to $x$: \begin{equation*} \dfrac{\partial R_0}{\partial x} =(1-y) \dfrac{\partial W_0}{\partial x} + W_0 =(1-y) \susum{k=0}{n-1} (xy)^k + (xy)^n, \end{equation*} and we write \begin{equation*} {\Phi}^\ast\left( \dfrac{\partial R_0}{\partial x}\right) =\mtx{h_{11}(t)}{h_{12}(t)}{h_{21}(t)}{h_{22}(t)}, \end{equation*} where ${\Phi}^\ast= {\rho}^\ast \circ {\zeta}^\ast \circ {\xi}^\ast$. \\ Then we see: \begin{align} (1)\ h_{11}(t)&=\susum{k=0}{n} a_k t^{2k} + \susum{k=0}{n-1} a_k t^{2k+1} =(1+t) \susum{k=0}{n-1} a_k t^{2k} + a_n t^{2n},\nonumber\\ (2)\ h_{12}(t)&=\susum{k=0}{n} b_k t^{2k} + \susum{k=0}{n-1} b_k t^{2k+1} =(1+t) \susum{k=0}{n-1} b_k t^{2k} + b_n t^{2n},\nonumber\\ (3)\ h_{21}(t)&=\susum{k=0}{n} c_k t^{2k} - \omega \susum{k=0}{n-1} a_k t^{2k+1} - \susum{k=0}{n-1} c_k t^{2k+1}\nonumber\\ & =- \omega t \susum{k=0}{n-1} a_k t^{2k} + (1-t) \susum{k=0}{n-1} c_k t^{2k} + c_n t^{2n},\nonumber\\ (4)\ h_{22}(t)&=\susum{k=0}{n} d_k t^{2k} - \omega \susum{k=0}{n-1} b_k t^{2k+1} - \susum{k=0}{n-1} d_k t^{2k+1} \nonumber\\ &=- \omega t \susum{k=0}{n-1} b_k t^{2k} + (1-t) \susum{k=0}{n-1} d_k t^{2k} + d_nt^{2n}. \end{align} Since $h_{11}(1) = 0$ and $h_{21}(1) = 0$ by (3.1)(III)(4), (9) and Proposition \ref{prop:3.3}(4), both $h_{11} (t)$ and $h_{21}(t)$ are divisible by $1-t$.
In fact, we have: \begin{align*} h_{11}(t)&=(1-t) \Bigl\{\ \ \susum{k=0}{n-1} (2a_0 + 2a_1 + \cdots +2a_{k-1} + a_k)t^{2k}\\ &\hspace*{18mm} + \susum{k=0}{n-1} (2a_0 + 2a_1 + \cdots + 2a_k)t^{2k+1}\Bigr\}\\ & =(1-t) \left\{\susum{k=0}{n-1} (b_k + b_{k+1}) t^{2k} + \susum{k=0}{n-1} 2b_{k+1} t^{2k+1}\right\},\ {\rm and}\\ h_{21}(t)&=- \omega t(1-t^2) \susum{k=0}{n-2}( a_0 + a_1 + \cdots +a_k) t^{2k} \\ &\hspace*{15mm}- \omega t(1-t) (a_0 + a_1 + \cdots +a_{n-1}) t^{2n-2} +(1-t) \susum{k=1}{n-1} c_k t^{2k}\\ & =(1-t)\left\{- \omega t(1+t) \susum{k=0}{n-2} b_{k+1} t^{2k} - \omega tb_n t^{2n-2} + \susum{k=1}{n-1} c_k t^{2k}\right\}. \end{align*} Since $c_k = \omega b_k$, we see $h_{21}(t) = (1-t) \left\{ - \omega t \susum{k=0}{n-1}b_{k+1}t^{2k}\right\}$, and hence,\\ $\dfrac{1}{1-t} \det \left(\dfrac{\partial R_0}{\partial x}\right)^{{\Phi}^\ast} =\dfrac{1}{1-t} \det \mtx{h_{11}(t)}{h_{12}(t)} {h_{21}(t)}{h_{22}(t)} =\det \mtx{h_{11}^{\prime}(t)}{ h_{12}(t)} {h_{21}^{\prime}(t)}{h_{22}(t)}$, where \begin{align*} {h_{11}}^{\prime}(t)&=\susum{k=0}{n-1} (b_k + b_{k+1}) t^{2k} + \susum{k=0}{n-1} 2b_{k+1} t^{2k+1}\\ &=\susum{k=1}{n-1} b_k (1+t^2)t^{2k-2} + b_nt^{2n-2}+\susum{k=1}{n} 2b_k t^{2k-1},\ {\rm and}\\ {h_{21}}^{\prime}(t)&=- \omega t \susum{k=0}{n-1} b_{k+1} t^{2k}. \end{align*} Let $g(t)= \susum{k=1}{n-1} b_k (1+t^2) t^{2k-2} + b_n t^{2n-2}$ and $h(t) = \susum{k=1}{n} b_k t^{2k-1}$. Then \begin{equation*} {h_{11}}^{\prime}(t) = g(t) +2h(t)\ {\rm and}\ {h_{21}}^{\prime}(t) = - \omega h(t). \end{equation*} Further, a straightforward computation shows that \begin{equation*} {h_{11}}^{\prime}(t) + h_{12}(t)=(1+t) (g(t) + h(t)), \end{equation*} and \begin{align*} {h_{21}}^{\prime}(t) + h_{22}(t) &=- \omega t \susum{k=1}{n} b_k t^{2k-2} - \omega t \susum{k=1}{n-1}b_k t^{2k} +(1-t) \susum{k=0}{n-1} d_k t^{2k} + d_n t^{2n}\\ &=- \susum{k=1}{n} c_k t^{2k-1} - \susum{k=1}{n-1} c_k t^{2k+1} + (1-t) \susum{k=0}{n-1} d_k t^{2k} + d_n t^{2n}.
\end{align*} Since $c_k + d_k = a_k$ and $d_0 = a_0$, we see \begin{equation*} - \susum{k=1}{n-1} c_k t^{2k+1} - \susum{k=0}{n-1}d_k t^{2k+1} = - \susum{k=0}{n-1} a_k t^{2k+1}, \end{equation*} and hence \begin{equation*} {h_{21}}^{\prime}(t) + h_{22}(t)=\susum{k=0}{n} d_k t^{2k} - \susum{k=0}{n-1}(a_k + c_{k+1}) t^{2k+1}. \end{equation*} Now, ${h_{21}}^{\prime}(t) + h_{22}(t)$ is divisible by $1+t$, and in fact, we have \begin{equation*} {h_{21}}^{\prime}(t) + h_{22}(t)=(1+t)\{g(t) - 2h(t) - \omega h(t)\}. \end{equation*} Therefore, \begin{align*} \dfrac{1}{(1-t)(1+t)} \det \left({\Phi}^\ast \dfrac{\partial R_0}{\partial x}\right) &=\det \mtx{g(t) + 2h(t)}{g(t) + h(t)} {-\omega h(t)}{g(t) - 2h(t) - \omega h(t)}\\ &=\det \mtx{g(t) + 2h(t)}{- h(t)} {- \omega h(t)}{ g(t) - 2h(t)}, \end{align*} and hence \begin{equation} \widetilde{\Delta}_{\tau,K(1/p)}(t) = g(t)^2 - (4+\omega) h(t)^2. \end{equation} Now we apply the following key lemma. \begin{lemm}\label{lem:5.2} Let $C_n$ be the companion matrix of $\theta_n (z)$, the minimal polynomial of $\omega$. Then there exists a matrix $V_n \in GL(n,\ZZ)$ such that ${V_n}^2 = 4 E_n + C_n$. \end{lemm} Since the proof involves lengthy computations, it is postponed to Section 11. Since the total twisted Alexander polynomial of $K(1/p)$ at $\tau$ is $D_{\tau,K(1/p)}(t) = \det [\widetilde{\Delta}_{\tau,K(1/p)}(t)]^{{\gamma}^\ast}$, we obtain, noting that $V_n$ commutes with $C_n$, \begin{align*} D_{\tau, K(1/p)}(t)&=\det[g(t|C_n)^2 - {V_n}^2 h(t|C_n)^2]\\ &=\det [g(t|C_n) - V_n h(t|C_n)] \det [g(t|C_n) +V_n h(t|C_n)], \end{align*} where $g(t|C_n)$ and $h(t|C_n)$ denote the matrices obtained from $g(t)$ and $h(t)$ by substituting $C_n$ for $\omega$. Let $q(t) = \det [g(t|C_n) - V_n h(t|C_n)]$. Then since $g(-t) = g(t)$ and $h(-t)= - h(t)$, it follows that \begin{equation*} D_{\tau,K(1/p)}(t) = q(t) q(-t). \end{equation*} This proves Theorem \ref{thm:2.2} (2.11) for $K(1/p)$. \fbox{}
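Lemma \ref{lem:5.2} can be illustrated concretely in small cases: for $n=1$, $C_1=[-3]$ and one may take $V_1=[-1]$; for $n=2$, $\theta_2(z)=z^2+5z+5$ and one may take $V_2=C_2+3E_2$, since $\theta_2(C_2)=0$ gives $(C_2+3E_2)^2=C_2^2+6C_2+9E_2=C_2+4E_2$. The following Python check of the $n=2$ case (helper names are ours) is not part of the proof:

```python
# theta_2(z) = z^2 + 5z + 5 (the case p = 5); C2 is its companion matrix.
C2 = [[0, -5],
      [1, -5]]
E2 = [[1, 0], [0, 1]]

# Candidate square root V2 = C2 + 3*E2, written out explicitly.
V2 = [[3, -5],
      [1, -2]]

def matmul(a, b):
    d = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

target = [[C2[i][j] + 4 * E2[i][j] for j in range(2)] for i in range(2)]
assert matmul(V2, V2) == target                          # V^2 = 4E_2 + C_2
assert matmul(V2, C2) == matmul(C2, V2)                  # V commutes with C
assert V2[0][0] * V2[1][1] - V2[0][1] * V2[1][0] == -1   # det = -1, so V2 in GL(2,Z)
```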
\begin{rem} It is quite likely that \begin{equation} q(t) = (1+t)^n \left\{\Delta_{K(1/p)}(t) \right\}^{n-1}, \end{equation} where $\Delta_{K(1/p)}(t)$ is the Alexander polynomial of $K(1/p)$. \end{rem} \section{Proof of Theorem 2.2 (II)} We now return to the proof of Theorem \ref{thm:2.2} (2.11) for a $2$-bridge knot $K(r)$ in $H(p)$. Let $G(K(r)) = \langle x,y|R\rangle, R=WxW^{-1} y^{-1}$, be a Wirtinger presentation of $G(K(r))$. Then as is shown in \cite{HM}, $R$ is written freely as a product of conjugates of $R_0$: $R=\prod_{j=1}^s u_j {R_0}^{\epsilon_j}{u_j}^{-1}$, where for $1 \leq j \leq s$, $\epsilon_j = \pm 1$ and $u_j \in F(x,y)$, the free group generated by $x$ and $y$, and $\frac{\partial R}{\partial x}= \sum_{j} \epsilon_j u_j (\frac{\partial R_0}{\partial x})$, and hence \begin{align*} \widetilde{\Delta}_{\tau, K(r)}(t) &=\det\left(\frac{\partial R}{\partial x}\right)^{{\Phi}^\ast}/ \det (y^{{\Phi}^{\ast}}- E_2)\\ &= \widetilde{\Delta}_{\tau, K(1/p)}(t) \det \bigl(\sum_{j} \epsilon_j u_j\bigr) ^{{\Phi}^{\ast}}. \end{align*} As we did in \cite{HM}, we study $\lambda (r)=\sum_{j} \epsilon_j u_j$ and its image ${\lambda_r}^{\ast}(t)={\tau}^{\ast}(\lambda (r)) \in \widetilde{A}(\omega)[t^{\pm 1}]$, where ${\tau}^\ast = {\rho}^{\ast} \circ {\zeta}^{\ast}$; it is indeed a polynomial in $t^{\pm 1}$.\\ Since $K(r) \in H(p)$, the continued fraction of $r$ is of the form:\\ $r=[pk_1, 2m_1, pk_2, \cdots, 2m_{\ell}, pk_{\ell +1}]$, where $k_j$ and $m_j$ are non-zero integers.\\ First we state the following proposition. \begin{prop}\label{prop:6.1} Suppose $K(r)$ and $K(r^{\prime})$ belong to $H(p)$ and let\\ $r=[pk_1, 2m_1, pk_2, \cdots, 2m_{\ell}, pk_{\ell +1}]$, $r^{\prime}=[{pk_1}^{\prime}, 2m_1, {pk_2}^{\prime}, \cdots, 2m_{\ell}, pk^{\prime}_{\ell +1}]$ be continued fractions of $r$ and $r^{\prime}$. Suppose that $k_j \equiv {k_j}^{\prime}$ (mod $4$) for each $j, 1 \leq j \leq \ell +1$.
Then if $y^{-1}t^{-1} \lambda_r^{\ast}(t)$ is split, so is $y^{-1} t^{-1} \lambda_{r^{\prime}}^{\ast}(t)$. \end{prop} Since a proof is analogous to that of Proposition 6.3 in \cite{HM}, we omit the details. Now we study the polynomial ${\lambda_r}^{\ast}(t) \in \widetilde{A}(\omega) [t^{\pm 1}]$ and we prove that $y^{-1}t^{-1} {\lambda_r}^{\ast}(t)$ is split. As is seen in Section 7 in \cite{HM}, ${\lambda_r}^{\ast}(t)$ is written as ${w^\ast}_{2\ell +1}(t)$ and we will prove the following proposition. The same notation employed in Section 7 in \cite{HM} will be used in this section. \begin{prop}\label{prop:6.2} $ y^{-1} t^{-1}w^\ast_{2\ell +1}(t) \in S(t)$. \end{prop} {\it Proof.} We use induction on $j$. First we prove $y^{-1}t^{-1}{w^\ast}_1 (t) \in S(t)$.\\ (1) If $w_1(t) = yt$, then $y^{-1}t^{-1}{w^\ast}_1 (t) = 1$ and hence $y^{-1}t^{-1}{w^\ast}_1 (t) \in S(t)$.\\ (2) If $w_1 = y - (yx)^{n+1}$, then ${w^\ast}_1 (t) = yt - (yx)^{n+1} t^{2n+2}$ and\\ $y^{-1}t^{-1}{w^\ast}_1 (t)=1-(xy)^n x t^{2n+1} = 1 + b_n (x+y) t^{2n+1}$ and hence\\ $y^{-1}t^{-1}{w^\ast}_1 (t) \in S(t)$.\\ (3) If $w_1 = - (yx)^{n+1}$, then ${w^\ast}_1 (t) = - (yx)^{n+1} t^{2n+2}$ and\\ $y^{-1}t^{-1}{w^\ast}_1 (t) =-(xy)^n x t^{2n+1}=b_n (x+y) t^{2n+1}$ and hence\\ $y^{-1}t^{-1}{w^\ast}_1 (t) \in S(t)$. Now suppose $y^{-1}t^{-1}{w^\ast}_{2j - 1} (t) \in S(t)$ for $j \leq \ell$, and we claim \\ $y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t) \in S(t)$. There are three cases to be considered. (See \cite[Proposition 7.1]{HM}.) Case 1. $k_{\ell+1}=1$. $w_{2\ell + 1} = \{(1-y) Q_n y + (yx)^{n+1}\} \sum_{j} m_j (x-1) y^{-1} w_{2j-1} - (yx)^{n+1} y^{-1} w_{2\ell-1} + y$.\\ Then \begin{align*} y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t) &= y^{-1}t^{-1}\{(1-yt) Q_n (t) yt \\ &\ \ + (yx)^{n+1}t^{2n+2}\} \sum_{j} m_j (xt-1) y^{-1}t^{-1}{w^\ast}_{2j-1}(t)\\ &\ \ - (xy)^n x t^{2n+1}(y^{-1}t^{-1}{w^\ast}_{2\ell - 1}(t)) + 1. \end{align*} By Proposition \ref{prop:4.3}(2), each summand is split.
Further, $-(xy)^n x t^{2n+1}= b_n (x+y) t^{2n+1} \in S(t)$ and $1 \in S(t)$. Therefore, the sum of them is split. Proofs of the other cases are essentially the same. Case 2. $k_{\ell+1}=2$.\\ $w_{2\ell+1} =(1-y) Q_{2n} y\{\sum_{j}m_j (x-1) y^{-1}w_{2j-1}\} + (yx)^{2n+1}w_{2\ell-1} - (yx)^{n+1} + y$.\\ Then \begin{align*} y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t) &=y^{-1} t^{-1}(1-yt) Q_{2n}(t) yt \{ \sum_{j} m_j (xt-1)y^{-1}t^{-1}{w^\ast}_{2j - 1}(t)\}\\ &\ \ +y^{-1}t^{-1} t^{4n+2} {w^\ast}_{2\ell - 1}(t) - x(yx)^n t^{2n+1}+1. \end{align*} Again, $y^{-1}t^{-1}(1-yt) Q_{2n}(t) yt (xt-1) \in S(t)$ by Proposition \ref{prop:4.3}(1), $y^{-1}t^{-1}{w^\ast}_{2j- 1}(t) \in S(t)$ by the induction hypothesis, and $t^{4n+2}$, $- x(yx)^n t^{2n+1} = b_n (x+y) t^{2n+1}$ and $1$ are split. Thus, $y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t) \in S(t)$. Case 3. $k_{\ell+1}=3$. \begin{align*} w_{2\ell+1} & = \{(1-y) Q_{3n+1}y + (yx)^{3n+2}\} \sum_{j} m_j (x-1) y^{-1} w_{2j-1} \\ &\ \ - (yx)^{3n+2} y^{-1}w_{2\ell-1}+(yx)^{p}y - (yx)^{n+1}+y. \end{align*} Then \begin{align*} y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t) &=y^{-1} t^{-1}\{(1-yt)Q_{3n+1}(t) yt\\ &\ \ + (yx)^{3n+2}t^{6n+4}\} \sum_{j} m_j (xt-1) y^{-1}t^{-1}{w^\ast}_{2j-1}(t)\\ &\ \ - (xy)^n x t^{6n+3}(y^{-1}t^{-1}{w^\ast}_{2\ell - 1}(t))+t^{2p} - (xy)^n x t^{2n+1}+1. \end{align*} We see that $y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t)$ is split, since each of $y^{-1}t^{-1}\{(1-yt) Q_{3n+1}(t) yt + (yx)^{3n+2} t^{6n+4}\} (xt-1)$, $y^{-1}t^{-1}{w^\ast}_{2j - 1}(t)$, $- (xy)^n x t^{6n+3}=b_n (x+y) t^{6n+3}$ and $- (xy)^n x t^{2n+1} = b_n (x+y) t^{2n+1}$ is split. This proves Proposition \ref{prop:6.2}. \fbox{} Now the proof of (2.11) for our knots proceeds exactly as in Section 5. Since $y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t) \in S(t)$, we can write \begin{equation*} y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t) =\sum_{j} \alpha_j t^{2j} + \sum_{k} \beta_k (x+y) t^{2k+1}, \end{equation*} where $\alpha_j, \beta_k \in \ZZ[\omega]$.
Define $g(t) = \sum_{j} \alpha_j t^{2j}$ and $h(t) = \sum_{k} \beta_k t^{2k+1}$. Since $X+Y = \mtx{-2}{1}{\omega}{2}$, \begin{equation*} {\xi}^\ast [y^{-1}t^{-1}{w^\ast}_{2\ell + 1}(t)] =\mtx{g(t) - 2h(t)}{h(t)} {\omega h(t)}{g(t)+2h(t)}\ {\rm and} \end{equation*} \begin{equation*} \det (y^{-1}t^{-1}{w^\ast}_{2\ell + 1} (t))^{{\xi}^\ast} = g(t)^2 - (\omega+4) h(t)^2. \end{equation*} Thus $\widetilde{\Delta}_{\tau,K(r)}(t|\omega) =\widetilde{\Delta}_{\tau, K(1/p)}(t|\omega) \bigl\{g(t)^2 - (\omega+4) h(t)^2\bigr\}$, and hence, we have \begin{equation*} D_{\tau, K(r)}(t) = D_{\tau, K(1/p)}(t) \det[g(t|C_n)^2 - (C_n +4E_n) h(t|C_n)^2]. \end{equation*} Now by Lemma \ref{lem:5.2}, there exists a matrix $V_n \in GL(n, \ZZ)$ such that ${V_n}^2 = C_n + 4E_n$. Since $V_n$ commutes with $C_n$, we see \begin{align*} g(t|C_n)^2 - (C_n +4E_n) h(t|C_n)^2 &=g(t|C_n)^2 - {V_n}^2 h(t|C_n)^2\\ &=\{g(t|C_n) - V_n h(t|C_n)\} \{g(t|C_n) + V_n h(t|C_n)\}. \end{align*} Let $f(t) = \det [g(t|C_n) - V_n h(t|C_n)]$. Since $h(-t|C_n) = - h(t|C_n)$ and $g(- t|C_n)= g(t|C_n )$, $f(-t) = \det [g(t|C_n) + V_n h(t|C_n)]$, and thus, \begin{equation*} \det [g(t|C_n)^2 - (C_n +4E_n ) h(t|C_n)^2] = f(t) f(-t). \end{equation*} Therefore, $D_{\tau, K(r)}(t) = D_{\tau, K(1/p)}(t) f(t) f(-t)$. Since $D_{\tau, K(1/p)}(t)$ is of the form $q(t) q(-t)$, it follows that $D_{\tau, K(r)}(t) = F(t) F(-t)$, where $F(t)=q(t) f(t)$. This proves (2.11) for $K(r)$ in $H(p)$. \fbox{} \section{Proof of Theorem 2.2 (III)} In this section, we prove (2.12) for a $2$-bridge knot $K(r)$ with $\Delta_{K(r)} (-1) \equiv 0$ (mod $p$). First we state the following easy lemma. \begin{lemm}\label{lem:7.1} Let $M$ be a $2n \times 2n$ matrix over a commutative ring which is decomposed into four $n \times n$ matrices, $A, B, C$ and $D$: $M =\mtx{A}{B}{C}{D}$. Suppose that each matrix is lower triangular and in particular, $C$ is strictly lower triangular, namely, all diagonal entries are 0.
Then $\det M = (\det A) (\det D)$, and hence, $\det M$ is the product of all diagonal entries of $M$. \end{lemm} Lemma \ref{lem:7.1} can be proven easily by induction on $n$. Now let $K(r), 0<r<1$, be a $2$-bridge knot and consider a Wirtinger presentation $G(K(r)) = \langle x,y| R\rangle$, where $R=x^{\epsilon_1}y^{\eta_1} x^{\epsilon_2} y^{\eta_2} \cdots x^{\epsilon_{\alpha}} y^{\eta_{\alpha}}$ and $\epsilon_j, \eta_j = \pm 1$ for $1\leq j \leq \alpha$. Applying free differentiation, we have $\dfrac{\partial R}{\partial x} =\susum{i=1}{\alpha} g_i, g_i \in \ZZ G(K)$, where \begin{equation} g_i= \begin{cases} x^{\epsilon_1}y^{\eta_1} x^{\epsilon_2} y^{\eta_2} \cdots x^{\epsilon_{i-1}} y^{\eta_{i-1}}& {\rm if\ } \epsilon_i =1,\\ -x^{\epsilon_1}y^{\eta_1} x^{\epsilon_2} y^{\eta_2} \cdots x^{\epsilon_{i-1}} y^{\eta_{i-1}} x^{-1}& {\rm if\ } \epsilon_i = -1. \end{cases} \end{equation} Let $\Psi: \ZZ G(K) \rightarrow \ZZ[t^{\pm 1}]$ be the homomorphism defined by $ \Psi( g_i)=\epsilon_i t^{m_i}$, where $m_i = \susum{j=1}{i-1} (\epsilon_j + \eta_j ) + \dfrac{\epsilon_i - 1}{2}$. Then $\left(\dfrac{\partial R}{\partial x}\right)^\Psi$ gives the Alexander polynomial $\Delta_{K(r)}(t)$ of $K(r)$. On the other hand, $\dfrac{1}{(1-t)(1+t)} \det\left( \dfrac{\partial R}{\partial x}\right)^{{\Phi}^\ast}$ gives the twisted Alexander polynomial $\widetilde{\Delta}_{\tau,K(r)}(t|\omega)$ associated to $\tau$, and further, we see $D_{\tau,K(r)}(t)=\det\left[ \dfrac{1}{(1-t)(1+t)} (\dfrac{\partial R}{\partial x})^{{\Phi}^\ast}\right]^{ {\gamma}^\ast}$. Now using (7.1), we compute $\left(\dfrac{\partial R}{\partial x}\right)^{ {\Phi}^\ast}=\sum_{i} {{\Phi}^\ast}(g_i)$.\\ If $\epsilon_i = 1$, then $m_i$ is even and \begin{align*} {{\Phi}^\ast}(g_i) &= [(xy)^{i-1}]^{\xi} t^{m_i}\\ &=\mtx{a_{i-1}}{b_{i-1}}{c_{i-1}}{d_{i-1}}t^{m_i}.
\end{align*} If $\epsilon_i = - 1$, then $m_i$ is odd and \begin{align*} {{\Phi}^\ast}(g_i) &= - [(xy)^{i-1} x]^{\xi} t^{m_i}\\ & =-\mtx{-a_{i-1}}{ a_{i-1} + b_{i-1}} {-c_{i-1}}{ c_{i-1} + d_{i-1}}t^{m_i}. \end{align*} Therefore we have \begin{align*} (\dfrac{\partial R}{\partial x})^{{\Phi}^\ast}&=\susum{i}{} {{\Phi}^\ast}(g_i)\\ &={\displaystyle \sum_{m_i = even}} \mtx{a_{i-1}}{b_{i-1}}{c_{i-1}}{d_{i-1}} t^{m_i} - {\displaystyle \sum_{m_j = odd}} \mtx{-a_{j-1}}{a_{j-1} + b_{j-1}}{-c_{j-1}}{c_{j-1} + d_{j-1}}t^{m_j}. \end{align*} We note that as polynomials in $\omega$, the constant terms of $a_{i-1}$ and $d_{i-1}$ are both $1$. Further, since $c_{i-1}= \omega b_{i-1}$, the constant term of $c_{i-1} + d_{i-1}$ is also 1, and hence \begin{equation*} \sum_{i} {g_i}^{{\Phi}^\ast} =\mtx{\Delta_{K(r)}(-t) + \omega \mu_{11}}{\mu_{12}} {\omega \mu_{21}} {\Delta_{K(r)}(t) + \omega \mu_{22}},\ {\rm where}\ \mu_{ij} \in (\ZZ[\omega])[t^{\pm 1}]. \end{equation*} If we replace $\ZZ$ by $\ZZ/p$, then $C_n$ is reduced to $\left[\begin{array}{ccc|c} 0&\cdots&0&0\\ \hline &&&0\\ &\Hsymb{E} & &\vdots\\ &&&0 \end{array} \right] $ and hence $\sum_{i} [{g_i}^{\Phi^\ast}]^{\gamma^\ast} \equiv \mtx{A}{B}{C}{D}$ (mod $p$), where $A,B,C$ and $D$ are lower triangular and in particular, $C$ is strictly lower triangular, and each diagonal entry of $A$ and $D$ is $\Delta_{K(r)}(-t)$ (mod $p$) and $\Delta_{K(r)}(t)$ (mod $p$), respectively. Therefore, by Lemma \ref{lem:7.1}, we have \begin{align*} D_{\tau,K(r)}(t) &\equiv \det (\susum{i}{} [{g_i}^{\Phi^\ast}]^{{\gamma}^\ast}) / \det [(1-t)(1+t)]^{\gamma^\ast}\\ & \equiv \left\{\dfrac{\Delta_{K(r)}(t)}{1+t}\right\}^n \left\{ \dfrac{\Delta_{K(r)}(-t)}{1-t}\right\}^n\ {\rm (mod}\ p). \end{align*} This proves (2.12) for any $2$-bridge knot $K(r)$ with $\alpha \equiv 0$ (mod $p$). We note that $\Delta_{K(r)}(t)$ is divisible by $1+t$ over $(\ZZ/p)[t^{\pm1}]$.
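The determinant identity of Lemma \ref{lem:7.1} used above can also be confirmed on a concrete instance; a minimal Python sketch (matrices chosen arbitrarily subject to the hypotheses, helper names ours):

```python
def det(m):
    # cofactor expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def assemble(A, B, C, D):
    # build M = [[A, B], [C, D]] from four n x n blocks
    n = len(A)
    return [A[i] + B[i] for i in range(n)] + [C[i] + D[i] for i in range(n)]

A = [[2, 0, 0], [5, 3, 0], [1, 4, 7]]   # lower triangular
B = [[1, 0, 0], [2, 6, 0], [3, 1, 2]]   # lower triangular
C = [[0, 0, 0], [4, 0, 0], [9, 8, 0]]   # strictly lower triangular
D = [[5, 0, 0], [7, 2, 0], [6, 3, 4]]   # lower triangular

M = assemble(A, B, C, D)
# det M = det A * det D = product of all diagonal entries of M
assert det(M) == det(A) * det(D) == 2 * 3 * 7 * 5 * 2 * 4
```

Interleaving the rows and columns of the two blocks turns $M$ into a block lower triangular matrix with $2\times 2$ diagonal blocks $\mtx{a_{ii}}{b_{ii}}{0}{d_{ii}}$, which is one way to see the lemma.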
\section{$N(q,p)$-representations} In this section, we discuss another type of metacyclic representation and the twisted Alexander polynomial associated to these representations. Let $q \geq 1$ and let $p = 2n+1$ be an odd prime. Consider the metacyclic group $N(q,p)= \ZZ/2q {\small \marusen} \ZZ/p$, the semi-direct product of $\ZZ/2q$ and $\ZZ/p$ defined by \begin{equation} N(q,p) = \langle s,a| s^{2q}= a^p = 1, sas^{-1}=a^{-1} \rangle. \end{equation} Note that $N(1,p)=D_p$ and $N(2,p)$ is a binary dihedral group, denoted by $N_p$. Since $s^2$ generates the center of $N(q,p)$, we see that $N(q,p)/\langle s^2\rangle =D_p$ and hence $|N(q,p)| = 2pq$. For simplicity, we assume hereafter that $\gcd(q,p)= 1$. Now it is known \cite{Ha-M}, \cite{Ha} that the knot group $G(K)$ of a knot $K$ is mapped onto $N(q,p)$ if and only if $G(K)$ is mapped onto $D_p$, namely, $\Delta_K (-1) \equiv 0$ (mod $p$). For a 2-bridge knot $K(r)$, if $\Delta_{K(r)} (-1) \equiv 0$ (mod $p$), then we may assume without loss of generality that there is an epimorphism $\widetilde{\rho}: G(K(r)) \longrightarrow N(q,p)$ for any $q \geq 1$ such that \begin{equation} \widetilde{\rho}(x)= s\ {\rm and}\ \widetilde{\rho}(y) = sa. \end{equation} As before, we draw a diagram below consisting of various groups and connecting homomorphisms.
$ \begin{array}{ cc ccc cc} & & & &GL(2qp,\ZZ)& &\\ & & &\hspace*{5mm} \mbox{\Large $\nearrow$}\mbox{{\large $\tilde{\xi}$}}& & &\\ & &\hspace*{5mm} N(q,p) & \hspace*{5mm} \overset{\mbox{\large $\tilde{\pi}$}}{\longrightarrow}& GL(2n,\CC)& \overset{\mbox{\large $\tilde{\gamma}$}}{\longrightarrow}&GL(2nm,\ZZ)\\ & \mbox{\large $\tilde{\rho}$}\mbox{\Large $\nearrow$}& & &&& \\ G(K)& \overset{\mbox{{\large $\rho_p$}}}{\longrightarrow}&N_p& \overset{\mbox{\large $\xi_p$}}{\longrightarrow}&SU(2,\CC)& \overset{\mbox{\large $\gamma_p$}}{\longrightarrow}& GL(4n,\ZZ)\\ & \mbox{\large $\rho$}\mbox{\Large $\searrow$}&& \hspace*{5mm} &&&\\ &&D_p&\hspace*{5mm}\overset{\mbox{{\large $\pi_0$}}}\longrightarrow&GL(2n,\ZZ)&&\\ &&&\hspace*{5mm} \mbox{\Large $\searrow$}\mbox{\large$\pi$}&&&\\ &&&&GL(p,\ZZ)&& \end{array} $\\ Here, $p=2n+1, \hat{\rho}=\rho\circ\pi,\rho_0=\rho\circ\pi_0, \tilde\nu=\tilde\rho\circ\tilde\xi,\tilde\tau=\tilde\rho\circ\tilde\pi$,$\tau_p=\rho_p\circ\xi_p$ and $m$ is the degree of the minimal polynomial of $\zeta$ over $\QQ$. \\ Using the irreducible representation $\pi_0$ of $D_p$ on $GL(2n,\ZZ)$, we can define a representation of $N(q,p)$ on $GL(2n,\CC)$. In fact, we have \begin{lemm}\label{lem:newSec8.1} Let $\zeta$ be a primitive $2q$-th root of $1$, $q \geq 1$. Then the mapping $\widetilde{\pi}: N(q,p) \longrightarrow GL(2n,\CC)$ defined by \begin{align} \widetilde{\pi} (s) &= \zeta \pi_0 (x)\ {\it and}\nonumber\\ \widetilde{\pi} (sa) &= \zeta \pi_0 (y) \end{align} gives a representation of $N(q,p)$ on $GL(2n,\CC)$. \end{lemm} Since a proof is straightforward, we omit details. Now $\widetilde{\tau}= \widetilde{\rho} \circ \widetilde{\pi}: G(K(r)) \longrightarrow GL(2n,\CC)$ defines a metacyclic representation of $G(K(r))$. 
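For small parameters, the presentation (8.1) and the order formula $|N(q,p)| = 2pq$ can be confirmed mechanically by coset enumeration; the following sympy sketch is only a sanity check on the group-theoretic setup, not part of the argument:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

def metacyclic_order(q, p):
    # N(q,p) = <s, a | s^(2q) = a^p = 1, s a s^-1 = a^-1>,
    # with the last relation written as the relator s*a*s^-1*a.
    F, s, a = free_group("s, a")
    return FpGroup(F, [s**(2 * q), a**p, s * a * s**-1 * a]).order()

assert metacyclic_order(1, 3) == 6    # N(1,3) = D_3
assert metacyclic_order(2, 3) == 12   # the binary dihedral group N_3
assert metacyclic_order(4, 3) == 24   # |N(q,p)| = 2pq
```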
Then the twisted Alexander polynomial $\widetilde{\Delta}_{\tilde{\tau}, K(r)}(t|\zeta)$ of $K(r)$ associated to $\widetilde{\tau}$ is given by \begin{equation} \widetilde{\Delta}_{\tilde\tau, K(r)}(t|\zeta) = \widetilde{\Delta}_{\rho_0, K(r)}(\zeta t), \end{equation} where ${\rho}_0 =\rho \circ {\pi}_0$. Therefore, the total twisted Alexander polynomial is \begin{equation} D_{\tilde{\tau}, K(r)}(t)= \prod_{(2q,k)=1} \widetilde{\Delta}_{\rho_0, K(r)}({\zeta}^k t). \end{equation} This proves the following theorem. \begin{thm} Let $p=2n+1$ be an odd prime and $q \geq 1$. Let $K(r)$ be a 2-bridge knot. Suppose $\Delta_{K(r)}(-1)\equiv 0$ (mod $p$). Then $G(K(r))$ has a metacyclic representation \begin{equation*} \widetilde{\tau}=\widetilde{\rho} \circ \widetilde{\pi}: G(K(r)) \longrightarrow N(q,p) \longrightarrow GL(2n,\CC). \end{equation*} Let $\zeta$ be a primitive $2q$-th root of $1$. Then the twisted Alexander polynomial $\widetilde{\Delta}_{\tilde{\tau}, K(r)}(t)$ and the total twisted Alexander polynomial $D_{\tilde{\tau}, K(r)}(t)$ associated to $\widetilde{\tau}$ are given by \begin{align} &(1)\ \widetilde{\Delta}_{\tilde{\tau}, K(r)}(t) =\widetilde{\Delta}_{\rho_0, K(r)}(\zeta t).\nonumber\\ &(2)\ D_{\tilde{\tau}, K(r)}(t)= \prod_{(2q,k)=1}\widetilde{\Delta}_{\rho_0,K(r)}({\zeta}^k t). \end{align} \end{thm} We conclude this section with a few remarks. First, as we mentioned earlier, if $q=2$, $N(2,p)$ is a binary dihedral group, denoted by $N_p$. It is known \cite{Kl} \cite{L2} that generators $s$ and $sa$ of $N_p$ are represented in $SU(2, \CC)$ by trace free matrices. In fact, the mapping $\xi_p$: \begin{equation} \xi_p (s) =\mtx{0}{1}{-1}{0}\ {\rm and}\ \xi_p (sa) =\mtx{0}{v_p}{-v_p^{-1}}{0} \end{equation} gives a representation of $N_p$ into $SU(2, \CC)$, where $v_p=e^{\frac{2\pi i}{p}}$. 
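One can check numerically that (8.7) respects the relations of $N_p$. The numpy sketch below verifies $s^4 = a^p = 1$ and $s a s^{-1} = a^{-1}$ (recovering the image of $a$ as $s^{-1}\cdot(sa)$) for $p = 5$, and that both images indeed lie in $SU(2, \CC)$:

```python
import numpy as np

def xi_p(p):
    # Images of s and sa under xi_p, as in (8.7)
    v = np.exp(2j * np.pi / p)
    S = np.array([[0, 1], [-1, 0]], dtype=complex)
    T = np.array([[0, v], [-1 / v, 0]], dtype=complex)
    return S, T

p = 5
S, T = xi_p(p)
A = np.linalg.inv(S) @ T              # image of a = s^-1 * (sa)
I2 = np.eye(2)
assert np.allclose(np.linalg.matrix_power(S, 4), I2)             # s^4 = 1
assert np.allclose(np.linalg.matrix_power(A, p), I2)             # a^p = 1
assert np.allclose(S @ A @ np.linalg.inv(S), np.linalg.inv(A))   # s a s^-1 = a^-1
for M in (S, T):                      # both images lie in SU(2)
    assert np.allclose(M.conj().T @ M, I2)
    assert np.isclose(np.linalg.det(M), 1)
```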
We now show that the total twisted Alexander polynomial $D_{{\tau}_p, K(r)}(t)$ associated to ${\tau}_p = {\rho}_p \circ {\xi}_p$ is given by \begin{equation} D_{{\tau}_p, K(r)}(t)= \widetilde{\Delta}_{\rho_0,K(r)}(it) \widetilde{\Delta}_{\rho_0, K(r)}(-it), \end{equation} where $i = \sqrt{-1}$. Therefore we have the following corollary. \begin{cor} If $q=2$, then $D_{\tilde\tau, K(r)}(t) = D_{\tau_p, K(r)}(t)$. \end{cor} {\it Proof of (8.8).} Let $C_p$ be the companion matrix of the minimal polynomial of $v_p$, namely, $C_p= \left[\begin{array}{ccc|c} 0&\cdots&0&-1\\\hline & & &-1\\ & \hsymb{E}& &\vdots\\ & & &-1 \end{array} \right]. $ Then, by definition, we have \begin{equation} D_{\tau_p, K(r)} (t) = \det[\widetilde{\Delta}_{\tau_p,K(r)}(t|C_p)]. \end{equation} And (8.8) follows from the following lemma. \begin{lemm}\label{lem:newSec8.4} Let $E^*_{2n} = \left[a_{j,k}\right]$ be the $2n \times 2n$ matrix such that $a_{j,k}= 1$ if $k+j =2n+1$ and $0$ otherwise ($E_{2n}^{*}$ is the \lq mirror image\rq\ of $E_{2n}$). Denote\\ \begin{align*} &A = \mtxc{0}{E_{2n}}{-E_{2n}}{0}, B = \mtxc{0}{C_p}{-C_p^{-1}}{ 0},\ {\it and}\\ &\widehat A = \mtxc{i E_{2n}^{*}}{0}{0}{ -iE_{2n}^{*}}, \widehat B =\mtxc{i\pi_0(y)}{0}{0}{-i\pi_0(y)}. \end{align*} Then there exists a matrix $M_{4n} \in GL(4n,\CC)$ such that $M_{4n}A {M_{4n}}^{-1}=\widehat A$ and $M_{4n} B {M_{4n}}^{-1} = \widehat B$. \end{lemm} {\it Proof.} A simple computation shows that $M_{4n}=\frac{1}{\sqrt{2}}\mtx{E_{2n}}{-iE_{2n}^{*}}{E_{2n}}{iE_{2n}^{*}}$ is what we sought. \fbox{} Secondly, the metacyclic group $N(q,p)$ is also represented by $\widetilde{\xi}$ in $GL(2qp,\ZZ)$ via the (right) {\it regular} permutation representation on the symmetric group $S_{2qp}$. To be more precise, let \begin{align*} S=\{1,s,s^2,\cdots,s^{2q-1},\ & a,sa,s^2a,\cdots,s^{2q-1}a,\ a^2,sa^2,s^2a^2,\cdots,s^{2q-1}a^2,\ \cdots,\\ & a^{p-1},sa^{p-1},s^2a^{p-1},\cdots,s^{2q-1}a^{p-1}\} \end{align*} be the ordered set of the elements of $N(q,p)$.
Then the right multiplication by an element $g$ of $N(q,p)$ on $S$ induces a permutation associated to $g$, and by taking the permutation matrix corresponding to this permutation, we obtain the representation $\widetilde\xi$ of $N(q,p)$ on $GL(2qp,\ZZ)$. Then we have the following: \begin{prop}\label{prop:8.5} For any $q\ge 1$, the twisted Alexander polynomial $\widetilde{\Delta}_{\tilde\nu,K(r)}(t)$ of $K(r)$ associated to $\widetilde{\nu}=\widetilde{\rho} \circ \widetilde{\xi}$ is given by \begin{equation} \widetilde{\Delta}_{\tilde{\nu}, K(r)}(t) = \frac{\displaystyle{\prod_{k=0}^{2q-1}} \Delta_{K(r)}(\zeta^k t)}{1-t^{2q}} \prod_{k=0}^{2q-1} \widetilde{\Delta}_{\rho_0, K(r)}(\zeta^k t), \end{equation} where $\zeta$ is a primitive $2q$-th root of $1$. Therefore, $\widetilde{\Delta}_{\tilde{\nu},K(r)}(t)$ is an integer polynomial in $t^{2q}$ and $D_{\tilde{\tau},K(r)}(t)$ divides $\widetilde{\Delta}_{\tilde{\nu},K(r)}(t)$. \end{prop} {\it Proof.} By construction, $\widetilde\xi(s)=\rho(x) \otimes C$ and $\widetilde\xi(sa)=\rho(y) \otimes C$, where $C$ is the transpose of the companion matrix of $t^{2q}-1$ and $[a_{i,j}] \otimes C=[a_{i,j}C]$, the tensor product of $[a_{i,j}]$ and $C$. Therefore (8.10) follows immediately. \fbox{} If Conjecture A holds for $K(r)$, $\widetilde{\Delta}_{\tilde{\nu}, K(r)}(t)$ is of the form: \begin{equation*} \widetilde{\Delta}_{\tilde{\nu}, K(r)}(t) = \frac{\prod_{k=0}^{2q-1} \Delta_{K(r)}({\zeta}^k t)}{1-t^{2q}} f(t^{2q})^2, \end{equation*} for some integer polynomial $f(t^{2q})$ in $t^{2q}$. If coefficients are taken from a finite field, then (8.10) becomes much simpler. The following proposition is a metacyclic version of (2.12). Since a proof is easy, we omit details. \begin{prop} Let $p$ be an odd prime. Suppose $\Delta_{K(r)}(-1)\equiv 0$ (mod $p$). Then we have \begin{equation} \widetilde\Delta_{\tilde\nu,K(r)}(t)\equiv \left\{\prod_{k=0}^{2q-1} \Delta_{K(r)}(\zeta^k t)\right\}^p / (1-t^{2q})^p\ {\rm (mod}\ p\ {\rm )}. 
\end{equation} \end{prop} \begin{rem} In \cite{cha}, Cha defined the twisted Alexander invariant of a fibred knot $K$ using its Seifert fibred surface. Evidently, this invariant is closely related to our twisted Alexander polynomial. For example, as is described in \cite[Example]{cha}, if we consider a regular dihedral representation $\rho$, then the invariant he defined is essentially the same as the twisted Alexander polynomial associated to a regular dihedral representation $\widetilde{\nu}=\widetilde{\rho} \circ \widetilde{\xi}: G(K) \rightarrow D_p = N(1,p) \rightarrow GL(2p,\ZZ)$ we discussed in this section. More precisely, let $A_{\rho,K}(t)$ be Cha's twisted Alexander invariant associated to $\rho$ and $\widetilde{\Delta}_{\widetilde{\nu},K}(t)$ the twisted Alexander polynomial of a knot $K$ associated to $\widetilde{\nu}$. Then we have \begin{equation} (1-t^2) \widetilde{\Delta}_{\widetilde{\nu},K}(t)= A_{\rho,K}(t^2). \end{equation} We should note that $\widetilde{\Delta}_{\widetilde{\nu},K}(t)$ is an integer polynomial in $t^2$. (See Proposition \ref{prop:8.5}) Details will appear elsewhere. \end{rem} \section{Example} The following examples illustrate our main theorem. \begin{ex} Dihedral representations $\tau: G(K(r))\longrightarrow D_p\longrightarrow GL(2,\CC)$. (I) Let $p=3$ and $n=1$. Then $\theta_1(z) = z+3$ and $\omega = -3$.\\ (a) $r=1/3$. $D_{\tau,K(1/3)}(t)=\widetilde{\Delta}_{\rho_0,K(1/3)}(t) = 1-t^2$.\\ (b) $r= 1/9$. $D_{\tau,K(1/9)}(t)= \widetilde{\Delta}_{\rho_0,K(1/9)}(t)=(1-t^2)(1-t^3+t^6)(1 +t^3 +t^6)$.\\ (c) $r=5/27$. $D_{\tau,K(5/27)}(t)=\widetilde{\Delta}_{\rho_0,K(r)}(t) =(1-t^2) (1+t -t^2 +t^3 +t^4) (1 -t -t^2 -t^3 +t^4)$. 
Note $\Delta_{K(5/27)}(t)=(1 -t +t^2) (2 - 2t +t^2 -2t^3 +2t^4)$ and\\ $2 - 2t +t^2 -2t^3 +2t^4 \equiv - (1 -t -t^2 -t^3 +t^4)$ (mod $3$), and \begin{align*} D_{\tau,K(5/27)}(t) &\equiv \dfrac{\Delta_{K(r)}(t)}{1+t} \dfrac{\Delta_{K(r)}(-t)}{1-t}\\ &\equiv \dfrac{(1+t)^2 (1 -t -t^2-t^3 +t^4)}{1+t} \dfrac{(1-t )^2 (1+t -t^2+t^3 +t^4)}{1-t} \\ &\equiv (1-t^2) (1-t -t^2 -t^3 +t^4) (1+t -t^2 +t^3 +t^4) \ {\rm (mod}\ 3). \end{align*} (II) Let $p=5$ and $n=2$. Then $\theta_2 (z) = z^2 + 5z +5$.\\ (a) $r=1/5$. $D_{\tau,K(r)}(t)=(1-t^2)^2 \Delta_{K(1/5)}(t) \Delta_{K(1/5)}(-t)$.\\ (b) $r= 19/85$ $D_{\tau,K(r)}(t) =D_{\tau,K(1/5)}(t)f(t) f(-t)$, where $f(t) = 1- 3t - 2t^2 + 4t^3 - t^4 - 4t^6 -3t^7 + 7t^8 - 3t^9 -4t^{10} -t^{12} +4t^{13} -2t^{14} - 3t^{15}+ t^{16}$, and $\Delta_{K(r)}(t) = \Delta_{K(1/5)}(t) g(t)$, where $g(t) = 2 -2t +2t^2 - 2t^3 + t^4 - 2t^5 + 2t^6 - 2t^7 + 2t^8$, and $f(t) \equiv g(t)^2$ (mod $5$). Since $\Delta_{K(1/5)}(t) \equiv (1+t)^4$ (mod $5$), we see \begin{align*} D_{\tau,K(r)}(t) &= D_{\tau,K(1/5)}(t) f(t) f(-t)\\ &=\left\{(1+t)^2 \Delta_{K(1/5)}(t) f(t) \right\} \left\{ (1-t)^2 \Delta_{K(1/5)}(-t) f(-t) \right\}\\ &\equiv \{(1+t)^6 g(t)^2\}\{(1-t)^6 g(-t)^2\}\\ &\equiv \{(1+t)^3 g(t)\}^2 \{(1-t)^3 g(-t)\}^2\\ &\equiv\left\{\dfrac{\Delta_{K(1/5)}(t) g(t)}{1+t}\right\}^2 \left\{\dfrac{\Delta_{K(1/5)}(-t) g(-t)}{1-t}\right\}^2\\ &\equiv \left\{\dfrac{\Delta_{K(r)}(t)}{1+t}\right\}^2 \left\{\dfrac{\Delta_{K(r)}(-t) }{1-t}\right\}^2\ {\rm (mod}\ 5). \end{align*} (c) $r=21/115$. $D_{\tau,K(r)}(t)= D_{\tau,K(1/5)}(t)f(t) f(-t)$, where $f(t) = 4 +2t - 3t^2 - t^3 - 8t^5 - 3t^6+4t^7 + t^9 + 9t^{10} +t^{11} +4t^{13} -3t^{14} -8t^{15} - t^{17} -3 t^{18} +2t^{19} +4t^{20}$, and $ \Delta_{K(r)}(t) = \Delta_{K(1/5)}(t) g(t)$, where $g(t) = 2 -2t +2t^2 - 2t^3 +2t^4 -3t^5 +2 t^6 - 2t^7 + 2t^8 - 2t^9 + 2t^{10}$, and $f(t) \equiv g(t)^2$ (mod $5$). 
Therefore, we see \begin{equation*} D_{\tau, K(r)}(t) \equiv \left\{ \dfrac{\Delta_{K(r)}(t)}{1+t} \right\}^2 \left\{\dfrac{\Delta_{K(r)}(-t)}{1-t}\right\}^2\ {\rm (mod}\ 5). \end{equation*} \end{ex} \begin{ex}\label{ex:9.2n} Binary dihedral representations. $\tau_p:G(K(r))\longrightarrow N_p \longrightarrow GL(2n,\CC)$ (I) Let $p=3$ and $n=1$.\\ (a)When $r=1/9$, $D_{\tau_p, K(r)}(t)$ = $(1+t^2)^2(1-t^6+t^{12})^2$.\\ (b)When $r=5/27$, $D_{\tau_p, K(r)}(t) = (1+t^2)^2(1+3t^2+t^4+3t^6+t^8)^2$. (II) Let $p=5$ and $n=2$.\\ (a) When $r=1/5$, $D_{\tau_p, K(r)}(t) = (1+t^2)^4(1-t^2+t^4-t^6+t^8)^2$.\\ (b) When $r =19/85$, $D_{\tau_p, K(r)}(t) = (1+t^2)^4(1-t^2+t^4-t^6+t^8)^2 f(t)^2$, where $f(t)= 1+13t^2+26t^4+20t^6+13t^8+22t^{10}+40t^{12} +33t^{14}+25t^{16}+33t^{18}+40t^{20}+22t^{22} +13t^{24}+20t^{26}+26t^{28}+13t^{30}+t^{32}$. \end{ex} \begin{ex}\label{ex:9.3} $N(q,p)$-representations. $\widetilde{\nu}:G(K(r)) \longrightarrow N(q,p) \longrightarrow GL(2pq,\ZZ)$. (I) Let $q=4,p=3,N(4,3)=\ZZ/8{\small \marusen} \ZZ/3$.\\ (a) $r=1/3$. $\widetilde\Delta_{\tilde{\nu},K(1/3)}(t)= (1-t^8)(1+t^8+t^{16})$.\\ (b) $r=1/9$. $\widetilde\Delta_{\tilde{\nu},K(1/9)}= (1-t^8)(1+t^8+t^{16})(1+t^{24}+t^{48})^3$.\\ (c) $r=5/27$. \begin{align*} &\ \ \ \ \widetilde\Delta_{\tilde{\nu},K(5/27)}\\ &\ \ \ \ \ \ \ \ = (1-t^8)(1+t^8+t^{16})(16+31t^{8}+16t^{16})^2(1-79t^8+129t^{16} -79t^{24}+t^{32})^2. \end{align*} (II) Let $q=5,p=3, N(5,3)=\ZZ/10{\small \marusen} \ZZ/3$.\\ (a) $r=1/3$. $\widetilde\Delta_{\tilde{\nu},K(1/3)}(t)= (1-t^{10})(1+t^{10}+t^{20})$.\\ (b) $r=1/9$. $\widetilde\Delta_{\tilde{\nu},K(1/9)}= (1-t^{10})(1+t^{10}+t^{20})(1+t^{30}+t^{60})^3$.\\ (c) $r=5/27$. \begin{align*} &\ \ \ \ \widetilde\Delta_{\tilde{\nu},K(5/27)}\\ &\ \ \ \ \ \ \ \ = (1-t^{10})(1+t^{10}+t^{20})(1-228t^{10}-314t^{20}-228t^{30}+t^{40})^2\\ &\ \ \ \ \ \ \ \ \ \ \times (1024+1201t^{20}+1024t^{40}). \end{align*} (III) Let $q=3,p=5$, $N(3,5)=\ZZ/6{\small \marusen} \ZZ/5$\\ (a) $r=1/5$. 
$\widetilde\Delta_{\tilde\nu,K(1/5)}(t)= (1-t^6)^3(1+t^6+t^{12}+t^{18}+t^{24})^3$.\\ (b) $r=19/85$. \begin{align*} &\ \ \widetilde\Delta_{\tilde{\nu},K(19/85)}\\ &\ \ \ \ = (1-t^6)^3(1+t^6+t^{12}+t^{18}+t^{24})^3\\ &\ \ \ \ \ \ \times (64+64t^6+48t^{12}+12t^{18}+49t^{24}+12t^{30}+48t^{36}+64t^{42}+64t^{48})\\ &\ \ \ \ \ \ \times (1-1243t^6+3335t^{12}+1570t^{18}-2423t^{24}+6320t^{30}-992t^{36}\\ &\ \ \ \ \ \ \ \ -2181t^{42} +9451t^{48}-2181t^{54}-992t^{60}+6320t^{66}-2423t^{72}\\ &\ \ \ \ \ \ \ \ +1570t^{78}+3335t^{84}-1243t^{90}+t^{96})^2 \end{align*} \end{ex} \section{$K$-metacyclic representations} In this section, we briefly discuss $K$-metacyclic representations of the knot group. Let $p$ be an odd prime. Consider a group $G(p-1,p|k)$ that has the following presentation: \begin{equation} G(p-1,p|k) = \langle s,a | s^{p-1}= a^p = 1, sas^{-1} =a^k \rangle, \end{equation} where $k$ is a primitive $(p-1)$-st root of $1$ (mod $p$). We call $G(p-1,p| k)$ a $K$-metacyclic group according to \cite{Fox}. \begin{prop} Two $K$-metacyclic groups of the same order, $p(p-1)$ say, are isomorphic. \end{prop} {\it Proof.} Let $G(p-1,p| \ell)=\langle u,b | u^{p-1}= b^p = 1, u b u^{-1} = b^{\ell}\rangle$ be another $K$-metacyclic group. Since $\ell$ is also a primitive $(p-1)$-st root (mod $p$), we see that $\ell \equiv k^m$ (mod $p$), $1 \leq m \leq p-2$ for some $m$, where $m$ and $p-1$ are coprime. Take two integers $\lambda$ and $\mu$ such that $m \lambda + (p-1) \mu= 1$. Then it is easy to show that a homomorphism $h:\ G(p-1,p|k) \rightarrow G(p-1,p|\ell)$ defined by $h(s) = u^{\lambda}$ and $h(a) = b$ is in fact an isomorphism. \fbox{} The following proposition is also well-known. \begin{prop} \cite{Fox}\cite{Ha} Let $p$ be an odd prime. Suppose that $k$ is a primitive $(p-1)$-st root of 1 (mod $p$). Then the knot group $G(K)$ is mapped onto $G(p-1,p|k)$ if and only if $\Delta_K(k) \equiv 0$ (mod $p$). 
\end{prop} As is shown in \cite{Fox}, $G(p-1,p|k)$ is faithfully represented in $S_p$ by \begin{equation} \sigma (a) = (1 2 3 \cdots p)\ {\rm and}\ \sigma (s) = (k^{p-1} k^{p-2} \cdots k^2 k). \end{equation} Let $\pi_*:\ G(p-1,p |k) \rightarrow GL(p, \ZZ)$ be a matrix representation of $G(p-1,p|k)$ via $\sigma$. Now, let $K(r)$ be a $2$-bridge knot. Suppose that $\Delta_K(k) \equiv 0$ (mod $p$) for some primitive $(p-1)$-st root $k$ of $1$ (mod $p$). Then a homomorphism $\delta:\ G(K(r)) \rightarrow G(p-1,p | k)$ given by \begin{equation} \delta (x) = s\ {\rm and}\ \delta (y)=sa, \end{equation} induces a $K$-metacyclic representation $\Theta = \delta \circ \pi_* :\ G(K(r)) \rightarrow GL(p,\ZZ)$. Then Conjecture A states that \begin{equation} \widetilde{\Delta}_{\Theta,K(r)} (t) = \left[\frac{\Delta_{K(r)}(t)}{1-t}\right] F(t^{p-1}). \end{equation} We will see that (10.4) holds for the following knots, including a non-$2$-bridge knot. \begin{ex} (1) Consider a trefoil knot $K$. Since $\Delta_K(-2) \equiv 0$ (mod 7) and $-2$ is a primitive $6$th root of 1 (mod 7), $G(K)$ is mapped onto $G(6,7|-2)$. Then $(\delta \circ \sigma) (x)= \sigma(s) = (132645)$ and $(\delta \circ \sigma) (y) = \sigma (sa) = (146527)$, and we see $\widetilde{\Delta}_{\Theta,K} (t) = \left[\frac{\Delta_K(t)}{1-t}\right] (1-t^6)$. (2) Let $K=K(1/9)$. Since $K(1/9) \in H(3)$, $G(K(1/9))$ is mapped onto $G(6,7|-2)$, and $\widetilde{\Delta}_{\Theta,K} (t) =\left[\frac{\Delta_K(t)}{1-t}\right] (1-t^6)(1-t^6+t^{12})$. (3) Let $K=K(5/27) \in H(3)$. Then $\widetilde{\Delta}_{\Theta,K} (t) =\left[\frac{\Delta_K(t)}{1-t} \right] (1-t^6) (1-7t^6+9t^{12}-7t^{18}+t^{24})$. \end{ex} \begin{ex} Consider a knot $K=K(5/9)$. Since $\Delta_K (t)=2 - 5t + 2t^2$, $\Delta_K (2) = 0$ and hence $G(K(5/9))$ is mapped onto $G(m,p|2)$ for any odd prime $p$, where $m$ is a divisor of $p-1$. If $p=5$ or $11$, then $2$ is a primitive $(p-1)$-st root of 1 (mod $p$).
We see then: (i) For $p=5, \widetilde{\Delta}_{\Theta,K} (t) = \left[\frac{\Delta_K(t)}{1-t}\right] (1-t^4)$. (ii) For $p=11, \widetilde{\Delta}_{\Theta,K} (t) = \left[\frac{\Delta_K(t)}{1-t}\right] (1-t^{10})$. It is quite likely that we have $\widetilde{\Delta}_{\Theta,K} (t) =\left[\frac{\Delta_K(t)}{1-t}\right] (1-t^{p-1})$, for any odd prime $p$ such that $2$ is a primitive $(p-1)$-st root of 1 (mod $p$). (iii) If $p=7$, then $2$ is a primitive third root of $1$ (mod $7$) and hence $G(K)$ has a representation $\Theta:\ G(K) \rightarrow G(3,7|2) \rightarrow GL(7,\ZZ)$, and we obtain $\widetilde{\Delta}_{\Theta,K} (t) = \left[\frac{\Delta_K(t)}{1-t}\right] (1-t^3)^2$. \end{ex} \begin{ex} Consider the non-$2$-bridge knot $K=8_5$ in the Reidemeister-Rolfsen table. We have a Wirtinger presentation $G(K)=\langle x,y,z|R_1,R_2\rangle$, where\\ \hspace*{2cm} $R_1= (x^{-1}y^{-1}zyxy^{-1}x^{-1}y^{-1}) x(yxyx^{-1}y^{-1}z^{-1}yx)y^{-1}$ and\\ \hspace*{2cm} $R_2 = (yx^{-1}y^{-1}z^{-1}x^{-1})y (xzyxy^{-1})z^{-1}$. (10.5) \setcounter{equation}{5} Since $\Delta_{K} (t) = (1-t+t^2 )(1-2t+t^2 -2t^3 +t^4)$, it follows that $\Delta_{K}(-1) \equiv 0$ (mod $3$) and $\Delta_{K} (-1) \equiv 0$ (mod $7$), and further $\Delta_{K} (-2) \equiv 0$ (mod $7$). Therefore, $G(K)$ is mapped onto each of the following groups: $D_3, D_7, N(2,3), N(2, 7)$ and $G(6,7|-2)$, since $-2$ is a primitive $6$-th root of $1$ (mod 7). Now we have five representations, and we compute their twisted Alexander polynomials. (1) For $\rho_1:\ G(K) \rightarrow D_3 \rightarrow GL(3,\ZZ)$, defined by $\rho_1(x)= \rho_1(z)= \pi \rho (x)$ and $\rho_1(y) = \pi \rho(y)$, we have $\widetilde{\Delta}_{\rho_1, K}(t) = \left[ \frac{\Delta_K(t)}{1-t}\right] f_1(t) f_1(-t)$, where $f_1 (t)= (1+t)(1+t - 2t^2 +t^3 +t^4 )$.
(2) For $\rho_2:\ G(K) \rightarrow D_7 \rightarrow GL(7,\ZZ)$, defined by $\rho_2(x)=\rho_2 (y)=\pi \rho (x)$ and $\rho_2(z) =\pi \rho(y)$, we have $\widetilde{\Delta}_{\rho_2, K}(t) = \left[ \frac{\Delta_K(t)}{1-t}\right] f_2 (t) f_2 (-t)$, where $f_2 (t)=(1+t)^3 (1+2t - 7t^3 -13t^4 -13t^5 -11t^6 -13t^7 -13t^8 -7t^9 +2t^{11} +t^{12})$. (3) For $\rho_3:\ G(K) \rightarrow N(2,3) \rightarrow GL(12,\ZZ)$, defined by $\rho_3(x)= \rho_3(z)= \widetilde{\nu} (x)$ and $\rho_3 (y)= \widetilde{\nu} (y)$, we have $\widetilde{\Delta}_{\rho_3, K}(t) = (1+t^2 )^2 (1+5t^2 +4t^4 +5t^6+t^8)^2$. (4) For $\rho_4:\ G(K) \rightarrow N(2,7) \rightarrow GL(28,\ZZ)$, defined by $\rho_4(x)=\rho_4 (y)=\widetilde{\nu} (x)$ and $\rho_4 (z)=\widetilde{\nu} (y)$, we have $\widetilde{\Delta}_{\rho_4, K}(t)=(1+t^2)^6 (1+4t^2 + 2t^4 +19t^6 +13t^8 +37t^{10} +17t^{12} +37t^{14} +13t^{16} +19t^{18} +2t^{20} +4t^{22} +t^{24})^2$. (5) For $\rho_5:\ G(K) \rightarrow G(6,7|-2) \rightarrow GL(7,\ZZ)$, defined by $\rho_5(x)= \rho_5 (z)=\Theta (x)$ and $\rho_5 (y)=\Theta (y)$, we have $\widetilde{\Delta}_{\rho_5, K}(t) = \left[ \frac{\Delta_K(t)}{1-t}\right]F(t)$, where $F(t)=(1-t^6)(1-72t^6 -82t^{12} -72t^{18} +t^{24})$. \end{ex} We note that this example also supports Conjecture A. \section{Appendix} \noindent {\bf 11.1. Proof of Proposition 2.1.}\\ Let $\theta_n (z) = c_0^{(n)} + c_1^{(n)} z + \cdots + c_n^{(n)}z^n$ be the polynomial defined in Section 2. Here $c_k^{(n)} =\binom{n+k}{2k} +2\binom{n+k}{2k+1}$. Now we define four $n \times n$ integer matrices $A, A^*, B, B^*$ as follows: $A=[A_{i,j}]$, where $A_{i,j} = a_{i,n-j+1}$, $A^* = [A^{*}_{i,j}]$, where $A_{i,j}^{*} = - a_{i,j-1}$, $B = [B_{i,j}]$, where $B_{i,j} = b_{i,n-j+1}$, and $B^* = [B_{i,j}^*]$, where $B_{i,j}^* = b_{i,j}$. Here $a_{j,k}$ and $b_{j,k}$ are given as follows. 
\begin{align} &(1)\ a_{j,j} = b_{j,j} = 1\ {\rm for}\ 1 \leq j \leq n.\nonumber\\ &(2)\ {\rm For}\ 1 \leq j \leq k, a_{j,k}=\binom{j+k-1}{2j-1}\ {\rm and}\ b_{j,k} = \binom{j+k-2}{2j-2}.\nonumber\\ &(3)\ {\rm If}\ 0 \leq k < j, a_{j,k} = b_{j,k} = 0. \end{align} \begin{lemm}\label{lemm:a1} {\it The following formulas hold.} \begin{align} {\it For}\ &0 \leq k \leq n, \nonumber\\ &(1)\ c_k^{(n)} = a_{k+1,n+1} + a_{k+1,n}.\nonumber\\ {\it For}\ &1 \leq j \leq k,\nonumber\\ &(2)\ b_{j,k} = a_{j,k} - a_{j,k-1},\nonumber\\ &(3)\ b_{j,k} = a_{j-1,k-1} + b_{j,k-1}\ {\it and}\nonumber\\ &(4)\ -2\sum_{k=j}^n b_{j,k} = a_{j-1,n} - c_{j-1}^{(n)} + b_{j,n}. \end{align} \end{lemm} {\it Proof.} Only (4) needs a proof. Since $\sum_{k=j}^n b_{j,k} = \sum_{k=j}^n (a_{j,k} - a_{j,k-1}) = a_{j,n}$, we need to show that $-2a_{j,n} = a_{j-1,n} - c_{j-1}^{(n)} + b_{j,n}$. However, it follows easily from (11.2) (1)-(3). \fbox{} Now these formulas are sufficient to show that the $2n \times 2n$ matrix $U_n =\mtx{A}{A^*}{B}{B^*}$ is what we sought. Since a proof is straightforward, we omit the details. \arraycolsep=1.7pt \begin{ex}\label{ex:a2} For $n=4, 5$, $U_n$ are given by \begin{center} $U_4=\left[\begin{array}{rrrrrrrr} 4& 3& 2& 1& 0& -1& -2& -3\\ 10& 4& 1& 0& 0& 0& -1& -4\\ 6& 1& 0& 0& 0& 0& 0& -1\\ 1& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 1& 1& 1& 1& 1\\ 6& 3& 1& 0& 0& 1& 3& 6\\ 5& 1& 0& 0& 0& 0& 1& 5\\ 1& 0& 0& 0& 0& 0& 0& 1 \end{array}\right]$, and $U_5=\left[\begin{array}{rrrrrrrrrr} 5& 4& 3& 2& 1& 0& -1& -2& -3& -4\\ 20& 10& 4& 1& 0& 0& 0& -1& -4& -10\\ 21& 6& 1& 0& 0& 0& 0& 0& -1& -6\\ 8& 1& 0& 0& 0& 0& 0& 0& 0& -1\\ 1& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\ 10& 6& 3& 1& 0& 0& 1& 3& 6& 10\\ 15& 5& 1& 0& 0& 0& 0& 1& 5& 15\\ 7& 1& 0& 0& 0& 0& 0& 0& 1& 7\\ 1& 0& 0& 0& 0& 0& 0& 0& 0& 1 \end{array}\right]. $ \end{center} \end{ex} \arraycolsep\labelsep \noindent {\bf 11.2. Proof of Lemma 5.2. } First we write down a solution $X=V_n$ of the equation $X^2 = 4E_n+C_n$. 
Let us begin with the alternating Catalan series \begin{equation} \mu (y) = \sum_{k=0}^\infty b_k y^k,\ {\rm where}\ b_k=\frac{(-1)^k}{k+2} \binom{2k+2}{k+1}. \end{equation} Therefore, $\mu (y) = 1 - 2y + 5y^2 - 14y^3 + 42y^4 - 132y^5 + 429y^6 - \cdots$. Let $\theta_n (z)=c_0^{(n)}+ c_1^{(n)} z+ \cdots + c_n^{(n)}z^n$ be the polynomial defined in Section 2. Using $\theta_n (z)$, we define a new polynomial $f_n (x)=x^n \theta_n (x^{-1})= a_0^{(n)} + a_1^{(n)} x + a_2^{(n)}x^2 + \cdots + a_n^{(n)} x^n$. For example, $f_1(x) = x \theta_1(x^{-1})= x(3+x^{-1}) = 3x + 1$, and $f_2(x)= 5x^2+ 5x +1$. Since $a_k^{(n)} = c_{n-k}^{(n)}$, we see that \begin{equation} a_k^{(n)}= \frac{2n+1}{2n-2k+1} \binom{2n-k}{2n-2k} =\binom{2n-k+1}{2n-2k+1} + \binom{2n-k}{2n-2k+1}. \end{equation} Next, we compute $f_n(x) \mu (y) = \sum_{r,s \geq 0} c_{r,s}^{(n)} x^r y^s$, where $c_{r,s}^{(n)} = a_r^{(n)}b_s$, and define integers $d_{k,\ell}^{(n)}$, $0 \leq k, \ell$, as follows: \begin{equation} d_{k,\ell}^{(n)}=c_{k,\ell}^{(n)} + c_{k-1,\ell+1}^{(n)} + c_{k-2,\ell+2}^{(n)} + \cdots+ c_{0,k+ \ell}^{(n)} ={\displaystyle \sum_{i=0,i+j=k+\ell}^k} a_i^{(n)}b_j. \end{equation} Then we claim: \begin{prop}\label{prop:a3} $V_n = [ v_{j,k}^{(n)}]_{1 \leq j,k \leq n}$, where $v_{j,k}^{(n)} = d_{n-j,k-1}^{(n)}$, is a solution. \end{prop} \begin{ex}\label{ex:a4} The following is the list of solutions $V_n, n=1,\dots,5$. \\ \arraycolsep=1.7pt \begin{center} $[1]$, $\mtx{3}{-5}{1}{-2}$, $ \left[\begin{array}{rrr} 5&-7&14\\ 5&-9&21\\ 1&-2&5 \end{array}\right]$, $ \left[\begin{array}{rrrr} 7& -9&18& -45\\ 14&-23& 51&-132\\ 7&-13& 31& -84\\ 1& -2& 5& -14 \end{array}\right]$, $ \left[\begin{array}{rrrrr} 9& -11& 22& -55& 154\\ 30& -46& 99& -253& 715\\ 27&-47&108&-286&825\\ 9&-17& 41& -112& 330\\ 1& -2& 5& -14& 42 \end{array}\right]$. \end{center} \end{ex} \arraycolsep\labelsep Now, to prove Proposition \ref{prop:a3}, we need several technical lemmas.
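Before turning to the lemmas, we remark that Proposition \ref{prop:a3} and the matrices of Example \ref{ex:a4} are easy to confirm by machine for small $n$. The Python sketch below builds $V_n$ from the data above and checks $X^2 = 4E_n + C_n$, reading $C_n$ as the companion matrix of the monic polynomial $\theta_n$ (subdiagonal $1$'s, last column $-c_k^{(n)}$), which is how $C_n$ reduces in Section 7; this reading of the notation is an assumption of the sketch:

```python
from math import comb
from fractions import Fraction

def theta_coeffs(n):
    # c_k^(n) = C(n+k, 2k) + 2*C(n+k, 2k+1), the coefficients of theta_n
    return [comb(n + k, 2 * k) + 2 * comb(n + k, 2 * k + 1) for k in range(n + 1)]

def V(n):
    c = theta_coeffs(n)
    a = [c[n - k] for k in range(n + 1)]   # a_k^(n) = c_{n-k}^(n)
    b = [Fraction((-1) ** k, k + 2) * comb(2 * k + 2, k + 1) for k in range(2 * n)]
    d = lambda k, l: sum(a[i] * b[k + l - i] for i in range(k + 1))  # d_{k,l}^(n)
    return [[int(d(n - j, k - 1)) for k in range(1, n + 1)] for j in range(1, n + 1)]

def companion(n):
    # Companion matrix of theta_n: subdiagonal 1's, last column -c_k^(n)
    c = theta_coeffs(n)
    return [[(1 if i == j + 1 else 0) - (c[i] if j == n - 1 else 0)
             for j in range(n)] for i in range(n)]

for n in range(1, 6):
    Vn, Cn = V(n), companion(n)
    V2 = [[sum(Vn[i][k] * Vn[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    assert V2 == [[Cn[i][j] + 4 * (i == j) for j in range(n)] for i in range(n)]

assert V(2) == [[3, -5], [1, -2]]  # agrees with the list of solutions above
```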
\begin{lemm}\label{lem:a5} For $n \geq 2$ and $0 \leq k \leq n$, the following recursion formula holds. \begin{equation} a_k^{(n)}=a_k^{(n-1)} + 2a_{k-1}^{(n-1)} -a_{k-2}^{(n-2)}. \end{equation} \end{lemm} For convenience, we define $a_0^{(0)} = 1$. Since a direct computation using (11.4) verifies (11.6) easily, we omit details. Next, for $n, m \geq 0$, we define a number $F(n, m)$ as follows. \begin{equation} F(n, m) = \sum_{j=0}^{n}a_{n-j}^{(n)} b_{m+j}. \end{equation} \begin{ex}\label{ex:a6} We have the following values for $F(n,m)$: \begin{align*} (1)\ &(i)\ F(0, 0) = a_0^{(0)} b_0= 1.\\ &(ii)\ F(0, m) = a_0^{(0)}b_m = b_m.\\ (2)\ &(i)\ F(1,0) = a_1^{(1)}b_0 + a_0^{(1)} b_1 = 3 - 2 = 1.\\ &(ii)\ F(1,1) = a_1^{(1)}b_1 +a_0^{(1)}b_2 = -6 + 5 = -1.\\ &(iii)\ F(1, m) = a_1^{(1)} b_m + a_0^{(1)}b_{m+1} = 3 b_m + b_{m+1}.\\ (3)\ &(i)\ F(2, 0) = a_2^{(2)}b_0 +a_1^{(2)}b_1 + a_0^{(2)}b_2 = 0.\\ &(ii)\ F(2, 1) = 1.\\ &(iii)\ F(2, 2) = - 3. \end{align*} \end{ex} \begin{lemm}\label{lem:a7} For $n \geq 2$ and $m \geq 0$, the following recursion formula holds. \begin{equation} F(n, m) = F(n-1, m+1) + 2F(n-1, m) - F(n-2, m). \end{equation} \end{lemm} \noindent {\it Proof.} Use (11.6) to show (11.8) as follows: \begin{align*} F(n, m)&=\sum_{j=0}^{n}a_{n-j}^{(n)}b_{m+j} =\sum_{j=0}^{n}[a_{n-j}^{(n-1)} + 2a_{n-1-j}^{(n-1)} -a_{n-2-j}^{(n-2)}]b_{m+j}\\ &=\sum_{j=0}^{n-1}a_{n-1-j}^{(n-1)}b_{m+1+j} + 2\sum_{j=0}^{n-1}a_{n-1-j}^{(n-1)} b_{m+j} - \sum_{j=0}^{n-2}a_{n-2-j}^{(n-2)} b_{m+j}\\ &=F(n-1, m+1) + 2F(n-1, m) - F(n-2, m).\ \ \ \ \fbox{} \end{align*} \begin{lemm}\label{lem:a8} The following formulas hold. \begin{align} &(1)\ {\it For}\ n \geq 1\ {\it and}\ 0 \leq k \leq n, \sum_{j=0}^{k}a_{k-j}^{(n)}b_{j} = a_k^{(n-1)}.\nonumber\\ &(2)\ {\it For}\ n \geq 2\ {\it and}\ 0 \leq m \leq n-2, F(n, m) =0.\nonumber\\ &(3)\ {\it For}\ n \geq 1, F(n, n- 1) = 1.\nonumber\\ &(4)\ {\it For}\ n \geq 1, F(n, n) = - (2n-1).
\end{align} \end{lemm} \noindent {\it Proof.} (1) Use induction on $n$. Since (1) holds for $n = 1$, we may assume that it holds for $n$. Further, if $k = 0$, (1) holds trivially, and hence it suffices to show (1) with $n+1$ and $k+1$ in place of $n$ and $k$. Then, by (11.6), \begin{align*} \sum_{j=0}^{k+1}a_{k+1-j}^{(n+1)}b_j &=\sum_{j=0}^{k+1}\{a_{k+1-j}^{(n)} + 2a_{k-j}^{(n)} -a_{k-1-j}^{(n-1)}\} b_j\\ &=\sum_{j=0}^{k+1}a_{k+1-j}^{(n)}b_j + 2\sum_{j=0}^{k}a_{k-j}^{(n)} b_j - \sum_{j=0}^{k-1}a_{k-1-j}^{(n-1)} b_j\\ &=a_{k+1}^{(n-1)} + 2a_k^{(n-1)} - a_{k-1}^{(n-2)}\\ & = a_{k+1}^{(n)}. \end{align*} \noindent {\it Proof of (2).} Since $F(n,m+1)=F(n+1,m)-2F(n,m)+F(n-1,m)$, it suffices to show that $F(n,0)=0$ if $n\ge 2$. Now \begin{align*} F(n, 0)&= \sum_{j=0}^{n}a_{n-j}^{(n)} b_{j} = \sum_{j=0}^{n}\frac{(2n+1)}{(2j+1)}\binom{n+j}{2j} \frac{(-1)^{j}}{(j+2)} \binom{2j+2}{j+1}\\ &= (2n+1) \sum_{j=0}^{n} (-1)^j \frac{(n+j)!}{(2j+1)!(n-j)!} \frac{(2j+2)!}{(j+2)!(j+1)!}\\ &=(2n+1) \sum_{j=0}^{n} (-1)^j \frac{(n+j)! (2j+2)}{(n-j)!(j+2)!(j+1)!}\\ &= (2n+1) \sum_{j=0}^{n} (-1)^j\frac{2 (n+j)!}{(n-j)!(j+2)! j!}. \end{align*} Therefore, to prove (2), it suffices to show \begin{equation} \sum_{j=0}^{n} (-1)^j\frac{(n+j)!}{(n-j)!(j+2)! j!} =0 \end{equation} or equivalently, by multiplying both sides through $n!/(n-2)!$, to show \begin{equation} \sum_{j=0}^{n} (-1)^j \binom{n}{j} \binom{n+j}{j+2} = 0. \end{equation} To show (11.11), we apply the following lemma \cite[Lemma 5.3]{HM2}. \begin{lemm}\label{lem:a9} For $N \geq M \geq 0$ and $N \geq K \geq 0$, \begin{align} &\binom{N}{K} \binom{M}{M} - \binom{N-1}{K-1} \binom{M}{M-1} +\binom{N-2}{K-2} \binom{M}{M-2} - \cdots \nonumber\\ &+(-1)^M \binom{N-M}{K-M} \binom{M}{0} \nonumber\\ &= \binom{N-M}{K}. \end{align} \end{lemm} Put $N = 2n$, $K = n+2$ and $M = n$ in (11.12).
Since $N - M = n < K$, we see \begin{equation*} \binom{2n}{n+2} \binom{n}{n}-\binom{2n-1}{n+1} \binom{n}{n-1}+ \cdots + (-1)^{n} \binom{n}{2} \binom{n}{0} =\binom{n}{n+2}=0, \end{equation*} and hence $\sum_{j=0}^{n} (-1)^j \binom{n+j}{j+2} \binom{n}{j}=0$. This proves (2). {\it Proof of (3)}. By (11.8), we see that for $n \geq 2$, \begin{equation*} F(n+1, n-2) = F(n, n-1) + 2F(n, n-2) - F(n-1, n-2). \end{equation*} Since $F(n+1, n-2) =F(n, n-2) =0$ by (11.9) (2), it follows that $F(n, n-1) = F(n-1, n-2)$, and hence $F(n, n-1)=F(1, 0)= 1$ by Example \ref{ex:a6} (2)(i). {\it Proof of (4)}. Use (11.8) for $n \geq 1$ to see \begin{equation*} F(n+1, n-1) = F(n, n) + 2F(n, n-1) - F(n-1, n-1). \end{equation*} Since $F(n+1, n-1)=0$ and $F(n, n-1)= 1$, it follows that \begin{equation*} F(n, n)=F(n-1, n-1) - 2 \end{equation*} and hence, \begin{equation*} F(n, n)=F(1,1)-2(n-1)= -1-2n+2=-(2n-1). \end{equation*} \fbox{} We define another number $H_k^{(n)}$ as follows. For any $n \geq 1$ and $k \geq 2$, we define \begin{equation} H_k^{(n)}=\sum_{j=0}^{k} a_j^{(n)} F(n-1,n+k-2-j) - \sum_{j=0}^{k-2} a_j^{(n-1)} F(n,n+k-3-j). \end{equation} For example, $H_2^{(5)}=a_0^{(5)} F(4,5) + a_1^{(5)} F(4,4) + a_2^{(5)} F(4,3) - a_0^{(4)} F(5,4)= 0$.\\ In particular, we should note: \begin{equation} {\rm For}\ k \geq 2, H_k^{(1)} = 0. \end{equation} \begin{align*} {\rm In\ fact},\ H_k^{(1)}&=a_0^{(1)} F(0, k-1) + a_1^{(1)} F(0, k-2) -a_0^{(0)} F(1, k-2)\\ &=b_{k-1} + a_1^{(1)} b_{k-2} - 3b_{k-2} -b_{k-1}\\ &= 0. \end{align*} The last formula we need is the following lemma. \begin{lemm}\label{lem:a10} For any $n \geq 1$ and $k \geq 2$, we have $H_k^{(n)} = 0.$ \mbox{{\rm (11.15)}} \end{lemm} \setcounter{equation}{15} \noindent {\it Proof.} We compute $H=H_k^{(n)} - H_k^{(n-1)}$.
By definition, for $n \geq 2$, \begin{align*} H &=- \sum_{j=0}^{k-2} a_j^{(n-1)} F(n, n+k-3-j) + a_0^{(n)}F(n-1, n+k-2)\\ &\ \ \ +a_1^{(n)} F(n-1, n+k-3) + \sum_{j=0}^{k-2}(a_{j+2}^{(n)} +a_j^{(n-2)})F(n-1, n+k-4-j) \\ &\ \ \ -\sum_{j=0}^{k} a_j^{(n-1)} F(n-2, n+k-3-j). \end{align*} Since $a_{j+2}^{(n)} + a_j^{(n-2)}=a_{j+2}^{(n-1)} + 2a_{j+1}^{(n-1)}$ and $a_1^{(n)}=a_1^{(n-1)}+2a_0^{(n-1)}$ by (11.6), we see \begin{align*} H &=- \sum_{j=0}^{k-2} a_j^{(n-1)} F(n, n+k-3-j) + a_0^{(n)}F(n-1, n+k-2) \\ &\ \ \ \ +(a_1^{(n-1)}+2a_0^{(n-1)})F(n-1, n+k-3)\\ &\ \ \ \ + \sum_{j=0}^{k-2}(a_{j+2}^{(n-1)} + 2a_{j+1}^{(n-1)} )F(n-1, n+k-4-j)\\ &\ \ \ \ -\sum_{j=0}^{k} a_j^{(n-1)} F(n-2, n+k-3-j). \end{align*} Note $a_0^{(n)} =a_0^{(n-1)}$ = 1 to see \begin{align*} H & =a_0^{(n-1)}\{-F(n, n+k-3) + F(n-1, n+k-2)\\ &\ \ \ \ \ \ \ \ +2F(n-1, n+k-3) - F(n-2, n+k-3)\}\\ &\ \ \ \ + a_1^{(n-1)}\{-F(n, n+k-4) + F(n-1, n+k-3) \\ &\ \ \ \ \ \ \ \ +2F(n-1, n+k-4) - F(n-2, n+k-4)\} + \cdots \\ &\ \ \ \ + a_{k-2}^{(n-1)}\{-F(n, n-1) + F(n-1, n) \\ &\ \ \ \ \ \ \ \ +2F(n-1, n-1) - F(n-2, n-1)\}\\ &\ \ \ \ + a_{k-1}^{(n-1)}\{F(n-1, n-1) +2F(n-1, n-2) - F(n-2, n-2)\}\\ &\ \ \ \ +a_k^{(n-1)} \{ F(n-1, n-2)-F(n-2, n-3)\}. \end{align*} By (11.8) and (11.9)(3),(4), we see easily that each term of the summation is equal to $0$. This proves $H = 0$. \fbox{} Now we are in position to prove Proposition \ref{prop:a3}. Let $\mathbf{u}_j=(v_{j,1}^{(n)}, v_{j,2}^{(n)}, \cdots, v_{j,n}^{(n)})$ and $\mathbf{w}_k=(v_{1,k}^{(n)}, v_{2,k}^{(n)}, \cdots, v_{n,k}^{(n)})^{T}$ be, respectively, the $j$-th row vector and the $k$-th column vector of $V_n$. 
Then we must show \begin{align} &(1)\ \mathbf{u}_{n-j} \cdot \mathbf{w}_k = 0\ {\it for}\ (i)\ 0 \leq j \leq n-3, 1 \leq k \leq n-j-2\ {\it and}\ \nonumber\\ &\ \ \ \ (ii)\ 2\leq j \leq n-1, n-j+1 \leq k \leq n-1.\nonumber\\ &(2)\ \mathbf{u}_{n-j}\cdot \mathbf{w}_{n-j-1}= 1\ {\it for}\ 0 \leq j \leq n-2.\nonumber\\ &(3)\ \mathbf{u}_{n-j}\cdot \mathbf{w}_{n-j}= 4,\ {\it for}\ 1 \leq j \leq n-1.\nonumber\\ &(4)\ \mathbf{u}_n\cdot \mathbf{w}_n = 4 - a_1^{(n)} = 4 - c_{n-1}^{(n)},\nonumber\\ &(5)\ \mathbf{u}_{n-j} \cdot \mathbf{w}_n = - a_{j+1}^{(n)} = - c_{n-j-1}^{(n)}\ {\it for}\ 1 \leq j \leq n-1. \end{align} Since (11.16) is obviously true for $n=1$, we assume hereafter that $n \geq2$. We introduce new vectors, $\mathbf{b}_j=(b_j, b_{j+1}, \cdots, b_{j+n-1})$ for $j \geq 0$ and $\mathbf{a}_k^{(n)}= (a_k^{(n)}, a_{k-1}^{(n)}, \cdots, a_0^{(n)}, 0 , \cdots,0)^T$ for $0 \leq k \leq n$. Then, from the definition of $v_{j,k}^{(n)}$, it is easy to see the following: \begin{align} &(1)\ {\rm For}\ 0 \leq j \leq n-1, \mathbf{u}_{n-j} = a_j^{(n)} \mathbf{b}_0 + a_{j-1}^{(n)} \mathbf{b}_1 + \cdots + a_0^{(n)} \mathbf{b}_j.\nonumber\\ &(2)\ {\rm For}\ 1 \leq k \leq n, \mathbf{w}_k = b_{k-1} \mathbf{a}_{n-1}^{(n)} + b_k \mathbf{a}_{n-2}^{(n)} + \cdots + b_{n+k-2} \mathbf{a}_0^{(n)}. \end{align} Since $\mathbf{u}_{n-j}\cdot \mathbf{w}_k= \sum_{i=0}^{j} a_{j-i}^{(n)} (\mathbf{b}_i \cdot \mathbf{w}_k)$, we first compute $\mathbf{b}_i \cdot \mathbf{w}_k$. 
In fact, a straightforward computation shows
\begin{align*}
\mathbf{b}_i\cdot \mathbf{w}_k &= b_{k-1}( a_{n-1}^{(n)} b_i + a_{n-2}^{(n)} b_{i+1} + \cdots + a_0^{(n)} b_{n+i-1})\\
&\ \ \ + b_k( a_{n-2}^{(n)} b_i + a_{n-3}^{(n)} b_{i+1} + \cdots + a_0^{(n)}b_{n+i-2}) + \cdots + b_{n+k-2}(a_0^{(n)} b_i)\\
&= b_{k-1}(F(n, i-1) - a_n^{(n)} b_{i-1})\\
&\ \ \ + b_k ( F(n, i-2) - a_{n-1}^{(n)} b_{i-1} - a_n^{(n)} b_{i-2}) + \cdots\\
&\ \ \ + b_{k+i-2} ( F(n, 0) - a_{n-i+1}^{(n)} b_{i-1}- \cdots - a_n^{(n)} b_0)\\
&\ \ \ + b_{k+i-1}(a_{n-1}^{(n-1)} - a_{n-i}^{(n)} b_{i-1}- \cdots - a_{n-1}^{(n)} b_0) + \cdots\\
&\ \ \ + b_{n+k-2}(a_i^{(n-1)} - a_1^{(n)} b_{i-1} - a_2^{(n)} b_{i-2} - \cdots - a_i^{(n)} b_0)\\
&\ \ \ + b_{n+k-1}(a_{i-1}^{(n-1)} - a_0^{(n)} b_{i-1}- a_1^{(n)} b_{i-2} - \cdots - a_{i-1}^{(n)} b_0) + \cdots \\
&\ \ \ + b_{n+k+i-2}(a_0^{(n-1)} - a_0^{(n)} b_0).
\end{align*}
Note that in the above summation, each of the last $i$ terms is $0$ by (11.9)(1). By rearranging this summation, we obtain
\begin{align*}
\mathbf{b}_i \cdot \mathbf{w}_k &= b_{k-1}F(n, i-1) + b_k F(n, i-2) + \cdots + b_{k+i-2}F(n, 0) + F(n-1, k+i-1)\\
&\ \ \ - b_{i-1}F(n, k-1) - b_{i-2}F(n, k) - \cdots - b_0 F(n,k+i-2).
\end{align*}
Since $0 \leq i \leq j \leq n-1$, we have $i-1-\ell \leq n-2$ for all $\ell \geq 0$, and hence $F(n, i-1-\ell)= 0$. Therefore \begin{equation} \mathbf{b}_i \cdot \mathbf{w}_k=F(n-1, k+i-1) - \sum_{\ell=0}^{i-1} b_{i-1-\ell} F(n, k-1+\ell). \end{equation} Case 1. $i= 0$. Then $\mathbf{b}_0 \cdot \mathbf{w}_k=F(n-1, k-1)$.
If $1\leq k \leq n-2$, then $F(n-1, k-1)= 0$, and hence $\mathbf{u}_n \cdot \mathbf{w}_k= a_0^{(n)}(\mathbf{b}_0 \cdot \mathbf{w}_k)= 0$. Further, \begin{align*} &\mathbf{u}_n \cdot \mathbf{w}_{n-1}=a_0^{(n)}F(n-1, n-2)= a_0^{(n)} = 1,\ {\rm and}\\ &\mathbf{u}_n \cdot \mathbf{w}_n=a_0^{(n)}F(n-1, n-1) = -(2n-3) =4-(2n+1) = 4 - a_1^{(n)}. \end{align*} This proves (11.16) for $j= 0$. Case 2. $i=1$. Then $\mathbf{b}_1 \cdot \mathbf{w}_k=F(n-1, k) - b_0 F(n,k-1)$. If $1 \leq k \leq n-3$, then $F(n-1, k) = F(n, k-1)= 0$ and $\mathbf{b}_1\cdot \mathbf{w}_k= 0$. Since $\mathbf{b}_0 \cdot \mathbf{w}_k= 0$, we have $\mathbf{u}_{n-1} \cdot \mathbf{w}_k= 0$ for $1 \leq k \leq n-3$. Further, \begin{align*} \mathbf{u}_{n-1} \cdot \mathbf{w}_{n-2}&= a_1^{(n)}(\mathbf{b}_0 \cdot \mathbf{w}_{n-2}) + a_0^{(n)} (\mathbf{b}_1 \cdot \mathbf{w}_{n-2})\\ &=a_1^{(n)}F(n-1, n-3) + a_0^{(n)} \{F(n-1, n-2) - b_0 F(n, n-3)\}\\ &=a_0^{(n)} F(n-1, n-2) = 1. \end{align*} Also, \begin{align*} \mathbf{u}_{n-1} \cdot \mathbf{w}_{n-1} &= a_1^{(n)} F(n-1, n-2) + a_0^{(n)} \{F(n-1, n-1)- b_0 F(n, n-2)\}\\ &= a_1^{(n)} + a_0^{(n)}(-(2n-3))= 2n+1 -(2n-3) = 4. \end{align*} Finally, \begin{align*} \mathbf{u}_{n-1} \cdot \mathbf{w}_n&=a_1^{(n)}F(n-1, n-1) + a_0^{(n)} \{F(n-1, n) - b_0 F(n, n-1)\}\\ &=H_2^{(n)} - a_2^{(n)}=- a_2^{(n)},\ {\rm by\ (11.15)}. \end{align*} This proves (11.16) for $j=1$. Now we assume that $2 \leq j \leq n-1$ and compute $\mathbf{u}_{n-j} \cdot \mathbf{w}_k$, $1 \leq k \leq n$. Then \begin{align*} \mathbf{u}_{n-j} \cdot \mathbf{w}_k &=\sum_{i=0}^{j} a_{j-i}^{(n)} ( \mathbf{b}_i \cdot \mathbf{w}_k)\\ &= a_j^{(n)} (\mathbf{b}_0 \cdot \mathbf{w}_k) + \sum_{i=1}^{j} a_{j-i}^{(n)} (\mathbf{b}_i \cdot \mathbf{w}_k)\\ &=a_j^{(n)}F(n-1, k-1) + \sum_{i=1}^{j}a_{j-i}^{(n)} \{F(n-1, k+i-1) \\ &\ \ \ \ -\sum_{\ell=0}^{i-1} b_{i-1-\ell}F(n, k-1+ \ell)\}\\ &= \sum_{i=0}^{j} a_{j-i}^{(n)} F(n-1, k+i-1) - \sum_{i=1}^{j}a_{j-i}^{(n)} \sum_{\ell=0}^{i-1} b_{i-1-\ell} F(n, k-1+\ell).
\end{align*} Therefore, the coefficient of $F(n, k-1+q)$, $0 \leq q \leq j-1$, is equal to \begin{align*} \sum_{i=1}^{j} a_{j-i}^{(n)} b_{i-1-q} &=\sum_{i=q+1}^{j}a_{j-i}^{(n)} b_{i-1-q}\\ &=a_{j-q-1}^{(n-1)}\ {\rm by}\ (11.9)(1),\ {\rm and\ hence} \end{align*} \begin{equation} \mathbf{u}_{n-j} \cdot \mathbf{w}_k=\sum_{i=0}^{j} a_{j-i}^{(n)} F(n-1, k+i-1) - \sum_{q=0}^{j-1}a_{j-q-1}^{(n-1)}F(n, k-1+q). \end{equation} If $1 \leq k \leq n-j-2$, then $k+i-1 \leq k+j-1 \leq n-3$ and also $k-1+q \leq k+j-2 \leq n-4$, and hence $\mathbf{u}_{n-j} \cdot \mathbf{w}_k = 0$. If $k = n-j-1$, then $\mathbf{u}_{n-j} \cdot \mathbf{w}_{n-j-1}= a_0^{(n)}F(n-1, n-2)=a_0^{(n)}= 1$. Further, $\mathbf{u}_{n-j} \cdot \mathbf{w}_{n-j}=a_0^{(n)} F(n-1, n-1) + a_1^{(n)} F(n-1, n-2)=-(2n-3) + 2n+1= 4$. Now suppose $n-j+1 \leq k \leq n-1$. Then by (11.9), \begin{align*} \mathbf{u}_{n-j} \cdot \mathbf{w}_k &=\sum_{i=0}^{j}a_{j-i}^{(n)} F(n-1,k+i-1) - \sum_{q=0}^{j-1}a_{j-q-1}^{(n-1)} F(n,k-1+q)\\ &= \sum_{i=n-k-1}^{j}a_{j-i}^{(n)} F(n-1, k+i-1) - \sum_{q=n-k}^{j-1}a_{j-q-1}^{(n-1)} F(n,k-1+q). \end{align*} This is exactly $H_{k-(n-j-1)}^{(n)}$, and hence $\mathbf{u}_{n-j} \cdot \mathbf{w}_k= 0$ for $n-j+1 \leq k \leq n-1$. Finally, a similar computation shows that \begin{align*} \mathbf{u}_{n-j} \cdot \mathbf{w}_n&=\sum_{i=0}^{j} a_{j-i}^{(n)} F(n-1, n+i-1) - \sum_{q=0}^{j-1}a_{j-1-q}^{(n-1)} F(n,n+q-1)\\ &=H_{j+1}^{(n)} - a_{j+1}^{(n)} F(n-1,n-2)\\ &=- a_{j+1}^{(n)}\\ &= -c_{n-j-1}^{(n)}. \end{align*} This proves (11.16), and the proof of Proposition \ref{prop:a3} is now complete. \fbox{} \noindent {\bf Acknowledgements. } The first author is partially supported by MEXT, Grant-in-Aid for Young Scientists (B) 18740035, and the second author is partially supported by NSERC Grant~A~4034. The authors thank Daniel Silver and Joe Repka for their invaluable comments. \end{document}
\begin{document} \title[]{Equilibration on average in quantum processes with finite temporal resolution} \author{Pedro Figueroa--Romero} \email[]{[email protected]} \author{Kavan Modi} \author{Felix A. Pollock} \affiliation{School of Physics \& Astronomy, Monash University, Victoria 3800, Australia} \date{\today} \begin{abstract} We characterize the conditions under which a multi-time quantum process with a finite temporal resolution can be approximately described by an equilibrium one. By providing a generalization of the notion of equilibration on average, where a system remains close to a fixed equilibrium for most times, to one which can be operationally assessed at multiple times, we place an upper bound on a new observable distinguishability measure comparing a multi-time process with a finite temporal resolution against a fixed equilibrium one. While the same conditions as for single-time equilibration, such as a large occupation of energy levels in the initial state, remain necessary, we obtain genuine multi-time contributions depending on the temporal resolution of the process and the amount of disturbance of the observer's operations on it. \end{abstract} \maketitle A fundamental question at the core of statistical mechanics is that of how equilibrium arises from purely quantum mechanical laws in closed systems. This phenomenon is generically known as \emph{equilibration} or \emph{thermalization}, where in the latter case the system relaxes to a thermal state. The dissipative nature of equilibration, however, is at odds with the unitary nature of quantum mechanics.
There are three main approaches to resolving this conundrum: \emph{typicality}~\cite{Popescu2006,GogolinPureQStat, gemmer2009, Goldstein_2010, Gogolin, Garnerone_2013}, which argues that small subsystems of a composite are in thermal equilibrium for almost all pure states of the whole; \emph{dynamical equilibration on average}~\cite{Tasaki_1998, Reimann_2008, PhysRevE.79.061103, ShortSystemsAndSub, ShortFinite, XXEisert}, which demonstrates that time-dependent quantities of quantum systems evolve towards fixed values and stay close to them for most times, even if they eventually deviate greatly from them; and the \emph{eigenstate thermalization hypothesis}~\cite{Deutsch1991,Srednicki, Srednicki_1999,PhysRevE.87.012118,Rigol2008, Turner2018}, which argues that, for an isolated system, the long-time expectation values of a `physical observable' are indistinguishable from those of a thermal one. What these approaches have in common is that they look at the statistical properties of the state of the system at long times; however, finding a system in or close to an equilibrium state does not necessarily imply that all observable properties of the system have equilibrated. In particular, when measurements are coarse (i.e.\ only a subpart is measured), temporal correlations due to a sequence of observations may contain signatures indicating whether the system is in equilibrium or not. Recent studies of out-of-time-order correlation functions use multi-time statistics to distinguish between thermalised systems and coherent complex systems~\cite{Kitaev}. However, it may be that these multi-time statistics also equilibrate in general; that is, they are most often found close to some average value.\textsuperscript{\footnote{The out-of-time-order correlations require propagating the system back and forth in time.
Here we only go forward in time.}} In this manuscript, we focus on the case of finite temporal resolution for the dynamical equilibration of quantum processes where multiple operations are applied in sequence. We present sufficient conditions for general multi-time observations to relax close to their equilibrium values when the corresponding operations are implemented with an imperfect, or \emph{fuzzy}, clock (or, equivalently, on a system with uniformly fluctuating energies). In particular, we place an upper bound on how distinguishable the statistics of such observations are from those made in equilibrium. \begin{figure} \caption{We characterize the attainability of quantum process equilibration, i.e., under what conditions a $k$-step process $\overline{\Upsilon}$ with finite temporal resolution can be approximated by an equilibrium one.} \label{Fig: processes} \end{figure} We first briefly recapitulate the landmark results of Refs.~\cite{ShortFinite,Reimann_2008}, in which the attainability of observable equilibration on average for a single observation is characterized mainly by two factors: the energy eigenstates of the system having a large overlap with the initial state, and the \emph{scale} set by the operator norm of the observable being measured. Building on the works of Refs.~\cite{ShortFinite,Reimann_2008}, we first ask, for the case of a single measurement, how different an equilibrium process is from an out-of-equilibrium one when the available clock is fuzzy. We then generalise to considering observations over multiple points in time; here, temporal correlations become relevant. While the original conditions for equilibration to occur still hold, we obtain additional conditions related to the temporal resolution of the observations and how much these disturb the intermediate states. Our results hold for Hamiltonian dynamics of the total system, while the measurements are allowed to be general quantum operations that are coarse and may only act on a sub-part of the whole system.
\section{Equilibration on average}\label{sec: equilibration on avg} In the past decade, the program of equilibration has focused on upper bounding fluctuations of observable expectation values around equilibrium, from which conclusions about equilibration of the state of the system itself have been drawn~\cite{ShortSystemsAndSub, Gogolin}. The basic mechanism behind equilibration is that of dephasing~\cite{Reimann_2008, Oliveira_2018}, and equilibration will occur as long as the initial state, following a perturbation, has an overlap with many energy eigenstates of the Hamiltonian driving the dynamics. The only further assumption is that there are not too many degenerate energy gaps~\cite{PhysRevA.98.022135}, ensuring that the majority of the system plays a dynamical role~\cite{PhysRevE.79.061103}. Specifically, observable equilibration in the sense of Ref.~\cite{ShortFinite} considers time-independent Hamiltonian dynamics given by a unitary operator $U=\exp\{-i H t\}$, with results depending on energetic properties of the Hamiltonian $H$, such as the number of distinct energy levels $\mathfrak{D}\leq{d}$ and the maximum number of energy gaps $N(\epsilon)$ in an energy window of width $\epsilon>0$. While the choice of an equilibrium state is arbitrary, an intuitive candidate is the infinite-time-averaged subsystem state \begin{gather} \omega := \lim_{T\to\infty}\overline{\rho}^T \qquad \mbox{with} \qquad \overline{X}^T :=\frac{1}{T}\int_0^T X(t)\,dt, \end{gather} where $\rho$ stands for the initial state of the whole system, and the bar denotes a time average of the evolved state over a finite time window of width $T$. Notably, this corresponds to a dephasing of the initial state with respect to $H$, i.e. $\omega=\mc{D}(\rho)$ where we define \begin{gather} \mc{D}(\cdot):=\sum_nP_n(\cdot)P_n, \end{gather} with $P_n$ a projector onto the $n$th eigenspace of $H=\sum_n E_n P_n$.
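To make the action of $\mc{D}$ concrete, consider a single qubit with non-degenerate Hamiltonian $H=E_0P_0+E_1P_1$ and $P_n=|n\rangle\langle{n}|$: in the energy eigenbasis, dephasing removes the coherences while leaving the populations untouched,
\begin{gather*}
\mc{D}\begin{pmatrix} \rho_{00} & \rho_{01}\\ \rho_{10} & \rho_{11}\end{pmatrix}=\begin{pmatrix} \rho_{00} & 0\\ 0 & \rho_{11}\end{pmatrix},
\end{gather*}
so that the equilibrium state $\omega=\mc{D}(\rho)$ retains only the energy populations of the initial state.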
In fact, we can similarly describe general finite-time-averaged evolution by the map \begin{gather} \begin{split} &\mc{G}_T(\cdot):=\sum_{n,m}G_{nm}^{(T)}P_n(\cdot)P_m \quad \mbox{with} \\ &G_{nm}^{(T)}:=\overline{\exp[it(E_m-E_n)]}^T. \end{split}\label{eq: G maps} \end{gather} Whenever it is clear that the averaging window is $T$, we will simply denote these by $\mc{G}$ and $G_{nm}$. In this minimal setting, the authors of Ref.~\cite{ShortFinite} prove an important result on observable equilibration by showing the closeness of the evolved state $\rho(t)$ to $\omega$. Specifically, they upper-bound the temporal fluctuations of the expectation value of a general operator $A$ around equilibrium within a finite-time window as \begin{gather} \overline{|\tr[A(\rho(t) - \omega)]|^2}^T \leq \frac{\|A\|^2 N(\epsilon)f(\epsilon{T})}{d_\text{eff}(\rho)}, \end{gather} where $f(\epsilon{T}) = 1+8\log_2 (\mathfrak{D})/\epsilon{T}$ and $\|\cdot\|$ denotes the largest singular value; crucially, the so-called effective dimension of the initial state, defined through its inverse (or inverse participation ratio) $d_\text{eff}^{-1}(\rho) := \sum_n[\tr(P_n\rho)]^2$, quantifies the number of energy levels contributing significantly to the dynamics of the initial state $\rho$. In general, we have the hierarchy $1\leq{d}_\text{eff}(\rho)\leq\mathfrak{D}\leq{d}$, and this result implies that equilibration is attained for large $d_\text{eff}$. It has been argued, on physical grounds, that the effective dimension takes a large value in realistic situations~\cite{Gogolin,Reimann_2008}, increasing exponentially in the number of constituents of generic many-body systems~\cite{XXEisert}, and it has been proven that it takes a large value for local Hamiltonian systems whenever correlations in the initial state decay rapidly~\cite{PhysRevLett.118.140601}.
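The extremes of this hierarchy are instructive: for a non-degenerate Hamiltonian and a uniform superposition over all energy levels, $|\psi\rangle=\mathfrak{D}^{-1/2}\sum_{n=1}^{\mathfrak{D}}|E_n\rangle$, one has
\begin{gather*}
d_\text{eff}^{-1}(\psi)=\sum_{n=1}^{\mathfrak{D}}\frac{1}{\mathfrak{D}^2}=\frac{1}{\mathfrak{D}},
\end{gather*}
so that $d_\text{eff}=\mathfrak{D}$ is maximal, whereas a single energy eigenstate has $d_\text{eff}=1$; the latter is stationary and thus trivially coincides with its equilibrium state at all times.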
The temporal fluctuations of the expectation values of $A$ around equilibrium constitute a meaningful quantifier of equilibration: a small variance relates to the expectation value of $A$ concentrating around its mean~\textsuperscript{\footnote{Strictly, the full statistics should then display equilibration.}}. This behaviour of long-time fluctuations around equilibrium has been studied both analytically and numerically in various physical models~\cite{PhysRevLett.109.247205,PhysRevE.87.012106, PhysRevE.88.032913, PhysRevE.89.022101, PhysRevB.101.174312}, as well as for the more restrictive case of thermalization~\cite{Rigol2008,Mori_2018,Gluza_2019,PhysRevLett.123.200604}. Similarly, related questions such as an absence of equilibration~\cite{PhysRevB.76.052203, Nandkishore, Hess, PhysRevLett.120.080603}, or the robustness of equilibration and further relaxation after a perturbation, have been investigated~\cite{Robinson1973,PhysRevLett.118.130601,PhysRevLett.118.140601, PhysRevA.98.022135, PhysRevE.98.062103}. Here, we take a related approach towards investigating the behaviour of quantum processes when the interrogations are fuzzy in time. Doing so focuses on an operationally meaningful scenario, in which we show that observations with a finite temporal resolution make it hard to tell an out-of-equilibrium process from one that is in equilibrium. \section{Equilibration due to finite temporal resolution} Motivated by the above result, we consider the operationally relevant implications of limited resolution in time.
Firstly, we focus on the dynamics of a $d_S$-dimensional subpart $\mathsf{S}$ of a $d_Ed_S$-dimensional system $\mathsf{SE}$, and we refer to subsystem equilibration as the relaxation of $\mathsf{S}$ towards some steady state, while the whole $\mathsf{SE}$ evolves unitarily with a general time-independent Hamiltonian $H$; our results can then naturally reduce to closed-system equilibration if coarse operations on the whole $\mathsf{SE}$ are considered. We then ask how different an evolving quantum state appears from equilibrium when measured at a time that can vary in each realisation, being randomly drawn from some distribution that quantifies the \emph{fuzziness} associated with finite temporal resolution. Specifically, by a finite-temporal-resolution observation we mean an observable $A$ (either on subsystem $\mathsf{S}$ or acting coarsely on $\mathsf{SE}$) measured after a time $t>0$ sampled from a probability distribution with density function $\mathscr{P}_{T}$, i.e.\ one such that $\int_0^\infty\,dt\,\mathscr{P}_T(t)=1$. Here the parameter $T$ represents the \emph{uncertainty} or \emph{fuzziness} of the distribution; for example, it could be associated with the variance of the distribution. With this definition, we may generalize the time-average over a time-window $T$ by \begin{gather} \overline{X}^{\mathscr{P}_T} := \int_{0}^{\infty}dt\,\mathscr{P}_T(t)\,X(t). \end{gather} In particular, we require that the distributions $\mathscr{P}_T$ are such that the finite-time averaging map gives the dephasing map in the infinite-time limit, $\lim_{T\to\infty}\mc{G}=\mc{D}$, or equivalently, such that $\lim_{T\to\infty}G_{nm}=\delta_{nm}$; this also renders the equilibrium state $\omega=\lim_{T\to\infty}\overline{\rho(t)}^T$ independent of the particular choice of distribution. The average distinguishability by means of an observable $A$ between the equilibrium and non-equilibrium cases can be quantified as $|\tr[A(\overline{\rho}^{\mathscr{P}_T}-\omega)]|$.
This can be upper-bounded by \begin{gather} \left|\left<A\right>_{\overline{\rho}^{\mathscr{P}_T}-\omega}\right| \leq \mathscr{S}\, \|A\|\|\rho-\omega\|_2, \label{eq: main standard case} \end{gather} where $\left<X\right>_\sigma := \tr[X\sigma]$ and $\mathscr{S}:=\max_{n\neq{m}}|G_{nm}|$. Here $\|\sigma\|_2=\sqrt{\tr(\sigma\sigma^\dg)}$ and $\|\rho-\omega\|_2^2\leq1-(d_Ed_S)^{-1}$ is the difference in purity of the full state $\rho$ with respect to that of the equilibrium $\omega$. \noindent\textit{Proof.} Given that $\left|\left<A\right>_{\overline{\rho}^{\mathscr{P}_T}-\omega}\right|=|\tr[A(\mc{G}-\mc{D})(\rho)]|$ and $\tr[X\sigma]\leq\|X\|\|\sigma\|_2$, Eq.~\eqref{eq: main standard case} follows because \begin{align} \left\| \left(\mc{G} - \mc{D}\right)(\rho)\right\|_2^2 =&\tr\left|\sum_{n \neq m} {G}_{nm} P_n\rho P_m\right|^2\nonumber\\ =& \sum_{\substack{n \neq m \\ n^\prime \neq m^\prime}} G_{nm}G_{m^\prime{n}^\prime}\tr\left[ P_n\rho P_mP_{m^\prime}\rho P_{n^\prime}\right]\nonumber\\ =&\sum_{n \neq m } |G_{nm}|^2\tr\left[ P_n \rho P_m\rho\right]\nonumber\\ \leq& \max_{n\neq{m}}|G_{nm}|^2\sum_{n \neq m}\tr[P_n\rho P_m \rho]\nonumber\\ =&\max_{n\neq{m}}|G_{nm}|^2 \ \tr(\rho^2-\omega^2)\nonumber\\ =&\|\rho-\omega\|_2^2 \ \max_{n\neq{m}}|G_{nm}|^2, \label{eq:singlestep} \end{align} where we used $\tr(\rho^2-\omega^2) = \|\rho-\omega\|_2^2$, as $\tr(\rho\,\omega) = \tr(\omega^2)$. In general $\|\omega\|_2^2\leq{d}_\text{eff}^{-1}(\rho)$, with equality either for pure $\rho$ or when the Hamiltonian is non-degenerate; both quantities relate to how spread the initial state $\rho$ is in the energy eigenbasis. In particular, when the fuzziness $T$ corresponds to that of the uniform distribution over an interval $[0,T]$, the probability density function is $\mathscr{P}_T=T^{-1}$, as in the results of Ref.~\cite{ShortFinite} outlined above, and we get $|G_{nm}|=|\sin(T\mc{E}_{nm})/T\mc{E}_{nm}|$ with $\mc{E}_{nm} := (E_n-E_m)/2$.
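The quoted form of $|G_{nm}|$ for the uniform distribution follows directly from Eq.~\eqref{eq: G maps}: writing $E_m-E_n=-2\mc{E}_{nm}$,
\begin{gather*}
G_{nm}^{(T)}=\frac{1}{T}\int_0^T e^{-2it\mc{E}_{nm}}\,dt=\frac{e^{-2iT\mc{E}_{nm}}-1}{-2iT\mc{E}_{nm}},
\end{gather*}
and, since $|e^{i\theta}-1|=2|\sin(\theta/2)|$, taking the modulus gives $|G_{nm}|=|\sin(T\mc{E}_{nm})/T\mc{E}_{nm}|$.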
The bound in Eq.~\eqref{eq: main standard case} then tells us that the evolved state $\rho(t)$, when measured at a given time with temporal resolution $T$, will differ from the equilibrium $\omega$ at most in proportion to $|T\mc{E}_{nm}|^{-1}$ for the smallest energy gap $\mc{E}_{nm}$, with a scale set by the size of the observable $A$ and how different the initial state $\rho$ is from the equilibrium $\omega$. One might, however, not stop at a single observation but continue gathering data to assess how close the system remains to equilibrium with respect to a set of possible operations, $\{\mc{A}_i\}$, as we suggestively depict in Fig.~\ref{Fig: processes}. The reason for fuzziness in the initial time is that we do not know when the process actually began. However, one question we can ask is whether, by making a sequence of measurements, we are able to overcome the fuzziness of the initial interval. These operations can correspond to any possible experimental intervention, which can be correlated with any other interventions previously made, through an ancillary system. In this case, the information propagated between time-steps through the environment, and the disturbance introduced by the operations, might become relevant. However, the subsequent measurements will also suffer from some level of fuzziness, and this must be accounted for. We now precisely establish the description for multi-time quantum processes in such generality, followed by a generalisation of Eq.~\eqref{eq: main standard case} through an upper bound to the distinguishability between a finite-time resolution process and an equilibrium one. \section{Multi-time quantum processes} Consider an initial state $\rho$ of the joint $\mathsf{SE}$ system unitarily evolving through a time-independent Hamiltonian dynamics until, at time $t_0$, an operation $\mc{A}_0$ is made on $\mathsf{S}$ along with an ancilla $\mathsf{\Gamma}$, which is initially uncorrelated in state $\gamma$.
We denote the full initial state by $\varrho:=\rho\otimes\gamma$. After the first operation, the environment and system evolve unitarily again for a time $t_1$ until another operation $\mc{A}_1$ is made on $\mathsf{S\Gamma}$, and so on for $k$ time-steps. The joint expectation value of the series of operations is given by \begin{gather} \langle{\mc{A}_k,\ldots,\mc{A}_0}\rangle := \tr[\mc{A}_k\,\mc{U}_k \cdots \mc{A}_0\,\mc{U}_0\,(\varrho)], \label{eq: exp val} \end{gather} where $\mc{U}_\ell(\cdot)=\ex^{-iH_\ell{t}_\ell}(\cdot)\,\ex^{iH_\ell{t}_\ell}$ acts on $\mathsf{SE}$, while by an operation we explicitly mean $\mathcal{A}_\ell(\cdot) := \sum_\mu a_{\ell_\mu} K_{\ell_\mu}(\cdot)K_{\ell_\mu}^\dg$, with $\sum_\mu K_{\ell_\mu}^\dg\,K_{\ell_\mu}\leq \mbb{1}$, which acts solely on $\mathsf{S\Gamma}$; here $K_{\ell_\mu}$ are Kraus operators, potentially corresponding to measurement outcomes, and $a_{\ell_\mu}$ are the corresponding outcome weights. The Hamiltonians $H_\ell$ are in general different at each step. The ancillary space $\mathsf{\Gamma}$ can be interpreted as a quantum memory device, and might carry information about previous interactions with the system. The information about the intrinsic dynamical process, i.e., the initial $\mathsf{SE}$ state $\rho$ and the joint unitary evolutions $\mc{U}_i$ with their respective timescales at each step, can be encoded in a positive semi-definite tensor $\Upsilon$, and similarly, the sequence of operations $\{\mc{A}_i\}$ can be encoded in a tensor of the form $\Lambda$, as depicted in Figure~\ref{Fig: processes} and detailed in Appendix~\ref{appendix: process tensor}. This simplifies the joint expectation value in Eq.~\eqref{eq: exp val} as the inner product \begin{equation} \langle\Lambda\rangle_\Upsilon := \tr[\Lambda\Upsilon] = \tr[\mc{A}_k\,\mc{U}_k \cdots \mc{A}_0\,\mc{U}_0\,(\varrho)] \end{equation} which can be seen as a generalisation of the Born rule to multi-time step quantum processes~\cite{Costa}. 
Here, $\Upsilon$ becomes an unnormalized many-body density operator, and $\Lambda$ an observable. Temporal correlations, or memory, in operations are carried through the space $\mathsf{\Gamma}$; any $\Lambda$ can be represented as a sequence of uncorrelated operations on a joint $\mathsf{S\Gamma}$ system. Both classically correlated operations, where the measurement basis is conditioned on past outcomes, and coherent quantum correlated measurements can be represented in this way~\cite{PhysRevA.99.042108}. The case of infinite memory and the case of completely uncorrelated operations are then extreme limits of this general setting. Formally, $\Upsilon$ is the Choi state~\cite{watrous_2018} of a quantum process, containing all its accessible dynamical information~\cite{ Markovorder1, PhysRevA.99.042108}, and is the quantum generalisation of a stochastic process~\cite{Quolmogorov,processtensor, processtensor2, OperationalQDynamics}. We finally notice that when a process ends at the first intervention $\mc{A}_0$ we have $\Upsilon(t_0) =\rho_S(t_0)$, i.e.\ it becomes the corresponding quantum state, so that all of the previous (single-measurement) results apply in that case. \begin{figure} \caption{We consider equilibration for quantum processes with a fuzzy clock, by which we mean that each Hamiltonian evolution is time-averaged ($\mc{G}$ in Eq.~\eqref{eq: G maps}) with respect to a waiting-time distribution.} \label{fig: fuzziness} \end{figure} \section{Equilibration of multi-time observables} Consider a quantum process as above with a fixed single Hamiltonian for all time intervals, such that $H_i=H=\sum_nE_nP_n$. In Appendix~\ref{Appendix: bound} we present the general case where we do not make this assumption. This multi-time process consists of free evolution sandwiched by generalised measurements. We denote by $t_j$ the time interval of the free evolution that is preceded by the $j$-th measurement and followed by the $(j+1)$-th one. In other words, $t_j$ is the waiting time between the $j$-th and $(j+1)$-th measurements.
As before, each of these waiting times is allowed to be fuzzy, taking a value $t_i>0$ sampled from a probability distribution $\mathscr{P}_{T_i}(t_i)$, i.e.\ $\int_0^\infty dt_i \mathscr{P}_{T_i}(t_i)=1$. We denote the average waiting time by \begin{gather} \tau_i := \int_{0}^{\infty} dt_i \ t_i \ \mathscr{P}_{T_i}(t_i). \end{gather} We pictorially represent this in Fig.~\ref{fig: fuzziness}. We denote the multi-time probability distribution by $\mathscr{P}_\mathbf{T}(\mathbf{t})=\prod_{i=0}^k\mathscr{P}_{T_i}(t_i)$, where \begin{gather} \mathbf{t} := (t_0, t_1, \dots, t_k) \quad \mbox{and} \quad \mathbf{T} = (T_0,T_1,\ldots,T_k) \end{gather} are the waiting times and fuzziness parameters for each time interval, respectively; in what follows, we take $T_i=T_j = T$~$\forall i,j$ for simplicity (see Appendix~\ref{Appendix: bound} for the general case). The corresponding finite-temporal-resolution process is then given by \begin{align} \overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}:= \int_0^\infty dt_k \cdots \int_0^\infty dt_1 \int_0^\infty dt_0 \,\mathscr{P}_\mathbf{T}(\mathbf{t})\,\Upsilon. \end{align} We are interested in quantifying how different this out-of-equilibrium process, where time intervals are fuzzy, looks from an equilibrium process. To define the equilibrium process we follow the lead of earlier results, i.e., the initial state relaxes to the equilibrium state $\varpi := \mc{D}(\varrho) = \omega\otimes\gamma$ until an operation $\mc{A}_0$ is made, and subsequently the system relaxes again to an equilibrium state $\varpi_1:=\mc{D}\mc{A}_0(\varpi)$ until an operation $\mc{A}_1$ is made, and so on for $k$ time-steps. This then leads to the definition \begin{equation} \varpi_i:=\mc{D}\mc{A}_{i-1}\cdots\mc{A}_0\mc{D}(\varrho),\quad\text{for}\quad i=0,\cdots,k, \end{equation} as the equilibrium states after each intervention up to $\mc{A}_{i-1}$.
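Note that, since $P_nP_m=\delta_{nm}P_n$, the dephasing map is idempotent,
\begin{gather*}
\mc{D}^2(\cdot)=\sum_{n,m}P_mP_n(\cdot)P_nP_m=\sum_nP_n(\cdot)P_n=\mc{D}(\cdot),
\end{gather*}
so the case $i=0$ consistently gives $\varpi_0=\mc{D}(\varrho)=\varpi$, while the first non-trivial equilibrium state reads $\varpi_1=\mc{D}\mc{A}_0\mc{D}(\varrho)=\mc{D}\mc{A}_0(\varpi)$.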
This is a sensible definition for the intermediate equilibrium states, which, however, is dependent on each operation $\mc{A}_j$. We can now define the equilibrium quantum process as \begin{gather} \Omega:=\lim_{\mathbf{T}\to\infty} \overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}. \label{eq: def Omega} \end{gather} This is depicted in Fig.~\ref{Fig: processes} as a set of dephasing maps $\mc{D}$ at each timestep. This then means that we can write $\langle\Lambda\rangle_\Omega=\tr[\Lambda\Omega]=\tr[\mc{A}_k\varpi_k]$ for the expectation of a sequence of operations $\{\mc{A}_i\}$ on the equilibrium process $\Omega$, where equivalently, $\varpi_i=\lim_{T_i\to\infty}\overline{\mc{A}_{i-1}(\varpi_{i-1})}^{T_i}$. Since we can also express each finite averaging in the energy eigenbasis using the partial dephasing map $\mc{G}$, defined in Eq.~\eqref{eq: G maps}, we can similarly write $\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}}=\tr[\mc{A}_k\varrho_k]$, where we now define \begin{gather} \varrho_i := \mc{G}\mc{A}_{i-1} \cdots \mc{A}_0 \mc{G}(\varrho),\quad\text{for}\quad i=0,\cdots,k \end{gather} as intermediate, finite-time-averaged states after each intervention up to $\mc{A}_{i-1}$. Since, by definition, $\lim_{T\to\infty}\mc{G}=\mc{D}$, the infinite-time limits $\mathbf{T}\to\infty$ make $\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}$ indistinguishable from $\Omega$. We also depict this in Fig.~\ref{Fig: processes}. We may then generalize the left-hand side of Eq.~\eqref{eq: main standard case} to general quantum processes with $|\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}|$, asking how different the statistics of a set of operations $\{\mc{A}_i\}$ can be on a fuzzy-clock process, $\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}$, as opposed to those in the equilibrium one $\Omega$.
We provide one such answer with the following theorem. \begin{theorem}\label{Thm: main multitime equilibration} Given an environment-system-ancilla $\mathsf{(SE\Gamma)}$ with initial state $\varrho=\rho\otimes\gamma$ and initial equilibrium state $\varpi=\omega\otimes\gamma$, for any $k$-step process $\Upsilon$ with an evolution generated by a time-independent Hamiltonian on $\mathsf{SE}$, and for any fuzzy multi-time observable $\Lambda$ corresponding to a sequence of temporally local operations $\{\mc{A}_i\}_{i=0}^k$ each with fuzziness $T$ acting on a joint $\mathsf{S\Gamma}$ system, \begin{align} & \left|\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}\right|\leq \mbb{A}_k + \sum_{\ell=0}^{k-1}\|\mc{A}_{k:\ell+1}\|\left(\mbb{B}_\ell + \mbb{C}_\ell\right) , \label{eq: result main}\\ & \mbox{with} \qquad \mbb{A}_k:=\mathscr{S}^{k+1}\|\mc{A}_{k:0}\|\,\|\varrho-\varpi\|_2, \label{eq: bound term A} \end{align} where, as before, $\mathscr{S}:=\max_{n\neq{m}}|G_{nm}|$ and $\mc{A}_{j:i}:=\mc{A}_j\cdots\mc{A}_i$ is a composition of operations; the norm $\|\cdot\|$ here stands for the norm on superoperators induced by the Frobenius norm, $\|\mc{X}\|=\sup_{\|\sigma\|_2=1}\|\mc{X}(\sigma)\|_2$. The first term is a single-time equilibration contribution, whereas the second contains $k$ multi-time contributions, where \begin{align} \mbb{B}_\ell &:= \|[\mc{G}^{k-\ell}-\mc{D},\mc{A}_\ell]\varrho_\ell\|_2, \label{eq: bound t1} \\ \mbb{C}_\ell &:= \|[\mc{D},\mc{A}_\ell](\varrho_\ell-\varpi_\ell)\|_2, \label{eq: bound t2} \end{align} with $\varrho_i=\mc{G}\mc{A}_{i-1}\cdots\mc{A}_0\mc{G}(\varrho)$ and $\varpi_i=\mc{D}\mc{A}_{i-1}\cdots\mc{A}_0\mc{D}(\varrho)$ the intermediate finite-time-averaged and equilibrium states at step $i$, and where $[\cdot,\cdot]$ denotes a commutator of superoperators. \end{theorem} The proof is given in full in Appendix~\ref{Appendix: bound}.
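As a consistency check, for $k=0$ the sum in Eq.~\eqref{eq: result main} is empty and the bound reduces to
\begin{gather*}
\left|\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}\right|\leq\mbb{A}_0=\mathscr{S}\,\|\mc{A}_0\|\,\|\varrho-\varpi\|_2,
\end{gather*}
which has the same form as the single-measurement bound of Eq.~\eqref{eq: main standard case}, with the superoperator norm of $\mc{A}_0$ playing the role of $\|A\|$.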
In general, by definition the term $\mathscr{S}$, which depends on the waiting-time distribution $\mathscr{P}_T$, converges to zero with increasing $T$, at a rate depending on the specific distribution. In particular, for the uniform distribution on all time-steps, as in the approach of~\cite{ShortFinite}, we average over a time-window of width $T$ around each $\tau_i$, with $\mathscr{P}_T=T^{-1}$ in the interval $[\tau_i-T/2, \tau_i+T/2]$ and $\mathscr{P}_T=0$ outside it. This yields $|G_{mn}|=|\sin(T\mc{E}_{mn})/(T\mc{E}_{mn})|$; the term $\mathscr{S}$ then picks out the smallest non-zero energy gap in the Hamiltonian. Similarly, if the fuzziness corresponds to that of a half-normal distribution with variance $T$, then $\mathscr{S}$ decays exponentially, with \begin{gather} |G_{mn}|\sim\exp(-T\mc{E}_{mn}^2)|1-\mathrm{erf}(i\sqrt{T}\mc{E}_{mn})|, \end{gather} where $\mathrm{erf}$ is the error function and $E_m-E_n=2\mc{E}_{mn}$. In both cases, even if $T$ is small, $\mathscr{S}$ will be vanishingly small whenever the energy gap $\mc{E}_{nm}$ that maximizes $|G_{nm}|$ is large compared with the inverse width, i.e.\ $T\mc{E}_{nm}\gg1$. This property holds in general, both because distributions $\mathscr{P}_T$ can be approximated as uniform for small $T$ and because the gaps $\mc{E}_{nm}$ act as a rescaling factor on $T$ in the definition of $G_{nm}$. The term $\mbb{A}_k$ in Eq.~\eqref{eq: bound term A} neglects temporal correlations: the operations $\{\mc{A}_i\}$ are all composed into a single operation $\mc{A}_{k:0}=\mc{A}_k\cdots\mc{A}_0$. The two-norm distance satisfies $\|\varrho-\varpi\|_2^2\leq 1-(d_Ed_S)^{-1}$, since the ancillary input $\gamma$ can be taken to be pure.
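The decay of $\mathscr{S}$ with the window width can be illustrated numerically for the uniform distribution. A toy sketch (the spectrum below is hypothetical and used purely for illustration):

```python
import numpy as np

def S_uniform(energies, T):
    """S = max_{n != m} |G_nm| for uniform averaging over a window of width T,
    with G_nm = sin(T*E_nm)/(T*E_nm) and E_m - E_n = 2*E_nm."""
    E = np.asarray(energies, dtype=float)
    half_gaps = (E[:, None] - E[None, :]) / 2.0
    G = np.sinc(T * half_gaps / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)
    np.fill_diagonal(G, 0.0)            # exclude the n == m terms
    return np.abs(G).max()

E = [0.0, 0.5, 1.3, 2.1]                # toy spectrum; smallest half-gap is 0.25
vals = [S_uniform(E, T) for T in (1.0, 10.0, 100.0, 1000.0)]

# S is essentially 1 for a short window and is suppressed for long windows,
# with the envelope 1/(T * min half-gap) set by the smallest nonzero gap.
assert vals[0] > 0.9
assert vals[-1] <= 1.0 / (1000.0 * 0.25)
```

The final assertion is the rigorous envelope $|\sin(x)/x|\leq 1/x$ evaluated at the smallest half-gap, matching the statement that $\mathscr{S}$ picks out the smallest non-zero gap.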
As discussed above, this term is suppressed through the $\mathscr{S}$ contributions when $i)$ the averaging window, or equivalently the fuzziness of the clock $T$, is large enough, or $ii)$ when $T$ is small but the energy gap maximizing the time-averaging factor $|G_{nm}|$ is large with respect to $T^{-1}$. The second term in Eq.~\eqref{eq: result main} contains genuine multi-time contributions to the bound for equilibration, which we bound further in Appendix~\ref{Appendix: bound}. These terms relate to how well the intermediate states, at step $\ell$, equilibrate. Crucially, we show that the term in Eq.~\eqref{eq: bound t1} can be upper-bounded as $\mbb{B}_\ell\lesssim\mathscr{S}^{k-\ell}$, so that it too is suppressed in the width of the time-window $T$. For the term in Eq.~\eqref{eq: bound t2}, notice that, expanding and using the triangle inequality, \begin{gather} \begin{split} \mbb{C}_\ell \leq& \|\mc{D}(\varrho_{\ell+1})\|_2 + \|\varpi_{\ell+1}\|_2 \\ &+ \|\mc{A}_\ell\| (\|\mc{D}(\varrho_\ell)\|_2 + \|\varpi_{\ell}\|_2), \end{split} \end{gather} where each term is the square root of the purity of a dephased state, and this purity decays as the inverse of the effective dimension of that state.\footnote{In general, $\tr[(\mc{D}(\sigma))^2]\leq{d}_\text{eff}^{-1}(\sigma)$ for any state $\sigma$, with equality for either pure states or non-degenerate Hamiltonians.} On the other hand, when the control operations from $0$ to $\ell$ succeed in driving $\varrho_\ell$ so that the commutator does not appreciably dephase it, the purity of $\varrho_\ell$ may remain large and $\mbb{C}_\ell$ may become trivial (i.e.\ approach 1). More concretely, the terms in Eq.~\eqref{eq: bound t1} and Eq.~\eqref{eq: bound t2} quantify how greatly the operations $\mc{A}_j$, interleaved within the intermediate states $\varrho_\ell$ and $\varpi_\ell$, disturb either the finite-time-averaged $\varrho_{j-1}$ or the equilibrated $\varpi_{j-1}$.
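The footnote's purity bound, $\tr[(\mc{D}(\sigma))^2]\leq d_\text{eff}^{-1}(\sigma)$, can be checked directly. The sketch below assumes the standard definition $d_\text{eff}^{-1}(\sigma)=\sum_n p_n^2$ with $p_n=\tr[P_n\sigma]$ the populations of the energy eigenspaces, and uses a hypothetical two-level, two-fold-degenerate toy spectrum:

```python
import numpy as np

# Toy Hamiltonian on C^4 with two two-fold degenerate levels:
# eigenspaces span indices {0,1} and {2,3}.
blocks = [[0, 1], [2, 3]]

def dephase_blocks(rho):
    """D(rho) = sum_n P_n rho P_n over the energy eigenspace projectors P_n."""
    out = np.zeros_like(rho)
    for b in blocks:
        out[np.ix_(b, b)] = rho[np.ix_(b, b)]
    return out

# Random state sigma (illustrative only).
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
sigma = A @ A.conj().T
sigma /= np.trace(sigma).real

purity = np.trace(dephase_blocks(sigma) @ dephase_blocks(sigma)).real
p = np.array([np.trace(sigma[np.ix_(b, b)]).real for b in blocks])

# tr[(D(sigma))^2] <= sum_n p_n^2 = 1/d_eff(sigma): each dephased block is
# positive with trace p_n, so its squared Frobenius norm is at most p_n^2.
assert purity <= np.sum(p ** 2) + 1e-12
```

For a nondegenerate Hamiltonian every block is one-dimensional and the inequality is saturated, which is the equality case quoted in the footnote.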
This is most evident in the terms $\mbb{C}_\ell$, which can be bounded as $\mbb{C}_\ell\leq \|[\mc{D}, \mc{A}_\ell]\| \|\varrho_\ell - \varpi_\ell\|_2$. The norm of the commutator can be written in terms of both the capacity of the operations $\mc{A}_\ell$ to generate coherences between different energy eigenspaces from equilibrium and the degree to which the operations can turn such coherences into populations. Environments in physical systems are typically much larger than the subsystems that can be probed, and, keeping in mind that the operations $\mc{A}_j$ act only on subsystem $\mathsf{S}$ and the ancilla $\mathsf{\Gamma}$, the ability to generate and detect energy coherences should be severely limited in many physically relevant cases. \section{Conclusions} We have introduced an extended notion of equilibration that pertains to observations made across multiple times with a finite temporal resolution. As in the standard case of equilibration for observables at a single time, we have derived bounds on the degree to which it holds, depending on the Hamiltonian driving the evolution. We have shown that either subsystems or global coarse properties of a closed time-independent Hamiltonian system will display equilibration for multiple sequential operations with a temporal uncertainty or fuzziness provided $i)$ both the initial and intermediate states have a significant overlap with the energy eigenstates, $ii)$ the temporal fuzziness is large enough relative to the average measurement time or, equivalently, the energy gaps in the Hamiltonian are large with respect to the inverse of the temporal fuzziness, and $iii)$ the disturbance of the intermediate states by the operations is small.
Our approach to the operations that can act on the process is general in the sense that these are completely positive maps which can be correlated between time-steps and propagate information from their interactions with the subsystem $\mathsf{S}$ through the ancillary space $\mathsf{\Gamma}$. While these set a scale for all terms on the right-hand side of Eq.~\eqref{eq: result main}, they can also loosen it, potentially allowing one to distinguish the fuzzy process from the equilibrium one within a finite time. It is not entirely clear, however, whether a departure from equilibration is more readily accessible with a larger ancillary space $\mathsf{\Gamma}$; for long temporal fuzziness $T$, the upper bound in Eq.~\eqref{eq: result main} should remain close to zero. As in the standard single-time case, equilibration over multiple observations is intuitively expected through decoherence arguments~\cite{Yukalov_2012}. The interplay with memory effects, through both the environment and the ancillary space in the interventions, is as yet not entirely clear, e.g., under which circumstances finite-temporal-resolution equilibration can occur without the dynamics being Markovian, i.e., memoryless, or whether the temporal correlations among interventions through the ancillary space can reveal a departure from equilibration within a finite time. We have previously shown rigorously that most processes in large-dimensional environments are close to Markovian, and hence strongly equilibrate, in the strong-coupling limit in a full typicality sense~\cite{AlmostMarkov}, as well as for complex systems obeying a large deviation bound~\cite{Markovianization}, but outside this regime the relationship between the two properties is less transparent.\\ \begin{acknowledgments} We are grateful to Daniel Burgarth and Lucas C\'{e}leri for valuable discussions. PFR is supported by the Monash Graduate Scholarship (MGS) and the Monash International Postgraduate Research Scholarship (MIPRS).
KM is supported through Australian Research Council Future Fellowship FT160100073. \end{acknowledgments} \begin{thebibliography}{51} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Popescu}\ \emph {et~al.}(2006)\citenamefont {Popescu}, \citenamefont {Short},\ and\ \citenamefont {Winter}}]{Popescu2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}}, \ and\ 
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Winter}},\ }\href {\doibase 10.1038/nphys444} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {754} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gogolin}(2010)}]{GogolinPureQStat} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}},\ }\emph {\bibinfo {title} {Pure State Quantum Statistical Mechanics}},\ \href {https://arxiv.org/abs/1003.5058} {Ph.D. thesis},\ \bibinfo {school} {Julius-Maximilians-Universit\"{a}t W\"{u}rzburg, Theoretische Physik III} (\bibinfo {year} {2010})\BibitemShut {NoStop} \bibitem [{\citenamefont {Gemmer}\ \emph {et~al.}(2009)\citenamefont {Gemmer}, \citenamefont {Michel},\ and\ \citenamefont {Mahler}}]{gemmer2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gemmer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Michel}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Mahler}},\ }\href {https://books.google.com.au/books?id=Ua5tCQAAQBAJ} {\emph {\bibinfo {title} {Quantum Thermodynamics: Emergence of Thermodynamic Behavior Within Composite Quantum Systems}}},\ Lecture Notes in Physics\ (\bibinfo {publisher} {Springer Berlin Heidelberg},\ \bibinfo {year} {2009})\BibitemShut {NoStop} \bibitem [{\citenamefont {Goldstein}\ \emph {et~al.}(2010)\citenamefont {Goldstein}, \citenamefont {Lebowitz}, \citenamefont {Mastrodonato}, \citenamefont {Tumulka},\ and\ \citenamefont {Zangh\'{i}}}]{Goldstein_2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Goldstein}}, \bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont {Lebowitz}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Mastrodonato}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Tumulka}}, \ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Zangh\'{i}}},\ }\href {\doibase 10.1098/rspa.2009.0635} {\bibfield
{journal} {\bibinfo {journal} {P. Roy. Soc. A-Math. Phy.}\ }\textbf {\bibinfo {volume} {466}},\ \bibinfo {pages} {3203} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gogolin}\ and\ \citenamefont {Eisert}(2016)}]{Gogolin} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\href {\doibase 10.1088/0034-4885/79/5/056001} {\bibfield {journal} {\bibinfo {journal} {Rep. Prog. Phys.}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {056001} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Garnerone}(2013)}]{Garnerone_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Garnerone}},\ }\href {\doibase 10.1103/PhysRevB.88.165140} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {165140} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tasaki}(1998)}]{Tasaki_1998} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Tasaki}},\ }\href {\doibase 10.1103/PhysRevLett.80.1373} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages} {1373} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reimann}(2008)}]{Reimann_2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Reimann}},\ }\href {\doibase 10.1103/PhysRevLett.101.190403} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {190403} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Linden}\ \emph {et~al.}(2009)\citenamefont {Linden}, \citenamefont {Popescu}, \citenamefont {Short},\ and\ \citenamefont {Winter}}]{PhysRevE.79.061103} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Linden}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Winter}},\ }\href {\doibase 10.1103/PhysRevE.79.061103} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {061103} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Short}(2011)}]{ShortSystemsAndSub} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}},\ }\href {\doibase 10.1088/1367-2630/13/5/053009} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {053009} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Short}\ and\ \citenamefont {Farrelly}(2012)}]{ShortFinite} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}}\ and\ \bibinfo {author} {\bibfnamefont {T.~C.}\ \bibnamefont {Farrelly}},\ }\href {\doibase 10.1088/1367-2630/14/1/013063} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {013063} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wilming}\ \emph {et~al.}(2018)\citenamefont {Wilming}, \citenamefont {de~Oliveira}, \citenamefont {Short},\ and\ \citenamefont {Eisert}}]{XXEisert} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wilming}}, \bibinfo {author} {\bibfnamefont {T.~R.}\ \bibnamefont {de~Oliveira}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\enquote {\bibinfo {title} {Equilibration times in closed quantum many-body systems},}\ in\ \href {\doibase 10.1007/978-3-319-99046-0_18} {\emph {\bibinfo {booktitle} {Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {F.}~\bibnamefont {Binder}}, \bibinfo {editor} {\bibfnamefont {L.~A.}\ \bibnamefont {Correa}}, \bibinfo {editor} {\bibfnamefont {C.}~\bibnamefont {Gogolin}}, \bibinfo {editor} {\bibfnamefont {J.}~\bibnamefont {Anders}}, \ and\ \bibinfo {editor} {\bibfnamefont {G.}~\bibnamefont {Adesso}}}\ (\bibinfo {publisher} {Springer International Publishing},\ \bibinfo {address} {Cham},\ \bibinfo {year} {2018})\ pp.\ \bibinfo {pages} {435--455}\BibitemShut {NoStop} \bibitem [{\citenamefont {Deutsch}(1991)}]{Deutsch1991} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Deutsch}},\ }\href {\doibase 10.1103/PhysRevA.43.2046} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {2046} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Srednicki}(1994)}]{Srednicki} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Srednicki}},\ }\href {\doibase 10.1103/PhysRevE.50.888} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {50}},\ \bibinfo {pages} {888} (\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Srednicki}(1999)}]{Srednicki_1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Srednicki}},\ }\href {\doibase 10.1088/0305-4470/32/7/007} {\bibfield {journal} {\bibinfo {journal} {J. Phys. A-Math. Gen.}\ }\textbf {\bibinfo {volume} {32}},\ \bibinfo {pages} {1163} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Steinigeweg}\ \emph {et~al.}(2013)\citenamefont {Steinigeweg}, \citenamefont {Herbrych},\ and\ \citenamefont {Prelov\ifmmode~\check{s}\else \v{s}\fi{}ek}}]{PhysRevE.87.012118} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Steinigeweg}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Herbrych}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Prelov\ifmmode~\check{s}\else \v{s}\fi{}ek}},\ }\href {\doibase 10.1103/PhysRevE.87.012118} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {012118} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rigol}\ \emph {et~al.}(2008)\citenamefont {Rigol}, \citenamefont {Dunjko},\ and\ \citenamefont {Olshanii}}]{Rigol2008} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Rigol}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Dunjko}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Olshanii}},\ }\href {https://doi.org/10.1038/nature06838} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {452}},\ \bibinfo {pages} {854 EP } (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Turner}\ \emph {et~al.}(2018)\citenamefont {Turner}, \citenamefont {Michailidis}, \citenamefont {Abanin}, \citenamefont {Serbyn},\ and\ \citenamefont {Papi\'{c}}}]{Turner2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~J.}\ \bibnamefont {Turner}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Michailidis}}, \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Abanin}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Serbyn}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Papi\'{c}}},\ }\href {\doibase 10.1038/s41567-018-0137-5} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Phys.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {745} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Alexei Kitaev}}(2014)}]{Kitaev} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibnamefont {{Alexei Kitaev}}},\ }\href@noop {} {\enquote {\bibinfo {title} {2015 Breakthrough Prize Fundamental Physics Symposium},}\ }\bibinfo {howpublished} {\textsc{url:}~\url{https://breakthroughprize.org/Laureates/1/L3}} (\bibinfo {year} {2014}),\ \bibinfo {note} {Stanford University}\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1} \BibitemOpen \bibinfo {note} {The out-of-time-order correlations require propagating the system back and forth in time. Here we only go forward in time.}\BibitemShut {Stop} \bibitem [{\citenamefont {de~Oliveira}\ \emph {et~al.}(2018)\citenamefont {de~Oliveira}, \citenamefont {Charalambous}, \citenamefont {Jonathan}, \citenamefont {Lewenstein},\ and\ \citenamefont {Riera}}]{Oliveira_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~R.}\ \bibnamefont {de~Oliveira}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Charalambous}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Jonathan}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lewenstein}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Riera}},\ }\href {\doibase 10.1088/1367-2630/aab03b} {\bibfield {journal} {\bibinfo {journal} {New J.
Phys.}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {033032} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gallego}\ \emph {et~al.}(2018)\citenamefont {Gallego}, \citenamefont {Wilming}, \citenamefont {Eisert},\ and\ \citenamefont {Gogolin}}]{PhysRevA.98.022135} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Gallego}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wilming}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}},\ }\href {\doibase 10.1103/PhysRevA.98.022135} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {022135} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farrelly}\ \emph {et~al.}(2017)\citenamefont {Farrelly}, \citenamefont {Brand\~ao},\ and\ \citenamefont {Cramer}}]{PhysRevLett.118.140601} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Farrelly}}, \bibinfo {author} {\bibfnamefont {F.~G. S.~L.}\ \bibnamefont {Brand\~ao}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cramer}},\ }\href {\doibase 10.1103/PhysRevLett.118.140601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {118}},\ \bibinfo {pages} {140601} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{Note2()}]{Note2} \BibitemOpen \bibinfo {note} {Strictly, the full statistics should then display equilibration.}\BibitemShut {Stop} \bibitem [{\citenamefont {Ziraldo}\ \emph {et~al.}(2012)\citenamefont {Ziraldo}, \citenamefont {Silva},\ and\ \citenamefont {Santoro}}]{PhysRevLett.109.247205} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ziraldo}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Silva}}, \ and\ \bibinfo {author} {\bibfnamefont {G.~E.}\ \bibnamefont {Santoro}},\ }\href {\doibase 10.1103/PhysRevLett.109.247205} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo {pages} {247205} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Venuti}\ and\ \citenamefont {Zanardi}(2013)}]{PhysRevE.87.012106} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~C.}\ \bibnamefont {Venuti}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zanardi}},\ }\href {\doibase 10.1103/PhysRevE.87.012106} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {012106} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zangara}\ \emph {et~al.}(2013)\citenamefont {Zangara}, \citenamefont {Dente}, \citenamefont {Torres-Herrera}, \citenamefont {Pastawski}, \citenamefont {Iucci},\ and\ \citenamefont {Santos}}]{PhysRevE.88.032913} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~R.}\ \bibnamefont {Zangara}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {Dente}}, \bibinfo {author} {\bibfnamefont {E.~J.}\ \bibnamefont {Torres-Herrera}}, \bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont {Pastawski}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Iucci}}, \ and\ \bibinfo {author} {\bibfnamefont {L.~F.}\ \bibnamefont {Santos}},\ }\href {\doibase 10.1103/PhysRevE.88.032913} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {032913} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Campos~Venuti}\ and\ \citenamefont {Zanardi}(2014)}]{PhysRevE.89.022101} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Campos~Venuti}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zanardi}},\ }\href {\doibase 10.1103/PhysRevE.89.022101} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {022101} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schiulaz}\ \emph {et~al.}(2020)\citenamefont {Schiulaz}, \citenamefont {Torres-Herrera}, \citenamefont {P\'erez-Bernal},\ and\ \citenamefont {Santos}}]{PhysRevB.101.174312} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schiulaz}}, \bibinfo {author} {\bibfnamefont {E.~J.}\ \bibnamefont {Torres-Herrera}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {P\'erez-Bernal}}, \ and\ \bibinfo {author} {\bibfnamefont {L.~F.}\ \bibnamefont {Santos}},\ }\href {\doibase 10.1103/PhysRevB.101.174312} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {174312} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mori}\ \emph {et~al.}(2018)\citenamefont {Mori}, \citenamefont {Ikeda}, \citenamefont {Kaminishi},\ and\ \citenamefont {Ueda}}]{Mori_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mori}}, \bibinfo {author} {\bibfnamefont {T.~N.}\ \bibnamefont {Ikeda}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kaminishi}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ueda}},\ }\href {\doibase 10.1088/1361-6455/aabcdf} {\bibfield {journal} {\bibinfo {journal} {J. Phys. B-A. Mol. 
Opt.}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {112001} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gluza}\ \emph {et~al.}(2019)\citenamefont {Gluza}, \citenamefont {Eisert},\ and\ \citenamefont {Farrelly}}]{Gluza_2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gluza}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Farrelly}},\ }\href {\doibase 10.21468/SciPostPhys.7.3.038} {\bibfield {journal} {\bibinfo {journal} {SciPost Phys.}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {38} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wilming}\ \emph {et~al.}(2019)\citenamefont {Wilming}, \citenamefont {Goihl}, \citenamefont {Roth},\ and\ \citenamefont {Eisert}}]{PhysRevLett.123.200604} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wilming}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Goihl}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Roth}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\href {\doibase 10.1103/PhysRevLett.123.200604} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {123}},\ \bibinfo {pages} {200604} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Basko}\ \emph {et~al.}(2007)\citenamefont {Basko}, \citenamefont {Aleiner},\ and\ \citenamefont {Altshuler}}]{PhysRevB.76.052203} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Basko}}, \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Aleiner}}, \ and\ \bibinfo {author} {\bibfnamefont {B.~L.}\ \bibnamefont {Altshuler}},\ }\href {\doibase 10.1103/PhysRevB.76.052203} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {052203} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nandkishore}\ and\ \citenamefont {Huse}(2015)}]{Nandkishore} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Nandkishore}}\ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Huse}},\ }\href {\doibase 10.1146/annurev-conmatphys-031214-014726} {\bibfield {journal} {\bibinfo {journal} {Annu. Rev. Conden. Ma. P.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {15} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hess}\ \emph {et~al.}(2017)\citenamefont {Hess}, \citenamefont {Becker}, \citenamefont {Kaplan}, \citenamefont {Kyprianidis}, \citenamefont {Lee}, \citenamefont {Neyenhuis}, \citenamefont {Pagano}, \citenamefont {Richerme}, \citenamefont {Senko}, \citenamefont {Smith}, \citenamefont {Tan}, \citenamefont {Zhang},\ and\ \citenamefont {Monroe}}]{Hess} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont {Hess}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Becker}}, \bibinfo {author} {\bibfnamefont {H.~B.}\ \bibnamefont {Kaplan}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kyprianidis}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Neyenhuis}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Pagano}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Richerme}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Senko}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Smith}}, \bibinfo {author} {\bibfnamefont {W.~L.}\ \bibnamefont {Tan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}},\ }\href {\doibase 10.1098/rsta.2017.0107} {\bibfield {journal} {\bibinfo {journal} {Philos. T. R. Soc. 
A.}\ }\textbf {\bibinfo {volume} {375}},\ \bibinfo {pages} {20170107} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hamazaki}\ and\ \citenamefont {Ueda}(2018)}]{PhysRevLett.120.080603} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Hamazaki}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ueda}},\ }\href {\doibase 10.1103/PhysRevLett.120.080603} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {080603} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Robinson}(1973)}]{Robinson1973} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont {Robinson}},\ }\href {\doibase 10.1007/BF01646264} {\bibfield {journal} {\bibinfo {journal} {Commun. Math. Phys.}\ }\textbf {\bibinfo {volume} {31}},\ \bibinfo {pages} {171} (\bibinfo {year} {1973})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kiendl}\ and\ \citenamefont {Marquardt}(2017)}]{PhysRevLett.118.130601} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kiendl}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Marquardt}},\ }\href {\doibase 10.1103/PhysRevLett.118.130601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {118}},\ \bibinfo {pages} {130601} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Knipschild}\ and\ \citenamefont {Gemmer}(2018)}]{PhysRevE.98.062103} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Knipschild}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gemmer}},\ }\href {\doibase 10.1103/PhysRevE.98.062103} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {062103} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Costa}\ and\ \citenamefont {Shrapnel}(2016)}]{Costa} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Costa}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Shrapnel}},\ }\href {\doibase 10.1088/1367-2630/18/6/063032} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {063032} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Taranto}\ \emph {et~al.}(2019{\natexlab{a}})\citenamefont {Taranto}, \citenamefont {Milz}, \citenamefont {Pollock},\ and\ \citenamefont {Modi}}]{PhysRevA.99.042108} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Taranto}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Milz}}, \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}},\ }\href {\doibase 10.1103/PhysRevA.99.042108} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {042108} (\bibinfo {year} {2019}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Watrous}(2018)}]{watrous_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Watrous}},\ }\href {\doibase 10.1017/9781316848142} {\emph {\bibinfo {title} {The Theory of Quantum Information}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2018})\BibitemShut {NoStop} \bibitem [{\citenamefont {Taranto}\ \emph {et~al.}(2019{\natexlab{b}})\citenamefont {Taranto}, \citenamefont {Pollock}, \citenamefont {Milz}, \citenamefont {Tomamichel},\ and\ \citenamefont {Modi}}]{Markovorder1} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Taranto}}, \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Milz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Tomamichel}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}},\ }\href {\doibase 10.1103/PhysRevLett.122.140401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {122}},\ \bibinfo {pages} {140401} (\bibinfo {year} {2019}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Milz}\ \emph {et~al.}(2020)\citenamefont {Milz}, \citenamefont {Sakuldee}, \citenamefont {Pollock},\ and\ \citenamefont {Modi}}]{Quolmogorov} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Milz}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Sakuldee}}, \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}},\ }\href {\doibase 10.22331/q-2020-04-20-255} {\bibfield {journal} {\bibinfo {journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {255} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pollock}\ \emph {et~al.}(2018{\natexlab{a}})\citenamefont {Pollock}, \citenamefont {Rodr\'{\i}guez-Rosario}, \citenamefont {Frauenheim}, \citenamefont {Paternostro},\ and\ \citenamefont {Modi}}]{processtensor} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Rodr\'{\i}guez-Rosario}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Frauenheim}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Paternostro}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}},\ }\href {\doibase 10.1103/PhysRevA.97.012127} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {012127} (\bibinfo {year} {2018}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pollock}\ \emph {et~al.}(2018{\natexlab{b}})\citenamefont {Pollock}, \citenamefont {Rodr\'{\i}guez-Rosario}, \citenamefont {Frauenheim}, \citenamefont {Paternostro},\ and\ \citenamefont {Modi}}]{processtensor2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Rodr\'{\i}guez-Rosario}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Frauenheim}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Paternostro}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}},\ }\href {\doibase 10.1103/PhysRevLett.120.040405} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {040405} (\bibinfo {year} {2018}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Milz}\ \emph {et~al.}(2017)\citenamefont {Milz}, \citenamefont {Pollock},\ and\ \citenamefont {Modi}}]{OperationalQDynamics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Milz}}, \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}},\ }\href {\doibase 10.1142/S1230161217400169} {\bibfield {journal} {\bibinfo {journal} {Open Syst. Inf. 
Dyn.}\ }\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages} {1740016} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{Note3()}]{Note3} \BibitemOpen \bibinfo {note} {In general, $\protect \tr [(\protect \mathcal {D}(\sigma ))^2]\leq {d}_\protect \text {eff}^{-1}(\sigma )$ for any state $\sigma $, with equality for either pure states or non-degenerate Hamiltonians.}\BibitemShut {Stop} \bibitem [{\citenamefont {Yukalov}(2012)}]{Yukalov_2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Yukalov}},\ }\href {\doibase 10.1016/j.aop.2011.09.009} {\bibfield {journal} {\bibinfo {journal} {Ann. Phys.-New York}\ }\textbf {\bibinfo {volume} {327}},\ \bibinfo {pages} {253 } (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Figueroa-Romero}\ \emph {et~al.}(2019)\citenamefont {Figueroa-Romero}, \citenamefont {Modi},\ and\ \citenamefont {Pollock}}]{AlmostMarkov} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Figueroa-Romero}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}}, \ and\ \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}},\ }\href {\doibase 10.22331/q-2019-04-30-136} {\bibfield {journal} {\bibinfo {journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {136} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Figueroa-Romero}\ \emph {et~al.}(2020)\citenamefont {Figueroa-Romero}, \citenamefont {Pollock},\ and\ \citenamefont {Modi}}]{Markovianization} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Figueroa-Romero}}, \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Pollock}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Modi}},\ }\href@noop {} {} (\bibinfo {year} {2020}),\ \Eprint {http://arxiv.org/abs/2004.07620} {arXiv:2004.07620 [quant-ph]} \BibitemShut {NoStop} \end{thebibliography} \onecolumngrid \appendix 
\renewcommand{\thesubsection}{\Roman{subsection}} \section{The process tensor}\label{appendix: process tensor} The process tensor is defined as a linear, completely positive (CP) and trace non-increasing map $\mc{T}$ from a set of CP maps $\{\mc{A}_i\}$, referred to as control operations (e.g.\ measurements), to a quantum state. Its action describes a multi-time open-system evolution: e.g., for joint unitary evolution of an environment $\mathsf{E}$ plus system $\mathsf{S}$, with $\dim(\mc{H}_E\otimes\mc{H}_S)=d_Ed_S$, a $k$-step process is determined by \begin{gather} \mc{T}_{k:0}[\{\mc{A}_i\}_{i=0}^{k-1}]=\tr_E[\,\mc{U}_k\mc{A}_{k-1}\cdots\mc{A}_0\,\mc{U}_0(\rho)], \end{gather} where $\rho$ is an initial joint $\mathsf{SE}$ state, the $\mc{U}_i$ are unitary maps acting on $\mathsf{SE}$, and the maps $\mc{A}_i$ act solely on subsystem $\mathsf{S}$. We employ weighted operations, i.e.\ such that $\mc{A}(\cdot)=\sum_\mu{a}_\mu\,K_\mu(\cdot)K_\mu^\dg$, where the $K_\mu$ are the Kraus operators of $\mc{A}$ satisfying $\sum_\mu{K}_\mu^\dg{K}_\mu\leq\mbb1$ and the $a_\mu\in\mathbb{R}$ are the outcome weights for $\mc{A}$.
The associated Choi state of a time-evolved process tensor with initial state $\rho$ is then given by \begin{gather} \Upsilon_{k:0}=\tr_E[\,\mc{U}_{k:0}(\rho\otimes\psi^{\otimes{k}})\,\mc{U}_{k:0}^\dg], \label{PT Choi state Def} \end{gather} where $\psi=\sum_{i,j}|ii\rangle\!\langle{jj}|$ is maximally entangled and unnormalized, and where \begin{gather} \mc{U}_{k:0}:=(U_k\otimes\mbb1)\mc{S}_k\cdots(U_1\otimes\mbb1)\mc{S}_1(U_0\otimes\mbb1),\end{gather} with all identity operators $\mbb1$ on the total ancillary system and with the $U_i$ being $\mathsf{SE}$ unitary operators at step $i$, and \begin{gather}\mc{S}_i:=\sum_{\alpha,\beta}\mf{S}_{\alpha\beta}\otimes\mbb1_{_{A_1B_1\cdots{A}_{i-1}B_{i-1}}}\otimes|\beta\rangle\!\langle\alpha|\otimes\mbb1_{_{B_iA_{i+1}B_{i+1}\cdots{A}_kB_k}},\end{gather} with $\mf{S}_{\alpha\beta}=\mbb1_E\otimes|\alpha\rangle\!\langle\beta|$. This can be visualised as the quantum circuit depicted in Figure~\ref{PT circuit diagram} when the unitary evolution is determined by a time-independent Hamiltonian, as we detail below, and we highlight that $\Upsilon$ is defined directly with the first input being the initial state $\rho$ (as opposed to half of a maximally entangled state in $\mathsf{S}$). Explicitly, it can be written as \begin{align}\Upsilon_{k:0}=\sum\tr_E\left[{U}_k\FS_{\alpha_k\beta_k} \cdots{U}_1\FS_{\alpha_1\beta_1}U_0\,\rho\,U_0^\dg \FS_{\gamma_1\delta_1}^\dg{U}_1^\dg\cdots\FS_{\gamma_k\delta_k}^\dg{U}_k^\dg\right]\otimes|\beta_1\alpha_1\cdots\beta_k\alpha_k\rangle\!\langle\delta_1\gamma_1\cdots\delta_k\gamma_k|.\end{align} Note that $\mf{S}_{\alpha\sigma}=\mf{S}_{\sigma\alpha}^\dg$, $\mf{S}_{ab}\mf{S}^\dg_{cd}=\delta_{bd}\mf{S}_{ac}$ and $\tr(\mf{S}_{ab})=d_E\delta_{ab}$. Also notice that the resulting Choi process tensor state belongs to the whole $\mathsf{S}$ plus ancillary system, which has dimension $d_S^{2k+1}$.
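For concreteness, the swap-operator identities quoted above can be verified numerically. The following sketch is our illustration (the dimensions $d_E=3$, $d_S=2$ are arbitrary choices, not fixed by the text); it checks $\mf{S}_{ab}\mf{S}^\dg_{cd}=\delta_{bd}\mf{S}_{ac}$, $\mf{S}_{\alpha\sigma}=\mf{S}_{\sigma\alpha}^\dg$ and $\tr(\mf{S}_{ab})=d_E\delta_{ab}$.

```python
import numpy as np

# Arbitrary small dimensions for a numerical sanity check of the
# swap-operator identities stated in the text.
d_E, d_S = 3, 2

def S_op(a, b):
    """S_{ab} = 1_E (x) |a><b| acting on the E (x) S space."""
    ketbra = np.zeros((d_S, d_S), dtype=complex)
    ketbra[a, b] = 1.0
    return np.kron(np.eye(d_E), ketbra)

# S_{ab} S_{cd}^dagger = delta_{bd} S_{ac}
ok_product = all(
    np.allclose(S_op(a, b) @ S_op(c, d).conj().T,
                (1.0 if b == d else 0.0) * S_op(a, c))
    for a in range(d_S) for b in range(d_S)
    for c in range(d_S) for d in range(d_S)
)

# S_{as} = S_{sa}^dagger and tr(S_{ab}) = d_E * delta_{ab}
ok_dagger = all(np.allclose(S_op(a, s), S_op(s, a).conj().T)
                for a in range(d_S) for s in range(d_S))
ok_trace = all(np.isclose(np.trace(S_op(a, b)), d_E * (a == b))
               for a in range(d_S) for b in range(d_S))
print(ok_product, ok_dagger, ok_trace)
```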
\begin{figure} \caption{\footnotesize Circuit diagram of the Choi state of a process tensor corresponding to definition~\eqref{PT Choi state Def}.} \label{PT circuit diagram} \end{figure} In particular, here we deal with evolution by time-independent Hamiltonians $H$, i.e., with $U_j=\exp[-iHt_j]$. Also, as done in~\cite{ShortSystemsAndSub, ShortFinite, PhysRevE.79.061103}, we consider first a pure initial state $\rho=|\phi\rangle\!\langle\phi|$ and then extend our results to mixed initial states by purification; this also allows us to choose an energy eigenbasis $\{|n\rangle\}$ for $H$ such that the evolution of the initial state is the same as that given by a non-degenerate Hamiltonian $H^\prime=\sum_n{E}_n|n\rangle\!\langle{n}|$, i.e., \begin{gather} \rho(t_0)=U_{0}\rho\,U^\dg_{0}=\sum_{m,n}e^{-it_0(E_m-E_n)}\rho_{mn}|m\rangle\!\langle{n}|, \end{gather} where $\rho_{mn}=\langle{m}|\rho|n\rangle=\langle{m}|\phi\rangle\!\langle\phi|n\rangle$. \section{Proof of the main result}\label{Appendix: bound} We consider the multi-time expectation for CP (weighted) maps $\mc{A}_i(\cdot)=\sum_\mu{a}_i^\mu{A}_i^\mu(\cdot){A}_i^{\mu\,\dg}$ acting on a subsystem $\mathsf{S}$ of a joint $\mathsf{SE}$ system, together with an ancillary space $\mathsf{\Gamma}$, where the weights are $a_i^\mu\in\mathbb{R}$ and with $\sum_\mu{A}_i^{\mu\,\dg}{A}_i^\mu\leq\mbb1$. The full initial $\mathsf{SE\Gamma}$-state $\varrho$ will be given by $\rho\otimes\gamma$, where $\gamma$ acts on $\mathsf{\Gamma}$. This expectation in a process $\Upsilon$ is given by \begin{equation} \langle\Lambda\rangle_\Upsilon=\tr[\mc{A}_k\,\mc{U}_k\cdots\mc{A}_0\,\mc{U}_0(\varrho)], \end{equation} where implicitly the $\mc{A}_i$ act only on $\mathsf{S\Gamma}$, while the unitaries $\mc{U}_i(\cdot)=U_i(\cdot)U_i^\dg$ with $U_\ell=\exp(-iH_\ell t)$ act on the $\mathsf{SE}$ system.
We consider a fixed set of projectors $\{P_n\}$ for all Hamiltonians, such that $H_\ell=\sum_n P_nE_{n_\ell}$ at each step $\ell$, with $P_n$ projecting onto the energy eigenspace of $H_\ell$ with energy $E_{n_\ell}$. We also denote simply by $\cdot$ the composition of superoperators when clear from context. Let \begin{equation} \varpi\equiv\varpi_0\equiv\lim_{T_0\to\infty}\int_0^\infty\mc{U}_0(\varrho)\,\mathscr{P}_{T_0}(t_0)\,dt_0=\lim_{T_0\to\infty}\int_0^\infty\mc{U}_0(\rho)\,\mathscr{P}_{T_0}(t_0)\,dt_0\otimes\gamma, \end{equation} for a probability distribution on all $t_0>0$ with density function $\mathscr{P}_{T_0}$ depending on a parameter $T_0$, and similarly, \begin{align} \varpi_1&\equiv\lim_{T_1\to\infty}\int_0^\infty\mc{U}_1\mc{A}_0(\varpi_0)\,\mathscr{P}_{T_1}(t_1)\,dt_1,\\ &\vdots\nonumber\\ \varpi_k&\equiv\lim_{T_k\to\infty}\int_0^\infty\mc{U}_k\mc{A}_{k-1}(\varpi_{k-1})\,\mathscr{P}_{T_k}(t_k)\,dt_k, \end{align} for all $t_1,\ldots,t_k$. These probability densities are chosen such that \begin{equation} \lim_{T_\ell\to\infty}\int_0^\infty\,\exp[-it(E_{n_\ell}-E_{m_\ell})]\,\mathscr{P}_{T_\ell}(t)\,dt=\delta_{n_\ell m_\ell}, \end{equation} with the parameters $T_\ell$ playing the role of an uncertainty parameter, e.g.\ the variance of a given distribution. We now define $\langle\Lambda\rangle_\Omega\equiv\lim_{\mathbf{T}\to\infty}\overline{\langle\Lambda\rangle_\Upsilon}^{\mathscr{P}_\mathbf{T}}=\tr[\mc{A}_k(\varpi_k)]$, where the overline denotes the time-average over all $t_i$, i.e. \begin{equation} \overline{X}^{\mathscr{P}_\mathbf{T}}=\int_0^\infty\mathscr{P}_{T_k}(t_k)\,dt_k\cdots\int_0^\infty\,\mathscr{P}_{T_0}(t_0)\,dt_0\,X(t_0,\ldots,t_k). \end{equation} In general, we can write \begin{align} \varpi_i&=\sum{P}_n\,\mc{A}_{i-1}\big(P_{n^\prime}\,\mc{A}_{i-2}(\cdots\mc{A}_0(\varpi_0)\cdots)\,P_{n^\prime}\big)P_{n}, \end{align} for any $0\leq{i}\leq{k}$.
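To illustrate the defining property of the densities $\mathscr{P}_{T_\ell}$, consider the concrete choice (our example; the text leaves the family unspecified) of $\mathscr{P}_T$ uniform on $[0,T]$, for which the phase average has the closed form $(1-e^{-iT\Delta})/(iT\Delta)$ for $\Delta=E_{n_\ell}-E_{m_\ell}\neq0$ and $1$ for $\Delta=0$, so it tends to $\delta_{n_\ell m_\ell}$ as $T\to\infty$.

```python
import numpy as np

# Closed-form time-average of exp(-i t delta) against the uniform
# density on [0, T] (our illustrative choice of P_T).
def G(delta, T):
    if delta == 0:
        return 1.0 + 0.0j
    return (1.0 - np.exp(-1j * T * delta)) / (1j * T * delta)

gaps = [0.0, 0.7, -1.3, 2.5]     # sample energy differences E_n - E_m
T_large = 1e6
vals = [abs(G(d, T_large)) for d in gaps]

diag_ok = np.isclose(vals[0], 1.0)             # equals 1 on the diagonal
offdiag_ok = all(v < 1e-5 for v in vals[1:])   # vanishes off the diagonal
# |G| is bounded by 2/(T|delta|), so it shrinks as T grows
decay_ok = all(abs(G(d, 10.0)) >= abs(G(d, 1000.0)) for d in gaps[1:])
print(diag_ok, offdiag_ok, decay_ok)
```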
Let us denote $\mc{P}_{nm}=P_n(\cdot)P_m$ and let $\mc{D}=\sum_{n}\mc{P}_{nn}$, i.e.\ $\mc{D}(X)=\sum_nP_nXP_n$, be the dephasing map with respect to $\{P_n\}$; then we can similarly write \begin{align} \varpi_i&=\mc{D}\mc{A}_{i-1}\mc{D}\cdots\mc{A}_0\mc{D}(\varrho), \end{align} and so \begin{equation} \langle\Lambda\rangle_\Omega=\tr[\mc{A}_k\mc{D}\mc{A}_{k-1}\mc{D}\cdots\mc{A}_0\mc{D}(\varrho)]. \end{equation} We now consider the difference \begin{align} |\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}|\equiv|\tr[(\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega)\Lambda]|, \end{align} where \begin{equation} \langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}}=\sum_{n,m}\overline{\mathfrak{E}(\mathbf{t},\mathbf{n},\mathbf{m})}^{\mathscr{P}_\mathbf{T}}\tr[\mc{A}_k\mc{P}_{nm}\mc{A}_{k-1}\cdots\mc{A}_0\mc{P}_{nm}(\rho)], \end{equation} with \begin{equation} \overline{\mathfrak{E}(\mathbf{t},\mathbf{n},\mathbf{m})}^{\mathscr{P}_\mathbf{T}}\equiv\overline{\exp[-it_k(E_{n_k}-E_{m_k})]}^{\mathscr{P}_{T_k}}\cdots\overline{\exp[-it_0(E_{n_0}-E_{m_0})]}^{\mathscr{P}_{T_0}}. \end{equation} Defining $G_{n_\ell{m}_\ell}^{(\ell)}\equiv\overline{\exp[-it_\ell(E_{n_\ell}-E_{m_\ell})]}^{\mathscr{P}_{T_\ell}}$, this is $\overline{\mathfrak{E}(\mathbf{t},\mathbf{n},\mathbf{m})}^{\mathscr{P}_\mathbf{T}}=\prod_{\ell=0}^kG_{n_\ell{m}_\ell}^{(\ell)}$. Let us define the partial dephasing superoperator $\mc{G}_\ell$ by $\mc{G}_\ell(\rho) = \sum_{n_\ell, m_\ell} G^{(\ell)}_{n_\ell{m}_\ell} \mc{P}_{nm}(\rho)$ and $\mc{G}_{k:\ell}:= \bigcircop_{j=\ell}^k \mc{G}_j$, such that $\mc{G}_{k:\ell}(\rho) = \sum_{n,m} G^{(k)}_{n_k{m}_k}\cdots{G}^{(\ell)}_{n_\ell{m}_\ell}\mc{P}_{nm}(\rho)$. Whenever we write $\mc{G}(\varrho)$, this means $\mc{G}(\rho)\otimes\gamma$, and similarly for any other maps.
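The structural properties of the dephasing map $\mc{D}$ used in the estimates below (idempotence, trace preservation, and self-adjointness with respect to the Hilbert-Schmidt inner product) can be checked numerically; the following sketch with a random non-degenerate Hermitian matrix is our illustration, not part of the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random non-degenerate Hermitian "Hamiltonian" and its spectral projectors
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
_, V = np.linalg.eigh(H)
P = [np.outer(V[:, n], V[:, n].conj()) for n in range(d)]

def dephase(X):
    """D(X) = sum_n P_n X P_n, the dephasing map w.r.t. {P_n}."""
    return sum(p @ X @ p for p in P)

# Random density matrix
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = B @ B.conj().T
rho /= np.trace(rho).real

idempotent = np.allclose(dephase(dephase(rho)), dephase(rho))
trace_preserving = np.isclose(np.trace(dephase(rho)), np.trace(rho))
# Hilbert-Schmidt self-adjointness: tr[D(X) Y] = tr[X D(Y)]
Y = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
self_adjoint = np.isclose(np.trace(dephase(rho) @ Y),
                          np.trace(rho @ dephase(Y)))
print(idempotent, trace_preserving, self_adjoint)
```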
Let us look first at the case $k=1$ (we label with a subindex the step at which $\mc{D}$ is applied where relevant), \begin{align} \left|\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}\right|&=\left|\tr\left[\left(\bigcircop_{j=0}^1 \mathcal{A}_j \mc{G}_j - \bigcircop_{j=0}^1 \mathcal{A}_j \mc{D}_j\right)(\varrho)\right]\right|\nonumber \\ &= \Bigg|\tr\left[\mc{A}_1\mc{A}_0 \left(\mc{G}_1\mc{G}_0 - \mc{D}_1\mc{D}_0\right)(\varrho)\right] +\tr\Big[\mathcal{A}_1\Big([\mc{G}_1,\mc{A}_0]\mc{G}_0 - [\mc{D}_{1}, \mc{A}_0]\mc{D}_0\Big)(\varrho)\Big]\Bigg|\nonumber\\ &\leq\left|\tr\left[\mc{A}_1\mc{A}_0 \left(\mc{G}_1\mc{G}_0 - \mc{D}\right)(\varrho)\right]\right| + \left|\tr\left\{\mathcal{A}_{1} [\mc{G}_{1} - \mc{D}, \mc{A}_0] \mc{G}_0(\varrho)\right\}\right| + \left|\tr\left\{\mathcal{A}_{1} [\mc{D},\mc{A}_0] (\mc{G}_0-\mc{D})(\varrho)\right\}\right|, \end{align} where the third line follows by the triangle inequality ($|a-c|\leq|a-b|+|b-c|$, here with $b=\tr\{\mc{A}_1[\mc{D},\mc{A}_0]\mc{G}_0(\varrho)\}$). 
Similarly, for general $k$, \begin{align} \left|\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}\right|&=\left|\tr\left[\left(\bigcircop_{j=0}^{k} \mathcal{A}_j \mc{G}_j - \bigcircop_{j=0}^{k} \mathcal{A}_j \mc{D}_j\right)(\varrho)\right]\right|\nonumber \\ &= \Bigg|\tr\left[\mc{A}_{k:0} \left(\mc{G}_{k:0} - \mc{D}_{k:0}\right)(\varrho)\right] \nonumber \\ &\qquad + \sum_{\ell=0}^{k-1}\tr\Big[\mathcal{A}_{k:\ell+1} \Big([\mc{G}_{k:\ell+1}, \mc{A}_\ell] \mc{G}_\ell \bigcircop_{j=0}^{\ell-1} \mathcal{A}_j \mc{G}_j- [\mc{D}_{k:\ell+1}, \mc{A}_\ell]\mc{D}_\ell \bigcircop_{j=0}^{\ell-1} \mathcal{A}_j \mc{D}_j \Big)(\varrho)\Big]\Bigg|\nonumber\\ \leq& \left|\tr\left[\mc{A}_{k:0} \left(\mc{G}_{k:0} - \mc{D}\right)(\varrho)\right]\right|\nonumber \\ & + \sum_{\ell=0}^{k-1}\left|\tr\left[\mathcal{A}_{k:\ell+1} [\mc{G}_{k:\ell+1} - \mc{D}, \mc{A}_\ell] (\varrho_\ell)\right]\right| + \sum_{\ell=0}^{k-1}\left|\tr\left[\mathcal{A}_{k:\ell+1} [\mc{D}, \mc{A}_\ell] (\varrho_\ell-\varpi_\ell)\right]\right|, \end{align} where $\varrho_\ell:=\mc{G}_\ell\bigcircop_{j=0}^{\ell-1}\mc{A}_j \mc{G}_j(\varrho)$. 
Using the Cauchy--Schwarz inequality as $|\tr \mc{X}(\varrho)|= |\langle\!\langle X | \varrho \rangle\!\rangle| \leq \|\mc{X}\|\|\varrho\|_2$, where here we denote $\|\mc{X}\|:= \sup_{\|\sigma\|_2=1} \|\mc{X}(\sigma)\|_2$ for simplicity, we further find \begin{align} \left|\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}\right|&\leq \left\|\mc{A}_{k:0}\right\|\, \left\| \left(\mc{G}_{k:0} - \mc{D}\right)(\varrho)\right\|_2 + \sum_{\ell=0}^{k-1} \left\|\mathcal{A}_{k:\ell+1} \right\|\, \left\| [\mc{G}_{k:\ell+1} - \mc{D}, \mc{A}_\ell] ( \varrho_\ell)\right\|_2 \nonumber \\ & \qquad \qquad + \sum_{\ell=0}^{k-1}\left\|\mathcal{A}_{k:\ell+1} \right\|\, \left\|[\mc{D}, \mc{A}_\ell] ( \varrho_\ell-\varpi_\ell)\right\|_2\nonumber\\ &\leq \left\|\mc{A}_{k:0}\right\|\, \left\| \left(\mc{G}_{k:0} - \mc{D}\right)(\varrho)\right\|_2 + \sum_{\ell=0}^{k-1} \left\|\mathcal{A}_{k:\ell+1} \right\|\,\left\| \mc{A}_\ell\right\|\, \left\|(\mc{G}_{k:\ell+1} - \mc{D}) ( \varrho_\ell)\right\|_2 \nonumber \\ & \qquad \qquad + \sum_{\ell=0}^{k-1} \left\|\mathcal{A}_{k:\ell+1} \right\|\, \left\| (\mc{G}_{k:\ell+1} - \mc{D})\mc{A}_\ell ( \varrho_\ell)\right\|_2 + \sum_{\ell=0}^{k-1}\left\|\mathcal{A}_{k:\ell+1} \right\|\, \left\|[\mc{D}, \mc{A}_\ell] ( \varrho_\ell-\varpi_\ell)\right\|_2.
\label{appendix eq: 2normbound} \end{align} For the first term of Eq.~\eqref{appendix eq: 2normbound}, we have $\mc{G}_{k:\ell} - \mc{D} = \sum_{n \neq m} G_{n_\ell{m}_\ell}^{(\ell)}\cdots{G}_{n_k{m}_k}^{(k)}\mc{P}_{nm}$, therefore, \begin{align} \left\| \left(\mc{G}_{k:0} - \mc{D}\right)(\varrho)\right\|_2^2&=\tr\left|\sum_{n \neq m} G_{n_k{m}_k}^{(k)}\cdots{G}_{n_0m_0}^{(0)} \mc{P}_{nm}(\varrho)\right|^2\nonumber\\ &=\sum_{\substack{n \neq m \\ n^\prime \neq m^\prime}}\prod_{j=0}^k G_{n_jm_j}^{(j)}G_{m_j^\prime{n}_j^\prime}^{(j)}\tr\left[ \mc{P}_{nm}(\varrho)\mc{P}_{m^\prime{n}^\prime}(\varrho)\right]\nonumber\\ &=\sum_{n \neq m }\prod_{j=0}^k |G_{n_jm_j}^{(j)}|^2\tr\left[ P_n \varrho P_m\varrho\right]\nonumber\\ &\leq\prod_{j=0}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2\left\{\sum_{n , m}\tr[P_n\varrho P_m \varrho]-\sum_n\tr[P_n\varrho P_n \varrho]\right\}\nonumber\\ &=\prod_{j=0}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2\tr(\varrho^2-\varpi^2)\nonumber\\ &=\|\varrho-\varpi\|_2^2\prod_{j=0}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2\,, \end{align} where we used $\tr(\varrho^2-\varpi^2)=\|\varrho-\varpi\|_2^2$, because $\tr(\varrho\,\varpi)=\tr(\varpi^2)$. This, together with Eq.~\eqref{appendix eq: 2normbound}, already gives the result in Eq.~\eqref{eq: result main} for the case $H_i=H_j$ and $T_i=T_j$ for all $i\neq{j}$. For the second term then, similarly, \begin{align} \left\| \left(\mc{G}_{k:\ell+1} - \mc{D}\right)(\varrho_\ell)\right\|_2^2 &\leq\prod_{j=\ell+1}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2\left\{\sum_{n , m}\tr[P_n\varrho_\ell P_m \varrho_\ell]-\sum_n\tr[P_n\varrho_\ell P_n \varrho_\ell]\right\}\nonumber\\ &=\prod_{j=\ell+1}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2\tr[\varrho_\ell^2-\mc{D}(\varrho_\ell)\varrho_\ell]\nonumber\\ &=\|\varrho_\ell-\mc{D}(\varrho_{\ell})\|_2^2\prod_{j=\ell+1}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2, \end{align} as $\tr[(\mc{D}(\varrho_\ell))^2]=\tr[\mc{D}(\varrho_\ell)\varrho_\ell]$.
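The two trace identities invoked above, $\tr(\varrho\,\varpi)=\tr(\varpi^2)$ (whence $\tr(\varrho^2-\varpi^2)=\|\varrho-\varpi\|_2^2$) and $\tr[(\mc{D}(\varrho))^2]=\tr[\mc{D}(\varrho)\varrho]$, follow from cyclicity of the trace and $P_n^2=P_n$; the following quick numerical sanity check is our illustration with $\varpi=\mc{D}(\varrho)$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# Spectral projectors of a random non-degenerate Hermitian matrix
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
_, V = np.linalg.eigh((A + A.conj().T) / 2)
P = [np.outer(V[:, n], V[:, n].conj()) for n in range(d)]
dephase = lambda X: sum(p @ X @ p for p in P)

# Random density matrix and its dephased version
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = B @ B.conj().T
rho /= np.trace(rho).real
varpi = dephase(rho)

# tr(rho varpi) = tr(varpi^2), hence tr(rho^2 - varpi^2) = ||rho - varpi||_2^2
cross_ok = np.isclose(np.trace(rho @ varpi), np.trace(varpi @ varpi))
lhs = np.trace(rho @ rho - varpi @ varpi).real
rhs = np.linalg.norm(rho - varpi, 'fro') ** 2
norm_ok = np.isclose(lhs, rhs)
print(cross_ok, norm_ok)
```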
For the third term, with $\varrho^\prime_\ell=\mc{A}_\ell(\varrho_\ell)$, \begin{align} \left\| \left(\mc{G}_{k:\ell+1} - \mc{D}\right)(\varrho_\ell^\prime)\right\|_2^2 &\leq\prod_{j=\ell+1}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2\left\{\sum_{n , m}\tr[P_n(\varrho_\ell^\prime) P_m (\varrho_\ell^\prime)]-\sum_n\tr[P_n(\varrho_\ell^\prime) P_n (\varrho_\ell^\prime)]\right\}\nonumber\\ &=\prod_{j=\ell+1}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2\tr[\varrho_\ell^{\prime\,2}-\mc{D}(\varrho_\ell^\prime)\varrho_\ell^\prime]\nonumber\\ &=\|\mc{A}_\ell(\varrho_\ell)-\mc{D}\mc{A}_\ell(\varrho_\ell)\|_2^2\prod_{j=\ell+1}^k\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|^2. \end{align} Finally, for the fourth term, we use \begin{align} \|[\mc{D},\mc{A}_\ell](\varrho_\ell-\varpi_\ell)\|_2&\leq\|\mc{D}(\varrho_{\ell+1})-\varpi_{\ell+1}\|_2+\|\mc{A}_\ell\|\|\mc{D}(\varrho_\ell)-\varpi_\ell\|_2\,. \end{align} Denoting $\mf{S}_{b:a}:=\prod_{j=a}^b\max_{n\neq{m}}|G_{n_jm_j}^{(j)}|$ and putting everything together, we obtain \begin{align} \left|\langle\Lambda\rangle_{\overline{\Upsilon}^{\mathscr{P}_\mathbf{T}}-\Omega}\right|&\leq \mf{S}_{k:0}\,\|\mc{A}_{k:0}\|\,\|\varrho-\varpi\|_2\nonumber\\ &\quad +\sum_{\ell=0}^{k-1}\mf{S}_{k:\ell+1}\,\|\mc{A}_{k:\ell+1}\|\,\bigg\{\|\mc{A}_\ell\|\|\varrho_\ell-\mc{D}(\varrho_{\ell})\|_2+\|\mc{A}_\ell(\varrho_\ell)-\mc{D}\mc{A}_\ell(\varrho_\ell)\|_2\bigg\}\nonumber\\ &\quad\quad +\sum_{\ell=0}^{k-1}\|\mc{A}_{k:\ell+1}\|\left\{\|\mc{D}(\varrho_{\ell+1})-\varpi_{\ell+1}\|_2+\|\mc{A}_\ell\|\|\mc{D}(\varrho_\ell)-\varpi_\ell\|_2\right\}, \end{align} as discussed in the main text. \end{document}
\begin{document} \title{Uniform asymptotic stability\\for convection-reaction-diffusion equations\\in the inviscid limit towards Riemann shocks} \begin{abstract} The present contribution proves the asymptotic orbital stability of viscous regularizations of stable Riemann shocks of scalar balance laws, uniformly with respect to the viscosity/diffusion parameter $\varepsilon $. The uniformity is understood in the sense that all constants involved in the stability statements are uniform and that the corresponding multiscale $\varepsilon $-dependent topology reduces to the classical $W^{1,\infty}$-topology when restricted to functions supported away from the shock location. Main difficulties include that uniformity precludes any use of parabolic regularization to close regularity estimates, that the global-in-time analysis is also spatially multiscale due to the coexistence of nontrivial slow parts with fast shock-layer parts, that the limiting smooth spectral problem (in fast variables) has no spectral gap and that uniformity requires a very precise and unusual design of the phase shift encoding \emph{orbital} stability. In particular, our analysis builds a phase that somehow interpolates between the hyperbolic shock location prescribed by the Rankine-Hugoniot conditions and the non-uniform shift arising merely from phasing out the non-decaying $0$-mode, as in the classical stability analysis for fronts of reaction-diffusion equations. {\small \paragraph {\bf Keywords:} traveling waves; asymptotic stability; orbital stability; vanishing viscosity limit; Riemann shocks; scalar balance laws; reaction-diffusion equations. } {\small \paragraph {\bf AMS Subject Classifications:} 35B35, 35L67, 35B25, 35K10, 35K58, 35K15, 35B40, 37L15, 35L02.
} \end{abstract} \tableofcontents \section{Introduction} In the present contribution, we prove for the very first time an asymptotic stability result, uniform with respect to the viscosity parameter, for a viscous regularization of a discontinuous traveling wave of a hyperbolic equation. \subsection{The original hyperbolic result} The purely inviscid result \cite{DR1}, which we extend to slightly viscous regimes, is itself quite recent. More generally, despite the fact that hyperbolic models are widely used for practical purposes and that for such models singularities such as shocks and characteristic points are ubiquitous, the analysis of the nonlinear asymptotic stability of singular traveling waves of hyperbolic systems is still in its infancy. The state of the art essentially reduces to a full classification of waves of scalar equations in any dimension \cite{DR1,DR2} (obtained using some significant insights about characteristic points from \cite{JNRYZ}) and the case study of a discontinuous wave without characteristic point for a system of two equations in dimension $1$ \cite{SYZ,YangZumbrun20}. Let us stress that, in the foregoing, stability is understood in the sense of Lyapunov, that is, globally in time, and for a topology encoding piecewise smoothness. This is consistent with the fact that, concerning stability in the sense of Hadamard, that is, short-time well-posedness, for piecewise-smooth topologies, a quite comprehensive (but not complete) theory is already available, even for multidimensional systems; see \cite{Majda1,Majda2,Metivier_cours-chocs,Benzoni-Serre_book}. At this level of regularity, being a weak solution is characterized by a free-interface initial boundary value problem, composed of equations taken in the classical sense in zones of smoothness, and the Rankine-Hugoniot transmission conditions along the free interfaces of discontinuity.
As is well-known, for hyperbolic equations, weak solutions are not unique and one needs to make an extra choice. The one we are interested in is the most classical one when the extra condition is to be obtained as a vanishing viscosity limit. For scalar equations, in any dimension, since the pioneering work of Kru{\v z}kov \cite{Kruzhkov} (see also \cite[Chapters~4 and~6]{Bressan}), this is known to be sufficient to ensure uniqueness and to be characterized by the so-called entropy conditions, which at our level of smoothness are reduced to inequalities at the free interfaces of discontinuity. For systems, even in dimension~$1$, despite decisive breakthroughs achieved in \cite{Bianchini-Bressan}, such questions are still the object of intensive research; see for instance \cite{Kang-Vasseur}. The present contribution lies at the crossroads of these questions related to the basic definitions of the notion of solution for hyperbolic equations and the ongoing development of a robust general theory for the stability of traveling waves, for which we refer the reader to \cite{Sattinger-book,Henry-geometric,Zumbrun,Sandstede,KapitulaPromislow-stability,JNRZ-conservation}. From the former point of view, the present contribution may be thought of as a global-in-time scalar version of \cite{Goodman-Xin,Grenier-Rousset,Rousset}. From the latter point of view, though of a very different technical nature, in many respects it shares similar goals with other vanishing viscosity stability programs --- see for instance \cite{Bedrossian-Germain-Masmoudi,Herda-Rodrigues} --- and the present contribution is thought of as being to \cite{DR1} what \cite{Bedrossian-Masmoudi-Vicol} is to \cite{Bedrossian-Masmoudi}. We focus on the most basic shock stability result of \cite{DR1}.
Consider a scalar balance law in dimension~$1$, \begin{equation}\label{eq:hyp} \d_t u+\d_x(f(u))=g(u) \end{equation} with traveling wave solutions $\R\times\R\to\R$, $(t,x)\mapsto \uu(x-(\psi_0+\sigma_0 t))$ with initial shock position $\psi_0\in\R$, speed $\sigma_0\in\R$ and wave profile $\uu$ of Riemann shock type, that is, \[ \uu(x)=\begin{cases} \uu_{-\infty} &\text{ if } x<0\\ \uu_{+\infty} &\text{ if } x>0 \end{cases}\] where $(\uu_{-\infty},\uu_{+\infty})\in\R^2$, $\uu_{+\infty}\neq\uu_{-\infty}$. The fact that this does define a weak solution is equivalent to \begin{align}\label{hyp-weak} g(\uu_{+\infty})&=0\,,& g(\uu_{-\infty})&=0\,,& f(\uu_{+\infty})-f(\uu_{-\infty})&=\sigma_0 (\uu_{+\infty}-\uu_{-\infty})\,, \end{align} whereas a strict version of entropy conditions may be enforced in Oleinik's form \begin{equation}\label{hyp-Lax} \begin{cases} \qquad\qquad\qquad\sigma_0\,>\,f'(\uu_{+\infty})\,,&\\[0.5em] \frac{f(\tau\,\uu_{-\infty}+(1-\tau)\,\uu_{+\infty})-f(\uu_{-\infty})}{\tau\,\uu_{-\infty}+(1-\tau)\,\uu_{+\infty}-\uu_{-\infty}}> \frac{f(\tau\,\uu_{-\infty}+(1-\tau)\,\uu_{+\infty})-f(\uu_{+\infty})}{\tau\,\uu_{-\infty}+(1-\tau)\,\uu_{+\infty}-\uu_{+\infty}}&\qquad\textrm{for any }\ \tau\in(0,1)\,,\\[0.5em] \qquad\qquad\qquad f'(\uu_{-\infty})\,>\,\sigma_0\,. \end{cases} \end{equation} Requiring a strict version of entropy conditions ensures that they still hold for nearby functions and in particular they disappear at the linearized level. In the foregoing, and throughout the text, for the sake of simplicity, we assume that $f,g\in \cC^\infty(\R)$ though each result only requires a small amount of regularity. The following statement is one of the alternative versions of \cite[Theorem~2.2]{DR1} described in \cite[Remark~2.3]{DR1}. \begin{theorem}[\cite{DR1}]\label{th:hyp} Let $(\sigma_0,\uu_{-\infty},\uu_{+\infty})\in\R^3$ define a strictly-entropic Riemann shock of \eqref{eq:hyp} in the above sense.
Assume that it is spectrally stable in the sense that \[ g'(\uu_{+\infty})<0 \quad \text{ and } \quad g'(\uu_{-\infty})<0\,. \] There exist $\delta>0$ and $C>0$ such that for any $\psi_0\in\R$ and $v_0\in BUC^1(\R^*)$ satisfying \[ \|v_0\|_{W^{1,\infty}(\R^*)}\leq\delta\,, \] there exists $\psi\in\cC^2(\R^+)$ with initial data $\psi(0)=\psi_0$ such that the entropy solution to~\eqref{eq:hyp}, $u$, generated by the initial data $u(0,\cdot)=(\uu+v_0)(\cdot+\psi_0)$, belongs to $BUC^1(\R_+\times\R\setminus\{\,(t,\psi(t))\,;\,t\geq0\,\})$ and satisfies for any $t\geq 0$ \begin{align*} \|u(t,\cdot-\psi(t))-\uu\|_{W^{1,\infty}(\R^*)} +|\psi'(t)-\sigma_0| &\leq \|v_0\|_{W^{1,\infty}(\R^*)}\,C\, \eD^{\max(\{g'(\uu_{+\infty}),g'(\uu_{-\infty})\})\, t}\,, \end{align*} and moreover there exists $\psi_\infty$ such that \[ |\psi_\infty-\psi_0|\,\leq \|v_0\|_{L^{\infty}(\R^*)} C\,, \] and for any $t\geq 0$ \[ |\psi(t)-\psi_\infty-t\,\sigma_0|\,\leq \|v_0\|_{L^{\infty}(\R^*)} C\, \eD^{\max(\{g'(\uu_{+\infty}),g'(\uu_{-\infty})\})\, t}\,. \] \end{theorem} In the foregoing, we have used the notation $BUC^k(\Omega)$ to denote the set of $\cC^k$ functions over $\Omega$ whose derivatives up to order $k$ are bounded, and uniformly continuous on every connected component of $\Omega$. In other words, $BUC^k(\Omega)$ is the closure of $W^{\infty,\infty}(\Omega)$ for the $W^{k,\infty}(\Omega)$ topology. Working with $BUC^k$ instead of $W^{k,\infty}$ allows one to use approximation by smooth functions, an argument ubiquitous in local well-posedness theories, without imposing vanishing at $\infty$. Note that, expressed in classical stability terminology, the previous theorem provides asymptotic orbital stability with asymptotic phase.
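To fix ideas, a concrete example satisfying all the hypotheses of Theorem~\ref{th:hyp} (our illustration, not taken from the text) is the Burgers flux $f(u)=u^2/2$ with bistable reaction $g(u)=u-u^3$ and endstates $\uu_{-\infty}=1$, $\uu_{+\infty}=-1$. The following sketch checks the Rankine-Hugoniot speed, the strict Oleinik entropy conditions and the spectral stability condition numerically.

```python
import numpy as np

# Illustrative example: Burgers flux with a bistable reaction term.
f = lambda u: 0.5 * u ** 2           # flux f(u) = u^2/2
fp = lambda u: u                     # f'(u)
gp = lambda u: 1.0 - 3.0 * u ** 2    # g'(u) for g(u) = u - u^3
u_minus, u_plus = 1.0, -1.0          # endstates (zeros of g)

# Rankine-Hugoniot speed
sigma0 = (f(u_plus) - f(u_minus)) / (u_plus - u_minus)

# Strict Lax/Oleinik entropy conditions
lax_ok = fp(u_plus) < sigma0 < fp(u_minus)
taus = np.linspace(1e-3, 1 - 1e-3, 999)
w = taus * u_minus + (1 - taus) * u_plus   # intermediate values
oleinik_ok = np.all((f(w) - f(u_minus)) / (w - u_minus)
                    > (f(w) - f(u_plus)) / (w - u_plus))

# Spectral stability of the endstates: g'(u_{+-infinity}) < 0
spectral_ok = gp(u_plus) < 0 and gp(u_minus) < 0
print(sigma0, lax_ok, oleinik_ok, spectral_ok)
```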
We stress however that the role of phase shifts is here deeper than in the classical stability analysis of smooth waves since it is not only required to provide decay of suitable norms in large-time but also to ensure that these norms are finite locally in time. In particular here there is no freedom, even in finite time, in the definition of phase shifts that need to synchronize discontinuities to allow for comparisons in piecewise smooth topologies. It is also instructive to consider the corresponding spectral problem. In a moving frame, linearizing from $u(t,x)=\uu(x-(\psi_0+\sigma_0 t)-\psi(t))+v(t,x-(\psi_0+\sigma_0 t)-\psi(t))$ gives a linear IBVP in $(v,\psi)$ \begin{align*} (\d_t&+(f'(\uu_{+\infty})-\sigma_0)\d_x-g'(\uu_{+\infty}))\,v(t,\cdot)\,=\,0\qquad\textrm{on }{\mathbb R}_+^*\,,\\ (\d_t&+(f'(\uu_{-\infty})-\sigma_0)\d_x-g'(\uu_{-\infty}))\,v(t,\cdot)\,=\,0\qquad\textrm{on }{\mathbb R}_-^*\,,\\ \psi'(t) &-\,\left(\frac{f'(\uu_{+\infty})-\sigma_0}{\uu_{+\infty}-\uu_{-\infty}}v(t,0^+) -\frac{f'(\uu_{-\infty})-\sigma_0}{\uu_{+\infty}-\uu_{-\infty}}v(t,0^-)\right)\,=\,0\,. \end{align*} The corresponding spectrum on $BUC^1(\R^*)\times\R$ is \[ \left\{\ \lambda\ ;\ \Re(\lambda)\leq \max(\{g'(\uu_{-\infty}),g'(\uu_{+\infty})\})\ \right\}\cup\{0\} \] and when $\max(\{g'(\uu_{-\infty}),g'(\uu_{+\infty})\})<0$, $0$ has multiplicity $1$ (in the sense provided by resolvent singularities) with eigenvector $(0,1)$. This shows that Theorem~\ref{th:hyp} sharply reproduces linear behavior. \subsection{The vanishing viscosity problem} Since even the local-in-time notion of solution involves vanishing viscosity approximations, it is natural to wonder whether Theorem~\ref{th:hyp} may have a small-viscosity extension or whether the local-in-time vanishing viscosity limits may be globalized in time about the stable Riemann shocks of Theorem~\ref{th:hyp}.
We answer such a question for the following parabolic approximation \begin{equation}\label{eq:scalar} \d_t u+\d_x(f(u))=\varepsilon \,\d_x^2 u+g(u)\,. \end{equation} Note that solutions to \eqref{eq:scalar} are smooth (not uniformly in $\varepsilon $) so that techniques based on free-interface IBVP formulations for \eqref{eq:hyp} cannot easily be extended to the study of \eqref{eq:scalar}. In the reverse direction, to gain a better control on the smoothness of solutions to \eqref{eq:scalar}, it is expedient to introduce fast variables \[ u(t,x)=\tu\underbrace{\left(\frac{t}{\varepsilon },\frac{x}{\varepsilon }\right)}_{(\ttt,\tx)} \] that turn \eqref{eq:scalar} into \begin{equation}\label{eq:scalar-fast} \d_\ttt \tu+\d_\tx(f(\tu))=\d_\tx^2\tu+\varepsilon \,g(\tu)\,. \end{equation} We stress however that it is indeed in the original variables $(t,x)$ that we aim to prove a uniform result. In particular, a large part of the analysis is focused on distinctions between norms that get large and norms that get small when going from slow to fast variables. For a closely related discussion we refer the reader to \cite{Kang-Vasseur_bis,Kang-Vasseur}. In order to carry out the extension, the first step is to elucidate the existence of traveling waves to \eqref{eq:scalar} near $\uu$. A preliminary observation in this direction is that the formal $\varepsilon \to0$ limit of \eqref{eq:scalar-fast} does possess a smooth traveling-wave solution $(\ttt,\tx)\mapsto \uU_0(\tx-\sigma_0\,\ttt)$ of speed $\sigma_0$ and profile $\uU_0$ such that \begin{align*} \lim_{-\infty}\uU_0&=\uu_{-\infty}\,,& \lim_{+\infty}\uU_0&=\uu_{+\infty}\,, \end{align*} simply obtained by solving \begin{align*} \uU_0(0)&=\frac{\uu_{-\infty}+\uu_{+\infty}}{2}\,,& \uU_0'&\,=\,f(\uU_0)-f(\uu_{+\infty}) -\sigma_0\,(\uU_0-\uu_{+\infty})\,.
\end{align*} We recall that $\sigma_0$ is tuned to ensure $f(\uu_{-\infty})-f(\uu_{+\infty}) -\sigma_0\,(\uu_{-\infty}-\uu_{+\infty})=0$ and observe that the Oleinik entropy conditions imply that $\uU_0$ is strictly monotone. This $\varepsilon =0$ viscous profile is often called a viscous shock layer and plays the role of a short-time free-interface boundary layer. This simple limiting fast profile may be perturbed to yield profiles for \eqref{eq:scalar-fast}, hence for \eqref{eq:scalar}. To state such a perturbation result with optimal spatial decay rates, we introduce, for $\varepsilon \geq0$, \begin{align*} \theta_\varepsilon ^r &:=\frac{1}{2}|f'(\uu_{+\infty})-\sigma_\varepsilon | +\frac{1}{2}\sqrt{(f'(\uu_{+\infty})-\sigma_\varepsilon )^2+4\,\varepsilon \,|g'(\uu_{+\infty})|}\,,\\ \theta_\varepsilon ^\ell &:=\frac{1}{2}\,(f'(\uu_{-\infty})-\sigma_\varepsilon ) +\frac{1}{2}\sqrt{(f'(\uu_{-\infty})-\sigma_\varepsilon )^2+4\,\varepsilon \,|g'(\uu_{-\infty})|}\,. \end{align*} \begin{proposition}\label{p:profile} Under the assumptions of Theorem~\ref{th:hyp}, for any $0<\alpha^\ell<\theta_0^\ell$, $0<\alpha^r<\theta_0^r$ and $k_0\in\mathbf N^*$, there exist $\varepsilon _0>0$ and $C_0>0$ such that, for any $0<\varepsilon \leq\varepsilon _0$, there exists a unique $(\sigma_\varepsilon ,\uU_\varepsilon )$, with $\uU_\varepsilon \in\cC^2(\R)$, \begin{align*} \uU_\varepsilon (0)&=\frac{\uu_{-\infty}+\uu_{+\infty}}{2}\,,& (f(\uU_\varepsilon )-\sigma_\varepsilon \,\uU_\varepsilon )'&=\uU_\varepsilon ''+\varepsilon \,g(\uU_\varepsilon )\,, \end{align*} and \begin{align*} |\sigma_\varepsilon -\sigma_0| +\|\eD^{\alpha^\ell|\,\cdot\,|}(\uU_\varepsilon -\uU_0)\|_{W^{1,\infty}(\R_-)} +\|\eD^{\alpha^r\,\cdot\,}(\uU_\varepsilon -\uU_0)\|_{W^{1,\infty}(\R_+)} &\leq C_0\,\varepsilon \,, \end{align*} and, moreover, there also holds \begin{align*} \|\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}(\uU_\varepsilon -\uu_{-\infty})
-\eD^{\theta_0^\ell|\,\cdot\,|}(\uU_0-\uu_{-\infty})\|_{L^{\infty}(\R_-)} &\leq\, C_0\,\varepsilon \,,\\ \|\eD^{\theta_\varepsilon ^r\,\cdot\,}(\uU_\varepsilon -\uu_{+\infty}) -\eD^{\theta_0^r\,\cdot\,}(\uU_0-\uu_{+\infty})\|_{L^{\infty}(\R_+)} &\leq\, C_0\,\varepsilon \,,\\ \|\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}\uU_\varepsilon ^{(k)} -\eD^{\theta_0^\ell|\,\cdot\,|}\uU_0^{(k)}\|_{L^{\infty}(\R_-)} &\leq\, C_0\,\varepsilon \,,&1\leq k\leq k_0\,,\\ \|\eD^{\theta_\varepsilon ^r\,\cdot\,}\uU_\varepsilon ^{(k)} -\eD^{\theta_0^r\,\cdot\,}\uU_0^{(k)}\|_{L^{\infty}(\R_+)} &\leq\, C_0\,\varepsilon \,,&1\leq k\leq k_0\,. \end{align*} \end{proposition} Note that a traveling wave $(t,x)\mapsto \uu_\varepsilon (x-(\psi_0+\sigma_\varepsilon t))$, with $\psi_0\in\R$ arbitrary, is obtained from $\uU_\varepsilon $ through \[ \uu_\varepsilon (x)\,:=\,\uU_\varepsilon \left(\frac{x}{\varepsilon }\right) \] and that, uniformly in $\varepsilon $, \begin{align*} |\uu_\varepsilon (x)-\uu(x)|&\lesssim \eD^{-\theta_\varepsilon ^\ell\,\frac{|x|}{\varepsilon }}\,,& x<0\,,\\ |\uu_\varepsilon (x)-\uu(x)|&\lesssim \eD^{-\theta_\varepsilon ^r\,\frac{x}{\varepsilon }}\,,& x>0\,,\\ |\uu^{(k)}_\varepsilon (x)|&\lesssim \frac{1}{\varepsilon ^k}\eD^{-\theta_\varepsilon ^\ell\,\frac{|x|}{\varepsilon }}\,,& x<0\,,&\ k\geq 1\,,\\ |\uu^{(k)}_\varepsilon (x)|&\lesssim \frac{1}{\varepsilon ^k}\eD^{-\theta_\varepsilon ^r\,\frac{x}{\varepsilon }}\,,& x>0\,,&\ k\geq 1\,. \end{align*} We prove Proposition~\ref{p:profile} in Appendix~\ref{s:profiles}. The existence and uniqueness part with suboptimal spatial rates follows from a rather standard Lyapunov-Schmidt argument. We stress however that it is crucial for our linear and nonlinear stability analyses to gain control on $\uU'_\varepsilon $ with sharp spatial decay rates.
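For illustration (an example we add here, not taken from the statement above), consider the Burgers flux $f(u)=\tfrac{u^2}{2}$; the reaction term $g$ does not enter the $\varepsilon =0$ profile equation. Then $\sigma_0=\tfrac12(\uu_{-\infty}+\uu_{+\infty})$, the profile equation reduces to $\uU_0'=\tfrac12\,(\uU_0-\uu_{+\infty})(\uU_0-\uu_{-\infty})$, and one checks directly that
\begin{align*}
\uU_0(x)&=\frac{\uu_{-\infty}+\uu_{+\infty}}{2}
-\frac{\uu_{-\infty}-\uu_{+\infty}}{2}\,
\tanh\left(\frac{\uu_{-\infty}-\uu_{+\infty}}{4}\,x\right)\,.
\end{align*}
Its endstates are approached at exponential rate $\tfrac12(\uu_{-\infty}-\uu_{+\infty})$, in agreement with the values $\theta_0^\ell=\theta_0^r=\tfrac12(\uu_{-\infty}-\uu_{+\infty})$ computed from the definitions above at $\varepsilon =0$.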
We obtain the claimed upgrade from suboptimal to optimal rates essentially as a corollary to the refined spectral analysis needed to carry out the nonlinear study. We point out that, despite the fact that the literature on the subject is quite extensive --- see for instance \cite{Harterich_hyperbolic,Harterich_canard,Crooks-Mascia,Crooks,Gilding} and references therein ---, we have not found there an existence result with the level of generality needed here, that is, including non-convex fluxes and yielding optimal spatial decay rates. With the existence of $\varepsilon $-versions of traveling waves in hand, the next natural question is whether these are spectrally stable. It is settled by standard arguments, as expounded in \cite{KapitulaPromislow-stability}, combining direct computations of the essential spectrum with Sturm-Liouville theory. The latter uses crucially that $\uU'_\varepsilon $ has a constant sign, a consequence of the Oleinik entropy conditions. The upshot is that, in slow original variables, the spectrum of the linearization about $\uu_\varepsilon $ in a co-moving frame, acting on $BUC^1(\R)$, is stable and exhibits a spectral gap between the simple eigenvalue $0$ and the rest of the spectrum of size $\min(\{|g'(\uu_{+\infty})|,|g'(\uu_{-\infty})|\})+\cO(\varepsilon )$. Note that in fast variables the spectral gap is of size $\varepsilon \times\min(\{|g'(\uu_{+\infty})|,|g'(\uu_{-\infty})|\})+\cO(\varepsilon ^2)$. Details of the latter are given in Section~\ref{s:spectral}. The real challenge is \emph{uniform} nonlinear asymptotic stability. Indeed, if one removes the uniformity requirement, nonlinear stability follows from spectral stability by now well-known classical arguments as expounded in \cite{Sattinger-book,Henry-geometric,Sandstede,KapitulaPromislow-stability}, and initially developed in, among others, \cite{Sattinger_one,Sattinger_two,Henry-geometric,Kapitula,Wu-Xing,Xing}.
Since the limit is singular, it is worth spelling out what we mean by uniform stability. There are two closely related parts in the requirement. Explicitly, on initial data, \begin{enumerate} \item the most obvious one is that the restriction on the sizes of allowed initial perturbations (encoded by the smallness of $\delta$ in Theorem~\ref{th:hyp}) should be uniform with respect to $\varepsilon $, so that the lower bound on the size of the basin of attraction provided by the analysis is nontrivial in the limit $\varepsilon \to0$; \item the second one is more intricate\footnote{But our result satisfies a much simpler and stronger version of the requirement.}; it states that the $\varepsilon $-dependent norms, say $\|\cdot\|_{(\varepsilon )}$, used to measure this smallness (in slow original variables) should be controlled by an $\varepsilon $-independent norm for functions supported away from the shock, so that in particular for any $v\in \cC^\infty_c(\R)$ supported in $\R^*$, $\limsup_{\varepsilon \to0}\|v\|_{(\varepsilon )}<+\infty$. \end{enumerate} On the control of solutions arising from perturbations, we impose similar constraints but with upper bounds replacing lower bounds in the requirements. Constraints on the control of solutions ensure that the bounds provide a nontrivial control whereas constraints on the control of initial data ensure that nontrivial perturbations are allowed. It may be intuitive that the stronger the norm is, the larger the basin of attraction is, since a qualitatively better control is offered by the topology. In the present case, the discussion is on the amount of localization encoded by the norm since, though this is somewhat hidden, time decay is controlled by initial spatial localization (as opposed to cases where regularity drives decay, as for instance in \cite{Bedrossian-Masmoudi,Bedrossian-Masmoudi-Vicol,Bedrossian-Germain-Masmoudi}).
To offer a quantitative insight, let us use as in \cite{Herda-Rodrigues} a simple ODE as a toy model to predict the size constraints. Consider the stability of $y\equiv 0$ for $y'=-\tau\,y+\rho\,y^2$ where $\tau>0$ encodes the size of the spectral gap and $\rho>0$ measures the size of the nonlinear forcing. For such an equation, a ball of radius $r_0$ and center $0$ is uniformly attracted to $0$ provided that $r_0<\tau/\rho$. Now, if one considers \eqref{eq:scalar} directly in $BUC^1(\R)$ (or any reasonable unweighted topology) and forgets about issues related to phase definitions and possible regularity losses, the spectral gap offered by a linearization about $\uu_\varepsilon $ is of order $1$ whereas the forcing by nonlinear terms is of order $\varepsilon ^{-1}$ (since this is the size of $\uu_\varepsilon '$), hence the rough prediction of a basin of size $\cO(\varepsilon )$. Yet, working with weights such as $\eD^{-\theta\,\varepsilon ^{-\alpha}\,|x|}$, for some sufficiently small $\theta>0$ and some $0<\alpha\leq 1$, moves the spectrum to increase the size of the gap to the order $\varepsilon ^{-\alpha}$, yielding the expectation of an $\cO(\varepsilon ^{1-\alpha})$ basin. Note that the choice $\alpha=1$ would provide a uniform size and is consistent with the size of viscous shock layers but it would force initial perturbations to be located in an $\cO(\varepsilon )$ spatial neighborhood of the shock location. The foregoing simple discussion predicts quite accurately\footnote{Actually it is even a bit optimistic for the unweighted and $\alpha<1$ cases.} what could be obtained by applying the most classical parabolic strategy to the problem at hand. The failure of the classical strategy may also be read off from the deeply related, but not equivalent, fact that it uses the phase only to pull out the contribution of nonlinear terms through the spectrally non-decaying $0$-mode.
This is inconsistent with the stronger role of the phase for the hyperbolic problem, all the spectrum contributing to the phase in the latter case. A completely different approach is needed. Additional strong signs of the very challenging nature of the uniform stability problem may also be gathered from the examination of the viscous layer stability problem, that is, the stability of $(\ttt,\tx)\mapsto\uU_0(\tx-\sigma_0\,\ttt)$ as a solution to \eqref{eq:scalar-fast} with $\varepsilon =0$. The problem has been extensively studied, see for instance \cite{Liu,Goodman,Goodman_bis,Jones-Gardner-Kapitula,Kreiss-Kreiss,Howard_lin,Howard_nonlin} for a few key contributions and \cite{Zumbrun} for a thorough account. The spectrum of the linearization includes essential spectrum touching the imaginary axis at $0$, which is still an eigenvalue, so that the decay is not exponential but algebraic and requires a trade-off, localization against decay, as for the heat equation. This is a consequence of the conservative nature of the equation, but the conservative structure may also be used to tame some of the apparent difficulties. To give one concrete example: one may remove the embedded eigenvalue $0$ from the essential spectrum by using the classical antiderivative trick, dating back at least to \cite{Matsumura-Nishihara}, either directly at the nonlinear level under the restriction of zero-mean perturbations (as in \cite{Goodman_ter} or in \cite{Matsumura-Nishihara} for a system case) or only to facilitate the linear analysis as in \cite{Howard_lin,Howard_nonlin}. In fast variables, turning on $\varepsilon >0$ moves the essential spectrum to the left, creating an $\cO(\varepsilon )$ spectral gap, but breaks the conservative structure, thus rendering almost impossible, and at least quite inconvenient, the use of classical conservative tools.
Our stability analysis requires a description as detailed as the one of \cite{Howard_lin,Howard_nonlin} and, without the antiderivative trick at hand, this involves the full machinery of \cite{Zumbrun-Howard,Zumbrun-Howard_erratum}. Roughly speaking, one of the main outcomes of our detailed spectral analysis, expressed in fast variables, is that the $\varepsilon $-proximity of essential spectrum and $0$-eigenvalue implies that the essential spectrum has an impact of size $1/\varepsilon $ on the linear time-evolution, but that at leading order the algebraic structure of the essential-spectrum contribution is such that it may be absorbed in a suitably designed phase modulation. Note that this is consistent with the fact that, in fast variables, variations in shock positions are expected to be of size $1/\varepsilon $ and with the fact that, in slow variables, the phase is involved in the resolution of all the hyperbolic spectral problems, not only the $0$-mode. To summarize and extend the discussion so far, we may hope \begin{enumerate} \item to overcome the discrepancy between the Rankine-Hugoniot prescription of the phase and the pure $0$-mode modulation, and to phase out the hidden singularity caused by the proximity of essential spectrum and $0$ eigenvalue, by carefully identifying the most singular contribution of the essential spectrum as phase variations and including this in a carefully designed phase; \item to guarantee uniform nonlinear decay estimates provided that we can ensure that, in slow variables, nonlinear terms of size $1/\varepsilon $ also come with a spectral-gap enhancing factor $\eD^{-\theta\,|x|/\varepsilon }$ (for some $\theta>0$). \end{enumerate} The latter expectation is motivated by the fact that it is indeed the case for terms forced by $\uu'_\varepsilon $ but we need to prove that it is so also for stiff terms caused by the derivatives of the perturbation itself.
Concerning the latter, we stress that even if one starts with a very gentle perturbation supported away from the shock, the nonlinear coupling instantaneously creates stiff parts of shock-layer type in the perturbation, thus making it effectively multi-scale. There remains a somewhat hidden issue that we have not discussed so far. Along the foregoing discussion, we have proceeded as if we could use the Duhamel principle based on a straightforward linearization, as in classical semilinear parabolic problems. Yet, here, closing nonlinear estimates in regularity by using parabolic regularization, either explicitly through gains of derivatives or indirectly through $L^q\to L^p$, $q<p$, mapping properties, effectively induces losses in powers of $\varepsilon $ in an already $\varepsilon $-critical problem and is thus completely forbidden. Instead, we estimate \begin{itemize} \item the variation in shock position $\psi$, the shape variation $v$ and the restriction of its derivative $\d_xv$ to an $\cO(\varepsilon )$ neighborhood of the shock location through the Duhamel formula and linear decay estimates; \item the remaining part of $\d_xv$ by a suitably modified Goodman-type hyperbolic energy estimate. \end{itemize} The latter energy estimate is similar in spirit to those in \cite{Goodman_ter,Rodrigues-Zumbrun,YangZumbrun20} but the hard part of its design is precisely in going from a classical hyperbolic estimate that would work in the complement of an $\cO(1)$ neighborhood of the shock location to a finely tuned estimate covering the complement of an $\cO(\varepsilon )$ neighborhood, since this is required for the combination with a lossless parabolic regularization argument.
Moreover, there are two more twists in the argument: on one hand we need the estimate to include weights encoding the multi-scale nature of $\d_xv$; on the other hand, for the sake of sharpness, to remain at the $\cC^1$ level of regularity, we actually apply the energy estimates to a suitable nonlinear version of $\d_xv$ so that they yield $L^\infty$ bounds for $\d_xv$. The arguments sketched above, appropriately worked out, provide the main result of the present paper. To state such results, we introduce multi-scale weights and corresponding norms: for $k\in\mathbf N$, $\varepsilon >0$, and $\theta\geq0$, \begin{align*} \omega_{k,\varepsilon ,\theta}(x) &:= \frac{1}{1+\frac{1}{\varepsilon ^k}\,\eD^{-\theta\,\frac{|x|}{\varepsilon }}}\,,& \|v\|_{W_{\varepsilon ,\theta}^{k,\infty}(\R)} &=\sum_{j=0}^k\,\|\omega_{j,\varepsilon ,\theta}\,\d_x^jv\|_{L^{\infty}(\R)}\,. \end{align*} Note that \begin{enumerate} \item Each norm $\|\cdot\|_{W_{\varepsilon ,\theta}^{k,\infty}(\R)}$ is equivalent to the standard norm $\|\cdot\|_{W^{k,\infty}(\R)}$, but not uniformly in $\varepsilon $; uniformity is restored if one restricts to functions supported in the complement of a fixed neighborhood of the origin. \item The norm $\|\cdot\|_{W_{\varepsilon ,\theta}^{0,\infty}(\R)}$ is uniformly equivalent to $\|\cdot\|_{L^{\infty}(\R)}$. \item If $\theta<\min(\{\theta_0^\ell,\theta_0^r\})$ then $\|\uu_\varepsilon -\uu\|_{W_{\varepsilon ,\theta}^{k,\infty}(\R)}$ is bounded uniformly with respect to $\varepsilon $.
\end{enumerate} \begin{theorem}\label{th:main-sl} Enforce the assumptions and notation of Theorem~\ref{th:hyp} and Proposition~\ref{p:profile}.\\ There exists $\theta_0>0$ such that for any $0<\theta\leq\theta_0$, there exist $\varepsilon _0>0$, $\delta>0$ and $C>0$ such that for any $0<\varepsilon \leq\varepsilon _0$, any $\psi_0\in\R$ and any $v_0\in BUC^1(\R)$ satisfying \[ \|v_0\|_{W_{\varepsilon ,\theta}^{1,\infty}(\R)} \leq\delta\,, \] there exists $\psi\in\cC^1(\R^+)$ with initial data $\psi(0)=\psi_0$ such that the strong\footnote{We ensure $u\in BUC^{0}(\R_+;BUC^{1}(\R))\cap \cC^\infty(\R_+^*;BUC^{\infty}(\R))$.} solution to~\eqref{eq:scalar}, $u$, generated by the initial data $u(0,\cdot)=(\uu_\varepsilon +v_0)(\cdot+\psi_0)$, is global in time and satisfies for any $t\geq 0$ \begin{align*} \|u(t,\cdot-\psi(t))-\uu_\varepsilon \|_{W_{\varepsilon ,\theta}^{1,\infty}(\R)} +|\psi'(t)-\sigma_\varepsilon | &\leq \|v_0\|_{W_{\varepsilon ,\theta}^{1,\infty}(\R)}\,C\, \eD^{\max(\{g'(\uu_{+\infty}),g'(\uu_{-\infty})\})\, t}\,, \end{align*} and moreover there exists $\psi_\infty$ such that \[ |\psi_\infty-\psi_0|\,\leq \|v_0\|_{W_{\varepsilon ,\theta}^{1,\infty}(\R)} C\,, \] and for any $t\geq 0$ \[ |\psi(t)-\psi_\infty-t\,\sigma_\varepsilon |\,\leq \|v_0\|_{W_{\varepsilon ,\theta}^{1,\infty}(\R)} C\, \eD^{\max(\{g'(\uu_{+\infty}),g'(\uu_{-\infty})\})\, t}\,. \] \end{theorem} Among the many variations and extensions of Theorem~\ref{th:hyp} provided in \cite{DR1}, the simplest one to extend to a uniform small viscosity result is \cite[Proposition~2.5]{DR1} that proves that the exponential time decay also holds for higher order derivatives without further restriction on sizes of perturbations. It does not require any new insight besides the ones used to prove Theorem~\ref{th:main-sl} and we leave it aside only to cut unnecessary technicalities.
Likewise, one may obtain in an even more direct way, that is, up to immaterial changes, exponential damping of norms encoding further slow spatial localization. To give an explicit example, let us extend the notation $W_{\varepsilon ,\theta}^{k,\infty}$, $L^\infty$, into $W_{\varepsilon ,\theta,\theta'}^{k,\infty}$, $L^\infty_{\theta'}$, according to the weights \begin{align*} \omega_{k,\varepsilon ,\theta,\theta'}(x) &:= \frac{1}{\eD^{-\theta'\,|x|}+\frac{1}{\varepsilon ^k}\,\eD^{-\theta\,\frac{|x|}{\varepsilon }}}\,,& \omega_{\theta'}(x) &:=\eD^{\theta'\,|x|}\,, \end{align*} with $\theta'\geq0$ arbitrary. One may prove for instance that for any $\theta'\geq 0$ there exist $C_{\theta'}$ and $\varepsilon _{\theta'}>0$ such that, under the sole further restrictions $0<\varepsilon \leq \varepsilon _{\theta'}$ and $\eD^{\theta'\,|\,\cdot\,|}\,v_0\in L^\infty(\R)$, there holds \begin{align*} \|u(t,\cdot-\psi(t))-\uu_\varepsilon \|_{L_{\theta'}^{\infty}(\R)} &\leq \|v_0\|_{L_{\theta'}^{\infty}(\R)}\,C_{\theta'}\, \eD^{\max(\{g'(\uu_{+\infty}),g'(\uu_{-\infty})\})\, t}\,. \end{align*} One point in considering these weighted topologies is that, when $\theta'>0$, $L^\infty_{\theta'}$ is continuously embedded in $L^1\cap L^\infty$, so that an estimate on $\|u(t,\cdot-\psi(t))-\uu\|_{L^p}$ is provided by the combination of the foregoing bound with the already known bound \[ \|\uu_\varepsilon -\uu\|_{L^p(\R)}\lesssim \varepsilon ^{\frac1p}\,. \] \subsection{Outline and perspectives} The most natural nontrivial extensions of Theorems~\ref{th:hyp}/\ref{th:main-sl} that we have chosen to leave for future work concern on one hand the parabolic regularization by quasilinear terms and on the other hand planar Riemann shocks in higher spatial dimensions (see \cite[Theorem~3.4]{DR1} for the hyperbolic case).
We expect many parts of the present analysis to be directly relevant in quasilinear or multidimensional cases but we also believe that their treatment would require sufficiently many new arguments to deserve a separate analysis. In the multidimensional case, even the outcome is expected to be significantly different. In this direction, let us point out that the hyperbolic spectral problem is critical in the stronger sense that the spectrum includes the whole imaginary axis, instead of having an intersection with the imaginary axis reduced to $\{0\}$. This may be traced back to the fact that the linearized Rankine-Hugoniot equation takes the form of a transport equation in transverse variables for the phase. Consistently, as proved in \cite[Theorem~3.4]{DR1}, for the hyperbolic problem, perturbing a planar shock may lead asymptotically in large time to another non-planar Riemann shock sharing the same constant states. This may still be interpreted as a space-modulated asymptotic stability result, in the sense coined in \cite{JNRZ-conservation} and thoroughly discussed in \cite{R,R_Roscoff,R_linKdV,DR2}. A similar phenomenon is analyzed for scalar \emph{conservation} laws in \cite{Serre_scalar}. Concerning the quasilinear case, the main new difficulty is expected to arise from the fact that, to close the argument, one needs to prove that the $L^\infty$ decay of $\varepsilon \,\d_x^2v$, where $v$ still denotes the shape variation, is at least as good as the one of $\d_xv$. A priori, outside the shock layer this leaves the freedom to pick some initial typical size $\varepsilon ^{-\eta_0}$, $\eta_0\in [0,1]$, for $\d_x^2v$ and to try to propagate it. Indeed, roughly speaking, in the complement of an $\cO(\varepsilon )$ neighborhood of the shock location, this $L^\infty$ propagation stems from arguments similar to the ones sketched above for $\d_xv$.
The key difference is that now one cannot complete it with a bound obtained through the Duhamel formula since this would involve an $L^\infty$ bound on $\d_x^3v$. Thus the quasilinear study seems to require closing an estimate for $\d_x^2v$ entirely with energy-type arguments, a highly non-trivial task. In another direction, we expect that the study of waves with characteristic points, as arising in the full classification obtained in \cite{DR2} for scalar balance laws, should not only involve some new patches here and there but follow very different routes, and thus will require significant new insights even at a general abstract level. As a strong token of this expectation, we point out that regularity is expected to play a paramount role there since, at the hyperbolic level, the regularity class chosen deeply modifies the spectrum when a characteristic point is present in the wave profile; see \cite{JNRYZ,DR2}. The rest of the paper is organized as follows. We have decided to shift the derivation of wave profile asymptotics, proving Proposition~\ref{p:profile}, to Appendix~\ref{s:profiles}, because we believe that the backbone of the paper is stability, and we provide it mostly for completeness' sake. The next section contains a detailed examination of the required spectral preliminaries. The following one explains how to use these to obtain a practical representation of the linearized time-evolution. Though we mostly follow the arguments in \cite{Zumbrun-Howard} there, with some twists here and there, we provide a detailed exposition for two distinct reasons. The first one is that we need to track in constructions which parts are $\varepsilon $-uniform and which parts are not, a crucial point in our analysis.
The second one is that most of the papers of the field requiring a detailed analysis, as we do, are either extremely long \cite{Zumbrun-Howard} or cut in a few long pieces \cite{Mascia-Zumbrun_hyp-par,Mascia-Zumbrun_hyp-par_bis} and we want to save the reader from back-and-forth consultations of the literature. This makes our analysis essentially self-contained (up to basic knowledge of spectral analysis) and we believe that it could serve as a gentle introduction to the latter massive literature. Note, however, that to keep the paper within a reasonable size, we only expound the bare minimum required by our analysis. After these two preliminary sections, we enter into the technical core of the paper, with first a section devoted to detailed linear estimates, including the identification of most-singular parts of the time-evolution as phase variations, and then a section devoted to nonlinear analysis, including adapted nonlinear maximum principles proved through energy estimates and the proof of Theorem~\ref{th:main-sl}. \section{Spectral analysis}\label{s:spectral} We investigate stability for the traveling waves introduced in Proposition~\ref{p:profile}. We have chosen to carry out all our proofs within co-moving fast variables. Explicitly, we introduce new unknowns and variables through\footnote{Note the slight co-moving inconsistency with the introduction.} \[ u(t,x)=\tu\underbrace{\left(\frac{t}{\varepsilon },\frac{x-\sigma_\varepsilon \,t}{\varepsilon }\right)}_{(\ttt,\tx)}. \] However, since we never go back to the original slow variables, we drop tildes on fast quantities from now on. One reason to opt for the fast variables is that they provide a simpler reading of size dependencies on $\varepsilon $. Therefore our starting point is \begin{equation}\label{eq:scalar-sl} \d_t u+\d_x(f(u)-\sigma_\varepsilon \,u)=\d_x^2 u+\varepsilon \,g(u)\,, \end{equation} about the stationary solution $\uU_\varepsilon $.
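For completeness, the elementary computation behind this change of variables (written with tildes, before they are dropped) goes as follows: from $u(t,x)=\tu(t/\varepsilon ,(x-\sigma_\varepsilon \,t)/\varepsilon )$ one gets
\begin{align*}
\d_t u&=\frac{1}{\varepsilon }\left(\d_\ttt\tu-\sigma_\varepsilon \,\d_\tx\tu\right)\,,&
\d_x u&=\frac{1}{\varepsilon }\,\d_\tx\tu\,,&
\d_x^2 u&=\frac{1}{\varepsilon ^2}\,\d_\tx^2\tu\,,
\end{align*}
so that substituting into \eqref{eq:scalar} and multiplying by $\varepsilon $ yields exactly \eqref{eq:scalar-sl}.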
Accordingly we consider the operator \begin{equation}\label{def:operator} \cL_\varepsilon :=-\d_x((f'(\uU_\varepsilon )-\sigma_\varepsilon )\,\cdot\,)+\d_x^2+\varepsilon g'(\uU_\varepsilon ) \end{equation} on $BUC^0(\R)$ with domain $BUC^2(\R)$. Though the elements we provide are sufficient to reconstruct the classical theory, the reader may benefit from consulting \cite{KapitulaPromislow-stability} for background on spectral analysis specialized to nonlinear wave stability. In particular, we shall make extensive implicit use of the characterizations of the essential spectrum in terms of endstates of wave profiles and of the spectrum at the right-hand side\footnote{We picture the complex plane with the real axis pointing to the right and the imaginary axis pointing to the top.} of the essential spectrum\footnote{There are (at least) two reasonable definitions of essential spectrum, either through failure of satisfying the Fredholm property or through failure of satisfying the Fredholm property with zero index. In the context of semigroup generators both definitions provide the same right-hand boundary thus the conventional choice is immaterial to stability issues.} in terms of zeroes of Evans' functions. The reader is referred to \cite{Kato,Davies} for less specialized, basic background on spectral theory. The backbone of the theory is the interpretation of spectral properties of one-dimensional differential operators in terms of spatial dynamics and a key part of the corresponding studies is the investigation of exponential dichotomies.
It starts with the identification between the eigenvalue equation \[ (\lambda-\cL_\varepsilon )\,v\,=\,0 \] and the system of ODEs \[ \frac{\dD}{\dD x}\bV(x)\,=\,\bA_\varepsilon (\lambda,x)\,\bV(x) \] for the vector\footnote{The use of flux variables is not necessary but it simplifies a few computations here and there.} $\bV=(v,\d_xv-(f'(\uU_\varepsilon )-\sigma_\varepsilon )\,v)$ where \begin{equation}\label{def:Aeps} \bA_\varepsilon (\lambda,x) \,:=\,\bp f'(\uU_\varepsilon )-\sigma_\varepsilon &1\\ \lambda-\varepsilon \,g'(\uU_\varepsilon )&0 \ep\,. \end{equation} For later use, we shall denote by $\bPhi_\varepsilon ^\lambda(x,y)$ the corresponding solution operators, mapping datum at point $y$ to value at point $x$. The essential spectrum is characterized in terms of the matrices $\bA_\varepsilon ^r(\lambda):=\bA^\varepsilon (\lambda;\uu_{+\infty})$ and $\bA_\varepsilon ^\ell(\lambda):=\bA^\varepsilon (\lambda;\uu_{-\infty})$ with \begin{align}\label{def:Aeps_rl} \bA^\varepsilon (\lambda;u) &\,:=\,\bp f'(u)-\sigma_\varepsilon &1\\ \lambda-\varepsilon \,g'(u)&0 \ep\,. \end{align} Eigenvalues of $\bA^\varepsilon (\lambda;u)$ are given by \begin{equation}\label{def:mupm} \mu_{\pm}^\varepsilon (\lambda;u):=\frac{f'(u)-\sigma_\varepsilon }{2} \pm\sqrt{\frac{(f'(u)-\sigma_\varepsilon )^2}{4}+\lambda-\varepsilon \,g'(u)} \end{equation} and are distinct when $\lambda\neq\varepsilon \,g'(u)-\tfrac14\,(f'(u)-\sigma_\varepsilon )^2$.
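As an elementary check on \eqref{def:mupm}, note that $\mu_{\pm}^\varepsilon (\lambda;u)$ are the two roots of the characteristic polynomial
\begin{align*}
\det(\bA^\varepsilon (\lambda;u)-\mu\,\text{I}_2)
&=\mu^2-(f'(u)-\sigma_\varepsilon )\,\mu-(\lambda-\varepsilon \,g'(u))\,,
\end{align*}
so that $\mu_+^\varepsilon (\lambda;u)+\mu_-^\varepsilon (\lambda;u)=f'(u)-\sigma_\varepsilon $ and $\mu_+^\varepsilon (\lambda;u)\,\mu_-^\varepsilon (\lambda;u)=-(\lambda-\varepsilon \,g'(u))$; these roots are simple exactly when the discriminant $(f'(u)-\sigma_\varepsilon )^2+4\,(\lambda-\varepsilon \,g'(u))$ does not vanish.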
In this case, the matrix may be diagonalized as \[ \bA^\varepsilon (\lambda;u)\,=\,\bp \bR_+^\varepsilon (\lambda;u)&\bR_-^\varepsilon (\lambda;u)\ep\, \bp \mu_+^\varepsilon (\lambda;u)&0\\0&\mu_-^\varepsilon (\lambda;u) \ep\, \bp \bL_+^\varepsilon (\lambda;u)\\\bL_-^\varepsilon (\lambda;u)\ep \] with \begin{align}\label{def:Vpm} \bR_{\pm}^\varepsilon (\lambda;u)&:=\bp 1\\-\mu_{\mp}^\varepsilon (\lambda;u)\ep\,,& \bL_{\pm}^\varepsilon (\lambda;u)&:=\frac{\bp \pm\mu_{\pm}^\varepsilon (\lambda;u)&\pm 1\ep}{ \mu_+^\varepsilon (\lambda;u)-\mu_-^\varepsilon (\lambda;u)}\,. \end{align} The eigenvalues $\mu_{\pm}^\varepsilon (\lambda;u)$ have distinct real parts when $\lambda$ does not belong to \[ \cD_\varepsilon (u):=\varepsilon \,g'(u)-\tfrac14\,(f'(u)-\sigma_\varepsilon )^2+\R^-\,. \] All our spectral studies will take place far from the half-lines $\cD_\varepsilon (\uu_{+\infty})\cup\cD_\varepsilon (\uu_{-\infty})$, that correspond to the set termed \emph{absolute spectrum} in \cite{KapitulaPromislow-stability}. From now on, throughout the text, we shall use $\sqrt{\cdot}$ to denote the determination of the square root on $\C\setminus\R^-$ with positive real part. \subsection{Conjugation to constant coefficients} Our starting point is a conjugation of spectral problems to a piecewise constant coefficient spectral problem. This is mostly relevant in compact zones of the spectral plane and in the literature by Kevin Zumbrun and his collaborators this is known as a \emph{gap lemma} --- since a gap or in other words an exponential dichotomy is the key assumption ---; see for instance \cite[Lemma~2.6]{MZ_AMS} for a version relevant for the present analysis. Since we need to ensure uniformity in $\varepsilon $ for the case at hand we provide both a statement and a proof. \begin{proposition}\label{p:gap-lemma} Let $K$ be a compact subset of $\C\setminus\cD_0(\uu_{+\infty})$.
There exist positive constants $(\varepsilon _0,C,\theta)$ such that there exists\footnote{As follows from the proof, $P_\varepsilon ^r(\lambda,\cdot)$ is defined as soon as $\lambda\notin\cD_\varepsilon (\uu_{+\infty})\cup\cD_\varepsilon (\uu_{-\infty})$.} a smooth map \[ P^r\,:\,[0,\varepsilon _0]\times K\times \R\to GL_2(\C)\,,\qquad (\varepsilon ,\lambda,x)\mapsto P_\varepsilon ^r(\lambda,x) \] locally uniformly analytic in $\lambda$ on a neighborhood of $K$ and such that, for any $(\varepsilon ,\lambda,x)\in [0,\varepsilon _0]\times K\times [0,+\infty)$, \begin{align*} \|P_\varepsilon ^r(\lambda,x)-\text{I}_2\|&\leq C\,\eD^{-\theta\,|x|}\,,& \|(P_\varepsilon ^r(\lambda,x))^{-1}-\text{I}_2\|&\leq C\,\eD^{-\theta\,|x|}\,, \end{align*} and, for any $(\varepsilon ,\lambda,x,y)\in [0,\varepsilon _0]\times K\times (\R_+)^2$, \[ \bPhi_\varepsilon ^\lambda(x,y)\,=\, P_\varepsilon ^r(\lambda,x)\,\eD^{(x-y)\,\bA_\varepsilon ^r(\lambda)}\, (P_\varepsilon ^r(\lambda,y))^{-1}\,. \] \end{proposition} The same argument applies to the conjugation on $(-\infty,0]$ with the flow of $\bA_\varepsilon ^\ell(\lambda)$ and defines a conjugation map denoted $P^\ell$ from now on. \begin{proof} The proof is essentially a quantitative ``cheap'' gap lemma --- conjugating only one trajectory instead of solution operators --- but applied in $\cM_2(\C)$ instead of $\C^2$. Let us first observe that it is sufficient to define $P^r$ on $[0,\varepsilon _0]\times K\times [x_0,+\infty)$ for some suitably large $x_0$. Indeed, one may then extend $P^r$ by \[ P_\varepsilon ^r(\lambda,x)\,:=\, \bPhi_\varepsilon ^\lambda(x,x_0)\,P_\varepsilon ^r(\lambda,x_0)\,\eD^{(x_0-x)\,\bA_\varepsilon ^r(\lambda)}\,, \] and bounds are extended by a continuity-compactness argument. Likewise, the uniformity in $\varepsilon $ is simply derived from a continuity-compactness argument since the construction below is continuous at the limit $\varepsilon =0$.
Note moreover that in the large-$x$ regime the bound on $(P_\varepsilon ^r(\lambda,x))^{-1}$ may be derived from the bound on $P_\varepsilon ^r(\lambda,x)$ by using properties of the inverse map. The requirements on $P_\varepsilon ^r(\lambda,\cdot)$ are equivalent to the fact that it converges exponentially fast to $\text{I}_2$ at $+\infty$ (uniformly in $\varepsilon $) and that it satisfies for any $x$ \[ \frac{\dD}{\dD x}P_\varepsilon ^r(\lambda,x) \,=\,\cA_\varepsilon ^r(\lambda)(P_\varepsilon ^r(\lambda,x)) +(\bA_\varepsilon (\lambda,x)-\bA_\varepsilon ^r(\lambda))\,P_\varepsilon ^r(\lambda,x) \] where $\cA_\varepsilon ^r(\lambda):=\cA^\varepsilon (\lambda;\uu_{+\infty})$ is a linear operator on $\cM_2(\C)$ defined through \[ \cA^\varepsilon (\lambda;u)(P):=[\bA^\varepsilon (\lambda;u),P] =\bA^\varepsilon (\lambda;u)P-P\bA^\varepsilon (\lambda;u)\,. \] When $\lambda\notin\cD_\varepsilon (u)$, $\cA^\varepsilon (\lambda;u)$ admits \[ (\bR_+^\varepsilon (\lambda;u)\bL_-^\varepsilon (\lambda;u),\quad \bR_-^\varepsilon (\lambda;u)\bL_+^\varepsilon (\lambda;u),\quad \bR_+^\varepsilon (\lambda;u)\bL_+^\varepsilon (\lambda;u),\quad \bR_-^\varepsilon (\lambda;u)\bL_-^\varepsilon (\lambda;u)) \] as a basis of eigenvectors corresponding to eigenvalues \[ (\mu_+^\varepsilon (\lambda;u)-\mu_-^\varepsilon (\lambda;u),\quad -(\mu_+^\varepsilon (\lambda;u)-\mu_-^\varepsilon (\lambda;u)),\quad 0,\quad 0)\,. \] Note that $\text{I}_2$ always lies in the kernel of $\cA^\varepsilon (\lambda;u)$. 
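For instance, since $\bA^\varepsilon (\lambda;u)\,\bR_{\pm}^\varepsilon (\lambda;u)=\mu_{\pm}^\varepsilon (\lambda;u)\,\bR_{\pm}^\varepsilon (\lambda;u)$ and $\bL_{\pm}^\varepsilon (\lambda;u)\,\bA^\varepsilon (\lambda;u)=\mu_{\pm}^\varepsilon (\lambda;u)\,\bL_{\pm}^\varepsilon (\lambda;u)$, one checks directly that
\[
\cA^\varepsilon (\lambda;u)\left(\bR_+^\varepsilon (\lambda;u)\,\bL_-^\varepsilon (\lambda;u)\right)
\,=\,\left(\mu_+^\varepsilon (\lambda;u)-\mu_-^\varepsilon (\lambda;u)\right)\,\bR_+^\varepsilon (\lambda;u)\,\bL_-^\varepsilon (\lambda;u)\,,
\]
the three remaining relations being obtained in the same way. Note also that $\text{I}_2=\bR_+^\varepsilon (\lambda;u)\,\bL_+^\varepsilon (\lambda;u)+\bR_-^\varepsilon (\lambda;u)\,\bL_-^\varepsilon (\lambda;u)$.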
We denote by $\Pi_u^\varepsilon (\lambda;u)$, $\Pi_s^\varepsilon (\lambda;u)$, $\Pi_0^\varepsilon (\lambda;u)$ the corresponding spectral projections, respectively on the unstable space, the stable space and the kernel of $\cA^\varepsilon (\lambda;u)$, and for later use we point out that they are given as \begin{align*} \Pi_u^\varepsilon (\lambda;u)(P)&=\left(\bL_+^\varepsilon (\lambda;u)\,P\,\bR_-^\varepsilon (\lambda;u)\right)\,\bR_+^\varepsilon (\lambda;u)\bL_-^\varepsilon (\lambda;u)\,,\\ \Pi_s^\varepsilon (\lambda;u)(P)&=\left(\bL_-^\varepsilon (\lambda;u)\,P\,\bR_+^\varepsilon (\lambda;u)\right)\,\bR_-^\varepsilon (\lambda;u)\bL_+^\varepsilon (\lambda;u)\,,\\ \Pi_0^\varepsilon (\lambda;u)(P)&=\left(\bL_+^\varepsilon (\lambda;u)\,P\,\bR_+^\varepsilon (\lambda;u)\right)\,\bR_+^\varepsilon (\lambda;u)\bL_+^\varepsilon (\lambda;u)\\ &\ +\left(\bL_-^\varepsilon (\lambda;u)\,P\,\bR_-^\varepsilon (\lambda;u)\right)\,\bR_-^\varepsilon (\lambda;u)\bL_-^\varepsilon (\lambda;u)\,. \end{align*} Then, when $\theta$ is sufficiently small and $x_0$ is sufficiently large, the result follows from a use of the implicit function theorem on \begin{align*} P_\varepsilon ^r(\lambda,x) &\,=\,\text{I}_2\,-\int_x^{+\infty}\Pi_0^\varepsilon (\lambda;\uu_{+\infty})\left( (\bA_\varepsilon (\lambda,y)-\bA_\varepsilon ^r(\lambda))\,P_\varepsilon ^r(\lambda,y)\right)\,\dD y\\ &-\int_x^{+\infty}\eD^{-(y-x)\,(\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}))}\,\Pi_u^\varepsilon (\lambda;\uu_{+\infty})\left( (\bA_\varepsilon (\lambda,y)-\bA_\varepsilon ^r(\lambda))\,P_\varepsilon ^r(\lambda,y)\right)\,\dD y\\ &+\int_{x_0}^x\eD^{-(x-y)\,(\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}))}\,\Pi_s^\varepsilon (\lambda;\uu_{+\infty})\left( (\bA_\varepsilon (\lambda,y)-\bA_\varepsilon ^r(\lambda))\,P_\varepsilon ^r(\lambda,y)\right)\,\dD y \end{align*} with norm control on matrix-valued maps through \[ \sup_{x\geq
x_0}\eD^{\theta\,|x|}\|P(x)-\text{I}_2\|\,. \] \end{proof} \br The properties of the foregoing proposition do not determine $P^r$ uniquely. The normalizing choice made in the proof is $\Pi_s^\varepsilon (\lambda;\uu_{+\infty})\left(P_\varepsilon ^r(\lambda,x_0)\right)=\text{0}_2$ but we could have replaced $\text{0}_2$ with any analytic choice of an element of the stable space of $\cA^\varepsilon (\lambda;\uu_{+\infty})$. \er \br The proposition is sufficient to prove classical results about the determination of the essential spectrum from endstate spectra. \er We now investigate possible failure of uniformity in the regime of large spectral parameters. In the literature by Kevin Zumbrun and his collaborators, similar purposes are achieved through comparison of unstable manifolds with their frozen-coefficients approximations by a type of lemma termed there \emph{tracking lemma}; see for instance \cite{Humpherys-Lyng-Zumbrun,BJRZ}. The rationale is that, at large frequencies, smooth coefficients seem almost constant and thus may be treated in some adiabatic way, a fact ubiquitous in high-frequency/semiclassical analysis. We follow here a different path and rather effectively build a conjugation as in the foregoing gap lemma. The first step is a suitable scaling ensuring some form of uniformity in the large-$x$ contraction argument of the proof of Proposition~\ref{p:gap-lemma}. The second step is a high-frequency approximate diagonalization, combined with an explicit resolution of the leading-order part of the system, ensuring that in the large-frequency regime the latter construction could actually be carried out with $x_0=0$.
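In this direction, let us already record that, by \eqref{def:mupm},
\[
\mu_+^\varepsilon (\lambda;u)-\mu_-^\varepsilon (\lambda;u)\,=\,2\,\sqrt{\frac{(f'(u)-\sigma_\varepsilon )^2}{4}+\lambda-\varepsilon \,g'(u)}\,,
\]
so that, uniformly in $\varepsilon $ (sufficiently small) and in $u\in\{\uu_{-\infty},\uu_{+\infty}\}$,
\[
\mu_+^\varepsilon (\lambda;u)-\mu_-^\varepsilon (\lambda;u)\,=\,2\,\sqrt{\lambda}\,\left(1+\cO\left(|\lambda|^{-1}\right)\right)
\]
in the regime $\Re\left(\sqrt{\lambda}\right)\to\infty$.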
\begin{proposition}\label{p:tracking-lemma} There exist positive constants $(\varepsilon _0,C,\theta,\delta)$ such that setting \[ \Omega_\delta:=\left\{\,\lambda\,;\ \Re\left(\sqrt{\lambda}\right) \,\geq \frac{1}{\delta} \right\} \] there exists a smooth map \[ P^{r,HF}\,:\,[0,\varepsilon _0]\times \Omega_\delta\times \R\to GL_2(\C)\,,\qquad (\varepsilon ,\lambda,x)\mapsto P_\varepsilon ^{r,HF}(\lambda,x) \] locally uniformly analytic in $\lambda$ on a neighborhood of $\Omega_\delta$ and such that, for any $(\varepsilon ,\lambda,x)\in [0,\varepsilon _0]\times \Omega_\delta\times [0,+\infty)$, \begin{align*} \left\|\bp 1&0\\0&\frac{1}{\sqrt{\lambda}}\ep\,P_\varepsilon ^{r,HF}(\lambda,x)\,\bp 1&0\\0&\sqrt{\lambda}\ep\, -\eD^{-\frac12\int_x^{+\infty}(f'(\uU_\varepsilon (y))-f'(\uu_{+\infty}))\,\dD y}\,\text{I}_2 \right\| &\leq C\,\frac{\eD^{-\theta\,|x|}}{\Re\left(\sqrt{\lambda}\right)}\,,\\ \left\|\bp 1&0\\0&\frac{1}{\sqrt{\lambda}}\ep (P_\varepsilon ^{r,HF}(\lambda,x))^{-1} \bp 1&0\\0&\sqrt{\lambda}\ep -\eD^{\frac12\int_x^{+\infty}(f'(\uU_\varepsilon (y))-f'(\uu_{+\infty}))\,\dD y}\,\text{I}_2\right\| &\leq C\,\frac{\eD^{-\theta\,|x|}}{\Re\left(\sqrt{\lambda}\right)}\,, \end{align*} and, for any $(\varepsilon ,\lambda,x,y)\in [0,\varepsilon _0]\times \Omega_\delta\times \R^2$, \[ \bPhi_\varepsilon ^\lambda(x,y)\,=\, P_\varepsilon ^{r,HF}(\lambda,x)\,\eD^{(x-y)\,\bA_\varepsilon ^r(\lambda)}\, (P_\varepsilon ^{r,HF}(\lambda,y))^{-1}\,. \] \end{proposition} We point out that, for our main purposes, we do not need to identify explicitly the leading-order part of the conjugation. As for Proposition~\ref{p:gap-lemma}, the same argument applies to the conjugation on $(-\infty,0]$ with the flow of $\bA_\varepsilon ^\ell(\lambda)$ and defines a conjugation map denoted $P^{\ell,HF}$ from now on.
\begin{proof} As a preliminary remark, we observe that the conditions $\Re\left(\sqrt{\lambda}\right)\gg1$ and $\Re(\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}))\gg 1$ are equivalent, with uniform control of each by the other. Scaling $P_\varepsilon ^r$ to $Q_\varepsilon ^r$ defined as \[ Q_\varepsilon ^r(\lambda,\cdot) :=\bp 1&0\\0&\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})\ep^{-1}\, P_\varepsilon ^r(\lambda,\cdot) \,\bp 1&0\\0&\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})\ep \] removes high-frequency singularities by replacing $\bR_{\pm}^\varepsilon (\lambda;\uu_{+\infty})$ and $\bL_{\pm}^\varepsilon (\lambda;\uu_{+\infty})$ with \begin{align*} \bp 1\\-\frac{\mu_{\mp}^\varepsilon (\lambda;\uu_{+\infty})}{ \mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\ep\,,&& \bp \frac{\pm\mu_{\pm}^\varepsilon (\lambda;\uu_{+\infty})}{ \mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}&\pm 1\ep\,, \end{align*} whereas the only other effect is the replacement of $\bA_\varepsilon (\lambda,y)-\bA_\varepsilon ^r(\lambda)$ with \[ \bp f'(\uU_\varepsilon )-f'(\uu_{+\infty})&0\\[0.25em] -\varepsilon \,\frac{g'(\uU_\varepsilon )-g'(\uu_{+\infty})}{ \mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}&0 \ep\,. \] At this stage, let us choose coordinates to identify $\cM_2(\C)$ with $\C^4$ in such a way that $\C^2\times\{0\}^2$, $\{0\}^2\times\C\times\{0\}$, $\{0\}^3\times\C$, $(1,0,0,0)$ and $(0,1,0,0)$ correspond respectively --- after scaling and choice of coordinates --- to the kernel of $\cA_\varepsilon ^r(\lambda)$, its unstable space, its stable space, $\text{I}_2$ and $\dfrac{1}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}A^r_{\varepsilon ,HF}$.
Then the problem to be solved takes the form \begin{align*} \frac{\dD}{\dD x}\bfalpha(x)&=\cB_0(x)\,\bfalpha(x) +\cO(\eD^{-\theta\,|x|})\,\bp \beta\\\gamma\ep(x)\\ \frac{\dD}{\dD x}\beta(x)&= (\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}) +\omega_+(x))\,\beta(x) +\cO(\eD^{-\theta\,|x|})\,\bp \bfalpha\\\gamma\ep(x)\\ \frac{\dD}{\dD x}\gamma(x)&= -(\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}) +\omega_-(x))\,\gamma(x) +\cO(\eD^{-\theta\,|x|})\,\bp \bfalpha\\\beta\ep(x) \end{align*} with $(1,0,0,0)$ as limiting value at $+\infty$, for some\footnote{Along the proof we allow ourselves to change the precise value of $\theta$ from line to line.} $\theta>0$, where $\cB_0(x)$, $\omega_+(x)$, $\omega_-(x)$ are also of the form $\cO(\eD^{-\theta\,|x|})$. It follows that when $\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})$ is sufficiently large, by a further change of variables differing from $\text{I}_4$ by a block off-diagonal term \[ \bp \bfalpha_{bis}\\\beta_{bis}\\\gamma_{bis}\ep(x) \,=\,\left(\text{I}_4+\cO\left(\frac{\eD^{-\theta\,|x|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\right)\bp \bfalpha\\\beta\\\gamma\ep(x) \] one may transform the problem to \begin{align*} \frac{\dD}{\dD x}\bfalpha_{bis}(x)&=\tcB_0(x)\,\bfalpha_{bis}(x) +\cO\left(\frac{\eD^{-\theta\,|x|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\,\bp \beta_{bis}\\\gamma_{bis}\ep(x)\\ \frac{\dD}{\dD x}\beta_{bis}(x)&= (\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}) +\tom_{+}(x))\,\beta_{bis}(x)\\ &\hspace{7em} +\cO\left(\frac{\eD^{-\theta\,|x|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon
(\lambda;\uu_{+\infty})}\right)\,\bp \bfalpha_{bis}\\\gamma_{bis}\ep(x)\\ \frac{\dD}{\dD x}\gamma_{bis}(x)&= -(\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}) +\tom_{-}(x))\,\gamma_{bis}(x)\\ &\hspace{7em} +\cO\left(\frac{\eD^{-\theta\,|x|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\,\bp \bfalpha_{bis}\\\beta_{bis}\ep(x) \end{align*} with $(1,0,0,0)$ as limiting value at $+\infty$, where $\tcB_0-\cB_0$, $\tom_{-}-\omega_{-}$ and $\tom_{+}-\omega_{+}$ are all of the form \[ \cO\left(\frac{\eD^{-\theta\,|x|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\,. \] Now, we point out that there is a unique solution to the leading-order part \[ \frac{\dD}{\dD x}\bfalpha_{main}(x)=\tcB_0(x)\,\bfalpha_{main}(x) \] with $(1,0)$ as limiting value at $+\infty$. This follows from a fixed point argument on $x\geq x_0$ for $x_0$ large, followed by a continuation argument. We may even be more explicit. Indeed, a direct computation yields \[ \tcB_0(x)\,=\,\cB_{main}(x)+\cO\left(\frac{\eD^{-\theta\,|x|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right) \] with \[ \cB_{main}(x)\,\bp 1\\0\ep\,=\,\frac12(f'(\uU_\varepsilon (x))-f'(\uu_{+\infty}))\,\bp 1\\0\ep \] so that \[ \bfalpha_{main}(x)\,=\,\eD^{-\frac12\int_x^{+\infty}(f'(\uU_\varepsilon (y))-f'(\uu_{+\infty}))\,\dD y}\,\bp 1\\0\ep +\cO\left(\frac{\eD^{-\theta\,|x|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\,.
\] The proof is thus achieved by a fixed point argument on a problem of the type \begin{align*} \bfalpha_{bis}(x) &\,=\,\bfalpha_{main}(x)\,-\int_x^{+\infty}\Phi_0(x,y)\, \cO\left(\frac{\eD^{-\theta\,|y|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\,\bp \beta_{bis}\\\gamma_{bis}\ep(y)\,\dD y\\ \beta_{bis}(x)&= -\int_x^{+\infty}\eD^{-(y-x)\,(\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}))+\int_y^x\tom_+}\, \cO\left(\frac{\eD^{-\theta\,|y|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\,\bp \bfalpha_{bis}\\\gamma_{bis}\ep(y)\,\dD y\\ \gamma_{bis}(x)& =\int_{0}^x\eD^{-(x-y)\,(\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty}))-\int_y^x\tom_-}\, \cO\left(\frac{\eD^{-\theta\,|y|}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\right)\,\bp \bfalpha_{bis}\\\beta_{bis}\ep(y)\,\dD y \end{align*} where $\Phi_0$ denotes the solution operator associated with $\tcB_0$. \end{proof} In the following, we shall complement Proposition~\ref{p:tracking-lemma}, which provides $P^{r,HF}$ on $\Omega_\delta$, with an application of Proposition~\ref{p:gap-lemma} on \[ K_\delta:= \left\{\,\lambda\,;\ d(\lambda,\cD_0(\uu_{+\infty}))\,\geq\,\delta \quad\textrm{and}\quad |\lambda|\leq \frac{1}{\delta} \right\}\,. \] When $\delta$ is sufficiently small, $\Omega_\delta$ and $K_\delta$ overlap. Yet, a priori, $P^r$ and $P^{r,HF}$ differ even in regions where both exist. Fortunately, the possible mismatch disappears at the level of Green functions.
\subsection{Evans' function and its asymptotics} Now, wherever it makes sense, we set \begin{align*} \bV^{r,s}_\varepsilon (\lambda,x)&:=\eD^{x\,\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\,P^r_\varepsilon (\lambda,x)\,\bR_-^\varepsilon (\lambda;\uu_{+\infty})\,,\\ \bV^{r,u}_\varepsilon (\lambda,x)&:=\eD^{x\,\mu_+^\varepsilon (\lambda;\uu_{+\infty})}\,P^r_\varepsilon (\lambda,x)\,\bR_+^\varepsilon (\lambda;\uu_{+\infty})\,,\\ \bV^{\ell,s}_\varepsilon (\lambda,x)&:=\eD^{x\,\mu_-^\varepsilon (\lambda;\uu_{-\infty})}\,P^\ell_\varepsilon (\lambda,x)\,\bR_-^\varepsilon (\lambda;\uu_{-\infty})\,,\\ \bV^{\ell,u}_\varepsilon (\lambda,x)&:=\eD^{x\,\mu_+^\varepsilon (\lambda;\uu_{-\infty})}\,P^\ell_\varepsilon (\lambda,x)\,\bR_+^\varepsilon (\lambda;\uu_{-\infty})\,, \end{align*} and similarly for $\bV^{r,s,HF}_\varepsilon $, $\bV^{r,u,HF}_\varepsilon $, $\bV^{\ell,s,HF}_\varepsilon $ and $\bV^{\ell,u,HF}_\varepsilon $. Note that the notation is chosen to echo stable and unstable spaces, but we also use it in areas of the spectral plane where it does not actually correspond to stable and unstable spaces; there, it fits analytic continuations of generators of stable/unstable spaces. Correspondingly, we define the Evans' function \begin{equation}\label{def:Evans} D_\varepsilon (\lambda):=\det\bp\bV^{r,s}_\varepsilon (\lambda,0)&\bV^{\ell,u}_\varepsilon (\lambda,0)\ep \end{equation} and its high-frequency counterpart $D^{HF}_\varepsilon $. Note that we define the Evans' function at the point $0$ but, on the one hand, we do not make use of any particular property of this normalization, so that the point $0$ could be replaced with any other point, and, on the other hand, relations between Evans' functions at different points are simply derived from Liouville's formula for Wronskians.
For instance, \begin{equation}\label{eq:Wronskian} \det\bp\bV^{r,s}_\varepsilon (\lambda,x)&\bV^{\ell,u}_\varepsilon (\lambda,x)\ep \,=\,D_\varepsilon (\lambda)\,\eD^{\int_0^x\Tr(\bA_\varepsilon (\lambda,\cdot))} \,=\,D_\varepsilon (\lambda)\,\eD^{\int_0^x(f'(\uU_\varepsilon )-\sigma_\varepsilon )}\,. \end{equation} A simple corollary to Proposition~\ref{p:tracking-lemma} is \begin{corollary}\label{c:Evans-large} Uniformly in $\varepsilon $ (sufficiently small), \[ \lim_{\Re(\sqrt{\lambda})\to\infty}\,\frac{D^{HF}_\varepsilon (\lambda)}{\sqrt{\lambda}}\,=\, 2\,\eD^{-\frac12\int_0^{+\infty}(f'(\uU_\varepsilon (y))-f'(\uu_{+\infty}))\,\dD y +\frac12\int_{-\infty}^0(f'(\uU_\varepsilon (y))-f'(\uu_{-\infty}))\,\dD y}\,. \] \end{corollary} To complement Corollary~\ref{c:Evans-large}, we derive information on compact sets of $\lambda$ in the limit $\varepsilon \to0$. \begin{proposition}\label{p:Sturm-Liouville} There exists $\eta_0>0$ such that for any $\delta>0$ there exist positive $(\varepsilon _0,c_0)$ such that for any $\varepsilon \in[0,\varepsilon _0]$, $D_\varepsilon (\cdot)$ is well-defined on \[ K_{\eta_0,\delta} := \left\{\,\lambda\,;\ d(\lambda,(-\infty,-\eta_0])\,\geq\,\min\left(\left\{\delta,\frac{\eta_0}{2}\right\}\right) \quad\textrm{and}\quad|\lambda|\leq \frac{1}{\delta} \right\}\, \] and for any $\lambda\in K_{\eta_0,\delta}$, \begin{align*} |D_\varepsilon (\lambda)|\,\geq\,c_0\,\min(\{1,|\lambda|\})\,. \end{align*} \end{proposition} \begin{proof} We derive the result from Sturm-Liouville theory and regularity in $\varepsilon $. To apply Sturm-Liouville theory, we introduce the weight \[ \omega_\varepsilon (x):=\eD^{\frac12\int_0^x(f'(\uU_\varepsilon (y))-\sigma_\varepsilon )\,\dD y}\,.
\] We observe that, considered as an operator on $L^2(\R)$ with domain $H^2(\R)$, the operator \[ L_\varepsilon :=\frac{1}{\omega_\varepsilon }\cL_\varepsilon \left(\omega_\varepsilon \,\cdot\,\right) \] is self-adjoint; indeed, with $a_\varepsilon :=f'(\uU_\varepsilon )-\sigma_\varepsilon $, a direct computation yields \[ L_\varepsilon \,=\,\d_x^2-\left(\tfrac12\,a_\varepsilon '+\tfrac14\,a_\varepsilon ^2-\varepsilon \,g'(\uU_\varepsilon )\right)\,, \] a Schr\"odinger operator. Moreover, in the region of interest it possesses no essential spectrum and its eigenvalues agree in location and algebraic multiplicity with the roots of $D_\varepsilon $. As a consequence, the zeroes of $D_\varepsilon $ are real and, since $\uU'_\varepsilon /\omega_\varepsilon $ is a nowhere-vanishing eigenfunction for the eigenvalue $0$, $0$ is a simple root of $D_\varepsilon $ and $D_\varepsilon $ does not vanish on $(0,+\infty)$. From here, the corresponding bound is deduced through a continuity-compactness argument in $\varepsilon $. \end{proof} \section{Green functions}\label{s:Green} We now use the spectral objects introduced above to obtain representation formulas for linearized solution operators. \subsection{Duality} To begin with, to provide explicit formulas for the spectral Green functions introduced below, we extend to dual problems the conclusions of Section~\ref{s:spectral}. Note that the duality we are referring to is not related to any particular choice of a specific Banach space but is rather of a distributional/algebraic nature. First, we introduce the formal adjoint \begin{equation}\label{def:dual-operator} \cL_\varepsilon ^{adj}:=(f'(\uU_\varepsilon )-\sigma_\varepsilon )\d_x+\d_x^2+\varepsilon g'(\uU_\varepsilon ) \end{equation} and note that, for any sufficiently smooth $v$, $w$, and any points $(x_0,x_1)$, \begin{equation}\label{eq:duality} \bV\cdot \bp 0&-1\\1&0\ep \bp w\\\d_x w\ep\,(x_1) -\bV\cdot \bp 0&-1\\1&0\ep \bp w\\\d_x w\ep\,(x_0) \,=\,\int_{x_0}^{x_1}(w\,\cL_\varepsilon v-v\,\cL_\varepsilon ^{adj}w)\, \end{equation} with $\bV=(v,\d_xv-(f'(\uU_\varepsilon )-\sigma_\varepsilon )\,v)$.
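Let us record that the duality identity \eqref{eq:duality} stems from the pointwise identity
\[
\frac{\dD}{\dD x}\left(w\,\left(\d_xv-(f'(\uU_\varepsilon )-\sigma_\varepsilon )\,v\right)-v\,\d_x w\right)
\,=\,w\,\cL_\varepsilon v-v\,\cL_\varepsilon ^{adj}w\,,
\]
itself a direct consequence of the Leibniz rule and of the explicit form $\cL_\varepsilon v=\d_x^2v-\d_x\left((f'(\uU_\varepsilon )-\sigma_\varepsilon )\,v\right)+\varepsilon \,g'(\uU_\varepsilon )\,v$, read off from \eqref{def:Aeps}.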
As a first simple consequence of \eqref{eq:duality}, note that if $(\lambda,y,v_y^r,v_y^\ell)$ are such that $(\lambda-\cL_\varepsilon )v_y^r=0$ and $(\lambda-\cL_\varepsilon )v_y^\ell=0$, then the function \[ \varphi_y\,:\ \R\to\C\,,\qquad x\mapsto \begin{cases} v_y^r(x)&\quad \textrm{if }x>y\\ v_y^\ell(x)&\quad \textrm{if }x<y \end{cases} \] solves $(\lambda-\cL_\varepsilon )\varphi_y\,=\,\delta_y$ if and only if \begin{align*} v_y^r(y)&\,=\,v_y^\ell(y)\,,\\ \d_xv_y^r(y)-(f'(\uU_\varepsilon )(y)-\sigma_\varepsilon )\,v_y^r(y)\, &=\d_xv_y^\ell(y)-(f'(\uU_\varepsilon )(y)-\sigma_\varepsilon )\,v_y^\ell(y)+1\,. \end{align*} Specializing to the tensorized case where $v_y^r(x)=v^r(x)\,\alpha(y)$, $v_y^\ell(x)=v^\ell(x)\,\beta(y)$, note that the foregoing conditions are equivalent to \begin{align*} \bp \bV^r(y)&\bV^\ell(y)\ep\, \bp \alpha(y)\\ -\beta(y) \ep\,=\,\bp 0\\1\ep \end{align*} where $\bV^{\sharp}=(v^{\sharp},\d_xv^{\sharp}-(f'(\uU_\varepsilon )-\sigma_\varepsilon )\,v^{\sharp})$, $\sharp\in\{r,\ell\}$. Hence, we need to find vectors satisfying suitable orthogonality properties to identify the inverse of the matrix $\bp \bV^r(y)&\bV^\ell(y)\ep$. To go further, we make the identification between the dual eigenvalue equation \[ (\lambda-\cL_\varepsilon ^{adj})\,w\,=\,0 \] and the system of ODEs \[ \frac{\dD}{\dD x}\bW(x)\,=\,\tbA_\varepsilon (\lambda,x)\,\bW(x) \] for the vector $\bW=(w,\d_x w)$ where \begin{equation}\label{def:tAeps} \tbA_\varepsilon (\lambda,x) \,:=\,\bp 0&1\\ \lambda-\varepsilon \,g'(\uU_\varepsilon )& -(f'(\uU_\varepsilon )-\sigma_\varepsilon ) \ep\,.
\end{equation} Note that \[ \tbA_\varepsilon (\lambda,x)\,=\,\bA_\varepsilon (\lambda,x)\,-\,(f'(\uU_\varepsilon )-\sigma_\varepsilon )\,\text{I}_2 \] so that all the proofs of Section~\ref{s:spectral} that are purely based on spectral-gap arguments for the limiting matrices apply equally well to the corresponding dual problems, under the exact same assumptions. Alternatively, one may derive results on dual problems by using directly the relation between solution operators \[ \tbPhi_\varepsilon ^\lambda(x,y)\,=\,\bPhi_\varepsilon ^\lambda(x,y)\,\eD^{-\int_y^x(f'(\uU_\varepsilon )-\sigma_\varepsilon )}\,. \] From now on, we denote with a $\,\widetilde{\,\,}\,$ all quantities arising from dual problems. Let us point out that our choices lead to \begin{align*} \tmu_{\pm}^\varepsilon (\lambda;u)& \,=\,\mu_{\pm}^\varepsilon (\lambda;u)-\,(f'(u)-\sigma_\varepsilon )\,,& \tbR_{\pm}^\varepsilon (\lambda;u)&\,=\,\bR_{\pm}^\varepsilon (\lambda;u)\,,& \tbL_{\pm}^\varepsilon (\lambda;u)&\,=\,\bL_{\pm}^\varepsilon (\lambda;u)\,, \end{align*} and \begin{align*} \tP^r_\varepsilon (\lambda,x)& \,=\,P^r_\varepsilon (\lambda,x)\,\eD^{\int_x^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\,,& \tP^\ell_\varepsilon (\lambda,x)& \,=\,P^\ell_\varepsilon (\lambda,x)\,\eD^{-\int_{-\infty}^x(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\,, \end{align*} (and likewise for high-frequency versions). \begin{proposition}\label{p:duality-mean} Let $K$ be a compact subset of $\C\setminus(\cD_0(\uu_{+\infty})\cup\cD_0(\uu_{-\infty}))$.
There exists $\varepsilon _0>0$ such that there exist smooth maps \begin{align*} \tau^r\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \tau_\varepsilon ^r(\lambda)\,,& \rho^r\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \rho_\varepsilon ^r(\lambda)\,,\\ \tau^\ell\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \tau_\varepsilon ^\ell(\lambda)\,,& \rho^\ell\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \rho_\varepsilon ^\ell(\lambda)\,,\\ \ttau^r\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \ttau_\varepsilon ^r(\lambda)\,,& \trho^r\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \trho_\varepsilon ^r(\lambda)\,,\\ \ttau^\ell\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \ttau_\varepsilon ^\ell(\lambda)\,,& \trho^\ell\,&:\,[0,\varepsilon _0]\times K\to\C\,,\ (\varepsilon ,\lambda)\mapsto \trho_\varepsilon ^\ell(\lambda)\,, \end{align*} locally uniformly analytic in $\lambda$ on a neighborhood of $K$ and such that, for any $(\varepsilon ,\lambda)\in [0,\varepsilon _0]\times K$, for any $x\in\R$ \begin{align*} \bV_\varepsilon ^{r,s}(\lambda,x)&=\,\rho^r_\varepsilon (\lambda)\,\bV_\varepsilon ^{\ell,s}(\lambda,x) +\tau^r_\varepsilon (\lambda)\,\bV_\varepsilon ^{\ell,u}(\lambda,x)\,,\\ \bV_\varepsilon ^{\ell,u}(\lambda,x)&=\,\rho^\ell_\varepsilon (\lambda)\,\bV_\varepsilon ^{r,u}(\lambda,x) +\tau^\ell_\varepsilon (\lambda)\,\bV_\varepsilon ^{r,s}(\lambda,x)\,,\\ \tbV_\varepsilon ^{r,s}(\lambda,x)&=\,\trho^r_\varepsilon (\lambda)\,\tbV_\varepsilon ^{\ell,s}(\lambda,x) +\ttau^r_\varepsilon (\lambda)\,\tbV_\varepsilon ^{\ell,u}(\lambda,x)\,,\\ \tbV_\varepsilon ^{\ell,u}(\lambda,x)&=\,\trho^\ell_\varepsilon
(\lambda)\,\tbV_\varepsilon ^{r,u}(\lambda,x) +\ttau^\ell_\varepsilon (\lambda)\,\tbV_\varepsilon ^{r,s}(\lambda,x)\,, \end{align*} and \begin{align*} \rho^r_\varepsilon (\lambda)&\,=\, D_\varepsilon (\lambda)\,\frac{\eD^{-\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}}{\mu_+^\varepsilon (\lambda;\uu_{-\infty})-\mu_-^\varepsilon (\lambda;\uu_{-\infty})}\,,\\ \rho^\ell_\varepsilon (\lambda)&\,=\, D_\varepsilon (\lambda)\,\frac{\eD^{\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\,,\\ \trho^r_\varepsilon (\lambda)&\,=\, D_\varepsilon (\lambda)\,\frac{\eD^{\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}}{\mu_+^\varepsilon (\lambda;\uu_{-\infty})-\mu_-^\varepsilon (\lambda;\uu_{-\infty})}\,,\\ \trho^\ell_\varepsilon (\lambda)&\,=\, D_\varepsilon (\lambda)\,\frac{\eD^{-\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}}{\mu_+^\varepsilon (\lambda;\uu_{+\infty})-\mu_-^\varepsilon (\lambda;\uu_{+\infty})}\,. \end{align*} As a consequence, for such $(\varepsilon ,\lambda)$ and any $x\in\R$, \begin{align*} \bV_\varepsilon ^{\ell,u}(\lambda,x)\cdot \bp 0&-1\\1&0\ep \tbV_\varepsilon ^{\ell,u}(\lambda,x)&=\,0\,,& \bV_\varepsilon ^{r,s}(\lambda,x)\cdot \bp 0&-1\\1&0\ep \tbV_\varepsilon ^{r,s}(\lambda,x)&=\,0\,, \end{align*} \begin{align*} \bV_\varepsilon ^{r,s}(\lambda,x)\cdot \bp 0&-1\\1&0\ep \tbV_\varepsilon ^{\ell,u}(\lambda,x)&=\, -D_\varepsilon (\lambda)\,\eD^{-\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\,,\\ \bV_\varepsilon ^{\ell,u}(\lambda,x)\cdot \bp 0&-1\\1&0\ep \tbV_\varepsilon ^{r,s}(\lambda,x)&=\, D_\varepsilon (\lambda)\,\eD^{\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\,.
\end{align*} \end{proposition} The notation $\tau$, $\rho$ is chosen here to echo the transmission/reflection coefficients of the classical scattering framework. A corresponding proposition holds for the high-frequency regime. \begin{proof} All the properties are readily obtained by combining the fact that both $(\bV_\varepsilon ^{r,s}(\lambda,\cdot),\bV_\varepsilon ^{r,u}(\lambda,\cdot))$ and $(\bV_\varepsilon ^{\ell,s}(\lambda,\cdot),\bV_\varepsilon ^{\ell,u}(\lambda,\cdot))$ form bases of solutions of the spectral system of ODEs, Liouville's formula for Wronskians and the duality relation \eqref{eq:duality}. \end{proof} We thus have \begin{align*} \bp \bV_\varepsilon ^{r,s}(\lambda,y)&\bV_\varepsilon ^{\ell,u}(\lambda,y)\ep^{-1}= \begin{pmatrix} \dfrac{e_2 \cdot \tbV_\varepsilon ^{\ell,u}(\lambda,y)}{\, D_\varepsilon (\lambda)\,\eD^{-\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\,} & \dfrac{e_1 \cdot \tbV_\varepsilon ^{\ell,u}(\lambda,y)}{\, -D_\varepsilon (\lambda)\,\eD^{-\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\,} \\ -\dfrac{e_2\cdot \tbV_\varepsilon ^{r,s}(\lambda,y)}{\, D_\varepsilon (\lambda)\,\eD^{\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\,} & \dfrac{e_1 \cdot \tbV_\varepsilon ^{r,s}(\lambda,y)}{\, D_\varepsilon (\lambda)\,\eD^{\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\,} \end{pmatrix}\,.
\end{align*} \begin{proposition}\label{p:duality-high} There exist positive constants $(\varepsilon _0,C,\delta)$ such that, with \[ \Omega_\delta:=\left\{\,\lambda\,;\ \Re\left(\sqrt{\lambda}\right) \,\geq \frac{1}{\delta} \right\} \] as in Proposition~\ref{p:tracking-lemma}, there exist on $[0,\varepsilon _0]\times \Omega_\delta$ maps $\tau^{r,HF}$, $\rho^{r,HF}$, $\tau^{\ell,HF}$, $\rho^{\ell,HF}$, $\ttau^{r,HF}$, $\trho^{r,HF}$, $\ttau^{\ell,HF}$, $\trho^{\ell,HF}$, satisfying high-frequency versions of the conclusions of Proposition~\ref{p:duality-mean}; moreover, all these functions are uniformly bounded on $[0,\varepsilon _0]\times \Omega_\delta$. \end{proposition} \begin{proof} Most of the proof is contained in the proof of Proposition~\ref{p:duality-mean}. The remaining part is directly derived from the observation that Proposition~\ref{p:tracking-lemma} provides asymptotics for $(P_\varepsilon ^\ell(\lambda,0))^{-1}\,P_\varepsilon ^r(\lambda,0)$, thus also for the coefficients under consideration. \end{proof} This leads to the following definition (wherever it makes sense) \begin{align}\label{eq:spectral-Green} G_\varepsilon (\lambda;x,y) \,:=\, \begin{cases} -\dfrac{\eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}}{D_\varepsilon (\lambda)}\, \beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y) &\quad\textrm{if }x>y\,,\\[0.5em] -\dfrac{\eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}}{D_\varepsilon (\lambda)} \beD_1\cdot\bV_\varepsilon ^{\ell,u}(\lambda,x)\, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y) &\quad\textrm{if }x<y\,, \end{cases} \end{align} where $\beD_1:=(1,0)$.
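Note that \eqref{eq:spectral-Green} is exactly the tensorized construction above: with $v^r=\beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,\cdot)$ and $v^\ell=\beD_1\cdot\bV_\varepsilon ^{\ell,u}(\lambda,\cdot)$, the coefficients $(\alpha(y),-\beta(y))$ are read off from the second column of the inverse matrix displayed above.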
Note that $(\lambda-\cL_\varepsilon )G_\varepsilon (\lambda;\cdot,y)=\delta_y$ and, for $\Re(\lambda)$ sufficiently large, $G_\varepsilon (\lambda;x,y)$ is exponentially decaying as $|x-y|\to\infty$. To bound $G_\varepsilon (\lambda;x,y)$, we shall refine the alternative $x<y$ \emph{vs.} $x>y$. For instance, when $x>y$, more convenient equivalent representations of $G_\varepsilon (\lambda;x,y)$ are \begin{align*} -\dfrac{\eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}}{D_\varepsilon (\lambda)}\, \beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y) &\quad\textrm{when }x>0>y\,,\\[0.5em] -\eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x)\, \beD_1\cdot\left(\dfrac{\trho^\ell_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\tbV_\varepsilon ^{r,u}(\lambda,y) +\dfrac{\ttau^\ell_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\tbV_\varepsilon ^{r,s}(\lambda,y)\right) &\quad\textrm{when }x>y>0\,,\\[0.5em] -\eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \beD_1\cdot\left( \dfrac{\rho^r_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\bV_\varepsilon ^{\ell,s}(\lambda,x) +\dfrac{\tau^r_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\bV_\varepsilon ^{\ell,u}(\lambda,x)\right)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y) &\quad\textrm{when }0>x>y\,. \end{align*} \br The representation of spectral Green functions, hence of resolvent operators, in terms of Evans functions is sufficient to prove classical results identifying the spectrum --- including algebraic multiplicity --- to the right of the essential spectrum with the zeros of Evans functions. \er We use similar formulas in the high-frequency regime.
Yet the Green functions of the high-frequency and compact-frequency regimes agree wherever both are defined, by uniqueness for the spectral problem, in a suitably weighted space, on some overlapping regions, and by uniqueness of analytic continuation elsewhere, so that we do not need to introduce a specific piece of notation for the high-frequency regime. We also point out that it follows from Proposition~\ref{p:tracking-lemma} that in the zone of interest \[ |\d_xG_\varepsilon (\lambda;x,y)|\leq C\,\max(\{1,\sqrt{|\lambda|}\})\,|G_\varepsilon (\lambda;x,y)| \] for some uniform constant $C$. \subsection{Time-evolution} It follows from standard semigroup theory that the representation \begin{equation}\label{eq:semi-group} S_\varepsilon (t)\,=\,\frac{1}{2\iD\pi}\,\int_\R \eD^{\Lambda(\xi)\,t}\Lambda'(\xi)\, (\Lambda(\xi)\,\mathrm{I}-\cL_\varepsilon )^{-1}\,\dD\xi \end{equation} holds in $\cL(BUC^0(\R))$ when $\Lambda:\R\to \C$ is a continuous, piecewise $\cC^1$ simple curve such that \begin{enumerate} \item $\Lambda$ is valued in the right-hand connected component of\footnote{This set contains $\{\lambda;\Re(\lambda)\geq \omega\}$ when $\omega$ is sufficiently large.} \[ \left\{\,\lambda\,;\ \textrm{for }u\in\{\uu_{-\infty},\uu_{+\infty}\}\,,\ \Re\left(\mu_+^\varepsilon (\lambda,u)\right)>0>\Re\left(\mu_-^\varepsilon (\lambda,u)\right) \right\}\,; \] \item there hold \begin{align*} \lim_{\xi\to\pm\infty}\Im(\Lambda(\xi))&\,=\,\pm\infty\,,& \int_\R \eD^{\Re(\Lambda(\xi))\,t} \frac{|\Lambda'(\xi)|}{1+|\Lambda(\xi)|}\dD\xi <+\infty\, \end{align*} and there exist positive $(R,c)$ such that for $|\xi|\geq R$ \[ \Re(\Lambda(\xi))\geq -c\,|\Im(\Lambda(\xi))|\,; \] \item there is no root of $D_\varepsilon $ on the right\footnote{The second condition implies that this makes sense.} of $\Lambda(\R)$. \end{enumerate} Failure of the third condition could be remedied by adding positively-oriented small circles to the contour $\Lambda$.
This is the first condition that we want to relax by going to Green functions. For curves as above, applying the above formula to functions in $W^{\infty,\infty}(\R)$ and testing it against functions in $\cC^\infty_c(\R)$ leads to a similar representation for Green functions \begin{equation}\label{eq:time-Green} G^\varepsilon _t(x,y)\,=\,\frac{1}{2\iD\pi}\,\int_\R \eD^{\Lambda(\xi)\,t}\Lambda'(\xi)\, G_\varepsilon (\Lambda(\xi);x,y)\,\dD\xi\,. \end{equation} The point is that, at fixed $(t,x,y)$, the constraints on $\Lambda$ ensuring the representation formula are significantly less stringent, and one may use this freedom to optimize bounds. In particular, depending on the specific regime for the triplet $(t,x,y)$ or the kind of data one has in mind, one may trade spatial localization for time decay and \emph{vice versa}, adjusting contours to the right so as to gain spatial decay or to the left in order to improve time decay. When doing so, we essentially follow the strategy of \cite{Zumbrun-Howard}. The critical decay is essentially encoded in the spectral spatial decay rates at the limiting endstates and in the location of the roots of the Evans function. Therefore, roughly speaking, leaving aside questions related to the presence of a root of the Evans function at zero, contours are chosen here to approximately\footnote{In some cases a genuine optimization --- as in direct applications of the Riemann saddle-point method --- would be impractical.} optimize bounds on \[ \int_\R \eD^{\Re(\Lambda(\xi))\,t+\Re(\mu_\sharp^\varepsilon (\Lambda(\xi),\uu_{\sign(x)\infty}))\,x +\Re(\tmu_\flat^\varepsilon (\Lambda(\xi),\uu_{\sign(y)\infty}))\,y}\frac{|\Lambda'(\xi)|}{|D_\varepsilon (\Lambda(\xi))|}\,\dD\xi \] with $(\sharp,\flat)\in\{+,-\}^2$.
More precisely, at fixed $(t,x,y)$, one picks $\Lambda_0$ real in $[-\tfrac12\eta_0,+\infty)$ (with $\eta_0$ as in Proposition~\ref{p:Sturm-Liouville}), approximately minimizing \[ \Re(\lambda)\,t+\Re(\mu_\sharp^\varepsilon (\lambda,\uu_{\sign(x)\infty}))\,x +\Re(\tmu_\flat^\varepsilon (\lambda,\uu_{\sign(y)\infty}))\,y \] among such real $\lambda$ in $[-\tfrac12\eta_0,+\infty)$ and then, depending on cases, one defines $\Lambda$ through one of the equations \begin{align*} \Re(\mu_\sharp^\varepsilon (\Lambda(\xi),\uu_{\sign(x)\infty}))\,x &+\Re(\tmu_\flat^\varepsilon (\Lambda(\xi),\uu_{\sign(y)\infty}))\,y\\ &=\Re(\mu_\sharp^\varepsilon (\Lambda_0,\uu_{\sign(x)\infty}))\,x +\Re(\tmu_\flat^\varepsilon (\Lambda_0,\uu_{\sign(y)\infty}))\,y +\iD\,\xi\,\zeta_{\sign(\xi)}\,(\sharp\,x+\flat\,y)\\ \Re(\tmu_\flat^\varepsilon (\Lambda(\xi),\uu_{\sign(y)\infty}))\,y &=\Re(\tmu_\flat^\varepsilon (\Lambda_0,\uu_{\sign(y)\infty}))\,y +\iD\,\xi\,\zeta_{\sign(\xi)}\,\times(\flat\,y)\\ \Re(\mu_\sharp^\varepsilon (\Lambda(\xi),\uu_{\sign(x)\infty}))\,x &=\Re(\mu_\sharp^\varepsilon (\Lambda_0,\uu_{\sign(x)\infty}))\,x +\iD\,\xi\,\zeta_{\sign(\xi)}\,\times(\sharp\,x) \end{align*} with $\zeta_\pm$ conveniently chosen to ensure a condition analogous to the second condition of the semigroup representation, including \begin{align*} \lim_{|\xi|\to\infty}\Re(\sqrt{\Lambda(\xi)})&=+\infty\,,& \lim_{\xi\to\pm\infty}\Im(\Lambda(\xi))&=\pm\infty\,. \end{align*} This should be thought of as an approximate/simplified version of the saddle-point method in the sense that $\Lambda_0=\Lambda(0)$ is an approximate maximizer of the exponential decay rate among real numbers, but a minimizer along the curve $\Lambda(\cdot)$. Computational details --- carried out in the next section --- are cumbersome but the process is rather systematic.
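As a toy illustration of this contour choice, consider, in place of $\cL_\varepsilon $, the free heat operator $\d_x^2$ on $\R$, whose spectral Green function is the classical resolvent kernel \[ G(\lambda;x,y)\,=\,\frac{\eD^{-\sqrt{\lambda}\,|x-y|}}{2\,\sqrt{\lambda}}\,. \] The analogue of the above procedure selects $\Lambda_0$ through $2\sqrt{\Lambda_0}=|x-y|/t$, that is, $\Lambda_0=|x-y|^2/(4\,t^2)$, for which \[ \Lambda_0\,t-\sqrt{\Lambda_0}\,|x-y|\,=\,\frac{|x-y|^2}{4\,t}-\frac{|x-y|^2}{2\,t}\,=\,-\frac{|x-y|^2}{4\,t}\,, \] so that one recovers the sharp Gaussian rate of the heat kernel $\eD^{-\frac{|x-y|^2}{4\,t}}/\sqrt{4\,\pi\,t}$. None of this is used in the sequel; it only illustrates the role of $\Lambda_0$.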
\section{Linear stability}\label{s:linear} We now make the most of our spectral preparation to derive linear stability estimates. To motivate the analysis, let us anticipate that its outcome is the splitting of $(S_\varepsilon (t))_{t\geq0}$ as \begin{equation}\label{eq:linear-phase} S_\varepsilon (t)(w)(x)\,=\,\uU_\varepsilon '(x)\,s_\varepsilon ^{\textrm{p}}(t)(w)\,+\,\tS_\varepsilon (t)(w)(x)\,, \end{equation} for some $(s_\varepsilon ^{\textrm{p}}(t))_{t\geq0}$, $(\tS_\varepsilon (t))_{t\geq0}$ with $\tS_\varepsilon (0)=\text{Id}$, so that the following proposition holds. \begin{proposition}\label{p:linear} There exists $\varepsilon _0>0$ such that \begin{enumerate} \item there exists $C>0$ such that for any $t\geq0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $w\in BUC^0(\R)$ \begin{align*} \|\tS_\varepsilon (t)(w)\|_{L^\infty(\R)}&+\min\left(\{1,\sqrt{t}\}\right) \|\d_x\tS_\varepsilon (t)(w)\|_{L^\infty(\R)}+|\d_ts_\varepsilon ^{\textrm{p}}(t)(w)|\\ &\leq\,C\,\eD^{-\min\left(\left\{|g'(\uu_{-\infty})|,|g'(\uu_{+\infty})|\right\}\right)\,\varepsilon \,t}\,\|w\|_{L^\infty(\R)}\,, \end{align*} and when moreover $w\in BUC^1(\R)$ \begin{align*} \|\tS_\varepsilon (t)(w)\|_{W^{1,\infty}(\R)} &\leq\,C\,\eD^{-\min\left(\left\{|g'(\uu_{-\infty})|,|g'(\uu_{+\infty})|\right\}\right)\,\varepsilon \,t}\,\|w\|_{W^{1,\infty}(\R)}\,, \end{align*} \item for any $\theta>0$ there exist positive $(C_\theta,\omega_\theta)$ such that for any $t\geq0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $w\in BUC^0(\R)$ \begin{align*} \|\tS_\varepsilon (t)(w)\|_{L^\infty(\R)}&+\min\left(\{1,\sqrt{t}\}\right) \|\d_x\tS_\varepsilon (t)(w)\|_{L^\infty(\R)}+|\d_ts_\varepsilon ^{\textrm{p}}(t)(w)|\\ &\leq\,C_\theta\,\eD^{-\omega_\theta\,t}\,\|\eD^{\theta\,|\,\cdot\,|}\,w\|_{L^\infty(\R)}\,.
\end{align*} \end{enumerate} \end{proposition} Estimates on operators are derived through pointwise bounds on Green kernels from the trivial fact that if $\bT$ is defined through \[ \bT(w)(x)\,=\, \int_\R \bK(x,y)\,w(y)\,\dD y \] then \[ \|\bT(w)\|_{L^\infty(\R)} \,\leq\,\|\bK\|_{L^\infty_x(L^1_y)}\,\|w\|_{L^\infty(\R)}\,. \] \subsection{Auxiliary lemmas} To begin with, in order to gain a practical grasp on the way the placement of spectral curves impacts decay rates, we provide two lemmas that will be of ubiquitous use when establishing pointwise bounds on Green functions. Both lemmas are motivated by the fact that when $\beta\geq0$ and $t>0$ the minimization of \[ \Lambda_0\,t+\left(\frac{\alpha}{2}-\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda_0}\right)\,\beta \] over $\Lambda_0\in (-\frac{\alpha^2}{4}+\,b\,,+\infty)$ is equivalent to \begin{equation}\label{eq:natural} 2\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda_0} \,=\,\frac{\beta}{t}\,. \end{equation} The first lemma directly elucidates the consequences of this choice of $\Lambda_0$ in the approximate saddle-point method sketched above. \begin{lemma}\label{l:curves-large} Let $t>0$, $\alpha\in\R$, $\beta\geq0$, $\beta_0\geq0$, $b<0$, and $(\zeta_-,\zeta_+)\in\C^2$ be such that \begin{align*} \Re(\zeta_\pm)&>|\Im(\zeta_\pm)|\,,& \mp\Im(\zeta_\pm)>0\,.
\end{align*} Then the curve $\Lambda\,:\ \R\to\C$ defined through\footnote{Sign conditions on $\Im(\zeta_\pm)$ ensure that this is a licit definition.} \[ 2\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)} \,=\,\frac{\beta_0}{t}+\iD\xi\,\zeta_{\sign(\xi)}\,, \] satisfies for any $\xi\in\R$, when either $\beta=\beta_0$ or ($\beta\geq\beta_0$ and $\alpha\leq0$) \begin{align*} \Re\left(\Lambda(\xi)\,t+\left(\frac{\alpha}{2}-\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)}\right)\,\beta\right) &\leq-\left(\frac{\alpha^2}{4}+\,|b|\right)\,t +\frac{\alpha}{2}\,\beta_0-\frac{\beta_0^2}{4\,t}-\frac{\xi^2}{4}\,\Re(\zeta_{\sign(\xi)}^2)\,t\\ &\qquad=-\,|b|\,t -\frac{(\beta_0-\alpha\,t)^2}{4\,t}-\frac{\xi^2}{4}\,\Re(\zeta_{\sign(\xi)}^2)\,t \end{align*} and for any $\xi\in\R^*$ \[ |\Lambda'(\xi)|\,\leq\,|\zeta_{\sign(\xi)}|\,\left(1+\frac{\Re(\zeta_{\sign(\xi)})}{|\Im(\zeta_{\sign(\xi)})|}\right) \Re\left(\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)}\right)\,. \] \end{lemma} We omit the proof of Lemma~\ref{l:curves-large} as straightforward and elementary. The second lemma is designed to deal with cases when the natural choice \eqref{eq:natural} is not available because of extra constraints arising from the possible vanishing of the Evans function in $(-\frac{\alpha^2}{4}+\,b\,,0)$. Explicitly, it focuses on the case when $\beta/t\leq \omega_0$, where $\omega_0$ is typically picked as either $\omega_r^{\eta_0}$ or $\omega_\ell^{\eta_0}$ with \begin{align}\label{eq:om0} \omega_r^{\eta_0}&:= 2\,\sqrt{\frac{(f'(\uu_{+\infty})-\sigma_\varepsilon )^2}{4}\,-\frac{\eta_0}{2}}\,,& \omega_\ell^{\eta_0}&:= 2\,\sqrt{\frac{(f'(\uu_{-\infty})-\sigma_\varepsilon )^2}{4}\,-\frac{\eta_0}{2}}\,, \end{align} where $\eta_0$ is as in Proposition~\ref{p:Sturm-Liouville}.
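In passing, we record for the reader's convenience the elementary computation behind \eqref{eq:natural} and the equality stated in Lemma~\ref{l:curves-large}. Solving \eqref{eq:natural} yields \[ \Lambda_0\,=\,-\frac{\alpha^2}{4}+\,b\,+\frac{\beta^2}{4\,t^2}\,, \] so that, using $b<0$, \[ \Lambda_0\,t+\left(\frac{\alpha}{2}-\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda_0}\right)\,\beta \,=\,-\frac{\alpha^2}{4}\,t+b\,t+\frac{\beta^2}{4\,t}+\frac{\alpha}{2}\,\beta-\frac{\beta^2}{2\,t} \,=\,-\,|b|\,t-\frac{(\beta-\alpha\,t)^2}{4\,t}\,, \] which is precisely the value reached at $\xi=0$ (with $\beta=\beta_0$) by the bound of Lemma~\ref{l:curves-large}.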
Since $\beta/t\leq \omega_0$ should be thought of as a bounded-domain restriction, it is useful to let the second lemma also encode the possible trade-off between spatial localization and time decay. \begin{lemma}\label{l:curves-small} Let $t>0$, $\alpha\in\R$, $\beta\geq0$, $b<0$, $(\zeta_-,\zeta_+)\in\C^2$ such that \begin{align*} \Re(\zeta_\pm)&>|\Im(\zeta_\pm)|\,,& \mp\Im(\zeta_\pm)>0\,, \end{align*} and $\omega_0\geq 0$ such that \[ \beta\,\leq\,\omega_0\,t\,. \] Then the curve $\Lambda\,:\ \R\to\C$ defined through \[ 2\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)} \,=\,\omega_0+\iD\xi\,\zeta_{\sign(\xi)}\,, \] satisfies for any $\xi\in\R^*$ \[ |\Lambda'(\xi)|\,\leq\,|\zeta_{\sign(\xi)}|\,\left(1+\frac{\Re(\zeta_{\sign(\xi)})}{|\Im(\zeta_{\sign(\xi)})|}\right) \Re\left(\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)}\right) \] and for any $\xi\in\R$ and $\eta>0$, \begin{align*} &\Re\left(\Lambda(\xi)\,t+\left(\frac{\alpha}{2}-\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)}\right)\,\beta\right)\\ &\leq-\,\left(\frac{\alpha^2-(1+\eta)\,\omega_0^2}{4}+|b|\right)\,t -\frac{\omega_0-\alpha}{2}\,\beta -\frac{\xi^2}{4}\,\left(\Re(\zeta_{\sign(\xi)}^2)-\frac{1}{\eta}\,|\Im(\zeta_{\sign(\xi)})|^2\right)\,t \end{align*} and, when moreover $\omega_0<|\alpha|$, \begin{align*} &\Re\left(\Lambda(\xi)\,t+\left(\frac{\alpha}{2}-\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)}\right)\,\beta\right)\\ &\leq-\,|b|\,t -\frac{(|\alpha|\,t-\beta)^2}{4\,t}\,\left(1-(1+\eta)\,\frac{\omega_0^2}{\alpha^2}\right) -\frac{\xi^2}{4}\,\left(\Re(\zeta_{\sign(\xi)}^2)-\frac{1}{\eta}\,|\Im(\zeta_{\sign(\xi)})|^2\right)\,t\,.
\end{align*} \end{lemma} Note that to guarantee for some $\eta>0$ both \begin{align*} \Re(\zeta_{\sign(\xi)}^2)&>\frac{1}{\eta}\,|\Im(\zeta_{\sign(\xi)})|^2\,,& 1&>(1+\eta)\,\frac{\omega_0^2}{\alpha^2}\,, \end{align*} one needs to enforce \begin{align}\label{eq:curve-condition} \Re(\zeta_{\sign(\xi)})>\frac{|\alpha|}{\sqrt{\alpha^2-\omega_0^2}}\,|\Im(\zeta_{\sign(\xi)})|\,. \end{align} Likewise when $\omega_0>|\alpha|$, one may extract large-time decay for $|\xi|\geq \xi_0>0$ provided that $\Re(\zeta_{\sign(\xi)})$ is sufficiently large. \begin{proof} The starting point is that for any $\eta>0$, \begin{align*} &\Re\left(\Lambda(\xi)\,t+\left(\frac{\alpha}{2}-\,\sqrt{\frac{\alpha^2}{4}-\,b\,+\Lambda(\xi)}\right)\,\beta\right)\\ &\leq-\,\left(\frac{\alpha^2-\omega_0^2\,-\eta\,\left(\omega_0-\frac{\beta}{t}\right)^2}{4}+|b|\right)\,t -\frac{\omega_0-\alpha}{2}\,\beta -\frac{\xi^2}{4}\,\left(\Re(\zeta_{\sign(\xi)}^2)-\frac{1}{\eta}\,|\Im(\zeta_{\sign(\xi)})|^2\right)\,t\\ &\qquad= -\,|b|\,t -\frac{(\alpha\,t-\beta)^2}{4\,t}\, -\frac{\xi^2}{4}\,\left(\Re(\zeta_{\sign(\xi)}^2)-\frac{1}{\eta}\,|\Im(\zeta_{\sign(\xi)})|^2\right)\,t +\frac{(\omega_0\,t-\beta)^2}{4\,t}\,\left(1+\eta\right)\,. \end{align*} The first bound on the real part is then obtained by using the first formulation of the foregoing bound jointly with \[ \left(\omega_0-\frac{\beta}{t}\right)^2\leq \omega_0^2 \] whereas the second bound, specialized to the case $|\alpha|>\omega_0$, stems from the second formulation and \begin{align*} (|\alpha|\,t-\beta)^2&\leq\ (\alpha\,t-\beta)^2\,,& (\omega_0\,t-\beta)^2&\leq\ \frac{\omega_0^2}{\alpha^2}\,(|\alpha|\,t-\beta)^2\,.
\end{align*} \end{proof} \subsection{First separations} We would like to split $G^\varepsilon _t(x,y)$ into pieces corresponding to different behaviors. Yet we must take into account that our description of $G_\varepsilon (\lambda;x,y)$ differs in the high-frequency and compact regimes. To do so, we pick some curves and break them into pieces. Explicitly, motivated by \eqref{eq:curve-condition} with $\omega_0$ either $\omega_r^{\eta_0}$ or $\omega_\ell^{\eta_0}$ --- defined in \eqref{eq:om0} with $\eta_0$ as in Proposition~\ref{p:Sturm-Liouville} ---, we first choose $\zeta^{HF}_\pm$ such that \begin{align*} \Re(\zeta^{HF}_\pm)&\geq 2\,\frac{\max\left(\left\{|f'(\uu_{+\infty})-\sigma_\varepsilon |,f'(\uu_{-\infty})-\sigma_\varepsilon \right\}\right)}{\sqrt{2\,\eta_0}}\,|\Im(\zeta^{HF}_\pm)|\,,& \mp\Im(\zeta^{HF}_\pm)>0\,. \end{align*} Then we define curves $\Lambda_\varepsilon ^r$, $\Lambda_\varepsilon ^\ell$ through \begin{align*} 2\,\sqrt{\frac{(f'(\uu_{+\infty})-\sigma_\varepsilon )^2}{4}-\,\varepsilon \,g'(\uu_{+\infty})\,+\Lambda_\varepsilon ^r(\xi)} &\,=\,\omega_r^{HF}+\iD\xi\,\zeta^{HF}_{\sign(\xi)}\,,\\ 2\,\sqrt{\frac{(f'(\uu_{-\infty})-\sigma_\varepsilon )^2}{4}-\,\varepsilon \,g'(\uu_{-\infty})\,+\Lambda_\varepsilon ^\ell(\xi)} &\,=\,\omega_\ell^{HF}+\iD\xi\,\zeta^{HF}_{\sign(\xi)}\,, \end{align*} where $\omega_r^{HF}$ and $\omega_\ell^{HF}$ are fixed such that \begin{align*} \omega_r^{HF}&>|f'(\uu_{+\infty})-\sigma_0|\,,& \omega_\ell^{HF}&>|f'(\uu_{-\infty})-\sigma_0|\,. \end{align*} Note that this is sufficient to guarantee that both curves satisfy the requirements ensuring \eqref{eq:semi-group} thus also \eqref{eq:time-Green}.
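For later use, let us also record the fully explicit shape of curves of this type. Squaring a defining relation of the form $2\sqrt{a+\Lambda(\xi)}=\omega+\iD\,\xi\,\zeta_{\sign(\xi)}$, where $a$ denotes the relevant real constant, gives \[ \Lambda(\xi)\,=\,-a+\frac{1}{4}\left(\omega+\iD\,\xi\,\zeta_{\sign(\xi)}\right)^2\,, \] hence \begin{align*} \Re(\Lambda(\xi))&\,=\,-a+\frac{\omega^2-2\,\omega\,\xi\,\Im(\zeta_{\sign(\xi)})-\xi^2\,\Re(\zeta_{\sign(\xi)}^2)}{4}\,,& \Im(\Lambda(\xi))&\,=\,\frac{2\,\omega\,\xi\,\Re(\zeta_{\sign(\xi)})-\xi^2\,\Im(\zeta_{\sign(\xi)}^2)}{4}\,. \end{align*} In particular, $\Re(\zeta_\pm)>|\Im(\zeta_\pm)|$ forces $\Re(\zeta_\pm^2)>0$, hence $\Re(\Lambda(\xi))\to-\infty$ as $|\xi|\to\infty$, while $\mp\Im(\zeta_\pm)>0$ yields $\Im(\Lambda(\xi))\to\pm\infty$ as $\xi\to\pm\infty$, consistently with the requirements of \eqref{eq:semi-group}.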
We shall give a particular treatment to the parts of the curves corresponding to $|\xi|\leq\xi^{HF}$ where we choose $\xi^{HF}$ as \begin{align*} \xi^{HF} := \frac{2\,\max\left(\left\{\omega_r^{HF}-\omega_r^{\eta_0},\omega_\ell^{HF}-\omega_\ell^{\eta_0}\right\}\right)}{\min\left(\left\{|\Im(\zeta^{HF}_+)|,|\Im(\zeta^{HF}_-)|\right\}\right)}\,. \end{align*} Once again the motivation for the definition of $\xi^{HF}$ stems from Lemma~\ref{l:curves-small}. Indeed the definition ensures that for $\omega\in\R$, a curve $\Lambda$, defined through \begin{align*} 2\,\sqrt{\frac{(f'(\uu_{+\infty})-\sigma_\varepsilon )^2}{4}-\,\varepsilon \,g'(\uu_{+\infty})\,+\Lambda(\xi)} &\,=\,\omega+\iD\xi\,\zeta^{\omega}_{r,\sign(\xi)}\,, \end{align*} respectively through \begin{align*} 2\,\sqrt{\frac{(f'(\uu_{-\infty})-\sigma_\varepsilon )^2}{4}-\,\varepsilon \,g'(\uu_{-\infty})\,+\Lambda(\xi)} &\,=\,\omega+\iD\xi\,\zeta^{\omega}_{\ell,\sign(\xi)}\,, \end{align*} with \[ \zeta^{\omega}_{\sharp,\pm} :=\Re(\zeta^{HF}_\pm)+\iD\,\left(\Im\left(\zeta^{HF}_\pm\right) \mp\frac{\omega_\sharp^{HF}-\omega}{\xi^{HF}}\right)\,, \quad\sharp\in\{r,\ell\}\,, \] satisfies \begin{align*} \Lambda(\pm\xi^{HF})&=\Lambda_\varepsilon ^r(\pm\xi^{HF})\,,& \textrm{respectively }\quad \Lambda(\pm\xi^{HF})&=\Lambda_\varepsilon ^\ell(\pm\xi^{HF})\,, \end{align*} whereas, for $\sharp\in\{r,\ell\}$, $\omega\in[\omega_\sharp^{\eta_0},\omega_\sharp^{HF}]$, \begin{align*} \Re(\zeta^{\omega}_{\sharp,\pm})&=\Re(\zeta^{HF}_\pm)\,,& \mp\Im(\zeta^{\omega}_{\sharp,\pm})&>0\,,& |\Im(\zeta^{\omega}_{\sharp,\pm})|&\leq \frac32|\Im(\zeta^{HF}_\pm)|\,.
\end{align*} In the following, for $\sharp\in\{r,\ell\}$, we use the notation $\Lambda_\varepsilon ^{\sharp,LF}:=(\Lambda_\varepsilon ^\sharp)_{|[-\xi^{HF},\xi^{HF}]}$ and $\Lambda_\varepsilon ^{\sharp,HF}:=(\Lambda_\varepsilon ^\sharp)_{|\R\setminus[-\xi^{HF},\xi^{HF}]}$. To ensure that Lemma~\ref{l:curves-small} provides exponential time decay for the part of the evolution arising from $\Lambda_\varepsilon ^{\sharp,HF}$, we reinforce the constraint on $\Re(\zeta^{HF}_\pm)$ by adding \[ \Re(\zeta^{HF}_\pm)\ \geq \ \sqrt{\Im(\zeta^{HF}_\pm)^2+2\,\frac{\max\left(\left\{(\omega_r^{HF})^2-(f'(\uu_{+\infty})-\sigma_\varepsilon )^2, (\omega_\ell^{HF})^2-(f'(\uu_{-\infty})-\sigma_\varepsilon )^2\right\}\right)}{(\xi^{HF})^2}}\,. \] Anticipating our needs when analyzing small-$\lambda$ expansions, we point out that by lowering $\eta_0$, $\omega_\ell^{HF}$ and $\omega_r^{HF}$, we may enforce that for $\sharp\in\{r,\ell\}$, when $\omega=\omega_\sharp^{\eta_0}$ and $\Lambda$ is defined as above, there exist $\omega'>0$ and $\delta >0$ such that for any $\xi\in [-\xi^{HF},\xi^{HF}]$ \begin{align*} \Re(\Lambda(\xi))&\leq -\omega'\,,& \Re\left(\sqrt{\frac{(f'(\uu_{+\infty})-\sigma_\varepsilon )^2}{4}-\,\varepsilon \,g'(\uu_{+\infty})\,+\Lambda(\xi)}\right)\leq \frac{(-f'(\uu_{+\infty})+\sigma_\varepsilon )}{2}\,- \delta \, , \end{align*} respectively \begin{align*} \Re(\Lambda(\xi))&\leq -\omega'\,,& \Re\left(\sqrt{\frac{(f'(\uu_{-\infty})-\sigma_\varepsilon )^2}{4}-\,\varepsilon \,g'(\uu_{-\infty})\,+\Lambda(\xi)}\right)\leq \frac{(f'(\uu_{-\infty})-\sigma_\varepsilon )}{2}\, -\delta \,.
\end{align*} After these preliminaries, to account for different behaviors, when $t>0$ we break $G^\varepsilon _t$ as \begin{equation} G^\varepsilon _t\,=\,G^{\varepsilon ,\textrm{pt}}_t+G^{\varepsilon ,\textrm{ess}}_t \end{equation} with $G^{\varepsilon ,\textrm{pt}}_t$ and $G^{\varepsilon ,\textrm{ess}}_t$ defined as follows. First \begin{align*} G^{\varepsilon ,\textrm{pt}}_t(x,y) &=0\,,&\,\textrm{ if }xy>0\textrm{ and } \left(\,y\geq\omega_r^{HF}\,t\ \textrm{or}\ y\leq-\omega_\ell^{HF}\,t\,\right) \end{align*} \begin{align*} G^{\varepsilon ,\textrm{pt}}_t(x,y)&\,=\, \frac{1}{2\iD\pi}\,\int_\Lambda \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda &\,\textrm{if }xy<0\,, \end{align*} and when $xy>0$ and $-\omega_\ell^{HF}\,t<y<\omega_r^{HF}\,t$ \begin{align*} &G^{\varepsilon ,\textrm{pt}}_t(x,y)\\ &= \begin{cases} -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x)\, \dfrac{\ttau^\ell_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y)\,\dD\lambda &\,\textrm{if }x>y>0\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \dfrac{\tau^r_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\beD_1\cdot\bV_\varepsilon ^{\ell,u}(\lambda,x)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y)\,\dD\lambda &\,\textrm{if }0>x>y\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \beD_1\cdot\bV_\varepsilon ^{\ell,u}(\lambda,x)\, \dfrac{\ttau^r_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,
\beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y)\,\dD\lambda &\,\textrm{if }0>y>x\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \dfrac{\tau^\ell_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x)\, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y)\,\dD\lambda &\,\textrm{if }y>x>0\,, \end{cases} \end{align*} where, here and in the definition of $G^{\varepsilon ,\textrm{ess}}_t$, $\Lambda$ is either $\Lambda=\Lambda_\varepsilon ^r$ or $\Lambda=\Lambda_\varepsilon ^\ell$, and we use compact notation for integrals over curves instead of explicitly parametrized versions. Second, \begin{align*} G^{\varepsilon ,\textrm{ess}}_t(x,y) &=0\,,&\,\textrm{ if }xy<0 \end{align*} \begin{align*} G^{\varepsilon ,\textrm{ess}}_t(x,y) &=\frac{1}{2\iD\pi}\,\int_\Lambda \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda\,, &\,\textrm{ if }xy>0\textrm{ and } \left(\,y\geq\omega_r^{HF}\,t\ \textrm{or}\ y\leq-\omega_\ell^{HF}\,t\,\right)\,, \end{align*} and when $xy>0$ and $-\omega_\ell^{HF}\,t<y<\omega_r^{HF}\,t$ \begin{align*} &G^{\varepsilon ,\textrm{ess}}_t(x,y)\\ &= \begin{cases} \,-\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x)\, \dfrac{\trho^\ell_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\, \beD_1\cdot\tbV_\varepsilon ^{r,u}(\lambda,y)\,\dD\lambda&\\ \quad+\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,HF}} \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda &\,\textrm{if }x>y>0\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon
)-f'(\uu_{-\infty}))}\, \dfrac{\rho^r_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\beD_1\cdot\bV_\varepsilon ^{\ell,s}(\lambda,x)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y)\,\dD\lambda&\\ \quad+\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,HF}} \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda &\,\textrm{if }0>x>y\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \beD_1\cdot\bV_\varepsilon ^{\ell,u}(\lambda,x)\, \dfrac{\trho^r_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\, \beD_1\cdot\tbV_\varepsilon ^{\ell,s}(\lambda,y)\,\dD\lambda&\\ \quad+\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,HF}} \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda &\,\textrm{if }0>y>x\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \dfrac{\rho^\ell_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\,\beD_1\cdot\bV_\varepsilon ^{r,u}(\lambda,x)\, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y)\,\dD\lambda&\\ \quad+\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,HF}} \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda &\,\textrm{if }y>x>0\,. \end{cases} \end{align*} Note that the above splitting implies \[ \d_xG^\varepsilon _t\,=\,\d_xG^{\varepsilon ,\textrm{pt}}_t+\d_xG^{\varepsilon ,\textrm{ess}}_t \] where here and elsewhere throughout the text, $\d_x$ acting on either $G^{\varepsilon ,\textrm{pt}}_t$ or $G^{\varepsilon ,\textrm{ess}}_t$ is understood as a pointwise derivative wherever these functions are continuous.
The rationale behind the splitting is that the large-time decay of $G^{\varepsilon ,\textrm{ess}}_t$ is essentially limited by spatial decay, hence may be thought of as purely explained by essential-spectrum considerations, whereas the large-time asymptotics of $G^{\varepsilon ,\textrm{pt}}_t$ is driven by the presence near the spectral curves of a root of $D_\varepsilon $ at $\lambda=0$, hence is due to the interaction of essential and point spectra. Some extra complications in the splitting are due to the fact that we need to prepare the identification of the most singular part as a phase modulation, which comes in tensorized form. This explains why we define zones in terms of the size of $|y|$, instead of the otherwise more natural $|x-y|$. \subsection{First pointwise bounds} We begin our use of Lemmas~\ref{l:curves-large} and~\ref{l:curves-small} with short-time bounds. \begin{lemma}\label{l:short-time} There exist positive $(\varepsilon _0,C,\omega,\theta)$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $(x,y)\in\R^2$ \begin{align*} |G^{\varepsilon ,\textrm{pt}}_t(x,y)|+\min\left(\{1,\sqrt{t}\}\right)|\d_xG^{\varepsilon ,\textrm{pt}}_t(x,y)|&\,\leq\,C\,\eD^{\omega\,t}\,\eD^{-\theta\,|x|}\,\frac{1}{\sqrt{t}}\eD^{-\theta\,\frac{y^2}{t}}\,. \end{align*} \end{lemma} The foregoing lemma does not contain estimates on $G^{\varepsilon ,\textrm{ess}}_t$ because those would be redundant with the corresponding large-time estimates. The point of Lemma~\ref{l:short-time} is to show that for short-time estimates the singularity at $\lambda=0$ may be avoided, whereas this singularity is not present in $G^{\varepsilon ,\textrm{ess}}_t$. \begin{proof} To bound $G^{\varepsilon ,\textrm{pt}}_t(x,y)$ when $xy<0$, we separate the cases $x>0>y$ and $x<0<y$. The analyses being completely similar, we only discuss here the former case.
To treat it, we move curves as in Lemmas~\ref{l:curves-large} and~\ref{l:curves-small} with $\beta_0=\beta=|y|$, $\alpha=f'(\uu_{-\infty})-\sigma_\varepsilon $, $b=\varepsilon \,g'(\uu_{-\infty})$ and note that \[ \Re(\mu_-^\varepsilon (\lambda,\uu_{+\infty}))\,\leq\, \frac12\left(f'(\uu_{+\infty})-\sigma_\varepsilon \right) \,<\,0\,. \] More explicitly, we use Lemma~\ref{l:curves-large} to bound the regime $|y|\geq \omega_\ell^{HF}\,t$ which leads to the claimed heat-like bound since \begin{align*} \eD^{-\frac{(|y|-\alpha\,t)^2}{4\,t}} &\leq \eD^{-\left(1-\frac{|\alpha|}{\omega_\ell^{HF}}\right)\,\frac{y^2}{4\,t}}\,,& &|y|\geq \omega_\ell^{HF}\,t\,. \end{align*} In the remaining zone where $|y|\leq \omega_\ell^{HF}\,t$ we use instead Lemma~\ref{l:curves-small} to derive a bound that may be converted into a heat-like bound through \begin{align*} \eD^{-\frac{\omega_\ell^{HF}-\alpha}{2}|y|} &\leq \eD^{-\left(1-\frac{|\alpha|}{\omega_\ell^{HF}}\right)\,\frac{y^2}{2\,t}}\,,& &|y|\leq \omega_\ell^{HF}\,t\,. \end{align*} The estimates on $G^{\varepsilon ,\textrm{pt}}_t(x,y)$ when $xy>0$ are obtained in exactly the same way. \end{proof} We proceed with bounds on $G^{\varepsilon ,\textrm{ess}}_t$.
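Before doing so, let us record the elementary computations underlying the heat-like conversions of the foregoing proof; there, any positive Gaussian rate is sufficient, since constants are ultimately absorbed in $\theta$. Writing $\omega$ for $\omega_\ell^{HF}$ and assuming, as the proof implicitly does, that $\omega>|\alpha|$, when $|y|\geq\omega\,t$ one has $t\leq |y|/\omega$, hence \[ |y|-\alpha\,t\,\geq\,|y|-|\alpha|\,t\,\geq\,\left(1-\frac{|\alpha|}{\omega}\right)|y|\,\geq\,0\,, \qquad\textrm{thus}\qquad (|y|-\alpha\,t)^2\,\geq\,\left(1-\frac{|\alpha|}{\omega}\right)^2 y^2\,, \] whereas, when $|y|\leq\omega\,t$, one has $y^2/t\leq\omega\,|y|$, hence \[ \left(1-\frac{|\alpha|}{\omega}\right)\frac{y^2}{2\,t}\,\leq\,\frac{\omega-|\alpha|}{2}\,|y|\,\leq\,\frac{\omega-\alpha}{2}\,|y|\,. \]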
\begin{lemma}\label{l:Grho} There exist positive $(\varepsilon _0,C,\omega,\theta)$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $(x,y)\in\R^2$ \begin{align*} &|G^{\varepsilon ,\textrm{ess}}_t(x,y)|+\min\left(\{1,\sqrt{t}\}\right)|\d_xG^{\varepsilon ,\textrm{ess}}_t(x,y)|\\ &\quad\leq\,C\,\text{\bf 1}_{|x-y|\leq |y|}\,\eD^{-\min\left(\left\{|g'(\uu_{-\infty})|,|g'(\uu_{+\infty})|\right\}\right)\,\varepsilon \,t}\,\frac{1}{\sqrt{t}}\left(\eD^{-\theta\,\frac{|x-y-(f'(\uu_{+\infty})-\sigma_\varepsilon )\,t|^2}{t}} +\eD^{-\theta\,\frac{|x-y-(f'(\uu_{-\infty})-\sigma_\varepsilon )\,t|^2}{t}}\right)\\ &\qquad+\,C\,\text{\bf 1}_{|x-y|\geq |y|}\eD^{-\omega\,t}\,\frac{1}{\sqrt{t}}\eD^{-\theta\,\frac{y^2}{t}}\,. \end{align*} This also implies that there exist positive $(\varepsilon _0,C,\theta)$ such that for any $\theta'>0$ there exists $\omega'>0$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $(x,y)\in\R^2$ \begin{align*} \eD^{-\theta'|y|}\,\Big(|G^{\varepsilon ,\textrm{ess}}_t(x,y)|&+\min\left(\{1,\sqrt{t}\}\right)|\d_xG^{\varepsilon ,\textrm{ess}}_t(x,y)|\Big)\\ &\quad\leq\,C\,\eD^{-\omega'\,t}\,\frac{1}{\sqrt{t}}\left(\eD^{-\theta\,\frac{|x-y-(f'(\uu_{+\infty})-\sigma_\varepsilon )\,t|^2}{t}} +\eD^{-\theta\,\frac{|x-y-(f'(\uu_{-\infty})-\sigma_\varepsilon )\,t|^2}{t}} +\eD^{-\theta\,\frac{y^2}{t}}\right)\,. \end{align*} \end{lemma} In the foregoing statement and throughout the text we use $\text{\bf 1}_A$ to denote the characteristic function of the condition $A$. \begin{proof} To deduce the second bound from the first we observe that for any $\alpha$ \[ \frac{\theta}{2\,t}\,|x-y-\alpha\,t|^2+\theta'|x-y| \geq\,\begin{cases} \frac12\,\theta'\,|\alpha|\,t &\,\textrm{if }|x-y-\alpha\,t|\leq \frac12|\alpha|\,t\\ \frac{\theta}{4}\,|\alpha|^2\,t &\,\textrm{if }|x-y-\alpha\,t|\geq \frac12|\alpha|\,t \end{cases}\,.
\] Indeed, in the first case one has $|x-y|\geq\frac12\,|\alpha|\,t$, while in the second the quadratic term alone suffices. To prove the first bound, we should distinguish between the regimes defined by $0<y<x$, $y<x<0$, $0>y>x$ and $y>x>0$. The regimes $0<y<x$ and $0>y>x$ on one hand, and $y<x<0$ and $y>x>0$ on the other hand, may be treated similarly, so we give details only for the cases $y<x<0$ and $0<y<x$. Note that when $y<x<0$, we have $|x-y|\leq |y|$. When $y<x<0$ and $|y|\geq\omega_\ell^{HF}\,t$, we move the curve according to Lemma~\ref{l:curves-large} with $\beta_0=\beta=|x-y|$, $\alpha=f'(\uu_{-\infty})-\sigma_\varepsilon $, $b=\varepsilon \,g'(\uu_{-\infty})$. To analyze the regime when $y<x<0$ and $|y|<\omega_\ell^{HF}\,t$, we never move the curve $\Lambda_\varepsilon ^{\ell,HF}$ (but bound its contribution according to Lemma~\ref{l:curves-small}), whereas we move $\Lambda_\varepsilon ^{\ell,LF}$ as in Lemma~\ref{l:curves-large} with $\zeta_{\pm}=\zeta^{|x-y|/t}_{\ell,\pm}$ when $|x-y|\geq \omega_\ell^{\eta_0}\,t$, or as in Lemma~\ref{l:curves-small} with $\zeta_{\pm}=\zeta^{\omega_\ell^{\eta_0}}_{\ell,\pm}$ when $|x-y|\leq\omega_\ell^{\eta_0}\,t$. To bound the contribution of the regime $0<y<x$, we may proceed as when $y<x<0$ provided that $|x-y|\leq |y|$ or $-\omega_\ell^{HF}\,t\leq y\leq \omega_r^{HF}\,t$. The remaining case is dealt with by applying Lemma~\ref{l:curves-large} with $\beta_0=|y|$ and $\beta=|x-y|$, using the fact that $f'(\uu_{+\infty})-\sigma_\varepsilon <0$. \end{proof} \subsection{Linear phase separation} The large-time estimates for $G^{\varepsilon ,\textrm{pt}}_t$ require a phase separation.
To carry it out, we first recall that there exist $(a^r_\varepsilon ,a^\ell_\varepsilon )\in\R^2$, each uniformly bounded from below and above, such that \begin{align*} \beD_1\cdot\bV_\varepsilon ^{r,s}(0,\cdot)&=a^r_\varepsilon \,\uU'_\varepsilon \,,& \beD_1\cdot\bV_\varepsilon ^{\ell,u}(0,\cdot)&=a^\ell_\varepsilon \,\uU'_\varepsilon \,. \end{align*} Then we split $G^{\varepsilon ,\textrm{pt}}_t$ as \[ G^{\varepsilon ,\textrm{pt}}_t(x,y)\,=\,\uU'_\varepsilon (x)\,G^{\varepsilon ,\textrm{p}}_t(y)\,+\,\tG^{\varepsilon ,\textrm{pt}}_t(x,y) \] with \begin{align*} G^{\varepsilon ,\textrm{p}}_t(y) &=0\,,&\,\textrm{ if } \left(\,y\geq\omega_r^{HF}\,t\ \textrm{or}\ y\leq-\omega_\ell^{HF}\,t\,\right)\,, \end{align*} whereas when $-\omega_\ell^{HF}\,t<y<\omega_r^{HF}\,t$ \begin{align*} G^{\varepsilon ,\textrm{p}}_t(y) &= \begin{cases}\ds -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, a^\ell_\varepsilon \, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y)\,\dfrac{\dD\lambda}{D_\varepsilon (\lambda)} &\,\textrm{if }y>0\\[1em]\ds -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, a^r_\varepsilon \, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y)\,\dfrac{\dD\lambda}{D_\varepsilon (\lambda)} &\,\textrm{if }y<0\,.
\end{cases} \end{align*} As a result, when ($y\geq\omega_r^{HF}\,t$ or $y\leq-\omega_\ell^{HF}\,t$), \begin{align*} \tG^{\varepsilon ,\textrm{pt}}_t(x,y) &= \begin{cases} 0\,,&\,\textrm{ if }xy>0\\ \frac{1}{2\iD\pi}\,\int_\Lambda \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda &\,\textrm{if }xy<0\,, \end{cases} \end{align*} whereas when $xy<0$ and $-\omega_\ell^{HF}\,t<y<\omega_r^{HF}\,t$ \begin{align*} &\tG^{\varepsilon ,\textrm{pt}}_t(x,y)\\ &= \begin{cases} \,-\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \beD_1\cdot\left(\bV_\varepsilon ^{r,s}(\lambda,x)-\bV_\varepsilon ^{r,s}(0,x)\right)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y)\,\dfrac{\dD\lambda}{D_\varepsilon (\lambda)}&\\ \quad+\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,HF}} \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda \,\textrm{if }x>0>y\\[0.75em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \beD_1\cdot\left(\bV_\varepsilon ^{\ell,u}(\lambda,x)-\bV_\varepsilon ^{\ell,u}(0,x)\right)\, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y)\,\dfrac{\dD\lambda}{D_\varepsilon (\lambda)}&\\ \quad+\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,HF}} \eD^{\lambda\,t}\, G_\varepsilon (\lambda;x,y)\,\dD\lambda \,\textrm{if }y>0>x\,, \end{cases} \end{align*} and when $xy>0$ and $-\omega_\ell^{HF}\,t<y<\omega_r^{HF}\,t$, $\tG^{\varepsilon ,\textrm{pt}}_t(x,y)$ equals \begin{align*} \begin{cases} -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \left(\ttau^\ell_\varepsilon
(\lambda)\,\beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x)-\ttau^\ell_\varepsilon (0)\,\beD_1\cdot\bV_\varepsilon ^{r,s}(0,x)\right)\, \dfrac{1}{D_\varepsilon (\lambda)}\, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y)\,\dD\lambda \\ \qquad-\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \beD_1\cdot\bV_\varepsilon ^{r,s}(0,x)\, \dfrac{\trho^\ell_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\, \beD_1\cdot\tbV_\varepsilon ^{r,u}(\lambda,y)\,\dD\lambda \,\textrm{if }x>y>0\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{\int_{-\infty}^0(f'(\uU_\varepsilon )-f'(\uu_{-\infty}))}\, \left(\tau^r_\varepsilon (\lambda)\,\beD_1\cdot\bV_\varepsilon ^{\ell,u}(\lambda,x) -\tau^r_\varepsilon (0)\,\beD_1\cdot\bV_\varepsilon ^{\ell,u}(0,x) \right)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y) \,\dfrac{\dD\lambda}{D_\varepsilon (\lambda)}\\ \,\textrm{if }0>x>y\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \left(\ttau^r_\varepsilon (\lambda)\,\beD_1\cdot\bV_\varepsilon ^{\ell,u}(\lambda,x)-\ttau^r_\varepsilon (0)\,\beD_1\cdot\bV_\varepsilon ^{\ell,u}(0,x)\right)\, \beD_1\cdot\tbV_\varepsilon ^{\ell,u}(\lambda,y) \,\dfrac{\dD\lambda}{D_\varepsilon (\lambda)}\\ \qquad-\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{\ell,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \beD_1\cdot\bV_\varepsilon ^{\ell,u}(0,x)\, \dfrac{\trho^r_\varepsilon (\lambda)}{D_\varepsilon (\lambda)}\, \beD_1\cdot\tbV_\varepsilon ^{\ell,s}(\lambda,y)\,\dD\lambda
\,\textrm{if }0>y>x\\[0.5em] -\frac{1}{2\iD\pi}\,\int_{\Lambda_\varepsilon ^{r,LF}} \eD^{\lambda\,t}\, \eD^{-\int_0^{+\infty}(f'(\uU_\varepsilon )-f'(\uu_{+\infty}))}\, \left(\tau^\ell_\varepsilon (\lambda)\,\beD_1\cdot\bV_\varepsilon ^{r,s}(\lambda,x) -\tau^\ell_\varepsilon (0)\,\beD_1\cdot\bV_\varepsilon ^{r,s}(0,x)\right)\, \beD_1\cdot\tbV_\varepsilon ^{r,s}(\lambda,y)\,\dfrac{\dD\lambda}{D_\varepsilon (\lambda)}\\ \,\textrm{if }y>x>0\,, \end{cases} \end{align*} where, here again, $\Lambda$ is either $\Lambda=\Lambda_\varepsilon ^r$ or $\Lambda=\Lambda_\varepsilon ^\ell$. In $\tG^{\varepsilon ,\textrm{pt}}_t$, the contributions due to $\trho$-terms do not fit directly into the framework of Lemmas~\ref{l:curves-large} and~\ref{l:curves-small}. It is here that we instead use the fact that $\eta_0$, $\omega_r^{HF}$ and $\omega_\ell^{HF}$ were taken sufficiently small to guarantee even simpler bounds when $\Lambda_\varepsilon ^{\sharp,LF}$ is moved as in Lemma~\ref{l:curves-small} with $\omega_0^\sharp=\omega_\sharp^{\eta_0}$, $\zeta_{\pm}=\zeta^{\omega_0^\sharp}_{\sharp,\pm}$, though we do not restrict to the zone $|y|\leq \omega_0^\sharp\,t$. Proceeding as above for the remaining bounds, we obtain the following lemmas.
\begin{lemma}\label{l:tGtau} There exist positive $(\varepsilon _0,C,\omega,\theta)$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $(x,y)\in\R^2$ \begin{align*} &|\tG^{\varepsilon ,\textrm{pt}}_t(x,y)|+\min\left(\{1,\sqrt{t}\}\right)|\d_x\tG^{\varepsilon ,\textrm{pt}}_t(x,y)|\\ &\quad\leq\,C\,\eD^{-\min\left(\left\{|g'(\uu_{-\infty})|,|g'(\uu_{+\infty})|\right\}\right)\,\varepsilon \,t}\,\eD^{-\theta\,|x|}\, \frac{1}{\sqrt{t}}\left(\eD^{-\theta\,\frac{|y-(f'(\uu_{+\infty})-\sigma_\varepsilon )\,t|^2}{t}} +\eD^{-\theta\,\frac{|y+(f'(\uu_{-\infty})-\sigma_\varepsilon )\,t|^2}{t}}\right)\\ &\qquad +C\,\eD^{-\omega\,t}\,\eD^{-\theta\,|x|}\,\eD^{-\theta\,|x-y|}\,. \end{align*} This also implies that there exist positive $(\varepsilon _0,C,\theta)$ such that for any $\theta'>0$ there exists $\omega'>0$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $(x,y)\in\R^2$ \begin{align*} \eD^{-\theta'|y|}\,&\Big(|\tG^{\varepsilon ,\textrm{pt}}_t(x,y)|+\min\left(\{1,\sqrt{t}\}\right)|\d_x\tG^{\varepsilon ,\textrm{pt}}_t(x,y)|\Big)\\ &\leq\,C\,\eD^{-\omega'\,t}\,\left(\frac{1}{\sqrt{t}}\left(\eD^{-\theta\,\frac{|y-(f'(\uu_{+\infty})-\sigma_\varepsilon )\,t|^2}{t}} +\eD^{-\theta\,\frac{|y+(f'(\uu_{-\infty})-\sigma_\varepsilon )\,t|^2}{t}}\right) +\,\eD^{-\theta\,|x|}\,\eD^{-\theta\,|x-y|}\,\eD^{-\theta'|y|}\right)\,. \end{align*} \end{lemma} \begin{lemma}\label{l:Gpsi} There exist positive $(\varepsilon _0,C,\omega,\theta)$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $y\in\R$ \begin{align*} |G^{\varepsilon ,\textrm{p}}_t(y)|&\,\leq\,C\,\eD^{\omega\,t}\,\frac{1}{\sqrt{t}}\,\eD^{-\theta\,\frac{y^2}{t}}\,.
\end{align*} Moreover, there exist positive $(\varepsilon _0,C,\omega,\theta)$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $y\in\R$ \begin{align*} |\d_tG^{\varepsilon ,\textrm{p}}_t(y)| &\leq\,C\,\eD^{-\min\left(\left\{|g'(\uu_{-\infty})|,|g'(\uu_{+\infty})|\right\}\right)\,\varepsilon \,t}\, \frac{1}{\sqrt{t}}\left(\eD^{-\theta\,\frac{|y-(f'(\uu_{+\infty})-\sigma_\varepsilon )\,t|^2}{t}} +\eD^{-\theta\,\frac{|y+(f'(\uu_{-\infty})-\sigma_\varepsilon )\,t|^2}{t}}\right)\,. \end{align*} This also implies that there exist positive $(\varepsilon _0,C,\theta)$ such that for any $\theta'>0$ there exists $\omega'>0$ such that for any $t>0$, any $0\leq\varepsilon \leq\varepsilon _0$ and any $y\in\R$ \begin{align*} \eD^{-\theta'|y|}\,|\d_tG^{\varepsilon ,\textrm{p}}_t(y)| &\leq\,C\,\eD^{-\omega'\,t}\,\frac{1}{\sqrt{t}}\left(\eD^{-\theta\,\frac{|y-(f'(\uu_{+\infty})-\sigma_\varepsilon )\,t|^2}{t}} +\eD^{-\theta\,\frac{|y+(f'(\uu_{-\infty})-\sigma_\varepsilon )\,t|^2}{t}}\right)\,. \end{align*} \end{lemma} To conclude and prove Proposition~\ref{p:linear}, we pick a smooth cut-off function $\chi$ on $[0,+\infty)$ such that $\chi\equiv 1$ on $[2,+\infty)$ and $\chi\equiv 0$ on $[0,1]$, and define $s_\varepsilon ^{\textrm{p}}$, $\tS_\varepsilon $ by \begin{align*} s_\varepsilon ^{\textrm{p}}(t)(w)&\,:=\, \int_\R \chi(t)\,G^{\varepsilon ,\textrm{p}}_t(y)\,w(y)\,\dD y\,,\\ \tS_\varepsilon (t)(w)(x)&\,:=\, \int_\R \left(\chi(t)\,(G^{\varepsilon ,\textrm{ess}}_t(x,y)+\tG^{\varepsilon ,\textrm{pt}}_t(x,y))+(1-\chi(t))\,G^\varepsilon _t(x,y)\right)\,w(y)\,\dD y\,. \end{align*} The definitions are extended to $t=0$ by $s_\varepsilon ^{\textrm{p}}(0)(w)=0$ and $\tS_\varepsilon (0)(w)=w$. As explained near its statement, Proposition~\ref{p:linear} then follows from $L^\infty_xL^1_y$ bounds on Green kernels, which are themselves derived from the pointwise bounds proved above.
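The passage from pointwise bounds to $L^\infty_xL^1_y$ bounds relies on the fact that each Gaussian term in the foregoing estimates carries an $x$- and $t$-uniform $L^1_y$ mass. As a sketch of the elementary computation involved (stated for a generic speed $\alpha\in\R$ and Gaussian rate $\theta>0$, which stand in for the various speeds and rates above), a change of variables gives

```latex
\[
\int_\R \frac{1}{\sqrt{t}}\,\eD^{-\theta\,\frac{|x-y-\alpha\,t|^2}{t}}\,\dD y
\,=\,\frac{1}{\sqrt{t}}\,\int_\R \eD^{-\theta\,\frac{z^2}{t}}\,\dD z
\,=\,\sqrt{\frac{\pi}{\theta}}\,,
\]
```

so that the time decay of the corresponding operator norms is dictated by the exponential prefactors $\eD^{-\varepsilon\,\omega_\infty\,t}$ or $\eD^{-\omega'\,t}$ alone.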
\section{Nonlinear stability} In the present section we conclude the proof of Theorem~\ref{th:main-sl}. To do so, we seek $u$ solving \eqref{eq:scalar-sl} in the form \begin{equation}\label{eq:stab-ansatz} u(t,x)\,=\,\uU_\varepsilon (x+\psi(t))+v(t,x+\psi(t))\,, \end{equation} with $(v,\psi')$ exponentially decaying in time. In these terms the equation becomes \begin{align}\label{eq:scalar-ansatz} \d_tv&+(f'(\uU_\varepsilon +v)-\sigma_\varepsilon +\psi')\d_x v-\d_x^2 v\\ &\nonumber \,=\,\varepsilon \,(g(\uU_\varepsilon +v)-g(\uU_\varepsilon )) -(f'(\uU_\varepsilon +v)-f'(\uU_\varepsilon )+\psi')\uU_\varepsilon '\,. \end{align} Equation~\eqref{eq:scalar-ansatz} may be solved through \begin{align}\label{eq:v} v(t,\cdot)&\,=\, \tS_\varepsilon (t)\,(v_0)\,+\,\int_0^t\tS_\varepsilon (t-s)\,\cN_\varepsilon [v(s,\cdot),\psi'(s)]\,\dD s\,,\\ \label{eq:psi} \psi'(t)&\,=\, \d_ts_\varepsilon ^{\textrm{p}}(t)\,(v_0)\,+\,\int_0^t\d_ts_\varepsilon ^{\textrm{p}}(t-s)\,\cN_\varepsilon [v(s,\cdot),\psi'(s)]\,\dD s\,, \end{align} with \begin{align*} \cN_\varepsilon [w,\varphi]&:= -(f'(\uU_\varepsilon +w)-f'(\uU_\varepsilon )+\varphi)\d_x w +\varepsilon \,(g(\uU_\varepsilon +w)-g(\uU_\varepsilon )-g'(\uU_\varepsilon )\,w)\\ &\quad-(f'(\uU_\varepsilon +w)-f'(\uU_\varepsilon )-f''(\uU_\varepsilon )\,w)\uU_\varepsilon '\,. \end{align*} In the present section, for the sake of notational concision, we denote \[ \omega_\infty=\min(|g'(\uu_{-\infty})|,|g'(\uu_{+\infty})|)\,. \] To begin with, we observe that the estimates of the foregoing section are almost sufficient to run a continuity argument on \eqref{eq:v}-\eqref{eq:psi}. Indeed, they provide the following proposition.
\begin{proposition}\label{p:Duhamel} There exist $\theta_0>0$ and $\varepsilon _0>0$ such that for any $0<\theta\leq \theta_0$ and $\delta>0$, there exists $C>0$ such that for any $0<\varepsilon \leq\varepsilon _0$ and $T>0$, if $(v,\psi')$ solves \eqref{eq:v}-\eqref{eq:psi} on $[0,T]$, with \begin{align*} \|v(t,\cdot)\|_{L^\infty(\R)}&\leq \delta\,,&\qquad t\in[0,T]\,, \end{align*} then, for any $t\in [0,T]$, \begin{align*} |\psi'(t)|&+\|v(t,\cdot)\|_{W^{1,\infty}(\R)}\\ &\leq\, C\,\|v(0,\cdot)\|_{W^{1,\infty}(\R)}\,\eD^{-\varepsilon \,\omega_\infty t}\\ &\quad\times\,\exp\left( C\,\sup_{0\leq s\leq T}\eD^{\varepsilon \,\omega_\infty s}(|\psi'(s)| +\|v(s,\cdot)\|_{L^\infty(\R)} +\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_xv(s,\cdot)\|_{L^\infty(\R)})\right)\,. \end{align*} \end{proposition} The estimate fails to close because $\|\d_x w\|_{L^\infty(\R)}$ provides a weaker $\varepsilon $-uniform control on $w$ than $\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_x w\|_{L^\infty(\R)}$. Note however that, for any $x_*>0$, \[ \|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_x w\|_{L^\infty([-x_*,x_*])} \leq \eD^{\theta x_*}\,\|\d_x w\|_{L^\infty(\R)}\,, \] so that we only need to improve the estimates on $\d_x v(t,\cdot)$ on the complement of some compact neighborhood of $0$. \subsection{Maximum principle and propagation of regularity}\label{s:maximum} To close our nonlinear estimates without using either localization or parabolic smoothing --- which would cause losses in powers of $\varepsilon $ --- we shall use a maximum principle argument. To begin with, we state and prove a convenient classical abstract maximum principle. We provide a proof mostly to highlight that it may be thought of as an energy estimate on a suitable nonlinear function. \begin{lemma}\label{l:maximum} Let $T>0$, $x_*\in\R$, $a\in L^1([0,T];W^{1,\infty}([x_*,+\infty)))$ bounded from above away from zero and $h\in\cC^0([0,T]\times[x_*,+\infty)\times\R)$.
If $w\in\cC^2((0,T)\times[x_*,+\infty))\cap\cC^0([0,T]\times[x_*,+\infty))$ is a bounded function such that \[ \d_t w+a(\cdot,\cdot)\,\d_x w\leq\d_x^2 w+h(\cdot,\cdot,w)\,,\qquad \textrm{on }[0,T]\times[x_*,+\infty)\,, \] and $M$ is a positive constant such that \begin{align*} M&\geq w(\cdot,x_*)\,,&\qquad\textrm{on }[0,T]\,,\\ M&\geq w(0,\cdot)\,,&\qquad\textrm{on }[x_*,+\infty)\,,\\ 0&\geq \text{\bf 1}_{\cdot>M}\,h(t,x,\cdot)\,,&\qquad (t,x)\in[0,T]\times[x_*,+\infty)\,, \end{align*} then \[ w\leq M\,,\qquad\textrm{on }[0,T]\times[x_*,+\infty)\,. \] \end{lemma} \begin{proof} When moreover \begin{align*} M&> \limsup_{x\to\infty}w(\cdot,x)\,,&\qquad\textrm{on }[0,T]\,, \end{align*} the claim is proved by a Gr\"onwall argument on \[ t\mapsto\int_{x_*}^{+\infty}(w(t,x)-M)_+\,\dD x\,. \] The general case is recovered by applying this special case to $(t,x)\mapsto\eD^{-\theta\,(x-x_*)}\,w(t,x)$ with $\theta>0$ sufficiently small and taking the limit $\theta\to0$. \end{proof} We now use the foregoing lemma to derive a weighted bound on $\d_xv$ outside a sufficiently large compact neighborhood of $0$. We shall insert such a bound in a continuity argument, so that we only need to prove that, as long as $\d_xv$ does not become too large, it remains small. This is the content of the following proposition.
\begin{proposition}\label{p:energy} There exists $\theta_0>0$ such that for any $0<\theta\leq \theta_0$, there exist $x_*>0$, $\varepsilon _0>0$, $\delta>0$ and $C>0$ such that for any $0<\varepsilon \leq\varepsilon _0$ and $T>0$, if $(v,\psi')$ solves \eqref{eq:scalar-ansatz} on $[0,T]\times\R$, with \begin{align*} |\psi'(t)|+\|v(t,\cdot)\|_{L^\infty(\R)}&\leq \delta \eD^{-\varepsilon \,\omega_\infty t}\,,&\qquad t\in[0,T]\,,\\ \frac{|\d_xv(t,x)|}{\varepsilon +\eD^{-\theta |x|}}&\leq \delta \eD^{-\varepsilon \,\omega_\infty t}\,,&\qquad (t,x)\in [0,T]\times\R\,, \end{align*} then for any $(t,x)\in [0,T]\times(\R\setminus [-x_*,x_*])$ \begin{align*} \frac{|\d_xv(t,x)|}{\varepsilon +\eD^{-\theta |x|}}\,\leq C\,\eD^{-\varepsilon \,\omega_\infty t} \,\times\Big(\ &\sup_{0\leq s\leq T}\eD^{\varepsilon \,\omega_\infty s}(|\psi'(s)| +\|v(s,\cdot)\|_{L^\infty(\R)} +\|\d_xv(s,\cdot)\|_{L^\infty([-x_*,x_*])})\\ &+\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_xv(0,\cdot)\|_{L^\infty(\R)}\Big)\,. \end{align*} \end{proposition} \begin{proof} We may argue separately to deal with bounds on $x\geq x_*$ on one hand and on $x\leq -x_*$ on the other hand, and provide details only for the former. From now on we focus on $x\geq x_*$. We would like to apply Lemma~\ref{l:maximum} to both $A_\varepsilon \,\d_x v$ and $-A_\varepsilon \,\d_x v$ for a suitable weight $A_\varepsilon $ equivalent to $(t,x)\mapsto\eD^{\varepsilon \,\omega_\infty t}\,(\varepsilon +\eD^{-\theta |x|})^{-1}$. Our choice is \[ A_\varepsilon (t,x):= \frac{\eD^{\omega_\infty\,\varepsilon \,t}\,\eD^{\int_t^{+\infty}\varepsilon \eD^{-\varepsilon \,\omega_\infty s}\dD s}}{\varepsilon +\eD^{-\theta\,|x|}}\,.
\] Note that one has \begin{align*} \d_t(A_\varepsilon \d_x v) &+\left((f'(\uU_\varepsilon +v)-\sigma_\varepsilon +\psi')+2\frac{ \theta \eD^{-\theta |x|}}{\varepsilon +\eD^{-\theta |x|}}\right)\d_x(A_\varepsilon \d_xv)-\d_x^2(A_\varepsilon \d_xv)\\ &=(f'(\uU_\varepsilon )-f'(\uU_\varepsilon +v)-\psi')A_\varepsilon \uU_\varepsilon '' +(f''(\uU_\varepsilon )-f''(\uU_\varepsilon +v))A_\varepsilon \uU_\varepsilon '^2 \\ &\quad-A_\varepsilon \d_xv\,\Big(\varepsilon \eD^{-\varepsilon \,\omega_\infty t} -(f'(\uU_\varepsilon +v)-\sigma_\varepsilon +\psi')\frac{\theta \eD^{-\theta |x|}}{\varepsilon +\eD^{-\theta |x|}}\\ &\qquad\qquad-\varepsilon \,(\omega_\infty+g'(\uU_\varepsilon +v)) -\frac{\theta^2\eD^{-\theta x}\,(2\,\varepsilon \,+\eD^{-\theta x})}{(\varepsilon +\eD^{-\theta x})^2} +f''(\uU_\varepsilon +v)(2\uU_\varepsilon '+\d_xv) \Big)\,. \end{align*} Fixing first $\theta>0$ sufficiently small, then $x_*$ sufficiently large, and then $\delta$ and $\varepsilon $ sufficiently small, one enforces that the term in front of $\d_x(A_\varepsilon \d_xv)$ is bounded from above away from zero and that the term in front of $A_\varepsilon \d_xv$ is bounded from below by a multiple of $\varepsilon \eD^{-\varepsilon \,\omega_\infty t}+\theta \eD^{-\theta |x|}$. This is sufficient to apply Lemma~\ref{l:maximum} and derive the claimed upper bound on $x\geq x_*$. \end{proof} \subsection{Proof of Theorem~\ref{th:main-sl}} Our very first task when proving Theorem~\ref{th:main-sl} is to convert classical local well-posedness, yielding maximal solutions $u$ to \eqref{eq:scalar-sl}, into convenient local existence results for $(v,\psi')$. This follows from the following simple observation. By design, $s_\varepsilon ^{\textrm{p}}(t)\equiv0$ when $0\leq t\leq 1$.
Thus if $u$ solves \eqref{eq:scalar-sl} on $[0,T]\times\R$ then $(v,\psi')$ satisfying \eqref{eq:stab-ansatz}-\eqref{eq:v}-\eqref{eq:psi} may be obtained recursively through \begin{align*} \psi(t)&=\psi_0\,,& v(t,\cdot)&=u(t,\cdot-\psi(t))-\uU_\varepsilon \,,& &\textrm{when }0\leq t\leq \min(\{1,T\})\,, \end{align*} and, for any $n\in\mathbf N$, \begin{align*} \psi'(t)&\,=\, \d_ts_\varepsilon ^{\textrm{p}}(t)\,(v_0)\,+\,\int_0^{t-1}\d_ts_\varepsilon ^{\textrm{p}}(t-s)\,\cN_\varepsilon [v(s,\cdot),\psi'(s)]\,\dD s\,,\\ \psi(t)&=\psi_0+\int_0^t\,\psi'(s)\,\dD s\,,\\ v(t,\cdot)&=u(t,\cdot-\psi(t))-\uU_\varepsilon \,, \qquad\textrm{when }\min(\{n,T\})\leq t\leq \min(\{n+1,T\})\,. \end{align*} Now, combining Propositions~\ref{p:Duhamel} and~\ref{p:energy}, one obtains that for any $\theta>0$ sufficiently small, there exist $\varepsilon _0>0$, $\delta>0$ and $C\geq 1$ such that for any $0<\varepsilon \leq \varepsilon _0$ and $(v_0,\psi_0)$ with \[ \|v_0\|_{L^\infty(\R)} +\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_xv_0\|_{L^\infty(\R)} \,\leq\,\delta\,, \] the corresponding solution $u$ to \eqref{eq:scalar-sl}, in the form \eqref{eq:stab-ansatz}, satisfies the following: if for some $T>0$ and any $0\leq t\leq T$ \begin{align*} |\psi'(t)| +\|v(t,\cdot)\|_{L^\infty(\R)} &+\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_xv(t,\cdot)\|_{L^\infty(\R)}\\ &\,\leq\,2\,C\,\eD^{-\varepsilon \,\omega_\infty t}\, \left(\|v_0\|_{L^\infty(\R)} +\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_xv_0\|_{L^\infty(\R)}\right) \end{align*} then for any $0\leq t\leq T$ \begin{align*} |\psi'(t)| +\|v(t,\cdot)\|_{L^\infty(\R)} &+\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_xv(t,\cdot)\|_{L^\infty(\R)}\\ &\,\leq\,C\,\eD^{-\varepsilon \,\omega_\infty t}\, \left(\|v_0\|_{L^\infty(\R)} +\|(\varepsilon +\eD^{-\theta |\cdot|})^{-1}\d_xv_0\|_{L^\infty(\R)}\right)\,.
\end{align*} From this and a continuity argument, it follows that $u$ is global and that the latter estimate holds globally in time. One completes the proof of Theorem~\ref{th:main-sl} by deriving bounds on $\psi$ through integration of those on $\psi'$ and going back to the original variables. \appendix \section{Wave profiles}\label{s:profiles} In the present Appendix, we prove Proposition~\ref{p:profile}. Let us first reformulate the wave profile equation in terms of \begin{align*} \tuU_\varepsilon &:=\frac{\uU_\varepsilon -\uU_0}{\varepsilon }\,,& \tsigma_\varepsilon &:=\frac{\sigma_\varepsilon -\sigma_0}{\varepsilon }\,. \end{align*} The equation to consider is \begin{align*} \tuU_\varepsilon ''-\left((f'(\uU_0)-\sigma_0)\,\tuU_\varepsilon \right)' &\,=\,-\,g(\uU_0+\varepsilon \,\tuU_\varepsilon ) -\left(\tsigma_\varepsilon \,(\uU_0+\varepsilon \,\tuU_\varepsilon )\right)'\\ &\quad+\left(\frac{f(\uU_0+\varepsilon \,\tuU_\varepsilon )-f(\uU_0)-\varepsilon \,f'(\uU_0)\,\tuU_\varepsilon }{\varepsilon }\right)'\,, \end{align*} with $\tuU_\varepsilon (0)=0$ and $(\tsigma_\varepsilon ,\eD^{\theta\,|\,\cdot\,|}\tuU_\varepsilon ,\eD^{\theta\,|\,\cdot\,|}\tuU_\varepsilon ')$ uniformly bounded, for some sufficiently small $\theta>0$. As announced in the introduction, the framework we first consider is suboptimal from the point of view of spatial localization, but we shall refine it in a second step. To carry out the first step, we introduce the spaces $W^{k,\infty}_{\theta}$ and their subspaces $BUC^{k}_{\theta}$, corresponding to the norms \begin{align*} \|v\|_{W_{\theta}^{k,\infty}(\R)} &=\sum_{j=0}^k\,\|\eD^{\theta\,|\,\cdot\,|}\,\d_x^jv\|_{L^{\infty}(\R)}\,. \end{align*} In this first step, we just pick some $0<\theta<\min(\{\theta_0^\ell,\theta_0^r\})$ and let all the constants depend on this particular choice. We begin with two preliminary remarks.
Firstly, note that a simple integration yields the necessary constraint \[ \tsigma_\varepsilon \,=\,-\frac{1}{\uu_{+\infty}-\uu_{-\infty}}\,\int_{\R}g(\uU_0+\varepsilon \,\tuU_\varepsilon )\,=:\,\tSigma_\varepsilon [\tuU_\varepsilon ]\,, \] and that \begin{align*} \tcN_\varepsilon [\tuU_\varepsilon ] &:=-\,g(\uU_0+\varepsilon \,\tuU_\varepsilon ) -\left(\tSigma_\varepsilon [\tuU_\varepsilon ]\,(\uU_0+\varepsilon \,\tuU_\varepsilon )\right)'\\ &\quad+\left(\frac{f(\uU_0+\varepsilon \,\tuU_\varepsilon )-f(\uU_0)-\varepsilon \,f'(\uU_0)\,\tuU_\varepsilon }{\varepsilon }\right)' \end{align*} defines a continuous map from $BUC^{1}_{\theta}$ to $BUC^{0}_{\theta}$ whose range is contained in the closed subspace of functions with zero integral, and that, on any ball of $BUC^{1}_{\theta}$, has an $\cO(\varepsilon )$-Lipschitz constant. Secondly, denoting by $L_0$ the operator defined by \begin{align*} L_0(v)\,:=\,v''-\left((f'(\uU_0)-\sigma_0)\,v\right)' \end{align*} on $BUC^{0}_{\theta}$, with domain $BUC^{2}_{\theta}$, we observe that $L_0$ is Fredholm of index $0$ (as a continuous operator from $BUC^{2}_{\theta}$ to $BUC^{0}_{\theta}$), that its kernel is spanned by $\uU'_0$ and that the kernel of its adjoint is reduced to constant functions. The foregoing claims are easily proved by direct inspection, but may also be obtained with the arguments of Sections~\ref{s:spectral} and~\ref{s:Green}, combining the spatial dynamics point of view with a Sturm--Liouville argument. Since evaluation at $0$ acts continuously on $BUC^{2}_{\theta}$ and $\uU_0'(0)\neq0$, this implies that the restriction of $L_0$, from the closed subspace of $BUC^{2}_{\theta}$ consisting of functions with value $0$ at $0$, to the closed subspace of $BUC^{0}_{\theta}$ consisting of functions with zero integral, is boundedly invertible.
Indeed, the inverse of this restriction is readily seen to be given by \begin{align*} L_0^\dagger(h)(x):=-\int_0^x\int_{z}^{+\infty}\frac{\uU_0'(x)}{\uU_0'(z)}\,h(y)\,\dD y\,\dD z \,=\,\int_0^x\int^{z}_{-\infty}\frac{\uU_0'(x)}{\uU_0'(z)}\,h(y)\,\dD y\,\dD z\,. \end{align*} Note that it stems from the profile equation itself that if $\tuU_\varepsilon $ is a $BUC^{1}_{\theta}$-solution then it is also a $BUC^{2}_{\theta}$-solution, so that the problem reduces to \begin{align*} \tsigma_\varepsilon &=\tSigma_\varepsilon [\tuU_\varepsilon ]\,,& \tuU_\varepsilon &=L_0^\dagger(\tcN_\varepsilon [\tuU_\varepsilon ])\,. \end{align*} If $C_0$ is chosen such that $C_0>\|L_0^\dagger\tcN_\varepsilon [\text{0}_{\R}]\|_{W^{1,\infty}_{\theta}(\R)}$, it follows that, when $\varepsilon $ is sufficiently small, the map $L_0^\dagger\circ\tcN_\varepsilon $ sends the complete space \[ \left\{\,v\in BUC^{1}_{\theta}(\R)\,;\,v(0)=0\ \textrm{and}\ \|v\|_{W^{1,\infty}_{\theta}(\R)}\leq C_0\,\right\} \] into itself and is strictly contracting, with an $\cO(\varepsilon )$-Lipschitz constant. Thus resorting to the Banach fixed-point theorem achieves the first step of the proof of Proposition~\ref{p:profile}. Note that, in order to conclude the proof, it is sufficient to provide asymptotic descriptions of $\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}(\uU_\varepsilon -\uu_{-\infty})$, $\eD^{\theta_\varepsilon ^r\,\cdot\,}(\uU_\varepsilon -\uu_{+\infty})$, $\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}\uU_\varepsilon '$ and $\eD^{\theta_\varepsilon ^r\,\cdot\,}\uU_\varepsilon '$. Indeed, on one hand, the asymptotic comparisons for $\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}\uU_\varepsilon ^{(k)}$ and $\eD^{\theta_\varepsilon ^r\,\cdot\,}\uU_\varepsilon ^{(k)}$, $k\geq2$, are then deduced recursively by using the profile equation (differentiated $(k-2)$ times).
On the other hand, since, for $\#\in\{\ell,r\}$, $\theta_\varepsilon ^\#=\theta_0^\#+\cO(\varepsilon )$, the asymptotic descriptions are sufficient to upgrade the existence part of the first step to spatial decay rates arbitrarily close to the optimal ones, $\alpha^\#\to\theta_0^\#$. As a further reduction, we observe that the asymptotics for $\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}(\uU_\varepsilon -\uu_{-\infty})$ and $\eD^{\theta_\varepsilon ^r\,\cdot\,}(\uU_\varepsilon -\uu_{+\infty})$ may be deduced from the ones for $\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}\uU_\varepsilon '$ and $\eD^{\theta_\varepsilon ^r\,\cdot\,}\uU_\varepsilon '$ by integration, since \begin{align*} \eD^{-\theta_\varepsilon ^\ell\,x\,} (\uU_\varepsilon (x)-\uu_{-\infty}) &-\eD^{-\theta_0^\ell\,x\,}(\uU_0(x)-\uu_{-\infty})\\ &=\int_{-\infty}^x\eD^{\theta_\varepsilon ^\ell\,(y-x)\,} \left(\eD^{-\theta_\varepsilon ^\ell\,y\,}\uU_\varepsilon '(y)-\eD^{-\theta_0^\ell\,y\,}\uU_0'(y)\right)\,\dD y\\ &\quad+\int_{-\infty}^x\eD^{\theta_0^\ell\,(y-x)\,} \left(\eD^{(\theta_\varepsilon ^\ell-\theta_0^\ell)\,(y-x)\,}-1\right)\,\eD^{-\theta_0^\ell\,y\,}\uU_0'(y)\,\dD y \end{align*} and likewise near $+\infty$. To conclude, we derive the study of $\eD^{\theta_\varepsilon ^\ell|\,\cdot\,|}\uU_\varepsilon '$ and $\eD^{\theta_\varepsilon ^r\,\cdot\,}\uU_\varepsilon '$ from the analysis of Proposition~\ref{p:gap-lemma} (with $K=\{0\}$).
Indeed, \begin{align*} \theta_\varepsilon ^\ell&=\mu_+^\varepsilon (0;\uu_{-\infty})\,,& \theta_\varepsilon ^r&=-\mu_-^\varepsilon (0;\uu_{+\infty})\,, \end{align*} and \begin{align*} \uU_\varepsilon '(x)&=\frac{\left(\uu_{+\infty}-\uu_{-\infty}\right)}{2}\,\frac{\eD^{\theta_\varepsilon ^\ell\,x\,}\,\beD_1\cdot P^\ell_\varepsilon (0,x)\,\bR_+^\varepsilon (0;\uu_{-\infty})}{ \int_{-\infty}^0 \eD^{\theta_\varepsilon ^\ell\,y\,}\,\beD_1\cdot P^\ell_\varepsilon (0,y)\,\bR_+^\varepsilon (0;\uu_{-\infty})\,\dD y}\,,\\ &=\frac{\left(\uu_{+\infty}-\uu_{-\infty}\right)}{2}\,\frac{-\eD^{\theta_\varepsilon ^r\,x\,}\,\beD_1\cdot P^r_\varepsilon (0,x)\,\bR_-^\varepsilon (0;\uu_{+\infty})}{ \int_0^{+\infty} \eD^{-\theta_\varepsilon ^r\,y\,}\,\beD_1\cdot P^r_\varepsilon (0,y)\,\bR_-^\varepsilon (0;\uu_{+\infty})\,\dD y}\,. \end{align*} Thus the claimed expansion stems from the smoothness in $\varepsilon $ afforded by Proposition~\ref{p:gap-lemma}. \newcommand{\etalchar}[1]{$^{#1}$} \newcommand{\SortNoop}[1]{} \begin{thebibliography}{JNR{\etalchar{+}}19} \bibitem[BJRZ11]{BJRZ} B.~Barker, M.~A. Johnson, L.~M. Rodrigues, and K.~Zumbrun. \newblock Metastability of solitary roll wave solutions of the {S}t. {V}enant equations with viscosity. \newblock {\em Phys. D}, 240(16):1289--1310, 2011. \bibitem[BGM17]{Bedrossian-Germain-Masmoudi} J.~Bedrossian, P.~Germain, and N.~Masmoudi. \newblock On the stability threshold for the 3{D} {C}ouette flow in {S}obolev regularity. \newblock {\em Ann. of Math. (2)}, 185(2):541--608, 2017. \bibitem[BM15]{Bedrossian-Masmoudi} J.~Bedrossian and N.~Masmoudi. \newblock Inviscid damping and the asymptotic stability of planar shear flows in the 2{D} {E}uler equations. \newblock {\em Publ. Math. Inst. Hautes \'{E}tudes Sci.}, 122:195--300, 2015.
\bibitem[BMV16]{Bedrossian-Masmoudi-Vicol} J.~Bedrossian, N.~Masmoudi, and V.~Vicol. \newblock Enhanced dissipation and inviscid damping in the inviscid limit of the {N}avier-{S}tokes equations near the two dimensional {C}ouette flow. \newblock {\em Arch. Ration. Mech. Anal.}, 219(3):1087--1159, 2016. \bibitem[BGS07]{Benzoni-Serre_book} S.~Benzoni-Gavage and D.~Serre. \newblock {\em Multidimensional hyperbolic partial differential equations}. \newblock Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 2007. \newblock First-order systems and applications. \bibitem[BB05]{Bianchini-Bressan} S.~Bianchini and A.~Bressan. \newblock Vanishing viscosity solutions of nonlinear hyperbolic systems. \newblock {\em Ann. of Math. (2)}, 161(1):223--342, 2005. \bibitem[Bre00]{Bressan} A.~Bressan. \newblock {\em Hyperbolic systems of conservation laws}, volume~20 of {\em Oxford Lecture Series in Mathematics and its Applications}. \newblock Oxford University Press, Oxford, 2000. \newblock The one-dimensional Cauchy problem. \bibitem[Cro10]{Crooks} E.~C.~M. Crooks. \newblock Front profiles in the vanishing-diffusion limit for monostable reaction-diffusion-convection equations. \newblock {\em Differential Integral Equations}, 23(5-6):495--512, 2010. \bibitem[CM07]{Crooks-Mascia} E.~C.~M. Crooks and C.~Mascia. \newblock Front speeds in the vanishing diffusion limit for reaction-diffusion-convection equations. \newblock {\em Differential Integral Equations}, 20(5):499--514, 2007. \bibitem[Dav07]{Davies} E.~B. Davies. \newblock {\em Linear operators and their spectra}, volume 106 of {\em Cambridge Studies in Advanced Mathematics}. \newblock Cambridge University Press, Cambridge, 2007. \bibitem[DR20]{DR1} V.~Duch{\^e}ne and L.~M. Rodrigues. \newblock Large-time asymptotic stability of {R}iemann shocks of scalar balance laws. \newblock {\em SIAM J. Math. Anal.}, 52(1):792--820, 2020. \bibitem[DRar]{DR2} V.~Duch{\^e}ne and L.~M. Rodrigues.
\newblock Stability and instability in scalar balance laws: fronts and periodic waves. \newblock {\em Anal. PDE}, to appear. \bibitem[Gil10]{Gilding} B.~H. Gilding. \newblock On front speeds in the vanishing diffusion limit for reaction-convection-diffusion equations. \newblock {\em Differential Integral Equations}, 23(5-6):445--450, 2010. \bibitem[Goo86]{Goodman} J.~Goodman. \newblock Nonlinear asymptotic stability of viscous shock profiles for conservation laws. \newblock {\em Arch. Rational Mech. Anal.}, 95(4):325--344, 1986. \bibitem[Goo89a]{Goodman_bis} J.~Goodman. \newblock Stability of viscous scalar shock fronts in several dimensions. \newblock {\em Trans. Amer. Math. Soc.}, 311(2):683--695, 1989. \bibitem[Goo89b]{Goodman_ter} J.~Goodman. \newblock Stability of viscous scalar shock fronts in several dimensions. \newblock {\em Trans. Amer. Math. Soc.}, 311(2):683--695, 1989. \bibitem[GX92]{Goodman-Xin} J.~Goodman and Z.~P. Xin. \newblock Viscous limits for piecewise smooth solutions to systems of conservation laws. \newblock {\em Arch. Rational Mech. Anal.}, 121(3):235--265, 1992. \bibitem[GR01]{Grenier-Rousset} E.~Grenier and F.~Rousset. \newblock Stability of one-dimensional boundary layers by using {G}reen's functions. \newblock {\em Comm. Pure Appl. Math.}, 54(11):1343--1385, 2001. \bibitem[H{\"{a}}r00]{Harterich_hyperbolic} J.~H{\"{a}}rterich. \newblock Viscous profiles for traveling waves of scalar balance laws: the uniformly hyperbolic case. \newblock {\em Electron. J. Differential Equations}, pages No. 30, 22, 2000. \bibitem[H{\"{a}}r03]{Harterich_canard} J.~H{\"{a}}rterich. \newblock Viscous profiles of traveling waves in scalar balance laws: the canard case. \newblock {\em Methods Appl. Anal.}, 10(1):97--117, 2003. \bibitem[Hen81]{Henry-geometric} D.~Henry. \newblock {\em Geometric theory of semilinear parabolic equations}, volume 840 of {\em Lecture Notes in Mathematics}. \newblock Springer-Verlag, Berlin-New York, 1981. 
\bibitem[HR18]{Herda-Rodrigues}
M.~Herda and L.~M. Rodrigues.
\newblock Large-time behavior of solutions to {V}lasov-{P}oisson-{F}okker-{P}lanck equations: from evanescent collisions to diffusive limit.
\newblock {\em J. Stat. Phys.}, 170(5):895--931, 2018.

\bibitem[How99a]{Howard_lin}
P.~Howard.
\newblock Pointwise estimates on the {G}reen's function for a scalar linear convection-diffusion equation.
\newblock {\em J. Differential Equations}, 155(2):327--367, 1999.

\bibitem[How99b]{Howard_nonlin}
P.~Howard.
\newblock Pointwise {G}reen's function approach to stability for scalar conservation laws.
\newblock {\em Comm. Pure Appl. Math.}, 52(10):1295--1313, 1999.

\bibitem[HLZ09]{Humpherys-Lyng-Zumbrun}
J.~Humpherys, G.~Lyng, and K.~Zumbrun.
\newblock Spectral stability of ideal-gas shock layers.
\newblock {\em Arch. Ration. Mech. Anal.}, 194(3):1029--1079, 2009.

\bibitem[JNR{\etalchar{+}}19]{JNRYZ}
M.~A. Johnson, P.~Noble, L.~M. Rodrigues, Z.~Yang, and K.~Zumbrun.
\newblock Spectral stability of inviscid roll waves.
\newblock {\em Comm. Math. Phys.}, 367(1):265--316, 2019.

\bibitem[JNRZ14]{JNRZ-conservation}
M.~A. Johnson, P.~Noble, L.~M. Rodrigues, and K.~Zumbrun.
\newblock Behavior of periodic solutions of viscous conservation laws under localized and nonlocalized perturbations.
\newblock {\em Invent. Math.}, 197(1):115--213, 2014.

\bibitem[JGK93]{Jones-Gardner-Kapitula}
C.~K. R.~T. Jones, R.~Gardner, and T.~Kapitula.
\newblock Stability of travelling waves for nonconvex scalar viscous conservation laws.
\newblock {\em Comm. Pure Appl. Math.}, 46(4):505--526, 1993.

\bibitem[KV21a]{Kang-Vasseur_bis}
M.-J. Kang and A.~Vasseur.
\newblock Contraction property for large perturbations of shocks of the barotropic {N}avier-{S}tokes system.
\newblock {\em J. Eur. Math. Soc. (JEMS)}, 23(2):585--638, 2021.

\bibitem[KV21b]{Kang-Vasseur}
M.-J. Kang and A.~F. Vasseur.
\newblock Uniqueness and stability of entropy shocks to the isentropic {E}uler system in a class of inviscid limits from a large family of {N}avier-{S}tokes systems. \newblock {\em Invent. Math.}, 224(1):55--146, 2021. \bibitem[Kap94]{Kapitula} T.~Kapitula. \newblock On the stability of travelling waves in weighted {$L^\infty$} spaces. \newblock {\em J. Differential Equations}, 112(1):179--215, 1994. \bibitem[KP13]{KapitulaPromislow-stability} T.~Kapitula and K.~Promislow. \newblock {\em Spectral and dynamical stability of nonlinear waves}, volume 185 of {\em Applied Mathematical Sciences}. \newblock Springer, New York, 2013. \newblock With a foreword by Christopher K. R. T. Jones. \bibitem[Kat76]{Kato} T.~Kato. \newblock {\em Perturbation theory for linear operators}. \newblock Springer-Verlag, Berlin, second edition, 1976. \newblock Grundlehren der Mathematischen Wissenschaften, Band 132. \bibitem[KK98]{Kreiss-Kreiss} G.~Kreiss and H.-O. Kreiss. \newblock Stability of systems of viscous conservation laws. \newblock {\em Comm. Pure Appl. Math.}, 51(11-12):1397--1424, 1998. \bibitem[Kru70]{Kruzhkov} S.~N. Kru{\v z}kov. \newblock First order quasilinear equations with several independent variables. \newblock {\em Mat. Sb. (N.S.)}, 81 (123):228--255, 1970. \bibitem[Liu85]{Liu} T.-P. Liu. \newblock Nonlinear stability of shock waves for viscous conservation laws. \newblock {\em Mem. Amer. Math. Soc.}, 56(328):v+108, 1985. \bibitem[Maj83a]{Majda1} A.~Majda. \newblock The existence of multidimensional shock fronts. \newblock {\em Mem. Amer. Math. Soc.}, 43(281):v+93, 1983. \bibitem[Maj83b]{Majda2} A.~Majda. \newblock The stability of multidimensional shock fronts. \newblock {\em Mem. Amer. Math. Soc.}, 41(275):iv+95, 1983. \bibitem[MZ03]{Mascia-Zumbrun_hyp-par} C.~Mascia and K.~Zumbrun. \newblock Pointwise {G}reen function bounds for shock profiles of systems with real viscosity. \newblock {\em Arch. Ration. Mech. Anal.}, 169(3):177--263, 2003. 
\bibitem[MZ04]{Mascia-Zumbrun_hyp-par_bis} C.~Mascia and K.~Zumbrun. \newblock Stability of large-amplitude viscous shock profiles of hyperbolic-parabolic systems. \newblock {\em Arch. Ration. Mech. Anal.}, 172(1):93--131, 2004. \bibitem[MN85]{Matsumura-Nishihara} A.~Matsumura and K.~Nishihara. \newblock On the stability of travelling wave solutions of a one-dimensional model system for compressible viscous gas. \newblock {\em Japan J. Appl. Math.}, 2(1):17--25, 1985. \bibitem[M{\'{e}}t01]{Metivier_cours-chocs} G.~M{\'{e}}tivier. \newblock Stability of multidimensional shocks. \newblock In {\em Advances in the theory of shock waves}, volume~47 of {\em Progr. Nonlinear Differential Equations Appl.}, pages 25--103. Birkh\"{a}user Boston, Boston, MA, 2001. \bibitem[MZ05]{MZ_AMS} G.~M{\'{e}}tivier and K.~Zumbrun. \newblock Large viscous boundary layers for noncharacteristic nonlinear hyperbolic problems. \newblock {\em Mem. Amer. Math. Soc.}, 175(826):vi+107, 2005. \bibitem[Rod13]{R} L.~M. Rodrigues. \newblock {\em Asymptotic stability and modulation of periodic wavetrains, general theory \& applications to thin film flows}. \newblock Habilitation {\`a} diriger des recherches, Universit\'e Lyon 1, 2013. \bibitem[Rod15]{R_Roscoff} L.~M. Rodrigues. \newblock Space-modulated stability and averaged dynamics. \newblock {\em Journ\'ees \'Equations aux d\'eriv\'ees partielles}, 2015(8):1--15, 2015. \bibitem[Rod18]{R_linKdV} L.~M. Rodrigues. \newblock Linear asymptotic stability and modulation behavior near periodic waves of the {K}orteweg--de {V}ries equation. \newblock {\em J. Funct. Anal.}, 274(9):2553--2605, 2018. \bibitem[RZ16]{Rodrigues-Zumbrun} L.~M. Rodrigues and K.~Zumbrun. \newblock Periodic-coefficient damping estimates, and stability of large-amplitude roll waves in inclined thin film flow. \newblock {\em SIAM J. Math. Anal.}, 48(1):268--280, 2016. \bibitem[Rou02]{Rousset} F.~Rousset. 
\newblock Viscous limits for strong shocks of one-dimensional systems of conservation laws. \newblock In {\em Journ\'{e}es ``\'{E}quations aux {D}\'{e}riv\'{e}es {P}artielles'' ({F}orges-les-{E}aux, 2002)}, pages Exp. No. XVI, 12. Univ. Nantes, Nantes, 2002. \bibitem[San02]{Sandstede} B.~Sandstede. \newblock Stability of travelling waves. \newblock In {\em Handbook of dynamical systems, {V}ol. 2}, pages 983--1055. North-Holland, Amsterdam, 2002. \bibitem[Sat73]{Sattinger-book} D.~H. Sattinger. \newblock {\em Topics in stability and bifurcation theory}. \newblock Lecture Notes in Mathematics, Vol. 309. Springer-Verlag, Berlin-New York, 1973. \bibitem[Sat76]{Sattinger_one} D.~H. Sattinger. \newblock On the stability of waves of nonlinear parabolic systems. \newblock {\em Advances in Math.}, 22(3):312--355, 1976. \bibitem[Sat77]{Sattinger_two} D.~H. Sattinger. \newblock Weighted norms for the stability of traveling waves. \newblock {\em J. Differential Equations}, 25(1):130--144, 1977. \bibitem[Ser21]{Serre_scalar} D.~Serre. \newblock Asymptotic stability of scalar multi-{D} inviscid shock waves. \newblock {\em arXiv preprint arXiv:2103.09615}, 2021. \bibitem[SYZ20]{SYZ} A.~Sukhtayev, Z.~Yang, and K.~Zumbrun. \newblock Spectral stability of hydraulic shock profiles. \newblock {\em Phys. D}, 405:132360, 9, 2020. \bibitem[WX05]{Wu-Xing} Y.~Wu and X.~Xing. \newblock The stability of travelling fronts for general scalar viscous balance law. \newblock {\em J. Math. Anal. Appl.}, 305(2):698--711, 2005. \bibitem[Xin05]{Xing} X.-x. Xing. \newblock Existence and stability of viscous shock waves for non-convex viscous balance law. \newblock {\em Adv. Math. (China)}, 34(1):43--53, 2005. \bibitem[YZ20]{YangZumbrun20} Z.~Yang and K.~Zumbrun. \newblock Stability of {H}ydraulic {S}hock {P}rofiles. \newblock {\em Arch. Ration. Mech. Anal.}, 235(1):195--285, 2020. \bibitem[Zum01]{Zumbrun} K.~Zumbrun. \newblock Multidimensional stability of planar viscous shock waves. 
\newblock In {\em Advances in the theory of shock waves}, volume~47 of {\em Progr. Nonlinear Differential Equations Appl.}, pages 307--516. Birkh\"{a}user Boston, Boston, MA, 2001. \bibitem[ZH98]{Zumbrun-Howard} K.~Zumbrun and P.~Howard. \newblock Pointwise semigroup methods and stability of viscous shock waves. \newblock {\em Indiana Univ. Math. J.}, 47(3):741--871, 1998. \bibitem[ZH02]{Zumbrun-Howard_erratum} K.~Zumbrun and P.~Howard. \newblock Errata to: ``{P}ointwise semigroup methods, and stability of viscous shock waves'' [{I}ndiana {U}niv. {M}ath. {J}. {\bf 47} (1998), no. 3, 741--871; {MR}1665788 (99m:35157)]. \newblock {\em Indiana Univ. Math. J.}, 51(4):1017--1021, 2002. \end{thebibliography} \end{document}